https://arxiv.org/abs/1907.03669
The Weyl formula for planar annuli
We study the zeros of cross-products of Bessel functions and obtain approximations for them, based on which we reduce the eigenvalue counting problem for the Dirichlet Laplacian associated with a planar annulus to a lattice point counting problem associated with a special domain in $\mathbb{R}^2$. Unlike other lattice point problems, the one arising naturally here has the interesting features that the lattice points under consideration are translated by varying amounts and that the curvature of the boundary is unbounded. By transforming this problem into a relatively standard form and using classical van der Corput bounds, we obtain a two-term Weyl formula for the eigenvalue counting function of the planar annulus with a remainder of size $O(\mu^{2/3})$. If we additionally assume that a certain tangent line has rational slope, we obtain an improved remainder estimate of the same strength as Huxley's bound in the Gauss circle problem, namely $O(\mu^{131/208}(\log \mu)^{18627/8320})$. As a by-product of our lattice point counting results, we readily obtain this Huxley-type remainder estimate in the two-term Weyl formula for planar disks.
\section{Introduction} Let $D \subset \mathbb R^2$ be a bounded domain with piecewise smooth boundary, and let \[0< \mu^2_1 < \mu^2_2 \le \mu_3^2 \le \cdots \] be the eigenvalues (counted with multiplicity) of the Dirichlet Laplacian associated with $D$. In his seminal work \cite{weyl11:1912}, H. Weyl initiated the study of the asymptotic behavior of the eigenvalue counting function \begin{equation} \mathscr{N}_D(\mu)=\#\left\{k\in\mathbb{N} : \mu_k\leq \mu \right\},\label{e-counting} \end{equation} and he proved that as $\mu \to \infty$, \begin{equation*} \mathscr{N}_D(\mu)=\frac{\mathrm{Area}(D)}{4\pi}\mu^2+o(\mu^2). \end{equation*} If we interpret the $\mu_j$'s as frequencies, i.e. the overtones that can be produced by a drum whose drumhead has the shape $D$, then Weyl's law above implies that one can ``hear'' the area of $D$. Weyl further conjectured that one can also ``hear'' the perimeter of $D$. More precisely, he conjectured in 1913 (see \cite{weyl13:1913}) that \begin{equation} \mathscr{N}_D(\mu)=\frac{\mathrm{Area}(D)}{4\pi}\mu^2-\frac{\mathrm{Length}(\partial D)}{4\pi}\mu+o(\mu). \label{WeylConj} \end{equation} Since then, the asymptotic behavior of the eigenvalue counting function has been studied extensively by many mathematicians in many different settings. For example, for closed manifolds Weyl's conjecture was first proven by J. Duistermaat and V. Guillemin \cite{DuG:1975} under the extra assumption that the set of periodic bicharacteristics has measure zero (which turns out to be necessary in this setting). Their result was later generalized by V. Ivrii \cite{Ivrii1980} (see also R. Melrose \cite{Mel:1980}) to manifolds with boundary under a similar assumption that the set of periodic billiard trajectories has measure zero.
While it is still unknown whether the conjecture holds for all bounded domains in $\mathbb R^2$ with piecewise smooth boundaries, many regions, including all bounded convex domains with analytic boundaries and all bounded domains with piecewise smooth concave boundaries, are known to satisfy the condition on the periodic billiard trajectories and thus the two-term Weyl law \eqref{WeylConj}. For such a region $D$, natural questions are: what other information is encoded in $\mathscr{N}_D(\mu)$, and is there a third main term in the asymptotics of $\mathscr{N}_D(\mu)$? It turns out that, in general, the answer is no. In fact, V. Lazutkin and D. Terman \cite{Lazu:1982} showed that for any $\kappa <1$ there exists a convex planar domain $D$ satisfying \eqref{WeylConj} with an error term of order at least $\mu^\kappa$. In other words, if one sets \begin{equation} \mathscr{R}_D(\mu)=\mathscr{N}_D(\mu) - \frac{\mathrm{Area}(D)}{4\pi}\mu^2+\frac{\mathrm{Length}(\partial D)}{4\pi}\mu, \end{equation} then, for any $\kappa <1$, there is a convex planar domain $D$ for which $\mathscr{R}_D(\mu) \ne O(\mu^\kappa)$. Due to the complexity of the dynamics of the billiard flow, there is in general no hope of obtaining a universal estimate of $\mathscr{R}_D(\mu)$ better than $o(\mu)$. On the other hand, there are many domains (usually with special symmetry) for which one can prove a much better estimate of $\mathscr{R}_D(\mu)$. Such examples include squares, disks, ellipses and, in principle, regions where the variables separate. Since the Laplacian eigenvalues of these domains are closely related to lattice points, a basic strategy is to convert the eigenvalue counting problem into a lattice point counting problem associated with some special planar region (modulo an error which needs to be controlled).
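To fix ideas, the simplest instance of such a lattice point count can be checked numerically. The following Python sketch is our own illustration (function name and radii are hypothetical choices, not part of the paper's argument): it counts the integer points in a disk of radius $t$ and compares the count with the area $\pi t^2$; the observed discrepancy is far smaller than the trivial bound $O(t)$.

```python
import math

def lattice_points_in_disk(t):
    """Count integer points (m, n) with m^2 + n^2 <= t^2 (t a positive integer)."""
    count = 0
    for m in range(-t, t + 1):
        # For each m, the admissible n satisfy |n| <= sqrt(t^2 - m^2).
        n_max = math.isqrt(t * t - m * m)
        count += 2 * n_max + 1
    return count

for t in (100, 200, 400):
    N = lattice_points_in_disk(t)
    # The discrepancy N - pi t^2 is the remainder in the Gauss circle problem:
    # conjectured to be O(t^{1/2+eps}); the classical bound is O(t^{2/3}).
    print(t, N, N - math.pi * t * t)
```

The true size of this discrepancy is precisely the subject of the Gauss circle problem recalled in the next paragraph.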
For example, it is easy to see that the Laplacian eigenvalues of the planar unit square are in one-to-one correspondence with integer points in the first quadrant, and thus the eigenvalue counting problem is equivalent to the famous Gauss circle problem, which has received much attention for more than one hundred years, while the conjectured error-term estimate $O(\mu^{1/2+\varepsilon})$ is still far from being proved. In principle the same type of argument can be applied to other domains whose billiard flows are completely integrable. Recently this idea was applied by Y. Colin de Verdi\`ere \cite{colin:2011} to obtain a sharper estimate of $\mathscr{R}_D(\mu)$ for planar disks. By studying the asymptotics of the Bessel function, Colin de Verdi\`ere converted the eigenvalue counting problem to a lattice point counting problem associated with a special planar domain with two cusp points. Using tools from analysis, he showed that both the error term in the lattice point problem and the difference between the eigenvalue and lattice counting functions are of order $O(\mu^{2/3})$. As a consequence, he proved $\mathscr{R}_{disk}(\mu)=O(\mu^{2/3})$. The same result was also obtained by N. Kuznetsov and B. Fedosov \cite{kuz:1965}. In \cite{GWW2018} three of the authors followed Colin de Verdi\`ere's strategy to study disks and observed that the difference between the eigenvalue and lattice counting functions is controlled by the error term in the corresponding lattice point problem. By applying van der Corput's method of estimating exponential sums to the latter problem, we were able to improve Colin de Verdi\`ere's result slightly and prove that $\mathscr{R}_{disk}(\mu)=O(\mu^{2/3-1/495})$. However, compared with known results for squares, the exponent $2/3-1/495$ that we obtained for disks seems far from optimal.
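The disk case can also be checked numerically. The following sketch is a sanity check of ours (assuming SciPy's `jn_zeros` is available; this is not how the cited proofs proceed): the Dirichlet eigenvalues of the unit disk are the squares $j_{n,k}^2$ of the positive zeros of $J_n$, each with multiplicity $2$ for $n\geq 1$, so $\mathscr{N}_{disk}(\mu)$ can be computed directly and compared with the two-term Weyl formula $\mu^2/4-\mu/2$.

```python
import math
from scipy.special import jn_zeros

def disk_counting_function(mu):
    """Count Dirichlet eigenvalues of the unit disk: Bessel zeros
    j_{n,k} <= mu, counted with multiplicity 2 for n >= 1."""
    total = 0
    n = 0
    while n < mu:  # j_{n,1} > n, so J_n has no zeros <= mu once n >= mu
        # Enough zeros are requested since j_{n,k} >= j_{0,k} ~ (k - 1/4) pi.
        zeros = jn_zeros(n, int(mu / math.pi) + 2)
        m = sum(1 for z in zeros if z <= mu)
        total += m if n == 0 else 2 * m
        n += 1
    return total

mu = 50.0
weyl = mu ** 2 / 4 - mu / 2  # Area/(4 pi) mu^2 - Length/(4 pi) mu for the unit disk
print(disk_counting_function(mu), weyl)
```

The difference between the two printed numbers stays well within the $O(\mu^{2/3})$ remainder discussed above.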
As we mentioned above, the eigenvalue counting problem for the unit square is equivalent to the Gauss circle problem, which asks for the number of lattice points in planar disks. So far the best published bound, $O(\mu^{131/208}(\log \mu)^{18627/8320})$, is due to M. Huxley \cite{Huxley:2003}.\footnote{In a recent preprint \cite{BWpreprint}, J. Bourgain and N. Watt were able to improve Huxley's bound in the circle problem to $O(\mu^{517/824+\varepsilon})$ by incorporating newly emerging techniques from harmonic analysis.} This paper can be viewed as a continuation of \cite{colin:2011} and \cite{GWW2018} in two respects: improving the previous results for disks to a Huxley-type remainder estimate, and extending from disks to annuli. In the rest of this paper we let \begin{equation*} \mathscr{D}=\{x\in\mathbb{R}^2 : r\leq |x|\leq R \} \end{equation*} be the annulus centered at the origin with two given radii $0<r<R<\infty$. We obtain the following estimates of $\mathscr{R}_{\mathscr{D}}(\mu)$: \begin{theorem}\label{specthm} The following two-term Weyl formula holds for the annulus $\mathscr{D}$: \begin{equation*} \mathscr{N}_\mathscr{D}(\mu)=\frac{R^2-r^2}{4}\mu^2-\frac{R+r}{2}\mu+O\left(\mu^{2/3}\right). \end{equation*} Furthermore, if $\pi^{-1}\arccos(r/R)\in \mathbb{Q}$ then the remainder estimate can be improved to \begin{equation*} \mathscr{R}_{\mathscr{D}}(\mu)=O\left(\mu^{131/208}(\log \mu)^{18627/8320} \right). \end{equation*} \end{theorem} \begin{remark} The number $\pi^{-1}\arccos(r/R)$ represents the slope of a certain tangent line in the associated lattice point problem. If it is rational then we can apply Huxley's bounds for rounding error sums from \cite{Huxley:2003} in the estimation of the number of lattice points. For details see Section \ref{LatticeSec}. At the moment we do not know whether this assumption can be removed if one wants the same Huxley-type bound.
\end{remark} \begin{remark} Roughly speaking, as in \cite{colin:2011}, the proof of this theorem consists of two parts: first, a reduction from the eigenvalue counting problem to a certain lattice point counting problem; second, a study of the latter problem. However, both parts are more complicated in the annulus case than their counterparts in the disk case. In the first part, we need to find approximations of the zeros of cross-products of Bessel functions of the first and second kind with errors under good control (see Corollary \ref{approximation}), while in the disk case one only needs to study zeros of the Bessel function of the first kind. In the derivation we distinguish four cases depending on the sizes of the zeros and use expansions of Bessel functions given by the method of stationary phase and by F. Olver \cite{olver:1954}. As a by-product, we obtain estimates of the distances between adjacent zeros (see Corollary \ref{cor1}). Based on the approximations, we get a correspondence between eigenvalues and lattice points (translated by varying amounts), via which the eigenvalue counting problem is naturally reduced to a lattice point counting problem. For details see Sections \ref{zeros} and \ref{reduction-sec}. In the second part, in order to solve this new lattice point problem, we translate (if necessary) the lattice points to achieve uniformity in the translation. We are then led to study two lattice point problems with unbounded curvature and (possibly) cusps. Some boundary points with infinite curvature have tangents with rational slope (for example, the points $P_2$ and $P_2'$ in Figure \ref{SymmH}). These points are relatively easy to handle; this phenomenon already occurs in the disk case. What is different in the annulus case is that for some boundary points with infinite curvature we do not know whether the slopes of their tangents are rational or not (for example, the points $J$ and $J'$ in Figure \ref{SymmH}). These points cause difficulties in the estimation.
With rational slopes we apply Huxley's \cite[Proposition 3]{Huxley:2003}. In the irrational case, Huxley's proposition does not seem to be applicable. For details see Sections \ref{reduction-sec} and \ref{LatticeSec}. \end{remark} As for disks: heuristically, since $\pi^{-1}\arccos(0)=1/2\in \mathbb{Q}$, letting $r \to 0$ in Theorem \ref{specthm} leads to the following Huxley-type remainder estimate of $\mathscr{R}_{disk}(\mu)$, which improves the main results in \cite{GWW2018}. Its rigorous proof relies on the reduction step from eigenvalue counting to lattice point counting (see \cite[Section 3]{colin:2011}, \cite[Section 6]{GWW2018} and its variant in Section \ref{reduction-sec}), on Theorem \ref{theorem:no-in-D} (together with the symmetry of the domain $\mathcal{D}$), and on the fact that the corresponding domain for the lattice point counting (Figure 1.1 in \cite{colin:2011}) is invariant under the involution $(x,y)\rightarrow (-x, y+x)$ (see \cite[p. 3]{colin:2011}). \begin{theorem}\label{diskcase} For planar disks we have \begin{equation*} \mathscr{R}_{disk}(\mu)=O\left(\mu^{131/208}(\log \mu)^{18627/8320} \right). \end{equation*} \end{theorem} \emph{Notations:} As in \cite[3.6.15, p. 15]{abram:1972} we write \begin{equation*} f(x)\sim \sum_{k=0}^{\infty} a_k x^{-k} \end{equation*} if \begin{equation*} f(x)-\sum_{k=0}^{n-1} a_k x^{-k}=O(x^{-n}) \quad \textrm{as $x\rightarrow \infty$} \end{equation*} for every $n=1, 2, \ldots.$ For functions $f$ and $g$ with $g$ taking nonnegative real values, $f\ll g$ means $|f|\leqslant Cg$ for some constant $C$. If $f$ is nonnegative, $f\gg g$ means $g\ll f$. The notation $f\asymp g$ means that $f\ll g$ and $g\ll f$. If we write a subscript (for instance $\ll_{\sigma}$), we emphasize that the implicit constant depends on that specific subscript. \section{Zeros of cross-products of Bessel functions}\label{zeros} There is a large literature on the zeros of cross-products of Bessel functions.
To mention just a few: M. Kline~\cite{Kline:1948}, D. Willis~\cite{willis:1965}, J. Cochran~\cite{cochran:1964, Cochran:1966}, V. Bobkov~\cite{bobkov:2018}, etc. In this section we study these zeros from our own perspective (motivated by the work in \cite{colin:2011}), with a view toward a two-term Weyl formula for planar annuli. Let $0<r<R<\infty$ be two given numbers. For any nonnegative integer $n$ we study the zeros of the function \begin{equation} f_n(x):=J_n(Rx)Y_n(rx)-J_n(rx)Y_n(Rx),\label{eigenequation} \end{equation} where $J_n$ and $Y_n$ are the Bessel functions of the first and second kind of order $n$ (see \cite[p. 360]{abram:1972}). It is well known that all zeros are real and simple, and that $f_n$ is an even function. Hence we only study positive zeros. For each nonnegative integer $n$ we denote its sequence of positive zeros by $0<x_{n, 1}<x_{n, 2}<\cdots<x_{n, k}<\cdots$. In fact $x_{n,k}$ is strictly increasing in $n$ for each fixed $k\in \mathbb{N}$ (see \cite[p. 425]{willis:1965}). Throughout this paper we denote by $g$ the function \begin{equation} g(x)=\left(\sqrt{1-x^2}-x\arccos x\right)/\pi \label{def-g} \end{equation} and by $G$ the function \begin{equation*} G(x)=\left\{ \begin{aligned} &Rg(x/R)-rg(x/r)\;\; &\mathrm{for}&\;0\leq x\leq r,\\ &Rg(x/R)\;\; &\mathrm{for}&\;r\leq x\leq R. \end{aligned} \right. \end{equation*} In Figure \ref{H}, the solid curve $P_1JP_2$ represents the graph of $G$ on $[0, R]$, while the half-dashed and half-solid curve $P_{0}JP_{2}$ represents the graph of $Rg(x/R)$ on $[0, R]$.
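The role of $G$ can be checked numerically before stating the lemmas (a sketch of ours, assuming SciPy is available; the function names are hypothetical). For $n=0$ one has $G(0)=(R-r)/\pi$, and the approximations obtained in this section predict $x_{0,k}G(0)\approx k$; with $r=1$ and $R=2$ this is easily confirmed by locating the zeros of $f_0$ directly.

```python
import math
from scipy.special import jv, yv
from scipy.optimize import brentq

r, R = 1.0, 2.0  # sample radii, the same values as in the paper's figures

def f(n, x):
    # Cross-product f_n(x) = J_n(Rx) Y_n(rx) - J_n(rx) Y_n(Rx)
    return jv(n, R * x) * yv(n, r * x) - jv(n, r * x) * yv(n, R * x)

def g(x):
    # g(x) = (sqrt(1 - x^2) - x arccos x) / pi
    return (math.sqrt(1.0 - x * x) - x * math.acos(x)) / math.pi

G0 = R * g(0.0) - r * g(0.0)  # G(0) = (R - r)/pi

# For n = 0 the relation x_{0,k} G(0) ~ k predicts zeros near k/G(0) = k*pi/(R-r).
for k in range(3, 9):
    guess = k / G0
    x_k = brentq(lambda x: f(0, x), guess - 1.2, guess + 1.2)
    print(k, round(x_k, 4), round(x_k * G0 - k, 4))  # discrepancy is O(1/x_k)
```

The discrepancy in the last column decays like $1/x_{0,k}$, in line with the small-$n$ approximation established below.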
\begin{figure} [ht] \centering \includegraphics[width=0.6\textwidth]{H.eps} \caption{The graph of $G$ with $R=2$ and $r=1$.} \label{H} \end{figure} \begin{lemma}\label{case111} For any $c>0$ and all $n\in \mathbb{N}\cup \{0\}$, if $rx\geq \max\{(1+c)n, 1\}$ then \begin{equation} f_n(x)=-\frac{2}{\pi}\frac{\sin\left( \pi x G\left(\frac{n}{x}\right)\right)+E_1(x)}{\left(\left(Rx\right)^2-n^2\right)^{1/4} \left(\left(rx\right)^2-n^2\right)^{1/4}}, \label{case111-1} \end{equation} where \begin{equation*} E_1(x)=O_c\left(x^{-1}\right). \end{equation*} \end{lemma} \begin{proof} This result follows from an application of the method of stationary phase to the Bessel functions. More precisely, we first apply to all four factors in \eqref{eigenequation} the asymptotics \eqref{jnasy} and \eqref{ynasy} of Bessel functions and then use the angle difference formula for the sine. \end{proof} \begin{lemma}\label{case222} There exists a constant $c\in (0,1)$ such that for any $\varepsilon>0$ and all sufficiently large $n$ if $n+n^{1/3+\varepsilon}\leq rx\leq (1+c)n$ then \begin{equation} f_n(x)=-\frac{2}{\pi}\frac{\sin\left( \pi x G\left(\frac{n}{x}\right)\right)+E_2(x)}{\left(\left(Rx\right)^2-n^2\right)^{1/4} \left(\left(rx\right)^2-n^2\right)^{1/4}}, \label{case222-1} \end{equation} where \begin{equation} E_2(x)=O\left(z^{-3/2}\right) \label{case222-4} \end{equation} with $z$ determined by the equation $rx=n+z n^{1/3}$. \end{lemma} \begin{proof} Denote \begin{equation*} Rx=n z_{R} \quad \textrm{and}\quad rx=n z_{r}. \end{equation*} For sufficiently large $n$ we apply to all four factors in \eqref{eigenequation} Olver's asymptotic expansions of Bessel functions of large order (see \cite[p. 368]{abram:1972} or Olver's original paper \cite{olver:1954}; for the convenience of the readers we put those formulas in the appendix). The $\zeta_R=\zeta(z_R)$ and $\zeta_r=\zeta(z_r)$ appearing in the asymptotics are both negative and determined by \eqref{def-zeta1}. 
They satisfy the following size estimates: \begin{equation*} (-\zeta_R)^{3/2}\asymp 1 \end{equation*} and \begin{equation*} n^{-1+3\varepsilon/2}\ll (-\zeta_r)^{3/2}\ll 1 \end{equation*} whenever $n$ is sufficiently large. Indeed, the first estimate follows from $R/r<z_R\leq R(1+c)/r$ (with $c\in (0,1)$ to be determined below) while the second one follows from \eqref{bound-zeta+} and $1+n^{-2/3+\varepsilon}\leq z_r\leq 1+c$. Then \begin{equation} \begin{split} f_n(x)=&-2 n^{-2/3}(-\zeta_R)^{1/4}(-\zeta_r)^{1/4}\left(z_R^2-1\right)^{-1/4} \left(z_r^2-1\right)^{-1/4}\cdot \\ &\left[\mathrm{Ai}(n^{2/3}\zeta_R)\mathrm{Bi}(n^{2/3}\zeta_r)-\mathrm{Ai}(n^{2/3}\zeta_r)\mathrm{Bi}(n^{2/3}\zeta_R) +E\right] \end{split} \label{case222-2} \end{equation} with the error $E$ being an expression involving the Airy functions of the first and second kind. Since \begin{equation*} n^{2/3}(-\zeta_R)\asymp n^{2/3}, \quad\quad n^{\varepsilon}\ll n^{2/3}(-\zeta_r)\ll n^{2/3}, \end{equation*} $\mathrm{Ai}(-r)$ and $\mathrm{Bi}(-r)$ are both of size $O(r^{-1/4})$ while $\mathrm{Ai}'(-r)$ and $\mathrm{Bi}'(-r)$ of size $O(r^{1/4})$ (see \cite[p. 448--449]{abram:1972}), by using the well-known asymptotics for the Airy functions (see the appendix) and the angle difference formula for the sine, the terms in brackets in \eqref{case222-2} become \begin{equation*} \bigg[\frac{(-\zeta_R)^{-1/4}(-\zeta_r)^{-1/4}}{\pi n^{1/3}}\left(\sin\left(\frac{2}{3}n(-\zeta_R)^{3/2}- \frac{2}{3}n(-\zeta_r)^{3/2}\right)+E_2(x)\right)\bigg], \end{equation*} where \begin{equation*} E_2(x)=O\left(|n^{2/3}\zeta_r|^{-3/2}+n^{-1}\right). \end{equation*} By \eqref{bound-zeta+}, if $c$ is a sufficiently small constant then $|n^{2/3}\zeta_r|\asymp z$. Noticing the definition of $G$ and $z\leq cn^{2/3}$, we then get \eqref{case222-1} and \eqref{case222-4}. 
\end{proof} \begin{lemma} \label{case2.5} There exists a strictly decreasing real-valued $C^1$ function $\psi: \mathbb{R} \rightarrow (0, 1/4)$ such that $\psi(0)=1/12$, $\lim_{x\rightarrow -\infty}\psi=1/4$, $\lim_{x\rightarrow \infty}\psi=0$, and the image of $\psi'$ is a bounded interval. For any $\varepsilon>0$ and all sufficiently large $n$ if $n-n^{1/3+\varepsilon}\leq rx \leq n+n^{1/3+\varepsilon}$ then \begin{equation} f_n(x)=-\frac{2^{5/6}}{\pi^{1/2}}\frac{\sin\left(\pi xG\left(\frac{n}{x}\right)+\pi \psi\left(z\right)\right)+E_3(x)}{n^{1/3}\left(\left(Rx\right)^2-n^2\right)^{1/4} \left(\mathrm{Ai}^2+\mathrm{Bi}^2\right)^{-1/2}\left(-2^{1/3}z\right)},\label{case2.5-1} \end{equation} where $z$ is determined by the equation $rx=n+z n^{1/3}$ and \begin{equation} E_3(x)=O\left(n^{-2/3+2.5\varepsilon}\right). \label{case2.5-5} \end{equation} \end{lemma} \begin{proof} Notice that if $rx\geq n-n^{1/3+\varepsilon}$ then \begin{equation*} Rx\geq \frac{R}{r}\left(n-n^{1/3+\varepsilon}\right)>\left(1+c'\right)n \end{equation*} for some fixed constant $c'>0$ whenever $n$ is sufficiently large. Denote \begin{equation*} rx=n+zn^{1/3} \quad \textrm{with $-n^\varepsilon\leq z\leq n^\varepsilon$}. \end{equation*} Applying \eqref{jnasy} and \eqref{ynasy} to $J_n(Rx)$ and $Y_n(Rx)$ respectively and Lemma \ref{9.3.4analogue} to both $J_n(rx)$ and $Y_n(rx)$ yields \begin{align} f_n(x)=&-2^{5/6}\pi^{-1/2}\left(\left(Rx\right)^2-n^2\right)^{-1/4}n^{-1/3}\sqrt{\mathrm{Ai}^2+\mathrm{Bi}^2}(-2^{1/3}z) \cdot \nonumber \\ &\bigg[\sin\left( \pi x R g\left(\frac{n/x}{R}\right)-\frac{\pi}{4}\right) \frac{\mathrm{Ai}}{\sqrt{\mathrm{Ai}^2+\mathrm{Bi}^2}}\left(-2^{1/3}z\right)+ \label{equ1}\\ & \ \cos\left( \pi x R g\left(\frac{n/x}{R}\right)-\frac{\pi}{4}\right) \frac{\mathrm{Bi}}{\sqrt{\mathrm{Ai}^2+\mathrm{Bi}^2}}\left(-2^{1/3}z\right)+E(x)\bigg], \label{equ2} \end{align} where \begin{equation*} E(x)=O\left(n^{-2/3+2.5\varepsilon}\right). 
\end{equation*} To get the bound of $E$, we used the fact that for large $r$ \begin{equation} \mathrm{Ai}^2(-r)+\mathrm{Bi}^2(-r) \sim \pi^{-1} r^{-1/2} \label{case2.5-4} \end{equation} (see \S 2.2 in \cite[p. 395]{olver:1997}) and noticed that $x^{-1}\asymp n^{-1}$. In order to use the angle sum formula for the sine to simplify \eqref{equ1} and \eqref{equ2}, we define an angle function $\beta:\mathbb{R}\rightarrow (-\infty, 1/2)$ as follows: \begin{equation*} \beta(z)=\left\{ \begin{array}{ll} -(m-1)+\frac{1}{\pi}\arctan \frac{\mathrm{Bi}(-2^{1/3}z)}{\mathrm{Ai}(-2^{1/3}z)}, & \textrm{$z\in (\frac{t_{m-1}}{2^{1/3}}, \frac{t_m}{2^{1/3}})$, $m\in\mathbb{N}$,}\\ -(m-1)-\frac{1}{2}, & \textrm{$z=\frac{t_m}{2^{1/3}}$, $m\in\mathbb{N}$,} \end{array}\right. \end{equation*} where $t_0=-\infty$ and $t_m$ ($m\in\mathbb{N}$) is the $m$th zero of the equation $\textrm{Ai}(-x)=0$. Then the terms in brackets in \eqref{equ1} and \eqref{equ2} become \begin{equation} \left[\sin\left( \pi x R g\left(\frac{n/x}{R}\right)+\pi \beta(z)-\frac{1}{4}\pi\right)+E(x)\right].\label{case2.5-3} \end{equation} Notice that if $z\geq 0$ then \eqref{bound-zeta-} implies \begin{equation*} rx g\left(\frac{n}{rx}\right)=\frac{2\sqrt{2}}{3\pi}z^{3/2}+O\left(z^{2.5}n^{-2/3}\right). \end{equation*} Define a real-valued function $\psi=\psi(z)$ by \begin{equation*} \psi(z)=\left\{ \begin{array}{ll} \beta(z)+\frac{2\sqrt{2}}{3\pi}z^{3/2}-\frac{1}{4}, & \textrm{$z\geq 0$},\\ \beta(z)-\frac{1}{4}, & \textrm{$z\leq 0$}. \end{array}\right. \end{equation*} By rewriting \eqref{case2.5-3} in terms of this $\psi$ and the function $G$, we get \eqref{case2.5-1} and \eqref{case2.5-5}. Once we verify that $\psi'$ is always negative, it is easy to check that $\psi$ satisfies the properties claimed in the statement of the lemma.
For non-positive $z$, $\psi'<0$ follows trivially from the formula \begin{equation*} \beta'(z)=-\frac{2^{1/3}/\pi^2}{\left(\mathrm{Ai}^2+\mathrm{Bi}^2\right)(-2^{1/3}z)}, \end{equation*} in whose derivation we used formula 10.4.10 in \cite[p. 446]{abram:1972}. To prove the inequality for positive $z$, it suffices to show that \begin{equation*} \pi z^{1/2} \left(\mathrm{Ai}^2+\mathrm{Bi}^2 \right)(-z)<1 \quad \textrm{for all $z>0$}. \end{equation*} This follows from the fact that the left-hand side is an increasing function of $z$ (see \S 2.4 in \cite[p. 397]{olver:1997} and \S 7.3 in \cite[p. 342]{olver:1997} or \S 13.74 in \cite[p. 446]{watson:1966}) together with \eqref{case2.5-4}. Since $\psi'$ is continuous and $\psi'(z)\rightarrow 0$ as $|z|\rightarrow \infty$, its image is a bounded interval. \end{proof} \begin{remark} It follows from formula 10.4.78 in \cite[p. 449]{abram:1972} that for large $z$ \begin{equation*} \psi'(z)=-\frac{5\sqrt{2}}{64\pi}z^{-5/2}+O\left(z^{-11/2}\right). \end{equation*} \end{remark} \begin{lemma} \label{case0} For all $n\in \mathbb{N}\cup \{0\}$, every positive zero $x_{n,k}$ of $f_n$ satisfies \begin{equation*} Rx_{n,k}>\sqrt{n^2+\pi^2\left(k-\frac{1}{4} \right)^2}. \end{equation*} In particular, $Rx_{n, k}>n$. \end{lemma} \begin{proof} Let $j_{n,k}$ denote the $k$th positive zero of $J_n$. Then \begin{equation*} x_{n, k}\geq j_{n,k}/R \end{equation*} since for fixed $R$, $n$, and $k$, the zero $x_{n,k}$ (as a function of $r$) is increasing in $r$ (by a similar argument as in the proof of Theorem 2 in \cite[p. 222]{Cochran:1966}) and converges to $j_{n,k}/R$ as $r\to 0$ (see \cite[p. 38]{Kline:1948}). R. McCann \cite[p. 102]{McCann:1977} gives \begin{equation*} j_{n,k}>\sqrt{n^2+\pi^2\left(k-\frac{1}{4} \right)^2}, \end{equation*} hence the desired bound.
\end{proof} \begin{lemma}\label{case3} For any $\varepsilon>0$ and all sufficiently large $n$ if $rn/R<rx\leq n-n^{1/3+\varepsilon}$ then \begin{equation} f_n(x)=\frac{Y_n(rx)\left(12\pi x G\left(\frac{n}{x} \right)\right)^{1/6}}{\left(\left(Rx\right)^2-n^2\right)^{1/4}} \left(\mathrm{Ai}\left(-\left(\frac{3\pi}{2} x G\left(\frac{n}{x} \right)\right)^{2/3}\right)+E_4(x)\right),\label{case3-1} \end{equation} where $Y_n(rx)<0$ and \begin{equation} E_4(x)=O\left(n^{-4/3}\max\left\{1, \left(x G\left(\frac{n}{x} \right)\right)^{1/6}\right\}\right).\label{case3-3} \end{equation} If we further assume that $x G(n/x)>1$, then \begin{equation} f_n(x)=\sqrt{\frac{2}{\pi}} \frac{Y_n(rx)}{\left(\left(Rx\right)^2-n^2\right)^{1/4}} \left(\sin\left( \pi x G\left(\frac{n}{x}\right)+\frac{\pi}{4}\right)+E_5(x)\right),\label{case3-2} \end{equation} where \begin{equation} E_5(x)=O\left(\left(x G\left(\frac{n}{x}\right)\right)^{-1} \right). \label{case3-5} \end{equation} \end{lemma} \begin{remark} Comparing the asymptotics \eqref{case3-2} with \eqref{case2.5-1} at the same point $x=r^{-1}(n-n^{1/3+\varepsilon})$, we notice that $E_5(x)=O(n^{-1})$ is a better bound than $E_3(x)=O(n^{-2/3+2.5\varepsilon})$. This difference is due to the different methods used to prove these two lemmas. In fact, if we expand $Y_n(rx)$ at $x=r^{-1}(n-n^{1/3+\varepsilon})$ by Lemma \ref{9.3.4analogue}, then $E_5(x)$ becomes $E_3(x)$ and \eqref{case3-2} is consistent with \eqref{case2.5-1}. \end{remark} \begin{proof}[Proof of Lemma \ref{case3}] As in the proof of Lemma \ref{case222}, we denote \begin{equation*} Rx=n z_{R} \quad \textrm{and}\quad rx=n z_{r}. \end{equation*} Since $1<z_R< R/r$ the $\zeta_R=\zeta(z_R)$, determined by \eqref{def-zeta1}, is negative such that \begin{equation*} 0<(-\zeta_R)^{3/2}\ll 1. 
\end{equation*} Meanwhile, since $r/R< z_r\leq 1-n^{-2/3+\varepsilon}$, the quantity $\zeta_r=\zeta(z_r)$, determined by \eqref{def-zeta2}, is positive and satisfies \begin{equation*} n^{-1+3\varepsilon/2}\ll \zeta_r^{3/2}\ll 1 \end{equation*} whenever $n$ is sufficiently large. With the estimate \begin{equation*} n^{\varepsilon}\ll n^{2/3}\zeta_r\ll n^{2/3}, \end{equation*} applying Olver's asymptotic expansions \eqref{jnuse111} and \eqref{ynuse111} and the asymptotics for the Airy functions (formulas 10.4.59, 10.4.61, 10.4.63, and 10.4.66 in \cite{abram:1972}) yields \begin{equation*} J_n(rx)=\left(2\pi\right)^{-1/2}\left(n^2-(rx)^2\right)^{-1/4}e^{-\frac{2}{3}n\zeta_r^{3/2}} \left(1+O\left(n^{-1}\zeta_r^{-3/2}\right)\right) \end{equation*} and \begin{equation*} Y_n(rx)=-\left(2/\pi\right)^{1/2}\left(n^2-(rx)^2\right)^{-1/4}e^{\frac{2}{3}n\zeta_r^{3/2}} \left(1+O\left(n^{-1}\zeta_r^{-3/2}\right)\right). \end{equation*} Hence $Y_n(rx)$ is always negative and \begin{equation*} \frac{J_n(rx)}{Y_n(rx)}=-\frac{1}{2}e^{-\frac{4}{3}n\zeta_r^{3/2}}\left(1+O\left(n^{-1}\zeta_r^{-3/2}\right)\right) =O\left(e^{-n^{\varepsilon}}\right). \end{equation*} Therefore \begin{equation} f_n(x)=Y_n(rx)\left(J_n(n z_{R})+Y_n(n z_{R})O\left(e^{-n^{\varepsilon}}\right)\right).\label{case3-4} \end{equation} Notice that \begin{equation*} 0<n^{2/3}(-\zeta_R)\ll n^{2/3}. \end{equation*} We distinguish two cases according to whether $n^{2/3}(-\zeta_R)$ is large or small. Applying Olver's asymptotic expansions to $J_n(n z_{R})$ and $Y_n(n z_{R})$ in \eqref{case3-4} and bounds for the Airy functions yields the formula \eqref{case3-1} with \begin{equation*} E_4(x)=\left\{ \begin{array}{ll} O\left(n^{-4/3}\left(n^{2/3}|\zeta_R|\right)^{1/4}\right), & \textrm{if $n^{2/3}(-\zeta_R)\geq 1$,}\\ O\left(n^{-4/3}\right), & \textrm{if $n^{2/3}(-\zeta_R)< 1$,} \end{array}\right.
\end{equation*} hence the bound \eqref{case3-3}, where we have used the fact that \begin{equation*} n^{2/3}\left(-\zeta_R\right)=\left(\frac{3\pi}{2} x G\left(\frac{n}{x} \right)\right)^{2/3}. \end{equation*} Formulas \eqref{case3-2} and \eqref{case3-5} then follow easily from \eqref{case3-1} and \eqref{case3-3} by using the well-known asymptotics of $\mathrm{Ai}(-r)$. \end{proof} We can now collect all previous lemmas and give a description of the zeros of $f_n$ for large $n$. \begin{theorem} \label{thm111} There exists a constant $c\in (0,1)$ such that for any $\varepsilon>0$ and all sufficiently large $n$ the positive zeros of $f_n$, $\{x_{n,k}\}_{k=1}^{\infty}$, satisfy the following: \begin{enumerate} \item if $rx_{n,k}\geq (1+c)n$ then \begin{equation} x_{n,k} G\left(\frac{n}{x_{n,k}}\right)=k+O\left(x_{n,k}^{-1}\right);\label{thm111-1} \end{equation} \item if $n+n^{1/3+\varepsilon}\leq rx_{n,k}<(1+c)n$ then \begin{equation} x_{n,k} G\left(\frac{n}{x_{n,k}}\right)=k+O\left(z_{n,k}^{-3/2}\right) \label{thm111-2} \end{equation} with $z_{n,k}$ determined by the equation $rx_{n,k}=n+z_{n,k} n^{1/3}$; \item if $n-n^{1/3+\varepsilon}< rx_{n,k}< n+n^{1/3+\varepsilon}$ then \begin{equation} x_{n,k}G\left(\frac{n}{x_{n,k}}\right)=k-\psi\left(z_{n,k}\right)+O\left(n^{-2/3+2.5\varepsilon}\right),\label{thm111-3} \end{equation} where $\psi$ is the function appearing in Lemma \ref{case2.5} and $z_{n,k}$ is defined as above; \item if $rx_{n,k}\leq n-n^{1/3+\varepsilon}$ then \begin{equation} x_{n,k} G\left(\frac{n}{x_{n,k}}\right)=k-\frac{1}{4}+E_{n,k},\label{thm111-4} \end{equation} where \begin{equation*} |E_{n,k}|<\min\left\{\frac{3}{8}, O\left(\left(x_{n,k} G\left(\frac{n}{x_{n,k}}\right)\right)^{-1}\right)\right\}. \end{equation*} \end{enumerate} \end{theorem} \begin{proof} The rough idea of this proof is to apply to $f_n(x)$ the intermediate value theorem in the interval $(0, (s+1/2)\pi/(R-r))$ for any sufficiently large integer $s$, and then to use J. Cochran's result on the number of zeros in such an interval (see \cite{cochran:1964}). We will study zeros of $f_n$ only in $(n/R, (s+1/2)\pi/(R-r))$ for any sufficiently large integer $s>n^3$, since Lemma \ref{case0} tells us that there are no zeros $\leq n/R$. Inspired by the asymptotics obtained in this section, we carry out the study by examining the values of $h_n(x):=x G(n/x)$\footnote{This notation $h_n$ will be used throughout the rest of this section.} on the chosen interval (see Figure \ref{hn} for an example of the graph of $h_n$). \begin{figure} [ht] \centering \includegraphics[width=0.6\textwidth]{hn.eps} \caption{The graph of $h_n$ with $n=30$, $R=2$ and $r=1$.} \label{hn} \end{figure} We observe that $h_n: [n/R, \infty)\rightarrow [0, \infty)$ is a continuous and strictly increasing function that maps $(n/R, (s+1/2)\pi/(R-r))$ onto $(0, s+1/2+O(n^{-1}))$. Therefore for each integer $1\leq k\leq s$ there exists an interval $(a_k, b_k)\subset (n/R, (s+1/2)\pi/(R-r))$ such that $h_n$ maps $(a_k, b_k)$ onto $(k-3/8, k+1/8)$ bijectively. These intervals $(a_k, b_k)$ are clearly pairwise disjoint and ordered by $k$. We claim that if $n$ is sufficiently large then for each $1\leq k\leq s$ \begin{equation} f_n(a_k)f_n(b_k)<0.\label{IVT-condition} \end{equation} If this is true, the intermediate value theorem ensures the existence of at least one zero of $f_n$ in each $(a_k, b_k)$. Recall that there are exactly $s$ zeros of $f_n$ in $(0, (s+1/2)\pi/(R-r))$ (see \cite{cochran:1964}). Hence there exists one and only one zero in each $(a_k, b_k)$, which must be $x_{n,k}$ by definition. To verify \eqref{IVT-condition} we take advantage of the asymptotics \eqref{case111-1}, \eqref{case222-1}, \eqref{case2.5-1}, \eqref{case3-2} and \eqref{case3-1}.
In all cases except the last one (when $x G(n/x)<C$ for a sufficiently large $C$), the verification is easy if we notice that \begin{equation*} h_n(a_k)+\delta_1 \in \left[k-\frac{3}{8}, k-\frac{1}{8}\right] \quad \textrm{and} \quad h_n(b_k)+\delta_2\in \left[k+\frac{1}{8}, k+\frac{3}{8}\right] \end{equation*} for any $0\leq \delta_1, \delta_2\leq 1/4$. In the last case when $x G(n/x)<C$ we use the asymptotics \eqref{case3-1}. The sign of $f_n$ depends on that of \begin{equation} \mathrm{Ai}\left(-\left(3\pi h_n(x)/2 \right)^{2/3}\right)+O_C\left(n^{-4/3}\right). \label{theorem-1} \end{equation} As in the proof of Lemma \ref{case2.5}, we denote by $t_k$ ($k\in\mathbb{N}$) the $k$th zero of the equation $\textrm{Ai}(-x)=0$. \cite[P.405]{olver:1997} gives that \begin{equation*} t_k=\left[\frac{3\pi}{2}\left(k-\frac{1}{4}+\alpha'_k\right)\right]^{2/3} \end{equation*} with a crude estimate $|\alpha'_k|<0.11$. Thus \begin{align} t_k\in &\left(\left[\frac{3\pi}{2}\left(k-0.36\right)\right]^{2/3}, \left[\frac{3\pi}{2}\left(k-0.14\right)\right]^{2/3} \right) \nonumber \\ &\subsetneq \left(\left[\frac{3\pi}{2}h_n(a_k)\right]^{2/3}, \left[\frac{3\pi}{2}h_n(b_k)\right]^{2/3} \right). \label{theorem-2} \end{align} Since $\textrm{Ai}(-x)$ oscillates around zero for positive $x$ and the intervals in \eqref{theorem-2} are disjoint for different $k$'s, the signs of \eqref{theorem-1} at $x=a_k$ and $b_k$ must be opposite whenever $n$ is sufficiently large, which in turn gives \eqref{IVT-condition} in the last case. We are now able to finish the proof of the theorem. For each zero $x_{n,k}$, $f_n(x_{n,k})=0$. If $h_n(x_{n,k})\geq C$ we apply to the left hand side either \eqref{case111-1}, \eqref{case222-1}, \eqref{case2.5-1} or \eqref{case3-2}, and conclude that the factor involving the sine function and $h_n(x_{n,k})$ has to be zero. 
Since $h_n(x_{n,k})+\delta$ is always in the interval $[k-3/8, k+3/8]$ for any $0\leq \delta\leq 1/4$, applying the arcsine function immediately yields the desired asymptotics. If $h_n(x_{n,k})< C$ we use the fact that $h_n(x_{n,k})\in (h_n(a_k), h_n(b_k))$ to get a crude estimate $|h_n(x_{n,k})-(k-1/4)|<3/8$. \end{proof} For small $n$ we have the following. \begin{theorem} \label{thm222} For any $N\in\mathbb{N}$ there exists a constant $K>0$ such that if $0\leq n\leq N$ and $k\geq K$ then the positive zero $x_{n,k}$ of $f_n$ satisfies \begin{equation} x_{n,k} G\left(\frac{n}{x_{n,k}}\right)=k+O\left(x_{n,k}^{-1}\right). \label{thm222-2} \end{equation} \end{theorem} \begin{proof} If $0\leq n\leq N$ and $x>C_N$ for a sufficiently large constant $C_N$, then Lemma \ref{case111} (with $c=1$) gives a factorization \eqref{case111-1} of $f_n$ with $|E_1(x)|<1/100$. By using such a factorization we study $f_n$ on the interval \begin{equation} \left[\frac{(k-1/2)\pi}{R-r}, \frac{(k+1/2)\pi}{R-r}\right), \label{thm222-1} \end{equation} which is a subset of $(C_N, \infty)$ if $k$ is sufficiently large. As in the proof of Theorem \ref{thm111}, we then study the function $h_n(x)=x G(n/x)$ on a subinterval of \eqref{thm222-1}, denoted by $(a_k, b_k)$, with $h_n((a_k, b_k))=(k-3/8, k+1/8)$. Such a subinterval indeed exists if $k$ is sufficiently large since $h_n((k\pm 1/2)\pi/(R-r))=k\pm 1/2+O(N^2/k)$. It is easy to see that $f_n(a_k)f_n(b_k)<0$. By the intermediate value theorem there exists at least one zero of $f_n$ in $(a_k, b_k)$, which must be $x_{n,k}$ since there exists exactly one zero in the interval \eqref{thm222-1} if $k$ is sufficiently large (due to the fact that there are exactly $s$ zeros of $f_n$ in $(0, (s+1/2)\pi/(R-r))$ for sufficiently large integer $s$ (see \cite{cochran:1964})). Thus \begin{equation*} \sin\left( \pi h_n(x_{n,k})\right)+E_1(x_{n,k})=0. \end{equation*} Applying the arcsine function yields the desired result.
\end{proof} \begin{corollary}\label{cor1} For any sufficiently large integer $n$ and any $0<\sigma<R$, all zeros $x_{n,k}$ greater than $n/\sigma$ satisfy \begin{equation*} 1 \ll x_{n,k+1}-x_{n,k}\ll_{\sigma} 1. \end{equation*} Furthermore, if $0<\sigma\leq r$ then the dependence of the implicit constant on $\sigma$ can be removed. For any $N\in\mathbb{N}$, if $0\leq n\leq N$ and $k$ is sufficiently large, then \begin{equation*} x_{n,k+1}-x_{n,k}\asymp 1. \end{equation*} \end{corollary} \begin{proof} If $n\geq 1$, a straightforward computation shows that if $x\geq n/r$ then \begin{equation*} h_n'(x)=\frac{\left(R^2-r^2\right)/\pi}{\sqrt{R^2-(n/x)^2}+\sqrt{r^2-(n/x)^2}} \in \left(\frac{R-r}{\pi}, \frac{\sqrt{R^2-r^2}}{\pi} \right]; \end{equation*} if $n/R\leq x\leq n/r$ then \begin{equation*} h_n'(x)=\frac{1}{\pi}\sqrt{R^2-(n/x)^2}\in \left[0, \frac{1}{\pi}\sqrt{R^2-r^2} \right]. \end{equation*} For all sufficiently large $n$ the desired results follow from Theorem \ref{thm111}, the mean value theorem and the above derivative bounds. For any $N\in\mathbb{N}$ and $1\leq n\leq N$, we observe that if $k$ is sufficiently large (depending on $N$) then $x_{n,k}>n/r$. We can then derive the desired result similarly with Theorem \ref{thm111} replaced by Theorem \ref{thm222}. The case $n=0$ follows trivially from Theorem \ref{thm222}. \end{proof} \begin{corollary}\label{cor2} The error terms in both \eqref{thm111-1} and \eqref{thm222-2}, in \eqref{thm111-2}, in \eqref{thm111-3} and in \eqref{thm111-4} are of size \begin{equation*} O\left(\frac{1}{n+k}\right),\quad O\left(\frac{n^{1/2}}{\left(k-\frac{G(r)}{r}n\right)^{3/2}}\right), \quad O\left(n^{-2/3+2.5\varepsilon}\right) \quad \textrm{and}\quad O\left(\frac{1}{k}\right) \end{equation*} respectively. \end{corollary} \begin{remark}\label{cor2-3} These bounds can all be made as small as we wish by choosing $n$ or $k$ suitably large. This is evident except (perhaps) for the second bound.
For that bound, we just need to notice \eqref{cor2-2} below and the corresponding range of $z_{n,k}$, namely $n^{\varepsilon}\leq z_{n,k}<cn^{2/3}$. \end{remark} \begin{proof}[Proof of Corollary \ref{cor2}] For \eqref{thm111-1} and \eqref{thm222-2} the desired bound follows easily from Lemma \ref{case0}. For \eqref{thm111-3} and \eqref{thm111-4} the bounds can be obtained directly from the asymptotics themselves. We will focus on the error term in \eqref{thm111-2} below and prove that \begin{equation} z_{n,k}\asymp n^{-1/3}\left(k-\frac{G(r)}{r}n\right).\label{cor2-2} \end{equation} Let $k_0, k\in \mathbb{N}$ be such that \begin{equation} rx_{n,k_0-1}<n\leq rx_{n,k_0} \label{cor2-1} \end{equation} and \begin{equation*} n+n^{1/3+\varepsilon}\leq rx_{n,k}<(1+c)n. \end{equation*} Hence, by Corollary \ref{cor1}, $k-k_0\asymp x_{n,k}-x_{n,k_0} \gg n^{1/3+\varepsilon}$, which in particular is much greater than $1$. Since $z_{n,k_0}\geq 0>z_{n,k_0-1}$ we have \begin{equation*} z_{n,k}\geq z_{n,k}-z_{n,k_0}=rn^{-1/3}\left(x_{n,k}-x_{n,k_0} \right)\asymp n^{-1/3}\left(k-k_0 \right) \end{equation*} and \begin{equation*} z_{n,k}< z_{n,k}-z_{n,k_0-1}=rn^{-1/3}\left(x_{n,k}-x_{n,k_0-1} \right)\asymp n^{-1/3}\left(k-k_0 \right). \end{equation*} By using \eqref{thm111-3}, \eqref{cor2-1} and the monotonicity of $h_n$ we have \begin{equation*} k-k_0\asymp k-\frac{G(r)}{r}n. \end{equation*} We therefore obtain \eqref{cor2-2}. \end{proof} Let $F: [0, \infty)\times [0, \infty)\setminus \{O\}\rightarrow \mathbb{R}$ be the function, homogeneous of degree $1$, which satisfies $F\equiv1$ on the graph of $G$. By implicit differentiation, we have \begin{equation*} \partial_y F(x,y)=\frac{1}{(t, G(t))\cdot (-G'(t),1)} \end{equation*} and \begin{equation*} \partial_x F(x,y)=\frac{-G'(t)}{(t, G(t))\cdot (-G'(t),1)}, \end{equation*} where $0\leq t<R$ is determined by $ty=G(t)x$, that is, $(t, G(t))$ is the intersection point of the graph of $G$ and the line segment connecting the origin $O$ and the point $(x, y)$.
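For concreteness, $F$ can be evaluated numerically: given $(x,y)$, solve $ty=G(t)x$ for $t$ by bisection (the difference $ty-G(t)x$ is strictly increasing in $t$ since $G'\leq 0$) and read off the dilation factor carrying $(t,G(t))$ to $(x,y)$. The sketch below is illustrative only; it assumes sample radii $R=2$, $r=1$ and the explicit formula $G(t)=\pi^{-1}(\sqrt{R^2-t^2}-t\arccos(t/R))-\pi^{-1}(\sqrt{r^2-t^2}-t\arccos(t/r))$ (second bracket present only for $t\leq r$), which is consistent with the derivative formulas $h_n'$ in the proof of Corollary \ref{cor1}.

```python
import math

R, r = 2.0, 1.0  # sample radii, 0 < r < R (an assumption for illustration)

def G(t):
    """Assumed explicit form of the phase function G, consistent with the
    formulas for h_n'(x) in the proof of Corollary cor1."""
    def branch(rho):
        if t >= rho:
            return 0.0
        return (math.sqrt(rho * rho - t * t) - t * math.acos(t / rho)) / math.pi
    return branch(R) - branch(r)

def F(x, y):
    """Degree-1 homogeneous function with F == 1 on the graph of G.

    Solve t*y = G(t)*x on (0, R) by bisection (t*y - G(t)*x is strictly
    increasing in t because G is decreasing); then F(x, y) = x / t is the
    dilation factor.  Points on the y-axis are handled separately."""
    if x == 0.0:
        return y / G(0.0)
    lo, hi = 0.0, R
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * y - G(mid) * x < 0.0:
            lo = mid
        else:
            hi = mid
    return x / lo
```

For instance, $F(t_0, G(t_0))$ returns $1$ up to rounding for interior $t_0$, and doubling both arguments doubles the value, as homogeneity requires.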
Analyzing the sizes of the above derivatives yields \begin{lemma}\label{derivativeF} With the above notation, if $R-c\leq t<R$ for a sufficiently small constant $c>0$, then \begin{equation*} 0<\partial_y F(x,y)\asymp x^{1/3}y^{-1/3}, \end{equation*} otherwise \begin{equation*} 0<\partial_y F(x,y)\asymp_c 1. \end{equation*} We also have $0\leq \partial_x F(x,y)\ll 1$. In particular, if $0<c'\leq t<R$ then \begin{equation*} 0< \partial_x F(x,y)\asymp_{c'} 1. \end{equation*} \end{lemma} By Theorems \ref{thm111} and \ref{thm222}, Corollary \ref{cor2} and Lemma \ref{derivativeF}, we have the following approximations of zeros. \begin{corollary}\label{approximation} There exists a constant $c\in (0,1)$ such that for any $\varepsilon>0$ there exists an $N\in\mathbb{N}$ such that if $n>N$ then the positive zeros of $f_n$, $\{x_{n,k}\}_{k=1}^{\infty}$, satisfy \begin{equation} x_{n, k}=F(n,k-\tau_{n,k})+R_{n,k},\label{approximation1} \end{equation} where \begin{equation} \tau_{n,k}=\left\{ \begin{array}{ll} 0, & \textrm{if $rx_{n,k}\geq n+n^{1/3+\varepsilon}$,}\\ \psi\left(z_{n,k}\right), & \textrm{if $n-n^{1/3+\varepsilon}< rx_{n,k}< n+n^{1/3+\varepsilon}$,}\\ 1/4, & \textrm{if $rx_{n,k}\leq n-n^{1/3+\varepsilon}$,} \end{array} \right. \label{translation} \end{equation} where $\psi$ is the function appearing in Lemma \ref{case2.5} with $z_{n,k}$ determined by the equation $rx_{n,k}=n+z_{n,k} n^{1/3}$, and \begin{equation*} R_{n,k}\!=\!\left\{ \begin{array}{ll} \!O\left((n+k)^{-1}\right), & \textrm{if $rx_{n,k}\geq (1+c)n$,}\\ \!O\left(n^{1/2}\left(k-\frac{G(r)}{r}n\right)^{-3/2}\right), & \textrm{if $n+n^{1/3+\varepsilon}\leq rx_{n,k}<(1+c)n$,}\\ \!O\left(n^{-2/3+2.5\varepsilon}\right), & \textrm{if $n-n^{1/3+\varepsilon}< rx_{n,k}< n+n^{1/3+\varepsilon}$,}\\ \!O\left(n^{1/3}k^{-4/3}\right), &\textrm{if $rx_{n,k}\leq n-n^{1/3+\varepsilon}$.} \end{array} \right.
\end{equation*} If $0\leq n\leq N$ there exists a $K\in\mathbb{N}$ such that if $k>K$ then \eqref{approximation1} holds with \begin{equation} \tau_{n,k}=\left\{ \begin{array}{ll} 0, & \textrm{if $k>K$,}\\ 1/4, & \textrm{if $1\leq k\leq K$,} \end{array} \right. \label{translation2}\end{equation} and\footnote{The definition of $\tau_{n,k}$ for $1\leq k\leq K$ is irrelevant here; however, we define it anyway for the discussion in the next section.} \begin{equation*} R_{n,k}=O\left((n+k)^{-1}\right). \end{equation*} \end{corollary} \begin{proof} If $rx_{n,k}>n-n^{1/3+\varepsilon}$ then $x_{n,k}>2n/(R+r)$ for sufficiently large $n$. By using \eqref{thm111-1}--\eqref{thm111-3} and the monotonicity of $h_n$, we have \begin{equation*} \frac{k}{n}\geq \frac{h_n(x_{n,k})}{2n}\geq \frac{1}{R+r}G\left(\frac{R+r}{2}\right), \end{equation*} which, by Lemma \ref{derivativeF}, ensures that $\partial_y F(n, k)\asymp 1$. Approximation \eqref{approximation1} in the case $rx_{n,k}>n-n^{1/3+\varepsilon}$ then follows from \eqref{thm111-1}--\eqref{thm111-3}, the mean value theorem and Corollary \ref{cor2}. If $rx_{n,k}\leq n-n^{1/3+\varepsilon}$ then $x_{n,k}<n/r$. We argue as above to get that \begin{equation*} \frac{k}{n}=\frac{h_n(x_{n,k})+O(1)}{n}\ll 1. \end{equation*} By Lemma \ref{derivativeF}, if $k/n$ is sufficiently small then $\partial_y F(n, k)\asymp n^{1/3}k^{-1/3}$; otherwise $k/n \asymp 1$, which also ensures that $\partial_y F(n, k)\asymp n^{1/3}k^{-1/3}$. Approximation \eqref{approximation1} in the case $rx_{n,k}\leq n-n^{1/3+\varepsilon}$ thus follows from \eqref{thm111-4}, the mean value theorem and Corollary \ref{cor2}. Finally, the case $0\leq n\leq N$ follows easily from Theorem \ref{thm222}, Corollary \ref{cor2} and Lemma \ref{derivativeF}. \end{proof} \section{Spectrum counting to lattice counting}\label{reduction-sec} Consider the Dirichlet Laplacian operator $\triangle$ on the planar annulus $\mathscr{D}$.
Using the standard separation of variables, we know that its spectrum consists exactly of the numbers $x_{n,k}^2$, $n\in\mathbb{N}\cup\{0\}$, $k\in\mathbb{N}$, defined at the beginning of Section \ref{zeros}. We also know that each eigenvalue $x_{n,k}^2$ appears in the spectrum twice for every fixed $n\in\mathbb{N}$ and only once if $n=0$. If we define $x_{n,k}=x_{-n,k}$ for any negative integer $n$, then the spectrum counting function $\mathscr{N}_{\mathscr{D}}(\mu)$ defined by \eqref{e-counting} becomes \begin{equation*} \mathscr{N}_{\mathscr{D}}(\mu)=\#\left\{(n, k)\in\mathbb{Z}\times \mathbb{N} : x_{n,k}\leq \mu \right\}. \end{equation*} Recall that we define in Corollary \ref{approximation} (with the $c$ and $\varepsilon$ appearing there fixed) the amount of translation $\tau_{n,k}$ for $n\in \mathbb{N}\cup \{0\}$, $k\in \mathbb{N}$, namely, \eqref{translation} and \eqref{translation2}. We now extend its definition to $\mathbb{Z}^2$ by letting $\tau_{n,k}$ be $\tau_{-n,k}$ if $n<0$ and $1/4$ if $k\leq 0$. In view of the multiplicity of the spectrum and Corollary \ref{approximation}, each $x_{n,k}$, $n\in\mathbb{Z}$, corresponds to a unique point $(n,k-\tau_{n,k})$. Denote by $\mathcal{D}$ the closed domain symmetric about the $y$-axis and in the first quadrant bounded by the graph of $G$ and the $x$-axis. See the shaded area in Figure \ref{SymmH}. Define a lattice counting function $\mathcal{N}_{\mathcal{D}}(\mu)$ by \begin{equation*} \mathcal{N}_{\mathcal{D}}(\mu)=\#\left(\mu\mathcal{D}\cap \left\{(n,k-\tau_{n,k}) : (n,k)\in\mathbb{Z}^2\right\}\right), \quad \mu>2. \end{equation*} \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{SymmH.eps} \caption{The symmetric domain $\mathcal{D}$.} \label{SymmH} \end{figure} Then one can transfer the spectrum counting problem to a lattice counting problem via the following result. In its proof we essentially follow the treatment for ``the boundary parts'' in \cite[Theorem 3.1]{colin:2011}.
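As a rough sanity check on this reduction, the lattice count can be compared with the area term numerically. The exact translations $\tau_{n,k}$ require the Bessel-zero asymptotics, but since $0\leq\tau_{n,k}\leq 1/4$ the count $\mathcal{N}_{\mathcal{D}}(\mu)$ is sandwiched between the counts with the constant translations $0$ and $1/4$. The sketch below is illustrative only: it assumes sample radii $R=2$, $r=1$ and an explicit formula for $G$ consistent with the derivative computations in the proof of Corollary \ref{cor1}, and verifies that both constant-translation counts match the area term $\frac14(R^2-r^2)\mu^2$ to leading order.

```python
import math

R, r = 2.0, 1.0  # sample radii (an assumption for illustration)

def G(t):
    """Assumed explicit phase function G bounding the domain D from above."""
    def branch(rho):
        if t >= rho:
            return 0.0
        return (math.sqrt(rho * rho - t * t) - t * math.acos(t / rho)) / math.pi
    return branch(R) - branch(r)

def count(mu, tau):
    """Number of points (n, k - tau), n in Z, k in N, lying in mu*D,
    i.e. with |n| <= R*mu and 0 < k - tau <= mu * G(|n| / mu)."""
    total = 0
    for n in range(-int(R * mu), int(R * mu) + 1):
        top = mu * G(abs(n) / mu)
        total += max(0, math.floor(top + tau))
    return total

mu = 200.0
area_term = 0.25 * (R * R - r * r) * mu * mu   # |D| * mu^2
low, high = count(mu, 0.0), count(mu, 0.25)
```

Both `low` and `high` deviate from `area_term` only by a boundary-sized amount of order $\mu$, in line with the two-term Weyl formula.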
\begin{proposition}\label{difference1} There exists a constant $C>0$ such that \begin{equation} \left|\mathscr{N}_{\mathscr{D}}(\mu)-\mathcal{N}_{\mathcal{D}}(\mu) \right|\leq \mathcal{N}_{\mathcal{D}}\left(\mu+C\mu^{-0.4}\right)- \mathcal{N}_{\mathcal{D}}\left(\mu-C\mu^{-0.4}\right)+O\left(\mu^{0.6} \right). \label{222} \end{equation} \end{proposition} \begin{proof} To study $\mathscr{N}_{\mathscr{D}}(\mu)$ we would like to use the approximations of the $x_{n, k}$'s given by Corollary \ref{approximation}. We need to assume that $\max\{|n|,k\}$ is sufficiently large; however, we will not emphasize this explicitly in the following argument. This treatment will not cause any problem; after all, it will produce at most an $O(1)$ error, which is much less than the error term $O(\mu^{0.6})$ in \eqref{222}. For $k\in\mathbb{N}$, let \begin{equation*} \mathscr{N}_k(\mu):=\#\left\{n\in\mathbb{N} : x_{n,k}\leq \mu \right\}=\#\left\{n\in\mathbb{N} : F(n,k-\tau_{n,k})+R_{n,k}\leq \mu\right\} \end{equation*} and \begin{equation*} \mathcal{N}_k(\mu):=\#\left\{n\in\mathbb{N} : (n,k-\tau_{n,k})\in\mu\mathcal{D}\right\}=\#\left\{n\in\mathbb{N} : F(n,k-\tau_{n,k})\leq\mu\right\}. \end{equation*} Then \begin{align} \Delta_k(\mu):&=|\mathscr{N}_k(\mu)-\mathcal{N}_k(\mu)| \nonumber \\ &\leq \#\left\{n\in\mathbb{N} : \mu-|R_{n,k}|\leq F(n,k-\tau_{n,k})\leq \mu+|R_{n,k}|\right\}. \label{999} \end{align} Hence we only need to consider points $(n,k-\tau_{n,k})$ satisfying $F(n,k-\tau_{n,k})=\mu+O(1)$. We next use Lemma \ref{derivativeF} and Corollary \ref{approximation} to obtain bounds on $\Delta_k(\mu)$. We distinguish several cases depending on the size of $k$. If $1\leq k\leq \mu^{1/4}$ then \begin{equation} \Delta_k(\mu)\ll \mu^{1/3}k^{-4/3}.\label{444} \end{equation} Indeed, in this case we have $n\asymp \mu$.
This, together with Theorem \ref{thm111}, leads to \begin{equation*} \frac{G(n/x_{n,k})}{n/x_{n,k}}=\frac{k+O(1)}{n}\ll \mu^{-3/4}, \end{equation*} which implies that $n/x_{n,k}$ is close to $R$ and thus $rx_{n,k}\leq n-n^{1/3+\varepsilon}$. Therefore $R_{n,k}=O(n^{1/3}k^{-4/3})$. Using the estimate of $\partial_x F$ we get \eqref{444}. If $\mu^{1/4}< k\leq \mu^{4/7}$ then an argument similar to the above shows that $R_{n,k}=O(n^{1/3}k^{-4/3})=O(1)$ and thus\footnote{In this case $R_{n,k}$ may be much smaller than $1$, but there may still exist one element in the set in \eqref{999}. Hence we can only use the trivial bound $O(1)$ for $\Delta_k(\mu)$.} \begin{equation*} \Delta_k(\mu)\ll 1. \end{equation*} If $\mu^{4/7}< k\leq G(r)\mu-C_1$ for a sufficiently large constant $C_1$ (to be determined below) then \begin{equation} \Delta_k(\mu)\leq \mathcal{N}_k(\mu+C\mu^{-3/7})-\mathcal{N}_k(\mu-C\mu^{-3/7})\label{666} \end{equation} for some constant $C$. Indeed, let us fix arbitrarily an element $n$ belonging to the set in \eqref{999}; hence the point $(n,k-\tau_{n,k})$ is contained in a tubular neighborhood of $\mu\partial \mathcal{D}$ of width much less than $1$ (see Remark \ref{cor2-3}). Since $G'$ is continuous at the point $x=r$ and $G'(r)\in (-1/2, 0)$, as $\mu\rightarrow \infty$ the tubular neighborhood (mentioned above) between $y=G(r)\mu$ and $y=G(r)\mu-C_1$ is close to a parallelogram. A simple geometric argument ensures that if $C_1$ is a sufficiently large constant then $n\geq r\mu$. As a result, \begin{equation} \frac{k}{n}\leq \frac{G(r)}{r}-\frac{C_1}{n}.\label{555} \end{equation} On the other hand, we observe, as a consequence of Theorem \ref{thm111} and the monotonicity of $G$, that if $rx_{n,k}\geq n+n^{1/3+\varepsilon}$ then \begin{equation*} \frac{k}{n}=\frac{G(n/x_{n,k})}{n/x_{n,k}}+O\left(n^{-1-\frac{3}{2}\varepsilon} \right)>\frac{G(r)}{r}+O\left(n^{-1-\frac{3}{2}\varepsilon}\right), \end{equation*} which contradicts \eqref{555}.
Therefore $rx_{n,k}< n+n^{1/3+\varepsilon}$ and $R_{n,k}$ can only be either $O(n^{-2/3+2.5\varepsilon})$ or $O(n^{1/3}k^{-4/3})$, both of which are of size $O(\mu^{-3/7})$ since $n\asymp \mu$. We then readily get \eqref{666}. If $G(r)\mu-C_1< k\leq G(r)\mu+\mu^{0.6}$ then the trivial estimate $R_{n,k}=O(1)$ yields that \begin{equation*} \Delta_k(\mu)\ll 1. \end{equation*} If $k>G(r)\mu+\mu^{0.6}$ then \begin{equation} \Delta_k(\mu)\leq \mathcal{N}_k(\mu+C\mu^{-0.4})-\mathcal{N}_k(\mu-C\mu^{-0.4})\label{888} \end{equation} for some constant $C$. Since the proof is almost the same as that of \eqref{666}, let us be brief. We still fix arbitrarily an element $n$ belonging to the set in \eqref{999}. A geometric argument shows that $n<r\mu$. Thus \begin{equation*} \frac{k}{n}> \frac{G(r)}{r}+\frac{\mu^{0.6}}{n}. \end{equation*} However, if $rx_{n,k}\leq n-n^{1/3+\varepsilon}$ then \begin{equation*} \frac{k}{n}=\frac{G(n/x_{n,k})}{n/x_{n,k}}+\frac{1/4+O(1)}{n}<\frac{G(r)}{r}+\frac{O(1)}{n}, \end{equation*} which is impossible. Hence $rx_{n,k}>n-n^{1/3+\varepsilon}$ and $R_{n,k}$ can only be of the form $O\left((n+k)^{-1}\right)$, $O\left(n^{1/2}\left(k-\frac{G(r)}{r}n\right)^{-3/2}\right)$ or $O(n^{-2/3+2.5\varepsilon})$. In fact, we further observe that if $n/\mu$ is sufficiently small then $k/n$ is sufficiently large and $R_{n,k}$ must be $O\left((n+k)^{-1}\right)$, as a consequence of Theorems \ref{thm111} and \ref{thm222}. To conclude the proof of \eqref{888}, we only need to notice that whichever of these forms $R_{n,k}$ takes, it is always of size $O(\mu^{-0.4})$.
If $n=0$, by using exactly the same argument as above we get \begin{align} &\left|\#\left\{k\in\mathbb{N} : x_{0,k}\leq \mu \right\}-\#\left\{k\in\mathbb{N} : (0,k-\tau_{0,k})\in\mu\mathcal{D}\right\}\right| \label{333}\\ &\quad \leq \#\left\{k\in\mathbb{N} : (0,k-\tau_{0,k})\in \left(\mu+C\mu^{-1}\right)\mathcal{D}\setminus \left(\mu-C\mu^{-1}\right)\mathcal{D} \right\} \nonumber \end{align} for some constant $C>0$. To conclude, summing the above bounds on $\Delta_k(\mu)$ over $k\in\mathbb{N}$, using the symmetry between positive and negative $n$'s, and invoking the bound \eqref{333} yields the desired inequality. \end{proof} $\mathcal{N}_{\mathcal{D}}(\mu)$ counts the number of lattice points (under various translations) in $\mu\mathcal{D}$. This feature creates some obstacles for its estimation. To overcome this difficulty we move every point $(n, k-\tau_{n,k})$ to $(n, k-1/4)$ to obtain uniformity in translation, and then study the relatively standard lattice counting function \begin{equation}\label{lattice-pro2} \mathcal{N}_{\mathcal{D}}^{u}(\mu)=\#\left(\mu\mathcal{D}\cap \left\{(n,k-1/4) : (n,k)\in\mathbb{Z}^2\right\}\right), \quad \mu>2. \end{equation} Here the superscript ``u'' represents the uniformity in translation. Of course such a transformation from $\mathcal{N}_{\mathcal{D}}(\mu)$ to $\mathcal{N}_{\mathcal{D}}^{u}(\mu)$ causes a difference. To quantify it we need to count the number of lattice points in a band of length $r\mu$ and width $1/4$. (This will be clear in the proof of the next proposition.) For $0<L\leq R\mu$ let us define a band on $[0, L]$ by \begin{equation*} \mathcal{B}_{L}=\left\{(x,y)\in\mathbb{R}^2 : 0\leq x\leq L, \, \mu G\left(\frac{x}{\mu} \right)< y\leq \mu G\left(\frac{x}{\mu} \right)+\frac{1}{4} \right\} \end{equation*} and define the number of points of $\mathbb{Z}^2$ in the band $\mathcal{B}_{r\mu}$ by \begin{equation} \# \left(\mathcal{B}_{r\mu}\cap\mathbb{Z}^2 \right)=\frac{1}{4}r\mu+\mathcal{E}(\mu).
\label{error-in-band} \end{equation} One would expect the error term $\mathcal{E}(\mu)$ to be much smaller than the linear term $r\mu/4$ since, heuristically, the number of lattice points inside a large planar domain is asymptotically equal to the area of the domain with an error term that is not too bad if the curvature involved does not vanish. We will estimate $\mathcal{E}(\mu)$ in the next section. With $\mathcal{E}(\mu)$ defined as above we have \begin{proposition}\label{difference2} \begin{equation*} \mathcal{N}_{\mathcal{D}}^{u}(\mu)=\mathcal{N}_{\mathcal{D}}(\mu)+\frac{1}{2}r\mu+2\mathcal{E}(\mu)+O\left(\mu^{1/3+\varepsilon}\right). \end{equation*} \end{proposition} \begin{proof} In view of the definition of $\tau_{n,k}$, moving the points $(n, k-\tau_{n,k})$ down to $(n, k-1/4)$ may move some of these points into the domain $\mu\mathcal{D}$, but moves none out. Hence the difference between $\mathcal{N}_{\mathcal{D}}^{u}(\mu)$ and $\mathcal{N}_{\mathcal{D}}(\mu)$ is equal to twice (due to the symmetry) the number of points $(n, k-\tau_{n,k})$ in the band $\mathcal{B}_{R\mu}$ that are moved into the domain $\mu\mathcal{D}$. There are three types of points $(n, k-\tau_{n,k})$ in this band: \begin{enumerate} \item $(n,k)$'s, which correspond to the case $\tau_{n,k}=0$ and definitely enter $\mu\mathcal{D}$; \item $(n, k-\tau_{n,k})$'s with $0<\tau_{n,k}<1/4$, which may enter $\mu\mathcal{D}$; \item $(n, k-1/4)$'s, which correspond to the case $\tau_{n,k}=1/4$ and are not moved. \end{enumerate} Concerning these three types of points, one key observation is that the points of the first type all lie above the line passing through $O$ and $J$ (see Figure \ref{SymmH}) while the points of the third type all lie below. This follows from the facts that if $rx_{n,k}\geq n+n^{1/3+\varepsilon}$ then $k/n>G(r)/r$ and if $rx_{n,k}\leq n-n^{1/3+\varepsilon}$ then $k/n<G(r)/r$. We prove only the former fact; the proof of the latter is similar.
Indeed, by Theorem \ref{thm111} and the monotonicity of $G$, if $rx_{n,k}\geq (1+c)n$ then \begin{equation*} \frac{k}{n}=\frac{G(n/x_{n,k})}{n/x_{n,k}}+O\left(\frac{1}{n(n+k)}\right)\geq \frac{G\left(\frac{r}{1+c}\right)}{\frac{r}{1+c}}+O\left(\frac{1}{n(n+k)}\right), \end{equation*} which is greater than $G(r)/r$ since $n+k\asymp \mu$. If $n+n^{1/3+\varepsilon}\leq rx_{n,k}<(1+c)n$ similarly we have \begin{equation*} \frac{k}{n}=\frac{G(n/x_{n,k})}{n/x_{n,k}}+O\left(n^{-1-\frac{3}{2}\varepsilon} \right)\geq \frac{G\left(\frac{r}{1+n^{-2/3+\varepsilon}}\right)}{\frac{r}{1+n^{-2/3+\varepsilon}}}+O\left(n^{-1-\frac{3}{2}\varepsilon}\right). \end{equation*} By the mean value theorem and a straightforward computation of $(G(x)/x)'$, we have \begin{equation*} 0<\frac{G\left(\frac{r}{1+n^{-2/3+\varepsilon}}\right)}{\frac{r}{1+n^{-2/3+\varepsilon}}}-\frac{G(r)}{r}\gg n^{-2/3+\varepsilon}. \end{equation*} Combining the last two inequalities yields the desired one. Another key observation is that any point $(n, k-\tau_{n,k})$ of the second type in the band $\mathcal{B}_{R\mu}$ is such that $|n-r\mu|\leq C'\mu^{1/3+\varepsilon}$ for some large constant $C'$. Indeed, by Corollary \ref{approximation}, if $n-n^{1/3+\varepsilon}< rx_{n,k}< n+n^{1/3+\varepsilon}$ then \begin{equation*} x_{n, k}=F(n,k-\tau_{n,k})+O\left(n^{-2/3+2.5\varepsilon}\right)=\mu+O(1). \end{equation*} Plugging this expression for $x_{n,k}$ into the inequality $n-n^{1/3+\varepsilon}< rx_{n,k}< n+n^{1/3+\varepsilon}$ yields the desired range of $n$. As a result, the points $(n,k-\tau_{n,k})$ in the band $\mathcal{B}_{r\mu-C'\mu^{1/3+\varepsilon}}$ are only of the first type, which definitely enter $\mu\mathcal{D}$. By \eqref{error-in-band} their number equals $r\mu/4+\mathcal{E}(\mu)+O(\mu^{1/3+\varepsilon})$. Some of the points $(n,k-\tau_{n,k})$ with $|n-r\mu|< C'\mu^{1/3+\varepsilon}$ may enter $\mu\mathcal{D}$; their number is of size $O(\mu^{1/3+\varepsilon})$.
The points $(n,k-\tau_{n,k})$ in $\mathcal{B}_{R\mu}\setminus \mathcal{B}_{r\mu+C'\mu^{1/3+\varepsilon}}$ are only of the third type and are not moved. To sum up, the number of points $(n, k-\tau_{n,k})$ in the band $\mathcal{B}_{R\mu}$ that are moved into the domain $\mu\mathcal{D}$ is $r\mu/4+\mathcal{E}(\mu)+O(\mu^{1/3+\varepsilon})$. This finishes the proof. \end{proof} Combining Propositions \ref{difference1} and \ref{difference2} immediately yields \begin{theorem} \label{reduction} \begin{align*} \left|\mathscr{N}_{\mathscr{D}}(\mu)-\mathcal{N}_{\mathcal{D}}^{u}(\mu)+\frac{1}{2}r\mu \right| &\leq \mathcal{N}_{\mathcal{D}}^{u}\left(\mu^+\right)- \mathcal{N}_{\mathcal{D}}^{u}\left(\mu^-\right)\\ &\quad +2\left(\mathcal{E}\left(\mu^-\right)-\mathcal{E}\left(\mu^+\right)\right)+O\left(\mu^{0.6} \right) \end{align*} with $\mu^+=\mu+C\mu^{-0.4}$ and $\mu^-=\mu-C\mu^{-0.4}$. \end{theorem} Thus we have transferred the study of $\mathscr{N}_{\mathscr{D}}(\mu)$ to those of $\mathcal{N}_{\mathcal{D}}^{u}(\mu)$ and $\mathcal{E}\left(\mu\right)$, which will be done in the following section. \section{Lattice Counting and Proof of Theorem \ref{specthm}}\label{LatticeSec} In this section we study the two associated lattice point problems, $\mathcal{N}_{\mathcal{D}}^{u}(\mu)$ and $\mathcal{E}\left(\mu\right)$, defined in \eqref{lattice-pro2} and \eqref{error-in-band} respectively. Theorem \ref{specthm} follows directly from Theorem \ref{reduction}, Theorem \ref{theorem:no-in-D} and Corollary \ref{no-in-band}. Recall that \[ \mathcal{N}_{\mathcal{D}}^{u}(\mu)=\#\left(\mu\mathcal{D}\cap \left\{(m,n-1/4) : (m,n)\in\mathbb{Z}^2\right\}\right) \] denotes the number of points in the shifted lattice $\mathbb{Z}^2-(0,1/4)$ which lie in $\mu\mathcal{D}$. The domain $\mathcal{D}$, defined in Section \ref{reduction-sec} (see Figure \ref{SymmH}), has area \begin{equation*} |\mathcal{D}|=\frac{1}{4}\left(R^2-r^2\right)\,.
\end{equation*} \begin{theorem}\label{theorem:no-in-D} Let $0\leq r < R$. If the boundary curve of $\mathcal{D}$ has a tangent at $J$ with rational slope (i.e.\ $\pi^{-1}\arccos(r/R)\in \mathbb{Q}$), then \[ \mathcal{N}_{\mathcal{D}}^{u}(\mu)=|\mathcal{D}|\mu^2-\frac{R}{2}\mu+O\left(\mu^{\theta}(\log \mu)^\Theta\right), \] where \begin{equation}\label{definition-theta} \theta=\frac{131}{208}\approx 0.6298\,,\qquad\Theta=\frac{18627}{8320}\approx 2.2388\,. \end{equation} In the case of an irrational slope the asymptotic formula remains true with the much weaker error term $O(\mu^{2/3})$. \end{theorem} \begin{remark} If the tangent at $J$ has rational slope (this includes the case $r=0$) the error term is of the same quality as the best published result in the circle problem due to Huxley \cite{Huxley:2003}. The linear term can be explained as follows. To every lattice point one can associate an axis-parallel square of area 1 centered at the lattice point. Every such square contributes to $|\mathcal{D}|\mu^2$ the area of its intersection with $\mu\mathcal{D}$. The points $(n,-1/4)$ with $|n|\leq R\mu$ are not counted in $\mathcal{N}_{\mathcal{D}}^{u}(\mu)$, but contribute the area $\frac12R\mu+O(1)$. \end{remark} Since the boundary of $\mathcal{D}$ contains points with infinite curvature (the points $J$, $P_2$ and $J'$, $P_2'$), standard results are not directly applicable. But see \cite{nowak:1980} for a lattice point counting problem in non-convex domains with cusps and unbounded curvature. Our proof uses the following deep result of M.N. Huxley. \begin{proposition}\label{proposition:Huxley} Let $M,N,C_1,C_2,C_3,C_4\geq 2$ be real parameters and $F:[1,2]\to\mathbb{R}$ a three times continuously differentiable function satisfying \[C_j^{-1}\leq|F^{(j)}(x)|\leq C_j\] for $j=1,2,3$. Denote by $\rho(x)=[x]-x+1/2$ the row-of-teeth function and by $\theta$, $\Theta$ the constants defined in (\ref{definition-theta}).
Then there is a constant $B$ which depends only on $C_1,C_2,C_3$ and $C_4$, such that \begin{equation*} \Big|\sum_{M\leq m\leq M_2\leq 2M}\rho\left(NF\left(\frac mM\right)\right)\Big|\leq B (MN)^{\theta/2}\log(MN)^{\Theta} \end{equation*} provided that \begin{equation}\label{condition:Huxley} C_4^{-1}(MN)^{\frac{141}{10}}\log(MN)^{\frac{1083}{280}}\leq M^{\frac{164}{5}}\leq C_4(MN)^{\frac{181}{10}}\log(MN)^{\frac{2907}{1400}}\,. \end{equation} \end{proposition} \begin{proof} This is Case A of Proposition 3 in \cite{Huxley:2003}. \end{proof} \begin{remark} In contrast to van der Corput's classical estimate (\ref{corput-second-derivative-estimate}), the proposition uses a condition on the first derivative. In our application $|F'(x)|$ becomes large if we count lattice points near the boundary point $\mu J$ along lines parallel to the axes. To avoid this we count them on lines parallel to the tangent. This is only possible if the tangent at $J$ has rational slope. \end{remark} \begin{proof}[Proof of Theorem \ref{theorem:no-in-D}] Slightly more generally, we count points in the shifted lattice $\mathbb{Z}^2-(0,c)$ with $c\in[0,1/2)$. The number $\mathcal{N}_{\mathcal{D}}^{u}(\mu)$ is twice the number of shifted lattice points in the positive quadrant, if points on the $y$-axis are counted with weight 1/2. Divide $\mathcal{D}\cap[0,\infty)^2$ into the domains \begin{align*} \mathcal{D}_1&:=\{(x,y)\in\mathcal{D}: 0\leq x\leq R, 0< y\leq G(r)\},\\ \mathcal{D}_2&:=\{(x,y)\in\mathcal{D}: 0\leq x\leq r,y>G(r)\}. \end{align*} See Figure \ref{decompositionD} for these domains.
\begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{decompositionD.eps} \caption{A decomposition of $\mathcal{D}$ in the first quadrant.} \label{decompositionD} \end{figure} The rational case of Theorem \ref{theorem:no-in-D} follows if we prove that \begin{align} \mathcal{N}_{\mathcal{D}_1}^{u}(\mu) &=|\mathcal{D}_1|\mu^2-(1/2-c)R\mu+L_{12}+O\left(\mu^{\theta}(\log \mu)^\Theta\right),\label{ND1}\\ \mathcal{N}_{\mathcal{D}_2}^{u}(\mu) &=|\mathcal{D}_2|\mu^2-L_{12}+O\left(\mu^{\theta}(\log \mu)^\Theta\right),\label{ND2} \end{align} where $L_{12}=\mu r\,\rho(\mu G(r)+c)$ describes the contribution of the line segment separating $\mathcal{D}_1$ from $\mathcal{D}_2$. While (\ref{ND1}) is true in general, we prove (\ref{ND2}) in the irrational case only with the weaker error term $O(\mu^{2/3})$. In $\mu\mathcal{D}_1$ we count lattice points along lines parallel to the $x$-axis. Denote by $H:[0,G(r)]\to[r,R]$ the inverse function of $G$ restricted to $[r,R]$. Since points on the $y$-axis are counted with weight 1/2, and $[x]+1/2=x+\rho(x)$, one finds \begin{align*} \mathcal{N}_{\mathcal{D}_1}^{u}(\mu)&=\sum_{0<n-c\leq \mu G(r)}\!\Big(\Big[\mu H\left(\frac{n-c}{\mu}\right)\Big]+\frac12\Big)\\ &=\sum_{\frac12<n\leq \mu G(r)+c}\!\mu H\left(\frac{n-c}{\mu}\right)+\!\!\sum_{0<n\leq \mu G(r)+c}\rho\left(\mu H\left(\frac{n-c}{\mu}\right)\right). \end{align*} Euler's summation formula \begin{align*} \sum_{a<n\leq b}f(n)=\int_a^b f(x)\,\textrm{d}x+\rho(b)f(b)-\rho(a)f(a)-\int_a^bf'(x)\rho(x)\,\textrm{d}x \end{align*} is used to calculate the first sum. The first integral gives the main term \begin{align*} \int_{1/2}^{\mu G(r)+c}\mu H\big(\frac{x-c}{\mu}\big)\,\textrm{d}x&=\mu^2\int_{(1/2-c)/\mu}^{G(r)}H(x)\,\textrm{d}x\\ &=|\mathcal{D}_1|\mu^2-(1/2-c)R\mu+O(1)\,. 
\end{align*} By the second mean value theorem and Lemma \ref{Lemma-function-H}, the second integral is bounded by \begin{align*} \int_{1/2}^{\mu G(r)+c}H'\big(\frac{x-c}{\mu}\big)\,\rho(x)\,\textrm{d}x\ll\sup_{(1/2-c)/\mu\leq y\leq G(r)}\left|H'(y)\right| \ll \mu^{1/3}\,. \end{align*} Together we obtain \begin{align*} \mathcal{N}_{\mathcal{D}_1}^{u}(\mu) &=|\mathcal{D}_1|\mu^2-(1/2-c)R\mu+L_{12}\\ &\qquad +\sum_{0<n\leq \mu G(r)+c}\rho\left(\mu H\left(\frac{n-c}{\mu}\right)\right)+O(\mu^{1/3})\,. \end{align*} For $n\leq V:=\mu^{\theta}$ the $\rho$-sum is estimated trivially. This contributes $O(\mu^\theta)$ to $\mathcal{N}_{\mathcal{D}_1}^{u}(\mu)$. The remaining sum is divided into sums of the form \begin{align}\label{psisum} \sum_{M\leq m\leq M'\leq 2M}\rho\left(\mu H\left(\frac{m-c}{\mu}\right)\right)\,, \end{align} where $M=2^jV\ll\mu$. To apply Proposition \ref{proposition:Huxley} set \[\mu H\left(\frac{m-c}{\mu}\right)=NF\left(\frac{m}{M}\right)\quad\mbox{with}\quad F(x)=\left(\frac{\mu}{M}\right)^{2/3}H\left(\frac M\mu x-\frac{c}{\mu}\right)\] and $N=M^{2/3}\mu^{1/3}$. By Lemma \ref{Lemma-function-H} $|F^{(j)}(x)|\asymp 1$ for $x\in[1,2]$ and $j=1,2,3$. The condition (\ref{condition:Huxley}) is satisfied since $V\leq M\ll \mu$. This yields the bound $(M^{5/3}\mu^{1/3})^{\theta/2}(\log\mu)^{\Theta}$ for (\ref{psisum}). Summing over $M=2^jV\ll\mu$ gives (\ref{ND1}). Note that this already completes the treatment of the special case $r=0$. If $r>0$ we have to deal with $\mathcal{N}_{\mathcal{D}_2}^{u}(\mu)$. First we assume that the tangent at $J$ has rational slope. Hence $G'(r)=-a/q<0$ with $a$, $q$ relatively prime.
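The rational-slope condition is easy to realize concretely. Assuming the explicit form of $G$ (so that $G'(t)=-\pi^{-1}(\arccos(t/R)-\arccos(t/r))$ on $[0,r]$, consistent with the derivative formulas in the proof of Corollary \ref{cor1}), the one-sided tangent slope at $J$ is $G'(r)=-\pi^{-1}\arccos(r/R)$, matching the condition $\pi^{-1}\arccos(r/R)\in\mathbb{Q}$ in Theorem \ref{theorem:no-in-D}. Choosing $r=R\cos(\pi a/q)$ therefore produces the slope $-a/q$ exactly; a minimal numerical sketch:

```python
import math

def tangent_slope(R, r):
    """One-sided slope G'(r-) at the corner J, assuming the explicit form
    G'(t) = -(acos(t/R) - acos(t/r)) / pi on [0, r] (so acos(r/r) = 0)."""
    return -math.acos(r / R) / math.pi

# Choosing r = R*cos(pi*a/q) makes the tangent slope the rational -a/q.
R = 2.0
a, q = 1, 3
r = R * math.cos(math.pi * a / q)   # here r = 1, i.e. the annulus with R = 2r
```

In particular the sample annulus with $R=2$, $r=1$ used in Figure \ref{hn} falls under the rational case, with $a/q=1/3$.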
The number of shifted lattice points in $\mu\mathcal{D}_2$ is equal to the number of shifted lattice points in the triangle $\mu\mathcal{T}$ minus the number of shifted lattice points in $\mu\mathcal{D}^*_2$, where \begin{align*} \mathcal{T}&:=\big\{(x,y)\in\mathbb{R}^2: 0\leq x<r, G(r)< y\leq G(r)+\frac aq(r-x)\big\},\\ \mathcal{D}^*_2&:=\big\{(x,y)\in\mathbb{R}^2: 0\leq x<r,G(x)< y\leq G(r)+\frac aq(r-x)\big\}. \end{align*} In the case of $\mu\mathcal{T}$ and $\mu\mathcal{D}_2^*$ it is easier to count points on the $y$-axis with full weight. Then \begin{equation}\label{label-ND2} \mathcal{N}_{\mathcal{D}_2}^{u}(\mu)=\mathcal{N}_{\mathcal{T}}^{u}(\mu)-\mathcal{N}_{\mathcal{D}_2^*}^{u}(\mu)-\mu (G(0)-G(r))/2+O(1)\,. \end{equation} In $\mu\mathcal{D}_2^*$ we count points along the lines $g_t: ax+q(y+c)=t$, $t\in \mathbb{R}$. Note that $g_t$ contains points of the shifted lattice $\mathbb{Z}^2-(0,c)$ if and only if $t$ is an integer. The line $g_t$ intersects the lower boundary curve of $\mu\mathcal{D}_2^*$ between $(0,\mu G(0))$ and $(\mu r,\mu G(r))$ in a unique point if $t\in[\mu q\beta,\mu q\gamma]$, where \[\beta=G(0)+\frac{c}{\mu}\,,\qquad\gamma=G(r)+\frac aq r+\frac{c}{\mu}\,.\] Define a function $T$ by writing the $x$-coordinate of the intersection point as $\mu T(t/(\mu q))$. The defining equation of $T$ reads \begin{align}\label{definition-T} G(T(y))+\frac aq T(y)+\frac{c}{\mu}=y \qquad(y\in[\beta,\gamma])\,. \end{align} The strictly increasing function $T$ maps $[\beta,\gamma]$ to $[0,r]$. For every $t_0\in\{0,\dots,q-1\}$ choose $x_0\in\{0,\dots,q-1\}$ such that $ax_0\equiv t_0\pmod q$. If $t\equiv t_0\pmod q$ the points of the shifted lattice on $g_t$ are the points $(m,n-c)$ with $m=x_0+qk$, $k\in\mathbb{Z}$. Hence the number of shifted lattice points in $\mu\mathcal{D}_2^*\cap g_t$ is equal to the number of integers $k$ such that $-x_0/q\leq k<(\mu T(t/(\mu q))-x_0)/q$.
Since the number of integers in $[a,b)$ is $[-a]-[-b]=b-a-\rho(-b)+\rho(-a)$ this number is \[ \frac{\mu}{q}T\left(\frac{t}{\mu q}\right)-\rho\left(-\frac\mu q T\Big(\frac{t}{\mu q}\Big)+\frac{x_0}q\right)+\rho\left(\frac{x_0}q\right)\,. \] This yields \begin{align}\label{ND2lable1} \mathcal{N}_{\mathcal{D}_2^*}^{u}(\mu)=\sum_{\mu q\beta<t\leq\mu q \gamma} \frac{\mu}{q}T\left(\frac{t}{\mu q}\right)-S_1+S_2\, \end{align} with \begin{align*} S_1&:=\sum_{t_0=0}^{q-1}\sum_{\mu \beta-t_0/q<k\leq\mu \gamma-t_0/q} \rho\left(-\frac\mu q T\left(\frac{k}{\mu }+\frac{t_0}{\mu q}\right)+\frac{x_0}q\right),\\ S_2&:=\sum_{t_0=0}^{q-1}\rho\left(\frac{x_0}{q}\right)\big(\mu(\gamma-\beta)+O(1)\big).\\ \end{align*} Using the relation \begin{align}\label{complete-psi-sum} \sum_{k=0}^{q-1}\rho\big((x+k)/q\big)=\rho(x), \end{align} $S_2$ simplifies to \[S_2=\mu(\gamma-\beta)/2+O(q)\,.\] Euler's summation formula applied to the first sum in (\ref{ND2lable1}) yields \[ |\mathcal{D}_2^*|\mu^2+\rho(\mu q\gamma)\frac{\mu}{q}r-\frac1{q^2}\int_{\mu q\beta}^{\mu q \gamma}T'\Big(\frac{x}{\mu q }\Big)\rho(x)\,\textrm{d}x\,. \] By the second mean value theorem, Lemma \ref{Lemma-function-T} and (\ref{label-T}) the last integral is bounded by \[ \int_{\mu q\beta}^{\mu q\gamma-1}T'\big(\frac{x}{\mu q}\big)\,\rho(x)\,\textrm{d}x+\int_{\mu q\gamma-1}^{\mu q\gamma}T'\big(\frac{x}{\mu q}\big)\,\textrm{d}x \] \[ \ll \sup_{\beta\leq y\leq\gamma-1/(\mu q)}T'(y)+\mu q\left(T(\gamma)-T\left(\gamma-\frac1{\mu q}\right)\right)\ll\mu^{1/3}\,. \] In the inner sum of $S_1$ we estimate the terms with $[\mu \gamma]-V<k\leq \mu \gamma$ trivially. 
The remaining sum is divided in sums of the form \[ \sum_{[\mu \gamma]-M'\leq t\leq [\mu \gamma]-M}\!\rho\left(-\frac\mu q T\left(\frac{k}{\mu}+\frac{t_0}{\mu q}\right)+\frac{x_0}{q}\right)=\!\sum_{M\leq m\leq M'\leq 2M}\!\rho\left(NF\left(\frac mM\right)\right), \] where $M=2^jV\ll \mu$, $N=\mu^{1/3}M^{2/3}q^{-1}$ and \[F(x)=-\Big(\frac{\mu}{M}\Big)^{2/3}T\left(\gamma-\frac{M}{\mu}x+\frac{c_0}\mu\right)+\frac{x_0}{qN}\qquad(x\in[1,2])\] with $c_0=t_0/q+[\gamma\mu]-\gamma\mu$. By Lemma \ref{Lemma-function-T} $|F^{(j)}(x)|\asymp 1$ for $j=1,2,3$. The condition (\ref{condition:Huxley}) of Proposition \ref{proposition:Huxley} is satisfied since $V\leq M\ll \mu$. This yields the bound $(\mu^{1/3}M^{5/3})^{\theta/2}(\log\mu)^{\Theta}$. Summing over $2^jV\ll\mu$ gives $S_2\ll\mu^{\theta}(\log\mu)^\Theta$. Together this proves \begin{align}\label{ND2-label2} \mathcal{N}_{\mathcal{D}_2^*}^{u}(\mu)=|\mathcal{D}_2^*|\mu^2+r\frac\mu q\rho(\mu q \gamma)+\frac{\mu}{2}(\gamma-\beta)+O(\mu^{\theta}(\log \mu)^{\Theta})\,. \end{align} To evaluate $\mathcal{N}_{\mathcal{T}}^{u}(\mu)$ we start from \[ \mathcal{N}_{\mathcal{T}}^{u}(\mu)=\sum_{0\leq n<\mu r}\left(\frac aq(\mu r-n)+\rho\left(\mu\gamma-\frac aq n\right)-\rho\left(\mu G(r)+c\right)\right). \] The last sum is $-L_{12}+O(1)$. Using (\ref{complete-psi-sum}) the second sum is $r\frac{\mu}{q}\rho(\mu q \gamma)+O(q)$. The following version of Euler's summation formula \begin{align*} \sum_{a\leq n<b}f(n)=\int_a^b f(x)\,\textrm{d}x-\rho(-b)f(b)+\rho(-a)f(a)-\int_a^bf'(x)\rho(x)\,\textrm{d}x \end{align*} is used to calculate the first sum. Its value is $|\mathcal{T}|\mu^2+\frac{a}{q}\frac\mu 2+O(1)$. Hence \[ \mathcal{N}_{\mathcal{T}}^{u}(\mu)=|\mathcal{T}|\mu^2+\frac aq\frac \mu2+r\frac\mu q\rho(\mu q\gamma)-L_{12}+O(1)\,. \] Together with (\ref{label-ND2}) and (\ref{ND2-label2}) this proves (\ref{ND2}) and completes the proof of Theorem \ref{theorem:no-in-D} in the rational case. 
In the irrational case we prove (\ref{ND2}) with the weaker error term $O(\mu^{2/3})$. Since points on the $y$-axis are counted with weight 1/2 an application of Euler's summation formula yields \begin{align*} \mathcal{N}^{u}_{\mathcal{D}_2}(\mu) &= \sum_{0<m\leq \mu r}\left(\Big[\mu G\left(\frac{m}{\mu}\right)+c\Big]-\Big[\mu G\left(r\right)+c\Big]\right)\\ &\qquad\qquad\qquad+\frac\mu 2\big(G(0)-G(r)\big)+O(1)\\ &=|\mathcal{D}_2|\mu^2-L_{12}+\sum_{1\leq m\leq \mu r}\rho\left(\mu G\left(\frac{m}{\mu}\right)+c\right)+O(1)\,. \end{align*} Van der Corput's second derivative estimate \cite{corput:1923} \begin{align}\label{corput-second-derivative-estimate} \sum_{M_1\leq m\leq M_2 }\rho(f(m))\ll\int_{M_1}^{M_2}|f''(x)|^{1/3}\,\textrm{d}x+\max_{x\in[M_1,M_2]}|f''(x)|^{-1/2} \end{align} gives the bound $O(\mu^{2/3})$ for the $\rho$-sum. \end{proof} \begin{corollary}\label{no-in-band} Let $0<r<R$. If the boundary curve of $\mathcal{D}$ has a tangent in $J$ with rational slope then $\mathcal{E}(\mu)$ defined in \eqref{error-in-band} satisfies \begin{equation}\label{assertion-no-in-band} \mathcal{E}(\mu)=O\left(\mu^{\theta}(\log \mu)^\Theta\right) \end{equation} with $\theta$ and $\Theta$ as in \eqref{definition-theta}. In case of an irrational slope the weaker bound $\mathcal{E}(\mu)=O\left(\mu^{2/3} \right)$ is true. \end{corollary} \begin{proof} In $\mathcal{E}(\mu)=\# \left(\mathcal{B}_{r\mu}\cap\mathbb{Z}^2 \right)-\frac{1}{4}r\mu$ we count unshifted lattice points. The number of unshifted lattice points in $\mu\mathcal D_2$ is given by (\ref{ND2}) with $c=0$. 
Thus in the rational case (\ref{assertion-no-in-band}) is equivalent to \begin{align}\label{ND2+} \mathcal{N}_{\mathcal{D}_2^+}^{u}(\mu)=|\mathcal{D}_2^+|\mu^2-L_{12}+O(\mu^\theta(\log \mu)^\Theta)\,, \end{align} where $\mathcal{N}_{\mathcal{D}_2^+}^{u}(\mu)$ denotes the number of unshifted lattice points in $\mu\mathcal{D}_2^+$ with \[\mathcal{D}_2^+:=\left\{(x,y)\in\mathbb{R}^2: 0\leq x\leq r,\ G(r)<y\leq G(x)+1/(4\mu)\right\}.\] Repeating the proof of (\ref{ND2}) with this slightly modified domain one obtains (\ref{ND2+}). The case of irrational slope is even easier. \end{proof} \begin{lemma}\label{Lemma-function-H} Let $0\leq r<R$. The inverse function $H:[0,G(r)]\to [r,R]$ of $G$ restricted to $[r,R]$ satisfies for j=1,2,3 \[H^{(j)}(y)\asymp y^{\frac23-j}\qquad(y\in (0,G(r)])\,.\] \end{lemma} \begin{proof} For $0\leq r\leq x\leq R$ the function $G(x)=Rg(x/R)$ satisfies \begin{align*} G'(x)&=-\pi^{-1}\arccos(x/R)\asymp(R-x)^{1/2}\,,\\ G''(x)&=(\pi R)^{-1}\left(1-\left(x/R\right)^2\right)^{-1/2}\asymp (R-x)^{-1/2}\,,\\ G'''(x)&=(\pi R^3)^{-1}x\left(1-(x/R)^2\right)^{-3/2}\asymp x(R-x)^{-3/2}\, \end{align*} and, with the positive and bounded function $h(x)=x(1-x^2)^{-1/2}\arccos(x)$, \[3G''(x)^2-G'(x)G'''(x)=(\pi R)^{-2}\textstyle{\left(1-\left(\frac{x}{R}\right)^2\right)^{-1}}\left(3+h\textstyle{\left(\frac xR\right)}\right)\asymp(R-x)^{-1}\,.\] Furthermore $f(x):=G(x)(R-x)^{-3/2}$ is positive with \[\lim_{x\to R}f(x)=R^{-5/2}2^{3/2}(3\pi)^{-1}>0\,.\] This proves \[G(x)\asymp(R-x)^{3/2}\,.\] Set $y=G(x)$. Then $R-x\asymp y^{2/3}$. For the inverse function $H$ one obtains \begin{align*} H'(y)&=\left(G'(x)\right)^{-1}\asymp(R-x)^{-1/2}\asymp y^{-1/3}\,,\\ H''(y)&=-\left(G'(x)\right)^{-3}G''(x)\asymp(R-x)^{-2}\asymp y^{-4/3}\,,\\ H'''(y)&=\left(G'(x)\right)^{-5}\left(3G''(x)^2-G'''(x)G'(x)\right)\asymp(R-x)^{-7/2}\asymp y^{-7/3}\,. \end{align*} \end{proof} \begin{lemma}\label{Lemma-function-T} Let $0<r<R$. 
The function $T:[\beta,\gamma]\to[0,r]$ defined in (\ref{definition-T}) satisfies for j=1,2,3 \[T^{(j)}(y)\asymp (\gamma-y)^{\frac23-j}\qquad(y\in[\beta,\gamma)).\] \end{lemma} \begin{proof} On $[0,r]$ the function $G$ is defined by $G(x)=Rg(x/R)-rg(x/r)$. Thus \begin{align*} G''(x)&=\textstyle{\frac1{\pi R}\left(1-\left(\frac xR\right)^2\right)^{-1/2}-\frac1{\pi r}\left(1-\left(\frac xr\right)^2\right)^{-1/2}}\asymp (r-x)^{-1/2}\,,\\ G'''(x)&=\textstyle{\frac x\pi\left(\frac1{R^3}\left(1-\left(\frac xR\right)^2\right)^{-3/2}-\frac1{r^3}\left(1-\left(\frac xr\right)^2\right)^{-3/2}\right)}\asymp x(r-x)^{-3/2}\, \end{align*} and \[G'(x)-G'(r)=-\int_x^rG''(u)\,\textrm{d}u\asymp (r-x)^{1/2}\,.\] Since $G''(x)<0$ the function \[f(x):=\left(G(x)-G(r)-G'(r)(x-r)\right)(r-x)^{-3/2}\] is strictly negative with $\lim_{x\uparrow r}f(x)=-2^{3/2}/(3\sqrt{r})>-\infty$. This proves \begin{align}\label{label-asymptotic-G} G(x)-G(r)-G'(r)(x-r)\asymp (r-x)^{3/2}\,. \end{align} Set $x=T(y)$. Subtracting (\ref{definition-T}) from the equation defining $\gamma$ one obtains \[\gamma-y=G(r)-G(x)-G'(r)(r-x)\,.\] With (\ref{label-asymptotic-G}) this yields $\gamma-y\asymp(r-x)^{3/2}$ and \begin{align}\label{label-T} r-T(y)=r-x\asymp(\gamma-y)^{2/3}\,. \end{align} Differentiating (\ref{definition-T}) one obtains \begin{align*} T'(y)&=\left(G'(x)-G'(r)\right)^{-1}\asymp (r-x)^{-1/2}\asymp(\gamma-y)^{-1/3},\\ T''(y)&=-\left(G'(x)-G'(r)\right)^{-3}G''(x)\asymp(r-x)^{-2}\asymp(\gamma-y)^{-4/3},\\ T'''(y)&=\left(G'(x)-G'(r)\right)^{-5}(G''(x))^2\big(3+F(x)\big \asymp(\gamma-y)^{-7/3}. \end{align*} Here $F(x)=-G'''(x)(G'(x)-G'(r))(G''(x))^{-2}$ is a positive bounded function. \end{proof}
{ "timestamp": "2019-07-09T02:26:47", "yymm": "1907", "arxiv_id": "1907.03669", "language": "en", "url": "https://arxiv.org/abs/1907.03669", "abstract": "We study the zeros of cross-product of Bessel functions and obtain their approximations, based on which we reduce the eigenvalue counting problem for the Dirichlet Laplacian associated with a planar annulus to a lattice point counting problem associated with a special domain in $\\mathbb{R}^2$. Unlike other lattice point problems, the one arisen naturally here has interesting features that lattice points under consideration are translated by various amount and the curvature of the boundary is unbounded. By transforming this problem into a relatively standard form and using classical van der Corput's bounds, we obtain a two-term Weyl formula for the eigenvalue counting function for the planar annulus with a remainder of size $O(\\mu^{2/3})$. If we additionally assume that certain tangent has rational slope, we obtain an improved remainder estimate of the same strength as Huxley's bound in the Gauss circle problem, namely $O(\\mu^{131/208}(\\log \\mu)^{18627/8320})$. As a by-product of our lattice point counting results, we readily obtain this Huxley-type remainder estimate in the two-term Weyl formula for planar disks.", "subjects": "Spectral Theory (math.SP); Classical Analysis and ODEs (math.CA); Number Theory (math.NT)", "title": "The Weyl formula for planar annuli", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9865717460476701, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.7089449186330989 }
https://arxiv.org/abs/1701.02420
Polynomially Interpolated Legendre Multiplier Sequences
We prove that every multiplier sequence for the Legendre basis which can be interpolated by a polynomial has the form $\{h(k^2+k)\}_{k=0}^{\infty}$, where $h\in\mathbb{R}[x]$. We also prove that a non-trivial collection of polynomials of a certain form interpolate multiplier sequences for the Legendre basis, and we state conjectures on how to extend these results.
\section{Introduction}\label{s:Introduction} Over the past decade, there has been an effort to characterize multiplier sequences acting on various orthogonal bases for $\mathbb{R}[x]$ (see \cite{bates}, \cite{bdfu}, \cite{bc}, \cite{bo}, \cite{nreup}, \cite{fhms}, \cite{FP}, \cite{P}). The present work focuses on the Legendre polynomial basis $\{P_k\}_{k=0}^{\infty}$. Recall that, for each nonnegative integer $k$, the $k$-th Legendre polynomial $P_k$ can be defined as the polynomial solution to Legendre's differential equation $$ (x^2-1)y'' + 2 x y' - k(k+1) y= 0, $$ normalized so that $P_k(1)=1$. As such, our work is a continuation of the investigations carried out in \cite{bdfu} and \cite{fhms}. We now recall some of the relevant terminology in this subject area that will be used freely throughout the paper. A polynomial is called {\it hyperbolic} if all of its zeros are real. An operator on $\mathbb{R}[x]$ that maps the class of hyperbolic polynomials into itself is called a {\it hyperbolicity preserver}. If $T$ is a linear operator on $\mathbb{R}[x]$ which is diagonal with respect to the Legendre basis, and if $T$ is also a hyperbolicity preserver, then the corresponding eigenvalue sequence $\{\gamma_k\}_{k=0}^{\infty}$ for which $T[P_k(x)] = \gamma_k P_k(x)$ is called a {\it multiplier sequence for the Legendre basis}. Although the bulk of the present work has to do with hyperbolicity preserving operators, there will also be a brief discussion (in Section \ref{s:collection}) of complex zero decreasing operators and sequences which are defined as follows. If an operator $T$ on $\mathbb{R}[x]$ has the property that $Z_c(T[p])\leq Z_c(p)$ for all polynomials $p$, where $Z_c(f)$ denotes the number of nonreal zeros of $f$, then $T$ is called a {\it complex zero decreasing operator}. 
If $T$ is a linear operator on $\mathbb{R}[x]$ which is diagonal with respect to the Legendre basis and $T$ is a complex zero decreasing operator, then the corresponding eigenvalue sequence is called a {\it complex zero decreasing sequence for the Legendre basis}. It is straightforward to see that every complex zero decreasing operator is a hyperbolicity preserving operator. There are, however, hyperbolicity preserving operators which are not complex zero decreasing operators. Finally, we are obliged to note that the function which is identically zero plays a peculiar role in this theory. As usual, we will declare that this function has only real zeros and define $Z_c(0)=0$. Central to the theory of multiplier sequences is the {\it Laguerre-P\'olya Class} of functions, denoted $\mathcal{L-P}$, which can be defined as follows. A function belongs to the class $\mathcal{L-P}$ if and only if it is a uniform limit, on compact subsets of $\mathbb{C}$, of polynomials with only real zeros. Equivalently, a real entire function $\varphi\in \mathcal{L-P}$, if and only if it can be expressed in the form $$\varphi(x) = cx^me^{-ax^2 + bx}\prod_{k=1}^\omega\left(1+\frac{x}{x_k}\right)e^{\frac{-x}{x_k}} \text{\hspace{5 mm}} (0\le\omega\le\infty),$$ where $b,c,x_k \in \mathbb{R}$, $m$ is a non-negative integer, $a\ge 0$, $x_k\neq 0$, and $\sum_{k=1}^\omega\frac{1}{x_k^2}<\infty$. \begin{prop}\cite[p. 242]{CCturan}\label{TI} Let $\varphi(x)=\sum_{k=0}^\infty \frac{\gamma_k}{k!}x^k\in\mathcal{L-P}$. Then the Tur\'{a}n inequalities hold for the Taylor coefficients of $\varphi$; that is, \begin{equation} \gamma_k^2-\gamma_{k-1}\gamma_{k+1} \ge 0 \text{, \hspace{5mm} } k=1, 2, 3, \ldots. \end{equation} \end{prop} We refer the reader to \cite{CCsurvey} for a comprehensive treatment of the Laguerre-P\'olya Class. 
For our purposes, we shall only need the fact that an even function in $\mathcal{L-P}$ must have coefficients (of even powers of the variable) which alternate in sign; this follows immediately from Proposition \ref{TI}. In this paper, we focus mainly on sequences that can be interpolated by polynomials. That is to say, sequences of the form $\{p(k)\}_{k=0}^{\infty}$ where $p\in\mathbb{R}[x]$. In Theorem \ref{x^2+x} of Section \ref{s:Form and Order} we prove that any multiplier sequence for the Legendre basis which can be interpolated by a polynomial must have the form $\seq{h(k^2+k)}$, where $h\in\mathbb{R}[x]$. In Section \ref{s:collection}, we prove that a certain type of differential operator is hyperbolicity preserving (Theorem \ref{fall}) and use that result to establish the existence of a new and non-trivial collection of multiplier sequences for the Legendre basis (Corollary \ref{fallseq}). The paper concludes with two conjectures concerning Legendre multiplier sequences interpolated by polynomials of even degree. \section{Preliminaries}\label{s:Preliminaries} It is known that any linear operator $T$ on $\mathbb{R}[x]$ can be represented as a differential operator with coefficients $S_k\in\mathbb{R}[x]$, i.e., $$ T = \sum_{k=0}^{\infty} S_k(x) D^k. $$ The goal of this section is to develop a formula for $S_k(0)$ in the case where $T$ is diagonal with respect to the Legendre basis and has an eigenvalue sequence that can be interpolated by a polynomial. This formula will be useful in proving Theorem \ref{thm:noodd} in the next section, which states that a polynomially interpolated multiplier sequence for the Legendre basis must be interpolated by a polynomial of even degree. This formula will be useful in proving Theorem \ref{thm:noodd} in the next section, which states that if a multiplier sequence for the Legendre basis is interpolated by a polynomial $p$, then $p$ must be of even degree. 
\begin{lem} \label{Q0lem} For any differential operator $$ T = \sum_{k=0}^{\infty} Q_k(x) D^k $$ on $\mathbb{R}[x]$, the coefficient polynomials evaluated at 0 can be computed by $$ Q_n(0) = \frac{1}{n!} \left[T[x^n]\right]_{x=0} \qquad (n=0, 1, 2, \dots). $$ \end{lem} \begin{proof} The result follows from calculating $$ T[x^n] = Q_0(x) x^n + n x^{n-1} Q_1(x) + \cdots + n! Q_n(x), $$ and evaluating this expression at $x=0$. \end{proof} We now specialize the previous lemma to operators which are diagonal with respect to the Legendre polynomial basis. Throughout the rest of this paper we use the Pocchammer symbol to denote the rising factorial $$ (\alpha)_k = \alpha(\alpha+1)(\alpha+2)\cdots (\alpha+k-1), \qquad (\alpha\in\mathbb{R}, k\in\mathbb{N}) $$ and follow the convention that $(\alpha)_0=1$. \begin{lem}\label{cor1} Let $T$ be a linear operator on $\mathbb{R}[x]$ that satisfies $T[P_n]=\gamma_n P_n$, where $\{\gamma_n\}_{n=0}^{\infty}$ is a sequence of real numbers, and let the differential operator representation of $T$ be given by $$ T = \sum_{k=0}^{\infty} S_k(x) D^k. $$ Then \begin{equation}\label{even} S_{2m+1}(0)=0 \qquad (m=0, 1, 2, \dots) \end{equation} and \begin{equation}\label{s2m1} S_{2m}(0) = \frac{1}{2\cdot 4^m m!} \sum_{k=0}^{m} \binom{m}{k}\frac{(4k + 1) \gamma_{2k} (-1)^{k}}{(k+1/2)_{m+1}}\qquad (m=0, 1, 2, \dots). \end{equation} \end{lem} \begin{proof} We apply Lemma \ref{Q0lem} to the expansion of $x^n$ in terms of Legendre polynomials (see \cite[p. 181]{R}), $$ x^n = \frac{n!}{2^n} \sum_{k=0}^{[n/2]} \frac{(2n - 4k + 1) P_{n-2k}(x)}{k! (3/2)_{n-k}} $$ to obtain $$ S_n(0) = \frac{1}{2^n} \sum_{k=0}^{[n/2]} \frac{(2n - 4k + 1) \gamma_{n-2k} P_{n-2k}(0)}{k! (3/2)_{n-k}}. $$ The relation (see \cite[p. 158]{R}) $$ P_{2k+1}(0) = 0 \qquad (k=0, 1, 2, \dots) $$ shows that $S_{2m+1}(0) = 0$ for all $m$. The relation (see \cite[p. 
158]{R}) $$ P_{2k}(0) = \frac{(-1)^k(1/2)_k}{k!} \qquad (k=0, 1, 2, \dots) $$ yields \begin{align*} S_{2m}(0) &= \frac{1}{4^m} \sum_{k=0}^{m} \frac{(4m - 4k + 1) \gamma_{2m-2k} (-1)^{m-k}(1/2)_{m-k}}{k! (3/2)_{2m-k} (m-k)!}\\ &=\frac{1}{4^m m!} \sum_{k=0}^{m} \binom{m}{k} (4k + 1) \gamma_{2k} (-1)^{k} \frac{(1/2)_{k}}{(3/2)_{m+k}} \end{align*} which can be re-written in the form (\ref{s2m1}) as desired. \end{proof} The fact that the odd indexed coefficient polynomials vanish at zero (equation (\ref{even})) will be useful to us in the next section. Continuing to specialize, we now focus on the even indexed coefficient polynomials in the case where the eigenvalue sequence $\{\gamma_k\}_{k=0}^{\infty}$ can be interpolated by a polynomial $p$. In this case, we note that the quantity $(4k + 1) \gamma_{2k}= (4k+1) p(2k)$ that appears in equation (\ref{s2m1}) is a polynomial in $k$. This motivates us to find a closed form for the sum \begin{equation}\label{sigma} \sigma_{m,n} = \sum_{k=0}^{m} \binom{m}{k} \frac{k^n(-1)^k}{(k+1/2)_{m+1}}. \end{equation} \begin{lem}\label{sigmalem1} For $m,n=0, 1, 2, \dots$, the quantity $\sigma_{m,n}$ from equation (\ref{sigma}) satisfies $$ \sigma_{m,n} = \left[ \theta^n \frac{2}{(3/2)_m}{}_2F_1(-m, 1/2; m+3/2; x) \right]_{x=1} \qquad \left(\theta := x \frac{d}{dx}\right). $$ \end{lem} \begin{proof} First note the relation \begin{equation}\label{e1} \frac{(-m)_k}{k!} = \begin{cases} \binom{m}{k} (-1)^k & \qquad k=0, 1, 2, \dots, m,\\ 0 & \qquad k=m+1, m+2, \dots\\ \end{cases} \end{equation} which follows directly from equation (3) of \cite[p. 58]{R} and the definition of the rising factorial. Furthermore, from (see \cite[p. 
23, equation (7)]{R}) \begin{equation}\label{gampoc} (\alpha)_n = \frac{\Gamma(\alpha+n)}{\Gamma(\alpha)} \qquad (\alpha \notin \{0, -1, -2, \dots \}) \end{equation} we have, for $k\in\{0, 1, 2, \dots m\}$, \begin{equation}\label{e2} \frac{2\cdot (1/2)_k}{(m+3/2)_k (3/2)_m} = \frac{\Gamma(k+1/2)}{\Gamma(m+k+3/2)} = \frac{1}{(k+1/2)_{m+1}}. \end{equation} Combining equations (\ref{e1}) and (\ref{e2}), we see that \begin{align*} \sum_{k=0}^{m} \binom{m}{k} \frac{(-1)^k x^k}{(k+1/2)_{m+1}} &= \frac{2}{(3/2)_m} \sum_{k=0}^{m} \frac{(-m)_k (1/2)_k x^k}{ k!(m+3/2)_k }\\ &= \frac{2}{(3/2)_m}F(-m, 1/2; m+3/2; x), \end{align*} where $F(a,b;c;x)$ is the hypergeometric function (see \cite[p. 45]{R}). The conclusion that $$ \sigma_{m,n} = \left[ \theta^n \frac{2}{(3/2)_m}{}_2F_1(-m, 1/2; m+3/2; x) \right]_{x=1} \qquad \left(\theta := x \frac{d}{dx}\right) $$ now follows from the fact that $\theta x^k = k x^k$, which holds for all $k$. \end{proof} The next lemma will be useful in obtaining another explicit formula for $\sigma_{m,n}$. \begin{lem}\label{Q_k} The coefficient polynomials $Q_k(x)$ in the expansion $$ (xD)^n = \sum_{k=0}^{n} Q_k(x) D^k $$ are given by $Q_k(x) = S(n,k) x^k,$ where \begin{equation}\label{c_k} S(n,k) = \frac{1}{k!}\sum_{j=0}^k \binom{k}{j} (-1)^{k-j}j^n \end{equation} are the Stirling numbers of the second kind. \end{lem} \begin{proof} This result is known, but we offer a simple proof here based on a formula for $Q_k$ for a linear operator $T:\mathbb{C}[x]\to\mathbb{C}[x]$ \cite[p. 106, Prop. 216]{C}: \begin{equation*} Q_k(x) = \frac{1}{k!}\sum_{j=0}^k \binom{k}{j}T[x^j](-x)^{k-j} \end{equation*} substituting $T=(xD)^n$ yields \begin{equation*} Q_k(x) = \frac{x^k}{k!}\sum_{j=0}^k \binom{k}{j} (-1)^{k-j}j^n, \end{equation*} and the result follows. \end{proof} We now use the previous lemma along with some properties of the hypergeometric function to obtain another formula for $\sigma_{m,n}$. 
\begin{lem}\label{lemsigma2} For nonnegative integers $m$ and $n$ which satisfy $2m\ge n-1$, the quantity $\sigma_{m,n}$ from equation (\ref{sigma}) can be written in the form $$ \sigma_{m,n} = \frac{2\cdot 4^m (2m-n)!}{m!(4m+1)!!} p_n(m), $$ where \begin{equation}\label{p_n} p_n(m)=\sum_{k=0}^n S(n,k)(-m)_k(1/2)_k(2m-n+1)_{n-k} \end{equation} is polynomial of degree $n$ in the variable $m$. Here, $S(n,k)$ is the Stirling number of the second kind as defined in equation (\ref{c_k}). \end{lem} \begin{proof} We first use Lemma \ref{sigmalem1} and Lemma \ref{Q_k} together with the relations (\cite[p. 49, l. 7]{R}) $$ F(a,b;c;1) = \frac{\Gamma(c) \Gamma(c-a-b)}{\Gamma(c-a) \Gamma(c-b)} \qquad (\text{Re}(c-a-b)>0) $$ and (\cite[p. 69, ex. 1]{R}) $$ \frac{d}{dx} F(a, b;c;x) = \frac{ab}{c} F(a+1,b+1;c+1,x) $$ to obtain \begin{align*} \sigma_{m,n} &= \frac{2}{(3/2)_m} \left[\sum_{k=0}^n S(n,k)x^k D^kF(-m,1/2;m+3/2,x)\right]_{x=1}\\ &= \frac{2}{(3/2)_m} \left[\sum_{k=0}^n S(n,k) {x^k} \frac{(-m)_k(1/2)_k}{(m+3/2)_k}F\left(-m+k,\frac{1}{2}+k;m+\frac{3}{2}+k,x\right)\right]_{x=1}\\ &= \frac{2}{(3/2)_m} \sum_{k=0}^n S(n,k) \frac{(-m)_k(1/2)_k}{(m+3/2)_k} \frac{\Gamma(m+3/2+k)\Gamma(2m-k+1)}{\Gamma(2m+3/2)\Gamma(m+1)}, \end{align*} which is valid for $2m>n-1$. Next, we use the property (\ref{gampoc}) several times to obtain, for $2m\ge n-1$, \begin{align*} \sigma_{m,n} &= \frac{2}{m!}\frac{\Gamma(m+3/2)\Gamma(2m-n+1)}{(3/2)_m\Gamma(2m+3/2)} \sum_{k=0}^n S(n,k) (-m)_k(1/2)_k(2m-n+1)_{n-k}\\ &=\frac{2}{m!}\frac{\Gamma(m+3/2)\Gamma(2m-n+1)}{(3/2)_m\Gamma(2m+3/2)} p_n(m)\\ &=\frac{2}{m!}\frac{(2m-n)!}{(3/2)_m (m+3/2)_m} p_n(m)\\ &=\frac{2}{m!}\frac{(2m-n)!}{(3/2)_{2m} } p_n(m)\\ &=\frac{2\cdot 4^m (2m-n)!}{m!(4m+1)!!} p_n(m) \end{align*} where $p_n(m)$ is defined as in equation (\ref{p_n}). 
\end{proof} With these results at hand, we can now give explicit expressions for $S_{2m}(0)$ in the case where the eigenvalue sequence $\{\gamma_k\}_{k=0}^{\infty}$ can be interpolated by the monomial $p(k)=k^n$. \begin{lem}\label{kj} If $\gamma_k = k^n$ then $S_{2m}(0)$ as defined in equation (\ref{s2m1}) is given by \begin{equation}\label{s2m3} S_{2m}(0) = \frac{2^n (2m-n-1)!}{(m!)^2 (4m+1)!!} \left[4 p_{n+1}(m)+ (2m-n)p_n(m)\right] \qquad (m\ge n), \end{equation} where $p_j(m)$ is defined as in equation (\ref{p_n}). \end{lem} \begin{proof} Fix a nonnegative integer $n$ and let $\gamma_k=k^n$. Equations (\ref{s2m1}) and (\ref{sigma}) yield \begin{align*} S_{2m}(0) &= \frac{1}{2\cdot 4^m m! } \sum_{k=0}^{m} \binom{m}{k}\frac{(4k + 1) (2k)^n (-1)^{k}}{(k+1/2)_{m+1}}\\ &= \frac{2^{n-1}}{ 4^{m} m!}\left(4\cdot \sigma_{m,n+1} + \sigma_{m,n}\right). \end{align*} An application of Lemma \ref{lemsigma2}, some simplification, and comparison with equation (\ref{p_n}) then gives $$ S_{2m}(0) = \frac{2^{n}}{(m!)^2(4m+1)!!}\left(4\cdot(2m-n-1)! p_{n+1}(m) + (2m-n)!p_n(m)\right), $$ which is valid for $m\ge n$, and the result follows. \end{proof} We have now arrived at the goal of this section, which was to develop a useful formula for $S_{2m}(0)$ in the case where the eigenvalue sequence $\{\gamma_k\}_{k=0}^{\infty}$ can be interpolated by a polynomial $p(k) = \sum_{j=0}^{n} a_j k^j$. \begin{prop} If $\gamma_k = \sum_{j=0}^{n} a_j k^j$ then, for all $m\ge n$, the quantity $S_{2m}(0)$ defined in equation (\ref{s2m1}) satisfies \begin{equation}\label{s2m4} S_{2m}(0) = \frac{(2m-n-1)!}{(m!)^2 (4m+1)!!} P(m), \end{equation} where \begin{equation}\label{P(m)} P(m) = \sum_{j=0}^{n} a_j 2^j (2m-n)_{n-j} [4 p_{j+1}(m)+ (2m-j)p_j(m)] \end{equation} and $p_j(m)$ is defined as in equation (\ref{p_n}). \end{prop} \begin{proof} Let $\gamma_k=\sum_{j=0}^{n} a_j k^j$. Equations (\ref{s2m1}) and (\ref{sigma}) yield \begin{align*} S_{2m}(0) &= \frac{1}{2\cdot 4^m m! 
} \sum_{k=0}^{m} \binom{m}{k}\frac{(4k + 1) \sum_{j=0}^{n}a_j(2k)^j (-1)^{k}}{(k+1/2)_{m+1}}\\ &= \sum_{j=0}^{n}a_j\left[\frac{1}{2\cdot 4^m m! }\sum_{k=0}^{m} \binom{m}{k}\frac{(4k + 1) (2k)^j (-1)^{k}}{(k+1/2)_{m+1}}\right]. \end{align*} The quantity in the square brackets is the $S_{2m}(0)$ corresponding to $\gamma_k=k^j$, so Lemma \ref{kj} yields \begin{align*} S_{2m}(0) &=\sum_{j=0}^{n}a_j\frac{2^j (2m-j-1)!}{(m!)^2 (4m+1)!!} \left[4 p_{j+1}(m)+ (2m-j)p_j(m)\right], \end{align*} which is valid for $m\geq n$. Some manipulation of the factorials now gives the desired result. \end{proof} \section{Form and Order}\label{s:Form and Order} In this section, we seek to determine what form a polynomial $p$ must have if the sequence $\{p(k)\}_{k=0}^{\infty}$ is a multiplier sequence for the Legendre basis. The conditions that we determine will, in turn, impose conditions on the order of the differential operator associated with the given sequence. We begin by showing that any such polynomial must have even degree. This settles open question (2) posed in section 5 of \cite{bdfu}, a paper of the second author. \begin{thm} \label{thm:noodd} Suppose that $p \in \mathbb{R}[x]$ is a polynomial of odd degree. Then $\seq{p(k)}$ is not a multiplier sequence for the Legendre basis. \end{thm} \begin{proof} We argue by contradiction. Let $T$ be the operator corresponding to the sequence interpolated by the polynomial $p(x)=\sum_{j=0}^n a_jx^j$ with $n$ odd and $a_n \neq 0$, and suppose that $\{p(k)\}_{k=0}^{\infty}$ is a multiplier sequence for the Legendre basis. The symbol of $T$ is given by $$ G_T(x,y)= \sum_{k=0}^{\infty} \frac{(-1)^k T[x^k] y^k}{k!} = T[e^{-x y}], $$ where the operator $T$ acts on $e^{-xy}$ as a function of $x$ alone (see \cite{bb} for a comprehensive treatment of the symbol of an operator as it relates to hyperbolicity preserving operators). 
Suppose that the differential operator representation of $T$ is given by $$ T = \sum_{k=0}^{\infty} S_k(x) D^k \qquad \left(D = \frac{d}{dx}\right). $$ The symbol $G_T$ is then given by $$ G_T(x,y) = e^{-xy} \sum_{k=0}^{\infty} S_k(x) (-1)^k y^k $$ As noted in \cite{bo} and \cite{fhms}, we can act on this expression (as a function of $x$ alone) by the classical multiplier sequence $\{1, 0, 0, 0, \dots\}$ and the resulting function $$ G_T(0,y) = \sum_{k=0}^{\infty} S_k(0) (-1)^k y^k $$ must belong to the Laguerre-P\'olya class. By equation (\ref{even}), each term with odd index is zero. Thus, $$ G_T(0,y) = \sum_{k=0}^{\infty} S_{2m}(0) y^{2m} $$ is an even function which belongs to the Laguerre-P\'olya class. It follows, by Proposition \ref{TI}, that the sequence $\{S_{2m}(0)\}_{m=0}^{\infty}$ must alternate in sign (a fact that we aim to contradict). By equations (\ref{s2m4}) and (\ref{P(m)}), for $m\ge n$, $$ S_{2m}(0) = \frac{(2m-n-1)!}{(m!)^2 (4m+1)!!} P(m), $$ where $$ P(m) = \sum_{j=0}^{n} a_j 2^j (2m-n)_{n-j} [4 p_{j+1}(m)+ (2m-j)p_j(m)], $$ and $p_j(m)$ is defined as in equation (\ref{p_n}). Note that $P(m)$ is a polynomial in $m$ which is either identically zero, or is not identically zero. In the latter case, the sequence $\{P(m)\}_{m=0}^{\infty},$ and therefore the sequence $\{S_{2m}(0)\}_{m=0}^{\infty},$ eventually has constant sign, which would give us our desired contradiction. We will finish the proof by demonstrating that $P(m)$ is not identically zero. Indeed, \begin{equation} \label{eq:PPneval} P\left(\frac{n}{2}\right)=a_n 2^n\, 4 p_{n+1}\left(\frac{n}{2}\right) = a_n 2^{n+2}\, S(n+1, n+1)\left(\frac{-n}{2}\right)_{n+1}\left(\frac{1}{2}\right)_{n+1}, \end{equation} which is non-zero due to the fact that we have assumed $n$ is odd, $a_n\neq 0$ and, as is well-known, Stirling numbers of the second kind satisfy $S(n+1,n+1)=1$. 
\end{proof} It is worthwhile to note that equation (\ref{s2m4}) is only valid for $m>n$ and our choice of $m=n/2$ in the preceding proof does not satisfy this. However, this fact is irrelevant to the argument. All that is required is that the polynomial $P(m)$ not be identically zero and this can be achieved by considering any values for $m$ that we choose. Next, we want to determine conditions under which an even degree polynomial may interpolate a multiplier sequence for the Legendre basis. \begin{thm}\label{x^2+x} If $\{ \gamma_k \}_{k=0}^{\infty}$ is a multiplier sequence for the Legendre basis which is interpolated by a polynomial $p$, then $p(x)=h(x^2+x)$ for some polynomial $h(x)\in \mathbb{R}[x]$. \end{thm} \begin{proof} By Theorem \ref{thm:noodd}, $p$ must have even degree. Noting that \begin{equation} b_k(x) = \begin{cases} (x^2+x)^{k/2} & \qquad k \text{ even},\\ x^k & \qquad k \text{ odd}.\\ \end{cases} \end{equation} forms a basis for $\mathbb{R}[x]$, we can expand $p$ in terms of this basis to obtain $$ p(x) = h(x^2+x) + q(x), $$ where $\deg h = (\deg p)/2$ and $q$ is an odd function in $\mathbb{R}[x]$. We claim that $q$ must be identically zero. By way of contradiction, suppose $q$ is not identically zero. Then $q$ is a polynomial of odd degree. Using equation (\ref{s2m1}) to calculate $S_{2m}(0)$, we have \begin{align*} S_{2m}(0) &= \frac{1}{2\cdot 4^m m!} \sum_{k=0}^{m} \binom{m}{k}\frac{(4k + 1) p(2k) (-1)^{k}}{(k+1/2)_{m+1}}\\ &= \frac{1}{2\cdot 4^m m!} \sum_{k=0}^{m} \binom{m}{k}\frac{(4k + 1) \left[h((2k)^2+(2k))+q(2k)\right] (-1)^{k}}{(k+1/2)_{m+1}}. \end{align*} Splitting up the sum then gives \begin{align} S_{2m}(0) &= \frac{1}{2\cdot 4^m m!} \sum_{k=0}^{m} \binom{m}{k}\frac{(4k + 1) h((2k)^2+(2k)) (-1)^{k}}{(k+1/2)_{m+1}} \label{sumdecomp1}\\ &\hskip .2 in + \frac{1}{2\cdot 4^m m!} \sum_{k=0}^{m} \binom{m}{k}\frac{(4k + 1) q(2k) (-1)^{k}}{(k+1/2)_{m+1}} \label{sumdecomp2}. 
\end{align} The sum in (\ref{sumdecomp1}) calculates the polynomial coefficients, evaluated at zero, of the differential operator $T_h$ corresponding to the sequence $\{h(k^2+k) \}_{k=0}^{\infty}$. Since $T_h$ is a finite order differential operator, this sum must vanish for all sufficiently large $m$. Thus, for all such $m$, only the sum (\ref{sumdecomp2}) contributes to $S_{2m}(0)$, i.e., $$ S_{2m}(0) = \frac{1}{2\cdot 4^m m!} \sum_{k=0}^{m} \binom{m}{k}\frac{(4k + 1) q(2k) (-1)^{k}}{(k+1/2)_{m+1}}. $$ Note that these are precisely the values of the coefficient polynomials, evaluated at zero, corresponding to the odd degree interpolated sequence $\seq{q(k)}$. The result now follows from the proof of Theorem \ref{thm:noodd}. \end{proof} As an immediate consequence, we get a new proof of the following result which was proved in \cite{fhms}. \begin{cor} If $\{k^2+bk+c\}$ is a Legendre MS, then $b=1$. \end{cor} Theorem \ref{x^2+x} also allows us to conclude that, as a differential operator, any polynomially interpolated multiplier sequence for the Legendre basis must have finite order. \begin{cor} Suppose $\{p(k)\}_{k=0}^{\infty}$ is a multiplier sequence for the Legendre basis, where $p\in\mathbb{R}[x]$, and let $T$ be the corresponding linear operator defined by $T[P_n(x)] = p(n)P_n(x)$ for all $n$. Then the differential operator representation of $T$ has only finitely many terms: $$ T = \sum_{k=0}^{m} S_k(x) D^k. $$ \end{cor} \begin{proof} Let $h$ be a real polynomial for which $p(x) = h(x^2+x)$ and write $$ h(x) = a_0+ a_1 x + \cdots +a_n x^n. $$ Let $\delta$ be the differential operator from Legendre's differential equation $$ \delta = (x^2-1)D^2+ 2 x D. $$ Then $\delta P_k(x) = (k^2+k)P_k(x)$ for all $k$ and it follows that the operator $T$ can be written as $$ T = h(\delta) = a_0 + a_1 \delta + \cdots a_n \delta^n. $$ From this we see that the order of $T$ is at most $2n=\deg p$. 
\end{proof} \begin{rem} The previous corollary complements a result of Miranian \cite{Miranian}, which implies that a multiplier sequence for the Legendre basis whose corresponding differential operator has \emph{finite} order must have the form $\{h(k^2+k)\}_{k=0}^{\infty}$ where $h\in\mathbb{R}[x].$ In fact, we have shown that any multiplier sequence for the Legendre basis whose corresponding differential operator has infinite order (of which, to date, none have been discovered) cannot be interpolated by a polynomial. \end{rem} \section{A Collection of Legendre Multiplier Sequences}\label{s:collection} In this section, we demonstrate that the sequences in a certain collection are multiplier sequences for the Legendre basis. The result is reminiscent of a similar result by Craven and Csordas involving the characterization of polynomially interpolated classical complex zero decreasing sequences (see \cite[Prop. 2.2]{CCczds}). We will use the Bates-Yoshida Quadratic Hyperbolicity Preserver Characterization: \begin{thm}[Bates-Yoshida \cite{BY}] \label{BY-thm} Suppose $Q_2,Q_1,Q_0$ are real polynomials such that $\deg(Q_2)=2$, $\deg(Q_1)\le 1$, $\deg(Q_0)=0$. Then \begin{equation} \nonumber T=Q_2D^2+Q_1D+Q_0 \end{equation} preserves hyperbolicity if and only if \begin{equation} \nonumber W[Q_0,Q_2]^2-W[Q_0,Q_1]W[Q_1,Q_2]\le 0, \;\; \mbox{and} \;\; Q_0 \ll Q_1 \ll Q_2. \end{equation} \end{thm} The compact form of the Bates-Yoshida theorem is derived using the Borcea-Br\"and\'en characterization of hyperbolicity preservers \cite[Theorem 5]{bb} and intricate arguments involving the location of the zeros of the coefficient polynomials. With Theorem \ref{BY-thm} in hand, we prove the following. \begin{thm}\label{fall} Let $\delta = (x^2-1)D^2+2 x D$ where $D=d/dx$ and fix positive integers $n$ and $N$. 
The operator \begin{equation} \label{fallfac} \delta (\delta-1\cdot 2)(\delta-2\cdot 3)\cdots\big(\delta-(n-1)(n)\big)\prod_{j=1}^{N} (\delta-A_j) \end{equation} is a hyperbolicity preserver whenever $-(n+1)\leq A_j \leq n(n+1)$ for all $j$. \end{thm} \begin{proof} As in the proof of Theorem 14 on p. 137 of \cite{nreup}, we may factor the operator $$ \delta (\delta-1\cdot 2)(\delta-2\cdot 3 )\cdots(\delta-(n-1)\cdot n) $$ into a product of operators: $$ \left(\prod_{k=0}^{n-1}\left[(x^2-1)D+2(k+1)x\right]\right)D^{n}. $$ From Leibniz' rule (or by using equations (6) and (7) on p. 135 of \cite{nreup}), we have $$ D^{n} (\delta-A_j) = \left[(x^2-1)D^2 + 2(n+1) x D + n^2+n-A_j\right] D^{n}. $$ It follows that the operator in (\ref{fallfac}) can be factored as \begin{equation}\label{ops} \left(\prod_{k=0}^{n-1}\left[(x^2-1)D+2(1+k)x\right]\right) T D^{n}, \end{equation} where \begin{equation}\label{Top} T = \left(\prod_{j=1}^{N} \left[(x^2-1)D^2 + 2(n+1) x D + n^2+n-A_j\right] \right). \end{equation} By Theorem 8 of \cite{nreup}, any operator of the form $$ q(x) D + \alpha q'(x), $$ where $q\in\mathbb{R}[x]$ has only real zeros and $\alpha\geq 0$, is a complex zero decreasing (and, therefore, hyperbolicity preserving) operator. Thus, the operators in the product in equation (\ref{ops}), along with $D^{n}$, are hyperbolicity preserving. It remains to show that the operators appearing in the product for $T$ in equation (\ref{Top}) are hyperbolicity preserving. We do this by applying Theorem \ref{BY-thm}. It is clear that the coefficient polynomials of the operator are in proper position. Thus, we only need to find conditions on $A_j$ under which $$ W[n^2+n-A_j, x^2-1]^2 - W[n^2+n-A_j, 2(n+1)x]W[2(n+1)x, x^2-1]\leq 0 $$ for all $x\in\mathbb{R}$, where $W[f,g]$ denotes the Wronskian of $f$ and $g$, $$ W[f,g] = fg'-f'g. $$ A calculation shows that the inequality in question reduces to $$ -4(n^2+n-A_j)\left[(A_j+n+1)x^2+4(n+1)^2\right]\leq 0, $$ which holds for all $x\in\mathbb{R}$ precisely when $n^2+n-A_j\geq 0$ and $A_j+n+1\geq 0$, that is, when $-(n+1)\leq A_j\leq n(n+1)$. 
\end{proof} The operator $\delta$ appearing in Theorem \ref{fall} satisfies $\delta P_k(x) = (k^2+k)P_k(x)$. From this, we obtain an immediate corollary. \begin{cor}\label{fallseq} Fix positive integers $n$ and $N$, and let \begin{equation} h(x) = x (x-1\cdot 2)(x-2\cdot 3)\cdots\big(x-(n-1)(n)\big)\prod_{j=1}^{N} (x-A_j), \end{equation} where $-(n+1)\leq A_j \leq n(n+1)$ for all $j$. Then $\{h(k^2+k)\}_{k=0}^{\infty}$ is a multiplier sequence for the Legendre basis. \end{cor} We suspect that these results are not as sharp as they could be. For example, Theorem \ref{fall} indicates that operators of the form $\delta(\delta+A)$ are hyperbolicity preserving for $-2\leq A\leq 2$, but we suspect that these are hyperbolicity preserving for $-2\leq A\leq 4\sqrt{2}-2$ (we elaborate on this point in Section \ref{s:symbolcurve} below). Furthermore, based on the similarity to the CZDS results, we suspect the following problem can be answered in the affirmative. \begin{problem} Is it true that an operator of the form $(\ref{fallfac})$ is a complex zero decreasing operator if and only if $-n(n+1)\leq A_j \leq n+1$ for all $j\in\{1, 2, 3, \dots, N\}$? \end{problem} \section{Geometry of the Symbol Curve}\label{s:symbolcurve} In this final section, we use a beautiful result contained in \cite{BBmv} to pose a conjecture about the classification of quartic polynomials which interpolate multiplier sequences for the Legendre basis. The result we refer to deserves to be better known, and we state a special case of it here for the convenience of the reader. \begin{thm}\label{curve}\emph{(see \cite[Corollary 4.4.5]{BBmv})} Let $T$ be a finite order differential operator with $$ T = \sum_{k=0}^{n} Q_k(x) D^k \qquad \left(D=\frac{d}{dx}\right). $$ Then $T$ is hyperbolicity preserving if and only if the symbol curve in $\mathbb{R}^2$ $$ 0 = \sum_{k=0}^{n} (-1)^k Q_k(x) y^k $$ has $n$ intersections (counted with multiplicity) with every line of positive slope. 
\end{thm} Note that the symbol curve can be calculated as $T[\exp(-xy)]$, where $T$ acts on the variable $x$ alone. Now, by the results of Section \ref{s:Form and Order}, any quartic Legendre MS (up to a multiplicative constant) has the form $$ \{(k^2+k)^2+ b (k^2+k) + c\}_{k=0}^{\infty}. $$ We will first focus on the case $c=0$. As noted after Corollary \ref{fallseq}, if $-2\leq b \leq 2$, the sequence is a multiplier sequence for the Legendre basis. Now, since every Legendre MS is also a classical multiplier sequence, we can rule out several values of the parameter $b$. The sequence begins $$ 0, 4+2b, \dots $$ and, since the sequence is eventually positive, all of its terms must be nonnegative. Therefore, any such sequence with $b<-2$ cannot be a classical multiplier sequence, and hence cannot be a multiplier sequence for the Legendre basis. Furthermore, applying the sequence to $e^x$ in the standard basis yields $$ T[e^x] = \sum_{k=0}^{\infty}\frac{(k^2+k)^2+b(k^2+k)}{k!} x^k = (x^2+6x+2+b)(2+x)x e^x $$ with zeros $$ x=0, \quad x=-2, \quad x= -3 \pm \sqrt{7-b}, $$ from which we see that any such sequence with $b>7$ cannot be a classical multiplier sequence, and hence cannot be a multiplier sequence for the Legendre basis. It remains to determine what happens for $2<b\leq 7$. Using Theorem \ref{curve}, we want to determine conditions under which every line with positive slope will have the correct number of intersections with the curve $$ T[e^{-xw}]=0. $$ For our quartic operator, this is equivalent to examining the intersection property for the curve \begin{equation}\label{symbolcurve} 14 x^2 w^2-8 x^3 w^3+x^4 w^4-2 w^4 x^2-6 w^2+8 x w^3+w^4-4 x w+b x^2 w^2-b w^2-2 b x w+c = 0. \end{equation} In Figure \ref{symbol3}, we have graphed this curve for $c=0$ and $b=3$, $b=3.6569$, and $b=4$. 
\vskip .1 in \begin{figure}[h] \centering \includegraphics[width=1.5in]{symbol3} \hskip .2 in \includegraphics[width=1.5in]{symbol2} \hskip .2 in \includegraphics[width=1.5in]{symbol4} \caption{The symbol curve for $c=0$ and various choices of $b$.} \label{symbol3} \end{figure} We can see a ``breaking point'' somewhere around $b=3.6569$. To find the exact value of $b$, we seek to find when (\ref{symbolcurve}) with $c=0$ has multiple zeros. An analysis of the discriminant then shows that the target value is $b=4\sqrt{2}-2\approx 3.656854248$. For general values of $c$, a similar analysis has given us reason to believe that the following conjecture is true. \begin{conjecture} For the sequence $\{(k^2+k)^2+ b (k^2+k)+c\}_{k=0}^{\infty}$ to be a multiplier sequence for the Legendre basis, it is necessary and sufficient that $b$ and $c$ lie in the region in the $bc$-plane bounded by the $b$-axis, the parabola $c=(b+2)^2/8-4$, and the curve \begin{align*} 0&=b^5- (c+15) b^4+ 4(c+12)b^3+8 (c^2+24 c + 19) b^2- 16(10 c^2+101 c+ 33)b\\ &\hskip .2 in - 16( c^3 -50 c^2 - 185 c +63), \end{align*} as depicted in Figure \ref{sharkfin}. \end{conjecture} \begin{figure}[h] \centering \includegraphics[width=1.5 in]{quartic.jpg} \caption{Conjectured region of allowable values for the parameters $b$ and $c$.} \label{sharkfin} \end{figure} \vskip .2 in Similarly, using the symbol curve as our guide, we believe the following conjecture to be true as well. \begin{conjecture} Let $n$ be a positive integer and suppose $k$ is a positive integer that is at most $n-1$. Then the operator \begin{equation}\label{2^k} \delta^{n-k}(\delta^k - 2^k) \qquad \qquad (\delta = (x^2-1)D^2+ 2 x D) \end{equation} is a hyperbolicity preserving operator. \end{conjecture} We note that many cases of the previous conjecture can be verified using Theorem \ref{fall}. However, there are operators of the form (\ref{2^k}) that remain mysterious, such as $\delta(\delta^3-8)$. 
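The eigenvalue identity $\delta P_k(x) = (k^2+k)P_k(x)$, which underlies every operator considered in this section, can be verified exactly for small $k$ with a short computation in exact rational arithmetic (a Python sketch of our own; the helper functions are ours and are not taken from any cited work):

```python
# Exact check of delta P_k = (k^2 + k) P_k, with polynomials stored as
# coefficient lists over the rationals (low degree first).
from fractions import Fraction as F

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [F(0)] * (n - len(p))
    q = q + [F(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def deriv(p):
    return [F(i) * c for i, c in enumerate(p)][1:] or [F(0)]

def delta(p):
    # delta p = (x^2 - 1) p'' + 2x p'
    return poly_add(poly_mul([F(-1), F(0), F(1)], deriv(deriv(p))),
                    poly_mul([F(0), F(2)], deriv(p)))

def equal(p, q):
    n = max(len(p), len(q))
    return p + [F(0)] * (n - len(p)) == q + [F(0)] * (n - len(q))

# Legendre polynomials via (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
P = [[F(1)], [F(0), F(1)]]
for k in range(1, 10):
    xPk = [F(0)] + P[k]  # multiply P_k by x
    P.append(poly_add([F(2 * k + 1, k + 1) * c for c in xPk],
                      [F(-k, k + 1) * c for c in P[k - 1]]))

# the identity used throughout: delta P_k = (k^2 + k) P_k
for k in range(len(P)):
    assert equal(delta(P[k]), [F(k * k + k) * c for c in P[k]])
```

The same exact polynomial arithmetic can be used to experiment with operators such as $\delta(\delta^3-8)$ on test polynomials.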
Finally, we leave the reader to consider the central problem which remains open for the time being. \begin{problem} Characterize all multiplier sequences for the Legendre basis. \end{problem}
https://arxiv.org/abs/1701.02420
Polynomially Interpolated Legendre Multiplier Sequences
We prove that every multiplier sequence for the Legendre basis which can be interpolated by a polynomial has the form $\{h(k^2+k)\}_{k=0}^{\infty}$, where $h\in\mathbb{R}[x]$. We also prove that a non-trivial collection of polynomials of a certain form interpolate multiplier sequences for the Legendre basis, and we state conjectures on how to extend these results.
https://arxiv.org/abs/1806.02028
Determining the Generalized Hamming Weight Hierarchy of the Binary Projective Reed-Muller Code
Projective Reed-Muller codes correspond to subcodes of the Reed-Muller code in which the polynomials being evaluated to yield codewords are restricted to be homogeneous. The Generalized Hamming Weights (GHW) of a code ${\cal C}$ identify, for each dimension $\nu$, the smallest size of the support of a subcode of ${\cal C}$ of dimension $\nu$. The GHW of a code are of interest in assessing the vulnerability of a code in a wiretap channel setting. They are also of use in bounding the state complexity of the trellis representation of the code. In prior work by the same authors, a code-shortening algorithm was employed to derive upper bounds on the GHW of binary projective Reed-Muller (PRM) codes. In the present paper, we derive a matching lower bound by adapting the proof techniques used originally for Reed-Muller (RM) codes by Wei. This results in a characterization of the GHW hierarchy of binary PRM codes.
\section{Introduction} The notion of Generalized Hamming Weights (GHW), introduced by Wei in \cite{Wei}, is a generalization of the minimum Hamming weight of a linear code. In \cite{Wei}, the basic properties of GHW are studied and the weight hierarchies of Hamming codes, Reed-Solomon codes, binary Reed-Muller codes, etc., are determined. Study of this notion was motivated by applications in cryptography. For instance, when a linear code is used over a wiretap channel of type II (see \cite{OzarowWyner}), the amount of information revealed can be completely characterized using the GHW hierarchy of the linear code. In a similar way, the GHW can be used to analyze the performance of a linear code when used as a $t$-resilient function \cite{ChorGoldBit}. Later, the study of GHW hierarchies found applications in determining the optimum bit order in trellis-based decoding. Specifically, the GHW of RM codes found in \cite{Wei} were used in \cite{kasami1993optimum} to prove that the standard binary bit order is optimal for RM codes. The $\nu$-th GHW of a code $\mathcal{C}$ is given by \bean d_{\nu}(\mathcal{C}) = \min \big\{ \ |S(\mathcal{D})| : \mathcal{D} \ \text{is a subcode of} \ \mathcal{C} \\ \text{with dimension} \ \nu \big\}, \eean where $S(\mathcal{D})$ denotes the union of the supports of all the vectors in $\mathcal{D}$. A geometric approach to determine the GHW hierarchy for various classes of codes is described in \cite{TsfVla}. The projective Reed-Muller (PRM) codes, introduced by Gilles Lachaud in \cite{Lachaud86}, are a variant of the Reed-Muller (RM) codes. These codes are based on evaluations of homogeneous polynomials of degree $r$ in the projective space $\mathbb{P}^{m-1}(\mathbb{F}_q)$. The dimension and minimum distance of PRM codes were determined by Serre \cite{serre} and S{\o}rensen \cite{Sorensen}. In \cite{boguslavsky}, Boguslavsky determined the second GHW of projective Reed-Muller codes in the regime $r < q-1$. 
The connections between the Tsfasman-Boguslavsky conjecture and the GHW of PRM codes were studied in \cite{DattaGhorpade}. To the best of our knowledge, none of the previous works on the GHW of PRM codes have considered the binary ($q=2$) version. However, the next-to-minimal weight of binary PRM codes was determined in a recent work \cite{CarvalhoNeumann}, where the next-to-minimal weight is the smallest codeword weight strictly greater than the minimum Hamming weight. Note that the next-to-minimal weight is not the same as the second generalized Hamming weight. In this paper we present the GHW hierarchy for binary PRM codes. In a recent work \cite{VajRamKum} by the authors of the current paper, it was shown that binary PRM codes and their shortened versions have the Private Information Retrieval (PIR) code property. The shortening technique proposed in that work resulted in an upper bound on the GHW of binary PRM codes. The work presented in this paper started as an attempt to prove the optimality of that shortening procedure. Here we derive a lower bound for the GHW of PRM codes and show that it matches the upper bound provided in \cite{VajRamKum}. The proofs presented in this paper adapt ideas from the derivation of the GHW of RM codes in \cite{Wei}. \paragraph{Organization of paper} In Section~\ref{sec:PRM} we describe the parameters and properties of the binary projective Reed-Muller code. Section~\ref{sec:short} discusses the shortening procedure proposed in \cite{VajRamKum}, which gives an upper bound on the GHW of binary PRM codes. A lower bound is derived in Section~\ref{sec:lower} using techniques from \cite{Wei}. In Section~\ref{sec:ghw} we show that these bounds match and thereby determine the GHW hierarchy of the binary projective Reed-Muller code. \paragraph{Notation} We use $d_{\nu}(r,m)$ to denote the $\nu$-th GHW of the PRM$(r,m-1)$ code. The notation $[a,b]$ denotes $\{a,a+1, \cdots ,b-1,b\}$ and $[a]=[1,a]$. The support of any code $\mathcal{C}$ is denoted by $S(\mathcal{C})$. 
\section{Binary Projective Reed-Muller Codes} \label{sec:PRM} Every codeword in the $\text{PRM}(r, m-1)$ code over the field $\mathbb{F}_q$ is a vector of evaluations of a homogeneous polynomial of degree $r$ at a fixed representative of each of the points in the projective space $\mathbb{P}^{m-1}(\mathbb{F}_q)$. For $q=2$, each point in the projective space $\mathbb{P}^{m-1}(\mathbb{F}_2)$ has a unique representative with $m$ components, i.e., $\mathbb{P}^{m-1}(\mathbb{F}_2) \simeq \mathbb{F}_2^m \setminus \{\underline{0}\}.$ A codeword in the binary PRM$(r,m-1)$ code is the vector of evaluations, at all non-zero points in $\mathbb{F}_2^{m}$, of a binary homogeneous polynomial of degree $r$ in $m$ variables, \bea \label{eq:PRM} f(\underline{x})=f(x_m, \cdots, x_1) = \sum\limits_{|R| = r, R \subseteq [m]} a_{R} \prod\limits_{i \in R} x_i \eea where $a_R \in \mathbb{F}_2$. The monomial coefficients $\{a_R : |R|=r\}$ represent the message symbols. It is easily seen that the binary PRM code is systematic. In the remainder of the paper we discuss only binary PRM codes, so from now on PRM means the binary version. Any binary homogeneous polynomial of degree $r$ evaluates to $0$ at vectors (in $\mathbb{F}_2^{m}$) with Hamming weight less than $r$. Hence, some coordinates are zero in all the codewords and can be deleted from the binary $\text{PRM}(r, m-1)$ code. The \emph{non-degenerate} $\text{PRM}(r,m-1)$ code thus obtained has parameters: \bea \text{Dimension \ } &=& {m \choose r}, \nonumber \\ \text{Block length \ } &=&2^m-\sum\limits_{i = 0}^{r-1} {m \choose i}= \sum\limits_{i = r}^m {m \choose i}. \eea \paragraph*{Support-set viewpoint} Note that each codeword of this code is of the form $(f(\underline{x})$, $\underline{x} \in \mathbb{F}_2^m$ with $w_H(\underline{x}) \ge r)$. Any vector $\underline{x} \in \mathbb{F}_2^m$ can be represented uniquely by its support. 
This implies that each code symbol can be indexed by a subset of $[m]$ of size $\ge r$. For an example code with $m=4$, the code symbol $f(1011)$ can be represented as $f(\{1,3,4\})$, where $\{1,3,4\}$ is the support of the vector $(1011)$. \begin{note} Each message symbol as well as its corresponding monomial can be indexed by an $r$-element subset of $[m]$. \end{note} \section{Shortening Algorithm: Upper Bound} \label{sec:short} In this section, we briefly describe the shortening technique proposed in \cite{VajRamKum}, which results in an upper bound on the GHW of binary PRM codes. For the PRM$(r,m-1)$ code, any code symbol $f(S), S \subseteq[m]$, is given by: \bean f(S) = \sum\limits_{R_i \subseteq S} a_{R_i}, \eean where $R_i, \forall i \in \left[ {m \choose r} \right] $ are the $r$-element subsets of $[m]$ and $a_{R_i}$ are the message symbols. For an example code with $r=2$ and $m=4$, $f(\{1,3,4\}) = f(\{1,3\}) + f(\{1,4\}) + f(\{3,4\})$.\\ Suppose we set the message symbols $a_{R_i}=0$, $\forall R_i \subseteq S$. This is equivalent to setting $f(R_i) = 0$, $\forall R_i \subseteq S$, because the code is systematic. Now we have, \bea f(S) = \sum\limits_{R_i \subseteq S} f(R_i)= 0. \eea This means that the coordinates corresponding to $\{R_i \mid R_i \subseteq S \} \cup \{S\}$ can be ignored. Hence, on shortening the PRM$(r,m-1)$ code by setting all message symbols corresponding to some $\gamma$ $r$-element subsets to zero, we can ignore the code coordinates corresponding to those message symbols and possibly some other code coordinates. Therefore, this shortening procedure results in a block length reduction of $\Gamma(r,m,\gamma) \ge \gamma$. The resultant code will have parameters: \bea \text{Dimension } k &=& {m \choose r} - \gamma, \nonumber \\ \text{Block length } n &=& \sum\limits_{i = r}^m {m \choose i} - \Gamma(r,m,\gamma). 
\eea The aim of a good shortening algorithm for the PRM$(r,m-1)$ code should be to pick these message symbols ($r$-element subsets of $[m]$) so that the block length reduction is as large as possible. With this background, we state without proof the following lemmas from \cite{VajRamKum}. For a given $\gamma$, $r$ and $m$, first a unique vector $\underline{\rho}$ is computed, and then $\Gamma(r,m,\gamma)$ is computed using it. Note that $\ell=m-r$ here. \begin{lem}[Unique $\underline{\rho}$ representation \cite{VajRamKum}] \label{lem:unique} Any $\gamma < {m \choose \ell}$ can be uniquely represented using a vector $\underline{\rho}=(\rho_{\ell-1}, \cdots, \rho_0)$ with $\rho_i \ge 0, \forall i \in [0, \ell-1]$ and $\sum\limits_{i=0}^{\ell-1} \rho_i \le r$ as, \bean \gamma &=& \sum\limits_{t = 0}^{\ell - 1} h\left(\rho_t, r_t, t\right), \\ \text{ where } \ h\left(p, r, t\right) &=& \begin{cases} \sum\limits_{i = 0}^{p-1} { r+t-i \choose r-i} & p > 0\\ 0 & p = 0 \end{cases} \ \\ \text{ and } \ \ r_t &=& r - \sum\limits_{q = t+1}^{\ell-1} \rho_{q}. \eean \end{lem} \begin{lem}[Block length reduction \cite{VajRamKum}] \label{thm:shorten3} Let $\underline{\rho}=(\rho_{\ell-1}, \cdots, \rho_0)$ be the unique representation of a given $\gamma \in \big[0, {m \choose \ell}\big)$. Let $r_t$, $\forall t \in [0, \ell-1]$, be as defined in the previous lemma. By setting $\gamma$ message symbols of the $\text{PRM}(r, m-1)$ code to zero, a block length reduction of \bean \Gamma(r,m,\gamma) &=& \sum\limits_{t=0}^{\ell - 1} g\left(r_t, t\right), \\ \text{ where } \ g\left(r, t\right) &=& \begin{cases} \sum\limits_{j = 0 }^{t} \sum\limits_{i=0}^{\rho_t - 1} {r+t-i \choose r+j-i} & \rho_t > 0\\ 0 & \rho_t = 0, \end{cases} \eean is possible. \end{lem} Table~\ref{table:PRM_2_4} illustrates the shortening procedure underlying Lemma~\ref{thm:shorten3} for the case $m=5$, $r=2$. 
To reduce the dimension by $\gamma$, one has to pick the first $\gamma$ $2$-element sets in the column $\mathbb{S}$ and set the corresponding message symbols to zero. For example, if $\gamma=4$, the message symbols given by $\{1,2\}$, $\{1,3\}$, $\{2,3\}$ and $\{1,4\}$ are set to zero. \begin{table}[h!] \begin{center} \bean \begin{array}{|c|c|c|c|c|} \hline k & \gamma & \mathbb{S} & \Gamma(2,5,\gamma) & n \\ \hline 10 & 0 & \emptyset & 0 & 26\\ \hline 9 & 1 & \{1,2\} & 1 & 25\\ \hline 8 & 2 & \{1,3\} & 2 & 24 \\ \hline 7 & 3 & \{2,3\} & 4 & 22 \\ \hline 6 & 4 & \{1,4\} & 5 & 21\\ \hline 5 & 5 & \{2,4\} & 7 & 19\\ \hline 4 & 6 & \{3,4\} & 11 & 15\\ \hline 3 & 7 & \{1,5\} & 12 & 14 \\ \hline 2 & 8 & \{2,5\} & 14 & 12\\ \hline 1 & 9 & \{3,5\} & 18 & 8 \\ \hline \end{array} \eean \caption{Shortening procedure for the PRM code with $r=2$, $m=5$. \label{table:PRM_2_4}} \end{center} \end{table} The order in which the $2$-element sets are picked here is called co-lexicographic order. For any two subsets $A$ and $B$ of an ordered set, we say $A > B$ in co-lexicographic order if $\max\big(A\,\Delta\, B\big) \in A$, where $ A \Delta B = (A \setminus B) \cup (B \setminus A)$. For instance, we have $\{1,2\} < \{1,3\}$ in co-lexicographic order since $\max\big(\{1,2\}\,\Delta\,\{1,3\}\big) = 3 \in \{1,3\}$. Hence, $\{1,2\}$, $\{1,3\}$, $\{2,3\}$, $\{1,4\}$, $\{2,4\}$, $\{3,4\}$, $\{1,5\}$, $\{2,5\}$, $\{3,5\}$, $\{4,5\}$ are in co-lexicographic order. Although it is not explicitly stated in \cite{VajRamKum}, the general shortening procedure used to prove Lemma~\ref{thm:shorten3} picks the first $\gamma$ $r$-element subsets of $[m]$ in co-lexicographic order.\\ The terminology anti-lexicographic order is used for the reverse co-lexicographic order. For example, $\{4,5\}$, $\{3,5\}$, $\{2,5\}$, $\{1,5\}$, $\{3,4\}$, $\{2,4\}$, $\{1,4\}$, $\{2,3\}$, $\{1,3\}$, $\{1,2\}$ are in anti-lexicographic order. Hence, the remaining message symbols after shortening will correspond to the first $k$ $r$-element subsets of $[m]$ in anti-lexicographic order. 
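The co-lexicographic order just described, and the parity identity $f(S)=\sum_{R_i \subseteq S} f(R_i)$ driving the shortening, can be illustrated concretely with a short computation (a Python sketch under our own naming; this is not code from \cite{VajRamKum}):

```python
from itertools import combinations
import random

def colex(m, r):
    # A < B in co-lexicographic order iff max(A Δ B) ∈ B; sorting the r-subsets
    # by their reversed (largest-element-first) tuples realizes exactly this order.
    return sorted(combinations(range(1, m + 1), r), key=lambda s: s[::-1])

# the order listed above for r = 2, m = 5
assert colex(5, 2) == [(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4),
                       (1, 5), (2, 5), (3, 5), (4, 5)]

def prm_codeword(msg, r, m):
    # One coordinate per support S with |S| >= r; f(S) is the mod-2 sum of the
    # message bits a_R over the r-subsets R of S.
    return {S: sum(msg[R] for R in msg if set(R) <= set(S)) % 2
            for size in range(r, m + 1)
            for S in combinations(range(1, m + 1), size)}

random.seed(1)
r, m = 2, 4
msg = {R: random.randint(0, 1) for R in combinations(range(1, m + 1), r)}
w = prm_codeword(msg, r, m)
assert all(w[R] == msg[R] for R in msg)                          # systematic
assert w[(1, 3, 4)] == (w[(1, 3)] + w[(1, 4)] + w[(3, 4)]) % 2   # parity identity
```

Sorting by the reversed tuple works because comparing two sorted $r$-subsets from their largest elements downward is exactly the co-lexicographic comparison.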
\begin{thm} For the binary $\text{PRM}(r, m-1)$ code, the $k$-th generalized Hamming weight satisfies \bea \label{eq:ghw} d_k(r,m) \le \sum\limits_{i = r}^m {m \choose i} - \Gamma(r,m,\gamma), \eea where $\Gamma(r,m,\gamma)$ is the block length reduction given by Lemma~\ref{thm:shorten3} for $\gamma= {m \choose r}-k$. \end{thm} \bprf The shortened version of the $\text{PRM}(r,m-1)$ code obtained by setting the first $\gamma= {m \choose r}-k$ message symbols in co-lexicographic order to zero is a $k$-dimensional subcode of the $\text{PRM}(r,m-1)$ code. Therefore, the block length of this shortened code gives an upper bound on the $k$-th GHW of the $\text{PRM}(r,m-1)$ code. \eprf \ \\ \section{Lower Bound On GHW Of Binary PRM Codes\label{sec:lower}} In Theorem \ref{thm:lb_ghw} we present a lower bound on the GHW of binary PRM codes. The proof shown here adapts techniques from the proof for Reed-Muller codes in \cite{Wei}. The GHW of RM codes determined in \cite{Wei} give a lower bound on the GHW of PRM codes, since PRM codes are subcodes of RM codes with the same parameters. However, we will prove that there is a gap between the GHW of RM and PRM codes (see Figure \ref{fig:ghw_gap}) by establishing a tighter lower bound for PRM codes. \begin{figure}[h!] \begin{center} \includegraphics[width=4.4in]{ghw.eps} \end{center} \caption{Gap between GHW of PRM$(2,4)$ and RM$(2,5)$ codes, here $r=2$, $m=5$.}\label{fig:ghw_gap} \end{figure} Every codeword in the PRM$(r,m-1)$ code corresponds to evaluations of a binary homogeneous polynomial of degree $r$ in $m$ variables. Hence, we use the notation $f \in $ PRM$(r,m-1)$ to represent the codeword given by evaluations of the homogeneous polynomial $f \in \mathbb{F}_2[x_1, \cdots, x_m]$. It can be seen that any $f \in $ PRM$(r,m-1)$ can be represented as $f = f_1 + x_m f_2$, where $f_1 \in $ PRM$(r,m-2)$ and $f_2 \in $ PRM$(r-1,m-2)$. 
\bthm \label{thm:lb_ghw} For any $0 \le k < {m \choose r}$, \bean \scalebox{0.95}{$ d_k(r,m) \ge \min \limits_{\substack{s+t = k\\ s \le {m-1 \choose r-1}, \ t \le {m-1 \choose r}}} \{ d_s(r-1,m-1) + d_t(r,m-1) \}$} \eean \ethm \bprf Let $\mathcal{C}$ be a subcode of PRM$(r,m-1)$ with support size $d_k(r,m)$ and dimension $k$. Let $L = \mathbb{F}_2[x_1, \cdots, x_{m-1}]$. We define \bean \mathcal{C}_1 = \{ f \in L ; \ x_m f \in \mathcal{C} \}. \eean Let $\mathcal{C}_2$ be such that $\mathcal{C} = x_m \mathcal{C}_1 \oplus \mathcal{C}_2$, where $\oplus$ denotes the direct sum and $x_m\mathcal{C}_1 = \{x_m f \mid f \in \mathcal{C}_1\}$. We now define \bean \mathcal{C}_3 = \{g \in L; \exists f \in L, \ x_m f + g \in \mathcal{C}_2\}. \eean Let the dimensions of $\mathcal{C}_1$ and $\mathcal{C}_2$ be $s^*$ and $t^*$, respectively; note that $s^* + t^* = k$. Any element of $\mathcal{C}_2$ can be written as $x_m f + g$, where $f,g \in L$ are homogeneous polynomials with $\deg(f) = r-1$ and $\deg(g) = r$. It can be observed that $\mathcal{C}_1$ is a subcode of PRM$(r-1,m-2)$ and $\mathcal{C}_3$ is a subcode of PRM$(r,m-2)$. We will now show that $\mathcal{C}_3$ and $\mathcal{C}_2$ have the same dimension. If $g \in \mathcal{C}_3$, then there exists $f_1 \in L$ such that $x_m f_1 + g \in \mathcal{C}_2$. If there were $f_2 \ne f_1$ such that $x_m f_2 + g \in \mathcal{C}_2$, then $x_m (f_1 + f_2) \in \mathcal{C}_2$. But $f_1 + f_2 \in \mathcal{C}_1$, so $x_m (f_1 + f_2) \in x_m \mathcal{C}_1 \cap \mathcal{C}_2 = \{0\}$, a contradiction. Therefore, for every element in $\mathcal{C}_3$, there is a corresponding unique element in $\mathcal{C}_2$. The support size of the subcode $\mathcal{C}$ is given by \bean d_k(r,m)=|S(\mathcal{C})| = |S_0(\mathcal{C})| + |S_1(\mathcal{C})|, \eean where $S_i(\mathcal{C})$ denotes the part of the support on the coordinates with $x_m = i$, for $i = 0,1$. 
Since $\mathcal{C}_1$ is an $s^*$-dimensional subcode of PRM$(r-1,m-2)$, we have $|S(\mathcal{C}_1)|\ge d_{s^*}(r-1,m-1)$ and, similarly, $|S(\mathcal{C}_3)| \ge d_{t^*}(r,m-1)$.\\ It is clear that $|S_1(\mathcal{C})| \ge |S(\mathcal{C}_1)| \ge d_{s^*}(r-1,m-1)$ and $|S_0(\mathcal{C})| \ge |S(\mathcal{C}_3)| \ge d_{t^*}(r,m-1)$. Thus we have, \bean d_k(r,m) &\ge& d_{s^*}(r-1,m-1) + d_{t^*}(r,m-1)\\ &\ge& \scalebox{0.9}{$\min\limits_{\substack{s+t = k\\ s \le {m-1 \choose r-1}, \ t \le {m-1 \choose r}}} \{ d_s(r-1,m-1) + d_t(r,m-1) \}$} \eean \eprf \section{GHW Of Binary PRM Codes\label{sec:ghw}} In this section, we first state a well-known theorem from extremal set theory and then use it to prove the GHW results. Let $U_r$ denote the family of all $r$-element subsets of $[m]$. For a collection $K \subseteq U_r$, the (upward) shadow is given by \bean \Delta(K) = \{ X \subseteq [m] \ | \ Y \subseteq X \ \text{ for some } Y \in K \}. \eean \bthm[Kruskal \cite{Kruskal}, Katona \cite{Katona}] The collection $K$ consisting of the first $k$ $r$-subsets of $[m]$ picked in anti-lexicographic order achieves $\min \{|\Delta(K)\cap U_{r+1}|: K \subseteq U_r$ and $|K| = k \}$. \ethm \bcor\label{cor:minwt_antilex} The collection $K$ consisting of the first $k$ $r$-subsets of $[m]$ picked in anti-lexicographic order achieves $\min \{ |\Delta(K)|: K \subseteq U_r$ and $|K| = k \}$. \ecor Let $\sigma_{(r,m,k)}$ denote the support size of the subcode spanned by the monomials corresponding to the first $k$ $r$-element subsets of $[m]$ in anti-lexicographic order. Note that this subcode is the same as the shortened PRM code formed by setting the ${m \choose r} - k$ message symbols picked in co-lexicographic order to zero. Since this subcode is a $k$-dimensional subcode of PRM$(r,m-1)$, we have $\sigma_{(r,m,k)} \ge d_k(r,m)$. 
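For small parameters, $\sigma_{(r,m,k)}$ is simply the size of the upward shadow of the first $k$ anti-lexicographic $r$-subsets, so both the values for $r=2$, $m=5$ and the Kruskal-Katona optimality of anti-lexicographic prefixes can be confirmed by brute force (a Python sketch; the function names are ours):

```python
from itertools import combinations

def shadow_size(K, m, r):
    # |Δ(K)| restricted to supports of size >= r: the number of subsets
    # S of [m] that contain at least one member of K
    return sum(1 for size in range(r, m + 1)
                 for S in combinations(range(1, m + 1), size)
                 if any(set(R) <= set(S) for R in K))

m, r = 5, 2
U = sorted(combinations(range(1, m + 1), r),
           key=lambda s: s[::-1], reverse=True)      # anti-lexicographic order
sigma = [shadow_size(U[:k], m, r) for k in range(1, len(U) + 1)]
# matches the r = 2, m = 5 GHW hierarchy (and n = 26 - Γ in the shortening table)
assert sigma == [8, 12, 14, 15, 19, 21, 22, 24, 25, 26]

# Kruskal-Katona optimality of anti-lex prefixes, verified exhaustively for small k
for k in range(1, 6):
    assert min(shadow_size(K, m, r) for K in combinations(U, k)) == sigma[k - 1]
```

The brute-force search over all $\binom{10}{k}$ collections is feasible here only because the parameters are tiny; the point is to make the role of the shadow visible, not to compute at scale.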
\\ \blem \label{lem:wt_antilex} \bean \sigma_{(r,m,k)} \le \min \limits_{\substack{s+t = k\\ s \le {m-1 \choose r-1}, \ t \le {m-1 \choose r}}} \{ \sigma_{(r-1,m-1,s)} + \sigma_{(r, m-1,t)} \} \eean \elem \bprf Let $s^*, t^*$ be the values that achieve the minimum on the RHS. Then RHS $= \sigma_{(r-1,m-1,s^*)} + \sigma_{(r,m-1,t^*)}$. The RHS corresponds to the support size of a subcode spanned by $k$ monomials, each of degree $r$, with $s^*$ of them containing $m$ and $t^*$ of them without $m$. Now, from Corollary~\ref{cor:minwt_antilex} we know that picking the first $k$ $r$-element sets in anti-lexicographic order results in the minimum support size $\sigma_{(r,m,k)}$. Hence, RHS $\ge \sigma_{(r,m,k)}$. \eprf \bthm \label{thm:ghw_prm} For any $0 \le k < {m \choose r}$, \bean \sigma_{(r,m,k)} = d_k(r,m), \eean i.e., the support size of the subcode spanned by the first $k$ monomials in anti-lexicographic order is the $k$-th GHW of PRM$(r,m-1)$. \ethm \bprf For the case $m=1$, the statement trivially follows. We now use induction over $m$ to prove the theorem and hence assume it is true for $m-1$. Let $\mathcal{C}$ be a subcode with dimension $k$ and support size $d_k(r,m)$. Then by Theorem \ref{thm:lb_ghw} we have, \bea |S(\mathcal{C})| &\ge& \scalebox{0.92}{$\min \limits_{\substack{s+t = k\\ s \le {m-1 \choose r-1}, \ t \le {m-1 \choose r}}} \{ d_s(r-1,m-1) + d_t(r,m-1)\}$} \nonumber \\ &=& \min \limits_{\substack{s+t = k\\ s \le {m-1 \choose r-1}, \ t \le {m-1 \choose r}}} \{ \sigma_{(r-1,m-1,s)} + \sigma_{(r,m-1,t)}\} \label{eq:ind}\\ &\ge& \sigma_{(r,m,k)} \label{eq:wt_antilex}\\ &\ge& d_k(r,m) \nonumber \\ &=& |S(\mathcal{C})| \nonumber \eea Here, \eqref{eq:ind} follows from the induction assumption and \eqref{eq:wt_antilex} follows from Lemma \ref{lem:wt_antilex}. Therefore, $\sigma_{(r,m,k)} = d_k(r,m)$. 
\eprf \bcor \label{cor:ghw} For any $0 \le k < {m \choose r}$, \bean d_k(r,m) = \sum\limits_{i=r}^m {m \choose i} - \Gamma\Big(r, m, {m \choose r} - k\Big), \eean where $\Gamma$ is obtained from Lemma~\ref{thm:shorten3}. \ecor \bprf From Theorem \ref{thm:ghw_prm}, the support size of the first $k$ monomials picked in anti-lexicographic order gives the $k$-th generalized Hamming weight. This is the same as avoiding (shortening) the first ${m \choose r} - k$ monomials picked in co-lexicographic order. Therefore, the shortened block length obtained from Lemma~\ref{thm:shorten3} with $\gamma={m \choose r} - k$ is equal to $\sigma_{(r,m,k)}$, and hence the result. \eprf The above corollary proves the optimality of the shortening procedure for PRM codes given in \cite{VajRamKum} and provides the complete GHW hierarchy. GHW hierarchies of PRM codes for some parameters are listed in Table \ref{table:PRM_ghw}. The next corollary gives a simplified expression for the GHW in some special cases. \bcor \bean d_k(r,m) = (2^k - 1)2^{m-r-k+1} \ \ \text{ for } k \le m-r+1. \eean \ecor \bprf Pick the first $k$ $r$-element subsets of $[m]$ in anti-lexicographic order. These sets are of the form \bean S_i = \{m, m-1, \cdots, m-r+2, m-r-(i-2) \}, \eean for all $i \in [k]$. Now consider the monomials corresponding to $S_i$, for all $i \in [k]$. A vector $\underline{x} \in \mathbb{F}_2^m \setminus \{\underline{0}\}$ for which at least one of these monomials evaluates to one has $x_i = 1$ for all $i \in [m-r+2,m]$ and $x_i=1$ for at least one $i \in [m-r-k+2,m-r+1]$. The remaining $x_i, i \in [1,m-r-k+1]$, can take any value. The number of such vectors is $(2^k - 1)2^{m-r-k+1}$. \eprf \begin{note} \bean d_1(r,m) &=& 2^{m-r}, \\ d_2(r,m) &=& 3\cdot2^{m-r-1}; \ \ m \ge r+1,\\ d_3(r,m) &=& 7\cdot2^{m-r-2}; \ \ m \ge r+2. \eean \end{note} The expression for the GHW of PRM codes in Corollary~\ref{cor:ghw} is obtained by considering the sets that are removed. 
In the GHW derivation for Reed-Muller codes in \cite{Wei}, the counting is done taking into account the sets that remain. The following lemma and corollary give an expression for the GHW of PRM codes using a similar approach. \blem Any $0 \le k < {m \choose r}$ can be uniquely represented in the $(r,m)$ canonical form given by \bean k = \sum\limits_{i=1}^t {m_i \choose r_i}, \eean where $r > r_1 \ge r_2 \ge \cdots \ge r_t \ge 0$, $m_i \ge 0$ and $m_i-r_i = m-r-i+1$. \elem \bprf We induct over the variable $m$. For $m=1$, the result is trivial. Assume that the statement is true for $m-1$. If $k \ge {m-1 \choose r-1}$, define $k' = k - {m-1 \choose r-1} < {m-1 \choose r}$. Then, $k'$ has an $(r, m-1)$ canonical representation given by \bean k' = \sum\limits_{i = 1}^{t'} {m_i' \choose r_i'}. \eean Setting $m_1 = m-1$, $r_1 = r-1$, and $m_{i+1} = m_i'$ and $r_{i+1} = r_i'$ for all $i \in [t']$ satisfies the lemma statement with $t=t'+1$.\\ For $k < {m-1 \choose r-1}$, the $(r-1, m-1)$ canonical form will itself be the $(r,m)$ canonical form. \eprf \ \\ \bcor\label{cor:ghw_canonical} For any $0 \le k < {m \choose r}$, the $k$-th GHW of the binary projective Reed-Muller code PRM$(r,m-1)$ is given by \bean d_k(r,m) = \sum\limits_{i=1}^t \sum\limits_{j = r_i}^{m_i} {m_i \choose j}. \eean \ecor \bprf Here, we induct on the variable $m$. Assume that the result holds for $m-1$. Let $K$ be the set of the first $k$ $r$-element subsets of $[m]$ in anti-lexicographic order. Consider the case $k \ge {m-1 \choose r-1}$; here $K$ exhausts all the $r$-element subsets that include $m$. Let $S_1(K)$ denote the support generated by the sets that include $m$ and $d_{k'}(r,m-1)$ the support size generated by the remaining sets in $K$, where $k' = k - {m-1 \choose r-1}$. Then, \bean |S(K)| = |S_1(K)| + d_{k'}(r,m-1). \eean It can be observed that $|S_1(K)|$ is the same as the block length of the PRM$(r-1,m-2)$ code.
Therefore, by the induction assumption, \bean |S(K)| = \sum\limits_{j = r-1}^{m-1} {m-1 \choose j} + \sum\limits_{i=1}^{t'} \sum\limits_{j=r_i'}^{m_i'} {m_i' \choose j}, \eean where $(m_i', r_i'), \ \forall i \in [t']$, is the $(r,m-1)$ canonical representation of $k'$. Now, by picking $m_1 = m-1$, $r_1 = r-1$, $m_{i+1} = m_i'$, $r_{i+1} = r_i'$ for all $i \in [t']$ and $t=t'+1$, we get: \bean |S(K)| &=& \sum\limits_{i=1}^{t} \sum\limits_{j=r_i}^{m_i} {m_i \choose j}\\ &=& d_k(r,m). \eean The second equality follows from Theorem \ref{thm:ghw_prm}. For the case $k < {m-1 \choose r-1}$, all the $r$-element sets in $K$ include $m$, and the support generated by these sets can therefore be determined by $d_k(r-1,m-1)$. The $(r-1,m-1)$ canonical representation of $k$ is also the $(r,m)$ canonical representation of $k$. \bea d_k(r,m) &=& d_k(r-1,m-1) \nonumber \\ &=& \sum\limits_{i=1}^t \sum\limits_{j=r_i}^{m_i} {m_i \choose j}. \label{eq:dkind} \eea Equation \eqref{eq:dkind} follows by the induction assumption. \eprf \begin{table}[h!] \begin{center} \bean \begin{array}{|c|c|c|} \hline r & m & \text{ GHW Hierarchy } \Big(d_1, \cdots, d_{m \choose r}\Big)\\ \hline 1 & 2 & 2,3 \\ \hline 1 & 3 & 4,6,7 \\ \hline 2 & 3 & 2,3,4 \\ \hline 1 & 4 & 8,12,14,15 \\ \hline 2 & 4 & 4,6,7,9,10,11 \\ \hline 3 & 4 & 2,3,4,5 \\ \hline 1 & 5 & 16,24,28,30,31 \\ \hline 2 & 5 & 8,12,14,15,19,21,22,24,25,26 \\ \hline 3 & 5 & 4,6,7,9,10,11,13,14,15,16 \\ \hline 4 & 5 & 2,3,4,5,6 \\ \hline \end{array} \eean \caption{ GHW hierarchy of binary PRM$(r,m-1)$ code for some parameters.} \label{table:PRM_ghw} \end{center} \end{table} \bibliographystyle{IEEEtran}
https://arxiv.org/abs/1509.07385
Provable approximation properties for deep neural networks
We discuss approximation of functions using deep neural nets. Given a function $f$ on a $d$-dimensional manifold $\Gamma \subset \mathbb{R}^m$, we construct a sparsely-connected depth-4 neural network and bound its error in approximating $f$. The size of the network depends on the dimension and curvature of the manifold $\Gamma$, the complexity of $f$ in terms of its wavelet description, and only weakly on the ambient dimension $m$. Essentially, our network computes wavelet functions, which are constructed from Rectified Linear Units (ReLU).
\section{Introduction} \label{sec:intro} In the last decade, deep learning algorithms have achieved unprecedented success and state-of-the-art results in various machine learning and artificial intelligence tasks, most notably image recognition, speech recognition, text analysis and Natural Language Processing~\cite{lecun2015deep}. Deep Neural Networks (DNNs) are general in the sense of their mechanism for learning features of the data. Nevertheless, in numerous cases, results obtained with DNNs outperformed previous state-of-the-art methods, which often required significant domain knowledge, manifested in hand-crafted features. Despite the great success of DNNs in many practical applications, the theoretical framework of DNNs is still lacking; along with some decades-old well-known results, developing aspects of such a theoretical framework are the focus of much recent academic attention. In particular, some interesting topics are (1) specification of the network topology (i.e., depth, layer sizes), given a target function, in order to obtain certain approximation properties, (2) estimating the amount of training data needed in order to generalize to test data with high accuracy, and also (3) development of training algorithms with performance guarantees. \subsection{The contribution of this work} In this manuscript we discuss the first topic. Specifically, we prove a formal version of the following result: \begin{theoremInf} Let $\Gamma \subset \mathbb{R}^m$ be a smooth $d$-dimensional manifold, $f \in L_2(\Gamma)$ and let $\delta>0$ be an approximation level. Then there exists a depth-4 sparsely-connected neural network with $N$ units, where $N=N(\delta, \Gamma, f, m)$, computing the function $f_N$ such that \begin{equation} \|f-f_N\|_2^2 \le \delta.
\end{equation} \end{theoremInf} The number $N=N(\delta, \Gamma, f, m)$ depends on the complexity of $f$, in terms of its wavelet representation, the curvature and dimension of the manifold $\Gamma$, and only weakly on the ambient dimension $m$, thus taking advantage of the possibility that $d \ll m$, which seems to be realistic in many practical applications. Moreover, we specify the exact topology of such a network, and show how it depends on the curvature of $\Gamma$, the complexity of $f$, and the dimensions $d$ and $m$. Lastly, for two classes of functions we also provide approximation error rates: an $L_2$ error rate for functions with sparse wavelet expansion and a point-wise error rate for functions in $C^2$: \begin{itemize} \item if $f$ has wavelet coefficients in $l_1$ then there exist a depth-4 network and a constant $c$ so that \begin{equation} \|f-f_N\|_2^2 \le \frac{c}{N} \end{equation} \item if $f \in C^2$ and has bounded Hessian, then there exists a depth-4 network so that \begin{equation} \|f - f_N\|_\infty = O\left(N^{-\frac{2}{d}} \right). \end{equation} \end{itemize} \subsection {The structure of this manuscript} The structure of this manuscript is as follows: in Section \ref{sec:relatedWork} we review some of the fundamental theoretical results in neural network analysis, as well as some of the recent theoretical developments. In Section \ref{sec:preliminaries} we give a quick technical review of the mathematical methods and results that are used in our construction. In Section \ref{sec:Main} we describe our main result, namely the construction of deep neural nets for approximating functions on smooth manifolds. In Section \ref{sec:counting} we specify the size of the network needed to learn a function $f$, in view of the construction of the previous section. Section \ref{sec:conclusions} concludes this manuscript. \subsection{Notation} $\Gamma$ denotes a $d$-dimensional manifold in $\mathbb{R}^m$. $\{(U_i, \phi_i) \}$ denotes an atlas for $\Gamma$.
Tangent hyper-planes to $\Gamma$ are denoted by $H_i$. $f$ and variants of it stand for the function to be approximated. $\varphi, \psi$ are scaling (aka ``father'') and wavelet (aka ``mother'') functions, respectively. The wavelet terms are indexed by scale $k$ and offset $b$. The support of a function $f$ is denoted by $\supp(f)$. \section{Related work} \label{sec:relatedWork} There is a huge body of theoretical work in neural network research. In this section, we review some classical theoretical results on neural network theory, and discuss several recent theoretical works. A well-known result, proved independently by Cybenko \cite{cybenko1989approximation}, Hornik \cite{hornik1991approximation} and others, states that Artificial Neural Networks (ANNs) with a single hidden layer of sigmoidal functions can approximate arbitrarily closely any compactly supported continuous function. This result is known as the ``Universal Approximation Property''. It does not relate, however, the number of hidden units to the approximation accuracy; moreover, the hidden layer might contain a very large number of units. Several works propose extensions of the universal approximation property (see, for example, \cite{girosi1990networks, girosi1995regularization} for a regularization perspective, also using radial basis activation functions, and \cite{leshno1993multilayer} for all activation functions that achieve the universal approximation property).
The first work to discuss the approximation error rate was done by Barron \cite{barron1993universal}, who showed that given a function $f:\mathbb{R}^m \rightarrow \mathbb{R}$ with bounded first moment of the magnitude of the Fourier transform \begin{equation} C_f=\int_{\mathbb{R}^m}|w||\tilde{f}(w)|\,dw < \infty, \label{eq:barronReq} \end{equation} there exists a neural net with a single hidden layer of $N$ sigmoid units, so that the output $f_N$ of the network satisfies \begin{equation} \|f-f_N \|_2^2 \le \frac{c_f}{N}, \end{equation} where $c_f$ is proportional to $C_f$. We note that the requirement~\eqref{eq:barronReq} gets more restrictive when the ambient dimension $m$ is large, and that the constant $c_f$ might scale with $m$. The dependence on $m$ is improved in \cite{mhaskar2004tractability}, \cite{kurkova2002comparison}. In particular, in \cite{mhaskar2004tractability} the constant is improved to be polynomial in $m$. For $r$ times differentiable functions, Mhaskar \cite{mhaskar1996neural} constructs a network with a single hidden layer of $N$ sigmoid units (with weights that do not depend on the target function) that achieves an approximation error rate \begin{equation} \|f-f_N \|_2^2 = \frac{c}{N^{2r/m}}, \end{equation} which is known to be optimal. This rate is also achieved (point-wise) in this manuscript, however, with respect to the dimension $d$ of the manifold, instead of $m$, which might be a significant difference when $d \ll m$. During the 1990s, a popular direction in neural network research was to construct neural networks in which the hidden units compute wavelet functions (see, for example, \cite{zhang1992wavelet}, \cite{pati1993analysis} and \cite{zhao1998multidimensional}). These works, however, do not give any specification of network architecture to obtain desired approximation properties. Several interesting recent theoretical results consider the representation properties of neural nets.
Eldan and Shamir~\cite{eldan2015power} construct a radial function that is efficiently expressible by a 3-layer net, while requiring exponentially many units to be represented accurately by shallower nets. In \cite{montufar2014number}, Montufar et al. show that DNNs can represent more complex functions than a shallow network with the same number of units can, where complexity is defined as the number of linear regions of the function. Tishby and Zaslavsky~\cite{tishby2015deep} propose to evaluate the representations obtained by deep networks via the information bottleneck principle, which is a trade-off between compression of the input representation and predictive ability of the output function; however, they do not provide any theoretical results. A recent work by Chui and Mhaskar~\cite{Chui2015deep}, which was brought to our attention, constructs a network with similar functionality to the network we construct in this manuscript. In their network the lower layers map the data to local coordinates on the manifold and the upper ones approximate a target function on each chart, however using B-splines. \section{Preliminaries} \label{sec:preliminaries} \subsection {Compact manifolds in $\mathbb{R}^m$} \label{sec:compactManifolds} In this section we review the concepts of \textit{smooth manifolds}, \textit{atlases} and \textit{partition of unity}, which will all play important roles in our construction. Let $\Gamma \subseteq \mathbb{R}^m$ be a compact $d$-dimensional manifold. We further assume that $\Gamma$ is smooth, and that there exists $\delta>0$ so that for all $x\in\Gamma$, $B(x,\delta)\cap \Gamma$ is diffeomorphic to a disc, with a map that is close to the identity. \begin{definition} A \textbf{chart} for $\Gamma$ is a pair $(U, \phi) $ such that $U \subseteq \Gamma$ is open and \begin{equation} \phi:U \rightarrow M, \end{equation} where $\phi$ is a homeomorphism and $M$ is an open subset of a Euclidean space.
\end{definition} One way to think of a chart is as a tangent plane at some point $x \in U \subseteq \Gamma$, such that the plane defines a Euclidean coordinate system on $U$ via the map $\phi$. \begin{definition} An \textbf{atlas} for $\Gamma$ is a collection $\{(U_i, \phi_i) \}_{i \in I}$ of charts such that $\cup_i U_i=\Gamma $. \end{definition} \begin{definition} Let $\Gamma$ be a smooth manifold. A \textbf{partition of unity} of $\Gamma$ w.r.t. an open cover $\{U_i \}_{i\in I}$ is a family of nonnegative smooth functions $\{\eta_i\}_{i\in I}$ such that for every $x \in \Gamma$, $\sum_i \eta_i(x)=1$ and for every $i$, $\supp(\eta_i) \subseteq U_i$. \end{definition} \begin{theorem} (Proposition $13.9$ in \cite{loring2008introduction}) \label{thm:PartitionOfUnity} Let $\Gamma$ be a compact manifold and $\{U_i \}_{i \in I}$ be an open cover of $\Gamma$. Then there exists a partition of unity $\{\eta_i\}_{i\in I}$ such that for each $i$, $\eta_i$ is in $C^\infty$, has compact support and $\supp(\eta_i)\subseteq U_i$. \end{theorem} \subsection {Harmonic analysis on spaces of homogeneous type} \label{sec:Harmonic} \subsubsection {Construction of wavelet frames} In this section we cite several standard results, mostly from \cite{deng2009harmonic}, showing how to construct a wavelet frame of $L_2(\mathbb{R}^d)$, and discuss some of its properties.
\begin{definition}(Definition $1.1$ in \cite{deng2009harmonic})\\ A \textbf{space of homogeneous type} $(\mathcal{X}, \mu, \delta)$ is a set $\mathcal{X}$ together with a measure $\mu$ and a quasi-metric $\delta$ (satisfying the triangle inequality up to a constant $A$) such that for every $x\in \mathcal{X},\; r>0$, \begin{itemize} \item $0<\mu(B(x,r))<\infty$, \item there exists a constant $A'$ such that $\mu(B(x,2r))\le A' \mu(B(x,r))$. \end{itemize} \end{definition} In this manuscript, we are interested in constructing a wavelet frame on $\mathbb{R}^d$, which, equipped with the Lebesgue measure and the Euclidean metric, is a space of homogeneous type. \begin{definition}(Definition $3.14$ in \cite{deng2009harmonic})\label{def:fatherMother}\\ Let $(\mathcal{X}, \mu, \delta)$ be a space of homogeneous type. A family of functions $\{S_k \}_{k\in \mathbb{Z}}$, $S_k: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{C}$ is said to be a family of \textbf{averaging kernels} (``father functions'') if conditions $3.14-3.18$ and $3.19$ with $\sigma=\epsilon$ in \cite{deng2009harmonic} are satisfied. A family $\{D_k \}_{k\in \mathbb{Z}}$, $D_k: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{C}$ is said to be a family of (``mother'') \textbf{wavelets} if for all $x,y\in \mathcal{X}$, \begin{equation} \label{eq:motherD} D_k(x,y) = S_k(x,y)-S_{k-1}(x,y), \end{equation} and $S_k, S_{k-1}$ are averaging kernels. \end{definition} By standard wavelet terminology, we denote \begin{equation} \psi_{k,b}(x) \equiv 2^{-\frac{k}{2}} D_{k}(x,b). \label{eq:mother} \end{equation} \begin{theorem} (A simplified version of Theorem $3.25$ in \cite{deng2009harmonic}) \label{thm:3.25}\\ Let $\{S_k \}$ be a family of averaging kernels.
Then there exist families $\{\psi_{k,b}\},\{\widetilde{\psi}_{k,b}\}$ such that for all $f \in L_2(\mathbb{R}^d)$, \begin{equation} f(x) = \sum_{(k,b)\in\Lambda} \langle f,\widetilde{\psi}_{k,b}\rangle\psi_{k,b}(x), \end{equation} where the functions $\psi_{k,b}$ are given by Equations \eqref{eq:motherD} and \eqref{eq:mother} and $\Lambda =\{(k,b) \in \mathbb{Z} \times \mathbb{R}^d : b \in 2^{-\frac{k}{d}}\mathbb{Z}^d\}$. \end{theorem} \begin{remark} \label{remark:smoothness} The kernels $\{S_{k}\}$ need to be such that for every $x \in \mathbb{R}^d$, $\sum_{(k,b)\in\Lambda} S_{k}(x,b)$ is sufficiently large. This is discussed in great generality in Chapter 3 of \cite{deng2009harmonic}. \end{remark} \begin{remark} \label{remark:duals} The functions $\widetilde{\psi}_{k,b}$ are called dual elements, and are also a wavelet frame of $L_2(\mathbb{R}^d)$. \end{remark} \subsection{Approximation of functions with sparse wavelet coefficients} \label{sec:barronSparse} In this section we cite a result from \cite{barron2008approximation} regarding the approximation of functions which have a sparse representation with respect to a dictionary $\mathcal{D}$, using finite linear combinations of dictionary elements. Let $f$ be a function in some Hilbert space $\mathcal{H}$ with inner product $\langle\cdot,\cdot \rangle$ and norm $\|\cdot \|$, and let $\mathcal{D} \subset \mathcal{H}$ be a dictionary, i.e., any family of functions $(g)_{g\in \mathcal{D}}$ with unit norm. Assume that $f$ can be represented as a linear combination of elements in $\mathcal{D}$ with absolutely summable coefficients, and denote the sum of the absolute values of the coefficients in the expansion of $f$ by $\|f\|_{\mathcal{L}_1}$. In \cite{barron2008approximation}, it is shown that $\mathcal{L}_1$ functions can be approximated using $N$ dictionary terms with squared error proportional to $\frac{1}{N}$.
As a bonus, we also get a greedy algorithm (though not always practical) for selecting the corresponding dictionary terms. The Orthogonal Greedy Algorithm (OGA) is a greedy algorithm that at the $k$'th iteration computes the residual \begin{equation} r_{k-1} := f-f_{k-1}, \end{equation} finds the dictionary element that is most correlated with it \begin{equation} g_k \in \arg\max_{g\in\mathcal{D}}|\langle r_{k-1},g \rangle |, \end{equation} and defines a new approximation \begin{equation} f_k := P_kf, \end{equation} where $P_k$ is the orthogonal projection operator onto $\text{span}\{g_1,...,g_k\}$. \begin{theorem}(Theorem 2.1 from \cite{barron2008approximation}) \label{thm:BarronGreedy} The error $r_N = f - f_N$ of the OGA satisfies \begin{equation} \|f-f_N \| \le \|f\|_{\mathcal{L}_1}(N+1)^{-1/2}. \end{equation} \end{theorem} Clearly, for $\mathcal{H} = L_2(\mathbb{R}^d)$ we can choose the dictionary to be the wavelet frame given by \begin{equation} \mathcal{D} = \{\psi_{k,b}: (k,b) \in \mathbb{Z} \times \mathbb{R}^d,\ b\in 2^{-\frac{k}{d}}\mathbb{Z}^d \}. \end{equation} \begin{remark} \label{remark:equivalentFrames} Let $\mathcal{D} =\{\psi_{k,b} \}$ be a wavelet frame that satisfies the regularities in conditions $3.14-3.19$ in \cite{deng2009harmonic}. Then if a function $f$ is in $\mathcal{L}_1$ with respect to $\mathcal{D}$, it is also in $\mathcal{L}_1$ with respect to any other wavelet frame that satisfies the same regularities. In other words, having expansion coefficients in $l_1$ does not depend on the specific choice of wavelets (as long as the regularities are satisfied). The idea behind the proof of this claim is explained in appendix~\ref{app:equivFrame}.
\end{remark} \begin{remark} Section $4.5$ in \cite {deng2009harmonic} gives a way to check whether a function $f$ has sparse coefficients without actually calculating the coefficients: \begin{equation} f \in \mathcal{L}_1 \text{ iff } \sum_{k \in \mathbb{Z}} 2^{k/2} \|f * \psi_{k,0}\|_1<\infty, \end{equation} i.e., one can determine if $f \in \mathcal{L}_1 $ without explicitly computing its wavelet coefficients; rather, by convolving $f$ with non-shifted wavelet terms in all scales. \end{remark} \section{Approximating functions on manifolds using deep neural nets} \label{sec:Main} In this section we describe in detail the steps in our construction of deep networks, which are designed to approximate functions on smooth manifolds. The main steps in our construction are the following: \begin{enumerate} \item We construct a frame of $L_2(\mathbb{R}^d)$ in which the frame elements can be constructed from rectified linear units (see Section \ref{sec:wavConstruction}). \item Given a $d$-dimensional manifold $\Gamma \subset \mathbb{R}^m$, we construct an atlas for $\Gamma$ by covering it with open balls (see Section \ref{sec:creatingAtlas}). \item We use the open cover to obtain a partition of unity of $\Gamma$ and consequently represent any function on $\Gamma$ as a sum of functions on $\mathbb{R}^d$ (see section \ref{sec:sumOfFunctions}). \item We show how to extend the wavelet terms in the wavelet expansion, which are defined on $\mathbb{R}^d$, to $\mathbb{R}^m$ in a way that depends on the curvature of the manifold $\Gamma$ (see Section \ref{sec:extension}). \end{enumerate} \subsection{Constructing a wavelet frame from rectifier units} \label{sec:wavConstruction} In this section we show how Rectified Linear Units (ReLU) can be used to obtain a wavelet frame of $L_2(\mathbb{R}^d)$. The construction of wavelets from rectifiers is fairly simple, and we refer to results from Section~\ref{sec:Harmonic} to show that they obtain a frame of $L_2(\mathbb{R}^d)$. 
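To make the construction concrete before stating it formally, here is a minimal numerical sketch of the one-dimensional case ($d=1$, where the normalizing constant is $\frac{1}{8}$ and the dimension-dependent shift vanishes); the function names are ours, and the two checks verify the unit mass of the scaling function and the zero mean of the wavelet:

```python
import numpy as np

def rect(x):                     # the rectifier max{0, x}
    return np.maximum(0.0, x)

def t(x):                        # trapezoid built from four rectifiers
    return rect(x + 3) - rect(x + 1) - rect(x - 1) + rect(x - 3)

def phi(x):                      # scaling ("father") function for d = 1, C_1 = 1/8
    return 0.125 * rect(t(x))    # the -2(d-1) shift is 0 when d = 1

def psi(x):                      # wavelet: phi(x) - (1/2) phi(2^{-1/d} x), d = 1
    return phi(x) - 0.5 * phi(0.5 * x)

xs = np.linspace(-8.0, 8.0, 1_000_001)   # grid covering both supports
dx = xs[1] - xs[0]
print(np.isclose((phi(xs) * dx).sum(), 1.0))              # → True: unit mass
print(np.isclose((psi(xs) * dx).sum(), 0.0, atol=1e-6))   # → True: zero mean
```

For general $d$, the scaling function sums $d$ such trapezoids inside a single rectifier, which is what keeps the whole construction expressible with rectifier units only.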
The rectifier activation function is defined on $\mathbb{R}$ as \begin{equation} \rect(x) = \max\{0,x\}. \end{equation} We define a trapezoid-shaped function $t:\mathbb{R} \rightarrow\mathbb{R}$ by \begin{equation} t(x) = \rect(x+3) - \rect(x+1) - \rect(x-1) + \rect(x-3). \end{equation} We then define the scaling function $\varphi:\mathbb{R}^d \rightarrow \mathbb{R}$ by \begin{equation} \varphi(x) = C_d\rect\left(\sum_{j=1}^d t(x_j) -2(d-1)\right), \label{eq:father} \end{equation} where the constant $C_d$ is such that \begin{equation} \int_{\mathbb{R}^d} \varphi(x)dx=1; \end{equation} for example, $C_1=\frac{1}{8}$. Following the construction in Section \ref{sec:Harmonic}, we define \begin{equation} S_k(x,b) = 2^k\varphi(2^{\frac{k}{d}}(x-b)). \label{eq:Sdef} \end{equation} \begin{lemma} \label{lemma:conditions} The family $\{S_k \}$ is a family of averaging kernels. \end{lemma} The proof is given in Appendix \ref{app:conditions}. Next we define the (``mother'') wavelet as \begin{equation} D_k(x,b) = S_k(x,b) - S_{k-1}(x,b), \end{equation} and denote \begin{equation} \psi_{k,b}(x) \equiv 2^{-\frac{k}{2}}D_k(x,b), \end{equation} and \begin{align} \psi(x) &\equiv \psi_{0,0}(x)\\ & = D_0(x,0)\\ & = S_0(x,0) - S_{-1}(x,0)\\ & = \varphi(x) - 2^{-1}\varphi(2^{-\frac{1}{d}}x). \end{align} Figure \ref{fig:mother} shows the construction of $\varphi$ and $\psi$ for $d=1,2$. \begin{figure}[ht!] \centering \includegraphics[width=2in,height=2in]{triangle.jpg} \includegraphics[width=2in,height=2in]{father1D.jpg} \includegraphics[width=2in,height=2in]{mother1D.jpg} \includegraphics[width=2in,height=2in]{father2D.jpg} \includegraphics[width=2in,height=2in]{mother2D.jpg} \includegraphics[width=2in,height=2in]{mother2Da.jpg} \includegraphics[width=2in,height=2in]{mother2Db.jpg} \caption{Top row, from left: the trapezoid function $t$, and the functions $\varphi,\psi$ on $\mathbb{R}$.
Bottom rows: the functions $\varphi,\psi$ on $\mathbb{R}^2$ from several points of view.} \label{fig:mother} \end{figure} \begin{remark} \label{remark:motherFather} We can see that \begin{align} \psi_{k,b}(x) &= 2^{-\frac{k}{2}}D_k(x,b)\\ & = 2^{-\frac{k}{2}}(S_k(x,b) - S_{k-1}(x,b)) \\ & = 2^{-\frac{k}{2}}(2^k\varphi(2^\frac{k}{d}(x-b)) - 2^{k-1}\varphi(2^\frac{k-1}{d}(x-b)))\\ & = 2^\frac{k}{2}\left(\varphi(2^\frac{k}{d}(x-b)) - 2^{-1}\varphi(2^\frac{k-1}{d}(x-b))\right)\\ & = 2^\frac{k}{2}\psi\left(2^\frac{k}{d}(x-b)\right). \label{eq:motherFromFather} \end{align} \end{remark} \begin{remark} \label{remark:NumUnitsForMother} With the above construction, $\varphi$ can be computed using a network with $4d$ rectifier units in the first layer and a single unit in the second layer. Hence every wavelet term $\psi_{k,b}$ can be computed using $8d$ rectifier units in the first layer, 2 rectifier units in the second layer and a single linear unit in the third layer. From this, a sum of $N$ wavelet terms can be computed using a network with $8dN$ rectifiers in the first layer, $2N$ rectifiers in the second layer and a single linear unit in the third layer. \end{remark} From Theorem \ref{thm:3.25} and the above construction we then get the following lemma: \begin{lemma} \label{lemma:frame} $\{\psi_{k,b} : k \in \mathbb{Z},\ b \in 2^{-\frac{k}{d}}\mathbb{Z}^d\}$ is a frame of $L_2(\mathbb{R}^d)$. \end{lemma} Next, the following lemma uses properties of the above frame to obtain point-wise error bounds in the approximation of compactly supported functions $f \in C^2$. \begin{lemma} \label{lemma:c2Approx} Let $f \in L_2(\mathbb{R}^d)$ be compactly supported and twice differentiable, and let $\|\nabla^2 f \|_{op}$ be bounded. Then for every $K \in \mathbb{N} \cup \{0 \}$ there exists a combination $f_K$ of terms up to scale $K$ so that for every $x \in \mathbb{R}^d$, \begin{equation} |f(x) - f_K(x)| = O\left(2^{-\frac{2K}{d}}\right).
\end{equation} \end{lemma} The proof is given in Appendix \ref{app:c2Approx}. \subsection{Creating an atlas} \label{sec:creatingAtlas} In this section we specify the number of charts that we would like to have in order to obtain an atlas for a compact $d$-dimensional manifold $\Gamma \subset \mathbb{R}^m$. For our purpose here we are interested in a small atlas. We would like the size $C_\Gamma$ of such an atlas to depend on the curvature of $\Gamma$: the lower the curvature is, the smaller the number of charts we will need for $\Gamma$. Following the notation of Section \ref{sec:compactManifolds}, let $\delta>0$ be such that for all $x\in\Gamma$, $B(x,\delta)\cap \Gamma$ is diffeomorphic to a disc, with a map that is close to the identity. We then cover $\Gamma$ with balls of radius $\frac{\delta}{2}$. The number of such balls that are required to cover $\Gamma$ is \begin{equation} C_\Gamma \le \ceil*{\frac{2^d SA(\Gamma)}{\delta^d}T_d}, \end{equation} where $SA(\Gamma)$ is the surface area of $\Gamma$, and $T_d$ is the thickness of the covering (which measures how much the balls need to overlap). \begin{remark} The thickness $T_d$ scales with $d$, though rather slowly: by \cite{conway1993sphere}, there exist coverings with $T_d \le d\log d + 5d$; for example, for $d=24$ there exists a covering with thickness about $7.9$. \end{remark} A covering of $\Gamma$ by such a collection of balls defines an open cover of $\Gamma$ by \begin{equation} U_i \equiv B\left(x_i, \delta\right) \cap \Gamma. \end{equation} Let $H_i$ denote the hyperplane tangent to $\Gamma$ at $x_i$. We can now define an atlas by $\{(U_i, \phi_i)\}_{i=1}^{C_\Gamma}$, where $\phi_i$ is the orthogonal projection from $U_i$ onto $H_i$. The above construction is sketched in Figure \ref{fig:manifold}. \begin{figure}[ht!]
\centering \includegraphics[width=3in,height=2in]{manifold2.jpg} \caption{Construction of the atlas.} \label{fig:manifold} \end{figure} Let $\tilde{\phi}_i$ be the extension of $\phi_i$ to $\mathbb{R}^m$, i.e., the orthogonal projection onto $H_i$. The above construction has two important properties, summarized in Lemma \ref{lemma:radii}. \begin{lemma} \label{lemma:radii} For every $x \in U_i$, \begin{equation} \|x - \phi_i(x) \|_2 \le r_1 \le \frac{\delta}{2}, \end{equation} and for every $x \in \Gamma\setminus U_i$ such that $\tilde{\phi}_i(x) \in \phi_i(U_i)$, \begin{equation} \|x - \tilde{\phi}_i(x) \|_2 \ge r_2 = \frac{\sqrt{3}}{2}\delta. \end{equation} \end{lemma} \subsection {Representing a function on a manifold as a sum of functions in $\mathbb{R}^d$} \label{sec:sumOfFunctions} Let $\Gamma$ be a compact $d$-dimensional manifold in $\mathbb{R}^m$, let $f:\Gamma \rightarrow \mathbb{R}$, let $A =\{(U_i, \phi_i) \}_{i=1}^{C_\Gamma}$ be an atlas obtained by the covering in Section~\ref{sec:creatingAtlas}, and let $\tilde{\phi}_i$ be the extension of $\phi_i$ to $\mathbb{R}^m$. $\{U_i\}$ is an open cover of $\Gamma$, hence by Theorem \ref{thm:PartitionOfUnity} there exists a corresponding partition of unity, i.e., a family of compactly supported $C^\infty$ functions $\{\eta_i\}_{i=1}^{C_\Gamma}$ such that \begin{itemize} \item $\eta_i:\Gamma \rightarrow [0,1]$, \item $\supp(\eta_i) \subseteq U_i$, \item $\sum_i \eta_i=1$. \end{itemize} Let $f_i$ be defined by \begin{equation} \label{eq:fi} f_i(x) \equiv f(x)\eta_i(x), \end{equation} and observe that $\sum_i f_i=f$. We denote the image $\phi_i(U_i)$ by $I_i$. Note that $ I_i \subset H_i$, i.e., $I_i$ lies in a $d$-dimensional hyperplane $H_i$ which is isomorphic to $\mathbb{R}^d$.
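A toy numerical illustration of this decomposition, for the unit circle ($d=1$, $m=2$) covered by three arcs: the bump-based weights below are one standard way to realize a partition of unity as in Theorem \ref{thm:PartitionOfUnity} (all names and parameter choices are ours):

```python
import numpy as np

def bump(s):
    """Smooth (C-infinity) bump supported on (-1, 1)."""
    out = np.zeros_like(s, dtype=float)
    inside = np.abs(s) < 1
    out[inside] = np.exp(-1.0 / (1.0 - s[inside] ** 2))
    return out

# Three arcs U_i of half-width h around centers c_i; h > pi/3 ensures coverage.
centers = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
h = 1.2
theta = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)

def circ_dist(a, b):             # distance along the circle
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

w = np.array([bump(circ_dist(theta, c) / h) for c in centers])
eta = w / w.sum(axis=0)          # partition of unity subordinate to the arcs

f = np.cos(3 * theta) + 0.5      # a function on the circle
f_i = eta * f                    # f_i = f * eta_i, supported in the i-th arc
print(np.allclose(eta.sum(axis=0), 1.0), np.allclose(f_i.sum(axis=0), f))  # → True True
```

Each $f_i$ is smooth and supported in a single arc, so it can be pushed to the corresponding tangent line by the chart map, which is exactly how the construction proceeds below.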
We define $\hat{f}_i$ on $\mathbb{R}^d$ as \begin{equation} \label{eq:fHat} \hat{f}_i(x) = \begin{cases} f_i(\phi_i^{-1}(x)) & x \in I_i,\\ 0 & \text{otherwise,} \end{cases} \end{equation} and observe that $\hat{f}_i$ is compactly supported. This construction gives the following lemma. \begin{lemma} \label{lemma:sumFi} For all $x \in \Gamma$, \begin{equation} \sum_{\{i:x \in U_i\}} \hat{f}_i(\phi_i(x)) = f(x). \end{equation} \end{lemma} Assuming $\hat{f}_i \in L_2(\mathbb{R}^d)$, by Lemma \ref{lemma:frame} it has a wavelet expansion using the frame that was constructed in Section \ref{sec:wavConstruction}. \subsection {Extending the wavelet terms in the approximation of $\hat{f}_i$ to $\mathbb{R}^m$}\label{sec:extension} Assume that $\hat{f}_i \in L_2(\mathbb{R}^d)$ and let \begin{equation} \label{eq:fiWaveletApprox} \hat{f_i} = \sum_{(k,b)}\alpha_{k,b}\psi_{k,b} \end{equation} be its wavelet expansion, where $\alpha_{k,b} \in \mathbb{R}$ and $\psi_{k,b}$ is defined on $\mathbb{R}^d$. We now show how to extend each $\psi_{k,b}$ to $\mathbb{R}^m$. Assume (for now) that the coordinate system is such that the first $d$ coordinates are the local coordinates (i.e., the coordinates on $H_i$) and the remaining $m-d$ coordinates are of the directions which are orthogonal to $H_i$. Intuitively, we would like to extend the wavelet terms on $H_i$ to $\mathbb{R}^m$ so that they remain constant until they ``hit'' the manifold, and then die off before they ``hit'' the manifold again. By Lemma \ref{lemma:radii} it therefore suffices to extend each $\psi_{k,b}$ to $\mathbb{R}^m$ so that in each of the $m-d$ orthogonal directions, $\psi_{k,b}$ will be constant in $[-\frac{r_1}{\sqrt{m-d}}, \frac{r_1}{\sqrt{m-d}}]$ and will have a support which is contained in $[-\frac{r_2}{\sqrt{m-d}}, \frac{r_2}{\sqrt{m-d}}]$.
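Such a one-dimensional profile can itself be built from four rectifier units per orthogonal direction, in the same spirit as the trapezoid $t$ of Section \ref{sec:wavConstruction}; a small sketch (the slope normalization and all names are ours, and the widths are placeholders):

```python
import numpy as np

def rect(x):
    return np.maximum(0.0, x)

def t_r(x, r1, r2):
    """Trapezoid of height 2: flat on [-r1, r1], supported on [-r2, r2],
    built from four rectifier units (requires r2 > r1 > 0)."""
    slope = 2.0 / (r2 - r1)      # slope of the linear decay on [r1, r2]
    return slope * (rect(x + r2) - rect(x + r1) - rect(x - r1) + rect(x - r2))

x = np.array([-3.0, -1.5, 0.0, 1.0, 2.0, 3.0])
print(t_r(x, r1=1.0, r2=2.0))    # → [0. 1. 2. 2. 0. 0.]
```

Taking the two widths to be $\frac{r_1}{\sqrt{m-d}}$ and $\frac{r_2}{\sqrt{m-d}}$ gives exactly the constant-then-decay behavior requested above, at the cost of four rectifiers per orthogonal direction.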
Recall from Remark \ref{remark:motherFather} that each of the wavelet terms $\psi_{k,b}$ in Equation~\eqref{eq:fiWaveletApprox} is defined on $\mathbb{R}^d$ by \begin{equation} \psi_{k,b}(x) = 2^\frac{k}{2}\left(\varphi(2^\frac{k}{d}(x-b))- 2^{-1}\varphi(2^\frac{k-1}{d}(x-b))\right), \end{equation} and recall that as in Equation~\eqref{eq:father}, the scaling function $\varphi$ was defined on $\mathbb{R}^d$ by \begin{equation} \varphi(x) = C_d\rect\left(\sum_{j=1}^d t(x_j) -2(d-1)\right). \end{equation} We extend $\psi_{k,b}$ to $\mathbb{R}^m$ by \begin{equation}\label{eq:extendedMother} \psi_{k,b}(x) \equiv 2^\frac{k}{2}\left(\varphi_r(2^\frac{k}{d}(x-b))- 2^{-1}\varphi_r(2^\frac{k-1}{d}(x-b))\right), \end{equation} where \begin{equation}\label{eq:extendedFather} \varphi_r(2^\frac{k}{d}(x-b)) \equiv C_d\rect\left(\sum_{j=1}^d t(2^\frac{k}{d}(x_j-b_j)) +\sum_{j=d+1}^m t_r(x_j) -2(m-1)\right), \end{equation} and $t_r$ is a trapezoid function which is supported on $[-\frac{r_2}{\sqrt{m-d}}, \frac{r_2}{\sqrt{m-d}}]$, whose top (smaller) base is $[-\frac{r_1}{\sqrt{m-d}}, \frac{r_1}{\sqrt{m-d}}]$, and whose height is 2. This definition gives $\psi_{k,b}$ a constant height up to distance $r_1$ from $H_i$, followed by a linear decay, until it vanishes at distance $r_2$. By construction we then obtain the following lemma. \begin{lemma} \label{lemma:outsideSupport} For every chart $(U_i, \phi_i)$ and every $x \in \Gamma \setminus U_i$ such that $\tilde{\phi}_i(x) \in \phi_i(U_i)$, $x$ is outside the support of every wavelet term corresponding to the $i$'th chart. \end{lemma} \begin{remark} \label{remark:extensionUnits} Since the $m-d$ additional trapezoids in Equation~\eqref{eq:extendedFather} neither scale with $k$ nor shift with $b$, they can be shared across all scaling terms in Equations~\eqref{eq:extendedMother} and~\eqref{eq:fiWaveletApprox}, so that the extension of the wavelet terms from $\mathbb{R}^d$ to $\mathbb{R}^m$ can be computed with $4(m-d)$ rectifiers.
\end{remark} Finally, in order for this construction to work for all $i=1,\ldots,C_\Gamma$, the input $x\in \mathbb{R}^m$ of the network can first be mapped to $\mathbb{R}^{mC_\Gamma}$ by a linear transformation so that each of the $C_\Gamma$ blocks of $m$ coordinates gives the local coordinates on $\Gamma$ in its first $d$ coordinates and the coordinates on the orthogonal subspace in the remaining $m-d$ coordinates. These maps are essentially the orthogonal projections $\tilde{\phi}_i$. \section{Specifying the required size of the network} \label{sec:counting} In the construction of Section \ref{sec:Main}, we approximate a function $f \in L_2(\Gamma)$ using a depth $4$ network, where the first layer computes the local coordinates in every chart in the atlas, the second layer computes $\rect$ functions that are to form trapezoids, the third layer computes scaling functions of the form $\varphi(2^\frac{k}{d}(x-b))$ for various $k,b$, and the fourth layer consists of a single node which computes \begin{equation} \hat{f} = \sum_{i=1}^{C_\Gamma} \sum_{(k,b)}\psi^{(i)}_{k,b}, \end{equation} where $\psi^{(i)}_{k,b}$ is a wavelet term on the $i$'th chart. This network is sketched in Figure \ref{fig:netSketch}. \begin{figure}[ht!] \centering \includegraphics[width=4.5in,height=2.5in]{networkSketch1.jpg} \caption{A sketch of the network.} \label{fig:netSketch} \end{figure} From this construction, we obtain the following theorem, which is the main result of this work: \begin{theorem}\label{thm:main} Let $\Gamma$ be a $d$-dimensional manifold in $\mathbb{R}^m$, and let $f \in L_2(\Gamma)$. Let $\{(U_i, \phi_i)\}$ be an atlas of size $C_\Gamma$ for $\Gamma$, as in Section \ref{sec:creatingAtlas}.
Then $f$ can be approximated using a 4-layer network with $mC_\Gamma$ linear units in the first hidden layer, $8d\sum_{i=1}^{C_\Gamma}N_i + 4C_\Gamma(m-d)$ rectifier units in the second hidden layer, $2\sum_{i=1}^{C_\Gamma}N_i$ rectifier units in the third layer and a single linear unit in the fourth (output) layer, where $N_i$ is the number of wavelet terms that are used for approximating $f$ on the $i$'th chart. \end{theorem} \begin{proof} As in Section \ref{sec:sumOfFunctions}, we construct functions $\hat{f}_i$ on $\mathbb{R}^d$ as in Equation (\ref{eq:fHat}), which, by Lemma \ref{lemma:sumFi}, have the property that for every $x \in \Gamma$, $\sum_{\{i:x \in U_i\}} \hat{f}_i(\phi_i(x)) = f(x)$. The fact that $\hat{f}_i$ is compactly supported means that its wavelet approximation converges to zero outside $\phi_i(U_i)$. Together with Lemma \ref{lemma:outsideSupport}, we then get that an approximation of $f$ is obtained by summing up the approximations of all the $\hat{f}_i$'s. The first layer of the network consists of $mC_\Gamma$ linear units and computes the map described in the last paragraph of Section \ref{sec:extension}, i.e., it linearly transforms the input into $C_\Gamma$ blocks, each of dimension $m$, so that in each block $i$ the first $d$ coordinates are with respect to the tangent hyperplane $H_i$ (i.e., give the representation $\tilde{\phi}_i(x)$) and the remaining $m-d$ coordinates are with respect to directions orthogonal to $H_i$. For each $i=1,\ldots,C_\Gamma$, we approximate $\hat{f}_i$ to some desired approximation level $\delta$ using $N_i < \infty$ wavelet terms. By Remark \ref{remark:NumUnitsForMother}, $\hat{f}_i$ can be approximated using $8dN_i$ rectifiers in the second layer, $2N_i$ rectifiers in the third layer and a single unit in the fourth layer. By Remark \ref{remark:extensionUnits}, on every chart the wavelet terms in all scales and shifts can be extended to $\mathbb{R}^m$ using (the same) $4(m-d)$ rectifiers in the second layer.
Putting this together, we get that to approximate $f$ one needs a 4-layer network with $mC_\Gamma$ linear units in the first hidden layer, $8d\sum_{i=1}^{C_\Gamma}N_i + 4C_\Gamma(m-d)$ rectifier units in the second hidden layer, $2\sum_{i=1}^{C_\Gamma}N_i$ rectifier units in the third layer and a single linear unit in the fourth (output) layer. \end{proof} \begin{remark} For a sufficiently small radius $\delta$ in the sense of Section~\ref{sec:compactManifolds}, the desired properties of $\hat{f}_i$ (i.e., being in $L_2$ and possibly having sparse coefficients or being twice differentiable) imply similar properties of $f$. \end{remark} \begin{remark} We observe that the dependence on the dimension $m$ of the ambient space in the first and second layers is through $C_\Gamma$, which depends on the curvature of the manifold. The number $N_i$ of wavelet terms in the $i$'th chart affects the number of units in the second layer only through the dimension $d$ of the manifold, not through $m$. The sizes of the third and fourth layers do not depend on $m$ at all. \end{remark} Finally, assuming regularity conditions on the $\hat{f}_i$ allows us to bound the number $N_i$ of wavelet terms needed for the approximation of $\hat{f}_i$. In particular, we consider two specific cases: $\hat{f}_i \in \mathcal{L}_1$ and $\hat{f}_i \in C^2$ with bounded second derivative. \begin{corollary}\label{cor:sparse} If $\hat{f}_i \in \mathcal{L}_1$ (i.e., $\hat{f}_i$ has expansion coefficients in $l_1$), then by Theorem \ref{thm:BarronGreedy}, $\hat{f}_i$ can be approximated by a combination $\hat{f}_{i,N_i}$ of $N_i$ wavelet terms so that \begin{equation} \|\hat{f}_i- \hat{f}_{i,N_i}\|_2 \le \frac{\| \hat{f}_i\|_{\mathcal{L}_1}}{\sqrt{N_i+1}}.
\end{equation} Consequently, denoting the output of the net by $\tilde{f}$, $N \equiv\max_i\{N_i\}$ and $M \equiv \max_i \|\hat{f}_i \|_{\mathcal{L}_1}$, we obtain \begin{equation} \|f-\tilde{f} \|_2^2 \le \frac{C_\Gamma M}{N+1}, \end{equation} using $c_1 + c_2N$ units, where $c_1=C_\Gamma(m + 4(m-d)) + 1$ and $c_2 = (8d+2)C_\Gamma$. \end{corollary} \begin{corollary}\label{cor:c2} If each $\hat{f}_i$ is twice differentiable and $\|\nabla^2 \hat{f}_i\|_{op}$ is bounded, then by Lemma \ref{lemma:c2Approx} $\hat{f}_i$ can be approximated by $\hat{f}_{i,K}$ using all terms up to scale $K$ so that for every $x \in \mathbb{R}^d$ \begin{equation} |\hat{f}_i(x)- \hat{f}_{i,K}(x)| = O \left(2^{-\frac{2K}{d}}\right). \end{equation} Observe that the grid spacing in the $k$'th level is $2^{-\frac{k}{d}}$. Therefore, since $f$ is compactly supported, there are $O\left(\left( 2^\frac{k}{d} \right)^d\right)=O\left(2^k\right)$ terms in the $k$'th level. Altogether, on the $i$'th chart there are $O\left(2^{K+1}\right)$ terms in levels up to $K$. Writing $N\equiv 2^{K+1}$, we get a point-wise error rate of $N^{-\frac{2}{d}}$ using $c_1 + c_2N$ units, where $c_1=C_\Gamma(m + 4(m-d)) + 1$ and $c_2 = (8d+2)C_\Gamma$. \end{corollary} \begin{remark} The unit count in Theorem~\ref{thm:main} and Corollaries \ref{cor:sparse} and \ref{cor:c2} is overly pessimistic, in the sense that we assume that the sets of wavelet terms in the expansions of $\hat{f}_i$ and $\hat{f}_j$ do not intersect for distinct chart indices $i,j$. A tighter bound can be obtained if we allow wavelet functions to be shared across different charts, in which case the term $C_\Gamma\sum N_i$ in Theorem~\ref{thm:main} can be replaced by the total number of distinct wavelet terms that are used on all charts, hence decreasing the constant $c_2$. In particular, in Corollary \ref{cor:c2} we are using all terms up to the $K$'th scale on each chart. In this case the constant $c_2=8d+2$.
\end{remark} \begin{remark} The linear units in the first layer can be simulated using ReLU units with large positive biases, while adjusting the biases of the units in the second layer. Hence the first layer can contain ReLU units instead of linear units. \end{remark} \section{Conclusions} \label{sec:conclusions} The construction presented in this manuscript can be divided into two main parts: analytical and topological. In the analytical part, we constructed a wavelet frame of $L_2(\mathbb{R}^d)$, where the wavelets are computed from rectified linear units. In the topological part, given training data on a $d$-dimensional manifold $\Gamma$ we constructed an atlas and represented any function on $\Gamma$ as a sum of functions that are defined on the charts. We then used rectifier units to extend the wavelet approximations of these functions from $\mathbb{R}^d$ to the ambient space $\mathbb{R}^m$. This construction allows us to specify the size of a depth-4 neural net needed to approximate a given function $f$ on the manifold $\Gamma$. We show how the specified size depends on the complexity of the function (manifested in the number of wavelet terms in its approximation) and the curvature of the manifold (manifested in the size of the atlas). In particular, we take advantage of the fact that $d$ can possibly be much smaller than $m$ to construct a network whose size depends more strongly on $d$ than on $m$. In addition, we also obtain a squared-error rate for the approximation of functions with sparse wavelet expansions and a point-wise error rate for twice differentiable functions. The network architecture and corresponding weights presented in this manuscript are hand-crafted to achieve the approximation properties stated above; it is unlikely that such a network would be the result of a standard training process.
Hence, the importance of the results presented in this manuscript lies in describing the theoretical approximation capability of neural nets, not in describing trained nets as they are used in practice. Several extensions of this work can be considered. First, a more efficient wavelet representation can be obtained on each chart if one allows its wavelets to be non-isotropic (that is, to scale differently in every dimension) and not necessarily axis aligned, but rather to correspond to the level sets of the function being approximated. When the function is relatively constant in certain directions, the wavelet terms can be ``stretched'' in these directions. This can be done using curvelets. Second, we conjecture that in the representation obtained as an output of convolutional and pooling layers, the data concentrates near a collection of low dimensional manifolds embedded in a high dimensional space, which is our starting point in the current manuscript. We think that this is a result of the application of the same filters to all data points. Assuming our conjecture is true, one can apply our construction to the output of convolutional layers, and thereby obtain a network topology which is similar to standard convolutional networks, namely fully connected layers on top of convolutional ones. This would make our arguments here applicable to cases where the data in its initial representation does not concentrate near a low dimensional manifold, but its hidden representation does. Finally, we remark that the choice of rectifier units to construct our wavelet frame is convenient, but somewhat arbitrary. Similar wavelet frames can be constructed from any function (or combination of functions) that can be used to construct ``bump'' functions, i.e., functions which are localized and have fast decay.
For example, general sigmoid functions $\sigma:\mathbb{R} \rightarrow \mathbb{R}$, which are monotonic and have the properties \begin{equation} \lim_{x \rightarrow -\infty}\sigma(x)=0 \text{ and } \lim_{x \rightarrow \infty}\sigma(x)=1, \end{equation} can be used to construct a frame in a similar way, by computing ``smooth'' trapezoids. Recall also that by Remark \ref{remark:equivalentFrames}, any two such frames are equivalent. \section*{Acknowledgements} The authors thank Stefan Steinerberger and Roy Lederman for their help, and Andrew Barron, Ed Bosch, Mark Tygert and Yann LeCun for their comments. Alexander Cloninger is supported by NSF Award No.\ DMS-1402254.
https://arxiv.org/abs/1701.00665
A note on split extensions of bialgebras
We prove a universal characterization of Hopf algebras among cocommutative bialgebras over a field: a cocommutative bialgebra is a Hopf algebra precisely when every split extension over it admits a join decomposition. We also explain why this result cannot be extended to a non-cocommutative setting.
\section{Introduction} An elementary result in the theory of modules says that in any short exact sequence \[ \xymatrix{0 \ar[r] & K \ar[r]^-{k} & X \ar@<.5ex>[r]^-{f} & Y \ar@{->}@<.5ex>[l]^-{s} \ar[r]& 0} \qquad\qquad f\circ s=1_{Y} \] where the cokernel $f$ admits a section $s$, the middle object $X$ decomposes as a direct sum $X\cong K\oplus Y$. If, however, the given sequence is a short exact sequence of, say, groups or Lie algebras, then this is of course no longer true: we can at most deduce that $X$ is a semidirect product $K\rtimes Y$ of $K$ and $Y$. In a fundamental way, this interpretation depends on, or even amounts to, the fact that $X$ is generated by its subobjects $k(K)$ and $s(Y)$. One may argue that, in a non-additive setting, the join decomposition $X=k(K)\vee s(Y)$ in the lattice of subobjects of~$X$ is what replaces the direct sum decomposition, valid for split extensions of modules. When the given split extension is a sequence of cocommutative bialgebras (over a commutative ring with unit~$\ensuremath{\mathbb{K}}$), we may ask whether such a join decomposition of the middle object in the sequence always exists. Although kernels are not as nice as one could expect \cite{Agore,AndDev}, it is not difficult to see that \emph{if $Y$ is a Hopf algebra} then the answer is yes. The main point of this note is that this happens \emph{only then}, at least when~$\ensuremath{\mathbb{K}}$ is an algebraically closed field. We shall prove, in other words, the following new universal characterization of cocommutative Hopf algebras among cocommutative bialgebras over $\ensuremath{\mathbb{K}}$: \begin{quote} \emph{All split extensions over a bialgebra $Y$ admit a join decomposition if and only if $Y$ is a Hopf algebra.} \end{quote} This result is along the lines of, and is actually a variation on, a similar characterization of groups among monoids, recently obtained in~\cite{MRVdL-TCOGAM,GM-ACS}.
There the authors show that all split extensions (of monoids) over a monoid $Y$ admit a join decomposition if and only if $Y$ is a group. In fact something stronger than the existence of a join decomposition may be proved in a more general context; this will be the subject of Section~\ref{Internal monoids}, where we explore some basic aspects of split extensions of cocommutative bialgebras. In particular, we show that over a Hopf algebra, all split extensions of cocommutative bialgebras admit a join decomposition (Corollary~\ref{Corollary Join Decomposition}). In Section~\ref{Characterization} we focus on the other implication and prove that among cocommutative bialgebras over an algebraically closed field, only Hopf algebras admit join decompositions of their split extensions (Theorem~\ref{Theorem 2}). In the final Section~\ref{Cocommutativity} we explain why the constraint that the bialgebras in this characterization are cocommutative is essential. As it turns out, in a non-cocommutative setting, even the very weakest universal join decomposition condition is too strong. \section{Split extensions over Hopf algebras}\label{Internal monoids} A \textbf{split extension} in a pointed category with finite limits $\ensuremath{\mathcal{C}}$ is a diagram \[ \xymatrix{K \ar[r]^-{k} & X \ar@<.5ex>[r]^-{f} & Y \ar@{->}@<.5ex>[l]^-{s}} \] where $k$ is a kernel and $s$ is a section of $f$. So $f\circ s=1_{Y}$, but a priori we are not asking that $f$ is a cokernel of $k$, so that $(k,f)$ is a short exact sequence, and this is not automatically the case. We do always have that $K$ and $Y$, considered as subobjects of $X$, have a trivial intersection. Indeed, using that $k$ is the pullback of~$0\to Y$ along $f$, it is easy to check that the pullback of~$k$ and~$s$ is zero. 
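For groups, both the trivial intersection and the expected join decomposition can be checked by hand; the following Python sketch (a hypothetical illustration, outside the categorical development) does so for the split extension $A_3 \rightarrow S_3 \rightarrow \mathbb{Z}/2$ given by the sign map, with a transposition as section.

```python
from itertools import permutations

# Hypothetical group-theoretic illustration: the split extension
# A3 >--> S3 -->> Z/2 (sign map), with section s sending 1 to a
# transposition.  Permutations of (0, 1, 2) are modelled as tuples.

def compose(p, q):                       # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):                             # parity: 0 = even, 1 = odd
    return sum(1 for i in range(3) for j in range(i + 1, 3)
               if p[i] > p[j]) % 2

e = (0, 1, 2)
S3 = list(permutations(range(3)))
K = [p for p in S3 if sign(p) == 0]      # kernel of the sign map: A3
s = {0: e, 1: (1, 0, 2)}                 # section: sign(s[y]) = y

# K and s(Y), as subobjects of S3, intersect trivially ...
assert set(K) & set(s.values()) == {e}

# ... and together they generate S3: every x factors as x = k * s(sign(x)).
for x in S3:
    k = compose(x, inverse(s[sign(x)]))
    assert sign(k) == 0                  # the first factor lies in K
    assert compose(k, s[sign(x)]) == x   # the factorization recovers x
```

In the additive case this factorization is exactly the direct sum decomposition; here it exhibits $S_3$ as the join (indeed, the semidirect product) of the two subobjects.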
In this general context, a join of two subobjects may not always exist, but the concept introduced in the next definition expresses what we want, and agrees with the condition that $X=k(K)\vee s(Y)$ whenever that expression makes sense---as it does in any regular category with binary coproducts, for instance~\cite{Borceux-Bourn}. \begin{definition} A pair of arrows $(k, s)$ with the same codomain $X$ is \textbf{jointly extremally epimorphic} when the arrows $k$ and $s$ cannot both factor through one and the same proper subobject of $X$: whenever we have a diagram \[ \xymatrix@!0@=4em{ & M \ar@{ >->}[d]^- m \\ K \ar[r]_-k \ar[ur] & X & Y \ar[l]^-s \ar[ul]} \] where $m$ is a monomorphism, necessarily $m$ is an isomorphism. We say that a split extension as above is \textbf{strong} when $(k,s)$ is a jointly extremally epimorphic pair; the couple $(f,s)$ is then called a \textbf{strong point}. When we say that a split extension \textbf{admits a join decomposition}, we mean that it is strong. The given split extension is said to be \textbf{stably strong} (the couple $(f,s)$ is a \textbf{stably strong point}) when all of its pullbacks (along any morphism $g\colon W\to Y$) are strong. Following \cite{MRVdL-TCOGAM}, we say that an object $Y$ is \textbf{protomodular} when all split extensions over $Y$ are stably strong. \end{definition} \begin{remark} It is easily seen~\cite{MRVdL-TCOGAM} that the split epimorphism $f$ in a strong point $(f,s)$ is always the cokernel of its kernel $k$. This means, in particular, that all split extensions over a protomodular object $Y$, as well as all of their pullbacks, are (split) short exact sequences which admit a join decomposition. \end{remark} \begin{remark} When all objects in $\ensuremath{\mathcal{C}}$ are protomodular, $\ensuremath{\mathcal{C}}$ is a \textbf{protomodular category} in the sense of~\cite{Bourn1991}. 
Next to Barr exactness, this is one of the key ingredients in the definition of a semi-abelian category~\cite{Janelidze-Marki-Tholen}, and crucial for results such as the \emph{$3\times 3$ Lemma}, the \emph{Snake Lemma}, the \emph{Short Five Lemma}~\cite{Bourn2001,Borceux-Bourn}, or the existence of a Quillen model category structure for homotopy of simplicial objects~\cite{VdLinden:Simp}. Typical examples are the categories of groups, Lie algebras, crossed modules, loops, associative algebras, etc. As recently shown in~\cite{GKV,Kadjo-PhD}, the category of cocommutative Hopf algebras over a field of characteristic zero is also semi-abelian. \end{remark} Given a category with finite products $\ensuremath{\mathcal{C}}$, we write $\ensuremath{\mathsf{Mon}}(\ensuremath{\mathcal{C}})$ for the category of internal monoids, and $\ensuremath{\mathsf{Gp}}(\ensuremath{\mathcal{C}})$ for the category of internal groups in $\ensuremath{\mathcal{C}}$. For a commutative ring with unit $\ensuremath{\mathbb{K}}$, we let $\ensuremath{\mathsf{CoAlg}}_{\ensuremath{\mathbb{K}},\mathit{coc}}$ denote the category of cocommutative coalgebras over $\ensuremath{\mathbb{K}}$. It is well known~\cite{Sweedler} that there is an equivalence between the category $\ensuremath{\mathsf{BiAlg}_{\K, \coc}}$ of cocommutative bialgebras over $\ensuremath{\mathbb{K}}$ and $\ensuremath{\mathsf{Mon}}(\ensuremath{\mathsf{CoAlg}}_{\ensuremath{\mathbb{K}},\mathit{coc}})$, which restricts to an equivalence between the category $\ensuremath{\mathsf{Hopf}}_{\ensuremath{\mathbb{K}},\mathit{coc}}$ of cocommutative Hopf algebras over $\ensuremath{\mathbb{K}}$ and $\ensuremath{\mathsf{Gp}}(\ensuremath{\mathsf{CoAlg}}_{\ensuremath{\mathbb{K}},\mathit{coc}})$. This is easily seen using that in $\ensuremath{\mathsf{CoAlg}}_{\ensuremath{\mathbb{K}},\mathit{coc}}$ the product $X\times Y$ is $X\otimes Y$ and $1$ is $\ensuremath{\mathbb{K}}$.
\begin{theorem}\label{Theorem 1} Let $\ensuremath{\mathcal{C}}$ be a category with finite limits. If $Y \in \ensuremath{\mathsf{Gp}}(\ensuremath{\mathcal{C}})$ then all split extensions in $\ensuremath{\mathsf{Mon}}(\ensuremath{\mathcal{C}})$ over $Y$ are stably strong. In other words, any internal group in $\ensuremath{\mathcal{C}}$ is a protomodular object in $\ensuremath{\mathsf{Mon}}(\ensuremath{\mathcal{C}})$. \end{theorem} \begin{proof} Consider in $\ensuremath{\mathsf{Mon}}(\ensuremath{\mathcal{C}})$ the commutative diagram \[ \xymatrix@C=4em{ & \Kernel (\pi_1) \ar[d]_-{l} \ar@/_1.2pc/[dl] \ar@{=}[r] & \Kernel(f) \ar[d]^{k} \\ M \ar@{{ >}->}[r]^-m & W \times_{Y} X \ophalfsplitpullback \ar@<.5ex>[d]^(.6){\pi_1} \ar[r]^-{\pi_2} & X \ar@<.5ex>[d]^-f \\ & W \ar@<.5ex>[u]^-{\langle 1_{W}, s\circ g\rangle} \ar[r]_-g \ar@/^1.2pc/[lu] & Y \ar@<.5ex>[u]^-s } \] where the bottom right square is a pullback, $m$ is a monomorphism, and $Y$ is an internal group. We shall see that $m$ is an isomorphism. Since only limits are considered, the whole commutative diagram is sent into a category of presheaves of sets by the Yoneda embedding, in such a way that the internal groups and internal monoids in it are mapped to ordinary groups and monoids, respectively. Since the Yoneda embedding reflects isomorphisms, it now suffices to give a proof in~$\ensuremath{\mathsf{Set}}$. There, it is easy to see that $m$ is an isomorphism, since every element $(w, x)$ of~$W \times_{Y} X$ can be written as $(1, x\cdot s(g(w)^{-1})) \cdot (w, sg(w))$, where clearly the first element belongs to the kernel of $\pi_1$ and the second one comes from $W$. 
\end{proof} \begin{corollary}\label{Corollary Join Decomposition} Cocommutative Hopf algebras are protomodular in $\ensuremath{\mathsf{BiAlg}_{\K, \coc}}$.\hfill\qed \end{corollary} It follows that, over a Hopf algebra, split extensions of bialgebras are well-behaved; not only are they short exact sequences, but it is also not hard to see that the \emph{Split Short Five Lemma} holds for them, so that equivalence classes of split extensions may be considered as in ordinary group cohomology. \section{A universal characterization of cocommutative Hopf algebras}\label{Characterization} The converse is less straightforward. In the case of groups and monoids ($\ensuremath{\mathcal{C}}=\ensuremath{\mathsf{Set}}$ in Theorem~\ref{Theorem 1}), it was shown in \cite{MRVdL-TCOGAM} (resp.\ in \cite{GM-ACS}) that all points in $\ensuremath{\mathsf{Mon}}$ over~$Y$ are stably strong (resp.\ strong) if and only if $Y$ is a group. However, those proofs involve coproducts, and so a Yoneda embedding argument as in Theorem~\ref{Theorem 1} would not work. We now let $\ensuremath{\mathbb{K}}$ be an algebraically closed field. We consider the adjoint pair \[ \xymatrix{\ensuremath{\mathsf{BiAlg}_{\K, \coc}} \ar@<1ex>[r]^-{G} \ar@{}[r]|-{\top} & \ensuremath{\mathsf{Mon}} \ar@<1ex>[l]^-{\ensuremath{\mathbb{K}}[-]}} \] where the left adjoint $\ensuremath{\mathbb{K}}[-]$ is the monoid algebra functor and the right adjoint $G$ sends a bialgebra $B$ (with comultiplication $\Delta_{B}$ and counit $\varepsilon_{B}$) to its monoid of grouplike elements $G(B)=\{x\in B\mid \text{$\Delta_{B}(x)=x\otimes x$ and $\varepsilon_{B}(x)=1$}\}$. \begin{lemma}\label{Lemma K[-] Preserves Monos} $\ensuremath{\mathbb{K}}[-]$ preserves monomorphisms. \end{lemma} \begin{proof} The functor $\ensuremath{\mathbb{K}}[-]$ sends any monoid monomorphism to a bialgebra morphism of which the underlying vector space map is an injection.
\end{proof} Our aim is to prove that $G$ preserves protomodular objects: then for any protomodular bialgebra $B$, the monoid of grouplike elements $G(B)$ is a group, so that $B$ is a Hopf algebra by~\cite[8.0.1.c and 9.2.5]{Sweedler}. \begin{proposition}\label{Proposition Unit Split Monic} For any monoid $M$ we have $G(\ensuremath{\mathbb{K}}[M])\cong M$. For any bialgebra~$B$, the counit $\epsilon_{B}\colon \ensuremath{\mathbb{K}}[G(B)]\to B$ of the adjunction at $B$ is a split monomorphism with retraction $\pi_{B}\colon B\to \ensuremath{\mathbb{K}}[G(B)]$, determined in a way which is functorial in $B$. \end{proposition} \begin{proof} The first statement follows immediately from the definition of $\ensuremath{\mathbb{K}}[M]$, while the second depends on \cite[8.0.1.c and 8.1.2]{Sweedler}. \end{proof} Since protomodular objects are closed under retracts~\cite{MRVdL-TCOGAM}, it follows that if~$B$ is a protomodular bialgebra, then so is $\ensuremath{\mathbb{K}}[G(B)]$. \begin{proposition}\label{Proposition G Preserves JSE Pairs} The functor $G$ preserves jointly extremally epimorphic pairs. \end{proposition} \begin{proof} Let $(k,s)$ be a jointly extremally epimorphic pair in $\ensuremath{\mathsf{BiAlg}_{\K, \coc}}$. Then the commutativity of the diagram \[ \xymatrix{\ensuremath{\mathbb{K}}[G(K)] \ar[r]^-{\ensuremath{\mathbb{K}}[G(k)]} & \ensuremath{\mathbb{K}}[G(X)] & \ensuremath{\mathbb{K}}[G(Y)] \ar[l]_-{\ensuremath{\mathbb{K}}[G(s)]}\\ K \ar[r]_-{k} \ar[u]^-{\pi_{K}} & X \ar[u]^-{\pi_{X}} & Y \ar[u]_-{\pi_{Y}} \ar[l]^-{s}} \] obtained via Proposition~\ref{Proposition Unit Split Monic} and the fact that the upward pointing arrows are split epimorphisms imply that the pair $(\ensuremath{\mathbb{K}}[G(k)],\ensuremath{\mathbb{K}}[G(s)])$ is jointly extremally epimorphic. 
Now suppose that $m$ is a monomorphism making the diagram on the left \[ \vcenter{\xymatrix{&M \ar@{{ >}->}[d]^-{m}\\ G(K) \ar[ru] \ar[r]_-{G(k)} & G(X) & G(Y) \ar[lu] \ar[l]^-{G(s)}}} \qquad \vcenter{\xymatrix{&\ensuremath{\mathbb{K}}[M] \ar@{{ >}->}[d]^-{\ensuremath{\mathbb{K}}[m]}\\ \ensuremath{\mathbb{K}}[G(K)] \ar[ru] \ar[r]_-{\ensuremath{\mathbb{K}}[G(k)]} & \ensuremath{\mathbb{K}}[G(X)] & \ensuremath{\mathbb{K}}[G(Y)] \ar[lu] \ar[l]^-{\ensuremath{\mathbb{K}}[G(s)]}}} \] commute. Applying $\ensuremath{\mathbb{K}}[-]$ we obtain the diagram on the right, in which $\ensuremath{\mathbb{K}}[m]$ is a monomorphism by Lemma~\ref{Lemma K[-] Preserves Monos}. Since, by the above, the bottom pair is jointly extremally epimorphic, we see that $\ensuremath{\mathbb{K}}[m]$ is an isomorphism. But then also $m=G(\ensuremath{\mathbb{K}}[m])$ is an isomorphism, which proves our claim that $(G(k),G(s))$ is a jointly extremally epimorphic pair. \end{proof} \begin{proposition}\label{Proposition G Preserves Protomodular Objects} If all split extensions over a bialgebra $Y$ are strong, then all split extensions over $G(Y)$ are strong. In particular, $G$ preserves protomodular objects. \end{proposition} \begin{proof} Consider a split extension \[ \xymatrix{K \ar[r]^{k} & X \ar@<.5ex>[r]^-{f} & G(Y) \ar@<.5ex>[l]^-{s}} \] over $G(Y)$. We apply the functor $\ensuremath{\mathbb{K}}[-]$, then take the kernel of $\ensuremath{\mathbb{K}}[f]$ to obtain the split extension of bialgebras \[ \xymatrix{L \ar[r]^-{l} & \ensuremath{\mathbb{K}}[X] \ar@<.5ex>[r]^-{\ensuremath{\mathbb{K}}[f]} & \ensuremath{\mathbb{K}}[G(Y)]. \ar@<.5ex>[l]^-{\ensuremath{\mathbb{K}}[s]}} \] From Proposition~\ref{Proposition Unit Split Monic} it follows that all split extensions over $\ensuremath{\mathbb{K}}[G(Y)]$ are strong. Hence $(l,\ensuremath{\mathbb{K}}[s])$ is a jointly extremally epimorphic pair. 
Applying the functor $G$, we regain the original split extension, since $G$ is a right adjoint and thus preserves kernels; but $G$ also preserves jointly extremally epimorphic pairs by Proposition~\ref{Proposition G Preserves JSE Pairs}, so that the pair $(k,s)$ is jointly extremally epimorphic. As a consequence, all split extensions over the monoid $G(Y)$ are strong, and $G(Y)$ is protomodular~\cite{GM-ACS}. \end{proof} \begin{theorem}\label{Theorem 2} If $\ensuremath{\mathbb{K}}$ is an algebraically closed field and $Y$ is a cocommutative bialgebra over $\ensuremath{\mathbb{K}}$, then the following conditions are equivalent: \begin{tfae} \item $Y$ is a Hopf algebra; \item in $\ensuremath{\mathsf{BiAlg}_{\K, \coc}}$, all split extensions over $Y$ admit a join decomposition; \item $Y$ is a protomodular object in $\ensuremath{\mathsf{BiAlg}_{\K, \coc}}$. \end{tfae} \end{theorem} \begin{proof} That (i) implies (iii) is Theorem~\ref{Theorem 1}, and (ii) is obviously weaker than (iii). For the proof that (ii) implies (i), suppose that all split extensions over $Y$ admit a join decomposition. Then Proposition~\ref{Proposition G Preserves Protomodular Objects} implies that in $\ensuremath{\mathsf{Mon}}$ all split extensions over $G(Y)$ are strong. Hence $G(Y)$ is a group by the result in~\cite{GM-ACS}, which makes $Y$ a Hopf algebra by~\cite[8.0.1.c and 9.2.5]{Sweedler}. \end{proof} \begin{remark} This implies that the category $\ensuremath{\mathsf{BiAlg}_{\K, \coc}}$ cannot be protomodular: otherwise all bialgebras would be Hopf algebras. In particular, the \emph{Split Short Five Lemma} is not generally valid for bialgebras. \end{remark} \section{On cocommutativity}\label{Cocommutativity} In this final section we study what happens beyond the cocommutative setting. Here $\ensuremath{\mathbb{K}}$ is a field.
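Before leaving the cocommutative setting, we note that the Set-level step in the proof of Theorem~\ref{Theorem 1}, namely that every $(w,x)$ in $W\times_Y X$ factors as $(1, x\cdot s(g(w)^{-1}))\cdot(w, s(g(w)))$, can be replayed with concrete structures. The following Python sketch uses the hypothetical choices $Y=W=\mathbb{Z}/3$ with $g$ the identity, and the monoid $X=\mathbb{Z}/3\times\mathbb{N}$, which is not a group.

```python
# Check of the Set-level factorization in the proof of Theorem 1:
# (w, x) = (1, x * s(g(w)^{-1})) * (w, s(g(w))) in the pullback W x_Y X.
# Hypothetical data: Y = W = Z/3 (a group), X = Z/3 x N (a monoid with
# no inverses in the second component), f = first projection, s(y) = (y, 0).

def mul_X(x1, x2):                        # componentwise: Z/3 times (N, +)
    return ((x1[0] + x2[0]) % 3, x1[1] + x2[1])

def f(x):
    return x[0]

def s(y):
    return (y, 0)                         # a section of f: f(s(y)) = y

def g(w):
    return w                              # g : W -> Y is the identity here

def inv_Y(y):
    return (-y) % 3                       # inverses exist because Y is a group

one_W = 0
pullback = [(w, (y, n)) for w in range(3) for y in range(3)
            for n in range(4) if g(w) == f((y, n))]

for (w, x) in pullback:
    a = (one_W, mul_X(x, s(inv_Y(g(w)))))   # lies in the kernel of pi_1
    b = (w, s(g(w)))                        # the image of w under <1_W, s o g>
    assert g(a[0]) == f(a[1])               # both factors live in W x_Y X
    assert g(b[0]) == f(b[1])
    product = ((a[0] + b[0]) % 3, mul_X(a[1], b[1]))
    assert product == (w, x)                # the factorization recovers (w, x)
```

No inverse in $X$ is ever needed: only $g(w)^{-1}$ in the group $Y$, exactly as in the proof.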
All objects in the category of cocommutative $\ensuremath{\mathbb{K}}$-bialgebras satisfy a certain weak join decomposition property: being a category of internal monoids (in~$\ensuremath{\mathsf{CoAlg}}_{\ensuremath{\mathbb{K}},\mathit{coc}}$), the category $\ensuremath{\mathsf{BiAlg}}_{\ensuremath{\mathbb{K}},\mathit{coc}}$ is \textbf{unital} in the sense of~\cite{Borceux-Bourn}. Given an object $Y$, it is said to be a \textbf{unital object}~\cite{MRVdL-TCOGAM} when every split extension of the type \[ \xymatrix{X \ar@<-.5ex>[r]_-{\langle 1_{X},0\rangle} & X\times Y \ar@<-.5ex>[l]_-{\pi_{X}} \ar@<.5ex>[r]^-{\pi_{Y}} & Y \ar@{->}@<.5ex>[l]^-{\langle 0,1_{Y}\rangle}} \] is strong. Notice how this condition is symmetric in $X$ and $Y$. So protomodular objects are always unital of course, but in fact this condition is weak enough to be satisfied by all cocommutative bialgebras over $\ensuremath{\mathbb{K}}$. Let us now leave the cocommutative setting and ask ourselves what it means for an object~$Y$ in~$\ensuremath{\mathsf{BiAlg}}_{\ensuremath{\mathbb{K}}}$ to be unital---a very weak thing to ask, compared with the condition that all split extensions over $Y$ are (stably) strong. \begin{proposition} If $Y$ is a unital object of $\ensuremath{\mathsf{BiAlg}}_{\ensuremath{\mathbb{K}}}$, then for every object $X$ we have an isomorphism $X\times Y\cong X\otimes Y$. \end{proposition} \begin{proof} Given any bialgebra $X$ we may consider the diagram \[ \xymatrix@=3em{X & \ar[l]_-{\rho_{X}}^-{\cong} X\otimes \ensuremath{\mathbb{K}} \ar@<-.5ex>[r]_-{1_{X}\otimes \eta_{Y}} & X\otimes Y \ar@<-.5ex>[l]_-{1_{X}\otimes \varepsilon_{Y}} \ar@<.5ex>[r]^-{\varepsilon_{X}\otimes 1_{Y}} & \ensuremath{\mathbb{K}}\otimes Y \ar[r]^-{\lambda_{Y}}_-{\cong} \ar@<.5ex>[l]^-{\eta_{X}\otimes 1_{Y}} & Y. 
} \] We are first going to prove that the comparison morphism \[ m=\langle \rho_{X}\circ(1_{X}\otimes \varepsilon_{Y}),\lambda_{Y}\circ(\varepsilon_{X}\otimes 1_{Y})\rangle \colon X\otimes Y\to X\times Y \] is a monomorphism. Note that it is almost never an injection; for instance, taking $X=Y$ to be a tensor algebra $T(V)$ (with counit $\varepsilon_{T(V)}(v)=0$ for $v\in V$) yields easy counterexamples. However, in the category $\ensuremath{\mathsf{BiAlg}}_{\ensuremath{\mathbb{K}}}$, monomorphisms need not be injective \cite{NuTo,Agore2}. Let $h\colon {Z\to X\otimes Y}$ be a morphism of bialgebras. We write \[ f=\rho_{X}\circ (1_{X}\otimes \varepsilon_{Y})\circ h\colon Z\to X \qquad \text{and} \qquad g=\lambda_{Y}\circ (\varepsilon_{X}\otimes 1_{Y})\circ h\colon Z\to Y. \] It suffices to prove that $h=(f\otimes g)\circ \Delta_{Z}$ \emph{as vector space maps} for our claim to hold. Indeed, if $h$ and $h'$ induce the same $f$ and $g$, then the given equality of vector space maps proves that $h=h'$. Since $h$ is a coalgebra map, we have that $\Delta_{X\otimes Y}\circ h = (h\otimes h)\circ \Delta_{Z}$. Writing $\tau_{X,Y}\colon{X\otimes Y\to Y\otimes X}$ for the twist map, we calculate: \begin{align*} &(f\otimes g)\circ\Delta_{Z} \\ &= (\rho_{X}\otimes \lambda_{Y})\circ (1_{X}\otimes \varepsilon_{Y}\otimes \varepsilon_{X}\otimes 1_{Y}) \circ (h\otimes h)\circ\Delta_{Z}\\ &= (\rho_{X}\otimes \lambda_{Y})\circ (1_{X}\otimes \varepsilon_{Y}\otimes \varepsilon_{X}\otimes 1_{Y}) \circ \Delta_{X\otimes Y}\circ h\\ &= (\rho_{X}\otimes \lambda_{Y})\circ (1_{X}\otimes \varepsilon_{Y}\otimes \varepsilon_{X}\otimes 1_{Y}) \circ (1_{X}\otimes \tau_{X,Y}\otimes 1_{Y})\circ(\Delta_{X}\otimes \Delta_{Y})\circ h\\ &= (\rho_{X}\otimes \lambda_{Y})\circ (1_{X}\otimes \varepsilon_{X}\otimes \varepsilon_{Y}\otimes 1_{Y}) \circ(\Delta_{X}\otimes \Delta_{Y})\circ h\\ &= (\rho_{X}\otimes \lambda_{Y})\circ (\rho_{X}^{-1}\otimes \lambda_{Y}^{-1})\circ h=h. 
\end{align*} It follows that $m$ is a monomorphism. Moreover, $m$ makes the diagram \[ \xymatrix@!0@R=4em@C=6em{ & X\otimes Y \ar@{ >->}[d]^-m \\ X \ar[r]_-{\langle 1_{X},0\rangle} \ar[ur]^-{(1_{X}\otimes \eta_{Y})\circ \rho_{X}^{-1}} & X\times Y & Y \ar[l]^-{\langle 0,1_{Y}\rangle} \ar[ul]_-{(\eta_{X}\otimes 1_{Y})\circ \lambda_{Y}^{-1}}} \] commute. The assumption that $Y$ is unital tells us that $m$ is an isomorphism. \end{proof} This immediately implies that any unital object $Y$ in $\ensuremath{\mathsf{BiAlg}}_{\ensuremath{\mathbb{K}}}$ has to be cocommutative, since $\Delta_{Y}\colon Y\to Y\otimes Y$ is the morphism of bialgebras $\langle 1_{Y},1_{Y}\rangle\colon Y\to Y\times Y$. In particular, the category $\ensuremath{\mathsf{BiAlg}}_{\ensuremath{\mathbb{K}}}$ is not unital, so it cannot be protomodular, and not even Mal'tsev~\cite{Borceux-Bourn}. However, the situation is actually much worse, since it almost never happens that $X\otimes Y$ is the product of $X$ and $Y$ in the category of all $\ensuremath{\mathbb{K}}$-bialgebras---not even when both $X$ and $Y$ are cocommutative. In fact, $\ensuremath{\mathbb{K}}$ itself cannot be a protomodular object in $\ensuremath{\mathsf{BiAlg}}_{\ensuremath{\mathbb{K}}}$, since this would imply that all objects of $\ensuremath{\mathsf{BiAlg}}_{\ensuremath{\mathbb{K}}}$ are unital~\cite{MRVdL-TCOGAM}. As we have just seen, this is manifestly false. The same holds for the category $\ensuremath{\mathsf{Hopf}}_{\ensuremath{\mathbb{K}}}$ of Hopf algebras over $\ensuremath{\mathbb{K}}$. At first this may seem to contradict results in~\cite{Molnar} on split extensions of Hopf algebras.
We must keep in mind, though, that for a Hopf algebra $H$, the map $\langle 1_{H},0\rangle$ in the diagram \[ \xymatrix{H \ar@<-.5ex>[r]_-{\langle 1_{H},0\rangle} & H\times H \ar@<-.5ex>[l]_-{\pi_{1}} \ar@<.5ex>[r]^-{\pi_{2}} & H \ar@{->}@<.5ex>[l]^-{\langle 0,1_{H}\rangle}} \] is the kernel of $\pi_{2}$, but $\pi_{2}$ need not be its cokernel, unless $H$ is cocommutative. Hence this diagram does not represent a short exact sequence, and so neither Theorem~4.1 nor Theorem~4.2 of~\cite{Molnar}, which would give $H\times H\cong H\otimes H$, applies. We conclude that it makes no sense to study protomodular objects in $\ensuremath{\mathsf{BiAlg}}_{\ensuremath{\mathbb{K}}}$ or in $\ensuremath{\mathsf{Hopf}}_{\ensuremath{\mathbb{K}}}$, and we thus restrict our attention to the cocommutative case. \section*{Acknowledgements} Thanks to Jos\'e Manuel Fern\'andez Vilaboa, Isar Goyvaerts, Marino Gran, James R.~A.~Gray, Gabriel Kadjo and Joost Vercruysse for fruitful discussions and useful comments. We would also like to thank the University of Cape Town and Stellenbosch University for their kind hospitality during our stay in South Africa.
https://arxiv.org/abs/2012.02335
Tight Chang's-lemma-type bounds for Boolean functions
Chang's lemma (Duke Mathematical Journal, 2002) is a classical result with applications across several areas in mathematics and computer science. For a Boolean function $f$ that takes values in $\{-1,1\}$, let $r(f)$ denote its Fourier rank. For each positive threshold $t$, Chang's lemma provides a lower bound on $wt(f):=\Pr[f(x)=-1]$ in terms of the dimension of the span of its characters with Fourier coefficients of magnitude at least $1/t$. We examine the tightness of Chang's lemma with respect to the following three natural settings of the threshold:
- the Fourier sparsity of $f$, denoted $k(f)$;
- the Fourier max-supp-entropy of $f$, denoted $k'(f)$, defined to be $\max \{1/|\hat{f}(S)| : \hat{f}(S) \neq 0\}$;
- the Fourier max-rank-entropy of $f$, denoted $k''(f)$, defined to be the minimum $t$ such that the characters whose Fourier coefficients are at least $1/t$ in absolute value span a space of dimension $r(f)$.
We prove new lower bounds on $wt(f)$ in terms of these measures. One of our lower bounds subsumes and refines the previously best known upper bound on $r(f)$ in terms of $k(f)$ by Sanyal (ToC, 2019). Another lower bound is based on our improvement of a bound by Chattopadhyay, Hatami, Lovett and Tal (ITCS, 2019) on the sum of the absolute values of the level-$1$ Fourier coefficients. We also show that Chang's lemma for these choices of the threshold is asymptotically outperformed by our bounds for most settings of the parameters involved. Next, we show that our bounds are tight for a wide range of the parameters involved, by constructing functions (which are modifications of the Addressing function) witnessing their tightness. Finally, we construct Boolean functions $f$ for which
- our lower bounds asymptotically match $wt(f)$, and
- for any choice of the threshold $t$, the lower bound obtained from Chang's lemma is asymptotically smaller than $wt(f)$.
\section{Introduction} Chang's lemma \cite{Chang02, Green04} is a classical result in additive combinatorics. Informally, the lemma states that all the large Fourier coefficients of the indicator function of a large subset of an Abelian group reside in a low dimensional subspace. The discovery of this lemma was motivated by an application to improve Freiman's theorem on set additions \cite{Chang02}. The lemma has subsequently found many applications in additive combinatorics and combinatorial number theory. Chang's lemma and the ideas developed in Chang's paper~\cite{Chang02} have been used to prove theorems about arithmetic progressions in sumsets~\cite{Green02, San08}, structure of Boolean functions with small spectral norm~\cite{GS08}, and improved bounds for Roth's theorem on three-term arithmetic progressions in the integers~\cite{San11, Bloom16, BS20}. Green and Ruzsa~\cite{GR07} used the ideas of Chang's lemma to prove a generalization of Freiman's theorem for arbitrary Abelian groups. Chang's lemma is known to be sharp for various settings of parameters for the group $\mathbb{Z}_N$ \cite{Green03}. In this paper, our focus is a specialization of Chang's lemma for the Boolean hypercube. Let $f:\pmone^n \to \pmone$ be a Boolean function. For any positive real number $t$ (which we refer to as the \emph{threshold}) define $\cS_t:=\{S \subseteq [n]:|\wh{f}(S)|\geq \frac{1}{t}\}$.\footnote{The function $f$ is implicit in the definition of $\cS_t$ and will be clear from context.}\textsuperscript{,}\footnote{We refer the reader to Section~\ref{sec:prelims} for preliminaries on Fourier analysis.} Viewing elements of $\calS_t$ as vectors in $\ftwo^n$, Chang's lemma gives a lower bound on $\delta(f) :=\Pr[f(x) = -1]$ (called the weight of $f$), in terms of $t$ and the dimension of the span of $\calS_t$ (denoted by $\dim(\cS_t)$). Formally, we have the following lemma, referred to as Chang's lemma in this paper.
\begin{lemma}[Chang's lemma~\cite{Chang02}] \label{lem:chang} There exists a universal constant $c>0$ such that the following is true for every integer $n>0$. Let $f:\pmone^n \to \pmone$ be any function and $t$ be any positive real number. Let $\delta(f) := \Pr_x[f(x) = -1]$ and $d = \dim(\cS_t) > 1$. If $\delta(f)<c$, then \[ \delta(f)=\Omega\left(\frac{\sqrt{d}}{t \sqrt{\log \bra{t^2/d}}}\right). \] \end{lemma} \begin{remark} In the literature Chang's lemma is generally stated as an upper bound on $d$ in terms of $\delta(f)$ and $t$. In Section~\ref{sec: appendix} we state the more commonly seen form of Chang's lemma (Lemma~\ref{lem:chang original}) and prove that it is equivalent to Lemma~\ref{lem:chang}. \end{remark} This lemma has found numerous applications in complexity theory and algorithms~\cite{BRTW14, CLRS16}, analysis of Boolean functions~\cite{GS08, TWXZ13}, communication complexity~\cite{TWXZ13, HLY19} and extremal combinatorics~\cite{FKKK18}. See \cite{IMR14} for a proof of Lemma~\ref{lem:chang}. In this paper, we investigate the tightness of Lemma~\ref{lem:chang} for three natural choices of the threshold $t$ based on the Fourier spectrum of the function (see Section~\ref{sec: thresholds considered} for details about these thresholds). We prove additional lower bounds on $\delta(f)$, and compare relative performances of all the bounds under consideration. Our results imply that the bounds given by Chang's lemma for the choices of the threshold that we consider are asymptotically outperformed by one of the bounds we prove for a broad range of the parameters involved. For most regimes of the parameters we are able to construct classes of functions that witness the tightness of our bounds. Interestingly, for each choice of threshold that we consider, $\dim(\cS_t)$ equals the Fourier rank of $f$ (denoted by $r(f)$, see Definition~\ref{defi:rank}). 
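The quantities appearing in Lemma~\ref{lem:chang} are easy to compute by brute force for small functions. The following Python sketch (our own illustration; the helper and variable names are not from the paper) computes the Fourier coefficients of $\AND_3$, the set $\cS_t$, the $\ftwo$-dimension of its span, and the weight $\delta(f)$:

```python
from itertools import product
from math import prod

def fourier_coeffs(f, n):
    """hat f(S) = 2^{-n} * sum_x f(x) * chi_S(x), with a subset S of [n]
    encoded as an n-bit mask and chi_S(x) = prod_{i in S} x_i."""
    pts = list(product([1, -1], repeat=n))
    return {S: sum(f(x) * prod(x[i] for i in range(n) if S >> i & 1)
                   for x in pts) / 2 ** n
            for S in range(1 << n)}

def f2_rank(masks):
    """Dimension over F_2 of the span of subset masks (Gaussian elimination)."""
    pivots = {}
    for v in masks:
        while v:
            p = v.bit_length() - 1
            if p not in pivots:
                pivots[p] = v
                break
            v ^= pivots[p]
    return len(pivots)

n = 3
AND3 = lambda x: -1 if all(xi == -1 for xi in x) else 1  # -1 iff all inputs are -1
fhat = fourier_coeffs(AND3, n)
supp = [S for S, c in fhat.items() if abs(c) > 1e-12]
k = len(supp)                                   # Fourier sparsity: k(AND3) = 8
S_t = lambda t: [S for S in supp if abs(fhat[S]) >= 1 / t]
d = f2_rank(S_t(k))                             # S_{k(f)} = supp(f), so d = r(f) = 3
delta = sum(AND3(x) == -1
            for x in product([1, -1], repeat=n)) / 2 ** n  # weight: delta = 1/8
```

For $\AND_3$ this reports $k(f)=8$, $\dim(\cS_{k(f)})=r(f)=3$ and $\delta(f)=1/8$; note also that every nonzero coefficient has magnitude at least $1/k(f)$, in line with the facts about $\AND_n$ recalled below.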
In particular, setting $t$ to be the Fourier sparsity of $f$ (denoted by $k(f)$, see Definition~\ref{defi:sparsity}) leads to a very natural question about the relationship among $r(f), k(f)$ and $\delta(f)$ for a Boolean function $f$. The best known upper bound on $r(f)$ in terms of $k(f)$ is $r(f)=O(\sqrt{k(f)}\log k(f))$~\cite{San19}. We improve upon this bound by incorporating $\delta(f)$ into it, and show \[ r(f)=O(\sqrt{k(f)\delta(f)}\log k(f)). \] Moreover, we also show that this bound is tight; see Section~\ref{sec: our contributions} for a detailed discussion. Throughout this paper, we assume that $f$ is not a constant function or a parity or a negative parity (unless mentioned otherwise). In other words, $k(f),r(f) \geq 2$. \subsection{Thresholds considered for Chang's lemma} \label{sec: thresholds considered} For a Boolean function $f$, let $\supp(f)$ denote the Fourier support of $f$ (Definition~\ref{defi: Fouriersupport}). In this section, we discuss and motivate the choices of the threshold $t$ considered in this work. \paragraph{The Fourier sparsity of $f$.} It was shown in~\cite[Theorem 3.3]{GOS+} that for all $S \in \supp(f)$, $|\wh{f}(S)| \geq \frac{1}{k(f)}$. It follows that $\cS_{k(f)}=\supp(f)$ and hence $\dim(\cS_{k(f)})=r(f)$. Moreover, there exist functions (e.g.~$f = \AND_n$) for which $\dim(\calS_t) = 0$ for $t = o(k(f))$, justifying the choice of threshold $k(f)$. This choice also leads us to a fundamental structural problem of bounding the weight of a Boolean function $f$ from below, in terms of its Fourier sparsity and Fourier rank. The \emph{uncertainty principle} (see, for example, \cite{GT13} for a statement and a proof) asserts that $\delta(f) = \Omega\left(\frac{1}{k(f)}\right)$. 
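The uncertainty principle is in fact tight: for $f=\AND_n$ one has $\delta(f)=2^{-n}$ and $k(f)=2^n$, so $\delta(f)\cdot k(f)=1$. A quick brute-force check (our own illustration, not from the paper):

```python
from itertools import product
from math import prod

def sparsity(f, n):
    """k(f): the number of subsets S (as n-bit masks) with nonzero hat f(S).
    The inner sums are integers, so exact comparison with 0 is safe."""
    pts = list(product([1, -1], repeat=n))
    return sum(sum(f(x) * prod(x[i] for i in range(n) if S >> i & 1)
                   for x in pts) != 0
               for S in range(1 << n))

for n in range(2, 6):
    AND = lambda x: -1 if all(xi == -1 for xi in x) else 1
    k = sparsity(AND, n)                 # k(AND_n) = 2^n: every subset appears
    delta = 2 ** -n                      # AND_n is -1 on exactly one input
    assert k == 2 ** n and delta * k == 1  # delta(f) * k(f) = 1: tightness
```
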
Chang's lemma with $t=k(f)$ and the fact that $\log\bra{k(f)^2/r(f)}=\Theta(\log k(f))$ (Lemma~\ref{lem:relationships between rk and k'} (part 1)) implies that \begin{equation} \label{eq:changs lemma with threshold k} \delta(f)=\Omega\left(\frac{1}{k(f)}\sqrt\frac{{r(f)}}{\log k(f)}\right), \end{equation} thereby subsuming the uncertainty principle (note that $r(f)/\log k(f) \geq 1$) and refining it by incorporating $r(f)$ into the bound. \paragraph{The Fourier max-supp-entropy of $f$.} The next choice of the threshold that we consider is the \emph{Fourier max-supp-entropy} of $f$, denoted by $k'(f)$, which we define to be $\max_{S \in \supp(f)}$ $\frac{1}{|\wh{f}(S)|}$ (Definition~\ref{defi:max entropy, max rank entropy}). By its definition $\seespectrum(f)$ is the smallest value of $t$ such that $\cS_t=\supp(f)$. Since $k'(f) \leq k(f)$ (see the preceding paragraph), the knowledge of $\seespectrum(f)$ can potentially offer us a finer lower bound on $\delta(f)$ than the one above; Chang's lemma with $t=\seespectrum(f)$ and $\log \bra{k'(f)^2/r(f)} = \Theta(\log k'(f))$ (Lemma~\ref{lem:relationships between rk and k'} (part 2)) implies \begin{equation} \label{eq: changs lemma with threshold k'} \delta(f)=\Omega\left(\frac{1}{k'(f)}\sqrt\frac{{r(f)}}{\log k'(f)}\right). \end{equation} Notice that Equation~\eqref{eq: changs lemma with threshold k'} subsumes the bound in Equation~\eqref{eq:changs lemma with threshold k}. In~\cite{HKP11} an equivalent statement of the well-known sensitivity conjecture was presented in terms of $k'(f)$.\footnote{In \cite{HKP11} $\log (k'(f)^2)$ is called the Fourier max-entropy while we refer to $k'(f)$ as the Fourier max-supp-entropy.} Granularity is another widely-studied measure that is closely associated with Fourier max-supp-entropy.
\paragraph{The Fourier max-rank-entropy of $f$.} Our final choice of the threshold is the \emph{Fourier max-rank-entropy of $f$}, denoted by $\seerank(f)$, which we define to be the smallest positive real number $t$ such that $\dim(\cS_t)=r(f)$ (Definition~\ref{defi:max entropy, max rank entropy}). We have that $\seerank(f)\leq\seespectrum(f)\leq k(f)$ by their definitions. Amongst all settings of the threshold $t$ for which $\dim(\calS_t) = r(f)$, the value $t = k''(f)$ yields the best lower bound from Chang's lemma. Chang's lemma with $t=\seerank(f)$ implies \begin{equation} \label{eq: changs lemma with threshold k''} \delta(f)=\Omega\left(\frac{1}{\seerank(f)}\sqrt\frac{{r(f)}}{\log \bra{\seerank(f)^2/r(f)}}\right), \end{equation} which subsumes the bounds in Equations~\eqref{eq: changs lemma with threshold k'} and~\eqref{eq:changs lemma with threshold k}. \subsection{Our contributions} \label{sec: our contributions} We prove the following results regarding the three natural instantiations of the threshold $t$ (mentioned in the preceding section) for Chang's lemma. \begin{enumerate}[leftmargin=*] \item \textbf{The Fourier sparsity of $f$}: Recall that Chang's lemma with threshold $t=k(f)$ (Equation~\eqref{eq:changs lemma with threshold k}) implies that $\delta(f)=\Omega\left(\frac{1}{k(f)}\sqrt\frac{{r(f)}}{\log k(f)}\right)$. It was shown in~\cite{ACL+19} that $\delta(f)=\Omega\left(\frac{1}{k(f)}\left(\frac{r(f)}{\log k(f)}\right)\right)$, improving upon this bound asymptotically (note that $r(f)/\log k(f) \geq 1$). In this work we improve their bound further. \begin{theorem} \label{thm:delta lower bound in terms of rk only} Let $f: \pmone^n \to \pmone$ be any function such that $k(f)>1$. Then \[ \delta(f)=\Omega\left(\frac{1}{k(f)} \left(\frac{r(f)}{\log k(f)}\right)^2\right). \] \end{theorem} Observe that the statement of Theorem~\ref{thm:delta lower bound in terms of rk only} is equivalent to $r(f)=O(\sqrt{k(f)\delta(f)}\log k(f))$. 
This bound subsumes the bound $r(f)=O(\sqrt{k(f)} \log k(f))$ shown by Sanyal~\cite{San19}. We prove Theorem~\ref{thm:delta lower bound in terms of rk only} by incorporating $\delta(f)$ in Sanyal's arguments and thereby refining his proof. See Section~\ref{sec:lower bound using k'} for the proof of Theorem~\ref{thm:delta lower bound in terms of rk only}. We also show that Theorem~\ref{thm:delta lower bound in terms of rk only} is tight. For nearly all admissible values of $\rho$ and $\kappa$ we construct many Boolean functions $f$ with $k(f)=O(\kappa)$, $r(f)=O(\rho)$ and $\delta(f)=O\left(\frac{1}{\kappa} \left(\frac{\rho}{\log \kappa}\right)^2\right)$ (Theorem~\ref{thm:delta upper bound in terms of rk and also k'} and Claim~\ref{claim:setting parameters for tightstraightline}). \textbf{Comparison with Sanyal's bound:} The bound $r(f)=O(\sqrt{ k(f)} \log k(f))$ proven by Sanyal is a special case of Theorem~\ref{thm:delta lower bound in terms of rk only} for $\delta(f)=\Theta(1)$. It is not known whether the $\log k(f)$ term is required in Sanyal's upper bound on $r(f)$ (when $f$ equals the Addressing function, $r(f) = \Omega(\sqrt{k(f)})$, see Definition~\ref{defi:Addressing} and Observation~\ref{obs:properties of AND, Bent and Addressing}). For all the functions we construct witnessing the tightness of the bound in Theorem~\ref{thm:delta lower bound in terms of rk only}, $\delta(f)=o(1)$. We prove Theorem~\ref{thm:delta lower bound in terms of rk only} by generalizing Sanyal's proof. As stated before, our bound is tight in this generality, i.e.~the logarithmic factor is required in the upper bound on $r(f)$. This sheds light on the presence of the logarithmic term in the bound $r(f)=O(\sqrt {k(f)} \log k(f))$. \item \textbf{The Fourier max-supp-entropy of $f$}: Recall from Section~\ref{sec: thresholds considered} that the Fourier max-supp-entropy of $f$, denoted $k'(f)$, is defined as $k'(f) = \max_{S \in \supp(f)}$ $\frac{1}{|\wh{f}(S)|}$. 
It can be shown that $\sqrt{\sparsity(f)} \leq \seespectrum(f) \leq \sparsity(f)/2$ (Lemma~\ref{lem:relationships between rk and k'} (part 2)). We prove the following lower bound. \begin{theorem} \label{thm:delta lower bound in terms of rk and also k'} Let $f: \pmone^n \to \pmone$ be any function such that $k(f) > 1$. Then, \[ \delta(f)=\Omega\left(\max\left\{\frac{1}{k(f)}\bra{\frac{r(f)}{\log \sparsity(f)}}^2, \frac{\sparsity(f)}{\seespectrum(f)^2}\right\}\right). \] \end{theorem} As is evident from the statement, Theorem~\ref{thm:delta lower bound in terms of rk and also k'} presents two lower bounds, one of which is Theorem~\ref{thm:delta lower bound in terms of rk only}. The other lower bound, $\delta(f)=\Omega\left(\frac{\sparsity(f)}{\seespectrum(f)^2}\right)$, is Claim~\ref{claim:delta at least k/k'^2}. Chang's lemma with the threshold $t$ set to $\seespectrum(f)$ (Equation~\eqref{eq: changs lemma with threshold k'}), together with the observation that $\log \sparsity(f) = \Theta(\log \seespectrum(f))$, implies $\delta(f) = \Omega\left(\frac{1}{k'(f)}\sqrt{\frac{r(f)}{\log \sparsity(f)}}\right)$. Theorem~\ref{thm:delta lower bound in terms of rk and also k'} subsumes this bound since \[ \delta(f)=\Omega\left(\frac{1}{k(f)}\left(\frac{r(f)}{\log \sparsity(f)}\right)^2 \cdot\frac{\sparsity(f)}{\seespectrum(f)^2}\right)^{1/2}=\Omega\left(\frac{1}{k'(f)}\left(\frac{r(f)}{\log \sparsity(f)}\right)\right)=\Omega\left(\frac{1}{k'(f)}\sqrt{\frac{r(f)}{\log \sparsity(f)}}\right), \] where the last equality follows from $r(f)/\log \sparsity(f) \geq 1$. In addition, observe from the last equality above that the bound of Theorem~\ref{thm:delta lower bound in terms of rk and also k'} is asymptotically larger than the bound obtained from Chang's lemma for $t=k'(f)$ (Equation~\eqref{eq: changs lemma with threshold k'}) except when $r(f)/\log k(f) =\Theta(1)$.
Theorem~\ref{thm:delta upper bound in terms of rk and also k'} complements Theorem~\ref{thm:delta lower bound in terms of rk and also k'} by showing that for nearly all admissible values of $r(f), k(f)$ and $\seespectrum(f)$, there exists a function for which the larger of the two bounds presented in Theorem~\ref{thm:delta lower bound in terms of rk and also k'} is tight. \begin{theorem} \label{thm:delta upper bound in terms of rk and also k'} For all $\rho, \kappa, \kappa' \in \N$ such that $\kappa$ is sufficiently large, for all constants $\epsilon > 0$ such that $\log \kappa \leq \rho \leq \kappa^{\frac12 - \epsilon}$ and $\kappa^{\frac12} \leq \kappa' \leq \kappa$, there exists a Boolean function $f_{\rho,\kappa,\kappa'}$ such that $r({f_{\rho,\kappa,\kappa'}}) = \Theta(\rho)$, $k({f_{\rho,\kappa,\kappa'}}) = \Theta(\kappa)$, $ k'({f_{\rho,\kappa,\kappa'}}) = \Theta(\kappa')$ and $$\delta(f_{\rho,\kappa,\kappa'}) = \Theta\left(\max\left\{\frac{1}{\kappa}\paren{\frac{\rho}{\log \kappa}}^2,\frac{\kappa}{\kappa'^2} \right\}\right).$$ \end{theorem} The range of parameters considered in Theorem~\ref{thm:delta upper bound in terms of rk and also k'} is justified by Lemma~\ref{lem:relationships between rk and k'}. We prove Theorem~\ref{thm:delta upper bound in terms of rk and also k'} in two parts. Fix any $\rho, \kappa$ such that $\log \kappa \leq \rho \leq \kappa^{\frac{1}{2} - \epsilon}$ for some constant $\epsilon > 0$. First, for each value of $\kappa' \in [\frac{\kappa \log \kappa}{\rho}, \kappa]$ we construct a function $f$ for which the first lower bound on $\delta(f)$ from Theorem~\ref{thm:delta lower bound in terms of rk and also k'} is tight (Claim~\ref{claim:setting parameters for tightstraightline}). 
Next, for each value of $\kappa' \in [\kappa^{\frac{1}{2}},\frac{\kappa \log \kappa}{\rho}]$ we construct a function $f$ for which the second lower bound on $\delta(f)$ from Theorem~\ref{thm:delta lower bound in terms of rk and also k'} is tight (Claim~\ref{claim:setting parameters for tight curve}). See Figure~\ref{fig: k'} for a graphical visualization of the bounds in Theorem~\ref{thm:delta lower bound in terms of rk and also k'} for any fixed values of $\rho$ and $\kappa$. \begin{figure}[h] \centering \begin{tikzpicture}[scale = 1, xscale = 1.8, yscale=.7] \def\dummyvar{2}; \def\rcval{2.5}; \def\kval{100}; \def\rval{(\rcval)^2}; \def\logkval{log2 \kval}; \def\rkval{\rval^2/\kval} \begin{axis}[ axis y line = left, axis x line = bottom, x label style={at={(axis description cs:.25,-0.02)},anchor=north}, y label style={at={(axis description cs:-0.01,.5)},rotate=0,anchor=south}, xlabel={$\ \ \ \ \ \ \ \ \ \ $ Fourier max-Entropy ($\kappa'$) $\longrightarrow$}, ylabel={weight($\delta$) $\longrightarrow$}, xtick = {16, 100}, xticklabels = {$\frac{\kappa}{\rho}\log \kappa\ \ \ \ \ $,$\kappa$}, ytick = {1}, yticklabels = {1}, xmin = 0, xmax = 102, ymin = 0, ymax = 1, ] \addplot[name path=testa, domain=0:100, color = ao, thick, dashed, samples=100]{\rkval} node[above,pos=.8] {\textcolor{black}{\textbf{\color{ao}$k$-line:\color{black}} $\ \delta = \rho^2/(\kappa\log^2 \kappa)$}}; \addplot[name path=testa1, domain=16:100, color = green, ultra thick, samples=75]{\rkval}; \addplot[name path=testb, domain=0:100, color = blue, thick, dashed, samples=75]{\kval/(x^2)} node[above, pos=.83] {\textcolor{black}{\textbf{\color{blue}$k'$-curve:\color{black}} $\ \delta = \kappa/(\kappa')^2$}}; \addplot[name path=testb1, domain=0:16, color = blue, ultra thick, samples=15]{\kval/(x^2)}; \addplot[name path=chang, domain=2.5:100, color = red, thick, dashed, samples=75]{.15+\rcval/x} node[above, pos=0.75] {\textcolor{black}{\textbf{\color{red}CL-$k'$-curve :\color{black}} $\ \delta = 
\sqrt{\rho}/(\kappa'\log {\kappa'})$}}; \addplot [name path=vertline, dotted, thick] coordinates {(\kval/\rval,0) (\kval/\rval, 1)}; \addplot[name path=upper, domain=0:100, color = white, smooth]{1}; \addplot fill between[ of = chang and upper, split, every even segment/.style = {white!10}, every odd segment/.style = {gray!70} ]; \addplot fill between[ of = chang and testb, split, every even segment/.style = {gray!20}, every odd segment/.style = {white!60} ]; \addplot fill between[ of = testa and chang, split, every even segment/.style = {white!60}, every odd segment/.style = {gray!20} ]; \node[anchor=west] (source) at (axis cs:25,.9){Chang's lemma Bound}; \node (destination) at (axis cs:5,.6){}; \draw[->](source)--(destination); \node[anchor=west] (source2) at (axis cs:45,.7){Our Bounds}; \node (destination2) at (axis cs:50,.39){}; \node (destination3) at (axis cs:14,.51){}; \draw[->](source2)--(destination2); \draw[->](source2)--(destination3); \end{axis} \end{tikzpicture} \caption{This plot is constructed for any fixed values of $\rho, \kappa$ for which $\log \kappa \leq \rho \leq \sqrt{\kappa}$, and depicts the relationship between $\delta(f)$ and $k'(f)$ for functions $f$ with $r(f) = \Theta(\rho)$ and $k(f) = \Theta(\kappa)$. For any fixed values of $\rho, \kappa$, we will refer to this plot as the $(\rho,\kappa)$-$k'$-plot. Chang's lemma implies that Boolean functions lie above the CL-$k'$-curve. Theorem~\ref{thm:delta lower bound in terms of rk and also k'} improves upon Chang's lemma and shows that Boolean functions lie above both the $k$-line and the $k'$-curve, highlighted by the dark grey region in the figure. 
Roughly speaking, Theorem~\ref{thm:delta upper bound in terms of rk and also k'} exhibits functions that lie on the boundary of the dark grey region described by the $k$-line and the $k'$-curve.} \label{fig: k'} \end{figure} \item \textbf{The Fourier max-rank-entropy of $f$}: Recall from Section~\ref{sec: thresholds considered} that the Fourier max-rank-entropy of $f$, denoted $\seerank(f)$, is the smallest positive real number $t$ such that $\dim(\cS_t)=r(f)$. It can be shown that $\max\cbra{\sqrt{r(f)}, \frac{r(f)}{\log \sparsity(f)}} \leq \seerank(f) \leq \sparsity(f)$ (Lemma~\ref{lem:relationships between rk and k'} (part 2)). We prove the following lower bound. \begin{theorem} \label{thm:delta lower bound in terms of rk and also k''} Let $f: \pmone^n \to \pmone$ be any function such that $k(f) > 1$. Then, \[ \delta(f)=\Omega\left(\max\left\{\frac{1}{\sparsity(f)}\left(\frac{r(f)}{\log \sparsity(f)}\right)^2, \frac{r(f)}{\seerank(f)\log \sparsity(f)}\right\}\right). \] \end{theorem} Theorem~\ref{thm:delta lower bound in terms of rk and also k''} yields a better lower bound than Chang's lemma with the threshold $t=\seerank(f)$ (Equation~\eqref{eq: changs lemma with threshold k''}), except when $r(f) < (\log k(f))^2$ (see the caption of Figure~\ref{fig: k''}). Theorem~\ref{thm:delta lower bound in terms of rk and also k''} presents two lower bounds: the first one is Theorem~\ref{thm:delta lower bound in terms of rk only}, and the second one is Lemma~\ref{lem:delta lower bound in terms of rk and also k''}. We prove Lemma~\ref{lem:delta lower bound in terms of rk and also k''} by strengthening a bound due to~\cite{CHLT19} on the sum of absolute values of level-$1$ Fourier coefficients of a Boolean function in terms of its $\ftwo$-degree. A proof of Theorem~\ref{thm:delta lower bound in terms of rk and also k''} can be found in Section~\ref{sec:lower bound using k''}.
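Computationally, $\seerank(f)$ can be found for small $f$ by lowering the threshold through the finitely many values $1/|\wh{f}(S)|$ until the admitted characters span $r(f)$ dimensions. The following brute-force Python sketch (ours, with hypothetical helper names) does this for $\AND_3$, where $k''(f)=k'(f)=4$ while $k(f)=8$:

```python
from itertools import product
from math import prod

def fourier_coeffs(f, n):
    """hat f(S) = 2^{-n} * sum_x f(x) * chi_S(x); S is an n-bit mask."""
    pts = list(product([1, -1], repeat=n))
    return {S: sum(f(x) * prod(x[i] for i in range(n) if S >> i & 1)
                   for x in pts) / 2 ** n
            for S in range(1 << n)}

def f2_rank(masks):
    """Dimension over F_2 of the span of subset masks."""
    pivots = {}
    for v in masks:
        while v:
            p = v.bit_length() - 1
            if p not in pivots:
                pivots[p] = v
                break
            v ^= pivots[p]
    return len(pivots)

def k_rank_entropy(fhat):
    """k''(f): the least threshold t at which S_t already spans r(f) dimensions."""
    supp = [(abs(c), S) for S, c in fhat.items() if abs(c) > 1e-12]
    r = f2_rank([S for _, S in supp])          # Fourier rank r(f)
    for a, _ in sorted(supp, reverse=True):    # admit coefficients of magnitude >= a
        if f2_rank([S for b, S in supp if b >= a]) == r:
            return 1 / a                       # the smallest such t is 1/a
    # the loop always returns: at the minimum magnitude, S_t = supp(f)

AND3 = lambda x: -1 if all(xi == -1 for xi in x) else 1
kpp = k_rank_entropy(fourier_coeffs(AND3, 3))  # k''(AND3) = 4
```
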
We also show that for nearly all admissible values of $r(f), k(f)$ and $k''(f)$, there exist functions for which the larger of the two bounds presented in Theorem~\ref{thm:delta lower bound in terms of rk and also k''} is nearly tight. \begin{theorem} \label{thm:delta upper bound in terms of rk and also k''} For all $\rho, \kappa, \kappa'' \in \N$ such that $\kappa$ is sufficiently large, for all $\epsilon > 0$ such that $\log \kappa \leq \rho \leq \kappa^{\frac{1}{2} - \epsilon}$ and $\rho \leq \kappa'' \leq \kappa$ there exists a Boolean function $f_{\rho,\kappa,\kappa''}$ such that $r(f_{\rho,\kappa, \kappa''}) = \Theta(\rho)$, $k(f_{\rho,\kappa, \kappa''})= \Theta(\kappa)$, $k''(f_{\rho,\kappa, \kappa''}) = \Theta(\kappa'')$ and $$\delta(f_{\rho,\kappa, \kappa''}) = \Theta\left( \max\left\{ \frac{1}{\kappa}\paren{\frac{\rho}{\log \kappa}}^2, \frac{\rho}{\kappa''\log (\kappa''/\rho)} \right\} \right).$$ \end{theorem} The range of parameters considered in Theorem~\ref{thm:delta upper bound in terms of rk and also k''} is justified by Lemma~\ref{lem:relationships between rk and k'}. Theorem~\ref{thm:delta upper bound in terms of rk and also k''} is proved in two parts. Fix any $\rho, \kappa$ such that $\log \kappa \leq \rho \leq \kappa^{\frac{1}{2} - \epsilon}$ for some constant $\epsilon > 0$. First, for each value of $\kappa'' \in [\frac{\kappa \log \kappa}{\rho}, \kappa]$ we construct a function $f$ for which the first lower bound on $\delta(f)$ from Theorem~\ref{thm:delta lower bound in terms of rk and also k''} is tight (Claim~\ref{claim:setting parameters for tightstraightline for k''}). In fact these are the same functions that are used to prove the first bound in Theorem~\ref{thm:delta upper bound in terms of rk and also k'}. 
Next, for each value of $\kappa'' \in [e\rho,\frac{\kappa \log \kappa}{\rho}]$ we construct a function $f$ for which $\delta(f) = \Theta(\frac{\rho}{\kappa''\log (\kappa''/\rho)})$ (Claim~\ref{claim: setting parameters for tight curve for k''}). From the above discussion one may verify that for every $\rho, \kappa$ that we consider and for every $\kappa'' \geq \rho\cdot \kappa^{\Omega(1)}$, the function that we construct witnesses tightness of the lower bound in Theorem~\ref{thm:delta lower bound in terms of rk and also k''}. In general, for all settings of $\rho, \kappa$ and $\kappa''$ that we consider, the upper bound on $\delta(f)$ from~Theorem~\ref{thm:delta upper bound in terms of rk and also k''} is off by a factor of at most $O(\log \kappa)$ from the lower bound in Theorem~\ref{thm:delta lower bound in terms of rk and also k''}. See Figure~\ref{fig: k''} for a graphical visualization of the bounds in Theorem~\ref{thm:delta lower bound in terms of rk and also k''} for any fixed values of $\rho$ and $\kappa$. 
\begin{figure}[h] \centering \begin{tikzpicture}[scale = 1, xscale = 1.8, yscale =.7] \def\dummyvar{2}; \def\rcval{2.5}; \def\kval{100}; \def\rval{(\rcval)^2}; \def\logkval{log2 \kval}; \def\rkval{\rval^2/\kval} \begin{axis}[ axis y line = left, axis x line = bottom, x label style={at={(axis description cs:.25,-0.02)},anchor=north}, y label style={at={(axis description cs:-0.01,.5)},rotate=0,anchor=south}, xlabel={max-Entropy ($\kappa''$) $\longrightarrow$}, ylabel={weight ($\delta$) $\longrightarrow$}, xtick = {(5/.66), 12.21, 100}, xticklabels = {$\sqrt{\rho\kappa^{\frac{1}{\sqrt{\rho}}}}\ \ \ \ \ \ \ \ \ $, $\ \ \ \ \ \ \frac{\kappa}{\rho}\log \kappa$,$\kappa$}, ytick = {1}, yticklabels = {1}, xmin = 0, xmax = 102, ymin = 0, ymax = 1, ] \addplot[name path=testa, domain=0:100, color = ao, dashed, thick, samples=100]{\rkval} node[above,pos=.8] {\textcolor{black}{\textbf{\color{ao}$k$-line:\color{black}} $\ \delta = \rho^2/(\kappa\log^2 \kappa)$}}; \addplot[name path=testa1, domain=12.21:100, color = green, smooth, ultra thick, samples=100]{\rkval} node[above,pos=.88]{}; \addplot[name path=testb, domain=0:100, color = blue, dashed, thick, samples=100]{.22 + \rval/(3*(x))} node[above, pos=.78] {\textcolor{black}{\textbf{\color{blue}$k''$-curve:\color{black}} $\ \delta = \rho/(\kappa''\log \kappa)$}}; \addplot[name path=testb1, domain=(5/.66):12.21, color = blue, ultra thick, samples=100]{.22 + \rval/(3*(x))} node[above, pos=.9]{}; \addplot[name path=chang, domain=3.5:100, color = red, dashed, thick, samples=100]{\rcval/(x/1.5)} node[above, pos=0.7] {\textcolor{black}{\textbf{\color{red}CL-$k''$-curve:\color{black}} $\ \delta = \sqrt{\rho}/\left(\kappa''\log\left(\kappa''^2/\rho\right)\right)$}}; \addplot[name path=chang1, domain=3.5:(5/.66), color = red, ultra thick, samples=100]{\rcval/(x/1.5)} node[above, pos=0.88] {\textcolor{black}{}}; \addplot [name path=vertline, dotted, thick, samples=100] coordinates {(12.21,0) (12.21, 1)}; \addplot [name path=vertline, 
dotted, thick, samples=100] coordinates {((5/.66),0) ((5/.66), 1)}; \addplot[name path=upper, domain=0:100, color = white, samples=100]{1}; \addplot fill between[ of = chang and upper, split, every odd segment/.style = {gray!70} ]; \addplot fill between[ of = chang and testb, split, every even segment/.style = {gray!20}, every odd segment/.style = {gray!20} ]; \addplot fill between[ of = testa and chang, split, every even segment/.style = {white!60}, every odd segment/.style = {gray!20} ]; \node[anchor=west] (source) at (axis cs:25,.9){Chang's lemma Bound}; \node (destination) at (axis cs:5,.75){}; \draw[->](source)--(destination); \node[anchor=west] (source2) at (axis cs:45,.7){Our Bounds}; \node (destination2) at (axis cs:50,.39){}; \node (destination3) at (axis cs:10,.458){}; \draw[->](source2)--(destination2); \draw[->](source2)--(destination3); \end{axis} \end{tikzpicture} \caption{This plot is constructed for any fixed values of $\rho, \kappa$ for which $\log \kappa \leq \rho \leq \sqrt{\kappa}$, and depicts the relationship between $\delta(f)$ and $k''(f)$ for functions $f$ with $r(f) = \Theta(\rho)$ and $k(f) = \Theta(\kappa)$. For any fixed values of $\rho, \kappa$, we will refer to this plot as $(\rho,\kappa)$-$k''$-plot. Chang's lemma implies that Boolean functions lie above the CL-$k''$-curve. Theorem~\ref{thm:delta lower bound in terms of rk and also k''} improves upon Chang's lemma and shows that Boolean functions lie above both the $k$-line and the $k''$-curve, highlighted by the dark grey region in the figure. Although the picture indicates that the CL-$k''$-curve is better than the $k''$-curve for certain ranges of $\kappa''$, this is actually only possible for certain values of $\rho$ and $\kappa$. This is because the CL-$k''$-curve and the $k''$-curve intersect at $\sqrt{\rho \kappa^{1/\sqrt{\rho}}}$, which is less than $\sqrt{\rho}$ if $\rho\geq (\log \kappa)^2$. 
By Lemma~\ref{lem:relationships between rk and k'} we know that for any function $f$ on this plot, the range of $k''(f)$ is between $\max\{\sqrt{\rho}, \rho/\log \kappa\}$ and $\kappa$. Thus our bounds in Theorem~\ref{thm:delta lower bound in terms of rk and also k''} dominate those given by the CL-$k''$-curve in all $(\rho, \kappa)$-$k''$ plots where $\rho \geq \log^2 \kappa$.} \label{fig: k''} \end{figure} \end{enumerate} \paragraph{Dominating Chang's lemma for all thresholds.} Our final contribution is to show that there exists a function for which our lower bounds (Theorems~\ref{thm:delta lower bound in terms of rk and also k'} and \ref{thm:delta lower bound in terms of rk and also k''}) asymptotically match its weight, while for every choice of threshold the lower bound obtained from Chang's lemma (Lemma~\ref{lem:chang}) is asymptotically smaller than its weight (Claim~\ref{claim: beating changs lemma for all thresholds for ADtt'}). \subsection{Applications of our results} \label{sec: Applications of our results} An application of our results is an enhanced understanding of the bound $r(f)=O(\sqrt{k(f)} \log k(f))$ proven by Sanyal \cite{San19}. This bound is a special case of Theorem~\ref{thm:delta lower bound in terms of rk only} for $\delta(f)=\Theta(1)$. It is not known whether the $\log k(f)$ term is required in Sanyal's upper bound on $r(f)$ (when $f$ is the Addressing function, $r(f) = \Omega(\sqrt{k(f)})$; see Definition~\ref{defi:Addressing} and Observation~\ref{obs:properties of AND, Bent and Addressing}). For all the functions we construct witnessing the tightness of the bound in Theorem~\ref{thm:delta lower bound in terms of rk only}, $\delta(f)=o(1)$. We prove Theorem~\ref{thm:delta lower bound in terms of rk only} by generalizing Sanyal's proof. As stated before, our bound is tight in this generality, i.e.~the logarithmic factor is required in the upper bound on $r(f)$.
This sheds light on the presence of the logarithmic term in the bound $r(f)=O(\sqrt{k(f)} \log k(f))$. Also, the Fourier sparsity and Fourier rank of $f$ have intimate connections with the communication complexity of functions of the form $F:=f \circ \XOR$. The Fourier sparsity of $f$ equals the real rank ($\mathsf{rank}(M_F)$) of the communication matrix $M_F$ of $F$, and the Fourier rank of $f$ equals the deterministic (and even exact quantum) one-way communication complexity of $F$ \cite{MO09}. Theorem~\ref{thm:delta lower bound in terms of rk only} thus implies an improved upper bound of $O(\sqrt{k(f) \delta(f)} \log k(f))$ on the one-way communication complexity of $F$ in these models, which asymptotically beats the best known upper bound of $O(\sqrt{\mathsf{rank}(M_F)})$ even for two-way protocols~\cite{TWXZ13, Lovett16}, for the special case of functions of this form (when $\delta(f)=o(1/\log k)$). Given the wide-ranging applications of Chang's lemma to areas like additive combinatorics, learning theory and communication complexity, we believe that our refinements of Chang's lemma will find many more applications. \section{Proof techniques for lower bound results}\label{sec: lb overview} Our lower bound results on $\delta(f)$ can be divided into two parts: lower bounds in terms of $r(f)$, $k(f)$, and $k'(f)$ (Theorem~\ref{thm:delta lower bound in terms of rk and also k'}), and lower bounds in terms of $r(f)$, $k(f)$, and $k''(f)$ (Theorem~\ref{thm:delta lower bound in terms of rk and also k''}). Theorem~\ref{thm:delta lower bound in terms of rk and also k'} consists of two lower bounds. The second bound, $\delta({f}) = \Omega\left(\frac{k(f)}{k'(f)^2}\right)$, is a direct application of Parseval's identity (Claim~\ref{claim:delta at least k/k'^2}). The first bound follows from Theorem~\ref{thm:delta lower bound in terms of rk only}, one of the main technical contributions of this paper.
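For intuition, the Parseval-based bound admits a short sketch (with unoptimized constants, possibly differing from the formal statement of Claim~\ref{claim:delta at least k/k'^2}, and using that every $S \in \supp(f)$ satisfies $|\wh{f}(S)| \geq 1/k'(f)$ by the definition of max-supp-entropy): combining Observation~\ref{obs:weight, empty Fourier} with Parseval's identity (Theorem~\ref{thm:Parseval}),
\[
4\delta(f) \;\geq\; 4\delta(f)(1-\delta(f)) \;=\; 1 - \wh{f}(\emptyset)^2 \;=\; \sum_{\emptyset \neq S \subseteq [n]} \wh{f}(S)^2 \;\geq\; \frac{k(f)-1}{k'(f)^2},
\]
and hence $\delta(f) = \Omega\left(\frac{k(f)}{k'(f)^2}\right)$, since $k(f) \geq 2$.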
Similarly, Theorem~\ref{thm:delta lower bound in terms of rk and also k''} consists of two lower bounds: the first bound is Theorem~\ref{thm:delta lower bound in terms of rk only} and the second one is Lemma~\ref{lem:delta lower bound in terms of rk and also k''}. The formal proofs of Theorems \ref{thm:delta lower bound in terms of rk and also k'} and \ref{thm:delta lower bound in terms of rk and also k''} are given in Section~\ref{sec:lower bound}. We discuss the outline of the proofs of Theorem~\ref{thm:delta lower bound in terms of rk only} and Lemma~\ref{lem:delta lower bound in terms of rk and also k''} in Sections \ref{sec:overview of delta lower bound in terms of rk only} and \ref{sec:overview of delta lower bound in terms of rk and also k''}, respectively. \subsection{Overview of the proof of Theorem~\ref{thm:delta lower bound in terms of rk only}} \label{sec:overview of delta lower bound in terms of rk only} The lower bound on $\delta(f)$ in Theorem~\ref{thm:delta lower bound in terms of rk only} can also be viewed as an upper bound on $r(f)$ in terms of $\delta(f)$ and $k(f)$. The best known upper bound on the Fourier rank of a Boolean function in terms of its sparsity was given by Sanyal~\cite{San19}, who showed that for any Boolean function $f$, $r({f}) = O(\sqrt{k(f)} \log{k(f)})$. Theorem~\ref{thm:delta lower bound in terms of rk only} improves upon this upper bound on the Fourier rank by adding a dependence on the weight of the function: $r({f}) = O(\sqrt{\delta(f) k(f)} \log{k(f)})$. The outline of the proof is similar to that of Sanyal (\cite[Theorem 1.2]{San19}). We give an algorithm which takes a Boolean function $f$ as input and outputs $O(\sqrt{\delta(f) \sparsity(f)} \log{\sparsity(f)})$ parities such that any assignment of these parities makes the function constant.
This gives an upper bound on the Fourier rank of the function, since the Fourier support of the function must be contained in the span of this set of parities (Observation~\ref{obs: napdt_implies_rank_ub}). The central ingredient in the algorithm is the following lemma from~\cite[Lemma 28]{TWXZ13}. \begin{lemma}[\cite{TWXZ13}] \label{lem:TWXZ13} Let $f: \pmone^n \to \pmone$ be a function. There is an affine subspace $V \subseteq \pmone^n$ of co-dimension at most $3\sqrt{\delta(f) k(f)}$ such that $f$ is constant on $V$. \end{lemma} The above lemma is stated slightly differently in \cite{TWXZ13}; they use $\|\wh{f}\|_1$ to bound the co-dimension instead of $3 \sqrt{\delta(f) k(f)}$ (Claim~\ref{claim:lone_le_sqrt_kdelta}). Lemma~\ref{lem:TWXZ13} allows us to fix a small number of parities such that the sparsity of every possible restriction (for all possible assignments to these parities) is halved. The algorithm is formally stated in Section~\ref{sec:lower bound using k'} (Algorithm~\ref{alg:NAPDT}); we give an outline here. \paragraph{Outline of the algorithm:} Our iterative algorithm incrementally constructs a set of parities such that, finally, the function becomes constant for every assignment of this set of parities. Every iteration of the \textnormal{\textbf{while}} loop in Algorithm~\ref{alg:NAPDT} is essentially an application of Lemma~\ref{lem:TWXZ13}. Let $\Gamma$ be the set of parities fixed after a certain number of iterations of the \textnormal{\textbf{while}} loop. For the next iteration of the loop, we ``greedily'' pick a function out of all possible restrictions of $f$ corresponding to the $2^{|\Gamma|}$ possible assignments of $\Gamma$. We then find a set of parities such that the greedily picked function becomes constant under some assignment of these parities; a small set of such parities exists (Lemma~\ref{lem:TWXZ13}) and we include these parities in $\Gamma$. The algorithm finishes once all possible restrictions of $f$, corresponding to $\Gamma$, become constant.
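As a toy illustration of the guarantee in Lemma~\ref{lem:TWXZ13} (a brute-force sketch for intuition only; the helper names below are ours and this is not the subroutine used in Algorithm~\ref{alg:NAPDT}), consider $\AND_3$: here $\delta(f) k(f) = \frac{1}{8}\cdot 8 = 1$, so the lemma promises an affine subspace of co-dimension at most $3$ on which the function is constant, and an exhaustive search finds one of co-dimension $1$.

```python
from itertools import product
from math import prod, sqrt

def AND(x):
    """AND_n in the +/-1 convention: outputs -1 iff every coordinate is -1."""
    return -1 if all(xi == -1 for xi in x) else 1

n = 3
points = list(product([-1, 1], repeat=n))
delta = sum(1 for x in points if AND(x) == -1) / 2 ** n  # weight of AND_3: 1/8
k = 2 ** n                                               # sparsity of AND_3: 8
budget = 3 * sqrt(delta * k)                             # co-dimension bound from the lemma: 3

# Search for a single parity constraint chi_gamma(x) = b on which AND_3 is constant.
found = []
for gamma in product([0, 1], repeat=n):
    if not any(gamma):
        continue  # skip the empty parity
    for b in (-1, 1):
        vals = {AND(x) for x in points
                if prod(x[i] for i in range(n) if gamma[i]) == b}
        if len(vals) == 1:  # constant on the affine subspace
            found.append((gamma, b))

# Fixing the single parity x_1 = 1 already forces AND_3 = 1,
# i.e. co-dimension 1 <= 3*sqrt(delta * k) = 3.
print(((1, 0, 0), 1) in found, budget)
```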
The termination condition implies that the algorithm outputs a set of parities satisfying the required condition. \paragraph{Completing the proof of Theorem~\ref{thm:delta lower bound in terms of rk only}:} It remains to show that the number of parities fixed in Algorithm~\ref{alg:NAPDT} is small. Given a Boolean function $f$ and a set of parities $\Gamma$ over the set of the variables of $f$, the following equivalence relation over $\supp(f)$ arises naturally: $$ \forall \gamma_1, \gamma_2 \in \supp(f), \gamma_1 \equiv \gamma_2\ \text{iff}\ \gamma_1+\gamma_2 \in \spann(\Gamma). $$ Let us denote $\Gamma$ after the $i$-th iteration of the \textnormal{\textbf{while}} loop by $\Gamma^{(i)}$ ($\Gamma^{(0)} = \emptyset$). Let $\fmin^{(i)}$ be the selected function $\fmin$ after the $i$-th iteration ($\fmin^{(0)} = f$). To bound the total number of parities fixed in Algorithm~\ref{alg:NAPDT}, we bound the number of parities included in the $i$-th iteration of the \textbf{while} loop. In Step~\ref{item: step_a}, the algorithm chooses the minimum number, say $q_i$, of parities such that $\fmin^{(i-1)}$ becomes constant after fixing these parities to some assignment. $\Gamma$ is updated with these parities to obtain $\Gamma^{(i)}$. Let $\ell_i$ be the number of equivalence classes of the Fourier support of $f$ with respect to the equivalence relation corresponding to $\Gamma^{(i)}$. In Step~\ref{item: step_b} the algorithm considers all possible assignments of parities in $\Gamma^{(i)}$ and the corresponding restrictions of $f$. A non-constant restriction with the smallest weight-to-sparsity ratio is chosen to be $\fmin$. The main idea for the analysis of Algorithm~\ref{alg:NAPDT} is to upper bound the ratio of $q_i$ and $(\ell_{i-1} - \ell_i)$ for every iteration $i$. On one hand, $\frac{q_i}{(\ell_{i-1} - \ell_{i})}$ is at most the square root of the weight-to-sparsity ratio of the chosen $\fmin$ (Lemma~\ref{lem:q_i bound} and its proof).
On the other hand, Lemma~\ref{lem:main_lemma} (the main technical lemma of this proof, whose proof is outlined in the next paragraph) ensures that the weight-to-sparsity ratio of $\fmin$ can be upper bounded by $O\left(\frac{\delta(\fmin)k(\fmin)}{\ell_{i-1}^2}\right)$. Thus, we show that for every iteration $i$, $q_i$ is upper bounded by \[ O\left( \frac{\sqrt{\delta(\fmin) \sparsity(\fmin)}}{\ell_{i-1}}(\ell_{i-1} - \ell_{i}) \right). \] Using standard arguments, summing over the iterations, we get a bound of $O(\sqrt{\delta(f) k(f)}\log \ell_0)$ on $|\Gamma|$. Since $\ell_0 = k(f)$, the desired upper bound on $r(f)$ in Theorem~\ref{thm:delta lower bound in terms of rk only} follows from Observation~\ref{obs: napdt_implies_rank_ub}. \paragraph{Outline of the proof of Lemma~\ref{lem:main_lemma}:} Given a Boolean function $f$ and a set of parities $\Gamma$, let $\ell$ be the number of equivalence classes for the equivalence relation corresponding to $\Gamma$. Define $f|_{{(\Gamma, b)}} := f|_{\{x \in \pmone^n: \forall \gamma \in \Gamma, \chi_{\gamma}(x) = b_{\gamma}\}}$ to be the restricted function when the parities of $\Gamma$ are set to the assignment $b$ in the function $f$. Lemma~\ref{lem:main_lemma} states that there exists an assignment $b$ of $\Gamma$ such that the restricted function $g_b = f|_{{(\Gamma, b)}}$ satisfies $$ \frac{\delta(g_b)}{k(g_b)} \leq \frac{4 k(f) \delta(f)}{\ell^2} .$$ In contrast to the proof in \cite{San19}, where we only need to find a restriction with large sparsity, here we need to balance both $\delta(g_b)$ and $\sparsity(g_b)$.\footnote{For technical reasons, we consider sparsity without the empty Fourier coefficient in this proof.} We show that $\Ex{b}{\delta(g_{b})}= \delta(f)$ and $\Ex{b}{\sparsity(g_{b})} \geq \ell^2/(4k(f))$, where $b$ is picked uniformly from $\pmone^\Gamma$. A careful manipulation of these expected values gives the required $b$.
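One way to carry out this manipulation is the following (a sketch, ignoring degenerate assignments $b$ for which the restriction is constant). By linearity of expectation,
\[
\Ex{b}{\delta(g_b) - \frac{4 k(f) \delta(f)}{\ell^2}\, \sparsity(g_b)} \;\leq\; \delta(f) - \frac{4 k(f) \delta(f)}{\ell^2} \cdot \frac{\ell^2}{4 k(f)} \;=\; 0,
\]
so there exists an assignment $b$ for which the bracketed quantity is non-positive, i.e., $\delta(g_b)/\sparsity(g_b) \leq 4 k(f) \delta(f)/\ell^2$, as claimed in Lemma~\ref{lem:main_lemma}.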
The equality $\Ex{b}{\delta(g_{b})}= \delta(f)$ follows from the observation that the domain of $f$ is partitioned by the domains of the restrictions $g_b$. For the expectation of the sparsity of $g_b$, observe that the Fourier coefficients of $g_b$ are non-zero polynomials over the parities of $\Gamma$. By the uncertainty principle (Lemma~\ref{lem:uncertainity principle}), any Fourier coefficient is non-zero for a large number of $g_b$'s. Summing up these lower bounds for all Fourier coefficients, we get that the total number of non-zero Fourier coefficients (over all $g_b$) is large. This gives the required lower bound on the expectation of the sparsity of $g_b$, finishing the proof of Lemma~\ref{lem:main_lemma}. \subsection{Overview of the proof of Lemma~\ref{lem:delta lower bound in terms of rk and also k''} (for Theorem~\ref{thm:delta lower bound in terms of rk and also k''})} \label{sec:overview of delta lower bound in terms of rk and also k''} The crucial ingredient to prove the lower bound in Lemma~\ref{lem:delta lower bound in terms of rk and also k''} is the following lemma. \begin{lemma} \label{lem:CHLT_improvement} For any Boolean function $f$, $\sum_{i=1}^n |\widehat{f}(i)| = O(\delta(f) \degtwo(f))$. \end{lemma} This lemma is a refinement of a similar theorem proved in \cite{CHLT19} (Theorem~\ref{thm:CHLT}), which does not contain the factor of $\delta(f)$. The proof of Lemma~\ref{lem:CHLT_improvement} for a Boolean function $f$ essentially applies Theorem~\ref{thm:CHLT} to the XOR of disjoint copies of $f$. Lemma~\ref{lem:CHLT_improvement} bounds the sum of absolute values of the level-$1$ Fourier coefficients with respect to the standard basis; we extend this bound to any basis of the span of the Fourier support of $f$ (Corollary~\ref{cor:improved chlt-implication}). The proof essentially constructs another function $h$ by performing a basis change on parities, and then applies Lemma~\ref{lem:CHLT_improvement} to the function $h$.
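As a quick numerical sanity check of Lemma~\ref{lem:CHLT_improvement} (not part of the proof; the helper names below are ours), for $\AND_n$ the level-$1$ Fourier mass equals $2n/2^n = 2\,\delta(\AND_n)\,\degtwo(\AND_n)$, consistent with the $O(\delta(f)\degtwo(f))$ bound. A brute-force computation confirms this for $n = 4$:

```python
from itertools import product
from math import prod

def fourier_coeffs(f, n):
    """Brute-force Fourier coefficients of f : {-1,1}^n -> R."""
    pts = list(product([-1, 1], repeat=n))
    return {S: sum(f(x) * prod(x[i] for i in range(n) if S[i]) for x in pts) / 2 ** n
            for S in product([0, 1], repeat=n)}

def AND(x):
    """AND_n in the +/-1 convention: outputs -1 iff every coordinate is -1."""
    return -1 if all(xi == -1 for xi in x) else 1

n = 4
c = fourier_coeffs(AND, n)
level1 = sum(abs(v) for S, v in c.items() if sum(S) == 1)  # level-1 Fourier mass
delta = 1 / 2 ** n   # weight of AND_n
deg2 = n             # F2-degree of AND_n: its F2-polynomial is x_1 * ... * x_n
print(level1, 2 * delta * deg2)  # both equal 0.5 for n = 4
```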
Lemma~\ref{lem:delta lower bound in terms of rk and also k''} is a direct implication of Corollary~\ref{cor:improved chlt-implication}; observe that every Fourier coefficient on the left-hand side of Corollary~\ref{cor:improved chlt-implication} is at least $1/k''(f)$ in absolute value (from the definition of $k''(f)$). \section{Proof techniques for upper bound results}\label{sec: ub overview} In this section we give an overview of our two upper bound results, Theorems~\ref{thm:delta upper bound in terms of rk and also k'} and \ref{thm:delta upper bound in terms of rk and also k''}. To present the proof overviews of these theorems we will use $(\rho, \kappa)$-$k'$-plots (Figure~\ref{fig: k'}) and $(\rho, \kappa)$-$k''$-plots (Figure~\ref{fig: k''}), respectively. In a $(\rho, \kappa)$-$k'$-plot (respectively, $(\rho, \kappa)$-$k''$-plot) we will refer to the ``intersection point'' as the point of intersection between the $k$-line and the $k'$-curve (respectively, between the $k$-line and the $k''$-curve). Which intersection point we are referring to will be clear from context. \subsection{Proof techniques for Theorem~\ref{thm:delta upper bound in terms of rk and also k'}} \label{sec: proof of upper bound on delta for k'} To prove Theorem~\ref{thm:delta upper bound in terms of rk and also k'}, we split our goal into two natural parts: constructing functions on the $k$-line and constructing functions on the $k'$-curve. Both classes of functions are modifications of the Addressing function (Definition~\ref{defi:Addressing}). In these modifications, all or some of the target variables of the Addressing function are replaced with an AND function, a Bent function, or a combination of them. We first provide a description of some functions that lie on the intersection point. While we do not require this, we choose to describe these functions in order to provide more intuition.
\paragraph{Construction of functions at the intersection point in any $(\rho, \kappa)$-$k'$-plot:} Note that a function lies at the intersection point when \begin{equation}\label{eq:intersection} k'(f) = \frac{k(f) \log(k(f))}{r(f)}. \end{equation} Thus, we want to construct a function $f$ with $k(f)=\Theta(\kappa)$, $r(f)=\Theta(\rho)$, $k'(f) = \Theta\left(\frac{\kappa\log \kappa}{\rho}\right)$ and $\delta(f) = \Theta\left(\frac{\rho^2}{\kappa\log^2 \kappa}\right)$. In particular, we want to construct such functions for all $\rho, \kappa$ satisfying $\log \kappa \leq \rho \leq \kappa^{\frac{1}{2}}$. Note that the Addressing function $\AD_t : \pmone^{\log t + t} \to \pmone$ has sparsity $t^2$, rank $(t + \log t)$, max-supp-entropy $t$ and weight $1/2$ (Observation~\ref{obs:properties of AND, Bent and Addressing}), and thus $\AD_t$ satisfies Equation~\eqref{eq:intersection}. This only gives functions at the intersection point in the $(\rho, \kappa)$-$k'$-plots where $\rho = \Theta(\sqrt{\kappa})$, while we have to exhibit such functions for all $(\rho, \kappa)$-$k'$-plots where $\log \kappa \leq \rho = O\bra{\sqrt{\kappa}}$. Our next step is to tweak $\AD_t$ in such a way that the rank of the new function $f$ does not change significantly while the sparsity and max-supp-entropy both increase by the same multiplicative factor. This would ensure that the resulting function satisfies Equation~\eqref{eq:intersection}. If the resulting function's weight decreases to the required value, we would have a function at the intersection point. In order to tweak $\AD_t$, we consider a special kind of composed function $f:= \AD_t \circt g$,\footnote{see Definition~\ref{defi:composedaddressing} for a precise definition} obtained by replacing each target variable in the Addressing function with a function $g$, where each copy of $g$ acts on a set of new variables. Lemma~\ref{lem:properties of composition of addressing and g} gives the properties of such composed functions.
Due to the structure of the Fourier spectrum of the Addressing function, Lemma~\ref{lem:properties of composition of addressing and g} gives us $r(f) \approx t \cdot r(g)$, $k(f) \approx t^2 \cdot k(g)$, $k'(f) = t \cdot k'(g)$ and $\delta(f) = \delta(g)$. So, if $g$ is a function on a small number of variables (say $\log t'$) with near-maximal sparsity and max-supp-entropy ($\Theta(t')$), then the resulting function satisfies Equation~\eqref{eq:intersection}. The $\AND$ function is a natural choice for $g$. We denote the resulting function by $\AD_{t, t'}$ (Definition~\ref{defi:ADtt'}); by suitably varying $t$ and $t'$, this yields a function at the intersection point of every plot. \paragraph{Constructing functions on the $k$-line:} We start with $\AD_{t,t'}$, the function at the intersection point in $(\rho, \kappa)$-$k'$-plots. We modify $\AD_{t, t'}$ in such a way that its sparsity, rank and weight do not change much, while the max-supp-entropy increases. We replace a single $\AND_{\log t'}$ in $\AD_{t, t'}$ by $\AND_{\log a}$ for some suitable $a > t$, and denote the new function by $\AD_{t,t',a}$ (Definition~\ref{defi:ADtt'a}). A suitable setting of the parameters $t, t'$ and $a$ yields functions on the $k$-line for all plots (Claim~\ref{claim:setting parameters for tightstraightline}). \paragraph{Constructing functions on the $k'$-curve of the $(\rho, \kappa)$-$k'$-plot:} We start with $\AD_{t, t'}$ at the intersection point of the $(\rho, \kappa/\ell)$-$k'$-plot (for some parameter $\ell > 0$). We modify $\AD_{t, t'}$ in such a way that its rank and weight do not change, the sparsity increases by a multiplicative factor of $\ell$ and the max-supp-entropy increases by a factor of $\sqrt{\ell}$. The new function $f$ will be on the $k'$-curve in the $(\rho, \kappa)$-$k'$-plot because $\frac{k(f)}{k'(f)^2} = \frac{k(\AD_{t,t'})}{k'(\AD_{t,t'})^2} = \delta(\AD_{t,t'}) = \delta(f)$.
Note that $k'(f) \approx \frac{\kappa\log(\kappa)}{\rho \sqrt{\ell}}$, thus making $\ell$ suitably large yields functions on the $k'$-curve for all $\rho\leq \kappa' \leq \frac{\kappa\log(\kappa)}{\rho}$ in all plots. We now change $\AD_{t,t'}$ to have the properties mentioned above. We modify each $\AND_{\log t'}$ in $\AD_{t, t'}$ as follows: replace a single variable $x$ by $x \cdot B$, where $B$ is a bent function on $\log \ell$ new variables. We denote this new inner function by $\AB$ (Definition~\ref{defi:AB}), and $\AD_{t} \circt \AB$ by $\AAB$ (Definition~\ref{defi:AAB}). Changing $\AND_{\log t'}$ to $\AB$ keeps its rank and weight roughly the same, while increasing its sparsity by a factor of $\ell$ and its max-supp-entropy by a factor of $\sqrt{\ell}$ (Claim~\ref{claim:properties of AND of Bent}). In Claim~\ref{claim:properties of AAB} we show, using our composition lemma (Lemma~\ref{lem:properties of composition of addressing and g}), that the properties of $\AD_t \circt \AND_{\log t'}$ and $\AD_t \circt \AB$ change in a similar fashion. Thus, a suitable setting of the parameters $t, t', \ell$ yields functions on the $k'$-curve for all plots (Claim~\ref{claim:setting parameters for tight curve}). \subsection{Proof techniques for Theorem~\ref{thm:delta upper bound in terms of rk and also k''}} \label{sec: proof of upper bound on delta for k''} We split our goal into two parts: constructing functions on the $k$-line when $\frac{\kappa}{\rho}\log\kappa \leq \kappa'' \leq \kappa$, and constructing functions on the $k''$-curve when $\kappa'' \leq \frac{\kappa}{\rho}\log\kappa$. To construct functions on the $k$-line, we use the functions $\AD_{t,t',a}$ constructed for the proof of Theorem~\ref{thm:delta upper bound in terms of rk and also k'}, since $k'(\AD_{t,t',a}) = k''(\AD_{t,t',a})$.
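All constructions in this section start from the Addressing function, so it may be reassuring to confirm its base parameters from Observation~\ref{obs:properties of AND, Bent and Addressing} by brute force for a small instance (a sketch with our own helper names; we take $t = 4$, and index addresses from $0$ for convenience):

```python
from itertools import product
from math import prod

t = 4
logt = 2
n = logt + t  # 6 variables: 2 addressing + 4 target

def AD(z):
    """Addressing function AD_4; address bits read with -1 as 1 and +1 as 0."""
    idx = int("".join("1" if zi == -1 else "0" for zi in z[:logt]), 2)
    return z[logt + idx]

pts = list(product([-1, 1], repeat=n))
coeffs = {S: sum(AD(z) * prod(z[i] for i in range(n) if S[i]) for z in pts) / 2 ** n
          for S in product([0, 1], repeat=n)}
support = {S: v for S, v in coeffs.items() if abs(v) > 1e-9}

k = len(support)                                     # sparsity: t^2 = 16
kp = max(1 / abs(v) for v in support.values())       # max-supp-entropy: t = 4
delta = sum(1 for z in pts if AD(z) == -1) / 2 ** n  # weight: 1/2
print(k, kp, delta)
```

The output $16$, $4.0$, $0.5$ matches sparsity $t^2$, max-supp-entropy $t$ and weight $1/2$.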
For constructing functions on the $k''$-curve, we need to construct functions $f$ such that \begin{equation}\label{eq:ub k'' curve} \delta(f) = \Theta\left(\frac{r(f)}{k''(f)\log\left(k''(f)/r(f)\right)}\right). \end{equation} We will use a technique similar to our construction of functions on the $k'$-curve in Theorem~\ref{thm:delta upper bound in terms of rk and also k'}. We start from the function $\AD_{t,t'}$ at the intersection point. Note that $\AD_{t,t'}$ satisfies Equation~\eqref{eq:ub k'' curve}. We modify $\AD_{t,t'}$ such that the rank, weight and max-rank-entropy change very little but the sparsity increases by a multiplicative parameter $2^p$. We achieve this by replacing a variable (say $x$) in $\AD_{t,t'}$ with $x\cdot \AND(y_1, \dots, y_p)$, where $x$ and the $y_i$'s are all variables of $\AD_{t,t'}$, but for any $i$, $x$ and $y_i$ do not appear in the same monomial (Claim~\ref{claim: setting parameters for tight curve for k''}). The new function $f$ still satisfies Equation~\eqref{eq:ub k'' curve}. This places $f$ on the $k''$-curve in a plot corresponding to the same rank as that of $\AD_{t, t'}$, but where the sparsity increases by a factor of $2^p$. By suitably setting $p$, $t$ and $t'$, we obtain functions on the $k''$-curve for all plots. This proves the second bound in Theorem~\ref{thm:delta upper bound in terms of rk and also k''}. \section{Preliminaries}\label{sec:prelims} All logarithms in this paper are taken to be base 2. We use the notation $[n]$ to denote the set $\cbra{1, 2, \dots, n}$. When necessary, we assume $t$ is a power of $2$. We use the notation $1^n$ (respectively, $(-1)^n$) to denote the $n$-bit string $(1, 1, \dots, 1)$ (respectively, $(-1, -1, \dots, -1)$). For a function $f : \pmone^n \to \pmone$, its $\ftwo$-degree, denoted by $\degtwo(f)$, is the degree of its unique $\ftwo$-polynomial representation. Throughout this paper, we often identify subsets of $[n]$ with their corresponding characteristic vectors in $\ftwo^n$.
Thus when we refer to linear algebraic measures of a collection of subsets of $[n]$, we mean the measure on the corresponding subset of $\ftwo^n$ (where $\ftwo^n$ is viewed as an $\ftwo$-vector space). Throughout this paper, we assume that $f$ is not a constant function or a parity or a negative parity, unless mentioned otherwise. \subsection{Fourier analysis of Boolean functions} Consider the vector space of functions from $\fcube{n}$ to $\R$ equipped with the following inner product. \[ \abra{f, g} := \frac{1}{2^n}\sum_{x \in \fcube{n}}f(x)g(x). \] For a set $S \subseteq [n]$, define a \emph{parity} function (which we also refer to as \emph{characters}) $\chi_S : \pmone^{n} \to \pmone$ by $\chi_S(x) = \prod_{i \in S}x_i$. The set of parity functions $\cbra{\chi_S : S \subseteq [n]}$ forms an orthonormal basis for this vector space. Hence, every function $f : \fcube{n} \to \R$ has a unique representation as \[ f = \sum_{S \subseteq [n]}\wh{f}(S)\chi_S, \] where $\wh{f}(S) = \langle f, \chi_S \rangle$ for all $S \subseteq [n]$. The coefficients $\cbra{\wh{f}(S) : S \subseteq [n]}$ are called the \emph{Fourier coefficients} of $f$. Define the Fourier $\ell_1$-norm of a function $f : \pmone^n \to \mathbb{R}$ by $\lone{\wh{f}} := \sum_{S \subseteq [n]} |\wh{f}(S)|$. \begin{defi}[Weight of a Boolean function] \label{defi:weight} Let $f: \pmone^{n} \to \pmone$ be any function. The \emph{weight} of $f$, denoted by $\delta(f)$, is defined as \[ \delta(f) = \Pr_{x \in \pmone^n}[f(x) = -1]. \] \end{defi} The following observation follows from the fact that $\wh{f}(\emptyset) = \frac{1}{2^n}\sum_{x \in \pmone^n}f(x)$. \begin{observation} \label{obs:weight, empty Fourier} Let $f : \pmone^n \to \pmone$ be any function. Then, \[ \wh{f}(\emptyset) = 1 - 2 \delta(f). \] \end{observation} \begin{defi}[Fourier Support] \label{defi: Fouriersupport} Let $f:\pmone^n \to \R$ be any function. 
The Fourier support of $f$, denoted by $\supp(f)$, is defined as \[ \supp(f) = \cbra{S\subseteq[n] : \wh{f}(S) \neq 0}. \] \end{defi} \begin{remark} In the literature, Fourier support is generally denoted by $\supp(\wh{f})$. For ease of notation we drop the hat symbol above $f$. A similar convention has been adopted in Definitions~\ref{defi:sparsity},~\ref{defi:rank}, and~\ref{defi:max entropy, max rank entropy}. \end{remark} For ease of notation, we sometimes abuse notation and say that the elements of the Fourier support of $f$ are the characters $\cbra{\chi_S : S \subseteq [n], \wh{f}(S) \neq 0}$, rather than the corresponding sets as given in Definition~\ref{defi: Fouriersupport}. \begin{defi}[Fourier sparsity] \label{defi:sparsity} Let $f:\pmone^n \to \R$ be any function. The Fourier sparsity of $f$, denoted by $k(f)$, is defined as \[ k(f) = |\supp(f)|. \] \end{defi} For simplicity we assume that $k(f) \geq 2$ for all Boolean functions $f$ considered in this paper (unless explicitly mentioned otherwise). We often simply refer to the Fourier sparsity as \emph{sparsity}. \begin{theorem}[Parseval's identity] \label{thm:Parseval} Let $f:\pmone^n \to \pmone$ be any function. Then, \[ \sum_{S \subseteq[n]}\wh{f}(S)^2 = 1. \] \end{theorem} We require the following lemma (see, for example,~\cite{GT13}). \begin{lemma}[Uncertainty Principle] \label{lem:uncertainity principle} Let $f:\pmone^n \to \R$ be a polynomial and let $U_n$ denote the uniform distribution on $\pmone^n$. Then, \[ \Pr_{x \sim U_n}[f(x) \neq 0] \geq \frac{1}{k(f)}. \] \end{lemma} \begin{lemma}[{\cite[Theorem 8.1]{GOS+}}] \label{lem:granularity} Let $f: \pmone^n \to \pmone$ be any function. Then, for all $S \subseteq [n]$, $|\wh{f}(S)|$ is an integral multiple of $2^{1 -\lfloor \log k(f) \rfloor}$. \end{lemma} We also require the following lemma relating the $\ftwo{}$-degree of a Boolean function and its Fourier sparsity (see, for example,~\cite{BC99}). 
\begin{lemma}\label{lem:ftwodeg and Fourier sparsity} Let $f:\fcube{n} \rightarrow \fcube{}$ be any function with $k(f) > 1$. Then, \[ \degtwo(f) \leq \log k(f). \] \end{lemma} The next claim shows that $\degtwo(f)$ does not change under a change of basis over the Fourier domain. \begin{claim} \label{thm:ftwo_deg_does_not_change} Let $f:\pmone^n \to \pmone$ be any function and let $B\in \ftwo^{n \times n}$ be an invertible matrix. Define the function $f_B:\pmone^n \to \R$ as \[ \widehat{f_B}(\alpha) = \widehat{f}(B\alpha) \ \ \textnormal{for all~} \alpha \in \ftwo^n. \] Then $f_B$ is Boolean valued and $\degtwo(f_B) = \degtwo(f)$. \end{claim} \begin{proof} Viewing $f_B$ and $f$ as functions over the domain $\zone^n$ instead of $\pmone^n$, we get that this basis change over the Fourier domain amounts to applying $(B^{-1})^T$ on the input space (see~\cite[Lemma 4]{ACL+19}). In other words, $f_B$ is Boolean valued, and if $p_{f_B}$ and $p_f$ are the $\ftwo$-polynomials representing $f_B$ and $f$, respectively, then $p_{f_B}(x) = p_f((B^{-1})^T x)$. For all $x \in \ftwo^n$, let $p_f(x) = \sum_{\gamma \in \ftwo^n} \widehat{p_f}(\gamma) \prod_{i: \gamma_i = 1} x_i$. If $(B^{-1})^T_{j}$ denotes the $j$-th row of $(B^{-1})^T$ (for $j \in [n]$), then $p_{f_B}$ has the unique representation \begin{align*} p_{f_B}(x) = \sum_{\gamma \in \ftwo^n} \widehat{p_f}(\gamma) \prod_{i: \gamma_i = 1} \langle (B^{-1})^T_{i}, x \rangle . \end{align*} So, every variable appearing in the polynomial representation of $p_f$ is replaced by a linear combination (over $\ftwo$) of $x_i$'s in $p_{f_B}$. In particular, the degree of any monomial in the polynomial representation of $p_f$ is at least as large as the degree of its expansion in $p_{f_B}$, and hence $\deg(p_{f_B}) \leq \deg(p_f)$. Since $B$ is invertible, the same argument shows $\deg(p_f) \leq \deg(p_{f_B})$. Thus $\deg(p_{f_B}) = \deg(p_f)$, which implies $\degtwo(f_B) = \degtwo(f)$.
\end{proof} The following corollary follows from \cite[Theorem 13]{CHLT19} and Lemma~\ref{lem:ftwodeg and Fourier sparsity}. \begin{corollary}\label{cor:chlt-implication} Let $f:\pmone^n \to \pmone$ be any function, and let $\cS \subseteq \supp(f)$ be a basis of $\spann(\supp(f))$. Then, \[ \sum_{S \in \cS} |\wh{f}(S)| \leq 4\log k(f). \] \end{corollary} We now define notions of restriction of a function $f : \pmone^n \to \pmone$ to a subset $A \subseteq \pmone^n$. \begin{defi}[Restriction] Let $f: \pmone^n \to \pmone$ and $A \subseteq \pmone^n$. The restriction of $f$ to $A$ is the function $f|_A: A \to \pmone$ defined as $f|_A(x) = f(x)$ for all $x \in A$. \end{defi} \begin{defi}[Affine Restriction] Let $f:\pmone^n \rightarrow \pmone$, let $\Gamma$ be a set of parities and $b \in \pmone^{\Gamma}$ be an assignment to these parities. Define the function $f|_{\Vb}$ to be the restriction of $f$ to the affine subspace obtained by fixing parities in $\Gamma$ according to $b$. That is, \[ f|_{\Vb} := f|_{\{x \in \pmone^n: \chi_{\gamma}(x) = b_{\gamma} \textnormal{~for all~} \gamma \in \Gamma\}}. \] \end{defi} \subsection{Fourier expansions and properties of some standard functions} For any integer $n > 0$, define the function $\AND_n : \pmone^n \to \pmone$ by $\AND_n(x) = -1$ if $x = (-1)^n$, and 1 otherwise. We drop the subscript $n$ when it is clear from the context. We state the Fourier expansion of $\AND$ below without proof. \begin{fact}[Fourier expansion of $\AND$] \label{fact:Fourier AND} Let $n \geq 1$ be any positive integer. Then \begin{align*} \wh{\AND_n}(S) = \begin{cases} 1 - \frac{2}{2^n} & S = \emptyset,\\ \frac{2 \cdot (-1)^{|S| + 1}}{2^n} & \text{otherwise}. \end{cases} \end{align*} \end{fact} \begin{defi}[Bent functions] \label{defi:bent} A function $f : \pmone^n \to \pmone$ is said to be a bent function if $|\wh{f}(S)| = |\wh{f}(T)|$ for all $S, T \subseteq [n]$. 
\end{defi} Using Parseval's identity (Theorem~\ref{thm:Parseval}) we get the following observation. \begin{observation} \label{obs: bentfcoeffs} Let $f : \pmone^n \to \pmone$ be a bent function. Then, $|\wh{f}(S)| = \frac{1}{\sqrt{2^n}}$ for all $S \subseteq [n]$. \end{observation} \begin{defi}[Indicator function] \label{defi:ind} For any integer $n \geq 1$ and $b \in \pmone^n$, define the function $\ind_b : \pmone^n \to \zone$ by \[ \ind_b (x) = \begin{cases} 1 & x = b,\\ 0 & \text{otherwise}. \end{cases} \] \end{defi} We require the following observation about the Fourier expansion of Indicator functions, which we state without proof. \begin{observation}[Fourier expansion of Indicator functions] \label{obs:indexpansion} For any integer $n \geq 1$ and $b \in \pmone^n$, let $\ind_b$ be as in Definition~\ref{defi:ind}. Then, \[ \wh{\ind}_b(S) = \frac{\prod_{i \in S}b_i}{2^n} \quad\text{for all } S \subseteq [n]. \] \end{observation} \begin{defi}[Addressing function]\label{defi:Addressing} For any integer $t \geq 2$, define the Addressing function $\AD_{t} : \fcube{\log t} \times \fcube{t} \rightarrow \fcube{}$ by \[ \AD_{t}(x,y) = y_{\bin(x)}, \] where $x \in \pmone^{\log t}$ and $y \in \pmone^t$, and $\bin(x)$ denotes the integer in $[t]$ whose binary representation is given by $x$ (where $-1$'s are viewed as 1 in the string $x$, and $1$'s are viewed as 0). We refer to the $x$-variables as \emph{addressing variables}, and the $y$-variables as \emph{target variables}. \end{defi} The following combinatorial observation is useful to us. \begin{observation} \label{obs:sum_b char = 0 } For any integer $n \geq 1$ and non-empty subset $S \subseteq [n]$, $$\sum_{b \in \pmone^{n}} \prod_{i \in S} b_i = 0.$$ \end{observation} We require the following representation of Addressing functions. \begin{observation} \label{obs:addexpansion} For any integer $t \geq 2$, $x \in \pmone^{\log t}$ and $y \in \pmone^t$, we have \[ \AD_t(x, y) = \sum_{b \in \pmone^{\log t}} y_b \ind_b(x). 
\] \end{observation} We next define a way of modifying the Addressing function that is of use to us. In this modification, we replace target variables by functions, each acting on disjoint variables. \begin{defi}[Composed addressing functions] \label{defi:composedaddressing} Let $t \geq 2$, $\ell_1, \dots, \ell_t \geq 1$ be any integers. Let $g_i : \pmone^{\ell_i} \to \pmone$ be any functions for $i \in [t]$. Define the function $\AD_t \circt (g_1, \dots, g_t) : \pmone^{\log t} \times \pmone^{\ell_1 + \cdots + \ell_t} \to \pmone$ by \[ \AD_t \circt (g_1, \dots, g_t) (x, y_1, \dots, y_t) = \AD_t(x, g_1(y_1), \dots, g_t(y_t)), \] where $x \in \pmone^{\log t}$ and $y_i \in \pmone^{\ell_i}$ for all $i \in [t]$. \end{defi} For any function $g : \pmone^s \to \pmone$, we use the notation $\AD_t \circt g$ to denote the function $\AD_t \circt (g, g, \dots, g) : \pmone^{\log t} \times \pmone^{ts} \to \pmone$. \subsection{Fourier-analytic measures of Boolean functions} We now introduce a few Fourier-analytic measures on Boolean functions that we use throughout the rest of the paper, and state some important relationships between them. Recall that we use the notation $\dim(S)$ to denote the dimension of the span of the set $S$. \begin{defi}[Fourier rank] \label{defi:rank} Let $f: \pmone^n \to \pmone$ be any function. Define the Fourier rank of $f$, denoted $r(f)$, by \[ r(f) = \dim(\supp(f)). \] \end{defi} We often refer to Fourier rank as simply \emph{rank}. Sanyal~\cite{San19} showed the following upper bound on the rank of Boolean functions in terms of their sparsity. \begin{theorem}[{\cite[Theorem 1.2]{San19}}] \label{thm:Sanyal-original} Let $f:\pmone^n \to \pmone$ be any function. Then \[ r(f) = O(\sqrt{k(f)}\log k(f)). \] \end{theorem} We require the following observation which gives a simple upper bound on the rank of a Boolean function. 
\begin{observation} \label{obs: napdt_implies_rank_ub} Let $f: \pmone^n \to \pmone$ be any function and $\Gamma$ be a set of parities. If for all $b \in \pmone^{\Gamma}$ the restricted function $f|_{{(\Gamma, b)}}$ is constant then $r(f) \leq |\Gamma|$. \end{observation} Recall that for any function $f : \pmone^n \to \pmone$ and any real $t > 0$, we define $\calS_t := \{S \subseteq [n]: |\widehat{f}(S)| \geq 1/\threshold\}$ (we suppress the dependence of $\calS_t$ on $f$ as the underlying function will be clear from context). \begin{defi} \label{defi:max entropy, max rank entropy} Let $f:\fcube{n} \to \fcube{}$ be any function. Define the Fourier max-supp-entropy of $f$, denoted $k'(f)$, by \[ k'(f) := \argmin_\threshold\cbra{\cS_{\threshold} = \supp(f)}. \] Equivalently, \[ k'(f) := \max_{S \in \supp(f)}\cbra{\frac{1}{|\wh{f}(S)|}}. \] Define the Fourier max-rank-entropy of $f$, denoted $k''(f)$, by \[ k''(f): = \argmin_\threshold\cbra{\dim(\cS_{\threshold}) = r(f)}. \] \end{defi} We often refer to the Fourier max-supp-entropy and Fourier max-rank-entropy as simply \emph{max-supp-entropy} and \emph{max-rank-entropy}, respectively. \begin{lemma}[Relationships between parameters] \label{lem:relationships between rk and k'} Let $f:\fcube{n} \to \fcube{}$ be any function. Then the following inequalities hold. \begin{enumerate} \item $\log k(f) \leq r(f) = O(\sqrt{k(f)} \log{k(f)})$. \item $\sqrt{k(f)} \leq k'(f) \leq k(f)/2$. \item $\max\cbra{\sqrt{r(f)}, r(f)/(4\log k(f))} \leq k''(f) \leq k'(f)$. \end{enumerate} \end{lemma} \begin{proof}~ \begin{enumerate} \item The first inequality holds since $k(f) \leq 2^{r(f)}$, and the second inequality follows from Theorem~\ref{thm:Sanyal-original}. \item Recall from Definition~\ref{defi:max entropy, max rank entropy} that $k'(f) = \argmin_\threshold\cbra{\cS_{\threshold} = \supp(f)}$. This means for all $S \in \supp(f)$, $|\wh{f}(S)| \geq \frac{1}{k'(f)}$. 
We have from Parseval's identity (Theorem~\ref{thm:Parseval}) that \[ \sum_{S \subseteq [n]} \wh{f}(S)^2 = 1 \implies k(f)/k'(f)^2 \leq 1 \implies \sqrt{k(f)} \leq k'(f). \] By Lemma~\ref{lem:granularity}, \[ \frac{1}{k'(f)} \geq 2^{1 - \lfloor \log k(f) \rfloor} = \frac{2}{2^{\lfloor \log k(f) \rfloor}} \geq \frac{2}{k(f)}, \] that is, $k'(f) \leq k(f)/2$. \item Recall from Definition~\ref{defi:max entropy, max rank entropy} that $k''(f) = \argmin_\threshold\cbra{\dim(\cS_{\threshold}) = r(f)}$. Observe that for $t = k'(f)$, we have $\dim(\calS_t) = \dim(\supp(f)) = r(f)$. Hence $k''(f) \leq k'(f)$. Since $\dim(\calS_{k''(f)}) = r(f)$, there exists a set $\calB \subseteq \calS_{k''(f)}$ such that $|\calB| = r(f)$, $\calB$ is a basis of $\spann(\supp(f))$, and $|\wh{f}(S)| \geq 1/k''(f)$ for all $S \in \calB$. Fix such a set $\calB$. By Theorem~\ref{thm:Parseval}, \begin{align*} & 1 \geq \sum_{S \in \calB} \wh{f}(S)^2 \geq \frac{r(f)}{(k''(f))^2} \\ & \implies k''(f) \geq \sqrt{r(f)}. \end{align*} By Corollary \ref{cor:chlt-implication}, \begin{align*} & \sum_{S \in \calB}|\wh{f}(S)| \leq 4\log k(f)\\ & \implies \frac{r(f)}{k''(f)} \leq 4\log k(f) \\ & \implies k''(f) \geq r(f)/(4\log k(f)). \end{align*} Therefore $k''(f) \geq \max\cbra{\sqrt{r(f)}, r(f)/(4\log k(f))}$. \end{enumerate} \end{proof} \begin{claim} \label{claim:delta at least k/k'^2} Let $f: \pmone^n \to \pmone$ be a function with $k(f) \geq 2$. Then \[ \delta(f) = \Omega\bra{\frac{k(f)}{k'(f)^2}}. \] \end{claim} \begin{proof} Recall from Definition~\ref{defi:max entropy, max rank entropy} that $k'(f) = \argmin_\threshold\cbra{\cS_{\threshold} = \supp(f)}$. This means for all $S \in \supp(f)$, $|\wh{f}(S)| \geq \frac{1}{k'(f)}$. 
Therefore \begin{align*} (k(f)-1)/(k'(f))^2 & \leq \sum_{S \subseteq [n], S \neq \emptyset} \wh{f}(S)^2\\ &= 1 - \wh{f}(\emptyset)^2 \tag*{by Theorem~\ref{thm:Parseval}}\\ &= 1 - (1-2\delta(f))^2 \tag*{by Observation~\ref{obs:weight, empty Fourier}}\\ &= 4\delta(f) - 4\delta(f)^2 \leq 4\delta(f)\\ \implies \delta(f) &\geq \frac{(k(f) - 1)}{4(k'(f))^2} \geq \frac{k(f)}{8(k'(f))^2}. \tag*{since $k(f) \geq 2$} \end{align*} \end{proof} \begin{claim} \label{claim:lone_le_sqrt_kdelta} Let $f:\pmone^n \to \pmone$ be any function. Then \[ \lone{\wh{f}} \leq 3\sqrt{k(f) \delta(f)}. \] \end{claim} \begin{proof} Since $\widehat{f}(\emptyset) = 1 - 2\delta(f)$ we have $\lone{\wh{f}} = |1- 2 \delta(f)| + \sum_{S \neq \emptyset} |\wh{f}(S)|$. The term $\sum_{S \neq \emptyset} |\wh{f}(S)|$ can be bounded as follows: \begin{align*} \bra{\sum_{S \neq \emptyset } |\wh{f}(S)| }^2 &\leq k(f) \bra{\sum_{S \neq \emptyset} \wh{f}(S)^2} \tag*{by the Cauchy-Schwarz inequality}\\ &= k(f) (1 - (1-2\delta(f))^2) \leq 4 k(f)\delta(f). \end{align*} Thus, \begin{align*} \lone{\wh{f}} &= |1-2\delta(f)| + \sum_{S \neq \emptyset} |\wh{f}(S)|\\ &\leq |1-2\delta(f)| + 2\sqrt{k(f) \delta(f)} \\ &\leq 1 + 2\sqrt{k(f) \delta(f)} \leq 3 \sqrt{k(f)\delta(f)}. \tag*{since $k(f)\delta(f) \geq 1$ by Lemma~\ref{lem:uncertainity principle}} \end{align*} \end{proof} We require the following observation about the rank, sparsity, max-supp-entropy, max-rank-entropy and weight of $\AND, \AD_{t}$ and bent functions, which follows immediately from definitions and first principles. We omit its proof. \begin{observation} \label{obs:properties of AND, Bent and Addressing} Let $t \geq 2$ and $\ell, t' \geq 4$ be any positive integers, and let $B_\ell: \pmone^{\log \ell} \to \pmone$ be any bent function. Then the rank, sparsity, max-supp-entropy, max-rank-entropy and weight of $\AND_{\log t'}$, $B_\ell$ and $\AD_t$ are as in the following table. 
\begin{center} \begin{tabular}{ |c|c|c|c|c|c| } \hline $f$ & $r(f)$ & $k(f)$ & $k'(f)$ & $k''(f)$ & $\delta(f)$\\ \hline \hline $\AND_{\log t'}$ & $\log t'$ & $t'$ & $t'/2$ & $t'/2$ & $\frac{1}{t'}$\\ \hline $B_\ell$ & $\log \ell$ & $\ell$ & $\sqrt{\ell}$ & $\sqrt{\ell}$ & $\frac{1}{2} \pm \frac{1}{2 \sqrt{\ell}}$ \\ \hline $\AD_{t}$ & $t + \log t$ & $t^2$ & $t$ & $t$ & $\frac{1}{2}$\\ \hline \end{tabular} \end{center} \end{observation} \section{Lower bound proofs} \label{sec:lower bound} For lower bounds on $\delta(f)$ of a Boolean function $f$, we need to prove two theorems: Theorems~\ref{thm:delta lower bound in terms of rk and also k'} and~\ref{thm:delta lower bound in terms of rk and also k''}. The proof of Theorem~\ref{thm:delta lower bound in terms of rk and also k'} is given in Section~\ref{sec:lower bound using k'} and the proof of Theorem~\ref{thm:delta lower bound in terms of rk and also k''} is given in Section~\ref{sec:lower bound using k''}. \subsection{Proof of Theorem~\ref{thm:delta lower bound in terms of rk and also k'} (and Theorem~\ref{thm:delta lower bound in terms of rk only})} \label{sec:lower bound using k'} Remember that we defined the \emph{Fourier max-supp-entropy} of a Boolean function $f$, denoted by $\seespectrum(f)$, to be $$ \max_{S \in \supp(f)} \frac{1}{|\wh{f}(S)|}. $$ The main aim of this section is to give a lower bound on $\delta(f)$ with respect to $\seespectrum(f)$ for a Boolean function $f$ (Theorem~\ref{thm:delta lower bound in terms of rk and also k'}). We first prove Theorem~\ref{thm:delta lower bound in terms of rk only} which implies Theorem~\ref{thm:delta lower bound in terms of rk and also k'} (together with Claim~\ref{claim:delta at least k/k'^2}). See Section~\ref{sec:overview of delta lower bound in terms of rk only} for an overview of the proof of Theorem~\ref{thm:delta lower bound in terms of rk only}. 
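Before proceeding, the Fourier-analytic quantities involved can be sanity-checked by brute force. The following sketch (our own illustration, not part of the argument; all helper names are ours, and the computation is feasible only for small $n$) recovers the $\AND$ row of the table in Observation~\ref{obs:properties of AND, Bent and Addressing} for $t' = 8$.

```python
from itertools import product

def prod_chi(S, x):
    # character chi_S(x) = prod_{i in S} x_i; S is encoded as a 0/1 vector
    out = 1
    for s, xi in zip(S, x):
        if s:
            out *= xi
    return out

def fourier_coeffs(f, n):
    # hat{f}(S) = E_x[f(x) * chi_S(x)], x uniform over {-1,1}^n
    coeffs = {}
    for S in product([0, 1], repeat=n):
        total = sum(f(x) * prod_chi(S, x) for x in product([-1, 1], repeat=n))
        coeffs[S] = total / 2 ** n
    return coeffs

def f2_rank(vectors):
    # rank over F2 of a list of 0/1 vectors, via Gaussian elimination
    vecs = [list(v) for v in vectors]
    rank = 0
    for col in range(len(vecs[0]) if vecs else 0):
        pivot = next((i for i in range(rank, len(vecs)) if vecs[i][col]), None)
        if pivot is None:
            continue
        vecs[rank], vecs[pivot] = vecs[pivot], vecs[rank]
        for i in range(len(vecs)):
            if i != rank and vecs[i][col]:
                vecs[i] = [a ^ b for a, b in zip(vecs[i], vecs[rank])]
        rank += 1
    return rank

def AND(x):
    # AND_n in the +/-1 convention: -1 iff every input is -1
    return -1 if all(xi == -1 for xi in x) else 1

n = 3                                            # t' = 2^n = 8
c = fourier_coeffs(AND, n)
supp = [S for S, v in c.items() if abs(v) > 1e-9]
k = len(supp)                                    # sparsity k(f) = t' = 8
r = f2_rank(supp)                                # Fourier rank r(f) = log t' = 3
kp = max(1 / abs(c[S]) for S in supp)            # max-supp-entropy k'(f) = t'/2 = 4
delta = (1 - c[(0,) * n]) / 2                    # weight delta(f) = 1/t' = 1/8
```

Each quantity matches the corresponding table entry; the same routines can be pointed at a bent function or at $\AD_t$ for small parameters.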
Theorem~\ref{thm:delta lower bound in terms of rk only} can be viewed as an upper bound of $O(\sqrt{k(f)\delta(f)}\log k(f))$ on the Fourier rank of $f$. In order to prove Theorem~\ref{thm:delta lower bound in terms of rk only}, we give an algorithm (Algorithm~\ref{alg:NAPDT}) which takes a Boolean function $f$ as input and outputs a set of $O(\sqrt{\delta(f) k(f)}\log k(f))$ parities such that any assignment of these parities makes the function constant. From Observation~\ref{obs: napdt_implies_rank_ub}, this implies an upper bound of $O(\sqrt{\delta(f) k(f)}\log k(f))$ on Fourier rank of the function. We start by formally describing this algorithm. Recall that for a function $f:\pmone^n \rightarrow \pmone$, a set of parities $\Gamma$ and an assignment $b \in \pmone^\Gamma$, we define the restriction \[ f|_{\Vb} := f|_{\{x \in \pmone^n: \chi_{\gamma}(x) = b_{\gamma} \textnormal{~for all~} \gamma \in \Gamma\}}. \] Also let $\mathcal{B}_{\Gamma} := \{b \in \pmone^{\Gamma}: f|_{\Vb} \textit{ is not constant} \}$. \begin{algorithm}[H] \label{alg:NAPDT} \SetAlgoLined {\bf Input:} A function $f: \pmone^n \to \pmone$.\\ {\bf Output: } A set $\Gamma$ of parities whose evaluation determines $f$.\\ {\bf Initialization:} $\fmin \leftarrow f$, $\Gamma \leftarrow \emptyset$.\ \\ \While{$\mathcal{B}_{\Gamma}$ is non-empty}{ \begin{enumerate}[label = (\alph*)] \item \label{item: step_a} \textbf{Update $\Gamma$:} Let $\Gamma'$ be the smallest set of parities, such that, there exists $b\in \{-1,1\}^{\Gamma'}$ for which $\fmin|_{{(\Gamma',b)}}$ is constant, \[ \Gamma \leftarrow \Gamma \cup \Gamma'. \] \item \label{item: step_b} \textbf{Update $\fmin$:} Define $b^* := \arg\!\min_{b \in \mathcal{B}_{\Gamma}} \left\{ \frac{\delta(f|_{\Vb})}{k(f|_{\Vb})} \right\}$, and update \[ \fmin \leftarrow f|_{({\Gamma,b^*})}. \] \end{enumerate} } Return $\Gamma$. 
\caption{} \end{algorithm} Since the number of parities is finite and at least one parity is fixed in each iteration of Step~\ref{item: step_a} of the \textnormal{\textbf{while}} loop, the algorithm terminates. The termination condition implies that the algorithm outputs a set of parities $\Gamma$ such that for every assignment $b \in \pmone^{\Gamma}$, the restricted function $f|_{(\Gamma,b)}$ is constant. The only remaining step is to show that the number of parities fixed by Algorithm~\ref{alg:NAPDT} is $O(\sqrt{\delta(f) k(f)}\log k(f))$. For this we first recall the equivalence relation defined in Section~\ref{sec:overview of delta lower bound in terms of rk only} and a few properties of restricted functions (restricted according to an assignment of a set of parities). \paragraph{Equivalence relation for a set of parities} Let $f$ be the input to Algorithm~\ref{alg:NAPDT}. Given a set of parities $\Gamma$, define the following equivalence relation among parities in $\supp(f)$: \begin{equation} \label{eq: equivalence relation for Gamma} \forall \gamma_1, \gamma_2 \in \supp(f), \gamma_1 \equiv \gamma_2\ \text{iff}\ \gamma_1+\gamma_2 \in \spann(\Gamma). \end{equation} Let $\ell$ be the number of equivalence classes according to the equivalence relation for $\Gamma$. For $j \in [\ell]$, let $k_j$ be the size of the $j$-th equivalence class. Since the equivalence classes form a partition of $\supp(f)$, we have \begin{observation} \label{obs:coset_sparsity_sum} Following the notation of the paragraph above, $\sum_{j=1}^{\ell} k_j = \sparsity(f)$. \end{observation} Let $\beta_1, \dots , \beta_{\ell} \in \supp(f)$ be some representatives of the equivalence classes. For $j\in [\ell]$, let $\beta_j + \alpha_{j,1}, \dots ,\beta_j + \alpha_{j,k_j}$ be the elements of the $j$-th equivalence class; note that, by the definition of the equivalence relation, each $\alpha_{j,r}$ lies in $\spann(\Gamma)$. 
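To make the equivalence relation in Equation~\eqref{eq: equivalence relation for Gamma} concrete, the following small sketch (our own illustration, not part of the proof) groups $\supp(\AND_3)$ — which is all of $\{0,1\}^3$ by Fact~\ref{fact:Fourier AND} — into classes for $\Gamma$ consisting of the single parity $\{1\}$; each class is a coset of $\spann(\Gamma)$ inside the support.

```python
from itertools import product

def f2_span(gamma, n):
    # all F2-linear combinations of the 0/1 vectors in gamma
    vecs = {(0,) * n}
    for g in gamma:
        vecs |= {tuple(a ^ b for a, b in zip(v, g)) for v in vecs}
    return vecs

def equivalence_classes(support, gamma, n):
    # gamma_1 ~ gamma_2  iff  gamma_1 + gamma_2 lies in span(gamma)
    sp = f2_span(gamma, n)
    classes = {}
    for s in support:
        # canonical representative: lexicographically least element of s + span(gamma)
        rep = min(tuple(a ^ b for a, b in zip(s, v)) for v in sp)
        classes.setdefault(rep, []).append(s)
    return list(classes.values())

n = 3
support = list(product([0, 1], repeat=n))   # supp(AND_3): every subset of [3]
gamma = [(1, 0, 0)]                         # Gamma = { parity {1} }
cls = equivalence_classes(support, gamma, n)
# ell = 4 classes of size k_j = 2 each (pairs differing in coordinate 1);
# the sizes sum to k(f) = 8, as the partition property requires.
```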
This notation gives a compact representation of $f$ in terms of these equivalence classes. For all $x \in \pmone^n$, \begin{align} f(x) = \sum_{j =1}^{\ell} P_j (x) \chi_{\beta_j}(x) \label{eq:coset1}, \end{align} where \begin{align} P_j(x) = \sum_{r=1}^{k_j} \wh{f}(\beta_j + \alpha_{j,r}) \cdot \chi_{\alpha_{j,r}}(x). \label{eq:coset2} \end{align} Note that each $P_j$ is a non-zero multilinear polynomial whose value depends only on the values of the parities in $\Gamma$ (since each $\alpha_{j,r} \in \spann(\Gamma)$). So, fixing the parities in $\Gamma$ collapses all the parities in an equivalence class to their representative, thereby making the $P_j$'s constant. We will denote $\Gamma$ after the $i$-th iteration of the \textnormal{\textbf{while}} loop by $\Gamma^{(i)}$ (so $\Gamma^{(0)} = \emptyset$). Let $\fmin^{(i)}$ be the selected function $\fmin$ after the $i$-th iteration (thus $\fmin^{(0)} = f$). With the above properties of restricted functions we are ready to prove the main technical lemma needed to show Theorem~\ref{thm:delta lower bound in terms of rk only}. \begin{lemma} \label{lem:main_lemma} Let $f: \pmone^n \to \pmone$ be a function, let $\Gamma$ be a set of parities, and let $\ell$ be the number of equivalence classes of $\supp(f)$ under the equivalence relation defined in Equation~\eqref{eq: equivalence relation for Gamma}. Then, there exists a $b \in \pmone^{\Gamma}$ such that $f|_{{(\Gamma, b)}}$ is non-constant and \[ \frac{\delta(f|_{{(\Gamma, b)}})}{ k(f|_{{(\Gamma, b)}}) } \leq \frac{4 k(f) \delta(f)}{\ell^2}. \] \end{lemma} \begin{proof} For the sake of succinctness, when $\Gamma$ is clear from the context, let $V_b = \{x \in \pmone^n: \chi_{\gamma}(x) = b_{\gamma} \textnormal{~for all~} \gamma \in \Gamma\}$ for all $b \in \pmone^{\Gamma}$, and $f|_b = f|_{V_b}$. Since we are interested in non-constant restrictions, define $\sparp(f)$ to be the number of non-empty sets $S$ with $\wh{f}(S) \neq 0$. We first prove the following two bounds on the expected values of $\delta(f|_{b})$ and $\sparp(f|_{b})$. 
\begin{itemize} \item $\Ex{b}{\delta(f|_{b})}= \delta(f)$, \item $\Ex{b}{\sparp(f|_{b})} \geq \frac{\ell^2}{4 k(f)}$. \end{itemize} \paragraph{Expected value of $\delta(f|_{b})$:} Since $\cbra{V_b: b \in \pmone^{\Gamma}}$ form a partition of $\pmone^{n}$ into parts of equal size, we have \begin{equation} \label{weight expectation} \Ex{b}{\delta(f|_{b})}= \delta(f). \end{equation} \paragraph{Expected value of $\sparp(f|_{b})$:} From Equation~\eqref{eq:coset1}, for all $b \in \pmone^{\Gamma}$ and for all $x \in \pmone^n$, \begin{align} f|_{b}(x) = \sum_{j=1}^{\ell} P_{j}(b) \chi_{\beta_j}(x). \label{eq: rep_f_sum_of_cosets} \end{align} On $V_b$ the characters $\chi_{\beta_1}, \dots, \chi_{\beta_\ell}$ restrict to distinct characters, and at most one of them (the one with $\beta_j \in \spann(\Gamma)$, if such a class exists) restricts to the constant character; index the classes so that this class, if present, is the $\ell$-th. For each $j \in [\ell]$ and $b \in \pmone^{\Gamma}$, let $I_{j}(b)$ be the indicator function for $P_j(b) \neq 0$, $$ I_{j}(b) = \begin{cases} 1 & \text{if}\ P_j(b) \neq 0\\ 0 & \text{otherwise}. \end{cases} $$ From Equation~\eqref{eq:coset2}, each $P_j$ is a polynomial having monomials $\{\chi_{\alpha_{j,r}}: r \in [k_j] \}$ with Fourier sparsity of $P_j$ being equal to $k_j$. Since each $P_j$ is a non-zero polynomial, by Lemma~\ref{lem:uncertainity principle} \begin{align} \Ex{b}{I_{j}(b)} = \prob{b \sim \pmone^{\Gamma}}{P_j(b) \neq 0} \geq \frac{1}{k_j}. \label{eq:coset_poly_nonzer_largefrac} \end{align} We now lower bound the expectation of $\sparp(f|_b)$. \begin{align} \label{spar expectation} \Ex{b}{\sparp(f|_{b})} &\geq \Ex{b}{\sum_{j=1}^{\ell-1} I_{j}(b)} \tag*{by Equation~\eqref{eq: rep_f_sum_of_cosets}} \nonumber \\ &= \sum_{j=1}^{\ell-1} \Ex{b}{I_{j}(b)} \tag*{by linearity of expectation} \nonumber \\ &\geq \sum_{j=1}^{\ell-1} \frac{1}{k_j} \tag*{by Equation~\eqref{eq:coset_poly_nonzer_largefrac}} \nonumber \\ &\geq \frac{(\ell-1)^2}{\sum_{j=1}^{\ell-1} k_j} \nonumber \tag*{by the Cauchy-Schwarz inequality}\\ &\geq \frac{\ell^2}{4k(f)}. 
\tag*{by Observation~\ref{obs:coset_sparsity_sum}, and since $(\ell-1)^2 \geq \ell^2/4$ for $\ell \geq 2$} \end{align} To finish the proof of the lemma, we combine the bounds on the two expected values:\footnote{This part of our proof is inspired by a proof of Cheeger's inequality in spectral graph theory. See, for example, the proof of Fact 2 in \url{https://people.eecs.berkeley.edu/~luca/expanders2016/lecture04.pdf}.} \begin{align*} &\frac{\Ex{b}{\delta(f|_{b})}}{ \Ex{b}{\sparp(f|_{b})}} \leq \frac{4 k(f)\delta(f)}{\ell^2} \\ \iff &\Ex{b}{\delta(f|_{b}) - \frac{4k(f)\delta(f)}{\ell^2} \sparp(f|_{b})} \leq 0. \tag*{by linearity of expectation} \end{align*} If $\delta(f|_{b}) - \frac{4k(f)\delta(f)}{\ell^2} \sparp(f|_{b}) = 0$ for all $b$, pick any $b_0$ for which $f|_{b_0}$ is non-constant (one exists since $f$ itself is non-constant); then $\sparp(f|_{b_0}) > 0$. Otherwise, there exists a $b_0$ such that \[ \delta(f|_{b_0}) - \frac{4k(f)\delta(f)}{\ell^2} \sparp(f|_{b_0}) < 0 , \] and since this inequality can only be satisfied when $\sparp(f|_{b_0}) > 0$, $f|_{b_0}$ is not constant. In either case, dividing by $\sparp(f|_{b_0})$, $$ \frac{\delta(f|_{b_0})}{k(f|_{b_0})} \leq \frac{\delta(f|_{b_0})}{\sparp(f|_{b_0})} \leq \frac{4 k(f) \delta(f)}{\ell^2}, $$ with $f|_{b_0}$ non-constant, as required. \end{proof} Lemma~\ref{lem:main_lemma} allows us to bound the number of parities fixed in the $i$-th iteration (in terms of the decrease in the number of equivalence classes). \begin{lemma} \label{lem:q_i bound} Suppose $f$ is given as input to Algorithm~\ref{alg:NAPDT}, and consider the $i$-th iteration of the \textnormal{\textbf{while}} loop. Let $q_i$ be the number of parities fixed in Step~\ref{item: step_a} of the $i$-th iteration, and let $\ell_i$ be the number of equivalence classes after Step~\ref{item: step_a} of the $i$-th iteration. Then $$ \frac{q_i}{(\ell_{i-1} - \ell_{i})} \leq \frac{6\sqrt{\delta(f) \sparsity(f)}}{\ell_{i-1}}. 
$$ \end{lemma} \begin{proof} Recall that $\Gamma = \Gamma^{(i)}$ after the $i$-th iteration of Step~\ref{item: step_a} of Algorithm~\ref{alg:NAPDT}. Again, for the sake of succinctness, let $V_b = \{x \in \pmone^n: \chi_{\gamma}(x) = b_{\gamma} \textnormal{~for all~} \gamma \in \Gamma^{(i)}\}$ for all $b \in \pmone^{\Gamma^{(i)}}$, and $f|_b = f|_{V_b}$. Let $\fmin$ be the function chosen after the $i$-th iteration of Step~\ref{item: step_b} of Algorithm~\ref{alg:NAPDT}. Since Step~\ref{item: step_b} of Algorithm~\ref{alg:NAPDT} chooses $\fmin$ to be a non-constant function minimizing the weight-to-sparsity ratio, Lemma~\ref{lem:main_lemma} gives \begin{align} \frac{\delta(\fmin)}{k(\fmin)} &\leq \frac{4 k(f) \delta(f)}{\ell_{i-1}^2}. \label{eq: fmin_i_satisfies_main_lemma} \end{align} Write every $f|_b$ as in Equation~\eqref{eq:coset1}, and define $\mathcal{S}^{(i)} := \bigcup_{b \in \pmone^{\Gamma^{(i)}}} \supp(f|_{b})$. We now prove that $|\mathcal{S}^{(i)}| = \ell_{i}$. \begin{itemize} \item $|\mathcal{S}^{(i)}| \leq \ell_{i}$: Follows from the representation in Equation~\eqref{eq:coset1}, since each $\supp(f|_b)$ is a subset of $\{\chi_{\beta^{(i)}_j} \mid j \in [\ell_i]\}$. \item $|\mathcal{S}^{(i)}| \geq \ell_{i}$: Since each $P^{(i)}_j$ is a non-zero polynomial, there exists an assignment to the parities in $\Gamma^{(i)}$ under which $P^{(i)}_j$ evaluates to a non-zero value. Thus, for all $j \in [\ell_i]$, we have $\chi_{\beta_j^{(i)}} \in \mathcal{S}^{(i)}$. \end{itemize} Since $|\mathcal{S}^{(i)}| = \ell_{i}$, Lemma~\ref{lem:TWXZ13} guarantees that $q_i \leq 3 \sqrt{k(\fmin) \delta(\fmin)}$. Since $\fmin$ becomes constant after fixing these $q_i$ parities, every parity in $\supp(\fmin)$ lies in the same equivalence class with respect to $\Gamma^{(i)}$ as at least one other parity in $\supp(\fmin)$.\footnote{There is a boundary case ($k(f) = 1$) which can be dealt with separately, as in~\cite[Lemma 3.4]{San19}. 
For readability, we assume $k(f) \geq 2$.} This implies that $\ell_{i-1} - \ell_{i} \geq \frac{k(\fmin)}{2}$. Combining the two inequalities in the last paragraph, we have \begin{align*} \frac{q_i}{(\ell_{i-1} - \ell_{i})} &\leq 6\sqrt{\frac{\delta(\fmin)}{k(\fmin)}}. \end{align*} From Equation~\eqref{eq: fmin_i_satisfies_main_lemma}, \begin{align} \frac{q_i}{(\ell_{i-1} - \ell_{i})} &\leq \frac{6\sqrt{\delta(f) \sparsity(f)}}{\ell_{i-1}}. \end{align} \end{proof} We are now ready to prove Theorem~\ref{thm:delta lower bound in terms of rk only}. \begin{proof}[Proof of Theorem \ref{thm:delta lower bound in terms of rk only}] By Observation~\ref{obs: napdt_implies_rank_ub}, it suffices to show that the number of parities fixed by Algorithm~\ref{alg:NAPDT} is $O(\sqrt{\delta(f) k(f)}\log k(f))$. Suppose the \textnormal{\textbf{while}} loop runs for $t$ iterations, and let $q_i$ be the number of parities fixed in Step~\ref{item: step_a} of Algorithm~\ref{alg:NAPDT} in the $i$-th iteration. From Lemma~\ref{lem:q_i bound}, we have \begin{align*} q_i \leq \frac{6\sqrt{\delta(f) \sparsity(f)}}{\ell_{i-1}}(\ell_{i-1} - \ell_{i}). \end{align*} Thus when Algorithm~\ref{alg:NAPDT} is run on $f$, the total number of parities fixed by the algorithm is \begin{align*} \sum_{i=1}^t q_i &\leq 6 \sqrt{\delta(f) \sparsity(f)}\sum_{i=1}^t \frac{(\ell_{i-1} - \ell_{i})}{\ell_{i-1}}\\ &\leq 6\sqrt{\delta(f) \sparsity(f)}\sum_{i=1}^t \bra{\frac{1}{\ell_{i-1}} + \frac{1}{\ell_{i-1} - 1} + \cdots + \frac{1}{ \ell_{i} + 1}} \\ &\leq 6 \sqrt{\delta(f) \sparsity(f)}\sum_{i=1}^{\ell_0} \frac{1}{i} \\ &\leq 6 \sqrt{\delta(f) \sparsity(f)}\log{\ell_0}\\ &= 6\sqrt{\delta(f) \sparsity(f)} \log {\sparsity(f)}, \end{align*} where the last equality uses $\ell_0 = \sparsity(f)$ (the equivalence relation for $\Gamma^{(0)} = \emptyset$ has singleton classes). Observation~\ref{obs: napdt_implies_rank_ub} then implies $r(f) = O(\sqrt{\delta(f) k(f)} \log k(f))$. \end{proof} Together with Claim~\ref{claim:delta at least k/k'^2}, Theorem~\ref{thm:delta lower bound in terms of rk only} yields Theorem~\ref{thm:delta lower bound in terms of rk and also k'}. 
\begin{proof}[Proof of Theorem~\ref{thm:delta lower bound in terms of rk and also k'}] The bound $\delta(f) = \Omega\left(\frac{1}{k(f)}\bra{\frac{r(f)}{\log k(f)}}^2\right)$ follows from Theorem~\ref{thm:delta lower bound in terms of rk only}, and the bound $\delta(f) = \Omega\left(\frac{\sparsity(f)}{(\seespectrum(f))^2}\right)$ from Claim~\ref{claim:delta at least k/k'^2}. \end{proof} \subsection{Proof of Theorem~\ref{thm:delta lower bound in terms of rk and also k''}} \label{sec:lower bound using k''} Recall from Definition~\ref{defi:max entropy, max rank entropy} that we defined the \emph{max-rank-entropy} of a Boolean function $f$, denoted by $\seerank(f)$, to be $$ \argmin_\threshold\cbra{\dim(\cS_{\threshold}) = r(f)}. $$ The main aim of this section is to give a lower bound on $\delta(f)$ with respect to $\seerank(f)$ for a Boolean function $f$ (Theorem~\ref{thm:delta lower bound in terms of rk and also k''}). The second bound of Theorem~\ref{thm:delta lower bound in terms of rk and also k''} is given by the following lemma. \begin{lemma} \label{lem:delta lower bound in terms of rk and also k''} Let $f: \pmone^n \to \pmone$ be any function such that $k(f) > 1$. Then, \[ \delta(f) = \Omega\left( \frac{r(f)}{\seerank(f) \log \sparsity(f)}\right). \] \end{lemma} Together with Theorem~\ref{thm:delta lower bound in terms of rk only} proved in Section~\ref{sec:lower bound using k'}, Lemma~\ref{lem:delta lower bound in terms of rk and also k''} implies Theorem~\ref{thm:delta lower bound in terms of rk and also k''}. We now give the proof of Lemma~\ref{lem:delta lower bound in terms of rk and also k''}; see Section~\ref{sec:overview of delta lower bound in terms of rk and also k''} for an overview. Lemma~\ref{lem:delta lower bound in terms of rk and also k''} gives a lower bound of $\Omega\left( \frac{r(f)}{\seerank(f) \log \sparsity(f)}\right)$ on $\delta(f)$. 
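For intuition, this bound is already tight up to a constant factor: for $\AND_n$ one has $\delta(f) = 2^{-n}$, $r(f) = n$, $\seerank(f) = 2^{n-1}$ and $\log k(f) = n$, so $\frac{r(f)}{\seerank(f)\log k(f)} = 2^{1-n} = 2\delta(f)$. The brute-force check below (our own illustration, not part of the paper's argument; all helper names are ours) verifies this for $n = 4$.

```python
import math
from itertools import product

def fourier_coeffs(f, n):
    # hat{f}(S) = E_x[f(x) chi_S(x)]; S encoded as a 0/1 vector
    coeffs = {}
    for S in product([0, 1], repeat=n):
        total = 0
        for x in product([-1, 1], repeat=n):
            chi = 1
            for s, xi in zip(S, x):
                if s:
                    chi *= xi
            total += f(x) * chi
        coeffs[S] = total / 2 ** n
    return coeffs

def f2_rank(vectors):
    # rank over F2 of 0/1 vectors, via Gaussian elimination
    vecs = [list(v) for v in vectors]
    rank = 0
    for col in range(len(vecs[0]) if vecs else 0):
        pivot = next((i for i in range(rank, len(vecs)) if vecs[i][col]), None)
        if pivot is None:
            continue
        vecs[rank], vecs[pivot] = vecs[pivot], vecs[rank]
        for i in range(len(vecs)):
            if i != rank and vecs[i][col]:
                vecs[i] = [a ^ b for a, b in zip(vecs[i], vecs[rank])]
        rank += 1
    return rank

n = 4
AND = lambda x: -1 if all(xi == -1 for xi in x) else 1
c = fourier_coeffs(AND, n)
supp = [S for S, v in c.items() if abs(v) > 1e-9]
k, r = len(supp), f2_rank(supp)
delta = (1 - c[(0,) * n]) / 2
# k''(f): least threshold t such that S_t = {S : |hat{f}(S)| >= 1/t} has dimension r(f)
thresholds = sorted({1 / abs(v) for v in c.values() if abs(v) > 1e-9})
kpp = next(t for t in thresholds
           if f2_rank([S for S, v in c.items() if abs(v) >= 1 / t - 1e-9]) == r)
bound = r / (kpp * math.log2(k))   # r(f) / (k''(f) log k(f)) = 1/8
# delta(AND_4) = 1/16 is exactly half of bound, matching the Omega(.) lower bound
```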
The crucial ingredient for this lower bound is Lemma~\ref{lem:CHLT_improvement}, which is a refinement of the following theorem. \begin{theorem}[{\cite[Theorem 13]{CHLT19}}] \label{thm:CHLT} Let $f:\pmone^n \to \pmone$ be any function such that $\deg_{\ftwo}(f) = d$. Then, \[ \sum_{i \in [n]} |\wh{f}(\cbra{i})| \leq 4d. \] \end{theorem} The only difference in the statement of Lemma~\ref{lem:CHLT_improvement} and Theorem~\ref{thm:CHLT} is that the right hand side becomes $O(\delta(f) \cdot \deg_{\ftwo}(f))$ instead of $4\deg_{\ftwo}(f)$. \begin{proof}[Proof of Lemma~\ref{lem:CHLT_improvement}] Assume $\delta(f) \leq 1/4$ (otherwise $4d \leq 16\delta(f)d$, and Theorem~\ref{thm:CHLT} already implies $\sum_{i=1}^n |\widehat{f}(i)| = O(\delta(f) d)$). Define $F : \pmone^{nt} \to \pmone$ to be \[ F(x^{(1)}, \ldots , x^{(t)}) = f(x^{(1)}) \times \ldots \times f(x^{(t)}), \] where $t$ is a parameter to be fixed later (we ignore the integrality of $t$; rounding changes the bound only by constant factors), and $x^{(i)} \in \pmone^n$ for all $i \in [t]$. Since $\degtwo(F) = \degtwo(f)$, Theorem~\ref{thm:CHLT} implies \begin{equation} \label{eq:CHLT-First-L1} \sum_{\substack{S \subseteq [nt]\\ |S| = 1}} |\widehat{F}(S)| = O(d). \end{equation} Since $(1-x)^{1/x}$ is a decreasing function in $x$ for $x \in (0, 1/2]$, and its value at $x = 1/2$ is $1/4$, we have \begin{equation} \label{eq:1-xex decreasing} (1-x)^{1/x} \geq 1/4 \quad \text{for all}~x \in (0, 1/2]. \end{equation} Expressing the Fourier coefficients of $F$ in terms of the Fourier coefficients of $f$, \begin{align*} \sum_{\substack{S \subseteq [nt]\\ |S| = 1}} |\widehat{F}(S)| &= t \cdot \widehat{f}(\emptyset)^{t-1} \sum_{i=1}^n |\widehat{f}(i)| \\ &= \left(1+\frac{1}{2\delta(f)}\right) \cdot (1- 2\delta(f))^{\frac{1}{2\delta(f)}} \sum_{i=1}^n |\widehat{f}(i)| \tag*{Choosing $t = 1+ \frac{1}{2\delta(f)}$, and by Observation~\ref{obs:weight, empty Fourier}}\\ &\geq \left(1+\frac{1}{2\delta(f)}\right) \cdot \bra{\frac{1}{4}} \sum_{i=1}^n |\widehat{f}(i)| \tag*{by Equation~\eqref{eq:1-xex decreasing}}\\ &\geq \frac{1}{8\delta(f)} \cdot \sum_{i=1}^n |\widehat{f}(i)|. 
\end{align*} Now, Equation~\eqref{eq:CHLT-First-L1} implies the desired bound, $\sum_{i=1}^n |\widehat{f}(i)| = O(\delta(f) d)$. \end{proof} We would like to extend the upper bound of Lemma~\ref{lem:CHLT_improvement} to any basis of $\spann(\supp(f))$ instead of just the standard basis of the set of parities. \begin{corollary}\label{cor:improved chlt-implication} Let $f:\pmone^n \to \pmone$ be any function with $\degtwo(f) = d$, and suppose $\cS \subseteq \supp(f)$ is a basis of $\spann(\supp(f))$. Then \[ \sum_{S \in \cS} |\wh{f}(S)| = O(\delta(f) d) = O(\delta(f) \log k(f)). \] \end{corollary} \begin{proof} The main idea of the proof is to perform a basis change on parities and construct another function $h$; the corollary then follows by applying Lemma~\ref{lem:CHLT_improvement} to $h$. Recall that we denote both a subset of $[n]$ and the corresponding indicator vector in $\ftwo^n$ by the same notation. Let $\mathcal{S} = \{S_1, \dots, S_{r(f)}\}$, and extend $\mathcal{S}$ to a complete basis $\mathcal{S'} = \{S_1, \dots, S_{r(f)}, S_{r(f)+1}, \dots, S_n\}$ of $\ftwo^n$. Observe that $\wh{f}(S_i) = 0$ for $i \in \{r(f)+1, \dots, n\}$ (since $\mathcal{S}$ spans $\supp(f)$ and $\mathcal{S'}$ is linearly independent, $S_i \notin \spann(\supp(f))$ for such $i$). Fix the change of basis matrix $B \in \ftwo^{n \times n}$ whose $i$-th column is $S_i$ for $i \in [n]$; $B$ is invertible since $\mathcal{S'}$ is a basis. Consider the function $h:\pmone^n \to \R$ satisfying $\widehat{h}(\alpha) = \widehat{f}(B\alpha)$ for all $\alpha \in \ftwo^n$. By Claim~\ref{thm:ftwo_deg_does_not_change}, $h$ is Boolean and $\degtwo(h) = \degtwo(f)$. Moreover, $\delta(h) = \delta(f)$, since $\wh{h}(\emptyset) = \wh{f}(B \cdot 0) = \wh{f}(\emptyset)$ and $\wh{f}(\emptyset) = 1 - 2\delta(f)$ (Observation~\ref{obs:weight, empty Fourier}). Using Lemma~\ref{lem:CHLT_improvement} on $h$, \begin{equation*} \sum_{i \in [n]} |\wh{h}(\{i\})| = O(\delta(f) d) . \end{equation*} From the definition of $h$, $\widehat{h}(e_i) = \widehat{f}(S_i)$ for $i \in [r(f)]$ and $\widehat{h}(e_i) = 0$ for $i \in \{r(f)+1, \dots, n\}$. Therefore, \begin{equation*} \sum_{S \in \mathcal{S}} |\wh{f}(S)| = O(\delta(f) d) . \end{equation*} The second equality in the statement of the corollary follows from Lemma~\ref{lem:ftwodeg and Fourier sparsity}. 
\end{proof} \begin{proof}[Proof of Lemma~\ref{lem:delta lower bound in terms of rk and also k''}] By the definition of $\seerank(f)$, there exists a basis $\cS \subseteq \calS_{\seerank(f)}$ of $\spann(\supp(f))$ with $|\cS| = r(f)$, and every summand on the left hand side of Corollary~\ref{cor:improved chlt-implication} applied to this basis is at least $1/\seerank(f)$. Hence $r(f)/\seerank(f) \leq \sum_{S \in \cS} |\wh{f}(S)| = O(\delta(f) \log k(f))$, which rearranges to $\delta(f) = \Omega\left(\frac{r(f)}{\seerank(f) \log k(f)}\right)$, finishing the proof of Lemma~\ref{lem:delta lower bound in terms of rk and also k''}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:delta lower bound in terms of rk and also k''}] From Lemma~\ref{lem:delta lower bound in terms of rk and also k''} we have $\delta(f) =\Omega\left(\frac{r(f)}{\seerank(f) \log k(f)}\right)$, and from Theorem~\ref{thm:delta lower bound in terms of rk only} we have $\delta(f) = \Omega\left( \frac{r(f)^2}{\sparsity(f) \log^2 \sparsity(f)} \right)$. \end{proof} The following corollary combines the lower bounds on $\delta(f)$ from Theorem~\ref{thm:delta lower bound in terms of rk and also k''} and Lemma~\ref{lem:chang} by setting $k''(f)$ as the threshold. \begin{corollary} \label{cor:all delta lower bound in terms of rk and also k''} Let $f: \pmone^n \to \pmone$ be any function such that $k(f) > 1$. Then, \[ \delta(f) = \Omega\left(\max\left\{\frac{r(f)^2}{\sparsity(f) \log^2 \sparsity(f)}, \frac{r(f)}{\seerank(f) \log \sparsity(f)}, \frac{\sqrt{r(f)}}{\seerank(f) \log(\seerank(f)^2/r(f))}\right\}\right). \] \end{corollary} \section{Upper bound proofs} In this section we prove Theorems~\ref{thm:delta upper bound in terms of rk and also k'} and~\ref{thm:delta upper bound in terms of rk and also k''}. Recall that these theorems require us to exhibit functions $f$ witnessing certain upper bounds on $\delta(f)$. The descriptions of these functions are given in Section~\ref{subsec:defining some functions}. In Section~\ref{subsec:proofofclaimfnproperties} we compute certain properties of interest of these functions. 
Finally in Section~\ref{subsec:tightness} we instantiate these functions with suitable parameters to yield the proofs of Theorems~\ref{thm:delta upper bound in terms of rk and also k'} and~\ref{thm:delta upper bound in terms of rk and also k''}. \subsection{Defining some functions} \label{subsec:defining some functions} The functions we consider are all modifications of the Addressing function defined in Definition~\ref{defi:Addressing}. The main technique we use to define our functions is given in Definition~\ref{defi:composedaddressing}. That is, we first consider an Addressing function on $t + \log t$ input bits. Next, we replace each target bit by a suitable function. Different choices of the various functions substituted yield our upper bounds. In our first modification, we replace the target bits by $\AND$ functions on disjoint variables, each having the same arity. \begin{defi}[AND-Target-Addressing Function] \label{defi:ADtt'} For any integers $t, t' \geq 2$, define the function $\AD_{t, t'} : \pmone^{\log t} \times \pmone^{t \log t'} \to \pmone$ by \[ \AD_{t, t'} = \AD_t \circt \AND_{\log t'}. \] \end{defi} Our next modification is similar to the previous one, except for the fact that one of the $\AND$ functions used in the replacement above has larger arity than the others. \begin{defi}[AND-Target-Addressing Function with a Huge AND] \label{defi:ADtt'a} For any integers $t \geq 2$ and $a \geq t' \geq 2$, define the function $\AD_{t, t', a} : \pmone^{\log t} \times \pmone^{\log a} \times \pmone^{(t-1) \log t'} \to \pmone$ by \[ \AD_{t, t', a} = \AD_t \circt (\AND_{\log a}, \AND_{\log t'}, \dots, \AND_{\log t'}). 
\] That is, for all $(x, z) \in \pmone^{\log t} \times \pmone^{\log a} \times \pmone^{(t-1) \log t'}$, where $z_{1^{\log t}} \in \pmone^{\log a}$ and $z_{b} \in \pmone^{\log t'}$ for all $1^{\log t} \neq b \in \pmone^{\log t}$, \begin{align*} \AD_{t, t', a}(x,z) = \begin{cases} \AND(z_{1^{\log t}, 1}, \dots, z_{1^{\log t}, \log a}) & \textnormal{ if } x = 1^{\log t} \\ \AND(z_{x,1}, \dots, z_{x,\log t'}) & \textnormal{ otherwise.} \end{cases} \end{align*} \end{defi} We require the following function to define our next modification. \begin{defi}[AND-of-Bent] \label{defi:AB} For any integers $t',\ell \geq 2$, let $B: \pmone^{\log \ell} \to \pmone$ be a bent function on $\log \ell$ input bits. Define the function $\AB : \pmone^{\log t'} \times \pmone^{\log \ell} \to \pmone$ by $$\AB(y,z) = \AND(y_1B(z),y_2, \ldots , y_{\log t'}),$$ where $y \in \pmone^{\log t'}$, $z \in \pmone^{\log \ell}$. \end{defi} In the next modification, we replace each target bit by the function $\AB$ on $\log t' + \log \ell$ input bits, as in Definition~\ref{defi:AB}. \begin{defi}[(AND-of-Bent)-Target-Addressing Function] \label{defi:AAB} For any integers $t,t',\ell \geq 2$, define the function $\AAB : \pmone^{\log t} \times \pmone^{t(\log \ell + \log t')} \to \pmone$ by \[ \AAB = \AD_t \circt \AB. \] \end{defi} We define an auxiliary function, which is a modification of the $\AND$ function where the first variable is replaced by that variable times another $\AND$ on a disjoint set of variables. \begin{defi}[Modified AND] \label{defi:mAND} For any integers $t'\geq 2, p \geq 1$, define the function $\mAND_{t', p} : \pmone^{\log t' + p} \to \pmone$ by \begin{equation} \mAND_{t', p}(y,u) = \AND_{\log t'}(y_1 \AND_p(u), y_2,y_3, \ldots ,y_{\log t'}), \end{equation} where $y \in \pmone^{\log t'}$ and $u \in \pmone^{p}$. 
\end{defi} In the next modification we replace one of the variables in the first block of $\AD_{t, t'}$ (where the variables in the first block refer to those variables on which $\AND_{\log t'}$ is evaluated when the addressing variables equal $1^{\log t}$) with that variable times the AND of some $p$ variables from the other blocks. \begin{defi}[Modified $\AD_{t, t'}$ with Modified AND] \label{defi:mAD} Let $t, t' \geq 2$ be any integers and let $p$ be an integer such that $(t-1)(\log t') \geq p \geq 1$. Let $x \in \pmone^{\log t}$ and, for each $b \in \pmone^{\log t}$, let $y_b \in \pmone^{\log t'}$. Let $u = \cbra{y_{b,i} \vert b \in \pmone^{\log t} \setminus \cbra{1^{\log t}}, i \in [\log t'] }$. Fix an arbitrary ordering on the variables in $u$ and let $u_{\leq p}$ be the first $p$ variables in $u$ according to that order. Define the function $\mAD_{t,t',p}:\pmone^{\log t + t(\log t')} \to \pmone$ by \begin{equation} \mAD_{t,t',p}(x,y) = \begin{cases} \mAND_{t',p}(y_{1^{\log t}},u_{\leq p}) & \text{ if } x = 1^{\log t}\\ \AND_{\log t'}(y_x) & \text{ otherwise }. \end{cases} \end{equation} \end{defi} The table in the following claim summarizes various properties of interest of the functions defined above: rank, sparsity, max-supp-entropy, max-rank-entropy and weight. The first row is used to show that our lower bounds on weight beat those obtained from Chang's lemma (Lemma~\ref{lem:chang}) for the function $\AD_{t, t'}$, no matter what threshold is chosen (Claim~\ref{claim: beating changs lemma for all thresholds for ADtt'}). The second and third rows of the table will be crucial to prove Theorem~\ref{thm:delta upper bound in terms of rk and also k'} (Claims~\ref{claim:setting parameters for tightstraightline} and~\ref{claim:setting parameters for tight curve}). 
The second and the last row of the table are required to prove Theorem~\ref{thm:delta upper bound in terms of rk and also k''} (Claims~\ref{claim: setting parameters for tight curve for k''} and~\ref{claim:setting parameters for tightstraightline for k''}). \begin{claim}\label{claim:fnproperties} The rank, sparsity, max-supp-entropy, max-rank-entropy and weight of the functions $\AD_{t,t'}$, $\AD_{t,t',a}$, $\AAB$ and $\mAD_{t, t', p}$ are as follows.\footnote{Precise statements along with quantifications on $t, t', \ell, a, p$ are stated in Section~\ref{subsec:tightness} (Claims~\ref{claim:properties of ADtt'}, \ref{claim:properties of ADtt'a}, \ref{claim:properties of AAB} and \ref{claim:properties of mAD}). We do not formally prove the claimed bound on the max-supp-entropy of $\mAD_{t, t', p}$ since we do not require it; this bound can be observed from the proof of Claim~\ref{claim:properties of mAD}.} \begin{center} \label{table:properties2} \begin{tabular}{ |c|c|c|c|c|c| } \hline $f$ & $r(f)$ & $k(f)$ & $k'(f)$ & $k''(f)$ & $\delta(f)$\\ \hline \hline $\AD_{t,t'}$ & $\Theta(t\log t')$ & $\Theta(t^2t')$ & $\Theta(tt')$ & $\Theta(tt')$ & $\frac{1}{t'}$\\ \hline $\AD_{t,t',a}$ & $\Theta(t\log t' + \log a)$ & $\Theta(t^2t' + ta)$ & $\Theta(at)$ & $\Theta(at)$ & $\frac{1}{t'} + \frac{1}{at} - \frac{1}{tt'}$\\ \hline $\AAB$ & $\Theta(t(\log t' + \log \ell))$ & $ \Theta(t^2t'\ell)$ & $\Theta(tt'\sqrt{\ell})$ & $\Theta(tt'\sqrt{\ell})$ & $\frac{1}{t'}$\\ \hline $\mAD_{t,t',p}$ & $\Theta(t\log t')$ & $\Theta(2^p tt' + t^2t')$ & $\Theta(2^ptt')$ & $\Theta(tt')$ & $\frac{1}{t'}$\\ \hline \end{tabular} \end{center} \end{claim} \subsection{Proof of properties of our constructed functions}\label{subsec:proofofclaimfnproperties} In this section, we prove Claim~\ref{claim:fnproperties} by computing the properties of interest i.e., rank, sparsity, max-supp-entropy, max-rank-entropy and weight for each of the functions $\AD_{t,t'}, \AD_{t,t',a}, \AAB$ and $\mAD_{t,t',p}$. 
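All of these quantities can be computed by brute force for small parameter settings, which gives a convenient sanity check on the exact expressions derived below. The following Python sketch is illustrative only and not part of the formal development; it assumes the $\pm 1$ encoding of Boolean values, with $\AND$ equal to $-1$ exactly on the all-$(-1)$ input and $\delta(f) = \Pr_x[f(x) = -1] = (1-\wh{f}(\emptyset))/2$. It computes $r(f)$, $k(f)$, $k'(f)$ and $\delta(f)$ from a truth table and checks them for $\AND_{\log t'}$ with $t' = 8$:

```python
from fractions import Fraction
from itertools import product

def spectrum(f, n):
    """Non-zero Fourier coefficients of f : {1,-1}^n -> {1,-1}; subsets as bitmasks."""
    pts = list(product((1, -1), repeat=n))
    coeffs = {}
    for S in range(1 << n):
        tot = 0
        for x in pts:
            v = f(x)
            for i in range(n):
                if S >> i & 1:
                    v *= x[i]
            tot += v
        if tot:
            coeffs[S] = Fraction(tot, 1 << n)
    return coeffs

def f2_rank(masks):
    """Dimension over F2 of the span of the given bitmasks (Gaussian elimination)."""
    pivots = {}
    for m in masks:
        while m:
            h = m.bit_length() - 1
            if h in pivots:
                m ^= pivots[h]
            else:
                pivots[h] = m
                break
    return len(pivots)

def properties(f, n):
    """Brute-force (r, k, k', delta) of f."""
    sp = spectrum(f, n)
    r = f2_rank([S for S in sp if S])          # rank: dim of span(supp(f))
    k = len(sp)                                # Fourier sparsity
    kp = 1 / min(abs(c) for c in sp.values())  # max-supp-entropy
    delta = (1 - sp.get(0, Fraction(0))) / 2   # weight: (1 - fhat(empty)) / 2
    return r, k, kp, delta

def AND(x):  # -1 exactly on the all-(-1) input
    return -1 if all(b == -1 for b in x) else 1

# t' = 8, i.e. AND on log t' = 3 bits: expect r = 3, k = 8, k' = t'/2 = 4, delta = 1/8
r, k, kp, delta = properties(AND, 3)
```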
We prove a composition lemma (Lemma~\ref{lem:properties of composition of addressing and g}) that relates the rank, sparsity, max-supp-entropy, max-rank-entropy and weight of $\AD_t \circt g$ to those of $g$. Since $\AD_{t,t'} = \AD_t \circt \AND_{\log t'}$ and $\AAB = \AD_t \circt \AB$, we are able to use the composition lemma to prove the properties of interest of $\AD_{t,t'}$ and $\AAB$ (Claims~\ref{claim:properties of ADtt'} and~\ref{claim:properties of AAB}, respectively). This proves the bounds corresponding to two of the rows in Table~\ref{table:properties2}. To conclude the proof of Claim~\ref{claim:fnproperties} we prove bounds on the rank, sparsity, max-supp-entropy, max-rank-entropy and weight of $\AD_{t, t', a}$ and $\mAD_{t, t', p}$ from first principles (Claims~\ref{claim:properties of ADtt'a} and~\ref{claim:properties of mAD}, respectively). We begin by stating a composition lemma. \begin{restatable}[Composition lemma]{lemma}{composition} \label{lem:properties of composition of addressing and g} Let $t \geq 2, m \geq 1$ be any positive integers, and let $g: \pmone^{m} \to \pmone$ be a non-constant function such that there exists a non-empty set $S \subseteq [m]$ with $0 \neq |\wh{g}(S)| \leq |\wh{g}(\emptyset)|$. Let $f:\pmone^{\log t + m t} \to \pmone$ be defined as \[ f = \AD_t \circt g. \] Then \begin{align} r(f) &= t \cdot r(g) + \log t, \label{eq:rank Adressing composition}\\ k(f) &= 1 + t^2(k(g) -1), \label{eq:sparsity Adressing composition}\\ k'(f) &= t \cdot k'(g), \label{eq:maxEnt Adressing composition}\\ k''(f) &= t \cdot k''(g),\\ \delta(f) &= \delta(g). \label{eq:weight Adressing composition} \end{align} \end{restatable} We defer the proof of the composition lemma (Lemma~\ref{lem:properties of composition of addressing and g}) to Section~\ref{subsubsec:proof of composition lemma} and proceed to compute the properties of $\AD_{t,t'}$ using the composition lemma. 
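Although the proof of the composition lemma is deferred, its formulas can be verified by brute force on a small instance. The Python sketch below is illustrative only (same $\pm 1$ conventions as above); it takes $t = 2$ and $g = \AND_2$, which satisfies the hypothesis of Lemma~\ref{lem:properties of composition of addressing and g}, builds $f = \AD_t \circt g$, and checks $r(f) = t \cdot r(g) + \log t$, $k(f) = 1 + t^2(k(g)-1)$, $k'(f) = t \cdot k'(g)$ and $\delta(f) = \delta(g)$:

```python
from fractions import Fraction
from itertools import product

def spectrum(f, n):
    """Non-zero Fourier coefficients of f : {1,-1}^n -> {1,-1}; subsets as bitmasks."""
    pts = list(product((1, -1), repeat=n))
    coeffs = {}
    for S in range(1 << n):
        tot = 0
        for x in pts:
            v = f(x)
            for i in range(n):
                if S >> i & 1:
                    v *= x[i]
            tot += v
        if tot:
            coeffs[S] = Fraction(tot, 1 << n)
    return coeffs

def f2_rank(masks):
    """Dimension over F2 of the span of the given bitmasks."""
    pivots = {}
    for m in masks:
        while m:
            h = m.bit_length() - 1
            if h in pivots:
                m ^= pivots[h]
            else:
                pivots[h] = m
                break
    return len(pivots)

def properties(f, n):
    sp = spectrum(f, n)
    return (f2_rank([S for S in sp if S]), len(sp),
            1 / min(abs(c) for c in sp.values()),
            (1 - sp.get(0, Fraction(0))) / 2)

def compose_AD(t, g, m):
    """f = AD_t composed with g: log t address bits pick one of t blocks of m bits."""
    logt = t.bit_length() - 1
    addrs = list(product((1, -1), repeat=logt))
    def f(x):
        i = addrs.index(tuple(x[:logt]))
        return g(x[logt + i * m: logt + (i + 1) * m])
    return f, logt + t * m

def g(x):  # g = AND_2: non-constant, with 0 != |ghat(S)| <= |ghat(empty)| for some S
    return -1 if all(b == -1 for b in x) else 1

t, m = 2, 2
f, n = compose_AD(t, g, m)
rg, kg, kpg, dg = properties(g, m)
rf, kf, kpf, df = properties(f, n)
```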
The other functions we consider in this section can be viewed as modifications of $\AD_{t, t'}$. We prove the properties of $\AD_{t, t'}$ below to provide insight into our other proofs. Moreover, we use this function to show that Theorem~\ref{thm:delta lower bound in terms of rk and also k'} gives a tight lower bound on $\delta(\AD_{t, t'})$ as opposed to Chang's lemma (Lemma~\ref{lem:chang}) applied with any threshold parameter, which only gives a weaker bound (Claim~\ref{claim: beating changs lemma for all thresholds for ADtt'}). \begin{restatable}[Properties of $\AD_{t,t'}$]{claim}{ADtt} \label{claim:properties of ADtt'} Fix any integers $t \geq 2$ and $t' > 4$, and let $\AD_{t, t'} : \pmone^{\log t} \times \pmone^{t \log t'} \to \pmone$ be as in Definition~\ref{defi:ADtt'}. Then, \begin{itemize} \item $r(\AD_{t,t'}) = t \log t' + \log t$, \item $k(\AD_{t,t'}) = 1 + t^2(t' -1)$, \item $k'(\AD_{t,t'}) = k''(\AD_{t,t'}) = \frac{tt'}{2}$, and \item $\delta(\AD_{t,t'}) = \frac{1}{t'}$. \end{itemize} \end{restatable} \begin{proof} Recall from Definition~\ref{defi:ADtt'} that $\AD_{t,t'} = \AD \circt \AND$ where $\AND$ is on $\log t'$ bits. Since $t' > 4$, by Observation~\ref{obs:properties of AND, Bent and Addressing}, $|\wh{\AND}(\emptyset)| = 1- \frac{2}{t'} > \frac{2}{t'} = |\wh{\AND}(S)|$ for all $S \ne \emptyset$. Therefore the claim follows by Lemma~\ref{lem:properties of composition of addressing and g} and Observation~\ref{obs:properties of AND, Bent and Addressing}. \end{proof} We next compute the properties of $\AAB$. Since $\AAB = \AD \circt \AB$, we first state the properties of $\AB$ in Claim~\ref{claim:properties of AND of Bent} and then deduce the properties of $\AAB$ using the composition lemma (Lemma~\ref{lem:properties of composition of addressing and g}) and Claim~\ref{claim:properties of AND of Bent}. 
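As a concrete sanity check of Claim~\ref{claim:properties of ADtt'}, brute force over the small instance $t = 2$, $t' = 8$ (illustrative Python, not part of the formal development; $\pm 1$ conventions as before) confirms $r = t\log t' + \log t = 7$, $k = 1 + t^2(t'-1) = 29$, $k' = \frac{tt'}{2} = 8$ and $\delta = \frac{1}{t'} = \frac{1}{8}$:

```python
from fractions import Fraction
from itertools import product

def spectrum(f, n):
    """Non-zero Fourier coefficients of f : {1,-1}^n -> {1,-1}; subsets as bitmasks."""
    pts = list(product((1, -1), repeat=n))
    coeffs = {}
    for S in range(1 << n):
        tot = 0
        for x in pts:
            v = f(x)
            for i in range(n):
                if S >> i & 1:
                    v *= x[i]
            tot += v
        if tot:
            coeffs[S] = Fraction(tot, 1 << n)
    return coeffs

def f2_rank(masks):
    """Dimension over F2 of the span of the given bitmasks."""
    pivots = {}
    for m in masks:
        while m:
            h = m.bit_length() - 1
            if h in pivots:
                m ^= pivots[h]
            else:
                pivots[h] = m
                break
    return len(pivots)

def properties(f, n):
    sp = spectrum(f, n)
    return (f2_rank([S for S in sp if S]), len(sp),
            1 / min(abs(c) for c in sp.values()),
            (1 - sp.get(0, Fraction(0))) / 2)

logt, t, logtp, tp = 1, 2, 3, 8   # log t, t, log t', t'

def AD_tt(x):
    """AD_{2,8}: the single address bit selects one of two AND_3 blocks."""
    i = 0 if x[0] == 1 else 1
    y = x[logt + i * logtp: logt + (i + 1) * logtp]
    return -1 if all(b == -1 for b in y) else 1

n = logt + t * logtp
r, k, kp, delta = properties(AD_tt, n)
```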
\begin{restatable}[Properties of $\AB$]{claim}{propertiesofAB} \label{claim:properties of AND of Bent} For any integers $t' > 3, \ell \geq 2$, let $\AB: \pmone^{\log t'} \times \pmone^{\log \ell} \to \pmone$ be as in Definition \ref{defi:AB}. Then, \begin{itemize} \item $r(\AB) = \log t' + \log \ell$, \item $k(\AB) = \frac{t'}{2} + \frac{\ell t'}{2}$, \item $k'(\AB) = k''(\AB) = \frac{t'\sqrt{\ell}}{2}$ and \item $\delta(\AB) = \frac{1}{t'}$. \end{itemize} \end{restatable} We defer the proof of Claim~\ref{claim:properties of AND of Bent} to Section~\ref{subsubsec:properties of AND of Bent}. The following claim gives the properties of $\AAB$. \begin{claim}[Properties of $\AAB$] \label{claim:properties of AAB} For any integers $t\geq 2, t' > 3,\ell \geq 2$, let $\AAB : \pmone^{\log t} \times \pmone^{t (\log \ell+ \log t')} \to \pmone$ be as in Definition~\ref{defi:AAB}. Then, \begin{itemize} \item $r(\AAB) = t (\log t' + \log \ell) + \log t$, \item $k(\AAB) = 1 + t^2\left(\frac{(\ell + 1)t'}{2} - 1\right)$, \item $k'(\AAB) = k''(\AAB) = \frac{tt'\sqrt{\ell}}{2}$, and \item $\delta(\AAB) = \frac{1}{t'}$. \end{itemize} \end{claim} \begin{proof} Recall from Definition~\ref{defi:AAB} that $\AAB = \AD \circt \AB$ where $\AB$ is on $\log \ell + \log t'$ bits. Since $t' > 3$ and $\ell \geq 2$, Claim~\ref{claim:properties of AND of Bent} gives $|\wh{\AB}(\emptyset)| = 1- \frac{2}{t'} \geq \frac{2}{t'\sqrt{\ell}} = \frac{1}{k'(\AB)}$, so the hypothesis of Lemma~\ref{lem:properties of composition of addressing and g} is satisfied. Therefore the claim follows by Lemma~\ref{lem:properties of composition of addressing and g} and Claim~\ref{claim:properties of AND of Bent}. \end{proof} In the following claim, we deduce the properties of $\AD_{t,t',a}$ from its Fourier expansion. \begin{restatable}[Properties of $\AD_{t,t',a}$]{claim}{ADtta} \label{claim:properties of ADtt'a} Fix any integers $t \geq 2$, $t' \geq 2$ and $a \geq 2t'$. Let $\AD_{t, t',a} : \pmone^{\log t} \times \pmone^{\log a} \times \pmone^{(t-1) \log t'} \to \pmone$ be as in Definition~\ref{defi:ADtt'a}. 
Then, \begin{itemize} \item $r(\AD_{t, t', a}) = (t - 1)\log t' + \log a + \log t$, \item $k(\AD_{t, t', a}) = (t-1)(t'-1)t + ta$, \item $k'(\AD_{t, t', a}) = k''(\AD_{t,t',a}) = \frac{ta}{2}$, and \item $\delta(\AD_{t, t', a}) = \frac{1}{t'} + \frac{1}{at} - \frac{1}{tt'}$. \end{itemize} \end{restatable} We prove Claim~\ref{claim:properties of ADtt'a} in Section~\ref{subsubsec:properties of ADtt'a}. In the following claim, we deduce the properties of $\mAD_{t,t',p}$ from first principles. \begin{claim}[Properties of $\mAD_{t,t',p}$] \label{claim:properties of mAD}~ Let $t, t' \geq 2$ be any integers and let $p$ be an integer such that $(t-1)(\log t') \geq p \geq 2$. Let $\mAD_{t,t',p}:\pmone^{\log t + t(\log t')} \to \pmone$ be as in Definition~\ref{defi:mAD}. Then, \begin{itemize} \item $r(\mAD_{t,t',p}) = t \cdot \log t' + \log t$, \item $k(\mAD_{t,t',p}) = \Theta(2^p tt' + t^2t')$, \item $k''(\mAD_{t,t',p}) = \Theta(tt')$, and \item $\delta(\mAD_{t,t',p}) = \frac{1}{t'}$. \end{itemize} \end{claim} We prove Claim~\ref{claim:properties of mAD} in Section~\ref{subsubsec:properties of mAD}. Finally, the proof of Claim~\ref{claim:fnproperties} follows from some of the claims above. \begin{proof}[Proof of Claim~\ref{claim:fnproperties}] The proof follows from Claims~\ref{claim:properties of ADtt'},~\ref{claim:properties of AAB},~\ref{claim:properties of ADtt'a} and \ref{claim:properties of mAD}. \end{proof} \subsubsection{Proof of Composition lemma (Lemma~\ref{lem:properties of composition of addressing and g})} \label{subsubsec:proof of composition lemma} Let $t \geq 2$ and $m \geq 1$ be any integers. For the purpose of the following proof, we introduce the following notation. For any $b \in \pmone^{\log t}$, $T \subseteq [\log t]$ and non-empty $S_b \subseteq [m]$, define characters $\chi_{b, S_b, T} : \pmone^{\log t} \times \pmone^{t m} \to \pmone$ by $\chi_{b, S_b, T}(x, z) = \prod_{j \in S_b}z_{b, j} \prod_{i \in T}x_i$. 
Here $z = (\dots, z_{b}, \dots)$, where $b \in \pmone^{\log t}$ and $z_b \in \pmone^{m}$ for all $b \in \pmone^{\log t}$. \begin{proof}[Proof of Lemma~\ref{lem:properties of composition of addressing and g}] Let $x \in \pmone^{\log t}$ and $z \in \pmone^{t m}$. By Definition~\ref{defi:composedaddressing} and Observation~\ref{obs:addexpansion}, \begin{align} f(x,z) &= \sum_{b \in \pmone^{\log t}} g(z_b) \cdot \ind_b(x)\nonumber\\ &= \sum_{b} \left(\sum_{S_b \subseteq [m]} \wh{g}(S_b)\chi_{S_b}(z_b)\right) \left(\sum_{T \subseteq [\log t]} \wh{\ind}_b(T) \chi_T(x)\right) \nonumber\\ &= \sum_{b} \bra{\wh{g}(\emptyset)\wh{\ind}_b(\emptyset) + \wh{g}(\emptyset) \sum_{T \neq \emptyset} \wh{\ind}_b(T) \chi_T(x) + \sum_{{S_b} \neq \emptyset} \sum_{T} \wh{g}({S_b})\wh{\ind}_b(T) \chi_{S_b}(z_b) \chi_T(x)} \nonumber\\ &= \underbrace{\sum_{b} \wh{g}(\emptyset)\wh{\ind}_b(\emptyset)}_{A_1} + \underbrace{\sum_{b} \wh{g}(\emptyset) \sum_{T \neq \emptyset} \wh{\ind}_b(T) \chi_T(x)}_{A_2} + \underbrace{\sum_{b,{S_b} \neq \emptyset,T} \wh{g}({S_b}) \wh{\ind}_b(T) \chi_{S_b}(z_b)\chi_T(x)}_{A_3}. \label{eq:T1T2T3} \end{align} By Observation~\ref{obs:indexpansion}, \begin{align} A_1 &= \sum_b \wh{g}(\emptyset) \frac1t = \wh{g}(\emptyset), \nonumber\\ A_2 &= \wh{g}(\emptyset) \sum_b \sum_{T \neq \emptyset} \frac{\prod_{i \in T}b_i }{t}\chi_T(x) \nonumber\\ &= \wh{g}(\emptyset) \sum_{T \neq \emptyset} \chi_T(x) \sum_b \frac{\prod_{i \in T}b_i }{t} = 0, \tag*{by Observation \ref{obs:sum_b char = 0 }} \nonumber\\ A_3 &= \sum_{b} \sum_{{S_b} \neq \emptyset} \sum_T \frac{\wh{g}(S_b)\cdot \prod_{i \in T} b_i}{t} \chi_{S_b}(z_b) \chi_T(x) \nonumber\\ &= \sum_{b,{S_b} \neq \emptyset,T} c_{b,{S_b},T} \cdot \chi_{b, S_b, T}(x, z),\nonumber \end{align} where $|c_{b,{S_b},T}| = \frac{|\wh{g}(S_b)|}{t}$ for all $b \in \pmone^{\log t}, T \subseteq [\log t]$ and non-empty ${S_b} \subseteq [m]$. 
From Equation~\eqref{eq:T1T2T3} and the above expressions for $A_1, A_2$ and $A_3$, we obtain the following Fourier expansion for $f$: \begin{equation} \label{eq:composition Fourier} f = \wh{g}(\emptyset) + \sum_{b, \emptyset \neq {S_b} \in \supp(g), T} c_{b,S_b, T} \cdot \chi_{b, S_b, T}. \end{equation} Since $|c_{b,S_b,T}| = \frac{|\wh{g}(S_b)|}{t}$, the coefficient $c_{b,S_b,T}$ is non-zero iff $\wh{g}(S_b)$ is non-zero. Therefore \begin{equation} \label{eq:supp f} \supp(f) = \cbra{\chi_{\emptyset}} \cup \cbra{\chi_{b,S_b,T}| b \in \pmone^{\log t}, \emptyset \neq S_b \in \supp(g), T \subseteq [\log t]}. \end{equation} \begin{itemize} \item Rank: Fix a Fourier basis $\calB_g$ of $g$ such that $\calB_g \subseteq \supp(g)$ and a character $\chi_U \in \calB_g$. Consider the set of characters $$\calB_f = \cbra{\chi_{b,S_b,\emptyset}| b \in \pmone^{\log t}, S_b \in \calB_g} \cup \cbra{\chi_{\mathbf{1}, U, \{i\}}| i \in [\log t] }.$$ By Equation~\eqref{eq:supp f}, $\calB_f \subseteq \supp(f)$ and $\supp(f) \subseteq \spann(\calB_f)$. Therefore, \[ r(f) = |\calB_f| = t |\calB_g| + \log t = t\cdot r(g) + \log t. \] \item Sparsity: By Equation~\eqref{eq:supp f}, \[ k(f) = |\supp(f)| = 1 + t^2(k(g)-1). \] \item Max-supp-entropy: Recall from Definition~\ref{defi:max entropy, max rank entropy} that $k'(f)$ equals the inverse of the smallest non-zero Fourier coefficient of $f$ in absolute value. From the Fourier expansion of $f$ given in Equation~\eqref{eq:composition Fourier}, \begin{align*} k'(f) & = \max\cbra{\frac{1}{|\wh{g}(\emptyset)|}, \max\cbra{\frac{t}{|\wh{g}(S)| } : \emptyset \neq S \in \supp(g)}}\\ & = \max\cbra{\frac{t}{|\wh{g}(S)| } : \emptyset \neq S \in \supp(g)} \tag*{since $|\wh{g}(\emptyset)| \geq \frac1{k'(g)}$ and $t \geq 1$ by assumption}\\ & = t \cdot k'(g). \end{align*} \item Max-rank-entropy: Recall from Definition~\ref{defi:max entropy, max rank entropy} that $k''(f) = \argmin_\theta \{\dim(\cS_{\theta}) = r(f)\},$ where $\cS_{\theta} = \cbra{S: |\wh{f}(S)| \geq \frac{1}{\theta}}$. 
Let $\calB_g$ be a Fourier basis for $g$ such that $|\wh{g}(S)| \geq \frac{1}{k''(g)}$ for all $S \in \calB_g$, and define \begin{align} \calB_f = \cbra{\chi_{b, S_b, T} : b \in \pmone^{\log t}, S_b \in \calB_g, T \subseteq [\log t]}. \end{align} From the Fourier expansion of $f$ given in Equation~\eqref{eq:composition Fourier}, one may verify that $\calB_f$ is indeed a spanning set for $\supp(f)$. By Equation~\eqref{eq:composition Fourier}, $|c_{b,{S_b},T}| = \frac{|\wh{g}(S_b)|}{t} \geq \frac{1}{t \cdot k''(g)}$ for all $\chi_{b, S_b, T} \in \calB_f$. Hence $k''(f) \leq t \cdot k''(g)$. It now remains to show that $k''(f) \geq t \cdot k''(g)$. Towards a contradiction, suppose $k''(f) < t \cdot k''(g)$, and consider a basis $T_f \subseteq \cbra{\chi_{b,S_b,T} : b \in \pmone^{\log t}, S_b \in \supp(g)}$ for $\supp(f)$ with $|\wh{f}(S)| > \frac1{t \cdot k''(g)}$ for all $S \in T_f$. Fix any $b \in \pmone^{\log t}$. Observe that the set $\cbra{\chi_{S_b} : \chi_{b, S_b, T} \in T_f}$ forms a spanning set for $\supp(g)$. Moreover, since $|c_{b,{S_b},T}| = \frac{|\wh{g}(S_b)|}{t}$ for all $b \in \pmone^{\log t}, T \subseteq [\log t]$ and non-empty ${S_b} \subseteq [m]$ by Equation~\eqref{eq:composition Fourier}, each corresponding Fourier coefficient $\wh{g}(S_b)$ has absolute value strictly larger than $\frac1{k''(g)}$, which contradicts the definition of $k''(g)$. \item Weight: By Observation~\ref{obs:weight, empty Fourier}, \begin{align*} \delta(f) = \frac{1 - \wh{f}(\emptyset)}{2} = \frac{1 - \wh{g}(\emptyset)}{2} = \delta(g), \end{align*} where the second equality follows by Equation~\eqref{eq:composition Fourier}. 
\end{itemize} \end{proof} \subsubsection{Properties of AND of Bent (Claim~\ref{claim:properties of AND of Bent})} \label{subsubsec:properties of AND of Bent} For the purpose of this proof, define the characters $\chi_{S, T} : \pmone^{\log t'} \times \pmone^{\log \ell} \to \pmone$ by $\chi_{S,T}(y, z) = \prod_{i \in S} y_i \cdot \prod_{j \in T} z_j$ for all $S \subseteq [\log t'], T \subseteq [\log \ell]$. \begin{proof}[Proof of Claim~\ref{claim:properties of AND of Bent}] Recall from Definition~\ref{defi:AB} that, for all $y \in \pmone^{\log t'}$ and $z \in \pmone^{\log \ell}$, \begin{equation*} \AB(y,z) = \AND(y_1\B(z), y_2, \dots , y_{\log t'}), \end{equation*} where $\B:\pmone^{\log\ell} \to \pmone$ is a bent function. Since $\AND(y_1,\ldots ,y_{\log t'}) = 1 - 2 \prod_{i=1}^{\log t'} \frac{(1 - y_i)}{2}$, \begin{align*} \AB(y,z) &= \AND(y_1\B(z), y_2, \dots , y_{\log t'})\\ &= 1 - (1 - y_1\B(z)) \cdot \prod_{i=2}^{\log t'} \frac{(1 - y_i)}{2}\\ &= 1 - \bra{ 1 - y_1\sum_{T \subseteq [\log \ell]} \wh{B}(T) \chi_T(z) } \cdot \bra{\frac{2}{t'}\sum_{1 \not\in S \subseteq [\log t']} (-1)^{|S|} \chi_S(y)}\\ &= 1 - \frac{2}{t'} \bra{\sum_{1 \not \in S \subseteq [\log t']} (-1)^{|S|} \chi_S(y)} + \frac{2}{t'} \bra{\sum_{T \subseteq [\log \ell]} \sum_{1 \in S \subseteq [\log t']} (-1)^{|S|+1} \wh{B}(T) \chi_S(y)\chi_T(z)}, \end{align*} where in the last step we used $y_1 \cdot \chi_S(y) = \chi_{S \cup \{1\}}(y)$ for $1 \notin S$. Hence, the Fourier expansion of $\AB$ is given by \begin{equation} \label{eq: abFourier} \AB = \bra{1 - \frac{2}{t'}} \chi_{\emptyset, \emptyset} - \frac{2}{t'}\bra{\sum_{\substack{1 \notin S \subseteq [\log t']\\ S \neq \emptyset}} (-1)^{|S|} \chi_{S, \emptyset}} + \frac{2}{t'} \bra{\sum_{ \substack{ 1 \in S \subseteq [\log t']\\ T \subseteq [\log \ell] } } (-1)^{|S|+1} \wh{B}(T) \chi_{S,T}}. 
\end{equation} Since $\wh{B}(T) \neq 0$ for all $T \subseteq [\log \ell]$ by Definition~\ref{defi:bent}, Equation~\eqref{eq: abFourier} implies \begin{equation} \label{eq:supp AB} \supp(\AB) = \cbra{(\emptyset, \emptyset)} \cup \cbra{(S, \emptyset) : S \neq \emptyset,1 \notin S \subseteq [\log t']} \cup \cbra{(S, T) : 1 \in S \subseteq [\log t'], T \subseteq [\log \ell]}. \end{equation} \begin{itemize} \item Rank: Consider $\calB_B = \cbra{ (\cbra{i}, \emptyset): i \in [\log t']} \cup \cbra{(\cbra{1}, \cbra{j}): j \in [\log \ell]}$. By Equation~\eqref{eq:supp AB}, $\calB_B \subseteq \supp(\AB)$. Moreover, $\calB_B$ is a linearly independent set and generates all the characters. Therefore $\calB_B$ forms a Fourier basis of $\AB$. Hence, \[ r(\AB) = |\calB_B| = \log t' + \log \ell. \] \item Sparsity: From Equation~\eqref{eq:supp AB}, \[ k(\AB) = |\supp(\AB)| = \frac{t'}{2} + \frac{\ell t'}{2}. \] \item Max-supp-entropy: Recall from Definition \ref{defi:max entropy, max rank entropy} that $k'(\AB)$ equals the inverse of the smallest non-zero Fourier coefficient of $\AB$ in absolute value. From the Fourier expansion of $\AB$ given in Equation~\eqref{eq: abFourier}, \begin{align*} k'(\AB) & = \max\cbra{\frac{t'}{t' -2}, \frac{t'}{2}, \max\cbra{\frac{t'}{2|\wh{B}(T)| } : T \subseteq [\log \ell]}}\\ &= \frac{t'\sqrt{\ell}}{2}. \tag*{by Observation~\ref{obs: bentfcoeffs}} \end{align*} \item Max-rank-entropy: Recall from Definition \ref{defi:max entropy, max rank entropy}, $k''(f) = \argmin_\theta \{\dim(\cS_{\theta}) = r(f)\},$ where $\cS_{\theta} = \cbra{S: |\wh{f}(S)| \geq \frac{1}{\theta}}$. Observe from the Fourier expansion of $\AB$ in Equation~\eqref{eq: abFourier} that the only monomials containing $z$-variables are the $\chi_{S,T}$ with $T \neq \emptyset$, and any such monomial has a coefficient whose absolute value is $\frac{2}{t'\sqrt{\ell}}$. Since any spanning set of $\supp(\AB)$ must contain such monomials, $k''(\AB) \geq \frac{t'\sqrt{\ell}}{2}$. 
Furthermore by Lemma~\ref{lem:relationships between rk and k'} $k''(\AB) \leq k'(\AB) = \frac{t'\sqrt{\ell}}{2}$. Therefore \[ k''(\AB) = \frac{t'\sqrt{\ell}}{2}. \] \item Weight: Observation~\ref{obs:weight, empty Fourier} and Equation~\eqref{eq: abFourier} imply \[ \delta(\AB) = \frac{1 -\wh{\AB}(\emptyset, \emptyset)}{2} = \frac{1}{t'}. \] \end{itemize} \end{proof} \subsubsection{Properties of $\AD_{t,t',a}$ (Claim~\ref{claim:properties of ADtt'a})} \label{subsubsec:properties of ADtt'a} Let $t \geq 2$, $t' \geq 2$ and $a \geq 2t'$ be any integers. Consider the function $\AD_{t, t', a} : \pmone^{\log t} \times \pmone^{\log a} \times \pmone^{(t-1)\log t'} \to \pmone$ as in Definition~\ref{defi:ADtt'a}. For the purpose of the following proof, we introduce the following notation. Let $\mathbf{1} := 1^{\log t}$. For any $\mathbf{1} \neq b \in \pmone^{\log t}$, $T \subseteq [\log t]$ and non-empty $S_b \subseteq [\log t']$ , define characters $\chi_{b, S_b, T} : \pmone^{\log t} \times \pmone^{\log a} \times \pmone^{(t-1)\log t'} \to \pmone$ by \begin{align*} \chi_{b, S_b, T}(x, z) = \prod_{j \in S_b}z_{b, j} \prod_{i \in T}x_i. \end{align*} Here $z = (\dots, z_{b}, \dots)$, where $b \in \pmone^{\log t}$, $z_{\bone} \in \pmone^{\log a}$ and $z_b \in \pmone^{\log t'}$ for all $b \in \pmone^{\log t} \setminus \cbra{\bone}$. Also for $b = \mathbf{1}$, $T \subseteq [\log t]$ and non-empty $S_{\mathbf{1}} \subseteq [\log a]$, define characters $\chi_{\mathbf{1}, S_{\mathbf{1}}, T} : \pmone^{\log t} \times \pmone^{\log a} \times \pmone^{(t-1)\log t'} \to \pmone$ by \begin{align*} \chi_{\mathbf{1}, S_{\mathbf{1}}, T}(x, z) = \prod_{j \in S_{\mathbf{1}}}z_{\mathbf{1}, j} \prod_{i \in T}x_i. \end{align*} For any set $U \subseteq [\log t]$, define characters $\chi_{\emptyset, U} : \pmone^{\log t} \times \pmone^{\log a} \times \pmone^{(t-1)\log t'} \to \pmone$ by \[ \chi_{\emptyset, U} (x, z) = \prod_{i \in U}x_i. 
\] \begin{proof}[Proof of Claim~\ref{claim:properties of ADtt'a}] Let $x \in \pmone^{\log t}$ and $z \in \pmone^{\log a} \times \pmone^{(t-1) \log t'} $ be such that $z_{\mathbf{1}} \in \pmone^{\log a}$ and $z_{b} \in \pmone^{\log t'}$ for $\mathbf{1} \neq b \in \pmone^{\log t}$. By Definition~\ref{defi:ADtt'a} and Observation~\ref{obs:addexpansion}, \begin{equation} \label{eq:topofhugeandproof} \AD_{t,t',a}(x,z) = \underbrace{\AND(z_{\mathbf{1}})\cdot \ind_{\mathbf{1}}(x)}_{A} + \underbrace{\sum_{\mathbf{1} \neq b \in \pmone^{\log t}} \AND(z_b) \cdot \ind_b(x)}_{B}. \end{equation} We first analyze $A$ from Equation~\eqref{eq:topofhugeandproof}. In this analysis, $\AND$ is on $\log a$ variables. \begin{align} A &= \left(\sum_{{S_\mathbf{1}} \subseteq [\log a]} \wh{\AND}({S_\mathbf{1}})\chi_{S_\mathbf{1}}(z_{\mathbf{1}})\right) \left(\sum_{T \subseteq [\log t]} \wh{\ind}_{\mathbf{1}}(T) \chi_T(x)\right) \nonumber\\ &= \bra{\wh\AND(\emptyset) + \sum_{{S_\mathbf{1}} \neq \emptyset}\wh\AND({S_\mathbf{1}}) \chi_{S_\mathbf{1}}(z_{\mathbf{1}})} \bra{\wh{\ind}_{\mathbf{1}}(\emptyset) + \sum_{T \neq \emptyset}\wh{\ind}_{\mathbf{1}}(T) \chi_T(x)} \nonumber\\ &= \bra{\wh{\AND}(\emptyset)\wh{\ind}_{\mathbf{1}}(\emptyset) + \wh{\AND}(\emptyset) \sum_{T \neq \emptyset} \wh{\ind}_{\mathbf{1}}(T) \chi_T(x) + \sum_{{S_\mathbf{1}} \neq \emptyset} \sum_{T} \wh{\AND}({S_\mathbf{1}})\wh{\ind}_\mathbf{1}(T) \chi_{S_\mathbf{1}}(z_\mathbf{1}) \chi_T(x)} \nonumber\\ &= \underbrace{ \wh{\AND}(\emptyset)\wh{\ind}_{\mathbf{1}}(\emptyset)}_{A_1} + \underbrace{ \wh{\AND}(\emptyset) \sum_{T \neq \emptyset} \wh{\ind}_{\mathbf{1}}(T) \chi_T(x)}_{A_2} + \underbrace{\sum_{{S_\mathbf{1}} \neq \emptyset,T} \wh{\AND}({S_\mathbf{1}}) \wh{\ind}_{\mathbf{1}}(T) \chi_{S_\mathbf{1}}(z_{\mathbf{1}})\chi_T(x)}_{A_3}.\label{eq:A1A2A3} \end{align} By Fact~\ref{fact:Fourier AND} and Observation~\ref{obs:indexpansion}, \begin{align} A_1 &= \left(1 - \frac{2}{a}\right)\frac1t , \label{eq:a1}\\ A_2 &= \left(1- 
\frac{2}{a}\right) \sum_{T \neq \emptyset} \frac{\chi_{T}(x)}{t} =\frac{1}{t}\left(1- \frac{2}{a}\right) \sum_{T \neq \emptyset} \chi_{T}(x), \label{eq:a2}\\ A_3 &= \sum_{{S_\mathbf{1}} \neq \emptyset} \sum_T \frac{2(-1)^{|{S_\mathbf{1}}|+1}}{at} \chi_{S_\mathbf{1}}(z_\mathbf{1}) \chi_T(x). \label{eq:a3} \end{align} We next analyze $B$ from Equation~\eqref{eq:topofhugeandproof}. In this analysis, $\AND$ is on $\log t'$ variables. \begin{align} B &= \sum_{b \neq \mathbf{1}} \left(\sum_{{S_b} \subseteq [\log t']} \wh{\AND}({S_b})\chi_{S_b}(z_b)\right) \left(\sum_{T \subseteq [\log t]} \wh{\ind}_b(T) \chi_T(x)\right) \nonumber\\ &= \sum_{b \neq \mathbf{1}} \bra{\wh\AND(\emptyset) + \sum_{{S_b} \neq \emptyset}\wh\AND({S_b}) \chi_{S_b}(z_b)} \bra{\wh{\ind}_b(\emptyset) + \sum_{T \neq \emptyset}\wh{\ind}_b(T) \chi_T(x)} \nonumber\\ &= \sum_{b \neq \mathbf{1}} \bra{\wh{\AND}(\emptyset)\wh{\ind}_b(\emptyset) + \wh{\AND}(\emptyset) \sum_{T \neq \emptyset} \wh{\ind}_b(T) \chi_T(x) + \sum_{{S_b} \neq \emptyset} \sum_{T} \wh{\AND}({S_b})\wh{\ind}_b(T) \chi_{S_b}(z_b) \chi_T(x)} \nonumber\\ &= \underbrace{\sum_{b \neq \mathbf{1}} \wh{\AND}(\emptyset)\wh{\ind}_b(\emptyset)}_{B_1} + \underbrace{\sum_{b \neq \mathbf{1}} \wh{\AND}(\emptyset) \sum_{T \neq \emptyset} \wh{\ind}_b(T) \chi_T(x)}_{B_2} + \underbrace{\sum_{b \neq \mathbf{1},{S_b} \neq \emptyset,T} \wh{\AND}({S_b}) \wh{\ind}_b(T) \chi_{S_b}(z_b)\chi_T(x)}_{B_3}. 
\end{align} By Fact~\ref{fact:Fourier AND} and Observation~\ref{obs:indexpansion}, \begin{align} B_1 &= \sum_{\mathbf{1} \neq b \in \pmone^{\log t}} \left(1 - \frac{2}{t'}\right)\left(\frac{1}{t}\right) = \bra{1- \frac{2}{t'}} \bra{1- \frac1t},\label{eq:b1}\\ B_2 &= \left(1- \frac{2}{t'}\right) \sum_{T \neq \emptyset} \sum_{b \neq \mathbf{1}} \frac{\prod_{i \in T}b_i}{t} \chi_T(x)\nonumber \\ &= \bra{1-\frac{2}{t'}}\sum_{T \neq \emptyset} \frac{(-1)}{t} \chi_T(x) =\bra{\frac{-1}{t}}\bra{1-\frac{2}{t'}}\sum_{T \neq \emptyset} \chi_T(x),\label{eq:b2}\\ B_3 &= \sum_{b \neq \mathbf{1}} \sum_{{S_b} \neq \emptyset} \sum_T \frac{2(-1)^{|{S_b}|+1}\cdot \prod_{i \in T} b_i}{tt'} \chi_{S_b}(z_b) \chi_T(x), \label{eq:b3} \end{align} where Equation~\eqref{eq:b2} follows since $\sum_{b \neq \mathbf{1}} \prod_{i \in T}b_i = -1$ for any non-empty $T \subseteq [\log t]$ by Observation \ref{obs:sum_b char = 0 }. From Equation~\eqref{eq:topofhugeandproof}, \begin{align} \label{eq:a123b123} \AD_{t,t',a}(x, z) &= A+B = A_1+B_1 + A_2+B_2 + A_3+B_3. \end{align} Observe that the only terms from Equation~\eqref{eq:a123b123} that contribute to $c_0$ are $A_1$ and $B_1$. Moreover, we have from Equations~\eqref{eq:a1} and~\eqref{eq:b1} that \begin{equation}\label{eq:c0} c_0 = A_1 + B_1 = \left(1 - \frac{2}{a}\right)\frac1t + \bra{1- \frac{2}{t'}} \bra{1- \frac1t} = 1 + \frac{2}{tt'} - \frac{2}{at} - \frac{2}{t'}. \end{equation} Next observe that the only terms contributing to $c_U$ for $\emptyset \neq U \subseteq [\log t]$ appear in $A_2$ and $B_2$. Matching coefficients we obtain from Equations~\eqref{eq:a2} and~\eqref{eq:b2} that for any non-empty $U \subseteq [\log t]$, \begin{equation}\label{eq:cU} c_U = \frac{1}{t}\left(1 - \frac{2}{a}\right) - \frac{1}{t}\left(1 - \frac{2}{t'}\right) = \frac{2}{tt'}-\frac{2}{at}. 
\end{equation} Next, the only terms contributing to $c_{b, S_b, T}$ for $\mathbf{1} \neq b \in \pmone^{\log t}$, non-empty $S_b \subseteq [\log t']$ and $T \subseteq [\log t]$ arise from $B_3$. Comparing coefficients, we obtain from Equation~\eqref{eq:b3} that for any $\mathbf{1} \neq b \in \pmone^{\log t}$, non-empty $S_b \subseteq [\log t']$ and $T \subseteq [\log t]$, \begin{equation}\label{eq:cbsbt} |c_{b, S_b, T}| = \frac{2}{tt'}. \end{equation} Finally, the only term that contributes to $c_{\mathbf{1}, S_\mathbf{1}, T}$ for non-empty $S_{\mathbf{1}} \subseteq [\log a]$ and $T \subseteq [\log t]$ is $A_3$. Matching coefficients, we obtain from Equation~\eqref{eq:a3} that for any non-empty $S_{\mathbf{1}} \subseteq [\log a]$ and $T \subseteq [\log t]$, \begin{equation}\label{eq:c1s1t} |c_{\mathbf{1}, S_{\mathbf{1}}, T}| = \frac{2}{at}. \end{equation} Moreover, the cases covered above account for all terms that appear in Equation~\eqref{eq:a123b123}. Thus the Fourier expansion of $\AD_{t, t', a}$ is given by \begin{equation}\label{eq: hugeandFourier} \AD_{t,t',a} = c_0 + \sum_{\emptyset \neq U \subseteq [\log t]} c_U\chi_{\emptyset, U} + \sum_{ \substack{ \mathbf{1} \neq b \in \pmone^{\log t},\\ \emptyset \neq S_b \subseteq [\log t'],\\ T \subseteq [\log t]} } c_{b,S_b,T} \cdot \chi_{b,S_b,T} + \sum_{ \substack{ \emptyset \neq S_{\mathbf{1}} \subseteq [\log a],\\ T \subseteq [\log t] } } c_{\mathbf{1},S_{\mathbf{1}},T} \cdot \chi_{\mathbf{1},S_{\mathbf{1}},T}, \end{equation} where $c_0$ is as in Equation~\eqref{eq:c0}, $c_U$ is as in Equation~\eqref{eq:cU} for all $\emptyset \neq U \subseteq [\log t]$, $c_{b,S_b,T}$ is as in Equation~\eqref{eq:cbsbt} for all $\mathbf{1} \neq b \in \pmone^{\log t}, \emptyset \neq S_b \subseteq [\log t']$ and $T \subseteq [\log t]$, and $c_{\mathbf{1},S_{\mathbf{1}},T}$ is as in Equation~\eqref{eq:c1s1t} for all $\emptyset \neq S_{\mathbf{1}} \subseteq [\log a]$ and $T \subseteq [\log t]$. 
In the Fourier expansion of $\AD_{t,t',a}$ from Equation~\eqref{eq: hugeandFourier}, the coefficients in the third and fourth summands are non-zero by Equations~\eqref{eq:cbsbt} and~\eqref{eq:c1s1t}, and the coefficients in the second summand are non-zero by Equation~\eqref{eq:cU} since $a \geq 2t'$. Finally, by Equation~\eqref{eq:c0}, $c_0 \neq 0$ since $1 + \frac{2}{tt'} - \frac{2}{at} - \frac{2}{t'} \geq 1 + \frac{1}{tt'} - \frac{2}{t'} \geq \frac{1}{tt'} > 0$ as $a \geq 2t' \geq 4$. From Equation~\eqref{eq: hugeandFourier},
\begin{align}
\supp(\AD_{t,t',a}) = \cbra{\chi_{\emptyset, \emptyset}} &\cup\cbra{\chi_{\emptyset, U}| \emptyset \neq U \subseteq [\log t]} \cup \cbra{\chi_{\mathbf{1},S_{\mathbf{1}},T}| \emptyset \neq S_{\mathbf{1}} \subseteq [\log a], T \subseteq [\log t]} \nonumber\\
&\cup \cbra{\chi_{b,S_b,T}| b \in \pmone^{\log t} \setminus \cbra{\mathbf{1}}, \emptyset \neq S_b \subseteq [\log t'], T \subseteq [\log t]}.\label{eq:supp ADtt'a}
\end{align}
\begin{itemize}
\item Rank: Consider the set of characters
\[ \calB = \cbra{\chi_{b, \cbra{i}, \emptyset} : b \in \pmone^{\log t} \setminus \cbra{ \mathbf{1}}, i \in [\log t']} \cup \cbra{\chi_{\mathbf{1}, \cbra{i}, \emptyset} : i \in [\log a]} \cup \cbra{\chi_{\mathbf{1}, \cbra{1}, \cbra{j}} : j \in [\log t]}. \]
These characters can be seen to be linearly independent and to span all monomials. Moreover, by Equation~\eqref{eq:supp ADtt'a}, $\calB \subseteq \supp(\AD_{t,t',a})$. Therefore,
\begin{align*} r(\AD_{t, t', a}) & = |\calB| = (t -1)\log{t'} + \log a + \log t. \end{align*}
\item Sparsity: By Equation~\eqref{eq:supp ADtt'a},
\begin{align*} k(\AD_{t,t',a}) &= |\supp(\AD_{t,t',a})|\\ &= 1 + t-1 + (a-1)t + (t-1)(t'-1)t\\ &= (t-1)(t'-1)t + ta. \end{align*}
\item Max-supp-entropy: Recall from Definition~\ref{defi:max entropy, max rank entropy} that $k'(\AD_{t, t', a})$ equals the inverse of the smallest non-zero Fourier coefficient in absolute value.
From the Fourier expansion of $\AD_{t, t', a}$ given in Equation~\eqref{eq: hugeandFourier}, the candidates for the smallest non-zero coefficient in absolute value are $\cbra{1+\frac{2}{tt'}-\frac{2}{at} -\frac{2}{t'}, \frac{2}{tt'} - \frac{2}{at}, \frac{2}{tt'}, \frac{2}{at}}$. Since $t' \geq 2$, $1+\frac{2}{tt'}-\frac{2}{at} -\frac{2}{t'} \geq \frac{2}{tt'} - \frac{2}{at}$. Since $a \geq 2t'$, $\frac{2}{tt'} - \frac{2}{at} \geq \frac{2}{at}$ and $\frac{2}{tt'} \geq \frac{2}{at}$. Therefore
\begin{align} \label{eq: maxent adtt'} k'(\AD_{t,t',a}) &=\frac{ta}{2}. \end{align}
\item Max-rank-entropy: Recall from Definition~\ref{defi:max entropy, max rank entropy} that $k''(f) = \argmin_\theta \{\dim(\cS_{\theta}) = r(f)\},$ where $\cS_{\theta} = \cbra{S: |\wh{f}(S)| \geq \frac{1}{\theta}}$. From the Fourier expansion of $\AD_{t,t',a}$ given in Equation~\eqref{eq: hugeandFourier}, observe that every monomial which involves a variable from $z_{\mathbf{1}}$ has a coefficient whose absolute value equals $\frac{2}{at}$. Thus, if $\theta < \frac{at}{2}$, then $\cS_\theta$ does not include any monomial containing a variable from $z_{\mathbf{1}}$. Therefore $k''(\AD_{t,t',a}) \geq \frac{at}{2}$. By Lemma~\ref{lem:relationships between rk and k'}, $k''(\AD_{t,t',a}) \leq k'(\AD_{t,t',a}) = \frac{at}{2}$. Therefore, by Equation~\eqref{eq: maxent adtt'},
\[ k''(\AD_{t,t',a}) = \frac{at}{2}. \]
\item Weight: From Observation~\ref{obs:weight, empty Fourier} and Equation~\eqref{eq:c0},
\begin{align*} \delta(\AD_{t, t', a}) &= \frac{1 - \wh{\AD_{t,t',a}}(\emptyset, \emptyset)}{2}\\ &= \frac{1}{t'} + \frac{1}{at} - \frac{1}{tt'}. \end{align*}
\end{itemize} \end{proof}

\subsubsection{Properties of $\mAD_{t,t',p}$ (Claim~\ref{claim:properties of mAD})} \label{subsubsec:properties of mAD}
Recall that we constructed the Modified AND function (Definition~\ref{defi:mAND}) by replacing one variable with the product of that variable and an AND function of the other variables.
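The substitution just recalled is easy to state operationally. A minimal sketch (helper names are ours), in the $\pm 1$ convention where $\AND$ evaluates to $-1$ exactly when all of its inputs are $-1$; it also checks that the substitution is invisible whenever the inner AND evaluates to $1$:

```python
from itertools import product

def and_pm(bits):
    # +/-1 AND: -1 iff every input is -1
    return -1 if all(b == -1 for b in bits) else 1

def m_and(y, u):
    # Modified AND: the first variable y[0] is replaced by y[0] * AND_p(u)
    return and_pm([y[0] * and_pm(u)] + list(y[1:]))

# Whenever AND_p(u) = 1, nothing changes: mAND(y, u) = AND(y)
for y in product([1, -1], repeat=2):
    assert m_and(list(y), [1, 1]) == and_pm(list(y))
```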
The next claim computes the Fourier coefficients of $\mAND_{t', p}$. For the purpose of the following claim, for any $S \subseteq [\log t']$ and $T \subseteq [p]$, define characters $\chi_{S, T} : \pmone^{\log t' + p} \to \pmone$ by $\chi_{S, T}(y, u) = \prod_{i \in S}y_i \prod_{j \in T} u_j$.
\begin{claim} \label{claim:mAND Fourier expansion} Let $t' \geq 2, p \geq 2$ be any integers and let $f = \mAND_{t',p} : \pmone^{\log t' + p} \to \pmone$ be as in Definition~\ref{defi:mAND}. Then $f = \sum_{S \subseteq [\log t'], T \subseteq [p]} \wh{f}(S, T)\chi_{S, T}$, where
\begin{align*} \wh{f}(S, T) = \begin{cases} 1 - \frac{2}{t'} & S = T = \emptyset\\ \frac{2 \cdot (-1)^{|S|+1}}{t'} & T = \emptyset, 1 \notin S \subseteq [\log t'], S \neq \emptyset \\ \frac{2 \cdot (-1)^{|S|+1}}{t'}\bra{1 - \frac{2}{2^p}} & T = \emptyset, 1 \in S \subseteq [\log t']\\ \frac{4 \cdot (-1)^{|S| + |T|}}{2^p t'} & \emptyset \neq T \subseteq [p], 1 \in S \subseteq [\log t']. \end{cases} \end{align*}
\end{claim}
For the purpose of the proof, recall that we view inputs to $\mAND_{t',p}$ as $(y, u)$, where $y \in \pmone^{\log t'}$ and $u \in \pmone^{p}$.
\begin{proof}[Proof of Claim~\ref{claim:mAND Fourier expansion}] We have $\AND_{\log t'}(y_1,\ldots ,y_{\log t'}) = 1 - 2 \prod_{i=1}^{\log t'} \frac{(1 - y_i)}{2}$.
Thus, by Definition~\ref{defi:mAND},
\begin{align*} \mAND_{t',p}(y,u) &= \AND_{\log t'}(y_1 \AND_p(u),y_2,\dots,y_{\log t'})\\
&= 1 - (1 - y_1 \AND_p(u)) \prod_{i=2}^{\log t'}\frac{(1-y_i)}{2}\\
&= 1 - \bra{1 - y_1 \sum_{T \subseteq [p]}\wh{\AND_p}(T) \chi_T(u)} \bra{\frac{2}{t'}\sum_{1 \notin S \subseteq [\log t']} (-1)^{|S|} \chi_S(y)}\\
&= 1 - \frac{2}{t'} \bra{\sum_{1 \notin S \subseteq [\log t']} (-1)^{|S|} \chi_S(y)} + \frac{2}{t'}\bra{\sum_{T \subseteq [p]} \sum_{1 \in S \subseteq [\log t']} (-1)^{|S|+1} \wh{\AND_p}(T) \chi_S(y)\chi_T(u)}\\
&= 1 - \frac{2}{t'} \bra{\sum_{1 \not \in S \subseteq [\log t']} (-1)^{|S|} \chi_S(y)} + \frac{2}{t'}\bra{1 - \frac{2}{2^p}}\bra{\sum_{1 \in S \subseteq [\log t']} (-1)^{|S|+1} \chi_S(y)}\\
&+ \frac{4}{2^pt'} \bra{\sum_{\emptyset \neq T \subseteq [p]} \sum_{1 \in S \subseteq [\log t']} (-1)^{|S|+|T|} \chi_S(y)\chi_T(u)}, \end{align*}
where the fourth equality uses $y_1 \chi_{S'}(y) = \chi_{S' \cup \cbra{1}}(y)$ for $1 \notin S'$ (so reindexing by $S = S' \cup \cbra{1}$ contributes a factor $(-1)$ since $|S'| = |S| - 1$), and the last equality uses $\wh{\AND_p}(\emptyset) = 1 - \frac{2}{2^p}$ and $\wh{\AND_p}(T) = \frac{2 \cdot (-1)^{|T|+1}}{2^p}$ for $T \neq \emptyset$ (Fact~\ref{fact:Fourier AND}). This proves the claim. \end{proof}
We now prove the required properties of $\mAD_{t,t',p}$.
\begin{proof}[Proof of Claim~\ref{claim:properties of mAD}]~ Define $\bone := 1^{\log t}$. Recall that on input $(x, y) \in \pmone^{\log t+ t \log t'}$, we define the set of variables $u = \cbra{y_{b,i}| b \in \pmone^{\log t} \setminus \cbra{\bone}, i \in [\log t'] }$. We also fix an arbitrary ordering on the variables in $u$ and let $u_{\leq p}$ be the first $p$ variables in $u$ according to that order.
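Before using this expansion, the coefficient magnitudes in Claim~\ref{claim:mAND Fourier expansion} can be spot-checked by brute force (magnitudes only, so the check is insensitive to sign conventions). The sketch below is ours and not part of the proof; it takes $t' = 4$ and $p = 2$.

```python
import math
from itertools import product

def and_pm(bits):
    # +/-1 AND: -1 iff every input is -1
    return -1 if all(b == -1 for b in bits) else 1

tp, p = 4, 2  # t' = 4 (so log t' = 2 y-variables), p = 2 u-variables

def m_and(y1, y2, u1, u2):
    # mAND_{4,2}: replace y1 by y1 * AND_2(u1, u2) inside AND_2(y1, y2)
    return and_pm([y1 * and_pm([u1, u2]), y2])

def coeff(S):
    # brute-force Fourier coefficient over the 4 input bits (y1, y2, u1, u2)
    total = sum(m_and(*v) * math.prod(v[i] for i in S)
                for v in product([1, -1], repeat=4))
    return total / 2 ** 4

assert coeff(()) == 1 - 2 / tp                        # S = T = empty
assert abs(coeff((1,))) == 2 / tp                     # S = {y2}, T empty
assert abs(coeff((0,))) == (2 / tp) * (1 - 2 / 2**p)  # S = {y1}, T empty
assert abs(coeff((0, 2))) == 4 / (2**p * tp)          # 1 in S, T non-empty
assert coeff((2,)) == 0                               # u-monomials require y1
```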
By Definition~\ref{defi:mAD}, we have
\begin{equation} \label{eq:mAD expansion} \mAD_{t,t',p}(x,y) = \underbrace{\ind_\bone(x) \cdot \mAND_{t',p}(y_\bone, u_{\leq p})}_{T_\bone} + \sum_{\bone \neq b \in \pmone^{\log t}} \underbrace{\ind_b(x) \cdot \AND_{\log t'}(y_b)}_{T_b}. \end{equation}
We use Claim~\ref{claim:mAND Fourier expansion} to expand $T_\bone$ as
\begin{align} \label{eq:Fourier ADtt'p T1} T_\bone =& \ind_\bone(x) \left[1- \frac{2}{t'} + \sum_{\substack{1 \notin S\\ \emptyset \neq S \subseteq [\log t']}}\frac{2 \cdot (-1)^{|S|+1}}{t'}\prod_{j \in S}y_{\bone, j} + \sum_{1 \in S \subseteq [\log t']} \frac{2 \cdot (-1)^{|S|+1}}{t'}\bra{1 - \frac{2}{2^p}} \prod_{j \in S} y_{\bone, j} \right. \nonumber \\ & \left. + \sum_{\substack{\emptyset \neq T \subseteq [p] \\ 1 \in S \subseteq [\log t']}} \frac{4 \cdot (-1)^{|S| + |T|}}{2^p t'} \prod_{j \in S, \ell \in T} y_{\bone, j} u_\ell \right], \end{align}
and Fact~\ref{fact:Fourier AND} to expand $T_b$, for $b \neq \bone$, as
\begin{equation} \label{eq:Fourier ADtt'p Tb} T_b = \ind_b(x) \left[1 - \frac{2}{t'} + \frac{2}{t'} \sum_{S \neq \emptyset} (-1)^{|S|+1}\prod_{j \in S} y_{b,j} \right]. \end{equation}
\begin{itemize}
\item Rank: For all $b \in \pmone^{\log t}$, define
\[ \calB_b = \cbra{ y_{b,j}|j \in [\log t']}. \]
From Equations~\eqref{eq:Fourier ADtt'p T1} and~\eqref{eq:Fourier ADtt'p Tb}, the monomials from $\calB_\bone$ occur only in the term $T_\bone$. Since $\wh{\ind_\bone}(\emptyset) = \frac1t$ by Observation~\ref{obs:indexpansion}, Equation~\eqref{eq:Fourier ADtt'p T1} yields that the absolute value of the coefficient of $y_{\bone,1}$ is $\bra{1 - \frac{2}{2^p}} \frac{2}{tt'}$, and the absolute value of the coefficient of $y_{\bone,j}$ is $\frac{2}{tt'}$ for all $j \in [\log t'] \setminus \cbra{1}$. Similarly, for $\bone \neq b \in \pmone^{\log t}$, from Equations~\eqref{eq:Fourier ADtt'p T1} and~\eqref{eq:Fourier ADtt'p Tb}, the monomials from $\calB_b$ occur only in the term $T_b$.
Since $\wh{\ind_b}(\emptyset) = \frac1t$ by Observation~\ref{obs:indexpansion}, Equation~\eqref{eq:Fourier ADtt'p Tb} yields that the absolute value of the coefficient of $y_{b,j}$ is $\frac{2}{tt'}$ for all $j \in [\log t']$ and $\bone \neq b \in \pmone^{\log t}$. Fix $\bone \neq c \in \pmone^{\log t}$, and define
\[ \calB_{x} = \cbra{y_{c,1}\cdot x_i| i \in [\log t]}. \]
From Equations~\eqref{eq:Fourier ADtt'p T1} and~\eqref{eq:Fourier ADtt'p Tb}, the monomials from $\calB_x$ occur only in the term $T_c$. Since $\abs{\wh{\ind_c}(\cbra{i})} = \frac1t$ for every $i \in [\log t]$ by Observation~\ref{obs:indexpansion}, Equation~\eqref{eq:Fourier ADtt'p Tb} yields that the absolute value of the coefficient of $y_{c,1} \cdot x_i$ is $\frac{2}{tt'}$ for all $i \in [\log t]$. Therefore $\bigcup_{b \in \pmone^{\log t}} \calB_b \cup \calB_x \subseteq \supp(\mAD_{t,t',p})$. Since $\bigcup_{b \in \pmone^{\log t}} \calB_b \cup \calB_x$ generates all monomials,
\[ r(\mAD_{t,t',p}) = t\log t' + \log t. \]
\item Sparsity: By Equation~\eqref{eq:Fourier ADtt'p T1}, all monomials appearing in $T_\bone$, except for those purely in $x$-variables, contain at least one variable from $y_\bone$. Moreover, from Equation~\eqref{eq:Fourier ADtt'p Tb}, no monomial appearing in $T_b$ for $b \neq \bone$ contains a variable from $y_\bone$. Since all Fourier coefficients of $\ind_\bone$ are non-zero by Observation~\ref{obs:indexpansion}, Equation~\eqref{eq:Fourier ADtt'p T1} yields that these monomials contribute
\begin{equation} \label{eq:adtt'p sparsity from tbone} t\bra{\frac{t'}{2} -1 + \frac{t'}{2} + (2^p-1)\frac{t'}{2}} = t\bra{\frac{t'}{2}-1+2^{p-1}t'}= \Theta(2^ptt') \end{equation}
to the sparsity of $\mAD_{t, t', p}$. By Equation~\eqref{eq:Fourier ADtt'p Tb}, all monomials appearing in $T_b$, for any $b \neq \bone$, except for those purely in $x$-variables, contain at least one variable from $y_b$.
Moreover, from Equations~\eqref{eq:Fourier ADtt'p T1} and~\eqref{eq:Fourier ADtt'p Tb}, no monomial appearing in $T_{b'}$ for $b' \neq b$ contains a variable from $y_b$. Since all Fourier coefficients of $\ind_b$ are non-zero by Observation~\ref{obs:indexpansion}, Equation~\eqref{eq:Fourier ADtt'p Tb} yields that these monomials contribute at least (including contributions from each $T_b$ for $b \neq \bone$)
\begin{equation} \label{eq:adtt'p sparsity from tb} (t-1)\cdot t \cdot (t' - 1) = \Omega(t^2t') \end{equation}
to the sparsity of $\mAD_{t, t', p}$. By Equations~\eqref{eq:adtt'p sparsity from tbone} and~\eqref{eq:adtt'p sparsity from tb},
\[ k(\mAD_{t, t', p}) = \Omega(2^ptt' + t^2t'). \]
By Equation~\eqref{eq:mAD expansion},
\begin{align*} k(\mAD_{t,t',p}) & \leq k(\ind_\bone)k(\mAND_{t', p}) + \sum_{\bone \neq b \in \pmone^{\log t}} k(\ind_b)k(\AND_{\log t'})\\ & \leq t \cdot k(\mAND_{t',p}) + (t-1)t \cdot t' \tag*{by Observations~\ref{obs:indexpansion} and~\ref{obs:properties of AND, Bent and Addressing}}\\ & = O(2^p t t' + t^2 t'). \tag*{since $\mAND_{t', p}$ is a function on $(\log t' + p)$ variables} \end{align*}
Thus,
\[ k(\mAD_{t,t',p}) = \Theta(2^ptt' + t^2t'). \]
\item Max-rank-entropy: Recall from the argument for rank that $\calB = \bigcup_{b \in \pmone^{\log t}}\calB_b \cup \calB_x \subseteq \supp(\mAD_{t,t',p})$ generates all the monomials of $\mAD_{t,t',p}$. Moreover, the absolute values of the coefficients of these monomials take values in the set $\cbra{\frac{2}{tt'}, \bra{1 - \frac{2}{2^p}}\frac{2}{tt'}}$. Since $p \geq 2$, $ 1 \geq 1- \frac{2}{2^p} \geq \frac12$. Therefore
\begin{equation} \label{eq:maxrankent upper bound madtt'p} k''(\mAD_{t,t',p}) = O(tt'). \end{equation}
Recall that no monomial arising from the terms $T_b$ for $\bone \neq b \in \pmone^{\log t}$ (see Equation~\eqref{eq:Fourier ADtt'p Tb}) contains variables from $y_\bone$.
Thus, monomials which contain the variable $y_{\bone,1}$ only appear in Equation~\eqref{eq:Fourier ADtt'p T1}. Moreover, by Observation~\ref{obs:indexpansion} we conclude that the absolute value of the coefficient of any such monomial takes values in the set $\cbra{\frac{2}{tt'}\bra{1 - \frac{2}{2^p}}, \frac{4}{2^p t t'}}$. Recall from Definition~\ref{defi:max entropy, max rank entropy} that $k''(f) = \argmin_\theta \{\dim(\cS_{\theta}) = r(f)\}$, where $\cS_{\theta} = \cbra{S: |\wh{f}(S)| \geq \frac{1}{\theta}}$. Therefore if $\theta < \bra{1- \frac{2}{2^p}}^{-1}\frac{tt'}{2}$, then $\cS_\theta$ does not include any monomial containing $y_{\bone,1}$. Therefore
\[ k''(\mAD_{t,t',p}) \geq \bra{1- \frac{2}{2^p}}^{-1}\frac{tt'}{2} \geq \frac{tt'}{2}. \]
Hence by Equation~\eqref{eq:maxrankent upper bound madtt'p},
\[ k''(\mAD_{t,t',p}) = \Theta(tt'). \]
\item Weight: By Equation~\eqref{eq:mAD expansion},
\begin{align*} \wh{\mAD_{t,t',p}}(\emptyset) &= \wh{\ind_{\bone}}(\emptyset) \wh{\mAND_{t',p}}(\emptyset) + \sum_{\bone \neq b \in \pmone^{\log t}} \wh{\ind_b}(\emptyset) \wh{\AND_{\log t'}}(\emptyset) \tag*{by Observation~\ref{obs:indexpansion}, Fact~\ref{fact:Fourier AND} and Claim~\ref{claim:mAND Fourier expansion}}\\ &= \frac{1}{t} \bra{1-\frac{2}{t'}} + (t-1)\frac{1}{t} \bra{1 - \frac{2}{t'}}\\ &= 1 - \frac{2}{t'}. \end{align*}
Thus by Observation~\ref{obs:weight, empty Fourier},
\[ \delta(\mAD_{t,t',p}) = \frac{1 - \wh{\mAD_{t,t',p}}(\emptyset)}{2} = \frac{1}{t'}. \]
\end{itemize} \end{proof}

\subsection{Setting parameters in our constructed functions} \label{subsec:tightness}
In this section we prove Theorems~\ref{thm:delta upper bound in terms of rk and also k'} and~\ref{thm:delta upper bound in terms of rk and also k''}. Recall that these theorems require us to exhibit functions which achieve certain bounds.
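The parameter settings in the claims below are verified by routine arithmetic, which also lends itself to a quick numerical spot check. The concrete values in the following sketch are ours, chosen to satisfy the hypotheses of Claim~\ref{claim:setting parameters for tightstraightline} (logarithms are base $2$):

```python
import math

# Sample point with log(k) <= rho <= sqrt(k) and k*log(k)/rho <= kp <= k,
# writing k for kappa, kp for kappa', and rho for rho.
k = 2 ** 16
log_k = int(math.log2(k))     # 16
rho = log_k                   # rho at its smallest allowed value
kp = k * log_k // rho         # kappa' at its lower limit k*log(k)/rho

t = 2 * rho // log_k              # t  = 2*rho/log(k)        = 2
t_prime = k * log_k ** 2 // rho ** 2  # t' = k*(log(k)/rho)^2
a = 2 * kp * log_k // rho         # a  = 2*kappa'*log(k)/rho

assert a >= 2 * t_prime     # the hypothesis a >= 2t' of Claim properties of AD
assert a * t == 4 * kp      # behind k'(AD_{t,t',a}) = ta/2 = Theta(kappa')
assert t * t * t_prime == 4 * k  # behind k(AD_{t,t',a}) = Theta(t^2 t') = Theta(kappa)
```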
Claims~\ref{claim:setting parameters for tightstraightline} and~\ref{claim:setting parameters for tight curve} correspond to the bounds in Theorem~\ref{thm:delta upper bound in terms of rk and also k'}, and describe the required functions. Claims~\ref{claim: setting parameters for tight curve for k''} and~\ref{claim:setting parameters for tightstraightline for k''} correspond to the bounds in Theorem~\ref{thm:delta upper bound in terms of rk and also k''}, and describe the required functions.
\begin{claim} \label{claim:setting parameters for tightstraightline} For all $\rho, \kappa, \kappa' \in \N$ such that $\kappa$ is sufficiently large, for all $\epsilon > 0$ such that $\log \kappa \leq \rho \leq \kappa^{\frac12 - \epsilon}$ and $\frac{\kappa \log \kappa}{\rho} \leq \kappa' \leq \kappa$, for $t=\frac{2\rho}{\log \kappa}$, $t'= \frac{\kappa \log^2 \kappa}{\rho^2}$ and $a= \frac{2\kappa'\log \kappa}{\rho}$,
\begin{itemize} \item $\Omega(\epsilon \rho) = r(\AD_{t,t',a}) = O(\rho)$. \item $k(\AD_{t,t',a}) = \Theta(\kappa)$. \item $ k'(\AD_{t,t',a}) = \Theta(\kappa')$. \item $\delta(\AD_{t,t',a}) = \Theta\paren{\frac{1}{\kappa}\paren{\frac{\rho}{\log \kappa}}^2}$. \end{itemize}
\end{claim}
We prove Claim~\ref{claim:setting parameters for tightstraightline} in Section~\ref{subsubsec:functions-on-horizontal-k'}.
\begin{claim} \label{claim:setting parameters for tight curve} For all $\rho, \kappa, \kappa' \in \N$ such that $\kappa$ is sufficiently large, for all constants $\epsilon > 0$ such that $\kappa^{1/2} \leq \kappa' \leq (\kappa\log \kappa)/\rho$ and $\log \kappa \leq \rho \leq \kappa^{\frac{1}{2} - \epsilon}$, for $t= \frac{2\rho}{\log \kappa}$, $t'= \frac{4\kappa'^2}{\kappa}$ and $\ell = 2\bra{\frac{\kappa\log \kappa}{\kappa'\rho}}^2$,
\begin{itemize} \item $\Omega(\epsilon \rho) = r(\AAB) = O(\rho)$. \item $k(\AAB)= \Theta(\kappa)$. \item $k'(\AAB) = \Theta(\kappa')$. \item $\delta(\AAB) = O\bra{\frac{\kappa}{\kappa'^2}}$.
\end{itemize} \end{claim}
We prove Claim~\ref{claim:setting parameters for tight curve} in Section~\ref{subsubsec:funcs-on-curve-k'}.
\begin{claim} \label{claim: setting parameters for tight curve for k''} For all $\rho, \kappa, \kappa'' \in \N$ such that $\kappa$ is sufficiently large, for all constants $ \epsilon > 0$ such that $\log \kappa \leq \rho \leq \kappa^{1/2 - \epsilon}$, $e\rho \leq \kappa'' \leq \frac{\kappa \log \kappa}{\rho}$, for $t = \frac{2\rho}{\log (\kappa''/\rho)}, t' = \frac{\kappa''}{\rho}\log \bra{\kappa''/\rho}$, $p = \log \bra{\frac{4\kappa}{\kappa''}}$,
\begin{itemize} \item $r(\mAD_{t, t', p}) = \Theta(\rho)$. \item $\Omega(\kappa) = k(\mAD_{t, t', p}) = O(\kappa/\epsilon)$. \item $k''(\mAD_{t, t', p}) = \Theta(\kappa'')$. \item $\delta(\mAD_{t, t', p}) = \frac{\rho}{\kappa'' \log (\kappa''/\rho)}$. \end{itemize}
\end{claim}
We prove Claim~\ref{claim: setting parameters for tight curve for k''} in Section~\ref{sec:upper bound on wt in terms of k'' and logk''/r}.
\begin{claim} \label{claim:setting parameters for tightstraightline for k''} For all $\rho, \kappa, \kappa'' \in \N$ such that $\kappa$ is sufficiently large, for all $\epsilon > 0$ such that $\log \kappa \leq \rho \leq \kappa^{\frac12 - \epsilon}$ and $\frac{\kappa \log \kappa}{\rho} \leq \kappa'' \leq \kappa$, there exists a constant $c \geq 1$ such that the following holds for $t=\frac{2\rho}{\log \kappa}$, $t'= \frac{c\kappa \log^2 \kappa}{\rho^2}$ and $a= \frac{2c\kappa''\log \kappa}{\rho}$.
\begin{itemize} \item $\Omega(\epsilon \rho) = r(\AD_{t,t',a}) = O(\rho)$. \item $k(\AD_{t,t',a}) = \Theta(\kappa)$. \item $ k''(\AD_{t,t',a}) = \Theta(\kappa'')$. \item $\delta(\AD_{t,t',a}) = \Theta\paren{\frac{1}{\kappa}\paren{\frac{\rho}{\log \kappa}}^2}$. \end{itemize}
\end{claim}
\begin{proof} This follows from Claim~\ref{claim:setting parameters for tightstraightline} and the fact that $k'(\AD_{t, t',a}) = k''(\AD_{t, t', a})$ (Claim~\ref{claim:properties of ADtt'a}).
\end{proof} \subsubsection{Proof of Claim~\ref{claim:setting parameters for tightstraightline}} \label{subsubsec:functions-on-horizontal-k'} In this section we prove Claim~\ref{claim:setting parameters for tightstraightline}, which gives us Fourier-analytic properties of $\AD_{t, t', a}$ for particular settings of $t, t', a$. \begin{proof}[Proof of Claim~\ref{claim:setting parameters for tightstraightline}] Given any $\rho, \kappa, \kappa'$ as in the assumptions of the claim, recall that we fix the following values. \begin{align} t &= \frac{2\rho}{\log \kappa}, \label{eq:thugeand}\\ t' &= \kappa\left(\frac{\log \kappa}{\rho}\right)^2, \label{eq:t'hugeand}\\ a &= \frac{2\kappa'\log \kappa}{\rho}. \label{eq:ahugeand} \end{align} Since $\rho \geq \log \kappa$, \[ t = \frac{2\rho}{\log \kappa} \geq 2. \] Since $\rho < \sqrt{\kappa}$ by assumption, \[ t' = \kappa\left(\frac{\log \kappa}{\rho}\right)^2 \geq \log^2 \kappa \geq 2, \] for large enough $\kappa$. Since $\kappa' \geq \frac{\kappa\log \kappa}{\rho}$, \begin{equation}\label{eq:a2t'} a = \frac{2\kappa' \log \kappa}{\rho} \geq 2\kappa\bra{\frac{\log \kappa}{\rho}}^2 = 2t'. \end{equation} Hence the assumptions in Claim~\ref{claim:properties of ADtt'a} are satisfied with these values of $t, t', a$. By Equations~\eqref{eq:thugeand},~\eqref{eq:t'hugeand} and~\eqref{eq:ahugeand}, we obtain the following bound which is of use to us later. \begin{equation} \label{eq:at/t'} \frac{at}{t'} = \frac{2\kappa'\log \kappa}{\rho} \cdot \frac{2\rho}{\log \kappa} \cdot \frac{\rho^2}{\kappa \log^2\kappa} = \frac{4\kappa'}{\kappa} \bra{\frac{\rho}{\log \kappa}}^2 \leq \bra{\frac{2\rho}{\log \kappa}}^2 \leq t^2. \end{equation} We have the following properties of $\AD_{t, t', a}$. 
\begin{itemize} \item Rank: \begin{align*} r(\AD_{t,t',a}) & = (t -1)\log{t'} + \log a + \log t\tag*{by Claim~\ref{claim:properties of ADtt'a}}\\ &= t \log t' + \log \bra{\frac{at}{t'}} \leq 3 t\log t' \tag*{by Equation~\eqref{eq:at/t'}}\\ &= \frac{6\rho}{\log \kappa} \bra{\log (\kappa/\rho^2) + 2\log\log \kappa} \tag*{by Equations~\eqref{eq:thugeand} and~\eqref{eq:t'hugeand}}\\ &\leq \frac{6\rho}{\log \kappa} \bra{\log \kappa + 2\log \kappa}\\ &=O(\rho). \end{align*} For our setting of parameters (Equations~\eqref{eq:thugeand}, \eqref{eq:t'hugeand} and \eqref{eq:ahugeand}), $at = 4\kappa'$. Since $\rho \geq \log \kappa$ and $\kappa' \geq \frac{\kappa \log \kappa}{\rho}$, \begin{equation}\label{eq:at'} \frac{at}{t'} = \frac{4\kappa'}{\kappa}\bra{\frac{\rho}{\log{\kappa}}}^2 \geq \frac{4\rho}{\log \kappa} > 1. \end{equation} Thus, \begin{align*} r(\AD_{t, t', a}) & = (t -1) \log{t'} + \log a + \log t \tag*{by Claim~\ref{claim:properties of ADtt'a}}\\ &= t \log t' + \log {\bra{\frac{at}{t'}}} \geq t \log t' \tag*{by Equation~\eqref{eq:at'}}\\ &= \frac{2\rho}{\log \kappa} \bra{\log (\kappa/\rho^2) + 2\log\log \kappa} \tag*{by Equations~\eqref{eq:thugeand} and~\eqref{eq:t'hugeand}}\\ &\geq \frac{2\rho}{\log \kappa} \bra{\log (\kappa/\rho^2)} \geq \frac{2\rho}{\log \kappa} \bra{\log \kappa^{2\epsilon}} \tag*{since $\rho \leq \kappa^{\frac12 - \epsilon}$}\\ &= \frac{2\rho}{\log \kappa}(2\epsilon \log \kappa)\\ &= \Omega(\epsilon \rho). \end{align*} \item Sparsity: By our choice of parameters (Equations~\eqref{eq:thugeand},~\eqref{eq:t'hugeand} and~\eqref{eq:ahugeand}), we have \begin{equation} \label{eq:tat^2t'} ta = 4\kappa' \leq 4\kappa = t^2t'. 
\end{equation}
Thus,
\begin{align*} k(\AD_{t,t',a}) &= (t-1)(t'-1)t + ta \tag*{by Claim~\ref{claim:properties of ADtt'a}}\\ &= \Theta(t^2t') \tag*{by Equation~\eqref{eq:tat^2t'} and since $t, t' \geq 2$}\\ &= \Theta\bra{\bra{\frac{2\rho}{\log \kappa}}^2 \kappa \bra{\frac{\log \kappa }{\rho}}^2} \tag*{by Equations~\eqref{eq:thugeand} and~\eqref{eq:t'hugeand}}\\ &= \Theta(\kappa). \end{align*}
\item Max-supp-entropy: Since $ta = 4\kappa'$ from Equation~\eqref{eq:tat^2t'}, we have by Claim~\ref{claim:properties of ADtt'a} that
\begin{align*} k'(\AD_{t,t',a}) &= \frac{ta}{2} = \Theta(\kappa'). \end{align*}
\item Weight:
\begin{align*} \delta(\AD_{t,t',a}) &= \frac{1}{t'} + \frac{1}{at} - \frac{1}{tt'}\tag*{by Claim~\ref{claim:properties of ADtt'a}}\\ &= \frac{1}{t'} + \frac{1}{t}\bra{\frac{1}{a} - \frac{1}{t'}}\\ &\in \left[\frac{1}{t'} - \frac{1}{tt'}, \frac{1}{t'} - \frac{1}{2tt'}\right] \tag*{since $2t' \leq a$ by Equation~\eqref{eq:a2t'}}\\ &= \Theta \bra{\frac{1}{t'}} \tag*{since $t \geq 2$}\\ &= \Theta\bra{\frac{1}{\kappa}\bra{\frac{\rho}{\log \kappa}}^2} \tag*{by Equation~\eqref{eq:t'hugeand}}. \end{align*}
\end{itemize} \end{proof}

\subsubsection{Proof of Claim~\ref{claim:setting parameters for tight curve}} \label{subsubsec:funcs-on-curve-k'}
In this section we prove Claim~\ref{claim:setting parameters for tight curve}, which gives us Fourier-analytic properties of $\AAB$ for particular settings of $t, t', \ell$.
\begin{proof}[Proof of Claim~\ref{claim:setting parameters for tight curve}] Given any $\rho, \kappa, \kappa'$ as in the assumptions of the claim, recall that we fix the following values.
\begin{align} t & = \frac{2\rho}{\log \kappa}, \label{eq:tAAB}\\ t' & = \frac{4\kappa'^2}{\kappa}, \label{eq:t'AAB}\\ \ell & = 2\bra{\frac{\kappa \log \kappa}{\kappa'\rho}}^2. \label{eq:ellAAB} \end{align}
Since $\rho \geq \log \kappa$, we have $t = \frac{2 \rho}{\log \kappa} \geq 2$.
Since $\kappa' \geq \sqrt{\kappa}$, we have $t' = \frac{4\kappa'^2}{\kappa} \geq 4$. Finally since $\kappa' \leq (\kappa\log \kappa)/\rho$, $\ell = 2\bra{\frac{\kappa \log \kappa}{\kappa'\rho}}^2 \geq 2$. Hence the assumptions in Claim~\ref{claim:properties of AAB} are satisfied with these values of $t, t'$ and $\ell$. By Equations~\eqref{eq:t'AAB} and~\eqref{eq:ellAAB}, we obtain the following bound which is of use to us later. \begin{equation}\label{eq:lt'} \ell t' = 2\bra{\frac{\kappa \log \kappa}{\kappa'\rho}}^2 \cdot \frac{4\kappa'^2}{\kappa} = 8 \kappa\bra{\frac{\log \kappa}{\rho}}^2. \end{equation} We have the following properties of $\AAB$. \begin{itemize} \item Rank: \begin{align*} r(\AAB) &= t (\log t' + \log \ell) + \log t \tag*{by Claim~\ref{claim:properties of AAB}}\\ &= t \log (\ell t') + \log t \leq 2t \log (\ell t') \tag*{since $\ell \geq 2$ and $t' \geq 4$}\\ &= \frac{4\rho}{\log \kappa}\bra{ \log 8 + \log \bra{\kappa/\rho^2} + 2\log\log \kappa} \tag*{by Equation~\eqref{eq:lt'}}\\ &\leq \frac{4\rho}{\log \kappa}\bra{ \log \kappa + \log \kappa + 2\log \kappa} \tag*{since $\kappa$ is sufficiently large}\\ &= O(\rho). \end{align*} For the lower bound, we have \begin{align*} r(\AAB) &= t (\log t' + \log \ell) + \log t \tag*{by Claim~\ref{claim:properties of AAB}}\\ &\geq t \log(\ell t') \tag*{since $t \geq 1$}\\ &= \frac{2 \rho}{\log \kappa} \cdot \bra{ \log 8 + \log \bra{\kappa/\rho^2} + 2\log\log \kappa} \tag*{by Equations~\eqref{eq:tAAB} and~\eqref{eq:lt'}}\\ &\geq \frac{2 \rho}{\log \kappa} \cdot \bra{\log \bra{\kappa/\rho^2}} \geq \frac{2 \rho}{\log \kappa} \cdot \bra{\log \kappa^{2\epsilon}} \tag*{since $\rho \leq \kappa^{\frac12 - \epsilon}$}\\ &= \frac{2 \rho}{\log \kappa}\bra{2\epsilon\log \kappa}\\ & = \Omega(\epsilon \rho). 
\end{align*} \item Sparsity: \begin{align*} k(\AAB) &= 1 + \frac12 t^2(\ell+1)t' \tag*{by Claim~\ref{claim:properties of AAB}}\\ &= \Theta(t^2 \ell t')\\ &= \Theta\bra{\bra{\frac{2\rho}{\log \kappa}}^2 \cdot 8\kappa \bra{\frac{\log \kappa}{\rho}}^2} \tag*{by Equations~\eqref{eq:tAAB} and~\eqref{eq:lt'}}\\ & = \Theta(\kappa). \end{align*} \item Max-supp-entropy: \begin{align*} k'(\AAB) &= \frac{tt' \sqrt{\ell}}{2} \tag*{by Claim~\ref{claim:properties of AAB}}\\ &= \frac12 \cdot \frac{2\rho}{\log \kappa} \cdot \frac{(2\kappa')^2}{\kappa} \cdot \frac{\sqrt{2}\kappa \log \kappa}{\kappa' \rho} \tag*{by Equations~\eqref{eq:tAAB}, \eqref{eq:t'AAB} and \eqref{eq:ellAAB}}\\ &= \Theta(\kappa'). \end{align*} \item Weight: \begin{align*} \delta(\AAB) &= \frac{1}{t'} \tag*{by Claim~\ref{claim:properties of AAB}}\\ &= \Theta\bra{\frac{\kappa}{\kappa'^2}}\tag*{by Equation~\eqref{eq:t'AAB}}. \end{align*} \end{itemize} \end{proof} \subsubsection{Proof of Claim~\ref{claim: setting parameters for tight curve for k''}} \label{sec:upper bound on wt in terms of k'' and logk''/r} In this section we prove Claim~\ref{claim: setting parameters for tight curve for k''}, which gives us Fourier-analytic properties of $\mAD_{t, t', p}$ for particular settings of $t, t', p$. \begin{proof}[Proof of Claim~\ref{claim: setting parameters for tight curve for k''}] Given $\rho, \kappa$ and $\kappa''$ choose: \begin{align} p &= \log \left(\frac{4\kappa}{\kappa''}\right) \label{eq:p mAD}\\ t' &= \frac{\kappa''}{\rho} \log \left( \frac{\kappa''}{\rho} \right) \label{eq:t' mAD}\\ t &= \frac{2\rho}{\log(\kappa''/\rho)} \label{eq:t mAD} \end{align} Since $\rho \geq \log \kappa \geq \log (\kappa''/\rho)$, we have $t = \frac{2 \rho}{\log (\kappa''/\rho)} \geq 2$. Since $\kappa'' \geq 2\rho$, we have $t' = \frac{\kappa''}{\rho} \log \bra{\frac{\kappa''}{\rho}} \geq \frac{\kappa''}{\rho} \geq 2$. Note that $p \geq 2$ since $\log \left(\frac{4\kappa}{\kappa''}\right) \geq \log 4 = 2$. 
Finally we show that $p \leq (t/2) \log t' \leq (t-1) \log t'$: \begin{align*} (t-1) \log t' &\geq (t/2) \log t'\\ &= \frac{\rho}{\log (\kappa''/\rho)} \log\left(\frac{\kappa''}{\rho} \log\left( \frac{\kappa''}{\rho}\right) \right) \\ &= \rho + \rho \cdot \frac{\log \log (\kappa''/\rho)}{\log (\kappa''/\rho)} \\ &\geq \rho \tag*{since $\kappa'' \geq 2\rho$}\\ & \geq \log {\kappa} \geq \log \bra{4\kappa/\kappa''} \tag*{since $\kappa'' \geq e \log \kappa \geq 4$ as $\kappa$ is sufficiently large}\\ & = p. \end{align*} Hence the assumptions in Claim~\ref{claim:properties of mAD} are satisfied with these values of $t, t'$ and $p$. We first state and prove some auxiliary claims which we require. The derivative of $\frac{\log (\kappa''/\rho)}{\kappa''}$ with respect to $\kappa''$ equals \begin{align*} \frac{1 - \ln {(\kappa''/\rho)}}{\ln 2 \cdot (\kappa'')^2}. \end{align*} This value is negative since $\kappa'' > e\cdot \rho$. Thus, $\frac{\log (\kappa''/\rho)}{\kappa''}$ is a decreasing function in $\kappa''$ for $\kappa'' > e\cdot \rho$. Consider the expression \begin{align*} \frac{2^p}{t} &= \frac{4\kappa}{\kappa''} \frac{\log (\kappa''/\rho)}{2\rho}\\ &\geq \frac{2\kappa \cdot \log{\bra{\frac{\kappa\log \kappa}{\rho^2} }}}{\kappa \log \kappa} \tag*{Since $\frac{\log (\kappa''/\rho)}{\kappa''}$ is a monotone decreasing function in $\kappa''$ and $\kappa'' \leq \frac{\kappa \log \kappa}{\rho}$.} \\ &= 2\frac{\bra{\log {\kappa/\rho^2}} + \log{\log \kappa}}{\log \kappa} \\ &\geq 2\frac{\log \kappa^{2\epsilon} + \log \log \kappa}{\log \kappa} \tag*{since $\rho\leq \kappa^{1/2 -\epsilon}$} \\ &\geq 4 \epsilon. \end{align*} Therefore \begin{equation} \label{eq:2^pgeq4epst} 2^p\geq 4 \epsilon t. \end{equation} Next, observe that \begin{equation} \label{eq: tt'mADtt'p} tt' = \frac{2\rho}{\log (\kappa''/\rho)} \bra{\frac{\kappa'' \log (\kappa''/\rho)}{\rho}} = 2\kappa''. \end{equation} We have the following properties of $\mAD_{t, t', p}$. 
\begin{itemize}
\item Rank:
\begin{align*} r(\mAD_{t,t',p}) &= \Theta\left(t \log t'\right) \tag*{by Claim~\ref{claim:properties of mAD}}\\ &= \Theta\bra{ \frac{\rho}{\log(\kappa''/\rho)} \bra{\log \frac{\kappa''}{\rho} + \log \log \bra{ \frac{\kappa''}{\rho}} } } \tag*{by Equations~\eqref{eq:t mAD} and~\eqref{eq:t' mAD}}\\ &= \Theta\bra{ \frac{\rho}{\log(\kappa''/\rho)} \bra{\log \frac{\kappa''}{\rho} } }\\ &= \Theta(\rho). \end{align*}
\item Sparsity: For the upper bound, we have
\begin{align*} k(\mAD_{t,t',p}) &= O(2^ptt' + t^2t') \tag*{by Claim~\ref{claim:properties of mAD}}\\ &= O\bra{2^ptt' + \frac{2^ptt'}{4\epsilon}} \tag*{by Equation~\eqref{eq:2^pgeq4epst}}\\ &=O\bra{\frac{1}{\epsilon} \cdot 2^ptt'} \tag*{since $0 < \epsilon < 1/2$}\\ &= O \bra{\frac{1}{\epsilon} \cdot \frac{\kappa}{\kappa''} \cdot \kappa''} \tag*{by Equations~\eqref{eq: tt'mADtt'p} and~\eqref{eq:p mAD}}\\ &= O\bra{\frac{\kappa}{\epsilon}}. \end{align*}
For the lower bound, we have
\begin{align*} k(\mAD_{t,t',p}) &= \Omega(2^p t t') \tag*{by Claim~\ref{claim:properties of mAD}}\\ &= \Omega\left(\frac{\kappa}{\kappa''} \cdot \kappa''\right) \tag*{by Equations~\eqref{eq: tt'mADtt'p} and~\eqref{eq:p mAD}}\\ &= \Omega(\kappa). \end{align*}
\item Max-rank-entropy:
\begin{align*} k''(\mAD_{t,t',p}) &= \Theta(tt') \tag*{by Claim~\ref{claim:properties of mAD}}\\ &= \Theta(\kappa'') \tag*{by Equation~\eqref{eq: tt'mADtt'p}}. \end{align*}
\item Weight:
\begin{align*} \delta(\mAD_{t,t',p}) &= \frac{1}{t'} \tag*{by Claim~\ref{claim:properties of mAD}}\\ &= \frac{\rho}{\kappa'' \log (\kappa''/\rho)}. \tag*{by Equation~\eqref{eq:t' mAD}} \end{align*}
\end{itemize} \end{proof}

\section{Dominating Chang's lemma for all thresholds} \label{sec: beating chang all thresholds}
In this section, we show that there exists a function such that for any choice of threshold, the lower bounds on the weight of that function that we obtain from Theorems~\ref{thm:delta lower bound in terms of rk and also k'} and~\ref{thm:delta lower bound in terms of rk and also k''} are stronger than the lower bound obtained from Chang's lemma (Lemma~\ref{lem:chang}).
\begin{claim}[Beating Chang's lemma for all thresholds for $\AD_{t, t}$] \label{claim: beating changs lemma for all thresholds for ADtt'} Consider any integer $t > 4$ and define the function $f = \AD_{t, t} : \pmone^{\log t} \times \pmone^{t \log t} \to \pmone$ as in Definition~\ref{defi:ADtt'}. Then,
\begin{itemize}
\item $\delta(f) = \frac{1}{t}$.
\item For all real $x > 0$ for which $\dim(\cS_x) > 1$, we have
\begin{align*} \frac{\sqrt{\dim(\cS_x)}}{x\sqrt{\log (x^2/\dim(\cS_x))}} = O\left(\frac{1}{t^{3/2}}\right). \end{align*}
\item \[ \frac{1}{k(f)}\bra{\frac{r(f)}{\log \sparsity(f)}}^2 = \Omega\bra{\frac{1}{t}}, \quad\frac{\sparsity(f)}{\seespectrum(f)^2} = \Omega\bra{\frac{1}{t}} \quad \text{and}\quad \frac{r(f)}{\seerank(f)\log \sparsity(f)} = \Omega\bra{\frac{1}{t}}. \]
\end{itemize}
\end{claim}
In particular, Claim~\ref{claim: beating changs lemma for all thresholds for ADtt'} shows that our bounds can be strictly stronger than those given by Chang's lemma, in the following sense.
\begin{itemize}
\item All the lower bounds on $\delta(f)$ from Theorems~\ref{thm:delta lower bound in terms of rk and also k'} and~\ref{thm:delta lower bound in terms of rk and also k''} are tight, as witnessed by $f = \AD_{t, t}$.
\item No matter what threshold $x$ is chosen in Lemma~\ref{lem:chang}, the best possible lower bound on $\delta(\AD_{t,t})$ that we can get from Lemma~\ref{lem:chang} is $\Omega\bra{\frac{1}{t^{3/2}}}$, which is polynomially smaller than $1/t$, the actual weight of $\AD_{t,t}$. \end{itemize} \begin{proof}[Proof of Claim~\ref{claim: beating changs lemma for all thresholds for ADtt'}] From Claim~\ref{claim:properties of ADtt'}, we have \begin{align*} \delta(f) = \frac{1}{t}. \end{align*} Thus from Observation~\ref{obs:weight, empty Fourier}, \begin{align} \label{eq:fhatempty adtt'} \wh{f}(\emptyset) = 1 - \frac{2}{t}. \end{align} First, we show that except for the Fourier coefficient of the empty set, all other Fourier coefficients of $f$ have magnitude equal to $\frac{2}{t^2}$. Towards a contradiction, assume that there exists $T \subseteq [\log t]\cup [t \log t]$ such that $\abs{\wh{f}(T)} > \frac{2}{t^2}$. We have \begin{align} 1 &= \sum_{S \subseteq [\log t]\cup [t \log t]} \wh{f}^2(S) \tag*{from Theorem~\ref{thm:Parseval}} \nonumber \\ &= \left(1 - \frac{2}{t}\right)^2 + \sum_{S \subseteq [\log t]\cup [t \log t], S \neq \emptyset} \wh{f}^2(S) \tag*{by Equation~\eqref{eq:fhatempty adtt'}} \nonumber \\ &= \left(1 - \frac{2}{t}\right)^2 + \wh{f}^2(T) + \sum_{S \subseteq [\log t]\cup [t \log t], S \neq \emptyset, S \neq T} \wh{f}^2(S) \nonumber \\ &> \left(1 - \frac{2}{t}\right)^2 + \frac{4}{t^4} + \sum_{S \subseteq [\log t]\cup [t \log t], S \neq \emptyset, S \neq T} \wh{f}^2(S). \label{eq: beting changs eqn1} \end{align} From Claim~\ref{claim:properties of ADtt'}, $k(f) = 1 + t^2(t - 1) = t^3 - t^2 + 1$ and $k'(f) = t^2/2$.
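Both identities invoked above, Parseval's identity and the relation $\wh{f}(\emptyset) = 1 - 2\delta(f)$, are easy to sanity-check on any concrete Boolean function. The sketch below is ours (the $3$-bit AND stands in for $\AD_{t,t}$, whose definition is not reproduced in this section); it uses the $\pm 1$ convention $\wh{f}(S) = \mathbb{E}_x\bigl[f(x)\prod_{i\in S}x_i\bigr]$.

```python
from itertools import product

def chi(x, S):
    """Character chi_S(x) = prod_{i in S} x_i, with S encoded as a 0/1 tuple."""
    out = 1
    for xi, si in zip(x, S):
        if si:
            out *= xi
    return out

def fourier_coefficients(f, n):
    """hat{f}(S) = E_x[f(x) chi_S(x)] over uniform x in {-1,1}^n."""
    cube = list(product([-1, 1], repeat=n))
    return {S: sum(f(x) * chi(x, S) for x in cube) / len(cube)
            for S in product([0, 1], repeat=n)}

# Illustration on the 3-bit AND: f(x) = -1 exactly when all inputs are -1.
n = 3
f = lambda x: -1 if all(xi == -1 for xi in x) else 1
coeffs = fourier_coefficients(f, n)
weight = sum(1 for x in product([-1, 1], repeat=n) if f(x) == -1) / 2 ** n

assert abs(coeffs[(0,) * n] - (1 - 2 * weight)) < 1e-12   # hat{f}(emptyset) = 1 - 2 delta(f)
assert abs(sum(c ** 2 for c in coeffs.values()) - 1) < 1e-12  # Parseval
```

Here $\delta(\mathrm{AND}_3) = 1/8$, so $\wh{f}(\emptyset) = 3/4$, and the squared coefficients sum to $\mathbb{E}[f^2] = 1$.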
Using these, along with the definition of $k'(f)$ (Definition~\ref{defi:max entropy, max rank entropy}), in Equation~\eqref{eq: beting changs eqn1}, we obtain \begin{align*} 1 &> \left(1 - \frac{2}{t}\right)^2 + \frac{4}{t^4} + \frac{4(k(f) - 2)}{t^4}\\ &= \left(1 - \frac{2}{t}\right)^2 + \frac{4}{t^4} + \frac{4(t^3 - t^2 + 1 - 2)}{t^4} \\ &= \left(1 - \frac{2}{t}\right)^2 + \frac{4}{t^4} + \frac{4(t^3 - t^2 -1)}{t^4} \\ &= \left(1 - \frac{2}{t}\right)^2 + \frac{4}{t^4} + \left( \frac{4}{t} - \frac{4}{t^2} - \frac{4}{t^4} \right) \\ &= \left(1 + \frac{4}{t^2} - \frac{4}{t} \right) + \frac{4}{t^4} + \left( \frac{4}{t} - \frac{4}{t^2} - \frac{4}{t^4} \right) \\ &= 1, \end{align*} a contradiction. Thus, \begin{equation} \label{eq:all fcoeffs except emptyset equal adtt'} \abs{\wh{f}(S)} = \frac{2}{t^2} \quad \text{for all non-empty}~S \in \supp(f). \end{equation} We now prove the second part of the claim. If $x < \frac{t^2}{2}$ then $\cS_x = \cbra{\emptyset}$ and has dimension $0$. On the other hand, for any $x \geq \frac{t^2}{2}$, we have $\cS_x = \supp(f)$ by Equation~\eqref{eq:all fcoeffs except emptyset equal adtt'}, and hence $\dim(\cS_x) = r(f) = (t + 1)(\log t)$ by Claim~\ref{claim:properties of ADtt'}. Hence, in this case, \[ \frac{\sqrt{\dim(\cS_x)}}{x\sqrt{\log (x^2/\dim(\cS_x))}} = O\left(\frac{\sqrt{t \log t}}{t^2\sqrt{\log t}}\right) = O\left(\frac{1}{t^{3/2}}\right). \] On the other hand, by Claim~\ref{claim:properties of ADtt'}, $r(f) = \Theta(t \log t)$, $k(f) = \Theta(t^3)$, $k'(f) = k''(f) = \Theta(t^2)$. The third part of the claim follows. \end{proof} \section{Conclusions} \label{sec: conclusions} In this paper, for Boolean functions $f$, we study the relationship between weight and other Fourier-analytic measures, namely rank, sparsity, max-supp-entropy and max-rank-entropy. For a threshold $t > 0$, Chang's lemma gives a lower bound on the weight of a Boolean function $f$ in terms of $\dim\bra{\cbra{S \subseteq [n]:|\wh{f}(S)|\geq \frac{1}{t}}}$.
We consider three natural thresholds $t$ in Chang's lemma, namely $k(f)$, $k'(f)$ and $k''(f)$, yielding three lower bounds on weight in terms of these measures. We prove new lower bounds on weight in Theorems~\ref{thm:delta lower bound in terms of rk and also k'} and~\ref{thm:delta lower bound in terms of rk and also k''}, and our bounds dominate all the above-mentioned bounds from Chang's lemma for a wide range of parameters. When $\log k(f) = \Theta(r(f))$, the function $f = \AND$ already shows that all the above lower bounds are tight. To consider all other feasible relationships between $k(f)$ and $r(f)$, we divide our investigation of these lower bounds into two different parts. In the first part, we vary over all feasible settings of $r(f)$, $k(f)$ and $k'(f)$, and construct functions that witness tightness of our lower bounds in Theorem~\ref{thm:delta lower bound in terms of rk and also k'} for nearly all such feasible settings (Theorem~\ref{thm:delta upper bound in terms of rk and also k'}). In the second part, we vary over all feasible settings of $r(f)$, $k(f)$ and $k''(f)$, and construct functions that witness near-tightness of our lower bounds in Theorem~\ref{thm:delta lower bound in terms of rk and also k''} for nearly all such feasible settings (Theorem~\ref{thm:delta upper bound in terms of rk and also k''}). These functions are constructed by carefully composing the Addressing function with suitable inner functions. We show a composition lemma (Lemma~\ref{lem:properties of composition of addressing and g}), which relates the properties of the composed function with those of the inner functions; this allows us to come up with functions that match our lower bounds. We also construct functions for which our lower bounds are asymptotically stronger than the lower bounds obtained from Chang's lemma for all choices of threshold (Claim~\ref{claim: beating changs lemma for all thresholds for ADtt'}). 
The functions that we construct in this work might be of independent interest. \paragraph{Open Problems.} Claim~\ref{claim:setting parameters for tightstraightline} shows tightness of our upper bound on rank in terms of sparsity and weight (Theorem~\ref{thm:delta lower bound in terms of rk only}). Since our proof of Theorem~\ref{thm:delta lower bound in terms of rk only} is a generalization of the proof of the upper bound $r(f) = O(\sqrt{k(f)} \log k(f))$ due to Sanyal~\cite{San19}, it sheds light on the presence of the $\log k$ factor in Sanyal's upper bound. This still leaves the following question open: do there exist Boolean functions $f$ for which $r(f) = \omega(\sqrt{k(f)})$? There are some ranges of parameters where we were not able to construct functions with upper bounds matching our lower bounds from Theorem~\ref{thm:delta lower bound in terms of rk and also k''}. It will be interesting to see if our techniques can be extended to cover these ranges as well. All thresholds $t$ considered for Chang's lemma in this work satisfy $\dim(\{S \subseteq [n]:|\wh{f}(S)|\geq \frac{1}{t}\}) = r(f)$. It is an interesting problem to obtain Chang's-lemma-type bounds for thresholds for which this dimension is strictly less than $r(f)$. {\bf Acknowledgements: } R.M.~thanks DST (India) for grant DST/INSPIRE/04/2014/001799. S.S.~is supported by an ISIRD Grant from SRIC, IIT Kharagpur. N.S.M.~is supported by the NWO through QuantERA project QuantAlgo 680-91-034. T.M.~would like to thank Prahladh Harsha and Ramprasad Saptharishi for helpful discussions.
https://arxiv.org/abs/2009.06746
Orbital stability of KdV multisolitons in $H^{-1}$
We prove that multisoliton solutions of the Korteweg--de Vries equation are orbitally stable in $H^{-1}(\mathbb{R})$. We introduce a variational characterization of multisolitons that remains meaningful at such low regularity and show that all optimizing sequences converge to the manifold of multisolitons. The proximity required at the initial time is uniform across the entire manifold of multisolitons; this had not been demonstrated previously, even in $H^1$.
\section{Introduction} The history of the Korteweg--de Vries equation \begin{align}\label{KdV}\tag{KdV} \frac{d\ }{dt} q = - q''' + 6qq' \end{align} is profoundly intertwined with the notion of solitary waves. Indeed, the very goal of Korteweg and de~Vries \cite{KdV1895} was to explain the empirical observation of such waves. The fact that \eqref{KdV} admits solutions of the form \begin{align}\label{1 soliton} q(t,x) = - 2\beta^2 \sech^2(\beta [x - 4\beta^2 t -x_0]) \end{align} (for any $\beta>0$ and $x_0\in {\mathbb{R}}$) explains many aspects of solitary water waves, such as the relation between height and speed. However, the very possibility of Scott Russell's famous chance encounter with such a wave tells us something more: It must be stable! The question of stability was considered already by Boussinesq in \cite{Boo77}. In addition to observing the conservation of both \begin{equation} P(q) := \int \tfrac12 q(x)^2\,dx \qtq{and} H(q) := \int \tfrac12 q'(x)^2 + q(x)^3 \,dx, \end{equation} he also notes that the solitary wave profile solves the Euler--Lagrange equation associated to the problem of optimizing $H$ subject to constrained $P$. Now if the solitary wave were a non-degenerate minimum of $H$ at constrained $P$, then stability would follow immediately, following the Lyapunov model. However, it is not! The simple act of translation shows that it is at best a degenerate minimum. In the pioneering paper \cite{MR0338584}, Benjamin proved the $H^1$-orbital stability of such solitary waves: Solutions close to a soliton profile at time zero remain close to a soliton profile at all times. This variational approach is extremely robust and has seen countless applications since. However, it does not directly give any information about the physical location of the soliton profile, nor how this evolves with time; this is the significance of the adjective `orbital'. 
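That the profile \eqref{1 soliton} solves \eqref{KdV} is a routine computation; as a sketch (ours, not the paper's), it can be verified with exact symbolic derivatives, evaluating the residual of $q_t = -q''' + 6qq'$ at sample points.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
beta = sp.symbols('beta', real=True, positive=True)
x0 = sp.symbols('x_0', real=True)

# The solitary wave from the display above
q = -2 * beta**2 * sp.sech(beta * (x - 4 * beta**2 * t - x0))**2

# KdV in the paper's convention: q_t = -q''' + 6 q q'
residual = sp.diff(q, t) + sp.diff(q, x, 3) - 6 * q * sp.diff(q, x)

# The derivatives are exact; only the evaluation is numerical.
for vals in [{x: 0.3, t: 0.2, beta: 1.1, x0: -0.4},
             {x: -1.0, t: 0.5, beta: 0.7, x0: 0.2}]:
    assert abs(float(residual.subs(vals).evalf())) < 1e-10
```

The cancellation rests on $\operatorname{sech}^2 + \tanh^2 = 1$, which is why the residual vanishes identically and not merely at these sample points.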
In numerical simulations of a discrete form of \eqref{KdV}, Kruskal and Zabusky \cite{KruskalZabusky} observed that solitary waves exhibit an even stronger form of stability: Pairs of solitary waves emerged from collisions with the same profile and speed with which they had entered. Nevertheless, the two waves did interact; each was spatially shifted from its original trajectory. This particle-like behavior led Kruskal and Zabusky to coin the name \emph{soliton}; they presciently appreciated that this was an exotic phenomenon. We now understand that while the orbital stability of single solitary waves is rather common (and can often be proved variationally), stability under collisions is extremely peculiar. The ultimate explanation for this behavior was the discovery that \eqref{KdV} is a completely integrable Hamiltonian system; see \cite{GGKM,MR0336122,MR0252826,MR0303132}. Just as our notion of a solitary wave crystalizes around the concrete particular solutions \eqref{1 soliton} to \eqref{KdV}, so there is a family of special solutions to \eqref{KdV} that embody the behavior of collections of solitons: \begin{definition}[Multisoliton solutions]\label{D:multisoliton} Fix $N\geq 1$. Given $N$ distinct positive parameters $\beta_1, \ldots, \beta_N$ and $N$ real parameters $c_1, \ldots, c_N$, let \begin{align}\label{E:Qbc} Q_{\vec \beta, \vec c}(x)= - 2\tfrac{d^2}{dx^2} \ln \det\bigl[A(x)\bigr] \end{align} where $A(x)$ is the $N\times N$ matrix with entries \begin{align}\label{matrix A} A_{\mu\nu}(x) = \delta_{\mu\nu} + \tfrac{1}{\beta_\mu+\beta_\nu} e^{-\beta_\mu(x-c_\mu)-\beta_\nu(x-c_\nu)}. \end{align} The unique solution to \eqref{KdV} with initial data $q(0,x) = Q_{\vec \beta, \vec c}(x)$ is \begin{align}\label{E:Qbc(t)} q(t,x) = Q_{\vec \beta, \vec c(t)}(x) \qtq{where} c_n(t) = c_n + 4\beta_n^2 t . 
\end{align} \end{definition} The beautiful formula \eqref{E:Qbc} was originally derived in \cite{KayMoses} as a description of reflectionless potentials appearing in the one-dimensional Schr\"odinger equation. With the discovery of the inverse-scattering approach, the significance of this result for \eqref{KdV} was noted by several authors; see \cite{GGKM,MR0336122,Hirota1971,MR0328386,WadatiToda,Zakharov1971}. By analyzing these exact solutions, the authors confirmed the particle-like interactions, described the long-time asymptotics, and determined the (universal) spatial shifts. The idea that these explicit solutions provide a justification for empirical observations is necessarily predicated (at the very least) on their stability. Indeed, this question has attracted considerable attention over the years, as we shall discuss shortly. Let us begin, however, with our own contribution to this question: \begin{theorem}\label{T:main} Fix $N\geq 1$ and distinct positive parameters $\beta_1, \ldots, \beta_N$. For every $\varepsilon>0$ there exists $\delta>0$ so that for every initial data $q(0)\in H^{-1}({\mathbb{R}})$ satisfying \begin{align*} \inf_{\vec c\in {\mathbb{R}}^N} \|q(0)- Q_{\vec \beta, \vec c}\|_{H^{-1}}<\delta, \end{align*} the corresponding solution $q(t)$ to \eqref{KdV} satisfies \begin{align*} \sup_{t\in{\mathbb{R}}}\inf_{\vec c\in {\mathbb{R}}^N} \|q(t)- Q_{\vec \beta, \vec c}\|_{H^{-1}}<\varepsilon. \end{align*} \end{theorem} One virtue of this result is that it achieves the lowest regularity (in the $H^s$ scale) for which well-posedness is known \cite{KV} or possible \cite{MR2830706}. We shall also see that it is not difficult to recover higher-regularity results post factum: \begin{corollary}\label{C:Hs} Fix $s\in[-1,1]$, $N\geq 1$, and distinct positive parameters $\beta_1, \ldots, \beta_N$. 
For every $\varepsilon>0$ there exists $\delta>0$ so that \begin{align}\label{Hs stab} \inf_{\vec c\in {\mathbb{R}}^N} \|q(0) - Q_{\vec \beta, \vec c}\|_{H^s}<\delta \ \implies\ \sup_{t\in{\mathbb{R}}}\inf_{\vec c\in {\mathbb{R}}^N} \|q(t)- Q_{\vec \beta, \vec c}\|_{H^s}<\varepsilon. \end{align} \end{corollary} The restriction $s\leq 1$ should not be taken too seriously. Our goal is simply to illustrate two basic methods of raising the regularity without making the discussion too extensive, yet also recovering the important cases $L^2$ and $H^1$. Let us now turn toward a discussion of prior work, after which we will discuss how the proof of Theorem~\ref{T:main} will proceed. We do not intend to dwell on the question of well-posedness, since this is rather decoupled from the question of stability: Proving an assertion like \eqref{Hs stab} only for Schwartz solutions still cuts to the heart of the matter; the Schwartz restriction can then be trivially removed once well-posedness in $H^s$ is known. Indeed, Benjamin's work on $H^1$-stability should only grow in our estimation when we consider that well-posedness in $H^1$ was not achieved until many years later, \cite{MR1086966}. Conversely, having obtained well-posedness in $H^{-1}$ in \cite{KV}, it is timely to address the orbital stability in this space. It is also true that well-posedness alone provides little assistance in proving \eqref{Hs stab}. Nevertheless, it has proved useful in the consideration of slightly weaker assertions, where $\delta$ is permitted to depend on the parameters $\vec c$ of the multisoliton nearest the initial data. The manner in which it helps is this: Exact multisolitons resolve (as $t\to\pm\infty$) into essentially a linear combination of well-separated (and increasingly separated) simple solitary waves of the form \eqref{1 soliton}. Thus, researchers may confine their analyses to this more favorable scenario and exploit well-posedness to cover the remaining compact time interval.
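Returning briefly to Definition~\ref{D:multisoliton}, the determinant formula \eqref{E:Qbc} is easy to probe numerically. The sketch below is our own; it checks that for $N=1$ the formula reproduces the solitary wave \eqref{1 soliton}, with the shift $x_0 = c - \ln(2\beta)/(2\beta)$ (a piece of bookkeeping for how $c$ enters, not spelled out in the text).

```python
import numpy as np

def multisoliton(x, betas, cs, h=1e-4):
    """Q_{beta,c}(x) = -2 (d/dx)^2 log det A(x) from the Kay--Moses formula,
    with the second derivative approximated by a central difference."""
    def logdet(y):
        b = np.asarray(betas, dtype=float)
        e = np.exp(-b * (y - np.asarray(cs, dtype=float)))  # e^{-beta_mu (y - c_mu)}
        A = np.eye(len(b)) + np.outer(e, e) / (b[:, None] + b[None, :])
        return np.linalg.slogdet(A)[1]
    return -2 * (logdet(x + h) - 2 * logdet(x) + logdet(x - h)) / h**2

# For N = 1, det A = 1 + e^{-2 beta (x-c)}/(2 beta), and the formula collapses
# to the sech^2 wave translated by x0 = c - log(2 beta)/(2 beta).
beta, c = 1.3, 0.4
x0 = c - np.log(2 * beta) / (2 * beta)
for x in np.linspace(-2.0, 2.0, 9):
    exact = -2 * beta**2 / np.cosh(beta * (x - x0))**2
    assert abs(multisoliton(x, [beta], [c]) - exact) < 1e-5
```

The same routine evaluates genuine $N\geq 2$ profiles; `slogdet` keeps the computation stable even where the exponentials in $A(x)$ are large.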
While Benjamin's argument \cite{MR0338584} was both extremely novel and compelling, it did contain some mathematical lacunae, particularly with regard to the treatment of the modulation parameters. These issues were thoroughly addressed by Bona \cite{MR0386438}. This approach was further developed to treat NLS and gKdV by Weinstein \cite{MR0820338}. Orbital stability of the single soliton \eqref{1 soliton} in $L^2$ was only shown much more recently, by Merle and Vega \cite{MR1949297}. These authors also show a form of asymptotic stability: one has $L^2$-convergence to a soliton profile in any bounded window traveling with the soliton. Stronger forms of asymptotic stability such as global $L^2$ convergence are clearly forbidden by the conservative nature of the equation. We should also note that it is not claimed that the solution is converging to a single solitary wave with fixed translation parameter $x_0$; indeed, subsequent analysis by Martel and Merle \cite{MR2109467} shows that this cannot be guaranteed: successive interactions with a large number of wide (and so $L^2$-small) solitary waves can lead to logarithmic divergence of the soliton trajectory from a straight line. The Merle-Vega proof of $L^2$-orbital stability of single solitons combines the Miura map with orbital stability of the kink solutions proven in \cite{MR1831831}. (While Zhidkov focusses on the NLS equation, his variational analysis employs only conservation laws common to mKdV.) In the same paper \cite{MR0235310} that introduced the Lax pair, Lax also discusses two-soliton solutions with a view to explaining the properties of such waves observed in \cite{KruskalZabusky}. His construction of such solutions is based on a differential equation derived from the polynomial conservation laws discovered earlier in \cite{MR0252826}. 
While Lax does not explicitly express it thus in this paper (see \cite{MR0369963}, however), his equation arises as the Euler--Lagrange equation for optimizing the third conserved quantity with the first two constrained. In general, $N$-solitons are critical points of the variational problem of optimizing the $(N+1)^{\text{st}}$ polynomial conserved quantity constrained by its $N$ predecessors (we exclude the Casimir $\int q\, dx$ from our enumeration). This constrained variational problem was analyzed by Maddocks and Sachs in \cite{MR1220540}. They showed that multisolitons are in fact local minimizers. The essential (and subtle) point addressed by these authors is to understand the Hessian of the highest-order conservation law on the manifold of multisolitons, both directly and restricted to directions parallel to the constraint manifold. As the analysis in \cite{MR1220540} is localized in small neighbourhoods of the soliton profiles, it does not address either of the following questions: Are $N$-solitons global minimizers of this variational problem? Are they the \emph{only} minimizers? To the best of our knowledge, both questions remain open. Theorem~\ref{T:char} below gives an affirmative answer to both questions for the variational description we employ. Orbital stability of multisolitons in $H^1$ was shown by Martel, Merle, and Tsai in \cite{MR1946336}. The principal part of the argument is showing that a system of well-separated solitons (ordered by speed) is future-stable. Subsequently, in \cite{MR2984057}, Alejo, Mu\~noz, and Vega proved orbital stability by using Gardner's generalization of the Miura map and applying the ideas of \cite{MR1946336} to the resulting Gardner equation. These works do not yield orbital stability in the strong form of \eqref{Hs stab}; they rely on local well-posedness in the manner discussed earlier. Additional information on the modulation parameters over this initial interval is obtained (a posteriori) in \cite{MR2339841}.
A different approach to orbital stability of solitons based on \emph{auto}B\"acklund transformations (which add or remove solitons) was demonstrated recently in \cite{MR2920823}. This work proves a strong form of $L^2$-orbital stability of one-solitons for the focusing cubic NLS on the line by combining these transformations with stability of the zero solution. This approach was substantially advanced in \cite{KochTataru}, where low-regularity orbital stability of NLS multisolitons (including the delicate case of multiple eigenvalues) was proved. To the best of our knowledge, these ideas have not yet been applied to \eqref{KdV}. Let us now turn to the topic of the methods to be employed in this paper. Our discussion will be somewhat discursive since we shall take the time to introduce the central object of our methodology, the (doubly) renormalized perturbation determinant, as well as historical and contextual matters that we find instructive. As we have discussed, the stability of multisolitons is historically (and physically) inseparable from the complete integrability of \eqref{KdV}. The key question is how this complete integrability is to be exploited. The long-standing approach, introduced already in \cite{GGKM}, is to employ the scattering theory of one-dimensional Schr\"odinger operators with the potential given by the \eqref{KdV} wave form at a fixed time. Despite receiving a great deal of attention over the years (with much impetus taken from the study of KdV), there is currently no satisfactory theory of forward or inverse scattering in any $H^s$ space. While non-trivial problems do attend low regularity, it is the slow decay associated with such spaces that is most devastating. We are truly at a loss as to how to define the reflection coefficient or how to handle embedded eigenvalues and singular continuous spectrum. 
The inverse-scattering technique is capable of providing extremely detailed long-time asymptotics for the class of solutions to which it is applicable; see \cite{MR2525595}, for example. However, due to the difficulties outlined above, it has not yet yielded stability of even single-solitons in any $H^s$ space. While the reflection coefficient is fragile, it has long been appreciated that the transmission coefficient is much more robust. One intuitive explanation for this is that the transmission coefficient actually represents the boundary values of a function meromorphic in the upper half-plane. Analytically, it is preferable to consider the reciprocal $a(k;q)$ of the transmission coefficient. This is holomorphic in the upper half-plane and its zeros precisely encode the discrete spectrum of the attendant Schr\"odinger operator. The simplest description is as the Wronskian (divided by $2ik$) of the two Jost solutions. An alternate perspective on this function $a(k;q)$ was introduced by Jost and Pais \cite{MR0044404}. They observed that it could be expressed as a Fredholm determinant. In \cite[Chapter~5]{MR2154153}, Simon proves that \begin{equation}\label{JostPaisSimon} a(k; q)= \det \bigl(1+ |q|^{\frac12} R_0(k) |q|^{-\frac12} q\bigr), \qtq{where} R_0(k) = (-\partial^2_x - k^2)^{-1}, \end{equation} coincides with the Wronskian definition provided $\langle x\rangle^{1+\delta} q \in L^1$ with $\delta>0$. Splitting $q$ across the two sides of $R_0$ is necessary if one wishes to treat $q$ with $L^1$-type singularities: neither $R_0 q$ nor $q R_0$ could be guaranteed to be bounded under Simon's hypothesis. However, it turns out to be wiser to factor the free resolvent $R_0$, placing a square-root of this operator on either side of $q$; as we shall see, this will permit potentials with much more severe singularities. On the other hand, one still needs strong decay hypotheses on $q$; for otherwise, the determinant would not be defined. 
The second layer of renormalization needed to treat $q\in H^{-1}$ employs the renormalized determinant introduced by Hilbert \cite{Hilbert}; see \cite[Chapter~9]{MR2154153}. Combining these two ideas, we are led to consider the following: For $k\in {\mathbb{C}}^+ = \{z \in {\mathbb{C}} : \Im z >0\}$ and Schwartz-class $q$, \begin{align}\label{a ren} a_{\mathrm{ren}} (k;q) := \det_2 \bigl(1+ \sqrt{R_0(k)}\, q\, \sqrt{R_0(k)} \bigr)= a(k;q)\exp\Bigl\{-\tfrac i{2k}\int q(x)\,dx\Bigr\}. \end{align} The square-root of the resolvent is defined via analytic continuation from the case $k=i\kappa$ with $\kappa>0$, in which case $R_0$ is positive definite (and we take the positive definite square-root). To the best of our knowledge, this quantity was first considered by Rybkin. In \cite{MR2683250}, he used it to give the first proof of a priori $H^{-1}$ bounds for solutions to \eqref{KdV}. This approach was developed independently in \cite{KVZ}; alternate approaches to such a priori bounds can be found in \cite{MR3400442,MR3874652}. The fact that $a_{\mathrm{ren}}$ extends continuously (indeed real-analytically) from $q\in\mathcal S$ to merely $q\in H^{-1}$ rests on the basic theory of such regularized determinants and the Hilbert--Schmidt estimate \begin{align}\label{R I2} \Bigl\| \sqrt{R_0(k)}\, q\, \sqrt{R_0(k)} \Bigr\|^2_{\mathfrak I_2} &\leq \frac{|k|}{[\Im k]^2}\int \frac{|\hat q(\xi)|^2}{\xi^2+4|k|^2}\,d\xi. \end{align} Indeed, the mapping $A\mapsto \det_2(1+A)$ is a complex-analytic function on ${\mathfrak{I}}_2$ and \begin{align}\label{HS det2 bnd} \bigl| 1 - \det_2(1+A) \bigr| \lesssim \| A\|_{{\mathfrak{I}}_2} \exp\bigl\{ \| A\|_{{\mathfrak{I}}_2}^2 \bigr\} ; \end{align} see \cite{MR2154153} for details. Our justification for the bound \eqref{R I2} is quite simple. 
We use the ideal property and the elementary bound $$ |\xi^2-k^2|^{-1} \leq \tfrac{|k|}{\Im k} (\xi^2+|k|^2)^{-1} \qtq{for all} \xi\in{\mathbb{R}} $$ to reduce matters to the $\kappa=|k|$ case of \begin{align}\label{R I2 kappa} \Bigl\| \sqrt{R_0(i\kappa)}\, q\, \sqrt{R_0(i\kappa)} \Bigr\|^2_{\mathfrak I_2} & = \frac{1}{\kappa} \int \frac{|\hat q(\xi)|^2}{\xi^2+4\kappa^2}\,d\xi \qtq{for all} \kappa>0. \end{align} In view of the importance of \eqref{R I2 kappa} for what follows, it will also be convenient to employ the notation $$ \| f \|_{H^{-1}_\kappa}^2 := \int \frac{|\hat f(\xi)|^2}{\xi^2+4\kappa^2}\,d\xi. $$ With these preliminaries set, we may now give our variational characterization of multisolitons: \begin{theorem}[Variational characterization of multisolitons]\label{T:char} Fix $N\geq 1$ and distinct positive parameters $\beta_1, \ldots , \beta_N$. If $q\in H^{-1}$ satisfies \begin{align}\label{aren 0} a_{\mathrm{ren}}(k;q)=0 \qtq{for all} k\in\{i\beta_m:\ 1\leq m\leq N\}, \end{align} then \begin{align}\label{a_ren is max} a_\mathrm{ren}(i\kappa;q) \leq \exp\biggl\{\, \sum_{m=1}^N \ln\bigl(\tfrac{\kappa-\beta_m}{\kappa+\beta_m}\bigr) +\tfrac{2\beta_m}{\kappa}\biggr\} \quad\text{for all}\quad \kappa\geq 1 + \|q\|_{H^{-1}}^2 . \end{align} If equality holds in \eqref{a_ren is max} for any one such $\kappa$, then $q=Q_{\vec \beta, \vec c}$ for some $\vec c\in {\mathbb{R}}^N$. \end{theorem} By itself, Theorem~\ref{T:char} does not provide stability: one would also need to know that profiles that almost optimize \eqref{a_ren is max} are close to actual optimizers (i.e., to multisolitons). This leaves us with a very clear ambition of a purely variational character: prove that optimizing sequences converge to the manifold of multisolitons. We cannot expect optimizing sequences to have convergent subsequences --- the manifold of optimizers is not compact! This problem arises already in the case of single solitons, due to the translation symmetry.
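The identity \eqref{R I2 kappa} can itself be checked numerically: using the kernel $e^{-\kappa|x-y|}/(2\kappa)$ of $R_0(i\kappa)$, the Hilbert--Schmidt norm becomes a double integral of $q(x)\,q(y)$ against the squared kernel. The sketch below is ours and assumes the unitary normalization $\hat q(\xi) = (2\pi)^{-1/2}\int q(x)e^{-i\xi x}\,dx$, under which the constants in \eqref{R I2 kappa} match.

```python
import numpy as np

kappa = 1.7
xs = np.linspace(-8.0, 8.0, 1601)
dx = xs[1] - xs[0]
q = np.exp(-xs**2)                      # smooth test potential q(x) = e^{-x^2}

# Left side: ||sqrt(R0) q sqrt(R0)||_{HS}^2 = tr(q R0 q R0)
#          = double integral of q(x) q(y) G(x-y)^2, with G(u) = e^{-kappa|u|}/(2 kappa)
K = np.exp(-2 * kappa * np.abs(xs[:, None] - xs[None, :])) / (4 * kappa**2)
lhs = q @ K @ q * dx * dx

# Right side: (1/kappa) * integral of |qhat(xi)|^2 / (xi^2 + 4 kappa^2);
# for q = e^{-x^2} the unitary transform is qhat(xi) = e^{-xi^2/4}/sqrt(2)
xis = np.linspace(-40.0, 40.0, 8001)
dxi = xis[1] - xis[0]
rhs = np.sum(0.5 * np.exp(-xis**2 / 2) / (xis**2 + 4 * kappa**2)) * dxi / kappa

assert abs(lhs - rhs) / rhs < 1e-3
```

The agreement is limited only by the Riemann sums (the kernel has a kink along the diagonal), not by the identity itself.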
In the one-soliton case, compactness can be restored by incorporating translations. This approach was convincingly demonstrated by Cazenave and Lions \cite{MR0677997}, who proved orbital stability of ground-state solitary waves for a variety of NLS-like equations. Their paper is a major inspiration for what follows. In the multisoliton case, compactness cannot be restored by translation alone. Indeed, the long-time dynamics of the multisolitons themselves is to break into asymptotically well-separated one-solitons. We need a profile decomposition! However, unlike most applications of this concentration-compactness technique, there is no sub-additivity in our problem: dichotomy must be embraced, not refuted. As we will discuss, this is just one of several subtle aspects to our implementation of this classic concentration-compactness device. We should note that the scenario of asymptotically well-separated one-solitons is not the only manner in which dichotomy can arise for optimizing sequences (or indeed sequences of optimizers). One may have asymptotically well-separated \emph{multi}solitons. This `gas of molecules' scenario will be analyzed in Section~\ref{S:MS}, where we show that a linear combination of well-separated multisolitons can be well-approximated by a single exact multisoliton. Further ways in which our concentration-compactness analysis diverges from the other examples we know are (i) we are working in trace ideals, not Lebesgue spaces; (ii) while we do have local compactness, this is non-quantitative, arising from mere equicontinuity; and (iii) the constraints are apportioned across the profiles in an exotic manner. We will discuss each of these in succession. Trace ideals (which are also known as non-commutative $\ell^p$ spaces) have an additional defect of compactness beyond those of sequence spaces, namely, unitary conjugation. This is an infinite-dimensional group.
Local compactness is not a prerequisite for concentration-compactness methods; indeed, with the incorporation of scaling parameters, such methods have proven to be extremely useful in scaling-critical problems. Nevertheless, in the examples we know, local compactness is obtained from the Rellich--Kondrashov Theorem. In our case, however, there is no such quantitative principle. We will be able to show that individual optimizing sequences are equicontinuous, but nothing more. In the standard analyses, a constraint, such as on the total $L^2$ norm, is apportioned across the profiles in an additive manner: the mass of the sequence is the sum of the masses of the profiles, plus that of the remainder. In our case, the constraints are the vanishing of the perturbation determinant. In Section~\ref{S:OS}, we will see that the profiles attendant to optimizing sequences share the constraints in a different way: different profiles satisfy different \emph{subsets} of the constraints. The paper is organized as follows: In Section~\ref{S:2}, we first develop the theory of the perturbation determinant a little further. We then use this to prove Theorem~\ref{T:char}. Our approach is this: Building on the existing theory of Schwartz-class potentials, we show that the upper bound \eqref{a_ren is max} holds across all $q\in H^{-1}$. Having first proved linear independence of the gradients of the constraints, we may analyze the case of equality using the Euler--Lagrange equation. Using this device, we show that optimizers are, in fact, Schwartz class. We may then appeal to classical inverse scattering to deduce that $q$ is an exact multisoliton. In Section~\ref{S:MS}, we show that well-separated linear combinations of multisolitons (which may arise as optimizing sequences) can be approximated by a single multisoliton.
This is notationally very cumbersome; nevertheless, we hope that the virtues of deforming $x$ into the complex plane and exploiting the determinantal relation \eqref{inductive Cauchy} shine through. In Section~\ref{S:CC}, we develop a profile decomposition attendant to the functional $q\mapsto \alpha(\kappa;q)$, defined in \eqref{alpha defn}, applied to bounded and equicontinuous sequences in $H^{-1}$. Structurally speaking, our approach is the one we advanced in \cite{MR3098643}, namely, to first prove an inverse inequality and then employ this inductively to extract profiles. In Section~\ref{S:OS}, we prove Theorem~\ref{T:main}, arguing by contradiction. If the theorem were to fail, then there would exist a sequence of solutions $q_n$ so that the initial data $q_n(0)$ converges to the manifold of solitons, and a sequence of times $t_n$ so that $q_n(t_n)$ does not converge to the manifold of solitons. Using the fact that $\alpha(\kappa;q)$ is conserved under the flow, we show that $q_n$ is an optimizing sequence for the variational problem described in Theorem~\ref{T:char}. (Actually, this is not quite correct: the zeros may be slightly displaced.) We then employ the profile decomposition of Section~\ref{S:CC} to show (after some work) that the optimizing sequence can be approximated by a linear combination of well-separated multisolitons. This suffices to reach a contradiction because of the analysis in Section~\ref{S:MS}. We prove Corollary~\ref{C:Hs} in Section~\ref{S:Hs}. In doing so, we illustrate two basic methods for raising the regularity: (i) employing polynomial conservation laws and (ii) exploiting equicontinuity of orbits. Both methods are applicable beyond the range claimed in Corollary~\ref{C:Hs}; however, the details become increasingly cumbersome as the regularity $s$ grows. \subsection*{Acknowledgements} R. K. was supported by NSF grant DMS-1856755 and M.~V. by grant DMS-1763074.
\section{Variational characterization of multisolitons}\label{S:2} The ultimate goal of this section is to prove Theorem~\ref{T:char}. This will proceed in several stages. First, we discuss the logarithm of $a_\mathrm{ren}$. Then we show that \eqref{a_ren is max} holds, first for Schwartz-class $q$ and then for general $q\in H^{-1}$. The climax of the proof is showing that all $H^{-1}$ optimizers are, in fact, Schwartz class and then using this information to show that they must be multisolitons. \begin{lemma}\label{L:alpha defn} For $q\in H^{-1}$ and $\kappa\geq 1 + \|q\|_{H^{-1}}^2$, the series \begin{align}\label{alpha defn} \alpha(\kappa;q) := \sum_{\ell=2}^\infty \tfrac{1}{\ell} (-1)^\ell \tr\Bigl\{ \Bigl( \sqrt{R_0(i\kappa )}\, q\, \sqrt{R_0(i\kappa )}\, \Bigr)^\ell \Bigr\} \end{align} converges and \begin{align}\label{arenalpha} a_\mathrm{ren}(i\kappa;q) = \exp\{ - \alpha(\kappa;q)\}. \end{align} Moreover, \begin{align}\label{alpha to L2} \liminf_{\kappa\to\infty} 8\kappa^3 \alpha(\kappa;q) = \|q\|_{L^2}^2, \end{align} with the understanding that LHS\eqref{alpha to L2} is infinite if $q\notin L^2$. \end{lemma} \begin{proof} Convergence of the series \eqref{alpha defn} under this hypothesis on $\kappa$ follows immediately from \eqref{R I2 kappa}. That exponentiating this series yields the renormalized determinant is well known; indeed, this is little more than the Newton--Girard relation between elementary and power-sum symmetric functions. Employing \eqref{R I2 kappa} in the series \eqref{alpha defn} shows \begin{align}\label{37deg} \biggl| 8\kappa^3\alpha(\kappa;q) - \int \frac{4\kappa^2 |\hat q(\xi)|^2}{\xi^2+4\kappa^2}\,d\xi \biggr| \leq \frac{\| q\|_{H^{-1}}}{\sqrt{\kappa} - \| q\|_{H^{-1}}} \int \frac{4\kappa^2 |\hat q(\xi)|^2}{\xi^2+4\kappa^2}\,d\xi, \end{align} from which \eqref{alpha to L2} follows immediately. \end{proof} Incidentally, we note that the inequality \eqref{37deg} is actually the basis of the proof of a priori $H^{-1}$ bounds.
Indeed, combining this with a simple bootstrap argument shows that for $\kappa\geq 1 + 64\|q(0)\|_{H^{-1}_\kappa}^2$ and any $t\in{\mathbb{R}}$, \begin{align}\label{37deg'} \frac23 \int \frac{|\hat q(t, \xi)|^2}{\xi^2+4\kappa^2}\,d\xi \leq 2\kappa\alpha(\kappa; q(t)) = 2\kappa\alpha(\kappa; q(0)) \leq \frac87 \int \frac{|\hat q(0, \xi)|^2}{\xi^2+4\kappa^2}\,d\xi . \end{align} Let us now recall some known facts about the reciprocal transmission coefficient $a(k;q)$ in the case $q$ is of Schwartz class. The basic analytical facts listed below can be easily derived from the Wronskian definition of $a(k;q)$ and rigorous proofs can be found in many basic texts on scattering theory. The claim \eqref{a soliton} is more serious. While many introductory texts on the theory of solitons give at least a formal derivation of \eqref{E:Qbc} from the assumption that $a(k;q)$ takes the stated form, a rigorous treatment requires considerable care, especially on the question of uniqueness. We recommend the paper \cite{MR0512420} of Deift and Trubowitz for a complete and self-contained presentation of the following (under rather weaker hypotheses): \begin{prop}\label{P:DT} Fix $q\in \mathcal S$. Then $a(k;q)$ extends continuously to the closed upper half-plane. It has finitely many zeroes in ${\mathbb{C}}^+$, all of which are simple and located on the imaginary axis. Moreover, \begin{gather} |a(k;q)| \geq 1 \quad\text{for all} \quad k\in {\mathbb{R}}, \\ |a(k;q)- 1| = O\bigl(\tfrac1{|k|}\bigr) \quad\text{as} \quad |k|\to \infty \quad\text{uniformly for}\quad \Im k\geq 0, \end{gather} and we have the symmetry \begin{equation}\label{conj symm} \overline{ a(k;q) }= a(-\bar k; q) \quad\text{for all} \quad k\in {\mathbb{C}}^{+}. \end{equation} Finally, given distinct $\beta_1,\ldots,\beta_N\in(0,\infty)$ and $q\in \mathcal S$, \begin{align}\label{a soliton} a(k; q)= \prod_{m=1}^N \frac{k-i\beta_m}{k+i\beta_m} \iff q\in\bigl\{ Q_{\vec \beta, \vec c}:\, \vec c\in {\mathbb{R}}^N\bigr\}. 
\end{align} \end{prop} This does not address the value of the renormalized perturbation determinant for such multisolitons. The missing ingredient is the following: \begin{equation}\label{trace form0} \int Q_{\vec \beta, \vec c}(x)\, dx= -\sum_{m=1}^N 4\beta_m. \end{equation} This is proved in both \cite{MR0336122} and \cite{MR0303132}. One simple approach that explains the additive structure of RHS\eqref{trace form0} is this: As $\int q$ is conserved by the flow, the value of LHS\eqref{trace form0} can be determined from the value for $N$ well-separated single solitons. Alternatively, one may deduce this by comparing the large-$k$ asymptotics of RHS\eqref{a soliton} with those of $a(k;q)$. From the same references or by the same method, one can also find \begin{align}\label{conserv of multi} P\bigl(Q_{\vec\beta,\vec c}\bigr) = \tfrac{8}{3}\sum_m \beta_m^3 \qtq{and} H\bigl(Q_{\vec\beta,\vec c}\bigr) =-\tfrac{32}5\sum_m\beta_m^5. \end{align} Combining \eqref{a ren}, \eqref{arenalpha}, and \eqref{trace form0} shows \begin{gather} a_{\mathrm{ren}}(k; Q_{\vec \beta, \vec c}) = \prod_{m=1}^N \frac{k-i\beta_m}{k+i\beta_m} e^{{2i\beta_m}/{k}} \qtq{for all} k\in {\mathbb{C}}^+, \label{a ren Qbc} \\ \alpha(\kappa;Q_{\vec \beta, \vec c}) = -\sum_{m=1}^N \Bigl[ \tfrac{2\beta_m}{\kappa} + \ln\bigl(\tfrac{\kappa-\beta_m}{\kappa+\beta_m}\bigr) \Bigr] \qtq{for all} \kappa\geq 1 + \|q\|_{H^{-1}}^2 . \label{alpha Qbc} \end{gather} As Lemma~\ref{L:alpha defn} guarantees that $a_\mathrm{ren}$ is non-vanishing for $\kappa\geq 1 + \|q\|_{H^{-1}}^2$, the restriction on $\kappa$ guarantees $\kappa > \sup_m \beta_m$ and consequently, that RHS\eqref{alpha Qbc} is positive: \begin{align}\label{G} G\bigl(\tfrac{\beta_m}\kappa\bigr) := - \Bigl[ \tfrac{2\beta_m}{\kappa} + \ln\bigl(\tfrac{\kappa-\beta_m}{\kappa+\beta_m}\bigr) \Bigr] = \sum_{\ell\geq1} \tfrac{2}{2\ell+1} \bigl(\tfrac{\beta_m}\kappa\bigr)^{2\ell+1} \geq 0.
\end{align} Recalling \eqref{arenalpha}, we see that multisolitons achieve equality in \eqref{a_ren is max}. We next show that this is indeed a bound for all $q$. This will be done in two steps: first for $q\in\mathcal S$ and then for $q\in H^{-1}$: \begin{prop}\label{P:min alpha} For $q\in \mathcal S$ and $\kappa\geq 1 + \|q\|_{H^{-1}}^2$, \begin{align}\label{E:min alpha} \alpha(\kappa;q) \geq -\sum_{m=1}^N \Bigl[ \ln\bigl(\tfrac{\kappa-\beta_m}{\kappa+\beta_m}\bigr) +\tfrac{2\beta_m}{\kappa} \Bigr], \end{align} where $\{i\beta_m:\ 1\leq m\leq N\}$ enumerates the zeros of $a_\mathrm{ren}$ in ${\mathbb{C}}^+$, of which there are finitely many ($0\leq N<\infty$). \end{prop} \begin{proof} Proposition~\ref{P:DT} shows that $a(k;q)$ has only finitely many zeros and all are simple. Using these, we build the Blaschke product $$ B(k)=\prod_{m=1}^N \frac{k-i\beta_m}{k+i\beta_m}. $$ In the case $a(k;q)$ has no zeros, $B(k)\equiv 1$. Using Proposition~\ref{P:DT} again, we see that $k\mapsto \ln\bigl| \frac{a(k;q)}{B(k)} \bigr|$ is harmonic on ${\mathbb{C}}^+$ and extends continuously to $\partial {\mathbb{C}}^+$. Moreover, \begin{align}\label{prop of frac} \ln\bigl| \tfrac{a(k;q)}{B(k)} \bigr|\geq 0 \quad \text{for all $k\in {\mathbb{R}}$} \qquad \text{and}\qquad\ln\bigl| \tfrac{a(k;q)}{B(k)} \bigr|= O\bigl(\tfrac1{|k|}\bigr) \quad \text{as $|k|\to \infty$}. \end{align} It follows from the maximum principle that this function is non-negative throughout~${\mathbb{C}}^+$. The Herglotz Representation Theorem (cf. \cite[Theorem~3, \S59]{AG}) then guarantees \begin{align}\label{rep} \ln\bigl[\tfrac{a(k;q)}{B(k)}\bigr] = -i \int_{\mathbb{R}} \tfrac{d\mu(t)}{t-k}, \end{align} for some finite positive measure $d\mu$ on ${\mathbb{R}}$. This measure is also even under $t\mapsto -t$; this is inherited from the symmetry \eqref{conj symm} enjoyed by both $a(k;q)$ and $B(k)$.
In this way, we see that for $\kappa\geq 1 + \|q\|_{H^{-1}}^2$, \begin{align*} -\ln a(i\kappa ;q) &= -\sum_m \ln \bigl(\tfrac{\kappa-\beta_m}{\kappa+\beta_m}\bigr) + i\int\tfrac{t+i\kappa}{t^2+\kappa^2}\, d\mu(t)= -\sum_m \ln \bigl(\tfrac{\kappa-\beta_m}{\kappa+\beta_m}\bigr) -\kappa\int\tfrac{d\mu(t)}{t^2+\kappa^2}. \end{align*} On the other hand, \eqref{a ren}, \eqref{arenalpha}, and \eqref{alpha to L2} show that as $\kappa\to\infty$, \begin{align*} \Bigl|-\ln a(i\kappa ;q) + \tfrac1{2\kappa}\int q(x)\, dx\Bigr| &= O(\kappa^{-3}). \end{align*} Combining these two observations, we deduce that \begin{equation}\label{9:59} \begin{aligned} \int q(x)\, dx &= \lim_{\kappa\to \infty} \biggl[ \sum_m 2\kappa \ln \bigl(\tfrac{\kappa-\beta_m}{\kappa+\beta_m}\bigr) + \int \tfrac{2\kappa^2}{t^2+\kappa^2}\, d\mu(t) \biggr] \\ &= -4\sum_m\beta_m + 2\int d\mu(t) \end{aligned} \end{equation} and thence that \begin{align}\label{alpha with mu} \alpha(\kappa;q) = -\ln a_{\mathrm{ren}}(i\kappa ;q) &= -\sum_{m=1}^N \Bigl[\ln\bigl(\tfrac{\kappa-\beta_m}{\kappa+\beta_m}\bigr) +\tfrac{2\beta_m}{\kappa}\Bigr] +\int\tfrac{t^2}{\kappa(t^2+\kappa^2)}\,d\mu(t), \end{align} for all $\kappa\geq 1 + \|q\|_{H^{-1}}^2$. The claim \eqref{E:min alpha} follows since $d\mu\geq 0$. \end{proof} \begin{corollary}\label{C:min alpha} Fix $N\geq 0$ and distinct positive parameters $\beta_1, \ldots , \beta_N$. Assume that $q\in H^{-1}$ satisfies $$ a_{\mathrm{ren}}(i\beta_m;q)=0 \quad\text{for all}\quad 1\leq m\leq N. $$ Then for $\kappa\geq 1 + \|q\|_{H^{-1}}^2$ we have \begin{align}\label{E:C:min alpha} \alpha(\kappa;q) \geq -\sum_{m=1}^N \Bigl[ \ln\bigl(\tfrac{\kappa-\beta_m}{\kappa+\beta_m}\bigr) +\tfrac{2\beta_m}{\kappa} \Bigr]. \end{align} Moreover, if equality holds in \eqref{E:C:min alpha} for one such $\kappa$ then it holds for all such $\kappa$. \end{corollary} \begin{proof} Let $\{f_n\}_{n\geq 1}$ be a sequence of Schwartz functions that converge to $q$ in $H^{-1}$.
As the renormalized perturbation determinant is continuous on $H^{-1}$, we have $$ \lim_{n\to \infty} a_{\mathrm{ren}}(k;f_n)=a_{\mathrm{ren}}(k;q) \quad\text{uniformly on compact subsets of ${\mathbb{C}}^+$}. $$ Using Hurwitz's theorem and \eqref{a ren}, we deduce that for each $1\leq m\leq N$ and $n$ sufficiently large there exist distinct $\beta_m^{(n)}$ so that \begin{equation}\label{2:03} a(i\beta_m^{(n)};f_n) = 0 \quad \text{and} \quad \lim_{n\to \infty}\beta_m^{(n)}=\beta_m. \end{equation} In view of \eqref{E:min alpha} and the positivity \eqref{G}, we find that \begin{align}\label{2:04} \alpha(\kappa;f_n) \geq -\sum_{m=1}^N \Bigl[ \ln\Bigl(\tfrac{\kappa-\beta_m^{(n)}}{\kappa+\beta_m^{(n)}}\Bigr) +\tfrac{2\beta_m^{(n)}}{\kappa} \Bigr] \xrightarrow[n\to \infty]{} -\sum_{m=1}^N \Bigl[ \ln\Bigl(\tfrac{\kappa-\beta_m}{\kappa+\beta_m}\Bigr) +\tfrac{2\beta_m}{\kappa} \Bigr]. \end{align} The claim \eqref{E:C:min alpha} now follows because $\alpha$ is continuous on $H^{-1}$. Suppose now that equality holds in \eqref{E:C:min alpha} for some single value $\kappa_0\geq 1 + \|q\|_{H^{-1}}^2$. Let us write $d\mu_n$ for the measure representing $a(k;f_n)$ in the sense of \eqref{rep}. It then follows from \eqref{alpha with mu} and \eqref{2:03} that $$ \int\tfrac{t^2}{\kappa_0(t^2+\kappa_0^2)}\,d\mu_n(t) \to 0 \qtq{and thence that} \int\tfrac{t^2}{\kappa(t^2+\kappa^2)}\,d\mu_n(t) \to 0 $$ for every $\kappa >0$. This in turn guarantees that equality holds in \eqref{E:C:min alpha} for every $\kappa\geq 1 + \|q\|_{H^{-1}}^2$.
\end{proof} We are now ready to realize the ultimate goal of this section: \begin{proof}[Proof of Theorem~\ref{T:char}] In view of Corollary~\ref{C:min alpha}, it remains to show that if \eqref{aren 0} holds and \begin{align}\label{alpha sat} \alpha(\kappa;q) = -\sum_{m=1}^N \Bigl[ \ln\bigl(\tfrac{\kappa-\beta_m}{\kappa+\beta_m}\bigr) +\tfrac{2\beta_m}{\kappa} \Bigr] \qtq{for all} \kappa\geq 1 + \|q\|_{H^{-1}}^2, \end{align} then $q=Q_{\vec \beta,\vec c}$ for some choice of $\vec c \in {\mathbb{R}}^N$. The first step in the proof will be to show that all such optimizers $q$ belong to Schwartz class. In the second step, we will prove that \begin{align}\label{a phi goal} a(k; q)= \prod_{m=1}^N \frac{k-i\beta_m}{k+i\beta_m}. \end{align} In view of Proposition~\ref{P:DT}, this implies that $q$ is a multisoliton, thus completing the proof of the theorem. From \eqref{alpha to L2}, \eqref{G}, and \eqref{alpha sat}, we see already that $q\in L^2$. We will get further regularity and decay by studying the Euler--Lagrange equation satisfied by $q$. To begin, we note that since $a_{\mathrm{ren}}(i\beta_m;q)=0$, there exist $\phi_m\in L^2$ such that $$ \bigl(1+ \sqrt{R_0(i\beta_m)}q\sqrt{R_0(i\beta_m)}\bigr)\phi_m=0 \qtq{and} \|\phi_m\|_2=1. $$ Writing $\psi_m:=\sqrt{R_0(i\beta_m)}\phi_m$ we obtain \begin{align}\label{psi eq} \bigl(-\partial^2 + q+\beta_m^2\bigr)\psi_m=0 \qtq{and} \|\psi_m\|_{H^1_{\beta_m}}=1. \end{align} Note also that the eigenvalue $-\beta_m^2$ must be simple. Indeed, if there were two linearly independent eigenvectors, this would yield linearly independent solutions to \eqref{psi eq}, both belonging to $H^1$; this is inconsistent with constancy of the Wronskian. As $q\in L^2$, we see that \eqref{psi eq} implies that $\psi_m\in H^2$ and so $\psi_m^2\in L^1\cap H^2$.
Moreover, a quick computation shows that \begin{align}\label{op to psim2} \bigl(-\partial^3+2\partial q + 2q\partial+4\kappa^2\partial\bigr) \psi_m^2 = 4(\kappa^2-\beta_m^2)(\psi_m^2)' \end{align} in $H^{-1}$ sense. Next, we claim that the functions $\{\psi_m^2\}_{m=1}^N$ are linearly independent. Indeed, assume (towards a contradiction) that there were a \emph{minimal} collection $\Lambda\subseteq \{1, \ldots, N\}$ such that \begin{align}\label{ld} \sum_{m\in \Lambda} c_m \psi_m^2=0 \qquad\text{with}\qquad c_m\neq 0 \qtq{for all} m\in \Lambda. \end{align} Fixing some $n\in \Lambda$ and applying $\bigl(-\partial^3+2\partial q + 2q\partial+4\beta_n^2\partial\bigr)$ to \eqref{ld} and using \eqref{op to psim2} we obtain $$ \sum_{m\in \Lambda,\, m\neq n} 4c_m (\beta_n^2-\beta_m^2)(\psi_m^2)'=0. $$ As $\beta_1, \ldots, \beta_N$ are distinct and $\psi_m^2$ decay at infinity, we may integrate to obtain a linear dependence among the strictly smaller collection $\{\psi_m^2 : m\in\Lambda,\, m\neq n\}$; this contradicts the minimality of the collection $\Lambda$. The functions $\psi_m^2$ represent the gradients of the constraints $a_{\mathrm{ren}}(i\beta_m; q)=0$. Indeed, $$ \tfrac{\delta}{\delta q} a_{\mathrm{ren}}(i\beta_m;q) = \xdet_{\substack{\phi_m^{\perp}}}{\!}_2 \bigl(1+ \sqrt{R_0(i\beta_m)}q\sqrt{R_0(i\beta_m)}\bigr) \, \psi_m^2, $$ where the subscript on $\det_2$ indicates the Hilbert space over which the renormalized determinant is computed. Concretely, in this case this is the Hilbert space of functions orthogonal to $\phi_m$. As the eigenvalues $-\beta_m^2$ are simple, the renormalized determinant over $\phi_m^\perp$ is non-zero. The gradient of $\alpha$ is easily derived from the series \eqref{alpha defn}: $$ \tfrac{\delta}{\delta q}\alpha(\kappa; q)=\tfrac1{2\kappa} - g(\kappa;q), $$ where $g(\kappa;q)$ is the diagonal Green's function. This is discussed in greater detail in \cite{KV}.
As $q\in L^2$, \cite[Proposition~A.2]{KV} shows that $\tfrac1{2\kappa} - g(\kappa;\phi)\in H^2$; we also have the long-known identity $$ \bigl(-\partial^3+2\partial q + 2q \partial+4\kappa^2\partial\bigr) g(\kappa;q)=0 $$ which holds in $H^{-1}$ sense (cf. \cite[Proposition~2.3]{KV}). As the gradients $\psi_m^2$ of the constraints have been shown to be linearly independent, we deduce that the optimizer $q$ satisfies the Euler--Lagrange equation \begin{align}\label{EL} \tfrac1{2\kappa} - g(\kappa;q) = \sum_{m=1}^N\lambda_m\psi_m^2 \end{align} for each $\kappa\geq 1+\|q\|_{H^{-1}}^2$ and some ($\kappa$-dependent) multipliers $\lambda_1, \ldots, \lambda_N\in {\mathbb{R}}$. Consequently, applying $\bigl(-\partial^3+2\partial q + 2q \partial+4\kappa^2\partial\bigr)$ to \eqref{EL} and using \eqref{op to psim2}, we deduce that \begin{align*} \tfrac1\kappa q' = \sum_{m=1}^N 4\lambda_m (\kappa^2-\beta_m^2)(\psi_m^2)' . \end{align*} However, $q\in L^2$ and $\psi_m^2\in H^2$; thus \begin{align}\label{q from psi} \tfrac1\kappa q = \sum_{m=1}^N 4\lambda_m (\kappa^2-\beta_m^2)\psi_m^2 \end{align} and so $q\in H^2$. By alternately applying \eqref{psi eq} and \eqref{q from psi}, we deduce that $q$ is infinitely smooth. From \eqref{q from psi} we see that $q\in L^1$. It then follows from \eqref{psi eq} that each eigenfunction decays exponentially; see \cite[\S3.8]{MR0069338}. Applying \eqref{q from psi} again we deduce that $q$ decays exponentially. Thus $q\in \mathcal S$. It remains to prove \eqref{a phi goal}. Now that we know $q\in\mathcal S$, we may deploy the technology used in the proof of Proposition~\ref{P:min alpha}. First we note that \eqref{E:min alpha} and the positivity \eqref{G} guarantee that $a_\mathrm{ren}$ has no zeros beyond those prescribed in \eqref{aren 0}. 
In this way, the representation \eqref{rep} yields $$ a_\mathrm{ren}(k; q) = \exp\biggl\{ - i \int \frac{d\mu(t)}{t-k} \biggr\} \cdot \prod_{m=1}^N \frac{k-i\beta_m}{k+i\beta_m} $$ for some finite positive measure $d\mu$ on ${\mathbb{R}}$. On comparing \eqref{alpha with mu} and \eqref{alpha sat}, we see that any mass that $d\mu$ has must be concentrated at the origin. Combining this observation with \eqref{a ren}, we deduce that \begin{equation*} a(k;q) = \exp\biggl\{ \frac{i}{k} \int d\mu + \frac{i}{2k} \int q\,dx \biggr\} \prod_{m=1}^N \frac{k-i\beta_m}{k+i\beta_m} . \end{equation*} As the holomorphic function $a(k;q)$ admits a continuous extension to $\partial {\mathbb{C}}^+$, this forces $\int q(x)\, dx = - 2 \int d\mu$ and so \eqref{a phi goal} holds. \end{proof} \section{Molecular decomposition of multisolitons}\label{S:MS} The principal goal of this section is to prove that linear combinations of well-separated multisolitons are close to the manifold of multisolitons. We refer to this as a molecular decomposition, building on the analogy of one-solitons to atoms and of multisolitons to molecules. In fact, we will see that the eigenvalue parameters $\vec \beta^j$ of the molecules in this rarefied gas of multisolitons form a partition of the eigenvalue parameters of the single approximating multisoliton. The interrelation of the position parameters $\vec c^j$ is much more subtle since it must accommodate the correct combination of phase-shifts. \begin{prop}\label{P:mol} Let multisoliton parameters $\vec \beta^j$ and $\vec c^{\,j}$ be given for each $1\leq j\leq J$, with no eigenvalue repeated.
For any $J$-tuple of sequences $x_n^j$ satisfying \begin{equation}\label{mol apart} \lim_{n\to \infty} \bigl(x_n^j- x_n^i\bigr)=\infty \quad\text{for all} \quad 1\leq i< j \leq J, \end{equation} there exists a sequence $\vec c_n$ so that setting $\vec\beta=\coprod \vec\beta^{j}$, we have \begin{align}\label{2:22} Q_{\vec\beta,\vec c_n}(x) - \sum_{j=1}^{J}Q_{\vec\beta^j,\vec c^j}(x-x_n^j) \longrightarrow 0 \end{align} in $L^2({\mathbb{R}})$ sense as $n\to \infty$. \end{prop} The decoupling requirement \eqref{mol apart} could be stated with absolute values without affecting the conclusion of the theorem. However, ordering the translation parameters from the start makes the proof much easier to explain. The scenario analyzed here is something of a reverse of the long-time asymptotics of multisolitons. In that scenario, one starts with a multisoliton $Q_{\vec \beta,\vec c(t) }$, with the components of $\vec c(t)$ satisfying an analogue of \eqref{mol apart} as $t\to \infty$, and the goal is to find positions $x^j(t)$ so that $Q_{\vec \beta,\vec c(t) }$ can be approximated by a linear combination of one-solitons as $t\to \infty$. Despite these differences, we still feel that our approach to treating the error terms could streamline discussions of that subject too. Each of the multisolitons appearing in \eqref{2:22} is defined via the determinant of a matrix and each matrix is potentially of a different size. We need a prudent means of indexing all these matrices. For each $1\leq j\leq J$, let $I^j$ denote (disjoint) index sets of size $\#\vec\beta^j$ (the number of entries in $\vec \beta^j$). We will then use $I=\coprod I^j$ as our indexing set of size $\#\vec\beta$.
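To fix ideas, consider a small illustrative configuration (chosen by us purely for concreteness): take $J=2$ with $\vec\beta^1=(\beta_1,\beta_2)$ and $\vec\beta^2=(\beta_3)$, and choose $I^1=\{1,2\}$ and $I^2=\{3\}$, so that $I=\{1,2,3\}$ indexes $\vec\beta=(\beta_1,\beta_2,\beta_3)$. In the forthcoming formula \eqref{E:c_n for mol}, the logarithmic correction for $\mu\in I^1$ then involves only $\sigma=3$, namely
\begin{equation*}
(c_n)_\mu = x_n^1 + c^1_\mu - \tfrac{1}{\beta_\mu} \log\Bigl[\tfrac{\beta_3-\beta_\mu}{\beta_3+\beta_\mu}\Bigr] \quad\text{for } \mu\in\{1,2\},
\end{equation*}
while for $\mu=3$ there is no correction at all: $(c_n)_3 = x_n^2 + c^2_3$.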
Our first application of these notations is to give a formula for the sequence $\vec c_n$ needed for Proposition~\ref{P:mol}: For $\mu\in I^j$, \begin{equation}\label{E:c_n for mol} (c_n)_\mu = x_n^j + c^j_\mu -\tfrac{1}{\beta_\mu} \sum_\sigma \log \Bigl[ \tfrac{\beta_{\sigma}-\beta_\mu}{\beta_{\sigma} + \beta_\mu} \Bigr], \end{equation} where the sum extends over all $\sigma\in I^\ell$ for all $\ell>j$. We also need to construct two families of matrices: For fixed $1\leq j\leq J$ we define a matrix $B^{(j)}(x;\vec\beta,\vec c)$ indexed over $I\times I$ by \begin{equation}\label{E:B table} \mbox{\begin{tabular}{|c|c|c|c|} \hline $B_{\mu\nu}^{(j)}(x;\vec\beta,\vec c) \vrule depth 1.5ex height 2.7ex width 0mm$ & $\nu\in I^\ell,\ \ell<j$ & $\nu\in I^j$ & $\nu\in I^\ell,\ \ell>j$ \\ \hline $\mu\in I^\ell,\ \ell<j$\vrule depth 1.5ex height 2.7ex width 0mm & $\delta_{\mu\nu}$ & $0$ & $0$ \\ \hline $\mu\in I^j$\vrule depth 1.5ex height 2.7ex width 0mm & $0$ & $A_{\mu\nu}(x)$ & $\tfrac{1}{\beta_\mu+\beta_\nu} e^{-\beta_\mu(x-c_\mu)}$ \\ \hline $\mu\in I^\ell,\ \ell>j$\vrule depth 1.5ex height 2.7ex width 0mm & $0$ & $\tfrac{1}{\beta_\mu+\beta_\nu} e^{-\beta_\nu(x-c_\nu)}$ & $\tfrac{1}{\beta_\mu+\beta_\nu}$ \\ \hline \end{tabular} }\end{equation} where $A_{\mu\nu}(x)$ is as in \eqref{matrix A}.
Similarly, we define \begin{equation}\label{E:E table} \mbox{\begin{tabular}{|c|c|c|c|}\hline $E_{\mu\nu}^{(j)}(x;\vec\beta,\vec c) \vrule depth 1.5ex height 2.7ex width 0mm$ & $\nu\in I^\ell,\ \ell<j$ & $\nu\in I^j$ & $\nu\in I^\ell,\ \ell>j$ \\ \hline $\mu\in I^\ell,\ \ell<j$\vrule depth 1.5ex height 2.7ex width 0mm & $A_{\mu\nu}(x)-\delta_{\mu\nu}$ & $A_{\mu\nu}(x)$ & $\tfrac{1}{\beta_\mu+\beta_\nu} e^{-\beta_\mu(x-c_\mu)}$ \\ \hline $\mu\in I^j$\vrule depth 1.5ex height 2.7ex width 0mm & $A_{\mu\nu}(x)$ & $0$ & $0$ \\ \hline $\mu\in I^\ell,\ \ell>j$\vrule depth 1.5ex height 2.7ex width 0mm & $\tfrac{1}{\beta_\mu+\beta_\nu} e^{-\beta_\nu(x-c_\nu)}$ & $0$ & $\delta_{\mu\nu} e^{\beta_\mu(x-c_\mu)+\beta_\nu(x-c_\nu)}$ \\ \hline \end{tabular} }\end{equation} As we shall see, $B^{(j)}$ is the dominant term for those $x$ near $x_n^j$, while $E^{(j)}$ functions as an error term. \begin{lemma}\label{L:B&E} Fix $L>0$. Then under the hypotheses of Proposition~\ref{P:mol}, \begin{align}\label{B&E to 0} \limsup_{n\to\infty} \, \bigl\| B^{(j)}(x_n^j+z;\vec\beta,\vec c_n) \bigr\| <\infty \qtq{and} \limsup_{n\to\infty}\, \bigl\| E^{(j)}(x_n^j+z;\vec\beta,\vec c_n) \bigr\| =0 \end{align} uniformly for $z\in[-L,L]+i[-1,1]$. Moreover, for all $x\in{\mathbb{R}}$, \begin{align}\label{Q from B E} Q_{\vec \beta,\vec c_n}(x) = - 2\tfrac{d^2}{dx^2} \ln \det\bigl[B^{(j)}(x;\vec\beta,\vec c_n) + E^{(j)}(x;\vec\beta,\vec c_n) \bigr]. \end{align} \end{lemma} \begin{proof} As we are dealing with finite matrices, our claims about the operator norm can be verified by considering each matrix entry individually. From this perspective, the claim \eqref{B&E to 0} follows simply from the behavior of $x_n^j - (c_n)_\mu$: this is bounded when $\mu\in I^j$; it diverges to $+\infty$ when $\mu\in I^\ell$ with $\ell<j$; and it diverges to $-\infty$ when $\mu\in I^\ell$ with $\ell>j$.
The claim \eqref{Q from B E} follows readily from the identity $$ \det\bigl[ A_{\vec\beta,\vec c_n}(x) \bigr] = \det\bigl[B^{(j)}(x;\vec\beta,\vec c_n) + E^{(j)}(x;\vec\beta,\vec c_n) \bigr] \times \prod e^{-2\beta_\mu(x-c_\mu)}, $$ where the product is taken over those $\mu\in I^\ell$ for each $\ell>j$. This product appears because common factors have been extracted from these rows and columns. \end{proof} As a stepping-stone to our analysis of $B^{(j)}$ in Lemma~\ref{L:B j}, we first make preparations for evaluating its determinant. In the case $D_{\mu\nu}\equiv 0$, our next lemma relates two Cauchy determinants (as they are known); indeed, it provides the basic inductive step for the complete evaluation of such determinants. \begin{lemma}[A Cauchy-like Determinant]\label{L:MCD} Given an $N\times N$ matrix $D$, real numbers $a_1,\ldots,a_N$, and positive $\beta_1,\ldots,\beta_{N+1}$, we define $$ \tilde a_\mu = \tfrac{\beta_{N+1}-\beta_\mu}{\beta_{N+1} + \beta_\mu} \, a_\mu . $$ Then we have the following identity between two determinants: \begin{align}\label{inductive Cauchy} \begin{vmatrix} D_{\mu\nu} + \tfrac{a_\mu a_\nu}{\beta_\mu+\beta_\nu} & \tfrac{a_\mu}{\beta_\mu+\beta_{N+1}} \\[2mm] \tfrac{a_\nu}{\beta_{N+1}+\beta_\nu} & \tfrac{1}{\beta_{N+1}+\beta_{N+1}} \end{vmatrix} = \tfrac{1}{2\beta_{N+1}} \begin{vmatrix} D_{\mu\nu} + \tfrac{\tilde a_\mu\tilde a_\nu}{\beta_\mu+\beta_\nu} \end{vmatrix} . \end{align} On the right, we have an $N\times N$ determinant. The one on the left is $(N+1)\times (N+1)$, with the extra row and column as indicated. \end{lemma} \begin{proof} This is a simple matter of applying row and column operations: First we subtract $a_\mu$ times the bottom row of LHS\eqref{inductive Cauchy} from the $\mu^\text{th}$ row and use the identity $$ \tfrac{1}{\beta_\mu+\beta_\nu} - \tfrac{1}{\beta_{N+1}+\beta_\nu} = \tfrac{\beta_{N+1}-\beta_\mu}{(\beta_{N+1}+\beta_\nu)(\beta_\mu+\beta_\nu)}.
$$ Extracting the common factor from the final column, this yields $$ \text{LHS\eqref{inductive Cauchy}} = \tfrac{1}{2\beta_{N+1}} \begin{vmatrix} D_{\mu\nu} + \tfrac{\hat a_\mu\check a_\nu}{\beta_\mu+\beta_\nu} & \tfrac{\hat a_\mu}{\beta_\mu+\beta_{N+1}} \\[2mm] \check a_\nu & 1 \end{vmatrix} $$ with $\hat a_\mu = (\beta_{N+1}-\beta_\mu) a_\mu$ and $\check a_\nu = \tfrac{a_\nu}{\beta_{N+1} + \beta_\nu}$. Next we subtract $\check a_\nu$ times the last column from the $\nu^\text{th}$ and apply the identity $$ \tfrac{1}{\beta_\mu+\beta_\nu} - \tfrac{1}{\beta_\mu + \beta_{N+1}} = \tfrac{\beta_{N+1}-\beta_\nu}{(\beta_\mu+\beta_\nu)(\beta_\mu+\beta_{N+1})} . $$ The result then follows since the bottom row is now populated by zeros, excepting a one in the final position. \end{proof} \begin{lemma}\label{L:B j} Fix $L>0$. Under the hypotheses of Proposition~\ref{P:mol}, there exists $\delta>0$ so that \begin{align} \bigl| \, \det\bigl[B^{(j)}(x_n^j+z;\vec\beta,\vec c_n) \bigr] \bigr| &\gtrsim 1 \label{E:B j1} \end{align} uniformly for $n\in {\mathbb{N}}$ and $z\in[-L,L]+i[-\delta,\delta]$. Moreover, for every $x\in{\mathbb{R}}$, \begin{align} - 2\tfrac{d^2}{dx^2} \ln \det\bigl[B^{(j)}(x;\vec\beta,\vec c_n) \bigr] &= Q_{\vec\beta^j,\vec c^j}(x-x^j_n). \label{E:B j2} \end{align} \end{lemma} \begin{proof} Applying Lemma~\ref{L:MCD} iteratively, we find that \begin{align}\label{E:dslfkjs} \det\bigl[B^{(j)}(x;\vec\beta,\vec c_n) \bigr] = \Bigl(\prod_\sigma \tfrac{1}{2\beta_\sigma}\Bigr) \cdot \det\Bigl[ \delta_{\mu\nu} + \tfrac{\tilde a_\mu \tilde a_\nu}{\beta_\mu+\beta_\nu} \Bigr]_{I^j\times I^j} \end{align} where the parameters $\tilde a_\mu$ (which also depend on $n$) are given by $$ \tilde a_\mu = \exp\{-\beta_\mu(x-(c_n)_\mu)\} \cdot \prod_\sigma \tfrac{\beta_{\sigma}-\beta_\mu}{\beta_{\sigma} + \beta_\mu}, $$ and both products extend over all $\sigma\in I^\ell$ for all $\ell>j$.
Referring back to Definition~\ref{D:multisoliton} and \eqref{E:c_n for mol}, we see that we have succeeded in proving \eqref{E:B j2}. When $z$ is real, the inequality \eqref{E:B j1} follows from \eqref{E:dslfkjs} because the matrix $C$ with entries $ C_{\mu\nu}=\tfrac{\tilde a_\mu \tilde a_\nu}{\beta_\mu+\beta_\nu} $ is bounded and (strictly) positive definite (as is easily verified from \eqref{inductive Cauchy} and Sylvester's criterion). To extend the bound to complex $z$, it suffices to show that we can choose $\delta>0$ so that every eigenvalue of the matrix $C$ has positive real part. Writing $z=x+iy$, we see that it suffices to prove that $$ \sum_{\mu,\nu} \overline{\psi_\mu} \bigl[\cos(\beta_\mu y)\cos(\beta_\nu y) - \sin(\beta_\mu y) \sin(\beta_\nu y) \bigr] \tfrac{\tilde a_\mu(x) \tilde a_\nu(x)}{\beta_\mu+\beta_\nu} \psi_\nu \geq 0 $$ for every complex vector $\psi_\mu$. Thus, we see that there is such a choice of $\delta>0$ because of the boundedness and positive-definiteness of $C$ for $z$ real. \end{proof} \begin{proof}[Proof of Proposition~\ref{P:mol}] Our first goal is to prove the following variant of \eqref{2:22}: \begin{align}\label{2:22'} Q_{\vec\beta,\vec c_n}(x+x_n^j) \longrightarrow Q_{\vec\beta^j,\vec c^j}(x) \quad\text{as $n\to\infty$,} \end{align} uniformly for $x\in[-L,L]$ for each fixed $j$ and any fixed $L>0$. Combining Cramer's rule with the Hadamard inequality, we find \begin{align*} \bigl\| B^{(j)}(x;\vec\beta,\vec c_n)^{-1} E^{(j)}(x;\vec\beta,\vec c_n) \bigr\| &\lesssim \frac{\bigl\| B^{(j)}(x;\vec\beta,\vec c_n)\bigr\|^{\#\vec\beta-1} \bigl\| E^{(j)}(x;\vec\beta,\vec c_n) \bigr\|} {\bigl|\det B^{(j)}(x;\vec\beta,\vec c_n)\bigr|} \end{align*} where the implicit constant depends only on $\#\vec \beta$.
Thus, it follows from \eqref{E:B j1} and Lemma~\ref{L:B&E} that for each $L>0$ there is a $\delta>0$ so that $$ \ln \det\bigl[B^{(j)}(x_n^j+z;\vec\beta,\vec c_n) + E^{(j)}(x_n^j+z;\vec\beta,\vec c_n) \bigr] = \ln \det\bigl[B^{(j)}(x_n^j+z;\vec\beta,\vec c_n) \bigr] + o(1) $$ as $n\to\infty$ uniformly for $z\in [-L,L]+i[-\delta,\delta] $. Because we have convergence in a \emph{complex} neighbourhood of each $x$, this convergence extends to all derivatives. Thus \eqref{2:22'} follows from \eqref{Q from B E} and \eqref{E:B j2}. From \eqref{2:22'} we may then infer that as $n\to\infty$, \begin{align}\label{on E_n} \int_{E_n} \Bigl| Q_{\vec\beta,\vec c_n}(x) - \sum_{j=1}^{J}Q_{\vec\beta^j,\vec c^j}(x-x_n^j)\Bigr|^2 \,dx \longrightarrow 0, \end{align} where $E_n = \cup_j [x_n^j-L,x_n^j+L]$. On the other hand, from \eqref{conserv of multi} we get $$ \int_{\mathbb{R}} \bigl| Q_{\vec\beta,\vec c_n}(x)\bigr|^2\,dx = \sum_{\mu\in I} \tfrac{16}{3}\beta_\mu^3 = \sum_{j=1}^{J} \int \bigl|Q_{\vec\beta^j,\vec c^j}(x-x_n^j)\bigr|^2 \,dx. $$ Using this and \eqref{on E_n} we find that the integral over the complementary region $E_n^c$ makes an asymptotically negligible contribution for $L$ large. Thus \eqref{2:22} follows. \end{proof} \section{Concentration compactness}\label{S:CC} The goal of this section is to develop a concentration-compactness principle for the functional $\alpha$ acting on bounded equicontinuous sequences in $H^{-1}$. \begin{prop}[Concentration compactness principle]\label{P:CC} Assume that $\{u_n\}_{n \geq 1}$ is a bounded and equicontinuous sequence in $H^{-1}$.
Passing to a subsequence there exist $J^*\in\{0,1,2,\ldots\}\cup \{\infty\}$, non-zero profiles $\phi^j\in H^{-1}$, and positions $x_n^j\in {\mathbb{R}}$ such that for any finite $0\leq J\leq J^*$ we have the decomposition $$ u_n(x) = \sum_{j=1}^J \phi^j(x-x_n^j) + r_n^J(x) $$ with the following properties: for each fixed $\kappa\geq 1+ \sup_n\|u_n\|_{H^{-1}}^2$, \begin{align} &\lim_{J\to J^*}\lim_{n\to \infty} \tr \bigl\{ \bigl(\sqrt{R_0(i\kappa)}r_n^J\sqrt{R_0(i\kappa)}\bigr)^4\bigr\}=0\label{r to 0},\\ &\sup_J\lim_{n\to \infty} \Bigl[\alpha(\kappa; u_n) - \sum_{j=1}^J\alpha(\kappa;\phi^j) - \alpha(\kappa;r_n^J) \Bigr]= 0,\label{alpha decoupling}\\ &\lim_{n\to \infty} |x_n^j- x_n^\ell|=\infty \quad\text{for all} \quad j\neq \ell. \label{decoup par} \end{align} Moreover, \begin{align}\label{E:alpha r} \lim_{J\to J^*}\ \lim_{n\to \infty}\ \biggl| \alpha(\kappa; r_n^J) - \frac1{2\kappa}\int \frac{|\widehat{r_n^J}(\xi)|^2}{\xi^2+4\kappa^2}\, d\xi\biggr|=0. \end{align} \end{prop} Equation \eqref{r to 0} shows that the remainder is small in the sense that a certain operator is small in ${\mathfrak{I}}_4$, the trace ideal modeled on $\ell^4$. In fact, it is negligible in any ${\mathfrak{I}}_p$ with $p>2$. This follows from \eqref{r to 0} by means of the basic inequality \begin{align}\label{Ip Holder} \| A \|_{{\mathfrak{I}}_p} \leq \| A \|_{{\mathfrak{I}}_{p_1}}^\theta\| A \|_{{\mathfrak{I}}_{p_2}}^{1-\theta} \quad\text{when $1\leq p_1<p<p_2\leq\infty$ and } \theta=\tfrac{p_1}{p}\cdot\tfrac{p_2-p}{p_2-p_1}. \end{align} Nonetheless, as \eqref{E:alpha r} shows, the remainder term may make a significant contribution to $\alpha$. We shall ultimately see that optimizing sequences must have negligible remainder term because it contributes \emph{too much} to $\alpha$. The nucleus of the proof of Proposition~\ref{P:CC} is the inverse inequality Lemma~\ref{L:inverse}. 
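The interpolation inequality \eqref{Ip Holder} is simply the statement that trace-ideal norms are the $\ell^p$ norms of the singular values, combined with log-convexity of $\ell^p$ norms. A minimal numerical check, using an arbitrary list of singular values:

```python
import numpy as np

def schatten(sv, p):
    # A trace-ideal (Schatten) norm is the l^p norm of the singular values.
    return np.sum(np.abs(sv) ** p) ** (1.0 / p)

sv = np.array([3.0, 1.7, 0.9, 0.25, 0.04])   # arbitrary singular values
p1, p, p2 = 2.0, 4.0, 6.0
theta = (p1 / p) * (p2 - p) / (p2 - p1)

# theta is exactly the interpolation weight: 1/p = theta/p1 + (1-theta)/p2.
assert abs(1 / p - (theta / p1 + (1 - theta) / p2)) < 1e-12

lhs = schatten(sv, p)
rhs = schatten(sv, p1) ** theta * schatten(sv, p2) ** (1 - theta)
assert lhs <= rhs + 1e-12
```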
It shows that non-trivial ${\mathfrak{I}}_4$ norm may be attributed to the existence of a non-trivial profile common to a subsequence of the original sequence $u_n$. Before stating this lemma, let us quickly discuss our notation for basic Littlewood-Paley theory; this will be needed in the proof. For $N \in 2^{\mathbb{Z}}$, we write $P_N$ for the Fourier multiplier operators defined via a partition of unity adapted to the dyadic annuli $\{ \xi\in{\mathbb{R}} : \frac12N < |\xi| \leq 2N\}$. We then define projections onto high and low frequencies via $$ P_{\leq N} f = \sum_{2^{\mathbb{Z}}\ni M\leq N} P_M f \qtq{and} P_{\geq N} f = \sum_{2^{\mathbb{Z}}\ni M\geq N} P_M f. $$ One of the key estimates we need is the Bernstein inequality, $$ \| P_{\leq N} f \|_{L^q} \lesssim N^{\frac1p-\frac1q} \| f \|_{L^p} \qtq{whenever} 1\leq p \leq q \leq\infty. $$ \begin{lemma}[Inverse inequality]\label{L:inverse} Assume that the sequence $\{u_n\}_{n \geq 1}$ is equicontinuous in $H^{-1}$ and satisfies \begin{align*} \varepsilon< \liminf_{n\to \infty}\tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}u_n\sqrt{R_0(i\kappa)}\bigr)^4\bigr\}\quad \text{and} \quad \limsup_{n\to \infty}\|u_n\|_{H^{-1}}<A \end{align*} for some positive $\varepsilon$, finite $A$, and some $\kappa\geq 1+A^2$. Then, passing to a subsequence, there exist a non-zero profile $\phi\in H^{-1}$ and positions $x_n\in {\mathbb{R}}$ such that \begin{align} & \,\, \, u_n(x+x_n)\rightharpoonup \phi(x) \quad\text{weakly in $H^{-1}$},\notag\\ &\lim_{n\to \infty} \Bigl[\alpha(\kappa; u_n) -\alpha(\kappa;u_n(\cdot+x_n)-\phi)\Bigr] = \alpha(\kappa;\phi) .\label{decoup} \end{align} \end{lemma} \begin{proof} Passing to a subsequence, we may assume that for all $n$ we have \begin{align*} \tfrac12\varepsilon<\tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}u_n\sqrt{R_0(i\kappa)}\bigr)^4\bigr\} \quad \text{and} \quad \|u_n\|_{H^{-1}}^2<2A^2.
\end{align*} For $N\in 2^{\mathbb{N}}$, we use \eqref{R I2 kappa} to estimate \begin{align}\label{hi} \tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}[P_{\geq N}u_n]\sqrt{R_0(i\kappa)}\bigr)^4\bigr\} &\lesssim \bigl\| \sqrt{R_0(i\kappa)}[P_{\geq N}u_n]\sqrt{R_0(i\kappa)}\bigr\|_{{\mathfrak{I}}_2}^4\notag\\ &\lesssim \kappa^{-2} \|P_{\geq N}u_n\|_{H^{-1}_\kappa}^4\lesssim \|P_{\geq N}u_n\|_{H^{-1}}^4<\tfrac18\varepsilon, \end{align} provided $N$ is sufficiently large depending on $\varepsilon$, in view of the equicontinuity of $u_n$. On the other hand, for dyadic $N\leq 1$ we may use Bernstein to estimate \begin{align}\label{lo} \tr\bigl\{& \bigl(\sqrt{R_0(i\kappa)}[P_{\leq N}u_n]\sqrt{R_0(i\kappa)}\bigr)^4\bigr\}\notag\\ &\lesssim \bigl\| \sqrt{R_0(i\kappa)}[P_{\leq N}u_n]\sqrt{R_0(i\kappa)}\bigr\|_{{\mathfrak{I}}_2}^2 \bigl\| \sqrt{R_0(i\kappa)}[P_{\leq N}u_n]\sqrt{R_0(i\kappa)} \bigr\|_\textit{op}^2\notag\\ &\lesssim \kappa^{-1} \|u_n\|_{H^{-1}_\kappa}^2 \kappa^{-4}\|P_{\leq N} u_n\|_{\infty}^2\lesssim \kappa^{-3} N \|u_n\|_{H^{-1}_\kappa}^4\lesssim N A^4<\tfrac18\varepsilon, \end{align} provided $N$ is sufficiently small depending on $\varepsilon$ and $A$. Therefore, passing to a further subsequence, we deduce that there exists a dyadic $N$ such that $$ \tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}[P_Nu_n]\sqrt{R_0(i\kappa)}\bigr)^4\bigr\}\geq c(\varepsilon,A) \qquad \text{for all $n$}, $$ where $c(\varepsilon, A)$ is a positive continuous function on $[0, \infty)\times[0,\infty)$. As \begin{align*} \tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}[P_Nu_n]\sqrt{R_0(i\kappa)}\bigr)^4\bigr\}\lesssim \kappa^{-1} \|u_n\|_{H^{-1}_\kappa}^2 \kappa^{-4} \|P_N u_n\|_{\infty}^2\lesssim A^2 \|P_N u_n\|_{\infty}^2, \end{align*} there exists $x_n\in {\mathbb{R}}$ such that \begin{align}\label{nontrivial} |[P_Nu_n](x_n)|\gtrsim c(\varepsilon,A)A^{-2}. 
\end{align} As the sequence $u_n(x+x_n)$ is bounded in $H^{-1}$, passing to a subsequence we find $\phi\in H^{-1}$ such that \begin{align}\label{weak convg} u_n(x+x_n)\rightharpoonup \phi(x) \quad\text{weakly in $H^{-1}$}. \end{align} In view of \eqref{nontrivial}, we see that $\phi\neq 0$. In fact, it is not difficult to verify that \begin{align}\label{nontrivial rem} \tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}\phi\sqrt{R_0(i\kappa)}\bigr)^4\bigr\}\geq \tilde c(\varepsilon,A), \end{align} where $\tilde c(\varepsilon, A)$ is a positive continuous function on $[0, \infty)\times[0,\infty)$. Indeed, even the operator norm of $\sqrt{R_0(i\kappa)}\phi\sqrt{R_0(i\kappa)}$ satisfies such a lower bound. It remains to prove the asymptotic decoupling \eqref{decoup}. To this end, it suffices to show that for all $\ell\geq 2$ we have \begin{align}\label{trace decoup} \lim_{n\to \infty}\tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}u_n\sqrt{R_0(i\kappa)}\bigr)^\ell\bigr\} &- \tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}\bigl[u_n(\cdot+x_n)-\phi\bigr]\sqrt{R_0(i\kappa)}\bigr)^\ell\bigr\}\notag\\ &=\tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}\phi\sqrt{R_0(i\kappa)}\bigr)^\ell\bigr\}. \end{align} The case $\ell=2$ of \eqref{trace decoup} follows easily from the weak convergence \eqref{weak convg} and the fact that $H^{-1}_\kappa$ is a Hilbert space. Indeed, by \eqref{R I2 kappa}, \begin{align*} \tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}&u_n\sqrt{R_0(i\kappa)}\bigr)^2\bigr\} - \tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}\bigl[u_n(\cdot+x_n)-\phi\bigr]\sqrt{R_0(i\kappa)}\bigr)^2\bigr\}\\ &= \kappa^{-1} \|u_n\|_{H^{-1}_\kappa}^2 - \kappa^{-1} \|u_n(\cdot+x_n)-\phi\|_{H^{-1}_\kappa}^2\\ &= \kappa^{-1}\|\phi\|_{H^{-1}_\kappa}^2 + 8\Re \langle R_0(2i\kappa)\phi, u_n(\cdot+x_n)-\phi\rangle_{L^2}\\ &=\tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}\phi\sqrt{R_0(i\kappa)}\bigr)^2\bigr\} + o(1) \quad\text{as}\quad n\to \infty. \end{align*} We now turn to the case $\ell\geq 3$ in \eqref{trace decoup}. 
First, combining \eqref{Ip Holder} with \eqref{hi} and \eqref{lo}, we see that we may discard very high and very low frequencies from further consideration. Thus, it suffices to prove \eqref{trace decoup} under the assumption that $u_n$ and $\phi$ are replaced by $P_{\text{med}}u_n$ and $P_{\text{med}}\phi$, respectively. Passing to a further subsequence, if necessary, in this case we have \begin{align}\label{to 0} P_{\text{med}}\bigl[ u_n(\cdot+x_n)-\phi\bigr] \to 0 \quad\text{uniformly on compact sets}. \end{align} To continue, we write \begin{align*} \tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}&[P_{\text{med}}u_n]\sqrt{R_0(i\kappa)}\bigr)^\ell\bigr\} - \tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}[P_{\text{med}}\phi]\sqrt{R_0(i\kappa)}\bigr)^\ell\bigr\}\\ &\qquad \qquad \qquad-\tr\bigl\{ \bigl(\sqrt{R_0(i\kappa)}\bigl[P_{\text{med}}\bigl[u_n(\cdot+x_n)-\phi\bigr]\bigr]\sqrt{R_0(i\kappa)}\bigr)^\ell\bigr\}\\ &=\sum \tr\bigl\{ R_0(i\kappa)F_1R_0(i\kappa)F_2\cdots R_0(i\kappa)F_\ell\bigr\}, \end{align*} where the sum is over all choices of $F_1, \ldots, F_\ell\in \{P_{\text{med}}\bigl[u_n(\cdot+x_n)-\phi\bigr], P_{\text{med}}\phi\}$ that are not all identical. 
We estimate \begin{align*} &\bigl|\tr\bigl\{ R_0(i\kappa)F_1R_0(i\kappa)F_2\cdots R_0(i\kappa)F_\ell\bigr\}\bigr|\\ &\lesssim \bigl\|P_{\text{med}}\bigl[u_n(\cdot+x_n)-\phi\bigr]R_0(i\kappa)P_{\text{med}}\phi\bigr\|_{{\mathfrak{I}}_2} \|\sqrt{R_0(i\kappa)}\|_{\textit{op}}^2\\ &\quad\times \Bigl[ \bigl\| \sqrt{R_0(i\kappa)}[P_{\text{med}}u_n]\sqrt{R_0(i\kappa)}\bigr\|_{{\mathfrak{I}}_2} + \bigl\| \sqrt{R_0(i\kappa)}[P_{\text{med}}\phi]\sqrt{R_0(i\kappa)}\bigr\|_{{\mathfrak{I}}_2} \Bigr]^{\ell-2}\\ &\lesssim \kappa^{-2-\frac{\ell-2}2}\bigl[ \|u_n\|_{H^{-1}_\kappa} +\|\phi\|_{H^{-1}_\kappa} \bigr]^{\ell-2} \\ &\quad\times\Bigl[\kappa^{-1}\bigl\langle R_0(2i\kappa)(P_{\text{med}}\phi)^2, (P_{\text{med}}[u_n(\cdot+x_n)-\phi])^2\bigr\rangle_{L^2}\Bigr]^{\frac12}, \end{align*} which converges to zero as $n\to \infty$ in view of \eqref{to 0}. \end{proof} We are now ready to complete the \begin{proof}[Proof of Proposition~\ref{P:CC}] Fix $\kappa_0= 1+\sup_n\|u_n\|_{H^{-1}}^2$. We will apply Lemma~\ref{L:inverse} at spectral parameter $\kappa_0$ inductively, extracting one profile at a time. To start, we set $r_n^0:=u_n$. Now suppose we have a decomposition up to level $J\geq 0$ satisfying \eqref{alpha decoupling}. Passing to a subsequence if necessary, we set \begin{align*} A_J:=\lim_{n\to\infty} \|r_n^J\|_{\dot H^{-1}} \qtq{and} \varepsilon_J:=\lim_{n\to \infty} \tr\bigl\{ \bigl(\sqrt{R_0(i\kappa_0)}r_n^J\sqrt{R_0(i\kappa_0)}\bigr)^4\bigr\}. \end{align*} If $\varepsilon_J=0$, we stop and set $J^*=J$. If not, we apply Lemma~\ref{L:inverse} at spectral parameter $\kappa_0$ to $r_n^J$. Passing to a subsequence in $n$, this yields a non-zero profile $\phi^{J+1}\in\dot H^{-1}$ and positions $x_n^{J+1}\in{\mathbb{R}}$ such that \begin{align}\label{weak lim} \phi^{J+1}(x)=\wlim_{n\to\infty}r_n^J\bigl(x+x_n^{J+1}\bigr). \end{align} To continue, we define $r_n^{J+1}(x):=r_n^J(x)-\phi^{J+1} \bigl(x-x_n^{J+1}\bigr)$. 
From Lemma~\ref{L:inverse}, \begin{align*} \lim_{n\to\infty}\Bigl[\alpha(\kappa_0;r_n^J)-\alpha(\kappa_0;r_n^{J+1}) -\alpha(\kappa_0;\phi^{J+1})\Bigr]=0, \end{align*} which combined with the inductive hypothesis gives \eqref{alpha decoupling} at the level $J+1$ and spectral parameter $\kappa_0$. Moreover, from \eqref{trace decoup} we get \begin{align*} \lim_{n\to \infty}\tr\bigl\{ \bigl(\sqrt{R_0(i\kappa_0)}r_n^J\sqrt{R_0(i\kappa_0)}\bigr)^4\bigr\} &- \tr\bigl\{ \bigl(\sqrt{R_0(i\kappa_0)}r_n^{J+1}\sqrt{R_0(i\kappa_0)}\bigr)^4\bigr\}\\ &=\tr\bigl\{ \bigl(\sqrt{R_0(i\kappa_0)}\phi^{J+1}\sqrt{R_0(i\kappa_0)}\bigr)^4\bigr\}, \end{align*} which combined with \eqref{nontrivial rem} yields \begin{align}\label{eps rec} \varepsilon_{J+1}\leq\varepsilon_J- c_{J+1}(\varepsilon_J, A_J) \end{align} for some positive function $c_{J+1}$ which is continuous on $[0,\infty)\times[0,\infty)$. If $\varepsilon_{J+1}=0$, we stop and set $J^*=J+1$; in this case, \eqref{r to 0} at spectral parameter $\kappa_0$ is automatic. If $\varepsilon_{J+1}>0$ we continue the induction. If the algorithm does not terminate in finitely many steps, we set $J^*=\infty$; in this case, \eqref{eps rec} guarantees that $\varepsilon_J\to 0$ as $J\to \infty$ and so \eqref{r to 0} at spectral parameter $\kappa_0$ follows. Next we confirm that \eqref{r to 0} and \eqref{alpha decoupling} hold at \emph{all} spectral parameters $\kappa\geq \kappa_0$. The asymptotic decoupling \eqref{alpha decoupling} carries over because our argument relies solely on the weak convergence \eqref{weak lim}, as evinced by the proof of \eqref{trace decoup}. The claim \eqref{r to 0} at spectral parameter $\kappa$ follows from that at spectral parameter $\kappa_0$ since \begin{align*} \bigl\|R_0(i\kappa)^{\frac12}R_0(i\kappa_0)^{-\frac12}\bigr\|_{\textit{op}}\leq 1. \end{align*} Next we verify the asymptotic orthogonality condition \eqref{decoup par}. We argue by contradiction. 
Assume \eqref{decoup par} fails to be true for some pair $(j,\ell)$. Without loss of generality, we may assume that this is the first pair for which \eqref{decoup par} fails, that is, $j<\ell$ and \eqref{decoup par} holds for all pairs $(j,m)$ with $j<m<\ell$. Passing to a subsequence, we may assume \begin{align}\label{cg} \lim_{n\to \infty}\bigl(x_n^j-x_n^\ell\bigr)= x_0. \end{align} From the inductive relation \begin{align*} r_n^{\ell-1}=r_n^j-\sum_{m=j+1}^{\ell-1}\phi^m(\cdot-x_n^m), \end{align*} we get \begin{align}\label{tp} \phi^\ell(x)&=\wlim_{n\to\infty}r_n^{\ell-1}(x+x_n^\ell)\notag\\ &=\wlim_{n\to\infty}r_n^j(x+x_n^\ell)- \sum_{m=j+1}^{\ell-1} \wlim_{n\to \infty}\phi^m(x+x_n^\ell-x_n^m), \end{align} where the weak limits are in the $H^{-1}$ topology. That the first limit on the right-hand side of \eqref{tp} is zero follows from \eqref{cg} and the observation that by construction, $$ \wlim_{n\to\infty}r_n^j(\cdot+x_n^j)=0. $$ That the remaining limits are zero follows from our assumption that \eqref{decoup par} holds for all pairs $(j,m)$ with $j<m<\ell$. Thus \eqref{tp} yields $\phi^\ell=0$, which contradicts the nontriviality of $\phi^\ell$. This completes the proof of \eqref{decoup par}. 
Lastly, we prove \eqref{E:alpha r}: \begin{align*} \Bigl| \alpha&(\kappa; r_n^J) - \tfrac1{2\kappa}\int \tfrac{\bigl|\widehat{r_n^J}(\xi)\bigr|^2}{\xi^2+4\kappa^2}\, d\xi\Bigr|\\ &\leq \sum_{\ell\geq 3} \tfrac1\ell \bigl\|\sqrt{R_0(i\kappa)}r_n^J\sqrt{R_0(i\kappa)}\bigr\|_{{\mathfrak{I}}_\ell}^\ell\\ &\leq \bigl\|\sqrt{R_0(i\kappa)}r_n^J\sqrt{R_0(i\kappa)}\bigr\|_{{\mathfrak{I}}_4}^2 \bigl\|\sqrt{R_0(i\kappa)}r_n^J\sqrt{R_0(i\kappa)}\bigr\|_{{\mathfrak{I}}_2} \\ &\qquad\qquad + \sum_{\ell\geq 4} \bigl\|\sqrt{R_0(i\kappa)}r_n^J\sqrt{R_0(i\kappa)}\bigr\|_{{\mathfrak{I}}_4}^4 \bigl\|\sqrt{R_0(i\kappa)}r_n^J\sqrt{R_0(i\kappa)}\bigr\|_{\textit{op}}^{\ell-4} \\ &\lesssim \bigl\|\sqrt{R_0(i\kappa)}r_n^J\sqrt{R_0(i\kappa)}\bigr\|_{{\mathfrak{I}}_4}^2 \kappa^{-\frac12}\|r_n^J\|_{H^{-1}_\kappa} \\ &\qquad\qquad + \bigl\|\sqrt{R_0(i\kappa)}r_n^J\sqrt{R_0(i\kappa)}\bigr\|_{{\mathfrak{I}}_4}^4 \sum_{\ell\geq 4}\kappa^{-\frac{\ell-4}2}\|r_n^J\|_{H^{-1}_\kappa}^{\ell-4}\\ &\lesssim_A \bigl\|\sqrt{R_0(i\kappa)}r_n^J\sqrt{R_0(i\kappa)}\bigr\|_{{\mathfrak{I}}_4}^2 + \bigl\|\sqrt{R_0(i\kappa)}r_n^J\sqrt{R_0(i\kappa)}\bigr\|_{{\mathfrak{I}}_4}^4. \end{align*} Thus, using \eqref{r to 0} we deduce \eqref{E:alpha r} \end{proof} \section{Orbital stability}\label{S:OS} This section is dedicated to the proof of Theorem~\ref{T:main}. We argue by contradiction. Fix $N\geq 1$ and distinct positive parameters $\beta_1, \ldots , \beta_N$. Assume, towards a contradiction, that there exist $\varepsilon_0>0$, initial data $q_n(0)\in H^{-1}$, and times $t_n\in {\mathbb{R}}$ such that \begin{align}\label{data} \inf_{\vec c\in {\mathbb{R}}^N} \|q_n(0)- Q_{\vec \beta, \vec c}\|_{H^{-1}}\longrightarrow 0 \quad \text{as}\quad n\to \infty \end{align} but \begin{align}\label{witness} \inf_{\vec c\in {\mathbb{R}}^N} \|q_n(t_n)- Q_{\vec \beta, \vec c}\|_{H^{-1}}\geq \varepsilon_0 \quad \text{for all}\quad n\geq 1. 
\end{align} Recalling that $a_{\mathrm{ren}}$ and $\alpha$ are continuous functions on $H^{-1}$ and conserved by the KdV flow, \eqref{data}, \eqref{a ren Qbc}, and \eqref{alpha Qbc} imply that \begin{align} \lim_{n\to \infty}a_{\mathrm{ren}}(k; q_n(t_n))=\lim_{n\to \infty}a_{\mathrm{ren}}(k; q_n(0)) = \prod_{m=1}^N \frac{k-i\beta_m}{k+i\beta_m} e^{\frac{2i\beta_m}{k}}\label{aren convg} \end{align} uniformly for $k$ in compact subsets of ${\mathbb{C}}^+$ and \begin{align} \lim_{n\to \infty}\alpha(\kappa; q_n(t_n))=\lim_{n\to \infty}\alpha(\kappa; q_n(0)) =-\sum_{m=1}^N\Bigl[\ln\bigl(\tfrac{\kappa-\beta_m}{\kappa+\beta_m} \bigr) +\tfrac{2\beta_m}{\kappa}\Bigr]\label{alpha convg} \end{align} uniformly for $\kappa\geq 1+ \frac{512}3\sum_m \beta_m^3$. With a view to future needs, our bound on $\kappa$ combines the restriction needed for \eqref{37deg'} with \eqref{conserv of multi} and the embedding $L^2\hookrightarrow H^{-1}$. By Hurwitz's theorem and \eqref{aren convg}, we deduce that for each $1\leq m\leq N$ and $n$ sufficiently large, there exist $\beta_m^{(n)}$ such that \begin{align}\label{seq of 0} a_{\mathrm{ren}}\bigl(i\beta_m^{(n)}; q_n(t_n)\bigr)=0 \quad \text{and}\quad \lim_{n\to \infty}\beta_m^{(n)} = \beta_m. \end{align} Using \eqref{37deg'}, \eqref{alpha convg}, and the notation from \eqref{G}, we obtain $$ \|q_n(t_n)\|_{H^{-1}_\kappa}^2\leq 4\kappa \alpha(\kappa; q_n(t_n))\xrightarrow[n\to \infty]{} \sum_{m=1}^N 4\kappa G\bigl( \tfrac{\beta_m}{\kappa}\bigr). $$ As the right-hand side above converges to zero as $\kappa\to \infty$, we deduce that the sequence $u_n:=q_n(t_n)$ is equicontinuous in $H^{-1}$ and so we may apply Proposition~\ref{P:CC}. Along a subsequence we may decompose \begin{align}\label{decomposition} u_n(x) = \sum_{j=1}^J \phi^j(x-x_n^j) + r_n^J(x) \end{align} satisfying the properties \eqref{r to 0} and \eqref{alpha decoupling}.
Our goal is to prove that there are finitely many profiles, each having the shape of a (multi)soliton, and that $r_n^J$ converges to zero in $H^{-1}$. First, we rule out the possibility of vanishing. Assume, towards a contradiction, that there are no profiles in \eqref{decomposition} and so $u_n=r_n^0$. Invoking \eqref{E:alpha r}, we obtain \begin{align*} \tfrac1{2\kappa}\int \tfrac{|\widehat{u_n}(\xi)|^2}{\xi^2+4\kappa^2}\, d\xi \xrightarrow[n\to \infty]{} -\sum_{m=1}^N \Bigl[\ln\Bigl(\tfrac{\kappa-\beta_m}{\kappa+\beta_m}\Bigr) +\tfrac{2\beta_m}{\kappa}\Bigr]=\sum_{m=1}^N G\bigl( \tfrac{\beta_m}{\kappa}\bigr). \end{align*} This immediately leads to a contradiction since the functions $$ \kappa\mapsto \tfrac{\kappa^3}{2\kappa}\int \tfrac{|\widehat{u_n}(\xi)|^2}{\xi^2+4\kappa^2}\, d\xi \quad \text{and}\quad \kappa\mapsto \sum_{m=1}^N \kappa^3 G\bigl( \tfrac{\beta_m}{\kappa}\bigr) $$ have opposite monotonicity. Therefore, we may assume that there exists at least one non-trivial profile. From \eqref{HS det2 bnd} and \eqref{R I2}, we have \begin{align}\label{Barry bdd} \bigl| 1 - a_{\mathrm{ren}}(k;q)\bigr|\lesssim \exp\bigl\{ C(k) \|q\|_{H^{-1}}^2\bigr\} \end{align} with $C(k)$ bounded for $k$ in compact subsets of ${\mathbb{C}}^+$. Consequently, for $J\geq 1$ fixed, the functions $$ f_n : k\mapsto a_{\mathrm{ren}}(k;u_n) \exp\Bigl\{ -\tfrac{i}{2k}\int \tfrac{\bigl|\widehat{r_n^J}(\xi)\bigr|^2}{\xi^2-4k^2}\, d\xi \Bigr\} $$ are holomorphic and locally bounded on ${\mathbb{C}}^+$. Invoking Montel's theorem and passing to a subsequence, we find that this sequence converges as $n\to \infty$ to a holomorphic function $f$. Moreover, by \eqref{seq of 0} we have $f(i\beta_m)=0$ for all $1\leq m\leq N$.
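As an aside, the opposite monotonicity invoked in the vanishing scenario above is easy to confirm numerically. The sketch below assumes the explicit form $G(x)=-\bigl[\ln\bigl(\tfrac{1-x}{1+x}\bigr)+2x\bigr]$ suggested by \eqref{alpha convg}; the frequency $\xi$ and the parameters $\beta_m$ are arbitrary samples, and we test the integrand at a single frequency rather than the full integral.

```python
import numpy as np

# Hypothetical explicit form of G, read off from \eqref{alpha convg}:
#   G(x) = -[ log((1-x)/(1+x)) + 2x ] = (2/3)x^3 + (2/5)x^5 + ... > 0.
def G(x):
    return -(np.log((1 - x) / (1 + x)) + 2 * x)

kappa = np.linspace(5.0, 50.0, 400)
xi = 1.3                                   # a sample frequency
beta = [0.5, 1.0, 2.0]                     # sample soliton parameters

# kappa^3/(2 kappa) times the weight at the single frequency xi; the
# |u_hat|^2 factor is kappa-independent, so it cannot affect monotonicity.
lhs = kappa ** 3 / (2 * kappa) / (xi ** 2 + 4 * kappa ** 2)
rhs = sum(kappa ** 3 * G(b / kappa) for b in beta)

assert np.all(np.diff(lhs) > 0)   # increasing in kappa
assert np.all(np.diff(rhs) < 0)   # decreasing in kappa
```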
To continue, we combine \eqref{alpha decoupling} with \eqref{E:alpha r} and \eqref{alpha convg} to obtain \begin{align}\label{upper bdd on alpha phij} \sum_{j=1}^{J^*} \alpha(\kappa; \phi^j) \leq \sum_{m=1}^NG\bigl( \tfrac{\beta_m}{\kappa}\bigr) \end{align} and so by \eqref{alpha to L2}, \begin{align}\label{phi in L2} \tfrac18 \sum_{j=1}^{J^*} \|\phi^j\|_2^2=\lim_{\kappa\to \infty}\sum_{j=1}^{J^*} \kappa^3\alpha(\kappa; \phi^j) \leq \sum_{m=1}^N \lim_{\kappa\to \infty} \kappa^3G\bigl( \tfrac{\beta_m}{\kappa}\bigr)<\infty. \end{align} Using this and \eqref{Barry bdd}, we see that the function $k\mapsto \prod_{j=1}^{J^*} a_{\mathrm{ren}}(k;\phi^j)$ is well defined and holomorphic on ${\mathbb{C}}^+$. Invoking \eqref{alpha decoupling} and \eqref{E:alpha r} one more time, we conclude that \begin{align}\label{prod aren phij} \prod_{j=1}^{J^*} a_{\mathrm{ren}}(k;\phi^j) = f(k) \quad\text{for all} \quad k\in {\mathbb{C}}^+. \end{align} Let $\vec \beta^j$ denote the collection of all zeros of $\kappa\mapsto a_{\mathrm{ren}}(i\kappa;\phi^j)$. Evidently, $\coprod_j \vec\beta^j$ enumerates the zeros (with multiplicity) of $\kappa\mapsto f(i\kappa)$; in particular, this collection contains each $\beta_m$, $1\leq m\leq N$. Also, by Corollary~\ref{C:min alpha}, \begin{align}\label{lower bdd on alpha phij} \alpha(\kappa; \phi^{j})\geq \sum_{\beta\in \vec\beta^j} G\bigl( \tfrac{\beta}{\kappa}\bigr). \end{align} Contrasting \eqref{upper bdd on alpha phij} and \eqref{lower bdd on alpha phij}, we see that $\coprod_j \vec\beta^j=\vec \beta$ without any repetitions. Moreover, each $\vec\beta^j$ must be non-empty, for otherwise $\alpha(\kappa; \phi^{j})\equiv 0$ and so $\phi^j\equiv 0$, which is impossible; all profiles are non-zero by construction. From this we deduce that $J^*$ is finite and, after reviewing \eqref{alpha decoupling} and \eqref{E:alpha r}, that $r_n^{J^*}\to 0$ in $H^{-1}$.
More importantly, the comparison of \eqref{upper bdd on alpha phij} and \eqref{lower bdd on alpha phij} shows that each $\phi^j$ must be an optimizer for the variational problem of Theorem~\ref{T:char} with parameters $\vec \beta^j$. This theorem then tells us that each $\phi^j$ is indeed a multisoliton. Putting this all together, we deduce that \begin{align}\label{1} u_n(x) = \sum_{j=1}^{J^*}Q_{\vec\beta^j,\vec c^j}(x-x_n^j) +r_n(x) \quad\text{with}\quad \lim_{n\to \infty}\| r_n\|_{H^{-1}}=0. \end{align} In view of Proposition~\ref{P:mol}, this contradicts \eqref{witness} and so completes the proof of Theorem~\ref{T:main}.\qed \section{Higher regularity}\label{S:Hs} The purpose of this section is to demonstrate two methods by which one may deduce orbital stability at higher regularity from Theorem~\ref{T:main}. The two methods are completely independent and so we divide the proof of Corollary~\ref{C:Hs} into two parts: \begin{proof}[Proof of Corollary~\ref{C:Hs} when $s\in\{0,1\}$] We begin with the case $s=0$. Using the conservation of momentum, we find that for any pair of solutions $q(t)$ and $Q(t)$, \begin{align*} \| q(t) - Q(t) \|_{L^2}^2 &= \| q(t) \|_{L^2}^2 - \| Q(t) \|_{L^2}^2 - 2 \langle q(t) - Q(t),\,Q(t)\rangle \\ &\leq \| q(0) \|_{L^2}^2 - \| Q(0) \|_{L^2}^2 + 2 \|q(t) - Q(t)\|_{H^{-1}} \| Q(t) \|_{H^1}. \end{align*} Thus, recalling that the momentum of a multisoliton is determined by $\vec\beta$ alone, we see that \begin{align*} \inf_{\vec c} \| q(t) - Q_{\vec \beta,\vec c} \|_{L^2}^2 &\leq \inf_{\vec c} \| q(0) - Q_{\vec \beta,\vec c}\|_{L^2}\bigl( \|q(0)\|_{L^2}+ \sup_{\vec c}\| Q_{\vec \beta,\vec c} \|_{L^2}\bigr)\\ &\quad + 2 \inf_{\vec c} \|q(t) - Q_{\vec \beta,\vec c}\|_{H^{-1}} \cdot \sup_{\vec c} \| Q_{\vec \beta,\vec c} \|_{H^1} . \end{align*} As observed by Lax \cite{MR0369963}, the first two polynomial conservation laws control the $H^1$ norm. 
Thus, the $s=0$ case of Corollary~\ref{C:Hs} follows from \eqref{conserv of multi} and Theorem~\ref{T:main}. Turning now to the case of $H^1$, we need one preliminary: by Sobolev embedding and interpolation, \begin{align*} \| f\|_{L^3} \lesssim \||\nabla|^{\frac16}f\|_{L^2}\lesssim \|f \|_{H^1}^{\frac7{12}} \|f\|_{H^{-1}}^{\frac5{12}}. \end{align*} Proceeding as we did in the $L^2$ case, but using conservation of energy, \begin{align*} \| q'(t) - Q'(t) \|_{L^2}^2 &= 2 H\bigl(q(t)\bigr) - 2 H\bigl(Q(t)\bigr) - 2 \langle q'(t) - Q'(t),\,Q'(t)\rangle \\ & \qquad - \int \bigl[ q(t,x)^3 - Q(t,x)^3 \bigr]\,dx \\ &\lesssim 2 \bigl| H\bigl(q(0)\bigr) - H\bigl(Q(0)\bigr) \bigr| + \| q(t) - Q(t) \|_{H^{-1}} \| Q(t) \|_{H^3} \\ & \qquad + \bigl( \| q(t) \|_{H^1} + \| Q(t) \|_{H^1} \bigr)^{2+\frac7{12}} \|q(t) - Q(t)\|_{H^{-1}}^{\frac5{12}}. \end{align*} Thus the result now follows as before from Theorem~\ref{T:main} and the bounds in \cite{MR0369963}. \end{proof} The key observation for our second method is the equicontinuity of orbits under \eqref{KdV}. The specific formulation we need is as follows. \begin{lemma}\label{L:equiL} Fix $s\in[-1,1)$ and distinct positive parameters $\beta_1, \ldots, \beta_N$. For every $\varepsilon>0$, there exist $\delta>0$ and $N_0\in 2^{{\mathbb{Z}}}$ so that \begin{align}\label{E:equiL} \inf_{\vec c} \| q(0) - Q_{\vec\beta,\vec c} \|_{H^s} < \delta \ \implies\ \sup_{t\in{\mathbb{R}}} \| P_{\geq N_0}\, q(t) \|_{H^s} < \varepsilon. \end{align} \end{lemma} \begin{proof} If this assertion were to fail, then there would exist a sequence of solutions $q_n$ and a sequence of times $t_n$ so that $$ \limsup_{n\to\infty} \ \inf_{\vec c} \| q_n(0) - Q_{\vec\beta,\vec c} \|_{H^s} =0, $$ but $\{q_n(t_n):n\in \mathbb{N}\}$ is not equicontinuous in $H^s$. As $\vec c$ varies, the multisolitons $Q_{\vec \beta,\vec c}$ remain uniformly bounded in $H^1$. Thus this family is $H^s$-equicontinuous and hence so is the sequence of initial data $q_n(0)$.
When $s=-1$, this directly contradicts the equicontinuity result \cite[Prop.~4.4]{KV}. The analogous equicontinuity result for $-1<s<0$ appears in the proof of \cite[Cor.~5.3]{KV}. Finally, when $0\leq s <1$, we may appeal to \cite[Prop.~3.6]{KVZ}. While this last-quoted result does not explicitly assert equicontinuity, the simplicity with which it may be derived from what is presented there is illustrated (in the $s=0$ case) in \cite[Prop.~A.3(c)]{KV}. \end{proof} It remains to present the \begin{proof}[Proof of Corollary~\ref{C:Hs} when $s\in(-1,1)$] For any $N\in 2^{\mathbb{Z}}$ and any pair of solutions, \begin{align*} \| q(t) - Q(t) \|_{H^s}^2 &\lesssim N^{2+2s} \| q(t) - Q(t) \|_{H^{-1}}^2 + N^{2s-2}\| P_{\geq N} Q(t) \|_{H^1}^2 + \| P_{\geq N} q(t) \|_{H^s}^2. \end{align*} The result now follows from Theorem~\ref{T:main} and Lemma~\ref{L:equiL}. \end{proof}
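As a closing aside, the exponent bookkeeping behind the $L^3$ estimate used in the $H^1$ case can be checked mechanically. This is arithmetic only, with the one-dimensional scaling relations stated as assumptions in the comments.

```python
from fractions import Fraction as F

# Exponent bookkeeping for the L^3 estimate (illustrative arithmetic only).
# Assumed 1d scaling relations:
#   Sobolev:        dot-H^s -> L^q  with  1/q = 1/2 - s,  so L^3 forces s = 1/6;
#   interpolation:  H^s sits between H^1 and H^{-1} with weight theta on H^1,
#                   i.e.  theta*1 + (1-theta)*(-1) = s.
s = F(1, 2) - F(1, 3)
assert s == F(1, 6)

theta = (s + 1) / 2
assert theta == F(7, 12) and 1 - theta == F(5, 12)

# In the cubic term, two factors are estimated in H^1 outright and one is
# interpolated, which accounts for the powers 2 + 7/12 and 5/12 in the text.
assert 2 + theta == F(31, 12)
```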
https://arxiv.org/abs/1611.08850
Every 4-regular 4-uniform hypergraph has a 2-coloring with a free vertex
In this paper, we continue the study of $2$-colorings in hypergraphs. A hypergraph is $2$-colorable if there is a $2$-coloring of the vertices with no monochromatic hyperedge. It is known (see Thomassen [J. Amer. Math. Soc. 5 (1992), 217--229]) that every $4$-uniform $4$-regular hypergraph is $2$-colorable. Our main result in this paper is a strengthening of this result. For this purpose, we define a vertex in a hypergraph $H$ to be a free vertex in $H$ if we can $2$-color $V(H) \setminus \{v\}$ such that every hyperedge in $H$ contains vertices of both colors (where $v$ has no color). We prove that every $4$-uniform $4$-regular hypergraph has a free vertex. This proves a known conjecture. Our proofs use a new result on not-all-equal $3$-SAT which is also proved in this paper and is of interest in its own right.
\section{Introduction} In this paper, we continue the study of $2$-colorings in hypergraphs. We adopt the notation and terminology from~\cite{HeYe13,HeYe15}. A \emph{hypergraph} $H = (V,E)$ is a finite set $V = V(H)$ of elements, called \emph{vertices}, together with a finite multiset $E = E(H)$ of arbitrary subsets of $V$, called \emph{hyperedges} or simply \emph{edges}. A $k$-edge in $H$ is an edge of size~$k$ in $H$. The hypergraph $H$ is $k$-\emph{uniform} if every edge of $H$ is a $k$-edge. The \emph{degree} of a vertex $v$ in $H$, denoted $d_H(v)$ or simply by $d(v)$ if $H$ is clear from the context, is the number of edges of $H$ which contain $v$. The hypergraph $H$ is $k$-\emph{regular} if every vertex has degree~$k$ in~$H$. For $k \ge 2$, let $\cH_k$ denote the class of all $k$-uniform $k$-regular hypergraphs. The class $\cH_k$ has been widely studied, both in the context of solving problems on total domination as well as in its own right, see for example \cite{AlBru88, HeYe13, HeYe15, HeYe16, ThYe07}. A hypergraph $H$ is $2$-\emph{colorable} if there is a $2$-coloring of the vertices with no monochromatic hyperedge. Equivalently, $H$ is $2$-colorable if it is \emph{bipartite}; that is, its vertex set can be partitioned into two sets such that every hyperedge intersects both partite sets. Alon and Bregman~\cite{AlBru88} established the following result. \begin{thm} \label{Alon} {\rm (Alon, Bregman~\cite{AlBru88})} Every hypergraph in $\cH_k$ is $2$-colorable, provided $k \ge 8$. \end{thm} Thomassen~\cite{Th92} showed that the Alon-Bregman result in Theorem~\ref{Alon} holds for all $k \ge 4$. \begin{thm} \label{Extension} {\rm (Thomassen~\cite{Th92})} Every hypergraph in $\cH_k$ is $2$-colorable, provided $k \ge 4$. \end{thm} As remarked by Alon and Bregman~\cite{AlBru88} the result is not true when $k = 3$, as may be seen by considering the Fano plane. 
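The remark above about the Fano plane is easy to verify by brute force; the sketch below hard-codes one standard labelling of its seven lines.

```python
from itertools import product

# A standard labelling of the seven lines of the Fano plane.
fano = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

# It is 3-uniform and 3-regular, i.e. a member of H_3.
assert all(len(e) == 3 for e in fano)
assert all(sum(v in e for e in fano) == 3 for v in range(1, 8))

def is_proper(coloring, edges):
    # A proper 2-coloring: every edge sees both colors.
    return all(len({coloring[v] for v in e}) == 2 for e in edges)

# Exhausting all 2^7 colorings shows the Fano plane is not 2-colorable.
assert not any(is_proper(dict(zip(range(1, 8), bits)), fano)
               for bits in product([0, 1], repeat=7))
```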
Sufficient conditions for the existence of a $2$-coloring in $k$-uniform hypergraphs are given, for example, by Radhakrishnan and Srinivasan~\cite{RaSr00} and Vishwanathan~\cite{Vi03}. For related results, see the papers by Alon and Tarsi~\cite{AlTa92}, Seymour~\cite{Se74} and Thomassen~\cite{Th83}. A set $X$ of vertices in a hypergraph $H$ is a \emph{free set} in $H$ if we can $2$-color $V(H) \setminus X$ such that every edge in $H$ contains vertices of both colors (where the vertices in $X$ are not colored). A vertex is a \emph{free vertex} in $H$ if we can $2$-color $V(H) \setminus \{v\}$ such that every hyperedge in $H$ contains vertices of both colors (where $v$ has no color). In~\cite{HeYe15} it is conjectured that every hypergraph $H \in \cH_k$, with $k \geq 4$, has a free set of size $k-3$. Further, if the conjecture is true, then the bound $k-3$ cannot be improved for any $k \ge 4$, due to the complete $k$-uniform hypergraph of order $k+1$, as such a hypergraph needs two vertices of each color to ensure every edge has vertices of both colors. The conjecture is proved to hold for $k \in \{5,6,7,8\}$. The case when $k=4$ turned out to be more difficult than the cases when $k \in \{5,6,7,8\}$ and was conjectured separately in~\cite{HeYe15}. \begin{conj}{\rm (\cite{HeYe15})} \label{conj1} Every $4$-regular $4$-uniform hypergraph contains a free vertex. \end{conj} \section{Main Result} Our immediate aim is to prove Conjecture~\ref{conj1}. That is, we prove the following result, which is a strengthening of the result of Theorem~\ref{Extension} in the case when $k = 4$. \begin{thm} \label{t:main1} Every $4$-regular $4$-uniform hypergraph contains a free vertex. \end{thm} As remarked earlier, the complete $4$-regular $4$-uniform hypergraph on five vertices has only one free vertex, and so the result of Theorem~\ref{t:main1} cannot be improved in the sense that there exist $4$-regular $4$-uniform hypergraphs with no free set of size~$2$. 
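The extremal example just mentioned can also be checked exhaustively. The sketch below builds the complete $4$-uniform hypergraph on five vertices and confirms both claims: every vertex is free, yet no set of two vertices is free.

```python
from itertools import combinations, product

# The complete 4-uniform hypergraph on five vertices: all 4-subsets.
V = range(5)
H = [set(e) for e in combinations(V, 4)]
assert all(sum(v in e for e in H) == 4 for v in V)   # 4-regular

def is_free(X):
    """Can V \\ X be 2-colored so that every edge contains both colors
    among its colored vertices?  (Vertices in X stay uncolored.)"""
    rest = [v for v in V if v not in X]
    for bits in product([0, 1], repeat=len(rest)):
        col = dict(zip(rest, bits))
        if all(len({col[v] for v in e if v in col}) == 2 for e in H):
            return True
    return False

assert all(is_free({v}) for v in V)                          # every vertex is free
assert not any(is_free(set(X)) for X in combinations(V, 2))  # no free set of size 2
```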
Theorem~\ref{t:main1} is also best possible by considering the complement, $\overline{F_7}$, of the Fano plane $F_7$, where the Fano plane is shown in Figure~\ref{f:Fano} and where its complement $\overline{F_7}$ is the hypergraph on the same vertex set $V(F_7)$ and where $e$ is a hyperedge in the complement if and only if $V(F_7) \setminus e$ is a hyperedge in $F_7$. \begin{figure}[htb] \begin{center} \begin{tikzpicture}[scale=0.40] \begin{scope}[xshift=0cm,yshift=0cm] \fill (0,0) circle (0.2cm); \fill (4,0) circle (0.2cm); \fill (8,0) circle (0.2cm); \fill (2,3.464) circle (0.2cm); \fill (6,3.464) circle (0.2cm); \fill (4,6.928) circle (0.2cm); \fill (4,2.3093) circle (0.2cm); \hyperedgetwo{0}{0}{0.5}{8}{0}{0.6}; \hyperedgetwo{4}{6.928}{0.5}{0}{0}{0.6}; \hyperedgetwo{4}{6.928}{0.6}{8}{0}{0.5}; \hyperedgetwo{0}{0}{0.4}{6}{3.464}{0.45}; \hyperedgetwo{4}{6.928}{0.4}{4}{0}{0.45}; \hyperedgetwo{8}{0}{0.4}{2}{3.464}{0.45}; \draw (4,2.3093) circle (1.9523cm); \draw (4,2.3093) circle (2.6523cm); \end{scope} \end{tikzpicture} \end{center} \vskip -0.3 cm \caption{The Fano plane $F_7$} \label{f:Fano} \end{figure} Our proof of Theorem~\ref{t:main1} presented in Section~\ref{S:proof1} uses a surprising connection with not-all-equal $3$-SAT (NAE-$3$-SAT). We will later prove a result on when NAE-$3$-SAT is not only satisfiable, but is satisfiable without assigning all variables truth values. This result is of interest in its own right, but requires some further terminology (see Section~\ref{defns}) before describing it in detail. We remark that our resulting NAE-$3$-SAT result, given by Theorem~\ref{main_nae_3sat}, has also been used by the authors in~\cite{HeYe16+} to solve a conjecture on the so-called fractional disjoint transversal number (which we do not define here). 
This serves as added motivation for the importance of the NAE-$3$-SAT result, which can be used to solve several seemingly unrelated hypergraph problems that appear difficult to solve using a purely hypergraph approach. \section{Terminology and Definitions} \label{defns} For an edge $e$ in a hypergraph $H$, we denote by $H - e$ the hypergraph obtained from $H$ by deleting the edge $e$. Two vertices $x$ and $y$ of $H$ are \emph{adjacent} if there is an edge $e$ of $H$ such that $\{x,y\} \subseteq e$. Further, $x$ and $y$ are \emph{connected} if there is a sequence $x=v_0,v_1,v_2,\ldots,v_k=y$ of vertices of $H$ in which $v_{i-1}$ is adjacent to $v_i$ for $i=1,2,\ldots,k$. A \emph{connected hypergraph} is a hypergraph in which every pair of vertices is connected. A \emph{component} of a hypergraph $H$ is a maximal connected subhypergraph of $H$. In particular, we note that a component of $H$ is by definition connected. A subset $T$ of vertices in a hypergraph $H$ is a \emph{transversal} in $H$ if $T$ has a nonempty intersection with every edge of $H$. In the language of transversals, a vertex $v$ is a free vertex in a hypergraph $H$ if and only if $H$ contains two vertex disjoint transversals, neither of which contains the vertex $v$. Transversals in $4$-uniform hypergraphs are well studied (see, for example,~\cite{HeYe15,LaCh90,ThYe07}). In order to prove Conjecture~\ref{conj1}, we use a surprising connection between an instance of not-all-equal $3$-SAT (NAE-$3$-SAT) and a $3$-uniform hypergraph. In order to state this connection we require some further terminology. \begin{definition} An instance, $I$, of $3$-SAT contains a set of variables, $V(I)$, and a set of clauses, $C(I)$. Each clause contains exactly three literals, each of which is either a variable, $v \in V(I)$, or the negation of a variable, $\barv$, where $v \in V(I)$. A clause, $c \in C(I)$, is satisfied if one of the literals in it is true.
That is, the clause $c$ is satisfied if $v \in V(I)$ belongs to $c$ and $v=True$, or $\barv$ belongs to $c$ and $v=False$. The instance $I$ is satisfied if there is a truth assignment to the variables such that all clauses are satisfied. \end{definition} \begin{definition} An instance of NAE-$3$-SAT is defined exactly as an instance of $3$-SAT, except that we require every clause to contain a false literal as well as a true one. We call a clause that contains both a true and a false literal \emph{nae}-\emph{satisfiable}. If every clause in the instance $I$ is nae-satisfiable, we say that $I$ is \emph{nae}-\emph{satisfiable}. \end{definition} We furthermore need the following definitions. \begin{definition} Given an instance $I$ of NAE-$3$-SAT, we define the \textbf{associated graph} $G_I$ to be the graph with vertex set $V(I)$ in which an edge joins two variables if they (either in negated or unnegated form) appear in the same clause in $I$. Let $I$ be an instance of NAE-$3$-SAT. We call the instance $I$ \textbf{connected} if one cannot partition the variables $V(I)$ into non-empty sets $V_1$ and $V_2$ such that no clause contains variables from both $V_1$ and $V_2$. In other words, $I$ is connected if the graph $G_I$ associated with $I$ is connected. A \textbf{component} of a NAE-$3$-SAT instance $I$ is a maximal connected sub-instance of $I$. That is, the components of $I$ correspond precisely to the components of the graph $G_I$ associated with $I$. A variable, $v \in V(I)$, is \textbf{free} if $I$ is nae-satisfiable even if we do not assign any truth value to $v$. That is, every clause in $I$ contains a true and a false literal, even without considering literals involving $v$. The \textbf{degree} of a variable $v \in V(I)$ is the number of clauses containing $v$ or $\barv$, and is denoted by $\dg_I(v)$. If the instance $I$ is clear from the context, we simply write $\dg(v)$ rather than $\dg_I(v)$.
\end{definition} We are now in a position to define a connection between an instance of NAE-$3$-SAT and a $3$-uniform hypergraph as follows. \begin{definition} \label{defn2} If $H$ is a $3$-uniform hypergraph, we create a NAE-$3$-SAT instance $I_H$ as follows. Let $V(I_H) = V(H)$ and for each edge $e \in E(H)$ add a clause to $I_H$ with the same vertices/variables in non-negated form. We call $I_H$ the NAE-$3$-SAT instance corresponding to $H$. Note that the instance $I_H$ is nae-satisfiable if and only if $H$ is bipartite. In fact, the partite sets in the bipartition correspond to the truth values true and false. \end{definition} Throughout this paper, we use the standard notation $[k] = \{1,2,\ldots,k\}$. \section{NAE-$3$-SAT} \label{S:NAE} In this section, we present a key result that we need in order to prove Conjecture~\ref{conj1}, namely the following theorem, which establishes a fundamental property of NAE-$3$-SAT in the case when the number of clauses is less than the number of variables. An instance of NAE-$3$-SAT is \emph{non-trivial} if it contains at least one variable. \begin{thm} \label{main_nae_3sat} Let $I$ be a connected non-trivial instance of NAE-$3$-SAT. If $|C(I)|<|V(I)|$ and $\dg_I(v) \le 3$ for all $v \in V(I)$, then $I$ is nae-satisfiable and contains a free variable. \end{thm} \proof Suppose, to the contrary, that the theorem is false and let $I$ be a counterexample with minimum possible $|C(I)|$. Let $C = C(I)$ and $V = V(I)$. If $|C|=0$, then $|V|=1$ and the theorem holds, and so $|C| \ge 1$. We now prove a number of claims, which we then use to obtain a contradiction to $I$ being a counterexample. \1 \ClaimX{A} $\dg(v) \ge 2$ for all $v \in V$. \ClaimPF{A} Let $v \in V$ be arbitrary. If $\dg(v)=0$, then $I$ is not connected as $|C| \ge 1$, a contradiction. Suppose that $\dg(v)=1$, and let $c \in C$ be the clause containing $v$.
If $c$ contains three variables of degree~$1$, then $|C|=1$ and $|V|=3$ as $I$ is connected, and $I$ is clearly nae-satisfiable and all variables in $V$ are free, contradicting the fact that $I$ is a counterexample. Hence, the clause $c$ has at most two variables of degree~$1$. Let $c$ contain the variables $v$, $x_1$ and $x_2$ and let $I'$ be the instance of NAE-$3$-SAT obtained by deleting $v$ and the clause $c$. If $I'$ is connected, then, by the minimality of $|C|$, the instance $I'$ is nae-satisfiable and contains a free variable. Assigning $v$ a truth value such that the literal containing $v$ in the clause $c$ has the opposite value to the literal containing $x_1$ or $x_2$, the instance $I$ is nae-satisfiable and contains a free variable, namely the same free variable that belongs to the instance $I'$. This contradicts the fact that $I$ is a counterexample. Therefore, $I'$ is not connected. Since $I$ is connected and $I'$ is not connected, the instance $I'$ contains exactly two components, one containing the variable $x_1$ and the other containing the variable $x_2$ from the clause $c$. Both $x_1$ and $x_2$ have degree at most~$2$ in $I'$, while the degree of every other variable in $I'$ is at most~$3$. Therefore, both components of $I'$ satisfy the conditions of the theorem and, by the minimality of $|C|$, are nae-satisfiable. (We remark that there exists a free variable in both components, but in this case we assign every variable a truth value.) If the literals associated with $x_1$ and $x_2$ in the clause $c$ have the same truth values, then we can reverse the truth value of all variables in one of the components. Hence, we may assume that the literals associated with $x_1$ and $x_2$ in the clause $c$ have different truth values, implying that $I$ is nae-satisfiable and contains $v$ as a free variable. Once again, we contradict the fact that $I$ is a counterexample.
As $v$ was chosen arbitrarily, we have proven Claim~A.~\ClaimQED{} \2 \ClaimX{B} There exists a $v \in V$ such that $\dg(v) = 2$. \ClaimPF{B} If the claim were false, then by Claim~A we would have $\dg(v)=3$ for all $v \in V$, which would imply that $3|C| = \sum_{v \in V} \dg(v) = 3|V|$, a contradiction to $|C|<|V|$.~\ClaimQED{} \2 By Claim~B, there exists a variable $v \in V$ such that $\dg(v)=2$. Let $c_1$ and $c_2$ be the clauses containing the variable $v$ and let $Q$ contain all variables belonging to $c_1$ or $c_2$. \2 \ClaimX{C} $|Q| \le 4$. \ClaimPF{C} Suppose, to the contrary, that $|Q| \ge 5$. Since the clauses $c_1$ and $c_2$ both contain three variables, and $v$ belongs to both clauses, we note that $|Q| \le 5$. Consequently, $|Q| = 5$. Let $I'$ be the NAE-$3$-SAT instance obtained from $I$ by deleting $c_1$, $c_2$ and $v$. Suppose that $I'$ contains four distinct components. In this case, each component of $I'$ contains a variable from $Q \setminus \{v\}$. Possibly, a component of $I'$ may contain only one variable and no clause. By the minimality of $|C|$, the instance $I'$ is nae-satisfiable. (We remark that there exists a free variable in each of the four components, but in this case we assign every variable a truth value.) We can set the variables in $Q \setminus \{v\}$, by reversing all truth values in any of the components of $I'$ if required, such that both $c_1$ and $c_2$ are nae-satisfiable, implying that $I$ is nae-satisfiable and contains $v$ as a free variable. This contradicts the fact that $I$ is a counterexample. Therefore, $I'$ contains at most three distinct components. Let $Q \setminus \{v\} = \{q_1,q_2,q_3,q_4\}$. Renaming variables, if necessary, we may assume that $q_4$ and one of $q_1$, $q_2$ or $q_3$ belong to the same component in $I'$. Renaming $q_1$, $q_2$ and $q_3$, if necessary, we may assume that $q_1$ and $q_2$ are variables in $c_1$ and $q_3$ is a variable in $c_2$.
If $v$ is negated in $c_i$, then negate all literals in $c_i$, for $i \in [2]$. This does not change the problem and implies that we may assume, without loss of generality, that $v$ is negated in neither $c_1$ nor $c_2$. Let $\ell_i$ be the literal containing $q_i$ in $c_1$ for $i \in [2]$, and so $\ell_i \in \{q_i,\barq_i\}$. Further, let $\ell_i$ be the literal containing $q_i$ in $c_2$ for $i \in \{3,4\}$, and so $\ell_i \in \{q_i,\barq_i\}$. Let $c'$ be a new clause $\{\ell_1,\ell_2,\barl_3\}$ and let $I'' = I' \cup c'$. We note that $I''$ is connected and the degree of all variables in $I''$ is at most~$3$. Further, $|C(I'')| = |C(I')|+1 = |C|-1 < |V|-1 = |V(I'')|$. In particular, $|C(I'')| < |C|$. By the minimality of $|C|$, the instance $I''$ is nae-satisfiable and contains a free variable, $f$. If $f \not\in \{q_1,q_2,q_3\}$, then we can always assign to the variable $v$ a truth value such that $c_1$ and $c_2$ are nae-satisfiable, as indicated in Table~1, where $T$ denotes $True$, $F$ denotes $False$, and the cases when $(\ell_1,\ell_2,\ell_3) \in \{(T,T,F),(F,F,T)\}$ are impossible due to the clause~$c'$. Further, the free variable, $f$, in $I''$ is also a free variable in $I$. Therefore, the instance $I$ is nae-satisfiable and contains a free variable. This contradicts the fact that $I$ is a counterexample. Therefore, $f \in \{q_1,q_2,q_3\}$.
\begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} $(\ell_1,\ell_2,\ell_3)$ & $v$ & \hspace{0.5cm} & $(\ell_1,\ell_2,\ell_3)$ & $v$ & \hspace{0.5cm} & $(\ell_1,\ell_2,\ell_3)$ & $v$ & \hspace{0.5cm} & $(\ell_1,\ell_2,\ell_3)$ & $v$ \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} $(T,T,T)$ & $F$ & & $(T,F,T)$ & $F$ & & $(F,T,T)$ & $F$ & & $(F,F,T)$ & N/A \\ $(T,T,F)$ & N/A & & $(T,F,F)$ & $T$ & & $(F,T,F)$ & $T$ & & $(F,F,F)$ & $T$ \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} \end{tabular} \begin{center} \textbf{Table~1.} Possible assignments of truth values. \end{center} \end{center} Suppose that $f \in \{q_1,q_2\}$. Renaming $q_1$ and $q_2$, if necessary, we may assume that $f=q_1$. As $c'$ is nae-satisfiable in $I''$, the literals $\ell_2$ and $\ell_3$ must have the same truth value. We can therefore assign $v$ the opposite truth value to these two literals in order to get a nae-satisfiable assignment of $I$ where the variable $q_1$ is free. This contradicts the fact that $I$ is a counterexample. Therefore, $f=q_3$. As $c'$ is nae-satisfiable in $I''$ and $f$ is free in $I''$, the literals $\ell_1$ and $\ell_2$ must have opposite truth values. We can therefore assign $v$ the opposite truth value to the literal $\ell_4$ in order to get a nae-satisfiable assignment of $I$ where the variable $q_3$ is free. Once again, this contradicts the fact that $I$ is a counterexample. This completes all cases and therefore completes the proof of Claim~C.~\ClaimQED{} \2 \ClaimX{D} $|Q| \le 3$. \ClaimPF{D} Suppose, to the contrary, that $|Q| \ge 4$, which by Claim~C implies that $|Q|=4$. Therefore, there must exist variables $q_1$, $q_2$ and $q_3$ such that $c_1$ contains the variables $v$, $q_1$ and $q_2$ and the clause $c_2$ contains $v$, $q_1$ and $q_3$. As in the proof of Claim~C, we may assume, without loss of generality, that $v$ is negated in neither $c_1$ nor $c_2$.
Let $I'$ be the NAE-$3$-SAT instance obtained from $I$ by deleting the two clauses $c_1$ and $c_2$, and deleting the variable $v$. Let $\ell_i$ be the literal containing $q_i$ in $c_1$ for $i \in [2]$, and so $\ell_i \in \{q_i,\barq_i\}$. Further, let $\ell_3$ be the literal containing $q_3$ in $c_2$, and so $\ell_3 \in \{q_3,\barq_3\}$. Let $c'$ be a new clause $\{\ell_1,\ell_2,\barl_3\}$ and let $I'' = I' \cup c'$. We note that $I''$ is connected and the degree of all variables in $I''$ is at most~$3$. Further, $|C(I'')| = |C(I')|+1 = |C|-1 < |V|-1 = |V(I'')|$. In particular, $|C(I'')| < |C|$. By the minimality of $|C|$, the instance $I''$ is nae-satisfiable and contains a free variable, $f$. If $f \not\in \{q_1,q_2,q_3\}$, then proceeding exactly as in the proof of Claim~C, we show that the instance $I$ is nae-satisfiable and contains a free variable, a contradiction. Therefore, $f \in \{q_1,q_2,q_3\}$. If $f=q_1$, then as $c'$ is nae-satisfiable in $I''$, the literals $\ell_2$ and $\ell_3$ must have the same truth value. We can therefore assign $v$ the opposite truth value to these two literals in order to get a nae-satisfiable assignment of $I$ where the variable $q_1$ is free. If $f=q_2$, then as $c'$ is nae-satisfiable in $I''$, the literals $\ell_1$ and $\ell_3$ must have the same truth value. We can therefore assign $v$ the opposite truth value to these two literals in order to get a nae-satisfiable assignment of $I$ where the variable $q_2$ is free. If $f=q_3$, then as $c'$ is nae-satisfiable in $I''$, the literals $\ell_1$ and $\ell_2$ must have opposite truth values. We can therefore assign $v$ the opposite truth value to the literal corresponding to $q_1$ in $c_2$ in order to get a nae-satisfiable assignment of $I$ where the variable $q_3$ is free. In all the above three cases, the instance $I$ is nae-satisfiable and contains a free variable, a contradiction.
This completes all cases and therefore also the proof of Claim~D.~\ClaimQED{} \medskip By Claim~D, $|Q| \le 3$. As every clause contains three variables, this implies that $|Q|=3$. Let $Q = \{v,q_1,q_2\}$. Let $I^*$ be the instance of NAE-$3$-SAT with $V(I^*)=\{v,q_1,q_2\}$ and $C(I^*)=\{c_1,c_2\}$. \2 \ClaimX{E} The instance $I^*$ is nae-satisfiable and has a free variable. \ClaimPF{E} If at most one literal in $c_1$ is identical to a literal in $c_2$, then we simply reverse all literals in $c_1$. This does not change the problem, and now there are at least two literals in $c_1$ that are identical with literals in $c_2$. Renaming the variables, if necessary, we may assume that for $i \in [2]$ the literal containing $q_i$ is identical in $c_1$ and $c_2$. The variable $v$ is therefore a free variable, as may be seen by assigning opposite truth values to the literals containing $q_1$ and $q_2$ in $c_1$ (and therefore also in $c_2$). Thus, $I^*$ is nae-satisfiable and has a free variable.~\ClaimQED{} \medskip By Claim~E, the instance $I^*$ is nae-satisfiable and has a free variable. Therefore, $I \ne I^*$, implying that at least one of $q_1$ and $q_2$ has degree~$3$ in $I$. There is therefore a clause $c_3$, different from $c_1$ and $c_2$, containing $q_1$ or $q_2$. Renaming $q_1$ and $q_2$, if necessary, we may assume that $c_3$ contains $q_2$. \2 \ClaimX{F} The clause $c_3$ does not contain the variable~$q_1$. \ClaimPF{F} Suppose, to the contrary, that $c_3$ contains~$q_1$. Let $q_3$ be the variable in $c_3$ which is different from $q_1$ and $q_2$. Let $I''$ be obtained from $I$ by deleting the three clauses $c_1$, $c_2$ and $c_3$, and deleting the three variables $v$, $q_1$ and $q_2$. We note that $I''$ is connected and the degree of all variables in $I''$ is at most~$3$. Further, $|C(I'')| = |C|-3 < |V|-3 = |V(I'')|$. In particular, $|C(I'')| < |C|$. By the minimality of $|C|$, the instance $I''$ is nae-satisfiable and contains a free variable.
(We remark, however, that here we do not need the fact that there exists a free variable in this case.) By Claim~E, it is possible to assign truth values to two of the variables in $\{v,q_1,q_2\}$, leaving the third variable free, such that $c_1$ and $c_2$ are both nae-satisfiable. At least one of $q_1$ and $q_2$ has been assigned a truth value, say $q_1$. Let $\ell_1$ be the literal containing $q_1$ in $c_3$, and let $\ell_3$ be the literal containing $q_3$ in $c_3$. By reversing all truth values in $I''$, if necessary, we can guarantee that the literals $\ell_1$ and $\ell_3$ have opposite truth values. Therefore, $I$ is nae-satisfiable and one of the variables in $\{v,q_1,q_2\}$ is free. This is a contradiction to $I$ being a counterexample.~\ClaimQED{} \medskip By Claim~F, the clause $c_3$ does not contain the variable~$q_1$. Let $I'$ be the NAE-$3$-SAT instance obtained from $I$ by deleting the two clauses $c_1$ and $c_2$, and the variable $v$. As in the proof of Claim~E, we may assume that there are at least two literals in $c_1$ that are identical with literals in $c_2$. \2 \ClaimX{G} The variable $v$ is not free in $I^*$. \ClaimPF{G} Suppose, to the contrary, that $v$ is free in $I^*$. If the literal containing $q_1$ in $c_1$ and $c_2$ is not identical, then in order for the variable $v$ to be free in $I^*$, the literal containing $q_2$ in $c_1$ and $c_2$ is also not identical. This contradicts our assumption that at least two literals in $c_1$ are identical with literals in $c_2$. Hence, the literal containing $q_i$ in $c_1$ and $c_2$ is identical for $i \in [2]$. If in $c_1$ and $c_2$ exactly one of $q_1$ and $q_2$ is negated, then let $c_3'$ be a new clause obtained from $c_3$ by replacing the literal $q_2$ with $q_1$ or $\barq_2$ with $\barq_1$. If in $c_1$ and $c_2$ either both or none of $q_1$ and $q_2$ are negated, then let $c_3'$ be a new clause obtained from $c_3$ by replacing the literal $q_2$ with $\barq_1$ or $\barq_2$ with $q_1$.
Let $I''$ be the instance obtained from $I'$ by deleting the clause $c_3$ and the variable $q_2$, and adding the clause $c_3'$. We note that $I''$ is connected and the degree of all variables in $I''$ is at most~$3$. Further, $|C(I'')| = |C|-2 < |V|-2 = |V(I'')|$. In particular, $|C(I'')| < |C|$. By the minimality of $|C|$, the instance $I''$ is nae-satisfiable and contains a free variable. (We remark, however, that here we do not need the fact that there exists a free variable in $I''$ in this case.) If in $c_1$ and $c_2$ exactly one of $q_1$ and $q_2$ is negated, then we let $q_2$ have the same truth value as $q_1$; otherwise, we let $q_2$ have the opposite truth value of $q_1$. This implies that $I$ is nae-satisfiable even without assigning $v$ a value. Therefore, the variable $v$ is free and $I$ is not a counterexample to the theorem, a contradiction.~\ClaimQED{} \medskip By Claim~G, the variable $v$ is not free in $I^*$. By our earlier assumptions, there are at least two literals in $c_1$ that are identical with literals in $c_2$. If the literal containing $q_i$ in $c_1$ and $c_2$ is identical for $i \in [2]$, then $v$ is free in $I^*$, a contradiction. This implies that if the literal containing $q_i$ in $c_1$ and $c_2$ is not identical, then the literal containing $q_{3-i}$ in $c_1$ and $c_2$ is identical, for $i \in [2]$. Further, the literal containing $v$ in $c_1$ and $c_2$ is identical. Let $c_3'$ be a new clause obtained from $c_3$ by replacing the literal $q_2$ with $q_1$ or $\barq_2$ with $\barq_1$. Let $I''$ be the instance obtained from $I'$ by deleting the clause $c_3$ and the variable $q_2$, and adding the clause $c_3'$. We note that $I''$ is connected and the degree of all variables in $I''$ is at most~$3$. Further, $|C(I'')| = |C|-2 < |V|-2 = |V(I'')|$. In particular, $|C(I'')| < |C|$. By the minimality of $|C|$, the instance $I''$ is nae-satisfiable and contains a free variable. Suppose that $q_1$ is free in $I''$.
In this case, if $q_1$ is free in $I^*$ we can assign values to $v$ and $q_2$, and if $q_2$ is free in $I^*$ we can assign values to $v$ and $q_1$, such that $c_1$ and $c_2$ are nae-satisfiable. Therefore, $I$ is nae-satisfiable and contains $q_1$ or $q_2$ as a free variable. This is a contradiction to $I$ being a counterexample. Hence, $q_1$ is not free in $I''$. Since $I''$ contains a free variable and $q_1$ is not free in $I''$, some variable, $w$, different from $q_1$ is a free variable in $I''$. We now assign $q_2$ the same truth value as $q_1$ and we assign $v$ the opposite truth value to the literal corresponding to $q_1$ in $c_1$ (or $c_2$). With this truth assignment, both $c_1$ and $c_2$ are nae-satisfiable, noting that the literal containing $v$ in $c_1$ and $c_2$ is identical. Therefore, $I$ is nae-satisfiable and contains $w$ as a free variable. Once again, this is a contradiction to $I$ being a counterexample, which completes the proof of Theorem~\ref{main_nae_3sat}.~\qed \medskip We remark that the result of Theorem~\ref{main_nae_3sat} is best possible in the following sense. \begin{prop} For any $s \ge 1$, there exists a non-trivial connected instance $I$ of NAE-$3$-SAT with $3s$ variables satisfying $0 \le |C(I)| < |V(I)|$ and $\dg_I(v) \le 3$ for all $v \in V(I)$ such that $I$ is nae-satisfiable and contains exactly one free variable. \end{prop} \proof Let $s \ge 1$ and let $I$ be an instance of NAE-$3$-SAT with variables $V(I) = \{ v_i^j \mid i \in [s] \mbox{ and } j \in [3]\}$ and clauses $C(I) = C_1 \cup C_2$, where \[ \begin{array}{rcl} C_1 & = & \{ (v_i^1,v_i^2,v_i^3), (\barv_i^1, v_i^2, v_i^3) \mid i \in [s] \} \\ C_2 & = & \{ (v_i^1, v_{i+1}^2, \barv_{i+1}^3) \mid i \in [s-1] \}. \\ \end{array} \] By construction, $I$ is connected and the degree of all variables in $I$ is at most~$3$. Further, $|C(I)| = 3s-1 = |V(I)|-1$. We will now show that the only free variable in $I$ is $v_s^1$.
Due to the clauses in $C_1$, we note that $v_i^2$ and $v_i^3$ must be assigned opposite truth values in any nae-satisfiable truth assignment, for all $i \in [s]$. For every $i \in [s-1]$, we note that $v_{i+1}^2$ and $\barv_{i+1}^3$ have the same truth value since $v_{i+1}^2$ and $v_{i+1}^3$ have opposite truth values. This implies that $v_i^1$ must be assigned the opposite truth value to $v_{i+1}^2$ and $\barv_{i+1}^3$. This is true for all $i \in [s-1]$, which implies that $v_s^1$ is the only variable that can be free. It is not difficult to see that $v_s^1$ is indeed free, and this also follows from Theorem~\ref{main_nae_3sat}, noting that none of the other variables are free.~\qed \section{Proof of Theorem~\ref{t:main1}} \label{S:proof1} Using Theorem~\ref{main_nae_3sat}, we prove Theorem~\ref{t:main1}. First, we present the following lemma. \begin{lem} \label{lem1} Let $H$ be a connected $3$-uniform hypergraph with no isolated vertex. If $H$ has fewer edges than vertices and has maximum degree at most~$3$, then $H$ contains at least two free vertices. \end{lem} \proof Let $H$ be a connected $3$-uniform hypergraph with no isolated vertex. Suppose that $H$ has fewer edges than vertices and has maximum degree at most~$3$. Let $I_H$ be the NAE-$3$-SAT instance corresponding to $H$. By Theorem~\ref{main_nae_3sat}, the instance $I_H$ is nae-satisfiable and has a free variable, say $v$. Assigning color~$1$ to true variables and color~$2$ to false variables, we obtain a $2$-coloring $\cC$ of $H$ where $v$ has no color and all hyperedges of $H$ contain vertices of both colors. Let $E_v$ be the set of all edges in $H$ containing the vertex $v$. Since $H$ has no isolated vertex, we note that $|E_v| = d_H(v) \ge 1$. We say that a vertex in $H$ is $\cC$-fixed if in some edge in $E(H) \setminus E_v$ it is the only vertex of its color in the $2$-coloring $\cC$. We note that every edge of $H$ is a $3$-edge, and every edge in $E(H) \setminus E_v$ contains vertices of both colors in $\cC$.
Thus, in every edge in $E(H) \setminus E_v$ there is a vertex whose color is unique in that edge. Hence, every edge in $E(H) \setminus E_v$ gives rise to exactly one vertex that is $\cC$-fixed. Therefore, there are at most $|E(H) \setminus E_v|$ vertices in $H$ that are $\cC$-fixed. By supposition, $|E(H)| < |V(H)|$. Hence, $|E(H) \setminus E_v| = |E(H)| - |E_v| \le (|V(H)|-1) - d_H(v) \le |V(H)| - 2$, implying that at least two vertices in $H$ are not $\cC$-fixed. Clearly, the vertex $v$ is not $\cC$-fixed. Let $u$ be a vertex different from $v$ that is not $\cC$-fixed. Renaming colors if necessary, we may assume that $u$ has color~$1$. Thus, every edge in $E(H) \setminus E_v$ that contains $u$ contains another vertex of color~$1$ and one vertex of color~$2$. Let $\cC'$ be the coloring obtained from $\cC$ by removing the color~$1$ from $u$ and assigning color~$1$ to $v$. Since $\cC$ is a $2$-coloring of $H$, so too is $\cC'$ a $2$-coloring of $H$. However, in the $2$-coloring $\cC'$ the vertex $u$ is a free vertex. Thus, $H$ has at least two free vertices, namely $u$ and $v$.~\qed \medskip We are now in a position to prove Theorem~\ref{t:main1}. Recall its statement. \medskip \noindent \textbf{Theorem~\ref{t:main1}}. \emph{Every $4$-regular $4$-uniform hypergraph contains a free vertex.} \\ \noindent \textbf{Proof of Theorem~\ref{t:main1}.} Let $H$ be a $4$-regular $4$-uniform hypergraph. We may assume that $H$ is connected, as otherwise we consider each component of $H$ separately. By Thomassen's Theorem~\ref{Extension}, there exists a $2$-coloring, $\cC$, of $H$ such that no edge of $H$ is monochromatic. Analogously to the proof of Lemma~\ref{lem1}, we call a vertex $\cC$-fixed if in some edge in $E(H)$ it is the only vertex of its color in $\cC$. If some vertex is not $\cC$-fixed, then it is a free vertex, as we can simply uncolor it. Therefore, we may assume that all vertices in $H$ are $\cC$-fixed, for otherwise the desired result follows.
For every edge $e \in E(H)$, let $v^*(e)$ be the vertex of unique color in $e$, if such a vertex exists. By assumption, all vertices in $H$ are $\cC$-fixed, implying that for every vertex $u \in V(H)$ we have $u = v^*(e)$ for some edge $e$ in $H$. Since $H$ is a $4$-regular $4$-uniform hypergraph, we note that $|V(H)|=|E(H)|$. Thus, since every vertex in $H$ is $\cC$-fixed, for every edge $e \in E(H)$ the vertex $v^*(e)$ exists. Further, if $e$ and $e'$ are distinct edges, then $v^*(e) \ne v^*(e')$. This in turn implies that for every vertex $u \in V(H)$ there is a unique edge, $e^*(u)$, such that $v^*(e^*(u))=u$. Let $V_1$ be the set of all vertices of color~$1$ in $\cC$ and let $V_2$ be the set of all vertices of color~$2$ in $\cC$. For each vertex $u \in V_1$, the edge $e^*(u)$ contains three vertices in $V_2$, while for each vertex $v \in V_2$, the edge $e^*(v)$ contains exactly one vertex in $V_2$, namely $v$ itself. Thus, the sum of the degrees of the vertices in $V_2$ is $3|V_1| + |V_2|$, implying that the average degree of a vertex in $V_2$ is $(3|V_1|+|V_2|)/|V_2|$. Since $H$ is $4$-regular, this value has to be~$4$, which implies that $|V_1|=|V_2|$. Let $H_1^*$ be the hypergraph with vertex set $V(H_1^*) = V_1$ and with edge set defined as follows: for every vertex $u \in V_2$ add the edge $e_u = (e^*(u) \setminus \{u\})$ to $H_1^*$. We note that each vertex $v \in V_1$ belongs to one edge $e^*(v)$ of $H$ and to three edges of the type $e^*(u)$ where $u \in V_2$. Thus, by construction, $H_1^*$ is a $3$-regular $3$-uniform hypergraph. Analogously, we define $H_2^*$ to be the hypergraph with vertex set $V(H_2^*) = V_2$ and with edge set defined as follows: for every vertex $v \in V_1$ add the edge $e_v = (e^*(v) \setminus \{v\})$ to $H_2^*$. By construction, $H_2^*$ is a $3$-regular $3$-uniform hypergraph.
Let $C_1^1, \ldots, C_{\ell_1}^1$ be the components of $H_1^*$ where $\ell_1 \ge 1$, and let $C_1^2, \ldots, C_{\ell_2}^2$ be the components of $H_2^*$ where $\ell_2 \ge 1$. Let $i_1=1$. Let $u_1 \in V(C_{i_1}^1)$ and let $i_2$ be defined such that $e_{u_1}$ is an edge in $C_{i_2}^2$. We note that $C_{i_2}^2 - e_{u_1}$ contains at most three components. Further, every component of $C_{i_2}^2 - e_{u_1}$ has fewer edges than vertices, as the degrees of its vertices are at most~$3$ and it contains a vertex of degree at most~$2$, namely a vertex contained in the deleted edge $e_{u_1}$. Therefore, applying Lemma~\ref{lem1} to each component of $C_{i_2}^2 - e_{u_1}$, we obtain a $2$-coloring of the component that contains a free vertex. Combining these $2$-colorings of the components produces a $2$-coloring of $C_{i_2}^2 - e_{u_1}$ that contains at least one free vertex. Let $u_2$ be a free vertex of $C_{i_2}^2 - e_{u_1}$. Let $i_3$ be defined such that $e_{u_2}$ is an edge in $C_{i_3}^1$. As before, applying Lemma~\ref{lem1} to each component of $C_{i_3}^1 - e_{u_2}$, we obtain a $2$-coloring with a free vertex, say $u_3$. Continuing the above process, we obtain a sequence $i_1, i_2, i_3, i_4, \ldots$. As there are finitely many components of $H_1^*$ and $H_2^*$, we note that there must exist integers $\ell$ and $k$ such that $i_{\ell}, i_{\ell+2}, i_{\ell+4}, \ldots, i_{k-2}$ are all distinct, $i_{\ell+1}, i_{\ell+3}, i_{\ell+5}, \ldots, i_{k-1}$ are all distinct, and $i_\ell = i_k$. Renaming components if necessary, we may assume we had started with $u_\ell$ instead of $u_1$, and we may assume that $\ell=1$ and $i_1=1$, $i_2=1$, $i_3=2$, $i_4=2$, $i_5=3$, $\ldots$, $i_{k-2}=(k-1)/2$, $i_{k-1} = (k-1)/2$ and $i_k=1$. By Lemma~\ref{lem1}, the hypergraph $C_{i_k}^1 - e_{u_{k-1}} = C_1^1 - e_{u_{k-1}}$ contains at least two free vertices. We may, without loss of generality, assume that the free vertex $u_k$ of $C_1^1 - e_{u_{k-1}}$ was chosen to be different from $u_1$.
We now define the sets $V'$ and $E'$ by $V' = \{u_2,\ldots,u_{k-1},u_k\}$ and $E' = \{e_{u_1},e_{u_2}, \ldots, e_{u_{k-1}} \}$. Further, we define \[ V^* = \bigcup_{i=1}^{\frac{k-1}{2}} V(C_i^1) \cup V(C_i^2) \hspace*{0.75cm} \mbox{and} \hspace*{0.75cm} E^* = \bigcup_{i=1}^{\frac{k-1}{2}} E(C_i^1) \cup E(C_i^2). \] If $e_u$ is an edge in $H_1^*$ or $H_2^*$, then let $(e_u)^H$ be the $4$-edge containing the vertices $V(e_u) \cup \{u\}$; that is, $(e_u)^H$ is the original $4$-edge in $H$ that gave rise to $e_u$. We now define \[ E^{**} = \{ (e)^{H} \, | \, e \in E^* \} \hspace*{0.75cm} \mbox{and} \hspace*{0.75cm} E'' = \{ (e)^{H} \, | \, e \in E' \}. \] We note that $E' \subset E^*$ and every edge in $E^*$ is a $3$-edge. Further, $E'' \subset E^{**}$ and every edge in $E^{**}$ is a $4$-edge. Considering the $2$-colorings we obtained above, we can $2$-color the vertices of $V^* \setminus V'$ such that all edges of $E^* \setminus E'$ (and therefore also all edges of $E^{**} \setminus E''$) contain vertices of both colors. Interchanging the colors of all vertices in $C_1^1$ if necessary (by recoloring vertices of color~$i$ with color $3-i$ for $i \in [2]$ in the original $2$-coloring of $C_1^1$), we may assume that $(e_{u_1})^H$ also contains vertices of both colors, noting that $u_1 \ne u_k$. Coloring the vertex $u_i$, we can ensure that the edge $(e_{u_i})^H$ contains vertices of both colors, for each $i \in [k-1] \setminus \{1\}$. We have now $2$-colored all vertices in $V^*$ except for the vertex $u_k$ (which is still uncolored) such that all edges in $E^{**}$ contain vertices of both colors. Let $\cC^*$ denote the resulting $2$-coloring of the vertices of $V^*$. If $V^* = V(H)$, then $\cC^*$ is a $2$-coloring of $H$ with a free vertex, and we are therefore done and the desired result follows. Hence, we may assume that $V^* \ne V(H)$.
If the $|V^*|$ edges in $\{ e_v \, | \, v \in V^*\}$ were exactly the edges in $E^*$, then $H$ would not be connected, a contradiction. Therefore, there must be a vertex $v \in V^*$ with $e_v \not\in E^*$. Hence, $e_v \in C_a^b$ for some $a>(k-1)/2$ and $b \in [2]$. Applying Lemma~\ref{lem1} to each component of $C_a^b - e_{v}$, we obtain analogously as before a $2$-coloring of $C_a^b - e_{v}$ with a free vertex, say $x_a^b$. If the vertex $v$ was already colored in $\cC^*$, then interchange all colors in the $2$-coloring $\cC^*$, if necessary, in order to guarantee that the edge $(e_v)^H$ contains vertices of both colors. If $v$ was not colored, then color it such that $(e_v)^H$ contains vertices of both colors. In both cases, we produce a $2$-coloring of the vertices of $V^* \cup V(C_a^b)$ with the vertex $x_a^b$ uncolored (and possibly also the vertex $u_k$ uncolored if $v \ne u_k$) such that all edges in $E^{**} \cup \{ (e)^H \, | \, e \in E(C_a^b) \}$ contain vertices of both colors. Repeating the above process (with the new $V^*$ being $V^* \cup V(C_a^b)$ and the new $E^{**}$ being $E^{**} \cup \{ (e)^H \, | \, e \in E(C_a^b) \}$), we will eventually have $V^* = V(H)$ and produce a $2$-coloring of $H$ with a free vertex. This completes the proof of Theorem~\ref{t:main1}.~\qed \section{Closing Remarks} In this paper, we establish a surprising connection between NAE-$3$-SAT and $2$-coloring of hypergraphs. We prove that every connected non-trivial instance of NAE-$3$-SAT with maximum degree~$3$ is nae-satisfiable (and contains a free variable) if the number of clauses is less than the number of variables. Using this property, we strengthen a beautiful result due to Carsten Thomassen~\cite{Th92} that every $4$-regular $4$-uniform hypergraph is $2$-colorable, which itself is a strengthening of a powerful result due to Alon and Bregman~\cite{AlBru88}. 
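The free-variable property just stated is easy to verify by brute force on small instances, in its equivalent hypergraph formulation: a connected $3$-uniform hypergraph with maximum degree at most~$3$ and fewer edges than vertices admits a $2$-coloring in which one vertex stays uncolored while every edge still receives both colors. The sketch below does exactly this; the toy instance is ours, not from the paper.

```python
from itertools import product

def free_vertices(vertices, edges):
    """Vertices v such that the remaining vertices admit a 2-coloring in
    which every edge contains both colors (v itself stays uncolored)."""
    free = []
    for v in vertices:
        rest = [u for u in vertices if u != v]
        for colors in product((1, 2), repeat=len(rest)):
            coloring = dict(zip(rest, colors))
            # every edge must see both colors among its colored vertices
            if all(len({coloring[u] for u in e if u != v}) == 2 for e in edges):
                free.append(v)
                break
    return free

# Toy instance (ours): connected, 3-uniform, maximum degree 2 <= 3,
# and 2 edges < 4 vertices, so a free vertex is guaranteed.
V = [1, 2, 3, 4]
E = [{1, 2, 3}, {2, 3, 4}]
print(free_vertices(V, E))
```

In this particular instance every vertex turns out to be free; in general the result only guarantees one.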
As remarked earlier, our result (see Theorem~\ref{t:main1}) is best possible in the sense that there exist $4$-regular $4$-uniform hypergraphs with only one free vertex; that is, every free set in such a hypergraph has size~$1$. We believe that every connected $4$-regular $4$-uniform hypergraph with sufficiently large order contains a free set of size~$2$. The complement of the Fano plane shows that the order of such a hypergraph must be more than~$7$. It is possible that every connected $4$-regular $4$-uniform hypergraph of order at least~$8$ contains a free set of size~$2$. We further suspect that every connected $4$-regular $4$-uniform hypergraph with sufficiently large order~$n$ contains a free set of size $Cn$, where $C > 0$ is some constant. As remarked previously, we have subsequently used our NAE-$3$-SAT property, given by Theorem~\ref{main_nae_3sat}, to solve other seemingly unrelated hypergraph conjectures (such as a conjecture on the fractional disjoint transversal number) that seem difficult to solve using a purely hypergraph approach. An interesting line of future research would be to determine, for larger values of~$k \ge 4$, which connected non-trivial NAE-$k$-SAT instances are nae-satisfiable given the maximum degree, number of variables and number of clauses, and to apply such results to solve open problems and conjectures related to $k$-uniform hypergraphs. \newpage
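The role of the complement of the Fano plane can be confirmed computationally. The sketch below (with the standard labelling of the Fano lines assumed) brute-forces all $2$-colorings of this $4$-regular $4$-uniform hypergraph on $7$ vertices: it has a free vertex, as Theorem~\ref{t:main1} guarantees, but no free set of size~$2$, consistent with the remarks above.

```python
from itertools import combinations, product

# Complement of the Fano plane: one 4-edge per Fano line, namely the
# complement of that line in {1,...,7}; the resulting hypergraph is
# 4-regular and 4-uniform (standard labelling of the lines assumed).
FANO_LINES = [{1,2,3}, {1,4,5}, {1,6,7}, {2,4,6}, {2,5,7}, {3,4,7}, {3,5,6}]
V = set(range(1, 8))
EDGES = [V - line for line in FANO_LINES]

def is_free_set(S):
    """Can V - S be 2-colored so that every edge contains both colors?"""
    rest = sorted(V - S)
    for colors in product((1, 2), repeat=len(rest)):
        coloring = dict(zip(rest, colors))
        if all(len({coloring[v] for v in e if v not in S}) == 2 for e in EDGES):
            return True
    return False

print(any(is_free_set({v}) for v in V))                      # a free vertex exists
print(any(is_free_set(set(p)) for p in combinations(V, 2)))  # but no free pair
```

The failure of every pair $\{a_1,a_2\}$ comes from the line through the two vertices of a size-$2$ color class, whose complementary edge then misses that color entirely.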
https://arxiv.org/abs/1110.2235
On distance, geodesic and arc transitivity of graphs
We compare three transitivity properties of finite graphs, namely, for a positive integer $s$, $s$-distance transitivity, $s$-geodesic transitivity and $s$-arc transitivity. It is known that if a finite graph is $s$-arc transitive but not $(s+1)$-arc transitive then $s\leq 7$ and $s\neq 6$. We show that there are infinitely many geodesic transitive graphs with this property for each of these values of $s$, and that these graphs can have arbitrarily large diameter if and only if $1\leq s\leq 3$. Moreover, for a prime $p$ we prove that there exists a graph of valency $p$ that is 2-geodesic transitive but not 2-arc transitive if and only if $p\equiv 1\pmod 4$, and for each such prime there is a unique graph with this property: it is an antipodal double cover of the complete graph $K_{p+1}$ and is geodesic transitive with automorphism group $PSL(2,p)\times Z_2$.
\section{Introduction} The study of finite distance-transitive graphs goes back to Higman's paper \cite{DGH-1} in which ``groups of maximal diameter'' were introduced. These are permutation groups which act distance transitively on some graph. The family of distance transitive graphs includes many interesting and important graphs, such as the Johnson graphs, Hamming graphs, Odd graphs, Paley graphs and certain antipodal double covers of complete graphs that are discussed in this paper. We examine graphs with various symmetry properties which are stronger than arc-transitivity. The weakest of these properties is $s$-distance transitivity where $s$ is at most the diameter of the graph (see Section 2 for a precise definition), in which all pairs of vertices at a given distance at most $s$ are equivalent under graph automorphisms. If $s$ is equal to the diameter, the graph is said to be \emph{distance transitive}. For a fixed integer $s\geq 2$, two stronger concepts than $s$-distance transitivity are important for our work. The first is $s$-geodesic transitivity for $s$ at most the diameter, in which for each integer $t\leq s$, all ordered $t$-paths $(v_0,v_1,\cdots,v_t)$ for which $v_0,v_t$ are at distance $t$ are equivalent under graph automorphisms. Such $t$-paths are called \emph{$t$-geodesics}. The second property is $s$-arc transitivity, in which for each integer $t\leq s$, all ordered $t$-walks $(v_0,v_1,\cdots,v_t)$ with $v_{i-1}\neq v_{i+1}$ for each $i=1,2,\cdots, t-1$ are equivalent. It is straightforward to verify that, for $s$ at most the diameter, $s$-arc transitivity implies $s$-geodesic transitivity, which in turn implies $s$-distance transitivity. The purpose of this paper is to provide some insight into the differences between these conditions, especially for $s=2$. A graph is called \emph{geodesic transitive} if it is $s$-geodesic transitive for $s$ equal to the diameter. 
Geodesic transitive graphs are in particular 1-arc transitive, and may or may not be $s$-arc transitive for some $s>1$. Our first result specifies the possible arc transitivities for geodesic transitive graphs. We note that (see \cite{weiss}) if a finite graph is $s$-arc transitive but not $(s+1)$-arc transitive then $s\in \{1,2,3,4,5,7\}$. \begin{theo}\label{gtnotat} For each $s\in \{1,2,3,4,5,7\}$, there are infinitely many geodesic transitive graphs that are $s$-arc transitive but not $(s+1)$-arc transitive. Moreover, there exist geodesic transitive graphs that are $s$-arc transitive but not $(s+1)$-arc transitive with arbitrarily large diameter if and only if $s\in \{1,2,3\}$. \end{theo} Theorem \ref{gtnotat} is proved in Section 3 by analysing some well-known families of distance transitive graphs, namely, Johnson graphs, Hamming graphs, Odd graphs and classical generalized polygons. In fact if $s\geq 4$, then all graphs with the property of Theorem \ref{gtnotat} have diameter at most 8 and are known explicitly, see Proposition \ref{dt-4at-diam}. \medskip Our second result is the main result of the paper, proved in Subsection 5.1. It classifies explicitly all $2$-geodesic transitive graphs of prime valency that are not 2-arc transitive. \medskip \begin{theo}\label{gt-primeval} Let $\Gamma$ be a connected $2$-geodesic transitive but not $2$-arc transitive graph of prime valency $p$. Then $\Gamma$ is a nonbipartite antipodal double cover of the complete graph $K_{p+1}$, where $p\equiv 1\pmod 4$. Further, $\Gamma$ is geodesic transitive and $\Gamma$ is unique up to isomorphism with automorphism group $PSL(2,p)\times Z_2$. \end{theo} This family of graphs arose also in \cite{GLP,Li-ci-soluble,Taylor-1} in different contexts, and they are distance transitive of diameter 3. We prove in Lemma \ref{val-p-lemma-4} that each graph in the family is geodesic transitive. 
In Subsection 5.2, we construct these graphs as in \cite{Li-ci-soluble} and prove that they satisfy the hypotheses of Theorem \ref{gt-primeval}. It would be interesting to know if a similar classification is possible for non-prime valencies. This is the subject of further research by the second author. We complete our comparison of these transitivity properties by producing 2-distance transitive graphs that are not 2-geodesic transitive. We give just one infinite family of examples, namely the Paley graphs $P(q)$ where $q\geq 13$ is a prime power and $q\equiv 1 \pmod{4}$ (see Section 4 for a definition). These graphs $P(q)$ have diameter 2 and are well-known to be distance transitive. This information is important for the proof of Theorem \ref{gtnotat}. \begin{theo}\label{exam-valp-ha} Let $q\equiv 1 \pmod{4}$ be a prime power. Then the Paley graph $P(q)$ is distance transitive for all $q$, but $P(q)$ is geodesic transitive if and only if $q=5$ or $9$. \end{theo} \begin{rem}\label{dnotg-rem-1} {\rm The Paley graphs $P(q)$ with $q>9$ seem to be the first family of diameter $2$ graphs observed to be distance transitive but not geodesic transitive. Since all diameter $2$ distance transitive graphs are known, it would be interesting to know which of them are geodesic transitive.} \end{rem} These results give some insight into the relationship between $s$-distance transitivity, $s$-geodesic transitivity and $s$-arc transitivity for $s=2$. It would be interesting to understand the relationship between these properties for larger values of $s$. \section{Preliminaries} All graphs in this paper are finite, undirected simple graphs. We first give some definitions which will be used throughout the paper. Let $\Gamma$ be a graph. We use $V\Gamma,E\Gamma$, and $\Aut\Gamma$ to denote its \emph{vertex set}, \emph{edge set} and \emph{full automorphism group}, respectively. The size of $V\Gamma$ is called the \emph{order} of the graph. 
The graph $\Gamma$ is said to be \emph{vertex transitive} (or \emph{edge transitive}) if the action of $\Aut \Gamma$ on $V\Gamma$ (or $E\Gamma$) is transitive. For (not necessarily distinct) vertices $u$ and $v$ in $V\Gamma$, a \emph{walk} from $u$ to $v$ is a finite sequence of vertices $(v_{0},v_{1},\cdots,v_{n})$ such that $v_{0}=u$, $v_{n}=v$ and $(v_{i},v_{i+1})\in E\Gamma$ for all $i$ with $0\leq i<n$, and $n$ is called the \emph{length} of the walk. If $v_{i}\neq v_{j}$ for $0\leq i < j\leq n$, the walk is called a \emph{path} from $u$ to $v$. The smallest value of $n$ such that there is a path of length $n$ from $u$ to $v$ is called the \emph{distance} from $u$ to $v$ and is denoted $d_{\Gamma}(u, v)$. The \emph{diameter} $\diam(\Gamma)$ of a connected graph $\Gamma$ is the maximum of $d_{\Gamma}(u, v)$ over all $u, v \in V\Gamma$. Let $G \leq \Aut\Gamma$ and $s\leq \diam(\Gamma)$. We say that $\Gamma$ is \emph{$(G, s)$-distance transitive} if, for any two pairs of vertices $(u_1,v_1)$, $(u_2,v_2)$ with the same distance $t\leq s$, there exists $g\in G$ such that $(u_1,v_1)^g=(u_2,v_2)$. For a positive integer $s$, an $s$-arc of $\Gamma$ is a walk $(v_0,v_1,\cdots,v_s)$ of length $s$ in $\Gamma$ such that $v_{j-1}\neq v_{j+1}$ for $1\leq j\leq s-1$. Moreover, a 1-arc is called an \emph{arc}. Suppose $G\leq \Aut \Gamma$. Then $\Gamma$ is said to be \emph{$(G,s)$-arc transitive} if, for any two $t$-arcs $\alpha$ and $\beta$ where $t\leq s$, there exists $g\in G$ such that $\alpha^g=\beta$. A remarkable result of Tutte about $(G,s)$-arc transitive graphs with valency three shows that $s \leq 5$, see \cite{Tutte-1,Tutte-2}. About twenty years later, relying on the classification of finite simple groups, Weiss in \cite{weiss} proved that there are no $(G,8)$-arc transitive graphs with valency at least three. For more work on $(G,s)$-arc transitive graphs see \cite{IP-1,Praeger-4,Praeger-1993-1}. Let $u,v$ be distinct vertices of $\Gamma$. 
Then a path of shortest length from $u$ to $v$ is called a \emph{geodesic} from $u$ to $v$, or sometimes an \emph{$i$-geodesic} if $d_{\Gamma}(u,v)=i$. Moreover, 1-geodesics are arcs. If $\Gamma$ is connected, for each $i \in \{1,\cdots,\diam(\Gamma) \}$, we let $\geod_i(\Gamma)$ denote the set of all $i$-geodesics of $\Gamma$. Let $G\leq \Aut\Gamma$ and $s \leq \diam(\Gamma)$. Then $\Gamma$ is said to be \emph{$(G,s)$-geodesic transitive} if, for each $i=1,2,\cdots,s$, $G$ is transitive on $\geod_i(\Gamma)$. When $s=\diam(\Gamma)$, $\Gamma$ is said to be \emph{$G$-geodesic transitive}. Moreover, if we do not wish to specify the group we will say that $\Gamma$ is \emph{$s$-geodesic transitive} or \emph{geodesic transitive} respectively, and similarly for the other properties. The following are some examples of geodesic transitive graphs. \begin{examp} \label{geod-deg-exam} {\rm(i)} For any $n\geq 1$, both the complete graph $\K_n$ and the complete bipartite graph $\K_{n,n}$ are geodesic transitive. {\rm (ii)} Let $\Gamma=\K_{m[b]}$ be a complete multipartite graph with $m\geq 3$ parts of size $b\geq 2$. Then $A:=\Aut \Gamma=S_b\wr S_m$ is transitive on $V\Gamma$ and on the set of arcs $\geod_1(\Gamma)$. Let $(u,v)$ be an arc of $\Gamma$. Then $|\Gamma_2(u)\cap \Gamma(v)|=b-1$ and $A_{u,v}$ induces $S_{b-1}$ on $\Gamma_2(u)\cap \Gamma(v)$, so $\Gamma$ is $2$-geodesic transitive. Since the diameter of $\Gamma$ is $2$, it follows that $\Gamma$ is geodesic transitive. \end{examp} We also give three infinite families of geodesic transitive graphs with arbitrarily large diameter in Section 3. If a graph $\Gamma$ is $(G,s)$-arc transitive and $s\leq \diam(\Gamma)$, then $s$-geodesics and $s$-arcs are the same, and $\Gamma$ is $(G,s)$-geodesic transitive. However, $\Gamma$ can be $(G,s)$-geodesic transitive but not $(G,s)$-arc transitive. The girth of $\Gamma$, denoted by $\girth(\Gamma)$, is the length of a shortest cycle in $\Gamma$. 
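The counting step in Example~\ref{geod-deg-exam}(ii) can be checked directly for small parameters. The following sketch (our own, with $m=3$ and $b=2$, so that $\K_{3[2]}$ is the octahedron) computes $|\Gamma_2(u)\cap\Gamma(v)|$ for an arc $(u,v)$ via breadth-first search.

```python
# Complete multipartite graph K_{m[b]}: m parts of size b, with two
# vertices adjacent iff they lie in different parts.
def complete_multipartite(m, b):
    V = [(i, j) for i in range(m) for j in range(b)]
    adj = {v: {u for u in V if u[0] != v[0]} for v in V}
    return V, adj

def dist(adj, u, v):
    # breadth-first search distance from u to v (graph assumed connected)
    seen, frontier, d = {u}, {u}, 0
    while v not in seen:
        frontier = {w for x in frontier for w in adj[x]} - seen
        seen |= frontier
        d += 1
    return d

m, b = 3, 2
V, adj = complete_multipartite(m, b)
u, v = (0, 0), (1, 0)              # (u, v) is an arc
gamma2_u = {w for w in V if dist(adj, u, w) == 2}
print(len(gamma2_u & adj[v]))      # |Gamma_2(u) ∩ Gamma(v)| = b - 1 = 1
```

Here $\Gamma_2(u)$ consists precisely of the other vertex in the part of $u$, and the graph indeed has diameter $2$.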
\begin{lemma}\label{agd-lemma-1} Suppose that a graph $\Gamma$ is $(G,s)$-geodesic transitive for some $G\leq \Aut \Gamma$ with $2\leq s\leq \diam(\Gamma)$. Then $\Gamma$ is $(G,s)$-arc transitive if and only if $\girth(\Gamma)\geq 2s$. \end{lemma} {\bf Proof.} Note that each $i$-geodesic is an $i$-arc, for $1\leq i\leq \diam(\Gamma)$. Thus $\Gamma$ is $(G,s)$-arc transitive if and only if each $s$-arc is an $s$-geodesic, and this is true if and only if $\girth(\Gamma)\geq 2s$. $\Box$ \bigskip Examples of graphs which do not have the property of Lemma \ref{agd-lemma-1} for $s=2$ are the complete multipartite graphs $\K_{m[b]}$ with $m\geq 3$ parts of size $b\geq 2$, which have girth 3, are 2-geodesic transitive (see Example \ref{geod-deg-exam} (ii)), but are not 2-arc transitive (by Lemma \ref{agd-lemma-1}). \medskip In our study of Paley graphs we use the concept of a Cayley graph. For a finite group $G$, and a subset $S$ of $G$ such that $1\notin S$ and $S=S^{-1}$, the \emph{Cayley graph} $\Cay(G,S)$ of $G$ with respect to $S$ is the graph with vertex set $G$ and edge set $\{\{g,sg\} \,|\,g\in G,s\in S\}$. The group $R(G)=\{\rho_x|x\in G\}$, where $\rho_x:g\mapsto gx$, is a subgroup of the automorphism group of $\Cay(G,S)$ and acts regularly on the vertex set; that is to say, $R(G)$ is transitive and only the identity $\rho_{1_G}$ fixes a vertex. It follows that $\Cay(G,S)$ is vertex transitive. The following is a criterion for a connected graph to be a Cayley graph. \begin{lemma}{\rm(\cite[Lemma 16.3]{Biggs-1})}\label{cayley-1} Let $\Gamma$ be a connected graph. Then a subgroup $H$ of $\Aut \Gamma$ acts regularly on the vertices if and only if $\Gamma$ is isomorphic to a Cayley graph $\Cay(H,S)$ for some set $S$ which generates $H$. \end{lemma} For a graph $\Gamma$, the \emph{$k$-distance graph} $\Gamma_k$ of $\Gamma$ is the graph with vertex set $V\Gamma$, such that two vertices are adjacent if and only if they are at distance $k$ in $\Gamma$. 
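As an illustration of the Cayley construction, the Paley graph $P(q)$ of Section 4 arises as $\Cay(G,S)$ with $G$ the additive group of the field of order $q$ and $S$ the set of nonzero squares. The sketch below is restricted to prime $q$ (so that arithmetic modulo $q$ suffices; prime powers would require genuine finite-field arithmetic); it builds $P(13)$ and checks the connection set condition $S=S^{-1}=-S$ and the valency $(q-1)/2$.

```python
# Paley graph P(q) for a prime q ≡ 1 (mod 4), realized as the Cayley
# graph Cay(Z_q, S) with S the nonzero squares; a sketch for prime q only.
def paley(q):
    S = {(x * x) % q for x in range(1, q)}   # nonzero quadratic residues
    # q ≡ 1 (mod 4) makes -1 a square, so S = -S and the graph is undirected
    assert q % 4 == 1 and all((-s) % q in S for s in S)
    return {g: {(g + s) % q for s in S} for g in range(q)}

adj = paley(13)
print(sorted({len(nbrs) for nbrs in adj.values()}))   # regular of valency (13-1)/2 = 6
```

By Lemma~\ref{cayley-1}, the translations $\rho_x$ already witness vertex transitivity; the further transitivity properties discussed in Section 4 need the multiplicative automorphisms as well.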
If $d=\diam(\Gamma)\geq 2$, and $\Gamma_d$ is a disjoint union of complete graphs, then $\Gamma$ is said to be an \emph{antipodal graph}. Suppose that $\Gamma$ is an antipodal distance-transitive graph of diameter $d$. Then we may partition its vertices into sets, called \emph{fibres}, such that any two distinct vertices in the same fibre are at distance $d$ and two vertices in different fibres are at distance less than $d$. Godsil, Liebler and Praeger gave a complete classification of antipodal distance transitive covers of complete graphs. The following lemma follows directly from the Main Theorem of \cite{GLP}. We shall apply it to characterise the examples in Theorem \ref{gt-primeval}. \begin{lemma}{\rm(\cite{GLP})}\label{anticover-lemma-1} Suppose that $G$ is a distance transitive automorphism group of a finite nonbipartite graph $\Gamma$. Suppose further that $\Gamma$ is antipodal with fibres of size $ 2$ and with antipodal quotient the complete graph $K_n$. Then either $\Gamma=K_{n[2]}$ of diameter $2$, or $\Gamma$ has diameter $3$ and one of the following holds. {\rm I.} $\Gamma$ appears in {\rm\cite{Taylor-1}} and is one of \ \ \ \ {\rm (a)} $n=2^{2m-1}\pm 2^{m-1}, G\leq 2\times Sp(2m,2)$ for some $m\geq 3$. \ \ \ \ {\rm (b)} $n=2^{2a+1}+1$, $G\leq 2\times \Aut(R(q))$ for some $a\geq 1$. \ \ \ \ {\rm (c)} $n=176$, $HiS\leq G\leq 2\times HiS$, or $n=276$, $Co_3\leq G\leq 2\times Co_3$. \ \ \ \ {\rm (d)} $n=q^3+1$, $G\leq 2\times P\Gamma U(3,q^2)$ for some $q>3$. \ \ \ \ {\rm (e)} $n=q+1$, $G\leq 2\times P\Sigma L(2,q)$ for some $q\equiv 1 \pmod 4$. {\rm II.} $\Gamma$ is a graph appearing in Example {\rm 3.6} of {\rm\cite{GLP}}, and $n=q^{2d}$ with $q$ even. \end{lemma} \medskip \noindent \textbf{Partitions and quotient graphs:} \quad Let $G$ be a group of permutations acting on a set $\Omega$. 
A \emph{$G$-invariant partition} of $\Omega$ is a partition $\mathcal{B}=\{B_1,B_2,\cdots,B_n\}$ such that for each $g\in G$ and each $B_i \in \mathcal{B}$, the image $B_{i}^g\in \mathcal{B}$. The parts of $\mathcal{B}$ are often called \emph{blocks} of $G$ on $\Omega$. For a $G$-invariant partition $\mathcal{B}$ of $\Omega$, we have two smaller transitive permutation groups, namely the group $G^{\mathcal{B}}$ of permutations of $\mathcal{B}$ induced by $G$, and the group $G_{B_i}^{B_i}$ induced on $B_i$ by its setwise stabiliser $G_{B_i}$, where $B_i\in \mathcal{B}$. Let $\Gamma$ be a graph, and $G\leq \Aut \Gamma$. Suppose $\mathcal{B}=\{B_1,B_2,\cdots,B_n \}$ is a $G$-invariant partition of $V\Gamma$. The \emph{quotient graph} $\Gamma_{\mathcal{B}}$ of $\Gamma$ relative to $\mathcal{B}$ is defined to be the graph with vertex set $\mathcal{B}$ such that $\{B_i,B_j\}$ is an edge of $\Gamma_{\mathcal{B}}$ if and only if there exist $x\in B_i, y\in B_j$ such that $\{x,y\} \in E\Gamma$. We say that $\Gamma_{\mathcal{B}}$ is \emph{nontrivial} if $1< |\mathcal{B}|< |V\Gamma|$. The graph $\Gamma$ is said to be a \emph{cover} of $\Gamma_{\mathcal{B}}$ if for each edge $\{B_i,B_j\}$ of $\Gamma_{\mathcal{B}}$ and $v\in B_i$, we have $|\Gamma(v)\cap B_j|=1$. \section{Proof of Theorem \ref{gtnotat}} In this section, we first describe three families of geodesic transitive graphs, each with unbounded diameter and valency, namely the Johnson graphs, Hamming graphs and Odd graphs. Graphs in these families are $s$-arc transitive but not $(s+1)$-arc transitive for various $s\leq 3$. In the last subsection, we give the proof of Theorem \ref{gtnotat}. In the following discussion, for integers $i,j$, we define $[i,j]=\{n\in Z\,|\,i\leq n\leq j\}$. Note that $[i,i]=\{i\}$ and $[i,j]=\emptyset$ when $i>j$. \subsection{Johnson graphs} Let $\Omega=[1, n]$ where $n\geq 3$, and let $1\leq k\leq [\frac{n}{2}]$ where $[\frac{n}{2}]$ is the integer part of $\frac{n}{2}$. 
Then the \emph{Johnson graph} $J(n,k)$ is the graph whose vertex set is the set of all $k$-subsets of $\Omega$, and two vertices $u$ and $v$ are adjacent if and only if $|u\cap v|=k-1$. Let $\Gamma=J(n,k)$. By \cite[Section 9.1]{BCN}, $\Gamma$ has the following properties: $\Gamma$ has diameter $k$, valency $k(n-k)$, $\Aut \Gamma\cong S_n\times Z_2$ when $n=2k\geq 4$, otherwise $\Aut \Gamma \cong S_n$, $\Gamma$ is distance transitive, and for any two vertices $u$ and $v$, \medskip $u\in \Gamma_j(v)$ where $j\leq k$ if and only if $|u\cap v|=k-j$. \ \, \, \ \ \ $(J*)$ \medskip Note that for $k=1$, $J(n,k)\cong K_n$ which has diameter 1. So in the following discussion, we assume that $k\geq 2$ and $n\geq 4$. \begin{prop}\label{j(n,k)} Let $\Gamma=J(n,k)$ where $2\leq k\leq [\frac{n}{2}]$ and $n\geq 4$. Then $\Gamma$ has girth $3$, is geodesic transitive, but not $2$-arc transitive. \end{prop} {\bf Proof.} Since $k\geq 2$ and $n\geq 4$, it follows that $u_1=[1,k]$, $u_2=\{1\}\cup [3,k+1]$ and $u_3=[2,k+1]$ are three vertices of $\Gamma$. By $(J*)$, they are pairwise adjacent, so $\Gamma$ has girth 3. Hence $\Gamma$ is not 2-arc transitive. We will prove that $\Gamma$ is geodesic transitive. Since $\Gamma$ is distance transitive, it follows that $\Gamma$ is 1-geodesic transitive. Now suppose that $\Gamma$ is $(j-1)$-geodesic transitive where $j\in [2,k]$. Let $\mathcal{V} =(v_0,v_1,\cdots,v_{j-1},v_j)$ where $v_i=[1,k-i] \cup [k+1,k+i]$ for each $i\in [0,j]$. Then by $(J*)$, $\mathcal{V}$ is a $j$-geodesic. Let $\mathcal{U}$ be any other $j$-geodesic. Then since $\Gamma$ is $(j-1)$-geodesic transitive, there exists $\a \in \Aut \Gamma$ such that $\mathcal{U}^\a=(v_0,v_1,\cdots,v_{j-1},u_j)$ which is also a $j$-geodesic, and in particular $u_j\in \Gamma_j(v_0)$, so $|v_0\cap u_j|=k-j$ by $(J*)$. Since $v_{j-1}$ and $u_j$ are adjacent, $|v_{j-1}\cap u_j|=k-1$. 
Hence there exist a unique $x$ such that $\{x\}=v_{j-1}\setminus u_j$ and a unique $y$ such that $\{y\}=u_{j}\setminus v_{j-1}$. First, if $x\geq k+1$, then $[1,k-(j-1)]\subseteq u_j$, so $|v_0\cap u_j|\geq k-(j-1)$ which contradicts $|v_0\cap u_j|= k-j$. Thus $x\in [1,k-(j-1)]$, and hence $[k+1,k+(j-1)]\subseteq u_j$. Second, if $y\leq k$, then $y\in [k-(j-2),k]$. It follows that $v_0\cap u_j$ contains $([1,k-(j-1)]\cup \{y\})\setminus \{x\}$, a set of size $k-(j-1)$, which also contradicts $|v_0\cap u_j|= k-j$. Thus $y > k$, and hence $y\in [k+j,n]$. Therefore, $u_j=([1,k-(j-1)]\setminus \{x\})\cup [k+1,k+(j-1)] \cup \{y\}$. Let $A=\Aut \Gamma$. Since $[1,k-(j-1)]\subseteq v_0\cap v_1\cap \dots \cap v_{j-1}$ and $[k+j,n]\subseteq \Omega \setminus (v_0\cup v_1\cup \dots \cup v_{j-1})$, and since $Sym(\Omega)\leq A $, it follows that $Sym([1, k-(j-1)])\times Sym([k+j,n]) \leq A_{v_0,v_1,\cdots,v_{j-1}}$. Hence there exists $\b \in A_{v_0,v_1,\cdots,v_{j-1}}$ such that $x^\b=k-(j-1)$ and $y^\b=k+j$, and so $(v_0,v_1,v_2,\cdots,u_j)^\b=(v_0,v_1,v_2,\cdots,v_{j-1},v_j)=\mathcal{V}$, that is, $\mathcal{U}^{\a \b}=\mathcal{V}$. This completes the induction. Thus $\Gamma$ is geodesic transitive. $\Box$ \subsection{Hamming graphs} Let $n\geq 2$ and let $d$ be a positive integer. Then the \emph{Hamming graph} $\H(d,n)$ has vertex set $Z_n^d=Z_n\times Z_n\times \cdots \times Z_n$, seen as a module over the ring $Z_n=[0,n-1]$, and two vertices $u,v$ are adjacent if and only if $u-v$ has exactly one non-zero entry. For a vertex $u\in V\H(d,n)$, we denote by $|u|$ the number of its non-zero entries. Then by \cite[Section 9.2]{BCN}, $\H(d,n)$ has diameter $d$, valency $d(n-1)$, is distance transitive, $\Aut \H(d,n)\cong S_n\wr S_d$, and for two vertices $u,v$, \medskip $u\in \Gamma_k(v)$ where $k\leq d$ if and only if $|u-v|=k$. \ \, \, \ \ \ $(H*)$ \begin{prop}\label{Ham-2-trans} Let $\Gamma=\H(d,n)$ with $d\geq 2$, $n\geq 2$. Then $\Gamma$ is geodesic transitive. 
Moreover, if $n=2$ and $d\geq 3$ then $\Gamma$ has girth $4$ and is $2$-arc transitive but not $3$-arc transitive, while if $n\geq 3$ then $\Gamma$ has girth $3$ and is not $2$-arc transitive. \end{prop} {\bf Proof.} Since $\Gamma$ is distance transitive, it follows that $\Gamma$ is 1-geodesic transitive. Now, suppose that $\Gamma$ is $(j-1)$-geodesic transitive where $j\in [2,d]$. Let $\mathcal{V}=(v_0,v_1,\cdots,v_j)$ where for each $i\in [0,j]$, $v_i=(1,\cdots,1,0,\cdots,0)$, the first $i$ entries are equal to $1$ and the last $(d-i)$ entries are equal to 0. Then by $(H*)$, $\mathcal{V}$ is a $j$-geodesic. Suppose that $\mathcal{U}$ is any other $j$-geodesic of $\Gamma$. Since $\Gamma$ is $(j-1)$-geodesic transitive, there exists $\a \in \Aut \Gamma$ such that $\mathcal{U}^\a=(v_0,v_1,\cdots,v_{j-1},u_j)$ for some $u_j$. Suppose that the last $d-(j-1)$ entries of $u_j$ are 0. Since $v_{j-1},u_j$ are adjacent, $|u_j-v_{j-1}|=1$. Hence one of the first $j-1$ entries of $u_j$ is equal to $x$ and the rest are 1 for some $x\neq 1$, while the last $d-(j-1)$ entries are 0. Thus $|u_j-v_0|=j-2$ or $j-1$ according as $x$ is 0 or not. However by $(H*)$, $|u_j-v_0|=j$ which is a contradiction. Thus, $u_j$ has at least one non-zero entry in the last $d-(j-1)$ entries. Further, since the last $d-(j-1)$ entries of $v_{j-1}$ are 0 and $|u_j-v_{j-1}|=1$, it follows that for some $x\in Z_n\setminus \{0\}$, the first $(j-1)$ entries of $u_j$ are 1, and $x$ is the unique non-zero entry in the last $d-(j-1)$ entries. Moreover, since for each $i\in [0,j-1]$, the last $d-(j-1)$ entries of $v_i$ are 0, it follows that $Sym(Z_n\setminus \{0\})\wr Sym([j,d])\leq A_{v_0,\cdots,v_{j-1}}$. Thus there exists $\b \in A_{v_0,\cdots,v_{j-1}}$ such that $u_j^\b=(1,\cdots,1,1,0,\cdots,0)=v_j$. Therefore, $\mathcal{U}^{\a \b}=\mathcal{V}$, and hence $\Gamma$ is $j$-geodesic transitive. By induction, $\Gamma$ is geodesic transitive. 
If $n=2$, then for each vertex $u$ and any two vertices $v,w$ of $\Gamma(u)$, $|v-w|=2$, that is, $v,w$ are not adjacent, so the girth of $\Gamma$ is not 3. Further, $u_1=(0,0,\cdots,0)$, $u_2=(1,0,\cdots,0)$, $u_3=(0,1,0,\cdots,0)$ and $u_4=(1,1,0,\cdots,0)$ are four vertices of $\Gamma$ such that $(u_1,u_2,u_4,u_3,u_1)$ is a 4-cycle, so the girth is 4. Now 2-arcs and 2-geodesics are the same, and since $\diam(\Gamma)=d\geq 3$, it follows that $\Gamma$ is $2$-arc transitive but not $3$-arc transitive. If $n\geq 3$, then $(u_1,u_2,u_3,u_1)$ is a triangle where $u_1=(0,0,\cdots,0)$, $u_2=(1,0,0,\cdots,0)$ and $u_3=(2,0,0,\cdots,0)$, so $\Gamma$ has girth $3$, that is, $\Gamma$ is arc-transitive but not $2$-arc transitive. $\Box$ \subsection{Odd graphs} Let $\Omega=[1,2k+1]$ where $k\geq 1$. Then the \emph{Odd graph} $O_{k+1}$ is the graph whose vertex set is the set of all $k$-subsets of $\Omega$, and two vertices are adjacent if and only if they are disjoint. By \cite[Section 9.1]{BCN}, $O_{k+1}$ is distance transitive, its diameter is $k$, its valency is $k+1$, and $\Aut O_{k+1} \cong S_{2k+1}$. By \cite{Biggs-2}, for two vertices $u,v$ of $O_{k+1}$, \medskip $u\in \Gamma_i(v)$ if and only if $|u\cap v|=j$ when $i=2j+1$; $|u\cap v|=k-j$ when $i=2j$. \ \ \ \ \ \ $(O*)$ \medskip Note that if $k=1$, then $O_2\cong C_3$ is $s$-arc transitive for all $s\geq 1$. So we will assume that $k\geq 2$. \begin{prop}\label{odd-d-not-g} Let $\Gamma=O_{k+1}$ with $k\geq 2$. Then $\Gamma$ is geodesic transitive, and is $3$-arc transitive but not $4$-arc transitive. \end{prop} {\bf Proof.} Since $\Gamma$ is distance transitive, it follows that $\Gamma$ is 1-geodesic transitive. Now, suppose that $\Gamma$ is $(j-1)$-geodesic transitive where $j\in [2,k]$. Let $\mathcal{V}=(v_0,v_1,\cdots,v_j)$ where $v_{2i}=[1,k-i]\cup [2k-i+2,2k+1]$ and $v_{2i+1}=[k-i+1,2k-i]$ for $i\geq 0$. Then by $(O*)$, $\mathcal{V}$ is a $j$-geodesic. 
Suppose that $\mathcal{U}$ is any other $j$-geodesic of $\Gamma$. Then since $\Gamma$ is $(j-1)$-geodesic transitive, there exists $\a \in A:=\Aut \Gamma$ such that $\mathcal{U}^\a=(v_0,v_1,\cdots,v_{j-1},u_j)$ for some $u_j$. First, suppose that $j=2l$ is even. Let $\Delta_1=[1,k-l+1]$, $\Delta_2=[k-l+2,k]$, $\Delta_3=[k+1,2k-l+1]$ and $\Delta_4=[2k-l+2,2k+1]$. Then $\Omega=\Delta_1\cup\Delta_2\cup\Delta_3\cup\Delta_4$, $v_{j-1}=\Delta_2 \cup \Delta_3$ and $v_j=(\Delta_1\cup \Delta_4)\setminus \{k-l+1\}$. Since $v_{j-1}$ and $u_j$ are adjacent, it follows that $v_{j-1}\cap u_j=\emptyset$, and hence $u_j \subseteq \Delta_1 \cup \Delta_4$. Since $u_j \in \Gamma_j(v_0)$ and $j$ is even, by $(O*)$, $|v_0\cap u_j|=k-l$. Hence $|u_j \setminus v_0|=l$. Since $\Delta_1 \subseteq v_0$, it follows that $u_j \setminus v_0\subseteq \Delta_4$. Since $|u_j \setminus v_0|=l=|\Delta_4|$, it follows that $u_j \setminus v_0= \Delta_4$, so $\Delta_4 \subseteq u_j$. Hence $u_j=(\Delta_1\cup \Delta_4)\setminus \{x\}$ for some $x\in \Delta_1$. Since for each $m\in [0,j-1]$, $\Delta_1\subseteq v_m$ when $m$ is even, and $\Delta_1\cap v_m=\emptyset$ when $m$ is odd, it follows that $Sym(\Delta_1)\leq A_{v_0,v_1,\cdots,v_{j-1}}$. There exists $\b \in Sym(\Delta_1)$ such that $x^\b=k-l+1$. Hence $u_j^\b=v_j$ and so $\mathcal{U}^{\a \b}=\mathcal{V}$. Second, suppose that $j=2l+1$ is odd. We let $\Delta_1=[1,k-l]$, $\Delta_2=[k-l+1,k]$, $\Delta_3=[k+1,2k-l+1]$ and $\Delta_4=[2k-l+2,2k+1]$. Then $\Omega=\Delta_1\cup\Delta_2\cup\Delta_3\cup\Delta_4$, $v_{j-1}=\Delta_1 \cup \Delta_4$ and $v_j=(\Delta_2\cup \Delta_3)\setminus \{2k-l+1\}$, so $u_j \subseteq \Delta_2\cup \Delta_3$. Since $v_0\cap \Delta_3=\emptyset$, it follows that $v_0\cap u_j \subseteq \Delta_2$. Further since $j$ is odd, by $(O*)$, $|v_0\cap u_j|=l=|\Delta_2|$. Hence $v_0\cap u_j = \Delta_2$. Thus $u_{j}=(\Delta_2 \cup \Delta_3)\setminus \{x\}$ for some $x\in \Delta_3$. 
Since for each $m\in [0,j-1]$, $\Delta_3\subseteq v_m$ when $m$ is odd, and $\Delta_3\cap v_m=\emptyset$ when $m$ is even, it follows that $Sym(\Delta_3)\leq A_{v_0,v_1,\cdots,v_{j-1}}$. There exists $\b \in Sym(\Delta_3)$ such that $x^\b=2k-l+1$, and hence $u_j^\b=v_j$, and $\mathcal{U}^{\a \b}=\mathcal{V}$. Thus $\Gamma$ is $j$-geodesic transitive. Therefore, by induction $\Gamma$ is geodesic transitive. If $k=2$, then $\Gamma$ is the Petersen graph, which has girth $5$ and is $3$-arc transitive but not $4$-arc transitive. If $k\geq 3$, then by \cite[Section 9.1]{BCN}, $\Gamma$ has girth $6$, that is, 3-arcs and 3-geodesics are the same. This, together with geodesic transitivity, shows that $\Gamma$ is 3-arc transitive. Let $k=3$ and $v_0=\{1,2,3\}$, $v_1=\{4,5,6\}$, $v_2=\{1,2,7\}$, $v_3=\{3,4,5\}$, $v_4=\{1,2,6\}$ and $v_5=\{1,6,7\}$. Then $\mathcal{W}_1=(v_0,v_1,v_2,v_3,v_4)$ and $\mathcal{W}_2=(v_0,v_1,v_2,v_3,v_5)$ are two 4-arcs, $d_{\Gamma}(v_0,v_4)=2$ and $d_{\Gamma}(v_0,v_5)=3$. So there is no automorphism mapping $\mathcal{W}_1$ to $\mathcal{W}_2$, and hence $\Gamma$ is not 4-arc transitive. If $k\geq 4$, then $\diam(\Gamma)=k\geq 4$ and some 4-arcs lie in 6-cycles and so are not 4-geodesics. Hence $\Gamma$ is $3$-arc transitive but not $4$-arc transitive. $\Box$ \subsection{Proof of Theorem \ref{gtnotat}} First we collect information about the geodesic transitivity of several 4-arc transitive graphs. \begin{lemma}\label{biggs-smith-foster} The Biggs-Smith graph and the Foster graph have valency and diameter as in Table 1, and are geodesic transitive. Moreover, for $s$ as in Table {\rm 1}, these graphs are $s$-arc transitive but not $(s+1)$-arc transitive. \end{lemma} {\bf Proof.} Let $(\Gamma,s)\in \{(\text{Biggs-Smith graph},4), (\text{Foster graph},5)\}$. Then by \cite[p.221]{BCN} and \cite[Theorem 1.1]{Weiss-2}, $\Gamma$ has valency and diameter as in Table 1, and for $s$ as in Table 1, $\Gamma$ is $s$-arc transitive but not $(s+1)$-arc transitive. 
Thus $\Gamma$ is $s$-geodesic transitive. Let $d=\diam(\Gamma)$ and $(v_0,v_1,\cdots,v_d)$ be a $d$-geodesic. Then $d=s+3$. By \cite[p.221]{BCN}, $|\Gamma(v_j)\cap \Gamma_{j+1}(v_0)|= 1$ for every $j=d-3,d-2,d-1$, and it follows that $\Gamma$ is geodesic transitive. $\Box$ \bigskip As in \cite[p.84]{GR} we define a \emph{generalized polygon}, or more precisely, a \emph{generalized $d$-gon}, as a bipartite graph with diameter $d$ and girth $2d$. The generalized polygons related to the Lie type groups $A_2(q),B_2(q)$ and $G_2(q)$ ($q$ is prime power) are \emph{classical generalized polygons}, and are denoted by $\Delta_{3,q}$, $\Delta_{4,q}$ and $\Delta_{6,q}$, respectively. They are regular of valency $q+1$. \begin{lemma}\label{strans-gp-gt} The only distance transitive generalized polygons of valency at least $3$ that are $s$-arc transitive but not $(s+1)$-arc transitive, for some $s\geq 4$, are $\Delta_{s-1,q}$ where $(s,q)\in S=\{(4,q),(5,2^m),(7,3^m)|$ $q$ is a prime power and $m$ is a positive integer $\}$. Moreover, all these graphs are geodesic transitive. \end{lemma} {\bf Proof.} Let $\Gamma$ be a distance transitive generalized polygon of valency at least 3 that is $s$-arc transitive but not $(s+1)$-arc transitive for some $s\geq 4$. Let $g$ be its girth. Suppose that $s<\frac{g-2}{2}$. Then for any two vertices $u,v$ at distance $s+1$, there exists a unique $(s+1)$-arc between them. Since $\Gamma$ is distance transitive, it follows that $\Gamma$ is $(s+1)$-arc transitive, which contradicts our assumption. Thus $s \geq \frac{g-2}{2}$. Since $\Gamma$ is a generalized polygon, $g$ is even. By \cite[Theorem 1.1]{Weiss-2}, $s\in \{ \frac{g+2}{2}, \frac{g}{2},\frac{g-2}{2} \}$. If $s= \frac{g+2}{2}$ or $\frac{g}{2}$, then \cite[Theorem 1.1]{Weiss-2} shows that $\Gamma$ is one of $\Delta_{s-1,q}$ where $(s,q)\in S$. Let $A=\Aut \Gamma$. Since $\Gamma$ is distance transitive, $A_u$ is transitive on $\Gamma_{s+1}(u)$ for each vertex $u$. 
Thus, if $s= \frac{g-2}{2}$, by \cite[Theorem 1.1]{Weiss-2}, $\Gamma$ is the Biggs-Smith graph, which is not a generalized polygon, a contradiction, so this case does not arise. Moreover, since each $\Delta_{s-1,q}$ has diameter $s-1$, it follows that all these graphs are geodesic transitive. $\Box$ \medskip These lemmas allow us to specify precisely the geodesic transitive graphs which are 4-arc transitive. \begin{prop}\label{dt-4at-diam} Let $\Gamma$ be a regular graph of valency at least $3$. Then $\Gamma$ is geodesic transitive and $s$-arc transitive but not $(s+1)$-arc transitive for some $s\geq 4$ if and only if $\Gamma$ is in one of the lines of Table 1. Table 1 also gives the valency, integer $s$ and diameter for each graph. In particular, $\diam(\Gamma)\leq 8$. \begin{table}[!hbp]\caption{Geodesic transitive and $s$-arc transitive graphs that are not $(s+1)$-arc transitive} \medskip \centering \begin{tabular}{|c|c|c|c|} \hline Graph $\Gamma$ & Valency & $s$ & Diameter \\ \hline Foster graph & $3$ & $5$ & $8$ \\ \hline Biggs-Smith graph & $3$ & $4$ & $7$ \\ \hline $\Delta_{3,q}$, $q$ is a prime power & $q+1$ & $4$ & $3$ \\ \hline $\Delta_{4,q}$, $q=2^m$, $m$ is a positive integer & $q+1$ & $5$ & $4$ \\ \hline $\Delta_{6,q}$, $q=3^m$, $m$ is a positive integer & $q+1$ & $7$ & $6$ \\ \hline \end{tabular} \end{table} \end{prop} {\bf Proof.} By Corollary 1.2 of \cite{Weiss-2}, if $\Gamma$ is distance transitive, $s$-arc transitive but not $(s+1)$-arc transitive for some $s\geq 4$, then $\Gamma$ is either the Foster graph or the Biggs-Smith graph, or a generalized polygon. The result now follows from Lemmas \ref{biggs-smith-foster} and \ref{strans-gp-gt}. $\Box$ \bigskip \noindent {\bf Proof of Theorem \ref{gtnotat}.} It follows from Propositions \ref{j(n,k)}, \ref{Ham-2-trans}, \ref{odd-d-not-g} and \ref{dt-4at-diam} that, for each $s\in \{1,2,3,4,5,7\}$, there are infinitely many geodesic transitive graphs that are $s$-arc transitive but not $(s+1)$-arc transitive.
Further, for each $s\in \{1,2,3\}$, Propositions \ref{j(n,k)}, \ref{Ham-2-trans} and \ref{odd-d-not-g} show that there are such graphs with arbitrarily large diameter. Finally, by Proposition \ref{dt-4at-diam}, this is not the case for $s\in \{4,5,7\}$. Therefore geodesic transitive graphs that are $s$-arc transitive but not $(s+1)$-arc transitive with arbitrarily large diameter occur only for $s\in \{1,2,3\}$. $\Box$ \section{Paley graphs} In this section, we discuss a special family of connected Cayley graphs, namely the Paley graphs, which were first defined by Paley in 1933, see \cite{Paley-1}. We prove that the Paley graph $P(q)$ is distance transitive but not geodesic transitive whenever $q\geq 13$. Let $q=p^e$ be a prime power such that $q\equiv 1 \pmod{4}$. Let $F_q$ be a finite field of order $q$. The \emph{Paley graph} $P(q)$ is the graph with vertex set $F_q$, and two distinct vertices $u,v$ are adjacent if and only if $u-v$ is a nonzero square in $F_q$. The congruence condition on $q$ implies that $-1$ is a square in $F_q$, and hence $P(q)$ is an undirected graph. Note that the field $F_q$ has $\frac{q-1}{2}$ elements which are nonzero squares, so $P(q)$ has valency $\frac{q-1}{2}$. Moreover, $P(q)$ is a Cayley graph for the additive group $G=F_{q}^+\cong Z_{p}^e$. Let $w$ be a primitive element of $F_q$. Then $S=\{w^2,w^4,\cdots,w^{q-1}=1\}$ is the set of nonzero squares of $F_q$, and $P(q)=\Cay(G,S)$. Define $\tau:F_q\to F_q$, $x\mapsto x^p$. Then $\tau$ is an automorphism of the field $F_q$, called the \emph{Frobenius automorphism}, and $\Aut F_q=\langle \tau \rangle\cong Z_e$. By \cite{Carlitz} (or see \cite{Lim-Praeger-1}), $\Aut P(q)=(G:\langle w^2\rangle).\langle \tau \rangle\cong (Z_p^e:Z_{\frac{q-1}{2}}).Z_e\leq \A\Gamma L(1,q)$. Let $S'$ be the set of all nonsquare elements of $G$. Then $|S'|=\frac{q-1}{2}$. Define the Cayley graph $\Sigma=\Cay(G,S')$ where two vertices $u,v$ are adjacent if and only if $u-v\in S'$.
Then $\Sigma$ is the complement of the Paley graph $P(q)$. Further, multiplication by $w$ induces an isomorphism $\Sigma\cong P(q)$, see \cite{Sachs}. Now, we cite a property of Paley graphs. \begin{lemma}{\rm(\cite[p.221]{GR})}\label{val-p-rem1} Let $\Gamma=P(q)$, where $q$ is a prime power such that $q\equiv 1 \pmod 4$. Let $u,v$ be distinct vertices of $\Gamma$. If $u,v$ are adjacent, then $|\Gamma(u)\cap \Gamma(v)|=\frac{q-5}{4}$; if $u,v$ are not adjacent, then $|\Gamma(u)\cap \Gamma(v)|=\frac{q-1}{4}$. \end{lemma} \subsection{Proof of Theorem \ref{exam-valp-ha}} Let $F_q, G$ and $w$ be as above, and let $G^*=\langle w \rangle$ be the multiplicative group of $F_q$. As discussed above, $P(q)=\Cay(G,S)$ where $S=\{w^2,w^4,\cdots,w^{q-1}=1\}$, and $\Aut P(q)=(G:\langle w^2\rangle).\langle \tau \rangle\cong (Z_p^e:Z_{\frac{q-1}{2}}).Z_e$. Now we prove Theorem \ref{exam-valp-ha}. \bigskip \noindent {\bf Proof of Theorem \ref{exam-valp-ha}.} Let $\Gamma=P(q)$ and $A=\Aut \Gamma$. Let $u=0 \in G$. Then $A_{u}=\langle w^2 \rangle.\langle \tau \rangle$ has orbits $\{0\}$, $S$ and $S'=G\setminus (\{0\}\cup S)$ on vertices. Now $S=\Gamma(u)$ and as $\Gamma$ is connected, $\Gamma_2(u)$ must be the other orbit $S'$. In particular, $\Gamma$ has diameter 2 and is distance transitive and arc transitive. Suppose that $v\in \Gamma(u)$. By Lemma \ref{val-p-rem1}, $|\Gamma(u)\cap \Gamma(v)|=\frac{q-5}{4}$. Thus, $|\Gamma_2(u)\cap \Gamma(v)|=|\Gamma(v)|-|\Gamma(u)\cap \Gamma(v)|-1=\frac{q-1}{4}$. If $A$ is transitive on the 2-geodesics of $\Gamma$ then $A_{u,v}$ is transitive on $\Gamma_2(u)\cap \Gamma(v)$. In particular $\frac{q-1}{4}=|\Gamma_2(u)\cap \Gamma(v)|$ divides $|A_{uv}|=e$, so that $\frac{q-1}{4}\leq e$; since $q=p^e$ with $p$ an odd prime, this forces $q=5$ or $q=9$. We consider $P(5)$ and $P(9)$. If $q=5$, then $P(q)\cong C_5$, so $P(q)$ is geodesic transitive. Now, suppose that $q=9=3^2$. The field $F_9$ is $\{a+bx \,|\, a,b \in Z_3\}$ under polynomial addition and multiplication modulo $f(x)=x^2+1$.
The set $S$ is $\{1,2,x,2x\}$, and $\Gamma_2(u)=\{x+1,x+2,2x+1,2x+2\}$. Let $v=1\in S$. Then $\Gamma_2(u)\cap \Gamma(v)=\{x+1,2x+1\}$. Since $A_{uv}=\langle \tau \rangle$ and $(x+1)^{\tau}=x^3+1=2x+1$, it follows that $A_{uv}$ is transitive on $\Gamma_2(u)\cap \Gamma(v)$, and hence $\Gamma$ is $(A,2)$-geodesic transitive. Since $\diam(\Gamma)=2$, $\Gamma=P(9)$ is geodesic transitive. $\Box$ \subsection{Arc-transitive graphs of odd prime order} In this subsection we characterise the Paley graphs $P(p)$, for primes $p$, as arc-transitive graphs of given prime order and given valency. This result is used in our proof of Theorem \ref{gt-primeval}. \begin{prop}\label{arctrans-order-p-1} Let $\Gamma$ be an arc-transitive graph of prime order $p$ and valency $\frac{p-1}{2}$. Then $p\equiv 1 \pmod 4$, $\Aut \Gamma \cong Z_p:Z_{\frac{p-1}{2}}$, and $\Gamma \cong P(p)$. \end{prop} The proof uses the following famous result of Burnside. \begin{lemma}{\rm(\cite[Theorem 3.5B]{DM-1})}\label{val-p-prim-1} Suppose that $G$ is a primitive permutation group of prime degree $p$. Then $G$ is either $2$-transitive, or solvable and $G\leq AGL(1,p)$. \end{lemma} \noindent {\bf Proof of Proposition \ref{arctrans-order-p-1}.} Since $\Gamma$ has valency $\frac{p-1}{2}$, $p$ is an odd prime. Since $\Gamma$ is undirected and arc-transitive, it follows that $\Gamma$ has $p(\frac{p-1}{2})/2$ edges. Since this number $\frac{p(p-1)}{4}$ is an integer and $p$ is odd, it follows that $p\equiv 1 \pmod 4$. Let $A=\Aut \Gamma$. Since $A$ is transitive on $V\Gamma$ and $p$ is a prime, $A$ is primitive on $V\Gamma$. Since $\Gamma$ is neither complete nor empty, $A$ is not $2$-transitive on $V\Gamma$, so it follows by Lemma \ref{val-p-prim-1} that $A< AGL(1,p)=Z_p:Z_{p-1}$. Again by vertex transitivity, $Z_p\leq A$. Thus, $A \cong Z_p:Z_m$ where $Z_m< Z_{p-1}$. Since $Z_p$ is regular on $V\Gamma$, it follows from Lemma \ref{cayley-1} that $\Gamma$ is a Cayley graph for $Z_p$. Thus $\Gamma=\Cay(G,S)$ where $G\cong Z_p$, $S\subseteq G\setminus \{0\}$, $S=S^{-1}$ and $|S|=\frac{p-1}{2}$.
Let $v\in V\Gamma$ be the vertex corresponding to $0\in G$. Then $A_v=Z_m$ acts semiregularly on $G\setminus \{v\}$ with orbits of size $m$. Since $\Gamma$ is arc-transitive, $A_v$ acts transitively on $S$, so $m=|S|=\frac{p-1}{2}$. Thus $A \cong Z_p:Z_{\frac{p-1}{2}}$. Now we may identify $G$ with $F_p^+$ and $v$ with 0. Then $A_v$ is the unique subgroup of order $\frac{p-1}{2}$ of $F_p^*=\langle w\rangle$, that is, $A_v=\langle w^2\rangle$. The $A_v$-orbits in $F_p$ are $\{0\}$, $S_1=\{w^2,w^4,\cdots,w^{p-1}\}$ and $S_2=\{w,w^3,\cdots,w^{p-2}\}$, and so $S=S_1$ or $S_2$, and $\Gamma=P(p)$ or its complement respectively. In either case, $\Gamma\cong P(p)$. $\Box$ \section{ Graphs of prime valency that are 2-geodesic transitive but not 2-arc transitive} We prove Theorem \ref{gt-primeval} in Subsection 5.1, that is, we give a classification of connected 2-geodesic transitive graphs of prime valency which are not 2-arc transitive. Note that the assumption of 2-geodesic transitivity implies that the graph is not complete. Since the identification of the examples is made by reference to deep classification results, we give in Subsection 5.2 an explicit construction of these graphs as coset graphs and verify the properties claimed in Theorem \ref{gt-primeval}. \subsection{ Proof of Theorem \ref{gt-primeval}} We will prove Theorem \ref{gt-primeval} in a series of lemmas. Throughout this subsection we assume that $\Gamma$ is a connected $2$-geodesic transitive but not $2$-arc transitive graph of prime valency $p$ and that $A=\Aut \Gamma$. The first Lemma \ref{val-p-lemma-1} determines some intersection parameters. \begin{lemma}\label{val-p-lemma-1} Let $(v,u,w)$ be a $2$-geodesic of $\Gamma$. Then $p\equiv 1 \pmod 4$, $|\Gamma(v)\cap \Gamma(u)|=|\Gamma_2(v)\cap \Gamma(u)|=\frac{p-1}{2}$ and $|\Gamma(v)\cap \Gamma(w)|$ divides $\frac{p-1}{2}$. 
Moreover, $A_v^{\Gamma(v)} \cong Z_p:Z_{\frac{p-1}{2}}$, $A_{v,u}^{\Gamma(v)} \cong Z_{\frac{p-1}{2}}$ and $A_{v,u}$ is transitive on $\Gamma(v)\cap \Gamma(u)$. \end{lemma} {\bf Proof.} Since $\Gamma$ is $2$-geodesic transitive but not 2-arc transitive, it follows that $\Gamma$ is not a cycle. In particular, $p$ is an odd prime. Let $|\Gamma(v)\cap \Gamma(u)|=x$ and $|\Gamma_2(v)\cap \Gamma(u)|=y$. Then $x+y=|\Gamma(u)\setminus \{v\}|=p-1$. Since $\Gamma$ is 2-geodesic transitive but not 2-arc transitive, it follows that $\girth(\Gamma)=3$, so $x\geq 1$. Since the induced subgraph $[\Gamma(v)]$ is an undirected regular graph with $\frac{px}{2}$ edges, and since $p$ is odd, it follows that $x$ is even. This together with $x+y=p-1$ and the fact that $p-1$ is even, implies that $y$ is also even. Since $\Gamma$ is arc-transitive, $A_v^{\Gamma(v)}$ is transitive on $\Gamma(v)$. Since $p$ is a prime, $A_v^{\Gamma(v)}$ acts primitively on $\Gamma(v)$. By Lemma \ref{val-p-prim-1}, either $A_v^{\Gamma(v)}$ is 2-transitive, or $A_v^{\Gamma(v)}$ is solvable and $A_v^{\Gamma(v)}\leq AGL(1,p)$. Since $\Gamma$ is not complete, it follows that $[\Gamma(v)]$ is not a complete graph. Also since $\girth(\Gamma)=3$, $[\Gamma(v)]$ is not an empty graph and so $A_v^{\Gamma(v)}$ is not 2-transitive. Hence $A_v^{\Gamma(v)}< AGL(1,p)$. Thus $A_v^{\Gamma(v)}\cong Z_p:Z_m$, where $m|(p-1)$ and $m<p-1$. Hence $m\leq \frac{p-1}{2}$. Since $\Gamma$ is vertex transitive, it follows that $A_u^{\Gamma(u)}\cong Z_p:Z_m$, and hence $A_{u,v}^{\Gamma(u)}\cong Z_m$ is semiregular on $\Gamma(u)\setminus \{v\}$ with orbits of size $m$. Since $\Gamma$ is $2$-geodesic transitive, $A_{u,v}^{\Gamma(u)}$ is transitive on $\Gamma_2(v)\cap \Gamma(u)$, and hence $y=|\Gamma_2(v)\cap \Gamma(u)|=m$, so $x=p-1-m=m(\frac{p-1}{m}-1)\geq m$, and $x$ is divisible by $m$. Now again by arc transitivity, $|\Gamma(u)\cap \Gamma(w)|=|\Gamma(u)\cap \Gamma(v)|=x$. 
Since $|\Gamma_2(v)\cap \Gamma(u)|=m$, it follows that $|\Gamma_2(v)\cap \Gamma(u)\cap \Gamma(w)|\leq m-1$. Since $\Gamma(w)\cap \Gamma(u)=(\Gamma(w)\cap \Gamma(u)\cap \Gamma(v))\cup (\Gamma(w)\cap \Gamma(u)\cap \Gamma_2(v))$, it follows that $$x\leq |\Gamma(w)\cap \Gamma(u)\cap \Gamma(v)|+(m-1). \ \ \ \ \ \ \ (*)$$ Let $z=|\Gamma(v)\cap \Gamma(w)|$ and $n=|\Gamma_2(v)|$. Since $\Gamma$ is $2$-geodesic transitive, $z,n$ are independent of $v,w$ and, counting edges between $\Gamma(v)$ and $\Gamma_2(v)$ we have $pm=nz$. Now $z\leq |\Gamma(v)|=p$. Suppose first that $z=p$. Then $m=n$ and $\Gamma(v)=\Gamma(w)$, and so for distinct $w_1,w_2\in \Gamma_2(v)$, $d_{\Gamma}(w_1,w_2)=2$. Since $\Gamma$ is 2-geodesic transitive, it follows that $\Gamma(v)=\Gamma(v')$ whenever $d_{\Gamma}(v,v')=2$. Thus $\diam(\Gamma)=2$, $V\Gamma=\{v\}\cup \Gamma(v)\cup \Gamma_2(v)$ and $|V\Gamma|=1+p+m$. Let $\Delta=\{v\}\cup \Gamma_2(v)$. Then for distinct $v_1,v_1'\in \Delta $, $d_{\Gamma}(v_1,v_1')=2$; for any $v_1''\in V\Gamma\setminus \Delta$, $v_1,v_1''$ are adjacent. Thus, for any $v_1\in \Delta$, $\Delta=\{v_1\}\cup \Gamma_2(v_1)$. It follows that $\Delta$ is a block of imprimitivity for $A$ of size $m+1$. Hence $(m+1)|(p+m+1)$, so $(m+1)|p$. Since $m|(p-1)$, it follows that $m+1=p$ which contradicts the inequality $m\leq \frac{p-1}{2}$. Thus $z<p$, and so $z$ divides $m$. Since $|\Gamma(w)\cap \Gamma(u)\cap \Gamma(v)|\leq z$, it follows from $(*)$ that $x\leq z+(m-1)\leq 2m-1<2m$. Since $x$ is divisible by $m$ and $x\geq m$ we have $x=m$. Thus $2m=x+y=p-1$, so $x=y=m=\frac{p-1}{2}$, and since $x$ is even, $p\equiv 1 \pmod 4$. Also $x=m$ implies that $A_{v,u}$ is transitive on $\Gamma(v)\cap \Gamma(u)$. Finally, since $nz=pm=p(\frac{p-1}{2})$ and $z<p$, it follows that $z$ divides $\frac{p-1}{2}$. $\Box$ \begin{lemma}\label{val-p-lemma-2} For $v\in V\Gamma$, the stabiliser $A_v \cong Z_p:Z_{\frac{p-1}{2}}$. \end{lemma} {\bf Proof.} Suppose that $(v,u)$ is an arc of $\Gamma$. 
Then by Lemma \ref{val-p-lemma-1}, $A_v^{\Gamma(v)}\cong Z_p:Z_{\frac{p-1}{2}}$, and $A_{v,u}^{\Gamma(v)} \cong Z_{\frac{p-1}{2}}$ is regular on $\Gamma(v)\cap \Gamma(u)$. Let $E$ be the kernel of the action of $A_v$ on $\Gamma(v)$. Let $u'\in \Gamma(v)\cap \Gamma(u)$ and $x\in E$. Then $x\in A_{v,u,u'}$. Since $A_{u,v}^{\Gamma(u)} \cong Z_{\frac{p-1}{2}}$ is semiregular on $\Gamma(u)\setminus \{v\}$, it follows that $x$ fixes all vertices of $\Gamma(u)$. Since $x$ also fixes all vertices of $\Gamma(v)$, this argument for each $u\in \Gamma(v)$ shows that $x$ fixes all vertices of $\Gamma_2(v)$. Since $\Gamma$ is connected, $x$ fixes all vertices of $\Gamma$, hence $x=1$. Thus $E=1$, so $A_v \cong Z_p:Z_{\frac{p-1}{2}}$. $\Box$ \begin{lemma}\label{val-p-lemma-3} Let $(v,u,w)$ be a $2$-geodesic of $\Gamma$. Then $|\Gamma(v)\cap \Gamma(w)|=\frac{p-1}{2}$, $|\Gamma_2(v)\cap \Gamma(w)\cap \Gamma(u)|=\frac{p-1}{4}$, $|\Gamma_2(v)|=p$, and $|\Gamma_2(v)\cap \Gamma(w)|=\frac{p-1}{2}$. \end{lemma} {\bf Proof.} Let $z=|\Gamma(v)\cap \Gamma(w)|$ and $n=|\Gamma_2(v)|$. By Lemma \ref{val-p-lemma-1}, $|\Gamma(u)\cap \Gamma_2(v)|=\frac{p-1}{2}$ and $z|\frac{p-1}{2}$. Counting the edges between $\Gamma(v)$ and $\Gamma_2(v)$ gives $\frac{p-1}{2}p=nz$. By Lemma \ref{val-p-lemma-2}, $A_{v,u}\cong Z_{\frac{p-1}{2}}$, and by Lemma \ref{val-p-lemma-1}, $A_{v,u}$ is transitive on $\Gamma(v)\cap \Gamma(u)$, so $[\Gamma(u)]$ is $A_{u}$-arc transitive. Since $p$ is a prime, it follows by Proposition \ref{arctrans-order-p-1} that $[\Gamma(u)]$ is a Paley graph $P(p)$. Since $v,w\in \Gamma(u)$ are not adjacent, by Lemma \ref{val-p-rem1}, $|\Gamma(v)\cap \Gamma(u)\cap \Gamma(w)|=\frac{p-1}{4}$, hence $z\geq \frac{p-1}{4}+1$, counting also $u\in \Gamma(v)\cap \Gamma(w)$. Since $z|\frac{p-1}{2}$, it follows that $z=\frac{p-1}{2}$. Hence $n=p$. Thus, $|\Gamma(v)\cap \Gamma(w)|=\frac{p-1}{2}$ and $|\Gamma_2(v)|=p$. By Lemma \ref{val-p-lemma-1}, we have $|\Gamma(v)\cap \Gamma(u)|=\frac{p-1}{2}$.
Since $\Gamma$ is arc transitive, it follows that $|\Gamma(v_1)\cap \Gamma(v_2)|=\frac{p-1}{2}$ for every arc $(v_1,v_2)$. Thus, $|\Gamma(u)\cap \Gamma(w)|=\frac{p-1}{2}$. Since $\Gamma(u)\cap \Gamma(w)=(\Gamma(v)\cap \Gamma(u)\cap \Gamma(w))\cup (\Gamma_2(v)\cap \Gamma(u)\cap \Gamma(w))$ and $|\Gamma(v)\cap \Gamma(u)\cap \Gamma(w)|=\frac{p-1}{4}$, it follows that $|\Gamma_2(v)\cap \Gamma(u)\cap \Gamma(w)|=\frac{p-1}{2}-\frac{p-1}{4}=\frac{p-1}{4}$. Since $A_v\cong Z_p:Z_{\frac{p-1}{2}}$, it follows that $A_{v,w}\cong Z_{\frac{p-1}{2}}$ and $A_{v,w}$ is semiregular on $\Gamma_2(v)\setminus \{w\}$ with orbits of size $\frac{p-1}{2}$. Since $\Gamma_2(v)\cap \Gamma(w)$ is a union of such orbits, is contained in $\Gamma(w)\setminus \Gamma(v)$ (a set of size $\frac{p+1}{2}<p-1$), and is non-empty because $|\Gamma_2(v)\cap \Gamma(w)\cap \Gamma(u)|=\frac{p-1}{4}>0$, it follows that $|\Gamma_2(v)\cap \Gamma(w)|=\frac{p-1}{2}$. $\Box$ \begin{lemma}\label{val-p-lemma-4} Let $v$ be a vertex of $\Gamma$. Then $|\Gamma_3(v)|=1$ and $\diam(\Gamma)=3$. Further, $\Gamma$ is geodesic transitive. \end{lemma} {\bf Proof.} Suppose that $(v,u,w)$ is a $2$-geodesic of $\Gamma$. Then by Lemma \ref{val-p-lemma-3}, $|\Gamma(v)\cap \Gamma(w)|=\frac{p-1}{2}$ and $|\Gamma_2(v)\cap \Gamma(w)|=\frac{p-1}{2}$. Hence $|\Gamma_3(v)\cap \Gamma(w)|=p-|\Gamma(v)\cap \Gamma(w)|-|\Gamma_2(v)\cap \Gamma(w)|=1$. Since $\Gamma$ is $2$-geodesic transitive, it follows that $|\Gamma_3(v)\cap \Gamma(w_1)|=1$ for all $w_1\in \Gamma_2(v)$. Further $\Gamma$ is $3$-geodesic transitive. Let $\Gamma_3(v)\cap \Gamma(w)=\{v'\}$, $n=|\Gamma_3(v)|$ and $i=|\Gamma_2(v)\cap \Gamma(v')|$. Counting edges between $\Gamma_2(v)$ and $\Gamma_3(v)$, we have $p=ni$. Since $[\Gamma(w)]$ is a Paley graph and $u,v'\in \Gamma(w)$ are not adjacent, it follows from Lemma \ref{val-p-rem1} that $|\Gamma(u)\cap \Gamma(w)\cap \Gamma(v')|=\frac{p-1}{4}$. Since $\Gamma(v')\cap \Gamma_2(v)$ contains these $\frac{p-1}{4}$ vertices as well as $w$, we have $i\geq \frac{p+3}{4}>1$.
Thus $i=p$ and $n=1$, that is, $|\Gamma_3(v)|=1$. Since $|\Gamma_2(v)\cap \Gamma(v')|=p$ and $|\Gamma_2(v)|=p$, it follows that $\Gamma_2(v)=\Gamma(v')$, and so $\diam(\Gamma)=3$. Therefore $\Gamma$ is geodesic transitive. $\Box$ \bigskip Now, we prove Theorem \ref{gt-primeval}. \bigskip \noindent {\bf Proof of Theorem \ref{gt-primeval}.} Since the graph $\Gamma$ is $2$-geodesic transitive but not $2$-arc transitive, it follows that $\girth(\Gamma)=3$, and hence $\Gamma$ is nonbipartite. Let $v\in V\Gamma$. Then it follows from Lemmas \ref{val-p-lemma-1} to \ref{val-p-lemma-4} that $p\equiv 1 \pmod 4$, $|\Gamma_2(v)|=p$, $|\Gamma_3(v)|=1$ and $\diam(\Gamma)=3$. Thus, $V\Gamma=\{v\}\cup \Gamma(v)\cup \Gamma_2(v)\cup \{v'\}$, where $\Gamma_3(v)=\{v'\}$, $\Gamma(v)=\Gamma_2(v')$ and $\Gamma_2(v)=\Gamma(v')$. Since $\Gamma$ is vertex transitive, these properties hold for all vertices of $\Gamma$. Thus, $\Gamma$ is an antipodal graph. By Lemma \ref{val-p-lemma-4}, $\Gamma$ is geodesic transitive, and hence distance transitive. Let $\mathcal{B}=\{\Delta_1,\Delta_2, \cdots,\Delta_{p+1}\}$ where $\Delta_i=\{u_i,u_i'\}$ such that $d_{\Gamma}(u_i,u_i')=3$. Then each $\Delta_i$ is a block for $\Aut \Gamma$ of size 2 on $V\Gamma$. Further, for each $j\neq i$, $u_i$ is adjacent to exactly one vertex of $\Delta_j$, and $u_i'$ is adjacent to the other. The quotient graph $\Sigma$ such that $V\Sigma=\mathcal{B}$, and two vertices $\Delta_i,\Delta_j$ are adjacent if and only if $\{\Delta_i,\Delta_j\}$ contains an edge of $\Gamma$, is therefore a complete graph $\Sigma\cong K_{p+1}$ and $\Gamma$ is a cover of $\Sigma$. Therefore, we know that $\Gamma$ is a nonbipartite antipodal distance transitive cover with fibres of size 2 of the complete graph $K_{n}$, where $n=p+1$, $p\equiv 1\pmod 4$, and $\diam(\Gamma)=3$, so it is one of the graphs listed in I or II of Lemma \ref{anticover-lemma-1}. Since $p\equiv 1\pmod 4$, it follows that $n\equiv 2\pmod 4$. 
However, for I $(a)$, $(c)$, and II, $n\equiv 0\pmod 4$, so $\Gamma$ is not a graph in one of these cases. For I $(b)$ and $(d)$, $n-1$ is not a prime, so $\Gamma$ is not in one of these cases either. Thus $\Gamma$ is the graph in I $(e)$ of Lemma \ref{anticover-lemma-1} with $q=p$ prime. Hence $\Gamma$ is unique up to isomorphism and $A=\Aut(\Gamma)\leq PSL(2,p) \times Z_2$. By Lemma \ref{val-p-lemma-2}, $A_v=Z_p:Z_{\frac{p-1}{2}}$ for every $v\in V\Gamma$, and since $\Gamma$ is vertex transitive, it follows that $|A|=p(p+1)(p-1)=|PSL(2,p)\times Z_2|$. Thus $A= PSL(2,p)\times Z_2$. $\Box$ \subsection{Construction } In Subsection 5.1, we proved Theorem \ref{gt-primeval} using results of \cite{GLP} to classify the connected 2-geodesic transitive graphs of prime valency $p$ which are not 2-arc transitive. Here we identify the examples explicitly. The unique example of valency 5 is the icosahedron, and we assume from now on that $p>5$ and $p\equiv 1\pmod{4}$. Taylor gave a construction of this family of graphs from regular two-graphs, see \cite[p.14]{BCN} and \cite{Taylor-1}. Here we present a direct construction of these graphs as coset graphs, fleshing out the construction given by the third author in the proof of \cite[Theorem 1.1]{Li-ci-soluble} in order to prove the additional properties we need for Theorem \ref{gt-primeval}. For a finite group $G$, a core-free proper subgroup $H$, and an element $g\in G$ such that $G=\langle H,g\rangle$ and $g^2\in H$, the \emph{coset graph} $\Cos(G,H,HgH)$ is the graph with vertex set $\{Hx|x\in G\}$, and two vertices $Hx,Hy$ adjacent if and only if $yx^{-1}\in HgH$. It is a connected, undirected, and $G$-arc transitive graph of valency $|H:H\cap H^g|$, see \cite{Lorimer-1}. \begin{constr}\label{coset-constr} {\rm Let $G=PSL(2,p)$ where $p>5$ is a prime and $p\equiv 1\pmod{4}$. Choose $a\in G$ such that $o(a)=p$. 
Then $N_{G}(\langle a\rangle)=\langle a\rangle:\langle b\rangle\cong Z_p:Z_{\frac{p-1}{2}}$ for some $b\in G$, $o(b)=\frac{p-1}{2}$. Further, there exists an involution $g\in G$ such that $N_{G}(\langle b^2\rangle)=\langle b\rangle:\langle g\rangle\cong D_{p-1}$. Let $H=\langle a\rangle:\langle b^2\rangle$ and $\Gamma=\Cos(G,H,HgH)$. } \end{constr} First, in the following lemma, we show that the coset graph in Construction \ref{coset-constr} is unique up to isomorphism for each $p$. We repeatedly use the fact that each $\sigma \in \Aut G$ induces an isomorphism from $\Cos(G,H,HgH)$ to $\Cos(G,H^{\sigma},H^{\sigma}g^{\sigma}H^{\sigma})$, and in particular, we use this fact for the conjugation action by elements of $G$. \begin{lemma}\label{coset-iso-rem} For each fixed prime $p>5$ and $p\equiv 1\pmod{4}$, up to isomorphism, the graph $\Gamma$ in Construction \ref{coset-constr} is independent of the choices of $H$ and $g$. \end{lemma} {\bf Proof.} Let $G=PSL(2,p)$ where $p>5$ is a prime and $p\equiv 1\pmod{4}$. Let elements $a_i,b_i,g_i$ and subgroup $H_i$ be chosen as in Construction \ref{coset-constr} for $i\in \{1,2\}$. Let $X=PGL(2,p)\cong \Aut(G)$. Since all subgroups of $G$ of order $p$ are conjugate there exists $x\in G$ such that $\langle a_2\rangle^x=\langle a_1\rangle$, so we may assume that $\langle a_1\rangle=\langle a_2\rangle=K$, say. Let $Y=N_{X}(K)$. Then $Y=K: \langle y\rangle$ where $o(y)=p-1$, and $H_1=K:\langle b_1^2\rangle$ and $H_2=K:\langle b_2^2\rangle$ are equal to the unique subgroup of $Y$ of order $\frac{p(p-1)}{4}$, that is, $H_1=H_2=K:\langle y^4\rangle=H$, say. Next, since all subgroups of $Y$ of order $\frac{p-1}{4}$ are conjugate, there exist $x_1,x_2\in Y$ such that $\langle b_1^2\rangle^{x_1}=\langle b_2^2\rangle^{x_2}=\langle y^4\rangle$. Since each $x_i$ normalises $H$ we may assume in addition that $\langle b_1^2\rangle=\langle b_2^2\rangle=\langle y^4\rangle < \langle y\rangle$. 
Thus $g_1,g_2$ are non-central involutions in $N_G(\langle y^4\rangle)\cong D_{p-1}$, an index 2 subgroup of $N_X(\langle y^4\rangle)=\langle y\rangle:\langle z\rangle\cong D_{2(p-1)}$ for some involution $z$. The set of non-central involutions in $N_G(\langle y^4\rangle)$ forms a conjugacy class of $N_X(\langle y^4\rangle)$ of size $\frac{p-1}{2}$, consisting of the elements $y^{2i}z$, for $0\leq i<\frac{p-1}{2}$. The group $\langle y\rangle$ acts transitively on this set of involutions by conjugation (and normalises $H$). Hence, for some $u\in \langle y\rangle$, $H^u=H$ and $g_2^u=g_1$, so conjugation by $u$ induces an isomorphism from $\Cos(G,H,Hg_2H)$ to $\Cos(G,H,Hg_1H)$. $\Box$ \bigskip Now we show that the coset graph $\Gamma$ in Construction \ref{coset-constr} is a graph of prime valency $p$ that is 2-geodesic transitive but not 2-arc transitive. We first state some properties of $\Gamma$ which can be found in \cite[Theorem 1.1]{Li-ci-soluble} and its proof. \begin{rem}\label{smallval-con-rem} {\rm Let $\Gamma=\Cos(G,H,HgH)$ as in Construction \ref{coset-constr}. Then $G=\langle H,g\rangle$, $\Gamma$ is connected and $G$-arc transitive of valency $p$, $\Aut \Gamma \cong G\times Z_2$, $|V\Gamma|=|G:H|=2p+2$. Further, $\diam(\Gamma)=\girth(\Gamma)=3$, so $\Gamma$ is not $2$-arc transitive. Again, by the proof of \cite[Theorem 1.1]{Li-ci-soluble}, the action of $\Aut \Gamma$ on $V\Gamma$ has a unique system of imprimitivity $\mathcal{B}=\{\Delta_1,\Delta_2,\cdots,\Delta_{p+1}\}$, with $\Delta_i=\{v_i,v_i'\}$ of size 2, and the kernel of the action of $\Aut \Gamma$ on $\mathcal{B}$ has order 2. Moreover, $v_i$ is not adjacent to $v_i'$, and for each $j\neq i$, $v_i$ is adjacent to exactly one point of $\Delta_j$ and $v_i'$ is adjacent to the other. Thus, $\Gamma(v_1)\cap \Gamma(v_1')=\emptyset$, $V\Gamma=\{v_1\}\cup \Gamma(v_1) \cup \{v_1'\}\cup \Gamma(v_1')$, and $\Gamma$ is a nonbipartite double cover of $K_{p+1}$.} \end{rem} \begin{lemma}\label{smallval-p} The graph $\Gamma=\Cos(G,H,HgH)$ in Construction \ref{coset-constr} is $2$-geodesic transitive but not $2$-arc transitive.
\end{lemma} {\bf Proof.} Let $A:=\Aut \Gamma$, $v_1\in V\Gamma$ and $u\in \Gamma(v_1)$. Let $E$ be the kernel of the $A$-action on $\mathcal {B}$ and $\overline{A}=A/E$. Then by the proof of \cite[Theorem 1.1]{Li-ci-soluble}, $E\cong Z_2 \lhd A$, $A=G\times E$, $\overline{A}\cong G=PSL(2,p)$ and $\overline{A}_{\Delta_1}\cong A_{v_1}$. Since $A\cong G\times Z_2$, it follows that $|A_{v_1}|=\frac{p(p-1)}{2}$, and by Lemma 2.4 of \cite{Li-ci-soluble}, $A_{v_1}\cong Z_p:Z_{\frac{p-1}{2}}$, which has a unique permutation action of degree $p$, up to permutational isomorphism. Since $\Gamma$ is $A$-arc-transitive, $A_{v_1}$ is transitive on $\Gamma(v_1)$ and hence on $\mathcal{B}\setminus \{\Delta_1\}$, and therefore also on $\Gamma(v_1')$, all of degree $p$. Thus the $A_{v_1}$-orbits in $V\Gamma$ are $\{v_1\},\Gamma(v_1),\Gamma(v_1')$ and $\{v_1'\}$, and it follows that $\Gamma(v_1')=\Gamma_2(v_1)$. Moreover, $A_{v_1,u}\cong Z_{\frac{p-1}{2}}$ has orbit lengths $1,\frac{p-1}{2},\frac{p-1}{2}$ in $\Gamma(v_1)$, and hence has the same orbit lengths in $\Gamma_2(v_1)$, and also in $\Gamma(u)$ (since $A_{v_1,u}$ is the point stabiliser of $A_u$ acting on $\Gamma(u)$). Since $\Gamma(v_1)\cap \Gamma(u)\neq \emptyset$, it follows that the $A_{v_1,u}$-orbits in $\Gamma(u)$ are $\{v_1\},\Gamma(v_1)\cap \Gamma(u)$, and $\Gamma_2(v_1)\cap \Gamma(u)$. It follows that $\Gamma$ is $(A,2)$-geodesic transitive. Since $\girth(\Gamma)=3$, $\Gamma$ is not 2-arc transitive. $\Box$
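The intersection numbers of Lemma \ref{val-p-rem1}, which drive the counting arguments throughout this section, are easy to confirm by machine for a small case. The following Python sketch is our own illustration (not part of the paper): it builds $P(13)$ with plain modular arithmetic and checks the counts $\frac{q-5}{4}$ and $\frac{q-1}{4}$.

```python
from itertools import combinations

# Paley graph P(q) for the prime q = 13 (q ≡ 1 mod 4): vertices are Z_13,
# and u ~ v iff u - v is a nonzero square.
q = 13
squares = {(i * i) % q for i in range(1, q)}  # nonzero squares mod q

def adj(u, v):
    # symmetric because -1 is a square mod q when q ≡ 1 (mod 4)
    return (u - v) % q in squares

# valency (q-1)/2
assert all(sum(adj(u, v) for v in range(q) if v != u) == (q - 1) // 2
           for u in range(q))

# intersection numbers of the lemma: (q-5)/4 if adjacent, (q-1)/4 if not
for u, v in combinations(range(q), 2):
    common = sum(1 for w in range(q) if adj(u, w) and adj(v, w))
    assert common == ((q - 5) // 4 if adj(u, v) else (q - 1) // 4)
print("intersection numbers of P(%d) verified" % q)
```

The sketch sticks to a prime modulus; repeating it for $q=9$ would require arithmetic in $F_9$ (as in the proof of Theorem \ref{exam-valp-ha}) rather than in $Z_9$.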
% Source: https://arxiv.org/abs/1110.2235 -- On distance, geodesic and arc transitivity of graphs
% https://arxiv.org/abs/1801.05471
\title{Minimum saturated families of sets}
\begin{abstract}
We call a family $\mathcal{F}$ of subsets of $[n]$ \emph{$s$-saturated} if it contains no $s$ pairwise disjoint sets, and moreover no set can be added to $\mathcal{F}$ while preserving this property (here $[n] = \{1,\ldots,n\}$). More than 40 years ago, Erd\H{o}s and Kleitman conjectured that an $s$-saturated family of subsets of $[n]$ has size at least $(1 - 2^{-(s-1)})2^n$. It is easy to show that every $s$-saturated family has size at least $\frac{1}{2}\cdot 2^n$, but, as was mentioned by Frankl and Tokushige, even obtaining a slightly better bound of $(1/2 + \varepsilon)2^n$, for some fixed $\varepsilon > 0$, seems difficult. In this note, we prove such a result, showing that every $s$-saturated family of subsets of $[n]$ has size at least $(1 - 1/s)2^n$. This lower bound is a consequence of a multipartite version of the problem, in which we seek a lower bound on $|\mathcal{F}_1| + \ldots + |\mathcal{F}_s|$ where $\mathcal{F}_1, \ldots, \mathcal{F}_s$ are families of subsets of $[n]$, such that there are no $s$ pairwise disjoint sets, one from each family $\mathcal{F}_i$, and furthermore no set can be added to any of the families while preserving this property. We show that $|\mathcal{F}_1| + \ldots + |\mathcal{F}_s| \ge (s-1)\cdot 2^n$, which is tight, e.g.\ by taking $\mathcal{F}_1$ to be empty, and letting the remaining families be the families of all subsets of $[n]$.
\end{abstract}
\section{Introduction} \begingroup \renewcommand{\thefootnote}{} \footnotetext{ MSC2010 Classification: ~ Primary: 05D05 ~ Secondary: 60C05 } \endgroup In extremal set theory, one studies how large, or how small, a family $\mathcal{F}$ can be, if $\mathcal{F}$ consists of subsets of some set and satisfies certain restrictions. Let $[n] = \{1,\ldots,n\}$, let $\ps$ be the family of all subsets of $[n]$ and let $[n]^{(k)}$ be the family of subsets of $[n]$ of size $k$. A classical example in the area is the study of \emph{intersecting families}. We say that a family $\mathcal{F}$ is \emph{intersecting} if for every $A, B \in \mathcal{F}$ we have $A \cap B \neq \emptyset$. The following simple proposition, first noted by Erd\H{o}s, Ko and Rado \cite{erdos-ko-rado}, gives an upper bound on the size of an intersecting family in $\ps$. \begin{prop} \label{prop:intersecting} Let $\mathcal{F} \subseteq \ps$ be intersecting, then $|\mathcal{F}| \le 2^{n-1}$. \end{prop} This follows from the observation that for every set $A \ss [n]$ at most one of $A$ and $\setcomp{A}$ (where $\setcomp{A} \defeq [n] \setminus A$) is in $\mathcal{F}$. This bound is tight, which can be seen, e.g., by taking the family of all subsets of $[n]$ that contain the element $1$. In fact, there are many more extremal examples (see \cite{erdos-hindman}), partly due to the following proposition. \begin{prop} \label{prop:max-intersecting} Let $\mathcal{F} \subseteq \ps$ be intersecting, then there is an intersecting family in $\ps$ of size $2^{n-1}$ that contains $\mathcal{F}$. In other words, if $\mathcal{F} \ss \ps$ is a maximal intersecting family, then it has size $2^{n-1}$. \end{prop} Indeed, suppose that $\mathcal{F}$ is a maximal intersecting family of size less than $2^{n-1}$. Then there is a set $A \subseteq [n]$ such that $A, \setcomp{A} \notin \mathcal{F}$. 
By maximality of $\mathcal{F}$, there exist sets $B, C \in \mathcal{F}$ such that $A \cap B = \emptyset$ and $\setcomp{A} \cap C = \emptyset$. In particular, $B \cap C = \emptyset$, a contradiction. There have been numerous extensions and variations of \Cref{prop:intersecting}. Examples include the study of $t$-intersecting families \cite{katona} (where the intersection of every two sets has size at least $t$) and $L$-intersecting families \cite{alon-babai-suzuki} (where the size of the intersection of every two distinct sets lies in some set of integers $L$). Such problems were also studied for $k$-uniform families, i.e.\ families that are subsets of $\ups{n}{k}$ (see e.g.\ \cite{ahlswede-khachatrian} and \cite{ray-chaudhuri-wilson}). A famous example is the Erd\H{o}s-Ko-Rado \cite{erdos-ko-rado} theorem which states that if $\mathcal{F} \subseteq \ups{n}{k}$ is intersecting, and $n \ge 2k$, then $|\mathcal{F}| \le \binom{n-1}{k-1}$, a bound which is again tight by taking the family of all sets containing $1$. Another interesting generalisation of \Cref{prop:max-intersecting} looks for the maximum measure of an intersecting family under the $p$-biased product measure (see \cite{ahlswede-katona,dinur-safra,friedgut,filmus}). A different direction, which was suggested by Simonovits and S\'os \cite{Simonovits-sos}, studies the size of intersecting families of structured objects, such as graphs, permutations and sets of integers (see e.g.\ \cite{borg,godsil-meagher}). Here we are interested in a different extension of \Cref{prop:intersecting,prop:max-intersecting}. Given $s \ge 2$, we say that a family $\mathcal{F} \ss \ps$ is \emph{$s$-saturated{}} if $\mathcal{F}$ contains no $s$ pairwise disjoint sets, and furthermore $\mathcal{F}$ is maximal with respect to this property. An example of an $s$-saturated{} family is the set of all subsets of $[n]$ that have a non-empty intersection with $[s-1]$.
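Both the example above and \Cref{prop:max-intersecting} are easy to confirm by brute force for small parameters. The following sketch (the helper names are ours, not notation from the paper) checks, for $n=4$ and $s=3$, that the family of sets meeting $[s-1]$ is $s$-saturated{} and has size $(1-2^{-(s-1)})2^n$:

```python
from itertools import combinations

def subsets(n):
    """All subsets of {0, ..., n-1} as frozensets."""
    return [frozenset(c) for r in range(n + 1)
            for c in combinations(range(n), r)]

def has_s_disjoint(family, s):
    """Does the family contain s pairwise disjoint sets?"""
    return any(all(not (a & b) for a, b in combinations(choice, 2))
               for choice in combinations(family, s))

def is_s_saturated(family, s, n):
    """No s pairwise disjoint sets, and maximal with this property."""
    if has_s_disjoint(family, s):
        return False
    return all(has_s_disjoint(family | {a}, s)
               for a in subsets(n) if a not in family)

n, s = 4, 3
# The conjectured extremal example: all sets meeting [s-1] (here {0, 1}).
example = {a for a in subsets(n) if a & frozenset(range(s - 1))}
assert is_s_saturated(example, s, n)
assert len(example) == (1 - 2 ** -(s - 1)) * 2 ** n  # = 12
```

Note that the empty set is disjoint from every set, so maximality is tested against all of $\ps$, including $\emptyset$.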
In 1974 Erd\H{o}s and Kleitman \cite{erdos-kleitman} made the following conjecture, which states that this example is the smallest $s$-saturated{} family in $\ps$. \begin{conj}[Erd\H{o}s, Kleitman \cite{erdos-kleitman}] \label{conj:main} Let $\mathcal{F} \ss \ps$ be $s$-saturated{}. Then $|\mathcal{F}| \ge (1 - 2^{-(s-1)})\cdot 2^n$. \end{conj} Note that by \Cref{prop:max-intersecting}, \Cref{conj:main} holds for $s = 2$. Given a family $\mathcal{F} \ss \ps$, define $\comp{\mathcal{F}} = \ps \setminus \mathcal{F}$, and $\pwcomp{\mathcal{F}} = \{\setcomp{A} : A \in \mathcal{F}\}$. Then for every $s \ge 2$, if $\mathcal{F} \ss \ps$ is $s$-saturated{} then $\dbcomp{\mathcal{F}}$ is intersecting. Indeed, if $A \notin \mathcal{F}$ then $\setcomp{A}$ contains $s - 1$ pairwise disjoint sets of $\mathcal{F}$, so if $A$ and $B$ are such that $\setcomp{A}$ and $\setcomp{B}$ are disjoint, then at least one of $A$ and $B$ is in $\mathcal{F}$, as otherwise $\mathcal{F}$ contains $2(s-1) \ge s$ pairwise disjoint sets, a contradiction. By \Cref{prop:intersecting}, it follows that if $\mathcal{F}$ is $s$-saturated{} then $|\mathcal{F}| \ge 2^{n-1}$. Surprisingly, beyond this trivial lower bound, nothing was known. Moreover, Frankl and Tokushige \cite{frankl-survey} wrote in their recent survey that obtaining a lower bound of $(1/2 + \varepsilon)2^n$, i.e.\ a modest improvement over the trivial bound, is a challenging open problem. In this paper we prove such a result. \begin{thm} \label{thm:main} Let $\mathcal{F} \subseteq \ps$ be $s$-saturated{}, where $s \ge 2$. Then $|\mathcal{F}| \ge (1 - 1/s)2^n$. \end{thm} In fact, \Cref{thm:main} is a corollary of a multipartite version of the above problem. A sequence of $s$ families $\mathcal{F}_1, \ldots, \mathcal{F}_s \subseteq \ps$ is called \emph{cross dependant{}} (see, e.g., \cite{frankl-kupavskii}) if there is no choice of $s$ sets $A_i \in \mathcal{F}_i$, for $i \in [s]$, such that $A_1, \ldots, A_s$ are pairwise disjoint. 
We call a sequence of $s$ families $\mathcal{F}_1, \ldots, \mathcal{F}_s$ \emph{cross saturated{}} if the sequence is cross dependant{} and is maximal with respect to this property, i.e.\ the addition of any set to any of the families results in a sequence which is not cross dependant{}. Our aim here is to obtain a lower bound on $|\mathcal{F}_1| + \ldots + |\mathcal{F}_s|$. Note that if $\mathcal{F}$ is $s$-saturated{} then the sequence given by $\mathcal{F}_1 = \ldots = \mathcal{F}_s = \mathcal{F}$ is cross saturated{}. Hence, a lower bound on the sum of sizes of a cross saturated{} sequence of $s$ families implies a lower bound on the size of an $s$-saturated{} family. A simple example of a cross saturated{} sequence $\mathcal{F}_1, \ldots, \mathcal{F}_s$ can be obtained by taking $\mathcal{F}_1$ to be empty, and letting all other families be $\ps$. This construction is a special case of a more general family of examples which we believe contains all extremal examples; we discuss this in \Cref{sec:conclusion}. Our next result shows that this example is indeed a smallest example of a cross saturated{} sequence. Furthermore, it implies \Cref{thm:main} by taking $\mathcal{F}_1 = \ldots = \mathcal{F}_s = \mathcal{F}$. \begin{thm} \label{thm:cross} Let $\mathcal{F}_1, \ldots, \mathcal{F}_s \ss \ps$ be cross saturated{}. Then $|\mathcal{F}_1| + \ldots + |\mathcal{F}_s| \ge (s-1)2^n$. \end{thm} We have two different approaches to this problem, each of which can be used to prove \Cref{thm:cross}. As the proofs are short, and \Cref{conj:main} is still open, we feel that there is merit in presenting both proofs here in the hope that they will give rise to further progress on \Cref{conj:main}. Our first approach makes use of an interesting connection to correlation inequalities. Let us start by defining the \emph{disjoint occurrence} of two families. Given subsets $A, I \ss [n]$, let \begin{equation*} \mathcal{C}(I, A) = \{S \ss [n] : S \cap I = A \cap I \}.
\end{equation*} The \emph{disjoint occurrence} of two families $\mathcal{A}, \mathcal{B} \ss \ps$ is defined by \begin{align*} \mathcal{A} \,\square\, \mathcal{B} \defeq \{A : \text{$\exists$ \emph{disjoint} sets $I,J \ss [n]$ s.t.\ $\mathcal{C}(I, A) \subset \mathcal{A}$ and $\mathcal{C}(J, A) \subset \mathcal{B}$}\}. \end{align*} Note that when $\mathcal{A}$ and $\mathcal{B}$ are both increasing families (i.e.\ if $A \in \mathcal{A}$, and $A \ss B \ss [n]$ then $B \in \mathcal{A}$), $\mathcal{A} \,\square\, \mathcal{B}$ is the set of all subsets of $[n]$ which can be written as a disjoint union of a set from $\mathcal{A}$ and a set from $\mathcal{B}$. This notion of disjoint occurrence appears naturally in the study of percolation. Using it, one can express the probability that there are two edge-disjoint paths between two sets of vertices in a uniformly random subgraph of a given graph. Van den Berg and Kesten \cite{bk} proved that $|\mathcal{A} \,\square\, \mathcal{B}| \le |\mathcal{A}||\mathcal{B}|/2^n$ for increasing families $\mathcal{A}, \mathcal{B} \ss \ps$ and conjectured that this inequality should hold for general families. This was proved by Reimer \cite{reimer} in a groundbreaking paper and is currently known as the van den Berg-Kesten-Reimer inequality. Disjoint occurrence is surprisingly suitable for the study of saturated{} families. For example, if $\mathcal{F}$ is $3$-saturated{} then it is easy to see that $\mathcal{F}$ is increasing, so $\mathcal{F} \,\square\, \mathcal{F}$ is the family of sets that are disjoint unions of two sets from $\mathcal{F}$, which is exactly the family $\dbcomp{\mathcal{F}}$. This observation alone implies an improved lower bound on $|\mathcal{F}|$ using the van den Berg-Kesten-Reimer inequality.
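The operator $\square$ and the van den Berg-Kesten-Reimer inequality can be checked exhaustively on a tiny ground set. The sketch below (function names are ours) implements $\square$ directly from the definition via the cylinders $\mathcal{C}(I,A)$:

```python
from itertools import combinations

def subsets(n):
    """All subsets of {0, ..., n-1} as frozensets."""
    return [frozenset(c) for r in range(n + 1)
            for c in combinations(range(n), r)]

def box(fam_a, fam_b, n):
    """A box B, computed from the definition: S is in A box B iff there
    are disjoint I, J with C(I, S) inside A and C(J, S) inside B."""
    ps = subsets(n)
    def cylinder(i_set, s_set):
        return {t for t in ps if t & i_set == s_set & i_set}
    return {s for s in ps
            if any(not (i & j)
                   and cylinder(i, s) <= fam_a
                   and cylinder(j, s) <= fam_b
                   for i in ps for j in ps)}

n = 2
ps = subsets(n)
# For the increasing families A = {sets containing 0}, B = {sets containing 1},
# A box B is the family of disjoint unions: here just {{0, 1}}.
inc_a = {t for t in ps if 0 in t}
inc_b = {t for t in ps if 1 in t}
assert box(inc_a, inc_b, n) == {frozenset({0, 1})}

# Exhaustive check of the BKR inequality |A box B| * 2^n <= |A| * |B|
# over all 2^4 * 2^4 pairs of (not necessarily increasing) families.
for bits_a in range(1 << len(ps)):
    fam_a = {t for i, t in enumerate(ps) if bits_a >> i & 1}
    for bits_b in range(1 << len(ps)):
        fam_b = {t for i, t in enumerate(ps) if bits_b >> i & 1}
        assert len(box(fam_a, fam_b, n)) * 2 ** n <= len(fam_a) * len(fam_b)
```

The direct implementation is exponential in $n$, of course, and only intended as a sanity check of the definitions.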
We obtain a better bound using a variant of this inequality, which was first observed by Talagrand \cite{talagrand}, and later played a major role in Reimer's proof of the van den Berg-Kesten-Reimer inequality in full generality. Our second approach is algebraic: we define a polynomial for each set in a certain family related to $\mathcal{F}_1, \ldots, \mathcal{F}_s$, and show that these polynomials are linearly independent, thus implying that the family is not very large. \section{The proof} Before turning to the first proof of \Cref{thm:cross}, we introduce the correlation inequality that we will need. We present its short proof for the sake of completeness. \begin{lem}[Talagrand \cite{talagrand}] \label{lem:improved-bk} Let $\mathcal{A}, \mathcal{B} \ss \ps$ be increasing families. Then $\left|\mathcal{A} \,\square\, \mathcal{B}\right| \le \left|\pwcomp{\mathcal{A}} \cap \mathcal{B}\right|$. \end{lem} \begin{rem} Before turning to the proof of \Cref{lem:improved-bk}, we remark that the statement of \Cref{lem:improved-bk} holds even without the assumption that the families $\mathcal{A}$ and $\mathcal{B}$ are increasing. Furthermore, an equivalent version of this played a major role in Reimer's proof \cite{reimer} of the van den Berg-Kesten-Reimer inequality. \end{rem} \begin{proof} We prove the statement by induction on $n$. It is easy to check it for $n = 1$. Let $n > 1$ and suppose that the statement holds for $n - 1$. Given a family $\mathcal{F} \ss \ps$, denote by $\mathcal{F}_0$ the family of sets in $\mathcal{F}$ that do not contain the element $n$, and let $\mathcal{F}_1 = \{A \ss [n-1] : A \cup \{n\} \in \mathcal{F}\}$. In particular, $\mathcal{F}_0 \subseteq \mathcal{F}_1 \ss \psn{n-1}$ when $\mathcal{F}$ is an increasing family. 
We have \begin{align*} \abs{\mathcal{A} \,\square\, \mathcal{B}} & = \abs{(\mathcal{A} \,\square\, \mathcal{B})_0} + \abs{(\mathcal{A} \,\square\, \mathcal{B})_1} \\ & = \abs{\mathcal{A}_0 \,\square\, \mathcal{B}_0} + \abs{\mathcal{A}_1 \,\square\, \mathcal{B}_0} + \abs{\mathcal{A}_0 \,\square\, \mathcal{B}_1} - \abs{(\mathcal{A}_1 \,\square\, \mathcal{B}_0) \cap (\mathcal{A}_0 \,\square\, \mathcal{B}_1)} \\ & \le \abs{\mathcal{A}_1 \,\square\, \mathcal{B}_0} + \abs{\mathcal{A}_0 \,\square\, \mathcal{B}_1} \\ & \le \abs{\pwcomp{\mathcal{A}_1} \cap \mathcal{B}_0} + \abs{\pwcomp{\mathcal{A}_0} \cap \mathcal{B}_1} \\ & = \abs{(\pwcomp{\mathcal{A}} \cap \mathcal{B})_0} + \abs{(\pwcomp{\mathcal{A}} \cap \mathcal{B})_1} \\ & = \abs{\pwcomp{\mathcal{A}} \cap \mathcal{B}}, \end{align*} where the first inequality holds because $\mathcal{A}_0 \,\square\, \mathcal{B}_0 \ss (\mathcal{A}_1 \,\square\, \mathcal{B}_0) \cap (\mathcal{A}_0 \,\square\, \mathcal{B}_1)$, and the second one follows by induction. \end{proof} We are now ready for the first proof of \Cref{thm:cross}. \begin{proof} [First proof of \Cref{thm:cross}] Let $\mathcal{F}_1, \ldots, \mathcal{F}_s$ be cross saturated{}, where $s \ge 2$. Note that \begin{equation} \label{eqn:comp-F-i} \dbcomp{\mathcal{F}_i} = \mathcal{F}_1 \,\square\, \ldots \,\square\, \mathcal{F}_{i-1} \,\square\, \mathcal{F}_{i+1} \,\square\, \ldots \,\square\, \mathcal{F}_s. \end{equation} Indeed, for every $A \notin \mathcal{F}_i$, $\setcomp{A}$ contains a disjoint union of sets from $\mathcal{F}_1, \ldots, \mathcal{F}_{i-1}, \mathcal{F}_{i+1}, \ldots, \mathcal{F}_s$ and, conversely, any $A \in \mathcal{F}_1 \,\square\, \ldots \,\square\, \mathcal{F}_{i-1} \,\square\, \mathcal{F}_{i+1} \,\square\, \ldots \,\square\, \mathcal{F}_s$ cannot be in $\mathcal{F}_i$ by cross dependence. By \Cref{lem:improved-bk}, the following holds for every $i \ge 2$. 
\begin{align} \label{eqn:size-comp-F-i} \begin{split} \abs{\comp{\mathcal{F}_i}} & = \abs{\dbcomp{\mathcal{F}_i}} \\ & = \abs{\mathcal{F}_1 \,\square\, \ldots \,\square\, \mathcal{F}_{i-1} \,\square\, \mathcal{F}_{i+1} \,\square\, \ldots \,\square\, \mathcal{F}_s} \\ & \le \abs{(\mathcal{F}_1 \,\square\, \ldots \,\square\, \mathcal{F}_{i-1}) \cap \pwcomp{(\mathcal{F}_{i+1} \,\square\, \ldots \,\square\, \mathcal{F}_s)}}. \end{split} \end{align} Denote $\mathcal{G}_1 = \comp{\mathcal{F}_1}$, and $\mathcal{G}_i = (\mathcal{F}_1 \,\square\, \ldots \,\square\, \mathcal{F}_{i-1}) \cap \pwcomp{(\mathcal{F}_{i+1} \,\square\, \ldots \,\square\, \mathcal{F}_s)}$ for $i \ge 2$. \begin{claim} \label{claim:G-i-disjoint} $\mathcal{G}_i \cap \mathcal{G}_j = \emptyset$ for $1 \le i < j \le s$. \end{claim} \begin{proof} Indeed, if $i = 1$ then $\mathcal{G}_1 \subseteq \comp{\mathcal{F}_1}$ and $\mathcal{G}_j \subseteq \mathcal{F}_1$. Otherwise, if $A \in \mathcal{G}_i \cap \mathcal{G}_j$ with $2\le i<j$ then $A$ is the disjoint union of elements from $\mathcal{F}_1, \ldots, \mathcal{F}_{j-1}$, so in particular (as the sets $\mathcal{F}_l$ are increasing) it is the disjoint union of elements from $\mathcal{F}_1, \ldots, \mathcal{F}_i$. Furthermore, since $i\ge 2$, $A$ is also the complement (with respect to $[n]$) of a disjoint union of sets in $\mathcal{F}_{i+1}, \ldots, \mathcal{F}_s$, i.e.\ $\setcomp{A}$ is the disjoint union of sets in $\mathcal{F}_{i+1}, \ldots, \mathcal{F}_s$. But this means that $[n]$ is the disjoint union of sets from $\mathcal{F}_1, \ldots, \mathcal{F}_s$, a contradiction to the assumption that $\mathcal{F}_1, \ldots, \mathcal{F}_s$ form a cross saturated{} sequence. 
\end{proof} It follows from \eqref{eqn:comp-F-i}, \eqref{eqn:size-comp-F-i} and \Cref{claim:G-i-disjoint} that \begin{align} \label{eqn:end-proof} \begin{split} |\mathcal{F}_1| + \ldots + |\mathcal{F}_s| & = s\cdot 2^n - (|\comp{\mathcal{F}_1}| + \ldots + |\comp{\mathcal{F}_s}|) \\ & \ge s \cdot 2^n - (|\mathcal{G}_1| + \ldots + |\mathcal{G}_s|) \\ & \ge s \cdot 2^n - 2^n = (s - 1)2^n, \end{split} \end{align} thus completing the proof of \Cref{thm:cross}. \end{proof} Our next approach is algebraic. Before presenting the proof, we introduce some definitions and an easy lemma. Let $n$ be fixed and consider the vector space $V$ (over $\mathbb{R}$) of functions from $\{0,1\}^n$ to $\mathbb{R}$. Note that this is a vector space of dimension $2^n$. Given a subset $S \ss [n]$, let $P_S : \{0,1\}^n \rightarrow \mathbb{R}$ be defined by $P_S(x) = \prod_{i \in S}x_i$, where $x = (x_1, \ldots, x_n)^T \in \{0,1\}^n$, and let $x_S\in \{0,1\}^n$ be defined by $(x_S)_i = 1$ if and only if $i \in S$. The following lemma shows that $\{P_S : S \subseteq [n]\}$ is a linearly independent set in $V$ (in fact, as $V$ has dimension $2^n$, it is a basis). \begin{lem} \label{lem:basis} The set $\{P_S: S \subseteq [n]\}$ is linearly independent in $V$. \end{lem} \begin{proof} Suppose that $\sum_{S \ss [n]} \alpha_S P_S = 0$, where $\alpha_S \in \mathbb{R}$, and not all $\alpha_S$'s are $0$. Let $T$ be a smallest set such that $\alpha_T \neq 0$. Note that $P_S(x_T) = 1$ if and only if $S \ss T$. Hence $$ 0 = \sum_{S \ss [n]} \alpha_S P_S(x_T) = \sum_{S \ss [n], |S| \le |T|} \alpha_S P_S(x_T) = \alpha_T,$$ a contradiction to the assumption that $\alpha_T \neq 0$. It follows that $\alpha_S = 0$ for every $S \ss [n]$, i.e.\ the polynomials $\{P_S(x) : S \ss [n]\}$ are linearly independent, as required. \end{proof} We shall use the inner product on $V$ which is defined by \begin{equation} \label{eqn:inner-product} \ang{f,g} = \sum_{x \in \{0,1\}^n} f(x)g(x). 
\end{equation} It is easy to check that this is indeed an inner product; in fact, it is the standard inner product, if functions are viewed as vectors indexed by $\{0,1\}^n$. We are now ready for the second proof of \Cref{thm:cross}. \begin{proof}[Second proof of \Cref{thm:cross}] Let $\mathcal{F}_1, \ldots, \mathcal{F}_s$ be cross saturated{}, where $s \ge 2$. Given $i$ and $A \in \dbcomp{\mathcal{F}_i}$, recall that by \eqref{eqn:comp-F-i}, $A$ can be written as the disjoint union of sets from $\mathcal{F}_1, \ldots, \mathcal{F}_{i-1}, \mathcal{F}_{i+1}, \ldots, \mathcal{F}_n$. For every such $i$ and $A$, fix a representation \begin{equation} \label{eqn:rep-A} A = B \cup C, \end{equation} where $B$ is a disjoint union of sets from $\mathcal{F}_1, \ldots, \mathcal{F}_{i-1}$ and $C$ is a disjoint union of sets from $\mathcal{F}_{i+1}, \ldots, \mathcal{F}_{s}$. Let \begin{equation*} Q_{i, A}(x) = \prod_{j \in B} x_j \cdot \prod_{j \in C} (x_j - 1). \end{equation*} Let $W_i$ be the family of polynomials $Q_{i, A}$, where $i \in [s]$ and $A \ss \dbcomp{\mathcal{F}_i}$. We shall show that the sets $W_i$ are pairwise disjoint and that $W_1 \cup \ldots \cup W_s$ is linearly independent. This will follow from the following two claims, which state that each $W_i$ is linearly independent and that $W_i$ and $W_j$ are orthogonal for distinct $i$ and $j$. \begin{claim} \label{claim:W-i-lin-indep} $W_i$ is linearly independent for $i \in [s]$. \end{claim} \begin{proof} Suppose that $\sum_{A \in \dbcomp{\mathcal{F}_i}} \, \alpha_A Q_{i, A} = 0$, where $\alpha_A \in \mathbb{R}$ and not all $\alpha_A$'s are $0$. Let $A$ be a largest set such that $\alpha_A \neq 0$. Note that for every $A' \in \dbcomp{\mathcal{F}_i}$, $Q_{i, A'}$ can be written as \begin{equation*} Q_{i, A'} = P_{A'} + \sum_{S \subsetneq A'} \beta_{A', S} P_S, \end{equation*} where the values of $\beta_{A',S}$ depend on the representation of $A'$ as in \eqref{eqn:rep-A}. 
Hence, by choice of $A$, \begin{align*} 0\, & = \sum_{A' \in \dbcomp{\mathcal{F}_i},\, |A'| \le |A|} \alpha_{A'} Q_{i, A'} \\ & = \sum_{A' \in \dbcomp{\mathcal{F}_i},\, |A'| \le |A|} \alpha_{A'}(P_{A'} + \sum_{S \subsetneq A'}\beta_{A', S} P_S) \\ & = \alpha_A P_A + \sum_{|S| \le |A|,\, S \neq A} \gamma_S P_S, \end{align*} for some $\gamma_S \in \mathbb{R}$. However, since the $P_S$'s are linearly independent (by \Cref{lem:basis}), we have $\alpha_A = 0$, a contradiction. It follows that $W_i$ is linearly independent, as required. \end{proof} \begin{claim} \label{claim:W-i-orthogonal} $W_i$ and $W_j$ are orthogonal for $1 \le i < j \le s$. \end{claim} \begin{proof} Let $A \in \dbcomp{\mathcal{F}_i}$ and $A' \in \dbcomp{\mathcal{F}_j}$, where $1 \le i < j \le s$. Write $A = B \cup C$ and $A' = B' \cup C'$ for the representations as in \eqref{eqn:rep-A}. Let $x \in \{0,1\}^n$. We claim that $Q_{i, A}(x) = 0$ or $Q_{j, A'}(x) = 0$. Indeed, if the former does not hold, then $x_k = 1$ for $k \in B$ and $x_k = 0$ for $k \in C$. Note that $B' \cap C \neq \emptyset$: otherwise, taking the sets of $\mathcal{F}_1, \ldots, \mathcal{F}_i$ from $B'$ and the sets of $\mathcal{F}_{i+1}, \ldots, \mathcal{F}_s$ from $C$ would yield $s$ pairwise disjoint sets, one from each family, contradicting the assumption that $\{\mathcal{F}_1, \ldots, \mathcal{F}_s\}$ is cross dependant{}. Hence, $x_k = 0$ for some $k \in B'$, which implies that $Q_{j, A'}(x) = 0$, as claimed. It easily follows that $\ang{Q_{i,A}, Q_{j, A'}} = 0$ (recall the definition of the inner product given in \eqref{eqn:inner-product}), as required. \end{proof} It follows from \Cref{claim:W-i-lin-indep,claim:W-i-orthogonal} that $W_1 \cup \ldots \cup W_s$ is linearly independent, hence it has size at most the dimension of $V$, i.e.\ at most $2^n$. But $|W_i| = |\comp{\mathcal{F}_i}|$, thus, as in \eqref{eqn:end-proof} \begin{equation*} |\mathcal{F}_1| + \ldots + |\mathcal{F}_s| \ge (s-1)2^n, \end{equation*} as desired. \end{proof} \section{Conclusion} \label{sec:conclusion} There are two main directions for further research that we would like to mention here. The first is related to the tightness of \Cref{thm:cross}.
As mentioned in the introduction, the result is tight, which can be seen by taking $\mathcal{F}_1 = \emptyset$ and $\mathcal{F}_2 = \ldots = \mathcal{F}_s = \ps$. In fact, this is a special case of the following class of examples: let $\mathcal{F}_1$ be any increasing family in $\ps$, let $\mathcal{F}_2 = \dbcomp{\mathcal{F}_1}$ and let $\mathcal{F}_3 = \ldots = \mathcal{F}_s = \ps$. Then $|\mathcal{F}_1|+|\mathcal{F}_2|=2^n$ and it is easy to check that any set in $\mathcal{F}_1$ intersects every set in $\mathcal{F}_2$. Therefore, every such example yields a cross saturated{} sequence of smallest total size. Furthermore, it is easy to see that these are the only examples for which $\mathcal{F}_3 = \ldots = \mathcal{F}_s = \ps$. It seems plausible that these are the only possible examples (up to permuting the order of the families). This problem of classifying all extremal examples, interesting in its own right, may give a hint on how to further improve the lower bound on the size of $s$-saturated{} families. The second, and seemingly more challenging, direction is to improve on \Cref{thm:main}. We proved that if $\mathcal{F}$ is $s$-saturated{} then $|\mathcal{F}| \ge (1 - 1/s)2^n$, where the conjectured bound is $\left(1 - 2^{-(s-1)}\right)2^n$. We note that it is possible to improve the lower bound slightly, to show that $|\mathcal{F}| \ge \left(1 - 1/s + \Omega(\log n/n)\right)2^n$, by running the argument of the first proof more carefully in the case where $\mathcal{F}_1 = \ldots = \mathcal{F}_s = \mathcal{F}$; we omit further details. It would be very interesting to obtain an improvement of the error term $1/s$ to an expression exponentially small in $s$. We hope that our methods can be used to make further progress on this old conjecture. Let us mention here a general class of examples of $s$-saturated{} families whose size is $\left(1 - 2^{-(s-1)}\right)2^n$.
We do not know of any other examples of $s$-saturated{} families, and feel that it is likely that if the conjecture holds, then these are the only extremal examples. \begin{ex} Given $s \ge 2$, let $\{I_1, \ldots, I_{s-1}\}$ be a partition of $[n]$. For each $i \in [s-1]$, pick a maximal intersecting family $\mathcal{F}_i$ of subsets of $I_i$; in particular, by \Cref{prop:max-intersecting}, $|\mathcal{F}_i| = 2^{|I_i|-1}$. Define $\mathcal{F}$ as follows. \begin{equation*} \mathcal{F} = \{A \ss [n] : A \cap I_i \in \mathcal{F}_i \text{ for some } i \in [s-1]\}. \end{equation*} It is easy to check that $\mathcal{F}$ is $s$-saturated{} as a family of subsets of $[n]$ and that it has size $\left(1 - 2^{-(s-1)}\right) 2^n$. Note that this class of examples contains the example that was mentioned earlier, of the family of subsets of $[n]$ that intersect $[s-1]$. \end{ex} Finally, we note the following interesting phenomenon. \begin{prop} If \Cref{conj:main} holds for $s+1$, then it holds for $s$. \end{prop} Indeed, suppose that \Cref{conj:main} holds for $s+1$, and let $\mathcal{F} \ss \ps$ be $s$-saturated{}. Define $\mathcal{G} \ss \psn{n+1}$ as follows. \begin{equation*} \mathcal{G} = \mathcal{F} \cup \{A \ss [n+1] : n + 1 \in A\}. \end{equation*} Note that $\mathcal{G}$ is $(s+1)$-saturated{} (as a subset of $\psn{n+1}$). Hence, by the assumption that the conjecture holds for $s+1$, we find that $|\mathcal{G}| \ge \left(1 - 2^{-s}\right)2^{n+1}$. Note also that $|\mathcal{G}| = |\mathcal{F}| + 2^n$. It follows that $|\mathcal{F}| \ge \left(1 - 2^{-s}\right)2^{n+1} - 2^n = \left(1 - 2^{-(s-1)}\right)2^n$, as required.
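The bound of \Cref{thm:cross} can also be confirmed exhaustively for the smallest parameters. A brute-force sketch (helper names ours) over all pairs of families on the ground set $[2]$, with $s = 2$:

```python
from itertools import combinations, product

def subsets(n):
    """All subsets of {0, ..., n-1} as frozensets."""
    return [frozenset(c) for r in range(n + 1)
            for c in combinations(range(n), r)]

def cross_dependent(families):
    """True iff one cannot pick pairwise disjoint sets, one per family."""
    return not any(all(not (a & b) for a, b in combinations(choice, 2))
                   for choice in product(*families))

def cross_saturated(families, n):
    """Cross dependent, and adding any set to any family breaks it."""
    if not cross_dependent(families):
        return False
    for i, fam in enumerate(families):
        for a in subsets(n):
            if a not in fam:
                larger = list(families)
                larger[i] = fam | {a}
                if cross_dependent(larger):
                    return False
    return True

n, s = 2, 2
ps = subsets(n)
totals = []
for bits1 in range(1 << len(ps)):
    f1 = {t for i, t in enumerate(ps) if bits1 >> i & 1}
    for bits2 in range(1 << len(ps)):
        f2 = {t for i, t in enumerate(ps) if bits2 >> i & 1}
        if cross_saturated([f1, f2], n):
            totals.append(len(f1) + len(f2))
assert totals and min(totals) == (s - 1) * 2 ** n  # minimum sum is 4
```

The extremal pair $\mathcal{F}_1 = \emptyset$, $\mathcal{F}_2 = \ps$ is among the enumerated examples and attains the minimum.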
https://arxiv.org/abs/1001.4582
More Colourful Simplices
We show that any point in the convex hull of each of (d+1) sets of (d+1) points in general position in \R^d is contained in at least (d+1)^2/2 simplices with one vertex from each set. This improves the known lower bounds for all d >= 4.
\section{Introduction}\label{se:intro} A point $p \in \R^d$ has {\it simplicial depth} $k$ relative to a set $S$ if it is contained in $k$ closed simplices generated by $(d+1)$ points of $S$. This was introduced by Liu \cite{Liu90} as a statistical measure of how representative $p$ is of $S$, and is a source of challenging problems in computational geometry -- see for instance \cite{FR05}. More generally, we consider {\it colourful simplicial depth}, where the single set $S$ is replaced by $(d+1)$ sets, or colours, $\S_1,\ldots,\S_{d+1}$, and the {\em colourful} simplices containing $p$ are generated by taking one point from each set. Assuming that the convex hulls of the $\S_i$'s contain $p$ in their interior, B\'ar\'any's Colourful Carath\'eodory Theorem \cite{Bar82} shows that $p$ must be contained in some colourful simplex. We are interested in determining the minimum number of colourful simplices that can contain $p$ for sets satisfying these conditions. That is, we would like to determine $\mu(d)$, the minimum number of colourful simplices drawn from $\S_1, \ldots, \S_{d+1}$ that contain $p \in \R^d$ given that $p \in \interior(\conv(\S_i))$ for each $i$. Without loss of generality, we assume that the points in $\bigcup_i \S_i \cup\{p\}$ are in general position. Besides intrinsic appeal, $\mu(d)$ represents the minimum number of solutions to the colourful linear programming feasibility problem proposed in \cite{BO97} and discussed in \cite{DHST06}. The quantity $\mu(d)$ was investigated in \cite{DHST06}, where it is shown that $2d \le \mu(d) \le d^2+1$, that $\mu(d)$ is even for odd $d$, and that $\mu(2)=5$. This paper also conjectures that $\mu(d)= d^2+1$ for all $d \ge 1$. Subsequently, \cite{BM06} verified the conjecture for $d=3$ and provided a lower bound of $\mu(d) \ge \max(3d, \left\lceil \frac{d(d+1)}{5} \right\rceil)$ for $d \ge 3$, while \cite{ST06} independently provided a lower bound of $\mu(d) \ge \left\lfloor \frac{(d+2)^2}{4}\right\rfloor$.
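For concreteness, these earlier bounds can be compared numerically with the bound $\lceil (d+1)^2/2 \rceil$ established below; a quick sketch (function names are ours):

```python
import math

def previous_best(d):
    """Best previously known lower bound for mu(d), d >= 3: the maximum
    of 2d [DHST06], 3d and d(d+1)/5 [BM06], and floor((d+2)^2/4) [ST06]."""
    return max(2 * d, 3 * d, math.ceil(d * (d + 1) / 5), (d + 2) ** 2 // 4)

def new_bound(d):
    """The bound of Theorem 1: ceil((d+1)^2 / 2)."""
    return -((d + 1) ** 2 // -2)  # ceiling division

for d in range(4, 100):
    assert new_bound(d) > previous_best(d)   # strict improvement for d >= 4
    assert new_bound(d) <= d * d + 1         # never exceeds the conjecture
```

For example, at $d=4$ the new bound gives $13$ against the previous best of $12$, and for large $d$ it roughly doubles the $\lfloor (d+2)^2/4 \rfloor$ bound.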
In this note we show: {\flushleft {\bf Theorem 1}: For $d \ge 1$, we have $\mu(d) \ge \lceil \frac{(d+1)^2}{2} \rceil$. } \vspace{2mm} This strengthens the previously known lower bounds for all $d \ge 4$. \section{Preliminaries}\label{se:setup} Without loss of generality we can take $p=\zero$. The sets $\S_1, \ldots, \S_{d+1}$ must each contain at least $(d+1)$ points for $\zero$ to be in the interior of their convex hulls, and since we are minimizing we can assume they contain no additional points, i.e.~that $|\S_i|=d+1$ for each $i$. We assume that all points are distinct, so no point occurs in two $\S_i$'s, and $\zero$ is not in any $\S_i$. We can scale the points of the $\S_i$'s so that they lie on the unit sphere $\Sph^{d-1}$: $\zero$ is in a simplex after scaling if and only if it was in the simplex before scaling. We call a set of points drawn from the $\S_i$'s {\it colourful} if it contains at most one point from each $\S_i$. We call a colourful set of $d$ points which misses $\S_i$ an $\widehat{i}$-{\it transversal}. Note that $\widehat{i}$-transversals generate full-dimensional pointed colourful cones; we will say that a transversal {\it spans} a point if the point is contained in the associated cone. A key observation is that colourful simplices containing $\zero$ are generated whenever the antipode of a point of colour $i$ is spanned by an $\widehat{i}$-transversal. In particular, we look at the combinatorial {\it octahedra}, or cross polytopes, generated by pairs of disjoint $\widehat{i}$-transversals. We rely on the topological fact that every octahedron $\Omega$ either covers all of $\Sph^{d-1}$ with colourful cones, or every point $x \in \Sph^{d-1}$ that is covered by colourful cones from $\Omega$ is covered by at least two distinct such cones. In the case where the points of $\Omega$ form an octahedron in the geometric sense, these correspond to the cases where $\zero$ is inside and outside $\Omega$, respectively.
For a proof, see for example the {\em Octahedron Lemma} of \cite{BM06}. We remark that a given octahedron contains $2^d$ transversals, though we specify only two disjoint ones to generate it. Our strategy for finding distinct colourful simplices is to begin with a transversal that generates at least one colourful simplex, and get further simplices from octahedra that include this transversal. We will break into cases based on the number of colourful simplices generated by the initial transversal and how many of the octahedra cover $\Sph^{d-1}$. \section{Proof of Theorem 1}\label{se:main} We know that at least one colourful simplex contains $\zero$. Therefore we have an antipode of colour $(d+1)$ lying in the cone generated by a $\widehat{d+1}$-transversal $T$. Without loss of generality we can number the points of $\S_1,\S_2,\ldots,\S_d$ so that point $(d+1)$ of $\S_i$ is included in $T$. The remaining points of the $\S_i$'s can be numbered arbitrarily. Let $T_i$ be the set that contains the points numbered $i$ from $\S_1, \S_2, \ldots, \S_d$. Then each $T_i$ is a $\widehat{d+1}$-transversal and $T_{d+1}=T$. Further, the sets $T_1, T_2, \ldots, T_{d+1}$ are pairwise disjoint. Let $L$ be the set of antipodes of colour $(d+1)$ spanned by $T_{d+1}$, where $|L|=l>0$. \subsection{Points from $d$ octahedra that share a transversal}\label{se:octs} Now consider the $d$ octahedra $\Omega_1, \Omega_2, \ldots, \Omega_d$ given by pairing $T_i$ with $T_{d+1}$ for $i=1,2,\ldots,d$. Except for the common transversal $T_{d+1}$, every $\widehat{d+1}$-transversal found among the $\Omega_i$'s is distinct. For each $i$, $\Omega_i$ may or may not cover all of $\Sph^{d-1}$. Suppose that $b$ of the octahedra cover $\Sph^{d-1}$. There are $(d+1-l)$ antipodes of colour $(d+1)$ that are not spanned by $T_{d+1}$, and hence must be spanned by a different transversal from each of these octahedra. This gives us a total of $b(d+1-l)$ distinct simplices containing $\zero$.
Now there remain $(d-b)$ octahedra that do not span all of $\Sph^{d-1}$. By the Octahedron Lemma, each of the $l$ antipodes spanned by $T_{d+1}$ must also be spanned by a second transversal from the octahedron generated by $T_{d+1}$ and $T_i$. So we find an additional $(d-b)l$ distinct simplices along with the $l$ simplices generated by the antipodes with $T_{d+1}$ itself. This brings us to a total of $l+b(d+1-l)+(d-b)l=(d+1)(b+l)-2bl$ distinct colourful simplices containing $\zero$ through this simple argument. \subsection{Choice of $T_{d+1}$}\label{se:choice} In the above argument, $T_{d+1}$ can be any $\widehat{d+1}$-transversal spanning an antipode of colour $(d+1)$. In the construction of previous lower bounds, it was noted that if $\csd(\zero)$ is low, then there must be a portion of $\Sph^{d-1}$ lightly covered by colourful cones. That is to say, if each antipode of colour $(d+1)$ is spanned by at least $j$ $\widehat{d+1}$-transversals, then $\csd(\zero) \ge j(d+1)$. We can take $T_{d+1}$ to be a transversal spanning the least covered antipode. As we move through the possible values of $i$ in the argument of Subsection~\ref{se:octs}, whenever the octahedron fails to cover $\Sph^{d-1}$ we will see a new cone covering the lightly covered antipode. Hence $(j-1)+b \ge d$. We thus have that $\csd(\zero)$ is at least $\max[j(d+1),(d+1)(b+l)-2bl]$ with $j \ge 1, 1 \le b, l \le d$, and $j+b \ge d+1$. As long as $l \le \frac{d+1}{2}$, this gives the desired result: since $j+b \ge d+1$, we have either $j \ge \frac{d+1}{2}$ or $b \ge \frac{d+2}{2}$, and in either case $\csd(\zero) \ge \frac{d^2+2d+1}{2}$. \subsection{Single transversals spanning many antipodes}\label{se:many} This leaves only the case where $l \ge \frac{d+2}{2}$. In this situation, we begin with $l$ simplices containing $\zero$ differing only in the $(d+1)$st colour.
We can repeat this exercise for each colour, in which case we will either find that for each colour $i$, $l_i \ge \frac{d+2}{2}$, or, for some colour $i$, $l_i \le \frac{d+1}{2}$. In the latter case, we apply the analysis above to get at least $\frac{d^2+2d+1}{2}$ distinct simplices containing $\zero$. If it happens that we get $l_i \ge \frac{d+2}{2}$ for each $i$, then for each $i$ we have a set $L_i$ of at least $l_i$ antipodes of colour $i$ which lie on a single $\widehat{i}$-transversal $U_i$. These generate $(d+1)$ sets $X_1, X_2, \ldots, X_{d+1}$ of at least $l=\min_i(l_i) \ge \frac{d+2}{2}$ colourful simplices. There may be some duplication between sets, but we note that the simplices within each set are distinct and differ only in the $i$th colour. We can identify the simplices that make up the $X_i$'s with vectors in $\{1,2,\ldots,d+1\}^{d+1}$. We find it helpful to consider them as vectors in $\R^{d+1}$ unrelated to the initial configuration. A simplex $\alpha_d$ belonging to a given $X_i$ is represented by a vector in $\R^{d+1}$ in the following way. The axes correspond to the $d+1$ colours, and the $q$th coordinate is set to the index in $S_q$ of the point of colour $q$ of $\alpha_d$. We recall that the index of points in $S_q$ is set by the arbitrary numbering of points of colour $q$ proposed at the beginning of Section~\ref{se:main}. The vectors associated to the simplices from a given $X_i$ lie on a line segment in the $i$th coordinate direction. If a simplex is in both $X_i$ and $X_q$, then the associated vector must lie at the intersection of the corresponding line segments. \vspace{1mm} {\bf Lemma:} There are at most $d$ {\it duplicate vectors} in the union of the $X_i$'s, where a vector that is in $k+1$ sets is counted as $k$ duplicate vectors. {\bf Proof:} Consider adding the sets iteratively. We will say that two sets are in the same {\it component} if they contain a common point, and extend this to an equivalence relation.
We remark that each component is contained in the topological component formed by taking the union of the line segments associated to the $X_i$'s, but a given topological component will contain multiple components if the points of intersection of the line segments are not included in the corresponding $X_i$'s. We begin with $c=0$ components and $k=0$ duplicate vectors. Each added set either creates a new component or intersects $r$ components, producing $r$ duplicate vectors while reducing the number of components by $(r-1)$ through the equivalence relation. Therefore at each step $c+k$ increases by 1. Upon termination, we will have at least 1 component, and hence at most $d$ duplicate vectors. \qed \vspace{2mm} Then the $X_i$'s contain distinct simplices except possibly for up to $d+1-c \le d$ repeats arising in this construction, where $c$ is the number of components. This gives us a total of $(d+1)l-(d+1-c)=(d+1)(l-1)+c$ distinct simplices containing $\zero$. However, if $c$ is small, we can readily find additional distinct simplices containing $\zero$ by observing that for a fixed colour $i$, for instance one attaining $l=l_i$, we also have $(d+1-l)$ antipodes outside of $L_i$. Each of these antipodes must generate some colourful simplex containing $\zero$. In fact, for each antipode omitted, we could get $\frac{d+1}{2}$ simplices since either $l_i$ or $b$ is this large, but it does not improve our worst case. Call this set of simplices $M$, and again consider them as vectors in $\R^{d+1}$. They are not included among the vectors associated to simplices in $X_i$, since they have different values of coordinate $i$. The vectors associated to simplices in $M$ could duplicate vectors from components other than the one containing $X_i$. However, each such component has a fixed value of colour $i$. If $c-1 \ge d+1-l$ it may be the case that all such simplices are repeats, but our guarantee is $(d+1)(l-1)+c \ge dl+1$. 
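The guarantee just stated amounts to a one-line computation, restating the case hypothesis:

```latex
(d+1)(l-1)+c \;=\; dl+(l+c-d-1) \;\ge\; dl+1
\quad\Longleftrightarrow\quad l+c\ge d+2
\quad\Longleftrightarrow\quad c-1\ge d+1-l.
```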
If $c-1 < d+1-l$ we get at least $d+2-l-c$ additional distinct simplices from the antipodes omitted from the $(d+1)$ sets. This again guarantees us at least $(d+1)(l-1)+c+(d+2-l-c)=dl+1$ distinct simplices. \vspace{1mm} Now as $l \ge \frac{d+2}{2}$ we get at least $d\frac{d+2}{2}+1=\frac{d^2+2d+2}{2}$ distinct simplices containing $\zero$. Thus our overall worst case for this analysis is $\frac{d^2+2d+1}{2}=\frac{(d+1)^2}{2}$, which can be rounded up to an integer when $d$ is even. This improves the known bounds for $d \ge 4$, in particular from 12 to 13 when $d=4$. We remark that unlike previous general approaches, this analysis gives the tight bound of 5 when $d=2$. \section{A Combinatorial Generalization}\label{se:comb} The methods in Section~\ref{se:main} rely on the combinatorial structure of the vectors representing the simplices. Indeed, there is a nice generalization of the colourful simplicial depth problem to systems of vectors in $\{1,2,\ldots,d+1\}^{d+1}$. Given sets $\S_1, \ldots, \S_{d+1}$ as in Section~\ref{se:intro}, we form the system of vectors $\V$ where $\vecv = (s_1, \ldots, s_{d+1})$ is in $\V$ exactly when the colourful simplex described by $\vecv$ contains $\zero$. In this context, $\widehat{i}$-{\it transversals} are simply vectors with the $i$th coordinate removed, and {\it octahedra} are pairs of disjoint $\widehat{i}$-transversals. The system $\V$ has the following two properties: 1. Every point appears in some $\vecv \in \V$; that is, for each colour $i$ and each index $s\in\{1,2,\ldots,d+1\}$, some $\vecv \in \V$ has $i$th coordinate equal to $s$. This is the combinatorial requirement from {\bara}'s Colourful {\cara} Theorem. 2. For any octahedron $\O$, the parity of the number of vectors using points from $\O$ and a fixed point $s_i$ for the $i$th coordinate is the same for all choices of $s_i$. For a system $\V$ arising from colourful simplices, the parity is odd when the octahedron $\O$ contains $\zero$, and even when it does not. This is a purely combinatorial version of the Octahedron Lemma mentioned in Section~\ref{se:setup}.
\begin{question}\label{qu:gen} For a given $d \ge 2$, what is the size $\nu(d)$ of a minimal system $\V$ of vectors in $\{1,2,\dots,{d+1}\}^{d+1}$ satisfying properties 1 and 2? \end{question} The system corresponding to the conjectured minimal core colourful {\cara} configuration from \cite{DHST06} satisfies properties 1 and 2 with $d^2+1$ vectors, so $\nu(d) \le \mu(d) \le d^2+1$. Clearly $\nu(d) \ge d+1$. An exhaustive computer search on a laptop shows in a few seconds that $\nu(2)>4$ and in a few hours that $\nu(3)>8$. In other words, this approach computationally verifies that $\mu(2)=5$ and $\mu(3)=10$ (using the fact that $\mu(3)$ must be even). \section{A Generalized Core}\label{se:gencore} As a final remark, we mention the recent generalization of the Colourful {\cara} Theorem in \cite{HPT08} and \cite{AB+09}, in which the condition of $\zero$ being in the convex hull of each $\S_i$ is relaxed to require $\zero$ to only be in the convex hull of $\S_i \cup \S_j$ for each $i \ne j$. It is natural to ask whether the minimum number of colourful simplices containing $\zero$ is lower for configurations satisfying these weaker conditions. Call the analogous quantity $\mu^\Diamond(d)$. In fact, the construction of \cite{DHST06} can be modified in this relaxed setting to produce configurations showing that $\mu^\Diamond(d) \le d+1$ by fixing the points of colours $1,2,\ldots,d$ in the same way and then clustering all antipodes of the final colour in a region that is covered by only a single colourful cone from the first $d$ colours. In this case the relaxed conditions are satisfied almost trivially since $\zero$ is in $\conv(\S_i)$ for $i=1,2,\ldots,d$. We note that in this configuration, each colour from $1,\ldots,d$ has a unique point which is a generator for all $(d+1)$ colourful simplices containing $\zero$.
In other words, in contrast to the situation when $\zero$ is in all the $\S_i$'s, some (in fact, most) points from the $\S_i$ generate no colourful simplices containing $\zero$. The following simple argument shows that $\mu^\Diamond(2)=3$. Using the assumptions of Section~\ref{se:setup}, we place the points of the first two colours on the unit circle around $\zero$. The condition $\zero \in \conv(\S_1 \cup \S_2)$ then means that every half-circle contains a point from $\S_1 \cup \S_2$. If the circle is covered by colourful cones, then each antipode of the remaining colour generates a colourful simplex containing $\zero$ and we are done. Otherwise, some segment of the circle is not covered by any colourful cone. This segment must be bounded by two points $p$ and $p'$ of the same $\S_i$, say $\S_1$. The three points of $\S_2$ then lie on the longer arc between these points, and for each point of $\S_2$, every point on the longer arc is covered by a colourful cone using that point and either $p$ or $p'$. The condition that $\zero \in \conv(\S_2 \cup \S_3)$ forces at least one of the antipodes of $\S_3$ to lie in the arc that spans the three points of $\S_2$. Finally, we remark that we can generalize $\mu^\Diamond(d)$ combinatorially to $\nu^\Diamond(d)$ analogously to Section~\ref{se:comb}. Combinatorial Property 2 must still hold for such configurations, but Property 1 fails in the constructions above. Nevertheless, we can quickly verify computationally that $\nu^\Diamond(3)=\mu^\Diamond(3)=4$. \section{Acknowledgments} This work was supported by grants from the Natural Sciences and Engineering Research Council of Canada (NSERC) and MITACS, and by the Canada Research Chairs program. The authors would like to thank the referees for helpful comments and Imre {\bara} for initiating the discussion in Section~\ref{se:comb}.
https://arxiv.org/abs/1607.08511
Differential geometry of rectifying submanifolds
A space curve in a Euclidean 3-space $\mathbb E^3$ is called a rectifying curve if its position vector field always lies in its rectifying plane. This notion of rectifying curves was introduced by the author in [Amer. Math. Monthly {\bf 110} (2003), no. 2, 147-152]. In this present article, we introduce and study the notion of rectifying submanifolds in Euclidean spaces. In particular, we prove that a Euclidean submanifold is rectifying if and only if the tangential component of its position vector field is a concurrent vector field. Moreover, rectifying submanifolds with arbitrary codimension are completely determined.
\section{Introduction} Let $\mathbb E^3$ denote Euclidean 3-space with its inner product $\left<\;\,,\;\right>$. Consider a unit-speed space curve $x : I\to \mathbb E^3$, where $I=(\alpha,\beta)$ is a real interval. Let ${\bf x}$ denote the position vector field of $x$ and write ${\bf t}={\bf x}'$ for the unit tangent vector field. It is possible, in general, that ${\bf t}'(s)=0$ for some $s$; however, we assume that this never happens. Then we can introduce a unique vector field ${\bf n}$ and a positive function $\kappa$ so that ${\bf t}'=\kappa {\bf n}$. We call ${\bf t}'$ the {\it curvature vector field}, ${\bf n}$ the {\it principal normal vector field}, and $\kappa$ the {\it curvature} of the curve. Since ${\bf t}$ is of constant length, ${\bf n}$ is orthogonal to ${\bf t}$. The {\it binormal vector field} is defined by ${\bf b}={\bf t}\times{\bf n}$, which is a unit vector field orthogonal to both ${\bf t}$ and ${\bf n}$. One defines the {\it torsion} $\tau$ by the equation ${\bf b}'=-\tau{\bf n}$. The famous Frenet-Serret equations are given by \begin{align}\label{E:S-F} &\begin{cases}{\bf t}'=\hskip.38in\kappa {\bf n}\\ {\bf n}'=-\kappa {\bf t} \hskip.3in+\tau{\bf b} \\{\bf b}'=\hskip.3in-\tau{\bf n}.\end{cases}\end{align} At each point of the curve, the planes spanned by $\{{\bf t},{\bf n}\}$, $\{{\bf t},{\bf b}\}$, and $\{{\bf n},{\bf b}\}$ are known as the {\it osculating plane}, the {\it rectifying plane}, and the {\it normal plane}, respectively. From elementary differential geometry it is well known that a curve in $\mathbb E^3$ lies in a plane if its position vector always lies in its osculating plane, and lies on a sphere centered at the origin if its position vector always lies in its normal plane. In view of these basic facts, the author asked the following simple geometric question in \cite{c6}: \begin{question} When does the position vector of a space curve $\hbox{\bf x}: I\to \mathbb E^3$ always lie in its rectifying plane?
\end{question} The author called such a curve a {\it rectifying curve} in \cite{c6}, where many fundamental properties of rectifying curves were derived; in particular, all rectifying curves were completely classified in \cite{c6}. It is known that rectifying curves are related to the notions of constant-ratio curves and convolution (cf. \cite{c2001,c2002a,c2002b,c2003a,c2003b}). Furthermore, the author and F. Dillen established in \cite{cd05} a simple link between rectifying curves and the notion of centrodes in mechanics. Moreover, they showed in \cite{cd05} that rectifying curves are indeed the extremal curves which satisfy the equality case of a general inequality. Since then rectifying curves have been studied by many authors; see \cite{Cam16,II2003,II2007,II2008,II2014,Lucas2015,O2009,Yi16,Yu2007} among many others. For the most recent survey on rectifying curves, see \cite{c16}. In this article, we extend the notion of rectifying curves to the notion of rectifying submanifolds in a very natural way. Many fundamental properties of rectifying submanifolds are obtained. In particular, we prove that a Euclidean submanifold is rectifying if and only if the tangential component of its position vector field is a concurrent vector field. Moreover, rectifying submanifolds with arbitrary codimension are completely determined. \section{Preliminaries} Let $x: M\to \mathbb E^m$ be an isometric immersion of a Riemannian manifold $M$ into the Euclidean $m$-space $\mathbb E^m$. For each point $p\in M$, we denote by $T_pM$ and $T^\perp_p M$ the tangent and the normal spaces at $p$. There is a natural orthogonal decomposition: \begin{equation}\label{2.1} T_p{\mathbb E}^{m}=T_pM\oplus T^\perp_p M.\end{equation} Denote by $\nabla$ and $\tilde\nabla$ the Levi-Civita connections of $M$ and ${\mathbb E}^{m}$, respectively. The formulas of Gauss and Weingarten are given respectively by (cf.
\cite{cbook,book11}) \begin{align}\label{2.2} &\tilde \nabla_XY=\nabla_X Y+h(X,Y), \\& \label{2.3} \tilde\nabla_X\xi =-A_\xi X+D_X\xi \end{align} for vector fields $X,\,Y$ tangent to $M$ and $\xi$ normal to $M$, where $h$ is the second fundamental form, $D$ the normal connection, and $A$ the shape operator of $M$. For a given point $p\in M$, the {\it first normal space} of $M$ in $\mathbb E^m$, denoted by ${\rm Im}\,h_p$, is the subspace defined by \begin{align} \label{2.4} {\rm Im}\,h_p ={\rm Span}\{h(X,Y):X,Y\in T_p M\}.\end{align} For each normal vector $\xi$ at $p$, the shape operator $A_\xi$ is a self-adjoint endomorphism of $T_pM$. The second fundamental form $h$ and the shape operator $A$ are related by \begin{equation}\label{2.5} \<A_\xi X,Y\>=\<h(X,Y), \xi\>,\end{equation} where $\<\;\, ,\;\>$ is the inner product on $M$ as well as on the ambient Euclidean space. The {\it equation of Gauss\/} of $M$ in $\mathbb E^m$ is given by \begin{align}\label{2.6} R(X,Y;Z,W)=\<h(X,W),h(Y,Z)\>-\<h(X,Z),h(Y,W)\> \end{align} for $X,Y,Z,W$ tangent to $M$, where $R$ denotes the curvature tensor of $M$. The covariant derivative ${\bar \nabla}h$ of $h$ with respect to the connection on $TM \oplus T^{\perp}M$ is defined by \begin{align}\label{2.7}({\bar\nabla}_{X}h)(Y,Z)=D_{X}(h (Y,Z))-h(\nabla_{X}Y,Z)-h(Y,\nabla_{X}Z).\end{align} The {\it equation of Codazzi\/} is \begin{align}\label{2.8}({\bar\nabla}_{X}h)(Y,Z)= ({\bar\nabla}_{Y}h)(X,Z).\end{align} It follows from the definition of a rectifying curve $x:I\to \mathbb E^3$ that the position vector field ${\bf x}$ of $x$ satisfies \begin{equation}\label{2.9} {\bf x}(s)=\lambda(s){\bf t}(s)+\mu(s){\bf b}(s)\end{equation} for some functions $\lambda$ and $\mu$. For a curve $x:I\to \mathbb E^3$ with $\kappa(s_0)\ne 0$ at $s_0\in I$, the first normal space at $s_0$ is the line spanned by the principal normal vector ${\bf n}(s_0)$.
Hence, the rectifying plane at $s_0$ is nothing but the plane orthogonal to the first normal space at $s_0$. Therefore, for a submanifold $M$ of $\mathbb E^m$ and a point $p\in M$, we call the subspace of $T_p\mathbb E^m$, orthogonal complement to the first normal space ${\rm Im}\,h_p$, the {\it rectifying space of} $M$ at $p$. \begin{definition}\label{D:2.1} {\rm A submanifold $M$ of a Euclidean $m$-space $\mathbb E^m$ is called a} {\it rectifying submanifold} {\rm if the position vector field ${\bf x}$ of $M$ always lies in its rectifying space. In other words, $M$ is called a rectifying submanifold if and only if \begin{equation}\label{2.10} \<{\bf x}(p),{\rm Im}\, h_p\>=0\end{equation} holds at every $p\in M$.} \end{definition} \begin{definition} {\rm A non-trivial vector field $Z$ on a Riemannian manifold $M$ is called a {\it concurrent vector field} if it satisfies \begin{align}\label{2.11}\nabla_X Z=X\end{align} for any vector $X$ tangent to $M$, where $\nabla$ is the Levi-Civita connection of $M$.} \end{definition} \section{Lemmas} By a {\it cone\/} in $\mathbb E^m$ with vertex at the origin we mean a ruled submanifold generated by a family of lines passing through the origin. A submanifold of $\mathbb E^m$ is called a {\it conic submanifold\/} with vertex at the origin if it is an open portion of a cone with vertex at the origin. There exists a natural orthogonal decomposition of the position vector field ${\bf x}$ at each point for a Euclidean submanifold $M$; namely, \begin{equation} {\bf x}={\bf x}^T+{\bf x}^N,\end{equation} where ${\bf x}^T$ and ${\bf x}^N$ denote the tangential and normal components of ${\bf x}$, respectively. Let $|{\bf x}^T|$ and $|{\bf x}^N|$ be the lengths of ${\bf x}^T$ and ${\bf x}^N$, respectively. \begin{lemma} \label{L:3.1} Let $\,x:M\to {\mathbb E}^{m}$ be an isometric immersion of a Riemannian $n$-manifold into the Euclidean $m$-space ${\mathbb E}^{m}$.
Then ${\bf x}={\bf x}^T$ holds identically if and only if $M$ is a conic submanifold with the vertex at the origin. \end{lemma} \begin{proof} Let $\,x:M\to {\mathbb E}^{m}$ be an isometric immersion of a Riemannian $n$-manifold into the Euclidean $m$-space ${\mathbb E}^{m}$. If ${\bf x}={\bf x}^T$ holds identically, then $e_1={\bf x}/|{\bf x}|$ is a unit vector field tangent to $M$. Put ${\bf x}=\rho e_1$. Since $\tilde\nabla_{e_1}e_1$ is perpendicular to $e_1$, we find from \begin{equation}\tilde\nabla_{e_1}{\bf x}=e_1,\;\; \tilde\nabla_{e_1}{\bf x}=(e_1\rho)e_1+\rho \tilde\nabla_{e_1}e_1,\end{equation} that $\tilde\nabla_{e_1}e_1=0$. Therefore, the integral curves of $e_1$ are open portions of lines in $\mathbb E^m$. Moreover, because ${\bf x}={\bf x}^T$, the lines given by the integral curves of $e_1$ pass through the origin. Consequently, $M$ is a conic submanifold with the vertex at the origin. The converse is clear. \end{proof} \begin{lemma} \label{L:3.2} Let $\,x:M\to {\mathbb E}^{m}$ be an isometric immersion of a Riemannian $n$-manifold into the Euclidean $m$-space ${\mathbb E}^{m}$. Then ${\bf x}={\bf x}^N$ holds identically if and only if $M$ lies in a hypersphere centered at the origin. \end{lemma} \begin{proof} Let $\,x:M\to {\mathbb E}^{m}$ be an isometric immersion of a Riemannian $n$-manifold into the Euclidean $m$-space ${\mathbb E}^{m}$. If ${\bf x}={\bf x}^N$ holds identically, then we get $$Z\<{\bf x},{\bf x}\>=2\<\tilde\nabla_Z{\bf x},{\bf x}\>=2\<Z,{\bf x}^N\>=0$$ for any $Z\in TM$. Thus $M$ lies in a hypersphere centered at the origin. The converse is obvious. \end{proof} In view of Lemma \ref{L:3.1} and Lemma \ref{L:3.2} we make the following definition.
\begin{definition} {\rm A rectifying submanifold $M$ of $\mathbb E^m$ is called} {\it proper} {\rm if its position vector field ${\bf x}$ satisfies ${\bf x}\ne {\bf x}^T$ and ${\bf x}\ne {\bf x}^N$ at every point on $M$.} \end{definition} \begin{lemma} \label{L:3.3} Let $M$ be a proper rectifying submanifold of $\mathbb E^m$ with $\dim M=n$. Then we have \begin{align}\label{3.5} m> n+\dim\, ({\rm Im}\,h_p)\end{align} for each $p\in M$. \end{lemma} \begin{proof} Let $M$ be a proper rectifying submanifold of $\mathbb E^m$. If $m=n+\dim\, ({\rm Im}\,h_p)$ at some $p\in M$, then the rectifying space at $p$ reduces to $T_pM$, so that ${\bf x}={\bf x}^T$ at $p$, which is a contradiction. \end{proof} \begin{remark} In view of Lemma \ref{L:3.1} and Lemma \ref{L:3.2}, we are only interested in proper rectifying submanifolds.\end{remark} \section{Characterization and classification of rectifying submanifolds} First, we give the following simple characterization of rectifying submanifolds. \begin{theorem} \label{T:4.1} If the position vector field ${\bf x}$ of a submanifold $M$ in ${\mathbb E}^{m}$ satisfies ${\bf x}^N\ne 0$, then $M$ is a proper rectifying submanifold if and only if ${\bf x}^T$ is a concurrent vector field on $M$. \end{theorem} \begin{proof} Let $x:M\to {\mathbb E}^{m}$ be an isometric immersion of a Riemannian $n$-manifold into the Euclidean $m$-space ${\mathbb E}^{m}$. Consider the orthogonal decomposition \begin{align}\label{4.1} {\bf x}={\bf x}^T +{\bf x}^N\end{align} of the position vector field ${\bf x}$ of $M$ in $\mathbb E^m$. From \eqref{4.1} and the formulas of Gauss and Weingarten, we find \begin{align}\label{4.2} Z=\tilde\nabla_Z {\bf x}=\nabla_Z {\bf x}^T +h(Z,{\bf x}^T)-A_{{\bf x}^N}Z +D_Z{\bf x}^N\end{align} for any $Z\in TM$. After comparing the tangential components in \eqref{4.2}, we obtain \begin{align}\label{4.3} A_{{\bf x}^N}Z=\nabla_Z {\bf x}^T - Z.\end{align} Assume that $M$ is a proper rectifying submanifold. Then we have ${\bf x}^T\ne 0$ and ${\bf x}^N\ne 0$.
Moreover, it follows from Definition \ref{D:2.1} that \begin{align}\label{4.4} \<{\bf x}, h(X,Y)\>=0\end{align} for $X,Y\in TM$. Since ${\bf x}^T$ is tangent, \eqref{4.4} gives $\<{\bf x}^N,h(X,Y)\>=\<{\bf x},h(X,Y)\>=0$, so that $A_{{\bf x}^N}=0$ by \eqref{2.5}. Hence, we obtain from \eqref{4.3} that \begin{align}\label{4.5} \nabla_Z {\bf x}^T = Z,\end{align} which shows that ${\bf x}^T$ is a concurrent vector field on $M$. Conversely, if ${\bf x}^T$ is a concurrent vector field on $M$, then we find from \eqref{2.11} and \eqref{4.3} that $A_{{\bf x}^N}=0$. Therefore we obtain \eqref{4.4}. Consequently, $M$ is a proper rectifying submanifold due to ${\bf x}^N\ne 0$ by assumption. \end{proof} Next, we give the following classification of rectifying submanifolds. \begin{theorem}\label{T:4.2} If $M$ is a proper rectifying submanifold of ${\mathbb E}^{m}$, then with respect to some suitable local coordinate systems $\{s,u_2,\ldots,u_n\}$ on $M$ the immersion $\,x$ of $M$ in $\mathbb E^m$ is of the form \begin{equation}\label{4.6} x(s,u_2,\ldots,u_n)=\sqrt{s^2+c^2}\, Y(s,u_2,\ldots,u_n),\;\; \<Y,Y\>=1,\; c>0,\end{equation} such that the metric tensor $g_Y\! $ of the spherical submanifold defined by $Y$ satisfies \begin{align}\label{4.7} g_Y=\frac{c^2}{(s^2+c^2)^2}ds^2+\frac{s^2}{s^2+c^2}\sum_{i,j=2}^n g_{ij}(u_2,\ldots,u_n) du_i du_j.\end{align} Conversely, the immersion given by \eqref{4.6}-\eqref{4.7} defines a proper rectifying submanifold. \end{theorem} \begin{proof} Let $x:M\to {\mathbb E}^{m}$ be an isometric immersion of a Riemannian $n$-manifold $M$ into the Euclidean $m$-space ${\mathbb E}^{m}$. Assume that $M$ is a proper rectifying submanifold. Then \eqref{4.2} holds. After comparing the normal components of \eqref{4.2}, we obtain \begin{align}\label{4.8} &D_Z{\bf x}^N=-h(Z,{\bf x}^T),\end{align} for $Z\in TM$. It follows from \eqref{4.4} and \eqref{4.8} that $\<{\bf x},D_Z {\bf x}^N\>=0$. Hence we get $$Z\<{\bf x}^N,{\bf x}^N\>=0,$$ which implies that ${\bf x}^N$ is of positive constant length, say $c$.
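In detail, this last step uses that $D_Z{\bf x}^N$ is normal while ${\bf x}^T$ is tangent, so it can be spelled out as

```latex
\begin{align*}
Z\<{\bf x}^N,{\bf x}^N\> &= 2\<\tilde\nabla_Z{\bf x}^N,{\bf x}^N\>
 = 2\<D_Z{\bf x}^N,{\bf x}^N\>\\
&= 2\<D_Z{\bf x}^N,{\bf x}\> - 2\<D_Z{\bf x}^N,{\bf x}^T\>
 = -2\<h(Z,{\bf x}^T),{\bf x}\> - 0 = 0.
\end{align*}
```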
From \eqref{4.4} we obtain \begin{equation}\label{4.11} \<A_{{\bf x}^N} X,Y\>=\<{\bf x}^N,h(X,Y)\>=\<{\bf x},h(X,Y)\>=0.\end{equation} Hence we have $A_{{\bf x}^N}=0$. Let us put $\varphi=|{\bf x}^T|$ and $e_1={\bf x}^T/\varphi$. We may extend $e_1$ to a local orthonormal frame $e_1,\ldots,e_n$. We put \begin{equation}\label{4.9} \nabla_X e_i=\sum_{j=1}^n \omega_i^j(X)e_j,\;\; i=1,\ldots,n.\end{equation} For $j,k=2,\ldots,n$, we find \begin{equation}\label{4.10} 0=e_k\<{\bf x},e_j\>=\delta_{jk}+ \<{\bf x},\nabla_{e_k}e_j\>+\<{\bf x},h(e_j,e_k)\>.\end{equation} Since $h(e_j,e_k)=h(e_k,e_j)$, equation \eqref{4.10} gives $$\omega^1_j(e_k)=\omega^1_k(e_j),\;\; j,k=2,\ldots,n.$$ Hence, it follows from the Frobenius theorem that the distribution $\mathcal D$ spanned by $e_2,\ldots,e_n$ is an integrable distribution. On the other hand, the distribution $\mathcal D^\perp={\rm Span}\,\{e_1\}$ is also integrable since it is of rank one. Therefore, there exists a local coordinate system $\{s,u_2,\ldots,u_n\}$ on $M$ such that $e_1=\partial/\partial s$ and $\partial/\partial u_2,\ldots,\partial/\partial u_n$ span the distribution $\mathcal D$. Then we have \begin{equation}\label{4.12}{\bf x}^T=\varphi e_1.\end{equation} By taking the derivative of $\varphi=\<{\bf x},e_1\>$ with respect to $e_j$ for $j=1,\ldots,n$, we also have \begin{equation}\label{4.13}e_j\varphi=\delta_{1j}+\<{\bf x},h(e_1,e_j)\>.\end{equation} Combining \eqref{4.4} and \eqref{4.13} gives \begin{equation}\label{4.14}e_j\varphi=\delta_{1j},\quad j=1,\ldots,n.\end{equation} Therefore, we obtain $\varphi=\varphi(s)$ and $\varphi'(s)=1$, which imply $\varphi(s)=s+b$ for some constant $b$. Thus, after applying a suitable translation on $s$ if necessary, we have $\varphi=s$.
Consequently, the position vector field satisfies \begin{equation}\label{4.15}{\bf x}=se_1+{\bf x}^N.\end{equation} By combining \eqref{4.15} and $|{\bf x}^N|=c$, we find \begin{equation}\label{4.16}\<{\bf x},{\bf x}\>= s^2+c^2.\end{equation} Hence we may put \begin{equation}\label{4.17} x(s,u_2,\ldots,u_n)=\sqrt{s^2+c^2}\, Y(s,u_2,\ldots,u_n),\end{equation} for some $\mathbb E^m$-valued function $Y=Y(s,u_2,\ldots,u_n)$ satisfying $\<Y,Y\>=1$. Using \eqref{4.17} and the fact that $e_1=\partial/\partial s$ is orthogonal to the distribution $\mathcal D$, we obtain \begin{equation}\label{4.18} \<Y_s,Y_s\>={c^2\over {(s^2+c^2)^2}},\;\; \<Y_s,Y_{u_j}\>=0, \;\; j=2,\ldots,n.\end{equation} Therefore, the metric tensor $g_Y$ of the spherical submanifold defined by $Y$ takes the following form: \begin{align}\label{4.19} g_Y=\frac{c^2}{(s^2+c^2)^2}ds^2+\sum_{i,j=2}^n g_{ij}(s,u_2,\ldots,u_n) du_i du_j.\end{align} On the other hand, it follows from Theorem \ref{T:4.1} that ${\bf x}^T=se_1$ is a concurrent vector field. Thus, we find from \eqref{4.5} that \begin{equation}\begin{aligned}\label{4.20} e_1= \nabla_{e_1}{\bf x}^T=\nabla_{e_1} se_1=e_1+s\nabla_{e_1}e_1.\end{aligned}\end{equation} Hence we get $\nabla_{e_1}e_1=0$, which implies that the integral curves of $e_1$ are geodesics in $M$. Therefore, the distribution $\mathcal D^\perp$ spanned by $e_1$ is a totally geodesic foliation. From \eqref{4.5} we have \begin{equation} e_i= \nabla_{e_i} {\bf x}^T=s\nabla_{e_i}e_1, \;\; i=2,\ldots,n,\end{equation} which implies that \begin{equation} \label{4.22}\omega^j_1(e_i)=\frac{\delta_{ij}}{s}, \;\; i,j=2,\ldots,n,\end{equation} where $\delta_{ij}=1$ or 0 depending on whether $i=j$ or $i\ne j$. From \eqref{4.22} we conclude that $\mathcal D$ is an integrable distribution whose leaves are totally umbilical in $M$. Moreover, the mean curvature of the leaves of $\mathcal D$ is given by $s^{-1}$.
Since the leaves of $\mathcal D$ are hypersurfaces in $M$, it follows that the mean curvature vector fields of the leaves of $\mathcal D$ are parallel in the normal bundle of the leaves in $M$. Therefore, $\mathcal D$ is a spherical foliation. Consequently, by a result of \cite{H} (or Theorem 4.4 of \cite[page 90]{book11}) we conclude that $M$ is locally a warped product $I\times_{s} F$, where $F$ is a Riemannian $(n-1)$-manifold. Thus, the metric tensor $g$ of $M$ takes the form \begin{equation} \label{4.23}g=ds^2+s^2 g_F,\end{equation} where $g_F$ is the metric tensor of $F$. Now, by applying \eqref{4.6}, \eqref{4.19} and \eqref{4.23}, we may conclude that the metric tensor $g_Y$ can be expressed as \eqref{4.7}. Conversely, let us consider a submanifold $M$ of $\mathbb E^m$ defined by \begin{equation}\label{4.24} x(s,u_2,\ldots,u_n)=\sqrt{s^2+c^2}\, Y(s,u_2,\ldots,u_n),\;\; \<Y,Y\>=1,\; c>0,\end{equation} such that the metric tensor $g_Y\! $ satisfies \begin{align}\label{4.25} g_Y=\frac{c^2}{(s^2+c^2)^2}ds^2+\frac{s^2}{s^2+c^2}\sum_{i,j=2}^n g_{ij}(u_2,\ldots,u_n) du_i du_j.\end{align} Then it follows from \eqref{4.24} that \begin{equation}\begin{aligned}\label{4.26} &\frac{\partial {\bf x}}{\partial s}=\frac{sY}{\sqrt{s^2+c^2}}+\sqrt{s^2+c^2}\, Y_s,\;\; \\&\frac{\partial {\bf x}}{\partial u_j}=\sqrt{s^2+c^2}\, Y_{u_j},\;\; j=2,\ldots,n, \end{aligned}\end{equation} where $Y_s=\partial Y/\partial s$ and $Y_{u_j}=\partial Y/\partial u_j$. It follows from \eqref{4.24}, \eqref{4.25} and \eqref{4.26} that the metric tensor $g_M$ of $M$ is given by \begin{equation} \label{4.27}g_M=ds^2+s^2 \sum_{i,j=2}^n g_{ij}(u_2,\ldots,u_n) du_i du_j.
\end{equation} Now, by an easy computation, we find from \eqref{4.27} that \begin{equation} \label{4.28}\nabla_{\frac{\partial}{\partial s}}\frac{\partial}{\partial s}=0, \;\; \nabla_{\frac{\partial}{\partial u_j}} \frac{\partial}{\partial s}=\frac{1}{s}\frac{\partial}{\partial u_j},\;\; j=2,\ldots,n.\end{equation} Since $\<Y,Y\>=1$, \eqref{4.24} and \eqref{4.26} imply that \begin{equation} \label{4.29} \<{\bf x}, {\bf x}_{u_j}\>=0, \;\; j=2,\ldots,n.\end{equation} Therefore, we obtain ${\bf x}^T=s \frac{\partial}{\partial s}$. Now, it is easy to verify that ${\bf x}^T$ is a concurrent vector field on $M$. Moreover, a direct computation shows that the normal component of ${\bf x}$ is given by $${\bf x}^N=\frac{c^2}{\sqrt{s^2+c^2}}\,Y-s\sqrt{s^2+c^2}\, Y_s, $$ which is always non-zero on $M$. Consequently, $M$ is a proper rectifying submanifold, according to Theorem \ref{T:4.1}. \end{proof} \begin{remark} Theorem \ref{T:4.2} extends Theorem 3 of \cite{c6}. \end{remark} \begin{remark} If we put $t=\tan^{-1}\(\frac{s}{c}\)$, i.e., $s=c\tan t$, then \eqref{4.7} becomes \begin{align}\label{4.30} g_Y=dt^2+\sin^2t \sum_{j,k=2}^n g_{jk}(u_2,\ldots,u_n)du_j du_k.\end{align} For $n=2$, we get $g_Y=dt^2+(\sin^2t) du^2$ from \eqref{4.30}, which is the metric tensor of a spherical coordinate system $(t,u)$ on $S^2(1)$. Hence, for $n=2$, $Y=Y(t,u)$ is nothing but an isometric immersion from an open portion of $S^2(1)$ into $S^{m-1}(1)\subset \mathbb E^m$. Therefore, there exist many spherical submanifolds in $\mathbb E^m$ whose metric tensor is given by \eqref{4.7}. Consequently, there exist many rectifying submanifolds in $\mathbb E^m$ according to Theorem \ref{T:4.2}. \end{remark} \section{Some properties of rectifying submanifolds} Finally, we provide some basic properties of proper rectifying submanifolds. \begin{theorem}\label{T:5.1} Let $M$ be a proper rectifying submanifold of ${\mathbb E}^{m}$. Then \begin{itemize} \item[(a)] $|{\bf x}^T|=s+b$ for some constant $b$.
\item[{(b)}] $|{\bf x}|^2=s^2+c_1 s+c_2$ for some constants $c_1$ and $c_2$. \item[(c)] ${\bf x}^N$ is of constant length. \item[(d)] $A_{{\bf x}^N}=0$. \item[(e)] The curvature tensor $R$ satisfies $R({\bf x}^T,Y)=0$ for any $Y \in TM$. \item[(f)] The sectional curvature $K$ of $M$ satisfies $K({\bf x}^T,Z)=0$ for any unit vector $Z$ perpendicular to ${\bf x}^T$. \end{itemize} \end{theorem} \begin{proof} Statements (a), (b), (c) and (d) were already established in the proof of Theorem \ref{T:4.2}. Clearly, statement (f) follows immediately from statement (e). Now, we prove statement (e). This can be done as follows. By applying \eqref{2.6} and \eqref{4.8} we have \begin{equation}\begin{aligned}\label{5.1} R({\bf x}^T,Y;Z,W)\, &=\<h({\bf x}^T,W),h(Y,Z)\>-\<h({\bf x}^T,Z),h(Y,W)\> \\&=\<D_Z{\bf x}^N,h(Y,W)\>-\<D_W {\bf x}^N,h(Y,Z)\> \\&=-\<{\bf x}^N,D_Z h(Y,W)\>+\<{\bf x}^N,D_W h(Y,Z)\>.\end{aligned}\end{equation} Therefore, after applying \eqref{4.4} and equation \eqref{2.8} of Codazzi, we derive from \eqref{5.1} that \begin{equation}\begin{aligned} R({\bf x}^T,Y;Z,W)\, &=\<{\bf x}^N,(\bar\nabla_W h)(Y,Z)\>-\<{\bf x}^N,(\bar\nabla_Z h)(Y,W)\>=0,\end{aligned}\end{equation} which gives statement (e). \end{proof} \begin{remark} Statements (a), (b) and (c) of Theorem \ref{T:5.1} extend the corresponding results obtained in Theorem 1 of \cite{c6}. \end{remark} \begin{remark} One may define rectifying submanifolds in a pseudo-Euclidean space in the same way as in Definition \ref{D:2.1}. We will treat rectifying submanifolds in pseudo-Euclidean spaces in a separate article. \end{remark}
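As a direct cross-check of statement (c) of Theorem \ref{T:5.1}, the constant length of ${\bf x}^N$ can also be recovered from the explicit formula ${\bf x}^N=\frac{c^2}{\sqrt{s^2+c^2}}\,Y-s\sqrt{s^2+c^2}\, Y_s$ obtained in the proof of Theorem \ref{T:4.2}: since $\<Y,Y\>=1$ gives $\<Y,Y_s\>=0$, and $\<Y_s,Y_s\>=c^2/(s^2+c^2)^2$ by \eqref{4.18}, we compute

```latex
\begin{align*}
\<{\bf x}^N,{\bf x}^N\>
&= \frac{c^4}{s^2+c^2}\<Y,Y\> - 2sc^2\<Y,Y_s\> + s^2(s^2+c^2)\<Y_s,Y_s\>\\
&= \frac{c^4}{s^2+c^2} + \frac{s^2c^2}{s^2+c^2} = c^2.
\end{align*}
```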
https://arxiv.org/abs/1207.0566
Any order superconvergence finite volume schemes for 1D general elliptic equations
We present and analyze a finite volume scheme of arbitrary order for elliptic equations in the one-dimensional setting. In this scheme, the control volumes are constructed by using the Gauss points in subintervals of the underlying mesh. We provide a unified proof for the inf-sup condition, and show that our finite volume scheme has the optimal convergence rate in both the energy and $L^2$ norms of the approximation error. Furthermore, we prove that the derivative error is superconvergent at all Gauss points, and in some special cases the convergence rate can reach $h^{2r}$, where $r$ is the polynomial degree of the trial space. All theoretical results are justified by numerical tests.
\section{Introduction} \setcounter{equation}{0} The {\it finite volume method} (FVM) has attracted a lot of attention during the past several decades; we refer to \cite{Bank.R;Rose.D1987,Barth.T;Ohlberger2004, Cai.Z1991, Cai.Z_Park.M2003, ChenWuXu2011, Ewing.R;Lin.T;Lin.Y2002, EymardGallouetHerbin2000, Li.R2000, Ollivier-Gooch;M.Altena2002,Plexousakis_2004,Suli1991, Xu.J.Zou.Q2009, zou2009a} and the references cited therein for an incomplete list of references. Due to the local conservation of numerical fluxes, the capability to deal with problems on domains with complex geometries, and other advantages, FVM has a wide range of applications in scientific and engineering computations (see, e.g., \cite{EymardGallouetHerbin2000,Ollivier-Gooch;M.Altena2002}). There have been many studies of the mathematical theory for FVM, see, e.g., \cite{Bank.R;Rose.D1987, Xu.J.Zou.Q2009} and the monographs \cite{Barth.T;Ohlberger2004,EymardGallouetHerbin2000,Li.R2000}. However, while much attention has been paid to linear FVM schemes (see, e.g., \cite{Bank.R;Rose.D1987, Cai.Z1991, Ewing.R;Lin.T;Lin.Y2002,Suli1991,Suli1992}), high order FVM schemes have not been investigated as much or as satisfactorily. Moreover, the analysis of high order FVM schemes is often done case by case. For instance, earlier works on quadratic FVM schemes can be traced back to \cite{Tian.Chen_1991,Emonot1992,Liebau1996}; high order FVMs for 1D elliptic equations were derived in \cite{Plexousakis_2004}; high order FVMs on rectangular meshes were derived and analyzed in \cite{Cai.Z_Park.M2003}; and quadratic FVM schemes on triangular meshes have been intensively studied in \cite{Li.R2000,Xu.J.Zou.Q2009,Chen.L}. To the best of our knowledge, no analysis of FVM schemes of arbitrary order has been published yet. In this paper, we study a family of FVM schemes of arbitrary order in the one-dimensional setting.
Instead of a case-by-case study as in the literature for quadratic and cubic FVM schemes, we adopt a unified approach to establish the inf-sup condition. Earlier works (see, e.g., \cite{Liebau1996, Li.R2000,Xu.J.Zou.Q2009,ChenWuXu2011}) utilized element-wise analysis to prove that the bilinear form resulting from FVM is positive definite, which is a stronger property than the inf-sup condition. Hence, some assumptions on the mesh are needed, such as quasi-uniformity and shape-regularity (in 2D). The major difference here is that we prove only the inf-sup condition (instead of positive definiteness of the bilinear form), and no mesh condition is attached. With the help of the inf-sup condition, we obtain the optimal rate of convergence in both the $H^1$ and $L^2$ norms. In this paper, we also study the superconvergence property of our FVM schemes. Note that while the superconvergence theory of {\it finite element methods} (FEM) has reached its maturity (\cite{Babuvska.I;Strouboulis.T2001,Chen.C.M2001, L.R.Wahlbin1995,Zhu.QD;LinQ1989,Zienkiewicz;zhujianzhong2005}), the superconvergence analysis of FVM has mostly focused on linear schemes (see, e.g., \cite{Cai.Z1991, Xu.J.Zou.Q2009}). In this paper, it is shown that for a 1D general elliptic equation, the superconvergence behavior of FVM is similar to that of the counterpart finite element method. For instance, the convergence rate at nodal points is $h^{2r}$, the rate at interior Lobatto points (defined in Section 4) is $h^{r+2}$, and the convergence rate of the derivative error at Gauss points is $h^{r+1}$. In some special cases, however, surprising superconvergence results have been found and proved. That is, the convergence rate of the derivative error at all Gauss points can reach $h^{r+2}$ or even $h^{2r}$, depending on the coefficients of the elliptic equation.
The derivative convergence rate $h^{2r}$ doubles the global optimal rate $h^r$, which is much better than the counterpart finite element method's $h^{r+1}$ rate; the derivative convergence rate $h^{r+2}$ is one order higher than the counterpart finite element method's $h^{r+1}$. We organize the rest of the paper as follows. In Section 2 we present an arbitrary order FVM scheme for elliptic equations in the one-dimensional setting. In particular, we use the Gauss points of order $r\ge 1$ to construct the control volumes and choose the trial space as the Lagrange finite element of $r$th order with the interpolation points being the Lobatto points of order $r$. In Section 3 we provide a unified proof for the inf-sup condition and establish the optimal convergence rate under both the $H^1$ and $L^2$ norms. In Section 4, we study the superconvergence property at some special points of our FVM schemes of any order. In Section 5, a post-processing technique based on the superconvergence results of Section 4 is proposed to recover the derivative of the solution. Numerical experiments supporting our theory are presented in Section 6. Some concluding remarks are provided in Section 7. In the rest of this paper, ``$A\lesssim B$" means that $A$ can be bounded by $B$ multiplied by a constant which is independent of the parameters which $A$ and $B$ may depend on. ``$A\sim B$" means $``A\lesssim B"$ and $``B\lesssim A"$. \bigskip \section{FVM schemes of any order} \setcounter{equation}{0} In this section, we develop finite volume schemes for the following two-point boundary value problem on the interval $\Omega=(a,b)$: \begin{eqnarray}\label{Poisson} \begin{aligned} - (\alpha u')'(x) +\beta(x) u'(x)+\gamma(x) u(x)=f(x),&&\ \forall x\in \Omega, \\ u(a)=u(b)=0,&& \end{aligned} \end{eqnarray} where $\alpha \ge \alpha_0>0$, $\gamma-\frac{1}{2}\beta'\geq \kappa>0 $, $\alpha, \beta, \gamma \in L^{\infty}(\bar{\Omega})$, and $f$ is a real-valued function defined on $\bar{\Omega}$.
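To fix ideas, here is a minimal numerical sketch (not from the paper; the function name, uniform mesh, and coefficient choice are illustrative assumptions) of the lowest-order case $r=1$ for the pure diffusion model $-u''=f$ with $\alpha\equiv 1$, $\beta=\gamma=0$: the single Gauss point per cell is then the cell midpoint, and balancing fluxes over each control volume between consecutive midpoints yields a tridiagonal system.

```python
import numpy as np

def fvm_r1(N, f, a=0.0, b=1.0):
    """Lowest-order (r = 1) FVM sketch for -u'' = f, u(a) = u(b) = 0, uniform mesh.

    For r = 1 the single Gauss point per cell is the midpoint, so the control
    volume around node x_i is [x_i - h/2, x_i + h/2]; balancing the fluxes
    u'(x_i - h/2) - u'(x_i + h/2) = int f over it gives a tridiagonal system.
    """
    x = np.linspace(a, b, N + 1)
    h = (b - a) / N
    # fluxes of the piecewise linear trial function: u'(g_i) = (u_i - u_{i-1}) / h
    A = (2.0 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h
    rhs = h * f(x[1:-1])        # midpoint rule for int f over each control volume
    u = np.zeros(N + 1)
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

f = lambda x: np.pi ** 2 * np.sin(np.pi * x)
x, u = fvm_r1(64, f)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
assert err < 1e-2
```

With the exact solution $u(x)=\sin(\pi x)$ the nodal error decreases at second order as $N$ grows, consistent with the nodal rate $h^{2r}$ for $r=1$.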
We first introduce the primal partition and its corresponding trial space. For a positive integer $N$, let $\bZ_N:=\{1,\cdots,N\}$ and let $a=x_0< x_1<\ldots<x_N=b$ be $N+1$ distinct points on $\bar{\Omega}$. For all $i \in \bZ_N$, we denote $\tau_i=[x_{i-1},x_i]$ and $h_i=x_i-x_{i-1}$, let $h=\max\limits_{i\in \bZ_N}h_i$, and let \[ \cal {P}= \{\tau_i: i\in \bZ_N\} \] be a partition of $\Omega$. The corresponding trial space is chosen as the Lagrange finite element of $r$th order, $r\geq 1$, defined by \[ U^r_{\cal P}=\{v\in C(\Omega): v|_{\tau_j}\in \mathbb{P}_r, \forall\tau_j\in{\cal P}, v|_{\partial\Omega}=0\}, \] where $\mathbb{P}_r$ is the set of all polynomials of degree no more than $r$. Obviously, $\dim U^r_{\cal P}=Nr-1$. We next present a dual partition and its corresponding test space. Let $G_1,\ldots,G_r$ be the $r$ Gauss points, i.e., zeros of the Legendre polynomial of $r$th degree, on the interval $[-1,1]$. The Gauss points on each interval $\tau_i$ are defined as the affine transformations of $G_j$ to $\tau_i$, that is, \[ g_{i,j}=\frac {1}{2}(x_i+x_{i-1}+h_iG_j), \quad j\in\bZ_r. \] With these Gauss points, we construct a dual partition \[ \cal P'=\{\tau'_{1,0}, \tau'_{N,r}\}\cup \{\tau'_{i,j}: (i,j)\in \bZ_N \times \bZ_{r_i}\}, \] where \[ \tau'_{1,0}=[a,g_{1,1}], \tau'_{N,r}=[g_{N,r},b],\tau'_{i,j}=[g_{i,j}, g_{i,j+1}], \] with \[ r_i=\left\{\begin{array}{lll} r & \text{if} &i \in \bZ_{N-1}\\ r-1 & \text{if} &i=N \end{array} \right. \text{and} \quad g_{i,r+1}=g_{i+1,1},\forall i \in\bZ_{N-1}. \] The test space $V_{\cal P'}$ consists of the piecewise constant functions with respect to the partition $\cal P'$ which vanish on the intervals $\tau'_{1,0}$ and $\tau'_{N,r}$. In other words, \[ V_{\cal P'}=\text{Span}\left\{\psi_{i,j}: (i,j)\in \bZ_N \times \bZ_{r_i}\right\}, \] where $\psi_{i,j}=\chi_{[g_{i,j}, g_{i,j+1}]} $ is the characteristic function of the interval $\tau'_{i,j}$. We find that $\dim V_{\cal P'}=Nr-1=\dim U^r_{\cal P}$.
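The primal mesh, the mapped Gauss points $g_{i,j}$, and the dual partition are straightforward to generate; a short sketch (helper names and the uniform mesh are illustrative assumptions) using NumPy's `leggauss`, including the dimension count $\dim V_{\cal P'}=Nr-1$:

```python
import numpy as np

def primal_and_gauss(a, b, N, r):
    """Primal mesh x_0 < ... < x_N and the mapped Gauss points g_{i,j} in each cell."""
    x = np.linspace(a, b, N + 1)                 # uniform mesh, for illustration
    G, _ = np.polynomial.legendre.leggauss(r)    # Gauss points G_1 < ... < G_r on [-1, 1]
    # affine map g_{i,j} = (x_i + x_{i-1} + h_i G_j) / 2 onto tau_i = [x_{i-1}, x_i]
    g = 0.5 * (x[1:, None] + x[:-1, None] + (x[1:, None] - x[:-1, None]) * G[None, :])
    return x, g

a, b, N, r = 0.0, 1.0, 4, 3
x, g = primal_and_gauss(a, b, N, r)

# dual partition breakpoints: a, then all Gauss points in increasing order, then b
dual = np.concatenate(([a], g.ravel(), [b]))
assert np.all(np.diff(dual) > 0)

# dropping the two boundary intervals tau'_{1,0} and tau'_{N,r} leaves the
# control volumes [g_{i,j}, g_{i,j+1}]; their number matches dim V_{P'} = Nr - 1
n_control = len(dual) - 1 - 2
assert n_control == N * r - 1
```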
We are ready to present our finite volume schemes. Integrating \eqref{Poisson} on each control volume $[g_{i,j}, g_{i,j+1}], (i,j)\in \bZ_N \times \bZ_{r_i}$, yields \[ \int_{g_{i,j}}^{g_{i,j+1}} \big(- (\alpha u')'(x)+\beta(x)u'(x)+\gamma(x)u(x)\big)dx =\int_{g_{i,j}}^{g_{i,j+1}}f(x){dx}. \] In other words, \begin{equation}\label{conserve} \alpha(g_{i,j})u'(g_{i,j})-\alpha(g_{i,j+1})u'(g_{i,j+1})+\int_{g_{i,j}}^{g_{i,j+1}}\big( \beta(x)u'(x)+\gamma(x)u(x)\big)dx =\int_{g_{i,j}}^{g_{i,j+1}}f(x){dx}. \end{equation} Any $w_{\cal P'}\in V_{\cal P'}$ can be represented as \[ w_{\cal P'}=\sum_{i=1}^{N}\sum_{j=1}^{r_i}w_{i,j}\psi_{i,j}, \] where the $w_{i,j}$ are constants. Multiplying \eqref{conserve} by $w_{i,j}$ and then summing over all $(i,j)\in \bZ_N\times \bZ_{r_i}$, we obtain \begin{equation*} \begin{split} \sum_{i=1}^{N}\sum_{j=1}^{r_i} w_{i,j}\left(\alpha(g_{i,j})u'(g_{i,j})-\alpha(g_{i,j+1})u'(g_{i,j+1})+\int_{g_{i,j}}^{g_{i,j+1}}\big(\beta(x)u'(x)+\gamma(x)u(x)\big)dx \right) \\=\int_a^b f(x)w_{\cal P'}(x) dx, \end{split} \end{equation*} or equivalently, \begin{equation*} \begin{split} \sum_{i=1}^{N}\sum_{j=1}^{r} [w_{i,j}]\alpha(g_{i,j})u'(g_{i,j})+\sum_{i=1}^{N}\sum_{j=1}^{r_i} w_{i,j}\left(\int_{g_{i,j}}^{g_{i,j+1}}\big(\beta(x)u'(x)+\gamma(x)u(x)\big)dx \right) \\=\int_a^b f(x)w_{\cal P'}(x) dx, \end{split} \end{equation*} where $[w_{i,j}]=w_{i,j}-w_{i,j-1}$ is the jump of $w$ at the point $g_{i,j}, (i,j)\in \bZ_N \times \bZ_r$, with $w_{1,0}=0, w_{N,r}=0$ and $w_{i,0}=w_{i-1,r},2\leq i\leq N$. We define the FVM bilinear form for all $v\in H^1_0(\Omega), w_{\cal P'}\in V_{\cal P'}$ by \begin{equation}\label{bilinear1} \begin{split} a_{\cal P}(v,w_{\cal P'})=&\sum_{i=1}^{N}\sum_{j=1}^{r} [w_{i,j}]\alpha(g_{i,j})v'(g_{i,j})\\ &+\sum_{i=1}^{N}\sum_{j=1}^{r_i} w_{i,j}\left(\int_{g_{i,j}}^{g_{i,j+1}}\big(\beta(x)v'(x)+\gamma(x)v(x)\big)dx \right).
\end{split} \end{equation} The finite volume method for solving equation \eqref{Poisson} reads as: find $u_{\cal P}\in U^r_{\cal P}$ such that \begin{equation}\label{bilinear} a_{\cal P}(u_{\cal P},w_{\cal P'})=(f,w_{\cal P'}),\ \ \forall w_{\cal P'}\in V_{\cal P '}. \end{equation} \bigskip \section{Convergence Analysis} \setcounter{equation}{0} In this section, we prove the inf-sup condition and use it to establish some convergence properties of the FVM solution. \subsection{Inf-sup condition} We begin with some notations and definitions. First we introduce the {\it broken} Sobolev space \[ W^{m,p}_{\cal P}(\Omega)=\{ v\in C(\Omega) : v|_{\tau_i}\in W^{m,p}(\tau_i), \forall \tau_i \in \cal P \}, \] where $m$ is a given positive integer and $1\leq p \leq \infty$. When $p=2$, we write $H^m_{\cal P}(\Omega)$ for simplicity. For all $j\geq0$, we define a semi-norm by \[ |v|_{j,p,\cal P} = \left( \sum_{\tau_i\in \cal P} |v|_{j,p,\tau_i}^p \right)^{1\over p} \] and a norm by \[ \|v\|_{m,p,\cal P} =\left( \sum_{j=0}^m |v|_{j,p,\cal P}^p \right)^{1\over p}. \] We often use $|\cdot|_{m,\cal P}$ instead of $|\cdot|_{m, 2,\cal P}$ and $\|\cdot\|_{m,\cal P}$ instead of $\|\cdot\|_{m, 2,\cal P}$ for simplicity. Secondly, for all $w_{\cal P'}\in V_{\cal P'},\ w_{\cal P'}=\sum\limits_{i=1}^N\sum\limits_{j=1}^{r_i} w_{i,j}\psi_{i,j}$, let \[ \big|w_{\cal P'}\big|^2_{1,\cal P'}=\sum_{i=1}^{N}\sum_{j=1}^r h_i^{-1}[w_{i,j}]^2,\quad \big\|w_{\cal P'}\big\|^2_{0,\cal P'}= \sum_{i=1}^{N}\sum_{j=1}^{r_i} h_i w_{i,j}^2 \] and \[ \big\|w_{\cal P'}\big\|_{\cal P'}^2 = \big|w_{\cal P'}\big|_{1,\cal P'}^2 + \big\|w_{\cal P'}\big\|_{0,\cal P'}^2. \] Noticing that $w_{1,0}=w_{N,r}=0$, it is easy to obtain the following Poincar\'e type inequality \begin{eqnarray}\label{Poincare} \big\|w_{\cal P'}\big\|_{0,\cal P'} \lesssim \big|w_{\cal P'}\big|_{1,\cal P'}, \quad \forall w_{\cal P'}\in V_{\cal P'}, \end{eqnarray} where the hidden constant depends only on $\Omega$ and $r$.
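The telescoping-plus-Cauchy--Schwarz argument behind \eqref{Poincare} in fact gives the crude explicit constant $r(b-a)$; the following sketch (illustrative, with a uniform mesh, not part of the paper) checks this for a random test function in $V_{\cal P'}$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, N, r = 0.0, 1.0, 5, 3
h = (b - a) / N                      # uniform mesh, for illustration

# jumps [w_{i,j}] at the N*r Gauss points; the last jump closes the telescope
# so that w vanishes on tau'_{N,r} (i.e. w_{1,0} = w_{N,r} = 0)
jumps = rng.standard_normal(N * r)
jumps[-1] = -np.sum(jumps[:-1])
w = np.cumsum(jumps)[:-1]            # the N*r - 1 interior values w_{i,j}

seminorm1 = np.sqrt(np.sum(jumps ** 2) / h)
norm0 = np.sqrt(h * np.sum(w ** 2))

# Poincare-type inequality with the crude explicit constant r*(b - a)
assert norm0 <= r * (b - a) * seminorm1
```

Each value $w_{i,j}$ is a partial sum of jumps, so Cauchy--Schwarz bounds it by $\sqrt{r(b-a)}\,|w_{\cal P'}|_{1,\cal P'}$, and summing the weighted squares gives the stated constant.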
Thirdly, we denote by $A_j, j\in \bZ_r$, the weights of the Gauss quadrature \[ Q_r(F)=\sum_{j=1}^r A_j F(G_j) \] for computing the integral \[ I(F)=\int_{-1}^1 F(x) dx. \] It is well-known that $Q_r(F)=I(F)$ for all $F\in \bP_{2r-1}(-1,1)$. Naturally, the weights on the interval $\tau_i, i\in \bZ_N$, are \[ A_{ij}=\frac{h_i}{2}A_j,\quad j\in \bZ_r. \] Before the presentation of the inf-sup condition, we define a linear mapping $\Pi_{\cal P}:U_{\cal P}^r \rightarrow V_{\cal P'}$ by \[ \Pi_{\cal P}v_{\cal P}=\sum_{i=1}^N \sum_{j=1}^{r_i} v_{i,j}\psi_{i,j}, \] where the coefficients $v_{i,j}$ are determined by the constraints \[ [v_{i,j}]=A_{i,j}v'_{\cal P}(g_{i,j}),\ \ (i,j)\in \bZ_N\times \bZ_{r_i}. \] Since $v_{\cal P}\in U^r_{\cal P}$, the derivative satisfies $v'_{\cal P}\in \bP_{r-1}(\tau_i), i\in\bZ_N$, and hence \[ \sum_{i=1}^N\sum_{j=1}^{r} A_{i,j}v_{\cal P}'(g_{i,j})=\int_a^b v_{\cal P}'(x) dx=v_{\cal P}(b)-v_{\cal P}(a)=0. \] Therefore, \begin{eqnarray*} v_{N,r-1}&=&\sum_{i=1}^N\sum_{j=1}^{r_i}[v_{i,j}] =\sum_{i=1}^N\sum_{j=1}^{r} A_{i,j}v'_{\cal P}(g_{i,j})-A_{N,r}v'_{\cal P}(g_{N,r})\\ &=&-A_{N,r}v'_{\cal P}(g_{N,r}). \end{eqnarray*} In other words, we also have \[ [v_{N,r}]=v_{N,r}-v_{N,r-1}=A_{N,r}v'_{\cal P}(g_{N,r}). \] Consequently, \begin{eqnarray*} |\Pi_{\cal P}v_{\cal P}|_{1,\cal P'}^2&=&\sum_{i=1}^{N}\sum_{j=1}^r h_i^{-1}[v_{i,j}]^2 =\sum_{i=1}^{N}\sum_{j=1}^r h_i^{-1} \left(A_{i,j}v'_{\cal P}(g_{i,j})\right)^2 \thicksim\int_a^b \left(v'_{\cal P}(x)\right)^2dx. \end{eqnarray*} Namely, we have \begin{eqnarray}\label{equ-norm} |\Pi_{\cal P}v_{\cal P}|_{1,\cal P'}\thicksim |v_{\cal P}|_{1,\cal P}. \end{eqnarray} With all these preparations, we are now ready to present the inf-sup condition of $a_{\cal P}(\cdot,\cdot)$.
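The quadrature exactness $Q_r(F)=I(F)$ for $F\in\bP_{2r-1}$ and the scaled weights $A_{ij}=\frac{h_i}{2}A_j$ can be checked numerically; a sketch using NumPy's `leggauss` (the tooling choice is an assumption, not part of the paper):

```python
import numpy as np

r = 4
G, A = np.polynomial.legendre.leggauss(r)   # nodes G_j and weights A_j on [-1, 1]

# Q_r integrates polynomials of degree up to 2r - 1 exactly on [-1, 1]
for k in range(2 * r):
    exact = (1 - (-1) ** (k + 1)) / (k + 1)       # int_{-1}^1 t^k dt
    assert abs(np.dot(A, G ** k) - exact) < 1e-12

# scaled weights A_{ij} = (h_i / 2) A_j on a subinterval tau_i = [x0, x1]
x0, x1 = 0.3, 0.7
h = x1 - x0
g = 0.5 * (x0 + x1 + h * G)
Aij = 0.5 * h * A
# exactness carries over to tau_i under the affine map, e.g. for t^3:
assert abs(np.dot(Aij, g ** 3) - (x1 ** 4 - x0 ** 4) / 4) < 1e-12
```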
\begin{theorem} Assume that the mesh size $h$ is sufficiently small. Then \begin{equation}\label{infsup} \inf_{v_{\cal P}\in U^r_{\cal P}}\sup_{w_{\cal P'} \in V_{\cal P'} } \frac{a_{\cal P} (v_{\cal P} ,w_{\cal P'} )}{\|v_{\cal P} \|_{1,\cal P} \|w_{\cal P'}\|_{\cal P'}}\ge c_0, \end{equation} where $c_0>0$ is a constant depending only on $r,\alpha_0,\kappa$ and $\Omega$. \end{theorem} \begin{proof} It follows from the bilinear form \eqref{bilinear1} that \[ a_{\cal P}(v_{\cal P},\Pi_{\cal P}v_{\cal P})=I_1+I_2,\ \ \forall v_{\cal P}\in U_{\cal P}^r \] with \[ I_1=\sum_{i=1}^{N}\sum_{j=1}^{r} [v_{i,j}]\alpha(g_{i,j})v'_{\cal P}(g_{i,j}),\quad I_2=\sum_{i=1}^{N}\sum_{j=1}^{r_i} v_{i,j}\int_{g_{i,j}}^{g_{i,j+1}}\big(\beta(x)v'_{\cal P}(x)+\gamma(x)v_{\cal P}(x)\big)dx . \] Since $(v'_{\cal P})^2 \in \bP_{2r-2}(\tau_i), i\in \bZ_N$, we have \[ \int_{x_{i-1}}^{x_i} (v'_{\cal P}(x))^2 dx = \sum_{j=1}^r A_{i,j}(v'_{\cal P}(g_{i,j}))^2. \] Therefore, \begin{eqnarray*} I_1\ge \alpha_0\sum_{i=1}^{N}\sum_{j=1}^{r} A_{i,j}(v'_{\cal P}(g_{i,j}))^2= \alpha_0|v_{\cal P}|_{1,\cal P}^2. \end{eqnarray*} We next estimate $I_2$. Let $V(x)=\int_a^x \left( \beta(s)v'_{\cal P}(s)+\gamma(s)v_{\cal P}(s) \right) ds $ and denote by \begin{eqnarray*} E_{i}&=&\int_{x_{i-1}}^{x_{i}} v'_{\cal P}(x)V(x)dx-\sum_{j=1}^{r} A_{i,j} v'_{\cal P}(g_{i,j})V(g_{i,j}), \end{eqnarray*} the error of the Gauss quadrature on the interval $\tau_i$, $i\in\bZ_N$. Then \begin{eqnarray*} I_2&=&- \sum_{i=1}^{N}\sum_{j=1}^{r} [v_{i,j}] V(g_{i,j})= - \int_a^b v'_{\cal P}(x)V(x)dx + \sum_{i=1}^N E_i. \end{eqnarray*} Using the fact that $v_{\cal P}(a)=v_{\cal P}(b)=0$ and \[ \int_a^b \beta(x)v'_{\cal P}(x)v_{\cal P}(x)dx=-\frac{1}{2}\int_a^b \beta'(x)v^2_{\cal P}(x)dx, \] we obtain \begin{equation}\label{est3} -\int_a^b v'_{\cal P}(x)V(x)dx =\int_a^b \left( \gamma(x)-\frac{\beta'(x)}{2}\right)v^2_{\cal P}(x)dx \geq \kappa \|v_{\cal P}\|_0^2.
\end{equation} On the other hand, by \cite{DavisRabinowitz1984} (p.~98, (2.7.12)), for all $i\in \bZ_N$ \[ E_{i}=\frac{h_i^{2r+1}(r!)^4}{(2r+1)[(2r)!]^3}(v'_{\cal P}V)^{(2r)}(\xi_i), \] where $\xi_i\in \tau_i$. By the Leibniz formula for derivatives, noting that $(v'_{\cal P})^{(2r-k)}$ vanishes for $k\le r$ since $v'_{\cal P}\in\bP_{r-1}(\tau_i)$, we have \begin{equation*} \left|(v'_{\cal P}V)^{(2r)}(\xi_i)\right|\le\sum_{k=r+1}^{2r} \binom{2r}{k}\left| \big((\beta v'_{\cal P}+\gamma v_{\cal P})^{(k-1)} (v'_{\cal P})^{(2r-k)}\big)(\xi_i)\right| \le c_1 \|v_{\cal P}\|_{r,\infty,\tau_i}^2, \end{equation*} where $c_1$ depends only on $r$, $\|\beta\|_{2r-1,\infty,\tau_i}$ and $\|\gamma\|_{2r-1,\infty,\tau_i}$. Therefore, by the inverse inequality \[ \|v_{\cal P}\|_{r,\infty,\tau_i} \lesssim h_i^{-(r-\frac12)}|v_{\cal P}|_{1,\tau_i}, \] we have \begin{equation*}\label{quad err} |E_{i}| \lesssim c_1 h_i^2 |v_{\cal P}|_{1,\tau_i}^2. \end{equation*} Combining the above estimates, we obtain \[ I_2 \geq \kappa \|v_{\cal P}\|^2_{0,\cal P}-C h^2\left|v_{\cal P}\right|^2_{1,\cal P}, \] where $C$ depends only on $r$ and the coefficients $\beta,\gamma$. Then for sufficiently small $h$, we have \begin{eqnarray*} a_{\cal P}(v_{\cal P},\Pi_{\cal P}v_{\cal P}) &\ge& \frac{\alpha_0}{2}|v_{\cal P}|_{1,\cal P}^2 +\frac{\kappa}{2}\|v_{\cal P}\|_{0,\cal P}^2\ge \frac12\min\{\alpha_0,\kappa\}\|v_{\cal P}\|_{1,\cal P}^2. \end{eqnarray*} Noticing \eqref{Poincare} and \eqref{equ-norm}, we obtain \[ \|\Pi_{\cal P}v_{\cal P}\|_{\cal P'} \lesssim \|v_{\cal P}\|_{1,\cal P}. \] Therefore, for any $v_{\cal P}\in U^r_{\cal P}$, \[ \sup_{w_{\cal P'}\in V_{\cal P'}} \frac{a_{\cal P}(v_{\cal P},w_{\cal P'})}{\|w_{\cal P'}\|_{\cal P'}} \ge \frac{a_{\cal P}(v_{\cal P},\Pi_{\cal P}v_{\cal P})}{\|\Pi_{\cal P}v_{\cal P}\|_{\cal P'}}\ge c_0 \|v_{\cal P}\|_{1,\cal P}, \] where $c_0$ is a constant depending only on $r,\alpha_0,\kappa$ and $\Omega$. The inf-sup condition \eqref{infsup} follows.
\end{proof} \begin{remark} In the above proof, the partition ${\cal P}$ does not need to satisfy any quasi-uniformity or shape-regularity condition. Moreover, \eqref{infsup} holds even if the polynomial degrees on the sub-intervals $\tau_i$ are different. \end{remark} \subsection{Energy norm error estimate} Following \cite{Xu.J.Zou.Q2009}, we use the inf-sup condition \eqref{infsup} and the framework of the Petrov--Galerkin method to present and analyze the finite volume element method \eqref{bilinear}. We first introduce a semi-norm and a norm in the broken space $ H_{\cal P}^{2}(\Omega)= W^{2,2}_{\cal P}(\Omega) $ by \[ |v|^2_{\cal P}=\sum_{\tau_i\in\cal P}|v|_{1,\tau_i}^2+h_i^2|v|_{2,\tau_i}^2,\ \ \|v\|_{\cal P}^2 = \|v\|_{0,\cal P}^2+|v|_{\cal P}^2. \] It is straightforward to show that \[ |v_{\cal P}|_{\cal P}\sim |v_{\cal P}|_{1,\cal P}, \qquad \|v_{\cal P}\|_{\cal P}\sim \|v_{\cal P}\|_{1,\cal P},\qquad \forall v_{\cal P}\in U_{\cal P}^r. \] With these equivalences, the inf-sup condition \eqref{infsup} can be written as \begin{equation}\label{infsup2} \inf_{v_{\cal P}\in U_{\cal P}^r}\sup_{w_{\cal P'} \in V_{\cal P'} } \frac{a_{\cal P} (v_{\cal P} ,w_{\cal P'} )}{\|v_{\cal P} \|_{\cal P} \|w_{\cal P'}\|_{\cal P'}}\ge c_2, \end{equation} where $c_2>0$ also depends only on $r,\alpha_0, \kappa$ and $\Omega$. Moreover, we define a discrete semi-norm $\big|\cdot\big|_{G,1}$ for all $v\in H_0^1(\Omega)$ by \[ \big|v\big|_{G,1}=\left(\sum_{i=1}^N\sum_{j=1}^{r}A_{i,j}(v'(g_{i,j}))^2\right)^{\frac 12}. \] We next discuss the relationship between $|\cdot|_{\cal P}$ and $|\cdot|_{G,1}$. First, \[ |v_{\cal P}|_{G,1}= |v_{\cal P}|_{1,\cal P}\thicksim |v_{\cal P}|_{\cal P},\quad \forall v_{\cal P}\in U_{\cal P}^r. \] On the other hand, for all $v\in H_0^{1}(\Omega)\cap H_{\cal P}^{2}(\Omega)$, \[ (v'(g_{i,j}))^2\lesssim h^{-1}_{i}\|v'\|_{L^2(\tau_i)}^2+h_{i}\|v''\|_{L^2(\tau_i)}^2, \quad (i,j)\in \bZ_N \times \bZ_r, \] where the hidden constant depends only on $r$.
Thus by the fact $A_{ij}\le h_i$, we have \begin{eqnarray*} |v|_{G,1}^2&=&\sum_{i=1}^N\sum_{j=1}^r A_{ij}(v'(g_{i,j}))^2\\ &\lesssim & \sum_{i=1}^N h_i\left(h^{-1}_{i}\|v'\|_{L^2(\tau_i)}^2+h_{i}\|v''\|_{L^2(\tau_i)}^2\right) = |v|_{\cal P}^2. \end{eqnarray*} Namely, \[ |v|_{G,1}\lesssim |v|_{\cal P},\quad \forall v\in H_0^1(\O) \cap H_{\cal P}^2(\O), \] where the hidden constant depends only on $r$. \medskip We are ready to show our main result. \begin{theorem} Assume that $u$ is the solution of \eqref{Poisson} and $u_{\cal P}$ is the solution of \eqref{bilinear}. Then the finite volume bilinear form $a_{\cal P}(\cdot,\cdot)$ is variationally exact: \begin{equation}\label{exact} a_{\cal P}(u,w_{\cal P'})=(f,w_{\cal P'}),\ \ \ \ \forall\ w_{\cal P'}\in \ V_{\cal P'}, \end{equation} and bounded: for all $v\in H_0^1(\Omega)\cap H_{\cal P}^{2}(\Omega),\ w_{\cal P'}\in V_{\cal P'}$, \begin{equation}\label{bounded} |a_{\cal P}(v,w_{\cal P'})|\le M \|v\|_{\cal P} \|w_{\cal P'}\|_{\cal P'}, \end{equation} where the constant $M>0$ depends only on $r,\alpha_0,\kappa$ and $\O$. Consequently, \begin{equation}\label{totalestimate} \big\|u-u_{\cal P}\big\|_{\cal P}\le \frac{M}{c_2} \inf_{v_{\cal P}\in U^r_{{\cal P}}}\big\|u-v_{\cal P}\big\|_{\cal P}, \end{equation} where $c_2$ is the same as in \eqref{infsup2}. \end{theorem} \begin{proof} First, the formula \eqref{exact} follows by multiplying \eqref{Poisson} by an arbitrary function $w_{\cal P'}\in V_{\cal P'}$ and then using the Newton--Leibniz formula on each control volume $[g_{i,j},g_{i,j+1}], (i,j)\in \bZ_N \times \bZ_{r_i}$. Next we show \eqref{bounded}.
By the Cauchy--Schwarz inequality, from \eqref{bilinear1} there holds for all $v\in H_0^1(\Omega)\cap H_{\cal P}^{2}(\Omega),\ w_{\cal P'}\in V_{\cal P'}$ that \begin{eqnarray*} a_{\cal P}(v,w_{\cal P'})&\leq&|v|_{G,1}\left(\sum_{i=1}^{N}\sum_{j=1}^{r} \frac{\alpha^2(g_{i,j})}{A_{ij}}([w_{i,j}])^2\right)^\frac12 \\ &+&\max\big(\|\beta\|_{0,\infty},\|\gamma\|_{0,\infty}\big)\left(\sum_{i=1}^{N}\sum_{j=1}^{r_i}h_iw_{i,j}^2\right)^{\frac 12}\left(\sum_{i=1}^{N}(|v|_{1,\tau_i}^2+\|v\|_{0,\tau_i}^2)\right)^{\frac 12} \\ &\leq&M \|v\|_{\cal P} \|w_{\cal P'}\|_{\cal P'}, \end{eqnarray*} where the constant $M$ depends only on $r,\alpha_0,\kappa$ and $\O$. Finally, combining the inf-sup condition \eqref{infsup2}, \eqref{exact} and \eqref{bounded}, we derive \eqref{totalestimate} following similar arguments as in Babu\v{s}ka and Aziz \cite{BabuskaAziz1972}, or Xu and Zikatanov \cite{XuZikatanov2003}. \end{proof} \begin{coro} Assume that $u\in H_0^1(\Omega)\cap H^{r+1}_{\cal P}(\Omega)$ is the solution of \eqref{Poisson}, and $u_{\cal P}$ is the solution of the FVM scheme \eqref{bilinear}. Then \begin{equation}\label{optimalh1} \|u-u_{\cal P}\|_{1}\lesssim h^r|u|_{r+1,\cal P}, \end{equation} where the hidden constant is independent of $\cal P$. \end{coro} \begin{proof} It follows from the definition of $\|\cdot\|_{\cal P}$ and \eqref{totalestimate} that \[ \|u-u_{\cal P}\|_1\le \|u-u_{\cal P}\|_{\cal P}\lesssim \inf\limits_{v_{\cal P}\in U^r_{\cal P}}\|u-v_{\cal P}\|_{\cal P}. \] Note that \[ \inf\limits_{v_{\cal P}\in U^r_{\cal P}}\|u-v_{\cal P}\|_{\cal P}\le \|u-u_I\|_{1}+\left(\sum_{i=1}^{N}h_i^2|u-u_I|^2_{2,\tau_i}\right)^{\frac 12}, \] where $u_I$ is the Lagrange interpolation of $u$ at the Lobatto points (defined in the next section) in the trial space $U^r_{\cal P}$. By the standard approximation theory, we then obtain the estimate \eqref{optimalh1}. \end{proof} \section{Superconvergence} \setcounter{equation}{0} In this section, we will present the superconvergence properties of the FVM solution.
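The Lobatto points already referenced above are the endpoints $\pm1$ together with the zeros of the derivative of the Legendre polynomial; a short sketch computing them via NumPy's Legendre class (an illustrative helper, not the paper's code):

```python
import numpy as np

def lobatto_points(r):
    """Lobatto points of degree r: endpoints -1, 1 plus the zeros of P_r'."""
    Pr = np.polynomial.legendre.Legendre.basis(r)   # Legendre polynomial P_r
    interior = np.sort(Pr.deriv().roots())          # L_1 < ... < L_{r-1}
    return np.concatenate(([-1.0], interior, [1.0]))

L = lobatto_points(4)
# For r = 4, P_4'(t) = (35 t^3 - 15 t)/2 vanishes at 0 and +-sqrt(3/7)
assert np.allclose(L, [-1.0, -np.sqrt(3 / 7), 0.0, np.sqrt(3 / 7), 1.0])
```

Mapping these points affinely into each cell $\tau_i$, exactly as for the Gauss points, gives the interpolation nodes $l_{i,j}$ used below.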
To this end, we need to use the interpolation of a function at the so-called Lobatto points. This kind of interpolation has been used in the literature for superconvergence analysis, see, e.g., \cite{Zhang1, Zhang2}. We denote by $L_1,L_2,\cdots,L_{r-1}$ the zeros of $P'_r(x)$, where $P_r(x)$ is the Legendre polynomial of order $r$. Moreover, we denote $L_0=-1,L_r=1$ and $\bN_r=\{0,1,\cdots,r\}$ for $r\ge 1$. The points $L_j, j\in \bN_r$, are called the Lobatto points of degree $r$. The Lobatto points on $\tau_i$ are defined as the affine transformations of $L_j$ to $\tau_i$, i.e., \[ l_{i,j}=\frac12(x_i+x_{i-1}+h_iL_j),\quad j\in \bN_r. \] Let $u_I\in U^r_{\cal P}$ be the interpolation of $u$ such that \[ u_I(l_{i,j})=u(l_{i,j}),\quad (i,j)\in \bZ_N \times \bN_r, \] then by \cite{Zhu.QD;LinQ1989} (p.~146, (1.2)) \begin{eqnarray}\label{est1} |(u-u_I)'(g_{i,j})|\lesssim h^{r}|u|_{r+2,1,\omega'_{i,j}}, \end{eqnarray} where $\omega'_{i,j}=(g_{i,j-1},g_{i,j})$. \begin{theorem}\label{h1estimate} Let $u\in H^1_0(\Omega)\cap H_{\cal P}^{r+2}(\Omega)$ be the solution of \eqref{Poisson}, and $u_{\cal P}$ the solution of the FVM scheme \eqref{bilinear}. Then \begin{eqnarray}\label{bilest} |a_{\cal P}(u-u_I,w_{\cal P'})| \lesssim h^{r+1}\left(|u|_{r+2,\cal P}+|u|_{r+1,\infty,\cal P}\right)\|w_{\cal P'}\|_{\cal P'},\quad \forall w_{\cal P'} \in V_{\cal P'}. \end{eqnarray} Consequently, \begin{eqnarray}\label{h1est} \|u_I-u_{\cal P}\|_1\lesssim h^{r+1}\left(|u|_{r+2,\cal P}+|u|_{r+1,\infty,\cal P}\right). \end{eqnarray} \end{theorem} \begin{proof} Recalling the definition \eqref{bilinear1} of the bilinear form and integrating by parts, we obtain \[ a_{\cal P}(u-u_I,w_{\cal P'})=I_1+I_2 \] with \begin{eqnarray*} &&I_1=\sum_{i=1}^{N}\sum_{j=1}^{r}[w_{i,j}]\left(\alpha(g_{i,j})(u-u_I)'(g_{i,j})-\beta(g_{i,j})(u-u_I)(g_{i,j})\right),\\ &&I_2=\sum_{i=1}^{N}\sum_{j=1}^{r}w_{i,j}\int_{g_{i,j}}^{g_{i,j+1}}(\gamma-\beta')(u-u_I).
\end{eqnarray*} For all $i\in \bZ_{N}$, note that \[ |(u-u_I)(g_{i,j})|\lesssim h_i^{r+1}|u|_{r+1,\infty,\tau_i},\ \ |u|_{r+2,1,\tau_i} \lesssim h^{\frac12}_{i}|u|_{r+2,\tau_i}. \] By the Cauchy--Schwarz inequality, \eqref{est1} and the standard approximation theory, \begin{eqnarray*} a_{\cal P}(u-u_I,w_{\cal P'})&\lesssim& \|w_{\cal P'}\|_{\cal P'}\left( \sum_{i=1}^{N}\sum_{j=1}^{r}\left( h_i^{2(r+1)}|u|^2_{r+2,2,\tau_i}+h_i^{2r+3}|u|^2_{r+1,\infty,\tau_i}+\|u-u_I\|^2_{0,\tau_i} \right) \right)^{\frac12}\\ &\lesssim& h^{r+1}\left(|u|_{r+2,\cal P}+|u|_{r+1,\infty,\cal P}\right)\|w_{\cal P'}\|_{\cal P'}. \end{eqnarray*} The desired result \eqref{bilest} is proved. We next show \eqref{h1est}. By the inf-sup condition \eqref{infsup}, \[ c_0 \|u_I-u_{\cal P}\|_{1}\leq \sup_{w_{\cal P'}\in V_{\cal P'}}\frac{a_{\cal P}(u_{\cal P}-u_I,w_{\cal P'})}{\|w_{\cal P'}\|_{\cal P'}}=\sup_{w_{\cal P'}\in V_{\cal P'}}\frac{a_{\cal P}(u-u_I,w_{\cal P'})}{\|w_{\cal P'}\|_{\cal P'}}. \] Combining the above inequality with \eqref{bilest}, we derive \eqref{h1est}. \end{proof} As a direct consequence of \eqref{h1est}, we have the following $L^2$ error estimate. \begin{corollary}\label{l2estimate} Let $u\in H^1_0(\Omega)\cap H^{r+2}_{\cal P}(\Omega)$ be the solution of \eqref{Poisson}, and $u_{\cal P}$ the solution of the FVM scheme \eqref{bilinear}. Then \begin{eqnarray}\label{optimall2} \|u-u_{\cal P}\|_0 \lesssim h^{r+1}\|u\|_{r+2,\cal P}, \end{eqnarray} where the hidden positive constant is independent of $\cal P$. \end{corollary} \begin{proof} By the triangle inequality, \[ \|u-u_{\cal P}\|_0\leq \|u-u_I\|_0+\|u_I-u_{\cal P}\|_0. \] By the Poincar\'{e} inequality and \eqref{h1est}, we have \[ \|u_I-u_{\cal P}\|_0 \lesssim |u_I-u_{\cal P}|_1\lesssim h^{r+1}\|u\|_{r+2,\cal P}. \] Moreover, by the standard approximation theory, \[ \|u-u_I\|_0 \lesssim h^{r+1}\|u\|_{r+1} \lesssim h^{r+1}\|u\|_{r+2,\cal P}. \] The desired estimate \eqref{optimall2} follows.
\end{proof} \begin{remark} In the above $L^2$ error estimate, we do not use the so-called Aubin-Nitsche technique. However, we need a slightly stronger regularity assumption on the exact solution $u$. \end{remark} \bigskip We next study the superconvergence property at the nodes $x_i, i\in \bZ_{N-1}$. \begin{theorem}\label{supconv_vectices} Let $u$ be the solution of \eqref{Poisson}, and $u_{\cal P}$ the solution of the FVM scheme \eqref{bilinear}. If $u\in W^{2r+1,\infty}_{\cal P}(\Omega)$, then \begin{eqnarray}\label{e_vertice1} |(u-u_{\cal P})(x_{i})|\lesssim h^{2r} \sum_{j=r+1}^{2r+1} |u|_{j,\infty,\cal P},\ \ \forall i\in \bZ_{N-1}. \end{eqnarray} \end{theorem} \begin{proof} Let $e=u-u_{\cal P}$ and \[ \e(x)=\int_a^x \big( \beta(y)e'(y)+\gamma(y)e(y) \big) dy,\quad \forall x\in [a,b]. \] By the construction of the FVM scheme, both $u$ and $u_{\cal P}$ satisfy \eqref{conserve}, so for all $(i,j)\in \bZ_{N}\times \bZ_{r_i}$, \begin{eqnarray*} -\big( \alpha(g_{i,j+1})e'(g_{i,j+1})-\alpha(g_{i,j})e'(g_{i,j}) \big)+\e(g_{i,j+1})-\e(g_{i,j})=0. \end{eqnarray*} Namely, \begin{equation}\label{est4} \alpha(g_{i,j})e'(g_{i,j})-\e(g_{i,j})=C_0, \end{equation} where $C_0$ is a constant independent of $i$ and $j$. On the other hand, let $G(\cdot,\cdot)$ be the Green function for the problem \eqref{Poisson}. Then for all $v\in H_0^1(\Omega)$, \[ v(x)=\int_a^b\alpha(y)v'(y)\frac{\partial G}{\partial y}(x,y) dy+\int_a^b \big( \beta(y)v'(y)+\gamma(y)v(y) \big)G(x,y)dy. \] In particular, for all $i \in \bZ_{N-1}$, \[ e(x_i)=\int_a^b \alpha(y)e'(y)\frac{\partial G}{\partial y}(x_i,y) dy+\int_a^b \big( \beta(y)e'(y)+\gamma(y)e(y) \big)G(x_i,y)dy.
\] Noting that $G(x_i,a)=G(x_i,b)=0$, by \eqref{est4} we obtain \begin{eqnarray*} e(x_i)&=&\int_a^b \big(\alpha(y)e'(y)-\e(y)\big)\frac{\partial G}{\partial y}(x_i,y) dy\\ &=&\sum_{k=1}^{N}\sum_{j=1}^{r} A_{k,j}\big(\alpha(g_{k,j})e'(g_{k,j})-\e(g_{k,j})\big)\frac{\partial G}{\partial y}(x_i,g_{k,j})+E_1\\ &=&C_0\int_a^b\frac{\partial G}{\partial y}(x_i,y)dy +E_1+E_2=E_1+E_2, \end{eqnarray*} where \begin{eqnarray*} E_1&=&\left.\sum_{k=1}^N \frac{h_k^{2r+1}(r!)^4}{(2r+1)[(2r)!]^3}\left[\big(\alpha(y)e'(y)-\e(y)\big)\frac{\partial G}{\partial y}(x_i,y)\right]^{(2r)}\right |_{y=\xi_k},\\ E_2&=&\left.-\sum_{k=1}^N \frac{h_k^{2r+1}(r!)^4}{(2r+1)[(2r)!]^3}\left[\frac{\partial G}{\partial y}(x_i,y) \right]^{(2r)}\right|_{y=\eta_k} \end{eqnarray*} with $\xi_k,\eta_k \in \tau_k$. We next estimate $E_1$ and $E_2$, respectively. Note that $e^{(j)}=u^{(j)}$ for $j>r$ and that the Green function $G(x_i,\cdot)$ has bounded derivatives of any order on each $\tau_k, k\in \bZ_{N}$; then \begin{eqnarray*} |E_1| &\lesssim&\sum_{k=1}^N h_k^{2r+1}\left( \sum_{j=0}^r |e|_{j,\infty,\tau_k}+\sum_{j=r+1}^{2r+1} |u|_{j,\infty,\tau_k}\right)\\ &\lesssim &\sum_{k=1}^{N}h_k^{2r+1}\left(\sum_{j=0}^{r}h^{-j}\|e\|_{0,\infty,\tau_k}+\sum_{j=r+1}^{2r+1} |u|_{j,\infty,\tau_k}\right)\\ &\lesssim &h^{r}\|e\|_{0,\infty,\cal P}+h^{2r+1}\sum_{j=r+1}^{2r+1} |u|_{j,\infty,\cal P}, \end{eqnarray*} where in the second inequality we have used the fact that \cite{adams} \[ |e|_{j,\infty,\tau_k} \lesssim h^{-j}_k\|e\|_{0,\infty,\tau_k}+h^{r+1-j}_k|e|_{r+1,\infty,\tau_k},\ \ \forall j\in\bZ_{r}. \] We next consider the term $\|e\|_{0,\infty,\cal P}$.
By the triangle inequality and the inverse inequality \begin{eqnarray*} \|u_I-u_{\cal P}\|_{0,\infty,\cal P}& \lesssim& h^{\frac12}|u_I-u_{\cal P}|_{1}\lesssim h^{\frac12}(|u-u_I|_1+|u_I-u_{\cal P}|_1)\\ &\lesssim& h^{r+\frac12}|u|_{r+1}\lesssim h^{r+\frac12}|u|_{r+1,\infty,\cal P}, \end{eqnarray*} we have \[ \|e\|_{0,\infty,\cal P}\le \|u-u_I\|_{0,\infty,\cal P}+\|u_I-u_{\cal P}\|_{0,\infty,\cal P}\lesssim h^{r+\frac 12}|u|_{r+1,\infty,\cal P}. \] Therefore, \begin{eqnarray*} |E_1| \lesssim h^{2r+\frac 12} \sum_{j=r+1}^{2r+1}|u|_{j,\infty,\cal P}. \end{eqnarray*} As for $E_2$, a direct calculation shows that \[ |E_2| \lesssim h^{2r}\|G\|_{2r,\infty,\cal P}. \] Combining $E_1$ with $E_2$, we obtain \eqref{e_vertice1}. \end{proof} As a direct consequence of \eqref{e_vertice1}, we have \begin{equation}\label{avg-nod} E_{node}=\left(\frac{1}{N}\sum_{i=1}^{N} [ (u-u_{\cal P})(x_i) ]^2\right)^\frac{1}{2} \lesssim h^{2r}. \end{equation} We next estimate the term $e(x_i)-e(x_{i-1}), i\in\bZ_{N}$, which plays a critical role in our later superconvergence analysis. \begin{theorem}\label{supconv_vectrices2} For all $i\in\bZ_N$, \begin{equation}\label{supconv_vectices1} | (u-u_{\cal P})(x_i)-(u-u_{\cal P})(x_{i-1})|\lesssim h^{2r+1} \sum_{j=r+1}^{2r+1}|u|_{j,\infty,\cal P}. \end{equation} \end{theorem} \begin{proof} By the same arguments as in Theorem \ref{supconv_vectices}, we obtain \[ e(x_i)-e(x_{i-1})=\sum_{k=1}^{N}(E'_{1,k}+E'_{2,k}), \] where \begin{eqnarray*} E'_{1,k}&=&\left.
\frac{h_k^{2r+1}(r!)^4}{(2r+1)[(2r)!]^3}\left[\big(\alpha(y)e'(y)-\e(y)\big)\left( \frac{\partial G}{\partial y}(x_i,y) -\frac{\partial G}{\partial y}(x_{i-1},y) \right)\right]^{(2r)}\right|_{y=\xi_k},\\
E'_{2,k}&=&\left.-\frac{h_k^{2r+1}(r!)^4}{(2r+1)[(2r)!]^3}\left( \frac{\partial G}{\partial y}(x_i,y) -\frac{\partial G}{\partial y}(x_{i-1},y) \right)^{(2r)}\right|_{y=\eta_k}
\end{eqnarray*}
with $\xi_k,\eta_k \in \tau_k$.
Recalling the construction of the Green function $G(x_i,\cdot)$, we have for all $j\in \bN_{2r}$,
\[
\| \frac{\partial G}{\partial y}(x_i,y) -\frac{\partial G}{\partial y}(x_{i-1},y)\|_{j,\infty,\Omega\setminus\tau_{i}}\lesssim h\|G\|_{j+1,\infty,\Omega\setminus\tau_{i}},
\]
and
\[
\| \frac{\partial G}{\partial y}(x_i,y) -\frac{\partial G}{\partial y}(x_{i-1},y)\|_{j,\infty,\tau_{i}}\lesssim h\|G\|_{j+1,\infty,\tau_{i}}.
\]
Since the Green function $G(x_i,\cdot)\in C^{2r}(\tau_k)$, $k\in \bZ_{N}$, is bounded, we obtain
\begin{equation*}
\begin{split}
&\left.\left[\big(\alpha(y) e'(y)-\e(y)\big)\left( \frac{\partial G}{\partial y}(x_i,y) -\frac{\partial G}{\partial y}(x_{i-1},y) \right)\right]^{(2r)}\right|_{y=\xi_k}\\
&\lesssim \sum_{j=0}^{2r}\binom{2r}{j}\|\alpha e'-\e\|_{j,\infty,\tau_k}\| \frac{\partial G}{\partial y}(x_i,y) -\frac{\partial G}{\partial y}(x_{i-1},y)\|_{2r-j,\infty,\tau_{k}}\\
&\lesssim h_k\left( \sum_{j=0}^{r}|e|_{j,\infty,\tau_k}+\sum_{j=r+1}^{2r+1}|u|_{j,\infty,\tau_k}\right).
\end{split}
\end{equation*}
Estimating $\sum_{j=0}^{r}|e|_{j,\infty,\tau_k}+\sum_{j=r+1}^{2r+1}|u|_{j,\infty,\tau_k}$ in the same way as in Theorem \ref{supconv_vectices}, we obtain
\[
| E'_{1,k}|\lesssim h_k^{2r+2},\ \ |E'_{2,k}|\lesssim h_k^{2r+2},\ \ \forall k\in \bZ_{N},
\]
which yields the inequality \eqref{supconv_vectices1} directly.
\end{proof}
\bigskip
Next we present the superconvergence property of $u_{\cal P}'$ at the Gauss points and of $u_{\cal P}$ at the Lobatto points. Before our analysis, we first introduce a special polynomial.
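Before doing so, we remark that the Gauss-quadrature error formula with constant $\frac{h_k^{2r+1}(r!)^4}{(2r+1)[(2r)!]^3}$, invoked repeatedly in the preceding proofs, can be sanity-checked numerically: the error of the $r$-point rule on an interval of length $h$ should decay like $h^{2r+1}$. A minimal illustrative sketch (in Python with NumPy; not part of the analysis itself) tests this for $f=\exp$:

```python
import numpy as np

def gauss_quad(f, a, h, r):
    # r-point Gauss-Legendre rule on [a, a+h], exact for polynomials of degree <= 2r-1
    t, w = np.polynomial.legendre.leggauss(r)
    x = a + 0.5 * h * (t + 1.0)
    return 0.5 * h * np.dot(w, f(x))

def err(h, r):
    # quadrature error for f = exp on [0, h]; the exact integral is e^h - 1
    return abs(gauss_quad(np.exp, 0.0, h, r) - (np.exp(h) - 1.0))

r = 3
rate = np.log2(err(0.2, r) / err(0.1, r))
print(round(rate, 2))   # observed order, close to 2r + 1 = 7
```

Halving $h$ should reduce the error by roughly a factor $2^{2r+1}$, which is what the printed log-ratio measures.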
For all $v(t)\in H^1([-1,1])$, we denote by
\[
v_r(t)=\sum_{j=0}^rb_jM_j(t)
\]
the $r$th approximation of $v(t)$, with
\begin{eqnarray*}
&& b_0=\frac{v(1)+v(-1)}{2},\ b_1=\frac{v(1)-v(-1)}{2},\\
&& b_j=(j-\frac 12)\int_{-1}^{1}v'(t)L_{j-1}(t) dt,\ \ j=2,\ldots,r,
\end{eqnarray*}
where $M_i$ is the Lobatto polynomial of degree $i$ and $L_{j}$ is the Legendre polynomial of degree $j$. For all $x\in\tau_i$, $i\in\bZ_N$, we denote by
\[
v_r(x)=v_r(\frac{x_i+x_{i-1}+h_it}{2}),\ \ t\in[-1,1]
\]
the $r$th approximation of $v(x)$ on the interval $\tau_i$, $i\in\bZ_N$. Then
\begin{equation}\label{l1}
|(v-v_r)(l_{i,j})|\lesssim h^{r+2}\|v\|_{r+2,\infty,\cal P},\ \ j\in\bZ_{r-1},
\end{equation}
and
\begin{equation}\label{g2}
|(v-v_r)'(g_{i,j})|\lesssim h^{r+1}\|v\|_{r+2,\infty,\cal P},\ \ j\in\bZ_r.
\end{equation}
\begin{theorem}\label{supconv_gauss}
Let $u\in W^{r+2,\infty}_{\cal P}(\Omega)$ be the solution of \eqref{Poisson}, and $u_{\cal P}$ the solution of the FVM scheme \eqref{bilinear}. Then
\begin{eqnarray}\label{e_gauss2}
|(u-u_{\cal P})'(g_{i,j})|\lesssim h^{r+1}\|u\|_{r+2,\infty,\cal P},\ \ (i,j)\in \bZ_N \times \bZ_{r-1},
\end{eqnarray}
and
\begin{eqnarray}\label{e_Lobatto2}
|(u-u_{\cal P})(l_{i,j})|\lesssim h^{r+2}\|u\|_{r+2,\infty,\cal P},\ \ (i,j)\in \bZ_N \times \bZ_r.
\end{eqnarray}
\end{theorem}
\begin{proof}
For all $v\in H_0^1(\Omega)$, let
\[
A(u,v)=(\alpha u',v')+(\beta u'+\gamma u,v).
\]
Then we have
\[
v(x)=A(v,G(x,\cdot)),\ \ \forall x\in \Omega,
\]
where $G(x,\cdot)$ is the Green function for the problem \eqref{Poisson}. Let $G_{\cal P}$ be the Galerkin approximation of $G(x,\cdot)$, that is,
\[
v_{\cal P}(x)=A(v_{\cal P},G_{\cal P}),\ \ \forall v_{\cal P}\in U_{\cal P}^r, \forall x\in\Omega.
\]
Then (see \cite{Chen.C.M2001}, p.~33)
\[
A(u-u_r,G_{\cal P})\lesssim h^{r+2}\|u\|_{r+2,\infty,\cal P}.
\]
We next estimate the term $A(u-u_{\cal P},G_{\cal P})$.
Since $G_{\cal P}\in U_{\cal P}^r$, we have
\begin{eqnarray*}
A(u-u_{\cal P},G_{\cal P})&=&\int_a^b \big(\alpha(y)e'(y)-\e(y)\big)G'_{\cal P}(y) dy\\
&=&\sum_{k=1}^{N}\sum_{j=1}^{r} A_{k,j}\big(\alpha(g_{k,j})e'(g_{k,j})-\e(g_{k,j})\big)G'_{\cal P}(g_{k,j})+E_3\\
&=&C_0\int_a^b G'_{\cal P} dy +E_3=E_3,
\end{eqnarray*}
where $e(y),\e(y)$ and $C_0$ are the same as in Theorem \ref{supconv_vectices} and
\[
E_3=\left.\sum_{k=1}^N \frac{h_k^{2r+1}(r!)^4}{(2r+1)[(2r)!]^3}\left[\big(\alpha(y) e'(y)-\e(y)\big)G'_{\cal P}(y)\right]^{(2r)}\right |_{y=\xi_k}.
\]
Then
\begin{eqnarray*}
E_3 &\lesssim&\sum_{k=1}^N h_k^{2r+1}\left(|G_{\cal P}|_{r,\infty,\tau_k}\|e\|_{r+2,\infty,\tau_k}+\sum_{j=1}^{r-1}|G_{\cal P}|_{j,\infty,\tau_k}\|e\|_{2r-j+2,\infty,\tau_k}\right)\\
&\lesssim&\sum_{k=1}^N |G_{\cal P}|_{2,1,\tau_k}\left(h_k^{r+2}\|e\|_{r+2,\infty,\tau_k}+\sum_{j=1}^{r-1}h_k^{2r+2-j}\|e\|_{2r-j+2,\infty,\tau_k}\right)\\
&\lesssim& h^{r+2}\|e\|_{r+2,\infty,\cal P}\lesssim h^{r+2}\|u\|_{r+2,\infty,\cal P}.
\end{eqnarray*}
Here we have used \eqref{e1_1}, the inverse inequality
\[
|G_{\cal P}|_{j,\infty,\tau_k}\lesssim h_k^{1-j}|G_{\cal P}|_{2,1,\tau_k}, \ \forall j\in\bZ_{r},
\]
and the fact (see \cite{Chen.C.M2001}, p.~33) that
\[
|G_{\cal P}|_{2,1,\cal P}=\sum_{k=1}^{N}|G_{\cal P}|_{2,1,\tau_k}\le C
\]
with $C$ a bounded constant. Noting that
\[
(u_r-u_{\cal P})(x)=A(u_r-u_{\cal P},G_{\cal P})=A(u-u_{\cal P},G_{\cal P})-A(u-u_r,G_{\cal P}),
\]
we have
\[
|(u_r-u_{\cal P})(x)|\lesssim h^{r+2}\|u\|_{r+2,\infty,\cal P}.
\]
By the inverse inequality,
\[
|(u_r-u_{\cal P})'(x)|\lesssim h^{r+1}\|u\|_{r+2,\infty,\cal P}.
\]
Combining the above estimates with \eqref{l1} and \eqref{g2}, we obtain \eqref{e_gauss2} and \eqref{e_Lobatto2} directly by the triangle inequality.
\end{proof}
As a direct consequence, we have
\begin{equation}\label{sup_guass}
|u-u_{\cal P}|_{G,1} \lesssim h^{r+1}, \quad |u-u_{\cal P}|_{aver,1} \lesssim h^{r+1}
\end{equation}
and
\begin{equation}\label{sup}
|u-u_{\cal P}|_{L,0} \lesssim h^{r+2}, \quad |u-u_{\cal P}|_{aver,0} \lesssim h^{r+2},
\end{equation}
where
\[
|v|_{{\rm aver},1} = \left(\frac{1}{Nr}\sum_{i=1}^N \sum_{j=1}^r v'(g_{i,j})^2\right)^\frac12
\]
and
\begin{equation*}\label{semi-norm}
|v|_{L,0} = \left(\sum_{i=1}^N \sum_{j=0}^r w_{i,j} v(l_{i,j})^2\right)^\frac12,\quad |v|_{{\rm aver},0} = \left(\frac{1}{Nr}\sum_{i=1}^N \sum_{j=0}^r v(l_{i,j})^2\right)^\frac12,
\end{equation*}
with $w_{i,j}$ the weights of the Lobatto quadrature.
\bigskip
We now consider the special case $\beta=0$, for which we have the following theorem.
\begin{theorem}\label{supconv_gauss1}
Let $u\in W^{r+2,\infty}_{\cal P}(\Omega)$ be the solution of \eqref{Poisson}, and $u_{\cal P}$ the solution of the FVM scheme \eqref{bilinear}. If $\beta(x)=0$ for all $x\in \O$, then
\begin{eqnarray}\label{e_gauss8}
|(u-u_{\cal P})'(g_{i,j})|\lesssim h^{\min\{r+2,2r\}}\sum_{k=r+1}^{2r+1}|u|_{k,\infty,\cal P}.
\end{eqnarray}
\end{theorem}
\begin{proof}
First, since both $u$ and $u_{\cal P}$ satisfy \eqref{conserve}, there holds for all $ (i,j)\in \bZ_N\times \bZ_{r-1}$ that
\begin{eqnarray*}
\alpha(g_{i,j+1})e'(g_{i,j+1})-\alpha(g_{i,j})e'(g_{i,j})=\int_{g_{i,j}}^{g_{i,j+1}} \gamma(x)e(x)dx,
\end{eqnarray*}
which yields
\begin{equation}\label{est2}
e'(g_{i,j+1})=\frac{\alpha(g_{i,1})}{\alpha(g_{i,j+1})}e'(g_{i,1})+\frac{1}{\alpha(g_{i,j+1})}\int_{g_{i,1}}^{g_{i,j+1}} \gamma(x)e(x)dx.
\end{equation}
On the other hand,
\[
e(x_i)-e(x_{i-1})=\int_{x_{i-1}}^{x_i} e'(y)dy= \sum_{j=1}^r A_{i,j}e'(g_{i,j})+E_i,
\]
where, by \cite{DavisRabinowitz1984} (p.~98, (2.7.12)),
\begin{eqnarray*}
E_i=\frac{h_i^{2r+1}(r!)^4}{(2r+1)[(2r)!]^3}(e')^{(2r)}(\xi_i) \lesssim h^{2r+1}|u|_{2r+1,\infty,\tau_i},\quad \xi_i \in \tau_i.
\end{eqnarray*}
By Theorem \ref{supconv_vectrices2} and \eqref{est2}, we obtain
\[
\left|h_ie'(g_{i,1})+h_i\int_{x_{i-1}}^{x_i}\gamma(x)e(x) dx\right|\lesssim h_i^{2r+1}\sum_{k=r+1}^{2r+1}|u|_{k,\infty,\cal P}.
\]
Then
\[
|e'(g_{i,j})|\lesssim \int_{x_{i-1}}^{x_i}|\gamma(x)e(x)| dx+h_i^{2r}\sum_{k=r+1}^{2r+1}|u|_{k,\infty,\cal P},\ \ \forall j\in \bZ_{r}.
\]
We next estimate the term $\int_{x_{i-1}}^{x_i}|\gamma(x)e(x)| dx$. Note that
\begin{eqnarray*}
\int_{x_{i-1}}^{x_i} \left|\gamma(x)e(x) \right| dx &\leq & \|\gamma\|_{\infty} \left( \int_{x_{i-1}}^{x_i} \left|(u-u_I)(x)\right| dx+\int_{x_{i-1}}^{x_i} \left|(u_I-u_{\cal P})(x) \right|dx \right)\\
&\lesssim & h^{r+2}|u|_{r+2,\infty,\cal P} +h^{r+{5\over 2}}|u|_{r+2,\cal P} \lesssim h^{r+2}|u|_{r+2,\infty,\cal P},
\end{eqnarray*}
where in the above inequalities we have used $|u-u_I|\lesssim h^{r+1}\|u\|_{r+1,\infty}\lesssim h^{r+1}|u|_{r+2,\infty,\cal P}$ and $|u_I-u_{\cal P}|\lesssim h^{\frac 12}|u_I-u_{\cal P}|_{1,\cal P}\lesssim h^{r+\frac 32}|u|_{r+2,\cal P}$. Therefore,
\[
|(u-u_{\cal P})'(g_{i,j})|\lesssim h^{\min\{r+2,2r\}}\sum_{k=r+1}^{2r+1}|u|_{k,\infty,\cal P}.
\]
The proof is completed.
\end{proof}
\bigskip
In particular, when $\beta=\gamma=0$, we have a better result.
\begin{theorem}\label{Theo_5}
Let $u\in W^{2r+1,\infty}_{\cal P}(\O)$ be the solution of \eqref{Poisson}, and $u_{\cal P}\in U^r_{\cal P}$ the solution of the FVM scheme \eqref{bilinear}. If $\beta(x)=\gamma(x)=0$ for all $x\in \O$, then for all $(i,j)\in \bZ_N\times\bZ_r$ we have
\begin{equation}\label{point_conv}
|u'(g_{i,j})-u'_{\cal P}(g_{i,j})| \lesssim h^{2r}\sum_{k=r+1}^{2r+1}|u|_{k,\infty,{\cal P}}.
\end{equation}
\end{theorem}
\begin{proof}
By \eqref{est4}, we can define the constant
\[
C=\alpha(g_{i,j})(u_{\cal P}'(g_{i,j})-u'(g_{i,j})).
\]
The fact $u_{\cal P}\in {\mathbb P}_{r}$ yields that
\begin{eqnarray*}\label{errnode}
e(x_i) &-& e(x_{i-1}) = \int_{x_{i-1}}^{x_i} e'(t) dt \nonumber \\
&=& \sum_{k=1}^r A_{i,k} e'(g_{i,k}) + \int_{x_{i-1}}^{x_i} u'(t) dt - \sum_{k=1}^r A_{i,k} u'(g_{i,k}).
\end{eqnarray*}
By \cite{DavisRabinowitz1984} (p.~98, (2.7.12)),
\[
\left|\int_{x_{i-1}}^{x_i} u'(t) dt - \sum_{k=1}^r A_{i,k} u'(g_{i,k})\right| \lesssim h_i^{2r+1} |u|_{2r+1,\infty,\tau_i}.
\]
Combining this with Theorem \ref{supconv_vectrices2}, we have
\[
|C|\sum_{j=1}^r A_{i,j}\alpha^{-1}(g_{i,j}) \lesssim h_i^{2r+1}\sum_{k=r+1}^{2r+1}|u|_{k,\infty,\cal P},
\]
which yields \eqref{point_conv} directly.
\end{proof}
\begin{remark}
At the Gauss points, when $\beta=\gamma=0$, the derivative convergence rate $h^{2r}$ doubles the global optimal rate $h^r$ and is much better than the $h^{r+1}$ rate of the counterpart finite element method; when $\beta=0$, the derivative convergence rate $h^{r+2}$ is one order higher than the finite element method's $h^{r+1}$. At the nodal points, the convergence rate $h^{2r}$ almost doubles the global optimal rate $h^{r+1}$ and equals the $h^{2r}$ rate of the counterpart finite element method. At the Lobatto points, the convergence rate $h^{r+2}$ is one order higher than the global optimal rate $h^{r+1}$, the same as for the counterpart finite element method.
\end{remark}
\section{Post processing}
We observe from \eqref{e_gauss2}, \eqref{e_gauss8} and \eqref{point_conv} that $u'_{\cal P}$ approximates the derivative of the exact solution $u$ very well at the Gauss points. In this section, we recover $u'$ in the whole domain $\O$. For all $i=1,\ldots, N-1$, we construct a function $v_i\in \bP_{2r-1}([x_{i-1},x_{i+1}])$ by letting
\[
v_i(g_{l,k}) = u_{\cal P}'(g_{l,k}), \quad l=i,i+1; \; k=1,2,\ldots,r.
\]
Then we define, for all $x\in \tau_i= [x_{i-1},x_i]$, $i=1,\ldots,N$,
\[
v(x)=\left\{
\begin{array}{lll}
v_1(x), &i=1,\\
\frac{1}{2}\big(v_i(x)+v_{i-1}(x)\big), &2\le i\le N-1,\\
v_{N-1}(x),&i=N.
\end{array}
\right.
\]
To study how well $v$ approximates $u'$, we note that in each $[x_{i-1},x_{i+1}]$,
\[
u'(x)=(L_{2r-1}u')(x)+\frac{u^{(2r+1)}(\xi)}{(2r)!}\prod_{j=1}^r (x-g_{i,j})(x-g_{i+1,j}), \quad \xi\in[x_{i-1},x_{i+1}],
\]
where the Lagrange interpolant is
\[
(L_{2r-1}u')(x)=\sum_{l=i}^{i+1}\sum_{j=1}^r u'(g_{l,j})w_{l,j}(x), \quad w_{l,j}(x)=\prod_{(l',j')\neq(l,j)}\frac{x-g_{l',j'}}{g_{l,j}-g_{l',j'}}.
\]
Noting that
\[
v_{i}(x)=\sum_{l=i}^{i+1}\sum_{j=1}^r u'_{\cal P}(g_{l,j})w_{l,j}(x),
\]
we have
\begin{equation}\label{difference}
u'(x)-v_i(x)=\sum_{l=i}^{i+1}\sum_{j=1}^r (u'-u'_{\cal P})(g_{l,j})w_{l,j}(x)+\frac{u^{(2r+1)}(\xi)}{(2r)!}\prod_{j=1}^r (x-g_{i,j})(x-g_{i+1,j}).
\end{equation}
Since $|w_{l,j}(x)|\le c_r$ for all $x\in [x_{i-1},x_{i+1}]$, $l=i,i+1$ and $j=1,\ldots, r$, where $c_r$ is a constant depending only on $r$, we obtain by \eqref{e_gauss2}, \eqref{e_gauss8} and \eqref{point_conv} that
\[
|u'(x)-v_i(x)|\lesssim h^m\sum_{k=r+1}^{2r+1}|u|_{k,\infty,{\cal P}},
\]
where $m=r+1$ for general elliptic equations, $m=r+2$ if $\beta=0$, and $m=2r$ if $\beta=\gamma=0$. Consequently, we have
\[
|u'(x)-v(x)|\lesssim h^m\sum_{k=r+1}^{2r+1}|u|_{k,\infty,{\cal P}}, \quad \forall x\in\Omega.
\]
\section{Numerical experiments}
\setcounter{equation}{0}
In this section, we present numerical examples to demonstrate the method and to verify the theoretical results proved in this paper. In our experiments, we solve the two-point boundary value problem \eqref{Poisson} by the FVM scheme \eqref{bilinear} with $r=4$ or $r=5$. The underlying meshes are obtained by subdividing $\Omega=(0,1)$ into $N=2,4,8,16,32,64$ subintervals of equal size.
{\it Example} 1.
We consider the two-point boundary value problem \eqref{Poisson} with
\[
\alpha(x)=e^{x},\ \ \beta(x)=\cos x,\ \ \gamma(x)=x,\ \ \forall x\in\Omega,
\]
and $f$ is chosen so that the exact solution of this problem is
\[
u(x)=\sin x(x^{12}-x^{11}).
\]
We list the approximation errors under various (semi-)norms in Table \ref{r=4} (for the scheme with $r=4$) and Table \ref{r=5} (for the scheme with $r=5$).
\begin{table}[htbp]\caption{$r=4$ \label{r=4} }
\centering
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
N & $\|u-u_{\cal P}\|_0$ & $\|u-u_{\cal P}\|_1$ & $|u_I-u_{\cal P}|_1$ & $|u-u_{\cal P}|_{L,0}$ \\
\hline
2 & 1.8618e-03& 5.1201e-02& 5.2554e-03& 3.3420e-04\\
4 & 1.4386e-04& 7.2801e-03& 3.1271e-04& 9.8931e-06\\
8 & 5.9282e-06& 5.9099e-04& 1.1758e-05& 1.8624e-07\\
16& 1.9882e-07& 3.9516e-05& 3.8485e-07& 3.0490e-09 \\
32& 6.3240e-09& 2.5119e-06& 1.2166e-08& 4.8197e-11\\
64& 1.9850e-10& 1.5766e-07& 3.8129e-10& 7.5536e-13\\
\hline \hline
N & $|u-u_{\cal P}|_{aver,0}$ &$|u-u_{\cal P}|_{G,1}$ &$|u-u_{\cal P}|_{aver,1}$ & $E_{node}$ \\
\hline
2 & 2.1895e-04& 8.0770e-04& 5.4962e-04& 1.1874e-05\\
4 & 6.2680e-06& 5.3025e-05& 3.5877e-05& 5.9186e-08\\
8 & 1.1716e-07& 1.9692e-06& 1.3338e-06& 2.3666e-10\\
16& 1.9150e-09& 6.3947e-08& 4.3328e-08& 9.2827e-13\\
32& 3.0260e-11& 2.0170e-09& 1.3667e-09& ---\\
64& 4.7425e-13& 6.3175e-11& 4.2809e-11& ---\\
\hline
\end{tabular}
\end{table}
\begin{table}[htbp]\caption{$r=5$ \label{r=5} }
\centering
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
N & $\|u-u_{\cal P}\|_0$ & $\|u-u_{\cal P}\|_1$ &$|u_I-u_{\cal P}|_1$ & $|u-u_{\cal P}|_{L,0}$\\
\hline
2 & 4.8206e-04& 1.5546e-02& 8.5017e-04& 4.0891e-05 \\
4 & 1.5627e-05& 9.6503e-04& 2.1627e-05& 5.2643e-07\\
8 & 2.9713e-07& 3.6434e-05& 3.8065e-07& 4.6413e-09\\
16& 4.8711e-09& 1.1927e-06& 6.1190e-09& 3.7318e-11\\
32& 7.7022e-11& 3.7707e-08& 9.6282e-11& 2.9365e-13\\
64& 1.2073e-12& 1.1817e-09& 1.5081e-12& ---\\
\hline \hline
N & $|u-u_{\cal P}|_{aver,0}$ &$|u-u_{\cal P}|_{G,1}$ & $|u-u_{\cal P}|_{aver,1}$ & $E_{node}$ \\
\hline
2 & 2.6075e-05& 2.2179e-04 & 1.4819e-04& 4.6819e-08 \\
4 & 3.2965e-07& 5.8493e-06 & 3.9162e-06& 3.0508e-11\\
8 & 2.8971e-09& 1.0085e-07 & 6.7553e-08& 2.6318e-14\\
16& 2.3277e-11& 1.6089e-09 & 1.0779e-09& ---\\
32& 1.8311e-13& 2.5266e-11 & 1.6928e-11& ---\\
64& --- & 3.9473e-13 & 2.6482e-13& ---\\
\hline
\end{tabular}
\end{table}
To show explicitly the convergence rates of the different approximation errors, we plot the error curves in Figures \ref{1_1} and \ref{1_2}. We observe from Figure \ref{1_1} that the convergence rate of $|u-u_{\cal P}|_{1}$ is $r$ and the convergence rate of $\|u-u_{\cal P}\|_0$ is $r+1$. In other words, the FVM approximate solution converges to the exact solution with optimal convergence rates under both the $H^1$ and $L^2$ norms, as predicted in \eqref{optimalh1} and \eqref{optimall2}. We also observe that the error $|u_I-u_{\cal P}|_1$ is of order $r+1$, which confirms the convergence result in \eqref{h1est}. The errors $|u-u_{\cal P}|_{aver,0}$, $|u-u_{\cal P}|_{L,0}$ and $E_{node}$ are presented in Figure \ref{1_2}. It is observed that $|u-u_{\cal P}|_{aver,0}$ and $|u-u_{\cal P}|_{L,0}$ converge at rate $r+2$, which confirms the superconvergence property at the Lobatto points given in Theorem \ref{supconv_gauss}. The error $E_{node}$ converges at rate $2r$, which confirms the theory in Theorem \ref{supconv_vectices}.
\begin{figure}[htbp]
\hskip-0.5cm
\scalebox{0.5}{\includegraphics{45_1.eps}}
\caption{left: $r=4$, right: $r=5$}\label{1_1}
\end{figure}
\begin{figure}[htbp]
\scalebox{0.5}{\includegraphics{45_2.eps}}
\caption{left: $r=4$, right: $r=5$}\label{1_2}
\end{figure}
\bigskip
{\it Example} 2. In this example, we test the convergence behavior of the derivative error at the Gauss points. We consider three cases of equation \eqref{Poisson}:
\begin{itemize}
\item[Case 1]: $\alpha(x)=e^x,\ \beta(x)=\cos x,\ \gamma(x)=x$;
\item[Case 2]: $\alpha(x)=e^x,\ \beta(x)=0,\ \gamma(x)=x$;
\item[Case 3]: $\alpha(x)=e^x,\ \beta(x)=0,\ \gamma(x)=0$.
\end{itemize}
The exact solution is always $u(x)=\sin x(x^{12}-x^{11})$, and the right-hand side $f$ changes according to the coefficients in the different cases. Listed in Table \ref{case=0} are the errors in the derivative approximation at the Gauss points for the three cases, for $r=4$ and $r=5$, respectively. Plotted in Fig. \ref{1_3} are the corresponding error curves. We observe that the convergence rate is $r+1$ for Case 1, $r+2$ for Case 2 and $2r$ for Case 3. These numerical results are consistent with the theory derived in Section 4.
\begin{table}
\caption{Gauss points.}
\centering
\begin{threeparttable}
\begin{tabular}{|c|c|c|c||c|c|c|}
\hline
&\multicolumn{3}{|c||}{$r=4$}&\multicolumn{3}{|c|}{$r=5$}\\
\cline{1-7}N & Case 1 & Case 2 & Case 3 & Case 1 & Case 2 & Case 3 \\
\hline
1 & 4.7633e-03& 4.1098e-03 &4.1493e-03&1.5667e-03&1.1105e-04 &1.0183e-04 \\
2 & 5.4962e-04& 3.1457e-05& 2.9493e-05&2.6075e-05& 1.9562e-06& 8.6701e-09\\
4 & 3.5877e-05& 4.2964e-07& 1.1316e-07&3.9162e-06& 2.8751e-08& 3.1732e-11\\
8 & 1.3338e-06& 8.4296e-09& 4.3677e-10&6.7553e-08& 2.6812e-10& 3.6386e-14\\
16 & 4.3328e-08& 1.4052e-10& 1.7002e-12&1.0779e-09& 2.1878e-12& ---\\
32 & 1.3667e-09& 2.2319e-12& 6.6291e-15&1.6928e-11& 1.7284e-14&---\\
\hline
\end{tabular}
\end{threeparttable}
\label{case=0}
\end{table}
\begin{figure}[htbp]
\begin{center}
\hskip-0.5cm
\scalebox{0.5}{\includegraphics{45_3.eps}}
\caption{left: $r=4$, right: $r=5$}\label{1_3}
\end{center}
\end{figure}
\section{Concluding remarks}
The mathematical theory for the FVM has not been fully developed. The analysis in the literature for high-order FVM schemes is often done case by case, and it is a challenging task to develop a mathematical theory for FVM schemes of arbitrary order. In this article, we provide a unified proof of the inf-sup condition for a family of FVM schemes of arbitrary order in the one-dimensional setting.
Based on this, we show that the FVM solution converges to the exact solution with optimal order, both in the $H^1$ and $L^2$ norms. We also study the superconvergence of our FVM schemes. It is shown both theoretically and numerically that at the nodal and interior Lobatto points, the superconvergence behavior of the FVM is similar to that of the counterpart finite element method. Moreover, in some special cases, the superconvergence of the derivative of the FVM solution at the Gauss points may be much better than that of the counterpart finite element method. For instance, when $\beta=0$, the convergence rate of the derivative of the FVM solution is $h^{r+2}$, which is one order higher than the counterpart finite element method's $h^{r+1}$; when $\beta=\gamma=0$, the order is $h^{2r}$, which doubles the global optimal rate $h^r$ and is much better than the counterpart finite element method's $h^{r+1}$ rate. In a recent study \cite{ZhangZhangZou2011}, it is shown that after a simple post-processing procedure, FEM solutions can have the local conservation property. In this sense, the superconvergence properties discovered in this paper become a powerful argument supporting the view that the FVM still has its advantages.
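For readers who wish to reproduce the reported orders, the observed convergence rates can be extracted from the error tables by taking log-ratios of the errors under successive mesh halvings. A small illustrative sketch (Python assumed; not part of the method itself, with the error values copied from the $E_{node}$ column of Table \ref{r=4}):

```python
import math

# E_node values from Table "r=4" for N = 2, 4, 8, 16 (mesh size halved at each step)
errors = [1.1874e-05, 5.9186e-08, 2.3666e-10, 9.2827e-13]

# observed order between successive meshes: log2(e_N / e_{2N})
rates = [math.log2(a / b) for a, b in zip(errors, errors[1:])]
print([round(r, 2) for r in rates])   # tends to 2r = 8, the nodal superconvergence rate
```

The ratios approach $2r=8$, matching the nodal superconvergence rate proved in Theorem \ref{supconv_vectices}.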
https://arxiv.org/abs/1207.0566
Any order superconvergence finite volume schemes for 1D general elliptic equations
We present and analyze a finite volume scheme of arbitrary order for elliptic equations in the one-dimensional setting. In this scheme, the control volumes are constructed by using the Gauss points in subintervals of the underlying mesh. We provide a unified proof for the inf-sup condition, and show that our finite volume scheme has optimal convergence rate under the energy and $L^2$ norms of the approximate error. Furthermore, we prove that the derivative error is superconvergent at all Gauss points and in some special case, the convergence rate can reach $h^{2r}$, where $r$ is the polynomial degree of the trial space. All theoretical results are justified by numerical tests.
https://arxiv.org/abs/2104.06694
Line graph characterization of power graphs of finite nilpotent groups
This paper deals with the classification of groups $G$ such that power graphs and proper power graphs of $G$ are line graphs. In fact, we classify all finite nilpotent groups whose power graphs are line graphs. Also, we categorize all finite nilpotent groups (except non-abelian $2$-groups) whose proper power graphs are line graphs. Moreover, we investigate when the proper power graphs of generalized quaternion groups are line graphs. Besides, we derive a condition on the order of the dihedral groups for which the proper power graphs of the dihedral groups are line graphs.
\section{Introduction} \label{sec:intro}
The investigation of graph representations is one of the interesting and popular research topics in algebraic graph theory, as such graphs enrich both algebra and graph theory. Moreover, they have important applications (see, for example, \cite{surveypwrgraphkac1, cayleygraphsckry}) and are related to automata theory \cite{automatatheory}. During the last two decades, the investigation of the interplay between the properties of an algebraic structure $S$ and the graph-theoretic properties of $\Gamma(S),$ a graph associated with $S,$ has been an exciting topic of research. Different types of graphs, specifically the zero-divisor graph of a ring \cite{anderson}, semiring \cite{atani1}, semigroup \cite{deMeyer}, or poset \cite{joshizerodivgraphofideal}, the power graph of a semigroup \cite{undpwrgraphofsemgmainsgc1, directedgrphcompropofsemgrpkq3} or group \cite{combinatorialpropertyandpowergraphsofgroupskq1}, the normal subgroup based power graph of a group \cite{normalsubgrpbasedpwrbb2}, the intersection power graph of a group \cite{intersectionpwegraphb3}, the enhanced power graph of a group \cite{firstenhcedpwrstrctreaacns1,bera-dey-mukherjee-connectivity-enhanced}, etc., have been introduced to study algebraic structures using graph theory. One of the major graph representations among them is the power graph of a finite group; several papers can be found in this context \cite{surveypwrgraphkac1,pwrgraphoffntgrpgc1,HeidarJafari,forbidden-cameron-manna-Mehatari,undpwrgraphofsemgmainsgc1,Hamzeh-ashrafi}. The concept of a power graph was introduced in \cite{combinatorialpropertyandpowergraphsofgroupskq1}. As explained in the survey \cite{surveypwrgraphkac1}, this definition also covers undirected graphs. Accordingly, the present paper follows Chakrabarty \emph{et al}. and uses the brief term ``power graph'' defined as follows.
\begin{defn}[\cite{surveypwrgraphkac1,undpwrgraphofsemgmainsgc1, combinatorialpropertyandpowergraphsofgroupskq1}]\label{defn: powr graph}
Let $S$ be a semigroup. The \emph{power graph} $\mathcal{P}(S)$ of $S$ is the simple graph whose vertex set is $S,$ in which two distinct vertices $u$ and $v$ are edge connected if and only if either $u^m=v$ or $v^n=u,$ where $m, n\in \mathbb{N}.$
\end{defn}
The authors in \cite{undpwrgraphofsemgmainsgc1} studied various properties of the power graph. They characterized the class of semigroups $S$ for which $\mathcal{P}(S)$ is connected or complete. As a consequence, they proved the following.
\begin{lemma}[Theorem 2.12, \cite{undpwrgraphofsemgmainsgc1}]\label{thm:P(G) compltele iff G cylcic p group}
For a finite group $G,$ the power graph $\mathcal{P}(G)$ is complete if and only if $G$ is a cyclic group of order $1$ or $p^m,$ for some prime $p$ and some $m\in \mathbb{N}.$
\end{lemma}
It is clear that for two groups $G_1$ and $G_2,$ $G_1\cong G_2$ implies $\mathcal{P}(G_1)\cong\mathcal{P}(G_2).$ A natural question is whether the converse holds. In \cite{pwrgraphoffntgrpgc1}, the authors showed that non-isomorphic finite groups may have isomorphic power graphs, but that finite abelian groups with isomorphic power graphs must be isomorphic. They also conjectured that two finite groups with isomorphic power graphs have the same number of elements of each order. Cameron proved this conjecture in \cite{jgt-cameron}. In \cite{curtin-Pourgholi-prpoer-power-graph}, Curtin \emph{et al}. introduced the concept of deleted power graphs.
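Before turning to deleted power graphs, we note that Definition \ref{defn: powr graph} and Lemma \ref{thm:P(G) compltele iff G cylcic p group} are easy to check computationally for small cyclic groups. The following illustrative sketch (Python assumed; written additively for $\mathbb{Z}_n$, so the ``powers'' of $u$ are its multiples) is not part of the paper's arguments:

```python
def power_graph_edges(n):
    # edge set of the power graph of the cyclic group Z_n (written additively):
    # u ~ v iff one lies in the cyclic subgroup generated by the other
    gen = {u: {(k * u) % n for k in range(n)} for u in range(n)}
    return {(u, v) for u in range(n) for v in range(u + 1, n)
            if v in gen[u] or u in gen[v]}

def is_complete(n):
    return len(power_graph_edges(n)) == n * (n - 1) // 2

# P(Z_8) and P(Z_9) are complete (cyclic p-groups); P(Z_6) is not,
# since 2 and 3 generate incomparable subgroups
print(is_complete(8), is_complete(9), is_complete(6))   # True True False
```

This agrees with Lemma \ref{thm:P(G) compltele iff G cylcic p group}: among cyclic groups, only those of prime-power (or trivial) order have a complete power graph.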
They introduced the deleted power graph as follows.
\begin{defn}\label{defn: proper enhacd pwr graph}
Given a group $G,$ the \emph{proper power graph} of $G,$ denoted by $\mathcal{P}^{**}(G),$ is the graph obtained by deleting all the dominating vertices from the power graph $\mathcal{P}(G).$ Moreover, by $\mathcal{P}^{*}(G)$ we denote the graph obtained by deleting only the identity element of $G$; this is called the \emph{deleted power graph} of $G.$ Note that if there is no dominating vertex other than the identity, then $\mathcal{P}^{*}(G)=\mathcal{P}^{**}(G).$
\end{defn}
Curtin \emph{et al}. discussed the diameter of the proper power graph of the symmetric group $S_n$ on $n$ symbols. For more information related to proper power graphs we refer to \cite{curtin-Pourgholi-prpoer-power-graph, Dostabadi-Farrokhi-Ghouchan,shi-Moghaddamfar}. Aalipour \emph{et al}. in \cite[Question 40]{firstenhcedpwrstrctreaacns1} asked about the connectivity of proper power graphs. Recently, Cameron and Jafari answered this question in \cite{HeidarJafari}. Moreover, they proved the following theorem, which classifies all groups $G$ in which some non-identity vertex is joined to all others.
\begin{lemma}[Theorem 4, \cite{HeidarJafari}]
Let $G$ be a finite group. Suppose that $x\in G$ has the property that for all $y\in G,$ either $x$ is a power of $y$ or vice versa. Then one of the following holds:
\begin{enumerate}\label{lemma: classification of dominating vertices of power graph}
\item[(a)] $x=e;$
\item [(b)] $G$ is cyclic and $x$ is a generator;
\item[(c)] $G$ is a cyclic $p$-group for some prime $p$ and $x$ is arbitrary;
\item[(d)] $G$ is a generalized quaternion group and $x$ has order $2.$
\end{enumerate}
\end{lemma}
\subsection{Basic Definitions, Notations and Main Results}
We begin this section with some standard definitions from graph theory and group theory.
For the convenience of the reader and for later use, we recall some basic definitions and notations about graphs. Let $\Gamma=(V, E)$ be a graph, where $V$ is the set of vertices and $E$ is the set of edges. A graph $\Gamma'=(V', E')$ is a subgraph of a graph $\Gamma=(V, E)$ if and only if $V'\subset V$ and $E'\subset E.$ An \emph{induced} subgraph of a graph is the graph formed from a subset of the vertices of the graph and all of the edges connecting pairs of vertices in that subset. Two elements $u$ and $v$ are said to be adjacent if $(u, v) \in E.$ The standard distance between two vertices $u$ and $v$ in a connected graph $\Gamma$ is denoted by $d(u, v).$ Clearly, if $u$ and $v$ are adjacent, then $d(u, v)=1.$ For a graph $\Gamma,$ its \emph{diameter} is defined as $\text{diam}(\Gamma)= \max_{u, v \in V} d(u, v);$ that is, the diameter of a graph is the largest possible distance between a pair of its vertices. A \emph{path} of length $k$ between two vertices $v_0$ and $v_k$ is an alternating sequence of vertices and edges $v_0, e_0, v_1, e_1, v_2, \cdots , v_{k-1}, e_{k-1}, v_k$, where the $v_i$'s are distinct (except possibly the first and last vertices) and $e_i$ is the edge $(v_i, v_{i+1}).$ A graph $\Gamma$ is said to be \emph{connected} if for any pair of vertices $u$ and $v,$ there exists a path between $u$ and $v.$ $\Gamma$ is said to be \emph{complete} if any two distinct vertices are adjacent. A \emph{clique} of a graph $\Gamma$ is an induced subgraph of $\Gamma$ that is complete. The complete graph with $n$ vertices is denoted by $K_n.$ A \emph{bipartite} graph (or bigraph) is a graph whose vertices can be divided into two disjoint and independent sets $V_1$ and $V_2$ such that every edge connects a vertex in $V_1$ to one in $V_2.$ The vertex sets $V_1$ and $V_2$ are usually called the parts of the graph.
A \emph{complete bipartite} graph or biclique is a special kind of bipartite graph where every vertex of the first set is connected to every vertex of the second set. The \emph{star} graph with $n+1$ vertices is denoted by $\Gamma_{1, n}$; it consists of a single vertex with $n$ neighbours. A star graph with vertex set $\{v, v', v'', v'''\}$ is denoted by $\Gamma_{1, 3}(v, v', v'', v'''),$ where $v$ is edge connected to each of the vertices $v', v'', v'''$ and there is no edge between the vertices $v', v'', v'''.$ A vertex of a graph $\Gamma=(V, E)$ is called a \emph{dominating vertex} if it is adjacent to every other vertex. For a graph $\Gamma,$ let $\text{Dom}(\Gamma)$ denote the set of all dominating vertices of $\Gamma.$ The \emph{vertex connectivity} of a graph $\Gamma,$ denoted by $\kappa{(\Gamma)},$ is the minimum number of vertices which need to be removed from the vertex set of $\Gamma$ so that the induced subgraph of $\Gamma$ on the remaining vertices is disconnected. The complete graph with $n$ vertices has connectivity $n-1.$ A graph $\Gamma$ is a \emph{cograph} if it has no induced subgraph isomorphic to the four-vertex path $P_4.$ A graph $\Gamma$ is \emph{chordal} if it contains no induced cycles of length greater than $3;$ in other words, every cycle on more than $3$ vertices has a chord.
A \emph{threshold} graph is a graph containing no induced subgraph isomorphic to $P_4$, $C_4$ or $2K_2$ (that is, $K_2\bigoplus K_2$: two disjoint edges with no further edges connecting them). In general, let $\Gamma_1, \cdots, \Gamma_m$ be $m$ graphs such that $V(\Gamma_i)\cap V(\Gamma_j)=\emptyset$ for $i\neq j.$ Then $\Gamma=\Gamma_1\bigoplus\cdots\bigoplus\Gamma_m$ is the graph with vertex set $V(\Gamma)=V(\Gamma_1)\cup\cdots\cup V(\Gamma_m)$ and edge set $E(\Gamma)=E(\Gamma_1)\cup\cdots\cup E(\Gamma_m).$ Two graphs $\Gamma_1$ and $\Gamma_2$ are \emph{isomorphic} if there is a bijection $f$ from $V(\Gamma_1)$ to $V(\Gamma_2)$ such that $v\sim v'$ in $\Gamma_1$ if and only if $f(v)\sim f(v')$ in $\Gamma_2.$ For vertices $v, v',$ we write $v\sim v'$ to denote that $v$ and $v'$ are edge connected, and $v\nsim v'$ to denote that they are not. For more on graph theory we refer to \cite{graphthrybondymurti, algbraphgodsil, graphthrywest}. Throughout this paper we consider $G$ to be a finite group, and $|G|$ denotes the cardinality of the set $G.$ For a prime $p,$ a group $G$ is said to be a $p$-group if $|G|=p^{r}$ for some $r\in \mathbb{N}.$ Recall that a finite group $G$ is nilpotent if and only if it is a direct product of its Sylow $p$-subgroups over the primes $p$ dividing $|G|.$ Note that, in a nilpotent group, elements of different prime orders commute. For more on nilpotent groups we refer to \cite{algebra-book-hungerford,robinsongroup,scott-group}.
\begin{lemma}[Proposition 7.5, \cite{algebra-book-hungerford}]\label{Nilpotent group charecterization thm}
A finite group is nilpotent if and only if it is the direct product of its Sylow subgroups.
\end{lemma}
\begin{lemma}[Corollary 7.6, \cite{algebra-book-hungerford}]\label{m|G, nilpotent G has subgroup of ordder m}
If $G$ is a finite nilpotent group and $m$ divides $|G|,$ then $G$ has a subgroup of order $m.$
\end{lemma}
We need the structures of the dihedral groups and the generalized quaternion groups.
For $n \geq 2$, the \emph{dihedral group} of order $2n$ is defined by the following presentation: \[ D_{2n}= \langle r, s : r^n=s^2=e, rs=sr^{-1} \rangle.\] We also consider the generalized \emph{quaternion groups} $Q_{2^n}.$ Let $x = \overline{(1, 0)}$ and $y = \overline{(0, 1)}.$ Then $Q_{2^n} = \langle x, y\rangle,$ where \begin{enumerate} \item[(a)] $x$ has order $2^{n-1}$ and $y$ has order $4,$ \item[(b)] every element of $Q_{2^n}$ can be written in the form $x^a$ or $x^ay$ for some $a\in \mathbb{Z},$ \item[(c)] $x^{2^{n-2}}=y^2,$ \item[(d)] for each $g\in Q_{2^n}$ such that $g\notin \langle x \rangle,$ we have $gxg^{-1}=x^{-1}.$ \end{enumerate} For more information about $D_{2n}$ and $Q_{2^n}$ see \cite{generalized-quaternion, algebradummitfoote, scott-group}. \begin{lemma}[Theorem 5.4.10, \cite{Gorenstein-group-book}]\label{p group unique subgrp of order p, g is cyclic} If $G$ is a $p$-group with a unique subgroup of order $p$ for an odd prime $p,$ then $G$ is cyclic. \end{lemma} For any element $g \in G,$ $\text{o}(g)$ denotes the order of $g.$ Throughout this article, $v^{(m)}$ denotes a vertex corresponding to an element of order $m$ of $G.$ For any two positive integers $m$ and $n,$ the greatest common divisor of $m$ and $n$ is denoted by $\text{gcd}(m, n).$ The \emph{exponent} of a group $G,$ denoted by $\text{exp}(G),$ is defined as the least common multiple of the orders of all elements of the group. If there is no least common multiple, the exponent is taken to be infinity (or sometimes zero, depending on the convention). Euler's phi function $\phi(n)$ counts the integers $k$ in the range $1 \leq k \leq n$ for which $\text{gcd}(n, k)=1.$ The set $\{1, 2, \cdots, n\}$ is denoted by $[n].$ Throughout this paper, the group operation of any abelian group is taken to be additive. 
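Recall that in the power graph $\mathcal{P}(G)$ two distinct elements are adjacent precisely when one is a power of the other. As an illustrative computational sketch (not part of the paper's arguments), the following Python snippet builds $\mathcal{P}(\mathbb{Z}_n)$ — using the fact that the cyclic subgroup generated by $x$ in $\mathbb{Z}_n$ consists of the multiples of $\gcd(x,n)$ — and lists the dominating vertices; for $\mathbb{Z}_{12}$ these are exactly the identity and the generators.

```python
# Illustrative sketch: the power graph P(Z_n) and its dominating vertices.
# Adjacency: x ~ y iff one of x, y lies in the cyclic subgroup generated
# by the other; in Z_n, <x> is exactly the set of multiples of gcd(x, n).
from math import gcd

def power_graph_zn(n):
    """Adjacency sets of P(Z_n) on the vertex set {0, ..., n-1}."""
    adj = {v: set() for v in range(n)}
    for x in range(n):
        for y in range(x + 1, n):
            # y is in <x> iff gcd(x, n) divides y, and symmetrically.
            if y % gcd(x, n) == 0 or x % gcd(y, n) == 0:
                adj[x].add(y)
                adj[y].add(x)
    return adj

def dominating(adj):
    """Vertices adjacent to every other vertex, i.e. Dom(P(Z_n))."""
    n = len(adj)
    return sorted(v for v in adj if len(adj[v]) == n - 1)

# In Z_12 the dominating vertices are the identity 0 and the
# generators 1, 5, 7, 11.
print(dominating(power_graph_zn(12)))   # [0, 1, 5, 7, 11]
```

For a cyclic $p$-group such as $\mathbb{Z}_8$, every pair of vertices is adjacent, so $\mathcal{P}(\mathbb{Z}_8)$ is complete and every vertex is dominating.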
A number of important graph classes, including line graphs (defined later), cographs, chordal graphs, split graphs, and threshold graphs, can be defined either structurally or in terms of forbidden induced subgraphs. Recently, Cameron, Manna and Mehatari in \cite{forbidden-cameron-manna-Mehatari}, determined completely the groups whose power graph is a threshold or split graph. Moreover, they determined completely the finite nilpotent groups whose power graph is a cograph. Motivated by this work, in this paper our focus is on the following problems: \begin{enumerate} \item[(a)] Characterize all finite groups $G$ such that $\mathcal{P}(G)$ is a line graph of some graph $\Gamma.$ \item[(b)] Characterize all finite groups $G$ such that $\mathcal{P}^{**}(G)$ is a line graph of some graph $\Gamma.$ \end{enumerate} \begin{defn}[\cite{algbraphgodsil,graphthrywest}] The line graph of a graph $\Gamma$ is the graph $L(\Gamma)$ with the edges of $\Gamma$ as its vertices, and where two edges of $\Gamma$ are adjacent in $L(\Gamma)$ if and only if they are incident in $\Gamma.$ If a graph $\Gamma'$ is a line graph of some graph then we can call the graph $\Gamma'$ a line graph. \end{defn} \begin{figure}[H] \tiny \centering \begin{tikzpicture}[scale=1] \tikzstyle{edge_style} = [draw=black, line width=2mm,] \draw (0,0) rectangle (2,2); \draw (0,0) -- (-1,1); \draw (0,2) -- (-1,1); \draw (2,0) -- (3,1); \draw (2,2) -- (3,1); \node (e) at (0,-.3) {$\bf{v_1}$}; \node (e) at (2,-.3) {$\bf{v_2}$}; \node (e) at (2,2.3) {$\bf{v_3}$}; \node (e) at (0,2.3) {$\bf{v_4}$}; \node (e) at (-1.3,1) {$\bf{v_5}$}; \node (e) at (3.3,1) {$\bf{v_6}$}; % \node (e) at (1,-.3) {$\bf{e_1}$}; \node (e) at (2.3,1) {$\bf{e_2}$}; \node (e) at (1,2.3) {$\bf{e_3}$}; \node (e) at (-.3,1) {$\bf{e_4}$}; \node (e) at (-.8,1.5) {$\bf{e_5}$}; \node (e) at (-.8,.4) {$\bf{e_6}$}; \node (e) at (2.8,1.5) {$\bf{e_7}$}; \node (e) at (2.8,.4) {$\bf{e_8}$}; % \fill[black!100!] (0.01, 0.01) circle (.05); \fill[black!100!] 
(2, 2) circle (.05); \fill[black!100!] (2, 0) circle (.05); \fill[black!100!] (0,2) circle (.05); \fill[black!100!] (-1,1) circle (.05); \fill[black!100!] (3,1) circle (.05); % \draw (8,0)-- (9,1); \draw (9,1) -- (8,2); \draw (8,2) -- (7,1); \draw (7,1) -- (8,0); \draw (7,1) -- (6,0); \draw (7,1) -- (6,2); \draw (6,0) -- (6,2); \draw (6,0) -- (8,0); \draw (6,2) -- (8,2); % \draw (9,1) -- (10,0); \draw (9,1) -- (10,2); \draw (10,0) -- (10,2); \draw (10,0) -- (8,0); \draw (10,2) -- (8,2); % \fill[black!100!] (8,0) circle (.05); \fill[black!100!] (9,1) circle (.05); \fill[black!100!] (8,2) circle (.05); \fill[black!100!] (7,1) circle (.05); \fill[black!100!] (6,0) circle (.05); \fill[black!100!] (6,2) circle (.05); % \fill[black!100!] (10, 2) circle (.05); \fill[black!100!] (10, 0) circle (.05); \fill[black!100!] (0,2) circle (.05); % \node (e) at (8,-.3) {$\bf{e_1}$}; \node (e) at (9.3,1) {$\bf{e_2}$}; \node (e) at (8,2.3) {$\bf{e_3}$}; \node (e) at (6.7,1) {$\bf{e_4}$}; \node (e) at (6,2.3) {$\bf{e_5}$}; \node (e) at (6,-.3) {$\bf{e_6}$}; \node (e) at (10,2.3) {$\bf{e_7}$}; \node (e) at (10,-.3) {$\bf{e_8}$}; \end{tikzpicture} \caption{A graph and its line graph } \label{fig: example of line grapph of graph } \end{figure} The star graph $\Gamma_{1, n}$ has the complete graph $K_n$ as its line graph. The path graph $P_n$ has line graph equal to the shorter path $P_{n-1}.$ The cycle $C_n$ is isomorphic to its own line graph. One of the most important result related to the characterization of line graph is Lemma \ref{line graph}. For more information on line graph we refer \cite{algbraphgodsil,graphthrywest}. \begin{lemma}[Theorem 7.1.18, \cite{graphthrywest}]\label{line graph} A graph $\Gamma$ is the line graph of some graph if and only if $\Gamma$ does not have any of the nine graphs in Figure \ref{fig:line grapph theory} as an induced subgraph. 
\end{lemma} \begin{figure}[H] \tiny \centering \begin{tikzpicture}[scale=1] \tikzstyle{edge_style} = [draw=black, line width=2mm, ] \draw (0,0) rectangle (2,2); \draw (0,0) -- (2,2); \draw (2,0) -- (1,.5); \draw (0,2) -- (1,.5); \node (e) at (0,-.3) {$\bf{1}$}; \node (e) at (2,-.3) {$\bf{2}$}; \node (e) at (2,2.3) {$\bf{3}$}; \node (e) at (0,2.3) {$\bf{4}$}; \node (e) at (1,.3) {$\bf{5}$}; % \node (e) at (1,-.7) {$\bf{\Gamma_1}$}; % \fill[black!100!] (0.01, 0.01) circle (.05); \fill[black!100!] (2, 2) circle (.05); \fill[black!100!] (2, 0) circle (.05); \fill[black!100!] (0,2) circle (.05); \fill[black!100!] (1,.5) circle (.05); % \draw (3,0) rectangle (5,2); \draw (3,0)--(5,2); \draw (4,.5)--(5,2); \draw (4,.5)--(3,0); \draw (4,.5)--(5,0); \draw (4,.5)--(3,2); \node (e) at (3,-.3){$\bf{6}$}; \node (e) at (5,-.3){$\bf{7}$}; \node (e) at (5,2.3){$\bf{8}$}; \node (e) at (3,2.3){$\bf{9}$}; \node (e) at (4,.2){$\bf{10}$}; % \node (e) at (4,-.7) {$\bf{\Gamma_2}$}; % \fill[black!100!] (3,0) circle (.05); \fill[black!100!] (5, 2) circle (.05); \fill[black!100!] (4, 0.5) circle (.05); \fill[black!100!] (3,2) circle (.05); \fill[black!100!] (3,0) circle (.05); \fill[black!100!] (5,0) circle (.05); % \draw (0,-4) rectangle (2,-2); \draw (0,-4)--(2,-2); \draw (0,-4)--(1,-3.5); \draw (2,-4)--(1,-3.5); \draw (2,-2)--(1,-3.5); \draw (.7,-2.8)--(0,-2); \draw (.7,-2.8)--(2,-2); \draw (.7,-2.8)--(0,-4); \node (e) at (0,-4.3){$\bf{11}$}; \node (e) at (2,-4.3){$\bf{12}$}; \node (e) at (2,-1.7){$\bf{13}$}; \node (e) at (0,-1.7){$\bf{14}$}; \node (e) at (1,-3.8){$\bf{15}$}; \node (e) at (.7,-2.5){$\bf{16}$}; % \node (e) at (1,-4.8) {$\bf{\Gamma_3}$}; % \fill[black!100!] (0,-4) circle (.05); \fill[black!100!] (2, -2) circle (.05); \fill[black!100!] (1, -3.5) circle (.05); \fill[black!100!] (2,-4) circle (.05); \fill[black!100!] (0,-4) circle (.05); \fill[black!100!] 
(.7,-2.8) circle (.05); \filldraw[black!100] (0,-2) circle (.05); \draw (3,-4) rectangle (5,-2); \draw (3,-4) -- (5,-2); \draw (5,-4) -- (4,-3.6); \draw (3,-2) -- (3.5,-2.8); \draw (5,-2) -- (3.5,-2.8); \draw (3,-4) -- (3.5,-2.8); \node (e) at (3,-4.3){$\bf{17}$}; \node (e) at (5,-4.3){$\bf{18}$}; \node (e) at (5,-1.7){$\bf{19}$}; \node (e) at (3,-1.7){$\bf{20}$}; \node (e) at (4,-3.4){$\bf{21}$}; \node (e) at (3.6,-2.5){$\bf{22}$}; % \node (e) at (4,-4.8) {$\bf{\Gamma_4}$}; % \fill[black!100!] (3,-4) circle (.05); \fill[black!100!] (5, -2) circle (.05); \fill[black!100!] (4, -3.6) circle (.05); \fill[black!100!] (5,-4) circle (.05); \fill[black!100!] (0,-4) circle (.05); \fill[black!100!] (3.5,-2.8) circle (.05); \filldraw[black!100] (3,-2) circle (.05); % \draw (10,0) rectangle (12,2); \draw (12,0) -- (10,2); \draw (10,0) -- (11,.5); \draw (12,2) -- (11,1.5); \node (e) at (10,-.3){$\bf{27}$}; \node (e) at (12,-.3){$\bf{28}$}; \node (e) at (12,2.3){$\bf{29}$}; \node (e) at (10,2.3){$\bf{30}$}; \node (e) at (11,.2){$\bf{31}$}; \node (e) at (11.2,1.3){$\bf{32}$}; % \node (e) at (11,-.7) {$\bf{\Gamma_5}$}; % \fill[black!100!] (10,0) circle (.05); \fill[black!100!] (12, 0) circle (.05); \fill[black!100!] (12, 2) circle (.05); \fill[black!100!] (10,2) circle (.05); \fill[black!100!] (11,.5) circle (.05); \fill[black!100!] (11,1.5) circle (.05); % \draw (13,0) rectangle (15,2); \draw (13,0) -- (15,2); \draw (15,0)-- (14.5,.8); \draw (13,2) -- (13.5,1); \draw (14.5,.8) -- (13.5,1); \node (e) at (13,-.3){$\bf{33}$}; \node (e) at (15,-.3){$\bf{34}$}; \node (e) at (15,2.3){$\bf{35}$}; \node (e) at (13,2.3){$\bf{36}$}; \node (e) at (14.3,.6){$\bf{37}$}; \node (e) at (13.8,1.2){$\bf{38}$}; % \node (e) at (14,-.7) {$\bf{\Gamma_6}$}; % \fill[black!100!] (13,0) circle (.05); \fill[black!100!] (15,2) circle (.05); \fill[black!100!] (15,0) circle (.05); \fill[black!100!] (13,2) circle (.05); \fill[black!100!] (14.5,.8) circle (.05); \fill[black!100!] 
(13.5,1) circle (.05); % \draw (10,-4) rectangle (12,-2); \draw (10,-4)--(12,-2); \draw (10,-4)--(11,-3.5); \draw (12,-4)--(11,-3.5); \draw (10,-2)--(11,-2.5); \draw (12,-2)--(11,-2.5); \node (e) at (10,-4.3){$\bf{39}$}; \node (e) at (12,-4.3){$\bf{40}$}; \node (e) at (12,-1.7){$\bf{41}$}; \node (e) at (10,-1.7){$\bf{42}$}; \node (e) at (11,-3.8){$\bf{43}$}; \node (e) at (11,-2.7){$\bf{44}$}; % \node (e) at (11,-4.8) {$\bf{\Gamma_7}$}; % \fill[black!100!] (10,-4) circle (.05); \fill[black!100!] (12, -2) circle (.05); \fill[black!100!] (11, -3.5) circle (.05); \fill[black!100!] (12,-4) circle (.05); \fill[black!100!] (10,-2) circle (.05); \fill[black!100!] (11,-2.5) circle (.05); % \draw (13,-4) rectangle (15,-2); \draw (13,-4) -- (15,-2); \draw (15,-4)--(14,-3.5); \draw (13,-2)--(13.5,-3); \draw (15,-2)--(13.5,-3); \draw (15,-2)--(14,-3.5); \draw (13.5,-3)--(14,-3.5); \node (e) at (13,-4.3){$\bf{45}$}; \node (e) at (15,-4.3){$\bf{46}$}; \node (e) at (15,-1.7){$\bf{47}$}; \node (e) at (13,-1.7){$\bf{48}$}; \node (e) at (14,-3.8){$\bf{49}$}; \node (e) at (13.24,-3.2){$\bf{50}$}; % \node (e) at (14,-4.8) {$\bf{\Gamma_8}$}; % \fill[black!100!] (13,-4) circle (.05); \fill[black!100!] (15, -2) circle (.05); \fill[black!100!] (14, -3.5) circle (.05); \fill[black!100!] (15,-4) circle (.05); \fill[black!100!] (13,-2) circle (.05); \fill[black!100!] (13.5,-3) circle (.05); % \draw (7.5,-1)--(7.5,-3); \draw (7.5,-1)--(6,1); \draw (7.5,-1)--(9,1); \node (e) at (7.1,-1){$\bf{51}$}; \node (e) at (7.1, -3){$\bf{52}$}; \node (e) at (6,1.2){$\bf{53}$}; \node (e) at (9,1.2){$\bf{54}$}; % \node (e) at (7.5,-3.5) {$\bf{\Gamma_9}$}; % \fill[black!100!] (7.5,-1) circle (.05); \fill[black!100!] (7.5, -3) circle (.05); \fill[black!100!] (6,1) circle (.05); \fill[black!100!] 
(9,1) circle (.05); \end{tikzpicture} \caption{The nine graphs $\Gamma_1,\Gamma_2, \Gamma_3, \Gamma_4, \Gamma_5, \Gamma_6, \Gamma_7,\Gamma_8, \Gamma_9;$ the numbers appearing in the figure label the vertices of the corresponding graphs. } \label{fig:line grapph theory} \end{figure} In this article, we completely describe all nilpotent groups $G$ for which the power graph of $G$ is a line graph. In fact, our result on power graphs is the following: \begin{theorem}\label{Thm:P(G) is line graph for nilpotent group} Let $G$ be a nilpotent group. Then $\mathcal{P}(G)$ is a line graph of some graph $\Gamma$ if and only if $G$ is a cyclic $p$-group. \end{theorem} Besides, we determine all nilpotent groups $G$ (except non-abelian $2$-groups) for which the proper power graph is a line graph. \begin{theorem}\label{Thm:P**(G) is line graph for nilpotent group} Let $G$ be a nilpotent group (except non-abelian $2$-groups). Then $\mathcal{P}^{**}(G)$ is a line graph of some graph $\Gamma$ if and only if $G$ is one of the following: \begin{enumerate} \item[(a)] $G\cong\mathbb{Z}_{p^t}, t\geq 1$ \item[(b)] $G\cong \mathbb{Z}_{pq}$ \item[(c)] $G\cong \mathbb{Z}_2\times \mathbb{Z}_{2^{2}}$ \item[(d)] $G\cong \mathbb{Z}_{2^2}\times \mathbb{Z}_{2^2}$ \item[(e)] $G\cong\underbrace{\mathbb{Z}_p\times \cdots\times \mathbb{Z}_p}_{k \text{ times, } k\geq 2}$ \item[(f)] $G$ is a non-abelian $p$-group and $G=\mathbb{Z}_{p^{t_1}}\cup \cdots\cup \mathbb{Z}_{p^{t_{\ell}}},$ where $\ell$ is the number of distinct subgroups of order $p.$ \end{enumerate} \end{theorem} For an odd prime $p,$ the \emph{Heisenberg group} modulo $p$ (realized as the upper unitriangular $3\times 3$ matrices with entries $a, b, c$ in $\mathbb{Z}_p$) is the group of order $p^3$ with generators $x, y$ and relations: \[z=xyx^{-1}y^{-1}, \quad x^{p}=y^{p}=z^{p}=1, \quad xz=zx, \quad yz=zy.\] \begin{corollary} Let $G$ be the Heisenberg group modulo $p.$ Then $\mathcal{P}^{**}(G)$ is a line graph. 
\end{corollary} \begin{proof} In the Heisenberg group modulo $p,$ every non-identity element has order $p.$ Therefore, by part (f) of Theorem \ref{Thm:P**(G) is line graph for nilpotent group}, $\mathcal{P}^{**}(G)$ is a line graph. \end{proof} \begin{corollary} Let $G$ be a non-abelian group such that $\text{exp}(G)=p.$ Then $\mathcal{P}^{**}(G)$ is a line graph. \end{corollary} \begin{proof} Since $G$ is non-abelian and $\text{exp}(G)=p,$ the group $G$ is a non-abelian $p$-group in which every non-identity element has order $p.$ Hence, by part (f) of Theorem \ref{Thm:P**(G) is line graph for nilpotent group}, $\mathcal{P}^{**}(G)$ is a line graph. \end{proof} Moreover, in the case of non-abelian $2$-groups, we prove the following two theorems. \begin{theorem}\label{THm: Line graph of generalized quaternion group} Let $Q_{2^n}$ be the generalized quaternion group. Then $\mathcal{P}^{**}(Q_{2^n})$ is a line graph. \end{theorem} \begin{theorem}\label{thm: P**(D_n) is a line graph if and only} Let $D_n$ be the dihedral group of order $2n, (n\geq 3).$ Then $\mathcal{P}^{**}(D_n)$ is a line graph if and only if $n=2^k, k\in \mathbb{N}.$ \end{theorem} Now, let us briefly summarize the content. In Section $2,$ we consider the power graphs of finite nilpotent groups and classify all those power graphs which are line graphs. In Section $3,$ we focus on the proper power graphs of finite nilpotent groups (except non-abelian $2$-groups) and characterize all those proper power graphs which are line graphs. Moreover, in this section, we show that the proper power graph of the generalized quaternion group is a line graph, and we derive the condition on the order of the dihedral group under which the proper power graph is a line graph. \section{Proof of main theorem for the power graphs} Here we give the proof of Theorem \ref{Thm:P(G) is line graph for nilpotent group}. To prove this theorem we need three theorems, the first of which settles the cyclic case. 
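The proofs below repeatedly exhibit one of the nine forbidden graphs of Lemma \ref{line graph} — most often the star $\Gamma_{1,3}$ (the claw, i.e. the graph $\Gamma_9$) — as an induced subgraph of a power graph. For small cyclic groups this can be checked by brute force; the following Python sketch (an illustration under the standard definition of the power graph, not part of the proofs) searches for an induced claw.

```python
# Illustrative sketch: brute-force search for an induced K_{1,3} (claw)
# in the power graph of Z_n. Finding one certifies, by the forbidden-
# subgraph characterization of line graphs, that P(Z_n) is not a line graph.
from itertools import combinations
from math import gcd

def power_graph_zn(n):
    """Adjacency sets of P(Z_n): x ~ y iff one lies in the other's <.>."""
    adj = {v: set() for v in range(n)}
    for x in range(n):
        for y in range(x + 1, n):
            if y % gcd(x, n) == 0 or x % gcd(y, n) == 0:
                adj[x].add(y)
                adj[y].add(x)
    return adj

def find_induced_claw(adj):
    """Return (centre, leaves) of an induced K_{1,3}, or None."""
    for c in adj:
        for a, b, d in combinations(adj[c], 3):
            # leaves must be pairwise non-adjacent for the claw to be induced
            if b not in adj[a] and d not in adj[a] and d not in adj[b]:
                return c, (a, b, d)
    return None

# P(Z_8) is complete (cyclic 2-group), hence claw-free; P(Z_30), with the
# three prime divisors 2, 3, 5, contains a claw (e.g. centred at the
# identity with leaves of orders 2, 3 and 5).
print(find_induced_claw(power_graph_zn(8)) is None)       # True
print(find_induced_claw(power_graph_zn(30)) is not None)  # True
```

Note that claw-freeness is only a necessary condition; for cyclic groups whose order has exactly two prime divisors, the proof below instead exhibits the graph $\Gamma_2$.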
\begin{theorem} Let $G$ be a finite cyclic group. Then there exists a graph $\Gamma$ such that $\mathcal{P}(G)=L(\Gamma)$ if and only if $G$ is a $p$-group. \end{theorem} \begin{proof} Let $G$ be a cyclic $p$-group with $|G|=p^{r}.$ Then by Lemma \ref{thm:P(G) compltele iff G cylcic p group}, $\mathcal{P}(G)$ is complete, that is, $\mathcal{P}(G)\cong K_{p^r}.$ As a result, the star graph $\Gamma_{1, p^r}$ serves the purpose, since $L(\Gamma_{1, p^r})\cong K_{p^r}.$ Conversely, suppose $|G|$ has at least two distinct prime factors, say $p, q.$ Since $G$ is cyclic, $G$ has a unique subgroup $H$ of order $pq.$ Again, $\phi(pq)\geq 2$ implies that $H$ has at least two elements, namely $v^{(pq)}_1, v^{(pq)}_2,$ of order $pq.$ The cyclic subgroup $H= \langle v^{(pq)}_1\rangle=\langle v^{(pq)}_2\rangle$ has elements $v^{(p)}$ and $v^{(q)}$ (say) of order $p$ and $q$ respectively. Now we replace the vertices of the graph $\Gamma_2$ in Figure \ref{fig:line grapph theory} in the following way:\[ 6 \text{ by } v^{(pq)}_1, 10 \text{ by } e, \text{ the identity of } G, 8 \text{ by }v^{(pq)}_2, 7 \text{ by } v^{(p)} \text{ and } 9 \text{ by } v^{(q)}.\] Then the resulting induced graph is isomorphic to the graph $\Gamma_2$ in Figure \ref{fig:line grapph theory}. Therefore, $\mathcal{P}(G)$ contains an induced subgraph isomorphic to $\Gamma_2,$ and hence, by Lemma \ref{line graph}, $\mathcal{P}(G)$ is not a line graph. \end{proof} The next theorem answers the question for non-cyclic abelian groups $G.$ \begin{theorem}\label{Thm:P(G) is not line graph, non cylcic abelian grp} Let $G$ be a non-cyclic abelian group. Then there does not exist any graph $\Gamma$ such that $\mathcal{P}(G)=L(\Gamma).$ \end{theorem} \begin{proof} We prove this theorem by showing that the power graph $\mathcal{P}(G)$ has an induced subgraph isomorphic to the graph $\Gamma_{1, 3}$ (the graph $\Gamma_9$ in Figure \ref{fig:line grapph theory}), so that Lemma \ref{line graph} applies. It is given that $G$ is non-cyclic abelian. 
So, \[G\cong\mathbb{Z}_{p^{t_{11}}_1}\times \cdots\times\mathbb{Z}_{p^{t_{1k_1}}_1}\times \mathbb{Z}_{p^{t_{21}}_2}\times\cdots\times\mathbb{Z}_{p^{t_{2k_2}}_2}\times\cdots\times \mathbb{Z}_{p^{t_{r1}}_r}\times\cdots\times\mathbb{Z}_{p^{t_{rk_r}}_r},\] where $k_i\geq1, 1\leq t_{i1}\leq t_{i2}\leq\cdots\leq t_{ik_i},$ for all $i\in [r],$ and there exists at least one $k_i$ such that $k_i\geq 2$ (if each $k_i=1,$ then $G$ would be cyclic). Without loss of generality, we assume that $k_1\geq 2.$ Let $H_1=\langle v^{(p_1)}_1\rangle,\cdots, H_s=\langle v^{(p_1)}_s\rangle $ be the complete list of distinct cyclic subgroups of order $p_1$ of $G.$ Now $k_1\geq 2$ implies that $s\geq 3.$ Therefore we can choose three distinct vertices $v_{i_1}^{(p_1)}, v_{i_2}^{(p_1)}, v_{i_3}^{(p_1)}$ (each of order $p_1$) from three distinct cyclic subgroups $H_{i_1}, H_{i_2}, H_{i_3}$ respectively. Now we show that $e, v_{i_1}^{(p_1)}, v_{i_2}^{(p_1)}, v_{i_3}^{(p_1)}$ form an induced subgraph isomorphic to $\Gamma_{1, 3}(e, v_{i_1}^{(p_1)}, v_{i_2}^{(p_1)}, v_{i_3}^{(p_1)}).$ From the definition of the power graph, $e$ is edge connected with $v_{i_1}^{(p_1)}, v_{i_2}^{(p_1)} \text{ and } v_{i_3}^{(p_1)}.$ Now $v_{i_1}^{(p_1)}, v_{i_2}^{(p_1)} \text{ and } v_{i_3}^{(p_1)}$ are generators of the three distinct cyclic subgroups $H_{i_1}, H_{i_2}$ and $H_{i_3}$ respectively, each of order $p_1.$ As a result, $v_{i_j}^{(p_1)}\nsim v_{i_k}^{(p_1)}$ for each $j, k\in\{1, 2, 3\} \text{ with } j\neq k.$ Hence the theorem. \end{proof} Next we consider finite groups $G$ that are non-abelian nilpotent. In this case, we have the following: \begin{theorem}\label{thm: P(G) line graph, G non abelian nilpotent} Let $G$ be a non-abelian nilpotent group. Then there does not exist any graph $\Gamma$ such that $\mathcal{P}(G)=L(\Gamma).$ \end{theorem} \begin{proof} It is given that $G$ is nilpotent. 
So, by Lemma \ref{Nilpotent group charecterization thm}, $G\cong P_1\times \cdots\times P_r,$ where each $P_i$ is a Sylow subgroup of order $p_i^{\alpha_i}, \alpha_i\in \mathbb{N}.$ We divide the proof of this theorem into several cases. Case 1: First let $r\geq 3.$ In this case $|G|$ has at least three distinct prime divisors, say $p_1, p_2$ and $ p_3.$ Then by Lemma \ref{m|G, nilpotent G has subgroup of ordder m}, $G$ has three elements $v^{(p_1)}, v^{(p_2)}, v^{(p_3)}$ such that $\text{o}(v^{(p_1)})=p_1, \text{o}(v^{(p_2)})=p_2$ and $\text{o}(v^{(p_3)})=p_3.$ Now from the definition of the power graph, it is clear that the vertices $e, v^{(p_1)}, v^{(p_2)}, v^{(p_3)}$ form an induced subgraph $\Gamma_{1, 3}(e, v^{(p_1)}, v^{(p_2)}, v^{(p_3)})$ in $\mathcal{P}(G).$ Case 2: Let $r=2,$ so that $G\cong P_1\times P_2.$ Since $G$ is non-abelian, either $P_1$ or $P_2$ is non-cyclic. Without loss of generality we assume that $P_1$ is non-cyclic. Then $P_1$ is a non-cyclic group of order $p_1^{\alpha}$ for some $\alpha\geq 2$ ($\alpha=1$ would imply that $P_1$ is cyclic). As a result, $P_1$ has at least two elements $v, v'$ such that $v\nsim v'$ in $\mathcal{P}(G);$ otherwise, by Lemma \ref{thm:P(G) compltele iff G cylcic p group}, $P_1$ would be a cyclic group. Clearly, $\text{o}(v)$ and $\text{o}(v')$ are powers of the prime $p_1.$ Let $v''$ be an element of order $p_2$ in $P_2.$ From the definition of the power graph, $v''$ is edge connected to neither $v$ nor $v'.$ Also, $e$ is edge connected to the vertices $v, v', v''.$ Hence $\Gamma_{1, 3}(e, v, v', v'')$ is an induced subgraph of $\mathcal{P}(G).$ Case 3: Let $r=1.$ In this case, $G$ is a non-abelian $p$-group. To prove this case we first prove a claim. 
Claim: Let $H_1=\langle v_1^{(p)}\rangle, \cdots, H_{\ell}=\langle v_{\ell}^{(p)}\rangle$ be the collection of all distinct cyclic subgroups of order $p$ of $G.$ We claim that either $\ell=1$ or $\ell\geq 3;$ that is, $\ell$ cannot be $2.$ Suppose $\ell=2.$ Since $G$ is a $p$-group, the center $Z(G)$ of $G$ is non-trivial. Therefore, $Z(G)$ has a subgroup $H_{i_1}=\langle v_{i_1}^{(p)}\rangle$ (say) of order $p.$ Let $H_{i_2}=\langle v_{i_2}^{(p)}\rangle$ be another cyclic subgroup of order $p$ (as $\ell\geq 2,$ we can choose more than one subgroup of order $p$). Then $v_{i_1}^{(p)}v_{i_2}^{(p)}=v_{i_2}^{(p)}v_{i_1}^{(p)}$ and $\text{o}(v_{i_1}^{(p)}v_{i_2}^{(p)})=p.$ Take $H_{i_3}=\langle v_{i_1}^{(p)}v_{i_2}^{(p)}\rangle;$ it is easy to see that $H_{i_1}\neq H_{i_3}$ and $H_{i_2}\neq H_{i_3}.$ Hence $\ell>2,$ which proves the claim. Now, using the claim, we finish the proof of this case. First suppose that $\ell\geq 3.$ Then we can choose three elements $v_{i_1}^{(p)}, v_{i_2}^{(p)} \text{ and } v_{i_3}^{(p)}$ of order $p$ from $H_{i_1}, H_{i_2}$ and $H_{i_3}$ respectively. Clearly they are not adjacent to each other. Therefore, $\mathcal{P}(G)$ has an induced subgraph $\Gamma_{1, 3}(e, v_{i_1}^{(p)}, v_{i_2}^{(p)}, v_{i_3}^{(p)}).$ Now let $\ell=1,$ i.e., $G$ has exactly one cyclic subgroup of order $p.$ Since $G$ is non-abelian (hence non-cyclic), Lemma \ref{p group unique subgrp of order p, g is cyclic} shows that $p$ cannot be an odd prime. 
Therefore $p=2.$ Then $G\cong Q_{2^n},$ where $ Q_{2^n}$ is the generalized quaternion group of order $2^n, n\geq 3.$ Now $Q_{2^n}$ has exactly one subgroup $H=\langle v^{(2^{n-1})}\rangle$ of order $2^{n-1}.$ Also, the number of elements of order $4$ in $Q_{2^n}\setminus H$ is $2^{n-1},$ and hence the number of distinct cyclic subgroups of order $4$ generated by elements of $Q_{2^n}\setminus H$ is $2^{n-2}.$ Let these be $H_1=\langle v_1^{(4)}\rangle, \cdots, H_{2^{n-2}}=\langle v_{2^{n-2}}^{(4)}\rangle.$ Clearly $2^{n-2}\geq 2.$ So, we can choose two vertices $v_{i_1}^{(4)}$ and $v_{i_2}^{(4)}$ from $H_{i_1}$ and $H_{i_2}$ respectively, where $i_1\neq i_2$ and $\text{o}(v_{i_1}^{(4)})=4=\text{o}(v_{i_2}^{(4)}).$ Now $v_{i_1}^{(4)}, v_{i_2}^{(4)}\in Q_{2^n}\setminus H$ implies that $v^{(2^{n-1})}$ is edge connected neither to $v_{i_1}^{(4)}$ nor to $ v_{i_2}^{(4)}.$ Moreover, $v_{i_1}^{(4)}\nsim v_{i_2}^{(4)}$ in $\mathcal{P}(G).$ Therefore, $\mathcal{P}(G)$ has an induced subgraph $\Gamma_{1, 3}(e, v^{(2^{n-1})}, v_{i_1}^{(4)}, v_{i_2}^{(4)}).$ This completes the proof. \end{proof} \section{Proof of main theorem for the proper power graphs} In this section we turn to the proof of Theorem \ref{Thm:P**(G) is line graph for nilpotent group}. We first characterize all cyclic groups $G$ for which $\mathcal{P}^{**}(G)=L(\Gamma)$ for some graph $\Gamma,$ and then do the same for non-cyclic abelian and non-abelian nilpotent groups. \begin{theorem}\label{classify: G cyclic line graph, P^{**}(G)} Let $G$ be a finite cyclic group. 
Then there exists a graph $\Gamma$ such that $\mathcal{P}^{**}(G)=L(\Gamma)$ if and only if $G$ is one of the following: \begin{enumerate} \item[(a)] $G\cong\mathbb{Z}_{p^t}$ \item[(b)] $G\cong \mathbb{Z}_{pq},$ \end{enumerate} where $p, q$ are distinct primes and $t\geq 1.$ \end{theorem} To prove this theorem, we first need the following propositions: \begin{proposition}\label{prop:P^{**}(G), does not contain star graph Gamma(1, 3), G cylic} Let $G$ be a finite cyclic group. Then $\mathcal{P}^{**}(G)$ does not contain the star graph $\Gamma_{1, 3}$ as an induced subgraph if and only if $G$ is one of the following: \begin{enumerate} \item[(a)] $G\cong \mathbb{Z}_{p^t}$ \item[(b)] $G\cong \mathbb{Z}_{pqr}$ \item[(c)] $G\cong \mathbb{Z}_{p^2q^2}$ \item[(d)] $G\cong \mathbb{Z}_{p^tq},$ \end{enumerate} where $p, q, r$ are pairwise distinct primes and $t\geq 1.$ \end{proposition} \begin{proof} We divide the proof of this proposition into several cases. Case 1: Suppose $|G|$ has at least four distinct prime divisors, say $p, q, r, p'.$ Since $G$ is cyclic, $G$ has an element $v^{(pqr)}$ such that $\text{o}(v^{(pqr)})=pqr.$ Also the cyclic subgroup $\langle v^{(pqr)}\rangle$ has elements $v^{(p)}, v^{(q)}, v^{(r)}$ of order $p, q, r$ respectively. Clearly, the vertices $v^{(pqr)}, v^{(p)}, v^{(q)} \text{ and } v^{(r)}$ are not generators of the group $G.$ Therefore, by Lemma \ref{lemma: classification of dominating vertices of power graph}, $v^{(pqr)}, v^{(p)}, v^{(q)} \text{ and } v^{(r)}\in V(\mathcal{P}^{**}(G)).$ Now $v^{(p)}, v^{(q)} \text{ and } v^{(r)}\in \langle v^{(pqr)}\rangle$ implies that $ v^{(pqr)}$ is edge connected to the vertices $v^{(p)}, v^{(q)} \text{ and } v^{(r)}.$ Again, $p, q, r$ are three distinct primes, so from the definition of the power graph the vertices $v^{(p)}, v^{(q)} \text{ and } v^{(r)}$ are not adjacent to each other. 
As a result, $\mathcal{P}^{**}(G)$ has an induced subgraph $\Gamma_{1, 3}(v^{(pqr)}, v^{(p)}, v^{(q)}, v^{(r)}).$ Case 2: Let $|G|=p^{\alpha}q^{\beta}r^{\gamma},$ where at least one of $\alpha, \beta, \gamma$ is at least $2.$ Then, applying the same argument as in Case 1, we conclude that $\mathcal{P}^{**}(G)$ has an induced subgraph $\Gamma_{1, 3}(v^{(pqr)}, v^{(p)}, v^{(q)}, v^{(r)}).$ Now we show that if $G\cong\mathbb{Z}_{pqr},$ then $\mathcal{P}^{**}(\mathbb{Z}_{pqr})$ does not contain any induced subgraph isomorphic to $\Gamma_{1, 3}.$ Note that, by Lemma \ref{lemma: classification of dominating vertices of power graph}, the identity and the generators of the group $\mathbb{Z}_{pqr}$ form the complete list of dominating vertices of the graph $\mathcal{P}(\mathbb{Z}_{pqr}).$ Therefore, the identity and the generators of the group $\mathbb{Z}_{pqr}$ are not in the vertex set $V(\mathcal{P}^{**}(\mathbb{Z}_{pqr})).$ Suppose, if possible, that $\mathcal{P}^{**}(\mathbb{Z}_{pqr})$ contains an induced subgraph $\Gamma_{1, 3}(v, v_1, v_2, v_3)$ for some vertices $v, v_1, v_2, v_3\in V(\mathcal{P}^{**}(\mathbb{Z}_{pqr})).$ So $v\sim v_i $ for each $i\in \{1, 2, 3\},$ and $v_i\nsim v_j$ for each pair $i, j\in \{1, 2, 3\}$ with $i\neq j.$ Now the possible orders of the vertices of the graph $\mathcal{P}^{**}(\mathbb{Z}_{pqr})$ are $p, q, r, pq, pr$ and $qr;$ in particular, the order of $v$ is one of these. First suppose that $\text{o}(v)=pq.$ Then it is clear (from the definition of the power graph) that $v$ is edge connected only with vertices of order $pq,$ $p$ or $q$ in $\mathcal{P}^{**}(\mathbb{Z}_{pqr}).$ Let $V(t)$ be the collection of all vertices of order $t.$ So we have to choose $v_1, v_2, v_3$ from $V(p)\cup V(q)\cup V({pq})$ such that no two of them are adjacent. 
Note that we cannot choose more than one vertex from any one of the sets $V(p), V(q), V({pq});$ in fact, all the vertices in any one of the sets $V(p), V(q), V({pq})$ form a clique. So, without loss of generality, we assume that $v_1\in V(p), v_2\in V(q)$ and $v_3\in V(pq).$ But it is clear that $v_3$ is edge connected to both of the vertices $v_1$ and $v_2.$ This violates the condition $v_i\nsim v_j$ for each pair $i, j\in \{1, 2, 3\}$ with $i\neq j.$ Similarly, we can show that the graph $\mathcal{P}^{**}(\mathbb{Z}_{pqr})$ does not have any induced subgraph isomorphic to $\Gamma_{1, 3}(v, v_1, v_2, v_3)$ for the other possible choices of $\text{o}(v), \text{o}(v_1), \text{o}(v_2), \text{o}(v_3).$ Case 3: Let $|G|$ have exactly two distinct prime divisors $p$ and $q,$ say $G\cong\mathbb{Z}_{p^{t_1}}\times \mathbb{Z}_{q^{t_2}}.$ First consider $t_1\geq 3 \text{ and }t_2\geq 2$ (if $t_1\geq 2 \text{ and }t_2\geq 3,$ then a similar argument holds). In this case, $\mathcal{P}^{**}(\mathbb{Z}_{p^{t_1}}\times \mathbb{Z}_{q^{t_2}})$ has an induced subgraph $\Gamma_{1, 3}(v^{(p)}, v^{(p^3)}, v^{(p^2q)}, v^{(pq^2)}),$ where $\text{o}(v^{(p)})=p, \text{o}( v^{(p^3)})=p^3, \text{o}(v^{(p^2q)})=p^2q, \text{o}(v^{(pq^2)})=pq^2.$ If $G\cong\mathbb{Z}_{p^{t}q},$ where $t\geq 1,$ then, proceeding exactly as in the case $G\cong\mathbb{Z}_{pqr},$ we conclude that $\mathcal{P}^{**}(\mathbb{Z}_{p^{t}q})$ does not have any induced subgraph isomorphic to the star graph $\Gamma_{1, 3}.$ If $G\cong\mathbb{Z}_{p^2q^2},$ then similarly to the case $G\cong\mathbb{Z}_{pqr}$ we can prove that $\mathcal{P}^{**}(\mathbb{Z}_{p^2q^2})$ does not have any induced subgraph isomorphic to $\Gamma_{1, 3}.$ Case 4: Let $|G|$ have exactly one prime divisor, $p$ say. Then $G\cong\mathbb{Z}_{p^t},$ and consequently $\mathcal{P}^{**}(\mathbb{Z}_{p^t})$ is complete. Hence the result. \end{proof} \begin{proposition}\label{prop: P^{**}(G), does not contain Gamma_2, G cyclic} Let $G$ be a cyclic group. 
Then $\mathcal{P}^{**}(G)$ does not contain the graph $\Gamma_2$ (a graph in Figure \ref{fig:line grapph theory}) as an induced subgraph if and only if $G$ is one of the following: \begin{enumerate} \item[(a)] $G\cong \mathbb{Z}_{p^t}$ \item[(b)] $G\cong\mathbb{Z}_{pq}$ \item[(c)] $G\cong \mathbb{Z}_{12}$ \item[(d)] $G\cong\mathbb{Z}_{18}.$ \end{enumerate} \end{proposition} \begin{proof} Let $G$ be a cyclic group of order $n$ such that $n$ has at least three distinct prime divisors, say $p<q<r.$ Let $v_1^{(qr)}, v_2^{(qr)}, v_3^{(qr)}, v^{(q)}, v^{(r)}\in G$ be elements of the unique cyclic subgroup of order $qr$ such that $\text{o}(v_1^{(qr)})=\text{o}(v_2^{(qr)})=\text{o}(v_3^{(qr)})=qr$ and $\text{o}(v^{(q)})=q, \text{o}(v^{(r)})=r$ (since $G$ is cyclic and $q<r$ are odd primes, clearly $\phi(qr)\geq 3$). Clearly, $v_1^{(qr)}, v_2^{(qr)}, v_3^{(qr)},v^{(q)}, v^{(r)}\in V(\mathcal{P}^{**}(G)).$ Now we replace the vertices of the graph $\Gamma_2$ in Figure \ref{fig:line grapph theory} in the following way: \[6 \text{ by } v_1^{(qr)}, 8 \text{ by } v_2^{(qr)}, 10 \text{ by } v_3^{(qr)}, 7 \text{ by } v^{(q)}, 9 \text{ by } v^{(r)}.\] Clearly, the resulting induced graph is isomorphic to $\Gamma_2.$ If $G\cong\mathbb{Z}_{p^tq}$ with $t\geq 3,$ then $\mathcal{P}^{**}(G)$ has vertices $v_1^{(p^2q)}, v_2^{(p^2q)}, v_3^{(p^2q)}, v^{(p^2)}, v^{(q)},$ where $\text{o}(v_1^{(p^2q)})=\text{o}(v_2^{(p^2q)})=\text{o}(v_3^{(p^2q)})=p^2q$ (as $\phi(p^2q)\geq 3$), $\text{o}(v^{(p^2)})=p^2$ and $\text{o}(v^{(q)})=q.$ Now we replace the vertices of the graph $\Gamma_2$ in the following way: \[6 \text{ by } v_1^{(p^2q)}, 8 \text{ by }v_2^{(p^2q)}, 10 \text{ by } v_3^{(p^2q)}, 7 \text{ by } v^{(p^2)}, 9 \text{ by } v^{(q)}.\] Clearly, the resulting induced graph is isomorphic to $\Gamma_2$ (in Figure \ref{fig:line grapph theory}). 
Let $G\cong\mathbb{Z}_{p^2q}.$ Suppose $\phi(pq)\geq 3;$ then $\mathcal{P}^{**}(G)$ has three vertices $v_1^{(pq)}, v_2^{(pq)}, v_3^{(pq)}$ such that $\text{o}(v_1^{(pq)})=\text{o}(v_2^{(pq)})=\text{o}(v_3^{(pq)})=pq.$ Also it has two vertices $v^{(p)}$ and $v^{(q)}$ such that $\text{o}(v^{(p)})=p$ and $\text{o}(v^{(q)})=q.$ Now we replace the vertices of the graph $\Gamma_2$ in the following way: \[6 \text{ by } v_1^{(pq)}, 8 \text{ by } v_2^{(pq)}, 10 \text{ by } v_3^{(pq)}, 7 \text{ by } v^{(p)}, 9 \text{ by } v^{(q)}.\] Clearly, the resulting induced graph is isomorphic to $\Gamma_2$ (in Figure \ref{fig:line grapph theory}). Now $\phi(pq)=2$ if and only if either $p=2, q=3$ or $p=3, q=2.$ So either $G\cong\mathbb{Z}_{12}$ or $G\cong\mathbb{Z}_{18}.$ In these two cases we prove that $\mathcal{P}^{**}(\mathbb{Z}_{12})$ and $\mathcal{P}^{**}(\mathbb{Z}_{18})$ do not have an induced subgraph isomorphic to the graph $\Gamma_2$ in Figure \ref{fig:line grapph theory}. Observe that the two degree-$3$ vertices of $\Gamma_2$ (the vertices $7$ and $9$) are non-adjacent and have three common neighbours (the vertices $6, 8, 10$). However, a direct check shows that in each of $\mathcal{P}^{**}(\mathbb{Z}_{12})$ and $\mathcal{P}^{**}(\mathbb{Z}_{18}),$ any two non-adjacent vertices have at most two common neighbours. Therefore, neither $\mathcal{P}^{**}(\mathbb{Z}_{12})$ nor $\mathcal{P}^{**}(\mathbb{Z}_{18})$ has an induced subgraph isomorphic to $\Gamma_2.$ Now if $G\cong\mathbb{Z}_{p^t},$ then $\mathcal{P}^{**}(\mathbb{Z}_{p^t})$ is complete, so $\mathcal{P}^{**}(\mathbb{Z}_{p^t})$ does not have any induced subgraph isomorphic to $\Gamma_2.$ Also, for the group $G\cong \mathbb{Z}_{pq},$ the graph $\mathcal{P}^{**}(\mathbb{Z}_{pq})$ is a disjoint union of two cliques. 
Therefore, $\mathcal{P}^{**}(\mathbb{Z}_{pq})$ does not have any induced subgraph isomorphic to the graph $\Gamma_2$ (in Figure \ref{fig:line grapph theory}). This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{classify: G cyclic line graph, P^{**}(G)}] From Propositions \ref{prop:P^{**}(G), does not contain star graph Gamma(1, 3), G cylic} and \ref{prop: P^{**}(G), does not contain Gamma_2, G cyclic}, if $\mathcal{P}^{**}(G)$ is a line graph of some graph $\Gamma,$ then either $G\cong\mathbb{Z}_{p^t}$ or $G\cong\mathbb{Z}_{pq}$ or $G\cong\mathbb{Z}_{12}$ or $G\cong\mathbb{Z}_{18}.$ Now we show that if $G\cong\mathbb{Z}_{12}$ or $\mathbb{Z}_{18},$ then there exists no graph $\Gamma$ such that $\mathcal{P}^{**}(G)$ is the line graph of $\Gamma.$ First let $G\cong\mathbb{Z}_{12}.$ We show that $\mathcal{P}^{**}(\mathbb{Z}_{12})$ has an induced subgraph isomorphic to the graph $\Gamma_4$ in Figure \ref{fig:line grapph theory}. Clearly, $\mathbb{Z}_{12}$ has elements $v_1^{(6)}, v_2^{(6)}, v_1^{(3)}, v_2^{(3)}, v^{(2)}, v^{(4)}$ such that $\text{o}(v_1^{(6)})=\text{o}(v_2^{(6)})=6, \text{o}(v_1^{(3)})=\text{o}(v_2^{(3)})=3, \text{o}(v^{(2)})=2 \text{ and }\text{o}(v^{(4)})=4.$ Now it is easy to see that $\mathcal{P}^{**}(\mathbb{Z}_{12})$ contains the graph $\Gamma_4$ as an induced subgraph by replacing the vertices of the graph $\Gamma_4$ in the following way: \[17 \text{ by } v_1^{(6)}, 18 \text{ by } v^{(2)}, 19 \text{ by } v_2^{(6)}, 20 \text{ by } v_1^{(3)}, 21 \text{ by } v^{(4)}, \text{ and } 22 \text{ by } v_2^{(3)}.\] Next let $G\cong\mathbb{Z}_{18}.$ We show that $\mathcal{P}^{**}(G)$ has an induced subgraph isomorphic to the graph $\Gamma_3$ in Figure \ref{fig:line grapph theory}.
The group $\mathbb{Z}_{18}$ has elements $v_1^{(3)}, v_2^{(3)}, v_1^{(6)}, v_2^{(6)}, v_1^{(9)}, v_2^{(9)}$ such that $\text{o}(v_1^{(3)})=\text{o}(v_2^{(3)})=3, \text{o}(v_1^{(6)})=\text{o}(v_2^{(6)})=6 \text{ and } \text{o}(v_1^{(9)})= \text{o}(v_2^{(9)})=9.$ Now we replace the vertices of the graph $\Gamma_3$ in the following way: \[11 \text{ by } v_1^{(3)}, 12 \text{ by } v_1^{(6)}, 13 \text{ by } v_2^{(3)}, 14 \text{ by } v_1^{(9)}, 15 \text{ by } v_2^{(6)}, \text{ and } 16 \text{ by } v_2^{(9)}.\] Then the resulting graph is isomorphic to the graph $\Gamma_3.$ Hence the graphs $\mathcal{P}^{**}(\mathbb{Z}_{12})$ and $\mathcal{P}^{**}(\mathbb{Z}_{18})$ are not line graphs. If $G\cong\mathbb{Z}_{pq},$ then $\mathcal{P}^{**}(\mathbb{Z}_{pq})$ is the line graph of the graph $\Gamma_{1, \phi(p)}\bigoplus \Gamma_{1, \phi(q)}.$ Also, for the cyclic $p$-group $\mathbb{Z}_{p^r},$ the graph $\mathcal{P}^{**}(\mathbb{Z}_{p^r})$ is complete. Therefore, $\mathcal{P}^{**}(\mathbb{Z}_{p^r})$ is the line graph of the graph $\Gamma_{1, t},$ where $t=p^r-(\phi(p^r)+1).$ Hence the theorem. \end{proof} Now we want to describe all non cyclic abelian groups $G$ for which $\mathcal{P}^{**}(G)$ is a line graph. In this case we have the following: \begin{theorem}\label{thm: non cyclic abeln grp line graph of {P}^{**}(G)} Let $G$ be a non cyclic abelian group. Then $\mathcal{P}^{**}(G)$ is a line graph of some graph $\Gamma$ if and only if $G$ is one of the following: \begin{enumerate} \item[(a)] $G\cong \mathbb{Z}_2\times \mathbb{Z}_{2^{2}}$ \item[(b)] $G\cong \mathbb{Z}_{2^2}\times \mathbb{Z}_{2^2}$ \item[(c)] $G\cong\mathbb{Z}_p\times \cdots\times \mathbb{Z}_p,$ where $p$ is a prime. \end{enumerate} \end{theorem} To prove Theorem \ref{thm: non cyclic abeln grp line graph of {P}^{**}(G)}, we use Lemma \ref{line graph}.
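Lemma \ref{line graph} makes the star $\Gamma_{1, 3}$ (the claw $K_{1,3}$) a forbidden induced subgraph of every line graph, and in $\mathcal{P}^{**}(\mathbb{Z}_n)$ a claw amounts to a centre order and three pairwise incomparable leaf orders under divisibility. This necessary condition is easy to test computationally; a small sketch (the helper name is ours). Note that $\mathcal{P}^{**}(\mathbb{Z}_{30})$ is also claw-free, so claw-freeness alone does not single out the groups of Theorem \ref{classify: G cyclic line graph, P^{**}(G)}; the $\Gamma_2$ test above is genuinely needed.

```python
from math import gcd
from itertools import combinations

def has_induced_claw(n):
    """True iff P**(Z_n) contains the star Gamma_{1,3} as an induced
    subgraph.  Elements of equal order generate the same subgroup of Z_n
    and are therefore adjacent, so a claw amounts to a centre order d and
    three pairwise incomparable leaf orders, each comparable with d."""
    order = lambda x: n // gcd(x, n)
    orders = {order(x) for x in range(1, n)} - {1, n}
    comparable = lambda a, b: a % b == 0 or b % a == 0
    return any(all(comparable(d, x) for x in (a, b, c))
               and not comparable(a, b) and not comparable(a, c)
               and not comparable(b, c)
               for d in orders
               for a, b, c in combinations(orders - {d}, 3))

assert not has_induced_claw(12) and not has_induced_claw(18)
assert all(not has_induced_claw(p**t) for p in (2, 3) for t in (2, 3, 4))
assert not has_induced_claw(30)   # claw-free, yet excluded via Gamma_2
assert has_induced_claw(60)       # leaf orders 4, 6, 10 over centre order 2
```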
According to this lemma, we have to characterize all non cyclic abelian groups $G$ for which $\mathcal{P}^{**}(G)$ does not contain the star graph $\Gamma_{1, 3}$ as an induced subgraph. Proposition \ref{Prop: noncyclic, abln, {P}^{**}(G) does not contain Gamma_{1, 3} } completely describes this case. \begin{proposition}\label{Prop: noncyclic, abln, {P}^{**}(G) does not contain Gamma_{1, 3} } Let $G$ be a non cyclic abelian group. Then $\mathcal{P}^{**}(G)$ does not contain $\Gamma_{1, 3}$ as an induced subgraph if and only if $G$ is one of the following: \begin{enumerate} \item[(a)] $G\cong \mathbb{Z}_2\times \mathbb{Z}_{2^{2}}$ \item[(b)] $G\cong \mathbb{Z}_{2^2}\times \mathbb{Z}_{2^2}$ \item[(c)] $G\cong\mathbb{Z}_p\times \cdots\times \mathbb{Z}_p,$ where $p$ is a prime. \end{enumerate} \end{proposition} \begin{proof} First suppose that $|G|$ has at least two distinct prime divisors. In this case, we show that $\mathcal{P}^{**}(G)$ has an induced subgraph isomorphic to $\Gamma_{1, 3}(v, v', v'', v'''),$ for some vertices $v, v', v'', v'''\in V(\mathcal{P}^{**}(G)).$ It is given that $G$ is a non-cyclic abelian group.
Therefore, \[G\cong\mathbb{Z}_{p^{t_{11}}_1}\times \cdots\times\mathbb{Z}_{p^{t_{1k_1}}_1}\times \mathbb{Z}_{p^{t_{21}}_2}\times\cdots\times\mathbb{Z}_{p^{t_{2k_2}}_2}\times\cdots\times \mathbb{Z}_{p^{t_{r1}}_r}\times\cdots\times\mathbb{Z}_{p^{t_{rk_r}}_r},\] where $1\leq t_{i1}\leq t_{i2}\leq\cdots\leq t_{ik_i}$ for all $i\in [r], r\geq 2, k_i\geq1,$ and there exists at least one $k_i$ such that $k_i\geq 2.$ Without loss of generality, we assume that $k_1\geq 2.$ Consider \[V=\{ (\underbrace{\bar{a}, \bar{b},\bar{0}, \cdots, \bar{0}}_{k_1 \text{ times }}, \bar{c}, \bar{0}, \cdots, \bar{0}): \text{o}(\bar{a}), \text{o}(\bar{b}) \text{ divide } p_1, (\bar{a}, \bar{b})\neq (\bar{0}, \bar{0}) \text{ and } \text{o}(\bar{c})=p_2\}.\] Clearly, $V$ is a subset of $G,$ each element of $V$ has order $p_1p_2,$ and $|V|=(p_1^2-1)(p_2-1).$ These $(p_1^2-1)(p_2-1)$ elements generate $\frac{(p_1^2-1)(p_2-1)}{\phi(p_1p_2)}=p_1+1$ distinct cyclic subgroups, say $H_1, H_2, \cdots, H_{p_1+1},$ where each $H_i$ has order $p_1p_2.$ Now it is easy to see that the cyclic group $\langle \underbrace{(\bar{0}, \cdots, \bar{0}}_{k_1\text{ times }}, \bar{c}, \bar{0}\cdots, \bar{0} )\rangle$ is contained in each of the cyclic groups $H_1, \cdots, H_{p_1+1},$ where $\text{o}(\bar{c})=p_2.$ Since $p_1+1\geq 3,$ we can choose three distinct vertices $v_{i_1}^{(p_1p_2)}, v_{i_2}^{(p_1p_2)} \text{ and } v_{i_3}^{(p_1p_2)}$ from $H_{i_1}, H_{i_2}$ and $H_{i_3}$ (respectively) such that $\text{o}(v_{i_j}^{(p_1p_2)})=p_1p_2,$ for $j\in \{1, 2, 3\}$ and distinct $i_1, i_2, i_3\in \{1, \cdots, p_1+1\}.$ Also we take the vertex $v^{(p_2)}= \underbrace{(\bar{0}, \cdots, \bar{0}}_{k_1\text{ times }}, \bar{c}, \bar{0}\cdots, \bar{0} ).$ Then we get $\Gamma_{1, 3}(v, v', v'', v''')$ as an induced subgraph in the graph $\mathcal{P}^{**}(G),$ where $v=v^{(p_2)},v'= v_{i_1}^{(p_1p_2)}, v''=v_{i_2}^{(p_1p_2)}, v'''=v_{i_3}^{(p_1p_2)}.$ Now suppose that $G$ is a non cyclic abelian $p$-group.
Then we can write $G\cong \underbrace{\mathbb{Z}_p\times\cdots\times \mathbb{Z}_p}_{k (\geq 0) \text{ times }}\times \mathbb{Z}_{p^{t_1}}\times\cdots \times\mathbb{Z}_{p^{t_r}},$ where $t_1\leq t_2\leq\cdots\leq t_r$ and $t_i\geq 2 \text{ for all }i.$ For these groups (non cyclic abelian $p$-groups), we break the proof into several cases. Case 1: First suppose that $r\geq 3.$ In this case, we show that $\mathcal{P}^{**}(G)$ has an induced subgraph $\Gamma_{1, 3}(v^{(p)}, v_1^{(p^{t_1})}, v_2^{(p^{t_1})}, v_3^{(p^{t_1})}),$ where \begin{align*} &v^{(p)}=(\underbrace{\bar{0}, \cdots, \bar{0}}_{k \text{ times }}, \bar{a}, \bar{0}, \bar{0}, \bar{0}, \cdots, \bar{0})\\ &v_1^{(p^{t_1})}=(\underbrace{\bar{0}, \cdots, \bar{0}}_{k \text{ times }}, \bar{1}, \bar{b}, \bar{c}, \bar{0},\cdots, \bar{0}) \\ &v_2^{(p^{t_1})}=(\underbrace{\bar{0}, \cdots, \bar{0}}_{ k \text{ times }}, \bar{1}, \bar{b}, \bar{0}, \bar{0}, \cdots, \bar{0})\\ &v_3^{(p^{t_1})}=(\underbrace{\bar{0}, \cdots, \bar{0}}_{k \text{ times }}, \bar{1}, \bar{0}, \bar{0}, \bar{0}, \cdots, \bar{0}), \end{align*} ${a}=p^{t_1-1}$ and $\text{o}(\bar{b})=\text{o}(\bar{c})=p.$ Clearly, $\text{o}(\bar{a})=p.$ Again, $\bar{1}$ is a generator of the group $\mathbb{Z}_{p^{t_1}}$ and we can write $\bar{a}=p^{t_1-1}\bar{1}.$ Also, $t_1\geq 2$ implies that $t_1-1\geq1.$ Now $\text{o}(\bar{b})=\text{o}(\bar{c})=p$ implies that $p^{t_1-1}v_1^{(p^{t_1})}=v^{(p)}.$ Therefore, $v_1^{(p^{t_1})}\sim v^{(p)}.$ Similarly, we can show that $v_2^{(p^{t_1})}\sim v^{(p)}$ and $v_3^{(p^{t_1})}\sim v^{(p)}.$ Now we show that $v_i^{(p^{t_1})}\nsim v_j^{(p^{t_1})}$ for all $i\neq j$ in $\{1, 2, 3\}.$ Note that $\text{o}(v_i^{(p^{t_1})})=p^{t_1}$ for each $i.$ So $v_i^{(p^{t_1})}\sim v_j^{(p^{t_1})}$ if and only if $\langle v_i^{(p^{t_1})}\rangle=\langle v_j^{(p^{t_1})}\rangle.$ But this is clearly impossible from the construction of the vertices.
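The Case 1 construction can be sanity-checked on the smallest instance, taking $p=2,$ $k=0$ and $t_1=t_2=t_3=2,$ i.e. $G\cong\mathbb{Z}_4\times\mathbb{Z}_4\times\mathbb{Z}_4.$ A minimal computational sketch (the helper names are ours):

```python
def cyclic_subgroup(g, mods):
    """All multiples of g (including the identity) in Z_mods[0] x ... ."""
    H, x = set(), g
    while x not in H:
        H.add(x)
        x = tuple((a + b) % m for a, b, m in zip(x, g, mods))
    return H

def adjacent(x, y, mods):
    return x != y and (x in cyclic_subgroup(y, mods) or
                       y in cyclic_subgroup(x, mods))

# Case 1 with p = 2, k = 0, r = 3, t_1 = t_2 = t_3 = 2, so G = Z_4 x Z_4 x Z_4;
# here a = p^{t_1 - 1} = 2 and b, c are the order-2 elements of the last factors.
mods = (4, 4, 4)
v, v1, v2, v3 = (2, 0, 0), (1, 2, 2), (1, 2, 0), (1, 0, 0)
assert all(adjacent(v, w, mods) for w in (v1, v2, v3))       # centre-leaf edges
assert not any(adjacent(x, y, mods)                          # leaves pairwise
               for x, y in ((v1, v2), (v1, v3), (v2, v3)))   # non-adjacent
```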
Case 2: Let $r=2,$ so $G\cong\underbrace{\mathbb{Z}_p\times\cdots\times \mathbb{Z}_p}_{k (\geq 0) \text{ times }}\times \mathbb{Z}_{p^{t_1}}\times \mathbb{Z}_{p^{t_2}},$ where $t_1\leq t_2$ and $t_i\geq 2 \text{ for all }i.$ Subcase 1: First suppose that $p$ is an odd prime. Since $\phi(p)\geq 2,$ we can choose two distinct elements $\bar{b}$ and $\bar{c}$ from $\mathbb{Z}_{p^{t_2}}$ such that $\text{o}(\bar{b})=\text{o}(\bar{c})=p.$ Also we can take an element $\bar{a}\in \mathbb{Z}_{p^{t_1}}$ of order $p,$ namely $a=p^{t_1-1}.$ Now we consider the vertices \begin{align*} &v^{(p)}=(\bar{0}, \cdots, \bar{0}, \bar{a}, \bar{0}),& &v^{(p^{t_1})}_1=(\bar{0}, \cdots, \bar{0}, \bar{1}, \bar{b}), \\ &v^{(p^{t_1})}_2=(\bar{0}, \cdots, \bar{0}, \bar{1}, \bar{c}),& &v^{(p^{t_1})}_3=(\bar{0}, \cdots, \bar{0}, \bar{1}, \bar{0}). \end{align*} Arguing as in Case 1, we can show that $\mathcal{P}^{**}(G)$ has an induced subgraph $\Gamma_{1, 3}(v^{(p)}, v^{(p^{t_1})}_1, v^{(p^{t_1})}_2, v^{(p^{t_1})}_3).$ Subcase 2: Here we focus on the case $p=2.$ So $G\cong\underbrace{\mathbb{Z}_{2}\times\cdots\times \mathbb{Z}_{2}}_{k (\geq 0)\text{ times }}\times\mathbb{Z}_{2^{t_1}}\times\mathbb{Z}_{2^{t_2}}.$ In this case we first suppose that at least one of $t_1$ and $t_2$ is at least $3.$ Without loss of generality we assume that $t_1\geq 3.$ Here we consider the vertices \begin{align*} &v=(\bar{0}, \cdots, \bar{0}, \bar{a}, \bar{0}),& &v_1=(\bar{0}, \cdots, \bar{0}, \bar{1}, \bar{b}),\\ &v_2=(\bar{0}, \cdots, \bar{0}, \bar{1}, \bar{c}), & &v_3=(\bar{0}, \cdots, \bar{0}, \bar{1}, \bar{0}), \end{align*} where $a=2^{t_1-1}, \text{o}(\bar{b})=2, \text{o}(\bar{c})=4.$ Clearly, $\mathcal{P}^{**}(G)$ has an induced subgraph $\Gamma_{1, 3}(v, v_1, v_2, v_3).$ Let $G\cong \underbrace{\mathbb{Z}_2\times \cdots\times\mathbb{Z}_2}_{k(\geq 1) \text{ times }}\times\mathbb{Z}_{2^2}\times \mathbb{Z}_{2^2}.$ It is easy to see that $\mathcal{P}^{**}(G)$ has an induced subgraph
$\Gamma_{1, 3}(v^{(2)}, v_1^{(4)}, v_2^{(4)}, v_3^{(4)}),$ where \begin{align*} &v^{(2)}=(\bar{0}, \cdots, \bar{0}, \bar{0}, \bar{0},\bar{2}),& &v_1^{(4)}=(\bar{0}, \cdots, \bar{0}, \bar{1}, \bar{0}, \bar{1})\\ &v_2^{(4)}=(\bar{0}, \cdots, \bar{0}, \bar{0}, \bar{0}, \bar{1}),& &v_3^{(4)}=(\bar{0}, \cdots, \bar{0}, \bar{0}, \bar{2}, \bar{1}) \end{align*} Case 3: Let $r=1.$ In this case $G\cong \underbrace{\mathbb{Z}_p\times \cdots \times \mathbb{Z}_p}_{k(\geq 1)\text{ times }}\times \mathbb{Z}_{p^t}, t\geq 2.$ Subcase 1: First suppose that $p\geq 3.$ Then $\mathcal{P}^{**}(G)$ has an induced subgraph $\Gamma_{1, 3}(v^{(p^{t-1})}, v_1^{(p^t)}, v_2^{(p^t)}, v_3^{(p^t)}),$ where \begin{align*} &v^{(p^{t-1})}=(\bar{0},\cdots, \bar{0}, \bar{0}, \bar{p}),& &v_1^{(p^t)}=(\bar{0}, \cdots, \bar{0}, \bar{2}, \bar{1}),\\ &v_2^{(p^t)}=(\bar{0}, \cdots, \bar{0}, \bar{0}, \bar{1}),& &v_3^{(p^t)}=(\bar{0}, \cdots, \bar{0},\bar{1}, \bar{1}) \end{align*} Subcase 2: Let $p=2.$ Then $G\cong \underbrace{\mathbb{Z}_2\times \cdots\times\mathbb{Z}_2}_{ k(\geq 1)\text{ times}}\times\mathbb{Z}_{2^t}, t\geq 2.$ In this case, first suppose that $k\geq 2.$ Then the following vertices form an induced subgraph $\Gamma_{1, 3}(v^{(2^{t-1})}, v_1^{(2^t)}, v_2^{(2^t)}, v_3^{(2^t)}),$ where \begin{align*} &v^{(2^{t-1})}=(\bar{0}, \cdots, \bar{0}, \bar{0}, \bar{0},\bar{2}),& &v_1^{(2^t)}=(\bar{0}, \cdots, \bar{0}, \bar{0}, \bar{1}, \bar{1})\\ &v_2^{(2^t)}=(\bar{0}, \cdots, \bar{0}, \bar{0}, \bar{0},\bar{1}),& &v_3^{(2^t)}=(\bar{0}, \cdots, \bar{0}, \bar{1}, \bar{0}, \bar{1}).
\end{align*} Now let $k=1.$ Then $G\cong\mathbb{Z}_2\times \mathbb{Z}_{2^t}, t\geq 2.$ In this case, we show that $\mathcal{P}^{**}(G)$ has an induced subgraph isomorphic to $\Gamma_{1, 3}$ if and only if $t\geq 3.$ In fact, for $t\geq 3$ we have the induced subgraph $\Gamma_{1, 3}(v, v_1, v_2, v_3),$ where $v=(\bar{0}, \bar{4}), v_1=(\bar{1}, \bar{2}), v_2=(\bar{0}, \bar{1}), v_3=(\bar{1}, \bar{1}).$ Let $G\cong\mathbb{Z}_2\times\mathbb{Z}_{2^2}.$ Clearly, $\mathcal{P}^{**}(G)$ is the graph in Figure \ref{fig:pwr grapph for group of order Z_2*Z_2^2} \begin{figure}[H] \tiny \centering \begin{tikzpicture}[scale=1] \tikzstyle{edge_style} = [draw=black, line width=2mm, ] \draw (1,0)--(3,0); \draw (1, 0)--(1,3); \draw (1,3)--(3,0); \draw (3,0)--(6,0); \draw (3,0)--(5,3); \draw (5,3)--(6,0); % \node (e) at (.4,0){$\bf{(\bar{0}, \bar{1})}$}; \node (e) at (3,-.3){$\bf{(\bar{0}, \bar{2})}$}; \node (e) at (6,-.3){$\bf{(\bar{1}, \bar{1})}$}; \node (e) at (1,3.2){$\bf{(\bar{0}, \bar{3})}$}; \node (e) at (5.6,3.2){$\bf{(\bar{1}, \bar{3})}$}; \node (e) at (-1,1.1){$\bf{(\bar{1}, \bar{0})}$}; \node (e) at (7,1.1){$\bf{(\bar{1}, \bar{2})}$}; \fill[black!100!] (3,0) circle (.05); \fill[black!100!] (1,0) circle (.05); \fill[black!100!] (1,3) circle (.05); \fill[black!100!] (5,3) circle (.05); \fill[black!100!] (6,0) circle (.05); \fill[black!100!] (-1,1.5) circle (.06); \fill[black!100!] 
(7,1.5) circle (.06); \end{tikzpicture} \caption{The proper power graph $\mathcal{P}^{**}(\mathbb{Z}_2\times \mathbb{Z}_{2^2})$} \label{fig:pwr grapph for group of order Z_2*Z_2^2} \end{figure} Now we show that there are no vertices $v, v_1, v_2, v_3$ in $V(\mathcal{P}^{**}(G))$ such that $\mathcal{P}^{**}(G)$ has an induced subgraph $\Gamma_{1, 3}(v, v_1, v_2, v_3).$ The graph in Figure \ref{fig:pwr grapph for group of order Z_2*Z_2^2} has only one vertex of degree at least $3,$ namely $(\bar{0}, \bar{2}),$ with $\text{deg}(\bar{0}, \bar{2})=4,$ while every other vertex has degree at most $2.$ So, to form $\Gamma_{1, 3}(v, v_1, v_2, v_3)$ we must take $v=(\bar{0}, \bar{2}).$ But the four neighbours of $(\bar{0}, \bar{2})$ form two adjacent pairs, so we cannot choose $v_1, v_2, v_3$ among them such that no two of these three vertices are adjacent. Let $G\cong\mathbb{Z}_{2^2}\times\mathbb{Z}_{2^2};$ then we show that $\mathcal{P}^{**}(G)$ does not have any induced subgraph isomorphic to $\Gamma_{1, 3}.$ Suppose, if possible, that there is an induced subgraph $\Gamma_{1, 3}(v, v_1, v_2, v_3)$ for some $v, v_1, v_2, v_3\in \mathcal{P}^{**}(G).$ We first show that $\langle v\rangle\subsetneq \langle v_i\rangle$ for all $i.$ Note that $v\sim v_i (i=1, 2, 3)$ implies that $\langle v\rangle$ and $\langle v_i\rangle$ are comparable. If $\langle v_1\rangle\subset \langle v\rangle,$ then $v\sim v_2$ implies that either $\langle v\rangle\subset \langle v_2\rangle$ or $\langle v_2\rangle\subset \langle v\rangle;$ in the first case $\langle v_1\rangle\subset\langle v_2\rangle,$ and in the second case $\langle v_1\rangle$ and $\langle v_2\rangle$ both lie in the cyclic $2$-group $\langle v\rangle$ and hence are comparable. In either case $v_1\sim v_2,$ which is not possible.
So, since $\langle v\rangle\subsetneq \langle v_i\rangle$ and every element of $G$ has order at most $4,$ we get $\text{o}(v)=2$ and $\text{o}(v_i)=4$ for all $i.$ First consider the element $v=(\bar{2}, \bar{0})$ of order $2.$ Our claim is that the order-$4$ neighbours of $v$ generate exactly two distinct cyclic subgroups. Clearly, $(\bar{2}, \bar{0})\nsim (\bar{a}, \bar{b})$ whenever $\text{o}(\bar{b})=4.$ So if $(\bar{2}, \bar{0})\sim (\bar{a}, \bar{b})$ with $\text{o}((\bar{a}, \bar{b}))=4,$ then $\text{o}(\bar{a})=4.$ Therefore, $(\bar{2}, \bar{0})$ is adjacent exactly to the vertices $(\bar{1},\bar{0}), (\bar{3},\bar{0}), (\bar{1},\bar{2}), (\bar{3},\bar{2})$ among the elements of order $4.$ Also, $\langle (\bar{1}, \bar{0})\rangle=\langle (\bar{3}, \bar{0})\rangle$ and $\langle (\bar{1}, \bar{2})\rangle=\langle (\bar{3}, \bar{2})\rangle.$ This proves our claim. The same claim holds for the vertex $(\bar{0}, \bar{2}).$ The remaining element of order $2$ is $(\bar{2}, \bar{2}).$ Let $(\bar{2}, \bar{2})\sim (\bar{a}, \bar{b}).$ Then we show that $\text{o}(\bar{a})=4=\text{o}(\bar{b}).$ It is easy to see that neither $\bar{a}=\bar{0}$ nor $\bar{b}=\bar{0}.$ If $\bar{a}=\bar{2},$ then $(\bar{2}, \bar{2})\sim (\bar{2}, \bar{b})$ implies that there exists $k\in \mathbb{N}$ such that $k(\bar{2}, \bar{b})=(\bar{2}, \bar{2}).$ Now $k\bar{2}=\bar{2}$ forces $k$ to be odd, and then $k\bar{b}$ has order $4,$ so $k\bar{b}\neq \bar{2},$ a contradiction. A similar argument applies if $\bar{b}=\bar{2}.$ As a result, the order-$4$ neighbours of $(\bar{2}, \bar{2})$ are exactly $(\bar{1}, \bar{3}), (\bar{3}, \bar{1}), (\bar{1}, \bar{1}), (\bar{3}, \bar{3}),$ with $\langle (\bar{1}, \bar{3})\rangle=\langle (\bar{3}, \bar{1})\rangle$ and $\langle (\bar{1}, \bar{1})\rangle=\langle (\bar{3}, \bar{3})\rangle.$ Thus for every choice of $v,$ the neighbours $v_1, v_2, v_3$ would have to come from only two distinct cyclic subgroups, so two of them would generate the same cyclic subgroup and hence be adjacent, a contradiction. Let $G\cong\underbrace{\mathbb{Z}_p\times \cdots\times \mathbb{Z}_p}_{k \text{ times }}.$ Here the graph $\mathcal{P}^{**}(G)$ is isomorphic to the graph $\underbrace{K_{\phi(p)}\bigoplus\cdots\bigoplus K_{\phi(p)}}_{m \text{ times }},$ where $m=\frac{p^k-1}{p-1}$ is the number of subgroups of order $p$ in $G.$ Since each component is complete, $\mathcal{P}^{**}(G)$ contains no induced copy of $\Gamma_{1, 3}.$ This completes the proof.
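The neighbour computations in $\mathbb{Z}_2\times\mathbb{Z}_{2^2}$ and $\mathbb{Z}_{2^2}\times\mathbb{Z}_{2^2},$ as well as the claw found in $\mathbb{Z}_2\times\mathbb{Z}_{2^3}$ in the case $t\geq 3$ above, are small finite checks. A minimal computational sketch (the helper names are ours):

```python
from itertools import combinations, product

def proper_power_graph(mods):
    """P** of Z_mods[0] x ... : drop the identity and any generators."""
    def gen(g):                                   # the cyclic subgroup <g>
        H, x = {tuple(0 for _ in mods)}, g
        while x not in H:
            H.add(x)
            x = tuple((a + b) % m for a, b, m in zip(x, g, mods))
        return frozenset(H)
    G = list(product(*[range(m) for m in mods]))
    sub = {g: gen(g) for g in G}
    V = [g for g in G if 1 < len(sub[g]) < len(G)]
    adj = {x: {y for y in V if y != x and (x in sub[y] or y in sub[x])}
           for x in V}
    return V, adj, sub

def has_induced_claw(V, adj):
    return any(b not in adj[a] and c not in adj[a] and c not in adj[b]
               for v in V for a, b, c in combinations(sorted(adj[v]), 3))

# Z_2 x Z_4: (0,2) is the only vertex of degree >= 3, so no claw exists.
V, adj, sub = proper_power_graph((2, 4))
assert [x for x in V if len(adj[x]) >= 3] == [(0, 2)]
assert not has_induced_claw(V, adj)

# Z_4 x Z_4: the order-4 neighbours of each involution span exactly two
# cyclic subgroups, and the graph is claw-free.
V, adj, sub = proper_power_graph((4, 4))
assert all(len({sub[w] for w in adj[v]}) == 2
           for v in V if len(sub[v]) == 2)
assert not has_induced_claw(V, adj)

# Z_2 x Z_8 (the case t >= 3) does contain a claw.
V, adj, sub = proper_power_graph((2, 8))
assert has_induced_claw(V, adj)
```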
\end{proof} \begin{proof}[Proof of Theorem \ref{thm: non cyclic abeln grp line graph of {P}^{**}(G)}] Let $\mathcal{P}^{**}(G)$ be a line graph of some graph $\Gamma.$ Then by Proposition \ref{Prop: noncyclic, abln, {P}^{**}(G) does not contain Gamma_{1, 3} }, we can say that either $G\cong \mathbb{Z}_2\times \mathbb{Z}_{2^2}$ or $G\cong \mathbb{Z}_{2^2}\times \mathbb{Z}_{2^2}$ or $G\cong \mathbb{Z}_p\times\cdots\times \mathbb{Z}_{p}.$ Conversely, we show that if $G$ is one of the above, then $\mathcal{P}^{**}(G)$ is a line graph of some graph. If $G\cong \mathbb{Z}_2\times \mathbb{Z}_{2^2},$ then $\mathcal{P}^{**}(G)$ is the graph in Figure \ref{fig:pwr grapph for group of order Z_2*Z_2^2} and it is the line graph of the graph $\Gamma$ described in Figure \ref{fig: G=Z_2*Z_2^2 P^**(G) is line grapph of graph }. \begin{figure}[H] \tiny \centering \begin{tikzpicture}[scale=1] \tikzstyle{edge_style} = [draw=black, line width=2mm, ] \draw (0,0)--(0,2); \draw (0,2)--(-1,3.5); \draw (0,2)--(1,3.5); \draw (0,0)--(-1,-1.5); \draw (0,0)--(1, -1.5); \draw (-3,1)--(-1.5,1); \draw (1, 1)--(2.5,1); \node (e) at (-.3,0){$\bf{v_1}$}; \node (e) at (-1.3,-1.5){$\bf{v_2}$}; \node (e) at (1.3,-1.5){$\bf{v_3}$}; \node (e) at (-.3, 2){$\bf{v_4}$}; \node (e) at (1.3,3.5){$\bf{v_5}$}; \node (e) at (-1.3,3.5){$\bf{v_6}$}; \node (e) at (-3.3,1){$\bf{v_7}$}; \node (e) at (-1.2,1){$\bf{v_8}$}; \node (e) at (.7,1){$\bf{v_9}$}; \node (e) at (2.9,1){$\bf{v_{10}}$}; % \node (e) at (-.5,1){$\bf{e(\bar{0}, \bar{2})}$}; \node (e) at (-1.3,-.8){$\bf{e(\bar{0}, \bar{1})}$}; \node (e) at (1.3,-.8){$\bf{e(\bar{0}, \bar{3})}$}; \node (e) at (-1.3, 2.8){$\bf{e(\bar{1}, \bar{3})}$}; \node (e) at (1.3, 2.8){$\bf{e(\bar{1}, \bar{1})}$}; \node (e) at (1.8,1.2){$\bf{e(\bar{1}, \bar{2})}$}; \node (e) at (-2.3,1.2){$\bf{e(\bar{1}, \bar{0})}$}; % \fill[black!100!] (0,0) circle (.05); \fill[black!100!] (0,2) circle (.05); \fill[black!100!] (-1, 3.5) circle (.05); \fill[black!100!]
(-1,-1.5) circle (.05); \fill[black!100!] (-3,1) circle (.05); \fill[black!100!] (1,1) circle (.05); \filldraw[black!100] (2.5,1) circle (.05); \filldraw[black!100] (-1.5,1) circle (.05); \filldraw[black!100] (1,-1.5) circle (.05); \fill[black!100!] (1, 3.5) circle (.05); \end{tikzpicture} \caption{The graph $\Gamma$ such that $\mathcal{P}^{**}(\mathbb{Z}_2\times\mathbb{Z}_{2^2})=L(\Gamma).$ } \label{fig: G=Z_2*Z_2^2 P^**(G) is line grapph of graph } \end{figure} Let $G\cong\mathbb{Z}_{2^2}\times\mathbb{Z}_{2^2}.$ The group $G$ has $6$ distinct cyclic subgroups of order $4,$ namely \begin{align*} H_1=\langle (\bar{1}, \bar{0})\rangle, H_2=\langle (\bar{0}, \bar{1})\rangle, H_3=\langle (\bar{1}, \bar{1})\rangle\\ H_4=\langle (\bar{1}, \bar{3})\rangle, H_5=\langle (\bar{1}, \bar{2})\rangle, H_6=\langle (\bar{2}, \bar{1})\rangle. \end{align*} Again $(\bar{2},\bar{0})\in H_1\cap H_5, (\bar{2},\bar{2})\in H_3\cap H_4, (\bar{0},\bar{2})\in H_2\cap H_6.$ Therefore, $\mathcal{P}^{**}(G)$ is the graph in Figure \ref{fig:pwr grapph for group of Z_2^2*Z_2^2} \begin{figure}[H] \tiny \centering \begin{tikzpicture}[scale=1] \tikzstyle{edge_style} = [draw=black, line width=2mm, ] \draw (0,0)--(2,0); \draw (0, 0)--(0,2); \draw (0,2)--(2,0); \draw (2,0)--(4,0); \draw (4,0)--(4,2); \draw (4,2)--(2,0); % \node (e) at (0,-.3){$\bf{(\bar{1}, \bar{0})}$}; \node (e) at (2,-.3){$\bf{(\bar{2}, \bar{0})}$}; \node (e) at (0,2.2){$\bf{(\bar{3}, \bar{0})}$}; \node (e) at (4,-.3){$\bf{(\bar{1}, \bar{2})}$}; \node (e) at (4,2.2){$\bf{(\bar{3}, \bar{2})}$}; \fill[black!100!] (0,0) circle (.05); \fill[black!100!] (2,0) circle (.05); \fill[black!100!] (0,2) circle (.05); \fill[black!100!] (4,0) circle (.05); \fill[black!100!] 
(4,2) circle (.05); \node (e) at (4.5,1){$\bf{\bigoplus}$}; \draw (5,0)--(7,0); \draw (5, 0)--(5,2); \draw (5,2)--(7,0); \draw (7,0)--(9,0); \draw (9,0)--(9,2); \draw (9,2)--(7,0); % \node (e) at (7,-.3){$\bf{(\bar{0}, \bar{2})}$}; \node (e) at (5,2.2){$\bf{(\bar{0}, \bar{1})}$}; \node (e) at (5,-.3){$\bf{(\bar{0}, \bar{3})}$}; \node (e) at (9,-.3){$\bf{(\bar{2}, \bar{1})}$}; \node (e) at (9,2.2){$\bf{(\bar{2}, \bar{3})}$}; \fill[black!100!] (5,0) circle (.05); \fill[black!100!] (7,0) circle (.05); \fill[black!100!] (5,2) circle (.05); \fill[black!100!] (9,0) circle (.05); \fill[black!100!] (9,2) circle (.05); \node (e) at (9.5,1){$\bf{\bigoplus}$}; \draw (10,0)--(12,0); \draw (10, 0)--(10,2); \draw (10,2)--(12,0); \draw (12,0)--(14,0); \draw (14,0)--(14,2); \draw (14,2)--(12,0); \node (e) at (12,-.3){$\bf{(\bar{2}, \bar{2})}$}; \node (e) at (10,2.2){$\bf{(\bar{1}, \bar{1})}$}; \node (e) at (10,-.3){$\bf{(\bar{3}, \bar{3})}$}; \node (e) at (14,-.3){$\bf{(\bar{3}, \bar{1})}$}; \node (e) at (14,2.2){$\bf{(\bar{1}, \bar{3})}$}; \fill[black!100!] (12,0) circle (.05); \fill[black!100!] (10,0) circle (.05); \fill[black!100!] (10,2) circle (.05); \fill[black!100!] (14,0) circle (.05); \fill[black!100!] 
(14,2) circle (.05); \end{tikzpicture} \caption{The graph $\mathcal{P}^{**}(\mathbb{Z}_{2^2}\times\mathbb{Z}_{2^2})$} \label{fig:pwr grapph for group of Z_2^2*Z_2^2} \end{figure} Clearly, $\mathcal{P}^{**}(G)$ is the line graph of the graph described as in Figure \ref{fig: G=Z_2^2*Z_2^2 line grapph of graph of P^**(G) } \begin{figure}[H] \tiny \centering \begin{tikzpicture}[scale=1] \tikzstyle{edge_style} = [draw=black, line width=2mm, ] \draw (0,0)--(0,2); \draw (0,2)--(-1,3.5); \draw (0,2)--(1,3.5); \draw (0,0)--(-1,-1.5); \draw (0,0)--(1, -1.5); \node (e) at (-.3,0){$\bf{v_1}$}; \node (e) at (-1,-1.7){$\bf{v_2}$}; \node (e) at (1,-1.7){$\bf{v_3}$}; \node (e) at (-.3, 2){$\bf{v_4}$}; \node (e) at (1,3.7){$\bf{v_5}$}; \node (e) at (-1,3.7){$\bf{v_6}$}; % \node (e) at (-.5,1){$\bf{e(\bar{2}, \bar{0})}$}; \node (e) at (-1.3,-.8){$\bf{e(\bar{1}, \bar{0})}$}; \node (e) at (1.3,-.8){$\bf{e(\bar{3}, \bar{0})}$}; \node (e) at (-1.3, 2.8){$\bf{e(\bar{1}, \bar{2})}$}; \node (e) at (1.3, 2.8){$\bf{e(\bar{3}, \bar{2})}$}; % \fill[black!100!] (0,0) circle (.05); \fill[black!100!] (0,2) circle (.05); \fill[black!100!] (-1, 3.5) circle (.05); \fill[black!100!] (-1,-1.5) circle (.05); \filldraw[black!100] (1,-1.5) circle (.05); \fill[black!100!] (1, 3.5) circle (.05); \node (e) at (1.5,1){$\bf{\bigoplus}$}; \draw (3,0)--(3,2); \draw (3,2)--(2,3.5); \draw (3,2)--(4,3.5); \draw (3,0)--(2,-1.5); \draw (3,0)--(4, -1.5); \node (e) at (2.7,0){$\bf{v_7}$}; \node (e) at (2,-1.7){$\bf{v_8}$}; \node (e) at (4,-1.7){$\bf{v_{9}}$}; \node (e) at (2.6, 2){$\bf{v_{10}}$}; \node (e) at (4.3,3.7){$\bf{v_{11}}$}; \node (e) at (1.85,3.7){$\bf{v_{12}}$}; % \node (e) at (2.5,1){$\bf{e(\bar{0}, \bar{2})}$}; \node (e) at (2.9,-1){$\bf{e(\bar{0}, \bar{1})}$}; \node (e) at (4.3,-.8){$\bf{e(\bar{0}, \bar{3})}$}; \node (e) at (2.9, 3){$\bf{e(\bar{2}, \bar{1})}$}; \node (e) at (4.3, 2.8){$\bf{e(\bar{2}, \bar{3})}$}; \fill[black!100!] (3,0) circle (.05); \fill[black!100!] 
(3,2) circle (.05); \fill[black!100!] (2, 3.5) circle (.05); \fill[black!100!] (2,-1.5) circle (.05); \filldraw[black!100] (4,-1.5) circle (.05); \fill[black!100!] (4, 3.5) circle (.05); \node (e) at (4.5,1){$\bf{\bigoplus}$}; \draw (6,0)--(6,2); \draw (6,2)--(5,3.5); \draw (6,2)--(7,3.5); \draw (6,0)--(5,-1.5); \draw (6,0)--(7, -1.5); \node (e) at (5.6,0){$\bf{v_{13}}$}; \node (e) at (5,-1.7){$\bf{v_{14}}$}; \node (e) at (7,-1.7){$\bf{v_{15}}$}; \node (e) at (5.6, 2){$\bf{v_{16}}$}; \node (e) at (7.3,3.7){$\bf{v_{17}}$}; \node (e) at (4.85,3.7){$\bf{v_{18}}$}; % \node (e) at (5.5,1){$\bf{e(\bar{2}, \bar{2})}$}; \node (e) at (5.9,-1){$\bf{e(\bar{1}, \bar{1})}$}; \node (e) at (7.3,-.8){$\bf{e(\bar{3}, \bar{3})}$}; \node (e) at (5.9, 3){$\bf{e(\bar{3}, \bar{1})}$}; \node (e) at (7.3, 2.8){$\bf{e(\bar{1}, \bar{3})}$}; % \fill[black!100!] (6,0) circle (.05); \fill[black!100!] (6,2) circle (.05); \fill[black!100!] (5, 3.5) circle (.05); \fill[black!100!] (5,-1.5) circle (.05); \filldraw[black!100] (7,-1.5) circle (.05); \fill[black!100!] (7, 3.5) circle (.05); \end{tikzpicture} \caption{The graph $\Gamma$ such that $\mathcal{P}^{**}(\mathbb{Z}_{2^2}\times\mathbb{Z}_{2^2})=L(\Gamma).$} \label{fig: G=Z_2^2*Z_2^2 line grapph of graph of P^**(G) } \end{figure} Let $G\cong\underbrace{\mathbb{Z}_p\times \cdots\times \mathbb{Z}_p}_{k \text{ times}}.$ In this case $\mathcal{P}^{**}(G)\cong \underbrace{K_{\phi(p)}\bigoplus\cdots\bigoplus K_{\phi(p)}}_{m \text{ times }},$ where $m=p^{k-1}+p^{k-2}+\cdots+p+1$ is the number of subgroups of order $p$ in $G.$ Clearly, it is the line graph of the graph $\underbrace{\Gamma_{1, \phi(p)}\bigoplus\cdots\bigoplus\Gamma_{1, \phi(p)}}_{m \text{ times }}.$ Hence the theorem. \end{proof} In this portion we study the non abelian nilpotent groups $G$ for which $\mathcal{P}^{**}(G)$ is a line graph. \begin{theorem}\label{thm: non abelian G has geq 3 distinct prime divisor, not line graph } Let $G$ be a non abelian nilpotent group such that $|G|$ has at least three distinct prime divisors.
Then there is no graph $\Gamma$ such that $\mathcal{P}^{**}(G)=L(\Gamma).$ \end{theorem} \begin{proof} Let $p_1, p_2, p_3$ be three distinct prime divisors of $|G|.$ Since $G$ is nilpotent, $G$ has an element $v^{(p_1p_2p_3)}$ such that $\text{o}(v^{(p_1p_2p_3)})=p_1p_2p_3.$ Now $\langle v^{(p_1p_2p_3)}\rangle$ has elements $v^{(p_1)}, v^{(p_2)}, v^{(p_3)}$ such that $\text{o}(v^{(p_1)})=p_1, \text{o}(v^{(p_2)})=p_2, \text{o}(v^{(p_3)})=p_3.$ Clearly, $\mathcal{P}^{**}(G)$ has an induced subgraph $\Gamma_{1, 3}(v^{(p_1p_2p_3)}, v^{(p_1)}, v^{(p_2)}, v^{(p_3)}).$ Hence the theorem. \end{proof} \begin{theorem}\label{thm: G non ab nilpotent 2 prime divisors, P**(G) not a line graph} Let $G$ be a non abelian nilpotent group such that $|G|$ has exactly two distinct prime divisors. Then there is no graph $\Gamma$ such that $\mathcal{P}^{**}(G)=L(\Gamma).$ \end{theorem} \begin{proof} It is given that $G$ is non abelian nilpotent and $|G|$ has exactly two distinct prime divisors, say $p_1$ and $p_2.$ Note that $G\cong P_1\times P_2,$ where $P_1, P_2$ are the Sylow subgroups of $G$ with $|P_1|=p_1^{\alpha_1}$ and $|P_2|=p_2^{\alpha_2}.$ So $|G|=p_1^{\alpha_1}p_2^{\alpha_2},$ where $\alpha_i \geq 2$ for at least one $i\in\{1, 2\}$ (otherwise $G$ would be cyclic). First suppose that $p_1$ and $p_2$ are odd primes. Now $G$ is nilpotent, therefore $G$ has an element $v^{(p_1p_2)}$ of order $p_1p_2.$ Consider the cyclic subgroup $H=\langle v^{(p_1p_2)}\rangle.$ (Note that all the non-identity elements of the cyclic group $H$ belong to $V(\mathcal{P}^{**}(G)).$) Again $H$ has elements $v^{(p_1)}$ and $v^{(p_2)}$ of order $p_1$ and $p_2$ respectively.
Also, no non-identity element of $\langle v^{(p_1)}\rangle $ is adjacent to a non-identity element of $\langle v^{(p_2)}\rangle.$ Now $p_1$ and $p_2$ are odd primes, which implies that $\phi(p_1p_2), \phi(p_1), \phi(p_2)\geq2.$ Let $v_1^{(p_1)}, v_2^{(p_1)}, v_1^{(p_2)}, v_2^{(p_2)} \text{ and } v_1^{(p_1p_2)}, v_2^{(p_1p_2)}$ be elements of $H$ such that $\text{o}(v_1^{(p_1)})=\text{o}(v_2^{(p_1)})=p_1, \text{o}(v_1^{(p_2)})=\text{o}(v_2^{(p_2)})=p_2, \text{ and } \text{o}(v_1^{(p_1p_2)})=\text{o}(v_2^{(p_1p_2)})=p_1p_2.$ Now we replace the vertices of the graph $\Gamma_3$ in Figure \ref{fig:line grapph theory} in the following way: \[11 \text{ by } v_1^{(p_1p_2)}, 13 \text{ by } v_2^{(p_1p_2)}, 12 \text{ by }v_1^{(p_1)}, 15 \text{ by } v_2^{(p_1)}, 14 \text{ by } v_1^{(p_2)} \text{ and } 16 \text{ by } v_2^{(p_2)}.\] It is easy to see that the resulting graph is isomorphic to the graph $\Gamma_3$ in Figure \ref{fig:line grapph theory}. Now let one of $p_1$ and $p_2$ equal $2,$ say $p_1=2.$ Then $|G|=2^kp_2^r,$ for some $r, k\in \mathbb{N}$ with at least one of $r, k$ at least $2.$ First suppose that $G$ has an element of order $2^k$ with $k\geq2.$ Then $G$ has an element $v^{(2^kp_2)}$ such that $\text{o}(v^{(2^kp_2)})=2^kp_2.$ As $\phi(2^kp_2)\geq 2,$ we can choose two elements $v_1^{(2^kp_2)}, v_2^{(2^kp_2)}$ from $\langle v^{(2^kp_2)}\rangle$ such that $\text{o}(v_1^{(2^kp_2)})=2^kp_2$ and $\text{o}(v_2^{(2^kp_2)})=2^kp_2.$ Also $\langle v^{(2^kp_2)}\rangle$ has elements $v_1^{(2^k)}, v_2^{(2^k)}$ (as $k\geq 2,$ $\phi(2^k)\geq2$) and $v_1^{(p_2)}, v_2^{(p_2)}$ such that $\text{o}(v_1^{(2^k)})=\text{o}(v_2^{(2^k)})=2^k$ and $\text{o}(v_1^{(p_2)})=\text{o}(v_2^{(p_2)})=p_2$ ($p_2$ is an odd prime, so it is possible to find at least two elements of order $p_2$ in $\langle v^{(2^kp_2)}\rangle$). Now we replace the vertices of the graph $\Gamma_3$ in Figure \ref{fig:line grapph theory} in the following way: \[11 \text{ by } v_1^{(2^kp_2)}, 13 \text{ by } v_2^{(2^kp_2)}, 12 \text{ 
by }v_1^{(2^k)}, 15 \text{ by } v_2^{(2^k)}, 14 \text{ by } v_1^{(p_2)} \text{ and } 16 \text{ by } v_2^{(p_2)}.\] As a result, $\mathcal{P}^{**}(G)$ has an induced subgraph isomorphic to the graph $\Gamma_3.$ Therefore, in this case $\mathcal{P}^{**}(G)$ is not a line graph. Now suppose that every non-identity element of the Sylow subgroup $P_1$ has order $2.$ Then either $P_1\cong \underbrace{\mathbb{Z}_2\times \cdots \times \mathbb{Z}_2}_{k(\geq2)\text{ times }}$ or $P_1\cong \mathbb{Z}_2.$ Therefore, either $G\cong\underbrace{\mathbb{Z}_2\times \cdots \times \mathbb{Z}_2}_{k(\geq2)\text{ times }}\times P_2$ or $G\cong \mathbb{Z}_2\times P_2.$ In both cases, $G$ has at least one element $v^{(2)}$ (say) of order $2.$ Let $H_1=\langle v_1^{(p_2)}\rangle, \cdots, H_{\ell}=\langle v_{\ell}^{(p_2)}\rangle$ be the complete list of distinct cyclic subgroups of order $p_2$ of the Sylow subgroup $P_2.$ Now, by the claim in Case 3 of the proof of Theorem \ref{thm: P(G) line graph, G non abelian nilpotent}, we have $\ell\neq 2.$ Again, $\ell=1$ implies that $G$ is abelian by Lemma \ref{p group unique subgrp of order p, g is cyclic}.
Therefore, $\ell\geq 3.$ Consider $K_1=\langle v^{(2)}v_1^{(p_2)} \rangle, \cdots, K_{\ell}=\langle v^{(2)}v_{\ell}^{(p_2)} \rangle.$ Clearly, each $K_i$ is a cyclic group of order $2p_2.$ Moreover, for all $i\neq j$ in $\{1, \cdots, \ell\}$ $(\ell\geq 3),$ we have $K_i\neq K_j$ and $K_i\cap K_j=\langle v^{(2)}\rangle.$ Now we choose three generators $ v^{(2)}v_{i_1}^{(p_2)}, v^{(2)}v_{i_2}^{(p_2)} \text{ and } v^{(2)}v_{i_3}^{(p_2)}$ from three distinct cyclic subgroups $K_{i_1}, K_{i_2}$ and $K_{i_3}$ respectively, where $i_1, i_2, i_3\in \{1, \cdots, \ell\}.$ Since $K_{i_1}, K_{i_2}$ and $K_{i_3}$ are distinct cyclic subgroups of the same order, the generators $ v^{(2)}v_{i_1}^{(p_2)}, v^{(2)}v_{i_2}^{(p_2)} \text{ and } v^{(2)}v_{i_3}^{(p_2)}$ are not adjacent to each other in $\mathcal{P}^{**}(G).$ Also, $\langle v^{(2)}\rangle$ is contained in each of the cyclic subgroups $K_{i_1}, K_{i_2}$ and $K_{i_3}.$ As a result, $\mathcal{P}^{**}(G)$ contains an induced subgraph $\Gamma_{1, 3}(v^{(2)}, v^{(2)}v_{i_1}^{(p_2)}, v^{(2)}v_{i_2}^{(p_2)}, v^{(2)}v_{i_3}^{(p_2)}).$ This completes the proof. \end{proof} \begin{theorem}\label{Thm: non abelian p group and line graph possibility of P**(G)} Let $G$ be a non abelian $p$-group, where $p$ is an odd prime.
Then there is a graph $\Gamma$ such that $\mathcal{P}^{**}(G)=L(\Gamma)$ if and only if $G=\mathbb{Z}_{p^{t_1}}\cup \cdots\cup \mathbb{Z}_{p^{t_{\ell}}},$ where $\ell$ is the number of distinct subgroups of order $p$ in $G.$ \end{theorem} \begin{proof} First suppose that $G=\mathbb{Z}_{p^{t_1}}\cup \cdots\cup \mathbb{Z}_{p^{t_{\ell}}},$ where $\ell$ is the number of distinct subgroups of order $p$ in $G.$ Then for any $i\neq j,$ $\mathbb{Z}_{p^{t_i}}\cap \mathbb{Z}_{p^{t_j}}=\{e\},$ the identity of the group $G.$ In fact, for a non-identity element $a\in \mathbb{Z}_{p^{t_i}}\cap \mathbb{Z}_{p^{t_j}},$ the subgroup of order $p$ of $\langle a\rangle$ is contained in $ \mathbb{Z}_{p^{t_i}}\cap \mathbb{Z}_{p^{t_j}},$ and then the $\ell$ cyclic subgroups $\mathbb{Z}_{p^{t_1}}, \cdots, \mathbb{Z}_{p^{t_{\ell}}}$ would contain fewer than $\ell$ distinct subgroups of order $p,$ contradicting that $\ell$ is the number of distinct subgroups of order $p$ in $G.$ As a result, for any $i\neq j,$ $\mathbb{Z}_{p^{t_i}}\cap \mathbb{Z}_{p^{t_j}}=\{e\}.$ So $\mathcal{P}^{**}(G)$ is the disjoint union of the complete graphs on the non-identity elements of the subgroups $\mathbb{Z}_{p^{t_i}},$ that is, $\mathcal{P}^{**}(G)$ is the line graph of the graph $\Gamma_{1, p^{t_1}-1}\bigoplus\cdots \bigoplus\Gamma_{1, p^{t_{\ell}}-1}.$ Conversely, suppose that there is a graph $\Gamma$ such that $\mathcal{P}^{**}(G)=L(\Gamma).$ We show that $G=\mathbb{Z}_{p^{t_1}}\cup \cdots\cup \mathbb{Z}_{p^{t_{\ell}}},$ where $\ell$ is the number of distinct subgroups of order $p$ in $G.$ Now $G$ is a $p$-group.
So we can write $G=K_1\cup\cdots \cup K_r,$ where \begin{align*} K_1&=\{x_{(1)}\in G: \langle a_{(1)}\rangle \subset \langle x_{(1)}\rangle \text{ and } \text{o}(a_{(1)})=p\}\cup \{e\}\\ K_2&=\{x_{(2)}\in G: \langle a_{(2)}\rangle \subset \langle x_{(2)}\rangle, a_{(2)}\in G\setminus \langle a_{(1)}\rangle \text{ and }\text{o}(a_{(2)})=p\}\cup \{e\}\\ \vdots& \hspace{40mm} \vdots\\ K_r&=\{x_{(r)}\in G: \langle a_{(r)}\rangle \subset \langle x_{(r)}\rangle, a_{(r)}\in G\setminus \langle a_{(1)}\rangle\cup\cdots\cup\langle a_{(r-1)}\rangle, \text{ and } \text{o}(a_{(r)})=p\}\cup \{e\} \end{align*} (Note that $r\neq 1$: if $r=1,$ then $G$ is a group with a unique minimal subgroup; since $p$ is an odd prime, $G$ would then be a cyclic $p$-group, contradicting that $G$ is non abelian.) Clearly, $K_i\cap K_j=\{e\}$ for any $i\neq j.$ Now it is enough to show that each $K_i=\mathbb{Z}_{p^{t_i}},$ for some $t_i\geq 1.$ Suppose, to the contrary, that there is at least one $j$ such that $K_j$ is not a cyclic subgroup of $G.$ Then we can write $K_j$ as a union of cyclic subgroups of $G.$ In fact, since $K_j$ is not a cyclic group, there exists at least one element, say $x_{(j)}^{(p^r)}\in K_j,$ such that $\text{o}(x_{(j)}^{(p^r)})=p^{r}, r\geq 2$ (if the order of each element of $K_j$ were $p,$ then $K_j$ would be cyclic by the construction of $K_j$). Consider $K_{(j)}^{r}=\langle x_{(j)}^{(p^r)}\rangle.$ Clearly, it is a cyclic subgroup of $K_j.$ Since $K_j$ is not a cyclic group, there is an element, say $\tilde{x}_{(j)}^{(p^s)}\in K_j\setminus K_{(j)}^{r},$ such that $\text{o}(\tilde{x}_{(j)}^{(p^s)})=p^s, s\geq 2$ ($s=1$ would imply $K_j=\mathbb{Z}_{p^r}$). Let $K_{(j)}^s=\langle \tilde{x}_{(j)}^{(p^s)}\rangle.$ Note that neither $K_{(j)}^r$ is a subgroup of $ K_{(j)}^s$ nor $K_{(j)}^s$ is a subgroup of $K_{(j)}^r,$ and $K_{(j)}^r\cap K_{(j)}^s$ contains the cyclic subgroup $\langle a_{(j)}\rangle.$ Now $p\geq 3$ implies that $\phi(p), \phi(p^r)\text{ and }\phi(p^s)\geq 2.$ Let $a_{(j)1}, a_{(j)2}$ be two
$p$-ordered elements of $K_j, x^{(p^r)}_{(j)1}, x^{(p^r)}_{(j)2}$ be two $p^r$-ordered elements of $K_{(j)}^r$ and $x^{(p^s)}_{(j)1}, x^{(p^s)}_{(j)2}$ be two $p^s$-ordered elements of $K_{(j)}^s.$ Now we replace the vertices of the graph $\Gamma_3$ in Figure \ref{fig:line grapph theory} in the following way: \[11 \text{ by }a_{(j)1}, 13 \text{ by }a_{(j)2}, 12 \text{ by }x^{(p^r)}_{(j)1}, 15 \text{ by } x^{(p^r)}_{(j)2}, 14 \text{ by } x^{(p^s)}_{(j)1} \text{ and } 16 \text{ by }x^{(p^s)}_{(j)2}.\] Clearly the resulting graph is isomorphic to the graph $\Gamma_3.$ Therefore, $\mathcal{P}^{**}(G)$ has an induced subgraph isomorphic to $\Gamma_3.$ This contradicts that $\mathcal{P}^{**}(G)$ is a line graph. Therefore, each $K_i=\mathbb{Z}_{p^{t_i}},$ for some $t_i\in \mathbb{N}.$ Clearly $r=\ell$ and hence $G=\mathbb{Z}_{p^{t_1}}\cup \cdots\cup\mathbb{Z}_{p^{t_{\ell}}},$ where $\ell$ is the number of distinct cyclic subgroups of order $p.$ \end{proof} \begin{proof}[Proof of Theorem \ref{Thm:P**(G) is line graph for nilpotent group}] Clearly, Theorems \ref{classify: G cyclic line graph, P^{**}(G)}, \ref{thm: non cyclic abeln grp line graph of {P}^{**}(G)}, \ref{thm: non abelian G has geq 3 distinct prime divisor, not line graph }, \ref{thm: G non ab nilpotent 2 prime divisors, P**(G) not a line graph}, \ref{Thm: non abelian p group and line graph possibility of P**(G)} complete the proof. \end{proof} Now we concentrate on non abelian $2$-groups. For that we first consider the generalized quaternion group.
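The two structural facts about $Q_{2^n}$ used in the proof below can be checked computationally. The following Python sketch (not part of the paper; it realizes $Q_{16}$ by the standard $2\times 2$ complex matrix generators $a=\mathrm{diag}(\zeta,\zeta^{-1})$, $b=\begin{psmallmatrix}0&1\\-1&0\end{psmallmatrix}$) confirms that the powers of $a$ form a cyclic subgroup of order $2^{n-1}$ and that every element outside it has order $4$, giving $2^{n-2}$ distinct cyclic subgroups of order $4$:

```python
# Sanity check (illustrative, not from the paper): Q_{2^n} via 2x2 matrices.
import cmath

def mul(m1, m2):
    a_, b_, c_, d_ = m1
    e_, f_, g_, h_ = m2
    return (a_*e_ + b_*g_, a_*f_ + b_*h_, c_*e_ + d_*g_, c_*f_ + d_*h_)

def key(m, nd=9):
    # hashable representative, rounding away floating-point noise
    return tuple((round(z.real, nd), round(z.imag, nd)) for z in m)

def order(m):
    ident = (1, 0, 0, 1)
    p, k = m, 1
    while key(p) != key(ident):
        p, k = mul(p, m), k + 1
    return k

n = 4                                        # Q_16
zeta = cmath.exp(2j * cmath.pi / 2**(n - 1))
a = (zeta, 0, 0, zeta**-1)                   # generator of the cyclic part
b = (0, 1, -1, 0)                            # b^2 = a^(2^(n-2)) = -identity

elems = {key((1, 0, 0, 1)): (1, 0, 0, 1)}
frontier = [a, b]
while frontier:                              # generate the group by closure
    m = frontier.pop()
    if key(m) not in elems:
        elems[key(m)] = m
        frontier += [mul(m, a), mul(m, b)]

H, m = set(), (1, 0, 0, 1)
for _ in range(2**(n - 1)):                  # the cyclic subgroup <a>
    H.add(key(m))
    m = mul(m, a)

outside = [g for kg, g in elems.items() if kg not in H]
assert len(elems) == 2**n and len(H) == 2**(n - 1)
assert all(order(g) == 4 for g in outside)   # everything outside <a> has order 4
assert len(outside) // 2 == 2**(n - 2)       # each order-4 subgroup meets it in 2 elements
print(len(elems), len(H), len(outside))      # 16 8 8
```

Each order-$4$ subgroup $\langle x\rangle$ contributes exactly the two generators $x, x^3$ outside $\langle a\rangle$ (since $x^2=-1$ lies inside), which is why the last assertion counts $2^{n-2}$ subgroups.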
\begin{proof} [Proof of Theorem \ref{THm: Line graph of generalized quaternion group}] We know that $Q_{2^n}$ has exactly one cyclic subgroup $H$ of order $2^{n-1}.$ Also each element in $Q_{2^n}\setminus H$ is of order $4.$ So, there are $2^{n-2}$ distinct $4$-ordered cyclic subgroups in $Q_{2^n}.$ Clearly, $\mathcal{P}^{**}(Q_{2^n})$ is the graph $K_{2^{n-1}}\bigoplus \underbrace{K_2\bigoplus\cdots\bigoplus K_2}_{ 2^{n-2}\text{ times}}.$ Therefore, $\mathcal{P}^{**}(Q_{2^n})$ is the line graph of the graph $\Gamma_{1, 2^{n-1}}\bigoplus\underbrace{\Gamma_{1, 2}\bigoplus\cdots\bigoplus\Gamma_{1, 2}}_{ 2^{n-2}\text{ times}}.$ \end{proof} \begin{proof}[Proof of Theorem \ref{thm: P**(D_n) is a line graph if and only}] Suppose $n$ is not a power of $2.$ Then clearly $|D_n|$ has a prime divisor $p\neq 2.$ Therefore, by Theorem \ref{thm: G non ab nilpotent 2 prime divisors, P**(G) not a line graph}, $\mathcal{P}^{**}(D_n)$ is not a line graph. Conversely, let $n=2^k;$ then $D_n$ is a $2$-group. Also $D_n=\mathbb{Z}_{2^k}\cup \underbrace{\mathbb{Z}_2\cup\cdots\cup\mathbb{Z}_2}_{2^k \text{ times }}$ and the number of distinct $2$-ordered cyclic subgroups is $2^k+1.$ Then by Theorem \ref{Thm:P**(G) is line graph for nilpotent group}, $\mathcal{P}^{**}(D_n)$ is a line graph. \end{proof} \subsection*{Acknowledgment} I would like to thank my mentor Prof. Arvind Ayyer for his constant support and encouragement. Also I would like to thank Dr. Sumana Hatui for the helpful discussions on $p$-groups. The author was supported by NBHM Post Doctoral Fellowship grant 0204/52/2019/RD-II/339. \bibliographystyle{amsplain}
https://arxiv.org/abs/2104.06694
Line graph characterization of power graphs of finite nilpotent groups
This paper deals with the classification of groups $G$ such that power graphs and proper power graphs of $G$ are line graphs. In fact, we classify all finite nilpotent groups whose power graphs are line graphs. Also, we categorize all finite nilpotent groups (except non-abelian $2$-groups) whose proper power graphs are line graphs. Moreover, we investigate when the proper power graphs of generalized quaternion groups are line graphs. Besides, we derive a condition on the order of the dihedral groups for which the proper power graphs of the dihedral groups are line graphs.
https://arxiv.org/abs/quant-ph/0508101
Qdensity - a Mathematica Quantum Computer Simulation
This Mathematica 5.2 package~\footnote{QDENSITY is available at this http URL} is a simulation of a Quantum Computer. The program provides a modular, instructive approach for generating the basic elements that make up a quantum circuit. The main emphasis is on using the density matrix, although an approach using state vectors is also implemented in the package. The package commands are defined in {\it Qdensity.m} which contains the tools needed in quantum circuits, e.g. multiqubit kets, projectors, gates, etc. Selected examples of the basic commands are presented here and a tutorial notebook, {\it Tutorial.nb} is provided with the package (available on our website) that serves as a full guide to the package. Finally, application is made to a variety of relevant cases, including Teleportation, Quantum Fourier transform, Grover's search and Shor's algorithm, in separate notebooks: {\it QFT.nb}, {\it Teleportation.nb}, {\it Grover.nb} and {\it Shor.nb} where each algorithm is explained in detail. In addition, two examples of the construction and manipulation of cluster states, which are part of ``one way computing" ideas, are included as an additional tool in the notebook {\it Cluster.nb}. A Mathematica palette containing most commands in QDENSITY is also included: {\it QDENSpalette.nb} .
\section{INTRODUCTION} There is already a rich Quantum Computing (QC) literature~\cite{Nielsen} which holds forth the promise of using quantum interference and superposition to solve otherwise intractable problems. The field has reached the point that experimental realizations are of paramount importance and theoretical tools towards that goal are needed: to gauge the efficacy of various approaches, to understand the construction and efficiency of the basic logical gates, and to delineate and control environmental decoherence effects. In this paper, a Mathematica~\cite{Mathematica} package provides a simulation of a Quantum Computer that is both flexible and an improvement over earlier such works~\cite{prevQC}. It is a bona fide simulation in that its success depends on quantum interference and superposition and is not just a simulation of the QC experience. The flexibility is generated by a modular approach to all of the initializations, operators, gates, and measurements, which then can be readily used to describe the basic QC Teleportation~\cite{Teleportation}, Grover's search~\cite{Grover,Groverslit} and Shor's factoring~\cite{Shor} algorithms. We also adopt a density matrix approach as an organizational framework for introducing fundamental Quantum Computing concepts in a manner that allows for more general treatments, such as handling the dynamics stipulated by realistic Hamiltonians and including environmental effects. That approach allows us to invoke many of the dynamical theories based on the time evolution of the density matrix. Since much of the code uses the density matrix, we call it ``{\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}," which stands for Quantum computing with a density matrix framework. However, the code also provides the tools to work directly with multi-qubit states as an alternative to the density matrix description. 
In section~\ref{sec2}, we introduce one qubit state vectors and associated spin operators, including rotations, and introduce commands from {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}. The basic characteristics of the density matrix are then discussed in a pedagogic manner in section~\ref{sec3}. Then in section~\ref{sec4}, methods for handling multi-qubit operators, density matrices, and state vectors with commands from {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} are presented. Also in that section, we show how to take traces, subtraces and how to combine subtraces with general projection operators to simulate projective measurements in a multi-qubit context. The basic one, two and three qubit gates (Hadamard, CNOT, CPHASE, Toffoli, etc.) needed for the QC circuits are shown in section~\ref{sec5}. The production of entangled states, such as the two-qubit Bell~\cite{Bell} states, the three-qubit GHZ~\cite{GHZ} states, among others,~\cite{Werner} are illustrated in both density matrix and state vector renditions in section~\ref{sec6}. In sections~\ref{sec7}-\ref{sec9}, Teleportation, Grover's search, and Shor's factoring algorithms are outlined, with the detailed instructions relegated to associated notebooks. Sample application to the cluster or ``one-way computing" model of QC is presented in section~\ref{sec10}. Possible future applications of {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} are given in the conclusion section~\ref{sec11}. The basic logical gates used in the circuit model of Quantum Computing are presented in a way that allows ease of use and hence permits one to construct the unitary operators corresponding to well-known quantum algorithms. These algorithms are developed explicitly in the Mathematica notebooks as a demonstration of the application of {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}.
A tutorial notebook (Tutorial.nb) available on our web site guides the user through the requisite manipulations. Many examples from {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}, as defined in the package file Qdensity.m, are discussed throughout the text, which hopefully, with the tutorial notebook, will help the user to employ this tool. All these examples are instructive in two ways. One way is to learn how to handle {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} for other future applications and generalizations, such as studying entanglement measures, examining the time evolution generated by experiment-based realistic Hamiltonians, error correction methods, and the role of the environment and its effect on coherence. Thus the main motivation for emphasizing a density matrix formulation is that the time evolution can be described, including effects of an environment, starting from realistic Hamiltonians~\cite{Lindblad,Preskill}. Therefore, {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} provides an introduction to methods that can be generalized to an increasingly realistic description of a real quantum computer. Another instructive feature is to gain insight into how quantum superposition and interference are used in QC to pose and to answer questions that would be inaccessible using a classical computer. Thus we can form an initial description of a quantum multi-qubit state and have it evolve by the action of carefully designed unitary operators. In that development, the prime characteristics of superposition and interference of probability amplitudes are cleverly applied in Quantum Computing to enhance the probability of getting the right answer to problems that would otherwise take an immense time to solve.
In addition to applying {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} to the usual quantum circuit model for QC, we have adapted it to examine the construction of cluster states and the steps needed to reproduce general one qubit and two qubit operations. These cluster model examples are included to open the door for future studies of the quite promising cluster model~\cite{Cluster} or ``one-way computing" approach for QC. We sought to simulate as large a system of qubits as possible, using new features of Mathematica. Of course, this code is a simulation of a quantum computer based on Mathematica code run on a classical computer. So it is natural that the simulation saturates memory for large qubit spaces; after all, if the QC algorithms always worked efficiently on a classical computer there would be no need for a quantum computer. {\bf Throughout the text, sample {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} commands are presented in sections called ``Usage." The reader should consult Tutorial.nb for more detailed guidance.} \section{ONE QUBIT SYSTEMS} \label{sec2} The state of a quantum system is described by a wave function which in general depends on the space or momentum coordinates of the particles and on time. In Dirac's representation-independent notation, the state of a system is a vector in an abstract Hilbert space $\mid \Psi(t)>$, which depends on time, but in that form one makes no choice between the coordinate or momentum space representation. The transformation between the space and momentum representation is contained in a transformation bracket. The two representations are related by Fourier transformation, which is the way Quantum Mechanics builds localized wave packets. In this way, uncertainty principle limitations on our ability to measure coordinates and momenta simultaneously with arbitrary precision are embedded into Quantum Mechanics (QM).
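As a concrete numerical illustration (not part of the QDENSITY package; a NumPy sketch with an assumed Gaussian packet), one can build a localized wave packet on a spatial grid, Fourier transform it to momentum space, and verify that the r.m.s. spreads saturate the uncertainty bound $\Delta x\,\Delta p = 1/2$ in units with $\hbar=1$:

```python
# Gaussian wave packet: position-momentum spreads via FFT (hbar = 1).
import numpy as np

N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
sigma = 1.3                                   # assumed packet width
psi = np.exp(-x**2 / (4 * sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize <psi|psi> = 1

p = 2 * np.pi * np.fft.fftfreq(N, d=dx)       # momentum grid (FFT ordering)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)    # momentum-space amplitude

def spread(grid, amp):
    """r.m.s. spread of |amp|^2 treated as a distribution on grid."""
    w = np.abs(amp)**2
    w = w / w.sum()
    mean = np.sum(grid * w)
    return np.sqrt(np.sum((grid - mean)**2 * w))

dx_rms = spread(x, psi)    # Delta x = sigma
dp_rms = spread(p, phi)    # Delta p = 1/(2 sigma)
assert abs(dx_rms * dp_rms - 0.5) < 1e-3      # Gaussian saturates the bound
```

A Gaussian is the minimum-uncertainty packet; any other envelope would give $\Delta x\,\Delta p > 1/2$.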
This fact leads to operators, commutators, expectation values and, in the special cases when a physical attribute can be precisely determined, eigenvalue equations with Hermitian operators. That is the content of many quantum texts. Our purpose is now to see how to define a density matrix, to describe systems with two degrees of freedom as needed for quantum computing. Spin, which is the most basic two-valued quantum attribute, is missing from a spatial description. This subtle degree of freedom, whose existence is deduced by analysis of the Stern-Gerlach experiment, is an additional Hilbert space vector feature. For example, for a single spin 1/2 system the wave function including both space and spin aspects is: \begin{equation} \Psi(\vec{r}_1, t) \mid s\ m_s>, \end{equation} where $\mid s \ m_s>$ denotes a state that is simultaneously an eigenstate of the particle's total spin operator $s^2 = s_x^2 +s_y^2+s_z^2$, and of its spin component operator $s_z$. That is \begin{equation} s^2 \mid s m_s> = \hbar^2 s (s+1) \mid s m_s> \qquad s_z \mid s m_s> = \hbar m_s \mid s m_s> \,. \end{equation} For a spin 1/2 system, we denote the spin up state as $\mid s m_s>\rightarrow \mid \frac{1}{2},\frac{1}{2}> \equiv \mid 0>$, and the spin down state as $\mid s m_s>\rightarrow \mid \frac{1}{2},-\frac{1}{2}> \equiv \mid 1>$. We now arrive at the definition of a one qubit state as a superposition of the two states associated with the above $0$ and $1$ bits: \begin{equation} \mid \Psi> =a \mid 0>+ b \mid 1>, \end{equation} where $a \equiv <0\mid \Psi>$ and $b\equiv <1\mid \Psi>$ are complex probability amplitudes for finding the particle with spin up or spin down, respectively. The normalization of the state $<\Psi\mid\Psi> =1$, yields $\mid a \mid^2 + \mid b \mid^2=1$. 
Note that the spatial aspects of the wave function are being suppressed, which corresponds to the particles being in a fixed location, such as at quantum dots.~\footnote{When these separated systems interact, one might need to restore the spatial aspects of the full wave function.} An essential point is that a QM system can exist in a superposition of these two bits; hence, the state is called a quantum-bit or ``qubit." Although our discussion uses the notation of a system with spin, it should be noted that the same discussion applies to any two distinct states that can be associated with $\mid 0> $ and $ \mid 1>$. Indeed, the following section on the Pauli spin operators is really a description of any system that has two recognizable states. \subsubsection{Usage } {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} includes commands for qubit states as ket and bra vectors. For example, commands {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Ket}[0], {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Bra}[0],{\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Ket}[1], {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Bra}[1], yield \begin{eqnarray} {\rm In[1]} &:=&{\rm \bf Ket[0]} \nonumber \\ {\rm Out[1]}&:=& \left( \begin{array}{l} 1 \\ 0 \end{array} \right) \nonumber \end{eqnarray} \begin{eqnarray} {\rm In[2]} &:=&{\rm \bf Ket[1]} \nonumber \\ {\rm Out[2]}&:=&{\bf \left( \begin{array}{l} 0 \\ 1 \end{array} \right)} \nonumber \end{eqnarray} \begin{eqnarray} {\rm In[3]} &:=&{\rm \bf Bra[0]} \nonumber \\ {\rm Out[3]}&:=&{\bf ( 1\;\; 0 )} \nonumber \end{eqnarray} \begin{eqnarray} {\rm In[4]} &:=&{\rm \bf Bra[1]} \nonumber \\ {\rm Out[4]}&:=&{\bf ( 0\;\; 1 )} \nonumber \end{eqnarray} These are the computational basis states, i.e. eigenstates of the spin operator in the z-direction.
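The Mathematica outputs above are simply column and row vectors; for readers who want to experiment outside Mathematica, a minimal NumPy analogue (illustrative only, not the QDENSITY commands themselves; the amplitude values are arbitrary choices) is:

```python
# NumPy analogue of the Ket/Bra basis states shown above.
import numpy as np

ket0 = np.array([[1.0], [0.0]])        # |0>, spin up
ket1 = np.array([[0.0], [1.0]])        # |1>, spin down
bra0, bra1 = ket0.conj().T, ket1.conj().T

# the x-basis bra (Bra[0] + Bra[1])/sqrt(2)
braX0 = (bra0 + bra1) / np.sqrt(2)
assert np.allclose(braX0, [[2**-0.5, 2**-0.5]])

# a general qubit a|0> + b|1> with |a|^2 + |b|^2 = 1 (assumed values)
a, b = 0.6, 0.8j
psi = a * ket0 + b * ket1
assert np.isclose((psi.conj().T @ psi).item(), 1.0)  # <psi|psi> = 1
assert np.isclose((bra0 @ psi).item(), a)            # a = <0|psi>
```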
States that are eigenstates of the spin operator in the x-direction are invoked by the commands \begin{eqnarray} {\rm In[5]} &:=&{\rm \bf BraX[0]} \nonumber \\ {\rm Out[5]}&:=&{\bf \Big( \frac{1}{\sqrt{2}} \;\; \frac{1}{\sqrt{2}} \Big)} \nonumber \end{eqnarray} which is equivalent to: \begin{eqnarray} {\rm In[6]}&:=&{\rm \bf (Bra[0]+Bra[1])}/\sqrt{2} \nonumber \\ {\rm Out[6]}&:=&{\bf \Big( \frac{1}{\sqrt{2}} \;\; \frac{1}{\sqrt{2}} \Big)} \nonumber \end{eqnarray} Eigenstates of the spin operator in the y-direction are invoked similarly by the commands ${\rm \bf BraY[0]}$, ${\rm \bf BraY[1]}$, etc. \subsection{The Pauli Spin Operator} We use the case of a spin 1/2 particle to describe a quantum system with two discrete levels; the same description can be applied to any QM system with two distinct levels. The spin $\vec{s}$ operator is related to the three Pauli spin operators $\sigma_x, \sigma_y, \sigma_z$ by \begin{equation} \vec{s} \equiv (\frac{\hbar}{2}) \vec{\sigma} , \end{equation} from which we see that $\vec{\sigma}$ is an operator that describes the spin 1/2 system in units of $\frac{\hbar}{2}$. Since spin is an observable, it is represented by a Hermitian operator, $\vec{\sigma}^\dagger = \vec{\sigma}$. We also know that measurement of spin is subject to the uncertainty principle, which is reflected in the non-commuting operator properties of spin and hence of the Pauli operators. For example, from the standard commutator property for any spin $[s_x, s_y] = i \hbar s_z,$ one deduces that the Pauli operators do not commute \begin{equation} [ \sigma_x , \sigma_y ] = 2 i \sigma_z \ .
\end{equation}\ This holds for all cyclic components, so we have a general form~\footnote{Here the Levi-Civita symbol is nonzero only for cyclic order of components $ { i j k }= xyz, yzx ,zxy ,$ for which $ \epsilon_{ i j k}=1.$ For anti-cyclic order of components ${ i j k}= xzy, zyx, yxz$ $\epsilon_{ ijk}=-1.$ It is otherwise zero.} \begin{equation} [ \sigma_i, \sigma_j ] = 2 i \epsilon_{ i j k} \sigma_k \ . \end{equation} An important property of the spin algebra is that the total spin commutes with any component of the spin operator $[s^2, s_i]=0$ for all $i$. The physical consequence is that one can simultaneously measure the total spin of the system and one component (usually $s_z$) of the spin. Only one component is a candidate for simultaneous measurement because the commutator $\nobreak{[s_x, s_y] = i \hbar s_z}$ is already an uncertainty principle constraint on the other components. As a result of the ability to measure $s^2$ and $s_z$ simultaneously, the allowed states of the spin 1/2 system are restricted to being spin-up and spin-down with respect to a specified fixed direction $\hat{z}$, called the axis of quantization. States defined relative to that fixed axis are called ``computational basis" states, in the QC jargon. The designation arises because, as noted already, one can identify spin-up with a state $\mid 0>$, which designates the digit or bit $0,$ and a spin-down state as $\mid 1>,$ which designates the digit or bit $1$. The fact that there are just two states (up and down) also implies properties of the Pauli operators. We construct~\footnote{With the definition $s_\pm\equiv s_x\pm i s_y,$ and using the original spin commutation rules, it follows that $[ s_\pm , s_z] = \mp \hbar\, s_\pm ,$ which reveals that $s_\pm$ and hence also $\sigma_\pm$ are raising and lowering operators.
The general result, including the limit on the total spin, is $s_\pm \mid s\ m_s> = \hbar\sqrt{s(s+1) -m_s(m_s\pm1)} \mid s\ m_s \pm 1>.$ } the raising and lowering operators $\nobreak{\sigma_\pm = \sigma_x \pm i \sigma_y}$, and note that the raising and lowering process is bounded \begin{equation} \sigma_+ \mid 0 > = 0 \qquad\sigma_{-} \mid 1 > = 0. \end{equation} Hence, raising a two-valued state up twice or lowering it twice yields a null (unphysical) Hilbert space; this property tells us additional aspects of the Pauli operators. Since \begin{equation} \sigma_\pm \sigma_\pm =( \sigma_x \pm i \sigma_y)^2 = \sigma_x^2- \sigma_y^2 \pm i\,( \sigma_x \sigma_y + \sigma_y \sigma_x) =0, \end{equation} we deduce that $ \sigma_x^2=\sigma_y^2$, and that the anti-commutator \begin{equation} \{\sigma_x , \sigma_y\} \equiv \sigma_x \sigma_y + \sigma_y \sigma_x =0. \end{equation} The anti-commutation property is thus a direct consequence of the restriction to two levels. The spin 1/2 property is often expressed as: $ s^2 \mid s m_s > = \hbar^2 s ( s+1) \mid s m_s > = \frac{3}{4} \hbar^2 \mid s m_s > = \frac{\hbar^2}{4} \sigma^2 \mid s m_s >.$ We have $\sigma^2 =3 = \sigma_x^2 + \sigma_y^2 + \sigma_z^2 =2 \sigma_x^2 +1,$ where we use the above equality $\sigma_x^2=\sigma_y^2,$ and from the $\hat{z}$ eigenvalue equation the property $\sigma_z^2 = 1,$ to deduce that \begin{equation} \sigma_x^2 = \sigma_y^2 =\sigma_z^2=1. \end{equation} Another useful property of the Pauli matrices is obtained by combining the above properties and commutator and anti-commutator into one expression for a given spin 1/2 system \begin{equation} \sigma_i \sigma_j = \delta_{i j} + i \epsilon_ {i j k } \sigma_k, \end{equation} where indices $ i ,j, k $ take on the values $x,y,z,$ and repeated indices are assumed to be summed over. For two general vectors, this becomes \begin{equation} (\vec{\sigma}\cdot \vec{A} ) (\vec{\sigma}\cdot \vec{B} ) = \vec{A} \cdot \vec{B} + i (\vec{A} \times \vec{B}) \cdot \vec{\sigma}.
\end{equation} For $\vec{A}=\vec{B}= \hat{\eta},$ a unit vector, $(\vec{\sigma}\cdot \hat{\eta} )^2 =1$, which will be useful later. These operator properties can also be represented by the Pauli-spin matrices, where we identify the matrix elements by \begin{equation} < s\ m'_s \mid \sigma_z \mid s\ m_s > \longrightarrow \left( \begin{array}{lc} 1 & 0\\ 0 & -1 \end{array}\right) \, . \end{equation} Similarly for the $x-$ and $y-$component spin operators \begin{equation} < sm'_s \mid \sigma_x \mid s\ m_s > \longrightarrow \left( \begin{array}{lccr} 0&&& 1\\ 1&&& 0 \end{array}\right) \qquad < sm'_s \mid \sigma_y \mid sm_s > \longrightarrow \left( \begin{array}{lcr} 0 && -i \\ i && 0 \end{array}\right) \, . \end{equation} These are all Hermitian matrices $\sigma_i = \sigma^\dagger_i$. Also, the matrix realizations for the raising and lowering operators are: \begin{equation} \sigma_{+} = \left( \begin{array}{lccr} 0 &&& 2 \\ 0 &&& 0 \end{array}\right) \hspace{.25in} \sigma_{-}= \left( \begin{array}{lccr} 0 &&& 0 \\ 2 &&& 0 \end{array}\right) \, . \end{equation} Here $\sigma_+^\dagger =\sigma_{-}.$ Note that these $2 \times 2$ Pauli matrices are traceless ${\rm Tr} [ \vec{\sigma}] = 0, $ unimodular $\sigma^2_i = 1$ and have unit determinant $\mid \det \sigma_i \mid = 1$. Along with the unit operator \begin{equation} \sigma_0= {\bf 1} \equiv \left( \begin{array}{lccr} 1 &&& 0 \\ 0 &&& 1 \end{array}\right) , \end{equation} the four Pauli operators form a basis for the expansion of any spin operator in the single qubit space. For example, we can express the rotation of a spin as such an expansion and later we shall introduce a density matrix for a single qubit in the form $\rho = a + \vec{b} \cdot \vec{\sigma} = a + b\ \vec{n} \cdot \vec{\sigma}$ to describe an ensemble of particle spin directions as occurs in a beam of spin-1/2 particles.
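The operator identities above are easy to verify numerically; the following NumPy sketch (an illustration, not part of the Mathematica package) checks the commutator, anti-commutator, trace, determinant, and bounded-ladder properties:

```python
# Numerical check of the Pauli algebra derived in the text.
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

assert np.allclose(sx @ sy - sy @ sx, 2j * sz)       # [sx, sy] = 2i sz
assert np.allclose(sx @ sy + sy @ sx, 0)             # {sx, sy} = 0
for s in (sx, sy, sz):
    assert np.isclose(np.trace(s), 0)                # traceless
    assert np.allclose(s @ s, s0)                    # sigma_i^2 = 1
    assert abs(abs(np.linalg.det(s)) - 1) < 1e-12    # |det sigma_i| = 1

# bounded raising/lowering: sigma_+|0> = 0 and sigma_-|1> = 0
splus, sminus = sx + 1j * sy, sx - 1j * sy
ket0, ket1 = np.array([1, 0]), np.array([0, 1])
assert np.allclose(splus @ ket0, 0)
assert np.allclose(sminus @ ket1, 0)
```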
In {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}, we denote the four matrices by $\sigma_i$ where $i=0$ is the unit matrix and $i=1,2,3$ corresponds to the components $x,y,$ and $z$. To produce the Pauli spin operators in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}, one can use either the Greek form or the expression $s[i]$. \subsubsection{Usage } {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} includes commands for the Pauli operators. For example, there are three equivalent ways to invoke the Pauli $\sigma_y$ matrix in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}: \begin{eqnarray} {\rm In[7]} &:=&{\rm \bf \sigma_y} \nonumber\\ {\rm Out[7]}&:=&{\bf \left( \begin{array}{ll} 0 & -i \\ i & 0 \end{array} \right)} \nonumber \end{eqnarray} \begin{eqnarray} {\rm In[8]} &:=&{\rm \bf s[2]} \nonumber\\ {\rm Out[8]}&:=& {\bf \left( \begin{array}{ll} 0 & -i \\ i & 0 \end{array} \right)} \nonumber \end{eqnarray} The third way is to use the commands {\bf Sigma0}, {\bf Sigma1},{\bf Sigma2}, or {\bf Sigma3}. Note that \begin{eqnarray} {\rm In[9] }&:=& {\rm \bf \sigma _2}\,.\,{\rm \bf KetY[0]-KetY[0] } \nonumber\\ {\rm Out[9]}&:=& {\bf \left( \begin{array}{l} 0 \\ 0 \end{array} \right)} \nonumber \end{eqnarray} and \begin{eqnarray} {\rm In[10] }&:=& {\rm \bf \sigma _2}\,.\,{\rm \bf KetY[1]+KetY[1] } \nonumber\\ {\rm Out[10]}&:=& {\bf \left( \begin{array}{l} 0 \\ 0 \end{array} \right)} \nonumber \end{eqnarray} confirm that {\rm \bf KetY[0]} and {\rm \bf KetY[1]} are indeed eigenstates of $\sigma_y.$ Note that the $ \cdot $ is used to take the dot product. \subsection{Pauli Operators in Hilbert Space} It is often convenient to express the above matrix properties in the form of operators in Hilbert space. For a general operator $\Omega$, using closure, we have \begin{equation} \Omega = \sum_n \sum_{n'} \mid n> < n \mid \Omega \mid n'> <n'\mid \, . 
\end{equation} For the four Pauli operators this yields: \begin{eqnarray} \sigma_0 &=& \mid 0 > < 0 \mid + \mid 1 > < 1 \mid \nonumber \\ \sigma_1 &=& \mid 0 > < 1 \mid + \mid 1 > < 0 \mid \nonumber \\ \sigma_2 &=& - i \mid 0 > < 1 \mid + i \mid 1 > < 0 \mid \nonumber \\ \sigma_3 &=& \mid 0 > < 0 \mid - \mid 1 > < 1 \mid \, . \end{eqnarray} Taking matrix elements of these operators reproduces the above Pauli matrices. Also note we have the general trace property \begin{equation} {\rm Tr} \mid a><b\mid = \sum_n < n \mid a >< b \mid n> = \sum_n < b \mid n> < n \mid a > = < b \mid a>, \end{equation} where $\mid n>$ is a complete orthonormal (CON) basis~\footnote{For a CON basis, we have closure $ \sum_n \mid n>\ < n \mid= {\bf 1}.$}. Applying this trace rule to the above operator expressions confirms that ${\rm Tr}[\sigma_x] ={\rm Tr}[\sigma_y]={\rm Tr}[\sigma_z]=0,$ and ${\rm Tr}[\sigma_0]=2$. Another useful trace rule is ${\rm Tr}[\; \Omega \ \mid a><b \mid \, ] = <b \mid \Omega \mid a>$. \subsection{Rotation of Spin} Another way to view the above superposition, or qubit, state is that a state originally in the $\hat{z}$ direction $\mid 0>$, has been rotated {\it to a new direction} specified by a unit vector $\hat{n}= ( n_x,n_y, n_z)=( \sin \theta \cos \phi, \sin\theta\sin \phi, \cos \theta),$ as shown in Fig.~\ref{spinrot}. \begin{figure}[h] \begin{center} \includegraphics[width=8pc]{vec} \hspace*{20pt} \includegraphics[width=8pc]{vecrot} \end{center} \caption{Active rotation {\it to} a direction $\hat{n}$ (a); and active rotation {\it around} a vector $\hat{\eta}$ (b).
} \protect\label{spinrot} \end{figure} The rotated spin state \begin{equation} \mid \hat{n}> = \cos(\theta/2) e^{-i \phi/2} \mid 0> +\sin(\theta/2) e^{+i \phi/2} \mid 1> = \left( \begin{array}{l} \cos(\theta/2) e^{-i \phi/2} \\ \sin(\theta/2) e^{+i \phi/2} \end{array}\right) \, , \end{equation} is a normalized eigenstate of the operator \begin{equation} \vec{\sigma}\cdot \hat{n} = \left( \begin{array}{lc} n_z & n_x-i n_y \\ n_x+i n_y & -n_z \end{array}\right) = \left( \begin{array}{lc} \cos\theta & \sin \theta e^{-i \phi} \\ \sin \theta e^{i \phi} & -\cos\theta \end{array}\right) \,. \end{equation} We see that \begin{equation} \vec{\sigma}\cdot \hat{n} \mid \hat{n}> = \mid \hat{n}>. \end{equation} The half angles above reflect the so-called spinor nature of the QM state of a spin 1/2 particle in that a double full rotation is needed to bring the wave function back to its original form. These rotated spin states allow us to pick a different axis of quantization $\hat{n}$ for each particle in an ensemble of spin 1/2 systems. These states are normalized $<\hat{n} \mid \hat{n}>=1,$ but are not orthogonal $<\hat{n}' \mid \hat{n}>\neq 0$, when the $\hat{n}$ angles $\theta,\phi$ do not equal the $\hat{n}'$ angles $\theta',\phi'.$ Special cases of the above states, with spin pointing in the directions $\pm\hat{x}$ and $\pm\hat{y},$ are, up to phases: \begin{eqnarray} \mid \pm x> &=&\frac{1}{\sqrt{2}} \left( \begin{array}{c} \ 1 \\ \pm 1 \end{array}\right) \rightarrow \frac{\mid 0 > \pm \mid 1> }{\sqrt{2}} \, , \nonumber \\ \mid \pm y> &=& \frac{1}{\sqrt{2}}\left( \begin{array}{c} \ 1 \\ \pm i \end{array}\right)\rightarrow \frac{\mid 0 > \pm i \mid 1> }{\sqrt{2}} \,. \end{eqnarray} Hilbert space versions are also shown above. Rotation can also be expressed as a rotation of an initial spin-up system {\it about a rotation axis} $\hat{\eta}$ by an angle $\gamma$.
Thus an operator $R_\gamma \equiv e^{-i \frac{\gamma}{2}\vec{ \sigma} \cdot\hat{\eta}}, $ acting as \begin{equation} \mid \Psi > =R_\gamma \mid 0> \end{equation} can also rotate the spin system state to a new direction. This rotation operator can be expanded in the form \begin{equation} R_\gamma = e^{-i \frac{\gamma}{2}\vec{ \sigma} \cdot\hat{\eta}}= \cos \frac{\gamma}{2}\ \sigma_0 -i \sin \frac{\gamma}{2}\ \vec{\sigma}\cdot \hat{\eta}, \label{eq:rot} \end{equation} which follows from the property that $(\vec{\sigma}\cdot \hat{\eta})^2 = \hat{\eta} \cdot \hat{\eta}+ i (\hat{\eta}\times \hat{\eta}) \cdot \vec{\sigma} =1 \, .$ A special case of the state generated by this rotation $R_\gamma \mid 0 > $ is a $\gamma=\pi/2$ rotation about the $ \hat{\eta}\rightarrow \hat{y}$ axis. Then the rotation operator is \begin{equation} R_{\pi/2} = e^{-i \frac{\pi}{4} \sigma_y} = \cos \frac{\pi}{4} \sigma_0 - i \sin \frac{\pi}{4}\ \sigma_y \ . \end{equation} Introducing the Pauli matrices, this becomes \begin{equation} R_{\pi/2} = \frac{1}{\sqrt{2}} \left( \begin{array}{ccc} 1 && -1 \\ 1 && \ \ 1 \end{array}\right) . \end{equation} This rotation about an axis again yields the same result; namely, \begin{equation} R_{\pi/2} \mid 0 >= \frac{1}{\sqrt{2}} \left( \begin{array}{ccl} 1 && -1 \\ 1&& \ \ 1 \end{array}\right) \cdot \frac{1}{\sqrt{2}} \left( \begin{array}{l} 1 \\ 0 \end{array}\right) =\frac{1}{\sqrt{2}} \left( \begin{array}{l} 1 \\ 1 \end{array}\right) \ . \end{equation} Similar steps apply for a rotation about the $\hat{x}$ axis by $\gamma=\pi/2,$ which yields the earlier $\mid \pm y >$ states. From normalization of the rotated state $<\Psi\mid\Psi>=\nobreak{<0\mid R^\dagger_{\gamma}R_{\gamma}\mid 0>}= \nobreak{<0\mid 0>}=1,$ we see that the rotation is a unitary $R^\dagger_{\gamma} R_{\gamma} ={\bf 1}$ operator.
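Both rotation constructions above can be checked numerically. The NumPy sketch below (illustrative; the angle values are arbitrary test choices, not from the text) verifies that $\mid \hat{n}>$ is a $+1$ eigenvector of $\vec{\sigma}\cdot \hat{n}$, and that the power series for $e^{-i \frac{\gamma}{2}\vec{\sigma}\cdot\hat{\eta}}$ collapses to the closed form of Eq.~(\ref{eq:rot}), here for $\hat{\eta}=\hat{y}$ and $\gamma=\pi/2$:

```python
# Checks of the rotated spin state and the rotation-operator expansion.
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# |n> is a +1 eigenvector of sigma.n for a direction (theta, phi)
theta, phi = 0.7, 1.9                              # arbitrary angles
n = [np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)]
sig_n = n[0] * sx + n[1] * sy + n[2] * sz
ket_n = np.array([np.cos(theta / 2) * np.exp(-1j * phi / 2),
                  np.sin(theta / 2) * np.exp(+1j * phi / 2)])
assert np.isclose(np.vdot(ket_n, ket_n), 1.0)      # normalized
assert np.allclose(sig_n @ ket_n, ket_n)           # sigma.n |n> = |n>

def expm(M, terms=40):
    """Matrix exponential by its power series (adequate for 2x2 here)."""
    out = np.eye(2, dtype=complex)
    term = np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

g = np.pi / 2
R = np.cos(g / 2) * s0 - 1j * np.sin(g / 2) * sy   # closed form
assert np.allclose(expm(-1j * (g / 2) * sy), R)    # equals the series
assert np.allclose(R.conj().T @ R, s0)             # unitary
assert np.allclose(R @ np.array([1, 0]), np.array([1, 1]) / np.sqrt(2))
```

The last assertion reproduces $R_{\pi/2}\mid 0> = (\mid 0> + \mid 1>)/\sqrt{2}$ from the text.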
\subsubsection{Usage} The trace of the Pauli operators is invoked in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} by:\\ \begin{eqnarray} {\rm In[1] }&:=& {\rm \bf Tr[\sigma _2]}\nonumber\\ {\rm Out[1]}&:=& {\bf 0} \nonumber \end{eqnarray} Rotation about the X axis is represented by \\ \begin{eqnarray} {\rm In[2] }&:=& {\rm \bf RotX[\theta]}\nonumber \\ {\rm Out[2]}&:=& {\bf \left( \begin{array}{ll} \text{Cos}\left[\frac{\theta}{2}\right] & -i \text{Sin}\left[\frac{\theta}{2}\right] \\ -i \text{Sin}\left[\frac{\theta}{2}\right] & \text{Cos}\left[\frac{\theta}{2}\right] \end{array}\right)} \nonumber \end{eqnarray} Commands for other directions are described in {\it Tutorial.nb}; see RotX[$\theta$], RotY[$\theta$], Rotqbit[vec,$\theta$]. \subsection{One Qubit Projection} For a one qubit system, it is simple to define operators that project onto the spin-up or spin-down states. These projection operators are: \begin{equation} {\mathcal P}_0 \equiv \mid 0><0 \mid \hspace{30pt}{\mathcal P}_1 \equiv \mid 1><1 \mid . \end{equation} These are Hermitian operators and, by virtue of closure, they sum to the identity, $\nobreak{ \sum_{a=0,1} {\mathcal P}_a = 1.} $ They can also be expressed in terms of the $\sigma_z$ operator as \begin{equation} {\mathcal P}_0 =\frac{1 + \sigma_z}{2} \qquad { \mathcal P}_1 =\frac{1 - \sigma_z}{2}, \end{equation} or in matrix form \begin{equation} {\mathcal P}_0 = \left( \begin{array}{lccc} 1 &&& 0 \\ 0 &&& 0 \end{array}\right) \hspace{30pt}{ \mathcal P}_1 = \left( \begin{array}{lccc} 0 &&& 0 \\ 0 &&& 1 \end{array}\right)\,. \end{equation} One can also project onto other directions. For example, projection of a qubit onto the $\pm\hat{x}$ or $\pm\hat{y}$ directions involves the projection operators \begin{eqnarray} {\mathcal P}_{\pm x} &=& \mid \pm \hat{x} > < \pm \hat{x} \mid = \frac{1 \pm \sigma_x}{2}, \nonumber \\ {\mathcal P}_{\pm y} &=& \mid \pm \hat{y} > < \pm \hat{y} \mid = \frac{1 \pm \sigma_y}{2}. 
\end{eqnarray} \subsubsection{Usage} The above projection operators are invoked in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} by: \begin{eqnarray} {\rm In[1] }&:=& {\rm \bf \mathcal{P}_0} \nonumber \\ {\rm Out[1]}&:=& {\bf \left( \begin{array}{lccc} 1 &&& 0 \\ 0 &&& 0 \end{array} \right) }\nonumber \end{eqnarray} \begin{eqnarray} {\rm In[2] }&:=& {\rm \bf \mathcal{P}_1} \nonumber \\ {\rm Out[2]}&:=& {\bf \left( \begin{array}{lccc} 0 &&& 0 \\ 0 &&& 1 \end{array} \right) } \nonumber \end{eqnarray} Projection operators using the x-basis are invoked by: \begin{eqnarray} {\rm In[3] }&:=& {\rm \bf {PX}[0]} \nonumber \\ {\rm Out[3]}&:=& \left( \begin{array}{lccc} \frac{1}{2} &&& \frac{1}{2} \\ \frac{1}{2} &&& \frac{1}{2} \end{array} \right) \nonumber \end{eqnarray} \begin{eqnarray} {\rm In[4] }&:=& {\rm \bf {PX}[1]} \nonumber \\ {\rm Out[4]}&:=& \left( \begin{array}{ll} \ \ \frac{1}{2} & -\frac{1}{2} \\ -\frac{1}{2} &\ \ \frac{1}{2} \end{array} \right) \nonumber \end{eqnarray} A general operator {\bf ProjG[a,vec]} to project into a direction stipulated by a unit three vector {\bf vec}, with a=0 or 1, is also provided. See {\it Tutorial.nb} for more examples. These operators are useful for projective measurements. \section{THE (SPIN) DENSITY MATRIX} \label{sec3} The above spin-rotated wave functions can be used to obtain the expectation value of some relevant Hermitian operator $\Omega=\Omega^\dagger$, which represents a physical observable. Let us assume that a system is in a state labelled by $\alpha$ with a state vector $\mid \alpha>$. In general, the role of the label $\alpha$ could be to denote a spatial coordinate (or a momentum), if we were considering an ensemble of localized particles. For the spin density matrix, we use $\alpha$ to label the various spin directions $\hat{n}.$ The average or expectation value of a general observable $\Omega$ is then $ \nobreak{<\alpha \mid \Omega \mid \alpha >} \, . 
$ This expectation value can be interpreted simply by invoking eigenstates of the operator $\Omega$ \begin{equation} \Omega \mid \nu > = \omega_\nu \mid \nu >, \end{equation} where $\omega_\nu$ are real eigenvalues and $\mid \nu >$ are the eigenstates, which usually form a complete orthonormal (CON) basis. The physical meaning of the eigenvalue equation is that if the system is in the eigenstate $\mid \nu >$, there is no uncertainty $\Delta \Omega$ in determining the eigenvalue, e.g. \begin{equation} (\Delta \Omega)^2 \equiv <\nu \mid \Omega^2 \mid \nu> - < \nu \mid \Omega \mid \nu >^2 = \omega_\nu^2 -\omega_\nu^2 \equiv 0. \end{equation} Using the eigenstates $\mid \nu >$, we can now see the basic meaning of the expectation value, which is a fundamental part of QM. The eigenstates form a CON basis. That means any function can be expanded in this basis and that the coefficients can be obtained by an overlap integral. For example, in general terms the completeness (C) allows the expansion \begin{equation} \mid \Psi> = \sum_\nu c_\nu \mid \nu > . \end{equation} The OrthoNormal (ON) aspect is $< \nu \mid \nu'>=\delta_{\nu \nu'}.$ Thus \begin{equation} <\nu' \mid \Psi>= \sum_\nu c_\nu~<\nu'\mid~\nu>~=~c_{\nu'} \, , \end{equation} and reinserting this yields \begin{equation} \mid \Psi> = \sum_\nu <\nu \mid \Psi> \mid \nu > = \sum_\nu \ \mid \nu ><\nu \mid \ \Psi> ={\bf I} \mid \Psi> . \nonumber \end{equation} Thus we see that completeness, together with orthonormality of the basis, can be expressed in the closure form \begin{equation} \sum_\nu \ \mid \nu><\nu\mid ={\bf I} , \end{equation} with ${\bf I}$ the unit operator in the Hilbert space. With closure (e.g. 
a CON basis), we can now see that the expectation value breaks into a sum of the form \begin{eqnarray} <\alpha \mid \Omega \mid \alpha > &=& \sum_\nu \sum_{\nu'} <\alpha \mid \nu> <\nu\mid \Omega \mid \nu'>< \nu' \mid \alpha> \nonumber \\ &=& \sum_\nu \omega_\nu < \nu \mid \alpha><\alpha \mid \nu> = \sum_\nu \omega_\nu \textit{P}_\nu^\alpha . \nonumber \end{eqnarray} Here $\textit{P}^\alpha_\nu = < \nu \mid \alpha><\alpha \mid \nu> = \mid< \nu \mid \alpha>\mid^2$ is the positive real probability of the state $\mid \alpha>$ being in the eigenstate $\mid \nu>$. Hence we see that the quantum average or expectation value is a sum, over all possible values $\nu$, of that probability times the associated eigenvalue $\omega_\nu$. That is the characteristic of a quantum average. As the next step towards the spin density matrix, consider the case that we have an ensemble of such quantum systems. Each system is considered not to have quantum interference with the other members of the ensemble. That situation can be realized by the ensemble being located at separate sites with non-overlapping localized wave packets, or by a low-density beam, i.e. well-separated particles in the beam. This allows us to take a classical average over the ensemble. Suppose that the first member of the ensemble is produced in the state $\mid \alpha >$, the next in $\mid \alpha' >$, etc. The ensemble average is then a simple classical average \begin{equation} <\Omega> = \frac{\sum_{\alpha} <\alpha \mid \Omega \mid \alpha > {\bf P}_\alpha}{\sum_{\alpha} {\bf P}_\alpha}, \end{equation} where ${\bf P}_\alpha$ is the probability that a particular state $\alpha$ appears in the ensemble. Summing over all possible states of course yields $\sum_{\alpha} {\bf P}_\alpha=1$. The above expression is a combination of a classical ensemble average with the quantum mechanical expectation value. 
It contains the idea that each member of the ensemble interferes only with itself quantum mechanically and that the ensemble involves a simple classical average over the probability distribution of the ensemble. We are close to introducing the density matrix. This is implemented by using closure and rearranging. Consider \begin{equation} \sum_{\alpha} <\alpha \mid \Omega \mid \alpha > {\bf P}_\alpha = \sum_{\alpha}\ \sum_{m m'} <\alpha \mid m > < m \mid \Omega \mid m'><m' \mid \alpha>\ {\bf P}_\alpha \,, \end{equation} where $\mid m >$ denotes any CON basis. Now rearrange the above to \begin{equation} \sum_{\alpha} <\alpha \mid \Omega \mid \alpha > {\bf P}_\alpha = \sum_{m m'} \sum_{\alpha}\ <m'\mid \alpha><\alpha \mid m> \, {\bf P}_\alpha < m \mid \Omega \mid m'> \,, \end{equation} and then define the {\bf density operator} by \begin{equation} \rho \equiv \sum_{\alpha}\ \mid \alpha><\alpha \mid {\bf P}_\alpha \end{equation} and the associated density matrix in the CON basis $\mid m>$ as $<m \mid \rho \mid m'> = \sum_{\alpha}\ < m \mid \alpha> <\alpha \mid m'> {\bf P}_\alpha$. We often refer to either the density operator or the density matrix simply as the ``density matrix," albeit one acts in the Hilbert space and the other is an explicit matrix. 
The ensemble average can now be expressed as a ratio of traces~\footnote{The trace ${\rm Tr}$ is defined as the sum of the diagonal matrix elements of an operator, where a CON basis is used.} \begin{equation} <\Omega>= \frac{{\rm Tr}[ \rho \Omega]}{{\rm Tr}[ \rho ]}, \end{equation} which entails the properties that \begin{eqnarray} {\rm Tr} [\rho] &=& \sum_m <m\mid \rho\mid m> = \sum_\alpha\ {\bf P}_\alpha \sum_m <\alpha \mid m><m\mid \alpha>\nonumber \\ &=& \sum_\alpha\ {\bf P}_\alpha <\alpha \mid \alpha> = \sum_\alpha\ {\bf P}_\alpha=1, \end{eqnarray} and \begin{eqnarray} {\rm Tr} [\rho \Omega] &=& \sum_{m m'} <m\mid \rho \mid m'><m'\mid \Omega\mid m> \nonumber \\ &=& \sum_\alpha \sum_{m m'} {\bf P}_\alpha <\alpha\mid m'> <m' \mid \Omega\mid m><m\mid \alpha> \nonumber \\ &=&\sum_\alpha {\bf P}_\alpha <\alpha\mid \Omega \mid \alpha> , \end{eqnarray} which returns the original ensemble average expression. \subsection{Properties of the Density Matrix} We have defined the density operator by a sum involving state labels $\alpha$ for the special case of a spin 1/2 system. The definition \begin{equation} \rho = \sum_{\alpha}\ \mid \alpha><\alpha \mid {\bf P}_\alpha \end{equation} is, however, a general one, if we interpret $\alpha$ as the label for the possible characteristics of a state. Several important general properties of a density operator can now be delineated. The density matrix is Hermitian, hence its eigenvalues are real. The density matrix is also positive definite, which means that all of its eigenvalues are greater than or equal to zero. This, together with the fact that the density matrix has unit trace, ensures that the eigenvalues are in the range [0,1]. 
To prove that the density matrix is positive definite, consider a basis $\mid \nu>$ which diagonalizes the density operator so that \begin{eqnarray} < \nu \mid \rho \mid \nu> &=& \mu_\nu \\ &=& \sum_\alpha {\bf P}_\alpha <\nu\mid \alpha><\alpha \mid \nu> = \sum_\alpha {\bf P}_\alpha \mid < \nu \mid \alpha>\mid^2 \geq 0. \nonumber \end{eqnarray} Here $\mu_\nu$ is the $\nu$th eigenvalue of $\rho$ and both factors in the final sum above are nonnegative quantities. Hence all of the eigenvalues of the density matrix are $\geq 0$ and the density matrix is thus positive definite. If one of the eigenvalues is one, all the others are zero. Another general property of the density matrix involves the special case of a pure state. If every member of the ensemble has the same quantum state, then only one $\alpha$ (call it $\alpha_0$) appears and the density operator becomes $\nobreak{\rho = \mid \alpha_0><\alpha _0 \mid}$. The state $\mid \alpha_0> $ is normalized to one and hence for a pure state $\rho^2 = \rho$. Using a basis that diagonalizes $\rho$, this result tells us that the eigenvalues satisfy $\mu_\nu ( \mu_\nu-1) =0$ and hence for a pure state one density matrix eigenvalue is 1, with all others zero. In general, an ensemble does not have all of its members in the same state, but has a mixture of possibilities as reflected in the probability distribution ${\bf P}_\alpha$. In general, as we show below, we have \begin{equation} \rho ^2 \leq \rho, \end{equation} with the equal sign holding for pure states. A simple way to understand this relationship is seen by transforming the density matrix to diagonal form, using its eigenstates to form a unitary matrix $U_\rho$. We have $U_\rho \rho U^\dagger_\rho = \rho_D,$ where $\rho_D$ is diagonal using the eigenstates of $\rho$ as the basis, e.g. $<\nu \mid \rho_D \mid \nu'>= \mu_\nu \delta_{\nu \nu'}$. Here $\mu_\nu$ again denotes the $\nu$th eigenvalue of $\rho$. 
We already know that the sum of all these eigenvalues equals 1 and that they are real and nonnegative. Since every eigenvalue is limited to be less than or equal to 1, we have $\mu_\nu^2\leq \mu_\nu$, for all $\nu$. Transforming that back to the original density matrix yields the result $\rho^2 \leq \rho$. Taking the trace of this result yields another test for the purity of the state: ${\rm Tr}[\rho^2] \leq {\rm Tr}[\rho]=1$. Examples of how to use this measure of purity will be discussed later. \subsubsection{Entropy and Fidelity} As an indication of the rich variety of functionals of $\rho$ that can be defined, let us examine the Von Neumann entropy and the fidelity. The Von Neumann entropy~\cite{Neumann}, $S[\rho]=-{\rm Tr} [\rho\; \log_2 \rho]$, is a measure of the degree of disorder in the ensemble. Its basic properties are: $S[\rho]=0$ if $\rho$ is a pure state, and $S[\rho]$ is maximal for completely disordered states ($S[\rho]=1$ for a single qubit). See later for an application to the Bell, GHZ, \& Werner states and also the {\it Tutorial.nb} notebook for simple illustrative examples. It is often of interest to compare two different density matrices that are alternate descriptions of an ensemble of quantum systems. One simple measure of such differences is the fidelity. Consider for example, two pure states \begin{equation} \rho = \mid \psi > <\psi \mid \hspace{.25in}\widetilde{\rho} = \mid \widetilde{ \psi } > < \widetilde{ \psi } \mid , \end{equation} and the associated overlap obtained by a trace method \begin{equation} {\rm Tr}[\ \rho\ \widetilde{\rho} \ ] = {\rm Tr}[ \ \mid \psi > < \psi \mid \ \widetilde{ \psi} > < \widetilde{ \psi} \mid \ ] = |< \psi \mid \widetilde{ \psi}> |^2 . \end{equation} Clearly this overlap equals one if the states and associated density matrices are the same and thus serves as a measure to compare states. This procedure can be generalized and applied to general density matrices. 
It is also written in a symmetric manner, with the general definition of fidelity being \begin{equation} F[ \rho, \widetilde{\rho} ] = {\rm Tr}[ \sqrt{ \sqrt{\tilde{\rho}} \ \rho \ \sqrt{\tilde{\rho} } \ } ] \ , \end{equation} which has the property of reducing to ${\rm Tr}[ \rho ]=1,$ for $\tilde{\rho}=\rho.$ It also yields $ |< \psi \mid \widetilde{ \psi}> | $ in the pure state limit. \subsubsection{Usage} {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} includes commands that produce the Purity and Entropy for a stipulated density matrix $\rho,$ {\bf {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Purity}[$\rho$], {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Entropy}[$\rho$],} and the Fidelity of one specified density matrix relative to another {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Fidelity}[$\rho_1,\rho_2$]. \subsubsection{ Composite Systems and Partial Trace} For a composite system, such as colliding beams, or an ensemble of quantum systems each of which is prepared with a probability distribution, the definition of a density matrix can be generalized to a product Hilbert space form involving systems of type A or B \begin{equation} \rho_{A B} \equiv \sum_{\alpha, \beta} {\bf P}_{\alpha, \beta} \mid \alpha \beta>< \alpha \beta \mid, \end{equation} where ${\bf P}_{\alpha, \beta}$ is the joint probability for finding the two systems with the attributes labelled by $ \alpha$ and $\beta.$ For example, $\alpha$ could designate the possible directions $\hat{n}$ of one spin-1/2 system, while $\beta$ labels the possible spin directions of another spin 1/2 system. One can always ask about the state of system A or B by summing over or tracing out the other system. 
For example the density matrix of system A is picked out of the general definition above by the following trace steps \begin{eqnarray} \rho_A &=& {\rm Tr}_B [ \rho_{A B} ] \nonumber \\ &=& \sum_{ \alpha, \beta} \ {\bf P}_{ \alpha, \beta} \mid \alpha> < \alpha\mid {\rm Tr}_B[ \ \mid \beta>< \beta \mid \ ] \nonumber \\ &=& \sum_{ \alpha} ( \sum_{ \beta} {\bf P}_{\alpha, \beta} ) \mid \alpha> < \alpha\mid \nonumber \\ &=& \sum_{\alpha} {\bf P}_{\alpha} \mid \alpha> < \alpha\mid . \end{eqnarray} Here we use the product space $ \mid \alpha \beta>\mapsto \mid \alpha> \mid \beta>$ and we define the probability for finding system A in situation $\alpha$ by \begin{equation} {\bf P}_{\alpha} =\sum_{ \beta} {\bf P}_{\alpha, \beta}. \end{equation} This is a standard way to get an individual probability from a joint probability. It is easy to show that all of the other properties of a density matrix still hold true for a composite system case. It has unit trace, it is Hermitian with real eigenvalues, etc. See later for application of these general properties to multi-qubit systems. \subsection{Comments about the Density Matrix} \subsubsection{Alternate Views of the Density Matrix} In the prior discussion, the view was taken that the density matrix implements a classical average over an ensemble of many quantum systems, each member of which interferes quantum mechanically only with itself. Another viewpoint, which is equally valid, is that a single quantum system is prepared, but the preparation of this single system is not pinned down. Instead all we know is that it is prepared in any one of the states labelled again by a generic state label $\alpha$ with a probability ${\bf P}_{\alpha}$. Despite the change in interpretation, or rather in application to a different situation, all of the properties and expressions presented for the ensemble average hold true; only the meaning of the probability is altered. 
Another important point concerning the density matrix is that the ensemble average (or the average expected result for a single system prepared as described in the previous paragraph) can be used to obtain these averages for all observables $\Omega$. Hence in a sense the density matrix describes a system and the system's accessible observable quantities. It then represents an honest statement of what we can really know about a system. On the other hand, in Quantum Mechanics it is the wave function that tells all about a system. Clearly, since a density matrix is constructed as a weighted average over bilinear products of wave functions, the density matrix has less detailed information about a system than is contained in its wave function. Explicit examples of these general remarks will be given later. To some authors, the fact that the density matrix has less content than the system's wave function is a reason to avoid its use. Others find the density matrix description of accessible information appealing. \subsubsection{ Classical Correlations and Entanglement} The density matrix for composite systems can take many forms depending on how the systems are prepared. For example, if distinct systems A \& B are independently produced and observed independently, then the density matrix is of product form $\rho_{AB} \mapsto \rho_A \otimes \rho_B,$ and the observables are also of product form $\Omega_{AB} \mapsto\Omega_A\otimes\Omega_B.$ For such an uncorrelated situation, the ensemble average factors \begin{equation} < \Omega_{AB}> = \frac{{\rm Tr}[ \rho_{AB}\Omega_{AB}]} {{\rm Tr}[ \rho_{AB}]}= \frac{{\rm Tr}[ \rho_{A}\Omega_{A}]}{{\rm Tr}[ \rho_{A}]} \frac{{\rm Tr}[ \rho_{B}\Omega_{B}]}{{\rm Tr}[ \rho_{B}]} \end{equation} as is expected for two separate uncorrelated experiments. 
This can also be expressed as having the joint probability factor ${\bf P}_{\alpha, \beta}\mapsto{\bf P}_{\alpha } {\bf P}_{ \beta},$ the usual probability rule for uncorrelated systems. Another possibility for the two systems is that they are prepared in a coordinated manner, with each possible situation assigned a probability based on the correlated preparation technique. For example, consider two colliding beams, A \& B, made up of particles with the same spin. Assume the particles are produced in matched pairs with common spin direction $\hat{n}.$ Also assume that the preparation of that pair in that shared direction is produced by design with a classical probability distribution ${\bf P}_{\hat{n}}.$ Each pair has a density matrix $ \rho_{\hat{n}} \otimes \rho_{\hat{n}}$ since they are produced separately, but their spin directions are correlated classically. The density matrix for this situation is then \begin{equation} \rho_{AB} = \sum_{\hat{n}} {\bf P}_{\hat{n }} \ \rho_{\hat{n}} \otimes \rho_{\hat{n}}. \end{equation} This is a ``mixed state" which represents classically correlated preparation, and hence any density matrix that takes on the above form can be reproduced by a setup using classically correlated preparations; it does {\it not} represent an entangled state, which is the essence of Quantum Mechanics. An entangled quantum state is described by a density matrix (or by its corresponding state vectors) that is not of, and cannot be transformed into, either of the two classical forms above; namely, the product or the mixed form. For example, a Bell state $\frac{1}{\sqrt{2}} (\mid 01 > + \mid 10>)$ has a density matrix \begin{equation} \rho= \frac{1}{2}( \mid 01> < 01 \mid + \mid 01> < 10 \mid + \mid 10> < 01 \mid + \mid 10> < 10 \mid \, ) \end{equation} that is not of simple product or mixed form. It is the prime example of an entangled state. 
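A quick numerical contrast (our pure-Python sketch, not package code) between this Bell-state $\rho$ and the classically mixed form $\frac{1}{2}(\mid 01><01\mid + \mid 10><10\mid)$ shows the difference in purity:

```python
import math

# Tr[rho^2], the purity test from the previous section
def tr_sq(rho):
    n = len(rho)
    return sum(rho[r][k] * rho[k][r] for r in range(n) for k in range(n))

# basis order |00>, |01>, |10>, |11>; Bell state (|01> + |10>)/sqrt(2)
bell = [0, 1 / math.sqrt(2), 1 / math.sqrt(2), 0]
rho_bell = [[bell[r] * bell[c] for c in range(4)] for r in range(4)]

# classically mixed counterpart: (|01><01| + |10><10|)/2
rho_mix = [[0, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 0.5, 0], [0, 0, 0, 0]]

assert abs(tr_sq(rho_bell) - 1) < 1e-12    # pure (entangled) state
assert abs(tr_sq(rho_mix) - 0.5) < 1e-12   # mixed state: Tr[rho^2] < 1
```

Purity alone does not prove entanglement, but it cleanly separates the pure Bell state from the mixed, classically correlated density matrix above.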
The basic idea of decoherence can be described by considering the above case with time dependent coefficients \begin{equation} \rho= \frac{1}{2}(a_1(t) \mid 01> < 01 \mid +a_2(t) \mid 01> < 10 \mid +a_2^*(t) \mid 10> < 01 \mid +a_3(t) \mid 10> < 10 \mid \, ) . \end{equation} If the off-diagonal terms $a_2(t)$ vanish, by attenuation and/or via time averaging, then the above density matrix does reduce to the mixed or classical form, which is an illustration of how decoherence leads to a classical state. \section{MULTI-QUBIT SYSTEMS} \label{sec4} The previous discussion, which focused on describing a single qubit, can now be generalized to multiple qubits. Consider the product space of two qubits both in the up state and denote that product state as $ \mid 0\ 0>= \mid 0> \mid 0>, $ which clearly generalizes to \begin{equation} \mid q_1\ q_2>= \mid q_1> \mid q_2>, \end{equation} where $q_1,q_2$ take on the values $0$ and $1$. This product is called a tensor product and is symbolized as \begin{equation} \mid q_1\ q_2>= \mid q_1>\otimes \mid q_2>, \end{equation} which generalizes to $n_q$ qubits \begin{equation} \mid q_1\ q_2\ \cdots\ q_{n_q}>= \mid q_1>\otimes \mid q_2> \otimes \cdots \otimes \mid q_{n_q}> . \end{equation} In {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}, the kets $\mid 0>,\mid 1>$ are invoked by the commands {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Ket}[0] and {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Ket}[1], as shown in Fig.~\ref{kets}, along with the kets $\mid \pm x>$, and $ \mid \pm y>$. Also shown in that figure are the results for forming the tensor products of the kets for two and three qubits, as described next. 
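In column-vector form, the tensor product of kets is simply a Kronecker product of the component vectors. A minimal sketch of ours (in Python rather than the package's Mathematica):

```python
# Kronecker product of two state vectors: |u> (x) |v>
def kron_vec(u, v):
    return [a * b for a in u for b in v]

ket0, ket1 = [1, 0], [0, 1]        # |0> and |1>

# |01> = |0> (x) |1> occupies index 1 of the 4-component vector
assert kron_vec(ket0, ket1) == [0, 1, 0, 0]

# |101> occupies index 5 (binary 101) of the 8-component vector
assert kron_vec(ket1, kron_vec(ket0, ket1)) == [0, 0, 0, 0, 0, 1, 0, 0]
```

The position of the single nonzero entry is the decimal value of the bit string, which anticipates the computational-basis labeling rule discussed later.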
\begin{figure}[t] \fbox{\parbox{1.\textwidth}{ \includegraphics[width=8pc]{ket12} \hspace*{0.5cm} \includegraphics[width=12pc]{ket123} \hspace*{0.5cm} \includegraphics[width=8pc]{ketv123} }} \caption{Simple examples of tensor products of two and three kets.} \protect\label{kets} \end{figure} \subsection{Multi-Qubit Operators} One can also build operators that act in the multi-qubit spin space described above. Instead of a single operator, we have a set of separate Pauli operators acting in each qubit space. They commute because they refer to separate, distinct quantum systems. Hence, we can form the tensor product of the $n_q$ spin operators, which for two qubits has the following structure \begin{eqnarray} < a_1 \mid \sigma_i \mid b_1> < a_2 \mid \sigma_j \mid b_2>&=& < a_1 a_2 \mid \sigma^{(1)}_i \sigma^{(2)}_j \mid b_1 b_2> \nonumber \\ &=& < a_1 a_2 \mid \sigma^{(1)}_i \otimes \sigma^{(2)}_j \mid b_1 b_2>\,, \end{eqnarray} which defines what we mean by the tensor product $\sigma^{(1)}_i \otimes \sigma^{(2)}_j$ for two qubits. The generalization is immediate \begin{equation} \sigma^{(1)}_i \otimes \sigma^{(2)}_j \otimes \sigma^{(3)}_k \otimes \sigma^{(4)}_l \otimes \cdots \ . \end{equation} The corresponding steps in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} are shown in Fig.~\ref{multiqubops}. For large numbers of qubits a recursive method has been developed (see, {\it Qdensity.m} and {\it Tutorial.nb}), which involves specifying the ``Length" L$= n_q$ of the qubit array and an array of length L that specifies the Pauli components ${i,j,k,l, \cdots}$. For example, if $i=1, j=0$ there is a $\sigma_x$ in qubit 1 space and a unit operator $\sigma_0$ acting in qubit 2 space. The multi-qubit spin operator is called ${\rm SP}[L, \{i,j,k,l, \cdots\} ]$. Examples in Fig.~\ref{multiqubops} include operator tensor products generated directly using the $\otimes$ notation. 
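The recursive construction underlying SP can be sketched in pure Python (our illustration of what the package's Mathematica command SP[L, \{...\}] computes, not its actual code):

```python
# the four Pauli matrices sigma_0 ... sigma_3
PAULI = [[[1, 0], [0, 1]],
         [[0, 1], [1, 0]],
         [[0, -1j], [1j, 0]],
         [[1, 0], [0, -1]]]

# Kronecker product of two matrices
def kron(A, B):
    p, q = len(B), len(B[0])
    return [[A[r // p][c // q] * B[r % p][c % q]
             for c in range(len(A[0]) * q)] for r in range(len(A) * p)]

# tensor product of Pauli components, built recursively qubit by qubit
def sp(indices):
    M = PAULI[indices[0]]
    for i in indices[1:]:
        M = kron(M, PAULI[i])
    return M

# sigma_2 (x) sigma_3, matching the SP[2, {2, 3}] output shown below
expected = [[0, 0, -1j, 0], [0, 0, 0, 1j],
            [1j, 0, 0, 0], [0, -1j, 0, 0]]
assert sp([2, 3]) == expected
assert len(sp([1, 0, 3])) == 8     # three qubits give an 8x8 matrix
```

Each additional qubit doubles the matrix dimension, which is why a recursive Kronecker product is the natural way to build these operators for large arrays.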
\begin{figure}[t] \fbox{\parbox{0.7\textwidth}{\includegraphics[width=10cm]{multiqubops}}} \caption{Multi-qubit operators in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}} \protect\label{multiqubops} \end{figure} \subsubsection{Usage} {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} includes a multiqubit spin operator {\bf SP[L,\{$a_1$,$a_2$,..,$a_L$\}] } built from $L$ Pauli spin operators of components $a_1$,$a_2$,..,$a_L.$ A sample construction is: \begin{eqnarray} {\rm In[1] }&:=& {\rm \bf {SP}[2,\{2,3\}]} \nonumber \\ {\rm Out[1]}&:=& \left( \begin{array}{llll} \ \ 0 &\ \ 0 & -i &\ \ 0 \\ \ \ 0 & \ \ 0 & \ \ 0 &\ \ i \\ \ \ i & \ \ 0 &\ \ 0 & \ \ 0 \\ \ \ 0 & -i & \ \ 0 & \ \ 0 \end{array} \right) \nonumber \end{eqnarray} which is equivalent to the tensor product \begin{eqnarray} {\rm In[2] }&:=& {\rm \bf \sigma _2 \otimes \sigma _3} \nonumber \\ {\rm Out[2]}&:=& \left( \begin{array}{llll} \ \ 0 &\ \ 0 & -i &\ \ 0 \\ \ \ 0 & \ \ 0 & \ \ 0 &\ \ i \\ \ \ i & \ \ 0 &\ \ 0 & \ \ 0 \\ \ \ 0 & -i & \ \ 0 & \ \ 0 \end{array} \right) \nonumber \end{eqnarray} The advantage of this command is that it can readily construct large space tensor products. \subsection{General Multi-Qubit Operators} The tensor products of the $n_q$ Pauli spin operators provide a complete basis for expressing any operator. This remark is similar to, and indeed equivalent to, the statement that the $n_q$ ket product space is a CON basis. With that remark, we can expand any $n_q$ operator as \begin{equation} \Omega = \sum_{\bf a} {\mathcal C}_{ \bf a} \; \sigma^{(1)}_{a_1} \otimes \sigma^{(2)}_{a_2} \otimes \sigma^{(3)}_{a_3} \otimes \cdots \otimes \sigma^{(n_q)}_{a_{n_q}} = \sum_{\bf a} {\mathcal C}_{ \bf a} \ {\rm SP}[n_q,{\bf a}] \end{equation} where the sum is over all possible values of the array ${\bf a}:\{ a_1,a_2,a_3,\cdots, a_{n_q} \}$. 
Here, the multi-qubit spin operator is denoted by {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont SP}$[n_q,{\bf a}]$, which is the notation used in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}. The coefficient ${\mathcal C}_{\bf a}$ can be evaluated for any given $\Omega$ from the overall trace \begin{equation} {\mathcal C}_{\bf a} = \frac{1}{2^{n_q}} {\rm Tr}[\, \Omega\ .{\rm SP}[n_q,{\bf a}]\, ]. \end{equation} Because of the efficacy of Mathematica 5.2, the total trace can be evaluated rapidly. This set of coefficients characterizes the operator $\Omega$. \subsubsection{Partial Traces} The advantage of expanding a general operator in the Pauli operator basis is that partial traces can now be generated by manipulating the above coefficients. A partial trace involves tracing out parts of a system; for example, consider the partial trace over qubit two for a three qubit operator \begin{equation} {\rm Tr}_2 [ \sigma^{(1)}_i \otimes \sigma^{(2)}_j \otimes \sigma^{(3)}_k]= 2 \delta_{j 0} \ \sigma^{(1)}_i \otimes \sigma^{(3)}_k\ . \end{equation} Recall that ${\rm Tr}[\sigma_i]$ is zero unless we have the unit matrix $\sigma_0$ in which case the trace is two. Of course, one could trace out systems 2 and also 1, and then \begin{equation} {\rm Tr}_{1 2} [ \sigma^{(1)}_i \otimes \sigma^{(2)}_j \otimes \sigma^{(3)}_k]= 2 \delta_{i 0}\ 2 \delta_{j 0}\ \sigma^{(3)}_k\ . \end{equation} The subscript on the {\rm Tr} symbol indicates which qubit operators are being traced out. Note in this case the number of qubits in the result is reduced to $n_q-2$, where 2 is the length of the subscript array in ${\rm Tr}_{1 2}$. Clearly, the trace reduces the rank~\footnote{The rank is the number of qubits $n_q$.} of the operator by the number of qubits traced out. Now we can apply these simple ideas to construct the partial trace of a general operator $\Omega$. 
Using the linearity of the trace, we have \begin{equation} {\rm Tr}_{\bf t} [ \Omega ] =\sum_{\bf a} {\mathcal C}_{ \bf a}\ {\rm Tr}_{\bf t}[\sigma^{(1)}_{a_1} \otimes \sigma^{(2)}_{a_2} \otimes \sigma^{(3)}_{a_3}\cdots \ \sigma^{(n_q)}_{a_{n_q}}], \end{equation} where the array ${\bf t}:\{t_1 t_2 \cdots \}$ indicates only those qubits that are to be traced out. For example, ${\bf t}:\{2 5 \}$ indicates that only qubits 2 and 5 are traced out. The procedure for taking a partial trace of a general operator is to determine the total coefficient ${\mathcal C}_{\bf a}$ for all of the array ${\bf a}:\{a_1 a_2 \cdots a_{n_q} \},$ except for the entries corresponding to the traced out qubit, for which we need only the $a_j=0$ part if, say, we trace out the $j$th qubit. From the resultant coefficients, we obtain a reduced set of coefficients, reduced by the number of traced-out qubits. That reduced coefficient is then used to construct the reduced space operator, with a multiplier of 2 included for each traced out qubit. This expansion, reduction, reconstruction procedure might seem complicated, but it has been implemented very efficiently using the power of Mathematica 5.2. See {\it Qdensity.m} for the explicit construction procedure (which is rather compact). The command used in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} is {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont PTr }[ {\bf t}, $\Omega$] where the trace out of the general operator $\Omega$ is specified by the array $ {\bf t}$. Examples of the partial traces are in Fig.~\ref{traceout1}. \begin{figure}[t] \fbox{\parbox{0.8\textwidth}{\includegraphics[width=10cm]{traceout1}}} \caption{Taking partial traces with {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}} \protect\label{traceout1} \end{figure} \subsubsection{Usage} {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} includes several commands for taking partial traces. 
One is {\bf PTr[\{$q_1$,$q_2$,...,$q_M$\},$\Omega$]}, where the array $q_1$,$q_2$,...,$q_M$ stipulates the space to be traced out. See {\it Tutorial.nb} and Fig.~\ref{traceout1} for examples of these commands. \subsection{Multi-Qubit Density Matrix} The multi-qubit density matrix is our prime example of an operator that we examine in various ways, including taking partial traces. Just as in the prior discussion, a general density matrix can be expanded in a Pauli spin operator basis \begin{equation} \rho = \sum_{\bf a} \ {\mathcal C_\rho}_{ \bf a}\ \ {\rm SP}[n_q,{\bf a}], \end{equation} where the coefficient ${\mathcal C_\rho}_{ \bf a}$ is real since the density matrix and the Pauli spin tensor product {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont SP}$[n_q,{\bf a}]$ are Hermitian. Taking a partial trace follows the rules discussed earlier. Examples are presented in Figs.~\ref{ptracerho} and~\ref{Bellreduce}. \begin{figure}[t] \fbox{\parbox{0.7\textwidth}{\includegraphics[width=8cm]{ptr0}}} \caption{Partial Traces of multi-qubit Pauli operators} \protect\label{ptracerho} \end{figure} In these examples, we give the case of three qubits reduced to two and then to one. The general expansion for these cases takes on a simple and physically meaningful form and is therefore worth examining. For one qubit, the above expansion is of the traditional form \begin{equation} \rho_1 = \frac{1}{2} [ \ {\bf 1} + \vec{P}_1\cdot \vec{\sigma}], \end{equation} which involves the three numbers contained in the vector $\vec{P}_1$, also known as the polarization of the ensemble. A $2\times2$ Hermitian density matrix has 4 real parameters, which is reduced by one by the ${\rm Tr}[\rho_1]=1$ normalization. Thus the polarization vector is a complete parametrization of a single qubit. For a pure state, the magnitude of the polarization vector is one; whereas, the general constraint $\rho^2 \leq \rho$ implies that $\mid P_1 \mid \leq 1$. 
A graph of that vector thus lies within a unit sphere, called the Bloch sphere. The physical meaning of $\vec{P}_1$ is that it is the average polarization of an ensemble, which is made clear by forming the ensemble average of the Pauli spin vector: \begin{equation} < \vec{\sigma}> = \frac{{\rm Tr} [ \rho_1 \vec{\sigma} ]}{ {\rm Tr}[\rho_1]} \equiv \vec{P}_1. \end{equation} Now consider two qubits. The Pauli basis is $\sigma_i \otimes \sigma_j$, and hence the two qubit density matrix has the form \begin{eqnarray} \rho_{1 2} &=& \frac{1}{4} [ \ {\bf 1} + \vec{P}_1 \cdot \vec{\sigma}_1 \otimes {\bf 1} + {\bf 1} \otimes \vec{\sigma}_2 \cdot \vec{P}_2 + \sigma_{1 i} \otimes \sigma_{2 j} T_{i , j} ] \nonumber \\ &=& \frac{1}{4} [ \ {\bf 1} + \vec{P}_1 \cdot \vec{\sigma}_1 + \vec{P}_2 \cdot \vec{\sigma}_2 + \vec{\sigma}_{1} \cdot \overleftrightarrow{ T} \cdot \vec{\sigma}_{2} ]. \nonumber \end{eqnarray} This involves two polarization vectors, plus one $3\times 3$ tensor polarization $\overleftrightarrow{ T}$~\footnote{In the tensor term the sum extends over the $i,j=1,2,3$ components.}, which comes to 15 parameters, indeed the correct number for a two qubit system: $ 2^2 \times 2^2 -1$~\footnote{We see that the number of parameters in an $n_q$ qubit density matrix is thus $ 2^{n_q} \times 2^{n_q}-1=2^{ 2 n_q} -1 .$}.
The physical meaning is again an ensemble average polarization vector for each qubit system, plus an ensemble average spin correlation tensor \begin{eqnarray} < \vec{\sigma}_1 > &=& \frac{{\rm Tr} [ \rho_{1 2}\ \vec{\sigma}_1 \otimes {\bf 1}_2 ]}{ {\rm Tr}[\rho_{1 2}]}\ \equiv \vec{P}_1, \nonumber \\ < \vec{\sigma}_2 > &=& \frac{{\rm Tr} [ \rho_{1 2}\ {\bf 1}_1 \otimes \vec{\sigma}_2 ]}{ {\rm Tr}[\rho_{1 2}]} \equiv \vec{P}_2, \nonumber \\ < \sigma_{1 i} \sigma_{2 j} > &=& \frac{{\rm Tr} [ \rho_{1 2}\ \sigma_{1 i} \otimes \sigma_{2 j} ]}{ {\rm Tr}[\rho_{1 2}]} \equiv T_{i j}.\nonumber \\ && \end{eqnarray} To illustrate a partial trace, consider the trace over qubit 2 of the two qubit density matrix \begin{equation} {\rm Tr}_2[ \rho_{ 1 2}] = \rho_{ 1}=\frac{1}{2} [ {\bf 1} + \vec{P}_1 \cdot \vec{\sigma}_1 ], \end{equation} where we see that a proper reduction to the single qubit space results. Examples of the density matrix for the Bell states and their partial trace reduction to the single qubit operator are presented in Fig.~\ref{Bellreduce}. \begin{figure}[t] \fbox{\parbox{.8\textwidth}{ \includegraphics[width=10cm]{belltropy1} \includegraphics[width=10cm]{belltropy2} }} \caption{Example from {\it Tutorial.nb}.} \protect\label{Bellreduce} \end{figure} \subsection{Multi-Qubit States } The procedure for building multi-qubit states follows a path similar to our discussion of operators. First we build the computational basis states, which are eigenstates of the operator $\sigma^{(1)}_z \otimes \sigma^{(2)}_z\otimes \cdots \sigma^{(n_q)}_z$. These states are specified by an array ${\bf a}:\{ a_1, a_2 \cdots a_{n_q}\}$ of length $n_q$, where the entries are either one or zero. That collection of binary bits corresponds to a decimal number according to the usual rule $ a_1a_2 \cdots a_{n_q} \rightarrow a_1 2^{n_q-1} + a_2 2^{n_q-2} + \cdots + a_{n_q} 2^{0}$.
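The bit-string-to-column-vector correspondence just described can be sketched outside of Mathematica. The following plain-Python illustration (the helper names {\tt ket\_v} and {\tt decimal\_index} are ours, not QDENSITY commands) builds a basis state by repeated tensor products and checks the decimal rule:

```python
# Illustrative sketch: computational basis states via Kronecker products.
# Helper names are our own; this is not QDENSITY code.

def kron_vec(u, v):
    """Kronecker (tensor) product of two column vectors stored as lists."""
    return [a * b for a in u for b in v]

def ket(bit):
    """Single-qubit computational basis state |0> or |1>."""
    return [1, 0] if bit == 0 else [0, 1]

def ket_v(bits):
    """|a1 a2 ... an> as a 2^n column vector."""
    state = ket(bits[0])
    for b in bits[1:]:
        state = kron_vec(state, ket(b))
    return state

def decimal_index(bits):
    """Decimal equivalent a1*2^(n-1) + a2*2^(n-2) + ... + an*2^0."""
    n = len(bits)
    return sum(b * 2 ** (n - 1 - i) for i, b in enumerate(bits))

# |011> has a single 1 at position 3 (counting from 0 at the top)
print(ket_v([0, 1, 1]))          # -> [0, 0, 0, 1, 0, 0, 0, 0]
print(decimal_index([0, 1, 1]))  # -> 3
```

The single nonzero entry sits at the row given by the decimal equivalent of the bit string, exactly as in the rule above.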
The corresponding product state $ \mid a_1 a_2 \cdots a_{n_q}> \equiv\ \mid a_1 > \otimes \mid a_2> \otimes \cdots \mid a_{n_q}> $ can be constructed using the command {\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont KetV[\{$a_1$, $a_2$, ..\}]}. Any single computational basis state consists of a column vector with all zeros except for a single one at the position, counting down from the top, given by its decimal equivalent. Examples of the construction of multiqubit states in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} are given in Fig.~\ref{kets}. This capability allows one to use {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} without invoking a density matrix approach, which is often desirable to reduce the space requirements imposed by a full density matrix description. \subsubsection{Usage} {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} includes commands for one qubit ket vectors in the computational and in the x- and y-basis ${\bf Ket, KetX, KetY},$ and also multiqubit product states using the command {\bf KetV[vec]}. An example of its use is \begin{eqnarray} {\rm In[1] }&:=& {\rm \bf \text{KetV}[\{0,1,1\}] } \nonumber \\ {\rm Out[1]}&:=& {\bf \left( \begin{array}{l} 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{array} \right) }\nonumber \end{eqnarray} which is equivalent to \begin{eqnarray} {\rm In[1] }&:=& {\rm \bf (\text{Ket}[0]\otimes \text{Ket}[1])\otimes \text{Ket}[1] } \nonumber \\ {\rm Out[1]}&:=& {\bf \left( \begin{array}{l} 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{array} \right) }\nonumber \end{eqnarray} \section{CIRCUITS \& GATES} \label{sec5} Now that we can construct multi-qubit operators and take the partial trace, we are ready to examine the operators that correspond to logical gates for single and multi-qubit circuits. These gates form the basic operations that are part of the circuit model of QC.
We will start with one qubit operators in a one qubit circuit and then go on to one qubit operators acting on selected qubits within a multi-qubit circuit. Then two qubit operators in two and multi-qubit situations will be presented. \subsection{One Qubit Gates} \subsubsection{NOT} The basic operation {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont NOT } is simply represented by the $\sigma_x$ matrix since $\nobreak{\sigma_x \mid 0> = \mid 1 >}$ and $ \sigma_x \mid 1 > = \mid 0>.$ \subsubsection{The Hadamard} For the Hadamard, we have the simple relation \begin{equation} {\mathcal H} = \frac{ \sigma_x + \sigma_z}{\sqrt{2}} \rightarrow \frac{1}{\sqrt{2} } \left( \begin{array}{lcr} 1 && 1\\ 1 && -1 \end{array}\right) , \end{equation} which can also be understood as a rotation about the $\hat{\eta} =\frac{ \hat{x} + \hat{z}}{\sqrt{2}}$ axis by $\gamma=\pi$ since \begin{equation} R= e^{-i\frac{\gamma}{2} \vec{\sigma}\cdot \hat{\eta} } = \cos \frac{ \pi}{2}\;\sigma_0 - i \sin\frac{ \pi}{2}\; \vec{\sigma}\cdot \hat{\eta} \rightarrow -i \frac{ \sigma_x + \sigma_z}{\sqrt{2}} \label{rot} \, . \end{equation} The Hadamard plays an important role in QC by generating the qubit state from initial spin up or spin down states, i.e. \begin{equation} {\mathcal H} \mid 0 > = \frac{\mid 0 > + \mid 1>}{\sqrt{2}} \qquad {\mathcal H} \mid 1 > = \frac{\mid 0 > - \mid 1>}{\sqrt{2}} . \end{equation} Having a Hadamard act in a multi-qubit case involves operators of the type $ {\mathcal H}\otimes {\bf 1}\otimes{\mathcal H},$ in which Hadamards act on qubits 1 and 3 only. The command for this kind of operator in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} is {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Had}[$n_q$,Q] where $n_q$ is the total number of qubits and the array $Q:{q_1,q_2,...}$ of length $n_q$ indicates which qubits are and are not acted on by a Hadamard.
The rule used is: if $ q_i>0$, then the $i$th qubit is acted on by a Hadamard, whereas $q_j=0$ designates that the $j$th qubit is acted on by a unit 2$\times$2 operator. For example, {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Had}[3,\{1,0,1\}] has a Hadamard acting on qubits 1 and 3 and a unit 2$\times$2 acting on qubit 2, which is the case given above. To get a Hadamard acting on all qubits, include all qubits in Q, e.g., use Q=\{1,1,1,....\}. Thus, an operator {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont HALL}[L]={\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Had}[L,\{1,1,1....\}] is also implemented, where the array of 1's has length $L$, the total number of qubits. Another {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} command {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont had}[$n_q$ ,q] is for a Hadamard acting on one qubit q out of the full set of $n_q$ qubits. So {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} facilitates the action of a one qubit operator in a multi-qubit environment. \subsubsection{Usage} {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} includes several Hadamard commands. For single qubit cases use either \(\pmb{ \mathcal{H} }\) or {\bf had[1,1]}. For a Hadamard acting on a single qubit within a set of $L$ qubits use {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont had}[L,q]; for a set of Hadamards acting on selected qubits use {\bf {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Had}[L, \{ $ 0,1,0,1 \cdots$ \} ],} and for Hadamards acting on all $L$ qubits use {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont HALL}[L]. These are demonstrated in the tutorial. \subsubsection{Rotations} One can use the rotation operator $R$ to produce a state in any direction. A rotation {\it about} an axis $\hat{\eta}$ is given in Eq.~(\ref{rot}).
For special cases, such as rotations about the $\hat{x},\hat{y}$ or $\hat{z}$ axis with $\gamma=\pi,$ the expanded form reduces to $-i \sigma_x,-i \sigma_y$ and $-i \sigma_z,$ respectively. For a general choice of rotation, one can use the ``MatrixExp" command directly, or use the spinor rotation matrix for rotation {\it to} angles $\theta,\phi.$ For a multi-qubit circuit, the rotation operator for, say, qubits 1 and 3 can be constructed as $R_{\gamma_1} \otimes {\bf 1}\otimes R_{\gamma_2} \otimes {\bf 1}\otimes \cdots ,$ with associated rotation axes. Examples from {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} are given in Fig.~\ref{genrots}. \begin{figure}[t] \fbox{\parbox{.8\textwidth}{\includegraphics[width=10cm]{nicerot}}} \caption{Example of multiqubit rotation using {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}.} \protect\label{genrots} \end{figure} \subsubsection{Usage} Rotation commands for rotations about the x-, y- or z-axis by an angle $\theta$ are included in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} : {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont RotX}[$\theta$], {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont RotY}[$\theta$], {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont RotZ}[$\theta$]. In addition, {\bf Rotqbit[v,t]} builds the matrix corresponding to a rotation around a general axis v by an angle t. \subsection{Two Qubit Gates} To produce a quantum computer, which relies on quantum interference, one must create entangled states. Thus the basic step of two qubits interacting must be included. The interaction of two qubits can take many forms depending on the associated underlying dynamics. It is helpful in QC to isolate certain classes of interactions that can be used as logical gates within a circuit model.
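Before turning to two-qubit gates, the one-qubit embedding just described can be summarized in a short plain-Python sketch (our own helper names, not QDENSITY syntax) that builds Hadamards on selected qubits by Kronecker products:

```python
# Illustrative sketch of embedding one-qubit gates in a multi-qubit space,
# in the spirit of Had[nq, Q]; this is not QDENSITY code.
import math

def kron(A, B):
    """Kronecker product of two matrices stored as lists of lists."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]      # Hadamard
I2 = [[1, 0], [0, 1]]      # one-qubit identity

def had(nq, Q):
    """Tensor product with H on the qubits flagged 1 in Q, identity elsewhere."""
    op = H if Q[0] else I2
    for q in Q[1:]:
        op = kron(op, H if q else I2)
    return op

# H on qubits 1 and 3 of a 3-qubit register gives an 8 x 8 operator
assert len(had(3, [1, 0, 1])) == 8

# Hadamards on every qubit turn |00> into the uniform superposition
psi = matvec(had(2, [1, 1]), [1, 0, 0, 0])
print([round(a, 3) for a in psi])  # -> [0.5, 0.5, 0.5, 0.5]
```

The same pattern with rotation matrices in place of {\tt H} reproduces the $R_{\gamma_1} \otimes {\bf 1}\otimes R_{\gamma_2} \otimes \cdots$ construction above.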
\subsubsection{CNOT} The most commonly used two-qubit gate is the controlled-not ({\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT }) gate. The logic of this gate is summarized by the expression $$ {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT } \mid c, t > = \mid c, t \oplus c>,$$ where $c=0,1$ is the control bit and $t=0,1$ is the target bit. In a circuit diagram the $\bullet$ indicates the control qubit and the $\oplus$ indicates the target qubit. \begin{equation} \Qcircuit @C=1em @R=0.5em @!R { &\qw &\ctrl{1}& \qw & \qw \\ &\qw &\targ &\qw & \qw } \nonumber \end{equation} The final state of the target is denoted as ``$t \oplus c$'' where the $\oplus$ addition is understood to be modulo 2. Thus, the gate has the logical role of the following changes (control bit first) $ \mid 0 0>{\small \mapsto} \mid 0 0>; \mid 0 1>{\small \mapsto} \mid 0 1>;\nobreak{ \mid 1 0>{\small \mapsto}\mid 1 1>;} \nobreak{ \mid 1 1>{\small \mapsto} \mid 1 0> .}$ All of this can be simply stated using projection and spin operators as \begin{equation} {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT }[ c , t] = \mid 0>_c < 0\mid\otimes {\bf I }_t + \mid 1>_c < 1\mid \otimes \sigma^t_x , \end{equation} with $c$ and $t$ denoting the control and target qubit spaces. The {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT }, which is briefly expressed as ${\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT } = {\mathcal P}_0 \otimes {\bf I } + {\mathcal P}_1 \otimes \sigma_x ,$ is called the controlled-not gate since {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont NOT } $\equiv \sigma_x.$ A matrix form for this operator acting in a two qubit space is \begin{equation} {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT } = \left( \begin{array}{rrrrrrrrrr} 1 &&& 0 &&& 0 &&& 0\\ 0 &&& 1 &&& 0 &&& 0\\ 0 &&& 0 &&& 0 &&& 1\\ 0 &&& 0 &&& 1 &&& 0\\ \end{array}\right) .
\label{cnot} \end{equation} The rows \& columns are ordered numerically as:\ $ 00,01,10,11.$ The {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT } gate is used extensively in QC in a multi-qubit context. Therefore, {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} gives a direct way to embed a two-qubit {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT } into a multi-qubit environment. The command is {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT }$[n_q,c,t]$ where $n_q$ is the total number of qubits and $c$ and $t$ are the control and target qubits respectively, as in the following examples: If the number of qubits is 6, and a {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT } acts with qubit 3 as the control and qubit 5 as the target, the operator (which is a $2^6\times2^6$ matrix) is invoked by the command ${\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT }[6,3,5].$ The command ${\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT }[6,5,3]$ has 6 qubits, with qubit 5 the control and 3 the target. The basic case in Eq.(\ref{cnot}) is therefore just ${\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT }[2,1,2].$ \subsubsection{CPHASE} Other two qubit operators can now be readily generated. The {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CPHASE } gate, which plays an important role in the cluster model of QC, is simply a controlled $\sigma_z$ \begin{equation} {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CPHASE }[ c , t] = \mid 0>_c < 0\mid\otimes {\bf I }_t + \mid 1>_c < 1\mid \otimes \sigma^t_z , \end{equation} which in two qubit space has the matrix form \begin{equation} {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CPHASE } = \left( \begin{array}{rrrrrrrrrr} 1&&& 0&&& 0&&&0\\ 0&&& 1&&& 0&&&0\\ 0&&& 0&&& 1&&&0\\ 0&&& 0&&& 0&&&-1\\ \end{array}\right) .
\end{equation} The multi-qubit version is ${\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CPHASE }[n_q, c, t],$ with the same rules as for the ${\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT }$ gate. \subsubsection{Other Gates} The generation of other gates, such as swap gates~\footnote{See the {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} command {\rm\bf \usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Swap }.} and the controlled-$i \sigma_y$ (also known as a {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CROT } gate), is now a clear extension of the prior discussion. The swap gate swaps the content of two qubits. It can be decomposed into a chain of {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT } gates: \begin{equation} \Qcircuit @C=1em @R=0.5em @!R { \lstick{|\Psi_1\rangle} &\qw &\ctrl{1}& \qw & \qw & \targ &\qw& \qw & \ctrl{1} &\qw&\rstick{|\Psi_2\rangle} \\ \lstick{|\Psi_2\rangle} &\qw &\targ &\qw & \qw & \ctrl{-1} &\qw& \qw & \targ &\qw&\rstick{|\Psi_1\rangle} } \end{equation} Another example is ${\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CROT }$ \begin{equation} {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CROT }[ c , t] = \mid 0>_c < 0\mid\otimes {\bf I }_t + \mid 1>_c < 1\mid \otimes i \, \sigma^t_y , \end{equation} which, in two qubit space, has the matrix form \begin{equation} {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CROT } = \left( \begin{array}{rrrrrrrrrr} 1 &&& 0 &&& 0 &&& 0\\ 0 &&& 1 &&& 0 &&& 0\\ 0 &&& 0 &&& 0 &&&1\\ 0 &&& 0 &&& -1 &&& 0\\ \end{array}\right) . \end{equation} A multi-qubit version ${\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CROT }[n_q, c, t],$ with the same rules as for the ${\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT }$ gate, can easily be generated by a modification of {\it Qdensity.m}.
Indeed, the general case of a controlled-$\Omega,$ where $\Omega$ is any one-qubit operator, is now clearly \begin{equation} C\Omega[ c , t] = \mid 0>_c < 0\mid\otimes {\bf I }_t + \mid 1>_c < 1\mid \otimes \, \Omega^t , \end{equation} with corresponding extensions to matrix and multi-qubit renditions. \subsubsection{Usage} {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} includes the following two-qubit operators, acting between qubits $c$ (control) and $t$ (target) embedded in a system of $L$ qubits: {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT }[L,c,t], {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CPHASE }[L,c,t], {\rm\bf \usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont ControlledX }[L,c,t], {\rm\bf \usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont ControlledY }[L,c,t] and {\rm\bf \usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Swap }[L,$q_1$,$q_2$]. A generic two qubit operator within a multiqubit system involving operators $Op1 $ and $Op2 $ is {\bf TwoOp}[L,$q_1$,$q_2$,Op1,Op2]. \subsection{Three Qubit Gates} The above procedure can be generalized to three qubit operators. The most important three qubit gate is the Toffoli~\cite{Nielsen} gate, which has two control bits that determine whether a unit or a NOT ($\sigma_x$) operator acts on the third (target) bit. The projection operator version of the Toffoli is simply \begin{equation} {\rm {\rm\bf \usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Toffoli } } \equiv{\mathcal P}_0 \otimes {\mathcal P}_0 \otimes {\bf 1} + {\mathcal P}_0 \otimes {\mathcal P}_1 \otimes {\bf 1} + {\mathcal P}_1 \otimes {\mathcal P}_0 \otimes {\bf 1} + {\mathcal P}_1 \otimes {\mathcal P}_1 \otimes \sigma_x, \end{equation} which states that the third qubit is flipped only if the first two (control) qubits are both 1.
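The controlled-$\Omega$ construction above translates directly into code. As a rough plain-Python illustration (the helper names are ours, not QDENSITY commands), the projector form yields {\rm CNOT}, {\rm CPHASE} and the Toffoli in a few lines:

```python
# Illustrative sketch: controlled gates from projection operators.
# C-Omega = |0><0| (x) 1 + |1><1| (x) Omega, control first, target second.

def kron(A, B):
    """Kronecker product of two matrices stored as lists of lists."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

P0 = [[1, 0], [0, 0]]   # |0><0|
P1 = [[0, 0], [0, 1]]   # |1><1|
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]    # sigma_x
Z = [[1, 0], [0, -1]]   # sigma_z

def controlled(Omega):
    """Two-qubit controlled-Omega from the projector decomposition."""
    return add(kron(P0, I2), kron(P1, Omega))

CNOT = controlled(X)
CPHASE = controlled(Z)
print(CNOT)  # -> [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

# Toffoli: flip the target only when both controls are in |1>
I4 = kron(I2, I2)
P11 = kron(P1, P1)
TOFFOLI = add(kron(sub(I4, P11), I2), kron(P11, X))
```

Checking a column of {\tt TOFFOLI} confirms that $\mid 110>$ is mapped to $\mid 111>$ while all states with a control in $\mid 0>$ are left unchanged.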
For a multi-qubit system, the {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} command {\rm\bf \usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Toffoli }[$n_q$ ,$q_1$, $q_2$, $q_3$] returns the Toffoli operator with $q_1$ and $q_2$ as control qubits and $q_3$ as the target qubit within the full set of $n_q$ qubits. The Toffoli gate can be specialized or reduced to lower gates and is a universal gate. \subsubsection{Usage} {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} includes a generic three qubit operator within a multiqubit system involving operators $Op1,$ $Op2, $ and $Op3 $ : {\bf ThreeOp}[L,$q_1$,$q_2$,$q_3$,Op1,Op2,Op3]. The Toffoli gate is a special case and is invoked by the command {\rm\bf \usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Toffoli }[L,c,c,t], where $c,c,t$ specifies the two control qubits and the one target qubit out of the set of $L$ qubits. \section{SPECIAL STATES} \label{sec6} As a prelude to discussing QC algorithms, it is useful to examine how to produce several states that are part of the initialization of a quantum computer. These are the uniform superposition, the two-qubit Bell~\cite{Bell}, three-qubit GHZ~\cite{GHZ}, and Werner~\cite{Werner} states. \subsection{Uniform superposition} In many QC processes, the first step is to produce an initial state of the $n_q$ qubits that is a uniform superposition of all of its possible computational basis states. It is the initialization of this superposition that allows a QC to address many questions simultaneously and is often referred to as the ``massively parallel'' feature of a quantum computer.
The steps start with a state of all spin-up $ \mid 0 0 0 0 \cdots >$, then every qubit is acted on by a Hadamard ${\mathcal H}\otimes{\mathcal H}\otimes{\mathcal H}\otimes \dots,$ which is done by the {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} command {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont HALL}[$n_q$]. Thus each up state is replaced by ${\mathcal H} \mid 0> = \frac{\mid 0> + \mid 1 >}{\sqrt{2}}$, and we have the uniform superposition \begin{equation} \mid \Psi >={\rm HALL}[n_q] \mid 0 0 \cdots 0> = \frac{1}{2^{n_q/2}} \sum_{x=0}^{2^{n_q}-1} \mid x > , \end{equation} where $x$ is the decimal equivalent to all possible binary numbers of length $n_q$. An example of this process, including the associated density matrices, is in Fig.~\ref{uniform}. \begin{figure}[t] \fbox{\parbox{.8\textwidth}{\includegraphics[width=8cm]{uniform}}} \caption{Construction of a uniform four qubit state.} \protect\label{uniform} \end{figure} \subsection{Bell States} The singlet and triplet states are familiar in QM as the total spin zero and one states, with zero spin projection ($M=0$). They are the basic states that enter into the EPR discussion and they are characterized by their ``entanglement.'' Bell introduced two more combinations, so that for two spin 1/2 systems the four Bell states are: \begin{eqnarray} \mid B_{ 0 0 }>&=& \frac{1}{\sqrt{2}}\, (\mid 0 0 > + \mid 1 1 >) \nonumber \\ \mid B_{ 0 1 }>&=& \frac{1}{\sqrt{2}}\, (\mid 0 1 > + \mid 1 0 >) \nonumber \\ \mid B_{ 1 0 }>&=& \frac{1}{\sqrt{2}}\, (\mid 0 0 > - \mid 1 1 >) \nonumber \\ \mid B_{ 1 1 }>&=& \frac{1}{\sqrt{2}}\, (\mid 0 1 > - \mid 1 0 >) , \end{eqnarray} or in one line $ \mid B_{ a b }> = \frac{1}{\sqrt{2}}\, (\mid 0 b > + (-1)^a \mid 1 \bar{b} >)$, where $\bar{b}$ is the NOT[$b$] operation.
A circuit that produces these states, starting from the state $\mid a b>$ ($ a,b = 0,1$), consists of a Hadamard on qubit one, followed by a {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT }. The {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} version is thus:\\ B [a\_, b\_] :={\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT }[2,1,2] . {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont Had}[2,\{1,0\}] . (Ket[a]$\otimes$Ket[b]). The density matrix version involves defining the unitary transformation $U\equiv {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT }[2, 1,2] . {\rm Had}[2, \{1,0\}]$ and an initial density matrix $\rho^I_{a b} \equiv \mid a b > < a b \mid$, then evolving to the density matrix for each of the Bell states \begin{equation} \rho^{Bell}_{ a b } = U \cdot \rho^I_{a b} \cdot U^\dagger. \end{equation} In Fig.~\ref{Bell} part of this process taken from {\it Tutorial.nb} is shown. The tutorial includes a demonstration that the Bell states have zero polarization (as is obvious from their definition), and a simple diagonal form for the associated tensor polarization $\overleftrightarrow{T}.$ Another useful application shown in the tutorial is that taking the partial traces of the Bell state density matrices yields non-pure single qubit density matrices, and that the associated von Neumann entropy, defined by $\nobreak{S[\rho] = - {\rm Tr}[\rho \;\log_2 \rho]}$, is zero for the Bell states, but 1 for the single qubit density matrices $\rho_1$ and $\rho_2$. Thus each qubit is in a more chaotic state, which physically means it has zero average polarization.~\footnote{Since many different state vectors can yield a net zero average polarization, it is clear that the density matrix stores less information than a state vector, albeit realistic statistical information.} This property is an indication of the entanglement of the Bell states.
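The Bell-state construction and its partial trace can be mimicked in a few lines of plain Python (an illustrative sketch with our own helper names, not QDENSITY code):

```python
# Illustrative sketch: |B_00> from CNOT.(H (x) 1)|00>, its density matrix,
# the reduced single-qubit density matrix, and the resulting entropy.
import math

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
I2 = [[1, 0], [0, 1]]
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

# |B_00> = U|00> is the first column of U = CNOT . (H (x) 1)
U = matmul(CNOT, kron(H, I2))
psi = [row[0] for row in U]                  # (|00> + |11>)/sqrt(2)

# rho = |psi><psi| (the amplitudes are real here)
rho = [[psi[i] * psi[j] for j in range(4)] for i in range(4)]

def ptrace_qubit2(rho):
    """Trace out qubit 2 of a two-qubit density matrix."""
    return [[rho[2 * i][2 * j] + rho[2 * i + 1][2 * j + 1] for j in range(2)]
            for i in range(2)]

rho1 = ptrace_qubit2(rho)  # -> (1/2) * identity: zero polarization
# rho1 is diagonal here, so its eigenvalues are its diagonal entries
entropy = -sum(p * math.log2(p) for p in (rho1[0][0], rho1[1][1]))
print(round(entropy, 3))  # -> 1.0, one full bit of disorder per reduced qubit
```

The reduced matrix comes out as $\frac{1}{2}{\bf 1}$ with unit entropy, matching the behavior demonstrated in {\it Tutorial.nb}.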
\begin{figure}[t] \fbox{\parbox{.8\textwidth}{\includegraphics[width=10cm]{bell}}} \caption{Example from {\it Teleportation.nb}} \protect\label{Bell} \end{figure} \subsection{GHZ States} Three qubit states that are similar in spirit to the Bell states were introduced by D. M. Greenberger, M. A. Horne, and A. Zeilinger~\cite{GHZ}. The basic GHZ state is \begin{equation} \mid \Psi> = \mid {\rm GHZ}> = \frac{1}{\sqrt{2}} \, ( \, \mid 0 0 0> + \mid 1 1 1>), \end{equation} which may be written: \begin{equation} \mid GHZ>=U_{GHZ} \mid 000> \end{equation} with $U_{GHZ}={\rm CNOT}[3,1,2].{\rm CNOT}[3,1,3].{\rm had}[3,1]$, which corresponds to the following circuit: \begin{equation} \Qcircuit @C=1em @R=0.5em @!R { \lstick{|0\rangle} &\qw &\gate{H} &\qw & \ctrl{1} & \qw & \ctrl{2} &\qw \\ \lstick{|0\rangle} &\qw &\qw &\qw & \targ & \qw & \qw &\qw\\ \lstick{|0\rangle} &\qw &\qw &\qw & \qw & \qw & \targ &\qw } \nonumber \end{equation} A complete set of eight GHZ states can be produced by the step \begin{equation} U_{\rm GHZ} \mid abc> = \mid {\rm GHZ}_{abc}> = \frac{1}{\sqrt{2}} ( \mid 0 b c > + (-1)^a \mid 1 \bar{b} \bar{c}> ) . \end{equation} For all eight of these three qubit states, the associated density matrix $\rho_{1 2 3}^{abc}=\mid {\rm GHZ}_{abc}>< {\rm GHZ}_{abc} \mid$ can be formed and is seen in {\it Tutorial.nb} to have a simple structure. Taking partial traces to generate the two qubit $\rho_{ 1 2},\rho_{ 1 3},\rho_{ 2 3}$ and single qubit $\rho_{ 1 },\rho_{ 2 },\rho_{ 3 }$ density matrices, we see that for these states every qubit has zero polarization and a simple structure for the pair and three qubit correlation functions. In addition, the entropy of these GHZ states is zero and the sub-entropies of the qubit pairs and single qubits are all 1, corresponding to maximum disorder.
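The circuit $U_{GHZ}$ above can be checked by a direct plain-Python sketch (our own helper names; not QDENSITY code) that embeds each gate in the 3-qubit space and applies the product to $\mid 000>$:

```python
# Illustrative sketch: U_GHZ = CNOT[3,1,2] . CNOT[3,1,3] . had[3,1] on |000>.
import math

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
P0 = [[1, 0], [0, 0]]
P1 = [[0, 0], [0, 1]]

# Gates embedded in the 3-qubit space (control on qubit 1)
HAD1 = kron(kron(H, I2), I2)                                  # had[3,1]
CNOT12 = kron(add(kron(P0, I2), kron(P1, X)), I2)             # CNOT[3,1,2]
CNOT13 = add(kron(kron(P0, I2), I2), kron(kron(P1, I2), X))   # CNOT[3,1,3]

U = matmul(CNOT12, matmul(CNOT13, HAD1))
ghz = matvec(U, [1, 0, 0, 0, 0, 0, 0, 0])
# amplitude 1/sqrt(2) on |000> and |111>, zero elsewhere
print([round(a, 3) for a in ghz])
```

The state comes out with equal amplitudes on $\mid 000>$ and $\mid 111>$ only, as required.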
A sample GHZ realization in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} is given in Fig.~\ref{GHZ}. \begin{figure}[t] \fbox{\parbox{.8\textwidth}{\includegraphics[width=10cm]{ghzexp}}} \caption{Construction of a GHZ state. Example from {\it Tutorial.nb}} \protect\label{GHZ} \end{figure} \subsection{Werner States} Another set of states was proposed by Werner~\cite{Werner}. They are defined in terms of a density matrix: \begin{equation} \rho_W = \lambda \rho_B + (1- \lambda) \rho_u \otimes \rho_u,\end{equation} where $0 \leq \lambda \leq 1 $ is a real parameter. The limit $\lambda=1$ yields a completely entangled Bell state density matrix $\rho_B = \mid B_{a b} ><B_{a b} \mid,$ whereas lower values of $\lambda$ reduce the entanglement. The $\lambda=0$ limit gives a simple two qubit product $\rho_u \otimes \rho_u,$ where $\rho_u=\frac{ {\bf 1}}{2}$ is the density matrix for a single qubit with zero polarization, i.e. it corresponds to a chaotic limit for each qubit. Therefore, the parameter $\lambda$ can alter the original fully entangled Bell state by introducing chaos or noise, and the Werner state is accordingly called a state of noisy entanglement. The entropy of a two qubit Werner state as a function of $\lambda$ ranges from two~\footnote{This corresponds to an entropy per qubit of 1.} for $\lambda=0$ to zero for $\lambda=1.$ The entropy of the single qubit state is 1. See {\it Tutorial.nb} for a sample Werner {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} realization. \section{TELEPORTATION} \label{sec7} To understand QC teleportation, let us first consider classical teleportation, which entails only classical laws. For example, suppose Alice uses a laser beam to measure a vase's dimensions, shape, color and decoration. She therefore has a full binary description of the vase in 3-D.
Bob has all the material to make another vase and, upon receiving the file that Alice sends him by computer, is able to make an exact copy of the vase. There are only local operations (LO) (measuring and sending by Alice and reconstruction by Bob) and a classical communication (CC); so this is called a LOCC process. How does this differ from teleportation using Quantum Physics? In Quantum Mechanics, measurement affects the sample; as a result, after collecting the information to send to Bob, the original sample is no longer in its original state. In the classical case, one ends up with two identical copies. In the QC case, Bob has the only extant system. Another difference is that in the QC case, Alice and Bob share an entangled state, say an EPR or a Bell state. Alice entangles the original system with one member of the pair, then measures, and by LOCC sends Bob her result. By virtue of the shared entanglement, information is transferred both by the LOCC and by the quantum effect of sharing the entangled state. Some information is transmitted by a ``Quantum channel.'' Therefore, the information sent by computer is less than needed in the classical case, because it is supplemented by the Quantum transfer of entanglement. The strange nature of quantum teleportation is thus no stranger than the EPR/Bell effect, which has been affirmed experimentally. To understand these general remarks, let us use {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} to examine three cases. \subsection{One Qubit Teleportation} Suppose Alice has one qubit $q_1$ in an unknown state $\mid \Psi>= a_0 \mid 0> + a_1 \mid 1>,$ with an associated spin density matrix $\rho_0 \equiv \mid \Psi><\Psi \mid$. In {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}, such a state is generated randomly. Bob and Alice share a two qubit entangled state, which we take as one of the Bell states $B_{q_2, q_3}$.
This Bell state could be provided by an outside EPR purveyor, but for convenience let us assume that Bob produces the entangled pair and sends one member of the pair $\mid q_2>$ to Alice, as shown in Fig.~\ref{teleport1}. Alice then entangles her state $\mid \Psi>,$ using the inverse of the steps that Bob employed to produce the entangled pair, and then she measures the state of her $\mid \Psi> \otimes \mid q_2>,$ which yields a single number between zero and three (or one of the binary pairs $ 0 0; 0 1; 1 0; 1 1).$ Alice transmits by CC that number to Bob, who then knows what to do to his qubit $q_3$ in order to bring it to the state $\mid \Psi >,$ without making any measurements on it that would destroy that state. In the end, Alice no longer has a qubit in the original state, but by LOCC and shared entanglement she is happy to know that Bob has such a qubit, albeit not made of the original material. The only material transmitted is the single member of the entangled pair. \begin{figure}[tbh] \vspace{9pt} \includegraphics[]{1tele.eps} \caption{Schematic circuit for one qubit quantum teleportation.} \label{teleport1} \end{figure} In {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}, the following detailed steps are shown in the file {\it Teleportation.nb}: construction of an initial random qubit for Alice, Bob's entanglement process, Alice's entanglement, measurement and CC, and Bob's consequent actions. This process is described using the density matrix language. A rendition using the quantum states directly is easily generated. \subsection{Two Qubit Teleportation} \subsubsection{ Two EPR Teleportation} A similar process can be invoked to teleport an initially unknown state of two qubits $\mid \Psi> =\sum_{i,j=0,1}\ a_{ i j } \mid q_{1i} q_{2j} >$. In the special case that the unknown state is one of the Bell states, it can be transported using the procedure shown in Fig.~\ref{teleportEPR}.
Bob now prepares a three qubit entangled GHZ state using Hadamards and {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT } \!\!s and then sends one of the three qubits $q_3$ over to Alice, who entangles that one with her original $q_1,q_2$ qubits $\mid \Psi>_{12}\otimes \mid q_3>$ and then measures the state of her three qubits, with the result of a number between zero and seven (e.g. one of the binary results $000;001;010;011;100;101;110;111$). She transmits that decimal number to Bob by CC (a phone call, say), who then knows what to do to put his two qubits $q_4 ,q_5$ into the original state $\mid \Psi>.$ Again all the steps are presented in detail in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} in the file {\it Teleportation.nb}. \begin{figure}[htb] \vspace{9pt} \includegraphics[]{EPRt.eps} \caption{EPR teleportation using a GHZ entangled state.} \label{teleportEPR} \end{figure} \subsubsection{ General Two Qubit Teleportation} The two qubits can be in a more general state than in the above discussion, which was restricted to being one of the Bell states. In this case, Bob needs to entangle four qubits by a chain of Hadamard and {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT } gates as shown in Fig.~\ref{teleportGen2}. He then sends two qubits $q_3,q_4$ over to Alice who entangles her two with them. That is, she entangles $\mid \Psi>_{1 2} \otimes \mid q_3>\otimes \mid q_4>,$ then measures them, with a decimal number result between zero and 15, or a binary measurement of $ 0000; 0001;0010;0011; \cdots ;1111$. With that number, Bob knows what to do and places his two qubits into the original state $\mid \Psi>$. Again see {\it Teleportation.nb} for the detailed layout. \begin{figure}[tbh] \vspace{9pt} \includegraphics[]{2teleport.eps} \caption{Schematic circuit for two qubit quantum teleportation.} \label{teleportGen2} \end{figure} \begin{figure}[t!]
\vspace{9pt} \fbox{\parbox{1.\textwidth}{\includegraphics[width=10cm]{tele.eps}}} \caption{Example from {\it Teleportation.nb}} \label{fig:tele} \end{figure} \section{GROVER'S SEARCH} \label{sec8} Assume you have a set of items and you want to find a particular one in the set that has been marked beforehand. Let us further restrict the problem by saying that you are only allowed to ask yes/no questions, e.g.\ ``Is this item the item?'' In a disordered database with N items and one marked item, that problem would require on the order of N trials to find the marked item. Quantum mechanics allows the states of a system to exist in superpositions and thus in many cases permits one to parallelize the process in some sense. Grover~\cite{Grover} proposed an algorithm that lowers the number of trials needed to $O(\sqrt{N})$ by making clever use of interference and superposition. He based his ideas on an analogy to the multiple slit problem~\cite{Groverslit}. \subsection{The Oracle} Grover's search algorithm relies on the use of an Oracle. The idea underlying the Oracle is the following: the Oracle is a function that can recognize a solution to a problem although it may not know how to solve the problem. In that sense, a classical example could be a lock and a bunch of keys: the problem of finding the proper key that opens the lock illustrates the role of the lock as an Oracle. You select a key and try it; the Oracle tells you whether that was the correct key or not, but you cannot ask the lock to single out the key from the bunch. The essential difference between a classical and a quantum Oracle is that in the quantum case the Oracle can act on an input which is a superposition of all states. In our example that would mean that we can try a superposition of all the keys at the same time.
The above description of the role of an Oracle takes on the explicit form \begin{equation} {\bf ORACLE} \mid x>_N \mid y>_1 = \mid x >_N \mid y \oplus f(x) >_1, \end{equation} which involves an $N$ qubit and a single qubit product space and a specified function $f(x).$ The matrix form of the Oracle is \begin{equation} < x' \mid < y' \mid {\bf ORACLE} \mid x> \mid y> = \delta_{x' x} <y'\mid y \oplus f(x) >. \end{equation} Examples of this Oracle matrix for single and double marked items are given in detail in the Grover.nb notebook, where the ``inversion about the mean" process is also presented and explained in detail. \subsection{One marked item} A schematic description of the searching process is: \begin{equation} \Qcircuit @C=0.5em @R=0.2em @!R { &&&&&\multigate{2}{H^{\otimes n}} &\qw& \qw & \multigate{6}{G} &\qw &\qw &\multigate{6}{G}& \qw &\hdots &&\qw &\qw &\multigate{6}{G} & \qw &\qw \\ &&&\lstick{|0\rangle}& &\ghost{H^{\otimes n}} &\qw &\qw &\ghost{G} &\qw &\qw &\ghost{G} & \qw &\hdots &&\qw &\qw &\ghost{G} & \qw &\qw &&\rstick{\rm Out\; register}\\ &&& & &\ghost{H^{\otimes n}} &\qw &\qw &\ghost{G} &\qw &\qw &\ghost{G}& \qw &\hdots &&\qw &\qw &\ghost{G} & \qw & \qw \\ \vdots\\ &&&& &\qw &\qw &\qw &\ghost{G} &\qw &\qw &\ghost{G} & \qw &\hdots &&\qw &\qw &\ghost{G} & \qw &\qw\\ {\rm Oracle}&&&& &\qw &\qw &\qw &\ghost{G} &\qw &\qw &\ghost{G} & \qw &\hdots &&\qw &\qw &\ghost{G} & \qw&\qw \\ {\rm space} &&&& &\qw &\qw &\qw &\ghost{G} &\qw &\qw &\ghost{G} & \qw &\hdots &&\qw &\qw &\ghost{G} & \qw&\qw \\ } \nonumber \end{equation} The basic steps, which are detailed in {\it Grover.nb}, consist of applying a Hadamard to all qubits in the register while setting the Oracle to recognize the marked item. Then we need to construct the Grover operator and apply it a number of times. Finally we measure the output register which then will give the marked item with high probability. 
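The steps above can be sketched numerically at the matrix level. The following is a plain Python/NumPy fragment (not QDENSITY syntax); it uses the standard fact that, with the single ancilla qubit prepared in $\mid ->$, the Oracle defined above acts on the register as a sign flip of the marked basis state, and it builds the Grover operator $G$ as that phase flip followed by the ``inversion about the mean'' reflection.

```python
import numpy as np

n = 5                                   # qubits in the register
N = 2 ** n
marked = 11                             # arbitrary marked item

# With the ancilla in |->, the |y XOR f(x)> Oracle acts on the register
# as a phase flip of the marked basis state.
oracle = np.eye(N)
oracle[marked, marked] = -1.0

# Hadamards on |0...0> give the uniform superposition |s>; "inversion
# about the mean" is the reflection 2|s><s| - I.
s = np.full(N, 1.0 / np.sqrt(N))
G = (2.0 * np.outer(s, s) - np.eye(N)) @ oracle

state = s.copy()
iters = int(round(np.pi / 4 * np.sqrt(N)))    # ~ (pi/4) sqrt(N) applications
for _ in range(iters):
    state = G @ state

p_marked = state[marked] ** 2           # probability of measuring the marked item
```

For $n=5$ qubits this gives four Grover iterations and finds the marked item with probability close to one, which is the behavior studied in {\it Grover.nb}.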
The probability of success depends on the number of times the Grover operator has been applied, as can be studied in the notebook. {\it Grover.nb} contains two examples; the first one has only one marked item in the full database. The size of the database, given by the number of qubits, can be varied, together with the number of times the Grover operator is applied. \subsection{Two marked items} The second example includes two marked items in the database (and may be generalized to the case of M marked items). Of course, one needs to enlarge the register and Oracle space. \section{SHOR'S ALGORITHM} \label{sec9} Shor's factoring algorithm is the most celebrated quantum algorithm, partly due to its powerful use of quantum superposition and interference to tackle one of the problems upon which most of our secure bank transactions rely. The factoring algorithm, described in detail in the notebook {\it Shor.nb}, essentially first relates the problem of factoring a number, N, made up of two prime numbers, N=$p\times q$, to the problem of finding the period of a particular function $x^j \mod(N)$, where $x$ is a number smaller than N that is coprime\footnote{Two positive integers $a$ and $b$ are said to be coprime if they have no common factor other than 1, that is, their greatest common divisor is 1.} to N. Then finding the period is related to the computation of the Quantum Fourier Transform (QFT) (the analog of the discrete Fourier transform), for which a very effective quantum algorithm exists. Schematically, the procedure is the following: build two registers, 1 and 2, with $n_1$ and $n_2$ qubits each, initially set to $|0>$. Then Hadamard the first register; the state of the full system then reads: \begin{equation} \Psi = {1\over 2^{n_1/2}} \sum_{i=0}^{2^{n_1}-1} \mid i>_1 \otimes \mid 0>_2 \, .
\end{equation} Then we take a number $x$ smaller than N and coprime with it and load the second register with the function $x^i \mod(N)$, giving: \begin{equation} \Psi = {1\over 2^{n_1/2}} \sum_{i=0}^{2^{n_1}-1} \mid i>_1 \otimes \mid x^i \mod N>_2 \, . \end{equation} At this point, a measurement is performed on the second register and then one applies the QFT to the first register. From the value measured in the first register, one is able, with a certain probability, to factor the original number N. A detailed study of the performance of the algorithm, e.g.\ an analysis of the probability of success as a function of the register sizes, can be done within the notebook. A thorough theoretical description of the algorithm can be found in Refs.~\cite{Shor,Gerjuoy}. {\it Shor.nb} contains four slightly different approaches to the algorithm, mainly differing in the procedure used to compute the QFT. The most constructive case, and also the one appropriate when studying possible noise effects on parts of the circuit, is the density matrix one. There the QFT (see also {\it QFT.nb}) is obtained using unitary operators in the same way as it occurs experimentally. That full QFT treatment implies that this method is not practicable for heavy computing simulations, say when the number of qubits goes above 10. Then there are three other cases, two of them using the state vector expression for the QFT with and without explicit construction of both registers. Finally, the last example makes use of the discrete Fourier transform already implemented in Mathematica. This example is thus useful when the emphasis is on studying larger numbers to check probabilities of success, without concern for the actual quantum mechanical way of building the QFT. In Fig.~\ref{fig:shornb} a snapshot from {\it Shor.nb} is shown.
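The classical reduction wrapped around the quantum core is easy to state in code. The sketch below (plain Python; the function names are ours) replaces the register-and-QFT period estimation by a direct classical computation of the period $r$ of $x^j \mod N$, and then extracts factors of $N$ from $\gcd(x^{r/2} \pm 1, N)$:

```python
from math import gcd

def order(x, N):
    """Smallest r > 0 with x**r = 1 (mod N): the period the QFT step estimates."""
    r, y = 1, x % N
    while y != 1:
        y = (y * x) % N
        r += 1
    return r

def factor_from_period(N, x):
    """Classical reduction from the period of x^j mod N to factors of N."""
    assert gcd(x, N) == 1
    r = order(x, N)
    if r % 2 == 1:
        return None                        # odd period: pick another x and retry
    f1 = gcd(pow(x, r // 2, N) - 1, N)
    f2 = gcd(pow(x, r // 2, N) + 1, N)
    factors = sorted({f1, f2} - {1, N})    # drop the trivial divisors
    return factors or None

factors = factor_from_period(15, 7)        # period of 7^j mod 15 is r = 4
```

For $N=15$ and $x=7$ the period is $r=4$ and the recovered factors are $3$ and $5$; obtaining $r$ from the QFT is precisely the quantum part demonstrated in {\it Shor.nb}.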
\begin{figure}[t] \fbox{\parbox{1.\textwidth}{\includegraphics[width=10cm]{shor}}} \caption{Example from {\it Shor.nb}} \protect\label{fig:shornb} \end{figure} \section{CLUSTER MODEL} \label{sec10} An alternative to the circuit model for QC has been suggested in a series of recent papers~\cite{Cluster}. The basic idea of this approach is to start with an initial state that is highly entangled by virtue of two-qubit {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CPHASE } operations between nearest neighbors in a cluster of qubits. The {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CPHASE } operations could be generated, for example, by Ising model spin-spin interactions. Once an appropriate cluster is designed, a carefully planned set of single qubit measurements is made in various directions. The results of those measurements are passed on by classical communications, until one reaches a final qubit, or set of qubits, from which a result can be deduced once a local correction involving Pauli operators and the binary results of the measurement is invoked. This method is being developed, with the procedures for general algorithms still being formulated. It is however novel and promising, especially since it involves single qubit measurements, can generate gates without use of magnetic fields to rotate spins, and holds forth the promise of error stability. It does, however, require a large increase in the number of qubits. The fact that measurements collapse the qubits and essentially destroy the initial state is why this approach is called ``one-way computing.'' Of course, one can reconstruct the initial state and try again, which is typical of QC. One can also reuse qubits after measurement. It turns out that the modular nature of {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} allows one to use it to demonstrate and test some of the basic ideas of the cluster model of QC.
The first illustration is the basic transport process involving two qubits. The two qubits are initially in a $\mid \Psi>\otimes \mid + >$ state, where $\mid \Psi>$ is a general unknown one qubit state. Then a {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CPHASE } operator acts between the two qubits, followed by a measurement of qubit one in the $\mid +x> \equiv \mid +>$ direction with result $a=0$ or $1$. The second qubit proves to be in the state $\sigma_x^a H \mid \Psi>$, where $H$ is the Hadamard gate. It is simple to confirm this algebraically. It forms the basic building block of the cluster model. It is illustrated in {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} using a density matrix example. Another example of using {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,} for cluster model studies is the simplest {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT } gate, whose cluster model implementation from Ref.~\cite{Cluster} is shown in Fig.~\ref{ccnot}. \begin{figure}[t] \centering \includegraphics[width=5cm]{ccnot.eps} \caption{Cluster model of {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont CNOT } gate; see: {\it Cluster.nb} and~\cite{Cluster}.} \protect\label{ccnot} \end{figure} This example is worked out in detail in {\it Cluster.nb}. \section{CONCLUSION} \label{sec11} This simulation affords opportunities for many applications and extensions. The basic operations and manipulations are formulated in a modular manner and, as the illustrations and tutorial demonstrate, one can formulate and answer many important QC questions. Application to dynamical theories based on master equations, including environmental effects, is one challenge. Invoking and testing measures for entanglement and probing the role of noisy entanglement, imperfect gates, and of error correction protocols are other potential applications.
Extending the study to general types of measurements and to cluster model cases is also of considerable interest. Finally, the description of real experimental situations by suitable Hamiltonians and studying the stability of QC algorithms could be an important role for future study using {\rm\bf\usefont{T1}{phv}{n}{n}\fontsize{9pt}{9pt}\selectfont QDENSITY\,}. \begin{figure}[t] \fbox{\parbox{.8\textwidth}{\includegraphics[width=10cm]{clusterexp}}} \caption{Example from {\it Cluster.nb} showing preparation of the cluster state.} \protect\label{cluster} \end{figure} \newpage
% Source: https://arxiv.org/abs/1702.03051
\title{Density Functional Estimators with $k$-Nearest Neighbor Bandwidths}
\begin{abstract}
Estimating expected polynomials of density functions from samples is a basic problem with numerous applications in statistics and information theory. Although kernel density estimators are widely used in practice for such functional estimation problems, practitioners are left on their own to choose an appropriate bandwidth for each application in hand. Further, kernel density estimators suffer from boundary biases, which are prevalent in real world data with lower dimensional structures. We propose using the fixed-$k$ nearest neighbor distances for the bandwidth, which adaptively adjusts to local geometry. Further, we propose a novel estimator based on local likelihood density estimators, that mitigates the boundary biases. Although such a choice of fixed-$k$ nearest neighbor distances to bandwidths results in inconsistent estimators, we provide a simple debiasing scheme that precomputes the asymptotic bias and divides off this term. With this novel correction, we show consistency of this debiased estimator. We provide numerical experiments suggesting that it improves upon competing state-of-the-art methods.
\end{abstract}
\section{Introduction} \input{introduction} \section{Kernel Density Estimator with $k$-NN Bandwidth} \label{sec:kde} \input{section2} \section{Local Likelihood Density Estimator with $k$-NN Bandwidth} \label{sec:klnn} \input{section3} \section{Simulations} \label{sec:simul} \input{section4} \section{Discussion} \label{sec:disc} The problem of estimating integral functionals of densities has been studied for decades. The minimax lower bound for the convergence rate was established in~\cite{BM95}, and several approaches have been proposed to achieve the minimax optimal rate, including the Haar wavelet method~\cite{Kerk96}, Lepski's method~\cite{Mukh15} and ensemble methods~\cite{Moon16,BSY16}. It is unlikely that the proposed estimator will achieve the minimax rate. However, given its superior performance in the finite sample regime, especially for densities with sharp boundaries, understanding the convergence rate of the bias for the proposed $k$-NN bandwidth estimators is an interesting open problem. \input{supplementary} \bibliographystyle{plain} \subsection{Kernel Density Estimator with Locally Adaptive Bandwidth} Given $n$ i.i.d.\ samples $\{X_1, X_2, \dots, X_n\}$ drawn from a distribution $f_X(x)$, the standard {\em Kernel Density Estimator} (KDE) is defined for a bandwidth $h\in{\mathbb R}$ and a kernel function $K: \mathbb{R}^d \to \mathbb{R}^+$ that integrates to 1 as \begin{eqnarray} \label{eq:KDE} \widehat{f}_n^{{\rm (KDE)}}(x) &=& \frac{1}{nh^d} \sum_{i=1}^n K\left( \frac{X_i - x}{h} \right) \;. \end{eqnarray} Typical choices of $K$ include the Gaussian kernel $K(u) \propto \exp\{-\|u\|^2/2\}$, the uniform kernel $K(u) \propto \mathbb{I}\{\|u\| \leq 1\}$ and the Epanechnikov kernel $K(u) \propto (1-\|u\|^2) \mathbb{I}\{\|u\| \leq 1\}$. The consistency of KDE is known for global choices of $h$ (that do not change with the point $x$) in the range $h \to 0$ and $nh^d \to \infty$ as the number of samples $n$ goes to infinity \cite{Was06}.
Although typical analyses of KDE assume a fixed global bandwidth, in practice there is significant gain in a local, variable choice of bandwidth. For example, consider the case of a mixture of two Gaussian distributions (see Figure~\ref{fig:fig1}). A fixed bandwidth choice can be either too large in the low variance regime of $x$ (labeled by `o' in Figure~\ref{fig:fig1}) or too small in the large variance regime of $x$ (labeled by `x' in Figure~\ref{fig:fig1}). In real applications in high dimensions, such heterogeneity is prevalent. \begin{figure}[h] \begin{center} \includegraphics[width=.65\textwidth]{fig1} \end{center} \caption{An example of samples denoted by `x' for one of the mixtures and `o' for the other mixture under a mixture of two Gaussians. The pdf is shown in a solid black line. A fixed bandwidth does not work well for both the `x' samples and the `o' samples.} \label{fig:fig1} \end{figure} Previous work in~\cite{Rosen56,TW92} suggests using a locally adaptive bandwidth $h(x)$ which varies with $x$. One previously suggested choice of $h(x)$ is the distance between $x$ and its $k$-th nearest neighbor among $\{X_1, X_2, \dots, X_n\}$. This choice is referred to as the {\em $k$-NN bandwidth}. Just as the value of a fixed bandwidth $h$ trades off bias and variance, now the value of the integer $k$ also trades off between bias and variance. We note here that if the uniform kernel $K(u) \propto \mathbb{I}\{\|u\| \leq 1\}$ is combined with the $k$-NN bandwidth, then KDE reduces to the $k$-NN density estimator. In~\cite{TW92}, it was shown that if $k$ is a function of $n$ such that $k(n) \to \infty$ and $k(n)/n \to 0$ as $n$ goes to infinity, then the KDE with $k$-NN bandwidth is consistent. In the example above, the $k$-NN bandwidth adapts to the local geometry of the samples and suffers less from heterogeneity of the data compared to a fixed bandwidth. In this paper, we propose to use the $k$-NN bandwidth, but with a {\em fixed and small} $k$ in the range of $4 \sim 8$.
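To make the proposal concrete, here is a minimal Python sketch (ours, on an arbitrary test density) of the Gaussian-kernel KDE of \eqref{eq:KDE} evaluated with the $k$-NN bandwidth:

```python
import numpy as np

def kde_knn_bandwidth(x, samples, k=5):
    """Gaussian-kernel KDE at x (d = 1) with bandwidth h set to the distance
    from x to its k-th nearest neighbor among the samples."""
    n = len(samples)
    dists = np.abs(samples - x)
    h = np.partition(dists, k - 1)[k - 1]        # k-NN bandwidth rho_k(x)
    u = dists / h
    return np.sum(np.exp(-0.5 * u ** 2)) / (np.sqrt(2 * np.pi) * n * h)

rng = np.random.default_rng(0)
samples = rng.normal(size=2000)                  # f = N(0,1), so f(0) ~ 0.3989
fhat = kde_knn_bandwidth(0.0, samples, k=5)
```

With a fixed small $k$ the bandwidth is tiny, so the pointwise estimate is noisy and carries a multiplicative bias; it is the functional of the density, not the density itself, that we ultimately estimate.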
Such a choice, violating $k \to \infty$, results in an inconsistent density estimator. However, we propose pre-computing this {\em universal} asymptotic bias and de-biasing the resulting estimator. Precisely, we prove that if we plug the KDE with $k$-NN bandwidth into the resubstitution estimator of $J_{\alpha}(X)$, there will be a {\em multiplicative} bias which is {\em independent} of the underlying distribution, and hence can be precomputed and divided off from our estimate. \subsection{KDE based Estimator of $J_{\alpha}(X)$} As $J_{\alpha}(X) = {\mathbb E} \left[\, f^{\alpha-1}(X) \,\right]$, we propose a resubstitution estimator of the form \begin{eqnarray} \label{eq:resubstitute} {\widehat{J}}_{\alpha}(x) \; =\; \frac{1}{n} \sum_{i=1}^n (\widehat{f}_n(X_i))^{\alpha-1}\;, \end{eqnarray} where for the density estimate $\widehat{f}_n(X_i)$, we propose KDE in \eqref{eq:KDE} with $k$-NN bandwidth $h=\rho_{k,i}$: \begin{eqnarray} \label{eq:def_KDE} {\widehat{J}}_{\alpha}^{{\rm (KDE)}}(X) &=& \frac{1}{n } \sum_{i=1}^n \frac{1}{B_{k,d,\alpha,K}}\left( \frac{1}{n\rho_{k,i}^d} \sum_{j \in {\cal T}_{i,m}} K\left(\, \frac{X_j - X_i}{\rho_{k,i}}\,\right)\right)^{\alpha-1} \;, \end{eqnarray} where $\rho_{k,i}$ is the distance to the $k$-th nearest neighbor from sample $X_i$. Notice the extra multiplicative factor of $1/B_{k,d,\alpha,K}$. This is the de-biasing term that cancels the multiplicative asymptotic bias that is present in the simple resubstitution estimate that directly substitutes \eqref{eq:KDE} in \eqref{eq:resubstitute}. We show in the following theorem that the multiplicative bias $B_{k,d,\alpha,K}$ only depends on $k$, $d$, $\alpha$ and the choice of kernel $K$, and not on the underlying distribution $f_X(x)$. Hence, it can be pre-computed and divided off as explicitly written in \eqref{eq:def_KDE}. 
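A minimal Python rendering of the debiased estimator is as follows (ours; $d=1$, $\alpha=2$, Gaussian kernel, with $B_{5,1,2,K} \approx 1.0184$ taken as a precomputed constant, and summing over all other samples rather than only a truncated neighbor set, for simplicity):

```python
import numpy as np

def j_alpha_hat(samples, k=5, alpha=2, B=1.0184):
    """Debiased resubstitution estimate of J_alpha = E[f^(alpha-1)(X)] for d = 1.

    B is the precomputed multiplicative bias B_{k,d,alpha,K} for the Gaussian
    kernel with k=5, d=1, alpha=2; it is divided off at the end.
    """
    n = len(samples)
    vals = np.empty(n)
    for i in range(n):
        dists = np.abs(samples - samples[i])
        dists[i] = np.inf                          # exclude the point itself
        rho = np.partition(dists, k - 1)[k - 1]    # k-NN distance rho_{k,i}
        u = dists / rho
        fhat = np.sum(np.exp(-0.5 * u ** 2)) / (np.sqrt(2 * np.pi) * n * rho)
        vals[i] = fhat ** (alpha - 1)
    return vals.mean() / B

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
jhat = j_alpha_hat(x)    # target: J_2 = E[f(X)] = 1/(2 sqrt(pi)) for N(0,1)
```

On $2{,}000$ samples from $N(0,1)$ the estimate should land near the true value $J_2 = 1/(2\sqrt{\pi}) \approx 0.2821$.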
In the summation in \eqref{eq:def_KDE}, we only use the subset of $m = \lceil \log n \rceil$ nearest samples defined as ${\cal T}_{i,m}=\{j\in[n]\,:\, j\neq i \text{ and } \|X_i-X_j\|\leq \rho_{\lceil\log n \rceil,i} \}$. Such a truncation makes the estimator computationally more efficient and allows us to provide a sharp analysis of the asymptotic bias. If we want to include more samples in the computation, our analysis technique can immediately be generalized as long as $m=O(n^{{1/(2d)}-\varepsilon})$ for an arbitrarily small $\varepsilon > 0$. However, for a larger choice of $m$ such as $m = \Omega(n)$, those sample points that are further away have statistical properties that are significantly different from those that are closer, which requires new analysis techniques. The following shows that the asymptotic multiplicative bias $B_{k,d,\alpha,K}$ does not depend on the underlying $f_X(x)$, and hence can be computed beforehand and removed. \begin{thm} \label{thm:unbiased_KDE} Let $X_1, X_2, \dots, X_n \in \mathbb{R}^d$ be i.i.d.\ samples from a twice continuously differentiable pdf $f(x)$ such that ${\mathbb E} \left[\, |f(X)|^{\alpha-1}\,\right] < +\infty$, and let $K(u)$ be a kernel function such that $K(u) \leq C \|u\|^{-2d}$ for some constant $C > 0$. Then \begin{eqnarray} \lim_{n \to \infty} {\mathbb E} [{\widehat{J}}^{{\rm (KDE)}}_{\alpha}(X) ] &=& J_{\alpha}(X) \;. \end{eqnarray} Further, if ${\mathbb E}\left[\, |f(X)|^{2\alpha-2} \,\right]<+\infty$, then the variance of the proposed estimator is bounded by \begin{eqnarray} {\rm Var} [{\widehat{J}}^{{\rm (KDE)}}_{\alpha}(X) ] &=& O\Big( \frac{(\log n)^2}{n} \Big) \;. \end{eqnarray} \end{thm} This theorem shows the $L_1$ and $L_2$ consistency of the KDE based estimator of $J_{\alpha}(X)$. Conditional on $X_i = x$, the estimator is a function of the nearest neighbor statistics $Z_{\ell,i} = X^{(\ell)}_i - x$, where $X^{(\ell)}_i$ is the $\ell$-th nearest neighbor of $x$.
The key technical step of the proof is to make a connection between the nearest neighbor statistics and uniform order statistics, shown in Lemma \ref{lem:order_stat}. It is shown that the distances $\rho_{\ell,i} = \|Z_{\ell,i}\|$ jointly converge to the standardized uniform order statistics, and the directions $(X_{j_\ell} -X_i)/\|X_{j_\ell} -X_i\|$ converge to i.i.d.\ random variables drawn uniformly over the unit sphere in ${\mathbb R}^d$ (called Haar random variables), jointly with the distances as well. \begin{lemma}[Lemma 3.2.~\cite{GOV16}] \label{lem:order_stat} Let $E_1, E_2, \dots, E_m$ be i.i.d.\ standard exponential random variables and $\xi_1 , \xi_2, \dots, \xi_m$ be i.i.d.\ random variables drawn uniformly over the unit $(d-1)$-dimensional sphere in $d$ dimensions, independent of the $E_i$'s. Suppose $f$ is twice continuously differentiable and $x \in \mathbb{R}^d$ satisfies that there exists $\varepsilon > 0$ such that $f(a) > 0$, $\|\nabla f(a)\| = O(1)$ and $\|H_f(a)\| = O(1)$ for any $\|a - x\| < \varepsilon$. Then for any $m = O( \log n)$, we have the following convergence conditioned on $X_i = x$: \begin{eqnarray} \lim_{n \to \infty} d_{\rm TV} ( (c_d nf(x))^{1/d} (\, Z_{1,i}, \dots, Z_{m,i} \,) \;,\; (\, \xi_1 E_1^{1/d}, \dots , \xi_{m}(\sum_{\ell=1}^{m} E_\ell)^{1/d} \,) ) = 0 \;, \end{eqnarray} where $d_{\rm TV}(\cdot,\cdot)$ is the total variation distance and $c_d$ is the volume of the unit Euclidean ball in $\mathbb{R}^d$. \end{lemma} Given Lemma~\ref{lem:order_stat}, we show that the quantity $S = \sum_{j \in {\cal T}_{i,m}} K\left(\, (X_j - X_i)/\rho_{k,i}\,\right)$ used in the estimate~\eqref{eq:def_KDE} converges in distribution, and we can characterize the asymptotic distribution exactly using uniform order statistics.
For i.i.d.\ standard exponential random variables $E_1,E_2,\ldots,E_m$ and i.i.d.\ Haar random variables $\xi_1,\xi_2,\ldots,\xi_m$ in ${\mathbb R}^d$, we define, \begin{eqnarray} \tilde{S}^{(m)} \equiv \sum_{j=1}^{m} K\left(\, \frac{\xi_j (\sum_{\ell=1}^j E_{\ell})^{1/d}}{(\sum_{\ell=1}^k E_{\ell})^{1/d}}\,\right) \label{eq:S}\;, \end{eqnarray} and let ${\tilde{S}} = \lim_{m\to \infty} {\tilde{S}}^{(m)}$. We can show that if the kernel satisfies $K(u) \leq C \|u\|^{-2d}$ (which is fulfilled by all kernels with bounded support or exponentially decaying tails), the limit of ${\tilde{S}}$ exists and is related to the multiplicative bias term $B_{k,d,\alpha,K}$ in the resubstitution estimator of $J_{\alpha}(X)$ in \eqref{eq:def_KDE}: \begin{eqnarray} B_{k,d,\alpha,K} = {\mathbb E}\left[\, \left( \frac{c_d {\tilde{S}}}{\sum_{\ell=1}^k E_\ell} \right)^{\alpha-1} \,\right] \;. \label{eq:defBias_KDE} \end{eqnarray} where $c_d$ is the volume of the unit ball in ${\mathbb R}^d$. We provide a proof in Section~\ref{sec:proof_KDE}. Below is a table of $B_{k,d,\alpha, K}$ computed via numerical simulations, for the Gaussian kernel $K \propto \exp\{-\|u\|^2/2\}$ and some typical values of $k$, $d$ and $\alpha$. Here $1.0245(\pm 3)$ means the bias has empirical mean $\mu=10245\times 10^{-4}$ with confidence interval $3\times10^{-4}$. We run 1,000,000 trials with truncation of the summation at $m=5,000$ in these simulations. 
\begin{table}[h] \begin{center} \begin{tabular}{ c c | c | c | c | c | c | c |} \cline{3-8} & & \multicolumn{6}{|c|}{$k$} \\ \cline{3-8} & & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ \\ \cline{3-8} \hline \multicolumn{1}{ |c }{\multirow{2}{*}{$d=1$} } & \multicolumn{1}{ |c|| }{$\alpha=2$} & $1.0245(\pm 3)$ & $1.0184(\pm 3)$ & $1.0153(\pm 3)$ & $1.0132(\pm 2)$ & $1.0114(\pm 2)$ & $1.0098(\pm 2)$ \\ \cline{2-8} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c|| }{$\alpha=3$} & $1.1973(\pm 8)$ & $1.1564(\pm 7)$ & $1.1282(\pm 6)$ & $1.1078(\pm 6)$ & $1.0945(\pm 5)$ & $1.0835(\pm 5)$ \\ \cline{1-8} \hline \multicolumn{1}{ |c }{\multirow{2}{*}{$d=2$} } & \multicolumn{1}{ |c|| }{$\alpha=2$} & $0.9883(\pm 2)$ & $0.9897(\pm 2)$ & $0.9915(\pm 2)$ & $0.9930(\pm 1)$ & $0.9934(\pm 1)$ & $0.9943(\pm 1)$ \\ \cline{2-8} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c|| }{$\alpha=3$} & $1.0431(\pm 5)$ & $1.0342(\pm 4)$ & $1.0270(\pm 4)$ & $1.0226(\pm 3)$ & $1.0196(\pm 3)$ & $1.0175(\pm 3)$ \\ \cline{1-8} \hline \multicolumn{1}{ |c }{\multirow{2}{*}{$d=3$} } & \multicolumn{1}{ |c|| }{$\alpha=2$} & $0.9821(\pm 1)$ & $0.9856(\pm 1)$ & $0.9883(\pm 1)$ & $0.9900(\pm 1)$ & $0.9912(\pm 1)$ & $0.9920(\pm 1)$ \\ \cline{2-8} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c|| }{$\alpha=3$} & $0.9926(\pm 3)$ & $0.9935(\pm 2)$ & $0.9940(\pm 2)$ & $0.9954(\pm 2)$ & $0.9951(\pm 2)$ & $0.9955(\pm 2)$ \\ \cline{1-8} \end{tabular} \end{center} \caption{Numerical approximation of $B_{k,d,\alpha,K}$ for the Gaussian kernel.} \label{tbl:bias_KDE}\end{table} \subsection{KDE based R\'{e}nyi entropy estimator} Given the KDE based estimator for $J_{\alpha}(X)$, we propose the following estimator for the R\'{e}nyi entropy, \begin{eqnarray} && \widehat{H}^{{\rm (KDE)}}_{\alpha}(X) = \frac{1}{1-\alpha} \log \widehat{J}^{{\rm (KDE)}}_{\alpha}(X) \,\notag\\ &=& \frac{1}{1-\alpha} \left(\, \log \sum_{i=1}^n \left( \frac{1}{n\rho_{k,i}^d} \sum_{j \in {\cal T}_{i,m}} K\left(\, \frac{X_j - X_i}{\rho_{k,i}}\,\right)\right)^{\alpha-1} - \log n - \log B_{k,d,\alpha,K} \,\right) \;. \end{eqnarray} From the $L_2$ consistency of ${\widehat{J}}^{{\rm (KDE)}}_{\alpha}(X)$ and the fact that $\log(\cdot)$ is continuous on $\mathbb{R}^+$, we obtain the following corollary on the convergence of ${\widehat{H}}^{{\rm (KDE)}}_{\alpha}(X)$. \begin{coro} \label{cor:renyi_unbiased_KDE} Under the same assumptions as in Theorem~\ref{thm:unbiased_KDE}, the estimator ${\widehat{H}}^{{\rm (KDE)}}_{\alpha}(X)$ converges to $H_{\alpha}(X)$ in probability, as $n \to \infty$. \end{coro} \subsection{Local Likelihood Density Estimator} In this section, we propose the {\em local likelihood density estimator} (LLDE), introduced in~\cite{Loa96,HJ96}, as a generalization of KDE. In practice, the choice of a bandwidth is mostly left to the practitioner -- here we propose using the $k$-NN bandwidth for LLDE. Given a point $x$ and i.i.d.\ samples $\{X_1, X_2, \dots, X_n\}$, the LLDE is given by \cite{loader2006local,GOV16}: \begin{eqnarray} \label{eq:LLDE} \widehat{f}_n^{{\rm (LLDE)}}(x) &\equiv& \frac{S_0}{n(2\pi)^{d/2} h^d |\Sigma|^{1/2}} \exp\{-\frac{1}{2}\mu^T \Sigma^{-1} \mu\} \;, \end{eqnarray} where the quantities $S_0$, $S_1$, $S_2$ and $\mu$, $\Sigma$ are defined as follows, \begin{eqnarray} S_0 &\equiv& \sum_{j=1}^n e^{-\frac{\|X_j - x\|^2}{2 h^2}} \;, \label{eq:defS0}\\ S_1 &\equiv& \sum_{j=1}^n \frac{X_j - x}{h} \, e^{-\frac{\|X_j - x\|^2}{2 h^2}} \;,\label{eq:defS1}\\ S_2 &\equiv& \sum_{j=1}^n \frac{(X_j - x)(X_j - x)^T}{h^2} \, e^{-\frac{\|X_j - x\|^2}{2 h^2}} \;,\label{eq:defS2}\\ \mu &\equiv& \frac{S_1}{S_0} \;,\label{eq:defmu}\\ \Sigma &\equiv& \frac{S_2}{S_0} - \frac{S_1 S_1^T}{S_0^2} \;,\label{eq:defSigma} \end{eqnarray} and for the bandwidth, we propose using the $k$-NN distance: $h=\rho_{k,i}$.
LLDE can be viewed as a weighted local Gaussian density, where the Gaussian kernel $K((X_j -x)/h) \propto \exp\{-\|X_j-x\|^2/(2 h^2)\}$ is used to compute the weights of the samples. Locally, in the neighborhood of a sample point $X_i$, $\mu = S_1/S_0$ is the weighted sample mean and $\Sigma = S_2/S_0 - S_1S_1^T/S_0^2$ is the weighted sample variance. Notice that the KDE estimator with $k$-NN bandwidth at point $x$ can be written as $\widehat{f}_n^{{\rm (KDE)}}(x) = S_0/(n(2\pi)^{d/2}h^d)$. Compared with LLDE, KDE can be viewed as a weighted local Gaussian density where the mean is restricted to be $x$ and the variance is restricted to be identity. Therefore, LLDE is able to capture the local structure automatically, and hence can reduce the boundary bias if $x$ is near the boundary of the density. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.45\textwidth]{fig2_1} \put(-220,80){$ Y $} \put(-120,-15){ $X$} \hspace{0.9 cm} \includegraphics[width=0.45\textwidth]{fig2_2} \put(-225,80){$ Y$} \put(-120,-15){$X$} \end{center} \caption{Given samples from a joint Gaussian distribution as an example, consider approximating the local density near the blue point $x$ near the boundary of the distribution, using a Gaussian density with mean $x$ and unit variance (left) and a Gaussian density with local sample mean and covariance (right).} \label{fig_2} \end{figure} In Figure~\ref{fig_2}, the data are drawn from a highly correlated joint Gaussian distribution, where we want to estimate the density at the blue point $x$ near the boundary. On the left, the red contours show that KDE is a Gaussian density with mean $x$ and unit variance, while on the right the green contours correspond to a Gaussian density with the weighted sample mean and variance given by LLDE. We can see that LLDE fits the local structure better than KDE, capturing the fact that $x$ is at the boundary of the underlying density.
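At a single point the quantities \eqref{eq:defS0}--\eqref{eq:defSigma} are a few lines of code. The Python sketch below (ours; $d=1$, and with a fixed moderate bandwidth purely to keep the sanity check stable, whereas the text proposes $h=\rho_{k,i}$) evaluates \eqref{eq:LLDE} for Gaussian data, for which the local Gaussian fit is exact at the mode:

```python
import numpy as np

def llde(x, samples, h):
    """Local likelihood density estimate at x for d = 1 (the S0, S1, S2 form)."""
    n = len(samples)
    u = (samples - x) / h
    w = np.exp(-0.5 * u ** 2)                 # Gaussian kernel weights
    S0 = w.sum()
    mu = np.sum(u * w) / S0                   # weighted local mean,     S1 / S0
    Sigma = np.sum(u ** 2 * w) / S0 - mu ** 2 # weighted local variance
    return (S0 / (n * np.sqrt(2.0 * np.pi) * h * np.sqrt(Sigma))
            * np.exp(-0.5 * mu ** 2 / Sigma))

rng = np.random.default_rng(0)
samples = rng.normal(size=4000)
fhat = llde(0.0, samples, h=0.5)   # N(0,1) density at 0 is 1/sqrt(2 pi) ~ 0.3989
```

The estimate should come out close to $1/\sqrt{2\pi} \approx 0.3989$; swapping in the $k$-NN distance for $h$ gives the adaptive version used in the estimator below.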
\subsection{LLDE based Estimator of $J_{\alpha}(X)$} We substitute LLDE in the resubstitution estimator ${\widehat{J}}_{\alpha}(X) = (1/n) \sum_{i=1}^n (\widehat{f}_{n}(X_i))^{\alpha-1}$ to obtain the following $k$-Local Nearest Neighbor ($k$-LNN) estimator of the integral $J_{\alpha}(X)$, \begin{eqnarray} \label{eq:def_kLNN} {\widehat{J}}_{\alpha}^{(k-{\rm LNN})}(X) &=& \frac{1}{n B_{k,d,\alpha}} \sum_{i=1}^n \left( \frac{S_{0,i}}{n(2\pi)^{d/2}\rho_{k,i}^d |\Sigma_i|^{1/2}} \exp\{-\frac{1}{2} \mu_i^T \Sigma_i^{-1} \mu_i\}\right)^{\alpha-1} \;, \end{eqnarray} where $B_{k,d,\alpha}$ is again a multiplicative bias that depends on $k$, $d$ and $\alpha$, but not on the underlying distribution. Recall that $\rho_{k,i}$ is the distance between $X_i$ and its $k$-th nearest neighbor. The quantities $S_{0,i}$, $S_{1,i}$, $S_{2,i}$ and $\mu_i$, $\Sigma_i$ are defined from \eqref{eq:defS0}-\eqref{eq:defSigma} in the neighborhood of a sample point $x=X_i$, and with the choice of bandwidth $h=\rho_{k,i}$. Similar to the KDE based estimator~\eqref{eq:def_KDE}, only the subset of $m = \lceil \log n \rceil$ nearest samples ${\cal T}_{i,m}$ is used for computing these quantities, for the same reasons. The following theorem shows the $L_1$ and $L_2$ consistency of the $k$-LNN estimator of $J_{\alpha}(X)$ for a twice continuously differentiable density $f(x)$. \begin{thm} \label{thm:unbiased_kLNN} Let $X_1, X_2, \dots, X_n \in \mathbb{R}^d$ be i.i.d.\ samples from a twice continuously differentiable pdf $f(x)$ such that ${\mathbb E} \left[\, |f(X)|^{\alpha-1}\,\right] < +\infty$. Then \begin{eqnarray} \lim_{n \to \infty} {\mathbb E} [{\widehat{J}}^{(k-{\rm LNN})}_{\alpha}(X) ] &=& J_{\alpha}(X) \;. \end{eqnarray} If ${\mathbb E}\left[\, |f(X)|^{2\alpha-2} \,\right]<+\infty$, then the variance of the proposed estimator is bounded by \begin{eqnarray} {\rm Var} [{\widehat{J}}^{(k-{\rm LNN})}_{\alpha}(X) ] &=& O\Big( \frac{(\log n)^2}{n}\Big) \;.
\end{eqnarray} \end{thm} The idea of the proof is quite similar to that of Theorem~\ref{thm:unbiased_KDE}. For i.i.d.\ standard exponential random variables $E_1,E_2, \dots, E_m$ and i.i.d.\ Haar random variables $\xi_1, \dots, \xi_m$ uniformly distributed over the $d$-dimensional unit sphere, we define, for $\gamma \in \{0,1,2\}$, \begin{eqnarray} {\tilde{S}}_{\gamma}^{(m)} = \sum_{j=1}^m \xi_j^{(\gamma)} \frac{(\sum_{\ell=1}^j E_{\ell})^{\gamma/d}}{(\sum_{\ell=1}^k E_{\ell})^{\gamma/d}} \exp\Big\{-\frac{(\sum_{\ell=1}^j E_{\ell})^{2/d}}{2(\sum_{\ell=1}^k E_{\ell})^{2/d}}\Big\} \;, \end{eqnarray} where $\xi_j^{(0)} = 1$, $\xi_j^{(1)} = \xi_j \in \mathbb{R}^d$ and $\xi_j^{(2)} = \xi_j \xi_j^T \in \mathbb{R}^{d \times d}$, and ${\tilde{S}}_{\gamma} = \lim_{m \to \infty} {\tilde{S}}_{\gamma}^{(m)}$. We further set $\tilde{\mu} = {\tilde{S}}_1/{\tilde{S}}_0$ and $\tilde{\Sigma} = {\tilde{S}}_2/{\tilde{S}}_0 - {\tilde{S}}_1{\tilde{S}}_1^T/{\tilde{S}}_0^2$. We show that the quantities $\{S_{0,i},S_{1,i},S_{2,i},\mu_i,\Sigma_i\}$ jointly converge to $\{{\tilde{S}}_0, {\tilde{S}}_1, {\tilde{S}}_2, \tilde{\mu}, \tilde{\Sigma}\}$ using Lemma~\ref{lem:order_stat}. The multiplicative bias $B_{k,d,\alpha}$ is given by \begin{eqnarray} B_{k,d,\alpha} = {\mathbb E}\left[\, \left( \frac{c_d {\tilde{S}}_0}{(\sum_{\ell=1}^k E_{\ell})(2\pi)^{d/2}|\tilde{\Sigma}|^{1/2}} \exp\{-\frac{1}{2} \tilde{\mu}^T \tilde{\Sigma}^{-1} \tilde{\mu}\} \right)^{\alpha-1} \,\right] \;. \label{eq:defBias_kLNN} \end{eqnarray} We provide a proof in Section~\ref{sec:proof_kLNN}. Table~\ref{tbl:bias_kLNN} lists approximate values of $B_{k,d,\alpha}$ for some typical $k$, $d$ and $\alpha$, computed from 10,000 Monte Carlo trials with the summation truncated at $m=5{,}000$.
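The constant $B_{k,d,\alpha}$ can be approximated along the same lines as the simulations behind Table~\ref{tbl:bias_kLNN}. Below is a hedged Monte Carlo sketch for the simplest case $d=1$, where $c_1 = 2$ and the Haar variables reduce to random signs; the truncation level and trial count are our own choices, smaller than the $m=5{,}000$ and 10,000 trials used for the table:

```python
import numpy as np

def bias_mc(k, alpha, trials=4000, m=300, seed=1):
    """Monte Carlo approximation of B_{k,d,alpha} for d = 1 (c_1 = 2, and the
    Haar variables xi_j are uniform signs).  The order-statistics sums are
    truncated at m terms; the Gaussian weights decay fast, so a few hundred
    terms suffice in place of m = 5,000."""
    rng = np.random.default_rng(seed)
    vals = np.empty(trials)
    for t in range(trials):
        T = np.cumsum(rng.exponential(size=m))    # T_j = E_1 + ... + E_j
        xi = rng.choice([-1.0, 1.0], size=m)      # signs: the 1-d unit sphere
        w = np.exp(-T**2 / (2 * T[k-1]**2))       # Gaussian weights, h -> T_k
        S0 = w.sum()
        mu = (xi * T / T[k-1] * w).sum() / S0     # tilde-mu = S1/S0
        Sig = (T**2 / T[k-1]**2 * w).sum() / S0 - mu**2   # tilde-Sigma
        dens = 2.0 * S0 / (T[k-1] * np.sqrt(2*np.pi) * np.sqrt(Sig)) \
               * np.exp(-mu**2 / (2 * Sig))
        vals[t] = dens ** (alpha - 1)
    return vals.mean()

B = bias_mc(k=7, alpha=2)   # Table 1 reports roughly 1.05 for k=7, d=1, alpha=2
```

With more trials this should reproduce the $d=1$ row of Table~\ref{tbl:bias_kLNN} up to Monte Carlo error.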
\begin{table}[h] \begin{center} \begin{tabular}{ c c | c | c | c | c | c | c |} \cline{3-8} & & \multicolumn{6}{|c|}{$k$} \\ \cline{3-8} & & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ \\ \cline{3-8} \hline \multicolumn{1}{ |c }{\multirow{2}{*}{$d=1$} } & \multicolumn{1}{ |c|| }{$\alpha=2$} & $1.104(\pm 5)$ & $1.076(\pm 4)$ & $1.062(\pm 4)$ & $1.050(\pm 3)$ & $1.045(\pm 3)$ & $1.037(\pm 3)$ \\ \cline{2-8} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c|| }{$\alpha=3$} & $1.493(\pm 18)$ & $1.358(\pm 10)$ & $1.273(\pm 9)$ & $1.242(\pm 8)$ & $1.199(\pm 7)$ & $1.180(\pm 7)$ \\ \cline{1-8} \hline \multicolumn{1}{ |c }{\multirow{2}{*}{$d=2$} } & \multicolumn{1}{ |c|| }{$\alpha=2$} & $1.006(\pm 3)$ & $1.003(\pm 3)$ & $1.003(\pm 3)$ & $1.000(\pm 3)$ & $0.994(\pm 2)$ & $0.996(\pm 2)$ \\ \cline{2-8} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c|| }{$\alpha=3$} & $1.158(\pm 8)$ & $1.139(\pm 7)$ & $1.095(\pm 6)$ & $1.089(\pm 6)$ & $1.073(\pm 5)$ & $1.075(\pm 5)$ \\ \cline{1-8} \hline \multicolumn{1}{ |c }{\multirow{2}{*}{$d=3$} } & \multicolumn{1}{ |c|| }{$\alpha=2$} & $0.971(\pm 3)$ & $0.977(\pm 2)$ & $0.975(\pm 2)$ & $ 0.978(\pm 2)$ & $0.984(\pm 2)$ & $0.984(\pm 2)$ \\ \cline{2-8} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c|| }{$\alpha=3$} & $1.034(\pm 5)$ & $1.026(\pm 5)$ & $1.015(\pm 4)$ & $1.011(\pm 4)$ & $1.008(\pm 4)$ & $1.014(\pm 3)$ \\ \cline{1-8} \end{tabular} \end{center} \caption{Numerical approximation of $B_{k,d,\alpha}$.} \label{tbl:bias_kLNN}\end{table} \subsection{$k$-LNN R\'{e}nyi entropy estimator} Given the $k$-LNN estimator for $J^{(k-{\rm LNN})}_{\alpha}(X)$, we propose the following estimator for the R\'{e}nyi entropy: \begin{eqnarray} && \widehat{H}^{(k-{\rm LNN})}_{\alpha}(X) = \frac{1}{1-\alpha} \log \widehat{J}^{(k-{\rm LNN})}_{\alpha}(X) \,\notag\\ &=& \frac{1}{1-\alpha} \left(\, \log \sum_{i=1}^n \left( \frac{S_{0,i}}{n(2\pi)^{d/2}\rho_{k,i}^d |\Sigma_i|^{1/2} } \exp\{- \frac12 \mu_{i}^T \Sigma_i^{-1} \mu_{i}\} \right)^{\alpha-1} - \log n - \log 
B_{k,d,\alpha} \,\right) \;. \end{eqnarray} Similar to Corollary~\ref{cor:renyi_unbiased_KDE}, by the $L_2$ consistency of ${\widehat{J}}^{(k-{\rm LNN})}_{\alpha}(X)$ and the fact that $\log(\cdot)$ is continuous on $\mathbb{R}^+$, the $k$-LNN estimator ${\widehat{H}}^{(k-{\rm LNN})}_{\alpha}(X)$ converges to $H_{\alpha}(X)$ in probability as $n \to \infty$. \bigskip\noindent{\bf Experiment I: Highly Correlated Joint Gaussian.} Consider $X \sim \mathcal{N}\left((0,0), \begin{pmatrix} 1 & r \\ r & 1\end{pmatrix}\right)$, where the correlation $r$ is close to 1. We estimate $J_2(X) = \int f^2(x) dx$, where the ground truth is $1/(4\pi\sqrt{1-r^2})$. In this case, the density function $f$ varies dramatically in the neighborhood of almost every point $x$. Hence, the KDE based and $k$-NN based estimators suffer from boundary bias, whereas our estimator performs better. The result is shown in Figure~\ref{fig:d=1_alpha=2}. For all the experiments in this section, in the left figure we draw 100 i.i.d.\ samples from distributions with different $r$ and plot the performance of the estimators against $r$, and in the right figure we fix $r = 0.99999$ and show the performance against the number of samples. All results are averaged over 100 independent trials.
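The ground truth $1/(4\pi\sqrt{1-r^2})$ can be sanity-checked numerically via the identity $J_2(X) = {\mathbb E}_f[f(X)]$; the sketch below is our own check (with a moderate $r$ for speed), comparing a plain Monte Carlo average of the true density against the closed form:

```python
import numpy as np

# Monte Carlo check of J_2 = 1/(4*pi*sqrt(1-r^2)) for the bivariate Gaussian of
# Experiment I, using J_2 = E_f[f(X)].
rng = np.random.default_rng(0)
r, n = 0.9, 200_000
cov = np.array([[1.0, r], [r, 1.0]])
X = rng.multivariate_normal(np.zeros(2), cov, size=n)

det = 1.0 - r**2
inv = np.array([[1.0, -r], [-r, 1.0]]) / det           # inverse covariance
quad = np.einsum('ni,ij,nj->n', X, inv, X)             # x^T cov^{-1} x
f = np.exp(-quad / 2) / (2 * np.pi * np.sqrt(det))     # true density at samples
J2_mc = f.mean()                                       # estimates E_f[f(X)]
J2_true = 1.0 / (4 * np.pi * np.sqrt(det))
```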
\\ \begin{figure}[h] \begin{center} \includegraphics[width=.45\textwidth]{r_d=1_alpha=2} \put(-240,80){\small$ {\mathbb E}[{\widehat{J}}_2(X)] $} \put(-170,-0){ $(1-r)$ where $r$ is correlation} \hspace{0.9 cm} \includegraphics[width=.45\textwidth]{n_d=1_alpha=2} \put(-245,80){\small$ {\mathbb E}[{\widehat{J}}_2(X)] $} \put(-150,-0){ number of samples $n$} \end{center} \caption{The proposed estimator outperforms the other estimators of $J_2(X)$ for the highly correlated Gaussian.} \label{fig:d=1_alpha=2} \end{figure} \bigskip\noindent{\bf Experiment II: Cubic Function.} We now consider estimation of the integral of the cubed density, $J_3(X) = \int f^3(x) dx$, where the underlying distribution is the same as in Experiment I. The ground truth is $J_3(X) = 1/(12\pi^2(1-r^2))$. The result is shown in Figure~\ref{fig:d=1_alpha=3}. \\ \begin{figure}[h] \begin{center} \includegraphics[width=.45\textwidth]{r_d=1_alpha=3} \put(-240,80){\small$ {\mathbb E}[{\widehat{J}}_3(X)] $} \put(-170,-5){ $(1-r)$ where $r$ is correlation} \hspace{0.9 cm} \includegraphics[width=.45\textwidth]{n_d=1_alpha=3} \put(-240,80){\small$ {\mathbb E}[{\widehat{J}}_3(X)] $} \put(-150,-0){ number of samples $n$} \end{center} \caption{The proposed estimator outperforms the other estimators of $J_3(X)$ for the highly correlated Gaussian.} \label{fig:d=1_alpha=3} \end{figure} \bigskip\noindent{\bf Experiment III: High Dimension.} We consider a 6-dimensional joint Gaussian random variable with ${\rm Cov}(X_1,X_2) = {\rm Cov}(X_3,X_4) = {\rm Cov}(X_5,X_6) = r$ and ${\rm Cov}(X_i, X_j) = 0$ for all other pairs of $(i,j)$, and again estimate the quadratic integral $J_2(X)$. This generalizes Experiment I to higher dimensions. The result is shown in Figure~\ref{fig:d=3_alpha=2}.
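For completeness, the ground truth in this 6-dimensional experiment follows from the Gaussian integral $\int f^2 = 1/(2^d \pi^{d/2} |\Sigma|^{1/2})$, and the block covariance here has $|\Sigma| = (1-r^2)^3$. This closed form is our own derivation (the text only reports the figure), verified by Monte Carlo below:

```python
import numpy as np

# For X ~ N(0, Sigma), the squared density integrates in closed form:
#   J_2 = 1 / (2^d * pi^(d/2) * sqrt(det Sigma)).
# For the block covariance of Experiment III, det Sigma = (1 - r^2)^3.
rng = np.random.default_rng(0)
r, d, n = 0.8, 6, 300_000
Sigma = np.eye(d)
for a in (0, 2, 4):                        # couple the pairs (1,2), (3,4), (5,6)
    Sigma[a, a+1] = Sigma[a+1, a] = r

X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
inv, det = np.linalg.inv(Sigma), np.linalg.det(Sigma)
quad = np.einsum('ni,ij,nj->n', X, inv, X)
f = np.exp(-quad / 2) / ((2*np.pi)**(d/2) * np.sqrt(det))
J2_mc = f.mean()                           # Monte Carlo estimate of E_f[f(X)]
J2_true = 1.0 / (2**d * np.pi**(d/2) * np.sqrt(det))
```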
\\ \begin{figure}[h] \begin{center} \includegraphics[width=.45\textwidth]{r_d=3_alpha=2} \put(-235,80){\small$ {\mathbb E}[{\widehat{J}}_2(X)] $} \put(-170,-0){ $(1-r)$ where $r$ is correlation} \hspace{0.9 cm} \includegraphics[width=.45\textwidth]{n_d=3_alpha=2} \put(-235,80){\small$ {\mathbb E}[{\widehat{J}}_2(X)] $} \put(-150,-0){ number of samples $n$} \end{center} \caption{The proposed estimator outperforms the other estimators of $J_2(X)$ for the high-dimensional highly correlated Gaussian.} \label{fig:d=3_alpha=2} \end{figure} \bigskip\noindent{\bf Experiment IV: Mixture of Gaussians.} We also consider a non-Gaussian distribution. Let $X$ be a mixture of $\mathcal{N}\left((0,0), \begin{pmatrix} 1 & r \\ r & 1\end{pmatrix}\right)$ and $\mathcal{N}\left((0,0), \begin{pmatrix} 1 & -r \\ -r & 1\end{pmatrix}\right)$, with probability $1/2$ each, and again estimate $J_2(X)$. The result is shown in Figure~\ref{fig:mixed}. \\ \begin{figure}[h] \begin{center} \includegraphics[width=.45\textwidth]{r_mixed} \put(-240,80){\small$ {\mathbb E}[{\widehat{J}}_2(X)] $} \put(-170,-5){ $(1-r)$ where $r$ is correlation} \hspace{0.9 cm} \includegraphics[width=.45\textwidth]{n_mixed} \put(-245,80){\small$ {\mathbb E}[{\widehat{J}}_2(X)] $} \put(-150,-0){ number of samples $n$} \end{center} \caption{The proposed estimator outperforms the other estimators of $J_2(X)$ for the mixture of highly correlated Gaussians.} \label{fig:mixed} \end{figure} \section{Proof of Theorem \ref{thm:unbiased_KDE}} \label{sec:proof_KDE} \subsection{Proof of Asymptotic Unbiasedness} We rewrite the estimator as \begin{eqnarray*} {\widehat{J}}^{{\rm (KDE)}}_{\alpha}=\frac{1}{nB_{k,d,\alpha,K}} \sum_{i=1}^n \Big\{ \underbrace{\left(\, h\big(\,(c_d nf(X_i))^{1/d} Z_{k,i}, S_{0,i}\,\big) f(X_i) \,\right)^{\alpha-1}}_{\equiv J_i} \Big\} \;, \end{eqnarray*} where $S_{0,i} = \sum_{j \in {\cal T}_{i,m}} K((X_j-X_i)/\|Z_{k,i}\|)$ and $h(t_1, t_2) = c_d t_2/\|t_1\|^d$.
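As a concrete (and deliberately uncorrected) rendering of this decomposition, the sketch below evaluates $\frac{1}{n}\sum_i \widehat{f}^{\rm (KDE)}(X_i)^{\alpha-1}$ with the Gaussian kernel, $k$-NN bandwidth, and truncation to the $m = \lceil \log n \rceil$ nearest neighbors. No division by $B_{k,d,\alpha,K}$ is applied, so the output differs from $J_\alpha(X)$ by that constant factor; the function name and parameters are ours:

```python
import numpy as np

def J_hat_kde(X, k, alpha, m=None):
    """Resubstitution estimate (1/n) sum_i fhat(X_i)^(alpha-1) with a Gaussian
    KDE using the k-NN bandwidth rho_{k,i}, summing only the m nearest
    neighbors.  No multiplicative bias correction B_{k,d,alpha,K} is applied,
    so the value is off by that constant factor."""
    n, d = X.shape
    if m is None:
        m = int(np.ceil(np.log(n)))                            # m = ceil(log n)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise dists
    np.fill_diagonal(D, np.inf)                                # exclude self
    Ds = np.sort(D, axis=1)
    rho = Ds[:, k-1]                                           # k-NN distances
    W = np.exp(-Ds[:, :m]**2 / (2 * rho[:, None]**2))          # kernel weights
    S0 = W.sum(axis=1)
    fhat = S0 / (n * (2*np.pi)**(d/2) * rho**d)
    return (fhat ** (alpha - 1)).mean()

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 1))
J2_est = J_hat_kde(X, k=7, alpha=2)   # J_2 = 1/(2 sqrt(pi)) ~ 0.282 up to the bias
```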
Since the random variables $J_1, J_2, \dots, J_n$ are identically distributed, the expected value of ${\widehat{J}}_{\alpha}^{{\rm (KDE)}}$ is equal to \begin{eqnarray} {\mathbb E}[{\widehat{J}}_{\alpha}^{{\rm (KDE)}}] &=& \frac{1}{B_{k,d,\alpha,K}}{\mathbb E}[J_1] \;\;= \;\; \frac{1}{B_{k,d,\alpha,K}}{\mathbb E}_{X_1}\big[{\mathbb E}[J_1|X_1 = x]\big]\;. \end{eqnarray} When taking the limit as $n$ goes to infinity, the typical approach via the dominated convergence theorem cannot be applied directly. In order to exchange the limit with the (conditional) expectation over $X_1$, we assume the following Ansatz \ref{ansatz} to be true. \begin{ansatz} \label{ansatz} The function $h(\cdot,\cdot)$ is bounded. \end{ansatz} As noted in \cite{Pal10}, this ansatz is commonly used implicitly, without being stated explicitly, in existing analyses of the consistency of $k$-NN based entropy estimators, including \cite{KL87,GLMN05,Leo08,WKV09}. The assumption can be avoided, at the cost of additional assumptions, in analyses of the convergence rate of the estimator with respect to the sample size, as in \cite{Pal10,GOV16De,SP16,BSY16}. In practice, we can truncate $h$ at some very large constant to fulfill the ansatz. Under this ansatz, by the dominated convergence theorem, we can exchange the limit with the conditional expectation and obtain \begin{eqnarray} \label{eq:entropy1} \lim_{n \to \infty} {\mathbb E}[{\widehat{J}}_{\alpha}^{{\rm (KDE)}}] = \frac{1}{B_{k,d,\alpha,K}} {\mathbb E}_{X_1} \left[\, \lim_{n \to \infty} {\mathbb E} \left[\, J_1 | X_1 = x \,\right] \,\right]. \end{eqnarray} Now we will show that the inner expectation converges to $(f(x))^{\alpha-1}$ multiplied by a constant that is independent of the underlying distribution.
Precisely, for almost every $x$ and given $X_1 = x$, we have \begin{eqnarray} {\mathbb E}[J_1 | X_1 = x] &=& {\mathbb E} \left[\, \left(\, h ( (c_d nf(x))^{1/d} Z_{k,1}, S_{0,1}) f(x)\,\right)^{\alpha-1} \,\right] \,\notag\\ & \longrightarrow & B_{k,d,\alpha,K} (f(x))^{\alpha-1}\;, \label{eq:entropy2} \end{eqnarray} as $n\to \infty$. Here $B_{k,d,\alpha,K}$ is a constant that depends only on $k$, $d$, $\alpha$ and $K$, defined in \eqref{eq:defB}. Therefore, \begin{eqnarray} {\mathbb E}_{X_1} \left[\, \lim_{n \to \infty} {\mathbb E}[J_1|X_1 = x]\right] & =& {\mathbb E}_{X_1} [B_{k,d,\alpha,K} (f(X_1))^{\alpha-1}] \,\notag\\ &=& B_{k,d,\alpha,K} J_{\alpha}(X) \;.\label{eq:entropy4} \end{eqnarray} Together with \eqref{eq:entropy1}, this finishes the proof of the desired claim. \\ We are now left to prove the convergence in \eqref{eq:entropy2}. We first give a formal definition of the multiplicative factor $B_{k,d,\alpha,K}$, replacing the sample-defined quantity $S_{0,1}$ by an analogous quantity defined via order statistics, and use Lemma~\ref{lem:order_stat} to prove the convergence. Recall that our order statistics are defined by two sequences of $m$ i.i.d.\ random variables: i.i.d.\ standard exponential random variables $E_1, \dots, E_m$ and i.i.d.\ Haar random variables $\xi_1, \dots, \xi_m$ uniformly distributed over the $d$-dimensional unit sphere. Now we define \begin{eqnarray} \label{eq:defB} B_{k,d,\alpha,K} &\equiv & {\mathbb E}\left[\, \left( h \left(\, \xi_k (\sum_{\ell=1}^k E_\ell)^{1/d}, \tilde{S}_0^{(\infty)} \,\right) \,\right)^{\alpha-1}\,\right] \;, \end{eqnarray} where ${\tilde{S}}_0^{(\infty)}$ is defined as the limit of the convergent random sequence \begin{eqnarray} \tilde{S}_0^{(m)} &\equiv& \sum_{j=1}^{m} K\left(\, \frac{\xi_j (\sum_{\ell=1}^j E_\ell)^{1/d} }{(\sum_{\ell=1}^k E_\ell)^{1/d} } \,\right) \;. \end{eqnarray} We will show that the limit exists in Lemma~\ref{lem:tail}.
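Lemma~\ref{lem:order_stat}, which underlies this construction, says that $c_d\, n f(x) \|Z_{k,1}\|^d$ converges to $\sum_{\ell=1}^k E_\ell$, an Erlang$(k,1)$ variable with mean $k$. A quick self-contained check for $d=1$ (so $c_1 = 2$) with a uniform density, using parameter choices that are ours:

```python
import numpy as np

# For f uniform on [0,1], d = 1 (c_1 = 2) and x = 1/2, the rescaled k-NN
# distance c_d * n * f(x) * rho_k is approximately Erlang(k,1), with mean k.
rng = np.random.default_rng(0)
n, k, trials = 2000, 7, 2000
vals = np.empty(trials)
for t in range(trials):
    U = rng.uniform(size=n)
    rho_k = np.sort(np.abs(U - 0.5))[k-1]    # k-th nearest-neighbor distance
    vals[t] = 2 * n * rho_k                  # c_1 * n * f(1/2) * rho_k
mean_rescaled = vals.mean()                  # should be close to k = 7
```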
We introduce simpler notations for the joint random variables: ${\tilde{S}}^{(m)}=(\xi_k (\sum_{\ell=1}^k E_{\ell})^{1/d}, \tilde{S}^{(m)}_{0})$ and ${\tilde{S}}^{(\infty)}=(\xi_k (\sum_{\ell=1}^k E_{\ell})^{1/d}, \tilde{S}^{(\infty)}_{0})$. Considering the quantity $S^{(n)} = ((c_d nf(x))^{1/d} Z_{k,1}, S_{0,1})$ defined from the samples, we show that it converges to ${\tilde{S}}^{(\infty)}$. Precisely, by the triangle inequality, \begin{eqnarray} d_{\rm TV} (S^{(n)},{\tilde{S}}^{(\infty)}) &\leq& d_{\rm TV} (S^{(n)},{\tilde{S}}^{(m)}) + d_{\rm TV} ({\tilde{S}}^{(m)},{\tilde{S}}^{(\infty)}) \;, \label{eq:entropy3} \end{eqnarray} and we show that both terms converge to zero for any $m = \Theta(\log n)$. Given that $h$ is continuous and, by the ansatz, bounded, we obtain \begin{eqnarray} \lim_{n\to\infty} {\mathbb E}[J_1|X_1 = x] &=& {\mathbb E}\, \left[\, \lim_{n\to\infty} \left(\, h(S^{(n)}) f(x) \,\right)^{\alpha-1}|X_1 = x \,\right] \,\notag\\ &=& (f(x))^{\alpha-1} {\mathbb E} \,\left[ \,(h({\tilde{S}}^{(\infty)}))^{\alpha-1} \,\right]\;, \end{eqnarray} for almost every $x$, proving \eqref{eq:entropy4}. The convergence of the first term follows from Lemma \ref{lem:order_stat}. Precisely, consider the function $g_{m}: \mathbb{R}^{d \times m} \to \mathbb{R}^d \times \mathbb{R} $ defined as \begin{eqnarray} g_{m}(t_1, t_2, \dots, t_{m}) = \left(\, t_k, \sum_{j=1}^{m} K \left(\, \frac{t_j}{\|t_k\|} \,\right) \,\right) \;, \end{eqnarray} such that $S^{(n)} = g_{m} \left(\, (c_dnf(x))^{1/d} \left(\, Z_{1,1}, Z_{2,1}, \dots, Z_{m,1} \,\right) \,\right)$, which follows from the definition of $S^{(n)}=((c_dnf(x))^{1/d}Z_{k,1},S_{0,1})$. Similarly, ${\tilde{S}}^{(m)} = g_{m} \left(\, \xi_1 E_1^{1/d}, \xi_2(E_1 + E_2)^{1/d}, \dots, \xi_{m}(\sum_{\ell=1}^{m} E_\ell)^{1/d} \,\right)$.
Since $g_{m}$ is continuous, for any set $A \subseteq \mathbb{R}^d \times \mathbb{R}$ we can take its preimage ${\widetilde{A}} = g_{m}^{-1}(A) \subseteq \mathbb{R}^{d \times m}$. Then, for any $x$ for which there exists $\varepsilon > 0$ such that $f(a) > 0$, $\|\nabla f(a)\| = O(1)$ and $\|H_f(a)\| = O(1)$ whenever $\|a - x\| < \varepsilon$, we have: \begin{align} & d_{\rm TV} ( S^{(n)},{\tilde{S}}^{(m)}) \,\notag\\ &= \sup_{A } \left|\, {\mathbb P}\left\{g_{m} \left(\, (c_dnf(x))^{1/d} Z_{1,1}, \dots, (c_dnf(x))^{1/d}Z_{m,1} \,\right) \in A \right\} - {\mathbb P}\{g_{m} (\, \xi_1 E_1^{1/d}, \dots \xi_{m}(\sum_{\ell=1}^{m} E_{\ell})^{1/d} \,) \in A \} \,\right| \,\notag\\ &\leq \sup_{{\widetilde{A}} \subseteq \mathbb{R}^{d \times m}} \left|\, {\mathbb P}\left\{\left(\, (c_dnf(x))^{1/d} Z_{1,1}, \dots, (c_dnf(x))^{1/d} Z_{m,1} \,\right) \in {\widetilde{A}} \right\} - {\mathbb P}\{(\, \xi_1 E_1^{1/d}, \dots \xi_{m} (\sum_{\ell=1}^{m} E_\ell)^{1/d} \,) \in {\widetilde{A}} \} \,\right| \,\notag\\ &= d_{\rm TV}\left( \left(\, (c_dnf(x))^{1/d} Z_{1,1}, \dots, (c_dnf(x))^{1/d} Z_{m,1} \,\right) \,,\, \left(\, \xi_1 E_1^{1/d}, \dots \xi_{m} (\sum_{\ell=1}^{m} E_\ell)^{1/d} \,\right)\,\right) \,\notag\\ &\stackrel{n \rightarrow \infty}{\longrightarrow} 0 \label{eq:converge_1}\;, \end{align} where the final convergence follows from Lemma \ref{lem:order_stat}. By the assumption that $f$ has open support and that $\|\nabla f\|$ and $\|H_f\|$ are bounded almost everywhere, this convergence holds for almost every $x$. \\ For the second term in \eqref{eq:entropy3}, let $\tilde{T}^{(m)}_0 = {\tilde{S}}^{(\infty)}_0 - {\tilde{S}}^{(m)}_0$; we claim that $\tilde{S}^{(m)}$ converges to $\tilde{S}^{(\infty)}$ in distribution by the following lemma. \begin{lemma} \label{lem:tail} Assume $m_n \to \infty$ as $n \to \infty$, and that the kernel function $K: \mathbb{R}^d \to \mathbb{R}^{d'}$ satisfies $\|K(u)\| \leq C \|u\|^{-2d}$ for some constant $C > 0$.
Then we have \begin{eqnarray} \lim_{n \to \infty} {\mathbb E} \,\Big\|\, \sum_{j=m_n+1}^{\infty} K\left(\, \frac{\xi_j (\sum_{\ell=1}^j E_{\ell})^{1/d}}{(\sum_{\ell=1}^k E_{\ell})^{1/d}}\,\right)\,\Big\| = 0 \;. \end{eqnarray} \end{lemma} This implies that $\tilde{T}_0^{(m)}$ converges to $0$ in $L_1$. Therefore ${\tilde{S}}^{(m)}=(\xi_k (\sum_{\ell=1}^k E_{\ell})^{1/d}, \tilde{S}^{(m)}_{0})$ converges to ${\tilde{S}}^{(\infty)}=(\xi_k (\sum_{\ell=1}^k E_{\ell})^{1/d}, \tilde{S}^{(\infty)}_{0})$ in $L_1$ and, hence, in distribution. Therefore, \begin{eqnarray} d_{\rm TV}({\tilde{S}}^{(m)},{\tilde{S}}^{(\infty)}) \stackrel{n \rightarrow \infty}{\longrightarrow} 0 \label{eq:converge_2}\;. \end{eqnarray} Combining~\eqref{eq:converge_1} and~\eqref{eq:converge_2} in~\eqref{eq:entropy3} implies the desired claim. \subsection{Proof of the Variance Bound} We follow the technique from \cite[Section 7.3]{BD16}. To apply the Efron-Stein inequality, we need a second set of i.i.d.\ samples $\{X'_1, X'_2, \dots, X'_n\}$. For simplicity, let ${\widehat{J}} = {\widehat{J}}_{\alpha}^{{\rm (KDE)}}(X)$ denote the estimate of $J_{\alpha}(X)$ based on the original samples $\{X_1, \dots, X_n\}$ and ${\widehat{J}}^{(i)}$ the estimate based on $\{X_1, \dots, X_{i-1}, X'_i, X_{i+1}, \dots X_n\}$, where only $X_i$ is replaced by $X'_i$. The Efron-Stein inequality then states that \begin{eqnarray} {\textrm{ Var }} \left[ {\widehat{J}} \right] \leq 2 \sum_{j=1}^n \mathbb{E} \left[\, \left( {\widehat{J}} - {\widehat{J}}^{(j)} \right)^2 \,\right] \;.\label{eq:efron_stein} \end{eqnarray} Recall that \begin{eqnarray*} {\widehat{J}}^{{\rm (KDE)}}_{\alpha}=\frac{1}{nB_{k,d,\alpha,K}} \sum_{i=1}^n \Big\{ \underbrace{\left(\, h\big(\,(c_d nf(X_i))^{1/d} Z_{k,i}, S_{0,i}\,\big) f(X_i) \,\right)^{\alpha-1}}_{\equiv J_i} \Big\} \;. \end{eqnarray*} Similarly, we can write ${\widehat{J}}^{(j)} = (1/nB_{k,d,\alpha,K}) \sum_{i=1}^n J_i^{(j)}$ for any $j \in \{1, \dots, n\}$.
Therefore, the difference between ${\widehat{J}}$ and ${\widehat{J}}^{(j)}$ is \begin{eqnarray} {\widehat{J}} - {\widehat{J}}^{(j)} = \frac{1}{nB_{k,d,\alpha,K}} \sum_{i=1}^n \left(\, J_i - J_i^{(j)} \,\right) \;. \end{eqnarray} Notice that $J_i$ only depends on $X_i$ and its $m$ nearest neighbors, so $J_i - J_i^{(j)} = 0$ if neither $X_j$ nor $X'_j$ is among the $m$ nearest neighbors of $X_i$. If we denote $Z_{i,j} = \mathbb{I} \{X_j \textrm{ is among the } m \textrm{ nearest neighbors of } X_i\}$ and $Z'_{i,j} = \mathbb{I} \{X'_j \textrm{ is among the } m \textrm{ nearest neighbors of } X_i\}$, then $J_i = J_i^{(j)}$ whenever $Z_{i,j}+Z'_{i,j} = 0$. According to \cite[Lemma 20.6]{BD16}, since $X$ has a density, with probability one $\sum_{i=1}^n Z_{i,j} \leq m \gamma_d$, where $\gamma_d$ is the minimal number of cones of angle $\pi/6$ that can cover $\mathbb{R}^d$, which depends only on $d$. Similarly, $\sum_{i=1}^n Z'_{i,j} \leq m \gamma_d$. If we denote $S_j = \{i: Z_{i,j} + Z'_{i,j} > 0\}$, the cardinality of $S_j$ satisfies $|S_j| \leq 2 m \gamma_d$. Therefore, we have ${\widehat{J}} - {\widehat{J}}^{(j)} = \sum_{i \in S_j} \left(\, J_i - J_i^{(j)} \,\right)/(nB_{k,d,\alpha,K})$. By the Cauchy-Schwarz inequality, we have \begin{eqnarray} \mathbb{E} \left[\, \left( {\widehat{J}} - {\widehat{J}}^{(j)} \right)^2 \,\right] &=& \mathbb{E} \left[\, \frac{1}{n^2 B_{k,d,\alpha,K}^2} \left(\, \sum_{i \in S_j} \left(\, J_i - J_i^{(j)} \,\right) \,\right)^2\,\right] \,\notag\\ &\leq& \mathbb{E} \left[\, \frac{|S_j|}{n^2 B_{k,d,\alpha,K}^2} \sum_{i \in S_j} \left(\, J_i - J_i^{(j)} \,\right)^2\,\right] \,\notag\\ &=& \frac{|S_j|}{n^2 B_{k,d,\alpha,K}^2} \sum_{i \in S_j} \mathbb{E} \left[\, \left(\, J_i - J_i^{(j)}\,\right)^2\,\right] \,\notag\\ &\leq& \frac{2|S_j|}{n^2 B_{k,d,\alpha,K}^2} \sum_{i \in S_j} \left(\, \mathbb{E} \left[\, J_i^2\,\right] + \mathbb{E} \left[\, (J_i^{(j)})^2\,\right] \,\right) \;,\label{eq:h-h_j} \end{eqnarray} for every $j \in [n]$. Notice that the $J_i$'s and $J_i^{(j)}$'s are identically distributed, so we are left to compute $\mathbb{E} \left[\, J_1^2 \,\right]$.
Conditioning on $X_1 = x$, similarly to~\eqref{eq:entropy2}, we have \begin{eqnarray} {\mathbb E}[J_1^2 | X_1 = x] &=& {\mathbb E} \left[\, \left|\, h ( (c_d nf(x))^{1/d} Z_{k,1}, S_{0,1}) f(x)\,\right|^{2\alpha-2} \,\right] \,\notag\\ & \longrightarrow & B_{k,d,2\alpha-1,K} |f(x)|^{2\alpha-2}\;, \end{eqnarray} as $n \to \infty$. Therefore, by taking the expectation over $X_1$, we obtain \begin{eqnarray} {\mathbb E} [J_1^2] &=& {\mathbb E}_{X_1} \left[\, \lim_{n \to \infty} {\mathbb E} \left[\, J_1^2 | X_1 \,\right] \,\right] = B_{k,d,2\alpha-1,K} {\mathbb E}_{X_1} \left[\, |f(X_1)|^{2\alpha-2} \,\right] < +\infty\;, \end{eqnarray} where the last inequality comes from the assumption that ${\mathbb E} \left[\, |f(X)|^{2\alpha-2} \,\right] < +\infty$. Combining with~\eqref{eq:efron_stein} and~\eqref{eq:h-h_j}, we have \begin{eqnarray} {\textrm{ Var }} \left[ {\widehat{J}} \right] &\leq& 2 \sum_{j=1}^n \mathbb{E} \left[\, \left( {\widehat{J}} - {\widehat{J}}^{(j)} \right)^2 \,\right] \,\notag\\ &\leq& \frac{4}{n^2 B^2_{k,d,\alpha,K}} \sum_{j=1}^n \left(\, |S_j| \sum_{i \in S_j} \left(\, \mathbb{E} \left[\, J_i^2\,\right] + \mathbb{E} \left[\, (J_i^{(j)})^2\,\right] \,\right) \,\right) \,\notag\\ &\leq& \frac{4}{n^2B^2_{k,d,\alpha,K}} \sum_{j=1}^n \left(\, 2 |S_j|^2 B_{k,d,2\alpha-1,K} C\,\right) \leq \frac{32m^2 \gamma_d^2 B_{k,d,2\alpha-1,K} C}{n B^2_{k,d,\alpha,K}} \;, \end{eqnarray} where $C$ is the upper bound on ${\mathbb E} \left[\, |f(X)|^{2\alpha-2} \,\right]$. Taking $m = O(\log n)$ completes the proof.
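The combinatorial ingredient above, that a sample can be among the $m$ nearest neighbors of at most $m\gamma_d$ other samples, can be observed empirically by computing the maximum in-degree of the $m$-NN digraph. The sketch below (our own illustration, in $d=2$) shows the maximum in-degree staying a small multiple of $m$ rather than growing with $n$:

```python
import numpy as np

# Each sample can appear among the m nearest neighbors of at most m * gamma_d
# other samples.  We check the maximum "in-degree" of the m-NN digraph for
# uniform points in the plane; gamma_2 is a small absolute constant.
rng = np.random.default_rng(0)
n, m = 1000, 5
X = rng.uniform(size=(n, 2))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
np.fill_diagonal(D, np.inf)                  # a point is not its own neighbor
nn = np.argsort(D, axis=1)[:, :m]            # m nearest neighbors of each point
in_degree = np.bincount(nn.ravel(), minlength=n)
max_in_degree = in_degree.max()              # bounded by m * gamma_d, not by n
```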
\subsection{Proof of Corollary~\ref{cor:renyi_unbiased_KDE}} For any positive real number $\epsilon > 0$, we have \begin{eqnarray} &&{\mathbb P} \left(\, |{\widehat{H}}_{\alpha}^{{\rm (KDE)}}(X) - H_{\alpha}(X)| > \epsilon\,\right) \,\notag\\ &=& {\mathbb P} \left(\, |\frac{1}{1-\alpha} \left(\, \log {\widehat{J}}_{\alpha}^{{\rm (KDE)}}(X) - \log J_{\alpha}(X) \,\right)| > \epsilon\,\right) \,\notag\\ &=& {\mathbb P} \left(\, | \log {\widehat{J}}_{\alpha}^{{\rm (KDE)}}(X) - \log J_{\alpha}(X) | > \epsilon|1-\alpha| \,\right) \,\notag\\ &=& {\mathbb P} \left(\, {\widehat{J}}_{\alpha}^{{\rm (KDE)}}(X) > J_{\alpha}(X) e^{\epsilon|1-\alpha|} \,\right) + {\mathbb P} \left(\, {\widehat{J}}_{\alpha}^{{\rm (KDE)}}(X) < J_{\alpha}(X) e^{-\epsilon|1-\alpha|} \,\right) \,\notag\\ &=& {\mathbb P} \left(\, {\widehat{J}}_{\alpha}^{{\rm (KDE)}}(X) - J_{\alpha}(X) > J_{\alpha}(X) (e^{\epsilon|1-\alpha|}-1) \,\right) + {\mathbb P} \left(\, {\widehat{J}}_{\alpha}^{{\rm (KDE)}}(X) - J_{\alpha}(X) < J_{\alpha}(X) (e^{-\epsilon|1-\alpha|}-1) \,\right) \,\notag\\ &\leq& \frac{{\mathbb E}\left[\,\left(\,{\widehat{J}}_{\alpha}^{{\rm (KDE)}}(X) - J_{\alpha}(X)\,\right)^2\,\right]}{J_{\alpha}^2(X)(e^{\epsilon|1-\alpha|}-1)^2} + \frac{{\mathbb E}\left[\,\left(\,{\widehat{J}}_{\alpha}^{{\rm (KDE)}}(X) - J_{\alpha}(X)\,\right)^2\,\right]}{J_{\alpha}^2(X)(1-e^{-\epsilon|1-\alpha|})^2} \;, \end{eqnarray} where the last inequality follows from Chebyshev's inequality. Since $\epsilon$, $\alpha$ and $J_{\alpha}(X)$ are all fixed quantities and $ {\mathbb E}\left[\,\left(\,{\widehat{J}}_{\alpha}^{{\rm (KDE)}}(X) - J_{\alpha}(X)\,\right)^2\,\right] \to 0$ as $n$ tends to infinity, as shown in Theorem~\ref{thm:unbiased_KDE}, the probability ${\mathbb P} \left(\, |{\widehat{H}}_{\alpha}^{{\rm (KDE)}}(X) - H_{\alpha}(X)| > \epsilon\,\right)$ vanishes as $n \to \infty$, i.e., ${\widehat{H}}_{\alpha}^{{\rm (KDE)}}(X)$ converges to $H_{\alpha}(X)$ in probability.
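As a numerical aside on the moment bound \eqref{eq:Ej} used in the proof of Lemma~\ref{lem:tail} below: since $R_k/(R_k+R_{j-k})$ is ${\rm Beta}(k, j-k)$ distributed, its exact second moment is $k(k+1)/(j(j+1))$ (our own observation), which indeed sits below the stated bound. A Monte Carlo check:

```python
import numpy as np

# The ratio R_k / (R_k + R_{j-k}) of independent Erlang variables is
# Beta(k, j-k) distributed, so E[R_k^2/(R_k+R_{j-k})^2] = k(k+1)/(j(j+1)),
# which lies below the bound k(k+1)/((j-k-1)(j-k-2)) used in the proof.
rng = np.random.default_rng(0)
k, j, trials = 7, 20, 200_000
Rk = rng.gamma(shape=k, scale=1.0, size=trials)        # Erlang(k, 1)
Rjk = rng.gamma(shape=j - k, scale=1.0, size=trials)   # Erlang(j-k, 1)
mc = np.mean(Rk**2 / (Rk + Rjk)**2)
exact = k * (k + 1) / (j * (j + 1))                    # Beta second moment
bound = k * (k + 1) / ((j - k - 1) * (j - k - 2))
```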
\subsection{Proof of Lemma~\ref{lem:tail}} First, since $\|K(u)\| \leq C \|u\|^{-2d}$ for all $u$, we can upper bound the expectation by: \begin{eqnarray} && {\mathbb E} \, \Big\|\, \sum_{j=m_n+1}^{\infty} K\left(\, \frac{\xi_j(\sum_{l=1}^j E_l)^{1/d}}{(\sum_{l=1}^k E_l)^{1/d}} \,\right)\Big\|\, \,\notag\\ &\leq& \sum_{j=m_n+1}^{\infty} {\mathbb E}\, \Big\|\, K\left(\, \frac{\xi_j(\sum_{l=1}^j E_l)^{1/d}}{(\sum_{l=1}^k E_l)^{1/d}} \,\right)\,\Big\|\,\notag\\ &\leq& C \sum_{j=m_n+1}^{\infty} {\mathbb E} \, \Big\| \frac{\xi_j(\sum_{l=1}^j E_l)^{1/d}}{(\sum_{l=1}^k E_l)^{1/d}} \Big\|^{-2d} \,\notag\\ & = & C \sum_{j=m_n+1}^{\infty} {\mathbb E}\, \left[\, \frac{(\sum_{l=1}^k E_l)^{2}}{(\sum_{l=1}^j E_l)^{2}} \,\right]\label{eq:eq1} \end{eqnarray} where the last equality comes from the fact that $\|\xi_j\| = 1$ for all $j$. Now for any fixed $j \geq k+3$, let $R_k = \sum_{l=1}^k E_l$ and $R_{j-k} = \sum_{l=k+1}^j E_l$. Notice that $R_k$ is the sum of $k$ i.i.d.\ standard exponential random variables, so $R_k \sim \textit{Erlang} (k,1)$. Similarly, $R_{j-k} \sim \textit{Erlang}(j-k,1)$, and $R_k$ and $R_{j-k}$ are independent. Recall that the pdf of $\textit{Erlang}(k, \lambda)$ is given by $f_{k,\lambda}(x) = \lambda^k x^{k-1} e^{-\lambda x}/(k-1)!$ for $x \geq 0$. So we have: \begin{eqnarray} &&{\mathbb E}\, \left[\, \frac{(\sum_{l=1}^k E_l)^{2}}{(\sum_{l=1}^j E_l)^{2}} \,\right] = {\mathbb E} \left[\, \frac{R_k^2}{(R_k+R_{j-k})^2} \,\right] \,\notag\\ &=& \int_{x,y \geq 0} \frac{x^2}{(x+y)^2} \frac{x^{k-1}e^{-x}}{(k-1)!} \frac{y^{j-k-1}e^{-y}}{(j-k-1)!} dx dy \,\notag\\ &\leq& \int_{x,y \geq 0} \frac{x^2}{(x+y)^2} \frac{x^{k-1}e^{-x}}{(k-1)!} \frac{y^{j-k-3}(x+y)^2 e^{-y}}{(j-k-1)!} dx dy \,\notag\\ &=& \int_{x,y \geq 0} \frac{x^{k+1}e^{-x}}{(k-1)!} \frac{y^{j-k-3}e^{-y}}{(j-k-1)!} dx dy \,\notag\\ &=& \frac{(k+1)!}{(k-1)!} \frac{(j-k-3)!}{(j-k-1)!} = \frac{k(k+1)}{(j-k-1)(j-k-2)}\label{eq:Ej}\;.
\end{eqnarray} Therefore, for sufficiently large $n$ such that $m_n \geq 2k+4$, i.e., $m_n - k - 2 \geq m_n/2$, we have \begin{eqnarray} &&{\mathbb E} \, \Big\|\, \sum_{j=m_n+1}^{\infty} K\left(\, \frac{\xi_j(\sum_{l=1}^j E_l)^{1/d}}{(\sum_{l=1}^k E_l)^{1/d}} \,\right)\Big\| \leq C \sum_{j=m_n+1}^{\infty} {\mathbb E}\, \left[\, \frac{(\sum_{l=1}^k E_l)^{2}}{(\sum_{l=1}^j E_l)^{2}} \,\right] \,\notag\\ &\leq& C \sum_{j=m_n+1}^{\infty} \frac{k(k+1)}{(j-k-1)(j-k-2)} \,\notag\\ &=& Ck(k+1) \sum_{j=m_n+1}^{\infty} (\frac{1}{j-k-2} - \frac{1}{j-k-1}) = \frac{Ck(k+1)}{m_n-k-1} \;. \end{eqnarray} Notice that $m_n \to \infty$ as $n \to \infty$, therefore, \begin{eqnarray} \lim_{n \to \infty} {\mathbb E} \,\Big\|\, \sum_{j=m_n+1}^{\infty} K\left(\, \frac{\xi_j(\sum_{l=1}^j E_l)^{1/d}}{(\sum_{l=1}^k E_l)^{1/d}} \,\right)\Big\| = 0 \;. \end{eqnarray} \section{Proof of Theorem \ref{thm:unbiased_kLNN}} \label{sec:proof_kLNN} The proof is quite similar to the proof of Theorem~\ref{thm:unbiased_KDE}, so we skip the details and focus on the main steps below. First, we rewrite the estimator as \begin{eqnarray*} {\widehat{J}}^{(k-{\rm LNN})}_{\alpha}=\frac{1}{nB_{k,d,\alpha}} \sum_{i=1}^n \Big\{ \underbrace{\left(\, h\big(\,(c_d nf(X_i))^{1/d} Z_{k,i}, S_{0,i}, S_{1,i}, S_{2,i}\,\big) f(X_i) \,\right)^{\alpha-1}}_{\equiv J_i} \Big\} \;, \end{eqnarray*} where the quantities $S_{0,i}$, $S_{1,i}$, $S_{2,i}$ and $\mu_i$, $\Sigma_i$ are given as follows, \begin{eqnarray} \label{eq:defS_i} S_{0,i} &\equiv& \sum_{j \in {\cal T}_{i,m}} e^{-\frac{\|X_j - X_i\|^2}{2\rho_{k,i}^2}} \;,\\ S_{1,i} &\equiv& \sum_{j \in {\cal T}_{i,m}} \frac{X_j - X_i}{\rho_{k,i}} \, e^{-\frac{\|X_j - X_i\|^2}{2\rho_{k,i}^2}} \;,\\ S_{2,i} &\equiv& \sum_{j \in {\cal T}_{i,m}} \frac{(X_j - X_i)(X_j - X_i)^T}{\rho_{k,i}^2} \, e^{-\frac{\|X_j - X_i\|^2}{2\rho_{k,i}^2}} \;,\\ \mu_i &\equiv& \frac{S_{1,i}}{S_{0,i}} \;,\\ \Sigma_i &\equiv& \frac{S_{2,i}}{S_{0,i}} - \frac{S_{1,i} S_{1,i}^T}{S_{0,i}^2} \;.
\end{eqnarray} and $h : \mathbb{R}^d \times \mathbb{R} \times \mathbb{R}^d \times \mathbb{R}^{d \times d} \to \mathbb{R} $ is defined as \begin{align} &h(t_1, t_2, t_3, t_4) = \frac{c_d t_2}{\|t_1\|^d (2\pi)^{d/2} \det\left(\, \frac{t_4}{t_2} - \frac{t_3 t_3^T}{t^2_2}\,\right)^{1/2}} \exp\{-\frac{1}{2} t_3^T (t_2t_4 - t_3 t_3^T)^{-1} t_3\}\;. \end{align} Since $J_1, J_2, \dots, J_n$ are identically distributed, we have ${\mathbb E} \left[\, {\widehat{J}}^{(k-{\rm LNN})}_{\alpha}\,\right] = {\mathbb E}_{X_1} [ {\mathbb E} [J_1 | X_1 = x] ] /B_{k,d,\alpha}$. By assuming the ansatz that $h(\cdot,\cdot,\cdot,\cdot)$ is bounded, we are able to exchange the limit and the conditional expectation; therefore, we are left to show that \begin{eqnarray} {\mathbb E}[J_1 | X_1 = x] &=& {\mathbb E} \left[\, \left(\, h ( (c_d nf(x))^{1/d} Z_{k,1}, S_{0,1}, S_{1,1}, S_{2,1}) f(x)\,\right)^{\alpha-1} \,\right] \,\notag\\ & \longrightarrow & B_{k,d,\alpha} (f(x))^{\alpha-1}\;, \label{eq:entropy2_L} \end{eqnarray} as $n \to \infty$. To prove this, we show that the empirical quantities $((c_d n f(x))^{1/d} Z_{k,1}, S_{0,1}, S_{1,1}, S_{2,1})$ jointly converge to $(\xi_k (\sum_{\ell=1}^k E_{\ell})^{1/d}, {\tilde{S}}_0^{(\infty)}, {\tilde{S}}_1^{(\infty)}, {\tilde{S}}_2^{(\infty)})$ in distribution. Here ${\tilde{S}}_{\gamma}^{(\infty)}$ is defined as the limit of the following convergent random sequence \begin{eqnarray} \tilde{S}^{(m)}_{\gamma} &\equiv& \sum_{j=1}^{m} \frac{\xi_j^{(\gamma)} (\sum_{\ell=1}^j E_\ell)^{\gamma/d} }{(\sum_{\ell=1}^k E_\ell)^{\gamma/d} } \exp\Big\{- \frac{(\,\sum_{\ell=1}^j E_\ell\,)^{2/d}}{2(\,\sum_{\ell=1}^k E_\ell \,)^{2/d}} \Big\} \;, \end{eqnarray} where $\xi_j^{(0)} = 1$, $\xi_j^{(1)} = \xi_j$, $\xi_j^{(2)} = \xi_j\xi_j^T$ and $\tilde{S}^{(\infty)}_{\gamma} = \lim_{m \to \infty} \tilde{S}^{(m)}_{\gamma}$.
Here Lemma~\ref{lem:order_stat} and Lemma~\ref{lem:tail} (applied with $K_0(u) = \exp\{-\|u\|^2/2\}$, $K_1(u) = u \exp\{-\|u\|^2/2\}$ and $K_2(u) = uu^T \exp\{-\|u\|^2/2\}$ for $S_{0,1}$, $S_{1,1}$ and $S_{2,1}$, respectively) are used to prove the convergence, following the same approach as in the proof of Theorem~\ref{thm:unbiased_KDE}. By the assumption that $h$ is continuous and bounded, we obtain \begin{eqnarray} &&\lim_{n \to \infty} {\mathbb E}[J_1 | X_1 = x] \,\notag\\ &=& {\mathbb E} \left[\, \lim_{n \to \infty} \left(\, h ( (c_d nf(x))^{1/d} Z_{k,1}, S_{0,1}, S_{1,1}, S_{2,1}) f(x)\,\right)^{\alpha-1} \,\right] \,\notag\\ & =& (f(x))^{\alpha-1} \underbrace{ {\mathbb E}\left[\, \left( h \left(\,\xi_k \left(\, \sum_{\ell=1}^k E_\ell \,\right)^{1/d}, \tilde{S}^{(\infty)}_{0}, \tilde{S}^{(\infty)}_{1}, \tilde{S}^{(\infty)}_{2} \,\right) \,\right)^{\alpha-1} \,\right]}_{\equiv B_{k,d,\alpha}}\;, \end{eqnarray} which proves the asymptotic unbiasedness of ${\widehat{J}}_{\alpha}^{(k-{\rm LNN})}(X)$. For the variance, we use the Efron-Stein inequality. Let ${\widehat{J}}$ be the $k$-LNN estimate of $J_{\alpha}(X)$ based on the original samples and ${\widehat{J}}^{(i)}$ the estimate when $X_i$ is replaced by $X'_i$. Since the $k$-LNN estimate only uses the $m$ nearest neighbors of each sample, the set $S_j = \{i: J_i - J_i^{(j)} \neq 0\}$ has no more than $2m\gamma_d$ elements.
Therefore, \begin{eqnarray} {\textrm{ Var }} \left[ {\widehat{J}} \right] &\leq& 2 \sum_{j=1}^n \mathbb{E} \left[\, \left( {\widehat{J}} - {\widehat{J}}^{(j)} \right)^2 \,\right] \,\notag\\ &\leq& \frac{4}{n^2 B^2_{k,d,\alpha}} \sum_{j=1}^n \left(\, |S_j| \sum_{i \in S_j} \left(\, \mathbb{E} \left[\, J_i^2\,\right] + \mathbb{E} \left[\, (J_i^{(j)})^2\,\right] \,\right) \,\right) \,\notag\\ &\leq& \frac{4}{n^2B^2_{k,d,\alpha}} \sum_{j=1}^n \left(\, 2 |S_j|^2 B_{k,d,2\alpha-1} C\,\right) \leq \frac{32m^2 \gamma_d^2 B_{k,d,2\alpha-1} C}{n B^2_{k,d,\alpha}} \;, \end{eqnarray} where $C$ is the upper bound on ${\mathbb E} \left[\, |f(X)|^{2\alpha-2} \,\right]$. Taking $m = O(\log n)$ completes the proof. \begin{comment}
This assumption can be avoided, at the cost of additional assumptions, in results on the convergence rate of the estimator with respect to the sample size, as in \cite{Pal10,GOV16De,SP16,BSY16}. Under this ansatz, by the dominated convergence theorem, we can exchange the limit with the conditional expectation and obtain \begin{eqnarray} \label{eq:entropy1} \lim_{n \to \infty} {\mathbb E}[{\widehat{J}}_{\alpha}^{(n)}] = \frac{1}{B_{k,d,\alpha}} {\mathbb E}_{X_1} \left[\, \lim_{n \to \infty} {\mathbb E} \left[\, J_1 | X_1 = x \,\right] \,\right]. \end{eqnarray} Now we will show that the inner expectation converges to $(f(x))^{\alpha-1}$ multiplied by a constant that is independent of the underlying distribution. Precisely, for almost every $x$ and given $X_1 = x$, we have \begin{eqnarray} {\mathbb E}[J_1 | X_1 = x] &=& {\mathbb E} \left[\, \left(\, h \big( (c_d nf(x))^{1/d} Z_{k,1}, S_{0,1}, S_{1,1}, S_{2,1}\big) f(x)\,\right)^{\alpha-1} \,\right] \,\notag\\ & \longrightarrow & B_{k,d,\alpha} (f(x))^{\alpha-1}\;, \label{eq:entropy2} \end{eqnarray} as $n\to \infty$. Here $B_{k,d,\alpha}$ is a constant depending only on $k$, $d$ and $\alpha$, defined in \eqref{eq:defB}. Therefore, \begin{eqnarray} {\mathbb E}_{X_1} \left[\, \lim_{n \to \infty} {\mathbb E}[J_1|X_1 = x]\right] & =& {\mathbb E}_{X_1} [B_{k,d,\alpha} (f(X_1))^{\alpha-1}] \,\notag\\ &=& B_{k,d,\alpha} J_{\alpha}(X) \;.\label{eq:entropy4} \end{eqnarray} Together with \eqref{eq:entropy1}, this finishes the proof of the desired claim. \\ We are now left to prove the convergence in \eqref{eq:entropy2}. We first give a formal definition of the multiplicative factor $B_{k,d,\alpha}$ by replacing the sample-defined quantities $S_{0,i}, S_{1,i}$ and $S_{2,i}$ by similar quantities defined from order statistics, and use Lemma~\ref{lem:order_stat} to prove the convergence. Recall that our order statistics are defined by two sequences of $m$ i.i.d. random variables: i.i.d. standard exponential random variables $E_1, \dots, E_m$ and i.i.d.
random variables $\xi_1, \dots, \xi_m$ uniformly distributed over the $d$-dimensional unit sphere. Now we define \begin{eqnarray} \label{eq:defB} B_{k,d,\alpha} &\equiv & {\mathbb E}\left[\, \left( h \left(\,\xi_k \left(\, \sum_{\ell=1}^k E_\ell \,\right)^{1/d}, \tilde{S}^{(\infty)}_{0}, \tilde{S}^{(\infty)}_{1}, \tilde{S}^{(\infty)}_{2} \,\right) \,\right)^{\alpha-1}\,\right] \;. \end{eqnarray} We will show that ${\tilde{S}}_\gamma^{(\infty)}$ is the limit of the quantity $S_{\gamma,i}$ defined from samples, for each $\gamma\in\{0,1,2\}$. We also know that $(c_d nf(x))^{1/d} Z_{k,i}$ converges to $\xi_k (\sum_{\ell=1}^k E_\ell)^{1/d}$ for almost every $x$ from Lemma \ref{lem:order_stat}. Here ${\tilde{S}}^{(\infty)}$ is defined as the limit of a convergent random sequence: with $\xi_j^{(0)} = 1$, $\xi_j^{(1)} = \xi_j$ and $\xi_j^{(2)} = \xi_j \xi_j^T$, we set ${\tilde{S}}^{(\infty)}_\gamma = \lim_{m\to \infty} {\tilde{S}}^{(m)}_\gamma$ for $\gamma \in \{0,1,2\}$. This limit exists, since ${\tilde{S}}_0^{(m)}$ is non-decreasing in $m$, and the convergence of ${\tilde{S}}_1^{(m)}$ and ${\tilde{S}}_2^{(m)}$ follows from Lemma \ref{lem:tail}. We introduce simpler notations for the joint random variables: ${\tilde{S}}^{(m)}=(\xi_k (\sum_{\ell=1}^k E_{\ell})^{1/d}, \tilde{S}^{(m)}_{0},\tilde{S}^{(m)}_{1},\tilde{S}^{(m)}_{2})$ and ${\tilde{S}}^{(\infty)}=(\xi_k (\sum_{\ell=1}^k E_{\ell})^{1/d}, \tilde{S}^{(\infty)}_{0}, \tilde{S}^{(\infty)}_{1}, \tilde{S}^{(\infty)}_{2})$. Considering the quantity $S^{(n)} = ((c_d nf(x))^{1/d} Z_{k,i}, S_{0,i}, S_{1,i}, S_{2,i})$ defined from samples, we show that it converges to ${\tilde{S}}^{(\infty)}$. Precisely, applying the triangle inequality, \begin{eqnarray} d_{\rm TV} (S^{(n)},{\tilde{S}}^{(\infty)}) &\leq& d_{\rm TV} (S^{(n)},{\tilde{S}}^{(m)}) + d_{\rm TV} ({\tilde{S}}^{(m)},{\tilde{S}}^{(\infty)}) \;, \label{eq:entropy3} \end{eqnarray} and we show that both terms converge to zero for any $m = \Theta(\log n)$.
Given that $h$ is continuous and bounded from the ansatz, we obtain \begin{eqnarray*} \lim_{n\to\infty} {\mathbb E}[J_1|X_1 = x] &=& {\mathbb E}[\lim_{n\to\infty} (h(S^{(n)}))^{\alpha-1} (f(x))^{\alpha-1}|X_1 = x] \\ &=& (f(x))^{\alpha-1} {\mathbb E}[(h({\tilde{S}}^{(\infty)}))^{\alpha-1}]\;, \end{eqnarray*} for almost every $x$, proving \eqref{eq:entropy4}. The convergence of the first term follows from Lemma \ref{lem:order_stat}. Precisely, consider the function $g_{m}: \mathbb{R}^{d \times m} \to \mathbb{R}^d \times \mathbb{R} \times \mathbb{R}^d \times \mathbb{R}^{d \times d}$ defined as \begin{eqnarray} g_{m}(t_1, t_2, \dots, t_{m}) = \left(\, t_k, \sum_{j=1}^{m} \exp\{-\frac{\|t_j\|^2}{2\|t_k\|^2}\}, \sum_{j=1}^{m} \frac{t_j}{\|t_k\|} \exp\{-\frac{\|t_j\|^2}{2\|t_k\|^2}\}, \sum_{j=1}^{m} \frac{t_j t_j^T}{\|t_k\|^2} \exp\{-\frac{\|t_j\|^2}{2\|t_k\|^2}\} \,\right) \;, \end{eqnarray} such that $S^{(n)} = g_{m} \left(\, (c_dnf(x))^{1/d} \left(\, Z_{1,i}, Z_{2,i}, \dots, Z_{m,i} \,\right) \,\right)$, which follows from the definition of $S^{(n)}=((c_dnf(x))^{1/d}Z_{k,i},S_{0,i},S_{1,i},S_{2,i})$ in \eqref{eq:defS}. Similarly, ${\tilde{S}}^{(m)} = g_{m} \left(\, \xi_1 E_1^{1/d}, \xi_2(E_1 + E_2)^{1/d}, \dots, \xi_{m}(\sum_{\ell=1}^{m} E_\ell)^{1/d} \,\right)$. Since $g_{m}$ is continuous, for any set $A \subset \mathbb{R}^d \times \mathbb{R} \times \mathbb{R}^d \times \mathbb{R}^{d \times d}$ there exists a set ${\widetilde{A}} \subset \mathbb{R}^{d \times m}$ such that $g_{m}({\widetilde{A}}) = A$.
So for any $x$ for which there exists $\varepsilon > 0$ such that $f(a) > 0$, $\|\nabla f(a)\| = O(1)$ and $\|H_f(a)\| = O(1)$ whenever $\|a - x\| < \varepsilon$, we have: \begin{align} & d_{\rm TV} ( S^{(n)},{\tilde{S}}^{(m)}) \,\notag\\ &= \sup_{A } \left|\, {\mathbb P}\left\{g_{m} \left(\, (c_dnf(x))^{\frac1d} Z_{1,i}, \dots, (c_dnf(x))^{\frac1d}Z_{m,i} \,\right) \in A \right\} - {\mathbb P}\{g_{m} (\, \xi_1 E_1^{\frac1d}, \dots, \xi_{m}(\sum_{\ell=1}^{m} E_{\ell})^{\frac1d} \,) \in A \} \,\right| \,\notag\\ &\leq \sup_{{\widetilde{A}} \subset \mathbb{R}^{d \times m}} \left|\, {\mathbb P}\left\{\left(\, (c_dnf(x))^{1/d} Z_{1,i}, \dots, (c_dnf(x))^{1/d} Z_{m,i} \,\right) \in {\widetilde{A}} \right\} - {\mathbb P}\{(\, \xi_1 E_1^{1/d}, \dots, \xi_{m} (\sum_{\ell=1}^{m} E_\ell)^{1/d} \,) \in {\widetilde{A}} \} \,\right| \,\notag\\ &= d_{\rm TV}\left( \left(\, (c_dnf(x))^{1/d} Z_{1,i}, \dots, (c_dnf(x))^{1/d} Z_{m,i} \,\right) \,,\, \left(\, \xi_1 E_1^{1/d}, \dots, \xi_{m} (\sum_{\ell=1}^{m} E_\ell)^{1/d} \,\right)\,\right) \,\notag\\ &\stackrel{n \rightarrow \infty}{\longrightarrow} 0 \label{eq:converge_1}\;, \end{align} where the final convergence follows from Lemma \ref{lem:order_stat}. By the assumption that $f$ has open support and that $\|\nabla f\|$ and $\|H_f\|$ are bounded almost everywhere, this convergence holds for almost every $x$. For the second term in \eqref{eq:entropy3}, let $\tilde{T}^{(m)}_{\gamma} = {\tilde{S}}^{(\infty)}_\gamma - {\tilde{S}}^{(m)}_\gamma$; we claim that $\tilde{T}^{(m)}_{\gamma}$ converges to 0 in distribution by the following lemma. \begin{lemma}\cite[Lemma 10.1]{GOV16} \label{lem:tail} Assume $m \to \infty$ as $n \to \infty$ and $k\geq 3$. Then \begin{eqnarray} \lim_{n \to \infty} {\mathbb E} \|\, \tilde{T}^{(m)}_{\gamma} \,\| = 0 \end{eqnarray} for any $\gamma \in \{0,1,2\}$. Hence $(\tilde{T}^{(m)}_{0},\tilde{T}^{(m)}_{1},\tilde{T}^{(m)}_{2})$ converges to $(0,0,0)$ in distribution.
\end{lemma} This implies that $(\tilde{S}^{(m)}_{0},\tilde{S}^{(m)}_{1},\tilde{S}^{(m)}_{2})$ converges to $(\tilde{S}^{(\infty)}_{0},\tilde{S}^{(\infty)}_{1},\tilde{S}^{(\infty)}_{2})$ in distribution, i.e., \begin{eqnarray} d_{\rm TV}({\tilde{S}}^{(m)},{\tilde{S}}^{(\infty)}) \stackrel{n \rightarrow \infty}{\longrightarrow} 0 \label{eq:converge_2}\;. \end{eqnarray} Combining \eqref{eq:converge_1} and \eqref{eq:converge_2} in \eqref{eq:entropy3} implies the desired claim. \end{comment}
https://arxiv.org/abs/1702.03051
Density Functional Estimators with k-Nearest Neighbor Bandwidths
Estimating expected polynomials of density functions from samples is a basic problem with numerous applications in statistics and information theory. Although kernel density estimators are widely used in practice for such functional estimation problems, practitioners are left on their own to choose an appropriate bandwidth for each application in hand. Further, kernel density estimators suffer from boundary biases, which are prevalent in real world data with lower dimensional structures. We propose using the fixed-k nearest neighbor distances for the bandwidth, which adaptively adjusts to local geometry. Further, we propose a novel estimator based on local likelihood density estimators, that mitigates the boundary biases. Although such a choice of fixed-k nearest neighbor distances to bandwidths results in inconsistent estimators, we provide a simple debiasing scheme that precomputes the asymptotic bias and divides off this term. With this novel correction, we show consistency of this debiased estimator. We provide numerical experiments suggesting that it improves upon competing state-of-the-art methods.
https://arxiv.org/abs/1506.06172
Stepwise Methods in Optimal Control Problems
We introduce a new method, the stepwise method, for solving optimal control problems. Our first motivation for the new approach emanates from limitations on continuous-time control functions in the PMP. Practically, in most real-world models we are not able to change the control value at every instant, as in drug dose calculation or in resource allocation problems. But it is practical to change the control value over certain time sections, which leads to a stepwise control function. We study some examples via the classical Pontryagin Maximum Principle (PMP) and via the stepwise method. The new method has some other advantages in comparison with the PMP method in models with complicated cost functions or systems. In real-world applications, the new method has a high performance in implementation.
\section{Introduction} Optimal control theory is an effective tool in real-world modelling, such as physical, biological, economical and other models. Diverse examples are studied in \cite{sethi} and \cite{lenhart}. Optimal control theory is used in the chemotherapy of cancer \cite{fisterpan}. Several papers study epidemiology from the optimal control viewpoint, for example \cite{DSDI}. The most important classical method in optimal control theory is the remarkable result of the Pontryagin Maximum Principle (PMP), which is used in various forms in applied problems. There are necessary conditions in the PMP that lead to limitations in applications. These conditions can occur for the functions in the state equations, the cost function and the control functions. Here we are interested in finding a new optimal control method with more proficiency in complicated cases. In this technique, control functions are selected among the stepwise functions. Another notable point in the new approach is the combination of heuristic and classical methods. In the forthcoming sections, the numerical forward-backward sweep method and the stepwise method are applied to some problems so that one can obtain a clear view of the potency and power of the new method. Furthermore, the stepwise method works easily on problems with more complicated cost functions, in contrast to classical methods. In this paper the proficiency of the stepwise method is shown in some models. \section{Introductory example and definitions of the stepwise method with fixed step-size} In this section, we describe the stepwise method through a simple example. Consider the problem \begin{center} $\max\{J=\int_{0}^{2}(2x-3u-u^2)dt\}$ \end{center} subject to $\dot{x}=x+u,\quad x(0)=5$ and the control constraint $u\in \Omega=[0,2].$ Solution: We can find the optimal control function $u(t)$ via the PMP.
The Hamiltonian is \begin{center} $H=(2x-3u-u^2)+\lambda(x+u)=(2+\lambda)x-(u^2+3u-\lambda u).$ \end{center} One can find the optimal control policy by differentiating $H$ with respect to $u$. Thus \begin{center} $\frac{\partial H}{\partial u}=-2u-3+\lambda=0,$ \end{center} so that the control function is $u(t)=\frac{\lambda(t)-3}{2}$, provided $u(t)$ stays within the interval $\Omega=[0,2].$ We next derive the adjoint equation as \begin{equation*} \begin{aligned} &\dot{\lambda}=-\frac{\partial H}{\partial x}=-2-\lambda,\quad \lambda(2)=0\\&\dot{\lambda}+\lambda=-2, \quad \lambda(2)=0. \end{aligned} \end{equation*} This equation can be solved to give $\lambda(t)=2(e^{2-t}-1)$. When we impose the control constraint $\Omega=[0,2],$ the optimal control is obtained: \begin{equation}\nonumber u=\left\{\begin{array}{ccccc} 2 & \mbox{if \quad $e^{2-t}-2.5>2$}, \\ e^{2-t}-2.5 &\mbox{if \quad$0\leq e^{2-t}-2.5\leq 2$}, \\ 0 &\mbox{if \quad$e^{2-t}-2.5<0$}. \end{array}\right. \end{equation} \begin{figure}[htbp] \begin{center} \scalebox{0.35}{\includegraphics{seth.eps}} \caption{Graph of optimal control for the introductory example.} \label{fig:seth} \end{center} \end{figure} The final cost $J$ is $68.93$. To solve this problem by the new method, we change the maximization problem to a minimization by converting $J$ to $1/(1+J)$; the final cost in the minimization problem is $0.0143$. When we use the PMP, the control functions must satisfy special conditions. In practical applications, we are not able to change the control value continuously at every moment. But we can change the control value over certain slices of time, which leads to a step function. In order to solve this problem by the stepwise method, we seek the control function among stepwise functions. For this purpose, we divide $[0,T]$ into equal parts. Suppose that the control function $u(t)$ has a constant value in each part.
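Before turning to the stepwise construction, the PMP solution derived above admits a quick numerical sanity check (our own sketch, not part of the original computation): verify that $\lambda(t)=2(e^{2-t}-1)$ satisfies the adjoint equation, and integrate the state equation under the clipped control. The cost value depends mildly on the discretization, so we only check a broad window around the reported value.

```python
import math

def lam(t):
    # Claimed adjoint solution: lambda(t) = 2(e^{2-t} - 1), lambda(2) = 0.
    return 2.0 * (math.exp(2.0 - t) - 1.0)

def u_opt(t):
    # u = (lambda - 3)/2 = e^{2-t} - 2.5, clipped to the admissible set [0, 2].
    return min(2.0, max(0.0, math.exp(2.0 - t) - 2.5))

# Check lambda' = -2 - lambda on a grid, and the terminal condition.
for i in range(200):
    t, h = 2.0 * i / 200, 1e-6
    deriv = (lam(t + h) - lam(t - h)) / (2 * h)
    assert abs(deriv - (-2.0 - lam(t))) < 1e-5
assert abs(lam(2.0)) < 1e-12

def rk4_cost(n_steps=4000, T=2.0):
    """Integrate x' = x + u_opt(t), x(0) = 5, accumulating
    J = int (2x - 3u - u^2) dt with RK4 for the state and the
    trapezoid rule for the running cost."""
    dt = T / n_steps
    x, J = 5.0, 0.0
    fx = lambda t, x: x + u_opt(t)
    for i in range(n_steps):
        t = i * dt
        k1 = fx(t, x); k2 = fx(t + dt/2, x + dt/2*k1)
        k3 = fx(t + dt/2, x + dt/2*k2); k4 = fx(t + dt, x + dt*k3)
        x_new = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        u0, u1 = u_opt(t), u_opt(t + dt)
        J += dt/2*((2*x - 3*u0 - u0*u0) + (2*x_new - 3*u1 - u1*u1))
        x = x_new
    return J

J = rk4_cost()
```

The control switches at $t=2-\ln 4.5$ and $t=2-\ln 2.5$, where the clipping bounds are attained; the trapezoid rule is accurate enough away from these two kinks.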
Let us substitute the stepwise control function into the control system $\dot{x}=f(x,u,t),\ x(0)=x_0$. For $t\in [0,\frac{T}{3}]$ we have $u(t)=\alpha$, and one can solve the ODE $\dot{x}=f(x,\alpha,t),\quad x(0)=x_0$. For $t\in [\frac{T}{3},\frac{2T}{3}]$, we solve the ODE $\dot{x}=f(x,\beta,t)$ together with the initial condition $x(\frac{T}{3})$ given by the terminal point of the system on $[0,\frac{T}{3}]$. The same procedure is repeated for $\dot{x}=f(x,\gamma,t)$ with initial condition $x(\frac{2T}{3})$ when $t\in [\frac{2T}{3},T]$. We can compute the cost for a typical stepwise control function and search for the optimal values of $(\alpha,\beta,\gamma)$. In this manner we convert the optimal control problem into an optimization problem. For solving the optimization problem, we are able to use analytical, heuristic and metaheuristic methods such as pattern search, simulated annealing, genetic algorithms and others, which work easily on problems with complicated systems and cost functions. An important question may arise here about the final cost: is it possible that the difference between the final cost in the PMP method and in the stepwise method exceeds our expectations? A simple lemma about stepwise functions and continuous functions is helpful here. \begin{lemma} For every continuous function $u(t)$, there is a sequence $\{u_n(t)\}$ of stepwise functions such that $\lim_{n\to\infty} u_n(t)= u(t).$ \end{lemma} Using this lemma, we can be confident that the new method does not generate useless solutions. Note that some step functions are not able to satisfy the PMP conditions and do not belong to the admissible controls. But it is possible that the new method finds a step function with lower cost than the PMP solution. The final cost in the stepwise method is equal to $0.014305429952056222$. We illustrate this method in the figure below.
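The conversion to a finite-dimensional optimization can be sketched as follows for the introductory example (our own sketch: a coarse grid search stands in for the pattern search used in the text, and the closed-form per-segment solution exploits the linearity of $\dot{x}=x+u$):

```python
import itertools
import math

def segment(x0, u, delta):
    """Propagate x' = x + u with constant u over an interval of length delta.
    The linear ODE has the closed form x(t) = (x0 + u) e^t - u, so both the
    endpoint and the integral of the running reward 2x - 3u - u^2 are exact."""
    e = math.exp(delta)
    x_end = (x0 + u) * e - u
    int_2x = 2.0 * ((x0 + u) * (e - 1.0) - u * delta)
    return x_end, int_2x - (3.0 * u + u * u) * delta

def stepwise_cost(controls, T=2.0, x0=5.0):
    """Objective J for a piecewise-constant control, one value per equal part."""
    delta = T / len(controls)
    x, J = x0, 0.0
    for u in controls:
        x, dJ = segment(x, u, delta)
        J += dJ
    return J

# Coarse grid search over (alpha, beta, gamma) in [0, 2]^3 with step 0.25.
grid = [0.25 * i for i in range(9)]
best = max(itertools.product(grid, repeat=3), key=stepwise_cost)
best_J = stepwise_cost(best)
```

In this form any off-the-shelf optimizer (pattern search, simulated annealing, a genetic algorithm) can replace the grid search, and unequal subinterval widths are accommodated by passing a list of `delta` values instead of dividing $T$ evenly.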
Note that we have obtained all the numerical results in 20--30 runs. \begin{figure}[htbp] \begin{center} \scalebox{0.35}{\includegraphics{stairintro.eps}} \caption{Graph of optimal control for the introductory example via the stepwise method.} \label{fig:stair} \end{center} \end{figure} We continue our study of the 3-step stepwise method via pattern search, simulated annealing and a genetic algorithm. The results are illustrated in the next figure and table. \begin{center} \begin{tabular}{ |l|l| } \hline Method & Final cost (J) \\ \hline Pattern search & $0.014305429952056222$ \\ Simulated annealing & $0.014306824417196181$\\ Genetic algorithm & $0.014358890512889641$ \\ \hline \end{tabular} \end{center} \begin{figure}[htbp] \begin{center} \scalebox{0.35}{\includegraphics{3stepfix.eps}} \caption{Results for the 3-step stepwise method via pattern search, simulated annealing and genetic algorithm.} \label{fig:3stepfix} \end{center} \end{figure} Trying to obtain a better stepwise control function through additional steps seems natural. We applied a 5-step function instead of the 3-step function. These results coincide with our expectations. \begin{center} \begin{tabular}{ |l|l| } \hline Method & Final cost (J) \\ \hline Pattern search & $0.014283994290191705$ \\ Simulated annealing & $0.01430531425509952$\\ Genetic algorithm & $0.0143199869929726 $ \\ \hline \end{tabular} \end{center} \begin{figure}[htbp] \begin{center} \scalebox{0.35}{\includegraphics{5-stepfix.eps}} \caption{Results for the 5-step stepwise method via pattern search, simulated annealing and genetic algorithm.} \label{fig:5stepfix} \end{center} \end{figure} \section{Stepwise method with variable step-size} In the former section, we divided the interval into equal parts. Here we let the optimization method decide the widths of the subintervals. Let us come back to the introductory example.
Here, instead of dividing $[0,2]$ into $[0,2/3]$, $[2/3,4/3]$ and $[4/3,2]$, we divide it into $[0,a]$, $[a,b]$ and $[b,2]$ and let the optimization method decide $a$ and $b$. The next figures and tables show the improvement in the optimal policy and final cost, and give complete information about the subintervals and the control value on each subinterval in the variable stepwise method. \begin{center} \begin{tabular}{ |l|l|} \hline Method & Final cost (J) \\ \hline Pattern search & $0.012566299700335069$ \\ Simulated annealing & $0.01334206413715155$\\ Genetic algorithm & $0.012912036300546229 $\\ \hline \end{tabular} \end{center} \small \begin{center} \begin{tabular}{ |l|l|l|} \hline Method& Subintervals & control value \\ \hline Pattern search &[0,0],[0,1],[1,2]&(0,2,0) \\ Simulated annealing &[0,0.0036],[0.0036,0.9738],[0.9738, 2]&(1.6336,1.8345,0.5623)\\ Genetic algorithm &[0,0.0034],[0.0034,0.9027],[0.9027,2]&(0.7718,1.9087,0.1524) \\ \hline \end{tabular} \end{center} \normalsize \begin{figure}[htbp] \begin{center} \scalebox{0.3}{\includegraphics{var3-step.eps}} \caption{Results for the variable 3-step stepwise method via pattern search, simulated annealing and genetic algorithm.} \label{fig:var3step} \end{center} \end{figure} \section{Stepwise method in real world models} As we mentioned before, there are limitations on admissible controls in the PMP approach, such as continuity with respect to time. Practically, we are not able to change the value of the control function at every moment of the time interval. Instead, one can change the control value at several time sections. Thus, the stepwise method seems to be a reasonable way to treat real-world applications. For example, when you want to make a decision about resource allocation in epidemiological models, you cannot alter your strategy over a short time interval, because changing the vaccination rate or prevention strategy may impose heavy costs.
There are similar problems in optimal control models of the treatment of disease through the use of drugs. It seems that there is sufficient motivation to practice the stepwise method on real-world processes. The next examples show the performance of the stepwise method in contrast to the classical PMP method. \subsection{Example: Chemotherapy} Optimal control methods are useful in optimal control models of chemotherapy. Renee Fister et al. in \cite{fisterpan} studied different cell-kill models of chemotherapy. They characterized the optimal control strategy that minimizes the cancer mass and the cost of the total amount of drug. We use the stepwise method for some of their models. The problem is: \begin{equation*} \min_u \int_{0}^{T}a(N(t)-N_d)^2+bu^2(t) dt \end{equation*} subject to \begin{equation*} \begin{aligned} & N'(t)=rN\ln(\frac{1}{N})-u(t)\delta N(t)\\& N(0)=N_0,\quad u(t)\geq 0. \end{aligned} \end{equation*} The following parameters appear in the model:\\ $N(t)$: the normalized density of the tumor at time $t$;\\ $r$: the growth rate of the tumor;\\ $\delta$: the magnitude of the dose;\\ $u(t)$: the time-dependent pharmacokinetics of the drug; \\ $N_d$: the desired tumor density.\\ Let us enter these values: $r=0.1$, $a=3$, $b=1$, $\delta=0.45$, $N_d=0$, $N_0=0.975$ and $T=20$. The next figures show the optimal control strategy in the PMP and stepwise methods. \begin{figure}[htbp] \begin{center} \scalebox{0.3}{\includegraphics{cancerclassic.eps}} \caption{Graph of optimal control for the chemotherapy example via the PMP method.} \label{fig:cancerclassic} \end{center} \end{figure} Now we use the stepwise method in this model and present the results below. \begin{figure}[htbp] \begin{center} \scalebox{0.3}{\includegraphics{cancer5step.eps}} \caption{Graph of optimal control for the chemotherapy example via the 5-step stepwise method.} \label{fig:cancer5step} \end{center} \end{figure} The final cost in the PMP method is equal to $10.7758$ and in the 5-step stepwise method it is $10.866632710096287$.
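Evaluating one stepwise candidate for this model amounts to integrating the Gompertz dynamics with a constant dose on each subinterval. A minimal single-segment sketch follows (the constant dose $u=0.4$ is our own illustrative choice, not a value from the text; the parameter values are the ones listed above):

```python
import math

# Gompertz tumor model N' = r N ln(1/N) - u delta N with running cost
# a (N - N_d)^2 + b u^2.  Parameter values from the text.
r, a, b, delta, N_d, N0, T = 0.1, 3.0, 1.0, 0.45, 0.0, 0.975, 20.0

def simulate(u, n_steps=20000):
    """RK4 integration of the state and trapezoid accumulation of the cost
    for a constant dose u on [0, T]."""
    dt = T / n_steps
    dN = lambda N: r * N * math.log(1.0 / N) - u * delta * N
    N, J = N0, 0.0
    for _ in range(n_steps):
        run0 = a * (N - N_d) ** 2 + b * u * u
        k1 = dN(N); k2 = dN(N + dt/2*k1); k3 = dN(N + dt/2*k2); k4 = dN(N + dt*k3)
        N = N + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        J += dt/2 * (run0 + a * (N - N_d) ** 2 + b * u * u)
    return N, J

N_end, J = simulate(0.4)
```

For constant $u$ the substitution $y=\ln N$ linearizes the dynamics to $\dot y = -ry - u\delta$, giving the exact endpoint $N(T)=\exp\big((\ln N_0 + u\delta/r)e^{-rT} - u\delta/r\big)$, which is a convenient correctness check on the integrator.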
\vspace{-.3cm} \subsection{Example: Differential susceptibility and differential infectivity model} Based on \cite{DSDI}, we develop an optimal control formulation of the DSDI model with two groups of susceptible and two groups of infected individuals \cite{DSDIme}. Because of the apparent diversity of examples, the idea of dividing the susceptible and infected populations into two subgroups is examined. For example, in plenty of diseases the disease process differs between males and females, children and adults, addicted and non-addicted individuals, and so on. The following parameters appear in our model:\\ $\mu$: natural death rate;\\ $\nu_i$: the rate at which infectives in $I_i$ are removed or become immune;\\ $\delta$: disease-induced mortality rate for the infectives;\\ $\lambda_i$: the rate of infection for susceptibles in group $S_i$. \\ The infectivity rate $\lambda_i$ is given by $\lambda_i=r\alpha_i\sum_{j=1}^{2}\beta_jI_j$, in which $\beta_j$ is the transmission probability per contact and $r$ is the number of contacts of an individual per unit time. We suggest the following ODE system (\ref{system2}) describing the model with controls. \begin{equation}\label{system2} \begin{cases} \dot{S_1}&= \mu (p_1S^0- S_1)-\lambda_1 S_1(1-u_1)\\ \dot{S_2}&= \mu (p_2S^0- S_2)-\lambda_2 S_2(1-u_2)\\ \dot{I_1}&= q_{11}\lambda_1 S_1(1-u_1)+q_{21}\lambda_2 S_2(1-u_2)-(\mu+\nu_1+u_3)I_1\\ \dot{I_2}&= q_{12}\lambda_1 S_1(1-u_1)+q_{22}\lambda_2 S_2(1-u_2)-(\mu+\nu_2+u_4)I_2\\ \dot{R}&=(\nu_1+u_3)I_1+(\nu_2+u_4)I_2-(\mu+\delta)R \end{cases} \end{equation} The control functions $u_1(t)$, $u_2(t)$, $u_3(t)$ and $u_4(t)$ have to be bounded, Lebesgue-integrable functions with values in $[0,1]$. $u_1(t)$ and $u_2(t)$ measure the time-dependent efforts on the preventive strategy for susceptible individuals in $S$, to reduce the number of individuals that may become infectious. The control functions $u_3(t)$ and $u_4(t)$ measure the time-dependent efforts on the treatment of infected individuals in $I_1$ and $I_2$, respectively.
These controls increase the output flow of people from the infected compartments into the recovered class. The objective functional to be minimized is:\\ \begin{equation}\label{objective} J(u_1,u_2,u_3,u_4)=\int^{T}_{0}{AI_1^2+BI_2^2+Cu_1^2+Du_2^2+Eu_3^2+Fu_4^2}dt \end{equation} Here, $A,B,C,D,E,F$ are adjustment parameters. We seek an optimal control quadruple $(u_1^*,u_2^*,u_3^*,u_4^*)$ such that \begin{equation*} J(u_1^*,u_2^*,u_3^*,u_4^*)=\min\{J(u_1,u_2,u_3,u_4) \,|\, (u_1,u_2,u_3,u_4)\in U\} \end{equation*} where \small $U=\{(u_1,u_2,u_3,u_4) \,|\, u_i \mbox{ measurable},\ 0\leq u_i \leq 1,\ t\in [0,T],\ i=1,2,3,4\}$ \normalsize is the control set. Let us enter the following values into the model system. \small \begin{center} \begin{tabular}{ |l|l|l| } \hline \multicolumn{2}{|c|}{Parameters and values} \\ \hline $S^0=1$ & $\delta=0$ \\ $\mu=.012$ & $S_1(0)=0.47$\\ $T=1000$&$S_2(0)=0.47$\\ $p_1=0.5$ & $I_1(0)=0.02$\\ $p_2=0.5$ & $I_2(0)=.04$\\ $\alpha_1=0.05$ & $R(0)=0$ \\ $\alpha_2=0.2$&$\beta_1=0.2$ \\ $\nu_1=0.15$ & $\beta_2=0.06$ \\ $\nu_2=0.6$ &$r=25 $ \\ $q_{11}=0.9$&$q_{12}=0.1$ \\ $q_{21}=.1$& $q_{22}=.9$ \\ $A=3$ & $B=3$ \\ $C=0.002$ & $D=0.002$ \\ $E=0.002$ & $F=0.002$ \\ \hline \end{tabular} \end{center} \normalsize Below, we depict the optimal control policy from the PMP method and from the stepwise method (with pattern search for the optimization problem). The final cost in the PMP method is equal to $0.1059$ and in the 5-step stepwise method it is $0.11107136532373643$. \begin{figure}[htbp] \begin{center} \scalebox{0.31}{\includegraphics{DSDI.eps}} \caption{Graph of optimal control for the DSDI model via PMP.} \label{fig:DSDI} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \scalebox{0.35}{\includegraphics{DSDIstep.eps}} \caption{Graph of optimal control for the DSDI model via the stepwise method.} \label{fig:DSDIstep} \end{center} \end{figure} \section{Conclusion} We introduced the stepwise method for optimal control problems. This method can replace the classical PMP method in real-world problems.
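To score a stepwise candidate for this model, each constant-control piece is integrated numerically. A minimal sketch with a single constant control vector follows (the value $(0.5,0.5,0.5,0.5)$ is our own illustrative choice, not a policy from the text; the parameter values are those in the table above):

```python
# RK4 evaluation of the DSDI system under constant controls -- how one
# stepwise candidate is scored.  Parameter values from the table above.
mu, S0tot, T = 0.012, 1.0, 1000.0
p = (0.5, 0.5); alpha = (0.05, 0.2); beta = (0.2, 0.06); nu = (0.15, 0.6)
rr, deltaD = 25.0, 0.0
q = ((0.9, 0.1), (0.1, 0.9))
A_, B_, C_, D_, E_, F_ = 3.0, 3.0, 0.002, 0.002, 0.002, 0.002

def deriv(s, u):
    S1, S2, I1, I2, R = s
    lam = [rr * alpha[i] * (beta[0] * I1 + beta[1] * I2) for i in (0, 1)]
    new1 = lam[0] * S1 * (1 - u[0])   # new infections out of S1
    new2 = lam[1] * S2 * (1 - u[1])   # new infections out of S2
    return (mu * (p[0] * S0tot - S1) - new1,
            mu * (p[1] * S0tot - S2) - new2,
            q[0][0] * new1 + q[1][0] * new2 - (mu + nu[0] + u[2]) * I1,
            q[0][1] * new1 + q[1][1] * new2 - (mu + nu[1] + u[3]) * I2,
            (nu[0] + u[2]) * I1 + (nu[1] + u[3]) * I2 - (mu + deltaD) * R)

def cost(u, n_steps=20000):
    dt = T / n_steps
    s, J = (0.47, 0.47, 0.02, 0.04, 0.0), 0.0
    for _ in range(n_steps):
        J += dt * (A_ * s[2] ** 2 + B_ * s[3] ** 2 + C_ * u[0] ** 2
                   + D_ * u[1] ** 2 + E_ * u[2] ** 2 + F_ * u[3] ** 2)
        k1 = deriv(s, u)
        k2 = deriv(tuple(x + dt/2 * k for x, k in zip(s, k1)), u)
        k3 = deriv(tuple(x + dt/2 * k for x, k in zip(s, k2)), u)
        k4 = deriv(tuple(x + dt * k for x, k in zip(s, k3)), u)
        s = tuple(x + dt/6 * (a1 + 2*a2 + 2*a3 + a4)
                  for x, a1, a2, a3, a4 in zip(s, k1, k2, k3, k4))
    return J, s

J, s_end = cost((0.5, 0.5, 0.5, 0.5))
```

With these controls the infection is driven out quickly, so the objective is dominated by the (small) control cost accumulated over $[0,T]$; the optimizer's job is to trade that cost against the $I_1^2$, $I_2^2$ terms.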
Using this new method in various applied models seems reasonable. \section{Acknowledgement} It is a pleasure to acknowledge the helpful suggestions made by Dr. Rooin during the preparation of this paper.
https://arxiv.org/abs/1902.00092
Image reconstruction enhancement via masked regularization
Image reconstruction based on an edge-sparsity assumption has become popular in recent years. Many methods of this type are capable of reconstructing nearly perfect edge-sparse images using limited data. In this paper, we present a method to improve the accuracy of a suboptimal image resulting from an edge-sparsity image reconstruction method when compressed sensing or empirical data requirements are not met. The method begins with an edge detection from an initial edge-sparsity based reconstruction. From this edge map, a mask matrix is created which allows us to regularize exclusively in regions away from edges. By accounting for the spatial distribution of the sparsity, our method preserves edge information and furthermore enhances suboptimal reconstructions to be nearly perfect from fewer data than needed by the initial method. We present results for two phantom images using a variety of initial reconstruction methods.
\section{Introduction} \label{sec:intro} A goal in the imaging science community is to be able to reconstruct images from a small amount of data. Compressed sensing algorithms, e.g. \cite{candes2006robust}, use edge-sparsity based reconstruction methods to accomplish this task. Theoretical exact reconstruction guarantees exist given particular conditions on the forward model, data collection pattern, amount of data, and edge sparsity of the image. There are also empirical results showing the amount of data required to achieve near-perfect reconstructions for specific images. This paper is concerned with when these requirements are \emph{not} met, specifically when too few data are used and a suboptimal image is returned. While the intensity values in images created from too few data using edge-sparsity based reconstruction methods may not be ideal, in many cases the edge locations in the image are faithful to those of the ground truth image. In this paper, we present an algorithm which demonstrates that if the edge locations of the reconstruction are accurate ``enough'', it is possible to improve the suboptimal reconstruction recovered from limited data. The algorithm presented is based on the edge-adaptive $\ell_2$ regularization method from \cite{churchill2018edge} for signal and image reconstruction from (non-uniform) Fourier data, which used a pre-processing $\ell_1$ regularization based edge detection method to extract edges before applying $\ell_2$-regularized reconstruction. Specifically, an edge mask was generated so that the $\ell_2$ regularization would only occur in smooth regions of the image. Further theoretical and empirical support for this two-stage image reconstruction was presented in \cite{churchill2019edge}, where it was shown that given a perfect mask of edge locations, minimizing the ``edge-masked'' cost function will perfectly reconstruct the image. 
In this case, nearly perfect reconstruction was empirically shown to be possible using only a single radial line through the 2D data collection space for the application of computed tomography (CT). Here we assume we are given an image that has been reconstructed using an edge-sparsity based method. Note that the data for this image can be acquired in a multitude of ways. The proposed algorithm has two steps. The first step is creating a mask that gives edge locations. To achieve this, an edge transform is applied to the given image data and the result is thresholded to determine the approximate edge locations. This mask is then used in a second reconstruction step. Using the same acquired data that the initial reconstruction method used, an edge-masked $\ell_2$-regularized reconstruction is performed. The mask allows the method to regularize away from edges, which has been shown to improve accuracy. When broken down into its component steps, this post-processing enhancement technique is in fact an edge-masked image reconstruction method that is informed by an initial image reconstruction with fairly accurate edge locations. Note that while there are similarities in the goal of our proposed method and iteratively reweighted or edge guided image reconstruction methods \cite{candes2008enhancing,chartrand2008iteratively,guo2010edgecs,guo2012edge}, i.e. regularizing away from edges in order to account for the spatial distribution of the sparsity, our algorithm is not intended to compete with these other methods. To the contrary, our technique functions as a post-processing step to further enhance an image created with one of these other methods, and uses fewer data than typically required for an ideal reconstruction. This accuracy improvement comes relatively cheaply at the computational cost of a single $\ell_2$-regularized minimization. 
In what follows we show that our new algorithm has the potential to improve reconstructions from a variety of edge-preserving reconstruction methods using two different edge-sparse phantoms in experiments when (i) data requirements for near-perfect reconstruction are not met and (ii) zero-mean Gaussian noise is added to the data. \begin{figure}[t] \centering \includegraphics[width=4.0cm]{X.eps} \includegraphics[width=4.0cm]{sampling.eps} \includegraphics[width=4.0cm]{Xtv.eps} \includegraphics[width=4.0cm]{Xtv_error.eps} \caption{Image reconstruction from 16 radial lines. (top left) Shepp-Logan phantom. (top right) Fourier sampling domain. (bottom left) TV-regularized reconstruction via Eq. (\ref{eq:TV}). (bottom right) point-wise error plot.} \label{fig:TV} \end{figure} \section{Algorithm}\label{sec:algorithm} This section explains the edge-masked image reconstruction enhancement algorithm through an illustrative example. Consider an image that has already been reconstructed using an edge-preserving reconstruction method, for example the isotropic total variation (TV) regularization technique, \cite{rudin1992nonlinear}. In the noise-less form, this method solves \begin{align}\label{eq:TV} \begin{split} \arg\min_\mathbf{x} TV(\mathbf{x})\quad\text{subject to}\quad \mathbf{A}\mathbf{x}=\mathbf{b}, \end{split} \end{align} where $\mathbf{A}$ is the forward model, $\mathbf{b}$ is the data collected, and \begin{align} TV(\mathbf{x}) = \sum_{i,j} \sqrt{|\mathbf{x}_{i+1,j}-\mathbf{x}_{i,j}|^2+|\mathbf{x}_{i,j+1}-\mathbf{x}_{i,j}|^2}. \end{align} When noise is present, Eq. (\ref{eq:TV}) is modified to \begin{align} \arg\min_\mathbf{x} ||\mathbf{Ax}-\mathbf{b}||_2^2+\lambda\cdot TV(\mathbf{x}), \end{align} where $\lambda>0$ is the user-defined regularization parameter that balances noise reduction, fidelity, and edge sparsity. Note here that $\mathbf{x}$ is an $N\times N$ image. 
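A minimal sketch of the isotropic TV functional above (boundary handling is our assumption: we use periodic differences, consistent with the circulant difference matrix $\mathbf{D}$ defined later in this section):

```python
import numpy as np

def isotropic_tv(x):
    """Isotropic total variation of a 2D image: the sum over pixels of
    sqrt(|x_{i+1,j} - x_{i,j}|^2 + |x_{i,j+1} - x_{i,j}|^2), with periodic
    (wrap-around) boundary differences."""
    dv = np.roll(x, -1, axis=0) - x   # x_{i+1,j} - x_{i,j}
    dh = np.roll(x, -1, axis=1) - x   # x_{i,j+1} - x_{i,j}
    return np.sum(np.sqrt(np.abs(dv) ** 2 + np.abs(dh) ** 2))
```

For example, a piecewise-constant image with a single vertical strip contributes exactly the jump magnitude times the strip's boundary length, which is the edge-sparsity behavior the regularizer rewards.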
This example considers image reconstruction from radially-sampled discrete Fourier coefficients, where $\mathbf{A}=\mathbf{F}$, the 2D discrete Fourier transform, and $\mathbf{b}=\mathbf{\hat{f}}$, the 2D discrete Fourier coefficients of the ground truth image. We note that other forward models can also be accommodated. In \cite{candes2006robust} and \cite{candes2008enhancing}, it was shown that Eq. (\ref{eq:TV}) is capable of near-perfect reconstruction of the Shepp-Logan phantom, \cite{shepp1974fourier}, from measurements collected on 17 radial lines of 2D Fourier space. Figure \ref{fig:TV} shows the result of Eq. (\ref{eq:TV}) using measurements collected on only 16 radial lines instead of 17. \begin{figure}[t] \centering \includegraphics[width=4.0cm]{X_edgex.eps} \includegraphics[width=4.0cm]{Xtv_edgex.eps} \caption{Edge detection. (left) $\mathbf{D}_v(\mathbf{x})$ where $\mathbf{x}$ is ground truth. (right) $\mathbf{D}_v(\mathbf{\tilde{x}})$ where $\mathbf{\tilde{x}}$ is obtained via Eq. (\ref{eq:TV}). Note that the $\mathbf{D}_h$ images are also similar but omitted for space.} \label{fig:edges} \end{figure} \begin{figure}[t] \centering \includegraphics[width=4.0cm]{X_maskx.eps} \includegraphics[width=4.0cm]{Xtv_maskx_32.eps} \caption{Mask creation. (left) $\mathbf{M}_v$ where $\mathbf{x}$ is ground truth. (right) $\mathbf{M}_v$ where $\mathbf{\tilde{x}}$ is obtained via Eq. (\ref{eq:TV}) and $\tau_v(5)$.} \label{fig:mask} \end{figure} To measure accuracy, we use the relative error defined by \begin{align} RE= \frac{||\mathbf{x}-\mathbf{x}_{true}||_2}{||\mathbf{x}_{true}||_2}, \end{align} where the difference and norms operate on the vectorized (concatenated) images. For the TV-regularized reconstruction in Figure \ref{fig:TV}, $RE=.0500$. There are visible errors in the intensity values of the image, but the edges appear to be in the proper locations. Figure \ref{fig:edges}, which shows horizontal and vertical edges in the image, confirms this.
Specifically, the horizontal and vertical edge transforms are the anisotropic TV transforms defined by \begin{align} \left[\mathbf{D}_v(\mathbf{x})\right]_{i,j} &= \sum_k \mathbf{D}_{i,k}\mathbf{x}_{k,j}, \end{align} and \begin{align} \left[\mathbf{D}_h(\mathbf{x})\right]_{i,j} &= \sum_k \mathbf{D}^T_{k,j}\mathbf{x}_{i,k}, \end{align} where $\mathbf{D}$ is ${N\times N}$ and defined by \begin{align}\label{eq:TVtransform} \mathbf{D}_{i,j} = \left\{\begin{array}{cc} 1 & j=i+1\\ -1 & j=i\\ 0 & \text{else}\end{array}\right., \end{align} with $\mathbf{D}_{N,1}=1$. Note that there are many other methods for edge detection from image data, including the popular Canny method, \cite{canny1986computational}, which was used in the iteratively reweighted EdgeCS method \cite{guo2010edgecs,guo2012edge}. This paper considers only these anisotropic TV edges. Later, the same transforms are used for regularization, ensuring that the mask matches the sparsity domain. Next, the edge values from Figure \ref{fig:edges} are thresholded to create two binary mask matrices $\mathbf{M}_h$ and $\mathbf{M}_v$, defined by \begin{align} \left[\mathbf{M}_v\right]_{i,j} = \left\{\begin{matrix} 1 & |\left[\mathbf{D}_v(\mathbf{x})\right]_{i,j}|<\tau_v\\ 0 & |\left[\mathbf{D}_v(\mathbf{x})\right]_{i,j}|\ge\tau_v \end{matrix}\right., \end{align} and \begin{align} \left[\mathbf{M}_h\right]_{i,j} = \left\{\begin{matrix} 1 & |\left[\mathbf{D}_h(\mathbf{x})\right]_{i,j}|<\tau_h\\ 0 & |\left[\mathbf{D}_h(\mathbf{x})\right]_{i,j}|\ge\tau_h \end{matrix}\right.. \end{align} Figure \ref{fig:mask} shows the exact result as well as the approximate, thresholded edge mask. Similar to \cite{guo2010edgecs,guo2012edge}, the thresholds $\tau_v$ and $\tau_h$ are defined by \begin{align}\label{eq:thresholds} \begin{split} \tau_v(k) &= 2^{-k}\cdot\max\left\{\mathbf{D}_v(\mathbf{\tilde{x}})\right\}\\ \tau_h(k) &= 2^{-k}\cdot\max\left\{\mathbf{D}_h(\mathbf{\tilde{x}})\right\}, \end{split} \end{align} where $k$ is set by the user.
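The edge transforms of Eq. (\ref{eq:TVtransform}) and the thresholded masks of Eq. (\ref{eq:thresholds}) can be sketched as follows. This is an illustrative NumPy sketch rather than the authors' code; note the masks are $1$ away from edges and $0$ on detected edges, per the definitions above.

```python
import numpy as np

def difference_matrix(n):
    """Circulant forward-difference matrix D of Eq. (eq:TVtransform):
    D[i,i] = -1, D[i,i+1] = 1, with wraparound entry D[n-1,0] = 1."""
    return -np.eye(n) + np.roll(np.eye(n), 1, axis=1)

def edge_masks(x, k):
    """Edge maps D_v(x) = D x and D_h(x) = x D^T, plus the binary masks
    M_v, M_h built from the thresholds tau = 2**(-k) * max edge value."""
    D = difference_matrix(x.shape[0])
    Dv = D @ x            # [D_v(x)]_{ij} = x_{i+1,j} - x_{i,j}
    Dh = x @ D.T          # [D_h(x)]_{ij} = x_{i,j+1} - x_{i,j}
    tau_v = 2.0 ** (-k) * np.max(Dv)
    tau_h = 2.0 ** (-k) * np.max(Dh)
    Mv = (np.abs(Dv) < tau_v).astype(float)   # 1 away from edges, 0 on edges
    Mh = (np.abs(Dh) < tau_h).astype(float)
    return Dv, Dh, Mv, Mh
```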
For example, choosing $k=5$ marks all grid points above $3.125\%$ of the maximum edge value as edges. In general, $k$ is a resolution- and noise-dependent parameter. \begin{figure}[t] \centering \includegraphics[width=4.0cm]{XAdapt_256.eps} \includegraphics[width=4.0cm]{XAdapt_error_256.eps} \includegraphics[width=8.0cm]{XAdapt_cross_256.eps} \caption{Edge-masked enhancement of Eq. (\ref{eq:TV}) result from 16 radial lines with point-wise error plot and vertical cross-section.} \label{fig:Xadapt} \end{figure} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Radial lines & TV & TV + edge-masked $\ell_2$ &$k$\\ \hline 16 & .0500 & .0063 & 256 \\ \hline 15 & .0769 & .0159 & 64 \\ \hline 14 & .1246 & .0330 & 32 \\ \hline 13 & .1763 & .0518 & 32 \\ \hline 12 & .3189 & .1779 & 32 \\ \hline \end{tabular} \caption{Relative errors for TV and TV plus edge-masked $\ell_2$ enhancement using radial line data and thresholds defined by $k$ in Eq. (\ref{eq:thresholds}).} \label{table:1} \end{center} \end{table} Finally, we employ the two masks to perform the image enhancement via the reconstruction \begin{align}\label{eq:EM} \begin{split} &\arg\min_\mathbf{x} \left|\left| \begin{bmatrix} \mathbf{M}_v\odot\mathbf{D}_v(\mathbf{x})\\ \mathbf{M}_h\odot\mathbf{D}_h(\mathbf{x})\end{bmatrix}\right|\right|_2^2 \quad\text{subject to}\quad \mathbf{Fx}=\mathbf{\hat{f}}. \end{split} \end{align} Here $\odot$ denotes elementwise multiplication. When noise is present, Eq. (\ref{eq:EM}) is modified to \begin{align}\label{eq:EMnoisy} \begin{split} &\arg\min_\mathbf{x} ||\mathbf{Fx}-\mathbf{\hat{f}}||_2^2+\lambda\left|\left| \begin{bmatrix} \mathbf{M}_v\odot\mathbf{D}_v(\mathbf{x})\\ \mathbf{M}_h\odot\mathbf{D}_h(\mathbf{x})\end{bmatrix}\right|\right|_2^2. \end{split} \end{align} The anisotropic TV formulation is used for regularization as it was shown in \cite{guo2010edgecs,guo2012edge} to be more effective in an edge-weighting scheme than isotropic TV.
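The noisy edge-masked reconstruction in Eq. (\ref{eq:EMnoisy}) is an unconstrained quadratic problem, so any least-squares solver applies. The sketch below uses plain gradient descent with an orthonormal 2D DFT as the forward model; the step size and iteration count are ad hoc illustrative choices, not values from the paper.

```python
import numpy as np

def masked_l2_enhance(f_hat, Mv, Mh, D, lam=1e-2, step=0.2, iters=300):
    """Gradient descent on the noisy edge-masked objective
        ||F x - f_hat||_2^2 + lam * (||Mv * (D x)||_F^2 + ||Mh * (x D^T)||_F^2),
    where F is the orthonormal 2D DFT (so F^H F = I) and * is elementwise."""
    x = np.real(np.fft.ifft2(f_hat, norm="ortho"))  # zero-filled initial guess
    for _ in range(iters):
        # data-fidelity gradient: 2 Re[ F^H (F x - f_hat) ]
        g = 2.0 * np.real(
            np.fft.ifft2(np.fft.fft2(x, norm="ortho") - f_hat, norm="ortho"))
        # masked-regularizer gradient (masks are binary, so M^2 = M)
        g += 2.0 * lam * (D.T @ (Mv * (D @ x)) + (Mh * (x @ D.T)) @ D)
        x -= step * g
    return x
```

Because the data-term Hessian is the identity for a unitary $\mathbf{F}$, a small fixed step suffices here; in practice one would instead run conjugate gradient on the normal equations.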
Empirical and theoretical evidence from \cite{churchill2018edge,churchill2019edge} also supports the use of anisotropic TV. Figure \ref{fig:Xadapt} shows the result for the Shepp-Logan phantom reconstructed from 16 radial lines. The relative error is reduced from $.0500$ to $.0063$, and the error plot shows drastic accuracy improvement, particularly in smooth regions. Similar results were achieved when supplementing TV-regularized reconstructions from 12, 13, 14, and 15 radial lines with edge-masked regularization. Table \ref{table:1} compares the relative errors for both methods. \section{Results}\label{sec:results} \begin{figure}[t] \centering \includegraphics[width=4.0cm]{mri.eps} \includegraphics[width=4.0cm]{mri_edgey.eps} \caption{(left) Realistic brain phantom \cite{guerquin2012realistic} and (right) true vertical edge mask.} \label{fig:mri} \end{figure} Motivated by magnetic resonance (MR) imaging, where Fourier data is often collected along radial lines, the following experiments use a realistic brain phantom from \cite{guerquin2012realistic}, shown in the left panel of Figure \ref{fig:mri}. This image is much more difficult to reconstruct, e.g. requiring 77 radial lines of Fourier data to achieve a relative error of less than $10^{-2}$ in the noise-less case using isotropic TV regularization as in Eq. (\ref{eq:TV}). This is due to its overall higher total variation and denser edge structure, seen in the right panel of Figure \ref{fig:mri}, compared with the Shepp-Logan phantom. \begin{figure}[t] \centering \includegraphics[width=4.0cm]{mri_edgecs.eps} \includegraphics[width=4.0cm]{mri_edgecs_error.eps} \includegraphics[width=4.0cm]{mri_edgecs_xadapt.eps} \includegraphics[width=4.0cm]{mri_edgecs_xadapt_error.eps} \includegraphics[width=8.0cm]{mri_edgecs_cross.eps} \caption{Reconstruction and enhancement from limited data. (top) EdgeCS image from 34 radial lines with point-wise error. (middle) edge-masked enhancement with point-wise error.
(bottom) vertical cross-section comparison. Here $k=5$.} \label{fig:mrilimited} \end{figure} \begin{figure}[t] \centering \includegraphics[width=4.0cm]{mri_anitv_xtv.eps} \includegraphics[width=4.0cm]{mri_anitv_xtv_error.eps} \includegraphics[width=4.0cm]{mri_anitv_xadapt.eps} \includegraphics[width=4.0cm]{mri_anitv_xadapt_error.eps} \includegraphics[width=8.0cm]{mri_anitv_cross.eps} \caption{Reconstruction and enhancement from noisy data. (top) ``un-masked'' anisotropic TV image from 180 radial lines with added zero-mean Gaussian noise of standard deviation $10^{-2}$, with point-wise error. (middle) edge-masked enhancement with point-wise error. (bottom) vertical cross-section comparison. The initial reconstruction uses $\lambda=10^{-9}$, and the enhancement uses $\lambda=10^{-2}$ in Eq. (\ref{eq:EMnoisy}) and $k=4$.} \label{fig:mrinoisy} \end{figure} \subsection{Very limited data} In this experiment, the initial reconstruction is via EdgeCS \cite{guo2010edgecs,guo2012edge}, an iteratively edge-weighted reconstruction method. EdgeCS requires only 36 radial lines to near-perfectly reconstruct the phantom. For the initial reconstruction in this experiment, 34 lines of data are used, and the relative error is $.0344$. The edge-masked enhancement step improves it to $.0072$, as Figure \ref{fig:mrilimited} shows. This experiment provides some evidence that even with very limited data and an advanced initial reconstruction, there is still room for the edge-masked regularization to improve accuracy and enhance edge-sparsity. \subsection{Additive noise} In this experiment an ``un-masked'' Eq. (\ref{eq:EMnoisy}), i.e., with $\mathbf{M}_h$ and $\mathbf{M}_v$ set to all ones, is used to initially reconstruct the phantom from 180 radial lines of data with zero-mean Gaussian noise of standard deviation $10^{-2}$ added. The image has a relative error of $.0947$. The edge-masked enhancement achieves a relative error of $.0219$. Figure \ref{fig:mrinoisy} shows the result.
Note that the faithfulness of the initial reconstruction to the true edge locations here is paramount as the enhancement especially relies on an accurate edge mask when noise is present. \section{Conclusion}\label{sec:conclusion} This paper presented an algorithm to enhance images reconstructed via edge-sparsity based methods when data requirements for near-perfect reconstruction are not met. It is able to achieve this because while the intensity values in the resulting images may not be ideal, the edge locations are often faithful to those of the ground truth. The algorithm locates the edges and uses them in a masked $\ell_2$ regularization scheme. Our method was shown to further enhance edge information and improve accuracy for three different initial reconstruction methods, varying amounts of limited data with and without noise, and two different phantom images. In future work, in order to further boost our enhancement results, we will explore edge-sparsity based methods that are robust with respect to noise for use in our initial reconstruction step. In addition, we will explore an iteratively ``re-masked'' algorithm similar to \cite{guo2010edgecs,guo2012edge} but using $\ell_2$ regularization instead of $\ell_1$. \bibliographystyle{IEEEbib}
https://arxiv.org/abs/2007.14002
Equilibrium Behaviors in Repeated Games
We examine a patient player's behavior when he can build reputations in front of a sequence of myopic opponents. With positive probability, the patient player is a commitment type who plays his Stackelberg action in every period. We characterize the patient player's action frequencies in equilibrium. Our results clarify the extent to which reputations can refine the patient player's behavior and provide new insights to entry deterrence, business transactions, and capital taxation. Our proof makes a methodological contribution by establishing a new concentration inequality.
\section{Introduction}\label{sec1} Economists have long recognized that individuals, firms, and governments can benefit from good reputations. As shown in the seminal work of \citet{FL-89}, a patient player can guarantee himself a high payoff when his opponents believe that he might be committed to play a particular action. Their result can be viewed as a refinement, which selects the patient player's optimal equilibria in many games of interest. This paper studies the effects of reputations on the patient player's behavior rather than his payoffs, an aspect that has been underexplored in the reputation literature. Existing works on reputation-building behaviors restrict attention to particular equilibria or games with particular payoff functions. By contrast, we identify tight bounds on the patient player's action frequencies that apply to all equilibria under more general payoff functions. Our results clarify the extent to which reputations can refine the patient player's behavior and provide new insights into applications such as entry deterrence, business transactions, and capital taxation. We analyze a repeated game between a patient player and a sequence of myopic opponents. The patient player is either a strategic type who maximizes his discounted average payoff, or a commitment type who plays his optimal pure commitment action (or \textit{Stackelberg action}) in every period. The myopic players cannot observe the patient player's type, but can observe all the actions taken in the past. We examine the extent to which the option to imitate the commitment type can motivate the patient player to play his Stackelberg action. Theorem \ref{Theorem1} characterizes tight bounds on the discounted frequencies with which the strategic-type patient player plays his Stackelberg action in equilibrium.
We show that the maximal frequency equals one and the minimal frequency equals the value of the following linear program: Choose a distribution over action profiles in order to minimize the probability of the Stackelberg action subject to two constraints. First, each action profile in the support of this distribution satisfies the myopic player's incentive constraint. Second, the patient player's expected payoff from this distribution is no less than his Stackelberg payoff. The first constraint is necessary since the myopic players best reply to the patient player's action in every period. The second constraint is necessary since the patient player can approximately attain his Stackelberg payoff by imitating the commitment type. In order to provide him with an incentive not to play his Stackelberg action, his continuation value after separating from the commitment type must be at least his Stackelberg payoff. The substantial part is to show that these constraints are not only necessary but also sufficient. Our proof is constructive and makes a methodological contribution by establishing a novel concentration inequality on the discounted sum of random variables that bounds the patient player's action frequencies (Lemma \ref{lem:concentration}). Theorem \ref{Theorem2} identifies a sufficient condition under which a distribution of the patient player's actions is his action frequency in some equilibria of the reputation game. In a number of leading applications such as the product choice game and the entry deterrence game, our sufficient condition is also necessary, in which case Theorem \ref{Theorem2} fully characterizes the set of action frequencies that can arise in equilibrium. Our results provide new insights into classic applications of reputation models. For example, in the product choice game of Mailath and Samuelson (2006, Figure 15.1.1 on page 460),\footnote{In \cite{MS-06}'s product choice game, a patient firm faces a sequence of consumers.
In every period, the firm chooses between high effort and low effort, and a consumer chooses between buying a high-end product and a low-end product. The firm finds it costly to exert high effort and prefers the consumers to purchase high-end products. Each consumer has an incentive to buy the high-end product only when she believes that the firm will exert high effort with high enough probability. In this game, high effort is the firm's Stackelberg action but low effort is the dominant action in the stage game.} our results imply that a policy maker can increase the frequency of high effort by subsidizing consumers for purchasing low-end products or by taxing consumers for purchasing high-end products. Intuitively, these policies increase the consumers' demand for high effort when they purchase the high-end product, which in turn increases the frequency of high effort in the worst equilibrium. In the entry deterrence game of \cite{KW-82} and \cite{MR-82}, our results imply that a small amount of subsidy to potential entrants for entering the market makes a reputation-building incumbent more aggressive in fighting entry, but a large amount of subsidy eliminates the incumbent's fighting incentives. Our results contribute to the reputation literature by clarifying the role of reputations in refining the patient player's behavior. This is complementary to the result of \cite{FL-89} that studies how reputations refine the patient player's payoff. Existing works on players' reputation-building behaviors restrict attention to particular equilibria or particular payoff functions. For example, \citet{KW-82} and \citet{MR-82} characterize sequential equilibria in entry deterrence games. \citet{Sch-93} characterizes Markov equilibria in repeated bargaining games. \cite{Bar-03}, \citet{Phe-06}, \citet{Ekm-11}, \citet{Liu-11}, and \citet{LS-14} restrict attention to supermodular games or $2 \times 2$ games. 
By contrast, we characterize tight bounds on the patient player's action frequencies that apply to all equilibria. Our results are more general in terms of payoffs: they only require that the patient player's optimal commitment payoff is greater than his minmax value and that his optimal commitment outcome is not a stage-game Nash equilibrium. \citet*{CMS-04} show that when the monitoring structure has full support, the myopic players eventually learn the patient player's type and the strategies converge to an equilibrium of the repeated complete information game. However, their results do not characterize the speed of convergence or players' behaviors in finite time, and hence do not imply what players' discounted action frequencies are. \citet{EM-19} study players' reputation-building behaviors in stopping games where a patient uninformed player chooses between continuing and irreversibly stopping the game in every period. By contrast, the uninformed players in our model are myopic and their action choices are reversible. \citet{Pei2020} provides sufficient conditions under which the patient player has a unique on-path behavior. Unlike our model, which restricts attention to private value environments but allows for general stage-game payoffs, his result requires nontrivial interdependent values and monotone-supermodular stage-game payoffs. Section \ref{sec2} sets up the baseline model. Section \ref{sec3} states our main results. Section \ref{sec4} applies our results to several applied models of reputation formation and discusses the results' practical implications. Section \ref{sec6} discusses our modeling assumptions as well as issues related to taking our predictions to the data. Section \ref{sec7} concludes. The proofs of our results can be found in the appendix. \section{Model}\label{sec2} Time is discrete, indexed by $t=0,1,2,...$.
A patient player $1$ with discount factor $\delta \in (0,1)$ interacts with an infinite sequence of myopic player $2$s, arriving one in each period and each playing the game only once. In period~$t$, a public randomization device $\xi_t \sim U[0,1]$ is realized and is observed by both players, after which players simultaneously choose their actions. Player $1$'s action is denoted by $a_t \in A$. Player $2$'s action is denoted by $b_t \in B$. Their stage-game payoffs are $u_1(a_t,b_t)$ and $u_2(a_t,b_t)$. We assume $A$ and $B$ are finite, with $|A|,|B|\geq 2$. Let $\textrm{BR}_1: \Delta (B) \rightrightarrows 2^{A} \backslash \{\varnothing\}$ and $\textrm{BR}_2: \Delta (A) \rightrightarrows 2^{B} \backslash \{\varnothing\}$ be player $1$'s and player $2$'s best reply correspondences in the stage-game. The set of player $1$'s (pure) Stackelberg actions is $\arg\max_{a \in A} \{ \min_{b \in \textrm{BR}_2(a)} u_1 (a,b) \}$. \begin{Assumption}\label{Ass1} Player $1$ has a unique Stackelberg action, denoted by $a^*$. Player $2$ has a unique best reply to player $1$'s Stackelberg action, denoted by $b^*$. \end{Assumption} Assumption \ref{Ass1} is satisfied when each player has a strict best reply to each of his opponent's pure actions and player $1$ is not indifferent between any pair of pure action profiles, both of which are satisfied for generic $(u_1,u_2)$ since $A$ and $B$ are finite sets. Player $1$'s \textit{Stackelberg payoff} is $u_1(a^*,b^*)$. Let \begin{equation*} \mathcal{B} \equiv \{\beta \in \Delta(B) | \exists \alpha \in \Delta(A) \textrm{ s.t. } \textrm{supp}(\beta) \subset \textrm{BR}_2(\alpha)\} \subset \Delta (B). \end{equation*} Since player $2$s are myopic, they will never take actions that do not belong to $\mathcal{B}$. As a result, player $1$'s minmax value is $\underline{v}_1 \equiv \min_{\beta \in \mathcal{B}} \max_{a \in A} u_1(a, \beta)$. \begin{Assumption}\label{Ass2} $a^* \notin \textrm{BR}_1(b^*)$ and $u_1(a^*,b^*) > \underline{v}_1$. 
\end{Assumption} Assumptions \ref{Ass1} and \ref{Ass2} are satisfied in many leading applications of reputation models. For example, \begin{enumerate} \item In the product choice game of \citet{MS-06}, a firm benefits from committing to exert high effort since it can encourage consumers to purchase the high-end product or to purchase larger quantities. However, the firm can save costs by lowering its effort. \item In the entry deterrence game of \citet{KW-82} and \citet{MR-82}, and the limit pricing game of \cite{MR-82ECMA}, an incumbent firm benefits from committing to set low prices and to fight potential entrants, but its stage-game payoff is higher when it accommodates entry. \item In the fiscal policy game of \cite{Phe-06}, the government benefits from committing to low tax rates in order to encourage investments, but it is tempted to expropriate the citizens after investment takes place. \item In the monetary policy game of \citet{Bar-86}, the central bank can benefit from committing to low inflation rates. But given the households' expectations about inflation, the central bank is tempted to raise inflation in order to boost economic activities. \end{enumerate} Assumption \ref{Ass2} rules out coordination games (such as the battle of sexes), common interest games, and chicken games, in which $a^*$ best replies to $b^*$, and zero-sum games in which $u_1(a^*,b^*) \leq \underline{v}_1$. Section \ref{sec6} discusses games that violate this assumption, and the role of Assumption \ref{Ass2} in our proofs is explained in Appendix \ref{sec5}. Player $1$ has perfectly persistent private information about his type $\omega$. Let $\omega \in \{\omega^s, \omega^c\}$, where $\omega^c$ stands for a \textit{commitment type} who mechanically plays $a^*$ in every period, and $\omega^s$ stands for a \textit{strategic type} who can flexibly choose his actions in order to maximize his discounted average payoff $\sum_{t=0}^{+\infty} (1-\delta)\delta^t u_1(a_t,b_t)$. 
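The stage-game objects defined above (player $2$'s best replies, the pure Stackelberg action $a^*$, and its best reply $b^*$) can be computed directly for any finite game. The sketch below is purely illustrative; the product choice payoffs used to exercise it are a standard parameterization assumed for the example, not numbers taken from this paper.

```python
import numpy as np

def pure_stackelberg(u1, u2, tol=1e-12):
    """Pure Stackelberg action of player 1 in a finite stage game:
    a* = argmax_a min_{b in BR_2(a)} u1(a, b), for payoff matrices u1[a, b],
    u2[a, b].  Uniqueness of a* and of BR_2(a*) (Assumption 1) is taken for
    granted rather than verified."""
    n_a = u1.shape[0]
    values = np.empty(n_a)
    for a in range(n_a):
        br2 = np.flatnonzero(u2[a] >= u2[a].max() - tol)  # BR_2 to pure a
        values[a] = u1[a, br2].min()  # payoff under the worst best reply
    a_star = int(values.argmax())
    b_star = int(u2[a_star].argmax())  # unique best reply under Assumption 1
    return a_star, b_star, float(values[a_star])
```

In a product choice parameterization where high effort makes the consumer strictly prefer the high-end product, this returns high effort and the high-end purchase as $(a^*, b^*)$.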
Player $2$'s prior belief attaches probability $\pi \in (0,1)$ to the commitment type. Players' past actions are perfectly monitored. A typical public history is denoted by $h^t \equiv \{a_s,b_s,\xi_s\}_{s=0}^{t-1}$. Let $\mathcal{H}^t$ be the set of $h^t$ and let $\mathcal{H} \equiv \cup_{t \in \mathbb{N}} \mathcal{H}^t$. Strategic-type player $1$'s strategy is $\sigma_1: \mathcal{H} \rightarrow \Delta (A)$. Player $2$'s strategy is $\sigma_2 : \mathcal{H} \rightarrow \Delta (B)$. Let $\Sigma_1$ and $\Sigma_2$ be the set of player $1$'s and player $2$'s strategies, respectively. The solution concept is (Bayes) Nash equilibrium. Let $\textrm{NE}(\delta,\pi) \subset \Sigma_1 \times \Sigma_2$ be the set of equilibria. Since the stage game is finite and payoffs are discounted, an equilibrium exists \citep{fl83}. \paragraph{Existing Result on Equilibrium Payoffs:} \citet{FL-89} show that for every $\pi \in (0,1)$ and $\varepsilon>0$, there exists $\underline{\delta} \in (0,1)$ such that \begin{equation}\label{2.1} \inf_{(\sigma_1,\sigma_2) \in \textrm{NE}(\delta,\pi)} \mathbb{E}^{(\sigma_1,\sigma_2)}\Big[ \sum_{t=0}^{+\infty} (1-\delta) \delta^t u_1(a_t,b_t) \Big] \geq u_1(a^*,b^*)- \varepsilon \textrm{ for every } \delta > \underline{\delta}, \end{equation} where $\mathbb{E}^{(\sigma_1,\sigma_2)}[\cdot]$ is the expectation when player $1$'s strategy is $\sigma_1$ and player $2$'s strategy is $\sigma_2$. Inequality (\ref{2.1}) unveils the effects of reputations on the patient player's payoff. \cite{FL-89} view this result as a refinement, which selects among the plethora of equilibria in repeated complete information games. According to the folk theorem of \citet*{FKM-90}, the patient player can attain any payoff between $\underline{v}_1$ and $\overline{v}_1 \equiv \max_{ \{ (\alpha,\beta) | \textrm{supp}(\beta) \subset \textrm{BR}_2(\alpha) \}} \min_{a \in \textrm{supp}(\alpha)} u_1(a,\beta)$ in a repeated complete information game without any commitment type. 
By definition, $\overline{v}_1 \geq u_1(a^*,b^*)$, which implies that introducing a commitment type selects equilibria in which player $1$'s payoff is between $u_1(a^*,b^*)$ and $\overline{v}_1$. In the entry deterrence game, product choice game, and fiscal and monetary policy games, $\overline{v}_1$ equals $u_1(a^*,b^*)$, in which case the reputation model selects equilibria where the patient player receives his highest equilibrium payoff. \section{Results}\label{sec3} Our results examine the \textit{discounted frequencies} of the patient player's actions. Formally, the discounted frequency of action $a \in A$ under $(\sigma_1,\sigma_2)$ is \begin{equation}\label{3.1} G^{(\sigma_1,\sigma_2)}(a) \equiv \mathbb{E}^{(\sigma_1,\sigma_2)} \Big[ \sum_{t=0}^{\infty} (1-\delta)\delta^t \mathbf{1}\{a_{t}=a\} \Big]. \end{equation} Our first result characterizes the discounted frequencies with which the patient player plays his Stackelberg action $a^*$. Let \begin{equation}\label{3.2} \Gamma \equiv \Big\{ (\alpha,b) \in \Delta (A) \times B \Big| b \in \textrm{BR}_2(\alpha) \Big\} \end{equation} be the set of \textit{incentive compatible action profiles}. Let \begin{equation}\label{3.3} F^* (u_1,u_2) \equiv \min_{(\alpha_1,\alpha_2,b_1,b_2,q) \in \Delta (A) \times \Delta (A) \times B \times B \times [0,1] } \Big\{ q \alpha_1(a^*) +(1-q) \alpha_2(a^*) \Big\}, \end{equation} subject to \begin{equation}\label{3.4} (\alpha_1,b_1) \in \Gamma,\quad (\alpha_2,b_2) \in \Gamma, \end{equation} and \begin{equation}\label{3.5} q u_1(\alpha_1,b_1) +(1-q) u_1(\alpha_2,b_2) \geq u_1(a^*,b^*), \end{equation} where $\alpha_i(a)$ stands for the probability of action $a \in A$ in $\alpha_i \in \Delta (A)$. \begin{Theorem}\label{Theorem1} Suppose $(u_1,u_2)$ satisfies Assumptions 1 and 2. 
\begin{enumerate} \item For every $f \in [F^*(u_1,u_2),1]$ and $\varepsilon>0$, there exists $\underline{\delta} \in (0,1)$ such that for every $\delta > \underline{\delta}$, there exists $(\sigma_1,\sigma_2) \in \textrm{NE}(\delta,\pi)$ such that $G^{(\sigma_1,\sigma_2)}(a^*) \in (f-\varepsilon,f+\varepsilon)$. \item For every $\widehat{f}< F^*(u_1,u_2)$, there exist $\underline{\delta} \in (0,1)$ and $\eta>0$ such that $G^{(\sigma_1,\sigma_2)}(a^*) > \widehat{f}+\eta$ for every $\delta > \underline{\delta}$ and $(\sigma_1,\sigma_2) \in \textrm{NE}(\delta,\pi)$. \end{enumerate} \end{Theorem} Theorem \ref{Theorem1} implies that when player $1$ is patient, the discounted frequency with which he plays $a^*$ can take any value between $F^*(u_1,u_2)$ and $1$, but it cannot be strictly lower than $F^*(u_1,u_2)$. Therefore, $[F^*(u_1,u_2),1]$ is the set of frequencies with which $a^*$ can arise in equilibrium. Our result applies to every prior belief $\pi \in (0,1)$, which includes, but is not limited to, situations where the probability of the commitment type is small. Since $F^*(u_1,u_2)<1$ under Assumption \ref{Ass2}, Theorem \ref{Theorem1} implies that an arbitrarily patient player can play his Stackelberg action with frequency bounded away from one despite having the option to build a reputation. The upper bound on the frequency of $a^*$ is $1$ since there exists an equilibrium where player $1$ plays $a^*$ and player $2$s play $b^*$. Once player $1$ plays any action other than $a^*$, future player $2$s can observe this deviation, after which they can punish player $1$ by driving his continuation value to his minmax payoff $\underline{v}_1$. Such a punishment is feasible since player $1$ separates from the commitment type after any deviation from his equilibrium strategy, and according to \citet*{FKM-90}, there exists an equilibrium of the repeated complete information game in which player $1$'s payoff is $\underline{v}_1$.
Since Assumption \ref{Ass2} requires that $u_1(a^*,b^*)>\underline{v}_1$, this punishment provides player $1$ an incentive to play $a^*$ when his discount factor $\delta$ is large enough. For some intuition on the linear program that defines the lower bound $F^*(u_1,u_2)$, consider a static planning problem in which a planner commits to a mixed action $\alpha \in \Delta (A)$ on behalf of player~$1$, after which player $2$ best replies to $\alpha$. If the planner faces the constraint that player $1$'s expected payoff be no less than $u_1(a^*,b^*)$, then by definition, $F^*(u_1,u_2)$ is the lowest probability with which $a^*$ needs to be played.\footnote{The planner in the planning problem can randomize between any number of commitment actions, while in the linear program that defines $F^*(u_1,u_2)$, he can randomize between at most two commitment actions. Lemmas \ref{LA.2} and \ref{LA.3} show that this is without loss and that the value of $F^*(u_1,u_2)$ remains the same even when the planner can randomize between an arbitrary number of commitment actions.} We map the two constraints in the planning problem to the reputation game studied by Theorem \ref{Theorem1}. First, since player $2$s are myopic, they play a best reply to $\alpha$ after they learn that the patient player will play $\alpha$. This explains the necessity of constraint (\ref{3.4}). Second, the presence of the commitment type implies that the patient player can guarantee a payoff of approximately $u_1(a^*,b^*)$ by playing $a^*$ in every period. Therefore, the patient player has an incentive to play $\alpha_1$ with probability $q$ and $\alpha_2$ with probability $1-q$ only when his expected payoff from doing so is at least $u_1(a^*,b^*)$. This explains the necessity of constraint (\ref{3.5}). The substantial part of our result is to show that constraints (\ref{3.4}) and (\ref{3.5}) are not only necessary but also sufficient.
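To make the linear program in (\ref{3.3})--(\ref{3.5}) concrete, it can be brute-forced on a grid for a $2\times 2$ product choice game with assumed payoffs $u_1(H,h)=2$, $u_1(H,l)=0$, $u_1(L,h)=3$, $u_1(L,l)=1$, where the consumer buys the high-end product iff $\alpha(H)\geq 1/2$. These numbers are an illustrative parameterization, not taken from the paper. For this parameterization the minimum works out to $1/3$, attained with $q=2/3$, $\alpha_1(H)=1/2$, $b_1=h$, and $\alpha_2$ the pure low action with $b_2=l$.

```python
import numpy as np

def f_star_product_choice(grid=200):
    """Brute-force the program (3.3)-(3.5) on a grid for a 2x2 product
    choice game with ASSUMED payoffs u1(H,h)=2, u1(H,l)=0, u1(L,h)=3,
    u1(L,l)=1.  A mixed action alpha is summarized by p = alpha(H);
    here a* = H, b* = h, and the Stackelberg payoff u1(a*,b*) = 2."""
    p = np.linspace(0.0, 1.0, grid + 1)   # grid over alpha(H)
    b_is_h = p >= 0.5                     # BR_2(alpha); ties broken toward h
    # player 1's payoff from (alpha, BR_2(alpha)):
    #   b = h: 2p + 3(1-p) = 3 - p,   b = l: 0p + 1(1-p) = 1 - p
    v = np.where(b_is_h, 3.0 - p, 1.0 - p)
    best = 1.0
    for q in p:                           # grid over the mixing weight q
        pay = q * v[:, None] + (1.0 - q) * v[None, :]    # constraint (3.5)
        freq = q * p[:, None] + (1.0 - q) * p[None, :]   # objective (3.3)
        feasible = pay >= 2.0 - 1e-9
        if feasible.any():
            best = min(best, float(freq[feasible].min()))
    return best
```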
Our second result examines the set of discounted action frequencies that can arise in equilibrium. Let \begin{equation}\label{eq:A} \mathcal{A} \equiv \Big\{ \alpha^* \in \Delta(A) \Big| \exists q \in \Delta (\Gamma) \textrm{ such that } \alpha^* = \int_{\alpha} \alpha d q \textrm{ and } \int_{(\alpha,b)} u_1(\alpha,b) d q = u_1(a^*,b^*) \Big\}, \end{equation} which is the set of marginal distributions of player $1$'s actions such that one can find a distribution of incentive compatible action profiles $q \in \Delta (\Gamma)$ from which player $1$'s expected payoff equals his Stackelberg payoff. \begin{Theorem}\label{Theorem2} Suppose $(u_1,u_2)$ satisfies Assumptions \ref{Ass1} and \ref{Ass2}. \begin{enumerate} \item For every $\alpha^* \in \mathcal{A}$ and $\varepsilon>0$, there exists $\underline{\delta} \in (0,1)$ such that for every $\delta > \underline{\delta}$, there exists $(\sigma_1,\sigma_2) \in \textrm{NE}(\delta,\pi)$ such that $\Big| G^{(\sigma_1,\sigma_2)}(a)- \alpha^* (a) \Big| < \varepsilon$ for every $a \in A$.\footnote{We can also show that if $\delta$ is large enough and $\mathcal{A}$ satisfies a full dimensionality assumption, then every $\alpha^*$ that belongs to the interior of $\mathcal{A}$ can be exactly attained as the discounted action frequency of some equilibria.} \item In games where $u_1(a^*,b^*)=\overline{v}_1$, for every $\widehat{\alpha} \notin \mathcal{A}$, there exist $\eta>0$ and $\underline{\delta} \in (0,1)$ such that for every $\delta > \underline{\delta}$ and $(\sigma_1,\sigma_2) \in \textrm{NE}(\delta,\pi)$, $ \Big| G^{(\sigma_1,\sigma_2)}(a)- \widehat{\alpha} (a) \Big| > \eta$ for some $a \in A$. \end{enumerate} \end{Theorem} According to Theorem \ref{Theorem2}, every action distribution that belongs to $\mathcal{A}$ can be approximated arbitrarily closely by the patient player's action frequency in some equilibrium of the reputation game. 
In fact, the first statement of Theorem \ref{Theorem2} is a generalization of Statement 1 of Theorem~\ref{Theorem1} since it is without loss of generality to focus on distributions of incentive compatible action profiles such that constraint (\ref{3.5}) is binding (Lemma \ref{LB.1}) and it is without loss of generality to focus on distributions supported on $\Gamma$ that have at most two elements in their support when the objective is to minimize the discounted frequency of~$a^*$ (Lemma \ref{LA.2} and Lemma \ref{LA.3}). In games where $u_1(a^*,b^*)=\overline{v}_1$, such as the product choice game and the entry deterrence game, an action distribution is the patient player's discounted action frequency in some equilibria \textit{if and only if} it belongs to $\mathcal{A}$. In this class of games, any action frequency that satisfies player $2$'s incentive constraints and yields player $1$ his Stackelberg payoff can be attained in some equilibria of the repeated game. \section{Economic Applications}\label{sec4} We apply our results to \textit{monotone-supermodular games} that include the leading applications of reputation models, such as the product choice game, the entry deterrence game, and the fiscal policy game. \begin{Definition}\label{Def1} $(u_1,u_2)$ is monotone-supermodular if there exist a complete order on $A$ and a complete order on $B$ such that $u_1(a,b)$ is strictly decreasing in $a$, and $u_2(a,b)$ has strictly increasing differences.\footnote{This definition resembles the one in \citet{LP-20} and \citet{Pei2020} except that there is no state that affects players' payoffs. We also do not require $u_1(a,b)$ to be strictly increasing in $b$.} \end{Definition} In order to facilitate the application of Theorem \ref{Theorem1}, we simplify the linear program that defines $F^*(u_1,u_2)$. Let $\underline{a}$ be the lowest element of $A$ and let $\underline{b} \in B$ be player $2$'s best reply to $\underline{a}$. 
If player $2$ has multiple best replies to $\underline{a}$, then let $\underline{b}$ be the one that maximizes player $1$'s payoff. Let \begin{equation}\label{4.1} \Gamma^* \equiv \Big\{ (\alpha,b) \in \Gamma \Big| | \textrm{BR}_2(\alpha) | \geq 2 \textrm{ and } b \in \arg\max_{b' \in \textrm{BR}_2(\alpha)} u_1(\alpha,b') \Big\}. \end{equation} Intuitively, $\Gamma^*$ is the subset of $\Gamma$ that consists of incentive compatible action profiles where player $2$ has at least two best replies, and for every $\alpha$ to which player $2$ has multiple best replies, $b$ is the best reply that maximizes player $1$'s payoff. Under generic stage-game payoff functions, $\Gamma^*$ is a finite set. Proposition \ref{Prop1} implies that in games with monotone-supermodular payoffs, it is without loss of generality to choose incentive compatible action profiles from the finite set $\Gamma^* \cup \{(\underline{a},\underline{b})\}$ instead of the infinite set $\Gamma$. \begin{Proposition}\label{Prop1} If $(u_1,u_2)$ is monotone-supermodular, then \begin{equation*} F^* (u_1,u_2) = \min_{(\alpha_1,\alpha_2,b_1,b_2,q) \in \Delta (A) \times \Delta (A) \times B \times B \times [0,1] } \Big\{ q \alpha_1(a^*) +(1-q) \alpha_2(a^*) \Big\}, \end{equation*} subject to $(\alpha_1,b_1),(\alpha_2,b_2) \in \Gamma^* \cup \{(\underline{a},\underline{b})\}$, and $q u_1(\alpha_1,b_1) +(1-q) u_1(\alpha_2,b_2) \geq u_1(a^*,b^*)$. \end{Proposition} The proof is in Appendix \ref{secC}. For the rest of this section, we apply our theorems as well as Proposition \ref{Prop1} to study product choice games, entry deterrence games, and capital taxation games. \paragraph{Product Choice Game:} Player $1$ is a firm that chooses between high effort (action $H$) and low effort (action $L$). Player $2$s are consumers, each of whom chooses between purchasing a high-end product (action $h$) and a low-end product (action $l$). 
Players' payoffs are: \begin{center} \begin{tabular}{| c | c | c |} \hline -- & $h$ & $l$ \\ \hline $H$ & $1-c_h,2-\gamma^*$ & $-c_l,1$ \\ \hline $L$ & $1, -\gamma^*$ & $0,0$ \\ \hline \end{tabular} \end{center} where $c_h,c_l \in (0,1)$ are the costs of effort when the consumer buys the high-end product and the low-end product, respectively, and consumers are willing to choose $h$ only when they believe that the firm exerts high effort with probability more than $\gamma^* \in (0,1)$. This game has monotone-supermodular payoffs once we rank the firm's actions according to $H \succ L$ and the consumers' actions according to $h \succ l$. The firm's Stackelberg action is $H$. According to (\ref{4.1}), $\Gamma^*$ is a singleton set $\Big\{ (\gamma^*H +(1-\gamma^*)L , h) \Big\}$. Proposition \ref{Prop1} implies that \begin{equation}\label{4.2} F^*(u_1,u_2) = \min_{q \in [0,1]} q \gamma^*, \quad \textrm{subject to} \quad q \gamma^* u_1(H, h) +q (1-\gamma^*) u_1(L,h) + (1-q) u_1(L,l) \geq u_1(H,h), \end{equation} from which we obtain \begin{equation}\label{4.3} F^*(u_1,u_2)=\frac{\gamma^* (1-c_h)}{1-\gamma^* c_h}. \end{equation} \begin{Claim}\label{C1} The lowest discounted frequency with which the firm exerts high effort strictly increases in $\gamma^*$, strictly decreases in $c_h$, and is independent of $c_l$. \end{Claim} In terms of practical implications, consider a policy maker who wants to increase the frequency with which the firm exerts high effort but does not know which equilibrium players coordinate on. The policy maker is ambiguity averse and evaluates the effectiveness of each policy according to the frequency of high effort in the \textit{worst equilibrium}. That is, his objective is to increase $F^*(u_1,u_2)$. Claim \ref{C1} implies that the policy maker can increase $F^*(u_1,u_2)$ by subsidizing consumers for purchasing the low-end product or by taxing consumers for purchasing the high-end product. 
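As a quick numerical check of (\ref{4.2}), (\ref{4.3}), and Claim \ref{C1} (our sketch, with arbitrary illustrative parameter values), note that the constraint in (\ref{4.2}) binds at the optimum, which pins down $q$:

```python
def F_star_pc(gamma, c_h):
    # closed form (4.3): gamma*(1 - c_h) / (1 - gamma*c_h)
    return gamma * (1 - c_h) / (1 - gamma * c_h)

gamma, c_h = 0.6, 0.3   # illustrative values in (0, 1)

# Binding constraint of (4.2): q*[gamma*(1-c_h) + (1-gamma)*1] = 1 - c_h,
# using u1(H,h) = 1-c_h, u1(L,h) = 1, u1(L,l) = 0.
q = (1 - c_h) / (1 - gamma * c_h)
lhs = q * (gamma * (1 - c_h) + (1 - gamma) * 1)
assert abs(lhs - (1 - c_h)) < 1e-12                    # constraint binds
assert abs(q * gamma - F_star_pc(gamma, c_h)) < 1e-12  # F* = q * gamma

# Claim 1: F* strictly increases in gamma*, strictly decreases in c_h,
# and c_l never enters the formula.
assert F_star_pc(0.7, 0.3) > F_star_pc(0.6, 0.3)
assert F_star_pc(0.6, 0.4) < F_star_pc(0.6, 0.3)
```

The independence of $c_l$ is immediate, since $c_l$ does not appear in (\ref{4.3}).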
Intuitively, these policies increase the consumers' demand for high effort when they purchase the high-end product. This leads to an increase in the equilibrium frequency of high effort since the firm needs to induce consumers to purchase the high-end product with high enough probability in order to obtain its Stackelberg payoff. Next, we consider a variant of the product choice game in which every consumer chooses whether to buy a high-end product, an intermediate product, or a low-end product. The firm's payoffs are: \begin{center} \begin{tabular}{| c | c | c | c |} \hline -- & $h$ & $m$ & $l$\\ \hline $H$ & $1-c$ & $p-c$ & $-c$\\ \hline $L$ & $1$ & $p$ & $0$\\ \hline \end{tabular} \end{center} where its cost of effort is $c \in (0,1)$, its benefit from selling the high-end product is $1$, its benefit from selling the intermediate product is $p \in (0,1)$, and its benefit from selling the low-end product is $0$. \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.4] \draw[->] (0,0)--(26,0)node[right]{$\Pr(a=H)$}; \draw[ultra thick] (24,0.1)--(24,-0.1)node[below]{$1$}; \draw[ultra thick] (0,0.1)--(0,-0.1)node[below]{$0$}; \draw[ultra thick] (8,0.1)--(8,-0.1)node[below]{$\gamma_2^*$}; \draw[ultra thick] (15,0.1)--(15,-0.1)node[below]{$\gamma_1^*$}; \draw[ultra thick, red] (0,0)--(4,0)node[above]{$l$ is optimal}--(8,0); \draw[ultra thick, blue] (8,0)--(11,0)node[above]{$m$ is optimal}--(15,0); \draw[ultra thick, green] (15,0)--(19,0)node[above]{$h$ is optimal}--(24,0); \draw[->] (0,-5)--(26,-5)node[right]{$\Pr(a=H)$}; \draw[ultra thick] (24,-4.9)--(24,-5.1)node[below]{$1$}; \draw[ultra thick] (0,-4.9)--(0,-5.1)node[below]{$0$}; \draw[ultra thick] (6,-4.9)--(6,-5.1)node[below]{$\gamma_2^*$}; \draw[ultra thick] (19,-4.9)--(19,-5.1)node[below]{$\gamma_1^*$}; \draw[ultra thick, red] (0,-5)--(3,-5)node[above]{$l$ is optimal}--(6,-5); \draw[ultra thick, blue] (6,-5)--(13,-5)node[above]{$m$ is optimal}--(19,-5); \draw[ultra thick, green] 
(19,-5)--(22,-5)node[above]{$h$ is optimal}--(24,-5); \end{tikzpicture} \caption{Product choice game with three options: consumers' best responses before (upper panel) and after (lower panel) they receive a subsidy for purchasing the intermediate product.} \end{center} \end{figure} The value of $F^*(u_1,u_2)$ depends on consumers' payoffs only through two sufficient statistics $\gamma_1^*$ and $\gamma_2^*$ with $0< \gamma_2^* < \gamma_1^* <1$, such that a consumer has an incentive to choose $h$ when the firm exerts high effort with probability more than $\gamma_1^*$, has an incentive to choose $m$ when the firm exerts high effort with probability between $\gamma_2^*$ and $\gamma_1^*$, and has an incentive to choose $l$ when the firm exerts high effort with probability less than $\gamma_2^*$. This game has monotone-supermodular payoffs once the firm's actions are ranked according to $H \succ L$ and consumers' actions are ranked according to $h \succ m \succ l$. Applying Proposition \ref{Prop1} to this game, we have: \begin{equation}\label{4.4} F^*(u_1,u_2) = \begin{cases} \frac{\gamma_1^* (1-c)}{1-\gamma_1^* c} & \textrm{ if } p \leq \frac{\gamma_2^*}{\gamma_1^*}\\ \frac{\gamma_2^* (1-c)}{p-\gamma_2^* c} & \textrm{ if } p > \frac{\gamma_2^*}{\gamma_1^*} \textrm{ and } c \geq \frac{1-p}{1-\gamma_2^*}\\ \frac{\gamma_1^* (1-p) -c(\gamma_1^*-\gamma_2^*)}{(1-p)-c(\gamma_1^*-\gamma_2^*)} & \textrm{ if } p > \frac{\gamma_2^*}{\gamma_1^*} \textrm{ and } c < \frac{1-p}{1-\gamma_2^*}. \end{cases} \end{equation} Similar to the game with two purchasing options, we examine the effects of a small amount of sales taxes and subsidies for each product on $F^*(u_1,u_2)$. \begin{enumerate} \item A tax on consumers for purchasing the high-end product (i.e., an increase in $\gamma_1^*$) has no effect on $F^*(u_1,u_2)$ when $p > \frac{\gamma_2^*}{\gamma_1^*}$ and $c \geq \frac{1-p}{1-\gamma_2^*}$, and increases $F^*(u_1,u_2)$ otherwise. 
A subsidy on consumers for purchasing the low-end product (i.e., an increase in $\gamma_2^*$) has no effect on $F^*(u_1,u_2)$ when $ p \leq \frac{\gamma_2^*}{\gamma_1^*}$, and increases $F^*(u_1,u_2)$ otherwise. \item A subsidy on consumers for purchasing the intermediate product (i.e., a decrease in $\gamma_2^*$ and an increase in $\gamma_1^*$) leads to an increase in $F^*(u_1,u_2)$ when $ p \leq \frac{\gamma_2^*}{\gamma_1^*}$, leads to a decrease in $F^*(u_1,u_2)$ when $p > \frac{\gamma_2^*}{\gamma_1^*}$ and $c \geq \frac{1-p}{1-\gamma_2^*}$, and has an ambiguous effect on $F^*(u_1,u_2)$ when $p > \frac{\gamma_2^*}{\gamma_1^*}$ and $c < \frac{1-p}{1-\gamma_2^*}$. \end{enumerate} We obtain two additional insights compared to the case with two products. First, the effectiveness of subsidizing low-end products depends on the firm's benefit from selling intermediate products (i.e., the comparison between $p$ and $\frac{\gamma_2^*}{\gamma_1^*}$). This is because a small subsidy for purchasing the low-end product only increases the demand for effort when the consumer decides whether to purchase the intermediate product instead of the low-end product, but does not affect consumers' demand for effort when deciding whether to purchase the high-end product instead of the intermediate product. When selling the intermediate product is unprofitable (i.e., $p \leq \frac{\gamma_2^*}{\gamma_1^*}$), an increase in the demand for effort when consumers decide between $l$ and $m$ does not affect the firm's equilibrium action frequencies. Similarly, the effectiveness of taxing high-end products also depends on the profitability of selling the intermediate product, that is, the comparison between $c$ and $\frac{1-p}{1-\gamma_2^*}$. 
Second, subsidizing consumers for purchasing intermediate products encourages the firm to exert effort more frequently when the firm's profit from selling the intermediate product is low (i.e., $p \leq \frac{\gamma_2^*}{\gamma_1^*}$), but encourages the firm to shirk more frequently otherwise. Intuitively, subsidizing the intermediate product has an effect similar to that of subsidizing the high-end product in the two-product setting when selling the intermediate product is attractive for the firm, and has an effect similar to that of subsidizing the low-end product when selling the intermediate product is unattractive. \paragraph{Entry Deterrence Game:} Player $1$ is an incumbent firm that chooses between fight (action $F$) and accommodate (action $A$). Player $2$s are potential entrants. Each of them chooses between staying out (action $O$) and entering the market (action $I$). Players' payoffs are: \begin{center} \begin{tabular}{| c | c | c |} \hline -- & $O$ & $I$ \\ \hline $F$ & $1-c_o,0$ & $-c_i,-(1-\gamma^*)$ \\ \hline $A$ & $1, 0$ & $0,\gamma^*$ \\ \hline \end{tabular} \end{center} where $c_o \in (0,1)$ is the incumbent's cost of setting low prices when the potential entrant stays out, and $c_i>0$ is its cost of setting low prices when the potential entrant enters. Each potential entrant prefers to stay out only when the incumbent fights with probability more than $\gamma^* \in (0, 1)$. These payoffs are monotone-supermodular once we rank the incumbent's actions according to $F \succ A$, and the entrant's actions according to $O \succ I$. The incumbent's Stackelberg action is $F$. Proposition \ref{Prop1} implies that: \begin{equation*} F^*(u_1,u_2)= \frac{(1-c_o) \gamma^*}{ 1-c_o \gamma^*}. \end{equation*} \begin{Claim}\label{C2} The lowest discounted frequency with which the incumbent fights potential entrants strictly increases in $\gamma^*$, strictly decreases in $c_o$, and is independent of $c_i$. 
\end{Claim} In terms of practical implications, consider a policy maker who can subsidize potential entrants for entering the market. This is modeled as an increase in every entrant's payoff from action $I$ by $s>0$. Claim \ref{C2} implies that the frequency with which the incumbent fights entry is non-monotone with respect to the amount of subsidy. In particular, \begin{enumerate} \item When the subsidy to potential entrants is close to but strictly less than $1-\gamma^*$, the strategic-type incumbent fights with frequency close to $1$ in \textit{all} equilibria. More generally, our formula implies that when $s < 1-\gamma^*$, a marginal increase in the amount of subsidy increases $F^*(u_1,u_2)$. \item When the subsidy is more than $1-\gamma^*$, each entrant has a strict incentive to enter the market regardless of the incumbent's action, so the incumbent plays $A$ in every period. Therefore, the frequency with which the incumbent fights is zero in \textit{all} equilibria. \end{enumerate} \paragraph{Fiscal Policy Game:} Player $1$ is a government that chooses between a normal tax rate and a high tax rate (i.e., expropriation), and player $2$s are citizens who decide whether to invest. Players' payoffs are: \begin{center} \begin{tabular}{| c | c | c |} \hline -- & Invest & Not Invest \\ \hline Normal Tax Rate & $\tau,1-\tau-c$ & $0,0$ \\ \hline Expropriate & $1,-c$ & $0,0$ \\ \hline \end{tabular} \end{center} where the normal tax rate is $\tau \in (0,1)$ and the cost of investment is $c \in (0,1-\tau)$. These payoffs are monotone-supermodular. The government's Stackelberg action is ``\textit{normal tax rate}'' and its Stackelberg payoff is $\tau$. According to Proposition \ref{Prop1}, the highest frequency with which the government expropriates is: \begin{equation*} 1-F^*(u_1,u_2)=1-\frac{\tau}{1-\tau} \cdot \frac{c}{1-c}, \end{equation*} which is a decreasing function of both $\tau$ and $c$. 
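The algebra behind this expression can be checked mechanically (our sketch, not the paper's code): the citizens' payoffs imply that they are willing to invest only when the probability of the normal tax rate is at least $c/(1-\tau)$, and the binding payoff constraint then pins down the mixing weight $q$.

```python
def expropriation_freq(tau, c):
    # 1 - F*(u1,u2) = 1 - (tau/(1-tau)) * (c/(1-c)), for c in (0, 1-tau)
    return 1 - (tau / (1 - tau)) * (c / (1 - c))

tau, c = 0.3, 0.2   # illustrative values with c < 1 - tau

# Citizens invest only if Pr(normal tax) >= gamma = c/(1-tau).  Player 1's
# payoff from that indifference mixture against "Invest" is 1 - c, so the
# binding constraint q*(1-c) = tau gives q, and F* = q * gamma.
gamma = c / (1 - tau)
assert abs(gamma * tau + (1 - gamma) * 1 - (1 - c)) < 1e-12
q = tau / (1 - c)
assert abs((1 - q * gamma) - expropriation_freq(tau, c)) < 1e-12

# The worst-case expropriation frequency decreases in both tau and c.
assert expropriation_freq(0.4, 0.2) < expropriation_freq(0.3, 0.2)
assert expropriation_freq(0.3, 0.3) < expropriation_freq(0.3, 0.2)
```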
This conclusion implies that in the worst-case scenario, the frequency of government expropriation is lower when the government's revenue is higher under a normal tax rate (i.e., $\tau$ is larger), or when it is more costly for the citizens to invest (i.e., $c$ is larger). \section{Discussions of Modeling Assumptions and Results}\label{sec6} \paragraph{The Role of Assumption 2:} Assumption \ref{Ass2} rules out games in which the optimal commitment outcome $(a^*,b^*)$ is a stage-game Nash equilibrium (such as coordination games and chicken games), as well as games where player $1$'s optimal commitment payoff is no more than his minmax payoff (such as matching pennies). Our formula for the lowest discounted frequency of the Stackelberg action fails when $u_1(a^*,b^*) \leq \underline{v}_1$. For example, consider the following variant of the matching pennies game that satisfies Assumption \ref{Ass1} and the first part of Assumption \ref{Ass2} but violates the second part of Assumption \ref{Ass2}: \begin{center} \begin{tabular}{| c | c | c |} \hline -- & $h$ & $t$ \\ \hline $H$ & $1+\varepsilon,-1$ & $-1+\varepsilon,1$ \\ \hline $T$ & $-1,1$ & $1,-1$ \\ \hline \end{tabular} \end{center} where $\varepsilon>0$. Player $1$'s unique Stackelberg action is $H$, his Stackelberg payoff is $-1+\varepsilon$, and his minmax payoff is close to $0$ when $\varepsilon$ is small enough. Therefore, $F^*(u_1,u_2)$ is close to $0$ when $\varepsilon$ is close to $0$. However, if both $\pi$ and $\varepsilon$ are small, then the discounted frequency of action $H$ is close to $1/2$ in every equilibrium. This means that neither the lower bound $F^*(u_1,u_2)$ nor the upper bound $1$ can be approximately attained in any equilibrium of the reputation game. 
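The claims in this example can be verified numerically. The sketch below (ours, not the paper's code) approximates player $1$'s minmax payoff by a grid search over player $2$'s mixed actions, and exhibits an incentive compatible mixture with a vanishing weight on $H$ that delivers player $1$ exactly his Stackelberg payoff $-1+\varepsilon$:

```python
def minmax(eps, grid=10000):
    # min over player 2's mixtures beta of max over player 1's pure actions,
    # using the payoffs of the perturbed matching pennies game above
    best = None
    for i in range(grid + 1):
        beta = i / grid                                   # probability of h
        uH = beta * (1 + eps) + (1 - beta) * (-1 + eps)   # = 2*beta - 1 + eps
        uT = beta * (-1.0) + (1 - beta) * 1.0             # = 1 - 2*beta
        val = max(uH, uT)
        if best is None or val < best:
            best = val
    return best

eps = 0.01
# minmax payoff is about eps/2 > 0, while the Stackelberg payoff is -1 + eps:
assert abs(minmax(eps) - eps / 2) < 1e-3
assert -1 + eps < minmax(eps)

# F* is nonetheless near 0: putting weight p = eps/(2+eps) on H keeps h a
# best reply for player 2 (since p < 1/2) and gives player 1 exactly
# p*(1+eps) + (1-p)*(-1) = -1 + eps, meeting the payoff constraint of the
# linear program with an arbitrarily small frequency of H.
p = eps / (2 + eps)
assert p < 0.5
assert abs(p * (1 + eps) + (1 - p) * (-1) - (-1 + eps)) < 1e-12
```

Consistent with the discussion above, the lower bound $F^*(u_1,u_2)$ vanishes as $\varepsilon \to 0$, even though the equilibrium frequency of $H$ stays near $1/2$.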
In games where $u_1(a^*,b^*)> \underline{v}_1$ but $(a^*,b^*)$ is a stage-game Nash equilibrium, our formula for the lowest discounted frequency of $a^*$ still applies; examples include the battle of the sexes game and the chicken game, \begin{center} \begin{tabular}{| c | c | c |} \hline Battle of Sexes & $o$ & $f$ \\ \hline $O$ & $2,1$ & $0,0$ \\ \hline $F$ & $0,0$ & $1,2$ \\ \hline \end{tabular} \quad \begin{tabular}{| c | c | c |} \hline Chicken Game & $h$ & $d$ \\ \hline $H$ & $0,0$ & $7,2$ \\ \hline $D$ & $2,7$ & $6,6$ \\ \hline \end{tabular} \end{center} or, more generally, games in which $u_1(a^*,b^*)$ is player $1$'s highest feasible payoff and $u_1(a^*,b^*)>u_1(a,b)$ for every $(a,b) \neq (a^*,b^*)$. In those games, $F^*(u_1,u_2)=1$. This is because player $1$'s payoff is close to $u_1(a^*,b^*)$ in every equilibrium of the reputation game, so $a^*$ must be played with discounted frequency close to $1$. Next, we present a counterexample that satisfies Assumption \ref{Ass1} and the second part of Assumption \ref{Ass2} but violates the first part of Assumption \ref{Ass2}. Suppose players' payoffs are: \begin{center} \begin{tabular}{| c | c | c |} \hline -- & $T$ & $N$ \\ \hline $H$ & $1,1$ & $0,0$ \\ \hline $M$ & $0,3$ & $3,0$ \\ \hline $L$ & $0,0$ & $0,3$ \\ \hline \end{tabular} \end{center} Player $1$'s Stackelberg action is $H$. Since $N$ is player $2$'s best reply to player $1$'s mixed action $\frac{1}{2}M + \frac{1}{2}L$, from which player $1$'s expected payoff is $3/2$, the value of $F^*(u_1,u_2)$ is $0$. When the prior probability of the commitment type $\pi$ is strictly greater than $3/4$, the discounted frequency with which player $1$ plays $H$ is $1$ in every equilibrium of the reputation game. This is because in every period where player $2$ has not observed player $1$ playing actions other than $H$, she has a strict incentive to play $T$, so player $1$ obtains payoff $1$ by playing $H$ in every period. 
When player $1$ deviates to $M$ or $L$, his stage-game payoff is $0$, and his continuation value is no more than $1$ according to the folk theorem result of \citet*{FKM-90}. This implies that player $1$ plays $H$ at every on-path history in every equilibrium. \paragraph{Mixed-Strategy Commitment Types:} Our model excludes commitment types that play mixed strategies. In order to understand the new challenges brought by mixed-strategy commitment types, consider the product choice game in Section \ref{sec4} where with positive probability, player $1$ is a type who mechanically plays $(\gamma^*+\varepsilon) H +(1-\gamma^*-\varepsilon) L$ in every period, where $\varepsilon>0$ is small. A new complication arises since the strategic type can never be separated from the mixed-strategy commitment type. As a result, the continuation game always has nontrivial incomplete information regardless of the strategies being played. This stands in contrast to games where all commitment types play pure strategies, in which the strategic type is separated from a commitment type as soon as he stops imitating that type. Analyzing repeated games with persistent private information and short-lived uninformed players is a well-known challenge in the repeated games literature, and to the best of our knowledge, there is no existing result that characterizes the informed player's equilibrium behaviors or his equilibrium action frequencies.\footnote{Very few results are obtained in repeated games between an informed patient player and a sequence of uninformed myopic players. \cite{pei} characterizes the set of equilibrium payoffs between an informed seller and a sequence of uninformed buyers when the seller has persistent private information about his cost. 
His result relies on the assumption that all types of the seller have the same ordinal preference over stage-game outcomes, and does not apply when there are mixed-strategy commitment types.} \paragraph{Rich Set of Commitment Types:} Our baseline model focuses on settings where there is only one commitment type. Our theorems extend to environments with any finite number of commitment types, as long as all of them play pure strategies, and there exists a commitment type who plays $a^*$ in every period. Our proof for the discounted frequency of action $a^*$ being no less than $F^*(u_1,u_2)$ remains the same. Regarding the construction of equilibria that approximately attain a given frequency in $\mathcal{A}$: for every type space that satisfies the above requirements, there exists $T \in \mathbb{N}$ such that for every $\delta \in (0,1)$ and every equilibrium under $\delta$, player $2$'s posterior belief in period $T$ assigns positive probability to at most one commitment type. Constructing the continuation equilibrium from period $T$ onward according to our proof in Appendix \ref{sec5}, the discounted frequency of player $1$'s actions is close to $\alpha^* \in \mathcal{A}$ when $\delta$ is close to $1$. \paragraph{Testable Predictions:} Generally speaking, there are three challenges to testing the predictions of reputation models.\footnote{Despite the large literature that takes repeated-game predictions to the lab (see \cite{DF2018}), we are unaware of experimental results on repeated games with incomplete information between a patient player and a sequence of myopic players.} First, econometricians do not know which equilibrium players coordinate on. Second, econometricians usually observe players' behaviors rather than their payoffs, while most of the existing reputation results that apply to all equilibria (such as those in Fudenberg and Levine, 1989) are stated in terms of the patient player's payoff rather than his behavior. 
Third, many interesting equilibria in reputation games are in mixed strategies, but econometricians usually cannot observe these mixed strategies and can only observe the realized pure strategy. Our results overcome the first and second challenges by delivering predictions on the patient player's action frequencies that apply to all equilibria. Take the product choice game example in Section \ref{sec4}. The expression for $F^*(u_1,u_2)$ depends only on two terms: \begin{enumerate} \item $\gamma^*$: the minimal probability of high effort above which player $2$ is willing to play $h$; \item $c_h$: the ratio between the cost of effort and the firm's benefit when a consumer buys the high-end product. \end{enumerate} The values of $\gamma^*$ and $c_h$ can be computed without knowing all the details of players' stage-game payoff functions. Therefore, testing our predictions on the patient player's action frequencies imposes less demanding data requirements than testing the predictions on payoffs in canonical reputation models. In the context of the product choice game between a firm and a sequence of consumers, one way to address the third challenge is to use the distribution of the firm's actions across different markets as a proxy for its mixed actions. This idea is applicable when the firm is a chain store that operates in many independent and geographically separated markets, and moreover, the consumers in each market can only observe the firm's actions in their own market but cannot observe the firm's actions in other markets. This is usually the case in developing countries where there is a lack of record-keeping institutions, so that most consumers rely on word-of-mouth communication to learn about the firm's past behaviors. In these situations, it is reasonable to assume that consumers in one market cannot observe the firm's past behaviors in other markets. 
Using this idea, suppose an econometrician can observe the firm's behavior in every period and in every market. Then he can compute the frequency of the firm's actions from his observations. He can then apply Theorems \ref{Theorem1} and \ref{Theorem2} to examine whether his observations are consistent with the predictions of reputation models. The above discussion also unveils a limitation of our results: they only characterize the set of action frequencies that can arise in equilibrium, but do not deliver predictions on the action frequencies that apply to \textit{every path of equilibrium play}. Therefore, an econometrician cannot test our predictions after observing a realized path of equilibrium play. He can do so after observing the firm's mixed actions, e.g., observing the firm's behaviors across many markets and using the empirical distribution as a proxy for the firm's mixed action. \section{Conclusion}\label{sec7} We examine the effects of reputation on the frequencies with which a patient player plays each of his actions. Our results characterize tight bounds that apply to all equilibria in a broad class of games. Our research question stands in contrast to the reputation literature that focuses on the patient player's equilibrium payoff. Our results stand in contrast to those that study the patient player's behavior in some particular equilibria. Our results imply that in games where the optimal commitment outcome is not a stage-game Nash equilibrium, the patient player may play his optimal commitment action with frequency bounded away from one no matter how patient he is. When the patient player's optimal commitment payoff coincides with his highest equilibrium payoff in the repeated complete information game, reputation effects cannot further refine the patient player's behavior beyond the fact that his equilibrium payoff is at least his optimal commitment payoff. 
In terms of applications, our results imply that a policy maker can increase the frequency with which a firm exerts high effort by subsidizing consumers for purchasing low-end products or by taxing consumers for purchasing high-end products. They also imply that a small amount of subsidy to potential entrants for entering the market makes an incumbent more aggressive in fighting entrants, but a large amount of subsidy encourages the incumbent to accommodate entry. \newpage
https://arxiv.org/abs/1512.04988
Large deviations for random projections of $\ell^p$ balls
Let $p\in[1,\infty]$. Consider the projection of a uniform random vector from a suitably normalized $\ell^p$ ball in $\mathbb{R}^n$ onto an independent random vector from the unit sphere. We show that sequences of such random projections, when suitably normalized, satisfy a large deviation principle (LDP) as the dimension $n$ goes to $\infty$, which can be viewed as an annealed LDP. We also establish a quenched LDP (conditioned on a fixed sequence of projection directions) and show that for $p\in(1,\infty]$ (but not for $p=1$), the corresponding rate function is "universal", in the sense that it coincides for "almost every" sequence of projection directions. We also analyze some exceptional sequences of directions in the "measure zero" set, including the directions corresponding to the classical Cramér's theorem, and show that those directions yield LDPs with rate functions that are distinct from the universal rate function of the quenched LDP. Lastly, we identify a variational formula that relates the annealed and quenched LDPs, and analyze the minimizer of this variational formula. These large deviation results complement the central limit theorem for convex sets, specialized to the case of sequences of $\ell^p$ balls.
\section{Introduction}\label{sec-intro} Consider the projection of a random $n$-dimensional vector $X^{(n)}$ onto some lower dimensional subspace. Our broad goal is to understand and analyze distributional properties of the projections of high-dimensional random vectors (i.e., large $n$), given certain natural assumptions on the law of $X^{(n)}$. In this paper, we focus on projections onto one-dimensional subspaces, and we write the \emph{projection} of $x\in {\mathbb R}^n$ onto the direction $v\in \mathbb{S}^{n-1}$ (the unit sphere in $\mathbb{R}^n$), to refer to $\langle x,v\rangle_n \doteq \sum_{i=1}^n x_i v_i \in {\mathbb R}$; this is for the sake of brevity, since to be precise, the preceding quantity is the scalar component of the projected vector $\langle x,v\rangle_nv\in {\mathbb R}^n$. \nom[0]{$\langle \cdot, \cdot\rangle_n$}{Euclidean inner product on ${\mathbb R}^n$} One prior result in this vein is the central limit theorem (CLT) for projections of convex bodies: if $X^{(n)}$ is sampled from a log-concave measure (e.g., the uniform measure on a convex body) that is also isotropic, then for sufficiently large $n$, and ``most" $\theta^{(n)}\in\mathbb{S}^{n-1}$, the projection of $X^{(n)}$ onto $\theta^{(n)}$ satisfies $ \langle X^{(n)}, \theta^{(n)} \rangle_n \approx N(0,1)$ in some suitable sense. This result is established via a concentration estimate in \cite{klartag2007central}, drawing from a classical idea of \cite{diaconis1984asymptotics, sudakov1978typical} and a conjecture stated in \cite{anttila2003central, brehm2000asymptotics}. Similar central limit results hold for directions of projection $\Theta^{(n)}$ drawn from the unique rotationally invariant measure on $\mathbb{S}^{n-1}$, and for projections onto $k$-dimensional subspaces, for $1 \le k \ll \sqrt{\log n}$ \cite{meckes2012approximation, meckes2012projections}. 
In this class of results, the source of the Gaussian approximation may be attributed to geometric properties (specifically, log-concavity) of the original measure. It is natural to ask if existing CLT results for typical projections of high-dimensional random vectors from a convex set can be complemented by analyzing deviations beyond the central limit fluctuation scale. In this work, we initiate such an analysis by investigating \emph{large deviation principles (LDPs)} for sequences of random one-dimensional projections of a certain class of convex bodies, the so-called $\ell^p$ balls. One of our motivations for investigating LDPs is to understand which aspects of random projections can be used to distinguish between different convex bodies. From a central limit perspective, convex bodies cannot be distinguished by their random projections; that is, given \emph{any} isotropic convex body in high dimension, its typical random projections will be approximately standard Gaussian. In fact, Gaussian asymptotics arise not only at the ``central limit" scale, but also across the ``moderate deviation" scale \cite{sodin2007tail}. These universal results are quite powerful, but from another point of view, it is also of interest to precisely identify how a random projection can encode distinct distributional information about the original vector. Our results demonstrate that the {large deviation} behavior of a random projection of a convex body depends on the geometry of the underlying convex body. In particular, we demonstrate sharply different large deviation behavior for random projections of $\ell^p$ balls for different values of $p\in[1,\infty)$. 
That is, compare Theorem \ref{th-aldp} for $p\in[2,\infty)$ against Theorem \ref{th-aldp12} for $p\in[1,2)$ (which we comment on further in Remark \ref{rmk-p12}), and also compare Theorem \ref{th-qldp} against Theorem \ref{th-qldp1}, where the anomalous LDP occurs for $p=1$, the only value of $p\in[1,\infty)$ for which the $\ell^p$ ball has a non-smooth ``corner". Unlike for central limit theorems, where one can quantify the closeness of a random vector in a fixed high-dimensional space ${\mathbb R}^n$ to the $n$-dimensional Gaussian using various metrics, the statement of an LDP for random projections requires an infinite sequence of convex bodies defined for all dimensions $n\in{\mathbb N}$. This motivates our analysis of the uniform measures on $\ell^p$ balls, which offer a natural, fundamental, yet non-trivial (in particular, non-product) example of a \emph{sequence} of isotropic, log-concave measures. Specifically, we address the following question: \begin{quote}\normalsize\itshape Do LDPs hold for (suitably normalized) random projections of vectors uniformly distributed on the $\ell^p$ ball of ${\mathbb R}^n$? If so, then at which speed and with what rate function? Moreover, how do these LDPs vary with $p\in[1,\infty)$? \end{quote} These questions have the flavor of the study of LDPs in random environments (see, e.g., \cite{comets2000quenched}), where in our case the random ``environment" is governed by the random sequence of projection directions. In this setting, it is natural to consider both the case when one conditions on a fixed sequence of random projection directions (the so-called \emph{quenched} case) and also when one incorporates the randomness of the projection directions (the so-called \emph{annealed} case).
Our main results on this question are the following: \begin{enumerate}[leftmargin=9em, rightmargin=2em, align=right, labelwidth=6em, itemsep=0.5em] \item [Theorems \ref{th-aldp} \& \ref{th-aldp12}:] annealed LDPs, for $p\in[2,\infty)$ and $p\in[1,2)$, respectively. \item [Theorems \ref{th-qldp} \& \ref{th-qldp1}:] quenched LDPs, for $p\in(1,\infty)$ and $p=1$, respectively. Moreover, for $p\in(1,\infty)$, but \emph{not} for $p=1$, this LDP holds with a ``universal" rate function that coincides for ``almost every" sequence of directions. \item [Theorem \ref{th-compar}:] for $p\in(2,\infty)$, a variational formula that relates the annealed and quenched rate functions via the entropy of an underlying measure. \item [Theorem \ref{th-atyp}:] a proof of the observation that the particular sequence of directions $(\iota^{(n)})_{n\in{\mathbb N}}$ defined in \eqref{onedefn} below (which corresponds to Cram\'er's theorem in the case of product measures) leads to an ``atypical" large deviation rate function. \end{enumerate} Observe that in the preceding summary of our main results (stated precisely in Sect. \ref{sec-main} and proved in Sects. \ref{sec-annealed}--\ref{sec-atyp}), we only discuss $p< \infty$, and omit the case $p=\infty$. However, all of our results have corresponding versions for general product measures satisfying certain tail conditions (including the uniform measure on the $\ell^\infty$ ball), in fact with simpler proofs than in the non-product ($p<\infty$) case. We compile all of the corresponding statements for product measures in Sect. \ref{sec-prod}, where we also provide brief sketches of the proofs. We make the distinction between $p<\infty$ and $p=\infty$ because a secondary motivation for our work is to investigate to what extent large deviation results extend beyond the classical setting of sums of independent and identically distributed (i.i.d.) random variables to the more general setting of generic projections of log-concave measures. 
More precisely, let $X^{(n,p)}$ be distributed uniformly on the $\ell^p$ ball of ${\mathbb R}^n$. Consider the direction $\iota^{(n)}\in\mathbb{S}^{n-1}$ defined by \begin{equation}\label{onedefn} \iota^{(n)} \doteq \tfrac{1}{\sqrt{n}}(\underbracket{1,1,\dots, 1}_{n \text{ times }}) \in \mathbb{S}^{n-1}. \nom[iota]{$\iota^{(n)}$}{direction $\tfrac{1}{\sqrt{n}}(1,\dots,1) \in \mathbb{S}^{n-1}$} \end{equation} The classical Cram\'er's theorem yields an LDP for the sequence of suitably normalized projections $n^{-1/2} \langle X^{(n,\infty)}, \, \iota^{(n)}\rangle_n$, $n\in {\mathbb N}$. In contrast, our work establishes an LDP for $n^{(1/p)-(1/2)} \langle X^{(n,p)}, \theta^{(n)} \rangle_n$, $n\in {\mathbb N}$, for $p\in[1,\infty)$ and general $\theta^{(n)}\in\mathbb{S}^{n-1}$. Figure \ref{fig-projwhole} illustrates our setup. \begin{figure}[bht] \begin{subfigure}{0.45\textwidth} \centering \begin{tikzpicture}[scale=1.75] \node[green, scale = 0.90] at (-1.3,0.9) {${\mathbb R}^n$}; \draw [gray!50] (0,-1.15) -- (0,1.15) ; \draw [gray!50] (-1.15,0) -- (1.15,0) ; \draw [color=green, fill opacity=0.35, fill=green!25, pattern=bricks, pattern color = green] (-1,-1) -- (-1, 1) -- (1, 1) -- (1, -1) -- (-1, -1); \draw [blue!50, very thick, densely dotted] plot [smooth cycle, tension=1] coordinates {(-1,0) (0,1) (1,0) (0,-1)}; \node[blue, scale=0.90] at (0.53,-0.8) {$\mathbb{S}^{n-1}$}; \draw [red, thick, densely dotted, ->] (0, 0) -- (0.70710,0.70710) ; \node[scale=0.90, red] at (0.5,0.34) { $\iota^{(n)}$}; \draw [violet, densely dashed] (-0.16,-0.16) -- (-0.94,0.60); \draw [violet,very thick] (0,0) -- (-0.16,-0.16); \draw [violet](-0.16 ,-0.16 ) -- (-0.16 - 0.08,-0.16 + 0.08*38/39) --(-0.08 -0.08, -0.08 + 0.08*38/39) -- (-0.08,-0.08) ; \node[fill=violet, thin, diamond, scale=0.3] at (-0.16,-0.16) { }; \node[scale=0.90, violet] at (-0.2, -0.3) { $\langle X^{(n)}, \iota^{(n)} \rangle$ }; \node[fill=teal, thin, diamond, scale=0.3] at (-0.94,0.60) { }; \node[scale=0.90, teal] at (-0.825,0.75)
{ $X^{(n)}$}; \draw [teal] (0,0) -- (-0.94,0.60); \end{tikzpicture} \caption{project $\ell^\infty$ ball of ${\mathbb R}^n$ onto fixed $\iota^{(n)}$.} \label{fig-projo} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \begin{tikzpicture}[scale=1.75] \node[green, scale = 0.90] at (-1.3,0.9) {${\mathbb R}^n$}; \draw [gray!50] (0,-1.15) -- (0,1.15) ; \draw [gray!50] (-1.15,0) -- (1.15,0) ; \draw [color=green, fill opacity=0.35, fill=green!25, pattern=bricks, pattern color = green] plot [smooth cycle, tension=1.7] coordinates {(-1,0) (0,1) (1,0) (0,-1)}; \draw [blue!50, very thick, densely dotted] plot [smooth cycle, tension=1] coordinates {(-1,0) (0,1) (1,0) (0,-1)}; \node[blue, scale=0.90] at (0.53,-0.8) {$\mathbb{S}^{n-1}$}; \draw [red, thick, densely dotted, ->] (0, 0) -- (-0.96824,-1/4) ; \node[scale=0.90, red] at (-1.1,-1/4) { $\Theta^{(n)}$}; \draw [red, opacity=0.85 ,densely dotted, ->] (0, 0) -- (0.8, 0.6) ; \draw [red,opacity=0.85 ,densely dotted, ->] (0, 0) -- (-0.3, -0.9539) ; \draw [red,opacity=0.85 ,densely dotted, ->] (0, 0) -- (-0.5, 0.866) ; \draw [red,opacity=0.85 ,densely dotted, ->] (0, 0) -- (0.75, 0.6614) ; \draw [red,opacity=0.85 ,densely dotted, ->] (0, 0) -- (0.9949, -0.1) ; \draw [red,opacity=0.85 , densely dotted, ->] (0, 0) -- (+0.18,-0.9836) ; \draw [violet, densely dashed] (-0.736,-0.19) -- (-0.94,0.60); \draw [violet,very thick] (0,0) -- (-0.736,-0.19); \draw [violet] (-0.736, -0.19) -- (-0.736 -0.04 ,-0.19 + 3.8725*0.04) -- ( -0.736 -0.04 + 0.16*0.96, -0.19 + 3.8725*0.04 + 0.16/4 ) -- (-0.736 +0.16*0.96, -0.19+0.16/4) ; \node[fill=violet, thin, diamond, scale=0.3] at (-0.736,-0.19) { }; \node[scale=0.90, violet] at (-0.4, -0.3) { $\langle X^{(n)}, \Theta^{(n)} \rangle$ }; \node[fill=teal, thin, diamond, scale=0.3] at (-0.94,0.60) { }; \node[scale=0.90, teal] at (-0.825,0.75) { $X^{(n)}$}; \draw [teal] (0,0) -- (-0.94,0.60); \end{tikzpicture} \caption{project $\ell^p$ ball of ${\mathbb R}^n$ onto random $\Theta^{(n)}$.} 
\label{fig-projr} \end{subfigure} \caption{Projection of $X^{(n)}$ onto an element of $\mathbb{S}^{n-1}$.} \label{fig-projwhole} \end{figure} Note that in the central limit setting, projections onto general $\theta^{(n)}\in\mathbb{S}^{n-1}$ exhibit the same properties (Gaussian fluctuations) as projections onto the specific direction $\iota^{(n)}\in\mathbb{S}^{n-1}$; in the large deviation setting, Theorem \ref{th-atyp} indicates that this is not the case. We elaborate on this in Sect.\ \ref{ssec-atyp}. Lastly, we are also interested in LDPs because they can yield not only the asymptotic likelihood of a rare event, but also insight into \emph{how} a rare event occurs. In particular, large deviation analysis typically yields variational formulas whose minimizer(s) admit a probabilistic interpretation. This paper initiates an investigation of a particular kind of ``geometric" rare event (large value of a projection), and we establish an associated variational formula in Theorem \ref{th-compar}. We also provide some analysis of a simpler variational formula (for the case $p=\infty$) in Sect. \ref{sec-analysis}. It would be interesting to extend this large deviation analysis to more general sequences of probability measures beyond the uniform measures on $\ell^p$ balls. An even broader goal is to determine precisely which geometric aspects of the underlying probability measures affect the large deviation behavior of random projections. We defer these questions for future work. The outline of this paper is as follows. In the remainder of Sect.\ \ref{sec-intro}, we review related work and set up the preliminaries for our own results. In Sect.\ \ref{sec-main}, we precisely state our main results and provide explicit formulas for the case of $p=2$. In Sect.\ \ref{sec-equiv}, we appeal to certain probabilistic representations of the $\ell^p$ balls which simplify our analysis. Sects.\ \ref{sec-annealed}--\ref{sec-atyp} contain the proofs of our results. In Sect. 
\ref{sec-prod}, we discuss analogous results for product measures. Lastly, in Sect.\ \ref{sec-analysis}, we analyze the variational problem established in Theorem \ref{th-compar}. A list of notation can be found on page \pageref{notation}. \subsection{Relation to prior work}\label{ssec-prior} Random projections of high-dimensional random vectors arise in a variety of applications. In the statistics and machine learning literature, projections onto random lower-dimensional subspaces are employed for the purposes of dimensionality reduction \cite{bingham2001random, lin2003dimensionality}, clustering \cite{fern2003random}, regression \cite{maillard2012linear}, and topic discovery \cite{ding2013topic} in the setting of high-dimensional data. The main idea is that a practitioner would like to restrict statistical analysis to a low-dimensional space, but it may be computationally expensive to select an ``optimal" subspace (using, e.g., Principal Component Analysis), whereas under certain assumptions, a randomly selected subspace may perform ``nearly" as well. On a more theoretical side, there is significant interest in $\ell^p$ balls due to their central role in convex geometry. As a small fraction of the extensive literature, we note results on sections \cite{meyer1988sections}, hyperplanes \cite{barthe2002hyperplane}, extremal slabs \cite{barthe2003extremal}, probabilistic representations \cite{barthe2005probabilistic}, and cone/surface measures \cite{naor2003projecting, naor2007surface, kr1}. The $\ell^p$ balls also arise in computer science in the context of sketches and low-distortion embeddings \cite{indyk2000stable}. An LDP for projections of $\ell^p$ balls onto canonical basis directions can be found in \cite{BarGamLozRou10}, but our work provides the first LDPs for projections onto general $\theta^{(n)}\in\mathbb{S}^{n-1}$ and random $\Theta^{(n)}$.
In Sect.\ \ref{sec-equiv}, we show how our LDPs are related to LDPs for weighted sums of certain i.i.d.\ random variables. For a partial survey of large deviation results in the setting of \emph{deterministically} weighted sums of i.i.d.\ random variables, we refer to \cite[\S2.1]{gkr3}, which details the arguments for quenched LDPs for projections of product measures. Also in Sect.\ \ref{sec-equiv}, it becomes apparent that our LDP is related to LDPs for \emph{self-normalized} sums, as developed in \cite{Shao97}. A question similar to our quenched LDP (Theorem \ref{th-qldp}) in the case $p=2$ can be found in \cite[\S3.3, Ex. 5,6]{de2009self}, but without specifying the form of the rate function (which can be found in Sect.\ \ref{ssec-p2}). We discuss these connections to self-normalized sums in greater detail in Remark\ \ref{rmk-shao}. \subsection{Setup and notation} \label{ssec-setup} Let $\mathbb{A} \doteq \prod_{n\in {\mathbb N}} {\mathbb R}^n$ denote the space of infinite triangular arrays. That is, $z\in\mathbb{A}$ if $z=(z^{(1)},z^{(2)},\dots)$ where $z^{(n)} \in {\mathbb R}^n$ for all $n\in {\mathbb N}$. \nom[aaaaa]{$\mathbb{A}$}{infinite triangular arrays} We assume that all random variables are defined on a common probability space $(\Omega, {\mathcal F}, {\mathbb P})$, and let ${\mathbb E}$ denote the corresponding expectation. Let $\mathcal{X}$ be some measurable space, and let $\mathcal{P}(\mathcal{X})$ be the space of probability measures on $\mathcal{X}$. For a random variable $\xi:\Omega\rightarrow \mathcal{X}$, and a measure $\mu\in\mathcal{P}(\mathcal{X})$, we write $\xi \sim \mu$ if the law of $\xi$ is $\mu$; that is, if $\mathbb{P}\circ \xi^{-1} = \mu$. \nom[px]{$\mathcal{P}(\mathcal{X})$}{probability measures on $\mathcal{X}$} For $p \in [1,\infty)$, $n \in {\mathbb N}$, and $x \in {\mathbb R}^n$, let $\nrm{x}_{n,p} \doteq \left(\sum_{i=1}^n |x_i|^p \right)^{1/p}$ denote the $\ell^p$ norm on ${\mathbb R}^n$.
For $p=\infty$, let $\nrm{x}_{n,\infty} \doteq \sup_{i \in \{1, \ldots, n\}} |x_i|$ denote the $\ell^{\infty}$ norm on ${\mathbb R}^n$. Let ${\mathbb B}_{n,p}$ be the unit $\ell^p$ ball in ${\mathbb R}^n$: \[ {\mathbb B}_{n,p} \doteq \left\{ x \in {\mathbb R}^n: \lpnrm{x} \leq 1 \right\}, \quad p \in [1,\infty], \] and let $\sx^{(n,p)} =(\sx^{(n,p)}_1,\dots, \sx^{(n,p)}_n)$ be a random vector that is distributed according to the uniform probability measure on ${\mathbb B}_{n,p}$. Whenever we define a probability measure on a subset $A\subset {\mathbb R}^n$, we mean a probability measure on the Borel subsets of $A$. \nom[a]{$\lVert \cdot \rVert_{n,p}$}{$\ell^p$ norm of ${\mathbb R}^n$} \nom[bnp]{${\mathbb B}_{n,p}$}{unit $\ell^p$ ball of ${\mathbb R}^n$} \nom[xnp]{$\sx^{(n,p)}$}{uniform point from ${\mathbb B}_{n,p}$} Let $\mathbb{S}^{n-1}$ denote the unit sphere in ${\mathbb R}^n$: \[ \mathbb{S}^{n-1} \doteq \left\{ x \in {\mathbb R}^n: x_1^2 + x_2^2 + \dots + x_n^2 = 1 \right\} = \left\{ x \in {\mathbb R}^n: \ltwonrm{x} = 1 \right\}. \] We write $\sigma_{n}$ for the unique rotationally invariant probability measure on $\mathbb{S}^{n-1}$. For $n\in {\mathbb N}$, let $\Theta^{(n)}$ denote a random vector that is distributed according to the uniform measure $\sigma_n$ on $\mathbb{S}^{n-1}$, independent of $\sx^{(n,p)}$. \nom[sn]{$\mathbb{S}^{n-1}$}{unit sphere in ${\mathbb R}^n$} \nom[szsig]{$\sigma_n$}{rotation inv.\ measure on $\mathbb{S}^{n-1}$ } \nom[theta1]{$\Theta^{(n)}$, $\theta^{(n)}$}{random / fixed direction in $\mathbb{S}^{n-1}$} For background on large deviations, we refer to \cite{DemZeibook}. In particular, recall the definition of large deviation principles: \begin{definition} Let $\Sigma$ be a topological space.
A sequence of $\Sigma$-valued random variables $(\xi_n)_{n\in {\mathbb N}}$ is said to satisfy a \emph{large deviation principle (LDP)} with \emph{speed} $s:{\mathbb N}\rightarrow {\mathbb R}$ and a \emph{rate function} $I:\Sigma\rightarrow[0,\infty]$ if $I$ is lower semi-continuous, and for all Borel measurable subsets $\Gamma\subset \Sigma$, \begin{equation*} -\inf_{x\in \Gamma^\circ} I(x) \le \liminf_{n\rightarrow\infty} \tfrac{1}{s(n)} \log {\mathbb P}(\xi_n\in \Gamma^\circ) \le \limsup_{n\rightarrow\infty} \tfrac{1}{s(n)} \log {\mathbb P}(\xi_n\in \bar \Gamma) \le -\inf_{x\in \bar{\Gamma}} I(x), \end{equation*} where $\Gamma^\circ$ and $\bar\Gamma$ denote the interior and closure of $\Gamma$, respectively. Furthermore, $I$ is said to be a \emph{good rate function} if it has compact level sets. When no speed is explicitly stated, we take the convention that the default speed is $s(n) = n$. \end{definition} In the large deviation setting, we are frequently interested in geometric properties of an LDP rate function, such as convexity, or the following weakened form of convexity. \begin{definition}\label{def-quasi} A function $f:{\mathbb R}\rightarrow(-\infty,+\infty]$ is said to be \emph{quasiconvex} if its level sets $\{x \in {\mathbb R} : f(x) \le c\}$ are convex for all $c\in {\mathbb R}$. \end{definition} Practically, this definition is useful because quasiconvex functions have an equivalent characterization: $f$ is quasiconvex if and only if there exists some $x_0\in {\mathbb R}$ such that $f$ is non-increasing for $x < x_0$ and non-decreasing for $x > x_0$. A general review of quasiconvex functions can be found, for example, in \cite[\S 3.4]{boyd2004convex}. As a further link to convexity, we recall the following transform which arises in Cram\'er's theorem, and will also play a role in our results. 
\begin{definition} Given a function $\Lambda:\mathbb{R}^n\rightarrow (-\infty,+\infty]$, the \emph{Legendre transform} of $\Lambda$ is the function $\Lambda^*: \mathbb{R}^n \rightarrow (-\infty,+\infty]$ defined by \begin{equation*} \Lambda^*(\tau) \doteq \sup_{t \in \mathbb{R}^n} \{ \langle t, \tau\rangle_n - \Lambda(t) \}, \quad \tau\in \mathbb{R}^n. \nom[aa]{$(\,\cdot\,)^*$}{Legendre transform} \end{equation*} \end{definition} We also define a class of measures which are intimately tied to the $\ell^p$ balls, as we will demonstrate in Sect.\ \ref{sec-equiv}. For $p\in [1,\infty)$, let $\mu_p \in \mathcal{P}(\mathbb{R})$ have density $f_p$, where \begin{equation}\label{fpdef} f_p(y) \doteq \frac{1}{2p^{1/p}\Gamma(1+\frac{1}{p})} e^{-|y|^p/p}, \quad y\in {\mathbb R}. \end{equation} This is the density of the \emph{generalized normal distribution} (also known as the \emph{exponential power distribution}) with location 0, scale $p^{1/p}$, and shape $p$. When $p=2$, $\mu_2$ corresponds to the standard Gaussian distribution. \nom[fp]{$f_p$}{generalized normal density} \nom[mzup]{$\mu_p$}{generalized normal distribution} \section{Main results}\label{sec-main} In Sects. \ref{ssec-ann}--\ref{ssec-p2}, we precisely state our main results. \subsection{Annealed LDP}\label{ssec-ann} Let $\sw^{(n,p)}$ be the normalized (scalar) projection of $\sx^{(n,p)}$ onto a random direction $\Theta^{(n)}$, defined as \begin{equation} \label{def-bwnpbth} \sw^{(n,p)} \doteq \frac{n^{1/p}}{n^{1/2}}\langle \sx^{(n,p)}, \Theta^{(n)} \rangle_n = \frac{1}{n} \sum_{i=1}^n (n^{1/p}\sx^{(n,p)}_i) (n^{1/2}\Theta^{(n)}_i), \quad n \in {\mathbb N}, \end{equation} where for $p=\infty$, we abide by the convention $n^{1/\infty} \equiv 1$. Our first result establishes an LDP for $(\sw^{(n,p)})_{n\in{\mathbb N}}$. 
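In numerical work, $\mu_p$ is available as the generalized normal family. Assuming SciPy's parametrization of \texttt{gennorm} (shape $\beta$, density $\beta e^{-|y/s|^\beta}/(2s\Gamma(1/\beta))$ with scale $s$), the density \eqref{fpdef} corresponds to shape $p$ and scale $p^{1/p}$, since $\Gamma(1+\tfrac1p)=\Gamma(\tfrac1p)/p$; a quick sanity check (a sketch, with illustrative evaluation points):

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import gennorm, norm

def f_p(y, p):
    # Density of mu_p as defined in the text: location 0, scale p^{1/p}, shape p.
    return np.exp(-np.abs(y) ** p / p) / (2 * p ** (1 / p) * gamma(1 + 1 / p))

y = np.linspace(-4.0, 4.0, 101)
for p in (1.0, 1.5, 2.0, 4.0):
    # gennorm(beta=p, scale=p^{1/p}) has density p/(2 s Gamma(1/p)) e^{-|y/s|^p},
    # which equals f_p because Gamma(1 + 1/p) = Gamma(1/p)/p.
    assert np.allclose(f_p(y, p), gennorm.pdf(y, p, scale=p ** (1 / p)))

# For p = 2, mu_2 is the standard Gaussian.
assert np.allclose(f_p(y, 2.0), norm.pdf(y))
```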
\nom[wnpth]{$\sw^{(n,p)}$}{$n^{(1/p)-(1/2)} \langle \sx^{(n,p)}, \Theta^{(n)}\rangle_n$} \begin{remark}\label{rmk-scaling} As Theorem \ref{th-aldp} and Theorem \ref{th-qldp} below show, the scaling $n^{(1/p)-(1/2)}$ in \eqref{def-bwnpbth} --- and also later in \eqref{def-wthetan} --- turns out to be appropriate for large deviation analysis. The heuristic reasoning behind this scaling is that the variance of $\sw^{(n,p)}$ should be of ``order $1/n$" in order to prove non-trivial large deviation principles. To this end, note that both $n^{1/p}\sx^{(n,p)}_i$ and $n^{1/2}\Theta^{(n)}_i$ are typically of order $1$, since they are coordinates of points on $n^{1/p}{\mathbb B}_{n,p}$ and $n^{1/2}\mathbb{S}^{n-1}$, respectively. Thus, the sum over all $i=1,\dots,n$ is of order $n$, and upon multiplying by $1/n$ (which scales the variance by a factor of $1/n^2$), we find that $\sw^{(n,p)}$ is of the appropriate scale. For an alternative perspective, recall the corresponding central limit results briefly discussed in Sect. \ref{sec-intro}. Note that $n^{1/p}$ is the scaling appropriate for central limit fluctuations. To be precise, let $c_{n,p}$ be the isotropic constant (see, e.g., \cite[p.71]{ball1988logarithmically} for a definition) for the law of $n^{1/p}X^{(n,p)}$. A straightforward calculation shows that $\lim_{n\rightarrow\infty} c_{n,p} = [p^{1/p} \Gamma(3/p)/\Gamma(1/p)]^{1/2}$, a numerical constant depending on $p$. That is, the $n^{1/p}$ factor ensures that the isotropic constants of $n^{1/p}{\mathbb B}_{n,p}$ are normalized to be at the same scale for all dimensions $n\in {\mathbb N}$. From this point of view, the scaling $n^{(1/p)-(1/2)}$ is natural for large deviations, as it is just the CLT scaling multiplied by $n^{-1/2}$. \end{remark} For classical sums of i.i.d. random variables, Cram\'er's theorem gives the LDP rate function as the Legendre transform of the logarithmic moment generating function (log mgf) of the common distribution. 
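As a concrete instance (a numerical sketch, with illustrative evaluation points): for the $p=\infty$, $\iota^{(n)}$ projection from Sect.\ \ref{sec-intro}, the summands are i.i.d.\ Unif$[-1,1]$ with log mgf $\Lambda(t)=\log(\sinh(t)/t)$, and the Cram\'er rate function is its Legendre transform, computable by one-dimensional optimization:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def log_mgf(t):
    # log E[exp(t X)] for X ~ Unif[-1, 1], i.e. log(sinh(t)/t), with the
    # removable singularity at t = 0 handled explicitly.
    t = float(t)
    if abs(t) < 1e-12:
        return 0.0
    return np.log(np.sinh(t) / t)

def cramer_rate(w):
    # Legendre transform: sup_t { t w - log_mgf(t) }, computed numerically.
    res = minimize_scalar(lambda t: log_mgf(t) - t * w,
                          bounds=(-40.0, 40.0), method="bounded")
    return -res.fun

# The rate function vanishes at the mean 0 and grows as |w| approaches 1.
print([round(cramer_rate(w), 4) for w in (0.0, 0.3, 0.6, 0.9)])
```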
In our setting of random projections, certain analogs of the log mgf arise, which we now define. For $p\in[2,\infty)$, let \begin{equation}\label{chklampdefn} \Phi_{p}(t_0, t_1, t_2) \doteq \log \int_{\mathbb R} \int_{\mathbb{R}} e^{t_0z^2 + t_1zy + t_2 |y|^p} \, \mu_2(dz)\mu_p(dy), \quad t_0,t_1,t_2\in {\mathbb R}. \nom[phip]{$\Phi_p$}{log mgf for annealed} \end{equation} Note that $\Phi_p(t_0,t_1,t_2) < \infty$ if and only if $t_0 < \frac{1}{2}, t_1\in {\mathbb R},t_2 < \frac{1}{p}$. Our rate function is defined in terms of the Legendre transforms of $\Phi_{p}$: for $w\in {\mathbb R}$, let \begin{align} \label{iadefn} \mathbb{I}^\sfa_{p}(w) &\doteq \inf_{\substack {\tau_0 > 0, \tau_1\in {\mathbb R}, \tau_2 > 0\,:\\ \tau_0^{-1/2}\tau_1\tau_2^{-1/p} = w}} \Phi_{p}^*(\tau_0,\tau_1,\tau_2). \nom[ipa]{$\mathbb{I}^\sfa_p$}{annealed rate function} \end{align} \begin{theorem}[Annealed LDP, $p\in[2,\infty)$]\label{th-aldp} Let $p \in [2,\infty)$. The sequence $(\sw^{(n,p)})_{n\in{\mathbb N}}$ satisfies an LDP with the quasiconvex, symmetric, good rate function $\mathbb{I}^\sfa_{p}$. \end{theorem} The proof of Theorem \ref{th-aldp} is given in Sect. \ref{ssec-anng2}. \smallskip For $p <2$, random projections display significantly different large deviation behavior. For $p\in [1,2)$, define \begin{align} \mathbb{I}^\sfa_p(w) &\doteq \tfrac{1}{r_p}|w|^{r_p}, \quad w\in {\mathbb R}, \label{anp12}\\ r_p &\doteq \tfrac{2p}{2+p}. \label{rp} \end{align} Note that $r_p < 1$ for $p < 2$, so the following large deviation principle holds with a speed $n^{r_p}$, slower than the speed $n$ associated with the case $p\ge 2$. \nom[rp]{$r_p$}{the exponent/scale $2p/(2+p)$} \begin{theorem}[Annealed LDP, $p\in[1,2)$]\label{th-aldp12} Let $p\in[1,2)$. The sequence $(\sw^{(n,p)})_{n\in {\mathbb N}}$ satisfies an LDP with speed $n^{r_p}$ and the quasiconvex, symmetric, good rate function $\mathbb{I}^\sfa_p$. \end{theorem} The proof of Theorem \ref{th-aldp12} is given in Sect. \ref{ssec-annpl2}. 
\begin{figure}[htb] \begin{tikzpicture} \node [red] at (-2.2,0.9) {\footnotesize$n^{1/1}\mathbb{B}_{n,1}$}; \node [color=green] at (-1.8,-1.2) {\footnotesize$n^{1/\infty}\mathbb{B}_{n,\infty}$}; \node [blue] at (1.8,-1.2) {\footnotesize$n^{1/2}S^{n-1}$}; \draw [gray!50] (0,-2.2) -- (0,2.2) ; \draw [gray!50] (-2.2,0) -- (2.2,0) ; \draw [red, fill opacity=0.35, fill=red!30, thick] (0,2) -- (2, 0) -- (0,-2) -- (-2,0) -- (0,2) ; \draw [blue, thick] plot [smooth cycle, tension=1] coordinates {(-1.4142,0) (0,1.4142) (1.4142,0) (0,-1.4142)}; \draw [green, fill opacity=0.5, fill=green!30] (-1,-1) -- (-1, 1) -- (1, 1) -- (1, -1) -- (-1, -1); \draw [green, very thick] (0,0) -- (1,1); \draw [red, very thick] (0,0) -- (0, 2); \end{tikzpicture} \caption{Scaled $\ell^1$ ball vs. scaled $\ell^\infty$ ball.} \label{fig-1inf} \end{figure} \vspace*{-1em} \begin{remark}\label{rmk-p12} Note that Theorem \ref{th-aldp} and Theorem \ref{th-aldp12} reveal a sharp difference between the LDPs for $p > 2$ and $p <2$. Due to the rotational invariance of the law of $\Theta^{(n)}$, a large deviation of $\langle \sx^{(n,p)}, \Theta^{(n)} \rangle_n$ depends crucially on a large deviation of the Euclidean norm of $\sx^{(n,p)}$. The difference between the cases $p>2$ and $p <2$ is a consequence of the geometry of the $\ell^p$ balls, highlighted in Figure \ref{fig-1inf}, which portrays the scaled balls $n^{1/p}{\mathbb B}_{n,p}$, with \textcolor{red}{$p=1$ in red}, and \textcolor{green}{$p=\infty$ in green}. For $p>2$, the vectors in $n^{1/p}{\mathbb B}_{n,p}$ that attain maximal Euclidean norm are the ``corners" $(\pm 1,\pm 1,\dots,\pm 1)$. Meanwhile, for $p <2$, the vectors in $n^{1/p}{\mathbb B}_{n,p}$ that attain maximal Euclidean norm are again ``corners", but this time the corners are in canonical basis directions $(\pm 1,0,\dots,0)$, $(0,\pm1, 0,\dots, 0)$, etc. 
In particular, this means that for $p>2$, a large deviation of the Euclidean norm occurs due to a combined large deviation of each coordinate. On the other hand, for $p<2$, the large deviation event is caused by the large deviation of a single coordinate. The behavior in the $p<2$ case is similar to the observation that for random walks with heavy-tailed increments, a large deviation is caused by an extreme of the sample \cite[\S4]{mikosch1998large}, which is also referred to as the ``principle of the big jump" \cite{foss2007discrete}. \end{remark} \subsection{Quenched LDP}\label{ssec-que} We now consider the case where we condition on a fixed sequence of directions $\Theta^{(n)}=\theta^{(n)}$, $n\in {\mathbb N}$. Let $\mathbb{S}\doteq \prod_{n \in {\mathbb N}} \mathbb{S}^{n-1}$. Given a sequence of projection directions $\theta = (\theta^{(1)}, \theta^{(2)}, \dots) \in \mathbb{S}$, consider the sequence of random variables $\sw_\theta^{(n,p)}, n \in {\mathbb N},$ defined by \begin{equation}\label{def-wthetan} \sw_\theta^{(n,p)} \doteq \frac{n^{1/p}}{n^{1/2}}\langle \sx^{(n,p)}, \theta^{(n)} \rangle_n = \frac{1}{n} \sum_{i=1}^n (n^{1/p}\sx^{(n,p)}_i)(n^{1/2}\theta^{(n)}_i), \quad n \in {\mathbb N}. \end{equation} Observe that $\sw_\theta^{(n,p)}$ denotes the normalized (scalar) projection of $\sx^{(n,p)}$ onto a \emph{particular} direction $\theta^{(n)}$, whereas $\sw^{(n,p)}$ of \eqref{def-bwnpbth} denotes the normalized (scalar) projection of $\sx^{(n,p)}$ onto a \emph{random} direction $\Theta^{(n)}$. The scaling $n^{(1/p)-(1/2)}$ follows from the same rationale as in the annealed case, discussed in Remark \ref{rmk-scaling}. 
\nom[wnpthq]{$\sw_\theta^{(n,p)}$}{$n^{(1/p)-(1/2)} \langle \sx^{(n,p)}, \theta^{(n)}\rangle_n$} \nom[snn]{$\mathbb{S}$}{sequences $\mathbb{S}= \prod_{n\in{\mathbb N}} \mathbb{S}^{n-1}$} In the case of \emph{fixed} directions of projection $\theta\in\mathbb{S}$ (or conditioning on $\Theta = \theta$), the corresponding analog of the log mgf is as follows. For $p\in(1,\infty)$, $\nu\in\mathcal{P}({\mathbb R})$, define \begin{align} \lm_{p}( t_1, t_2) &\doteq \log\left( \int_\mathbb{R} e^{ t_1 y + t_2 |y|^p} \mu_p(dy) \right); \label{lampdefn0} \\ \Psi_{p,\nu}( t_1, t_2) &\doteq \int_\mathbb{R} \lm_p(t_1u,t_2) \nu(du)\, , \quad t_1,t_2\in {\mathbb R}. \label{lampdefn} \nom[lzzamp]{$\lm_p$}{log mgf of $(Y,\lvert Y \rvert^p)$ for $Y\sim\mu_p$} \nom[psip]{$\Psi_{p,\nu}$}{log mgf for quenched} \end{align} Note that $\Psi_{p,\nu}(t_1,t_2) < \infty$ for $t_2 < 1/p$, and is equal to infinity, otherwise. We define the associated rate function in terms of the Legendre transform of $\Psi_{p,\nu}$: for $w\in \mathbb{R}$, let \begin{align}\label{ieqdefn} \mathbb{I}^\sfq_{p,\nu}(w) &\doteq \inf_{\substack{ \tau_1\in {\mathbb R}, \tau_2 > 0\,: \\ \tau_1\tau_2^{-1/p} = w }} \Psi_{p,\nu}^*(\tau_1,\tau_2). \nom[ipqu]{$\mathbb{I}^\sfq_{p,\nu}$}{quenched rate function} \end{align} Let $\pi_n:\mathbb{S}\rightarrow \mathbb{S}^{n-1}$ be the coordinate map such that for $\theta \in \mathbb{S}$, we have $\pi_n(\theta) = \theta^{(n)}$. Let $\sigma$ be any probability measure on (the Borel sets of) $\mathbb{S}$ such that for all $n\in {\mathbb N}$, \begin{equation}\label{sigproj} \sigma \circ \pi_n^{-1} = \sigma_{n}. \end{equation} For example, the product measure $\sigma=\bigotimes_{n\in {\mathbb N}} \sigma_{n}$ satisfies \eqref{sigproj}. Our second result establishes an LDP for $(\sw_\theta^{(n,p)})_{n\in {\mathbb N}}$ which holds for $\sigma$-a.e.\ $\theta \in \mathbb{S}$. 
\nom[szsig2]{$\sigma$}{ a measure on $\mathbb{S}$ } \begin{theorem}[Quenched LDP, $p \in (1,\infty)$]\label{th-qldp} Let $p\in(1,\infty)$. For $\sigma$-a.e.\ $\theta \in \mathbb{S}$, the sequence $(\sw_\theta^{(n,p)})_{n\in\mathbb{N}}$ satisfies an LDP with the quasiconvex, symmetric, good rate function $\mathbb{I}^\sfq_{p,\mu_2}$. \end{theorem} The proof of Theorem \ref{th-qldp} is given in Sect.\ \ref{sec-quenched}. \smallskip Interestingly, note that almost every sequence of directions of projection yields the same exponential rate of decay! That is, for $\sigma$-a.e.\ $\theta \in \mathbb{S}$, the rate function $\mathbb{I}^\sfq_{p,\mu_2}$ does not depend on the particular choice of $\theta\in \mathbb{S}$. This is not obvious at first sight, because, a priori, the rate function for $(\sw_\theta^{(n,p)})_{n\in{\mathbb N}}$ could depend on the particular choice of $\theta\in\mathbb{S}$. Note that the rate function is measurable with respect to the tail sigma-algebra generated by the sequence $(\theta^{(1)},\theta^{(2)},\dots)$. Hence, if $\sigma$ were the product measure $\sigma=\bigotimes_{n\in {\mathbb N}} \sigma_{n}$, then the lack of dependence of $\mathbb{I}^\sfq_{p,\mu_2}$ on $\theta$ would follow from the Kolmogorov 0--1 law. However, our result holds for any $\sigma\in\mathcal{P}(\mathbb{S})$ satisfying \eqref{sigproj}. We refer to \cite[Remark 3.3]{gkr3} for further comment. The key is that the $\sigma$-a.e.\ asymptotic behavior of $\sqrt{n}\theta^{(n)}$ that is relevant for the proof of Theorem \ref{th-qldp} depends only on the row-wise behavior of the array $\theta$ specified by \eqref{sigproj}. A natural question to ask is whether there exists a subset of $\mathbb{S}$ of measure zero that displays ``atypical" behavior; that is, for which an LDP still holds, but with a rate function that is different from the universal quenched rate function $\mathbb{I}^\sfq_{p,\mu_2}$. We address this question in Sect.\ \ref{ssec-atyp}, for $p \in(1,\infty)$.
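The relevant row-wise behavior of $\sqrt{n}\,\theta^{(n)}$ is easy to see in simulation (a sketch; the dimension is illustrative): for $\theta^{(n)}$ drawn from $\sigma_n$, the empirical distribution of the coordinates of $\sqrt{n}\,\theta^{(n)}$ is close to the standard Gaussian $\mu_2$ when $n$ is large, as a comparison of empirical moments shows.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000  # illustrative dimension

# A sigma_n-distributed direction, realized by normalizing a Gaussian vector.
g = rng.standard_normal(n)
theta = g / np.linalg.norm(g)

# Scaled coordinates: their empirical moments approach the standard
# Gaussian moments (0, 1, and 3 for orders 1, 2, and 4). Note that the
# second empirical moment is exactly 1 by the normalization.
coords = np.sqrt(n) * theta
print(np.mean(coords), np.mean(coords ** 2), np.mean(coords ** 4))
```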
On another note, for $p=2$, we can strengthen Theorem \ref{th-qldp} to hold for \emph{all} $\theta \in \mathbb{S}$, not just for $\sigma$-a.e.\ $\theta \in \mathbb{S}$. This and other unique aspects of the $p=2$ case will be explored further in Sect.\ \ref{ssec-p2}. The preceding discussion applies only to the case $p\in(1,\infty)$. For $p=1$, the integrated log mgf $\Psi_{1,\mu_2}(t_1,t_2)$ is infinite if $t_1\ne 0$, and the same techniques as in the case $p\in(1,\infty)$ do not apply. Instead, for $p=1$ and $c > 0$, define \begin{equation}\label{iq1def} \mathbb{I}^\sfq_{1,c}(w) \doteq \frac{|w|}{c}, \quad w\in {\mathbb R}. \end{equation} \begin{theorem}[Quenched LDP, $p =1$]\label{th-qldp1} Fix $\theta\in\mathbb{S}$ such that \begin{equation}\label{thmax} \lim_{n\rightarrow\infty} \sqrt{\frac{n}{\log n}} \max_{1\le i \le n} \theta_i^{(n)} = c. \end{equation} Then, $(W_\theta^{(n,1)})_{n\in\mathbb{N}}$ satisfies an LDP with speed $n/\sqrt{\log n}$ and the good rate function $\mathbb{I}^\sfq_{1,c}$. \end{theorem} The proof of Theorem \ref{th-qldp1} is given in Sect.\ \ref{ssec-qp1}. Note that unlike the ``universal" rate function $\mathbb{I}^\sfq_{p,\mu_2}$ of Theorem \ref{th-qldp}, which is the LDP rate function for $\sigma$-a.e.\ $\theta\in\mathbb{S}$ and any $\sigma$ satisfying \eqref{sigproj}, the quenched LDP for $p=1$ (Theorem \ref{th-qldp1}) depends on the particular sequence $\theta\in\mathbb{S}$ through the condition \eqref{thmax}. We discuss the condition \eqref{thmax} further in Remark \ref{rmk-whysig}. \subsection{Relationship between the annealed and quenched LDPs}\label{ssec-rel} Let $m_q$ denote the $q$-th absolute moment of a measure, \begin{equation}\label{qmomdef} m_q(\nu) \doteq \int_\mathbb{R} |x|^q \nu(dx), \quad \nu \in \mathcal{P}(\mathbb{R}).
\end{equation} \nom[mq]{$m_q(\cdot)$}{$q$-th absolute moment} Let $H(\cdot | \cdot)$ denote the relative entropy between two measures; that is, for $\nu,\mu\in \mathcal{P}({\mathbb R})$, \begin{equation*} H(\nu | \mu) \doteq \left\{\begin{array}{cl} \displaystyle\int_{\mathbb R} \log \left(\frac{d\nu}{d\mu}\right) d\nu, & \text{ if } \nu \ll \mu, \\ +\infty, & \text{ else.} \end{array}\right. \end{equation*} We identify a variational formula that relates the annealed and quenched rate functions. \nom[Hnumu]{$H(\cdot | \cdot)$}{relative entropy} \begin{theorem}[Relationship between annealed and quenched LDPs]\label{th-compar} Let $p\in [2,\infty)$. Then, for all $w\in {\mathbb R}$, \begin{align} \mathbb{I}^\sfa_{p}(w) &= \inf_{\substack{\nu \in \mathcal{P}({\mathbb R}):\\ m_2(\nu)\le 1}} \left\{\mathbb{I}^\sfq_{p,\nu}(w) + H(\nu | \mu_2) + \tfrac{1}{2}\left(1- m_2(\nu)\right) \right\}. \label{varform1} \end{align}In particular, this implies that $\mathbb{I}^\sfa_{p}(w) \le \mathbb{I}^\sfq_{p,\mu_2}(w)$ for all $w\in {\mathbb R}$. \end{theorem} We prove this theorem in Sect. \ref{ssec-varadh}, as a consequence of the groundwork laid in Sect.\ \ref{ssec-rnp} and Sect. \ref{ssec-empcone}. We also discuss the minimizers of this variational problem in Sect.\ \ref{sec-analysis}. \smallskip As established in Proposition \ref{prop-quegen}, the term $\mathbb{I}^\sfq_{p,\nu}$ in \eqref{varform1} is the large deviation rate function for projections of the random point $\sx^{(n,p)}$ onto a particular outcome of fixed directions of projection $\Theta = \theta$ (i.e., a quenched ``environment") corresponding to the measure $\nu$. On the other hand, we will see in Sect.\ \ref{ssec-empcone} that $H(\cdot | \mu_2) + \frac{1}{2}(1-m_2(\cdot))$ is the large deviation rate function for the underlying environment $\Theta$. 
That is, an annealed large deviation arises precisely due to the combination of: (i) a deviation of the environment; and (ii) the deviation of a projection within such an environment. \begin{remark} Although quenched and annealed LDPs have been considered in other contexts such as random walks in random environments (RWRE), with the exception of \cite[Eqn.\ (9)]{comets2000quenched}, there appear to be relatively few results that relate quenched and annealed rate functions via a variational formula in the spirit of Theorem \ref{th-compar}. See also \cite[Eqn. (1.9)]{aidekon2010large} for a weaker comparison. As one would expect due to the different contexts, the proofs in the RWRE setting are quite different in nature from our proof. \end{remark} \subsection{Atypical directions of projection}\label{ssec-atyp} As noted in the discussion following Theorem \ref{th-qldp}, the LDP rate function $\mathbb{I}^\sfq_{p,\mu_2}$ is the same for $\sigma$-a.e.\ sequence of directions $\theta \in \mathbb{S}$. In this section, we compare the $\sigma$-a.e.\ sequences of directions with sequences in the set of measure zero for which Theorem \ref{th-qldp} does not hold. One particular sequence to consider is $\iota = (\iota^{(1)},\iota^{(2)},\dots) \in \mathbb{S}$, where $\iota^{(n)}$ is the vector of 1's as in \eqref{onedefn}. Then, $W_\iota^{(n,p)}$ denotes the projection of $\sx^{(n,p)}$ onto a particular ``corner" direction. In order to make a comparison between the particular sequence $\iota$ and ``generic" sequences $\theta$ for which the quenched LDP holds, we define the following rate functions: \begin{align}\label{cramratedefn} \mathbb{I}^{\mathsf{cr}}_p(w) &\doteq \inf_{\substack{\tau_1\in {\mathbb R}, \tau_2 > 0 : \\ \tau_1\tau_2^{-1/p} = w}} \lm_p^*(\tau_1,\tau_2), \quad w\in {\mathbb R}. 
\nom[ipcr]{$\mathbb{I}^{\mathsf{cr}}_p$}{Cram\'er rate function} \end{align} As we elaborate in Remark \ref{rmk-shao}, this rate function is related to large deviations for \emph{self-normalized} random variables. \begin{theorem}[Atypicality, $p\in (1,\infty)$]\label{th-atyp} For $p\in (1,\infty)$, the sequence $(W_\iota^{(n,p)})_{n\in {\mathbb N}}$ satisfies an LDP with the quasiconvex, symmetric, good rate function $\mathbb{I}^{\mathsf{cr}}_p$. Moreover, for $w\in (-1,1)$, we have the following: \begin{enumerate}[itemsep=4pt] \item for $p > 2$, $\mathbb{I}^\sfq_{p,\mu_2}(w) \ge \mathbb{I}^{\mathsf{cr}}_p(w)$, with equality if and only if $w=0$; \item for $p =2$, $\mathbb{I}^\sfq_{p,\mu_2}(w) = \mathbb{I}^{\mathsf{cr}}_p(w)$; \item for $p <2$, $ \mathbb{I}^\sfq_{p,\mu_2}(w) \le \mathbb{I}^{\mathsf{cr}}_p(w)$, with equality if and only if $w =0$. \end{enumerate} \end{theorem} The proof of Theorem \ref{th-atyp} is given in Sect.\ \ref{sec-atyp}. \smallskip An analogous result for product measures is the focus of \cite{gkr3}, as we briefly recall in Sect. \ref{ssec-queprod}. See Figure \ref{fig-comp} for a sketch of how the universal quenched rate function compares to the exceptional rate function associated with $\iota$, in the case of projections of a random variable uniformly distributed on the $\ell^\infty$ ball. \begin{figure}[hbt] \includegraphics[scale=0.55, trim=2.5in 3.85in 2.5in 3.85in]{atypicalplot5.pdf} \caption{Quenched vs. Cram\'er rate functions for $p=\infty$.} \label{fig-comp} \end{figure} \begin{remark} A similar notion of atypicality can be found in the work of \cite{barthe2003extremal}, where the authors were interested in slabs of convex bodies. In particular, the authors proved a Cram\'er-type upper bound for the volume of slabs of certain convex bodies, and noted that this upper bound was asymptotically attained by the sequence of ``extremal" slabs orthogonal to the main diagonal $(1,1,\dots,1)$. 
Our result Theorem \ref{th-atyp} shows that the sequence of directions $(1,1,\dots,1)$ is not only extremal, but also particularly distinct, in that almost every other sequence of directions yields a universal rate function different from that of the extremal direction. \end{remark} \begin{remark} Another particular sequence of directions to consider is the sequence of canonical basis vectors $e_1 = (e_1^{(1)}, e_1^{(2)},\dots) \in \mathbb{S}$, where \begin{equation*} e_1^{(n)} \doteq (1,\underbracket{0,\dots, 0}_{n-1 \text{ times }}) \in \mathbb{S}^{n-1}. \end{equation*} That is, project $\sx^{(n,p)}$ onto its first coordinate. In the language of \cite{barthe2003extremal}, this is the volume of the ``canonical slab" of the $\ell^p$ ball. It is known due to \cite[Theorem 3.4]{BarGamLozRou10} that the sequence $(\langle \sx^{(n,p)}, e_1^{(n)}\rangle_n)_{n\in {\mathbb N}}$ satisfies an LDP with speed $n$ and rate $J_p(x) = -\frac{1}{p} \log(1-x^p)$. Note, however, that this sequence lacks the $\frac{n^{1/p}}{n^{1/2}}$ scaling found in $\sw_\theta^{(n,p)}$, so the sequence $e_1$ is also atypical in its own sense, at least for $p\ne 2$. \end{remark} \nom[e1]{$e_1^{(n)}$}{first coord. $(1,0,\dots,0) \in \mathbb{S}^{n-1}$} \subsection{Special case of $p=2$ }\label{ssec-p2} As a brief digression, we consider the special case of $p=2$. First, define the rate function for $w\in {\mathbb R}$: \begin{equation}\label{j2defn} J_2(w) \doteq \left\{\begin{array}{cl} - \frac{1}{2} \log(1-w^2), & w\in (-1,1);\\ +\infty, & \text{ else.}\end{array}\right. \end{equation} \nom[j2]{$J_2$}{rate function for $p=2$} Then, our results can be summarized as follows: \begin{theorem}\label{th-p2} For $p=2$, the quenched LDP of Theorem \ref{th-qldp} holds for all $\theta \in \mathbb{S}$. Moreover, \begin{equation}\label{equalrates} \mathbb{I}^\sfa_{2} = \mathbb{I}^\sfq_{2,\mu_2} = J_2. 
\end{equation} \end{theorem} \begin{proof} Note that $X^{(n,2)}$ is distributed uniformly over the Euclidean ball, so its distribution is spherically symmetric in the sense that for all $n$ and all $\eta,\eta'\in \mathbb{S}^{n-1}$, \begin{equation}\label{sphersymm} \langle { \sx}^{(n,2)}, \eta\rangle_n \stackrel{(d)}{=} \langle { \sx}^{(n,2)}, \eta' \rangle_n. \end{equation} In particular, this implies that for $e_1^{(n)}= (1,0,\dots,0) \in \mathbb{S}^{n-1}$, \begin{equation*} {\mathbb P}(\langle { \sx}^{(n,2)}, \Theta^{(n)}\rangle_n \in \cdot ) = {\mathbb P}(\langle { \sx}^{(n,2)}, \theta^{(n)} \rangle_n \in \cdot ) = {\mathbb P}(\langle { \sx}^{(n,2)}, e_1^{(n)} \rangle_n \in \cdot \, ). \end{equation*} The upshot is that to analyze either the annealed LDP for $(W^{(n,2)})_{n\in{\mathbb N}}$, or the quenched LDP for $(W_\theta^{(n,2)})_{n\in{\mathbb N}}$, it suffices to consider the LDP of $(W_{e_1}^{(n,2)})_{n\in{\mathbb N}}$, the sequence of projections onto the first coordinate. In this case, it is known from \cite[Theorem 3.4]{BarGamLozRou10} that this sequence satisfies an LDP with good rate function $J_2$. It is also possible to prove the equality \eqref{equalrates} by direct calculation. \end{proof} Note that the key part in the preceding proof is spherical symmetry, a property which we will use again to a different end in Sect.\ \ref{ssec-rnp}. It is this spherical symmetry which leads to the ``for all" claim in Theorem \ref{th-p2}, as opposed to the ``$\sigma$-a.e." claim in Theorem \ref{th-qldp}. \begin{remark} While Theorem \ref{th-p2} shows that the quenched and annealed rate functions are identical when $p=2$, Proposition \ref{prop-nongsn} shows that the quenched and annealed rate functions do \emph{not} coincide when $p=\infty$. 
\end{remark} \section{An equivalent formulation}\label{sec-equiv} When $p< \infty$, the non-trivial dependence between the coordinates that is induced by the uniform measure on ${\mathbb B}_{n,p}$ makes a direct large deviation analysis difficult. To resolve this, we invoke a more convenient representation for the uniform measure on ${\mathbb B}_{n,p}$ to reduce the analysis of $\sw^{(n,p)}$ and $\sw_\theta^{(n,p)}$ to that of more tractable objects. Furthermore, this representation will also clarify the role of the density $f_p$ introduced in \eqref{fpdef}. \subsection{A probabilistic representation for the uniform measure on ${\mathbb B}_{n,p}$}\label{ssec-repn} Let $n\in {\mathbb N}$ and $p \in [1,\infty)$. Consider the following random variables, defined on the same common probability space $(\Omega,\mathcal{F}, {\mathbb P})$ as in Sect.\ \ref{ssec-setup}: \begin{itemize} \item $U$ is uniformly distributed on $[0,1]$; \item $ \sy^{(p)} = ( Y^{(n,p)} )_{n\in\mathbb{N}} = (\,(Y^{(n,p)}_1,\dots,Y^{(n,p)}_n)\,)_{n\in\mathbb{N}}$ is a triangular array of i.i.d. real-valued random variables, with common distribution $\mu_p$ defined by \eqref{fpdef}; \item $ \sz = ( Z^{(n)} )_{n\in\mathbb{N}} = (\, (Z^{(n)}_1,\dots,Z^{(n)}_n)\, )_{n\in\mathbb{N}}$ is a triangular array of independent $N(0,1)$ random variables; \item $U$, $\sy^{(p)}$, and $\sz$ are independent. \end{itemize} Then, the following properties are well known --- see, e.g., \cite[Lemma 1]{schechtman1990volume}, \cite[\S3]{rachev1991approximate}. \nom[u]{$U$}{uniform r.v.\ on $[0,1]$} \nom[ya]{$\sy$, $Y^{(n,p)}$}{array / vector of i.i.d. $\sim \mu_p$} \nom[za]{$\sz$, $Z^{(n)}$ }{array / vector of i.i.d. $N(0,1)$} \begin{lemma}\label{lem-jointrep} For $p\in[1,\infty)$, \begin{equation} \label{bthn-rep} \left( \sx^{(n,p)}, \Theta^{(n)}\right) \stackrel{(d)}{=} \left(U^{1/n} \frac{Y^{(n,p)}}{\lpnrm{Y^{(n,p)}}}, \frac{Z^{(n)}}{\ltwonrm{Z^{(n)}}}\right). 
\end{equation} Moreover, $Y^{(n,p)} / \lpnrm{Y^{(n,p)}}$ is independent of $\lpnrm{Y^{(n,p)}}$, and $Z^{(n)}/\ltwonrm{Z^{(n)}}$ is independent of $\ltwonrm{Z^{(n)}}$. \end{lemma} Define the sequences of random variables $(\widehat{W}^{(n,p)})_{n\in{\mathbb N}}$ and $(\widehat{W}^{(n,p)}_\theta)_{n\in{\mathbb N}}$ as follows: for $n\in {\mathbb N}$ and $\theta \in \mathbb{S}$, \begin{align} \widehat{W}^{(n,p)} &\doteq \frac{n^{1/p}}{n^{1/2}} U^{1/n} \frac{ \sum_{i=1}^n Y^{(n,p)}_i Z^{(n)}_i}{\lpnrm{Y^{(n,p)}}\ltwonrm{Z^{(n)}}}; \label{altseq}\\ \widehat{W}^{(n,p)}_\theta &\doteq \frac{n^{1/p}}{n} U^{1/n} \frac{ \sum_{i=1}^n Y^{(n,p)}_i \sqrt{n}\theta^{(n)}_i}{\lpnrm{Y^{(n,p)}}} \label{altseq2}. \nom[wnpz1]{$\widehat{W}^{(n,p)}$}{representation of $W^{(n,p)}$} \end{align} The definitions \eqref{def-bwnpbth} and \eqref{def-wthetan} together with \eqref{bthn-rep}, \eqref{altseq} and \eqref{altseq2} show that for $n\in {\mathbb N}$ and $\theta\in\mathbb{S}$, \begin{align} \sw^{(n,p)} &\stackrel{(d)}{=} \widehat{W}^{(n,p)} \label{bwthn-rep};\\ \sw_\theta^{(n,p)} &\stackrel{(d)}{=} \widehat{W}^{(n,p)}_\theta \label{bwthn-rep2}. \end{align} \subsection{Concentration on the boundary}\label{ssec-bdry} Continue to assume $p\in [1,\infty)$. In this section, we show that, for the purposes of both the annealed and quenched LDPs, it is possible to ignore the contribution of the ``radial" term $U^{1/n}$ in the definition of $\widehat{W}^{(n,p)}$ given by \eqref{altseq}. This is related to the fact that the uniform measure on high-dimensional isotropic convex bodies concentrates strongly on the boundary. Note that unlike in the central limit setting, our asymptotic result as $n\rightarrow \infty$ does not rely on the delicate ``thin-shell'' estimates available in fixed finite dimension $n$ \cite{klartag2007central}.
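The representation in Lemma \ref{lem-jointrep} also gives an exact sampling scheme for the uniform measure on ${\mathbb B}_{n,p}$, and makes the boundary concentration plain: since the norm of $Y^{(n,p)}$ cancels, $\|X^{(n,p)}\|_{n,p} = U^{1/n}$ exactly, which is close to $1$ with high probability for large $n$. The following Python sketch (ours, for illustration only) implements this, using the standard fact that if $Y\sim\mu_p$ then $|Y|^p/p$ is Gamma-distributed with shape parameter $1/p$.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_ball(n, p, rng):
    # X = U^{1/n} * Y / ||Y||_p with Y_i i.i.d. from mu_p (density prop. to exp(-|y|^p / p));
    # |Y_i|^p / p ~ Gamma(1/p, 1), so Y_i = sign * (p * Gamma(1/p, 1))^{1/p}
    g = rng.gamma(1.0 / p, 1.0, size=n)
    y = rng.choice([-1.0, 1.0], size=n) * (p * g) ** (1.0 / p)
    return rng.uniform() ** (1.0 / n) * y / np.linalg.norm(y, ord=p)

n, p = 400, 3.0
norms = np.array([np.linalg.norm(sample_ball(n, p, rng), ord=p) for _ in range(50)])
print(norms.min(), norms.max())  # with overwhelming probability, all values lie in (0.9, 1]
```

Here the $\ell^p$ norm of each sample equals $U^{1/n}$, and ${\mathbb P}(U^{1/n} \le 0.9) = 0.9^{400} \approx 10^{-18}$, a quantitative instance of the boundary concentration exploited below.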
\begin{lemma}\label{lem-unifequiv} Suppose that a sequence of ${\mathbb R}$-valued random variables $(\xi_n)_{n\in{\mathbb N}}$ satisfies an LDP with a good rate function $I_\xi(\cdot)$. Let $U$ be an independent random variable uniformly distributed on $[0,1]$. If $I_\xi$ is quasiconvex and symmetric, then the sequence $(U^{1/n}\xi_n)_{n\in {\mathbb N}}$ satisfies an LDP with good rate function $I_\xi$. \end{lemma} To prove Lemma \ref{lem-unifequiv}, we begin by appealing to the large deviation behavior of $U^{1/n}$ as $n\rightarrow\infty$. \begin{lemma}\label{lem-uniform} The sequence $(U^{1/n})_{n\in {\mathbb N}}$ satisfies an LDP with the good rate function \begin{equation*} I_U(u) \doteq \left\{ \begin{array}{ll} - \log u & u\in(0,1] ; \\ +\infty & \textnormal{ else. } \end{array}\right. \end{equation*} \end{lemma} \begin{proof} Let $A$ be a Borel set in $\mathbb{R}$. First, we prove the large deviation upper bound; that is, $\limsup_{n\rightarrow\infty} \frac{1}{n} \log {\mathbb P}(U^{1/n} \in A) \le -\inf_{u\in \bar{A}} I_U(u)$. If $1\in \bar{A}$, then $\inf_{u \in \bar A} I_U(u) = 0$, so the upper bound in this case is automatic. Otherwise, let $u_1 = \sup\{ u \in A : u < 1 \}$ and $u_2 = \inf\{ u \in A : u > 1\}$. Since $I_U$ is convex with minimum at 1, and infinite outside $(0,1]$, \begin{align*} \inf_{u\in \bar A} I_U(u) &= I_U(u_1). \end{align*} Using the fact that $\bar{A} \subset (-\infty,u_1] \cup [u_2, \infty)$, and ${\mathbb P}(U^{1/n} \ge u_2) = 0$, we find that \begin{align*} \limsup_{n\rightarrow\infty} \frac{1}{n} \log {\mathbb P}(U^{1/n} \in \bar{A}) &\le \limsup_{n\rightarrow\infty} \frac{1}{n} \log \left( {\mathbb P}(U^{1/n} \in (-\infty,u_1]) + {\mathbb P}(U^{1/n} \in [u_2,\infty))\right)\\ &= \limsup_{n\rightarrow\infty} \frac{1}{n} \log {\mathbb P}(U \le u_1^n)\\ &= \left\{ \begin{array}{cc} \log u_1 & \text{ if } u_1 > 0\\ -\infty & \text{ else } \end{array}\right.\\ &= -\inf_{u\in \bar A} I_U(u). 
\end{align*} Now we prove the large deviation lower bound, $\liminf_{n\rightarrow\infty} \frac{1}{n} \log {\mathbb P}(U^{1/n} \in A) \ge -\inf_{u\in A^\circ} I_U(u)$. If $A^\circ \cap (0,1] = \emptyset$, then $\inf_{u\in A^\circ} I_U(u) = \infty$, so the lower bound in this case is automatic. Otherwise, let $\delta > 0$, and let $\bar{u} \in A^\circ \cap (0,1]$ be such that $I_U(\bar{u}) \le \inf_{u \in A^\circ} I_U(u) + \delta$. Since $A^\circ$ is open, there exists $\epsilon \in (0, \bar{u})$ such that $(\bar u- \epsilon, \bar u] \subset A^\circ \cap (0,1]$. Thus, \begin{align*} \liminf_{n\rightarrow\infty} \frac{1}{n} \log {\mathbb P}(U^{1/n} \in A^\circ ) &\ge \liminf_{n\rightarrow\infty} \frac{1}{n} \log {\mathbb P}(U^{1/n} \in (\bar u - \epsilon , \bar{u}])\\ &= \liminf_{n\rightarrow\infty} \frac{1}{n} \log {\mathbb P}(U \in ((\bar{u}-\epsilon)^n, \bar{u}^n])\\ &= \liminf_{n\rightarrow\infty} \frac{1}{n} \log (\bar{u}^n - (\bar{u} - \epsilon)^n)\\ &= \log \bar{u}\\ &= -I_U(\bar{u})\\ &\ge - \inf_{u\in A^\circ} I_U(u) - \delta. \end{align*} This holds for arbitrary $\delta > 0$, so the lower bound follows. \end{proof} \begin{proof}[Proof of Lemma \ref{lem-unifequiv}] By independence, the sequence $(U^{1/n},\,\,\xi_n)_{n\in {\mathbb N}}$ satisfies a joint LDP with rate function $I_{U, \xi }(u,x) =I_U(u) + I_\xi(x)$, where $I_U$ is the rate function computed in Lemma \ref{lem-uniform}. By the contraction principle, the sequence of products $(U^{1/n}\xi_n)_{n\in {\mathbb N}}$ satisfies an LDP with the rate function $I$, where for $\tilde{x}\in {\mathbb R}$, \begin{align*} I(\tilde{x}) &= \inf\{ I_U(u) + I_{\xi}(x) :u,x\in {\mathbb R}, ux = \tilde x\},\\ &= \inf\{ -\log u + I_{ \xi }(x) : u\in(0,1], x\in {\mathbb R}, ux = \tilde x\}. \end{align*} Let $\tilde x > 0$. By assumption, $I_{\xi}$ is quasiconvex and symmetric, so it is minimized at $x=0$ and non-decreasing for $x > 0$.
Since $-\log u \ge 0$ for $u\in(0,1]$, and since $\tilde x / u \ge \tilde x$ implies $I_\xi(\tilde x / u) \ge I_\xi(\tilde x)$ for $u\in(0,1]$, the infimum is attained at $u=1$ and $x=\tilde x$. Therefore, $I(\tilde x) = I_\xi(\tilde x)$. Likewise, when $\tilde{x} < 0$, similar calculations show once again that $I(\tilde{x}) = I_\xi(\tilde{x})$; and when $\tilde{x} = 0$, taking $u=1$ and $x=0$ minimizes both terms separately, so that $I(0) = I_\xi(0)$. \end{proof} For $p<\infty$, the equivalence of the LDPs given by Lemma \ref{lem-unifequiv} motivates the analysis of the sequences $(\widetilde{W}^{(n,p)})_{n\in {\mathbb N}}$ and $(\widetilde{W}^{(n,p)}_\theta)_{n\in {\mathbb N}}$ defined as follows: for $n\in {\mathbb N}$ and $\theta\in\mathbb{S}$, \begin{align} \widetilde{W}^{(n,p)} &\doteq \frac{n^{1/p}}{n^{1/2}} \frac{ \sum_{i=1}^n Y^{(n,p)}_i Z^{(n)}_i}{\lpnrm{Y^{(n,p)}}\ltwonrm{Z^{(n)}}}, \label{wnou}\\ \widetilde{W}^{(n,p)}_\theta &\doteq \frac{n^{1/p}}{n} \frac{ \sum_{i=1}^n Y^{(n,p)}_i \sqrt{n}\theta^{(n)}_i}{\lpnrm{Y^{(n,p)}}}. \label{wnou2} \nom[wnpz2]{$\widetilde{W}^{(n,p)}$}{$\widehat{W}^{(n,p)}$ sans $U^{1/n}$ factor} \end{align} In the following lemma, we claim that it suffices to analyze the sequences defined by \eqref{wnou} and \eqref{wnou2}. \begin{lemma}\label{lem-reduction} If the sequence $(\widetilde{W}^{(n,p)})_{n\in{\mathbb N}}$ satisfies an LDP with good rate function $\mathbb{I}^\sfa_p$, then $(\sw^{(n,p)})_{n\in {\mathbb N}}$ satisfies an LDP with the same rate function. Similarly, if the sequence $(\widetilde{W}^{(n,p)}_\theta)_{n\in {\mathbb N}}$ satisfies an LDP with good rate function $\mathbb{I}^\sfq_{p,\mu_2}$ for $\sigma$-a.e.\ $\theta \in \mathbb{S}$, then the sequence $(\sw_\theta^{(n,p)})_{n\in{\mathbb N}}$ satisfies an LDP with the same rate function for $\sigma$-a.e.\ $\theta \in \mathbb{S}$. \end{lemma} \begin{proof} Due to \eqref{bwthn-rep} and \eqref{bwthn-rep2}, $\sw^{(n,p)}$ and $\sw_\theta^{(n,p)}$ are equal in distribution to $\widehat{W}^{(n,p)}$ and $\widehat{W}^{(n,p)}_\theta$, respectively.
Thus, it suffices to show that an LDP for $(\widetilde{W}^{(n,p)})_{n\in{\mathbb N}}$ (resp., $(\widetilde{W}^{(n,p)}_\theta)_{n\in{\mathbb N}}$) implies an LDP for $(\widehat{W}^{(n,p)})_{n\in{\mathbb N}}$ (resp., $(\widehat{W}^{(n,p)}_\theta)_{n\in{\mathbb N}}$) with the same rate function. However, this would follow from Lemma \ref{lem-unifequiv} if $\mathbb{I}^\sfa_p$ and $\mathbb{I}^\sfq_{p,\mu_2}$ could be shown to be quasiconvex and symmetric. For $\mathbb{I}^\sfa_p$, note that by \eqref{iadefn}, \begin{align} \mathbb{I}^\sfa_{p}(w) &= \inf_{\tau_0,\tau_2 > 0} \Phi_{p}^*(\tau_0,w\tau_0^{1/2}\tau_2^{1/p},\tau_2), \quad w\in {\mathbb R}. \label{iarepn} \end{align} Since $\mu_p$ and $\mu_2$ are symmetric distributions, $\Phi_p$ (and thus, $\Phi_p^*$) is symmetric in the second variable. Then, the representation \eqref{iarepn} implies that $\mathbb{I}^\sfa_{p}$ is symmetric. As for quasiconvexity, we know that $\Phi_{p}^*$ is convex by definition of the Legendre transform. Combined with the symmetry of $\Phi_{p}^*$ in the second argument, we see that for fixed $\tau_0,\tau_2> 0$, $\Phi_{p}^*(\tau_0,\tau_1, \tau_2)$ is minimized at $\tau_1 = 0$, non-decreasing for $\tau_1 > 0$, and non-increasing for $\tau_1 < 0$. Thus, for $w' > w > 0$, \eqref{iarepn} shows that \begin{align*} \mathbb{I}^\sfa_{p}(w') &= \inf_{\tau_0,\tau_2 > 0} \Phi_{p}^*(\tau_0,w'\tau_0^{1/2}\tau_2^{1/p},\tau_2) \ge \inf_{\tau_0,\tau_2 > 0} \Phi_{p}^*(\tau_0,w\tau_0^{1/2}\tau_2^{1/p},\tau_2) = \mathbb{I}^\sfa_{p}(w). \end{align*} Similar calculations for $w' < w < 0$ show that for all $c>0$, the set $\{w \in {\mathbb R}: \mathbb{I}^\sfa_p(w) \le c\}$ is a closed interval containing 0. Thus, $\mathbb{I}^\sfa_p$ is quasiconvex (see Definition \ref{def-quasi}). The argument is essentially identical for $\mathbb{I}^\sfq_{p,\mu_2}$, and hence, left to the reader.
\end{proof} \section{The annealed LDP}\label{sec-annealed} In this section, we prove Theorem \ref{th-aldp}, the annealed LDP for random projections of $\ell^p$ balls. When $p\in[2,\infty)$, the recipe is roughly as follows: we employ the representations of $X^{(n,p)}$ and $\Theta^{(n)}$ given in Sect.\ \ref{sec-equiv}, apply Cram\'er's theorem for a sum of i.i.d.\ random variables in ${\mathbb R}^3$, and then complete the proof with the contraction principle. The case $p\in[1,2)$ is slightly different, in that we must prove an LDP at a different speed; for this case, we still employ the representations of Sect.\ \ref{sec-equiv}, but show that deviations of the ``numerator" are relevant for the LDP, whereas the deviations of the ``denominator" do not matter. \subsection{Annealed proof for $p\in[2,\infty]$}\label{ssec-anng2} For $p\in [2,\infty)$, we define the following sum of i.i.d. $\mathbb{R}^3$-valued random variables, \begin{equation*} S^{(n,p)} \doteq \frac{1}{n}\sum_{i=1}^n \left( |Z^{(n)}_i|^2, \, Y^{(n,p)}_i Z_i^{(n)}, \, |Y^{(n,p)}_i|^p \right), \quad n \in {\mathbb N}. \end{equation*} Note that $\Phi_p$ of \eqref{chklampdefn} is the log mgf of the summands, $( |Z^{(n)}_i|^2, \, Y^{(n,p)}_i Z_i^{(n)}, \, |Y^{(n,p)}_i|^p )$. We write out this sum because $\widetilde{W}^{(n,p)}$ of \eqref{wnou} can be written as a function of $S^{(n,p)}$. In our proof below, we will have to recall the following definition. \begin{definition}\label{def-domain} Consider a convex function $\Lambda:\mathbb{R}^d\rightarrow (-\infty,\infty]$. The \emph{effective domain} of $\Lambda$ is the set \begin{equation*} D_\Lambda \doteq \{x \in {\mathbb R}^d : \Lambda(x) < \infty \}. \end{equation*} When there is no confusion, we refer to $D_\Lambda$ as the \emph{domain} of $\Lambda$. 
\nom[D]{$D_{\cdot}$}{(effective) domain of a function} \end{definition} \begin{proof}[Proof of Theorem \ref{th-aldp}] For $t_0 < \tfrac{1}{2}$, $t_1\in {\mathbb R}$, $t_2 < \tfrac{1}{p}$, \begin{align*} \Phi_p(t_0,t_1,t_2) &= \log \int_{{\mathbb R}}\int_{{\mathbb R}} e^{t_1 zy } \tfrac{1}{2p^{1/p}\Gamma(1+\tfrac{1}{p})} e^{-(1-pt_2)|y|^p/p} dy\, \tfrac{1}{\sqrt{2\pi}}e^{-(1-2t_0)z^2/2} dz \\ &= -\tfrac{1}{p} \log(1-pt_2) - \tfrac{1}{2}\log(1-2t_0)\\ & \quad \quad + \log \int_{{\mathbb R}} \exp\left(\tfrac{1}{2} t_1^2(1-pt_2)^{-2/p}(1-2t_0)^{-1}y^2 \right) \mu_p(dy). \end{align*} For $p > 2$, the preceding quantity is finite for all such $(t_0,t_1,t_2)$, so that $D_{\Phi_p}^\circ = (-\infty,\tfrac{1}{2})\times {\mathbb R} \times (-\infty, \tfrac{1}{p})$; for $p = 2$, it is finite whenever $t_1^2 < (1-2t_0)(1-2t_2)$. In either case, $0 \in D_{\Phi_p}^\circ$. Thus, by Cram\'er's theorem, the sequence $(S^{(n,p)})_{n\in {\mathbb N}}$ satisfies an LDP in ${\mathbb R}^3$ with the good rate function given by the Legendre transform $\Phi_{p}^*(\tau_0,\tau_1,\tau_2)$. Note that $D_{\Phi_p^*} \subset (0,\infty)\times {\mathbb R} \times (0,\infty)$, and the map $T_p: (0,\infty)\times {\mathbb R} \times (0,\infty) \rightarrow {\mathbb R}$ defined by \begin{equation*} T_p(\tau_0,\tau_1,\tau_2) \doteq \tau_0^{-1/2}\tau_1\tau_2^{-1/p}, \end{equation*} is continuous. Since $\widetilde{W}^{(n,p)} = T_p(S^{(n,p)})$, we can apply the contraction principle to obtain an LDP for $(\widetilde{W}^{(n,p)})_{n\in{\mathbb N}}$ with the rate function \begin{equation*} \inf_{\tau_0^{-1/2}\tau_1\tau_2^{-1/p} = w} \Phi_{p}^*(\tau_0,\tau_1,\tau_2) = \mathbb{I}^\sfa_{p}(w), \quad w\in {\mathbb R}. \end{equation*} Due to Lemma \ref{lem-reduction}, this implies that the same LDP holds for $(W^{(n,p)})_{n\in{\mathbb N}}$. \end{proof} \subsection{Annealed proof for $p\in [1,2)$} \label{ssec-annpl2} First, note that we cannot approach Theorem \ref{th-aldp12} in the same way as Theorem \ref{th-aldp} due to the fact that for $p < 2$, $\Phi_p(t_0,t_1,t_2) = \infty$ for $t_1 \ne 0$.
This suggests that the LDP, if it exists, occurs at a speed slower than $n$. To identify the appropriate speed, we begin with a lemma giving upper and lower bounds for the tails of $\mu_p$. \begin{lemma}\label{lem-ulbd} Let $p \in[1,2)$. Then, for all $x \ge 0$, \begin{equation*} \frac{x}{x^p+1}e^{-x^p/p} \le \int_x^\infty e^{-y^p/p}dy \le \frac{1}{x^{p-1}} e^{-x^p/p}. \end{equation*} \end{lemma} \begin{proof} First, we prove the upper bound. For $x\ge 0$, \begin{equation*} \int_x^\infty e^{-y^p/p}dy \le \int_x^\infty \frac{y^{p-1}}{x^{p-1}} e^{-y^p/p}dy \le \frac{1}{x^{p-1}} e^{-x^p/p}. \end{equation*} As for the lower bound, let \begin{equation*} f(x) \doteq \int_x^\infty e^{-y^p/p}dy - \frac{x}{x^p+1} e^{-x^p/p}, \quad x\ge 0. \end{equation*} Note that $f(0) > 0$ and $\lim_{x\rightarrow\infty} f(x) = 0$. Lastly, since $p<2$, \begin{equation*} f'(x) = -\frac{e^{-x^p/p}}{(x^p+1)^2}\left( (2-p)x^p + 2\right) < 0, \end{equation*} and so $f(x) \ge 0$ for all $x \ge 0$, thus proving the lower bound. \end{proof} \begin{lemma}\label{lem-tailyz} Let $p \ge 1$, $Y\sim \mu_p$, and $Z\sim \mu_2$, and let $Y$ and $Z$ be independent. Then, \begin{equation*} \lim_{t\rightarrow\infty} \frac{1}{t^{r_p}} \log {\mathbb P}(YZ \ge t) = - r_p^{-1}, \end{equation*} where $r_p = \frac{2p}{2+p}$ as in \eqref{rp}. \end{lemma} \begin{proof} First, we prove the lower bound. Fix $t > 0$. For all $s$ such that $0 < s < t$, by the independence of $Y$ and $Z$, and the lower bound of Lemma \ref{lem-ulbd}, \begin{equation*} {\mathbb P}(YZ \ge t) \ge {\mathbb P}(Y \ge s) {\mathbb P}(Z \ge \tfrac{t}{s}) \ge C_p\,\frac{s}{s^p+1} e^{-s^p/p} \frac{1}{(t/s) + (s/t)} e^{-t^2/(2s^2)}, \end{equation*} where $C_p \in (0,\infty)$ is a constant that depends on $p$ but not on $t$ or $s$. Then, pick the optimal $s$ for the lower bound, \begin{equation*} s_t = \arg\min_s\left\{ \tfrac{s^p}{p} + \tfrac{t^2}{2s^2}\right\} =t^{2/(2+p)} = t^{r_p/p}.
\end{equation*} Therefore, \begin{equation*} \liminf_{t\rightarrow\infty} \frac{1}{t^{r_p}} \log {\mathbb P}(YZ\ge t) \ge \liminf_{t\rightarrow\infty} \frac{1}{t^{r_p}} \left(-\tfrac{s_t^p}{p} - \tfrac{t^2}{2s_t^2}\right) = -\left(\tfrac{1}{p} + \tfrac{1}{2}\right) = - r_p^{-1}. \end{equation*} Now we prove the upper bound. By Lemma \ref{lem-ulbd}, for some different constant $\tilde{C}_p < \infty$, \begin{align*} {\mathbb P}(YZ \ge t) &= \tilde{C}_p\int_0^\infty {\mathbb P}(Y \ge \tfrac{t}{s}) \exp\left(-\tfrac{s^2}{2}\right) ds\\ &\le \tilde{C}_p\int_0^\infty \frac{1}{(t/s)^{p-1} } \exp\left(-\tfrac{1}{p}(\tfrac{t}{s})^p - \tfrac{s^2}{2}\right) ds \\ \text{\footnotesize($s^p = t^{p^2/(2+p)}u$)} \quad &= \tilde{C}_p\frac{1}{pt^{p-1}t^{p^2/(2+p)}} \int_0^\infty \exp\left(-\tfrac{t^p}{pt^{p^2/(2+p)} u} - \tfrac{t^{2p/(2+p)} u^{2/p}}{2}\right) du. \end{align*} Then, using Laplace's method, \begin{align*} \limsup_{t\rightarrow\infty} \frac{1}{t^{r_p}} \log{\mathbb P}(YZ \ge t) &\le \limsup_{t\rightarrow\infty} \frac{1}{t^{r_p}} \log \int_0^\infty \exp\left( -t^{r_p}\left(\tfrac{1}{pu} + \tfrac{u^{2/p}}{2}\right)\right) du\\ &= -\min_{u> 0} \left\{ \tfrac{1}{pu} + \tfrac{u^{2/p}}{2}\right\} \\ &= -\left(\tfrac{1}{p} + \tfrac{1}{2}\right) = -r_p^{-1}. \end{align*} \end{proof} We state an intermediate large deviation result. As in Sect.\ \ref{ssec-repn} (but, for ease of notation, omitting the superscripts $^{(n)}$ and $^{(n,p)}$), let $Y_1,\dots, Y_n$ be i.i.d. with common distribution $\mu_p$, and let $Z_1,\dots, Z_n$ be i.i.d. with common distribution $\mu_2$. Define the empirical mean of i.i.d.\ random variables, \begin{equation*} V^{(n,p)} \doteq \frac{1}{n}\sum_{i=1}^n Y_i\,Z_i. \end{equation*} \begin{proposition}\label{prop-stretched} Let $p\in [1,2)$. Then, with $r_p =\frac{2p}{2+p}$, the sequence $(V^{(n,p)})_{n\in {\mathbb N}}$ satisfies an LDP with speed $n^{r_p}$ and the quasiconvex good rate function $\mathbb{I}^\sfa_p(w) = \frac{1}{r_p} |w|^{r_p}$. 
\end{proposition} \begin{proof} This follows from \cite[Theorem 2.1]{arcones2002large}, where $p$, $b_n$, and $a$ there correspond to $r_p$, $n$, and $r_p^{-1}$ here, respectively. The condition $\frac{n}{n^{2-r_p}} \rightarrow 0$ as $n\rightarrow\infty$ holds since $r_p < 1$ for $p < 2$, and the condition $\frac{n}{n+1}\rightarrow 1$ as $n\rightarrow\infty$ holds trivially. Then, the symmetry of $\mu_p$ and the tail asymptotics of Lemma \ref{lem-tailyz} imply the desired LDP. Note that this result can also be deduced from \cite[Theorem 1]{gantert2014large}. \end{proof} We now show that at the large deviation scale, $\widetilde{W}^{(n,p)}$ of \eqref{wnou} is comparable to $V^{(n,p)}$ in the following sense. \begin{definition}\label{def-expequiv} Let $(\xi_n)$ and $(\tilde{\xi}_n)$ be two sequences of ${\mathbb R}$-valued random variables such that for all $\delta > 0$, and some speed $s(n)$, \begin{equation*} \limsup_{n\rightarrow\infty} \frac{1}{s(n)} \log {\mathbb P}( |\xi_n - \tilde{\xi}_n| > \delta ) = -\infty; \end{equation*} then, $(\xi_n)$ and $(\tilde{\xi}_n)$ are said to be \emph{exponentially equivalent} with speed $s(n)$. \end{definition} \begin{proposition}[\cite{DemZeibook}] \label{prop-expeq} If $(\xi_n)$ is a sequence of random variables that satisfies an LDP with speed $s(n)$ and good rate function $I$, and $(\tilde{\xi}_n)$ is another sequence that is exponentially equivalent to $(\xi_n)$ with speed $s(n)$, then $(\tilde{\xi}_n)$ satisfies an LDP with speed $s(n)$ and good rate function $I$. \end{proposition} \begin{proof}[Proof of Theorem \ref{th-aldp12}] We will prove that $(\widetilde{W}^{(n,p)})_{n\in {\mathbb N}}$ and $(V^{(n,p)})_{n\in {\mathbb N}}$ are exponentially equivalent with speed $n^{r_p}$. 
For $\delta > 0$, $\epsilon > 0$, \begin{align*} {\mathbb P} & (|V^{(n,p)}- \widetilde{W}^{(n,p)}| > \delta) \\ &= 2\, {\mathbb P}\left( \frac{1}{n}\sum_{i=1}^n Y_i\,Z_i \cdot \left(1 - \tfrac{n^{1/2}n^{1/p}}{\|Z^{(n)}\|_{n,2}\|Y^{(n,p)}\|_{n,p} } \right) > \delta \right) \\ &\le 2\, {\mathbb P}\left( \frac{1}{n}\sum_{i=1}^n Y_i Z_i > \frac{\delta}{\epsilon} \right) + 2\, {\mathbb P}\left( 1 - \tfrac{n^{1/2}n^{1/p}}{\|Z^{(n)}\|_{n,2}\|Y^{(n,p)}\|_{n,p} } > \epsilon\right)\\ &\le 2\, {\mathbb P}\left( \frac{1}{n}\sum_{i=1}^n Y_i Z_i > \frac{\delta}{\epsilon} \right) + 2\,{\mathbb P}\left(\frac{1}{n}\sum_{i=1}^n Z_i^2 > (1-\epsilon)^{-1} \right) + 2\,{\mathbb P}\left( \frac{1}{n}\sum_{i=1}^n |Y_i|^p > (1-\epsilon)^{-p/2} \right). \end{align*} Note that by Cram\'er's theorem, the second and third terms decay exponentially with speed $n$ since ${\mathbb E}[|Y_1|^p]^{1/p} = {\mathbb E}[|Z_1|^2]^{1/2}= 1$. Thus, for $p\in [1,2)$, the first term is dominant with speed $n^{r_p}$, yielding the limit \begin{align*} \limsup_{n\rightarrow \infty} \frac{1}{n^{r_p}} \log {\mathbb P}(|V^{(n,p)}- \widetilde{W}^{(n,p)}| > \delta) &\le \limsup_{n\rightarrow \infty} \frac{1}{n^{r_p}} \log {\mathbb P}\left( \frac{1}{n}\sum_{i=1}^n Y_i Z_i > \frac{\delta}{\epsilon} \right) = -\tfrac{1}{r_p}\left\lvert\tfrac{\delta}{\epsilon}\right\rvert^{r_p}, \end{align*} where the last equality follows from Proposition \ref{prop-stretched} and quasiconvexity. Sending $\epsilon \rightarrow 0$, we see that $(V^{(n,p)})_{n\in {\mathbb N}}$ and $(\widetilde{W}^{(n,p)})_{n\in{\mathbb N}}$ are exponentially equivalent with speed $n^{r_p}$. The LDP for $(\sw^{(n,p)})_{n\in{\mathbb N}}$ then follows from Proposition \ref{prop-stretched}, Proposition \ref{prop-expeq}, and the fact that the $U^{1/n}$ factor in \eqref{def-bwnpbth} can be ignored, since $(U^{1/n})_{n\in{\mathbb N}}$ satisfies an LDP at the strictly faster speed $n$, with the good rate function given in Lemma \ref{lem-uniform}.
\end{proof} \section{The quenched LDP}\label{sec-quenched} In this section, we prove Theorem \ref{th-qldp}, the quenched LDP for random projections of $\ell^p$ balls. To do so, we prove LDPs for the weighted sum \eqref{wnou2}, which has \emph{deterministic} weights. This task reduces to proving an LDP for sums of random variables which are independent but not identically distributed (in our case due to the inhomogeneous weights $\theta^{(n)}_i$), for which the G\"artner-Ellis theorem is well suited (see \cite[\S2.3]{DemZeibook}). We first show in Sect.\ \ref{ssec-pressure} that the convergence of a certain empirical measure implies the convergence of a certain limiting log mgf which arises in the G\"artner-Ellis theorem. Then, in Sect.\ \ref{ssec-glivenko}, we prove a slight extension of the Glivenko-Cantelli theorem which establishes convergence of the empirical measure in general settings. We specialize to our case of the surface measure $\sigma$ and complete the proof of the quenched LDP in Sect.\ \ref{ssec-surface}. \subsection{Convergence of log mgfs}\label{ssec-pressure} In what follows, we require two notions of convergence of probability measures. Let $\Rightarrow$ denote weak convergence, and also recall the Wasserstein topology of probability measures. \nom[aaa]{$\Rightarrow$}{weak convergence} \begin{definition} Let $r\in[1,\infty)$, let $m_r$ be the $r$-th moment as in \eqref{qmomdef}, and let $\mathcal{P}_r({\mathbb R}) \doteq \{\mu \in \mathcal{P}({\mathbb R}) : m_r(\mu) < \infty\}$. The \emph{Wasserstein}-$r$ topology on $\mathcal{P}_r({\mathbb R})$ is induced by the following metric: \begin{equation*} \mathcal{W}_r(\mu,\nu)\doteq \left( \inf_{ \pi \in \Pi(\mu,\nu)} \iint_{\mathbb{R}^2} |x-y|^r\, \pi(dx,dy) \right)^{1/r}, \end{equation*} where $\Pi(\mu,\nu)$ denotes the set of probability measures on $\mathbb{R}^2$ with first and second marginals $\mu$ and $\nu$, respectively.
\nom[pxr]{$\mathcal{P}_r({\mathbb R})$}{probability measures on ${\mathbb R}$ with finite $r$-th moment} \nom[wr]{$\mathcal{W}_r$}{Wasserstein-$r$ metric} \end{definition} \begin{lemma}[see, e.g., Definition 6.8 and Theorem 6.9 of \cite{villani2008optimal}]\label{lem-wass} Let $(\mu_n) \subset \mathcal{P}_r(\mathbb{R})$ and $\mu\in \mathcal{P}_r({\mathbb R})$. The following are equivalent: \begin{enumerate} \item $\mathcal{W}_r(\mu_n,\mu) \rightarrow 0$; \item $\mu_n\Rightarrow \mu$ and $m_r(\mu_n) \rightarrow m_r(\mu)$; \item for all continuous functions $\varphi:{\mathbb R}\rightarrow{\mathbb R}$ satisfying $|\varphi(x)| \le C(1+|x|^r)$ for all $x\in {\mathbb R}$ and some constant $C < \infty$, we have \begin{equation*} \int_{\mathbb R} \varphi(x) \mu_n(dx) \xrightarrow{n\rightarrow \infty} \int_{\mathbb R} \varphi(x) \mu(dx). \end{equation*} \end{enumerate} \end{lemma} For $\theta\in\mathbb{S}$, let $L_{n,\theta}$ denote the empirical measure, \begin{equation*} L_{n,\theta} \doteq \frac{1}{n}\sum_{i=1}^n \delta_{\sqrt{n}\theta^{(n)}_i}. \end{equation*} The goal of this subsection is to prove the following statement: convergence of the empirical measures $(L_{n,\theta})_{n\in {\mathbb N}}$ implies a quenched LDP. \begin{proposition}\label{prop-quegen} Let $p\in(1,\infty)$. Let $\rho\in\mathcal{P}(\mathbb{A})$ be a probability measure on the space of triangular arrays $\mathbb{A}$, let $\nu\in\mathcal{P}_{p/(p-1)}({\mathbb R})$, and suppose that for $\rho$-a.e.\ $\theta \in \mathbb{S}$, we have as $n\rightarrow\infty$, \begin{equation*} \mathcal{W}_{p/(p-1)}( L_{n,\theta}, \nu) \rightarrow 0. \end{equation*} Then, for $\rho$-a.e.\ $\theta \in \mathbb{S}$, the sequence $(\sw_\theta^{(n,p)})_{n\in\mathbb{N}}$ satisfies an LDP with the quasiconvex, symmetric, good rate function $\mathbb{I}^\sfq_{p,\nu}$ of \eqref{ieqdefn}. \end{proposition} We defer the proof of Proposition \ref{prop-quegen} to the end of this subsection (see p.\pageref{page-propproof}).
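As a quick numerical illustration of the hypothesis of Proposition \ref{prop-quegen} (this sketch is not part of the argument), note that in one dimension the optimal coupling is monotone, so the Wasserstein cost between an empirical measure and a target law can be estimated by matching sorted samples against quantiles. A minimal Python sketch, assuming NumPy and SciPy are available; the target $\nu = \mu_2$ (standard Gaussian), the exponent $r=2$, the sample sizes, and the seed are all illustrative choices:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def wasserstein_r_pow(sample, r):
    """Estimate W_r(L_n, N(0,1))^r via the monotone (quantile) coupling,
    which is optimal in one dimension."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    q = norm.ppf((np.arange(n) + 0.5) / n)  # N(0,1) quantiles at midpoints
    return np.mean(np.abs(x - q) ** r)

for n in [100, 10_000, 1_000_000]:
    z = rng.standard_normal(n)
    theta = z / np.linalg.norm(z)           # theta uniform on the sphere S^{n-1}
    cost = wasserstein_r_pow(np.sqrt(n) * theta, r=2.0)
    print(n, cost)
```

The printed costs typically decay towards zero as $n$ grows, consistent with $\mathcal{W}_2(L_{n,\theta},\mu_2)\rightarrow 0$ for spherically distributed $\theta^{(n)}$.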
\begin{remark} A slightly different approach to proving the ``product" version of Theorem \ref{th-qldp} can be found in \cite{gkr3}; that argument does not appeal to the convergence of empirical measures assumed by Proposition \ref{prop-quegen}. However, Proposition \ref{prop-quegen} has the benefit of giving a concrete interpretation of the quenched rate function $\mathbb{I}^\sfq_{p,\nu}$ for any $\nu\in\mathcal{P}_{p/(p-1)}({\mathbb R})$ associated with a conditioned ``environment" $\theta$. \end{remark} We now establish some notation and several preliminary lemmas. For $\gamma \in \mathcal{P}({\mathbb R})$, let \begin{equation}\label{logmgfdef} \mathrm{M}_\gamma(t) \doteq \int_{\mathbb R} e^{ty} \gamma(dy) \end{equation} denote the moment generating function (mgf) of $\gamma$. Let $\mathcal{T}_q$ denote the set of probability measures on ${\mathbb R}$ with tails dominated by the tails of $\mu_q$, in the following sense: \begin{equation}\label{tpdef} \mathcal{T}_q \doteq \left\{ \gamma\in\mathcal{P}({\mathbb R}) : \exists \, C < \infty \text{ s.t. } \forall\, t\in {\mathbb R}, \quad \log\mathrm{M}_\gamma(t) < C|t|^{q/(q-1)} + C \right\}. \end{equation} Note that $\mathcal{T}_p \supset \mathcal{T}_q$ for $p < q$, and $\mathcal{T}_2$ consists of subgaussian measures. \nom[mynu]{$\mathrm{M}_\gamma$}{moment generating function} \nom[tp]{$\mathcal{T}_q$}{measures w/ $f_q$-dominated tails} \begin{lemma}\label{lem-taildecay} Suppose $\gamma\in\mathcal{P}({\mathbb R})$ has density $f$ and $q\in[1,\infty)$ is such that there exist constants $0 < c_\gamma, d_\gamma < \infty$ such that for all $|x| > d_\gamma$, \begin{equation*} f(x) \le c_\gamma e^{-c_\gamma |x|^q/q}. \end{equation*} Then, $\gamma\in\mathcal{T}_q$. In particular, for $q\in[1,\infty)$, we have $\mu_q\in\mathcal{T}_q$. \end{lemma} \begin{proof} The first assertion of the lemma follows from a simple application of Young's inequality (see \cite[Lemma 2.3]{gkr3} for details).
The second assertion is a simple consequence of the first. \end{proof} \begin{lemma}\label{lem-subgsn} Let $p\in (1,\infty)$. For $\gamma\in\mathcal{T}_p$ and $t \in{\mathbb R}$, the map \begin{equation}\label{gammap} \mathcal{P}_{p/(p-1)}({\mathbb R})\ni \nu\mapsto \int_{\mathbb R} \log \mathrm{M}_\gamma(tu)\,\nu(du) \in {\mathbb R} \end{equation} is continuous with respect to the Wasserstein-$\tfrac{p}{p-1}$ topology. \end{lemma} \begin{proof} Fix $t \in {\mathbb R}$. Then, the map $u \mapsto \log \mathrm{M}_\gamma (tu)$ is clearly continuous and the definition of $\mathcal{T}_p$ implies that \begin{equation*} \log \mathrm{M}_\gamma(tu) < C |u|^{p/(p-1)} + C, \quad u\in {\mathbb R}, \end{equation*} for some constant $C$ depending on $t$ and $\gamma$, but not $u$. The continuity of \eqref{gammap} with respect to the Wasserstein-$\tfrac{p}{p-1}$ topology follows from the equivalent formulation of Wasserstein convergence given by Lemma \ref{lem-wass}(3). \end{proof} \begin{lemma}\label{lem-tailcont} Let $p\in [1,\infty)$, and let $\lm_p$ and $\Psi_{p,\nu}$ be as defined in \eqref{lampdefn0} and \eqref{lampdefn}, respectively. Then, \begin{equation*} \lm_p(t_1,t_2) = -\tfrac{1}{p}\log(1-pt_2) + \log \mathrm{M}_{\mu_p}\left(\tfrac{t_1}{(1-pt_2)^{1/p}}\right), \quad t_1\in {\mathbb R}, t_2<\tfrac{1}{p}. \end{equation*} As a consequence, for $p\in(1,\infty)$, $t_1\in{\mathbb R}$, $t_2<\tfrac{1}{p}$, the map \begin{equation*} \mathcal{P}_{p/(p-1)}({\mathbb R})\ni \nu\mapsto \Psi_{p,\nu}(t_1,t_2) \in {\mathbb R} \end{equation*} is continuous with respect to the Wasserstein-$\tfrac{p}{p-1}$ topology.
\end{lemma} \begin{proof} By the change of variables $x = (1-pt_2)^{1/p}y$ and the form of the density of $\mu_p$ given by \eqref{fpdef}, we write \begin{align*} \lm_p(t_1,t_2) &= \log\int_{\mathbb{R}} e^{t_1y} \tfrac{1}{2p^{1/p}\Gamma(1+\tfrac{1}{p})} e^{-(1-pt_2) |y|^p/p} dy \\ &=\log\left( \tfrac{1}{(1-pt_2)^{1/p}} \int_{\mathbb{R}} \exp \left(\tfrac{t_1}{(1-pt_2)^{1/p}}x\right) \tfrac{1}{2p^{1/p}\Gamma(1+\tfrac{1}{p})}e^{-|x|^p/p} dx\right) \\ &=-\tfrac{1}{p}\log(1-pt_2) + \log \mathrm{M}_{\mu_p}\left(\tfrac{t_1}{(1-pt_2)^{1/p}}\right). \end{align*} We now prove the continuity part of the lemma. Due to Lemma \ref{lem-taildecay}, $\mu_p \in \mathcal{T}_p$, and therefore by Lemma \ref{lem-subgsn}, $\nu\mapsto\int_{\mathbb R} \log \mathrm{M}_{\mu_p}(tu)\,\nu(du)$ is continuous for all $t\in {\mathbb R}$. Combined with the preceding display, this implies that $\nu\mapsto \Psi_{p,\nu}(t_1,t_2)$ is continuous for all $t_1\in{\mathbb R}$ and $t_2< \frac{1}{p}$. \end{proof} Whereas Lemma \ref{lem-tailcont} will be applied to establish the convergence of certain log mgfs, Lemma \ref{lem-lamfacts} and Lemma \ref{lem-quenchedess} will be used to show that the limit log mgf satisfies the hypotheses of the G\"artner-Ellis theorem. We refer to Theorem 2.3.6 of \cite{DemZeibook} for a precise statement, and Definition 2.3.5 of \cite{DemZeibook} for the definition of \emph{essentially smooth}. \begin{lemma}\label{lem-lamfacts} Let $p\in(1,\infty)$. Then, $D_{\lm_p} = {\mathbb R}\times (-\infty, \frac{1}{p})$ and $\lm_p$ is strictly convex on its effective domain, lower semi-continuous, and essentially smooth. Furthermore, $\lm_p$ is symmetric in its first argument. Lastly, $\lm_p$ is non-decreasing in its second argument; that is, for fixed $t_1 \in {\mathbb R}$ and $t_2 < t_2'$, we have $\lm_p(t_1,t_2) \le \lm_p(t_1,t_2')$. \end{lemma} \begin{proof} This is a basic consequence of standard properties of mgfs and the representation of Lemma \ref{lem-tailcont}. 
\end{proof} \begin{lemma}\label{lem-quenchedess} Let $p\in(1,\infty)$ and $\nu\in\mathcal{P}_{p/(p-1)}(\mathbb{R})$. Then, $\Psi_{p,\nu}$ is essentially smooth and lower semi-continuous, and $0 \in D_{\Psi_{p,\nu}}^\circ$. \end{lemma} \begin{proof} Recall from Lemma \ref{lem-lamfacts} that $D_{\lm_p} = {\mathbb R} \times (-\infty,\frac{1}{p})$. For $(t_1,t_2)\not\in D_{\lm_p}$, note that $\Psi_{p,\nu}(t_1,t_2) = +\infty$ for all $\nu\in \mathcal{P}({\mathbb R})$. Due to Lemmas \ref{lem-subgsn} and \ref{lem-tailcont}, there exists a constant $C < \infty$ such that for all $t_1\in \mathbb{R}$ and $t_2 < \frac{1}{p}$, \begin{align*} \Psi_{p,\nu}(t_1,t_2) &\le - \tfrac{1}{p}\log\left( 1- pt_2\right) + \int_\mathbb{R} \left( C\left|\tfrac{ t_1z}{(1- pt_2)^{1/p}}\right|^{p/(p-1)} + C\right) \nu(dz)\\ &= - \tfrac{1}{p}\log\left( 1- pt_2\right) + C\tfrac{|t_1|^{p/(p-1)}}{(1- pt_2)^{1/(p-1)}} m_{p/(p-1)}(\nu) + C< \infty. \end{align*} That is, \begin{equation*} D_{\Psi_{p,\nu}}^\circ = \mathbb{R}\times (-\infty,\tfrac{1}{p}) \ni 0. \end{equation*} As for essential smoothness, first note that differentiability of $\Psi_{p,\nu}$ in $D_{\Psi_{p,\nu}}^\circ$ follows from the differentiability of $(t_1,t_2)\mapsto \lm_p(t_1u,t_2)$ for all $u\in {\mathbb R}$ and an application of the dominated convergence theorem with the dominating function \begin{equation*} g_{t_1,t_2}(u) \doteq |\nabla\lm_p((t_1-1)u,t_2)| + |\nabla\lm_p((t_1+1)u,t_2)|. \end{equation*} We refer to Lemma 3.8 of \cite{gkr3} for a similar argument in greater detail. 
Note by Lemma \ref{lem-lamfacts} that $\partial_{t_2} \lm_p \ge 0$, which implies \begin{equation*} | \nabla \Psi_{p,\nu}(t_1,t_2)| \ge | \partial_{t_2} \Psi_{p,\nu}(t_1,t_2) | = \left|\int_\mathbb{R} \partial_{t_2} \lm_p(t_1u,t_2) \nu(du)\right| = \int_\mathbb{R} \partial_{t_2} \lm_p(t_1u,t_2) \nu(du). \end{equation*} Then, by Fatou's Lemma, for $t' \in \mathbb{R}$, \begin{align*} \liminf_{(t_1,t_2) \rightarrow (t', 1/p)} | \nabla \Psi_{p,\nu}(t_1,t_2)| &\ge \liminf_{(t_1,t_2) \rightarrow (t', 1/p)} \int_\mathbb{R} \partial_{t_2} \lm_p(t_1u,t_2) \nu(du) \\ &\ge \int_\mathbb{R} \liminf_{(t_1,t_2) \rightarrow (t', 1/p)} \partial_{t_2} \lm_p(t_1u,t_2) \nu(du) = \infty, \end{align*} where the last equality follows from the steepness of $\lm_p$ established in Lemma \ref{lem-lamfacts}. This shows that $\Psi_{p,\nu}$ is steep and hence completes the proof of essential smoothness of $\Psi_{p,\nu}$. For lower semi-continuity, suppose $(t_1^{(n)},t_2^{(n)})\rightarrow (t_1,t_2)$ as $n\rightarrow\infty$. Then, \begin{equation*} \Psi_{p,\nu}(t_1,t_2) \le \int_\mathbb{R} \liminf_{n\rightarrow\infty} \lm_p(t_1^{(n)}u,t_2^{(n)}) \nu(du) \le \liminf_{n\rightarrow\infty} \,\Psi_{p,\nu}(t_1^{(n)},t_2^{(n)}), \end{equation*} where the first inequality is due to the lower semi-continuity of $\lm_p$ (from Lemma \ref{lem-lamfacts}), and the second inequality is due to Fatou's Lemma. \end{proof} \label{page-propproof} \begin{proof}[Proof of Proposition \ref{prop-quegen}] We begin by proving a $\rho$-a.e.\ LDP for the sequence $(R_\theta^{(n,p)})_{n\in{\mathbb N}}$ in ${\mathbb R}^2$, defined as \begin{equation}\label{rnp2} R_\theta^{(n,p)} \doteq \left(\frac{1}{n}\sum_{i=1}^n \sqrt{n}\theta^{(n)}_iY^{(n,p)}_i, \, \, \frac{1}{n}\sum_{i=1}^n |Y^{(n,p)}_i|^p \right).
\end{equation} Consider the G\"artner-Ellis limit log mgf: for $t=(t_1,t_2)\in {\mathbb R}^2$, \begin{align*} \lim_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb E}\left[\exp(n\,\langle t, R_\theta^{(n,p)}\rangle)\right] &= \lim_{n\rightarrow\infty} \frac{1}{n} \log {\mathbb E}\left[ \exp\left(\sum_{i=1}^n t_1\sqrt{n}\theta^{(n)}_i Y^{(n,p)}_i+ t_2 |Y^{(n,p)}_i|^p \right) \right]\\ &= \lim_{n\rightarrow\infty} \frac{1}{n} \log \prod_{i=1}^n {\mathbb E}\left[ \exp\left( t_1\sqrt{n}\theta^{(n)}_i Y^{(n,p)}_i+ t_2 |Y^{(n,p)}_i|^p \right) \right]\\ & = \lim_{n\rightarrow\infty} \frac{1}{n} \sum_{i=1}^n \lm_p(t_1\sqrt{n}\theta^{(n)}_i, t_2), \end{align*} with $\lm_p$ given by \eqref{lampdefn0}. Due to Lemma \ref{lem-tailcont}, for all $t_1\in {\mathbb R}$ and $t_2<\frac{1}{p}$, the map $\nu\mapsto \int \lm_p(t_1 u,t_2) \nu(du)$ is continuous with respect to the Wasserstein-$\tfrac{p}{p-1}$ topology. Since by assumption, the empirical measure $L_{n,\theta}$ converges to $\nu$ in in the Wasserstein-$\tfrac{p}{p-1}$ topology, we have that for $\rho$-a.e.\ $\theta \in \mathbb{S}$, for all $t_1\in {\mathbb R}$ and $t_2<\frac{1}{p}$, \begin{equation}\label{gcllnmgf} \lim_{n\rightarrow\infty} \frac{1}{n}\sum_{i=1}^n \lm_p(t_1\sqrt{n}\theta^{(n)}_i, \, t_2) = \int_\mathbb{R} \lm_p(t_1u,t_2)\nu(du). \end{equation} The same claim holds for all $t_2\ge\frac{1}{p}$, with both sides in the preceding equality valued at $+\infty$. Due to the lower semi-continuity and essential smoothness of $\Psi_{p,\nu}$ as established in Lemma \ref{lem-quenchedess}, for $\rho$-a.e.\ $\theta\in\mathbb{S}$, the G\"artner-Ellis theorem (see, e.g., \cite[Theorem 2.3.6]{DemZeibook}) yields the LDP for the sequence $(R_\theta^{(n,p)})_{n\in {\mathbb N}}$, with the good rate function $\Psi_{p,\nu}^*$. 
Note that $D_{\Psi_{p,\nu}^*}\subset {\mathbb R} \times (0,\infty)$, and the map $\bar T_p: {\mathbb R} \times (0,\infty)\rightarrow{\mathbb R}$ defined by \begin{equation}\label{bartp} \bar T_p(\tau_1,\tau_2) \doteq \tau_1\tau_2^{-1/p} \end{equation} is continuous. Since $\widetilde{W}^{(n,p)}_\theta = \bar T_p (R_\theta^{(n,p)})$, we can apply the contraction principle to obtain an LDP for $(\widetilde{W}^{(n,p)}_\theta)_{n\in {\mathbb N}}$ with the rate function $\mathbb{I}^\sfq_{p,\nu}$. Due to Lemma \ref{lem-reduction}, this implies that an identical LDP holds for $(W_\theta^{(n,p)})_{n\in{\mathbb N}}$. \end{proof} \begin{remark} In Proposition \ref{prop-quegen}, we make the assumption $p > 1$ so that the right-hand side of \eqref{gcllnmgf} is well defined. In the case of $p=1$, the effective domain is $D_{\Lambda_1} = (-1,1) \times (-\infty, 1)$, so the integral over ${\mathbb R}$ on the RHS of \eqref{gcllnmgf} is infinite. This issue does not arise for $p >1$ due to Lemma \ref{lem-tailcont}. \end{remark} \subsection{An extension of the Glivenko-Cantelli theorem}\label{ssec-glivenko} In view of Proposition \ref{prop-quegen}, it is natural to investigate when the empirical measure convergence holds. Recall the classical Glivenko-Cantelli theorem, which concerns weak convergence of the empirical measure of an i.i.d.\ sequence. That is, for $\xi_1,\xi_2,\dots$ i.i.d.\ with common distribution $\mu$, \begin{equation*} \frac{1}{n}\sum_{i=1}^n \delta_{\xi_i} \Rightarrow \mu, \quad {\mathbb P}\text{-a.s.} \end{equation*} In the lemma below, we state a slight extension of the Glivenko-Cantelli theorem, to triangular arrays with some dependence within rows, and to Wasserstein convergence instead of weak convergence.
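Before stating the lemma, here is a minimal numerical sketch of the mechanism it formalizes (not part of the paper's argument; it assumes NumPy and SciPy are available): take $\mu$ to be the standard Gaussian and $f_n = n^{-1/2}\|\cdot\|_{n,2}$, as in the application of Sect.\ \ref{ssec-surface}, and measure the sup-distance between the empirical cdf of the normalized row $\eta^{(n)} = \xi^{(n)}/f_n(\xi^{(n)})$ and the Gaussian cdf.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def sup_cdf_gap(sample):
    """Kolmogorov-Smirnov statistic sup_t |G_n(t) - F(t)| against F = N(0,1) cdf;
    the supremum is attained at the order statistics of the sample."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    F = norm.cdf(x)
    return max(np.max(np.arange(1, n + 1) / n - F), np.max(F - np.arange(n) / n))

for n in [100, 10_000, 1_000_000]:
    xi = rng.standard_normal(n)          # one row of the triangular array
    f_n = np.sqrt(np.mean(xi ** 2))      # n^{-1/2} ||xi||_{n,2} -> 1 a.s.
    eta = xi / f_n                       # normalized row
    print(n, sup_cdf_gap(eta))
```

The printed gaps typically shrink as $n$ grows, in line with the ${\mathbb P}$-a.s.\ weak convergence $L_{n,\eta}\Rightarrow \mu$ established below.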
\begin{lemma}\label{lem-gliv} Let $\mu \in \mathcal{P}(\mathbb{R})$, and suppose $(\xi^{(n)})_{n\in {\mathbb N}}$ is a sequence of random vectors defined on a common probability space $(\Omega,\mathcal{F},{\mathbb P})$ such that $\xi^{(n)}\sim \mu^{\otimes n}$ for each $n\in{\mathbb N}$. Next, let $f_n:{\mathbb R}^n\rightarrow {\mathbb R}$ be such that \begin{equation} f_n(\xi^{(n)}) \xrightarrow{n\rightarrow\infty} 1, \quad \mathbb{P}\text{-a.s.} \label{convass} \end{equation} Let $\eta^{(n)} \doteq \xi^{(n)} / f_n(\xi^{(n)})$, and consider its empirical measure, \begin{equation*} L_{n,\eta} \doteq \frac{1}{n}\sum_{i=1}^n\delta_{\eta_i^{(n)}}. \end{equation*} If $m_p(\mu) < \infty$ for some $p\in[1,\infty)$, then \begin{equation*} \mathcal{W}_{p/4}( L_{n,\eta} ,\mu) \rightarrow 0, \quad \mathbb{P}\text{-a.s.} \end{equation*} \end{lemma} \begin{proof} Let $\mathbf{F}$ be the cumulative distribution function (cdf) of $\mu$. Let $\mathbb{F}_n$ and $\mathbb{G}_n$, respectively, denote the empirical distribution functions of the samples $\xi^{(n)}$ and $\eta^{(n)}$: \begin{align*} \mathbb{F}_n(t) &\doteq \frac{1}{n}\, \#\left\{ i=1,\dots,n : \xi_i^{(n)} \le t \right\},\\ \mathbb{G}_n(t) &\doteq \frac{1}{n}\, \#\left\{ i=1,\dots,n : \eta_i^{(n)} \le t \right\}. \end{align*} First, we prove ${\mathbb P}$-a.s.\ weak convergence of $\mathbb{G}_n$ to $\mathbf{F}$. In other words, we prove that ${\mathbb P}$-a.s., for any point of continuity $t$ of $\mathbf{F}$, \begin{equation*} \lim_{n\rightarrow\infty} \mathbb{G}_n(t) = \mathbf{F}(t).
\end{equation*} Note that $\mathbb{G}_n(t) = \mathbb{F}_n(f_n(\xi^{(n)})\,t)$, so by the triangle inequality, \begin{align} |\mathbb{G}_n(t) - \mathbf{F}(t)| &\le \left|\mathbb{F}_n(f_n(\xi^{(n)})\, t) - \mathbf{F}(f_n(\xi^{(n)})\, t)\right| + \left|\mathbf{F}(f_n(\xi^{(n)})\,t) - \mathbf{F}(t)\right| \notag \\ &\le \sup_{x\in {\mathbb R}} |\mathbb{F}_n(x) - \mathbf{F}(x) | + \left|\mathbf{F}(f_n(\xi^{(n)})\,t) - \mathbf{F}(t)\right|.\label{goalweak} \end{align} The first term of \eqref{goalweak} converges to zero by the extension of the Glivenko-Cantelli theorem to row-wise independent triangular arrays \cite[p.106, Theorem 1]{shorack2009empirical}. The second term of \eqref{goalweak} converges to zero due to the assumption \eqref{convass} and the continuity of $\mathbf{F}$ at $t$. Therefore, we have that $L_{n,\eta}\Rightarrow \mu$, ${\mathbb P}$-a.s. Next, we prove convergence of suitable moments in order to strengthen the result to Wasserstein convergence. Due to Lemma \ref{lem-wass}, it suffices to show ${\mathbb P}$-a.s.\ convergence of the $p/4$-th moments of $L_{n,\eta}$. That is, ${\mathbb P}$-a.s., \begin{equation}\label{momconv} \lim_{n\rightarrow\infty} m_{p/4}(L_{n,\eta}) = m_{p/4}(\mu). \end{equation} Note that \begin{equation*} m_{p/4}(L_{n,\eta}) = \frac{1}{n}\sum_{i=1}^n |\eta_i^{(n)}|^{p/4} = \frac{\frac{1}{n}\sum_{i=1}^n |\xi_i^{(n)}|^{p/4}}{f_n(\xi^{(n)})^{p/4}}. \end{equation*} Due to the assumption \eqref{convass}, in order to prove \eqref{momconv}, it suffices to show that, ${\mathbb P}$-a.s., \begin{equation}\label{xipfour} \frac{1}{n}\sum_{i=1}^n |\xi_i^{(n)}|^{p/4} \rightarrow m_{p/4}(\mu). \end{equation} Note that the strong law of large numbers (SLLN) does not extend (in general) to row-wise means of i.i.d.\ triangular arrays, but a standard Borel-Cantelli argument shows that the SLLN \emph{does} hold if the common law of the i.i.d.\ elements has finite fourth moment \cite[Example 5.41]{romano1986counterexamples}.
Since $\mu$ has finite $p$-th moment, we have \begin{equation*} \int_{\mathbb R} (|x|^{p/4})^4 \mu(dx) = m_p(\mu) < \infty, \end{equation*} and thus, \eqref{xipfour} holds, implying Wasserstein-$p/4$ convergence. \end{proof} \begin{remark} A weaker version of Lemma \ref{lem-gliv} can be found in \cite[p.235]{spruill2007asymptotic}, where $\mu = \mu_p$ and $f_n = n^{-1/p}\|\cdot\|_{n,p}$, so that $\eta^{(n)} \sim\text{Unif}({\mathbb B}_{n,p})$; the difference is that the statement in \cite{spruill2007asymptotic} is for convergence in probability (instead of ${\mathbb P}$-a.s.), and weak convergence of measures (instead of Wasserstein). \end{remark} \subsection{The measure $\sigma \in \mathcal{P}(\mathbb{S})$}\label{ssec-surface} Recall the measure $\sigma\in\mathcal{P}(\mathbb{S})$ which was assumed to satisfy \eqref{sigproj}. It remains to show how $\sigma$ fits into the framework of Proposition \ref{prop-quegen} and Lemma \ref{lem-gliv}. To do so, we further explore the probabilistic representation for the surface measure on $\mathbb{S}^{n-1}$ given in Sect.\ \ref{sec-equiv}. Let $\mathcal{R}:\mathbb{A}\rightarrow\mathbb{A}$ be the map such that for $z \in \mathbb{A}$, the $n$-th row of $\mathcal{R}(z)$ is \begin{equation}\label{rdef} \mathcal{R}(z)^{(n)} \doteq \frac{z^{(n)}}{\|z^{(n)}\|_{n}}. \end{equation} Let $\pi_n:\mathbb{A}\rightarrow \mathbb{R}^n$ denote the coordinate map such that $\pi_n(z) = z^{(n)}$, outputting the $n$-th row of a triangular array. \nom[RR]{$\mathcal{R}$}{Gaussian to spherical map} \begin{definition}\label{zetdef} Let $\zeta\in\mathcal{P}(\mathbb{A})$ be such that $\zeta \circ \pi_n^{-1}$ is the standard Gaussian measure on ${\mathbb R}^n$. \nom[zzeta]{$\zeta$}{a Gaussian measure on $\mathbb{A}$} \end{definition} \begin{proof}[Proof of Theorem \ref{th-qldp}] Fix $r<\infty$. Then, for $\sigma$-a.e.\ $\theta \in \mathbb{S}$, we claim that $\mathcal{W}_r(L_{n,\theta},\mu_2)\rightarrow 0$ as $n\rightarrow\infty$. 
The proof of the quenched LDP follows immediately from the preceding claim and Proposition \ref{prop-quegen} with $\nu=\mu_2$. To prove the claim, first note that a straightforward application of Lemma \ref{lem-jointrep} shows that if $\sigma$ satisfies \eqref{sigproj}, then for some $\zeta$ as in Definition \ref{zetdef}, we have $\sigma = \zeta \circ \mathcal{R}^{-1}$. The upshot is that $\sigma$-a.e.\ claims about $\theta \in \mathbb{S}$ (i.e., Theorem \ref{th-qldp}) can be reduced to $\zeta$-a.e.\ claims about $\mathcal{R}(z)$ for $z\in\mathbb{A}$. Thus, it suffices to show that for $\zeta$-a.e.\ $z\in\mathbb{A}$, we have \begin{equation*} \mathcal{W}_r\left( \frac{1}{n}\sum_{i=1}^n \delta_{\sqrt{n} z_i^{(n)} / \|z^{(n)}\|_{n,2}} , \, \mu_2\right) \rightarrow 0. \end{equation*} This is a consequence of Lemma \ref{lem-gliv}, with $\mu= \mu_2$ (which has finite moments of all orders) and $f_n = n^{-1/2} \|\cdot\|_{n,2}$. \end{proof} \subsection{Quenched proof for $p=1$}\label{ssec-qp1} \begin{proof}[Proof of Theorem \ref{th-qldp1}] For $\theta\in \mathbb{S}$ satisfying \eqref{thmax} with limit $c> 0$, let \begin{equation*} V_\theta^{(n)} \doteq \frac{1}{n}\sum_{i=1}^n Y_i\,\sqrt{n}\theta_i^{(n)}, \end{equation*} where $Y_1,Y_2,\dots$ are i.i.d.\ random variables with distribution $\mu_1(dy) \doteq \frac{1}{2} e^{-|y|}dy$. To prove the LDP for $(W_\theta^{(n,1)})_{n\in {\mathbb N}}$, it suffices to show that $(V_\theta^{(n)})_{n\in{\mathbb N}}$ satisfies an LDP with speed $n/\sqrt{\log n}$ and the good rate function $\mathbb{I}^\sfq_{1,c}$ (see proof of Theorem \ref{th-aldp12} for a similar argument).
In fact, due to the symmetry of $\mu_1$ and the monotonicity of $w\mapsto \mathbb{I}^\sfq_{1,c}(w)$ for $w > 0$, it suffices to show that for $w> 0$, we have the following upper and lower bounds: \begin{equation}\label{ldclaim1} \limsup_{n\rightarrow\infty} \frac{\sqrt{\log n}}{n}\log {\mathbb P}\left( V_\theta^{(n)} \ge w \right) \le -\frac{w}{c}; \quad \quad \quad \liminf_{n\rightarrow\infty} \frac{\sqrt{\log n}}{n}\log {\mathbb P}\left( V_\theta^{(n)} \ge w \right) \ge -\frac{w}{c}. \end{equation} First we prove the upper bound in \eqref{ldclaim1}. For $\epsilon \in(0,1)$, let \begin{equation*} t_{n,\epsilon} \doteq \frac{1-\epsilon}{c(1+\epsilon)\sqrt{\log n}}. \end{equation*} Due to \eqref{thmax}, for all $\epsilon >0$, there exists $N_{\epsilon} < \infty$ such that for $n\ge N_\epsilon$, we have $ \sqrt{n}t_{n,\epsilon} \max_{1\le i \le n} \theta_i^{(n)} \le 1-\epsilon$. Recall that for $t\in {\mathbb R}$, the mgf of $\mu_1$ is ${\mathbb E}[e^{tY_1}] = (1-t^2)^{-1}$ for $|t| <1$, and equals $+\infty$ otherwise. Combined with the Chernoff bound and the elementary bound $-\log (1-x) \le x + \tfrac{x^2}{2(1-x)}$ for $x\in[0,1)$, we find that for $n\ge N_\epsilon$, \begin{align*} \frac{1}{nt_{n,\epsilon} } \log {\mathbb P}(V_\theta^{(n)} \ge w) &\le \frac{1}{nt_{n,\epsilon} } \sum_{i=1}^n -\log(1-n\,t_{n,\epsilon}^2 (\theta_i^{(n)})^2) - w \\ &\le t_{n,\epsilon} \sum_{i=1}^n (\theta_i^{(n)})^2 +\frac{\,t_{n,\epsilon}}{2} \sum_{i=1}^n \frac{(\theta_i^{(n)})^2 (\sqrt{n} t_{n,\epsilon}\theta_i^{(n)})^2}{1-n\,t_{n,\epsilon}^2 (\theta_i^{(n)})^2} - w \\ &\le t_{n,\epsilon} +\frac{\,t_{n,\epsilon}}{2}\, \frac{(1-\epsilon)^2}{1-(1-\epsilon)^2} - w. \end{align*} It follows that $\limsup_{n\rightarrow\infty} \frac{\sqrt{\log n}}{n } \log {\mathbb P}(V_\theta^{(n)} \ge w) \le -\frac{w(1-\epsilon)}{c(1+\epsilon)}$. Letting $\epsilon \rightarrow 0$ yields the upper bound. Now we prove the corresponding lower bound in \eqref{ldclaim1}.
Again due to \eqref{thmax}, there exists some $N_\epsilon < \infty$ such that for $n\ge N_\epsilon$, we have $\sqrt{n} \max_{1\le i \le n} \theta_i^{(n)} \ge c(1-\epsilon) \sqrt{\log n}$. For $n\in {\mathbb N}$, let $j_n \doteq \arg\max_{1\le i\le n} \theta_i^{(n)}$. Then, for $n\ge N_\epsilon$, \begin{equation}\label{infbd1} {\mathbb P}(V_\theta^{(n)} \ge w) \ge {\mathbb P}\left( Y_{j_n} \ge \tfrac{wn}{c(1-\epsilon)\sqrt{\log n}} \right)\cdot {\mathbb P}\left( \sum_{i\ne {j_n}} Y_i \sqrt{n}\theta_i^{(n)} \ge 0 \right). \end{equation} The second term in \eqref{infbd1} equals $1/2$, due to the symmetry of $\mu_1$. As for the first term, it follows from Lemma \ref{lem-ulbd} with $p=1$ that \begin{equation*} \lim_{\epsilon \rightarrow 0} \lim_{n\rightarrow\infty} \tfrac{\sqrt{\log n}}{n} \log {\mathbb P}\left( Y_{j_n} \ge \tfrac{wn}{c(1-\epsilon)\sqrt{\log n}} \right) = \lim_{\epsilon \rightarrow 0} -\frac{w}{c(1-\epsilon)} = -\frac{w}{c}. \end{equation*} Combining this with \eqref{infbd1}, one obtains the lower bound. \end{proof} \begin{remark}\label{rmk-whysig} Until now, we have not clarified why the condition \eqref{thmax} is natural, nor why it is not possible to make the same $\sigma$-a.e.\ claim as in the quenched LDP for $p\in(1,\infty)$. Roughly speaking, ``almost everywhere" statements on row-wise \emph{sums} of triangular arrays are essentially identical to the corresponding statements for sequences; this is clarified in the proof of Lemma \ref{lem-gliv}, and crucial to the proof of Theorem \ref{th-qldp}, the quenched LDP for $p\in(1,\infty)$. This is not the case for ``almost everywhere" statements on row-wise \emph{maxima} of triangular arrays, which is relevant for the $p=1$ case.
To be precise, first note that the following ``in probability" statement is classical \cite[p.430]{gnedenko1943distribution}: for any distribution on triangular arrays $\zeta\in\mathcal{P}(\mathbb{A})$ such that the law of the $n$-th row is the $n$-dimensional standard Gaussian measure (as in Definition \ref{zetdef}), and for all $\epsilon > 0$, \begin{equation}\label{zetprob} \lim_{n\rightarrow\infty} \zeta\left( z\in \mathbb{A} : \left|\tfrac{1}{\sqrt{\log n}}\max_{1\le i \le n} z_i^{(n)} - \sqrt{2} \right| > \epsilon \right) = 0. \end{equation} In fact, for any $\sigma$ satisfying \eqref{sigproj}, there exists some $\zeta$ as in Definition \ref{zetdef} such that $\sigma = \zeta \circ \mathcal{R}^{-1}$ for $\mathcal{R}$ as in \eqref{rdef}. Thus, for all $\epsilon > 0$, \begin{equation}\label{sigprob} \lim_{n\rightarrow\infty} \sigma\left( \theta\in \mathbb{S} : \left|\sqrt{\tfrac{n}{\log n}} \max_{1\le i \le n} \theta_i^{(n)} - \sqrt{2} \right| > \epsilon \right) = 0. \end{equation} The scaling in this limit motivates the condition \eqref{thmax}. We now consider whether the ``almost everywhere" version of \eqref{zetprob} is satisfied: \begin{equation}\label{zetae} \zeta\left( z\in \mathbb{A} : \lim_{n\rightarrow\infty} \tfrac{1}{\sqrt{\log n}}\max_{1\le i \le n} z_i^{(n)} = \sqrt{2} \right) \stackrel{?}{=} 1. \end{equation} The equality in \eqref{zetae} holds for \emph{some} $\zeta\in\mathcal{P}(\mathbb{A})$ satisfying Definition \ref{zetdef}, but is not satisfied for others. \begin{enumerate}[label=(\alph*)] \item Suppose $\zeta$ is such that $\zeta\left(z\in\mathbb{A} : z_i^{(n)} = z_i^{(i)} \text{ for all } i\le n,\, n\in {\mathbb N}\right) = 1$. That is, for $\zeta$-a.e.\ $z$, the array is constant within columns. Then, the maximum of the $n$-th row of the array $z$ equals the maximum of the first $n$ terms of the sequence $z_1^{(1)},z_2^{(2)}, \dots$. Under this law, the $\zeta$-a.e.\ convergence in \eqref{zetae} is known to hold \cite[Remark (viii)]{resnick1973almost}.
\item On the other hand, suppose $\zeta$ is such that for a random triangular array $Z\sim \zeta$, the rows of $Z$ are independent (and hence, the elements of $Z$ are i.i.d.\ standard Gaussian random variables). Then, the limit \eqref{zetae} can be shown \emph{not} to hold since the $\zeta$-a.e.\ limit inferior and limit superior differ \cite[p.123]{jiang2005maxima}. In particular, it is possible to show that for $\zeta$-a.e.\ $z$, all of the points in $[\sqrt{2},2]$ are limit points of the sequence $\max_{1\le i \le n}z_i^{(n)} / \sqrt{\log n}$, $n\in {\mathbb N}$. \end{enumerate} Similarly, the ``almost everywhere" analog of \eqref{sigprob} holds for some $\sigma$ satisfying \eqref{sigproj}, but not others. Recall the map $\mathcal{R}$ of \eqref{rdef}, and for $\zeta$ satisfying Definition \ref{zetdef}, let $\sigma = \zeta \circ \mathcal{R}^{-1}$. \begin{enumerate}[label=(\alph*')] \item If $\zeta$ is as in example (a) above, then condition \eqref{thmax} of Theorem \ref{th-qldp1} holds for $\sigma$-a.e.\ $\theta\in \mathbb{S}$, with $c=\sqrt{2}$. \item If $\zeta$ is as in example (b) above, then the proof of Theorem \ref{th-qldp1} (which goes through for subsequences) shows that for $\sigma$-a.e.\ $\theta\in \mathbb{S}$, the sequence $(\sqrt{\log n}/n)\log {\mathbb P}(V_\theta^{(n)} \ge w)$ has all of the points in $[-w/\sqrt{2},-w/2]$ as limit points, and hence, does not converge. \end{enumerate} The upshot of the two preceding examples is that, unlike for the quenched LDP when $p\in(1,\infty)$, it is not possible to state Theorem \ref{th-qldp1} as a result for $\sigma$-a.e.\ $\theta\in \mathbb{S}$ and any $\sigma$ satisfying \eqref{sigproj}. Instead, the large deviation behavior of $(W_\theta^{(n,1)})_{n\in{\mathbb N}}$ depends on the particular sequence $\theta$ of projection directions, via the limit \eqref{thmax}. \end{remark} \section{The relationship between the annealed and quenched LDPs}\label{sec-rel} Fix $p\in (2,\infty)$.
In this section, we prove Theorem \ref{th-compar}, which establishes a connection between the quenched rate function $\mathbb{I}^\sfq_{p,\nu}$ and the annealed rate function $\mathbb{I}^\sfa_p$. Additional analysis of this variational problem is deferred to Sect.\ \ref{sec-analysis}. In Sect.\ \ref{sec-quenched}, we obtained the quenched rate function by establishing an LDP for $R_\theta^{(n,p)}$ of \eqref{rnp2} and then using the fact that $\widetilde{W}^{(n,p)}_\theta = \bar T_p(R_\theta^{(n,p)})$, where $\bar T_p:{\mathbb R}\times{\mathbb R}_+ \rightarrow {\mathbb R}$ is the map defined in \eqref{bartp}. To establish the variational formula \eqref{varform1}, we will find it convenient to use an exactly analogous representation for the annealed case (as opposed to the approach originally adopted in Sect.\ \ref{sec-annealed}). Let $R^{(n,p)}$ be defined similarly to $R_\theta^{(n,p)}$ of \eqref{rnp2}, but with the deterministic $\theta^{(n)}$ replaced by random $\Theta^{(n)}$, \begin{equation}\label{rnpdefn} R^{(n,p)} \doteq \left(\frac{1}{n}\sum_{i=1}^n \sqrt{n}\Theta^{(n)}_iY^{(n,p)}_i, \, \, \frac{1}{n}\sum_{i=1}^n |Y^{(n,p)}_i|^p \right). \end{equation} Then, we have \begin{equation*} \sw^{(n,p)} \stackrel{(d)}{=} \bar T_p(R^{(n,p)}). \end{equation*} We will prove an LDP for $(\bar T_p(R^{(n,p)}))_{n\in{\mathbb N}}$, and use it to obtain an alternate form for the annealed LDP that directly relates the annealed and quenched rate functions. In Sect.\ \ref{ssec-rnp}, we establish an LDP for $(R^{(n,p)})_{n\in {\mathbb N}}$ using certain spherical invariance properties similar to those discussed in Sect.\ \ref{ssec-p2}. Then, in Sect.\ \ref{ssec-empcone}, we recall a large deviation principle for the empirical measure induced by the coordinates of a random point on the scaled $\ell^q$ sphere $n^{1/q}\partial \mathbb{B}_{n,q}$.
Lastly, in Sect.\ \ref{ssec-varadh}, we apply the aforementioned empirical measure LDP in order to obtain variational formulas for the limit log mgfs associated with $R^{(n,p)}$. Here, we will repeatedly make use of the tail bounds obtained in Lemma \ref{lem-subgsn}. \subsection{An LDP for $(R^{(n,p)})_{n\in {\mathbb N}}$ with a convex rate function}\label{ssec-rnp} In this subsection, we prove that $(R^{(n,p)})_{n\in{\mathbb N}}$ satisfies an LDP with some convex good rate function. Although the explicit form of the rate function is irrelevant for our purposes, its convexity is important. We begin with two elementary lemmas involving convex analysis. \begin{lemma}[Theorem 5.3 or comment on p.54 of \cite{rockafellar1970convex}] \label{lem-infconvex} Let $\mathcal{X}$, $\mathcal{Y}$ be real vector spaces. Let $D_F\subset \mathcal{X}\times \mathcal{Y}$ be a convex set, and suppose $F:D_F\rightarrow {\mathbb R}$ is a convex function. Let \begin{equation*} \tilde{F}(x) \doteq \inf_{y \in \mathcal{Y}: (x,y)\in D_F} F(x,y). \end{equation*} Then, $\tilde{F}$ is a convex function. \end{lemma} \begin{lemma}\label{lem-jointcon} The map \begin{equation*} {\mathbb R}^2 \ni (x,y) \mapsto J_2(\tfrac{x}{y^{1/2}}) = -\tfrac{1}{2}\log(1-\tfrac{x^2}{y}) \in {\mathbb R} \end{equation*} is convex on its domain $\{(x,y) \in {\mathbb R}^2 : y > x^2\}$. \end{lemma} \begin{proof} Let $f(x,y) \doteq -\tfrac{1}{2}\log(1-\tfrac{x^2}{y})$. Its first partial derivatives are $\partial_x f = \tfrac{x}{y-x^2}$ and $\partial_y f = -\tfrac{x^2}{2y(y-x^2)}$, and differentiating once more yields the Hessian matrix \begin{align*} (Hf)(x,y) &= \frac{1}{(y-x^2)^2}\begin{pmatrix} y+x^2 & -x \\ -x & \frac{1}{2y^2}x^2(2y-x^2) \end{pmatrix}. \end{align*} Note that for $(x,y)$ such that $y > x^2$, \begin{equation*} \det (Hf) = \frac{1}{(y-x^2)^4}\frac{x^4}{2y^2} \left( y- x^2\right) \ge 0, \end{equation*} and also \begin{equation*} \frac{y+x^2}{(y-x^2)^2} > 0. \end{equation*} Thus, by Sylvester's criterion, $Hf$ is positive definite whenever $x \ne 0$; when $x = 0$, the off-diagonal entries and $\det(Hf)$ vanish, and $Hf$ is positive semidefinite. Since $Hf$ is positive semidefinite throughout the convex domain $\{(x,y) : y > x^2\}$, $f$ is convex. 
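As a quick numerical cross-check of this convexity (an aside, not part of the proof; the helper name \texttt{f} below is ours), one can test the midpoint inequality $f(\tfrac{P+Q}{2}) \le \tfrac{1}{2}(f(P)+f(Q))$ at randomly drawn points of the convex domain $\{y > x^2\}$:

```python
import math
import random

def f(x, y):
    # f(x, y) = -0.5 * log(1 - x^2 / y), defined on the convex domain {y > x^2}
    return -0.5 * math.log(1.0 - x * x / y)

random.seed(0)
for _ in range(10_000):
    # draw two points of the domain; their midpoint stays in the domain,
    # since {y > x^2} is convex
    x1 = random.uniform(-3, 3); y1 = x1 * x1 + random.uniform(1e-3, 5.0)
    x2 = random.uniform(-3, 3); y2 = x2 * x2 + random.uniform(1e-3, 5.0)
    mid = f(0.5 * (x1 + x2), 0.5 * (y1 + y2))
    # midpoint convexity, with a small floating-point tolerance
    assert mid <= 0.5 * (f(x1, y1) + f(x2, y2)) + 1e-9
print("midpoint convexity holds at 10000 random point pairs")
```

A midpoint check of this kind does not replace the Hessian argument, but it catches sign errors in such computations cheaply.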
\end{proof} Next, we exploit the spherical symmetry of $\Theta^{(n)}$ in the following lemma, as we did previously in Sect.\ \ref{ssec-p2}; this will then allow us to prove the desired LDP. \begin{lemma}\label{lem-spherindep} Fix $n\in {\mathbb N}$, and let $X^{(n)}=(X_1,\dots, X_n)$ be a random vector in ${\mathbb R}^n$ independent of $\Theta^{(n)}$, which is uniformly distributed on $\mathbb{S}^{n-1}$. Then, \begin{equation*} \left\langle \sqrt{n}\Theta^{(n)}, \tfrac{X^{(n)}}{\|X^{(n)}\|_{n,2}}\right\rangle_n \stackrel{(d)}{=} \left\langle \sqrt{n}\Theta^{(n)}, e_1^{(n)}\right\rangle_n. \end{equation*} Moreover, $\langle \sqrt{n}\Theta^{(n)}, X^{(n)}/\|X^{(n)}\|_{n,2}\rangle_n$ is independent of $X^{(n)}$. \end{lemma} \begin{proof} Due to the spherical symmetry of $\sqrt{n}\Theta^{(n)}$ and since $X^{(n)}/\|X^{(n)}\|_{n,2} \in \mathbb{S}^{n-1}$, \begin{equation}\label{margeq} \left\langle \sqrt{n}\Theta^{(n)}, \tfrac{X^{(n)}}{\|X^{(n)}\|_{n,2}} \right\rangle_n \stackrel{(d)}{=} \langle \sqrt{n}\Theta^{(n)}, x\rangle_n \stackrel{(d)}{=} \langle \sqrt{n}\Theta^{(n)}, e_1^{(n)}\rangle_n, \end{equation} for any $x\in \mathbb{S}^{n-1}$. It remains to show independence. Let $\pi(\cdot,\cdot)$ denote the joint distribution of $\left( \tfrac{X^{(n)}}{\|X^{(n)}\|_{n,2}}, X^{(n)}\right)$, with first and second marginals $\pi_1$ and $\pi_2$, respectively. 
For $A \in \mathcal{B}({\mathbb R})$ and $B \in \mathcal{B}({\mathbb R}^n)$, \begin{align*} {\mathbb P}\left( \left\langle \sqrt{n}\Theta^{(n)}, \tfrac{X^{(n)}}{\|X^{(n)}\|_{n,2}}\right \rangle_n \in A, X^{(n)} \in B\right ) &= \int_{\mathbb{S}^{n-1} \times {\mathbb R}^n} {\mathbb P}( \langle \sqrt{n}\Theta^{(n)}, x_1\rangle_n \in A) {\mathbbm 1}_{\{x_2\in B\}} \, \pi(dx_1,dx_2)\\ &= \int_{\mathbb{S}^{n-1} \times B} {\mathbb P}( \langle \sqrt{n}\Theta^{(n)}, e_1^{(n)}\rangle_n \in A) \, \pi(dx_1,dx_2)\\ &= {\mathbb P}( \langle \sqrt{n}\Theta^{(n)}, e_1^{(n)}\rangle_n \in A)\, {\mathbb P}(X^{(n)} \in B)\\ &={\mathbb P}\left( \left\langle \sqrt{n}\Theta^{(n)}, \tfrac{X^{(n)}}{\|X^{(n)}\|_{n,2}}\right \rangle_n \in A\right) \,{\mathbb P}(X^{(n)} \in B), \end{align*} where the second and last equalities follow from \eqref{margeq}. \end{proof} \begin{proposition}\label{prop-altldpconvex} Let $p\in(2,\infty)$. Then, the sequence $(R^{(n,p)})_{n\in {\mathbb N}}$ defined by \eqref{rnpdefn} satisfies an LDP with a convex good rate function. \end{proposition} \begin{proof} Due to the distributional identity and independence given by Lemma \ref{lem-spherindep}, \begin{align} R^{(n,p)} &= \left(\frac{1}{n}\sum_{i=1}^n \sqrt{n}\Theta^{(n)}_i \frac{Y^{(n,p)}_i}{\|Y^{(n,p)}\|_{n,2}} \|Y^{(n,p)}\|_{n,2}, \frac{1}{n}\sum_{i=1}^n |Y^{(n,p)}_i|^p \right)\notag\\ &= \left(\frac{1}{n}\left\langle \sqrt{n}\,\Theta^{(n)}, \tfrac{Y^{(n,p)}}{\|Y^{(n,p)}\|_{n,2}} \right\rangle_n \|Y^{(n,p)}\|_{n,2}, \frac{1}{n}\sum_{i=1}^n |Y^{(n,p)}_i|^p \right)\notag\\ &\stackrel{(d)}{=} \left(\frac{1}{n}\sqrt{n}\,\Theta^{(n)}_1 \|Y^{(n,p)}\|_{n,2}, \frac{1}{n}\sum_{i=1}^n |Y^{(n,p)}_i|^p \right).\label{rnprep} \end{align} Define the following ${\mathbb R}^3$-valued sequence of random variables, \begin{equation*} Q^{(n,p)} \doteq \left(\Theta^{(n)}_1, \frac{1}{n}\sum_{i=1}^n |Y^{(n,p)}_i|^2, \frac{1}{n}\sum_{i=1}^n |Y^{(n,p)}_i|^p \right), \quad n \in {\mathbb N}. 
\end{equation*} By Cram\'er's theorem in $\mathbb{R}^2$, the sequence $Q_{2,3}^{(n,p)} \doteq \left( \frac{1}{n}\sum_{i=1}^n |Y^{(n,p)}_i|^2, \frac{1}{n}\sum_{i=1}^n |Y^{(n,p)}_i|^p \right)$, $n\in{\mathbb N}$, satisfies an LDP with some convex good rate function, call it $\widehat{J}_p$, whose domain satisfies $D_{\widehat{J}_p} \subset \mathbb{R}_+^2$. As obtained in \cite{BarGamLozRou10} and described in Sect.\ \ref{ssec-p2}, $(\Theta^{(n)}_1)_{n\in {\mathbb N}}$ satisfies an LDP with the convex good rate function $J_2(a) = -\frac{1}{2}\log(1-a^2)$ for $|a| <1$ (and $+\infty$ elsewhere). Since $\Theta^{(n)}$ and $Y^{(n,p)}$ are independent, the sequence $(Q^{(n,p)})_{n\in {\mathbb N}}$ satisfies an LDP with the convex good rate function \begin{equation*} J_{Q,p}(a,b,c) \doteq J_2(a) + \widehat{J}_p(b,c)\,, \quad a,b,c\in {\mathbb R} . \end{equation*} By \eqref{rnprep} and the contraction principle, $(R^{(n,p)})_{n\in {\mathbb N}}$ satisfies an LDP with the good rate function $J_{R,p}$ defined as follows: for $x\in {\mathbb R}$ and $z \ge 0$, \begin{align*} J_{R,p}(x,z) &\doteq \inf \left\{ J_{Q,p}(a,b,c) : |a| <1, b \ge 0, c \ge 0, x= ab^{1/2} , z = c \right\}\\ &= \inf_{y:y>x^2 \ge 0} J_{Q,p}(\tfrac{x}{y^{1/2}},y,z). \end{align*} We now show that $J_{R,p}$ is convex. By Lemma \ref{lem-infconvex}, it suffices to prove that \begin{equation*} (x,y,z) \mapsto J_{Q,p}(\tfrac{x}{y^{1/2}}, y,z) = -\tfrac{1}{2}\log(1-\tfrac{x^2}{y}) + \widehat{J}_p(y,z) \,, \quad 0 \le x^2 < y, \end{equation*} is (jointly) convex, which follows from Lemma \ref{lem-jointcon}, the convexity of $\widehat{J}_p$, and the fact that the sum of two convex functions is convex. \end{proof} \subsection{LDP for the empirical measure under the cone measure on $n^{1/q}\partial\mathbb{B}_{n,q}$}\label{ssec-empcone} The connection between the annealed and quenched LDPs will make critical use of a particular LDP for the following empirical measures. 
Let $L_{n,\Theta}$ denote the empirical measure of $\sqrt{n}\Theta^{(n)}$, \begin{equation}\label{empir} L_{n,\Theta} \doteq \frac{1}{n}\sum_{i=1}^n\delta_{\sqrt{n}\Theta^{(n)}_i}. \nom[lnalpha]{$L_{n,\Theta}$}{empirical measure of $\sqrt{n}\Theta^{(n)}$} \end{equation} In Proposition \ref{prop-sanovcone} below, we state a Sanov-type LDP for this sequence of empirical measures, with the rate function $\mathbb{H}:\mathcal{P}(\mathbb{R})\rightarrow[0,\infty]$ defined to be a perturbed version of relative entropy: for $\nu\in\mathcal{P}({\mathbb R})$, let \begin{equation} \label{hpdefn} \mathbb{H}(\nu) \doteq \left\{ \begin{array}{ll} H(\nu | \mu_2) + \tfrac{1}{2} (1 - m_2(\nu)) & \textnormal{ if } m_2(\nu) \le 1, \\ +\infty & \textnormal{ else,} \end{array}\right. \nom[hq]{$\mathbb{H}$}{rate function for $(L_{n,\Theta})_{n\in {\mathbb N}}$} \end{equation} where $m_2(\nu)$ denotes the second moment of $\nu$. \begin{proposition}\label{prop-sanovcone} Let $r<2$. Then, the sequence of empirical measures $(L_{n,\Theta})_{n\in {\mathbb N}}$ satisfies an LDP in $\mathcal{P}_r({\mathbb R})$ (equipped with the Wasserstein-$r$ topology) with the strictly convex good rate function $\mathbb{H}$ of \eqref{hpdefn}. \end{proposition} This LDP can be found in \cite[Theorem 6.6]{arous2001aging} with respect to the weak topology. A strengthening to the Wasserstein topology (and in fact, a mild extension to the surface measure on $\ell^q$ spheres for $q\in[1,\infty]$ other than $q=2$) can be found in \cite[Theorem 1.4]{kr1}. \subsection{Application of Varadhan's integral formula}\label{ssec-varadh} In this section, in order to obtain an expression for the rate function, we will apply Varadhan's integral formula together with Legendre duality. In view of this, we introduce the limit log mgf $\widetilde\Phi_p : {\mathbb R}^2 \rightarrow {\mathbb R}$. 
For $t_2 \ge \frac{1}{p}$, let $\widetilde\Phi_p(t_1,t_2) \doteq +\infty$, and for $t_1\in {\mathbb R}$, $t_2 < \frac{1}{p}$, let \begin{equation} \label{pressfunc} \widetilde\Phi_{p}(t_1,t_2) \doteq \lim_{n\rightarrow\infty} \frac{1}{n} \log \mathbb{E} \left[\exp\left( \sum_{i=1}^n \left( t_1 \sqrt{n}\Theta^{(n)}_i Y^{(n,p)}_i + t_2 |Y^{(n,p)}_i|^p\right) \right)\right], \end{equation} where $\sqrt{n}\Theta^{(n)}$ is distributed according to the cone measure on $n^{1/2}\partial \mathbb{B}_{n,2}$ (i.e., the rotationally invariant probability measure on $n^{1/2} \mathbb{S}^{n-1}$). Before applying Varadhan's lemma, we introduce the following technical lemma. \nom[phibar]{$\widetilde\Phi_p$}{limit log mgf} \begin{lemma}[Theorem 2.11(2) of \cite{BarGamLozRou10}]\label{lem-subindep} For all $n\in {\mathbb N}$, the collection of random variables $(|\Theta^{(n)}_1|,\dots, |\Theta^{(n)}_n|)$ is sub-independent. That is, for non-negative, non-decreasing functions $g_1,\dots,g_n$ on ${\mathbb R}_+$, \begin{equation*} {\mathbb E}\left[\prod_{i=1}^n g_i(|\Theta^{(n)}_i|)\right] \le \prod_{i=1}^n {\mathbb E}\left[g_i(|\Theta^{(n)}_i|)\right]. \end{equation*} \end{lemma} Using the preceding technical result, the following lemma establishes the connection between the limit log mgfs of \eqref{pressfunc} and \eqref{pressfunc2} and the entropy-like rate function $\mathbb{H}$ of \eqref{hpdefn}. \begin{lemma} \label{lem-lmgfvar} Let $p\in (2,\infty)$. Then, \begin{equation} \label{mgfvar} \widetilde\Phi_{p}(t_1,t_2) = \sup_{\nu \in \mathcal{P}(\mathbb{R})} \left\{ \Psi_{p,\nu}(t_1,t_2) - \mathbb{H}(\nu)\right\}, \quad t_1,t_2\in {\mathbb R}. \end{equation} \end{lemma} \begin{proof} The equality in \eqref{mgfvar} is clear for $t_2 \ge \frac{1}{p}$, since then both sides of \eqref{mgfvar} equal $+\infty$. Thus, fix $t_1\in {\mathbb R}$, $t_2 < \frac{1}{p}$. 
Conditioning on $\Theta$, and using the assumed independence of $\Theta$ and $\sy^{(p)}$, as well as the definition of $\lm_p$ from \eqref{lampdefn0}, the right-hand side of \eqref{pressfunc} can be rewritten as \begin{align*} \widetilde\Phi_{p}(t_1,t_2) &= \lim_{n\rightarrow\infty} \frac{1}{n} \log \mathbb{E} \left[\prod_{i=1}^n\mathbb{E}\left[\left. \exp\left( t_1 \sqrt{n}\Theta^{(n)}_i Y^{(n,p)}_i + t_2 |Y^{(n,p)}_i|^p \right) \right| \sqrt{n}\Theta^{(n)} \right]\right] \\ &= \lim_{n\rightarrow\infty} \frac{1}{n} \log \mathbb{E}\left[\exp\left( \sum_{i=1}^n \lm_p(t_1\sqrt{n}\Theta^{(n)}_i, t_2) \right)\right]\\ &= \lim_{n\rightarrow\infty} \frac{1}{n} \log {\mathbb E}\left[ \exp(n\psi_{p,t_1,t_2}(L_{n,\Theta}))\right], \end{align*} where $\psi_{p,t_1,t_2}:\mathcal{P}({\mathbb R}) \rightarrow {\mathbb R}$ is defined as \begin{equation*} \psi_{p,t_1,t_2} (\nu) \doteq \int\lm_p(t_1 a,t_2)\nu(da) = \Psi_{p,\nu}(t_1,t_2), \quad \nu\in\mathcal{P}({\mathbb R}). \end{equation*} Recall from Proposition \ref{prop-sanovcone} that for all $r <2$, the sequence $(L_{n,\Theta})_{n\in {\mathbb N}}$ satisfies an LDP in $\mathcal{P}_r({\mathbb R})$ equipped with the Wasserstein-$r$ topology, with the good rate function $\mathbb{H}$. Thus, the variational formula \eqref{mgfvar} would follow from Varadhan's integral formula \cite[Theorem 4.3.1]{DemZeibook} if we can show that the following hypotheses hold: \begin{enumerate}[label=(\alph*)] \item for some $r<2$, $\psi_{p,t_1,t_2}$ is continuous with respect to the Wasserstein-$r$ topology; \item for some $\kappa >1$, $\psi_{p,t_1,t_2}$ satisfies the exponential moment condition \begin{equation}\label{expmomcond} \limsup_{n\rightarrow\infty} \frac{1}{n} \log {\mathbb E}\left[ e^{\kappa n \psi_{p,t_1,t_2} (L_{n,\Theta})} \right] < \infty. \end{equation} \end{enumerate} We first check condition (a). 
The continuity of $\psi_{p,t_1,t_2}$ with respect to the Wasserstein-$\tfrac{p}{p-1}$ topology follows from Lemma \ref{lem-tailcont}. Condition (a) follows since for $p>2$, we have $\frac{p}{p-1} < 2$. We now establish a strong version of condition (b), showing that the exponential moment is finite for any $\kappa >1$. Let $\kappa > 1$. Because $\mu_p$ is symmetric, $\lm_p$ of \eqref{lampdefn0} is symmetric in its first argument, so $\lm_p(t_1a,t_2)$ depends on $a$ only through $|a|$. Moreover, for fixed $t_1\in {\mathbb R}$ and $t_2 < \frac{1}{p}$, the mapping $|a| \mapsto \lm_p(t_1|a|,t_2)$ is non-decreasing, as can be seen from the expression for $\lm_p$ given in Lemma \ref{lem-tailcont}; in particular, $a \mapsto \exp(\kappa \lm_p(t_1 a, t_2))$ is a non-negative, non-decreasing function of $|a|$. Thus, the sub-independence property of Lemma \ref{lem-subindep} and the definition \eqref{tpdef} of $\mathcal{T}_p$ imply that for a constant $C_{p,t_1,t_2}$ not depending on $n,\kappa, \Theta^{(n)}$, \begin{align} {\mathbb E}\left[\exp\left(\kappa n \psi_{p,t_1,t_2}(L_{n,\Theta}) \right)\right] &\le \prod_{i=1}^n {\mathbb E}\left[ \exp(\kappa \lm_p(t_1\sqrt{n}\Theta^{(n)}_i, t_2) )\right] \notag\\ &\le \prod_{i=1}^n {\mathbb E}\left[ \exp( \kappa C_{p,t_1,t_2} + \kappa C_{p,t_1,t_2} |\sqrt{n}\Theta^{(n)}_i|^{p/(p-1)} )\right] \notag\\ &= \exp(n\kappa C_{p,t_1,t_2}) \, {\mathbb E}\left[ \exp( \kappa C_{p,t_1,t_2} |\sqrt{n}\Theta^{(n)}_1|^{p/(p-1)} )\right]^n.\label{expcond} \end{align} Let $(Z_1,Z_2,\dots)$ be a sequence of i.i.d.\ standard Gaussian random variables, and note that due to Lemma \ref{lem-jointrep} and the strong law of large numbers, \begin{equation*} \sqrt{n}\Theta^{(n)}_1 \stackrel{(d)}{=} \frac{\sqrt{n} Z_1}{\|Z^{(n)}\|_{n,2}}\xrightarrow[\textnormal{${\mathbb P}$-a.s.}]{ n\rightarrow\infty } Z_1. 
\end{equation*} Taking logarithms, dividing by $n$, and taking limits in \eqref{expcond} shows that for $r=\frac{p}{p-1}$, \begin{align*} \limsup_{n\rightarrow\infty} \frac{1}{n} \log {\mathbb E}\left[\exp\left(\kappa n \psi_{p,t_1,t_2} (L_{n,\Theta}) \right)\right] &\le \kappa C_{p,t_1,t_2} + \limsup_{n\rightarrow\infty} \log {\mathbb E}\left[ \exp( \kappa C_{p,t_1,t_2} |\sqrt{n}\Theta^{(n)}_1|^{r} )\right]\\ &\le \kappa C_{p,t_1,t_2}+ \log {\mathbb E}\left[ \exp( \kappa C_{p,t_1,t_2} |Z_1|^{r} )\right] < \infty, \end{align*} where the interchange of the limit and the expectation in the second inequality is justified by the tail bounds of Lemma \ref{lem-subgsn}, which hold uniformly in $n$, and the final expression is finite since $r = \frac{p}{p-1} < 2$ for $p > 2$. \end{proof} In the following two lemmas, we establish some properties of the minimizers of the variational problem of Lemma \ref{lem-lmgfvar}. We later massage these results to obtain the variational formula of Theorem \ref{th-compar}. \begin{lemma}[{\cite[Lemma 2.4]{kr1}}]\label{lem-compact} Let $K_2 \doteq \{ \nu \in \mathcal{P}(\mathbb{R}) : m_2(\nu) \le 1 \}$. The set $K_2$ is convex, non-empty, and compact with respect to the Wasserstein-$r$ topology for all $r <2$. \end{lemma} \begin{lemma}\label{lem-optprops} Let $p\in (2,\infty)$ and for fixed $(t_1,t_2)\in{\mathbb R}^2$, let $\phi:\mathcal{P}({\mathbb R})\rightarrow{\mathbb R}$ denote the functional being maximized in \eqref{mgfvar}, \begin{equation*} \phi(\nu) \doteq \Psi_{p,\nu}(t_1,t_2) - \mathbb{H}(\nu). \end{equation*} Then, $\phi$ is strictly concave and upper semi-continuous (with respect to the Wasserstein-$\tfrac{p}{p-1}$ topology on $\mathcal{P}_{p/(p-1)}({\mathbb R})$). As a consequence, the supremum in \eqref{mgfvar} is uniquely attained at some optimal $\nu^\circ$ such that $m_2(\nu^\circ) \le 1$. 
\end{lemma} \begin{proof} From the definition \eqref{hpdefn}, it follows that the domain of $\mathbb{H}$ is contained in the compact set $K_2$ of Lemma \ref{lem-compact}, so we may restrict the supremum in the variational problem \eqref{mgfvar} to $K_2\subset\mathcal{P}({\mathbb R})$. For $\nu \in K_2$, we see that $\phi$ is the sum of a linear functional $\nu\mapsto \Psi_{p,\nu}(t_1,t_2)$, and the negative of the strictly convex rate function $\mathbb{H}$ of \eqref{hpdefn}. As for upper semi-continuity, first note that $\nu\mapsto \Psi_{p,\nu}(t_1,t_2)$ is continuous due to Lemma \ref{lem-tailcont}. Since $\frac{p}{p-1} < 2$ for $p>2$, it follows from Proposition \ref{prop-sanovcone} that $-\mathbb{H}$ is upper semi-continuous with respect to Wasserstein-$\tfrac{p}{p-1}$. This shows that $\phi$ is strictly concave and upper semi-continuous on the compact convex non-empty set $K_2$, so the supremum of $\phi$ is uniquely attained on $K_2$. \end{proof} \begin{theorem}[Minimax Theorem, see Corollary 3.3 of \cite{sion1958general}]\label{th-minmax} Let $\mathcal{X},\mathcal{Y}$ be topological vector spaces. Suppose $C\subset \mathcal{X}$ is a compact convex nonempty subset, and $D\subset \mathcal{Y}$ is a convex subset. Let $F:\mathcal{X}\times \mathcal{Y} \rightarrow \mathbb{R}$ be a function such that: \begin{itemize} \item for all $y\in D$, $F(\cdot, y)$ is lower semi-continuous and convex on $C$; \item for all $x\in C$, $F(x,\cdot)$ is upper semi-continuous and concave on $D$. \end{itemize} Then, \begin{equation*} \inf_{x\in C} \sup_{y\in D} F(x,y) = \sup_{y\in D} \inf_{x\in C} F(x,y). \end{equation*} \end{theorem} \begin{lemma} \label{lem-varforbarstar} Let $p\in (2,\infty)$. Then, for $\tau_1,\tau_2\in\mathbb{R}$, \begin{equation}\label{mimap} \widetilde\Phi_{p}^*(\tau_1,\tau_2) = \inf_{\nu \in \mathcal{P}(\mathbb{R})} \left\{ \Psi_{p,\nu}^*(\tau_1,\tau_2) + \mathbb{H}(\nu)\right\}. 
\end{equation} \end{lemma} \begin{proof} We apply the Minimax Theorem \ref{th-minmax} to the following: \begin{itemize} \item $\mathcal{X} = \mathcal{M}(\mathbb{R})$, the space of finite signed measures on $\mathbb{R}$, equipped with the Wasserstein-$\tfrac{p}{p-1}$ topology; \item $\mathcal{Y} = \mathbb{R}^2$; \item $C = K_2 = \{ \nu \in \mathcal{X} : \nu \in \mathcal{P}(\mathbb{R}), m_2(\nu) \le 1\}$; \item $D = \mathbb{R} \times (-\infty,\tfrac{1}{p})$. \item Fix $\tau_1,\tau_2$. For $\nu \in C$ and $(t_1,t_2) \in D$, let \begin{equation} \label{minmaxf} F(\nu, (t_1,t_2)) \doteq t_1\tau_1 + t_2\tau_2 - \Psi_{p,\nu}(t_1,t_2) + \mathbb{H}(\nu), \end{equation} where $\Psi_{p,\nu}$ and $\mathbb{H}$ are defined as in \eqref{lampdefn} and \eqref{hpdefn}, respectively, for $\nu\in\mathcal{P}({\mathbb R})$, and set equal to $+\infty$ for $\nu \in \mathcal{M}({\mathbb R}) \setminus \mathcal{P}({\mathbb R})$. \end{itemize} It is clear that $\mathcal{X},\mathcal{Y}, D$ satisfy the hypotheses of the minimax theorem. The hypotheses for $C=K_2$ follow from Lemma \ref{lem-compact}, since $\frac{p}{p-1}< 2$. To verify the desired properties of $F$, we first fix $(t_1,t_2) \in D$. Then, the lower semi-continuity and convexity of $F(\,\cdot\,,(t_1,t_2))$ follow from Lemma \ref{lem-optprops}. Next, fix $\nu \in C$. Lower semi-continuity of $\Psi_{p,\nu}$ follows from Lemma \ref{lem-quenchedess}. As for convexity, Lemma \ref{lem-lamfacts} says that $\lm_p$ is convex on $D$, and hence, by linearity of expectation, $\Psi_{p,\nu}$ is convex on $D$. Since $(t_1,t_2) \mapsto t_1\tau_1 + t_2\tau_2$ is continuous and linear, it follows that $F(\nu, \cdot)$ is upper semi-continuous and concave on $D$. Lastly, substitute the representation obtained in Lemma \ref{lem-lmgfvar} into the expression for the Legendre transform $\widetilde\Phi_{p}^*$, and then apply Theorem \ref{th-minmax} to $F$ as defined in \eqref{minmaxf}. 
\begin{align*} \widetilde\Phi_{p}^*(\tau_1,\tau_2) &= \sup_{t_1\in {\mathbb R}, t_2\in {\mathbb R}} \left\{t_1\tau_1 + t_2\tau_2 - \widetilde\Phi_{p}(t_1,t_2) \right\}\\ &= \sup_{t_1 \in {\mathbb R}, t_2 \in {\mathbb R}} \left\{t_1\tau_1 + t_2\tau_2 - \sup_{\nu \in \mathcal{P}(\mathbb{R})} \left\{ \Psi_{p,\nu}(t_1,t_2) - \mathbb{H}(\nu)\right\} \right\}\\ &= \sup_{(t_1,t_2) \in D} \inf_{\nu \in C} \left\{t_1\tau_1 + t_2\tau_2 - \Psi_{p,\nu}(t_1,t_2) + \mathbb{H}(\nu)\right\} \\ &= \inf_{\nu \in C} \sup_{ (t_1,t_2) \in D} \left\{t_1\tau_1 + t_2\tau_2 - \Psi_{p,\nu}(t_1,t_2) + \mathbb{H}(\nu)\right\} \\ &= \inf_{\nu \in \mathcal{P}(\mathbb{R})} \left\{ \Psi_{p,\nu}^*(\tau_1,\tau_2) + \mathbb{H}(\nu)\right\}, \end{align*} where the third and fifth equalities hold since $\Psi_{p,\nu}(t_1,t_2) = +\infty$ for $t_2 \ge \frac{1}{p}$, and $\mathbb{H}(\nu) = +\infty$ if either $\nu \in \mathcal{M}({\mathbb R}) \setminus \mathcal{P}({\mathbb R})$ or $m_2(\nu) > 1$. \end{proof} \begin{lemma}\label{lem-var2} The variational formula \eqref{varform1} of Theorem \ref{th-compar} holds for $p=2$, and the infimum is attained at $\nu=\mu_2$. \end{lemma} \begin{proof} For $p=2$, it follows from elementary calculations that for $w\in {\mathbb R}$ such that $w^2 \ge m_2(\nu)$, we have $\mathbb{I}^\sfq_{2,\nu}(w) = +\infty$, and for $w^2 < m_2(\nu)$, \begin{align*} \mathbb{I}^\sfq_{2,\nu}(w) &= -\tfrac{1}{2}\log(1-\tfrac{w^2}{m_2(\nu)}). \end{align*} It is clear that for all $w\in {\mathbb R}$, $\mathbb{I}^\sfq_{2,\nu}(w)$ is non-increasing in $m_2(\nu)\in(w^2,\infty]$. Observe from \eqref{hpdefn} that $\mathbb{H}(\nu)\ge 0$, with equality if and only if $\nu = \mu_2$. 
Hence, for $w\in {\mathbb R}$, \begin{equation*} \mathbb{I}^\sfq_{2,\mu_2}(w) = \mathbb{I}^\sfq_{2,\mu_2}(w) + \mathbb{H}(\mu_2) \ge \inf_{\substack{\nu\in \mathcal{P}({\mathbb R}):\\ m_2(\nu)\le 1}} \left\{ \mathbb{I}^\sfq_{2,\nu}(w) + \mathbb{H}(\nu) \right\} \ge \inf_{\substack{\nu\in \mathcal{P}({\mathbb R}):\\ m_2(\nu)\le 1}}\mathbb{I}^\sfq_{2,\nu}(w) = \mathbb{I}^\sfq_{2,\mu_2}(w). \end{equation*} Thus, $\mu_2$ minimizes the variational formula \eqref{varform1}. \end{proof} \begin{proof}[Proof of Theorem \ref{th-compar}] For $p=2$, the theorem follows from Lemma \ref{lem-var2}. As for $p\in(2,\infty)$, consider the quantity $R^{(n,p)}$ of \eqref{rnpdefn}. By Proposition \ref{prop-altldpconvex}, the sequence $(R^{(n,p)})_{n\in{\mathbb N}}$ satisfies an LDP with a convex good rate function, which we denote here by $J_{R,p}$. Note that $\widetilde\Phi_p$ of \eqref{pressfunc} satisfies \begin{equation*} \widetilde\Phi_p(t_1,t_2) = \lim_{n\rightarrow\infty}\frac{1}{n}\log{\mathbb E}\left[\exp\left(n\,\langle (t_1,t_2), R^{(n,p)}\rangle\right)\right], \quad t_1\in {\mathbb R}, t_2 < \tfrac{1}{p}. \end{equation*} For $t_1\in {\mathbb R}$, $t_2<\frac{1}{p}$, there exist $\epsilon_1,\epsilon_2 > 0$ such that $\widetilde\Phi_p(t_1(1+\epsilon_1),t_2(1+\epsilon_2)) < \infty$. Therefore, we can apply Varadhan's lemma --- see, e.g., Theorem 4.3.1 and condition (4.3.3) of \cite{DemZeibook} --- to obtain \begin{equation}\label{varadh1} \widetilde\Phi_p(t_1,t_2) = \sup_{\tau_1,\tau_2} \{t_1\tau_1+t_2\tau_2 - J_{R,p}(\tau_1,\tau_2)\}, \quad t_1\in {\mathbb R}, t_2 < \tfrac{1}{p}. \end{equation} We claim that the equality \eqref{varadh1} in fact holds for \emph{all} $t_1,t_2\in {\mathbb R}$. Since $\widetilde\Phi_p(t_1,t_2) = +\infty$ for $t_2 \ge \frac{1}{p}$ by definition, it remains to show that the right-hand side is also infinite for $t_2 \ge \frac{1}{p}$. 
Due to Cram\'er's theorem, the sequence $(R_2^{(n,p)})_{n\in{\mathbb N}}$, defined by \begin{equation*} R_2^{(n,p)} \doteq \frac{1}{n}\sum_{i=1}^n |Y^{(n,p)}_i|^p, \end{equation*} satisfies an LDP with the good rate function $\hat\lm_p^*$, where \begin{align*} \hat\lm_p^*(\tau_2) &\doteq \sup_{t_2\in {\mathbb R}} \{t_2\tau_2 - \hat\lm_p(t_2)\}, \quad \tau_2\in {\mathbb R};\\ \hat\lm_p(t_2) &\doteq \log\int_{\mathbb R} e^{t_2|y|^p}\mu_p(dy), \quad t_2 \in {\mathbb R}. \end{align*} Due to the contraction principle and the continuity of the projection map $(\tau_1,\tau_2)\mapsto \tau_2$, we have \begin{equation*} \hat\lm_p^*(\tau_2) = \inf_{\tau_1} J_{R,p}(\tau_1,\tau_2). \end{equation*} Note that for every $\tau_2$, this infimum is attained at $\tau_1^* = 0$: the first coordinate of $R^{(n,p)}$ is symmetric in distribution (since $\Theta^{(n)}$ is), so $J_{R,p}(\cdot,\tau_2)$ is even; it is also convex, by Proposition \ref{prop-altldpconvex}, and an even convex function is minimized at the origin. Then, we find that for all $t_1,t_2\in {\mathbb R}$, \begin{align*} \sup_{\tau_1,\tau_2}\{t_1\tau_1 + t_2\tau_2 -J_{R,p}(\tau_1,\tau_2)\} &\ge \sup_{\tau_2}\{t_2\tau_2 - J_{R,p}(0,\tau_2) \} \\ &= \sup_{\tau_2}\{t_2\tau_2 - \hat \lm_p^*(\tau_2)\}\\ &= \hat\lm_p(t_2), \end{align*} where the last equality holds because $\hat\lm_p$ is convex and lower semi-continuous, and hence equal to its biconjugate. But from the definition of $\hat\lm_p$, it is clear that $\hat\lm_p(t_2) = \infty$ for $t_2 \ge \frac{1}{p}$. Thus, \eqref{varadh1} is true for \emph{all} $t_1,t_2\in {\mathbb R}$, so due to the convexity of $J_{R,p}$, Legendre duality (see, e.g., \cite[Lemma 4.5.8]{DemZeibook}) implies that $J_{R,p} = \widetilde\Phi_p^*$. 
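As an aside (not needed for the argument), the blow-up of $\hat\lm_p$ at $t_2 = \tfrac{1}{p}$ can be checked concretely: for the density $C_p e^{-|y|^p/p}$ of $\mu_p$, a direct substitution gives $\hat\lm_p(t_2) = -\tfrac{1}{p}\log(1-pt_2)$ for $t_2 < \tfrac{1}{p}$, which diverges as $t_2 \uparrow \tfrac{1}{p}$. The following sketch verifies this closed form by quadrature for $p=3$; the function name \texttt{lam\_hat} and the grid parameters are ours, and $C_p = (2p^{1/p}\Gamma(1+1/p))^{-1}$ is the normalization constant of $\mu_p$.

```python
import math

def lam_hat(p, t2, grid=100_000, ymax=20.0):
    # Compute hat-lambda_p(t2) = log of \int exp(t2 |y|^p) mu_p(dy) by
    # trapezoidal quadrature, where mu_p(dy) = C_p * exp(-|y|^p / p) dy.
    c_p = 1.0 / (2.0 * p ** (1.0 / p) * math.gamma(1.0 + 1.0 / p))
    h = 2.0 * ymax / grid
    total = 0.0
    for i in range(grid + 1):
        y = -ymax + i * h
        w = 0.5 if i in (0, grid) else 1.0  # trapezoid endpoint weights
        total += w * c_p * math.exp((t2 - 1.0 / p) * abs(y) ** p)
    return math.log(total * h)

p = 3.0
for t2 in (-1.0, 0.0, 0.2, 0.3):  # all strictly below 1/p = 1/3
    closed_form = -math.log(1.0 - p * t2) / p
    assert abs(lam_hat(p, t2) - closed_form) < 1e-6
```

The quadrature estimate degrades as $t_2$ approaches $1/p$ (the integrand flattens and the truncation at \texttt{ymax} bites), which is the numerical shadow of $\hat\lm_p(t_2) = \infty$ for $t_2 \ge \tfrac{1}{p}$.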
Applying the contraction principle, \eqref{mimap}, and the definition \eqref{ieqdefn} of $\mathbb{I}^\sfq_{p,\nu}$, we write the annealed rate function as \begin{align*} \mathbb{I}^\sfa_{p}(w) &= \inf_{\substack{\tau_1 \in {\mathbb R},\tau_2 >0 \,:\\ \tau_1\tau_2^{-1/p}=w}} \widetilde\Phi_{p}^*(\tau_1,\tau_2) \\ &= \inf_{\substack{\tau_1 \in {\mathbb R},\tau_2 >0 \,:\\ \tau_1\tau_2^{-1/p}=w}}\inf_{\nu \in \mathcal{P}(\mathbb{R})} \left\{ \Psi_{p,\nu}^*(\tau_1,\tau_2) + \mathbb{H}(\nu)\right\}\\ &= \inf_{\nu \in \mathcal{P}(\mathbb{R})} \inf_{\substack{\tau_1 \in {\mathbb R},\tau_2 >0 \,:\\ \tau_1\tau_2^{-1/p}=w}} \left\{ \Psi_{p,\nu}^*(\tau_1,\tau_2) + \mathbb{H}(\nu)\right\}\\ &= \inf_{\nu \in \mathcal{P}(\mathbb{R})} \left\{ \mathbb{I}^\sfq_{p,\nu}(w) + \mathbb{H}(\nu)\right\}. \end{align*} The definition of $\mathbb{H}$ in \eqref{hpdefn} allows the restriction of the variational problem to measures $\nu$ satisfying $m_2(\nu)\le 1$, which completes the proof. \end{proof} \begin{remark} The essence of the proof of Theorem \ref{th-qldp} is a strong law of large numbers for $\tfrac{1}{n} \sum_{i=1}^n \lm_p(t_1\sqrt{n}\Theta^{(n)}_i,t_2)$. Similarly, the essence of the proof of Theorem \ref{th-compar} is a large deviation principle for $\tfrac{1}{n} \sum_{i=1}^n \lm_p(t_1\sqrt{n}\Theta^{(n)}_i,t_2)$. The non-triviality of establishing such an LDP is due to the fact that \begin{equation*} [\lm_p(t_1\sqrt{n}\Theta^{(n)}_i,t_2)]_{i=1,\dots,n;\,n\in\mathbb{N}} \end{equation*} is an infinite triangular array of \emph{dependent} random variables. Similar random structures have previously been analyzed, for example, in \cite{frank1985strong} for the LLN and in \cite{eichelsbacher1998exponential, trashorras2002large} for the LDP. Note that this triangular array does have rich structure. For example, each row is a finite exchangeable vector. 
In addition, the Maxwell-Poincar\'e-Borel Lemma (see, e.g., \cite[Lemma 1.2]{ledoux1996isoperimetry}) says that for fixed $k$, the random variables \begin{equation*} (\sqrt{n}\Theta^{(n)}_1, \dots, \sqrt{n}\Theta^{(n)}_k) \end{equation*} are asymptotically independent as $n\rightarrow\infty$. Nonetheless, none of the existing literature on general triangular arrays with such structure appears to be immediately applicable in our setting, which is why we appealed to the empirical measure versions (i.e., for $L_{n,\Theta}$) of the LLN (in the proof of Theorem \ref{th-qldp}) and LDP (Proposition \ref{prop-sanovcone}). As a side note, observe that the corresponding CLT for $L_{n,\Theta}$ (a Donsker-type theorem) can be found in \cite{spruill2007asymptotic}. \end{remark} \section{Atypical directions of projection}\label{sec-atyp} The goal of this section is to prove Theorem \ref{th-atyp}. We state some preliminary results in Sect.\ \ref{ssec-prel}. Then, we address the quenched case in Sect.\ \ref{ssec-atypq} and the annealed case in Sect.\ \ref{ssec-atypa}. \begin{remark}\label{rmk-shao} For $\theta = \iota$ (that is, the sequence of normalized diagonal directions $\tfrac{1}{\sqrt{n}}(1,1,\dots,1)\in \mathbb{S}^{n-1}$, $n\in {\mathbb N}$), note that $\widetilde{W}^{(n,p)}_\iota$ corresponds to the following ``self-normalized sum'', \begin{equation}\label{shaosum} \widetilde{W}^{(n,p)}_\iota = \frac{\frac{1}{n} \sum_{i=1}^n Y^{(n,p)}_i}{\left( \frac{1}{n} \sum_{i=1}^n |Y^{(n,p)}_i|^p\right)^{1/p} }. \end{equation} The quantity $\widetilde{W}^{(n,p)}_\iota$ when $Y^{(n,p)}_i$ has a general law (not necessarily $\mu_p$) has been analyzed in \cite{bercu2002concentration, dembo1998self, jing2003self, Shao97}. In particular, \cite{Shao97} establishes upper-tail large deviation asymptotics for $(\widetilde{W}^{(n,p)}_\iota)_{n\in{\mathbb N}}$ even if the law of $Y^{(n,p)}_i$ does not satisfy any exponential moment conditions. 
In our setting where $Y^{(n,p)}_i\sim \mu_p$, it is natural to ask how the rate function of \cite{Shao97} compares with our universal rate function $\mathbb{I}^\sfq_{p,\mu_2}$. A consequence of Theorem \ref{th-atyp} is that the large deviation rate function for the self-normalized sums $(\widetilde{W}^{(n,p)}_\iota)_{n\in{\mathbb N}}$ is atypical, in the sense that it does not coincide with $\mathbb{I}^\sfq_{p,\mu_2}$. \end{remark} \subsection{Preliminary properties of the rate functions}\label{ssec-prel} In this section, we establish some elementary properties of our various rate functions. \begin{lemma}\label{lem-domains} The domains of the rate functions $\mathbb{I}^\sfq_{p,\mu_2}$ of \eqref{ieqdefn}, $\lm_p^*$ (the Legendre transform of the function $\lm_p$ of \eqref{lampdefn0}), and $\mathbb{I}^{\mathsf{cr}}_p$ of \eqref{cramratedefn} satisfy the following: \begin{enumerate} \item For $p\in [2,\infty)$, $ D_{\mathbb{I}^\sfq_{p,\mu_2}}^\circ \subset (-1,1)$; \item For $p\in(1,\infty)$, $D_{\lm_p^*}^\circ = \{(\tau_1,\tau_2)\in {\mathbb R}\times{\mathbb R}_+ : |\tau_1|^p < \tau_2\}$; \item For $p\in (1,\infty)$, $D_{\mathbb{I}^{\mathsf{cr}}_p}^\circ = (-1,1)$. \end{enumerate} \end{lemma} \begin{proof} We first prove 1. By H\"older's inequality, for $x\in {\mathbb R}^n$ and $p > 2$, \begin{align*} \sum_{i=1}^n x_i^2 \le \left(\sum_{i=1}^n |x_i|^p\right)^{2/p} \left(\sum_{i=1}^n 1 \right)^{(p-2)/p} \quad \Rightarrow \quad \|x\|_{n,2} \le \|x\|_{n,p}\, n^{\tfrac{1}{2} - \tfrac{1}{p}}. \end{align*} Thus, $n^{1/p}{\mathbb B}_{n,p} \subseteq n^{1/2} \mathbb{B}_{n,2}$, and the two boundaries intersect at ``corners'' of the form $(\pm 1, \pm 1, \dots, \pm 1) \in {\mathbb R}^n$. As a consequence, \begin{equation*} \sup \{|\langle x,y\rangle_n| : x\in n^{1/p} \mathbb{B}_{n,p} , y \in n^{1/2} \mathbb{B}_{n,2} \} = n. 
\end{equation*} This shows that the supports of the laws of $\sw^{(n,p)}$ of \eqref{def-bwnpbth} and $\sw_\theta^{(n,p)}$ of \eqref{def-wthetan} are both contained in $[-1,1]$, and hence, $D_{\mathbb{I}^\sfq_{p,\mu_2}}^\circ \subset (-1,1)$. Now we prove 2. Note that $\lm_p^*$ is the Cram\'er rate function for the sequence of sums of i.i.d.\ random variables $\frac{1}{n}\sum_{i=1}^n \eta_i$, where in our case $\eta_i = (Y_i,|Y_i|^p)\in{\mathbb R}^2$ for $Y_i \sim \mu_p$, $i\in {\mathbb N}$. A classical fact from large deviation theory says that the closure of the domain of the Legendre transform of the log mgf of a probability measure $\nu\in\mathcal{P}({\mathbb R}^d)$ is equal to the closure of the convex hull of the support of $\nu$ \cite[Lemma 2.4]{de1985large}. In our setting, this says that \begin{equation*} \overline{D_{\lm_p^*}} = \overline{\text{conv}\{(\tau_1,\tau_2)\in{\mathbb R}\times{\mathbb R}_+ : |\tau_1|^p = \tau_2 \} } = \{(\tau_1,\tau_2)\in{\mathbb R}\times{\mathbb R}_+ : |\tau_1|^p \le \tau_2 \}. \end{equation*} Since $D_{\lm_p^*}$ is convex, this implies 2. Lastly, the fact that $\mathbb{I}^{\mathsf{cr}}_p$ is obtained from $\lm_p^*$ via the contraction principle under the map $(\tau_1,\tau_2)\mapsto \tau_1\tau_2^{-1/p}$ shows that $D_{\mathbb{I}^{\mathsf{cr}}_p}^\circ = (-1,1)$. \end{proof} The preceding lemma explains why in Theorem \ref{th-atyp}, we limit our results to the case $w\in (-1,1)$. In the following lemma, we show that the relevant variational problems achieve their optima within these domains. \begin{lemma}\label{lem-attained} Fix $p\in (1,\infty)$. \begin{enumerate} \item Let $(\tau_1,\tau_2)\in D_{\lm_p^*}$. Then, in the variational problem (i.e., the Legendre transform) that defines $\lm_p^*$, \begin{equation}\label{lmtr} \lm_p^*(\tau_1,\tau_2) \doteq \sup_{t_1 \in {\mathbb R}, \,t_2<1/p} \left\{t_1\tau_1 + t_2\tau_2 - \lm_p(t_1,t_2) \right\}, \end{equation} the supremum is uniquely attained. 
\item Let $w\in D_{\mathbb{I}^\sfq_{p,\mu_2}}$. Then, in the variational problem of \eqref{ieqdefn} that defines $\mathbb{I}^\sfq_{p,\mu_2}$, the infimum is (not necessarily uniquely) attained.
\end{enumerate}
\end{lemma}
\begin{proof}\quad
\begin{enumerate}
\item A classical result in convex analysis is that the supremum defining the Legendre transform of a strictly convex function is uniquely attained at a single point (see, e.g., Theorem 23.5 and Theorem 26.3 of \cite{rockafellar1970convex}). Since Lemma \ref{lem-lamfacts} states that $\lm_p$ is strictly convex, the supremum in \eqref{lmtr} is uniquely attained.
\item For $\tau > 0$ and $w\in (-1,1)$, let $g_w(\tau) \doteq \Psi_{p,\mu_2}^*(w\tau^{1/p},\tau)$. Note that for $w\in(-1,1)$, we can rewrite \eqref{ieqdefn} as
\begin{equation*}
\mathbb{I}^\sfq_{p,\mu_2}(w) = \inf_{\tau_2 > 0} g_w(\tau_2).
\end{equation*}
Note that $g_w$ is lower semi-continuous due to the lower semi-continuity of $\Psi_{p,\mu_2}^*$ and the continuity of the map $\tau\mapsto (w\tau^{1/p},\tau)$. To show that the infimum is attained, it suffices to show boundedness of lower level sets, which implies compactness since $g_w$ is lower semi-continuous. Note that for all $\tau > 0$, the function $\Psi_{p,\mu_2}^*(\cdot,\tau)$ is symmetric about 0 and convex, and hence minimized at 0. As a consequence, for all $w\in (-1,1)$,
\begin{align*}
g_w(\tau) \ge \Psi_{p,\mu_2}^*(0,\tau) &= \sup_{t_1\in {\mathbb R}, t_2<\frac{1}{p}}\left\{t_2\tau - \Psi_{p,\mu_2}(t_1,t_2)\right\}\\
&\ge \sup_{t_2<\frac{1}{p}} \left\{ t_2\tau - \Psi_{p,\mu_2}(0,t_2)\right\} \\
&= \sup_{t_2 < \frac{1}{p}} \left\{ t_2 \tau + \tfrac{1}{p}\log(1-pt_2)\right\}\\
&= \tfrac{1}{p} (\tau - 1 - \log \tau),
\end{align*}
where the equality in the third line follows from Lemma \ref{lem-tailcont} and the fact that $\Psi_{p,\mu_2}(0,\cdot) = \lm_p(\cdot)$ by the definition of $\Psi_{p,\mu_2}$ in \eqref{lampdefn}. Since $\lim_{\tau\rightarrow\infty}(\tau - 1 - \log \tau) = \infty$, we find that $\lim_{\tau\rightarrow \infty} g_w(\tau) = \infty$, so $g_w$ has bounded level sets.
\end{enumerate}
\end{proof}

\subsection{Comparison of quenched and unweighted LDPs}\label{ssec-atypq}

In this section, we present the proof of Theorem \ref{th-atyp}, which entails a comparison of the log mgfs for the quenched and ``Cram\'er''-type LDPs. We begin by setting some notation that will allow us to state two lemmas that identify conditions under which a log mgf is ``more'' or ``less'' convex than $t\mapsto t^2$.

Let $\beta >0$, and let $\mu_{p,\beta}(dy)$ be the absolutely continuous probability measure on ${\mathbb R}$ with density $f_{p,\beta}(y) \doteq \beta^{-1/p} f_p(y /\beta^{1/p})$. Note that $\mu_p =\mu_{p,1}$. For $p\in[1,\infty)$, we have $\mu_{p,\beta}(dy) = C_{p,\beta} e^{-|y|^p / (p\beta)} dy$, where elementary calculations show that the normalization constant satisfies $C_{p,\beta} = C_{p,1} \beta^{-1/p}$; and for $p=\infty$, we have $\mu_{\infty,\beta}(dy)=\mu_\infty(dy)=\tfrac{1}{2}{\mathbbm 1}_{[-1,1]}(y)\,dy$.
\begin{lemma}[Theorem 8 of \cite{barthe2003extremal}]\label{lem-mommon}
The map ${\mathbb R}_+\ni t \mapsto \log \mathrm{M}_{\mu_{p,1/p}}(\sqrt{t})$ is concave for $p\in[2,\infty]$ and convex for $p\in[1,2]$.
\end{lemma}
We can mold this lemma to apply to the function $\lm_p$ of \eqref{lampdefn0}.
\begin{lemma}\label{lem-strict}
Let $p\in [1,\infty)$ and $t_2 < \frac{1}{p}$. The map ${\mathbb R}_+ \ni t_1 \mapsto \lm_p(\sqrt{t_1},\, t_2)$ is concave but not linear for $p>2$, linear for $p=2$, and convex but not linear for $p < 2$.
\end{lemma}
\begin{proof}
It is easy to see that $\mathrm{M}_{\mu_{p,\beta}}(t) = \mathrm{M}_{\mu_{p,1/p}}(t(p\beta)^{1/p})$. Together with Lemma \ref{lem-mommon}, this implies that for all $\beta > 0$, the map $t\mapsto \log\mathrm{M}_{\mu_{p,\beta}}(\sqrt{t})$ is concave for $p\in[2,\infty]$ and convex for $p\in[1,2]$.
For $t_1 \in {\mathbb R}$, $t_2< \frac{1}{p}$, we consider the case $\beta = (1-pt_2)^{-1}$ and apply Lemma \ref{lem-tailcont} to see that
\begin{equation}\label{lmpexpand}
\lm_p(t_1,t_2) = -\tfrac{1}{p}\log(1-pt_2) + \log \mathrm{M}_{\mu_{p,(1-pt_2)^{-1}}}(t_1).
\end{equation}
This proves the concavity (resp., convexity) of $t_1 \mapsto \lm_p (\sqrt{t_1}, t_2)$ for $p \ge 2$ (resp., $p \le 2$).

It remains to show that linearity holds if and only if $p=2$. Note that for all $\beta > 0$, $\mu_{2,\beta}$ is a Gaussian measure with mean 0 and variance $\beta$; thus, for $t \in {\mathbb R}_+$, we have
\begin{equation*}
\log\mathrm{M}_{\mu_{2,\beta}}(\sqrt{t}) = \tfrac{\beta}{2} t.
\end{equation*}
Conversely, if $t_1\mapsto \lm_p(\sqrt{t_1},t_2)$ is linear, then \eqref{lmpexpand} implies that $t_1\mapsto \log\mathrm{M}_{\mu_{p,(1-pt_2)^{-1}}}(t_1)$ is quadratic, so $\mu_{p,(1-pt_2)^{-1}}$ must be Gaussian, hence $p=2$.
\end{proof}
We apply the concavity and convexity of the preceding lemma to prove inequalities for the function $\Psi_{p,\nu}$ defined in \eqref{lampdefn}.
\begin{lemma}\label{lem-mgfineq}
Let $\nu\in\mathcal{P}({\mathbb R})$ be non-degenerate (i.e., not a Dirac mass at a single point). If $p\in (2,\infty)$ and $m_2(\nu)\le 1$, then
\begin{equation*}
\Psi_{p,\nu}(t_1,t_2) \le \lm_p(t_1,t_2), \quad t_1\in {\mathbb R}, \,t_2 < \tfrac{1}{p},
\end{equation*}
with equality if and only if $t_1 =0$. If $p\in (1,2)$ and $m_2(\nu) \ge 1$, then
\begin{equation*}
\Psi_{p,\nu}(t_1,t_2) \ge \lm_p(t_1,t_2), \quad t_1\in {\mathbb R}, \,t_2 < \tfrac{1}{p},
\end{equation*}
with equality if and only if $t_1 =0$.
\end{lemma}
\begin{proof}
Fix $p\in (2,\infty)$ and non-degenerate $\nu\in\mathcal{P}({\mathbb R})$ such that $m_2(\nu) \le 1$. Let $X\sim \nu$.
Due to the concavity of $t_1\mapsto \lm_p(\sqrt{t_1},t_2)$ from Lemma \ref{lem-strict}, Jensen's inequality, and the fact that $t_1\mapsto \lm_p(t_1,t_2)$ is symmetric and increasing for $t_1>0$, \begin{equation*} \Psi_{p,\nu}(t_1,t_2) = {\mathbb E}_\nu\left[\lm_p((t_1^2X^2)^{1/2},\,t_2)\right] \le \lm_p\left(\left({\mathbb E}_\nu[t_1^2X^2]\right)^{1/2}, \,t_2\right) \le \lm_p(t_1,t_2). \end{equation*} Since $t_1\mapsto \lm_p(\sqrt{t_1},t_2)$ is not linear for $p >2$, it follows that the first inequality above is an equality if and only if $t_1 = 0$ (i.e., when the random variable $t_1^2X^2$ is degenerate). The result for $p\in (1,2)$ and $m_2(\nu) \ge 1$ follows from similar calculations. \end{proof} \begin{remark} The primary argument in the preceding proof of Lemma \ref{lem-mgfineq} is the concavity (or convexity) of $\Lambda \circ \sqrt{\cdot}$ and Jensen's inequality. A similar combination of tools was employed in \cite{barthe2003extremal}, but to a different end; in particular, on pages 2, 16, and 19 of \cite{barthe2003extremal}, Jensen's inequality is applied to a log-concave function $f$ and a vector $v\in {\mathbb R}^n$ to obtain the inequality \begin{equation*} \prod_{i=1}^n f(v_i)^{1/n} \le f\left(\frac{1}{n}\sum_{i=1}^n v_i \right). \end{equation*} In that setting, this yields an upper bound on the volume of a slab orthogonal to any $\theta^{(n)} \in \mathbb{S}^{n-1}$ --- an upper bound that is attained by the slab orthogonal to $\iota^{(n)}$. On the other hand, we use Jensen's inequality in a slightly different way (with respect to a general measure instead of a discrete measure) to show that the precise rate function for projections onto $\sigma$-a.e.\ $\theta\in\mathbb{S}$ differs (i.e., $<$ rather than just $\le$) from the rate function for projections onto $\iota$. 
\end{remark} \begin{proof}[Proof of Theorem \ref{th-atyp}] As observed in Remark \ref{rmk-shao}, as a consequence of the representation in Lemma \ref{lem-jointrep} and Lemma \ref{lem-unifequiv}, $(W_\iota^{(n,p)})_{n\in{\mathbb N}}$ satisfies the same LDP as the sequence $(\widetilde{W}_\iota^{(n,p)})_{n\in{\mathbb N}}$ of self-normalized sums defined in \eqref{shaosum}. The LDP for $(\widetilde{W}_\iota^{(n,p)})_{n\in{\mathbb N}}$ with rate function $\mathbb{I}^{\mathsf{cr}}_p$ of \eqref{cramratedefn} follows from Cram\'er's theorem in ${\mathbb R}^2$ for $\frac{1}{n}\sum_{i=1}^n (Y^{(n,p)}_i, |Y^{(n,p)}_i|^p)$, $n\in {\mathbb N}$, and the contraction principle applied to the map $\bar T_p$ of \eqref{bartp}. As for comparing the quenched and self-normalized rate functions, let $p\in(2,\infty)$, $(\tau_1,\tau_2)\in D_{\lm_p^*}$, and define \begin{equation*} ( t_1^{\tau_1,\tau_2},t_2^{\tau_1,\tau_2}) \doteq \arg\max_{t_1\in {\mathbb R}, t_2<\tfrac{1}{p}} \left\{ t_1\tau_1 + t_2\tau_2 - \lm_{p}(t_1,t_2) \right\}, \end{equation*} where the supremum is uniquely attained due to Lemma \ref{lem-attained}. Then, it follows from the definition of the Legendre transform and Lemma \ref{lem-mgfineq} that \begin{align} \Psi_{p,\mu_2}^*(\tau_1,\tau_2) &\ge t_1^{\tau_1,\tau_2}\tau_1 + t_2^{\tau_1,\tau_2}\tau_2 - \Psi_{p,\mu_2}( t_1^{\tau_1,\tau_2}, t_2^{\tau_1,\tau_2})\notag \\ &\stackrel{(\star)}{\ge} t_1^{\tau_1,\tau_2}\tau_1 + t_2^{\tau_1,\tau_2}\tau_2 - \lm_{p}( t_1^{\tau_1,\tau_2}, t_2^{\tau_1,\tau_2}) \label{rateineqptwise} \\ &= \lm_p^*(\tau_1,\tau_2).\notag \end{align} Note that Lemma \ref{lem-mgfineq} shows that the inequality $(\star)$ is an equality if and only if $t_1^{\tau_1,\tau_2}=0$. Due to the strict convexity of $\lm_p$, we have $( t_1^{\tau_1,\tau_2},t_2^{\tau_1,\tau_2}) = \nabla \lm_p^*(\tau_1,\tau_2)$. Since $\lm_p$ is essentially smooth (resp., symmetric in its first argument), $\lm_p^*$ is strictly convex (resp., symmetric in its first argument). 
Therefore, $t_1^{\tau_1,\tau_2}=\partial_{\tau_1} \lm_p^*(\tau_1,\tau_2) = 0$ if and only if $\tau_1=0$. Recall from Lemma \ref{lem-domains} that $D_{\mathbb{I}^\sfq_{p,\mu_2}}^\circ \subset (-1,1) = D_{\mathbb{I}^{\mathsf{cr}}_p}^\circ$. For $w\in (-1,1) \setminus D_{\mathbb{I}^\sfq_{p,\mu_2}}$, we have $\mathbb{I}^{\mathsf{cr}}_p(w) < \mathbb{I}^\sfq_{p,\mu_2}(w) = \infty$. For $w\in D_{\mathbb{I}^\sfq_{p,\mu_2}}$, let \begin{equation*} (\tau_1^w,\tau_2^w) \in \arg \min_{\substack{\tau_1\in {\mathbb R}, \tau_2 > 0 : \\ \tau_1\tau_2^{-1/p}=w}}\Psi_{p,\mu_2}^*(\tau_1,\tau_2), \end{equation*} where a minimizer exists due to Lemma \ref{lem-attained}(2). Then, it follows from \eqref{rateineqptwise} and the definition of $\mathbb{I}^{\mathsf{cr}}_p$ from \eqref{cramratedefn}, that \begin{equation*} \infty > \mathbb{I}^\sfq_{p,\mu_2}(w) = \Psi_{p,\mu_2}^*(\tau_1^w,\tau_2^w) \stackrel{(\ddagger)}{\ge} \lm_p^*(\tau_1^w,\tau_2^w) \ge \inf_{\substack{\tau_1\in {\mathbb R}, \tau_2 > 0 : \\ \tau_1\tau_2^{-1/p}=w}} \lm_p^*(\tau_1,\tau_2) = \mathbb{I}^{\mathsf{cr}}_p(w). \end{equation*} The assumption that $w\in D_{\mathbb{I}^\sfq_{p,\mu_2}}$ implies that $(\tau_1^w,\tau_2^w)\in D_{\lm_p^*}$. Thus, the inequality $(\ddagger)$ is strict if and only if the corresponding inequality \eqref{rateineqptwise} is strict, which is the case if and only if $\tau_1^w \ne 0$. If $w\ne 0$, then the constraint $w=\tau_1\tau_2^{-1/p}$ implies that $\tau_1^w\ne 0$, so $(\ddagger)$ is a strict inequality. On the other hand, if $w=0$, then $\mathbb{I}^\sfq_{p,\mu_2}(0) = 0 = \mathbb{I}^{\mathsf{cr}}_p(0)$. This completes the proof for $p >2$. The proof is essentially identical for $p<2$, with convexity replacing concavity. 
The identification in the case $p=2$ follows from Theorem \ref{th-p2}, which states that the rate function associated with $(W_\theta^{(n,2)})_{n\in {\mathbb N}}$ is the same for all $\theta \in \mathbb{S}$, in particular for $\theta^{(n)} = \iota^{(n)} = \tfrac{1}{\sqrt{n}}(1,1,\dots,1)$. \end{proof} \subsection{Comparison of annealed and unweighted LDPs}\label{ssec-atypa} Using similar methods as for Theorem \ref{th-atyp}, combined with the limit log mgf $\widetilde\Phi_p$ of \eqref{pressfunc}, we obtain the following result which compares the sequence of fixed directions $\iota$ with the sequence of random directions $\Theta$. \begin{proposition} For $p\in (2,\infty)$, \begin{equation*} \mathbb{I}^\sfa_p(w) \ge \mathbb{I}^{\mathsf{cr}}_p(w), \quad w\in (-1,1), \end{equation*} with equality if and only if $w =0$. \end{proposition} \begin{proof} Recall the definition of the limit log mgf $\widetilde\Phi_p$ given in \eqref{pressfunc}. Due to the variational representation stated in Lemma \ref{lem-lmgfvar} and Lemma \ref{lem-optprops}, there exists an optimal probability measure $\nu_p^\circ$ such that \begin{equation} \widetilde\Phi_p(t_1,t_2) = \Psi_{p,\nu_p^\circ}(t_1,t_2) - \mathbb{H}(\nu_p^\circ). \label{nuorep} \end{equation} Note that $m_2(\nu_p^\circ)\le 1$, so by Lemma \ref{lem-mgfineq}, \begin{equation*} \Psi_{p,\nu_p^\circ}(t_1,t_2) \le \lm_p(t_1,t_2), \quad t_1\in {\mathbb R}, t_2 < \tfrac{1}{p}, \end{equation*} with equality if and only if $t_1 = 0$. Together with \eqref{nuorep}, this shows that \begin{equation*} \widetilde\Phi_p(t_1,t_2) \le \lm_p(t_1,t_2) - \mathbb{H}(\nu_p^\circ) \le \lm_p(t_1,t_2), \end{equation*} with equality only if $t_1=0$ and $\nu_{p}^\circ = \mu_2$. From this inequality, the same considerations as in the proof of Theorem \ref{th-atyp} --- except with $\Psi_{p,\mu_2}$ there replaced by $\widetilde\Phi_p$ here --- complete the proof. 
\end{proof} \section{Analogous results for product measures}\label{sec-prod} In this section, we consider projections of a random vector distributed according to a product measure, and state the analogous ``product measure" versions of the ``$\ell^p$ ball" results of Sect. \ref{sec-main}. For $n\in{\mathbb N}$ and $\gamma \in \mathcal{P}({\mathbb R})$, let \begin{equation} \sx^{(n,\gamma)} = (\sx^{(n,\gamma)}_1,\dots,\sx^{(n,\gamma)}_n) \sim \gamma^{\otimes n}, \end{equation} independent of $\Theta^{(n)}$. Let $\mu_\infty$ be the uniform measure on $[-1,1]$, whose density is the limit $f_\infty = \lim_{p\rightarrow\infty} f_p$ of the densities $f_p$ defined in \eqref{fpdef}. The $p=\infty$ analogs of our results stated in Sect.\ \ref{sec-main} follow as a consequence of the results in this section with $\gamma=\mu_\infty$. The results in the product measure case are proved using very similar arguments as in the proofs of the $\ell^p$ ball case for $p<\infty$, given in Sect.\ \ref{sec-annealed}--\ref{sec-atyp}. In fact, the arguments in this section are typically slightly simpler, because the \emph{a priori} independence of the coordinates of $X^{(n,\gamma)}$ eliminates the need to appeal to the representation of the uniform measure on the $\ell^p$ ball given in Sect.\ \ref{sec-equiv}. For this reason, we will mostly only sketch the proofs in this section, highlighting only the main differences. \subsection{Annealed LDP} For $n\in {\mathbb N}$ and $\gamma\in \mathcal{P}({\mathbb R})$, let \begin{align} W^{(n,\gamma)} &\doteq \tfrac{1}{n^{1/2}} \langle \sx^{(n,\gamma)} , \Theta^{(n)}\rangle_n,\label{wgth}\\ \Phi_\gamma(t_0,t_1) &\doteq \log \int_{\mathbb R} \int_{\mathbb R} e^{t_0 z^2 + t_1 zx} \mu_2(dz) \gamma(dx), \quad t_0, t_1\in {\mathbb R}, \label{phigamdef}\\ \mathbb{I}^\sfa_{\gamma}(w) &\doteq \inf_{\substack{\tau_0 > 0, \tau_1\in {\mathbb R} \, :\\ \tau_0^{-1/2}\tau_1 = w}} \Phi_{\gamma}^*(\tau_0,\tau_1). 
\end{align} \begin{theorem}\label{th-aldpprod} Let $\gamma$ lie in the space $\mathcal{T}_2$ defined in \eqref{tpdef}. Then, the sequence $(W^{(n,\gamma)})_{n\in {\mathbb N}}$ satisfies an LDP with the quasiconvex good rate function $\mathbb{I}^\sfa_\gamma$. \end{theorem} \begin{proof} Let $Z^{(n)}=(Z^{(n)}_1,\dots,Z^{(n)}_n)$ be a standard Gaussian random vector (i.e., distributed according to $\mu_2^{\otimes n}$), independent of $\sx^{(n,\gamma)}$. Define \begin{equation}\label{wgz} \widetilde{W}^{(n,\gamma)} \doteq \tfrac{1}{n^{1/2}} \left\langle \sx^{(n,\gamma)} , \tfrac{Z^{(n)}}{\ltwonrm{Z^{(n)}}}\right\rangle_n, \end{equation} and consider the associated sum of i.i.d.\ ${\mathbb R}^2$-valued random variables, \begin{equation*} S^{(n,\gamma)} \doteq \frac{1}{n}\sum_{i=1}^n \left( |Z^{(n)}_i|^2, \sx^{(n,\gamma)}_i\, Z^{(n)}_i \right). \end{equation*} Note that $\Phi_{\gamma}$ of \eqref{phigamdef} is the log mgf of the summands of $S^{(n,\gamma)}$. Since $\Theta^{(n)} \stackrel{(d)}{=} Z^{(n)}/\|Z^{(n)}\|_{n,2}$ as shown in Lemma \ref{lem-jointrep}, we have $W^{(n,\gamma)}\stackrel{(d)}{=} \widetilde W^{(n,\gamma)}$, so it suffices to prove an LDP for $(\widetilde{W}^{(n,\gamma)})_{n\in{\mathbb N}}$. Note that $\widetilde W^{(n,\gamma)} = T(S^{(n,\gamma)})$, where $T:{\mathbb R}^2\rightarrow{\mathbb R}$ is defined by \begin{equation}\label{tdef} T(\tau_0,\tau_1) = \tau_0^{-1/2}\tau_1. \end{equation} It is straightforward to check that if $\gamma\in\mathcal{T}_2$, then $0\in D_{\Phi_\gamma}^\circ$ (see the proof of Theorem \ref{th-aldp} in Sect. \ref{ssec-anng2} for a related calculation), so by Cram\'er's theorem, the sequence $(S^{(n,\gamma)})_{n\in {\mathbb N}}$ satisfies an LDP in ${\mathbb R}^2$ with the good rate function $\Phi_{\gamma}^*$. Due to the continuity of $T$ on $D_{\Phi_\gamma^*}$, the contraction principle yields the LDP for $(\widetilde{W}^{(n,\gamma)})_{n\in{\mathbb N}}$ with the desired rate function $\mathbb{I}^\sfa_\gamma$. 
\end{proof} \subsection{Quenched LDP and atypical projection directions}\label{ssec-queprod} Recall the mgf $\mathrm{M}_\gamma$ of \eqref{logmgfdef}. For $n\in {\mathbb N}$ and $\gamma,\nu \in \mathcal{P}({\mathbb R})$, define \begin{align} W_\theta^{(n,\gamma)} &\doteq \tfrac{1}{n^{1/2}} \langle \sx^{(n,\gamma)} , \theta^{(n)}\rangle_n, \label{wgth2}\\ \Psi_{\gamma,\nu}( t_1) &\doteq \int_\mathbb{R} \log\mathrm{M}_\gamma(t_1u) \nu(du), \quad t_1 \in \mathbb{R}, \label{laminfdefn}\\ \mathbb{I}^\sfq_{\gamma,\nu}(w) &\doteq \Psi_{\gamma,\nu}^*(w). \label{iqgamdefn} \end{align} \begin{theorem}\label{th-qldpprod} Let $\gamma \in \mathcal{T}_q$ for some $q > 1$. Then, for $\sigma$-a.e.\ $\theta \in \mathbb{S}$, the sequence $(W_\theta^{(n,\gamma)})_{n\in {\mathbb N}}$ satisfies an LDP with the convex good rate function $\mathbb{I}^\sfq_{\gamma,\mu_2}$. \end{theorem} A version of Theorem \ref{th-qldpprod} with weaker conditions can be found in \cite[Theorem 2.4]{gkr3}. The reader can also find in \cite[Theorem 2.5]{gkr3} a comparison of $\mathbb{I}^\sfq_{\gamma,\mu_2}$ and $(\log \mathrm{M}_\gamma)^*$; the latter is the large deviation rate function for the sequence of empirical means of $X^{(n,\gamma)}$, as given by Cram\'er's theorem. \subsection{Variational formula} \begin{theorem}\label{th-comparprod} Let $\gamma \in \mathcal{T}_q$ for some $q>2$. Then, for all $w\in {\mathbb R}$, \begin{align} \mathbb{I}^\sfa_{\gamma}(w) &= \inf_{\substack{\nu \in \mathcal{P}({\mathbb R}):\\ m_2(\nu)\le 1}} \left\{\mathbb{I}^\sfq_{\gamma,\nu}(w) + H(\nu |\mu_2) + \tfrac{1}{2}\left(1- m_2(\nu)\right) \right\}. \label{varform2} \end{align} In particular, this implies that for all $w\in {\mathbb R}$, $\mathbb{I}^\sfa_{\gamma}(w) \le \mathbb{I}^\sfq_{\gamma,\mu_2}(w)$. \end{theorem} To prove Theorem \ref{th-comparprod}, we first establish appropriate versions of the lemmas established in Sect. \ref{sec-rel}. 
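Before turning to those lemmas, we record a quick numerical sanity check (an editorial aside in Python, not part of the formal development; the function names below are ours) of the closed-form mgf that appears for the case $\gamma=\mu_\infty$: since $\mu_\infty$ has density $\tfrac12{\mathbbm 1}_{[-1,1]}$, one has $\mathrm{M}_{\mu_\infty}(t)=\tfrac12\int_{-1}^1 e^{ty}\,dy=\sinh(t)/t$, with $\log\mathrm{M}_{\mu_\infty}$ growing linearly at infinity.

```python
import math

def mgf_uniform_quad(t, n=2000):
    # M_{mu_inf}(t) = (1/2) * integral over [-1, 1] of e^{t y} dy,
    # computed by the composite Simpson rule (n must be even).
    a, b = -1.0, 1.0
    h = (b - a) / n
    s = math.exp(t * a) + math.exp(t * b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * math.exp(t * (a + i * h))
    return 0.5 * s * h / 3.0

def mgf_uniform_closed(t):
    # Closed form sinh(t)/t, extended continuously by 1 at t = 0.
    return math.sinh(t) / t if t != 0.0 else 1.0

# Quadrature agrees with the closed form (relative error):
for t in (0.5, 1.0, 3.0, 10.0):
    assert abs(mgf_uniform_quad(t) - mgf_uniform_closed(t)) <= 1e-8 * mgf_uniform_closed(t)

# Asymptotic slope log M(t) / t -> 1 as t -> infinity:
assert abs(math.log(mgf_uniform_closed(200.0)) / 200.0 - 1.0) < 0.05
```

The last assertion previews the asymptotic slope $\lim_{|t|\to\infty}\log\mathrm{M}_{\mu_\infty}(t)/|t|=1$ exploited in Sect. \ref{ssec-infvar}.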
Let $\gamma \in \mathcal{T}_q$ for some $q> 2$, and define the functional $\widetilde\Phi_\gamma:{\mathbb R} \rightarrow {\mathbb R}$ as follows:
\begin{equation}
\label{pressfunc2}
\widetilde\Phi_\gamma(t) \doteq \lim_{n\rightarrow\infty} \frac{1}{n} \log \mathbb{E} \left[\exp\left( \sum_{i=1}^n t \sqrt{n}\Theta^{(n)}_i \sx^{(n,\gamma)}_i \right)\right] , \quad t\in {\mathbb R}.
\end{equation}
\begin{lemma}\label{lem-lmbar}
Let $\gamma \in \mathcal{T}_q$ for some $q>2$. Then,
\begin{equation} \label{mgfvar2}
\widetilde\Phi_{\gamma}(t) = \sup_{\nu \in \mathcal{P}(\mathbb{R})} \left\{ \Psi_{\gamma,\nu}(t) - \mathbb{H}(\nu)\right\}, \quad t\in {\mathbb R}.
\end{equation}
In addition, $\widetilde\Phi_\gamma(t) < \infty$ for all $t\in {\mathbb R}$.
\end{lemma}
\begin{proof}[Sketch of Proof]
The proof of Lemma \ref{lem-lmbar} centers around Varadhan's lemma, and follows from similar calculations as in the proof of Lemma \ref{lem-lmgfvar}, except with $\log \mathrm{M}_\gamma$ in place of $\lm_p$.
\end{proof}
\begin{lemma}\label{lem-optprops2}
Let $\gamma \in \mathcal{T}_q$ for some $q>2$, and for fixed $t\in {\mathbb R}$, let $\phi:\mathcal{P}({\mathbb R})\rightarrow {\mathbb R}$ denote the functional being maximized in \eqref{mgfvar2},
\begin{equation*}
\phi(\nu) \doteq \Psi_{\gamma,\nu}(t)- \mathbb{H}(\nu).
\end{equation*}
Then, $\phi$ is strictly concave and upper semi-continuous (with respect to the Wasserstein-$\tfrac{q}{q-1}$ topology on $\mathcal{P}_{q/(q-1)}({\mathbb R})$). As a consequence, the supremum in \eqref{mgfvar2} is uniquely attained at some optimal $\nu^\circ$ such that $m_2(\nu^\circ) \le 1$.
\end{lemma}
\begin{proof}[Sketch of Proof]
The proof is essentially identical to the proof of Lemma \ref{lem-optprops}, except the continuity of $\nu\mapsto \Psi_{\gamma,\nu}(t)$ is given by Lemma \ref{lem-subgsn} instead of Lemma \ref{lem-tailcont}.
\end{proof}
\begin{lemma} \label{lem-varforbarstar2}
Let $\gamma \in \mathcal{T}_q$ for some $q>2$.
Then, for $\tau\in {\mathbb R}$, \begin{equation}\label{mimag} \widetilde\Phi_{\gamma}^*(\tau) = \inf_{\nu \in \mathcal{P}(\mathbb{R})} \left\{ \Psi_{\gamma,\nu}^*(\tau) + \mathbb{H}(\nu)\right\}. \end{equation} \end{lemma} \begin{proof}[Sketch of Proof] The proof of Lemma \ref{lem-varforbarstar2} is similar to the proof of Lemma \ref{lem-varforbarstar}, where the main task is to verify the conditions of the Minimax Theorem (Theorem \ref{th-minmax}), in order to apply it to the variational formula \eqref{mgfvar2}. The main differences in this case are: we set $\mathcal{Y} = \mathbb{R}$, $D = \mathbb{R}$, and for fixed $\tau$, the functional $F$ is set equal to $F(\nu, t) \doteq t\tau - \Psi_{\gamma,\nu}(t) + \mathbb{H}(\nu)$ for $\nu \in C$ and $t \in D$. We omit the details. \end{proof} \begin{proof}[Proof of Theorem \ref{th-comparprod}] A straightforward modification of the proof of Proposition \ref{prop-altldpconvex} shows that $(W^{(n,\gamma)})_{n\in{\mathbb N}}$ satisfies an LDP with a convex good rate function. Also note that the domain of the limit log mgf $\widetilde\Phi_\gamma$ is all of ${\mathbb R}$. We utilize the following fact for large deviations in a topological vector space $\mathcal{X}$: if a given rate function is convex in $\mathcal{X}$, and the domain of the associated limit log mgf is the entire dual space $\mathcal{X}^*$, then the rate function can be identified with the Legendre transform of the limit log mgf (see, e.g., \cite[p.152, Theorem 4.5.10]{DemZeibook}). Therefore, the rate function for $(W^{(n,\gamma)})_{n\in{\mathbb N}}$ is $\widetilde\Phi_\gamma^*$, the Legendre transform of the limit log mgf $\widetilde\Phi_\gamma$ defined in \eqref{pressfunc2}. This observation and the variational formula \eqref{mimag} complete the proof. \end{proof} \section{Analysis of the variational problem}\label{sec-analysis} In this section, we analyze the variational problems that relate the annealed and quenched rate functions. In Sect. 
\ref{ssec-infvar}, we analyze the variational problem of Theorem \ref{th-comparprod} for $\gamma=\mu_\infty$. In Sect. \ref{ssec-conj}, we formulate some conjectures for the variational problem of Theorem \ref{th-compar}.

\subsection{Comparison of quenched and annealed rate functions for $p=\infty$}\label{ssec-infvar}

Note that for $w=0$ and $p\in[2,\infty)$, the infimum in the variational problem \eqref{varform1} is attained at $\mu_2$. Roughly speaking, this occurs because $w=0$ is the (LLN) limit of the random projection $\sw^{(n,p)}$, and the Gaussian measure $\mu_2$ is the (LLN) limit of the empirical measure defined in \eqref{empir},
\begin{equation*}
L_{n,\Theta} = \frac{1}{n}\sum_{i=1}^n \delta_{\sqrt{n}\Theta^{(n)}_i} \Rightarrow \mu_2, \quad \text{ as } n\rightarrow \infty.
\end{equation*}
For general $w\ne 0$, the minimizer (assuming it exists) may not necessarily be the Gaussian measure. For $p=2$, Lemma \ref{lem-var2} states that the infimum is attained at $\mu_2$ for \emph{all} $w\in {\mathbb R}$. This is because the spherical symmetry of the uniform law on $\mathbb{B}_{n,2}$ is such that a projection onto a random direction has the same law as a projection onto a fixed direction (say, the canonical first coordinate $e_1^{(n)}$). In other words, large deviations of the random directions of projection play no role in the annealed large deviations when $p=2$. In contrast, as clarified in Proposition \ref{prop-nongsn} below, the random directions of projection do play a role when the random vector to be projected is drawn according to the uniform measure on $[-1,1]^n$ instead of the uniform measure on $\mathbb{B}_{n,2}$. That is, the unique minimizer of \eqref{varform2} is \emph{not} $\mu_2$, which suggests that deviations of the underlying ``environment'' (the directions of projection) play a non-trivial role in the overall annealed large deviations.
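The moment comparison underlying Proposition \ref{prop-nongsn} below can be checked numerically; the following snippet (an editorial illustration in Python, not part of the paper's argument; the helper names are ours) verifies that the uniform measure on $[-\sqrt3,\sqrt3]$ has second moment exactly $1$ while its first absolute moment $\sqrt3/2$ strictly exceeds $m_1(\mu_2)=\sqrt{2/\pi}$.

```python
import math

# Uniform measure on [-a, a] with a = sqrt(3): second moment a^2/3, first
# absolute moment a/2 (elementary integrals).
a = math.sqrt(3.0)
m2_unif = a ** 2 / 3.0          # = 1
m1_unif = a / 2.0               # = sqrt(3)/2 ~ 0.866

# Standard Gaussian: m_1(mu_2) = E|Z| = sqrt(2/pi) ~ 0.798; verify the
# closed form numerically via Simpson's rule for 2 * int_0^10 y phi(y) dy.
def e_abs_gauss(n=2000):
    f = lambda y: 2.0 * y * math.exp(-y * y / 2.0) / math.sqrt(2.0 * math.pi)
    h = 10.0 / n
    s = f(0.0) + f(10.0)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

m1_gauss = math.sqrt(2.0 / math.pi)
assert abs(e_abs_gauss() - m1_gauss) < 1e-9   # closed form checks out
assert abs(m2_unif - 1.0) < 1e-12             # condition (b): m_2 <= 1
assert m1_unif > m1_gauss                     # condition (c): m_1 > m_1(mu_2)
print(round(m1_gauss, 3), round(m1_unif, 3))  # -> 0.798 0.866
```

Any absolutely continuous measure with these two moment properties serves equally well in the proof below; the uniform measure on $[-\sqrt3,\sqrt3]$ is simply the concrete example used there.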
\begin{lemma}\label{lem-unique} For $\gamma \in \mathcal{T}_2$ and $w\in {\mathbb R}$ such that $\mathbb{I}^\sfa_\gamma(w) < \infty$, there exists a unique minimizer $\nu_{\gamma,w} \in \mathcal{P}({\mathbb R})$ that attains the infimum in \eqref{varform2}. \end{lemma} \begin{proof} The idea is similar to Lemma \ref{lem-optprops2}, which considers the related variational problem \eqref{mgfvar2}. Let $r\in (1,2)$, and equip $\mathcal{P}_r({\mathbb R})$ with the Wasserstein-$r$ topology. By Lemma \ref{lem-compact}, it follows that the infimum in \eqref{varform2} is over a convex, compact set. In addition, $\nu\mapsto\mathbb{I}^\sfq_{\gamma,\nu}(w)$ is convex and lower semi-continuous, since it is the supremum of the maps $\nu\mapsto tw - \Psi_{\gamma,\nu}(t)$, which are continuous due to Lemma \ref{lem-subgsn}, and also clearly linear by definition. Moreover, $\mathbb{H}$ is lower semi-continuous and strictly convex due to Proposition \ref{prop-sanovcone}. Thus, the infimum in \eqref{varform2} is the infimum of a lower semi-continuous strictly convex function over a compact convex set, so the infimum is uniquely attained. \end{proof} \textsc{Notation.} Fix the following notational convention for the remainder of this section: replace $\mu_\infty$ by $\infty$ in our notation for the mgfs and rate functions (i.e., write $\mathrm{M}_\infty$, $\Psi_{\infty,\nu}$, $\Phi_\infty$, $\mathbb{I}^\sfq_{\infty, \nu}$, $\mathbb{I}^\sfa_\infty$), as well as in our notation for the optimizing measure of Lemma \ref{lem-unique}, replace $\nu_{\mu_\infty,w}$ with $\nu_{\infty,w}$. \begin{proposition}\label{prop-nongsn} Let $p=\infty$. There exists $w_*\in (0,1)$ such that if $w_* \le |w| < 1$, then $\nu_{\infty,w} \ne \mu_2$; that is, for some $w\in (-1,1)$, the minimizer in \eqref{varform2} is not standard Gaussian. This implies that for such $w$, the following strict inequality holds: $\mathbb{I}^\sfa_{\infty}(w) < \mathbb{I}^\sfq_{\infty,\mu_2}(w)$. 
\end{proposition}
To prove this, we begin by analyzing the asymptotics of the function $\Psi_{\infty,\nu}$ defined in \eqref{laminfdefn}.
\begin{lemma}\label{lem-asyms}
Let $\mathrm{M}_\infty$ be the mgf of $\mu_\infty$, as defined in \eqref{logmgfdef}, and let $m_1(\cdot)$ be the first moment map, as defined in \eqref{qmomdef}. Then,
\begin{equation}\label{minf}
\lim_{|t|\rightarrow\infty} \frac{\log \mathrm{M}_\infty(t)}{|t|} = 1.
\end{equation}
For $\nu\in\mathcal{P}({\mathbb R})$,
\begin{equation}\label{psinf}
\lim_{|t|\rightarrow\infty} \frac{\Psi_{\infty,\nu}(t)}{|t|} = m_1(\nu).
\end{equation}
In addition, $\Psi_{\infty,\nu}$ is strictly convex. As a consequence, we have
\begin{equation} \label{dpsinf}
(-m_1(\nu), +m_1(\nu)) \subset D_{\Psi_{\infty,\nu}^*} \subset [-m_1(\nu), +m_1(\nu)].
\end{equation}
\end{lemma}
\begin{proof}
The limit \eqref{minf} follows from basic calculus. That is, applying the symmetry of $\log\mathrm{M}_\infty$, using the explicit expression $\log\mathrm{M}_\infty(t) = \log \left(\frac{\sinh t}{t}\right)$, and applying L'H\^opital's rule to compute the limit,
\begin{align*}
\lim_{|t|\rightarrow \infty} \frac{\log\mathrm{M}_\infty(t)}{|t|} &= \lim_{t\rightarrow\infty} \frac{\log\mathrm{M}_\infty(t)}{t} = \lim_{t\rightarrow\infty} (\log\mathrm{M}_\infty)'(t) = \lim_{t\rightarrow\infty} \left(\coth t - \tfrac{1}{t}\right) = 1.
\end{align*}
As for the second limit \eqref{psinf}, by L'H\^opital's rule and the monotone convergence theorem, for $\nu \in \mathcal{P}({\mathbb R})$,
\begin{equation*}
\lim_{|t|\rightarrow \infty} \frac{\Psi_{\infty,\nu}(t)}{|t|} = \lim_{t\rightarrow\infty} \int_{{\mathbb R}}|u|\left( \coth(t|u|) - \tfrac{1}{t|u|}\right)\nu(du) = m_1(\nu).
\end{equation*}
Note that $\log\mathrm{M}_\infty$ is strictly convex due to basic properties of log mgfs, and therefore $\Psi_{\infty,\nu}$ is also strictly convex for all $\nu\in\mathcal{P}({\mathbb R})$, since integration with respect to $\nu$ is a linear functional.
We now prove the first inclusion of \eqref{dpsinf}. The strict convexity of $\Psi_{\infty,\nu}$ and the asymptotic linearity given by \eqref{psinf} imply that for all $c < m_1(\nu)$, there exists some $t_c\in{\mathbb R}$ such that $\Psi_{\infty,\nu}(t) > c|t|$ for $|t| \ge t_c$. The upshot is that if $\epsilon > 0$ and $|w| < m_1(\nu) -\epsilon$, then
\begin{equation*}
\limsup_{|t|\rightarrow \infty} \left[ tw - \Psi_{\infty,\nu}(t)\right] \le \limsup_{|t|\rightarrow \infty} |t|\,(|w| - m_1(\nu) + \epsilon) = -\infty.
\end{equation*}
Hence, the map $F_{w,\nu}$ defined by $F_{w,\nu}(t)\doteq tw - \Psi_{\infty,\nu}(t)$ has compact upper level sets. Since $F_{w,\nu}$ is upper semi-continuous (due to the lower semi-continuity of $\Psi_{\infty,\nu}$), it follows that $F_{w,\nu}$ is bounded above on ${\mathbb R}$, implying that $\Psi_{\infty,\nu}^*(w) < \infty$ when $|w| < m_1(\nu) - \epsilon$. As this holds for all $\epsilon > 0$, we have that $(-m_1(\nu),+m_1(\nu)) \subset D_{\Psi_{\infty,\nu}^*}$.

To prove the second inclusion of \eqref{dpsinf}, a similar argument as above shows that for $\epsilon > 0$, if $w > m_1(\nu) + \epsilon$, then
\begin{equation*}
\liminf_{t\rightarrow+\infty} \left[ tw - \Psi_{\infty,\nu}(t)\right] \ge \liminf_{t\rightarrow+\infty} t(w - m_1(\nu) - \epsilon) =+\infty,
\end{equation*}
and if $w < -(m_1(\nu) + \epsilon)$, then
\begin{equation*}
\liminf_{t\rightarrow-\infty} \left[ tw - \Psi_{\infty,\nu}(t)\right] \ge \liminf_{t\rightarrow-\infty} t(w + m_1(\nu) + \epsilon) = +\infty.
\end{equation*}
Therefore, $\Psi_{\infty,\nu}^*(w) = \infty$ for $|w| > m_1(\nu) + \epsilon$. Because this holds for all $\epsilon > 0$, it follows that $D_{\Psi_{\infty,\nu}^*} \subset [-m_1(\nu),+m_1(\nu)]$.
\end{proof}
\begin{remark}
Note that $m_1(\mu_2) = \sqrt{2/\pi} \approx 0.798$, which lies on the boundary of the domain of $\mathbb{I}^\sfq_{\infty,\mu_2}$, as depicted in Figure \ref{fig-comp}.
\end{remark}
\begin{proof}[Proof of Proposition \ref{prop-nongsn}]
To show that the minimizer of the variational problem \eqref{varform2} is not $\mu_2$, it suffices to show that there exists \emph{some} measure $\nu_\circ \in \mathcal{P}({\mathbb R})$ such that:
\begin{enumerate}[label=(\alph*)]
\item $\nu_\circ$ is absolutely continuous with respect to Lebesgue measure;
\item $m_2(\nu_\circ)\le 1$;
\item $m_1(\nu_\circ) > m_1(\mu_2)$.
\end{enumerate}
There exist several such measures, but for a concrete example, consider the uniform measure on $[-\sqrt{3},\sqrt{3}]$. Given any measure $\nu_\circ$ satisfying (a), (b), and (c), it follows that $H(\nu_\circ | \mu_2) < \infty$, and the definition of $\mathbb{I}^\sfq_{\gamma,\nu}$ in \eqref{iqgamdefn} and Lemma \ref{lem-asyms} imply that for $w\in(-1,1)$ such that $m_1(\mu_2) < |w| < m_1(\nu_\circ)$, we have
\begin{equation*}
\mathbb{I}^\sfq_{\infty,\nu_\circ}(w) = \Psi_{\infty,\nu_\circ}^*(w) < \infty = \Psi_{\infty,\mu_2}^*(w) = \mathbb{I}^\sfq_{\infty,\mu_2}(w).
\end{equation*}
Therefore, the functional $\nu\mapsto \mathbb{I}^\sfq_{\infty,\nu}(w) + H(\nu|\mu_2) + \frac{1}{2}(1-m_2(\nu))$ is finite when $\nu=\nu_\circ$ but infinite when $\nu=\mu_2$, which proves the proposition.
\end{proof}

\subsection{Conjectures regarding the variational problem}\label{ssec-conj}

We believe that Proposition \ref{prop-nongsn} can be extended to all $w\ne 0$ in the domain of $\mathbb{I}^\sfa_\infty$, and that an analogous result should hold for all $p\in(2,\infty)$ as well as for products of measures other than $\gamma = \mu_\infty$. To be precise, we mean:
\begin{conjecture}\label{conj-var}
Let $p\in(2,\infty)$. For $w\in(-1,1)\setminus \{0\}$, the minimizer in \eqref{varform1} is not $\mu_2$. Similarly, for $\gamma \in \mathcal{T}_p$ and $w\in D_{\mathbb{I}^\sfa_{\gamma}} \setminus \{0\}$, the minimizer in \eqref{varform2} is not $\mu_2$.
This implies that except at $w=0$, the annealed rate function lies strictly below the quenched rate function. \end{conjecture} This would require a new approach since: (i) our current proof relies on the exact asymptotics of Lemma \ref{lem-asyms} for the case $p=\infty$, which makes generalization to other product measures difficult; and (ii) for general $\ell^p$ balls, the variational problem is more complicated, due to the additional contraction step. One possible approach to Conjecture \ref{conj-var} would be to analyze the intermediate variational problems \eqref{mgfvar} and \eqref{mgfvar2}. In the case $p=\infty$, it is possible to establish the following lemma: \begin{lemma}\label{lem-intoptnon} Let $F_t(\nu) \doteq \Psi_{\infty,\nu}(t) - \mathbb{H}(\nu)$. There exists $t_* >0$ such that if $|t| \ge t_*$, then the maximizer of \eqref{mgfvar2} is not the standard Gaussian. That is, for some probability measure $\nu_t\ne \mu_2$, we have $F_t(\nu_t) > F_t(\mu_2)$. \end{lemma} \begin{proof}[Sketch of Proof] First, we can rewrite $F_t$ and \eqref{mgfvar2} in terms of the \emph{entropy} of $\nu$. Then, Lemma \ref{lem-asyms} can be applied to transform \eqref{mgfvar2} into a penalized maximum entropy problem, amenable to exact calculations. \end{proof} The main issues with this approach are that the claim holds only for $t$ sufficiently large, and the ``optimal'' measure is not identified. Nonetheless, this approach offers an alternative variational problem which may be simpler to analyze than \eqref{varform2}. \clearpage \printnomenclature[1.5cm] \label{notation}
https://arxiv.org/abs/1501.06165
The multiplicity of eigenvalues of the Hodge Laplacian on 5-dimensional compact manifolds
We study multiplicity of the eigenvalues of the Hodge Laplacian on smooth, compact Riemannian manifolds of dimension five for generic families of metrics. We prove that generically the Hodge Laplacian, restricted to the subspace of co-exact two-forms, has nonzero eigenvalues of multiplicity two. The proof is based on the fact that Hodge Laplacian restricted to the subspace of co-exact two-forms is minus the square of the Beltrami operator, a first-order operator. We prove that for generic metrics the spectrum of the Beltrami operator is simple. Because the Beltrami operator in this setting is a skew-adjoint operator, this implies the main result for the Hodge Laplacian.
\section{Statement of the problem and results}\label{sec:introduction} \setcounter{equation}{0} The multiplicity of the $L^2$-eigenvalues of the Laplacian $\Delta_g \geq 0$ on a smooth compact manifold $(M,g)$ is linked with the symmetry of the manifold. Generally speaking, the multiplicity of an eigenvalue is reduced under perturbations of the Laplacian. In the seventies, Uhlenbeck \cite{uhlenbeck1} and Albert \cite{albert1} studied this question for generic classes of metric and potential perturbations. For a Riemannian manifold $(M, g_0)$, Uhlenbeck proved that a generic, local perturbation of the metric $g_0 \rightarrow g_0 + \delta g$, with $\supp \delta g \subset U \subset M$ for an open set $U$, removes all multiplicities. That is, the eigenvalues of $\Delta_{g_0 + \delta g}$ are simple (i.e., have multiplicity one) for a generic set of perturbations $\delta g$ supported in $U \subset M$. In light of Uhlenbeck's result for the Laplace operator on functions, one might wonder if the nonzero eigenvalues of the Hodge Laplacian $\Delta_g^{(k)}$ acting on $k$-forms might likewise be simple for a residual set of metrics. Soon after Uhlenbeck published her theorem, Millman \cite{Millman} noted that on a manifold of even dimension $2n$, the McKean-Singer t\'elescopage theorem \cite{bgm1} implies that all the nonzero eigenvalues of the Hodge Laplacian acting on $n$-forms (forms of middle dimension) have even multiplicity. While Millman's observation precludes a general extension of Uhlenbeck's theorem to the Hodge Laplacian, it is possible for analogues to hold under appropriate hypotheses. In 2012, Enciso and Peralta-Salas \cite{EPS} proved that on a closed 3-manifold, there exists a residual set of $C^r$ metrics, $r \geq 2$, such that the nonzero eigenvalues of the Hodge Laplacian $\Delta_g^{(k)}$, for $0\leq k\leq 3$, all have multiplicity 1.
They structure their proof around the study of the Beltrami operator $*_gd$ restricted to co-exact 1-forms, which they show to have simple spectrum by a transversality argument similar to that employed by Uhlenbeck. The Beltrami operator $*_gd$ restricted to co-exact 1-forms is self-adjoint, and its square on this invariant subspace is the Hodge Laplacian $\Delta_g^{(1)}$ restricted to the same subspace. Consequently, the Hodge Laplacian restricted to this subspace also has simple nonzero eigenvalues. This fact, when combined with the Hodge decomposition, Uhlenbeck's theorem for the Laplace operator acting on 0-forms (functions), and Hodge duality, allows Enciso and Peralta-Salas to conclude their simplicity result for $\Delta_g^{(1)}$. The generic simplicity of the nonzero spectrum of the Hodge Laplacian acting on $k$-forms for $0\leq k\leq 3$ follows from Uhlenbeck's theorem for $k=0$, their result for $k=1$, and Hodge duality for $k=2$ and $k=3$. In this paper, we extend the method centered on the Beltrami operator, as introduced by Enciso and Peralta-Salas \cite{EPS}, to study the generic nonzero eigenvalue multiplicities of the Hodge Laplacian on closed 5-manifolds. In particular, we will prove that for a residual set of $C^r$ metrics, for any $r \geq 2$, the nonzero eigenvalues of the Hodge Laplacian $\Delta_g^{(2)}$ acting on co-exact 2-forms have multiplicity 2. Instead of transversality, we employ the direct perturbation theory method used by Albert \cite{albert1} (also used by Colin de Verdi\`ere \cite{cdv1}). In order to state the main theorem, we recall the de Rham complex of real differential forms over $M$. The de Rham complex for $(M,g)$ consists of the spaces $\Lambda^k(M)$ of smooth $k$-forms on $M$ and the differential maps $d:\Lambda^k(M)\to\Lambda^{k+1}(M)$ for $k=0,\ldots,n$.
Each $\Lambda^k(M)$ is a pre-Hilbert space with inner product given by \begin{eqnarray}\label{eq:ip} (u,v)_g &=& \int_M u \wedge {(*_g v)} \hspace{.1in}\mbox{for }u,v\in\Lambda^k(M), \end{eqnarray} where $\wedge$ is the wedge product and $*_g:\Lambda^k(M)\to\Lambda^{n-k}(M)$ is the Hodge star operator. We denote the closure of $\Lambda^k(M)$ in the related norm by $L^2 (M, \Lambda^k)$. In the discussion of the Beltrami operator in section \ref{sec:beltrami-ev1} we will work with complex-valued forms that we denote by $\Lambda_{\C}^k(M)$. In this case, the form ${(*_g v)}$ in the inner product \eqref{eq:ip} is replaced by its complex conjugate denoted by $\overline{(*_g v)}$. The adjoint of $d$ with respect to this inner product is the codifferential operator $\delta_g:\Lambda^{k+1}(M)\to\Lambda^k(M)$. Our primary operator of interest is the Hodge Laplacian, the second order differential operator given by $\Delta_g^{(k)}=d\delta_g+\delta_gd$, acting on its natural domain in $L^2 (M, \Lambda^k)$. The operators $\Delta_g^{(k)}$, $d$, and $\delta_g$ allow us to define the following subspaces of $\Lambda^k(M)$. The space of harmonic $k$-forms on $M$ is $$\mathcal{H}^k(M)=\{u\in\Lambda^k(M)|\,\Delta_g^{(k)}u=0\},$$ the space of exact $k$-forms is $$d\Lambda^{k-1}(M)=\{u\in\Lambda^k(M)|\,u=dv\mbox{ for some }v\in\Lambda^{k-1}(M)\},$$ and the space of co-exact $k$-forms is $$\delta_g\Lambda^{k+1}(M)=\{u\in\Lambda^k(M)|\,u=\delta_gw\mbox{ for some }w\in\Lambda^{k+1}(M)\}.$$ The Hodge Decomposition Theorem guarantees that any $k$-form can be uniquely written as the sum of a harmonic form, an exact form, and a co-exact form: \begin{thm}\label{thm:Hodge}\cite{Morita} On an oriented compact Riemannian manifold $(M,g)$, the space $\Lambda^k(M)$ can be decomposed as $$\Lambda^k(M)=\mathcal{H}^k(M)\oplus d\Lambda^{k-1}(M)\oplus\delta_g\Lambda^{k+1}(M).$$ The space of harmonic forms $\mathcal{H}^k(M)$ is finite dimensional. 
\end{thm} The result extends to an orthogonal decomposition of $L^2 (M, \Lambda^k)$. If $H^1 (M, \Lambda^k)$ is the Sobolev space of $k$-forms, then $L^2 ( M, \Lambda^k ) = \mathcal{H}^k(M)\oplus d H^1(M,\Lambda^{k-1})\oplus \delta_g H^1(M,\Lambda^{k+1})$, see, for example, \cite[Theorem 1.5.2]{gilkey}. The Beltrami operator $*_gd$ maps $k$-forms to $(n-k-1)$-forms, with the ranks of the forms coinciding precisely when $n=2k+1$. In particular, the manifold must be of odd dimension. In the case studied by Enciso and Peralta-Salas with $n=3$, the Beltrami operator maps $1$-forms to $1$-forms. The spectrum of the Hodge Laplacian restricted to exact $1$-forms follows from Uhlenbeck's analysis of the spectrum of the Laplace-Beltrami operator on $0$-forms since the exact $1$-forms have the form $d f$. On co-exact $1$-forms, the Hodge Laplacian equals, up to sign, the square of the Beltrami operator. Hence, by the Hodge decomposition, the spectrum of the Hodge Laplacian on $1$-forms is determined by the Beltrami operator. By Hodge duality, this determines the spectrum of the Hodge Laplacian on $2$-forms. The next dimension for which the Beltrami operator may be used to study the spectrum of the Hodge Laplacian is $n = 5$. In this case, the Beltrami operator maps $2$-forms to $2$-forms. In particular, the square of the Beltrami operator acting on co-exact $2$-forms is minus the Hodge Laplacian acting on co-exact $2$-forms. Consequently, the Beltrami operator may be used to study the spectrum of the Hodge Laplacian restricted to the invariant subspace of co-exact $2$-forms. \begin{thm}\label{thm:hodge-main1} Let $M$ be a closed, 5-dimensional Riemannian manifold. Let $r$ be an integer with $r\geq 2$. There exists a residual subset $\Gamma$ of the space of all $C^r$ metrics on $M$ such that, for all $g\in\Gamma$, the nonzero eigenvalues of the Hodge Laplacian $\Delta_g^{(2)}$ acting on co-exact 2-forms have multiplicity 2.
\end{thm} Our proof of Theorem \ref{thm:hodge-main1} centers on an investigation of the Beltrami operator $*_gd$. Using perturbation theory inspired by Albert \cite{albert1}, and a density argument of Colin de Verdi\`ere \cite{cdv1}, we will show that for a residual set of metrics, the Beltrami operator restricted to co-exact 2-forms has only simple eigenvalues. We will then explore the relationship between the spectrum of the Beltrami operator, a skew-adjoint operator, and that of the Hodge Laplacian on co-exact 2-forms. In particular, the origin of the generic multiplicity two of eigenvalues is the skew-adjointness of the Beltrami operator on 2-forms. This means the eigenvalues of the Beltrami operator are pure imaginary and the real and imaginary parts of the complex eigenforms give rise to independent real eigenforms of the Hodge Laplacian. The main result follows from this. \subsection{The meaning of generic} In this article, the terms \emph{generic} and \emph{generic property} mean the following. Let $X$ be a topological space. A set $\mathcal{G} \subset X$ will be called \emph{residual} or \emph{generic} in $X$ if it is a dense $G_\delta$-set. That is, $\mathcal{G} = \cap_{j=1}^\infty G_j$, where each $G_j \subset X$ is dense and open in $X$. A property that is true for a residual subset of a topological space $X$ is called \emph{generic}. \subsection{Discussion of the Beltrami and Hodge operators} The Beltrami operator may be used to study the eigenvalues of the Hodge Laplacian restricted to co-exact $k$-forms only for certain pairs $(n,k)$ of dimension $n$ of the manifold and rank $k$ of the forms. Before narrowing our focus to co-exact 2-forms on a 5-manifold, we consider the more general properties of the Beltrami operator acting on $k$-forms on an $n$-dimensional manifold. 
Since the Beltrami operator is the composition of $*_g$ and $d$, the operator is an isomorphism between $\delta_g\Lambda^{k+1}(M)$ and $\delta_g\Lambda^{n-k}(M)$, that is, the spaces of real co-exact $k$-forms and co-exact $(n-k-1)$-forms. The Beltrami operator may be extended to complex-valued forms by linearity. The extended Beltrami operator $*_gd:\delta_g\Lambda_{\mathbb{C}}^{k+1}(M)\to\delta_g\Lambda_{\mathbb{C}}^{n-k}(M)$ is also an isomorphism. \begin{lemma}\label{lem:coexact} Let $M$ be an $n$-manifold. Then $$\Delta_g^{(k)}=(-1)^{nk+1}(*_gd)^2$$ when restricted to co-exact, real or complex, $k$-forms. \end{lemma} \textit{Proof.} If $\omega\in\delta_g\Lambda_{\mathbb{C}}^{k+1}(M)$, then $\Delta_g^{(k)}\omega = \delta_gd\omega$. In terms of the Hodge star operator, the co-differential operator $\delta_g$ is $ \delta_g = (-1)^{n(k+1)+1}*_g d *_g$. Using this, we find $$ \Delta_g^{(k)}\omega = (-1)^{n(k+2)+1}(*_gd*_g)d\omega = (-1)^{nk+1}(*_gd)^2\omega. $$ The same calculation holds on $\delta_g\Lambda^{k+1}(M)$. \hfill$\Box$ Lemma \ref{lem:coexact} implies that when restricted to co-exact forms, the Hodge Laplacian is given by $\Delta_g^{(k)}=(*_gd)^2$ if $n$ and $k$ are both odd; otherwise $\Delta_g^{(k)}=-(*_gd)^2$. The parity of $n$ and $k$ also determines whether the Beltrami operator is self-adjoint or skew-adjoint. \begin{lemma}\label{lem:adjoint} Let $M$ be an $n$-dimensional manifold, $\omega\in H^1(M,\Lambda_{\mathbb{C}}^k)$, and $\eta\in H^1(M,\Lambda_{\mathbb{C}}^{n-k-1})$. Then $$( *_gd\omega, \eta)_g=(-1)^{nk+1}( \omega,*_gd\eta)_g.$$ \end{lemma} This result indicates that the Beltrami operator is self-adjoint if $n$ and $k$ are both odd and skew-adjoint otherwise.
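To make the parity rule concrete, we record the two cases relevant to this discussion; both follow immediately from Lemmas \ref{lem:coexact} and \ref{lem:adjoint}. For $(n,k)=(3,1)$, the sign is $(-1)^{nk+1}=(-1)^{4}=+1$, so $$ \Delta_g^{(1)}=(*_gd)^2 \hspace{.2in}\mbox{and}\hspace{.2in} ( *_gd\omega, \eta)_g=( \omega,*_gd\eta)_g, $$ and the Beltrami operator is self-adjoint on co-exact 1-forms. For $(n,k)=(5,2)$, the sign is $(-1)^{nk+1}=(-1)^{11}=-1$, so $$ \Delta_g^{(2)}=-(*_gd)^2 \hspace{.2in}\mbox{and}\hspace{.2in} ( *_gd\omega, \eta)_g=-( \omega,*_gd\eta)_g, $$ and the Beltrami operator is skew-adjoint on co-exact 2-forms.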
Combining this with the mapping properties of the Beltrami operator, we make the following conjecture concerning the generic multiplicities of the nonzero eigenvalues of the Hodge Laplacian on odd dimensional manifolds: The nonzero eigenvalues of the Hodge Laplacian acting on co-exact $k$-forms on an $n$-dimensional manifold with $n=2k+1$ are generically simple if $k$ is odd and generically of multiplicity 2 if $k$ is even. \subsection{Related work}\label{subsec:related1} Bleecker and Wilson \cite{BW} studied eigenvalue multiplicity for the Laplace-Beltrami operator (the Hodge Laplacian on 0-forms) under conformal perturbations of the metric $g \rightarrow e^fg$, for $f \in C^\infty (M, \R)$, and proved generic simplicity of the eigenvalues. More recently, Canzani \cite{canzani} studied the question of generic eigenvalue multiplicity for conformally covariant, elliptic self-adjoint operators $P_g$ on smooth sections of vector bundles over a compact Riemannian manifold $(M, g)$. Canzani proved that there is a residual set of functions in $C^\infty (M, \R)$ for which the corresponding operators $P_{e^fg}$ associated with the conformally deformed metrics $e^fg$ have simple nonzero eigenvalues. The perturbation theory employed there, similar to that used in the present paper, depends crucially on the conformal covariance of the operators $P_g$. In related work, Jakobson and Strohmaier \cite{js1} studied quantum ergodicity for, among other operators, the Hodge Laplacian restricted to co-closed $k$-forms. In their study of quantum ergodicity for compact K\"ahler manifolds, Jakobson, Strohmaier, and Zelditch \cite[Remark 4.2]{jsz1} conjectured that the spectrum of the Hodge Laplacian restricted to primitive, co-closed $(p,q)$-forms is generically simple. \subsection{Contents of the paper} The Beltrami operator is studied in section 2. This is a skew-adjoint operator, so the corresponding spectral problem is posed on the space of complex-valued 2-forms.
It is shown in Theorem \ref{thm:beltrami-simple1} that its eigenvalues are generically simple. The relation between the eigenvalues of the Beltrami operator and the Hodge Laplacian is discussed in section 3. The main result, Theorem \ref{thm:hodge-main1}, is proved in section 3, and states that the nonzero eigenvalues of the Hodge Laplacian acting on real-valued, co-exact 2-forms generically have multiplicity two. In the last section, we discuss the general question of the generic multiplicity of the nonzero eigenvalues of the Hodge Laplacian acting on 2-forms over a 5-dimensional manifold. \section{Generic simplicity of the eigenvalues of the Beltrami operator}\label{sec:beltrami-ev1} The Beltrami operator $*_gd$ maps co-exact $2$-forms to co-exact $2$-forms on a $5$-dimensional manifold. If $\omega$ is a co-exact $2$-form then it is easily found that $$ \Delta_g^{(2)}\omega = \delta_g d\omega = -(*_gd)^2\omega. $$ Furthermore, the Beltrami operator is skew-adjoint on the domain $H^1(M, \Lambda^2)$ in $L^2(M, \Lambda^2)$ with the inner product \eqref{eq:ip}. Thus, in order to study the eigenvalues of the Beltrami operator, we consider the Beltrami operator on the space of complex-valued $2$-forms $L^2 (M, \Lambda_{\C}^2)$. Acting on its domain $H^1 ( M , \Lambda_\C^2)$, the Beltrami operator is skew-adjoint with purely imaginary eigenvalues. We are interested in the multiplicities of the nonzero eigenvalues of the Beltrami operator restricted to the subspace of co-exact $2$-forms. We define $$ \mathcal{K}=\{u\in L^2(M,\Lambda^2)\,|\, du=0\}, $$ which is the set of all $L^2$ exact and harmonic 2-forms on $M$. We will use $\perp_g$ to specify orthogonality with respect to the inner product \eqref{eq:ip}. By the Hodge decomposition, $\mathcal{K}^{\perp_g}$ is the set of all $L^2$ co-exact 2-forms on $(M,g)$. The spaces $\mathcal{K}$ and $\mathcal{K}^{\perp_g}$ consist of real 2-forms and will be used in section \ref{sec:hodge1}.
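We record here the elementary computation, implicit in the introduction, that links the spectra of the two operators and produces pairs of independent real eigenforms. Suppose $*_gdu=i\lambda u$ with $\lambda\in\mathbb{R}\backslash\{0\}$ and $u=\alpha+i\beta$, where $\alpha$ and $\beta$ are real co-exact 2-forms. Then $$ \Delta_g^{(2)}u = -(*_gd)^2u = -(i\lambda)^2u = \lambda^2 u, $$ and separating the real and imaginary parts of the eigenvalue equation gives $$ *_gd\alpha = -\lambda\beta \hspace{.2in}\mbox{and}\hspace{.2in} *_gd\beta = \lambda\alpha, $$ so that $\Delta_g^{(2)}\alpha=\lambda^2\alpha$ and $\Delta_g^{(2)}\beta=\lambda^2\beta$. The real forms $\alpha$ and $\beta$ are linearly independent: if $\beta$ were a real multiple of $\alpha$, then $\alpha$ would be a real eigenform of $*_gd$ with a nonzero real eigenvalue, contradicting skew-adjointness. This mechanism is made precise in section \ref{sec:hodge1}.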
In the present section, where we discuss the eigenvalue problem for the Beltrami operator, we will use the analogous spaces of complex-valued 2-forms, $\mathcal{K}_{\mathbb{C}}$ and $\mathcal{K}^{\perp_g}_{\mathbb{C}}$. The main result of this section is the generic simplicity of the eigenvalues of the Beltrami operator on co-exact $2$-forms. \begin{thm}\label{thm:beltrami-simple1} The eigenvalues of the Beltrami operator $*_gd$ acting on the space $H^1(M,\Lambda_{\mathbb{C}}^2)\cap \mathcal{K}_{\mathbb{C}}^{\perp_g}$ are all simple for a residual set of $C^r$ metrics, for any $r \geq 2$. \end{thm} The proof of Theorem \ref{thm:beltrami-simple1} consists of two parts. In the first, we focus on one degenerate eigenvalue $i\lambda$ of $*_gd$. We prove that there is a real symmetric $(0,2)$-tensor $h$ such that the metric $g+ \epsilon h$ has a cluster of at least two distinct nearby eigenvalues, converging to $i\lambda$ as $\epsilon \rightarrow 0$, each of multiplicity strictly less than that of $i\lambda$. In the second step, we prove that generically all eigenvalue multiplicities are removed, using an inductive argument of Albert \cite[Theorems 1 and 2]{albert1} (see also Colin de Verdi\`ere, \cite[section 5]{cdv1}). \subsection{Variation with respect to the metric}\label{subsec:variation1} In this section, we compute the differential of the Beltrami operator $*_gd$ with respect to the metric $g$. Let $\mathcal{G}^r(M)$ denote the set of all $C^r$ metrics on the compact manifold $M$. The space $\mathcal{S}^r(M)$ consists of all symmetric tensor fields of class $C^r$ and type $(0,2)$ and can be identified with the tangent space $T_g\mathcal{G}^r(M)$ at any $g\in\mathcal{G}^r(M)$. Thus, $D(*d)_g(h)$ represents the variation of the Beltrami operator at the metric $g\in\mathcal{G}^r(M)$ in the direction of a $C^r$ symmetric $(0,2)$-tensor $h$. The trace of $h$ is given by $\tr_g h=g^{ij}h_{ij}$.
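Before stating the general formula, we note a special case that serves as a consistency check (it is not used in the sequel). For a conformal direction $h=fg$ with $f$ a real constant, we have $g(\epsilon)=(1+\epsilon f)g$, and on $j$-forms in dimension $n$ the Hodge star scales as $*_{cg}=c^{\,n/2-j}*_g$ for a constant conformal factor $c>0$. Applied to $du$, a 3-form when $n=5$, this gives $*_{g(\epsilon)}d=(1+\epsilon f)^{-1/2}*_gd$, and hence for an eigenform $u$ with $*_gdu=i\lambda u$, $$ D(*d)_g(fg)u = \frac{d}{d\epsilon}\Big|_{\epsilon=0}(1+\epsilon f)^{-1/2}*_gdu = -\frac{i\lambda}{2}fu. $$ This agrees with \eqref{eq:differential1} below: here $\tr_g(fg)=5f$ and $g^{mt}(fg)_{ti}u_{mj}=fu_{ij}$, so the bracket in \eqref{eq:differential1} reduces to $[-\frac{5}{2}f+f+f]u_{ij}=-\frac{1}{2}fu_{ij}$.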
The following lemma gives the local coordinate representation of $D(*d)_g(h)$ acting on an eigenform of the Beltrami operator. \begin{lemma}\label{lemma:derivative} Let $u\in H^1(M,\Lambda_{\mathbb{C}}^2)$ be an eigenform of $*_gd$ with eigenvalue $i\lambda$. Then for any $h\in \mathcal{S}^r(M)$, \beq\label{eq:differential1} (D(*d)_g(h)u)_{ij} = i\lambda\left[-\frac{1}{2}(\tr_gh)u_{ij}+g^{mt}h_{ti}u_{mj}+g^{mt}h_{tj}u_{im}\right]. \eeq \end{lemma} \noindent {\it Sketch of the proof.} The proof of Lemma \ref{lemma:derivative} is computationally long. We provide an overview of the computations involved. Complete details are provided in \cite[Appendix A]{gier-thesis}. First, we express the Beltrami operator in local coordinates by $$ (*_gdu)_{ij} = \frac{1}{6} \varepsilon_{klmij}|g|^{1/2} g^{kn}g^{lp}g^{mq}\left(\frac{\partial u_{np}}{\partial x_q}-\frac{\partial u_{nq}}{\partial x_p}+\frac{\partial u_{pq}}{\partial x_n}\right). $$ Next, using the formulas $$D(g^{ij})(h)=-h^{ij}\hspace{.2in}\mbox{and}\hspace{.2in}D(|g|^s)(h)=s|g|^s(\tr_g h) \hspace{.1in}\mbox{for } s>0,$$ we compute \begin{eqnarray*} (D(*d)_g(h)u)_{ij} &=& \frac{1}{6}\varepsilon_{klmij}|g|^{1/2}\left(\frac{\partial u_{np}}{\partial x_q}-\frac{\partial u_{nq}}{\partial x_p}+\frac{\partial u_{pq}}{\partial x_n}\right)\\ & & \times \left[\frac{1}{2}(\tr_gh)g^{kn}g^{lp}g^{mq}-g^{kn}g^{lp}h^{mq}-g^{kn}g^{mq}h^{lp}-g^{lp}g^{mq}h^{kn}\right]. \end{eqnarray*} Finally, we utilize the eigenvalue equation $*_gdu=i\lambda u$ to simplify the expression for $(D(*d)_g(h)u)_{ij}$. This results in the desired formula given in \eqref{eq:differential1}.\hfill $\Box$ \subsection{A density result}\label{subsec:density1} The following density result states that any compactly-supported 2-form may be locally expressed in terms of a given non-vanishing form and a symmetric $(0,2)$-tensor. 
\begin{lemma}\label{lem:density} Let $w\in C^r(M,\Lambda_{\mathbb{C}}^2)$, $r\geq 1$, and consider a compact subset $K\subset M\backslash w^{-1}(0)$. Then for any $v\in C^r(M,\Lambda_{\mathbb{C}}^2)$ with $\supp v\subset K$, there exists a symmetric complex $(0,2)$-tensor $t\in\mathcal{S}_{\mathbb{C}}^r(M)$ such that $v_{ij}=t_{ik}g^{kl}w_{lj}+w_{ik}g^{kl}t_{lj}$. \end{lemma} \noindent {\it Sketch of the proof.} Let $w\in C^r(M,\Lambda_{\mathbb{C}}^2)$, let $K$ be a compact subset of $M\backslash w^{-1}(0)$, and let $v$ be any 2-form in $C^r(M,\Lambda_{\mathbb{C}}^2)$ with $\supp v\subset K$. To make the computations clearer, we will use matrix representations of the various forms and tensors. The 2-forms $w$ and $v$ correspond to the antisymmetric $5\times 5$ matrices that we denote by $W$ and $V$, respectively. The symmetric tensors $g^{-1}$ and $t$ naturally correspond to the symmetric matrices denoted $G^{-1}$ and $T$. The matrices $W,V,G^{-1},$ and $T$ are matrix-valued functions of $p\in M$. The condition $v_{ij}=t_{ik}g^{kl}w_{lj}+w_{ik}g^{kl}t_{lj}$ for $1\leq i,j\leq 5$ translates into the matrix equation $V=TG^{-1}W+WG^{-1}T$. Since $G^{-1}$ is a symmetric positive-definite matrix, it has a symmetric positive-definite square root $G^{-1/2}$. We thus obtain the equivalent equation \begin{eqnarray}\label{eqn:sylv} \tilde{V} &=& \tilde{T}\tilde{W}+\tilde{W}\tilde{T}, \end{eqnarray} where the matrices $\tilde{V}=G^{-1/2}VG^{-1/2}$ and $\tilde{W}=G^{-1/2}WG^{-1/2}$ are antisymmetric and $\tilde{T}=G^{-1/2}TG^{-1/2}$ is symmetric. Let $\mathcal{M}$ denote the set of all $C^r$ $5\times 5$ matrix-valued functions on $M$. We define a linear operator $L_{\tilde{W}}:\mathcal{M}\to\mathcal{M}$ by \begin{eqnarray}\label{eqn:sylvester} L_{\tilde{W}}(X)&:=& X\tilde{W}+\tilde{W}X. \end{eqnarray} Satisfying condition \eqref{eqn:sylv} amounts to finding a symmetric $\tilde{T}\in\mathcal{M}$ such that $L_{\tilde{W}}(\tilde{T})=\tilde{V}$.
The Sylvester equation $L_{\tilde{W}}(X)=X\tilde{W}+\tilde{W}X=\tilde{V}$ has a solution if and only if $\tilde{V}$ is orthogonal to $\ker L_{\tilde{W}}$ (see, for example \cite{Bhatia}). It is proved in \cite[Appendix C]{gier-thesis} that each $E\in\ker L_{\tilde{W}}$ is symmetric. By the antisymmetry of $\tilde{V}$, the matrix inner product of $\tilde{V}$ with each $E\in \ker L_{\tilde{W}}$ is \begin{eqnarray*} E\cdot \tilde{V} &=& \sum_{i,j=1}^5 \overline{e_{ij}}\tilde{v}_{ij} \\ &=& \sum_{i<j} \overline{e_{ij}}\tilde{v}_{ij} + \sum_{i>j} \overline{e_{ij}}\tilde{v}_{ij} \\ &=& \sum_{i<j} \overline{e_{ij}}\tilde{v}_{ij} + \sum_{i>j} \overline{e_{ji}}(-\tilde{v}_{ji}) \\ &=& \sum_{i<j} \overline{e_{ij}}\tilde{v}_{ij} - \sum_{i<j} \overline{e_{ij}}\tilde{v}_{ij} \hspace{.2in}\mbox{(reindexing)}\\ &=& 0. \end{eqnarray*} Since $\tilde{V}$ is orthogonal to $\ker L_{\tilde{W}}$, there exists an $X\in\mathcal{M}$ such that $\tilde{V}=X\tilde{W}+\tilde{W}X$ on $K$. From the antisymmetry of $\tilde{V}$ and $\tilde{W}$, one easily shows that $X^T\tilde{W}+\tilde{W}X^T = \tilde{V}$, so that $X^T$ solves the same equation as $X$. Thus, we define $\tilde{T}$ to be the symmetrization $\tilde{T}=\frac{1}{2}[X+X^T]$. Hence, $T=G^{1/2}\tilde{T} G^{1/2}$ is a symmetric $C^r$ matrix-valued function such that $V=TG^{-1}W+WG^{-1}T$. We thus obtain from $T$ the desired symmetric complex $(0,2)$-tensor $t\in\mathcal{S}_{\mathbb{C}}^r(M)$. \hfill $\Box$ \subsection{Eigenvalue perturbation theory}\label{subsec:eigenvalue-pert1} To establish the generic simplicity of the eigenvalues of the Beltrami operator, we use standard results from perturbation theory as discussed in Rellich \cite[chapter II, section 5, Theorem 3]{Rellich} and Kato \cite{Kato}.
In particular, observe that the skew-adjointness of the Beltrami operator $*_gd$ when $n=5$ and $k=2$ implies that the operator $i\hspace{-.03in}*_gd:H^1(M,\Lambda_{\mathbb{C}}^2)\cap\mathcal{K}_{\mathbb{C}}^{\perp_g}\to \mathcal{K}_{\mathbb{C}}^{\perp_g}$ is self-adjoint with respect to the metric $g$ and has real, isolated eigenvalues of finite multiplicity. We consider perturbations of the metric $g \rightarrow g(\epsilon) := g + \epsilon h$ so the norm, and hence the Hilbert space, depends on $\epsilon$. We map these spaces to the $\epsilon$-independent Hilbert space $L^2 (M, \Lambda^2_{\C})$. We define a unitary operator $U_\epsilon : L^2 (M, \Lambda^2_{\C}) \rightarrow L^2 (M, \Lambda^2_{\C}, g(\epsilon))$ by $$ U_\epsilon \omega = \left( \frac{ \det g }{\det g(\epsilon) } \right)^{1/4} \omega, $$ for any two-form $\omega \in L^2 (M, \Lambda^2_{\C})$. Then the Beltrami operator $\mathcal{D}_\epsilon := U_\epsilon^{-1} ( *_{g(\epsilon)} d) U_\epsilon$ acts on $L^2 (M, \Lambda^2_{\C})$ and is unitarily equivalent to the Beltrami operator $*_{g (\epsilon)} d$. Note that $\mathcal{D}_0 = *_g d$. Furthermore, the set of co-exact two-forms $\mathcal{K}^{\perp_{g(\epsilon)}}$ in $L^2 (M, \Lambda^2_{\C}, g(\epsilon))$ maps to the $\mathcal{D}_\epsilon$-invariant subspace $\tilde{\mathcal{K}} ^{\perp_{g(\epsilon)}} \subset L^2 (M, \Lambda^2_{\C}, g)$. In this setting, we have the following perturbation theorem for linear perturbations of the metric. \begin{thm}\label{thm:perttheoremh} Let $\lambda$ be an eigenvalue of $i\hspace{-.03in}*_gd:H^1(M,\Lambda_{\mathbb{C}}^2)\cap\mathcal{K}_{\mathbb{C}}^{\perp_g}\to \mathcal{K}_{\mathbb{C}}^{\perp_g}$ of multiplicity $m$, and let $g(\epsilon)=g+\epsilon h$ for some $h\in S^r(M)$. 
Then there are $m$ functions $\ell^h_1(\epsilon),\ldots,\ell^h_m(\epsilon)$ real-analytic at $\epsilon=0$, and $m$ functions $U^h_1(\epsilon),\ldots,U^h_m(\epsilon) \in L^2 (M, \Lambda_\C^2)$, analytic in $H^1(M,\Lambda_{\mathbb{C}}^2)$ at $\epsilon=0$, such that the following conditions hold: \begin{enumerate} \item $\ell^h_j(0)=\lambda$ for $j=1,\ldots,m$; \item $i \mathcal{D}_\epsilon U^h_j(\epsilon) = \ell^h_j(\epsilon)U^h_j(\epsilon)$ for $j=1,\ldots,m$; \item For $\epsilon$ in a small enough neighborhood of $0$, $\{U^h_1(\epsilon),\ldots, U^h_m(\epsilon)\}$ is an orthonormal set in $H^1(M,\Lambda_{\mathbb{C}}^2)\cap \tilde{\mathcal{K}}_{\mathbb{C}}^{\perp_{g(\epsilon)}}$; \item For every open interval $(a,b)\subset\mathbb{R}$ such that $\lambda$ is the only eigenvalue of $i\hspace{-.03in}*_gd$ in $[a,b]$, there are exactly $m$ eigenvalues (counting multiplicity) $\ell^h_1(\epsilon),\ldots,\ell^h_m(\epsilon)$ of $i\hspace{-.03in}*_{g(\epsilon)}d$ in $(a,b)$, for $\epsilon$ sufficiently small. \end{enumerate} \end{thm} It will be convenient for the calculation in section \ref{subsec:beltrami-simple1} to write the eigenvalue equation in the second point of Theorem \ref{thm:perttheoremh} in the following form. Since \beq\label{eq:pert1} i \mathcal{D}_\epsilon U^h_j(\epsilon) = i U_\epsilon^{-1} (\hspace{-.03in}*_{g(\epsilon)}d ) ( U_\epsilon U^h_j(\epsilon)), \eeq if we let $\tilde{U}_j^h (\epsilon) := U_\epsilon U^h_j (\epsilon)$, we have \beq\label{eq:pert2} i \hspace{-.03in}*_{g(\epsilon)}d \tilde{U}^h_j(\epsilon) = \ell^h_j(\epsilon) \tilde{U}^h_j(\epsilon), \eeq for $j=1,\ldots,m$. These eigenforms $\tilde{U}_j^h (\epsilon)$ belong to $L^2 (M, \Lambda_\C^2, g(\epsilon))$. \subsection{Proof of Theorem \ref{thm:beltrami-simple1} for the Beltrami operator}\label{subsec:beltrami-simple1} We combine the perturbation result with the topological arguments of Albert to prove Theorem \ref{thm:beltrami-simple1}. \newline \noindent \textit{Proof.}\\ \noindent 1.
The setting. For a metric $g\in\mathcal{G}^r(M)$, we label the eigenvalues $i\lambda_n (g)$ of the Beltrami operator $*_gd$ so that $$\lambda_{n+1}^2(g)\geq \lambda_n^2(g).$$ We define the following subsets of $\mathcal{G}^r(M)$, the metrics on $M$: $$\Gamma_\infty := \{g\in\mathcal{G}^r(M)\,|\,\mbox{ all eigenvalues of }*_gd|_{H^1(M,\Lambda_{\mathbb{C}}^2)\cap \mathcal{K}_{\mathbb{C}}^{\perp_g}}\mbox{ are simple}\}$$ and $$\Gamma_n :=\{g\in\mathcal{G}^r(M)\,|\,\mbox{ the first }n\mbox{ eigenvalues of }*_gd|_{H^1(M,\Lambda_{\mathbb{C}}^2)\cap \mathcal{K}_{\mathbb{C}}^{\perp_g}}\mbox{ are simple}\}. $$ These subsets are nested so that $$ \Gamma_\infty \subset\cdots\subset \Gamma_n\subset\Gamma_{n+1}\subset\cdots\subset\Gamma_1\subset\Gamma_0=\mathcal{G}^r(M), $$ and $$\Gamma_\infty =\bigcap_{n=0}^\infty \Gamma_n.$$ By the stability of simple eigenvalues under small perturbations of the metric, each set $\Gamma_n$ is open in $\mathcal{G}^r(M)$. Thus, to prove that $\Gamma_\infty$ is residual in $\mathcal{G}^r(M)$, it is sufficient to show that $\Gamma_{n+1}$ is dense in $\Gamma_{n}$ for all $n=0,1,2,\ldots$. \noindent 2. The density argument. Let $g\in \Gamma_n$ so that the first $n$ eigenvalues of $$*_gd:H^1(M,\Lambda_{\mathbb{C}}^2)\cap\mathcal{K}_{\mathbb{C}}^{\perp_g}\to\mathcal{K}_{\mathbb{C}}^{\perp_g}$$ are simple. Suppose that the $(n+1)^{\rm st}$ eigenvalue $i\lambda\neq 0$ of $*_gd$ has multiplicity $m$, and define $g(\epsilon)=g+\epsilon h$ for some $h\in S^r(M)$. Theorem \ref{thm:perttheoremh} implies there are $m$ functions $\ell^h_1(\epsilon),\ldots,\ell^h_m(\epsilon)$ real-analytic at $\epsilon=0$, and $m$ functions $U^h_1(\epsilon),\ldots,U^h_m(\epsilon)$ analytic in $H^1(M,\Lambda_{\mathbb{C}}^2)$ at $\epsilon=0$ such that the conditions of Theorem \ref{thm:perttheoremh} hold. When $\epsilon=0$, each set $\{U^h_1(0),\ldots, U^h_m(0)\}$ forms an orthonormal basis of the eigenspace $E(*_gd,i\lambda)$. 
This basis may depend on the choice of $h\in\mathcal{S}^r(M)$ in the linear perturbation of the metric $g(\epsilon)=g+\epsilon h$. \noindent 3. Variation with respect to the metric. We differentiate the eigenvalue equation \begin{eqnarray*} \displaystyle *_{g(\epsilon)}d {\tilde U}^h_j(\epsilon)&=& i\ell^h_j(\epsilon) {\tilde U}^h_j(\epsilon), \end{eqnarray*} where ${\tilde U}^h_j(\epsilon) = U_\epsilon {U}^h_j(\epsilon) \in L^2(M,\Lambda_{\mathbb{C}}^2, g(\epsilon))$, with respect to $\epsilon$ and evaluate at $\epsilon=0$ to obtain \beq\label{eq:diff-ev-eqn1} D(*d)_g(h)U^h_j(0)+*_gd({\tilde U}^h_j)'(0) = i(\ell^h_j)'(0)U^h_j(0)+i\ell^h_j(0)({\tilde U}^h_j)'(0), \eeq where $({\tilde U}^h_j)'(0) \in L^2 (M,\Lambda_{\mathbb{C}}^2)$ due to the analyticity in Theorem \ref{thm:perttheoremh}. Introducing the notation $u^h_j=U^h_j(0)$, we simplify \eqref{eq:diff-ev-eqn1} to \beq\label{eqn:perderh} D(*d)_g(h)u^h_j+(*_gd-i\lambda)({\tilde U}^h_j)'(0) = i(\ell^h_j)'(0)u^h_j. \eeq Since $\{u_1^h,\ldots,u_m^h\}$ is an orthonormal basis of the eigenspace $E(*_gd,i\lambda)$, we take the inner product of \eqref{eqn:perderh} with another eigenform $u^h_k$. This results in \beq i(\ell^h_j)'(0)(u^h_j,u^h_k)_g = (D(*d)_g(h)u^h_j,u^h_k)_g+((*_gd-i\lambda)({\tilde U}^h_j)'(0),u^h_k)_g . \eeq The last term on the right vanishes due to the skew-adjointness of the Beltrami operator and the eigenvalue equation. Consequently, we obtain \beq i(\ell^h_j)'(0)\delta_{jk} = (D(*d)_g(h)u^h_j,u^h_k)_g. \eeq We may express the inner product $(D(*d)_g(h)u^h_j,u^h_k)_g$ in local coordinates using Lemma \ref{lemma:derivative} to obtain \begin{eqnarray}\label{eq:variation1} (\ell^h_j)'(0)\delta_{jk} &=& \frac{\lambda}{2}\int g^{pr}g^{qs}\left[-\frac{1}{2}(\tr_gh)(u^h_j)_{pq}+g^{lt}h_{tp}(u^h_j)_{lq}+g^{lt}h_{tq}(u^h_j)_{pl}\right] \overline{(u^h_k)_{rs}}\,d\mu_g. 
\nonumber \\ & & \end{eqnarray} We define a bilinear form $S:\mathcal{S}^r_{\mathbb{C}}(M)\times L^2(M,\Lambda_{\mathbb{C}}^2)\to L^2(M,\Lambda_{\mathbb{C}}^2)$ by \beq\label{eq:bilinear1} [S(h, w)]_{pq}=-\frac{1}{2}(\tr_gh)w_{pq}+g^{lt}h_{tp}w_{lq}+g^{lt}h_{tq}w_{pl} \eeq for $h\in\mathcal{S}^r_{\mathbb{C}}(M)$ and $w\in L^2(M,\Lambda_{\mathbb{C}}^2)$. We may then express \eqref{eq:variation1} more concisely as \begin{eqnarray}\label{eqn:orthwh} (\ell^h_j)'(0)\delta_{jk} &=& \lambda(S(h, u^h_j),u^h_k)_g. \end{eqnarray} \noindent 4. Change of basis. Our goal is to show that there exists an $h\in\mathcal{S}^r(M)$ such that $$(\ell^h_j)'(0)\neq (\ell^h_k)'(0)$$ for some pair $j,k\in\{1,\ldots,m\}$. This fact implies that under the metric perturbation $g(\epsilon)=g+\epsilon h$ for $\epsilon$ sufficiently small, the perturbed eigenvalues $i\ell^h_j(\epsilon)$ and $i\ell^h_k(\epsilon)$ of $*_{g(\epsilon)}d$ are distinct. While $i\ell^h_j(\epsilon)$ and $i\ell^h_k(\epsilon)$ are not guaranteed to be simple, they each have multiplicity less than $m$. To this end, assume to the contrary that $(\ell^h_j)'(0) = (\ell^h_k)'(0)$ for all $h\in\mathcal{S}^r(M)$ and all $j,k\in\{1,\ldots,m\}$. By \eqref{eqn:orthwh}, this assumption implies \begin{eqnarray} (S(h, u^h_j),u^h_j)_g &=& (S(h, u^h_k),u^h_k)_g,\hspace{.1in} 1\leq j,k\leq m \label{eqn:scond1}\\ (S(h, u^h_j),u^h_k)_g &=& 0,\hspace{.1in} j\neq k \label{eqn:scond2} \end{eqnarray} for all $h\in \mathcal{S}^r(M)$. As previously noted, each set $\{u_1^h,\ldots,u_m^h\}$ forms an orthonormal basis of $E(*_gd, i\lambda)$, but we cannot assume that $u^{h_1}_j=u^{h_2}_j$ when $h_1\neq h_2$. Let us therefore fix an orthonormal basis $\{u_1,\ldots,u_m\}$ of $E(*_gd,i\lambda)$. For a given $h\in \mathcal{S}^r(M)$, we write each $u_j$ in terms of the basis elements $\{u_1^h,\ldots,u_m^h\}$ as $u_j = \sum_{\ell = 1}^m c_{j, \ell} u_\ell^h$, for constants $c_{j,\ell} \in \mathbb{C}$. 
The fact that $\{u_1,\ldots,u_m\}$ and $\{u_1^h,\ldots,u_m^h\}$ are both orthonormal bases of $E(*_gd,i\lambda)$ implies \beq\label{eqn:cd} \delta_{jk} = (u_j,u_k)_g = \sum_{\ell = 1}^m c_{j, \ell} \overline{c_{k,\ell}}. \eeq Combining \eqref{eqn:cd} with \eqref{eqn:scond1} and \eqref{eqn:scond2} yields \begin{eqnarray*} (S(h, u_j),u_k)_g &=& c_{j,1}(S(h, u^h_1),u_k)_g+\cdots+c_{j,m}(S(h, u^h_m),u_k)_g \\ &=& c_{j,1}\overline{c_{k,1}}(S(h, u^h_1),u^h_1)_g+\cdots+c_{j,m}\overline{c_{k,m}}(S(h, u^h_m),u^h_m)_g\\ &=& (c_{j,1}\overline{c_{k,1}}+\cdots+c_{j,m}\overline{c_{k,m}})(S(h, u^h_j),u^h_j)_g\\ &=& \delta_{jk}(S(h, u^h_j),u^h_j)_g. \end{eqnarray*} Thus, for all $h\in\mathcal{S}^r(M)$, the elements in the orthonormal basis $\{u_1,\ldots,u_m\}$ satisfy \bea\label{eq:basis-rel1} (S(h, u_j),u_j)_g &=& (S(h, u_k),u_k)_g, \hspace{.1in} 1\leq j,k\leq m \nonumber \\ (S(h, u_j),u_k)_g &=& 0, \hspace{.1in}j\neq k. \eea \noindent 5. Extension to $\mathcal{S}^r_{\mathbb{C}}(M)$. For any $T\in \mathcal{S}^r(M)$, we define $h_T$ by \beq\label{eqn:htwh} h_T = T-(\tr_g T)g. \eeq With this choice, the bilinear form $S$ defined in \eqref{eq:bilinear1} becomes \begin{eqnarray*} [S(h_T, u_j)]_{pq}&=& -\frac{1}{2}[(\tr_g T)-5(\tr_g T)](u_j)_{pq}+g^{lt}[T_{tp}-(\tr_g T)g_{tp}](u_j)_{lq}\\ & & +g^{lt}[T_{tq}-(\tr_g T)g_{tq}](u_j)_{pl} \\ &=& 2(\tr_g T)(u_j)_{pq}+g^{lt}T_{tp}(u_j)_{lq}-(\tr_g T)(u_j)_{pq}+g^{lt}T_{tq}(u_j)_{pl}\\ & & -(\tr_g T)(u_j)_{pq}\\ &=& T_{pt}g^{tl}(u_j)_{lq}+(u_j)_{pl}g^{lt}T_{tq}. \end{eqnarray*} By decomposing a complex symmetric $(0,2)$-tensor $T\in \mathcal{S}^r_{\mathbb{C}}(M)$ into $T=T_1+iT_2$ for $T_1,T_2\in \mathcal{S}^r(M)$, the linearity of $h_T$ in $T$ \eqref{eqn:htwh} and relations \eqref{eq:basis-rel1} for real $T$ imply $(S(h_{T},u_j), u_j)_g = (S(h_T, u_k),u_k)_g$, for all $T\in \mathcal{S}_{\mathbb{C}}^r(M)$. 
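The simplification of $S(h_T,\cdot)$ just carried out is a purely algebraic, pointwise identity, so it can also be checked numerically. The following sketch (an illustration with randomly generated data, not part of the proof; the matrix names are ours) verifies, in matrix notation on a $5$-dimensional space, that $S(h_T,w)=Tg^{-1}w+wg^{-1}T$:

```python
import numpy as np

# Sketch (not part of the proof): verify the pointwise identity
#   S(h_T, w) = T g^{-1} w + w g^{-1} T,   h_T = T - tr_g(T) g,
# in dimension 5, where in matrix notation
#   S(h, w) = -(1/2) tr(g^{-1} h) w + h g^{-1} w + w g^{-1} h.
rng = np.random.default_rng(0)
n = 5

A = rng.standard_normal((n, n))
g = A @ A.T + n * np.eye(n)                    # positive-definite "metric"
T = rng.standard_normal((n, n)); T = T + T.T   # symmetric (0,2)-tensor
w = rng.standard_normal((n, n)); w = w - w.T   # antisymmetric, i.e. a 2-form

ginv = np.linalg.inv(g)

def S(h, w):
    # [S(h,w)]_{pq} = -(1/2)(tr_g h) w_{pq} + g^{lt} h_{tp} w_{lq} + g^{lt} h_{tq} w_{pl}
    return -0.5 * np.trace(ginv @ h) * w + h @ ginv @ w + w @ ginv @ h

h_T = T - np.trace(ginv @ T) * g               # the choice of h_T above
lhs = S(h_T, w)
rhs = T @ ginv @ w + w @ ginv @ T
assert np.allclose(lhs, rhs)
```

The trace term is where the dimension enters: $\tr_g(h_T)=\tr_g(T)-5\tr_g(T)=-4\tr_g(T)$ on a $5$-dimensional space, which produces exactly the $2(\tr_gT)$ coefficient cancelling the two metric terms.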
Likewise, we obtain \begin{eqnarray}\label{eqn:what2} (S(h_T,u_j),u_k)_g = 0,&& j\neq k \end{eqnarray} for all complex tensors $T\in \mathcal{S}_{\mathbb{C}}^r(M)$. \noindent 6. Unique continuation principle. Without loss of generality, we fix $j=1$ and $k=2$. Equation \eqref{eqn:what2} implies \begin{eqnarray}\label{eqn:quicker} (S(h_T,u_1),u_2)_g &=& 0 \end{eqnarray} for all $T\in \mathcal{S}_{\mathbb{C}}^r(M)$. We apply Lemma \ref{lem:density} with $w = u_1$. It follows from $(*_gd-i\lambda)u_1=0$ and the co-exactness of $u_1$ that $\Delta_g^{(2)}u_1=-(*_gd)^2u_1=\lambda^2 u_1$, so that $u_1$ is an eigenform of the Hodge Laplacian $\Delta^{(2)}_g$ with eigenvalue $\lambda^2$. The unique continuation principle then states that $u_1$ cannot vanish in any open subset of $M$ \cite{Aronszajn,uniquecont}. Consequently, the set $$\mathscr{S}=\{S(h_{T},u_1)\,|\, T\in\mathcal{S}_{\mathbb{C}}^r(M)\}$$ is dense in $L^2(M,\Lambda_{\mathbb{C}}^2)$ by Lemma \ref{lem:density}. Since \eqref{eqn:quicker} implies $u_2$ is orthogonal to the dense set $\mathscr{S}$, we obtain $u_2=0$ on $M$, contradicting the fact that $u_2$ is a normalized eigenform. \noindent 7. Conclusion of the proof. By the above, there exists an $h\in \mathcal{S}^r(M)$ such that $(\ell^h_j)'(0) \neq (\ell^h_k)'(0)$ for some $j,k\in\{1,\ldots,m\}$. Consequently, for all $\epsilon > 0$ small, the $(n+1)^{\rm st}$ eigenvalue of the Beltrami operator $*_{g(\epsilon)} d$ has multiplicity at most $m-1$, and the first $n$ eigenvalues remain simple. Repeating the above argument as necessary, we obtain a metric $g(\epsilon)=g+\epsilon h$ in $\Gamma_{n+1}$ for $\epsilon$ sufficiently small. Since $g(\epsilon)$ can be taken arbitrarily close to $g$ in the $C^r$ topology, we conclude that $\Gamma_{n+1}$ is dense in $\Gamma_n$. Additionally, each $\Gamma_n$ is open in $\mathcal{G}^r(M)$, so we infer that $$\Gamma_\infty =\bigcap_{n=0}^\infty \Gamma_n$$ is residual in $\mathcal{G}^r(M)$. 
Thus, for a residual set of metrics $\Gamma_\infty \subset\mathcal{G}^r(M)$, the Beltrami operator acting on $H^1(M,\Lambda_{\mathbb{C}}^2)\cap \mathcal{K}_{\mathbb{C}}^{\perp_g}$ has only simple eigenvalues.\hfill $\Box$ \section{The Hodge Laplacian on co-exact $2$-forms}\label{sec:hodge1} In this section, we apply the results on the generic multiplicities of the Beltrami operator to the study of the eigenvalue multiplicities of the Hodge Laplacian acting on real co-exact 2-forms. \subsection{Relation to the eigenvalues of the Beltrami operator}\label{subsec:relation1} In order to determine the generic eigenvalue multiplicities of the Hodge Laplacian on co-exact 2-forms on a 5-manifold, we must determine the relationship between the eigenvalues and eigenforms of the Hodge Laplacian and those of the Beltrami operator. Our next two lemmas hold in the more general setting of $n=4\ell+1$ and $k=2\ell$ for some $\ell\in\mathbb{N}$ and in particular apply when $n=5$ and $k=2$. \begin{lemma}\label{lem:bab} Let $M$ be a manifold of dimension $n=4\ell+1$ for some $\ell\in\mathbb{N}$, and let $k=2\ell$. Let $\omega=\alpha+i\beta$ be a nonzero complex $k$-form with $\alpha,\beta\in H^1(M,\Lambda^k)$. Then $*_gd\omega=i\lambda\omega$ if and only if \begin{eqnarray}\label{eqn:ab} *_gd\alpha &=& -\lambda\beta \hspace{.2in}\mbox{and}\hspace{.2in} *_gd\beta=\lambda\alpha. \end{eqnarray} \end{lemma} \textit{Proof.} First, suppose that $\omega=\alpha+i\beta\in H^1(M,\Lambda_{\mathbb{C}}^k)$ solves $*_gd\omega=i\lambda\omega$. Then \begin{eqnarray*} *_gd\omega &=& i\lambda\omega \\ *_gd(\alpha+i\beta) &=& i\lambda(\alpha+i\beta)\\ *_gd\alpha+i*_gd\beta &=& -\lambda\beta+i\lambda\alpha, \end{eqnarray*} so equating real and imaginary parts yields \eqref{eqn:ab}. Conversely, suppose that $\alpha,\beta\in H^1(M,\Lambda^k)$ satisfy \eqref{eqn:ab}, and let $\omega=\alpha+i\beta$. 
Then $$*_gd\omega=*_gd\alpha+i*_gd\beta=-\lambda\beta+i\lambda\alpha=i\lambda(\alpha+i\beta)=i\lambda\omega$$ so that $\omega$ is an eigenform of $*_gd$ with eigenvalue $i\lambda$. \hfill $\Box$ \begin{remark}\label{n:independent} It is important to recognize that condition \eqref{eqn:ab} with $\lambda\neq 0$ implies that $\alpha$ and $\beta$ are nonzero, linearly independent forms over $\mathbb{R}$. Indeed, if one of $\alpha,\beta$ vanishes, then \eqref{eqn:ab} forces the other to vanish as well, contradicting $\omega\neq 0$. To see the linear independence, observe that $\beta=c\alpha$ implies $$\beta=c\alpha=\frac{c}{\lambda}*_gd\beta=\frac{c^2}{\lambda}*_gd\alpha=-c^2\beta,$$ which gives $c=\pm i$ in contradiction to $c\in\mathbb{R}$. Even more notably, $$(\alpha,\beta)_g = \frac{1}{\lambda}(*_gd\beta,\beta)_g = -\frac{1}{\lambda}(\beta,*_gd\beta)_g=-(\beta,\alpha)_g$$ reveals that \beq\label{eqn:aborth} (\alpha,\beta)_g = 0. \eeq \end{remark} The next lemma follows from our observations in Lemma \ref{lem:bab}. \begin{lemma}\label{lem:bhl} Let $M$ be a manifold of dimension $n=4\ell+1$ for some $\ell\in\mathbb{N}$, and let $k=2\ell$. Let $\alpha,\beta\in H^2(M,\Lambda^k)\cap \mathcal{K}^{\perp_g}$. If $\omega=\alpha+i\beta$ is an eigenform of the Beltrami operator $*_gd$ with eigenvalue $i\lambda$, then both $\alpha$ and $\beta$ are eigenforms of the Hodge Laplacian $\Delta_g^{(k)}$ with eigenvalue $\lambda^2$. \end{lemma} \textit{Proof.} Let $\alpha,\beta\in H^2(M,\Lambda^k)\cap \mathcal{K}^{\perp_g}$, and suppose $\omega=\alpha+i\beta$ is an eigenform of $*_gd$ with eigenvalue $i\lambda$. By Lemma \ref{lem:bab}, $\alpha$ and $\beta$ satisfy \begin{eqnarray*} *_gd\alpha &=& -\lambda\beta \hspace{.2in}\mbox{and}\hspace{.2in} *_gd\beta=\lambda\alpha. \end{eqnarray*} Since $\alpha$ is a co-exact form, $n=4\ell+1$ is odd, and $k=2\ell$ is even, $$\Delta_g^{(k)}\alpha=-(*_gd)^2\alpha=\lambda*_gd\beta=\lambda^2\alpha.$$ Similarly, $$\Delta_g^{(k)}\beta=-(*_gd)^2\beta=-\lambda*_gd\alpha=\lambda^2\beta$$ so that $\alpha$ and $\beta$ are both eigenforms of $\Delta_g^{(k)}$ with eigenvalue $\lambda^2$. 
\hfill $\Box$ \subsection{Proof of Theorem \ref{thm:hodge-main1}} \textit{Proof.} By Theorem \ref{thm:beltrami-simple1}, there exists a residual set $\Gamma$ of $C^r$ metrics on $M$ such that the eigenvalues of the Beltrami operator $*_gd$ acting on $H^1(M,\Lambda_{\mathbb{C}}^2)\cap \mathcal{K}_{\mathbb{C}}^{\perp_g}$ are all simple. Take $g\in \Gamma$, and consider an eigenvalue $\lambda^2>0$ of the restriction of $\Delta_g^{(2)}$ to co-exact 2-forms. Let $\eta\in H^2(M,\Lambda^2)\cap \mathcal{K}^{\perp_g}$ be an eigenform of $\Delta_g^{(2)}$ with eigenvalue $\lambda^2$ so that \begin{eqnarray}\label{eqn:eb} -(*_gd)^2\eta &=& \lambda^2\eta. \end{eqnarray} Now, since $*_gd$ maps $H^2(M,\Lambda^2)\cap \mathcal{K}^{\perp_g}$ to $H^1(M,\Lambda^2)\cap \mathcal{K}^{\perp_g}$, we have $*_gd\eta=\lambda\zeta$ for some $\zeta\in H^1(M,\Lambda^2)\cap\mathcal{K}^{\perp_g}$. Equation \eqref{eqn:eb} then yields \begin{eqnarray*} -*_gd(\lambda\zeta) &=& \lambda^2\eta\\ *_gd\zeta &=& -\lambda\eta \end{eqnarray*} so that $\zeta$ is in fact contained in $H^2(M,\Lambda^2)\cap \mathcal{K}^{\perp_g}$. Since $\eta$ and $\zeta$ together satisfy condition \eqref{eqn:ab}, Lemma \ref{lem:bab} implies that $\zeta+i\eta$ is an eigenform of $*_gd$ with eigenvalue $i\lambda$. It follows from Lemma \ref{lem:bhl} that $\zeta$ is also an eigenform of $\Delta_g^{(2)}$ with eigenvalue $\lambda^2$. As noted in Remark \ref{n:independent}, the eigenforms $\eta$ and $\zeta$ are linearly independent, indicating that the eigenvalue $\lambda^2$ of $\Delta_g^{(2)}$ has multiplicity at least 2. To prove that $\lambda^2$ has multiplicity exactly 2, suppose that $\Delta_g^{(2)}\tau=\lambda^2\tau$ for some $\tau\in H^2(M,\Lambda^2)\cap \mathcal{K}^{\perp_g}$. By our previous argument, there must exist a co-exact 2-form $\xi\in H^2(M,\Lambda^2)\cap \mathcal{K}^{\perp_g}$ such that $\xi+i\tau$ is an eigenform of $*_gd$ with eigenvalue $i\lambda$. 
Since $g$ is contained in the residual set $\Gamma$, the eigenvalue $i\lambda$ is simple. Thus, $\xi+i\tau$ must be a complex multiple of the eigenform $\zeta+i\eta$; that is, \begin{eqnarray}\label{eqn:xz} \xi+i\tau\hspace{.08in}=\hspace{.08in}(a+ib)(\zeta+i\eta)&=&(a\zeta-b\eta)+i(b\zeta+a\eta) \end{eqnarray} for some $a+ib\in\mathbb{C}$. Equating the imaginary parts of equation \eqref{eqn:xz} gives $$\tau=b\zeta+a\eta$$ so that $\tau$ is a linear combination of the eigenforms $\eta$ and $\zeta$ of $\Delta_g^{(2)}$. Thus, $\lambda^2$ has multiplicity 2. We therefore conclude that for a residual set of metrics $\Gamma\subset \mathcal{G}^r(M)$, all eigenvalues of the restriction of the Hodge Laplacian $\Delta_g^{(2)}$ to $H^2(M,\Lambda^2)\cap \mathcal{K}^{\perp_g}$ have multiplicity 2. \hfill $\Box$ \vspace{.2in} As an immediate consequence of the commutativity of the Hodge Laplacian and the exterior differential operator, we obtain an analogous result for exact 3-forms. \begin{follow}\label{cor:ethree} Let $M$ be a closed 5-manifold, and let $r$ be an integer, $r\geq 2$. There exists a residual subset $\Gamma$ of the space of all $C^r$ metrics on $M$ such that, for all $g\in\Gamma$, the eigenvalues of the restriction of the Hodge Laplacian $\Delta_g^{(3)}$ to the space of exact forms in $H^2(M,\Lambda^3)$ have multiplicity 2. \end{follow} \section{Conclusions: Multiplicities of Hodge eigenvalues on $5$-manifolds}\label{sec:conclusions1} Let us summarize the situation concerning the generic multiplicities of the nonzero eigenvalues of the Hodge Laplacian $\Delta_g^{(k)}$ on a closed 5-manifold. Uhlenbeck's results \cite{uhlenbeck1} ensure the generic simplicity of the nonzero eigenvalues of the following operators: \begin{enumerate} \item[(i)] $\Delta_g^{(0)}$; \item[(ii)] $\Delta_g^{(1)}$ restricted to exact 1-forms; \item[(iii)] $\Delta_g^{(4)}$ restricted to co-exact 4-forms; \item[(iv)] $\Delta_g^{(5)}$. 
\end{enumerate} The first statement, the generic simplicity of the nonzero eigenvalues of $\Delta_g^{(0)}$, is Theorem 8 of \cite{uhlenbeck1}. The second statement follows from this and the fact that an exact 1-form has the form $df$, for a function $f$. The last two statements follow from Hodge duality. Moreover, Theorem \ref{thm:hodge-main1} and Corollary \ref{cor:ethree} of the present work assert that there exists a residual set of $C^r$ metrics such that the operators \begin{enumerate} \item[(v)] $\Delta_g^{(2)}$ restricted to co-exact 2-forms, \item[(vi)] $\Delta_g^{(3)}$ restricted to exact 3-forms \end{enumerate} have eigenvalues of multiplicity 2. In order to completely characterize the generic nonzero eigenvalue multiplicities of the Hodge Laplacian on a closed 5-manifold, we also need information about the eigenspaces of the operators \begin{enumerate} \item[(vii)] $\Delta_g^{(1)}$ restricted to co-exact 1-forms, \item[(viii)] $\Delta_g^{(2)}$ restricted to exact 2-forms, \item[(ix)] $\Delta_g^{(3)}$ restricted to co-exact 3-forms, \item[(x)] $\Delta_g^{(4)}$ restricted to exact 4-forms. \end{enumerate} Since operators (vii)-(x) have isomorphic eigenspaces, it suffices to determine the eigenvalue multiplicities of the Hodge Laplacian restricted to co-exact 1-forms. It is unclear how best to approach this problem. On a 5-manifold, the Beltrami operator only maps the space of 2-forms to itself and hence only has eigenvalues when acting on 2-forms. Thus, the eigenvalue multiplicities of the Beltrami operator will not give insight into the eigenvalues of $\Delta_g^{(1)}$ on co-exact 1-forms. A direct perturbation approach is possible, but obtaining results similar to those in section \ref{sec:beltrami-ev1}, and Lemma \ref{lem:density} in particular, for the Hodge Laplacian would be computationally intensive.
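The multiplicity-two phenomenon in (v) and (vi) reflects a linear-algebra fact: a real skew-symmetric operator (the role played by $*_gd$ on co-exact 2-forms) has purely imaginary eigenvalues in conjugate pairs $\pm i\lambda$, so minus its square has each nonzero eigenvalue $\lambda^2$ with even multiplicity. A minimal finite-dimensional sketch of this mechanism (an illustration with random data, not a statement about the operators above):

```python
import numpy as np

# Finite-dimensional analogue: A plays the role of the Beltrami operator,
# -A @ A the role of the Hodge Laplacian on co-exact 2-forms.
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = B - B.T                                   # real skew-symmetric

evals = np.sort(np.linalg.eigvalsh(-A @ A))   # -A^2 is symmetric positive semi-definite
# eigenvalues of A come in pairs +/- i*lambda, so those of -A^2 pair up as lambda^2
pairs_match = all(np.isclose(evals[2 * k], evals[2 * k + 1]) for k in range(3))
assert pairs_match
```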
https://arxiv.org/abs/2203.13960
A Relation of the Allen-Cahn equations and the Euler equations and applications of the Equipartition
We will prove that solutions of the Allen-Cahn equations that satisfy the equipartition can be transformed into solutions of the Euler equations with constant pressure. As a consequence, we obtain De Giorgi type results, that is, the level sets of entire solutions are hyperplanes. In addition, we obtain some examples of smooth entire solutions of the Euler equations in particular cases. For specific types of initial conditions, some of these solutions can be extended to the Navier-Stokes equations. Also, we will determine the structure of solutions of the Allen-Cahn system in two dimensions that satisfy the equipartition. Finally, we apply the Leray projection on the Allen-Cahn system and provide some explicit entire solutions.
\section{Introduction} As it is well known, De Giorgi in 1978 \cite{DeGi} suggested a striking analogy of the Allen Cahn equation ($ \Delta u =f(u) $) with minimal surface theory that led to significant developments in Partial Differential Equations and the Calculus of Variations, by stating the following conjecture about bounded solutions on $ {\mathbb{R}}^n : \\ $ \textbf{Conjecture:}(De Giorgi) Let $ u \in C^2({\mathbb{R}}^n) $ be a solution to \begin{align*} \Delta u - u^3 +u =0 \end{align*} such that: 1. $ |u|<1 $, $ \; $ 2. $ \dfrac{\partial u}{\partial x_n} >0 \;\: \forall x \in {\mathbb{R}}^n . \\ $ Is it true that all the level sets of $ u $ are hyperplanes, at least for $ n \leq 8 $? The relationship with the Bernstein problem for minimal graphs is the reason why $ n \leq 8 $ appears in the conjecture. The conjecture has been proved and the relation of the Allen Cahn with minimal surfaces can be seen via the theory of $ \Gamma $-convergence. The family of functionals $ J_{\varepsilon}(u) = \int_{\Omega} ( \frac{\varepsilon}{2} |\nabla u|^2 + \frac{1}{\varepsilon}W(u)) dx \;\; , \; \varepsilon >0 \;\:,\; \Gamma$-converges as $ \varepsilon \rightarrow 0 $ to the perimeter functional and has Euler-Lagrange equations: $ \: \varepsilon \Delta u - \frac{1}{\varepsilon}W'(u) =0 \; $, therefore one expects that the level sets of the minimizers will minimize the perimeter. So, one question is whether there exists a transformation (under some assumptions) that transforms the Allen-Cahn equation $ \Delta u = f(u) \;\: (u:\Omega \subset {\mathbb{R}}^n \rightarrow \mathbb{R} $) to the minimal surface equation of one dimension lower (i.e. the $ (n-1) $-dimensional minimal surface equation). 
This can be done (Corollary 1), at least in dimension 3, and then we can apply Bernstein's theorem for the minimal surface equation to obtain that the level sets of entire solutions of the Allen-Cahn equation that satisfy the equipartition ($ \frac{1}{2} | \nabla u|^2 = W(u) $) are hyperplanes. A more general result holds for bounded entire solutions of the Allen-Cahn equation that satisfy the equipartition (see Theorem 5.1 in \cite{CGS}), that is, the level sets of bounded entire solutions of the Allen Cahn equations that satisfy the equipartition are hyperplanes. We note that, in this work, we do not suppose that the solutions are bounded. As we can see in subsection 2.2, the relations between different classes of equations allow us to obtain some explicit smooth entire solutions for the 2D and 3D Isobaric Euler equations. Those solutions can be extended when the pressure is a linear function of the space variables. Some of these solutions have linearly dependent components. Thus, if we impose linear dependency of the components of solutions, we can obtain some explicit solutions to more general types of equations. In Appendix B we give some examples of smooth entire solutions of the Navier-Stokes equations with linearly dependent components when the initial conditions have a particular form. One of the ideas in this paper is to view the equipartition as the Eikonal equation. As stated in Proposition 1, the Eikonal equation can be transformed to the Euler equations with constant pressure (without the divergence free condition). Thus, solutions of the Allen Cahn equations that satisfy the equipartition can be transformed into solutions of the Euler equations with constant pressure, and we obtain the divergence free condition from the Allen Cahn equations. Furthermore, we generalize this result to the equation $ a(u) \Delta u + b(u) | \nabla u|^2 = c(u) $, under the hypothesis that $ u = \Phi (v) $ for some $ v $ that is also in this class of equations. 
This hypothesis is quite reasonable since the equation $ a(u) \Delta u + b(u) | \nabla u|^2 = c(u) $ is invariant under such transformations, in the sense that if $ u $ is a solution then $ v= F(u) $ is also in this class of equations. In the last section, we propose an analogue of the De Giorgi type conjecture for the vector Allen-Cahn equations and we will prove that entire solutions of the Allen-Cahn system in dimension 2 that satisfy the equipartition have a specific structure. Finally, we apply the Helmholtz-Leray decomposition in the Allen-Cahn system and obtain an equation independent of the potential $ W $. Then we apply the Leray projection (i.e. only the divergence free term from the decomposition) and we can determine explicit entire solutions. In Appendix A, we give some examples of such solutions and compare them to the structure we have obtained from Theorem 3. One such example, for a particular potential $ W \geq 0 $ with a finite number of global minima, has the property that $ \lim_{x \rightarrow \pm \infty} u(x,y)= a^{\pm} $, where $ a^{\pm} \in \lbrace W=0 \rbrace $ and $ \lim_{y \rightarrow \pm \infty} u(x,y) = U^{\pm} (x) $ where $ U^{\pm} $ are heteroclinic connections of the system (i.e. $ U^{\pm ''} = W_u(U^{\pm} ) $). $ \\ $ \begin{large} \textbf{Acknowledgments:} \end{large} The author would like to thank his advisor, Professor Nicholas D. Alikakos, for his guidance and inspiration, and also for motivating the study of implications of the equipartition in the Allen-Cahn system. $ \\ \\ $ \section{Statements and Proofs} $ \\ $ First, we begin with a transformation that relates the Eikonal equation and the Euler equation (with constant pressure and without the incompressibility condition). Note that $ x_n $ plays the role of the ``time parameter'' and $ x_n \in \mathbb{R} $ instead of $ x_n >0 $. We could choose any of $ x_i \;,\; i=1,...,n $ as a ``time parameter'', supposing the monotonicity condition with respect to $ x_i $. 
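Before stating the transformation, we record a standard example (not taken from this paper) showing that the equipartition hypothesis is nonvacuous: the one-dimensional heteroclinic $ u = \tanh(x_3/\sqrt{2}) $ for the double-well potential $ W(u) = \frac{1}{4}(1-u^2)^2 $ solves the Allen-Cahn equation, satisfies the equipartition, and is monotone in the ``time'' direction. A symbolic check:

```python
import sympy as sp

# Standard example (not from the paper): u = tanh(x3/sqrt(2)) with the
# double-well potential W(s) = (1 - s^2)^2 / 4, viewed in three variables.
x1, x2, x3, s = sp.symbols('x1 x2 x3 s', real=True)
u = sp.tanh(x3 / sp.sqrt(2))
W = (1 - s**2)**2 / 4

lap = sum(sp.diff(u, x, 2) for x in (x1, x2, x3))
grad_sq = sum(sp.diff(u, x)**2 for x in (x1, x2, x3))

assert sp.simplify(lap - sp.diff(W, s).subs(s, u)) == 0      # Delta u = W'(u)
assert sp.simplify(grad_sq / 2 - W.subs(s, u)) == 0          # equipartition
# monotonicity: u_{x3} = (1 - tanh^2(x3/sqrt(2)))/sqrt(2) > 0
```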
$ \\ $ \textbf{Proposition 1}\label{Eikonal-Euler} Let $ v: \Omega \rightarrow \mathbb{R} $ be a smooth solution of the Eikonal equation, $ \Omega \subset {\mathbb{R}}^n $ open ($ n \geq 2 $), i.e. $ v $ satisfies \begin{equation}\label{Eikonal} | \nabla v|^2 = G(v) \end{equation} where $ G: \mathbb{R}\rightarrow \mathbb{R} $ is a differentiable function and suppose that $ v_{x_n} >0 $. Then the vector field $ F=(F_1,...,F_{n-1}) $ where $ F_i = \dfrac{v_{x_i}}{v_{x_n}} \;,\; i=1,...,n-1 $ satisfies the Euler equations \begin{equation}\label{Euler} F_{x_n} + F {\nabla}_y F = 0 \;\;,\; y= (x_1,...,x_{n-1}) \end{equation} $ \\ $ \begin{proof} Differentiating \eqref{Eikonal} over $ x_i $ gives \begin{equation}\label{Eikonal-EulerEq2} 2 \sum_{j=1}^n v_{x_j} v_{x_j x_i} = G'(v) v_{x_i} \;\;,\; i=1,...,n \end{equation} Now we have \begin{equation}\label{Eikonal-EulerEq3} F_{i \: x_n}= \frac{v_{x_i x_n} v_{x_n} - v_{x_i}v_{x_n x_n}}{v_{x_n}^2} \;\;,\; i=1,...,n-1 \end{equation} and \begin{equation}\label{Eikonal-EulerEq4} F_{i \: x_j} = \frac{v_{x_i x_j} v_{x_n} - v_{x_i} v_{x_n x_j}}{v_{x_n}^2} \;\;,\; i,j=1,...,n-1 \end{equation} \begin{equation}\label{Eikonal-EulerEq5} \Rightarrow F_j F_{i \: x_j} = \frac{ v_{x_j} v_{x_i x_j} v_{x_n} - v_{x_j} v_{x_i} v_{x_n x_j}}{v_{x_n}^3} \end{equation} Thus, by \eqref{Eikonal-EulerEq3} and \eqref{Eikonal-EulerEq5} (for $ i=1,...,n-1 $), we have \begin{equation}\label{Eikonal-EulerEq6} \begin{gathered} F_{i \: x_n} + \sum_{j=1}^{n-1} F_j F_{i \: x_j} = \\ \frac{v_{x_i x_n} v_{x_n}^2 - v_{x_n} v_{x_i}v_{x_n x_n} + \sum_{j=1}^{n-1} ( v_{x_j} v_{x_i x_j} v_{x_n} - v_{x_j} v_{x_i} v_{x_n x_j} ) }{v_{x_n}^3} = \\ \frac{v_{x_n} \sum_{j=1}^n v_{x_j} v_{x_j x_i} - v_{x_i} \sum_{j=1}^n v_{x_j} v_{x_j x_n} }{v_{x_n}^3} \end{gathered} \end{equation} finally, by \eqref{Eikonal-EulerEq2}, the last equation becomes \begin{equation} F_{i \: x_n} + \sum_{j=1}^{n-1} F_j F_{i \: x_j} = \frac{v_{x_n} \frac{G'(v)}{2} v_{x_i} - v_{x_i} \frac{G'(v)}{2} 
v_{x_n} }{v_{x_n}^3} =0 \end{equation} \begin{equation}\label{Eikonal-EulerEq7} \Rightarrow F_{i \: x_n} + \sum_{j=1}^{n-1} F_j F_{i \: x_j} =0 \;\;,\; i=1,...,n-1 \end{equation} \end{proof} $ \\ $ \textbf{Remark 2.1:} Note that since $ v_{x_n} >0 $ it holds that $ v( \Omega ) \cap \lbrace G = 0 \rbrace = \emptyset $. Indeed, if $ v(x_0) \in \lbrace G = 0 \rbrace \Rightarrow | \nabla v(x_0) |^2 = 0 $ which contradicts $ v_{x_n} >0 $. So, by setting $ \tilde{v} = P(v) $, where $ P'(v) = \frac{1}{\sqrt{G(v)}} $ we have $ \nabla \tilde{v} = P'(v) \nabla v \Rightarrow | \nabla \tilde{v}|^2 = (P'(v))^2 | \nabla v |^2 \Rightarrow | \nabla \tilde{v}|^2 = 1 . $ Thus $ \tilde{v} $ satisfies $ | \nabla \tilde{v}|^2 =1 $ and $ F_i = \dfrac{v_{x_i}}{v_{x_n}} = \dfrac{{\tilde{v}}_{x_i}}{{\tilde{v}}_{x_n}} . $ So, at first, it seems that this transformation can be inverted: $ F_1^2 + ... + F_{n-1}^2 = \dfrac{{\tilde{v}}_{x_1}^2 + ... + {\tilde{v}}_{x_{n-1}}^2}{{\tilde{v}}_{x_n}^2} = \dfrac{1}{{\tilde{v}}_{x_n}^2} -1 \Rightarrow {\tilde{v}}_{x_n} = \dfrac{1}{\sqrt{F_1^2 + ... + F_{n-1}^2 +1 }} $ \begin{equation}\label{Euler-Eikonal} \Rightarrow \tilde{v} = \int \frac{1}{\sqrt{F_1^2 + ... + F_{n-1}^2 +1 }} dx_n + a(x_1,...,x_{n-1}) \end{equation} That is, if $ F_i \;,\; i=1,...,n-1 $ satisfy the Euler equations $ F_{x_n} +F {\nabla}_y F =0 $, then $ \tilde{v} $ defined by \eqref{Euler-Eikonal} will satisfy the Eikonal equation. This statement is true for $ n=2 $ (see \cite{AG}). But to generalize for $ n \geq 3 $ it appears that further assumptions are needed. So, the class of solutions of the Euler equations with constant pressure seems to be ``richer'' in some sense than the class of solutions of the Eikonal equation, that is, for every smooth solution of the Eikonal equation, we can obtain a solution of the Euler equation, but not vice versa. 
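Proposition 1 can be tested symbolically on a concrete Eikonal solution. The following sketch (an illustration, not part of the paper) uses $ v=\sqrt{x^2+y^2+z^2} $, which satisfies $ |\nabla v|^2 = 1 $ (so $ G \equiv 1 $) with $ v_z > 0 $ on $ \lbrace z>0 \rbrace $, and checks that $ F = (v_x/v_z,\, v_y/v_z) $ solves the Euler equations of Proposition 1:

```python
import sympy as sp

# Sketch: check Proposition 1 on the distance function v = |(x,y,z)|,
# restricted to the region z > 0 where v_z > 0.
x, y, z = sp.symbols('x y z', positive=True)
v = sp.sqrt(x**2 + y**2 + z**2)

grad_sq = sum(sp.diff(v, w)**2 for w in (x, y, z))
assert sp.simplify(grad_sq - 1) == 0       # Eikonal equation with G = 1

F1 = sp.diff(v, x) / sp.diff(v, z)         # = x/z
F2 = sp.diff(v, y) / sp.diff(v, z)         # = y/z

# F_z + (F . grad_{(x,y)}) F = 0, componentwise
euler1 = sp.diff(F1, z) + F1 * sp.diff(F1, x) + F2 * sp.diff(F1, y)
euler2 = sp.diff(F2, z) + F1 * sp.diff(F2, x) + F2 * sp.diff(F2, y)
assert sp.simplify(euler1) == 0 and sp.simplify(euler2) == 0
```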
$ \\ \\ $ \textbf{Lemma 1} Let $ u: \Omega \subset \mathbb{R}^n \rightarrow \mathbb{R} $ be such that $ u_{x_n}>0 $ and suppose that $ F_1 = \dfrac{u_{x_1}}{u_{x_n}} ,..., F_{n-1} = \dfrac{u_{x_{n-1}}}{u_{x_n}} $ satisfy the Euler equations \begin{align*} F_{i \: x_n} + \sum_{j=1}^{n-1} F_j F_{i \: x_j} = 0 \;\;,\; i =1,...,n-1 \end{align*} Then \begin{align*} div_y (F_1,...,F_{n-1}) =0 \Leftrightarrow div(\dfrac{\nabla u}{| \nabla u|}) =0 \;\;,\; y = (x_1,...,x_{n-1}) \end{align*} where $ div_y (F_1,...,F_{n-1}) = \sum_{i=1}^{n-1} F_{i \: x_i}. \\ \\ $ \begin{proof} We have \begin{align*} \begin{gathered} div(\dfrac{\nabla u}{| \nabla u|}) = 0 \Leftrightarrow \sum_{i=1}^n ( \dfrac{u_{x_i}}{\sqrt{u_{x_1}^2 +...+ u_{x_n}^2}})_{x_i} =0 \Leftrightarrow \sum_{i=1}^n ( \dfrac{F_i}{\sqrt{F_1^2 +...+ F_{n-1}^2+1}})_{x_i} =0 \\ \Leftrightarrow \sum_{i=1}^n ( \dfrac{F_{i \: x_i}}{|F|} - \dfrac{F_i(F_1 F_{1 \: x_i}+...+ F_{n-1} F_{n-1 \: x_i})}{|F|^3}) =0 \end{gathered} \end{align*} where $ | F|= \sqrt{F_1^2 +...+ F_{n-1}^2 +1} \;\:,\; F_i = \dfrac{u_{x_i}}{u_{x_n}} \;,\; i =1,...,n-1 $ and $ F_n =1 . \\ $ So the last equation is equivalent to \begin{align*} \begin{gathered} |F|^2 \sum_{i=1}^n F_{i \: x_i} - \sum_{i=1}^n F_i (F_1 F_{1 \: x_i}+...+ F_{n-1} F_{n-1 \: x_i})=0 \\ \Leftrightarrow |F|^2 \sum_{i=1}^n F_{i \: x_i} - \sum_{i=1}^n F_i \sum_{j=1}^{n-1} F_j F_{j \: x_i} =0 \\ \Leftrightarrow |F|^2 \sum_{i=1}^n F_{i \: x_i} - \sum_{j=1}^{n-1} \sum_{i=1}^{n-1} F_i F_j F_{j \: x_i} - \sum_{j=1}^{n-1}F_j F_{j \: x_n} =0 \end{gathered} \end{align*} but by the Euler equations $ \sum_{i=1}^{n-1} F_i F_{j \: x_i} = - F_{j \: x_n} $ (and $ F_n =1 $), thus \begin{align*} \begin{gathered} |F|^2 \sum_{i=1}^n F_{i \: x_i} - \sum_{j=1}^{n-1} F_j \sum_{i=1}^{n-1} F_i F_{j \: x_i} - \sum_{j=1}^{n-1}F_j F_{j \: x_n} =0 \Leftrightarrow |F|^2 \sum_{i=1}^n F_{i \: x_i} =0 \\ \Leftrightarrow \sum_{i=1}^{n-1} F_{i \: x_i} = 0 \Leftrightarrow div_y (F_1,...,F_{n-1}) = 0 \;\;,\; y =(x_1,...,x_{n-1}). 
\end{gathered} \end{align*} \end{proof} $ \\ $ \subsection{The Allen-Cahn equations and the equipartition} $ \\ $ Now, we will associate the solutions of the Allen Cahn equation that satisfy the equipartition with a vector field, as defined in Proposition 1, that satisfies the Euler equations (together with the incompressibility condition). Also, the mean curvature of the level sets of such solutions will be zero. We will assume that the potential $ W : \mathbb{R} \rightarrow [0,+ \infty) $ is smooth and $ \Omega \subset \mathbb{R}^n $ is an open set. $ \\ \\ $ \textbf{Proposition 2} Let $ u \in C^2 (\Omega) $ be a solution of $ \Delta u = W'(u) $, such that $ u_{x_n} >0 $. If $ u $ satisfies \begin{align}\label{equipartition1} \frac{1}{2}|\nabla u|^2 = W(u) \end{align} then $ F=(F_1,...,F_{n-1}) \;\:,\; F_i := \dfrac{u_{x_i}}{u_{x_n}} \;,\: i=1,...,n-1 $, satisfies the Euler equations \begin{align*} \begin{cases} F_{x_n} +F \nabla_y F = 0 \\ div_y F = 0 \end{cases} \;\;, \; y=(x_1,...,x_{n-1}) \end{align*} In addition, we have \begin{align*} div(\dfrac{\nabla u}{| \nabla u|}) =0 \end{align*} $ \\ \\ $ \begin{proof} The fact that $ F=(F_1,...,F_{n-1}) \;\:,\; F_i := \dfrac{u_{x_i}}{u_{x_n}} \;,\: i=1,...,n-1 $ satisfies \begin{align*} F_{x_n} +F \nabla_y F = 0 \;\;, \; y=(x_1,...,x_{n-1}) \end{align*} follows from Proposition 1. It suffices to prove $ div_y F=0 \: .$ We have \begin{align*} \begin{gathered} div_y F = \sum_{i=1}^{n-1} F_{i \: x_i} = \sum_{i=1}^{n-1} \frac{u_{x_i x_i} u_{x_n} -u_{x_i} u_{x_n x_i}}{u_{x_n}^2} \\ = \frac{u_{x_n} \sum_{i=1}^{n-1} u_{x_i x_i} - \sum_{i=1}^{n-1} (\frac{1}{2} u_{x_i}^2)_{x_n}}{u_{x_n}^2} = \frac{u_{x_n} \sum_{i=1}^{n-1} u_{x_i x_i} - W'(u) u_{x_n} + u_{x_n} u_{x_n x_n}}{u_{x_n}^2} \\ = \frac{1}{u_{x_n}} (\Delta u - W'(u)) = 0 \end{gathered} \end{align*} The fact that $ div(\dfrac{\nabla u}{| \nabla u|}) =0 $ follows from Lemma 1. 
\end{proof} $ \\ $ \textbf{Notes:} (1) The fact that solutions of the Allen-Cahn equation that satisfy the equipartition also satisfy $ div(\dfrac{\nabla u}{| \nabla u|}) =0 $ has been proved more directly and for more general types of equations (see Proposition 4.11 in \cite{DG}). $ \\ $ (2) For bounded solutions the hypothesis $ \frac{1}{2}| \nabla u|^2 = W(u) $ can be weakened, in the sense that it can be assumed to hold only at a single point (see Theorem 2.2 in \cite{CGS}). In fact, in all dimensions the level sets of bounded entire solutions of the Allen-Cahn equations that satisfy the equipartition are hyperplanes (see Theorem 5.1 in \cite{CGS}). $ \\ $ (3) The fact that $ div_y F=0 $ can alternatively be obtained by calculations utilizing the stress-energy tensor (see \cite{AFS}, p. 88), applied in the scalar case. $ \\ $ (4) Note that classically the solutions of the Allen-Cahn equations are assumed to be bounded. However, this assumption excludes a priori, for example, $ \nabla u $ from being a global diffeomorphism. Indeed, if $ || u ||_{L^{\infty}} \leq M $, elliptic regularity theory (see for example \cite{HL}) implies, since $ W $ is smooth, that $ ||\nabla u ||_{L^{\infty}} \leq C $ for some constant $ C = C(M,W) $, so $ \nabla u $ cannot be onto $ \mathbb{R}^n $. $ \\ $ As we will see now, in dimension 3 we can reduce the Allen-Cahn equation (together with the equipartition) to the two-dimensional minimal surface equation and then apply Bernstein's theorem to conclude that the level sets of the solution are hyperplanes. $ \\ $ \textbf{Corollary 1} Let $ u \in C^2 (\mathbb{R}^3) $ be a solution of $ \Delta u = W'(u) $, such that $ u_z >0 $.
If $ u $ satisfies \begin{align}\label{equipartition2} \frac{1}{2}|\nabla u|^2 = W(u) \end{align} then there exists a function $ \psi $ with $ \psi_y = \dfrac{u_x}{u_z} \;,\; \psi_x = - \dfrac{u_y}{u_z} $ that satisfies the minimal surface equation \begin{align*} \psi_{yy} (\psi_x^2 +1 ) - 2 \psi_x \psi_y \psi_{xy} + \psi_{xx} (\psi_y^2 +1) =0 \end{align*} In particular, the level sets of $ u $ are hyperplanes. $ \\ $ \begin{proof} From Proposition 2 we have that $ div_{(x,y)} F =0 $, thus there exists some $ \psi = \psi(x,y,z) $ such that $ F_1 = \psi_y $ and $ F_2 =- \psi_x \: .$ As we noted in Remark 2.1, $ u(\mathbb{R}^3) \cap \lbrace W = 0 \rbrace = \emptyset \: $ (by \eqref{equipartition2} and since $ u_z >0 $). So we set $ v = G(u) $, with $ G'(u) = \dfrac{1}{\sqrt{2W(u)}} $, thus \begin{align*} \begin{gathered} | \nabla v |^2 =1 \;\;\;\; \textrm{and} \;\;\; F_1 = \frac{u_x}{u_z} = \frac{v_x}{v_z} \;,\; F_2 = \frac{u_y}{u_z} = \frac{v_y}{v_z} \\ \Rightarrow F_1^2 +F_2^2 = \frac{1}{v_z^2} -1 \Rightarrow v_z = \frac{1}{\sqrt{F_1^2 +F_2^2 +1}} \\ v_{zx} = \frac{-F_1 F_{1x} - F_2 F_{2x}}{(F_1^2 + F_2^2 +1)^\frac{3}{2}} \end{gathered} \end{align*} and $ v_x = F_1 v_z = \dfrac{F_1}{\sqrt{F_1^2 + F_2^2 +1}} $ \begin{align*} \Rightarrow v_{xz} = \frac{F_{1z}(F_1^2 +F_2^2 +1) - F_1(F_1 F_{1z} +F_2 F_{2z})}{(F_1^2 + F_2^2 +1)^{\frac{3}{2}}} \end{align*} Also, by Proposition 1, $ F $ satisfies \begin{align*} \begin{cases} F_{1z} +F_1 F_{1x} + F_2 F_{1y} =0 \\ F_{2z} + F_1 F_{2x} + F_2 F_{2y} =0 \end{cases} \end{align*} and therefore, from the fact that $ v_{zx}=v_{xz} $ since $ v \in C^2(\mathbb{R}^3) $, we obtain \begin{align*} \begin{gathered} F_{1z}(F_2^2 +1) - F_1 F_2 F_{2z} + F_1 F_{1x} + F_2 F_{2x} =0 \\ \Rightarrow -F_1 F_2 F_{1x} - F_2^2 F_{1y} - F_{1y} + F_1^2 F_{2x} + F_1 F_2 F_{2y} + F_{2x} =0 \\ \Leftrightarrow \psi_{yy} (\psi_x^2 +1 ) - 2 \psi_x \psi_y \psi_{xy} + \psi_{xx} (\psi_y^2 +1) =0 \end{gathered} \end{align*} Finally, by Bernstein's theorem (see Theorem 1.21 in \cite{CM}), for each fixed $ z $ the function $ \psi $ must be affine with respect to the variables $ (x,y) $, i.e. $ {\psi}_{xx}= {\psi}_{xy} = {\psi}_{yy}=0 $: $ {\psi}_x = -b(z) $ and $ {\psi}_y =a(z) $ (for some functions $ a,b: \mathbb{R} \rightarrow \mathbb{R} $) $ \Rightarrow \psi(x,y,z) = -b(z)x +a(z)y + c(z) $. This gives: $ F_1 = {\psi}_y = a(z) \;\;, \; F_2 = - {\psi}_x =b(z) $ \begin{align*} \begin{gathered} \Rightarrow \frac{u_x}{u_z} =a(z) \;\;\;\; \textrm{and} \;\;\; \frac{u_y}{u_z} = b(z) \\ \Rightarrow u(x,y,z) = K(s,y) = H(t ,x) \\ \textrm{where} \;\: s=x + \int \frac{1}{a(z)}dz \;\;,\; t =y + \int \frac{1}{b(z)}dz \end{gathered} \end{align*} Now we have \begin{align*} \begin{gathered} u_x = a(z) u_z \Rightarrow H_x = \frac{a}{b} H_t \;\: (H_t \neq 0 \;\: \textrm{since} \;\: u_z >0) \\ \textrm{and} \;\: \frac{1}{2}| \nabla u|^2 =W(u) \Rightarrow \frac{1}{2} [ H_x^2 + H_t^2(1 + \frac{1}{b^2})] = W(H) \end{gathered} \end{align*} Differentiating the last equation with respect to $ y,z $ respectively (and utilizing $ H_x = \frac{a}{b} H_t$), we obtain \begin{align*} \begin{cases} \frac{a}{b} H_t H_{xt} + H_t H_{tt} (1 + \frac{1}{b^2}) = W' H_t \\ \frac{a}{b^2} H_t H_{xt} + H_t H_{tt} ( \frac{1}{b} + \frac{1}{b^3}) - H_t^2 \frac{b'}{b^3} = W' \frac{H_t}{b} \end{cases} \end{align*} and subtracting $ \frac{1}{b} $ times the first equation from the second, \begin{align*} \Rightarrow - H_t^2 \frac{b'}{b^3} = 0 \Rightarrow b' =0 \end{align*} thus $ b=b_0 =constant $. Arguing similarly for $ K= K(s,y) $ we obtain $ a=a_0 = constant.
$ Therefore, \begin{align*} u(x,y,z) = h(ax+by+z) \end{align*} where $ h $ is a solution of the ODE \begin{align*} h''(t) = \frac{W'(h(t))}{a^2 +b^2 +1} \end{align*} \end{proof} $ \\ $ We now generalize Proposition 2 to the following theorem: $ \\ $ \textbf{Theorem 1} Let $ u,v : \Omega \subset {\mathbb{R}}^n \rightarrow \mathbb{R} $ be such that $ u_{x_n}, v_{x_n} >0 $ and satisfy the equations \begin{equation}\label{Theorem1} \begin{gathered} a(u)\Delta u + b(u) | \nabla u|^2 = f(u) \\ k(v) \Delta v + l(v) | \nabla v|^2 = g(v) \end{gathered} \end{equation} and suppose that $ u = \Phi(v) $ for some $ \Phi : \mathbb{R} \rightarrow \mathbb{R} \;( \Phi' \neq 0) $ and $ p(t) \neq 0 \;,\; a(t) \neq 0 $, where $ \\ p(t):= k(t) a( \Phi (t)) \Phi ''(t) + k(t) b( \Phi (t)) ( \Phi '(t))^2 - l(t) a( \Phi (t) ) \Phi '(t) \: $. Then the vector field $ F=(F_1,...,F_{n-1}) $ defined as $ F_i = \dfrac{u_{x_i}}{u_{x_n}} \:, \\ i=1,...,n-1 $, satisfies the Euler equations \begin{equation}\label{Theorem1Euler} F_{x_n} + F {\nabla}_y F = 0 \;\;\;,\; y=(x_1,...,x_{n-1}) \end{equation} Also, $div_y F = 0 $ if and only if $ \Phi $ is a solution of the ODE \begin{align*} a( \Phi (t)) \Phi '(t) G'(t) + 2 [b( \Phi (t)) ( \Phi '(t))^2 + a( \Phi(t)) \Phi''(t) ] G(t) = 2 f( \Phi (t)) \end{align*} where $ G(t) := \dfrac{k(t) f( \Phi (t)) - g(t) a( \Phi (t)) \Phi '(t)}{p(t)} \;\;\; (p $ as defined above) $ \\ \\ $ \begin{proof} We have $ u= \Phi (v) $ and $ \nabla u = \Phi ' (v) \nabla v $, therefore \begin{align*} \begin{gathered} \Delta u = \Phi'(v) \Delta v + \Phi''(v) |\nabla v|^2 \\ \Rightarrow f(u) - b(u) | \nabla u |^2 = a( \Phi(v)) ( \Phi'(v) \Delta v + \Phi''(v) |\nabla v|^2) \\ \Rightarrow f( \Phi (v)) - b( \Phi (v)) ( \Phi '(v))^2 | \nabla v |^2 = a( \Phi(v)) ( \Phi'(v) \Delta v + \Phi''(v) |\nabla v|^2) \\ \end{gathered} \end{align*} \begin{equation}\label{Theorem1proofEq1} \Rightarrow \Delta v = \dfrac{f( \Phi (v)) - [b( \Phi (v)) ( \Phi '(v))^2 + a( \Phi(v)) \Phi''(v) ] \: | \nabla v
|^2 }{a( \Phi (v)) \Phi '(v)} \;\;\; ( a,\Phi' \neq 0) \end{equation} since $ u $ is a solution of $ a(u)\Delta u + b(u) | \nabla u|^2 = f(u) . \\ $ Now, since $ v $ is also solution of the second equation in \eqref{Theorem1}, we have \begin{align*} \begin{gathered} k(v)(\dfrac{f( \Phi (v)) - [b( \Phi (v)) ( \Phi '(v))^2 + a( \Phi(v)) \Phi''(v) ] \: | \nabla v |^2 }{a( \Phi (v)) \Phi '(v)}) + l(v) | \nabla v |^2 = g(v) \\ \Leftrightarrow p(v) | \nabla v |^2 = k(v)f( \Phi (v)) -a( \Phi(v)) \Phi'(v) g(v) \end{gathered} \end{align*} where $ p(v) = k(v) a( \Phi (v)) \Phi ''(v) + k(v) b( \Phi (v)) ( \Phi '(v))^2 - l(v) a( \Phi (v) ) \Phi '(v) $. By hypothesis $ p \neq 0 $, thus \begin{equation}\label{Theorem1equipartition} |\nabla v|^2 = G(v) \end{equation} where \begin{equation} G(v) = \frac{k(v) f( \Phi (v)) - g(v) a( \Phi (v)) \Phi '(v)}{p(v)} \end{equation} Also note that $ F_i = \dfrac{u_{x_i}}{u_{x_n}} = \dfrac{v_{x_i}}{v_{x_n}} . \\ $ So we apply Proposition 1 and we obtain that \begin{equation}\label{Theorem1Euler2} \begin{gathered} F_{i \: x_n} + \sum_{j=1}^{n-1} F_j F_{i \: x_j} =0 \;\;,\; i=1,...,n-1 \\ \Leftrightarrow F_{x_n} + F {\nabla}_y F =0 \end{gathered} \end{equation} Now, for the divergence of $ F $: \begin{align*} \begin{gathered} F_{i \: x_i} = \frac{v_{x_i x_i} v_{x_n} -v_{x_i}v_{x_n x_i}}{v_{x_n}^2} \\ \Rightarrow div_y F = \sum_{i=1}^{n-1} F_{i \: x_i} = \frac{\sum_{i=1}^{n-1} v_{x_i x_i} v_{x_n} - \sum_{i=1}^{n-1} v_{x_i} v_{x_n x_i} }{v_{x_n}^2} \end{gathered} \end{align*} \begin{equation}\label{Theorem1div} \Rightarrow div_y F = \frac{v_{x_n} \Delta v - \frac{1}{2} (|\nabla v|^2)_{x_n}}{v_{x_n}^2} \end{equation} Thus, from \eqref{Theorem1proofEq1} and \eqref{Theorem1equipartition} the equation \eqref{Theorem1div} becomes: \begin{align*} div_y F = \frac{v_{x_n} \frac{f( \Phi (v)) - [b( \Phi (v)) ( \Phi '(v))^2 + a( \Phi(v)) \Phi''(v) ] \: G(v) }{a( \Phi (v)) \Phi '(v)} - \frac{G'(v)}{2} v_{x_n}}{v_{x_n}^2} \end{align*} Therefore \begin{align*} 
div_y F = 0 \Leftrightarrow a( \Phi (v)) \Phi '(v) G'(v) + 2 [b( \Phi (v)) ( \Phi '(v))^2 + a( \Phi(v)) \Phi''(v) ] G(v) = 2 f( \Phi (v)) \end{align*} \end{proof} $ \\ $ \textbf{Note:} If we take $ a(t)=1 \;,\: b(t) = 0 = k(t) \;,\: l(t) = \frac{1}{2} $ and $ f(t) = g'(t) $ in Theorem 1, we have $ p(t) = -\frac{1}{2} \neq 0 $ and $ G(t) = 2 g(t) $ (i.e. the ODE $ a( \Phi (t)) \Phi '(t) G'(t) + 2 [b( \Phi (t)) ( \Phi '(t))^2 + a( \Phi(t)) \Phi''(t) ] G(t) = 2 f( \Phi (t)) $ is satisfied for $ \Phi(t) =t $), which recovers Proposition 2. $ \\ $ \subsection{Examples of Entire solutions of the Euler equations} $ \\ $ In this subsection we will determine some smooth entire solutions of the 2D and 3D Euler equations with the pressure a linear function with respect to the space variables. The relations between different classes of equations raise a general type of question: if an equation (A) can be transformed into an equation (B), can we extract all solutions of (B) utilizing the solutions of (A)? In particular, we can raise the following question: $ \\ $ \textbf{Question:} Let $ u: \mathbb{R}^3 \rightarrow \mathbb{R}^2 \;,\; (u = u(x,y,t)=(u_1,u_2) )$ be a smooth, nonconstant entire solution of the isobaric 2D Euler equations \begin{align}\label{Euler2DIsobaric} \begin{cases} u_{1 \: t} +u_1 u_{1 \: x} + u_2 u_{1 \: y} = 0 \\ u_{2 \: t} +u_1 u_{2 \: x} + u_2 u_{2 \: y} = 0 \\ u_{1 \: x} + u_{2 \: y} =0 \end{cases} \end{align} Is it then true that \begin{equation}\label{Euler2DSolIsobaric} u_1 = c_1 \: g(\beta x + \gamma y -( \beta \tilde{c}_1 + \gamma \tilde{c}_2 )t) + \tilde{c}_1 \;\; , \;\: u_2 = c_2 g(\beta x + \gamma y -( \beta \tilde{c}_1 + \gamma \tilde{c}_2 )t) + \tilde{c}_2 \;\;\; ? \end{equation} where $ c_1 \beta + c_2 \gamma =0 \;\;,\; c_1,c_2,\tilde{c}_1,\tilde{c}_2, \beta, \gamma \in \mathbb{R} .
\\ \\ $ From the form of the solution \eqref{Euler2DSolIsobaric} we can obtain a solution of the 2D Euler equations with pressure a linear function with respect to the space variables. Let $ u: \mathbb{R}^3 \rightarrow \mathbb{R}^2 \;,\; (u = u(x,y,t)=(u_1,u_2) )$ be such that \begin{equation}\label{Euler2DSol} \begin{gathered} u_1 = c_1 \: g(\beta x + \gamma y -( \beta \tilde{c}_1 + \gamma \tilde{c}_2 )t) + \lambda A(t) + \tilde{c}_1 \;\; , \;\: u_2 = c_2 g(\beta x + \gamma y -( \beta \tilde{c}_1 + \gamma \tilde{c}_2 )t) + \xi A(t) + \tilde{c}_2 \\ \textrm{and} \;\; p(x,y,t) = -a(t)( \lambda x+ \xi y) +b(t) \end{gathered} \end{equation} where $ A'(t)=a(t) \;,\; a,b: \mathbb{R} \rightarrow \mathbb{R} $ and $ c_1, \tilde{c}_1 , c_2, \tilde{c}_2 ,\beta, \gamma, \lambda , \xi \in \mathbb{R} $ are such that $ c_1 \beta + c_2 \gamma =0 $ and $ \lambda \beta + \xi \gamma =0 . \\ $ Then $ u = (u_1,u_2) $ satisfies \begin{align}\label{Euler2D} \begin{cases} u_{1 \: t} +u_1 u_{1 \: x} + u_2 u_{1 \: y} =-p_x \\ u_{2 \: t} +u_1 u_{2 \: x} + u_2 u_{2 \: y} =-p_y \\ u_{1 \: x} + u_{2 \: y} =0 \end{cases} \end{align} $ \\ \\ $ Now we give some examples of smooth entire solutions of the three-dimensional Euler equations.
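First, as a sanity check, the two-dimensional solution \eqref{Euler2DSol} can be verified symbolically. The following sympy sketch (ours, not part of the argument; the variable names are our own, and we take $ \gamma \neq 0 $ so that the constraints determine $ c_2 = -c_1\beta/\gamma $ and $ \xi = -\lambda\beta/\gamma $) checks that \eqref{Euler2DSol} satisfies \eqref{Euler2D}.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
c1, ct1, ct2, beta, gamma, lam = sp.symbols('c_1 ct_1 ct_2 beta gamma lambda')
g, A, b = sp.Function('g'), sp.Function('A'), sp.Function('b')

# Constraints c_1*beta + c_2*gamma = 0 and lambda*beta + xi*gamma = 0,
# solved for c_2 and xi (assuming gamma != 0):
c2 = -c1 * beta / gamma
xi = -lam * beta / gamma

s = beta * x + gamma * y - (beta * ct1 + gamma * ct2) * t
a = A(t).diff(t)                       # a(t) = A'(t)

u1 = c1 * g(s) + lam * A(t) + ct1
u2 = c2 * g(s) + xi * A(t) + ct2
p = -a * (lam * x + xi * y) + b(t)

# Both momentum equations and the incompressibility condition reduce to zero:
eq1 = u1.diff(t) + u1 * u1.diff(x) + u2 * u1.diff(y) + p.diff(x)
eq2 = u2.diff(t) + u1 * u2.diff(x) + u2 * u2.diff(y) + p.diff(y)
div = u1.diff(x) + u2.diff(y)

assert sp.simplify(eq1) == 0 and sp.simplify(eq2) == 0 and sp.simplify(div) == 0
```

Here $ g $, $ A $, $ b $ are left as undetermined smooth functions, so the check is fully symbolic.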
If $ u = (u_1,u_2,u_3) : \mathbb{R}^4 \rightarrow \mathbb{R}^3 $ where $ u_i = u_i(x,y,z,t) $ is such that \begin{equation}\label{Euler3DSol1} \begin{gathered} u_1(x,y,z,t) = G(c_1t -y+c_2z) \;\;\;,\;\; u_2(x,y,z,t) = H(c_1t-y+c_2z) -A(t) \\ u_3(x,y,z,t) = \frac{1}{c_2} H(c_1t-y+c_2z) - \frac{1}{c_2} A(t) + C \;\;\textrm{and} \;\; p(x,y,z,t) = a(t) (y + \frac{z}{c_2}) +b(t) \\ \textrm{where} \;\; A'(t)=a(t) \;,\; a,A,G,H : \mathbb{R} \rightarrow \mathbb{R} \;\;,\;\; c_1,c_2 \in \mathbb{R} \;\; \textrm{and} \;\; C = \frac{-c_1}{c_2} \end{gathered} \end{equation} then $ u = (u_1,u_2,u_3) $ is an entire solution of the Euler equations, that is $ u $ satisfies \begin{equation}\label{Euler3D} \begin{cases} u_{1 \: t} +u_1 u_{1 \: x} + u_2 u_{1 \: y}+ u_3 u_{1 \: z} = - p_x \\ u_{2 \: t} +u_1 u_{2 \: x} + u_2 u_{2 \: y} + u_3 u_{2 \: z} = - p_y \\ u_{3 \: t} +u_1 u_{3 \: x} + u_2 u_{3 \: y} + u_3 u_{3 \: z} = -p_z \\ u_{1 \: x} + u_{2 \: y} + u_{3 \: z} =0 \end{cases} \end{equation} Note that from symmetry properties of the Euler equations and from \eqref{Euler3DSol1} we can also have the following solution of \eqref{Euler3D}: \begin{equation}\label{Euler3DSol2} \begin{gathered} u_1(x,y,z,t) = \frac{1}{c_2} H(c_1t-z +c_2x) - \frac{1}{c_2} A(t) + C \;\;\;,\;\; u_2(x,y,z,t) = G(c_1t -z +c_2x) \\ u_3(x,y,z,t) = H(c_1t-z +c_2x) -A(t) \;\;\textrm{and} \;\; p(x,y,z,t) = a(t) (z + \frac{x}{c_2}) +b(t) \\ \textrm{where} \;\; A'(t)=a(t) \;,\; a,A,G,H : \mathbb{R} \rightarrow \mathbb{R} \;\;,\;\; c_1,c_2 \in \mathbb{R} \;\; \textrm{and} \;\; C = \frac{-c_1}{c_2} \end{gathered} \end{equation} and also, \begin{equation}\label{Euler3DSol3} \begin{gathered} u_1(x,y,z,t) = H(c_1t-x+c_2y) -A(t) \;\;,\; u_2(x,y,z,t) = \frac{1}{c_2} H(c_1t-x+c_2y) - \frac{1}{c_2} A(t) + C \\ u_3(x,y,z,t) = G(c_1t -x+c_2y) \;\;\textrm{and} \;\; p(x,y,z,t) = a(t) (x + \frac{y}{c_2}) +b(t) \\ \textrm{where} \;\; A'(t)=a(t) \;,\; a,A,G,H : \mathbb{R} \rightarrow \mathbb{R} \;\;,\;\; c_1,c_2 \in \mathbb{R} 
\;\; \textrm{and} \;\; C = \frac{-c_1}{c_2} \end{gathered} \end{equation} Finally, another example of a smooth entire solution of \eqref{Euler3D} is the following \begin{equation}\label{Euler3DSol4} \begin{gathered} u_1(x,y,z,t) = G( 2[(c_1 \tilde{c}_2)^2 - (c_2 \tilde{c}_1)^2]t + 2c_1c_2(c_1 \tilde{c}_2 - c_2 \tilde{c}_1)x + 2 \tilde{c}_1c_2^2y - 2 c_1^2 \tilde{c}_2z) -A(t) \\ u_2(x,y,z,t) = c_1 u_1(x,y,z,t) + \tilde{c}_1 \;\;\;,\;\;\; u_3(x,y,z,t) = c_2u_1(x,y,z,t) + \tilde{c}_2 \\ \textrm{and} \;\; p(x,y,z,t) = a(t) (x +c_1 y +c_2z) +b(t) \\ \textrm{where} \;\; A'(t) = a(t) \;\;,\; a,A,G : \mathbb{R} \rightarrow \mathbb{R} \;\; \textrm{and} \;\; c_1,c_2,\tilde{c}_1, \tilde{c}_2 \in \mathbb{R} . \end{gathered} \end{equation} The solution in \eqref{Euler3DSol4} can be written in a slightly more general form (taking $ k = -2\tilde{c}_1c_2^2 $ and $ l = 2c_1^2\tilde{c}_2 $ below recovers \eqref{Euler3DSol4}) \begin{equation}\label{Euler3DSol5} \begin{gathered} u_1(x,y,z,t) = G([k \tilde{c}_1 + l \tilde{c}_2]t + [k c_1 + l c_2]x -ky -lz) -A(t) \\ u_2(x,y,z,t) = c_1 u_1(x,y,z,t) + \tilde{c}_1 \;\;\;,\;\;\; u_3(x,y,z,t) = c_2u_1(x,y,z,t) + \tilde{c}_2 \\ \textrm{and} \;\; p(x,y,z,t) = a(t) (x +c_1 y +c_2z) \\ \textrm{where} \;\; A'(t) = a(t) \;\;,\; a,A,G : \mathbb{R} \rightarrow \mathbb{R} \;\; \textrm{and} \;\; c_1,c_2,\tilde{c}_1, \tilde{c}_2,k,l \in \mathbb{R} .
\end{gathered} \end{equation} (we can choose $ A $ such that $ A(0)=0 $). $ \\ $ Therefore we conclude with the following result. $ \\ $ \textbf{Theorem 2} Let $ u=(u_1,u_2,u_3) \;\:,\; u_i \:,p : \mathbb{R}^3 \times (0,+ \infty ) \rightarrow \mathbb{R} $ and consider the initial value problem \begin{equation}\label{Euler3DIV} \begin{cases} u_t + u \nabla u = - \nabla p \\ \textrm{div} \: u =0 \\ u(x,y,z,0)=g(x,y,z) \end{cases} \end{equation} where $ g = (g_1,g_2,g_3) $ is either of the form \begin{align}\label{Euler3DIVcond1} \begin{gathered} g=(g_1, c_1g_1 + \tilde{c}_1,c_2g_1+ \tilde{c}_2) \;\; \textrm{and} \;\; g_1(x,y,z)= G([kc_1 +lc_2]x-ky-lz) \\ c_1,c_2, \tilde{c}_1, \tilde{c}_2 , k,l \in \mathbb{R} \;\;,\; G \;\: \textrm{smooth} \end{gathered} \end{align} or \begin{align}\label{Euler3DIVcond2} \begin{gathered} g=(g_1, g_2,\frac{1}{c_2} g_2 -\frac{c_1}{c_2}) \;\; \textrm{and} \;\; g_1(x,y,z)= G(c_2z-y) \;\;,\;\; g_2(x,y,z) = H(c_2z-y) \\ c_1,c_2 \in \mathbb{R} \;\:,\;G,H \;\: \textrm{smooth} \end{gathered} \end{align} Then there exists a smooth solution of \eqref{Euler3DIV}, globally defined for $ t>0 $. In particular, $ u $ and $ p $ are given by \eqref{Euler3DSol5} if the initial value $ g $ is of the form \eqref{Euler3DIVcond1}, and by \eqref{Euler3DSol1} if $ g $ is of the form \eqref{Euler3DIVcond2}. $ \\ \\ $ The condition \eqref{Euler3DIVcond2} can easily be modified in order to obtain the solutions given by \eqref{Euler3DSol2} and \eqref{Euler3DSol3}. $ \\ $ \textbf{Remark 2.2} Such solutions can be extended to general dimensions, i.e. solutions of \eqref{Euler} for $ n \geq 4 $, together with the divergence-free condition and a pressure that is a linear function of the space variables. $ \\ $ \section{Applications} $ \\ $ An immediate consequence of Theorem 1 is a De Giorgi type result, that is, the level sets of the entire solutions are hyperplanes.
For simplicity, we state these applications in a simpler form of \eqref{Theorem1}. $ \\ \\ $ \textbf{Corollary 2} Let $ u,v : {\mathbb{R}}^2 \rightarrow \mathbb{R} $ be such that $ u_y, v_y >0 $ and satisfy the equations \begin{equation}\label{Corollary2eq1} \begin{gathered} \Delta u = f(u) \\ k(v) \Delta v + l(v) | \nabla v |^2 = g(v) \end{gathered} \end{equation} Suppose that $ u = \Phi(v) $, where $ \Phi : \mathbb{R} \rightarrow \mathbb{R} $ and $ k(t) \Phi '' (t) - l(t) \Phi ' (t) \neq 0 $. Then \begin{equation}\label{Corollary2eq2} u(x,y) = h(cx + y) \;\; \textrm{and} \;\; v(x,y)=p(cx+y) \end{equation} where $ h,p $ are solutions of the ODEs \begin{equation}\label{Corollary2eq3} h''(t) = \frac{f(h(t))}{c^2 +1} \;\; \textrm{and} \;\; k(p(t))p''(t) + l(p(t)) (p'(t))^2 = \frac{g(p(t))}{c^2 +1} \end{equation} $ \\ $ \begin{proof} By Theorem 1 we have that the function $ F(x,y) = \dfrac{u_x}{u_y} = \dfrac{v_x}{v_y} $ is an entire solution of the Burgers equation. Therefore $ F $ must be identically constant, i.e. $ F \equiv c $ (see \cite{AG}). Thus $ u_x = c u_y $ and $ v_x = c v_y $, and consequently $ u(x,y) = h(cx+y) $ and $ v(x,y) = p(cx +y) $. From \eqref{Corollary2eq1} we have \begin{align*} (c^2 +1) h''(cx+y) = f(h(cx+y)) \end{align*} and \begin{align*} (c^2 +1) k(p(cx+y)) p''(cx+y) + (c^2 +1) l(p(cx+y)) (p'(cx+y))^2 = g(p(cx+y)) \end{align*} \end{proof} Now we can generalize Corollary 1 via Theorem 1. $ \\ \\ $ \textbf{Corollary 3} Let $ u,v : \Omega \subset {\mathbb{R}}^3 \rightarrow \mathbb{R} \;\;, u_z \:, v_z >0 $ be solutions of \begin{equation}\label{Corollary3eq1} \begin{cases} \Delta u = f(u) \\ k(v) \Delta v + l(v) | \nabla v |^2 = g(v) \end{cases} \end{equation} and suppose $ u = \Phi (v) $ where $ \Phi : \mathbb{R} \rightarrow \mathbb{R} $ and $ k(t) \Phi '' (t) - l(t) \Phi ' (t) \neq 0 .
$ If $ \Phi $ satisfies the ODE \begin{equation}\label{Corollary3eq2} \Phi '(t) G'(t) + 2 \Phi''(t) G(t) = 2 f( \Phi (t)) \end{equation} where $ G(t) := \dfrac{k(t) f( \Phi (t)) - g(t) \Phi '(t)}{k(t) \Phi''(t) - l(t) \Phi'(t)} $, and $ \Omega $ is a star-shaped (or simply connected) open set, then there exists a function $ \psi $ such that $ {\psi}_y = \dfrac{u_x}{u_z} $ and $ {\psi}_x = - \dfrac{u_y}{u_z} $, and $ \psi $ satisfies the minimal surface equation \begin{equation}\label{Corollary3eq3} {\psi}_{yy} ( {\psi}_{x}^2 +1) -2 {\psi}_x {\psi}_y {\psi}_{xy} + {\psi}_{xx}({\psi}_y^2 +1) =0 \end{equation} In particular, if $ \Omega = {\mathbb{R}}^3 $, then the level sets of $ u,v $ are hyperplanes. $ \\ \\ $ \begin{proof} By Theorem 1 we have that $ div_{(x,y)} \: F =0 \Leftrightarrow F_{1 \: x} +F_{2 \: y} =0 $, and since $ \Omega $ is a star-shaped (or simply connected) open set, by Poincar\'e's lemma \begin{align*} F_1 = {\psi}_y \;\;,\; F_2 = - {\psi}_x \end{align*} for some function $ \psi $, where $ F_1 = \dfrac{u_x}{u_z} $ and $ F_2 = \dfrac{u_y}{u_z} .
\\ $ Now, as in the proof of Theorem 1, we have \begin{align}\label{Corollary3proofeq1} | \nabla v |^2 = G(v) \;\;,\; \textrm{where} \;\; G(v)= \frac{k(v) f( \Phi (v)) - g(v) \Phi '(v)}{k(v) \Phi''(v) - l(v) \Phi'(v)} \end{align} We define \begin{equation}\label{Corollary3proofeq2} \tilde{v}= H(v) \;\;,\; \textrm{where} \;\; H'(v) = \frac{1}{\sqrt{G(v)}} \end{equation} Then \begin{align}\label{Corollary3proofeq3} | \nabla \tilde{v} |^2 =1 \end{align} and we proceed as in the proof of Corollary 1 to obtain \begin{align*} \begin{gathered} \frac{u_x}{u_z} = \frac{v_x}{v_z}=a \;\;\;\; \textrm{and} \;\;\; \frac{u_y}{u_z} = \frac{v_y}{v_z} =b \\ \Rightarrow u(x,y,z) = h(ax+by+z) \;\;\;\; \textrm{and} \;\;\; v(x,y,z) = p(ax +by+z) \end{gathered} \end{align*} where $ h,p $ are solutions of the ODEs \begin{align*} h''(t) = \frac{f(h(t))}{a^2 +b^2 +1} \;\;\;\; \textrm{and} \;\;\; k(p(t)) p''(t) + l(p(t)) (p'(t))^2 = \frac{g(p(t))}{a^2 + b^2 +1} \end{align*} \end{proof} Note that for $ k=0 \;,\: l= \frac{1}{2} \;,\: g'=f $ and $ \Phi(t)=t $ we recover Corollary 1. $ \\ $ Now we will prove an analogue of Theorem 5.1 in \cite{CGS}, without boundedness assumptions on the solutions and also without excluding a priori some potential singularities of the solutions. The key observation in Proposition 3 below is to utilize the main result from \cite{CC}. $ \\ \\ $ \textbf{Proposition 3} Let $ u : \mathbb{R}^n \rightarrow \mathbb{R} $ be a smooth solution of $ \Delta u = W'(u) \;,\: W: \mathbb{R} \rightarrow [0, + \infty) $, except perhaps on a closed set $ S $ of potential singularities with $ \mathcal{H}^1(S) =0 $, and such that \begin{align*} (1) \;\: \frac{\partial u}{\partial x_n} >0 \;\;\;\;,\; (2) \;\: \frac{1}{2}|\nabla u|^2 = W(u) \end{align*} where $ \mathcal{H}^1 $ is the Hausdorff 1-measure in $ \mathbb{R}^n . $ Then \begin{align*} u(x) = g( a \cdot x +b) \;\;\;,\;\; \textrm{for some} \;\: a \in \mathbb{R}^n \;\:,\: |a|=1 \;,\; b \in \mathbb{R} \end{align*} and $ g $ is such that $ g'' = W'(g) . \\ $ \begin{proof} Let $ v = G(u) $, where $ G'(u) = \dfrac{1}{\sqrt{2W(u)}} $; then \begin{align*} | \nabla v |^2 = (G'(u))^2 | \nabla u |^2 = 1 \;\;\;,\;\; \textrm{on} \;\: \mathbb{R}^n \setminus S \end{align*} so $ v $ is a smooth solution of the eikonal equation except perhaps on a closed set $ S $ of potential singularities with $ \mathcal{H}^1 (S) =0 $. Thus from the result of \cite{CC} we have that $ v= a \cdot x +b \;\;,\; a \in \mathbb{R}^n \;,\: |a|=1 \;, \; b \in \mathbb{R} $ or $ v = | x-x_0 | +c $ for some $ x_0 \in \mathbb{R}^n \;\;,\; c \in \mathbb{R} . \\ $ Therefore, \begin{align*} u(x) = G^{-1} (a \cdot x +b) \;\;\: \textrm{or} \;\: u(x) = G^{-1} (| x-x_0 | +c) \end{align*} where $ G : \mathbb{R} \rightarrow \mathbb{R} $ is such that $ G' = \dfrac{1}{\sqrt{2W}} $. If $ u = G^{-1}(d +c) $ where $ d(x)= |x-x_0|$, then \begin{align*} \Delta u = (G^{-1})''(d +c) + \frac{n-1}{d} (G^{-1})'(d+c) = W'(u) = W'(G^{-1}(d+c)) \end{align*} and also \begin{align*} | \nabla u |^2 = ((G^{-1})'(d+c))^2 = 2 W(u) \Rightarrow (G^{-1})'(d+c) = \sqrt{2W(G^{-1}(d+c))} \end{align*} and thus $ (G^{-1})'' = W'(G^{-1}) $, so we obtain \begin{align*} (G^{-1})''(d +c) + \frac{n-1}{d} (G^{-1})'(d+c) = (G^{-1})''(d+c) \\ \Rightarrow (G^{-1})'(d+c) = 0 \Rightarrow \sqrt{2W(G^{-1}(d+c))} =0 \end{align*} which contradicts the fact that $ W $ is strictly positive on $ u(\mathbb{R}^n) $ (see Remark 2.1). Therefore $ u(x) = g (a \cdot x +b) $ where $ g = G^{-1} $. \end{proof} \textbf{Remark 3.1:} Note that radially symmetric solutions are excluded in Proposition 3 above, as we saw in the proof. On the other hand, as is well known (see \cite{GNN}), if $ f $ is smooth and $ u \in C^2 ( \overline{ \Omega}) $ is a positive solution of $ - \Delta u = f(u) $ for $ x \in B_1 \subset \mathbb{R}^n $ that vanishes on $ \partial B_1 $, then $ u $ is radially symmetric. So radially symmetric solutions of the Allen-Cahn equations are incompatible with the equipartition.
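To illustrate the one-dimensional profile concretely, take the standard double-well potential $ W(s) = \frac{1}{4}(1-s^2)^2 $ (our choice of example, not fixed by the text): then $ g(t) = \tanh(t/\sqrt{2}) $ solves $ g'' = W'(g) $, and for any unit vector $ a $ the function $ u(x) = g(a \cdot x + b) $ satisfies both the Allen-Cahn equation and the equipartition. The following sympy sketch checks this in $ \mathbb{R}^3 $.

```python
import sympy as sp

x, y, z, b = sp.symbols('x y z b', real=True)
a1, a2 = sp.symbols('a1 a2', real=True)
a3 = sp.sqrt(1 - a1**2 - a2**2)            # choose a3 so that |a| = 1

s = a1*x + a2*y + a3*z + b
u = sp.tanh(s / sp.sqrt(2))                # g(t) = tanh(t/sqrt(2)) solves g'' = W'(g)
W = (1 - u**2)**2 / 4                      # double-well potential evaluated at u
Wp = u**3 - u                              # W'(s) = s^3 - s evaluated at u

lap = sum(u.diff(v, 2) for v in (x, y, z))
grad2 = sum(u.diff(v)**2 for v in (x, y, z))

assert sp.simplify(lap - Wp) == 0          # Delta u = W'(u)
assert sp.simplify(grad2 / 2 - W) == 0     # equipartition: |grad u|^2 / 2 = W(u)
```

The monotonicity hypothesis $ u_{x_3} > 0 $ holds whenever $ a_3 > 0 $, i.e. $ a_1^2 + a_2^2 < 1 $.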
$ \\ \\ $ \section{The Allen-Cahn system} \subsection{Applications of the Equipartition} $ \\ $ We begin by proposing an analogue of the De Giorgi conjecture for Allen-Cahn systems, as a generalization of \cite{CGS} to the vector case. First, the property that the level sets of a solution are hyperplanes can be expressed equivalently as $ \dfrac{u_{x_i}}{u_{x_n}} = c_i \;,\;\: i =1,...,n-1 \;\: (u: \mathbb{R}^n \rightarrow \mathbb{R} \;\: , \; u_{x_n}>0 $), that is, if we consider $ v_i = \dfrac{u_{x_i}}{u_{x_n}} \;\:,\; i=1,...,n-1,\;\: v_i : \mathbb{R}^n \rightarrow \mathbb{R} $, then \begin{align*} v_i = c_i \;\:,\; i=1,...,n-1 \Leftrightarrow rank( \nabla v_i) <1 \;\:,\; i=1,...,n-1 \end{align*} We can see the above statement as follows. $ \\ $ If $ v_i = c_i \;\:,\; i=1,...,n-1 $ then $ \nabla v_i =0 \Rightarrow rank( \nabla v_i) =0 <1 . \\ $ Conversely, if $ rank( \nabla v_i) <1 $, that is $ rank( \nabla v_i) =0 $ since $ v_i : \mathbb{R}^n \rightarrow \mathbb{R} $, we have by Sard's theorem that $ \mathcal{L}^1(v_i(\mathbb{R}^n)) = 0 \;\:,\; i=1,...,n-1 $ (see for example \cite{RN}), where $ \mathcal{L}^1 $ is the Lebesgue measure on $ \mathbb{R}$. Since $ v_i(\mathbb{R}^n) $ is connected, $\mathcal{L}^1(v_i(\mathbb{R}^n)) =0 \Rightarrow v_i =c_i $ (constant), $i=1,...,n-1. \\ $ Now we can generalize the above to the vector case as follows: $ \\ $ Let $ u : \mathbb{R}^n \rightarrow \mathbb{R}^m \;\:, \; u = (u_1,...,u_m) \;\:,\; u_i = u_i (x_1,...,x_n) $; we consider the functions \begin{align*} v_{ij} = \frac{u_{i \: x_j}}{u_{i \: x_n}} \;\:, \; i =1,...,m \;\:,\; j =1,...,n-1 \end{align*} and $ \tilde{v}^k = (v_{1k},...,v_{mk}) \;\:,\; \tilde{v}^k : \mathbb{R}^n \rightarrow \mathbb{R}^m \;\:,\; k =1,...,n-1 $ and $ \nabla \tilde{v}^k : \mathbb{R}^n \rightarrow \mathbb{R}^{m \times n} $.
Thus, if $ u $ is a solution of the Allen-Cahn system, we could ask (under appropriate assumptions) whether $ rank ( \nabla \tilde{v}^k) < \min \lbrace n,m \rbrace = \mu $ (and by Sard's Theorem we would have that $ \mathcal{L}^{\mu} ( \tilde{v}^k (\mathbb{R}^n)) =0 $, where $ \mathcal{L}^{\mu} $ is the Lebesgue measure in $ \mathbb{R}^{\mu} $). $ \\ $ Apart from $ u $ being a solution of the Allen-Cahn system (and $ u_{i \: x_n}>0 $), we need further assumptions, as in the scalar case. However, the relation with minimal surfaces in the vector case is far more complicated than in the scalar case. Therefore, one possible assumption would be that $ u $ also satisfies the equipartition, i.e. $ \dfrac{1}{2} | \nabla u |^2 = W(u) $. We will now prove that the above is true, at least for $ n=m=2 $, that is, if $ v = (v_1,v_2) \;,\; v_i = \dfrac{u_{i \: x}}{u_{i \:y}} $ and $ u=(u_1,u_2) $ is a solution of the Allen-Cahn system that satisfies the equipartition, then $ rank(\nabla v) <2 $. In fact, we can obtain a considerably stronger result about the structure of solutions in two dimensions, as stated in Theorem 3 below. $ \\ \\ $ \textbf{Theorem 3} Let $ u : \mathbb{R}^2 \rightarrow \mathbb{R}^2 $ be a smooth solution of \begin{equation}\label{Theorem2AllenCahn} \Delta u = W_u(u) \end{equation} with $ u_{iy}>0 \;,\; i =1,2 $ and $ W: \mathbb{R}^2 \rightarrow [0, +\infty) $ smooth.
$ \\ $ If $ u $ satisfies \begin{equation}\label{Theorem2equipartition} \frac{1}{2} | \nabla u |^2 = W(u) \end{equation} Then \begin{equation}\label{Theorem2Sol1} \textrm{either} \;\;\;\: u(x,y) = (U_1 (c_1 x +y), U_2(c_2 x+y)) \;\;\;,\;\: \textrm{where} \;\: U_i'' = \frac{W_{U_i}(U_1,U_2)}{c_i^2 +1} \;\: i=1,2 \end{equation} \begin{equation}\label{Theorem2Sol2} \textrm{or} \;\;\;\;\;\;\;\; \begin{cases} h(\dfrac{u_{1x}}{u_{1y}} , \dfrac{u_{2x}}{u_{2y}}) =0 \;\;\;, \\ \textrm{and} \;\;\; u_{2y}^2 h_{v_1} - u_{1y}^2 h_{v_2} =0 \end{cases} \end{equation} for some $ \;\: h: \mathbb{R}^2 \rightarrow \mathbb{R} . $ In particular, $ \mathcal{L}^2(v(\mathbb{R}^2 ))=0 $, where $ v= (\dfrac{u_{1x}}{u_{1y}} , \dfrac{u_{2x}}{u_{2y}}). \\ $ \begin{proof} We differentiate \eqref{Theorem2equipartition} with respect to $ x \;,\: y $ \begin{equation}\label{Theorem2proofeq1} \begin{cases} u_{1x} u_{1xx} + u_{1y} u_{1yx} + u_{2x} u_{2xx} + u_{2y} u_{2yx} = W_{u_1} u_{1x} + W_{u_2} u_{2x} \\ u_{1x} u_{1xy} + u_{1y} u_{1yy} + u_{2x} u_{2xy} + u_{2y} u_{2yy} = W_{u_1} u_{1y} +W_{u_2} u_{2y} \end{cases} \end{equation} and utilizing \eqref{Theorem2AllenCahn} we get \begin{equation}\label{Theorem2proofeq2} \begin{cases} u_{1x} u_{1xx} + u_{1y} u_{1yx} + u_{2x} u_{2xx} + u_{2y} u_{2yx} = u_{1x} \Delta u_1 + u_{2x} \Delta u_2 \\ u_{1x} u_{1xy} + u_{1y} u_{1yy} + u_{2x} u_{2xy} + u_{2y} u_{2yy} = u_{1y} \Delta u_1 + u_{2y} \Delta u_2 \end{cases} \end{equation} \begin{equation}\label{Theorem2proofeq3} \Leftrightarrow \begin{cases} u_{1y} u_{1yx} + u_{2y} u_{2yx} = u_{1x}u_{1yy} +u_{2x} u_{2yy} \\ u_{1x} u_{1xy} + u_{2x} u_{2xy} = u_{1y} u_{1xx} + u_{2y} u_{2xx} \end{cases} \end{equation} Now we define $ v_i := \dfrac{u_{ix}}{u_{iy}} \;,\; i=1,2 $ and by the second equation in \eqref{Theorem2proofeq3} we have \begin{equation}\label{Theorem2proofeq4} \begin{gathered} v_{1x}= \frac{u_{1xx}u_{1y}-u_{1x}u_{1yx}}{u_{1y}^2} = \frac{u_{2x}u_{2xy}-u_{2y}u_{2xx}}{u_{1y}^2} = -\frac{u_{2y}^2}{u_{1y}^2} 
v_{2x} \\ \Leftrightarrow u_{1y}^2 v_{1x} + u_{2y}^2 v_{2x} = 0 \end{gathered} \end{equation} similarly by the first equation in \eqref{Theorem2proofeq3} we have \begin{equation}\label{Theorem2proofeq5} u_{1y}^2 v_{1y} + u_{2y}^2 v_{2y} = 0 \end{equation} From \eqref{Theorem2proofeq4}, \eqref{Theorem2proofeq5} and the assumption $ u_{iy}>0 \;,\; i=1,2 $ we obtain that \begin{equation} v_{1x} v_{2y} - v_{1y}v_{2x} =0 \Leftrightarrow det( \nabla v) =0 \;\;,\; \forall \; (x,y) \in \mathbb{R}^2 \end{equation} Since $ det( \nabla v) = 0 $, we have that $ rank ( \nabla v)<2 $ and by Sard's Theorem (see for example \cite{RN}, p. 20) we have that $ \mathcal{L}^2(v(\mathbb{R}^2 ))=0 $. By Theorem 1.4.14 in \cite{RN}, since $ rank ( \nabla v)<2 $, we have that $ v_1,v_2 $ are functionally dependent, that is, there exists a smooth function $ h: \mathbb{R}^2 \rightarrow \mathbb{R} $ such that \begin{equation}\label{Theorem2proofeq6} h(v_1 ,v_2) = 0 \Leftrightarrow h(\frac{u_{1x}}{u_{1y}} , \frac{u_{2x}}{u_{2y}}) =0 \;\;,\; \forall \; (x,y) \in \mathbb{R}^2 \end{equation} Thus we have \begin{equation}\label{Theorem2proofeq7} h_{v_1}v_{1x} + h_{v_2}v_{2x} =0 \;\; \textrm{and} \;\: h_{v_1}v_{1y} + h_{v_2} v_{2y} =0 \end{equation} so, together with \eqref{Theorem2proofeq4}, \eqref{Theorem2proofeq5} we get \begin{equation}\label{Theorem2proofeq8} (u_{1y}^2 h_{v_2} - u_{2y}^2 h_{v_1}) v_{2x} =0 \;\; \textrm{and} \;\: (u_{1y}^2 h_{v_2} - u_{2y}^2 h_{v_1}) v_{2y} =0 \end{equation} which gives \begin{equation}\label{Theorem2proofeq9} \begin{gathered} v_{2x} = 0 \;\; \textrm{and} \;\; v_{2y} = 0 \\ \textrm{or} \;\; u_{1y}^2 h_{v_2} - u_{2y}^2 h_{v_1}=0 \end{gathered} \end{equation} in the first case we also have \begin{equation}\label{Theorem2proofeq10} v_{1x} = 0 \;\; \textrm{and} \;\; v_{1y} = 0 \end{equation} and therefore \begin{equation}\label{Theorem2proofeq11} \begin{gathered} \frac{u_{1x}}{u_{1y}} = c_1 \;\; \textrm{and} \;\; \frac{u_{2x}}{u_{2y}} =c_2 \Rightarrow u_1 (x,y) = 
U_1 (c_1 x +y) \;\; \textrm{and} \;\; u_2(x,y) = U_2 (c_2 x +y) \end{gathered} \end{equation} where \begin{align*} U_i'' = \frac{W_{U_i}(U_1,U_2)}{c_i^2 +1} \;\:,\; i=1,2. \end{align*} In the second case we see that both equations of \eqref{Theorem2Sol2} are satisfied. \end{proof} $ \\ \\ $ \textbf{Note:} If $ W(u_1,u_2) = W_1(u_1) +W_2(u_2) $, then \eqref{Theorem2AllenCahn} becomes \begin{align*} \Delta u_i = W_i'(u_i) \;\;,\; i=1,2 \end{align*} so, in order for the De Giorgi conjecture to hold, we should assume $ u_{iy} >0 \;,\; i=1,2 $, as in Theorem 3 above. $ \\ \\ $ \subsection{The Leray projection on the Allen-Cahn system} $ \\ $ We begin with a calculation by which we will obtain an equation independent of the potential $ W . \\ $ Let $ u : \mathbb{R}^2 \rightarrow \mathbb{R}^2 $ be a smooth solution of the system \begin{equation}\label{AllenCahnSystem} \Delta u = W_u(u) \Leftrightarrow \begin{cases} \Delta u_1 = W_{u_1}(u_1,u_2) \\ \Delta u_2 = W_{u_2} (u_1,u_2) \end{cases} \end{equation} where $ W : \mathbb{R}^2 \rightarrow \mathbb{R} $ is a $ C^2 $ potential.
From \eqref{AllenCahnSystem}, differentiating with respect to $ x $ and $ y $, we obtain \begin{equation}\label{DifferAllenCahnSystem1} \begin{cases} \Delta u_{1y} = W_{u_1 u_1} u_{1y} + W_{u_1 u_2} u_{2y} \\ \Delta u_{1x} = W_{u_1 u_1} u_{1x} + W_{u_1 u_2} u_{2x} \\ \Delta u_{2y} = W_{u_2 u_1} u_{1y} + W_{u_2 u_2} u_{2y} \\ \Delta u_{2x} = W_{u_2 u_1} u_{1x} + W_{u_2 u_2} u_{2x} \end{cases} \end{equation} and therefore \begin{equation}\label{DifferAllenCahnSystem2} u_{1x} \Delta u_{1y} + u_{2x} \Delta u_{2y} = W_{u_1 u_1} u_{1y} u_{1x} + W_{u_1 u_2} ( u_{1x} u_{2y} + u_{1y} u_{2x} ) + W_{u_2 u_2} u_{2y} u_{2x} \end{equation} thus we have \begin{equation}\label{DifferAllenCahnSystem3} u_{1x} \Delta u_{1y} + u_{2x} \Delta u_{2y} = u_{1y} \Delta u_{1x} + u_{2y} \Delta u_{2x} \end{equation} Now we will apply the Helmholtz-Leray decomposition, which resolves a vector field $ u $ in $ \mathbb{R}^n \;\: (n = 2, 3) $ into the sum of a gradient and a divergence-free vector field. Regardless of any boundary conditions, a given vector field $ u $ can be decomposed in the form \begin{align*} u = \nabla \phi + \tilde{\sigma} = ( \phi_x + \tilde{\sigma}_1 , \phi_y + \tilde{\sigma}_2) \end{align*} where $ \operatorname{div} \tilde{\sigma} =0 \Leftrightarrow \tilde{\sigma}_{1 x} + \tilde{\sigma}_{2 y} =0 $. Since we are in two dimensions, we may write $ \tilde{\sigma}_1 = - \sigma_y \;,\; \tilde{\sigma}_2 = \sigma_x $. So, we have that \begin{align*} u = ( \phi_x - \sigma_y , \phi_y + \sigma_x) \end{align*} for some $ \phi , \sigma : \mathbb{R}^2 \rightarrow \mathbb{R} $. 
Utilizing now this decomposition of $ u $, we obtain \begin{equation}\label{Helmholtz-LerayDecomEq} \begin{gathered} (\phi_{xx} - \sigma_{yx}) \Delta (\phi_{xy} - \sigma_{yy}) + ( \phi_{yx} + \sigma_{xx}) \Delta (\phi_{yy} + \sigma_{xy}) \\ = (\phi_{xy} - \sigma_{yy}) \Delta(\phi_{xx} - \sigma_{yx}) + ( \phi_{yy} + \sigma_{xy}) \Delta( \phi_{yx} + \sigma_{xx}) \end{gathered} \end{equation} Thus, if in particular we apply the Leray projection, $ v = \mathbb{P}(u) $, we have that $ v = \tilde{\sigma} $, that is, $ v = ( - \sigma_y , \sigma_x) $. So, from \eqref{Helmholtz-LerayDecomEq} we have \begin{equation}\label{LerayPrEq1} \begin{gathered} \sigma_{yx} \Delta \sigma_{yy} + \sigma_{xx} \Delta \sigma_{xy} = \sigma_{yy} \Delta \sigma_{yx} + \sigma_{xy} \Delta \sigma_{xx} \\ \Leftrightarrow (\sigma_{xx} - \sigma_{yy}) \Delta \sigma_{xy} = \sigma_{xy} \Delta ( \sigma_{xx} - \sigma_{yy}) \end{gathered} \end{equation} Note that one class of solutions to \eqref{LerayPrEq1} consists of those $ \sigma $ that satisfy \begin{equation}\label{LerayPrEq2} c_1 \sigma_{xy} = c_2 ( \sigma_{xx} - \sigma_{yy}) \end{equation} which we can solve explicitly in $ \mathbb{R}^2 $: \begin{equation}\label{LerayPrSol} \begin{gathered} \sigma(x,y) = A(x) + B(y) \;\;\;,\;\; \textrm{if} \;\: c_2=0 \\ \sigma(x,y) = F(cx +y) + G(x - cy) \;\;\;,\;\; \textrm{where} \;\: c = \dfrac{c_1 + \sqrt{c_1^2 + 4c_2^2}}{2c_2} \;,\;\: \textrm{if} \;\: c_2 \neq 0 \end{gathered} \end{equation} for arbitrary functions $ A,B,F,G : \mathbb{R} \rightarrow \mathbb{R} $. 
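The $ c_2 \neq 0 $ case of \eqref{LerayPrSol} is easy to check numerically. The following sketch (our own verification aid, not part of the argument; the sample choices $ F = \sin $, $ G(s) = e^{-s^2} $ and the constants $ c_1, c_2 $ are illustrative) confirms via finite differences that $ \sigma(x,y) = F(cx+y) + G(x-cy) $ satisfies \eqref{LerayPrEq2}:

```python
import math

# Numerical check (ours) that sigma(x, y) = F(c x + y) + G(x - c y), with
# c = (c_1 + sqrt(c_1^2 + 4 c_2^2)) / (2 c_2), solves
# c_1 sigma_xy = c_2 (sigma_xx - sigma_yy).
# F and G below are arbitrary smooth sample choices.
c1, c2 = 1.0, 2.0
c = (c1 + math.sqrt(c1 ** 2 + 4 * c2 ** 2)) / (2 * c2)

F = math.sin                      # sample F
G = lambda s: math.exp(-s * s)    # sample G

def sigma(x, y):
    return F(c * x + y) + G(x - c * y)

h = 1e-3  # step size for central second differences

def sigma_xx(x, y):
    return (sigma(x + h, y) - 2 * sigma(x, y) + sigma(x - h, y)) / h ** 2

def sigma_yy(x, y):
    return (sigma(x, y + h) - 2 * sigma(x, y) + sigma(x, y - h)) / h ** 2

def sigma_xy(x, y):
    return (sigma(x + h, y + h) - sigma(x + h, y - h)
            - sigma(x - h, y + h) + sigma(x - h, y - h)) / (4 * h ** 2)

# The residual of c_1 sigma_xy - c_2 (sigma_xx - sigma_yy) over a grid of points
# should consist of finite-difference error only.
residual = max(abs(c1 * sigma_xy(x / 2, y / 2)
                   - c2 * (sigma_xx(x / 2, y / 2) - sigma_yy(x / 2, y / 2)))
               for x in range(-4, 5) for y in range(-4, 5))
print(residual)
```

The check works because $ c $ is precisely the root of $ c_2 c^2 - c_1 c - c_2 = 0 $, which makes the contributions of $ F $ and $ G $ cancel separately.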
In the first case, the Leray projection of the solution is of the form \begin{align}\label{LerayPrSol1} v = \mathbb{P}(u) = ( b(y) , a(x)) \end{align} and in the second case \begin{align}\label{LerayPrSol2} v = \mathbb{P}(u) = ( c g(x -cy) - f(cx+y), g(x -cy) + cf(cx +y)) \end{align} Similarly, if we take the projection to the space of gradients, we have $ \tilde{v} = ( \phi_x , \phi_y) $, which will also satisfy \begin{equation}\label{GradProjEq} (\phi_{xx} - \phi_{yy}) \Delta \phi_{xy} = \phi_{xy} \Delta ( \phi_{xx} - \phi_{yy}) \end{equation} so again, the projection to the space of gradients of the solution will be of the form \begin{equation}\label{GradProjSol} \begin{gathered} \textrm{either} \;\: \tilde{v}(x,y) = (\tilde{a}(x),\tilde{b}(y)) \\ \textrm{or} \;\: \tilde{v}(x,y) = ( c \tilde{f}(cx+y) +\tilde{g}(x-cy), \tilde{f}(cx+y) - c \tilde{g}(x-cy)) \end{gathered} \end{equation} Therefore, if we determine a class of potentials $ W $ such that the solutions (or some solutions) are invariant under the Leray projection (or the projection to the space of gradients), we can obtain explicit solutions of the form \eqref{LerayPrSol} or \eqref{GradProjSol}. In the Appendix we give such examples. $ \\ \\ \\ \\ $ \begin{appendix} \begin{LARGE} \textbf{Appendix} \end{LARGE} \section{Some examples of entire solutions of the Allen-Cahn system} \label{sec:appendix} $ \\ \\ $ We note that solutions of the form \eqref{LerayPrSol} and \eqref{GradProjSol} are equivalent in the special case that \eqref{LerayPrEq2} is satisfied. So in the class of solutions of \eqref{LerayPrEq2} the Leray projection is, in some sense, equivalent to the projection to the space of gradients. Suppose now that $ u = \nabla \phi $ for some $ \phi : \mathbb{R}^2 \rightarrow \mathbb{R} $, that is, a solution of the Allen-Cahn system remains invariant under the projection to the space of gradients. 
Then, as in \eqref{GradProjEq}, we have \begin{equation} \phi_{xy} \Delta (\phi_{xx} - \phi_{yy}) = (\phi_{xx} - \phi_{yy}) \Delta \phi_{xy} \end{equation} So a simple solution to (A.1) is \begin{equation} \phi_{xx} - \phi_{yy} = 0 \Rightarrow \phi (x,y) = F(x+y) + G(x-y) \end{equation} and $ u(x,y) = ( \phi_x , \phi_y) $, so in this case $ u $ has the form \begin{equation} u(x,y) = (f(x+y) + g(x-y),f(x+y) - g(x-y)) \end{equation} for some $ f,g : \mathbb{R} \rightarrow \mathbb{R} . \\ $ If $ u $ has the form (A.3), we can see that it also satisfies the equipartition. Indeed, \eqref{AllenCahnSystem} becomes \begin{equation} \begin{cases} 2f'' +2g'' = W_{u_1} \\ 2f'' - 2g'' = W_{u_2} \end{cases} \Rightarrow \begin{cases} 2(f'' +g'')(f' +g') = W_{u_1}(f' +g') \\ 2(f'' - g'')(f' -g') = W_{u_2}(f' -g') \end{cases} \end{equation} \begin{equation} \begin{gathered} \Rightarrow 4f''f' +4g''g' = W_{u_1}(f' +g') + W_{u_2}(f' -g') \\ \Rightarrow 2 (f')^2 +2(g')^2 = W(f+g,f-g) +c \end{gathered} \end{equation} $ \\ $ and the equipartition can be written as \begin{equation} \begin{gathered} \frac{1}{2} | \nabla u |^2 = W(u) \\ \Leftrightarrow 2 (f'(x+y))^2 + 2 (g'(x-y))^2 = W(f(x+y)+g(x-y),f(x+y)-g(x-y)) \end{gathered} \end{equation} $ \\ $ (the system \eqref{AllenCahnSystem} is unchanged if we add a constant to the potential) $ \\ $ First we note that solutions of the form (A.3) satisfy \eqref{Theorem2Sol2} in Theorem 3. Indeed, if $ u $ is of the form (A.3), \begin{equation} \begin{gathered} u_1 = f(x+y) +g(x-y) \;\: \textrm{and} \;\: u_2 = f(x+y) -g(x-y) \\ \Rightarrow \frac{u_{1x}}{u_{1y}} = \frac{f'+g'}{f'-g'} =v_1 \;\;\: \textrm{and} \;\;\: \frac{u_{2x}}{u_{2y}} = \frac{f'-g'}{f'+g'} =v_2 \end{gathered} \end{equation} so the function $ h :\mathbb{R}^2 \rightarrow \mathbb{R} $ in \eqref{Theorem2Sol2} is $ h(s,t) = st -1 $. 
Also, \begin{equation} \frac{u_{1y}^2}{u_{2y}^2} = \frac{(f' -g')^2}{(f'+g')^2} \;\;\: \textrm{and} \;\;\: \dfrac{h_{v_1}}{h_{v_2}} = \dfrac{v_2}{v_1} = \dfrac{(f' -g')^2}{(f'+g')^2} = \frac{u_{1y}^2}{u_{2y}^2} \end{equation} Now we will see some examples of solutions to the Allen-Cahn system that are not of the form \eqref{Theorem2Sol1} in Theorem 3 (which are more similar to the ones in the scalar case). Some of these solutions are of the form (A.3), and for all solutions of this form the function $ h $ in \eqref{Theorem2Sol2} is, as mentioned above, $ h(s,t) = st -1 . \\ \\ $ \textbf{Example (1)} If $ W(u_1,u_2) = u_1 u_2 $, then \begin{align*} u(x,y) = (\textrm{cosh} (\dfrac{x+y}{\sqrt{2}}) + \textrm{sin}(\dfrac{x-y}{\sqrt{2}}), \textrm{cosh}(\dfrac{x+y}{\sqrt{2}}) - \textrm{sin}(\dfrac{x-y}{\sqrt{2}})) \end{align*} where $ \textrm{cosh} (t) = \dfrac{e^{t} + e^{-t}}{2} $, is a solution of $ \Delta u = W_u(u) $ that satisfies the equipartition and is of the form \eqref{Theorem2Sol2}. A more general solution is \begin{align*} u(x,y) = (c_1 e^{a_1x +b_1y} +c_2 e^{a_2x +b_2y} +c_3 \textrm{sin}(a_3x +b_3y) + c_4 \textrm{cos}(a_4x +b_4y), \\ c_1 e^{a_1x +b_1y} +c_2 e^{a_2x +b_2y} -c_3 \textrm{sin}(a_3x +b_3y) - c_4 \textrm{cos}(a_4x +b_4y)) \\ \textrm{where} \;\;\; a_i^2+b_i^2=1 \;,\;\; i=1,2,3,4 \;\:,\; c_i \in \mathbb{R} \;\: i=1,2,3,4. \end{align*} However, not all solutions in this form satisfy the equipartition. In this example the zero set of the potential is $ \lbrace W= 0 \rbrace = \lbrace u_1=0 \rbrace \cup \lbrace u_2=0 \rbrace $. Such potentials $ W $ belong to a class of potentials that have been thoroughly studied in \cite{CL}. 
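Example (1) lends itself to a direct numerical check. The sketch below (our own verification aid, not part of the original text) confirms at sample points, via finite differences, that $ \Delta u_1 = W_{u_1} = u_2 $, $ \Delta u_2 = W_{u_2} = u_1 $, and that the equipartition $ \frac{1}{2}|\nabla u|^2 = W(u) $ holds:

```python
import math

# Numerical verification (ours) of Example (1): for W(u1, u2) = u1 * u2, the map
#   u = (cosh((x+y)/sqrt(2)) + sin((x-y)/sqrt(2)),
#        cosh((x+y)/sqrt(2)) - sin((x-y)/sqrt(2)))
# satisfies Delta u1 = u2, Delta u2 = u1 and (1/2)|grad u|^2 = u1 * u2.
r2 = math.sqrt(2)

def u1(x, y):
    return math.cosh((x + y) / r2) + math.sin((x - y) / r2)

def u2(x, y):
    return math.cosh((x + y) / r2) - math.sin((x - y) / r2)

h = 1e-4  # finite-difference step

def lap(f, x, y):  # five-point Laplacian
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h ** 2

def dx(f, x, y):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def dy(f, x, y):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

pts = [(x / 3, y / 3) for x in range(-3, 4) for y in range(-3, 4)]

# residual of the Allen-Cahn system Delta u = W_u(u)
pde_res = max(max(abs(lap(u1, x, y) - u2(x, y)),
                  abs(lap(u2, x, y) - u1(x, y))) for x, y in pts)

# residual of the equipartition (1/2)|grad u|^2 - W(u)
eq_res = max(abs(0.5 * (dx(u1, x, y) ** 2 + dy(u1, x, y) ** 2
                        + dx(u2, x, y) ** 2 + dy(u2, x, y) ** 2)
                 - u1(x, y) * u2(x, y)) for x, y in pts)
print(pde_res, eq_res)
```

Both residuals consist of finite-difference error only, since with $ s=(x+y)/\sqrt{2} $, $ t=(x-y)/\sqrt{2} $ one has $ \frac{1}{2}|\nabla u|^2 = \sinh^2 s + \cos^2 t = \cosh^2 s - \sin^2 t = u_1 u_2 $ exactly.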
$ \\ \\ $ \textbf{Example (2)} If $ W(u_1,u_2)= \dfrac{[(u_1+u_2)^2-8]^2+[(u_1-u_2)^2-8]^2}{256} $, then \begin{align} u(x,y) = \sqrt{2} \: (\textrm{tanh}(\dfrac{x+y}{2\sqrt{2}}) + \textrm{tanh}(\dfrac{x-y}{2\sqrt{2}}), \textrm{tanh}(\dfrac{x+y}{2\sqrt{2}}) - \textrm{tanh}(\dfrac{x-y}{2\sqrt{2}})) \end{align} is a solution of $ \Delta u = W_u(u) $ that satisfies the equipartition (and is of the form \eqref{Theorem2Sol2} with $ h(s,t) = st-1 $). In addition, $ u $ above connects all four phases of the potential $ W $ at infinity, that is, \begin{align*} \lim_{x \rightarrow \pm \infty} u(x,y) = ( \pm 2 \sqrt{2},0) \;\;\; \textrm{and} \;\;\; \lim_{y \rightarrow \pm \infty} u(x,y) = ( 0, \pm 2 \sqrt{2}) \end{align*} where $ \lbrace W=0 \rbrace = \lbrace (2 \sqrt{2},0), (-2 \sqrt{2},0) , (0, 2 \sqrt{2}) , (0, -2\sqrt{2}) \rbrace . $ Also, another solution of $ \Delta u = W_u(u) $ for such a potential is \begin{align} u(x,y)= \sqrt{2} (\textrm{tanh}x + \textrm{tanh}(\dfrac{x+y}{\sqrt{2}}) , \textrm{tanh}x - \textrm{tanh}(\dfrac{x+y}{\sqrt{2}})) \end{align} For this solution, the function $ h $ in \eqref{Theorem2Sol2} is $ h(s,t)=s+t -2 $, but $ u $ in (A.10) does not satisfy the equipartition. Thus, the class of solutions of the Allen-Cahn system that are of the form \eqref{Theorem2Sol2} in Theorem 3 is more general than that of solutions to the Allen-Cahn system that satisfy the equipartition. Note that $ u $ in (A.10) has the property that \begin{equation} \lim_{x \rightarrow \pm \infty} u(x,y) = ( \pm 2 \sqrt{2},0) \;\;\; \textrm{and} \;\;\; \lim_{y \rightarrow \pm \infty} u(x,y) = \sqrt{2}(\textrm{tanh}x \pm 1, \textrm{tanh}x \mp 1) \end{equation} and $ W(-u_1,u_2) = W(u_1,u_2) $. The general existence of solutions with a property similar to (A.11) for potentials satisfying such a symmetry hypothesis can be found in \cite{ABG}. 
$ \\ \\ $ \textbf{Example (3)} If $ W(u_1,u_2) = u_1^2 + u_2^2 -1 $, then \begin{align*} u(x,y)= ( c_1 e^{a_1x+b_1y}+ c_2 e^{a_2x+b_2y} +c_3 e^{a_3x+b_3y}+ c_4 e^{a_4x+b_4y} , \\ c_1 e^{a_1x+b_1y}+ c_2 e^{a_2x+b_2y} - c_3 e^{a_3x+b_3y}- c_4 e^{a_4x+b_4y}) \end{align*} is a solution of $ \Delta u = W_u(u) $, where $ a_i^2+b_i^2 =2 \;\:,\; c_i \in \mathbb{R} $. In this case, $ \lbrace W = 0 \rbrace = \lbrace u_1^2+u_2^2=1 \rbrace . \\ $ Also, if $ W(u_1,u_2) = W(u_1^2+u_2^2) $ and $ W' <0 $, we have that \begin{align*} u(x,y) = (\textrm{cos}(ax+by+c), \textrm{sin}(ax+by+c)) \end{align*} with $ a^2+b^2= -2W'(1) $, is a solution to $ \Delta u =W_u(u) $. $ \\ \\ \\ $ \section{Some examples of entire solutions of the Navier-Stokes equations} $ \\ \\ $ First we note that some solutions of the 3D Euler equations in subsection 2.2 have the form $ u=(u_1, c_1u_1 + \tilde{c}_1,c_2u_1 + \tilde{c}_2) $, that is, the components of the solution are linearly dependent. We will now determine some specific examples of solutions of the Navier-Stokes equations with linearly dependent components. 
Let $ u=(u_1,u_2) \;,\; u_i=u_i(x,y,t) : \mathbb{R}^2 \times (0,+ \infty) \rightarrow \mathbb{R} $ be defined as \begin{equation}\label{NSSol} \begin{gathered} u_1(x,y,t)= c_1 g(x-c_1y,t) -c_1A(t) +c_2 \;\;,\;\; u_2(x,y,t) = g(x-c_1y,t) -A(t) \\ \textrm{and} \;\;\; p(x,y,t) = a(t)(c_1x+y)+b(t) \;\;,t>0 \;\;\;,\; c_1,c_2 \in \mathbb{R} \\ \textrm{where} \;\;\; g_t + c_2 g_s = \mu (c_1^2+1) g_{ss} \;\;,\; g=g(s,t): \mathbb{R}^2 \rightarrow \mathbb{R} \;\; \textrm{and} \; A'(t)=a(t) \;\;,\; a,b,A: \mathbb{R} \rightarrow \mathbb{R} \end{gathered} \end{equation} Then $ u $ is a solution of \begin{equation}\label{NS2D} \begin{cases} u_{1 \: t} +u_1 u_{1 \: x} + u_2 u_{1 \: y} =-p_x + \mu \Delta u_1 \\ u_{2 \: t} +u_1 u_{2 \: x} + u_2 u_{2 \: y} =-p_y + \mu \Delta u_2 \\ u_{1 \: x} + u_{2 \: y} =0 \end{cases} \;\;\; , \; \mu >0 \end{equation} $ \\ \\ $ Similarly, in the three-dimensional case, we give some examples of solutions of \begin{equation}\label{NS3D} \begin{cases} u_{1 \: t} +u_1 u_{1 \: x} + u_2 u_{1 \: y}+ u_3 u_{1 \: z} = - p_x + \mu \Delta u_1 \\ u_{2 \: t} +u_1 u_{2 \: x} + u_2 u_{2 \: y} + u_3 u_{2 \: z} = - p_y + \mu \Delta u_2 \\ u_{3 \: t} +u_1 u_{3 \: x} + u_2 u_{3 \: y} + u_3 u_{3 \: z} = -p_z + \mu \Delta u_3 \\ u_{1 \: x} + u_{2 \: y} + u_{3 \: z} =0 \end{cases} \;\;\;,\; \mu >0 \end{equation} Let $ g=g(s, \eta ,t) \;,\; g : \mathbb{R}^2 \times (0, + \infty) \rightarrow \mathbb{R} $ be a solution of \begin{equation} g_t - ( \frac{\tilde{c}_1}{2c_1} + \frac{\tilde{c}_2}{2c_2}) g_s +( \frac{\tilde{c}_1}{2c_1} - \frac{\tilde{c}_2}{2c_2}) g_{\eta} = \mu (\frac{1}{4c_1^2} + \frac{1}{4c_2^2}) (g_{ss} +g_{\eta \eta}) + \mu (\frac{1}{2c_2^2} - \frac{1}{2c_1^2}) g_{s \eta} + \mu g_{ss} \end{equation} where $ \mu >0 \;,\; c_1,c_2, \tilde{c}_1 , \tilde{c}_2 \in \mathbb{R} $ and $ t>0 $. 
Then $ u=(u_1,u_2,u_3) \;\;,\; u_i : \mathbb{R}^3 \times (0,+ \infty) \rightarrow \mathbb{R} \;\;,\; i =1,2,3 $, defined as \begin{equation}\label{NS3DSol} \begin{gathered} u_1(x,y,z,t) = g( x - \frac{c_2y + c_1z}{2c_1c_2}, \frac{c_2y - c_1z}{2c_1c_2},t) - A(t) \;\;\;,\; (x,y,z) \in \mathbb{R}^3 \;,\; t>0 \\ u_2 (x,y,z,t) = c_1 u_1(x,y,z,t) + \tilde{c}_1 \;\;\;,\;\; u_3(x,y,z,t) = c_2 u_1(x,y,z,t) + \tilde{c}_2 \\ \textrm{and} \;\; p(x,y,z,t) = a(t)(x + c_1y +c_2z) +b(t) \\ \textrm{where} \;\; A'(t) =a(t) \;,\; a,A : \mathbb{R} \rightarrow \mathbb{R} \end{gathered} \end{equation} is a solution of \eqref{NS3D}. $ \\ $ Therefore we conclude with the following $ \\ $ \textbf{Proposition 3} Let $ u=(u_1,u_2,u_3) \;\:,\; u_i,\: p: \mathbb{R}^3 \times (0, + \infty) \rightarrow \mathbb{R} $ and consider the initial value problem \begin{equation}\label{NS3DIV} \begin{cases} u_t + (u \cdot \nabla) u = - \nabla p + \mu \Delta u \\ \operatorname{div} u =0 \\ \lim_{t \rightarrow 0^+} u(x,y,z,t) = h(x,y,z) \end{cases} \;\;\;,\; \mu >0 \;\;,\; (x,y,z,t) \in \mathbb{R}^3 \times (0, + \infty) \end{equation} where $ h=(h_1,c_1h_1 + \tilde{c}_1, c_2h_1 +\tilde{c}_2) $ and $ h_1(x,y,z)= H(2c_1c_2 x - c_2y - c_1z) \;\;,\; c_1,c_2, \tilde{c}_1 , \tilde{c}_2 \in \mathbb{R} $ such that $ \tilde{c}_1 c_2 + c_1 \tilde{c}_2 =0 $, and $ H $ is smooth. Then there exists a smooth solution to \eqref{NS3DIV}, globally defined for $ t>0 $. 
In particular, \begin{equation}\label{NS3DIVSol} \begin{gathered} u(x,y,z,t) = (u_1,c_1u_1 + \tilde{c}_1, c_2u_1+ \tilde{c}_2) \;\: \textrm{and} \;\: p(x,y,z,t)=a(t)(x+c_1y+c_2z) +b(t) \\ \textrm{where} \;\; u_1(x,y,z,t)= g(2c_1c_2x - c_2y -c_1z,t) -A(t) \\ \textrm{and} \;\; g=g(s,t)= \frac{1}{2\sqrt{\pi \tilde{\mu} t}} \int_{\mathbb{R}} e^{-\frac{|s-w|^2}{4 \tilde{\mu}t}} H(w)dw \;\;,\; \tilde{\mu}= \mu (4c_1^2 c_2^2 +c_1^2 +c_2^2) \\ ( A'(t)=a(t) \;\: ,\; A(0)=0) \end{gathered} \end{equation} $ \\ $ \textbf{Remark B.1} We can also obtain the same result for slightly more general initial values $ h $ in Proposition 3, as can be seen from (B.4), (B.5). It suffices to have linear dependence of the components of $ h $; moreover, $ h_1 $ above can also be, for example, of the form $ h_1(x,y,z) = H(2c_1c_2x-c_2y-c_1z,c_2y-c_1z) $. \end{appendix} $ \\ \\ \\ \\ $
https://arxiv.org/abs/2109.00609
On a Partition Identity of Lehmer
Euler's identity equates the number of partitions of any non-negative integer n into odd parts and the number of partitions of n into distinct parts. Beck conjectured and Andrews proved the following companion to Euler's identity: the excess of the number of parts in all partitions of n into odd parts over the number of parts in all partitions of n into distinct parts equals the number of partitions of n with exactly one even part (possibly repeated). Beck's original conjecture was followed by generalizations and so-called "Beck-type" companions to other identities. In this paper, we establish a collection of Beck-type companion identities to the following result mentioned by Lehmer at the 1974 International Congress of Mathematicians: the excess of the number of partitions of n with an even number of even parts over the number of partitions of n with an odd number of even parts equals the number of partitions of n into distinct, odd parts. We also establish various generalizations of Lehmer's identity, and prove related Beck-type companion identities. We use both analytic and combinatorial methods in our proofs.
\section{Introduction and statement of results}\label{sec_intro} Many results in the theory of partitions concern identities asserting that the set $\mathcal P_{X}(n)$ of partitions of $n$ satisfying condition $X$ and the set $\mathcal P_{Y}(n)$ of partitions of $n$ satisfying condition $Y$ are equinumerous. Likely the oldest such result is Euler's identity that the number of partitions of $n$ into odd parts is equal to the number of partitions of $n$ into distinct parts. In 2017, Beck made the following conjecture (\cite{oeisA090867}, \cite[Conjecture]{A17}): \begin{conj}[Beck] \label{conj} The excess of the number of parts in all partitions of $n$ into odd parts over the number of parts in all partitions of $n$ into distinct parts equals the number of partitions of $n$ with exactly one even part (possibly repeated). \end{conj} Beck's conjecture was quickly proved analytically by Andrews \cite{A17}, who additionally showed that this excess also equals the number of partitions of $n$ with exactly one part repeated (and all other parts distinct). The conjecture was also proved combinatorially by Yang \cite{Yang19} and Ballantine--Bielak \cite{BB19} independently. This work was followed by generalizations and Beck-type companions to other well known identities (e.g., \cite{AB19}, \cite{BW21}, \cite{LW20}, \cite{Yang19}). In general, a Beck-type companion identity to $|\mathcal P_{X}(n)|=|\mathcal P_{Y}(n)|$ is an identity that equates the excess of the number of parts in all partitions in $\mathcal P_{X}(n)$ over the number of parts in all partitions in $\mathcal P_{Y}(n)$ to the number of partitions of $n$ satisfying a condition closely related to $X$ (or $Y$). 
In this article, we establish a number of Beck-type identities related to a result of Lehmer, which he informally mentioned at the 1974 International Congress of Mathematicians \cite{G75}: for every non-negative integer $n$, we have that \begin{equation} \label{lehmer1} 2p_e(n,2)=p(n)+q_o(n), \end{equation} where $$p_{e}(n,2) := p(n \ | \ \text{ the number of even parts is even})$$ and $$q_o(n) := p(n \ | \ \text{ distinct, odd parts}).$$ Here and throughout we use the standard notations $p(n)$ and $p(n \ | \ X)$ to denote the number of partitions of $n$, and the number of partitions of $n$ satisfying condition $X$, respectively. If we also denote by $$p_{o}(n,2) := p(n \ | \ \text{ the number of even parts is odd}),$$ identity \eqref{lehmer1} is equivalent to the following statement which we refer to as Lehmer's identity. \begin{theorem} \label{lehmer-thm} For any $n\in \mathbb N_0:=\mathbb N \cup \{0\}$, we have \begin{equation}\label{lehmer} p_e(n,2) = p_o(n,2) + q_o(n).\end{equation} \end{theorem} An analytic proof of Theorem \ref{lehmer-thm} is immediate: The generating series for $p_e(n,2)-p_o(n,2)$ and $q_o(n)$ are ${(q;q^2)^{-1}_\infty(-q^2;q^2)^{-1}_\infty}$ and $(-q;q^2)_\infty$, respectively. Then Theorem \ref{lehmer-thm} follows from the fact that $$(-q;q^2)_\infty=\frac{(-q;q)_\infty}{(-q^2;q^2)_\infty}$$ and Euler's identity $$(-q;q)_\infty=\frac{1}{(q;q^2)_\infty}.$$ Here and throughout, the $q$-Pochhammer symbol is given by \begin{align*} & (a;q)_n := \begin{cases} 1, & \text{for $n=0$,}\\ (1-a)(1-aq)\cdots(1-aq^{n-1}), &\text{for $n>0$;} \end{cases}\\ & (a;q)_\infty := \lim_{n\to\infty} (a;q)_n. \end{align*} In \cite{G75}, Gupta provided a beautiful combinatorial proof of Theorem \ref{lehmer-thm}. 
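As a quick numerical sanity check (our own, not part of the original argument), Lehmer's identity \eqref{lehmer} can be confirmed by brute-force enumeration of the partitions of small $ n $:

```python
def partitions(n, largest=None):
    """Yield every partition of n as a non-increasing tuple of parts."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def num_even(lam):
    return sum(1 for part in lam if part % 2 == 0)

# p_e(n,2): even number of even parts; p_o(n,2): odd number of even parts;
# q_o(n): partitions into distinct odd parts.
def p_e(n):
    return sum(1 for lam in partitions(n) if num_even(lam) % 2 == 0)

def p_o(n):
    return sum(1 for lam in partitions(n) if num_even(lam) % 2 == 1)

def q_o(n):
    return sum(1 for lam in partitions(n)
               if all(p % 2 == 1 for p in lam) and len(set(lam)) == len(lam))

# Lehmer's identity: p_e(n,2) = p_o(n,2) + q_o(n) for every n >= 0.
assert all(p_e(n) == p_o(n) + q_o(n) for n in range(0, 26))
print([p_e(n) - p_o(n) for n in range(0, 10)])  # = [q_o(n)]: [1, 1, 0, 1, 1, 1, 1, 1, 2, 2]
```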
We also note that \eqref{lehmer} is equivalent to the following identity due to Glaisher (\cite[p.129]{D52} \cite[p.256]{G1876}) $$p_e(n)-p_o(n)=(-1)^nq_o(n),$$ where $$p_{e/o}(n) := p(n \ | \ \text{even/odd number of parts}).$$ Our first main result, Theorem \ref{beck-lehmer0} below, is a Beck-type companion identity to Lehmer's identity \eqref{lehmer}. To state it, we first set some additional notation. We begin by formally defining a \emph{partition} $\lambda=(\lambda_1, \lambda_2, \ldots, \lambda_j)$ of \emph{size} $n\in\mathbb N_0$ to be a non-increasing sequence of positive integers $\lambda_1\geq \lambda_2 \geq \cdots \geq \lambda_j$ called \emph{parts} that add up to $n$. For convenience, we abuse notation and use $\lambda$ to denote either the multiset of its parts or the non-increasing sequence of parts. We write $a\in \lambda$ to mean the positive integer $a$ is a part of $\lambda$. The empty partition is the only partition of size $0$. Thus, $p(0)=1$. We write $|\lambda|$ for the size of $\lambda$ and $\lambda\vdash n$ to mean that $\lambda$ is a partition of size $n$. For a pair of partitions $(\lambda, \mu)$ we also write $(\lambda, \mu)\vdash n$ to mean $|\lambda|+|\mu|=n$. We use the convention that $\lambda_k=0$ for all $k$ greater than the number of parts. When convenient we will also use the exponential notation for parts in a partition: the exponent of a part is the multiplicity of the part in the partition, e.g., we write $(a^b)$ for the partition consisting of $b$ parts equal to $a$. Further, we denote by calligraphy style capital letters the set of partitions enumerated by the function denoted by the same letter. For example, $\mathcal Q_o(n)$ denotes the set of partitions of $n$ into distinct odd parts. We also define $\mathcal Q_o := \bigcup_{n\geq 0} \mathcal Q_o(n)$. \begin{theorem}\label{beck-lehmer0} Let $n\in \mathbb N_0$. 
The excess of the number of parts in all partitions in $\mathcal P_e(n,2)$ over the number of parts in all partitions in $ \mathcal P_o(n,2)\cup \mathcal Q_o(n)$ equals the number of partitions of $n$ with exactly one even part, possibly repeated, and all other parts odd and distinct. \end{theorem} \begin{remark} As proved in \cite{AB19}, the excess in Theorem \ref{beck-lehmer0} is almost always equal to the number of parts in all self-conjugate partitions of $n$. Hence, the excess in the number of parts in all partitions in $\mathcal P_e(n,2)$ over the number of parts in all partitions in $\mathcal P_o(n,2)$ is almost always equal to the total number of parts in all self-conjugate partitions of $n$ and in all partitions of $n$ into distinct odd parts. More precisely, if $N(x)$ is the number of times the above statement is true for $n\leq x$, then $\lim_{x\to \infty}{N(x)}/{x}=1$. \end{remark} We also establish a \emph{restricted} Beck-type identity accompanying \eqref{lehmer} in which we only count the number of even parts in partitions in $\mathcal P_e(n,2)$ and $\mathcal P_o(n,2)$; this result is given in Theorem \ref{beck-lehmer} below. To ease notation in the statement of this result and other Beck-type identities that follow, we introduce the following definition. Let $n, r, a, b$ be non-negative integers such that $1\leq ab\leq n$. We define $\mathcal B_r(n,a,b)$ to be the set of partitions $\lambda \vdash n-rab$ such that $\lambda\neq (ra, r(a-2))$, $\lambda_1-\lambda_2 \leq 2r(a+b+1)$, and $r(a+b+1)\not \in \lambda$. We write $\mathcal B(n,a,b)$ for $\mathcal B_1(n,a,b)$. \begin{theorem}\label{beck-lehmer} Let $n\in \mathbb N_0$. 
The excess of the number of parts in all partitions in $\mathcal Q_o(n)$ plus the number of even parts in all partitions in $\mathcal P_o(n,2)$ over the number of even parts in all partitions in $\mathcal P_e(n,2)$ equals the number of pairs of partitions $(\lambda, (a^b))$ satisfying the following conditions: \begin{itemize} \item[i.] $a,b$ are both odd, \item[ii.] $\lambda\in \mathcal Q_o\cap \mathcal B(n,a,b)$, i.e., $\lambda$ has distinct odd parts, is not equal to $(a,a-2)$, does not have $a+b+1$ as a part, and satisfies $\lambda_1-\lambda_2\leq 2(a+b+1)$. \end{itemize} \end{theorem} \begin{remark} If $n$ is even, the condition $\lambda \neq (a, a-2)$ in $ii.$ is vacuously true. \end{remark} In general, whenever we refer to pairs of the form $(\lambda, (a^b))$, we require $(a^b)$ to be nonempty (i.e. $a,b>0$), while $\lambda$ is allowed to be the empty partition. \begin{remark} \label{rmk_Beckpair} Beck's Conjecture \ref{conj} can also be formulated in the language of pairs as in Theorem \ref{beck-lehmer}: The excess of the number of parts in all partitions of $n$ into odd parts over the number of parts in all partitions of $n$ into distinct parts equals the number of pairs of partitions $(\lambda, (a^b))\vdash n$ satisfying the following conditions: \begin{itemize} \item[i.] $a$ is even, \item[ii.] $\lambda$ is a partition into odd parts. \end{itemize} \end{remark} Next, we give a collection of Beck-type companion identities to the following generalization of Lehmer's identity \eqref{lehmer}, which we prove in Section \ref{sec_proofs1}. For the remainder of the paper, we let $r \in \mathbb N$. 
\begin{theorem}\label{lehmer-gen} For any $n\in \mathbb N_0$, we have \begin{equation} \label{lehmer-g}p_{e}(n,2r) = p_{o}(n,2r) + q_o(n,r), \end{equation} where \begin{align*} p_{e/o}(n,2r) &:= p(n \ | \ \text{all parts allowed, even/odd no.\ of parts divisible by $2r$ }) \\ q_o(n,r) &:=p\left(n \ \Big| \ \begin{array}{l}\text{parts are not divisible by $2r$,} \\ \text{parts divisible by $r$ are distinct}\end{array}\right)\\ &=p\left(n \Big| \begin{array}{l}\text{all parts divisible by $r$ are distinct, odd multiples of $r$} \end{array}\right). \end{align*} \end{theorem} \noindent Note that for $r=1$, identity~\eqref{lehmer-g} reduces to identity \eqref{lehmer} since $p_{e/o}(n,2) = p_{e/o}(n)$ and $q_o(n,1)=q_o(n)$. Our first Beck-type companion identity to \eqref{lehmer-g} is given by the next theorem which becomes Theorem \ref{beck-lehmer0} when $r=1$. \begin{theorem}\label{beck-lehmer2prime} Let $n\in \mathbb N_0$. The excess in the total number of parts in all partitions in $\mathcal P_e(n,2r)$ over the total number of parts in all partitions in $\mathcal P_o(n,2r)\cup \mathcal Q_o(n,r)$ equals the number of pairs of partitions $(\lambda, (a^b))\vdash n$ such that \begin{itemize} \item[i.] $2r \mid a$, \item[ii.] $\lambda\in \mathcal{Q}_o(n-ab,r)$. \end{itemize} \end{theorem} \begin{remark} Equivalently, the excess of Theorem \ref{beck-lehmer2prime} equals the number of partitions of $n$ in which, among the parts divisible by $r$, there is a single even multiple of $r$ and this part is possibly repeated, while all other parts divisible by $r$ are odd multiples of $r$ and they are distinct. \end{remark} Theorem \ref{beck-lehmer2} below is a restricted Beck-type companion identity to \eqref{lehmer-g}, in which we only count the number of parts divisible by $r$ in $\mathcal{Q}_o(n,r)$, and the number of parts divisible by $2r$ in $\mathcal{P}_e(n,2r)$ and $\mathcal{P}_o(n,2r)$. The theorem reduces to Theorem \ref{beck-lehmer} when $r=1$. 
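Both \eqref{lehmer-g} and Theorem \ref{beck-lehmer2prime} are easy to check by enumeration for a sample value of $ r $; the brute-force sketch below (our own, with $ r=2 $) does so for small $ n $:

```python
R = 2  # sample modulus: check the generalization for r = 2

def partitions(n, largest=None):
    """Yield every partition of n as a non-increasing tuple."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def div2r(lam):  # parts divisible by 2r
    return [p for p in lam if p % (2 * R) == 0]

def in_Qo(lam):  # Q_o(n, r): no part divisible by 2r; parts divisible by r distinct
    divr = [p for p in lam if p % R == 0]
    return all(p % (2 * R) != 0 for p in lam) and len(set(divr)) == len(divr)

def p_e(n):
    return sum(1 for lam in partitions(n) if len(div2r(lam)) % 2 == 0)

def p_o(n):
    return sum(1 for lam in partitions(n) if len(div2r(lam)) % 2 == 1)

def q_o(n):
    return sum(1 for lam in partitions(n) if in_Qo(lam))

# Generalized Lehmer identity: p_e(n, 2r) = p_o(n, 2r) + q_o(n, r).
assert all(p_e(n) == p_o(n) + q_o(n) for n in range(0, 21))

# Beck-type companion: the part-count excess equals the number of pairs.
def excess(n):
    tot = 0
    for lam in partitions(n):
        tot += len(lam) if len(div2r(lam)) % 2 == 0 else -len(lam)
        if in_Qo(lam):
            tot -= len(lam)
    return tot

def pairs(n):  # pairs (lambda, (a^b)) |- n with 2r | a and lambda in Q_o(n - ab, r)
    cnt = 0
    for a in range(2 * R, n + 1, 2 * R):
        for b in range(1, n // a + 1):
            cnt += sum(1 for lam in partitions(n - a * b) if in_Qo(lam))
    return cnt

assert all(excess(n) == pairs(n) for n in range(0, 16))
```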
\begin{theorem}\label{beck-lehmer2} Let $n\in \mathbb N_0$. The excess of the number of parts divisible by $r$ in all partitions in $\mathcal Q_o(n,r)$ plus the number of parts divisible by $2r$ in all partitions in $\mathcal P_o(n,2r)$ over the number of parts divisible by $2r$ in all partitions in $\mathcal P_e(n,2r)$ equals the number of pairs of partitions $(\lambda,((ar)^b))$ satisfying the following conditions: \begin{itemize} \item[i.] $a,b$ are both odd, \item[ii.] $\lambda\in \mathcal Q_o(n-rab,r)$ such that, if we write $\lambda=\lambda^{ndiv}\cup \lambda^{div}$ where $\lambda^{div}$ contains all parts of $\lambda$ that are divisible by $r$, then $\lambda^{div}\in \mathcal B_r(n-|\lambda^{ndiv}|, a, b)$. \end{itemize} \end{theorem} \noindent Recall that $\lambda\cup \mu$ is the partition whose parts are precisely the parts of $\lambda$ and $\mu$ (with multiplicities). Next we give another generalization of Lehmer's identity \eqref{lehmer}. To describe this, we let $r\in\mathbb N$, and let $ L_r\subseteq \{2,4,6,\dots,2r\}$, with $L_r\neq \varnothing$. We use the sets $L_r$ to restrict even parts of partitions to lie within certain arithmetic progressions. More precisely, we define \begin{align*} {p}_{e/o}&(n, L_r,2r) \\ &:= p\left(n \ \Bigg| \ \begin{array}{l}\text{all parts allowed,} \\ \text{even parts $\equiv \ell\!\!\!\!\pmod{2r}, \ell \in L_r$,} \\\text{even/odd no.\ of even parts}\end{array}\right), \\ {q}(n, & \ L_r, r) \\ &:=p\left(n \ \Big| \ \begin{array}{l}\text{all parts distinct,} \\\text{even parts $\not \equiv \ell\!\!\!\!\pmod{2r},$ $\ell \in L_r$}\end{array}\right). \end{align*} \begin{theorem}\label{lehmer-gen2} For any $n\in\mathbb N_0$, we have \begin{align}\label{lehmer-g2} &{p}_{e}(n, L_r,2r) = {p}_{o}(n, L_r,2r) + {q}(n, L_r,r). \end{align} \end{theorem} \noindent Note that in the case $L_r = \{2,4,\ldots,2r\}$, identity \eqref{lehmer-g2} is equivalent to identity \eqref{lehmer}. 
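Identity \eqref{lehmer-g2} can likewise be checked by enumeration; the sketch below (our own, using the sample choice $ r=2 $, $ L_r=\{2\} $, so that even parts of the $ \mathcal P $-partitions must be $ \equiv 2 \pmod 4 $) verifies it for small $ n $:

```python
# Brute-force check (ours) of p_e(n, L_r, 2r) = p_o(n, L_r, 2r) + q(n, L_r, r)
# for the sample choice r = 2, L_r = {2}.
R = 2
L = {2}  # allowed residues of even parts modulo 2r (2r itself would be residue 0)

def partitions(n, largest=None):
    """Yield every partition of n as a non-increasing tuple."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def evens(lam):
    return [p for p in lam if p % 2 == 0]

def p_par(n, parity):
    # partitions whose even parts all lie in the residue classes L (mod 2r),
    # with the number of even parts of the given parity
    return sum(1 for lam in partitions(n)
               if all(p % (2 * R) in L for p in evens(lam))
               and len(evens(lam)) % 2 == parity)

def q(n):
    # distinct parts; even parts avoid the residue classes L (mod 2r)
    return sum(1 for lam in partitions(n)
               if len(set(lam)) == len(lam)
               and all(p % (2 * R) not in L for p in evens(lam)))

assert all(p_par(n, 0) == p_par(n, 1) + q(n) for n in range(0, 21))
```

For $ n=4 $ the three counts are $ 3 = 1 + 2 $: the $ \mathcal Q $-partitions are $ (4) $ and $ (3,1) $.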
The next theorem is a Beck-type companion identity to \eqref{lehmer-g2}, which becomes Theorem \ref{beck-lehmer0} when $L_r = \{2,4,\ldots,2r\}$. \begin{theorem}\label{beck-lehmer3prime} Let $n\in \mathbb N_0$. The excess in the total number of parts in all partitions in $\mathcal P_e(n,L_r,2r)$ over the total number of parts in all partitions in $\mathcal P_o(n,L_r,2r)\cup \mathcal Q(n,L_r,r)$ equals the number of pairs of partitions $(\lambda,(a^b))$ satisfying the following conditions: \begin{itemize} \item[i.] $a$ is even \item[ii.] $\lambda\in \mathcal{Q}(n-ab,L_r,r)$. \end{itemize} \end{theorem} A restricted Beck-type companion identity to \eqref{lehmer-g2} is given by the next theorem, where we only count the number of even parts in $\mathcal{P}_e(n,L_r,2r)$ and $\mathcal{P}_o(n,L_r,2r)$. The theorem becomes Theorem \ref{beck-lehmer} when $L_r = \{2,4,\ldots,2r\}$. \begin{theorem} \label{beck-lehmer3} Let $n\in \mathbb N_0$. The excess of the number of parts in all partitions in $\mathcal Q(n,L_r,r)$ plus the number of even parts in all partitions in $\mathcal P_o(n,L_r,2r)$ over the number of even parts in all partitions in $\mathcal P_e(n,L_r,2r)$ equals the number of pairs of partitions $(\lambda, (a^b))$ satisfying the following conditions: \begin{enumerate} \item[i.] $a,b$ are both odd, \item[ii.] $\lambda\in \mathcal{Q}(n-ab,L_r,r)$ such that, if we write $\lambda=\lambda^e\cup\lambda^o$, where $\lambda^e$ consists of all the even parts of $\lambda$ and $\lambda^o$ consists of all the odd parts of $\lambda$, then $\lambda^o\in \mathcal B(n-|\lambda^e|, a, b)$. \end{enumerate} \end{theorem} The next result is a new restricted Beck-type companion identity to Lehmer’s identity \eqref{lehmer}, different from Theorem \ref{beck-lehmer}. We only count the number of parts in certain arithmetic progressions in $\mathcal{Q}_o(n,2)$, $\mathcal{P}_e(n,2)$ and $\mathcal{P}_o(n,2)$. 
To describe it, for $r\in \mathbb N$, let \begin{align*} L_r\subseteq \{2,4,6,\dots,2r\},\ \ \ O_r \subseteq\{1,3,5,\dots,2r-1\}. \end{align*} \begin{theorem}\label{beck-lehmer4} Let $n$ be a positive integer, and $L_r$ and $O_r$ as above such that if $n\equiv 0\pmod 4$ then $2 \not \in L_r$. The excess of the number of parts $\equiv \ell \pmod{2r}, \ell \in O_r$ in all partitions in $ {\mathcal Q}_o(n)$ plus the number of parts $\equiv \ell \pmod{2r}, \ell \in L_r$ in all partitions in $\mathcal P_o(n,2)$ over the number of parts $\equiv \ell \pmod{2r}, \ell \in L_r$ in all partitions in $\mathcal P_e(n,2)$ equals the number of pairs of partitions $(\lambda, (a^b))\vdash n$ satisfying the following conditions: \begin{enumerate} \item[i.] $a \equiv \ell \pmod{2r}$ for some $\ell \in L_r\cup O_r$, and $b$ is odd. Moreover, if $a$ is odd, then $b=1$, \item[ii.] $\lambda\in \mathcal{Q}_o$. Moreover, if $a$ is odd, then $a\not \in \lambda$; if $a$ is even, then $\lambda_1-\lambda_2\leq a$ and $\lambda \not \in \{(\frac{a}{2}+1, \frac{a}{2}-1),(\frac{a}{2}+2, \frac{a}{2}-2) \}$. \end{enumerate} If $n\equiv 0\pmod 4$ and $2 \in L_r$, the excess is one less than the number of pairs counted above. Moreover, if additionally $n\not\in \{4,8,12,16,20\}$, then the excess is equal to the number of pairs $(\lambda, (a^b))$ satisfying i. and ii. with the additional condition $(\lambda, (a^b)) \neq ((9,7,5,1),(2^b))$. \end{theorem} \begin{remark} If $n \not\equiv 0 \pmod 4$, then the condition $\lambda \not \in \{(\frac{a}{2}+1, \frac{a}{2}-1),(\frac{a}{2}+2, \frac{a}{2}-2) \}$ is vacuously true. \end{remark} Generally speaking, our proofs are both analytic and combinatorial in nature. In Sections \ref{sec_newproof} to \ref{sec_proofs2}, we prove Theorems \ref{beck-lehmer0} through \ref{beck-lehmer3}. In Section \ref{sec_proofs3}, we provide two paths to prove Theorem \ref{beck-lehmer4} and give several important examples. 
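As an illustration of the kind of brute-force check these finite identities admit, the following sketch (ours, not part of the proofs) verifies Theorem \ref{beck-lehmer0} in the form established in Section \ref{sec_newproof}: the excess of the number of parts in all partitions in $\mathcal P_e(n,2)$ over the number of parts in all partitions in $\mathcal Q_o(n)\cup\mathcal P_o(n,2)$ counts the partitions of $n$ with exactly one even part value (possibly repeated) and all other parts odd and distinct.

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def excess(n):
    """Parts in P_e(n,2) minus parts in Q_o(n) and in P_o(n,2)."""
    total = 0
    for lam in partitions(n):
        num_even = sum(1 for p in lam if p % 2 == 0)
        total += len(lam) if num_even % 2 == 0 else -len(lam)
        if num_even == 0 and len(set(lam)) == len(lam):  # lam in Q_o(n)
            total -= len(lam)
    return total

def one_even_value(n):
    """Partitions of n with exactly one even part value (possibly repeated),
    all other parts odd and distinct."""
    count = 0
    for lam in partitions(n):
        evens = [p for p in lam if p % 2 == 0]
        odds = [p for p in lam if p % 2 == 1]
        if evens and len(set(evens)) == 1 and len(set(odds)) == len(odds):
            count += 1
    return count
```

For example, for $n=4$ both sides equal $2$, the two qualifying partitions being $4$ and $2+2$.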
The first proof relies upon the non-negativity of certain $q$-series coefficients and their combinatorial interpretation, while the second proof establishes a relevant combinatorial injection. In Section \ref{sec_aux}, we establish the non-negativity of the coefficients of some related $q$-series. \allowdisplaybreaks \section{Proof of Theorem \ref{beck-lehmer0} and Theorem \ref{beck-lehmer}}\label{sec_newproof} Consider the generating series \begin{multline}\nonumber F(z;q):=\frac{1}{(zq;q^2)_\infty(-zq^2;q^2)_\infty}\\ {{=\sum_{n=0}^\infty\sum_{m=0}^\infty \sum_{s=0}^m p(n \ | \ m \text{ parts, of which } s \text{ parts are even})(-1)^sz^m q^n, }} \end{multline} \begin{multline}\nonumber E(z;q):=\frac{1}{(q;q^2)_\infty(-zq^2;q^2)_\infty}\\ {{=\sum_{n=0}^\infty\sum_{m=0}^\infty p(n \ | \ \text{the number of even parts is } m)(-z)^m q^n, }} \end{multline} and \begin{multline}\nonumber Q_o(z;q):=(-zq;q^2)_\infty \\ {{=\sum_{n=0}^\infty \sum_{m=0}^\infty p(n \ | \ \text{parts must be odd and distinct, } m \text{ parts}) z^m q^n }} .\end{multline} To prove Theorem \ref{beck-lehmer0}, note that $\frac{\partial}{\partial z} \big |_{z=1}(F(z;q) -Q_o(z;q))$ gives the generating series for the excess of the number of parts in all partitions in $\mathcal P_e(n,2)$ over the number of parts in all partitions in $\mathcal Q_o(n) \cup \mathcal P_o(n,2)$. 
We have \begin{align*}\frac{\partial}{\partial z} \Big |_{z=1}(F(z;q) & -Q_o(z;q)) \\ &=(-q;q^2)_\infty\left(\sum_{k=0}^\infty \frac{q^{2k+1}}{1-q^{2k+1}}-\sum_{k=1}^\infty \frac{q^{2k}}{1+q^{2k}}-\sum_{k=0}^\infty \frac{q^{2k+1}}{1+q^{2k+1}} \right)\\ &=(-q;q^2)_\infty\left(\sum_{k=0}^\infty \frac{q^{2k+1}}{1-q^{2k+1}}-\sum_{k=1}^\infty \frac{q^k}{1+q^{k}}\right)\\ &= (-q;q^2)_\infty\left(\sum_{k=0}^\infty \frac{q^{2k+1}}{1-q^{2k+1}}-\sum_{k=1}^\infty \frac{q^k}{1-q^{2k}}+\sum_{k=1}^\infty \frac{q^{2k}}{1-q^{2k}}\right)\\ &= (-q;q^2)_\infty\left(\sum_{k=1}^\infty \frac{q^{k}}{1-q^{k}}-\sum_{k=1}^\infty \frac{q^k}{1-q^{2k}}\right)\\ &= (-q;q^2)_\infty\sum_{k=1}^\infty \frac{q^{2k}}{1-q^{2k}}. \end{align*} The last expression is the generating series for the number of partitions of $n$ with exactly one even part, possibly repeated, and all other parts odd and distinct. This proves Theorem \ref{beck-lehmer0}. To prove Theorem \ref{beck-lehmer} we note that $\frac{\partial}{\partial z} \big |_{z=1}(Q_o(z;q)-E(z;q))$ is the generating series for the excess of the number of parts in all partitions in $\mathcal Q_o(n)$ plus the number of even parts in all partitions in $\mathcal P_o(n,2)$ over the number of even parts in all partitions in $\mathcal P_e(n,2)$. We compute \begin{align*}\frac{\partial}{\partial z} \Big |_{z=1}(Q_o(z;q)&-E(z;q))\\ &=(-q;q^2)_\infty\left(\sum_{k=0}^\infty \frac{q^{2k+1}}{1+q^{2k+1}}+\sum_{k=1}^\infty \frac{q^{2k}}{1+q^{2k}}\right)\\ &=(-q;q^2)_\infty \sum_{k=1}^\infty \frac{q^k}{1+q^k} \\ &= (-q;q^2)_\infty \left(\sum_{k=1}^\infty \frac{q^k}{1-q^{2k}}-\sum_{k=1}^\infty \frac{q^{2k}}{1-q^{2k}}\right). \end{align*} Let \begin{align*} p_{e^o}(n):=p(n \ | \ \text{odd number of identical even parts}), \end{align*} i.e., \begin{align*} p_{e^o}(n):=|\{\lambda\vdash n \mid \lambda= (a^b), a \mbox{ even and } b \mbox{ odd} \}|. \end{align*} Define $p_{e^e}(n),p_{o^e}(n)$, and $p_{o^o}(n)$ similarly. 
Then \begin{align*} \sum_{k=1}^\infty \frac{q^k}{1-q^{2k}}=\sum_{n=1}^{\infty} (p_{o^o}(n)+p_{e^o}(n)) q^n, \end{align*} and \begin{align*} \sum_{k=1}^\infty \frac{q^{2k}}{1-q^{2k}}=\sum_{n=1}^{\infty} (p_{o^e}(n)+p_{e^e}(n)) q^n. \end{align*} Since conjugation gives a bijection between $\mathcal{P}_{o^e}(n)$ and $\mathcal{P}_{e^o}(n)$, we further have \begin{align*} \sum_{k=1}^\infty \frac{q^k}{1-q^{2k}}-\sum_{k=1}^\infty \frac{q^{2k}}{1-q^{2k}} =\sum_{n=1}^{\infty} (p_{o^o}(n)-p_{e^e}(n)) q^n. \end{align*} Therefore \begin{align*}(-q;q^2)_\infty \sum_{k=1}^\infty \frac{q^k}{1+q^k} =&\left(\sum_{n=0}^{\infty} q_o(n) q^n\right)\left(\sum_{n=1}^{\infty} (p_{o^o}(n)-p_{e^e}(n)) q^n\right)\\ =&\sum_{n=1}^{\infty}\left(\sum_{m=0}^{n-1}q_o(m)p_{o^o}(n-m)-q_o(m)p_{e^e}(n-m)\right)q^n, \end{align*} and the excess in question is given by $$\sum_{m=0}^{n-1}\left(q_o(m)p_{o^o}(n-m)-q_o(m)p_{e^e}(n-m)\right).$$ Equivalently, this is the excess of the number of elements in $$B(n):=\{(\lambda,(a^b))\vdash n \ | \ \lambda \in \mathcal Q_o, a, b \text{ odd}\}$$ over that in $$A(n):=\{(\lambda,(a^b))\vdash n\ | \ \lambda \in \mathcal Q_o, a, b \text{ even}\}.$$ To measure this excess, we construct an injection $T$ from $A(n)$ to $B(n)$ as follows. We partition the set $A(n)$ into three disjoint subsets: \begin{align*} &A_1(n):=\{(\lambda,(a^b))\in A(n) \mid a+b-1 \not \in \lambda \};\\ &A_2(n):=\{(\lambda,(a^b))\in A(n) \mid a+b-1 \in\lambda \text{ and $\lambda$ has at least two parts}\};\\ &A_3(n):=\{(\lambda,(a^b))\in A(n) \mid \lambda=(a+b-1) \}. \end{align*} We define $T$ on each $A_i(n)$ in the following way. 
\begin{enumerate} \item If $(\lambda, (a^b))\in A_1(n)$ (including the case where $\lambda$ is empty), then $$T(\lambda, (a^b)):=\left(\lambda\cup\{a+b-1\}, \left((a-1)^{b-1}\right)\right).$$ \item If $(\lambda, (a^b))\in A_2(n)$, then let $m$ denote the largest part of $\lambda$ that is not $a+b-1$ and define $$T(\lambda, (a^b)):=\left((\lambda\setminus\{m, a+b-1\})\cup(2a+2b-2+m), \left((a-1)^{b-1}\right)\right),$$ where $\lambda\setminus\{m, a+b-1\}$ is the partition obtained by removing parts $a+b-1$ and $m$ from $\lambda$. \item If $(\lambda, (a^b))\in A_3(n)$, then $T(\lambda, (a^b)):=\left((a+1,a-1), \left((a+1)^{b-1}\right)\right)$. \end{enumerate} The image sets are thus \begin{align*} &T(A_1(n))=\{(\mu,(c^d))\in B(n) \ | \ c+d+1\in \mu\};\\ &T(A_2(n))=\left\{(\mu,(c^d))\in B(n) \ \Big | \ \begin{array}{ll}c+d+1\not \in\mu, \, \mu_1\neq 3(c+d+1), \\ \text{and } \mu_1-\mu_2 >2(c+d+1)\end{array}\right\};\\ &T(A_3(n))=\{(\mu,(c^d))\in B(n) \ | \ \mu=(c,c-2)\}. \end{align*} Note that $T(A_1(n)),~T(A_2(n)),$ and $T(A_3(n))$ are disjoint, and their union $T(A(n))$ is a subset of $B(n)$. Define the map $L$ from $T(A(n))$ to $A(n)$ as follows: \begin{enumerate} \item If $(\mu, (c^d))\in T(A_1(n))$, then $$L(\mu, (c^d)):=\left(\mu\setminus\{c+d+1\}, \left((c+1)^{d+1}\right)\right).$$ \item If $(\mu, (c^d))\in T(A_2(n))$, then $$L(\mu, (c^d)):=\left((\mu\setminus\{\mu_1\})\cup\{c+d+1,\mu_1-2(c+d+1)\}, \left((c+1)^{d+1}\right)\right).$$ \item If $(\mu, (c^d))\in T(A_3(n))$, then $$L(\mu, (c^d)):=\left((c+d-1), \left((c-1)^{d+1}\right)\right).$$ \end{enumerate} Then $L$ and $T$ are inverses of each other. 
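The case analysis above is concrete enough to implement and test directly; the sketch below (ours, not part of the paper) builds $A(n)$ and $B(n)$ by brute force and checks that $T$ is a weight-preserving injection from $A(n)$ into $B(n)$.

```python
def distinct_odd(n):
    """Partitions of n into distinct odd parts, as decreasing tuples."""
    def gen(n, mx):
        if n == 0:
            yield ()
            return
        k = min(n, mx)
        if k % 2 == 0:
            k -= 1
        while k >= 1:
            for rest in gen(n - k, k - 2):
                yield (k,) + rest
            k -= 2
    yield from gen(n, n)

def pairs(n, start):
    """Pairs (lam, (a, b)) of total weight n with lam in Q_o(n - ab);
    start=2 gives A(n) (a, b even), start=1 gives B(n) (a, b odd)."""
    for a in range(start, n + 1, 2):
        for b in range(start, n // a + 1, 2):
            for lam in distinct_odd(n - a * b):
                yield (lam, (a, b))

def T(lam, ab):
    """The injection T from A(n) into B(n) defined in the proof."""
    a, b = ab
    s = a + b - 1
    if s not in lam:                                    # case A_1
        return (tuple(sorted(lam + (s,), reverse=True)), (a - 1, b - 1))
    if len(lam) >= 2:                                   # case A_2
        m = max(p for p in lam if p != s)
        rest = [p for p in lam if p not in (m, s)]
        return (tuple(sorted(rest + [m + 2 * s], reverse=True)), (a - 1, b - 1))
    return ((a + 1, a - 1), (a + 1, b - 1))             # case A_3: lam == (s,)
```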
Since $T$ gives a bijection between $A(n)$ and $T(A(n))\subseteq B(n)$, the excess in question is given by the number of elements in \begin{align*} B(n)\setminus T(A(n))&=B(n)\setminus(T(A_1(n))\cup T(A_2(n))\cup T(A_3(n)))\\ &= \left\{(\mu,(c^d))\in B(n)\ \Big |\ \begin{array}{l}c+d+1\not \in \mu, \mu\neq (c,c-2)\text{,}\\ \text{and }\mu_1-\mu_2\leq 2(c+d+1)\end{array}\right\}\\ &= \left\{(\mu,(c^d))\in B(n)\ \Big |\ \mu \in \mathcal B(n,c,d) \right\}. \end{align*} Theorem \ref{beck-lehmer} now follows. \section{Proof of Theorem \ref{lehmer-gen}, Theorem \ref{beck-lehmer2prime}, and Theorem \ref{beck-lehmer2}}\label{sec_proofs1} For $r\in\mathbb N$, we define \begin{align*} F_r(z;q) &:= \frac1{(zq;q^{2r})_\infty(zq^2;q^{2r})_\infty \cdots (zq^{2r-1};q^{2r})_\infty\cdot(-zq^{2r};q^{2r})_\infty} \\ &=\sum_{n=0}^\infty \sum_{m=0}^\infty \sum_{s=0}^m p\left(n \ \Bigg | \ \begin{array}{l}\text{all parts allowed,} \\ m \text{ parts, }\\ s \text{ parts divisible by } 2r \end{array} \right)(-1)^s z^mq^n, \\ R_r(z;q) &:= \frac{(-zq^r;q^{2r})_\infty}{(zq;q^r)_\infty(zq^2;q^r)_\infty \cdots (zq^{r-1};q^r)_\infty} \\ &=\sum_{n=0}^\infty \sum_{m=0}^\infty p\left(n \ \left| \ \begin{array}{l}\text{parts are not divisible by $2r$,}\\ m\text{ parts,} \\ \text{parts divisible by $r$ are distinct} \end{array}\right.\right) z^m q^n. \end{align*} Hence, the generating series for $p_e(n,2r)-p_o(n,2r)$ and $q_o(n,r)$ are $F_r(1;q)$ and $R_r(1;q)$, respectively. We have \begin{align*} F_r(1;q) &= \frac{ (q^{2r}; q^{2r})_\infty}{(q;q)_\infty} \cdot \frac1{(-q^{2r};q^{2r})_\infty } = \frac{ (q^{2r}; q^{2r})_\infty}{(q;q)_\infty} \cdot \frac{(-q^r;q^{2r})_\infty}{(-q^r;q^r)_\infty}\\ &= \frac{ (q^{2r}; q^{2r})_\infty}{(q;q)_\infty} \cdot (q^r;q^{2r})_\infty \cdot (-q^r;q^{2r})_\infty = \frac{ (q^{r}; q^{r})_\infty}{(q;q)_\infty} \cdot (-q^r;q^{2r})_\infty\\ &= R_r(1;q).
\end{align*} Here we used the fact \[(-q^{2r};q^{2r})_\infty (-q^r;q^{2r})_\infty = (-q^r;q^r)_\infty \text{ and } (q^{2r};q^{2r})_\infty (q^r;q^{2r})_\infty = (q^r;q^r)_\infty \] in the second and fourth equality respectively, and used Euler’s identity \[ (-q;q)_\infty = \frac{1}{(q;q^2)_\infty} \] (by replacing $q$ by $q^r$) in the third equality. Theorem \ref{lehmer-gen} now follows. To prove Theorem \ref{beck-lehmer2prime}, we have that $\left.\frac{\partial}{\partial z}\right|_{z=1}(F_r(z;q)-R_r(z;q))$ is the generating series for the excess of the total number of parts in all partitions in $\mathcal{P}_e(n,2r)$ over the total number of parts in all partitions in $\mathcal{P}_o(n,2r)\cup \mathcal{Q}_o(n,r)$. We have \begin{align} \notag &\frac{\partial}{\partial z}\Big|_{z=1}(F_r(z;q)-R_r(z;q)) \\\notag &= R_r(1;q)\Bigg(\sum_{\ell=1}^{2r-1}\sum_{k=0}^\infty\frac{q^{\ell+2kr}}{1-q^{\ell+2kr}}-\sum_{k=1}^\infty\frac{q^{2kr}}{1+q^{2kr}} \\\notag&{\hspace{1.35in}}-\sum_{\ell=1}^{r-1}\sum_{k=0}^\infty\frac{q^{\ell+kr}}{1-q^{\ell+kr}}-\sum_{k=0}^\infty\frac{q^{r+2kr}}{1+q^{r+2kr}}\Bigg) \\\notag &=R_r(1;q) \left( \sum_{\ell=1}^{2r-1}\sum_{k=0}^\infty\frac{q^{\ell+2kr}}{1-q^{\ell+2kr}}-\sum_{\ell=1}^{r-1}\sum_{k=0}^\infty\frac{q^{\ell+kr}}{1-q^{\ell+kr}}-\sum_{k=1}^\infty\frac{q^{kr}}{1+q^{kr}}\right) \\\notag &=R_r(1;q) \Bigg( \sum_{\ell=1}^{2r-1}\sum_{k=0}^\infty\frac{q^{\ell+2kr}}{1-q^{\ell+2kr}}-\sum_{\ell=1}^{r-1}\sum_{k=0}^\infty\frac{q^{\ell+kr}}{1-q^{\ell+kr}}\\\notag&\hspace{1.35in}-\sum_{k=1}^\infty\frac{q^{kr}}{1-q^{2kr}}+\sum_{k=1}^\infty\frac{q^{2kr}}{1-q^{2kr}}\Bigg) \\\notag &=R_r(1;q)\left(\sum_{k=1}^\infty \frac{q^k}{1-q^k} -\sum_{\ell=1}^{r-1}\sum_{k=0}^\infty\frac{q^{\ell+kr}}{1-q^{\ell+kr}}-\sum_{k=1}^\infty\frac{q^{kr}}{1-q^{2kr}} \right) \\\notag &=R_r(1;q)\left(\sum_{k=1}^\infty\frac{q^{kr}}{1-q^{kr}}-\sum_{k=1}^\infty\frac{q^{kr}}{1-q^{2kr}} \right) \\\label{eqn_Rr} &=R_r(1;q) \sum_{k=1}^\infty \frac{q^{2kr}}{1-q^{2kr}}. 
\end{align} This is the generating series for the number of pairs of partitions $(\lambda, (a^b))\vdash n$ so that \begin{itemize} \item[i.] $2r\mid a$, \item[ii.] $\lambda\in \mathcal{Q}_o(n-ab,r)$. \end{itemize} Equivalently, \eqref{eqn_Rr} is the generating series for the number of partitions of $n$ in which among the parts divisible by $r$ there is exactly one even multiple of $r$, possibly repeated, and all other parts divisible by $r$ are odd multiples of $r$ and are distinct. This proves Theorem \ref{beck-lehmer2prime}. To prove Theorem \ref{beck-lehmer2}, we define \begin{align*} E_r(z;q) &:= \frac1{ (q;q^{2r})_\infty (q^2;q^{2r})_\infty \cdots (q^{2r-1};q^{2r})_\infty \cdot (-zq^{2r};q^{2r})_\infty } \\ &=\sum_{n=0}^\infty \sum_{m=0}^\infty p(n \ | \ \text{all parts allowed, } m \text{ parts divisible by } 2r )(-z)^m q^n, \\ Q_r(z;q)&:=\frac{(-z q^r;q^{2r})_\infty}{(q;q^r)_\infty(q^2;q^r)_\infty\dots(q^{r-1};q^r)_\infty} \\ &= \sum_{n=0}^\infty \sum_{m=0}^\infty p\left(n \ \Bigg | \ \begin{array}{l}\text{parts are not divisible by $2r$,} \\ \text{parts divisible by $r$ are distinct,} \\ m \text{ parts divisible by $r$}\end{array}\right) z^m q^n. \end{align*} As in the proof of Theorem \ref{beck-lehmer2prime}, we compute \begin{align}\label{eqn_becklehmer} \frac{\partial}{\partial z} \Big |_{z=1} (Q_r(z;q) - E_r(z;q)) = \frac{(- q^r;q^{2r})_\infty}{(q;q^r)_\infty(q^2;q^r)_\infty\cdots(q^{r-1};q^r)_\infty} \sum_{k=1}^\infty \frac{q^{kr}}{1+q^{kr}}. \end{align} In the proof of Theorem \ref{beck-lehmer}, we have shown that \begin{equation} \label{eqn_r=1} (-q;q^2)_\infty \sum_{k=1}^\infty \frac{q^{k}}{1+q^k} \end{equation} is the generating series for the number of pairs of partitions $(\lambda, (a^b))\vdash n$ satisfying the following conditions: \begin{itemize} \item[i.] $a,b$ are both odd, \item[ii.] $\lambda\in \mathcal Q_o\cap \mathcal B(n,a,b)$. 
\end{itemize} For each $r\in \mathbb{N}$, replacing $q$ by $q^r$ in \eqref{eqn_r=1} implies that $$(-q^r;q^{2r})_\infty \sum_{k=1}^\infty \frac{q^{kr}}{1+q^{kr}}$$ is the generating series for the number of pairs $(\lambda^{div}, ((ar)^b))\vdash n$ satisfying the following conditions: \begin{itemize} \item[i.] $a,b$ are both odd, \item[ii.] $\lambda^{div}\in \mathcal Q_o(n-rab,r)\cap \mathcal B_r(n,a,b)$ and every part of $\lambda^{div}$ is divisible by $r$. \end{itemize} Theorem~\ref{beck-lehmer2} follows from equation~\eqref{eqn_becklehmer}. \section{Proof of Theorem \ref{lehmer-gen2}, Theorem \ref{beck-lehmer3prime} and Theorem \ref{beck-lehmer3}} \label{sec_proofs2} For $r\in \mathbb N, L_r\subseteq \{2,4,\ldots,2r\}$ as in Section \ref{sec_intro}, we define \begin{align*} {E}_{r, L_r}(z;q) &:= \frac{1}{\displaystyle (q;q^2)_{\infty} \prod_{\ell \in L_r}(-zq^{\ell};q^{2r})_\infty}\\ &= \sum_{n=0}^\infty \sum_{m=0}^\infty p\left(n \ \Bigg | \ \begin{array}{l} \text{all odd parts allowed,} \\ \text{even parts $\equiv \ell\!\!\!\!\pmod{2r}, \ell \in L_r$,}\\ \text{$m$ even parts}\end{array}\right)(-z)^m q^n, \\ {F}_{r, L_r}(z;q) &:= \frac{1}{\displaystyle (zq;q^2)_{\infty} \prod_{\ell \in L_r}(-zq^{\ell};q^{2r})_\infty}\\ &=\sum_{n=0}^\infty \sum_{m=0}^\infty \sum_{s=0}^m p\left(n \ \Bigg | \ \begin{array}{l} \text{all odd parts allowed,}\\\text{even parts $\equiv \ell\!\!\!\!\pmod{2r}, \ell \in L_r$,}\\ \text{$m$ parts, $s$ even parts}\end{array}\right)(-1)^s z^m q^n, \\ {Q}_{r, L_r}(z;q)&:=\mathop{\prod_{j=1}^{2r}}_{j\not\in L_r}(-zq^j;q^{2r})_\infty\\ &=\sum_{n=0}^\infty \sum_{m=0}^\infty p\left(n \ \Bigg| \ \begin{array}{l}\text{all odd parts allowed,}\\ \text{even parts $\not\equiv \ell\!\!\!\!\pmod{2r},$ $\ell \in L_r$,}\\ m\text{ parts, all distinct,}\\\end{array}\right) z^m q^n. 
\end{align*} Theorem \ref{lehmer-gen2} now follows from the fact that ${F}_{r, L_r}(1;q)={E}_{r, L_r}(1;q) = {Q}_{r, L_r}(1;q)$, which is not difficult to obtain after a short calculation using Euler's identity. The proof of Theorem \ref{beck-lehmer3prime} is similar to the proofs of Theorems \ref{beck-lehmer0} and \ref{beck-lehmer2prime}, and can be seen from \begin{align*} \frac{\partial}{\partial z} \Big |_{z=1} & ({F}_{r, L_r}(z;q) - {Q}_{r, L_r}(z;q)) = Q_{r,L_r}(1;q) \sum_{k=1}^\infty \frac{q^{2k}}{1-q^{2k}}. \end{align*} To prove Theorem \ref{beck-lehmer3}, we compute \begin{align} \frac{\partial}{\partial z} \Big |_{z=1} & ({Q}_{r, L_r}(z;q) - {E}_{r, L_r}(z;q)) \nonumber\\&=\mathop{\prod_{j=1}^{2r}}_{j\not\in L_r}(-q^j;q^{2r})_\infty \left(\sum_{k=1}^\infty \frac{q^{k}}{1+q^{k}}\right) \nonumber\\ &= \left(\mathop{\prod_{j=1}^{2r}}_{j \text{ even, } j \not\in L_r}(-q^j;q^{2r})_\infty \right) (-q;q^{2})_\infty \sum_{k=1}^\infty \frac{q^{k}}{1+q^{k}}. \label{product} \end{align} Using the combinatorial interpretation of \eqref{eqn_r=1} in the proof of Theorem \ref{beck-lehmer}, Theorem~\ref{beck-lehmer3} follows from \eqref{product}. \section{Proof of Theorem \ref{beck-lehmer4}}\label{sec_proofs3} Let $r\in\mathbb N$, and let $L_r\subseteq\{2,4,\ldots,2r\}$ and $O_r\subseteq\{1,3,\ldots,2r-1\}$ be as in Section \ref{sec_intro}. Also let $L_r^c = \{2,4,\ldots, 2r\}\setminus L_r$ and $O_r^c = \{1,3,\ldots, 2r-1\}\setminus O_r$.
Define \begin{align*} \widetilde{E}_{r, L_r}(z;q) &:= \frac{1}{\displaystyle (q;q^2)_\infty\prod_{j\in L_r^c}(-q^j;q^{2r})_\infty \prod_{\ell\in L_r}(-zq^\ell;q^{2r})_\infty} \\ &=\sum_{n=0}^\infty \sum_{m=0}^\infty (p_{e}(n,m;L_r)-p_{o}(n,m;L_r)) z^m q^n, \\ \widetilde{Q}_{r, O_r}(z;q) &:= \prod_{j \in O_r^c}(-q^j;q^{2r})_\infty \prod_{\ell \in O_r}(-zq^\ell;q^{2r})_\infty\\ &=\sum_{n=0}^\infty \sum_{m=0}^\infty q_o(n,m;O_r) z^m q^n, \end{align*} where \begin{align*} p_{e/o}(n,m;L_r) &:= p\left(n \ \Bigg | \ \begin{array}{l} \text{all parts allowed,} \\ \text{even/odd no.\ of even parts,} \\ \text{$m$ (even) parts $\equiv \ell\!\!\!\!\pmod{2r}, \ell \in L_r$}\end{array}\right), \\ q_o(n,m;O_r) &:= p\left(n \ \Big | \ \begin{array}{l} \text{all parts odd and distinct,} \\ \text{$m$ (odd) parts $\equiv \ell \!\!\!\!\pmod{2r}, \ \ell \in O_r$}\end{array}\right). \end{align*} When $z=1$, $\widetilde{E}_{r, L_r}(1;q)=\widetilde{Q}_{r, O_r}(1;q)$ recovers Lehmer’s identity \eqref{lehmer} in Theorem \ref{lehmer-thm}. We compute that \begin{align}\label{eqn_QEtilderiv} \frac{\partial}{\partial z} \Big |_{z=1} (\widetilde{Q}_{r, O_r}(z;q) - \widetilde{E}_{r, L_r}(z;q)) &=(-q;q^2)_\infty \sum_{ \ell \in L_r \cup O_r}\mathop{\sum_{k=0}^\infty} \frac{q^{2kr+\ell}}{1+q^{2kr+\ell}}. \end{align} To prove Theorem \ref{beck-lehmer4}, it suffices to prove the case where $L_r\cup O_r=\{\ell\}$ for each positive integer $\ell\leq 2r$. In Section \ref{sec_ec}, we state and prove Proposition \ref{lem_ec}, which establishes the non-negativity of the $q$-series coefficients of the series in \eqref{eqn_QEtilderiv} (noting that the special case $\ell=2$ is more delicate). Then we provide two different proofs of Theorem \ref{beck-lehmer4}: the first proof in Section \ref{sec_eccomb1} makes use of Proposition \ref{lem_ec} and its proof, while the second proof in Section \ref{sec_eccomb2} establishes a relevant combinatorial injection and is independent of Proposition \ref{lem_ec}.
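Identity \eqref{eqn_QEtilderiv} equates a combinatorial excess with a $q$-series, so it can be sanity-checked by truncation. In the sketch below (ours; the truncation order and helper names are our choices), the left-hand side is computed by enumerating partitions and the right-hand side by expanding the series.

```python
def partitions(n, mx=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    mx = n if mx is None else mx
    if n == 0:
        yield ()
        return
    for k in range(min(n, mx), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def lhs_excess(n, r, L, O):
    """Parts in classes of O_r over Q_o(n), plus parts in classes of L_r
    over P_o(n,2) minus over P_e(n,2)."""
    Lres = {l % (2 * r) for l in L}
    Ores = {o % (2 * r) for o in O}
    total = 0
    for lam in partitions(n):
        evens = [p for p in lam if p % 2 == 0]
        sign = 1 if len(evens) % 2 == 1 else -1       # P_o minus P_e
        total += sign * sum(1 for p in evens if p % (2 * r) in Lres)
        if not evens and len(set(lam)) == len(lam):    # lam in Q_o(n)
            total += sum(1 for p in lam if p % (2 * r) in Ores)
    return total

def rhs_coeffs(r, classes, N):
    """Coefficients of (-q;q^2)_inf * sum_{l in classes} sum_k q^(2kr+l)/(1+q^(2kr+l))."""
    c = [0] * N
    c[0] = 1
    for j in range(1, N, 2):                # build (-q;q^2)_infty
        for i in range(N - 1, j - 1, -1):
            c[i] += c[i - j]
    s = [0] * N
    for ell in classes:
        a = ell
        while a < N:
            for b in range(1, (N - 1) // a + 1):
                s[a * b] += 1 if b % 2 == 1 else -1
            a += 2 * r
    return [sum(c[i] * s[n - i] for i in range(n + 1)) for n in range(N)]
```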
\subsection{Non-negativity of $q$-series coefficients}\label{sec_ec} We use the notation $F(q)\succeq 0$ to mean that the coefficients of $F(q)$ when expanded as a $q$-series are all non-negative. \begin{proposition} \label{lem_ec} Let $r \in \mathbb N$, and let $\ell$ be a positive integer such that $\ell\leq 2r$. If $\ell \neq 2$, then $$(-q;q^2)_\infty \mathop{\sum_{k=0}^\infty} \frac{q^{2kr+\ell}}{1+q^{2kr+\ell}}\succeq 0.$$ If $\ell=2$, then the only possible negative coefficients of \begin{align}\label{def_ell2} (-q;q^2)_\infty \mathop{\sum_{k=0}^\infty} \frac{q^{2kr+2}}{1+q^{2kr+2}}\end{align} (when expanded as a $q$-series) are the coefficients of $q^4, q^8, q^{12}, q^{16},$ and $q^{20}$, and any such negative coefficient is equal to $-1$. Precisely, the set of all $n$ such that the coefficient of $q^n$ in \eqref{def_ell2} is negative (and thus equal to $-1$) is given as a function of $r$ in the following table: $$\begin{array}{|l|l|} \hline r & \{n\} \\ \hline 1 & \{ \ \} \\ \hline 2,4,7 & \{4,8,12\} \\ \hline 3 & \{4\}\\ \hline 5 & \{4,8\}\\ \hline 6,9 & \{4,8,12,16\}\\ \hline 8, \text{ or } \geq 10 & \{4,8,12,16,20\} \\ \hline \end{array}$$ \end{proposition} \begin{proof}[Proof of Proposition \ref{lem_ec}] We divide our proof into three cases: $\ell$ odd, $\ell$ even but $\ell \neq 2$, and $\ell=2$. For $0<\ell\leq 2r$ odd, we have that \begin{align}(-q;q^2)_\infty\sum_{k=0}^{\infty} \frac{q^{2kr+\ell}}{1+q^{2kr+\ell}} =\sum_{k=0}^{\infty} q^{2kr+\ell}\mathop{\prod_{m=0}^\infty}_{2m+1\neq 2kr+\ell}(1+q^{2m+1}) \succeq 0 \label{leftoversGF}. \end{align} For $0<\ell\leq 2r$ even, we note that \begin{equation} \begin{split}\label{eqn_cbconj} (-q;q^2)_\infty\sum_{k=0}^{\infty} \frac{q^{2kr+\ell}}{1+q^{2kr+\ell}}=&\sum_{k=0}^{\infty}(-q;q^2)_\infty(1-q^{2kr+\ell}) \frac{q^{2kr+\ell}}{1-q^{2(2kr+\ell)}}.
\end{split}\end{equation} We first assume that $\ell\neq 2.$ Using \eqref{eqn_cbconj}, it suffices to show that $(-q;q^2)_\infty (1-q^{2a}) \succeq 0$ for any integer $a\geq 2$. We apply the well-known identity (see, e.g., \cite[(2.2.6) with $q \mapsto q^2, t\mapsto q$]{AndEncy}) $$ (-q;q^2)_\infty = \sum_{n=0}^\infty \frac{q^{n^2}}{(q^2;q^2)_n}$$ to re-write \begin{align}\nonumber (-q;q^2)_\infty (1-q^{2a}) & = (1-q^{2a})\sum_{n=0}^\infty \frac{q^{n^2}}{(q^2;q^2)_n} \\\nonumber &= (1-q^{2a}) + \frac{(1-q^{2a})}{(1-q^2)}\left( \sum_{n=1}^\infty \frac{q^{n^2}}{(q^4;q^2)_{n-1}}\right) \\\label{eqn_jevenan} &= 1-q^{2a} + \left(\sum_{t=0}^{a-1} q^{2t} \right)\left(\sum_{n=1}^\infty \frac{q^{n^2}}{(q^4;q^2)_{n-1}}\right). \end{align} Thus, it suffices to show that the coefficient of $q^{2a}$ in the $q$-series expansion of \begin{align}\label{eqn_jevenexpproof} \left(\sum_{t=0}^{a-1} q^{2t} \right)\left(\sum_{n=1}^\infty \frac{q^{n^2}}{(q^4;q^2)_{n-1}}\right)\end{align} is strictly positive. We re-write $2a= u^2 + v$, where $u^2$ is the largest even perfect square at most equal to $2a$ (with $u$ a non-negative even integer), and $v$ is a non-negative integer. Note that $u^2 \geq 4$ (so $u\geq 2$), since $2a \geq 4$. Since $u$ is even, $v$ is even, and since $u^2\geq 4$, we have that $0\leq v \leq 2(a-2)$. That is, $v=2t$ for some $0\leq t \leq a-2$. For this $t$, we consider \begin{align}\label{eqn_tconj}q^{2t} \sum_{n=1}^\infty \frac{q^{n^2}}{(q^4;q^2)_{n-1}},\end{align} which appears in \eqref{eqn_jevenexpproof}. We extract the $n=u$ term ${q^{u^2}}/{(q^4;q^2)_{u-1}}$ from the sum in \eqref{eqn_tconj}, noting that $u\geq 2$. Expanding this as a $q$-series, we obtain \begin{align}\label{eqn_tconj2} \frac{q^{u^2}}{(q^4;q^2)_{u-1}} = q^{u^2}+\Sigma_u(q),\end{align} where $\Sigma_u(q)\succeq 0$. 
Multiplying \eqref{eqn_tconj2} by $q^{2t}$, we find the term $q^{2t+u^2} = q^{2a}$ in the $q$-expansion of \eqref{eqn_jevenexpproof}; moreover, we have explained that $\Sigma_u(q) \succeq 0$, and it is clear that the remaining coefficients in the $q$-expansion of \eqref{eqn_jevenexpproof} are non-negative. This completes the proof of non-negativity in the case of even $\ell \neq 2 $. When $\ell=2$, we begin with the identity in \eqref{eqn_jevenan}, which also holds for $a=1$. In this case, the $q$-series expansion for the expression in \eqref{eqn_jevenexpproof} is $q+q^4+O(q^8)$ (and has non-negative coefficients); that is, the coefficient of $q^2$ is $0$. Thus, the $q$-expansion of \eqref{eqn_jevenan} in this case is $1+q-q^2+q^4+O(q^8)$, and has non-negative coefficients for all powers of $q$ greater than $4$. Referring to \eqref{eqn_cbconj}, as above, we have that \begin{align}\label{eqn_jksum} \sum_{k=1}^{\infty}(-q;q^2)_\infty(1-q^{2kr+\ell}) \frac{q^{2kr+\ell}}{1-q^{2(2kr+\ell)}} \succeq 0 \end{align} for any even $\ell \in L_r$, including $\ell=2$. We also have from the above that the $k=0$ term from \eqref{eqn_cbconj} satisfies \begin{align}\label{eqn_jkzero} (-q;q^2)_\infty(1-q^{\ell}) \frac{q^{\ell}}{1-q^{2\ell}}\succeq 0 \end{align} for even $\ell\geq 4$. For $\ell=2$, we have shown that the only negative term appearing in the $q$-expansion \begin{align}\label{eqn_j2exp} (-q;q^2)_\infty(1-q^{2}) = 1+q-q^2+q^4+q^8+q^9+q^{12}+q^{13}+q^{15}+2q^{16}+O(q^{17}) \end{align} is $-q^{2}.$ We multiply \eqref{eqn_j2exp} by \begin{align}\label{eqn_j2exp2} \frac{q^{2}}{1-q^{4}} = q^2+q^6 + q^{10}+q^{14}+q^{18}+q^{22}+O(q^{26})\end{align} (which clearly has non-negative $q$-series coefficients) to obtain \begin{align}\notag q^2+q^3&-q^4+2 q^6+q^7-q^8+3 q^{10}+2 q^{11}-q^{12}+4 q^{14}\\\label{eqn_j2exp3} &+3 q^{15}-q^{16}+q^{17}+6 q^{18}+4 q^{19}-q^{20} + \Sigma(q),\end{align} where $\Sigma(q) = O(q^{21})$. We now argue that $\Sigma(q) \succeq 0 $. 
It is not difficult to see that the only powers of $q$ in the expansion for $\Sigma(q)$ which may possibly have negative coefficients are those $q^m$ such that $m\equiv 0 \pmod 4$, $m\geq 24$ (where we have also used that $\Sigma(q) = O(q^{21})$). Now, any $m\equiv 0 \pmod{4}$ such that $m\geq 24$ can also be written as $m=(3+1)^2+6+(2+4c)$ for some integer $c\geq 0$. Thus, we also obtain the term $+q^m$ after multiplying \eqref{eqn_j2exp} and \eqref{eqn_j2exp2} as follows. We use the expression in \eqref{eqn_jevenan} (with $a=1$ and $t=0$) for \eqref{eqn_j2exp}, and take the numerator $q^{(3+1)^2}=q^{16}$ of the $n=4$ term and also $q^6$ from the expansion of the denominator $(q^4;q^2)_3$ of that same term. This yields a term $q^{(3+1)^2+6}$ after multiplying. We now multiply by the term $q^{2+4c}$ from the expansion of $q^2/(1-q^4)$ in \eqref{eqn_j2exp2}. Overall, this yields after multiplication the term $q^{(3+1)^2+6+(2+4c)}=q^m$, which cancels with the earlier $-q^m$. This shows that $\Sigma(q)\succeq 0$. Thus, the only negative coefficients of the series in \eqref{eqn_j2exp3} are $q^{4}, q^8, q^{12}, q^{16},$ and $q^{20}$, and these coefficients are all equal to $-1$. When added to the rest of the sum in \eqref{eqn_jksum} (which has non-negative coefficients), this argument shows that the only powers of $q$ in the expansion of \eqref{def_ell2} (equivalently, \eqref{eqn_cbconj} with $\ell=2$) with potentially negative coefficients are $q^4, q^8, q^{12}, q^{16}, q^{20}$, and that any such negative coefficient must be $-1$, as claimed. Moreover, for $r\geq 10$, and any $k\geq 1$, we have that $$ \frac{q^{2kr+2}}{1-q^{2(2kr+2)}} = O(q^{22}),$$ which, when combined with the above argument, proves that the coefficients of $q^4, q^8, q^{12}, q^{16},$ and $q^{20}$ are all equal to $-1$. The remaining negative coefficients as given in the table in Proposition \ref{lem_ec} for $1\leq r \leq 9$ are easily calculated directly.
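These direct calculations, and the table in the statement of Proposition \ref{lem_ec}, are easy to script; the following sketch (ours, with a truncation order we chose) recomputes the sets of negative coefficients by expanding the series.

```python
def coeffs(r, ell, N):
    """Coefficients up to q^(N-1) of (-q;q^2)_inf * sum_{k>=0} q^(2kr+ell)/(1+q^(2kr+ell))."""
    c = [0] * N
    c[0] = 1
    for j in range(1, N, 2):                # multiply by (1 + q^j), j odd
        for i in range(N - 1, j - 1, -1):
            c[i] += c[i - j]
    s = [0] * N                             # sum of q^a/(1+q^a), a = 2kr+ell
    a = ell
    while a < N:
        for b in range(1, (N - 1) // a + 1):
            s[a * b] += 1 if b % 2 == 1 else -1
        a += 2 * r
    return [sum(c[i] * s[n - i] for i in range(n + 1)) for n in range(N)]

def negatives(r, ell, N=24):
    """Exponents n < N whose coefficient is negative."""
    return {n for n, cn in enumerate(coeffs(r, ell, N)) if cn < 0}
```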
\end{proof} \subsection{Combinatorial interpretation of Proposition \ref{lem_ec}}\label{sec_eccomb1} In this section, we give a combinatorial interpretation of the coefficient of $q^n$ in the $q$-series of Proposition \ref{lem_ec} in terms of the number of pairs of partitions $(\lambda, (a^b))\vdash n$ satisfying certain conditions. This will complete the first proof of Theorem \ref{beck-lehmer4}. Let $\ell$ be a positive integer such that $\ell\leq 2r$. If $\ell$ is odd, then the coefficient of $q^n$ in \eqref{leftoversGF} is the number of pairs of partitions $(\lambda, (a))\vdash n$ satisfying $a \equiv \ell \pmod{2r}$, $\lambda\in \mathcal{Q}_o$, and $a\not\in\lambda$. If $\ell$ is even, $\ell \neq 2$, we substitute \eqref{eqn_jevenan} in \eqref{eqn_cbconj} to obtain \begin{align} \nonumber (-q;q^2)_\infty& \sum_{k=0}^{\infty} \frac{q^{2kr+\ell}}{1+q^{2kr+\ell}} = \\ \label{I} & \sum_{\substack{a \equiv \ell\!\!\!\!\pmod{2r}\\ a > 0}}\left(1+\left(\sum_{t=0}^{\frac{a}{2}-1} q^{2t}\right)\left( \sum_{n=0}^\infty \frac{q^{(n+1)^2}}{(q^4;q^2)_{n}}\right)\right) \frac{q^{a}}{1-q^{2a}}\\ \nonumber - \\ \label{II} & \sum_{\substack{a \equiv \ell\!\!\!\!\pmod{2r}\\ a > 0}}q^{a} \frac{q^{a}}{1-q^{2a}}. \end{align} The $q$-series $$\sum_{n=0}^\infty \frac{q^{(n+1)^2}}{(q^4;q^2)_{n}}$$ is the generating series for the number of self-conjugate partitions of $n\geq 1$ with smallest part at least $2$. Thus, this is also the generating series of the number of partitions of $n\geq 1$ into distinct odd parts such that the first two parts differ by exactly $2$. Moreover, $$\sum_{t=0}^{\frac{a}{2}-1} q^{2t}$$ is the generating series for choosing a single non-negative even integer no larger than $a-2$.
By adding a non-negative even integer no larger than $a-2$ to the first part of a partition into distinct odd parts whose first two parts differ by exactly $2$, we see that $$\left(\sum_{t=0}^{\frac{a}{2}-1} q^{2t}\right)\left( \sum_{n=0}^\infty \frac{q^{(n+1)^2}}{(q^4;q^2)_{n}}\right)$$ is the generating series for the number of partitions $\lambda$ of $n\geq 1$ into distinct odd parts with $\lambda_1-\lambda_2\leq a$. Thus, \eqref{I} is the generating series for the number of pairs of partitions $(\lambda, (a^b))\vdash n$ with $b$ odd, $a \equiv \ell \pmod{2r}$, and $\lambda \in \mathcal Q_o$ satisfying $\lambda_1-\lambda_2\leq a$. (Note that $\lambda$ may be empty.) To interpret \eqref{II}, for $a\geq 4$ and even, define $\mu(a)\in \mathcal Q_o(a)$ to be the partition \begin{equation}\label{def_mu} \mu(a):=\begin{cases} (\frac{a}{2}+1, \frac{a}{2}-1) & \mbox{ if $\frac{a}{2}$ is even}\\ (\frac{a}{2}+2, \frac{a}{2}-2) & \mbox{ if $\frac{a}{2}$ is odd}. \end{cases} \end{equation} Thus, \eqref{II} is the generating series for the number of pairs of partitions $(\mu(a), (a^b))\vdash n$ with $b$ odd, $a \equiv \ell \pmod{2r}$. Therefore, the coefficient of $q^n$ in $$(-q;q^2)_\infty\sum_{k=0}^{\infty} \frac{q^{2kr+\ell}}{1+q^{2kr+\ell}}$$ is equal to the number of the pairs of partitions $(\lambda, (a^b))\vdash n$ with $b$ odd, $a \equiv \ell \pmod{2r}$, and $\lambda \in \mathcal Q_o$ satisfying $\lambda_1-\lambda_2\leq a$ and $\lambda\neq\mu(a)$. If $\ell=2$, the argument above fails only in the interpretation of $q^{a}\frac{q^{a}}{1-q^{2a}}$ when $a=2$. However, this $q$-series is just $\sum_{k=1}^\infty q^{4k}$. Thus, if $\ell=2$ and $n \equiv 0\pmod 4$, the coefficient of $q^n$ in \eqref{eqn_cbconj} is one less than the number of pairs of partitions described above. If $n\geq 24$, $n \equiv 0 \pmod 4$, then $((9,7,5,1), (2^{(n-22)/2}))$ is a pair counted by the sequence whose generating series is \eqref{I}. 
Thus, if $n \not \in\{4,8,12,16,20\}$, the coefficient of $q^n$ in \eqref{eqn_cbconj} is non-negative and equal to the number of pairs of partitions $(\lambda, (a^b))\vdash n$ with $b$ odd, $a \equiv \ell \pmod{2r}$, and $\lambda \in \mathcal Q_o$ satisfying $\lambda_1-\lambda_2\leq a$, $\lambda\neq \mu(a)$ and, if $n\equiv 0 \pmod 4$, also $(\lambda, (a^b))\neq ((9,7,5,1), (2^{(n-22)/2}))$. If $\ell=2$, $r\geq 10$, and $n\in\{4,8,12,16,20\}$, then $q^n$ appears only when $a=2$ in \eqref{I}. If $b$ is odd, then $n-2b \equiv 2\pmod 4$ and $n-2b\leq 18$. Thus, a nonempty partition $\lambda\vdash n-2b$ into distinct odd parts with $\lambda_1-\lambda_2\leq 2$ would have exactly two parts (a partition into four such parts of size at most $18$ could only be $(7,5,3,1)\vdash 16$, and $16\equiv 0\pmod 4$). However, since $n-2b \equiv 2\pmod 4$, it follows that $\lambda_1-\lambda_2\geq 4$. Therefore, there are no pairs of partitions $(\lambda, (a^b))$ counted by the sequence whose generating series is \eqref{I}, and so the coefficient of $q^n$ in \eqref{eqn_cbconj} is $-1$. For $r<10$, one can easily verify that the values of $n$ giving negative coefficients are as in the statement of the theorem. \subsection{An alternate proof} \label{sec_eccomb2} In this section, we provide an alternate proof of Theorem~\ref{beck-lehmer4}. This proof is independent of Proposition~\ref{lem_ec} and additionally proves the combinatorial interpretation in Section~\ref{sec_eccomb1}. Recall that to prove Theorem \ref{beck-lehmer4}, it suffices to interpret \eqref{eqn_QEtilderiv} combinatorially in the case where $L_r\cup O_r = \{\ell\}$.
Then \eqref{eqn_QEtilderiv} becomes \begin{equation} \begin{split} &(-q;q^2)_\infty\sum_{k=0}^{\infty} \frac{q^{2kr+\ell}}{1+q^{2kr+\ell}}\\ \label{eqn_lsplit} =&(-q;q^2)_\infty\sum_{k=0}^{\infty} \frac{q^{2kr+\ell}}{1-q^{2(2kr+\ell)}}-(-q;q^2)_\infty\sum_{k=0}^{\infty} \frac{q^{2(2kr+\ell)}}{1-q^{2(2kr+\ell)}}\\ =:&\sum_{n=0}^{\infty}c_{\ell,r}(n)q^n, \end{split} \end{equation} { From \eqref{eqn_lsplit} we see that $c_{\ell,r}(n)$ is the excess of the number of elements in $$B_{\ell,r}(n):=\left\{(\lambda,(a^b))\vdash n\ \Big | \begin{array}{l} \lambda \in \mathcal Q_o, b \text{ odd}, a\equiv \ell\!\!\!\!\pmod{2r}\end{array}\!\right\}$$ over that in $$A_{\ell,r}(n):=\left\{(\lambda,(a^b))\vdash n\ \Big | \begin{array}{l} \lambda \in \mathcal Q_o, b \text{ even}, a\equiv \ell\!\!\!\!\pmod{2r}\end{array}\!\right\}.$$ We create an injection $T_{\ell,r}$ from $A_{\ell,r}(n)$ into $B_{\ell,r}(n)$ as follows. \textbf{If $\ell$ is odd:} For $(\lambda, (a^b))$ in $A_{\ell,r}(n)$, define $$T_{\ell,r}(\lambda, (a^b))= \begin{cases} (\lambda\cup \{a\},(a^{b-1})) & \mbox{ if $a \not \in\lambda$},\\ (\lambda \setminus \{a\}, (a^{b+1})) & \mbox{ if $a\in \lambda$}. \end{cases}$$ Then, $c_{\ell,r}(n)=|B_{\ell,r}(n)\setminus T_{\ell,r}(A_{\ell,r}(n))|$, i.e., the number of pairs of partitions $(\lambda, (a))\vdash n$ such that \begin{enumerate} \item[(I$_o$)] $a \equiv \ell \pmod{2r}$ \item[(II$_o$)] $\lambda\in \mathcal{Q}_o$ such that $a$ is not a part of $\lambda$. \end{enumerate} \textbf{If $\ell$ is even:} For $(\lambda, (a^b))$ in $A_{\ell,r}(n)$ with $(\lambda, a)\neq (\varnothing, 2)$, define $$T_{\ell,r}(\lambda, (a^b))= \begin{cases} (\lambda\setminus\{\lambda_1\}\cup \{\lambda_1+a\}, (a^{b-1})) & \mbox{ if } \lambda\neq\varnothing\\ (\mu(a), (a^{b-1})) & \mbox{ if } \lambda=\varnothing, \end{cases}$$ where $\mu(a)$ was defined in \eqref{def_mu}. 
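The identification of $c_{\ell,r}(n)$ with the excess of $|B_{\ell,r}(n)|$ over $|A_{\ell,r}(n)|$ is easy to check numerically before constructing any injection. The following Python sketch (the parameters $r=3$, $\ell=2$ and the truncation order $N=50$ are arbitrary choices made here for illustration, not taken from the paper) expands the series on the left of \eqref{eqn_lsplit} and compares each coefficient with a brute-force signed count of the pairs.

```python
from functools import lru_cache

r, ell, N = 3, 2, 50  # hypothetical sample parameters and truncation order

# Coefficients of (-q; q^2)_inf = prod_{j>=0}(1 + q^(2j+1)), up to q^N.
P = [0] * (N + 1)
P[0] = 1
for part in range(1, N + 1, 2):
    for n in range(N, part - 1, -1):
        P[n] += P[n - part]

# Coefficients of sum_{k>=0} q^(2kr+ell)/(1 + q^(2kr+ell)), expanded via
# q^m/(1+q^m) = sum_{j>=1} (-1)^(j+1) q^(jm).
S = [0] * (N + 1)
m = ell
while m <= N:
    for j in range(1, N // m + 1):
        S[j * m] += (-1) ** (j + 1)
    m += 2 * r

# c[n] = coefficient of q^n in the product, i.e. c_{ell,r}(n).
c = [sum(P[i] * S[n - i] for i in range(n + 1)) for n in range(N + 1)]

@lru_cache(maxsize=None)
def qo(n, mx):
    """Number of partitions of n into distinct odd parts, each part <= mx."""
    if n == 0:
        return 1
    p = min(mx, n)
    if p % 2 == 0:
        p -= 1
    total = 0
    while p >= 1:
        total += qo(n - p, p - 2)  # choose largest part p, recurse below it
        p -= 2
    return total

def pair_excess(n):
    """|B_{ell,r}(n)| - |A_{ell,r}(n)|: pairs (lambda, (a^b)) of n with
    lambda in Q_o and a = ell (mod 2r); b odd counts +1, b even counts -1."""
    excess, a = 0, ell
    while a <= n:
        for b in range(1, n // a + 1):
            excess += (-1) ** (b + 1) * qo(n - a * b, n)
        a += 2 * r
    return excess
```

The expansion $q^m/(1+q^m)=\sum_{j\geq 1}(-1)^{j+1}q^{jm}$ used above is exactly the parity split behind \eqref{eqn_lsplit}.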
When $n\not\equiv 0\pmod 4$ or $\ell\neq 2$, we have $(\lambda, a)\neq (\varnothing, 2)$ for all $(\lambda, (a^b))$ in $A_{\ell,r}(n)$. Then $c_{\ell,r}(n)=|B_{\ell,r}(n)\setminus T_{\ell,r}(A_{\ell,r}(n))|$, i.e., the number of pairs of partitions $(\lambda, (a^b))\vdash n$ such that \begin{enumerate} \item[(I$_e$)] $a \equiv \ell \pmod{2r}$ and $b$ is odd. \item[(II$_e$)] $\lambda\in \mathcal{Q}_o$ such that $\lambda_1-\lambda_2\leq a$, $\lambda \neq \mu(a)$. \end{enumerate} If $n\equiv 0\pmod 4$ and $\ell =2$, then $(\varnothing, (2^{n/2}))\in A_{\ell,r}(n)$. If $n\geq 24$, we define \begin{equation*}\label{eq:imageT} T_{\ell,r}(\varnothing, (2^{n/2}))= ((9,7,5,1),(2^{(n-22)/2})). \end{equation*} If $n\not \in \{4,8,12,16,20\}$, then $c_{\ell,r}(n)=|B_{\ell,r}(n)\setminus T_{\ell,r}(A_{\ell,r}(n))|$, i.e., the number of pairs of partitions $(\lambda, (a^b))\vdash n$ satisfying (I$_e$) and (II$_e$) above and the additional condition \begin{enumerate} \item[(III$_e$)] If $a=2$, then $\lambda\neq (9,7,5,1)$. \end{enumerate} If $n\in \{4,8,12,16,20\}$, there is no obvious image of $(\varnothing, (2^{n/2}))$ under $T_{\ell,r}$ that keeps the transformation injective. In this case, $c_{\ell,r}(n)$ is one less than the number of pairs of partitions $(\lambda, (a^b))\vdash n$ satisfying (I$_e$) and (II$_e$) above. Depending on $r$ and $n$, in this case $c_{\ell,r}(n)$ could be $-1$. } \begin{remark} For sufficiently large $n$, the coefficients of $q^n$ are strictly positive. We explain this more precisely below. Suppose $\ell$ is odd and $n\geq \ell+8$. Let $d:=n-\ell$. Then, if $d$ is even, we have $(d-1, 1), (d-3, 3)\in \mathcal Q_o(d)$, and if $d$ is odd, we have $(d), (d-4, 3,1)\in \mathcal Q_o(d)$. Since the two partitions in each case have disjoint sets of parts, at least one of them does not contain $\ell$ as a part, so there is a pair $(\lambda, (\ell))\vdash n$ satisfying (I$_o$) and (II$_o$). Suppose $\ell$ is even and $n\geq \ell+19$, and let $d:=n-\ell$ as above. 
The following pairs $(\lambda, (\ell))$ satisfy conditions (I$_e$) and (II$_e$). If $d\equiv 0\pmod 4$, let $\lambda:=(\frac{d}{2}-1,\frac{d}{2}-3, 3,1)$. If $d\equiv 1\pmod 4$, let $\lambda:=(\frac{d+1}{2},\frac{d-3}{2},1)$. If $d\equiv 2\pmod 4$, let $\lambda:=(\frac{d}{2}-2,\frac{d}{2}-4, 5,1)$. If $d\equiv 3\pmod 4$, let $\lambda:=(\frac{d-1}{2},\frac{d-5}{2}, 3)$. If $n\geq 28$ and $d\equiv 2 \pmod 4$ with $\ell=2$, then (III$_e$) is also satisfied. \end{remark} \subsection{Examples of Theorem \ref{beck-lehmer4}} In this section, we give some examples of Theorem \ref{beck-lehmer4} for specific choices of $L_r$ and $O_r$ in which the excess is non-negative for all $n$. Example 1 gives a new interpretation for the excess studied in Theorem \ref{beck-lehmer}. Examples 2 and 3, when specialized to $r=1$, are also related to the excess studied in Theorem \ref{beck-lehmer}. \subsubsection{Example 1: $L_r\cup O_r = \{1,\ldots,2r\}$} The excess in this case is the same as that in Theorem \ref{beck-lehmer}, but the combinatorial descriptions given by Theorem \ref{beck-lehmer} and Theorem \ref{beck-lehmer4} are different. When $n\not\equiv 0 \pmod 4$, Theorem \ref{beck-lehmer4} becomes: The excess of the number of parts in all partitions in $\mathcal{Q}_o(n)$ plus the number of even parts in all partitions of $\mathcal{P}_e(n,2)$ over the number of even parts in all partitions in $\mathcal{P}_o(n,2)$ equals the number of pairs of partitions $(\lambda, (a^b))\vdash n$ satisfying the following conditions: \begin{itemize} \item[i.] $b$ is odd. Moreover, if $a$ is odd, then $b=1$, \item[ii.] $\lambda\in\mathcal{Q}_o$. Moreover, if $a$ is odd, then $a\not \in \lambda$; if $a$ is even, then $\lambda_1-\lambda_2\leq a$. \end{itemize} When $n\in\{4,8,12,16,20\}$, since $2\in L_r\cup O_r$, Theorem \ref{beck-lehmer4} does not guarantee non-negativity of the excess because we could not define our injection on $(\varnothing,(2^{n/2}))$. 
However, since $L_r\supseteq \{2,4,\ldots, 2r\}$, when $n\equiv 0\pmod{4}$, we can map $(\varnothing, (2^{n/2}))$ to $(\varnothing, (n))$, proving non-negativity of the excess. Specifically, the excess is now equal to the number of pairs of partitions $(\lambda,(a^b))\vdash n$ satisfying the following conditions: \begin{itemize} \item[i.] $b$ is odd. Moreover, if $a$ is odd, then $b=1$, \item[ii.] $\lambda\in\mathcal{Q}_o$. Moreover, if $a$ is odd, then $a\not \in \lambda$; if $a$ is even, then $\lambda_1-\lambda_2\leq a$ and $\lambda\neq\mu(a)$; if $a \equiv 0 \pmod 4$ and $b=1$, then $\lambda\neq\varnothing$. \end{itemize} With the modified injection, we have the following Corollary of Theorem \ref{beck-lehmer4}: \begin{corollary}\label{cor-even} The excess of the number of even parts in all partitions of $\mathcal P_o(n,2)$ over the number of even parts in all partitions of $\mathcal P_e(n,2)$ equals the number of partitions $\lambda$ of $n$ such that exactly one part is even, all other parts are odd and distinct, the even part may be repeated an odd number of times and, if we write $\lambda=(\lambda^o\cup ((2k)^b))$ with $\lambda^o\in\mathcal{Q}_o$, $k\geq 1$, then $\lambda^o_1-\lambda^o_2 \leq 2k, \lambda^o\neq\mu(2k)$, and if $k$ is even and $b=1$ then $\lambda^o\neq \varnothing$. \end{corollary} \begin{proof} Corollary \ref{cor-even} follows from the fact that counting $(\lambda, (a))\vdash n$ with $a$ odd and $a\not \in \lambda$ is the same as counting parts in $\mathcal Q_o(n)$. When $a=2k$, we insert the even (possibly repeated) part into the partition into distinct odd parts to obtain a statement similar to the original Beck conjecture. \end{proof} Combining Theorem \ref{beck-lehmer0} and Corollary \ref{cor-even}, we arrive at the following corollary. 
\begin{corollary} \label{cor-odd} The excess of the number of odd parts in all partitions in $\mathcal P_e(n,2)$ over the number of odd parts in $\mathcal P_o(n,2)\cup \mathcal Q_o(n)$ equals the number of partitions $\lambda$ of $n$ satisfying either \begin{itemize} \item[i.] $\lambda$ has exactly one even part, possibly repeated, and all other parts are odd and distinct, or \item[ii.] all parts of $\lambda$ are odd and exactly one part $b$ is repeated. Moreover, if $\lambda^o = \lambda \setminus (b^{2k})$ denotes the partition obtained by removing from $\lambda$ the largest even number of parts equal to $b$, then $\lambda^o_1-\lambda^o_2 \leq 2k, \lambda^o\neq\mu(2k)$, and if $k$ is even and $b=1$ then $\lambda^o\neq \varnothing$. \end{itemize} \end{corollary} \begin{proof} The excess in Corollary \ref{cor-odd} is the sum of the excess in Theorem \ref{beck-lehmer0} and Corollary \ref{cor-even}. The partitions described in \emph{ii.} are in one-to-one correspondence with the partitions described in Corollary \ref{cor-even} and are disjoint from the partitions described in \emph{i.} To see this, consider a partition $\mu=(\mu^o \cup ((2k)^b))$ as in Corollary \ref{cor-even}. Then define $\lambda=\mu^o\cup (b^{2k})$. Now $b$, which is odd, is a repeated part. The part $b$ may have already existed in $\mu^o$, so its multiplicity in $\lambda$ can be even or odd. \end{proof} \subsubsection{Example 2: $L_r\cup O_r = \{r,2r\}$} In this case, \[\frac{\partial}{\partial z}\Big|_{z=1}(\widetilde Q_{r,O_r}(z;q) - \widetilde E_{r,L_r}(z;q)) = (-q;q^2)_\infty \sum_{k=1}^\infty \frac{q^{kr}}{1+q^{kr}}.\] Theorem \ref{beck-lehmer4} implies that the series has non-negative coefficients when $r\geq 3$, because $2\not \in L_r\cup O_r$. When $r=1$, both Example 1 and Proposition \ref{lem_ec} show that the series has non-negative coefficients. When $r=2$, the series also has non-negative coefficients. 
As in Example 1, when $n\equiv 0 \pmod 4$, we can map $(\varnothing, (2^{n/2}))$ to $(\varnothing, (n))$. Alternatively, one can see that in Proposition \ref{lem_ec}, the coefficient of $q^n$ in $(-q;q^2)_\infty \sum_{k=0}^\infty \frac{q^{4k+2}}{1+q^{4k+2}}$ is negative (and equal to $-1$) only for $n=4,8,12$, while it can be computed that the coefficients of $q^4, q^8, q^{12}$ in $(-q;q^2)_\infty \sum_{k=0}^\infty \frac{q^{4k+4}}{1+q^{4k+4}}$ are $1,1,4$, respectively. \subsubsection{Example 3: $L_r\cup O_r = \{1,2r\}$} In this case, \begin{align*} &\frac{\partial}{\partial z}\Big|_{z=1}(\widetilde Q_{r,O_r}(z;q) - \widetilde E_{r,L_r}(z;q)) \notag \\ =& (-q;q^2)_\infty \left(\sum_{k=0}^\infty \frac{q^{2kr+1}}{1+q^{2kr+1}} +\sum_{k=1}^\infty \frac{q^{2kr}}{1+q^{2kr}}\right).\notag \end{align*} Theorem \ref{beck-lehmer4} (for $r\ge 2$) and Example 1 (for $r=1$) imply that the series has non-negative coefficients. \begin{remark} The derivative differences given in Examples 2 and 3 also have interpretations as generating series for the number of pairs of partitions satisfying certain conditions as in Theorem \ref{beck-lehmer4}. \end{remark} \section{Further non-negativity results}\label{sec_aux} The derivative difference in Example 3 can be expressed as \begin{align} &(-q;q^2)_\infty \left(\sum_{k=1}^{\infty} \frac{q^{2kr}}{1-q^{4kr}}-\sum_{k=0}^{\infty} \frac{q^{2(2kr+1)}}{1-q^{2(2kr+1)}}\right) \label{seriesThm6.2}\\ &\quad +(-q;q^2)_\infty\left( \sum_{k=0}^{\infty} \frac{q^{2kr+1}}{1-q^{2(2kr+1)}}-\sum_{k=1}^{\infty} \frac{q^{4kr}}{1-q^{4kr}}\right). \label{seriesThm6.3} \end{align} In Theorem \ref{thm6.2}, we show that \eqref{seriesThm6.2} has non-positive coefficients, and, in Theorem \ref{thm6.3}, we show that \eqref{seriesThm6.3} has non-negative coefficients. 
\begin{theorem} \label{thm6.2} For $r\in \mathbb{N}$, we have that $$(-q;q^2)_\infty\left(\sum_{k=0}^{\infty} \frac{q^{2(2kr+1)}}{1-q^{2(2kr+1)}}-\sum_{k=1}^{\infty} \frac{q^{2kr}}{1-q^{4kr}}\right)\succeq 0.$$ \end{theorem} \begin{proof} It suffices to construct an injection $T$ from $$A(n):=\{(\lambda,(a^b))\vdash n\ | \ \lambda \in \mathcal Q_o, a\equiv 0\!\!\!\!\pmod{2r}, b \text{ odd}\}$$ to $$B(n):=\{(\lambda,(a^b))\vdash n\ | \ \lambda \in \mathcal Q_o, a\equiv 1\!\!\!\!\pmod{2r},b \text{ even}\}.$$ Let $(\lambda, (a^b))\in A(n)$. Let $0\leq c<2r$ be the remainder of $b-1$ when divided by $2r$. Note that $c$ is even. We partition the set $A(n)$ into two disjoint subsets: \begin{align*} &A_1(n):=\{(\lambda,(a^b))\in A(n) \ | \ \lambda\neq\varnothing \};\\ &A_2(n):=\{(\lambda,(a^b))\in A(n) \ | \ \lambda=\varnothing\}. \end{align*} We define $T$ on each $A_i(n)$ in the following way. \begin{enumerate} \item If $\lambda\neq \varnothing$, then $$T(\lambda, (a^b)) = (\lambda \setminus \{\lambda_1\} \cup \{\lambda_1+ab-(a-c)(b-c)\}, (b-c)^{a-c}).$$ \item If $\lambda =\varnothing$, then $$T(\varnothing, (a^b)) = (\mu(ab-(a-c)(b-c)), (b-c)^{a-c}).$$ \end{enumerate} The image sets are thus \begin{align*} T(A_1(n))=&\{(\mu,(x^y))\in B(n) \ | \ \mu_1-\mu_2> (y+z)(x+z)-xy\},\\ T(A_2(n))=&\{(\mu,(x^y))\in B(n) \ | \ \mu=\mu((x+z)(y+z)-xy) \}, \end{align*} where $z$ is the remainder of $-y$ when divided by $2r$. Note that $T$ maps $(\lambda, (a^b))\in A(n)$ with $b\equiv \ell \pmod {2r}$ to $(\mu, (x^y))\in B(n)$ with $y\equiv -\ell+1 \pmod {2r}$. When $b\equiv 1\pmod{2r}$, we have $c=0$ and, with the convention $\mu(0):=\varnothing$, $T(\varnothing, (a^b)) = (\varnothing, (b^a))\not\in T(A_1(n))$. When $b\equiv \ell \not\equiv 1\pmod{2r}$, because $(y+z)(x+z)-xy\geq2(x+y)+4\geq4$, $T(A_1(n))$ and $T(A_2(n))$ are disjoint. 
Define the map $L$ from $T(A(n))$ to $A(n)$ as follows: \begin{enumerate} \item If $(\mu,(x^y))\in T(A_1)$, then $$L(\mu, (x^y)) = (\mu \setminus \{\mu_1\} \cup \{\mu_1-(y+z)(x+z)+xy\}, ((y+z)^{x+z})).$$ \item If $(\mu,(x^y))\in T(A_2)$, then $$L(\mu, (x^y)) = (\varnothing, ((y+z)^{x+z})).$$ \end{enumerate} Then $L$ and $T$ are inverse to each other. Hence $T$ is an injection and Theorem \ref{thm6.2} follows. \end{proof} \begin{remark} When $r=1$, the injection $T$ is the \emph{bijection} $(\lambda, (a^b))\mapsto (\lambda, (b^a))$, where the conjugation $(a^b)\mapsto (b^a)$ was used in the proof of Theorem \ref{beck-lehmer}. \end{remark} \begin{theorem} \label{thm6.3} For $r\in \mathbb N$, we have that $$(-q;q^2)_\infty \left(\sum_{k=0}^\infty \frac{q^{2kr+1}}{1-q^{2(2kr+1)}}-\sum_{k=1}^\infty \frac{q^{4kr}}{1-q^{4kr}}\right)\succeq 0.$$ \end{theorem} \begin{proof}[First Proof.] Because the derivative difference in Example 3 has non-negative coefficients, Theorem \ref{thm6.3} follows from Theorem \ref{thm6.2}. \end{proof} \begin{proof}[Second Proof.] Alternatively, we can also prove Theorem \ref{thm6.3} directly. It suffices to construct an injection $T$ from $$A(n):=\{(\lambda,(a^b))\vdash n\ | \ \lambda\in \mathcal Q_o,a\equiv 0\!\!\!\!\pmod {2r}, b \text{ even} \}$$ to $$B(n):=\{(\lambda,(a^b))\vdash n \ | \ \lambda \in \mathcal Q_o, a\equiv 1\!\!\!\!\pmod{2r}, b \text{ odd}\}.$$ We partition the set $A(n)$ into three disjoint subsets: \begin{align*} &A_1(n):=\{(\lambda,(a^b))\in A(n) \mid a+(2r-1)(b-1) \not\in \lambda \};\\ \ \\ &A_2(n) \\ &:={\small{\{(\lambda,(a^b))\in A(n) \mid a+(2r-1)(b-1) \in \lambda \text{ and $\lambda$ has at least two parts}\}};}\\ \ \\ &A_3(n):=\{(\lambda,(a^b))\in A(n) \mid \lambda =(a+(2r-1)(b-1)) \}. \end{align*} We define $T$ on each $A_i(n)$ in the following way. 
\begin{enumerate} \label{injection_general} \item If $(\lambda, (a^b))\in A_1(n)$ (including the case where $\lambda$ is empty), then $$T(\lambda, (a^b)):=\left(\lambda\cup \{a+(2r-1)(b-1)\}, \left((a+1-2r)^{b-1}\right)\right).$$ \item If $(\lambda, (a^b))\in A_2(n)$, let $m$ denote the largest part of $\lambda$ that is not $(a+(2r-1)(b-1))$. Then $T(\lambda, (a^b))$ equals {\small{\begin{align*}\left((\lambda\setminus\{a\!+(2r\!-1)(b\!-1),m\})\cup\{2(a\!+(2r\!-1)(b\!-1))\!+m\}, \left((a\!+1\!-2r)^{b\!-1}\right)\right).\end{align*}}} \item If $(\lambda, (a^b))\in A_3(n)$, then $$T(\lambda, (a^b)):=\left(\left(a+1,a+(2r-2)b-(2r-1)\right), \left((a+1)^{b-1}\right)\right).$$ \end{enumerate} The image sets are thus \begin{align*} &T(A_1(n))=\{(\mu,(c^d))\in B(n)\mid c+(2r-1)(d+1) \in \mu \}, \\ \ \\ &T(A_2(n))=\left\{(\mu,(c^d))\in B(n) \ \Big | \ \begin{array}{l} c+(2r-1)(d+1) \not \in \mu, \\ \mu_1-\mu_2>2(c+(2r-1)(d+1)) \end{array}\right\},\\ \ \\ &T(A_3(n))=\{(\mu,(c^d))\in B(n)\mid \mu = (c,c+(2r-2)d-2)\}. \end{align*} Note that when $r=1$, $(2r-2)d-2=-2<0$, and $2\leq 2(c+(2r-1)(d+1))$. When $r>1$, $(2r-2)d-2>0$ and $(2r-2)d-2\leq 2(c+(2r-1)(d+1))$. Hence $T(A_1(n)), T(A_2(n)), T(A_3(n))$ are pairwise disjoint. Define the map $L$ from $T(A(n))$ to $A(n)$ as follows: \begin{enumerate} \item If $(\mu,(c^d))\in T(A_1(n))$, then $$L(\mu,(c^d)):=(\mu\setminus\{c+(2r-1)(d+1)\},((c+2r-1)^{d+1})).$$ \item If $(\mu,(c^d))\in T(A_2(n))$, then we define $L(\mu,(c^d))$ by {\small{\begin{align*}((\mu\setminus\{\mu_1\})\cup\{c\!+(2r\!-1)(d\!+1),\mu_1-2(c\!+(2r\!-1)(d\!+1))\},((c\!+2r\!-1)^{d+1})).\end{align*}}} \item If $(\mu,(c^d))\in T(A_3(n))$, then $$L(\mu,(c^d)):=\left((c-1+(2r-1)d),\left((c-1)^{d+1}\right)\right).$$ \end{enumerate} Then $L$ and $T$ are inverses of each other. Hence, $T$ is an injection and Theorem \ref{thm6.3} follows. \end{proof} \begin{remark} When $r=1$, the second proof of Theorem \ref{thm6.3} recovers the proof of Theorem \ref{beck-lehmer}. 
\end{remark}
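The injections of this section can also be sanity-checked by expanding the series to finite order. The sketch below (the truncation order $40$ and the sampled values of $r$ are arbitrary choices) computes the coefficients of the series in Theorem \ref{thm6.2} and confirms that they are non-negative, and that they vanish identically when $r=1$, as the bijection in the remark following the proof predicts.

```python
def dist_odd_coeffs(N):
    """Coefficients of (-q; q^2)_inf = prod_{j>=0}(1 + q^(2j+1)), up to q^N."""
    P = [0] * (N + 1)
    P[0] = 1
    for part in range(1, N + 1, 2):
        for n in range(N, part - 1, -1):
            P[n] += P[n - part]
    return P

def geom_sum_coeffs(terms, N):
    """Coefficients of the sum of q^m / (1 - q^period) over (m, period) pairs."""
    S = [0] * (N + 1)
    for m, period in terms:
        e = m
        while e <= N:
            S[e] += 1
            e += period
    return S

def thm62_coeffs(r, N):
    """Coefficients (up to q^N) of
    (-q;q^2)_inf * ( sum_{k>=0} q^(2(2kr+1))/(1-q^(2(2kr+1)))
                   - sum_{k>=1} q^(2kr)/(1-q^(4kr)) )."""
    pos = [(2 * (2 * k * r + 1), 2 * (2 * k * r + 1))
           for k in range(N) if 2 * (2 * k * r + 1) <= N]
    neg = [(2 * k * r, 4 * k * r)
           for k in range(1, N + 1) if 2 * k * r <= N]
    D = [s - t for s, t in zip(geom_sum_coeffs(pos, N), geom_sum_coeffs(neg, N))]
    P = dist_odd_coeffs(N)
    return [sum(P[i] * D[n - i] for i in range(n + 1)) for n in range(N + 1)]
```

For $r=1$ the conjugation $(a^b)\mapsto (b^a)$ is a bijection between the two sets of pairs, so every coefficient is zero; for larger $r$ the coefficients are non-negative but generally positive.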
{ "arxiv_id": "2109.00609", "url": "https://arxiv.org/abs/2109.00609", "abstract": "Euler's identity equates the number of partitions of any non-negative integer n into odd parts and the number of partitions of n into distinct parts. Beck conjectured and Andrews proved the following companion to Euler's identity: the excess of the number of parts in all partitions of n into odd parts over the number of parts in all partitions of n into distinct parts equals the number of partitions of n with exactly one even part (possibly repeated). Beck's original conjecture was followed by generalizations and so-called \"Beck-type\" companions to other identities. In this paper, we establish a collection of Beck-type companion identities to the following result mentioned by Lehmer at the 1974 International Congress of Mathematicians: the excess of the number of partitions of n with an even number of even parts over the number of partitions of n with an odd number of even parts equals the number of partitions of n into distinct, odd parts. We also establish various generalizations of Lehmer's identity, and prove related Beck-type companion identities. We use both analytic and combinatorial methods in our proofs.", "subjects": "Combinatorics (math.CO); Number Theory (math.NT)", "title": "On a Partition Identity of Lehmer" }
https://arxiv.org/abs/1311.0267
Sectional curvature for Riemannian manifolds with density
In this paper we introduce two new notions of sectional curvature for Riemannian manifolds with density. Under both notions of curvature we classify the constant curvature manifolds. We also prove generalizations of the theorems of Cartan-Hadamard, Synge, and Bonnet-Myers as well as a generalization of the (non-smooth) 1/4-pinched sphere theorem. The main idea is to modify the radial curvature equation and second variation formula and then apply the techniques of classical Riemannian geometry to these new equations.
\section{Introduction} In this paper we are interested in the geometry of a Riemannian manifold $(M,g)$ with a smooth positive density function, $e^{-f}$. A theory of Ricci curvature for these spaces goes back to Lichnerowicz \cite{Lich1, Lich2} and was later developed by Bakry-Emery \cite{BE} and many others. It has turned out to be integral to developments in both Ricci flow and optimal transport and has thus experienced an explosion of results in the last few years. We will not try to reference them all here; see chapter 18 of \cite{MorganBook} for a partial survey. A notion of weighted scalar curvature also comes up in Perelman's work \cite{Per} and is related to his functionals for the Ricci flow; also see \cite{Lott2, CGY1, CGY2}. The weighted Gauss curvature and the weighted Gauss-Bonnet theorem in dimension two have also been studied in \cite{Cetc, CM}. We introduce two new concepts of sectional curvature for a Riemannian manifold equipped with a smooth vector field $X$. Given an orthonormal pair of vectors $(V,U)$ we define \begin{eqnarray*} \sec_X^V(U) &=& \sec(V,U) + \frac12 L_X g(V,V) \\ \overline{\sec}_X^V(U) &=& \sec(V,U) + \frac12 L_X g(V,V) + g(X,V)^2 \end{eqnarray*} where $\sec(V,U)$ is the sectional curvature of the plane spanned by $V$ and $U$. When $X = \nabla f$ is a gradient field we write $\sec_f$ and $\overline{\sec}_f$ respectively. The asymmetrical placement of $U$ and $V$ emphasizes that $\mathrm{sec}^V_X(U) \neq \mathrm{sec}^U_X(V)$. On the other hand, we will see below that in dimension $2$, bounds on $\sec_X$ and $\overline{\sec}_X$ are equivalent to bounds on certain Bakry-Emery Ricci tensors. Since the Bakry-Emery Ricci tensors generically have distinct eigenvalues in dimension $2$, the lack of symmetry of the weighted sectional curvature is a necessary feature of any notion of weighted sectional curvature that agrees with the Bakry-Emery Ricci curvature. 
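To spell out the dimension-$2$ claim (this is our reading of it, using the tensor $\mathrm{Ric}_X^N$ recalled later in the introduction): if $\dim M = 2$ and $(V,U)$ is an orthonormal basis of $T_pM$, then $\mathrm{Ric}(V,V) = \sec(V,U)$, so

```latex
\begin{aligned}
  \sec_X^V(U) &= \mathrm{Ric}(V,V) + \tfrac12 L_X g(V,V)
               = \mathrm{Ric}_X(V,V), \\
  \overline{\sec}_X^V(U) &= \mathrm{Ric}(V,V) + \tfrac12 L_X g(V,V) + g(X,V)^2
               = \mathrm{Ric}_X^{-1}(V,V),
\end{aligned}
```

since $\mathrm{Ric}_X^{-1} = \mathrm{Ric} + \frac12 L_X g + X^{\sharp}\otimes X^{\sharp}$. Bounds on the two weighted sectional curvatures in dimension $2$ are therefore exactly bounds on $\mathrm{Ric}_X$ and $\mathrm{Ric}_X^{-1}$.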
We also show below that $\sec_X$ and $\overline{\sec}_X$ come up naturally in at least three places: the radial curvature equation, the second variation of energy formula, and formulae for Killing fields. We will discuss our motivation for considering these notions from the radial curvature equation in section two. We use these equations to show that some of the fundamental comparison results about sectional curvature bounds extend to $\sec_X$ and $\overline{\sec}_X$. We define the condition $\mathrm{sec}_X = \psi$, where $\psi$ is a real valued function on $M$, to mean that $\sec^V_X(U) = \psi(p) $ for all $p \in M$ and for all orthonormal pairs $(V,U)$ in $T_pM$. We can define the conditions $\overline{\sec}_X = \psi$ or $\sec_X \geq (\leq ) \psi$, etc. similarly. The most fundamental fact about sectional curvature is that constant curvature characterizes the classical Euclidean, spherical, and hyperbolic geometries. Constant weighted sectional curvature also characterizes natural vector fields or functions on spaces of constant curvature. \begin{proposition}[Constant Curvature] \label{ConsCurvIntro} Let $(M^n,g)$ be a Riemannian manifold of dimension $n>2$, let $X$ be a smooth vector field and $f$ be a smooth function on $M$. Then \begin{enumerate} \item $\sec_X = \psi$ if and only if $g$ has constant curvature and $X$ is a conformal field on $(M,g)$. \item $\overline{\mathrm{sec}}_f = \psi$ if and only if both $g$ and $e^{-2f} g$ have constant curvature. \end{enumerate} \end{proposition} When the weighted sectional curvatures are not constant, we think of $\overline{\sec}_X$ or $\sec_f$ as measuring how far away the space is from one of the canonical spaces in Proposition \ref{ConsCurvIntro}. First we generalize the Cartan-Hadamard theorem to the case where $\overline{\sec}_X \leq 0$. 
\begin{theorem}[Weighted Cartan-Hadamard Theorem] If a complete Riemannian manifold admits a smooth vector field $X$ such that $ \overline{\sec}_X \leq 0$, then $M$ does not have any conjugate points. In particular, if $M$ is simply connected then it is diffeomorphic to $\mathbb{R}^n$. \end{theorem} Applying the result to the universal cover gives the standard corollary that a compact Riemannian manifold that admits a vector field $X$ with $\overline{\sec}_X \leq 0$ must have infinite fundamental group and vanishing higher homotopy groups. The lack of conjugate points also implies much more about the fundamental group, see \cite{CS}. Also note that $\overline{\mathrm{sec}}_X \geq \sec_X$, so the condition $\overline{\sec}_X \leq 0$ is a stronger assumption than $\sec_X \leq 0$. The Cartan-Hadamard theorem is not true for the condition $\sec_f \leq 0$, see Example \ref{cigar}. In the case of positive curvature we also prove generalizations of the following results of Synge \cite{Synge} and Berger \cite{BergerKilling}. \begin{theorem} \label{PosIntro} Suppose a compact Riemannian manifold admits a smooth function $f$ such that $\overline{\sec}_f > 0$. Then \begin{enumerate} \item If $M$ is even dimensional then every Killing field has a zero. \item If $M$ is even dimensional and orientable, then $M$ is simply connected. \item If $M$ is odd-dimensional, then $M$ is orientable. \end{enumerate} \end{theorem} \begin{remark} The conditions $\overline{\sec}_f >0$ or $\sec_X >0$ for a compact manifold have been studied much further by the author and Kennard in \cite{KennardWylie}. \end{remark} In the case of a two-sided bound on curvature, we also prove the following generalization of the homeomorphic $1/4$-pinched sphere theorem. Our generalization will depend on the maximum and minimum of $u=e^f$, which we denote by $u_{max}$ and $u_{min}$. 
\begin{theorem} \label{PinchingIntro} If $(M,g)$ is a compact, simply connected Riemannian manifold and there is a smooth function $f$ such that \[ \frac{1}{4}\left (\frac{u_{max}}{u_{min}} \right)^2 < \overline{\sec}_f \leq \left(\frac{u_{min}}{u_{max}} \right)^2,\] then $M$ is homeomorphic to the sphere. \end{theorem} The proof of Theorem \ref{PinchingIntro} follows from classical methods of Klingenberg \cite{Kling61} and Berger \cite{Berger2}. We prove that the manifold is homotopy equivalent to the sphere and apply the resolution of the Poincar\'e conjecture to conclude that the manifold is homeomorphic to a sphere. We do not know to what extent this theorem is optimal. Note that the hypothesis implies that $\frac{u_{max}}{u_{min}} < 4^{1/4} = \sqrt{2} \approx 1.414$, so the result applies only to small densities. Some other pinching phenomena will be considered by the author in \cite{Wylie}. One reason for studying sectional curvature for manifolds with density is that understanding sectional curvature will enhance our understanding of weighted Ricci curvature. Given any real number $N$, the $N$-Bakry-Emery Ricci tensor is \[ \mathrm{Ric}_X^N = \mathrm{Ric} + \frac12 L_X g - \frac{X^\sharp \otimes X^\sharp}{N} \] When $N=\infty$ we write $\mathrm{Ric}^{\infty}_X = \mathrm{Ric}_X = \mathrm{Ric} + \frac12L_X g$. As we mention above, comparison geometry for lower bounds on the Bakry-Emery Ricci tensors has been very well studied recently. Traditionally this has been done with the parameter $N >0$ or $N=\infty$. Recently, however, the negative case has been considered; see \cite{KolMil, Mil, Ohta} and the references therein. Our approach to weighted sectional curvature gives a new diameter estimate for a positive lower bound on $\mathrm{Ric}_f ^{-(n-1)}$. 
\begin{theorem} \label{DiamIntro} If a complete Riemannian manifold supports a bounded function $f$ such that $\mathrm{Ric}_f^{-(n-1)} \geq (n-1) kg$ for some $k>0$, then $M$ is compact with finite fundamental group and $ \mathrm{diam}_M \leq \left(\frac{u_{max}}{u_{min}} \right)^{\frac{1}{n-1}}\frac{\pi}{\sqrt{k}} .$ \end{theorem} In \cite[Theorem 1.4]{WW} the author and Wei proved a similar diameter bound under the stronger hypothesis of a positive lower bound on $\mathrm{Ric}_f$. There are simple examples showing that $f$ being bounded is a necessary assumption for $M$ to be compact. Also see \cite{Morgan2} for a similar result for the weighted diameter. The paper is organized as follows. In the next section we discuss the motivation for the definitions which come from the Bakry-Emery Ricci curvatures and the radial curvature equation. We also discuss the relationship between our curvature and the curvature of the conformal change. In section 3 we discuss the case of constant weighted curvature; in section 4 we discuss conjugate radius estimates; in section 5 we consider the second variation of energy formula; in section 6 we prove the diameter estimate; and in section 7 we discuss curvature pinching. In the final section we consider Killing Fields. \textbf{Acknowledgement:} The author would like to thank Guofang Wei, Peter Petersen, and Frank Morgan for their encouragement and very helpful discussions and suggestions that improved the draft of this paper. \section{Motivation and the fundamental equations} In this section we first discuss the motivation for the Bakry-Emery Ricci curvature in terms of Bochner formulas and then show how a similar approach yields the definitions of $\mathrm{sec}_X$ and $\overline{\sec}_X$. 
\subsection{Ricci for manifolds with density} Recall the Bochner formula for the Riemannian Laplacian \[ \frac12\Delta|\nabla u|^2 = |\mathrm{Hess} u|^2 + \mathrm{Ric}(\nabla u, \nabla u) + g\left( \nabla \Delta u, \nabla u\right) \qquad u \in C^3(M). \] If $\mathrm{Ric} \geq k$ and the dimension of $M$ is less than or equal to $n$, an application of the Cauchy-Schwarz inequality gives \begin{eqnarray} \label{Bochner} \frac12\Delta|\nabla u|^2 \geq \frac{(\Delta u)^2}{n} + k |\nabla u|^2 + g\left( \nabla \Delta u, \nabla u\right) \qquad u \in C^3(M). \end{eqnarray} For a smooth vector field $X$ we consider the ``drift'' Laplacian $\Delta_X= \Delta - D_X$. A simple calculation gives the Bochner formula \[ \frac12 \Delta_X |\nabla u|^2 = |\mathrm{Hess} u|^2 + \mathrm{Ric}(\nabla u, \nabla u) +\frac12L_Xg(\nabla u, \nabla u) + g\left( \nabla \Delta_X u, \nabla u\right) \qquad u \in C^3(M).\] The $N$-Bakry-Emery Ricci tensor is defined to be $\mathrm{Ric}^N_X = \mathrm{Ric} + \frac12L_Xg- \frac{X^{\sharp} \otimes X^{\sharp}}{N}$. When $N >0$, if $\mathrm{Ric}_X^N \geq k$ one can show that \begin{eqnarray*} \frac12\Delta_X|\nabla u|^2 \geq \frac{(\Delta_X u)^2}{n+N} + k |\nabla u|^2 + g\left( \nabla \Delta_X u, \nabla u\right). \end{eqnarray*} This looks exactly like the Bochner formula for an $(n+N)$-dimensional manifold. We can also consider the case $N = \infty$; when $X = \nabla f$ we have the Bakry-Emery Ricci tensor $\mathrm{Ric}_f = \mathrm{Ric} + \mathrm{Hess} f$, and $\mathrm{Ric}_f \geq k$ gives \begin{eqnarray*} \frac12\Delta_X|\nabla u|^2 \geq k |\nabla u|^2 + g\left( \nabla \Delta_Xu, \nabla u\right). \end{eqnarray*} From these formulae one can prove versions of many comparison results for lower bounds on $\mathrm{Ric}_X^N$ or $\mathrm{Ric}_X$. All of the classical results generalize to the $\mathrm{Ric}_X^N$ case but with all of the dimension-dependent constants now depending on the synthetic dimension $n+N$ (see \cite{Qian}). 
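The passage from the weighted Bochner formula to the inequality for $\mathrm{Ric}_X^N \geq k$ is the following elementary estimate (a sketch of the standard argument): since $|\mathrm{Hess}\, u|^2 \geq (\Delta u)^2/n$ and $\mathrm{Ric}(\nabla u, \nabla u) + \frac12 L_X g(\nabla u,\nabla u) \geq k|\nabla u|^2 + (D_X u)^2/N$, we have

```latex
|\mathrm{Hess}\, u|^2 + \frac{(D_X u)^2}{N}
  \;\geq\; \frac{(\Delta u)^2}{n} + \frac{(D_X u)^2}{N}
  \;\geq\; \frac{(\Delta u - D_X u)^2}{n+N}
  \;=\; \frac{(\Delta_X u)^2}{n+N},
```

where the middle step is the Cauchy-Schwarz bound $\frac{a^2}{n} + \frac{b^2}{N} \geq \frac{(a-b)^2}{n+N}$, valid for $n, N > 0$, applied with $a = \Delta u$ and $b = D_X u$; here we also used $g(X,\nabla u) = D_X u$.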
We can think of $\mathrm{Ric}_f$ as being an infinite-dimensional (or dimensionless) condition and thus the results for lower bounds on $\mathrm{Ric}_f$ are weaker, see for example \cite{Lott, Morgan, WW, MW}. \subsection{The radial curvature equation} Now to consider sectional curvature we examine the special case of the Bochner formula applied to a distance function. Fix $p \in M$ and let $r(x) = d(p,x)$. The function $r$ is smooth on $M \setminus C_p$ where $C_p$ denotes the cut locus of $p$. On $M \setminus C_p$ introduce geodesic polar coordinates $(r, \theta)$ where $\theta \in S^{n-1}$. The Bochner formula applied to the function $r$ then gives \begin{equation} \label{Ric1} \partial_r (\Delta_X r )= -|\mathrm{Hess} r|^2 - \mathrm{Ric}_X (\partial_r, \partial_r). \end{equation} In the case where $X = \nabla f$, the weighted Laplacian is also related to the weighted volume by the equation \begin{equation} \label{Ric2} L_{ \partial_r} \left(e^{-f} d\mathrm{vol}\right) =\left(\Delta_f r\right) e^{-f} d \mathrm{vol}. \end{equation} Putting these two equations together we can then see how bounds on Bakry-Emery Ricci tensors give control on the measure $e^{-f} d\mathrm{vol}_g$. The corresponding equations for a distance function that involve sectional curvature are the fundamental equations \begin{eqnarray} \label{sec1} L_{\partial_r} g &=& 2\mathrm{Hess} r \\ \label{sec2} (\nabla_{\partial_r} \mathrm{Hess} r) (U,W) + \mathrm{Hess}^2 r(U,W) &=& -g( R^{\partial_r}(U), W) \end{eqnarray} where $U$ and $W$ denote arbitrary vector fields (not the fixed field $X$), and $\mathrm{Hess}^2 r$ is the operator square of $\mathrm{Hess} r$; namely, if $S$ is the $(1,1)$-tensor dual to $\mathrm{Hess} r$, so that $\mathrm{Hess} r (U,W)= g(S(U), W)$, then $\mathrm{Hess}^2 r(U,W) = g(S(S(U)), W)$. Our notation for the curvature tensor is that \begin{eqnarray*} R^V(U) = R(U,V)V = \nabla_U \nabla_V V - \nabla_V \nabla_U V - \nabla_{[U,V]} V . 
\end{eqnarray*} Thus $R^V$ is a symmetric operator on the orthogonal complement of $V$, which, following \cite{Petersen}, we call the {\em directional curvature operator} in the direction of $V$. Note that if we trace equations (\ref{sec1}) and (\ref{sec2}) we get equations (\ref{Ric1}) and (\ref{Ric2}). Equation (\ref{sec2}) is called the radial curvature equation. For the moment we consider the gradient case, $X = \nabla f$. The weighted sectional curvatures will control the growth of $e^{-2f} g$ along a geodesic $\gamma$. Consider the equation
\begin{eqnarray*} L_{\partial_r} \left(e^{-2f} g\right) = 2e^{-2f} \left( \mathrm{Hess} r - g(\nabla f, \partial_r) g \right). \end{eqnarray*}
Set $H_f r= \mathrm{Hess} r - g(\nabla f, \partial_r) g $, then
\begin{eqnarray*} (\nabla_{\partial_r} (H_f r)) (X,Y) &=& (\nabla_{\partial_r} \mathrm{Hess} r )(X,Y) - \mathrm{Hess} f (\partial_r, \partial_r) g(X,Y) \\ &=& - \mathrm{Hess}^2 r(X,Y) - R^{\partial_r } _f(X,Y) \end{eqnarray*}
where $R^{V } _f(U,W) = g(R_f^{V}(U), W)$ is the weighted directional curvature operator defined as
\begin{eqnarray*} R^V_X(U) &=& R^V(U) + \frac12 L_X g(V,V) U \end{eqnarray*}
with $X = \nabla f$, so that if $(V,U)$ is an orthonormal pair of vectors, $g(R_X^V(U), U) = \sec_X^V(U)$. We can make these equations more concrete by considering Jacobi fields. For a Jacobi field $J$ along a unit speed radial geodesic, $\gamma(r)$, with $J \perp \dot{\gamma}$ the fundamental equations are
\begin{eqnarray*} \partial_r |J|^2 &=& 2 \mathrm{Hess} r(J,J) \\ \partial_r \left( \mathrm{Hess} r (J,J) \right) &=& \mathrm{Hess}^2 r(J,J) - R(J, \partial_r, \partial_r, J). \end{eqnarray*}
When considering Jacobi fields in the weighted case, the curvatures $\overline{\sec}_f$ appear.
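As a sanity check on the claim that tracing recovers the Ricci identities, one can trace (\ref{sec2}) over a parallel orthonormal frame of $\partial_r^{\perp}$; the following computation is ours, in the unweighted case $X=0$:

```latex
% For a frame {E_i}_{i=1}^{n-1} of the orthogonal complement of \partial_r,
% parallel along the radial geodesic, we have
\sum_{i=1}^{n-1} (\nabla_{\partial_r}\mathrm{Hess}\, r)(E_i,E_i) = \partial_r(\Delta r), \quad
\sum_{i=1}^{n-1} \mathrm{Hess}^2 r(E_i,E_i) = |\mathrm{Hess}\, r|^2, \quad
\sum_{i=1}^{n-1} g(R^{\partial_r}(E_i), E_i) = \mathrm{Ric}(\partial_r,\partial_r),
% so (sec2) traces to
\partial_r(\Delta r) = -|\mathrm{Hess}\, r|^2 - \mathrm{Ric}(\partial_r,\partial_r),
% which is (Ric1) with X = 0; the radial terms drop out because
% \mathrm{Hess}\, r(\partial_r, \cdot) = 0 along radial geodesics.
```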
Let
\begin{eqnarray*} \overline{R}^V_X(U) &=& R^V(U) + \left(\frac12L_X g(V,V) + g(X,V)^2 \right) U \end{eqnarray*}
and $\overline{R}^{V} _X(U,W) = g(\overline{R}^{V} _X(U), W)$, and for $X=\nabla f$ we write $\overline{R}_f$; then we have
\begin{eqnarray*} \partial_r \left(e^{-2f} |J|^2 \right) &=& 2 e^{-2f} \left( H_f r (J,J) \right) \\ \partial_r \left( H_f r(J,J) \right) &=& \mathrm{Hess}^2 r(J,J)- 2g(\nabla r, X)g(\dot{J}, J) - R^{\partial_r } _f(J,J)\\ &=& (H_fr)^2(J,J) - \overline{R}^{\partial_r } _f(J,J) . \end{eqnarray*}
This now looks even closer to the radial curvature equation (\ref{sec2}). Jacobi fields are the variation fields produced by variations of geodesics, so we can think of Jacobi fields as measuring the rate of spreading of geodesics, and of the fundamental equations as showing that sectional curvature controls this spreading. Thus, the weighted sectional curvatures control the rate of spreading of geodesics in a weighted sense by controlling the derivative of $e^{-2f}|J|^2$ along geodesics. In the motivation above we have used that $X = \nabla f$ in order to differentiate $e^{-2f} g$. However, many of the arguments we will use only depend on arguing along a fixed geodesic $\gamma$. Along a fixed geodesic $\gamma$ we can always find an anti-derivative for $X$ by simply defining $f_{\gamma}(t) = \int_0^t g(X, \dot{\gamma}(s))\, ds.$ We can then still make sense of the equations above along $\gamma$, replacing $e^{-2f}$ with $e^{-2f_{\gamma}}$. We have first motivated the definition of the weighted sectional curvature through the radial curvature equation because it is closer to the approach of Bakry-Emery and Lichnerowicz in the Ricci curvature case. We consider the second variation of energy formula in section 5. The weighted curvature also comes up in considering equations for Killing fields, as we will show in section 8.
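The passage from $R^{\partial_r}_f$ to the strongly weighted operator $\overline{R}^{\partial_r}_f$ above can be checked by completing the square; this verification is ours:

```latex
% Expanding (H_f r)^2 with H_f r = Hess r - g(\nabla f, \partial_r) g, and
% using Hess r(J,J) = g(\dot J, J) for a Jacobi field J with J \perp \partial_r:
(H_f r)^2(J,J)
  = \mathrm{Hess}^2 r(J,J) - 2\, g(\nabla f,\partial_r)\, \mathrm{Hess}\, r(J,J)
    + g(\nabla f,\partial_r)^2 |J|^2,
% so
\mathrm{Hess}^2 r(J,J) - 2\, g(\nabla r, X)\, g(\dot J, J)
  = (H_f r)^2(J,J) - g(X,\partial_r)^2 |J|^2,
% and the leftover term -g(X,\partial_r)^2|J|^2 is exactly what converts
% R^{\partial_r}_f into \overline{R}^{\partial_r}_f, which carries the extra
% g(X,V)^2 term in its definition.
```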
\subsection{Relationship to the conformal change} The weighted curvatures are also different from the sectional curvatures of the conformal metric $h = e^{-2f}g$. The formula for the $(4,0)$-curvature tensor of $h$ in terms of the curvature of $g$ is
\begin{eqnarray*} R^h = e^{-2f} \left( R^g + \left( \mathrm{Hess}^g f + df \otimes df - \frac12 |df|^2 g \right) \circ g\right) \end{eqnarray*}
where $\circ$ denotes the Kulkarni-Nomizu product. We can reinterpret this formula in the following way. \begin{proposition} \label{Conformal} Let $(M,g)$ be a Riemannian manifold with $f$ a smooth function on $M$ and let $h=e^{-2f} g$; then \[ \left(\overline{R}^g\right)_f^U(V,V) = e^{2f} \left(\overline{R}^h\right)_{-f}^V(U,U). \] In particular, \[ \overline{\sec}^g_f(U,V) = e^{-2f} \left( \overline{\sec}^h_{-f}(V,U)\right). \] \end{proposition} \begin{remark} This proposition shows that the map $(g,f) \rightarrow (e^{-2f} g, -f)$ is an involution on the space of Riemannian metrics with density that preserves the conditions $\overline{\sec}_f = \phi$, $\overline{\sec}_f \geq 0$ or $\overline{\sec}_f \leq 0$. \end{remark} \begin{proof} Let $U,V$ be orthogonal vectors in $g$. Then we have
\begin{eqnarray*} e^{2f} R^h(U,V,V, U) &=& R(U,V,V,U) + \mathrm{Hess}^g f(U,U) g(V,V) + \mathrm{Hess}^g f(V,V) g(U,U) \\ && + df(U)^2g(V,V) + df(V)^2 g(U,U) - |df|^2g(U,U)g(V,V) \end{eqnarray*}
This gives us
\begin{eqnarray*} \overline{R}^{U} _f(V,V) &=& e^{2f} \left( R^h(U,V,V, U) - \mathrm{Hess}^g f(V,V) h(U,U) - df(V)^2 h(U,U)+ |df|^2g(V,V) h(U,U) \right) \\ &=& e^{2f} \left( R^h(U,V,V, U) - \mathrm{Hess}^h f(V,V) h(U,U) + df(V)^2 h(U,U) \right) \\ &=& e^{2f} \left( \overline{R}^{V} _{-f} (U,U) \right).
\end{eqnarray*}
where we have used the formula for the Hessian under the conformal change of metric
\[ \mathrm{Hess}^h f (U,V) = \mathrm{Hess}^g f(U,V) + 2 df(U)df(V) - |df|^2 g(U,V). \] \end{proof} \section{Constant Curvature} In this section we establish that our definitions of constant sectional curvature characterize natural canonical Riemannian manifolds with density in dimension larger than two. First we consider the case $\sec_X = \psi$ for some function $\psi$. In dimension two the sectional curvature is always a function on $M$, and so $\sec_X = \psi$ for some function $\psi$ if and only if $X$ is a conformal field. An obvious example in higher dimensions is a constant curvature metric with $X$ a conformal field. It is, in fact, easy to see from Schur's lemma that these are the only examples. \begin{proposition} Suppose that $(M^n,g)$ has $n>2$. There is a vector field $X$ on $(M,g)$ such that $\mathrm{sec}_X = \psi$ for some function $\psi:M \rightarrow \mathbb{R}$ if and only if $(M,g)$ is a space of constant curvature and $X$ is a conformal field on $(M,g)$. Moreover, if $\psi=K$ is constant then either $X$ is a Killing field or $(M,g)$ is isometric to a domain of Euclidean space and $X$ is a homothetic field satisfying $L_X g = 2K g$. \end{proposition} \begin{proof} Let $U,V$ be perpendicular unit vectors in $T_pM$, then
\begin{eqnarray*} \psi &=& \sec^U_X(V) = \sec(U,V) + \frac12 L_Xg(U,U)\\ \psi &=& \sec^V_X(U) = \sec(V,U) + \frac12 L_Xg(V,V) \end{eqnarray*}
Since $ \sec(U,V) = \sec(V,U)$, we have $L_Xg(U,U) = L_Xg(V,V)$, showing that $X$ is a conformal field, $\frac12 L_Xg = \phi g$ for some $\phi:M \rightarrow \mathbb{R}$. Then, letting $\{E_i\}_{i=1}^{n-1}$ be an orthonormal basis for the orthogonal complement of $U$ we have \[ (n-1) \psi = \sum_{i=1}^{n-1} \sec^U_X(E_i) = \mathrm{Ric}(U,U) + (n-1) \phi \] So that $\mathrm{Ric} = (n-1)(\psi - \phi)g$. By Schur's lemma, $\psi - \phi$ must be constant, showing the metric has constant curvature. This also shows that $\psi=K$ is constant if and only if $\phi$ is.
If $\phi$ is zero, then $X$ is Killing and $(M,g)$ has constant curvature $K$. If $\phi \neq 0$, then $X$ is a non-Killing homothetic field. The existence of such a field implies that $(M,g)$ is isometric to a domain in Euclidean space, see \cite[p. 242]{KN}. \end{proof} In the case $\overline{\mathrm{sec}}_X = \psi$ the same proof gives the following result. \begin{lemma} \label{ConstantB} Suppose that $(M^n,g)$ has $n>2$ and there is a vector field $X$ on $M$ such that $\overline{ \mathrm{sec}}_X = \psi$ for some function $\psi$. Then $(M,g)$ has constant curvature $\rho$ and $X$ satisfies $ \frac12 L_X g + X^{\sharp} \otimes X^{\sharp} = (\psi-\rho) g. $ \end{lemma} When $X = \nabla f$ this gives us the following statement from the introduction. \begin{proposition} Suppose that $(M^n,g,f)$ has $n>2$. Then $\overline{ \mathrm{sec}}_f = \psi$ for some function $\psi$ if and only if $g$ and $h=e^{-2f} g$ have constant curvature. \end{proposition} \begin{proof} $g$ has constant curvature by Lemma \ref{ConstantB} and from Proposition \ref{Conformal}, $\overline{\sec}^h_{-f} = \psi e^{2f}$. Therefore applying Lemma \ref{ConstantB} to $h$ tells us that $h$ also has constant curvature. Conversely, if $g$ and $h$ are both constant curvature, the equation for the curvature tensor under conformal change shows that $\mathrm{Hess}\, u$, where $u = e^f$, is a function times the metric, which implies that $\overline{\sec}_f = \psi$ for some function $\psi$. \end{proof} The conformal changes between Riemannian metrics of constant curvature are completely classified; in fact, they are known in the Einstein case, see \cite{Brinkmann, KR}. The proof of this fact also gives some more information about the possible functions $\psi$ such that $\overline{\sec}_f = \psi$. Letting $u=e^f$, from the equation in Lemma \ref{ConstantB} we have $\mathrm{Hess} u = (\psi-\rho) u g$ where $\rho$ is the curvature of the metric.
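The translation from the conformal-field equation of Lemma \ref{ConstantB} to the Hessian equation for $u=e^f$ rests on a pointwise identity worth recording; this short verification is ours:

```latex
% For u = e^f a direct computation gives, using du = e^f df,
\mathrm{Hess}\, u \;=\; \nabla\!\left(e^f df\right)
  \;=\; e^f\left( \mathrm{Hess}\, f + df\otimes df \right),
% and for X = \nabla f we have \tfrac12 L_X g = Hess f and
% X^\sharp \otimes X^\sharp = df \otimes df, so the equation of
% Lemma ConstantB becomes
\mathrm{Hess}\, u = (\psi - \rho)\, u\, g .
```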
A lemma of Brinkmann-Tashiro states that if one has a non-constant solution to $\mathrm{Hess} u = \phi g $ for some function $\phi$, then the metric must be of the form \[ g = dt^2 + (u'(t))^2 g_N\] where $u$ is a function of $t$ and $g_N$ is some fixed metric. Brinkmann \cite{Brinkmann} showed that this is true locally and Tashiro \cite{Tashiro} showed it is true globally when the metric is complete; see also \cite{OS, JW}. Once we have these coordinates we can compute that $ \mathrm{Hess} u = u'' g $ where prime denotes derivatives in the $t$ direction. So we have that $u$ is a solution to $u'' = (\psi-\rho)u$. Differentiating this equation gives us $u'''= (\psi u)' - \rho u'$. On the other hand the sectional curvature in these coordinates is given by $ \mathrm{sec}(\partial_t, X) = -\frac{u'''}{u'}.$ Since $\rho$ is also the sectional curvature, these two equations combine to give us $(\psi u)' = 0$, i.e. $\psi = Ku^{-1}$ for some constant $K$. In particular, we can see that if $\overline{\sec}_f = K$ for a constant $K$ and $f$ is non-constant, then $K$ must be zero. In this case we get the following classification in terms of the curvature $\rho$. \begin{example} Suppose that $(M^n,g,f)$ has $n>2$ and $\overline{ \sec}_f = 0$. If $f$ is non-constant, then after normalizing $\rho$ to be 1, 0, or -1 and possibly re-parametrizing $r$ and rescaling the metric $g_N$ below, the only possibilities are \begin{enumerate} \item $\rho=1$, $g = dr^2 + \sin^2(r) g_N $, where $g_N$ is a metric of constant curvature 1, and $u = \cos(r)$. \item $\rho=0$, $g= dr^2 + g_N$ where $g_N$ is a flat metric, and $u=Ar$. \item $\rho=-1$ and either \begin{enumerate} \item $g = dr^2 + \sinh^2(r) g_N$ where $g_N$ is a metric of constant curvature 1, and $u = \cosh(r)$. \item $g=dr^2 + e^{2r} g_N$ where $g_N$ is a flat metric, and $u = e^r$. \item $g=dr^2 + \cosh^2(r) g_N$ where $g_N$ is a metric of constant curvature $-1$, and $u=\sinh(r)$.
\end{enumerate} \end{enumerate} \end{example} \begin{remark} \label{RemTri} These examples already show that there is no obvious Toponogov triangle comparison type theorem for the conditions $\overline{\sec}_f \geq 0$ or $\leq 0$ as the hemisphere and hyperbolic space both admit densities with constant zero curvature. It also shows that $\overline{\sec}_f \geq 0$ or $\leq 0$ does not imply a triangle comparison theorem for the metric $h= e^{-2f}g$ since if $g$ is the hemisphere then $h$ is the hyperbolic space with the opposite curvature and vice versa. \end{remark} \section{Conjugate Radius estimates} In this section we discuss Jacobi field estimates. First we consider the Cartan-Hadamard theorem and then we prove a theorem for a positive upper bound on weighted curvature. \subsection{Weighted Cartan-Hadamard theorem} The Cartan-Hadamard theorem states that manifolds with non-positive sectional curvature do not have conjugate points. First we show through an example that this theorem is not true for $\sec_X \leq 0$. \begin{example} \label{cigar} Hamilton's cigar metric \cite{Hamilton} is a rotationally symmetric metric on $\mathbb{R}^2$ with $\mathrm{Ric} + \mathrm{Hess} f = 0$ and thus has $\sec_f = 0$. However, it also has conjugate points. To see this note that since the surface is simply connected and complete, if it had no conjugate points it would have a unique geodesic between any two points. Since the cigar is rotationally symmetric we can write the metric as $g = dr^2 + \phi^2(r) d\theta^2$. In the coordinates $(r,\theta)$, fix $\theta_0$ and consider the geodesic defined for all $r \in (-\infty, \infty)$ \[ \gamma(r) = \left \{ \begin{array}{ccc} (r, \theta_0) & r\geq 0 \\ (-r, \theta_0 + \pi) & r< 0 \end{array} \right. .\] Then, since the cigar is cylindrical at infinity, there is a universal constant $C$ such that $d(\gamma(r) ,\gamma(-r)) <C$, for all $r$.
In particular, for $r > C/2$ the segment of $\gamma$ from $\gamma(-r)$ to $\gamma(r)$ has length $2r > C$ and so is not minimizing; thus there are two distinct geodesics between $\gamma(r)$ and $\gamma(-r)$, implying the metric has conjugate points. \end{example} On the other hand, we show that the stronger condition $\overline{\sec}_X \leq 0$ does imply the non-existence of conjugate points. \begin{theorem} \label{CH_thm} Suppose that a manifold $(M,g)$ supports a vector field $X$ such that $\overline{\sec}_X \leq 0$, then $(M,g)$ has no conjugate points. \end{theorem} \begin{proof} Let $\gamma:[0, t_0] \rightarrow M$ be a unit speed geodesic and $J$ a Jacobi field along $\gamma$ which is perpendicular to $\dot{\gamma}$. Let $f=f_{\gamma}$ be the function $f_{\gamma}(t) = \int_0^t g_{\gamma(r)}(X, \dot{\gamma}(r)) dr$. Then we have \[ \frac{d}{dt} \left( \frac12 e^{-2f} |J|^2 \right) = e^{-2f_{\gamma}} \left( g\left( \dot{J} - g(X, \dot{\gamma}) J, J \right) \right) \] and \begin{eqnarray*} \frac{d}{dt} \left( g\left( \dot{J} - g(X, \dot{\gamma}) J, J \right) \right) &=& g(\ddot{J}, J) + g(\dot{J}, \dot{J}) - \frac{1}{2} L_X g(\dot{\gamma}, \dot{\gamma})g(J,J) - 2g(X, \dot{\gamma})g(\dot{J},J) \\ &=& -R(J, \dot{\gamma}, \dot{\gamma}, J) - \frac{1}{2} L_X g(\dot{\gamma}, \dot{\gamma})g(J,J) + g(\dot{J}, \dot{J}) - 2g(X, \dot{\gamma})g(\dot{J},J) \\ &=& -R(J, \dot{\gamma}, \dot{\gamma}, J) - \frac{1}{2} L_X g(\dot{\gamma}, \dot{\gamma})g(J,J) + |\dot{J} - g(X, \dot{\gamma})J|^2 - g(X, \dot{\gamma})^2g(J,J) \\ &\geq& |\dot{J} - g(X, \dot{\gamma})J|^2 - \overline{\sec}_X^{\dot{\gamma}}(J) |J|^2. \end{eqnarray*} Then the assumption $\overline{\sec}_X \leq 0$ gives us that $\frac{d}{dt}\left( g\left( \dot{J} - g(X, \dot{\gamma}) J, J \right) \right) \geq 0$. If $J(0) = 0$, this implies that $g\left( \dot{J} - g(X, \dot{\gamma}) J, J \right) \geq 0,$ which gives us that $\frac{d}{dt} \left( \frac12 e^{-2f} |J|^2 \right) \geq 0$. Thus, the only way $J(0) = J(t_0)=0$ is if $J(t) = 0$ for all $0\leq t \leq t_0$.
\end{proof} \subsection{Positive Upper bound} Now we consider the case $\overline{\sec}_X \leq K$, for a positive constant $K$. Recall that if a Riemannian manifold satisfies $\mathrm{sec} \leq K$ for some $K>0$ then any two conjugate points are distance greater than or equal to $\frac{\pi}{\sqrt{K}}$ apart. We generalize this result to the condition $\overline{\sec}_X \leq K$. To do so we fix some notation. Given a fixed parametrized geodesic $\gamma$ we let $u = e^{f_{\gamma}}$ and let $u_{max}$ and $u_{min}$ be the maximum and minimum of $u$ on the geodesic. While the function $f_{\gamma}$ depends on the parametrization of $\gamma$ we note that the ratio $\frac{u_{min}}{u_{max}}$ does not. \begin{theorem} \label{Conjugate Radius} If $\gamma$ is a geodesic such that $\overline{\sec}_X(\dot{\gamma}, E) \leq K$ for all $|E|=1$, $E \perp \dot{\gamma}$ then the distance between any two conjugate points of $\gamma$ is greater than or equal to $\frac{u_{\mathrm{min}}}{u_{\mathrm{max}}} \cdot \frac{\pi}{\sqrt{K}}.$ \end{theorem} \begin{remark} We can obtain a different proof of Theorem \ref{CH_thm} by letting $K \rightarrow 0$ in Theorem \ref{Conjugate Radius} for a fixed geodesic $\gamma$ with $\overline{\sec}_f(\dot{\gamma}, E) \leq 0$. In particular, Theorem \ref{Conjugate Radius} is not true for $\sec_X \leq K$. \end{remark} \begin{proof} [Proof of Theorem \ref{Conjugate Radius}] Let $J(t)$ be a Jacobi field along $\gamma$ with $J(0) = 0$ and let $\phi = \ln(\frac{1}{2} e^{-2f_{\gamma}} |J|^2)$. If $J(a) = 0$ then $\phi \rightarrow -\infty$ as $t \to a$. The derivative of $\phi$ is \begin{eqnarray*} \frac{d\phi}{dt} &=& \frac{ 2 \left(g(\dot{J}, J) - g(X, \dot{\gamma})|J|^2 \right) }{ |J|^2} \end{eqnarray*} Define $\lambda(t) = \frac{1}{2} e^{2f_{\gamma}} \frac{d\phi}{dt}$.
Then \begin{eqnarray*} \frac{d\lambda}{dt} &=& u^2 \left( \frac{ \frac{d}{dt} \left( g(\dot{J}, J) - g(X, \dot{\gamma})|J|^2 \right) |J|^2 -2 \left( g(\dot{J}, J) - g(X, \dot{\gamma})|J|^2\right)^2 } {|J|^4} \right) \\ &\geq& u^2 \left( \frac{ |\dot{J} - g(X, \dot{\gamma})J|^2|J|^2 - 2 \left( g(\dot{J} - g(X, \dot{\gamma})J, J) \right)^2 - \overline{\sec}_X^{\dot{\gamma}}(J) |J|^4 }{|J|^4}\right)\\ &\geq &-\frac{\lambda^2}{u^2} - Ku^2 \\ \end{eqnarray*} where we have used the formula \begin{eqnarray*} \frac{d}{dt} \left( g\left( \dot{J} - g(X, \dot{\gamma}) J, J \right) \right) &\geq& |\dot{J} - g(X, \dot{\gamma})J|^2 - \overline{\sec}_X^{\dot{\gamma}}(J) |J|^2 \end{eqnarray*} and Cauchy-Schwarz. We thus have \[ \frac{d\lambda}{dt} \geq -\frac{\lambda^2}{u^2} - Ku^2 \geq -\frac{\lambda^2}{u_{\mathrm{min}} ^2} - Ku_{\mathrm{max}}^2.\] We can then get a lower bound for $\lambda$ in terms of the solution to the corresponding Riccati equation, $ \frac{d\lambda}{dt} =-\frac{\lambda^2}{u_{\mathrm{min}} ^2} - Ku_{\mathrm{max}}^2$. This equation can be solved explicitly using separation of variables and we obtain \begin{eqnarray*} \lambda(t) &\geq& (u_{min} u_{max} \sqrt{K} ) \cot\left( \frac{u_{max} \sqrt{K}}{u_{min}} t \right) \end{eqnarray*} This shows that $\lambda$ cannot diverge to $-\infty$ for $t< \frac{u_{min}}{u_{max}} \frac{\pi}{\sqrt{K}}$, which implies $J(t)$ cannot go to $0$ for $t< \frac{u_{min}}{u_{max}} \frac{\pi}{\sqrt{K}}$. \end{proof} \section{Second variation of energy formula and Synge's theorem } We now discuss how the weighted curvatures appear in the formula for the second variation of energy of a path. The energy of a path $c:[a,b] \rightarrow M$ is \[ E(c) = \frac{1}{2} \int_a^b |\dot{c}|^2 dt \] where $\dot{ }$ here and below will denote derivative in the $t$ direction.
The formula for the second variation of energy of a geodesic is \begin{eqnarray*} \frac{d^2E}{ds^2} |_{s=0} &=& \int_{a}^b |\dot{V}|^2 - R(V, \dot{\gamma}, \dot{\gamma}, V) dt + \left. g\left( \frac{\partial^2 \bar{\gamma}}{\partial s^2}, \dot{\gamma} \right) \right|_a^b \\ \end{eqnarray*} where $\bar{\gamma}: [a, b] \times (-\varepsilon, \varepsilon) \rightarrow M$ is a variation of the geodesic $\gamma(t) = \bar{\gamma}(t, 0)$ and $V(t) = \frac{\partial \bar{\gamma}}{\partial s} |_{s=0}$ is the variation field. The index form is the quantity \[ I_{[a,b]}(V,V) = \int_{a}^b |\dot{V}|^2 - R(V, \dot{\gamma}, \dot{\gamma}, V) dt.\] Recall from section 2 that the weighted directional curvature operators along $\gamma$ are \begin{eqnarray*} R^{\dot{\gamma} } _X(U,V) &=& R(U,\dot{\gamma}, \dot{\gamma}, V) +\frac12 L_Xg (\dot{\gamma}, \dot{\gamma})g(U,V)\\ \overline{R}^{\dot{\gamma} } _X(U,V) &=& R(U,\dot{\gamma}, \dot{\gamma}, V) +\frac12 L_Xg (\dot{\gamma}, \dot{\gamma})g(U,V) + g(X,\dot{\gamma})^2g(U,V). \end{eqnarray*} and that the weighted sectional curvatures are given by $\sec^{\dot{\gamma}}_X(U) = R^{\dot{\gamma} } _X(U,U)$ and $\overline{\sec}^{\dot{\gamma}}_X(U) = \overline{R}^{\dot{\gamma} } _X(U,U)$ where $U$ is a unit vector perpendicular to $\dot{\gamma}$. We can modify the formula for the index form to involve the weighted directional curvature operators. \begin{proposition} \label{2variation} For the triple $(M,g,X)$ we have the following formulas for the index form along a geodesic $\gamma$. \begin{eqnarray} I_{[a,b]}(V,V) &=& \int_{a}^b |\dot{V}|^2 - R^{\dot{\gamma} } _X(V,V)- 2 g(\dot{\gamma}, X) g(V, \dot{V}) dt + \left. g(\dot{\gamma}, X)|V|^2 \right|_a^b \label{2vfa} \\ &=& \int_{a}^b |\dot{V} - g(\dot{\gamma}, X)V|^2 -\overline{R}^{\dot{\gamma} } _X(V,V) dt + \left.
g(\dot{\gamma}, X)|V|^2 \right|_a^b \label{2vfb} \end{eqnarray} \end{proposition} \begin{proof} To obtain the first formula we write \begin{eqnarray*} I_{[a,b]}(V,V)&=& \int_{a}^b |\dot{V}|^2 -R^{\dot{\gamma} } _X(V,V) + \frac{1}{2}L_Xg(\dot{\gamma}, \dot{\gamma})|V|^2 dt \\ &=& \int_{a}^b |\dot{V}|^2 -R^{\dot{\gamma} } _X(V,V) + \left(\frac{d}{dt} g(\dot{\gamma}, X)\right) |V|^2 dt \\ &=& \int_{a}^b |\dot{V}|^2 -R^{\dot{\gamma} } _X(V,V) - g(\dot{\gamma}, X) \frac{d}{dt} |V|^2 + \frac{d}{dt} \left(g(\dot{\gamma}, X)|V|^2 \right) dt \\ &=& \int_{a}^b |\dot{V}|^2 -R^{\dot{\gamma} } _X(V,V) - 2 g(\dot{\gamma}, X) g(V, \dot{V}) dt + \left. g(\dot{\gamma}, X)|V|^2 \right|_a^b \end{eqnarray*} To incorporate the strongly weighted curvature into the equation we complete the square \[ |\dot{V} - g(\dot{\gamma}, X)V|^2 = |\dot{V}|^2 - 2g(\dot{\gamma}, X)g(V, \dot{V}) + g(\dot{\gamma},X)^2|V|^2\] to obtain (\ref{2vfb}). \end{proof} \begin{remark} When $X = \nabla f$, the weighted sectional curvatures $\sec_f^U(\dot{\gamma})$ and $\overline{\sec}_f^U(\dot{\gamma})$ also appear in the second variation formula for the weighted energy at a weighted geodesic, see \cite{Morgan2, Morgan3}. \end{remark} Our first application of these formulas will be to generalize Synge's theorem to the weighted setting. We have the following lemma for parallel fields around closed geodesics. \begin{lemma} \label{Shorter} Let $(M,g,X)$ be a Riemannian manifold equipped with a smooth vector field $X$ which contains a closed geodesic $\gamma$ which supports a unit parallel field perpendicular to $\dot{\gamma}$. If either $\mathrm{sec}_X >0$, or $X=\nabla f$ and $\overline{\mathrm{sec}}_f>0$, then there is a smooth closed curve which is homotopic to $\gamma$ and has shorter length. \end{lemma} \begin{proof} First consider the case $\mathrm{sec}_X >0$.
For a parallel field $V$ along a geodesic, (\ref{2vfa}) implies \begin{eqnarray*} \frac{d^2E}{ds^2} |_{s=0} &=& -\int_{a}^bR^{\dot{\gamma} } _X(V,V) dt + \left. g(\dot{\gamma}, X) \right|_a^b + \left. g\left( \frac{\partial^2 \bar{\gamma}}{\partial s^2}, \dot{\gamma} \right) \right|_a^b. \end{eqnarray*} If the geodesic is closed then the boundary terms cancel and from $\mathrm{sec}_X >0$ we obtain \[ \frac{d^2E}{ds^2} |_{s=0} = -\int_{a}^bR^{\dot{\gamma} } _X(V,V) dt < 0 \] This shows that nearby closed curves in the variation have smaller length than the original closed geodesic. When $X=\nabla f$ and $\overline{\mathrm{sec}}_f>0$, let $Y=e^f V$, then \[ \dot{Y} = g(X, \dot{\gamma})e^f V = g(X, \dot{\gamma})Y \] Applying (\ref{2vfb}) to the variation field $Y$ we also get that the boundary terms cancel and we obtain \[ \frac{d^2E}{ds^2} |_{s=0} = -\int_{a}^b \overline{R}^{\dot{\gamma}}_X(Y, Y) dt < 0 \] Again this shows that there is a closed curve with smaller length. \end{proof} The proof of Synge's theorem now goes exactly as in the classical case. \begin{theorem}[Synge's Theorem] Suppose that $M$ is a compact manifold supporting a vector field $X$ such that either $\mathrm{sec}_X >0$, or $X=\nabla f$ and $\overline{\mathrm{sec}}_f>0$, then \begin{enumerate} \item If $M$ is even-dimensional and orientable, then $M$ is simply connected. \item If $M$ is odd-dimensional, then $M$ is orientable. \end{enumerate} \end{theorem} \begin{proof} The argument of Synge proceeds by contradiction and shows that if the topological conclusions do not hold then there is a closed geodesic with a parallel field which minimizes length in its homotopy class, see e.g. Theorem 26 of \cite{Petersen}. Applying Lemma \ref{Shorter} then gives the desired contradiction in the weighted setting. \end{proof} \section{Diameter Estimate} Now we prove the diameter estimate in the introduction.
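As in the conjugate radius estimate, the argument below reduces to an explicit model Riccati equation; for convenience we record its solution here (our verification, not part of the original text):

```latex
% For constants a, b > 0 and K > 0, the function
\lambda_0(t) \;=\; a\, b\, \sqrt{K}\, \cot\!\left( \frac{a \sqrt{K}}{b}\, t \right)
% satisfies the model Riccati equation
\dot{\lambda}_0 \;=\; -\frac{\lambda_0^2}{b^2} - K a^2 ,
% since \dot\lambda_0 = -a^2 K \csc^2 = -a^2 K (1 + \cot^2), and it diverges
% to -\infty as t \to \frac{b}{a} \cdot \frac{\pi}{\sqrt{K}}.
% Taking a = u_{max}, b = u_{min} recovers the comparison solution used in
% the conjugate radius estimate; taking a = v_{min}, b = v_{max} gives the
% one used in the diameter estimate below.
```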
We could give a proof of the result using the traditional second variation of energy argument and the formula from the previous section. However, we will give a quicker proof using the Bochner formula. From formula (\ref{Ric1}) above we have \[ \partial_r (\Delta_X r )= -|\mathrm{Hess} r|^2 - \mathrm{Ric}_X (\partial_r, \partial_r). \] We can modify this equation to obtain an equation for $\mathrm{Ric}_X^{-(n-1)}$ in the following way: \begin{lemma} \label{NegBochner} Let $\gamma$ be a geodesic and let $v = e^{\frac{f_{\gamma}}{n-1}}$. Then \[ \partial_r (v^2 \Delta_X r) \leq -v^2 \frac{(\Delta_X r)^2}{n-1} - v^2 \mathrm{Ric}_X^{-(n-1)}(\partial_r, \partial_r)\] \end{lemma} \begin{proof} We have \begin{eqnarray*} \partial_r (v^2 \Delta_X r ) &=& \left( -|\mathrm{Hess} r|^2 - \mathrm{Ric}_X (\partial_r, \partial_r) + \frac{2 \partial_r f}{n-1} \Delta_X r \right) v^2 \\ &\leq& \left( - \frac{(\Delta r)^2}{n-1} + \frac{2 \partial_r f}{n-1} \Delta_X r - \mathrm{Ric}_X (\partial_r, \partial_r) \right)v^2 \\ &=& \left( - \frac{(\Delta_X r)^2}{n-1} - \mathrm{Ric}^{-(n-1)}_X (\partial_r, \partial_r) \right)v^2 \end{eqnarray*} \end{proof} This now gives us Theorem \ref{DiamIntro}. \begin{proof}[Proof of Theorem \ref{DiamIntro}] Let $\gamma(r)$ be a minimizing unit speed geodesic and let $\lambda(r) = \frac{v^2 \Delta_X r}{n-1}$, then Lemma \ref{NegBochner} tells us that \[ \dot{\lambda} \leq -\frac{\lambda^2}{v^2} - k v^2 \] when $\mathrm{Ric}_X^{-(n-1)} \geq (n-1) k$. Let $v_{max}$ and $v_{min}$ be the maximum and minimum of $v$. Then we have \[ \dot{\lambda} \leq -\frac{\lambda^2}{v_{max}^2} - k v_{min}^2 \] This implies by the Riccati comparison that \[ \lambda(t) \leq v_{\min} v_{\max} \sqrt{k} \cot\left( \frac{v_{\min}}{v_{\max}} \sqrt{k} t \right). \] Since the right hand side goes to $-\infty$ at $t_0 = \frac{v_{max}}{v_{min}} \cdot \frac{\pi}{\sqrt{k}}$, $\lambda$ must go to $-\infty$ at some time no later than $t_0$, meaning the geodesic will not be minimizing past $t_0$.
\end{proof} \section{Pinching} In this section we present the proof of Theorem \ref{PinchingIntro}. We will show that the conjugate radius and second variation estimates we already have, combined with classical methods, give a proof that any such manifold is a homotopy sphere. We will go into less detail in many of the arguments in this section and instead reference the textbooks \cite{doC, Petersen, Kling}. For submanifolds $A$ and $B$ in $M$ define the path space as \[ \Omega_{A,B}(M) = \{ \gamma:[0,1] \rightarrow M, \gamma(0) \in A, \gamma(1) \in B \} \] We consider the energy $E: \Omega_{A,B}(M) \rightarrow \mathbb{R}$ and variation fields tangent to $A$ and $B$ at the end points. The critical points are then the geodesics perpendicular to $A$ and $B$ and we say that the index of such a geodesic is $\geq k$ if there is a $k$-dimensional space of variation fields along the geodesic which have negative second variation. The first step in the proof is that the diameter estimate in the previous section can be improved to an index estimate in the case of a sectional curvature bound. \begin{lemma} \label{lemmaindex} Suppose that $\overline{\sec}_f \geq k$. If $\gamma$ is a geodesic of length longer than $ \frac{u_{\max}}{u_{\min}} \cdot \frac{\pi}{\sqrt{k}} $, then the index of $\gamma$ is greater than or equal to $(n-1)$. \end{lemma} \begin{proof} Along a geodesic $\gamma:[0,l] \rightarrow M$ with a proper variation, $V$, the second variation formula (\ref{2vfb}) becomes \[ \frac{d^2E}{ds^2} |_{s=0} = \int_{0}^l |\dot{V} - g(\dot{\gamma}, X)V|^2 - \overline{R}^{\dot{\gamma} } _X(V,V) dt \] Choose $E$ to be a unit length parallel field along $\gamma$ such that $E \perp \dot{\gamma}$, let $\phi(t)$ be a function such that $\phi(0) = 0$ and $\phi(l)=0$, and let $V = \phi e^{f} E.
$ Then we have \[ \dot{V} - g(\dot{\gamma}, X)V = \dot{\phi} e^f E \] Plugging $V$ into the second variation formula then gives \begin{eqnarray*} \frac{d^2E}{ds^2} |_{s=0}& =& \int_{0}^le^{2f} \left( (\dot{\phi})^2 - \phi^2\overline{R}^{\dot{\gamma} } _X(E,E)\right) dt \\ &=& - \int_{0}^le^{2f} \left( \ddot{\phi} \phi + 2\dot{f}\dot{\phi} \phi + \phi^2 \overline{R}^{\dot{\gamma} } _X(E,E)\right) dt \\ &\leq& - \int_{0}^le^{2f} \phi \left( \ddot{\phi} + 2\dot{f}\dot{\phi} + k\phi \right)dt \end{eqnarray*} Let $\psi$ be the solution to \begin{eqnarray*} \label{BVE} \ddot{\psi} + 2\dot{f}\dot{\psi} + k\psi = 0 \qquad \psi(0) = 0 \qquad \dot{\psi}(0) = 1. \end{eqnarray*} and let $L$ be the smallest positive number such that $\psi(L) = 0$. If we show that $L \leq \frac{u_{\max}}{u_{\min}} \cdot \frac{\pi}{\sqrt{k}}$ then this will imply the result since we can then construct $(n-1)$ linearly independent fields along $\gamma$ with $ \frac{d^2E}{ds^2} |_{s=0} \leq 0$ by taking the fields $V$ as above and defining $\phi = \psi$ on $[0,L]$ and $\phi(t) = 0$ for $t \in \left [L, \frac{u_{\max}}{u_{\min}} \cdot \frac{\pi}{\sqrt{k}}\right]$. To see that $L \leq \frac{u_{\max}}{u_{\min}} \cdot \frac{\pi}{\sqrt{k}}$, let $\lambda = \frac{e^{2f} \dot{\psi}}{\psi}$. Then a simple calculation shows that \[ \dot{\lambda} = - \frac{\lambda^2}{u^2} -ku^2\] The Riccati comparison applied as in the proof of Theorem \ref{DiamIntro} then gives the result. \end{proof} This index estimate gives the following generalization of a sphere theorem of Berger \cite{Berger1} which is Theorem 33 in \cite{Petersen}. \begin{theorem} \label{BergerHomotopy} If a compact Riemannian manifold has $\overline{\sec}_f \geq k$ and \[ \mathrm{inj}(M,g) \geq \frac{u_{\max}}{u_{\min}} \frac{\pi}{2\sqrt{k}} \] then $M$ is a homotopy sphere.
\end{theorem} \begin{proof} Under the hypothesis, every geodesic loop $\gamma$ such that $\gamma(0) = \gamma(l)= p$ must have length greater than or equal to $2\,\mathrm{inj}(M,g) \geq \frac{u_{max}}{u_{min}} \frac{\pi}{\sqrt{k}}$. Then Lemma \ref{lemmaindex} implies that every geodesic in $\Omega_{p,p}$ has index greater than or equal to $(n-1)$. This then implies that $M$ is $(n-1)$-connected and thus a homotopy sphere; see Theorems 32 and 33 of \cite{Petersen} along with Theorem 2.5.16 of \cite{Kling}. \end{proof} This shows that the key to proving a sphere theorem is to prove an injectivity radius estimate. In the even-dimensional case an injectivity radius estimate follows from Theorem \ref{Conjugate Radius} and Lemma \ref{Shorter}. \begin{theorem} Suppose that $M$ is a compact even-dimensional simply connected manifold such that $0 < \overline{\sec}_f\leq L$ then $\mathrm{inj}(M,g) \geq \frac{u_{min}}{u_{max}} \frac{\pi}{\sqrt{L}}. $ \end{theorem} \begin{proof} Suppose that $\mathrm{inj}(M,g)< \frac{u_{min}}{u_{max}} \frac{\pi}{\sqrt{L}}$. Then from Theorem \ref{Conjugate Radius} the conjugate radius is larger than the injectivity radius. This tells us that there is a closed geodesic of length $2\,\mathrm{inj}(M,g)$. From the proof of Synge's theorem, when the manifold is orientable and even-dimensional it is possible to construct a parallel field along the geodesic and from Lemma \ref{Shorter} there is a variation which decreases the length of this closed curve. However, it is possible to show that this leads to conjugate points a smaller distance apart, a contradiction; see the proof of Theorem 30 of \cite{Petersen}. \end{proof} The odd-dimensional case is more difficult; there the injectivity radius estimate is due to Klingenberg in the classical case. However, from what we have already proved, Klingenberg's arguments carry over to the weighted setting. First we have the homotopy lemma.
\begin{lemma} [Klingenberg's homotopy lemma] Suppose that a Riemannian manifold $(M,g)$ has the property that no geodesic segment of length less than $\pi$ contains a conjugate point. Suppose that $p, q \in M$ are such that $p$ and $q$ are joined by two distinct geodesics $\gamma_0$ and $\gamma_1$ which are homotopic. Then there exists a curve $\alpha_{t_0}$ in the homotopy such that \[ \mathrm{length}(\alpha_{t_0}) \geq 2\pi - \mathrm{min}\{ \mathrm{length}(\gamma_i)\} \] \end{lemma} \begin{proof} This is usually stated with the conjugate point estimate replaced with the condition $\mathrm{sec} \leq 1$. However, as is pointed out in 2.6.5 of \cite{Kling}, the lemma holds with the same proof in this generality. \end{proof} Now we can prove the injectivity radius estimate in all dimensions. \begin{theorem} \label{InjEst} Suppose that $(M,g,f)$ is complete, simply connected, and satisfies \[ \frac{1}{4}\left (\frac{u_{max}}{u_{min}} \right)^2 < L \leq \overline{\sec}_f \leq \left(\frac{u_{min}}{u_{max}} \right)^2\] then $\mathrm{inj}(M,g)\geq \pi$. \end{theorem} \begin{proof} Since $\overline{\sec}_f \leq \left(\frac{u_{\min}}{u_{\max}} \right)^2$, Theorem \ref{Conjugate Radius} shows that the conjugate radius is greater than or equal to $\pi$, so that we can apply the homotopy lemma. On the other hand, from Lemma \ref{lemmaindex}, $ \overline{\sec}_f > \frac{1}{4}\left (\frac{u_{max}}{u_{min}} \right)^2$ implies that any geodesic of length at least $2\pi$ has index greater than or equal to $2$. These are the only two elements about curvature used in the proof of the injectivity radius estimate of Klingenberg, see for example the proof of Proposition 3.1 on page 276 of \cite{doC}. \end{proof} The proof of Theorem \ref{PinchingIntro} now follows by combining Theorem \ref{InjEst} with Theorem \ref{BergerHomotopy}, showing the manifold is a homotopy sphere.
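To see that the two results fit together as claimed, one can check the constants directly; this computation is ours:

```latex
% Under the pinching hypothesis \overline{sec}_f \geq k with
% k > \frac14 (u_{max}/u_{min})^2, the injectivity radius bound required by
% the Berger-type sphere theorem satisfies
\frac{u_{\max}}{u_{\min}} \cdot \frac{\pi}{2\sqrt{k}}
  \;<\; \frac{u_{\max}}{u_{\min}} \cdot
        \frac{\pi}{2 \cdot \frac12 \frac{u_{\max}}{u_{\min}}}
  \;=\; \pi \;\leq\; \mathrm{inj}(M,g),
% so the injectivity radius estimate supplies exactly the hypothesis of the
% Berger-type homotopy sphere theorem.
```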
\section{Killing Fields} In this section we augment the previous considerations involving Jacobi fields and the second variation of energy formula by showing that the weighted sectional curvatures also come up naturally in formulas for Killing fields. Recall that for a Killing field $V$ on a Riemannian manifold $(M,g)$ we have the following. \begin{eqnarray} \label{KF1} \frac12 \nabla \left( |V|^2\right) & =& - \nabla_V V \\ \label{KF2} \frac12 \mathrm{Hess}\left (|V|^2\right) (Y,Y) &=& |\nabla_Y V|^2 - R(Y,V,V,Y) \end{eqnarray} Now suppose we have a smooth manifold with smooth density $(M,g, f)$ and consider the function \[ h= \frac{1}{2}e^{-2f} |V|^2. \] Then we have the following formulas. \begin{lemma} Let $Y$ be a tangent vector. Then \begin{eqnarray*} \nabla h &=& -e^{-2f}\left( \nabla_V V + |V|^2 \nabla f \right)\\ \mathrm{Hess} h(Y,Y) &=&- 2df\otimes dh(Y,Y)+ |\nabla_Y (e^{-f} V)|^2\\ && \quad - e^{-2f} \left( R(V,Y,Y,V) + |V|^2 \mathrm{Hess} f(Y,Y) + |V|^2df(Y)^2 \right) \end{eqnarray*} \end{lemma} \begin{proof} For the first equation, from the product rule we have \begin{eqnarray*} dh = e^{-2f}\left(- |V|^2 df + d\left( \frac12 |V|^2\right) \right), \end{eqnarray*} so that \begin{eqnarray*} dh(Y) = -e^{-2f}\left( g(\nabla_V V, Y) + g(\nabla f, Y)|V|^2 \right). \end{eqnarray*} Differentiating this equation then gives us \begin{eqnarray*} \mathrm{Hess} h = e^{-2f}\left( 2|V|^2 df \otimes df -2 d\left( \frac12 |V|^2\right) \otimes df - 2 df \otimes d\left( \frac{1}{2} |V|^2\right) - |V|^2 \mathrm{Hess} f + \mathrm{Hess} \left(\frac12 |V|^2\right) \right). \end{eqnarray*} Plugging in (\ref{KF1}) and (\ref{KF2}) gives us \begin{eqnarray*} \mathrm{Hess} h(Y,Y) &=&e^{-2f} \left( 2 |V|^2 g(\nabla f, Y)^2 + 4 g(\nabla f, Y)g(\nabla_V V, Y) + |\nabla_Y V|^2 \right. \\ && \qquad \left. 
- R(Y,V,V,Y) - |V|^2 \mathrm{Hess} f(Y,Y) \right) \\ &=& |\nabla_Y (e^{-f} V)|^2 - e^{-2f}( R(Y,V,V,Y) + |V|^2 \mathrm{Hess} f(Y,Y)) \\ && + e^{-2f} (|V|^2g(\nabla f, Y)^2 + 2 g(\nabla f, Y)g(\nabla_V V, Y)). \end{eqnarray*} Then we also have \begin{eqnarray*} df\otimes dh(Y,Y) &=& -e^{-2f} g(\nabla f, Y) \left( g(\nabla_V V, Y) + g(\nabla f, Y)|V|^2 \right)\\ &=& -e^{-2f}\left( g(\nabla f, Y) g(\nabla_V V, Y) + g(\nabla f, Y)^2|V|^2\right), \end{eqnarray*} which tells us that \[ 2e^{-2f} g(\nabla f, Y)g(\nabla_V V, Y) = - 2df\otimes dh(Y,Y) - 2e^{-2f}g(\nabla f, Y)^2|V|^2. \] Plugging this into the original formula gives \begin{eqnarray*} &&\mathrm{Hess} h(Y,Y) + 2df\otimes dh(Y,Y) = \\ && \quad |\nabla_Y (e^{-f} V)|^2 - e^{-2f} \left( R(Y,V,V,Y) + |V|^2 \mathrm{Hess} f(Y,Y) + |V|^2g(\nabla f, Y)^2 \right). \end{eqnarray*} \end{proof} \begin{theorem} Suppose $(M,g)$ is a compact even dimensional manifold. If there is a function $f$ such that $ \overline{\mathrm{sec}}_f > 0$, then every Killing field has a zero. \end{theorem} \begin{proof} The argument is by contradiction. If there is a Killing field $V$ which does not have a zero, then the function $h$ has a non-zero minimum at a point $p$. At $p$, we then have $dh=0$, which implies from the previous lemma that \[ g(\nabla_V V, Y) = - g(\nabla f, Y)|V|^2 \qquad \forall Y \in T_pM. \] In particular, setting $Y=V$ and using the skew-symmetry of $\nabla V$ we obtain $g(\nabla f, V) = 0$ at $p$. Consider the skew-symmetric endomorphism $A : T_pM \rightarrow T_pM $ given by \[ A(w) = \nabla_w V + g(w, V) \nabla f - g(w, \nabla f) V. \] Then, using that $V \perp \nabla f$ at $p$, we can see that $V|_p$ is in the null space of $A$, as \[ A(V|_p) = (\nabla_V V + |V|^2 \nabla f)|_p = -e^{2f}\nabla h|_p = 0. \] Since the dimension of the manifold is even, the skew-symmetric map $A$ has another zero eigenvector which is perpendicular to $V$; call it $w$. 
Then we have \[ 0 = A(w) = \nabla_w V - g(w, \nabla f) V, \] which implies that \[ \nabla_w(e^{-f} V) = e^{-f} A(w) = 0. \] Plugging this into the formula for the Hessian of $h$ in the previous lemma, and using that $dh|_p = 0$, gives us \begin{eqnarray*} \mathrm{Hess} h(w,w)&=& - e^{-2f} \left( R(w,V,V,w) + |V|^2 \mathrm{Hess} f(w,w) + |V|^2g(\nabla f, w)^2 \right). \end{eqnarray*} The assumption $\overline{\mathrm{sec}}_f > 0$ then shows that $\mathrm{Hess} h (w,w)<0$, contradicting the fact that $h$ has a minimum at $p$. \end{proof} \begin{bibdiv} \begin{biblist} \bib{BE}{article}{ author={Bakry, D.}, author={{\'E}mery, Michel}, title={Diffusions hypercontractives}, language={French}, conference={ title={S\'eminaire de probabilit\'es, XIX, 1983/84}, }, book={ series={Lecture Notes in Math.}, volume={1123}, publisher={Springer}, place={Berlin}, }, date={1985}, pages={177--206}, } \bib{Berger1}{article}{ author={Berger, Marcel}, title={Sur certaines vari\'et\'es riemanniennes \`a courbure positive}, language={French}, journal={C. R. Acad. Sci. Paris}, volume={247}, date={1958}, pages={1165--1168}, } \bib{Berger2}{article}{ author={Berger, M.}, title={Les vari\'et\'es Riemanniennes $(1/4)$-pinc\'ees}, language={French}, journal={Ann. Scuola Norm. Sup. Pisa (3)}, volume={14}, date={1960}, pages={161--170}, } \bib{BergerKilling}{article}{ author={Berger, Marcel}, title={Trois remarques sur les vari\'et\'es riemanniennes \`a courbure positive}, journal={C. R. Acad. Sci. Paris S\'er. A-B}, volume={263}, date={1966}, pages={A76--A78}, } \bib{Brinkmann}{article}{ author={Brinkmann, H. W.}, title={Einstein spaces which are mapped conformally on each other}, journal={Math. Ann.}, volume={94}, date={1925}, number={1}, pages={119--145}, } \bib{CGY1}{article}{ author={Chang, Sun-Yung A.}, author={Gursky, Matthew J.}, author={Yang, Paul}, title={Conformal invariants associated to a measure}, journal={Proc. Natl. Acad. Sci. 
USA}, volume={103}, date={2006}, number={8}, pages={2535--2540}, } \bib{CGY2}{article}{ author={Chang, Sun-Yung A.}, author={Gursky, Matthew J.}, author={Yang, Paul}, title={Conformal invariants associated to a measure: conformally covariant operators}, journal={Pacific J. Math.}, volume={253}, date={2011}, number={1}, pages={37--56}, } \bib{Cetc}{article}{ author={Corwin, I.}, author={Hoffman, N.}, author={Hurder, S.}, author={Sesum, V.}, author={Xu, Y.}, title={Differential geometry of manifolds with density}, journal={Rose-Hulman Und. Math. J.}, volume={7}, date={2006}, number={1}, note={article 2}, } \bib{CM}{article}{ author={Corwin, Ivan}, author={Morgan, Frank}, title={The Gauss-Bonnet formula on surfaces with densities}, journal={Involve}, volume={4}, date={2011}, number={2}, pages={199--202}, } \bib{CS}{article}{ author={Croke, Christopher B.}, author={Schroeder, Viktor}, title={The fundamental group of compact manifolds without conjugate points}, journal={Comment. Math. Helv.}, volume={61}, date={1986}, number={1}, pages={161--175}, } \bib{doC}{book}{ author={do Carmo, Manfredo Perdig{\~a}o}, title={Riemannian geometry}, series={Mathematics: Theory \& Applications}, note={Translated from the second Portuguese edition by Francis Flaherty}, publisher={Birkh\"auser Boston Inc.}, place={Boston, MA}, date={1992}, } \bib{Hamilton}{article}{ author={Hamilton, Richard S.}, title={The Ricci flow on surfaces}, conference={ title={Mathematics and general relativity}, address={Santa Cruz, CA}, date={1986}, }, book={ series={Contemp. Math.}, volume={71}, publisher={Amer. Math. Soc., Providence, RI}, }, date={1988}, pages={237--262}, } \bib{JW}{article}{ author={Jauregui, Jeffrey L.}, author={Wylie, William}, title={Conformal diffeomorphisms of gradient Ricci solitons and generalized quasi-Einstein Manifolds}, journal={J. Geom. 
Anal.}, volume={25}, number={1}, date={2015}, pages={668-708} } \bib{KolMil}{article}{ author ={Kolesnikov, Alexander V.}, author ={Milman, Emanuel}, title={Poincar\'e and Brunn-Minkowski inequalities on weighted Riemannian manifolds with boundary}, note={ arXiv:1310.2526} } \bib{KennardWylie}{article}{ author={Kennard, Lee}, author={Wylie, William}, title={Positive weighted sectional curvature}, note={arXiv:1410.1558} } \bib{Kling61}{article}{ author={Klingenberg, Wilhelm}, title={\"Uber Riemannsche Mannigfaltigkeiten mit positiver Kr\"ummung}, language={German}, journal={Comment. Math. Helv.}, volume={35}, date={1961}, pages={47--54}, } \bib{Kling}{book}{ author={Klingenberg, Wilhelm}, title={Riemannian geometry}, series={de Gruyter Studies in Mathematics}, volume={1}, publisher={Walter de Gruyter \& Co.}, place={Berlin}, date={1982}, } \bib{KN}{book}{ author={Kobayashi, Shoshichi}, author={Nomizu, Katsumi}, title={Foundations of differential geometry. Vol. I}, series={Wiley Classics Library}, note={Reprint of the 1963 original}, publisher={John Wiley \& Sons Inc.}, place={New York}, date={1996}, } \bib{KR}{article}{ author={K{\"u}hnel, Wolfgang}, author={Rademacher, Hans-Bert}, title={Einstein spaces with a conformal group}, journal={Results Math.}, volume={56}, date={2009}, number={1-4}, pages={421--444} } \bib{Lich1}{article}{ author={Lichnerowicz, Andr{\'e}}, title={Vari\'et\'es riemanniennes \`a tenseur C non n\'egatif}, language={French}, journal={C. R. Acad. Sci. Paris S\'er. A-B}, volume={271}, date={1970}, pages={A650--A653}, } \bib{Lich2}{article}{ author={Lichnerowicz, Andr{\'e}}, title={Vari\'et\'es k\"ahl\'eriennes \`a premi\`ere classe de Chern non negative et vari\'et\'es riemanniennes \`a courbure de Ricci g\'en\'eralis\'ee non negative}, language={French}, journal={J. 
Differential Geometry}, volume={6}, date={1971/72}, pages={47--94}, } \bib{Lott}{article}{ author={Lott, John}, title={Some geometric properties of the Bakry-\'Emery-Ricci tensor}, journal={Comment. Math. Helv.}, volume={78}, date={2003}, number={4}, pages={865--883}, } \bib{Lott2}{article}{ author={Lott, John}, title={Remark about scalar curvature and Riemannian submersions}, journal={Proc. Amer. Math. Soc.}, volume={135}, date={2007}, number={10}, pages={3375--3381}, } \bib{Mil}{article}{ author={Milman, Emanuel}, title={Beyond traditional Curvature-Dimension I: new model spaces for isoperimetric and concentration inequalities in negative dimension}, note={arXiv:1409.4109} } \bib{MW}{article}{ author={Munteanu, Ovidiu}, author={Wang, Jiaping}, title={Analysis of weighted Laplacian and applications to Ricci solitons}, journal={Comm. Anal. Geom.}, volume={20}, date={2012}, number={1}, pages={55--94} } \bib{Morgan}{article}{ author={Morgan, Frank}, title={Manifolds with density}, journal={Notices Amer. Math. Soc.}, volume={52}, date={2005}, number={8}, pages={853--858}, } \bib{Morgan2}{article}{ author={Morgan, Frank}, title={Myers' theorem with density}, journal={Kodai Math. J.}, volume={29}, date={2006}, number={3}, pages={455--461}, } \bib{MorganBook}{book}{ author={Morgan, Frank}, title={Geometric measure theory}, edition={4}, publisher={Elsevier/Academic Press, Amsterdam}, date={2009}, pages={viii+249}, } \bib{Morgan3}{article}{ author={Morgan, Frank}, title={Manifolds with density and Perelman's proof of the Poincar\'e conjecture}, journal={Amer. Math. Monthly}, volume={116}, date={2009}, number={2}, pages={134--142} } \bib{Ohta}{article}{ author={Ohta, Shin-ichi}, title={$(K,N)$-convexity and the curvature-dimension condition for negative $N$}, note={arXiv:1310.7993}, } \bib{OS}{article}{ author={Osgood, Brad}, author={Stowe, Dennis}, title={The Schwarzian derivative and conformal mapping of Riemannian manifolds}, journal={Duke Math. 
J.}, volume={67}, date={1992}, number={1}, pages={57--99} } \bib{Per}{article}{ author={Perelman, G.}, title={The entropy formula for the {R}icci flow and its geometric applications}, note={arXiv:math.DG/0211159}, } \bib{Petersen}{book}{ author={Petersen, Peter}, title={Riemannian geometry}, series={Graduate Texts in Mathematics}, volume={171}, edition={2}, publisher={Springer}, place={New York}, date={2006}, } \bib{Qian}{article}{ author={Qian, Zhongmin}, title={Estimates for weighted volumes and applications}, journal={Quart. J. Math. Oxford Ser. (2)}, volume={48}, date={1997}, number={190}, pages={235--242}, } \bib{Synge}{article}{ author={Synge, J.L.}, title={On the connectivity of spaces of positive curvature}, journal={Quart. J. Math.}, volume={7}, date={1936}, pages={316--320}, } \bib{Tashiro}{article}{ author={Tashiro, Yoshihiro}, title={Complete Riemannian manifolds and some vector fields}, journal={Trans. Amer. Math. Soc.}, volume={117}, date={1965}, pages={251--275} } \bib{WW}{article}{ author={Wei, Guofang}, author={Wylie, Will}, title={Comparison geometry for the Bakry-Emery Ricci tensor}, journal={J. Differential Geom.}, volume={83}, date={2009}, number={2}, pages={377--405}, } \bib{Wylie}{article}{ author={Wylie, William}, title={Some curvature pinching results for Riemannian manifolds with density}, note={In preparation}, } \end{biblist} \end{bibdiv} \end{document}
https://arxiv.org/abs/0810.1425
Hodge polynomials and birational types of moduli spaces of coherent systems on elliptic curves
In this paper we consider moduli spaces of coherent systems on an elliptic curve. We compute their Hodge polynomials and determine their birational types in some cases. Moreover we prove that certain moduli spaces of coherent systems are isomorphic. This last result uses the Fourier-Mukai transform of coherent systems introduced by Hernández Ruiperez and Tejero Prieto.
\section{Introduction} \noindent A {\it coherent system of type $(n,d,k)$} on a smooth projective curve $C$ over an algebraically closed field is by definition a pair $(E,V)$ consisting of a vector bundle $E$ of rank $n$ and degree $d$ over $C$ and a vector subspace $V \subset H^0(E)$ of dimension $k$. For any real number $\alpha$, the {\it $\alpha$-slope} of a coherent system $(E,V)$ of type $(n,d,k)$ is defined by $$ \mu_{\alpha}(E,V) := \frac{d}{n} + \alpha \frac{k}{n}. $$ A {\it coherent subsystem} of $(E,V)$ is a coherent system $(E',V')$ such that $E'$ is a subbundle of $E$ and $V' \subset V \cap H^0(E')$. A coherent system $(E,V)$ is called {\it $\alpha$-stable} ({\it $\alpha$-semistable}) if $$ \mu_{\alpha}(E',V') < \mu_{\alpha}(E,V) \ \ (\mu_{\alpha}(E',V') \le \mu_{\alpha}(E,V)) $$ for every proper coherent subsystem $(E',V')$ of $(E,V)$. The $\alpha$-stable coherent systems of type $(n,d,k)$ on $C$ form a quasi-projective moduli space which we denote by $G(\alpha;n,d,k)$. In a previous paper \cite{ln} we studied the case of an elliptic curve $C$, determined precisely when $G(\alpha;n,d,k)$ is non-empty and showed that in this case it is always smooth and irreducible of the expected dimension $$ \beta(d,k) := k(d-k) + 1. $$ In this paper we consider again the case where $C$ is elliptic and assume for convenience that the base field is $\mathbb C$. After summarizing some properties of the Hodge polynomial $\epsilon_X(u,v)$ of a quasi-projective variety $X$ in section 2, we investigate first the spaces $G_0(n,d,k)$, defined by $G_0(n,d,k) := G(\alpha;n,d,k)$ for small positive $\alpha$. When $\mbox{gcd}(n,d) = 1$ we prove in particular that for fixed $d$ and $k$ the birational type (Corollary \ref{cor2.2}) and the Hodge polynomial (Corollary \ref{cor2.3}) of $G_0(n,d,k)$ are independent of the rank $n$. 
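To illustrate how $\alpha$-stability depends on $\alpha$, here is an elementary computation; the type $(2,3,1)$ and the line subbundle below are hypothetical choices made for illustration, not data from this paper.

```latex
% A hypothetical example of the alpha-slope changing the stability condition.
% Take (E,V) of type (n,d,k) = (2,3,1) and a line subbundle L of E of
% degree 1 whose space of sections contains the section generating V.
\[
\mu_{\alpha}(E,V) \;=\; \frac{3}{2} + \frac{\alpha}{2},
\qquad
\mu_{\alpha}\bigl(L, V\cap H^0(L)\bigr) \;=\; 1 + \alpha .
\]
% This subsystem violates the stability inequality precisely when
\[
1 + \alpha \;\geq\; \frac{3}{2} + \frac{\alpha}{2}
\quad\Longleftrightarrow\quad
\alpha \;\geq\; 1 ,
\]
% so (E,V) can be alpha-stable for alpha < 1 yet alpha-unstable for
% alpha >= 1, which is why the moduli spaces vary with alpha.
```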
We also give an alternative proof of a special case of the main result of \cite{ht} saying that $G_0(n',d,k) \simeq G_0(n,d,k)$ when $n' \equiv n \mod d$ (see Corollary \ref{cor2.4}). Moreover we show\\ \noindent {\bf Proposition \ref{prop2.6}.} {\it If $\emph{\mbox{gcd}}(n,d) = \emph{\mbox{gcd}}(n',d) = 1$ and $n' \not \equiv n \mod d$, then $G_0(n',d,1) \not \simeq G_0(n,d,1)$ and $G_{0}(n',d,d-1) \not \simeq G_{0}(n,d,d-1)$.}\\ In section 4 we show that if $d$ and $\mbox{gcd}(n,d)$ are fixed, the birational type of $G_0(n,d,1)$ is independent of $n$. This improves \cite[Theorem 5.2]{ht} in the case $k=1$. When $\mbox{gcd}(n,d) = 2$, we describe in section 5 a stratification of $G_0(n,d,1)$ and use it to calculate its Hodge polynomial. We obtain similar results for the corresponding moduli spaces $G_0(n,N,1)$ with fixed determinant $N$.\\ \noindent {\bf Theorem \ref{thm4.9}.}\\ (a) $\epsilon_{G_0(n,d,1)}(u,v) =$ $$ = \frac{(1+u)(1+v)(1-(uv)^{\frac{d}{2}})}{(1-uv)^2(1+uv)}[(u+v)(uv -(uv)^{\frac{d}{2}}) + (1+uv)(1-(uv)^{\frac{d}{2}+1})]; $$ (b) $\epsilon_{G_0(n,N,1)}(u,v) =$ $$ = \frac{1-(uv)^{\frac{d}{2}}}{(1-uv)^2(1+uv)}[(u+v)(uv -(uv)^{\frac{d}{2}}) + (1+uv)(1-(uv)^{\frac{d}{2}+1})]. $$ \vspace*{0.3cm} In section 6 we allow the parameter $\alpha$ to vary and compute the Hodge polynomial of $G(\alpha;2+ad,d,1)$. We recall that there are only finitely many distinct moduli spaces as $\alpha$ varies, usually labelled $G_i := G_i(2+ad,d,1)$.\\ \noindent {\bf Theorem \ref{thm5.6}.} {\it For $i = 0, \ldots, L$ we have}\\ $ \displaystyle{\epsilon_{G_i}(u,v) = (1+u)(1+v)\frac{1-(uv)^d}{1-uv} \;+}$ $$ + \frac{(1+u)^2(1+v)^2(1-(uv)^{\frac{d-\gamma}{2} -i})}{(1-uv)^2(1-(uv)^2)} (uv - (uv)^{\gamma + 2i})(1 - (uv)^{\frac{d-\gamma}{2}-i+1}), $$ {\it where $\gamma$ is $1$ if $d$ is odd and $2$ if $d$ is even.} \vspace*{0.3cm} We note that these Hodge polynomials are independent of $a$. 
When $i=0$ it is in fact known (see \cite{ht}) that the spaces $G_0(2+ad,d,1)$ are all isomorphic for fixed $d$. Our theorem provides evidence that this result may extend to arbitrary $i$ and we prove this in section 7. In section 8 we investigate further the birational type of $G(\alpha;n,d,k)$. We show that, if $\gcd(n,d)=1$ and $k\le d$ or $\mbox{gcd}(n,d) = 2$ and $k=1$ or $\gcd(n-k,d)=1$ and $k<\min(d,n)$, the variety is birational to $\mathbb P^{k(d-k)} \times C$ (Propositions \ref{prop8.1}, \ref{prop6.1}, \ref{prop8.3}). Finally, if $k<d$, $\gcd(n,d)=h>1$ and $S^hC$ denotes the $h$-fold symmetric product of $C$, we prove \\ \noindent{\bf Theorem \ref{th8.4}.} {\it For all $\alpha$ for which $G(\alpha;n,d,k)\ne\emptyset$,\\ \emph{(1)} $G(\alpha;n,N,k)$ is birational to a variety $Y_N$, where $Y_N$ is fibred over $\mathbb P^{h-1}$ with general fibre unirational.\\ \emph{(2)} $G(\alpha;n,d,k)$ is birational to a variety $Y$, where $Y$ is fibred over $S^hC$ with general fibre unirational.} \\ We are grateful to the referee for a careful reading of the paper. \section{Hodge polynomials} We recall the basic properties of Hodge polynomials as defined by Deligne in \cite{d}. For any quasi-projective variety $X$ over the field of complex numbers, Deligne defined a mixed Hodge structure on the cohomology groups $H^k_c(X,\mathbb C)$ with compact support with associated Hodge polynomial $\epsilon_X(u,v)$. When $X$ is a smooth projective variety, we have $$ \epsilon_X(u,v) = \sum_{p,q} h^{p,q}(X)u^pv^q, $$ where $h^{p,q}(X)$ are the usual Hodge numbers. In particular, in this case $\epsilon_X(u,u)$ is the usual Poincar\'e polynomial $P_X(u)$. We need only the following properties of the Hodge polynomials (see \cite{d} and \cite[Theorem 2.2 and Lemmas 2.3 and 2.4]{mov}). \begin{itemize} \item If $X$ is a finite disjoint union $X = \sqcup_i X_i$ of locally closed subvarieties $X_i$, then $$ \epsilon_X = \sum_{i} \epsilon_{X_i}. 
$$ \item If $Y \rightarrow X$ is an algebraic fibre bundle with fibre $F$ which is locally trivial in the Zariski topology, then $$ \epsilon_Y = \epsilon_F \cdot \epsilon_X. $$ \item If $Y \rightarrow X$ is a map between quasi-projective varieties which is a locally trivial fibre bundle in the complex topology with fibres projective spaces $F = \mathbb P^N$ for some $N > 0$, then $$ \epsilon_Y = \epsilon_F \cdot \epsilon_X. $$ \end{itemize} Moreover we need the Hodge polynomials of the Grassmannians. In fact, \begin{equation} \label{eqn5.1} \epsilon_{{\rm Gr}(r,N)}(u,v) = \frac{(1 - (uv)^{N-r+1})(1- (uv)^{N-r+2}) \cdots (1- (uv)^{N})}{(1-uv)(1-(uv)^2) \cdots (1-(uv)^r)}. \end{equation} \section{$G_0(n,d,k)$ for coprime $n$ and $d$} Let $C$ be an elliptic curve defined over $\mathbb C$ and suppose that $n,d,k$ are integers with $n\ge2$, $1\le k\le d$. Let $G_0(n,d,k)$ denote the moduli space of coherent systems of type $(n,d,k)$ which are $\alpha$-stable for small positive $\alpha$ (we call such systems $0^+$-{\it stable}). Similarly for any line bundle $N$ of degree $d$ on $C$ let $G_0(n,N,k)$ denote the corresponding moduli space with fixed determinant $N$. In this section we assume $\mbox{gcd}(n,d) = 1$. Then there exists a Poincar\'e bundle $\mathcal{U}$ on $C \times C$. It has the property that $\mathcal{U}|_{C \times \{E\}} \simeq E$ for all stable bundles $E$ of rank $n$ and degree $d$, where we identify the moduli space of stable bundles of type $(n,d)$ with the curve $C$ in the usual way (see \cite{at} and \cite{tu}). Let $p_i$ denote the $i$-th projection of $C \times C$. \begin{prop} \label{prop2.1} If $\emph{\mbox{gcd}}(n,d) = 1$, then\\ \emph{(1)} $G_0(n,N,k)$ is isomorphic to the Grassmannian ${\rm Gr}(k,d)$;\\ \emph{(2)} $G_0(n,d,k)$ is a ${\rm Gr}(k,d)$-bundle over $C$. \end{prop} \begin{proof} (1) For $\mbox{gcd}(n,d) = 1$ any semistable vector bundle of rank $n$ and degree $d$ on $C$ is stable. 
For a fixed $N \in \mbox{Pic}^{d}(C)$ there is a unique stable bundle $E$ of rank $n$ and determinant $N$ (this follows from \cite[Theorem 7]{at}). Then $(E,V)$ belongs to $G_0(n,N,k)$ for any $k$-dimensional subspace $V$ of $H^0(E)$. (2) From part (1) we conclude that $G_0(n,d,k)$ can be identified with the Grassmannian bundle of $k$-planes in the fibres of the rank-$d$ vector bundle $p_{2*}\mathcal{U}$ on $C$. \end{proof} \begin{cor} \label{cor2.2} The birational type of $G_0(n,d,k)$ is independent of $n$ provided $\emph{\mbox{gcd}}(n,d) = 1$. \end{cor} \begin{proof} Observe that the Gr$(k,d)$-bundle over $C$ of the proposition is Zariski locally trivial. \end{proof} This improves the statement in \cite{ht} that for fixed $d$ and $k$ there are at most $d$ different birational types of varieties $G(\alpha;n,d,k)$, since we have by \cite[Theorem 4.4 (ii)]{ln} that the birational type is independent of $\alpha$ in this case. Moreover we conclude that the Hodge polynomial of $G_0(n,d,k)$ is given by \begin{cor} \label{cor2.3} Suppose $\emph{\mbox{gcd}}(n,d) = 1$. Then $$ \epsilon_{G_0}(u,v) = \frac{(1 - (uv)^{d-k+1})(1-(uv)^{d-k+2}) \cdots (1-(uv)^d)} {(1-uv)(1-(uv)^2) \cdots (1-(uv)^k)} (1+u)(1+v). $$ \begin{flushright} $\square$ \end{flushright} \end{cor} \begin{lem} \label{lem2.3} $p_{2*}\mathcal{U}$ is a stable vector bundle of rank $d$ on $C$. \end{lem} \begin{proof} According to \cite{ht}, $\mathcal{U}$ is the kernel of a Fourier-Mukai transform $\Phi_{\mathcal{U}}$ on $C \times C$. Hence $p_{2*}\mathcal{U} = \Phi_{\mathcal{U}}(\mathcal{O}_C)$ is stable by \cite[Proposition 2.8 and Remark 2.9]{ht}. \end{proof} The following corollary is a special case of the main result of \cite{ht}. \begin{cor} \label{cor2.4} If $\emph{\mbox{gcd}}(n,d) = 1$ and $n' \equiv n \mod d$, then $G_0(n',d,k)$ is isomorphic to $G_0(n,d,k)$. \end{cor} \begin{proof} Let $\mathcal{U}'$ be a Poincar\'e bundle for $(n',d)$ on $C \times C$. 
Then by \cite[Proposition 7.2]{ln}, $$ c_1(p_{2*}\mathcal{U}) = s[C] \quad \mbox{and} \quad c_1(p_{2*}\mathcal{U}') = s'[C] $$ where $sn \equiv s'n' \equiv -1 \mod d$. Since $n' \equiv n \mod d$, it follows that $s \equiv s' \mod d$. From Lemma \ref{lem2.3} and the classification of stable bundles on an elliptic curve we conclude that $$ p_{2*} \mathcal{U} \simeq p_{2*} \mathcal{U}' \otimes M $$ with $M \in \mbox{Pic}(C)$. Hence $P(p_{2*}\mathcal{U}) \simeq P(p_{2*} \mathcal{U}')$ and the same holds for the Grassmannian fibrations. Now the assertion follows from the description of $G_0(n,d,k)$ in the proof of Proposition \ref{prop2.1}. \end{proof} Now suppose $\mbox{gcd}(n',d) =1$ and $n' \not \equiv n \mod d$. Then in the above argument $s' \not \equiv s \mod d$. So the projective bundles $P(p_{2*} \mathcal{U})$ and $P(p_{2*} \mathcal{U}')$ are not isomorphic as projective bundles and therefore also not isomorphic as varieties by the argument in \cite[Proposition 8.2]{ln}. This implies \begin{prop} \label{prop2.6} If $\emph{\mbox{gcd}}(n,d) = \emph{\mbox{gcd}}(n',d) = 1$ and $n' \not \equiv n \mod d$, then $G_0(n',d,1) \not \simeq G_0(n,d,1)$ and $G_{0}(n',d,d-1) \not \simeq G_{0}(n,d,d-1)$. $\square$ \end{prop} \section{$G_0(n,d,1)$ for arbitrary $(n,d)$} In this section we determine the birational type of the moduli space $G_0(n,d,1)$ for arbitrary $n$ and $d$. Set $h := \mbox{gcd}(n,d)$. Then any semistable vector bundle $E$ of rank $n$ and degree $d$ on $C$ is of the form $E = E_1 \oplus \cdots \oplus E_{\ell}$ with $E_i$ indecomposable of slope $\frac{d}{n}$ and $\ell \leq h$. Each $E_i$ is semistable and is the unique indecomposable multiple extension of $\frac{\mathrm{rk}\, E_i}{n/h}$ copies of a stable bundle $F$ of rank $\frac{n}{h}$ and degree $\frac{d}{h}$. 
Moreover a generic such $E$ has the form \begin{equation} \label{eqn1} E = F_1 \oplus \cdots \oplus F_h \end{equation} with all $F_i$ stable and non-isomorphic of rank $\frac{n}{h}$ and degree $\frac{d}{h}$ (see \cite{tu}). A generic $(E,V) \in G_0(n,d,1)$ has $E= F_1 \oplus \cdots \oplus F_h$ as a bundle and $V$ is a one-dimensional subspace of $H^0(E)$ whose projection to each $F_i$ is still one-dimensional. Denote by $G_0^1$ the open set of coherent systems in $G_0(n,d,1)$ which are generic in this sense. So we get a $(\times_{i=1}^h \mathbb P^{\frac{d}{h}-1})$-bundle over $\times_{i=1}^h C \;\setminus \;\Delta$ where $$ \Delta := \{(x_1, \cdots , x_h) \in \times_{i=1}^h C\; | \; x_i = x_j \; \mbox{for some} \; i \neq j \}. $$ To be more precise, we think of $\times_{i=1}^h C$ as the set of $h$-tuples of stable bundles of rank $\frac{n}{h}$ and degree $\frac{d}{h}$. If $\mathcal{U}$ denotes a Poincar\'e bundle on $C \times C$ of rank $\frac{n}{h}$ and degree $\frac{d}{h}$ and $p_2$ the second projection of $C \times C$, denote $$ \mathcal{F} := \times_{i=1}^h P(p_{2*} \mathcal{U})|_{\times_{i=1}^h C \,\setminus \,\Delta}. $$ With these notations the following proposition is obvious. \begin{prop} \label{prop3.1} There is a natural action of the symmetric group $S(h)$ on $\mathcal{F}$ such that the quotient is isomorphic to the open subset $G_0^1$ of $G_0(n,d,1)$, $$ G_0^1 \simeq \mathcal{F}/S(h). $$ \begin{flushright} $\square$ \end{flushright} \end{prop} From this we conclude \begin{prop} \label{prop3.2} The birational type of $G_0(n,d,1)$ is independent of $n$ provided $d$ and $\emph{\mbox{gcd}}(n,d)$ are fixed. \end{prop} \begin{proof} The sheaf $p_{2*}\mathcal{U}$ is locally trivial over $C$. 
This implies that there is an open dense subset $U$ of $C$ such that $$ \mathcal{F}|_{\times_{i=1}^h U \;\setminus \,\Delta_U} \simeq \times_{i=1}^h(U \times \mathbb P^{\frac{d}{h}-1})|_ {\times_{i=1}^h U \; \setminus \, \Delta_U}, $$ where $\Delta_U:=\Delta\cap\times^h_{i=1}U$. Moreover the action of the group $S(h)$ on this is given by permuting the factors. This implies the assertion. \end{proof} The following corollary improves \cite[Theorem 5.2]{ht} slightly in the case $k=1$. \begin{cor} Suppose $d = \prod_{i=1}^r p_i^{a_i}$ is the prime decomposition of $d$. Then there are at most $\prod_{i=1}^r (a_i + 1)$ birational types of varieties $G(\alpha;n,d,1)$. \end{cor} \begin{proof} According to \cite[Theorem 4.4 (ii)]{ln} the birational type of the variety $G(\alpha;n,d,1)$ does not depend on $\alpha$. Hence Proposition \ref{prop3.2} implies that the number of birational types of $G(\alpha;n,d,1)$ is at most the number of divisors of $d$. \end{proof} Fix $N$ in $\mbox{Pic}^d(C)$ and define $$ G_0^1(N) = G_0^1 \cap G_0(n,N,1). $$ If $(E,V) \in G_0^1(N)$, then $E = F_1 \oplus \cdots \oplus F_h$ as in \eqref{eqn1} and $\otimes_{i=1}^h \det F_i = N$. Regarding $\times_{i=1}^h C$ as the set of $h$-tuples of stable bundles $(F_1, \ldots , F_h)$ of rank $\frac{n}{h}$ and degree $\frac{d}{h}$, the subset $$ C_N := \{(F_1, \ldots, F_h) \in \times_{i=1}^h C \;|\; \otimes_{i=1}^h \det F_i = N \} $$ is isomorphic to $\times_{i=1}^{h-1} C$. The variety $$ \mathcal{F}_N := \mathcal{F}|_{C_N \setminus (\Delta \cap C_N)} $$ is a $(\times_{i=1}^h \mathbb P^{\frac{d}{h}-1})$-fibration over $C_N \setminus(\Delta \cap C_N)$. With this notation we have \begin{prop} \label{prop3.4} There is a natural action of the group $S(h)$ on $\mathcal{F}_N$ such that the quotient is isomorphic to the open subset $G_0^1(N)$ of the variety $G_0(n,N,1)$, $$ G_0^1(N) \simeq \mathcal{F}_N/S(h). 
$$ \end{prop} \begin{proof} The natural actions of $S(h)$ on the varieties $\times_{i=1}^h C$ and $\mathcal{F}$ restrict to actions on $C_N$ and $\mathcal{F}_N$ respectively. Moreover $\Delta \cap C_N = \{ (F_1,\ldots,F_h) \in C_N \;|\; F_i \simeq F_j \;\mbox{for some}\; i \neq j \}$. The result now follows. \end{proof} \section{$G_0(n,d,1)$ with $\mbox{gcd}(n,d)=2$} \subsection{The set up} Now suppose $\mbox{gcd}(n,d) =2$ and consider the moduli space $G_0(n,d,1)$ as well as its subspace $G_0(n,N,1)$. Then there are no stable vector bundles of rank $n$ and degree $d$. Hence, if $(E,V) \in G_0(n,d,1)$, then $E$ is of one of the following types \begin{enumerate} \item $E = F_1 \oplus F_2$ with $F_1,F_2$ stable of rank $\frac{n}{2}$, degree $\frac{d}{2}$ and $F_1 \not \simeq F_2$; \item there is a nontrivial exact sequence $0 \rightarrow F \rightarrow E \rightarrow F \rightarrow 0$ with stable $F$; \item $E = F \oplus F$ with stable $F$. \end{enumerate} The coherent systems of type (1) form the open set $G_0^1$ (respectively $G_0^1(N)$) of the previous section in the moduli space $G_0(n,d,1)$ (respectively $G_0(n,N,1)$). The coherent systems of type (2) form a locally closed subset $G_0^2$ (respectively $G_0^2(N)$) in $G_0(n,d,1)$ (respectively $G_0(n,N,1)$), whose boundary is the closed subset $G_0^3$ (respectively $G_0^3(N)$) of coherent systems of type (3) in $G_0(n,d,1)$ (respectively $G_0(n,N,1)$). Moreover we have stratifications \begin{equation} \label{eq1} G_0(n,d,1) = \sqcup_{i=1}^3 G_0^i \quad \mbox{and} \quad G_0(n,N,1) = \sqcup_{i=1}^3 G_0^i(N). 
\end{equation} \subsection{The spaces $G_0^1$ and $G_0^1(N)$} According to Proposition \ref{prop3.1} there is a natural action of the group $\mathbb Z_2$ on the variety $\mathcal{F}$ such that $$ G_0^1 \simeq \mathcal{F}/\mathbb Z_2, $$ where $\mathcal{F} = P \times P \setminus (p \times p)^{-1}(\Delta)$; here $p: P \rightarrow C$ is the projection of a $\mathbb P^{\frac{d}{2}-1}$-bundle over the curve $C$ and $\Delta \subset C \times C$ the diagonal. This allows us to compute the Hodge polynomial of $G_0^1$. \begin{prop} \label{prop4.1} \quad \newline $\epsilon_{G_0^1}(u,v) = \frac{(1+u)(1+v)(1-(uv)^{\frac{d}{2}})}{(1-uv)^2(1+uv)}\{(u+v)(uv-(uv)^{\frac{d}{2}}) +uv(1 -(uv)^{\frac{d}{2}+1}) \}.$ \end{prop} \begin{proof} The above description of $\mathcal{F}$ gives the following exact sequence of $(+1)$-eigenspaces for the action of $\mathbb Z_2$ on the $i$-th cohomology with compact support: $$ 0 \rightarrow H^i_c(\mathcal{F})_+ \rightarrow H^i(P \times P)_+ \rightarrow H^i((p \times p)^{-1}(\Delta))_+ \rightarrow 0. $$ This implies $$ \epsilon_{G_0^1}(u,v) = \epsilon_{\mathcal{F}}(u,v)_+ = \epsilon_{P \times P}(u,v)_+ - \epsilon_{(p \times p)^{-1}(\Delta)}(u,v)_+. $$ Since $\epsilon_{P}(u,v) = \frac{(1+u)(1+v)(1-(uv)^{\frac{d}{2}})}{1-uv}$, \cite[Lemma 2.6]{mov} implies\\ $$ \begin{array}{rll} \epsilon_{P \times P}(u,v)_+ &= & \frac{1}{2} [\epsilon_{P}(u,v)^2 + \epsilon_{P}(-u^2,-v^2)]\\ & = & \frac{1}{2}\{\frac{(1+u)^2(1+v)^2(1-(uv)^{\frac{d}{2}})^2}{(1-uv)^2} + \frac{(1-u^2)(1-v^2)(1-(uv)^d)}{1-(uv)^2}\}. \end{array} $$ Now $(p \times p)^{-1}(\Delta)$ is a $(\mathbb P^{\frac{d}{2}-1} \times \mathbb P^{\frac{d}{2}-1})$-fibration over $\Delta$ and $\mathbb Z_2$ acts on it by swapping the two $\mathbb P^{\frac{d}{2}-1}$'s. 
So, again using \cite[Lemma 2.6]{mov}, $$ \begin{array}{rcl} \epsilon_{(p \times p)^{-1} (\Delta)}(u,v)_+ & = &\epsilon_{\Delta}(u,v) \epsilon_{\mathbb P^{\frac{d}{2}-1} \times \mathbb P^{\frac{d}{2}-1}}(u,v)_+\\ & = &\frac{1}{2}(1+u)(1+v)\{ \frac{(1-(uv)^{\frac{d}{2}})^2}{(1-uv)^2} + \frac{1-(uv)^d}{1-(uv)^2} \}. \end{array} $$ Subtracting and simplifying we get the assertion. \end{proof} Now fix $N$ in $\mbox{Pic}^d(C)$. Then $C_N \subset C \times C$ and $\Delta\cap C_N$ consists of 4 points (given by the square roots of $N$). Moreover $ \Delta\cap C_N$ is the fixed point set for the action of $S(2) = \mathbb Z_2$ on $C_N$ and $C_N/\mathbb Z_2$ is isomorphic to $\mathbb P^1$. Here Proposition \ref{prop3.4} gives \begin{prop} \label{prop4.2} There is a natural action of the group $\mathbb Z_2$ on $\mathcal{F}_N$ such that the quotient is isomorphic to the open subset $G_0^1(N)$ of the variety $G_0(n,N,1)$, $$ G_0^1(N) \simeq \mathcal{F}_N/\mathbb Z_2. $$ \begin{flushright} $\square$ \end{flushright} \end{prop} Proposition \ref{prop4.2} allows us to compute the Hodge polynomial of $G_0^1(N)$. \begin{prop} \label{prop4.3} $$ \epsilon_{G_0^1(N)}(u,v) = \frac{1-(uv)^{\frac{d}{2}}}{(1-uv)^2(1+uv)}[(1-(uv)^{\frac{d}{2} + 1})(uv - 3) + (uv - (uv)^{\frac{d}{2}})(u+v)]. 
$$ \end{prop} \begin{proof} From Proposition \ref{prop4.2} we conclude \begin{eqnarray*} \epsilon_{G_0^1(N)}(u,v) & = & \epsilon_{\mathcal{F}_N}(u,v)_+\\ &=& \epsilon_{C_N \setminus(\Delta \cap C_N)}(u,v)_+ \cdot \epsilon_{\mathbb P^{\frac{d}{2}-1} \times \mathbb P^{\frac{d}{2}-1}}(u,v)_+ \\ && \hspace*{1cm} + \epsilon_{C_N \setminus(\Delta \cap C_N)}(u,v)_- \cdot \epsilon_{\mathbb P^{\frac{d}{2}-1} \times \mathbb P^{\frac{d}{2}-1}}(u,v)_- \end{eqnarray*} Now $\epsilon_{C_N \setminus(\Delta \cap C_N)}(u,v) = (1+u)(1+v) - 4$ and $$ \epsilon_{C_N \setminus(\Delta \cap C_N)}(u,v)_+ = \epsilon_{\mathbb P^1 \setminus \{4 \; {\rm points}\}}(u,v) = uv - 3; $$ hence $$ \epsilon_{C_N \setminus(\Delta \cap C_N)}(u,v)_- = u + v. $$ Using \cite[Lemma 2.6]{mov} we get \begin{eqnarray*} \epsilon_{\mathbb P^{\frac{d}{2}-1} \times \mathbb P^{\frac{d}{2}-1}}(u,v)_+ &=& \frac{1}{2}\{ \epsilon_{\mathbb P^{\frac{d}{2}-1}}(u,v)^2 + \epsilon_{\mathbb P^{\frac{d}{2}-1}}(-u^2,-v^2) \}\\ &=& \frac{1}{2} \left\{ \frac{(1 - (uv)^{\frac{d}{2}})^2}{(1-uv)^2} + \frac{1-(uv)^d}{1-(uv)^2} \right\}\\ &=& \frac{(1-(uv)^{\frac{d}{2}})(1 - (uv)^{\frac{d}{2} +1})}{(1-uv)^2(1+uv)} \end{eqnarray*} and similarly $$ \epsilon_{\mathbb P^{\frac{d}{2}-1}\times \mathbb P^{\frac{d}{2}-1}}(u,v)_- = \frac{(1-(uv)^{\frac{d}{2}})(uv - (uv)^{\frac{d}{2}})}{(1-uv)^2(1+uv)}. $$ Inserting these expressions into the above formula for $\epsilon_{G_0^1(N)}(u,v)$ and simplifying we get the assertion. \end{proof} \subsection{The spaces $G_0^2$ and $G_0^2(N)$} Recall that $(E,V) \in G_0(n,d,1)$ is of type (2) if it admits a non-trivial exact sequence $0 \rightarrow F \rightarrow E \rightarrow F \rightarrow 0$ with a stable vector bundle $F$. The bundle $F$ is uniquely determined by $E$ and called the stable bundle associated to $E$. Conversely $E$ is uniquely determined by $F$, since $\dim \mbox{Ext}^1(F,F) = h^1(End(F)) = h^0(End(F)) = 1$. 
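The $\pm$-eigenspace formulas for $\mathbb P^{\frac{d}{2}-1} \times \mathbb P^{\frac{d}{2}-1}$ obtained above via \cite[Lemma 2.6]{mov} can be double-checked numerically. The following illustrative sketch (not part of the proofs; the function names are ours) evaluates both sides in exact rational arithmetic, writing $k = \frac{d}{2}$:

```python
from fractions import Fraction

def eps_proj(u, v, k):
    """Hodge polynomial of P^{k-1}: (1 - (uv)^k) / (1 - uv)."""
    t = u * v
    return (1 - t**k) / (1 - t)

def plus_part(u, v, k):
    """(+1)-eigenspace of the swap action on P^{k-1} x P^{k-1},
    via [mov, Lemma 2.6]: (eps(u,v)^2 + eps(-u^2,-v^2)) / 2."""
    return (eps_proj(u, v, k)**2 + eps_proj(-u**2, -v**2, k)) / 2

def minus_part(u, v, k):
    """(-1)-eigenspace: (eps(u,v)^2 - eps(-u^2,-v^2)) / 2."""
    return (eps_proj(u, v, k)**2 - eps_proj(-u**2, -v**2, k)) / 2

def closed_plus(u, v, k):
    """Closed form of the (+1)-part from the proof of Proposition 4.3."""
    t = u * v
    return (1 - t**k) * (1 - t**(k + 1)) / ((1 - t)**2 * (1 + t))

def closed_minus(u, v, k):
    """Closed form of the (-1)-part from the proof of Proposition 4.3."""
    t = u * v
    return (1 - t**k) * (t - t**k) / ((1 - t)**2 * (1 + t))

# exact rational spot checks at several points and several k = d/2
for u in (Fraction(2), Fraction(3), Fraction(-2)):
    for v in (Fraction(3), Fraction(5)):
        for k in (2, 3, 4):
            assert plus_part(u, v, k) == closed_plus(u, v, k)
            assert minus_part(u, v, k) == closed_minus(u, v, k)
print("eigenspace closed forms confirmed at all sampled points")
```

Such spot checks do not replace the algebraic simplification, but they catch transcription errors in the rational-function identities cheaply.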
\begin{lem}\label{new} Fix a stable bundle $F$ of rank $\frac{n}{2}$ and degree $\frac{d}{2}$. The variety of coherent systems $(E,V)$ of type (2) with associated stable bundle $F$ is isomorphic to an $\mathbb A^{\frac{d}{2}-1}$-fibration over the projective space $\mathbb P^{\frac{d}{2}-1}$. \end{lem} \begin{proof} A coherent system $(E,V)$ with $\det E = (\det F)^2$ is of type (2) if and only if there is a non-trivial exact sequence $0 \rightarrow F \rightarrow E \stackrel{p}{\rightarrow} F \rightarrow 0$ such that $V \subset H^0(E)$ is a line which projects to a line $\ell$ in $H^0(F)$. (Note that, if $H^0(p)(V)=0$, the subsystem $(F,V)$ would contradict the $0^+$-stability of $(E,V)$.) Via the exact sequence the line $\ell$ is contained in $H^0(E): \; \ell \subset H^0(F) \subset H^0(E)$. So we end up with a plane in $H^0(E)$ containing $\ell$ and projecting to $\ell$. Considering lines as points in the corresponding projective spaces, we have $\ell \in P(H^0(F))$ and $V \in P(H^0(p)^{-1}(\ell)) \subset P(H^0(E))$. Hence the coherent systems $(E,V)$ of type (2) associated to the bundle $F$ with a fixed $\ell \in P(H^0(F))$ correspond to the points of $P(H^0(p)^{-1}(\ell)) \setminus P(H^0(F))$. Since $\mbox{Aut} E = \{ \lambda \cdot id_E \;|\; \lambda \in \mathbb C^* \} \times \mbox{End} F$, two coherent systems of this form are isomorphic if and only if the corresponding points of $P(H^0(p)^{-1}(\ell)) \setminus P(H^0(F))$ lie in the same line through $\ell$. Now choose a hyperplane $H$ in $P(H^0(p)^{-1}(\ell))$ not containing the point $\ell$. Then we can identify the isomorphism classes of pairs $(E,V)$ with fixed $F$, $E$ and $H^0(p)(V) = \ell$ with the points of $H \setminus (H \cap P(H^0(F)))$. Since $P(H^0(p)^{-1}(\ell))$ is of dimension $\frac{d}{2}$, this is an affine space $\mathbb A^{\frac{d}{2}-1}$ of dimension $\frac{d}{2} -1$. Since $\ell$ varies in the projective space $P(H^0(F))$, this completes the proof of the lemma. 
\end{proof} \begin{prop} \label{prop4.5} \emph{(a)} The variety $G_0^2$ is isomorphic to an $\mathbb A^{\frac{d}{2}-1}$-fibration over a $\mathbb P^{\frac{d}{2}-1}$-bundle over the curve $C$.\\ \emph{(b)} \quad $$\epsilon_{G_0^2}(u,v) = (1+u)(1+v)(uv)^{\frac{d}{2}-1}\frac{1-(uv)^{\frac{d}{2}}}{1-uv}.$$ \end{prop} \begin{proof} (a): Identifying $C$ with the moduli space of stable bundles of rank $\frac{n}{2}$ and degree $\frac{d}{2}$, the natural map $G_0^2 \rightarrow C$ is a morphism whose fibres are isomorphic to $\mathbb A^{\frac{d}{2}-1}$-fibrations over $\mathbb P^{\frac{d}{2}-1}$ by Lemma \ref{new}.\\ (b): Since any $\mathbb A^{\frac{d}{2} - 1}$-fibration over a projective variety is locally trivial in the complex topology, we conclude from the third property of Hodge polynomials in section 2 that $$ \epsilon_{G_0^2}(u,v) = \epsilon_C(u,v) \cdot \epsilon_{\mathbb A^{\frac{d}{2}-1}}(u,v) \cdot \epsilon_{\mathbb P^{\frac{d}{2} - 1}}(u,v), $$ which gives the assertion, since $\epsilon_{\mathbb A^{\frac{d}{2}-1}}(u,v) = (uv)^{\frac{d}{2}-1}$. \end{proof} \begin{prop} \label{prop4.6} \emph{(a)} The variety $G_0^2(N)$ is isomorphic to the disjoint union of four $\mathbb A^{\frac{d}{2} - 1}$-fibrations over $\mathbb P^{\frac{d}{2}-1}$.\\ \emph{(b)} $$ \epsilon_{G_0^2(N)}(u,v) = 4 \frac{(uv)^{\frac{d}{2}-1}(1-(uv)^{\frac{d}{2}})}{1-uv}. $$ \end{prop} \begin{proof} (a) follows from the fact that there are exactly 4 vector bundles $E$ providing coherent systems $(E,V)$ in $G_0^2(N)$, since $\mbox{Pic}^0(C)$ has exactly 4 two-division points. The proof of (b) is analogous to the proof of Proposition \ref{prop4.5} (b). \end{proof} \subsection{The spaces $G_0^3$ and $G_0^3(N)$} \begin{prop} \label{prop4.7} \emph{(a)} The variety $G_0^3$ is isomorphic to a ${\rm Gr}(2,\frac{d}{2})$-fibration over the curve $C$.\\ \emph{(b)} $$ \epsilon_{G_0^3}(u,v) = (1+u)(1+v)\frac{1-(uv)^{\frac{d}{2}}}{(1-uv)^2(1+uv)}(1-(uv)^{\frac{d}{2}-1}). 
$$ \end{prop} \begin{proof} Let $(E,V)\in G_0^3$. Then $E = F \oplus F$ for some stable bundle $F$ and $V$ is generated by a section $\sigma = (\sigma_1,\sigma_2)$ with $\sigma_1, \sigma_2 \in H^0(F)$. Moreover $\sigma_1$, $\sigma_2$ are linearly independent (otherwise there would exist a subsystem $(F,V)$ of $(E,V)$, contradicting the $0^+$-stability of $(E,V)$). Hence giving a line $V$ in $H^0(E)$ is equivalent to giving a two-dimensional subspace of $H^0(F)$. Identifying the space of stable bundles of rank $\frac{n}{2}$ and degree $\frac{d}{2}$ with the curve $C$ gives assertion (a). For the proof of (b) we use the fact that any Gr$(2,\frac{d}{2})$-fibration over the curve $C$ is locally trivial in the Zariski topology. \end{proof} \begin{prop} \label{prop4.8} \emph{(a)} The variety $G_0^3(N)$ is isomorphic to the disjoint union of four Grassmannians ${\rm Gr}(2,H^0(F))$: $$ G_0^3(N) \simeq \sqcup_{(\det F)^2 = N} {\rm Gr}(2,H^0(F)). $$ \emph{(b)} $$ \epsilon_{G_0^3(N)}(u,v) = 4 \frac{1-(uv)^{\frac{d}{2}}}{(1-uv)^2(1+uv)}(1-(uv)^{\frac{d}{2}-1}). $$ \end{prop} \begin{proof} The proof is the same as the proof of the previous proposition using the fact that the line bundle $N$ admits exactly 4 square roots. \end{proof} \subsection{The Hodge polynomials of $G_0(n,d,1)$ and $G_0(n,N,1)$} The Hodge polynomial is additive on disjoint unions. Hence, by equation \eqref{eq1}, adding the formulas of Propositions \ref{prop4.1}, \ref{prop4.5} and \ref{prop4.7} (respectively Propositions \ref{prop4.3}, \ref{prop4.6} and \ref{prop4.8}) and simplifying, we conclude: \begin{theorem} \label{thm4.9} \emph{(a)}\\ $\epsilon_{G_0(n,d,1)}(u,v) =$ $$ = \frac{(1+u)(1+v)(1-(uv)^{\frac{d}{2}})}{(1-uv)^2(1+uv)}[(u+v)(uv -(uv)^{\frac{d}{2}}) + (1+uv)(1-(uv)^{\frac{d}{2}+1})]; $$ \emph{(b)}\\ $\epsilon_{G_0(n,N,1)}(u,v) =$ $$ = \frac{1-(uv)^{\frac{d}{2}}}{(1-uv)^2(1+uv)}[(u+v)(uv -(uv)^{\frac{d}{2}}) + (1+uv)(1-(uv)^{\frac{d}{2}+1})].
$$ \begin{flushright} $\square$ \end{flushright} \end{theorem} \begin{rem} \emph{From Theorem \ref{thm4.9} we see that} $$ \epsilon_{G_0(n,d,1)} = \epsilon_{G_0(n,N,1)} \cdot \epsilon_C. $$ \emph{However the morphism} $$ G_0(n,d,1) \rightarrow C, \quad (E,V) \mapsto \det E, $$ \emph{whose fibre over} $N \in C$ \emph{is} $G_0(n,N,1)$, \emph{is not Zariski locally trivial. This follows from the fact that} $\epsilon_{G_0^1} \neq \epsilon_{G_0^1(N)} \cdot \epsilon_C$ \emph{(see Propositions \ref{prop4.1} and \ref{prop4.3})}. \end{rem} \section{The Hodge polynomial of $G(\alpha;2+ad,d,1)$} As outlined in \cite[Section 6]{ln} and described in more generality in \cite{bgn}, the moduli spaces $G(\alpha;2+ad,d,1)$ are modified at certain critical values $\alpha = \alpha_i$ ($1\le i\le L$) which in this case are given by the following lemma. \begin{lem} \label{lemma5.1} The critical values for coherent systems of type $(2+ad,d,1)$ are given by $$ \alpha_i = \frac{d-2d_1}{1+a(d-d_1)} $$ where $d_1 = [\frac{d-1}{2}] -i+1$ and $0 < i \leq [\frac{d-1}{2}]$. \end{lem} \begin{proof} According to \cite[section 6]{ln} we have $$ \alpha_i = \frac{n_1d_2-n_2d_1}{n_2} $$ where $n_1,n_2,d_1,d_2$ are positive integers such that $n_1+n_2 = 2 +ad$ and $d_1+d_2= d$. Moreover $\frac{d_1}{n_1} < \frac{d_2}{n_2}$ and $$ 0< \alpha_i < \frac{d}{1+ad}. $$ After substituting $n_2 = 2 + ad -n_1$ and $d_2 = d-d_1$, we get $$ \alpha_i = \frac{n_1d-d_1(2+ad)}{2+ad-n_1}. $$ So $\alpha_i > 0$ is equivalent to $$ n_1 > \frac{2d_1}{d} + d_1a $$ and $\alpha_i < \frac{d}{1+ad}$ is equivalent to $(n_1d-d_1(2+ad))(1+ad)<d(2+ad-n_1)$, which simplifies to $$ n_1 < \frac{d_1}{d} + d_1a + 1. $$ So the only possible value for $n_1$ is \begin{equation} \label{eqn3} n_1 = d_1a + 1 \end{equation} and this satisfies the inequalities if and only if $d_1 < \frac{d}{2}$. Writing \begin{equation} \label{eqn4} i = \left[ \frac{d-1}{2} \right] - d_1 + 1, \end{equation} we obtain the $\alpha_i$ in increasing order of magnitude.
\end{proof} The moduli spaces $G(\alpha;2+ad,d,1)$ for $\alpha_i < \alpha < \alpha_{i+1}$ are denoted by $G_i$. Note that $$ \alpha_L = \frac{d-2}{1+a(d-1)} $$ and $G(\alpha;2+ad,d,1) = \emptyset$ for $\alpha \geq \frac{d}{1+ad}$. So $G_L$ denotes $G(\alpha;2+ad,d,1)$ for $\alpha_L < \alpha < \frac{d}{1+ad}$. According to \cite[Remark 5.5]{bgn} the moduli space $G_L$ is isomorphic to a $\mathbb P^{d-1}$-bundle over the curve $C$. Hence \begin{equation} \label{eq2} \epsilon_{G_L}(u,v) = (1+u)(1+v)\frac{1-(uv)^d}{1-uv} \end{equation} and for any line bundle $N$ of degree $d$, \begin{equation} \label{eq3} \epsilon_{G_L(N)}(u,v) = \frac{1-(uv)^d}{1-uv}. \end{equation} The modifications at $\alpha_i$ are given as follows. There are closed subvarieties $G_i^+$ of $G_i$ and $G_i^-$ of $G_{i-1}$ such that \begin{equation} \label{eq4} G_i \setminus G_i^+ \simeq G_{i-1} \setminus G_i^-. \end{equation} These subvarieties are called {\it flip loci} and are described as follows. The variety $G_i^+$ is given by coherent systems $(E,V)$ which are non-trivial extensions \begin{equation} \label{eq5} 0 \rightarrow (F_2,0) \rightarrow (E,V) \rightarrow (F_1,V_1) \rightarrow 0 \end{equation} where $F_2$ is a stable bundle of rank $n_2$ and degree $d_2$ and $(F_1,V_1)$ belongs to the moduli space $G(\alpha_i^+;n_1,d_1,1)$ of coherent systems of type $(n_1,d_1,1)$ which are $\alpha$-stable for $\alpha$ slightly greater than $\alpha_i$. Note that by \eqref{eqn3}, $$ n_1 = d_1a +1, \quad \quad n_2 = d_2a + 1. $$ Hence $\mbox{gcd}(n_2,d_2) = 1$. So the moduli space of the bundles $F_2$ can be identified with the curve $C$. \begin{lem} \label{lem5.2} There are no critical values for coherent systems of type $(1+ad_1,d_1,1)$. \end{lem} \begin{proof} According to \cite[section 6]{ln} any critical value is of the form $$ \alpha = \frac{m(d_1-e)-(ad_1+1-m)e}{ad_1+1-m} $$ with $0 < m < ad_1+1$ and $ 0 < e < d_1$. Moreover $$ 0 < \alpha < \frac{d_1}{1+ad_1-1} = \frac{1}{a}.
$$ The first inequality is equivalent to $$ m > ae + \frac{e}{d_1} $$ and the second inequality is equivalent to $$ m < ae +1, $$ which gives a contradiction. \end{proof} \begin{rem} \emph{It follows from Proposition \ref{prop2.1} and Lemma \ref{lem5.2} that $G_L(1+ad_1,d_1,1)$ is a $\mathbb P^{d_1-1}$-bundle over $C$. This is by no means clear from the general structure theorem for $G_L$ (\cite[Theorem 5.4]{bgn})}. \end{rem} \begin{prop} \label{prop5.3} The flip locus $G_i^+$ is a $\mathbb P^{d_1 - 1}$-bundle over $C \times G_0(1+ad_1,d_1,1)$ and $G_0(1+ad_1,d_1,1)$ is a $\mathbb P^{d_1-1}$-bundle over $C$. \end{prop} \begin{proof} By Lemma \ref{lem5.2} the variety $G(\alpha_i^+;1+ad_1,d_1,1)$ is isomorphic to $G_0(1+ad_1,d_1,1)$ which according to Proposition \ref{prop2.1} is a $\mathbb P^{d_1 - 1}$-bundle over $C$. The result now follows from \eqref{eq5} provided that $$ \dim \mbox{Ext}^1((F_1,V_1),(F_2,0)) = d_1. $$ Now this dimension is given by $$ C_{12} + \mathbb H^0_{12} + \mathbb H^2_{12} $$ (see \cite[Proposition 3.2]{bgn}), where $\mathbb H^0_{12} = \mbox{Hom}((F_1,V_1),(F_2,0))$ and $\mathbb H^2_{12} = H^0(F_2^* \otimes N_1)^*$ with $N_1 = \mbox{Ker}(V_1 \otimes \mathcal{O}_C \rightarrow F_1)$. Since $(F_1,V_1)$ and $(F_2,0)$ are both $\alpha_i$-stable and non-isomorphic of the same $\alpha_i$-slope, we get $\mathbb H_{12}^0 = 0$. It is obvious that $N_1=0$ and hence $\mathbb H_{12}^2 = 0$. This completes the proof, since $C_{12} = d_1$ by \cite[equation (13)]{ln}. \end{proof} The variety $G_i^-$ is given by coherent systems $(E,V)$ which are nontrivial extensions \begin{equation} \label{eq6} 0 \rightarrow (F_1,V_1) \rightarrow (E,V) \rightarrow (F_2,0) \rightarrow 0 \end{equation} where $(F_1,V_1)$ and $F_2$ are as above. \begin{prop} \label{prop5.4} The flip locus $G_i^-$ is a $\mathbb P^{d - 2d_1 - 1}$-bundle over $C \times G_0(1+ad_1,d_1,1)$ and $G_0(1+ad_1,d_1,1)$ is a $\mathbb P^{d_1-1}$-bundle over $C$. 
\end{prop} \begin{proof} The proof is the same as the proof of Proposition \ref{prop5.3} with $C_{12}$ replaced by $C_{21}$, which is equal to $-d_1+d_2 = d - 2d_1$ according to \cite[equation (13)]{ln}. \end{proof} \begin{lem} \label{lem5.5} For every $i = 1, \ldots, L=\left[ \frac{d-1}{2} \right]$ we have $$ \epsilon_{G_{i-1}}(u,v) = \epsilon_{G_i}(u,v) + \frac{(1+u)^2(1+v)^2(1-(uv)^{d_1})}{(1-uv)^2} \{ (uv)^{d_1} - (uv)^{d-2d_1} \}, $$ where $d_1 = \left[ \frac{d-1}{2} \right] - i + 1$ as in \eqref{eqn4}. \end{lem} \begin{proof} According to \eqref{eq4} and Propositions \ref{prop5.3} and \ref{prop5.4} we have $$ \begin{array}{rcl} \epsilon_{G_{i-1}} - \epsilon_{G_i} & = & \epsilon_{G_i^-} - \epsilon_{G_i^+} \\ &=& (1+u)^2(1+v)^2 \frac{1-(uv)^{d_1}}{1-uv}\{ \frac{1-(uv)^{d-2d_1}}{1-uv} - \frac{1-(uv)^{d_1}}{1-uv} \}. \end{array} $$ This gives the assertion. \end{proof} \begin{theorem} \label{thm5.6} For $i = 0, \ldots, L$ we have\\ $ \displaystyle{\epsilon_{G_i}(u,v) = (1+u)(1+v)\frac{1-(uv)^d}{1-uv} \;+}$ $$ + \frac{(1+u)^2(1+v)^2(1-(uv)^{\frac{d-\gamma}{2} -i})}{(1-uv)^2(1-(uv)^2)} (uv - (uv)^{\gamma + 2i})(1 - (uv)^{\frac{d-\gamma}{2}-i+1}), $$ where $\gamma$ is $1$ if $d$ is odd and $2$ if $d$ is even. \end{theorem} \begin{proof} By Lemmas \ref{lemma5.1} and \ref{lem5.5} and downwards induction on $i$ we have $$ \begin{array}{rcl} \epsilon_{G_i} &=& \epsilon_{G_L} + \frac{(1+u)^2(1+v)^2}{(1-uv)^2} \sum_{d_1=1}^{\frac{d- \gamma}{2} -i}(1-(uv)^{d_1})((uv)^{d_1} - (uv)^{d-2d_1}).
\end{array} $$ Now the sum equals\\ $ \sum_{d_1=1}^{\frac{d- \gamma}{2} -i}[(uv)^{d_1} - (uv)^{2d_1} - (uv)^{d -2d_1} + (uv)^{d-d_1}] = \frac{uv(1-(uv)^{\frac{d-\gamma}{2}-i})}{1-uv} - \\ \hspace*{0.5cm} - \frac{(uv)^2(1-(uv)^{d-\gamma -2i})}{1-(uv)^2} - \frac{(uv)^{\gamma +2i}(1-(uv)^{d - \gamma -2i})}{1-(uv)^2} + \frac{(uv)^{\frac{d+\gamma}{2} +i}(1-(uv)^{\frac{d-\gamma}{2}-i})}{1-uv}\\ \hspace*{1.5cm} = \frac{1-(uv)^{\frac{d-\gamma}{2}-i}}{1-uv}(uv + (uv)^{\frac{d+\gamma}{2} +i}) - \frac{1-(uv)^{d-\gamma -2i}}{1-(uv)^2}((uv)^2 +(uv)^{\gamma + 2i})\\ \hspace*{1.5cm} = \frac{1-(uv)^{\frac{d-\gamma}{2} -i}}{1-(uv)^2} (uv - (uv)^{\gamma + 2i})(1 - (uv)^{\frac{d-\gamma}{2}-i+1}).\\ $ Together with \eqref{eq2} this gives the assertion. \end{proof} \begin{rem} \emph{Note that the formula of the theorem is independent of $a$. This is consistent with the main theorem of \cite{ht} which implies that the isomorphism class of $G_0(2+ad,d,1)$ is independent of $a$. Our result suggests that the isomorphism class of $G_i$ should be independent of $a$ and we prove this in the next section.} \end{rem} \begin{rem} \emph{When $i=0$ and $d$ is odd, the second term in the formula of Theorem \ref{thm5.6} is $0$, so we have $$ \epsilon_{G_0}(u,v)=(1+u)(1+v)\frac{1-(uv)^d}{1-uv} $$ in accordance with Proposition \ref{prop2.1}(2). For $i=0$ and $d$ even, it can easily be checked that the formula of Theorem \ref{thm5.6} agrees with Theorem \ref{thm4.9}(a).} \end{rem} For coherent systems of fixed determinant $N$ we can define moduli spaces $G_i(N)$ in the same way as we defined the moduli spaces $G_i$ at the beginning of this section. \begin{prop} \label{prop5.9} For $i = 0, \ldots, L$ we have\\ $ \displaystyle{\epsilon_{G_i(N)}(u,v) = \frac{1-(uv)^d}{1-uv} \;+}$ $$ + \frac{(1+u)(1+v)(1-(uv)^{\frac{d-\gamma}{2} -i})}{(1-uv)^2(1-(uv)^2)} (uv - (uv)^{\gamma + 2i})(1 - (uv)^{\frac{d-\gamma}{2}-i+1}).
$$ \end{prop} \begin{proof} Note that in \eqref{eq5} and \eqref{eq6} the bundle $F_2$ is uniquely determined by $F_1$ and the fact that $\det F_2 \simeq N \otimes ( \det F_1)^{-1}$. The rest of the proof of Theorem \ref{thm5.6} goes through exactly as before. \end{proof} \section{Isomorphism class of $G_i$} For any positive integer $a$, let $\Phi_a$ denote the Fourier-Mukai transform defined in \cite[section 3.2]{ht}. We use the results of the previous section in order to prove the following theorem. \begin{theorem} \label{thm6.1} The Fourier-Mukai transform $\Phi_a$ induces an isomorphism of moduli spaces $$ \Phi_a^0: G_i(2,d,1) \rightarrow G_i(2+ad,d,1) $$ for every $i$. \end{theorem} \begin{rem} \label{rem6.2} \emph{In the case $i=0$ the theorem is a special case of the main result of \cite{ht} and we use this result in the proof.} \end{rem} \begin{proof} The proof is by induction on $i$, the case $i=0$ being covered by Remark \ref{rem6.2}. Now suppose $1 \leq i \leq L$ and assume that $\Phi_a^0: G_{i-1}(2,d,1) \rightarrow G_{i-1}(2 +ad,d,1)$ is an isomorphism. We shall prove that the Fourier-Mukai transform $\Phi_a$ induces isomorphisms on $G_i^+(2,d,1)$ and $G_i^-(2,d,1)$ and this will complete the proof of the theorem. The elements of $G_i^+$ are given by the exact sequences \eqref{eq5}, where $(F_1,V_1)$ is contained in $G_0(1,d_1,1)$ and $F_2$ is a line bundle. According to \cite[Theorem 4.5]{ht} $\Phi_a$ induces an isomorphism of moduli spaces $\Phi_a^0: G_0(1,d_1,1) \rightarrow G_0(1+ad_1,d_1,1)$ and by \cite[Proposition 2.10 (1)]{ht} $\Phi_a^0(F_2)$ is stable of rank $1 + ad_2$ and degree $d_2$. Since $F_2$ is $\Phi_a$-$IT_0$ in the sense of \cite{ht}, we get an exact sequence \begin{equation} \label{eq11} 0 \rightarrow \Phi_a^0(F_2,0) \rightarrow \Phi_a^0(E,V) \rightarrow \Phi_a^0(F_1,V_1) \rightarrow 0. \end{equation} Moreover $\mbox{Ext}^1((F_1,V_1),(F_2,0)) \simeq \mbox{Ext}^1(\Phi_a^0(F_1,V_1),\Phi_a^0(F_2,0))$.
So we obtain a map \begin{equation} \label{eq12} \Phi_a^0: G_i^+(2,d,1) \rightarrow G_i^+(2+ad,d,1). \end{equation} Denote by $\Psi_a^1$ the Fourier-Mukai transform inverse to $\Phi_a^0$ as defined in \cite{ht}. It remains to show that \begin{equation} \label{eq13} \Psi_a^1: G_i^+(2+ad,d,1) \rightarrow G_i^+(2,d,1), \end{equation} since then by the results of \cite{ht} the map \eqref{eq13} is the inverse of \eqref{eq12}. So let $(E,V) \in G_i^+(2+ad,d,1)$. It is given by an exact sequence \eqref{eq5} with $(F_1,V_1) \in G(\alpha_i^+;1+ad_1,d_1,1)$ and $F_2$ stable of rank $1+ad_2$ and degree $d_2$. By Lemma \ref{lem5.2}, $(F_1,V_1) \in G_0(1+ad_1,d_1,1)$. Since $F_1$ and $F_2$ are both semistable of positive degree, they are $\Psi_a$-$IT_1$ in the sense of \cite{ht}. Hence by \cite[Proposition 3.7]{ht}, $\Psi_a^1(F_1,V_1)$ is a coherent system of type $(1,d_1,1)$ and $\Psi_a^1(F_2)$ is a line bundle. Moreover we have an exact sequence $$ 0 \rightarrow \Psi_a^1(F_2,0) \rightarrow \Psi_a^1(E,V) \rightarrow \Psi_a^1(F_1,V_1) \rightarrow 0. $$ It follows that $\Psi_a^1(E,V) \in G_i^+(2,d,1)$, which implies the assertion. This completes the proof for the varieties $G_i^+$. The proof for the $G_i^-$ is the same as for the $G_i^+$. \end{proof} \section{Birational type of $G(\alpha;n,d,k)$} For $\mbox{gcd}(n,d) = 1$ and any $k$ we determined in Proposition \ref{prop2.1} and Corollary \ref{cor2.2} the birational type of the variety $G_0(n,d,k)$. Since the birational type of $G(\alpha;n,d,k)$ (respectively $G(\alpha;n,N,k)$) is independent of $\alpha$ (see \cite[Theorem 4.4 (ii)]{ln}), this determines the birational type of all the moduli spaces $G(\alpha;n,d,k)$ (respectively $G(\alpha;n,N,k)$) in this case. To summarize, we have: \begin{prop}\label{prop8.1} If $\emph{\mbox{gcd}}(n,d) = 1$ and $k \leq d$, then $G(\alpha;n,N,k)$ is rational and $G(\alpha;n,d,k)$ is birational to $\mathbb P^{k(d-k)} \times C$.
\end{prop} \begin{proof} The only remaining thing to observe is that $G_0(n,d,k)$ and $G_0(n,N,k)$ are non-empty in this case \cite[Proposition 3.2]{ln}. \end{proof} Now suppose $\mbox{gcd}(n,d) = 2$ and $k = 1$. \begin{prop} \label{prop6.1} If $\emph{\mbox{gcd}}(n,d) = 2$, then $G(\alpha;n,N,1)$ is rational and $G(\alpha;n,d,1)$ is birational to $\mathbb P^{d-1} \times C$. \end{prop} \begin{proof} By Proposition \ref{prop3.2} we have that $G_0(n,d,1)$ is birational to $G_0(2,d,1)$. Since the birational type is independent of $\alpha$, $G_0(2,d,1)$ is birational to $G_L(2,d,1)$ which is a locally trivial $\mathbb P^{d-1}$-bundle over $C$ according to \cite[Remark 5.5]{bgn}. \end{proof} \begin{prop}\label{prop8.3} If $\emph{\mbox{gcd}}(n-k,d) = 1$ and $k < \min(d,n)$, then $G(\alpha;n,N,k)$ is rational and $G(\alpha;n,d,k)$ is birational to $\mathbb P^{k(d-k)} \times C$. \end{prop} \begin{proof} By \cite[Remark 5.5]{bgn} the moduli space $G_L(n,d,k)$ is a Gr$(k,d)$-fibration over $C$ which is Zariski locally trivial. Also $G_L(n,N,k) \simeq \mbox{Gr}(k,d)$. Now the result follows from the fact that the birational type is independent of $\alpha$. \end{proof} For general $(n,d,k)$ with $\mbox{gcd}(n,d) = h>1$ and $k < d$ we have morphisms $$ G_0(n,d,k) \rightarrow \widetilde{M}(n,d) \quad \mbox{and} \quad G_0(n,N,k) \rightarrow \widetilde{M}(n,N), $$ where $\widetilde{M}(n,d)$ is the moduli space of S-equivalence classes of semistable bundles of rank $n$ and degree $d$ on $C$ and $\widetilde{M}(n,N)$ the subvariety of $\widetilde{M}(n,d)$ with fixed determinant $N$. From \cite{tu} we have that $\widetilde{M}(n,N) \simeq \mathbb P^{h-1}$ and $\widetilde{M}(n,d) \simeq S^hC$.
With these notations, we have \begin{theorem}\label{th8.4} For all $\alpha$ for which $G(\alpha;n,d,k)\ne\emptyset$, \\ \emph{(1)} $G(\alpha;n,N,k)$ is birational to a variety $Y_N$, where $Y_N$ is fibred over $\mathbb P^{h-1}$ with general fibre unirational.\\ \emph{(2)} $G(\alpha;n,d,k)$ is birational to a variety $Y$, where $Y$ is fibred over $S^hC$ with general fibre unirational. \end{theorem} \begin{proof} The general point of $\widetilde{M}(n,d)$ is represented by a bundle of the form $F_1 \oplus \cdots \oplus F_h$, where the $F_i$ are non-isomorphic stable bundles of rank $\frac{n}{h}$ and degree $\frac{d}{h}$. To obtain $(E,V) \in G_0(n,d,k)$ we must choose a subspace $V$ of $H^0(E)$ of dimension $k$. Since $G_0(n,d,k) \neq \emptyset$ if $k < d$ (see \cite{ln}) and $\alpha$-stability is an open condition, the subspace $V$ must belong to a non-empty Zariski open subset of Gr$(k,H^0(E))$. The condition $(E,V) \simeq (E,V')$ means that $V$ and $V'$ are in the same orbit for the action of Aut$E$. A similar statement applies to $G_0(n,N,k)$. The result follows from the fact that the birational type is independent of $\alpha$. \end{proof} \begin{rem} \emph{As far as we know, it is possible that all the $G(\alpha;n,N,k)$ are rational varieties.} \end{rem}
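Returning to the stratification computation of Section 4, the "multiplying out and simplifying" step behind Theorem \ref{thm4.9}(a) can be spot-checked in exact rational arithmetic. The following sketch (our own naming, not part of the proofs) compares the sum of the three strata formulas with the stated closed form:

```python
from fractions import Fraction

def strata_sum(u, v, d):
    """Sum of the Hodge polynomials of the strata G_0^1, G_0^2, G_0^3
    (Propositions 4.1, 4.5(b) and 4.7(b)); d is even, k = d/2, t = uv."""
    t = u * v
    k = d // 2
    pre = (1 + u) * (1 + v) * (1 - t**k) / ((1 - t)**2 * (1 + t))
    e1 = pre * ((u + v) * (t - t**k) + t * (1 - t**(k + 1)))
    e2 = (1 + u) * (1 + v) * t**(k - 1) * (1 - t**k) / (1 - t)
    e3 = pre * (1 - t**(k - 1))
    return e1 + e2 + e3

def thm49a(u, v, d):
    """Closed form stated in Theorem 4.9(a)."""
    t = u * v
    k = d // 2
    pre = (1 + u) * (1 + v) * (1 - t**k) / ((1 - t)**2 * (1 + t))
    return pre * ((u + v) * (t - t**k) + (1 + t) * (1 - t**(k + 1)))

# compare both sides at several even degrees and rational points
for d in (4, 6, 10):
    for u in (Fraction(2), Fraction(5)):
        for v in (Fraction(3), Fraction(7)):
            assert strata_sum(u, v, d) == thm49a(u, v, d)
print("Theorem 4.9(a) matches the sum over the strata at all sampled points")
```

Since both sides are rational functions of bounded degree, agreement at sufficiently many points would even pin down the identity; here the checks are meant only as a quick safeguard against transcription errors.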
https://arxiv.org/abs/1211.7264
Macaulay-like marked bases
We define marked sets and bases over a quasi-stable ideal $\mathfrak j$ in a polynomial ring over a Noetherian $K$-algebra, with $K$ a field of any characteristic. The polynomials involved may be non-homogeneous, but their degree is bounded above by the maximum between the largest degree of a term in the Pommaret basis of $\mathfrak j$ and a given integer $m$. Due to the combinatorial properties of quasi-stable ideals, these bases behave well with respect to homogenization, similarly to Macaulay bases. We prove that the family of marked bases over a given quasi-stable ideal has an affine scheme structure, is flat and, for large enough $m$, is an open subset of a Hilbert scheme. Our main results lead to algorithms that explicitly construct such a family. We compare our method with similar ones and give some complexity results.
\section{Introduction} In this paper, we are interested in computing the family of ideals $\mathfrak i \subset R=K[x_1,\ldots,x_n]$ whose quotients $R/\mathfrak i$ share the same affine Hilbert polynomial and the same monomial $K$-vector basis, which we choose to be the sous-escalier $\mathcal N(\mathfrak j)$ of a strongly stable ideal $\mathfrak j\subset R$. A similar problem for homogeneous ideals has been studied in \cite{CR,BCLR,BLR}, but the non-homogeneous case is trickier. In fact, in the homogeneous case the condition on the quotient implies the one on the Hilbert polynomial, while this is no longer true for non-homogeneous ideals (see Example \ref{nonomognonbasta}). Furthermore, for a fixed strongly stable ideal $\mathfrak j$, the family of ideals $\mathfrak i$ such that $\mathcal N(\mathfrak j)$ is a basis for $R/\mathfrak i$ may depend on an infinite number of parameters. For instance, for $\mathfrak j = (x_2)\subset K[x_1,x_2]$ ($x_2>x_1$), the family of all ideals $\mathfrak i$ such that $K[x_1,x_2]/\mathfrak i$ is generated by $\mathcal N(\mathfrak j) = \{x_1^n \ : \ n \in \mathbb N\}$ depends on infinitely many parameters. We overcome these difficulties by means of the notion of a {\em $[\mathfrak j,m]$-marked basis}, for a fixed positive integer $m$ (see Definition \ref{baseaff}). Similarly to the homogeneous case, any ideal generated by a $[\mathfrak j,m]$-marked basis satisfies both of the above conditions and, for sufficiently large $m$, the converse is true. In fact, for any ideal $\mathfrak i$ generated by a $[\mathfrak j,m]$-marked basis, the quotient $R/\mathfrak i$ has the same affine Hilbert function as $R/\mathfrak j$ in degrees from $m$ on. The collection of all the ideals generated by a $[\mathfrak j,m]$-marked basis is the {\em $[\mathfrak j,m]$-marked family}.
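The example $\mathfrak j = (x_2)$ above can be made concrete with a small computational sketch (the helper names are ours, and this substitution shortcut is not the reduction procedure developed in this paper): for any choice of the polynomial $p(x_1)$, reduction modulo the marked polynomial $x_2 - p(x_1)$ rewrites every polynomial as a $K$-linear combination of the terms in $\mathcal N(\mathfrak j) = \{x_1^n\}$, and the coefficients of $p$ are exactly the infinitely many parameters of the family:

```python
def poly_mul(a, b):
    """Multiply univariate polynomials given as coefficient dicts {exp: coeff}."""
    out = {}
    for i, c in a.items():
        for j, d in b.items():
            out[i + j] = out.get(i + j, 0) + c * d
    return {e: c for e, c in out.items() if c != 0}

def poly_pow(a, n):
    """n-th power of a univariate polynomial (coefficient dict)."""
    out = {0: 1}
    for _ in range(n):
        out = poly_mul(out, a)
    return out

def normal_form(f, p):
    """Normal form of a bivariate polynomial f = {(i, j): c}  (term c*x1^i*x2^j)
    modulo the marked polynomial x2 - p(x1): substitute x2 -> p(x1).
    The result is supported on N(j) = {x1^n}, whatever p is."""
    out = {}
    for (i, j), c in f.items():
        for e, d in poly_pow(p, j).items():
            out[i + e] = out.get(i + e, 0) + c * d
    return {e: c for e, c in out.items() if c != 0}

# f = x1*x2 + x2^2 reduced modulo x2 - (x1^2 + 1):
f = {(1, 1): 1, (0, 2): 1}
p = {2: 1, 0: 1}                      # p(x1) = x1^2 + 1
print(normal_form(f, p))              # x1^4 + x1^3 + 2*x1^2 + x1 + 1
```

Varying the coefficient dict `p` sweeps out the family; the point of the paper's $[\mathfrak j,m]$-marked bases is precisely to cut this down to finitely many parameters by bounding the degrees involved.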
We approach our study from both a theoretical and a computational point of view, obtaining two main results: a new division algorithm that is able to detect $[\mathfrak j,m]$-marked bases, and the construction of a class of flat families of ideals. Our division algorithm consists of reduction steps based on the nice combinatorial structure of the strongly stable ideal $\mathfrak j$. For every ideal $\mathfrak i$ generated by a $[\mathfrak j,m]$-marked basis, our algorithm is a new tool to compute normal forms modulo $\mathfrak i$. Moreover, it allows us to explicitly compute equations defining the $[\mathfrak j,m]$-marked family. A $[\mathfrak j,m]$-marked basis has many of the good properties of Gr\"obner bases, in a context where the ideal $\mathfrak j$ plays a role analogous to that of an initial ideal and, at the same time, the strongly stable property compensates for the lack of a term order. In particular, a $[\mathfrak j,m]$-marked basis behaves very well with respect to homogenization. Although we investigate $[\mathfrak j,m]$-marked families independently of the previously studied homogeneous case, the new division algorithm refines and improves the so-called superminimal reduction, introduced in \cite{BCLR} for the homogeneous case. In fact, if $J$ is a saturated strongly stable ideal in $K[x_0,x_1,\ldots,x_n]$, $m$ is a positive integer and $\mathfrak j=J\cap R$, then any $[\mathfrak j,m]$-marked basis turns out to be the de-homogenization of a $J_{\geq m}$-marked basis introduced in \cite{BCLR}. On the other hand, the homogenization of a $[\mathfrak j,m]$-marked basis gives us the projective closure in $\mathbb{P}^n$ of the affine scheme in $\mathbb A^n$ defined by the ideal $\mathfrak i$ it generates; moreover, we can easily obtain a $J_{\geq m}$-marked basis from the $[\mathfrak j,m]$-marked basis. This good behavior is due to the fact that no projective scheme defined by a $J_{\geq m}$-marked basis contains components at infinity.
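The homogenization/de-homogenization correspondence just described can be illustrated on exponent vectors; the following sketch (hypothetical helper names, and only for the toy case of polynomials in two variables) homogenizes with respect to an extra variable $x_0$ and de-homogenizes by setting $x_0 = 1$:

```python
def homogenize(f):
    """Homogenize a polynomial f = {(i, j): c} in x1, x2 with respect to a new
    variable x0: each term c * x1^i * x2^j becomes c * x0^(d-i-j) * x1^i * x2^j,
    where d = deg f.  Keys of the result are exponent triples (e0, i, j)."""
    d = max(i + j for (i, j) in f)
    return {(d - i - j, i, j): c for (i, j), c in f.items()}

def dehomogenize(F):
    """Set x0 = 1: the inverse of homogenization on polynomials not divisible by x0."""
    out = {}
    for (e0, i, j), c in F.items():
        out[(i, j)] = out.get((i, j), 0) + c
    return out

# The marked polynomial x2 - x1^2 - 1 homogenizes to x0*x2 - x1^2 - x0^2,
# whose vanishing locus is the projective closure of the affine parabola.
g = {(0, 1): 1, (2, 0): -1, (0, 0): -1}
G = homogenize(g)
assert G == {(1, 0, 1): 1, (0, 2, 0): -1, (2, 0, 0): -1}
assert dehomogenize(G) == g
print(G)
```

The substantive point of the paper is stronger than this bookkeeping: for a $[\mathfrak j,m]$-marked basis the homogenized generators cut out the projective closure itself, with no spurious components at infinity.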
These facts are also related to a strict correspondence between the satiety of the strongly stable ideal and that of the ideals in the homogeneous case. The present ``affine'' division algorithm is computationally convenient in the general case, since we skip the time-consuming multiplications by powers of $x_0$ that are unavoidable in the superminimal reduction described in \cite{BCLR}; moreover, it is especially well suited to the case of Artinian ideals. We also show that every $[\mathfrak j,m]$-marked family is endowed, in a very natural way, with the structure of an affine scheme and is flat at $\mathfrak j$. For families of projective schemes, some criteria to recognize flatness are available when the Hilbert polynomial is fixed and the parameterizing scheme has nice properties. The affine case is more complicated: we apply the local characterization of flatness via the lifting of the syzygies of $\mathfrak j$ to syzygies of the ideal generated by a $[\mathfrak j,m]$-marked basis. Our result is not obvious, because, in general, families of affine schemes can be non-flat, even if the affine Hilbert polynomial is fixed (see Example \ref{Esempio flat}). Flatness is the key feature of $[\mathfrak j,m]$-marked families that allows us to embed them in suitable Hilbert schemes. The choice of the integer $m$ is strategic in embedding a marked family as a locally closed subscheme or as an open subset of a Hilbert scheme (see \cite{BLR}). Our computational and theoretical results allow us to address open questions concerning the Hilbert scheme: in particular, we show that every local Gorenstein Artin algebra with Hilbert function $(1,7,7,1)$ is smoothable. The paper is organized as follows. In Section \ref{seznot}, we recall some properties of strongly stable ideals and the main notions and results concerning the homogeneous case, i.e.\ the $J$-marked families introduced in previous papers \cite{CR,BCLR,BLR}.
In Section \ref{sezregsat}, we focus our attention on the relations between the regularity and satiety of $J$ and those of the ideals of a $J$-marked family (Theorem \ref{bastaaffine}). In Section \ref{sectionJmSet}, we introduce the new notions used in the non-homogeneous context, in particular the new reduction relation (Definition \ref{sminred}) with its main features (Theorem \ref{ridsm}), and develop the theory of $[\mathfrak j,m]$-marked bases and $[\mathfrak j,m]$-marked families (Subsection \ref{subsecMB}). We compare the new results with those of the homogeneous case (Subsection \ref{passaggio}) and show the good behavior of $[\mathfrak j,m]$-marked sets, bases and the newly described reduction relation with respect to homogenization. Moreover, the new reduction relation produces a division algorithm that allows us to construct $[\mathfrak j,m]$-marked families by an effective criterion (Subsection \ref{sectionJmBasis}). In Subsection \ref{conseguenze}, a deeper study of this criterion explains why the algorithm is particularly efficient in the Artinian case (Theorem \ref{critpunti}). This fact is also related to an {\em optimal} choice for $m$ suggested by Theorem \ref{ro2}. In Section \ref{sezpiatt}, we describe the affine scheme structure of a $[\mathfrak j,m]$-marked family (Theorem \ref{daProjaAff}) and study its flatness, also giving an example of a non-flat affine family with fixed affine Hilbert polynomial (Example \ref{Esempio flat}). Furthermore, we show that the elements of a $[\mathfrak j,m]$-marked family can be simultaneously homogenized (Proposition \ref{omsim}) and that we can obtain an open cover of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$, the Hilbert scheme parameterizing subschemes of $\mathbb{P}^n$ which share the same Hilbert polynomial with $S/\mathfrak j^h$ (Theorem \ref{coprhilb}). In Section \ref{algoritmi}, we describe an algorithm which computes a set of equations defining a $[\mathfrak j,m]$-marked family by means of our reduction process.
In Section \ref{Hscheme}, as an application of our results, we analyze the $[\mathfrak j,3]$-marked family for a suitable strongly stable ideal $\mathfrak j\subset R=K[x_1,\dots,x_7]$ such that $R/\mathfrak j$ has affine Hilbert polynomial $16$. We show that $\mathcal M\mathrm f(\mathfrak j,3)$ has at least three irreducible components; one of them contains the reduced schemes of length $16$ and we prove in particular that it contains the Gorenstein schemes with Hilbert function $(1,7,7,1)$. In a similar way, we also prove the smoothability of Gorenstein schemes with Hilbert function $(1,5,5,1)$. This last result has been independently obtained by J.~Jelisiejew in \cite{JJ} with different tools. \section{Notations and background}\label{seznot} Let $K$ be a field. We consider the polynomial rings $S=K[x_0,\dots,x_n]\supset R=K[x_1,\dots,x_n]$, with variables ordered as $x_n>\cdots>x_1>x_0$. In this section, we fix some notation and facts that hold both in $S$ and in $R$. When needed, we will state precisely in which ring we are working to avoid any ambiguity. We will denote by capital letters, such as $I$ and $J$, homogeneous ideals in $S$, while we will use gothic letters, such as $\mathfrak i$ and $\mathfrak j$, for ideals (not necessarily homogeneous) in $R$. For any set $A$ of polynomials, we will denote by $A_t$ the subset of $A$ made up of the homogeneous polynomials of degree $t$ and by $A_{\leq t}$ the set of elements of $A$ of total degree $\leq t$. For a homogeneous ideal $I$ of $S$, the {\it Hilbert function} of $S/I$ is the function $H_{S/I}: \ell\in \mathbb N \rightarrow \dim_K \frac{S_\ell}{I_\ell}\in\mathbb N$ and the {\it Hilbert polynomial} $P_{S/I}(t)$ is the unique numerical polynomial in $\mathbb Q[t]$ such that $P_{S/I}(\ell)=H_{S/I}(\ell)$, for $\ell\gg 0$.
For an ideal $\mathfrak i$ of $R$, the {\it affine Hilbert function} of $R/\mathfrak i$ is the function $^aH_{R/\mathfrak i}: \ell\in \mathbb N \rightarrow \dim_K \frac{R_{\leq \ell}}{\mathfrak i_{\leq \ell}}\in\mathbb N$ and the {\it affine Hilbert polynomial} $^aP_{R/\mathfrak i}(t)$ is the unique numerical polynomial in $\mathbb Q[t]$ such that $^aP_{R/\mathfrak i}(\ell)= {}^aH_{R/\mathfrak i}(\ell)$, for $\ell\gg 0$. We refer to (\cite[Chapter 9, Section 3]{CLO}, \cite[Section 5.6]{KR2}) for basic results about these functions. Given $F\in S$, we denote by $F^{a}\in R$ the polynomial obtained by replacing $x_0$ with 1 in $F$. Conversely, if $\mathfrak f\in R$, then we denote by $\mathfrak f^{h}$ the polynomial $x_0^{\deg \mathfrak f}\mathfrak f(x_1/x_0,\dots, x_n/x_0) \in S$. \subsection{Strongly stable ideals}\label{subsecss} If $\alpha=(\alpha_0,\alpha_1,\dots,\alpha_n)$ belongs to $\mathbb N^{n+1}$, we use the compact notation $x^\alpha$ to represent the monomial $x_0^{\alpha_0}x_1^{\alpha_1}\cdots x_n^{\alpha_n}\in S$. We adopt the analogous notation for monomials in $R$. We denote by $\max(x^\alpha)$ the biggest variable that appears in $x^\alpha$ and, analogously, $\min(x^\alpha)$ is the smallest variable that appears in $x^\alpha$. For a monomial ideal $J\subset S$ (resp. $\mathfrak j\subset R$), we denote by $B_J$ (resp. $B_{\mathfrak j}$) its \emph{monomial basis} and by $\mathcal{N}\,(J)$ (resp. $\mathcal{N}\,(\mathfrak j)$) its \emph{sous-escalier}, that is the set of monomials in $S\setminus J$ (resp. $R\setminus \mathfrak j$). We will say that a monomial $x^\beta$ can be obtained from a monomial $x^\alpha$ through an \emph{elementary move} if $x^\alpha x_j = x^\beta x_i$ for some variable $x_i\neq x_j$.
In particular, if $i < j$, we say that $x^\beta$ can be obtained from $x^\alpha$ through an \emph{increasing} elementary move and we write $x^\beta = \mathrm{e}_{i,j}^{+}(x^\alpha)$, whereas if $i > j$ the move is said to be \emph{decreasing} and we write $x^\beta = \mathrm{e}_{i,j}^{-} (x^\alpha)$. The transitive closure of the relation $x^\beta > x^\alpha$ if $x^\beta = \mathrm{e}_{i,j}^{+}(x^\alpha)$ gives a partial order on the set of monomials of a fixed degree, that we will denote by $>_{B}$ and that is often called the \emph{Borel partial order}: \[ x^\beta >_{B} x^\alpha\ \Longleftrightarrow\ \exists\ x^{\gamma_1}, \dots, x^{\gamma_t} \text{ such that } x^{\gamma_1} = \mathrm{e}^+_{i_0,j_0}(x^\alpha),\ \dots\ ,x^\beta = \mathrm{e}^{+}_{i_t,j_t}(x^{\gamma_t}) \] for suitable indices $i_k,j_k$. In an analogous way, we can define the same relation using decreasing moves: \[ x^\beta >_{B} x^\alpha\ \Longleftrightarrow\ \exists\ x^{\delta_1}, \dots, x^{\delta_s} \text{ such that } x^{\delta_1} = \mathrm{e}^{-}_{h_0,l_0}(x^\beta),\ \dots\ ,x^\alpha = \mathrm{e}^{-}_{h_s,l_s}(x^{\delta_s}) \] for suitable indices $h_k,l_k$.\\ Note that every term order $\succ$ is a refinement of the Borel partial order $>_B$, that is, $x^\alpha >_B x^\beta$ implies $x^\alpha \succ x^\beta$. \begin{definition}\label{definBorel} A monomial ideal $J \subset S$ (or $\mathfrak j\subset R$) is said to be \emph{strongly stable} if every monomial $x^\alpha$ such that $x^\alpha >_{B} x^\beta$, with $x^\beta \in J$ (resp. $x^\beta \in \mathfrak j$), belongs to $J$ (resp. to $\mathfrak j$). \end{definition} Both in $S$ and in $R$, a strongly stable ideal is always Borel fixed, that is, fixed by the action of the Borel subgroup of upper triangular matrices of $GL(n+1)$ for $S$ and of $GL(n)$ for $R$. The converse holds under the hypothesis $\textnormal{char}(K)=0$ (e.g. \cite{D}). Throughout the paper we will work over a field $K$ without any further hypothesis on its characteristic.
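Strong stability can be tested mechanically: by transitivity of $>_B$, it suffices to check that every single increasing elementary move applied to a monomial of the basis lands inside the ideal (a standard observation, not made explicit above). The following Python sketch is a hypothetical helper, not part of the paper's algorithms; it encodes a monomial of $R$ as an exponent tuple, position $k$ standing for the variable $x_{k+1}$.

```python
def divides(a, b):
    """x^a divides x^b, with monomials encoded as exponent tuples."""
    return all(ai <= bi for ai, bi in zip(a, b))

def in_ideal(m, basis):
    """x^m lies in the monomial ideal generated by `basis`."""
    return any(divides(b, m) for b in basis)

def is_strongly_stable(basis):
    """Test the Borel condition on the generators: for every x^beta in the
    basis, every increasing elementary move (divide by x_{i+1}, multiply by
    the bigger variable x_{j+1}) must produce a monomial of the ideal."""
    n = len(basis[0])
    for beta in basis:
        for i in range(n):              # variable x_{i+1} to remove
            if beta[i] == 0:
                continue
            for j in range(i + 1, n):   # bigger variable x_{j+1} to insert
                moved = list(beta)
                moved[i] -= 1
                moved[j] += 1
                if not in_ideal(tuple(moved), basis):
                    return False
    return True
```

For instance, $(x_1^2)\subset K[x_1,x_2]$ is not strongly stable, since the increasing move on $x_1^2$ produces $x_1x_2\notin(x_1^2)$.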
We do not even need to assume that $K$ is algebraically closed: indeed, even when we recall the application of the techniques we develop to the study of the Hilbert scheme, we consider the classical construction of \cite[Appendix C]{IarroKanev}, \cite{CS}, recalled also in \cite{BLR}. Galligo's Theorem \cite{Gall}, generalized in \cite{BS87} to fields of non-zero characteristic, guarantees that in generic coordinates the initial ideal of $I$ (homogeneous or not), w.r.t.~a fixed term order, is a constant Borel fixed monomial ideal, called the {\em generic initial ideal} of $I$. The following statements and definitions concern strongly stable ideals both in $S$ and in $R$. \begin{definition} Given a strongly stable ideal $J$, with monomial basis $B_{J}$, and a monomial $x^\gamma\in J$, we define \[ x^\gamma=x^\alpha \ast_J x^\eta \text{ with } \gamma=\alpha+\eta,\quad x^\alpha \in B_{J}\text{ and }\min(x^\alpha)\geq \max (x^\eta). \] This decomposition exists and is unique (see \cite[Lemma 1.1]{EK}). \end{definition} \begin{lemma}\label{descLex} Let $J$ be a strongly stable ideal. If $x^\epsilon$ belongs to $\mathcal{N}\,(J)$ and $x^{\epsilon}\cdot x^\delta=x^{\epsilon+\delta}$ belongs to $J$ for some $x^\delta$, then $x^{\epsilon+\delta}=x^{\alpha}\ast_J x^{\eta}$ with $x^{\eta}<_{\mathtt{Lex}}x^\delta$. Furthermore, if $\vert \delta\vert=\vert\eta\vert$, then $x^{\eta}<_{B}x^\delta$. \end{lemma} \begin{proof} See \cite[Lemma 2.4]{BCLR}. \end{proof} \begin{definition}\label{ordstar} Let $J$ be a strongly stable ideal. On the monomials of $J$ we define the following total order: \[ x^\alpha\ast_J x^\delta <_\ast x^{\alpha'}\ast_J x^{\delta'} \text{ if } x^\delta<_{\mathtt{Lex}}x^{\delta'} \text{ or } x^\delta=x^{\delta'}\text{ and } x^\alpha<_{\mathtt{Lex}}x^{\alpha'}. \] \end{definition} In Definition \ref{ordstar}, in order to obtain a well-order on the monomials of $J$, it is sufficient to order the monomials in $B_J$ in any way, not necessarily lexicographically.
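Since the $\ast_J$ decomposition is unique, it can be computed by a direct search over $B_J$: the first basis divisor whose complementary factor satisfies the $\min/\max$ condition is the answer. The following Python sketch is a hypothetical illustration (not from the paper); monomials are exponent tuples, position $k$ standing for $x_{k+1}$.

```python
def divides(a, b):
    """x^a divides x^b (exponent tuples)."""
    return all(ai <= bi for ai, bi in zip(a, b))

def var_min(m):
    """Position of the smallest variable occurring in x^m (None if x^m = 1)."""
    return next((k for k, e in enumerate(m) if e > 0), None)

def var_max(m):
    """Position of the biggest variable occurring in x^m (None if x^m = 1)."""
    return next((k for k in range(len(m) - 1, -1, -1) if m[k] > 0), None)

def star_decomposition(gamma, basis):
    """Write x^gamma = x^alpha *_J x^eta with x^alpha in B_J and
    min(x^alpha) >= max(x^eta).  For J strongly stable the decomposition
    is unique (Eliahou-Kervaire), so the first match is the answer."""
    for alpha in basis:
        if divides(alpha, gamma):
            eta = tuple(g - a for g, a in zip(gamma, alpha))
            if var_max(eta) is None or var_min(alpha) >= var_max(eta):
                return alpha, eta
    raise ValueError("x^gamma does not belong to J")
```

For $J=(x_3^2,x_3x_2,x_3x_1,x_2^2,x_2x_1)$ one finds, e.g., $x_1^2x_2x_3=x_3x_2\ast_J x_1^2$: the divisor $x_3x_1$ also divides $x_1^2x_2x_3$, but its complementary factor $x_1x_2$ violates the $\min/\max$ condition.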
We underline that $<_\ast$ is a well-order on $J$, but it is not a term order, as the following example shows. \begin{example} Consider $J=(x_3^2,x_3x_2,x_3x_1,x_2^2,x_2x_1)$ in $K[x_0,x_1,x_2,x_3]$, with $x_0<x_1<x_2<x_3$. If we take $x_0x_2x_3$ and $x_1^2x_2$, we get $x_2x_3\ast_J x_0<_\ast x_1x_2\ast_J x_1$, but multiplying both monomials by $x_3$ reverses the inequality: $x_1^2x_2x_3=x_3x_2\ast_J x_1^2<_\ast x_3^2\ast_J x_2x_0=x_3^2x_2x_0$. \end{example} \subsection{Marked sets and bases for homogeneous ideals}\label{subsecMsh} For a polynomial $F$, belonging to $S$ or $R$, we denote by $\mathrm{supp}(F)$ its \emph{support}, that is, the set of monomials appearing in $F$ with non-zero coefficients. \begin{definition}\label{polymarcato} \cite{RStu,CR} A \emph{marked polynomial} is a polynomial $F\in S$ together with a specified monomial of $\mathrm{supp}(F)$ that will be called the {\it head term} of $F$ and denoted by $\mathrm{Ht}(F)$. Let $J$ be a monomial ideal in $S$. A finite set $G\subset S$ of homogeneous marked polynomials $F_{\alpha}$ is a \emph{$J$-marked set} if the head terms $\mathrm{Ht}(F_\alpha)=x^\alpha$ are pairwise different, they form the monomial basis of $J$ and the monomials in $\mathrm{supp}(x^\alpha-F_\alpha)$ do not belong to $J$, i.e. $\vert \mathrm{supp}(F_\alpha)\cap J\vert=1$. We call the \emph{tail of $F_\alpha$} the polynomial $T(F_\alpha)=\mathrm{Ht}(F_\alpha)-F_\alpha$, so we can write $F_\alpha=x^\alpha-T(F_\alpha)$. A $J$-marked set $G\subseteq S$ is a \emph{$J$-marked basis} if $(G)\oplus \mathcal{N}\,(J)=S$ as a $K$-vector space. \end{definition} \begin{definition} \cite{RStu,CR} The family of all homogeneous ideals $I$ such that $\mathcal N(J)$ is a basis of the quotient $S/I$ as a $K$-vector space will be denoted by $\mathcal M\mathrm f(J)$ and called the {\em $J$-marked family}. \end{definition} Referring to \cite{CR,BCLR}, we recall some properties and basic results about $J$-marked bases and superminimal generators that will be useful in the next sections.
For a $J$-marked set $G$ with $J$ strongly stable and for every integer $\ell\geq \min\{t : J_t\not=(0)\}$, we set $$V_\ell:=\{x^\delta f_\alpha \ \vert \ f_\alpha \in G, \vert\delta+\alpha\vert=\ell \text{ and } x^\delta x^\alpha=x^\alpha\ast_J x^\delta\} \text{ and } V:=\cup_{\ell} V_\ell.$$ The ideal $I$ generated by $G$ belongs to $\mathcal M\mathrm f(J)$ if and only if $I_t=\langle V_{t} \rangle$ for every $t$, by \cite[Lemma 2.2]{BCLR}. In \cite{CR,BCLR} the reduction relation $\xrightarrow{\ V_\ell \ }$ on homogeneous polynomials of degree $\ell$, in the usual sense of Gr\"obner bases theory, is introduced and studied to investigate properties and features of $\mathcal M\mathrm f(J)$. An experimental version of the algorithms for $J$-marked bases which use the reduction relation $\xrightarrow{\ V_\ell \ }$ is available in \cite{michela} for the software package Singular \cite{DGPS}. However, to study the family $\mathcal M\mathrm f(J_{\geq m})$ for a saturated strongly stable ideal $J$ and a given integer $m$, a much more efficient reduction relation can be used. To recall the definition of this latter reduction relation, called the {\it superminimal reduction}, we first need the so-called {\it superminimal generators of $J_{\geq m}$}, i.e. the monomials of $B_{J_{\geq m}}$ of type $x_0^{t_{\alpha}} x^\alpha$, with $x^\alpha$ in $B_J$ and $t_\alpha\geq 0$. We denote the set of superminimal generators of $J_{\geq m}$ by $sB_{J_{\geq m}}$. Then we define the subset $sG$ of the polynomials of a $(J_{\geq m})$-marked set $G$ whose head terms are the monomials in $sB_{J_{\geq m}}$. The subset $sG$ is called a {\it $(J_{\geq m})$-marked superminimal set} and, when $G$ is a $(J_{\geq m})$-marked basis, $sG$ is called a {\it $(J_{\geq m})$-superminimal basis} \cite[Definitions 3.5 and 3.9]{BCLR}.
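Under our reading of this definition (an assumption, not code from \cite{BCLR}), for $J$ saturated each $x^\alpha\in B_J$ contributes exactly one superminimal generator, namely $x_0^{t_\alpha}x^\alpha$ with $t_\alpha=\max\{0,m-\vert\alpha\vert\}$: any smaller power of $x_0$ gives a monomial of degree $<m$, any larger one gives a non-minimal generator of $J_{\geq m}$. A Python sketch, with monomials of $S$ as exponent tuples $(\alpha_0,\alpha_1,\dots,\alpha_n)$, position $0$ standing for $x_0$:

```python
def superminimal_generators(B_J, m):
    """Superminimal generators of J_{>=m} for a saturated strongly stable J:
    each x^alpha in B_J contributes x_0^t * x^alpha with t = max(0, m - deg),
    the unique power of x_0 making the product a minimal generator of J_{>=m}.
    Monomials are exponent tuples (a_0, a_1, ..., a_n), position 0 = x_0."""
    result = []
    for alpha in B_J:
        t = max(0, m - sum(alpha))
        result.append((alpha[0] + t,) + alpha[1:])
    return result
```

For example, for $J=(x_2^2,x_1^3x_2,x_1^4)\subseteq K[x_0,x_1,x_2]$ and $m=3$ this yields $sB_{J_{\geq 3}}=\{x_0x_2^2,\ x_1^3x_2,\ x_1^4\}$.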
\begin{definition} \label{def:superred} \cite[Definition 3.11]{BCLR} Consider a saturated strongly stable ideal $J$, a positive integer $m$, a $J_{\geq m}$-marked set $G$ and two polynomials $g$ and $g_1$. We say that $g$ is in {\em $sG_\ast$-relation} with $g_1$ if there is a monomial $x^\gamma \in \mathrm{supp}(g)\cap J_{\geq m}$, such that $x^\gamma$ is divisible by a superminimal generator $x^{\alpha'}=x_0^{t_{\alpha}}x^\alpha$ of $sB_{J_{\geq m}}$, with $x^\gamma=x^{\alpha} \ast_{J} x^\eta=x_0^{t_{\alpha}}x^{\alpha}\cdot x^{\eta'}$ and ${g_1}={g}-c_\gamma\cdot x^{\eta'} f_{\alpha'}$ (where $c_\gamma$ is the coefficient of $x^\gamma$ in ${g}$); hence, ${g_1}$ is obtained by replacing the monomial $x^\gamma$ in ${g}$ by $x^{\eta'} \cdot T(f_{\alpha'})$. We call \emph{superminimal reduction} the transitive closure of the above relation and denote it by $\xrightarrow{\ sG_\ast \ }$. A homogeneous polynomial $h$ is {\it strongly reduced} if no monomial in $\mathrm{supp}(h)$ is divisible by a monomial of $B_J$. \end{definition} \begin{theorem} \cite[Theorem 3.14]{BCLR} Let $J$ be a saturated strongly stable ideal in $S$ and $sG$ be a $J_{\geq m}$-marked superminimal set. Then: \begin{enumerate}[(i)] \item \label{ridsm-i}$\xrightarrow{\ sG_\ast\ }$ is Noetherian. \item \label{ridsm-ii} For every homogeneous polynomial $g$ there exist $t$ and a unique polynomial $g(t)$ strongly reduced such that $x_0^t\cdot g \xrightarrow{\ sG_\ast\ }g(t)$. If $\overline{t}$ is the minimum one and $\bar g:=g(\overline{t})$, then $ g(t)=x_0^{t-\bar t}\cdot \bar g$ for every $t\geq \bar t$. There is an effective procedure that computes $\bar t$ and $\bar g$. \end{enumerate} If moreover $sG$ is the superminimal basis of an ideal $I$ of $\mathcal M\mathrm f(J_{\geq m})$, then $\xrightarrow{\ sG_\ast\ }$ computes the $J_{\geq m}$-normal forms modulo $I$. 
More precisely, for every homogeneous polynomial $g$: \[ \mathrm{Nf}(g)=\begin{cases} g, &\text{ if } \deg(g) < m\\ \bar g / x_0^{\bar t}, & \text{ if } \deg(g) \geq m \text{ and } x_0^{\overline{t}}\cdot g\xrightarrow{\ sG_\ast\ } {\bar g} \end{cases} \] \end{theorem} We refer to \cite{BCLR} for other properties of the superminimal reduction. Here, we recall that this reduction is not only very efficient for effective computations, but is also a very good tool for proving some features of the family $\mathcal M\mathrm f(J_{\geq m})$. For example, the superminimal reduction is the tool used in \cite{BCLR} to prove that there is a scheme-theoretical isomorphism between $\mathcal M\mathrm f(J_{\geq m})$ and $\mathcal M\mathrm f(J_{\geq m-1})$, for every $m$ greater than or equal to a suitable integer $\rho$ which depends only on $J$ \cite[Proposition 5.7]{BCLR} and which will be studied in Subsection \ref{passaggio}. Even though this procedure allows computations on Hilbert schemes (see \cite{BLR}) which we cannot cope with by using Gr\"obner techniques, it is plain that the multiplication by $x_0^t$ of every monomial of the polynomial to be reduced by $\xrightarrow{\ sG_\ast\ }$ is time-consuming in any implementation of the algorithm presented in \cite{BCLR}. \section{Regularity and satiety in a $J$-marked family}\label{sezregsat} \begin{definition}\label{regsat} Let $I$ be a homogeneous ideal in $S$. Consider its graded minimal free resolution \[ 0\rightarrow E_n\rightarrow \cdots \rightarrow E_1\rightarrow E_0\rightarrow I\rightarrow 0, \] where $E_i=\oplus_j S(-a_{ij})$. The ideal $I$ is $m$-regular if $m\geq a_{ij}-i$ for every $i,j$. The \emph{regularity} of $I$, denoted by $\textnormal{reg}(I)$, is the smallest $m$ for which $I$ is $m$-regular.\\ The ideal $I$ is \emph{saturated} if $I=(I:(x_0,\dots,x_n))$.
The \emph{saturation} of $I$ is $I^{\textnormal{sat}}=\cup_{j\geq 0}(I:(x_0,\dots,x_n)^j)$ and $I$ is $m$-saturated if $I_t=I_t^{\textnormal{sat}}$ for every $t\geq m$. The \emph{satiety} of $I$, denoted by ${\textnormal{sat}}(I)$, is the smallest $m$ for which $I$ is $m$-saturated.\\ A homogeneous polynomial $h$ in $S$ is \emph{generic for $I$} if $h$ is a non-zero divisor in $S/(I^{\textnormal{sat}})$. \end{definition} Strongly stable ideals have a very nice combinatorial structure which allows us to easily read their \emph{regularity} and \emph{satiety} off their monomial bases. \begin{lemma}\label{regsatB} Let $J$ be a strongly stable ideal in $S$ with monomial basis $B_J$. Then: \[\textnormal{reg}(J)=\max\{\deg(x^{\alpha}): x^\alpha\in B_{J}\},\] \[{\textnormal{sat}}(J)=\max\{\deg(x^\alpha): x^\alpha \in B_J \text{ and } x_0\vert x^\alpha\}.\] Furthermore $(J:x_0^\infty)=J^\mathrm{sat}$. As a consequence, a strongly stable ideal is saturated if and only if no monomial in its monomial basis involves the smallest variable of the ring. \end{lemma} \begin{proof} See \cite[Proposition 2.9]{BS} and \cite[Proposition 2.9, Corollary 2.10]{Gr}. \end{proof} Recall that, if $\mathrm{in}_\prec(I)$ is the ideal of the leading terms of an ideal $I$ with respect to some term order $\prec$, then $\textnormal{reg}(I)\leq \textnormal{reg}(\mathrm{in}_\prec(I))$. In particular, when $\mathrm{char}(K)=0$, if $\mathrm{gin}(I)$ is the generic initial ideal of $I$ with respect to $\mathtt{DegRevLex}$, then $\textnormal{reg}(I)=\textnormal{reg}(\mathrm{gin}(I))$ and ${\textnormal{sat}}(I)={\textnormal{sat}}(\mathrm{gin}(I))$ \cite{BS,BS87}. As regards regularity, we highlight that the ideal $J$ plays for $I\in \mathcal M\mathrm f(J)$ a role analogous to that of an initial ideal. Indeed, we recall the following result. \begin{prop} \cite[Proposition 4.6]{CR} Let $J$ be a strongly stable ideal in $S$. The $J$-marked family $\mathcal M\mathrm f(J)$ is flat at the origin.
In particular, for every ideal $I \in \mathcal M\mathrm f(J)$, we get $\textnormal{reg}(J)\geq \textnormal{reg}(I)$. \end{prop} As regards satiety and saturation, the strongly stable ideal $J$ plays for $I\in \mathcal M\mathrm f(J)$ a role similar to that of the generic initial ideal with respect to $\mathtt{DegRevLex}$, in the following sense. First, observe that we can read the second part of the statement of Lemma \ref{regsatB} in this geometric way: the associated scheme $\mathcal Z=\textnormal{Proj}\,(S/J^{\textnormal{sat}})$ in the projective space $\mathbb{P}^n$ has no components lying in the hyperplane $H_0\subseteq \mathbb{P}^n$ defined by the ideal $(x_0)$, because $J^{\textnormal{sat}}$ does not have any primary component contained in the ideal $(x_0)$. As a consequence, the affine scheme $\mathcal Z\setminus H_0$ shares some numerical features with $\mathcal Z\subseteq \mathbb{P}^n$, such as the affine Hilbert function and polynomial. We now show that the same holds for every homogeneous ideal in $S$ generated by a $J$-marked basis, with $J$ a strongly stable ideal; not only does the satiety of $J$ bound the satiety of every ideal in $\mathcal M\mathrm f(J)$, but also, for every $I\in \mathcal M\mathrm f(J)$, the scheme $\textnormal{Proj}\,(S/I)$ has no components at infinity. We need the following lemma. \begin{lemma}\label{lemma1} Let $J$ be a strongly stable ideal in $S$, $m$ be an integer, $m\geq {\textnormal{sat}}(J)$, $g$ a homogeneous polynomial of degree $\ell\geq m$ and $G$ a $J$-marked set. Then $$g\in \langle V_\ell\rangle \Leftrightarrow x_0\cdot g\in \langle V_{\ell+1}\rangle.$$ \end{lemma} \begin{proof} The proof is the same as the one in \cite[Lemma 4.2]{BCLR}; we simply need to observe that for every monomial $x_0x^\gamma\in J$, with $x^\gamma\in\mathrm{supp}(g)$, we obtain $x_0x^\gamma \notin B_J$ because $m\geq {\textnormal{sat}}(J)$.
\end{proof} \begin{theorem}\label{bastaaffine} Let $J$ be a strongly stable ideal in $S$ and $I$ be an ideal belonging to $\mathcal M\mathrm f(J)$. Then: \begin{enumerate}[(i)] \item $x_0$ is generic for $I$; \item if $J$ is saturated, then $I$ is saturated; \item \label{bastaffineiii} ${\textnormal{sat}}(I)\leq {\textnormal{sat}}(J)$. \end{enumerate} As a consequence, $(I:x_0^\infty)=I^{\textnormal{sat}}$. \end{theorem} \begin{proof} Recall that $I=\langle V\rangle$, because $I$ belongs to $\mathcal M\mathrm f(J)$ \cite[Lemma 2.2]{BCLR}. Hence, assuming that $J$ is $m$-saturated, by Lemma \ref{lemma1} we get that for every $t\geq m$ $$f\in (I:x_0)_t \Rightarrow fx_0 \in I_{t+1}=\langle V\rangle_{t+1} \Rightarrow f\in \langle V\rangle_t=I_t,$$ or in other words $$(I:x_0)_t=I_t, \text{ for every } t\geq m.$$ By \cite[Lemma (1.6)]{BS}, this is equivalent to the fact that $I$ is $m$-saturated too and $x_0$ is generic for $I$. \end{proof} \begin{remark} If $K$ is not an infinite field, a generic linear form for an ideal $I$ may not exist. However, we do not need this hypothesis on $K$: even if $K$ is finite, $x_0$ is generic for the strongly stable ideal $J$ and, by Theorem \ref{bastaaffine}, it is generic for $I\in \mathcal M\mathrm f(J)$ too. \end{remark} \begin{cor}\label{sattronc} Let $J$ be a non-null saturated strongly stable ideal in $S$ and $m$ be a positive integer. If $I$ is a non-saturated ideal of $\mathcal M\mathrm f(J_{\geq m})$, then ${\textnormal{sat}}(I)=m$ and $I=(I^{\textnormal{sat}})_{\geq m}$. \end{cor} \begin{proof} Since $I$ is not saturated, by Theorem \ref{bastaaffine} the ideal $J_{\geq m}$ is not saturated, and ${\textnormal{sat}}(J_{\geq m})\leq m$. Since $m$ is also the initial degree of the ideal $I$, we also have that ${\textnormal{sat}}(I)\geq m$. By Theorem \ref{bastaaffine}(\ref{bastaffineiii}), we have ${\textnormal{sat}}(I)=m$. In particular, we get $I=(I^{\textnormal{sat}})_{\geq m}$.
\end{proof} The family $\mathcal M\mathrm f(J)$, with $J$ a strongly stable ideal, in general also contains saturated ideals, even if $J$ is not saturated, as shown in the following examples. \begin{example} Take the saturated strongly stable ideal $J=(x_2^2,x_1^3x_2,x_1^4)\subseteq S=K[x_0,x_1,x_2]$, for which $S/J$ has Hilbert polynomial $7$. Geometrically, $\textnormal{Proj}\,(S/J)$ is a non-reduced scheme of length 7. The family $\mathcal M\mathrm f(J_{\geq 3})$ is a dense open subset of ${\mathcal{H}\textnormal{ilb}}_7^2$ \cite{BLR}, which has only one component. Since 7 general points in $\mathbb{P}^2$ do not lie on a conic, there is an open subset of $\mathcal M\mathrm f(J_{\geq 3})$ made up of saturated ideals. More explicitly, we consider the $J_{\geq 3}$-marked set: \[G=B_{J_{\geq 3}}\setminus\{x_2^2x_0,x_1^4\}\cup\{x_{{0}}{x_{{2}}}^{2}-u{x_{{1}}}^{2}x_{{2}},{x_{{1}}}^{4}+vu{x_{{1}}}^{2}{x_{{0}}}^{2}-vx_{{2}}{x_{{0}}}^{3}\}, \quad u,v \in K\] and define $I:=(G)K[x_0,x_1,x_2]$. Since $\dim I_t=\dim J_t$ for every $t\geq 3$, $I\in \mathcal M\mathrm f(J_{\geq 3})$ for every $u,v \in K$ and $I$ is saturated. \end{example} \begin{example} Let $I$ be the saturated ideal in $K[x_0,\dots,x_3]$ defining the reduced scheme made up of the following five points in $\mathbb{P}^3$: \[ [1:4:3:0], \quad [2:0:1:1], \quad [3:7:5:6], \quad [1:1:0:1], \quad [0:1:0:1].\] We consider the generic initial ideal of $I$ with respect to $\mathtt{DegLex}$: \[J:=\mathrm{gin}_{\mathtt{DegLex}}(I)=(x_0x_1x_2, x_0^3x_2, x_1^5, x_3^2, x_2x_3, x_1x_3, x_0x_3, x_2^2, x_1^2x_2).\] $J$ is a strongly stable ideal with satiety 4 and $I$ belongs to $\mathcal M\mathrm f(J)$, because its $J$-marked basis is exactly its reduced Gr\"obner basis with respect to $\mathtt{DegLex}$.
\end{example} \section{Dehomogenizing marked sets and bases} \label{sectionJmSet} By Theorem \ref{bastaaffine}, one may expect that any kind of computation performed on a homogeneous ideal $I$ generated by a $J$-marked basis could be performed after a process of \lq\lq affinization\rq\rq. In the present section, we will see in which sense this is true, showing that the superminimal reduction of Definition \ref{def:superred} has a natural counterpart in a reduction process for the non-homogeneous case. The starting point to establish a correspondence between the homogeneous and the non-homogeneous case is very simple: if $J$ is a saturated strongly stable ideal in $S$, then $J\cap R=(B_{J})R$ is a strongly stable ideal too. Vice versa, if $\mathfrak j$ is a strongly stable ideal in $R$, then $(B_{\mathfrak j})S$ is a saturated strongly stable ideal in $S$. We are going to define a reduction process for non-homogeneous polynomials that will lead to interesting computational and theoretical results: on the one hand, this process will give a deeper insight into the theory of marked bases over a strongly stable ideal in the homogeneous case, for what concerns 0-dimensional ideals (Corollary \ref{critpunti}); on the other hand, the notion of non-homogeneous marked basis (Definition \ref{baseaff}) will lead to the construction of a flat family of non-homogeneous ideals (Section \ref{sezpiatt}). We will describe the relations between the homogeneous case and the non-homogeneous one, observing how to pass from one to the other. In particular, by homogenizing non-homogeneous marked bases we can obtain a $J$-marked family from a non-homogeneous one, and vice versa. \subsection{$[\mathfrak j,m]$-marked sets, completions and bases in $R$}\label{subsecMB} A polynomial $\mathfrak f \in R$ is a marked polynomial of $R$ if it is marked as a polynomial in $S$ (according to the first part of Definition \ref{polymarcato}).
\begin{definition}\label{nhms} Let $\mathfrak j$ be a strongly stable ideal in $R$. We call \emph{non-homogeneous $\mathfrak j$-marked set} ({\it n.h.~$\mathfrak j$-marked set}, for short) a finite subset $\mathfrak G\subset R$ such that every polynomial $\mathfrak f_\alpha \in \mathfrak G$ is marked and the set $\{\mathrm{Ht}(\mathfrak f_\alpha)\}$ is the monomial basis $B_{\mathfrak j}$ of $\mathfrak j$. We call \emph{tail} of $\mathfrak f_\alpha$ the polynomial $T(\mathfrak f_\alpha)=\mathrm{Ht}(\mathfrak f_\alpha)-\mathfrak f_\alpha$. \end{definition} \begin{remark} The polynomials in a n.h.~$\mathfrak j$-marked set are not supposed to be homogeneous, while the polynomials in Definition \ref{polymarcato} are. \end{remark} \begin{definition}\label{sminred} Let $\mathfrak j$ be a strongly stable ideal in $R$ and let $\mathfrak G\subset R$ be a n.h.~$\mathfrak j$-marked set. We say that the polynomial $\mathfrak g \in R$ is in {\em $\mathfrak G_\ast$-relation} with $\mathfrak g_1\in R$ if there is a monomial $x^\gamma \in \mathrm{supp}(\mathfrak g)\cap \mathfrak j$, $x^\gamma=x^{\alpha} \ast_{\mathfrak j} x^\eta$, and $\mathfrak g_1=\mathfrak g-c_\gamma\cdot x^\eta \mathfrak f_\alpha$, where $c_\gamma$ is the coefficient of $x^\gamma$ in $\mathfrak g$. In other words, $\mathfrak g_1$ is obtained by replacing in $\mathfrak g$ the monomial $x^\gamma$ by $x^\eta \cdot T(\mathfrak f_{\alpha})$. We call \emph{$\mathfrak G_\ast$-reduction} the transitive closure of the above relation and denote it by $\xrightarrow{\ \id G_\ast\ }$. Moreover, we say that: \begin{itemize} \item[-] \emph{$\mathfrak g$ can be reduced to $\mathfrak g_1$} by $\xrightarrow{\ \id G_\ast\ }$ if $\mathfrak g\xrightarrow{\ \id G_\ast\ } \mathfrak g_1$; \item[-] \emph{$\mathfrak g$ is reduced} if $\mathrm{supp}(\mathfrak g)\cap \mathfrak j=\emptyset$, i.e. $\mathfrak g$ is not further reducible by $\xrightarrow{\ \id G_\ast\ }$.
\end{itemize} \end{definition} The following result states that every polynomial $\mathfrak g$ of $R$ is in $\mathfrak G_\ast$-relation with a reduced polynomial. Hence, we have a division algorithm to construct reduced polynomials, also in a context in which we deal with non-homogeneous polynomials. \begin{theorem}\label{ridsm} Given a n.h.~$\mathfrak j$-marked set $\mathfrak G$, the $\mathfrak G_\ast$-reduction $\xrightarrow{\ \id G_\ast\ }$ is Noetherian and for every polynomial $\mathfrak g\in R$ there is a reduced polynomial $\bar {\mathfrak g}$ such that $\mathfrak g\xrightarrow{\ \id G_\ast\ } \bar {\mathfrak g}$. \end{theorem} \begin{proof} It is sufficient to prove both statements for monomials. If $x^\beta \notin \mathfrak j$ then it is reduced and there is nothing to prove. We then consider monomials in $\mathfrak j$ and proceed by induction on~$<_\ast$. For every $x^\alpha\in B_{\mathfrak j}$, we reduce $x^\alpha$ by $\mathfrak f_\alpha$: $x^\alpha \xrightarrow{\ \id G_\ast\ } T(\mathfrak f_\alpha)$, which is a reduced polynomial. We now consider $x^\beta=x^\alpha\ast_{\mathfrak j} x^\delta\in \mathfrak j\setminus B_{\mathfrak j}$ and assume that the claim holds for every monomial in $\mathfrak j$ which is smaller than $x^\alpha\ast_{\mathfrak j} x^\delta$ with respect to $<_\ast$. We perform the first step of reduction on $x^\beta=x^\alpha\ast_{\mathfrak j} x^\delta$ using $\mathfrak f_\alpha$: $x^\beta=x^\alpha x^\delta\xrightarrow{\ \id G_\ast\ } x^\delta T(\mathfrak f_\alpha)$. If $\mathrm{supp}(x^\delta T(\mathfrak f_\alpha))\subseteq \mathcal{N}\,(\mathfrak j)$, we are done. Otherwise, for every $x^\gamma \in \mathrm{supp}(x^\delta T(\mathfrak f_\alpha))\cap \mathfrak j$, we have $x^\gamma=x^{\alpha'}\ast_{\mathfrak j} x^{\delta'}<_\ast x^\alpha\ast_{\mathfrak j} x^\delta=x^\beta$ by Lemma \ref{descLex}. Then, by the inductive hypothesis, applying a finite number of steps of $\xrightarrow{\ \id G_\ast\ }$ to $x^\gamma$, we get a reduced polynomial.
\end{proof} Given a strongly stable ideal $\mathfrak j$, the notions of n.h.~$\mathfrak j$-marked set $\mathfrak G$ and of the related $\mathfrak G_\ast$-reduction $\xrightarrow{\ \id G_\ast\ }$ seem to be the right tools to read some of the numerical invariants of the ideal generated by $\mathfrak G$ in $R$, in particular the affine Hilbert function. However, without any further assumption on the polynomials of $\mathfrak G$, $\mathfrak j$ and $(\mathfrak G)$ do not share the same affine Hilbert polynomial. Further, the following simple example shows that with our definition of n.h.~$\mathfrak j$-marked set we do not have a bound on the degree of the reduced polynomials we obtain by $\xrightarrow{\ \id G_\ast\ }$. \begin{example}\label{nonomognonbasta} Consider $\mathfrak j=(x_3,x_2^2)\subseteq K[x_1,x_2,x_3]$ and the n.h.~$\mathfrak j$-marked set $\mathfrak G=\{\mathfrak f_1=x_3-x_1^4,\ \mathfrak f_2=x_2^2\}\subset R$. The quotient $R/(\mathfrak G)$ has affine Hilbert polynomial $8t-8$, while $R/\mathfrak j$ has affine Hilbert polynomial $2t+1$. The monomial $x_3^\ell=x_3\ast_{\mathfrak j} x_3^{\ell-1}$ belongs to $\mathfrak j$ and is reduced by $\xrightarrow{\ \id G_\ast\ }$ to $\mathfrak g:=x_3^{\ell-1}T(\mathfrak f_1)=x_3^{\ell-1}x_1^4$. Of course $\mathfrak g$ is not yet reduced, because its support is contained in $\mathfrak j$. With further steps of reduction, we obtain $x_1^{4\ell}$, which is not further reducible. \end{example} We introduce the following more refined notion in order to control the degree of the reduced polynomials we obtain by $\xrightarrow{\ \id G_\ast\ }$. \begin{definition}\label{jmmarc} Let $\mathfrak j$ be a strongly stable ideal in $R$ and $m$ be a positive integer.
A n.h.~$\mathfrak j$-marked set $\mathfrak G$ is a \emph{$[\mathfrak j,m]$-marked set} if for every $\mathfrak f_\alpha \in \mathfrak G$, we have $\mathrm{supp}(T(\mathfrak f_\alpha))\subseteq \mathcal{N}\,(\mathfrak j)_{\leq t}$ with $t=\max\{m,\vert\alpha\vert\}$.\\ For a fixed $[\mathfrak j,m]$-marked set $\mathfrak G$, a \emph{$[\mathfrak j,m]$-completion} (or, for short, a \emph{completion}) is a subset $\overline {\mathfrak G}\subset (\mathfrak G)$ of marked polynomials $\mathfrak f_\beta$ such that the head terms $\mathrm{Ht}(\mathfrak f_\beta)=x^\beta$ are pairwise different, they form the set of all monomials in $\mathfrak j_{\leq m}\setminus B_{\mathfrak j}$ and $\mathrm{supp}(\mathrm{Ht}(\mathfrak f_\beta)-\mathfrak f_\beta)\subseteq \mathcal{N}\,(\mathfrak j)_{\leq m}$. For every $\mathfrak f_\beta$ belonging to a $[\mathfrak j,m]$-completion $\overline{\mathfrak G}$ of $\mathfrak G$, we again use the term \emph{tail} for the polynomial $T(\mathfrak f_\beta):=\mathrm{Ht}(\mathfrak f_\beta)-\mathfrak f_\beta$. \end{definition} \begin{definition} Given a strongly stable ideal $\mathfrak j$ in $R$ and a $[\mathfrak j,m]$-marked set $\mathfrak G$, consider $\mathfrak g\in R$. A \emph{$[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$} of $\mathfrak g$ is a polynomial $\overline{\mathfrak g}$ such that $\mathrm{supp}(\overline{\mathfrak g})\subset \mathcal{N}\,(\mathfrak j)_{\leq t}$, with $t=\max\{m,\deg(\mathfrak g)\}$, and $\mathfrak g -\overline{\mathfrak g}\in (\mathfrak G)$. If such a $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$ exists and is unique for every $\mathfrak g \in R$, we call it the \emph{$[\mathfrak j,m]$-normal form} and denote it by $\mathrm{Nf}(\mathfrak g)$. \end{definition} We now show that a $[\mathfrak j,m]$-marked set, with a completion, generates an ideal $\mathfrak i$ such that the quotient $R/\mathfrak i$ in degree $\leq t$ is generated as a $K$-vector space by $\mathcal{N}\,(\mathfrak j)_{\leq t}$, for every $t \geq m$.
\begin{prop}\label{Efn} Let $\mathfrak j$ be a strongly stable ideal in $R$ and $\mathfrak G$ be a $[\mathfrak j,m]$-marked set, for some positive integer $m$. Then there exists a $[\mathfrak j,m]$- completion of $\mathfrak G$ if and only if every $\mathfrak g \in R$ has a $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$. \end{prop} \begin{proof} Suppose that every $\mathfrak g \in R$ has a $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$. We can construct a $[\mathfrak j,m]$-completion of $\mathfrak G$ by taking, for every $x^\beta \in \mathfrak j_{\leq m} \setminus B_{\mathfrak j}$, the marked polynomial $x^\beta-\mathfrak g_\beta$, $\mathrm{Ht}(x^\beta-\mathfrak g_\beta):=x^\beta$, where $\mathfrak g_\beta$ is a $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$ of $x^\beta$; since $\vert\beta\vert\leq m$, we have $\mathrm{supp}(\mathfrak g_\beta)\subseteq \mathcal{N}\,(\mathfrak j)_{\leq m}$. Conversely, we now assume that the $[\mathfrak j,m]$-marked set $\mathfrak G$ has a completion $\overline{\mathfrak G}$. It is sufficient to prove that every monomial in $R$ has a $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$.\\ Let $E$ be the set of monomials in $R$ which do not have a $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$. Observe that $E\cap \mathcal{N}\,(\mathfrak j)=\emptyset$ because if a monomial does not belong to $\mathfrak j$, then it is its own $[\mathfrak j, m]$-reduced form modulo $(\mathfrak G)$. Furthermore $E\cap \mathfrak j_{\leq m}=\emptyset$, because every $x^\beta \in \mathfrak j_{\leq m}$ has a $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$: indeed, there is $\mathfrak f_\beta \in \mathfrak G\cup \overline{\mathfrak G}$ such that $\mathrm{supp}(x^\beta -\mathfrak f_\beta)\subseteq \mathcal{N}\,(\mathfrak j)_{\leq m}$. Suppose $E$ is not empty and consider $x^\beta \in E$ of minimum degree $\deg(x^\beta)=t$ and, among the monomials of $E$ of degree $t$, with minimal $\min(x^\beta)=:x_i$. Note that $x^\beta$ belongs to $\mathfrak j$ and its degree satisfies $t\geq m+1$.
The monomial $x^\beta$ can be written as $x^{\beta'} x_i$, with $x^{\beta'} \in \mathfrak j_{\leq t-1}$ and $x_i=\min (x^\beta)$ (by \cite[Lemma 1.2]{BCLR}). By the minimality of $t$ in $E$, $x^{\beta'}$ has a $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$, that we denote by $\mathfrak g$: $x^{\beta'}-\mathfrak g\in (\mathfrak G)$ and $\mathrm{supp}(\mathfrak g)\subseteq \mathcal{N}\,(\mathfrak j)_{\leq t-1}$. For every monomial $x_ix^\gamma \in \mathrm{supp}(x_i \mathfrak g)\cap \mathfrak j$, since $x^\gamma \in \mathrm{supp}(\mathfrak g)\subseteq \mathcal{N}\,(\mathfrak j)_{\leq t-1}$, we have $x_ix^\gamma=x^{\alpha'}x_\ell$, with $x^{\alpha'}\in \mathfrak j$ and $x_\ell<x_i$ (by Lemma \ref{descLex}). By the minimality of $x_i$, every monomial of $\mathrm{supp}(x_i \mathfrak g)\cap \mathfrak j$ has a $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$. Hence $x^\beta=x_i(x^{\beta'}-\mathfrak g)+x_i\mathfrak g$ has a $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$ as well. This is a contradiction, so $E$ is empty. \end{proof} \begin{definition}\label{baseaff} Let $\mathfrak j$ be a strongly stable ideal in $R$ and $\mathfrak G$ be a $[\mathfrak j,m]$-marked set, for some positive integer $m$. $\mathfrak G$ is a \emph{$[\mathfrak j,m]$-marked basis} if $R_{\leq t}=\langle \mathcal{N}\,(\mathfrak j)_{\leq t}\rangle\oplus (\mathfrak G)_{\leq t}$ for all $t\geq m$ as $K$-vector spaces. \end{definition} \begin{theorem}\label{unicitacomecrit} Let $\mathfrak j$ be a strongly stable ideal in $R$, $m$ be a positive integer and $\mathfrak G$ be a $[\mathfrak j,m]$-marked set. $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis if and only if {there is a $[\mathfrak j,m]$-completion $\overline{\mathfrak G}$} and the $[\mathfrak j,m]$-reduced forms modulo $(\mathfrak G)$ are unique. \end{theorem} \begin{proof} If $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis, by definition we have $R_{\leq t}=\langle \mathcal{N}\,(\mathfrak j)_{\leq t}\rangle\oplus (\mathfrak G)_{\leq t}$ for every $t\geq m$.
For every polynomial $\mathfrak g\in R$, there is a unique pair of polynomials $\mathfrak g_1 \in (\mathfrak G)_{\leq t}$, $\mathfrak g_2\in \langle\mathcal{N}\,(\mathfrak j)_{\leq t}\rangle$ such that $\mathfrak g=\mathfrak g_1+\mathfrak g_2$, with $t=\max\{m,\deg(\mathfrak g)\}$. Then, $\mathfrak g_2$ is a $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$ of $\mathfrak g$. By Proposition \ref{Efn}, the existence of $[\mathfrak j,m]$-reduced forms modulo $(\mathfrak G)$ is equivalent to the existence of a completion, which can be explicitly constructed as in the proof of Proposition \ref{Efn}. As for uniqueness, if $\mathfrak g \in R$ has two $[\mathfrak j,m]$-reduced forms modulo $(\mathfrak G)$, then their difference belongs to $\langle \mathcal{N}\,(\mathfrak j)_{\leq t}\rangle\cap (\mathfrak G)_{\leq t}=\{0\}$, and so they coincide. Conversely, we can apply Proposition \ref{Efn} in order to get $R_{\leq t}=\langle \mathcal{N}\,(\mathfrak j)_{\leq t}\rangle+ (\mathfrak G)_{\leq t}$ as $K$-vector spaces for every $t\geq m$; by the uniqueness of $[\mathfrak j,m]$-reduced forms modulo $(\mathfrak G)$, we get the direct sum. \end{proof} \begin{cor}\label{cor1} Let $\mathfrak j$ be a strongly stable ideal in $R$ and $\mathfrak G$ be a $[\mathfrak j,m]$-marked set, for some positive integer $m$, with $\overline{\mathfrak G}$ a completion. Then $\mathcal{N}\,(\mathfrak j)_{\leq t}$ generates $R_{\leq t}/(\mathfrak G)_{\leq t}$ as a $K$-vector space and $ \dim_K \mathfrak j_{\leq t}\leq \dim_K (\mathfrak G)_{\leq t}$, for every $t\geq m$.\\ If $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis, then $\mathcal{N}\,(\mathfrak j)_{\leq t}$ is a basis of $R_{\leq t}/(\mathfrak G)_{\leq t}$ as a $K$-vector space and $\dim_K \mathfrak j_{\leq t}=\dim_K (\mathfrak G)_{\leq t}$ for all $t\geq m$.
\end{cor} \begin{proof} If $\mathfrak G$ is a $[\mathfrak j,m]$-marked set with $\overline{\mathfrak G}$ a completion, then by Proposition \ref{Efn} every monomial in $\mathfrak j$ has a $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$, so every $\mathfrak g$ in $R_{\leq t}/(\mathfrak G)_{\leq t}$ is a linear combination of monomials in $\mathcal{N}\,(\mathfrak j)_{\leq t}$, for every $t\geq m$. Then $\mathcal{N}\,(\mathfrak j)_{\leq t}$ generates the $K$-vector space $R_{\leq t}/(\mathfrak G)_{\leq t}$.\\ If $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis, then every $\mathfrak g$ in $R_{\leq t}/(\mathfrak G)_{\leq t}$ is a linear combination of monomials in $\mathcal{N}\,(\mathfrak j)_{\leq t}$ and such a linear combination is unique, by Theorem \ref{unicitacomecrit}. Then $\mathcal{N}\,(\mathfrak j)_{\leq t}$ is a basis of $R_{\leq t}/(\mathfrak G)_{\leq t}$ for all $t\geq m$. \end{proof} We end this subsection with two properties of $[\mathfrak j,m]$-marked bases. The first one establishes that, given a $[\mathfrak j,m]$-marked basis $\mathfrak G$, every polynomial $\mathfrak g$ in $R$ is in $\mathfrak G_\ast$-relation with its $[\mathfrak j,m]$-normal form $\mathrm{Nf}(\mathfrak g)$, which exists by Theorem \ref{unicitacomecrit}. Hence, the $\mathfrak G_\ast$-reduction gives us a division algorithm to compute $\mathrm{Nf}(\mathfrak g)$ exactly. The second property is similar to a feature of border bases \cite{MMM,MMiran}. \begin{prop}\label{propbase} Let $\mathfrak j$ be a strongly stable ideal in $R$, $m$ be a positive integer and $\mathfrak G$ be a $[\mathfrak j,m]$-marked basis. Then: \begin{enumerate}[(i)] \item\label{propbasei} for every $\mathfrak g \in R$, the polynomial $\mathrm{Nf}(\mathfrak g)$ is the reduced polynomial we obtain by applying $\xrightarrow{\ \id G_\ast\ }$ to $\mathfrak g$; \item for every $x^\beta \in R$, for every $x_i$, $\mathrm{Nf}(x_ix^\beta)=\mathrm{Nf}(x_i \mathrm{Nf}(x^\beta))$.
\end{enumerate} \end{prop} \begin{proof}\ \begin{enumerate}[(i)] \item We consider $\mathfrak g \in R$ and we compute $\mathfrak g_1$, the reduced polynomial we obtain from $\mathfrak g$ by $\xrightarrow{\ \id G_\ast\ }$: $\mathrm{supp}(\mathfrak g_1)$ is a subset of $\mathcal{N}\,(\mathfrak j)$. We define $\overline t:=\max\{\deg(\mathfrak g),\deg(\mathfrak g_1),m\}$ and consider $\mathrm{Nf}(\mathfrak g)$: $\mathfrak g_1-\mathrm{Nf}(\mathfrak g)$ belongs to $(\mathfrak G)_{\leq\overline t}$ and its support is contained in $\mathcal{N}\,(\mathfrak j)_{\leq \overline t}$. Since $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis, $\mathfrak g_1=\mathrm{Nf}(\mathfrak g)$. \item By the previous item, for every $x^\beta \in R$, $x^\beta \xrightarrow{\ \id G_\ast\ } \mathrm{Nf}(x^\beta)$, $x_ix^\beta \xrightarrow{\ \id G_\ast\ } \mathrm{Nf}(x_ix^\beta)$ and $x_i\mathrm{Nf}(x^\beta)\xrightarrow{\ \id G_\ast\ } \mathrm{Nf}(x_i\mathrm{Nf}(x^\beta))$. Observe that $\mathfrak g_1:=x^\beta -\mathrm{Nf}(x^\beta)$, $\mathfrak g_2:=x_ix^\beta-\mathrm{Nf}(x_ix^\beta)$ and $\mathfrak g_3:=x_i\mathrm{Nf}(x^\beta)-\mathrm{Nf}(x_i\mathrm{Nf}(x^\beta))$ all belong to $(\mathfrak G)$ and the supports of $\mathrm{Nf}(x_ix^\beta)$ and $\mathrm{Nf}(x_i\mathrm{Nf}(x^\beta))$ are contained in $\mathcal{N}\,(\mathfrak j)$. Then \[ x_i\mathfrak g_1-\mathfrak g_2+\mathfrak g_3=\mathrm{Nf}(x_ix^\beta)-\mathrm{Nf}(x_i\mathrm{Nf}(x^\beta))\in (\mathfrak G)_{\leq t}\cap \langle\mathcal{N}\,(\mathfrak j)_{\leq t}\rangle \text{ for some integer } t. \] Since $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis, we obtain $\mathrm{Nf}(x_ix^\beta)=\mathrm{Nf}(x_i\mathrm{Nf}(x^\beta))$.
\end{enumerate} \end{proof} \subsection{Marked sets in $S$ and in $R$}\label{passaggio} Given a strongly stable ideal $\mathfrak j\subset R$ and a positive integer $m$, in the previous section we have introduced the notion of $[\mathfrak j,m]$-marked set $\mathfrak G$ endowed with a completion and, in particular, that of $[\mathfrak j,m]$-marked basis. We have also investigated their main properties and how these notions are related to the reduction procedure $\xrightarrow{\ \id G_\ast\ }$. Here we will show that the strong similarity between the above notions and those of $J_{\geq m}$-marked sets, bases and superminimal reduction, for $J$ a saturated strongly stable ideal in $S$ (see \cite[Theorem 2.2, Corollary 2.4]{CR}, \cite[Theorem 3.14]{BCLR}), reflects a precise correspondence, which we now investigate. We define two special integers that will be useful in the next statements and proofs: \begin{itemize} \item for every $x^\alpha \in S$, we define $t_\alpha:=\max\{0,m-\vert\alpha\vert\}$; \item for every $\mathfrak g \in R$, we define $m_{\mathfrak g}:=\max\{0,m- \deg \mathfrak g\}$. If we consider a marked polynomial $\mathfrak f_\alpha\in R$, we simply write $m_\alpha$ instead of $m_{\mathfrak f_{\alpha}}$. \end{itemize} Recall that, given an ideal $\mathfrak a\subset R$, its homogenization is the ideal $\mathfrak a^h\subset S$ generated by the homogenizations $\mathfrak f^h$ of all the polynomials $\mathfrak f$ of $\mathfrak a$. In general, if we homogenize a set of generators of $\mathfrak a\subset R$, we do not get a set of generators of $\mathfrak a^h$. However, the situation is far simpler for monomial ideals, and in particular for a strongly stable ideal $\mathfrak j$ of $R$ with monomial basis $B_{\mathfrak j}$: its homogenization is exactly the ideal $\mathfrak j^h=(B_{\mathfrak j})S$.
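The failure for arbitrary ideals can be made concrete; the following is a standard illustration (not part of the marked-set machinery developed above).

```latex
\begin{example}
Let $\mathfrak a=(x_1,\,x_2-x_1^2)\subset K[x_1,x_2]$. Since $x_2=(x_2-x_1^2)+x_1\cdot x_1$,
we have $\mathfrak a=(x_1,x_2)$ and hence $\mathfrak a^h=(x_1,x_2)\subset S=K[x_0,x_1,x_2]$.
On the other hand, the ideal generated by the homogenized generators is
$(x_1,\,x_0x_2-x_1^2)$, whose component in degree one is spanned by $x_1$ alone;
in particular it does not contain $x_2$, so it is strictly contained in $\mathfrak a^h$.
\end{example}
```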
Conversely, given a homogeneous ideal $A$ in $S$ generated by a finite set of homogeneous polynomials $\{F_1,\ldots,F_s\}$, the ideal $A^a$ in $R$ is generated by the set $\{F^a_1,\ldots,F^a_s\}$. For a saturated strongly stable ideal $J$ we get the strongly stable ideal $J^a=(B_J)R$. We now state a quite simple fact concerning the relation between the sous-escaliers of strongly stable ideals when we homogenize or dehomogenize them. This will be useful when considering homogenizations and dehomogenizations of marked sets and bases. \begin{prop}\ \begin{enumerate}[(i)] \item Let $\mathfrak j$ be a strongly stable ideal in $R$. The monomial $x^\gamma$ belongs to $\mathcal{N}\,(\mathfrak j)$ if and only if for every $r\geq 0$, $x_0^rx^\gamma$ belongs to $\mathcal{N}\,(\mathfrak j^{h})$. \item Let $J$ be a saturated strongly stable ideal in $S$. The monomial $x^\gamma$ belongs to $\mathcal{N}\,(J)$ if and only if $(x^\gamma)^a$ belongs to $\mathcal{N}\,(J^a)$. \end{enumerate} \end{prop} \begin{proof} The statements are consequences of the facts that $B_{\mathfrak j}=B_{\mathfrak j^h}$ and, if $J$ is saturated, that $B_J=B_{J^a}$. \end{proof} \begin{cor}\label{head and tail}\ \begin{enumerate} \item Let $\mathfrak j$ be a strongly stable ideal in $R$ and $\mathfrak f$ a polynomial in $R$ with $\deg(\mathfrak f)=t$. Then \[\mathrm{supp}(\mathfrak f)\subseteq \mathcal{N}\,(\mathfrak j)_{\leq t} \Leftrightarrow \text{for every }r\geq 0,\ \mathrm{supp}(x_0^r\mathfrak f^h)\subseteq \mathcal{N}\,(\mathfrak j^h)_{t+r}.\] \item Let $J$ be a saturated strongly stable ideal in $S$ and $f$ a homogeneous polynomial in $S$. Then \[ \mathrm{supp}(f)\subseteq \mathcal{N}\,(J)_{t}\Leftrightarrow \mathrm{supp}(f^a)\subseteq \mathcal{N}\,(J^a)_{\leq t}.
\] \end{enumerate} \end{cor} \begin{remark}\label{rm:polinomi marcati} In the same setting of Corollary \ref{head and tail}, take a marked polynomial $\mathfrak f=\mathrm{Ht}(\mathfrak f)-T(\mathfrak f)\in R$, with {$\mathrm{Ht}(\mathfrak f)\in \mathfrak j$ and} $\mathrm{supp}(T(\mathfrak f))\subset \mathcal{N}\,(\mathfrak j)$. Then, its homogenization $\mathfrak f^h\in S$ is a marked polynomial over $\mathfrak j^h$ with $\mathrm{Ht}(\mathfrak f^h)=x_0^{\deg(\mathfrak f)-\deg(\mathrm{Ht}(\mathfrak f))}\mathrm{Ht}(\mathfrak f)$, $T(\mathfrak f^h)=x_0^{\deg(\mathfrak f)-\deg(T(\mathfrak f))} T(\mathfrak f)^h$ and $\mathrm{supp}(T(\mathfrak f^h))\subset \mathcal{N}\,(\mathfrak j^h)$. Further, given a saturated strongly stable ideal $J$ of $S$, a positive integer $m$ and a marked polynomial $f=\mathrm{Ht}(f)-T(f)\in S$ {with $\mathrm{Ht}(f)\in J_{\geq m}$ and $\mathrm{supp}(T(f))\subseteq \mathcal{N}\,(J)$}, let $J^a$ be the strongly stable ideal generated in $R$ by the monomial basis of $J$. Then the polynomial $f^a\in R$ is marked over {$J^a$} with $\mathrm{Ht}(f^a)=\mathrm{Ht}(f)^a$ and $T(f^a)=T(f)^a$. \end{remark} \begin{lemma}\label{costr1} Let $\mathfrak j$ be a strongly stable ideal in $R$, $m$ be a positive integer, $\mathfrak G\subset R$ be a $[\mathfrak j,m]$-marked set and let $\overline{\mathfrak G}$ be a $[\mathfrak j,m]$-completion of $\mathfrak G$. Then, the following set of homogeneous marked polynomials in $S$ \begin{equation}\label{omG} G=\{x_0^{m_{\alpha}}\mathfrak f_\alpha^{h}\vert \mathfrak f_\alpha \in \mathfrak G\cup \overline{\mathfrak G}\}, \text{ where } \mathrm{Ht}(x_0^{m_\alpha}\mathfrak f_\alpha^{h})=x_0^{m_\alpha+\deg(\mathfrak f_\alpha)-\vert\alpha\vert}x^\alpha \end{equation} is a ${\mathfrak j^{h}}_{\geq m}$-marked set, with ${\mathfrak j^{h}}_{\geq m}$-marked superminimal set $sG=\{x_0^{m_{\alpha}}\mathfrak f_\alpha^{h}\vert \mathfrak f_\alpha \in \mathfrak G \}$.
\end{lemma} \begin{proof} Observe that the head terms of the marked polynomials in $G$ as defined in \eqref{omG} are pairwise different and they constitute the monomial basis of ${\mathfrak j^{h}}_{\geq m}$ in $S$; further, the supports of the tails of these polynomials are contained in $\mathcal{N}\,({\mathfrak j^{h}}_{\geq m})$ by Corollary \ref{head and tail}, hence $G$ is a ${\mathfrak j^{h}}_{\geq m}$-marked set in $S$. Furthermore, the head terms of the polynomials obtained from the marked polynomials in $\mathfrak G$ are exactly of the form $x_0^{t_\alpha}x^\alpha$, with $x^\alpha \in B_{\mathfrak j}$. These monomials are exactly the superminimal generators of ${\mathfrak j^{h}}_{\geq m}$, hence $sG=\{x_0^{m_{\alpha}}\mathfrak f_\alpha^{h}\vert \mathfrak f_\alpha \in \mathfrak G \}$. \end{proof} \begin{lemma}\label{costr2} Let $J$ be a saturated strongly stable ideal in $S$, $m$ be a positive integer, $G$ be a $J_{\geq m}$-marked set in $S$ and let $sG$ be its $J_{\geq m}$-marked superminimal set. Then the following set of marked polynomials in $R$ \begin{equation}\label{affG} \mathfrak G=\{F_\alpha^{a}\vert F_\alpha \in sG\},\text{ where }\mathrm{Ht}(F_{\alpha}^{a}):=\mathrm{Ht}(F_{\alpha})^{a} \end{equation} is a $[J^{a},m]$-marked set, having a completion defined as \begin{equation}\label{affCompl} \overline{\mathfrak G}=\{F_\beta^{a}\vert F_\beta \in G\setminus sG\},\text{ where } \mathrm{Ht}(F_{\beta}^{a}):=\mathrm{Ht}(F_{\beta})^{a}. \end{equation} \end{lemma} \begin{proof} The head terms of the marked polynomials in $\mathfrak G$ as defined in \eqref{affG} are pairwise different and they constitute the monomial basis of $J^{a}$. Furthermore the head terms of the polynomials in $\overline{\mathfrak G}$ constitute the set of monomials in ${J^{a}}_{\leq m}\setminus B_J$. Finally, the tails of the marked polynomials $F_\alpha^{a}$ are supported on $\mathcal{N}\,(J^{a})$ by Corollary \ref{head and tail} and their degrees are bounded by $\max\{m,\deg(\mathrm{Ht}(F_\alpha^{a}))\}$.
Then $\mathfrak G$ is a $[J^{a},m]$-marked set having $\overline{\mathfrak G}$ as completion. \end{proof} \begin{theorem}\label{equivbasi}\ \begin{enumerate} \item \label{punto1} Let $\mathfrak j$ be a strongly stable ideal in $R$, $m$ be a positive integer, $\mathfrak G\subset R$ be a $[\mathfrak j,m]$-marked basis. Then the ${\mathfrak j^{h}}_{\geq m}$-marked set $G\subset S$, as constructed in Lemma \ref{costr1}, is a ${\mathfrak j^{h}}_{\geq m}$-marked basis. \item \label{punto2} Let $J$ be a saturated strongly stable ideal in $S$, $m$ be a positive integer, $G\subset S$ be a $J_{\geq m}$-marked basis. Then the $[J^{a},m]$-marked set $\mathfrak G\subset R$, as constructed in Lemma \ref{costr2}, is a $[J^{a},m]$-marked basis. \end{enumerate} \end{theorem} \begin{proof}\ \begin{enumerate} \item $\mathfrak G\subset R$ is a $[\mathfrak j,m]$-marked basis, so there is a completion $\overline{\mathfrak G}$, by Theorem \ref{unicitacomecrit}. Using $\mathfrak G$ and $\overline{\mathfrak G}$, we construct $G$ as in Lemma \ref{costr1}: $G$ is a ${\mathfrak j^{h}}_{\geq m}$-marked set, hence, by \cite[Corollary 2.3]{CR}, $\mathcal{N}\,({\mathfrak j^{h}}_{\geq m})$ generates $S/(G)$ as a $K$-vector space: $S_t=\langle\mathcal{N}\,({\mathfrak j^{h}}_{\geq m})_t\rangle+(G)_t$ for every $t$. We now show that $\langle\mathcal{N}\,({\mathfrak j^{h}}_{\geq m})\rangle\cap (G)=\{0\}$. Consider a homogeneous polynomial $g\in \langle\mathcal{N}\,({\mathfrak j^{h}}_{\geq m})_t\rangle\cap (G)_t$. Thanks to the construction of $G$, $g^{a}$ is a polynomial belonging to $(\mathfrak G)_{\leq t}$ whose support is contained in $\mathcal{N}\,(\mathfrak j)_{\leq t}$; thus, $g^{a}\in \langle \mathcal{N}\,(\mathfrak j)_{\leq t}\rangle \cap (\mathfrak G)_{\leq t}$ and $g^{a}$ is the zero polynomial, by Definition \ref{baseaff}. This implies that $g$ is the zero polynomial too. Then $G$ is a ${\mathfrak j^{h}}_{\geq m}$-marked basis, by Definition \ref{polymarcato}.
\item $G\subset S$ is a $J_{\geq m}$-marked basis and we construct $\mathfrak G$ and $\overline{\mathfrak G}$ as in Lemma \ref{costr2}: $\mathfrak G$ is a $[J^{a},m]$-marked set and $\overline{\mathfrak G}$ a completion. By Corollary \ref{cor1}, $\mathcal{N}\,(J^{a})_{\leq t}$ generates $R_{\leq t}/(\mathfrak G)_{\leq t}$ as a $K$-vector space for every $t\geq m$, hence $R_{\leq t}=\langle\mathcal{N}\,(J^{a})_{\leq t}\rangle + (\mathfrak G)_{\leq t}$. We now prove that $\langle\mathcal{N}\,(J^{a})_{\leq t}\rangle \cap (\mathfrak G)_{\leq t}=\{0\}$ for every $t\geq m$. Consider $\mathfrak g\in \langle\mathcal{N}\,(J^{a})_{\leq t}\rangle \cap (\mathfrak G)_{\leq t}$, with $t=\max\{m,\deg(\mathfrak g)\}$. Thanks to the construction of $\mathfrak G$ and $\overline{\mathfrak G}$, the polynomial $x_0^{m_{\mathfrak g}}\mathfrak g^{h}$ belongs to $(G)_t$ and its support is contained in $\mathcal{N}\,(J_{\geq m})_t$. By Definition \ref{polymarcato}, $x_0^{m_{\mathfrak g}}\mathfrak g^{h}$ is the zero polynomial in $S$, hence $\mathfrak g$ is the zero polynomial in $R$. Then $\mathfrak G$ is a $[J^{a},m]$-marked basis, by Definition \ref{baseaff}. \end{enumerate} \end{proof} \begin{prop}\label{homrid} Let $\mathfrak j$ be a strongly stable ideal in $R$, $m$ be a positive integer, $\mathfrak G\subset R$ be a $[\mathfrak j,m]$-marked set with $\overline{\mathfrak G}=\{\mathfrak f_\beta\}$ a $[\mathfrak j,m]$-completion. Let $G\subset S$ be the ${\mathfrak j^{h}}_{\geq m}$-marked set constructed from $\mathfrak G\cup \overline{\mathfrak G}$ as in Lemma \ref{costr1}. Consider $\mathfrak g\in R$, $\mathfrak g\xrightarrow{\ \id G_\ast\ } \mathfrak g_1$. Then there is $t_0$ such that for every $t\geq t_0$, $x_0^t\mathfrak g^{h}\xrightarrow{\ sG_\ast\ } x_0^{t_1}\mathfrak g_1^{h}$ with $t+\deg(\mathfrak g)=t_1+\deg(\mathfrak g_1)$. \end{prop} \begin{proof} We prove that the claim holds for each reduction step. Let $\mathfrak g$ be a polynomial in $R$.
Consider $x^\epsilon \in \mathrm{supp}(\mathfrak g)\cap \mathfrak j$ and $x^\epsilon=x^\alpha\ast_{\mathfrak j} x^\delta$. In order to reduce $x^\epsilon$, we perform the step $\mathfrak g\xrightarrow{\ \id G_\ast\ } \mathfrak g-c_\epsilon x^\delta \mathfrak f_\alpha$, where $c_\epsilon$ is the coefficient of $x^\epsilon$ in $\mathfrak g$. Observe that: \[ (\mathfrak g-c_\epsilon x^\delta \mathfrak f_\alpha)^{h}=\begin{cases} \mathfrak g^{h}-c_\epsilon x_0^{\deg(\mathfrak g)-\deg(x^\delta\mathfrak f_\alpha)}x^\delta \mathfrak f_\alpha^{h}\quad \text{ if } \deg(\mathfrak g)\geq \deg(x^\delta \mathfrak f_\alpha)\\ x_0^{\deg(x^\delta\mathfrak f_\alpha) -\deg(\mathfrak g)}\mathfrak g^{h}-c_\epsilon x^\delta \mathfrak f_\alpha^{h} \quad \text{ otherwise} \end{cases} \] We now show that there is $t_0$ such that for every $t\geq t_0$, $x_0^t\mathfrak g^{h}\xrightarrow{\ sG_\ast\ } x_0^{t_1}(\mathfrak g-c_\epsilon x^\delta \mathfrak f_\alpha)^{h}$, where these two homogeneous polynomials have the same degree. Since $x^\alpha \in B_{\mathfrak j^{h}}$, there is $x_0^{m_{\alpha}}\mathfrak f_\alpha^{h}\in sG$ such that $\mathrm{Ht}(x_0^{m_{\alpha}}\mathfrak f_\alpha^{h})=x_0^{t_\alpha}x^\alpha \in sB_{\mathfrak j^h}$. We will apply a step of superminimal reduction using $x_0^{m_{\alpha}}\mathfrak f_\alpha^{h}\in sG$. In $\mathfrak g^h$, the monomial $x^\epsilon$ becomes the monomial $x_0^{\deg(\mathfrak g)-\vert\epsilon\vert}x^\epsilon=x^\alpha \ast_{\mathfrak j^{h}}(x^\delta x_0^{\deg(\mathfrak g)-\vert\epsilon\vert}) \in \mathrm{supp}(\mathfrak g^{h})\cap {\mathfrak j^{h}}_{\geq m}$. In $x^\delta \mathfrak f_\alpha^h$, the monomial $x^\epsilon$ becomes $x_0^{\deg(\mathfrak f_\alpha)-\vert\alpha\vert}x^\delta x^\alpha$. If $\deg(\mathfrak g)\geq \deg(x^\delta \mathfrak f_\alpha)$, we define $r:=\max\{0, m_\alpha - \deg(\mathfrak g)+\deg(x^\delta \mathfrak f_\alpha)\}$.
The monomial $x_0^r$ is the smallest power of $x_0$ that allows us to perform a step of superminimal reduction by $x_0^{m_\alpha}\mathfrak f_\alpha^h$. Reducing by $\xrightarrow{\ sG_\ast\ }$, we obtain: \[ x_0^{r}\mathfrak g^h \xrightarrow{\ sG_{\ast}\ } x_0^{r} \mathfrak g^h - c_\epsilon x_0^{r+\deg(\mathfrak g)-\deg(x^\delta \mathfrak f_\alpha)} x^\delta \mathfrak f_\alpha^h =x_0^r(\mathfrak g-c_\epsilon x^\delta \mathfrak f_\alpha)^{h}. \] We observe that in this case $t_0= r=\max\{0, m_\alpha - \deg(\mathfrak g)+\deg(x^\delta \mathfrak f_\alpha)\} $ and for every $t\geq t_0$, $t_1=t$. If $\deg(\mathfrak g)< \deg(x^\delta \mathfrak f_\alpha)$, we define $r:=m_\alpha+\deg(x^\delta \mathfrak f_\alpha)-\deg(\mathfrak g)$. Again, in this case the monomial $x_0^r$ is the smallest power of $x_0$ that allows us to perform a step of superminimal reduction by $x_0^{m_\alpha}\mathfrak f_\alpha^h$. Reducing by $\xrightarrow{\ sG_\ast\ }$, we obtain: \[ x_0^{r}\mathfrak g^h \xrightarrow{\ sG_{\ast}\ } x_0^{r} \mathfrak g^h - c_\epsilon x_0^{m_\alpha} x^\delta \mathfrak f_\alpha^h =x_0^{m_\alpha} (\mathfrak g-c_\epsilon x^\delta \mathfrak f_\alpha)^{h}. \] We observe that in this case $t_0=r=m_\alpha+\deg(x^\delta \mathfrak f_\alpha)-\deg(\mathfrak g)$ and for every $t\geq t_0$, $t_1=t-\deg(x^\delta \mathfrak f_\alpha)+\deg(\mathfrak g)$. \end{proof} \begin{prop}\label{affinrid} Let $J$ be a saturated strongly stable ideal in $S$, $m$ be a positive integer and $G$ be a $J_{\geq m}$-marked set. Let ${\mathfrak G}\subset R$ be the $[J^{a}, m]$-marked set constructed from $G$ as in Lemma \ref{costr2}. If $g$ is a homogeneous polynomial belonging to $S$ and there is $t$ such that $x_0^t g\xrightarrow{\ sG_\ast\ }g_1$ then $g^{a}\xrightarrow{\ \id G_\ast\ } {g_1}^{a}$. \end{prop} \begin{proof} We prove that the claim holds for each reduction step. Consider $x^\epsilon \in \mathrm{supp}(g)\cap J$: $x^\epsilon=x^\alpha\ast_{J}x^\delta$.
If $\vert\alpha\vert \geq m$, then the reduction step is $g\xrightarrow{\ sG_\ast\ }g-c_\epsilon x^\delta F_\alpha$. Correspondingly, if we consider $(x^\epsilon)^{a}\in \mathrm{supp}(g^{a})\cap J^{a}$, then $(x^\epsilon)^{a}=x^\alpha\ast_{J^{a}}{(x^\delta)^{a}}$, and so the reduction in the non-homogeneous case is exactly ${g^{a}}\xrightarrow{\ \id G_\ast\ } g^{a}-c_\epsilon {(x^\delta)^{a}} {F_\alpha}^{a}$. If $\vert\alpha\vert<m$, then the reduction step is $x_0^t g\xrightarrow{\ sG_\ast\ }x_0^tg-c_\epsilon x^{\delta'} F_{\alpha'}$ for a suitable power $t$ such that $x^{\alpha'}=x^\alpha x_0^{m-\vert\alpha\vert}$ and $x^{\delta'}=x^\delta x_0^{t-m+\vert\alpha\vert}$. Again, if we consider $(x^\epsilon)^{a}$, it decomposes as $x^\alpha\ast_{J^{a}} (x^\delta)^{a}=(x^{\alpha'})^a \ast_{J^a} (x^{\delta'})^a$, hence the reduction in the non-homogeneous case is exactly \[ {g}^{a}\xrightarrow{\ \id G_\ast\ } g^{a}-c_\epsilon (x^{\delta'})^{a} (F_{\alpha'})^{a}=(x_0^tg-c_\epsilon x^{\delta'} F_{\alpha'})^{a}. \] \end{proof} Recall that, for suitable term orders, Gr\"obner bases behave well with respect to homogenization, in the sense that the homogenizations of the polynomials of such a Gr\"obner basis $Gb$ generate the homogenization of the ideal $(Gb)$ (for example, see \cite[Chapter 8]{CLO}). Theorem \ref{equivbasi} shows that $[\mathfrak j,m]$-marked bases in $R$ have an analogous behaviour. \begin{prop}\label{omogbene} Let $\mathfrak j$ be a strongly stable ideal in $R$, $m$ be a positive integer, $\mathfrak G$ be a $[\mathfrak j,m]$-marked basis. Then ${(\mathfrak G)^h}_{\geq m}={(G)_{\geq m}}=(\mathfrak G^h\cup \overline{\mathfrak G}^h)_{\geq m}$ and $(\mathfrak G)^h=(\mathfrak G^h\cup \overline{\mathfrak G}^h)^{\textnormal{sat}}$. \end{prop} \begin{proof} For every ideal in $R$, we can obtain its homogenization in $S$ by considering the ideal generated by the homogenizations of a set of generators and saturating it with respect to $x_0$ \cite[Corollary 4.3.8]{KR2}.
In our case, we consider the set of generators $\mathfrak G\cup \overline{\mathfrak G}$ and obtain ${(\mathfrak G)^h}_{\geq m}=((\mathfrak G^h\cup \overline{\mathfrak G}^h):x_0^\infty)_{\geq m}$. On the other hand, consider the ${\mathfrak j^h}_{\geq m}$-marked basis $G$ constructed from $\mathfrak G\cup \overline{\mathfrak G}$ in Lemma \ref{costr1}. We have that $(G)\subset S$ belongs to $\mathcal M\mathrm f({\mathfrak j^h}_{\geq m})$, hence $(G)$ is $m$-saturated, by Corollary \ref{sattronc}, and $((G):x_0^\infty)_{\geq m}=(G)_{\geq m}$, by Theorem \ref{bastaaffine}. The inclusion $((G):x_0^\infty)_{\geq m}\subseteq ((\mathfrak G^h\cup \overline{\mathfrak G}^h):x_0^\infty)_{\geq m}$ is obvious, by the construction of $G$. For the other inclusion, consider $g \in ((\mathfrak G^h\cup \overline{\mathfrak G}^h):x_0^\infty)_{\geq m}$: there is $t$ such that $x_0^t g=\sum a_jx^{\delta_j}\mathfrak f_{\alpha_j}^h$, with $\mathfrak f_{\alpha_j}\in \mathfrak G\cup \overline{\mathfrak G}$. Let $\overline m$ be the maximum of the integers $m_{\alpha_j}$. Then $x_0^{t+\overline m}g$ belongs to $(G)$, hence the other inclusion holds. So far we have proved that ${(\mathfrak G)^h}_{\geq m}=(G)_{\geq m}$. Observe now that $(G)_{\geq m}\subseteq (\mathfrak G^h\cup \overline{\mathfrak G}^h)_{\geq m}$ by construction and $(\mathfrak G^h\cup \overline{\mathfrak G}^h)_{\geq m}\subseteq {(\mathfrak G)^h}_{\geq m}$, by definition of homogenization of an ideal. We get a chain of inclusions which turn out to be equalities, hence $(\mathfrak G^h\cup \overline{\mathfrak G}^h)_{\geq m}=(\mathfrak G)^h_{\geq m}$. Saturating both homogeneous ideals, we get $(\mathfrak G)^h=(\mathfrak G^h\cup \overline{\mathfrak G}^h)^{\textnormal{sat}}$. \end{proof} One may think that the homogenization of a $[\mathfrak j,m]$-marked basis $\mathfrak G$ with its completion generates $(\mathfrak G)^h$, without saturating. The following example shows that this is not the case.
\begin{example} Consider $\mathfrak G=\{x_2-x_1^3+x_1^2+x_1+2,x_1^4-2x_1^3-2-x_1\}\subset K[x_1,x_2]$, which is a $[(x_2,x_1^4),3]$-marked basis. Setting $\mathfrak j=(x_2,x_1^4)$ and $m=3$, we can compute by means of $\xrightarrow{\ \id G_\ast\ }$ the $[\mathfrak j,m]$-reduced forms modulo $(\mathfrak G)$ of the monomials in $\mathfrak j_{\leq m}\setminus B_{\mathfrak j}$ and construct in this way the following polynomials: \[\begin{array}{l} \mathfrak f_1:=x_2^3-9x_1^3-10-7x_1+21x_1^2, \quad \mathfrak f_2:= x_2^2x_1-x_1^3+6+x_1-3x_1^2,\\ \mathfrak f_3:= x_2^2+3x_1^3-2-3x_1-7x_1^2, \quad \mathfrak f_4:=x_2x_1^2-x_1^3-2-3x_1+x_1^2, \quad \mathfrak f_5:=x_2x_1-x_1^3-2+x_1+x_1^2. \end{array} \] We define the completion $\overline{\mathfrak G}$ as the set $\{\mathfrak f_1, \mathfrak f_2,\mathfrak f_3,\mathfrak f_4,\mathfrak f_5\}$. In this case, observe that ${(\mathfrak G)^h}_{\geq 3}$ is equal to $(\mathfrak G^h\cup \overline{\mathfrak G}^h)_{\geq 3}$, but the equality does not hold if we consider the truncation of both ideals from degree $2$ on. Indeed, the polynomial $3\cdot \mathfrak f_5+\mathfrak f_3$ has degree 2, its homogenization belongs to $(\mathfrak G)^h$ by definition, but it does not belong to $(\mathfrak G^h\cup \overline{\mathfrak G}^h)$. \end{example} \subsection{Effective criterion for $[\mathfrak j,m]$-marked bases}\label{sectionJmBasis} The following theorem will give an algorithmic criterion to test whether a $[\mathfrak j,m]$-marked set of polynomials is a $[\mathfrak j,m]$-marked basis. The interest of this algorithm is twofold: on the one hand, thanks to the results of Section \ref{passaggio}, {we can avoid the multiplication by powers of $x_0$, which represents a bottleneck for the efficiency of the algorithm presented in \cite{BCLR}}; on the other hand, this algorithm will give interesting theoretical results deepening the understanding of the structure of marked families, which we will explore in Section \ref{conseguenze}.
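The overall control flow of such a test can be sketched schematically. The following Python fragment is only an illustration under assumed interfaces: all names are hypothetical, and the $\mathfrak G_\ast$-reduction is abstracted as a callable, since its implementation depends on the chosen representation of polynomials.

```python
# Schematic sketch only: all names are hypothetical, and the G_*-reduction
# is supplied from outside as `reduce_fn`, since its implementation depends
# on the chosen representation of polynomials.
# Polynomials are dicts {exponent_tuple: coefficient}; {} is the zero polynomial.

def multiply(i, f):
    """Multiply the polynomial f by the variable with index i."""
    return {tuple(e + (1 if k == i else 0) for k, e in enumerate(mono)): c
            for mono, c in f.items()}

def is_marked_basis(G, low_monomials, reduce_fn, variables, min_var, in_low_sous_escalier):
    """G: list of pairs (head_monomial, marked_polynomial);
    low_monomials: the monomials of j_{<=m} \\ B_j;
    reduce_fn: performs the G_*-reduction, returning a reduced polynomial;
    min_var: index of min(x^alpha) for a head term;
    in_low_sous_escalier: membership test for N(j)_{<=m}."""
    # Condition (i): x_i * f_alpha must reduce to 0 for every x_i > min(x^alpha).
    for head, f in G:
        for i in variables:
            if i > min_var(head) and reduce_fn(multiply(i, f)) != {}:
                return False
    # Condition (ii): every monomial of j_{<=m} \ B_j must reduce to a
    # polynomial supported in N(j)_{<=m}.
    for mono in low_monomials:
        if not all(in_low_sous_escalier(t) for t in reduce_fn({mono: 1})):
            return False
    return True
```

In an actual implementation, `reduce_fn` would perform the $\mathfrak G_\ast$-reduction, whose Noetherianity guarantees that each call terminates.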
\begin{theorem}\label{critsupermin} Let $\mathfrak j$ be a strongly stable ideal in $R$ and $\mathfrak G$ be a $[\mathfrak j,m]$-marked set, for some positive integer $m$. Then $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis if and only if: \begin{enumerate}[(i)] \item \label{condii} for every $\mathfrak f_\alpha \in \mathfrak G$ and every $x_i>\min(x^\alpha)$, $x_i\mathfrak f_\alpha\xrightarrow{\ \id G_\ast\ } 0$; \item \label{condi} for all $x^\beta \in {\mathfrak j}_{\leq m}\setminus B_{\mathfrak j}$, $x^\beta\xrightarrow{\ \id G_\ast\ } \mathfrak g_\beta$ with $\mathrm{supp}(\mathfrak g_\beta)\subseteq \mathcal{N}\,(\mathfrak j)_{\leq m}$. \end{enumerate} \end{theorem} \begin{proof} If $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis, then there is a completion $\overline{\mathfrak G}$ and every polynomial $\mathfrak g$ in $R$ is in $\mathfrak G_{\ast}$-relation with $\mathrm{Nf}(\mathfrak g)$, by Proposition \ref{propbase}(i). Under the present hypotheses, for every $x^\beta\in {\mathfrak j}_{\leq m}\setminus B_{\mathfrak j}$ we have $x^\beta\xrightarrow{\ \id G_\ast\ } \mathrm{Nf}(x^\beta)$ with $\mathrm{supp}(\mathrm{Nf}(x^\beta)) \subseteq \mathcal{N}\,(\mathfrak j)_{\leq m}$, and analogously, for every $\mathfrak f_\alpha \in \mathfrak G$ and every $x_i>\min(x^\alpha)$, the reduction $\xrightarrow{\ \id G_\ast\ }$ applied to the polynomial $x_i\mathfrak f_\alpha$ leads to 0 (by Proposition \ref{propbase}(\ref{propbasei})). Conversely, we now assume that conditions (\ref{condii}) and (\ref{condi}) hold. By (\ref{condi}), we define a completion $\overline{\mathfrak G}$ in the following way: \[ \overline{\mathfrak G}=\{\mathfrak f_\beta=x^\beta-\mathfrak g_\beta: x^\beta \in {\mathfrak j}_{\leq m}\setminus B_{\mathfrak j}, x^\beta\xrightarrow{\ \id G_\ast\ } \mathfrak g_\beta\}, \quad \mathrm{Ht}(\mathfrak f_\beta)=x^\beta. \] Using $\mathfrak G$ and $\overline{\mathfrak G}$, by Lemma \ref{costr1} we construct the ${\mathfrak j^{h}}_{\geq m}$-marked set $G\subset S$. 
By Theorem \ref{equivbasi}, if we prove that $G$ is a ${\mathfrak j^{h}}_{\geq m}$-marked basis, we obtain the claim. Indeed, $G$ satisfies the conditions of \cite[Proposition 5.5]{BCLR}: \begin{enumerate} \item by Proposition \ref{homrid} and condition (\ref{condi}), for every $x^{\beta'} \in B_{{\mathfrak j^{h}}_{\geq m}}$, there exists $t$ such that \[ x_0^t\cdot x^{\beta'}\xrightarrow{\ sG_\ast\ }g_\beta, \] where $x^{\beta'}=x_0^{m-\vert\beta\vert}x^\beta$ for some $x^\beta \in \mathfrak j_{\leq m}\setminus B_{\mathfrak j}$ and $g_\beta=x_0^{t_1}\left(\mathrm{Nf}(x^\beta)\right)^h$, hence $\mathrm{supp}(g_\beta)\subset \mathcal{N}\,(\mathfrak j^{h})_{\leq m}$ by Corollary \ref{head and tail}. \item by Proposition \ref{homrid} and condition (\ref{condii}), for every polynomial $F_\alpha\in sG$ and every $x_i>\min((x^{\alpha})^{a})$ there exists $t$ such that \[ x_0^t x_i F_\alpha\xrightarrow{\ sG_\ast\ }0. \] \end{enumerate} Then $(G)\subset S$ belongs to $\mathcal M\mathrm f({\mathfrak j^{h}}_{\geq m})$ and $G$ is a ${\mathfrak j^{h}}_{\geq m}$-marked basis in $S$ \cite[Proposition 1.11]{BCLR}. Hence $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis in $R$. \end{proof} The following results highlight another feature of $[\mathfrak j,m]$-marked bases analogous to a well-known property of Gr\"obner bases, involving the notion of syzygy. \begin{cor}\label{sollsiz} Let $\mathfrak j$ be a strongly stable ideal in $R$, $m$ be a positive integer and $\mathfrak G$ be a $[\mathfrak j,m]$-marked basis. Then every homogeneous syzygy of $\mathfrak j$ lifts to a syzygy of $\mathfrak G$. \end{cor} \begin{proof} For a strongly stable ideal $\mathfrak j$ in $R$, a set of generators for the module of first syzygies is given by the pairs $(x_i,x^\delta)$ such that $x_ix^{\alpha_1}-x^\delta x^{\alpha_2}=0$, where $x^{\alpha_1},x^{\alpha_2}\in B_{\mathfrak j}$, $x_i>\max(x^{\alpha_1})$ and $x^\delta x^{\alpha_2}=x^{\alpha_2}\ast_{\mathfrak j} x^\delta$ \cite{EK}. 
Since $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis, for every $\mathfrak f_\alpha\in \mathfrak G$ and every $x_i>\min(x^\alpha)$ we have $x_i\mathfrak f_\alpha\xrightarrow{\ \id G_\ast\ } 0$, by Theorem \ref{critsupermin}(\ref{condii}). This means that $x_i\mathfrak f_\alpha=\sum_j c_jx^{\delta_j}\mathfrak f_{\alpha_j}$, where $x^{\delta_j}x^{\alpha_j}=x^{\alpha_j}\ast_{\mathfrak j}x^{\delta_j}$. In particular, among the polynomials $x^{\delta_j}\mathfrak f_{\alpha_j}$ there is $x^{\delta'}\mathfrak f_{\alpha'}$ such that $x_ix^\alpha=x^{\alpha'}\ast_{\mathfrak j}x^{\delta'}$. \end{proof} One of the key points of Theorem \ref{critsupermin} is condition (\ref{condi}), because it allows us to define a $[\mathfrak j,m]$-completion for a $[\mathfrak j,m]$-marked set: indeed, the existence of such a completion is a necessary hypothesis in almost all the characterizations of $[\mathfrak j,m]$-marked bases we have given. One may think that the definition of a $[\mathfrak j,m]$-completion is always possible starting from a $[\mathfrak j,m]$-marked set, or that it is sufficient to assume that a suitable subset $A$ of monomials in $\mathfrak j_{\leq m}\setminus B_{\mathfrak j}$ has $[\mathfrak j,m]$-reduced forms of degree $\leq m$. { A reasonable subset of monomials to consider could be the \emph{border basis} of $\mathfrak j$ \cite[Section 1]{MMiran}: \[ \mathcal B(\mathfrak j)=\{x_ix^\gamma \in \mathfrak j\mid x^\gamma \in \mathcal{N}\,(\mathfrak j)\}. \] The following example shows that this is not the case: even though the monomials in $\left(\mathfrak j_{\leq m}\setminus B_{\mathfrak j}\right)\cap \mathcal B(\mathfrak j)$ reduce by $\xrightarrow{\ \id G_\ast\ }$ to polynomials of degree $\leq m$, the same property may fail for other monomials in $\mathfrak j_{\leq m}\setminus B_{\mathfrak j}$. 
} \begin{example} Consider $\mathfrak j=(x_3,x_2^2)\subset K[x_1,x_2,x_3]$, $m=3$ and the $[\mathfrak j,m]$-marked set $\mathfrak G=\{\mathfrak f_1,\mathfrak f_2\}$ with \[ \mathfrak f_1=x_3-x_1x_2+x_1^2-x_2+2x_1-3, \quad \mathfrak f_2=x_2^2-x_1x_2-x_2-1, \] with $\mathrm{Ht}(\mathfrak f_1)=x_3$, $\mathrm{Ht}(\mathfrak f_2)=x_2^2$, $\mathrm{supp}(T(\mathfrak f_i))\subseteq \mathcal{N}\,(\mathfrak j)_{\leq 3}=\{1,x_2,x_1,x_1x_2,x_1^2,x_1^2x_2,x_1^3\}$, $i=1,2$. The $[\mathfrak j,m]$-marked set $\mathfrak G$ fulfills condition (\ref{condii}) of Theorem \ref{critsupermin}. Concerning condition (\ref{condi}), instead of considering the whole set \[ \mathfrak j_{\leq 3}\setminus B_{\mathfrak j}=\{x_3^2, x_2x_3,x_1x_3, x_3^3,x_3^2x_2,x_3^2x_1,x_3x_2^2,x_3x_2x_1,x_3x_1^2,x_2^2x_1,x_2^3\}, \] we just consider its subset $A$ containing the monomials which lie on the border of $\mathfrak j$: \[ A:=\left(\mathfrak j_{\leq 3}\setminus B_{\mathfrak j}\right)\cap \mathcal B(\mathfrak j)=\{x_1x_2^2,x_1x_3,x_1^{2}x_3,x_2x_3,x_1x_2x_3\}. \] The $[\mathfrak j,m]$-marked set $\mathfrak G$ also fulfills condition (\ref{condi}) of Theorem \ref{critsupermin} for the monomials in $A$. {However, the reduction by $\xrightarrow{\ \id G_\ast\ }$ of the monomial $x_1x_3^2 \in \mathfrak j_{\leq 3}\setminus B_{\mathfrak j}$ has degree strictly larger than $m=3$, hence condition (\ref{condi}) of Theorem \ref{critsupermin} is not satisfied. } \end{example} \subsection{Minimal $m$ for a marked basis and the 0-dimensional case}\label{conseguenze} In the present subsection we focus our attention on two issues that allow faster explicit computations: first, we show that, under suitable hypotheses on $m$ and on the monomials of degree $m+1$ in ${B_{\mathfrak j}}$, a $[\mathfrak j,m]$-marked basis is actually a $[\mathfrak j,m-1]$-marked basis; secondly, we consider the case of $0$-dimensional ideals and improve the criterion of Theorem \ref{critsupermin} in this special case. 
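Before proceeding, we note that the failing reduction in the example closing the previous subsection can be reproduced with a short computation. The SymPy sketch below is an illustration we add (not part of the original text); since the heads $x_3$ and $x_2^2$ of $\mathfrak f_1,\mathfrak f_2$ are coprime, $\{\mathfrak f_1,\mathfrak f_2\}$ is a lex Gr\"obner basis, so the ordinary division remainder coincides with the marked reduced form, whose support lies in $\mathcal{N}\,(\mathfrak j)$.

```python
# Hypothetical SymPy check that the reduced form of x1*x3^2 exceeds degree 3.
from sympy import symbols, reduced, Poly

x1, x2, x3 = symbols('x1 x2 x3')

# The marked set of the example, with heads x3 and x2^2.
f1 = x3 - x1*x2 + x1**2 - x2 + 2*x1 - 3
f2 = x2**2 - x1*x2 - x2 - 1

# Divide x1*x3^2 by {f1, f2} w.r.t. lex with x3 > x2 > x1; the remainder
# is supported on N(j) = {x1^a, x1^a*x2}: no x3 and no power of x2 above 1.
_, r = reduced(x1*x3**2, [f1, f2], x3, x2, x1, order='lex')
print(r)
print(Poly(r, x1, x2, x3).total_degree())  # 5, strictly larger than m = 3
```

The remainder is $x_1^5+4x_1^4-x_1^3-10x_1^2+10x_1-x_1^4x_2-3x_1^3x_2+5x_1^2x_2+7x_1x_2$, of degree $5>3$, in accordance with the example.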
The first result is analogous to \cite[Theorem 5.7]{BCLR}, but the proof given here for a $[\mathfrak j,m]$-marked basis is much easier and gives a better insight into the algebraic structure of marked bases. The second result was observed by the authors of \cite{BCLR} after computing several examples of marked schemes: using the \lq\lq affine\rq\rq\ constructions and tools that we have developed so far, the proof is immediate. We consider a strongly stable ideal $\mathfrak j\subseteq R$. We now show that, under a suitable hypothesis on the monomials of $B_{\mathfrak j}$ of degree $m+1$, if $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis, then the $[\mathfrak j,m]$-normal form of every monomial in $R$ has degree bounded by the maximum of $m-1$ and the degree of the monomial itself. \begin{lemma}\label{ro} Let $\mathfrak j$ be a strongly stable ideal in $R$ and $m$ be a positive integer such that no monomial in $B_{\mathfrak j}$ of degree $m+1$ is divisible by $x_1$. Consider a $[\mathfrak j,m]$-marked basis $\mathfrak G$. {Then for every $x^\beta \in R$, $\mathrm{supp}(\mathrm{Nf}(x^\beta))\subseteq \mathcal{N}\,(\mathfrak j)_{\leq t}$, with $t=\max\{m-1,\vert\beta\vert\}$.} \end{lemma} \begin{proof} First we recall that, since $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis, $[\mathfrak j,m]$-reduced forms modulo $(\mathfrak G)$ are unique and they can be computed by $\xrightarrow{\ \id G_\ast\ }$ (Proposition \ref{propbase}). Then for every $x^\beta \in R$, $x^\beta\xrightarrow{\ \id G_\ast\ } \mathrm{Nf}(x^\beta)$. If $x^\beta \in R\setminus \mathfrak j$, then $\mathrm{Nf}(x^\beta)=x^\beta \in \mathcal{N}\,(\mathfrak j)_{\leq \vert\beta\vert}$. If $x^\beta\in \mathfrak j$ and $\vert\beta\vert>m$, then $t=\vert\beta\vert$ and, by the definition of $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$, we have $\mathrm{supp}(\mathrm{Nf}(x^\beta))\subseteq \mathcal{N}\,(\mathfrak j)_{\leq \vert\beta\vert}$. 
If $x^\beta \in {\mathfrak j}$ with $\vert\beta\vert\leq m-1$, then its $[\mathfrak j,m]$-normal form has degree at most $m$. Let $x^\epsilon$ be a monomial in $\mathrm{supp}(\mathrm{Nf}(x^\beta))$ with $\vert\epsilon \vert=m$ and consider $x_1x^\beta$: by Proposition \ref{propbase}, we have $\mathrm{Nf}(x_1\mathrm{Nf}(x^\beta))=\mathrm{Nf}(x_1x^\beta)$. In particular, $\mathrm{supp}(\mathrm{Nf}(x_1x^\beta))\subset \mathcal{N}\,(\mathfrak j)_{\leq m}$, hence $x_1x^\epsilon$ does not appear in $\mathrm{Nf}(x_1x^\beta)$, because $1+\vert\epsilon\vert= m+1$, and so $x_1x^\epsilon$ is reducible by $\xrightarrow{\ \id G_\ast\ }$. Consider $x_1x^\epsilon=x^{\alpha}\ast_{\mathfrak j} x^{\delta'}$: by Lemma \ref{descLex}, we obtain $x^{\delta'}<_{\mathtt{Lex}}x_1$, which means $x^{\delta'}=1$. But then $x_1$ divides $x^{\alpha}\in B_{\mathfrak j}$ with $\vert\alpha\vert=m+1$, contradicting the hypothesis. \end{proof} For a strongly stable ideal $\mathfrak j\subseteq R$, let $\rho$ be its satiety (Definition \ref{regsat}), which is the maximal degree of a monomial in $B_{\mathfrak j}$ divisible by $x_1$ (Lemma \ref{regsatB}). \begin{theorem}\label{ro2} Let $\mathfrak j$ be a strongly stable ideal in $R$ and $m$ be a positive integer such that no monomial in $B_{\mathfrak j}$ of degree $m+1$ is divisible by $x_1$. Consider a $[\mathfrak j,m]$-marked set of polynomials $\mathfrak G$. Then $\mathfrak G$ is a $[\mathfrak j,m-1]$-marked basis if and only if $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis. In particular, for every $m\geq \rho$, $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis if and only if it is a $[\mathfrak j,\rho-1]$-marked basis. \end{theorem} \begin{proof} If $\mathfrak G$ is a $[\mathfrak j,m-1]$-marked basis, then $R_{\leq t}=\langle \mathcal{N}\,(\mathfrak j)_{\leq t}\rangle\oplus (\mathfrak G)_{\leq t}$ for every $t\geq m-1$. Then the same holds for every $t\geq m\geq m-1$, so $\mathfrak G$ is also a $[\mathfrak j,m]$-marked basis. 
Conversely, if $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis, then $R_{\leq t}=\langle \mathcal{N}\,(\mathfrak j)_{\leq t}\rangle\oplus (\mathfrak G)_{\leq t}$ for every $t\geq m$. It is sufficient to prove that $\mathfrak G$ is a $[\mathfrak j,m-1]$-marked set and that $R_{\leq m-1}=\langle \mathcal{N}\,(\mathfrak j)_{\leq m-1}\rangle\oplus (\mathfrak G)_{\leq m-1}$. Since $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis, reduced forms are unique. In particular, for every $x^\alpha \in B_{\mathfrak j}$, there is a marked polynomial $\mathfrak f_\alpha\in \mathfrak G$ such that $\mathfrak f_\alpha=x^\alpha-\mathrm{Nf}(x^\alpha)$. By Lemma \ref{ro}, for every $x^\alpha \in B_{\mathfrak j}$ with $\vert\alpha\vert\leq m-1$, we have $\deg \mathrm{Nf}(x^\alpha)\leq m-1$, hence $\mathfrak G$ is a $[\mathfrak j,m-1]$-marked set. We consider the basis of monomials for the $K$-vector space $R_{\leq m-1}$. We will prove that every $x^\gamma \in R$ with $\vert\gamma\vert\leq m-1$ has a unique decomposition $x^\gamma=\mathfrak g_1+\mathfrak g_2$ with $\mathfrak g_1\in (\mathfrak G)_{\leq m-1}$ and $\mathfrak g_2 \in \langle \mathcal{N}\,(\mathfrak j)_{\leq m-1}\rangle$. If $x^\gamma\notin \mathfrak j$, then we consider the decomposition $x^\gamma=0+x^\gamma$ in $(\mathfrak G)_{\leq m-1}+ \langle \mathcal{N}\,(\mathfrak j)_{\leq m-1}\rangle$. If there were another such decomposition of $x^\gamma$ in $(\mathfrak G)_{\leq m-1}+ \langle \mathcal{N}\,(\mathfrak j)_{\leq m-1}\rangle$, this would contradict the assumption that $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis. Then every monomial in $R_{\leq m-1}\setminus {\mathfrak j}_{\leq m-1}$ has a unique decomposition in $(\mathfrak G)_{\leq m-1}+ \langle \mathcal{N}\,(\mathfrak j)_{\leq m-1}\rangle$. We now consider $x^\gamma \in {\mathfrak j}_{\leq m-1}$. 
Since $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis, there is a completion $\overline{\mathfrak G}$ and $[\mathfrak j,m]$-normal forms are unique, by Theorem \ref{unicitacomecrit}. We can compute the unique $[\mathfrak j,m]$-reduced form modulo $(\mathfrak G)$ of $x^\gamma$ (using $\xrightarrow{\ \id G_\ast\ }$, by Proposition \ref{propbase}). The hypotheses of Lemma \ref{ro} are satisfied, hence $\deg \mathrm{Nf}(x^\gamma)\leq m-1$. Hence there exists $f_\gamma:=x^\gamma-\mathrm{Nf}(x^\gamma)\in \mathfrak G\cup\overline{\mathfrak G}$, and we can consider the decomposition $x^\gamma=f_\gamma+\mathrm{Nf}(x^\gamma)$, with $f_\gamma\in (\mathfrak G)_{\leq m-1}$ and $\mathrm{Nf}(x^\gamma)\in\langle \mathcal{N}\,(\mathfrak j)_{\leq m-1}\rangle$. If there were another such decomposition of $x^\gamma \in {\mathfrak j}_{\leq m-1}$, this would contradict the hypothesis that $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis. \end{proof} We obtain the following result by Theorems \ref{critsupermin} and \ref{ro2}. \begin{cor} Let $\mathfrak j$ be a strongly stable ideal in $R$ and $m$ be a positive integer such that no monomial in $B_{\mathfrak j}$ of degree $m+1$ is divisible by $x_1$. Let $\mathfrak G$ be a $[\mathfrak j,m]$-marked set. Then $\mathfrak G$ is a $[\mathfrak j, m-1]$-marked basis if and only if \begin{enumerate}[(i)] \item for every $\mathfrak f_\alpha \in \mathfrak G$ and every $x_i>\min(x^\alpha)$, $x_i\mathfrak f_\alpha\xrightarrow{\ \id G_\ast\ } 0$; \item for all $x^\beta \in {\mathfrak j}_{\leq m-1}\setminus B_{\mathfrak j}$, $x^\beta\xrightarrow{\ \id G_\ast\ } \mathfrak g_\beta$ with $\mathrm{supp}(\mathfrak g_\beta)\subseteq \mathcal{N}\,(\mathfrak j)_{\leq m-1}$. \end{enumerate} \end{cor} We now turn to the special case of a $[\mathfrak j,m]$-marked set $\mathfrak G$ defining an Artinian ideal. The algorithmic techniques used to check whether $\mathfrak G$ is a $[\mathfrak j,m]$-marked basis are more efficient in this special case. 
\begin{prop}\label{punti} Let $\mathfrak j$ be a strongly stable ideal in $R$ and $\mathfrak G$ be a $[\mathfrak j,m]$-marked set of polynomials, for some $m\geq \textnormal{reg}(\mathfrak j)-1$. If $\mathcal{N}\,(\mathfrak j)$ is a finite set, then for every $x^\beta \in \mathfrak j$, $x^\beta\xrightarrow{\ \id G_\ast\ } \mathfrak g_\beta$ with $\mathrm{supp}(\mathfrak g_\beta)\subseteq \mathcal{N}\,(\mathfrak j)_{\leq \textnormal{reg}(\mathfrak j)-1}$. \end{prop} \begin{proof} For every $x^\beta\in \mathfrak j$, we compute $x^\beta\xrightarrow{\ \id G_\ast\ } \mathfrak g_\beta$, with $\mathrm{supp} (\mathfrak g_\beta)\subseteq \mathcal{N}\,(\mathfrak j)$. We write this polynomial as $\mathfrak g_\beta=\mathfrak g_{\beta_1}+\mathfrak g_{\beta_2}$, where $\mathrm{supp}(\mathfrak g_{\beta_1})\subseteq \mathcal{N}\,(\mathfrak j)_{\leq m}$ and $\mathrm{supp}(\mathfrak g_{\beta_2})\subseteq \mathcal{N}\,(\mathfrak j)_{\leq t}\setminus\mathcal{N}\,(\mathfrak j)_{\leq m}$ for some $t\geq m+1$. Since $\mathcal{N}\,(\mathfrak j)$ is a finite set of monomials and $\textnormal{reg}(\mathfrak j)=\max\{\vert\alpha\vert:x^\alpha\in B_{\mathfrak j}\}$, we have $\mathcal{N}\,(\mathfrak j)_{\leq\textnormal{reg}(\mathfrak j)-1}=\mathcal{N}\,(\mathfrak j)_{\leq\textnormal{reg}(\mathfrak j)-1+r}$ for every positive integer $r$. Since $m\geq \textnormal{reg}(\mathfrak j)-1$, this gives $\mathcal{N}\,(\mathfrak j)_{\leq t}\setminus\mathcal{N}\,(\mathfrak j)_{\leq m}=\emptyset$; hence $\mathfrak g_{\beta_2}=0$ and $\mathrm{supp}(\mathfrak g_{\beta})\subseteq \mathcal{N}\,(\mathfrak j)_{\leq \textnormal{reg}(\mathfrak j)-1}$. \end{proof} \begin{theorem}\label{critpunti} Let $\mathfrak j$ be a strongly stable ideal in $R$ and $\mathfrak G$ be a $[\mathfrak j,m]$-marked set of polynomials, for some positive integer $m\geq \rho$. If $\mathcal{N}\,(\mathfrak j)$ is finite, then $\mathfrak G$ is a $[\mathfrak j,\rho-1]$-marked basis if and only if for every $\mathfrak f_\alpha \in \mathfrak G$ and for every $x_i> \min(x^\alpha)$, we have $x_i\mathfrak f_\alpha\xrightarrow{\ \id G_\ast\ } 0$. 
\end{theorem} \begin{proof} Since $\mathcal{N}\,(\mathfrak j)$ is finite, we have $\rho=\textnormal{reg}(\mathfrak j)$ \cite[Lemma (1.7)]{BS}. Then for every $x^\beta \in \mathfrak j_{\leq m-1}\setminus B_{\mathfrak j}$, condition (\ref{condi}) of Theorem \ref{critsupermin} is fulfilled by Proposition \ref{punti}. \end{proof} \section{The scheme structure of $\mathcal M\mathrm f(\mathfrak j,m)$ and a flat family of affine schemes}\label{sezpiatt} As recalled in Section 2, in \cite{CR} the authors consider homogeneous ideals of $S$ generated by a $J$-marked basis, with $J$ strongly stable. These ideals form a family called a $J$-marked family $\mathcal M\mathrm f(J)$, which is endowed with the structure of a subscheme of a suitable affine space, explicitly computed by an algorithmic method based on the already cited reduction procedure $\xrightarrow{\ V_\ell \ }$. In \cite{BCLR} it is shown that, when $J$ is a saturated strongly stable ideal and $m$ a positive integer, the family $\mathcal M\mathrm f(J_{\geq m})$ can be embedded as a subscheme of an affine space of lower dimension than the previous one, by means of the superminimal reduction (Definition \ref{def:superred}). Further, as shown in \cite[Theorem 5.7, (i)]{BCLR}, $\mathcal M\mathrm f(J_{\geq m})$ can be embedded as a {locally closed subscheme} of the Hilbert scheme parameterizing the subschemes of $\mathbb{P}^n$ having the same Hilbert polynomial as $S/J$. In particular, in \cite{BLR} the authors highlight that $\mathcal M\mathrm f(J_{\geq m})$ is embedded as an open subset of a Hilbert scheme when $m$ is greater than or equal to the satiety of $J^a\subset R$ minus one. In this section we make analogous considerations for ideals of $R$ generated by a $[\mathfrak j,m]$-marked basis, where $m$ is a positive integer. \begin{definition} Let $\mathfrak j$ be a strongly stable ideal in $R$. 
The family of all the ideals generated by a $[\mathfrak j,m]$-marked basis is called a \emph{$[\mathfrak j,m]$-marked family} and denoted by $\mathcal M\mathrm f(\mathfrak j,m)$. \end{definition} We consider a saturated strongly stable ideal $\mathfrak j \subseteq R$ and a positive integer $m$, and we define the following $[\mathfrak j,m]$-marked set $\mathfrak G$: \begin{equation}\label{JbaseC} {\mathfrak G}= \{\mathfrak f_\alpha=x^\alpha-\sum C_{\alpha\gamma} x^\gamma : \mathrm{Ht}(\mathfrak f_\alpha)=x^\alpha\in B_{\mathfrak j}\}\subset K[C][x_1,\dots,x_n], \end{equation} where $C$ is a compact notation for the set of new variables $C_{\alpha \gamma}$, $x^\alpha \in B_{\mathfrak j}$, $x^\gamma \in \mathcal{N}\,(\mathfrak j)_{\leq\max\{\vert\alpha\vert,m\}}$. In analogy with the families considered in \cite{CR, BCLR}, we now show that $\mathcal M\mathrm f(\mathfrak j,m)$ can also be endowed with the structure of an affine scheme, using the results of the previous sections. \begin{definition}\label{idstruttura} Let $\mathfrak j$ be a strongly stable ideal in $R$, $m$ be a positive integer and $\mathfrak G$ be a $[\mathfrak j,m]$-marked set as in \eqref{JbaseC}.\\ For every $x^\beta \in \mathfrak j_{\leq m}\setminus B_{\mathfrak j}$, let $\mathfrak g_\beta$ be the reduced polynomial such that $x^\beta \xrightarrow{\ \id G_\ast\ } \mathfrak g_{\beta}$, and let $\mathfrak g'_{\beta}$, $\mathfrak g''_{\beta}$ be the reduced polynomials such that $\mathfrak g_\beta=\mathfrak g'_{\beta}+\mathfrak g''_\beta$, with $\mathrm{supp}(\mathfrak g'_\beta)\subseteq \mathcal{N}\,(\mathfrak j)_{\leq m}$ and $\mathrm{supp}(\mathfrak g''_\beta)\subseteq \mathcal{N}\,(\mathfrak j)\setminus \mathcal{N}\,(\mathfrak j)_{\leq m}$. 
We denote by \begin{itemize} \item $\mathcal D_1\subset K[C]$ the set containing the coefficients of $\mathfrak g_{\beta}''$, for every $x^\beta \in \mathfrak j_{\leq m}\setminus B_{\mathfrak j}$; \item $\mathcal D_2\subset K[C]$ the set containing the coefficients of all the reduced polynomials in $(\mathfrak G)K[C][x_1,\dots,x_n]$. \end{itemize} Let $\mathfrak A$ be the ideal in $K[C]$ generated by $\mathcal D_1\cup \mathcal D_2$. \end{definition} \begin{theorem}\label{daProjaAff} Let $\mathfrak j$ be a strongly stable ideal in $R$ and $m$ be a positive integer. Then $\mathcal M\mathrm f(\mathfrak j,m)$ is endowed with the structure of an affine scheme, defined by the ideal $\mathfrak A\subset K[C]$ of Definition \ref{idstruttura}, and $\mathcal M\mathrm f(\mathfrak j,m)=\mathcal M\mathrm f({\mathfrak j^{h}}_{\geq m})$ scheme-theoretically. \end{theorem} \begin{proof} It is enough to recall that there is an equivalence (up to powers of $x_0$) between marked bases in $S$ and in $R$ (Theorem \ref{equivbasi}), so $\mathcal M\mathrm f(\mathfrak j,m)$ and $\mathcal M\mathrm f({\mathfrak j^h}_{\geq m})$ coincide as sets. Further, the ideal $\mathfrak A$ of Definition \ref{idstruttura} is exactly the ideal defining the scheme structure of $\mathcal M\mathrm f(\mathfrak j^h_{\geq m})$ \cite[Theorem 5.4]{BCLR}. \end{proof} \begin{cor} Let $J$ be a saturated strongly stable ideal in $S$ and $m$ be a positive integer; then $\mathcal M\mathrm f(J_{\geq m})=\mathcal M\mathrm f(J^{a},m)$ scheme-theoretically. \end{cor} \begin{remark} If $\mathfrak j\subset R$ is a strongly stable ideal, $m$ is a positive integer and $\mathfrak G$ is a $[\mathfrak j,m]$-marked set, the ideal $(\mathfrak G)\subseteq R$ belongs to $\mathcal M\mathrm f(\mathfrak j,m)$ if and only if $\mathfrak G$ is obtained from \eqref{JbaseC} by replacing the variables $C$ with $c\in K^{\vert C\vert}$ belonging to $V(\mathfrak A)$. 
Indeed, it is sufficient to observe the following: a $[\mathfrak j,m]$-marked set $\mathfrak G\subset R$, with a completion $\overline{\mathfrak G}$, generates an ideal belonging to $\mathcal M\mathrm f(\mathfrak j,m)$ if and only if the coefficients of the polynomials in the ${\mathfrak j^h}_{\geq m}$-marked set $G$ (constructed as in Lemma \ref{costr1}) constitute a point of the scheme $\mathcal M\mathrm f({\mathfrak j^h}_{\geq m})$. \end{remark} \begin{prop}\label{algeff} Let $\mathfrak j$ be a strongly stable ideal in $R$, $m$ be a positive integer and $\mathfrak G$ be a $[\mathfrak j,m]$-marked set as in \eqref{JbaseC}. For every $\mathfrak f_\alpha \in \mathfrak G$ and every $x_i>\min(x^\alpha)$, let $\mathfrak h_{\alpha,i}$ be the reduced polynomial such that $x_i\mathfrak f_\alpha \xrightarrow{\ \id G_\ast\ } \mathfrak h_{\alpha,i}$, and let $\mathcal D_2'\subset K[C]$ be the set containing the coefficients of all the polynomials $\mathfrak h_{\alpha,i}$. Then the ideal generated by $\mathcal D_1\cup \mathcal D_2'$ in $K[C]$ is $\mathfrak A$. \end{prop} \begin{proof} Thanks to Theorems \ref{daProjaAff}, \ref{affinrid} and \ref{homrid}, we have that the ideal generated by $\mathcal D_1\cup \mathcal D_2'$ in $K[C]$ satisfies the conditions of \cite[Proposition 5.5]{BCLR}, hence $(\mathcal D_1\cup \mathcal D_2')=\mathfrak A$. 
\end{proof} \begin{remark} In Section \ref{algoritmi} we will improve the computational techniques presented in \cite{BCLR}: first, we can use the results of Section \ref{conseguenze} to design an algorithm computing a set of generators for the ideal $\mathfrak A$ which takes into account some features of the strongly stable ideal in order to obtain a faster execution; furthermore, although the reduction $\xrightarrow{\ \id G_\ast\ }$ is theoretically analogous to the \lq\lq superminimal reduction\rq\rq\ defined and used in \cite{BCLR}, by Theorems \ref{affinrid} and \ref{homrid}, from the computational viewpoint the relation $\xrightarrow{\ \id G_\ast\ }$ allows us to avoid the time-consuming operation of multiplying the polynomials we want to reduce by powers of $x_0$. The algorithm we present in Section \ref{algoritmi} relies on Proposition \ref{algeff}, which requires a finite number of computations to construct the set $\mathcal D_2'$, whereas it is not possible to compute the set $\mathcal D_2$ in a finite number of steps. However, in the near future, we aim to design an efficient algorithm computing a subset $\mathcal M$ of $K[C]$ such that $\mathcal M\subset \mathfrak A$ and $(\mathcal M)\supseteq \mathfrak A$, i.e.~$(\mathcal M)=\mathfrak A$ in $K[C]$: such a set $\mathcal M$ will not necessarily be computed by $\xrightarrow{\ \id G_\ast\ }$. \end{remark} Given a strongly stable ideal $J$ in $S$, \cite[Proposition 4.6]{CR} shows that the marked scheme $\mathcal M\mathrm f(J)$ is flat (as defined in \cite[Section 9, Chapter III]{H}) at the origin (which corresponds to $J$). We now give an independent proof that the analogous result holds for $\mathcal M\mathrm f(\mathfrak j,m)$. \begin{theorem} Let $\mathfrak j$ be a strongly stable ideal in $R$, $m$ a positive integer and $\mathfrak G$ be a $[\mathfrak j,m]$-marked set as in \eqref{JbaseC}. Then $\mathcal M\mathrm f(\mathfrak j,m)$ is flat at the origin. 
\end{theorem} \begin{proof} First, we observe that the family $\mathcal M\mathrm f(\mathfrak j,m)$ is defined by the morphism \[ \mathrm{Spec}(K[C][x_1,\dots,x_n]/(\mathfrak A,\mathfrak G))\rightarrow \mathrm{Spec}(K[C]/\mathfrak A). \] We consider $A:=\left(\frac{K[C]}{\mathfrak A}\right)_{(C)/\mathfrak A}$, $\mathcal O:=K[x_1,\dots,x_n]/\mathfrak j$ and $\mathcal O_A:=A[X]/(\mathfrak G)$. By Corollary \ref{sollsiz}, we obtain that every syzygy of $\mathfrak j$ lifts to a syzygy of the ideal generated by $\mathfrak G$ in $\mathcal O_A$. Hence, by \cite[Corollary to Proposition 3.1, Chapter 1]{Artin}, $\mathcal O_A$ is flat over $A$. Letting $P:=(C,X)/(\mathfrak A,\mathfrak G)$, in particular we obtain that $(\mathcal O_A)_{(P)}$ is flat over $A$, hence $\mathcal M\mathrm f(\mathfrak j,m)$ is flat at the origin. \end{proof} In general, a family of ideals in $R$ is not \emph{flat}, even assuming that the ideals in the family share the same Hilbert polynomial, as shown by the following example. \begin{example}\label{Esempio flat} Consider $R=\mathbb C[x_1,\dots,x_5]$ and the term order $\prec$ associated to the weight vector $\omega =[8,7,5,4,3]$: $x^\alpha\prec x^\beta$ if and only if either $\alpha \cdot \omega< \beta \cdot \omega$, or $\alpha \cdot \omega= \beta \cdot \omega$ and the tie is broken using the reverse lexicographic order.\\ Let $\mathfrak U_T$ be the set of the following polynomials:\\ $\mathfrak f_1:=-212\,x_2x_3-317\,x_2x_4+284\,x_3x_4+72\,x_2^{2}+144\,x_4^{2}-43\,x_1x_2,$\\ $\mathfrak f_2:= x_1x_4-x_3x_4-x_1x_2+x_2x_3,\quad \mathfrak f_3:=-2\,x_2x_3-x_2x_4+x_3x_4+x_1x_3+x_1x_2,$\\ $\mathfrak f_4:=4\,x_2x_3+4\,x_0x_4+x_2x_4-4\,x_3x_4-5\,x_1x_2,\quad \mathfrak f_5:=-5\,x_2x_4+4\,x_3x_4+4\,x_0x_3-8\,x_2x_3+5\,x_1x_2,$\\ $\mathfrak f_6:=-4\,x_3x_4+4\,x_0x_2+4\,x_2x_3+3\,x_2x_4-7\,x_1x_2 ,$\\ $\mathfrak f_7:=71\,x_2x_4-90\,x_4^{2}+18\,x_1^{2}-56\,x_3x_4+18\,x_3^{2}-34\,x_2x_3+109\,x_1x_2 ,$\\ $\mathfrak f_8:=-5\,x_2x_4+4\,x_3x_4+4\,x_0x_1-4\,x_2x_3+x_1x_2,$\\ $\mathfrak 
f_9:=-83\,x_2x_4+48\,x_4^{2}+68\,x_3x_4+12\,x_0^{2}-44\,x_2x_3-25\,x_1x_2 ,$\\ $\mathfrak f_{10}(T):=x_1x_2-x_2x_3+T \left( x_4^{3}+x_3^{3}+x_3^{2} \right). $ For every $\tau \in \mathbb C$, all the ideals $\mathfrak i_\tau\subset \mathbb C[x_1,\dots,x_5]$ generated by the set of polynomials $\mathfrak U_\tau$ obtained by specializing $T$ to $\tau$ share the same affine Hilbert polynomial $p(t)=12$. Moreover, the following monomial ideal is the initial ideal w.r.t.~$\prec$ of every ideal $\mathfrak i_\tau$: \[ \mathfrak j=( x_5^2,x_5x_4,x_4^2,x_5x_3,x_4x_3,x_3^2,x_5x_2,x_4x_2,x_5x_1,x_4 x_1,x_3x_2^2,x_2^3,x_3x_2x_1,x_3^2x_1,x_3x_1^2,x_2x_1^2,x_1^4). \] In fact, $\mathfrak j$ is the initial ideal of the ideal $\mathfrak I$ generated by $\mathfrak U_T$ in $\mathbb C[T,\frac{1}{T}][x_1,\dots,x_5]$; all the polynomials in its reduced Gr\"obner basis have coefficients in $\mathbb C[T]$, except the one with leading monomial $x_1^4$, which is $\mathfrak g_T=x_1^4+\frac{4T-1}{9T}x_1^3$. Thus, specializing $T\mapsto \tau\in \mathbb C\setminus \{0\}$, we get the reduced Gr\"obner basis $B_\tau$ of the ideal $\mathfrak i_\tau$, hence the initial ideal of $\mathfrak i_\tau$ w.r.t.~$\prec$ is $\mathfrak j$. Finally, if $\tau =0$, by a direct computation of the reduced Gr\"obner basis $B$ of $\mathfrak i_0$ with respect to $\prec$, we obtain that its initial ideal is $\mathfrak j$ too. Observe that $B$ contains the polynomial $x_1^4$ and does not contain the monomial $x_1^3$, as one might expect observing that $-9T \mathfrak g_T=-9T x_1^4+(-4T+1)x_1^3\in \mathfrak I$. We underline that $\mathfrak j$ is a strongly stable ideal and also an affine $3$-segment with respect to $\omega =[8,7,5,4,3]$ (Definition \ref{segmento}). 
Hence, for every $\tau \in \mathbb C$, both for $\tau\neq 0$ and for $\tau=0$, the reduced Gr\"obner basis w.r.t.~$\prec$ of $\mathfrak i_\tau$ is a $[\mathfrak j, 3]$-marked basis, but the marked basis for $\mathfrak i_0$ is not the limit of those of $\mathfrak i_\tau$ with $\tau \neq 0$. We observe that, for $\tau=0$, $\textnormal{Proj}\,(S/(\mathfrak i_0)^h)$ is a Gorenstein scheme in $\mathbb{P}^5$ with Hilbert polynomial $12$, while, for $\tau\neq 0$, $\textnormal{Proj}\,(S/(\mathfrak i_\tau)^h)$ is not Gorenstein. Since Gorenstein schemes constitute an open subset of ${\mathcal{H}\textnormal{ilb}}^5_{12}$, this means that the family of ideals in $R$ given by $\mathfrak i_T$ is not flat. Furthermore, if we homogenize the polynomials in the reduced Gr\"obner basis of $\mathfrak I$ and then replace $T$ by 0, we do not get the ideal $(\mathfrak i_0)^h$. Summing up, $\mathfrak i_T$ is a family whose ideals have constant affine Hilbert polynomial $12$ and constant initial ideal $\mathfrak j$ with respect to $\prec$, and are generated by a marked basis over the strongly stable ideal $\mathfrak j$; nevertheless, this family is not flat. In fact, our family defines a function $\mathbb A^1 \rightarrow \mathcal M\mathrm f(\mathfrak j,3)$ which is not a morphism of schemes. \end{example} We end this section by presenting two interesting features of $\mathcal M\mathrm f(\mathfrak j,m)$, which are both closely connected to the flatness at $\mathfrak j$. In the following proposition we show how to obtain simultaneously the homogenization of all the ideals belonging to a $[\mathfrak j,m]$-marked family. \begin{prop}\label{omsim} Let $\mathfrak j$ be a strongly stable ideal in $R$, $m$ be a positive integer, and $\mathfrak G$ be the $[\mathfrak j,m]$-marked set in $K[C][x_1,\dots,x_n]$ as in \eqref{JbaseC}. 
Let $\mathfrak A$ be the ideal in $K[C]$ which defines the affine scheme structure of $\mathcal M\mathrm f(\mathfrak j,m)$ and let $\overline {\mathfrak G}$ be the completion of $\mathfrak G$ containing, for every $x^\beta \in (\mathfrak j)_{\leq m}\setminus B_{\mathfrak j}$, the polynomials $x^\beta-h_\beta \mod \mathfrak A$, where $x^\beta\xrightarrow{\ \id G_\ast\ } h_\beta \mod \mathfrak A$.\\ Then $(\mathfrak G^h\cup \overline{\mathfrak G}^h)_{\geq m}$ is an $m$-saturated ideal and it is equal to ${(\mathfrak G)^h}_{\geq m}\subseteq K[C,x_0,\dots,x_n]/\mathfrak A$, having as ${\mathfrak j^h}_{\geq m}$-marked basis the set of polynomials $G$ constructed as in Lemma \ref{costr1}. \end{prop} \begin{proof} It is sufficient to apply Proposition \ref{omogbene} to the $[\mathfrak j,m]$-marked basis $\mathfrak G\subset K[C,x_1,\dots,x_n]/\mathfrak A$ and to its completion. \end{proof} Let $K$ be a field of characteristic 0. With the same notation already fixed in Section 2, let $p(t)$ be an \emph{admissible Hilbert polynomial} in $S$. The \emph{Hilbert scheme} ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$ is the projective scheme parameterizing the subschemes $\mathcal Z$ of $\mathbb{P}^n$ having Hilbert polynomial $p(t)$. If $I$ is a homogeneous ideal defining the scheme $\mathcal Z$, we will say that $I$ is a point of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$. \begin{theorem}\label{coprhilb} Let $\mathfrak j$ be a strongly stable ideal in $R$, $m$ be a positive integer, and assume that $m\geq {\textnormal{sat}}(\mathfrak j)-1$. Then $\mathcal M\mathrm f(\mathfrak j,m)$ is isomorphic to an open subset of the Hilbert scheme ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$, where $p(t)$ is the Hilbert polynomial of $S/\mathfrak j^h$. 
Furthermore, if $\mathcal S_{p(t)}^n$ is the set of strongly stable ideals in $R$ with affine Hilbert polynomial $p(t)$, then, up to linear changes of coordinates in $\mathbb P^n$, the open subsets $\mathcal M\mathrm f(\mathfrak j,{\textnormal{sat}}(\mathfrak j)-1)$, for $\mathfrak j \in \mathcal S_{p(t)}^n$, cover ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$. \end{theorem} \begin{proof} By the scheme-theoretical equality between $\mathcal M\mathrm f(\mathfrak j,m)$ and $\mathcal M\mathrm f({\mathfrak j^h}_{\geq m})$ established in Theorem \ref{daProjaAff} and by \cite[Theorem 3.1]{BLR}, $\mathcal M\mathrm f(\mathfrak j,m)$ is isomorphic to an open subset of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$ for every $m\geq {\textnormal{sat}}(\mathfrak j)-1$; by \cite[Theorem 2.5]{BLR}, these open subsets cover ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$ up to changes of coordinates in $\mathbb P^n$. \end{proof} \section{Algorithms}\label{algoritmi} We now describe a prototype of the algorithm for computing $[\mathfrak j,m]$-marked families based on Theorem \ref{daProjaAff} and Proposition \ref{algeff}, with a variant in case the strongly stable ideal $\mathfrak j$ has a finite sous-escalier, as observed in Proposition \ref{punti} and Theorem \ref{critpunti}. We assume that the following functions are available; the ideal $\mathfrak j\subseteq R$ is always strongly stable. \begin{itemize} \item \textsc{Aff}$(J)$. It returns the monomial ideal $J^a\subseteq R$, given a monomial ideal $J\subseteq S$. \item \textsc{Coeff}$(\mathfrak g,x^\gamma)$. It returns the coefficient of the monomial $x^\gamma$ in the polynomial $\mathfrak g$ (obviously $0$ if $x^\gamma \notin \mathrm{supp}(\mathfrak g)$). \item $\textsc{Basis}(\mathfrak j)$. It determines the minimal set of monomials generating $\mathfrak j$. \item $\textsc{LowerPart}(\mathfrak g,t)$.
Given a polynomial $\mathfrak g$ and a non-negative integer $t$, it returns the pair of polynomials $(\mathfrak g_1,\mathfrak g_2)$ such that $\mathfrak g=\mathfrak g_1+\mathfrak g_2$, $\deg(\mathfrak g_1)\leq t$ and $\mathrm{supp}(\mathfrak g_2)\subset R_{\geq t+1}$. \item $\textsc{OptimizedLevel}(\mathfrak j,m)$. It determines the largest integer $m_0\leq m$ such that there is $x^\alpha \in B_{\mathfrak j}$ with $\vert\alpha\vert=m_0+1$ and $x_1$ dividing $x^\alpha$. \item $\textsc{Reg}(\mathfrak j)$. It determines the regularity of $\mathfrak j$, using Lemma \ref{regsatB}. \item $\textsc{Sat}(\mathfrak j)$. It determines the satiety of $\mathfrak j$, using Lemma \ref{regsatB}. \item $\textsc{SousEscalier}(\mathfrak j,m)$. It determines the set of monomials in the sous-escalier of $\mathfrak j$, up to degree $m$. \item $\textsc{superminimalReduction}(\mathfrak g, \mathfrak G)$. Given a $[\mathfrak j,m]$-marked set $\mathfrak G$ and a polynomial $\mathfrak g$, it returns the polynomial $\mathfrak g_1$ such that $\mathfrak g \xrightarrow{\ \id G_\ast\ } \mathfrak g_1$ and $\mathrm{supp}(\mathfrak g_1)\subset \mathcal{N}\,(\mathfrak j)$ (according to Definition \ref{sminred} and Theorem \ref{ridsm}). \item $\textsc{VSpace}(\mathfrak j,m)$. It determines the monomial basis of the $K$-vector space $\mathfrak j_{\leq m}$. \end{itemize} \begin{algorithm}[H] \begin{algorithmic}[1] \STATE $\textsc{Reduction1}(\mathfrak j, m, \mathfrak G)$ \REQUIRE $\mathfrak j \subset K[x_1,\ldots,x_n]$ strongly stable ideal, $m$ a positive integer, $\mathfrak G\subset K[x_1,\dots,x_n]$ a $[\mathfrak j,m]$-marked set. \ENSURE the conditions to impose on the coefficients of the polynomials in $\mathfrak G$ in order to satisfy condition (\ref{condii}) of Theorem \ref{critsupermin}.
\STATE $\textsf{Equations1} \leftarrow \emptyset$; \FORALL{$\mathfrak f_\alpha \in \mathfrak G, x_i > \min(x^\alpha)$} \STATE $\mathfrak g\leftarrow\textsc{superminimalReduction}(x_i\mathfrak f_\alpha,\mathfrak G)$; \FORALL{$x^\gamma \in \mathrm{supp}(\mathfrak g)$} \STATE $\textsf{Equations1}\leftarrow \textsf{Equations1}\cup \{\textsc{Coeff}(\mathfrak g,x^\gamma)\}$; \ENDFOR \ENDFOR \RETURN $\textsf{Equations1}$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \begin{algorithmic}[1] \STATE $\textsc{Reduction2}(\mathfrak j, m, \mathfrak G)$ \REQUIRE $\mathfrak j \subset K[x_1,\ldots,x_n]$ strongly stable ideal, $m$ a positive integer, $\mathfrak G\subset K[x_1,\dots,x_n]$ a $[\mathfrak j,m]$-marked set. \ENSURE the conditions to impose on the coefficients of the polynomials in $\mathfrak G$ in order to satisfy condition (\ref{condi}) of Theorem \ref{critsupermin}. \STATE $\textsf{Equations2} \leftarrow \emptyset$; \STATE $B\leftarrow \textsc{VSpace}(\mathfrak j,m)\setminus \textsc{Basis}(\mathfrak j)$; \FORALL{$x^\beta \in B$} \STATE $\mathfrak g\leftarrow\textsc{superminimalReduction}(x^\beta,\mathfrak G)$; \STATE $(\mathfrak g_1,\mathfrak g_2)\leftarrow\textsc{LowerPart}(\mathfrak g,m)$; \FORALL{$x^\gamma \in \mathrm{supp}(\mathfrak g_2)$} \STATE $\textsf{Equations2}\leftarrow \textsf{Equations2}\cup \{\textsc{Coeff}(\mathfrak g_2,x^\gamma)\}$; \ENDFOR \ENDFOR \RETURN $\textsf{Equations2}$; \end{algorithmic} \end{algorithm} \begin{algorithm}[H]\label{algschemaaffine} \begin{algorithmic}[1] \STATE $\textsc{MarkedScheme}(\mathfrak j, m)$ \REQUIRE $\mathfrak j \subset K[x_1,\ldots,x_n]$ strongly stable ideal, $m$ a positive integer. \ENSURE an ideal defining the marked scheme $\mathcal M\mathrm f((\mathfrak j^{h})_{\geq m})$.
\STATE $\textsf{Equations} \leftarrow \emptyset$ \STATE $B \leftarrow \textsc{Basis}(\mathfrak j)$; \STATE {$m_0\leftarrow \textsc{OptimizedLevel}(\mathfrak j,m)$; } \STATE $\mathfrak G \leftarrow \emptyset$; \FORALL{$x^\alpha \in B$} \STATE $\mathfrak f_{\alpha} \leftarrow x^\alpha$; \FORALL{$x^\eta \in \textsc{SousEscalier}(\mathfrak j, \max\{m_0,\vert\alpha\vert\})$} \STATE $\mathfrak f_{\alpha} \leftarrow \mathfrak f_{\alpha} + {C}_{\alpha\eta} x^\eta$; \ENDFOR \STATE ${\mathfrak G} \leftarrow \mathfrak G\cup \{\mathfrak f_{\alpha}\}$; \ENDFOR \STATE $\textsf{Equations}\leftarrow \textsc{Reduction1}(\mathfrak j,m_0,\mathfrak G)$ \IF{$m_0{=} \textsc{Reg}(\mathfrak j)-1$ \AND $\textsc{SousEscalier}(\mathfrak j,\textsc{Reg}(\mathfrak j)-1)=\textsc{SousEscalier}(\mathfrak j,\textsc{Reg}(\mathfrak j))$} \RETURN $\textsf{Equations}$ \ENDIF \STATE $\textsf{Equations}\leftarrow \textsf{Equations} \cup \textsc{Reduction2}(\mathfrak j,m_0,\mathfrak G)$ \RETURN $(\textsf{Equations})$ \end{algorithmic} \end{algorithm} Finally, given a saturated strongly stable ideal $J\subseteq S$, the algorithm $\textsc{MarkedScheme}$ is used to compute equations for the $J_{\geq \rho-1}$-marked scheme, with $\rho={\textnormal{sat}}(J^a)$. Indeed, $\rho-1$ is the minimum integer ensuring that $\mathcal M\mathrm f(J^a,\rho-1)\simeq\mathcal M\mathrm f(J_{\geq \rho-1})$ is an open subset of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$, where $p(t)$ is the Hilbert polynomial of $S/J$ (see \cite[Theorem 3.1]{BLR} and Theorem \ref{coprhilb}). \begin{algorithm}[H] \begin{algorithmic}[1] \STATE $\textsc{HilbertOpenSubset}(J)$ \REQUIRE $J \subset K[x_0, x_1,\ldots,x_n]$ saturated strongly stable monomial ideal. \ENSURE a set of generators of the ideal defining the marked scheme $\mathcal M\mathrm f(J_{\geq \rho-1})$, where $\rho={\textnormal{sat}}(J^a)$.
\RETURN $\textsc{MarkedScheme}(\textsc{Aff}(J), \textsc{Sat}(\textsc{Aff}(J))-1)$ \end{algorithmic} \end{algorithm} \section{Explicit computations on a Hilbert scheme of points}\label{Hscheme} In this section we will present a significant application of the computational method described in the previous sections. Let $K$ be a field of characteristic 0. We consider an admissible Hilbert polynomial $p(t)$ and the {Hilbert scheme} ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$ parameterizing the subschemes $\mathcal Z$ of $\mathbb{P}^n$ having Hilbert polynomial $p(t)$. If $I$ is a homogeneous ideal defining the scheme $\mathcal Z$, we will say that $I$ is a point of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$. Among saturated homogeneous ideals $I$ such that the Hilbert polynomial of $S/I$ is $p(t)$, there is a well-known one, the monomial ideal $J_{\mathtt {Lex}}$, which is the $\mathtt{Lex}$-segment, also called in this setting the \emph{${\mathtt{ Lex}}$-point} of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$. The $\mathtt{Lex}$-point is a smooth point of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$ (see \cite{RS}), hence it belongs to a single irreducible component of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$, that we denote by $\mathcal H_{\mathtt{ Lex}}$. Even in the easiest situation, namely a Hilbert scheme parameterizing $0$-dimensional schemes, it is not known yet, except in a few special cases, how many irreducible components ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$ has. Indeed, one of the main problems to overcome is the lack of computational tools allowing the direct study of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$. It is quite natural to embed ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$ as a closed subscheme of suitable Grassmannians \cite{gotz, CS,IarroKanev} and consider equations defining this scheme structure. 
Several authors have studied bounds for the degrees of a set of generators: a recent paper \cite{BLMR} improves the known bounds given by \cite{B,HaimSturm, IarroKanev}, showing that there is a set of defining equations whose degrees are less than or equal to $\deg(p(t))+2$. Nevertheless, direct computations using these sets of defining equations are impossible because of the large number of variables involved. Since the direct study of the equations describing ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$ in the Grassmannian is not affordable, it is reasonable to study it locally, by means of a suitable open cover. In \cite{BLR} it is proved that if $J$ is a saturated strongly stable ideal belonging to ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$, then $\mathcal M\mathrm f(J_{\geq r})$ is an open subset of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$, where $r$ is the Gotzmann number of $p(t)$. Further, up to linear changes of coordinates in $\mathbb{P}^n$, as $J$ varies among the saturated strongly stable ideals of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$, the families $\mathcal M\mathrm f(J_{\geq r})$ cover ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$. This is not really surprising, since strongly stable ideals are well-distributed on any Hilbert scheme, in the sense that there is at least one strongly stable ideal lying on each irreducible component and on each intersection of irreducible components of the Hilbert scheme. Thanks to Theorem \ref{coprhilb}, we can consider the open cover of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$ made up of $[\mathfrak j,{\textnormal{sat}}(\mathfrak j)-1]$-marked families, for $\mathfrak j$ a strongly stable ideal in $R$ with $R/\mathfrak j$ having affine Hilbert polynomial $p(t)$, and apply the computational \lq\lq affine\rq\rq\ techniques developed mainly in Section \ref{sectionJmSet} to study the Hilbert scheme. We now state a result which follows from results in \cite{FR} and is analogous to results contained in \cite{LR} for the homogeneous case.
For more details on segments, see also \cite{CLMR}. \begin{definition}\label{segmento} Let $\mathfrak j$ be a strongly stable ideal in $R$ and let $m$ be a positive integer. The ideal $\mathfrak j$ is an {\em affine $m$-segment} if there is a weight vector $\omega \in \mathbb N^n$ such that for every $x^\alpha \in B_{\mathfrak j}$, $\deg_\omega(x^\alpha)>\deg_\omega(x^\gamma)$ for every $x^\gamma\in \mathcal{N}\,(\mathfrak j)_{\leq t}$, with $t=\max\{m,\vert\alpha\vert\}$. \end{definition} \begin{theorem}\label{connessione} Let $\mathfrak j$ be a strongly stable ideal in $R$, let $m$ be a positive integer, and assume that $\mathfrak j$ is an affine $m$-segment. Then every irreducible component $\mathcal M$ of $\mathcal M\mathrm f(\mathfrak j,m)$ contains $\mathfrak j$, hence $\mathcal M\mathrm f(\mathfrak j,m)$ is a connected scheme. \\ If moreover $\mathfrak j$ is a smooth point of $\mathcal M$, then $\mathcal M$ is isomorphic to an affine space. \end{theorem} \begin{proof} By \cite[Corollary 2.7]{FR}, since $\mathfrak j$ is an affine $m$-segment, $\mathcal M\mathrm f(\mathfrak j,m)$ is an $\omega$-cone, where $\omega$ is the weight vector of Definition \ref{segmento}. The statement follows. \end{proof} We now apply our results to the study of two questions about the Hilbert scheme of points, focusing on some of its irreducible components. \subsection{Irreducible components of ${\mathcal{H}\textnormal{ilb}}^7_{16}$ and smoothability of local Gorenstein algebras $(1,7,7,1)$}\label{GorSmooth} \ We consider ${\mathcal{H}\textnormal{ilb}}^7_{16}$, which parameterizes $0$-dimensional subschemes of $\mathbb{P}^7$ of length 16. We can identify every point of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$ with an ideal in $R=K[x_1,\dots,x_7]$, not necessarily homogeneous: this is possible on an open dense subset of ${\mathcal{H}\textnormal{ilb}_{p(t)}^n}$, by considering a generic change of coordinates.
Hence, we only consider the polynomial ring $R$, with variables ordered as $x_7>\cdots>x_1$, and the ideals in $R$ with affine Hilbert polynomial $p(t)=16$. The $\mathtt{Lex}$-point of ${\mathcal{H}\textnormal{ilb}}^7_{16}$ is given by the strongly stable ideal in $R$: \[ \mathfrak j_{\mathtt{Lex}}= (x_7,x_6,x_5,x_4,x_3,x_2,x_1^{16}). \] It is a smooth point of $\mathcal H_{\mathtt {Lex}}$, whose dimension is $7\cdot 16=112$, since its general point is the reduced scheme of 16 distinct points. We can compute the complete set of strongly stable ideals of $R$ lying on ${\mathcal{H}\textnormal{ilb}}^7_{16}$ by using the algorithm described in \cite{CLMR} and further developed and implemented in \cite{L}, obtaining 561 ideals (total time of computation: about 1 second). We focus on one of them, the number 541, denoted by $\mathfrak j$ and generated by the following monomials: \[ x_7^2, x_7x_6, x_7x_5, x_7x_4, x_7x_3, x_7x_2, x_7x_1, x_6^2, x_6x_5, x_6x_4, x_6x_3, x_6x_2, x_6x_1, x_5^2, x_5x_4, x_5x_3, x_5x_2, \] \[ x_5x_1, x_4^2, x_4x_3, x_4x_2, x_4x_1^2, x_3^3, x_3^2x_2, x_3^2x_1, x_3x_2^2, x_3x_2x_1, x_3x_1^2, x_2^3, x_2^2x_1, x_2x_1^2, x_1^4. \] Our interest in this special strongly stable ideal comes from the following fact: { it is the generic initial ideal w.r.t.~$\mathtt{Lex}$ of a general ideal defining a local Gorenstein algebra of type $(1,7,7,1)$.} By Theorem \ref{coprhilb}, $\mathcal M\mathrm f(\mathfrak j ,3)$ is an open subset of ${\mathcal{H}\textnormal{ilb}}^7_{16}$. Furthermore, $\mathfrak j$ is an affine $3$-segment with weight vector $\omega=[11, 10, 9, 8, 6, 5, 4]$, hence $\mathcal M\mathrm f(\mathfrak j ,3)$ is connected and all its irreducible components contain $\mathfrak j$ (Theorem \ref{connessione}). We can construct the $[\mathfrak j,3]$-marked set $\mathfrak G$ in $K[C,x_1,\dots,x_7]$ (as in \eqref{JbaseC}) and apply Algorithm \textsc{MarkedScheme} with input $\mathfrak j$ and $3$.
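The affine segment condition for this $\mathfrak j$ can be checked by brute force. The following Python sketch is ours, not part of the paper's implementation; in particular, the assignment $\omega(x_7)=11,\dots,\omega(x_1)=4$ is our reading of the weight vector, and the transcription of the generators is ours.

```python
from itertools import combinations_with_replacement
from collections import Counter

# Generators of j, each written as a tuple of variable indices,
# so (7, 1) stands for x7*x1 and (1, 1, 1, 1) for x1^4.
GENS = [
    (7, 7), (7, 6), (7, 5), (7, 4), (7, 3), (7, 2), (7, 1),
    (6, 6), (6, 5), (6, 4), (6, 3), (6, 2), (6, 1),
    (5, 5), (5, 4), (5, 3), (5, 2), (5, 1),
    (4, 4), (4, 3), (4, 2), (4, 1, 1),
    (3, 3, 3), (3, 3, 2), (3, 3, 1), (3, 2, 2), (3, 2, 1), (3, 1, 1),
    (2, 2, 2), (2, 2, 1), (2, 1, 1), (1, 1, 1, 1),
]
# Assumed weight assignment: omega(x7), ..., omega(x1) = 11, 10, 9, 8, 6, 5, 4.
W = {7: 11, 6: 10, 5: 9, 4: 8, 3: 6, 2: 5, 1: 4}

def wdeg(m):
    return sum(W[i] for i in m)

def divides(g, m):
    cg, cm = Counter(g), Counter(m)
    return all(cm[v] >= e for v, e in cg.items())

def sous_escalier(gens, d):
    # monomials of degree <= d lying outside the monomial ideal
    return [m for k in range(d + 1)
            for m in combinations_with_replacement(range(1, 8), k)
            if not any(divides(g, m) for g in gens)]

def is_affine_segment(gens, m):
    # Definition: the omega-degree of each generator must exceed the
    # omega-degree of every monomial of the sous-escalier up to degree
    # max(m, degree of the generator).
    tmax = max(m, max(len(g) for g in gens))
    N = sous_escalier(gens, tmax)
    return all(wdeg(g) > wdeg(n)
               for g in gens
               for n in N if len(n) <= max(m, len(g)))

print(len(sous_escalier(GENS, 3)))   # 16, the length of the schemes
print(is_affine_segment(GENS, 3))    # True
```

The count of $16$ monomials in the sous-escalier up to degree $3$ matches the length of the parameterized schemes, and the check confirms the segment condition for this weight assignment.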
Observe that, since we are considering a strongly stable ideal $\mathfrak j$ with a finite sous-escalier, Algorithm \textsc{MarkedScheme} will only call Algorithm \textsc{Reduction1} and not \textsc{Reduction2}. We get $\mathcal M\mathrm f(\mathfrak j ,3)$ as the affine scheme defined by an ideal $\mathfrak A$ generated by 2160 polynomials of degrees between 3 and 5 in the polynomial ring $K[C]$, in 512 variables. We would like to study the irreducible components of $\mathcal M\mathrm f(\mathfrak j ,3)$, since their closures are irreducible components of ${\mathcal{H}\textnormal{ilb}}^7_{16}$. The computation of a primary decomposition of $\mathfrak A$ with Gr\"obner-like techniques is computationally infeasible. However, we can obtain interesting information on some of the irreducible components of $\mathcal M\mathrm f(\mathfrak j,3)$ by our computational techniques. The irreducible component $\mathcal{\mathcal M}_1$ whose closure is $ \mathcal H_{\mathtt {Lex}}$ is the one on which the Pl\"{u}cker coordinate corresponding to $\mathfrak j_{\mathtt{Lex}}$ does not identically vanish. It is well known that $\mathcal H_{\mathtt {Lex}}$ is rational: we obtain a rational parametrization of $\mathcal{\mathcal M}_1$ by choosing a suitable subset of 112 variables $C_1\subset C$ as parameters but, since $\mathfrak j$ is a singular point of $\mathcal{\mathcal M}_1$, in this way we cannot obtain a polynomial parametrization. However, we can easily find parametrizations of some special families of ideals contained in $\mathcal{\mathcal M}_1$.
Let us consider the following $[\mathfrak j , 3]$-marked set $\mathfrak G_t$, whose coefficients depend on a parameter $t$: \noindent $ \mathfrak g_1:=\mathbf{x_7^2} +4 x_1x_2-2 x_1x_3-x_2^2+x_2x_3-4 x_1x_4-2 x_3^2+(-9 x_7+16 x_4+\frac{11}{2} x_3-7 x_2+2 x_1)t-8 t^2 ,$ \noindent $ \mathfrak g_2:=\mathbf{x_7x_6} +x_1x_2-x_1x_4-x_2x_3+(-\frac{1}{2} x_6 +x_4+\frac{1}{2} x_3-x_2+\frac{1}{2}x_1)t-\frac{1}{2} t^2 ,$ \noindent $ \mathfrak g_3:=\mathbf{x_7x_5} -x_1x_2+x_1x_4+x_1x_3-x_7t-\frac{1}{2} tx_5-tx_4-tx_3+tx_2-\frac{1}{2} tx_1+t^2 ,$ \noindent $ \mathfrak g_4:=\mathbf{x_7x_4} -2 x_2x_3-5 x_1x_2+2 x_1x_4+5 x_1x_3+2 x_3^2-x_1^2+11 x_7t-\frac{45}{2} tx_4-8 tx_3+5 tx_2-\frac{3}{2} tx_1+17 t^2 ,$ \noindent$ \mathfrak g_5:=\mathbf{x_7x_3} +x_2x_3-3 x_1x_2+3 x_1x_4+2 x_1x_3-3 tx_4-3 tx_3+3 tx_2-\frac{3}{2} tx_1+\frac{3}{2}t^2 ,$ \noindent$ \mathfrak g_6:=\mathbf{x_7x_2} +x_2x_3-3 x_1x_2+3 x_1x_4+2 x_1x_3-\frac{1}{2} x_7t-3 tx_4-\frac{5}{2} tx_3+\frac{5}{2} tx_2-\frac{3}{2} tx_1+\frac{7}{4} t^2 ,$ \noindent$ \mathfrak g_7:=\mathbf{x_7x_1} +x_1x_2-x_1x_4-x_1x_3-x_7t+tx_4+tx_3-tx_2 ,$ \noindent$ \mathfrak g_8:=\mathbf{x_6^2} -3 x_2x_3+x_1x_2+x_1x_4-x_1x_3+x_3^2+x_2^2+x_1^2+2 tx_6-tx_4 +\frac{1}{2} tx_3+2 tx_2+\frac{1}{2} tx_1-\frac{13}{4} t^2 ,$ \noindent$ \mathfrak g_9:=\mathbf{x_6x_5} -x_1x_2+x_1x_4+x_1x_3-tx_6-tx_4-tx_3+tx_2-\frac{1}{2} tx_1+\frac{1}{2} t^2 ,$ \noindent$ \mathfrak g_{10}:=\mathbf{x_6x_4}+ x_1x_2-x_1x_4-x_2x_3-tx_6+tx_4+\frac{1}{2} tx_3-tx_2+\frac{1}{2} tx_1-\frac{1}{2} t^2 ,$ \noindent$ \mathfrak g_{11}:=\mathbf{x_6x_3} +x_2x_3-4 x_1x_2+4 x_1x_4+2 x_1x_3-4 tx_4-\frac{5}{2} tx_3+4 tx_2-2 tx_1+2 t^2, $ \noindent$ \mathfrak g_{12}:=\mathbf{x_6x_2} +x_2x_3-4 x_1x_2+4 x_1x_4+2 x_1x_3-\frac{1}{2} tx_6-4 tx_4-\frac{5}{2} tx_3+4 tx_2-2 tx_1+2 t^2 , $ \noindent$ \mathfrak g_{13}:=\mathbf{x_6x_1} -x_1x_3-tx_6+tx_3,$ \noindent$ \mathfrak g_{14}:=\mathbf{x_5^2} +2 x_3^2+2 x_2^2+2 x_1x_4-3 x_2x_3-x_1^2-14 tx_5-2 tx_4-\frac{5}{2} tx_3+6 tx_2-4
tx_1+\frac{29}{2} t^2 ,$ \noindent$ \mathfrak g_{15}:=\mathbf{x_5x_4}+ x_1x_4-tx_5-2 tx_4-tx_1+2 t^2, $ \noindent$ \mathfrak g_{16}:=\mathbf{x_5x_3} -x_1x_2+x_1x_4+x_1x_3+x_3x_5-tx_4-2 tx_3+tx_2-\frac{1}{2} tx_1+\frac{1}{2} t^2 , $ \noindent$ \mathfrak g_{17}:=\mathbf{x_5x_2} +x_1x_4-\frac{1}{2} tx_5-tx_4-tx_2-tx_1+\frac{3}{2} t^2 ,$ \noindent$ \mathfrak g_{18}:=\mathbf{x_5x_1} -x_1x_4-tx_5+tx_4, $ \noindent$ \mathfrak g_{19}:=\mathbf{x_4^2} +2 x_2x_3+11 x_1x_2-7 x_1x_4-8 x_1x_3-4 x_3^2+x_1^2-x_2^2-20 x_7t+37 tx_4+15 tx_3-14 tx_2+\frac{7}{2} tx_1-\frac{95}{4} t^2 , $ \noindent$ \mathfrak g_{20}:=\mathbf{x_4x_3}+x_2x_3-3 x_1x_2+3 x_1x_4+2 x_1x_3-3 tx_4-\frac{7}{2} tx_3+3 tx_2-\frac{3}{2} tx_1+\frac{3}{2} t^2 , $ \noindent$ \mathfrak g_{21}:=\mathbf{x_4x_2}+ x_2x_3-2 x_1x_2+3 x_1x_4+x_1x_3-\frac{7}{2} tx_4-\frac{3}{2} tx_3+tx_2-2 tx_1+\frac{5}{2} t^2 ,$ \noindent$ \mathfrak g_{22}:=\mathbf{x_4x_1^2} +x_1^3-2 tx_4x_1+4 tx_2x_1+t^2x_4-4 t^2x_2-5 t^2x_1+4 t^3 , $ \noindent$ \mathfrak g_{23}:=\mathbf{x_3^3} -2 tx_3^2+6 tx_3x_2-4 tx_4x_1+4 tx_2x_1+4 t^2x_4-3 t^2x_3-4 t^2x_2+2 t^2x_1-2 t^3 ,$ \noindent$ \mathfrak g_{24}:=\mathbf{x_3^2x_2} -\frac{1}{2} x_1^3-\frac{1}{2} tx_3^2+4 tx_3x_2-6 tx_4x_1-6 tx_3x_1+8 tx_2x_1-\frac{1}{2} tx_1^2+6 t^2x_4+4 t^2x_3-8 t^2x_2 +\frac{9}{2} t^2x_1-\frac{7}{2} t^3 ,$ \noindent$ \mathfrak g_{25}:=\mathbf{x_3^2x_1} -x_1^3-tx_3^2-8 tx_4x_1-8 tx_3x_1+12 tx_2x_1-tx_1^2+8 t^2x_4 +8 t^2x_3-12 t^2x_2+7 t^2x_1-5 t^3 , $ \noindent$ \mathfrak g_{26}:=\mathbf{x_3x_2^2} -\frac{1}{2} x_1^3+3 tx_3x_2-6 tx_4x_1-6 tx_3x_1+8 tx_2x_1-\frac{1}{2} tx_1^2+6 t^2x_4+\frac{17}{4} t^2x_3-8 t^2x_2+ \frac{9}{2} t^2x_1-\frac{7}{2} t^3 ,$ \noindent$ \mathfrak g_{27}:=\mathbf{x_3x_2x_1} -x_1^3-tx_3x_2-8 tx_4x_1-\frac{17}{2} tx_3x_1+12 tx_2x_1-tx_1^2+8 t^2x_4+\frac{17}{2} t^2x_3-12 t^2x_2+7 t^2x_1-5 t^3 ,$ \noindent$ \mathfrak g_{28}:=\mathbf{x_3x_1^2} -12 tx_4x_1-6 tx_3x_1+12 tx_2x_1+12 t^2x_4+5 t^2x_3-12 t^2x_2+6t^2x_1-6 t^3 ,$ \noindent$ \mathfrak
g_{29}:=\mathbf{x_2^3} +\frac{1}{2} x_1^3+\frac{5}{2} tx_2^2-2 tx_4x_1-6 tx_3x_1+12 tx_2x_1+\frac{1}{2} tx_1^2+2 t^2x_4+6 t^2x_3-\frac{61}{4} t^2x_2- \frac{13}{2} t^2x_1+\frac{51}{8} t^3 ,$ \noindent$ \mathfrak g_{30}:=\mathbf{x_2^2x_1} -\frac{3}{2} x_1^3-tx_2^2-10 tx_4x_1-6 tx_3x_1+7 tx_2x_1-\frac{3}{2} tx_1^2+10 t^2x_4+ 6 t^2x_3-7 t^2x_2+\frac{55}{4} t^2 x_1-\frac{43}{4} t^3 ,$ \noindent$ \mathfrak g_{31}:=\mathbf{x_2x_1^2} +\frac{1}{2} x_1^3-10 tx_4x_1-6 tx_3x_1+14 tx_2x_1+10 t^2x_4+6 t^2x_3-15 t^2x_2+\frac{1}{2} t^2x_1-t^3 ,$ \noindent $ \mathfrak g_{32}:=\mathbf{x_1^4}+64 t^2x_4x_1-32 t^2x_2x_1-6\,t^2x_1^2-64 t^3x_4+32 t^3x_2-40 t^3x_1+45 t^4$. \vskip 2mm \noindent As a consequence, we get the following result. \begin{theorem} \label{Gor} Every Gorenstein local algebra of type $(1,7,7,1)$ is smoothable. \end{theorem} \begin{proof} We refer to \cite{IarroKanev} for classical and recent results about this kind of problem. Our proof consists in exhibiting a Gorenstein point of type $(1,7,7,1)$ that belongs to the irreducible component $\mathcal H_{\mathtt {Lex}}$ and that is a smooth point of the Hilbert scheme. These facts imply that our point is smoothable and that all the other Gorenstein points of the same type belong to $\mathcal H_{\mathtt {Lex}}$, i.e.\ are smoothable too. We observe that the above marked set $\mathfrak G_t$ turns out to be a $[\mathfrak j,3]$-marked basis, so that it defines a family $\mathcal F_1$ which is flat over $\mathbb A^1$; hence, we have an embedding $\mathbb A^1 \hookrightarrow \mathcal M\mathrm f(\mathfrak j,3)$. Let us denote by $\mathfrak i_\tau $ the ideal generated by the specialization $t\mapsto \tau$ of $\mathfrak G_t$. For every $\tau \neq 0$, the ideal $\mathfrak i_\tau $ defines a scheme composed of $11$ simple points and a multiple structure supported at a $12^{\mathrm{th}}$ point, which is Gorenstein of type $(1,3,1)$, hence smoothable by already known results.
This guarantees that the ideal $\mathfrak i_\tau$ is smoothable as well, that is, it belongs to the component $\mathcal H_{\mathtt {Lex}}$ of ${\mathcal{H}\textnormal{ilb}}^7_{16}$ (in particular, it belongs to $\mathcal M_1$). For $\tau=0$, the ideal $\mathfrak i_0$ defines a multiple structure over a single point which is Gorenstein of type $(1,7,7,1)$, and it is smoothable too, by the flatness of the family $\mathcal F_1$. Furthermore, the dimension of the Zariski tangent space to ${\mathcal{H}\textnormal{ilb}}^7_{16}$ at the point $\mathfrak i_0$ is precisely $112=16 \times 7$. Hence $\mathcal H_{\mathtt {Lex}}$ is the only irreducible component of ${\mathcal{H}\textnormal{ilb}}^7_{16}$ containing $\mathfrak i_0$, and $\mathfrak i_0$ is a smooth point of ${\mathcal{H}\textnormal{ilb}}^7_{16}$. \end{proof} Theorem \ref{Gor} covers the case $r'=7$, which is the only value not treated in the range considered by \cite[Lemma 6.21]{IarroKanev}. \bigskip There is a second irreducible component $\mathcal M_2 $ of $\mathcal M\mathrm f(\mathfrak j,3)$, whose dimension is 161. The point $\mathfrak j$ is smooth on this component, hence $\mathcal{\mathcal M}_2 $ turns out to be isomorphic to an affine space $\mathbb A^{161}$ by Theorem \ref{connessione}. We get a parametrization of $\mathcal{\mathcal M}_2 $ by choosing a suitable subset of 161 variables $C_2 \subset C$ as parameters: for instance, we can take $C_2:=C \setminus \mathrm{in_\prec}(\mathfrak T)$, where $\mathfrak T$ is the ideal of the Zariski tangent space to ${\mathcal{H}\textnormal{ilb}}^7_{16}$ at a general point of $\mathcal M_2 $ and the initial ideal is taken w.r.t.~any term order $\prec$. The parametrization we obtain in this way is given by polynomials. The general ideal $\mathfrak l$ in $ \mathcal{M}_2$ defines a smooth point of ${\mathcal{H}\textnormal{ilb}}^7_{16}$, and the corresponding scheme is the union of a simple point and a non-reduced structure of multiplicity 15 on a different one.
For instance let us consider the ideal $\mathfrak l$ defined by the following $[\mathfrak j, 3]$-marked basis. \noindent$\mathfrak f_1:=\mathbf{x_7^2}+x_3x_2-x_3x_1-x_2x_1-x_1^2,$\\ $\mathfrak f_2:=\mathbf{x_7x_6}+x_4x_1-x_3^2-x_3x_2+x_2^2+x_2x_1-x_1^3+2x_1^2,$\\ $\mathfrak f_3:=\mathbf{x_7x_5}-x_4x_1-x_3x_1-x_2^2-x_1^3+4x_1^2,$\\ $\mathfrak f_4:=\mathbf{x_7x_4}-x_3^2+x_2^2+x_2x_1+x_1^3-2x_1^2,$\\ $\mathfrak f_5:=\mathbf{x_7x_3}+x_4x_1+x_3x_1-x_2x_1-x_1^3,$ \\ $\mathfrak f_6:=\mathbf{x_7x_2}+x_3x_2-x_3x_1+x_2x_1+x_1^3-3x_1^2,$\\ $\mathfrak f_7:=\mathbf{x_7x_1}-x_1^3+x_3^2+x_3x_2-x_4x_1-x_3x_1-x_2x_1,$\\ $\mathfrak f_8:=\mathbf{x_6^2}+x_3x_2+x_3x_1-x_2^2-x_2x_1+x_1^3-2x_1^2,$\\ $\mathfrak f_9:=\mathbf{x_6x_5}+x_4x_1-x_3^2+x_3x_2-x_2^2+x_2x_1-x_1^2,$\\ $\mathfrak f_{10}:=\mathbf{x_6x_4}+x_4x_1+x_3^2-x_3x_1-x_1^3+x_1^2,$\\ $\mathfrak f_{11}:=\mathbf{x_6x_3}+x_4x_1+x_3x_2-x_2^2+x_2x_1, $\\ $\mathfrak f_{12}:=\mathbf{x_6x_2}-x_4x_1-x_3^2+x_3x_2+x_3x_1-x_2^2+x_2x_1+x_1^3-x_1^2, $\\ $\mathfrak f_{13}:=\mathbf{x_6x_1}+x_1^3-x_3^2+x_3x_2-x_4x_1+x_3x_1-x_1^2,$\\ $\mathfrak f_{14}:=\mathbf{x_5^2}-x_4x_1+x_3^2+x_3x_2-x_2^2-x_1^3-x_1^2,$\\ $\mathfrak f_{15}:=\mathbf{x_5x_4}-x_4x_1+x_3x_2+x_3x_1-x_1^3-x_1^2,$ \\ $\mathfrak f_{16}:=\mathbf{x_5x_3}+x_4x_1+x_3^2-x_3x_2-x_3x_1-x_2^2+x_1^3+2x_1^2, $\\ $\mathfrak f_{17}:=\mathbf{x_5x_2}+x_4x_1-x_3^2-x_3x_2+x_2^2-x_2x_1-x_1^3+4x_1^2,$\\ $\mathfrak f_{18}:=\mathbf{x_5x_1}+x_1^3+x_3^2-x_4x_1-x_3x_1-x_2x_1+x_1^2, $\\ $\mathfrak f_{19}:=\mathbf{x_4^2}+x_3^2+x_3x_1-x_2^2-x_2x_1+x_1^3-x_1^2, $\\ $\mathfrak f_{20}:=\mathbf{x_4x_3}+x_3x_1+x_2^2+x_2x_1+x_1^3-4x_1^2, $\\ $\mathfrak f_{21}:=\mathbf{x_4x_2}+x_4x_1-x_3x_2+x_3x_1+x_2^2+x_2x_1-x_1^3-x_1^2,$\\ $\mathfrak f_{22}:=\mathbf{x_4x_1^2}, \ \ \ \mathfrak f_{23}:=\mathbf{x_3^3}-x_1^3,\ \ \ \mathfrak f_{24}:=\mathbf{x_3^2x_2}-x_1^3,\ \ \ \mathfrak f_{25}:=\mathbf{x_3^2x_1}-x_1^3,$ \\ $\mathfrak f_{26}:=\mathbf{x_3x_2^2}-x_1^3,\ \ \ \mathfrak f_{27}:=\mathbf{x_3x_2x_1}-x_1^3,\ \ \ \mathfrak 
f_{28}:=\mathbf{x_3x_1^2}-x_1^3,$\\ $\mathfrak f_{29}:=\mathbf{x_2^3}-x_1^3,\ \ \ \mathfrak f_{30}:=\mathbf{x_2^2x_1}-x_1^3,\ \ \ \mathfrak f_{31}:=\mathbf{x_2x_1^2}-x_1^3,\ \ \ \mathfrak f_{32}:=\mathbf{x_1^4}-x_1^3$. We can verify that $\mathfrak l$ is a smooth point of ${\mathcal{H}\textnormal{ilb}}^7_{16}$ because the dimension of the Zariski tangent space to ${\mathcal{H}\textnormal{ilb}}^7_{16}$ at this point is 161. \bigskip Finally, we can find a third irreducible component $\mathcal{M}_3$ of $\mathcal M\mathrm f(\mathfrak j,3)$ in the following way. If we replace the variables in a suitable subset $C_3 \subset C$ by $0$, we obtain a family $\mathcal F_3$ which is flat over $\mathbb A^{116}$. A general point $\mathfrak v$ of $\mathcal F_3$, for instance the one defined by the ideal generated by the following $[\mathfrak j, 3]$-marked basis, corresponds to a non-reduced structure supported at a single point. \\ $ \mathfrak h_1:=\mathbf{x_7^2}-x_3^2+x_3x_2-2x_4x_1+2x_3x_1 , \ \ \mathfrak h_2:=\mathbf{x_7x_6}+3x_3^2+4x_3x_2-4x_2^2-x_4x_1+3x_3x_1,$ \\ $\mathfrak h_3:=\mathbf{x_7x_5}-2x_3^2+x_3x_2+x_2^2+4x_4x_1+3x_3x_1 ,\ \ \mathfrak h_4:=\mathbf{x_7x_4}-2x_3^2+3x_3x_2+x_2^2-3x_4x_1+x_3x_1,$\\ $\mathfrak h_5:=\mathbf{x_7x_3}+2x_3^2+x_3x_2-x_2^2-x_4x_1-4x_3x_1, \ \ \mathfrak h_6:=\mathbf{x_7x_2}-x_3^2+x_3x_2-4x_2^2+x_4x_1+4x_3x_1, $\\ $\mathfrak h_7:=\mathbf{x_7x_1}-2x_3^2-x_3x_2+2x_2^2-3x_4x_1+x_3x_1, \ \ \mathfrak h_8:=\mathbf{x_6^2}-x_3x_2+3x_2^2+2x_4x_1, $\\ $\mathfrak h_9:=\mathbf{x_6x_5}+2x_3^2+2x_2^2-4x_4x_1-3x_3x_1, \ \ \mathfrak h_{10}:=\mathbf{x_6x_4}+x_3^2+x_3x_2-x_4x_1+2x_3x_1,$\\ $\mathfrak h_{11}:=\mathbf{x_6x_3}-4x_3^2+3x_2^2-2x_3x_1, \ \ \mathfrak h_{12}:=\mathbf{x_6x_2}-2x_3^2+3x_2^2+x_4x_1+3x_3x_1, $\\ $\mathfrak h_{13}:=\mathbf{x_6x_1}-2x_3^2+2x_3x_2-4x_2^2-x_4x_1-4x_3x_1, \ \ \mathfrak h_{14}:=\mathbf{x_5^2}-4x_3^2-2x_3x_2-2x_2^2, $\\ $\mathfrak h_{15}:=\mathbf{x_5x_4}-2x_3x_2-4x_2^2+4x_4x_1-x_3x_1, \ \ \mathfrak h_{16}:=\mathbf{x_5x_3}-4x_2^2-x_4x_1+x_3x_1, $\\ $\mathfrak
h_{17}:=\mathbf{x_5x_2}+x_3^2+3x_3x_2+3x_2^2-x_4x_1-4x_3x_1,\ \ \mathfrak h_{18}:=\mathbf{x_5x_1}-3x_3^2-2x_2^2-3x_4x_1+x_2x_1, $\\ $\mathfrak h_{19}:=\mathbf{x_4^2}-2x_3^2-2x_3x_2+4x_2^2+x_4x_1+x_3x_1, \ \ \mathfrak h_{20}:=\mathbf{x_4x_3}+x_3^2+x_3x_2+4x_2^2-2x_4x_1-4x_3x_1, $\\ $\mathfrak h_{21}:=\mathbf{x_4x_2}+x_3^2+2x_3x_2+4x_2^2-4x_4x_1-2x_3x_1,\ \ \mathfrak h_{22}:=\mathbf{x_4x_1^2}, \ \ \ \mathfrak h_{23}:=\mathbf{x_3^3},$\\ $\mathfrak h_{24}:=\mathbf{x_3^2x_2},\ \ \mathfrak h_{25}:=\mathbf{x_3^2x_1}, \ \ \mathfrak h_{26}:=\mathbf{x_3x_2^2},\ \ \ \mathfrak h_{27}:=\mathbf{x_3x_2x_1},\ \ \ \mathfrak h_{28}:=\mathbf{x_3x_1^2},\ \ \mathfrak h_{29}:=\mathbf{x_2^3},$\\ $\mathfrak h_{30}:=\mathbf{x_2^2x_1},\ \ \ \mathfrak h_{31}:=\mathbf{x_2x_1^2}+3x_1^3,\ \ \ \mathfrak h_{32}:=\mathbf{x_1^4}$. Since $\mathcal F_3$ is a flat family over $\mathbb A^{116}$, we have an embedding $\mathbb A^{116}\hookrightarrow \mathcal M_3$. Hence the irreducible component $\mathcal M_{3}$ of $\mathcal M\mathrm f(\mathfrak j,3)$ cannot coincide with $\mathcal M_{1}$ because $\dim(\mathcal M_{3}) \geq 116 > 112=\dim(\mathcal M_1)$. On the other hand $\mathcal M_{3}$ cannot coincide with $\mathcal M_{2}$ because the dimension of the Zariski tangent space to ${\mathcal{H}\textnormal{ilb}}^7_{16}$ at a general point of $\mathcal F_3$, as for instance $\mathfrak v$, is $153 $, while at every point of $\mathcal M_{2}$ such dimension is $\geq 161$. Therefore there are at least three irreducible components of ${\mathcal{H}\textnormal{ilb}}^7_{16}$ passing through $\mathfrak j$. \subsection{Smoothability of local Gorenstein algebras $(1,5,5,1)$}\label{GorSmooth2} We consider ${\mathcal{H}\textnormal{ilb}}^5_{12}$ that parameterizes 0-dimensional subschemes of $\mathbb{P}^5$ of length 12. 
As in the previous case, we identify every point of ${\mathcal{H}\textnormal{ilb}}^5_{12}$ with an ideal in $R=K[x_1,\dots,x_5]$, not necessarily homogeneous, with affine Hilbert polynomial $p(t)=12$, and we order the variables as $x_5>\cdots>x_1$. The ideal $\mathfrak i_0$ of Example \ref{Esempio flat} corresponds to a point of ${\mathcal{H}\textnormal{ilb}}^5_{12}$ and the quotient $R/\mathfrak i_0$ is a local Gorenstein algebra with Hilbert function $(1,5,5,1)$. Its initial ideal w.r.t.~the term order $\mathtt{Lex}$ is the ideal $\mathfrak j$ presented therein. We observe that $\mathfrak j$ is strongly stable and an affine $3$-segment with weight vector $\omega=[8, 7, 5, 4, 3]$. We can construct the $[\mathfrak j,3]$-marked set $\mathfrak G$ in $K[C,x_1,\dots,x_5]$ (as in \eqref{JbaseC}) and apply Algorithm \textsc{MarkedScheme} with input $\mathfrak j$ and $3$. We get a set of 576 polynomials in $K[C]$, where $\vert C \vert =204$, that generate the ideal $\mathfrak A$ of $\mathcal M\mathrm f(\mathfrak j ,3)$. Using $\mathfrak A$, we compute the Zariski tangent space to $\mathcal M\mathrm f(\mathfrak j ,3)$ at the point corresponding to $\mathfrak i_0$. The dimension of this tangent space is $60=12 \times 5$, the one expected if $\mathfrak i_0$ were a smooth point of the $\mathtt{Lex}$-component of ${\mathcal{H}\textnormal{ilb}}^5_{12}$. If this is the case, the computation of the tangent space also highlights a special set $\widetilde C$ of $60$ of the variables $C$ that can give a local parametrization around $\mathfrak i_0$. Specializing to $0$, in $\mathfrak A$, all the variables in $\widetilde C$ except a suitable one, which we choose as a parameter $T$, we can determine a specialization for every variable in $C$ depending on $T$.
In this way we obtain the following set of polynomials, marked over $\mathfrak j$: \noindent$\mathfrak f_1:=\mathbf{x_5^2}+4x_1^{2}+\frac{17}{3}x_1x_2-\frac{83}{12} x_1x_3-\frac{23}{4}\,x_2x_3,$\\ $\mathfrak f_2:=\mathbf{x_4 x_5}-\frac{3}{4}\,x_2x_3-\frac{5}{4}\,x_1x_3+x_1x_2 ,$\\ $\mathfrak f_3:=\mathbf{x_4^2 }-Tx_4+x_2T+\frac{25}{6} x_2x_3+x_2^{2}+\frac{71}{18} x_1x_3- \frac{28}{9}x_1x_2-5 x_1^{2},$\\ $\mathfrak f_4:=\mathbf{x_3 x_5}-\frac{3}{4}\,x_2x_3+\frac{3}{4} x_1x_3-x_1x_2,$\\ $\mathfrak f_5:=\mathbf{x_3 x_4}-x_2x_3,$ \\ $\mathfrak f_6:=\mathbf{x_3^{2}}-\frac{85}{24}\,x_2x_3-\frac{317}{72}\,x_1x_3+ \frac{71}{18} x_1x_2+2 x_1^{2} ,$\\ $\mathfrak f_7:=\mathbf{x_2x_5}-\frac{3}{4}\,x_2x_3-\frac{5}{4}\,x_1x_3+x_1x_2,$\\ $\mathfrak f_8:=\mathbf{x_2x_4}-x_2x_3-x_1x_3+x_1x_2,$\\ $\mathfrak f_9:=\mathbf{x_1x_5}-\frac{1}{4}x_2x_3+\frac{1}{4} x_1x_3-x_1x_2,$\\ $\mathfrak f_{10}:=\mathbf{x_1x_4}-x_1x_2,$\\ $\mathfrak f_{11}:=\mathbf{x_2^2 x_3}+x_1^3, $\\ $\mathfrak f_{12}:=\mathbf{ x_2^3}-x_2x_3 T-x_3x_1 T+T x_2^2+x_2x_1 T+\frac{5}{9}x_1^3, $\\ $\mathfrak f_{13}:=\mathbf{x_2x_1 x_3}-\frac{11}{9}x_1^3,$ $\mathfrak f_{14}:=\mathbf{x_1 x_2^{2}}-\frac{8}{9}x_1^3,$\ $\mathfrak f_{15}:=\mathbf{x_1^2 x_3}+ x_1^3,$ $\mathfrak f_{16}:=\mathbf{x_1^2 x_2}+\frac{2}{3} x_1^3, $ $\mathfrak f_{17}:=\mathbf{x_1^4}.$ This marked set is, by construction, a $[\mathfrak j,3]$-marked basis, so that it defines a family $\mathcal F_1$ (different from that of Example \ref{Esempio flat}) which is flat over $\mathbb A^1$; hence, we have an embedding $\mathbb A^1 \hookrightarrow \mathcal M\mathrm f(\mathfrak j,3)$. Specializing $T$ to $0$, we obtain a set of generators for $\mathfrak i_0$; specializing $T$ to a nonzero value, we obtain an ideal defining a scheme composed of $2$ simple points and a multiple structure supported at a $3^{\mathrm{rd}}$ point, which is Gorenstein of type $(1,4,4,1)$ and hence smoothable by \cite{CN10}.
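The mechanism at work here, simple points colliding into a fat point while the dimension of the quotient stays constant, can be illustrated on a toy example of our own (not the family $\mathcal F_1$ above): the ideal $(x^2-Tx,\,y)$ in $K[x,y]$, viewed as generated by the marked polynomials $x^2-Tx$ (head $x^2$) and $y$ (head $y$).

```python
# Toy analogue (ours, not the paper's family F_1): the ideal (x^2 - T*x, y)
# in K[x, y] defines two simple points (0,0), (T,0) for T != 0 and a double
# point at the origin for T = 0.  Reduction modulo the marked polynomials
# x^2 - T*x (head x^2) and y (head y) sends every monomial into span{1, x},
# so the quotient dimension is 2 for every T: the family is flat.

def normal_form(poly, T):
    """poly: dict mapping (a, b) to the coefficient of x^a * y^b."""
    out = {}
    work = dict(poly)
    while work:
        (a, b), c = work.popitem()
        if c == 0:
            continue
        if b > 0:
            continue            # divisible by the head y: reduces to 0
        if a >= 2:              # rewrite x^a -> T * x^(a-1)
            work[(a - 1, 0)] = work.get((a - 1, 0), 0) + c * T
        else:
            out[(a, 0)] = out.get((a, 0), 0) + c
    return {k: v for k, v in out.items() if v != 0}

print(normal_form({(3, 0): 1}, T=2))   # {(1, 0): 4}: x^3 reduces to 4x
```

For every value of $T$ the normal forms land in the span of $\{1,x\}$, so the quotient has dimension $2$ throughout; for $T=0$ the two points collide into a double point with unchanged Hilbert polynomial, which is the flatness phenomenon exploited in the argument above.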
As a consequence, by the same reasoning applied in the proof of Theorem \ref{Gor}, we get the following result. \begin{theorem} \label{Gor2} Every Gorenstein local algebra of type $(1,5,5,1)$ is smoothable. \end{theorem} {This last result has been independently obtained by J. Jelisiejew in \cite{JJ} by different tools.} \section*{Acknowledgments} The authors wish to thank Silvio Greco, Gianfranco Casnati and Roberto Notari, who introduced them to the problem of smoothability of some local Gorenstein Artin algebras. Some of the examples and applications presented in this paper arose from useful discussions with them, in particular the non-flatness of the family of ideals of Example \ref{Esempio flat} and the suggestion to apply our method to study the smoothability of local Gorenstein algebras of type $(1,5,5,1)$ and $(1,7,7,1)$ treated in Subsections \ref{GorSmooth} and \ref{GorSmooth2}. \vskip.4cm The first and third authors were partially supported by the Group of Algebra and Geometry of the Department G. Peano (University of Turin) and by the PRIN \lq\lq Geometria delle variet\`a algebriche\rq\rq, cofinanced by MIUR (Italy) (cofin 2008). The second author was supported by the PRIN \lq\lq Geometria Algebrica e Aritmetica, Teorie Coomologiche e Teoria dei Motivi\rq\rq, cofinanced by MIUR (Italy) (cofin 2008). \bibliographystyle{amsplain} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
https://arxiv.org/abs/1910.05726
Norm attaining operators which satisfy a Bollobás type theorem
In this paper, we are interested in studying the set $\mathcal{A}_{\|\cdot\|}(X, Y)$ of all norm-attaining operators $T$ from $X$ into $Y$ satisfying the following: given $\epsilon>0$, there exists $\eta>0$ such that if $\|Tx\| > 1 - \eta$, then there is $x_0$ such that $\| x_0 - x\| < \epsilon$ and $T$ itself attains its norm at $x_0$. We show that every norm one functional on $c_0$ which attains its norm belongs to $\mathcal{A}_{\|\cdot\|}(c_0, \mathbb{K})$. Also, we prove that the analogous result holds neither for $\mathcal{A}_{\|\cdot\|}(\ell_1, \mathbb{K})$ nor $\mathcal{A}_{\|\cdot\|}(\ell_{\infty}, \mathbb{K})$. Under some assumptions, we show that the sphere of the compact operators is contained in $\mathcal{A}_{\|\cdot\|}(X, Y)$ and that this is no longer true when some of these hypotheses are dropped. The analogous set $\mathcal{A}_{nu}(X)$, defined in terms of the numerical radius of an operator instead of its norm, is also introduced and studied. We present a complete characterization of the diagonal operators which belong to the sets $\mathcal{A}_{\| \cdot \|}(X, X)$ and $\mathcal{A}_{nu}(X)$ when $X=c_0$ or $\ell_p$. As a consequence, we get that the canonical projections $P_N$ on these spaces belong to our sets. We give examples of operators on infinite dimensional Banach spaces which belong to $\mathcal{A}_{\| \cdot \|}(X, X)$ but not to $\mathcal{A}_{nu}(X)$ and vice versa. Finally, we establish some techniques which allow us to connect both sets by using direct sums.
\section{Introduction and Motivation} The famous theorem due to Bollob\'as on functionals which attain their norms states that if $x^*$ is a norm one functional which almost attains its norm at some element $x$, in the sense that $x^*(x) > 1 - \eta$ for some small $\eta > 0$, then there exist a new functional $x_0^*$ and a new element $x_0$ such that $x_0^*$ attains its norm at $x_0$, $x_0 \approx x$, and $x_0^* \approx x^*$ (see \cite{Bol}). This result opened the gate to further discussion on this topic, and nowadays there is a large literature on various classes of functions which attain their norms and satisfy a Bollob\'as type result (see, for instance, \cite{A, AAGM, CDJ, D, DKKLM2, DKLM, KL, S, T} and the references therein). Lindenstrauss in \cite{L} was the first to answer negatively a question posed by Bishop and Phelps in \cite{BP} on the density of linear operators which attain their norms. In particular, Bollob\'as' result is no longer true for this class of functions, which has led many researchers to study systematically when a Bollob\'as type result holds for functions other than functionals. As a starting point on this topic, we refer the reader to the seminal paper \cite{AAGM}. In order to explain properly what we will be doing in this paper, we briefly present some necessary notation so that the reader can follow the ideas easily. We denote by $B_X$ and $S_X$ the closed unit ball and the unit sphere of the Banach space $X$, respectively. We denote by $X^*$ the topological dual of $X$. Given two Banach spaces $X$ and $Y$, we denote by $\mathcal{L}(X, Y)$ the set of all bounded linear operators from $X$ into $Y$, and we write $\mathcal{L}(X)$ when $X = Y$. We will use both notations $\langle x^*, x \rangle$ and $x^*(x)$ interchangeably throughout the paper for the action of an element $x^* \in X^*$ on an element $x \in X$. We say that $T \in \mathcal{L}(X, Y)$ attains its norm if there exists $x_0 \in S_X$ such that $\|T(x_0)\| = \|T\|$.
In this case, we say that $T$ is a norm-attaining operator. The set of norm-attaining operators from $X$ into $Y$ is denoted by $\operatorname{NA}(X, Y)$. We introduce the set of all states on $X$ by $\Pi(X) = \{(x,x^*) \in S_X \times S_{X^*}: \langle x^*, x \rangle = 1\}$ and, for a given operator $T \in \mathcal{L}(X)$, the numerical radius of $T$ is defined as $\nu(T) := \sup \{|\langle x^*, T(x) \rangle|: (x, x^*) \in \Pi(X)\}$. Notice that we always have $\nu(T) \leq \|T\|$. We say that $T$ attains the numerical radius when there is $(x_0, x_0^*) \in \Pi(X)$ such that $|\langle x_0^* , T(x_0)\rangle| = \nu(T)$. In this case, we say that $T$ is a numerical radius attaining operator. For a background on this topic, we refer to \cite{BD1, BD2}. In this paper, we are interested in studying a set of linear operators which satisfy a Bollob\'as type theorem in a sense which will be clear in a moment. This was motivated by a natural question, although quite restrictive at first glance, of whether we can get a Bollob\'as theorem without changing the initial operator which almost attains its norm. In other words, given $\varepsilon>0$, is it true that there exists $\eta > 0$, which depends just on $\varepsilon$, such that if $\|Tx\| > 1 - \eta$, then there exists $x_0$ such that $x_0 \approx x$ and $T$ {\it itself} attains its norm at $x_0$? It turns out that the answer to this problem is negative whenever the dimension of the involved Banach spaces is bigger than 2 (see \cite[Theorem 2.1]{DKKLM2}) and, on the other hand, it characterizes uniformly convex Banach spaces when we consider the problem for linear functionals (see \cite[Theorem 2.1]{KL}).
Since there is no hope for a uniform version of this problem for operators (in the sense that $\eta$ depends just on a given $\varepsilon>0$) and the functional case is completely characterized, it seems reasonable to consider the same problem but now with $\eta$ depending not only on $\varepsilon$ but also on a fixed norm one operator $T$. This was done in \cite{D, DKLM, DKLM2, S, T}, and many positive results emerge, in contrast to the uniform case. Here, we will be working with a set of operators which satisfy such a property. Let us give the precise definitions. \begin{definition} \label{maindefinition} Let $X, Y$ be Banach spaces. \begin{itemize} \item[(i)] $\mathcal{A}_{\| \cdot \|}(X, Y)$ stands for the set of all norm-attaining operators $T \in \mathcal{L}(X, Y)$ with $\|T\| = 1$ such that if $\varepsilon > 0$, then there is $\eta(\varepsilon, T) > 0$ such that whenever $x \in S_X$ satisfies $\|T(x)\| > 1 - \eta(\varepsilon, T)$, there is $x_0 \in S_X$ such that $\|T(x_0)\| = 1$ and $\|x_0 - x\| < \varepsilon$. \vspace{0.2cm} \item[(ii)] $\mathcal{A}_{\num}(X)$ stands for the set of all numerical radius attaining operators $T \in \mathcal{L}(X)$ with $\nu(T) = 1$ such that if $\varepsilon > 0$, then there is $\eta(\varepsilon, T) > 0$ such that whenever $(x, x^*) \in \Pi(X)$ satisfies $|\langle x^*, T(x) \rangle| > 1 - \eta(\varepsilon, T)$, there is $(x_0, x_0^*) \in \Pi(X)$ such that $|\langle x_0^*, T(x_0) \rangle| = 1$, $\|x_0 - x\| < \varepsilon$, and $\|x_0^* - x^*\| < \varepsilon$. \end{itemize} \end{definition} Let us notice the following. Suppose that the pair of Banach spaces $(X, Y)$ satisfies the following property: given $\varepsilon>0$ and $T \in S_{\mathcal{L}(X, Y)}$, there exists $\eta(\varepsilon, T) >0$ such that whenever $x \in S_X$ satisfies $\|Tx\| > 1 - \eta(\varepsilon, T)$, there exists $x_0 \in S_X$ such that $\|Tx_0\| = 1$ and $\|x_0 - x\| < \varepsilon$.
In this case, clearly all norm one operators from $X$ into $Y$ belong to the set $\mathcal{A}_{\|\cdot\|}(X, Y)$. Notice also that if $(X, Y)$ satisfies such a property, then $X$ must be reflexive by the James theorem since, in this case, every operator attains its norm. Studying the set $\mathcal{A}_{\|\cdot\|}$ gives us more freedom in the sense that we do not have to restrict ourselves to any condition on the involved spaces, but of course on the definition of a concrete operator. On the other hand, very recently this property was used in \cite{DJRR} as a tool to prove that every nuclear operator can be approximated (in the nuclear norm) by nuclear operators which attain their nuclear norms. This makes us think that studying the sets $\mathcal{A}_{\|\cdot\|}$ and $\mathcal{A}_{\text{nu}}$ might be helpful to get similar results in the context of tensor products by using an analogous definition of the set $\mathcal{A}_{\|\cdot\|}$ for bilinear mappings. For instance (and this will be just a motivational thought by now), consider the projective tensor product $X \ensuremath{\widehat{\otimes}_\pi} Y$ between two Banach spaces $X$ and $Y$, and denote by $\|\cdot\|_{\pi}$ the projective tensor norm on $X \ensuremath{\widehat{\otimes}_\pi} Y$. Define $\mathcal{A}_{\|\cdot\|}(X \times Y)$ to be the set of all norm-attaining bilinear mappings satisfying the corresponding version of Definition \ref{maindefinition}.(i). If $B \in \mathcal{A}_{\|\cdot\|}(X \times Y)$ and $|B(z)| > 1 - \eta(\varepsilon, B)^2$, where $z \in X \ensuremath{\widehat{\otimes}_\pi} Y$ with $\|z\|_{\pi} = 1$ and $\eta(\varepsilon, B)>0$ is the function appearing in the definition, then there exists a norm-attaining tensor $z' \in X \ensuremath{\widehat{\otimes}_\pi} Y$ such that $B(z') = 1$ and $\|z' - z\|_{\pi} < \delta(\varepsilon)$, where $\delta(\varepsilon) > 0$ is small.
This leads us to the natural question of how often a bilinear mapping belongs to $\mathcal{A}_{\|\cdot\|}$ and how often the function $\eta(\varepsilon, B)$ depends just on $\varepsilon$, since this would imply the density of the set of the tensors which attain their projective norms (see \cite[Proposition 4.3]{DJRR} for more information in this direction). Thanks to the natural isometric identification between the bilinear mappings on $X \times Y$ and the operators from $X$ into $Y^*$, our study of the sets $\mathcal{A}_{\|\cdot\|}$ and $\mathcal{A}_{\text{nu}}$ for operators may lead to new progress on both nuclear operators and tensors which attain their nuclear and projective norms, respectively. Now, we describe the content of this paper. We start by showing, as expected, that when we are working with finite dimensional spaces, we have a positive result. That is, if $\dim(X)<\infty$, then the set $\mathcal{A}_{\| \cdot \|}(X, Y)$ coincides with the unit sphere of $\mathcal{L}(X, Y)$ for every Banach space $Y$, and the set $\mathcal{A}_{\num}(X)$ coincides with the set of all operators with numerical radius one. As a consequence, we get that every norm one functional on $c_0$ which attains its norm belongs to $\mathcal{A}_{\|\cdot\|}(c_0, \mathbb{K})$ by using the canonical embedding of the finite dimensional space $(\mathbb{K}^n,\,\|\cdot\|_{\infty})$ (the space $\mathbb{K}^n$ with the topology induced by the norm of $c_0$) into $c_0$. On the other hand, we present examples of norm one functionals on $\ell_1$ and $\ell_{\infty}$ which attain their norms but cannot be in $\mathcal{A}_{\|\cdot\|}(\ell_1, \mathbb{K})$ and $\mathcal{A}_{\|\cdot\|}(\ell_{\infty}, \mathbb{K})$, respectively. Next, we show that, under some assumptions on the Banach space $X$, the sphere of the compact operators is contained in $\mathcal{A}_{\| \cdot \|}(X, Y)$ and also that the set of all compact operators $T$ with $\nu(T) = \|T\| = 1$ is contained in $\mathcal{A}_{\num}(X)$.
Moreover, we provide some counterexamples which show that the result is no longer true when some of these hypotheses are dropped. Some conditions on $X$ which guarantee that one can pass from $\mathcal{A}_{\| \cdot \|}(X, Y)$ to $\mathcal{A}_{\|\cdot\|}(Y^*, X^*)$ (analogously, from $\mathcal{A}_{\num}(X)$ to $\mathcal{A}_{\text{nu}}(X^*)$) via the adjoint operation are also considered. As one of the main results, we give a complete characterization of when a diagonal operator belongs to $\mathcal{A}_{\|\cdot\|}(X, X)$ for $X=c_0$ or $\ell_p$ with $1\leq p \leq \infty$, and to $\mathcal{A}_{\text{nu}}(X)$ in the same cases except $p=\infty$. As a consequence, the canonical projections $P_N$ belong to these sets. Finally, in the last section, we study some relations between $\mathcal{A}_{\|\cdot\|} (X, Y)$ and $\mathcal{A}_{\text{nu}} (X \oplus Y)$ through the natural correspondences between $\mathcal{L} (X,Y)$ and $\mathcal{L} (X \oplus Y)$. \section{Main results} In this section, we present the main results of the paper. Recall that in finite dimensional Banach spaces, every operator $T$ attains both its norm and its numerical radius by compactness. The following result shows that, when $X$ is finite dimensional, we can describe the sets $\mathcal{A}_{\| \cdot \|}(X, Y)$ and $\mathcal{A}_{\num}(X)$ entirely. Moreover, we show that $S_{\ell_1} \cap \NA(c_0, \mathbb{K})$ is always contained in $\mathcal{A}_{\|\cdot\|}(c_0, \mathbb{K})$. \begin{theorem} \label{finitedim} Let $X$ be a finite dimensional Banach space. Then \begin{itemize} \item [(i)] $\mathcal{A}_{\| \cdot \|}(X, Y) = \{T \in \mathcal{L}(X,Y) : \| T \| =1\}$ for any Banach space $Y$, \item[(ii)] $\mathcal{A}_{\num}(X) = \{T \in \mathcal{L}(X) : \nu (T) =1\}$, \item[(iii)] Every norm one functional on $c_0$ which attains the norm belongs to $\mathcal{A}_{\|\cdot\|}(c_0, \mathbb{K})$.
\end{itemize} \end{theorem} \begin{proof} Items (i) and (ii) are proved by using the compactness of the unit ball of the finite dimensional space $X$ as in \cite[Proposition 2.4]{AAGM} or \cite[Theorem 2.4]{D}. To prove (iii), suppose that $x^* \in S_{c_0^*}$ attains its norm at some point in $B_{c_0}$. Then, there exists $n_0 \in \mathbb{N}$ such that $x^* (n)=0$ for every $n > n_0$. Let $\Psi : (\mathbb{K}^{n_0},\| \cdot \|_{\infty}) \rightarrow c_0$ be the canonical embedding that sends $(k_1, \ldots, k_{n_0}) \mapsto (k_1, \ldots, k_{n_0}, 0, 0, \ldots)$. It is easy to see that $\| \Psi \| = 1$. Moreover, $\| x^* \circ \Psi \| = 1$, so (i) implies that $x^* \circ \Psi \in \mathcal{A}_{\|\cdot\|} (\mathbb{K}^{n_0}, \mathbb{K})$. Given $\varepsilon>0$, define $$\delta(\varepsilon, x^*):=\min \left\{ \frac{\varepsilon}{2}, \eta\left( \frac{\varepsilon}{2}, x^*\circ \Psi\right) \right\}$$ and suppose that $| \langle x^*, x_0\rangle | > 1 - \delta(\varepsilon, x^*)$ for some point $x_0 \in S_{c_0}$. Let $z_0\in \mathbb{K}^{n_0}$ be the point such that $z_0(n)=x_0(n)$ for $1\leq n\leq n_0$. Then, \begin{equation*} \left|(x^*\circ \Psi) \left(\frac{z_0}{\| z_0\|_{\infty}} \right) \right| > 1-\delta(\varepsilon, x^*), \end{equation*} so there is $u_0 \in S_{\mathbb{K}^{n_0}}$ such that $|( x^* \circ \Psi )(u_0) | = 1$ and $ \| u_0 - \frac{z_0}{\| z_0\|_{\infty}} \|_\infty < \frac{\varepsilon}{2}$. Finally, let $v_0\in c_0$ be such that $v_0(n)=u_0(n)$ for $1\leq n\leq n_0$ and $v_0(n)=x_0(n)$ for $n>n_0$. It follows that $x^*$ attains its norm at $v_0 \in S_{c_0}$ and $$\| v_0 - x_0 \| = \|u_0 - z_0\|_\infty \leq \left\| u_0 - \frac{z_0}{\| z_0\|_\infty}\right\|_\infty + \left\| \frac{z_0}{\| z_0\|_\infty} - z_0\right\|_\infty < \frac{\varepsilon}{2} + (1 - \| z_0\|_\infty) \leq \varepsilon,$$ where the last inequality holds since $\| z_0\|_{\infty} \geq | \langle x^*, x_0\rangle | > 1 - \frac{\varepsilon}{2}$. \end{proof} Concerning linear functionals on $\ell_p$-spaces, we have the following result. \begin{prop} Let $X$ be a Banach space.
\begin{itemize} \item[(i)] If $X$ is uniformly convex, then $S_{X^*} = \mathcal{A}_{\|\cdot\|}(X, \mathbb{K})$. \item[(ii)] There is $x^* \in \NA(\ell_1, \mathbb{K}) \cap S_{\ell_{\infty}}$ such that $x^* \not\in \mathcal{A}_{\|\cdot\|} (\ell_1, \mathbb{K})$. \item[(iii)] There is $x^* \in \NA(\ell_{\infty}, \mathbb{K}) \cap S_{\ell_{\infty}^*}$ such that $x^* \not\in \mathcal{A}_{\|\cdot\|} (\ell_{\infty}, \mathbb{K})$. \end{itemize} \end{prop} \begin{proof} For item (i), we argue as in \cite[Theorem 2.1]{KL}. Let us prove (ii) now. Consider the norm one functional $z^* := \left(1, \frac{1}{2},\frac{2}{3}, \ldots, \frac{n-1}{n}, \ldots \right) \in \ell_{\infty}$. Notice that $z^*$ is a norm-attaining functional and the rotations of the unit vector $e_1 \in S_{\ell_1}$ are its only norming points; that is, if $| \langle z^*, z \rangle | = 1$ with $z \in S_{\ell_1}$, then $z$ is of the form $z = e^{i \theta} e_1$ for some $\theta \in [0,2\pi)$ (indeed, since $|z^*(n)|<1$ for every $n \geq 2$, equality in $1 = | \langle z^*, z \rangle | \leq \sum_{n} |z^*(n)| |z(n)| \leq \|z\|_1 = 1$ forces $z(n) = 0$ for all $n \geq 2$). Given $\varepsilon>0$, suppose that there is such an $\eta(\varepsilon, z^*) > 0$. We take $k \in \mathbb{N}$ such that $\frac{1}{k} < \eta(\varepsilon, z^*)$, and then $| \langle z^*, e_k \rangle | > 1 - \eta(\varepsilon, z^*)$. This means that there is $z \in S_{\ell_1}$ such that $| \langle z^*, z \rangle | = 1$ and $\|z - e_k\|_1 < \varepsilon$. This implies that $z = e^{i \theta}e_1$ and $\|e^{i \theta}e_1 - e_k\|_1 = 2$, which is a contradiction. For item (iii), consider the functional $ x^* := \left(\frac{1}{2}, \frac{1}{2^2}, \frac{1}{2^3}, \ldots \right)$ on $\ell_\infty$, which, as an element of $S_{\ell_1}$, is embedded in $S_{\ell_{\infty}^*}$. If there is $z = ( z(n) )_{n=1}^{\infty} \in S_{\ell_{\infty}}$ such that $| \langle x^* , z \rangle| = \| x^* \| = 1$, then \begin{equation*} 1 = | \langle x^* , z \rangle| = \left| \sum_{n=1}^{\infty} \frac{1}{2^n} z(n) \right| \leq \sum_{n=1}^{\infty} \frac{1}{2^n} | z(n) | \leq 1.
\end{equation*} From this, we get that $z(n) = e^{i \theta}$ for all $n \in \mathbb{N}$ and some fixed $\theta$. Now, assuming that such an $\eta(\varepsilon, x^* ) > 0$ exists for a given $\varepsilon \in (0,1)$, we take $k \in \mathbb{N}$ with $2^k \eta(\varepsilon, x^* ) > 1$ and consider the element $e_1 + \ldots + e_k \in S_{\ell_{\infty}}$. Then, $ | \langle x^* , e_1 + \ldots + e_k \rangle | > 1 - \eta(\varepsilon, x^* ).$ So, there is $x \in S_{\ell_{\infty}}$ such that $| \langle x^* , x \rangle| = 1$ and $\|x - (e_1 + \ldots + e_k)\|_{\infty} < \varepsilon$, which leads to a contradiction since $\|x - (e_1 + \ldots + e_k)\|_{\infty} \geq 1$. \end{proof} Let us observe that it is immediate from the definition that an operator which has norm one but does not attain the norm cannot be in $\mathcal{A}_{\| \cdot \|}(X, Y)$. Analogously, the same argument applies to the set $\mathcal{A}_{\num}(X)$ for operators which do not attain their numerical radius. Nevertheless, we present in Example \ref{ex2} a norm one operator which attains its norm and numerical radius but belongs neither to $\mathcal{A}_{\| \cdot \|}(X, X )$ nor to $\mathcal{A}_{\num}(X)$. \begin{example} \label{ex2} Let $p>1$ and $q>1$ be such that $\frac{1}{p} + \frac{1}{q} = 1$. We consider the spaces $\ell_p$ and $\ell_q$ as $\ell_p (\ell_p^2)$ and $\ell_q(\ell_q^2)$, respectively, where $\ell_p^2 = (\mathbb{K}^2, \| \cdot \|_p)$. For each $n \in \mathbb{N}$, we define $T_n\in \mathcal{L} (\ell_p^2 )$ by \begin{equation*} {T_n(x, y) := \left( \left(1 - \frac{1}{2n} \right)x, y \right) \ \ \ \left( (x, y) \in \ell_p^2 \right).} \end{equation*} Now, define $T\in \mathcal{L}( \ell_p )$ as \begin{equation*} {T(z) := (T_n( x(n) , y(n) ))_n = \left( \left( 1 - \frac{1}{2n}\right) x(n) , y(n) \right)_n \ \ \ \left( z = (( x(n) , y(n) ))_n \in \ell_p \right)}. \end{equation*} Following \cite[Theorem 2.21.(ii)]{D}, we see that $T$ attains its norm but $T \notin \mathcal{A}_{\| \cdot\|}(\ell_p, \ell_p)$. Let us also see that $T \not\in \mathcal{A}_{\num}(\ell_p)$.
Let $e_i^2$, for $i = 1, 2$, be the canonical unit vectors of $\ell_p^2$ and $\ell_q^2$, {that is, $e_1^2 = (1,0)$ and $e_2^2 = (0,1)$}. Consider $e_{i, n} := ((0, 0), \ldots, (0, 0), \underbrace{e_i^2}_{n \text{-th}} , (0, 0), \ldots) \in S_{\ell_p}$ and $e_{i, n}^* := ((0, 0), \ldots, (0, 0), \underbrace{e_i^2}_{n \text{-th}} , (0, 0), \ldots) \in S_{\ell_q}$ for $i = 1, 2$. Since $| \langle e_{2, n}^*, T(e_{2,n})\rangle | = 1$, $T$ attains its numerical radius and $\nu(T) = \|T\| = 1$. Suppose that $T \in \mathcal{A}_{\num} (\ell_p)$ and, for a given $\varepsilon \in (0, 1)$, choose $n \in \mathbb{N}$ with $\frac{1}{2n} < \eta(\varepsilon, T)$. Since $\nu(T) = \|e_{1, n}\|_p = \|e_{1, n}^*\|_q = \langle e_{1, n}^*, e_{1, n} \rangle = 1$ and $| \langle e_{1, n}^*, T(e_{1, n}) \rangle | > 1 - \eta(\varepsilon, T)$, there is $(w, w^*) \in \Pi(\ell_p)$ such that $| \langle w^*, T(w) \rangle | = 1$, $\|w - e_{1, n}\|_p < \varepsilon$, and $\|w^* - e_{1,n}^*\|_q < \varepsilon$. Since $\|T\| = 1$ and $| \langle w^*, T(w) \rangle | = 1$, it follows that $\|T(w)\|_p = 1$. If we denote $w = (( u(n) , v(n) ))_{ n} \in S_{\ell_p}$, then it is possible to see that $ u(j) = 0$ for all $j \in \mathbb{N}$. This implies that $\|w - e_{1, n}\|_p = \| ((0, v(n) ))_n - e_{1, n}\|_p \geq 1$, which is a contradiction. \end{example} \begin{rem} Due to the relation between the norm of an operator and its numerical radius, it is natural to wonder whether the fact that an operator is in $\mathcal{A}_{\| \cdot \|}(X, X)$ for some Banach space $X$ implies that it also belongs to $\mathcal{A}_{\text{nu}}(X)$. Nevertheless, this is not the case in general, even in Hilbert spaces. Indeed, on the one hand, every isometry on $X$ clearly belongs to $\mathcal{A}_{\|\cdot\|}(X, X)$. On the other hand, this does not hold for the set $\mathcal{A}_{\text{nu}} (X)$. Consider the right shift operator $R \in \mathcal{L}(\ell_2 )$.
It is known that the numerical range $W(R)$ of $R$ is the open unit disk $\mathbb{D}$ in the complex plane (see, for example, \cite[Example 2]{GR}), which implies that $\nu (R) = 1$ but $| \langle Rx, x \rangle | < 1$ for every $x \in S_{\ell_2}$. \end{rem} Recall that a Banach space $X$ satisfies the Kadec-Klee property when the weak and norm topologies coincide on the unit sphere $S_X$. It is well-known that every locally uniformly rotund space (LUR, for short) satisfies the Kadec-Klee property (the converse is not true, e.g., $\ell_1^2$). Recall also that, by the \v{S}mulian lemma, the norm of $X$ is Fr\'echet differentiable at $x \in S_X$ if and only if every sequence $(x_n^*) \subset S_{X^*}$ with $\lim_n \langle x_n^*, x \rangle = 1$ is convergent. In the next result, under some assumptions on the involved Banach spaces, we show that some subsets of the space of all compact operators belong to the classes $\mathcal{A}_{\|\cdot\|}$ and $\mathcal{A}_{\text{nu}}$. We denote by $\mathcal{K}(X, Y)$ the set of all compact operators from $X$ into $Y$. \begin{theorem} \label{KKop} Let $X$ be a reflexive space which satisfies the Kadec-Klee property. Then, \begin{itemize} \item[(i)] $S_{\mathcal{K}(X, Y)} \subset \mathcal{A}_{\| \cdot \|}(X, Y)$ for every Banach space $Y$. \item[(ii)] $\{ T \in \mathcal{K}(X) : \nu(T) = \|T\| = 1 \} \subset \mathcal{A}_{\num}(X)$ whenever the norm of $X$ is Fr\'echet differentiable. \end{itemize} \end{theorem} \begin{proof} Item (i) follows from the same argument as in \cite[Theorem 2.12]{S}. Let us prove (ii). Suppose, by contradiction, that the statement is not true.
Then, there are $\varepsilon_0 \in (0, 1)$ and a compact operator $T \in \mathcal{K}( X)$ with $ \nu (T) = \|T\| = 1$ such that for every $n \in \mathbb{N}$, there is $(x_n, x_n^*) \in \Pi(X)$ such that \begin{equation} \label{eq0} 1 \geq | \langle x_n^*, T(x_n) \rangle | \geq 1 - \frac{1}{n} \end{equation} and whenever $(x, x^*) \in \Pi(X)$ satisfies $\|x - x_n\| < \varepsilon_0$ and $\|x^* - x_n^*\| < \varepsilon_0$, we have $| \langle x^*, T(x) \rangle | < 1$. By the reflexivity of $X$, there is a subsequence of $(x_n)$, which we denote again by $(x_n)$, and $x_0 \in B_X$ such that $x_n \stackrel{w}{\longrightarrow} x_0$. Thus, $T(x_n) \longrightarrow T(x_0)$ in norm. From this and $1 = \nu (T) = \|T\| \geq \|T(x_n)\| \geq | \langle x_n^*, T(x_n) \rangle | \longrightarrow 1$, we get that $\|T(x_0)\| = 1$. This shows that $x_0 \in S_X$. Since the weak and norm topologies coincide on $S_X$, we have that $x_n \longrightarrow x_0$ in norm. Notice now that for each $n \in \mathbb{N}$, we have \begin{eqnarray*} 1 \geq | \langle x_n^*, T(x_0) \rangle | \geq | \langle x_n^*, T(x_n) \rangle | - \|x_0 - x_n\|. \end{eqnarray*} Since $x_n$ converges to $x_0$ in norm, by using $(\ref{eq0})$, we get that $| \langle x_n^*, T(x_0) \rangle | \longrightarrow 1$. Thus, there exist a subsequence of $(x_n^*)$, which we denote again by $(x_n^*)$, and some $\theta\in [0, 2\pi )$ such that $ \langle x_n^*, T(x_0) \rangle $ converges to $e^{i\theta}$. Let $S\in \mathcal{K}(X)$ be the operator defined by $S:=e^{-i\theta} T$. One clearly has that $S(x_0)\in S_X$ and that $ \langle x_n^*, S(x_0) \rangle $ converges to $1$. By the \v{S}mulian lemma, there is $x_0^* \in B_{X^*}$ such that $x_n^* \longrightarrow x_0^*$ in norm. Since $ \langle x_n^*, x_n \rangle = 1$ for every $n \in \mathbb{N}$, we get that $ \langle x_0^*, x_0 \rangle = 1$. So, $x_0^* \in S_{X^*}$ and then $(x_0, x_0^*) \in \Pi(X)$.
Finally, in view of (\ref{eq0}) and $| \langle x_n^*, T(x_n) \rangle | \longrightarrow | \langle x_0^*, T(x_0) \rangle |$, we get that $| \langle x_0^*, T(x_0) \rangle | = 1$. This is a contradiction. \end{proof} In fact, the above argument shows, under the same assumptions as in (ii), that every compact operator $T$ which has norm and numerical radius $1$ attains its numerical radius. Notice also that the identity operator always belongs to $\mathcal{A}_{\text{nu}} (X)$, whereas it is not compact unless $X$ is finite dimensional. So, in the infinite dimensional setting, the inclusion in Theorem \ref{KKop}.(ii) must be strict. On the other hand, since every operator from a reflexive space into a space with the Schur property is compact and Hilbert spaces satisfy all the hypotheses of Theorem \ref{KKop}, we have the following consequence. \begin{cor} \label{corohilbert} Let $X$ be a reflexive Banach space with the Kadec-Klee property and let $H$ be a Hilbert space. \begin{itemize} \item[(i)] If $Y$ has the Schur property, then $ \mathcal{A}_{\| \cdot \|}(X, Y) = S_{\mathcal{L}(X, Y)}$. \item[(ii)] If $T \in \mathcal{K}(H)$ satisfies $\nu(T) = \|T\| = 1$, then $T \in \mathcal{A}_{\num}(H)$. \end{itemize} \end{cor} Next, we present a numerical radius attaining compact operator $S \notin \mathcal{A}_{\text{nu}} $ with $ \nu (S) = \|S\| = 1$ defined on a Banach space $X$ which is not reflexive, whose norm is nowhere Fr\'echet differentiable, and which satisfies the Schur property (and, in particular, the Kadec-Klee property). \begin{example} \label{ex3} Consider $c_0$ as a real space. Define the operator $T\in \mathcal{L}(c_0)$ by \begin{equation*} (T(x) )(1) = \sum_{j=1}^{\infty} \frac{1}{2^j} x(j) \quad\text{and}\quad (T(x)) (k) = 0 \quad (k \geq 2) \ \ \ (x = (x(j))_{j=1}^{\infty} \in c_0). \end{equation*} It is proved in \cite[Proposition 2.8]{A} that $\|T\| = \nu(T) = 1$ but $T$ attains neither its norm nor its numerical radius.
In particular, $T$ belongs neither to $\mathcal{A}_{\|\cdot\|}(c_0, c_0)$ nor to $\mathcal{A}_{\text{nu}}(c_0)$. We claim that $S := T^*$ is a compact numerical radius attaining operator with $\nu(S)=\|S\|=1$ which does not belong to $\mathcal{A}_{\text{nu}} (\ell_1)$. Indeed, first notice that $S\in \mathcal{L}(\ell_1 )$ is given by \begin{equation*} S(y) = \sum_{j=1}^{\infty} \frac{y(1)}{2^j} e_j \ \ \ (y = (y(j))_{j=1}^{\infty} \in \ell_1). \end{equation*} Moreover, $\nu(S) = \nu(T) = 1$, $\langle z, e_1 \rangle = 1$ where $z = (1,1,1, \dots ) \in S_{\ell_\infty}$, and $\langle z, S(e_1) \rangle = \sum_{j=1}^{\infty} \frac{1}{2^j} = 1$, which implies that $S$ attains the numerical radius (and the norm). Before proving that $S \notin \mathcal{A}_{\text{nu}} (\ell_1)$, let us first observe that $S \in \mathcal{A}_{\| \cdot \|} (\ell_1, \ell_1)$. Indeed, given $\varepsilon>0$, take $x \in S_{\ell_1}$ such that $\| S(x)\|_{1} > 1 - \frac{\varepsilon}{2}$, that is, $\sum_{j=1}^{\infty} \frac{|x(1)|}{2^j} > 1 - \frac{\varepsilon}{2}.$ Thus, $|x(1)| > 1-\frac{\varepsilon}{2}$ and $\sum_{j=2}^{\infty} |x(j)| \leq \frac{\varepsilon}{2}$. Consider $y = \left(\frac{x(1)}{|x(1)|}, 0,0,\dots\right) \in S_{\ell_1}$; then \begin{equation*} \|S(y) \|_{1} = 1 \ \ \ \mbox{and} \ \ \ \| x -y \|_{1} = |x(1) - y(1)| + \sum_{j=2}^{\infty} |x(j)| \leq (1 - |x(1)|) + \frac{\varepsilon}{2} < \varepsilon. \end{equation*} This shows that $S \in \mathcal{A}_{\| \cdot \|} (\ell_1, \ell_1)$. Next, we claim that $S$ cannot be in $\mathcal{A}_{\text{nu}}(\ell_1)$. Indeed, observe that if $(y,z) \in \Pi(\ell_1)$ satisfies $| \langle z, S(y) \rangle | = 1$, then \begin{equation*} \sum_{j=1}^{\infty} |y(j)| = 1, \ \ \ \sum_{j=1}^{\infty} y(j) z (j) = 1, \ \ \ \left| \sum_{j=1}^{\infty} \frac{1}{2^j} y(1) z (j)\right| = 1, \ \ \ \mbox{and} \ \ \ \max_{j \in \mathbb{N}} |z(j)| = 1.
\end{equation*} From the third equality, we have \begin{equation*} 1 = \left| \sum_{j=1}^{\infty} \frac{1}{2^j} y(1) z (j) \right| \leq |y(1)| \sum_{j=1}^{\infty} \frac{1}{2^j} = |y(1)| \leq 1. \end{equation*} This implies that the only possible candidates are $y = (1, 0, 0, 0, \ldots)$ and $z = (1, 1, 1, 1, \ldots)$ or $y = (-1, 0, 0, 0, \ldots)$ and $z = (-1, -1, -1, -1, \ldots)$. Suppose, by contradiction, that for a given $\varepsilon \in (0, 1)$, there is such an $\eta(\varepsilon, S) > 0$. Let $n_0 \in \mathbb{N}$ be such that $\sum_{j=1}^{n_0} \frac{1}{2^j} > 1 - \eta(\varepsilon, S)$. Set $y_0 = (1, 0, 0, \ldots) \in S_{\ell_1}$ and $z_0 = (1, 1, \ldots, 1, \underbrace{1}_{n_0 \text{-th}}, 0, 0, \ldots) \in S_{\ell_{\infty}}$. Then, $(y_0, z_0) \in \Pi(\ell_1)$ and $| \langle z_0, S(y_0) \rangle| = \sum_{j=1}^{n_0} \frac{1}{2^j} > 1 - \eta(\varepsilon, S)$. So, there is $(y, z) \in \Pi(\ell_1)$ such that $| \langle z, S(y) \rangle| = 1$, $\|y - y_0\|_1 < \varepsilon$, and $\|z - z_0\|_{\infty} < \varepsilon$. But this is not possible since $\|z - z_0\|_{\infty} \geq | z(n_0 +1) - z_0(n_0 +1)| \geq 1$. \end{example} Let us recall that, in Corollary \ref{corohilbert}, we proved that if a compact operator $T$ defined on a Hilbert space is such that $\nu (T) = \| T \| = 1$, then $T$ must belong to the set $\mathcal{A}_{\num}(H)$. However, the following result (inspired by \cite[Example 1.9]{A}) provides a wide class of operators $T\in \mathcal{A}_{\num}(H)$ such that $1 = \nu (T) < \| T \|$ and, in particular, examples of operators which belong to $\mathcal{A}_{\text{nu}}$ but not to $\mathcal{A}_{\|\cdot\|}$. Notice, by item (iii) below, that $T$ belongs to the set $\mathcal{A}_{\text{nu}}$ in a uniform sense, that is, the $\eta$ does not depend on the operator $T$ defined there. We do not know how often this happens; that is, we do not know, for instance, whether the set of such operators could be norming for the whole space.
\begin{prop} \label{ex4} Let $H$ be a separable infinite-dimensional real Hilbert space. Then, there is $T\in \mathcal{L} (H)$ such that \begin{itemize} \item[(i)] $T$ is a compact operator. \item[(ii)] $1 = \nu(T) < \|T\|$ and $T$ attains its numerical radius. \item[(iii)] given $\varepsilon > 0$, there is $\eta(\varepsilon) > 0$ such that whenever $x_0 \in S_H$ satisfies \begin{equation*} |\langle Tx_0, x_0 \rangle| > 1 - \eta(\varepsilon), \end{equation*} there is $x_1 \in S_H$ such that $\nu(T) = \langle Tx_1, x_1 \rangle = 1$ and $\|x_1 - x_0\| < \varepsilon$. \end{itemize} In particular, $T \in \mathcal{A}_{\num}(H)$ and $T \not\in \mathcal{A}_{\|\cdot\|}(H, H)$. \end{prop} \begin{proof} Let $0 < \alpha \leq 1$ and let $\{\alpha_n\}$ be a sequence such that $|\alpha_1| > 1$, $-1< \alpha_n < 1$ for $n \geq 2$, and $\alpha_n \rightarrow 0$ as $n \rightarrow \infty$. Let $\{J_1, J_2, J_3\}$ be a partition of $\mathbb{N}$ such that $|J_1|=|J_2|=\aleph_0$, $|J_3|=\ell<\infty$. Write the subsets $J_1$, $J_2$ as $J_1 = \{n_k : k \geq 1\}$, $J_2 = \{m_k : k \geq 1\}$ where $n_1 < n_2 < \dots$, $m_1 < m_2 < \dots$ and each $n_k$ corresponds to $m_k$ via a one-to-one correspondence between $J_1$ and $J_2$. Define $T \in \mathcal{L}(H)$ by \begin{align*} T(e_{n_k}) = -\alpha_k e_{m_k}~ ({k \in \mathbb{N}}), \quad T(e_{m_k}) =\alpha_k e_{n_k} ~ ({k \in \mathbb{N}}),\quad T(e_n) = \alpha e_n ~ (n \in J_3), \end{align*} where $\{e_n : n \geq 1\}$ is an orthonormal basis of $H$. Note first that for every $x \in H$, we have \begin{equation*} T(x) = \sum_{n=1}^{\infty} \langle x, e_n \rangle T(e_n) = \sum_{k \in \mathbb{N}} \left(-\alpha_k \langle x, e_{n_k} \rangle e_{m_k}+ \alpha_k \langle x, e_{m_k}\rangle e_{n_k}\right) + \sum_{n \in J_3} \alpha \langle x, e_n\rangle e_n. \end{equation*} Item (i) is clear. Let us calculate the norm and numerical radius of $T$.
Note that for each $x \in S_H$, we have \begin{align*} \langle T(x), x\rangle = \sum_{k \in \mathbb{N}} \left(\alpha_k \langle e_{n_k}, x \rangle \langle x, e_{m_k}\rangle - \alpha_k \langle e_{m_k}, x \rangle \langle x, e_{n_k}\rangle\right) + \sum_{n \in J_3} \alpha \langle e_n, x\rangle \langle x, e_n\rangle. \end{align*} The two terms inside the first sum cancel each other because $H$ is real, and hence \begin{equation}\label{eq2} \langle T(x), x\rangle= \alpha \sum_{n \in J_3} |\langle x, e_n\rangle |^2 \end{equation} for $x \in S_H$, which implies that $ \nu (T) \leq \alpha$. Since $|\langle T e_n, e_n\rangle| = \alpha$ for every $n \in J_3$, we have that $T$ attains its numerical radius and $ \nu (T) = \alpha$. On the other hand, let us notice that, for every $x \in H$, we have \begin{align*} \|T(x)\|^2 = \sum_{j=1}^{\infty} |\langle T(x), e_j \rangle |^2 &= \sum_{k \in \mathbb{N}} \left( |\alpha_k \langle x, e_{m_k} \rangle|^2 + |\alpha_k \langle x, e_{n_k} \rangle |^2\right) + \sum_{n \in J_3} | \alpha \langle x, e_n\rangle |^2. \end{align*} It follows that $\| T\| \leq \max \{ \|\{\alpha_n\}\|_{\infty}, |\alpha|\}$. However, we also have \begin{align*} \|T \| &\geq \sup\{\|T(e_n) \| : n \geq 1\} = \sup \{ |\alpha_k|, |\alpha| : k \geq 1\} = \max \{ \| \{\alpha_n\} \|_{\infty}, |\alpha| \}; \end{align*} hence $\|T \| = \max \{ \| \{\alpha_n\} \|_{\infty}, |\alpha| \}$. In particular, since $|\alpha_1| > 1$, we have $\|T\| > 1 \geq \alpha = \nu (T)$. This proves item (ii). Now we prove that $T \in \mathcal{A}_{\num}(H)$ when $\alpha=1$, which is the case described in the statement. Given $\varepsilon \in (0, 1)$, let $x_0 \in S_H$ be such that $|\langle T(x_0), x_0 \rangle| > 1 - \frac{ \varepsilon^2}{4}.$ By equation (\ref{eq2}), we have that \begin{equation*} \sum_{n \in J_3} |\langle x_0, e_n\rangle |^2 = |\langle T(x_0), x_0 \rangle| > 1 - \frac{ \varepsilon^2}{4}, \ \mbox{and then} \ \sum_{k \in J_1 \cup J_2 } |\langle x_0, e_k \rangle|^2 < \frac{\varepsilon^2}{4}.
\end{equation*} Let $\pi_{3}$ be the projection of $H$ onto the closed subspace $H_3 = \text{span} \{ e_n : n \in J_3 \}$. Then we have $\pi_3 (x_0) = \sum_{n \in J_3} \langle x_0, e_n \rangle e_n$ and \begin{equation*} \langle T(\pi_3(x_0)), \pi_3(x_0)\rangle = \sum_{n\in J_3} |\langle \pi_3(x_0), e_n \rangle |^2 = \sum_{n \in J_3} |\langle x_0 , e_n \rangle |^2. \end{equation*} It follows that $T$ attains its numerical radius at $\| \pi_3 (x_0) \|^{-1} \pi_3 (x_0) \in S_H$. Moreover, \begin{align*} \left\| \frac{\pi_3 (x_0) }{\|\pi_3 (x_0)\| }- x_0 \right\| &\leq \left\| \frac{\pi_3 (x_0) }{\|\pi_3 (x_0)\| }- \pi_3 (x_0) \right\| + \left\| \pi_3 (x_0) - x_0 \right\| \\ &\leq | 1 - \| \pi_3 (x_0) \|| + \left( \sum_{k \in J_1 \cup J_2} |\langle x_0, e_k\rangle|^2 \right)^{1/2} < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon. \end{align*} \end{proof} Observe that, in general, it is not true that $T^*$ belongs to $\mathcal{A}_{\|\cdot\|}$ whenever $T$ belongs to $\mathcal{A}_{\|\cdot\|}$ (see Examples \ref{ex3} and \ref{ex6}). However, if we put some extra assumptions on the spaces $X$ and $Y$, then we can obtain the following duality results. \begin{prop} \label{adjoint} Let $X,Y$ be Banach spaces and $T \in \mathcal{L} (X,Y)$. \begin{itemize} \item[(i)] Suppose that $Y$ is uniformly smooth. If $T \in \mathcal{A}_{\| \cdot \|}(X, Y)$, then $T^* \in \mathcal{A}_{\| \cdot \|}(Y^*, X^*)$. \item[(ii)] Suppose that $X$ is uniformly convex. If $T^* \in \mathcal{A}_{\| \cdot \|}(Y^*, X^*)$, then $T \in \mathcal{A}_{\| \cdot \|}(X, Y)$. \item[(iii)] Suppose that $X$ is reflexive. Then, $T \in \mathcal{A}_{\num}(X)$ if and only if $T^* \in \mathcal{A}_{\num}(X^*)$. \end{itemize} \end{prop} \begin{proof} Note that (ii) is just a consequence of (i) since, in this case, $X$ is, in particular, reflexive. Let us prove (i). Let $Y$ be a uniformly smooth Banach space. Let $T \in \mathcal{A}_{\| \cdot \|}(X, Y)$. Then, $\|T^*\| = \|T\| = 1$ and $T^*$ is also norm-attaining.
In order to prove that $T^* \in \mathcal{A}_{\| \cdot \|}(Y^*, X^*)$, let $\varepsilon \in (0, 1)$ be given and consider $\eta(\varepsilon, T) > 0$. Set \begin{equation*} \eta(\varepsilon, T^*) := \min \left\{ \eta \left( \frac{\delta_{Y^*}(\varepsilon)}{2}, T \right), \frac{\delta_{Y^*}(\varepsilon)}{2} \right\} > 0, \end{equation*} where $\varepsilon \mapsto \delta_{Y^*}(\varepsilon)$ stands for the modulus of convexity of $Y^*$. Pick $y_1^* \in S_{Y^*}$ to satisfy $\|T^*(y_1^*)\| > 1 - \eta(\varepsilon, T^*)$. There is $x_1 \in S_X$ such that $\re \langle y_1^*, T(x_1) \rangle = \re \langle x_1, T^* (y_1^*) \rangle > 1 - \eta(\varepsilon, T^*)$. This implies that $\|T(x_1)\| > 1 - \eta(\varepsilon, T^*)$. Since $T \in \mathcal{A}_{\| \cdot \|}(X, Y)$, there is $x_2 \in S_X$ such that $\|T(x_2)\| = 1$ and $\|x_2 - x_1\| < \frac{\delta_{Y^*} (\varepsilon)}{2}$. Take $y_2^* \in S_{Y^*}$ to be such that $\re \langle y_2^*, T(x_2) \rangle = \|T(x_2)\| = 1$ and notice that $\re \langle y_1^*, T(x_2) \rangle > 1 - \delta_{Y^*}(\varepsilon)$. Then, $\|y_1^* + y_2^*\| > 2 - 2 \delta_{Y^*}(\varepsilon)$. This shows that $\|y_2^* - y_1^*\| < \varepsilon$. As $T^*$ attains its norm at $y_2^*$, which is close to $y_1^*$, we conclude that $T^* \in \mathcal{A}_{\| \cdot \|}(Y^*, X^*)$. Now we prove (iii). Since $X$ is reflexive, we just have to prove one direction. Assume $T \in \mathcal{A}_{\num}(X)$. Note that $T^* \in \mathcal{L}(X^*)$ also attains its numerical radius. Now let $\varepsilon > 0$ be given and set $\eta(\varepsilon, T^*) := \eta(\varepsilon, T) > 0$. Let $(x_1^*, x_1^{**}) \in \Pi(X^*)$ be such that $| \langle x_1^{**}, T^*(x_1^*) \rangle | > 1 - \eta(\varepsilon, T^*)$. Since $X$ is reflexive, there is $x_1 \in S_X$ such that $x_1 = x_1^{**}$. Then \begin{equation*} | \langle x_1^*, T(x_1) \rangle | = | \langle x_1, T^*(x_1^*) \rangle | = | \langle x_1^{**}, T^*(x_1^*) \rangle | > 1 - \eta(\varepsilon, T^*) = 1 - \eta(\varepsilon, T).
\end{equation*} Then there is $(x_2, x_2^*) \in \Pi(X)$ such that $| \langle x_2^*, T(x_2) \rangle | = 1$, $\|x_2 - x_1\| < \varepsilon$ and $\|x_2^* - x_1^*\| < \varepsilon$. So, $T^* \in \mathcal{A}_{\num}(X^*)$ as desired. \end{proof} Given $T\in \mathcal{L}(c_0 )$ and $N \in \mathbb{N}$, it is not difficult to see that $\operatorname{ran} T^*\subset \operatorname{span}\{e_1^*, \ldots, e_N^*\}$ if and only if $T=T\circ P_N$, where $P_N$ is the natural $N$-th projection on $c_0$. A property related to Proposition \ref{adjoint}.(iii) above can be proved for $c_0$ under this condition. \begin{prop} \label{prop:AnuAdjoint-c0} Let $T\in \mathcal{A}_{\text{nu}}(c_0)$ be an operator such that the range of $T^* \in \mathcal{L} (\ell_1)$ is in $\operatorname{span}\{ e_1^*, \ldots, e^*_N \}$ for some $N\in\mathbb{N}$. Then, $T^*\in \mathcal{A}_{\text{nu}}(\ell_1)$. \end{prop} \begin{proof} Let $\varepsilon > 0$. Set $\eta(\varepsilon, T^*):= \min \{ \frac{\varepsilon}{3}, {\eta\left( \frac{\varepsilon}{3}, T \right)} \} > 0$. Let $(x_1^*, x_1^{**})\in \Pi(\ell_1)$ be such that $|\langle x_1^{**}, T^*(x_1^*)\rangle| > 1 - \eta(\varepsilon, T^*)$. Let $n_0>N$ be big enough so that $\sum_{n=1}^{n_0}|x_1^*(n)|>1-\eta(\varepsilon, T^*)$. Define $(x_2^*, x_2^{**}) \in \ell_1 \times \ell_\infty$ as follows: \begin{enumerate} \item[(a)] $x_2^{*} (n) = (\sum_{m=1}^{n_0} |x_1^* (m)|)^{-1} x_1^* (n)$ for $1 \leq n \leq n_0$ and $x_2^* (n) = 0 $ for $n > n_0$, \item[(b)] $x_2^{**}(n) = x_1^{**}(n)$ for $1\leq n \leq n_0$ and $x_2^{**}(n)=0$ for $n>n_0$. \end{enumerate} As $x_1^* (n) x_1^{**} (n) = |x_1^* (n)|$ for every $n \in \mathbb{N}$, we get that $(x_2^*, x_2^{**}) \in \Pi (\ell_1)$. Note that $\| x_2^* - x_1^* \| < 2\eta(\varepsilon, T^*) \leq \frac{2\varepsilon}{3}$.
Now, \begin{align*} | \langle x_2^*, T (x_2^{**} ) \rangle | = |\langle x_2^{**}, T^* (x_2^*) \rangle | &= \left| \sum_{n=1}^{N} x_2^{**} (n) (T^{*} (x_2^{*})) (n) \right| \\ &= \left(\sum_{n=1}^{n_0} |x_1^* (n)| \right)^{-1} \left| \sum_{n=1}^{N} x_1^{**} (n) (T^{*} (x_1^{*})) (n) \right| > 1- \eta(\varepsilon, T^*). \end{align*} Hence, there exists $(x_3, x_3^*) \in \Pi (c_0)$ such that $|\langle x_3^* , T x_3 \rangle | = 1$, $\| x_3 - x_2^{**}\| < \frac{\varepsilon}{3}$, and $\|x_3^* - x_2^* \| < \frac{\varepsilon}{3}$. Notice that $|x_3 (n) | < \frac{\varepsilon}{3}$ for every $n > n_0$; hence $x_3^* (n) = 0$ for every $n > n_0$. Define $x_3^{**} \in B_{\ell_\infty}$ by $x_3^{**} (n) = x_3 (n)$ for $1 \leq n \leq n_0$ and $x_3^{**} (n) = x_1^{**} (n)$ for $n > n_0$. Then, $(x_3^*, x_3^{**}) \in \Pi (\ell_1)$, $\|x_3^* - x_1^* \| < \varepsilon$, and $\|x_3^{**} - x_1^{**} \| < \frac{\varepsilon}{3}$. Finally, \begin{equation*} \left| \langle x_3^{**}, T^* (x_3^*) \rangle \right| = \left| \sum_{n=1}^N x_3^{**} (n) (T^* (x_3^*))(n) \right| = \left| \sum_{n=1}^N x_3 (n) (T^* (x_3^*))(n) \right| = 1. \end{equation*} This shows that $T^* \in \mathcal{A}_{\text{nu}}(\ell_1)$. \end{proof} In Proposition \ref{adjoint}, if we drop some of the hypotheses, then it is possible to construct operators which do not satisfy the conclusion of that result. Recall that, in Example \ref{ex3}, we have constructed an operator $T$ on $c_0$ such that $T^* \in \mathcal{A}_{\|\cdot\|} (\ell_1, \ell_1 )$ but $T \notin \mathcal{A}_{\|\cdot\|} (c_0, c_0 )$. Next, we present an operator $S$ such that $S \in \mathcal{A}_{\|\cdot\|} (X, X)$ but $S^* \notin \mathcal{A}_{\|\cdot\|} (X^{ * }, X^{ * })$. \begin{example}\label{ex6} The operator $T$ defined in Example \ref{ex3} is such that $T^{**} \not\in \mathcal{A}_{\|\cdot\|} (\ell_{\infty}, \ell_{\infty})$ although $T^* \in \mathcal{A}_{\|\cdot\|} (\ell_1, \ell_1)$.
Indeed, $T^{**} \in \mathcal{L}(\ell_{\infty})$ is given by \begin{equation*} (T^{**}(z))(1) = \sum_{j=1}^{\infty} \frac{1}{2^j} z(j) \ \ \ \mbox{and} \ \ \ (T^{**}(z))(k) = 0 \ \forall \ k \geq 2 \end{equation*} for $ z \in \ell_{\infty} $. Then, for the vector $u_0 = (1, 1, 1, 1, \ldots) \in S_{\ell_{\infty}}$, we have $\|T^{**} (u_0)\| = 1 = \|T^{**}\|$. Let $z_0 \in S_{\ell_{\infty}}$ be such that $\|T^{**} (z_0)\|_{\infty} = 1$. This implies that $|z_0(j)| = 1$ for all $j \in \mathbb{N}$. For a given $\varepsilon \in (0, 1)$, suppose that there is $\eta(\varepsilon, T^{**}) > 0$. Let $n_0 \in \mathbb{N}$ be such that $2^n \eta(\varepsilon, T^{**}) > 1$ for every $n \geq n_0$. Consider the vector $z_1 \in S_{\ell_{\infty}}$ defined by $z_1 (n) = 1$ for $1 \leq n \leq n_0$ and $z_1 (n) = 0$, otherwise. Then, $\| T^{**} (z_1) \| = \sum_{j=1}^{n_0} \frac{1}{2^j} > 1 - \eta (\varepsilon, T^{**}). $ However, since every norming point $z_0$ of $T^{**}$ satisfies $|z_0(j)| = 1$ for all $j \in \mathbb{N}$, we have $\|z_1 - z_0\|_{\infty} \geq |z_0(n_0+1)| = 1 > \varepsilon$; hence $z_1$ cannot be close to any norming point of $T^{**}$. This shows that $T^{**} \not\in \mathcal{A}_{\|\cdot\|} (\ell_{\infty}, \ell_{\infty})$. \end{example} Our next aim is to characterize the diagonal operators which belong to $\mathcal{A}_{\|\cdot\|}$ and $\mathcal{A}_{\text{nu}}$. We give a complete characterization for these operators which belong to $\mathcal{A}_{\|\cdot\|}(X, X)$ whenever $X=c_0$ or $\ell_p$ with $1 \leq p \leq \infty$ and for $\mathcal{A}_{\text{nu}}(X)$ whenever $X=c_0$ or $\ell_p$ with $1 \leq p < \infty$. The next lemma describes the norm-attaining diagonal operators defined on $c_0$ or $\ell_p$. Although it might be well known in the literature, we present a short proof for the sake of completeness; we will use it to prove Theorem \ref{theo:diag_norm}. \begin{lemma}\label{lem:diag_norm} Let $X=c_0$ or $\ell_p$ with $1\leq p\leq\infty$.
Let $T \in \mathcal{L}(X)$ be a norm one operator defined as $$Tx = (\alpha_n x(n))_{n=1}^{\infty} \quad (x = (x(n))_{n=1}^\infty \in X),$$ where $(\alpha_n )_{n=1}^{ \infty}$ is a bounded sequence of complex numbers. Given $x \in S_{X}$, $T$ attains its norm at $x$ if and only if the following is satisfied: \begin{itemize} \item[(i)] Case $X=c_0$: there exists $n_0\in\mathbb{N}$ such that $|\alpha_{n_0}|=\| T \|$ and $|x(n_0)|=1$. \item[(ii)] Case $X=\ell_\infty$: either the same condition as in $c_0$ holds or there exists a subsequence of the natural numbers, $(n_k)_{k=1}^{\infty}$, such that $|\alpha_{n_k}|$ converges to $\| T\|$ and $|x(n_k)|$ converges to $1$ as $k\rightarrow \infty$. \item[(iii)] Case $X=\ell_p$ with $1\leq p < \infty$: setting $J=\{ n\in \mathbb{N}:\, |\alpha_n|=1 \}$, $J$ is non-empty and $x(n)=0$ for all $n\in \mathbb{N}\backslash J$. \end{itemize} \end{lemma} \begin{proof} In all cases, it is easy to prove that $\|T \| = \sup_{n\in\mathbb{N}} |\alpha_n|$ and the implication $(\Leftarrow)$ is clear. Conversely, the proofs for $X=c_0$ and $X=\ell_\infty$ are a consequence of the fact that $$1=\|T \| = \|Tx\| = \sup_{n\in\mathbb{N}} |\alpha_n x(n)| \leq \sup_{n\in\mathbb{N}} |\alpha_n| \leq \|T\|=1,$$ and the proof for $X=\ell_p$ with $1\leq p < \infty$ is a consequence of the fact that $$1 = \|Tx\|^p = \sum_{n=1}^\infty |\alpha_n|^p |x (n)|^p = \sum_{n \in J} |x(n)|^p + \sum_{n \in \mathbb{N} \setminus J} |\alpha_n|^p |x(n)|^p \leq \sum_{n=1}^\infty |x(n)|^p = 1.$$ \end{proof} \begin{theorem} \label{theo:diag_norm} Let $X=c_0$ or $\ell_p$, $1\leq p\leq \infty$. Let $T \in \mathcal{L}(X)$ be a norm one operator defined as $$Tx = (\alpha_n x(n))_{n=1}^{\infty} \quad (x = (x(n))_{n=1}^\infty \in X),$$ where $(\alpha_n)_{n=1}^\infty$ is a bounded sequence of complex numbers.
Then, the following assertions are equivalent: \begin{itemize} \item[(a)] $T\in \mathcal{A}_{\| \cdot \|}(X, X)$, \item[(b)] Both of these conditions are satisfied: \begin{enumerate} \item There exists some $n_0\in \mathbb{N}$ such that $|\alpha_{n_0}|= 1$. \item If $J=\{ n\in \mathbb{N}:\, |\alpha_n|= 1 \}$, then either $J=\mathbb{N}$ or $\sup_{n\in \mathbb{N}\backslash J} |\alpha_n| < 1.$ \end{enumerate} \end{itemize} \end{theorem} \begin{proof} We prove the result for $X=c_0$ first. The proof for $X=\ell_\infty$ is very similar, so we omit it. $(a) \Longrightarrow (b)$: By Lemma \ref{lem:diag_norm}, it suffices to show that $\sup_{n \in \mathbb{N} \setminus J} |\alpha_n| < 1$ when $J \neq \mathbb{N}$. Assume to the contrary that $\sup_{n \in \mathbb{N} \setminus J} |\alpha_n| = 1$. Pick a sequence $(n_k) \subset \mathbb{N} \setminus J$ such that $|\alpha_{n_k}| \geq 1 - \frac{1}{k}$ for each $k \in \mathbb{N}$. Given $\varepsilon \in (0,1)$, choose $N \in \mathbb{N}$ so that $N^{-1} < \eta(\varepsilon, T)$; then $\| T (e_{n_N}) \| > 1 - \eta(\varepsilon, T)$. Thus there exists $x_0 \in S_{c_0}$ such that $T$ attains its norm at $x_0$ and $\| x_0 - e_{n_N}\| <\varepsilon$. Now, Lemma \ref{lem:diag_norm} implies that there exists $k \in J$ such that $|x_0(k)| = 1 = |\alpha_k|$. Since $n_N \notin J$, we have $k \neq n_N$, and so $\| x_0 - e_{n_N}\| \geq |x_0(k)| = 1 > \varepsilon$, a contradiction. $(b) \Longrightarrow (a)$: If $J = \mathbb{N}$, then $T$ attains its norm at every point in $S_{c_0}$. Suppose that $J \neq \mathbb{N}$ and $\sup_{n \in \mathbb{N} \setminus J} |\alpha_n | < 1$. Assume to the contrary that $T \notin \mathcal{A}_{\|\cdot\|} (c_0, c_0)$; then there is some $\varepsilon_0 \in (0, 1)$ such that for each $n\in\mathbb{N}$, there is some $x_n\in S_{c_0}$ such that $1\geq \| T(x_n)\| \geq 1-\frac{1}{n}$, and whenever $x\in S_{c_0}$ satisfies that $\|x - x_n\| < \varepsilon_0$, we have that $\| T(x)\| < 1$.
Let $n_0\in \mathbb{N}$ be such that \[ \sup_{n \in \mathbb{N} \setminus J} |\alpha_n| < 1-\frac{1}{n_0} \,\, \text{ and } \,\, \frac{1}{n_0} < \varepsilon_0. \] Since $\| T(x_{n_0})\| \geq 1-\frac{1}{n_0}$, we can choose $k\in J$ such that $|x_{n_0}(k)| \geq 1 - \frac{1}{n_0}$. Let $y_{n_0} \in S_{c_0}$ be the point such that \begin{enumerate} \item $y_{n_0}(j):=x_{n_0}(j)$ for all $j \in\mathbb{N}\backslash \{k\}$, \item $\displaystyle y_{n_0}(k):=\frac{x_{n_0}(k)}{| x_{n_0}(k)|}$. \end{enumerate} It is clear that $\| T(y_{n_0})\| = \|y_{n_0}\| = 1$ and $\| y_{n_0} - x_{n_0} \| \leq \frac{1}{n_0} < \varepsilon_0$. This contradiction completes the proof. \vspace{0.2cm} Let us prove now the result for $X=\ell_p$ with $1\leq p < \infty$. \vspace{0.2cm} $(a) \Longrightarrow (b)$: It suffices to check that $\sup_{n \in \mathbb{N} \setminus J} |\alpha_n| < 1$ when $J \neq \mathbb{N}$. Assume to the contrary that $\sup_{n \in \mathbb{N} \setminus J} |\alpha_n| =1$. Given $\varepsilon \in (0,1)$, pick $n_0 \in \mathbb{N} \setminus J$ so that $|\alpha_{n_0}| > 1 - \eta(\varepsilon, T)$. Thus, $\|T e_{n_0}\| > 1 - \eta(\varepsilon, T)$. By Lemma \ref{lem:diag_norm}, if $T$ attains its norm at $x \in S_{\ell_p}$, then $|x (n_0)| = 0$ which implies that $\| x - e_{n_0} \| \geq 1 > \varepsilon$. $(b) \Longrightarrow (a)$: If $J = \mathbb{N}$, then we are done. Suppose that $J \neq \mathbb{N}$ and $\beta := \sup_{n \in \mathbb{N} \setminus J} |\alpha_n| < 1$. Assuming that $T$ does not belong to $\mathcal{A}_{\| \cdot \|}(\ell_p, \ell_p)$, there exists $\varepsilon_0 \in (0, 1)$ such that for each $n\in\mathbb{N}$, there is some $x_n\in S_{\ell_p}$ such that $1\geq \| T(x_n)\| \geq 1-\frac{1}{n}$, and whenever $x \in S_{\ell_p}$ satisfies that $\|x - x_n\| < \varepsilon_0$, we have that $\| T(x)\| < 1$. Note that \begin{equation*} \left(1 - \frac{1}{n}\right)^p \leq \sum_{k \in J} |x_n (k)|^p + \beta^p \sum_{k \in \mathbb{N} \setminus J} |x_n (k)|^p \leq \sum_{k=1}^\infty |x_n (k)|^p = 1.
\end{equation*} This implies that $ \sum_{k \in J} |x_n (k)|^p$ converges to $1$ and $\sum_{k \in \mathbb{N} \setminus J} |x_n (k)|^p$ converges to $0$ as $n \rightarrow \infty$. Set $A_n := \left( \sum_{k \in J} |x_n (k)|^p \right)^{\frac{1}{p}}$ and choose $n_0 \in \mathbb{N}$ such that $1 - A_{n_0}^p < \frac{\varepsilon_0^p}{2}$. Define $y_{n_0} \in S_{\ell_p}$ by \[ y_{n_0} (k) = \frac{x_{n_0} (k)}{A_{n_0}} \,\, \text{ for every } \, k \in J \,\, \text{ and } \,\, y_{n_0} (k) = 0 \,\, \text{ for every } \, k \in \mathbb{N} \setminus J. \] By Lemma \ref{lem:diag_norm}, we have $\| T y_{n_0} \| = 1$. However, \begin{equation*} \| y_{n_0} - x_{n_0} \|^p \leq (1 - A_{n_0})^p + \sum_{k \in \mathbb{N} \setminus J} |x_{n_0} (k)|^p \leq 2(1 - A_{n_0}^p ) < \varepsilon_0^p, \end{equation*} a contradiction. \end{proof} Next, we prove the counterparts of Lemma \ref{lem:diag_norm} and Theorem \ref{theo:diag_norm} for the numerical radius. As in the $\mathcal{A}_{\|\cdot\|}$ case, this gives a complete characterization of the set $\mathcal{A}_{\text{nu}}$ for diagonal operators on $c_0$ and $\ell_p$. Let us notice that Lemma \ref{lem:diag_nu} establishes some properties for a numerical radius one diagonal operator on $c_0$ and $\ell_p$ which attains its numerical radius. We will use it to prove Theorem \ref{theo:diag_nu} and again we present a short proof of it for the sake of completeness. \begin{lemma}\label{lem:diag_nu} Let $X=c_0$ or $\ell_p$, $1\leq p< \infty$. Let $T \in \mathcal{L}(X)$ be a numerical radius one operator defined as $$Tx = (\alpha_n x(n))_{n=1}^{\infty} \quad (x = (x(n))_{n=1}^\infty \in X),$$ where $(\alpha_n)_{n=1}^\infty$ is a bounded sequence of complex numbers. If $T$ attains its numerical radius at $(x, x^*) \in \Pi (X)$, then we have the following: \begin{enumerate} \item There exists $n_0 \in \mathbb{N}$ such that $|\alpha_{n_0}| = 1$. \item For $X=c_0$, $\re x^*(n)x(n) = |x^*(n)x (n)| = |x^* (n)|$ for every $n \in \mathbb{N}$.
\\ For $X=\ell_p$, $\re x^*(n)x(n) = |x^*(n)x (n)| = |x(n)|^p = |x^* (n)|^q$ for every $n \in \mathbb{N}$. \item There exists $\theta \in [0, 2\pi)$ such that $\alpha_n = e^{i \theta}$ on $\{ n \in \mathbb{N} : |x^* (n) | \neq 0 \}$. \end{enumerate} \end{lemma} \begin{proof} Let us see the result for $X=c_0$. First of all, as $(x, x^*) \in \Pi (c_0)$, by a convexity argument, it follows that $\re (x^*(n)x(n)) = |x^*(n)x(n)| = |x^* (n)|$ for every $n \in \mathbb{N}$. This proves item (2). Item (1) is clear since the operator $T$ attains its norm as well (for these diagonal operators, we always have $\|T\| = \nu(T) =1$). To see (3), observe that \begin{align*} 1=|\langle x^*, Tx \rangle| = \left| \sum_{n \in \mathbb{N}} \alpha_n x^* (n) x(n) \right| \leq \sum_{n \in \mathbb{N}} |\alpha_n x^* (n) x(n) | \leq 1. \end{align*} Therefore, there exists $\theta \in [0, 2\pi)$ such that $\alpha_n = e^{i \theta}$ on $\{ n \in \mathbb{N} : |x^* (n) | \neq 0 \}$. The proof for $X=\ell_p$ with $1\leq p < \infty$ is similar, just keeping in mind the equality case of H\"older's inequality, so we omit it. \end{proof} \begin{theorem} \label{theo:diag_nu} Let $X=c_0$ or $\ell_p$, $1\leq p< \infty$. Let $T \in \mathcal{L}(X)$ be a numerical radius one operator defined as \begin{equation*} Tx = (\alpha_n x(n))_{n=1}^{\infty} \quad (x = (x(n))_{n=1}^\infty \in X), \end{equation*} where $(\alpha_n)_{n=1}^\infty$ is a bounded sequence of complex numbers. Then, the following assertions are equivalent: \begin{itemize} \item[(a)] $T\in\mathcal{A}_{\text{nu}}(X)$. \item[(b)] Both of the following conditions hold: \begin{enumerate} \item There exists some $n_0\in \mathbb{N}$ such that $|\alpha_{n_0}|= 1$. \item If $J=\{ n\in \mathbb{N}:\, |\alpha_n|= 1 \}$, then the cardinality of the set $\{ \alpha_n : n \in J \}$ is finite and $\sup_{n \in \mathbb{N} \setminus J } |\alpha_n | < 1$ when $J \neq \mathbb{N}$.
\end{enumerate} \end{itemize} \end{theorem} Before giving the precise proof of Theorem \ref{theo:diag_nu}, let us notice that when $(\alpha_n)_{n=1}^{\infty}$ is a bounded sequence of {\it real} numbers, we have that the set $\{\alpha_n: n \in J\} \subseteq \{1, -1\}$, that is, it is automatically finite. Combining Theorem \ref{theo:diag_norm} and Theorem \ref{theo:diag_nu}, we get the following immediate consequence. \begin{cor} Let $X=c_0$ or $\ell_p$, $1\leq p< \infty$. Let $T \in \mathcal{L}(X)$ be a numerical radius one operator defined as \begin{equation*} Tx = (\alpha_n x(n))_{n=1}^{\infty} \quad (x = (x(n))_{n=1}^\infty \in X), \end{equation*} where $(\alpha_n)_{n=1}^\infty$ is a bounded sequence of real numbers. Then, the following assertions are equivalent: \begin{itemize} \item[(a)] $T \in \mathcal{A}_{\|\cdot\|}(X, X)$. \item[(b)] $T\in\mathcal{A}_{\text{nu}}(X)$. \item[(c)] Both of the following conditions are satisfied: \begin{enumerate} \item There exists some $n_0\in \mathbb{N}$ such that $|\alpha_{n_0}|= 1$. \item If $J=\{ n\in \mathbb{N}:\, |\alpha_n|= 1 \}$, then $J = \mathbb{N}$ or $\sup_{n \in \mathbb{N} \setminus J } |\alpha_n | < 1$ when $J \neq \mathbb{N}$. \end{enumerate} \end{itemize} \end{cor} \begin{proof}[Proof of Theorem \ref{theo:diag_nu}] Let us prove first the result for $X=c_0$. The case $X=\ell_1$ can be proved similarly to the case $X=c_0$ by using duality arguments, so we omit it. $(a) \Longrightarrow (b)$: By Lemma \ref{lem:diag_nu}, the set $J$ is non-empty. Assume that the set $\{ \alpha_n : n \in J \}$ is an infinite set. Write $\{ \alpha_n : n \in J \} = \{ e^{i\theta_1}, \ldots, e^{i\theta_n}, \ldots \}$. Then, there exists a subsequence $(n_k)_{k=1}^\infty \subset J$ such that $e^{i \theta_{n_k}}$ converges to some $\lambda \in \mathbb{C}$ with $|\lambda| = 1$. Given $\varepsilon \in (0, 1/2)$, let $k_0 \in \mathbb{N}$ be such that $|e^{i \theta_{n_k}} - \lambda| < {\eta(\varepsilon, T)}$ for every $k \geq k_0$. 
Then, for $k \neq k' \geq k_0$, we obtain that $\left| \frac{e^{i \theta_{n_{k}}} + e^{i \theta_{n_{k'}}}}{2} - \lambda \right| < \eta(\varepsilon ,T).$ Pick $n \neq n'$ in $J$ so that $\alpha_n = e^{i \theta_{n_k}}$ and $\alpha_{n'} = e^{i \theta_{n_{k'}}}$. Then, $( (e_{n} + e_{n'} ), \frac{1}{2} (e_{n}^* + e_{n'}^* )) \in \Pi (c_0)$ and $ \left|\left\langle \frac{1}{2} (e_{n}^* + e_{n'}^* ) ,T\left( e_{n} + e_{n'} \right) \right\rangle\right| = \left| \frac{e^{i \theta_{n_{k}}} + e^{i \theta_{n_{k'}}}}{2} \right| > 1 - \eta(\varepsilon, T). $ However, if $T$ attains its numerical radius at $(x, x^*) \in \Pi (c_0)$, then, by Lemma \ref{lem:diag_nu}, there exists $\theta \in [0, 2\pi)$ such that $\alpha_m = e^{i\theta}$ on $A := \{ m \in \mathbb{N} : |x^* (m)| \neq 0\}$. If $n, n' \notin A$, then for $k \in A$, $\| x - (e_{n} + e_{n'} ) \| \geq |\langle e_k^*, x - (e_{n} + e_{n'}) \rangle| = |x(k)| = 1 > \varepsilon$. Otherwise, without loss of generality, we may assume that $n \in A$. As $\alpha_{n} \neq \alpha_{n'}$, we have that $n' \notin A$, i.e., $|x^* (n')| = 0$. It follows that $\left\|x^* - \frac{1}{2} (e_{n}^* + e_{n'}^* )\right\| \geq | \langle x^* - \frac{1}{2}(e_n^* + e_{n'}^*), e_{n'} \rangle| = \frac{1}{2} > \varepsilon. $ This proves that $\{ \alpha_m : m \in J \}$ must be a finite set. By applying a similar argument used in the proof of Theorem \ref{theo:diag_norm}, we can deduce that $\sup_{m \in \mathbb{N} \setminus J} |\alpha_m| < 1$ when $J \neq \mathbb{N}$. $(b) \Longrightarrow (a)$: Let us say that $\{ \alpha_n : n \in J \} = \{e^{i\theta_1}, \ldots, e^{i \theta_m} \}$ for some $m \in \mathbb{N}$. Assume to the contrary that $T$ does not belong to $\mathcal{A}_{\text{nu}}(c_0)$. 
Then, there exists some $\varepsilon_0\in(0, 1)$ such that for each $n\in \mathbb{N}$, there is $(x_n, x_n^*)\in\Pi(c_0)$ such that $1\geq |\langle x_n^*, T(x_n)\rangle |\geq 1-\frac{1}{n}$, and whenever $(x, x^*)\in\Pi(c_0)$ is such that $\| x-x_n\| < \varepsilon_0$ and $\| x^* - x_n^*\| < \varepsilon_0$, we have that $|\langle x^*, T(x)\rangle |< 1$. If $J \neq \mathbb{N}$, then, by Lemma \ref{lem:diag_nu}, \begin{align*} 1 &= \sum_{k=1}^\infty x_n^* (k) x_n (k) \nonumber \geq \sum_{k \in J_1} x_n^* (k) x_n (k) +\ldots + \sum_{k \in J_m} x_n^* (k) x_n (k) + \beta \sum_{k \in \mathbb{N} \setminus J} x_n^* (k) x_n (k) \geq \\ &\geq \left| e^{i\theta_1}\sum_{k \in J_1} x_n^* (k) x_n (k) +\ldots + e^{i\theta_m} \sum_{k \in J_m} x_n^* (k) x_n (k) \right| + \left| \sum_{k \in \mathbb{N} \setminus J} \alpha_k x_n^* (k) x_n (k) \right| \geq 1- \frac{1}{n}, \end{align*} for every $n \in \mathbb{N}$, where $J_l = \{ k \in \mathbb{N} : \alpha_k = e^{i\theta_l} \}$ for $1 \leq l \leq m$ and $\beta := \sup_{n \in \mathbb{N} \setminus J} |\alpha_n| < 1$. Passing to a subsequence, we may assume that $\sum_{k \in J_l} x_n^* (k) x_n (k)$ converges as $n \rightarrow \infty$ for each $1 \leq l \leq m$. As $e^{i\theta_l} \neq e^{i\theta_{l'}}$ for all $1\leq l \neq l' \leq m$, we can choose $1 \leq s \leq m$ so that \[ \sum_{k \in J_l} x_n^* (k) x_n (k) \rightarrow 0 \,\, \text{ for all } \,\, l \neq s, \,\, \text{ and } \, \,\, \sum_{k \in J_s} x_n^* (k) x_n (k) \rightarrow 1 \] as $n \rightarrow \infty$. Also, notice that $ \sum_{k \in \mathbb{N} \setminus J} x_n^* (k) x_n (k) \rightarrow 0$ as $n \rightarrow \infty$. Pick $n_0 \in \mathbb{N}$ large enough so that \[ \sum_{k \in J_l} |x_{n_0}^* (k)| < \frac{\varepsilon_0}{3m} \,\, \text{ for all } \,\, l \neq s, \,\, 1 - \sum_{k \in J_s} |x_{n_0}^* (k) | < \frac{\varepsilon_0}{3}, \,\, \text{ and } \, \, \sum_{k \in \mathbb{N} \setminus J} x_{n_0}^* (k) x_{n_0} (k) < \frac{\varepsilon_0}{3}.
\] Let $y_{n_0} = x_{n_0} \in S_{c_0}$ and define $y_{n_0}^* \in S_{\ell_1}$ as \[ \displaystyle y_{n_0}^* (k) = \frac{x_{n_0}^*(k)}{\gamma} \,\, \text{ for every } \, k \in J_s \, \, \text{and } \,\, y_{n_0}^* (k) = 0 \,\, \text{ for every } \, k \in \mathbb{N} \setminus J_s, \] where $\gamma = \sum_{k \in J_s} |x_{n_0}^* (k)|$. Then, $(y_{n_0}, y_{n_0}^*) \in \Pi (c_0)$, \[ |\langle y_{n_0}^* , T(y_{n_0}) \rangle | = \left| \sum_{k \in J_s} \frac{e^{i \theta_s} x_{n_0}^* (k) x_{n_0} (k) }{\gamma} \right| = 1, \] and \[ \| y_{n_0}^* - x_{n_0}^* \| \leq (m-1) \frac{\varepsilon_0}{3m} + \frac{\varepsilon_0}{3} + \frac{\varepsilon_0}{3} < \varepsilon_0. \] This is a contradiction. For the case when $J = \mathbb{N}$, we have \begin{align*} 1 = \sum_{k=1}^\infty x_n^* (k) x_n (k) \nonumber &= \sum_{k \in J_1} x_n^* (k) x_n (k) +\ldots + \sum_{k \in J_m} x_n^* (k) x_n (k) \\ &\geq \left| e^{i\theta_1}\sum_{k \in J_1} x_n^* (k) x_n (k) +\ldots + e^{i\theta_m} \sum_{k \in J_m} x_n^* (k) x_n (k) \right| \geq 1- \frac{1}{n}, \end{align*} for every $n \in \mathbb{N}$. Arguing as above, we may choose $1 \leq s \leq m$ and $n_0 \in \mathbb{N}$ such that \[ \sum_{k \in J_l} |x_{n_0}^* (k)| < \frac{\varepsilon_0}{2m} \,\, \text{ for all } \,\, l \neq s, \,\, \text{ and } \, \, 1 - \sum_{k \in J_s} |x_{n_0}^* (k) | < \frac{\varepsilon_0}{2}. \] By defining $(y_{n_0}, y_{n_0}^*) \in \Pi (c_0)$ as above, we again reach a contradiction. \vspace{0.2cm} Let us prove the result for $X=\ell_p$ with $1<p<\infty$. \vspace{0.2cm} $(a) \Longrightarrow (b)$: Note that Lemma \ref{lem:diag_nu} implies (1). Assume that the set $\{ \alpha_n : n \in J \}$ is an infinite set, say $\{ \alpha_n : n \in J \} = \{ e^{i\theta_1}, \ldots, e^{i\theta_n}, \ldots \}$. Then, there exists a subsequence $(n_k)_{k=1}^\infty \subset J$ such that $e^{i \theta_{n_k}}$ converges to some $\lambda \in \mathbb{C}$ with $|\lambda| = 1$.
Given $\varepsilon \in (0, (\frac{1}{2})^{\frac{1}{q}})$, let $k_0 \in \mathbb{N}$ be such that $|e^{i \theta_{n_k}} - \lambda| < {\eta(\varepsilon, T)}$ for every $k \geq k_0$. Then, for $k \neq k' \geq k_0$, we obtain that $\left| \frac{e^{i \theta_{n_k}} + e^{i \theta_{n_{k'}}}}{2} - \lambda \right| < \eta(\varepsilon ,T). $ Pick $n \neq n'$ in $J$ so that $\alpha_n = e^{i \theta_{n_k}}$ and $\alpha_{n'} = e^{i \theta_{n_{k'}}}$. Thus, $( (\frac{1}{2 })^{\frac{1}{p}} (e_{n} + e_{n'} ), (\frac{1}{2})^{\frac{1}{q}} (e_{n}^* + e_{n'}^* )) \in \Pi (\ell_p)$ and \begin{align*} \left|\left\langle \left(\frac{1}{2}\right)^{\frac{1}{q}} (e_{n}^* + e_{n'}^*) ,T\left( \left(\frac{1}{2}\right)^{\frac{1}{p}} \left( e_{n} + e_{n'} \right) \right) \right\rangle\right| = \left| \frac{e^{i \theta_{n_k}} + e^{i \theta_{n_{k'}}}}{2} \right| > 1 - \eta(\varepsilon, T). \end{align*} However, if $T$ attains its numerical radius at $(x, x^*) \in \Pi (\ell_p)$, then, by Lemma \ref{lem:diag_nu}, there exists $\theta \in [0, 2\pi)$ such that $\alpha_m = e^{i\theta}$ on $A := \{ m \in \mathbb{N} : |x^* (m)| \neq 0\}$. If $n, n' \notin A$, i.e., $|x^* (n)| = |x^* (n')| = 0$, then Lemma \ref{lem:diag_nu} implies that $|x(n)| = |x (n')| = 0$. Thus, $\left\| x - \left(\frac{1}{2}\right)^{\frac{1}{p}} (e_{n} + e_{n'} ) \right\|^p \geq \frac{1}{2} + \frac{1}{2} = 1 > \varepsilon^p.$ Otherwise, without loss of generality, we may assume that $n \in A$. As $\alpha_{n} \neq \alpha_{n'}$, we have that $n' \notin A$, i.e., $|x^* (n')| = 0$. It follows that $ \left\|x^* - \left( \frac{1}{2}\right)^{\frac{1}{q}} (e_{n}^* + e_{n'}^* )\right\|^q \geq \frac{1}{2} > \varepsilon^q. $ This proves that $\{ \alpha_m : m \in J \}$ must be a finite set. By applying a similar argument used in the proof of Theorem \ref{theo:diag_norm}, we can deduce that $\sup_{m \in \mathbb{N} \setminus J} |\alpha_m| < 1$ when $J \neq \mathbb{N}$.
As the implication $(b) \Longrightarrow (a)$ can be proved in a similar way as before, we omit its proof. \end{proof} One may wonder whether or not there is a characterization of the diagonal operators in the set $\mathcal{A}_{\|\cdot\|}$ when the domain differs from the range space. As a matter of fact, there is. Techniques similar to those in Theorem \ref{theo:diag_norm} and Theorem \ref{theo:diag_nu} yield the following result on operators from $c_0$ into $\ell_p$ and from $\ell_p$ into $c_0$. Notice that in this case we cannot consider the set $\mathcal{A}_{\num}(X)$. \begin{theorem} Let $1\leq p< \infty$ be given. \begin{itemize} \item[(I)] Let $T \in \mathcal{L}(\ell_p ,c_0)$ be a norm one operator defined as \begin{equation*} Tx = (\alpha_n x(n))_{n=1}^{\infty} \quad (x = (x(n))_{n=1}^\infty \in \ell_p), \end{equation*} where $(\alpha_n)_{n=1}^\infty$ is a bounded sequence of scalars. Then, the following assertions are equivalent: \begin{itemize} \item[(a)] $T\in\mathcal{A}_{\|\cdot\|}(\ell_p, c_0)$. \item[(b)] If $J = \{n \in \mathbb{N}: |\alpha_n| = 1 \}$, then $J$ is nonempty and \begin{enumerate} \item $J = \mathbb{N}$, or \item $\sup_{n \in \mathbb{N} \setminus J} |\alpha_n| < 1$. \end{enumerate} \end{itemize} \vspace{0.2cm} \item[(II)] Let $T \in \mathcal{L}(c_0 , \ell_p)$ be a norm one operator defined as \begin{equation*} Tx = (\alpha_n x(n))_{n=1}^{\infty} \quad (x = (x(n))_{n=1}^\infty \in c_0), \end{equation*} where $(\alpha_n)_{n=1}^{\infty}$ is a sequence of scalars with $p$-norm equal to 1. Then, the following assertions are equivalent: \begin{itemize} \item[(a)] $T\in\mathcal{A}_{\|\cdot\|}(c_0, \ell_p)$. \item[(b)] There is some $N \in \mathbb{N}$ such that $\alpha_n = 0$ for all $n > N$. \end{itemize} \end{itemize} \end{theorem} The previous theorems provide a wide class of operators that belong to our sets. 
For instance, the canonical projections $P_N \in \mathcal{L}(X) $ belong to both $\mathcal{A}_{\| \cdot \|}(X, X)$ and $\mathcal{A}_{\text{nu}}(X)$ for the Banach spaces $X=c_0$ or $\ell_p$, with $1\leq p < \infty$, and to $\mathcal{A}_{\| \cdot \|}(X, X)$ when $X=\ell_\infty$. \begin{cor} \label{projection} Let $N \in \mathbb{N}$ be given. \begin{itemize} \item[(1)] $P_N \in \mathcal{A}_{\|\cdot\|}(c_0, c_0)$ and $P_N \in \mathcal{A}_{\|\cdot\|}(\ell_p, \ell_p)$ for $1 \leq p \leq \infty$. \item[(2)] $P_N \in \mathcal{A}_{\text{nu}}(c_0)$ and $P_N \in \mathcal{A}_{\text{nu}}(\ell_p)$ for $1 \leq p < \infty$. \end{itemize} \end{cor} \section{Connecting the sets $\mathcal{A}_{\|\cdot\|}$ and $\mathcal{A}_{nu}$} In this section, we introduce a natural approach to connecting the sets $\mathcal{A}_{\|\cdot\|}$ and $\mathcal{A}_{\text{nu}}$ through direct sums. Throughout the section, we use the following notation. Given two Banach spaces $X_1$ and $X_2$, consider the mappings $P_i \in \mathcal{L}( X_1\oplus X_2 , X_i)$ such that $P_i(x_1,x_2):=x_i$, $i=1, 2$, and $\iota_i \in \mathcal{L}(X_i , X_1\oplus X_2)$ such that $\iota_i(x):=x e_i$, $i=1,2$, where $e_1=(1,0)$ and $e_2=(0,1)$. For Banach spaces $W$ and $Z$, if we have an operator $T \in \mathcal{L}(W, Z)$, then the simplest way to define an operator $\widetilde{T} \in \mathcal{L}( W \oplus Z)$ is the following: consider $\widetilde{T} := \iota_2 \circ T \circ P_1$, that is, $\widetilde{T} (w,z) = (0, Tw)$ for every $(w,z) \in W \oplus Z$. Conversely, we can define a pseudo-inverse process as follows: if we have an operator $S \in \mathcal{L} (W \oplus Z)$, then we can consider $\widecheck{S} \in \mathcal{L}( W, Z)$ defined as $\widecheck{S} := P_2 \circ S \circ \iota_1$, that is, $\widecheck{S} (w) = (P_2 \circ S) (w,0)$ for every $w \in W$. 
We start with the following result, which establishes a bond between the assertions $T\in \mathcal{A}_{\|\cdot\|}(W,Z)$ and $\widetilde{T}\in \mathcal{A}_{\text{nu}} (W \oplus_1 Z)$ under some assumptions on the spaces. \begin{prop}\label{propsums1} Let $W$ and $Z$ be two Banach spaces, and let $T\in S_{\mathcal{L}(W, Z)}$. Then, \begin{itemize} \item[(a)] If $\widetilde{T}\in \mathcal{A}_{\num}(W \oplus_1 Z)$, then $T\in \mathcal{A}_{\| \cdot \|}(W, Z)$. \item[(b)] Suppose that $W$ and $Z$ are uniformly smooth Banach spaces. If $T \in \mathcal{A}_{\| \cdot \|}(W, Z)$, then $\widetilde{T} \in \mathcal{A}_{\num}(W \oplus_1 Z)$. \end{itemize} \end{prop} \begin{proof} (a). Assume $\widetilde{T} \in \mathcal{A}_{\num}(W \oplus_1 Z)$ and, for a given $\varepsilon > 0$, set $\eta(\varepsilon, T) := \eta(\varepsilon, \widetilde{T}) > 0$. Pick $w_0 \in S_W$ such that $\|T(w_0)\| > 1 - \eta(\varepsilon, T).$ Let $z_0^* \in S_{Z^*}$ be such that $|\langle z_0^*, T(w_0)\rangle | = \|T(w_0)\| > 1 - \eta(\varepsilon, T)$. Let $w_0^* \in S_{W^*}$ be such that $\langle w_0^*, w_0\rangle = 1$ and consider the point $((w_0, 0), (w_0^*, z_0^*)) \in \Pi (W \oplus_1 Z)$. Since $\widetilde{T} \in \mathcal{A}_{\num}(W \oplus_1 Z)$ and \begin{equation*} | \langle (w_0^*, z_0^*), \widetilde{T}(w_0, 0) \rangle | = |\langle z_0^*, T(w_0)\rangle | > 1 - \eta(\varepsilon, T) = 1 - \eta(\varepsilon, \widetilde{T}), \end{equation*} there is $((w_1, z_1), (w_1^*, z_1^*)) \in \Pi(W \oplus_1 Z)$ such that \begin{equation*} \nu (\widetilde{T}) = | \langle (w_1^*, z_1^*), \widetilde{T} (w_1, z_1) \rangle|, \ \ \|(w_1, z_1) - (w_0, 0)\|_1 < \varepsilon \ \ \mbox{and} \ \ \|(w_1^*, z_1^*) - (w_0^*, z_0^*) \|_{\infty} < \varepsilon. \end{equation*} So $1 = | \langle (w_1^*, z_1^*), \widetilde{T} (w_1, z_1) \rangle| = |\langle z_1^*, T(w_1)\rangle | \leq \|z_1^*\| \|T(w_1)\| \leq 1$. This implies that $\|T(w_1)\| = 1$ and, since $\|T(w_1)\| \leq \|w_1\| = 1 - \|z_1\|$, that $z_1 = 0$. So $\|w_1 - w_0\| < \varepsilon$. 
This proves that $T \in \mathcal{A}_{\| \cdot \|}(W, Z)$. (b). Suppose $T \in \mathcal{A}_{\| \cdot \|}(W, Z)$. It is plain to check that $\widetilde{T}$ attains its numerical radius and $ \nu (\widetilde{T}) = 1$. Given $\varepsilon \in (0, 1)$, we set \begin{equation*} \eta(\varepsilon, \widetilde{T}) := \min \left\{ \eta \left( \min \left\{ \frac{\delta_{W^*}(\varepsilon)}{2}, \frac{\delta_{Z^*}(\varepsilon)}{2}, \frac{\varepsilon}{2} \right\}, T \right), \frac{\delta_{W^*}(\varepsilon)}{2}, \frac{\delta_{Z^*}(\varepsilon)}{2}, \frac{\varepsilon}{2} \right \} > 0, \end{equation*} where $\varepsilon \mapsto \delta_{W^*}(\varepsilon)$ and $\varepsilon \mapsto \delta_{Z^*} (\varepsilon)$ are the moduli of convexity of $W^*$ and $Z^*$, respectively. Let $((w_1, z_1), (w_1^*, z_1^*)) \in \Pi(W \oplus_1 Z)$ be such that \begin{equation*} | \langle z_1^*, T(w_1) \rangle | = \left|\langle (w_1^*, z_1^*), \widetilde{T}(w_1, z_1) \rangle \right|> 1 - \eta(\varepsilon, \widetilde{T}). \end{equation*} As we have \begin{equation*} \|T(w_1)\| \geq \left| \langle z_1^*, T(w_1) \rangle \right| > 1 - \eta \left( \min \left\{ \frac{\delta_{W^*}(\varepsilon)}{2}, \frac{\delta_{Z^*}(\varepsilon)}{2}, \frac{\varepsilon}{2} \right\}, T \right), \end{equation*} there is $w_2 \in S_W$ such that \begin{equation*} \|T(w_2)\| = 1 \ \ \ \mbox{and} \ \ \ \|w_2 - w_1\| < \min \left\{ \frac{\delta_{W^*}(\varepsilon)}{2}, \frac{\delta_{Z^*}(\varepsilon)}{2}, \frac{\varepsilon}{2} \right\}. \end{equation*} Since $\|w_1\| \geq | \langle z_1^*, T(w_1) \rangle | > 1 - \eta(\varepsilon, \widetilde{T})$, we have that $\|z_1\| < \eta(\varepsilon, \widetilde{T})$. Let $w_2^* \in S_{W^*}$ be such that $ \langle w_2^*, w_2 \rangle = 1$. Then \begin{equation*} \left| \frac{ \langle z_1^*, z_1 \rangle - \langle w_1^*, w_2 - w_1 \rangle }{2} \right| \leq | \langle z_1^*, z_1 \rangle | + \|w_1^*\|\|w_2 - w_1\| < \delta_{W^*}(\varepsilon). 
\end{equation*} So, we have \begin{eqnarray*} \left\| \frac{w_1^* + w_2^*}{2} \right\| \geq \left| \left\langle \frac{w_1^* + w_2^*}{2}, w_2 \right\rangle \right| &=& \left|\frac{2 - \langle z_1^*, z_1 \rangle + \langle w_1^*, w_2 - w_1 \rangle }{2} \right| \\ &\geq& 1 - \left| \left( \frac{ \langle z_1^*, z_1 \rangle - \langle w_1^*, w_2-w_1 \rangle }{2} \right) \right| > 1 - \delta_{W^*} (\varepsilon), \end{eqnarray*} which implies that $\|w_2^* - w_1^*\| < \varepsilon.$ Let $\theta \in \mathbb{R}$ be such that $ \langle z_1^*, T(w_2) \rangle = e^{i\theta} | \langle z_1^*, T(w_2) \rangle | $. Notice that \begin{eqnarray*} | \langle z_1^*, T(w_2) \rangle | \geq | \langle z_1^*, T(w_1) \rangle | - | \langle z_1^*, T(w_2 - w_1) \rangle | \geq 1 - \delta_{Z^*}(\varepsilon). \end{eqnarray*} Now, let $z_2^* \in S_{Z^*}$ be such that $\langle z_2^*, T(w_2) \rangle = e^{i\theta}$. Observe that $$ \left\| \frac{z_1^* + z_2^*}{2} \right\| \geq \left| \left\langle \frac{z_1^* + z_2^*}{2}, T(w_2) \right\rangle \right| = \frac{1 + | \langle z_1^*, T(w_2) \rangle |}{2} > 1 - \delta_{Z^*} (\varepsilon); $$ hence $\|z_2^* - z_1^*\| < \varepsilon.$ Finally, considering the point $((w_2, 0), (w_2^*, z_2^*)) \in \Pi(W \oplus_1 Z)$, we conclude that $\widetilde{T} \in \mathcal{A}_{\num}(W \oplus_1 Z)$. \end{proof} \begin{remark}\label{counterexsum1} Proposition \ref{propsums1}.(b) no longer holds in general if we consider arbitrary Banach spaces instead of uniformly smooth ones. Indeed, consider the real Banach space $\ell_1$. Example \ref{ex3} provides an operator that belongs to $\mathcal{A}_{\| \cdot \|} (\ell_1, \ell_1)$ but not to $\mathcal{A}_{\text{nu}}(\ell_1)$. We will show that this operator does not satisfy the property stated in Proposition \ref{propsums1}.(b). Indeed, let $S \in \mathcal{L}(\ell_1)$ be the operator defined in Example \ref{ex3}. 
Note that if $((x,y),(x^*,y^*))\in\Pi(\ell_1 \oplus_1 \ell_1)$ satisfies \begin{equation}\label{sumcondition} |\langle (x^*,y^*),\widetilde{S}(x,y) \rangle|=|\langle y^*, S(x)\rangle|=\left| \sum_{j=1}^{\infty} \frac{y^*(j) x(1)}{2^j}\right|=1, \end{equation} then one easily gets that $y^*(j)x(1)$ has to be equal to either $1$ or $-1$ for all $j\in \mathbb{N}$. From here, we get that the only possibilities have the form $x = se_1$, $y=0$, $x^* = (s, x^*(2), x^*(3), \ldots)$, and $y^* = (r,r,r,\ldots)$ with $|x^*(j)|\leq 1$ for all $j>1$, where $s,r\in \{ -1, 1\}$. Now, suppose by contradiction that for a given $\varepsilon \in (0, 1)$, there is $\eta(\varepsilon, \widetilde{S}) > 0$. Let $n_0 \in \mathbb{N}$ be such that $\sum_{j=1}^{n_0} \frac{1}{2^j} > 1 - \eta(\varepsilon, \widetilde{S})$, and set $w=e_1$, $z=0$, $w^*=e_1^*$, and $z^* = e_1^*+\ldots+e_{n_0}^*$. It is immediate to check that $((w,z),(w^*,z^*))\in \Pi(\ell_1 \oplus_1 \ell_1)$ and also that $|\langle (w^*, z^*), \widetilde{S}(w,z) \rangle | >1-\eta(\varepsilon, \widetilde{S})$. Then, there must be some $((x,y),(x^*,y^*))\in \Pi(\ell_1 \oplus_1 \ell_1)$ satisfying \eqref{sumcondition} and such that $\| (w,z)-(x,y) \|_1 < \varepsilon$ and $\| (w^*,z^*)-(x^*,y^*) \|_\infty < \varepsilon$. But this is already a contradiction, since $\| (x^*-w^*,y^*-z^*) \|_\infty \geq \| y^*-z^*\|_\infty \geq 1.$ Therefore $\widetilde{S} \notin \mathcal{A}_{\text{nu}}(\ell_1 \oplus_1 \ell_1)$, as desired, even though $S \in \mathcal{A}_{\| \cdot \|}(\ell_1, \ell_1)$. \end{remark} \begin{remark}\label{remarksums2} There exists an operator $S \in \mathcal{L} (W\oplus_1 Z)$, with both $W$ and $Z$ being uniformly smooth Banach spaces, such that $S\in \mathcal{A}_{\num}(W \oplus_1 Z)$ but $\widecheck{S}\notin \mathcal{A}_{\| \cdot \|}(W, Z)$ (note that this does not contradict Proposition \ref{propsums1}, since our $S$ is not of the form $\widetilde{T}$ for any operator $T$). 
Indeed, let $S \in \mathcal{L} ( \ell_2 \oplus_1 \ell_2 )$ be defined as $$ S(x,y) = ( ( x(1) , 0, 0, \cdots), (0,0,0,\cdots)), \quad \forall (x,y) \in \ell_2 \oplus_1 \ell_2, $$ where $\ell_2$ is a real space. Note that $\nu (S) = 1$ and $S$ attains its numerical radius. For $\varepsilon \in (0,1)$, suppose that $| \langle (x^*, y^*), S(x,y) \rangle | > 1 -\varepsilon >0$ for some $((x,y),(x^*,y^*)) \in \Pi (\ell_2\oplus_1\ell_2)$. Then $| x(1) | > 1-\varepsilon$, $| x^*(1) | > 1-\varepsilon$ and $\|y\| < \varepsilon$. Note also that \begin{eqnarray*} 1 \geq | x(1) |^2 + \sum_{n\neq 1} | x(n) |^2 \geq | x(1) |^2 > (1-\varepsilon)^2, \end{eqnarray*} which implies that $(\sum_{n\neq 1} | x(n) |^2)^{1/2} < (2\varepsilon - \varepsilon^2)^{1/2}$. On the other hand, \begin{eqnarray*} 1 = \sum_{n} x(n) x^*(n) + \sum_{n} y(n) y^*(n) \leq \|x\| \|x^*\| + \|y\|\|y^*\| \leq \|x\| + \|y\| = 1. \end{eqnarray*} From this, we have $\|x^*\| = \|y^*\| = 1$. As above, we can see that $(\sum_{n\neq 1} | x^*(n) |^2)^{1/2} < (2\varepsilon - \varepsilon^2)^{1/2}$. If we define the pairs of vectors \begin{eqnarray*} (\widetilde{x}, \widetilde{y}) = \left( \left( \frac{ x(1) }{| x(1) |}, 0, 0, \cdots \right), 0 \right) \ \ \mbox{and} \ \ (\widetilde{x^*}, \widetilde{y^*}) = \left( \left( \frac{ x^*(1) }{| x^*(1) |}, 0, 0, \cdots \right), y^* \right), \end{eqnarray*} then $\| ({x}, {y})-(\widetilde{x}, \widetilde{y})\| \leq 2\varepsilon + \sqrt{2\varepsilon}$ and $\| ({x^*}, {y^*})-(\widetilde{x^*}, \widetilde{y^*})\| \leq \varepsilon + \sqrt{2\varepsilon}$. It is clear that $\left( (\widetilde{x}, \widetilde{y}), (\widetilde{x^*}, \widetilde{y^*}) \right) \in \Pi(\ell_2 \oplus_1 \ell_2)$ and that $|\langle (\widetilde{x^*}, \widetilde{y^*}), S(\widetilde{x},\widetilde{y}) \rangle| =1$. This proves that $S$ belongs to $\mathcal{A}_{\text{nu}} (\ell_2 \oplus_1 \ell_2)$. 
However, $\widecheck{S} \in \mathcal{L}( \ell_2 )$ is the operator such that \begin{align*} \widecheck{S} x = (P_2 \circ S) (x,0) = P_2 ( ( x(1) ,0,0,\cdots), (0,0,0,\cdots)) = 0 \end{align*} for every $x \in \ell_2$; hence $\widecheck{S} = 0$, and the null operator cannot belong to $\mathcal{A}_{\|\cdot\|} (\ell_2, \ell_2)$. \end{remark} We now proceed to prove the analogous results for $\ell_{\infty}$-sums, under different hypotheses on the underlying spaces. \begin{prop}\label{propsums2} Let $W$ and $Z$ be two Banach spaces, and let $T\in S_{\mathcal{L}(W, Z)}$. Then: \begin{itemize} \item[(a)] If $\widetilde{T}\in \mathcal{A}_{\num}(W \oplus_{\infty} Z)$, then $T\in \mathcal{A}_{\| \cdot \|}(W, Z)$. \item[(b)] Suppose that $Z$ is a uniformly convex Banach space and $W$ is a uniformly smooth Banach space. If $T \in \mathcal{A}_{\| \cdot \|}(W, Z)$, then $\widetilde{T} \in \mathcal{A}_{\num}(W \oplus_{\infty} Z)$. \end{itemize} \end{prop} \begin{proof} (a). Suppose $\widetilde{T} \in \mathcal{A}_{\num}(W \oplus_{\infty} Z)$. Given $\varepsilon \in (0, 1)$, we set $\eta(\varepsilon, T) := \eta(\varepsilon, \widetilde{T}) > 0$. Let $w_0 \in S_W$ be such that $\|T(w_0)\| > 1 - \frac{\eta(\varepsilon, T)}{2}.$ Take $\widetilde{z_0}^* \in S_{Z^*}$ such that \begin{equation*} |\langle \widetilde{z_0}^*, T(w_0)\rangle| = \|T(w_0)\| > 1 - \frac{\eta(\varepsilon, T)}{2}. \end{equation*} By the Bishop-Phelps theorem, there are $z_0^* \in S_{Z^*}$ and $\widetilde{z_0} \in S_Z$ such that $|\langle z_0^*, \widetilde{z_0}\rangle | = 1$ and $\|z_0^* - \widetilde{z_0}^* \| < \frac{\eta(\varepsilon, T)}{2}$. 
Since $\langle z_0^*, \widetilde{z_0}\rangle = e^{i \theta}$ for some $\theta\in [0, 2\pi)$, we take $z_0 := e^{-i \theta} \widetilde{z_0} \in S_Z$ which satisfies $\langle z_0^*, z_0\rangle = 1$ and \begin{eqnarray*} |\langle z_0^*, T(w_0)\rangle | &=& |\langle \widetilde{z_0}^*, T(w_0)\rangle + \langle z_0^* - \widetilde{z_0}^*, T(w_0)\rangle | \\ &\geq& |\langle \widetilde{z_0}^*, T(w_0)\rangle | - \|z_0^* - \widetilde{z_0}^*\| \\ &>& 1 - \eta(\varepsilon, T). \end{eqnarray*} Consider the point $((w_0, z_0), (0, z_0^*)) \in \Pi(W \oplus_{\infty} Z)$. Then, since $\nu(\widetilde{T}) = 1$ and \begin{equation*} | \langle (0, z_0^*), \widetilde{T} (w_0, z_0) \rangle| = |\langle z_0^*, T(w_0)\rangle | > 1 - \eta(\varepsilon, T) = 1 - \eta(\varepsilon, \widetilde{T}), \end{equation*} there is $((w_1, z_1), (w_1^*, z_1^*)) \in \Pi(W \oplus_{\infty} Z)$ such that \begin{equation*} | \langle (w_1^*, z_1^*), \widetilde{T}(w_1, z_1) \rangle| = 1, \ \ \|(w_1, z_1) - (w_0, z_0)\|_{\infty} < \varepsilon \ \ \mbox{and} \ \ \|(w_1^*, z_1^*) - (0, z_0^*)\|_1 < \varepsilon. \end{equation*} So, since $1 = | \langle (w_1^*, z_1^*), \widetilde{T}(w_1, z_1) \rangle| = |\langle z_1^*, T(w_1)\rangle | \leq \|T(w_1)\| \leq 1$, we get that $\|T(w_1)\| = \|w_1\| = 1$. Finally, $\|w_1 - w_0\| \leq \|(w_1, z_1) - (w_0, z_0)\|_{\infty} < \varepsilon$. This shows that $T \in \mathcal{A}_{\| \cdot \|}(W, Z)$. (b). Suppose $T \in \mathcal{A}_{\| \cdot \|}(W, Z)$. It is not difficult to see that $ \nu (\widetilde{T}) = 1$ and that $\widetilde{T}$ attains its numerical radius. 
Now let $\varepsilon \in (0, 1)$ be given and set $\eta(\varepsilon, \widetilde{T})$ as the positive real number $\eta(\varepsilon, \widetilde{T}) := \min \left\{ \varepsilon_0, \eta \left(\varepsilon_0, T \right)\right\},$ where $$ \varepsilon_0 = \min\left\{ \frac{1}{2} \delta_{Z^*} \left(\min\left\{ \frac{\delta_{Z} (\varepsilon)}{2}, \frac{\varepsilon}{2}\right\} \right), \frac{\delta_{Z} (\varepsilon)}{2}, \frac{\varepsilon}{2} \right\}. $$ Let $((w_1, z_1), (w_1^*, z_1^*)) \in \Pi (W \oplus_{\infty} Z)$ be such that \begin{equation*} \left| \langle z_1^*, T(w_1) \rangle \right| = \left| \langle (w_1^*, z_1^*), \widetilde{T} (w_1, z_1) \rangle \right|> 1 - \eta(\varepsilon, \widetilde{T}). \end{equation*} Since $\|T(w_1)\| \geq \left| \langle z_1^*, T(w_1) \rangle \right| > 1 - \eta(\varepsilon, \widetilde{T}),$ there is $w_2 \in S_W$ such that $\|T(w_2)\| = 1$ and $\|w_2 - w_1\| < \varepsilon_0.$ Since $\|z_1^*\| \geq \left| \langle z_1^*, T(w_1) \rangle \right| > 1 - \eta(\varepsilon, \widetilde{T})$, we get that $\|w_1^*\| < \eta(\varepsilon, \widetilde{T}) \leq \frac{\varepsilon}{2}$. Let $\theta \in \mathbb{R}$ be such that $ \langle z_1^*, T(w_2) \rangle = | \langle z_1^*, T (w_2) \rangle | e^{i\theta}$. Pick $z_2^* \in S_{Z^*}$ such that $\langle z_2^*, T(w_2) \rangle = e^{i\theta}$ and notice that $\left| \langle z_1^*, T(w_2) \rangle \right| > 1 - 2 \varepsilon_0 > 1 - \delta_{Z^*} \left( \min \left\{ \frac{\delta_Z(\varepsilon)}{2}, \frac{\varepsilon}{2} \right\} \right).$ Thus, \begin{align}\label{sumeq8.5mg} \left\| \frac{z_1^* + z_2^*}{2} \right\| \geq \left| \left\langle \frac{z_1^* + z_2^*}{2}, T(w_2) \right\rangle \right| = \frac{| \langle z_1^*, T(w_2) \rangle | + 1}{2} > 1 - \delta_{Z^*} \left( \min \left\{ \frac{\delta_Z(\varepsilon)}{2}, \frac{\varepsilon}{2} \right\} \right). \end{align} This implies that $\|z_2^* - z_1^*\| < \min \left\{ \frac{\delta_Z(\varepsilon)}{2}, \frac{\varepsilon}{2} \right\}$. 
By using the above estimates, \begin{align*} \left\| \frac{T(e^{-i\theta} w_2) + z_1}{2} \right\| &\geq \left| \left\langle z_1^*, \frac{T(e^{-i\theta} w_2) + z_1}{2} \right\rangle \right| = \left| \frac{| \langle z_1^*, T(w_2) \rangle | +1 - \langle w_1^*, w_1 \rangle }{2} \right| \\ &\geq \left| \frac{| \langle z_1^*, T(w_2) \rangle | +1}{2} \right| - \left| \frac{ \langle w_1^*, w_1 \rangle }{2} \right| \geq 1 - \delta_{Z} (\varepsilon), \end{align*} and so $\|T(e^{-i\theta} w_2) - z_1\| < \varepsilon$. Finally, we conclude that $\widetilde{T}$ attains its numerical radius at the point $\left((w_2, T(e^{-i\theta} w_2)), (0, z_2^*) \right) \in \Pi(W \oplus_{\infty} Z)$, which is close to $((w_1, z_1), (w_1^*, z_1^*))$; hence $\widetilde{T} \in \mathcal{A}_{\num}(W \oplus_{\infty} Z)$. \end{proof} \begin{remark} Similar to what happened in Proposition \ref{propsums1}, Proposition \ref{propsums2}.(b) is not true in general for arbitrary Banach spaces. Indeed, consider the real Banach space $\ell_1$. As in Remark \ref{counterexsum1}, we will show that the operator introduced in Example \ref{ex3} does not satisfy the property stated in Proposition \ref{propsums2}.(b): Let $S \in \mathcal{L} (\ell_1)$ be the operator defined in Example \ref{ex3} and let $\widetilde{S}\in \mathcal{L}(\ell_1 \oplus_\infty \ell_1)$ be defined accordingly. Notice as before that if $((x,y),(x^*,y^*))\in\Pi(\ell_1 \oplus_\infty \ell_1)$ satisfies $|\langle (x^*,y^*),\widetilde{S}(x,y) \rangle|=|\langle y^*, S(x)\rangle|=\left| \sum_{j=1}^{\infty} \frac{y^*(j) x(1)}{2^j}\right|=1,$ then $y^*(j)x(1)$ has to be equal to either $1$ or $-1$ for all $j\in \mathbb{N}$. From here, we get that the only possibilities have the form $x = se_1$, $y = (y(1),y(2),y(3),\ldots)$ with $ \sum_{j=1}^{\infty} y(j) =r$, $x^* = 0$, and $y^* = (r,r,r,\ldots)$, where $s,r\in \{ -1, 1\}$. 
Assuming that for $\varepsilon \in (0, 1)$, there exists $\eta(\varepsilon, \widetilde{S}) > 0$, we get a contradiction in the same manner as in Remark \ref{counterexsum1}. \end{remark} \begin{remark} Once again, there exists an operator $S\in \mathcal{L}(W\oplus_\infty Z) $, with $W$ uniformly smooth and $Z$ uniformly convex, such that $S\in \mathcal{A}_{\num}(W \oplus_{\infty} Z)$ but $\widecheck{S}\notin \mathcal{A}_{\| \cdot \|}(W, Z)$ (note that this does not contradict Proposition \ref{propsums2}, since our $S$ is not of the form $\widetilde{T}$ for any operator $T$). Indeed, the same argument used in Remark \ref{remarksums2} shows that $S \in \mathcal{L}( \ell_2 \oplus_{\infty} \ell_2 ) $, which is defined as $$ S(x,y) = ( ( x(1) , 0, 0, \cdots), (0,0,0,\cdots)), \quad \forall (x,y) \in \ell_2 \oplus_{\infty} \ell_2, $$ where $\ell_2$ is a real space, belongs to $\mathcal{A}_{\text{nu}} (\ell_2 \oplus_{\infty} \ell_2)$. However, $\widecheck{S} = 0$ cannot belong to $\mathcal{A}_{\|\cdot\|} (\ell_2, \ell_2)$. \end{remark} We finish the paper by noting that Propositions \ref{propsums1}.(b) and \ref{propsums2}.(b) are no longer true for $p$-sums with $1 < p < \infty$. Indeed, let $X$ be a uniformly convex and uniformly smooth Banach space and consider the identity operator $\Id_X \in \mathcal{L}(X)$. Clearly, $\Id_X $ belongs to $\mathcal{A}_{\| \cdot \|}(X, X)$. On the other hand, $\widetilde{\Id}_X \in \mathcal{L}(X \oplus_p X)$ is defined as $\widetilde{\Id}_X(x_1, x_2) = (0, x_1)$ for all $x_1, x_2 \in X$. Then $ \nu (\widetilde{\Id}_X) \leq \|\widetilde{\Id}_X\| = \| \Id_X\| = 1$. If $|\langle (x_1^*, x_2^*), \widetilde{\Id}_X(x_1, x_2) \rangle| = 1$ for some $((x_1, x_2), (x_1^*, x_2^*)) \in \Pi(X \oplus_p X)$, we would have $| \langle x_2^*, x_1 \rangle | = 1$, which would imply $\|x_2^*\| = \|x_1\| = 1$. 
Because of this, we would have $x_1^* = x_2 = 0$, since $\|x_1^*\|^q + \|x_2^*\|^q = 1 = \|x_1\|^p + \|x_2\|^p$ with $\frac{1}{p} + \frac{1}{q} = 1$, contradicting the assumption $ \langle x_1^*, x_1 \rangle + \langle x_2^*, x_2 \rangle = 1$. So, $\widetilde{\Id}_X$ cannot attain its numerical radius; hence it cannot belong to $\mathcal{A}_{\num} (X \oplus_p X)$. \subsection*{Acknowledgements} The authors would like to thank Manuel Maestre for suggesting the topic of the article and for his helpful comments during his visit to POSTECH. They also would like to thank Miguel Mart\'in and Abraham Rueda Zoca for fruitful conversations on the topic of the paper.
{ "timestamp": "2020-09-01T02:30:56", "yymm": "1910", "arxiv_id": "1910.05726", "language": "en", "url": "https://arxiv.org/abs/1910.05726", "abstract": "In this paper, we are interested in studying the set $\\mathcal{A}_{\\|\\cdot\\|}(X, Y)$ of all norm-attaining operators $T$ from $X$ into $Y$ satisfying the following: given $\\epsilon>0$, there exists $\\eta$ such that if $\\|Tx\\| > 1 - \\eta$, then there is $x_0$ such that $\\| x_0 - x\\| < \\epsilon$ and $T$ itself attains its norm at $x_0$. We show that every norm one functional on $c_0$ which attains its norm belongs to $\\mathcal{A}_{\\|\\cdot\\|}(c_0, \\mathbb{K})$. Also, we prove that the analogous result holds neither for $\\mathcal{A}_{\\|\\cdot\\|}(\\ell_1, \\mathbb{K})$ nor $\\mathcal{A}_{\\|\\cdot\\|}(\\ell_{\\infty}, \\mathbb{K})$. Under some assumptions, we show that the sphere of the compact operators belongs to $\\mathcal{A}_{\\|\\cdot\\|}(X, Y)$ and that this is no longer true when some of these hypotheses are dropped. The analogous set $\\mathcal{A}_{nu}(X)$ for numerical radius of an operator instead of its norm is also defined and studied. We present a complete characterization for the diagonal operators which belong to the sets $\\mathcal{A}_{\\| \\cdot \\|}(X, X)$ and $\\mathcal{A}_{nu}(X)$ when $X=c_0$ or $\\ell_p$. As a consequence, we get that the canonical projections $P_N$ on these spaces belong to our sets. We give examples of operators on infinite dimensional Banach spaces which belong to $\\mathcal{A}_{\\| \\cdot \\|}(X, X)$ but not to $\\mathcal{A}_{nu}(X)$ and vice-versa. Finally, we establish some techniques which allow us to connect both sets by using direct sums.", "subjects": "Functional Analysis (math.FA)", "title": "Norm attaining operators which satisfy a Bollobás type theorem", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9863631671237733, "lm_q2_score": 0.7185943985973773, "lm_q1q2_score": 0.7087950468779122 }
\title{Counting Finite Index Subrings of $\mathbb{Z}^n$}
% arXiv:1609.06433, https://arxiv.org/abs/1609.06433
\begin{abstract}
We count subrings of small index of $\mathbb{Z}^n$, where the addition and multiplication are defined componentwise. Let $f_n(k)$ denote the number of subrings of index $k$. For any $n$, we give a formula for this quantity for all integers $k$ that are not divisible by a 9th power of a prime, extending a result of Liu.
\end{abstract}
\section{Introduction}\label{sec1} The main goal of this paper is to study the number of subrings of $\mathbb{Z}^n$ of given index. We begin by reviewing an easier problem, counting subgroups of $\mathbb{Z}^n$ of given index. \subsection{Counting Subgroups of $\mathbb{Z}^n$}\label{sec_counting_subgroups} The \emph{zeta function} of an infinite group $G$ is defined by \[ \zeta_G(s) = \sum_{H \le G \atop [G:H] < \infty} [G:H]^{-s} = \sum_{k=1}^\infty s_G(k) k^{-s}, \] where $s$ is a complex variable and $s_G(k)$ is the number of subgroups of $G$ of index $k$. We can think of $\zeta_G(s)$ as a generating function that gives the number of subgroups $H$ of $G$ of each finite index. We focus on the case $G = (\mathbb{Z}^n,+)$ and write $s_n(k)$ in place of $s_{\mathbb{Z}^n}(k)$. A finite index subgroup of $\mathbb{Z}^n$ is a sublattice, and every sublattice of $\mathbb{Z}^n$ is the column span of a unique matrix $A$ in Hermite normal form. The index of the lattice spanned by $A$ is $\det(A)$. Let $M_n(\mathbb{Z})$ denote the set of all $n \times n$ matrices with entries in $\mathbb{Z}$. We have \[ s_n(k) = \#\{A \in M_n(\mathbb{Z}) \colon A \text{ is in Hermite normal form and } \det(A) = k\}. \] Throughout this paper, $p$ always represents a prime number and $\prod_p$ denotes a product over all primes. All of the zeta functions we consider have Euler products, see for example \cite[Section 1.2.2]{duSautoyWoodward} or \cite[Section 3.2]{VollRecent}. That is, \[ \zeta_{\mathbb{Z}^n}(s) = \prod_p \zeta_{\mathbb{Z}^n,p}(s), \] where \[ \zeta_{\mathbb{Z}^n,p}(s) = \sum_{k=0}^{\infty} s_n(p^k) p^{-ks}. \] A matrix $A \in M_n(\mathbb{Z})$ in Hermite normal form with $\det(A) = p^k$ has diagonal $(p^{i_1}, p^{i_2},\ldots, p^{i_n})$ where each $i_j \ge 0$ and $\sum_{j=1}^n i_j = k$. It is not difficult to compute the number of $n \times n$ matrices in Hermite normal form with given diagonal. 
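To make the preceding count concrete: with sublattices realized as column spans of upper triangular Hermite normal form matrices, a diagonal $(d_1, \ldots, d_n)$ with $d_1 \cdots d_n = k$ contributes $\prod_{i=1}^{n} d_i^{\,n-i}$ matrices, since row $i$ carries $n-i$ off-diagonal entries, each reduced modulo $d_i$. The following short Python sketch (our own illustration, not taken from the paper; the function name is arbitrary) computes $s_n(k)$ this way:

```python
def s_lattice(n, k):
    """s_n(k): the number of index-k sublattices of Z^n.

    Sum over diagonals (d_1, ..., d_n) with d_1 * ... * d_n = k; each
    diagonal contributes prod_i d_i**(n - i) Hermite normal form matrices,
    since row i carries n - i off-diagonal entries reduced modulo d_i.
    """
    def rec(i, rem):
        if i > n:
            return int(rem == 1)
        return sum(d ** (n - i) * rec(i + 1, rem // d)
                   for d in range(1, rem + 1) if rem % d == 0)
    return rec(1, k)
```

For a prime $p$ this returns $1 + p + \cdots + p^{n-1}$, the number of index-$p$ sublattices, and for $n = 2$ it returns $\sigma(k)$, the sum of the divisors of $k$; summing the contribution $\prod_i d_i^{\,n-i}$ over all diagonals is exactly the count of Hermite normal form matrices with determinant $k$.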
This leads to the fact that \begin{equation}\label{local_factor_subgroup} \zeta_{\mathbb{Z}^n,p}(s) = (1-p^{-s})^{-1} (1-p^{-(s-1)})^{-1}\cdots (1-p^{-(s-(n-1))})^{-1}, \end{equation} which implies \begin{equation}\label{subgroupeqn} \zeta_{\mathbb{Z}^n}(s) = \zeta(s) \zeta(s-1) \cdots \zeta(s-(n-1)). \end{equation} See the book of Lubotzky and Segal for five proofs of this fact \cite{LubotzkySegal}. We review one of these arguments in Section \ref{sec_hermite}, as it forms the basis for the approach to counting subrings that we explain in Section \ref{sec_subring_matrices}. Since $\zeta(s)$ has a simple pole at $s=1$, a standard Tauberian theorem from analytic number theory gives an asymptotic formula for the number of sublattices of $\mathbb{Z}^n$ of bounded index. We have \es{\label{asymptotic_subgroup_count} N_n(X) := &\ \#\{\mbox{sublattices of } \mathbb{Z}^n \mbox{ of index} <X\} = \sum_{k < X} s_n(k) \\ = &\ \frac{\zeta(n)\zeta(n-1)\cdots \zeta(2)}{n}X^n+O(X^{n-1}\log(X)) } as $X\to\infty$. In Section \ref{sec_hermite} we see that for fixed $n$ and $e$, $s_n(p^e)$ is a polynomial in $p$ that is not difficult to compute. Therefore, the problems of counting sublattices of $\mathbb{Z}^n$ of given index, and of asymptotically counting sublattices of bounded index, are well-understood. \subsection{Counting Subrings of $\mathbb{Z}^n$}\label{counting_subrings} We study the function analogous to $s_n(k)$ that counts subrings of $\mathbb{Z}^n$. We use the term \emph{subring} to mean a multiplicatively closed sublattice containing the multiplicative identity $(1,1,\ldots, 1)$. Let $f_n(k)$ denote the number of subrings of $\mathbb{Z}^n$ of index $k$. Define the \emph{subring zeta function of $\mathbb{Z}^n$} by \[ \zeta^R_{\mathbb{Z}^n}(s) = \sum_{k=1}^\infty f_n(k) k^{-s}. 
\] As in the previous section, this zeta function has an Euler product \[ \zeta^R_{\mathbb{Z}^n}(s) = \prod_p \zeta_{\mathbb{Z}^n,p}^R(s), \] where \[ \zeta_{\mathbb{Z}^n,p}^R(s) = \sum_{k=0}^\infty f_n(p^k) p^{-ks}. \] Equivalently, $\zeta_{\mathbb{Z}^n,p}^R(s) = \zeta_{\mathbb{Z}_p^n}^R(s)$, where $\mathbb{Z}_p$ denotes the ring of $p$-adic integers and $\zeta_{\mathbb{Z}_p^n}^R(s)$ is the zeta function counting finite index $\mathbb{Z}_p$-subalgebras of $\mathbb{Z}_p^n$. \begin{question} For fixed $n$ and $e$, how does $f_n(p^e)$ behave as a function~of~$p$? \end{question} Liu uses a strategy similar to the one outlined in Section \ref{sec_counting_subgroups} to compute $f_n(p^e)$ for $e \le 5$ and any $n$. (There is a small error in the computation of $f_n(p^5)$ that we correct here. More specifically, the constant terms in the coefficients of ${n \choose 6}$ and ${n \choose 7}$ are corrected to $141$ and $371$, respectively.) \begin{prop}\cite[Proposition 1.1]{Liu}\label{Liu_fn_prop} We have \begin{footnotesize} \begin{eqnarray*} f_n(1) & = & 1, \\ f_n(p) & = & \binom{n}{2},\\ f_n(p^2) & = & \binom{n}{2} + \binom{n}{3} + 3 \binom{n}{4},\\ f_n(p^3) & = & \binom{n}{2} + (p+1) \binom{n}{3} + 7 \binom{n}{4} + 10 \binom{n}{5} + 15 \binom{n}{6}, \\ f_n(p^4) & = & \binom{n}{2} + (3p+1) \binom{n}{3} + (p^2+p+10) \binom{n}{4} + (10p+21) \binom{n}{5} \\ & & + 70 \binom{n}{6} + 105 \binom{n}{7} + 105 \binom{n}{8}. \end{eqnarray*} \end{footnotesize} \end{prop} The main theorem of this paper extends this result to all $e\le 8$. 
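Before stating it, we note that for small $n$ and $k$ the values in Proposition \ref{Liu_fn_prop} can be confirmed by brute force: enumerate Hermite normal form bases of index $k$ and keep the sublattices that contain $(1,\ldots,1)$ and are closed under the componentwise product of their basis vectors (checking products of basis vectors suffices, since the product is bilinear). A Python sketch of this check (illustrative code of ours, not the paper's method; all names are arbitrary):

```python
from itertools import product

def hnf_bases(n, k):
    """Yield upper-triangular HNF matrices A (columns span the sublattice)
    with det A = k: A[i][i] = d_i and 0 <= A[i][j] < d_i for j > i."""
    def diags(i, rem):
        if i == n:
            if rem == 1:
                yield ()
            return
        for d in range(1, rem + 1):
            if rem % d == 0:
                for rest in diags(i + 1, rem // d):
                    yield (d,) + rest
    for d in diags(0, k):
        slots = [(i, j) for i in range(n) for j in range(i + 1, n)]
        for offs in product(*(range(d[i]) for i, _ in slots)):
            A = [[0] * n for _ in range(n)]
            for i in range(n):
                A[i][i] = d[i]
            for (i, j), v in zip(slots, offs):
                A[i][j] = v
            yield A

def contains(A, v):
    """Membership test for the column span of A, by back-substitution."""
    v, n = list(v), len(A)
    for i in reversed(range(n)):
        if v[i] % A[i][i]:
            return False
        c = v[i] // A[i][i]
        for r in range(i + 1):
            v[r] -= c * A[r][i]
    return True

def f_subrings(n, k):
    """f_n(k): index-k sublattices containing (1,...,1) and closed under
    the componentwise product of their basis vectors."""
    count = 0
    for A in hnf_bases(n, k):
        cols = [[A[r][j] for r in range(n)] for j in range(n)]
        if contains(A, [1] * n) and all(
                contains(A, [a * b for a, b in zip(c1, c2)])
                for c1 in cols for c2 in cols):
            count += 1
    return count
```

For instance, the three index-$2$ subrings of $\mathbb{Z}^3$ found this way are the sublattices $\{x : x_i \equiv x_j \pmod 2\}$, matching $f_3(p) = \binom{3}{2} = 3$; multiplicativity of $f_n$ (from the Euler product) can be checked on composite indices as well.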
\begin{thm}\label{Main_Theorem} We have \begin{footnotesize} \begin{eqnarray*} f_n(p^5) & = & {n \choose 2} + (4p + 1){n \choose 3} + (7p^2 + p + 13){n \choose 4} + (p^3 + p^2 + 41p + 31){n \choose 5} \\ & & + (15p^2 + 35p + 141){n \choose 6} +(105p + 371){n \choose 7} + 910{n \choose 8} + 1260{n \choose 9} \\ & & + 945{n \choose 10}, \end{eqnarray*} \begin{eqnarray*} f_n(p^6) & = & {n \choose 2} + (p^2 + 4p + 1){n \choose 3} + (p^3 + 14p^2 + p + 16){n \choose 4} \\ & & + (p^4+11p^3+2p^2+81p+41){n \choose 5} + (p^4 + p^3 + 131p^2 + 111p + 226){n \choose 6} \\ & & + (21p^3 + 56p^2 + 616p + 743){n \choose 7}+ (210p^2 + 770p + 2639){n \choose 8} \\ & & + (1260p + 6958){n \choose 9} + 14175{n \choose 10}+ 17325{n \choose 11} + 10395{n \choose 12},\\ & & \\ & & \\ f_n(p^7) & = & {n \choose 2} + (3p^2 + 4p + 1){n \choose 3} + (10p^3 + 12p^2 + p + 19){n \choose 4}\\ & & + (15p^4+21p^3+16p^2+121p+51){n \choose 5} \\ & & + (p^6 + p^5 + 17p^4 + 17p^3 + 392p^2 + 206p + 326){n \choose 6}\\ & & + (p^5 + 22p^4 + 288p^3 + 379p^2 + 1618p + 1219){n \choose 7} \\ & & + (28p^4 + 84p^3 + 2324p^2 + 3640p + 5279){n \choose 8}\\ & & + (378p^3 + 1638p^2 + 11298p + 18600){n \choose 9}+(3150p^2 + 15750p + 58800){n \choose 10}\\ & & + (17325p + 143605){n \choose 11} + 252945{n \choose 12} + 270270{n \choose 13} + 135135{n \choose 14}, \end{eqnarray*} and \begin{eqnarray*} f_n(p^8) & = & {n \choose 2} + (4p^2 + 4p + 1){n \choose 3} + (p^4 + 26p^3 + 9p^2 + p + 22){n \choose 4} \\ & & + (p^5 + 77p^4 -13p^3 + 52p^2 + 161p + 61){n \choose 5} \\ & & + (16p^6 + 31p^5 + 22p^4 + 187p^3 + 702p^2 + 301p + 441){n \choose 6} \\ & & + (p^8 + p^7 + 2p^6 + 23p^5 + 339p^4 + 1080p^3 + 1206p^2 + 3074p + 1800){n \choose 7}\\ & & + (26p^6 + 29p^5 + 652p^4 + 1093p^3 + 9374p^2 + 9073p + 8933){n \choose 8}\\ & & + (36p^5 + 498p^4 + 6420p^3 + 15324p^2 + 39810p + 37200){n \choose 9}\\ & & + (630p^4 + 3150p^3 + 46200p^2 + 103320p + 148551){n \choose 10}\\ & & + (6930p^3 + 41580p^2 + 243705p + 510730){n \choose 11} \\ 
& & + (51975p^2 + 329175p + 1474165){n \choose 12}\\ & & + (270270p + 3258255){n \choose 13} + 5045040{n \choose 14} + 4729725{n \choose 15} + 2027025{n \choose 16}. \end{eqnarray*} \end{footnotesize} \end{thm} \subsection{Motivation: Counting Subrings and Orders of Bounded Index} Bhargava has asked about the asymptotic growth rate of $f_n(k)$ \cite{Liu}. We would like analogues of \eqref{asymptotic_subgroup_count}, that is, asymptotic formulas for the number of subrings of $\mathbb{Z}^n$ of bounded index. Expressions for $\zeta^R_{\mathbb{Z}^n}(s)$ analogous to \eqref{subgroupeqn} would lead to such results. However, such formulas are only known for $n \le 4$. \begin{thm}\label{GeneratingFunctions} We have \begin{eqnarray*} \zeta_{\mathbb{Z}^2}^R(s) & = & \zeta(s), \\ \zeta_{\mathbb{Z}^3}^R(s) & = & \frac{\zeta(3s-1) \zeta(s)^3}{\zeta(2s)^2}, \\ \zeta_{\mathbb{Z}^4}^R(s) & = & \prod_p \frac{1}{(1-p^{-s})^2(1-p^2 p^{-4s})(1-p^3 p^{-6s})} \Big(1+4 p^{-s}+2 p^{-2s}\\ & & +(4p-3) p^{-3s}+(5p-1)p^{-4s} +(p^2-5p)p^{-5s}+(3p^2-4p) p^{-6s} \\ & & -2p^2 p^{-7s}-4p^2 p^{-8s} - p^2 p^{-9s}\Big). \end{eqnarray*} \end{thm} \noindent The computation for $n=2$ is elementary. The $n=3$ result is originally due to Datskovsky and Wright \cite{DatskovskyWright}, and for $n =4$ it is a result of Nakagawa \cite{Nakagawa}. Liu gives combinatorial proofs of these results \cite{Liu}; his $A_n(p,p^{-s})$ is $\zeta_{\mathbb{Z}^n,p}^R(s)$. Kaplan, Marcinek, and Takloo-Bighash study the problem of counting subrings of bounded index in $\mathbb{Z}^n$ and prove the following. \begin{thm}\cite[Theorem 6]{KMTB}\label{KMTB_Theorem} Let \[ N^R_n(X) := \#\{\mbox{Subrings of } \mathbb{Z}^n \mbox{ of index less than } X\} = \sum_{k < X} f_n(k). \] \begin{enumerate} \item Let $n \le 5$. There is a positive real number $C_n$ such that \[ N^R_n(X) \sim C_n X (\log X)^{\binom{n}{2}-1} \] as $X \to \infty$. \item Suppose $n \ge 6$.
Then for any $\epsilon > 0$ we have \[ X (\log X)^{\binom{n}{2} -1} \ll N^R_n(X) \ll_{\epsilon} X^{\frac{n}{2}-\frac{7}{6} + \epsilon}. \] \end{enumerate} \end{thm} The authors of \cite{KMTB} derive the asymptotic order of growth for $N^R_5(X)$ up to a constant, despite not having a formula for $\zeta^R_{\mathbb{Z}^5}(s)$ analogous to those of Theorem \ref{GeneratingFunctions}. The main idea is to try to locate the right-most pole of $\zeta_{\mathbb{Z}^n}^R(s)$ by computing $f_n(p^e)$ exactly for small $e$ and giving estimates for larger $e$. A major motivation for the computations of this paper is to try to prove stronger versions of Theorem \ref{KMTB_Theorem}. For $n \ge 6$ we do not even know of a conjecture for the asymptotic growth rate of $N_n^R(X)$. One of the main problems in the rapidly growing field of arithmetic statistics is to count finite extensions of a number field and the orders that they contain. For example, it is an old conjecture that the number of isomorphism classes of degree $n$ extensions $K$ of $\mathbb{Q}$ with $|\mathop{\rm disc}(K)| < X$ is asymptotic to a constant depending on $n$ times $X$. One can also ask for the number of isomorphism classes of orders contained in these fields with discriminant at most $X$ in absolute value. Bhargava has proven breakthrough results counting quartic and quintic fields by first counting all isomorphism classes of orders in these fields and then sieving for the maximal ones \cite{Bhargava4, Bhargava5}. Bhargava, Malle, and others have made extensive precise conjectures for counting finite extensions with bounded discriminant and specified Galois group \cite{BhargavaIMRN, Malle1, Malle2}. Problems about counting orders contained in field extensions of given degree with bounded discriminant have received less attention. 
Recall that if $K$ is a number field with ring of integers $\mathcal{O}_K$, an order $\mathcal{O} \subseteq \mathcal{O}_K$ is a subring of $\mathcal{O}_K$ with identity that is a $\mathbb{Z}$-module of rank $n$. If $\mathcal{O} \subseteq \mathcal{O}_K$ is an order, then $\mathop{\rm disc}(\mathcal{O}) = [\mathcal{O}_K:\mathcal{O}]^2\cdot \mathop{\rm disc}(\mathcal{O}_K)$. \begin{question} Let $B_n(X)$ denote the number of isomorphism classes of orders $\mathcal{O}$ in all degree $n$ number fields such that $|\mathop{\rm disc}(\mathcal{O})| < X$. How does $B_n(X)$ grow as a function of $X$? \end{question} It follows from work of Davenport and Heilbronn for $n = 3$ \cite{DavenportHeilbronn}, and Bhargava for $n = 4,5$ \cite{Bhargava4, Bhargava5}, that $B_n(X)$ is asymptotic to a constant $c_n$ times $X$. For $n \ge 6$ we do not know of a conjecture for the asymptotic growth rate of this function. One approach to this problem is to count orders contained in a fixed field $K$ of bounded index, and take a sum over all $K$ of fixed degree. \begin{question}\label{count_orders} Let $K$ be a number field and let $N_K(X)$ denote the number of isomorphism classes of orders $\mathcal{O}$ contained in $K$ such that $|\mathop{\rm disc}(\mathcal{O})| < X$. How does $N_K(X)$ grow as a function of $X$? \end{question} Kaplan, Marcinek, and Takloo-Bighash study Question \ref{count_orders} by investigating analytic properties of the subring zeta function of $\mathcal{O}_K$. More precisely, let \[ \zeta_{\mathcal{O}_K}^R(s) = \sum_{\mathcal{O} \subseteq \mathcal{O}_K} [\mathcal{O}_K:\mathcal{O}]^{-s} \] where the sum is taken over all orders in $\mathcal{O}_K$. In the statement of the theorem below, $r_2$ is an explicitly computable positive integer that depends on the Galois group of the normal closure of $K/\mathbb{Q}$; see \cite{KMTB} for details. 
\begin{thm}\cite[Theorem 2]{KMTB}\label{KMTB_Theorem_Orders} \begin{enumerate} \item For $n \le 5$, there is a constant $C_K >0$ such that \[ N_K(X) \sim C_K X^{1/2} (\log X)^{r_2 -1}, \] as $X \to \infty$, \item For any $n \ge 6$ and any $\epsilon > 0$, \[ X^{1/2} (\log X)^{r_2 -1} \ll N_K(X) \ll_\epsilon X^{\frac{n}{4} - \frac{7}{12} + \epsilon}. \] \end{enumerate} \end{thm} \noindent When $[K:\mathbb{Q}] \ge 6$, we do not know of a conjecture for the asymptotic growth rate of $N_K(X)$. The subring zeta function of $\mathcal{O}_K$ has an Euler product, and its local factors satisfy $\zeta^R_{\mathcal{O}_K,p}(s) = \zeta^R_{\mathcal{O}_K \otimes \mathbb{Z}_p}(s)$, where this zeta function counts finite index $\mathbb{Z}_p$-subalgebras. When $p$ splits completely in $\mathcal{O}_K,\ \mathcal{O}_K \otimes \mathbb{Z}_p \cong \mathbb{Z}_p^n$, so \[ \zeta_{\mathcal{O}_K,p}^R(s) = \zeta_{\mathbb{Z}_p^n}^R(s) = \zeta_{\mathbb{Z}^n,p}^R(s). \] The $n =3$ case of Theorem \ref{KMTB_Theorem_Orders} follows from earlier work of Datskovsky and Wright \cite{DatskovskyWright}, who compute $\zeta_{\mathcal{O}_K}^R(s)$ for any cubic field $K$. The $n=4$ case follows from Nakagawa's computation for any quartic field $K$ of $\zeta^R_{\mathcal{O}_K,p}(s)$ at all unramified primes $p$ \cite{Nakagawa}. The authors of \cite{KMTB} suggest that among all unramified primes, those that split completely may control the asymptotic growth rate of $N_K(X)$. This suggests that the growth rate of the simpler function $N_n^R(X)$ along with the Galois group of the normal closure of $K$ may determine the growth rate of $N_K(X)$. For more information, see the discussion following \cite[Theorem 4]{KMTB}. 
We hope that more precise results on the growth of $f_n(p^e)$ will lead not only to improved asymptotic estimates for counting subrings of $\mathbb{Z}^n$ of bounded index, like those of Theorem \ref{KMTB_Theorem}, but may also help to understand asymptotic formulas for counting orders of bounded index in a fixed number field, like those of Theorem \ref{KMTB_Theorem_Orders}. In Section \ref{sec_lower_bounds} we give lower bounds for $f_n(p^e)$ that are analogous to lower bounds for $N_K(X)$ due to Brakenhoff \cite{Brakenhoff}, suggesting a closer connection between these counting problems. \subsection{Motivation: Uniformity of Subring Zeta Functions} Zeta functions of infinite groups, rings, and algebras have been studied extensively from both a combinatorial and analytic point of view \cite{duSautoyWoodward, Rossmann2, Rossmann3, VollRecent, Voll}. A common question is how local factors of zeta functions vary with $p$. \begin{definition}\cite[Section 1.2.4]{duSautoyWoodward} A zeta function $\zeta_G(s)$ for which there exist finitely many rational functions $W_1(X,Y),\ldots, W_r(X,Y) \in \mathbb{Q}[X,Y]$ such that for each prime $p$ there is an~$i$ for which $\zeta_{G,p}(s) = W_i(p,p^{-s})$ is called \emph{finitely uniform}. If $r=1$, we say the zeta function is \emph{uniform}. This definition can be extended in the obvious way to subring zeta functions. \end{definition} \begin{example}\label{local_factor_ex} \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item In \eqref{local_factor_subgroup} we saw that $\zeta_{\mathbb{Z}^n,p}(s)$ is given by a single rational function in $p$ and $p^{-s}$. That is, the zeta function $\zeta_{\mathbb{Z}^n}(s)$ is uniform. \item When $K$ is a number field of degree at most $4$ with ring of integers $\mathcal{O}_K,\ \zeta^R_{\mathcal{O}_K}(s)$ is finitely uniform. The local factor at $p$ depends on the decomposition of $p$ in $\mathcal{O}_K$. 
\end{enumerate} \end{example} In order to understand how $f_n(p^e)$ varies with $p$, we want to know how the local factors $\zeta_{\mathbb{Z}^n,p}^R(s)$ vary. Grunewald, Segal, and Smith build on work of Denef \cite{Denef1,Denef2} and Igusa \cite{Igusa} to prove the following result. \begin{thm}\label{GSSthm}\cite[Theorem 3.5]{GSS} For each positive integer $n$ and each prime $p$ there exist polynomials $\Phi_{n,p}(X), \Psi_{n,p}(X) \in \mathbb{Z}[X]$ such that \[ \zeta_{\mathbb{Z}^n,p}^R(s) = \frac{\Phi_{n,p}(p^{-s})}{\Psi_{n,p}(p^{-s})}. \] Moreover, the degrees of $\Phi_{n,p}$ and $\Psi_{n,p}$ are bounded independently of the prime $p$. \end{thm} When $n$ is fixed, we want to understand how these rational functions vary with $p$. \begin{question}\cite[Question 3.7]{VollRecent}\label{zetaznuniform} Is the zeta function $\zeta_{\mathbb{Z}^n}^R(s)$ uniform? Is it finitely uniform? \end{question} Expanding the rational functions of Theorem \ref{GSSthm} as power series and computing individual coefficients shows that $\zeta_{\mathbb{Z}^n}^R(s)$ is uniform if and only if for each fixed $e\ge 1,\ f_n(p^e)$ is a polynomial in~$p$. See \cite[Section 2.1]{Voll} for additional discussion. \begin{question}\label{fnpolyq} For fixed $n \ge 2$ and $e \ge 1$, is $f_n(p^e)$ a polynomial in $p$? \end{question} It was previously known that $f_n(p^e)$ is a polynomial in $p$ for each fixed $e \le 5$. The main result of this paper, Theorem \ref{Main_Theorem}, extends this to $e \le 8$. This provides some evidence for a positive answer to Questions \ref{zetaznuniform} and \ref{fnpolyq}. However, we see in the proof of Theorem \ref{Main_Theorem} how equations for varieties over finite fields play a role in our counting problem. More precisely, in the proof of Lemma \ref{lemma3211} we see a close relation to the variety $V$ in $5$-dimensional affine space over $\mathbb{F}_p$ defined by \[ (x^2-x) - (u^2 - u) c' = (y^2-y) - (v^2 - v) c' = xy - uv c' = 0.
\] A computation in Magma shows that $V$ is $2$-dimensional, with $7$ irreducible components, and suggests that $\#V(\mathbb{F}_p) = 7p^2 - 6p+6$. We verify this formula in the proof of Lemma \ref{lemma3211}. It is not difficult to imagine how for larger values of $e$, more complicated varieties may play a role in the formula for $f_n(p^e)$. \subsection{Outline of the Paper} In Section \ref{sec2} we follow the method of Liu and give a bijection between subrings of $\mathbb{Z}^n$ and matrices of a specific form. This transforms the question of counting subrings of prime power index into a problem of counting matrices with entries satisfying certain divisibility conditions. We then review Liu's notion of irreducible subrings. Let $g_n(k)$ be the number of irreducible subrings of $\mathbb{Z}^{n}$ of index $k$. We recall a recurrence due to Liu relating $g_n(k)$ and $f_n(k)$ in Proposition \ref{fnrecurrence}. Our $g_n(k)$ is denoted by $g_{n-1}(k)$ in \cite{Liu}. In Section \ref{sec3} we express $g_n(p^e)$ as a sum over irreducible subrings with fixed diagonal entries. Possible diagonals are in bijection with compositions of $e$ into $n-1$ parts. We verify that for $n \le 9$ and $e \le 8$ each of the functions counting irreducible subring matrices with a fixed diagonal is given by a polynomial in $p$. We use this method along with the recurrence of \cite{Liu} to compute $f_n(p^e)$ for $e \leq 8$, proving Theorem \ref{Main_Theorem}. In Section \ref{sec_lower_bounds} we give lower bounds for $g_n(p^e)$ analogous to results of Brakenhoff for orders in a fixed number field \cite{Brakenhoff}. We end with questions for further study. \section{Subring Matrices and Irreducible Subring Matrices} \label{sec2} In our analysis of $f_{n}(k)$ we employ techniques developed in \cite{Liu}, where Liu gives a bijection between subrings of $\mathbb{Z}^n$ and a class of integer matrices.
This reduces the problem of counting subrings of index $k$ in $\mathbb{Z}^n$ to the problem of counting \emph{subring matrices}, which decompose as direct sums of yet simpler \emph{irreducible subring matrices}. \subsection{Counting Matrices in Hermite Normal Form}\label{sec_hermite} We begin by giving a proof of \eqref{local_factor_subgroup} due to Bushnell and Reiner \cite{LubotzkySegal}. \begin{definition} A matrix $A \in M_n(\mathbb{Z})$ with entries $a_{ij}$ is in \emph{Hermite normal form} if: \begin{enumerate} \item $A$ is upper triangular, and \item $0\leq a_{ij} < a_{ii}$ for $1\leq i < j \leq n$. \end{enumerate} \end{definition} There is a bijection between sublattices of $\mathbb{Z}^n$ of index $p^k$ and matrices $A\in M_n(\mathbb{Z})$ in Hermite normal form with $\det(A) = p^k$. The diagonal entries of such a matrix are of the form $(p^{i_1},\ldots, p^{i_n})$, where each $i_j \ge 0$ and $\sum_{j=1}^n i_j = k$. \begin{definition} A \emph{weak composition} of an integer $k$ is a list of non-negative integers $(\alpha_1,\ldots, \alpha_n)$ where $\sum_{i=1}^n \alpha_i = k$. Each $\alpha_i$ is a \emph{part} of the weak composition, and $n$ is the \emph{length} or \emph{number of parts}. A weak composition in which every part is positive is called a \emph{composition} of $k$. \end{definition} \noindent The possible diagonals of an $n \times n$ matrix in Hermite normal form with determinant $p^k$ are in bijection with weak compositions of $k$ of length~$n$. The number of $n \times n$ matrices in Hermite normal form with diagonal $(p^{i_1},\ldots, p^{i_n})$ is $p^{(n-1) i_1} p^{(n-2) i_2} \cdots p^{i_{n-1}}$. Taking a sum of these terms over all weak compositions of $k$ into $n$ parts gives a polynomial formula for $s_n(p^k)$.
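These counts are easy to reproduce by brute force for small parameters. The following minimal Python sketch (the function names are ours, not from the literature) enumerates Hermite normal form matrices entry by entry and compares the total with the closed form above:

```python
from itertools import product
from math import prod

def weak_compositions(k, parts):
    # all tuples of `parts` non-negative integers summing to k
    if parts == 1:
        yield (k,)
        return
    for first in range(k + 1):
        for rest in weak_compositions(k - first, parts - 1):
            yield (first,) + rest

def s_direct(n, p, k):
    # count n x n Hermite normal form matrices of determinant p^k by
    # listing every admissible tuple of off-diagonal entries explicitly
    total = 0
    for exps in weak_compositions(k, n):
        diag = [p ** e for e in exps]
        # one free entry a_{ij} in [0, a_{ii}) for each pair i < j
        ranges = [range(diag[i]) for i in range(n) for j in range(i + 1, n)]
        total += sum(1 for _ in product(*ranges))
    return total

def s_formula(n, p, k):
    # the closed form: sum over weak compositions (i_1, ..., i_n) of k
    # of p^{(n-1) i_1} p^{(n-2) i_2} ... p^{i_{n-1}}
    return sum(prod(p ** ((n - 1 - r) * e) for r, e in enumerate(exps))
               for exps in weak_compositions(k, n))
```

For example, $s_2(3^2) = 1 + 3 + 9 = 13$ and $s_3(2^3) = 155$, and the two functions agree whenever the direct enumeration is feasible.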
We have, \begin{eqnarray*} \zeta_{\mathbb{Z}^n,p}(s) & = & \sum_{k=0}^\infty s_n(p^k) p^{-ks} = \sum_{i_1 = 0}^\infty \cdots \sum_{i_n = 0}^\infty p^{-(i_1+\cdots + i_n)s} p^{(n-1) i_1} p^{(n-2) i_2} \cdots p^{i_{n-1}} \\ & = & \left(\sum_{i_1 = 0}^\infty p^{-(s-(n-1))i_1} \right) \cdots \left(\sum_{i_{n-1} = 0}^\infty p^{-(s-1)i_{n-1}} \right) \left(\sum_{i_n = 0}^\infty p^{-s i_n} \right) \\ & = & (1-p^{-s})^{-1} (1-p^{-(s-1)})^{-1}\cdots (1-p^{-(s-(n-1))})^{-1}, \end{eqnarray*} completing the proof of \eqref{local_factor_subgroup}. \subsection{Counting Subrings via Liu's Bijection}\label{sec_subring_matrices} Liu adapts the argument of the previous section to count subrings of $\mathbb{Z}^n$. For column vectors $u = (u_1,\ldots, u_n)$ and $w = (w_1,\ldots, w_n)$ we write $u\circ w$ for the column vector given by the componentwise product $(u_1 w_1,\ldots, u_n w_n)$. We write $v_1,\ldots, v_n$ for the columns of an $n \times n$ matrix. \begin{prop}[Propositions 2.1 and 2.2 in \cite{Liu}] \label{prop:bijection-between-rings-and-matrices} There is a bijection between subrings with identity $L \subset \mathbb{Z}^{n}$ of index $k$ and matrices $A\in M_n(\mathbb{Z})$ in Hermite normal form with $\det(A) = k$ such that: \begin{enumerate} \item the identity element $(1, \ldots, 1)^{T}$ is in the column span of $A$, and \item for each $i,j \in [1,n]$, $v_i \circ v_j$ is in the lattice spanned by the column vectors $v_1,\ldots, v_n$. \end{enumerate} \end{prop} \begin{definition} \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item A lattice for which the second condition of Proposition \ref{prop:bijection-between-rings-and-matrices} holds is \emph{multiplicatively closed}. \item A matrix that satisfies the conditions of Proposition \ref{prop:bijection-between-rings-and-matrices} is a \emph{subring matrix}. \end{enumerate} \end{definition} Fixing $n$, we may calculate $f_n(k)$ by enumerating the corresponding subring matrices. 
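For very small $n$ and $k$ this enumeration can be carried out mechanically. The following minimal Python sketch (function names are ours) lists Hermite normal form matrices of determinant $k$ and keeps those satisfying the two conditions of Proposition \ref{prop:bijection-between-rings-and-matrices}:

```python
from itertools import product

def diagonals(n, k):
    # all tuples of n positive integers with product k
    if n == 1:
        yield (k,)
        return
    for d in range(1, k + 1):
        if k % d == 0:
            for rest in diagonals(n - 1, k // d):
                yield (d,) + rest

def in_span(A, b):
    # is b an integer combination of the columns of the upper-triangular
    # matrix A (positive diagonal)?  Back-substitute from the last column.
    n = len(A)
    b = list(b)
    for j in range(n - 1, -1, -1):
        if b[j] % A[j][j]:
            return False
        x = b[j] // A[j][j]
        for r in range(j + 1):
            b[r] -= x * A[r][j]
    return True

def f(n, k):
    # count subring matrices: n x n Hermite normal form with det k,
    # the identity element in the column span, and columns closed
    # under the componentwise product
    count = 0
    for diag in diagonals(n, k):
        free = [(r, j) for r in range(n) for j in range(r + 1, n)]
        for vals in product(*[range(diag[r]) for r, j in free]):
            A = [[0] * n for _ in range(n)]
            for r in range(n):
                A[r][r] = diag[r]
            for (r, j), v in zip(free, vals):
                A[r][j] = v
            cols = [[A[r][j] for r in range(n)] for j in range(n)]
            ok = in_span(A, [1] * n) and all(
                in_span(A, [cols[i][r] * cols[j][r] for r in range(n)])
                for i in range(n) for j in range(i, n))
            count += ok
    return count
```

Since $\zeta_{\mathbb{Z}^2}^R(s) = \zeta(s)$, we expect $f_2(k) = 1$ for every $k$, and expanding the $n = 3$ local factor in Theorem \ref{GeneratingFunctions} gives $f_3(p^2) = 4$ and $f_3(p^3) = p + 4$; the enumeration reproduces these values.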
Since $f_n(k)$ is weakly multiplicative, it suffices to consider $k=p^{e}$ for $p$ prime. This restricts our attention to subring matrices with diagonal entries $(p^{\alpha_{1}},\ldots,p^{\alpha_{n}})$ such that $(\alpha_1,\ldots, \alpha_n)$ is a weak composition of $e$ of length $n$. Subring matrices are direct sums of \emph{irreducible subring matrices}. \begin{definition}\label{irred-def} A subring $L\subset \mathbb{Z}^{n}$ of index $p^{e}$ is irreducible if for each $(x_{1},\ldots,x_{n})\in L,\ x_{1}\equiv x_{2} \equiv \ldots \equiv x_{n} \pmod{p}$. \end{definition} \begin{thm}\cite[Theorem 3.4]{Liu} Any subring $L\subset \mathbb{Z}^{n}$ of finite index can be written uniquely as a direct sum of irreducible subrings $L_{i}\subset \mathbb{Z}^{n}$. \end{thm} It is easy to see if a subring is irreducible by considering the corresponding subring matrix. \begin{prop}\cite[Proposition 3.1]{Liu} \label{prop:irreducible-corresponding-matrix} An $n\times n$ subring matrix represents an irreducible subring if and only if its first $n-1$ columns contain only entries divisible by $p$, and its final column is the identity $(1, \ldots, 1)^{T}$. \end{prop} Recall that $g_n(k)$ is the number of irreducible subrings of $\mathbb{Z}^n$ of index $k$. In Section \ref{sec3}, we compute polynomial formulas for $g_n(p^e)$ for each $e \le 8$. We recall the following recurrence due to Liu. We again emphasize that $g_n(p^e)$ is given by $g_{n-1}(p^e)$ in \cite{Liu}. Define $f_0(1) = 1$ and $f_0(p^e) = 0$ for $e> 0$. \begin{prop}\cite[Proposition 4.4]{Liu}\label{fnrecurrence} The following recurrence holds for $n > 0$: \[ f_n(p^e) = \sum_{i=0}^e \sum_{j=1}^{n} \binom{n-1}{j-1} f_{n-j}(p^{e-i}) g_j(p^i). \] \end{prop} In order to show for fixed $n$ and $e$ that $f_n(p^e)$ is a polynomial in $p$, it is enough to show that $f_j(p^k)$ is a polynomial in $p$ for each fixed $j\le n-1$ and $k \le e$, and that $g_j(p^i)$ is a polynomial in $p$ for each fixed $j \le n$ and $i \le e$. 
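This strategy is easy to exercise numerically: for small parameters one can brute-force $g_j(p^i)$ by enumerating irreducible subring matrices as characterized in Proposition \ref{prop:irreducible-corresponding-matrix}, and then assemble $f_n(p^e)$ via the recurrence of Proposition \ref{fnrecurrence}. A minimal Python sketch (function names are ours):

```python
from itertools import product
from functools import lru_cache
from math import comb

def compositions(e, parts):
    # tuples of `parts` positive integers summing to e
    if parts == 0:
        if e == 0:
            yield ()
        return
    for first in range(1, e - parts + 2):
        for rest in compositions(e - first, parts - 1):
            yield (first,) + rest

def in_span(A, b):
    # is b an integer combination of the columns of upper-triangular A?
    n = len(A)
    b = list(b)
    for j in range(n - 1, -1, -1):
        if b[j] % A[j][j]:
            return False
        x = b[j] // A[j][j]
        for r in range(j + 1):
            b[r] -= x * A[r][j]
    return True

@lru_cache(maxsize=None)
def g(n, p, e):
    # irreducible subring matrices: diagonal (p^a_1, ..., p^a_{n-1}, 1)
    # with (a_1, ..., a_{n-1}) a composition of e, last column all ones,
    # every other entry divisible by p, columns multiplicatively closed
    if n == 1:
        return int(e == 0)
    count = 0
    for alpha in compositions(e, n - 1):
        diag = [p ** a for a in alpha] + [1]
        free = [(r, j) for r in range(n - 1) for j in range(r + 1, n - 1)]
        for vals in product(*[range(0, diag[r], p) for r, j in free]):
            A = [[0] * n for _ in range(n)]
            for r in range(n):
                A[r][r] = diag[r]
                A[r][n - 1] = 1
            for (r, j), v in zip(free, vals):
                A[r][j] = v
            cols = [[A[r][j] for r in range(n)] for j in range(n)]
            count += all(
                in_span(A, [cols[i][r] * cols[j][r] for r in range(n)])
                for i in range(n) for j in range(i, n))
    return count

@lru_cache(maxsize=None)
def f(n, p, e):
    # Liu's recurrence with f_0(1) = 1 and f_0(p^e) = 0 for e > 0
    if n == 0:
        return int(e == 0)
    return sum(comb(n - 1, j - 1) * f(n - j, p, e - i) * g(j, p, i)
               for i in range(e + 1) for j in range(1, n + 1))
```

For instance, this reproduces $g_n(p^n) = (p^{n-1}-1)/(p-1)$ at small primes and recovers $f_3(p^5) = 4p + 4$, the $n = 3$ case of Theorem \ref{Main_Theorem}.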
\section{Computing $f_n(p^6)$, $f_n(p^7)$, and $f_n(p^8)$}\label{sec3} In this section we give formulas for $f_n(p^6),\ f_n(p^7)$, and $f_n(p^8)$, proving Theorem \ref{Main_Theorem}. We do this by showing for any $n$ and fixed $e\le 8$ that $g_n(p^e)$ is a polynomial in $p$ and then applying Proposition \ref{fnrecurrence}. We first recall that for fixed $e$, $ g_n(p^e) = 0$ for all but finitely many $n$. \begin{prop}\cite[Proposition 4.3]{Liu} For all $n > 0$, we have that $g_n(p^e) = 0$ for $e < n-1,\ g_{n+1}(p^n) = 1$, and $g_n(p^{n}) = \frac{p^{n-1}-1}{p-1}$. \end{prop} Proposition \ref{Liu_fn_prop} gives Liu's polynomial formulas for $f_n(p^e)$ for $e \le 4$ \cite[Proposition 1.1]{Liu}. We note that there is a slight error in Liu's computation for $e = 5$, so we have stated the corrected result as part of Theorem \ref{Main_Theorem}. We have checked that our formulas are correct by explicit calculation for small primes. Combining Propositions \ref{Liu_fn_prop} and \ref{fnrecurrence} gives polynomial formulas for $g_n(p^e)$ for all $e \le 5$ and all $n$. Although they are not written explicitly, Liu gives polynomial formulas for $g_n(p^e)$ for $n =3, 4$ \cite[Propositions 6.2 and 6.3]{Liu}. Note that $g_2(p^e) = 1$ for all $e$. Therefore, in order to give a polynomial formula for $f_n(p^6)$, we need only give a polynomial formula for $g_5(p^6)$. For $e = 6, 7, 8$ and $n$ fixed we show that $g_n(p^e)$ is given by a polynomial in $p$ by showing that the number of irreducible subrings of $\mathbb{Z}^n$ of index $p^e$ whose subring matrices have fixed diagonal entries is given by a polynomial. Recall that the last column of any irreducible subring matrix is $(1,\ldots, 1)^T$ and that every other entry of such a matrix is divisible by $p$. The set of tuples of diagonal entries of irreducible subring matrices corresponding to irreducible subrings of $\mathbb{Z}^n$ of index $p^{e}$ is in bijection with the set of compositions of $e$ of length $n-1$. 
\begin{definition} Let $\mathcal{C}_{n,e}$ denote the set of compositions of $e$ into $n-1$ parts. For a composition $\alpha$ of length $n-1$ let $g_\alpha(p)$ denote the number of irreducible subrings of $\mathbb{Z}^n$ with diagonal entries $(p^{\alpha_1}, p^{\alpha_2}, \ldots, p^{\alpha_{n-1}}, 1)$. \end{definition} \noindent It is a standard fact that $|\mathcal{C}_{n,e}| = \binom{e-1}{n-2}$. Combining these definitions gives the following. \begin{lemma} Let $n$ and $e$ be fixed positive integers. Then \[ g_n(p^e) = \sum_{\alpha \in \mathcal{C}_{n,e}} g_\alpha(p). \] \end{lemma} In order to find a polynomial formula for $g_5(p^6)$ we need only show that $g_\alpha(p)$ is given by a polynomial in $p$ for each $\alpha \in \mathcal{C}_{5,6}$. Proposition \ref{fnrecurrence} then gives a polynomial formula for $f_n(p^6)$. We need to consider some individual cases with separate arguments, but the following lemma significantly reduces this casework. \begin{lemma}\label{lem2} Let $\alpha = (1,\alpha_2,\ldots, \alpha_k)$ be a composition of a positive integer $e$ and $\alpha' = (\alpha_2,\ldots, \alpha_k)$. We have $g_\alpha(p) = g_{\alpha'}(p)$. \end{lemma} \begin{proof} An irreducible subring matrix with a $p$ in its upper left corner has its first row equal to $(p,0,\ldots, 0,1)$ since every entry $a_{1,j}$ with $j \not\in \{1,n\}$ satisfies $0 \le a_{1,j} < p$ and $a_{1,j} \equiv 0 \pmod{p}$. The conditions derived from taking products of pairs of columns are identical in both cases. \end{proof} Lemma \ref{lem2} implies that \begin{eqnarray*} g_4(p^5) & = & g_{(1,3,1,1)}(p) + g_{(1,1,3,1)}(p) +g_{(1,1,1,3)}(p) + g_{(1,2,2,1)}(p) \\ & & + g_{(1,2,1,2)}(p) + g_{(1,1,2,2)}(p). \end{eqnarray*} We can compute that each term in this sum is given by a polynomial in $p$, but do not give the details here. 
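For a fixed composition $\alpha$ and a small prime $p$, the count $g_\alpha(p)$ can be brute-forced directly, which gives a useful check on the case analysis in the rest of this section. A minimal Python sketch (function names are ours):

```python
from itertools import product
from math import comb

def compositions(e, parts):
    # tuples of `parts` positive integers summing to e
    if parts == 0:
        if e == 0:
            yield ()
        return
    for first in range(1, e - parts + 2):
        for rest in compositions(e - first, parts - 1):
            yield (first,) + rest

def in_span(A, b):
    # is b an integer combination of the columns of upper-triangular A?
    n = len(A)
    b = list(b)
    for j in range(n - 1, -1, -1):
        if b[j] % A[j][j]:
            return False
        x = b[j] // A[j][j]
        for r in range(j + 1):
            b[r] -= x * A[r][j]
    return True

def g_alpha(alpha, p):
    # irreducible subring matrices with diagonal (p^a_1,...,p^a_{n-1},1):
    # last column all ones, every other entry divisible by p and reduced
    # modulo the diagonal entry of its row, columns closed under the
    # componentwise product
    n = len(alpha) + 1
    diag = [p ** a for a in alpha] + [1]
    free = [(r, j) for r in range(n - 1) for j in range(r + 1, n - 1)]
    count = 0
    for vals in product(*[range(0, diag[r], p) for r, j in free]):
        A = [[0] * n for _ in range(n)]
        for r in range(n):
            A[r][r] = diag[r]
            A[r][n - 1] = 1
        for (r, j), v in zip(free, vals):
            A[r][j] = v
        cols = [[A[r][j] for r in range(n)] for j in range(n)]
        count += all(
            in_span(A, [cols[i][r] * cols[j][r] for r in range(n)])
            for i in range(n) for j in range(i, n))
    return count
```

For $p = 2, 3$ this confirms, for example, $g_{(1,2,1)}(p) = g_{(2,1)}(p)$ as predicted by Lemma \ref{lem2}, as well as the closed formulas for $g_{(2,2,1,1)}(p)$ and $g_{(3,2,1,1)}(p)$ proved later in this section; the number of diagonals summed for $g_n(p^e)$ is $|\mathcal{C}_{n,e}| = \binom{e-1}{n-2}$.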
Therefore, in order to verify that $g_5(p^6)$ is a polynomial in $p$ we need only check that each $g_\alpha(p)$ is given by a polynomial in $p$, where \[ \alpha \in \left\{ (3,1,1,1), (2,2,1,1),(2,1,2,1),(2,1,1,2) \right\}. \] There is a particular class of compositions for which we can explicitly compute $g_\alpha(p)$. \begin{lemma} Let $\alpha = (\beta,1,\ldots, 1)$ be a composition of length $n-1$. \begin{enumerate} \item If $\beta = 2$, then $g_{\alpha}(p) = p^{n-2}$. \item If $\beta \ge 3$, then $g_{\alpha}(p) = (n-1)p^{n-2}$. \end{enumerate} \end{lemma} \begin{proof} We count irreducible subring matrices with diagonal $(p^\beta,p,\ldots, p,1)$. Let $A$ be such a matrix with entries $a_{i,j}$ and columns $v_1,\ldots, v_n$. Recall that we may assume $v_n = (1,\ldots, 1)^T$. As in the proof of Lemma \ref{lem2}, $a_{i,j} = 0$ for all $i,j$ satisfying $1 < i < j \le n-1$. Therefore, all divisibility conditions that must be satisfied come from the first row of $A$. When $\beta = 2$ it is easy to check that if $a_{1,j} \equiv 0 \pmod{p}$ for each $1\le j \le n-1$ then $v_i \circ v_j$ is in the column span of $A$ for each pair $1\le i,j \le n-1$. This gives the first part of the lemma. For the rest of the proof, suppose $\beta \ge 3$. If $a_{1,j} = 0$ for some~$j$ then $v_i \circ v_j$ is in the column span of $A$ for all $i$. Therefore, we only get non-trivial conditions from entries $a_{1,j} \neq 0$. For each $j \in [2,n-1]$ such that $a_{1,j} \neq 0$ let $a_{1,j} = p^{\gamma_j} b_j$ where $b_j \not\equiv 0 \pmod{p}$ and $1 \le \gamma_j < \beta$. If $v_j \circ v_j$ is in the column span of $A$ then \[ p^{2\gamma_j} b_j^2 = p^{\gamma_j+1} b_j + x p^\beta \] for some $x \in \mathbb{Z}$. This gives \[ x = p^{-(\beta-\gamma_j -1)}(p^{\gamma_j-1} b_j^2 - b_j). \] If $\gamma_j \neq 1$, then $\gamma_j = \beta - 1$. There are $p-1$ values of $b_j$ satisfying $0 \le p^{\beta-1} b_j < p^{\beta}$ and $p \nmid b_j$. 
We also include the possibility $a_{1,j} = 0$, for a total of $p$ choices for which $p^{\beta -1} \mid a_{1,j}$. If $\gamma_j = 1$, we must have $b_j - 1 \equiv 0 \pmod{p^{\beta-2}}$. There are $p$ values of $b_j$ satisfying $0 \le p b_j < p^\beta$ and $p^{\beta-2} \mid (b_j -1)$. Now we show that $\gamma_j = 1$ for at most one $j \in [2,n-1]$. Suppose $\gamma_j = \gamma_k = 1$ where $j \neq k$. If $v_j \circ v_k$ is in the column span of $A$ we must have $v_j \circ v_k = x v_1$ for some $x\in \mathbb{Z}$, which implies $p^\beta \mid p^2 b_j b_k$. This contradicts the assumption that $\beta \ge 3$ and $p \nmid b_j b_k$. If $v_j$ and $v_k$ are columns with $\gamma_j = \gamma_k = \beta - 1$ then $v_j \circ v_k$ is a multiple of $v_1$. If $v_j$ is a column with $\gamma_j = 1$ and $v_k$ is a column with $\gamma_k = \beta -1$ then again $v_j \circ v_k$ is a multiple of $v_1$. We say that a sequence $(\gamma_2,\ldots, \gamma_{n-1})$ is admissible if it contains at most one $\gamma_j$ equal to $1$ and the rest of the $\gamma_i$ are equal to $\beta-1$. For any admissible sequence there are $p^{n-2}$ corresponding irreducible subring matrices. Hence, \[ g_{\alpha}(p) = p^{n-2} + (n-2) p^{n-2} = (n-1)p^{n-2}. \] \end{proof} The exact position of $\beta$ played no role in this proof, so we can compute $g_{\alpha}(p)$ for any composition $\alpha$ that has a single part not equal to $1$. This can also be seen as a consequence of Lemma~\ref{lem2}. We consider the remaining compositions individually. We give the details of the computation for $g_{(2,2,1,1)}(p)$ and note that it is easier to compute $g_{(2,1,2,1)}(p)$ and $g_{(2,1,1,2)}(p)$ since the corresponding subring matrices do not need to satisfy as many constraints. \begin{lemma}\label{uniquenesslemma} For any prime $p$, we have $g_{(2, 2, 1, 1)}(p) = p^4 + 3p^2(p-1)$. 
\end{lemma} \begin{proof} An irreducible subring matrix $A$ with diagonal $(p^2,p^2, p, p, 1)$ is of the form \[ \begin{pmatrix} p^2 & c & a_1 & a_2 & 1 \\ 0 & p^2 & b_1 & b_2 & 1 \\ 0 & 0 & p & 0 & 1 \\ 0 & 0 & 0 & p & 1 \\ 0 & 0 & 0 & 0 & 1 \\ \end{pmatrix}, \] where $0 \le a_1, a_2, b_1, b_2, c < p^2$ and $a_1, a_2, b_1, b_2, c \equiv 0 \pmod{p}$. Let $v_1, \ldots, v_5$ denote the columns of $A$. Define $a_1', a_2', b_1', b_2'$ and $c'$ by $a_i = p a_i',\ b_i = p b_i'$, and $c = p c'$. We see that $v_2 \circ v_2,\ v_2 \circ v_3$, and $v_2\circ v_4$ are in the column span of $A$ for any choice of $c'$. If $c = 0$ then every choice of $a_i'$ and $b_i'$ gives a multiplicatively closed sublattice, giving $p^4$ irreducible subring matrices. For the rest of the proof, suppose that $c \neq 0$, which implies $0 < c' < p$. We need only determine when $v_3 \circ v_4, v_3 \circ v_3$, and $v_4 \circ v_4$ lie in the column span of $A$. Taking $v_3 \circ v_4$ gives \[ \begin{pmatrix} p^2a'_1a'_2\\ p^2b'_1b'_2 \end{pmatrix}=b'_1b'_2\begin{pmatrix} pc'\\ p^2 \end{pmatrix} + x\begin{pmatrix} p^2\\ 0 \end{pmatrix}. \] Such an integer $x$ exists precisely when $p \mid b'_1b'_2 c'$, which only occurs when $b_1 b_2 = 0$. Taking $v_3 \circ v_3$ or $v_4 \circ v_4$ gives an equation of the form \[ \begin{pmatrix} a_i^2 \\ b_i^2 \\ p^2 \end{pmatrix} = p \begin{pmatrix} a_i \\ b_i \\ p \end{pmatrix} + x \begin{pmatrix} c \\ p^2 \\ 0 \end{pmatrix} + y \begin{pmatrix} p^2 \\ 0 \\ 0 \end{pmatrix}. \] We see that $xp^2 = b_i^2 - p b_i$, and therefore $x = b_i'(b_i'-1)$. We also have $a_i^2 - p a_i - xc = yp^2$, which implies $p^2 a_i' (a_i'-1) - pxc' = yp^2$. When $c \neq 0$, since $p \nmid c'$ this forces $p \mid x$. Since $0\leq b'_i \leq p-1$ we see $b'_i =0$ or $b'_i =1$, which implies $b_i\in \{0,p\}$. When $c\neq 0,\ (b_1,b_2)$ must be in $\{(0,0),(0,p), (p,0)\}$. This gives~$p^2$ choices for $(a_1,a_2)$ and $p-1$ choices for $c$. Given such a choice, $v_i \circ v_j$ is in the column span of $A$ for any $i,j$.
\end{proof} We now have polynomial formulas for $g_k(p^6)$ for $k = 2,3,\ldots, 6$, which completes the computation of $f_n(p^6)$ in Theorem \ref{Main_Theorem}. We follow a similar strategy to give formulas for $f_n(p^7)$ and $f_n(p^8)$. We show that $g_n(p^7)$ is a polynomial in $p$ for all $n \le 8$, which follows from the results of this section except when $n \in \{5,6\}$. In each of these cases we show that $g_\alpha(p)$ is a polynomial in~$p$ for all compositions $\alpha \in \mathcal{C}_{n,e}$ by following arguments of the type given above. For $e = 8$ we show that $g_n(p^8)$ is a polynomial in $p$ for all $n \le 9$ by following a similar strategy. We include details only for one representative challenging case. \begin{lemma}\label{lemma3211} For a prime $p$, we have $g_{(3, 2, 1, 1)}(p) = 7p^4 - 6p^3 + 6p^2$. \end{lemma} \begin{proof} This statement is an easy computation when $p=2$, so for the rest of the proof suppose that $p \ge 3$. An irreducible subring matrix $A$ with diagonal $(p^3,p^2, p, p, 1)$ is of the form \[ \begin{pmatrix} p^3 & c & a_1 & a_2 & 1 \\ 0 & p^2 & b_1 & b_2 & 1 \\ 0 & 0 & p & 0 & 1 \\ 0 & 0 & 0 & p & 1 \\ 0 & 0 & 0 & 0 & 1 \\ \end{pmatrix}, \] where $0 \le a_1, a_2, c < p^3,\ 0 \le b_1, b_2 < p^2$, and $p$ divides each of $a_1, a_2, b_1, b_2$, and $c$. Let $v_1,\ldots, v_5$ denote the columns of $A$. If $v_2 \circ v_2$ is in the column span of $A$ then \[ \begin{pmatrix} c^2 \\ p^4 \end{pmatrix}= p^2\begin{pmatrix} c \\ p^2 \end{pmatrix}+ \lambda \begin{pmatrix} p^3 \\ 0 \end{pmatrix} \] for some $\lambda\in\mathbb{Z}$. This implies $p^3 \mid (c^2 - p^2c)$. Since $p \mid c$ we see that $p^3 \mid c^2$, and therefore $p^2 \mid c$. Define $c'$ by $c = p^2 c'$ where $0 \le c' < p$. Also define $x,y,u$, and $v$ by $a_1 = p x,\ a_2 = p y,\ b_1 = pu$, and $b_2 = p v$.
Taking $v_3 \circ v_3$ or $v_4 \circ v_4$ and applying an argument like the one above gives \[ p^3 \mid \left(a_i^2 - pa_i - (b_i^2 - pb_i) c'\right), \] which implies \begin{eqnarray} (x^2-x) - (u^2 - u) c' \equiv & 0 & \pmod{p}\label{eq1} \\ (y^2-y) - (v^2 - v) c' \equiv & 0 & \pmod{p}\label{eq2}. \end{eqnarray} Taking $v_3 \circ v_4$ gives \begin{equation}\label{eq3} xy - uv c' \equiv 0 \pmod{p}. \end{equation} Every solution $(x,y,u,v,c')$ of these three equations gives $p^2$ irreducible subring matrices since these equations depend only on $x$ and $y$ modulo $p$, rather than their particular values. Therefore, we need only count solutions to equations \eqref{eq1}, \eqref{eq2}, and \eqref{eq3} for which $0 \le x, y < p$. If $c' = 0$, then equations \eqref{eq1} and \eqref{eq2} imply that $x,y \in \{0,1\}$. By equation \eqref{eq3} we cannot have $x=y=1$. Any choices of $u$ and $v$ now satisfy these equations. This gives $3p^2$ choices for $(x,y,u,v,c')$ and $3p^4$ irreducible subring matrices. For the rest of the proof suppose that $c' \neq 0$. We consider cases based on the values of $u$ and $x$. Equation \eqref{eq1} implies that $u \in \{0,1\}$ if and only if $x \in \{0,1\}$. \begin{claim}\label{ClaimA} Suppose that $c' \neq 0$. The following table gives the number of solutions to equations \eqref{eq1}, \eqref{eq2}, and \eqref{eq3} with specified values of $u$~and~$x$: \[ \begin{tabular}{|c | c | c |} \hline $u$ & $x$ & Number of Solutions \\ \hline $0$ & $0$ & $p^2$ \\ \hline $1$ & $0$ & $2(p-1)$ \\ \hline $0$ & $1$ & $2(p-1)$ \\ \hline $1$ & $1$ & $2(p-1)$ \\ \hline $\not\in \{0,1\}$ & $\not\in \{0,1\}$ & $3(p-2)^2$ \\ \hline \end{tabular}\ . \] \end{claim} \noindent We further subdivide the last case of this claim. \begin{claim}\label{ClaimB} Suppose that $c' \neq 0$ and $u,x \not\in\{0,1\}$. There are $(p-2)^2$ solutions with $v = 0$. When $v \neq 0$ there are $(p-1)(p-2)$ solutions with $x = u$ and $(p-2)(p-3)$ solutions with $x \neq u$.
\end{claim} Once these claims are established we count \[ 3p^4 + p^2 \left(p^2+6(p-1)+3(p-2)^2\right) = 7p^4-6p^3+6p^2 \] total irreducible subrings, completing the proof. We now prove Claim \ref{ClaimA}. \noindent \textbf{Case 1: $u=x=0$.} We need only count solutions of equation \eqref{eq2}. If $v \in \{0, 1\}$, then there are $p-1$ choices for $c'$ and $2$ solutions $y$ for every $c'$, namely $y \in \{0, 1\}$, for a total of $4(p-1)$ solutions. If $v \not\in \{0, 1\}$, this equation gives a quadratic polynomial in $y$ for each fixed $v$. The number of solutions depends on whether the discriminant $1 + 4(v^2-v)c'$ is $0$, a non-zero square modulo $p$, or a non-square. Since $v\not\in\{0,1\},\ v^2 -v \not\equiv 0\pmod{p}$, and as $c'$ varies through $\{1,\ldots, p-1\}$ this discriminant takes every value modulo $p$ except $1$ exactly once, giving $2 \left(\frac{p-1}{2}-1\right) + 1 = p-2$ solutions for each of the $p-2$ values of $v$. Adding these cases together gives $4(p-1) + (p-2)^2 = p^2$ solutions. \noindent \textbf{Case 2: $u=1,\ x=0$.} Equation \eqref{eq3} implies $uvc' \equiv 0 \pmod{p}$, and since $u$ and $c'$ are non-zero, we must have $v = 0$. Equation \eqref{eq2} implies $y \in \{0,1\}$, so accounting for the $p-1$ possible values of $c'$ gives $2(p-1)$ solutions. \noindent \textbf{Case 3: $u=0,\ x=1$.} Equation \eqref{eq3} implies $y = 0$. Equation (\ref{eq2}) gives $v \in \{0,1\}$, so accounting for the $p-1$ possible values of $c'$ gives $2(p-1)$ solutions. \noindent \textbf{Case 4: $u=1,\ x=1$.} Equation \eqref{eq3} gives $y \equiv v c' \pmod{p}$. Substituting this into equation \eqref{eq2} gives $(c'^2-c')v^2 \equiv 0 \pmod{p}$. If $c' = 1$ we have $p$ choices for $v$. If $c'\neq 1$ then $v = 0$. This gives $p + (p-2) = 2(p-1)$ solutions. For the rest of the proof suppose that $c'\neq 0$ and $x,u \not\in \{0,1\}$. We consider two further subcases. \noindent \textbf{Case 5: $v=0$.} Equation \eqref{eq3} implies $y=0$. Setting $c' = \frac{x^2-x}{u^2-u}$ for any choice of $x, u$ gives a valid solution. 
This gives $(p-2)^2$ solutions. \noindent \textbf{Case 6: $v\neq0$.} Equations \eqref{eq1} and \eqref{eq3} imply that \[ c' \equiv \frac{x(x-1)}{u(u-1)} \equiv \frac{xy}{uv} \pmod{p}. \] This implies $v \equiv \frac{y(u-1)}{x-1} \pmod{p}$. Since $v \neq 0$ by assumption, equation (\ref{eq3}) implies $y \neq 0$. Substituting this expression for $v$ into equation \eqref{eq2} and dividing by $y$ gives \begin{equation}\label{eq4} y \left(1 - c'\frac{(u-1)^2}{(x-1)^2}\right) + c'\frac{u-1}{x-1}-1 \equiv 0 \pmod{p}. \end{equation} This equation is linear in $y$ and the coefficient of $y$ is $0$ precisely when $c' \equiv \frac{(x-1)^2}{(u-1)^2} \pmod{p}$. Since $c' \equiv \frac{x(x-1)}{u(u-1)} \pmod{p}$ by equation (\ref{eq1}), this is equivalent to $\frac{x}{u} \equiv \frac{x-1}{u-1} \pmod{p}$. This implies $x \equiv u \pmod{p}$. Now suppose that $x = u$, so that $c' \equiv 1 \pmod{p}$. Equation (\ref{eq4}) is then satisfied for any of the $p-2$ possible choices for $x$ and for any choice of $y$, except that $y = 0$ implies $v = 0$ by equation \eqref{eq3}, a case we have already considered. Therefore, this case gives $(p-1)(p-2)$ solutions. When $x\neq u$, for any of the $(p-2)(p-3)$ choices of $x$ and $u$ there are unique choices of $y$ and $c'$ such that equation \eqref{eq4} is satisfied, giving $(p-2)(p-3)$ solutions. This establishes both claims and so completes the proof of Lemma \ref{lemma3211}. \end{proof} Combining these computations with Proposition \ref{fnrecurrence} completes the proof of Theorem \ref{Main_Theorem}. With such complicated expressions it is natural to be concerned about errors. We have used a computer to check our computations for $f_n(p^e)$ and $g_n(p^e)$ for all integers $n$, all $e \le 8$, and all primes $p \le 23$. We have also checked our computations for $g_{\alpha}(p)$, where $\alpha$ is a composition of an integer $e \le 8$. 
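Such checks are easy to reproduce. As one illustration (a sketch of ours, not the script actually used), the count in Lemma \ref{lemma3211} reduces to counting solutions of equations \eqref{eq1}--\eqref{eq3} over $\mathbb{F}_p$, with each solution accounting for $p^2$ irreducible subring matrices:

```python
# Sketch (ours): brute-force check of Lemma \ref{lemma3211} for small p >= 3.
# Count tuples (x, y, u, v, c') in [0, p)^5 satisfying congruences (eq1)-(eq3).

def count_solutions(p):
    """Number of (x, y, u, v, c') mod p satisfying eq1, eq2 and eq3."""
    total = 0
    for x in range(p):
        for y in range(p):
            for u in range(p):
                for v in range(p):
                    for c in range(p):  # c plays the role of c'
                        eq1 = (x * x - x) - (u * u - u) * c
                        eq2 = (y * y - y) - (v * v - v) * c
                        eq3 = x * y - u * v * c
                        if eq1 % p == 0 and eq2 % p == 0 and eq3 % p == 0:
                            total += 1
    return total

for p in (3, 5, 7):
    # g_{(3,2,1,1)}(p) = p^2 * (number of solutions) = 7p^4 - 6p^3 + 6p^2
    assert p ** 2 * count_solutions(p) == 7 * p**4 - 6 * p**3 + 6 * p**2
```

For $p = 3$ this finds $51$ solutions, matching the case analysis in the proof: $27$ solutions with $c' = 0$ and $24$ with $c' \neq 0$, hence $9 \cdot 51 = 459 = 7\cdot 3^4 - 6\cdot 3^3 + 6\cdot 3^2$ matrices.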
It would be quite technically challenging to extend these computations to $f_n(p^9)$, even with the help of a computer. \section{Lower Bounds on $g_n(p^e)$}\label{sec_lower_bounds} We now give a lower bound on $g_n(p^e)$ when $n-1 \le e \le 2(n-1)$. We do this by giving a lower bound on $g_{\alpha}(p)$ for a particular composition $\alpha$ of $e$ of length $n-1$. \begin{prop}\label{lowerbound} Let $\alpha = (2,\ldots, 2,1,\ldots, 1)$ be a composition of length $n-1$ with $r$ entries equal to $2$ and $s$ entries equal to $1$. Then $g_{\alpha}(p) \ge p^{rs}$. \end{prop} Note that $r+s = n-1$ and $2r + s = e$. Solving for $r$ and $s$ in terms of $n$ and $e$ gives $(r,s) = \left(e-(n-1), 2(n-1)-e\right)$. \begin{proof} Let $A$ be an upper triangular matrix with columns $v_1,\ldots, v_n$ where the diagonal entry of columns $v_1,\ldots, v_r$ is $p^2$, the diagonal entry of columns $v_{r+1},\ldots, v_{r+s}$ is $p$, and the final column is the multiplicative identity $(1,\ldots, 1)^T$. Suppose that every non-diagonal entry in the first $n-1$ columns of this matrix is zero except possibly in the first $r$ rows of columns $v_{r+1},\ldots, v_{r+s}$. In each of these $rs$ entries there are $p$ integers $a_{i,j}$ satisfying $0 \le a_{i,j} < p^2$ and $p \mid a_{i,j}$. This gives $p^{rs}$ total matrices. It is easy to check that each one is an irreducible subring matrix. \end{proof} The matrices described in this proof do not give all irreducible subring matrices with diagonal $(p^2,\ldots, p^2, p,\ldots, p, 1)$. For example, we saw in Lemma \ref{uniquenesslemma} that $g_{(2, 2, 1, 1)}(p) = p^4 + 3p^2(p-1)$, larger than the lower bound of $p^4$ from Proposition \ref{lowerbound}. For each $e$, we compute the value of $n$ such that the corresponding lower bound on $g_n(p^e)$ is largest. That is, for fixed $e$ we want to find the non-negative integer $n$ maximizing $(e-(n-1))(2(n-1)-e)$. As a function of a real variable $n$, this is maximized when $n = \frac{3e+4}{4}$. 
Taking $n$ equal to the nearest integer to $\frac{3e+4}{4}$, where we take either integer when $e\equiv 2 \pmod{4}$, gives the following lower bound. \begin{cor}\label{Cor1} Let $e$ be a positive integer. We have \[ \max_n g_n(p^e) \ge \begin{cases} p^{\frac{e^2}{8}} & \text{if } e \equiv 0 \pmod{4} \\ p^{\frac{1}{8}(e^2-1)} & \text{if } e \equiv 1 \pmod{4} \\ p^{\frac{1}{8}(e^2-4)} & \text{if } e \equiv 2 \pmod{4} \\ p^{\frac{1}{8}(e^2-1)} & \text{if } e \equiv 3 \pmod{4} \end{cases}. \] \end{cor} For each $e \le 8$ we can compute the value of $n$ giving the largest value of $g_n(p^e)$, and see that it is a polynomial in $p$ whose degree equals the exponent of $p$ on the right-hand side of the bound in this corollary. For example, \[ g_7(p^8) = p^8 + p^7 + 2p^6 + 23p^5 + 3p^4 + 2p^3 + 2p^2 + p + 1, \] a polynomial of degree $\frac{8^2}{8} = 8$. It is unclear whether for larger values of $e$ these lower bounds will continue to grow at a rate similar to the growth of $\max_n g_n(p^e)$. These lower bounds on $g_n(p^e)$ together with Proposition \ref{fnrecurrence} give lower bounds on $f_n(p^e)$. The lower bounds of this section are closely related to Brakenhoff's lower bounds for orders of given index in the ring of integers of a number field. \begin{prop}\cite[Lemma 5.10]{Brakenhoff}\label{BrakenhoffLower} Let $\mathcal{O}_K$ be the ring of integers of a number field $K$. Every additive subgroup $G$ of $\mathcal{O}_K$ that satisfies $\mathbb{Z} + m^2 \mathcal{O}_K \subset G \subset \mathbb{Z} + m \mathcal{O}_K$ for some integer $m$ is a subring. \end{prop} The subrings $R$ described in the proof of Proposition \ref{lowerbound} do satisfy $\mathbb{Z} + p^2 \mathbb{Z}^n \subset R \subset \mathbb{Z}+ p \mathbb{Z}^n$, where the first $\mathbb{Z}$ is interpreted as integer multiples of the multiplicative identity $(1,\ldots, 1)$. 
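Returning to the optimization behind Corollary \ref{Cor1}, the rounding argument can be double-checked numerically; the following sketch (ours, with hypothetical helper names) compares the discrete maximum of $(e-(n-1))(2(n-1)-e)$ with the closed-form exponents of the corollary:

```python
# Sketch (ours): check of the exponent optimization in Corollary \ref{Cor1}.
# With m = n - 1, the lower bound is p^(rs) where rs = (e - m)(2m - e); we
# compare its maximum over integers m with the closed forms of the corollary.

def best_exponent(e):
    """Maximum of (e - m)(2m - e) over integers 0 <= m <= e."""
    return max((e - m) * (2 * m - e) for m in range(e + 1))

def corollary_exponent(e):
    """Exponent of p in the corollary, according to the residue of e mod 4."""
    return {0: e * e, 1: e * e - 1, 2: e * e - 4, 3: e * e - 1}[e % 4] // 8

for e in range(2, 60):
    assert best_exponent(e) == corollary_exponent(e)
```

For instance $e = 8$ gives $(r,s) = (2,4)$ at $n = 7$ and exponent $rs = 8 = 8^2/8$, in agreement with the degree of $g_7(p^8)$ noted above.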
Brakenhoff gives a lower bound for the number of additive subgroups satisfying the hypothesis of Proposition \ref{BrakenhoffLower} and derives a lower bound for the number of orders of index at most $X$ in the ring of integers of a degree $n$ number field $K$. This requires an easy optimization along the lines of Corollary \ref{Cor1}. In this way, our lower bounds for $g_n(p^e)$ are analogous to the lower bounds from \cite[Theorem 5.1]{Brakenhoff}. Using results of \cite{KMTB}, improved lower bounds on $g_n(p^e)$ should lead to better lower bounds for orders of bounded index in a given number field. \section{Further Questions}\label{conjectures} \subsection{Uniformity of $\zeta_{\mathbb{Z}^n}^R(s)$ and Varieties over Finite Fields} Questions \ref{zetaznuniform} and \ref{fnpolyq} are related to how counting functions vary with $p$. Theorem \ref{GSSthm} of Grunewald, Segal, and Smith gives information on how the function $f_n(p^e)$ behaves for fixed $n$ and $p$. We recall a theorem of du Sautoy and Grunewald \cite{duSautoyGrunewald}, which implies that even if, for fixed $n$ and $e$, $f_n(p^e)$ is not a polynomial in $p$, we can still say quite a lot about how it behaves. For simplicity of notation, we state a version of this result for the subring zeta function of a ring with a multiplicative identity that is a modification of the first part of \cite[Theorem A]{Voll2}. \begin{thm}\label{dSGthm} Let $L$ be a ring of additive rank $n$ containing a multiplicative identity. Then there are smooth projective varieties $V_t,\ t\in \{1,\ldots, m\}$, defined over $\mathbb{Q}$, and rational functions $W_t(X,Y) \in \mathbb{Q}(X,Y)$ such that for almost all primes $p$ the following holds:\\ Denoting by $b_t(p)$ the number of $\mathbb{F}_p$-rational points of $\overline{V_t}$, the reduction $\mod p$ of $V_t$, we have \[ \zeta^R_{L,p}(s) = \sum_{t=1}^m b_t(p) W_t(p,p^{-s}). 
\] \end{thm} Much is known about the denominators of the rational functions of Theorem \ref{GSSthm}, but the numerators are significantly more complicated. This can lead to the appearance of interesting projective varieties in the theorem above. See the paper of du Sautoy \cite{duSautoyDenom} and Voll's survey \cite[Section 2.1]{Voll} for more information. If $\zeta_{\mathbb{Z}^n}^R(s)$ is not uniform, it would be interesting to see what kinds of varieties arise in the formulas of Theorem \ref{dSGthm}. The conditions for the columns of an $n\times n$ matrix to be multiplicatively closed define many equations in the matrix entries. For examples with $n = 4$ and $5$, see \cite[Lemmas 12 and 13]{KMTB}. Once $n$ is not too small, it is possible that the number of $\mathbb{F}_p$-points of the varieties defined by these equations does not vary in a polynomial way and that these counts filter into formulas for $f_n(p^e)$. \subsection{Coefficients of $f_n(p^e)$ and $g_n(p^e)$} For small fixed values of $n$ and $e$, the function $g_n(p^e)$ is a polynomial in $p$ with non-negative coefficients. However, this is not true for $g_5(p^8) = p^5 + 77p^4 - 13p^3 + 12p^2 + p + 1$. As far as we know, there has been no previous study of the positivity of coefficients of $g_n(p^e)$ or $f_n(p^e)$. These questions are motivated by analogous work related to Hall polynomials. \begin{definition} \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item Let $\lambda = (\lambda_1, \ldots, \lambda_k)$, where $\lambda_1 \ge \cdots \ge \lambda_k > 0$. A finite abelian $p$-group $G$ is of \emph{type $\lambda$} if \[ G \cong \mathbb{Z}/p^{\lambda_1} \mathbb{Z} \times \cdots \times \mathbb{Z}/p^{\lambda_k} \mathbb{Z}. \] \item A subgroup $H$ of $G$ is of \emph{cotype $\nu$} if $G/H$ is of type $\nu$. \item Let $g_{\mu\nu}^{\lambda}(p)$ be the number of subgroups $H$ of a $p$-group $G$ of type $\lambda$ such that $H$ has type $\mu$ and cotype $\nu$. 
\end{enumerate} \end{definition} Hall proved that $g_{\mu\nu}^{\lambda}(p)$ is a polynomial in $p$ with integer coefficients. Several other authors have studied these coefficients. For example, Butler and Hales give a characterization of types~$\lambda$ for which all of the associated Hall polynomials have non-negative coefficients \cite{ButlerHales}. Maley shows that the expansion of any $g_{\mu\nu}^{\lambda}(p)$ in terms of powers of $p-1$ has non-negative coefficients \cite{Maley}. In all cases we have computed, the same property holds for $g_n(p^e)$. This is even stronger than the observation that $g_n(1)$ is a non-negative integer. \begin{question} When $g_n(p^e)$ is expanded in terms of powers of $p-1$, are the coefficients non-negative? \end{question} Evseev has studied the substitution $p = 1$ in the form of the \emph{reduced zeta function} \cite{Evseev}. The $p \to 1$ behavior of local factors of zeta functions is closely related to the corresponding topological zeta function \cite{Rossmann2}. For a more detailed discussion of these questions with some examples, see \cite[Section 3.3]{VollRecent}. It would be interesting to undertake a more detailed study of the coefficients of $f_n(p^e)$ and $g_n(p^e)$. For much more background on Hall polynomials and connections to counting subgroups of finite abelian groups, see the books of Macdonald \cite{Macdonald} and Butler \cite{ButlerBook}. \section*{Acknowledgements} We thank the mathematics department at Yale University and the Summer Undergraduate Research at Yale (SUMRY) program for providing the opportunity to conduct this research. SUMRY is supported in part by NSF grant CAREER DMS-1149054. The second author is supported by NSA Young Investigator Grant H98230-16-10305 and an AMS-Simons Travel Grant. We thank Francisco Munoz for his active involvement throughout this project and for many helpful conversations. We thank Christopher Voll and Tobias Rossmann for many extremely helpful comments. 
We would also like to extend our gratitude to Sam Payne, Susie Kimport, and Jos\'e Gonz\'alez for helping to organize SUMRY. The second author thanks Robert Lemke-Oliver for helpful conversations. Lastly, we thank the Yale Center for Research Computing for High Performance Computing resources.
https://arxiv.org/abs/1511.01359
Measures of irrationality for hypersurfaces of large degree
We study various measures of irrationality for hypersurfaces of large degree in projective space and other varieties. These include the least degree of a rational covering of projective space, and the minimal gonality of a covering family of curves. The theme is that positivity properties of canonical bundles lead to lower bounds on these invariants. In particular, we prove that if $X$ is a very general smooth hypersurface of dimension $n$ and degree $d \ge 2n+1$, then any dominant rational mapping from $X$ to projective $n$-space must have degree at least $d-1$. We also propose a number of open problems, and we show how our methods lead to simple new proofs of results of Ran and Beheshti-Eisenbud.
\section*{Introduction} The purpose of this paper is to study various measures of irrationality for hypersurfaces of large degree in projective space and other varieties. The theme is that positivity properties of canonical bundles lead to lower bounds for these invariants. In particular, we prove the conjecture of \cite{BCD} that if $X \subseteq \mathbf{P}^{n+1}$ is a very general smooth hypersurface of dimension $n$ and degree $d \ge 3n$, then any dominant rational mapping $f : X \dashrightarrow \mathbf{P}^n$ must satisfy \[ \deg(f) \, \ge \, d-1, \] with equality if and only if $f$ is given by projection from a point of $X$.\footnote{It was actually conjectured in \cite{BCD} that this statement holds as soon as $d \ge 2n + 1$. In their Appendix \cite{Appendix} to the present paper, Bastianelli and De Poi show that our hypothesis $d \ge 3n$ can be relaxed to $d \ge 3n - 2$.} To start with some background, recall that the \textit{gonality} $\textnormal{gon}(C)$ of an irreducible complex projective curve $C$ is defined to be the least degree of a branched covering \[ C^\prime \longrightarrow \mathbf{P}^1, \]where $C^\prime$ is the normalization of $C$. Thus \[ \textnormal{gon}(C) \ = \ 1 \ \ \Longleftrightarrow \ \ C \, \approx_{\text{birat}} \, \mathbf{P}^1, \] and it is profitable in general to view $\textnormal{gon}(C)$ as measuring the failure of $C$ to be rational. Because of this there has been a certain amount of interest over the years in bounding from below the gonality of various natural classes of curves. For instance, a classical theorem of Noether states that if $C \subseteq \mathbf{P}^2$ is a smooth plane curve of degree $d \ge 3$, then \[ \textnormal{gon}(C) \ = \ d-1, \] with the relevant coverings given by projection from a point of $C$. This was generalized to complete intersection and other curves in \cite[Ex. 4.12]{LLS} and \cite{HS} by means of vector bundle techniques. 
In another direction, Abramovich \cite{Abramovich} used results of Li and Yau to obtain a linear lower bound on the gonality of modular curves, and the behavior of gonality in certain towers of coverings was studied by Hwang and To \cite{HwangTo} as a consequence of relations they established between gonality and injectivity radii; the paper \cite{BT} contains some interesting applications of these results. Several authors have proposed and studied some analogous measures of irrationality for an irreducible complex projective variety $X$ of arbitrary dimension $n$. We will be concerned here with three of these -- the \textit{degree of irrationality}, the \textit{connecting gonality}, and the \textit{covering gonality} of $X$ -- defined as follows: \begin{align*} \textnormal{irr}(X) \ &= \ \min \Big \{ \delta > 0 \ \Big | \parbox{2.2in}{\begin{center} $\exists$ degree $\delta$ rational covering \\$X \dashrightarrow \mathbf{P}^n$ \end{center}} \Big \}; \\ \\ \textnormal{conn.\,gon}(X) \ &= \ \min \Bigg \{ c > 0 \ \Bigg | \parbox{2.7in}{\begin{center} General points $x, y \in X$ can be connected by an irreducible curve $C \subseteq X$ with $\textnormal{gon}(C) = c$. \end{center}} \Bigg \}; \\ \\ \textnormal{cov.\,gon}(X) \ &= \ \min \Bigg \{ c > 0 \ \Bigg | \parbox{2.7in}{\begin{center} Given a general point $x \in X$, $\exists$ an irreducible curve $C \subseteq X$ through $x$ with $\textnormal{gon}(C) = c$. \end{center}} \Bigg \}. \end{align*} (Note that the curves $C$ computing the connecting and covering gonalities are allowed to be singular.) 
Thus \begin{align*} \textnormal{irr}(X) \, = \, 1 \ &\Longleftrightarrow \ X \text{ is rational},\\ \textnormal{conn.\,gon}(X) \, = \, 1 \ &\Longleftrightarrow \ X \text{ is rationally connected},\\ \textnormal{cov.\,gon}(X) \, = \, 1 \ &\Longleftrightarrow \ X \text{ is uniruled},\end{align*} and in general one has the inequalities \begin{equation} \label{relations.among.invariants} \textnormal{cov.\,gon}(X) \, \le \, \textnormal{conn.\,gon}(X) \, \le \, \textnormal{irr}(X). \end{equation} The integer $\textnormal{irr}(X)$ is perhaps the most natural generalization of the gonality of a curve, but $\textnormal{cov.\,gon}(X)$ often seems to be easier to control.\footnote{We introduce the connecting gonality only because it fits naturally into the picture. In fact this invariant does not enter seriously into any of our results.} The degree of irrationality was introduced by Heinzer and Moh in \cite{HM}, and Yoshihara subsequently computed it for several classes of surfaces (\cite{Y1}, \cite{Y2}, \cite{Y3}, \cite{Y4}). Lopez and Pirola \cite{LP} showed in passing that if $X \subseteq \mathbf{P}^3$ is a surface of degree $d \ge 4$, then $\textnormal{cov.\,gon}(X) = d-2$. Along similar lines, Fakhruddin established in his note \cite{F} that given any integer $c > 0$, a very general hypersurface of sufficiently large degree in any smooth variety does not contain any curves of gonality $\le c$. However as a measure of irrationality, it seems that the covering gonality was first studied systematically in \cite{Bast}, where Bastianelli computes $\textnormal{cov.\,gon}(X)$ and bounds $\textnormal{irr}(X)$ when $X = C_2$ is the symmetric square of a curve $C$. 
The present work was most directly motivated by the interesting paper \cite{BCD} in which Bastianelli, Cortini and De Poi consider the question of computing the degree of irrationality of a smooth projective hypersurface \[ X \, = \, X_d \ \subset \ \mathbf{P}^{n+1} \] of degree $d$ and dimension $n \ge 2$, generalizing the result of Noether for plane curves cited above. They show to begin with that if $d \ge n+3$ then \begin{equation} \label{BCP.Bound} d - n \ \le \ \textnormal{irr}(X) \ \le \ d - 1. \end{equation} It can happen that $\textnormal{irr}(X) < d-1$, but Bastianelli, Cortini and De Poi prove that if $X$ is a \textit{very general} surface of degree $d \ge 5$ or threefold of degree $d \ge 7$ then \[ \textnormal{irr}(X) \ = \ d-1, \] and they classify the exceptional cases in these dimensions. They conjectured that the same inequality holds for a very general hypersurface $X \subseteq \mathbf{P}^{n+1}$ of arbitrary dimension $n\ge 2$ and degree $d \ge 2n+1$, and that moreover if $d \ge 2n+2$ then any dominant rational mapping \[ f : X \dashrightarrow \mathbf{P}^n \ \ \text{with } \deg(f) = d-1 \] is given by projection from a point of $X$. Our first results concern covering gonality. \begin{theoremalpha} \label{Cov.Gon.Thm} Let $X \subseteq \mathbf{P}^{n+1}$ be a smooth hypersurface of dimension $n$ and degree $d \ge n+2$. Then \[ \textnormal{cov.\,gon}(X) \ \ge \ d - n . \] \end{theoremalpha} \noindent Observe that one recovers in particular the lower bound \eqref{BCP.Bound} of Bastianelli, Cortini and De Poi on the degree of irrationality of such hypersurfaces. In fact it suffices in the Theorem that $X$ is normal with at worst canonical singularities, and in this setting the statement is best possible for every $n \ge 2$ and $d \ge n+2$. 
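To orient the reader (this consistency check is ours, not taken from \cite{BCD}), it is worth noting that formally setting $n = 1$ in Theorem \ref{Cov.Gon.Thm} recovers the lower bound in Noether's theorem for plane curves:

```latex
% n = 1: X = C is a smooth plane curve of degree d >= 3 in P^2.  The only
% irreducible curve through a general point of C is C itself, so
% cov.gon(C) = gon(C), and the bound of Theorem A reads
\[
\textnormal{gon}(C) \ \ge \ d - n \ = \ d - 1,
\]
% which is exactly the inequality in Noether's theorem, attained by
% projection from a point of C.
```

This is consistent with the mechanism behind the proof: $K_C = \mathcal{O}_C(d-3)$ is as positive as the degree allows, and it is this positivity that forces the gonality up.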
More generally, we prove that the conclusion of the Theorem holds for any smooth projective variety $X$ with \[ K_X \ \equiv_{\text{lin}} \ B + E \] where $B$ is a $(d-n-2)$-very ample divisor on $X$ and $E$ is effective.\footnote{Recall that a divisor $B$ on a smooth projective variety $Y$ is said to be $p$-very ample if any finite subscheme $\xi \subseteq Y$ of length $(p+1)$ imposes independent conditions on $H^0(Y,B)$. If $A$ is a very ample divisor then $\mathcal{O}_Y(pA)$ is $p$-very ample, and therefore if $X \subseteq \mathbf{P}^{n+1}$ is a smooth hypersurface of degree $d$ then $K_X$ is $(d-n-2)$-very ample.} Thus we deduce \begin{corollaryalpha} Let $M$ be a smooth projective variety, and let $A$ be a very ample divisor on $M$. There is an integer $e = e(M, A)$ depending only on $M$ and $A$ with the property that if \[ X_d \, \in \ \linser{dA} \] is any smooth divisor, then \[ \textnormal{cov.\,gon}(X_d) \ \ge \ d - e. \] \end{corollaryalpha} \noindent In particular, the degree of irrationality of $X_d$ goes to infinity with $d$. (One can prove this last fact directly using the ideas of \cite{LP}, \cite{Bast}, \cite{BCD}, and \cite{GP}: see Remark \ref{Direct.Proof.Irr.Bound}.) As noted above, Fakhruddin proved in \cite{F} the closely related result that in the situation of the Corollary, there is a linear function $d(c)$ such that a very general divisor $X_d \in \linser{dA}$ actually contains no curves of gonality $\le c$ provided that $d \ge d(c)$. (Compare Proposition \ref{No.Curves.Small.Gonality} below.) Returning to smooth hypersurfaces in projective space, our second theorem proves the conjecture of Bastianelli, Cortini and De Poi under a slightly more stringent degree hypothesis. \begin{theoremalpha} \label{BCP.Conj.Thm.} Let $X \subseteq \mathbf{P}^{n+1}$ be a very general smooth hypersurface of dimension $n$ and degree $d \ge 3n$. Then \[ \textnormal{irr}(X) \, = \, d-1. 
\] Furthermore, any rational mapping \[ f : X \dashrightarrow \mathbf{P}^n \ \ \text{ with } \deg(f) = d-1 \] is given by projection from a point of $X$. \end{theoremalpha} \noindent In their appendix \cite{Appendix} to the present paper, Bastianelli and De Poi prove that one can weaken the hypothesis to $d \ge 3n -2$. The proof of Theorem \ref{Cov.Gon.Thm}, which is quite quick and elementary, occupies \S1: there we work on an arbitrary smooth variety whose canonical bundle satisfies a suitable positivity property. For Theorem \ref{BCP.Conj.Thm.}, which appears in \S 2, we start with the set-up established by Bastianelli, Cortini and De Poi in \cite{BCD}. Those authors give a very nice argument to show that if \[ f : X \dashrightarrow \mathbf{P}^n \] is a rational mapping with \[ d-n \ \le \ \deg(f) \ \le \ d-2, \] then each of the fibres of $f$ spans a line in $\mathbf{P}^{n+1}$ provided that $d \ge 2n+1$. They prove that these lines form a congruence of order one on $\mathbf{P}^{n+1}$, meaning that a general point of $\mathbf{P}^{n+1}$ lies on exactly one of the lines. The main effort in \cite{BCD} was to use results on the classification of such congruences to show that $X$ must contain a rational curve when $n = 2$ or $n = 3$, which forces $X$ to be special provided that $d \ge 2n+1$. Our observation is that in arbitrary dimension $n$, whether or not $X$ contains a rational curve one can locate on $X$ a curve of gonality $\le n$. On the other hand, drawing on computations of the first author and Voisin in \cite{Ein}, \cite{Voisin} one easily proves that a very general hypersurface of degree $d \ge 3n$ does not contain a curve of gonality $\le n$, and Theorem \ref{BCP.Conj.Thm.} follows. The common thread in these arguments is that the invariants we consider are ultimately controlled by measuring the positivity of canonical bundles. In \S3 we discuss a few open problems. 
For example, it is natural to ask for extensions of the present results to other classes of varieties. More ambitiously, we feel that it would be very interesting to know whether any of the new techniques introduced in \cite{Kollar}, \cite{Voisin2} and \cite{Totaro} to study questions of rationality have any applicability here, or whether geometric or arithmetic connections such as those established in \cite{Abramovich}, \cite{HwangTo}, \cite{Lengths} for the gonality of curves extend to a higher-dimensional setting. The reader will see that the methods of the present paper are very elementary, and several of the ideas involved are at least implicit in earlier work such as \cite{LP}, \cite{F}, \cite{Bast}, \cite{BCD} and \cite{GP}. However we have tried to pull things together in a natural way by focusing on a specific birational measure of positivity for the canonical bundle (Definition \ref{BVA}). We hope that this might help to lay the foundation for further work on what we consider to be an interesting circle of questions. We are grateful to Francesco Bastianelli, Daniel Litt, John Ottem, Ian Shipman, David Stapleton and Jason Starr for helpful discussions. We would also like to acknowledge the influence of Pietro Pirola, who with his colleagues introduced many of the ideas that implicitly play a role here. We are particularly grateful to Claire Voisin, who read through a draft of the paper and provided several valuable observations and suggestions. We are honored to dedicate this paper to J\'anos Koll\'ar on the upcoming occasion of his sixtieth birthday. Beyond guiding the direction of algebraic geometry over three decades, J\'anos has been instrumental to the work of the first two authors through his encouragement and generosity with ideas. It is a pleasure to have this opportunity to express our admiration and thanks. Concerning notation and conventions -- we work throughout over the complex numbers. 
As usual, morphisms are indicated by solid arrows, while rational mappings are dashed. We have taken the customary liberties in confounding line bundles and divisors. \numberwithin{equation}{section} \section{Covering Gonality} In this section we study the covering gonality of a projective variety $X$, and prove Theorem \ref{Cov.Gon.Thm} from the Introduction. The basic strategy is to bound $\textnormal{cov.\,gon}(X)$ in terms of the positivity of the canonical bundle $K_X$. So we start with some remarks on birational measures of positivity for line bundles. Let $X$ be an irreducible projective variety. Given an integer $p \ge 0$, recall that a line bundle $L$ on $X$ is said to be $p$-very ample if the restriction map \[ \HH{0}{X}{L} \longrightarrow \HH{0}{X}{L \otimes \mathcal{O}_{\xi}} \] is surjective for every finite subscheme $\xi \subseteq X$ of length $p+1$. In other words, one asks that every subscheme of length $p+1$ imposes independent conditions on the sections of $L$. The condition we focus on here is a birational analogue of this. \begin{definition} \label{BVA} A line bundle $L$ on $X$ \textit{satisfies property $\BVA{p}$} if there exists a proper Zariski-closed subset $Z = Z(L) \subsetneqq X$ depending on $L$ such that \begin{equation} \label{DefBVA} \HH{0}{X}{L} \longrightarrow \HH{0}{X}{L \otimes \mathcal{O}_{\xi}} \end{equation} surjects for every finite subscheme $\xi \subset X$ of length $p+1$ whose support is disjoint from $Z$. \end{definition} \noindent Thus $\BVA{0}$ is equivalent to requiring that $L$ be effective, and $\BVA{1}$ is what is often called ``birationally very ample."\footnote{Hence ``BVA."} The following remarks yield a supply of examples. \begin{example} \label{Examples.BVA} Let $X$ be an irreducible projective variety. \begin{enumerate} \item[(i).] If $L$ is a line bundle on $X$ satisfying $\BVA{p}$ and $E$ is an effective divisor on $X$, then $\mathcal{O}_X(L+E)$ satisfies $\BVA{p}$. \vskip 4pt \item[(ii).] 
Suppose that $f : X \longrightarrow Y$ is a birational morphism of irreducible projective varieties. If $L$ is a line bundle on $Y$ satisfying $\BVA{p}$, then $f^*L$ satisfies $\BVA{p}$ on $X$. \vskip4pt \item[(iii).] More generally, let $f : X \longrightarrow Y$ be a morphism which is birational onto its image, and suppose that $L$ satisfies $\BVA{p}$ on $Y$. Assume moreover that $f(X)$ is not contained in the exceptional set $Z \subseteq Y$ arising in the definition of property $\BVA{}$. Then $f^*L $ satisfies $\BVA{p}$ on $X$. \vskip 4pt \item[(iv).] Suppose that \[ f : X \longrightarrow \mathbf{P} \] is a morphism from $X$ to some projective space which is birational onto its image. Then $f^*\mathcal{O}_\mathbf{P}(p)$ satisfies $\BVA{p}$. \vskip4pt \item[(v).] Suppose that \[ X \ \subseteq \ \mathbf{P}^{n+1} \] is a normal hypersurface of degree $d\ge n+2$ with at worst canonical singularities, and let $ \mu : X^\prime \longrightarrow X$ be a resolution of singularities. Then the canonical bundle $K_{X^\prime}$ of $X^\prime$ satisfies $\BVA{d-n-2}$. \end{enumerate} \noindent Indeed, (i), (ii) and (iii) are clear from the definition, while (iv) is a consequence of (ii) and the elementary fact that $\mathcal{O}_\mathbf{P}(p)$ is $p$-very ample. For (v), it follows from the definition of canonical singularities that \[ K_{X^\prime} \ \equiv_{\text{lin}} \ (d - n -2)H + E, \] where $H$ is the pullback of the hyperplane bundle on $X$ and $E$ is effective. So the assertion follows from (i) and (iv). \qed \end{example} The relevance of this notion to questions of gonality arises from the following elementary observation. \begin{lemma} \label{Gon.Bound.Curve.Lemma} Let $C$ be a smooth projective curve of genus $g$ whose canonical bundle $K_C$ satisfies $\BVA{p}$. Then \[ \textnormal{gon}(C) \ge p+2.\] \end{lemma} \begin{proof} We may suppose $g \ge 2$. Let $A$ be a globally generated line bundle of degree $d \le g-1$ on $C$. 
Then the divisor $\xi$ of any section of $A$ fails to impose independent conditions on $\linser{K_C}$. Hence if $K_C$ satisfies $\BVA{p}$ then one must have $d \ge p+2$. \end{proof} We now turn to coverings by curves of specified gonality. Let $X$ be an irreducible projective variety. \begin{definition} \label{Def.Cov.Fam.Gon} A \textit{covering family of curves of gonality $c$} on $X$ consists of a smooth family \[ \pi : \mathcal{C} \longrightarrow T \] of irreducible projective curves parametrized by an irreducible variety $T$, together with a dominant morphism \[ f : \mathcal{C} \longrightarrow X, \] satisfying: \begin{enumerate} \item[(i).] For a general point $t \in T$, the fibre $C_t \,=_\text{def} \, \pi^{-1}(t) $ is a smooth curve with $\textnormal{gon}(C_t) = c$; and \vskip 5pt \item[(ii).] For general $t \in T$, the map $ f_t : C_t \longrightarrow X$ is birational onto its image. \end{enumerate} \end{definition} \noindent By standard arguments, the existence of such a family is equivalent to asking that $X$ contains a (possibly singular) curve of gonality $c$ passing through a general point. \begin{remark} \label{Cov.Gon.Properties} We make some remarks about the formal properties of this definition. \noindent (i). \ After replacing $T$ by a desingularization, one can suppose without loss of generality that $T$ and $\mathcal{C}$ are non-singular. \noindent (ii). \ Given a covering family as above, after restricting to a suitable subvariety of $T$ we may suppose without loss of generality that $\dim \mathcal{C} = \dim X$, so that in particular the morphism \[ f : \mathcal{C} \longrightarrow X \] is generically finite. \noindent (iii). \ Suppose that $ \pi : \mathcal{C} \longrightarrow T$, $ f : \mathcal{C} \longrightarrow X $ is a covering family with $\mathcal{C}$ and $T$ non-singular, and let $\nu : \mathcal{C}^\prime \longrightarrow \mathcal{C}$ be the blowing up of $\mathcal{C}$ along a smooth center.
Then there is a non-empty Zariski-open subset $T_0 \subseteq T$ over which the restrictions of the two maps \[ \mathcal{C}^\prime \longrightarrow T \ \ , \ \ \mathcal{C} \longrightarrow T \] coincide. (Since blowing up along a divisor has no effect, we can assume that this center has codimension $\ge 2$, and hence maps to a subset of $T$ having codimension $\ge 1$.) \noindent (iv). Let $\pi : \mathcal{C} \longrightarrow T$, $f : \mathcal{C} \longrightarrow X$ be a covering family with $\mathcal{C}$ and $T$ smooth, and let \[ \mu : X^\prime \longrightarrow X \] be a birational morphism. Then there is a non-empty Zariski-open subset $T_0\subseteq T$ so that the restriction $ \pi_0: \mathcal{C}_0 \longrightarrow T_0 $ extends to a family \[ f^\prime : \mathcal{C}_0 \longrightarrow X^\prime. \] (In fact, by a suitable sequence of blow-ups, we can construct a modification $\mathcal{C}^\prime \longrightarrow \mathcal{C}$ that admits an extension $f^\prime : \mathcal{C}^\prime \longrightarrow X^\prime$. The assertion then follows from (iii).) \end{remark} As in the Introduction, we focus on the smallest gonality of such a covering family: \begin{definition} The \textit{covering gonality} $\textnormal{cov.\,gon}(X)$ of $X$ is the least integer $c>0$ for which such a covering family exists. \end{definition} \noindent It follows from Remark \ref{Cov.Gon.Properties} (iv) that this is indeed a birational invariant. \begin{example} \textbf{(Examples of covering gonality).} \label{Cov.Gon.Exs} Here are some examples where the covering gonality can be estimated or computed. \noindent (i). \ Let $X$ be a $K3$ surface. By a theorem of Bogomolov and Mumford (\cite[p. 351]{MM}) $X$ is covered by (singular) elliptic curves. Hence $\textnormal{cov.\,gon}(X) = 2$. \noindent (ii). \ Let $X = C_2$ be the symmetric square of a smooth curve $C$ of genus $g$. Then $X$ is covered by copies of $C$ via the double covering $C \times C \longrightarrow X$. 
Bastianelli \cite{Bast} shows that these curves compute the covering gonality of $X$, ie $\textnormal{cov.\,gon}(X) = \textnormal{gon}(C)$. \noindent (iii). \ Let $X \subseteq \mathbf{P}^3$ be a smooth surface of degree $d \ge 4$, let $x \in X$ be a general point, and let $T_x \subseteq \mathbf{P}^3$ be the tangent plane to $X$ at $x$. Then \[ D_x = T_x \cap X\] is an irreducible plane curve of degree $d$ with a double point, which has gonality $d-2$. Therefore $\textnormal{cov.\,gon}(X) \le d-2$. In fact, Lopez and Pirola \cite{LP} show that this is the unique family of minimal gonality, and hence $\textnormal{cov.\,gon}(X) = d-2$. \noindent (iv). \ Suppose now that $X \subseteq \mathbf{P}^4$ is a smooth threefold of degree $d \ge 5$. A dimension count predicts that $X$ should be covered by a two-dimensional family of plane curves of degree $d$ with triple points. One can prove -- either directly or (as Jason Starr pointed out) by a degeneration -- that this is indeed the case. Hence \[ \textnormal{cov.\,gon} (X) \ \le \ d - 3, \] and the same inequality holds \textit{a fortiori} for hypersurfaces of degree $d$ and larger dimension. In general, one expects the covering gonality of hypersurfaces of given degree $d$ to decrease as their dimension grows, until eventually they become rationally connected in the Fano range. \noindent (v). \ Let $X \subseteq \mathbf{P}^{n+1}$ be a hypersurface of degree $d> n$ having an ordinary singular point $p \in X$ of multiplicity $n$: in particular, $X$ has only canonical singularities. Projection from $p$ gives rise to a rational map $X \dashrightarrow \mathbf{P}^n$ of degree $d - n$, and the inverse images of lines $\ell \subseteq \mathbf{P}^n$ then yield a covering of $X$ by curves of gonality $\le d-n$. Therefore $\textnormal{cov.\,gon}(X) \le d-n$, and it follows from Corollary \ref{Cov.Gon.Hypsfs} below that in fact $\textnormal{cov.\,gon}(X) = d-n$. 
\qed \end{example} The main theorem of this section asserts that the covering gonality of a smooth projective variety is bounded by the positivity of its canonical bundle. \begin{theorem} \label{Cov.Gon.via.BVA.Thm} Let $X$ be a smooth projective variety, and suppose that there is an integer $p \ge 0$ such that its canonical bundle $K_X$ satisfies property $\BVA{p}$. Then \[ \textnormal{cov.\,gon}(X) \ \ge \ p+2. \] \end{theorem} \begin{proof} This is very elementary. Suppose that \[ \pi : \mathcal{C} \longrightarrow T \ \ , \ \ f : \mathcal{C} \longrightarrow X \] is a covering family of curves of gonality $c$. Thanks to Remark \ref{Cov.Gon.Properties} (i) and (ii), there is no loss of generality in assuming that $\mathcal{C}$ and ${T}$ are smooth, and that $f$ is generically finite. Then \[ K_\mathcal{C} \ \equiv_{\text{lin}} \ f^* K_X \, + \, E \tag{*} \] where $E = \text{Ram}(f)$ is the ramification divisor of $f$. On the other hand, since $\pi$ is smooth one has \[ K_{C_t} \ \equiv_{\text{lin}} \ K_{\mathcal{C}} \mid C_t \tag{**} \] for every $t \in T$. Furthermore, if $t \in T$ is general, then $C_t$ meets the effective divisor $E$ properly, and its image \[ f_t(C_t) \ \subseteq \ X \] will not be contained in the exceptional set $Z(K_X) \subseteq X$ arising in Definition \ref{BVA}. Since by definition $f_t : C_t \longrightarrow X$ is birational onto its image, it follows from (*), (**) and Remark \ref{Examples.BVA} (iii) that $K_{C_t}$ satisfies property $\BVA{p}$. Hence $c \ge p+2$ thanks to Lemma \ref{Gon.Bound.Curve.Lemma}. \end{proof} \begin{corollary} \label{Cov.Gon.Hypsfs} Let $X \subseteq \mathbf{P}^{n+1}$ be a smooth hypersurface of degree $d \ge n+2$. Then \[ \textnormal{cov.\,gon}(X) \ \ge \ d-n. \] The same statement holds if $X$ is normal with only canonical singularities.
\end{corollary} \noindent Note that if we allow canonical singularities, then Example \ref{Cov.Gon.Exs} (v) shows that the statement is best possible for all $n \ge 2$ and $d \ge n+2$. When $n = 1$ we recover Noether's result that a smooth plane curve of degree $d$ has gonality $d-1$. \begin{proof}[Proof of Corollary] When $X$ is smooth, its canonical bundle $\omega_X = \mathcal{O}_X(d-n-2)$ is already $(d-n-2)$-very ample. For the second statement, we can pass to a desingularization, and then Example \ref{Examples.BVA} (v) applies. \end{proof} We observe next that a sufficiently positive divisor on any smooth variety has large covering gonality. \begin{corollary} Let $M$ be a smooth projective variety, and let $A$ be a very ample line bundle on $M$. Fix an integer $e$ such that $\linser{(e+2)A + K_M}$ is basepoint-free, and let \[ X = X_d \in \linser{dA}\] be any smooth divisor. Then \[ \textnormal{cov.\,gon}(X) \ \ge \ d - e. \] \end{corollary} \begin{proof} In fact, \[ K_{X} \ = \ \big (K_M + dA \big)\mid X \ = \ \big( (d-e-2)A + E \big) \mid X, \] where $\linser{E}$ is free. Since $A$ is very ample, $\mathcal{O}_X\big((d-e-2)A\big)$ is $(d-e-2)$-very ample, and therefore $K_{X}$ satisfies Property $\BVA{d-e-2}$. \end{proof} Finally, we say a word about the connecting gonality of an irreducible projective variety $X$. An evident modification of Definition \ref{Cov.Gon.Exs} leads to the notion of a family of curves of gonality $c$ connecting two general points of $X$, and as in the Introduction the least such gonality is defined to be $\textnormal{conn.\,gon}(X)$. Clearly \[ \textnormal{cov.\,gon}(X) \ \le \ \textnormal{conn.\,gon}(X),\] and the example of a uniruled variety which is not rationally connected shows that the inequality can be strict. Unfortunately, we do not at the moment know any useful ways of controlling this invariant. 
For example, when $X$ is the symmetric square of a curve of large genus, as in Example \ref{Cov.Gon.Exs} (ii), we suspect that $\textnormal{cov.\,gon}(X) < \textnormal{conn.\,gon}(X)$, but we do not know how to prove this. \section{Degree of Irrationality of Projective Hypersurfaces} In this section we discuss the degree of irrationality and give the proof of Theorem \ref{BCP.Conj.Thm.} from the Introduction. We start with some general remarks about the irrationality degree $\textnormal{irr}(X)$ of an irreducible complex projective variety $X$ of dimension $n$. Recall from the Introduction that this is defined to be the least degree of a dominant rational map \[ f : X \dashrightarrow \mathbf{P}^n. \] Equivalently, one can characterize $\textnormal{irr}(X)$ as the minimal degree of a field extension \[ \mathbf{C}(t_1, \ldots, t_n) \ \subseteq \ \mathbf{C}(X) \] where the $t_i \in \mathbf{C}(X)$ are algebraically independent rational functions on $X$. We refer to \cite{Y1}, \cite{Y2}, \cite{Y3}, \cite{Y4}, \cite{Bast} for some computations and estimations of $\textnormal{irr}(X)$, especially in the case of surfaces. Given a rational covering $f : X \dashrightarrow \mathbf{P}^n$, observe that the inverse images of lines $\ell \subseteq \mathbf{P}^n$ determine a family of curves of gonality $\le \deg(f)$ connecting two general points on $X$. This shows that \begin{equation}\label{Ineq.Invars.In.Section} \textnormal{cov.\,gon}(X) \ \le \ \textnormal{conn.\,gon}(X) \ \le \ \textnormal{irr}(X). \end{equation} The existence of rationally connected varieties that are not rational -- as well as many other examples -- illustrates that the gonality invariants can be strictly smaller than $\textnormal{irr}(X)$. However by combining \eqref{Ineq.Invars.In.Section} with Theorem \ref{Cov.Gon.via.BVA.Thm} we find: \begin{corollary} Let $X$ be a smooth projective variety whose canonical bundle $K_X$ satisfies Property $\BVA{p}$ for some $p \ge 0$. 
Then \[ \textnormal{irr}(X) \ \ge \ p+2. \hfill\qed\] \end{corollary} \noindent As above (Examples \ref{Examples.BVA} (v) and \ref{Cov.Gon.Exs} (v)), equality holds for the desingularization of a hypersurface of degree $d$ in $\mathbf{P}^{n+1}$ with an ordinary $n$-fold point. \begin{remark} \label{Direct.Proof.Irr.Bound} One can give a direct proof of (a strengthening of) the Corollary using results and methods of \cite{LP}, \cite{Bast}, \cite{BCD} and \cite{GP}, involving correspondences with null trace and the Cayley-Bacharach property. Specifically, consider a dominant rational map \[ f : X \dashrightarrow Y \] between two smooth projective $n$-folds. We claim: \begin{equation} \label{Null.Trace.Eqn} \text{ If $K_X$ satisfies $\BVA{p}$ and $H^0(Y,K_Y ) = 0$, then $ \deg f \ \ge \ p+2. $} \end{equation} In fact, given any rational covering $f$ one has a trace map \[ \textnormal{Tr}_f : H^0(X, K_X) \longrightarrow H^0(Y,K_Y) \] on canonical forms. For $\eta \in H^0(X, K_X)$ and a general point $y \in Y$, one can view the value of $\textnormal{Tr}_f(\eta)$ at $y$ as being computed by averaging the values of $\eta$ over the fibre $f^{-1}(y)$ of $y$. It follows that if $H^0(Y, K_Y) = 0$, then $f^{-1}(y)$ satisfies the Cayley-Bacharach property with respect to $\linser{K_X}$: any $n$-form vanishing on all but one of the points of $f^{-1}(y)$ must vanish on the remaining one. (See for instance \cite[Proposition 2.3]{BCD} or \cite[\S 3.2 -- \S 3.4]{GP}.) In particular, these points do not impose independent conditions on $H^0(X, K_X)$, and \eqref{Null.Trace.Eqn} follows. \qed \end{remark} \begin{remark} Voisin has pointed out to us that one can also prove a variant of the statement \eqref{Null.Trace.Eqn} from the previous remark.
Specifically, consider a smooth projective $n$-fold $X$ with the property that the Hodge-structure $H^n(X, \mathbf{Q})_{\text{prim}}$ is irreducible: this holds for instance for a very general hypersurface $X \subseteq \mathbf{P}^{n+1}$ of degree $ > n+2$. Suppose moreover that $K_X$ satisfies property $\BVA{p}$ with $p \ge 1$. If $Y$ is \textit{any} smooth projective variety of dimension $n$, then any rational covering \[ f : X \dashrightarrow Y \ \ \text{ with } \deg(f) < p+2 \] must actually be birational. In fact, assume to the contrary that $f$ is not birational. The mapping $f^* H^0(Y, K_Y) \longrightarrow H^0(X, K_X)$, which in any event is injective, must be surjective or zero, else it would give a non-trivial Hodge substructure of $H^n(X, \mathbf{Q})_{\text{prim}}$. The former possibility is impossible since $K_X$ satisfies $\BVA{1}$, and therefore it must be the case that $H^0(Y, K_Y) = 0$. Then the previous remark applies. (Compare \cite[Proposition 3.5.2]{GP}.) \end{remark} We now turn to the case of a smooth hypersurface \[ X \ \subseteq\ \mathbf{P}^{n+1}\] of dimension $n \ge 2$ and degree $d \ge n+2$. Projection from a point of $X$ shows that in any event \begin{equation} \label{UpperBound irrat hyps} \textnormal{irr}(X) \ \le \ d-1, \end{equation} and by \cite[Theorem 1.2]{BCD} (or Corollary \ref{Cov.Gon.Hypsfs} above) one has the lower bound \begin{equation} \label{LowerBound.irrat hyps} \textnormal{irr}(X) \ \ge \ d - n. \end{equation} \begin{example} Interestingly enough, it can actually happen that $\textnormal{irr}(X) < d-1$. For instance, suppose that $X \subseteq \mathbf{P}^3$ is a surface containing two disjoint lines $ \ell_1 ,\ell_2 \subseteq X$. Then the line joining general points $p_1 \in \ell_1, p_2 \in \ell_2$ meets $X$ at $(d-2)$ residual points, and this defines a rational mapping \[ X\, \dashrightarrow \, \ell_1 \times \ell_2 \, \approx \, \mathbf{P}^2 \] of degree $d-2$. 
There are a few other examples of a similar flavor, and it is established in \cite[Theorem 1.3]{BCD} that these are the only surfaces of degree $d \ge 5$ in $\mathbf{P}^3$ for which $\textnormal{irr}(X) = d-2$. In a similar way, if $X \subseteq \mathbf{P}^{2k+1}$ contains two disjoint $k$-planes then $\textnormal{irr}(X) \le d-2$, but apparently no examples are known of hypersurfaces of odd dimension $\ge 5$ for which equality fails in \eqref{UpperBound irrat hyps}. (See \cite[4.13, 4.14]{BCD}.) \qed \end{example} Our goal in the rest of this section is to prove two results, which together will establish Theorem \ref{BCP.Conj.Thm.} from the Introduction. \begin{proposition} \label{Existence.Curves.Small.Gonality} Let $X \subseteq \mathbf{P}^{n+1}$ be a smooth hypersurface of degree $d \ge 2n +1$, and suppose that \[ f : X \dashrightarrow \mathbf{P}^n \] is a rational covering of degree $ \delta \le d-1$. If $f$ is not given by projection from a point of $X$, then there exists a $($possibly singular$)$ irreducible curve \[ C \subseteq X \ \ \text{with} \ \ \textnormal{gon}(C) \le d - \delta . \] \end{proposition} \begin{proposition} \label{No.Curves.Small.Gonality} Let $X \subseteq \mathbf{P}^{n+1}$ be a very general hypersurface of degree $d \ge 2n$. Then any irreducible curve $C \subseteq X$ satisfies \[ \textnormal{gon}(C) \ \ge \ d-2n +1. \] \end{proposition} Observe that Theorem \ref{BCP.Conj.Thm.} follows immediately from these two statements. In fact, suppose that $f : X \dashrightarrow \mathbf{P}^n$ is a covering of degree $\le d-1$. Thanks to the lower bound \eqref{LowerBound.irrat hyps}, Proposition \ref{Existence.Curves.Small.Gonality} guarantees that $X$ contains a curve of gonality $\le n$ unless $f$ is projection from a point. 
On the other hand, Proposition \ref{No.Curves.Small.Gonality} shows that a very general hypersurface of degree $d \ge 3n$ contains no such curve.\footnote{In their appendix \cite{Appendix} to the present paper, Bastianelli and De Poi show that in the situation of Proposition \ref{Existence.Curves.Small.Gonality}, $X$ actually contains a curve of gonality $\le 3n - 2$.} In preparation for the proof of Proposition \ref{Existence.Curves.Small.Gonality}, we start by summarizing some of the constructions and results of \cite{BCD}, upon which we will draw. The rational mapping $f : X \dashrightarrow \mathbf{P}^n$ is given by a correspondence \[ Z \ \subseteq \ X \times \mathbf{P}^n, \] and for any $y \in \mathbf{P}^n$ we can view the fibre $Z_y$ -- which in general consists of $\delta$ distinct points of $X$ -- as a subset of $\mathbf{P}^{n+1}$. Bastianelli, Cortini and De Poi prove two key facts: \parbox{5in}{ (i). For general $y \in \mathbf{P}^n$ the fibre $Z_y \subseteq \mathbf{P}^{n+1}$ lies on a line \[ \ell_y \subseteq \mathbf{P}^{n+1}.\] (ii). A general point of $\mathbf{P}^{n+1}$ lies on exactly one of these lines. } \noindent These are established in \cite[Theorem 2.5, Lemma 4.1]{BCD} using the ideas involving correspondences with null trace recalled in Remark \ref{Direct.Proof.Irr.Bound} above; (i) is where the assumption $d \ge 2n+1$ comes into the picture. In classical language, the $\{ \ell_y\}$ form a \textit{congruence} of lines, ie a family of lines parametrized by an irreducible $n$-dimensional subvariety \[ B_0 \ \subseteq \ \mathbf{G} \, = \, \mathbf{G}(\mathbf{P}^1, \mathbf{P}^{n+1}) \] of the Grassmannian of lines in $\mathbf{P}^{n+1}$.
Statement (ii) asserts that the congruence has $\textit{order one}$: if $W_0 \subseteq B_0 \times \mathbf{P}^{n+1} $ is the restriction to $B_0$ of the tautological point-line correspondence in $\mathbf{G} \times \mathbf{P}^{n+1}$, this means that the projection \[ \mu_0: W_0 \longrightarrow \mathbf{P}^{n+1}\] is birational, and it implies that $B_0$ is rational.\footnote{If one fixes a general hyperplane $H \subseteq \mathbf{P}^{n+1}$, then almost every point of $H$ lies on a unique line of the congruence, establishing a birational isomorphism $H\approx B_0$.} Replacing $B_0$ by a desingularization $B \longrightarrow B_0$, we arrive at the basic diagram: \begin{equation} \label{basic.diagram} \begin{gathered} \xymatrix @C = 2.5pc@R=.55pc { X &X^\prime \ar[l]_{\mu^\prime}\\ \rotatebox{90}{$\supseteq$} &\rotatebox{90}{$\supseteq$}\\ \mathbf{P}^{n+1} & W \ar[l]^\mu \ar[ddd]_\pi ^{\mathbf{P}^1-\text{bundle}}\\ \\ \\ &B \ar[r] &\mathbf{G} } \end{gathered} \end{equation} Here $B$ is a smooth rational $n$-fold mapping birationally to its image in the Grassmannian $\mathbf{G}$, and $\pi : W \longrightarrow B$ is the pull-back to $B$ of the tautological $\mathbf{P}^1$-bundle on $\mathbf{G}$. The mapping $\mu : W \longrightarrow \mathbf{P}^{n+1}$ is birational, and we define $ X^\prime \subseteq W$ to be the proper transform of $X$ in $W$. Thus $X^\prime$ is a reduced and irreducible divisor in $W$ of relative degree $\delta$ over $B$, and $X^\prime \longrightarrow B$ is a generically finite morphism of degree $\delta$ that represents birationally the original mapping $f : X \dashrightarrow \mathbf{P}^n$. We now give the: \begin{proof}[Proof of Proposition \ref{Existence.Curves.Small.Gonality}] Let \[ X^* \ = \ \mu^{*}(X) \ \subseteq \ W \] be the full pre-image of $X$ in $W$, so that $X^*$ is a (possibly non-reduced) divisor in $W$ of relative degree $d$ over $B$. We can write $ X^*= X^\prime + F$, where $F$ is a divisor of relative degree $d - \delta \ge 1$ over $B$. 
Now fix any irreducible component $Y \subseteq F$ that dominates $B$, and view $Y$ as a reduced irreducible variety of dimension $n$. Thus $Y$ sits in a diagram \[ \xymatrix @C = 2.5pc@R=.55pc { X &Y \ar[l]\\ \rotatebox{90}{$\supseteq$} &\rotatebox{90}{$\supseteq$}\\ \mathbf{P}^{n+1} & W \ar[l]^\mu \ar[r]_\pi &B,} \] and we have \[ 0 \ < \ \deg( Y \longrightarrow B ) \ \le \ d - \delta. \] Suppose first that $\mu(Y)$ consists of a single point $p \in X$. This means that all the lines in the congruence pass through $p$, and hence $f$ must be projection from $p$. Therefore we may assume that \[ \dim \mu(Y) \ge 1. \tag{*} \] Now since $B$ is rational, we can choose a rational curve $\Gamma \subseteq B$ joining two general points of $B$, and by Bertini we may as well suppose moreover that \[ D \ = \ Y \times_B \Gamma \] is irreducible. Viewing $D$ as a reduced curve, one has \[ \deg ( D \longrightarrow \Gamma) \ \le \ d - \delta, \] so $\textnormal{gon}(D) \le d - \delta$. Moreover thanks to (*) we can suppose that \[ C\ =_{\text{def}} \mu (D)\ \subseteq \ X \] also has dimension $1$. It is conceivable that $D \longrightarrow C$ is not birational, but it is elementary and well-known that gonality does not increase in coverings of curves. Therefore $\textnormal{gon}(C) \le d - \delta$, as required. 
\end{proof} \begin{remark} \textbf{(Fundamental locus of a correspondence of order one).} Consider an arbitrary order-one congruence $\Gamma$ of lines in $\mathbf{P}^{n+1}$, given by a diagram as in \eqref{basic.diagram} \[ \xymatrix{&W \ar[dl]_\mu \ar[dr]^\pi\\ \mathbf{P}^{n+1} & &B.} \] As explained for example in \cite{Arrondo}, a computation of canonical bundles shows that the exceptional divisor $E \subseteq W$ of $\mu$ has relative degree $n$ over $B$; its image \[\mu(E) \, = \, \textnormal{Fund}(\Gamma) \, \subseteq \, \mathbf{P}^{n+1} \] is called the fundamental locus of $\Gamma$, and it consists of those points of $\mathbf{P}^{n+1}$ through which infinitely many lines of the congruence pass. Assume that $\Gamma$ is not the star of lines through a fixed point of $\mathbf{P}^{n+1}$. Then the fundamental locus has dimension $\ge 1$, and arguing as above one sees that $ \textnormal{Fund}(\Gamma)$ contains a curve of gonality $\le n$. In their appendix \cite{Appendix} to the present paper, Bastianelli and De Poi considerably strengthen this observation, while in \cite{BCD} Bastianelli, Cortini and De Poi raise the interesting question whether in fact the fundamental locus always contains a rational curve. \qed \end{remark} Finally, we show that Proposition \ref{No.Curves.Small.Gonality} follows immediately from computations of \cite{Ein} and \cite{Voisin}. This is essentially the same argument that appears in \cite{F}. \begin{proof}[Proof of Proposition \ref{No.Curves.Small.Gonality}] Let $S = \HH{0}{\mathbf{P}^{n+1}}{\mathcal{O}_\mathbf{P}(d)}$ be the vector space of all hypersurfaces of degree $d$, which we view as an affine variety. Let \[ \mathcal{X} \subseteq S \times \mathbf{P}^{n+1}\] be the universal hypersurface of degree $d$. Denote by \[ {pr}_1: \mathcal{X} \longrightarrow S \ \ , \ \ {pr}_2: \mathcal{X} \longrightarrow \mathbf{P}^{n+1} \] the two projections, and write $s = \dim S$.
Suppose now that a very general hypersurface of degree $d$ contains a curve of gonality $c$. Then by a standard argument there exists a commutative diagram: \[ \xymatrix{\mathcal{C} \ar[r]^f \ar[d]_\pi & \mathcal{X} \ar[d]^{{pr}_1}\\ T\ar[r]_\rho & S, } \] where $\pi : \mathcal{C} \longrightarrow T$ is a family of curves of gonality $c$, $\rho$ is \'etale, and $f_t : C_t \longrightarrow X_{\rho(t)}$ is birational onto its image. In this setting, Ein and Voisin prove that if $t \in T$ is a general point, then \[ \Omega^{s+1}_{\mathcal{C}} \otimes \Big(({pr}_2 \circ f)^* \mathcal{O}_{\mathbf{P}^{n+1}}(2n + 1 - d)\Big) \Big| C_t\] is generically generated by its global sections (cf \cite[Theorem 1.4]{Voisin}), where $C_t = \pi^{-1}(t)$ is the fibre of $\pi$. This implies that \[ K_{C_t} \ \equiv_{\text{lin}} \ (d-2n -1) H_{C_t} \, + \, ( \text{effective}), \] where $H_{C_t}$ is the pull-back of the hyperplane bundle from $\mathbf{P}^{n+1}$. Thus $K_{C_t}$ satisfies property $\BVA{d - 2n -1}$, and Lemma \ref{Gon.Bound.Curve.Lemma} applies to show that $c \ge d -2n +1$. \end{proof} \section{Open Problems} In this final brief section, we mention a few open questions concerning this circle of ideas. To begin with, it would be interesting to compute -- or at least estimate -- the degree of irrationality for various natural classes of examples. This seems non-trivial already in the case of surfaces. For example, suppose that $X_d$ is a very general polarized $K3$ surface of degree $d$ (ie $X_d$ carries an ample line bundle $L_d$ with $ \int c_1(L_d)^2 = 2d-2$). Is it true that \[ \lim_{d \to \infty} \textnormal{irr} (X_d) \ = \ \infty? \] Note that here $\textnormal{cov.\,gon}(X_d)$ is independent of $d$ (Example \ref{Cov.Gon.Exs} (i)). One can ask a similar question for abelian surfaces with a polarization of type $(1, d)$. Yoshihara \cite{Y4}, \cite{Y2} has some partial results in this direction, but the overall picture is far from clear.
Bastianelli \cite[Remark 6.7]{Bast} proposes a rather clean conjecture about what one might expect when $X$ is the symmetric product of a curve. In a similar vein, one would like to know the birational positivity (in the sense of Definition \ref{BVA}) of the canonical bundles of various specific varieties. For instance, suppose that $X = C_k$ is the $k^{\text{th}}$ symmetric product of a curve of genus $g$ and gonality $c$. Is it true (at least if $k \ll g$) that $K_X$ satisfies $\BVA{c-2}$? When $k = 2$ this would ``explain'' \cite[Theorem 1.6]{Bast}, and in any event is very close to the arguments in that paper. A very natural extension of Theorem \ref{BCP.Conj.Thm.} would be to compute -- or at least bound realistically -- the irrationality invariants for a very general complete intersection $X \subseteq \mathbf{P}^{n+e}$ of $e$ hypersurfaces of given degrees. The results of the previous sections yield some statements, but they are additive in the degrees of the defining equations, whereas one might expect bounds that are closer to multiplicative, as in \cite[4.12]{LLS} for curves. As noted at the end of \S 1, we unfortunately have very little to say about the connecting gonality $\textnormal{conn.\,gon}(X)$ of an irreducible projective variety $X$. It would be interesting to develop techniques that would yield bounds on this invariant. While many examples suggest that it is quite common for $\textnormal{cov.\,gon}(X) \ll \textnormal{irr}(X)$, it is not clear at the moment to what extent $\textnormal{cov.\,gon}(X)$ and $\textnormal{conn.\,gon}(X)$ can diverge. In their paper \cite{HM}, Heinzer and Moh point out that it is interesting to consider not only $\textnormal{irr}(X)$, but all the possible degrees of a rational mapping \[ f : X \dashrightarrow \mathbf{P}^n. \] When $n = \dim X = 1$ they remark that this is an additive semigroup, but in higher dimensions not much seems to be known about what integers can occur.
One can ask the analogous question for the gonalities of covering or connecting families of curves. In a more speculative direction, a number of new techniques have been introduced to study questions of rationality, such as Koll\'ar's passage to characteristic $p > 0$ \cite{Kollar}, the Chow-theoretic ideas used by Voisin \cite{Voisin2}, and the combination of these by Totaro \cite{Totaro}. It would be very interesting if ideas along these lines could be used to say something about measures of irrationality. Similarly, the papers \cite{Lengths} and \cite{HwangTo} show that the gonality of a curve $C$ influences various Riemannian and K\"ahler invariants of varieties associated to $C$. Are there any analogous statements in higher dimensions?
https://arxiv.org/abs/1912.05775
On the Locating Chromatic Number of Trees
Some coloring algorithms give an upper bound for the locating chromatic number of trees, where all the vertices not in an end-path are colored by only two colors. This means that a better coloring algorithm could be achieved by optimizing the number of colors used in the end-paths. We provide an estimation of the locating chromatic number of trees using the locating chromatic number of their end-palms. We also study the locating chromatic number of palms, which are subdivisions of stars. Moreover, we prove that $\chi_L(S_n(k))=\Theta(n^{1/k})$; $\chi_L(S_n(3))=(1+o(1))\sqrt[3]{4n}$; and $\chi_L(O_n)=\left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$.
\section{Introduction} Let $G=(V,E)$ be a simple connected graph. For any $u \in V$ and $S \subseteq V$, the distance from vertex $u$ to $S$ is defined by $d(u,S)=\min\{d(u,v) \mid v\in S\}$. A set of vertices $S$ {\em resolves} two vertices $u$ and $v$ if $d(u,S)\ne d(v,S)$. Let $c:V\to\{1,2,\cdots, k\}$ be a $k$-coloring of $G$ and $c^{-1}(i)=\{v\in V \mid c(v)=i\}$. A coloring $c$ is called a locating $k$-coloring (or simply a locating coloring) if for every two vertices, there exists a color class $c^{-1}(i)$ that resolves them. The \textit{locating chromatic number} of $G$, denoted by $\chi_L(G)$, is the smallest integer $k$ such that $G$ has a locating $k$-coloring. The \textit{color code} of a vertex $v$ with respect to $c$ is given by $r_c(v)=\left(d(v,c^{-1}(1)),d(v,c^{-1}(2)),\cdots,d(v,c^{-1}(k))\right)$. The locating chromatic number is also called the metric chromatic number in \cite{MCN09}. There have been some coloring algorithms that give an upper bound for the locating chromatic number of trees, see \cite{ASB2019,DIDP,CHR02,DAM19}. In almost all of these algorithms (\cite{DIDP}, \cite{CHR02}, and \cite{DAM19}), all the vertices that are not in an end-path (a path joining a leaf to its nearest branch) are colored by using only two colors. One graph in particular whose locating chromatic number is far from all the known upper bounds is the olive tree $O_n$. From the algorithms in \cite{CHR02} and \cite{DAM19}, we have $\chi_L(O_n)\leq n+1$; and from \cite{DIDP}, we have $\chi_L(O_n)\leq \lceil\sqrt{n}\rceil+1$. The exact value of $\chi_L(O_n)$ is $\left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$, as stated in Theorem \ref{xlon}. One of the reasons the bounds produced by the known algorithms are still relatively far from the exact value is that those algorithms do not optimize the colors used in the end-palms (the union of all end-paths from a branch).
Hence, a better coloring algorithm could be achieved by optimizing the number of colors used in the palms. This motivates us to study the locating chromatic number of palms, that is, subdivisions of stars. We believe that determining the locating chromatic number of all subdivisions of stars will be a major step toward determining the locating chromatic number of all trees. In general, we will use the terminology in \cite{Book17}. A \textit{palm} $S_n(a_1,a_2,\cdots,a_n)$, for $n\geq2$, is the graph obtained from a star $S_{n}$ on $n+1$ vertices by subdividing the $i^{th}$ edge of $S_{n}$ exactly $a_i-1$ times. Formally, define the vertex set and edge set of $S_n(a_1,a_2,\cdots,a_n)$ as $V=\{a_{0}\}\cup\{a_{i,j}\mid 1\leq i\leq n, 1\leq j\leq a_i\}$ and $E=\{a_{0}a_{i,1}\mid 1\leq i \leq n\}\cup\{a_{i,j}a_{i,j+1}\mid 1\leq i\leq n, 1\leq j\leq a_i-1\}$. The $k^{th}$ {\em level} is the set of vertices at distance $k$ from the {\em hub} vertex $a_0$, and the $k^{th}$ {\em end-path} is the subgraph induced by $\{a_0\}\cup\{a_{k,j}:1\leq j\leq a_k\}$. An \textit{olive} tree is defined as $O_n:=S_n(1,2,\cdots,n)$; Figure \ref{O5} shows an example. When all the end-paths of a palm have the same length, we call it a {\em regular palm}, denoted by $S_n(k):=S_n(k,k,\cdots,k)$.
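The definitions above can be checked mechanically. The following Python sketch (the graph representation and helper names are ours, not from the paper) builds a palm $S_n(a_1,\cdots,a_n)$ and tests whether a given coloring is locating by comparing color codes.

```python
def palm_edges(lengths):
    """Edge list of the palm S_n(a_1, ..., a_n): a star with n edges,
    where the i-th edge is subdivided a_i - 1 times.  Vertices are the
    hub 'a0' and pairs (i, j) for the j-th vertex on the i-th end-path."""
    edges = []
    for i, a in enumerate(lengths, start=1):
        edges.append(("a0", (i, 1)))
        for j in range(1, a):
            edges.append(((i, j), (i, j + 1)))
    return edges


def bfs_distances(edges, source):
    """Distances from `source` to every vertex (plain BFS)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    dist, frontier = {source: 0}, [source]
    while frontier:
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    nxt.append(w)
        frontier = nxt
    return dist


def is_locating(edges, coloring):
    """Check the defining property: a coloring c is locating iff the
    color codes r_c(v) = (d(v, c^{-1}(1)), ..., d(v, c^{-1}(k)))
    are pairwise distinct."""
    colors = sorted(set(coloring.values()))
    dist = {v: bfs_distances(edges, v) for v in coloring}
    codes = [
        tuple(min(dist[v][u] for u in coloring if coloring[u] == i)
              for i in colors)
        for v in coloring
    ]
    return len(set(codes)) == len(codes)
```

For instance, coloring the path $S_2(1,1)$ with hub color $1$ and both leaves color $2$ fails this check, since the two leaves receive identical color codes.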
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=0.8] \begin{footnotesize} \draw[color=blue] (1,4)--(2,4) (0,3)--(3,3) (1,2)--(4,2) (1,1)--(5,1); \draw[color=blue] (1,4)--(0,3)--(1,2) (1,5)--(0,3)--(1,1); \draw[color=black,fill=blue!40] (0,3)circle(0.1) node[left]{$a_{0}$}; \draw[color=black,fill=blue!40] (1,5)circle(0.1) node[above right]{$a_{1,1}$}; \draw[color=black,fill=blue!40] (1,4)circle(0.1) node[above right]{$a_{2,1}$}; \draw[color=black,fill=blue!40] (1,3)circle(0.1) node[above right]{$a_{3,1}$}; \draw[color=black,fill=blue!40] (1,2)circle(0.1) node[above right]{$a_{4,1}$}; \draw[color=black,fill=blue!40] (1,1)circle(0.1) node[above right]{$a_{5,1}$}; \draw[color=black,fill=blue!40] (2,4)circle(0.1) node[above right]{$a_{2,2}$}; \draw[color=black,fill=blue!40] (2,3)circle(0.1) node[above right]{$a_{3,2}$}; \draw[color=black,fill=blue!40] (2,2)circle(0.1) node[above right]{$a_{4,2}$}; \draw[color=black,fill=blue!40] (2,1)circle(0.1) node[above right]{$a_{5,2}$}; \draw[color=black,fill=blue!40] (3,3)circle(0.1) node[above right]{$a_{3,3}$}; \draw[color=black,fill=blue!40] (3,2)circle(0.1) node[above right]{$a_{4,3}$}; \draw[color=black,fill=blue!40] (3,1)circle(0.1) node[above right]{$a_{5,3}$}; \draw[color=black,fill=blue!40] (4,2)circle(0.1) node[above right]{$a_{4,4}$}; \draw[color=black,fill=blue!40] (4,1)circle(0.1) node[above right]{$a_{5,4}$}; \draw[color=black,fill=blue!40] (5,1)circle(0.1) node[above right]{$a_{5,5}$}; \draw (-1,5.7)rectangle(6,0.5); \end{footnotesize} \end{tikzpicture} \caption{Graph $O_5=S_5(1,2,3,4,5)$} \label{O5} \end{center} \end{figure} In the second section, we provide an estimation of the locating chromatic number of trees using the locating chromatic number of its end-palms. In the third section we study the relation between the locating chromatic number of a graph and its maximum degree. 
The fourth section discusses tight upper and lower bounds for the locating chromatic number of palms; we also prove that for every integer $k$ between the bounds, there is a palm whose locating chromatic number equals $k$. In the last section, we take an asymptotic approach to the locating chromatic number of regular palms and prove that $\chi_L(S_n(k,k,\cdots,k))=\Theta(n^{1/k})$. This leads to the observation that $\chi_L(S_n(k,k,\cdots,k))$ is decreasing and tends to $\left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$ as a function of $k$, but is increasing and unbounded as a function of $n$. \section{Locating chromatic number of trees} In this section, we provide an algorithm that produces a locating coloring of any tree by utilizing locating colorings of its end-palms. The algorithm requires that we know a locating coloring of each end-palm, which we study in the next sections. We also compare our algorithm with the algorithm given in \cite{DAM19} and with a bound obtained by combining results from \cite{CHR02} and \cite{SLT75}. \subsection{Coloring algorithm} Recall the notion of an end-path of a tree, that is, a path from a leaf to its nearest branch (a vertex of degree more than two). We call a branch an end-branch if it has at least two end-paths. Lastly, an end-palm is an end-branch together with all of its end-paths. The following algorithm gives a locating coloring of a tree, provided we know a locating coloring of each of its end-palms. \vspace{16pt} \hrule \vspace{3pt} \centerline{Algorithm 1.
Locating coloring of any tree.} \vspace{3pt} \hrule \vspace{3pt} \noindent\textbf{Input :} Tree $T$, locating colorings of all of its end-palms\\ \textbf{Output :} $c$, a coloring of $T$ \baselineskip 10pt \begin{enumerate} \item Fix a vertex $w$ and for every vertex $u$, set $c(u)= d(u,w) \pmod{2}$ \item For every vertex $u$, set $c(u)=c(u)+1$ \item $m\leftarrow 0$ \item For every end-palm of $T$, do: \item \qquad Let $c'$ be a locating coloring of this end-palm \item \qquad Let $v$ be the branch vertex of the palm, and $v_1$ a neighbor of $v$ in the palm \item \qquad $x\leftarrow c(v)$ \item \qquad $y\leftarrow 3-x$ \item \qquad Permute the colors in $c'$ so that $v$ is colored $x$ and $v_1$ is colored $y$ \item \qquad For every vertex $u$ in this palm, do: \item \qquad \qquad If $c'(u)\leq 2$: \item \qquad \qquad \qquad $c(u)\leftarrow c'(u)$ \item \qquad \qquad Else: \item \qquad \qquad \qquad $c(u)\leftarrow m+ c'(u)$ \item \qquad $m\leftarrow m-2+\max\{c'(u): u\ \text{in this palm}\}$ \end{enumerate} \hrule \baselineskip 16pt \begin{theorem}\label{xlT} Let $T$ be a tree with $b$ end-palms $P_1,P_2,\cdots,P_b$; then $$\chi_L(T)\leq 2-2b+\sum_{i=1}^b\chi_L(P_i).$$ \end{theorem} To prove Theorem \ref{xlT}, we need the following lemma. \begin{lemma}\cite{DIDP}\label{bridge} Let $G$ be a graph and let $xy$ be a bridge of $G$. Let $G_x$ and $G_y$ be the components of $G-xy$ containing $x$ and $y$, respectively. Let $c$ be a coloring of $G$. If there exist $i$ and $j$ such that $c^{-1}(i)\subseteq V(G_x)$ and $c^{-1}(j)\subseteq V(G_y)$, then for any two vertices $u \in V(G_x)$ and $v \in V(G_y)$, their color codes are different. \end{lemma} \begin{proof}[\bf\em Proof of Theorem \ref{xlT}] Color $T$ using Algorithm $1$. Any two vertices in the same end-palm of $T$ are distinguished by the locating coloring of that palm, and any other two vertices are distinguished by Lemma \ref{bridge}.
\end{proof} Theorem \ref{xlT} highlights the importance of studying the locating chromatic number of palms, which we do in the next sections. \subsection{Algorithm comparison} Let $T$ be a tree and $dim(T)$ its metric dimension, and suppose $T$ has $l$ leaves and $\beta$ branches with at least one end-path. One upper bound for the locating chromatic number of trees is obtained by combining the result in \cite{SLT75}, \begin{align} \chi_L(T)\leq dim(T) + \chi(T), \end{align} with the result in \cite{CHR02}, \begin{align} dim(T)=l-\beta, \end{align} to get the following theorem. \begin{theorem}\cite{SLT75,CHR02}\label{l-beta} Let $T$ be a tree with $l$ leaves and $\beta$ branches with at least one end-path; then $\chi_L(T)\leq l-\beta+2$. \end{theorem} Another upper bound is given in the following theorem. \begin{theorem}\label{l-b}\cite{DAM19} Let $T$ be a tree having $l$ leaves and $b$ branches with at least two end-paths; then $\chi_L(T)\leq l-b+2$. \end{theorem} The upper bound for the locating chromatic number of trees in Theorem \ref{xlT} is better than the upper bounds in Theorems \ref{l-beta} and \ref{l-b}. \begin{theorem} Let $T$ be a tree having $l$ leaves, $\beta$ branches with at least one end-path, and $b$ branches with at least two end-paths. If $P_1,P_2,\cdots,P_b$ are the end-palms of $T$, then $$\chi_L(T)\leq2-2b+\sum_{i=1}^b\chi_L(P_i)\leq l-\beta+2\leq l-b+2.$$ \end{theorem} \begin{proof} We will only prove the second inequality. Let $l_i$ be the number of leaves in $P_i$. There are $\beta-b$ end-branches with exactly one end-path, so $l=\beta-b+\sum_{i=1}^bl_i$. By Theorem \ref{batasxl}, $\chi_L(P_i)\leq l_i+1$, and the result follows. \end{proof} \section{Maximum degree} In this section, we study the maximum degree of graphs with a given locating chromatic number. The relation between the maximum degree of a graph and its metric dimension has been used to characterize infinite graphs with finite metric dimension; see \cite{DAM12}.
The maximum degree of graphs with a given locating chromatic number is also needed to characterize infinite graphs with finite locating chromatic number. In particular, we show that any graph with locating chromatic number $k\geq3$ must have maximum degree at most $4\cdot3^{k-3}$. \begin{theorem} \label{Delta xl} If $G$ is a graph with $\chi_L(G)=k\geq3$, then $\Delta(G)\leq 4\cdot3^{k-3}$. \end{theorem} \begin{proof} Let $G$ be a graph with $\chi_L(G)=k\geq3$ and let $c:V(G)\to\{1,2,\cdots,k\}$ be a locating coloring of $G$. Consider the color code of a vertex $v$, i.e., $r_c(v)=(a_1,a_2,\cdots,a_k)$. Without loss of generality, by permuting the colors, we may assume that $c(v)=1$, so that $a_1=0$ and $0<a_2\leq\cdots\leq a_k$. Let $u$ be a neighbor of $v$, and let $r_c(u)=(b_1,b_2,\cdots,b_k)$. Then $|a_i-b_i|\leq 1$ for all $i$ by the triangle inequality, and so $b_i \in \{a_i-1,a_i,a_i+1\}$ for all $i$. Now we prove that $d(v)\leq 4\cdot3^{k-3}$ for any vertex $v$. Suppose to the contrary that $d(v)\geq 4\cdot3^{k-3}+1$. First, group the neighbors of $v$ according to their distances to the color classes $4,5,\cdots,k$: all neighbors of $v$ with the same distances to these classes are in the same group. This means that the color codes of all members of a group have the same coordinates in positions $4,5,\cdots,k$. Since the distance of any neighbor of $v$ to $c^{-1}(i)$ is either $a_i-1$, $a_i$, or $a_i+1$, there are at most $3^{k-3}$ groups. Since $v$ has $d(v)\geq 4\cdot3^{k-3}+1$ neighbors, by the pigeonhole principle there exists a group containing at least $5$ vertices, say $u_1,u_2,u_3,u_4,u_5$. The color codes of all members of such a group are of the form $(1,*,*,x_4,x_5, \cdots, x_k)$ for some fixed nonnegative integers $x_4,x_5,\cdots, x_k$. If there exists a vertex $u$ in $U=\{u_1,u_2,\cdots,u_5\}$ with $c(u)\geq4$, then $c(u_i) = c(u)$ for all $i \in \{1,2,\cdots,5\}$.
Therefore, $0=a_1 < a_2\leq a_3 \leq \cdots \leq a_{c(u)}=1$, and so $a_2=a_3=1$. This implies that for every $u\in U$, $d(u,c^{-1}(j))$ is $1$ or $2$ for $j=2,3$. Since there are $5$ vertices in $U$ with at most $4$ possible color codes, two distinct vertices have the same color code, a contradiction. Now, the only remaining possibility is that the color of each vertex $u \in U$ is either $2$ or $3$; it cannot be color $1$ because $u$ is adjacent to $v$ and $c(v)=1$. If all vertices in $U$ have the same color, say $c(u)=x$ for every $u \in U$ with $x=2$ or $x=3$, and $y\in\{2,3\}-\{x\}$ is the other color, then we have $d(u,c^{-1}(1))=1$, $d(u,c^{-1}(x))=0$, and $d(u,c^{-1}(y))\in\{a_y-1,a_y,a_y+1\}$. This means that there are $5$ vertices in $U$ with at most $3$ possible color codes, so again two vertices have the same color code, a contradiction. So $U$ must contain vertices of both colors $2$ and $3$, and hence $a_2=a_3=1$. Let $u \in U$. If $c(u)=2$, then $d(u,c^{-1}(1))=1$, $d(u,c^{-1}(2))=0$, and $d(u,c^{-1}(3))\in\{1,2\}$; and if $c(u)=3$, then $d(u,c^{-1}(1))=1$, $d(u,c^{-1}(2))\in\{1,2\}$, and $d(u,c^{-1}(3))=0$. Again, we have at most $4$ possible color codes for at least $5$ vertices, so two vertices have the same color code, a contradiction. Therefore, $d(v)\leq 4\cdot3^{k-3}$ for any vertex $v$, and thus $\Delta(G)\leq 4\cdot3^{k-3}$. \end{proof} The tightness of this bound will be discussed in the next section. A different proof of Theorem \ref{Delta xl} was given in \cite{COR15}. In \cite{CHR02}, Chartrand et al. gave the following result. \begin{theorem} \label{ctr} (Theorem 4.3 in \cite{CHR02}) Let $k\geq 3$. If $T$ is a tree for which $\Delta(T)>(k-1)2^{k-2}$, then $\chi_L(T)>k$. \end{theorem} Equivalently, if $T$ is a tree with locating chromatic number $\chi_L(T)=k \;(\geq 3)$, then $\Delta(T) \leq (k-1)2^{k-2}$. This result is true only for $k=3$ and $k=4$.
For $k \geq 5$, Theorem \ref{Delta xl} corrects the upper bound of the maximum degree of such tree $T$, namely $\Delta(T) \leq 4\cdot3^{k-3}$. Figure \ref{S_36(5)} gives a locating coloring with $k=5$ colors for a tree with $\Delta(T)=36$. \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=0.45,rotate=270] \begin{footnotesize} \foreach \x in {1,...,36} \draw[color=gray] (0,0)arc(180:90:2 and \x -18.5)--(12,\x -18.5); \draw[color=gray,fill=yellow!50] (0,0)circle(0.12)node[above]{\color{black}1}; \foreach \x in {1,...,36} \draw[color=gray,fill=yellow!50] (2,\x -18.5)circle(0.12) (4,\x -18.5)circle(0.12) (6,\x -18.5)circle(0.12) (8,\x -18.5)circle(0.12) (10,\x -18.5)circle(0.12) (12,\x -18.5) circle(0.12); \foreach \x in {1,...,9} \draw (2.1,4*\x-18.6)node[right]{2} (4.1,4*\x-18.6)node[right]{1} (2.1,4*\x-19.6)node[right]{3} (4.1,4*\x-19.6)node[right]{1} (2.1,4*\x-20.6)node[right]{2} (4.1,4*\x-20.6)node[right]{3} (2.1,4*\x-21.6)node[right]{3} (4.1,4*\x-21.6)node[right]{2}; \foreach \x in {1} \draw (6.1,18.4-\x)node[right]{2} (8.1,18.4-\x)node[right]{1} (10.1,18.4-\x)node[right]{2} (12.1,18.4-\x)node[right]{1}; \foreach \x in {2} \draw (6.1,18.4-\x)node[right]{3} (8.1,18.4-\x)node[right]{1} (10.1,18.4-\x)node[right]{3} (12.1,18.4-\x)node[right]{1}; \foreach \x in {3} \draw (6.1,18.4-\x)node[right]{2} (8.1,18.4-\x)node[right]{3} (10.1,18.4-\x)node[right]{2} (12.1,18.4-\x)node[right]{3}; \foreach \x in {4} \draw (6.1,18.4-\x)node[right]{3} (8.1,18.4-\x)node[right]{2} (10.1,18.4-\x)node[right]{3} (12.1,18.4-\x)node[right]{2}; \foreach \x in {5} \draw (6.1,18.4-\x)node[right]{\color{red}4} (8.1,18.4-\x)node[right]{1} (10.1,18.4-\x)node[right]{2} (12.1,18.4-\x)node[right]{1}; \foreach \x in {6} \draw (6.1,18.4-\x)node[right]{\color{red}4} (8.1,18.4-\x)node[right]{1} (10.1,18.4-\x)node[right]{3} (12.1,18.4-\x)node[right]{1}; \foreach \x in {7} \draw (6.1,18.4-\x)node[right]{\color{red}4} (8.1,18.4-\x)node[right]{3} (10.1,18.4-\x)node[right]{2} 
(12.1,18.4-\x)node[right]{3}; \foreach \x in {8} \draw (6.1,18.4-\x)node[right]{\color{red}4} (8.1,18.4-\x)node[right]{2} (10.1,18.4-\x)node[right]{3} (12.1,18.4-\x)node[right]{2}; \foreach \x in {9} \draw (6.1,18.4-\x)node[right]{2} (8.1,18.4-\x)node[right]{\color{red}4} (10.1,18.4-\x)node[right]{2} (12.1,18.4-\x)node[right]{1}; \foreach \x in {10} \draw (6.1,18.4-\x)node[right]{3} (8.1,18.4-\x)node[right]{\color{red}4} (10.1,18.4-\x)node[right]{3} (12.1,18.4-\x)node[right]{1}; \foreach \x in {11} \draw (6.1,18.4-\x)node[right]{2} (8.1,18.4-\x)node[right]{\color{red}4} (10.1,18.4-\x)node[right]{2} (12.1,18.4-\x)node[right]{3}; \foreach \x in {12} \draw (6.1,18.4-\x)node[right]{3} (8.1,18.4-\x)node[right]{\color{red}4} (10.1,18.4-\x)node[right]{3} (12.1,18.4-\x)node[right]{2}; \foreach \x in {13} \draw (6.1,18.4-\x)node[right]{2} (8.1,18.4-\x)node[right]{1} (10.1,18.4-\x)node[right]{\color{blue}5} (12.1,18.4-\x)node[right]{1}; \foreach \x in {14} \draw (6.1,18.4-\x)node[right]{3} (8.1,18.4-\x)node[right]{1} (10.1,18.4-\x)node[right]{\color{blue}5} (12.1,18.4-\x)node[right]{1}; \foreach \x in {15} \draw (6.1,18.4-\x)node[right]{2} (8.1,18.4-\x)node[right]{3} (10.1,18.4-\x)node[right]{\color{blue}5} (12.1,18.4-\x)node[right]{3}; \foreach \x in {16} \draw (6.1,18.4-\x)node[right]{3} (8.1,18.4-\x)node[right]{2} (10.1,18.4-\x)node[right]{\color{blue}5} (12.1,18.4-\x)node[right]{2}; \foreach \x in {17} \draw (6.1,18.4-\x)node[right]{\color{red}4} (8.1,18.4-\x)node[right]{1} (10.1,18.4-\x)node[right]{\color{blue}5} (12.1,18.4-\x)node[right]{1}; \foreach \x in {18} \draw (6.1,18.4-\x)node[right]{\color{red}4} (8.1,18.4-\x)node[right]{1} (10.1,18.4-\x)node[right]{\color{blue}5} (12.1,18.4-\x)node[right]{1}; \foreach \x in {19} \draw (6.1,18.4-\x)node[right]{\color{red}4} (8.1,18.4-\x)node[right]{3} (10.1,18.4-\x)node[right]{\color{blue}5} (12.1,18.4-\x)node[right]{3}; \foreach \x in {20} \draw (6.1,18.4-\x)node[right]{\color{red}4} (8.1,18.4-\x)node[right]{2} 
(10.1,18.4-\x)node[right]{\color{blue}5} (12.1,18.4-\x)node[right]{2}; \foreach \x in {21} \draw (6.1,18.4-\x)node[right]{2} (8.1,18.4-\x)node[right]{\color{red}4} (10.1,18.4-\x)node[right]{\color{blue}5} (12.1,18.4-\x)node[right]{1}; \foreach \x in {22} \draw (6.1,18.4-\x)node[right]{3} (8.1,18.4-\x)node[right]{\color{red}4} (10.1,18.4-\x)node[right]{\color{blue}5} (12.1,18.4-\x)node[right]{1}; \foreach \x in {23} \draw (6.1,18.4-\x)node[right]{2} (8.1,18.4-\x)node[right]{\color{red}4} (10.1,18.4-\x)node[right]{\color{blue}5} (12.1,18.4-\x)node[right]{3}; \foreach \x in {24} \draw (6.1,18.4-\x)node[right]{3} (8.1,18.4-\x)node[right]{\color{red}4} (10.1,18.4-\x)node[right]{\color{blue}5} (12.1,18.4-\x)node[right]{2}; \foreach \x in {25} \draw (6.1,18.4-\x)node[right]{2} (8.1,18.4-\x)node[right]{1} (10.1,18.4-\x)node[right]{2} (12.1,18.4-\x)node[right]{\color{blue}5}; \foreach \x in {26} \draw (6.1,18.4-\x)node[right]{3} (8.1,18.4-\x)node[right]{1} (10.1,18.4-\x)node[right]{3} (12.1,18.4-\x)node[right]{\color{blue}5}; \foreach \x in {27} \draw (6.1,18.4-\x)node[right]{2} (8.1,18.4-\x)node[right]{3} (10.1,18.4-\x)node[right]{2} (12.1,18.4-\x)node[right]{\color{blue}5}; \foreach \x in {28} \draw (6.1,18.4-\x)node[right]{3} (8.1,18.4-\x)node[right]{2} (10.1,18.4-\x)node[right]{3} (12.1,18.4-\x)node[right]{\color{blue}5}; \foreach \x in {29} \draw (6.1,18.4-\x)node[right]{\color{red}4} (8.1,18.4-\x)node[right]{1} (10.1,18.4-\x)node[right]{2} (12.1,18.4-\x)node[right]{\color{blue}5}; \foreach \x in {30} \draw (6.1,18.4-\x)node[right]{\color{red}4} (8.1,18.4-\x)node[right]{1} (10.1,18.4-\x)node[right]{3} (12.1,18.4-\x)node[right]{\color{blue}5}; \foreach \x in {31} \draw (6.1,18.4-\x)node[right]{\color{red}4} (8.1,18.4-\x)node[right]{3} (10.1,18.4-\x)node[right]{2} (12.1,18.4-\x)node[right]{\color{blue}5}; \foreach \x in {32} \draw (6.1,18.4-\x)node[right]{\color{red}4} (8.1,18.4-\x)node[right]{2} (10.1,18.4-\x)node[right]{3} (12.1,18.4-\x)node[right]{\color{blue}5}; 
\foreach \x in {33} \draw (6.1,18.4-\x)node[right]{2} (8.1,18.4-\x)node[right]{\color{red}4} (10.1,18.4-\x)node[right]{2} (12.1,18.4-\x)node[right]{\color{blue}5}; \foreach \x in {34} \draw (6.1,18.4-\x)node[right]{3} (8.1,18.4-\x)node[right]{\color{red}4} (10.1,18.4-\x)node[right]{3} (12.1,18.4-\x)node[right]{\color{blue}5}; \foreach \x in {35} \draw (6.1,18.4-\x)node[right]{2} (8.1,18.4-\x)node[right]{\color{red}4} (10.1,18.4-\x)node[right]{2} (12.1,18.4-\x)node[right]{\color{blue}5}; \foreach \x in {36} \draw (6.1,18.4-\x)node[right]{3} (8.1,18.4-\x)node[right]{\color{red}4} (10.1,18.4-\x)node[right]{3} (12.1,18.4-\x)node[right]{\color{blue}5}; \end{footnotesize} \end{tikzpicture}\\ \caption{A tree $T$ with $\Delta(T)=36$ and $\chi_L(T)=5$.} \label{S_36(5)} \end{center} \end{figure} \section{The locating chromatic number of palms} In this section, we give tight upper and lower bounds for $\chi_L(S_n(a_1,a_2,\cdots,a_n))$. We show that the upper bound on the maximum degree in Theorem \ref{Delta xl} is tight not only for general graphs but also for trees. We also show that for every integer $k\geq 3$ there is a palm with locating chromatic number $k$ and $\Delta=4\cdot 3^{k-3}$. \begin{theorem}\label{batasxl} Let $n\geq 2$ and let $G=S_n(a_1,a_2,\cdots,a_n)$ be a palm; then \begin{align} \left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3\leq \chi_L(G)\leq n+1. \end{align} Moreover, \begin{align} \text{for every } k \text{ with } \left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3\leq k\leq n+1, \text{ there exists a palm } G \text{ with } \chi_L(G)=k. \end{align} \end{theorem} \begin{proof} We prove the first part of Theorem \ref{batasxl} here; the second part will be proven after Theorem \ref{xlon}. The lower bound is a direct consequence of Theorem \ref{Delta xl}. To prove the upper bound, let $c:V\to \{1,2,\cdots,n+1\}$ with $c(a_0)=n+1$, $c(a_{i,j})=i$ if $j$ is odd, and $c(a_{i,j})=n+1$ if $j$ is even.
Any two vertices in the same end-path, say the $i^{th}$ end-path, are resolved by $c^{-1}(j)$ for every $j\ne i$, and any two vertices in different end-paths, say the $i^{th}$ and $j^{th}$ end-paths, are resolved by $c^{-1}(i)$ and $c^{-1}(j)$. Thus, $c$ is a locating $(n+1)$-coloring of $G$, and the result follows. \end{proof} Next, we study the locating chromatic number of the olive tree. The olive tree is a simpler counterexample to Theorem 4.3 in \cite{CHR02} than the one given in \cite{COR15}. The tightness of the upper bound on the maximum degree in Theorem \ref{Delta xl} and of the lower bound in Theorem \ref{batasxl} is achieved by olive trees. The upper bound in Theorem \ref{batasxl} is achieved by stars. \vspace{12pt} \hrule \vspace{3pt} \centerline{Algorithm 2. Coloring palms.} \vspace{3pt} \hrule \vspace{3pt} \noindent\textbf{Input :} Integers $n\geq 3$, $a_1,a_2,\cdots,a_n$.\\ \textbf{Output :} $c$, a coloring of $S_n(a_1,a_2,\cdots,a_n)$. \begin{enumerate} \item $k=\left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$. \item For $i=1,2,\cdots,n$, write $i=4l+r$ with $r\in\{1,2,3,4\}$ and $l \in\mathbb{Z}$. \item Write the number $l$ in the expression $i=4l+r$ as a $(k-3)$-digit number in base $3$, allowing leading zeros. \item For two distinct integers $x$ and $y$, define the $(x,y)$-alternating sequence as the sequence $\{x,y,x,y,\cdots\}$. \item For $i=1,2,\cdots,n$, define an integer sequence $A_i=\{a^i_1,a^i_2,\cdots\}$ as follows. \begin{enumerate} \item Write $i=4 l+r=4\times (l_{k}l_{k-1}\cdots l_5 l_4)_3 +r$, as in Steps 2 and 3. \item Initially, define $A_i$ as the $(x,y)$-alternating sequence with $(x,y)=(2,1)$ if $r=1$, $(x,y)=(3,1)$ if $r=2$, $(x,y)=(2,3)$ if $r=3$, and $(x,y)=(3,2)$ if $r=4$. \item For $t=4,5,\cdots, k$: if $l_t\ne0$, replace the value of $a^i_{2t+l_t-6}$ by $t$. \end{enumerate} \item Assign $c(a_{0})=1$ and $c(a_{i,j})=a^i_j$ for $1 \leq j \leq a_i$, $i=1,2,\cdots,n$.
\end{enumerate} \hrule \vspace{6pt} We give an example of Algorithm 2. If $n=108$, then $k=6$; write $i=57=4\times 14 + 1$ with $(14)_{10}=(112)_3$, $i=80=4\times 19 + 4$ with $(19)_{10}=(201)_3$, and $i=100=4\times 24 + 4$ with $(24)_{10}=(220)_3$, so $57=4\times(112)_3+1$, $80=4\times(201)_3+4$, and $100=4\times(220)_3+4$. The sequences $A_i$ for $i=1,57,80,100$ are as follows. \begin{align*} A_{4\times (000)_3+1} = \{2,1,2,1,2,1,2,1,2,1,2,\cdots\}\qquad \qquad A_{4\times (112)_3+1} = \{2,1,2,4,5,1,6,1,2,1,2,\cdots\}\\ A_{4\times (201)_3+4} = \{3,2,4,2,3,2,3,6,3,2,3,\cdots\}\qquad \qquad A_{4\times (220)_3+4} = \{3,2,3,2,3,5,3,6,3,2,3,\cdots\} \end{align*} \begin{theorem}\label{xlon} For $n\geq 2$, $\chi_L(O_n)=\left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$. \end{theorem} \begin{proof} Since $\Delta(O_n)=n$, Theorem \ref{batasxl} gives $\chi_L(O_n) \geq \left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$. Now, construct a coloring of $O_n$ with $k=\left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$ colors by using Algorithm 2. We will prove that the color codes of all vertices are different. Note that vertex $a_{0}$ is the only vertex of color $1$ that has neighbors of colors $2$ and $3$, so its color code is different from the color codes of all other vertices. Let $a_{i,j}$ and $a_{p,q}$ be two different vertices and write $i=4\times (\overline{i_{k}i_{k-1}\cdots i_5 i_4})+r_1$ and $p=4\times (\overline{p_{k}p_{k-1}\cdots p_5 p_4})+r_2$ as in Step 5(a). Let $(w,x)$ and $(y,z)$ be the alternating pairs for $A_i$ and $A_p$ from Step 5(b). Consider the following cases. \textbf{Case I: $\mathbf{j\ne q}$.} In this case, $a_{i,j}$ and $a_{p,q}$ are in different levels. Without loss of generality, let $j>q$. If $\{w,x\}=\{y,z\}$, then the color $s\in\{1,2,3\}-\{w,x\}$ is used in neither $A_i$ nor $A_p$. Since $j\ne q$, we get $d(a_{i,j},c^{-1}(s))\ne d(a_{p,q},c^{-1}(s))$.
If $\{w,x\}\ne\{y,z\}$, then there is a color used in $A_i$ but not in $A_p$, and vice versa. Let $s\in \{y,z\}-\{w,x\}$; then $s$ is a color used in $A_p$ but not in $A_i$. Note that either $a_{p,1}$ or $a_{p,2}$ is colored by $s$, so $d(a_{p,q},c^{-1}(s))<q<j<d(a_{i,j},c^{-1}(s))$. \textbf{Case II: $\mathbf{j=q}$.} Since $a_{i,j}$ and $a_{p,q}$ are in the same level, they must be in different end-paths, so $i\ne p$. If there is a $t\ (4\leq t\leq k)$ with $i_t\ne p_t$, then the position of the vertex with color $t$ differs in $A_i$ and $A_p$, so $d(a_{i,j},c^{-1}(t))\ne d(a_{p,q},c^{-1}(t))$. If $i_t=p_t$ for all $t$, then $r_1\ne r_2$, which means that $A_i$ and $A_p$ have different alternating pairs. If $w\ne y$, then these two colors distinguish $r_c(a_{i,j})$ and $r_c(a_{p,q})$ because the two vertices are in the same level; a similar argument applies if $x\ne z$. Thus, we have constructed a locating coloring of $O_n$ with $k=\left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$ colors. Therefore, $\chi_L(O_n)=\left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$. \end{proof} To prove the second part of Theorem $\ref{batasxl}$, we need the following lemma. \begin{lemma}\label{1} Let $G=S_n(a_1,a_2,\cdots,a_n)$ be a palm and $G'=S_n(a_1,a_2,\cdots,a_i+1,\cdots,a_n)$; then \[\chi_L(G')\geq\chi_L(G)-1.\] \end{lemma} \begin{proof} Let $\chi_L(G')=p$ and let $c'$ be a locating $p$-coloring of $G'$. We will construct a locating $(p+1)$-coloring of $G$; by doing so, we get $\chi_L(G)\leq p+1$, and the result follows. Let $w$ be the only vertex in $G'$ but not in $G$, and let $z$ be the only neighbor of $w$. Define $c:V(G)\to \{1,2,\cdots,p+1\}$ with $c(z)=p+1$ and $c(v)=c'(v)$ if $v\ne z$. Without loss of generality, let $c'(w)=1$. Suppose there are two different vertices $u$ and $v$ in $G$ with $r_c(u)=r_c(v)$; that is, $d_G(u,c^{-1}(k))=d_G(v,c^{-1}(k))$ for $k=1,2,\cdots,p+1$.
Since $d_G(u,c^{-1}(k))=d_{G'}(u,c^{-1}(k))$ for $k=2,3,\cdots,p$ and $r_{c'}(u)\ne r_{c'}(v)$, we get $d_{G'}(u,c^{-1}(1))\ne d_{G'}(v,c^{-1}(1))$. Without loss of generality, let $d_{G'}(u,c^{-1}(1))< d_{G'}(v,c^{-1}(1))$. Since $c'(w)=1$, we have $d_{G'}(u,c^{-1}(1))\leq d_{G'}(u,w)$; consider the following cases.\\ {Case I : ${\ d_{G'}(u,c^{-1}(1))<d_{G'}(u,w)}$.} In this case, $d_{G}(u,c^{-1}(1))=d_{G'}(u,c^{-1}(1))$. This means $d_{G'}(v,c^{-1}(1))\leq d_{G}(v,c^{-1}(1))=d_{G}(u,c^{-1}(1))=d_{G'}(u,c^{-1}(1))$, a contradiction.\\ {Case II : ${\ d_{G'}(u,c^{-1}(1))=d_{G'}(u,w)}$.} In this case, $d_{G'}(v,w)\geq d_{G'}(v,c^{-1}(1)) >d_{G'}(u,c^{-1}(1))=d_{G'}(u,w)$. This implies $d_G(v,c^{-1}(p+1))=d_G(v,z)=d_{G'}(v,w)-1>d_{G'}(u,w)-1=d_G(u,z)=d_G(u,c^{-1}(p+1))$, a contradiction. Thus $c$ is a locating $(p+1)$-coloring of $G$. \end{proof} Now we prove the second part of Theorem \ref{batasxl}. \begin{proof}[\bf\em Proof of Theorem \ref{batasxl} (2)] Define a chain of palms $$S_n=G_0\subseteq G_1 \subseteq \cdots \subseteq G_z=O_n=S_n(1,2,\cdots,n)$$ where $|G_{i+1}|=|G_{i}|+1$; such a chain is not unique. Consider the sequence $$n+1=\chi_L(G_0),\chi_L(G_1),\chi_L(G_2),\cdots,\chi_L(G_z)=\left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3.$$ By Lemma \ref{1}, we have $\chi_L(G_{i+1})\geq \chi_L(G_i)-1$. The sequence decreases in some of its terms because $n+1 \geq \left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$. Since each term is an integer and consecutive terms decrease by at most $1$, the sequence takes every integer value between $\left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$ and $n+1$. This means that for every integer $k$ between $\left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$ and $n+1$, there exists a palm $G_i$ with $\chi_L(G_i)=k$. \end{proof} By using Algorithm 2 and the arguments in the proof of Theorem \ref{xlon}, we get the following theorem.
This also proves that the coloring in Figure \ref{S_36(5)} is a locating coloring. \begin{theorem} Let $n\geq 2$ and let $G=S_n(a_1,a_2,\cdots,a_n)$ be a palm with $1\leq a_1\leq a_2 \leq \cdots \leq a_n$. If $a_3\geq 2$, $a_{4\cdot3^k+1}\geq 2k+3$, and $a_{8\cdot 3^k+1}\geq 2k+4$ for all nonnegative integers $k\leq \log_3\left(\frac{n}{4}\right)$, then $\chi_L(G)=\left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$. $\square$ \end{theorem} The following corollary is the special case of the previous theorem where $a_i=k$ for all $i$. \begin{corollary}\label{>>} Let $n\geq 3$ and $k\geq 2 \left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+4$; then $\chi_L(S_n(k))=\left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$. $\square$ \end{corollary} \begin{theorem} Let $G=S_n(a_1,a_2,\cdots,a_n)$ be a palm. Then $\chi_L(G)=n+1$ if and only if $G$ is a star. \label{xl=n+1} \end{theorem} \begin{proof} The $(\Leftarrow)$ part is trivial. For the $(\Rightarrow)$ part, let $G$ be a palm which is not a star. Without loss of generality, let $a_1>1$. Let $c:V\to \{1,2,\cdots,n\}$ with $c(a_0)=1$, $c(a_{1,j})=2$ if $j$ is odd, $c(a_{1,j})=3$ if $j$ is even, and, for $i\geq 2$, $c(a_{i,j})=i$ if $j$ is odd and $c(a_{i,j})=1$ if $j$ is even. It is not hard to see that $c$ is a locating $n$-coloring of $G$, and the result follows. \end{proof} \section{Regular palms} In this section, we study the order of growth of the locating chromatic number of regular palms. If we fix $n$ and consider $\chi_L(S_n(k))$ as a function of $k$, we have $\chi_L(S_n(k)) \to \left\lceil\log_3\left(\frac{n}{4}\right)\right\rceil+3$ as $k\to\infty$ by Corollary $\ref{>>}$. However, if we fix $k$ and consider $\chi_L(S_n(k))$ as a function of $n$, then $\chi_L(S_n(k))$ is increasing and unbounded (Lemma \ref{monoton}). Let $f$ and $g$ be two functions of an integer variable $n$.
We say that $f=O(g)$ if there exists $c>0$ such that $|f(n)|\leq c\ |g(n)|$ for all large $n$; $f=\Omega(g)$ if $g=O(f)$; and $f=\Theta(g)$ if $f=O(g)$ and $f=\Omega(g)$. We also write $f=o(g)$ if $\lim \frac{f}{g}=0$; note that $\lim \frac{f}{g}=1$ is equivalent to $f=(1+o(1))g$. The order of $\chi_L(S_n(k))$ is given in the following theorem. \begin{theorem}\label{orderxl} For every fixed positive integer $k$, $\chi_L(S_n(k))=\Theta\left(n^\frac{1}{k}\right)$. \end{theorem} We give the exact value of $\chi_L(S_n(k))$ for $k=1,2,3$ in the following theorems. \begin{theorem}\label{S12} For $n\geq 2$, $\chi_L(S_n(1))=n+1$ and $\chi_L(S_n(2))=\left\lceil\sqrt{n}\right\rceil+1$. \end{theorem} \begin{theorem}\label{S3} For integers $p\geq3$, let $f(p)=(p-1)\left\lfloor\frac{p^2}{4}\right\rfloor-\left\lfloor\frac{p^2-2p}{4}\right\rfloor$; then \begin{align} \chi_L(S_n(3))= p\ \Longleftrightarrow\ f(p-1)< n\leq f(p);\label{Sn3} \end{align} or simply $\chi_L\left(S_n(3)\right) =(1+o(1)) \sqrt[3]{4n}$. \end{theorem} We end this section with the following conjecture. \begin{conjecture} For $k\geq 4$, $\chi_L(S_n(k)) =(1+o(1)) \left(\frac{k-1}{2}\right) \sqrt[k]{4n}$. \end{conjecture} \begin{center} {\bf Proofs of Theorems \ref{orderxl}, \ref{S12}, and \ref{S3}} \end{center} To prove Theorem \ref{orderxl}, we need the following lemma. \begin{lemma}\label{monoton} Let $k$ be a positive integer and $n\geq 3$; then $\chi_L(S_n(k))$ is increasing and unbounded as a function of $n$. \end{lemma} \begin{proof} The case $k=1$ is clear. Let $k\geq 2$ and $n\geq 3$; we will prove that $\chi_L(S_{n}(k))\geq \chi_L(S_{n-1}(k))$. Let $\chi_L(S_{n}(k))=p$; then $p\leq n$ by Theorems \ref{batasxl} and \ref{xl=n+1}. Let $c$ be a locating $p$-coloring of $S_n(k)$ with $p$ as the color of the center.
For each $i=1,2,\cdots,p-1$, choose a vertex of color $i$ with the smallest distance to $a_0$ (if there is more than one such vertex, choose one arbitrarily); this vertex is called the reference vertex of color $i$. We thus have $p-1$ reference vertices. Since $p\leq n$, there exists an end-path containing no reference vertex. If we remove this end-path, the remaining coloring is a locating $p$-coloring of $S_{n-1}(k)$, because the color codes of the remaining vertices do not change. Therefore, $\chi_L(S_{n-1}(k))\leq p=\chi_L(S_{n}(k))$. Unboundedness follows from the lower bound in Theorem \ref{batasxl}. \end{proof} \begin{proof}[\bf\em Proof of Theorem \ref{orderxl}] We will prove that there exist $A,B>0$ such that \begin{align}\label{theta} A n^{\frac{1}{k}} \leq \chi_L(S_n(k)) \leq B n^{\frac{1}{k}} \end{align} for large values of $n$. First, we prove the second inequality in (\ref{theta}). Let $n\geq 4\cdot 3^{2k+1}$ and write $\chi_L(S_n(k))=p+2$; note that $p+2\geq 2k+4$ by Theorem \ref{batasxl}. For $i=1,2,\cdots,k$, let $A_i=\{a\in\{1,2,\cdots,p\} \mid a \equiv i\ \text{mod}\ k \}$. Let $m=\prod_{i=1}^k |A_i|$; we construct a locating $(p+1)$-coloring $c$ of $S_m(k)$ as follows. \begin{enumerate} \item Let $c(a_0)=p+1$. \item Arrange the elements of $\mathbb{A}=A_1 \times A_2 \times \cdots \times A_k$ in lexicographic order. \item For $i=1,2,\cdots,m$, let $\left(a_0,a_{i,1},a_{i,2},\cdots,a_{i,k}\right)$ be the $i^{th}$ end-path of $S_m(k)$, and let\linebreak $(\alpha_{i1},\alpha_{i2},\cdots,\alpha_{ik})$ be the $i^{th}$ element of $\mathbb{A}$; then define $c(a_{i,j})=\alpha_{ij}$. \end{enumerate} It is easy to verify that this coloring is a locating $(p+1)$-coloring of $S_m(k)$, thus $\chi_L(S_m(k))\leq p+1<\chi_L(S_n(k))$.
By Lemma \ref{monoton}, we have $n>m$, therefore \begin{align*} n> \prod_{i=1}^k |A_i|\geq \prod_{i=1}^{k} \left\lfloor\frac{p}{k}\right\rfloor \geq \left(\frac{p}{k}-1\right)^k \geq \left(\frac{p+2}{2k}\right)^k, \end{align*} which gives $p+2< (2k)\, n^{\frac{1}{k}}$. Thus, the second inequality in (\ref{theta}) is satisfied for $n\geq 4\cdot 3^{2k+1}$ with $B=2k$. Now, we only need to prove the first inequality in (\ref{theta}). Let $q=\chi_L(S_n(k))$. An end-path contains $k$ vertices other than the center of $S_n(k)$. In a locating $q$-coloring, each vertex has $q$ possible colors, so there are at most $q^k$ possible ways to color an end-path. Since two different end-paths cannot have the same coloring, there are at most $q^k$ end-paths; thus $n\leq q^k$, which is equivalent to $n^{\frac{1}{k}}\leq q=\chi_L(S_n(k))$. Hence, (\ref{theta}) holds with $A=1$. \end{proof} \begin{proof}[\bf\em Proof of Theorem \ref{S12}] For $k=1$, $S_n(k)=S_n$ and the result follows. Let $p=\lceil\sqrt{n}\rceil$; we will prove that $\chi_L(S_n(2))=p+1$. First, we construct a locating $(p+1)$-coloring of $S_n(2)$. Let $A=\{(x,y) \mid x\in \{1,2,\cdots,p\}, y\in\{1,2,\cdots,p+1\}-\{x\}\}$, so that $|A|=p^2\geq n$. Define a coloring $c$ with $c(a_0)=p+1$, $c(a_{i,1})=x_i$, and $c(a_{i,2})=y_i$; where $(x_i,y_i)$ is the $i^{th}$ element in $A$ (based on lexicographic order). Note that $c$ is a locating $(p+1)$-coloring of $S_n(2)$, because vertices in the same level are resolved by the colors of their neighbors, and vertices in different levels are resolved by their distances to $c^{-1}(p+1)$. To prove that $p+1$ is minimum, let $c'$ be a $q$-coloring of $S_n(2)$ with $c'(a_0)=q$ and $q\leq p$. We will prove that $c'$ is not a locating coloring. Let $A'=\{(c'(a_{i,1}),c'(a_{i,2})) \mid i=1,2\cdots,n\}$. Note that $A'\subseteq \{(x,y) \mid x\in \{1,2,\cdots,p-1\}, y\in\{1,2,\cdots,p\}-\{x\}\}$, so $|A'|\leq (p-1)^2<n$.
This means that there are two distinct indices $i$ and $j$ such that $(c'(a_{i,1}),c'(a_{i,2}))=(c'(a_{j,1}),c'(a_{j,2}))$. Therefore $a_{c'}(a_{i,1})=a_{c'}(a_{j,1})$, and thus $c'$ is not a locating coloring of $S_n(2)$. \end{proof} \begin{proof}[\bf\em Proof of Theorem \ref{S3}] Note that $f(p)=(p-1)\left\lfloor\frac{p^2}{4}\right\rfloor-\left\lfloor\frac{p^2-2p}{4}\right\rfloor=\left\lceil \frac{p}{2} \right\rceil(p-1)\left\lfloor \frac{p}{2}-1 \right\rfloor+\left\lceil \frac{p}{2} \right\rceil^2$ is a strictly increasing function for $p\geq 3$. So, for $n\geq 2$ there is a unique $p$ such that $f(p-1)< n\leq f(p)$. Proving (\ref{Sn3}) is equivalent to proving that $\chi_L(S_n(3))= p$ where $p$ is the smallest integer such that $n\leq f(p)$. Let $n\geq2$ and $\chi_L(S_n(3))=p$. First, we will prove that $n\leq f(p)$. Let $V(S_n(3))=\{v\}\cup\{x_i,y_i,z_i \mid i=1,2,\cdots,n\}$ where, for $i=1,2,\cdots,n$, the subgraph induced by $\{v,x_i,y_i,z_i\}$ is a path. Let $c$ be a locating $p$-coloring of $S_n(3)$ with $c(v)=p$. Let $A=\{j\mid d(v,c^{-1}(j))=1\}$, $B=\{j \mid d(v,c^{-1}(j))=2\}$, and $C=\{j\mid d(v,c^{-1}(j))=3\}$; also $|A|=\alpha$, $|B|=\beta$, and $|C|=\gamma$. Now, we will count the number of possible color codes for $x_i$. We know that $c(x_i)\in A$ and $c(y_i)\in A\cup B\cup\{p\}$. Let $a_c(x_i)=(a_1,a_2,\cdots,a_p)$, then $a_p=1$, $a_j=2$ for $j\in A\backslash \{c(x_i),c(y_i)\}$, $a_j=3$ for $j\in B\backslash \{c(y_i),c(z_i)\}$, and $a_j=4$ for $j\in C\backslash\{c(z_i)\}$. So, for fixed $A$, $B$, and $C$, the color code of $x_i$ depends only on $c(x_i)$, $c(y_i)$, and $c(z_i)$. \textbf{(i)} If $c(y_i)\in B$ and $c(z_i)\in A\cup \{p\}$, then there are $\alpha$ possible choices for $c(x_i)$ and $\beta$ possible choices for $c(y_i)$. So, there are $\alpha\beta$ possible values for $a_c(x_i)$; note that the value of $a_c(x_i)$ does not change for different values of $c(z_i)\in A\cup \{p\}$.
\textbf{(ii)} If $c(y_i)\in B$ and $c(z_i)\in (B\cup C)\backslash \{c(y_i)\}$, then there are $\alpha$ possible choices for $c(x_i)$, $\beta$ possible choices for $c(y_i)$, and $\beta+\gamma-1$ possible choices for $c(z_i)$. So, there are $\alpha\beta(\beta+\gamma-1)$ possible values for $a_c(x_i)$. \textbf{(iii)} If $c(y_i)\in (A\cup\{p\})\backslash\{c(x_i)\}$ and $c(z_i)\in (A\cup\{p\})\backslash\{c(y_i)\}$, then there are $\alpha$ possible choices for $c(x_i)$ and $\alpha$ possible choices for $c(y_i)$. So, there are $\alpha^2$ possible values for $a_c(x_i)$; note that the value of $a_c(x_i)$ does not change for different values of $c(z_i)\in (A\cup \{p\})\backslash\{c(y_i)\}$. \textbf{(iv)} If $c(y_i)\in (A\cup\{p\})\backslash\{c(x_i)\}$ and $c(z_i)\in B\cup C$, then there are $\alpha$ possible choices for $c(x_i)$, $\alpha$ possible choices for $c(y_i)$, and $\beta+\gamma$ possible choices for $c(z_i)$. So, there are $\alpha^2(\beta+\gamma)$ possible values for $a_c(x_i)$. Since $\alpha+\beta+\gamma=p-1$, the total number of possible values for $a_c(x_i)$ is $\alpha\beta+\alpha\beta(\beta+\gamma-1)+\alpha^2+\alpha^2(\beta+\gamma)=\alpha(\alpha+\beta)(\beta+\gamma)+\alpha^2=\alpha(p-1-\gamma)(p-1-\alpha)+\alpha^2$. This value is maximized by taking $\gamma=0$, so the number of possible values for $a_c(x_i)$ is at most $\alpha(p-1)(p-1-\alpha)+\alpha^2$, which is maximized when $\alpha=\frac{p}{2}+\frac{1}{2p-4}$.
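As a hedged numerical sanity check (our own illustration, not part of the original argument), one can confirm that over integer values of $\alpha$ the bound $\alpha(p-1)(p-1-\alpha)+\alpha^2$ is maximized at $\alpha=\left\lceil p/2\right\rceil$ and that the maximum equals $f(p)$:

```python
import math

def f(p):
    # f(p) = (p-1)*floor(p^2/4) - floor((p^2-2p)/4), as in Theorem S3
    return (p - 1) * (p * p // 4) - (p * p - 2 * p) // 4

def bound(alpha, p):
    # count of possible color codes for x_i when gamma = 0
    return alpha * (p - 1) * (p - 1 - alpha) + alpha * alpha

for p in range(3, 80):
    best = max(range(p), key=lambda a: bound(a, p))
    assert best == math.ceil(p / 2)   # integer maximizer is ceil(p/2)
    assert bound(best, p) == f(p)     # and the maximum value is f(p)
```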
Since $\alpha$ must be an integer and the closest integer to $\frac{p}{2}+\frac{1}{2p-4}$ is $\left\lceil \frac{p}{2} \right\rceil$, the number of possible values for $a_c(x_i)$ is at most \[ \left\lceil \frac{p}{2} \right\rceil(p-1)\left\lfloor \frac{p}{2}-1 \right\rfloor+\left\lceil \frac{p}{2} \right\rceil^2 = (p-1)\left\lceil \frac{p}{2} \right\rceil\left\lfloor \frac{p}{2} \right\rfloor - \left\lceil \frac{p}{2} \right\rceil\left(p-1-\left\lceil \frac{p}{2} \right\rceil\right) = (p-1)\left\lfloor\frac{p^2}{4}\right\rfloor-\left\lfloor\frac{p^2-2p}{4}\right\rfloor. \] Therefore, $n\leq f(p)$. Now, we prove that $p$ is the smallest integer such that $n\leq f(p)$. Suppose otherwise, and let $k$ be an integer such that $n\leq f(k)$ and $k<p$. We will construct a locating $k$-coloring of $S_n(3)$, which will contradict $\chi_L(S_n(3))=p$. Let $A=\left\{1,2,\cdots,\left\lceil \frac{k}{2} \right\rceil\right\}$ and $B=\left\{\left\lceil \frac{k}{2} \right\rceil+1,\left\lceil \frac{k}{2} \right\rceil+2,\cdots,k-1\right\}$. Let $S_1=\{(a,b,a) \mid a\in A; b\in B\}$, $S_2=\{(a,b,c)\mid a\in A; b,c\in B; b\ne c\}$, $S_3=\{(a,b,a)\mid a,b\in A\cup\{k\}; b\ne a\ne k\}$, $S_4=\{(a,b,c)\mid a,b\in A\cup\{k\}; c\in B; b\ne a\ne k\}$, and $S=S_1\cup S_2\cup S_3\cup S_4$. Note that $|S|=\alpha\beta+\alpha\beta(\beta-1)+\alpha^2+\alpha^2\beta$ where $\alpha=|A|$ and $\beta=|B|$, so $|S|=f(k)\geq n$. Let $c$ be a coloring of $S_n(3)$ defined as follows. \begin{enumerate} \item Color $v$ with $k$. \item Arrange the elements of $S$ from $S_1$ to $S_4$. \item For $i=1,2,\cdots,n$; let $(a_i,b_i,c_i)$ be the $i^{th}$ element in $S$ (based on the previous ordering), and color $x_i$ with $a_i$, color $y_i$ with $b_i$, and color $z_i$ with $c_i$. \end{enumerate} Now we prove that $c$ is a locating coloring. Note that $\{w\in V(S_n(3)) \mid c(w)=k\}\subseteq \{v,y_1,y_2,\cdots,y_n\}$ and $c(z_i)\in A \Rightarrow c(z_i)=c(x_i)$. Suppose, for a contradiction, that $u$ and $w$ are two distinct vertices with $a_c(u)=a_c(w)$.
Consider the following cases. \textit{Case I: $d(u,c^{-1}(k))$ is even.} Since $S_n(3)$ is bipartite and the color $k$ appears in only one part of the bipartition, if $d(u,c^{-1}(k))$ is even, then $u,w\in \{v,y_1,y_2,\cdots,y_n\}$. Note that $v$ is the only vertex with $d(v,c^{-1}(r))=1$ for all $r\in A$, so $v\notin\{u,w\}$. Let $u=y_i$ and $w=y_j$ with $i\ne j$. Since $a_c(u)=a_c(w)$, we have $\{c(x_i),c(z_i)\}=\{c(x_j),c(z_j)\}$. If $c(z_i)=c(z_j)$, then $c(x_i)=c(x_j)$, which implies $i=j$. If $c(z_i)=c(x_j)$, then $c(z_i)\in A$, which implies $|\{c(x_i),c(z_i),c(x_j),c(z_j)\}|=1$ and thus $i=j$. \textit{Case II: $d(u,c^{-1}(k))$ is odd.} Let $u\in \{x_i,y_i,z_i\}$ and $w\in \{x_j,y_j,z_j\}$. If $d(u,v)=d(w,v)$, then $i\ne j$ and the differing colors in $(a_i,b_i,c_i)$ and $(a_j,b_j,c_j)$ distinguish $a_c(u)$ from $a_c(w)$. If $d(u,v)< d(w,v)$, then $u=x_i$ and $w=z_j$. This implies $c(z_j)=c(x_i)\in A$, and thus $c(z_j)=c(x_j)$. Since $d(z_j,c^{-1}(k))=d(x_i,c^{-1}(k))=1$, we have $c(y_j)=k$, and hence $d(z_j,c^{-1}(r))=5>3\geq d(x_i,c^{-1}(r))$ for every $r\in B$, a contradiction. We have proved that $c$ is a locating $k$-coloring of $S_n(3)$, which means $\chi_L(S_n(3))\leq k$; but $\chi_L(S_n(3))=p>k$, a contradiction. Therefore $p$ is the smallest integer such that $n\leq f(p)$, and thus (\ref{Sn3}) is proven. From (\ref{Sn3}), we have \begin{align*} \lim\limits_{n\to \infty} \frac{n}{f(p)}=1\quad \Rightarrow \quad \lim\limits_{n\to \infty} \frac{n}{p^3/4}=1\quad \Rightarrow \quad \lim\limits_{n\to \infty} \frac{p}{\sqrt[3]{4n}}=1, \end{align*} and therefore $\chi_L\left(S_n(3)\right) =(1+o(1)) \sqrt[3]{4n}$. \end{proof} \section*{Acknowledgment} This research has been funded by the Indonesian Ministry of Research and Technology/National Agency of Research and Innovation under the World Class University (WCU) Program managed by Institut Teknologi Bandung.
https://arxiv.org/abs/2005.09205
Distance matrices of subsets of the Hamming cube
Graham and Winkler derived a formula for the determinant of the distance matrix of a full-dimensional set of $n + 1$ points $\{ x_{0}, x_{1}, \ldots , x_{n} \}$ in the Hamming cube $H_{n} = ( \{ 0,1 \}^{n}, \ell_{1} )$. In this article we derive a formula for the determinant of the distance matrix $D$ of an arbitrary set of $m + 1$ points $\{ x_{0}, x_{1}, \ldots , x_{m} \}$ in $H_{n}$. It follows from this more general formula that $\det (D) \not= 0$ if and only if the vectors $x_{0}, x_{1}, \ldots , x_{m}$ are affinely independent. Specializing to the case $m = n$ provides new insights into the original formula of Graham and Winkler. A significant difference that arises between the cases $m < n$ and $m = n$ is noted. We also show that if $D$ is the distance matrix of an unweighted tree on $n + 1$ vertices, then $\langle D^{-1} \mathbf{1}, \mathbf{1} \rangle = 2/n$ where $\mathbf{1}$ is the column vector all of whose coordinates are $1$. Finally, we derive a new proof of Murugan's classification of the subsets of $H_{n}$ that have strict $1$-negative type.
\section{Introduction: Distance and Gram matrices}\label{Sec 1} The global geometry of a finite metric space $(\{x_{0}, x_{1}, \ldots , x_{m} \}, d)$ is completely encoded within its \textit{distance matrix} $D = (d(x_{i}, x_{j}))_{i,j = 0}^{m}$. The distance matrices we focus on in this article correspond to metric subspaces of the \textit{Hamming cube} $H_{n} = ( \{ 0,1 \}^{n}, \ell_{1} )$. In this context, the metric distance $d$ between two vectors $x, y \in \{ 0,1 \}^{n}$ is given by $d( x,y ) = \| x - y \|_{1}$. Associated with such a distance matrix $D$ is a Gram matrix $G = G(D)$ that will be described in Section \ref{Sec 2}. It is, however, helpful at this point to recall some general properties of Gram matrices. The \textit{Gram matrix} of a set of vectors $\{ x_{1}, \ldots , x_{m} \} \subset \mathbb{R}^{n}$ is the $m \times m$ matrix \begin{align*} G(x_{1}, \ldots , x_{m}) & = (x_{i} \cdot x_{j})_{i,j = 1}^{m} \\ & = BB^{T}, \end{align*} where $B$ is the $m \times n$ matrix whose $i$th row is given by the vector $x_{i}$, $1 \leq i \leq m$. One may use the Gram matrix $G(x_{1}, \ldots , x_{m})$ to test for linear dependence. Indeed, the set of vectors $\{ x_{1}, \ldots , x_{m} \}$ is linearly dependent if and only if $\det G(x_{1}, \ldots , x_{m}) = 0$. This result is known as Gram's criterion for linear dependence. All Gram matrices are positive semi-definite. Moreover, $G(x_{1}, \ldots , x_{m})$ is positive definite if and only if the set of vectors $\{ x_{1}, \ldots , x_{m} \}$ is linearly independent. For a classical treatment of these results, see Gantmacher \cite{FG}. Gram matrices also arise naturally when calculating the volumes of $m$-dimensional parallelepipeds in $\mathbb{R}^{n}$. 
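For instance, Gram's criterion and the volume formula $V^{2} = \det G$ can be checked numerically; the following sketch (an illustration of ours, with sample vectors of our own choosing) is not part of the paper's argument:

```python
import numpy as np

def gram(rows):
    # Gram matrix G = B B^T, where the i-th row of B is x_i
    B = np.array(rows, dtype=float)
    return B @ B.T

# Linearly independent pair: det G > 0, and det G = V^2 (squared area here)
x1, x2 = [1, 0, 1], [0, 1, 1]
assert np.isclose(np.linalg.det(gram([x1, x2])), 3.0)  # |x1 x x2|^2 = 3

# Linearly dependent pair: det G = 0, which is Gram's criterion
assert np.isclose(np.linalg.det(gram([[1, 0, 1], [2, 0, 2]])), 0.0)
```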
Given linearly independent vectors $x_{1}, x_{2}, \ldots, x_{m} \in \mathbb{R}^{n}$, the $m$-dimensional parallelepiped with sides $x_{1}, x_{2}, \ldots, x_{m}$ is, by definition, the set \begin{align*} P = \{ t_{1}x_{1} + t_{2}x_{2} + \cdots + t_{m}x_{m} \, | \, t_{k} \in [0,1], 1 \leq k \leq m \}. \end{align*} For any given set of vectors $\{ x_{1}, \ldots , x_{m} \} \subset \mathbb{R}^{n}$, the $m$-dimensional volume $V$ of the parallelepiped $P$ with sides $x_{1}, \ldots , x_{m}$ satisfies $V^{2} = \det G(x_{1}, \ldots , x_{m})$. See Courant and John \cite{CJ} for a comprehensive treatment of volumes of parallelepipeds. A set of points $S$ in the Hamming cube $H_{n}$ is said to be \textit{full-dimensional} if the convex hull of $S$ has positive $n$-dimensional volume. Notably, a set $S$ of $n + 1$ points in $H_{n}$ is full-dimensional if and only if $S$ is an affinely independent subset of $\mathbb{R}^{n}$. As an example, it is well-known that every $n + 1$ point metric tree $T$ endowed with the usual graph metric $\rho$ embeds isometrically into $H_{n}$. So it follows from results of Hjorth et al.\ \cite{Hj1} and Murugan \cite{MM} that the embedded vertices of $T$ form a full-dimensional subset of $H_{n}$. The determinant of the distance matrix $D$ of any such metric tree $(T, \rho)$ is given by $\det (D) = (-1)^{n}n2^{n - 1}$ and hence does not depend upon the geometry of the particular tree $T$. This remarkable formula is due to Graham and Pollak \cite{GP}. Graham and Winkler \cite{GW, GW2} generalized this tree result by calculating the determinant of the distance matrix $D$ of any full-dimensional set of $n+1$ points $\{ x_{0}, x_{1}, \ldots , x_{n} \}$ in $H_{n}$. They showed that \begin{align}\label{twinkle} \det (D) & = (-1)^{n}n2^{n-1} \det G(x_{1} - x_{0}, \ldots , x_{n} - x_{0}) \nonumber \\ & = (-1)^{n}n2^{n-1} V^{2}, \end{align} where $V$ is the volume of the parallelepiped with sides $x_{1} - x_{0}, \ldots , x_{n} - x_{0}$.
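The Graham--Winkler identity can be verified numerically on a small example; in the following sketch (our own illustration, with example points of our choosing) both sides of the identity are computed for a full-dimensional $4$-point subset of $H_{3}$:

```python
import numpy as np

# Example points in H_3 with x_0 = 0; these are affinely independent
points = np.array([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1)], dtype=float)
n = 3

# l1 distance matrix D, and Gram matrix G of the differences x_i - x_0
D = np.abs(points[:, None, :] - points[None, :, :]).sum(axis=2)
B = points[1:] - points[0]
G = B @ B.T

lhs = np.linalg.det(D)
rhs = (-1) ** n * n * 2 ** (n - 1) * np.linalg.det(G)
assert np.isclose(lhs, rhs)   # both sides equal -12 for this example
```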
In this article we calculate the determinant of the distance matrix $D$ of an arbitrary set $\{ x_{0}, x_{1}, \ldots , x_{m} \} \subseteq H_{n}$, $m \geq 1$. These calculations are implemented in Section \ref{Sec 2}. The resulting formulas are stated in Lemma \ref{Lemma 1}, Theorem \ref{Thm 1} and Theorem \ref{Thm 2}. It follows that $\det (D) \not= 0$ if and only if the set of vectors $\{ x_{0}, x_{1}, \ldots , x_{m} \}$ is affinely independent. In the case $m = n$ we show how to reduce to the aforementioned result (\ref{twinkle}) of Graham and Winkler \cite{GW, GW2}. In Remark \ref{Rem 1}, we point out that there is a significant difference between the cases $m < n$ and $m = n$. In Section \ref{Sec 3} we use the formulas from Section \ref{Sec 2} and a theorem of S\'{a}nchez \cite{SS} to provide a new proof of Murugan's \cite{MM} classification of the subsets of $H_{n}$ that have strict $1$-negative type. Given the distance matrix $D$ of an affinely independent set $ \{ x_{0}, x_{1}, \ldots , x_{m} \} \subset H_{n}$, it becomes necessary to calculate a formula for the inner product $\langle D^{-1}\mathbf{1},\mathbf{1}\rangle$, where $\mathbf{1}$ is the column $(m+1)$-vector all of whose coordinates are $1$. This is done in Corollary \ref{Cor M}. In Section \ref{Sec 4} we consider the case of an embedded $n+1$ point unweighted metric tree in $H_{n}$ and show that $\langle D^{-1}\mathbf{1},\mathbf{1}\rangle = 2/n$. We conjecture that this quantity is, in fact, minimal over all affinely independent subsets $\{ x_{0}, x_{1}, \ldots , x_{m} \}$ of $H_{n}$ and provide some numerical evidence. Throughout we assume that $n \geq 2$ is a fixed integer and let $H_{n}$ denote the $n$-dimensional hypercube $\{ 0,1 \}^{n}$ endowed with the $\ell_{1}$-metric. \section{Distance matrices of subsets of the Hamming cube}\label{Sec 2} The hypercube $\{0,1\}^n$ has a natural additive group structure given by elementwise addition modulo 2. 
It is easy to check that the $\ell_1$ metric is invariant under translation in this group. In particular, if $X = \{x_0,x_1,\dots,x_m\}$ is a given subset of $H_{n}$, then the map $\Phi(x) = x-x_0$ is an isometric isomorphism from $X$ to the set $X' = \{{\bf 0}, x_1-x_0,\dots,x_m-x_0\}$ (where $\mathbf{0}$ denotes the zero vector in $\{ 0,1 \}^{n}$). Note that if we set $x_i' = x_i - x_0$, $0 \le i \le m$, then obviously $G(x_1-x_0,\dots,x_m-x_0) = G(x_1' - x_0',\dots,x_m' - x_0')$ and consequently the $m$-dimensional parallelepiped with edges $x_{1} - x_{0}, \ldots, x_{m} - x_{0}$ has the same volume as the $m$-dimensional parallelepiped with edges $x_{1}^{\prime}, \ldots, x_{m}^{\prime}$. Thus, when considering a set $\{ x_{0}, x_{1}, \ldots, x_{m} \}\subseteq H_n$ we may assume that $x_{0} = \mathbf{0}$ without altering the distance matrix $D$ and without altering the volume of the $m$-dimensional parallelepiped with edges $x_{1} - x_{0}, \ldots, x_{m} - x_{0}$. Henceforth, given a set $\{ x_{0}, x_{1}, \ldots, x_{m} \} \subseteq H_{n}$, we will assume that $x_{0} = \mathbf{0}$ unless stated otherwise. Throughout the distance matrix of $\{ x_{0}, x_{1}, \ldots, x_{m} \}$ will be denoted by $D = (d(x_{i}, x_{j}))_{i,j = 0}^{m} = ( \| x_{i} - x_{j} \|_{1})_{i,j = 0}^{m}$. We associate with $D$ the $m \times n$ matrix $B$ whose $i$th row is given by $x_{i}$, $1 \leq i \leq m$. So if $x_{i} = (x_{i,1}, x_{i,2}, \ldots, x_{i,n})$, then $B = (x_{i,j})_{i = 1,j = 1}^{m,n}$. The matrix product $BB^{T}$ is the $m \times m$ Gram matrix $G = G(x_{1}, \ldots, x_{m}) = (x_{i} \cdot x_{j})_{i,j = 1}^{m}$. It is also useful to let $u\in\mathbb{R}^{m}$ denote the column vector $u=(x_{1} \cdot x_{1}, x_{2} \cdot x_{2}, \ldots, x_{m} \cdot x_{m})^T$. Finally, we will use $\langle\cdot,\cdot\rangle$ to denote the standard inner product on ${\mathbb R}^{n}$ when dealing with quadratic forms such as $(G^{-1}u)\cdot u=u^{T}G^{-1}u=\langle G^{-1}u,u\rangle$.
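To make this notation concrete, the objects $D$, $B$, $G$ and $u$ can be assembled as follows (a hedged sketch with sample points of our own choosing); note in passing that $B\mathbf{1}=u$, since the coordinates of each $x_{i}$ are $0$ or $1$:

```python
import numpy as np

# Sample subset of H_3 with x_0 = 0 and m = 2
points = np.array([(0, 0, 0), (1, 1, 0), (0, 1, 1)], dtype=float)

D = np.abs(points[:, None, :] - points[None, :, :]).sum(axis=2)  # l1 distances
B = points[1:]                  # i-th row is x_i, 1 <= i <= m
G = B @ B.T                     # Gram matrix (x_i . x_j)
u = np.diag(G)                  # u = (x_1.x_1, ..., x_m.x_m)

assert np.allclose(u, B.sum(axis=1))           # B 1 = u for 0/1 vectors
assert np.allclose(D, D.T) and D.trace() == 0  # D is a distance matrix
```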
With these notational preliminaries in mind it is instructive to compare $\det (D)$ to $\det (G)$. \begin{lemma}\label{Lemma 1} Let $\{ x_{0}, x_{1}, \ldots, x_{m} \}$, $m \geq 1$, be a subset of the Hamming cube $H_{n}$. Then \[ \det(D) = (-1)^{m-1}2^{m-1} \det \begin{pmatrix} 0 & u^{T} \\ u & G \end{pmatrix}. \] \end{lemma} \begin{proof} The following simple identity will be used. For $x, y \in H_{n}$, \begin{align}\label{eqn1} d(x, y) = (x - y) \cdot (x - y) = (x \cdot x) + (y \cdot y) - 2(x \cdot y). \end{align} Let $R_{i}$ and $C_{j}$ denote the $i$-th row and $j$-th column of $D$ respectively. Consider the matrix $D^{\prime}$ that is obtained by applying the following elementary row and column operations to $D$. For $2 \leq i,j \leq m+1$ replace $R_{i}$ by $R_{i}-R_{1}$ and then replace $C_{j}$ by $C_{j}-C_{1}$. Using (\ref{eqn1}) we see that \[D^{\prime}=\begin{pmatrix} 0 & u^{T} \\ u & -2G \end{pmatrix}\] and so \[\det(D) = \det \begin{pmatrix} 0 & u^{T} \\ u & -2G \end{pmatrix}.\] The result now follows by factoring $-2$ out of each of the entries of $D^{\prime}$ and then restoring a factor of $-2$ to the first row and to the first column. \end{proof} \begin{thm}\label{Thm 1} Let $\{ x_{0}, x_{1}, \ldots, x_{m} \}$, $m \geq 1$, be a subset of the Hamming cube $H_{n}$. If the set of vectors $\{ x_{1}, x_{2}, \dots, x_{m} \}$ is linearly dependent then $\det(D) = 0$. Consequently, \[ \det \begin{pmatrix} 0 & u^{T} \\ u & G \end{pmatrix} = 0. \] \end{thm} \begin{proof} Suppose that $\{ x_{1}, x_{2}, \dots, x_{m} \}$ is a linearly dependent subset of $H_{n}$. Then there exist scalars $c_{1}, \dots ,c_{m}\in\mathbb{R}$, not all zero, such that $\sum_{j=1}^{m} c_{j}x_{j} = \mathbf{0}$. Now set $c_{0} =- \sum_{j=1}^{m} c_{j}$ and $c =(c_{0}, c_{1}, \ldots,c_{m})^T$. Then for each $i = 0, \ldots ,m$, \begin{align*} (Dc)_{i} & = \sum_{j=0}^{m} d(x_{i}, x_{j})c_{j} \\ & = d(x_{i}, x_{0})c_{0} + \sum_{j=1}^{m} \left( \sum_{k=1}^n|x_{i,k} - x_{j,k}| \right)c_{j}.
\end{align*} Note that since $x_{i,k}$, $x_{j,k}$ and $|x_{i,k} - x_{j,k}|$ are all either $0$ or $1$, we have that \begin{align*} |x_{i,k} - x_{j,k}| & = (x_{i,k} - x_{j,k})^{2} = x_{i,k} - 2x_{i,k}x_{j,k} + x_{j,k}. \end{align*} Recalling that $\sum_{j=1}^{m} c_{j}x_{j} = \mathbf{0}$ and swapping the order of summation yields \begin{align*} (Dc)_{i} & = c_{0} \sum_{k=1}^{n} x_{i,k} + \sum_{k=1}^{n} \left( \sum_{j=1}^{m} \left(x_{i,k} - 2x_{i,k}x_{j,k} + x_{j,k}\right)c_{j} \right) \\ & = c_{0} \sum_{k=1}^{n} x_{i,k} + \sum_{k=1}^{n}\sum_{j=1}^{m}x_{i,k}c_{j} + \sum_{k=1}^{n}(1-2x_{i,k})\bigg(\sum_{j=1}^{m}c_{j}x_{j}\bigg)_{k}\\ & = c_{0} \sum_{k=1}^{n} x_{i,k} + \left(\sum_{j=1}^{m} c_{j}\right)\sum_{k=1}^{n}x_{i,k}\\ & = (c_{0} - c_{0})\sum_{k=1}^{n} x_{i,k} \\ & = 0. \end{align*} Hence $c$ is a nonzero element of the kernel of $D$, and so we conclude that $\det (D)=0$. The fact that \[ \det \begin{pmatrix} 0 & u^{T} \\ u & G \end{pmatrix} = 0 \] now follows immediately from this and Lemma \ref{Lemma 1}. \end{proof} \begin{lemma}\label{Lemma 2} Suppose that $W$, $X$, $Y$ and $Z$ are matrices of sizes $j \times j$, $j \times k$, $k \times j$ and $k \times k$ (respectively) and that $Z$ is invertible. Then \[\det\begin{pmatrix} W & X \\ Y & Z \end{pmatrix}=\det(Z)\det(W-XZ^{-1}Y).\] \end{lemma} \begin{proof} Simply note the factorization \[\begin{pmatrix} W & X \\ Y & Z \end{pmatrix}=\begin{pmatrix} I & X \\ 0 & Z \end{pmatrix}\begin{pmatrix} W-XZ^{-1}Y & 0 \\ \ Z^{-1}Y & I \end{pmatrix},\] and take the determinant of both sides. \end{proof} \begin{thm}\label{Thm 2} Let $\{ x_{0}, x_{1}, \ldots, x_{m} \}$, $m \geq 1$, be a subset of the Hamming cube $H_{n}$. 
If the set of vectors $\{ x_{1}, x_{2}, \dots, x_{m} \}$ is linearly independent, then \begin{align*} \det (D) & = (-1)^{m}2^{m-1}\det(G)\langle G^{-1}u,u\rangle \\ & = (-1)^{m}2^{m-1}V^{2}\langle G^{-1}u,u\rangle, \end{align*} where $V$ is the volume of the $m$-dimensional parallelepiped with sides $x_{1}, x_{2}, \dots, x_{m}$. In particular, $\det (D) \not= 0$. \end{thm} \begin{proof} The Gram matrix $G$ is invertible (and thus has a non-zero determinant) because the vectors $x_{1}, x_{2}, \dots, x_{m}$ are linearly independent. Hence the formulas for $\det (D)$ follow immediately from Lemmas \ref{Lemma 1} and \ref{Lemma 2}. Moreover, because the Gram matrix $G$ is positive definite and $u \not= \mathbf{0}$ (each $x_{i}$, $1 \leq i \leq m$, is distinct from $x_{0} = \mathbf{0}$, so each entry $x_{i} \cdot x_{i}$ of $u$ is positive), we have $\langle G^{-1}u,u\rangle \not= 0$. Consequently, $\det (D) \not= 0$. \end{proof} The following corollary holds for any set of vectors $\{ x_{0}, x_{1}, \ldots, x_{m} \} \subseteq H_{n}$. In particular, it may be the case that $x_{0} \not= \mathbf{0}$. \begin{cor}\label{affine} Let $\{ x_{0}, x_{1}, \ldots, x_{m} \}$, $m \geq 1$, be a subset of the Hamming cube $H_{n}$. Then $\det (D) \not= 0$ if and only if the set of vectors $\{ x_{0}, x_{1}, \ldots, x_{m} \}$ is affinely independent. \end{cor} \begin{proof} This is an immediate consequence of Theorems \ref{Thm 1} and \ref{Thm 2}. \end{proof} We now recover the result (\ref{twinkle}) of Graham and Winkler \cite{GW, GW2} stated in Section \ref{Sec 1} by calculating $\langle G^{-1}u,u\rangle$ in the case $m = n$. \begin{thm}\label{Thm 3} Let $\{ x_{0}, x_{1}, \ldots, x_{n} \}$ be a subset of the Hamming cube $H_{n}$. If the set of vectors $\{ x_{1}, x_{2}, \dots, x_{n} \}$ is linearly independent, then $\langle G^{-1}u,u\rangle = n$. \end{thm} \begin{proof} Let $\mathbf{1}$ denote the vector in ${\mathbb R}^{n}$ all of whose coordinates are $1$. It is easy to verify that $B\mathbf{1}=u$. In this setting ($m=n$) we have the advantage that the matrix $B$ is invertible and hence $B^{-1}u=\mathbf{1}$.
Then we simply calculate that \begin{align*} \langle G^{-1}u,u\rangle&=\langle(BB^{T})^{-1}u,u\rangle \\&=\langle(B^{-1})^{T}B^{-1}u,u\rangle \\&=\langle B^{-1}u,B^{-1}u\rangle \\&=\langle\mathbf{1},\mathbf{1}\rangle \\&=n. \end{align*} \end{proof} \begin{rem}\label{Rem 1} Let $\{ x_{0}, x_{1}, \ldots, x_{m} \}$, $m \geq 1$, be a subset of the Hamming cube $H_{n}$. It is worth noting that if the set of vectors $\{ x_{1}, x_{2}, \dots, x_{m} \}$ is linearly independent and $m < n$, then it need not be the case that $\langle G^{-1}u,u\rangle = m$. If, for instance, we consider two linearly independent vectors $x_{1}, x_{2} \in \{ 0,1 \}^{n}$, then direct calculations show that \begin{align}\label{eqn3} \langle G^{-1}u,u\rangle = \frac{(x_{1} \cdot x_{1})(x_{2} \cdot x_{2})(x_{1} - x_{2}) \cdot (x_{1} - x_{2})}{\det (G)}. \end{align} If $n = 2$ the quantity on the right side of (\ref{eqn3}) is easily seen to be equal to $2$ and this is consistent with Theorem \ref{Thm 3}. But, in general, if $n > 2$ the quantity on the right side of (\ref{eqn3}) is not even constant. To see that this is so it suffices to consider the case $n = 3$. If $x_{1} = (1,1,1)$ and $x_{2}= (1,1,0)$, then $\langle G^{-1}u,u\rangle = 3$. On the other hand, if $x_{1} = (1,0,1)$ and $x_{2}= (1,1,0)$, then $\langle G^{-1}u,u\rangle = 8/3$. In general, the calculation of $\langle G^{-1}u,u\rangle$ is more nuanced in the case $m < n$ because $B$ is no longer a square matrix and so is not invertible. \end{rem} The following corollary holds for any set of vectors $\{ x_{0}, x_{1}, \ldots, x_{n} \} \subseteq H_{n}$. In particular, it may be the case that $x_{0} \not= \mathbf{0}$. \begin{cor} Let $\{ x_{0}, x_{1}, \ldots, x_{n} \}$ be a subset of the Hamming cube $H_{n}$. If the set of vectors $\{ x_{0}, x_{1}, \dots, x_{n} \}$ is affinely independent, then $\det (D) = (-1)^{n}n2^{n-1}V^{2}$, where $V$ is the volume of the $n$-dimensional parallelepiped with sides $x_{j} - x_{0}$, $1 \leq j \leq n$.
\end{cor} \begin{proof} Immediate from Theorems \ref{Thm 2} and \ref{Thm 3}. \end{proof} In the event that an affinely independent set $\{ x_{0}, x_{1}, \dots, x_{n} \} \subset H_{n}$ is an embedded $n + 1$ point unweighted tree we have that $\det (D) = (-1)^{n}n2^{n-1}$ by the celebrated formula of Graham and Pollak \cite{GP}. It is worth noting that there exist affinely independent sets $\{ x_{0}, x_{1}, \dots, x_{n} \} \subset H_{n}$ that satisfy this same formula but which are not embedded trees. For instance, if we set $x_{0} = \mathbf{0}$, $x_{1} = (1,0,0)$, $x_{2} = (0,1,0)$ and $x_{3} = (1,1,1)$ in $H_{3}$, then it is easy to verify that $\det(D) = -12$. However, $\{ x_{0}, x_{1}, x_{2}, x_{3} \}$ is certainly not an embedded tree in $H_{3}$. \section{Applications to supremal negative type}\label{Sec 3} The results of Section \ref{Sec 2} afford a new analysis of negative type properties of subsets of the Hamming cube $H_{n}$. In particular, we develop a new proof of Murugan's \cite{MM} classification of the subsets of $H_{n}$ that have strict $1$-negative type. In order to proceed we need to recall some classical definitions and related theorems. \begin{defn} Let $(X, d)$ be a metric space and suppose that $p \geq 0$. Then: \begin{enumerate} \item[(a)] $(X, d)$ has \textit{$p$-negative type} iff for each finite subset $\{ x_{0}, \ldots, x_{m} \}$ of $X$ and each choice of scalars $\xi_{0}, \ldots, \xi_{m}$ such that $\xi_{0} + \cdots + \xi_{m} = 0$, we have \begin{eqnarray}\label{neg type} \sum_{i,j=0}^{m} d(x_{i}, x_{j})^{p} \xi_{i} \xi_{j} & \le & 0. \end{eqnarray} \item[(b)] $(X, d)$ has \textit{strict $p$-negative type} iff $(X, d)$ has \textit{$p$-negative type} and, moreover, each inequality (\ref{neg type}) is strict whenever $(\xi_{0}, \ldots, \xi_{m}) \not= \mathbf{0}$.
\item[(c)] The \textit{supremal negative type} of $(X, d)$, denoted by $\wp_{(X,d)}$ or simply $\wp_{X}$ when the metric $d$ is clear, is defined to be the supremum of all $p \geq 0$ such that $(X, d)$ has $p$-negative type. \end{enumerate} \end{defn} Considerations of negative type arose classically in relation to fundamental isometric embedding problems. For instance, Schoenberg famously determined that a metric space $(X, d)$ embeds isometrically into a Hilbert space $H$ iff $(X, d)$ has $2$-negative type. Moreover, the range of the embedding will be an affinely independent subset of $H$ iff $(X, d)$ has strict $2$-negative type. Schoenberg further determined that if a metric space $(X, d)$ has $p$-negative type then it has $q$-negative type for all $q \leq p$ and that $\wp_{(X, d)}$ is a maximum whenever it is finite. These results appear in Schoenberg \cite{IS1, IS2, IS3}. Subsets of $\ell_{1}$ are well-known to have $1$-negative type. (See, for instance, Wells and Williams \cite[Theorem 4.10]{WW}.) Explicitly determining subsets of $\ell_{1}$ that have strict $1$-negative type is a significantly more challenging (and largely open) problem. A nice result in this direction is the following theorem of Murugan \cite{MM}: A subset $X = \{ x_{0}, x_{1}, \ldots, x_{m}\}$ of the Hamming cube $H_{n}$ has supremal negative type $\wp_{X} = 1$ iff the set of vectors $\{ x_{0}, x_{1}, \ldots, x_{m}\}$ is affinely dependent. Equivalently, since the supremal negative type of a finite metric space cannot be strict by Li and Weston \cite{LW}, it follows that $\wp_{X} > 1$ iff the set of vectors $\{ x_{0}, x_{1}, \ldots, x_{m}\}$ is affinely independent. Stated this way, we see that there is a strong correlation between Murugan's theorem and Corollary \ref{affine}.
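This correlation can also be observed computationally. The following hedged brute-force sketch (our own illustration, not drawn from the paper) confirms, for every $4$-point subset of $H_{3}$, that the distance matrix is singular exactly when the points are affinely dependent:

```python
import numpy as np
from itertools import combinations, product

cube = [np.array(p, dtype=float) for p in product((0, 1), repeat=3)]

def l1_distance_matrix(pts):
    return np.array([[np.abs(x - y).sum() for y in pts] for x in pts])

# Check Corollary-style equivalence over all 4-point subsets of H_3
for pts in combinations(cube, 4):
    diffs = np.array([p - pts[0] for p in pts[1:]])
    affinely_independent = np.linalg.matrix_rank(diffs) == 3
    nonsingular = abs(np.linalg.det(l1_distance_matrix(pts))) > 1e-8
    assert affinely_independent == nonsingular
```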
For any metric space $(X, d)$ and any $\alpha \in (0, 1)$, the so-called \textit{metric transform} $d^{\alpha}$ is also a metric on $X$ and it is easy to verify that $\wp_{(X, d^\alpha)} = \alpha^{-1}\wp_{(X,d)}$. Now, for $p \geq 1$, let $d_{p}$ denote the $\ell_{p}$-metric on $\mathbb{R}^{n}$. For any $x, y \in \{ 0, 1\}^{n}$ it is plain to see that $d_{p}(x, y) = d_{1}(x, y)^{1/p}$. So, for any $X \subseteq \{ 0, 1\}^{n}$, it follows that $\wp_{(X,d_{p})} = p \wp_{(X,d_{1})}$. As $\wp_{(X, d_{1})} \geq 1$, we deduce that $\wp_{(X, d_{p})} \geq p$ for all $p \geq 1$. It is also worth noting that in the case $p = \infty$, $d_{\infty}$ is necessarily the discrete metric on $X$, and so $\wp_{(X, d_{\infty})} = \infty$. In general, given a metric space $(X, d)$, explicitly calculating or even estimating $\wp_{(X, d)}$ is a difficult exercise in combinatorial optimization. In the case of a finite metric space $(X, d) = (\{ x_{0}, \ldots, x_{m} \}, d)$, S{\'a}nchez \cite{SS} gave an explicit formula for $\wp_{(X,d)}$ in terms of the underlying $p$-distance matrices $D_{p} = (d(x_{i}, x_{j})^{p})_{i, j = 0}^{m}$, $p \geq 0$. Namely, \begin{equation}\label{eqn4} \wp_{(X,d)} = \min \{ p \,:\, \text{$\det(D_p) = 0$ or $\ip<D_p^{-1}\mathbf{1},\mathbf{1}> = 0$} \} \end{equation} where $\mathbf{1}$ is the column vector all of whose coordinates are $1$. In particular, this shows that if $\det(D_p) = 0$ then $\wp_{(X,d)} \le p$. S{\'a}nchez' proof of (\ref{eqn4}) depends upon the following theorem. \begin{thm}[S{\'a}nchez \cite{SS}]\label{Thm 3.5} Let $(X, d)$ be a finite metric space with $|X| > 1$ that has $p$-negative type. Then $(X, d)$ has strict $p$-negative type iff \begin{enumerate} \item $\det (D_{p}) \not= 0$, and \item $\ip<D_p^{-1}\mathbf{1},\mathbf{1}> \not= 0$. \end{enumerate} \end{thm} Now consider a set $X = \{ x_{0}, \ldots, x_{m} \} \subseteq \{ 0, 1\}^{n}$. For $p \geq 1$, let $D^{(p)}$ denote the $1$-distance matrix for $(X, d_{p})$.
In other words, $D^{(p)} = ( \| x_{i} - x_{j} \|_{p})_{i,j = 0}^{m}$ where $\| \cdot \|_{p}$ denotes the $\ell_{p}$-norm on $\mathbb{R}^{n}$. As per our observations above, we have $$D_{p}^{(p)} = (d_{p}(x_{i}, x_{j})^{p})_{i,j = 0}^{m} = (d_{1}(x_{i}, x_{j}))_{i,j = 0}^{m} = D_{1}^{(1)}$$ for all $p \geq 1$. Notice that $D_{1}^{(1)}$ is just $D$ according to the notation of Section \ref{Sec 2}. Hence, by applying Theorem \ref{Thm 1}, we see that if the set $X$ is affinely dependent, then $\det (D_{p}^{(p)}) = \det (D_{1}^{(1)}) = 0$ for all $p \geq 1$. So in this case we deduce that $\wp_{(X, d_{p})} = p$ for all $p \geq 1$. In particular, by applying Theorem \ref{Thm 3.5}, it follows that $(X, d_{p})$ does not have strict $p$-negative type for any $p \geq 1$. Specializing to the case $p = 1$ provides a new proof of one implication of Murugan's theorem. To establish the converse implication we need to develop two additional results. The first is a variant of Lemma \ref{Lemma 1}. \begin{thm}\label{Thm 4} Let $\{ x_{0}, x_{1}, \ldots, x_{m} \}$, $m \geq 1$, be a subset of the Hamming cube $H_{n}$. Then \[ \det\begin{pmatrix} 0 & \mathbf{1}^{T} \\ \mathbf{1} & D \end{pmatrix}=(-1)^{m-1}2^{m}\det(G), \] where $D = D_{1}^{(1)} = (d_{1}(x_{i}, x_{j}))_{i,j = 0}^{m}$. \end{thm} \begin{proof} Recall that $G$ denotes the Gram matrix $G(x_{1}, \ldots, x_{m}) = (x_{i} \cdot x_{j})_{i,j = 1}^{m}$. Let $u=(x_{1}\cdot x_{1}, \ldots, x_{m}\cdot x_{m})^{T}$ and $A=((x_{i}\cdot x_{i})+(x_{j}\cdot x_{j})-2(x_{i}\cdot x_{j}))_{i,j=1}^{m}$. Also, for $k \geq 1$, let $\mathbf{1}_{k}$ denote the column vector in $\mathbb{R}^{k}$ all of whose entries are $1$. Using (\ref{eqn1}) we have that \[ \begin{pmatrix} 0 & \mathbf{1}_{m+1}^{T} \\ \mathbf{1}_{m+1} & D \end{pmatrix}=\begin{pmatrix} 0 & 1 & \mathbf{1}_{m}^{T} \\ 1 & 0 & u^{T} \\ \mathbf{1}_{m} & u & A \end{pmatrix}. \] We proceed by applying elementary row and column operations, as in the proof of Lemma \ref{Lemma 1}. 
To this end, let $R_{i}$ and $C_{j}$ denote the $i$-th row and $j$-th column of the above matrix, respectively. Then for $3\leq i,j\leq m+2$, replace $R_{i}$ by $R_{i}-R_{2}$ and then replace $C_{j}$ by $C_{j}-C_{2}$. This gives that \[ \det\begin{pmatrix} 0 & \mathbf{1}_{m+1}^{T} \\ \mathbf{1}_{m+1} & D \end{pmatrix}=\det\begin{pmatrix} 0 & 1 & \mathbf{1}_{m}^{T} \\ 1 & 0 & u^{T} \\ \mathbf{1}_{m} & u & A \end{pmatrix}=\det\begin{pmatrix} 0 & 1 & \mathbf{0}_{m}^{T} \\ 1 & 0 & u^{T} \\ \mathbf{0}_{m} & u & -2G \end{pmatrix} \] where $\mathbf{0}_{m}$ denotes the zero vector in $\mathbb{R}^{m}$. Now, factoring $-2$ out of every entry of the matrix and then restoring that factor in the second row and the second column (so the net effect on the determinant is a factor of $(-2)^{m}$), we see that \[ \det\begin{pmatrix} 0 & \mathbf{1}_{m+1}^{T} \\ \mathbf{1}_{m+1} & D \end{pmatrix}=(-2)^{m}\det\begin{pmatrix} 0 & 1 & \mathbf{0}_{m}^{T} \\ 1 & 0 & u^{T} \\ \mathbf{0}_{m} & u & G \end{pmatrix}. \] But now we just expand along the top row twice to obtain \begin{align*} \det\begin{pmatrix} 0 & \mathbf{1}_{m+1}^{T} \\ \mathbf{1}_{m+1} & D \end{pmatrix}&=(-2)^{m}\det\begin{pmatrix} 0 & 1 & \mathbf{0}_{m}^{T} \\ 1 & 0 & u^{T} \\ \mathbf{0}_{m} & u & G \end{pmatrix} \\&=-(-2)^{m}\det\begin{pmatrix} 1 & u^{T} \\ \mathbf{0}_{m} & G \end{pmatrix} \\&=-(-2)^{m}\det(G) \\&=(-1)^{m-1}2^{m}\det(G), \end{align*} as required. \end{proof} \begin{cor}\label{Cor M} Let $\{ x_{0}, x_{1}, \ldots, x_{m} \}$, $m \geq 1$, be a subset of the Hamming cube $H_{n}$. If the set of vectors $\{ x_{0}, x_{1}, \dots, x_{m} \}$ is affinely independent, then \[ \ip< D^{-1} \mathbf{1},\mathbf{1}>=\frac{2}{\langle G^{-1}u,u\rangle} > 0, \] where $D = D_{1}^{(1)} = (d_{1}(x_{i}, x_{j}))_{i,j = 0}^{m}$. \end{cor} \begin{proof} Recall that $G$ denotes the Gram matrix $G(x_{1}, \ldots, x_{m}) = (x_{i} \cdot x_{j})_{i,j = 1}^{m}$. Since $D$ is invertible (by Theorem \ref{Thm 2}), we may apply Lemma \ref{Lemma 2} with $W=0$, $X=\mathbf{1}^{T}$, $Y=\mathbf{1}$ and $Z=D$.
This gives that \[ \det\begin{pmatrix} 0 & \mathbf{1}^{T} \\ \mathbf{1} & D \end{pmatrix}=-\det(D)\ip< D^{-1} \mathbf{1},\mathbf{1}>. \] So, by Theorems \ref{Thm 2} and \ref{Thm 4}, we see that \begin{align*} \langle D^{-1}\mathbf{1},\mathbf{1}\rangle&=-\frac{\det\begin{pmatrix} 0 & \mathbf{1}^{T} \\ \mathbf{1} & D \end{pmatrix}}{\det(D)} \\&=-\frac{(-1)^{m-1}2^{m}\det(G)}{(-1)^{m}2^{m-1}\det(G)\langle G^{-1}u,u\rangle} \\&=\frac{2}{\langle G^{-1}u,u\rangle}. \end{align*} Moreover, the Gram matrix $G$ is positive definite because the vectors $x_{1},\dots,x_{m}$ are linearly independent. Hence $G^{-1}$ is positive definite, and so $\langle G^{-1}u,u\rangle > 0$. \end{proof} Now if the set of vectors $X = \{ x_{0}, \ldots, x_{m} \} \subseteq H_{n}$ is affinely independent, then $\det (D) \not= 0$ by Theorem \ref{Thm 2}. Moreover, $ \ip< D^{-1} \mathbf{1},\mathbf{1}> > 0$ by Corollary \ref{Cor M}. So we see that $\wp_{(X, d_{1})} > 1$ by (\ref{eqn4}) and Theorem \ref{Thm 3.5}. In summary, we have obtained the following version of Murugan's theorem. \begin{thm}\label{Murugan} Let $X = \{x_0,x_1,\dots,x_m\}$, $m \geq 1$, be a subset of $\{ 0, 1 \}^{n}$. Then $\wp_{(X,d_p)} \ge p$ for all $p \geq 1$. Furthermore, the following conditions are equivalent. \begin{enumerate} \item $X$ is affinely independent. \item $(X, d_1)$ has strict $1$-negative type. \item $\wp_{(X,d_p)} > p$ for some $p \ge 1$. \item $\wp_{(X,d_p)} > p$ for all $p \ge 1$. \end{enumerate} \end{thm} \section{Affinely independent subsets of the Hamming cube}\label{Sec 4} In the proof of Murugan's result given in the previous section, it was shown that if $\{x_{0},x_{1},\dots,x_{m}\}$ is an affinely independent subset of $H_{n}$, then $\langle D^{-1}\mathbf{1},\mathbf{1}\rangle > 0$ (where $D = D_{1}^{(1)} = (d_{1}(x_{i}, x_{j}))_{i,j = 0}^{m}$). However, there is actually more that can be said in this setting. 
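The two conditions that drive this argument, $\det(D) \neq 0$ and $\langle D^{-1}\mathbf{1},\mathbf{1}\rangle > 0$, are easy to verify numerically on small examples. The sketch below is ours, not part of the paper; it uses only exact rational arithmetic from Python's standard library, and checks both quantities for one affinely independent four-point subset of $H_{3}$.

```python
from fractions import Fraction

# An affinely independent four-point subset of the Hamming cube H_3.
X = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1)]

def hamming(x, y):
    """l_1 (Hamming) distance between two 0/1 vectors."""
    return sum(a != b for a, b in zip(x, y))

m = len(X)
D = [[Fraction(hamming(X[i], X[j])) for j in range(m)] for i in range(m)]

def solve_and_det(A, b):
    """Gaussian elimination in exact rationals: returns (y with A y = b, det A)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    det = Fraction(1)
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        if piv != col:                       # row swap flips the sign of det
            M[col], M[piv] = M[piv], M[col]
            det = -det
        det *= M[col][col]                   # det is the product of the pivots
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    y = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):           # back substitution
        y[r] = (M[r][n] - sum(M[r][c] * y[c] for c in range(r + 1, n))) / M[r][r]
    return y, det

ones = [Fraction(1)] * m
y, det = solve_and_det(D, ones)
quantity = sum(y)   # <D^{-1} 1, 1>, since D is symmetric and D y = 1
print(det, quantity)
```

For this set the run returns $\det(D) = -12 \neq 0$ and $\langle D^{-1}\mathbf{1},\mathbf{1}\rangle = 2/3 > 0$, so $\wp_{(X,d_{1})} > 1$, as Theorem \ref{Murugan} asserts.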
As stated earlier, results of Hjorth et al.\ \cite{Hj1} and Murugan \cite{MM} imply that any unweighted metric tree $T$ on $n+1$ vertices embeds isometrically into $H_{n}$ as an affinely independent set. For such embedded trees we may compute the precise value of $\langle D^{-1}\mathbf{1},\mathbf{1}\rangle$. In fact, just as Graham and Pollak \cite{GP} showed that $\det(D)$ does not depend upon the geometry of the particular tree, we now show that this is also the case for the positive quantity $\langle D^{-1}\mathbf{1},\mathbf{1}\rangle$. \begin{thm}\label{tree} Let $D$ be the distance matrix of an unweighted metric tree on $n+1$ vertices. Then \[ \ip< D^{-1} \mathbf{1},\mathbf{1}> = \frac{2}{n}. \] \end{thm} \begin{proof} Denote the vertices of the tree by $v_{0},\dots,v_{n}$ and for each $i$, $0\leq i\leq n$, let $\delta_{i}$ be the degree of the vertex $v_{i}$. Let $A=(a_{ij})_{i,j=0}^{n}$ be the adjacency matrix of the tree and write $D^{-1}=(d_{ij}^{\ast})_{i,j=0}^{n}$. By Graham and Lov{\'a}sz \cite[Lemma 1]{GL}, \[ d_{ii}^{\ast}=\frac{(2-\delta_{i})(2-\delta_{i})}{2n}-\frac{\delta_{i}}{2} \] for all $0\leq i\leq n$, and \[d_{ij}^{\ast}=\frac{(2-\delta_{i})(2-\delta_{j})}{2n}+\frac{a_{ij}}{2} \] for all $0\leq i,j\leq n$ such that $i\neq j$. We then compute that \begin{align*} \langle D^{-1}\mathbf{1},\mathbf{1}\rangle&=\sum_{i,j=0}^{n}d_{ij}^{\ast} \\&=\sum_{\substack{i,j=0 \\ i\neq j}}^{n}d_{ij}^{\ast}+\sum_{i=0}^{n}d_{ii}^{\ast} \\&=\frac{1}{2n}\sum_{i,j=0}^{n}(2-\delta_{i})(2-\delta_{j})+\frac{1}{2}\bigg(\sum_{i,j=0}^{n}a_{ij}-\sum_{i=0}^{n}\delta_{i}\bigg) \\&=\frac{1}{2n}\sum_{i,j=0}^{n}(2-\delta_{i})(2-\delta_{j}). \end{align*} Now we use the fact that the sum of the degrees of the vertices of a tree on $n+1$ vertices is $2n$. 
Consequently, \begin{align*} \langle D^{-1}\mathbf{1},\mathbf{1}\rangle&=\frac{1}{2n}\sum_{i,j=0}^{n}(2-\delta_{i})(2-\delta_{j}) \\&=\frac{1}{2n}\bigg(\sum_{i=0}^{n}(2-\delta_{i})\bigg)^{2} \\&=\frac{1}{2n}\bigg(2(n+1)-2n\bigg)^{2} \\&=\frac{2}{n}, \end{align*} as required. \end{proof} The condition $\ip< D^{-1} \mathbf{1},\mathbf{1}> = 2/n$ in Theorem \ref{tree} is not unique to embedded $n + 1$ point trees in $H_{n}$. If, for example, we set $x_{0} = \mathbf{0}$, $x_{1} = (1,0,0)$, $x_{2} = (0,1,0)$ and $x_{3} = (1,1,1)$ in $H_{3}$, then it is easy to verify that $\ip< D^{-1} \mathbf{1},\mathbf{1}> = 2/3$. However, $\{ x_{0}, x_{1}, x_{2}, x_{3} \}$ is certainly not an embedded tree in $H_{3}$. As for what can be said about affinely independent subsets of $H_{n}$ that may not have the structure of an unweighted metric tree, we have the following conjecture. \begin{con*} Let $\{ x_{0}, x_{1}, \ldots, x_{m} \}$, $m \geq 1$, be a subset of the Hamming cube $H_{n}$. If the set of vectors $\{ x_{0}, x_{1}, \ldots, x_{m} \}$ is affinely independent, then $\ip<D^{-1} \mathbf{1},\mathbf{1}> \ge 2/n$. \end{con*} We have confirmed this conjecture using a computer algebra package for all integers $m,n \leq 5$. In addition, tens of thousands of random tests in larger dimensions have not provided any counterexamples to date. No clear arithmetic reason for the conjectured lower bound has come to our attention. Notably, the denominators of the entries of $D^{-1}$ can be large compared to $n$. It is fascinating to ask what, if any, geometric information is encoded by the quantity $\ip< D^{-1} \mathbf{1},\mathbf{1}>$ in this context. \section*{Acknowledgements} The work of the second and third authors was supported by the Research Training Program of the Department of Education and Training of the Australian Government. \bibliographystyle{amsalpha}
https://arxiv.org/abs/1004.3321
On the Sandpile group of the cone of a graph
In this article, we give a partial description of the sandpile group of the cone of the cartesian product of graphs in function of the sandpile group of the cone of their factors. Also, we introduce the concept of uniform homomorphism of graphs and prove that every surjective uniform homomorphism of graphs induces an injective homomorphism between their sandpile groups. As an application of these result we obtain an explicit description of a set of generators of the sandpile group of the cone of the hypercube of dimension d.
\section{Introduction} The {\it sandpile models} were first introduced by Bak, Tang and Wiesenfeld in~\cite{bak87} and~\cite{bak88}, and have been studied under several names in statistical physics, theoretical computer science, algebraic graph theory, and combinatorics. The {\it abelian sandpile model} of a graph was introduced by Dhar in~\cite{dhar90}, which generalizes the sandpile model of a grid given in~\cite{bak87}. The abelian sandpile model of Dhar~\cite{dhar90} begins with a connected graph $G=(V,E)$ and a distinguished vertex $s\in V$, called the sink. Dhar~\cite{dhar90} showed that the set of certain configurations (a configuration of $G$ is a vector in $\mathbb{N}^{V\setminus s}$), called {\it recurrent configurations}, with the vertex-by-vertex sum as a binary operation forms a finite abelian group, called the sandpile group of $G$. It follows from Kirchhoff's Matrix-Tree theorem (see e.g. \cite{biggs93}) that the order of the sandpile group of a graph $G$ is the number of spanning trees of $G$. Mainly, the abelian sandpile group has been studied under the name of sandpile group, denoted by $SP(G,s)$, and critical group, denoted by $K(G)$. It has also been studied under other names, such as Jacobian group, Picard group, and dollar game; see for instance~\cite{biggs97,biggs99,lorenzini89,lorenzini91}. The sandpile group has been completely determined for some families of graphs; see for instance~\cite{biggs99,threshold,cartesian,levine,lorenzini89,lorenzini91,merris,musiker,regulartrees}. The sandpile group of the cartesian product has received special interest; for instance, it has been determined for the following cartesian products of graphs: $P_4\times C_n$~\cite{p4cn}, $K_3\times C_n$~\cite{k3cn}, $K_m\times P_n$~\cite{kmpn}, $C_4\times C_n$~\cite{wang09}, and $K_m\times C_n$~\cite{corralesthesis,wang09p}.
The abstract structure of the sandpile group has been partially described for the hypercube~\cite{Bai} and the cartesian product of complete graphs~\cite{cartesian}. In~\cite{dual} it was proved that the sandpile group of a dual graph $G^*$ is isomorphic to the sandpile group of $G$. Also, in~\cite{berget09} some relations are established between the sandpile group of a graph $G$ and the sandpile group of its line graph. In particular, they proved that if $G$ is non-bipartite and regular, then $K({\rm \bf line}(G))$ is completely determined as a function of $K(G)$. Finally, in~\cite{lorenzini08} a relationship between the eigenvalues and eigenvectors of the Laplacian matrix of a graph and its sandpile group is established. Given a natural number $n$, the \textit{$n$-cone} of a graph $G$, denoted by $c_n(G)$, is the graph obtained from $G$ by adding a new vertex $s$ and $n$ parallel edges between the new vertex $s$ and each vertex of $G$. If $n=1$ we simply write $c(G)$ instead of $c_1(G)$. In this article we study the sandpile group of the cone of a graph. In particular, we give a partial description of the sandpile group of the cone of the cartesian product of graphs as a function of the sandpile group of the cone of their factors. Also, we introduce the concept of uniform homomorphism of graphs and prove that every surjective uniform homomorphism of graphs induces an injective homomorphism between their sandpile groups. As an application of these two results we obtain an explicit description of a set of generators of the sandpile group of the cone of the hypercube of dimension $d$. A \textit{graph} $G$ is a pair $(V,E)$, where $V$ is a finite set and $E$ is a subset of the set of unordered pairs of elements of $V$. The elements of $V$ and $E$ are called \textit{vertices} and \textit{edges}, respectively. If $e=\{x,y\}$, then $x$ and $y$ are \textit{incident} to $e$, $x$ and $y$ are the \textit{ends} of $e$, and $x$ and $y$ are {\it adjacent}.
The \textit{multiplicity} between two vertices $u$ and $v$ of a graph, denoted by $m_{u,v}$, is the number of edges with ends $u$ and $v$. The \textit{degree} of a vertex $x\in G$, denoted by $d_{G}(x)=d(x)$, is the number of edges incident to $x$. A graph $G'=(V',E')$ is a \textit{subgraph} of the graph $G=(V,E)$ if $V'\subseteq V$ and $E'\subseteq E$. An \textit{induced} subgraph $G[V']=(V',E')$ is a subgraph of $G=(V,E)$ such that every edge $e\in E$ that has its ends in $V'$ is in $E'$. The article is organized as follows. In section $2$ the concepts of graph theory that will be needed in the rest of the article are introduced. We also give the combinatorial and algebraic definitions of the sandpile group of $G$ with sink $s_G$. \medskip In section $3$ we introduce the concept of uniform homomorphism of graphs. Let $G$ and $H$ be two graphs and $V\subseteq V(H)$. A $V$-uniform homomorphism between $G$ and $H$ is a mapping $f:V(G)\rightarrow V(H)$ such that for all $x\in V$ and $y\in V(H)$ \[ d_{G[\{u\}\cup S_y]}(u)=m_{x,y} \text{ for all } u \in S_x=f^{-1}(x) \] and $f: V(G)\setminus f^{-1}(V)\rightarrow V(H)\setminus V$ is the identity isomorphism. After introducing the concept of a $V$-uniform homomorphism, we prove the main theorem of this section. \medskip {\bf Theorem 3.5.} {\it If $f: G\rightarrow H$ is a surjective $V$-uniform homomorphism with $f^{-1}(s_H)=\{s_G\}$ and $s_H\notin V\subset V(H)$ such that $V(H)\setminus V$ is a stable set, then the induced mapping $\widetilde{f}: SP(H, s_H)\rightarrow SP(G, s_G)$, given by \[ \widetilde{f}({\bf c})_v= \begin{cases} {\bf c}_{f(v)} & \text{ if } f(v) \in V, \\ {\rm deg}(f)\cdot {\bf c}_{f(v)} & \text{ if } f(v) \notin V, \end{cases} \] is an injective homomorphism of groups.} \medskip Section $4$ is devoted to the study of the sandpile group of the cone of the cartesian product of graphs.
Let ${\bf a}\in \mathbb{Z}^{V(G)}$ and ${\bf b}\in \mathbb{Z}^{V(H)}$ be configurations of the cones of $G$ and $H$, respectively. Defining the cartesian product of configurations by \[ ({\bf a}\Box {\bf b})_{(u,v)}={\bf a}_u+{\bf b}_v \text{ for all } u\in V(G) \text{ and } v \in V(H), \] we have that ${\bf a}\Box {\bf b}$ is a recurrent configuration of the cone of the cartesian product of $G$ and $H$ whenever ${\bf a}$ and ${\bf b}$ are recurrent configurations of $G$ and $H$, respectively. This definition of the cartesian product of configurations leads to the main result of section $4$. {\bf Theorem 4.4.} {\it If ${\bf e}_H$ is the identity of $SP(c(H), s_{c(H)})$, then the mapping \[ \widetilde{\pi}_G: SP(c(G), s_{c(G)})\rightarrow SP(c(G \Box H), s_{c(G \Box H)}) \] given by $\widetilde{\pi}_G ({\bf a})={\bf a}\Box {\bf e}_H$ is an injective homomorphism of groups.} Finally, in section $5$ we use an explicit description of the sandpile group of a thick graph with three vertices, as well as the results obtained in section 3 and section 4, to get a concrete description of a set of generators of the sandpile group of the cone of the hypercube of dimension $d$. More precisely, let $V(Q_d)=\{v_{\bf a} \, | \, {\bf a}\in \{0,1\}^d\}$ be the vertex set of the hypercube of dimension $d$ and let \[ g_{\beta} (r,t)_{v_{\bf a}}= \begin{cases} r & \text{ if } \beta\cdot {\bf a} \text{ is even},\\ t & \text{ if } \beta\cdot {\bf a} \text{ is odd}, \end{cases} \] for all $\beta\in \{0,1\}^d$. Then \[ \widetilde{K}_{\beta}=\{ g_{\beta} (r,t)+ (d-|\beta|){\bf 1}\: | \: 0\leq r,t \leq |\beta| \text{ and either } r=|\beta| \text{ or } t=|\beta| \}\subset \mathbb{Z}^{V(Q_d)} \] is a set of recurrent configurations of $SP(c(Q_d),s_{c(Q_d)})$ that forms a subgroup of $SP(c(Q_{d}),s_{c(Q_d)})$ isomorphic to ${\mathbb Z}_{2|\beta|+1}$. The next theorem gives a description of the sandpile group of the cone of $Q_d$ obtained by gluing together all the subgroups $\widetilde{K}_{\beta}$.
{\bf Theorem 5.3.} {\it Let $k\geq 0$, $d\geq 1$ be natural numbers and let $c_{2k+1}(Q_d)$ be the $(2k+1)$-cone of the hypercube $Q_d$. If $s=V(c_{2k+1}(Q_d))\setminus V(Q_d)$, then \[ SP(c_{2k+1}(Q_d),s)\cong \bigoplus_{i=0}^{d} \mathbb{Z}_{2i+2k+1}^{\binom{d}{i}}. \] Furthermore, $SP(c(Q_{d}),s)=\bigoplus_{\beta \in \{0,1\}^d} \widetilde{K}_{\beta}$.} The introduction of an extra vertex in the cone's construction is fundamental in order to obtain a better-behaved sandpile group. For instance, in 2003, Jacobson, Niedermaier and Reiner~\cite{cartesian} gave a partial description of the sandpile group of the cartesian product of complete graphs. In the same year, Bai~\cite{Bai} proved that the number of invariant factors of the hypercube $Q_k$ is $2^{k-1}-1$ and gave a formula for the number of occurrences of $\mathbb{Z}_2$ in the elementary divisor form of the sandpile group of $Q_k$. However, the full structure of the Sylow $2$-subgroup of the sandpile group of the hypercube is still unknown. \section{Preliminaries} Let $G$ be a graph with $V$ as vertex set and $E$ as edge set. For simplicity, an edge $e=\{x,y\}$ will be denoted by $xy$. The sets of two or more edges with the same ends are called \textit{multiple edges}. A \textit{loop} is an edge incident to a unique vertex. A \textit{multigraph} is a graph with multiple edges and without loops. A \textit{digraph} $G$ is a pair $(V,E)$, where $V$ is a finite set and $E$ is a subset of the set of ordered pairs of elements of $V$. The elements of $V$ and $E$ are called \textit{vertices} and \textit{arcs}, respectively. Given an arc $e=(x,y)$, we say that $x$ is the initial vertex of $e$ and $y$ is the terminal vertex of $e$. The number of arcs with initial vertex $x$ and terminal vertex $y$ will be denoted by $m_{(x,y)}$. The \textit{out-degree} of a vertex $x$ of a digraph, denoted by $d_{G}^+(x)$, is the number of arcs with initial vertex $x$. A vertex $x$ is a {\it sink} if its out-degree is zero.
Moreover, a sink $x$ is a {\it global sink} if for every vertex $y\in G$ there exists a directed path from $y$ to $x$. Given a multigraph $G$ and a vertex $s$ of $G$, let $b(G,s)$ be the digraph with the same vertex set as $G$ and arc set equal to \[ E(b(G,s))=\left( \bigcup_{xy\in E(G\setminus s)} \{(x,y),(y,x)\}\right) \cup \left( \bigcup_{xs\in E(G)} \{(x,s)\}\right). \] Note that $b(G,s)$ is a digraph with global sink $s$. Let $G$ be a digraph, let $s$ be a global sink of $G$, and let $\widetilde{V}$ be the set of non-sink vertices of $G$. \subsection{The sandpile group} There exist several ways to define the sandpile group of a digraph. In this section we will present a combinatorial and an algebraic definition of the sandpile group. \paragraph{\it Algebraic description.} One of the simplest ways to define the sandpile group is by using an algebraic description, known as the critical group. The {\it Laplacian matrix} of $G$, denoted by $L(G)$, is the $|V|\times |V|$ matrix given by \[ L(G)_{u,v}= \begin{cases} d^+(u)-m_{(u,u)} & \text{ if } u=v,\\ -m_{(u,v)} & \text{ otherwise}. \end{cases} \] The {\it reduced Laplacian matrix}, denoted by $L(G,s)$, is the matrix obtained from $L(G)$ by removing the row and column corresponding to $s$. The {\it sandpile group} of $G$ is the cokernel of $L(G,s)$, \[ SP(G,s)=\mathbb{Z}^{\widetilde{V}}/{\rm Im}\, L(G,s)^t. \] \medskip Another way to define the sandpile group is by using stable and recurrent configurations. \paragraph{\it Combinatorial description.} A \textit{configuration} of $(G,s)$ is a vector ${\bf c}\in \mathbb{N}^{\widetilde{V}}$. A non-sink vertex $v$ is called \textit{stable} if $d^+(v) > {\bf c}_v$, and \textit{unstable} otherwise. Moreover, a configuration is called \textit{stable} if every vertex $v$ in $\widetilde{V}$ is stable.
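As a sanity check of the algebraic description, the order of the cokernel $\mathbb{Z}^{\widetilde{V}}/{\rm Im}\, L(G,s)^t$ equals $|\det L(G,s)|$, which by the Matrix-Tree theorem counts the spanning trees of the multigraph. The sketch below is ours (the dictionary encoding of edge multiplicities is a hypothetical choice, not notation from the paper); it builds the Laplacian and reduced Laplacian of the cone over a double edge, i.e. the multigraph on $\{v_1,v_2,s\}$ with $m_{v_1,v_2}=2$ and one edge from each $v_i$ to $s$.

```python
# Edge multiplicities of a small multigraph: a double edge v1--v2,
# plus a single edge from each of v1, v2 to the sink s.
mult = {("v1", "v2"): 2, ("v1", "s"): 1, ("v2", "s"): 1}
verts = ["v1", "v2", "s"]

def m(u, v):
    """Multiplicity m_{u,v} (edges are undirected)."""
    return mult.get((u, v), mult.get((v, u), 0))

# Degrees, Laplacian, and reduced Laplacian (delete the row/column of s).
deg = {u: sum(m(u, v) for v in verts if v != u) for u in verts}
L = [[deg[u] if u == v else -m(u, v) for v in verts] for u in verts]
Lred = [[L[i][j] for j in range(2)] for i in range(2)]   # non-sink block

# |SP(G, s)| = det of the reduced Laplacian (2x2 here).
order = Lred[0][0] * Lred[1][1] - Lred[0][1] * Lred[1][0]
print(order)
```

The computed order is $5$, which agrees with a direct count of the spanning trees of this multigraph: one tree using both edges to $s$, plus two choices of parallel edge combined with either edge to $s$.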
\textit{Toppling} an unstable vertex $u$ in ${\bf c}$ is performed by decreasing ${\bf c}_u$ by the degree $d^+(u)$ and adding the multiplicity $m_{(u,v)}$ to each of the vertices $v$ such that $(u,v)\in E(G)$. Now, let $\Delta_u=d^+(u){\bf e}_u-\sum_{(u,v)\in E} m_{(u,v)}{\bf e}_v$, where ${\bf e}_v$ is the $v$-th canonical vector, with a one in the $v$-th coordinate and zeros elsewhere. Then $\Delta_u$ is a row of the reduced Laplacian matrix $L(G,s)$, and toppling $u$ amounts to subtracting $\Delta_u$ from ${\bf c}$. By performing a sequence of topplings, we will eventually arrive at a stable configuration, \cite[Lemma 2.4]{holroyd}. See~\cite[Example 2.1]{holroyd} for an example of a digraph without global sink and a configuration that does not stabilize. Moreover, the stabilization of an unstable configuration is unique, \cite[Theorem 2.1]{meester}. The stable configuration associated to ${\bf c}$ will be denoted by $s({\bf c})$. Then, $s({\bf c})={\bf c}-L(G,s)^t \beta$ for some $\beta\in \mathbb{N}^{\widetilde{V}}$. Now, let $({\bf c}+{\bf d})_u:={\bf c}_u+{\bf d}_u$ for all $u\in \widetilde{V}$ and ${\bf c}\oplus {\bf d}:=s({\bf c}+{\bf d})$. A configuration ${\bf c}$ is \textit{recurrent} if it is stable and there exists a non-zero configuration ${\bf r}$ such that $s({\bf c}+{\bf r})={\bf c}$. The \textit{sandpile group} of $G$, denoted by $SP(G,s)$, is the set of recurrent configurations with $\oplus$ as binary operation. \medskip Given a multigraph $G$ with a distinguished vertex $s$, its sandpile group is defined to be $SP(b(G,s),s)$. \begin{Theorem}\cite[corollary 2.5]{dual} and \cite[Corollary 2.16]{holroyd}. Let $G=(V,E)$ be a multigraph with sink $s\in V$ (respectively, a digraph with global sink $s\in V$). Then $SP(G,s)$ is an abelian group.
\end{Theorem} One of the simplest ways to check whether a configuration of a multigraph is recurrent is given by the following result: \begin{Theorem}[Burning Algorithm]\cite{dhar90} \label{burning} A configuration ${\bf c} \in \mathbb{N}^{\widetilde{V}}$ is recurrent if and only if there exists an ordering $u_1,u_2,\ldots, u_n$ of the vertices of $\widetilde{V}$ such that if ${\bf c}_1={\bf c}+\sum_{i=1}^n \Delta_{u_i}$, and \[ {\bf c}_{i}={\bf c}_{i-1}-\Delta_{u_{i-1}}\text{ for all } i=2,\ldots,n, \] then $u_{i}$ is an unstable vertex of ${\bf c}_{i}$ for all $i=1,\ldots,n$ and ${\bf c}={\bf c}_n-\Delta_{u_n}$. \end{Theorem} There is a generalization of the burning algorithm for digraphs, known as the script algorithm; see~\cite{speer}. For instance, in the next result we shall describe the sandpile group of the multidigraph $c(\mathcal{K}_2(r,t))$ with $V =\{s, v_1, v_2\}$ as vertex set, $m_{v_1,s}=1$, $m_{v_2,s}=1$, $m_{(v_1,v_2)}=r$, and $m_{(v_2,v_1)}=t$. If $r=t$ we simply write $c(\mathcal{K}_2(r))$ instead of $c(\mathcal{K}_2(r,t))$. \begin{Theorem}\cite[theorem 2.34]{alfarothesis}\label{generators} If $r \in \mathbb{Z}_+$ and $t \in \mathbb{Z}_+$, then \[ SP(c(\mathcal{K}_2(r,t)),s) \cong \mathbb{Z}_{r+t+1}. \] Moreover, $SP(c(\mathcal{K}_2(r)),s)=\{ (m,l)\: | \: 0\leq m,l \leq r \text{ and } m=r \text{ or } l=r \}$ with $(r,r)$ as the identity, and $(r,0)$ is a generator of $SP(c(\mathcal{K}_2(r)),s)$ with \[ k(r,0)= \begin{cases} (r-j,r) & \text{ if } k=2j\leq 2r,\\ (r,j) & \text{ if } k=2j+1 \leq 2r+1. \end{cases} \] \end{Theorem} \medskip It is known that both descriptions are equivalent in the sense that they define isomorphic groups \cite[Corollary 2.16]{holroyd}. It is not difficult to see that the structure of the sandpile group does not depend on the choice of sink vertex. However, the set of recurrent configurations of $G$ depends on the sink.
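The burning criterion lends itself to a short executable sketch: fire the sink (add $m_{v,s}$ grains to each neighbour $v$ of the sink), stabilize, and accept ${\bf c}$ as recurrent exactly when every non-sink vertex topples once and ${\bf c}$ is recovered. The code below is our illustration, with the triangle $c(\mathcal{K}_2(1))$ hard-coded; it enumerates the stable configurations and finds exactly three recurrent ones, matching $SP(c(\mathcal{K}_2(1)),s)\cong \mathbb{Z}_{1+1+1}=\mathbb{Z}_3$ from Theorem \ref{generators}.

```python
from itertools import product

# Triangle c(K_2(1)): non-sink vertices v1, v2 of out-degree 2 in b(G, s),
# each joined to the other and to the sink s by a single edge.
degree = [2, 2]            # out-degrees of v1, v2
adj = [[0, 1], [1, 0]]     # multiplicities between non-sink vertices
to_sink = [1, 1]           # multiplicities m_{vi, s}

def stabilize(c):
    """Topple unstable vertices until stable; also count topplings per vertex."""
    c = list(c)
    fired = [0, 0]
    while True:
        unstable = [v for v in range(2) if c[v] >= degree[v]]
        if not unstable:
            return c, fired
        v = unstable[0]
        c[v] -= degree[v]
        fired[v] += 1
        for w in range(2):
            c[w] += adj[v][w]

def is_recurrent(c):
    """Burning test: fire the sink, stabilize; c is recurrent iff every
    vertex topples exactly once and the original configuration returns."""
    result, fired = stabilize([c[v] + to_sink[v] for v in range(2)])
    return result == list(c) and all(f == 1 for f in fired)

recurrent = [c for c in product(range(2), repeat=2) if is_recurrent(c)]
print(recurrent)
```

The run finds the recurrent configurations $(0,1)$, $(1,0)$ and $(1,1)$, with $(1,1)$ playing the role of the identity $(r,r)$.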
In this article we are not only interested in the abstract structure of the sandpile group; we are also interested in the set of recurrent configurations and in the description of the subgroups generated by these recurrent configurations. We are interested in giving a description of the recurrent configurations because they carry a very nice combinatorial structure and some combinatorial information about the graph. In general it is easier to describe the abstract structure of the sandpile group than to give an explicit description of the recurrent configurations and the subgroups they generate. For instance, when $G$ is the grid, a partial characterization of the recurrent configuration that plays the role of the identity is given in~\cite{BorgneIdentidad}, \cite{identity}, and~\cite{DartoisIdentity}. The set of recurrent configurations and the subgroups they generate have been described only for a few families of graphs. In the following, every multigraph will be connected and will have a distinguished vertex $s_G\in V(G)$, called the \textit{sink}. Sometimes we will simply write $s$ instead of $s_G$. The set of non-sink vertices will be denoted by $\widetilde{V}$. \section{Graph homomorphism and the sandpile group} In this section we introduce the concepts of uniform homomorphism and weak homomorphism of graphs. These concepts are similar to the classical concepts of homomorphism and full homomorphism of graphs. We also introduce a directed variant of the uniform homomorphism concept, called directed uniform homomorphism. In the literature there are several concepts that are either equivalent or similar to the concepts of uniform homomorphism, weak homomorphism, and directed uniform homomorphism of graphs. For instance, in~\cite[chapter 5]{godsil} and~\cite[section 5]{directed} the concept of an equitable partition of a graph was defined. This concept of equitable partition is equivalent to the concept of directed uniform homomorphism.
In~\cite[section 5]{bicycles}, Berman defined the concept of divisibility of graphs, which is closely related to the concept of weak $V$-uniform homomorphism when $V=V(G)$; see remark~\ref{bicycles} for a more precise explanation of this equivalence. The concept of uniform homomorphism is useful in order to get an insight into the group structure of the sandpile groups of graphs. For instance, theorem~\ref{uniformhomeo} says that if $f:G\rightarrow H$ is a surjective $(V(H)\setminus s_H)$-uniform homomorphism, then the induced mapping $\widetilde{f}:SP(H,s_H) \rightarrow SP(G,s_G)$ is an injective homomorphism of groups; that is, this mapping sends recurrent configurations to recurrent configurations and is compatible with the group structure. Theorem 5.7 in~\cite{bicycles} shows a result equivalent to the one in theorem~\ref{uniformhomeo}. Theorem 6.1 in~\cite{directed} shows a result equivalent to the one in theorem~\ref{directeduniformhomeo}. In~\cite[section 2]{harmonic} the concept of harmonic morphism was defined (this concept is different from uniform homomorphism) and a functor between the category of graphs with harmonic morphisms and the category of abelian groups was studied. In~\cite{treumann} a functor from the category of graphs with divisibility to the category of abelian groups is explored; see for instance Proposition 19. Finally, in~\cite{berget09,durgin,lorenzini91} some functorial results from the category of graphs to the category of abelian groups are proved. For instance, in~\cite[Proposition 2]{lorenzini91} and~\cite[Proposition 21]{treumann} it is proved that if $G$ is a connected graph and $G_k$ is the graph obtained by dividing each edge of $G$ into $k$ edges, then there exists a surjective function between the sandpile group of $G_k$ and the sandpile group of $G$.
In~\cite{durgin} the concepts of symmetric configuration and quotient graph are introduced and discussed; more precisely, Theorem 2.1 there proves that the set of symmetric configurations forms a subgroup of the sandpile group. In~\cite[Theorems 1.3 and 1.5]{berget09} homomorphisms are established between the sandpile group of the line graph of a graph $G$ and the sandpile group of $G$, and between the sandpile group of the line graph of a graph $G$ and the sandpile group of a subdivision of $G$. \begin{Definition} Let $G$, $H$ be multigraphs without loops and $V\subseteq V(H)$. A \textit{$V$-uniform homomorphism} from $G$ to $H$, denoted by $f: G\rightarrow H$, is a mapping $f:V(G)\rightarrow V(H)$ such that for all $x\in V$ and $y\in V(H)$ \[ d_{G[\{u\}\cup S_y]}(u)=m_{x,y} \text{ for all } u \in S_x=f^{-1}(x) \] and $f: V(G)\setminus f^{-1}(V)\rightarrow V(H)\setminus V$ is the identity isomorphism. \end{Definition} If $f:V(G)\rightarrow V(H)$ is a $V$-uniform homomorphism with $V=V(H)$, then we simply say that $f$ is a uniform homomorphism. In the case of directed multigraphs, we define a \textit{directed $V$-uniform homomorphism} as a mapping $f:V(G)\rightarrow V(H)$ such that $f: V(G)\setminus f^{-1}(V)\rightarrow V(H)\setminus V$ is the identity isomorphism and for all $x\in V$ and $y\in V(H)$ \[ d^+_{G[\{u\}\cup S_y]}(u)=m_{(x,y)} \text{ for all } u \in S_x, \] where $d^+_{G}(u)$ is the out-degree of the vertex $u$ in the graph $G$, that is, the number of arcs of $G$ with tail $u$. If $f: G\rightarrow H$ is a $V$-uniform homomorphism, then $S_x$ is a stable set of $G$ for all $x\in V$ because $H$ has no loops. Moreover, since $G[S_x\cup S_y]$ is an $m_{x,y}$-regular bipartite graph for all $x \neq y\in V$ and $H[V]$ is connected, it follows that $|S_x|=|S_y|$ for all $x,y\in V$. The degree of a $V$-uniform homomorphism $f: G\rightarrow H$, denoted by ${\rm deg}(f)$, is equal to the cardinality of the set $S_x$ for some $x\in V$.
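The two structural facts just noted (each fibre $S_x$ is a stable set, and $G[S_x\cup S_y]$ is an $m_{x,y}$-regular bipartite graph) can be checked mechanically on small examples. The sketch below is our illustration, with a hypothetical choice of graphs: it folds the $4$-cycle onto the multigraph $H$ with $V(H)=\{x,y\}$ and $m_{x,y}=2$, and confirms both conditions, so the folding is a uniform homomorphism (here $V=V(H)$) of degree $2$.

```python
# 4-cycle C_4 and the folding map onto H = two vertices joined by a double edge.
G_edges = {("v1", "v2"), ("v2", "v3"), ("v3", "v4"), ("v4", "v1")}
f = {"v1": "x", "v3": "x", "v2": "y", "v4": "y"}
m_H = {("x", "y"): 2}

def adjacent(u, w):
    return (u, w) in G_edges or (w, u) in G_edges

def mult_H(a, b):
    return m_H.get((a, b), m_H.get((b, a), 0))

# Fibres S_x = f^{-1}(x) of the mapping.
fibres = {h: [u for u, img in f.items() if img == h] for h in ("x", "y")}

# Condition (a): each fibre is a stable (independent) set of G.
indep = all(not adjacent(u, w)
            for S in fibres.values() for u in S for w in S)

# Condition (b): every u in S_a has exactly m_{a,b} neighbours in S_b,
# i.e. G[S_a ∪ S_b] is an m_{a,b}-regular bipartite graph.
regular = all(
    sum(adjacent(u, w) for w in fibres[b]) == mult_H(a, b)
    for a in fibres for b in fibres if a != b for u in fibres[a]
)
print(indep, regular)
```

Both checks succeed, and ${\rm deg}(f)=|S_x|=2$, consistent with the degree notion defined above.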
\begin{Proposition}\label{grado} If $f: G\rightarrow H$ is a $V$-uniform homomorphism and $V(H)\setminus V$ is a stable set, then \[ d_G(u)= \begin{cases} d_{H}(f(u)) & \text{ if } f(u)\in V,\\ {\rm deg}(f)\cdot d_{H}(f(u)) & \text{ if } f(u)\notin V. \end{cases} \] \end{Proposition} \begin{proof} If $u\in f^{-1}(V)$, then \[ d_G(u)=\sum_{y\in V(H)\setminus f(u)} d_{G[\{u\}\cup S_y]} (u)=\sum_{y\in V(H)\setminus f(u)} d_{H[\{f(u)\}\cup \{y\}]} (f(u))=d_{H}(f(u)). \] On the other hand, if $u\notin f^{-1}(V)$, then, since $V(H)\setminus V$ is a stable set, \[ d_G(u)=\sum_{v\in S_x, x\in V} d_{G[\{v\}\cup \{u\}]} (u)=\sum_{x\in V} {\rm deg}(f)\cdot d_{H[\{x\}\cup \{f(u)\}]} (f(u))={\rm deg}(f)\cdot d_{H}(f(u)). \] \end{proof} The next proposition gives us an alternative description of a uniform homomorphism. \begin{Proposition}\label{bipartita} Let $G$ and $H$ be multigraphs without loops. Then, $f: G\rightarrow H$ is a uniform homomorphism if and only if \begin{description} \item[$(i)$] $S_x=f^{-1}(x)$ is an independent set of $G$ for all $x\in V(H)$, \item[$(ii)$] $G[S_x\cup S_y]$ is an $m_{x,y}$-regular bipartite graph for all $x \neq y\in V(H)$. \end{description} \end{Proposition} \medskip Now, we introduce the classical definitions of a homomorphism and a full homomorphism of graphs in order to compare them with the notion of uniform homomorphism. Let $G$ and $H$ be multigraphs. A \textit{homomorphism (respectively, full homomorphism)} is a mapping \[ f:V(G)\rightarrow V(H) \] such that $f(u)f(v) \in E(H)$ if (respectively, and only if) $uv \in E(G)$. \medskip The definitions of full homomorphism and isomorphism of graphs are similar. The main difference between them is that a full homomorphism is not necessarily bijective, whereas an isomorphism is. For example, let $C_4$ and $P_3$ be the graphs in figure \ref{Gfullh}. 
The mapping $f:V(C_4)\rightarrow V(P_3)$ given by $v_1,v_3 \overset{f}{\mapsto} u_1$, and $v_2,v_4 \overset{f}{\mapsto} u_2$ is a full homomorphism. \begin{figure}[h] \centering \begin{tabular}{c@{\extracolsep{1cm}}c@{\extracolsep{1cm}}c} \begin{tikzpicture}[line width=1.1pt, scale=1] \tikzstyle{every node}=[inner sep=0pt, minimum width=4.5pt] \draw (45:1) node[draw, circle, fill=green] (v1) {}; \draw (135:1) node[draw, circle, fill=green] (v3) {}; \draw (225:1) node[draw, circle, fill=blue] (v2) {}; \draw (315:1) node[draw, circle, fill=blue] (v4) {}; \draw (v1) to (v2); \draw (v2) to (v3); \draw (v3) to (v4); \draw (v4) to (v1); \draw (45:1.4) node {\small $v_3$}; \draw (135:1.4) node {\small $v_1$}; \draw (225:1.4) node {\small $v_4$}; \draw (315:1.4) node {\small $v_2$}; \draw (0,0.5) node {\small $C_4$}; \end{tikzpicture} & \begin{tikzpicture}[line width=1.1pt, scale=1] \draw (0,0) node {}; \draw (0,1) node {\small $\overset{f}{\longrightarrow}$}; \end{tikzpicture} & \begin{tikzpicture}[line width=1.1pt,scale=1] \tikzstyle{every node}=[inner sep=0pt, minimum width=4.5pt] \draw (0,1.4) node[draw, circle, fill=green] (v1) {}; \draw (0,0) node[draw, circle, fill=blue] (v2) {}; \draw (1.4,0) node[draw, circle, fill=gray] (v3) {}; \draw (v1) to (v2); \draw (v2) to (v3); \draw (-.3,1.7) node {\small $u_1$}; \draw (-.3,-.3) node {\small $u_2$}; \draw (1.76,0) node {\small $u_3$}; \draw (0.8,0.8) node {\small $P_3$}; \end{tikzpicture} \end{tabular} \caption{\small A full homomorphism between $C_4$ and $P_3$.} \label{Gfullh} \end{figure} The following proposition gives us an equivalent way to define a (full) homomorphism of graphs: \begin{Proposition}\cite[Proposition 1.10 and exercise 10 in page 35]{Nesetril} \label{THEnesetril} Let $G$ and $H$ be multigraphs without loops. 
Then $f: G\rightarrow H$ is a homomorphism if and only if \begin{description} \item[$(i)$] $S_x=f^{-1}(x)$ is an independent set of $G$ for all $x\in V(H)$, \item[$(ii)$] if $xy\notin E(H)$, then $uv\notin E(G)$ for all $u\in S_x$ and $v\in S_y$. \end{description} Moreover, $f$ is a full homomorphism if and only if $f$ satisfies conditions $(i)$, $(ii)$, and \begin{description} \item[$(ii')$] if $xy\in E(H)$, then $uv\in E(G)$ for all $u\in S_x$ and $v\in S_y$. \end{description} \end{Proposition} In order to illustrate the concept of uniform homomorphism, let $C_3$ and $C_5$ be the cycles with three and five vertices, respectively. \begin{figure}[h] \centering \begin{tikzpicture}[line width=1.1pt, scale=.9] \tikzstyle{every node}=[inner sep=0pt, minimum width=4.5pt] \draw (0,0) { +(0:1) node[draw, circle, fill=gray] (v1) {} +(120:1) node[draw, circle, fill=green] (v2) {} +(240:1) node[draw, circle, fill=blue] (v3) {} (v2)+(-1,0) node[draw, circle, fill=green] (v4) {} (v3)+(-1,0) node[draw, circle, fill=blue] (v5) {} (v1)+(-4,0) node[draw, circle, fill=gray] (v1p) {} (v1) to (v2) (v4) to (v3) (v1) to (v3) (v5) to (v2) (v4) to (v5) (0:1.4) node {\small $v_1$} (v2)+(0,0.25) node {\small $v_2$} (240:1)+(0,-0.3) node {\small $v_5$} (120:1)+(-1.1,0.3) node {\small $v_4$} (240:1)+(-1.1,-0.3) node {\small $v_3$} (v1p)+(-0.3,0) node {\small $v'_1$} (-0.0,0) node {\small $C_5$} }; \draw[dashed] (0,0) { (v2) to (v3) (v1) .. controls +(-90:0.8) and +(-50:1.6).. (v5) (v1) .. controls +(90:0.8) and +(50:1.6).. 
(v4) }; \draw[dashed,gray] (0,0) { (v4) to (v1p) (v5) to (v1p) }; \draw[dashed,red] (0,0) { (v2) to (v1p) }; \draw[->, line width=0.5] (2,0) to (3.6,0); \draw (2.9,0.3) node {\small $f$}; \draw (5,0) { +(0:1) node[draw, circle, fill=gray] (v1) {} +(120:1) node[draw, circle, fill=green] (v2) {} +(240:1) node[draw, circle, fill=blue] (v3) {} (v1) to (v2) (v2) to (v3) (v1) to (v3) (v1)+(0:0.4) node {\small $u_1$} (v2)+(0,0.3) node {\small $u_2$} (v3)+(0,-0.3) node {\small $u_3$} +(0.5,0.9) node {\small $C_3$} }; \draw[dashed, gray] (0,0) { (v2) to [bend right] (v3) }; \end{tikzpicture} \vspace{-8mm} \caption{\small The mapping $f$.} \label{Gfullh1} \end{figure} \vspace{-2mm} The mapping $f:V(C_5)\rightarrow V(C_3)$ given by \[ \begin{array}{rcc} v_1 \overset{f}{\longmapsto} u_1,\\ v_2, v_4 \overset{f}{\longmapsto} u_2,\\ v_3, v_5 \overset{f}{\longmapsto} u_3, \end{array} \] is a homomorphism of graphs that is neither a full nor a uniform homomorphism. If we replace $C_5$ by $C_5 + v_2 v_5+v_1v_3+v_1v_4$, we get that the mapping $f$ is a full homomorphism that is not uniform. Additionally, if we replace $C_5$ by $C_5 + v_2 v_5+v'_1v_4+v'_1v_3$ and $C_3$ by $C_3 + u_2 u_3$, then the function given by $\widehat{f}(v_i)=f(v_i)$ for all $i=1,\ldots, 5$ and $\widehat{f}(v_1')=u_1$ is a uniform homomorphism, but $\widehat{f}$ is not a full homomorphism because $v'_1v_2$ is not an edge, as required by condition $(ii')$ of proposition~\ref{THEnesetril}. \medskip The concept of uniform homomorphism of graphs is relevant in the study of the sandpile group of graphs as shown in the following result: \begin{Theorem}\label{uniformhomeo} Let $G$ be a multigraph with sink $s_G$, $H$ be a multigraph with sink $s_H$, $V\subset V(H)$ such that $V(H)\setminus V$ is a stable set and $s_H\notin V$, $f:G\rightarrow H$ be a surjective $V$-uniform homomorphism such that $f^{-1}(s_H)=\{s_G\}$. 
Then the induced mapping $\widetilde{f}:SP(H,s_H) \rightarrow SP(G,s_G)$, given by \[ \widetilde{f}({\bf c})_v= \begin{cases} {\bf c}_{f(v)} & \text{ if } f(v) \in V, \\ {\rm deg}(f)\cdot {\bf c}_{f(v)} & \text{ if } f(v) \notin V, \end{cases} \] is an injective homomorphism of groups, that is, $SP(H, s_H) \triangleleft SP(G, s_G)$. \end{Theorem} \begin{proof} Let $\widehat{f}:\mathbb{Z}^{V(H\setminus s_H)} \rightarrow \mathbb{Z}^{V(G\setminus s_G)}$ be the mapping induced by $f$ given by \[ \widehat{f}({\bf c})_v= \begin{cases} {\bf c}_{f(v)} & \text{ if } f(v) \in V, \\ {\rm deg}(f)\cdot {\bf c}_{f(v)} & \text{ if } f(v) \notin V. \end{cases} \] Clearly $\widehat{f}$ is an injective homomorphism of groups. In order to prove this theorem we need to prove the following facts: \begin{itemize} \item If ${\bf c}$ is a recurrent configuration of $(H,s_H)$, then $\widetilde{f}({\bf c})$ is a recurrent configuration of $(G,s_G)$, \item $\widetilde{f}({\bf c}_1\oplus {\bf c}_2)=\widetilde{f}({\bf c}_1)\oplus \widetilde{f}({\bf c}_2)$ for all ${\bf c}_1,{\bf c}_2 \in SP(H,s_H)$. \end{itemize} The next claim will be useful to prove these facts. \begin{Claim}\label{stable} If ${\bf c}_1$ and ${\bf c}_2$ are configurations of $(H,s_H)$, then \[ \widehat{f}(s({\bf c}_1+{\bf c}_2))=s(\widehat{f}({\bf c}_1)+\widehat{f}({\bf c}_2)).\vspace{-1mm} \] \end{Claim} \begin{proof} By proposition~\ref{grado} a vertex $x\in V(H)\setminus s_H$ can be toppled in the configuration ${\bf c}$ of $(H,s_H)$ if and only if the vertices $S_x$ of $G$ can be toppled in the configuration $\widehat{f}({\bf c})$ of $(G,s_G)$. 
On the other hand, since $\widehat{f}(\Delta_x)=\sum_{v\in S_x} \Delta_{v}$ for all $x\in V(H)\setminus \{s_H\}$ and $s({\bf c})={\bf c}-\sum_{w\in W} \Delta_w$ for some multiset $W$ of $V(H)\setminus s_H$, we have \begin{eqnarray*} \widehat{f}(s({\bf c}_1+{\bf c}_2))&=&\widehat{f}\left( {\bf c}_1+{\bf c}_2-\sum_{w\in W} \Delta_w\right)= \widehat{f}({\bf c}_1)+\widehat{f}({\bf c}_2)-\sum_{w\in W} \widehat{f}(\Delta_w)\\ &=& \widehat{f}({\bf c}_1)+\widehat{f}({\bf c}_2)-\sum_{w\in W} \sum_{v\in S_w} \Delta_{v}=s(\widehat{f}({\bf c}_1)+\widehat{f}({\bf c}_2)). \vspace{-7mm} \end{eqnarray*} \end{proof} Clearly, ${\bf c}$ is a stable configuration of $(H,s_H)$ if and only if $\widehat{f}({\bf c})$ is a stable configuration of $(G,s_G)$. Furthermore, if ${\bf c}$ is a recurrent configuration of $(H,s_H)$, then there exists a configuration ${\bf u}$ of $(H,s_H)$ such that $s({\bf c}+{\bf u})={\bf c}$. Thus, by claim~\ref{stable} $s(\widehat{f}({\bf c})+\widehat{f}({\bf u}))=\widehat{f}(s({\bf c}+{\bf u}))=\widehat{f}({\bf c})$ and therefore $\widehat{f}({\bf c})$ is a recurrent configuration of $(G,s_G)$. Finally, $\widetilde{f}({\bf c}_1\oplus {\bf c}_2)=\widetilde{f}(s({\bf c}_1+{\bf c}_2))=s(\widetilde{f}({\bf c}_1)+\widetilde{f}({\bf c}_2))= \widetilde{f}({\bf c}_1)\oplus \widetilde{f}({\bf c}_2)$ for all ${\bf c}_1,{\bf c}_2 \in SP(H,s_H)$. \end{proof} \begin{Remark}\label{contraction} Note that a mapping $f:G\rightarrow H$ is a surjective uniform homomorphism if and only if the induced mapping $\check{f}:\check{G}\rightarrow H$ is a surjective $(V(H)\setminus s_H)$-uniform homomorphism, where $\check{G}=G/ f^{-1}(s_H)$ is the graph obtained from $G$ when we contract all the vertices in $f^{-1}(s_H)$ to a single vertex $s_G$. For instance, consider the following graphs with $f:G\rightarrow H$ given by \[ \begin{array}{lcr} u_1, u'_1 & \overset{f}{\longmapsto} & s_H,\\ u_2, u_4 & \overset{f}{\longmapsto} & v_2,\\ u_3, u_5 & \overset{f}{\longmapsto} & v_3. 
\end{array} \] \vspace{-4mm} \begin{figure}[h]\centering \begin{tikzpicture}[line width=1.1pt,scale=.9] \tikzstyle{every node}=[inner sep=0pt, minimum width=4.5pt] \draw (-3.5,0) { +(-0.15,0.15) node[draw, circle,fill=gray] (w1) {} +(0.15,-0.15) node[draw, circle,fill=gray] (w1p) {} +(45:1.2) node[draw, circle,fill=green] (w2) {} +(135:1.2) node[draw, circle,fill=blue] (w3) {} +(225:1.2) node[draw, circle,fill=green] (w4) {} +(315:1.2) node[draw, circle,fill=blue] (w5) {} (w2) to (w3) to (w4) to (w5) to (w2) (w1) to (w2) (w1p) to (w4) (w1) to[bend right] (w3) (w1) to[bend left] (w3) (w1p) to[bend right] (w5) (w1p) to[bend left] (w5) (w1)+(-0.3,-0.1) node {\small $u_1$} (w1p)+(0.3,0.2) node {\small $u'_1$} (w2)+(0,0.3) node {\small $u_2$} (w3)+(0,0.3) node {\small $u_3$} (w4)+(0,-0.3) node {\small $u_4$} (w5)+(0,-0.3) node {\small $u_5$} +(-2.3,0.9) node {\small $G$} +(2.5,1.1) node {\small $f$} +(4.4,-1.3) node {\small $\check{f}$} }; \draw (-0.4,-2.6) { +(0,0) node[draw, circle,fill=gray] (u1) {} +(45:1.2) node[draw, circle,fill=green] (u2) {} +(135:1.2) node[draw, circle,fill=blue] (u3) {} +(225:1.2) node[draw, circle,fill=green] (u4) {} +(315:1.2) node[draw, circle,fill=blue] (u5) {} (u2) to (u3) to (u4) to (u5) to (u2) (u1) to (u2) (u1) to (u4) (u1) to[bend right] (u3) (u1) to[bend left] (u3) (u1) to[bend right] (u5) (u1) to[bend left] (u5) (u1)+(0.4,0.05) node {\small $s_G$} (u2)+(0,0.3) node {\small $u_2$} (u3)+(0,0.3) node {\small $u_3$} (u4)+(0,-0.3) node {\small $u_4$} (u5)+(0,-0.3) node {\small $u_5$} (u1)+(0,1.3) node {\small $\check{G}$} }; \draw (4,0) { +(0:-2) node[draw, circle,fill=gray] (v1) {} +(120:1) node[draw, circle,fill=green] (v2) {} +(240:1) node[draw, circle,fill=blue] (v3) {} (v1) to (v2) (v1) to[bend right] (v3) (v1) to[bend left] (v3) (v2) to [bend right] (v3) (v2) to [bend left] (v3) (v1)+(0,0.3) node {\small $s_H$} (v2)+(0,0.3) node {\small $v_2$} (v3)+(0,-0.3) node {\small $v_3$} +(-0.6,1) node {\small $H$} }; \draw[|->>, line 
width=0.3] (-2,0) to (1.5,0); \draw[->, gray, line width=0.4] (-3.5,-1.3) to (-1.5,-2.6); \draw[|->>, dashed,red, line width=0.5] (0.7,-2.6) to (2.5,-1); \end{tikzpicture} \caption{\small A surjective uniform homomorphism and its induced surjective $(V(H)\setminus s_H)$-uniform homomorphism.} \vspace{-3mm} \end{figure} Then $\check{f}: \check{G} \rightarrow H$ is a surjective $(V(H)\setminus s_H)$-uniform homomorphism of graphs. \end{Remark} \begin{Example} In order to illustrate theorem~\ref{uniformhomeo}, consider the surjective $(V(H)\setminus s_H)$-uniform homomorphism $\check{f}: \check{G} \rightarrow H$ defined in remark~\ref{contraction}. Using the CSandPile\footnote{CSandPile is a C++ program that computes the sandpile group of a graph. It is available upon request from alfaromontufar@gmail.com.} program we can see that $SP(H,s_H)\cong \mathbb{Z}_8$ with identity ${\bf e}_{H}=(1,2)$ and generated by ${\bf c}_8=(0,3)$, and $SP(G,s_G)\cong \mathbb{Z}_2 \oplus \mathbb{Z}_{48}$ with identity ${\bf e}_G=\widetilde{f}({\bf e}_H)=(1,2,1,2)$ and generated by ${\bf c}_2=(2,1,2,3)$ of order two and ${\bf c}_{48}=(1,2,2,3)$ of order $48$. 
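The group orders in this example can be cross-checked without CSandPile: by the matrix--tree theorem, the order of the sandpile group equals the determinant of the reduced Laplacian, $|SP(G,s_G)|={\rm det}\, L(G,s_G)$. A short sketch (plain Python; the matrices are read off from the figures of $H$ and $\check{G}$ above, and the helper name is ours):

```python
from fractions import Fraction

def det_int(M):
    """Exact determinant of a square integer matrix via Gaussian
    elimination over the rationals (fractions avoid rounding errors)."""
    A = [[Fraction(x) for x in row] for row in M]
    n, det = len(A), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if A[r][i] != 0), None)
        if piv is None:
            return 0
        if piv != i:
            A[i], A[piv] = A[piv], A[i]
            det = -det
        det *= A[i][i]
        for r in range(i + 1, n):
            factor = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= factor * A[i][c]
    return int(det)

# Reduced Laplacian of H over {v2, v3}: d(v2) = 3, d(v3) = 4, m_{v2,v3} = 2.
L_H = [[3, -2],
       [-2, 4]]
# Reduced Laplacian of the contracted graph over {u2, u3, u4, u5}.
L_G = [[ 3, -1,  0, -1],
       [-1,  4, -1,  0],
       [ 0, -1,  3, -1],
       [-1,  0, -1,  4]]
print(det_int(L_H), det_int(L_G))  # prints: 8 96
```

The two determinants, $8$ and $96$, match $|\mathbb{Z}_8|$ and $|\mathbb{Z}_2 \oplus \mathbb{Z}_{48}|$ above.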
\vspace{-4mm} \begin{figure}[h]\centering \begin{tikzpicture}[line width=1.1pt,scale=.9] \tikzstyle{every node}=[inner sep=0pt, minimum width=4.5pt] \draw (-2.1,0) { +(0,0) node[draw, circle,fill=gray] (u1) {} +(45:1.2) node[draw, circle,fill=green] (u2) {} +(135:1.2) node[draw, circle,fill=blue] (u3) {} +(225:1.2) node[draw, circle,fill=green] (u4) {} +(315:1.2) node[draw, circle,fill=blue] (u5) {} (u2) to (u3) to (u4) to (u5) to (u2) (u1) to (u2) (u1) to (u4) (u1) to[bend right] (u3) (u1) to[bend left] (u3) (u1) to[bend right] (u5) (u1) to[bend left] (u5) (u1)+(0.4,0.05) node {\small $s_G$} (u2)+(0,0.3) node {\small $0$} (u3)+(0,0.3) node {\small $3$} (u4)+(0,-0.3) node {\small $0$} (u5)+(0,-0.3) node {\small $3$} (u1)+(0.1,0.5) node {\small $\check{G}$} (u3)+(2.9,-0.55) node {\small $\check{f}$} }; \draw (3.2,0) { +(0:-2) node[draw, circle,fill=gray] (v1) {} +(120:1) node[draw, circle,fill=green] (v2) {} +(240:1) node[draw, circle,fill=blue] (v3) {} (v1) to (v2) (v1) to[bend right] (v3) (v1) to[bend left] (v3) (v2) to [bend right] (v3) (v2) to [bend left] (v3) (v1)+(0,0.3) node {\small $s_H$} (v2)+(0,0.3) node {\small $0$} (v3)+(0,-0.3) node {\small $3$} +(-0.6,1) node {\small $H$} }; \draw[|->, line width=0.3] (-0.6,0) to (0.5,0); \end{tikzpicture} \caption{\small A surjective $(V(H)\setminus s_H)$-uniform homomorphism.} \end{figure} For instance, the induced mapping $\widetilde{f}: SP(H,s_H) \rightarrow SP(\check{G},s_{\check{G}})$ sends the configuration ${\bf c}_8$ to the configuration $\widetilde{\bf c}_8=\widetilde{f}({\bf c}_8)=(0,3,0,3)$, which generates a subgroup of order eight in $SP(\check{G},s_{\check{G}})$. Moreover, $\widetilde{\bf c}_8={\bf c}_2\oplus 6 \cdot {\bf c}_{48}=(2,1,2,3)\oplus(2,2,2,0)$. \end{Example} Now, we will present a directed version of theorem~\ref{uniformhomeo}. 
\begin{Theorem}\label{directeduniformhomeo} Let $G$ be a multigraph with sink $s_G$, let $H$ be a multigraph with sink $s_H$, and let $f:G\rightarrow H$ be a directed surjective $(V(H)\setminus s_H)$-uniform homomorphism with $f^{-1}(s_H)=\{s_G\}$. If $\widehat{f}:\mathbb{Z}^{V(H\setminus s_H)} \rightarrow \mathbb{Z}^{V(G\setminus s_G)}$ is the mapping given by \[ \widehat{f}({\bf c})_v={\bf c}_{f(v)} \text{ for all }v\in V(G)\setminus s_G, \] then the induced mapping $\widetilde{f}:SP(H,s_H) \rightarrow SP(G,s_G)$ is an injective homomorphism of groups, that is, $SP(H, s_H) \triangleleft SP(G, s_G)$. \end{Theorem} \begin{proof} Clearly, $\widehat{f}$ is an injective homomorphism of groups. Moreover, if $L(H,s_H){\bf z}={\bf a}$, then $L(G,s_G)\widehat{f}({\bf z})=\widehat{f}({\bf a})$. Thus, since ${\rm det}(L(G,s_G))\neq 0$, we have \[ \widehat{f}({\rm Im}\, L(H,s_H)) = \widehat{f}(\mathbb{Z}^{V(H\setminus s_H)})\cap {\rm Im}\, L(G,s_G). \] Hence the mapping $\overline{f}:\mathbb{Z}^{V(H\setminus s_H)} \rightarrow \mathbb{Z}^{V(G\setminus s_G)}/\,{\rm Im}\, L(G,s_G)$ given by $\overline{f}({\bf a})= \widehat{f}({\bf a})\, (\text{mod }{\rm Im}\, L(G,s_G))$ has kernel equal to ${\rm Im}\, L(H,s_H)$, and therefore the induced mapping \[ \widetilde{f}:SP(H,s_H)=\mathbb{Z}^{V(H\setminus s_H)}/\,{\rm Im}\, L(H,s_H) \rightarrow \mathbb{Z}^{V(G\setminus s_G)}/ \,{\rm Im}\, L(G,s_G)=SP(G,s_G) \] is an injective homomorphism of groups. \end{proof} \begin{Remark} Note that, if $f:G\rightarrow H$ is a directed surjective $(V(H) \setminus s_H)$-uniform homomorphism with $f^{-1}(s_H)=\{s_G\}$ and ${\bf a}$ is an eigenvector of $L(H, s_H)$ for the eigenvalue $\lambda$, then $\widehat{f}({\bf a})$ is an eigenvector of $L(G, s_G)$ for $\lambda$. 
Also, note that in the directed case the mapping $\widetilde{f}$ defined in theorem~\ref{directeduniformhomeo} is not a natural homomorphism of sandpile groups in the sense that it does not necessarily send recurrent configurations to recurrent configurations. \end{Remark} The next corollary is an application of theorem~\ref{directeduniformhomeo}. \begin{Corollary}\label{regular} If $B_{r,t}$ is a bipartite graph with bipartition $V=V_1\cup V_2$ and \[ d(u)= \begin{cases}r & \text{ if } u\in V_1,\\ t & \text{ if } u\in V_2, \end{cases} \] then ${\mathbb Z}_{r+t+1} \triangleleft SP(c(B_{r,t}),s_{B_{r,t}})$. \end{Corollary} \begin{proof} Let ${\mathcal K}_2(r,t)$ be the directed multigraph with $V=\{v_1,v_2\}$ as set of vertices and $m_{v_1,v_2}=r$, $m_{v_2,v_1}=t$. Let $f:c(B_{r,t})\rightarrow c({\mathcal K}_2(r,t))$ be the mapping given by \[ f(v)= \begin{cases} v_1 & \text { if } v\in V_1,\\ v_2 & \text { if } v\in V_2,\\ s_{{\mathcal K}_2(r,t)} & \text{ if } v=s_{B_{r,t}}. \end{cases} \] Since $f$ is a directed surjective $\{v_1,v_2\}$-uniform homomorphism, by theorems \ref{directeduniformhomeo} and \ref{generators} we have that \[ {\mathbb Z}_{r+t+1}\cong SP(c({\mathcal K}_2(r,t))) \triangleleft SP(c(B_{r,t}),s_{B_{r,t}}). \] \end{proof} \begin{Remark}\label{generalized} A \textit{weak $V$-uniform homomorphism} is a mapping $f:V(G)\rightarrow V(H)$ such that for all $x\in V$ and $y\in V(H)$ with $x\neq y$ \[ d_{G[\{u\}\cup S_y]}(u)=m_{x,y} \text{ for all } u \in S_x \] (that is, the sets $S_x$ are not necessarily stable) and $f: V(G)\setminus f^{-1}(V)\rightarrow V(H)\setminus V$ is the identity isomorphism. 
In this case the induced mapping $\widetilde{f}$ does not send recurrent configurations to recurrent configurations, but the mapping $\hat{f}({\bf c})=[\widetilde{f}({\bf c})]$ (where $[\widetilde{f}({\bf c})]$ is the unique recurrent configuration of $G$ such that $s(\widetilde{f}({\bf c})+r)=[\widetilde{f}({\bf c})]$ for some non-zero configuration $r$) is an injective homomorphism of groups. \end{Remark} \begin{Remark}\label{bicycles} The group of bicycles of a graph $G$ over an abelian group $A$, denoted by $B(G,A)$, consists of the edge weightings of $G$ over $A$ that are both cycles and cocycles of $G$, endowed with the entrywise sum. The group of bicycles and the sandpile group of a graph are closely related. For instance, $B(G, A)={\rm Hom}_{\mathbb Z}(SP(G),A)$. Moreover, if either $A=\mathbb{Q}/ \mathbb{Z}$ or $A=\mathbb{Z}_{|SP(G)|}$, then the group $B(G, A)$ of bicycles of $G$ is isomorphic to $SP(G)$. Let $G$, $H$ be connected multigraphs and $V(H)=\{u_1,\cdots, u_{|V(H)|}\}$ be the vertex set of $H$. We say that $G$ is \textit{divisible} by $H$ (see~\cite[page 9]{bicycles}) if the vertices of $G$ can be partitioned into $|V(H)|$ classes $U_1,\cdots, U_{|V(H)|}$ such that, for $1\leq i\leq |V(H)|$, every vertex $v$ in $U_i$ is either joined only to vertices of $U_i$, or is joined to exactly $m_{u_i,u_j}$ vertices of $U_{j}$ for every $j\neq i$ (and to any number of vertices in $U_i$). Note that the concepts of divisibility and weak $V(G)$-uniform homomorphism are closely related. Clearly, if $G$ is divisible by $H$, then there exists a weak $V(H)$-uniform homomorphism $f$ between $G$ and $H$. However, if $\check{G}$ and $H$ are the graphs defined in remark~\ref{contraction}, then $\check{G}$ is not divisible by $H$ but there exists a surjective $(V(H)\setminus s_H)$-uniform homomorphism $\check{f}$ between $\check{G}$ and $H$. 
Also, it is not difficult to see that the cycle with four vertices with an added pendant edge (that is, $E(G)=\{x_1x_2,x_2x_3,x_3x_4,x_4x_1,x_1x_5\}$) is divisible by $\mathcal{K}_2(2)$, but no uniform homomorphism exists between them. Theorem 5.7 in~\cite{bicycles} says that if $G$ is divisible by $H$, then $B(H,\mathbb{Z}_k)$ is a subgroup of $B(G,\mathbb{Z}_k)$ for all $k\in \mathbb{Z}$. That is, theorems \ref{uniformhomeo} and \ref{directeduniformhomeo}, together with~\cite[Theorem 5.7]{bicycles}, show injections between groups induced by certain classes of morphisms between graphs. \end{Remark} \section{The sandpile group of the cartesian product of graphs} The sandpile group of the cartesian product of graphs has been studied by several authors, see for instance~\cite{Bai,p4cn,k3cn,cartesian,kmpn}. In this section we define the cartesian product of configurations and prove that the cartesian product of recurrent configurations is a recurrent configuration. After that, we prove that if ${\bf e}_H\in SP(c(H), s_{c(H)})$ is the identity of the sandpile group of the cone of $H$, then the mapping $\widetilde{\pi}_G: SP(c(G), s_{c(G)})\rightarrow SP(c(G\Box H), s_{c(G\Box H)})$ given by \[ \widetilde{\pi}_G ({\bf a})={\bf a}\Box {\bf e}_H, \] is an injective homomorphism of groups. The \textit{cartesian product} of $G$ and $H$, denoted by $G\Box H$, is the graph with $V(G) \times V(H)$ as its vertex set, where two vertices $(u_1, v_1)$ and $(u_2, v_2)$ are adjacent in $G \Box H$ if and only if either $u_1 = u_2$ and $v_1 v_2\in E(H)$, or $v_1 = v_2$ and $u_1 u_2\in E(G)$. 
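The adjacency rule of $G\Box H$ translates directly into code. The sketch below (plain Python; helper names are ours, and vertex sets are recovered from the edge lists, so isolated vertices are ignored) builds $C_5\Box \mathcal{K}_2$:

```python
from itertools import combinations, product

def cartesian_product(EG, EH):
    """Build the vertex and edge sets of the cartesian product:
    (u1, v1) ~ (u2, v2) iff u1 == u2 and v1v2 in E(H),
    or v1 == v2 and u1u2 in E(G)."""
    VG = {u for e in EG for u in e}
    VH = {v for e in EH for v in e}
    EGs = {frozenset(e) for e in EG}
    EHs = {frozenset(e) for e in EH}
    V = sorted(product(VG, VH))
    E = [(p, q) for p, q in combinations(V, 2)
         if (p[0] == q[0] and frozenset((p[1], q[1])) in EHs)
         or (p[1] == q[1] and frozenset((p[0], q[0])) in EGs)]
    return V, E

# C_5 x K_2: two 5-cycles joined by a perfect matching.
C5 = [('u1', 'u2'), ('u2', 'u3'), ('u3', 'u4'), ('u4', 'u5'), ('u5', 'u1')]
K2 = [('v1', 'v2')]
V, E = cartesian_product(C5, K2)
print(len(V), len(E))  # prints: 10 15
```

The $10$ vertices and $5+5+5=15$ edges agree with the figure of $C_5\Box \mathcal{K}_2$.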
\begin{figure}[h]\centering \begin{tikzpicture}[line width=1.1,scale=0.9] \tikzstyle{every node}=[inner sep=0pt, minimum width=4.5pt] \draw (-3.4,0) { +(0:1) node[draw, circle, fill=gray] (u1) {} +(72:1) node[draw, circle, fill=gray] (u2) {} +(144:1) node[draw, circle, fill=gray] (u3) {} +(216:1) node[draw, circle, fill=gray] (u4) {} +(288:1) node[draw, circle, fill=gray] (u5) {} +(0:1.3) node {\small $u_1$} +(72:1.3) node {\small $u_2$} +(144:1.3) node {\small $u_3$} +(216:1.3) node {\small $u_4$} +(288:1.3) node {\small $u_5$} (u1) to (u2) to (u3) to (u4) to (u5) to (u1) }; \draw (0,1.9) { +(0.3,0) node[draw, circle, fill=blue] (v1) {} +(3.3,0) node[draw, circle, fill=green] (v2) {} (v1)+(0,0.3) node {\small $v_1$} (v2)+(0,0.3) node {\small $v_2$} (v1) to (v2) }; \draw (0,0) { +(0:1) node[draw, circle, fill=blue] (v1) {} +(72:1) node[draw, circle, fill=blue] (v2) {} +(144:1) node[draw, circle, fill=blue] (v3) {} +(216:1) node[draw, circle, fill=blue] (v4) {} +(288:1) node[draw, circle, fill=blue] (v5) {} (v1)+(3,0) node[draw, circle, fill=green] (v11) {} (v2)+(3,0) node[draw, circle, fill=green] (v12) {} (v3)+(3,0) node[draw, circle, fill=green] (v13) {} (v4)+(3,0) node[draw, circle, fill=green] (v14) {} (v5)+(3,0) node[draw, circle, fill=green] (v15) {} (v1)+(-0.7,0) node {\small $(u_1,v_1)$} (v2)+(0,0.3) node {\small $(u_2,v_1)$} (v3)+(-0.3,0.3) node {\small $(u_3,v_1)$} (v4)+(-0.3,-0.3) node {\small $(u_4,v_1)$} (v5)+(0,-0.25) node {\small $(u_5,v_1)$} (v11)+(0.7,0) node {\small $(u_1,v_2)$} (v12)+(0,0.3) node {\small $(u_2,v_2)$} (v13)+(0.65,-0.2) node {\small $(u_3,v_2)$} (v14)+(0.65,0.2) node {\small $(u_4,v_2)$} (v15)+(0,-0.25) node {\small $(u_5,v_2)$} (v1) to (v2) to (v3) to (v4) to (v5) to (v1) (v11) to (v12) to (v13) to (v14) to (v15) to (v11) (v1) to (v11) (v2) to (v12) (v3) to (v13) (v4) to (v14) (v5) to (v15) }; \end{tikzpicture} \caption{\small Cartesian product of $C_5$ and $\mathcal{K}_2$.} \end{figure} Let $\pi_{G}: G\Box H \rightarrow G$ and 
$\pi_{H}:G\Box H \rightarrow H$ be the projection mappings, given by \[ \pi_{G}(u,v)=u \text{ for all } (u,v)\in V(G\Box H) \text{ and } \pi_{H}(u,v)=v \text{ for all } (u,v)\in V(G\Box H). \] Thus, it is not difficult to see that the mappings $\pi_{G}$ and $\pi_{H}$ are weak surjective uniform homomorphisms of graphs. For the rest of this section, let $s_{c(G)}\in V(c(G))\setminus V(G)$, $s_{c(H)}\in V(c(H))\setminus V(H)$ and $s_{c(G\Box H)}\in V(c(G \Box H))\setminus V(G \Box H)$. \medskip Now, let ${\bf a}\in \mathbb{N}^{V(G)}$ be a configuration of $c(G)$, ${\bf b}\in \mathbb{N}^{V(H)}$ be a configuration of $c(H)$, \vspace{-5mm} \begin{figure}[h] \centering \begin{tikzpicture}[line width=1.1,scale=0.9] \tikzstyle{every node}=[inner sep=0pt, minimum width=4.5pt] \draw (0,0) { +(0:1) node[draw, circle, fill=blue] (v1) {} +(72:1) node[draw, circle, fill=blue] (v2) {} +(144:1) node[draw, circle, fill=blue] (v3) {} +(216:1) node[draw, circle, fill=blue] (v4) {} +(288:1) node[draw, circle, fill=blue] (v5) {} (v1)+(3,0) node[draw, circle, fill=green] (v11) {} (v2)+(3,0) node[draw, circle, fill=green] (v12) {} (v3)+(3,0) node[draw, circle, fill=green] (v13) {} (v4)+(3,0) node[draw, circle, fill=green] (v14) {} (v5)+(3,0) node[draw, circle, fill=green] (v15) {} (v1) to (v2) to (v3) to (v4) to (v5) to (v1) (v11) to (v12) to (v13) to (v14) to (v15) to (v11) (v1) to (v11) (v2) to (v12) (v3) to (v13) (v4) to (v14) (v5) to (v15) (v1)+(0.20,-0.22) node {\small $3$} (v2)+(0.0,0.27) node {\small $2$} (v3)+(-0.25,0.0) node {\small $6$} (v4)+(-0.25,0.0) node {\small $5$} (v5)+(0.0,-0.25) node {\small $4$} (v11)+(0.25,0) node {\small $4$} (v12)+(0.0,0.27) node {\small $3$} (v13)+(-0.20,0.18) node {\small $7$} (v14)+(-0.2,-0.18) node {\small $6$} (v15)+(0.0,-0.25) node {\small $5$} +(-3.2,1) node {\small ${\bf a}\Box{\bf b}$} }; \draw (-3.2,0) { +(v1) node[draw, circle, fill=gray] (v16) {} +(v2)node[draw, circle, fill=gray] (v17) {} +(v3) node[draw, circle, fill=gray] (v18) {} 
+(v4) node[draw, circle, fill=gray] (v19) {} +(v5) node[draw, circle, fill=gray] (v20) {} (v16) -- (v17) -- (v18) -- (v19) -- (v20) -- (v16) +(-1,0) node {\small ${\bf a}$} (v16)+(0.2,0) node {\small $2$} (v17)+(0.0,0.27) node {\small $1$} (v18)+(-0.25,0.0) node {\small $5$} (v19)+(-0.25,0.0) node {\small $4$} (v20)+(0.0,-0.25) node {\small $3$} }; \draw (0,2.8) { +(v5)node[draw, circle, fill=blue] (v21) {} +(v15) node[draw, circle, fill=green] (v22) {} (v21) to (v22) (v21)+(0,0.27) node {\small $1$} (v22)+(0,0.27) node {\small $2$} +(-1.5,0.3) node {\small ${\bf b}$} }; \end{tikzpicture} \caption{\small Cartesian product of configurations.} \label{cartproconfi} \end{figure} and let ${\bf a}\Box {\bf b}\in \mathbb{N}^{V(G\Box H)}$ be the configuration of $c(G\Box H)$ given by \[ ({\bf a}\Box {\bf b})_{(u,v)}={\bf a}_u+{\bf b}_v \text{ for all } u\in V(G) \text{ and } v \in V(H). \] The following lemma shows that the cartesian product of configurations of $c(G)$ and $c(H)$ is compatible with the toppling operators of $c(G)$, $c(H)$ and $c(G\Box H)$. \begin{Lemma}\label{recurrent} Let $G$ and $H$ be multigraphs, ${\bf a}\in \mathbb{N}^{V(G)}$ be a configuration of $c(G)$, and ${\bf b}\in \mathbb{N}^{V(H)}$ be a configuration of $c(H)$. Then \begin{description} \item[$(i)$] If ${\bf a}$ and ${\bf b}$ are stable configurations, then ${\bf a}\Box {\bf b}$ is a stable configuration of $c(G\Box H)$, \item[$(ii)$] If ${\bf a}$ and ${\bf b}$ are recurrent configurations, then ${\bf a}\Box {\bf b}$ is a recurrent configuration of $c(G\Box H)$. \end{description} \end{Lemma} \begin{proof} $(i)$ If ${\bf a}$ and ${\bf b}$ are stable configurations of $c(G)$ and $c(H)$ respectively, then \[ {\bf a}_u \leq {\rm deg}_{c(G)}(u)-1 \text{ for all } u\in V(G) \text{ and }{\bf b}_v \leq {\rm deg}_{c(H)}(v)-1 \text{ for all } v\in V(H). 
\] Hence $({\bf a}\Box {\bf b})_{(u,v)}={\bf a}_u+{\bf b}_v \leq {\rm deg}_{c(G)}(u)+{\rm deg}_{c(H)}(v)-2={\rm deg}_{c(G\Box H)}((u,v))-1$, that is, ${\bf a}\Box {\bf b}$ is a stable configuration of $c(G\Box H)$. $(ii)$ We will use the burning algorithm~\ref{burning} to prove the second part of this lemma. Since the sink $s_G$ of $c(G)$ is adjacent to all the vertices of $G$, we have $\sum_{i=1}^n \Delta_{u_i}={\bf 1}$. \begin{Claim} Let ${\bf a}$ be a recurrent configuration of $c(G)$ and ${\bf b}$ be a recurrent configuration of $c(H)$. Also, let \[ {\bf a}_{i}= \begin{cases} {\bf a}+{\bf 1} &\text{ if } i=1\\ {\bf a}_{i-1}-\Delta_{u_{i-1}} &\text{ if } i=2,\ldots,n, \end{cases} \text{ and } {\bf b}_{j}= \begin{cases} {\bf b}+{\bf 1} &\text{ if } j=1\\ {\bf b}_{j-1}-\Delta_{v_{j-1}} & \text{ if } j=2,\ldots,m, \end{cases} \] such that the vertex $u_{i}$ is an unstable vertex in ${\bf a}_{i}$ for all $i=1,\ldots,n$ and the vertex $v_{j}$ is an unstable vertex in ${\bf b}_{j}$ for all $j=1,\ldots,m$. If ${\bf c}={\bf a}\Box {\bf b}$, ${\bf c}_{(1,1)}={\bf a}\Box {\bf b}+{\bf 1}={\bf a}_1\Box {\bf b}={\bf a}\Box {\bf b}_1$, and \[ {\bf c}_{(i,j)}= \begin{cases} {\bf c}_{(i-1,m)}-\Delta_{(u_{i-1},v_m)} & \text{ if } i=2,\ldots,n \text{ and } j=1,\\ {\bf c}_{(i,j-1)}-\Delta_{(u_i,v_{j-1})} & \text{ otherwise, } \end{cases} \] then the vertex $(u_i,v_j)$ is an unstable vertex in ${\bf c}_{(i,j)}$ for all $i=1,\ldots,n$ and $j=1,\ldots,m$. \end{Claim} \begin{proof} Since the vertex $u_{i}$ is an unstable vertex in ${\bf a}_{i}$ for all $i=1,\ldots,n$ and the vertex $v_{j}$ is an unstable vertex in ${\bf b}_{j}$ for all $j=1,\ldots,m$, then $({\bf a}_{i})_{u_{i}} \geq {\rm deg}_{c(G)} (u_i)$ for all $i=1,\ldots,n$ and $({\bf b}_{j})_{v_{j}} \geq {\rm deg}_{c(H)} (v_j)$ for all $j=1,\ldots,m$. 
\medskip Now, ${\bf c}_{(i,1)}={\bf c}_{(1,1)}-\sum_{1\le k\leq i-1} \sum_{1\le l\leq m} \Delta_{(u_{k},v_l)} =({\bf a}_1- \sum_{1\le k\leq i-1} \Delta_{u_{k}})\Box {\bf b}={\bf a}_{i}\Box {\bf b}$ for all $i=1,\ldots,n$. Thus, \begin{eqnarray*} ({\bf c}_{(i,1)})_{(u_i,v_1)}&=&({\bf a}_{i}\Box {\bf b})_{(u_i,v_1)}=({\bf a}_{i})_{u_i}+{\bf b}_{v_1}=({\bf a}_{i})_{u_i}+({\bf b}_1)_{v_1}-1 \geq {\rm deg}_{c(G)} (u_i)+{\rm deg}_{c(H)} (v_1)-1\\ &=&{\rm deg}_{c(G\Box H)} ((u_i,v_1)) \text{ for all }i=1,\ldots,n. \end{eqnarray*} Moreover, since ${\bf c}_{(i,j)}={\bf a}_{i}\Box {\bf b}-\sum_{1\le l\leq j-1} \Delta_{(u_{i},v_l)}= ({\bf a}_{i}-{\bf 1})\Box {\bf b}_1-\sum_{1\le l\leq j-1} \Delta_{(u_{i},v_l)}$, then \[ ({\bf c}_{(i,j)})_{(u_i,v_j)}=({\bf a}_{i})_{u_i}+ ({\bf b}_j)_{v_j}-1 \geq {\rm deg}_{c(G)} (u_i)+{\rm deg}_{c(H)} (v_j)-1={\rm deg}_{c(G\Box H)} ((u_i,v_j)) \] for all $i=1,\ldots,n$ and $j=1,\ldots,m$. Therefore $(u_i,v_j)$ is an unstable vertex of ${\bf c}_{(i,j)}$ for all $i=1,\ldots,n$ and $j=1,\ldots,m$. \end{proof} Finally, by using part $(i)$ of this lemma and the previous claim we obtain that ${\bf a}\Box {\bf b}$ is recurrent. \end{proof} The next example illustrates the previous lemma: \begin{Example} Let $G\cong H\cong \mathcal{K}_2$ with $V(G)=\{u_1,u_2\}$ and $V(H)=\{v_1,v_2\}$ as vertex sets, ${\bf a}=(1,1)$ be a recurrent configuration of $c(G)$ and ${\bf b}=(1,0)$ be a recurrent configuration of $c(H)$. Hence ${\bf c}=(2,1,2,1)=(1,1)\Box (1,0)$ is a recurrent configuration of $c(G\Box H)$, as shown in the following figure. 
\begin{figure}[h] \centering \label{toppling} \begin{tikzpicture}[line width=1.1pt,scale=0.95] \tikzstyle{every node}=[inner sep=0pt, minimum width=4.5pt] \draw (-7.0,0) { +(0,0) node[draw, circle, fill=Magenta] (v1) {} +(1.5,0) node[draw, circle, fill=gray] (v2) {} +(1.5,1.5) node[draw, circle, fill=gray] (v3) {} +(0,1.5) node[draw, circle, fill=gray] (v4) {} (v1) -- (v2) -- (v3)--(v4)--(v1) (v1)+(0,-0.3) node {\small $3$} (v2)+(0,-0.3) node {\small $3$} (v3)+(0,0.3) node {\small $2$} (v4)+(0,0.3) node {\small $2$} +(0.75,-0.75) node {\small ${\bf c}_{(1,1)}$} +(2.5,-0.75) node {\small $-\Delta_{(u_1,v_1)}=$} }; \draw (-3.5,0) { +(0,0) node[draw, circle, fill=gray] (v5) {} +(1.5,0) node[draw, circle, fill=gray] (v6) {} +(1.5,1.5) node[draw, circle, fill=gray] (v7) {} +(0,1.5) node[draw, circle, fill=Magenta] (v8) {} (v5) -- (v6) -- (v7)--(v8)--(v5) (v5)+(0,-0.3) node {\small $0$} (v6)+(0,-0.3) node {\small $4$} (v7)+(0,0.3) node {\small $2$} (v8)+(0,0.3) node {\small $3$} +(0.75,-0.75) node {\small ${\bf c}_{(1,2)}$} +(2.5,-0.75) node {\small $-\Delta_{(u_1,v_2)}=$} }; \draw (0,0) { +(0,0) node[draw, circle, fill=gray] (v9) {} +(1.5,0) node[draw, circle, fill=Magenta] (v10) {} +(1.5,1.5) node[draw, circle, fill=gray] (v11) {} +(0,1.5) node[draw, circle, fill=gray] (v12) {} (v9) -- (v10) -- (v11)--(v12)--(v9) (v9)+(0,-0.3) node {\small $1$} (v10)+(0,-0.3) node {\small $4$} (v11)+(0,0.3) node {\small $3$} (v12)+(0,0.3) node {\small $0$} +(0.75,-0.75) node {\small ${\bf c}_{(2,1)}$} +(2.5,-0.75) node {\small $-\Delta_{(u_2,v_1)}=$} }; \draw (3.5,0) { +(0,0) node[draw, circle, fill=gray] (v13) {} +(1.5,0) node[draw, circle, fill=gray] (v14) {} +(1.5,1.5) node[draw, circle, fill=Magenta] (v15) {} +(0,1.5) node[draw, circle, fill=gray] (v16) {} (v13) -- (v14) -- (v15)--(v16)--(v13) (v13)+(0,-0.3) node {\small $2$} (v14)+(0,-0.3) node {\small $1$} (v15)+(0,0.3) node {\small $4$} (v16)+(0,0.3) node {\small $0$} +(0.75,-0.75) node {\small ${\bf c}_{(2,2)}$} +(2.5,-0.75) 
node {\small $-\Delta_{(u_2,v_2)}=$} }; \draw (7.0,0) { +(0,0) node[draw, circle, fill=gray] (v17) {} +(1.5,0) node[draw, circle, fill=gray] (v18) {} +(1.5,1.5) node[draw, circle, fill=gray] (v19) {} +(0,1.5) node[draw, circle, fill=gray] (v20) {} (v17) -- (v18) -- (v19)--(v20)--(v17) (v17)+(0,-0.3) node {\small $2$} (v18)+(0,-0.3) node {\small $2$} (v19)+(0,0.3) node {\small $1$} (v20)+(0,0.3) node {\small $1$} +(0.75,-0.75) node {\small ${\bf c}$} }; \end{tikzpicture} \caption{\small The topplings of the configuration ${\bf c}=(3,2,3,2)$ of $c(C_4)$.} \end{figure} \vspace{-3mm} \end{Example} The next theorem shows that the mappings $\pi_{G}$ and $\pi_{H}$ induce homomorphisms of groups between the sandpile groups of the cones of $G$ and $G\Box H$, and $H$ and $G\Box H$; respectively. \begin{Theorem}\label{cartesian} Let $G$ and $H$ be two multigraphs, and ${\bf e}_H$ be the identity of the sandpile group of the cone of $H$. Then the mapping $\widetilde{\pi}_G: SP(c(G), s_{c(G)})\rightarrow SP(c(G\Box H), s_{c(G\Box H)})$ given by \[ \widetilde{\pi}_G ({\bf a})={\bf a}\Box {\bf e}_H, \] is an injective homomorphism of groups. \end{Theorem} \begin{proof} Since ${\bf e}_H$ is recurrent, then using lemma~\ref{recurrent} $(ii)$, $\widetilde{\pi}_G ({\bf a})={\bf a}\Box {\bf e}_H$ is a recurrent configuration of $c(G\Box H)$ for all ${\bf a}\in SP(c(G), s_{c(G)})$; that is, the mapping $\widetilde{\pi}_G$ is well defined. Now, we will prove that $\widetilde{\pi}_G$ is a homomorphism of groups. 
Let ${\bf a},{\bf b}\in SP(c(G), s_{c(G)})$; then \begin{eqnarray*} \widetilde{\pi}_G ({\bf a} \oplus {\bf b})&=&({\bf a}\oplus {\bf b})\Box {\bf e}_H= s({\bf a}+ {\bf b})\Box {\bf e}_H=({\bf a}+ {\bf b})\Box {\bf e}_H \,(\text{mod }L(c(G\Box H),s_{c(G\Box H)}))\\ &=&{\bf a}\Box {\bf e}_H+{\bf b}\Box {\bf e}_H = s({\bf a}\Box {\bf e}_H + {\bf b}\Box{\bf e}_H) \,(\text{mod }L(c(G\Box H),s_{c(G\Box H)}))= {\bf a}\Box {\bf e}_H \oplus {\bf b}\Box{\bf e}_H\\ &=&\widetilde{\pi}_G ({\bf a})\oplus \widetilde{\pi}_G ({\bf b}), \end{eqnarray*} and therefore $\widetilde{\pi}_G$ is a homomorphism of groups. Finally, $\widetilde{\pi}_G ({\bf a})=\widetilde{\pi}_G ({\bf b})$ if and only if ${\bf a}\Box {\bf e}_H={\bf b}\Box {\bf e}_H$ if and only if ${\bf a}={\bf b}$, and therefore $\widetilde{\pi}_G$ is an injective homomorphism of groups. \end{proof} \begin{Example} Using the CSandPile program we get that $SP(c(\mathcal{K}_2), s_{c(\mathcal{K}_2)})=\mathbb{Z}_3$ is generated by $(1,0)$ with identity $(1,1)$, $SP(c(C_5),s_{c(C_5)})=\mathbb{Z}_{11}^{2}$ is generated by $(2,1,1,1,1)$ and $(1,2,1,1,1)$ with identity $e=(2,2,2,2,2)$ (see also~\cite[page 5]{corirossin}), and $SP(c(C_5\Box \mathcal{K}_2))=\mathbb{Z}_{11\cdot 29} \oplus \mathbb{Z}_{3\cdot 11\cdot 29}$. Moreover, using the mapping $\pi_{\mathcal{K}_2}$ we have that \[ \pi_{\mathcal{K}_2}(1,0)=(3,3,3,3,3,2,2,2,2,2) \] is a generator of a subgroup of $SP(c(C_5\Box \mathcal{K}_2))$ isomorphic to $\mathbb{Z}_3$, and using the mapping $\pi_{C_5}$ we have that \[ \pi_{C_5}(2,1,1,1,1)=(3,2,2,2,2,3,2,2,2,2) \text{ and } \pi_{C_5}(1,2,1,1,1)=(2,3,2,2,2,2,3,2,2,2) \] are generators of subgroups of $SP(c(C_5\Box \mathcal{K}_2))$ isomorphic to $\mathbb{Z}_{11}$. \end{Example} \begin{Remark} If $n> 1$ and ${\bf e}_H$ is the identity of $SP(c_n(H), s_{c_n(H)})$, then the mapping given by \[ \widetilde{\pi}_G ({\bf a})={\bf a}\Box {\bf e}_H \] does not necessarily send stable configurations to stable configurations.
For instance, the vector $(3,3)$ is the identity of $c_3(Q_1)$ and $(6,6,6,6)=\widetilde{\pi}_G ((3,3))=(3,3)\Box (3,3)$ is a non-stable configuration of $c_3(Q_2)$. However, the non-canonical mapping $$\hat{\pi}_G: SP(c_n(G), s_{c_n(G)})\rightarrow SP(c_n(G\Box H), s_{c_n(G\Box H)})$$ given by $\hat{\pi}_G ({\bf a})=[\widetilde{\pi}_G ({\bf a})]$ is an injective homomorphism of groups. \end{Remark} \section{The sandpile group of $c(Q_d)$} The hypercube of dimension $d$ is the cartesian product of $d$ copies of the complete graph with two vertices ${\mathcal K}_2$. The structure of the sandpile group of the hypercube is complex; see, for instance,~\cite{Bai} for a description of the Sylow $p$-group of $SP(Q_d)$ when $p$ is odd, and~\cite{cartesian} for a description of the sandpile groups of cartesian products of complete graphs in general. In this section we give an explicit combinatorial and algebraic description of a set of generators of the sandpile group of the cone of the hypercube of dimension $d$; see theorem~\ref{cQd}. We will mainly use theorems~\ref{uniformhomeo} and~\ref{cartesian}, developed in the previous sections, to get this description. \medskip First of all, we fix some notation that will be needed in order to establish the main theorem. Let \[ Q_d=\Box_{i=1}^d {\mathcal K}_2=\underbrace{{\mathcal K}_2 \Box \cdots \Box {\mathcal K}_2}_{d \text{ copies of } {\mathcal K}_2}, \] with vertex set $V(Q_d)=\{v_{\bf a} \, | \, {\bf a}\in \{0,1\}^d\}$ and edge set \[ E(Q_d)=\{v_{\bf a}v_{{\bf a}'} \, | \, {\bf a},{\bf a}'\in \{0,1\}^d \text{ and } {\bf a}-{\bf a}'=\pm e_i \text{ for some } 1\leq i \leq d \}. \] Moreover, for all $\beta \in \{0,1\}^d$, let $Q_{\beta}=Q_d[\{ v_{\bf a} \, | \, {\rm supp}({\bf a}) \subseteq {\rm supp}(\beta) \}]$ be an induced subgraph of $Q_d$, where ${\rm supp}({\bf c})=\{i \mid {\bf c}_i\neq 0 \}$.
It is not difficult to note that: \begin{itemize} \item $Q_{\beta} = \Box_{i\in {\rm supp}(\beta)} Q_{e_i}\cong \Box_{i=1}^{|\beta|} {\mathcal K}_2=Q_{|\beta|} $, where $|\beta|=\sum_{i=1}^d \beta_i$, \item $Q_{\beta'}=Q_{\beta} \Box Q_{\beta'-\beta}$ for all ${\rm supp}(\beta) \subset {\rm supp}(\beta')$, in particular $Q_d=Q_{(1,\ldots,1)}=Q_\beta \Box Q_{{\bf 1}-\beta}$ for all $\beta \in \{0,1\}^d$. \end{itemize} \vspace{-6mm} \begin{figure}[h] \centering \begin{tikzpicture}[line width=1pt, scale=1.1] \path (0,0) coordinate (v1); \path (1.3,0) coordinate (v2); \path (226:1.3)+(1.3,0.2) coordinate (v3); \path (226:1.3)+(0,0.2) coordinate (v4); \path (0,0)+(0,1.3) coordinate (v5); \path (1.3,0)+(0,1.3) coordinate (v6); \path (226:1.3)+(1.3,1.5) coordinate (v7); \path (226:1.3)+(0,1.5) coordinate (v8); \draw[line width=2pt] (v1)--(v2)--(v3)--(v4)--(v1); \draw[blue,line width=1pt] (v1)--(v2)--(v3)--(v4)--(v1); \draw[line width=2pt] (v1)--(v5); \draw[red,line width=1pt] (v1)--(v5); \draw[line width=1.5pt] (v5)--(v6)--(v7)--(v8)--(v5); \draw[line width=1.5pt] (v2)--(v6) (v3)--(v7) (v4)--(v8); \draw[fill=white] (v1) circle(1.7pt); \draw[fill=blue] (v2) circle(1.7pt) (v3) circle(1.7pt) (v4) circle(1.7pt); \draw[fill=red] (v5) circle(1.7pt); \draw[fill=gray] (v6) circle(1.7pt) (v7) circle(1.7pt) (v8) circle(1.7pt); \draw (v1)+(-0.46,0.07) node {\scriptsize $v_{(0,0,0)}$}; \draw (v2)+(0.55,-0.15) node {\scriptsize $v_{(1,0,0)}$}; \draw (v3)+(0.40,-0.25) node {\scriptsize $v_{(1,0,1)}$}; \draw (v4)+(-0.45,-0.15) node {\scriptsize $v_{(0,0,1)}$}; \draw (v5)+(-0.0,0.20) node {\scriptsize $v_{(0,1,0)}$}; \draw (v6)+(0.55,-0.05) node {\scriptsize $v_{(1,1,0)}$}; \draw (v7)+(0.48,-0.15) node {\scriptsize $v_{(1,1,1)}$}; \draw (v8)+(-0.50,-0.05) node {\scriptsize $v_{(0,1,1)}$}; \draw[text=red] (v3)+(0.06,1.82) node {\scriptsize $Q_{\mbox{} \hspace{-0.7mm} (0,1,0)}$}; \draw[text=blue] (v3)+(0.95,0.25) node {\scriptsize $Q_{(1,0,1)}$}; \draw (v2)+(0.9,0.6) node {\scriptsize 
$Q_3\cong Q_{(1,1,1)}$}; \end{tikzpicture} \caption{\small The hypercube $Q_3\cong Q_{(1,1,1)}$, where the hypercube \textcolor{blue}{$Q_{(1,0,1)}\cong Q_2$} is colored in blue, and the hypercube \textcolor{red}{$Q_{(0,1,0)}\cong Q_1$} is colored in red.} \end{figure} Now, for all ${\bf 0}\neq \beta\in \{0,1\}^d$, let $f_{\beta}:c(Q_{\beta}) \rightarrow c(\mathcal{K}_2(|\beta|))$ be the surjective $V(\mathcal{K}_2(|\beta|))$-uniform homomorphism of graphs given by \[ f_{\beta}(v)= \begin{cases} v_1 & \text{ if } v=v_{\bf a} \text{ and } \beta\cdot {\bf a} \text{ is even},\\ v_2 & \text{ if } v=v_{\bf a} \text{ and } \beta\cdot {\bf a} \text{ is odd},\\ s_{c(\mathcal{K}_2(|\beta|))}& \text{ if } v=s_{c(Q_{\beta})}, \end{cases} \] and for all $\beta'\in \{0,1\}^d$ such that ${\rm supp}(\beta) \subseteq {\rm supp}(\beta')$, let $\pi_{\beta,\beta'}$ be the projection mapping of $Q_{\beta'}$ onto $Q_{\beta}$. \[ \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] {c(Q_{\beta'}) & c(Q_{\beta}) & c(\mathcal{K}_2(|\beta|))\\ }; \path[->>,font=\scriptsize] (m-1-1) edge node[auto] {$\check{\pi}_{\beta,\beta'}$} (m-1-2); \path[->>,font=\scriptsize] (m-1-2) edge node[auto] {$f_{\beta}$} (m-1-3); \end{tikzpicture} \] Also, let $\widetilde{K}_{\beta,\beta'}:={\rm Im}(\widetilde{\pi}_{\beta,\beta'}\circ \widetilde{f}_{\beta})$, and let $g_{\beta,\beta'}: \mathbb{N}^2 \rightarrow \mathbb{N}^{V(Q_{\beta'})}$ be given by \[ g_{\beta,\beta'} (r,t)_{v_{\bf a}}= \begin{cases} r & \text{ if } \beta\cdot {\bf a} \text{ is even},\\ t & \text{ if } \beta\cdot {\bf a} \text{ is odd}. \end{cases} \] If $\beta'={\bf 1}$, then we simply denote $\pi_{\beta,\beta'}$ by $\pi_{\beta}$, $\widetilde{K}_{\beta,\beta'}$ by $\widetilde{K}_{\beta}$, and $g_{\beta,\beta'}$ by $g_{\beta}$. Note that, if $|\beta|=1$, then $f_{\beta}$ is the identity mapping and if $\beta=\beta'$, then $\pi_{\beta,\beta'}$ is the identity mapping.
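The configurations $g_{\beta}(r,t)$ are straightforward to generate by computer. The following Python sketch (ours, not part of the paper; the chosen vertex ordering, with the first coordinate varying fastest, is a convention) lists $V(Q_d)$ and evaluates $g_{\beta}(r,t)$, in particular the generators $\Gamma_{\beta}=g_{\beta}(d,d-|\beta|)$ that appear below.

```python
def vertices(d):
    """Vertices of Q_d as 0/1 tuples; coordinate 1 varies fastest, so for
    d = 2 the order is (0,0), (1,0), (0,1), (1,1)."""
    return [tuple((i >> k) & 1 for k in range(d)) for i in range(2 ** d)]

def g(beta, r, t):
    """The configuration g_beta(r,t): value r on vertices v_a with beta.a
    even, and value t on vertices with beta.a odd."""
    d = len(beta)
    return [r if sum(b * a for b, a in zip(beta, v)) % 2 == 0 else t
            for v in vertices(d)]

# The generators Gamma_beta = g_beta(d, d - |beta|) for d = 2:
for beta in [(1, 0), (0, 1), (1, 1)]:
    print(beta, g(beta, 2, 2 - sum(beta)))
```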
\begin{Proposition}\label{main} Let $d$ be a natural number, $s=V(c(Q_d))\setminus V(Q_d)$, and $\beta,\beta' \in \{0,1\}^d$. Then \begin{description} \item[$(i)$] $ \widetilde{K}_{\beta}=\{ g_{\beta} (r,t)+ (d-|\beta|){\bf 1}\: | \: 0\leq r,t \leq d \text{ and either } r=|\beta| \text{ or } t=|\beta| \} \triangleleft SP(c(Q_{d}),s)$, \item[$(ii)$] $\widetilde{K}_{\beta}$ is generated by $g_{\beta} (d,d-|\beta|)$ with identity $d{\bf 1}$ and $\widetilde{K}_{\beta} \cong {\mathbb Z}_{2|\beta|+1}$, \item[$(iii)$] $\widetilde{\pi}_{\beta}(SP(c(Q_{\beta}),s))\cap \widetilde{\pi}_{\beta'}(SP(c(Q_{\beta'}),s))= \widetilde{\pi}_{\beta\odot \beta'}(SP(c(Q_{\beta\odot \beta'}),s))$, \end{description} where $({\bf a} \odot {\bf b})_i={\bf a}_i\cdot {\bf b}_i$ for all $i$. \end{Proposition} \begin{proof} $(i)$ and $(ii)$: By theorems~\ref{uniformhomeo} and~\ref{cartesian} we get that $\widetilde{\pi}_{\beta}$ and $\widetilde{f}_{\beta}$ are injective homomorphisms of groups. \[ \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] { SP(c(\mathcal{ K}_2(|\beta|)), s) & SP(c(Q_{\beta}),s) & SP(c(Q_d), s) \\}; \path[right hook->,font=\scriptsize] (m-1-2) edge node[auto] {$\widetilde{\pi}_{\beta}$} node[auto,swap] {thm.~\ref{cartesian}} (m-1-3); \path[right hook->,font=\scriptsize] (m-1-1) edge node[auto,swap] {thm.~\ref{uniformhomeo}} node[auto] {$\widetilde{f}_{\beta}$} (m-1-2); \end{tikzpicture} \] By theorem~\ref{generators}, $SP(c(Q_{\beta}),s)$ is generated by $\widetilde{f}_{\beta}(|\beta|,0)$ with identity $\widetilde{f}_{\beta}(|\beta|,|\beta|)=|\beta|{\bf 1}_{Q_{\beta}}$ for all $\beta\in \{0,1\}^d$.
Moreover, since $\widetilde{K}_{\beta}={\rm Im}(\widetilde{\pi}_{\beta}\circ \widetilde{f}_{\beta})$, then \begin{eqnarray*} \widetilde{K}_{\beta}&=&\{\widetilde{\pi}_{\beta}\circ \widetilde{f}_{\beta}({\bf c}) \,| \, {\bf c} \in SP(c(\mathcal{ K}_2(|\beta|)), s) \} =\{\widetilde{f}_{\beta}({\bf c})\Box (d-|\beta|){\bf 1}_{Q_{{\bf 1}-\beta}}\, | \, {\bf c} \in SP(c(\mathcal{ K}_2(|\beta|)), s) \}\\ &=&\{ \widetilde{f}_{\beta}((r,t))\Box (d-|\beta|){\bf 1}_{Q_{{\bf 1}-\beta}} \, | \, 0\leq r,t \leq d \text{ and either } r=|\beta| \text{ or } t=|\beta|\}\\ &=&\{ g_{\beta} (r,t)+ (d-|\beta|){\bf 1}\: | \: 0\leq r,t \leq d \text{ and either } r=|\beta| \text{ or } t=|\beta| \} \text{ for all } \beta\in \{0,1\}^d. \end{eqnarray*} Thus \[ \widetilde{K}_{\beta} \cong SP(c(\mathcal{ K}_2(|\beta|)), s)\cong {\mathbb Z}_{2|\beta|+1} \triangleleft SP(c(Q_{d}),s) \text{ for all } \beta\in \{0,1\}^d \] is generated by $\widetilde{f}_{\beta}(|\beta|,0)\Box (d-|\beta|){\bf 1}_{Q_{{\bf 1}-\beta}} = g_{\beta} (d,d-|\beta|)$ and $d{\bf 1}=g_{\bf 1}(d,d)$ is the identity of $\widetilde{K}_{\beta}$. 
\begin{figure}[h]\centering \begin{tikzpicture}[line width=1pt,scale=0.85] \tikzstyle{every node}=[inner sep=0pt, minimum width=4pt] \draw (0,0) { +(90:0.82) node[draw, circle, fill=green] (w2) {} +(270:0.82) node[draw, circle, fill=blue] (w3) {} (w2) to [bend right] (w3) (w2) to [bend left] (w3) (w2)+(0,0.35) node {$0$} (w3)+(0,-0.35) node {$2$} }; \draw (3.7,0) { +(45:1.1) node[draw, circle, fill=blue] (u1) {} +(135:1.1) node[draw, circle, fill=green] (u2) {} +(225:1.1) node[draw, circle, fill=blue] (u3) {} +(315:1.1) node[draw, circle, fill=green] (u4) {} (u1) -- (u2) --(u3) -- (u4)-- (u1) (u1)+(0.2,0.2) node {$2$} (u2)+(-0.2,0.2) node {$0$} (u3)+(-0.2,-0.2) node {$2$} (u4)+(0.2,-0.2) node {$0$} +(-0.75,0.75) node {\small $Q_{(1,1,0)}$} }; \draw(8,-0.1) { +(0,0) node[draw, circle, fill=blue] (v1) {} +(0:1.3) node[draw, circle, fill=green] (v2) {} +(293:1.015) node[draw, circle, fill=green] (v3) {} +(226:1.3) node[draw, circle, fill=blue] (v4) {} (v1)+(0,1.3) node[draw, circle, fill=green] (v5) {} (v2)+(0,1.3) node[draw, circle, fill=blue] (v6) {} (v3)+(0,1.3) node[draw, circle, fill=blue] (v7) {} (v4)+(0,1.3) node[draw, circle, fill=green] (v8) {} [line width=1.5pt] (v1)--(v2)--(v3)--(v4)--(v1) [line width=1.5pt] (v5)--(v6)--(v7)--(v8)--(v5) [line width=1.5pt] (v1)--(v5) (v2)--(v6) (v3)--(v7) (v4)--(v8) (v1)+(-0.3,0.1) node {\small $3$} (v2)+(0.3,0) node {\small $1$} (v3)+(0.1,-0.3) node {\small $1$} (v4)+(-0.15,-0.3) node {\small $3$} (v5)+(0,0.35) node {\small $1$} (v6)+(0.3,0.3) node {\small $3$} (v7)+(0.25,-0.15) node {\small $3$} (v8)+(-0.3,0.05) node {\small $1$} }; \draw[right hook->,red,line width=0.5] (0.4,0) to (2.7,0); \draw (1.55,-0.3) node[red] {\scriptsize $\widetilde{f}_{(1,1,0)}$}; \draw[right hook->,line width=0.5] (4.7,0) to (6.7,0); \draw (5.85,-0.3) node {\scriptsize $\widetilde{\pi}_{(1,1,0)}$}; \draw[right hook->, bend left,blue,dashed,line width=0.5] (0.4,0.7) to (7.3,0.7); \draw[blue] (3.85,2) node {\scriptsize 
$\widetilde{\pi}_{(1,1,0)}\circ \widetilde{f}_{(1,1,0)}$} \end{tikzpicture} \caption{\small The mappings $\widetilde{f}_{(1,1,0)}$, $\widetilde{\pi}_{(1,1,0)}$, and $\widetilde{\pi}_{(1,1,0)}\circ \widetilde{f}_{(1,1,0)}$ on $Q_3$. } \end{figure} $(iii)$ Finally, let ${\bf c}\in \widetilde{\pi}_{\beta}(SP(c(Q_{\beta}),s))$. Note that $\widetilde{\pi}_{\beta}({\bf a})={\bf a} \Box {\bf e}$ for all ${\bf a}\in SP(c(Q_{\beta}),s)$, where ${\bf e}=(d-|\beta|){\bf 1}$ is the identity of $SP(c(Q_{{\bf 1}-\beta}),s)$. Then ${\bf c}\in \widetilde{\pi}_{\beta}(SP(c(Q_{\beta}),s))$ if and only if ${\bf c}_{v_{\bf a}}={\bf c}_{v_{\bf b}}$ whenever ${\bf a}\odot \beta={\bf b}\odot \beta$. Now, ${\bf c}\in \widetilde{\pi}_{\beta}(SP(c(Q_{\beta}),s))\cap \widetilde{\pi}_{\beta'}(SP(c(Q_{\beta'}),s))$ if and only if ${\bf c}_{v_{\bf a}}={\bf c}_{v_{\bf b}}$ whenever ${\bf a}\odot \beta={\bf b}\odot \beta$, and ${\bf c}_{v_{\bf a}}={\bf c}_{v_{\bf b}}$ whenever ${\bf a}\odot \beta'={\bf b}\odot \beta'$ if and only if ${\bf c}_{v_{\bf a}}={\bf c}_{v_{\bf b}}$ whenever ${\bf a}\odot (\beta\odot \beta')={\bf b}\odot (\beta\odot \beta')$ if and only if ${\bf c}\in \widetilde{\pi}_{\beta\odot \beta'}(SP(c(Q_{\beta\odot \beta'}),s))$. \end{proof} The next lemma will be useful in order to get the description of the sandpile group of the hypercube. \begin{Lemma}{(\cite[Proposition 3.1]{Bai} and \cite[Lemma 16]{cartesian})}\label{alpha} Let $A$ be an abelian group, and let $\alpha$ and $\beta$ be two endomorphisms of $A$ such that $\beta-\alpha=m\cdot 1_{A}$ for some integer $m$. Then \[ Syl_p({\rm coker} \alpha\beta) \cong Syl_p({\rm coker} \alpha \oplus {\rm coker}\beta) \] for all primes $p$ that do not divide $m$. \end{Lemma} The next theorem is the main result of this section. \begin{Theorem}\label{cQd} Let $k\geq 0$, $d\geq 1$ be natural numbers and let $c_{2k+1}(Q_d)$ be the $(2k+1)$-cone of the hypercube $Q_d$.
If $s=V(c_{2k+1}(Q_d))\setminus V(Q_d)$, then \[ SP(c_{2k+1}(Q_d),s)\cong \bigoplus_{i=0}^{d} \mathbb{Z}_{2i+2k+1}^{\binom{d}{i}}. \] Furthermore, $SP(c(Q_{d}),s)=\bigoplus_{\beta \in \{0,1\}^d} \widetilde{K}_{\beta}$. \end{Theorem} \begin{proof} Throughout this proof, $s=V(c_{i}(Q_j))\setminus V(Q_j)$ denotes the set of cone vertices of the cone under consideration. Using elementary row and column operations invertible over $\mathbb{Z}$, we obtain a matrix equivalent to $L(c_{2k+1}(Q_{d+1}),s)$: \begin{eqnarray*} L(c_{2k+1}(Q_{d+1}),s) &=& \left[ \begin{array}{cc} L(c_{2k+2}(Q_d),s) & -I_{2^d}\\ -I_{2^d} & L(c_{2k+2}(Q_d),s) \end{array} \right] \sim \left[ \begin{array}{cc} -I_{2^d} & L(c_{2k+2}(Q_d),s)\\ L(c_{2k+2}(Q_d),s) & -I_{2^d} \end{array} \right]\\ &\sim& \left[ \begin{array}{cc} I_{2^d} & -L(c_{2k+2}(Q_d),s)\\ 0 & L(c_{2k+2}(Q_d),s)^2-I_{2^d} \end{array} \right] \sim \left[ \begin{array}{cc} I_{2^d} & 0\\ 0 & L(c_{2k+1}(Q_d),s)\cdot L(c_{2k+3}(Q_d),s) \end{array} \right] \end{eqnarray*} Thus \begin{eqnarray*} |SP(c_{2k+1}(Q_{d+1}),s)| & = & |L(c_{2k+1}(Q_{d+1}),s)| = |L(c_{2k+1}(Q_{d}),s)|\cdot |L(c_{2k+3}(Q_{d}),s)|\\ & = & \prod_{i=0}^{d} |L(c_{2k+2i+1}(Q_{1}),s)|^{\binom{d}{i}}=\prod_{i=0}^{d} [(2k+2i+3)(2k+2i+1)]^{\binom{d}{i}}\\ & = & \prod_{i=0}^{d+1} (2k+2i+1)^{\binom{d+1}{i}}. \end{eqnarray*} Applying lemma~\ref{alpha} to $A=\mathbb{Z}^{2^{d}}$, $\alpha=L(c_{2k+1}(Q_{d}),s)$ and $\beta=L(c_{2k+3}(Q_{d}),s)$ (so that $\beta-\alpha=2I_{2^d}$) we get that \begin{eqnarray*} Syl_p(SP(c_{2k+1}(Q_{d+1}),s)) &\cong& Syl_p({\rm coker} \alpha\beta) \cong Syl_p({\rm coker} \alpha \oplus {\rm coker}\beta)\\ &\cong& Syl_p(SP(c_{2k+1}(Q_{d}),s)) \oplus Syl_p(SP(c_{2k+3}(Q_{d}),s)). \end{eqnarray*} Therefore, using induction on $d$ and the fact that $(2,|SP(c_{2k+1}(Q_{d+1}),s)|)=1$ we get that \[ SP(c_{2k+1}(Q_d),s)\cong \bigoplus_{i=0}^{d} \mathbb{Z}_{2i+2k+1}^{\binom{d}{i}}. \] On the other hand, we will use induction on $d$ in order to prove that $SP(c(Q_{d}),s)=\bigoplus_{\beta \in \{0,1\}^d} \widetilde{K}_{\beta}$.
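Before turning to the induction, we note that the order formula just obtained is easy to confirm numerically. The following Python sketch (ours, not part of the proof) uses the standard fact that the reduced Laplacian of $c_n(Q_d)$ with respect to the cone vertex is $L(Q_d)+nI$, so that by the matrix-tree theorem $|SP(c_n(Q_d),s)|=\det(L(Q_d)+nI)$, and compares this exact determinant with $\prod_{i=0}^{d}(2i+n)^{\binom{d}{i}}$ for odd $n$.

```python
from fractions import Fraction
from math import comb

def laplacian_Qd(d):
    """Graph Laplacian of Q_d; vertices are the integers 0..2^d-1 read as
    bit strings, and flipping one bit gives a hypercube edge."""
    n = 1 << d
    L = [[0] * n for _ in range(n)]
    for v in range(n):
        L[v][v] = d
        for k in range(d):
            L[v][v ^ (1 << k)] = -1
    return L

def det_exact(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n, det = len(A), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if A[r][i] != 0), None)
        if p is None:
            return 0
        if p != i:
            A[i], A[p] = A[p], A[i]
            det = -det
        det *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    return int(det)

def sandpile_order(d, n):
    """|SP(c_n(Q_d), s)| = det(L(Q_d) + n*I), by the matrix-tree theorem."""
    L = laplacian_Qd(d)
    for v in range(len(L)):
        L[v][v] += n
    return det_exact(L)

# Compare with the closed form prod_{i=0}^{d} (2i+1)^{C(d,i)} for n = 1.
for d in range(1, 5):
    predicted = 1
    for i in range(d + 1):
        predicted *= (2 * i + 1) ** comb(d, i)
    assert sandpile_order(d, 1) == predicted
```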
For $d=1$, the result follows from theorem~\ref{generators}. Now, for all $d\geq 1$, ${\bf 0} \neq \beta \in \{0,1\}^d$, and ${\bf a}\in \{0,1\}^{{\rm supp}(\beta)}$, let \[ \Delta_{\beta}({\bf a})=d_{c(Q_{\beta})}(v_{\bf a})e_{v_{\bf a}}-\sum_{{\rm supp}({\bf b})\subseteq {\rm supp}(\beta)}^{{\bf a}- {\bf b}=\pm e_i} e_{v_{\bf b}} \in \mathbb{Z}^{V(Q_{\beta})} \] be the toppling operator of the vertex $v_{\bf a}$ of $(c(Q_{\beta}),s)$, let $I_{\Delta}(\beta)=\langle \{ \Delta_{\beta}({\bf a}) \,| \, {\bf a}\in \{0,1\}^{{\rm supp}(\beta)}\} \rangle$ be the subgroup generated by the toppling operators of $(c(Q_{\beta}),s)$, and let \[ I_{\Gamma}(\beta)=\langle \{ \Gamma_{\beta',\beta}=g_{\beta',\beta}(|\beta|,|\beta|-|\beta'|)\, | \, {\bf 0} \neq \beta' \in \{0,1\}^{{\rm supp}(\beta)}\} \rangle \] be the subgroup generated by the generators of $\widetilde{K}_{\beta',\beta}$. If $\beta={\bf 1}$, we simply denote $\Delta_{\beta}({\bf a})$ by $ \Delta ({\bf a})$, $I_{\Delta}(\beta)$ by $I_{\Delta}$, $\Gamma_{\beta',\beta}$ by $\Gamma_{\beta'}$, and $I_{\Gamma}(\beta)$ by $I_{\Gamma}$. Since $\bigoplus_{\beta \in \{0,1\}^d} \widetilde{K}_{\beta} \triangleleft SP(c(Q_{d}),s)$ (proposition~\ref{main} $(i)$) and \[ |SP(c(Q_{d}),s)|=\prod_{i=1}^{d} (2i+1)^{\binom{d}{i}}=\prod_{\beta \in \{0,1\}^d} (2|{\beta}|+1) =\prod_{\beta \in \{0,1\}^d} |\widetilde{K}_{\beta}|=|\bigoplus_{\beta \in \{0,1\}^d} \widetilde{K}_{\beta}|, \] proving that $SP(c(Q_{d}),s)=\bigoplus_{\beta \in \{0,1\}^d} \widetilde{K}_{\beta}$ is equivalent to proving that $\mathbb{Z}^{V(Q_d)}=\langle I_{\Delta}\cup I_{\Gamma}\rangle$. Hence, we will prove this equivalent form of theorem~\ref{cQd}. \begin{Theorem}\label{cQdalternativo} Let $d\geq 1$ be a natural number. Then $\mathbb{Z}^{V(Q_d)}=\langle I_{\Delta}\cup I_{\Gamma}\rangle$. \end{Theorem} We will use induction on $d$.
For $d=1$, the result is clear because $\Delta(0)=(2,-1)$, $\Delta(1)=(-1,2)$, and by theorem~\ref{generators}, $\Gamma_1=(1,0)$ is a generator of $SP(c(Q_1),s)$. Let us assume that the result is true for all hypercubes of dimension at most $d-1$. The proof is divided into two steps: first we prove that $(2d+1)e_{v_{\bf a}} \in \langle I_{\Delta},I_{\Gamma}\rangle$ for all $v_{\bf a}\in V(Q_d)$, and then we prove that $d2^{d-1}e_{v_{\bf a}}\in \langle I_{\Delta},I_{\Gamma}\rangle$ for all $v_{\bf a}\in V(Q_d)$. \begin{Claim}\label{2d+1} If $d\geq 1$, then $(2d+1)e_{v_{\bf a}} \in \langle I_{\Delta},I_{\Gamma}\rangle$ for all $v_{\bf a}\in V(Q_d)$. \end{Claim} \begin{proof} We will fix some notation that will be useful in the following. For all ${\bf 0},{\bf 1} \neq\beta \in \{0,1\}^d$, ${\bf a} \in \{0,1\}^{{\rm supp }(\beta)}$, and ${\bf b}\in \{0,1\}^{{\rm supp }({\bf 1}-\beta)}$, let $h_{\beta}^{\bf b}: \{0,1\}^{{\rm supp}(\beta)}\rightarrow \{0,1\}^d$ be the mapping given by \[ h_{\beta}^{\bf b}({\bf a})_i= \begin{cases} {\bf a}_i & \text{ if } i \in {\rm supp}(\beta),\\ {\bf b}_i & \text{ if } i \notin {\rm supp}(\beta). \end{cases} \] Now, let $\beta \in \{0,1\}^d$ with $|\beta|=d-1$. Since $Q_d\cong Q_{\beta}\Box \mathcal{K}_2$, we have: \begin{itemize} \item $\Delta_{\beta}({\bf a})\Box {\bf 0}=\Delta (h^0_{\beta}({\bf a}))+\Delta (h^1_{\beta}({\bf a}))\in I_{\Delta}$ for all ${\bf a}\in \{0,1\}^{{\rm supp}(\beta)}$, \item ${\bf 1}_{V(Q_d)}=\sum_{{\bf a}\in \{0,1\}^d} \Delta ({\bf a})\in I_{\Delta}$, and \item $\Gamma_{\beta',\beta}\Box {\bf 0}=\Gamma_{\beta'}-{\bf 1}_{V(Q_d)}\in \langle I_{\Delta},I_{\Gamma}\rangle$ for all $\beta'$ such that ${\rm supp}(\beta')\subseteq {\rm supp}(\beta)$.
\end{itemize} Thus, if \[ g=\sum_{{\bf a}\in \{0,1\}^{{\rm supp}(\beta)}} z_{\bf a}\Delta_{\beta}({\bf a}) + \sum_{\beta'\in \{0,1\}^{{\rm supp}(\beta)}} w_{\beta'}\Gamma_{\beta',\beta} \in \langle I_{\Delta}(\beta),I_{\Gamma}(\beta)\rangle, \] then $g\Box {\bf 0}\in \langle I_{\Delta},I_{\Gamma}\rangle$. In particular, by induction hypothesis, $e_{v_{\bf a}} \in \langle I_{\Delta}(\beta),I_{\Gamma}(\beta)\rangle$ for all ${\bf a}\in \{0,1\}^{{\rm supp}(\beta)}$ and therefore \[ e_{v_{\bf a}}\Box {\bf 0}=e_{v_{h^0_{\beta}({\bf a})}}+e_{v_{h^1_{\beta}({\bf a})}} \in \langle I_{\Delta},I_{\Gamma}\rangle \text{ for all }{\bf a}\in \{0,1\}^{{\rm supp}(\beta)}, \] that is, if $e=v_{\bf a}v_{{\bf a}'}$ is an edge of $Q_d$ and $\chi_e$ is its characteristic vector, then $\chi_e\in \langle I_{\Delta},I_{\Gamma}\rangle$. Moreover, if $v_{\bf a}$ and $v_{{\bf a}'}$ are vertices of $Q_d$, then \[ e_{v_{\bf a}}+(-1)^{{\rm dist}(v_{\bf a},v_{{\bf a}'})} e_{v_{{\bf a}'}} \in \langle I_{\Delta},I_{\Gamma}\rangle, \] where ${\rm dist}(v_{\bf a},v_{{\bf a}'})$ is the distance between $v_{{\bf a}}$ and $v_{{\bf a}'}$ in $Q_d$. Finally, since $\chi_e\in \langle I_{\Delta},I_{\Gamma}\rangle$ for all $e\in E(Q_d)$, then \[ (2d+1)e_{v_{\bf a}}=\Delta({\bf a})+\sum_{{\bf a}'\in \{0,1\}^d}^{{\bf a}-{\bf a}'=\pm e_i } \chi_{v_{\bf a}v_{\bf a}'} \in \langle I_{\Delta},I_{\Gamma}\rangle \text{ for all }v_{\bf a}\in V(Q_d). \vspace{-10mm} \] \end{proof} \medskip Now, we will prove that $d2^{d-1}e_{v_{\bf a}}$ is in $\langle I_{\Delta},I_{\Gamma}\rangle$ for all $v_{\bf a}\in V(Q_d)$. \begin{Claim}\label{d2d} If $d\geq 2$, then $d2^{d-1}e_{v_{\bf a}}\in \langle I_{\Delta},I_{\Gamma}\rangle$ for all $v_{\bf a}\in V(Q_d)$. \end{Claim} \begin{proof} Again, we need to fix some notation before beginning with the proof. 
For all $n\geq 1$, let $h_{1}: \{0,1\}^{n}\rightarrow \{0,1\}^{n+1}$ be given by \[ h_{1}({\bf a})_i= \begin{cases} {\bf a}_i+1 \, (\text{mod } 2) & \text{ if } i=1,\\ {\bf a}_i & \text{ if } 2\leq i \leq n,\\ 1 & \text{ if } i=n+1, \end{cases} \] and let $h_{0}: \{0,1\}^{n}\rightarrow \{0,1\}^{n+1}$ be given by $h_{0}({\bf a})=h_{{\bf 1}_n}^0({\bf a})$. For $k=0,1$, let $H_k: \mathbb{Z}^{V(Q_n)}\rightarrow \mathbb{Z}^{V(Q_{n+1})}$ be the mapping given by \[ H_k \left( \sum_{v_{\bf a}\in V(Q_n)} z_{v_{\bf a}} e_{v_{\bf a}} \right) =\sum_{v_{\bf a}\in V(Q_n)} z_{v_{\bf a}}e_{v_{h_k({\bf a})}}. \] Also, for all $d\geq 2$, let \[ R_d=H_0\left( \frac{d}{d-1}R_{d-1}\right)+H_1\left( \frac{d}{d-1}R_{d-1}\right)+d2^{d-2}(\chi_{v_{{\bf 0}}v_{e_1}}-\chi_{v_{e_1}v_{h_1({\bf 0})}}), \] with $R_2=2(\chi_{v_{(0,0)}v_{(1,0)}} -\chi_{v_{(1,0)}v_{(1,1)}} )$. \medskip Actually, we will prove by induction on $d$ that $R_d\in \langle I_{\Delta},I_{\Gamma}\rangle$ and $d2^{d-1}e_{v_{\bf 0}}=\Gamma_{\bf 1} +R_d \in \langle I_{\Delta},I_{\Gamma}\rangle$ for all $d\geq 2$. For $d=2$, clearly $R_2\in \langle I_{\Delta},I_{\Gamma}\rangle$ because $\chi_{e}\in \langle I_{\Delta},I_{\Gamma}\rangle$ for all $e\in E(Q_2)$ and $4e_{v_{(0,0)}}=\Gamma_{(1,1)}+R_2\in \langle I_{\Delta},I_{\Gamma}\rangle$. Moreover, $4e_{v_{\bf a}}\in \langle I_{\Delta},I_{\Gamma}\rangle$ for all $v_{\bf a}\in V(Q_2)$ because $\chi_{e}\in \langle I_{\Delta},I_{\Gamma}\rangle$ for all $e\in E(Q_2)$ and $Q_2$ is connected.
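The case $d=2$ of theorem~\ref{cQdalternativo} can also be checked mechanically. The following Python sketch (ours, with the vertex order $v_{(0,0)},v_{(1,0)},v_{(0,1)},v_{(1,1)}$) stacks the toppling vectors $\Delta({\bf a})$ and the generators $\Gamma_{\beta}$ into one integer matrix; the index of the sublattice they span in $\mathbb{Z}^{4}$ equals the gcd of the $4\times 4$ minors of that matrix, which turns out to be $1$.

```python
from itertools import combinations
from math import gcd

# Vertices of Q_2 in the order (0,0), (1,0), (0,1), (1,1).
V = [(0, 0), (1, 0), (0, 1), (1, 1)]
E = {(u, v) for u in V for v in V if sum(a != b for a, b in zip(u, v)) == 1}

# Toppling vectors Delta(a) of c(Q_2): each vertex has degree 2 + 1 = 3.
deltas = [[3 if u == v else (-1 if (u, v) in E else 0) for v in V] for u in V]

def gamma(beta):
    """Gamma_beta = g_beta(2, 2 - |beta|) in the fixed vertex order."""
    return [2 if sum(b * a for b, a in zip(beta, v)) % 2 == 0 else 2 - sum(beta)
            for v in V]

gens = deltas + [gamma(b) for b in [(1, 0), (0, 1), (1, 1)]]

def idet(M):
    """Exact integer determinant by cofactor expansion (fine at 4x4)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * idet([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)) if M[0][j])

# Index of the sublattice spanned by `gens` = gcd of the maximal minors.
index = 0
for rows in combinations(gens, 4):
    index = gcd(index, abs(idet(list(rows))))
print(index)  # 1, so the Delta's and Gamma's generate all of Z^4
```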
\begin{figure}[h]\centering \begin{tikzpicture}[line width=1pt,scale=0.85] \tikzstyle{every node}=[inner sep=0pt, minimum width=4pt] \draw (0,0) { +(45:1.1) node[draw, circle, fill=blue] (u1) {} +(135:1.1) node[draw, circle, fill=green] (u2) {} +(225:1.1) node[draw, circle, fill=blue] (u3) {} +(315:1.1) node[draw, circle, fill=green] (u4) {} (u1) -- (u2) --(u3) -- (u4)-- (u1) (u1)+(0.2,0.2) node {\small $2$} (u2)+(-0.2,0.2) node {\small $0$} (u3)+(-0.2,-0.2) node {\small $2$} (u4)+(0.2,-0.2) node {\small $0$} +(-0.75,0.75) node {\small $\Gamma_{(1,1)}$} }; \draw (3.5,0) { +(45:1.1) node[draw, circle, fill=blue] (u1) {} +(135:1.1) node[draw, circle, fill=green] (u2) {} +(225:1.1) node[draw, circle, fill=blue] (u3) {} +(315:1.1) node[draw, circle, fill=green] (u4) {} (u1) -- (u2) --(u3) -- (u4)-- (u1) (u1)+(0.2,0.2) node {\small $-2$} (u2)+(-0.2,0.2) node {\small $0$} (u3)+(-0.2,-0.2) node {\small $2$} (u4)+(0.2,-0.2) node {\small $0$} }; \draw[line width=1.5pt] (3.5,0) { (u3) to (u4) (u4) to (u1) (u3)+(1.5,-0.5) node {\small $2\chi_{v_{(0,0)}v_{(1,0)}}$ } (u4)+(1.4,0.7) node {\small $-2\chi_{v_{(0,1)}v_{(1,1)}}$ } +(-0.75,0.75) node {\small $R_2$} +(-2.5,0.75) node {\small $+$} +(3,0.75) node {\small $=$} }; \draw (9,0) { +(45:1.1) node[draw, circle, fill=blue] (u1) {} +(135:1.1) node[draw, circle, fill=green] (u2) {} +(225:1.1) node[draw, circle, fill=blue] (u3) {} +(315:1.1) node[draw, circle, fill=green] (u4) {} (u1) -- (u2) --(u3) -- (u4)-- (u1) (u1)+(0.2,0.2) node {\small $0$} (u2)+(-0.2,0.2) node {\small $0$} (u3)+(-0.2,-0.2) node {\small $4$} (u4)+(0.2,-0.2) node {\small $0$} +(-0.75,0.75) node {\small $4e_{v_{(0,0)}}$} }; \end{tikzpicture} \caption{\small $\Gamma_{(1,1)}+R_2=4e_{v_{(0,0)}}$} \end{figure} Let us assume that the result is true for all the natural numbers less or equal than $d-1$. 
Since $v_{h_k({\bf a})}v_{h_k({\bf b})}\in E(Q_{n+1})$ for all $v_{\bf a}v_{\bf b}\in E(Q_n)$, $n\geq 1$, and $k=0,1$, we get that $H_k(\frac{d}{d-1}R_{d-1})\in \langle I_{\Delta},I_{\Gamma}\rangle$ for $k=0,1$. Thus, $R_d=H_0(\frac{d}{d-1}R_{d-1})+H_1(\frac{d}{d-1}R_{d-1})+d2^{d-2}(\chi_{v_{{\bf 0}}v_{e_1}}-\chi_{v_{e_1}v_{h_1({\bf 0})}}) \in \langle I_{\Delta},I_{\Gamma}\rangle$. Moreover, since $\Gamma_{\bf 1}=H_0(\frac{d}{d-1}\Gamma_{{\bf 1}_{d-1}})+H_1(\frac{d}{d-1}\Gamma_{{\bf 1}_{d-1}})$, then {\small \begin{eqnarray*} \Gamma_{\bf 1}+H_0\left(\frac{d}{d-1}R_{d-1}\right)+H_1\left(\frac{d}{d-1}R_{d-1}\right) & = & H_0\left(\frac{d}{d-1}(\Gamma_{{\bf 1}_{d-1}}+R_{d-1})\right)+ H_1\left(\frac{d}{d-1}(\Gamma_{{\bf 1}_{d-1}}+R_{d-1})\right)\\ & = & H_0(d2^{d-2}e_{v_{\bf 0}})+H_1(d2^{d-2}e_{v_{\bf 0}})=d2^{d-2} (e_{v_{h_0({\bf 0})}}+e_{v_{h_1({\bf 0})}}), \end{eqnarray*} } and therefore \[ (d2^{d-1})e_{v_{\bf 0}}=\Gamma_{\bf 1}+R_d\in \langle I_{\Delta},I_{\Gamma}\rangle. \] Since $\chi_{e}\in \langle I_{\Delta},I_{\Gamma}\rangle$ for all $e\in E(Q_d)$ and $Q_d$ is connected, we get that $d2^{d-1}e_{v_{\bf a}}\in \langle I_{\Delta},I_{\Gamma}\rangle$ for all $v_{\bf a}\in V(Q_d)$. \end{proof} Finally, by claims~\ref{2d+1} and~\ref{d2d}, $(2d+1)e_{v_{\bf a}}\in \langle I_{\Delta},I_{\Gamma}\rangle$ and $(d2^{d-1})e_{v_{\bf a}}\in \langle I_{\Delta},I_{\Gamma}\rangle$ for all ${v_{\bf a}}\in V(Q_d)$ and therefore \[ e_{v_{\bf a}}\in \langle I_{\Delta},I_{\Gamma}\rangle \text{ for all } {v_{\bf a}}\in V(Q_d) \] because $(2d+1,d2^{d-1})=(2d+1,d)=1$. \end{proof} \begin{Remark} If $n$ is odd, then theorem~\ref{cQd} says that \[ SP(c_{n}(Q_d),s)\cong \bigoplus_{i=0}^{d} \mathbb{Z}_{2i+n}^{\binom{d}{i}}. \] However, when $n$ is even, this formula is not valid. For instance, if $n=2$ and $d=2$, then \[ SP(c_{2}(Q_2),s)\cong \mathbb{Z}_{8}^2\oplus \mathbb{Z}_{3}\not\cong \mathbb{Z}_{2}\oplus \mathbb{Z}_{4}^2\oplus \mathbb{Z}_{6}=\bigoplus_{i=0}^{d} \mathbb{Z}_{2i+n}^{\binom{d}{i}}.
\] \end{Remark} \begin{Remark} In~\cite{lorenzini08}, a close relation is established between the sandpile group of a graph $G$ and the eigenvalues and eigenvectors of its Laplacian matrix. For instance, Lorenzini~\cite{lorenzini08} proved that if $\lambda$ is an integral eigenvalue of the Laplacian matrix of a graph $G$ and $\mu(\lambda)$ is the maximum number of linearly independent eigenvectors associated to $\lambda$, then the sandpile group of $G$ contains a subgroup isomorphic to $\mathbb{Z}_{\lambda}^{\mu(\lambda)-1}$; see~\cite[Proposition 2.3]{lorenzini08}. When $G$ is the $n$-cone of the hypercube of dimension $d$, we can use induction on $n$ and $d$ in order to get the eigenvalues of its Laplacian matrix. More precisely: If $n, d$ are natural numbers and $0\leq i \leq d$, then $\lambda_i=2i+n$ is an eigenvalue of the Laplacian matrix of $c_n(Q_d)$ with multiplicity $\binom{d}{i}$. Using the results in~\cite{lorenzini08} that relate the eigenvalues and eigenvectors of the Laplacian matrix of a graph to its sandpile group, one can get only a partial description of the sandpile group of $c_{2k+1}(Q_d)$. For instance, using results in~\cite{lorenzini08} one can only guarantee that $\mathbb{Z}_{3}^2 \triangleleft SP(c(Q_2),s) \cong \mathbb{Z}_{3}^2\oplus \mathbb{Z}_{5}$. The $n$-cone of the hypercube is thus an example showing that, in general, the eigenvalues and eigenvectors of the Laplacian matrix are not enough to determine the group structure of the sandpile group. \end{Remark} \begin{Remark} If ${\bf c}_i \in \widetilde{K}_{{\beta}_i}$ for $i=1,2$ with $\beta_1 \odot \beta_2=0$, then ${\bf c}_1\oplus {\bf c}_2={\bf c}_1+{\bf c}_2-d{\bf 1}$. Also, by theorem~\ref{generators} the generator $g_{\beta}(d, d-|\beta|)$ of $\widetilde{K}_{\beta}$ satisfies that \[ k\cdot g_{\beta}(d, d-|\beta|)= \begin{cases} (d-j,d) & \text{ if } k=2j\leq 2|\beta|,\\ (d,d+j-|\beta|) & \text{ if } k=2j+1 \leq 2|\beta|+1.
\end{cases} \] That is, in some cases it is easy to compute the sum of two elements of the sandpile group of $c(Q_d)$. \end{Remark} \begin{Example}\label{cq1cq2} If $d=1$, we have that $SP(c(Q_1))=\mathbb{Z}_3$, $SP(c(Q_1), s)=\{(1,0),(0,1),(1,1)\}$, $(1,0)$ and $(0,1)$ are generators $SP(c(Q_1))$, and $(1,1)$ is the identity of $SP(c(Q_1))$. If $d=2$, we have that $SP(c(Q_2))=\mathbb{Z}_3^2\oplus \mathbb{Z}_5$, $SP(c(Q_2), s)$ is generated by the recurrent configurations $\{\Gamma_{(1,0)}=(2,1,2,1),\Gamma_{(0,1)}=(2,2,1,1),\Gamma_{(1,1)}=(2,0,2,0)\}$, and $\Gamma_{(0,0)}=(2,2,2,2)$ is the recurrent configuration that plays the role of the identity in $SP(c(Q_2))$. \begin{figure}[h]\centering \begin{tikzpicture}[line width=1.1pt, scale=0.9] \tikzstyle{every node}=[inner sep=0pt, minimum width=4pt] \draw (4,0) { +(45:1) node[draw, circle, fill=blue] (v1) {} +(135:1) node[draw, circle, fill=blue] (v2) {} +(225:1) node[draw, circle, fill=blue] (v3) {} +(315:1) node[draw, circle, fill=blue] (v4) {} (v1) -- (v2) --(v3) -- (v4)-- (v1) (v1)+(0.2,0.2) node {\small $2$} (v2)+(-0.2,0.2) node {\small $2$} (v3)+(-0.2,-0.2) node {\small $2$} (v4)+(0.2,-0.2) node {\small $2$} (v1)+(-0.7,-0.7) node {\small $\Gamma_{(0,0)}$} }; \draw (8,0) { +(45:1) node[draw, circle, fill=green] (v1) {} +(135:1) node[draw, circle, fill=blue] (v2) {} +(225:1) node[draw, circle, fill=blue] (v3) {} +(315:1) node[draw, circle, fill=green] (v4) {} (v1) -- (v2) --(v3) -- (v4)-- (v1) (v1)+(0.2,0.2) node {\small $1$} (v2)+(-0.2,0.2) node {\small $2$} (v3)+(-0.2,-0.2) node {\small $2$} (v4)+(0.2,-0.2) node {\small $1$} (v1)+(-0.7,-0.7) node {\small $\Gamma_{(1,0)}$} }; \draw (12,0) { +(45:1) node[draw, circle, fill=green] (v1) {} +(135:1) node[draw, circle, fill=green] (v2) {} +(225:1) node[draw, circle, fill=blue] (v3) {} +(315:1) node[draw, circle, fill=blue] (v4) {} (v1) -- (v2) --(v3) -- (v4)-- (v1) (v1)+(0.2,0.2) node {\small $1$} (v2)+(-0.2,0.2) node {\small $1$} (v3)+(-0.2,-0.2) node {\small $2$} 
(v4)+(0.2,-0.2) node {\small $2$} (v1)+(-0.7,-0.7) node {\small $\Gamma_{(0,1)}$} }; \draw (16,0) { +(45:1) node[draw, circle, fill=blue] (v1) {} +(135:1) node[draw, circle, fill=gray] (v2) {} +(225:1) node[draw, circle, fill=blue] (v3) {} +(315:1) node[draw, circle, fill=gray] (v4) {} (v1) -- (v2) --(v3) -- (v4)-- (v1) (v1)+(0.2,0.2) node {\small $2$} (v2)+(-0.2,0.2) node {\small $0$} (v3)+(-0.2,-0.2) node {\small $2$} (v4)+(0.2,-0.2) node {\small $0$} (v1)+(-0.7,-0.7) node {\small $\Gamma_{(1,1)}$} }; \end{tikzpicture} \vspace{-2mm} \caption{\small The identity and the generators of $SP(c(Q_2),s)$} \end{figure} Furthermore, \[ \widetilde{K}_{(1,0)}=\{(2,1,2,1),(1,2,1,2),(2,2,2,2)\} \text{ and } \widetilde{K}_{(0,1)}=\{(2,2,1,1),(1,1,2,2),(2,2,2,2)\} \] form two subgroups of $SP(c(Q_2))$ isomorphic to $\mathbb{Z}_3$, and \[ \widetilde{K}_{(1,1)}=\{(2,0,0,2),(1,2,2,1),(2,1,1,2),(0,2,2,0),(2,2,2,2)\} \] forms a subgroup isomorphic to $\mathbb{Z}_5$, and $SP(c(Q_2),s)=\widetilde{K}_{(1,0)}\oplus \widetilde{K}_{(0,1)}\oplus \widetilde{K}_{(1,1)}$. \end{Example} \begin{Remark} It is clear that \[ SP(c_n(Q_0),s)=\{(i)\, | \, 0\leq i \leq n-1\}\cong \mathbb{Z}_{n} \] and $(i)\oplus (j)=(i+j ({\rm mod}\, \, n))$. Also, it is not difficult to see that ${\bf e}_{(c_n(Q_d),s)}=k_{max} \cdot n{\bf 1}_{2^d}$, where $k_{max}={\rm max}\{i \, | \, n\cdot i \leq n+d-1\}$ is the identity of $SP(c_n(Q_d),s)$. Furthermore, if $n> 1$ and \[ \hat{K}_{\beta}= \hat{\pi}_{\bf 1}\circ \widetilde{f}_{\beta} (SP(c_n({\mathcal K}_2(|\beta|)),s)) / \hat{\pi}_{\bf 1}(SP(c_n(Q_0),s)), \] then $SP(c_n(Q_d),s)= \bigoplus_{\beta \in \{0,1\}^d} \hat{K}_{\beta}$. For instance, if $d=2$ and $n=3$, then $SP(c_3(Q_2))=\mathbb{Z}_3\oplus \mathbb{Z}_5^2\oplus \mathbb{Z}_7$, $SP(c_3(Q_2), s)$ is generated by the recurrent configurations $\{(2,2,2,2),(3,0,3,0),(0,0,3,3),(0,3,3,0)\}$, and $(3,3,3,3)$ is the recurrent configuration that plays the role of the identity in $SP(c_3(Q_2))$. 
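These group structures can be verified by machine. The following sketch (ours, not from the paper) assumes that deleting the apex $s$ from $c_n(Q_2)$ leaves the reduced Laplacian $L(C_4)+nI$, so that the sandpile group is the cokernel of this integer matrix; the invariant factors are then obtained as quotients $d_k/d_{k-1}$ of gcds of $k\times k$ minors.

```python
from itertools import combinations
from math import gcd

def det(M):
    """Integer determinant by first-row Laplace expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def invariant_factors(M):
    """Invariant factors of coker(M): s_k = d_k / d_{k-1}, d_k = gcd of k x k minors."""
    n = len(M)
    d = [1]
    for k in range(1, n + 1):
        g = 0
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                g = gcd(g, det([[M[r][c] for c in cols] for r in rows]))
        d.append(g)
    return [d[k] // d[k - 1] for k in range(1, n + 1)]

def reduced_laplacian_cone(n):
    """L(C_4) + n*I, the reduced Laplacian of c_n(Q_2) with the apex deleted."""
    L = [[2, -1, 0, -1], [-1, 2, -1, 0], [0, -1, 2, -1], [-1, 0, -1, 2]]
    return [[L[i][j] + (n if i == j else 0) for j in range(4)] for i in range(4)]

print(invariant_factors(reduced_laplacian_cone(1)))  # [1, 1, 3, 15]
print(invariant_factors(reduced_laplacian_cone(3)))  # [1, 1, 5, 105]
```

For $n=1$ this recovers $\mathbb{Z}_3\oplus\mathbb{Z}_{15}\cong\mathbb{Z}_3^2\oplus\mathbb{Z}_5$, and for $n=3$ it recovers $\mathbb{Z}_5\oplus\mathbb{Z}_{105}\cong\mathbb{Z}_3\oplus\mathbb{Z}_5^2\oplus\mathbb{Z}_7$, matching the two descriptions above.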
Furthermore, \[ \hat{K}_{(0,0)}=\{(2,2,2,2),(4,4,4,4),(3,3,3,3)\} \] forms a subgroup of $SP(c_3(Q_2),s)$ of order $3$, \begin{eqnarray*} \hat{K}_{(1,0)}&=&\{(3,0,3,0),(2,1,2,1),(1,2,1,2),(0,3,0,3),(3,3,3,3)\}, \text{ and }\\ \hat{K}_{(0,1)}&=&\{(0,0,3,3),(2,2,1,1),(1,1,2,2),(3,3,0,0),(3,3,3,3)\} \nonumber \end{eqnarray*} form two subgroups of $SP(c_3(Q_2),s)$ of order $5$, and \[ \hat{K}_{(1,1)}=\{(0,3,3,0),(2,1,1,2),(2,4,4,2),(4,2,2,4),(1,2,2,1),(3,0,0,3),(3,3,3,3)\} \] forms a subgroup of $SP(c_3(Q_2),s)$ of order $7$. Note that the construction of the $\hat{K}_{\beta}$ subgroups is not canonical, because $\hat{\pi}_{\bf 1}((0))=(0,0,0,0)+{\bf e}_{(c_n(Q_d),s)}$ and $\hat{\pi}_{\bf 1}((2))=(2,2,2,2)$. \end{Remark} \begin{Remark} If $IF(d)$ is the number of invariant factors of $SP(c(Q_d))$, then \[ IF(d)={\rm max} \left\{ \sum_{\substack{p | 2i+1\\3\leq p\leq 2d+1}} \binom{d}{i} \, | \, p \text{ is a prime number}\right\}= \begin{cases} 6 & \text{ if } d= 4,\\ \sum_{i=0}^{\lfloor \frac{d-1}{3} \rfloor} \binom{d}{1+3i} & \text{ if } d\neq 4. \end{cases} \] Furthermore, it is not difficult to see that \[ \lim_{d\rightarrow \infty} \frac{IF(d)}{2^d}=\frac{1}{3}. \] \end{Remark} \noindent {\bf Acknowledgments} The authors would like to thank V. Reiner, D. Lorenzini and an anonymous referee for their helpful comments.
https://arxiv.org/abs/1708.00094
On facial unique-maximum (edge-)coloring
A facial unique-maximum coloring of a plane graph is a vertex coloring where on each face $\alpha$ the maximal color appears exactly once on the vertices of $\alpha$. If the coloring is required to be proper, then the best known upper bound for the minimal number of colors required for such a coloring is $5$. Fabrici and Göring [Fabrici and Göring 2016] even conjectured that $4$ colors always suffice. Confirming the conjecture would hence give a considerable strengthening of the Four Color Theorem. In this paper, we prove that the conjecture holds for subcubic plane graphs, outerplane graphs and plane quadrangulations. Additionally, we consider the facial edge-coloring analogue of the aforementioned coloring and prove that every $2$-connected plane graph admits such a coloring with at most $4$ colors.
\section{Introduction} In this paper, we consider simple graphs only. We call a graph \emph{planar} if it can be embedded in the plane without crossing edges and we call it \emph{plane} if it is already embedded in this way. A \emph{coloring} of a graph is an assignment of colors to vertices. If in a coloring adjacent vertices receive distinct colors, it is \emph{proper}. The cornerstone of graph colorings is the Four Color Theorem stating that every planar graph can be properly colored using at most $4$ colors~\cite{AppHak76}. Fabrici and G\"{o}ring~\cite{FabGor16} proposed the following strengthening of the Four Color Theorem. \begin{conjecture}[Fabrici and G\"{o}ring \cite{FabGor16}] \label{conj:plane4} If $G$ is a plane graph, then there is a proper coloring of the vertices of $G$ by colors in $\{1,2,3,4\}$ such that every face contains a unique vertex colored with the maximal color appearing on that face. \end{conjecture} A proper coloring of a graph embedded on some surface, where colors are integers and every face has a unique vertex colored with a maximal color, is called a \emph{facial unique-maximum coloring} or \emph{FUM-coloring} for short (Wendland uses the notion \emph{capital coloring} instead). This type of coloring was first studied by Fabrici and G\"{o}ring~\cite{FabGor16}. The main motivation for their research comes from the \emph{unique-maximum coloring} (also known as \emph{ordered coloring}), defined as a coloring where there is only one vertex colored with the maximal color on every path in a graph. Studying unique-maximum coloring was motivated by a number of applications it finds in various branches of mathematics and computer science; see, e.g.,~\cite{CheKesPal13,CheTot11,KatMccSea95} for more details. Fabrici and G\"{o}ring used this concept in a facial version, which is of great interest, among others, also due to Conjecture~\ref{conj:plane4} and its direct connection to the Four Color Theorem. 
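Since FUM-colorings of small graphs can be found exhaustively, the following brute-force sketch (ours; the plane cube graph is a hypothetical test case, not an example from the paper) makes the definition concrete for a $2$-connected plane graph given by its facial vertex cycles:

```python
from itertools import product

def is_fum(faces, colors):
    """colors[v] is the color of vertex v; faces are the facial vertex cycles
    (so consecutive vertices on a face are exactly the edges of the graph)."""
    for face in faces:
        for a, b in zip(face, face[1:] + face[:1]):
            if colors[a] == colors[b]:   # the coloring must be proper
                return False
        m = max(colors[v] for v in face)
        if sum(colors[v] == m for v in face) != 1:  # maximal color not unique
            return False
    return True

def fum_chromatic_number(faces, nverts, kmax=5):
    """Smallest k such that a FUM-coloring with colors {1, ..., k} exists."""
    for k in range(1, kmax + 1):
        if any(is_fum(faces, c) for c in product(range(1, k + 1), repeat=nverts)):
            return k
    return None

# plane cube graph: vertices 0..7 (binary labels), six quadrilateral faces
cube_faces = [(0, 1, 3, 2), (4, 5, 7, 6), (0, 1, 5, 4),
              (1, 3, 7, 5), (3, 2, 6, 7), (2, 0, 4, 6)]
print(fum_chromatic_number(cube_faces, 8))  # → 3
```

For the cube the minimum is $3$: a proper $2$-coloring is forced to repeat the maximal color on every quadrilateral face, while coloring two antipodal vertices with $3$ covers each of the six faces exactly once.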
Coloring embedded graphs with respect to faces is a burgeoning field itself; the main directions are presented in a recent survey by Czap and Jendrol'~\cite{CzaJen16}. For a graph $G$, the minimum number $k$ such that $G$ admits a FUM-coloring with colors $\{1,2,\ldots,k\}$ is called the \emph{FUM chromatic number of $G$}, denoted by $\chiC{G}$. Fabrici and G\"{o}ring~\cite{FabGor16} proved that if $G$ is a plane graph, then $\chiC{G} \leq 6$. Their result was further improved as follows. \begin{theorem}[Wendland~\cite{Wen16}] \label{thm:plane5} If $G$ is a plane graph, then $\chiC{G} \leq 5$. \end{theorem} We show that the upper bound $4$ from Conjecture~\ref{conj:plane4} holds for several subclasses of plane graphs, and that, surprisingly, the bound is tight in most of the cases. The main result of the paper regarding the FUM-coloring of vertices is the following. \begin{theorem} \label{thm:subcubic} If $G$ is a plane subcubic graph or an outerplane graph, then $\chiC{G} \leq 4$. \end{theorem} In the second part of the paper, we consider the edge-coloring version of the problem, which has been introduced by Fabrici, Jendrol', and Vrbjarov\'{a}~\cite{FabJenVrb15}. For a graph $G$ embedded on some surface, two distinct edges are said to be \emph{facially adjacent} if they are consecutive in some facial path, i.e., they have a common vertex and they are incident with the same face. A \emph{facial edge-coloring} is a coloring of edges such that facially adjacent edges receive distinct colors. It is rather straightforward to prove that every plane graph admits a facial edge-coloring with at most $4$ colors. For a graph $G$, we denote by $\chiCe{G}$ the minimum number $k$ such that there exists a facial edge-coloring using colors $1,\ldots,k$ such that each face is incident with a unique edge colored with the maximal color. Such a coloring is called a \emph{FUM-edge-coloring}. In~\cite{FabJenVrb15}, Fabrici et al. proposed the following conjecture. 
\begin{conjecture}[Fabrici et al.~\cite{FabJenVrb15}] \label{conj:main} If $G$ is a $2$-edge-connected plane graph, then $\chiCe{G} \leq 4$. \end{conjecture} In~\cite{FabJenVrb15}, the authors proved that $\chiCe{G} \leq 6$ for every $2$-edge-connected plane graph $G$. Our main result, supporting Conjecture~\ref{conj:main}, shows that $\chiCe{G} \leq 4$ holds when the assumption of $2$-edge-connectivity is strengthened to $2$-vertex-connectivity. \begin{theorem} \label{thm:maine} If $G$ is a $2$-vertex-connected plane graph, then $\chiCe{G} \leq 4$. \end{theorem} Observe that every edge in an embedded graph is facially adjacent to at most four other edges, therefore one can translate the problem of facial edge-coloring of a plane graph to a vertex coloring of a plane graph with maximum degree $4$. Hence, Theorem~\ref{thm:plane5} directly implies $\chiCe{G} \leq 5$ for every plane graph $G$. Similarly, Theorem~\ref{thm:subcubic} implies that if $G$ is obtained from a plane graph by subdividing every edge, then $\chiCe{G} \leq 4$. \smallskip The paper is organized as follows. In Section~\ref{sec:v}, we prove Theorem~\ref{thm:subcubic} and discuss the FUM-coloring of vertices. In Section~\ref{sec:e}, we consider the FUM-edge-coloring and prove Theorem~\ref{thm:maine}. Both proofs, of Theorems~\ref{thm:subcubic} and~\ref{thm:maine}, use the precoloring extension technique successfully applied by Thomassen~\cite{Tho94} when proving that every planar graph is $5$-choosable. In the concluding remarks, we present some related results and discuss possible future directions on this topic. \section{FUM-(vertex-)coloring} \label{sec:v} In this section we consider the FUM-coloring of vertices and confirm that Conjecture~\ref{conj:plane4} holds for several subclasses of plane graphs. First, we recall a theorem, which is the main tool used in~\cite{FabGor16}, and will prove helpful also in proving our results. 
\begin{theorem}[Fabrici and G\"{o}ring \cite{FabGor16}] \label{thm:aux} Every plane graph has a (not necessarily proper) $3$-coloring with colors black, blue, and red such that \begin{itemize} \item[(1)] each face is incident with at most one red vertex, \item[(2)] each face that is not incident with a red vertex is incident with exactly one blue vertex. \end{itemize} \end{theorem} A slightly stronger version of Theorem~\ref{thm:aux} was proved by Wendland~\cite{Wen16} who also added the conclusion that each triangle, facial or separating, contains at least one vertex that is not black. This enabled him to improve the upper bound to $5$ colors. Recall that Conjecture~\ref{conj:plane4} states that if $G$ is a plane graph, then its FUM chromatic number is at most $4$, which is the same upper bound as for the chromatic number. One may therefore ask which plane graphs admit a FUM-coloring with at most $3$ colors. However, natural candidates such as graphs of large girth, quadrangulations, and outerplane graphs have infinitely many examples with FUM chromatic number $4$. The example in Figure~\ref{fig:girth-v} shows that there is no analogue of Gr\"{o}tzsch's result for the FUM-coloring. Indeed, every vertex lies on the outer face, and hence only one can be colored with $3$ (assuming $3$ colors suffice). As every vertex is incident to at most three faces, the maximal color of the fourth face is $2$, and hence all the other vertices should receive $1$, which is not possible, since the coloring must be proper. \begin{figure}[htp!] \begin{center} \includegraphics{fig-girth-v} \end{center} \caption{Plane graphs with arbitrarily large girth (in fact also outerplane graphs) need at least $4$ colors for a FUM-coloring.} \label{fig:girth-v} \end{figure} We continue by considering plane quadrangulations. \begin{proposition} If $G$ is a plane quadrangulation, then $\chiC{G} \leq 4$. 
Moreover, there exists an infinite family of plane quadrangulations with FUM chromatic number $4$. \end{proposition} \begin{proof} Let $G$ be a plane quadrangulation. A FUM-coloring of $G$ with at most $4$ colors can be obtained by using Theorem~\ref{thm:aux} to assign the colors $3$ and $4$ such that every face is incident with at most one $4$, and at most one $3$ if it is not incident with $4$; the remaining vertices may be colored by $1$ and $2$, since $G$ is bipartite. To prove the second part of the proposition, consider the graph $H$ depicted in Figure~\ref{fig:quadrangulation}. \begin{figure}[ht] $$ \includegraphics{fig-quadrangulation-bl} $$ \caption{A plane quadrangulation with FUM chromatic number $4$.} \label{fig:quadrangulation} \end{figure} Suppose $\chiC{H} \leq 3$. Then, one of the vertices incident with the outer face $f_0$, say $v_1$, must be colored with $3$. This sets the maximal color also for the faces $f_1$, $f_2$, and $f_3$. Thus, to provide a unique maximal color for $f_4$, we must color the vertex $v_2$ with $3$. Analogously, we must also color the vertex $v_3$ with $3$. But now, there are two vertices colored with $3$ incident with $f_5$, a contradiction. One obtains an infinite family of graphs that require $4$ colors, e.g., by inserting a copy of $H$ into the face $f_5$ by gluing the edges of the outer face of $H$ and the edges of $f_5$. \end{proof} We establish Conjecture~\ref{conj:plane4} also for the classes of subcubic plane graphs and outerplane graphs. The following lemma is motivated by Theorem~\ref{thm:aux}, and we use it to prove Theorem~\ref{thm:subcubic}. The upper bound of $4$ is tight for both classes by, e.g., the graph in Figure~\ref{fig:girth-v}. \begin{lemma} \label{lem:sub} Suppose $G$ is a plane graph that is either subcubic or outerplane, $P$ is a path in the outer face of $G$ on at most two vertices, and the vertices of $P$ are properly colored by a coloring $c'$ with colors $\{1,2,3\}$. 
Then there is a vertex coloring $c$ of $G$ with at most $4$ colors such that \begin{itemize} \item $c$ matches $c'$ on $P$, \item $c(v) \in \{1,2,3\}$ if $v$ is incident with the outer face, and \item each inner face has a vertex with unique maximal color. \end{itemize} \end{lemma} \begin{proof} Let $G$ be a smallest counterexample in terms of the number of vertices, chosen so that the path $P$ is as long as possible. Clearly, we may assume $G$ has at least $2$ vertices. If $G$ has more than one component incident with the outer face, then, by the minimality of $G$, for each of these components, we can color the subgraph induced by the component and the vertices in its interior. The colorings of all such subgraphs together give us a required coloring of $G$, a contradiction. Hence, we may assume that precisely one component of $G$ is incident with the outer face. If $P$ has fewer than two vertices, we extend $P$ arbitrarily by coloring one of its neighbors on the outer face. Hence $P$ has two vertices. We split the rest of the proof into four claims. \begin{claim} The outer face of $G$ is bounded by a cycle. \end{claim} \begin{proofclaim} Suppose for a contradiction that $v$ is a cut-vertex in $G$ incident with the outer face. Let $W$ be the set of vertices consisting of $v$ and the vertices of the connected component of $G-v$ that intersects $P$. Let $X = (V(G) \setminus W) \cup \{v\}$. By the minimality of $G$, there exists a coloring $c_W$ of $G[W]$ with a path $P_W = P$ and a coloring $c_W'=c'$, and there exists a coloring $c_X$ of $G[X]$ with $P_X=\{v\}$ and $c_X'$ being $c_W$ restricted to $v$. Since the colorings $c_W$ and $c_X$ assign the same color to $v$, they can be combined into a coloring $c$ of $G$, a contradiction. \end{proofclaim} Denote by $C$ the cycle by which the outer face of $G$ is bounded. \begin{claim} \label{cl:chord} $C$ has no chords. \end{claim} \begin{proofclaim} Suppose for a contradiction that $uv$ is a chord in $C$. 
Let $W$ be the set of vertices containing $u$, $v$, and the vertices of the connected component of $G - \{u,v\}$ that intersects $P$. Let $X = (V(G) \setminus W) \cup \{u,v\}$. By the minimality of $G$, there exists a coloring $c_W$ of $G[W]$ with $P_W = P$ and $c_W'=c'$, and there exists a coloring $c_X$ of $G[X]$ with $P_X=\{u,v\}$ and $c_X'$ being $c_W$ restricted to $u$ and $v$. Since the colorings $c_W$ and $c_X$ assign the same colors to $u$ and $v$, they can be combined into a coloring $c$ of $G$, a contradiction. Hence $C$ is a chordless cycle. \end{proofclaim} If $G$ is outerplane, it follows from Claim~\ref{cl:chord} that it must be a cycle. \begin{claim} \label{cl:cycle} $G$ is not a cycle. \end{claim} \begin{proofclaim} Suppose for a contradiction that $G$ is a cycle. The coloring $c'$ assigns the color $3$ to at most one vertex of $P$. Hence it is possible to color the vertices of $G$ such that exactly one vertex $x$ is colored with $3$ and all the others are colored with $1$ and $2$. The interior face of $G$ then has $x$ as the unique vertex colored by the maximal color. \end{proofclaim} Hence, $G$ is not outerplane, so it is subcubic. Moreover, it contains at least one vertex, which is not in $C$; we call such vertices \emph{interior}. \begin{claim} \label{mainclaim} In $V(C) \setminus V(P)$, there is no vertex of degree $3$ with an interior neighbor, nor a vertex of degree $2$ that is incident with the same face as any interior vertex. \end{claim} \begin{proofclaim} Suppose for a contradiction that $v \in V(C) \setminus V(P)$ is a vertex of degree $3$ with an interior neighbor $u$, or a vertex of degree $2$ and $u$ is an interior vertex incident with the same face as $v$. Let $G'$ be the graph obtained from $G$ by deleting $u$ and $v$. By the minimality of $G$, there is a coloring $c$ of $G'$ satisfying the assumptions of Lemma~\ref{lem:sub}. 
Notice that all the vertices incident with the same faces as $u$ in $G$ are incident with the outer face in $G'$ (except for $v$). Hence the neighbors of $u$ are colored by $c$ with the colors in $\{1,2,3\}$. We extend $c$ to $G$ by setting $c(u)=4$ and assigning to $v$ a color from $\{1,2,3\}$, which does not appear on its two neighbors on the outer face, a contradiction. \end{proofclaim} From Claim~\ref{mainclaim}, it follows that if $G$ is a subcubic plane graph, there are only vertices of degree $2$ in $V(C) \setminus V(P)$. Moreover, if there is an interior vertex in $G$, then it is incident with the same face as one of the vertices in $V(C) \setminus V(P)$. Hence, Claims~\ref{cl:cycle} and~\ref{mainclaim} contradict the existence of $G$. This finishes the proof of Lemma~\ref{lem:sub}. \end{proof} Now, we are ready to prove the main theorem of this section. \begin{proof}[Proof of Theorem~\ref{thm:subcubic}] Let $G$ be a plane subcubic graph or an outerplane graph and $v$ any vertex in the outer face of $G$. Apply Lemma~\ref{lem:sub} on the graph $G-v$ and color $v$ by $4$ to complete the coloring of $G$. \end{proof} \section{FUM-edge-coloring} \label{sec:e} In this section we turn our attention to the FUM-edge-coloring. Notice that the upper bound of $4$ is the same as in the vertex version, and as already remarked, the edge version is only a special case of the former. However, also here, the upper bound is achieved within very particular classes of plane graphs, e.g., subcubic outerplane bipartite graphs of arbitrarily large girth (see Figure~\ref{fig:girth-e} for an example). \begin{figure}[htp!] \begin{center} \includegraphics{fig-girth-e} \end{center} \caption{Subcubic outerplane bipartite graphs of arbitrarily large girth need $4$ colors for FUM-edge-coloring.} \label{fig:girth-e} \end{figure} However, regarding Conjecture~\ref{conj:main}, Theorem~\ref{thm:maine} is the first result supporting it. Let $G$ be a plane graph. 
If an edge $e=uv$ is removed from $G$, new facial adjacencies of edges may be introduced around $u$ and $v$ in $G-e$. However, if we are interested only in a facial edge-coloring of $G$, these new adjacencies may be ignored when coloring $G-e$. This motivates the following concept: let $\mathcal{F}$ be a set of pairs of edges. An \emph{$\mathcal{F}$-facial edge-coloring} is an edge-coloring, where every pair of facially adjacent edges that are not in $\mathcal{F}$ receive distinct colors. We call $\mathcal{F}$ the set of \emph{free pairs}. Two edges are a \emph{good pair} if they are a free pair or if they have a vertex of degree $2$ in common. If a vertex $v$ is a common vertex of the edges in a good pair, we call $v$ a \emph{good vertex}. Recall that every graph $G$ can be decomposed into maximal 2-connected blocks. The \emph{block graph} $B(G)$ is the intersection graph of the blocks of $G$. Notice that $B(G)$ is a tree and hence has at least two leaves (unless $G$ is $2$-connected). We call a block corresponding to a leaf a \emph{leaf-block}. \begin{observation} \label{obs:leaves} Let $G$ be a $2$-connected graph. If $uv$ is an edge of $G$, then $\{u,v\}$ intersects the set of vertices of every leaf-block of $G-uv$. \end{observation} The following lemma is the core of the proof of Theorem~\ref{thm:maine}. \begin{lemma} \label{lem:maine} Let $G$ be a plane graph and let $\mathcal{F}$ be a set of free pairs, where every leaf-block of $G$ has a good vertex in the outer face. Then there exists an $\mathcal{F}$-facial edge-coloring $c$ using colors in $\{1,2,3,4\}$ such that \begin{itemize} \item{} every edge in the outer face is colored with a color in $\{1,2,3\}$, and \item{} every face, except the outer face, has an edge of a unique maximal color. \end{itemize} \end{lemma} \begin{proof} Let $G$ be the smallest counterexample in terms of the sum of the number of vertices and edges. First we outline a process of removing an edge from $G$. Let $e=uv$ be an edge of $G$. 
Suppose $u$ is a vertex of degree at least $4$. Observe that the edges $e_1$ and $e_2$ that were facially adjacent to $e$ at vertex $u$ are not facially adjacent to each other in $G$, but they become facially adjacent in $G-e$. Hence, when considering $G-e$, we modify $\mathcal{F}$ by adding the pair $\{e_1,e_2\}$. This means $u$ is a good vertex in $G-e$. Similarly, $v$ is good, since it is either a common vertex of a free pair or it has degree at most $2$ in $G-e$. Hence, by Observation~\ref{obs:leaves}, every leaf-block in $G-e$ contains a good vertex. We next describe two configurations that cannot appear in $G$. \begin{itemize} \item[(A)] \emph{There is no vertex of degree $1$ in the outer face of $G$.} \smallskip Suppose for a contradiction that $u$ is a vertex of degree $1$ in the outer face and let $e=uv$ be the edge incident with $u$. Let $G'$ be obtained from $G$ by removing $u$, and let $\mathcal{F}'$ be obtained from $\mathcal{F}$ by including any facially adjacent pair of edges in $G'$ that are not facially adjacent in $G$. By the minimality of $G$, there exists an $\mathcal{F}'$-facial edge-coloring $c'$ of $G'$. Since $e$ is facially adjacent to at most two edges in $G$, there is at least one available color in $\{1,2,3\}$. Hence, $c'$ can be extended to an $\mathcal{F}$-facial edge-coloring of $G$, a contradiction. \item[(B)] \emph{There is no edge $e$ in the outer face joining a good vertex $u$ with a vertex $v$ such that $u$ and $v$ are in the same block, $v$ is incident with an edge $f$ that is not in the outer face, $f$ is facially adjacent with $e$, and $e$ is in a good pair with some edge incident to $u$ (see Figure~\ref{fig:remove}).} \begin{figure}[htp!] \begin{center} \includegraphics{fig-remove-bl} \end{center} \caption{Situation in the configuration (B) in Lemma~\ref{lem:maine}.} \label{fig:remove} \end{figure} \smallskip Suppose for a contradiction that there exists such an edge $e$ in $G$. 
Let $G'$ be obtained from $G$ by removing the edges $e$ and $f$ and let $\mathcal{F}'$ be obtained from $\mathcal{F}$ by including any facially adjacent pair of edges in $G'$ that are not facially adjacent in $G$. By the minimality of $G$, there exists an $\mathcal{F}'$-facial edge-coloring $c'$ of $G'$. Notice that the edges of both faces with which $f$ is incident in $G$ become incident with the outer face of $G'$. Hence, setting $c'(f) = 4$ does not create any conflict with the other edges and it is the unique maximal color for the two faces in $G$. Since $e$ is in a good pair at $u$, there is at most one edge facially adjacent to $e$ at $u$ in $G$. There might be two edges facially adjacent to $e$ at $v$, but one of them is $f$ and as $c'(f)=4$, there is a color in $\{1,2,3\}$ for $e$ that is not conflicting with the edges that are facially adjacent with $e$. This gives a contradiction. \end{itemize} Now, let $B$ be a leaf-block in $B(G)$. Hence, there is at most one vertex $v \in V(B)$ with neighbors in $V(G) \setminus V(B)$, and $B$ contains at least one good vertex by assumption. Observe that if $B$ contained an edge not incident with the outer face, then a configuration described in (B) would occur. Thus we may assume that every edge in $B$ is incident with the outer face. Furthermore, by (A), $B$ is a cycle. Let $G'$ be the graph obtained from $G$ by removing all the edges of $B$ and let $\mathcal{F}'$ be obtained from $\mathcal{F}$ by including any facially adjacent pairs of edges in $G'$ that are not facially adjacent in $G$. By the minimality of $G$, there exists an $\mathcal{F}'$-facial edge-coloring $c'$ of $G'$ satisfying the assumptions of the lemma. Now we show that $c'$ extends to $G$. Since $B$ is a cycle, it bounds some inner face which thus needs a unique maximal color. This is achieved by coloring exactly one edge of $B$ by the color $3$ and all the other edges by $1$ and $2$. Let $e_1$ and $e_2$ be the edges of $B$ incident with $v$. 
They may be facially adjacent in $G$ to edges of $G'$ that are colored by $c'$. Hence, each of $e_1$ and $e_2$ has two available colors and the other edges of $B$ have three available colors. If the color $3$ is available on $e_i$ for some $i \in \{1,2\}$, we assign $c'(e_i)=3$, and the remaining edges of $B$ can be colored greedily starting from $e_{3-i}$ using only the colors $1$ and $2$, a contradiction. Hence both $e_1$ and $e_2$ have only the colors $1$ and $2$ available. Now, $B$ can be colored by coloring any edge except $e_1$ and $e_2$ by $3$ and the remaining edges of $B$, including $e_1$ and $e_2$, by alternating the colors $1$ and $2$. This gives a contradiction, establishing Lemma~\ref{lem:maine}. \end{proof} We finish this section by presenting a proof of Theorem~\ref{thm:maine}. \begin{proof}[Proof of Theorem~\ref{thm:maine}] Let $G$ be a $2$-(vertex-)connected plane graph. Let $e=uv$ be any edge in the outer face of $G$. Let $G'$ be the graph obtained from $G$ by removing $e$, and let $\mathcal{F}'$ be the set of facially adjacent pairs of edges in $G'$ that are not facially adjacent in $G$. Notice that each of $u$ and $v$ is a good vertex in $G'$. Since $G$ is $2$-connected, the block graph of $G'$ is a path with $u$ and $v$ contained in the blocks (or the only block in the case when $G'$ is also $2$-connected) corresponding to the endvertices of the path. Hence, $G'$ and $\mathcal{F}'$ satisfy the assumptions of Lemma~\ref{lem:maine} and there exists an $\mathcal{F}'$-facial edge-coloring $c'$ of $G'$, which can be extended to a FUM-edge-coloring of $G$ by setting $c'(e) = 4$. \end{proof} \section{Concluding remarks} For both variants of FUM-colorings, vertex and edge, the proposed upper bound is set at $4$ colors. We have shown that there is no analogy with proper colorings, where some subclasses of plane graphs require at most $3$ colors. On the other hand, we have not been able to disprove either of the two conjectures. 
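For small $2$-connected plane graphs, Theorem~\ref{thm:maine} can be checked exhaustively. In the sketch below (ours; the triangular prism is a hypothetical test case, not an example from the paper), facial adjacency is read off as consecutiveness of edges along a facial cycle, and every face, the outer one included, must see a unique maximal edge color:

```python
from itertools import product

def face_edges(face):
    """Edges of a facial cycle, as frozensets of endpoints."""
    return [frozenset(p) for p in zip(face, face[1:] + face[:1])]

def fum_edge_number(faces, kmax=4):
    """Smallest k admitting a FUM-edge-coloring with colors {1, ..., k}."""
    edges = sorted({e for f in faces for e in face_edges(f)}, key=sorted)
    idx = {e: i for i, e in enumerate(edges)}
    face_idx = [[idx[e] for e in face_edges(f)] for f in faces]
    # facially adjacent = consecutive edges along some facial cycle
    adj = {frozenset((a, b)) for fi in face_idx for a, b in zip(fi, fi[1:] + fi[:1])}
    adj = [tuple(p) for p in adj]
    for k in range(1, kmax + 1):
        for col in product(range(1, k + 1), repeat=len(edges)):
            if any(col[i] == col[j] for i, j in adj):
                continue  # facially adjacent edges must get distinct colors
            if all(sum(c == max(cs) for c in cs) == 1
                   for cs in ([col[i] for i in fi] for fi in face_idx)):
                return k  # every face sees a unique maximal edge color
    return None

# triangular prism: outer triangle 0,1,2; inner triangle 3,4,5; spokes i -- i+3
prism_faces = [(0, 1, 2), (3, 4, 5), (0, 1, 4, 3), (1, 2, 5, 4), (2, 0, 3, 5)]
print(fum_edge_number(prism_faces))  # 3 or 4; at most 4 by Theorem 4
```

The three pairwise facially adjacent edges of a triangular face force at least $3$ colors, and Theorem~\ref{thm:maine} caps the answer at $4$.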
Although the problem of FUM-coloring is intriguing already in the class of plane graphs, the concept can be naturally studied also for graphs embedded in higher surfaces. Youngs~\cite{You96} proved that the chromatic number of any quadrangulation of the projective plane is either $2$ or $4$. In Figure~\ref{fig:projective}, we present an example of a projective plane graph needing $5$ colors (we leave the proof to the reader). \begin{figure}[htp!] \begin{center} \includegraphics{fig-projective} \end{center} \caption{Projective quadrangulation needing $5$ colors for a FUM-coloring.} \label{fig:projective} \end{figure} One may therefore ask, what is the FUM chromatic number of graphs embedded in higher surfaces? How does it behave if we add assumptions on the minimum face length or girth? In~\cite{Wen16}, the author studied the list version of the problem, and he showed that having lists of size $7$ suffices for a FUM-coloring of any plane graph. He proposed the following conjecture. \begin{conjecture}[Wendland~\cite{Wen16}] If each vertex of a plane graph is assigned a list of $5$ integers, then there exists a FUM-coloring assigning each vertex a color from its list. \end{conjecture} We believe that in FUM-edge-coloring, the upper bound for the list version is the same as for the ordinary version. \begin{conjecture} If each edge of a plane graph is assigned a list of $4$ integers, then there exists a FUM-edge-coloring assigning each edge a color from its list. \end{conjecture} \paragraph{Acknowledgment.} The project has been supported by the bilateral cooperation between USA and Slovenia, project no. BI--US/17--18--013. V. Andova, B. Lu\v{z}ar, and R. \v{S}krekovski were partially supported by the Slovenian Research Agency Program P1--0383. B. Lidick\'y was partially supported by NSF grant DMS-1600390.
https://arxiv.org/abs/2110.01703
Affine dimers from characteristic polygons
Recent work by Forsgård indicates that not every convex lattice polygon arises as the characteristic polygon of an affine dimer or, equivalently, an admissible oriented line arrangement on the torus in general position. We begin the classification of convex lattice polygons arising as characteristic polygons of affine dimers. We present several general constructions of new affine dimers from old, and an algorithm for finding affine dimers with prescribed polygon. With these tools we prove that all lattice triangles, generalised parallelograms, and polygons of genus at most two admit an affine dimer.
\section{Introduction} \label{sec:intro} A dimer model is an embedded bipartite graph on the torus $\mathbb{T}^2$ or, depending on the application, on any surface $\Sigma$. Dimer models were originally introduced in statistical mechanics to model molecular interactions. In a simplified model, the thermodynamic properties of a mixture of molecules can be calculated from a combinatorial factor that counts the number of arrangements of molecules on a square lattice. If all molecules are dimers rather than monomers or higher polymers, this amounts to counting the number of domino tilings of the square lattice \cite{kasteleyn1961}. Thinking of the square lattice as an embedded graph, there is a one-to-one correspondence between domino tilings of the lattice and perfect matchings of the graph. The requirement that the graph is bipartite arises when one takes into account the two possible charges of a particle. Finally, the torus $\mathbb{T}^2$ is a natural choice of ambient space to account for translational symmetries such as those of a crystal. More recent applications of dimer models can be found in algebraic and tropical geometry, as well as in string theory (e.g., \cite{addition1} and \cite{addition2}). To every dimer model on $\mathbb{T}^2$ one can associate a convex lattice polygon, called the \textit{characteristic polygon}, in at least two ways. The first way is as the Newton polygon of the determinant of the Kasteleyn operator, a generalisation of the adjacency matrix whose entries are weighted according to their meridional and longitudinal winding numbers (\cite{cimasoni2014}, Section 7). A variant of this operator was used by Kasteleyn in \cite{kasteleyn1961} to calculate the number of domino tilings of a rectangular square lattice as a Pfaffian.
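Kasteleyn's Pfaffian evaluates this count in closed form; the correspondence between domino tilings and perfect matchings can also be checked directly by enumeration. The following sketch (an illustrative brute force with memoisation, not Kasteleyn's method) counts domino tilings of an $m \times n$ grid:

```python
from functools import lru_cache

def count_domino_tilings(rows, cols):
    """Count domino tilings of a rows x cols grid, i.e. perfect matchings
    of the grid graph, by brute-force enumeration with memoisation.

    Kasteleyn obtains the same numbers as a Pfaffian of a signed
    adjacency matrix; here we simply place dominoes one by one.
    """
    if (rows * cols) % 2:
        return 0  # odd number of cells: no perfect matching
    full = (1 << (rows * cols)) - 1

    @lru_cache(maxsize=None)
    def fill(occupied):
        if occupied == full:
            return 1
        i = 0
        while occupied >> i & 1:  # first empty cell in row-major order
            i += 1
        r, c = divmod(i, cols)
        total = 0
        if c + 1 < cols and not occupied >> (i + 1) & 1:     # horizontal domino
            total += fill(occupied | 1 << i | 1 << (i + 1))
        if r + 1 < rows and not occupied >> (i + cols) & 1:  # vertical domino
            total += fill(occupied | 1 << i | 1 << (i + cols))
        return total

    return fill(0)
```

For the $4\times 4$ grid this recovers Kasteleyn's value of $36$ tilings, and a $2\times n$ strip yields the Fibonacci numbers.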
The second way is as the convex hull of the values of the height function, which assigns a value in $\mathbb{Z}^2$ to every perfect matching of the dimer model. These two notions turn out to be equivalent, and it is natural to consider the inverse problem: For which convex lattice polygons does there exist a dimer model with the prescribed characteristic polygon? This question has been answered positively for all convex polygons if no further restrictions are imposed on the dimer model \cite{gulottaInverse}. Futaki–Ueda and Ueda–Yamazaki realised that, in some cases, the dimer model may be obtained from the faces of a certain hyperplane arrangement on $\mathbb{T}^2$ (\cite{futaki2010dimer}, \cite{ueda2011}, \cite{ueda2012}, as cited in \cite{forsgard2016dimer}). In this case, the dimer is called \textit{affine}. However, Forsgård exhibited a family of convex polygons which do not admit an affine dimer (\cite{forsgard2016dimer}, Section 4). The goal of this paper is to classify which convex polygons admit an affine dimer. We present partial results consisting of a list of constructions to obtain new dimers from old, an algorithm implemented in Java to verify whether a polygon admits an affine dimer, and a positive answer for all convex lattice polygons that are triangles, ``generalised parallelograms'', or have at most two interior lattice points. These results are summarised in Theorems \ref{thm:homologyPolygonSpace} and \ref{thm:mainResult} at the end of this section. The results have the following application to algebraic and tropical geometry. Given a complex curve $C$ in $(\mathbb{C}^*)^2$, the \textit{coamoeba} $\mathscr{C}\subseteq \mathbb{T}^2$ is its image under the argument projection $(x,y)\mapsto (\text{arg}(x), \text{arg}(y))$, which naturally takes values on $\mathbb{T}^2$.
The \textit{shell} of the coamoeba is a line arrangement $\mathcal{H}$ on $\mathbb{T}^2$ that is derived from the bivariate polynomial defining $C$ and satisfies $\overline{\mathscr{C}}=\mathscr{C}\cup \mathcal{H}$ (cf. \cite{johanssonShell2013} and \cite{forsgard2016dimer}). Then $\mathcal{H}$ divides $\mathbb{T}^2$ into several tiles, and we say that a tile is \textit{full} if it is fully contained in $\overline{\mathscr{C}}$. We say that the coamoeba is represented by a dimer if we can embed a bipartite graph on $\mathbb{T}^2$ such that every vertex is contained in the interior of a full tile, every tile contains at most one vertex, and the edges correspond to shared corners between two tiles. If such a graph exists, it is by definition a dimer, and automatically affine since it comes from the line arrangement $\mathcal{H}$. It is natural to ask which complex curves possess coamoebas that are represented by a dimer. An important observation is that the Newton polygon of the defining polynomial of the curve is the same as the characteristic polygon of the affine dimer representing its coamoeba, if such a dimer exists. Therefore, a first obstruction is the non-existence of an affine dimer with the given characteristic polygon. We prove that this combinatorial obstruction vanishes if the genus of the curve is at most two. Our results also imply that all tropical curves of genus $\le 2$ can be lifted to an exact Lagrangian submanifold of $(\mathbb{C}^*)^2$, as described in \cite{jeff21}. For simplicity, we work with \textit{homology polygons} rather than \textit{characteristic polygons} from height functions. These concepts are equivalent, as outlined in the very readable source \cite{chan2016}. \subsection{Definitions} \label{sec:defns} The $n$\textit{-dimensional torus} $\mathbb{T}^n$ is the quotient $\mathbb{R}^n/\mathbb{Z}^n$ with quotient map $q:\mathbb{R}^n\rightarrow \mathbb{T}^n$. Note that $q$ is a universal cover for $\mathbb{T}^n$.
\begin{definition} A \textit{dimer} is a bipartite multigraph $G=(V_\circ\sqcup V_\bullet, E)$ embedded on the two-dimensional torus. (This means we allow multiple edges between two vertices). \end{definition} Let $\hat H\subseteq\mathbb{R}^n$ be an affine hyperplane, i.e., there exist $a\in\mathbb{R}^n, b\in\mathbb{R}$ such that $\hat H={\{x\in\mathbb{R}^n : \langle a,x\rangle=b\}}$. We call $H:=q(\hat H)$ a \textit{hyperplane on the torus}. If $\text{dim}(\hat H)=1$, we call $H$ a \textit{line (on the torus)}. We now specialise to $n=2$. A \textit{closed geodesic} is a closed loop given by a line $H\subseteq\mathbb{T}^2$. Once we fix a choice of orientation for a closed geodesic $H$, there are unique coprime integers $a,b\in\mathbb{Z}$ such that the homology class of $H$ is $[H]=(a,b)\in H_1(\mathbb{T}^2)\cong\mathbb{Z}^2$, i.e., $(a,b)$ is the direction of $H$. We call $(a,b)\in\mathbb{Z}^2$ \textit{primitive} if $\text{\normalfont gcd}(a,b)=1$. One can interpret $a$ and $b$ as the winding numbers of $H$ around the two directions of $\mathbb{T}^2\cong \mathbb{T}\times \mathbb{T}$. Note, however, that this choice of directions is by no means intrinsic to $\mathbb{T}^2$ and we will exploit this symmetry on several occasions, using the action by automorphisms of $GL_2(\mathbb{Z})$ on $\mathbb{T}^2$. \begin{definition} An \textit{oriented line arrangement} (on the 2-torus) is a finite set $\mathcal{H}$ of closed geodesics on $\mathbb{T}^2$. The line arrangement is called \begin{itemize} \item \textit{in general position} if no three lines intersect in a point, parallel lines are disjoint, and not all lines are parallel; \item \textit{admissible} if it is in general position and every oriented line segment is a boundary component of a face whose edges are consistently oriented, i.e., all clockwise or all counterclockwise. 
(A \textit{line segment of} $\mathcal{H}$ is a segment of a line in $\mathcal{H}$ whose endpoints are intersection points of $\mathcal{H}$ and whose interior contains no intersection points.) \end{itemize} \end{definition} Note that if a line arrangement is in general position then all faces are automatically homeomorphic to a disk. Figure \ref{figure:admissible} gives examples of an admissible and a non-admissible line arrangement. Note that these only differ by a translation of the upper horizontal line, so both arrangements represent the same multiset of homology classes in $H_1(\mathbb{T}^2)$. \begin{figure}[h!]\centering \includegraphics[width=7cm]{fig/tikz/LineArrangement_admissible.pdf} \hspace{1cm} \includegraphics[width=7cm]{fig/tikz/LineArrangement_not_admissible.pdf} \caption{Examples of an admissible (left) and non-admissible (right) oriented line arrangement. Consistently oriented faces are indicated with $\circlearrowleft$ and $\circlearrowright$. The example to the right is not admissible because, for example, the red line segment does not bound a consistently oriented face.} \label{figure:admissible} \end{figure} We briefly elaborate on the equivalence of admissible oriented line arrangements and a certain class of dimers, called \textit{affine dimers}. Given an admissible oriented line arrangement, we obtain a dimer $G=(V_\circ\sqcup V_\bullet, E)$ as follows. Let $V_\circ$ and $V_\bullet$ be the sets of faces oriented clockwise and counterclockwise, respectively. For each intersection point of the line arrangement, we add an edge to $E$ connecting the two oriented faces meeting there. The obtained graph is bipartite. To embed $G$ in $\mathbb{T}^2$ we place a vertex in the interior of each consistently oriented face. Each edge can then be realised as a union of two line segments meeting at the shared intersection point of the two faces (see Figure \ref{figure:dimer}).
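Each intersection point of the arrangement contributes one edge, and the number of intersection points of two closed geodesics is determined by their homology classes: geodesics of primitive classes $(a_1,b_1)$ and $(a_2,b_2)$ meet in exactly $|a_1 b_2 - a_2 b_1|$ points. A brute-force numerical check of this standard fact (helper names hypothetical):

```python
def torus_intersections(c1, c2, p1=(0.0, 0.0), p2=(0.1, 0.23)):
    """Count intersection points of two closed geodesics on T^2 = R^2/Z^2.

    c1, c2 are primitive homology classes (a, b); p1, p2 are base points,
    chosen generically so that no intersection lies on a parameter boundary.
    We solve p1 + t*c1 = p2 + s*c2 + (m, n) over integer shifts (m, n),
    with t, s in [0, 1); the expected count is |a1*b2 - a2*b1|.
    """
    (a1, b1), (a2, b2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return 0  # parallel geodesics in general position are disjoint
    count = 0
    bound = abs(a1) + abs(b1) + abs(a2) + abs(b2) + 2
    for m in range(-bound, bound + 1):
        for n in range(-bound, bound + 1):
            dx = p2[0] - p1[0] + m
            dy = p2[1] - p1[1] + n
            # Cramer's rule for t*a1 - s*a2 = dx, t*b1 - s*b2 = dy
            t = (b2 * dx - a2 * dy) / det
            s = (b1 * dx - a1 * dy) / det
            if 0 <= t < 1 and 0 <= s < 1:
                count += 1
    return count
```

In particular, geodesics of classes $(1,0)$ and $(c,d)$ meet in $|d|$ points, which reappears in the counting arguments of Section 2.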
\begin{figure}[H]\centering \includegraphics[width=7cm]{fig/tikz/LineArrangementDimer.pdf} \caption{The affine dimer $G=(V_\circ\sqcup V_\bullet, E)$ obtained from the admissible line arrangement in Figure \ref{figure:admissible}. The edges of $G$ are depicted in blue.} \label{figure:dimer} \end{figure} The converse construction is also possible: Given a dimer $G=(V_\circ\sqcup V_\bullet, E)$ such that \begin{itemize} \item the vertices of $G$ are faces of a line arrangement in general position; \item each edge of $G$ connects two faces along an intersection point of the line arrangement such that the connected faces are opposite each other at that intersection point; \item each intersection point of the line arrangement lies on exactly one edge of $G$ and each edge of $G$ contains exactly one intersection point of the line arrangement, \end{itemize} then we may declare the faces in $V_\circ$ and $V_\bullet$ to be oriented clockwise and counterclockwise, respectively. The above conditions determine a well-defined choice of orientation for each line. Thus, we obtain an admissible oriented line arrangement. \begin{definition} An \textit{affine dimer} is a dimer satisfying the three conditions above. Figure \ref{figure:dimer} gives an example. \end{definition} Thus, the notions of an affine dimer and an admissible oriented line arrangement are equivalent, and we will use them interchangeably. \subsection{Problem Statement} \label{sec:problem_statement} Our leading question is the following: \begin{question} \label{question:main} For which multisets of homology classes $S=\{h_1,\dots,h_n\}\subset H_1(\mathbb{T}^2)\cong \mathbb{Z}^2$ is there an admissible oriented line arrangement $\mathcal{H}=\{H_1,\dots,H_n\}$ whose lines represent $S$, i.e., such that $[H_i]=h_i$ for $i=1,\dots,n$? \end{question} Figure \ref{figure:admissible} shows that a multiset of homology classes may be represented both by admissible and non-admissible oriented line arrangements. 
Moreover, Forsgård showed that there is a family of multisets of homology classes indexed by $\mathbb{N}_{\ge 5}$, for which there are no admissible oriented line arrangements representing them \cite{forsgard2016dimer}. Thus, the problem is non-trivial. We already saw that the homology class of a closed geodesic on $\mathbb{T}^2$ is automatically \textit{primitive}, i.e., $(a,b)\in\mathbb{Z}^2$ with $\text{\normalfont gcd}(a,b)=1$. There is another immediate necessary condition, which will allow us to reformulate the problem in terms of convex polygons on the integer lattice. \begin{lemma} \label{lemma:zeroSum} Let $\mathcal{H}=\{H_1,\dots,H_n\}$ be an admissible oriented line arrangement representing the homology classes $[H_i]=(a_i,b_i)\in\mathbb{Z}^2$. Then \[ \sum_{i=1}^n [H_i] = 0. \] \end{lemma} \begin{proof} Each oriented line $H_i$ is subdivided into several oriented line segments whose endpoints are intersection points of the line arrangement. These oriented line segments represent 1-chains on $\mathbb{T}^2$, so we may write $H_i=\sum_{\text{segments } e \text{ of } H_i} e$, where $e\in C_1(\mathbb{T}^2)$ is a line segment of $H_i$. On the other hand, each segment belongs to exactly one consistently oriented face. Thus, as chains \[ \sum_{i=1}^n H_i = \sum_{i=1}^n \left( \sum_{\text{segments } e \text{ of } H_i} e\right) = \sum_{\text{segments } e \text{ of }\mathcal{H}} e = \sum_{\text{oriented faces } F}\left(\sum_{\text{edges } e \text{ of } F} e\right). \] Passing to homology, we get $\left[\sum_{\text{edges } e \text{ of } F}e\right]=0$ for each oriented face $F$. Therefore $\sum_{i=1}^n [H_i]=0$. \end{proof} \begin{definition} A \textit{lattice polygon} is a polygon in $\mathbb{R}^2$ whose vertices all lie in $\mathbb{Z}^2$. \end{definition} \begin{lemma} There is a bijection between the finite multisets of primitive elements of $\mathbb{Z}^2$ summing to zero and the convex lattice polygons on $\mathbb{Z}^2$ up to translation. 
\end{lemma} \begin{proof} Given primitive elements $h_i=(a_i,b_i)\in\mathbb{Z}^2$, i.e., $\text{\normalfont gcd}(a_i,b_i)=1$, we may order them by their angle $\text{arg}(h_i)\in[0,2\pi)$ with the $x$-axis. We define a convex lattice polygon via the vertices $v_0=(0,0)$ and $v_i=v_{i-1}+h_i$. This is a closed polygon since $\sum_{i=1}^n h_i = 0$ and convex since we ordered the $h_i$. Conversely, given a convex lattice polygon, orient the edges counterclockwise and subdivide each edge into segments containing no integer lattice point in their interiors. Viewing each such segment as a vector $(a,b)\in\mathbb{Z}^2$, this is exactly the condition $\text{\normalfont gcd}(a,b)=1$, and thus we obtain a primitive element $h_i=(a_i,b_i)\in\mathbb{Z}^2$ for each primitive edge segment of the polygon. Finally, $\sum_{i=1}^n h_i = 0$ since polygons are closed. \end{proof} \begin{definition} Let $\mathcal{H}$ be an admissible oriented line arrangement on $\mathbb{T}^2$. The \textit{homology polygon} $P$ of $\mathcal{H}$ is the convex lattice polygon obtained from the homology classes of the lines of $\mathcal{H}$. Equivalently, we can talk about the homology polygon of an affine dimer. Figure \ref{figure:homologyPolygon} shows an example. \end{definition} \begin{figure}[H]\centering \includegraphics[width=2.5cm]{fig/tikz/HomologyPolygon.pdf} \caption{Homology polygon of the admissible line arrangement in Figure \ref{figure:admissible} and the affine dimer in Figure \ref{figure:dimer}. The homology classes represented by the polygon are $(1,0), (1,0), (0,1),(-1,1),(-1,-2)$.} \label{figure:homologyPolygon} \end{figure} It is not a priori clear that the homology polygon of an affine dimer is well-defined, as there might exist different admissible line arrangements with different homology classes that give the same affine dimer $G=(V_\circ\sqcup V_\bullet, E)$ via the construction in Section \ref{sec:defns}.
However, as mentioned earlier, the homology polygon is equivalent to the \textit{characteristic polygon} (see \cite{chan2016}), which only depends on the data of the dimer and not the line arrangement. Hence, the homology polygon of an affine dimer is well-defined. We prefer homology polygons in the problem statement only because they are slightly easier to define. Thus, we can reformulate our question: \begin{question} (Reformulation of Question \ref{question:main}) \label{question:latticeForm} Which convex lattice polygons arise as the homology polygon of an admissible oriented line arrangement? Or which convex lattice polygons admit an affine dimer? \end{question} \subsubsection{Invariance under $GL_2(\mathbb{Z})$} Finally, we note that $GL_2(\mathbb{Z})=\left\{A\in\mathbb{Z}^{2\times 2} : \det(A)=\pm1\right\}$ acts on $\mathbb{T}^2$ by linear automorphisms which preserve admissible oriented line arrangements. Similarly, $GL_2(\mathbb{Z})$ acts on the space of convex lattice polygons through its action on $\mathbb{Z}^2$, preserving area and the number of lattice points in the interior and on the boundary. We call two lattice polygons \textit{equivalent} if they are related by an action of $GL_2(\mathbb{Z})$ and translation by a vector in $\mathbb{Z}^2$. Thus, whether a convex lattice polygon admits an affine dimer only depends on its equivalence class, and we arrive at our final formulation of the problem: \begin{question} (Reformulation of Question \ref{question:latticeForm}) \label{question:latticeEquivForm} Which equivalence classes of convex lattice polygons arise as the homology polygon of an admissible oriented line arrangement? Or which equivalence classes admit an affine dimer?
\end{question} \subsection{Outline of Results and Structure} Section \ref{sec:combinatorics} surveys some basic combinatorial properties of affine dimers and motivates the name \textit{genus} for the number of interior points of the homology polygon $P$, by connecting it to the genus of a punctured compact orientable surface that is homotopy equivalent to $G$. In Section \ref{sec:new_dimers_from_old} we present three constructions of affine dimers. The ``double everything''-construction exhibits an affine dimer for every lattice polygon consisting of pairs of antiparallel primitive side segments. The other two constructions (``lifting'' and ``adding an antiparallel pair'') give new dimers from old: \begin{alphatheorem} \label{thm:homologyPolygonSpace} Let $P$ be the homology polygon of an affine dimer. \begin{itemize} \item[(i)] If $B\in\mathbb{Z}^{2\times 2}$ and $\det(B)\neq 0$ then $B(P)$ is also the homology polygon of an affine dimer. \item[(ii)] If $h\in\mathbb{Z}^2$ is a primitive side segment of $P$, then $P_h$ is also the homology polygon of an affine dimer, where $P_h$ is obtained from $P$ by adding the side segments $h,-h$. \end{itemize} \end{alphatheorem} \begin{proof} (i) Corollary \ref{cor:applyLinear} below. (ii) Proposition \ref{prop:addParallelEdges}. \end{proof} Section \ref{sec:algorithms} summarises our algorithms, including a description of the moduli space $\mathcal{M}\cong\mathbb{T}^n$ of line arrangements representing a given homology polygon $P$. Finally, Section \ref{sec:genus0and1} connects these results to finish the proof of Theorem \ref{thm:mainResult}: \begin{alphatheorem}\label{thm:mainResult} Let $P$ be a convex lattice polygon such that \begin{itemize} \item[(i)] $P$ is a triangle, or \item[(ii)] the primitive side segments of $P$ are pairs of antiparallel side segments, or \item[(iii)] the number of interior lattice points of $P$ is at most 2. \end{itemize} Then $P$ admits an affine dimer.
\end{alphatheorem} \begin{proof} (i) Proposition \ref{prop:triangles}. (ii) Proposition \ref{prop:doubleEverything}. (iii) Propositions \ref{prop:genus0}, \ref{prop:genus1}, \ref{prop:genus2} below. \end{proof} \section{Basic Combinatorics of Affine Dimers} \label{sec:combinatorics} In this section we develop some basic combinatorics of affine dimers. \begin{definition} Let $G=(V_\circ\sqcup V_\bullet,E)$ be an affine dimer with corresponding admissible oriented line arrangement $\mathcal{H}=\{H_1,\dots,H_n\}$. Then we denote by \begin{itemize} \item $n$ ... the number of lines, \item $f_\circ := |V_\circ|, f_\bullet := |V_\bullet|$ ... the number of faces of the line arrangement oriented clockwise and counterclockwise, respectively, \item $f_\times$ ... the number of faces that are inconsistently oriented, \item $f=f_\circ + f_\bullet + f_\times$ ... the number of faces of the line arrangement, \item $v$ ... the total number of vertices of the line arrangement, i.e., intersection points of lines in $\mathcal{H}$, \item $e_\circ, e_\bullet$ ... the number of line segments of $\mathcal{H}$ belonging to faces in $V_\circ$ or $V_\bullet$, respectively, \item $e=e_\circ + e_\bullet$ ... the number of line segments of the line arrangement $\mathcal{H}$, \item $g$ ... the \textit{genus} of the dimer, which will be introduced in Section \ref{sec:genus}. \end{itemize} \end{definition} For example, the affine dimer in Figure \ref{figure:dimer} has $n=5$, $f_\circ=f_\bullet=4$, $f_\times = 5$, ${f=v=e/2=13}$, ${e_\circ=e_\bullet=13}$, and $g=1$.
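These numbers can be checked mechanically. A small sketch (helper names hypothetical) verifying the counting identities for the running example, together with the area of its homology polygon from Figure \ref{figure:homologyPolygon} (anticipating the area formula proved below):

```python
def polygon_from_classes(classes):
    """Vertices as cumulative sums of the primitive side segments,
    as in the bijection between primitive vectors and lattice polygons."""
    verts, x, y = [], 0, 0
    for a, b in classes:
        verts.append((x, y))
        x, y = x + a, y + b
    assert (x, y) == (0, 0), "side segments must sum to zero"
    return verts

def twice_area(verts):
    """Twice the (unsigned) area of a polygon, via the shoelace formula."""
    s = 0
    for i in range(len(verts)):
        (x0, y0), (x1, y1) = verts[i], verts[(i + 1) % len(verts)]
        s += x0 * y1 - x1 * y0
    return abs(s)

# Data of the running example dimer:
n, f_circ, f_bullet, f_times, v, e = 5, 4, 4, 5, 13, 26
f = f_circ + f_bullet + f_times
assert v - e + f == 0                       # Euler characteristic of the torus
assert f_circ == f_bullet and v == f == e // 2
classes = [(1, 0), (1, 0), (0, 1), (-1, 1), (-1, -2)]
assert twice_area(polygon_from_classes(classes)) == 5 == f_times
```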
\begin{proposition}[Basic counting] \label{prop:basicResults} \[ \textnormal{(i)} \hspace{2mm}v-e+f=0\hspace{6mm} \textnormal{(ii)} \hspace{2mm}e_\circ = e_\bullet \hspace{6mm} \textnormal{(iii)} \hspace{2mm}f_\circ = f_\bullet\hspace{6mm} \textnormal{(iv)}\hspace{2mm} v=f=e/2 \] \end{proposition} \begin{proof}The proofs are as follows: \begin{enumerate} \item[(i)] Immediate since $\mathbb{T}^2$ has Euler characteristic zero and a line arrangement in general position gives a CW decomposition of $\mathbb{T}^2$. \item[(ii)] Each of the $n$ closed geodesics consists alternately of edges counted by $e_\circ$ and $e_\bullet$. \item[(iii)] This follows from the existence of a perfect matching for $G$ (see Proposition \ref{lemma:matchingsExist}). \item[(iv)] By (i) it suffices to show $v=f$, for which we induct on the number of lines. Adding a closed geodesic in general position adds as many faces as it adds vertices. To verify the induction basis, assume we only have two closed geodesics which are not parallel. As the homology class of a closed geodesic is a primitive element of $\mathbb{Z}^2$, by the Euclidean algorithm we may use an action of $SL_2(\mathbb{Z})$ to assume that the geodesics have homology classes $(1,0)$ and $(c,d)$. By inspection, this configuration has $v=f=|d|$. \end{enumerate} \end{proof} \begin{proposition}\label{lemma:matchingsExist} An affine dimer $G$ admits a perfect matching. \end{proposition} \begin{proof} See \cite{chan2016} for a very readable discussion of the perfect matchings of a (not necessarily affine) dimer, whose information is encoded in the \textit{characteristic polygon} via height functions. This is a special case. Let $\rho\in \mathbb{R}^2\setminus\{0\}$ be a vector that does not indicate the (signed) direction of any line in $\mathcal{H}$. Then every consistently oriented face has a vertex at which the directions of the two intersecting lines are immediately to the right and to the left of $\rho$.
Now match each clockwise face in $V_\circ$ to the counterclockwise face in $V_\bullet$ adjacent to it via that vertex. This defines a bijection $V_\circ\rightarrow V_\bullet$ whose inverse $V_\bullet\rightarrow V_\circ$ is constructed identically using the same $\rho$. Thus, we have a matching. This construction is illustrated in Figure \ref{figure:matching}. \end{proof} \begin{figure}[h]\centering \includegraphics[width=7cm]{fig/tikz/LineArrangementDimer_matching.pdf} \hspace{1.5cm} \raisebox{1.1\height}{\includegraphics[width=3cm]{fig/tikz/HomologyPolygon_matching.pdf}} \caption{Matching (red) of the affine dimer in Figure \ref{figure:dimer} corresponding to $\rho = (1,1)$ (left). As seen in the dimer's homology polygon (right), any $\rho$ with $\text{arg}(\rho)\in(0,\pi/2)$ produces the same matching.} \label{figure:matching} \end{figure} \begin{proposition} \label{prop:ftimes1213} We have \[ 1/3 \le f_\times / f \le 1/2 \] with $f_\times / f = 1/2$ if and only if every inconsistently oriented face is a 4-gon, and $f_\times / f = 1/3$ if and only if every consistently oriented face is a triangle. \end{proposition} \begin{proof} For the upper bound we count the number of corners of inconsistently oriented faces in two ways. On the one hand, this is $2v$ since each vertex is incident with two inconsistently oriented faces. On the other hand, each inconsistently oriented face has an even number of corners, as otherwise $G$ contains an odd cycle contradicting bipartiteness; since every face has at least three corners (its lift to $\mathbb{R}^2$ is a polygon bounded by straight segments), this number is at least four. Thus, \[ 2v \ge 4f_\times. \] By Proposition \ref{prop:basicResults}, $v=f$, so the upper bound follows with the equality condition as desired. For the lower bound we count the number of corners of consistently oriented faces in two ways. Again, this is $2v$. But every face has at least three edges, so \[ 2v \ge 3(f_\circ + f_\bullet) = 3(f - f_\times). \] The result follows using $f=v$ again.
\end{proof} \subsection{Area of the Homology Polygon} The next result follows directly from Johansson's and Forsgård's work in \cite{forsgardJohansson2014}. \begin{theorem} \label{thm:2areaP} Let $P$ be the homology polygon of an affine dimer. Then \[ f_\times = 2\text{\normalfont Area}(P). \] \end{theorem} \begin{proofsketch} The key step is to show that $2\pi\text{\normalfont Area}(P)$ is the sum of \textit{inner angles} of vertices of the line arrangement, counting one per vertex (\cite{forsgardJohansson2014}, Lemma 3.2). Here, the inner angle of two oriented intersecting lines is defined to be positive and lies between an ingoing and an outgoing ray. Thus, $2\pi\text{\normalfont Area}(P)$ is the sum of interior angles of all clockwise faces. This is exactly half the sum of exterior angles of all inconsistently oriented faces, which is $2\pi$ per face. Thus, \[ 2\pi\text{\normalfont Area}(P) = \frac{2\pi f_\times}{2}. \] \end{proofsketch} For example, the affine dimer in Figure \ref{figure:matching} has $f_\times = 5=2\text{\normalfont Area}(P)$. \subsection{Genus of an Affine Dimer} \label{sec:genus} We now describe two ways to think of an affine dimer (or equivalently of an admissible oriented line arrangement) as a two-dimensional geometric shape. \begin{definition} The \textit{realisation of an affine dimer} $G$ is the set $\bar{G}:=\bigcup_{F\in V_\circ\sqcup V_\bullet}\bar{F}\subseteq\mathbb{T}^2$, i.e., the union of the closed oriented faces of the admissible oriented line arrangement $\mathcal{H}$. \end{definition} By definition, this depends on the choice of admissible line arrangement $\mathcal{H}$ corresponding to $G$. However, many properties of $\bar{G}$ only depend on the homology polygon $P$. \begin{proposition} \label{prop:eulerCharFlat} The Euler characteristic of $\bar{G}$ is $\chi(\bar{G})=-f_\times$.
\end{proposition} \begin{proof} \[ \chi(\bar{G}) = v - e + (f_\circ+f_\bullet) = v - e + f - f_\times = \chi(\mathbb{T}^2) - f_\times = -f_\times. \] \end{proof} The embedding of $G$ described in Section \ref{sec:defns} is a deformation retract of $\bar{G}$, so $\chi(G)=\chi(\bar{G})$. \begin{corollary} \label{cor:Gchar1} $\chi(G)=\chi(\bar{G})=-2\text{\normalfont Area}(P)$. \end{corollary} We may consider $\bar{G}$ as the projection of a punctured smooth compact oriented surface $\hat{\bar{G}}$ embedded in $\mathbb{R}^3$. To this end we use the smooth standard embedding $\varphi: \mathbb{T}^2\hookrightarrow\mathbb{R}^3$ and consider $\varphi(\bar{G})\subset\mathbb{R}^3$. \begin{definition}(The smooth orientable surface $\hat{\bar{G}}\subset\mathbb{R}^3$) Away from intersection points of boundary components of $\varphi(\bar{G})$ we identify $\hat{\bar{G}}$ with $\varphi(\bar{G})$. Near an intersection point, we exploit the third dimension and let $\hat{\bar{G}}$ twist locally by $180^\circ$ like a helicoid as shown in Figure \ref{figure:twist}, i.e., the normal vector changes smoothly from $v$ to $-v$ when traversing this neighbourhood along an edge of $G$. These patches are glued together using bump functions so that we obtain a smooth compact embedded surface $\hat{\bar{G}}\subset\mathbb{R}^3$. \end{definition} \begin{figure}[H]\centering \includegraphics[height=3.8cm]{fig/tikz/Twist_1.pdf} \hspace{1.2cm} \raisebox{6.5\height}{\scalebox{2}{$\longrightarrow$}} \hspace{1.2cm} \includegraphics[height=3.8cm]{fig/tikz/Twist_2.pdf} \caption{Twisting of $\hat{\bar{G}}$ in $\mathbb{R}^3$ near an intersection point of the boundary components of $\varphi(\bar{G})$. $\hat{\bar{G}}$ looks locally like the $180^\circ$ segment of a helicoid near such points.} \label{figure:twist} \end{figure} \begin{proposition} \label{prop:Gorientable} $\hat{\bar{G}}$ is orientable.
\end{proposition} \begin{proof} An orientation $N:\hat{\bar{G}}\rightarrow S^2$ is obtained as follows. By the Jordan--Brouwer separation theorem, $\mathbb{R}^3\setminus\varphi(\mathbb{T}^2)$ consists of a bounded and an unbounded component. Away from intersection points of boundary components of $\varphi(\bar{G})$, let $N(p)$ point into the unbounded component at $p$ if $p\in\varphi(\bigcup V_\bullet)$ and into the bounded component if $p\in\varphi(\bigcup V_\circ)$. Near an intersection point, let $N$ twist as prescribed by the local helicoid in Figure \ref{figure:twist}. These local definitions of $N$ glue together to form a well-defined orientation of $\hat{\bar{G}}$ because $G=(V_\circ\sqcup V_\bullet,E)$ is bipartite, so $G$ does not contain a circuit of odd length. \end{proof} \begin{proposition} \label{prop:Gchar2} $\bar{G}$ and $\hat{\bar{G}}$ are homotopy equivalent. Therefore, \[ \chi(\hat{\bar{G}})=\chi(\bar{G})=\chi(G)=-f_\times = -2\text{\normalfont Area}(P). \] \end{proposition} \begin{proof} It suffices to show that $\varphi(\bar{G})$ and $\hat{\bar{G}}$ are homotopy equivalent. Away from intersection points of boundary components of $\varphi(\bar{G})$, both surfaces are identical. Near an intersection point, the surfaces are equivalent by the homotopy that projects the right hand side of Figure \ref{figure:twist} onto the left hand side. These homotopies glue together compatibly and hence $\varphi(\bar{G})\simeq\hat{\bar{G}}$. The equalities now follow from Proposition \ref{prop:eulerCharFlat} and Corollary \ref{cor:Gchar1}.
\end{proof} \begin{lemma}[Pick's formula] \label{lemma:pick} Let $P$ be a simple lattice polygon (i.e., $\partial P$ does not self-intersect and has exactly one connected component). Then \[ \text{\normalfont Area}(P) = |\mathring{P}\cap\mathbb{Z}^2| + \frac{1}{2}\left|\partial P\cap\mathbb{Z}^2\right|- 1. \] \end{lemma} \begin{proof} This is a well-known result with many different proofs available. E.g., one standard proof is via Euler's formula \cite{theBOOK}, while a more non-standard proof uses the Weierstraß $\wp$-function \cite{pickViaWeierstrass}. \end{proof} \begin{theorem} $\hat{\bar{G}}$ is homeomorphic to the compact oriented surface $\Sigma_{g,n}$ obtained by removing $n$ disjoint open discs from the compact oriented surface $\Sigma_g$ of genus $g$ without boundary. Moreover, the genus $g$ is the number of interior points of $P$, i.e., \[ \hat{\bar{G}}\cong\Sigma_{g,n}\hspace{5mm}\text{where}\hspace{5mm}g=\left|\mathring{P}\cap\mathbb{Z}^2\right|. \] \end{theorem} \begin{proof} The first part follows from the classification of surfaces and the fact that $\hat{\bar{G}}$ has $n$ boundary components, one for each line in $\mathcal{H}$. Adding a puncture to a surface (i.e., removing an open disc) decreases the Euler characteristic by one. Thus, by Proposition \ref{prop:Gchar2}, \[ \chi(\Sigma_g) = \chi(\hat{\bar{G}}) + n = - 2\text{\normalfont Area}(P) + n. \] But $n$ is the number of primitive side segments of $P$ which equals the number of lattice points on the boundary $\partial P$. Using $\chi(\Sigma_g)=2-2g$ we get \[ g = 1 - \chi(\Sigma_g)/2 = 1 + \text{\normalfont Area}(P) - \left|\partial P\cap\mathbb{Z}^2\right|/2. \] Now the statement follows immediately from Lemma \ref{lemma:pick} (Pick's formula).
\end{proof} This explains our definition of the genus of a dimer: \begin{definition} The \textit{genus of an affine dimer} $G$ with homology polygon $P$ is $g:=\left|\mathring{P}\cap\mathbb{Z}^2\right|$, the number of lattice points in the interior of $P$. We also call this the \textit{genus of the convex lattice polygon} $P$. \end{definition} This matches the relation between the genus of a tropical curve in $\mathbb{R}^2$ and the number of interior points of its Newton polygon \cite{mikhalkin2005}. For example, Figure \ref{figure:genusExample} shows an affine dimer $G$ of genus zero. Since its line arrangement has $n=3$ lines, $\hat{\bar{G}}\cong \Sigma_{0,3}$ is the 3-punctured sphere, also known as a \textit{pair of pants}. The affine dimer in Figure \ref{figure:matching} has genus one and $\hat{\bar{G}}\cong \Sigma_{1,5}$, the 5-punctured torus. \begin{figure}[H]\centering \includegraphics[width=4cm]{fig/tikz/LineArrangement_smallest.pdf} \hspace{1.5cm} \raisebox{0.7\height}{\includegraphics[width=1.7cm]{fig/tikz/HomologyPolygon_small.pdf}} \caption{An affine dimer of genus zero and its homology polygon.} \label{figure:genusExample} \end{figure} It is shown in Section \ref{sec:genus0and1} that every convex lattice polygon of genus at most 2 is the homology polygon of an affine dimer, answering Question \ref{question:latticeEquivForm} positively for these polygons. Moreover, by Proposition \ref{prop:triangles}, every lattice triangle admits an affine dimer. Since for every $g\in\mathbb{N}$ there exists a lattice triangle of genus $g$, there exist affine dimers of all genera. \section{Constructions of Affine Dimers} \label{sec:new_dimers_from_old} Next, we present three constructions of affine dimers and analyse the resulting homology polygons. \subsection{Adding parallel edges} \begin{proposition} \label{prop:addParallelEdges} Let $\mathcal{H}$ be an admissible oriented line arrangement with homology polygon $P$. 
Let $h\in\mathbb{Z}^2$ be a primitive side segment of $P$. Then the convex lattice polygon $P_h$ obtained by adding the antiparallel side segments $h$ and $-h$ to $P$ is the homology polygon of an admissible oriented line arrangement. Thus, if $P$ admits an affine dimer then so does $P_h$. \end{proposition} \begin{figure}[H]\centering \raisebox{0.05\height}{\includegraphics[width=6cm]{fig/tikz/AddLine_1.pdf}} \hspace{0.6cm} \raisebox{6\height}{\scalebox{2}{$\longrightarrow$}} \hspace{0.6cm} \raisebox{0.0\height}{\includegraphics[width=5.2cm]{fig/tikz/AddLine_2.pdf}} % \vspace{5mm} \newline % \raisebox{0\height}{\includegraphics[width=3cm]{fig/tikz/AddLine_polygon_1.pdf}} \hspace{0.6cm} \raisebox{3\height}{\scalebox{2}{$\longrightarrow$}} \hspace{0.6cm} \raisebox{0.0\height}{\includegraphics[width=3cm]{fig/tikz/AddLine_polygon_2.pdf}} \caption{Constructing $\mathcal{H}_H$ from $\mathcal{H}$ (top) and $P_h$ from $P$ (bottom) for $h=(1,0)$. Only the local picture near $H$ is displayed, as everything else remains unchanged.} \label{figure:addLine} \end{figure} \begin{proof} Let $H\in\mathcal{H}$ with $[H]=h$. We construct a new admissible oriented line arrangement $\mathcal{H}_H$ with homology polygon $P_h$ by adding two antiparallel lines $H_1,H_2$ with $[H_1]=-h$ and $[H_2]=h$. As depicted in Figure \ref{figure:addLine}, we place them in the order $H_2,H_1,H$ and close enough to $H$ so that no other intersection points of $\mathcal{H}$ lie between $H_2$ and $H$. If $k$ is the number of intersection points on $H$ then this construction adds $k$ consistently oriented faces to $\mathcal{H}$ locally near $H$, $k/2$ of each orientation. Away from $H$ the arrangement remains unchanged. Thus, we have obtained a new admissible arrangement $\mathcal{H}_H$ of homology polygon $P_h$, as required. \end{proof} \subsection{Double everything} All lattice polygons consisting of pairwise antiparallel primitive side segments admit an affine dimer. 
\begin{proposition} \label{prop:doubleEverything} Let $\Sigma=\{h_1,\dots,h_n\}\subseteq\mathbb{Z}^2$ be a multiset of primitive vectors and let $P_\Sigma$ be the convex lattice polygon consisting of the pairwise antiparallel side segments $\pm h_1, \dots,\pm h_n$. Then $P_\Sigma$ admits an affine dimer, i.e., $P_\Sigma$ is the homology polygon of an admissible oriented line arrangement. Additionally, the affine dimer may be taken to have $f_\times/f = 1/2$. \end{proposition} \begin{figure}[H]\centering \raisebox{0\height}{\includegraphics[width=5cm]{fig/tikz/DoubleEverything_1.pdf}} \hspace{0.6cm} \raisebox{10\height}{\scalebox{2}{$\longrightarrow$}} \hspace{0.6cm} \raisebox{0.0\height}{\includegraphics[width=5cm]{fig/tikz/DoubleEverything_2.pdf}} \vspace{5mm} \newline \raisebox{0.75\height}{\includegraphics[height=1cm]{fig/tikz/DoubleEverything_directions.pdf}} \hspace{1cm} \raisebox{4\height}{\scalebox{2}{$\longrightarrow$}} \hspace{2.7cm} \raisebox{0\height}{\includegraphics[height=2.3cm]{fig/tikz/DoubleEverything_polygon.pdf}} \caption{Illustration of the ``double everything''-construction.} \label{figure:doubleEverything} \end{figure} \begin{proof} Let $\mathcal{L}$ be any unoriented line arrangement in general position representing the homology classes $\Sigma$ on $\mathbb{T}^2$ (up to sign). We construct an admissible oriented line arrangement $\mathcal{H}_\mathcal{L}$ as follows. First, add each line in $\mathcal{L}$ to $\mathcal{H}_\mathcal{L}$. Then, for each $H\in\mathcal{L}$, add a line $H^-$ to $\mathcal{H}_\mathcal{L}$ that is parallel and close enough to $H$, such that no lines intersect between $H$ and $H^-$ and $\mathcal{H}_\mathcal{L}$ is in general position. Let $V_\circ$ be the parallelograms corresponding to intersection points of $\mathcal{L}$ and let $V_\bullet$ be the faces corresponding to the original faces of $\mathcal{L}$. 
This gives an affine dimer $G=(V_\circ\sqcup V_\bullet, E)$ whose edges $E$ encode the face-vertex incidence relations of $\mathcal{L}$ (see Figure \ref{figure:doubleEverything}). By the discussion in Section \ref{sec:defns} there is a choice of orientation for every line in $\mathcal{H}_\mathcal{L}$ such that the faces in $V_\circ$ and $V_\bullet$ are oriented clockwise and anticlockwise, respectively. This makes $\mathcal{H}_\mathcal{L}$ an admissible oriented line arrangement. Moreover, each pair $(H, H^-)$ is oppositely oriented, so the homology polygon of $\mathcal{H}_\mathcal{L}$ is $P_\Sigma$, as required. The inconsistently oriented faces of $\mathcal{H}_\mathcal{L}$ correspond to the line segments of $\mathcal{L}$ and are all 4-gons. Thus, by Proposition \ref{prop:ftimes1213}, we have $f_\times/f = 1/2$. \end{proof} \subsection{Lifting} \label{sec:lifting} In this section we use column and row vectors for elements of a vector space and its dual, respectively. Recall that $q:\mathbb{R}^2\rightarrow\mathbb{T}^2\cong\mathbb{R}^2/\mathbb{Z}^2$ is a universal cover of $\mathbb{T}^2$. Let $\mathcal{H}$ be an admissible oriented line arrangement on $\mathbb{T}^2$. The preimage $q^{-1}(\mathcal{H})$ consists of all lifts of all the lines in $\mathcal{H}$. Moreover, each fundamental parallelogram on $\mathbb{R}^2$ spanned by $\begin{pmatrix}1&0\end{pmatrix}^T, \begin{pmatrix}0&1\end{pmatrix}^T$ contains exactly one representative copy of $\mathcal{H}$. However, we may instead choose a fundamental parallelogram spanned by two other elements of $\mathbb{Z}^2$; this exhibits $\mathbb{R}^2$ as the universal cover of a new torus, on which $q^{-1}(\mathcal{H})$ descends to a new admissible oriented line arrangement. This is equivalent to first lifting $\mathcal{H}$ to the universal cover $\mathbb{R}^2$ and then quotienting out by a general sublattice $\Lambda\le\mathbb{Z}^2$. See Figure \ref{figure:lifting1} for an example. 
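Before formalising this, we note that the index of the sublattice is easy to compute. The following Python sketch (the helper name is ours; the paper's implementation is in Java) computes the index $[\mathbb{Z}^2:\Lambda]=|\det(\alpha\;\beta)|=\left|\text{covol}(\Lambda)\right|$, which is the degree of the induced cover of tori:

```python
# Sketch (helper name ours, not from the paper's implementation): the index
# [Z^2 : Λ] of a non-degenerate sublattice Λ = <α, β> ≤ Z^2 equals
# |det(α β)|, the covolume of Λ, and this is the degree of the induced
# regular cover T^2_Λ -> T^2.

def covering_degree(alpha, beta):
    """Index of the sublattice of Z^2 spanned by the column vectors alpha, beta."""
    det = alpha[0] * beta[1] - alpha[1] * beta[0]
    if det == 0:
        raise ValueError("lattice must be non-degenerate")
    return abs(det)

# Λ = <(1,0)^T, (0,2)^T> gives a double cover: each line of H lifts to
# twice as many lines on T^2_Λ.
print(covering_degree((1, 0), (0, 2)))  # 2
# Λ = <(3,1)^T, (1,2)^T> gives a degree-5 cover.
print(covering_degree((3, 1), (1, 2)))  # 5
```

In particular, the lattice $\Lambda=\langle\begin{pmatrix}1&0\end{pmatrix}^T,\begin{pmatrix}0&2\end{pmatrix}^T\rangle$ used in the example below yields a double cover.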
Let $\Lambda=\langle\alpha,\beta\rangle\le\mathbb{Z}^2$ be a (non-degenerate) lattice and let $\mathbb{T}^2_\Lambda$ be the torus associated to the universal cover $q_\Lambda:\mathbb{R}^2\rightarrow\mathbb{R}^2/\Lambda:=\mathbb{T}^2_\Lambda$. Then $H_1(\mathbb{T}^2_\Lambda)\cong\mathbb{Z}\alpha \oplus \mathbb{Z}\beta$. We want to find the homology polygon of the admissible oriented line arrangement $\mathcal{H}_\Lambda:=q_\Lambda(q^{-1}(\mathcal{H}))$ on $\mathbb{T}^2_\Lambda$. Note that since $\Lambda\le\mathbb{Z}^2$, this construction gives a well-defined regular cover $q\circ q^{-1}_\Lambda:\mathbb{T}^2_\Lambda\rightarrow\mathbb{T}^2$ of degree $\left|\text{covol}(\Lambda)\right|$, the volume of any fundamental parallelogram of $\Lambda$. This cover maps $q_\Lambda(q^{-1}(\mathcal{H}))$ onto $\mathcal{H}$ so that $\mathcal{H}_\Lambda$ is a regular cover of $\mathcal{H}$. \begin{figure}[H]\centering \raisebox{0\height}{\includegraphics[width=5cm]{fig/tikz/Lifting_universal_1.pdf}} \hspace{0.6cm} \raisebox{8\height}{\scalebox{2}{$\longrightarrow$}} \hspace{0.6cm} \raisebox{0\height}{\includegraphics[width=5cm]{fig/tikz/Lifting_universal_2.pdf}} \vspace{5mm} \\ \raisebox{0\height}{\includegraphics[width=4cm]{fig/tikz/Lifting_1.pdf}} \hspace{0.6cm} \raisebox{7\height}{\scalebox{2}{$\longrightarrow$}} \hspace{0.6cm} \raisebox{0\height}{\includegraphics[width=4cm]{fig/tikz/Lifting_2.pdf}} \vspace{5mm} \\ \raisebox{0\height}{\includegraphics[width=2.5cm]{fig/tikz/Lifting_polygon_1.pdf}} \hspace{1.6cm} \raisebox{3\height}{\scalebox{2}{$\longrightarrow$}} \hspace{1.6cm} \raisebox{0\height}{\includegraphics[width=2.5cm]{fig/tikz/Lifting_polygon_2.pdf}} \caption{The lifting construction corresponding to the lattice $\Lambda=\langle\begin{pmatrix}1&0\end{pmatrix}^T,\begin{pmatrix}0&2\end{pmatrix}^T\rangle$. Top: change of fundamental parallelogram. Middle: from $\mathbb{T}^2$ to $\mathbb{T}^2_\Lambda$. 
Bottom: the new homology polygon.} \label{figure:lifting1} \end{figure} \begin{proposition} \label{prop:lifting} Let $P$ be the homology polygon of an admissible oriented line arrangement $\mathcal{H}$ on $\mathbb{T}^2$ and let $\Lambda=\langle\alpha,\beta\rangle\le\mathbb{Z}^2$ be a (non-degenerate) lattice with $\alpha=\begin{pmatrix}a\\b\end{pmatrix}$ and $\beta= \begin{pmatrix}c\\d\end{pmatrix}$. Let $A=\begin{pmatrix} a&c\\b&d \end{pmatrix}$ and let $P_\Lambda$ be the homology polygon of $\mathcal{H}_\Lambda$ on $\mathbb{T}^2_\Lambda$ with respect to the basis $H_1(\mathbb{T}^2_\Lambda)\cong\mathbb{Z}\alpha \oplus \mathbb{Z}\beta$. Then \[P_\Lambda = \text{\normalfont adj}(A) (P)=\begin{pmatrix} d&-c\\-b&a \end{pmatrix}(P). \] \end{proposition} \begin{figure}[H]\centering \raisebox{0.15\height}{\includegraphics[width=3cm]{fig/tikz/Lifting_proof_0.pdf}} \hspace{1.2cm} \raisebox{6\height}{\scalebox{2}{$\longrightarrow$}} \hspace{1.2cm} \raisebox{0\height}{\includegraphics[width=4cm]{fig/tikz/Lifting_proof.pdf}} \caption{Illustration of the proof of Proposition \ref{prop:lifting} with $\alpha=\begin{pmatrix}3&1\end{pmatrix}^T$ and $\beta=\begin{pmatrix}1&2\end{pmatrix}^T$. In this case $\varphi(\begin{pmatrix}3&1\end{pmatrix}^T)=\begin{pmatrix}5&0\end{pmatrix}^T$, confirming $\varphi(\alpha)=\det(A)\alpha\in H_1(\mathbb{T}^2_\Lambda)\cong\mathbb{Z}\alpha\oplus\mathbb{Z}\beta$.} \label{figure:lifting_proof} \end{figure} \begin{proof} Fix an orientation of $\mathbb{T}^2$. For two transversal loops $\gamma_1$ and $\gamma_2$ on a torus let $\iota(\gamma_1,\gamma_2)$ be their signed intersection number, which is invariant under homotopy. 
By Poincaré duality we have an isomorphism \begin{align*} i:H_1(\mathbb{T}^2)&\stackrel{\cong}{\longrightarrow} H^1(\mathbb{T}^2)\\ [\gamma]&\longmapsto \iota([\gamma],\cdot) \end{align*} and similarly $i_\Lambda:H_1(\mathbb{T}_\Lambda^2)\stackrel{\cong}{\rightarrow} H^1(\mathbb{T}_\Lambda^2)$. As discussed above, the map $\pi_\Lambda:=q\circ q_\Lambda^{-1}:\mathbb{T}_\Lambda^2\rightarrow \mathbb{T}^2$ is a regular cover, restricting to a regular cover $\mathcal{H}_\Lambda$ of $\mathcal{H}$. Let $\varphi:=i_\Lambda^{-1}\circ \pi_\Lambda^* \circ i$. \begin{eqnarray} \label{eqn:commutative:square} \begin{tikzcd} H_1(\mathbb{T}^2) \arrow[dashed]{r}{\varphi} \arrow["i","\cong"']{d}& H_1(\mathbb{T}_\Lambda^2)\arrow["i_\Lambda","\cong"']{d}{}\\ H^1(\mathbb{T}^2) \arrow[]{r}{\pi_\Lambda^*} & H^1(\mathbb{T}_\Lambda^2) \end{tikzcd} \end{eqnarray} We shall show that $\varphi([l])=[\pi_\Lambda^{-1}(l)]$ for every $l\in Z_1(\mathbb{T}^2)$. By Poincaré duality this is equivalent to \begin{align} \label{eqn:poincare:iotas} \iota(l,\pi_\Lambda (g))=\iota(\pi_\Lambda^{-1}(l), g) \end{align} for all $l\in Z_1(\mathbb{T}^2)$ and $g\in Z_1(\mathbb{T}_\Lambda^2)$. Indeed, $\pi_\Lambda$ is orientation preserving and for $l:S^1\rightarrow\mathbb{T}^2$ and $g:S^1\rightarrow\mathbb{T}_\Lambda^2$ we have $l(s)=\pi_\Lambda(g(t))$ if and only if $g(t)=\hat{l}(s)$ for some lift $\hat{l}$ of $l$ along $\pi_\Lambda$, proving (\ref{eqn:poincare:iotas}). It remains to show that $\varphi$ in (\ref{eqn:commutative:square}) is given by the matrix $\text{adj}(A)$. 
Working in the basis of $H^1(\mathbb{T}^2_\Lambda)$ dual to $H_1(\mathbb{T}^2_\Lambda)\cong\mathbb{Z}\alpha\oplus\mathbb{Z}\beta$ we obtain the desired result: \begin{align*} \begin{bmatrix} i\begin{pmatrix}1\\0\end{pmatrix}&=&\begin{pmatrix}0&1\end{pmatrix} \\ i\begin{pmatrix}0\\1\end{pmatrix}&=&\begin{pmatrix}-1&0\end{pmatrix} \end{bmatrix} \hspace{2mm}\Longrightarrow\hspace{2mm} \begin{bmatrix} \pi_\Lambda^*\circ i\begin{pmatrix}1\\0\end{pmatrix}&=&\begin{pmatrix}b&d\end{pmatrix} \\ \pi_\Lambda^*\circ i\begin{pmatrix}0\\1\end{pmatrix}&=&\begin{pmatrix}-a&-c\end{pmatrix} \end{bmatrix} \hspace{2mm}\Longrightarrow\hspace{2mm} \begin{bmatrix} \varphi\begin{pmatrix}1\\0\end{pmatrix}&=&\begin{pmatrix}d\\-b\end{pmatrix} \\ \varphi\begin{pmatrix}0\\1\end{pmatrix}&=&\begin{pmatrix}-c\\a\end{pmatrix} \end{bmatrix} \end{align*} \end{proof} The map $A\mapsto \text{adj}(A)$ on $\left\{A\in\mathbb{Z}^{2\times 2} : \det(A)\neq 0\right\}$ is surjective. Thus: \begin{corollary} \label{cor:applyLinear} If $P$ is the homology polygon of an affine dimer and $B\in\mathbb{Z}^{2\times 2}$ with $\det(B)\neq 0$, then $B(P)$ is also the homology polygon of an affine dimer. \end{corollary} We already knew this for $B\in GL_2(\mathbb{Z})$ because $GL_2(\mathbb{Z})$ acts by linear automorphisms on $\mathbb{T}^2$. This corollary is a generalisation. \section{Affine Dimer Search Algorithm} \label{sec:algorithms} The class of homology polygons obtained from the constructions in Section \ref{sec:new_dimers_from_old} is rather limited. Indeed, if $P$ is a homology polygon obtained from Proposition \ref{prop:addParallelEdges} or Proposition \ref{prop:doubleEverything} then $P$ has a pair of antiparallel side segments. If $P$ is obtained by lifting using a matrix $B\in\mathbb{Z}^{2\times 2}$ with $\det(B)\neq 0$ as in Corollary \ref{cor:applyLinear}, and if the non-primitive side segments of $P$ are $p_1,\dots,p_m\in\mathbb{Z}^2$, then $\det(B) \mid \det(p_i,p_j)$ for all $i,j$. 
For this construction to deliver a new $GL_2(\mathbb{Z})$ equivalence class of convex lattice polygons, we require $\det(B)\notin \{0,\pm 1\}$. Thus, the integers $\det(p_i,p_j)$ all have a common prime factor, which is a rare trait for a convex lattice polygon $P$. Therefore, we developed a computer program with a GUI to manipulate line arrangements on the torus and check whether a given convex lattice polygon admits an affine dimer. This section summarises the algorithms used. The programming was done primarily in Java, using the library \texttt{JGraphT} \cite{jgrapht} for standard graph algorithms and \texttt{polymake} \cite{polymake2020} to work with cell decompositions of $\mathbb{R}^n$. \subsection{Checking a single arrangement} Given a line arrangement $\mathcal{H}=\{H_1,\dots,H_n\}$ in general position on $\mathbb{T}^2$ with $\sum_{i=1}^n [H_i]=0$, the following algorithm determines whether it corresponds to an affine dimer. \begin{enumerate} \item Calculate all intersection points. For a pair $(H_1,H_2)$ of lines, this is done by setting $[H_1]=(1,0)$ via an action of $SL_2(\mathbb{Z})$. This simplified configuration is dealt with by inspection. The number of intersection points of $H_1$ and $H_2$ is $|\det([H_1],[H_2])|$. \item For each line $H\in\mathcal{H}$, determine the order of the intersection points on $H$. Again, this is done by first setting $[H]=(1,0)$ via $SL_2(\mathbb{Z})$. Thus, we obtain the side segment data of $\mathcal{H}$. \item For each intersection point of each side segment, determine the next side segment at that point in clockwise and anticlockwise order. Thus, we obtain the face data of $\mathcal{H}$. Abstract this to a graph structure in which two faces are neighbours if and only if they share a vertex. \item Determine the number $k$ of bipartite connected components of the obtained graph. \item The obtained graph has exactly two connected components, which can be seen by considering intersection numbers modulo 2. 
Hence, there are three cases: \begin{itemize} \item If $k=2$ then the homology polygon of $\mathcal{H}$ is a parallelogram by Lemma \ref{lem:fullDimerIsParallelogram} below, which is already known to admit an affine dimer by the ``double everything''-construction of Proposition \ref{prop:doubleEverything}. \item If $k=1$ then there is a choice of orientation for each line in $\mathcal{H}$ making the arrangement admissible. A further check reveals whether this is compatible with the given orientations. If not, a dimer for a different homology polygon has been found. See Figure \ref{figure:pseudoDimer} for an example of why this is necessary. \item If $k=0$ then the configuration is not admissible and this cannot be fixed by re-orienting the lines. \end{itemize} \end{enumerate} This algorithm has linear time and space complexity $\mathcal{O}(f)$ by Proposition \ref{prop:basicResults} (iv), where $f$ is the number of faces of $\mathcal{H}$. \begin{lemma} \label{lem:fullDimerIsParallelogram} If $k=2$ in the above algorithm, then the homology polygon $P$ of $\mathcal{H}$ is a parallelogram. \end{lemma} \begin{figure}[H]\centering {\includegraphics[width=5cm]{fig/tikz/Full_Dimer_1.pdf}} \hspace{1.5cm} {\includegraphics[width=5cm]{fig/tikz/Full_Dimer_2.pdf}} \caption{The two types of lines in the case $k=2$.} \label{figure:fullDimer} \end{figure} \begin{proof} Since the obtained graph is bipartite and $k=2$, each line $H\in\mathcal{H}$ is of one of the two types depicted in Figure \ref{figure:fullDimer}. Thus, no two lines of the same type intersect, so all lines of the same type are parallel (or antiparallel). Thus, there are at most 4 homology classes and so $P$ is a parallelogram. \end{proof} \begin{figure}[h]\centering {\includegraphics[width=12cm]{fig/tikz/Pseudo_dimer.pdf}} \caption{The admissible arrangement to the left appears in the moduli space of the red (right) homology polygon and has $k=1$ in the above algorithm. 
However, it represents the green (middle) homology polygon.} \label{figure:pseudoDimer} \end{figure} \subsection{Moduli space of line arrangements} Checking whether a convex lattice polygon $P$ admits an affine dimer is more difficult, as there are infinitely many line arrangements in general position realising $P$. However, only finitely many of them represent different combinatorial configurations. We consider two arrangements to be combinatorially the same if one of them can be obtained from the other by continuously translating some lines without ever creating a triple intersection point or two coinciding parallels. This notion is formalised by the moduli space $\mathcal{M}$ of line arrangements. \begin{lemma} \label{lemma:hyperplane} Let $\alpha\in\mathbb{Z}^n$, $c\in\mathbb{R}$, and $\hat{H}=\{x\in\mathbb{R}^n : \langle x,\alpha\rangle=c\}$ be a hyperplane. Let $q:\mathbb{R}^n\rightarrow \mathbb{T}^n$ be the quotient map and $H=q(\hat{H})$. Then $x+\mathbb{Z}^n\in H$ if and only if $\langle x,\alpha\rangle \in c + \text{\normalfont gcd}(\alpha)\mathbb{Z}$. \end{lemma} \begin{proof} $\mathbb{Z}$ is a Euclidean domain, so the ideal generated by the entries of $\alpha$ satisfies $(\alpha_1,\dots,\alpha_n)=\text{\normalfont gcd}(\alpha)\mathbb{Z}$. \end{proof} Therefore, given a primitive homology class $\alpha\in\mathbb{Z}^2$, the set of lines realising this homology class is parametrised uniquely by $c\in\mathbb{R}/\mathbb{Z}\cong\mathbb{T}$, since $\text{\normalfont gcd}(\alpha)=1$. Hence: \begin{definition} The \textit{moduli space} $\mathcal{M}$ of line arrangements on $\mathbb{T}^2$ consisting of $n$ lines with prescribed homology classes is topologically $\mathbb{T}^n$. 
More precisely, if the primitive side segments of $P$ are $h_1,\dots,h_n\in\mathbb{Z}^2$ and $\alpha_i$ is the clockwise rotation of $h_i$ by $\pi/2$ then the correspondence is \[ (c_1,\dots,c_n)+\mathbb{Z}^n \in\mathbb{T}^n\cong\mathcal{M} \hspace{5mm} \longleftrightarrow \hspace{5mm} \begin{bmatrix} \mathcal{H}=\{H_1,\dots,H_n\},\hspace{5mm} [H_i] = h_i,\\ H_i = \{x+\mathbb{Z}^2 : \langle x , \alpha_i\rangle \in c_i+ \mathbb{Z}\} \end{bmatrix}. \] \end{definition} Let $C\subset\mathcal{M}$ be the locus where $\mathcal{H}$ is not in general position. This happens either when three or more lines intersect in a point or when two (anti)parallel lines have the same parameter. For each pair $\{h_i,h_j\}$ with $i\neq j$ and $h_i \parallel h_j$, $H_i$ and $H_j$ coincide if and only if $c_i=c_j$ on $\mathbb{T}$. This happens if and only if $(c_1,\dots,c_n)$ lies on the hyperplane $C_{i,j}: X_i-X_j=0$ on $\mathcal{M}$. For each triple $\{h_i,h_j,h_k\}$ of pairwise non-(anti)parallel homology classes, the lines $H_i,H_j,H_k$ intersect in a common point $x\in\mathbb{T}^2$ if and only if \[ \langle x,\alpha_i\rangle \equiv c_i,\hspace{5mm} \langle x,\alpha_j\rangle \equiv c_j,\hspace{5mm}\text{and}\hspace{5mm} \langle x,\alpha_k\rangle \equiv c_k \mod \mathbb{Z}. \] This holds for some $x$ if and only if \[ (c_i,c_j,c_k)\in\text{Im}(A)+\mathbb{Z}^3 \text{ for the }3\times2 \text{ integer matrix } A=\begin{pmatrix} \leftarrow \alpha_i \rightarrow\\ \leftarrow \alpha_j \rightarrow \\ \leftarrow \alpha_k \rightarrow \\ \end{pmatrix}=:\begin{pmatrix} \uparrow & \uparrow \\ e_{ijk}&f_{ijk}\\ \downarrow&\downarrow \end{pmatrix}, \] or equivalently $\langle (c_i,c_j,c_k), e_{ijk}\wedge f_{ijk}\rangle\in\text{\normalfont gcd}(e_{ijk}\wedge f_{ijk})\mathbb{Z}$ by Lemma \ref{lemma:hyperplane}. This defines a hyperplane $D_{i,j,k}\subset\mathbb{T}^n\cong\mathcal{M}$. 
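This membership test is straightforward to evaluate numerically. The following Python sketch (helper names are ours, not taken from our Java implementation) checks the criterion $\langle (c_i,c_j,c_k), e_{ijk}\wedge f_{ijk}\rangle\in\text{\normalfont gcd}(e_{ijk}\wedge f_{ijk})\mathbb{Z}$ for a triple of pairwise non-(anti)parallel lines:

```python
from math import gcd, isclose

def on_hyperplane_D(alphas, cs, tol=1e-9):
    """Whether the three lines with normal covectors alphas = [a_i, a_j, a_k]
    (pairwise non-(anti)parallel, the rows of the 3x2 matrix A) and parameters
    cs = (c_i, c_j, c_k) meet in a common point on T^2."""
    e = [a[0] for a in alphas]          # first column of A
    f = [a[1] for a in alphas]          # second column of A
    nu = (e[1]*f[2] - e[2]*f[1],        # nu = e ∧ f, normal to Im(A)
          e[2]*f[0] - e[0]*f[2],
          e[0]*f[1] - e[1]*f[0])
    g = gcd(gcd(abs(nu[0]), abs(nu[1])), abs(nu[2]))
    s = sum(c * n for c, n in zip(cs, nu))
    # Criterion from the hyperplane lemma: <c, nu> ∈ gcd(nu)·Z.
    return isclose(s / g, round(s / g), abs_tol=tol)

# Lines y ≡ 1/4, x ≡ 1/2 and x + y ≡ 3/4 meet at (1/2, 1/4) on T^2:
print(on_hyperplane_D([(0, 1), (1, 0), (1, 1)], (0.25, 0.5, 0.75)))  # True
print(on_hyperplane_D([(0, 1), (1, 0), (1, 1)], (0.25, 0.5, 0.40)))  # False
```

Here $\nu=e_{ijk}\wedge f_{ijk}$ is nonzero because any two of the three covectors are linearly independent, so $A$ has rank 2.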
Thus, the locus $C\subset\mathcal{M}$ of degenerate arrangements is given by the hyperplane arrangement \[ C:=\left(\bigcup_{\substack{h_i\parallel h_j\\i\neq j}} C_{i,j}\right)\cup\left(\bigcup_{\substack{h_i,h_j,h_k\text{ pairwise}\\\text{non-(anti)parallel}}} D_{i,j,k}\right) \subset\mathcal{M}. \] To determine if $P$ admits an affine dimer, one therefore needs to apply the algorithm of the previous subsection to one arrangement $(c_1,\dots,c_n)$ of each connected component of the complement $\mathcal{M}\setminus C$. It remains to enumerate the connected components of $\mathcal{M}\setminus C$. Note also that the coordinates of $e_{ijk}\wedge f_{ijk}$ in the definition of $D_{i,j,k}$ are of the form $\det(h_{i_1},h_{i_2})$. Thus, the structure $C\subset\mathcal{M}$ only depends on the $GL_2(\mathbb{Z})$ equivalence class of the homology polygon $P$. \subsubsection{Cell decomposition approach} One way of enumerating the components of $\mathcal{M}\setminus C$ is as follows. \begin{enumerate} \item Lift each constituting hyperplane $H=D_{i,j,k}\text{ or }C_{i,j}$ of $C$ to several hyperplanes $\{H_1,\dots,H_l\}$ in $\mathbb{R}^n$ such that $H=[0,1]^n\cap\{H_1,\dots,H_l\}\big/\mathbb{Z}^n$. (For $H=C_{i,j}$ we have $l=1$. For $H=D_{i,j,k}$ with normal vector $\nu:=e_{ijk}\wedge f_{ijk}$ we have $l\le 1+\sqrt{3}\left\Vert \nu\right\Vert / \text{\normalfont gcd}(\nu)$.) Thus, we obtain a (finite) hyperplane arrangement $\hat{C}\subseteq\mathbb{R}^n$ such that $q(\hat{C})=C$. \item Use a cell decomposition algorithm to find the connected components of $\mathbb{R}^n\setminus\hat{C}$. \item Pick one point in each cell and check if the corresponding configuration is admissible. 
\end{enumerate} We implemented this using \texttt{polymake} \cite{polymake2020}, but the resulting algorithm was not efficient enough to deliver results for some lattice polygons $P$ of interest (such as the conjectured counterexamples of Forsgård for $k=3,4$ (\cite{forsgard2016dimer}, Section 4)). This is no surprise, since $C$ consists of $\mathcal{O}(n^3)$ hyperplanes, each lifting to an arbitrarily large number of hyperplanes in $\hat{C}$. For a hyperplane arrangement in general position on $\mathbb{R}^n$ the number of components of the complement is a degree $n$ polynomial in the number of hyperplanes. Thus, the number of cells could be $\mathcal{O}(n^{3n})$ or even higher, depending on the number of lifts when constructing $\hat{C}$ from $C$. It might be possible to optimise this using properties of $\mathbb{T}^n$ or the fact that we are only working on $[0,1]^n$. \subsubsection{Mesh approach} The cell decomposition algorithm of \texttt{polymake} gives us a proper cell decomposition, whereas we only really need one point in each component of $\mathcal{M}\setminus C$. There is a smallest constant $m(C)\in\mathbb{N}_{>0}$ such that every component of $\mathcal{M}\setminus C$ contains a point of $q\left(m(C)^{-1}\mathbb{Z}^n\right)$. Thus, we are able to finish by checking $\mathcal{O}\left(m(C)^n\right)$ configurations and without calculating any cell decompositions. However, even when the arrangement consists of just two lines on $\mathbb{T}^2$, there are configurations of arbitrarily large $m(C)$. E.g., take lines of homology class $(1,0)$ and $(1,l)$ with $l\rightarrow\infty$. This configuration has $l$ faces, so $m(C)\ge l$. Note also that this cannot be cured by applying a smart choice of $A\in GL_2(\mathbb{Z})$ to $C$ since the number of faces is invariant. 
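To make this growth concrete, recall from step 1 of the checking algorithm that two lines with homology classes $h_1,h_2$ intersect in $|\det(h_1,h_2)|$ points. A minimal Python sketch (the helper name is ours) applied to the two-line configuration above:

```python
def num_intersections(h1, h2):
    """Number of intersection points on T^2 of two lines in general position
    with homology classes h1, h2 (step 1 of the checking algorithm)."""
    return abs(h1[0] * h2[1] - h1[1] * h2[0])

# Classes (1,0) and (1,l): l intersection points, hence l faces in the
# two-line configuration, forcing the mesh constant m(C) >= l.
for l in (1, 5, 50):
    print(num_intersections((1, 0), (1, l)))  # prints 1, 5, 50
```

Since $|\det|$ is invariant under $GL_2(\mathbb{Z})$, no change of basis reduces this count, in line with the remark above.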
\subsubsection{Reducing dimension by two} In both approaches described above, we may reduce the dimension by two by restricting our attention to the subtorus $\mathbb{T}^{n-2}\subset\mathcal{M}$ with $c_1\equiv c_2\equiv 0\mod\mathbb{Z}$. This corresponds to translating all arrangements so that $H_1,H_2$ intersect at the bottom left corner of the fundamental parallelogram, where $h_1\not\parallel h_2$ without loss of generality. \subsection{Randomized search \& admissible volume} \label{sec:admissibleVolume} We may choose random vectors $(c_1,\dots,c_n)\in\mathcal{M}$ and check each configuration until we find an affine dimer, or stop after a certain number of trials. This approach led to the discovery of many non-trivial affine dimers of genus 1 and 2, presented in Section \ref{sec:genus0and1}. \begin{definition} Let $\mathcal{A}\subset\mathcal{M}\setminus C$ be the locus of admissible oriented line arrangements. The \textit{admissible volume} of $P$ is $\text{\normalfont vol}(\mathcal{A})$. \end{definition} Note that $\text{\normalfont vol}(\mathcal{A})$ only depends on the $GL_2(\mathbb{Z})$ equivalence class of the homology polygon $P$, since $C$ is $GL_2(\mathbb{Z})$ invariant. Thus, it makes sense to talk about the admissible volume of a $GL_2(\mathbb{Z})$ equivalence class of convex lattice polygons. Using this randomized approach it is possible to estimate the admissible volume of $P$. For some homology polygons in Section \ref{sec:genus0and1} we had $\text{\normalfont vol}(\mathcal{A})<0.01$, making it highly unlikely that we could have found their affine dimers by hand, and justifying our computational approach. Furthermore, if $\mathbb{T}^{n-2}\subset\mathcal{M}$ denotes the subtorus with $c_1\equiv c_2\equiv 0\mod\mathbb{Z}$, then $\text{\normalfont vol}_{\mathbb{T}^n}(\mathcal{A})=\text{\normalfont vol}_{\mathbb{T}^{n-2}}(\mathcal{A}\cap\mathbb{T}^{n-2})$. 
This is because the $(n-2)$-dimensional fibers are isomorphic via global translation of the arrangements, and translation is an isometry on flat tori. Thus, we may speed up the estimation of $\text{\normalfont vol}(\mathcal{A})$ (and thus our search for affine dimers) by reducing the dimension by two. Finding bounds for $\text{\normalfont vol}(\mathcal{A})$ in terms of $P$ might allow us to answer Question \ref{question:latticeEquivForm} for bigger classes of polygons. For example, we have the following result for parallelograms, which might be generalised in the future. \begin{proposition} \label{prop:parallelogramAdmissibleVol} Let $P$ be an $a\times b$ lattice parallelogram with $n=2a+2b$ primitive side segments. Then \[ \text{\normalfont vol}(\mathcal{A})=4 \begin{pmatrix} 2a\\a \end{pmatrix}^{-1} \begin{pmatrix} 2b\\b \end{pmatrix}^{-1}. \] \end{proposition} \begin{proof} We say that $P$ has $2a$ horizontal and $2b$ vertical side segments, $a$ (respectively $b$) of each orientation. An arrangement $(c_1,\dots,c_n)\in\mathcal{M}\setminus C$ is admissible if and only if both the vertical lines and the horizontal lines have alternating orientations. There are $\begin{pmatrix}2a \\ a\end{pmatrix}$ orderings of orientations for the horizontal lines, exactly two of which are alternating (and similarly for the vertical lines). The result follows since all orderings of the horizontal (respectively vertical) lines are equally likely, and the horizontal and vertical orderings are independent. \end{proof} \section{Triangles and Genus \boldmath$\le 2$} \label{sec:genus0and1} We now apply the constructions of Section \ref{sec:new_dimers_from_old} and the algorithms of Section \ref{sec:algorithms} to exhibit various families of convex lattice polygons that admit an affine dimer. 
We also record some estimates of their admissible volumes (see Section \ref{sec:admissibleVolume}), indicating how hard it would be to find their affine dimers by hand without our constructions and algorithms. \subsection{Triangles} We begin with an application of the lifting construction in Section \ref{sec:lifting}. \begin{proposition} \label{prop:triangles} Let $P$ be a lattice triangle (possibly with more than three primitive side segments). Then $P$ admits an affine dimer. Moreover, it admits an affine dimer that is lifted from the one in Figure \ref{figure:genusExample}. \end{proposition} \begin{proof} Let the triangle $P$ be spanned by the vectors $(a,b), (c,d)\in\mathbb{Z}^2$, not necessarily primitive as we allow more than three primitive side segments. Then an admissible oriented line configuration with homology polygon $P$ is obtained by applying the matrix $B=\begin{pmatrix} a&c\\b&d \end{pmatrix}$ to the triangle spanned by $(1,0)$ and $(0,1)$ using Corollary \ref{cor:applyLinear}. \end{proof} \subsection{Genus Zero} \begin{figure}[H]\centering \raisebox{0\height}{\includegraphics[width=10cm]{fig/tikz/Genus_0.pdf}} \caption{The three families of equivalence classes of convex lattice polygons with no interior lattice points, where $a,b,c$ are positive integers \cite{genus0polygons}.} \label{figure:genus0classes} \end{figure} \begin{proposition} \label{prop:genus0} Let $P$ be a convex lattice polygon with no interior lattice points. Then $P$ admits an affine dimer. \end{proposition} \begin{proof} By \cite{genus0polygons} the equivalence classes of convex lattice polygons with no interior lattice points are those displayed in Figure \ref{figure:genus0classes}. By Proposition \ref{prop:triangles}, the triangles all admit an affine dimer. For the trapezoid given by $b,c\in\mathbb{Z}_{>0}$, there are two cases. If $b=c$ then $P$ consists of pairwise antiparallel edges, so admits an affine dimer by Proposition \ref{prop:doubleEverything}. 
If $b\neq c$ and without loss of generality $b>c$, then $P$ is obtained from the triangle in Figure \ref{figure:genus0classes} with $a=c$ by adding $b-c$ pairs of antiparallel edges parallel to $(1,0)$. Thus, $P$ admits an affine dimer by Proposition \ref{prop:triangles} and Proposition \ref{prop:addParallelEdges}. \end{proof} Even for the genus 0 triangles the admissible volume decays quickly: \begin{proposition} Let $P$ be the $a\times 1$ triangle in Figure \ref{figure:genus0classes}. Then the admissible volume of $P$ is \[ \text{\normalfont vol}(\mathcal{A})=a! / a^a. \] \end{proposition} \begin{proof} Let $H_1,\dots,H_a$ be the lines of homology class $(1,0)$ and let $V$ and $S$ be the lines of homology class $(0,-1)$ and $(-a,1)$, respectively. Then $S$ subdivides $V$ into $a$ segments $s_1,\dots,s_a$ of equal length. Since the arrangement is in general position, there is a function $f:\{1,\dots,a\}\rightarrow\{1,\dots,a\}$ such that $H_i$ intersects $V$ in the segment $s_{f(i)}$. By inspection, the arrangement is admissible if and only if $f$ is injective. Since every $f$ is equally likely for $(c_1,\dots,c_{a+2})\in\mathcal{M}\setminus C$ chosen uniformly at random, we have \[ \text{\normalfont vol}(\mathcal{A})=\mathbb{P}(f \text{ is injective}) = a! / a^a. \] \end{proof} \subsection{Genus One} \begin{figure}[H]\centering \includegraphics[width=14cm]{fig/tikz/Genus_1_easy_cases.pdf} \caption{Equivalence class representatives of convex lattice polygons with one interior point that are triangles (top), consist of pairwise antiparallel side segments (bottom left), or have a pair of antiparallel side segments which are parallel to at least one other side segment (bottom right). The blue numbers are estimates ($\ge2\cdot 10^4$ trials) of the admissible volume of the polygon, indicating how hard it would be to find affine dimers by hand. 
The two bottom left numbers are exact by Proposition \ref{prop:parallelogramAdmissibleVol}.} \label{figure:genus1easyClasses} \end{figure} \begin{proposition} \label{prop:genus1} Let $P$ be a convex lattice polygon with exactly one interior lattice point. Then $P$ admits an affine dimer. \end{proposition} \begin{proof} There are 16 equivalence classes of convex lattice polygons with exactly one interior point \cite{poonenLattice12}, \cite{genus0polygons}. As seen in Figure \ref{figure:genus1easyClasses}, four of them are triangles, which admit an affine dimer by Proposition \ref{prop:triangles}. Three of them consist of pairwise antiparallel side segments, so admit an affine dimer by Proposition \ref{prop:doubleEverything}. Another three of them are obtained by adding a pair of antiparallel side segments $(\pm1, 0)$ to convex lattice polygons that contain $(1,0)$ as a side segment and, having no interior points, are already known to admit an affine dimer. These admit an affine dimer by Proposition \ref{prop:addParallelEdges}. There are five equivalence classes left, whose affine dimers are displayed in Figures \ref{figure:genus1checkedCases} and \ref{figure:genus1checkedCases2}. \end{proof} \begin{figure}[H]\centering \includegraphics[width=4.5cm]{fig/tikz/Genus_1_check_1.pdf} \hspace{1cm} \includegraphics[width=4.5cm]{fig/tikz/Genus_1_check_2.pdf} \hspace{1cm} \includegraphics[width=4.5cm]{fig/tikz/Genus_1_check_3.pdf} \caption{The five remaining equivalence classes of convex lattice polygons with one interior lattice point and affine dimers for them (continued in Figure \ref{figure:genus1checkedCases2}).
The blue numbers are estimates ($\ge2\cdot 10^4$ trials) of the admissible volume.} \label{figure:genus1checkedCases} \end{figure} \begin{figure}[H]\centering \includegraphics[width=4.5cm]{fig/tikz/Genus_1_check_4.pdf} \hspace{1.4cm} \includegraphics[width=4.5cm]{fig/tikz/Genus_1_check_5.pdf} \caption{Continuation of Figure \ref{figure:genus1checkedCases}.} \label{figure:genus1checkedCases2} \end{figure} \subsection{Genus Two} The same holds for two interior points. \begin{proposition} \label{prop:genus2} Let $P$ be a convex lattice polygon with exactly two interior lattice points. Then $P$ admits an affine dimer. \end{proposition} \begin{figure}[h]\centering \includegraphics[width=5.5cm]{fig/tikz/Genus_2_check_hexagon.pdf} \caption{The only genus 2 hexagon whose dimer cannot be constructed from a lower genus dimer using the constructions of Section \ref{sec:new_dimers_from_old}. The blue number is an estimate ($10^5$ trials) of the admissible volume.} \label{figure:genus2example} \end{figure} \begin{proof} A classification up to equivalence of convex lattice polygons with two interior lattice points is provided by \cite{genus2polygons}. There are five classes of triangles, all of which admit an affine dimer by Proposition \ref{prop:triangles}. There are 19 classes of quadrilaterals. Three of them are parallelograms and thus admit an affine dimer by Proposition \ref{prop:addParallelEdges}. Six of them are obtained by adding a pair of antiparallel edges parallel to an existing edge to a convex lattice polygon of genus 0 or 1, and therefore admit an affine dimer by Propositions \ref{prop:genus0}, \ref{prop:genus1}, and \ref{prop:addParallelEdges}. The other 10 classes of quadrilaterals were checked manually to admit an affine dimer (see below). Similarly, all sixteen classes of pentagons and five classes of hexagons admit an affine dimer (see Figure \ref{figure:genus2example} for an example). 
There are no convex lattice $n$-gons with two interior lattice points and $n>6$. The 19 classes that required computer-aided verification can be found online at \url{https://jeffhicks.net/files/DHolmesSupplemental.pdf}. \end{proof} This completes the proof of Theorem \ref{thm:mainResult}. \section*{Acknowledgments} The author would like to thank the \textit{London Mathematical Society} and the \textit{Department of Pure Mathematics and Mathematical Statistics, University of Cambridge} for their financial support in the form of an Undergraduate Research Bursary, and Jeff Hicks for suggesting and mentoring this project and for providing invaluable feedback and many fruitful discussions. Finally, the author would like to thank the referees for their helpful feedback, which contributed multiple improvements to this paper.
https://arxiv.org/abs/2110.01703
Affine dimers from characteristic polygons
Recent work by Forsgård indicates that not every convex lattice polygon arises as the characteristic polygon of an affine dimer or, equivalently, an admissible oriented line arrangement on the torus in general position. We begin the classification of convex lattice polygons arising as characteristic polygons of affine dimers. We present several general constructions of new affine dimers from old, and an algorithm for finding affine dimers with prescribed polygon. With these tools we prove that all lattice triangles, generalised parallelograms, and polygons of genus at most two admit an affine dimer.
https://arxiv.org/abs/2012.08731
The random walk on upper triangular matrices over $\mathbb{Z}/m \mathbb{Z}$
We study a natural random walk on the $n \times n$ upper triangular matrices, with entries in $\mathbb{Z}/m \mathbb{Z}$, generated by steps which add or subtract a uniformly random row to the row above. We show that the mixing time of this random walk is $O(m^2n \log n+ n^2 m^{o(1)})$. This answers a question of Stong and of Arias-Castro, Diaconis, and Stanley.
\section{Introduction} Let $n \geq 2$ and $m \geq 2$ be two integers, and let $G_n(m)$ denote the group of $n\times n$ upper triangular matrices with entries in $\mathbb{Z}/m \mathbb{Z}$ and ones along the diagonal. We number the rows of each matrix in $G_n(m)$ from top to bottom. We consider the following Markov chain $(X_t)_{t\geq0}$ on $G_{n}(m)$: $X_t$ is derived from $X_{t-1}$ by picking a row $i \in \{2,\ldots, n\}$ uniformly at random and with probability $1/4$ adding it to row $i-1$, with probability $1/4$ subtracting it from row $i-1$, and otherwise staying fixed. Let $P^t_A(B)$ be the probability that $X_t=B$ given that $X_0=A$. The walk is stationary with respect to the uniform distribution $U$ on $G_n(m)$, and our main result studies the mixing time of the walk, defined as \begin{equation*} t_{mix}(\varepsilon)= \inf \lbrace t\geq 0 : d_n(t) \leq \varepsilon \rbrace, \end{equation*} where \begin{equation*} d_n(t) = \max_{A \in G_n(m)} \Vert P^t_{A}-U \Vert_{T.V.} = \frac{1}{2} \max_{A \in G_n(m)} \big \lbrace \sum_{B \in G_n(m)} \vert P^t_{A}(B) - U(B) \vert \big \rbrace \end{equation*} is the total variation distance of $P^t_{A}$ from $U$. Our main theorem determines the mixing time of the random walk $X_t$. \begin{theorem}\label{main} For $m$ prime, there are universal, positive constants $A,B$ and $D$, such that for $t_{n,m}= D (m^2 n \log n + n^2 e^{9 \sqrt{ \log m}}) + cnm^2 \log \log n ,$ we have that $$\ d_n(t_{n,m}) \leq B \left( e^{-Ac} + \frac{1}{\sqrt{t_{n,m}}} \right),$$ for $c>2$. \end{theorem} For the case where $m$ is not prime, we are missing one of the main tools, namely Theorem 3 of \cite{Hough}. We are still able to prove a similar upper bound for the mixing time, which is slightly weaker than that of Theorem \ref{main}.
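As a concrete illustration of the dynamics just defined (our own sketch, not code from the paper; the names \texttt{identity} and \texttt{step} are ours), one step of the discrete-time walk can be implemented directly, storing matrices as lists of rows:

```python
import random

def identity(n):
    """Identity element of G_n(m), stored as a list of rows."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def step(X, n, m):
    """One step of the lazy walk: pick a row i in {2,...,n} uniformly
    (0-indexed here as i in {1,...,n-1}); with probability 1/4 add it to
    the row above, with probability 1/4 subtract it, otherwise do nothing.
    All arithmetic is mod m."""
    i = random.randrange(1, n)
    u = random.random()
    if u < 0.5:
        sign = 1 if u < 0.25 else -1
        X[i - 1] = [(a + sign * b) % m for a, b in zip(X[i - 1], X[i])]
    return X

# The walk stays inside G_n(m): upper triangular with ones on the diagonal,
# since the row being added has zeros in all columns to the left of its 1.
X = identity(4)
for _ in range(1000):
    X = step(X, 4, 5)
assert all(X[i][i] == 1 for i in range(4))
assert all(X[i][j] == 0 for i in range(4) for j in range(i))
```

The invariance check at the end reflects that row $i$ has zeros in columns $1,\ldots,i-1$, so adding it to row $i-1$ never disturbs the diagonal or the entries below it.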
\begin{theorem}\label{maing} There are universal, positive constants $A,B,C$ and $D$, such that for $t_{n,m}= D (m^2 n \log n + n^2 e^{C (\log m)^{2/3}}) + cnm^2 \log \log n ,$ we have that $$\ d_n(t_{n,m}) \leq B \left( e^{-Ac} + \frac{1}{\sqrt{t_{n,m}}} \right),$$ for $c>2$. \end{theorem} This process has a long history. The case $n=3$ was first introduced by Zack~\cite{Zack}, and Diaconis and Saloff-Coste proved that order $m^2$ steps are necessary and sufficient for convergence to uniform~\cite{Laurent, Har, Moderate}. This result was later proven again by Bump, Diaconis, Hicks, Miclo and Widom using Fourier analysis~\cite{Hicks}. For $n$ growing, a first upper bound of order $n^7$ was obtained by Ellenberg~\cite{Ellenberg}, which was later improved by Stong to $n^3 m^2 \log m $ \cite{Stong}. The case where $m=2$ was treated by Peres and the second author~\cite{PeresSly}, who proved that the mixing time is $O(n^2)$. Arias-Castro, Diaconis and Stanley \cite{StanleyD} used super-character theory, introduced by Andre \cite{Andre3}, \cite{Andre}, \cite{Andre2} and Yan \cite{Yan}, \cite{Yan2}, to bound the spectrum of the random walk. This results in a bound for the mixing time of order $n^4 m^4 \log n $, for the case where $m$ is prime. The first author~\cite{EN} improved their analysis to $n^4 m^2$, which gives the correct order of the mixing time in $m$, but not in $n$. More recently, some other features of this walk have been studied. Diaconis and Hough \cite{Hough} studied how many steps an element on the $i$--th diagonal of the matrix needs to mix. We state their result in detail in Section \ref{prime}. The projection onto the final column of the matrix is itself a well-known Markov chain, the East model. Ganguly, Lubetzky and Martinelli~\cite{GLM:15} proved that the East model exhibits cutoff, and later Ganguly and Martinelli~\cite{GM} extended this to the last $k$ columns of the upper triangular matrix walk.
Recently, Hermon and Thomas \cite{HermonThomas} considered a different question concerning $G_n(m)$. They sample $k$ generators uniformly at random and prove cutoff for the case where $k$ grows with $\vert G_n(m) \vert$. Our strategy is to study how fast the first row mixes and proceed by induction on $n$. We do so by analysing the first row as a random sum of the second row at random times. It is important to understand the values that the second row takes in order to understand how well mixed this random sum becomes. This is easier to do in the case where $m$ is prime, thanks to the work of Diaconis and Hough \cite{Hough}. \section{Preliminaries}\label{prem} For the remainder of the paper we study instead the continuous version of the random walk. For each $ i \in \{ 2,\ldots, n \}$, we consider a rate $1$ Poisson clock, and when the $i$--th clock rings, we either add row $i$ to row $i-1$ or subtract it, each event happening with probability $1/2$. Our main result for this walk is: \begin{theorem}\label{mainc} For $m$ prime, there are universal, positive constants $A, B$ and $D$, such that for $t_{n,m}= D(m^2 \log n + n e^{9 \sqrt{ \log m}} ) + cm^2 \log \log n,$ we have that $$ d_n(t_{n,m}) \leq B e^{-Ac},$$ for $c>2$. \end{theorem} Similarly, we have the following theorem for the continuous time random walk. \begin{theorem}\label{maingc} There are universal, positive constants $A,B,C$ and $D$, such that for $t_{n,m}= D (m^2 \log n + n e^{C (\log m)^{2/3}}) + cm^2 \log \log n ,$ we have that $$\ d_n(t_{n,m}) \leq B e^{-Ac} ,$$ for $c>2$. \end{theorem} Theorems \ref{mainc} and \ref{maingc} imply Theorems \ref{main} and \ref{maing} respectively, because of Theorem $20.3$ of \cite{Peresbook} and equation (3.4) of \cite{R}. \subsection{The induction lemma} The goal of this section is to prove an inequality that relates $d_n(t) $ to $d_{n-1}(t)$, thus allowing us to prove Theorem \ref{maingc} inductively.
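A minimal simulation of this continuized walk (our own sketch, not code from the paper; \texttt{run\_continuized} is our name) uses the superposition of the $n-1$ rate-$1$ clocks:

```python
import random

def run_continuized(n, m, T, rng):
    """Simulate the continuous-time walk, started from the identity, up to
    time T.  Each row i in {2,...,n} carries a rate-1 Poisson clock, so by
    superposition the waiting time until the next ring is Exp(n-1) and the
    ringing row is uniform; when row i rings it is added to or subtracted
    from row i-1 with probability 1/2 each (all arithmetic mod m)."""
    X = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    t = rng.expovariate(n - 1)
    while t <= T:
        i = rng.randrange(1, n)                  # 0-indexed ringing row
        sign = 1 if rng.random() < 0.5 else -1
        X[i - 1] = [(a + sign * b) % m for a, b in zip(X[i - 1], X[i])]
        t += rng.expovariate(n - 1)
    return X
```

In one unit of continuous time each clock rings once on average, so about $n-1$ row operations occur per unit time; this accounts for the factor of $n$ between the discrete-time bounds of Theorems \ref{main} and \ref{maing} and the continuous-time bounds of Theorems \ref{mainc} and \ref{maingc}.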
We break $X_t$ into two parts: let $r_t$ be the $n \times n$ matrix that has the same first row as $X_t$ and every other entry zero, and let $Y_t= X_t-r_t$. Let $t_1, t_2, \ldots $ be the times that the clock of the second row rings. Let $N(t)= \max \{ j \geq 0 : t_j \leq t\}$. We have that \begin{equation}\label{separating} X_t= Y_t + \sum_{j=1}^{N(t)} a_j E(1,2) Y_{t_j} , \end{equation} where the $a_j \in \{-1,1\}$, each value occurring with probability $1/2$. Equation~\eqref{separating} will allow us to separate the mixing time of the first row from the mixing time of the rest of the matrix. The main idea is to prove a bound for the $\ell^2$ distance between $r_t$ and the uniform measure on $(\mathbb{Z}/m \mathbb{Z})^{n-1}$, by studying the spectrum of the transition matrix of $r_t$. These eigenvalues will be indexed by vectors $y \in (\mathbb{Z}/m \mathbb{Z})^{n}$ whose first coordinate is zero. For a nonzero such $y$, let $Z_y^t(i)= X_t(i) y $, where $X_t(i) $ is the $i$--th row of $X_t$. From now on, we will write that $y \in (\mathbb{Z}/m \mathbb{Z})^{n-1}$, though we actually mean that $y$ has $n$ coordinates, the first one of which is zero. The $\ell^2$ distance between $r_t$ and its stationary measure at time $t_{n,m}$ is given in terms of $\{Z^s_y(2) \}$ with $y \in (\mathbb{Z}/m \mathbb{Z})^{n-1} $ and $ s \in \{ t_1, \ldots, t_{N(t_{n,m})}\}$. We prove that most of these values are conveniently large. For $a,b \in \mathbb{Z}/m \mathbb{Z}$, we use the notation $|a| >b$ to mean that $a \in \{b+1, \ldots, m-b-1\}$. Let $P^t$ be the indicator function that the clock of the second row rings at time $t$. Let $A^t_{y,x}= \int_0^t 1_{\lbrace P^s=1 \rbrace } 1_{\lbrace \vert Z_y^s(2) \vert > x\rbrace } ds$ count the number of times $s$ that the second row clock rings and $\vert Z_y^s(2) \vert > x$.
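The first-row content of \eqref{separating} can be sanity-checked in a short simulation (our own illustration, in discrete time and started from the identity, not code from the paper): the first row of $X_t$ equals $e_1$ plus the signed sum, mod $m$, of the second row at each time it was added to the first.

```python
import random

def walk_with_row2_log(n, m, steps, rng):
    """Run the discrete walk from the identity, recording (a_j, second row)
    each time row 2 acts on row 1.  The second row is unchanged by that
    move, so logging it just before the update is harmless."""
    X = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    log = []
    for _ in range(steps):
        i = rng.randrange(1, n)
        u = rng.random()
        if u >= 0.5:
            continue                         # lazy step: nothing happens
        sign = 1 if u < 0.25 else -1
        if i == 1:                           # row 2 added to / subtracted from row 1
            log.append((sign, list(X[1])))
        X[i - 1] = [(a + sign * b) % m for a, b in zip(X[i - 1], X[i])]
    return X, log

rng = random.Random(1)
n, m = 5, 7
X, log = walk_with_row2_log(n, m, 2000, rng)
first_row = [1] + [0] * (n - 1)              # e_1
for sign, row2 in log:
    first_row = [(a + sign * b) % m for a, b in zip(first_row, row2)]
assert first_row == X[0]                     # matches the decomposition
```

The check succeeds for every seed, since updates to the first row are additive and commute with one another.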
Let $I \in \{2,\ldots, n \}$ and let $P_2=\langle e_1, e_2 \rangle \setminus \langle e_1 \rangle $, $Q_I= \langle e_1, \ldots, e_{I-1} \rangle \setminus \langle e_1, e_2 \rangle $ and $W_I= (\mathbb{Z}/m \mathbb{Z})^{n-1} \setminus \langle e_1, \ldots, e_{I-1} \rangle$. Let $A$ be an appropriately chosen constant, which does not depend on $n$ and $m$, and let $E_{t,I,x,w}$ be the event that $A_{y,x}^t \geq A^{-1}t$ for every $y \in W_I$, $A_{y,w}^t \geq A^{-1}t$ for every $y \in Q_I$ and $A_{y,m/8}^t \geq A^{-1}t$ for every $y \in P_2$. The following lemma will help us prove Theorem \ref{maing} inductively. Let $\mathcal{F}_t$ be the $\sigma$-algebra generated by all of the updates except the random signs used when adding the second row to the first. In particular, $\mathcal{F}_t$ contains all the information from rows 2 to $n$, as well as the times at which row 2 is added to row 1. \begin{lemma}\label{first} Let $q_{t}$ be the conditional distribution of $r_t$ at time $t$ given $\mathcal{F}_t$. Then $$d_{n}(t) \leq d_{n-1}(t) + \mathbb{E} [\Vert q_{t}-u \Vert_{T.V.}],$$ where $u$ is the uniform measure on $(\mathbb{Z}/ m \mathbb{Z})^{n-1}$. \end{lemma} \begin{proof} Let $X$ be a uniformly random element of $G_{n}(m)$. We can couple $Y_t$ and the last $n-1$ rows of $X$ except with probability $d_{n-1}(t)$. Moreover, this coupling can be made $\mathcal{F}_t$-measurable. Conditional on $\mathcal{F}_t$, we can then couple the first row except with probability $\Vert q_{t}-u \Vert_{T.V.}$. The lemma then follows by averaging. \end{proof} We use Lemma \ref{first} to prove Theorems \ref{mainc} and \ref{maingc} by induction. In particular, we prove the following lemma. \begin{lemma}\label{q} Let $t=t_{n,m}$. There exist positive constants $a, b, f,g$ such that for any $n$ we can find $x,w$ and $I$ such that on the event $E_{t,I,x,w}$ $$ \mathbb{E}[\Vert q_{t}-u \Vert_{T.V.}] \leq a n^{-1}(\log n)^{-c},$$ where $c>2$ is the constant from Theorems \ref{mainc} and \ref{maingc}.
Furthermore, $$ \pr{E_{t,I,x,w} ^c} \leq b e^{-fc} \left( m^{-gn} + n^{-gm} \right). $$ \end{lemma} We now show how to use Lemma \ref{q} to prove Theorems \ref{mainc} and \ref{maingc}. \begin{proof}[Proof of Theorems \ref{mainc} and \ref{maingc}.] The proof for the case $n=3$ can be found in \cite{Laurent}. Combining Lemmas \ref{first} and \ref{q} with induction, we get that \begin{align*} d_n(t_{n,m}) & \leq B e^{-dc} + \sum_{i=4}^n \left( \frac{1}{i(\log i)^c} + b e^{-fc} \left( m^{-gi} + i^{-gm} \right) \right) , \end{align*} where the term $B e^{-dc}$ comes from the case $n=3$. Therefore, \begin{align*} d_n(t_{n,m}) & \leq B e^{-dc} + \tilde{b} e^{-fc}+ \int_{3}^{\infty} \frac{1}{x(\log x)^c}dx\\ & \leq B e^{-dc}+ \tilde{b} e^{-fc}+ \frac{(\log 3)^{1-c}}{c-1}, \end{align*} which completes the proof of Theorems \ref{mainc} and \ref{maingc} for $c>2$. \end{proof} \subsection{The $\ell^2$ bound.} The goal of this section is to establish an inequality, which will be used to bound $\mathbb{E}[\Vert q_{t}-u \Vert_{T.V.}]$. Let $N(t)$ be the number of times that the second clock has rung by time $t$. For $k \in \mathbb{N}$ and $\underline{w}=(w_1, \ldots, w_k)$, with $w_i \in ( \mathbb{Z}/m \mathbb{Z})^{(n-1)}$, let $G_{k,\underline{w}}$ be the event that $N(t)=k$ and that the second row $X_t(2)$ is equal to $w_j$ at the time of the $j$-th ring for $j=1,\ldots,k$. Let $q_{\substack{k,\underline{w}}}$ be the distribution of $r_t$, conditioned on $G_{k,\underline{w}}$. Then we have that \begin{align} \mathbb{E}[\Vert q_{t}-u \Vert_{T.V.}] = \label{l} \sum_{\substack{k, \underline{w}}}\Vert q_{\substack{k,\underline{w}}}-u \Vert_{T.V.} \prcond{G_{k,\underline{w}}}{\mathcal{F}_t}. \end{align} Each $q_{\substack{k,\underline{w}}}$ has the same orthonormal eigenbasis as the simple random walk on $ \mathbb{Z}/m \mathbb{Z}$, despite the fact that at each step we are adding/subtracting a different quantity.
The corresponding eigenvalues are $e^{-2(k- \sum_{s=1}^k \lambda_{\substack{y, w_s}})},$ where the $ \lambda_{\substack{y, w_s} }= \cos \frac{2 \pi \langle y, w_s \rangle}{m}$ are the eigenvalues of the discrete time Markov chain on $ ( \mathbb{Z}/m \mathbb{Z})^{n-1}$ that adds or subtracts $w_s$ to the current state with probability $1/2$. Then we use the classical $\ell^2 $ bound, \begin{align}\label{y} &4 \Vert q_{\substack{k, \underline{w}}}-u \Vert_{T.V.}^2 \leq \sum_{ y \in (\mathbb{Z}/m \mathbb{Z})^{n-1} \setminus \{ \textbf{0} \} } e^{-2(k- \sum_{s=1}^k \lambda_{\substack{y, w_s}})}. \end{align} To continue with bounding \eqref{y}, we will need the following technical lemma. \begin{lemma}\label{integral} We have that $$\sum_{j =1}^{m-1} e^{-2 x (1- \cos \frac{2 \pi j }{m} )} \leq m e^{-2x} + \frac{\sqrt{3} m}{ 2\sqrt{2 \pi x}} ,$$ where $x>0$. \end{lemma} \begin{proof} \begin{align} \sum_{j =1}^{m-1} e^{-2 x (1- \cos \frac{2 \pi j }{m} )} & \leq 2 \sum_{j =1}^{m/2} e^{-2 x (1- \cos \frac{2 \pi j }{m} )}\cr & \label{spl} \leq m e^{-2x} + 2 \sum_{j =1}^{m/4} e^{-2 x (1- \cos \frac{2 \pi j }{m} )} , \end{align} where for the first term in \eqref{spl} we bound the negative cosines. Using the inequality $\cos x \leq 1 - \frac{x^2}{2} + \frac{x^4}{24}$, we get that \begin{align} \eqref{spl}& \label{thr} \leq m e^{-2x} + 2 \sum_{j=1}^{m/4} e^{- \frac{8j^2 \pi^2}{3 m^2} x }\\ &\label{i} \leq m e^{-2x} + 2 \int_{0}^{\infty } e^{- \frac{8w^2 \pi^2x }{3m^2} }dw. \end{align} Using the substitution $v=\frac{4 \pi \sqrt{x} }{\sqrt{3}m}w,$ we get that \begin{align} \eqref{i}&\leq m e^{-2x} + \frac{ \sqrt{3}m}{ 2\pi \sqrt{ x}} \int_{0}^{\infty } e^{-v^2/2}dv \cr & \leq m e^{-2x} + \frac{\sqrt{3} m}{ 2\sqrt{2 \pi x}} . \end{align} \end{proof} \subsection{Coupling with Exponentials}\label{good} Recall that $Z^t_y(i):= X_t (i)y,$ for $i=1,\ldots, n$. In this section, we study the time intervals during which $Z^t_y(i) \neq 0$. 
We want to understand for how long $Z^t_y(i) $ remains equal to zero and for how long it does not. Let $y \in (\mathbb{Z}/m \mathbb{Z})^{n-1} \setminus \langle e_1 \rangle,$ where $ \langle e_1 \rangle$ denotes the subspace of $ (\mathbb{Z}/m \mathbb{Z})^{n-1}$ generated by the vector $e_1=(1,0,\ldots ,0)$. We start by proving that there is a good chance that $Z^t_y(i)$ will be non-zero after order $ n $ steps. \begin{lemma}\label{nzero} Let $T_i$ denote the first time that $Z^t_y(i)$ is non-zero. We have that $$\pr{T_i> n -1 +c } \leq e^{-c}.$$ \end{lemma} \begin{proof} We consider the random walk on $\mathbb{Z}$, starting at zero. When at $x$, we move to $x+1$ according to a rate $1$ Poisson clock or to $x-1$ according to a rate $1/2$ Poisson clock. A Chernoff bound gives that \begin{equation}\label{t} \pr{T_i> t } \leq \pr{S_t <n-1}= \pr{e^{- \lambda S_t} > e^{- \lambda (n-1)}} \leq e^{\lambda (n-1)} \expect{e^{-\lambda S_t}}. \end{equation} We have that $S_t= M_t-N_t,$ where $M_t$ is a Poisson($t$) random variable and $N_t$ is a Poisson($t/2$) random variable. Therefore, we have that \begin{equation*} \eqref{t} \leq e^{\lambda (n-1)} e^{-\frac{3}{2}t+t (e^{- \lambda} + \frac{1}{2} e^{\lambda})}. \end{equation*} Setting $\lambda= \log 2$, we get that $$\pr{T_i> t } \leq 2^{n-1} e^{-t (3 - \sqrt{2})}.$$ Setting $t= n-1 +c$ we get the desired result. \end{proof} We first want to study for how long $Z^t_y(2)$ can stay equal to zero. \begin{definition} Let $\ell_1$ be a time such that $Z^{\ell_1}_y(2)=0$ and $Z^{\ell_1^-}_y(2)\neq 0$. Let $\ell_2= \inf \{t>\ell_1: Z^{t}_y(2)\neq 0 \}$. We will call $[\ell_1, \ell_2] $ a $y$--zero interval. \end{definition} \begin{lemma}\label{nstar} Let $m>2$ be an odd integer. Let $[\ell_1, \ell_2]$ be a $y$--zero interval. Then, $$\prcond{\ell_2-\ell_1>13k }{\mathcal{F}_t} \leq e^{-k},$$ where $k>0$. \end{lemma} \begin{proof} Consider the column dynamics $Z^t_y(i), $ where $i =2,\ldots, n$.
Let $Z_t$ be the first entry of the column $Z^t_y(i)$ that is not divisible by $m$, when read from top to bottom. If this starts at the second coordinate, then we want to study the first time $\xi$ that $Z$ returns to $2$. We are going to couple $Z_t$ with a biased random walk on $\mathbb{Z}$. Let $S_x$ be the random walk on $\mathbb{Z}$ at time $x$, which starts at $0$, and moves by $+1$ according to a Poisson($1$) clock and by $-1$ according to a Poisson$(1/2)$ clock. Then, a Chernoff bound gives $$\pr{\xi>x} \leq \pr{S_x <0} \leq \pr{ e^{-\lambda S_x} \geq 1 } \leq \expect{e^{-\lambda S_x}} \leq e^{-\frac{3x}{2} } e^{x (e^{- \lambda }+\frac{e^{ \lambda}}{2})},$$ since $S_x= M_x-N_x,$ where $M_x$ is a Poisson($x$) random variable and $N_x$ is a Poisson($x/2$) random variable. Optimizing over $\lambda$, we have that $$\pr{\xi>x} \leq e^{(-\frac{3}{2} + \sqrt{2})x } \leq e^{- \frac{x}{13}} .$$ Therefore, if $x= 13k $, $$\prcond{\ell_2-\ell_1>13k }{\mathcal{F}_t} \leq e^{-k}.$$ \end{proof} We can also study the length of the intervals during which $Z^t_y(2)\neq 0$. \begin{definition} Let $\ell_3$ be a time such that $Z^{\ell_3}_y(2)\neq0$ and $Z^{\ell_3^-}_y(2)= 0$. Let $\ell_4= \inf \{t>\ell_3: Z^{t}_y(2)=0 \}$. We will call $[\ell_3, \ell_4] $ a $y$--non-zero interval. \end{definition} \begin{lemma}\label{star} Let $m>2$ be an integer. Let $[\ell_3, \ell_4] $ be a $y$--non-zero interval. Then, $$\prcond{\ell_4-\ell_3 \leq k }{\mathcal{F}_t} \leq e^{-k},$$ where $k>0$. \end{lemma} \begin{proof} We couple $\ell_4-\ell_3$ with the time it takes for the clock of the third row to ring and the statement follows. \end{proof} We are now going to put all this information together to prove that during any interval, $ Z^t_y(2)$ is non-zero for a constant fraction of the time. Let $t_0= n$. We break up the interval $[t_0,t_{n,m}] $ into intervals $[t_j, t_{j+1}]$ of length $L $, so that $t_j=n+jL$. Let $j \in \{ 1,\ldots,n\}$ and let $g=15$.
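The optimization over $\lambda$ above can be double-checked numerically (a quick sanity check of the constants, our own sketch and not part of the proof): the exponent $-\frac{3}{2} + e^{-\lambda} + \frac{e^{\lambda}}{2}$ is minimized at $\lambda = \frac{1}{2}\log 2$, where it equals $\sqrt{2} - \frac{3}{2} \approx -0.0858 \leq -\frac{1}{13}$.

```python
import math

def exponent(lam):
    """Per-unit-time exponent in the Chernoff bound: -3/2 + e^{-lam} + e^{lam}/2."""
    return -1.5 + math.exp(-lam) + math.exp(lam) / 2

lam_star = 0.5 * math.log(2)                 # stationary point of the exponent
assert abs(exponent(lam_star) - (math.sqrt(2) - 1.5)) < 1e-12
# a crude grid search confirms lam_star minimises the exponent ...
assert all(exponent(lam_star) <= exponent(0.01 * k) + 1e-12 for k in range(1, 300))
# ... and the optimised exponent is below -1/13, justifying the e^{-x/13} bound
assert exponent(lam_star) <= -1 / 13
```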
\begin{definition}\label{g} An interval $[t_j, t_{j+1}] $ is called $y$--good if $ Z^t_y(i) \neq 0 \mod m $ for at least $1/g$ of $[t_j, t_{j+1}]$. Let $D_{y}^i$ be the set of all $y$--good intervals by time $t_{n,m}$ and let $M^t_{y}$ be the number of $y$--good intervals that have occurred by time $t$. \end{definition} The following lemma is a standard tool that will help us study how likely it is for a given interval to be $y$--good. \begin{lemma}\label{expon} Let $B_1, \ldots, B_k$ be independent, exponential random variables with mean one. We have that \begin{enumerate} \item[(a)] $\pr{\sum_{i=1}^k B_i > 2k} \leq \left( \frac{2}{e} \right)^{k}.$ \item[(b)] $\pr{\sum_{i=1}^k B_i < \frac{k}{2}} \leq \left(\frac{6}{7}\right)^k$. \end{enumerate} \end{lemma} The following lemma tells us that a constant fraction of the intervals are $y$--good. \begin{lemma}\label{prob} At time $t_{n,m}$ we have that $$\pr{M_{y}^t \leq \frac{t_{n,m}}{10L}} \leq \frac{2}{m^{2Dn}},$$ for a suitable constant $D$. \end{lemma} \begin{proof} Consider the $y$--non-zero intervals $A_b \subset [T_2,t_{n,m}]$ and the $y$--zero intervals $B_k \subset [T_2,t_{n,m}]$. Let $\vert A_b \vert$ be the length of $A_b$. Let $$W_y= \sum_b \vert A_b \vert$$ be the total time that $Z^s_y(2)$ is not equal to zero. For the case where $m$ is odd, we have that \begin{align} \label{nine1}& \lbrace M_{y}^t \leq \frac{t_{n,m}}{10L}\rbrace \subset \lbrace W_y\leq x t_{n,m}\rbrace , \end{align} where $x= \frac{1}{10}+ \frac{9}{10g}$. Lemmas \ref{nstar} and \ref{star} say that we can couple each $\vert A_b \vert$ with an exponential random variable with mean $1$ and each $\vert B_k\vert /13$ with an exponential random variable with mean $1$. Let $ r \in \left[\frac{1}{5} (1+9g^{-1})t_{n,m}, \frac{9}{520} (1-g^{-1}) t_{n,m} \right]$. We have that either $A_1 \cup \ldots \cup A_r \subset [0, t_{n,m}]$ or $(\cup_{i=1}^r B_i) $ contains all $y$--non-zero intervals that $[0, t_{n,m}]$ contains.
This is summarized in the following equation: $$W_y \geq \min \big \{\sum_{b=1}^r \vert A_b \vert, t_{n,m} - \sum_{k=1}^r \vert B_k \vert \big \}.$$ Let $\mathcal{B}$ be the event $\lbrace T_2<2Dn \log m \rbrace$. Therefore, \begin{equation} \label{r} \pr{W_y\leq xt_{n,m},\mathcal{B}} \leq \pr{\sum_{b=1}^r \vert A_b \vert \leq x t_{n,m} } + \pr{\sum_{k=1}^r \vert B_k \vert \geq \left( 1-x \right) t_{n,m} }. \end{equation} Because of the choice of $r$ we have that \begin{align} \eqref{r} & \leq \pr{\sum_{b=1}^r \vert A_b \vert \leq r/2} + \pr{\sum_{k=1}^r \vert B_k \vert \geq 2r}\cr & \leq \left(\frac{6}{7} \right)^{ r} + \pr{\sum_{k=1}^r \frac{\vert B_k\vert }{13} \geq \frac{1-g^{-1}}{13} t_{n,m } }\cr &\label{e} \leq \left(\frac{6}{7} \right)^{r} + \left( \frac{2}{e} \right)^{ r} , \end{align} where \eqref{e} follows from Lemma \ref{expon}. Putting \eqref{nine1} and \eqref{e} together, we have that \begin{align} \label{nine}&\pr{M_{y}^t \leq \frac{t_{n,m}}{10L}} \leq \pr{W_y\leq x t_{n,m},\mathcal{B}} + \frac{1}{m^{2Dn}} \leq \left(\frac{6}{7} \right)^{r} + \left( \frac{2}{e} \right)^{ r}+\frac{1}{m^{2Dn}} . \end{align} Using the definition of $r$ and \eqref{nine} we get the desired result. For the case where $m$ is even, we project all values onto $\mathbb{Z}/2 \mathbb{Z}$. Equation 2.2 of \cite{PeresSly} says that for every $\varepsilon>0$, we have that $$\pr{\big \vert W_{t_{n,m}} - \frac{t_{n,m}}{2} \big \vert \geq \varepsilon} \leq 2^{n+1} e^{- \frac{t_{n,m} \varepsilon^2 \lambda}{12}},$$ where $\lambda$ is a positive constant not depending on $n,m$. Therefore \begin{align*} &\pr{W_y\leq x t_{n,m}} \leq \frac{1}{m^{dn}}, \end{align*} for a suitable constant $d$. \end{proof} Finally, we need a lemma that says that if $ Z_y^s(2) $ is sufficiently big with good probability during a $y$--good interval, then $ Z_y^s(2) $ is sufficiently big for a constant fraction of $[0, t_{n,m}]$.
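Lemma \ref{expon} follows from standard Chernoff bounds; a quick Monte Carlo sanity check (our own sketch, not part of the proof) confirms that the stated bounds dominate the empirical tails:

```python
import math
import random

def tail_estimates(k, trials, rng):
    """Estimate P(sum of k Exp(1) variables > 2k) and P(sum < k/2) by simulation."""
    upper = lower = 0
    for _ in range(trials):
        s = sum(rng.expovariate(1.0) for _ in range(k))
        upper += s > 2 * k
        lower += s < k / 2
    return upper / trials, lower / trials

rng = random.Random(0)
k, trials = 20, 20000
p_upper, p_lower = tail_estimates(k, trials, rng)
# the Chernoff bounds of Lemma \ref{expon}, with a small Monte Carlo margin
assert p_upper <= (2 / math.e) ** k + 0.01
assert p_lower <= (6 / 7) ** k + 0.01
```

The bounds are provable upper bounds on the true tails, so the assertions hold up to simulation noise; the small additive margin absorbs that noise.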
Let $\mathcal{G}_j$ be the event that $[t_j,t_{j+1}] \in D_y^2$ and let $\mathcal{J} $ be the set of indices $j$ that satisfy $\pr{\mathcal{G}_j}\neq 0$. Recall that $P^t$ is the indicator function that the clock of the second row rings at time $t$ and $A^t_{y,x}= \int_0^t 1_{\lbrace P^s=1 \rbrace } 1_{\lbrace \vert Z_y^s(2) \vert > x\rbrace } ds$. The following is crucial to proving the second part of Lemma \ref{q}. \begin{lemma}\label{r15} For $y \notin \langle e_1 \rangle$, consider $\tilde{B}_{y}= \lbrace A^t_{y,x} \geq (2A)^{-1} t_{n,m} \rbrace $ and assume that $\prcond{\tilde{B}_{y}}{\mathcal{G}_j} \geq 1/2,$ for every $j \in \mathcal{J}$. Then there are suitable constants $b$ and $g$ such that \begin{align*} \pr{\tilde{B}_{y}} \geq 1- \left( e^{- g \frac{t_{n,m}}{L}} + \frac{b}{m^{gn}} \right) e^{-c}, \end{align*} where $c$ is the constant from Theorem \ref{main} and $L$ is the length of the intervals. \end{lemma} \begin{proof} Let $B_{y}$ denote the event that $\vert Z^s_y(2) \vert > x$ for at least an $A^{-1}$ fraction of $ [0, t_{n,m}]$, where $A$ is a suitable constant. Let $\mathcal{F}_{j}$ be the $\sigma$--algebra generated by all the row operations performed before time $t_j$. We have that \begin{align} \sum_j I_{B_{j,y}} & = \sum_j \prcond{B_{j,y}}{\mathcal{F}_{j}} + \sum_{j} \left( I_{B_{j,y}}- \prcond{B_{j,y}}{\mathcal{F}_{j}}\right), \end{align} where the term $M=\sum_{j} \left( I_{B_{j,y}}- \prcond{B_{j,y}}{\mathcal{F}_{j}}\right)$ is a martingale. Using Lemma \ref{prob}, we have that \begin{align} \pr{B_y} &=\pr{ \sum_j I_{B_{j,y}} \geq A^{-1} t_{n,m}}\cr & \label{MM12} \geq \pr{ \sum_j \prcond{B_{j,y}}{\mathcal{F}_{j}} \geq 2A^{-1} t_{n,m}, M \geq -A^{-1} t_{n,m} }. \end{align} Using the Azuma-Hoeffding inequality we have that \begin{align} \label{M12} \pr{ M \geq -A^{-1} t_{n,m} } &\geq 1- e^{-t_{n,m}} \geq 1- m^{-dn}e^{-c}, \end{align} with $d>2$.
Our assumption that $\prcond{\tilde{B}_{y}}{\mathcal{G}_j} \geq 1/2$ gives that \begin{equation} \prcond{B_{j,y}}{\mathcal{F}_{j}} \geq H^y_j I_{G^y_j}, \end{equation} where $H^y_j$ is a Bernoulli$(1/2)$ random variable that is independent of $I_{G^y_j}$. Conditioning on the set $D^I_y$ of $y$--good intervals, we have that \begin{align*} \pr{ \sum_j \prcond{B_{j,y}}{\mathcal{F}_{j}} \geq 2A^{-1} t_{n,m} } \geq & \pr{M^{t,n,m}_y \geq \frac{t_{n,m}}{10L_1}}\\ & \quad \pr{ \sum_{j\in D^I_y} H^y_j \geq 2A^{-1} \frac{t_{n,m}}{L_1} \bigg \vert M^t_y \geq \frac{t_{n,m}}{10L_1}}, \end{align*} where $M^{t,n,m}_y $ is the number of $y$--good intervals by time $t_{n,m}$. Therefore, Lemma \ref{prob} combined with the properties of the binomial distribution and an appropriate choice of $A$ gives that \begin{equation} \label{y12} \pr{ \sum_j \prcond{B_{j,y}}{\mathcal{F}_{j}} \geq 2A^{-1} t_{n,m} } \geq 1-e^{- D \frac{t_{n,m}}{L}}e^{-c}, \end{equation} where $D>2$ is an appropriate constant. Combining \eqref{MM12}, \eqref{M12} and \eqref{y12}, we get that \begin{align*} \pr{B^c_y} \leq \left( e^{- D \frac{t_{n,m}}{L}}+ \frac{1}{m^{dn}} \right) e^{-c}. \end{align*} Given $B_y$, we have that $A^t_{y,x}$ can be coupled with a Poisson random variable with mean $A^{-1}t_{n,m}$, so that \[\pr{\tilde{B}_{y}^c} \leq \prcond{\tilde{B}_{y}^c}{B_y} +\pr{B_y^c }.\] Using the tails of a Poisson random variable, we have that \begin{align*} \pr{\tilde{B}_{y}^c} & \leq (e/2)^{-(2A)^{-1} t_{n,m}} + \left( e^{- D \frac{t_{n,m}}{L}}+ \frac{1}{m^{dn}} \right) e^{-c} \\ & \leq \left( \frac{1}{m^{\tilde{D}n}} + e^{- D \frac{t_{n,m}}{L}}+ \frac{1}{m^{dn}} \right) e^{-c}, \end{align*} which finishes the proof. \end{proof} \subsection{Inducing the walk to a smaller dimension} Let $I \in \lbrace 2,\ldots,n \rbrace$ and let $Z_y^t=X_t y$. In this section, we develop the tools needed to study the walk when we focus only on the top $I$ coordinates of $Z_y^t$.
Let $ s_1 <s_2< \ldots \leq t_{n,m}$ denote the times when the $I$--th clock rings and $Z_y^{s_j}(I) \neq 0$. Let $z_1< z_2<\ldots \leq t_{n,m}$ denote the times when a clock other than the $I$--th one rings, let $W_j$ be the corresponding operation matrix applied and let $L(t)= \max \lbrace j \geq 0 :z_{j} \leq t \rbrace$. For $0\leq t \leq t_{n,m}$, we define the backwards process by $Y_0=I_n$ and \begin{equation} Y_t= \prod_{j=0}^{L(t_{n,m} ) - L(t_{n,m}-t ) -1}W_{L(t_{n,m})-j} =W_{L(t_{n,m})}W_{L(t_{n,m})-1}\ldots W_{L(t_{n,m}-t)+1} \end{equation} and for $0\leq t'< t \leq t_{n,m}$ we let \begin{equation} Y_{t',t}= Y_{t_{n,m}-t}^{-1} Y_{t_{n,m}-t'}=W_{L(t)}\ldots W_{L(t')+1}. \end{equation} Notice that the entries of $Y_t$ and $Y_{t',t}$ that fall in the $[1,I-1] \times [I,n]$ box are equal to zero and that $Y_t$ is a Markov chain on the columns of a matrix in $G_n(m)$. \begin{lemma}\label{vector} We have that \begin{align*} Z_y^{s_{\ell}} & = Y_{0, s_{\ell} } Z_y^{0} + \sum_{k=1}^{\ell-1} a_k Y_{s_k, s_{\ell} } E(I-1,I) Y_{ 0, s_{k}} Z_y^{0} + a_{\ell} E(I-1,I) Y_{ 0,s_{\ell} } Z_y^{0}, \end{align*} where $a_k \in \{ \pm 1 \}$ are the random signs corresponding to the $k$--th time the $I$--th clock rings. \end{lemma} \begin{proof} We will prove the statement by induction. For $\ell=0$ both sides are equal to $Z_y^{0}$.
By the definition of $s_{\ell+1}$ we have that \begin{align} \label{defi}Z^{s_{\ell +1}}_y & = \left( I_n + a_{\ell+1} E(I-1,I) \right) Y_{ s_{\ell}, {s_{\ell +1}}} Z^{s_{\ell }}_y. \end{align} By the induction hypothesis we have that \begin{align} \eqref{defi} & = \left( I_n + a_{\ell +1} E(I-1,I) \right) Y_{ s_{\ell}, {s_{\ell +1}}} \biggl( Y_{0, s_{\ell} } Z_y^{0}+ a_{\ell} E(I-1,I) Y_{0, s_{\ell} } Z_y^{0} \cr \label{special}&\quad\quad + \sum_{k=1}^{\ell-1} a_k Y_{ s_k,s_{\ell}} E(I-1,I) Y_{0, s_k} Z_y^{0} \biggr). \end{align} Using the facts that $ E(I-1,I) E(I-1,I)= 0$ and $ E(I-1,I)Y E(I-1,I)= 0$ for every $Y \in G$ whose $[1,I-1] \times [ I,n]$ entries are zero, we get that \begin{align} \eqref{special}& = Y_{0, s_{\ell+1} } Z_y^{0}+ a_{\ell +1} E(I-1,I) Y_{0, s_{\ell+1} } Z_y^{0} +\sum_{k=1}^{\ell-1} a_k Y_{s_k, s_{\ell+1}} E(I-1,I) Y_{0, s_{k} } Z_y^{0} \cr &\label{a} \quad + a_{\ell} Y_{s_{\ell},s_{\ell +1}} E(I-1,I) Y_{0, s_{\ell} } Z_y^{0}, \end{align} which is the claimed identity for $\ell+1$ and finishes the proof. \end{proof} Since we are interested in $Z_y^{t}(2)$, we write a similar version of Lemma \ref{vector}. Using the fact that $Z_y^{t} = Y_{s_{\ell},t } Z_y^{s_{\ell}},$ we get the following. \begin{corollary}\label{coordinate} We have that \begin{align*} Z_y^{t} &=Y_{0,t } Z_y^{0}+ \sum_{k=1}^{\ell-1} a_k Y_{s_k, t} E(I-1,I) Y_{0, s_{k}} Z_y^{0} \quad + a_{\ell} E(I-1,I) Y_{0,t} Z_y^{0} \end{align*} and $$Z_y^{t}(2) = Y_{0, t } Z_y^{0}(2)+ \sum_{k=1}^{\ell-1} a_k Y_{s_k, t} E(I-1,I) Y_{0, s_{k} } Z_y^{0}(2).$$ \end{corollary} To study $Y_{0, t } Z_y^{0}(2)$, consider a vector process starting at $ y_{I} e_{I}$ and having the same updates as the original process. Then $Y_{0, t } Z_y^{0}(2)$ is the second coordinate of this process.
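The two annihilation identities invoked in the induction step can be checked mechanically. The following sketch (illustrative only; the helper names are ours) verifies, for one small instance, that $E(I-1,I)E(I-1,I)=0$ and that $E(I-1,I)\,Y\,E(I-1,I)=0$ for a unit upper triangular $Y$ whose $[1,I-1]\times[I,n]$ block vanishes.

```python
def E(n, a, b):
    """n x n matrix with a single 1 in row a, column b (1-indexed)."""
    return [[1 if (i, j) == (a - 1, b - 1) else 0 for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A unit upper triangular Y (n = 5, I = 4) whose [1, I-1] x [I, n] block,
# i.e. rows 1..3 and columns 4..5, is identically zero.
Y = [[1, 2, 1, 0, 0],
     [0, 1, 3, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 1, 7],
     [0, 0, 0, 0, 1]]
```

The key point is that $E(I-1,I)\,Y\,E(I-1,I)$ has a single potentially nonzero entry, equal to the $(I,I-1)$ entry of $Y$, which vanishes because $Y$ is upper triangular.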
Similarly, $Y_{s_k, t } E(I-1,I) Y_{0, s_{k} } Z_y^{0}(2)$ is the same as the second coordinate of the vector process that starts at $y_{I}^{s_k} e_{I-1}$, where $y_{I}^{s_k}$ is the $I$--th coordinate of $Z_y^{s_k}$, and whose updates are the same as the updates that occur between times $s_k$ and $t$. Let $\mathcal{G}_{j}^y$ be the event $\{[t_j, t_{j+1}] \in D^I_y \}$ for the $I$ that was chosen. The following lemma introduces a condition under which $\vert Z_y^{t}(2) \vert$ is guaranteed to be big. This will be crucial to proving that the eigenvalues of the walk are sufficiently small for a constant fraction of the time. \begin{lemma}\label{de20} Let $s_{1}$ be the first time after $t_{j}$ that the $I$--th clock rings and $Z_y^{s_{1}}(I) \neq 0$ and let $t\in [t_{j+1}, t_{j+2}]$. When $s_1 \leq t_{j+1}$, set $N^t= Y_{s_{1},t } E(I-1,I) Y_{0, s_{1} } Z_y^{0}(2)$. For every $x \in \{0, \ldots, m/4\}$, we have that \begin{equation}\label{DiaHough2} \prcond{ \vert Z_y^{t}(2) \vert \geq x }{\mathcal{G}_{j}^y} \geq \frac{1}{2}\prcond{ \vert2 N^t\vert \geq 2x }{\mathcal{G}^y_{j}}1_{ \{s_1 \leq t_{j+1} \}}. \end{equation} \end{lemma} \begin{proof} Lemma \ref{nstar} says that $ s_{1}/13$ can be coupled with an exponential random variable with mean one. Let $\mathcal{Y}_t$ be the event that $ \vert2 N^t\vert \geq 2x$. We have that \begin{align} &\prcond{ \vert Z_y^{t}(2) \vert \geq x }{\mathcal{G}^y_{j}} \label{la2} \geq \prcond{ \vert Z_y^{t}(2) \vert \geq x }{\mathcal{Y}_t, \mathcal{G}^y_j} \prcond{ \mathcal{Y}_t }{\mathcal{G}^y_{j}}1_{ \{s_1 \leq t_{j+1}\}}. \end{align} The condition $\{ \vert2 N^t\vert \geq 2x \} $ combined with the fact that $a_1=1$ or $a_1=-1$, each with probability $1/2$, will result in $\vert Z_y^{t}(2) \vert \geq x$ with probability at least $1/2$. Therefore, \begin{align*} \eqref{la2}& \geq \frac{1}{2}\prcond{ \vert 2 N^t \vert \geq 2x }{\mathcal{G}^y_{j}} 1_{ \{s_1 \leq t_{j+1} \}}.
\end{align*} \end{proof} \section{The case where $m$ is a prime}\label{prime} In this section $m$ is a prime number. Since the case $m=2$ was covered in \cite{PeresSly}, from now on in this section $m$ will denote an odd prime. We now state the Diaconis--Hough lemma, which we are going to use for the proof of Theorem \ref{mainc}. \begin{lemma}[Theorem 3, \cite{Hough}]\label{DH} Let $Z_t$ be the configuration of the rightmost corner of the upper triangular random walk at time $t$. We have that $$\Vert \pr{Z_t \in \cdot} - U\Vert_{T.V.} \leq \exp(-r t 2^{-n} m^{-\frac{2}{n-1}}),$$ where $U$ is the uniform measure on $G$ and $r $ is a universal constant. \end{lemma} Diaconis and Hough mainly treat the case where $n$ is fixed. Therefore, the term $2^n$ does not appear in their main result, but it can be found in the proof of Proposition 22 of \cite{Hough}. \subsection{The eigenvalues for $y \in W_I$} Let $I=2+J$, where $J= \sqrt{\log m}$, so that $2^J \leq m^{\frac{2}{J}}$. Recall that $W_I= \left( \mathbb{Z}/ m \mathbb{Z}\right)^{n-1} \setminus \langle e_1, \ldots, e_{I-2} \rangle$ and let $y \in W_I$. The goal of this section is to prove that $\vert Z_y^{t}(2) \vert \geq \frac{m}{8}$ for a constant fraction of $[0,t_{n,m}]$. We choose the length of each interval $[t_j, t_{j+1}]$ to be $L= d m^{\frac{4}{J }},$ where $d>0$ is a suitable constant that makes $L$ big enough for Lemma \ref{prob} to work. Let $T_I$ be the first time that $Z_y^{t}(I) \neq 0$. Let $\mathcal{G}_{j,I}^y$ be the event $\{[t_j, t_{j+1}] \in D^I_y \}$ for the $I$ that was chosen, but for simplicity we will write $\mathcal{G}^y_{j}$. The following lemma is the main tool for proving Theorem \ref{mainc}. \begin{lemma}\label{de} For $y \in W_I$, we have that \begin{equation} \prcond{ \vert Z_y^{t}(2) \vert \geq \frac{m}{8} }{\mathcal{G}^y_{j}} \geq \frac{1}{64}, \end{equation} for every $t \geq t_{j+1}$.
\end{lemma} \begin{proof} We just need to check that the assumptions of Lemma \ref{de20} hold. To bound $\prcond{ \vert2 N^t\vert \geq \frac{m}{4} }{\mathcal{G}^y_{j}}$ from below, we use Lemma \ref{DH} to get that $$\prcond{ \vert 2 N^t\vert \geq \frac{m}{4} }{\mathcal{G}^y_{j} } \geq \frac{1}{8}- e^{-r \frac{L}{2^{I}m^{2J^{-1}}}}.$$ By our choice of $L$ and $J$, we have that \begin{equation}\label{DiaHough} \prcond{ \vert 2 N^t\vert \geq \frac{m}{4} }{\mathcal{G}^y_{j}} \geq \frac{1}{16}. \end{equation} Also, our choice of $L$ gives that \[\pr{s_1<t_{j+1}} \geq \frac{1}{2}.\] Lemma \ref{de20} gives the desired result. \end{proof} The next lemma says that the event considered in Lemma \ref{de} holds for a constant fraction of the time. Let $S$ be an appropriately chosen constant, which is uniform in $j,m, n$ and $y$. \begin{lemma}\label{B1} Let $B_{j,y}$ denote the event \[ B_{j,y} :=\Big\{\int_{t_{j+1}}^{t_{j+2}}1_{ \{\vert Z^s_y(2) \vert > m/8 \} } ds \geq \frac{1}{3} L\Big \}. \] For $y \in W_I,$ we have that $$\prcond{B_{j,y}} {\mathcal{G}^y_{j}} \geq \frac{1}{65}, $$ for every $j \geq 1$. \end{lemma} \begin{proof} Let $R_j$ be the amount of time $s \in [t_{j+1}, t_{j+2}]$ for which $\vert Z^{s}_y(2) \vert \leq m/8$, that is, $R_j= \int_{t_{j+1}}^{ t_{j+2}} 1_{ \lbrace \vert Z^{s}_y(2) \vert \leq m/8 \rbrace}ds $. Lemma \ref{de} gives that \begin{align*} &\expect{ R_j \vert \mathcal{G}^y_{j}}\leq \frac{63}{128} L. \end{align*} Markov's inequality then gives that \begin{align*} &\prcond{ R_j > \frac{4095}{8192} L }{\mathcal{G}^y_{j}} \leq \frac{64}{65}.\end{align*} Since $\frac{4095}{8192}\leq \frac{2}{3}$, we have that \begin{align*} &\prcond{B_{j,y}} {\mathcal{G}^y_{j}} \geq \prcond{ R_j \leq \frac{4095}{8192} L }{\mathcal{G}^y_{j}} \geq \frac{1}{65}. \end{align*} \end{proof} The rest of the eigenvalues will be studied in Section \ref{general}.
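Lemma \ref{DH} quantifies total variation mixing for the corner entry; for very small parameters the mixing of the whole upper triangular group can even be computed exactly. The sketch below (illustrative only, with our own simplifications: discrete time, $n=3$, $m=3$, one uniformly chosen row operation per step) convolves the step distribution and checks that the exact total variation distance to the uniform measure decays.

```python
def step_dist(dist, m):
    """One step of the discrete-time row-operation walk on 3x3 unit upper
    triangular matrices over Z/mZ, encoded by the free entries
    (a, b, c) = (X[1][2], X[1][3], X[2][3]).  Adding +/- row 2 to row 1
    maps (a, b, c) to (a +/- 1, b +/- c, c); adding +/- row 3 to row 2
    maps it to (a, b, c +/- 1)."""
    new = {}
    for (a, b, c), p in dist.items():
        for st in (((a + 1) % m, (b + c) % m, c),
                   ((a - 1) % m, (b - c) % m, c),
                   (a, b, (c + 1) % m),
                   (a, b, (c - 1) % m)):
            new[st] = new.get(st, 0.0) + p / 4
    return new

def tv_to_uniform(dist, m):
    """Exact total variation distance to the uniform measure on the
    m**3 group elements (states missing from dist have probability 0)."""
    u = 1.0 / m ** 3
    seen = sum(abs(p - u) for p in dist.values())
    missing = (m ** 3 - len(dist)) * u
    return (seen + missing) / 2

def tv_after(steps, m=3):
    dist = {(0, 0, 0): 1.0}
    for _ in range(steps):
        dist = step_dist(dist, m)
    return tv_to_uniform(dist, m)
```

Since total variation distance to stationarity is non-increasing along a Markov chain, `tv_after(100)` is at most `tv_after(10)`, and for these tiny parameters the walk is essentially uniform after a hundred steps.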
\section{The case where $m$ is not a prime}\label{general} In this section, we study the quantity $Z_y^t(2)$ for the case where $m$ is not necessarily prime. We start by proving a lemma similar to Lemma \ref{DH} which works for $m \in \mathbb{N}$ that is not necessarily prime. Let $J= \lfloor (\log m)^{1/3} \rfloor$ and $D=20(J+1)$. Let $A_{J,D,m}= 2m^{2/J}/ \log D$ and let $p$ be a prime such that $\frac{1}{20} A_{J,D,m} \leq 6 r^{-1}2^J p^{2/J} \leq \frac{1}{2} A_{J,D,m}$. Recall that $r$ is the constant from Lemma \ref{DH}. The goal of this section is to prove the following lemma. \begin{lemma}\label{BH2} Let $Z_t$ be the last column of $X_t$. We have that there is a constant $K$ such that \[\pr{\vert aZ_t(n-J-1)\vert > m e^{-K (\log m)^{2/3}} } \geq 1/2,\] for every $t \in [6r^{-1}2^Jp^{2/J}, A_{J,D,m} ]$ and every $a \in \{1,\ldots, m-1\}$. \end{lemma} To prove Lemma \ref{BH2}, we will need the following lemma concerning $Z_t$ over~$\mathbb{Z}$. \begin{lemma}\label{AH} Let $\mathcal{Z}_t= X_t e_{n-1}$ be the column process over $\mathbb{Z}$ which starts at $(0,\ldots,1)^T$. Set $x=2m^{2/J}/ \log D$. Then, we have that $$\pr{\max_{\substack{t \leq x,\\ 1\leq i \leq k}} \{ \vert \mathcal{Z}_t(n-i)\vert \} \leq x^{k/2} (\log D)^{k/2}} \geq 1- \frac{2k}{D} ,$$ for $k \leq J+1 \leq n-1$. \end{lemma} \begin{proof} We will prove the result by induction on $k$. For $k=1$, the only possible $i \leq k$ is $i=1$. We have that $\mathcal{Z}_t(n-1)$ is a simple random walk on $\mathbb{Z}$. The reflection principle gives that \begin{equation}\label{rp} \pr{\max_{t \leq x} \{ \vert \mathcal{Z}_t(n-1)\vert \} \geq m^{1/J}} \leq \pr{ \vert \mathcal{Z}_{x}(n-1)\vert \geq m^{1/J} }.
\end{equation} The Azuma-Hoeffding inequality gives that \[\pr{ \vert \mathcal{Z}_{x}(n-1)\vert \geq m^{1/J} } \leq 2 e^{- \log D}= \frac{2}{D} \] and therefore, in combination with \eqref{rp}, we get that $$\pr{\max_{t \leq x} \{ \vert \mathcal{Z}_t(n-1)\vert \} \geq m^{1/J} } \leq 2 e^{- \log D}= \frac{2}{D}.$$ Let $\mathcal{A}_k$ be the event that $\{\max_{\substack{t \leq x,\\ i \leq k}} \{ \vert \mathcal{Z}_t(n-i)\vert \} \leq x^{k/2} (\log D)^{k/2} \}.$ Assume that $\pr{\mathcal{A}_k} \geq \left(1- \frac{2}{D} \right)^k$. Let $b_1\leq b_2 \leq \ldots \leq t$ be the times at which the $(n-k)$--th clock rings. Writing $\mathcal{Z}_{t}(n-k-1)= \sum_i a_i \mathcal{Z}_{b_i}(n-k)$, the Azuma-Hoeffding inequality gives that $\prcond{\mathcal{A}_{k+1}}{\mathcal{A}_k} \geq 1 - e^{- \log D}$. Therefore $$\pr{\mathcal{A}_{k+1}}\geq (1 - e^{- \log D}) \left(1- \frac{2}{D} \right)^k \geq \left(1- \frac{2}{D} \right)^{k+1}. $$ Using the fact that $\left(1- \frac{2}{D} \right)^{k} \geq 1- \frac{2k}{D}$, we complete the proof. \end{proof} Let $T$ be the first time $t$ at which there is a $j \leq n- J-1$ satisfying $\vert \mathcal{Z}_t(j)\vert > m/6$. Lemma \ref{AH} says that w.h.p. $T >2 m^{2/J}/ \log D$. Recall that $\frac{1}{20} A_{J,D,m} \leq 6 r^{-1}2^J p^{2/J} \leq \frac{1}{2} A_{J,D,m}$. Let \[\theta_k(t):= \max_{\substack{A \subset \mathbb{Z}/ m \mathbb{Z} \\ \vert A \vert \leq k}} \{\pr{Z_t(n-I) \in A}\}.\] \begin{lemma}\label{important1} For $t \in [6r^{-1}2^J p^{2/J}, A_{J,D,m} ]$, we have that $$\theta_{p/3}(t) \leq 1/2.$$ \end{lemma} \begin{proof} Let $\Tilde{Z}_t$ be the process over $\mathbb{Z}/p \mathbb{Z}$ and let $$\Tilde{\theta}_k(t):= \max_{\substack{A \subset \mathbb{Z}/ p \mathbb{Z} \\ \vert A \vert \leq k}} \{\pr{\Tilde{Z}_t(n-J-1) \in A}\}.$$ Since $p <m $, we notice that $\theta_{p/3}(t) \leq \Tilde{\theta}_{p/3}(t) + \pr{T \leq t}$.
If we assume that $\Tilde{\theta}_{p/3}(t) \geq 2/5 $, then there is a set $A \subset \mathbb{Z}/ p \mathbb{Z}$ with $\vert A \vert \leq p/3$ such that $$\pr{\Tilde{Z}_t(n-J-1) \in A}- \pi_p(A) \geq 1/15,$$ where $\pi_p$ is the uniform measure over $\mathbb{Z}/ p \mathbb{Z}$. This implies that $$d_{T.V.}(\Tilde{Z}_t(n-J-1), \pi_p) \geq 1/15 ,$$ which contradicts Lemma \ref{DH}: since $t \geq 6r^{-1}2^J p^{2/J}$, Lemma \ref{DH} gives that $$d_{T.V.}(\Tilde{Z}_t(n-J-1), \pi_p) \leq e^{-3} .$$ Therefore, $\theta_{p/3}(t) \leq \Tilde{\theta}_{p/3}(t) + 1/10\leq 1/2$. \end{proof} \begin{corollary}\label{important3} For $t \in [6r^{-1}2^J p^{2/J} , A_{J,D,m} ]$, we have that there is a universal constant $K$ such that \[\pr{\vert Z_t(n-J-1)\vert > m e^{-K (\log m)^{2/3}} } \geq 1/2.\] \end{corollary} \begin{proof} Lemma \ref{important1} gives that \[\pr{\vert Z_t(n-J-1)\vert > p/6} \geq 1/2.\] The fact that $\frac{1}{20} A_{J,D,m} \leq 6 r^{-1}2^J p^{2/J} $ gives that $$p \geq \frac{\tilde{r}^{J/2}m}{2^{J^2/2} (\log D)^{J/2}} \geq m e^{-\tilde{K} (\log m)^{2/3}},$$ where $\tilde{K}$ is a universal constant. This completes the proof. \end{proof} We are now ready to prove Lemma \ref{BH2}. \begin{proof}[Proof of Lemma \ref{BH2}] Let $a \in \mathbb{Z}$ and let $$\theta^a_k(t):= \max_{\substack{A \subset \mathbb{Z}/ m \mathbb{Z} \\ \vert A \vert \leq k}} \{\pr{aZ_t(n-J-1) \in A}\}.$$ Let $g=\gcd(a,m)$. If $g=1$, then $\theta^a_k(t)=\theta_k(t)$ and the statement follows by Corollary \ref{important3}. If $g \neq 1$, then let $m'= m/g$ and $a'=a/g$. Then we can view $aZ_t(n-J-1)$ as a random walk on $\mathbb{Z}/m' \mathbb{Z}$. We denote this random walk by $\overline{Z}_t$. If $m' \gg 1,$ then let $p'$ be such that $\frac{1}{20} A_{J,D,m'} \leq 6r^{-1}2^J (p')^{2/J} \leq \frac{1}{2} A_{J,D,m'}$. We therefore have that $p' \geq 6^{-1} \tilde{r}^{J/2} 2^{-J^2/2}D^{-J/2} (\log D)^{-J/2}m' $.
Corollary \ref{important3} says that \[\pr{\vert a' \overline{Z}_t\vert > p'/6} \geq 1/2,\] and therefore \[\pr{\vert aZ_t(n-J-1)\vert > (gp')/6} \geq 1/2.\] Using the fact that $gp' \geq 6^{-1} \tilde{r}^{J/2} 2^{-J^2/2}D^{-J/2} (\log D)^{-J/2}m $, we get that \[\pr{\vert aZ_t(n-J-1)\vert >6^{-1} \tilde{r}^{J/2} 2^{-J^2/2}D^{-J/2} (\log D)^{-J/2}m } \geq 1/2.\] If $m'\ll 1$, then the walk $aZ_t(n-J-1)$ on $\mathbb{Z}/m' \mathbb{Z}$ mixes in finitely many steps. Similarly to before, we have that \[\pr{\vert aZ_t(n-J-1)\vert > 6^{-1} \tilde{r}^{J/2} 2^{-J^2/2}D^{-J/2} (\log D)^{-J/2}m } \geq 1/2.\] Using the fact that there is a universal constant $K$ such that \[6^{-1} \tilde{r}^{J/2} 2^{-J^2/2}D^{-J/2} (\log D)^{-J/2}m \geq m e^{-K (\log m)^{2/3}},\] we conclude the proof. \end{proof} \subsection{The eigenvalues for $y\in W_I$} We are now going to consider the decomposition proved in Corollary \ref{coordinate}. For the definition of the $y$--good intervals, we are going to consider $I=1+J$, where $J= \lfloor (\log m)^{1/3} \rfloor$. Let $L_1=d A_{J,D,m}$ be the length of each $y$--good interval, where $d$ is a suitable constant. Let $W_I=\left( \mathbb{Z}/ m \mathbb{Z}\right)^{n-1} \setminus \langle e_1, \ldots, e_{I-2} \rangle$. The following lemma is one of the main tools for proving Lemma \ref{q}. \begin{lemma}\label{deg} For $y \in W_I$, we have that \begin{equation} \prcond{ \vert Z_y^{t}(2) \vert \geq m e^{-K (\log m)^{2/3}} }{\mathcal{G}^y_{j}} \geq \frac{1}{8}, \end{equation} for every $t \geq t_{j+1}$. \end{lemma} \begin{proof} We just need to check that the assumptions of Lemma \ref{de20} hold. Using the decomposition $$Z_y^{t}(2) = Y_{0, t } Z_y^{0}(2)+ \sum_{k=1}^{\ell-1} a_k Y_{s_k, t} E(I-1,I) Y_{0, s_{k} } Z_y^{0}(2),$$ as presented in Corollary \ref{coordinate}, we notice that $N^t= Y_{s_1, t} E(I-1,I) Y_{0, s_{1} } Z_y^{0}(2)$ has the form $a Z_t(n-J-1)$.
By our choice of $L$ and $J$, Lemma \ref{BH2} gives that $$\prcond{ \vert 2 N^t\vert \geq m e^{-K (\log m)^{2/3}} }{\mathcal{G}^y_{j} } \geq \frac{1}{2},$$ and \[\pr{s_1 \leq t_{j+1}} \geq 1/2.\] Lemma \ref{de20} then gives the desired result. \end{proof} \begin{lemma}\label{B12} Let $B_{j,y}$ denote the event that $\vert Z^s_y(2) \vert > m e^{-K (\log m)^{2/3}}$ for at least one third of $ [ t_{j+1}, t_{j+2}]$. For $y \in W_I ,$ we have that $$\prcond{B_{j,y}} {\mathcal{G}^y_{j}} \geq \frac{1}{2},$$ for every $j \geq 1$. \end{lemma} \begin{proof} Let $R_j$ be the amount of time $s \in [t_{j+1}, t_{j+2}]$ for which $\vert Z^{s}_y(2) \vert \leq m e^{-K (\log m)^{2/3}}$, that is, $R_j= \int_{t_{j+1}}^{ t_{j+2}} 1_{ \lbrace \vert Z^{s}_y(2) \vert \leq m e^{-K (\log m)^{2/3}} \rbrace}ds $. Lemma \ref{deg} gives that \begin{align*} &\expect{ R_j \vert \mathcal{G}^y_{j}}\leq \frac{1}{8} L. \end{align*} Markov's inequality then gives that \begin{align*} &\prcond{ R_j > \frac{1}{3} L }{\mathcal{G}^y_{j}} \leq \frac{3}{8}.\end{align*} Therefore, \begin{align*} &\prcond{B_{j,y}} {\mathcal{G}^y_{j}} \geq \prcond{ R_j \leq \frac{1}{3} L }{\mathcal{G}^y_{j}} \geq \frac{5}{8}. \end{align*} \end{proof} \begin{lemma}\label{r1} Recall that $P^t$ is the indicator function that the clock of the second row rings at time $t$ and $A^t_{y,x}= \int_0^t 1_{\lbrace P^s=1 \rbrace } 1_{\lbrace \vert Z_y^s(2) \vert > x\rbrace } ds$. Let $x= m e^{-K (\log m)^{2/3}}$. For $y \in W_I$, consider $\tilde{B}_{y}= \lbrace A^t_{y,x} \geq (2A)^{-1} t_{n,m} \rbrace $. We have that \begin{align*} \pr{\tilde{B}_{y}} \geq 1- \frac{b}{m^{gn}} e^{-c}, \end{align*} where $b,g$ are suitable constants. \end{lemma} \begin{proof} We have that there is a constant $R$ such that $\frac{t_{n,m}}{L_1} \geq R n \log m$. Lemmas \ref{r15} and \ref{B12} give the desired result.
\end{proof} \subsection{The eigenvalues for $y\in \langle e_1, \ldots, e_{I-2} \rangle \setminus \langle e_1 , e_2\rangle $} Let $Q_I= \langle e_1, \ldots, e_{I-2} \rangle \setminus \langle e_1 , e_2\rangle $. Adjusting the proof of Corollary \ref{important3} for $J=3$, we get the following result. \begin{corollary}\label{important4} For $t \in [48r^{-1} p^{2/3} , A_{3,D,m} ]$, we have that there is a universal constant $\tilde{K}$ such that \[\pr{\vert Z_t(n-4)\vert > m / \tilde{K}} \geq 1/2.\] \end{corollary} Let $L_2 =d m$ be the length of each $y$--good interval, where $d$ is a suitable constant. \begin{lemma}\label{deg2} For $y \in Q_I$, we have that \begin{equation} \prcond{ \vert Z_y^{t}(2) \vert \geq m /\tilde{K} }{\mathcal{G}^y_{j}} \geq \frac{1}{8}, \end{equation} for every $t \geq t_{j+1}$. \end{lemma} The proof of Lemma \ref{deg2} is similar to the proof of Lemma \ref{deg}. Similarly to Lemma \ref{B12}, we have the following lemma. \begin{lemma}\label{B13} Let $B_{j,y}$ denote the event that $\vert Z^s_y(2) \vert > m / \tilde{K}$ for at least one third of $ [ t_{j+1}, t_{j+2}]$. For $y \in Q_I ,$ we have that $$\prcond{B_{j,y}} {\mathcal{G}^y_{j}} \geq \frac{1}{2},$$ for every $j \geq 1$. \end{lemma} \begin{lemma}\label{r2} Recall that $P^t$ is the indicator function that the clock of the second row rings at time $t$ and $A^t_{y,x}= \int_0^t 1_{\lbrace P^s=1 \rbrace } 1_{\lbrace \vert Z_y^s(2) \vert > x\rbrace } ds$. Let $x=m / \tilde{K}.$ For $y \in Q_I$, consider $\tilde{D}_{y}= \lbrace A^t_{y,x} \geq (2A)^{-1} t_{n,m} \rbrace $. We have that \begin{align*} \pr{\tilde{D}_{y}} \geq 1- b \left( \frac{1}{n^{gm}}+ \frac{1}{m^{gn}}\right) e^{-c}, \end{align*} where $b,g$ are suitable constants. \end{lemma} \begin{proof} We have that there is a constant $R$ such that $\frac{t_{n,m}}{L_2} \geq R m \log n$. Lemmas \ref{r15} and \ref{B13} give the desired result.
\end{proof} \subsection{The eigenvalues for $y\in \langle e_1,e_2 \rangle \setminus \langle e_1 \rangle $} Let $P_2= \langle e_1,e_2 \rangle \setminus \langle e_1 \rangle $. For $y \in P_2$, we write $y= ae_1 +b e_2$ with $b \neq 0$. We then observe that $Z_y^t(2)= a+ b S^t$, where $S^t$ is a simple random walk on the cycle starting at zero. In this section, we consider the length of the intervals to be $L_3= \delta \log m$, where $\delta $ is a suitable constant. \begin{lemma} Let $\mathcal{I}\subset \mathbb{Z} / m \mathbb{Z}$ with $\vert \mathcal{I} \vert= \sqrt{\log m}$. For every $y \in P_2$ and $t\geq t_{j+1}$, we have that \[\pr{ Z_y^t(2)-Z_y^{t_j}(2) \notin \mathcal{I} } \geq 1/2 .\] \end{lemma} \begin{proof} Writing $y= ae_1 +b e_2$, we have that $Z_y^t(2)-Z_y^{t_j}(2)= b( S^t- S^{t_j})$. Assume that \begin{equation}\label{opp} \pr{ b(S^t- S^{t_j}) \notin \mathcal{I} } < 1/2 . \end{equation} Let $Q$ be the transition matrix of $b S$. We have that \begin{align} \label{qel2}\Vert Q^{t-t_j}- \pi \Vert_2^2 &= \sum_z \frac{1}{m}\vert m Q^{t-t_j}_0(z)-1 \vert^2 \cr &= \sum_z m ( Q^{t-t_j}_0 (z))^2-1 \cr & \geq \sum_{z \in \mathcal{I} } m ( Q^{t-t_j}_0 (z))^2-1. \end{align} The Cauchy--Schwarz inequality leads to \begin{align} \eqref{qel2} & \geq \frac{ m}{\vert \mathcal{I} \vert} \left( \sum_{ z \in \mathcal{I} } Q^{t-t_j}_0 (z)\right)^2-1 \cr & \label{f12} \geq \frac{ m}{4\vert \mathcal{I} \vert}-1\\ & \label{dr} = \frac{ m}{4\sqrt{\log m}}-1, \end{align} where \eqref{f12} follows by applying \eqref{opp}.
Let $g$ be the greatest common divisor of $b$ and $m$. Given that $bS^t$ can be viewed as a simple random walk on the subgroup $g\mathbb{Z}/ m \mathbb{Z}$, we have that \begin{align} \Vert Q^{t-t_j}- \pi \Vert_2^2 & \leq \sum_{y=1}^{m/g -1} e^{ -2 (t-t_j)\left( 1- \cos \frac{2 \pi g y }{m} \right) }\cr & \label{expl} \leq \frac{m}{g} e^{-2(t-t_j)} + \frac{\sqrt{3} m}{ 2g\sqrt{2 \pi (t-t_j)}}\\ &\label{he} \leq \frac{m}{g} e^{-2L_3} + \frac{\sqrt{3} m}{ 2g\sqrt{2 \pi L_3}}, \end{align} where \eqref{expl} is a straightforward application of Lemma \ref{integral}. Equation \eqref{he} contradicts \eqref{dr} for a suitable choice of the constant $\delta$ and this completes the proof. \end{proof} This implies the following corollary. \begin{corollary} For every $y \in P_2$ and $t\geq t_{j+1}$, we have that \[\pr{\vert Z_y^t(2)\vert \geq \sqrt{ \log m}/2} \geq 1/2 .\] \end{corollary} We follow the reasoning of the previous section to conclude the following lemmas. \begin{lemma}\label{B23} For $y \in P_2$, let $B_{j,y}$ denote the event that $\vert Z^s_y(2) \vert > \sqrt{ \log m}/2$ for at least one third of $ [ t_{j+1}, t_{j+2}]$. We have that $$\prcond{B_{j,y}} {\mathcal{G}^y_{j}} \geq \frac{1}{2},$$ for every $j \geq 1$. \end{lemma} \begin{lemma}\label{r4} Recall that $P^t$ is the indicator function that the clock of the second row rings at time $t$ and $A^t_{y,x}= \int_0^t 1_{\lbrace P^s=1 \rbrace } 1_{\lbrace \vert Z_y^s(2) \vert > x\rbrace } ds$ and let $x=\sqrt{\log m}/2$. For $y \in P_2$, consider $\tilde{D}_{y}= \lbrace A^t_{y,x} \geq (2A)^{-1} t_{n,m} \rbrace $. We have that \begin{align*} \pr{\tilde{D}_{y}} \geq 1- \frac{b}{m^{gn}} e^{-c}, \end{align*} where $b,g$ are suitable constants. \end{lemma} \begin{proof} We have that there is a constant $R$ such that $\frac{t_{n,m}}{L_3} \geq R m \log n$. Lemmas \ref{r15} and \ref{B23} give the desired result.
\end{proof} \section{The proof of Lemma \ref{q}} \begin{proof}[Proof of Lemma \ref{q}] We will first consider the case where $m$ is not prime. Let $x=m e^{-K (\log m)^{2/3}},$ $w=m / \tilde{K} $ and $I=1+J$, where $J= \lfloor (\log m)^{1/3} \rfloor$. We also consider $$t_{n,m}= D (m^2 \log n + n e^{C (\log m)^{2/3}}) + cnm^2 \log \log n,$$ which satisfies \begin{equation}\label{fb} t_{n,m} \geq D n L_1 e^{K (\log m)^{2/3}} \log m + cnm^2 \log \log n \end{equation} and \begin{equation}\label{sb} t_{n,m} \geq D m^2 n \log n + cnm^2 \log \log n \geq D (\log m)^{4/3}+ cnm^2 \log \log n. \end{equation} Looking at \eqref{y}, given that $k, w_1, \ldots, w_k$ are such that $E_{t_{n,m},x,w}$ is satisfied, we have that for every $y \in W_I$ it is the case that $\vert Z_y^t(2) \vert \geq m e^{-K (\log m)^{2/3}}$ for at least $A^{-1}t_{n,m}$ steps. Then, \eqref{y} says that \begin{align*} 4 \Vert q^{t_{n,m}}_{\substack{k,w_1, \ldots w_k}}-u \Vert_{T.V.}^2 & \leq \sum_{y \neq 0} e^{ -2 \left( k- \sum_{i=1}^{k} \cos \frac{2 \pi (y^T w_i(2))}{m} \right) } .
\end{align*} The definition of $E_{t_{n,m},x,w}$ gives that \begin{align} 4 \Vert q^{t_{n,m}}_{\substack{k,w_1, \ldots w_k}}-u \Vert_{T.V.}^2 & \leq \sum_{\substack{y \neq 0 \cr y \in \langle e_1 \rangle}} e^{ -2 \left( k- \sum_{i=1}^k \cos \frac{2 \pi y }{m} \right) } + \sum_{ y \in P_2} e^{ -2 k\left(1- \cos \frac{ \tilde{\beta}\sqrt{ \log m} }{ m} \right) } +\cr & \quad \sum_{y \in W_I} e^{ -2 k\left(1- \cos \left(2 \pi e^{-K (\log m)^{2/3}} \right)\right)} +\sum_{y \in Q_I} e^{ -2 k\left(1- \cos \left(2 \pi/ \tilde{K} \right)\right)}. \label{30} \end{align} Equation \eqref{spl} gives that \begin{align} \eqref{30} & \leq m e^{-2k} + 2 \sum_{j=1}^{\infty} e^{- \frac{4j^2 \pi^2}{ m^2} k } + m^2 e^{ -2 k \frac{ \overline{\beta} \log m}{ m} } + m^n e^{ -\tilde{C}k e^{-K (\log m)^{2/3}} } + m^{I } e^{-K'k }. \end{align} Since $k \geq A^{-1}t_{n,m}$, choosing $D$ to be a suitable constant, we have that there is a constant $B$ such that \begin{align}\label{mnk} &\eqref{30} \leq B \frac{1}{n (\log n)^c}. \end{align} Combining \eqref{l} and \eqref{mnk}, we have that there is a universal, positive constant $D$, such that \begin{align*} &\mathbb{E}[ \Vert q^{t_{n,m}}-u \Vert_{T.V.} ] \leq D \frac{1}{n (\log n)^c}. \end{align*} For $m$ prime, we make a choice of $I $ that allows us to prove a sharper result. Set $I= 2 + \sqrt{ \log m}$, $x= m/8 $ and $w= m/ \tilde{K}$. To prove part (a), we assume that $E_{t_{n,m},I, x,w}$ is satisfied for a universal constant $A$ that will be determined later in the proof.
For the case of $m$ prime, we have \begin{align} 4 \Vert q^{t_{n,m}}_{\substack{k,w_1, \ldots w_k}}-u \Vert_{T.V.}^2 & \leq \sum_{\substack{y \neq 0 \cr y \in \langle e_1 \rangle}} e^{ -2 \left( k- \sum_{i=1}^k \cos \frac{2 \pi y }{m} \right) } +\sum_{ y \in P_2} e^{ -2 k\left(1- \cos \frac{\beta \sqrt{\log m} }{ m} \right) } + \cr & \label{tv} \quad \sum_{ y \in W_I} e^{ -2 k\left(1- \cos \frac{ \pi }{4} \right) } + \sum_{y \in Q_I} e^{ -2 \left( k- \sum_{i=1}^k \cos \frac{2 \pi }{\tilde{K}} \right) }. \end{align} Equation \eqref{spl} gives that \begin{align} \eqref{tv} & \leq m e^{-2k} +2 \sum_{j=1}^{m/4} e^{- \frac{4j^2 \pi^2}{ m^2} k } + m^2 e^{ -2 k \frac{ \overline{\beta} \log m}{ m} } \cr &+ m^n e^{ -k\left(2- \sqrt{2} \right) } + m^{I} e^{ -2 k\left(1- \cos \frac{2 \pi \sqrt{\delta} \log m }{m} \right) } \cr & \leq m e^{-2k} + 2 \sum_{j=1}^{\infty} e^{- \frac{4j^2 \pi^2}{ m^2} k } + m^n e^{ -k\left(2- \sqrt{2} \right) } + m^{I} e^{ - \frac{2k \pi^2 }{\tilde{K}^2} }. \end{align} Since $k \geq A^{-1}t_{n,m}$, choosing $D$ to be a suitable constant, we have that there is a constant $B$ such that \begin{align}\label{mnk2} &\eqref{tv} \leq B \frac{1}{n (\log n)^c}. \end{align} Combining \eqref{l} and \eqref{mnk2}, we have that there is a universal, positive constant $D$, such that \begin{align*} &\mathbb{E}[ \Vert q^{t_{n,m}}-u \Vert_{T.V.} ] \leq D \frac{1}{n (\log n)^c}. \end{align*} For the second part of Lemma \ref{q}, we will only focus on the case of general $m$, since the case of $m$ prime follows the same outline.
Lemmas \ref{r1}, \ref{r2}, \ref{r4} and a union bound give \begin{align*} \pr{E^c_{t_{n,m},x,w}} & \leq \pr{ \cup_{y \in W_I} \tilde{B}_{y}^c} + \pr{ \cup_{y \in Q_I} \tilde{D}_{y}^c}+ \pr{ \cup_{y \in P_2} \tilde{D}_{y}^c} \\ &\leq m^n b e^{-c} \frac{1}{m^{g n}} + m^I b e^{-c} \left(\frac{1}{m^{g n}} + \frac{1}{n^{g m}} \right)\\ & \leq \tilde{b} e^{-c} \left(\frac{1}{m^{\tilde{g} n}} + \frac{1}{n^{\tilde{g} m}} \right), \end{align*} which finishes the proof. \end{proof} \bibliographystyle{plain}
https://arxiv.org/abs/1601.07280
On Pure Derived Categories
We investigate the properties of pure derived categories of module categories, and show that pure derived categories share many nice properties of classical derived categories. In particular, we show that bounded pure derived categories can be realized as certain homotopy categories. We introduce the pure projective (resp. injective) dimension of complexes in pure derived categories, and give some criteria for computing these dimensions in terms of the properties of pure projective (resp. injective) resolutions and pure derived functors. As a consequence, we get some equivalent characterizations for the finiteness of the pure global dimension of rings. Finally, pure projective (resp. injective) resolutions of unbounded complexes are considered.
\section{ Introduction} \setcounter{equation}{0} \vspace{0.2cm} Let $(\mathcal{A},\mathcal{E})$ be an exact category in the sense of [Q] and $\mathbf{K}(\mathcal{A})$ its homotopy category. Then one can consider the triangulated quotient of $\mathbf{K}(\mathcal{A})$ by $\mathcal{E}$, called the derived category of $(\mathcal{A},\mathcal{E})$, which was studied by Neeman in [N1]. Now let $R$ be a ring and $R$-Mod the category of left $R$-modules. It is known that there are two interesting exact structures in $R$-$\Mod$; one is the usual exact structure and the other is the pure exact structure. The derived category with respect to the first one is the classical derived category, which provides a broader framework for studying homological algebra, while the derived category with respect to the second one is the pure derived category, which has attracted the attention of many authors; see [CHo], [EGO], [Gi], [Kr], [N3], [St] for the details. In general, triangulated quotients are not intuitive since they are usually realized via a calculus of fractions. However, bounded derived categories are well understood since they are equivalent to certain homotopy categories of projective modules. It is known that pure projective modules are exactly the projective objects with respect to the pure exact structure, see [KS], [EJ], [P], and [W]. So, it is expected that bounded pure derived categories will share some nice properties of classical bounded derived categories. In Section 3, we show that for a ring $R$, $R$-$\Mod$ is a full subcategory of its bounded pure derived category. Moreover, we show that the bounded pure derived category of $R$-$\Mod$ is triangle equivalent to a triangulated full subcategory of the bounded above (resp. below) homotopy category of pure projective (resp. injective) $R$-modules. Section 4 is devoted to building triangulated functors from (bounded) pure derived categories. A very natural choice is the right ``derived'' version of $\Hom$.
For this, we first establish the pure projective (resp. injective) resolutions of bounded complexes, and then use them to define right pure derived functors of $\Hom$ which preserve the corresponding triangles. As applications, we introduce and study the pure projective (resp. injective) dimension of complexes. In particular, we obtain some criteria for computing this dimension in terms of the properties of pure projective (resp. injective) resolutions and the vanishing of pure derived functors. As a consequence, we get some equivalent characterizations for the finiteness of the pure global dimension of rings. The results in this section are standard analogs of the main results in [AvF], and generalize the corresponding ones for modules in [KS] and [S]. In Section 5, pure projective (resp. injective) resolutions of certain unbounded complexes are considered. We use the technique of homotopy (co)limits to show that any bounded below (resp. above) complex admits a pure projective (resp. injective) resolution. \bigskip \section{Preliminaries } \setcounter{equation}{0} Throughout this paper, $R$ is an associative ring with identity and $R$-$\Mod$ is the category of left $R$-modules. As usual, we use $\mathbf{C}(R)$ and $\mathbf{K}(R)$ to denote the category of complexes and the homotopy category of $R$-$\Mod$, respectively. When we say ``$R$-module'', without an adjective, we mean left $R$-module. For any $X\in\mathbf{C}(R)$, we write $$\CD X:=\ \cdots @> >> X^{i-1} @>d_{X} ^{i-1}>> X^{i} @>d_{X} ^{i}>> X^{i+1} @>d_{X} ^{i+1}>> X^{i+2} @> >> \cdots.\endCD$$ We regard an $R$-module $M$ as the stalk complex, that is, a complex concentrated in degree 0. We recall the boundedness conditions for complexes, which are standard in homological algebra; see for example [GM]. Let $X\in\mathbf{C}(R)$. If $X^{i}=0$ for $i\gg 0$, then $X$ is called {\it bounded above} (or {\it bounded on the right}). If $X^{i}=0$ for $i\ll 0$, then $X$ is called {\it bounded below} (or {\it bounded on the left}).
$X$ is called {\it bounded} if it is both bounded above and bounded below. A cochain map $f:X\rightarrow Y$ in $\mathbf{C}(R)$ is called a {\it quasi-isomorphism} if it induces isomorphisms on all cohomology groups; and $f$ is called a {\it homotopy equivalence} if there exists a cochain map $g:Y\rightarrow X$ such that $g\circ f\sim \Id_{X}$ and $f\circ g\sim \Id_{Y}$. We write $\Con(f)$ for the {\it mapping cone} of a cochain map $f$. Let $X,Y\in\mathbf{C}(R)$. We use $\Hom_{R}(X,Y)$ to denote the {\it total complex}, that is, the complex of $\mathbb{Z}$-modules (where $\mathbb{Z}$ is the additive group of integers) $$\CD\cdots @> >>\prod \limits_{i\in \mathbb{Z}}\Hom_{R}(X^{i},Y^{i+n}) @>d ^{n}>> \prod \limits_{i\in \mathbb{Z}} \Hom_{R}(X^{i},Y^{i+n+1}) @> >> \cdots, \endCD$$ where $\prod \limits_{i\in \mathbb{Z}}\Hom_{R}(X^{i},Y^{i+n})$ lies in degree $n$. For any $\varphi \in \Hom_{R}(X,Y)^{n}$, $d^{n}(\varphi)=(d _{Y}^{i+n}\circ \varphi^{i}-(-1)^{n}\varphi^{i+1}\circ d ^{i}_{X})_{i\in \mathbb{Z}}$. Note that this construction defines a bifunctor $$\Hom_R(-,-):\mathbf{K}(R)^{op}\times \mathbf{K}(R)\to \mathbf{K}(\mathbb{Z}).$$ \begin{definition} {\rm ([W]) A short exact sequence $$0 \to A \overset{f} \to B \overset{g} \to C \to 0$$ in $R$-$\Mod$ is called {\it pure exact} if for any right $R$-module $M$, the induced sequence $$0 \to M\otimes_{R} A\to M \otimes_{R} B \to M \otimes_{R} C \to 0$$ is exact. In this case, $f$ is called {\it pure monic} and $g$ is called {\it pure epic}.} \end{definition} \begin{remark}{\rm Using Cohn's theorem (see [R, Theorem 3.69]), we have that a short exact sequence $$0 \to A \to B \to C \to 0$$ in $R$-$\Mod$ is pure exact if and only if $$0 \to\Hom_{R}(F,A) \to \Hom_{R}(F,B) \to \Hom_{R}(F,C) \to 0$$ is exact for any finitely presented $R$-module $F$.} \end{remark} \vspace{0.2cm} In general, the exactness of a complex of $R$-modules is defined ``pointwise''.
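To illustrate Definition 2.1 and Remark 2.2, we include a standard example (the example is ours and is not taken from the surrounding text):

```latex
% Example (ours; standard): over R = Z, the short exact sequence
\[
0 \to \mathbb{Z} \xrightarrow{\ 2\ } \mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0
\]
% is exact but NOT pure exact. Indeed, tensoring with the right Z-module
% M = Z/2Z yields
\[
0 \to \mathbb{Z}/2\mathbb{Z} \xrightarrow{\ 0\ } \mathbb{Z}/2\mathbb{Z}
  \to \mathbb{Z}/2\mathbb{Z} \to 0,
\]
% which is not exact on the left. Equivalently, in the criterion of Remark 2.2
% take the finitely presented module F = Z/2Z: applying Hom_Z(Z/2Z, -) gives
\[
0 \to 0 \to 0 \to \mathbb{Z}/2\mathbb{Z} \to 0,
\]
% which is not exact on the right. By contrast, every split short exact
% sequence is pure exact, since M \otimes_R - is additive and preserves
% split exact sequences.
```

In particular, purity is a genuine restriction already over $\mathbb{Z}$.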
This pointwise definition is convenient for understanding bounded derived categories. Let $(\mathcal{A},\mathcal{E})$ be an exact category in the sense of [Q]. Following [N1], a complex $X$ is called {\it acyclic} with respect to the exact structure of $\mathcal{A}$ if each differential $d_{X}^{i}$ decomposes as $$X^{i}\twoheadrightarrow D^{i} \rightarrowtail X^{i+1},$$ where the former morphism is an admissible epimorphism and the latter is an admissible monomorphism; furthermore, the sequence $$D^{i}\rightarrowtail X^{i+1}\twoheadrightarrow D^{i+1}$$ is exact for any $i\in \mathbb{Z}$, see also [Gi, Section 4.2]. It is now natural to propose the following definition, which will be convenient for understanding bounded pure derived categories later. \begin{definition} {\rm Let $X\in\mathbf{C}(R)$ and $n\in \mathbb{Z}$. Then $X$ is called {\it pure exact at $n$} if the differentials $d_{X}^{n-1}$ and $d_{X}^{n}$ decompose as above, and the sequence $$0 \to K^{n} \to X^{n} \to C^{n-1} \to 0$$ is pure exact, where $K^{n}=\Ker d_{X}^{n}$ and $C^{n-1}=\Coker d_{X}^{n-1}$. $X$ is called {\it pure exact} if it is pure exact at $n$ for all $n$.} \end{definition} \begin{remark} {\rm \begin{enumerate} \item[] \item[(1)] $X\in\mathbf{C}(R)$ is pure exact if and only if $M\otimes_{R} X$ is exact for any right $R$-module $M$, and if and only if $\Hom_{R}(F,X)$ is exact for any finitely presented $R$-module $F$. \item[(2)] A direct limit of pure exact complexes is again pure exact, since the tensor functor commutes with direct limits by [R, Theorem 5.27]. \item[(3)] By definition, pure exact complexes are exactly the acyclic complexes with respect to the pure exact structure in the sense of Neeman [N1]. \end{enumerate}} \end{remark} \begin{definition} {\rm ([W]) A module $M\in R$-$\Mod$ is called {\it pure projective} (resp. {\it injective}) if it is projective (resp. injective) with respect to every pure exact sequence.} \end{definition} Let $\mathcal{PP}$ (resp.
$\mathcal{PI}$) be the class of all pure projective (resp.~injective) $R$-modules. We use $\mathbf{K}^{-}(\mathcal{PP})$ (resp. $\mathbf{K}^{+}(\mathcal{PI})$) to denote the bounded above (resp. below) homotopy category of $\mathcal{PP}$ (resp. $\mathcal{PI}$). \begin{remark} {\rm \begin{enumerate} \item[] \item[(1)] We write $(-)^+:=\Hom_{\mathbb{Z}}(-,\mathbb{Q}/\mathbb{Z})$, where $\mathbb{Q}$ is the additive group of rational numbers. By [EJ, Proposition 5.3.7], we have that $M^+$ is a pure injective left $R$-module for any right $R$-module $M$. Using the fact that every $R$-module is a direct limit of finitely presented $R$-modules ([R, Lemma 5.39]), we have that pure projective modules are precisely the direct summands of direct sums of finitely presented modules. \item[(2)] By (1), it is easy to check that a complex $X$ is pure exact if and only if $\Hom _{R}(P,X)$ is exact for any $P\in \mathcal {P}\mathcal {P}$, and if and only if $\Hom _{R}(X,I)$ is exact for any $I\in \mathcal {P}\mathcal {I}$. \end{enumerate}} \end{remark} We need the following definition. \begin{definition} {\rm A cochain map $f:X \to Y$ in $\mathbf{C}(R)$ is called a {\it pure quasi-isomorphism} if its mapping cone $\Con(f)$ is a pure exact complex.} \end{definition} \begin{remark} {\rm \begin{enumerate} \item[] \item[(1)] A cochain map $f: X\to Y$ in $\mathbf{C}(R)$ is a pure quasi-isomorphism if and only if $$M\otimes_{R}f: M\otimes_{R}X \to M \otimes_{R}Y$$ is a quasi-isomorphism for any right $R$-module $M$. \item[(2)] By Remark 2.6, a cochain map $f:X \to Y$ in $\mathbf{C}(R)$ is a pure quasi-isomorphism if and only if $$\Hom_{R}(P,f):\Hom_{R}(P,X)\rightarrow \Hom_{R}(P,Y)$$ is a quasi-isomorphism for any $P\in \mathcal {P}\mathcal {P}$, and if and only if $$\Hom_{R}(f,I):\Hom_{R}(Y,I)\to \Hom_{R}(X,I)$$ is a quasi-isomorphism for any $I\in \mathcal {P}\mathcal {I}$.
\end{enumerate}} \end{remark} The following result concerning both pure exact complexes and pure quasi-isomorphisms is essentially contained in [CFH]. \begin{lemma} \begin{enumerate} \item[] \item[(1)] Let $X\in\mathbf{C}(R)$. Then $X$ is pure exact if and only if $\emph{Hom}_{R}(P,X)$ is exact for any $P\in \mathbf{K}^{-}(\mathcal {P}\mathcal {P})$, and if and only if $\emph{Hom}_{R}(X,I)$ is exact for any $I\in \mathbf{K}^{+}(\mathcal {P}\mathcal {I})$. \item[(2)] A cochain map $f$ in $\mathbf{C}(R)$ is a pure quasi-isomorphism if and only if $\emph{Hom}_{R}(P,f)$ is a quasi-isomorphism for any $P\in \mathbf{K}^{-}(\mathcal {P}\mathcal {P})$, and if and only if $\emph{Hom}_{R}(f,I)$ is a quasi-isomorphism for any $I\in \mathbf{K}^{+}(\mathcal {P}\mathcal {I})$. \end{enumerate} \end{lemma} {\it Proof.} The assertion (1) follows from Remark 2.6(2) and [CFH, Lemmas 2.4 and 2.5], and the assertion (2) follows from Remark 2.8(2) and [CFH, Propositions 2.6 and 2.7]. \hfill$\square$ \begin{lemma} \begin{enumerate} \item[] \item[(1)] Let $f:X\rightarrow Y$ be a pure quasi-isomorphism in $\mathbf{C}(R)$ with $X,~Y\in \mathbf{K}^{-}(\mathcal {P}\mathcal {P})$. Then $f$ is a homotopy equivalence. \item[(2)] Let $f:X\rightarrow Y$ be a pure quasi-isomorphism in $\mathbf{C}(R)$ with $X,~Y\in \mathbf{K}^{+}(\mathcal {P}\mathcal {I})$. Then $f$ is a homotopy equivalence. \end{enumerate} \end{lemma} {\it Proof.} (1) Because there exists a quasi-isomorphism $$\Hom_{R}(Y,f):\Hom_{R}(Y,X)\rightarrow \Hom_{R}(Y,Y)$$ by Lemma 2.9, we have an isomorphism $$\H^{0}(\Hom_{R}(Y,f)):\H^{0}(\Hom_{R}(Y,X))\rightarrow \H^{0}(\Hom_{R}(Y,Y)).$$ One can easily check that there exists a cochain map $g:Y\rightarrow X$ such that $f\circ g\sim \Id_{Y}$. Similarly, there exists a cochain map $h:X\rightarrow Y$ such that $g\circ h\sim \Id_{X}$. Then $h\sim (f\circ g)\circ h=f\circ (g\circ h)\sim f$, so $g\circ f\sim g\circ h\sim \Id_{X}$. As a consequence, $g$ and $f$ are homotopy equivalences. (2) It is the dual of (1).
\hfill$\square$ \begin{lemma} \begin{enumerate} \item[] \item[(1)] Let $Y\rightarrow X$ be a pure quasi-isomorphism in $\mathbf{C}(R)$ with $X\in \mathbf{K}^{b}(R)$ and $Y\in \mathbf{K}^{+}(R)$. Then there exists a pure quasi-isomorphism $X'\rightarrow Y$ with $X'\in \mathbf{K}^{b}(R)$. \item[(2)] Let $X\rightarrow Y$ be a pure quasi-isomorphism in $\mathbf{C}(R)$ with $X\in \mathbf{K}^{+}(R)$ and $Y\in \mathbf{K}(R)$. Then there exists a pure quasi-isomorphism $Y\rightarrow X'$ with $X'\in \mathbf{K}^{+}(R)$. \end{enumerate} \end{lemma} {\it Proof.} (1) We can assume that $Y^{n}=0$ for any $n<0$ and that $\H^{i}\Hom_{R}(P,Y)=0$ for any $P\in \mathcal {P}\mathcal {P}$ and $i\geq m+1$. We have the following commutative diagram $$\CD \cdots @> >> 0 @> >> Y^{0} @> >> \cdots @> >> Y^{m-1} @> d_{Y}^{m-1} >> \Ker d_{Y}^{m} @> >> 0 @> >> \cdots \\ @. @. @V \Id_{Y^{0}} VV @. @V \Id_{Y^{m-1}} VV @V VV @V VV @. \\ \cdots @> >> 0 @> >> Y^{0} @> >> \cdots @> >> Y^{m-1} @> d_{Y}^{m-1} >> Y^{m} @> >> Y^{m+1} @> >> \cdots. \endCD$$ Let the upper row be the complex $X'$. Since $\Hom_{R}(P,-)$ preserves kernels, the cochain map is clearly a pure quasi-isomorphism by Remark 2.8(2). (2) We can assume that $\H^{i}(M\otimes_{R}Y)=0$ for any right $R$-module $M$ and $i\leq -1$. We have the following commutative diagram $$\CD \cdots @> >> Y^{-2} @> >> Y^{-1} @> d_{Y}^{-1}>> Y^{0} @> >> Y^{1} @> >> \cdots \\ @. @V VV @V VV @V \widetilde{d}_{Y}^{-1} VV @V \Id_{Y^{1}} VV @. \\ \cdots @> >> 0 @> >> 0 @> >> \Coker d_{Y}^{-1} @> >> Y^{1} @> >> \cdots. \endCD$$ Let the lower row be the complex $X'$. Since $M\otimes_{R}-$ preserves cokernels, the cochain map is clearly a pure quasi-isomorphism by Remark 2.8(1). \hfill$\square$ \bigskip \section{Pure derived categories} \setcounter{equation}{0} Put $\mathbf{K}_{\mathcal {P}\mathcal {E}}(R):=\{X\in \mathbf{K}(R)\mid X$~is~pure~exact$\}$.
Notice that pure exact complexes are closed under homotopy equivalences, so $\mathbf{K}_{\mathcal {P}\mathcal {E}}(R)$ is well defined. If $f:X\to Y$ is a cochain map between pure exact complexes, then $\Con(f)$ is again pure exact. Thus $\mathbf{K}_{\mathcal {P}\mathcal {E}}(R)$ is a triangulated subcategory of $\mathbf{K}(R)$. Because pure exact complexes are closed under direct summands (for instance, since $M\otimes_{R}-$ preserves direct summands), $\mathbf{K}_{\mathcal{P}\mathcal {E}}(R)$ is a thick subcategory of $\mathbf{K}(R)$. Then by the Verdier quotient construction, we get the pure derived category $$\mathbf{D_{pur}}(R):=\mathbf{K}(R)/\mathbf{K}_{\mathcal {P}\mathcal {E}}(R).$$ Similarly, we define $$\mathbf{D^{*}_{pur}}(R):=\mathbf{K^{*}}(R)/\mathbf{K^{*}}_{\mathcal {P}\mathcal {E}}(R)$$ for $*\in \{+,~-,~b\}$. Note that the pure derived category coincides with the one given in [N1] and pure exact complexes here are exactly the exact structure there. Note that, as usual, a morphism from $X$ to $Y$ in $\mathbf{D_{pur}}(R)$ can be viewed as a graph (left roof) $$\xymatrix{ X & \bullet \ar@{=>}[l]_{s} \ar[r]^{a} & Y}$$ with $s$ a pure quasi-isomorphism ([GM, Chapters III.2.8 and III.2.9]). Two roofs $$\xymatrix{ X & \bullet \ar@{=>}[l]_{s} \ar[r]^{a} & Y}~~\mathrm{and}~~\xymatrix{ X & \bullet \ar@{=>}[l]_{s'} \ar[r]^{a'} & Y}$$ are equivalent if there exists the following commutative diagram $$\xymatrix{ & \bullet \ar@{=>}[dl]_{s} \ar[dr]^{a}\\ X & \bullet \ar@{=>}[l]_{g} \ar[r]\ar[d] \ar[u] & Y\\ & \bullet \ar@{=>}[ul]^{s'} \ar[ur]_{a'} }$$ with $g$ a pure quasi-isomorphism. So, two complexes $X, Y$ are isomorphic in $\mathbf{D_{pur}}(R)$ if there exists a graph $$\xymatrix{ X & \bullet \ar@{=>}[l]_{s} \ar@{=>}[r]^{a} & Y}$$ with $s$ and $a$ pure quasi-isomorphisms. If either $Y\in \mathbf{K}^{+}(\mathcal {P}\mathcal{I})$ or $X\in \mathbf{K}^{-}(\mathcal {P}\mathcal {P})$, then the morphisms in $\Hom _{\mathbf{D_{pur}(R)}}(X,Y)$ admit a simple description, as shown below.
\begin{proposition} \begin{enumerate} \item[] \item[(1)] Let $X\in \mathbf{K}^{-}(\mathcal {P}\mathcal {P})$ and $Y\in \mathbf{K}(R)$. Then the localization functor $$\mathbb{F}:\Hom_{\mathbf{K}(R)}(X,Y)\rightarrow \Hom_{\mathbf{D_{pur}}(R)}(X,Y),~f\mapsto f/ \Id_{X}\ (left\ roof),$$ induces an isomorphism of abelian groups. \item[(2)] Let $Y\in \mathbf{K}^{+}(\mathcal {P}\mathcal {I})$ and $X\in \mathbf{K}(R)$. Then the localization functor $$\mathbb{F}:\Hom_{\mathbf{K}(R)}(X,Y)\rightarrow \Hom_{\mathbf{D_{pur}}(R)}(X,Y),~f\mapsto \Id_{Y}\backslash f\ (right\ roof),$$ induces an isomorphism of abelian groups. \end{enumerate} \end{proposition} {\it Proof.} We only prove (1); the proof of (2) is dual. If $f/\Id_{X}=0=0/\Id_{X}$, then there exists a pure quasi-isomorphism $g:Z \rightarrow X$ such that $f\circ g\sim 0$. By the proof of Lemma 2.10, there exists a pure quasi-isomorphism $h:X\to Z$ such that $g\circ h\sim \Id_{X}$. So $f\sim f\circ g\circ h\sim 0$. For any $f/s\in \Hom_{\mathbf{D_{pur}(R)}}(X,Y)$, since $s$ is a pure quasi-isomorphism, again by the proof of Lemma 2.10 there exists a pure quasi-isomorphism $t$ such that $s\circ t\sim \Id_{X}$. So we have $f/s=(f\circ t)/\Id_{X}$ in $\mathbf{D_{pur}}(R)$. \hfill$\square$ \vspace{0.2cm} \begin{proposition} For a ring $R$, we have \begin{enumerate} \item[(1)] $\mathbf{D^{b}_{pur}}(R)$ is a full subcategory of $\mathbf{D^{+}_{pur}}(R)$, and $\mathbf{D^{+}_{pur}}(R)$ is a full subcategory of $\mathbf{D_{pur}}(R)$. \item[(2)] $\mathbf{D^{b}_{pur}}(R)$ is a full subcategory of $\mathbf{D^{-}_{pur}}(R)$, and $\mathbf{D^{-}_{pur}}(R)$ is a full subcategory of $\mathbf{D_{pur}}(R)$. \item[(3)] $\mathbf{D^{b}_{pur}}(R)=\mathbf{D^{-}_{pur}}(R)\cap \mathbf{D^{+}_{pur}}(R)$. \end{enumerate} \end{proposition} {\it Proof.} The assertion (1) is a consequence of [GM, Proposition 3.2.10] and Lemma 2.11, and the assertion (2) is the dual of (1). The assertion (3) is an immediate consequence of (1) and (2).
\hfill$\square$ \vspace{0.2cm} \begin{theorem} For a ring $R$, $R$-$\Mod$ is a full subcategory of $\mathbf{D^{b}_{pur}}(R)$, that is, the composition of functors $$R{\text -}\Mod \rightarrow \mathbf{K^{b}}(R)\rightarrow \mathbf{D^{b}_{pur}}(R)$$ is fully faithful. \end{theorem} {\it Proof.} For any $X, Y\in R$-$\Mod$, it suffices to prove that the morphism $$\mathbb{F}:\Hom_{R}(X,Y)\rightarrow \Hom_{\mathbf{D_{pur}}(R)}(X,Y)$$ is an isomorphism. Let $f\in \Hom_{R}(X,Y)$. If $\mathbb{F}(f)=0$, then there exists a pure quasi-isomorphism $s:Z \rightarrow X$ such that $f\circ s\sim 0$. So $\H^{0}(f)\circ\H^{0}(s)=0$. Since $\H^{0}(s)$ is an isomorphism, we have $f=0$. Let $a/s$ be a morphism in $\Hom_{\mathbf{D_{pur}}(R)}(X,Y)$. Then we have a diagram $$\xymatrix{ X & Z\ar@{=>}[l]_{s} \ar[r]^{a} & Y},$$ where $s$ is a pure quasi-isomorphism, and hence a quasi-isomorphism. So $\H^{0}(s)\in \Hom_{R}(\H^{0}(Z),X)$ is an isomorphism in $R$-$\Mod$ (note that $\H^0(X)=X$). Put $f:=\H^{0}(a)\circ \H^{0}(s)^{-1} \in \Hom_{R}(X,Y)$. Consider the truncation $$\CD U:=~~~ \cdots @> >> Z^{-2} @> >> Z^{-1} @>d_{Z}^{-1}>> \Ker d_{Z}^{0} @> >> 0 \endCD$$ of $Z$ and the canonical map $i:U\rightarrow Z$. Note that, as in Lemma 2.11, $i$ is a pure quasi-isomorphism. Then $s\circ i$ is also a pure quasi-isomorphism. From the commutative diagram $$\CD U @> i >> Z \\ @V VV @V s VV \\ \H^{0}(Z) @> \H^{0}(s) >> X, \endCD$$ we get $f\circ s\circ i=\H^{0}(a)\circ \H^{0}(s)^{-1}\circ s\circ i=a\circ i$. So the following diagram of complexes $$\xymatrix{ & Z \ar@{=>}[dl]_{s} \ar[dr]^{a}\\ X & U \ar@{=>}[l]_{si} \ar[r]^{ai}\ar[d]^{si} \ar[u]^{i} & Y\\ & X \ar@{=>}[ul]^{\Id_{X}} \ar[ur]_{f} }$$ is commutative. It follows that $\mathbb{F}(f)=f/\Id_{X}=a/s$.
\hfill$\square$ \vspace{0.2cm} For any $X\in\mathbf{C}(R)$, we write $$~~~~\mathbf{inf_{p}}X:=\inf \{n\in \mathbb{Z} \mid X\ {\rm is\ not\ pure\ exact\ at}\ n\},\ {\rm and}$$ $$\mathbf{sup_{p}}X:=\sup \{n\in \mathbb{Z}\mid X\ {\rm is\ not\ pure\ exact\ at}\ n\}.$$ If $X$ is pure exact at no $n$, that is, the above sets are all of $\mathbb{Z}$, then $\mathbf{inf_{p}}X=-\infty$ and $\mathbf{sup_{p}}X=\infty$. If $X$ is pure exact at $n$ for all $n$, that is, $X$ is a pure exact complex, then we set $\mathbf{inf_{p}}X=\infty$ and $\mathbf{sup_{p}}X=-\infty$, following the usual conventions for the infimum and supremum of the empty set. We will rely heavily on these two numbers in the remainder of this paper. Put $$\mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P}):=\{X\in \mathbf{K^{-}}(\mathcal {P}\mathcal {P}) \mid \mathbf{inf_{p}}X \ {\rm is\ finite}\},\ {\rm and}$$ $$\mathbf{K^{+,pb}}(\mathcal {P}\mathcal {I}):=\{X\in \mathbf{K^{+}}(\mathcal {P}\mathcal {I}) \mid \mathbf{sup_{p}}X \ {\rm is\ finite}\}.$$ \begin{proposition} Let $X\in\mathbf{C}(R)$. Then the following hold. \begin{enumerate} \item[(1)] $X$ is pure exact in degree $\leq n$ if and only if $M\otimes_{R} X$ is exact in degree $\leq n$ for any right $R$-module $M$. \item[(2)] $X$ is pure exact in degree $\geq n$ if and only if $\Hom_{R}(P,X)$ is exact in degree $\geq n$ for any $P\in \mathcal {PP}$. \item[(3)] The numbers $\mathbf{inf_{p}}X$ and $\mathbf{sup_{p}}X$ are well defined for any $X\in \mathbf{D_{pur}}(R)$, that is, if $X\cong Y$ in $\mathbf{D_{pur}}(R)$, then $\mathbf{inf_{p}}X=\mathbf{inf_{p}}Y$ and $\mathbf{sup_{p}}X=\mathbf{sup_{p}}Y$. \item[(4)] $\mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})$ and $\mathbf{K^{+,pb}}(\mathcal {P}\mathcal {I})$ are triangulated subcategories of $\mathbf{K^{-}}(\mathcal{P}\mathcal {P})$ and $\mathbf{K^{+}}(\mathcal {P}\mathcal{I})$, respectively.
\end{enumerate} \end{proposition} {\it Proof.} (1) Consider the following commutative diagram (all tensor products are taken over $R$) $$\xymatrix{\cdots \ar[r] & M\otimes X^{n-1}\ar[rr]^{M\otimes d_{X}^{n-1}} \ar@{>>}[rd]_<<<<<<<{M\otimes \widetilde{d}_{X}^{n-2}} & & M\otimes X^{n} \ar@{>>}[rd]_<<<<<<<{M\otimes \widetilde{d}_{X}^{n-1}} \ar[rr]^{M\otimes d_{X}^{n}} & & M\otimes X^{n+1} \ar[r] & \cdots, \\ & & M\otimes C^{n-2} \ar[ru]_{M\otimes \iota ^{n}}& & M\otimes C^{n-1} \ar[ru]_{M\otimes \iota ^{n+1}}& & }$$ where $\widetilde{d}_{X}^{n-2}$ (resp. $\widetilde{d}_{X}^{n-1}$) is the canonical epimorphism onto the cokernel $C^{n-2}$ (resp. $C^{n-1}$) of $d_{X}^{n-2}$ (resp. $d_{X}^{n-1}$), and $\iota ^{n}$ (resp. $\iota ^{n+1}$) is the induced map, which is a kernel of $d_{X}^{n}$ (resp. $d_{X}^{n+1}$), so that $d_{X}^{n-1}=\iota^{n}\circ \widetilde{d}_{X}^{n-2}$ and $d_{X}^{n}=\iota^{n+1}\circ \widetilde{d}_{X}^{n-1}$. Then the assertion follows by a standard diagram chase. (2) The proof is similar to that of (1). (3) We only need to prove the assertion when both $\mathbf{inf_{p}}X$ (resp. $\mathbf{sup_{p}}X$) and $\mathbf{inf_{p}}Y$ (resp. $\mathbf{sup_{p}}Y$) are finite. By Remark 2.8, it is an immediate consequence of (1) and (2). (4) We only prove that $\mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})$ is a triangulated subcategory of $\mathbf{K^{-}}(\mathcal{P}\mathcal {P})$; the proof of the other assertion is similar. Observe that $\mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})$ is closed under shifts. So it suffices to show that $\mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})$ is closed under extensions. Let $$X\to Y\to Z\to X[1]$$ be a triangle in $\mathbf{K^{-}}(\mathcal{P}\mathcal {P})$ with $X,Z\in \mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})$. Then we have a triangle $$M\otimes_R X\to M\otimes_R Y\to M\otimes_R Z\to M\otimes_R X[1]$$ in $\mathbf{K}(\mathbb{Z})$ for any right $R$-module $M$. It induces a long exact sequence of cohomology groups, since $\H^0(-)$ is a cohomological functor by [GM, Chapter IV.1.6]. By (1) there exists $n\in \mathbb{Z}$ such that both $M\otimes_R X$ and $M\otimes_R Z$ are exact in degree $\leq n$, so $M\otimes_R Y$ is also exact in degree $\leq n$.
Thus we have $Y\in\mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})$. \hfill$\square$ \begin{proposition} \begin{enumerate} \item[] \item[(1)] There exist a functor $P:\mathbf{K^{b}}(R)\rightarrow \mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})$ and a pure quasi-isomorphism $f_{X}:P_{X}\rightarrow X$ for any $X\in \mathbf{K^{b}}(R)$, which is functorial in $X$. \item[(2)] There exist a functor $I:\mathbf{K^{b}}(R)\rightarrow \mathbf{K^{+,pb}}(\mathcal {P}\mathcal {I})$ and a pure quasi-isomorphism $g_{X}:X\rightarrow I_{X}$ for any $X\in \mathbf{K^{b}}(R)$, which is functorial in $X$. \end{enumerate} \end{proposition} {\it Proof.} (1) We first prove that for any $X\in \mathbf{K^{b}}(R)$, there exists a pure quasi-isomorphism $P_{X}\rightarrow X$ with $P_{X}\in \mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})$. We proceed by induction on the cardinality $|\mathcal {W}(X)|$ of the finite set $\mathcal {W}(X):=\{i\in \mathbb{Z}\mid X^{i}\neq 0 \}$. If $|\mathcal {W}(X)|=1$, then the assertion follows from the fact that every module admits a pure projective precover (see [EJ, Example 8.3.2]). Now suppose that $|\mathcal {W}(X)|\geq 2$ with $X^{j}\neq 0$ and $X^{i}=0$ for any $i<j$. Then we have a distinguished triangle $$\CD X_{1} @>u>> X_{2} @> >> X @> >> X_{1}[1] \endCD$$ in $\mathbf{K^{b}}(R)$, where $X_{1}=X^{j}[-j-1]$ and $X_{2}=X^{>j}$. By the induction hypothesis, there exist pure quasi-isomorphisms $f_{X_{1}}:P_{X_{1}}\rightarrow X_{1}$ and $f_{X_{2}}:P_{X_{2}}\rightarrow X_{2}$ with $P_{X_{1}}, P_{X_{2}}\in \mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})$. Then by Lemma 2.9, $f_{X_{2}}$ induces an isomorphism $$\Hom_{\mathbf{K}(R)} (P_{X_{1}},P_{X_{2}})\cong \Hom_{\mathbf{K}(R)}(P_{X_{1}},X_{2}).$$ So there exists a morphism $f:P_{X_{1}}\rightarrow P_{X_{2}}$, which is unique up to homotopy, such that $f_{X_{2}}\circ f=u\circ f_{X_{1}}$ in $\mathbf{K}(R)$. We have the distinguished triangle $$\CD P_{X_{1}} @> f >> P_{X_{2}} @> >> \Con(f) @> >> P_{X_{1}}[1] \endCD$$ in $\mathbf{K^{-,pb}}(\mathcal{PP})$.
Then there exists a morphism $f_{X}:\Con(f)\rightarrow X$ such that the following diagram $$\CD P_{X_{1}} @> f >> P_{X_{2}} @> >> \Con(f) @> >> P_{X_{1}}[1] \\ @V f_{X_{1}} VV @V f_{X_{2}} VV @V f_{X} VV @V f_{X_{1}}[1] VV \\ X_{1} @> u >> X_{2} @> >> X @> >> X_{1}[1] \endCD$$ in $\mathbf{K}(R)$ commutes. For any $P\in \mathcal {P}\mathcal {P}$, we have the following commutative diagram $$\CD \Hom_{R}(P,P_{X_{1}}) @> >> \Hom_{R}(P,P_{X_{2}}) @> >> \Hom_{R}(P,\Con(f)) @> >> \Hom_{R}(P,P_{X_{1}}[1]) \\ @V (f_{X_{1}})_{\ast} VV @V (f_{X_{2}})_{\ast} VV @V (f_{X})_{\ast} VV @V (f_{X_{1}}[1])_{\ast} VV \\ \Hom_{R}(P,X_{1}) @> >> \Hom_{R}(P,X_{2}) @> >> \Hom_{R}(P,~X) @> >> \Hom_{R}(P,X_{1}[1]) \endCD$$ in $\mathbf{K}(\mathbb{Z})$, where both rows are exact triangles and $(-)_{*}$ denotes the functor $\Hom_{R}(P,-)$. Since both $f_{X_{1}}$ and $f_{X_{2}}$ are pure quasi-isomorphisms, we have that both $(f_{X_{1}})_{\ast}$ and $(f_{X_{2}})_{\ast}$ are quasi-isomorphisms. Passing to the long exact cohomology sequences and applying the five lemma, we get that $(f_{X})_{\ast}$ is a quasi-isomorphism, so $f_{X}$ is a pure quasi-isomorphism by Remark 2.8(2). Put $P_{X}:=\Con(f)$. Then we have a pure quasi-isomorphism $f_{X}:P_{X}\rightarrow X$ with $P_{X}\in \mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})$. In the following we prove that $f_{X}$ is functorial in $X$. Let $X,Y\in \mathbf{K}^{b}(R)$. Then we have two pure quasi-isomorphisms $f_{X}:P_{X}\rightarrow X$ and $f_{Y}:P_{Y}\rightarrow Y$. These induce an isomorphism $$\Hom_{\mathbf{K}(R)}(P_{X},P_{Y})\cong \Hom_{\mathbf{K}(R)}(P_{X},Y).$$ Let $f:X\rightarrow Y$ be a cochain map. Then there exists a cochain map $f\circ f_{X}:P_{X}\rightarrow Y$. Using the above isomorphism, we have that there exists a cochain map $f':P_{X}\rightarrow P_{Y}$, unique up to homotopy, such that the following diagram $$\CD P_{X} @>f_{X}>> X \\ @V f' VV @V f VV \\ P_{Y} @>f_{Y}>> Y \endCD$$ commutes up to homotopy. Putting $P(f):=f'$, the uniqueness guarantees that $P$ respects identities and compositions in $\mathbf{K}(R)$, so $P$ is a functor and $f_{X}$ is functorial in $X$; in particular, taking $Y=X$ and $f=\Id_{X}$ shows that $P_{X}$ is unique up to homotopy equivalence. This completes the proof of (1).
(2) It is the dual of (1), using the fact that every module admits a pure injective preenvelope by [EJ, Proposition 5.3.9]. \hfill$\square$ \begin{theorem} For a ring $R$, there exist triangle-equivalences as follows. \begin{enumerate} \item[(1)] $\mathbf{D^{b}_{pur}}(R)\simeq \mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})$. \item[(2)] $\mathbf{D^{b}_{pur}}(R)\simeq \mathbf{K^{+,pb}}(\mathcal {P}\mathcal {I})$. \end{enumerate} \end{theorem} {\it Proof.} We only need to prove (1). Let $\mathbb{H}$ be the composition of the embedding $$\mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})\hookrightarrow \mathbf{K^{-}}(R)$$ and the localization functor $$\mathbb{F}:\mathbf{K^{-}}(R)\rightarrow \mathbf{D^{-}_{pur}}(R).$$ For any $X\in \mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})$, there exists $n\in \mathbb{Z}$ such that $\mathbf{inf_{p}}X=n$. So $X$ is pure exact in degree $\leq n-1$ and the following cochain map $f$ is a pure quasi-isomorphism. $$\CD ~~~~~X:=~\cdots @> >> X^{n-1} @> >> X^{n} @> >> X^{n+1} @> >> X^{n+2} @> >> \cdots \\ @V f VV @V VV @V VV @V VV @V VV @. \\ X^{\supset n}:=~~\cdots @> >> 0 @> >> \Coker d_{X}^{n-1} @> >> X^{n+1} @> >> X^{n+2} @> >> \cdots. \endCD$$ It follows that $\mathbb{H}(X)\cong X^{\supset n}$ in $\mathbf{D_{pur}}(R)$. So $\mathbb{H}(X)\in \mathbf{D^{b}_{pur}}(R)$ and hence $\mathbb{H}$ induces a functor from $\mathbf{K^{-,pb}}(\mathcal {P}\mathcal {P})$ to $\mathbf{D^{b}_{pur}}(R)$, again denoted by $\mathbb{H}$. By Propositions 3.1 and 3.5, $\mathbb{H}$ is fully faithful and dense. This completes the proof. \hfill$\square$ \bigskip \section{Derived functors and dimensions} \setcounter{equation}{0} In this section, we introduce and investigate the pure projective and injective dimensions of complexes based on pure derived functors of $\Hom$ in $\mathbf{D^{b}_{pur}}(R)$. For the pure projective and injective dimensions of modules and pure derived functors in $R$-$\Mod$, we refer to [KS] and [S].
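Before proceeding, a simple illustration at the level of modules (the example is ours, based on Remark 2.6(1) and [EJ, Proposition 5.3.7]; the dimensions for complexes are defined in Definition 4.3 below):

```latex
% Illustration (ours): if M is a finitely presented R-module, then M is pure
% projective by Remark 2.6(1), so the identity map of stalk complexes
\[
\Id_{M}\colon M \longrightarrow M
\]
% is a pure projective resolution of M concentrated in degree 0; hence
% ppd_R M = 0 for nonzero M (for M = 0 one has ppd_R M = -infinity).
% Dually, for any right R-module N, the left R-module
\[
N^{+} = \Hom_{\mathbb{Z}}(N,\mathbb{Q}/\mathbb{Z})
\]
% is pure injective by [EJ, Proposition 5.3.7], so pid_R N^+ = 0 whenever
% N^+ is nonzero.
```
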
We already know that $\Hom_{R}(P,-)$ takes pure quasi-isomorphisms to quasi-isomorphisms for any $P\in \mathbf{K^{-}}(\mathcal {P}\mathcal {P})$ (Lemma 2.9). In order to define pure projective (resp. injective) resolutions of complexes in $\mathbf{D_{pur}}(R)$, we need the following lemma. \begin{lemma} Let $X$ be a pure exact complex of $R$-modules. Then we have \begin{enumerate} \item[(1)] $M\otimes_{R} X$ is a pure exact complex for any right $R$-module $M$. \item[(2)] $\Hom_{R}(P,X)$ is a pure exact complex for any $P\in\mathcal{PP}$. \item[(3)] $\Hom_{R}(X,I)$ is a pure exact complex for any $I\in \mathcal{P}\mathcal{I}$. \end{enumerate} \end{lemma} {\it Proof.} (1) This follows from the associativity of the tensor product. (2) We will prove that $\Hom_{\mathbb{Z}}(F,\Hom_{R}(P,X))$ is exact for any finitely presented $\mathbb{Z}$-module $F$ and $P\in\mathcal{PP}$. By Remark 2.6, $\mathcal{P}\mathcal{P}$ consists of direct summands of direct sums of finitely presented $R$-modules. So we may assume that $P$ is finitely presented. Note that $P\otimes_{\mathbb{Z}} F$ is a finitely presented $R$-module. So by the adjoint isomorphism $\Hom_{\mathbb{Z}}(F,\Hom_{R}(P,X))\cong \Hom_{R}(P\otimes_{\mathbb{Z}} F,X)$, we have that $\Hom_{R}(P,X)$ is pure exact. (3) Let $I\in\mathcal{PI}$. Then $I$ is a direct summand of $I^{++}$ by [EJ, Proposition 5.3.9], so we may assume that $I=M^{+}$ for some right $R$-module $M$ (namely $M=I^{+}$). Let $F$ be a finitely presented $\mathbb{Z}$-module. By the adjoint isomorphism theorem, we have the isomorphisms $$\Hom_{\mathbb{Z}}(F,\Hom_{R}(X,M^{+})) \cong \Hom_{\mathbb{Z}}(F,(M\otimes_{R}X)^+) \cong (F\otimes_{\mathbb{Z}}M\otimes_{R}X)^+.$$ By (1), $F\otimes_{\mathbb{Z}} M\otimes_{R} X$ is pure exact. So $(F\otimes_{\mathbb{Z}}M\otimes_{R}X)^+$ is exact, and hence $\Hom_{R}(X,I)$ is pure exact.
\hfill$\square$ \begin{remark}{\rm By [CFH, Lemmas 2.4 and 2.5] and Lemma 4.1, after a standard computation we have \begin{enumerate} \item[(1)] $\Hom_{R}(P,-)$ preserves pure exact complexes for any $P\in \mathbf{K^{-}}(\mathcal {P}\mathcal {P})$. \item[(2)] $\Hom_{R}(-,I)$ preserves pure exact complexes for any $I\in \mathbf{K^{+}}(\mathcal {P}\mathcal {I})$. \end{enumerate}} \end{remark} \begin{definition}{\rm Let $X\in\mathbf{D_{pur}}(R)$. \begin{enumerate} \item[(1)] A {\it pure projective resolution} of $X$ is a pure quasi-isomorphism $f:P \to X$ with $P$ a complex of pure projective $R$-modules, such that $\Hom_{R}(P,-)$ preserves pure exact complexes. Dually, a {\it pure injective resolution} of $X$ is defined. \item[(2)] $X$ is said to have {\it pure projective dimension} at most $n$, written $\ppd_{R}X\leq n$, if there exists a pure projective resolution $P\rightarrow X$ with $P^{i}=0$ for any $i<-n$. If $\ppd _{R}X\leq n$ for all $n$, then we write $\ppd _{R}X=-\infty$; and if there exists no $n$ such that $\ppd _{R}X\leq n$, then we write $\ppd _{R}X=\infty$. Dually, the {\it pure injective dimension} $\pid_{R}X$ of $X$ is defined. \end{enumerate}} \end{definition} \begin{remark} {\rm \begin{enumerate} \item[] \item[(1)] If $X$ is an $R$-module (viewed as a complex concentrated in degree 0), then these definitions coincide with the usual ones, see [KS] and [S]. \item[(2)] In the above definition, $\ppd _{R}X=-\infty$ means that $X$ is a pure exact complex. \end{enumerate}} \end{remark} \vspace{0.2cm} These dimensions can also be expressed by the following equalities. $$\ppd _{R}X=-\sup\{\inf\{n\in \mathbb{Z}\mid P^{n}\neq 0\}\mid P\rightarrow X\ {\rm is\ a\ pure\ projective\ resolution}\}, {\rm and}$$ $$\pid _{R}X=\inf\{\sup\{n\in \mathbb{Z}\mid I^{n}\neq 0\}\mid X\rightarrow I\ {\rm is\ a\ pure\ injective\ resolution}\}.$$ Let $X\in\mathbf{D_{pur}^{b}}(R)$.
Then by Proposition 3.5, there exists a complex $P\in\mathbf{K^{-}}(\mathcal{PP})$ such that $P\cong X$ in $\mathbf{D_{pur}^{b}}(R)$. By Remark 4.2, $\Hom_{R}(P,-)$ preserves pure exact complexes, and hence preserves pure quasi-isomorphisms. Then after an easy computation we get a pure quasi-isomorphism from $P$ to $X$. The statements for the pure injective version are dual. Thus, if $X\in \mathbf{D^{b}_{pur}}(R)$, then $X$ admits pure projective (resp. injective) resolutions. Now we may define a functor $$\mathbf{R}\Hom_{R}(-,-):\mathbf{D^{b}_{pur}}(R)^{op}\times \mathbf{D^{b}_{pur}}(R)\rightarrow \mathbf{D_{pur}}(\mathbb{Z})$$ using either a pure projective resolution of the first variable or a pure injective resolution of the second variable. More precisely, let $P_X$ be a pure projective resolution of $X$ and $I_Y$ a pure injective resolution of $Y$. Then we have a diagram of pure quasi-isomorphisms $$\Hom_R(P_X,Y)\rightarrow \Hom_R(P_X,I_Y)\leftarrow \Hom_R(X,I_Y),$$ so the two candidate definitions $\mathbf{R}\Hom_R(X,Y):=\Hom_R(P_X,Y)$ and $\mathbf{R}\Hom_R(X,Y):=\Hom_R(X,I_Y)$ agree in $\mathbf{D_{pur}}(\mathbb{Z})$. It follows that $\mathbf{R}\Hom_{R}(-,-)$ is well defined, and we call it the {\it right pure derived functor} of $\Hom$. Let $P\rightarrow X$ be a pure projective resolution of $X$ and $Y\rightarrow I$ a pure injective resolution of $Y$. In order to coincide with the classical ones in [KS] and [S], we put $$\Pext_{R}^{i}(X,Y):=\H ^{i}\mathbf{R}\Hom_{R}(X,Y)=\H ^{i}\Hom_{R}(P,Y)=\H ^{i}\Hom_{R}(X,I).$$ Recall that $X\in\mathbf{C}(R)$ is called {\it contractible} if it is isomorphic to the zero object in $\mathbf{K}(R)$; equivalently, the identity map $\Id_X$ is homotopic to zero. That is to say, $X$ is split exact (see [We, Exercise 1.4.3]). \begin{theorem} For any $X\in \mathbf{D^{b}_{pur}}(R)$ and $n\in \mathbb{Z}$, the following statements are equivalent. \begin{enumerate} \item[(1)] $\ppd_{R}X\leq n$.
\item[(2)] $\mathbf{inf_{p}}X \geq -n$, and if $f':P'\rightarrow X$ is a pure projective resolution of $X$, then the $R$-module $\Coker d_{P'}^{-n-1}$ is pure projective. \item[(3)] If $f':P'\rightarrow X$ is a pure projective resolution of $X$, then $P'=P_{1}\bigoplus P_{2}$, where $P_{1}^{i}=0$ for any $i<-n$ and $P_{2}$ is contractible. \item[(4)] $\Pext_{R}^{i}(X,Y)=0$ for any $Y\in \mathbf{D_{pur}}(R)$ and $i>n+ \mathbf{sup_{p}}Y$. \item[(5)] $\mathbf{inf_{p}}X\geq -n$ and $\Pext_{R}^{n+1}(X,N)=0$ for any $N\in R$-$\Mod$. \end{enumerate} \end{theorem} {\it Proof.} $(1)\Rightarrow (2)$ Let $\ppd_{R}X\leq n$. Then there exists a pure projective resolution $f:P\rightarrow X$ with $P^{i}=0$ for any $i<-n$. By Proposition 3.4, we have $\mathbf{inf_{p}}X \geq -n$. Let $f':P'\rightarrow X$ be another pure projective resolution of $X$. Then there exists a quasi-isomorphism of complexes $$\Hom_{R}(P,f'):\Hom _{R}(P,P')\rightarrow\Hom _{R}(P,X).$$ Thus there exists a cochain map $g: P\to P'$ such that $f'\circ g\sim f$, and therefore $$\Hom_R(F,f')\circ \Hom_R(F,g)\sim\Hom_R(F,f)$$ for any finitely presented $R$-module $F$. Since $\Hom_R(F,f)$ and $\Hom_R(F,f')$ are quasi-isomorphisms, so is $\Hom_R(F,g)$; it follows from Remark 2.8(2) that $g$ is a pure quasi-isomorphism. Then $g$ is a homotopy equivalence by Lemma 2.10. It is easy to check that the exact sequence $$\CD \cdots @> >> P'^{-n-1} @>d_{P'}^{-n-1} >> P'^{-n} @> \widetilde{d}_{P'}^{-n-1}>>\Coker d_{P'}^{-n-1} @> >> 0 \endCD$$ is contractible. So $\Coker d_{P'}^{-n-1}$ is pure projective. $(2)\Rightarrow (3)$ Let $f':P'\rightarrow X$ be a pure projective resolution of $X$. Because $\mathbf{inf_{p}}P'=\mathbf{inf_{p}}X \geq -n$, we have that the sequence $$\CD \cdots @> >> P'^{-n-1} @> d_{P'}^{-n-1} >> P'^{-n} @> \widetilde{d}_{P'}^{-n-1} >>\Coker d_{P'}^{-n-1} @> >> 0~~~~~~~~~~~~~~(4.1) \endCD$$ is pure exact. Because $\Coker d_{P'}^{-n-1}$ is pure projective by assumption, $(4.1)$ is contractible.
Now let $P'^{-n}=M\oplus \Coker d_{P'}^{-n-1}$, and put $$P_1:=~~~~\cdots \to 0\to \Coker d_{P'}^{-n-1}\to P'^{-n+1}\to P'^{-n+2}\to \cdots,\ {\rm and}$$ $$P_2:=~~~~\cdots \to P'^{-n-2} \to P'^{-n-1}\to M\to 0\to \cdots.$$ Then we have $P'=P_{1}\bigoplus P_{2}$, where $P_{1}^{i}=0$ for any $i<-n$ and $P_{2}$ is contractible. $(3)\Rightarrow (1)$ By (3), the embedding $P_{1}\hookrightarrow P'$ is a pure quasi-isomorphism. This implies that $X$ admits a pure projective resolution $P_{1}\hookrightarrow P'\rightarrow X$ with $P_{1}^{i}=0$ for any $i<-n$. $(3)\Rightarrow (4)$ We only need to consider the situation when $\mathbf{sup_{p}}Y=m<\infty$. Let $P\to X$ be a pure projective resolution of $X$. Then $P=P_{1}\bigoplus P_{2}$, where $P_{1}^{i}=0$ for any $i<-n$ and $P_{2}$ is contractible. So we have $$\Pext_{R}^{i}(X,Y)=\H^{i}\Hom_{R}(P,Y)=\H^{i}\Hom_{R}(P_{1},Y).$$ As in Lemma 2.11, let $Y'$ be the right canonical truncation complex of $Y$ at degree $m$. Then the embedding $Y'\hookrightarrow Y$ is a pure quasi-isomorphism. So we have $$\H^{i}\Hom_{R}(P_{1},Y)=\H^{i}\Hom_{R}(P_{1},Y')=0$$ for any $i>n+ m$. Thus $\Pext _{R}^{i}(X,Y)=0$ for any $Y \in \mathbf{D_{pur}}(R)$ and $i>n+\mathbf{sup_{p}}Y$. $(4)\Rightarrow (5)$ For any $N\in R$-$\Mod$, we have $\mathbf{sup_{p}}N=0$ and $\Pext _{R}^{n+1}(X,N)=0$ by (4). Let $M$ be a right $R$-module. Then $$\H^i((M\otimes_R X)^+)=\H^i(\Hom_R(X,M^+))=\Pext_{R}^{i}(X,M^+)=0$$ for any $i>n$ by the adjoint isomorphism theorem and (4). So $M\otimes_R X$ is exact in degree $<-n$, and hence $X$ is pure exact in degree $<-n$ by Proposition 3.4. This implies that $\mathbf{inf_{p}}X\geq -n$. $(5)\Rightarrow (3)$ Let $P'$ be a pure projective resolution of $X$ and $N\in R$-$\Mod$. Then we have $\mathbf{inf_{p}}P'=\mathbf{inf_{p}}X\geq -n$.
So $P'$ is pure exact in degree $\leq -n-1$, and hence the sequence $$\CD \cdots @> >> P'^{-n-2} @> >> P'^{-n-1} @> >> P'^{-n} @> >> \Coker d_{P'}^{-n-1} @> >> 0 \endCD$$ is pure exact and it is a pure projective resolution of $\Coker d_{P'}^{-n-1}$. We have the following equalities $$\Pext_{R}^{1}(\Coker d_{P'}^{-n-1},N)=\H ^{n+1}\Hom_{R}(P',N)=\Pext^{n+1}_{R}(X,N)=0.$$ This implies that $\Coker d_{P'}^{-n-1}$ is pure projective. Thus the above pure exact complex is contractible, and therefore $P'=P_{1}\bigoplus P_{2}$, where $P_{1}^{i}=0$ for any $i<-n$ and $P_{2}$ is contractible. \hfill$\square$ \vspace{0.2cm} Dually, we have the following \begin{theorem} For any $Y\in\mathbf{ D^{b}_{pur}}(R)$ and $n\in \mathbb{Z}$, the following statements are equivalent. \begin{enumerate} \item[(1)] $\pid_{R}Y\leq n$. \item[(2)] $\mathbf{sup_{p}}Y\leq n$, and if $f':Y\rightarrow I'$ is a pure injective resolution of $Y$, then the $R$-module $\Ker d_{I'}^{n}$ is pure injective. \item[(3)] If $f':Y\rightarrow I'$ is a pure injective resolution of $Y$, then $I'=I_{1}\bigoplus I_{2}$, where $I_{1}^{i}=0$ for any $i>n$ and $I_{2}$ is contractible. \item[(4)] $\Pext_{R}^{i}(X,Y)=0$ for any $X\in \mathbf{D_{pur}}(R)$ and $i>n -\mathbf{inf_{p}}X$. \item[(5)] $\mathbf{sup_{p}}Y\leq n$ and $\Pext_{R}^{n+1}(M,Y)=0$ for any $M\in R$-$\Mod$.
\end{enumerate} \end{theorem} By the above two theorems and Proposition 3.4, for any complexes $X, Y \in\mathbf{ D^{b}_{pur}}(R)$, we have the following characterizations of $\ppd _{R}X$ and $\pid _{R}Y$ via the pure derived functor $\mathbf{R}\Hom$ $$~~~~~~\ppd _{R}X=\sup\{i\in \mathbb{Z} \mid \Pext _{R}^{i}(X,N)\neq 0 {\rm \ for \ some}\ N\ \textrm{in}\ R{\text-}\Mod\}, \ {\rm and} $$ $$\pid _{R}Y=\sup\{i\in \mathbb{Z} \mid \Pext _{R}^{i}(M,Y)\neq 0 {\rm \ for \ some}\ M\ \textrm{in}\ R{\text-}\Mod\}.$$ \vspace{0.2cm} Recall that the {\it left pure global dimension} of $R$, written $\Pgldim R$, is the supremum of the pure projective dimensions of all modules in $R$-$\Mod$. It also equals the supremum of the pure injective dimensions of all modules in $R$-$\Mod$. It is well known that $\Pgldim R\leq n$ if and only if $\Pext_{R}^{i}(M,N)=0$ for any $M,N\in R$-$\Mod$ and $i>n$; see for example [S, p.95]. We have the following cochain complex version of this result. \begin{theorem} For any $n\in \mathbb{Z}$, the following statements are equivalent. \begin{enumerate} \item[(1)] $\Pgldim R\leq n$. \item[(2)] $\ppd_{R}X\leq n-\mathbf{inf_{p}}X$ for any $X\in \mathbf{D_{pur}^{b}}(R)$. \item[(3)] $\pid_{R}Y\leq n+\mathbf{sup_{p}}Y$ for any $Y\in \mathbf{D_{pur}^{b}}(R)$. \item[(4)] $\Pext_{R}^{i}(X,Y)=0$ for any $X,Y\in \mathbf{D_{pur}^{b}}(R)$ and $i>n+\mathbf{sup_{p}}Y-\mathbf{inf_{p}}X$. \end{enumerate} \end{theorem} {\it Proof.} The implications $(2)\Rightarrow (4)$ and $(3)\Rightarrow (4)$ follow from Theorems 4.5 and 4.6, respectively. The implication $(4)\Rightarrow (1)$ is obvious: just let both $X$ and $Y$ be $R$-modules. The implication $(1)\Rightarrow (2)$ is the dual of $(1)\Rightarrow (3)$. So it remains to prove the implication $(1)\Rightarrow (3)$. Let $\mathbf{sup_{p}}Y=m$ and let $Y\rightarrow I$ be a pure injective resolution of $Y$. Then $\mathbf{sup_{p}}I=m$ and $I$ is pure exact in degree $\geq m+1$.
So $$0 \to \Ker d_{I}^{m}\to I^{m}\to I^{m+1}\to \cdots$$ is a pure injective resolution of $\Ker d_{I}^{m}$. By (1), we have $\pid_{R}\Ker d_{I}^{m}\leq n$. Let $$0\rightarrow \Ker d_{I}^{m}\to K^{0}\to K^{1} \to \cdots \to K^{n}\to 0$$ be a pure injective resolution of $\Ker d_{I}^{m}$. Then it is easy to check that $$\cdots \to I^{m-2} \to I^{m-1} \to K^{0} \to \cdots \to K^{n-1} \to K^{n} \to 0 \to\cdots$$ is a pure injective resolution of $Y$ and $\pid_{R}Y\leq n+m$. \hfill$\square$ \bigskip \section{ The case of unbounded complexes} \setcounter{equation}{0} In this section, we study the existence of pure projective resolutions of unbounded complexes. We need the tools of homotopy colimits and limits ([N2]). Let $$\CD X_{0} @>j_{1}>> X_{1} @>j_{2}>> X_{2} @>j_{3}>> \cdots ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(5.1)\endCD$$ be a sequence in $\mathbf{K}^b(R)$, where $j_{i}$ is a morphism of complexes for any $i>0$. Then we can form the {\it homotopy colimit} of this sequence, written $\Ho\underrightarrow{\colim}X_{i}$, by the triangle $$\CD \bigoplus\limits_{i=0}^{\infty} X_{i} @> \textrm{1-shift} >> \bigoplus\limits_{i=0}^{\infty} X_{i} @> >> \Ho \underrightarrow{\colim}X_{i} @> >> (\bigoplus\limits_{i=0}^{\infty} X_{i})[1] \endCD$$ in $\mathbf{K}(R)$. The notion of {\it homotopy limits} is defined dually, and denoted by $\Ho\underleftarrow{\mathrm{lim}}$. For the sequence (5.1), we can also form the {\it direct limit}, written $\underrightarrow{\colim} X_{i}$, in $\mathbf{C}(R)$. We have the following exact sequence of complexes (note: the morphism $\textrm{1-shift}$ is monic) $$\CD 0 @> >> \bigoplus\limits_{i=0}^{\infty} X_{i} @>\textrm{1-shift}>> \bigoplus\limits_{i=0}^{\infty} X_{i} @> \iota >> \underrightarrow{\colim} X_{i} @> >>0.~~~~~~~~~~~~~~(5.2) \endCD$$ Then it is easy to check that there exists a morphism $\alpha: \Ho\underrightarrow{\colim}X_{i}\to\underrightarrow{\colim} X_{i}$, since $\iota \circ(\textrm{1-shift})=0$. 
Passing to homology, we conclude that $\alpha$ is a quasi-isomorphism, since $\H^0(-)$ is cohomological by [GM, Chapter IV.1.6]. Because $\Hom_R(F,-)$ commutes with direct limits in $\mathbf{C}(R)$ for any finitely presented $R$-module $F$ by [AR, Corollary 1.54] (see also [St, Remark 4.13] or [CH, Corollary 4.6]), we have that (5.2) is pure exact in $\mathbf{C}(R)$, that is, it is pure exact in each degree. So after applying the functor $\Hom_{R}(P,-)$ for any $P\in \mathcal{PP}$, we get the following exact sequence $$\CD 0 @> >> \Hom_{R}(P,\bigoplus\limits_{i=0}^{\infty} X_{i}) @>\Hom_{R}(P,\textrm{1-shift})>> \Hom_{R}(P,\bigoplus\limits_{i=0}^{\infty} X_{i} )@> >> \Hom_{R}(P,\underrightarrow{\colim} X_{i}) @> >>0 \endCD$$ and the following exact triangle {\footnotesize $$\CD \Hom_{R}(P, \bigoplus\limits_{i=0}^{\infty} X_{i}) @> \Hom_{R}(P,\textrm{1-shift}) >> \Hom_{R}(P,\bigoplus\limits_{i=0}^{\infty} X_{i}) @> >>\Hom_{R}(P, \Ho \underrightarrow{\colim}X_{i}) @> >> \Hom_{R}(P,\bigoplus\limits_{i=0}^{\infty} X_{i})[1] \endCD$$} in $\mathbf{K}(R)$. Passing to homology, we see that $\alpha$ is a pure quasi-isomorphism.
Then we obtain the following commutative diagram $$\CD P_{i} @>\overline{j_{i+1}}>> P_{i+1} \\ @V f_{i} VV @V f_{i+1} VV \\ X_{i} @>j_{i+1}>> X_{i+1} \endCD$$ in $\mathbf{K}(R)$, where $\overline{j_{i+1}}$ is induced by $j_{i+1}$. So there exists a morphism of exact triangles $$\CD \bigoplus\limits_{i=0}^{\infty} P_{i} @>\textrm{1-shift}>> \bigoplus\limits_{i=0}^{\infty} P_{i} @> >> \Ho\underrightarrow{\colim} P_{i} @> >> (\bigoplus\limits_{i=0}^{\infty} P_{i})[1] \\ @V VV @V VV @V f VV @V VV \\ \bigoplus\limits_{i=0}^{\infty} X_{i} @>\textrm{1-shift}>> \bigoplus\limits_{i=0}^{\infty} X_{i} @> >> \Ho\underrightarrow{\colim} X_{i} @> >> (\bigoplus\limits_{i=0}^{\infty} X_{i})[1] \endCD$$ in $\mathbf{K}(R)$. After applying the localization functor, it is a morphism of exact triangles in $\mathbf{D_{pur}}(R)$. Since pure quasi-isomorphisms are closed under coproducts by Remark 2.8(2), we have that the first two vertical maps in the above diagram are pure quasi-isomorphisms. So $f$ and $$\alpha\circ f:P=\Ho\underrightarrow{\colim} P_{i}\rightarrow \Ho\underrightarrow{\colim} X_{i}\rightarrow \underrightarrow{\colim} X_{i}$$ are also pure quasi-isomorphisms. By the construction, we have that $\Ho\underrightarrow{\colim} P_{i}$ is the mapping cone of some cochain map between complexes consisting of pure projective $R$-modules. Thus $\Ho\underrightarrow{\colim} P_{i}$ is also a complex consisting of pure projective $R$-modules. (2) We will prove that $\Hom_{\mathbb{Z}}(F,\Hom_{R}(P,X))$ is exact for any pure exact complex $X$ of $R$-modules and any finitely presented $\mathbb{Z}$-module $F$. 
Consider the following commutative diagram $$\CD \Hom_{R}(P,X) @> >> \Hom_{R}(\bigoplus\limits_{i=0}^{\infty} P_{i},X) @> >> \Hom_{R}(\bigoplus\limits_{i=0}^{\infty} P_{i},X) @> >> (\Hom_{R}(P,X))[1] \\ @V = VV @V \cong VV @V \cong VV @V VV \\ \Hom_{R}( P,X) @> >> \prod\limits_{i=0}^{\infty} \Hom_{R}( P_{i},X) @> >> \prod\limits_{i=0}^{\infty} \Hom_{R}( P_{i},X) @> >> (\Hom_{R}( P,X))[1] \endCD$$ in $\mathbf{K}(\mathbb{Z})$, where both rows are exact triangles. We have the following isomorphism $$\Hom_{\mathbb{Z}}(F,\Hom_{R}(\bigoplus\limits_{i=0}^{\infty} P_{i},X))\cong \prod \limits_{i=0}^{\infty} \Hom_{\mathbb{Z}}(F,\Hom_{R}(P_{i},X)),$$ and the latter complex is exact by Remark 4.2. Because $\Hom_{\mathbb{Z}}(F,-)$ is a triangulated functor, the assertion follows by a standard argument. \hfill$\square$ \begin{theorem} Let $X\in\mathbf{C}(R)$ be a bounded above complex. Then there exists a complex $I$ consisting of pure injective $R$-modules satisfying the following properties. \begin{enumerate} \item[(1)] There exists a pure quasi-isomorphism $f:X\rightarrow I$. \item[(2)] $\Hom_R(-,I)$ preserves pure exact complexes. \end{enumerate} That is, $f:X\to I$ is a pure injective resolution of $X$. \end{theorem} {\it Proof.} Write $X:=\underleftarrow{\mathrm{lim}} X_{i}$ with $X_{i}$ a bounded complex for any $i\leq 0$. Then by [IK, Lemma 2.6], we have $X\cong \Ho\underleftarrow{\mathrm{lim}} X_{i}$ in $\mathbf{K}(R)$. Note that pure quasi-isomorphisms are closed under products by Remark 2.8(2). Now by using an argument similar to that in the proof of Theorem 5.1, we get the assertion.
\hfill$\square$ \begin{remark}{\rm One can check that the right pure derived functor $$\mathbf{R}\Hom_{R}(-,-):\mathbf{D^{b}_{pur}}(R)^{op}\times \mathbf{D^{b}_{pur}}(R)\rightarrow \mathbf{D_{pur}}(\mathbb{Z})$$ extends to $$\mathbf{R}\Hom_{R}(-,-):\mathbf{D^{+}_{pur}}(R)^{op}\times \mathbf{D^{-}_{pur}}(R)\rightarrow \mathbf{D_{pur}}(\mathbb{Z}).$$ The corresponding characterizations of the dimensions in Section 4 also hold in this situation.} \end{remark} \vspace{0.5cm} \noindent $\mathbf{Acknowledgements}$ This research was partially supported by NSFC (Grant Nos. 11171142 and 11571164) and a Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions. The authors thank the referee for the helpful suggestions.
https://arxiv.org/abs/2210.04489
An algorithmic approach based on generating trees for enumerating pattern-avoiding inversion sequences
We introduce an algorithmic approach based on the generating tree method for enumerating inversion sequences with various pattern-avoidance restrictions. For a given set of patterns, we propose an algorithm that outputs either an accurate description of the succession rules of the corresponding generating tree or an ansatz. By using this approach, we determine the generating trees for the pattern classes $I_n(000, 021), I_n(100, 021)$, $I_n(110, 021), I_n(102, 021)$, $I_n(100,012)$, $I_n(011,201)$, $I_n(011,210)$ and $I_n(120,210)$. Then we use the kernel method to obtain the generating function of each class and find enumerating formulas. Lin and Yan studied the classification of the Wilf-equivalences for inversion sequences avoiding pairs of length-three patterns and showed that there are 48 Wilf classes among 78 pairs. In this paper, we solve six open cases for such pattern classes.
\section{Introduction} An {\em inversion sequence} of length $n$ is an integer sequence $e=e_0e_1\cdots e_n$ such that $0\leq e_i\leq i$ for each $0\leq i\leq n$. We denote by $I_n$ the set of inversion sequences of length $n$. There is a bijection between $I_n$ and $S_{n+1}$, the set of permutations of length $n+1$. Given any word $\tau$ of length $k$ over the alphabet $[k]:=\{0,1,\cdots,k-1\}$, we say that an inversion sequence $e\in I_n$ contains the pattern $\tau$ if there is a subsequence of length $k$ in $e$ that is order isomorphic to $\tau$; otherwise, we say that $e$ avoids the pattern $\tau$. For instance, $e=010213211 \in I_8$ avoids the pattern $201$ because there is no subsequence $e_je_ke_l$ of length three in $e$ with $j<k<l$ and $e_k<e_l<e_j$. On the other hand, $e=010213211$ contains the patterns $120$ and $0000$ because it has the subsequence $---2-3-1-$ order isomorphic to $120$, and the subsequence $-1--1--11$ order isomorphic to $0000$. For a given pattern $\tau$, we use $I_n(\tau)$ to denote the set of all $\tau$-avoiding inversion sequences of length $n$. Similarly, for a given set of patterns $B$, we set $I_n(B)=\cap_{\tau \in B}I_n(\tau)$. Pattern-avoiding permutation classes have been thoroughly studied for more than forty years; for some highlights of the results, see \cite{Kit} and references therein. A systematic study of pattern avoidance for inversion sequences was initiated recently by Mansour and Shattuck \cite{ManS} for the patterns of length three with non-repeating letters, and by Corteel et al. \cite{CMS} for repeating and non-repeating letters. Martinez and Savage \cite{MaSa} generalized and extended the notion of pattern avoidance for inversion sequences to triples of binary relations, which leads to new conjectures and open problems.
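For small cases the containment relation can be checked by brute force over all subsequences. The following Python sketch (the names \texttt{order\_iso} and \texttt{contains} are ours, not notation from the paper) does exactly that and reproduces the example above: $e=010213211$ avoids $201$ but contains $120$ and $0000$.

```python
from itertools import combinations

def order_iso(word, pattern):
    """True if `word` is order isomorphic to `pattern`: same length,
    and every pairwise <, = comparison agrees."""
    return len(word) == len(pattern) and all(
        (word[i] < word[j]) == (pattern[i] < pattern[j])
        and (word[i] == word[j]) == (pattern[i] == pattern[j])
        for i in range(len(word)) for j in range(i + 1, len(word)))

def contains(e, tau):
    """True if some length-len(tau) subsequence of e is order
    isomorphic to tau; combinations() preserves the order of e."""
    return any(order_iso(sub, tau) for sub in combinations(e, len(tau)))

# The paper's example: e = 010213211 avoids 201, contains 120 and 0000.
e = (0, 1, 0, 2, 1, 3, 2, 1, 1)
assert not contains(e, (2, 0, 1))
assert contains(e, (1, 2, 0)) and contains(e, (0, 0, 0, 0))
```

This quadratic-per-subsequence check is exponential in $|e|$, which is harmless for the short sequences used to verify succession rules.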
Many successfully studied research programs for permutations, such as pattern avoidance in terms of vincular patterns, pairs of patterns, and longer patterns, have already been initiated for inversion sequences; for some recent results see \cite{AuEl, BGRR, BBGR, CJL, Ch, ManS2, YanLin, LinFu, LinY} and references therein. In the context of inversion sequences, two sets of patterns $B_1$ and $B_2$ are said to be Wilf equivalent if $|I_n(B_1)|=|I_n(B_2)|$ for all $n\geq 0$, that is, they have the same counting sequence. Note that there are exactly thirteen patterns of length three up to order isomorphism; we denote them by $\mathcal{P}_3=\{000,001,010,100,011,101,110,021,012,102,120,201,210 \}$. Yan and Lin \cite{YanLin} completed the classification of the Wilf-equivalences for inversion sequences avoiding pairs of length-three patterns. They showed that there are 48 Wilf classes among 78 pairs; for a complete list of the classes, with the cases still open in terms of enumeration, see Tables 1 and 2 in \cite{YanLin}. In this paper, we solve six open cases for such pattern classes: $I_n(000, 021)$, $I_n(102, 021)$, $I_n(100,012)$, $I_n(120,210)$, the Wilf-equivalent $I_n(011,201)$ and $I_n(011,210)$, and the Wilf-equivalent $I_n(100, 021)$ and $I_n(110, 021)$. Note that, for simplicity of notation, we omit the curly brackets and write $I_n(\tau_1,\cdots,\tau_m)$ instead of $I_n(\{\tau_1,\cdots,\tau_m\})$ for a given set of patterns $B=\{\tau_1,\cdots,\tau_m\}$ throughout the paper. We shall use an algorithmic approach based on generating trees to enumerate pattern-restricted inversion sequences. For some earlier results in the context of pattern-restricted permutations, see \cite{V,Z} and references therein. In this paper, we present applications of our algorithm only for the class $I_n(B)$ where $B$ consists of either a single pattern or a pair of patterns of length three. However, the method applies to other inversion sequences with various pattern restrictions.
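Such Wilf equivalences can be sanity-checked by brute force for small $n$: grow inversion sequences letter by letter and discard any extension that creates a forbidden pattern (a prefix containing a pattern can never be repaired). A minimal Python sketch, with our own helper names rather than the algorithm introduced later:

```python
from itertools import combinations

def contains(e, tau):
    """True if some length-len(tau) subsequence of e is order
    isomorphic to tau (same pattern of <, = comparisons)."""
    m = len(tau)
    return any(
        all((s[i] < s[j]) == (tau[i] < tau[j]) and
            (s[i] == s[j]) == (tau[i] == tau[j])
            for i in range(m) for j in range(i + 1, m))
        for s in combinations(e, m))

def count_avoiders(n, patterns):
    """Compute |I_n(B)| by extending e_0 e_1 ... e_i one letter at a
    time (0 <= e_i <= i), pruning extensions that contain a pattern."""
    level = [(0,)]                                   # I_0 = {0}
    for i in range(1, n + 1):
        level = [e + (x,) for e in level for x in range(i + 1)
                 if not any(contains(e + (x,), t) for t in patterns)]
    return len(level)
```

For instance, `count_avoiders(n, [(0,1,1),(2,0,1)])` agrees with `count_avoiders(n, [(0,1,1),(2,1,0)])` for small `n`, as the Wilf equivalence of $I_n(011,201)$ and $I_n(011,210)$ predicts, and `count_avoiders(n, [(0,0,1)])` returns $2^n$.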
As we will see, the algorithm outputs either an accurate description of the succession rules of the generating tree for the given avoidance class or an ansatz from which we can figure out the complete description of the generating tree. For most cases, we can use the kernel method \cite{Ker} to compute the generating functions and then either obtain an exact enumerating formula for the corresponding pattern class or get a functional equation for the generating function. The latter case yields a procedure to calculate the coefficients of the generating function up to a given index. We organize the paper as follows: In Section~\ref{GTA}, we present our algorithm and demonstrate how it works on some elementary examples such as $B=\{000,001,012\}$ and $B=\{000,001\}$. In Section~\ref{caseB1}, we consider the open cases of a single pattern of length three and obtain functional equations for the generating functions of $I_n(100)$ and the Wilf-equivalent $I_n(201)$ and $I_n(210)$. In Section~\ref{caseB2}, we obtain the generating trees for the classes $I_n(000, 021), I_n(100, 021)$, $I_n(110, 021),$ $I_n(102, 021)$, $I_n(100,012)$, $I_n(011,201)$, $I_n(011,210)$ and $I_n(120,210)$ by using our algorithm. Then we use the kernel method, obtain the corresponding generating functions, and determine the counting sequences for them. \section{An algorithm based on generating trees}\label{GTA} Any set $\mathcal{C}$ of discrete objects with a notion of size, such that for each $n$ there are only finitely many objects of size $n$, is called a combinatorial class.
A {\em generating tree} (see \cite{W}) for $\mathcal{C}$ is a rooted, labelled tree whose vertices are the objects of $\mathcal{C}$ with the following properties: (i) each object of $\mathcal{C}$ appears exactly once in the tree; (ii) objects of size $n$ appear at level $n$ in the tree (the root has level 0); (iii) the children of an object are obtained by a set of succession rules that determine the number of children and their labels. Note that any pattern over the alphabet $[k]$ can be extended to an inversion sequence. Suppose a pattern $\tau=\tau_1\cdots\tau_m$ is given and let $\{0,1,\ldots,t\}$ denote the set of all letters appearing in $\tau$. We define $L_\tau$ to be the set of all inversion sequences $\theta^{(1)}\tau_1\theta^{(2)}\tau_2\cdots\theta^{(m)}\tau_m$ such that the length of the inversion sequence $\theta^{(1)}\tau_1\theta^{(2)}\tau_2\cdots\theta^{(j)}\tau_j$ is minimal for each $j=1,2,\ldots,m$. Note that some of the words $\theta^{(j)}$ might be empty. By the minimality condition on the lengths of $\theta^{(1)},\ldots,\theta^{(m)}$, we have that the length of any pattern in $L_\tau$ is at most $m+t$. For instance, if $\tau=021$, then $m=3$, $t=2$, and $L_\tau=\{0021,0121\}$; if $\tau=001$, then $m=3$, $t=1$, and $L_\tau=\{001\}$. Clearly, an inversion sequence $\sigma$ avoids $B$ if and only if $\sigma$ avoids $L=\cup_{\{\tau\in B\}}L_\tau$. For any set of patterns $B$, we identify $B$ with the set of patterns $L_B=\cup_{\{\tau\in B\}}L_\tau$. For a given set of patterns $B$, let $\mathcal{I}_B=\cup_{n=0}^{\infty} I_n(B)$. We will construct a pattern-avoidance tree $\mathcal{T}(B)$ for the class of pattern-avoiding inversion sequences $\mathcal{I}_B$. The tree $\mathcal{T}(B)$ is understood to be empty if there is no inversion sequence of arbitrary length avoiding the set $B$. Otherwise, the root can always be taken as $0$, that is, $0\in \mathcal{T}(B)$.
Starting with this root at level $0$, the remainder of the tree $\mathcal{T}(B)$ can then be constructed in a recursive manner such that the $n^{th}$ level of the tree consists exactly of the elements of $I_n(B)$, arranged in such a way that the parent of an inversion sequence $e_0e_1\cdots e_n \in I_n(B)$ is the unique inversion sequence $e_0e_1\cdots e_{n-1}\in I_{n-1}(B)$. The children of $e_0e_1\cdots e_{n-1}\in I_{n-1}(B)$ are obtained from the set $\{e_0e_1\cdots e_{n-1}e_n\mid e_n=0,1,\ldots,n\}$ by obeying the pattern-avoiding restrictions of the patterns in $B$. We arrange the nodes from left to right so that if $e=e_0e_1\cdots e_{n-1}i$ and $e'=e_0e_1\cdots e_{n-1}j$ are children of the same parent $e_0e_1\cdots e_{n-1}$, then $e$ appears on the left of $e'$ if $i<j$. See Figure \ref{figT1} for the first few levels of $\mathcal{T}(\{012\})$. Note that the size of $I_n(B)$ is equal to the number of nodes at the $n^{th}$ level of $\mathcal{T}(B)$. \begin{figure}[htp] {\tiny \begin{forest} for tree={fit=band,} [0[00,[000,[0000,[00000\\00001\\00002\\00003\\00004]] [0001,[00010\\00011]] [0002,[00020\\00021\\00022]] [0003,[00030\\00031\\00032\\00033]]] [001,[0010,[00100\\00101]] [0011,[00110\\00111]]] [002,[0020,[00200\\00201\\00202]], [0021,[00210\\00211]],[0022,[00220\\00221\\00222]]]] [01,[010,[0100,[01000\\01001]] [0101,[01010\\01011]]] [011,[0110,[01100\\01101]] [0111,[01110\\01111]]]]] \end{forest}} \caption{First four levels of $\mathcal{T}(\{012\})$}\label{figT1} \end{figure} To enumerate the class $\mathcal{I}_B=\cup_{n=0}^{\infty} I_n(B)$, it is essential to understand the structure of the tree $\mathcal{T}(B)$. Let $\mathcal{T}(B;e)$ denote the subtree consisting of the inversion sequence $e$ as the root together with its descendants in $\mathcal{T}(B)$.
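The recursive construction just described is easy to implement. In the Python sketch below (the function names are ours), `children` lists the ordered children of a node of $\mathcal{T}(B)$; iterating it level by level reproduces Figure \ref{figT1} and the level sizes $1,2,5,13,34,\ldots$ of $\mathcal{T}(\{012\})$.

```python
from itertools import combinations

def contains(e, tau):
    """True if some subsequence of e is order isomorphic to tau."""
    m = len(tau)
    return any(
        all((s[i] < s[j]) == (tau[i] < tau[j]) and
            (s[i] == s[j]) == (tau[i] == tau[j])
            for i in range(m) for j in range(i + 1, m))
        for s in combinations(e, m))

def children(e, patterns):
    """Ordered children of the node e = e_0...e_n in T(B): the
    extensions e_0...e_n x with 0 <= x <= n+1 that still avoid B."""
    return [e + (x,) for x in range(len(e) + 1)
            if not any(contains(e + (x,), t) for t in patterns)]

def levels(patterns, depth):
    """Sizes of levels 0..depth of T(B), i.e. |I_n(B)| for n <= depth."""
    level, sizes = [(0,)], [1]
    for _ in range(depth):
        level = [c for e in level for c in children(e, patterns)]
        sizes.append(len(level))
    return sizes
```

For $B=\{012\}$, `children((0,0,2), [(0,1,2)])` returns the three nodes $0020,0021,0022$ shown in Figure \ref{figT1}, and `levels([(0,1,2)], 4)` returns $[1,2,5,13,34]$, the odd-indexed Fibonacci numbers mentioned in Section 3.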
In our arguments, it will be important to determine whether the subtrees starting from two distinct nodes $e, e' \in \mathcal{T}(B)$ are isomorphic, that is, whether $\mathcal{T}(B;e)\cong\mathcal{T}(B;e')$ in the sense of plane tree isomorphism (there is a bijection between the nodes of $\mathcal{T}(B;e)$ and the nodes of $\mathcal{T}(B;e')$ whenever we read them level by level from top to bottom and from left to right). Lemma~\ref{lem1}, based on \cite{BM}, provides an easy-to-check criterion for this task. \begin{lemma}\label{lem1} Let $t$ be the length of the longest pattern in $B$. We have that $\mathcal{T}(B;e)\cong\mathcal{T}(B;e')$ for two inversion sequences $e, e' \in \mathcal{T}(B)$ if and only if $\mathcal{T}^{2t}(B;e)\cong\mathcal{T}^{2t}(B;e')$, where $\mathcal{T}^m(B;e)$ denotes the finite tree corresponding to the first $m-1$ levels of $\mathcal{T}(B; e)$. \end{lemma} \begin{proof} {Since avoiding $B$ is equivalent to avoiding $L_B=\cup_{\{\tau\in B\}}L_\tau$ in the set of inversion sequences, we may assume that any pattern in $B$ is an inversion sequence. Let $e, e' \in \mathcal{T}(B)$. Clearly, $\mathcal{T}(B;e)\cong\mathcal{T}(B;e')$ implies $\mathcal{T}^{2t}(B;e)\cong\mathcal{T}^{2t}(B;e')$. Now, let us assume that $\mathcal{T}(B;e)\not\cong\mathcal{T}(B;e')$ as plane trees. We read the nodes of $\mathcal{T}(B;e)$ (resp. $\mathcal{T}(B;e')$) from top to bottom and from left to right and denote them by $e_j$ (resp. $e'_j$), with $e_0=e$ (resp. $e'_0=e'$). Since $\mathcal{T}(B;e)\not\cong\mathcal{T}(B;e')$, there exists a minimal $s\geq0$ such that (1) the number of children of $e_j$ equals the number of children of $e'_j$, for $j=1,2,\ldots,s-1$, and (2) the number of children of $e_s$ does not equal the number of children of $e'_s$.
By construction of $\mathcal{T}(B)$, for all $j=1,2,\ldots,s-1$, there exist letters $p_{ij},q_{ij}$ such that the inversion sequences $ef_j:=ep_{1j}p_{2j}\cdots p_{i_jj}$ and $e'f'_j:=e'q_{1j}q_{2j}\cdots q_{i_jj}$ avoid $B$, while the inversion sequence $ef_s:=ep_1p_2\cdots p_{i_s}$ contains some $\tau\in B$, the inversion sequence $e'f_s'=e'q_1q_2\cdots q_{i_s}$ avoids $B$, and there exists a bijection $\alpha$ such that $q_j=\alpha(p_j)$ for all $j=1,2,\ldots,i_s$. Any occurrence of $\tau$ in $ef_s$ can use at most $t-1$ letters of $f_s$. Thus, there is a subsequence $g=p_{k_1}\cdots p_{k_m}$ of $f_s$ of minimal length $m$ such that the word $eg$ contains $\tau$ and $m\leq t-1$, while the word $e'g'$ avoids $B$, where $g'=q_{k_1}\cdots q_{k_m}$ is a subsequence of $f'_s$. Since $ef_s$ is an inversion sequence, there exists an inversion sequence $e\tilde{g}\in L_{eg}$ such that $e\tilde{g}$ is a subsequence of $ef_s$. Since each letter $p_j$ is mapped to the letter $q_j$ by $\alpha$, we see that the sequence $\tilde{g}$ is mapped to $\tilde{g}'$. Since $\tilde{g}'$ is a subsequence of $f'_s$, we see that $e'\tilde{g}'$ is an inversion sequence in $L_{e'g'}$. Thus, the inversion sequence $e\tilde{g}$ contains $\tau$ with the length of $\tilde{g}$ at most $2t-1$, and the inversion sequence $e'\tilde{g}'$ avoids $\tau$ with the length of $\tilde{g}'$ at most $2t-1$. Hence, $\mathcal{T}^{2t}(B;e)\not\cong\mathcal{T}^{2t}(B;e')$.} \end{proof} For instance, we see from Figure~\ref{figT1} that Lemma~\ref{lem1} implies that $\mathcal{T}(\{012\};010)\cong\mathcal{T}(\{012\};001)$ $\cong\mathcal{T}(\{012\};011)$. We define an equivalence relation on the set of nodes of $\mathcal{T}(B)$ as follows. Let $v=v_0v_1\cdots v_a$ and $w=w_0w_1\cdots w_b$ be two nodes in $\mathcal{T}(B)$. We say that $v$ is equivalent to $w$, denoted by $v\sim w$, if and only if $\mathcal{T}(B;v)\cong\mathcal{T}(B;w)$.
Note that Lemma \ref{lem1} provides a finite procedure for checking whether $v\sim w$. Define $V[B]$ to be the set of all equivalence classes in the quotient set $\mathcal{T}(B)/\sim$. We will represent each equivalence class $[v]$ by the label of the unique node $v$ which appears in the tree $\mathcal{T}(B)$ as the left-most node at the lowest level among all nodes in the same equivalence class. Let $\mathcal{T}[B]$ be the same tree as $\mathcal{T}(B)$ where we replace each node $v$ by its equivalence class label. That is, $w$ is relabelled by $v$ such that \begin{itemize} \item $v\sim w$, and \item either $a<b$ or $a=b$ such that, in the list of the nodes at level $a$ in the tree $\mathcal{T}(B)$ from left to right, the node $v$ appears before the node $w$. \end{itemize} Next, we define an algorithm for finding $\mathcal{T}[B]$ for a given set of patterns $B$ with $0\not\in B$. As we run the algorithm, we use the sets $Q_i$ and $R$ to keep track of the equivalence classes obtained at step $i$ and the succession rules deduced so far, respectively. The details are as follows: \begin{itemize} \item[(1)] We {\bf initialize} the tree $\mathcal{T}[B]$ by the root $0$, and define $Q_0=\{0\}$ and $R=\emptyset$. \item[(2)] Let $D$ be any positive integer. \item[(3)] For all $i=1,2,\ldots,D$, \begin{itemize} \item[(3.1)] for any $w\in Q_{i-1}$, we denote the set of all children of $w$ in $\mathcal{T}(B)$ by $N_w$. We {\bf denote} the set of all children of all the new equivalence classes at the $i^{th}$ step by $M_i=\cup_{w\in Q_{i-1}}N_w$. If $M_i=\emptyset$, then we stop the loop and go to (4). \item[(3.2)] we {\bf initialize} the set $Q_i$ (the set of new equivalence classes at the $i^{th}$ step) to be the empty set.
For each child $w$ in $M_i$, \begin{itemize} \item[(3.2.1)] we find $v\in \cup_{j=0}^{i-1}Q_j$, if possible, such that $w\sim v$, where we use Lemma \ref{lem1} to check whether $w\sim v$ holds; \item[(3.2.2)] otherwise, we add the equivalence class $w$ to $Q_i$. \end{itemize} \item[(3.3)] based on (3.2), we {\bf add} the rule $w\rightsquigarrow v_1v_2\cdots v_s$ to the set $R$, where $v_j$ is the label of the $j^{th}$ child of $w$, from left to right, in $\mathcal{T}[B]$. \end{itemize} \item[(4)] If we {\bf stop} at $(3.1)$, then we have the finite set of labels $\cup_{j=0}^{i-1}Q_j$ and the finite set of succession rules $R$ that specifies the tree $\mathcal{T}[B]$ with its root $0$. In this context, $B$ is called {\em regular}. \item[(5)] Otherwise, we have a set of succession rules $R$ that specifies the tree $\mathcal{T}[B]$ with its root $0$ up to level $k(D)$, where $k(D)$ is an integer depending on $D$. We could {\bf guess}, if possible, the full set of succession rules of $\mathcal{T}[B]$ based on $R$, and then use Lemma \ref{lem1} to prove this claim. In case we fail to guess the whole set of succession rules, either we increase $D$ or we say that our procedure does not determine all the succession rules of $\mathcal{T}[B]$. \end{itemize} We will use the following fact throughout the paper: for any pattern collection $B$, $\mathcal{T}(B)\cong\mathcal{T}[B]$, and the number of nodes at the $n^{th}$ level of the generating tree is equal to the number of inversion sequences of length $n$ avoiding the patterns in $B$.
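Step (3.2.1) is where Lemma \ref{lem1} does its work: whether $w\sim v$ can be decided by comparing the subtrees truncated at finitely many levels, namely $2t$ levels below the two nodes, where $t$ is the length of the longest pattern in $B$. The following Python sketch (the helper names and the nested-tuple encoding of tree shapes are ours) implements this finite check; it confirms, for $B=\{012\}$, that $010\sim001\sim011$ while $0\not\sim00$.

```python
from itertools import combinations

def contains(e, tau):
    """True if some subsequence of e is order isomorphic to tau."""
    m = len(tau)
    return any(
        all((s[i] < s[j]) == (tau[i] < tau[j]) and
            (s[i] == s[j]) == (tau[i] == tau[j])
            for i in range(m) for j in range(i + 1, m))
        for s in combinations(e, m))

def children(e, patterns):
    """Ordered children of e in T(B)."""
    return [e + (x,) for x in range(len(e) + 1)
            if not any(contains(e + (x,), t) for t in patterns)]

def shape(e, patterns, depth):
    """Plane-tree shape of T(B; e) truncated `depth` levels below e,
    encoded as nested tuples (labels are forgotten, shape is kept)."""
    if depth == 0:
        return ()
    return tuple(shape(c, patterns, depth - 1)
                 for c in children(e, patterns))

def equivalent(e1, e2, patterns):
    """Finite test for T(B; e1) = T(B; e2) as plane trees, in the
    spirit of Lemma 1: compare the truncations at 2t levels."""
    d = 2 * max(len(t) for t in patterns)
    return shape(e1, patterns, d) == shape(e2, patterns, d)
```

This check is exactly what makes step (3.2.1) effective: only finitely many levels ever need to be generated to decide an equivalence.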
\begin{example} Let $B=\{000,001,012\}$. We apply our procedure with $D=5$ as follows: \begin{center} \begin{tabular}{l||l|l|l|l} $i$&$M_i$&Comments&$Q_i$&$R$\\\hline\hline $0$& & &$\{0\}$&$\emptyset$\\\hline $1$&$\{00,01\}$&$00\not\sim0$, $01\not\sim0$ &$\{00,01\}$ &$\{0\rightsquigarrow00,01\}$\\ &&$01\not\sim00$ & &\\\hline $2$&$\{010,011\}$&$010\sim00$, $011\not\sim v\in Q_0\cup Q_1$&$\{011\}$& $\{0\rightsquigarrow00,01$\\ &&&&$01\rightsquigarrow00,011\}$\\\hline $3$&$\{0110\}$&$0110\sim00$&$\emptyset$&$\{0\rightsquigarrow00,01$\\ &&&&$01\rightsquigarrow00,011$\\ &&&&$011\rightsquigarrow00\}$. \end{tabular} \end{center} Hence, the generating tree $\mathcal{T}[B]$ given by the algorithm has the following succession rules: \begin{align*} &\mbox{Root: }0,\\ &\mbox{Rules: } 0\rightsquigarrow00,01,\quad 01\rightsquigarrow00,011,\quad 011\rightsquigarrow00. \end{align*} We want to find the generating function $R(x)=\sum_{n\geq0}|I_n(B)|x^{n+1}$. We use $A_w(x)$ to denote the generating function for the number of nodes in the subtree $\mathcal{T}(B;w)$. Hence, by the generating tree $\mathcal{T}[B]$, we have $R(x)=x+xA_{00}(x)+xA_{01}(x)$, $A_{00}(x)=x$, $A_{01}(x)=x+xA_{00}(x)+xA_{011}(x)$, and $A_{011}(x)=x+xA_{00}(x)$. By solving for $R(x)$, we obtain that $R(x)=x^4+2x^3+2x^2+x$. \end{example} \begin{example} Let $B=\{000,001\}$. By applying our procedure with $D=5$, we guess that the tree $\mathcal{T}[B]$ is given by \begin{align*} &\mbox{Root: }a_0,\\ &\mbox{Rules: } a_0\rightsquigarrow b_0a_1,\quad a_m\rightsquigarrow b_0b_1b_2\ldots b_ma_{m+1},\quad b_m\rightsquigarrow b_0b_1b_2\ldots b_{m-1}, \end{align*} where $a_0=0$, $a_m=012\cdots m$, $b_0=00$ and $b_m=012\cdots(m-1)mm$ for $m\geq 1$. We will make use of Lemma \ref{lem1} to verify the succession rules of the generating tree. Since the other cases are very similar, we only show that the succession rule $a_m\rightsquigarrow b_0b_1b_2\ldots b_ma_{m+1}$ holds.
Let $v=012\cdots m$. Then the children of $v$ in $\mathcal{T}(B)$ are $012\cdots mj$ with $j=0,1,\ldots,m+1$. By using Lemma \ref{lem1}, we see that $012\cdots m0\sim00$, $012\cdots mj\sim012\cdots(j-1)jj$ with $j=1,2,\ldots,m$, and for $j=m+1$ we have a new equivalence class $012\cdots m(m+1)$. This verifies the succession rule $a_m\rightsquigarrow b_0b_1b_2\ldots b_ma_{m+1}$. We aim to compute the generating function $R(x)=\sum_{n\geq0}|I_n(B)|x^{n+1}$. We use $A_w(x)$ to denote the generating function for the number of nodes in the subtree $\mathcal{T}(B;w)$. Let us define $B_m(x)=A_{012\cdots m}(x)$ and $C_m(x)=A_{012\cdots(m-1)mm}(x)$, for $m\geq1$. Then, by the generating tree $\mathcal{T}[B]$, we obtain that $R(x)=x+xA_{00}(x)+xA_{01}(x)$, $A_{00}(x)=x$, and \begin{align*} B_m(x)&=x+x^2+x(C_1(x)+\cdots+C_m(x))+xB_{m+1}(x),\\ C_m(x)&=x+x^2+x(C_1(x)+\cdots+C_{m-1}(x)). \end{align*} We define $G(x,u)=\sum_{m\geq1}G_m(x)u^{m-1}$, where $G\in\{B,C\}$. Hence, by multiplying the recurrence relations by $u^{m-1}$ and summing over $m\geq1$, we have \begin{align} B(x,u)&=\frac{x(1+x)}{1-u}+\frac{x}{1-u}C(x,u)+\frac{x}{u}(B(x,u)-B(x,0)),\label{eqex21}\\ C(x,u)&=\frac{x(1+x)}{1-u}+\frac{x}{1-u}C(x,u)-xC(x,u).\label{eqex22} \end{align} By solving \eqref{eqex22} for $C(x,u)$, we have $$C(x,u)=\frac{x(1+x)}{(1+x)(1-u)-x}.$$ The equations of type \eqref{eqex21} can be solved systematically using the kernel method \cite{Ker}. In this case, setting $u=x$ removes $B(x,u)$ from the equation and yields \begin{align} B(x,0)=\frac{x(1+x)}{1-x}+\frac{x}{1-x}C(x,x) =\frac{x(1+x)^2}{1-x-x^2}, \end{align} so that $R(x)=x+x^2+xB(x,0)=\frac{x(1+x)}{1-x-x^2}$. Hence, by comparing coefficients of $x^{n+1}$, we obtain that $|I_n(000,001)|=Fib_{n+2}$, where $Fib_n$ is the $n^{th}$ Fibonacci number, that is, $Fib_n=Fib_{n-1}+Fib_{n-2}$ with $Fib_0=0$ and $Fib_1=1$. 
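This Fibonacci answer can also be confirmed by exhaustive enumeration. In the sketch below (all helper names are ours), a node at level $n$ of the tree is a word $(e_1,\ldots,e_{n+1})$ with $0\le e_i<i$, matching the convention $R(x)=\sum_{n\geq0}|I_n(B)|x^{n+1}$:

```python
from itertools import combinations, product

def contains(e, p):
    # True if e has a subsequence whose pairwise <, =, > relations match p.
    k = len(p)
    return any(all((e[i[a]] < e[i[b]]) == (p[a] < p[b]) and
                   (e[i[a]] == e[i[b]]) == (p[a] == p[b])
                   for a in range(k) for b in range(a + 1, k))
               for i in combinations(range(len(e)), k))

def avoiders(length, B):
    # Count words (e_1, ..., e_length) with 0 <= e_i < i avoiding all of B.
    return sum(1 for e in product(*(range(i) for i in range(1, length + 1)))
               if not any(contains(e, p) for p in B))

fib = [0, 1]
while len(fib) < 12:
    fib.append(fib[-1] + fib[-2])

# coefficient of x^{n+1} in R(x), i.e. |I_n({000, 001})|
terms = [avoiders(n + 1, [(0, 0, 0), (0, 0, 1)]) for n in range(7)]
# terms == [1, 2, 3, 5, 8, 13, 21], which is Fib_{n+2} for n = 0, ..., 6
```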
\end{example} \section{Set of patterns $B \subset \mathcal{P}_3$ with $|B|=1$}\label{caseB1} As we discussed in the introduction, the first systematic study of pattern-avoiding inversion sequences was carried out for the case of a single pattern of length three in \cite{CMS} and \cite{ManS}. The results of these papers demonstrated that there are some remarkable connections between pattern-restricted inversion sequences and other well-studied combinatorial structures. Some of the highlights of their results can be summarized as follows: the odd-indexed Fibonacci numbers count $I_n(012)$, the large Schröder numbers count $I_n(021)$, the Euler up/down numbers count $I_n(000)$, the Bell numbers count $I_n(011)$, and powers of two count $I_n(001)$; for details see the above references. There are still no enumerating formulas for the avoidance sets $I_n(010)$, $I_n(100)$, $I_n(120)$, and the Wilf-equivalent sets $I_n(201)$ and $I_n(210)$. In this section, we use our algorithmic approach to derive functional equations for the generating functions of the classes $I_n(100)$ and $I_n(201)$. For similar results in the context of pattern-restricted permutations, see \cite{NZ,YZ}. \subsection{Class $I_n(100)$}\label{case100} {Our algorithm allows us to guess the generating tree $\mathcal{T}[\{100\}]$.} \begin{theorem}\label{th100tt} The generating tree $\mathcal{T}[\{100\}]$ is given by $$\mbox{Root: }a_1,\quad \mbox{Rules: } a_m\rightsquigarrow a_{m+1}b_{m,1}\cdots b_{m,m},\,\, b_{m,j}\rightsquigarrow (b_{m,j-1})^j b_{m+1,j}\cdots b_{m+1,m+1},$$ where $a_m=0^m$ and $b_{m,j}=0^mj$, for all $1\leq j\leq m$. \end{theorem} \begin{proof} We proceed by using our algorithm. We label the inversion sequence $0\in I_0$ by $0$. Clearly, the children of $0^m$ are $0^{m+1},0^m1,\ldots,0^mm$. Thus it remains to show that the children of $0^mj$ are $(0^m(j-1))^j(0^{m+1}j)\cdots(0^{m+1}(m+1))$. 
Let $v_i=0^mji$. We have that \begin{itemize} \item if $i=0,1,\ldots,j-1$, then $v_i\sim0^m(j-1)$ in $\mathcal{T}(\{100\})$ {(by removing the letter $i$ and subtracting $1$ from each letter greater than $i$)}; \item if $i=j,j+1,\ldots,m$, then $v_i=0^{m}ji\sim0^{m+1}i$ in $\mathcal{T}(\{100\})$ {(by replacing the letter $j$ by $0$)}, \end{itemize} which completes the proof. \end{proof} To study the generating function $R(x)=\sum_{n\geq0}|I_n(100)|x^{n+1}$, we define $A_m(x)$ and $B_{m,j}(x)$ to be the generating functions for the number of nodes in the subtrees $\mathcal{T}(B;0^m)$ and $\mathcal{T}(B;0^mj)$, respectively. Let $B_m(x)=\sum_{j=1}^mB_{m,j}(x)$ and $B_{m,0}(x)=A_{m+1}(x)$. Thus, from the generating tree's succession rules, we get \begin{align*} A_m(x)&=x+xA_{m+1}(x)+xB_m(x),\quad m\geq1,\\ B_{m,j}(x)&=x+jxB_{m,j-1}(x)+xB_{m+1,j}(x)+\cdots+xB_{m+1,m+1}(x),\quad j=1,2,\ldots,m. \end{align*} Then, we define the following generating functions: $A(x,v)=\sum_{m\geq1}A_m(x)v^{m-1}$, $B_m(x,u)=\sum_{j=1}^mB_{m,j}(x)u^{m-j}$, and $B(x,v,u)=\sum_{m\geq1}B_m(x,u)v^{m-1}$. Note that the system of recurrences can be written as follows: \begin{align*} A(x,v)&=\frac{x}{1-v}+\frac{x}{v}(A(x,v)-A(x,0))+xB(x,v,1),\\ B(x,v,u)&=\frac{x}{(1-v)(1-vu)} -xu\frac{\partial}{\partial u}\frac{B(x,v,u)-B(x,v,0)}{u} -xu\frac{\partial}{\partial u}\frac{A(x,uv)-A(x,0)}{uv} \\ &+x\frac{\partial}{\partial v}(\frac{v}{u}(B(x,v,u)-B(x,v,0))+\frac{A(x,uv)-A(x,0)}{u})\\ &+\frac{x}{uv(1-u)}(B(x,v,u)-uB(x,uv,1)-(1-u)B(x,0,0))\\ &-\frac{x}{uv}(B(x,v,0)-B(x,0,0)). \end{align*} By setting $v=x$ in the first equation, we obtain the following result. \begin{theorem}\label{thm100} The generating function $\sum_{n\geq0}|I_n(100)|x^{n+1}$ is equal to $A(x,0)$, which satisfies the following functional equation: $$A(x,0)=\frac{x}{1-x}+xB(x,x,1).$$ \end{theorem} By using the above theorem, we can obtain the first $n$ terms of the generating function $A(x,0)$ for any positive integer $n$. 
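Independently of the functional equation, the initial coefficients of this series (and of the series for $I_n(201)$ and $I_n(210)$ treated next) can be produced by brute force. The sketch below, with our own helper names, is one way to cross-check the expansions:

```python
from itertools import combinations, product

def contains(e, p):
    # True if e has a subsequence whose pairwise <, =, > relations match p.
    k = len(p)
    return any(all((e[i[a]] < e[i[b]]) == (p[a] < p[b]) and
                   (e[i[a]] == e[i[b]]) == (p[a] == p[b])
                   for a in range(k) for b in range(a + 1, k))
               for i in combinations(range(len(e)), k))

def avoiders(length, B):
    # words (e_1, ..., e_length), 0 <= e_i < i, avoiding every pattern in B
    return sum(1 for e in product(*(range(i) for i in range(1, length + 1)))
               if not any(contains(e, p) for p in B))

# coefficients of x, x^2, ..., x^7 in the corresponding series R(x)
t100 = [avoiders(l, [(1, 0, 0)]) for l in range(1, 8)]
t201 = [avoiders(l, [(2, 0, 1)]) for l in range(1, 8)]
t210 = [avoiders(l, [(2, 1, 0)]) for l in range(1, 8)]
```

The lists `t201` and `t210` coincide, in agreement with the Wilf equivalence recorded in the corollary of the next subsection.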
The first 24 terms are 1, 2, 6, 23, 106, 565, 3399, 22678, 165646, 1311334, 11161529, 101478038, 980157177, 10011461983, 107712637346, 1216525155129, 14380174353934, 177440071258827, 2280166654498540, 30450785320307436, 421820687108853017, 6050801956624661417, 89738550379292147192, 1374073440225390131037, 21694040050913295537753. \subsection{Class $I_n(201)$ or $I_n(210)$}\label{case201} Based on the algorithm's ansatz, we get the same succession rules for the generating trees of $I_n(201)$ and $I_n(210)$. The generating tree is given as follows {(due to the similarity to the proof of Theorem \ref{th100tt}, from now on we state the generating trees without proofs)}: \begin{align*} \mbox{Root: }a_1,\quad\mbox{Rules: }&a_m\rightsquigarrow a_{m+1}a_{m+1}b_{m,2}b_{m,3}\cdots b_{m,m},\\ &b_{m,j}\rightsquigarrow a_{m+3-j}b_{m+3-j,2}\cdots b_{m+1,j}b_{m+1,j}b_{m+1,j+1}\cdots b_{m+1,m+1}, \end{align*} where $a_m=0^m$ and $b_{m,j}=0^mj$, for all $m\geq1$ and $2\leq j\leq m$. This result implies the following corollary: \begin{corollary} $|I_n(201)|=|I_n(210)|$ for all $n\geq 1$. \end{corollary} For a bijection between these two classes, see \cite{ManS}. To study the generating function $R(x)=\sum_{n\geq0}|I_n(201)|x^{n+1}$, we define $A_m(x)$ and $B_{m,j}(x)$ to be the generating functions for the number of nodes in the subtrees $\mathcal{T}(B;0^m)$ and $\mathcal{T}(B;0^mj)$, respectively. Let $B_m(x)=\sum_{j=2}^mB_{m,j}(x)$. Thus, \begin{align} A_m(x)&=x+2xA_{m+1}(x)+xB_m(x),\, m\geq1,\label{eq201a1}\\ B_{m,j}(x)&=x+xA_{m+3-j}(x)+x\sum_{i=2}^jB_{m+1-j+i,i}(x)+x\sum_{i=j}^{m+1}B_{m+1,i}(x),\, 2\leq j\leq m.\label{eq201a2} \end{align} Clearly, $A_1(x)=x+2xA_2(x)$. We define the generating functions: $A(x,v)=\sum_{m\geq1}A_m(x)v^{m-1}$, $B_m(x,u)=\sum_{j=2}^mB_{m,j}(x)u^{m-j}$, and $B(x,v,u)=\sum_{m\geq2}B_m(x,u)v^{m-2}$. 
Then by multiplying \eqref{eq201a1} by $v^{m-1}$ and summing over $m\geq1$, we obtain \begin{align} A(x,v)&=\frac{x}{1-v}+\frac{2x}{v}(A(x,v)-A(x,0))+xvB(x,v,1).\label{eq201a3} \end{align} Note that $A_2(x)=(A(x,0)-x)/(2x)$. By multiplying \eqref{eq201a2} by $u^{m-j}v^{m-1}$ and summing over $2\leq j\leq m$, we obtain \begin{align*} B(x,v,u)&=\frac{x}{(1-v)(1-vu)} +\frac{x}{u^2v^2(1-v)}(A(x,uv)-\frac{uv}{2x}(A(x,0)-x)-A(x,0))\\ &+\frac{x}{uv(1-v)}(B(x,v,u)-B(x,v,0))\\ &+\frac{x}{uv(1-u)}(B(x,v,u)-uB(x,uv,1))-\frac{x}{uv}B(x,v,0). \end{align*} Hence, by setting $v=2x$ in \eqref{eq201a3}, we obtain the following functional equation for the generating function. For a similar functional equation, see \cite{ManS}. \begin{theorem}\label{thm201} The generating function $\sum_{n\geq0}|I_n(201)|x^{n+1}$ is equal to $A(x,0)$, which satisfies the following functional equation: $$A(x,0)=\frac{x}{1-2x}+2x^2B(x,2x,1).$$ \end{theorem} By using the above theorem, we can obtain the first $n$ terms of the generating function $A(x,0)$ for any positive integer $n$. The first 24 terms are 1, 2, 6, 24, 118, 674, 4306, 29990, 223668, 1763468, 14558588, 124938648, 1108243002, 10115202962, 94652608690, 905339525594, 8829466579404, 87618933380020, 883153699606024, 9028070631668540, 93478132393544988, 979246950529815364, 10368459385853924212, 110866577818487410864. \section{Set of patterns $B \subset \mathcal{P}_3$ with $|B|=2$}\label{caseB2} Inversion sequences avoiding pairs of patterns of length three were first systematically studied by Yan and Lin \cite{YanLin}. They determined that there are 48 Wilf classes among 78 pairs and provided enumerating formulas for some of the classes; for a complete list see Tables 1 and 2 in \cite{YanLin}. In this section, we first obtain the generating trees corresponding to the classes $I_n(000, 021), I_n(100, 021)$, $I_n(110, 021),$ $I_n(102, 021)$, $I_n(100,012)$, $I_n(011,201)$, $I_n(011,210)$ and $I_n(120,210)$ by using our algorithm. 
It will follow from the generating trees that the classes $I_n(011,201)$ and $I_n(011,210)$ are Wilf-equivalent, and that $I_n(100, 021)$ and $I_n(110, 021)$ are Wilf-equivalent. Then we use the kernel method and determine the counting sequences for them. \begin{center} \begin{table}[t] \begin{tabular}{c|c|c} \hline \hline $ B$ & $a_n=|I_n(B)|$ & reference \\ \hline \hline &&\\ (000,021) & $\frac{1}{2}(3a_{n-1}+a_n-3a_{n+1}+a_{n+2})$ & Theorem~\ref{thAA2}\\ &\footnotesize{$a_n=\sum_{k=0}^n(-1)^{n-k}\binom{n}{k}\binom{2k}{k}$}&\\ &&\\ \hline &&\\ (100,021)$\sim$(110,021) & $\frac{n^2+n+6}{2(n+3)(n+2)}\binom{2n+2}{n+1}$ & Theorem~\ref{thCC3}\\ &&\\ \hline &&\\ (102,021) & $\sum_{k=0}^n\frac{1}{k+1}\binom{2k}{k}-1-\frac{1}{6}n^3-\frac{11}{6}n+2^n$ & Theorem~\ref{thDD1}\\ &&\\ \hline &&\\ (100,012)& $\frac{(n+7)Fib_n+15Fib_{n+1}+nFib_{n+2}}{5}-1-\binom{n+2}{2}$ & Theorem~\ref{thBB2} \\ &&\\ \hline (011,201)$\sim $(011,210) & functional equation & Theorem~\ref{thm011201} \\ &for the generating function&\\ \hline (120,210) & functional equation & Theorem~\ref{thm120210}\\ &for the generating function&\\ \hline \end{tabular} \caption{New enumerating formulas} \end{table} \end{center} \subsection{Class $I_n(000,021)$}\label{sec000-021} Let $B=\{000,021\}$. When we apply our algorithm to the pattern class $I_n(B)$, we obtain a generating tree that leads to an enumerating formula for this case. We define $r_0=0$, $a_m=0011\cdots mm, b_m=0011\cdots(m-1)(m-1)m$ with $m\geq0$, and $c_m=01122\cdots mm, d_m=01122\cdots(m-1)(m-1)m$ with $m\geq1$. The generating tree $\mathcal{T}[B]$ is given by \begin{align*} \mbox{Root: }r_0,\,\,\mbox{Rules: }& r_0\rightsquigarrow a_0d_1,\,\, a_m\rightsquigarrow b_{m+1}b_m\cdots b_0,\,\,b_m\rightsquigarrow a_m b_m b_{m-1}\cdots b_0,\quad m\geq0,\\ &c_m\rightsquigarrow a_m d_{m+1}d_m\cdots d_1,\,\,d_m\rightsquigarrow b_m c_m d_m d_{m-1}\cdots d_1,\quad m\geq1. \end{align*} This result follows from the following observations. 
We label the inversion sequences $0\in I_0$ and $00,01\in I_1$ by $r_0$ and $a_0,d_1$, respectively. Thus, $r_0\rightsquigarrow a_0d_1$. It remains to show that the generating tree's succession rules hold. Since the other cases are very similar, we will verify only the rule $a_m\rightsquigarrow b_{m+1}b_m\cdots b_0$ for all $m\geq0$. Let $e=e_0e_1\cdots e_n$ be any inversion sequence labelled by $a_m$. So, by definitions, we have that $\mathcal{T}(B;e)\cong\mathcal{T}(B;a_m)$. On the other hand, the children of such a node are the inversion sequences $a_m j=0011\cdots mmj$ with $j=m+1,m+2,\ldots,2m+2$ (for any other $j$, the sequence $a_mj$ does not avoid $B$). Moreover, (i) $a_m(m+1)=0011\cdots mm(m+1)=b_{m+1}$; (ii) $a_m(m+j)=0011\cdots mm(m+j)$; the subtree $\mathcal{T}(B;a_m(m+j))$ is isomorphic to the subtree $\mathcal{T}(B;b_{m+2-j})$ by removing the letters $m+2-j,m+3-j,\ldots,m$ and decreasing each letter greater than $m$ by $2j-1$. Thus, Lemma \ref{lem1} gives that the children of the node with label $a_m$ are exactly the nodes labelled by $b_{m+1},b_m,\ldots,b_0$, that is, $a_m\rightsquigarrow b_{m+1}b_m\cdots b_0$ with $m\geq0$. In order to find an explicit formula for the generating function for the number of inversion sequences in $I_n(B)$, we define $R(x)$ (respectively, $A_m(x)$, $B_m(x)$, $C_m(x)$, and $D_m(x)$) to be the generating function for the number of nodes in the subtrees $\mathcal{T}(B;0)$ (respectively, $\mathcal{T}(B;a_m)$, $\mathcal{T}(B;b_m)$, $\mathcal{T}(B;c_m)$, and $\mathcal{T}(B;d_m)$), where its root is at level $0$. 
Hence, by the rules of the tree $\mathcal{T}(B)$, we have \begin{align} R(x)&=x+xA_0(x)+xD_1(x),\label{eqAA1}\\ A_m(x)&=x+x\sum_{j=0}^{m+1}B_j(x),\quad m\geq0,\label{eqAA2}\\ B_m(x)&=x+xA_m(x)+x\sum_{j=0}^mB_j(x),\quad m\geq0,\label{eqAA3} \end{align} \begin{align} C_m(x)&=x+xA_m(x)+x\sum_{j=1}^{m+1}D_j(x),\quad m\geq1,\label{eqAA4}\\ D_m(x)&=x+xB_m(x)+xC_m(x)+x\sum_{j=1}^mD_j(x),\quad m\geq1.\label{eqAA5} \end{align} \begin{figure}[t] {\footnotesize \begin{forest} for tree={fit=band,} [0[00,[001,[0011] [0012] [0013]] [002,[0022] [0023]]] [01,[010,[0101] [0102] [0103]] [011,[0110] [0112][0113]][012,[0120] [0122] [0123]]]] \end{forest}} {\footnotesize \begin{forest} for tree={fit=band,} [0[00,[001,[0011] [001] [0]] [0,[00] [0]]] [01,[001,[0011] [001] [0]] [011,[0011] [0112][01]][01,[001] [011] [01]]]] \end{forest}} \caption{First three levels of $\mathcal{T}(\{000,021\})$ and $\mathcal{T}[\{000,021\}]$}\label{figT3} \end{figure} We define $A(x,u)=\sum_{m\geq0}A_m(x)u^m$, $B(x,u)=\sum_{m\geq0}B_m(x)u^m$, $C(x,u)=\sum_{m\geq1}C_m(x)u^{m-1}$, and $D(x,u)=\sum_{m\geq1}D_m(x)u^{m-1}$. Hence, \eqref{eqAA1}-\eqref{eqAA5} can be written as \begin{align} R(x)&=x+xA(x,0)+xD(x,0),\label{eqAA6}\\ A(x,u)&=\frac{x}{1-u}+\frac{x}{u}(B(x,u)-B(x,0))+\frac{x}{1-u}B(x,u),\label{eqAA7}\\ B(x,u)&=\frac{x}{1-u}+xA(x,u)+\frac{x}{1-u}B(x,u),\label{eqAA8}\\ C(x,u)&=\frac{x}{1-u}+\frac{x}{u}(A(x,u)+D(x,u)-A(x,0)-D(x,0))+\frac{x}{1-u}D(x,u),\label{eqAA9}\\ D(x,u)&=\frac{x}{1-u}+\frac{x}{u}(B(x,u)-B(x,0))+xC(x,u)+\frac{x}{1-u}D(x,u).\label{eqAA10} \end{align} By \eqref{eqAA7}-\eqref{eqAA8}, we have \begin{align}\label{eqAA11} \frac{(1-x)u-x^2-u^2}{u(1-u-x)}A(x,u)=-\frac{x^2}{u(1-x)}A(x,0)+\frac{x}{(1-u-x)(1-x)}. \end{align} This type of functional equation can be solved systematically using the kernel method \cite{Ker}. 
In this case, if we assume that $u=x^2M(x)$, where $M(x)=\frac{1-x-\sqrt{1-2x-3x^2}}{2x^2}$ is the generating function for the Motzkin numbers, see \cite[Sequence A001006]{Slo}, then we obtain $$A(x,0)=\frac{xM(x)}{1-x-x^2M(x)}=xM^2(x).$$ By \eqref{eqAA11}, we get \begin{align}\label{eqfAu} A(x,u)&=\frac{x((ux+x^2-x)A(x,0)+u)}{(1-x)(u(1-x)-x^2-u^2)}=\frac{xM(x)(x^2M(x)-u)}{u^2+(x-1)u+x^2}. \end{align} Thus, by \eqref{eqAA8}, \begin{align}\label{eqfBu} B(x,u)&=\frac{x(1+(1-u)A(x,u))}{1-u-x}=\frac{xM(x)((u+x)x^2M(x)-u+x^2)}{u^2+u(x-1)+x^2}. \end{align} By substituting \eqref{eqAA9} into \eqref{eqAA10} and using \eqref{eqfAu}-\eqref{eqfBu}, we obtain \begin{align}\label{eqfDu} &\frac{((1-x)u-x^2-u^2)^2}{u(1-u)}D(x,u)+\frac{x^2((1-x)u-x^2-u^2)}{u}D(x,0)\nonumber\\ &=\frac{xM(x)((x^2-x-1)u^2+(x^3-3x^2+1)u+x^4+x^2)} {1-u}\nonumber\\ &+\frac{x^3M^2(x)((x-1)u^2+(x^2-x+3)u+x^3+x^2+x-2)}{1-u}. \end{align} By differentiating this equation with respect to $u$ and taking $u=x^2M(x)$, after some simple algebraic simplifications, we get $$D(x,0)=\frac{x((x^2+2x-2)M(x)-x+1)((1-x)M(x)-2)}{(1+x)(1-3x)}.$$ Hence, by \eqref{eqAA6}, we obtain the following result. \begin{theorem}\label{thAA2} The generating function $R(x)=\sum_{n\geq0}|I_n(000,021)|x^{n+1}$ is given by \begin{align*} &\frac{3x^3+x^2-3x+1}{2x^2\sqrt{(1+x)(1-3x)}}-\frac{(1-x)^2}{2x^2}\\ &=x+2x^2+5x^3+14x^4+39x^5+111x^6+317x^7+911x^8+2627x^9+7600x^{10}+\cdots. \end{align*} Moreover, by Sequence A002426 in \cite{Slo}, we get for all $n\geq1$, $$|I_n(000,021)|=\frac{1}{2}(3a_{n-1}+a_n-3a_{n+1}+a_{n+2}),$$ where $a_n=\sum_{k=0}^n(-1)^{n-k}\binom{n}{k}\binom{2k}{k}$. \end{theorem} \subsection{Classes $I_n(100,021)$ and $I_n(110,021)$}\label{sec100-021} In this section, we will provide the rules for the generating trees $\mathcal{T}[\{100,021\}]$ and $\mathcal{T}[\{110,021\}]$. The generating trees show that these two classes are equinumerous, and also lead to an exact enumerating formula. 
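Before moving on, the coefficients of the series in Theorem \ref{thAA2} can be reproduced by exhaustive counting. The sketch below uses our own helper names and the convention that the coefficient of $x^{\ell}$ in $R(x)$ counts avoiding words $(e_1,\ldots,e_{\ell})$ with $0\le e_i<i$:

```python
from itertools import combinations, product

def contains(e, p):
    # True if e has a subsequence whose pairwise <, =, > relations match p.
    k = len(p)
    return any(all((e[i[a]] < e[i[b]]) == (p[a] < p[b]) and
                   (e[i[a]] == e[i[b]]) == (p[a] == p[b])
                   for a in range(k) for b in range(a + 1, k))
               for i in combinations(range(len(e)), k))

def avoiders(length, B):
    # words (e_1, ..., e_length), 0 <= e_i < i, avoiding every pattern in B
    return sum(1 for e in product(*(range(i) for i in range(1, length + 1)))
               if not any(contains(e, p) for p in B))

# coefficients of x, x^2, ..., x^7 in R(x) for B = {000, 021}
series_000_021 = [avoiders(l, [(0, 0, 0), (0, 2, 1)]) for l in range(1, 8)]
# series_000_021 == [1, 2, 5, 14, 39, 111, 317]
```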
When we apply our algorithm to the class $I_n(100,021)$, we obtain the following rules for $\mathcal{T}[\{100,021\}]$: \begin{align*} \mbox{Root: }a_0,\,\,\mbox{Rules: }& a_m\rightsquigarrow a_{m+1}b_{m}b_{m-1}\cdots b_1,\,\, b_m\rightsquigarrow c_mb_{m+1}b_{m}\cdots b_1,\quad m\geq1,\\ &c_m\rightsquigarrow c_{m+1}d_{m}d_{m-1}\cdots d_1 e,\,\, d_m\rightsquigarrow d_{m+1}d_m\cdots d_1 e,\quad m\geq1,\\ &e\rightsquigarrow d_1 e, \end{align*} where $a_m=0^m$, $b_m=0^m1$, $c_m=0^m10$, $d_m=0^m102$ and $e=0103$. From a very similar argument to that presented in Section~\ref{sec000-021}, it follows that the number of nodes at level $n$ (the root is at level $0$) in $\mathcal{T}[\{100,021\}]$ is equal to the number of inversion sequences in $I_n(100,021)$. Next, we will apply our algorithm to the class $I_n(110,021)$, and obtain the rules for the generating tree $\mathcal{T}[\{110,021\}]$: \begin{align*} \mbox{Root: }a_0,\,\,\mbox{Rules: }& a_m\rightsquigarrow a_{m+1}b_{m}b_{m-1}\cdots b_1,\,\, b_m\rightsquigarrow c_mb_{m+1}b_{m}\cdots b_1,\quad m\geq1,\\ &c_m\rightsquigarrow c_{m+1}d_{m}d_{m-1}\cdots d_1 e,\,\, d_m\rightsquigarrow d_{m+1}d_m\cdots d_1 e,\quad m\geq1,\\ &e\rightsquigarrow d_1 e, \end{align*} where $a_m=0^m$, $b_m=0^m1$, $c_m=0^m11$, $d_m=0^m112$ and $e=0113$. The number of nodes at level $n$ in $\mathcal{T}[\{110,021\}]$ equals the number of inversion sequences in $I_n(110,021)$. From the above generating tree rules, we have that \begin{corollary} For all $n\geq0$, $|I_n(100,021)|=|I_n(110,021)|$. \end{corollary} We can use one of the above generating trees and enumerate these classes. Let $B=\{100,021\}$. We define $A_m(x)$ (respectively, $B_m(x)$, $C_m(x)$, $D_m(x)$, and $E(x)$) to be the generating function for the number of nodes in the subtrees $\mathcal{T}(B;a_m)$ (respectively, $\mathcal{T}(B;b_m)$, $\mathcal{T}(B;c_m)$, $\mathcal{T}(B;d_m)$, and $\mathcal{T}(B;e)$), where its root is at level $0$. 
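Before solving this system, both the corollary and the closed form eventually obtained in Theorem \ref{thCC3} can be checked numerically for small $n$. In the sketch below, the helper names are ours, and, as elsewhere, the coefficient of $x^{n+1}$ counts avoiding words of length $n+1$:

```python
from itertools import combinations, product
from math import comb

def contains(e, p):
    # True if e has a subsequence whose pairwise <, =, > relations match p.
    k = len(p)
    return any(all((e[i[a]] < e[i[b]]) == (p[a] < p[b]) and
                   (e[i[a]] == e[i[b]]) == (p[a] == p[b])
                   for a in range(k) for b in range(a + 1, k))
               for i in combinations(range(len(e)), k))

def avoiders(length, B):
    # words (e_1, ..., e_length), 0 <= e_i < i, avoiding every pattern in B
    return sum(1 for e in product(*(range(i) for i in range(1, length + 1)))
               if not any(contains(e, p) for p in B))

def closed_form(n):
    # the formula of Theorem thCC3; the integer division is exact
    return (n * n + n + 6) * comb(2 * n + 2, n + 1) // (2 * (n + 3) * (n + 2))

a = [avoiders(n + 1, [(1, 0, 0), (0, 2, 1)]) for n in range(6)]
b = [avoiders(n + 1, [(1, 1, 0), (0, 2, 1)]) for n in range(6)]
```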
Hence, by the generating tree rules for $\mathcal{T}(\{100,021\})$, we have \begin{align*} A_m(x)&=x+xA_{m+1}(x)+x\sum_{j=1}^mB_j(x),\quad m\geq1,\\ B_m(x)&=x+xC_m(x)+x\sum_{j=1}^{m+1}B_j(x),\quad m\geq1, \end{align*} \begin{align*} C_m(x)&=x+xC_{m+1}(x)+x\sum_{j=1}^mD_j(x)+xE(x),\quad m\geq1,\\ D_m(x)&=x+x\sum_{j=1}^{m+1}D_j(x)+xE(x),\quad m\geq1,\\ E(x)&=x+xD_1(x)+xE(x). \end{align*} We define $G(x,u)=\sum_{m\geq1}G_m(x)u^{m-1}$, where $G\in\{A,B,C,D\}$. Hence, by multiplying by $u^{m-1}$ and summing over $m\geq1$, we obtain \begin{align} A(x,u)&=\frac{x}{1-u}+\frac{x}{u}(A(x,u)-A(x,0))+\frac{x}{1-u}B(x,u),\label{eqCCa1}\\ B(x,u)&=\frac{x}{1-u}+xC(x,u)+\frac{x}{u}(B(x,u)-B(x,0))+\frac{x}{1-u}B(x,u),\label{eqCCa2} \end{align} \begin{align} C(x,u)&=\frac{x}{1-u}+\frac{x}{u}(C(x,u)-C(x,0))+\frac{x}{1-u}D(x,u)+\frac{x}{1-u}E(x),\label{eqCCa3}\\ D(x,u)&=\frac{x}{1-u}+\frac{x}{u}(D(x,u)-D(x,0))+\frac{x}{1-u}D(x,u)+\frac{x}{1-u}E(x),\label{eqCCa4}\\ E(x)&=\frac{x}{1-x}+\frac{x}{1-x}D(x,0).\label{eqCCa5} \end{align} By \eqref{eqCCa4}-\eqref{eqCCa5}, we have $$D(x,u)=\frac{x}{(1-x)(1-u)}+\frac{x}{u}(D(x,u)-D(x,0))+\frac{x}{1-u}D(x,u)+\frac{x^2}{(1-x)(1-u)}D(x,0).$$ By applying the kernel method with $u=xC(x)$, where $C(x)=\frac{1-\sqrt{1-4x}}{2x}$ is the generating function for the Catalan numbers, see \cite[Sequence A000108]{Slo}, we obtain $$D(x,0)=\frac{xC^2(x)}{1-x-x^2C^2(x)}\mbox{ and } E(x)=xC^2(x),$$ which leads to $$D(x,u)=\frac{xC^2(x)(xC(x)-u)}{u^2-u+x}.$$ Thus, by \eqref{eqCCa3}, we have $$C(x,u)=\frac{x}{1-u}+\frac{x}{u}(C(x,u)-C(x,0))+\frac{x^2C^2(x)(xC(x)-u)}{(1-u)(u^2-u+x)}+\frac{x^2C^2(x)}{1-u}.$$ Applying the kernel method with $u=x$, we obtain $$C(x,0)=xC^3(x)$$ and then $$C(x,u)=\frac{(1-x-u)C(x)+u-1}{u^2-u+x}.$$ Then, by the kernel method with $u=xC(x)$ in \eqref{eqCCa2}, we obtain $$B(x,0)=\frac{xC(x)}{1-2xC(x)},$$ which implies that $B(x,u)$ is equal to the following: 
$$\frac{x(2-C(x))((2xu^3+x(2x-3)u^2+x(1-x)u)C(x)-(1+x)u^3+(1+3x+x^2)u^2-3xu+x^2)}{(1-4x)(u^2-u+x)^2}.$$ Thus, \eqref{eqCCa1} with $u=x$ leads to $$A(x,0)=\frac{x}{1-x}+\frac{x}{1-x}B(x,x),$$ and hence we obtain the following result. \begin{theorem}\label{thCC3} The generating function $R(x)=\sum_{n\geq0}|I_n(100,021)|x^{n+1}$ is given by \begin{align*} &\frac{(1-3x)^2}{2x^2\sqrt{1-4x}}-\frac{(1-3x)(1-x)}{2x^2}. \end{align*} Moreover, $|I_n(100,021)|=\frac{n^2+n+6}{2(n+3)(n+2)}\binom{2n+2}{n+1}$ for all $n\geq 0$. \end{theorem} \subsection{Class $I_n(102,021)$} Let $B=\{102,021\}$. We will apply our algorithm to $I_n(B)$ and characterize the generating tree $\mathcal{T}(B)$, which leads to an exact enumerating formula for this class. The generating tree $\mathcal{T}[B]$ is given by \begin{align*} \mbox{Root: }a_0,\,\,\mbox{Rules: }& a_m\rightsquigarrow a_{m+1}b_{m}b_{m-1}\cdots b_1,\,\, b_m\rightsquigarrow c_mb_{m+1}d_{m}\cdots d_2 e,\quad m\geq1,\\ &c_m\rightsquigarrow c_{m+1}c_{m+1},\,\, d_m\rightsquigarrow fd_{m+1}d_m\cdots d_1,\\ &e\rightsquigarrow fd_2 e,\,\,f\rightsquigarrow f, \end{align*} where $a_m=0^m$, $b_m=0^m1$, $c_m=0^m10$, $d_m=0^m12$, $e=0013$, and $f=00130$. We can now compute the corresponding generating function. We define $A_m(x)$ (respectively, $B_m(x)$, $C_m(x)$, $D_m(x)$, $E(x)$, and $F(x)$) to be the generating function for the number of nodes in the subtrees $\mathcal{T}(B;a_m)$ (respectively, $\mathcal{T}(B;b_m)$, $\mathcal{T}(B;c_m)$, $\mathcal{T}(B;d_m)$, $\mathcal{T}(B;e)$, and $\mathcal{T}(B;f)$), where its root is at level $0$. Hence, by the above generating tree rules, we have \begin{align*} A_m(x)&=x+xA_{m+1}(x)+x\sum_{j=1}^mB_j(x),\quad m\geq1,\\ B_m(x)&=x+xC_m(x)+xB_{m+1}(x)+x\sum_{j=2}^{m}D_j(x)+xE(x),\quad m\geq1,\\ C_m(x)&=x+xC_{m+1}(x)+xC_{m+1}(x),\quad m\geq1,\\ D_m(x)&=x+xF(x)+x\sum_{j=1}^{m+1}D_j(x),\quad m\geq1,\\ E(x)&=x+xF(x)+xD_2(x)+xE(x),\\ F(x)&=x+xF(x). 
\end{align*} We define $G(x,u)=\sum_{m\geq1}G_m(x)u^{m-1}$, where $G\in\{A,B,C,D\}$. Hence, by multiplying by $u^{m-1}$ and summing over $m\geq1$, we obtain \begin{align*} A(x,u)&=\frac{x}{1-u}+\frac{x}{u}(A(x,u)-A(x,0))+\frac{x}{1-u}B(x,u),\\ B(x,u)&=\frac{x}{1-u}+xC(x,u)+\frac{x}{u}(B(x,u)-B(x,0))+\frac{x}{1-u}D(x,u)-\frac{x}{1-u}D(x,0)+\frac{x}{1-u}E(x),\\ C(x,u)&=\frac{x}{1-u}+\frac{2x}{u}(C(x,u)-C(x,0)),\\ D(x,u)&=\frac{x}{1-u}+\frac{x}{1-u}F(x)+\frac{x}{u}(D(x,u)-D(x,0))+\frac{x}{1-u}D(x,u),\\ E(x)&=x+xF(x)+xD_2(x)+xE(x),\\ F(x)&=\frac{x}{1-x},\\ D_2(x)&=\frac{1}{x(1-x)}((1-x)^2D(x,0)-x). \end{align*} Note that the formula for $D_2(x)$ is obtained from the recurrences for $D_m(x)$ and $F(x)$. By techniques similar to those used in the previous cases, we obtain the following result. \begin{theorem}\label{thDD1} The generating function $R(x)=\sum_{n\geq0}|I_n(102,021)|x^{n+1}$ is given by \begin{align*} &\frac{1-\sqrt{1-4x}}{2x(1-x)}-\frac{(2x^2-2x+1)(x^3-2x^2+3x-1)}{(1-x)^4(1-2x)}. \end{align*} Moreover, $|I_n(102,021)|=\sum_{k=0}^n\frac{1}{k+1}\binom{2k}{k}-1-\frac{1}{6}n^3-\frac{11}{6}n+2^n$. \end{theorem} \subsection{Class $I_n(100,012)$} Let $B=\{100,012\}$. We start with the following lemmas. \begin{lemma}\label{lemB1} Let $m\geq1$. The generating function $B^{(1)}_m(x)$ for the number of words $\pi'$ with $n-1$ letters over the alphabet $\{0,1,\ldots,m-1\}$, $n\geq1$, such that $0^mm0\pi'\in I_{n+m+1}(B)$ is given by $x(1+x)^{m-1}$. \end{lemma} \begin{proof} Clearly, any inversion sequence $0^mm0\pi'\in I_{n+m+1}(B)$ can be decomposed as $0^mm0j\pi^{(j)}$ with $j=1,2,\ldots,m-1$. Note that the number of inversion sequences $0^mm0j\pi^{(j)}\in I_{n+m+1}(B)$ equals the number of inversion sequences $0^jj0\theta^{(j)}\in I_{n+j+1}(B)$, where $\theta^{(j)}$ is a word of length $n-2$ over the alphabet $\{0,1,\ldots,j-1\}$. Hence, $$B^{(1)}_m(x)=x+x\sum_{j=1}^{m-1}B^{(1)}_j(x),\quad m\geq1.$$ By induction on $m$, we complete the proof. 
\end{proof} \begin{lemma}\label{lemB2} Let $m\geq1$. The generating function $B^{(2)}_m(x)$ for the number of words $\pi'$ with $n-1$ letters over the alphabet $\{0,1,\ldots,m\}$, $n\geq1$, such that $0^mm0\pi'\in I_{n+m+1}(B)$ is given by $\frac{x(1+x)^{m-1}}{1-x}$. \end{lemma} \begin{proof} Similar to the proof of Lemma \ref{lemB1}, we see $$B^{(2)}_m(x)=x+xB^{(2)}_m(x)+x\sum_{j=1}^{m-1}B^{(1)}_j(x),\quad m\geq1.$$ Then, by Lemma \ref{lemB1}, we complete the proof. \end{proof} \begin{lemma}\label{lemB3} Let $m\geq1$. The generating function $B^{(3)}_m(x)$ for the number of words $\pi'$ with $n-1$ letters over the alphabet $\{0,1,\ldots,m-1\}$, $n\geq1$, such that $0^mm\pi'\in I_{n+m}(B)$ is given by $$(m+1)x^3(1+x)^{m-2}-x(x^2-2x-1)(1+x)^{m-2}.$$ \end{lemma} \begin{proof} Similar to the proof of Lemma \ref{lemB1}, we see $$B^{(3)}_m(x)=x+xB^{(1)}_m(x)+x\sum_{j=1}^{m-1}B^{(3)}_j(x),\quad m\geq1.$$ Define $B^{(3)}(x,u)=\sum_{m\geq1}B^{(3)}_m(x)u^m$. By multiplying the above recurrence by $u^m$ and summing over $m\geq1$, and using Lemma \ref{lemB2}, we obtain $$B^{(3)}(x,u)=\frac{xu(1+x-u-2ux)}{(1-u-ux)^2}.$$ Then, by finding the coefficient of $u^m$, we complete the proof. \end{proof} \begin{lemma}\label{lemB4} Let $m\geq1$. The generating function $B_m(x)$ for the number of words $\pi'$ with $n-1$ letters over the alphabet $\{0,1,\ldots,m\}$, $n\geq1$, such that $0^mm\pi'\in I_{n+m}(B)$ is given by $$\frac{x((m-1)x^2(1-x)+x+1)(1+x)^{m-2}}{(1-x)^2}.$$ \end{lemma} \begin{proof} Similar to the proof of Lemma \ref{lemB1}, we see $$B_m(x)=x+xB_m^{(2)}(x)+xB_m(x)+x\sum_{j=1}^{m-1}B^{(3)}_j(x),\quad m\geq1.$$ By Lemmas \ref{lemB2} and \ref{lemB3}, we complete the proof. \end{proof} When we apply our algorithm, we obtain the following generating tree. \begin{proposition}\label{thBB1} Let $\mathcal{T}_m(B)$ be the generating tree for all inversion sequences $\pi=0^mm\pi'$ that avoid $\{100,012\}$. 
The generating tree $\mathcal{T}[B]$ is given by \begin{align*} \mbox{Root: }a_1,\,\,\mbox{Rules: }a_m\rightsquigarrow a_{m+1},\mathcal{T}_1,\mathcal{T}_2,\ldots,\mathcal{T}_m, \end{align*} where $a_m=0^m$ with $m\geq1$. \end{proposition} Now, we are ready to find an explicit formula for the generating function $$R(x)=\sum_{n\geq0}|I_n(100,012)|x^{n+1}.$$ \begin{theorem}\label{thBB2} The generating function $R(x)$ is given by \begin{align*} R(x)&=\frac{x(x^6-x^5-3x^4+x^3+3x^2-3x+1)}{(1-x)^3(1-x-x^2)^2}\\ &=x+2x^2+5x^3+12x^4+27x^5+56x^6+110x^7+207x^8+378x^9+675x^{10}+\cdots. \end{align*} Moreover, by the sequence A001629 in \cite{Slo}, for all $n\geq0$, $$|I_n(100,012)|=\frac{(n+7)Fib_n+15Fib_{n+1}+nFib_{n+2}}{5}-1-\binom{n+2}{2},$$ where $Fib_n$ is the $n$th Fibonacci number, see sequence A000045 in \cite{Slo}. \end{theorem} \begin{proof} Let $R_m(x)$ be the generating function for the number of nodes in the subtree $\mathcal{T}(B,0^m)$ of Proposition \ref{thBB1}. Hence, Proposition \ref{thBB1} and Lemma \ref{lemB4} give $$R_m(x)=x+xR_{m+1}(x)+x\sum_{j=1}^m\frac{x((j-1)x^2(1-x)+x+1)(1+x)^{j-2}}{(1-x)^2}.$$ Define $R(x,u)=\sum_{m\geq1}R_m(x)u^{m-1}$. Then $$R(x,u)=\frac{x}{1-u}+\frac{x}{u}(R(x,u)-R(x,0))-\frac{(ux^3-ux^2+ux+u-1)x^2}{(1-u)(ux+u-1)^2(1-x)^2}.$$ Then, by applying the kernel method with $u=x$, we obtain $$R(x,0)=\frac{x(x^6-x^5-3x^4+x^3+3x^2-3x+1)}{(1-x)^3(1-x-x^2)^2},$$ which completes the proof. \end{proof} \subsection{Class $I_n(011,201)$} Let $B=\{011,201\}$. By applying our algorithm to $I_n(B)$, we obtain the generating tree $\mathcal{T}[B]$ as follows: $$\mbox{Root: }a_1,\,\,\mbox{ Rules: }a_m\rightsquigarrow a_{m+1}a_mb_{m,2}\cdots b_{m,m},\,\, b_{m,j}\rightsquigarrow (a_{m+2-j})^2b_{m+3-j,2}\cdots b_{m,j-1}b_{m,j}\cdots b_{m,m},$$ where $a_m=0^m$ with $m\geq1$ and $b_{m,j}=0^mj$ with $2\leq j\leq m$. 
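Before analyzing this tree, we record a quick numerical check of the two closed forms obtained above. One subtlety is indexing: the displayed formulas of Theorems \ref{thDD1} and \ref{thBB2} match the number of avoiding words of length $n$, i.e. the coefficient of $x^n$ in the corresponding series. The sketch below (helper names are ours) verifies this for small $n$:

```python
from itertools import combinations, product
from math import comb

def contains(e, p):
    # True if e has a subsequence whose pairwise <, =, > relations match p.
    k = len(p)
    return any(all((e[i[a]] < e[i[b]]) == (p[a] < p[b]) and
                   (e[i[a]] == e[i[b]]) == (p[a] == p[b])
                   for a in range(k) for b in range(a + 1, k))
               for i in combinations(range(len(e)), k))

def avoiders(length, B):
    # words (e_1, ..., e_length), 0 <= e_i < i, avoiding every pattern in B
    return sum(1 for e in product(*(range(i) for i in range(1, length + 1)))
               if not any(contains(e, p) for p in B))

fib = [0, 1]
while len(fib) < 12:
    fib.append(fib[-1] + fib[-2])

def f_102_021(n):
    # Theorem thDD1: partial sums of Catalan numbers; all divisions are exact
    return (sum(comb(2 * k, k) // (k + 1) for k in range(n + 1))
            - 1 - (n ** 3 + 11 * n) // 6 + 2 ** n)

def f_100_012(n):
    # Theorem thBB2: the division by 5 is exact
    return (((n + 7) * fib[n] + 15 * fib[n + 1] + n * fib[n + 2]) // 5
            - 1 - (n + 2) * (n + 1) // 2)

check_102_021 = [avoiders(n, [(1, 0, 2), (0, 2, 1)]) for n in range(1, 7)]
check_100_012 = [avoiders(n, [(1, 0, 0), (0, 1, 2)]) for n in range(1, 8)]
```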
We define $A_m(x)$ and $B_{m,j}(x)$ to be the generating functions for the number of nodes in the subtrees $\mathcal{T}(B;a_m)$ and $\mathcal{T}(B;b_{m,j})$, respectively. Thus, the generating tree $\mathcal{T}[B]$ leads to \begin{align*} A_m(x)&=x+xA_{m+1}(x)+xA_m(x)+x(B_{m,2}(x)+\cdots+B_{m,m}(x)),\quad m\geq1,\\ B_{m,j}(x)&=x+2xA_{m+2-j}(x)+x\sum_{i=2}^{j-1}B_{m+1-j+i,i}(x)+x\sum_{i=j}^mB_{m,i}(x),\quad 2\leq j\leq m. \end{align*} Define $A(x,v)=\sum_{m\geq1}A_m(x)v^{m-1}$ and $B(x,v,u)=\sum_{m\geq2}\sum_{j=2}^mB_{m,j}(x)v^{m-2}u^{m-j}$. Then the above recurrence can be written as \begin{align} A(x,v)&=\frac{x}{1-v}+xA(x,v)+\frac{x}{v}(A(x,v)-A(x,0))+xvB(x,v,1),\label{eqT6a1}\\ B(x,v,u)&=\frac{x}{(1-v)(1-vu)}+\frac{2x}{uv(1-v)}(A(x,uv)-A(x,0))\nonumber\\ &+\frac{x}{uv(1-v)}(B(x,v,u)-B(x,v,0))+\frac{x}{1-u}(B(x,v,u)-uB(x,uv,1)).\label{eqT6a2} \end{align} Then by taking $v=\frac{x}{1-x}$ in \eqref{eqT6a1}, we obtain \begin{theorem}\label{thm011201} The generating function $\sum_{n\geq0}|I_n(011,201)|x^{n+1}$ is equal to $A(x,0)$, which satisfies the following functional equation: $$A(x,0)=\frac{x}{1-2x}+\frac{x^2}{(1-x)^2}B(x,x/(1-x),1).$$ \end{theorem} Applying this theorem, we obtain the first $20$ terms of $A(x,0)$: 1, 2, 5, 15, 52, 202, 859, 3930, 19095, 97566, 520257, 2877834, 16434105, 96505490, 580864901, 3573876308, 22426075431, 143242527870, 929759705415, 6123822269373. Here, we conjecture that $|I_n(011,201)|$ equals the number of set partitions of $\{1,2,\ldots,n\}$ that avoid 3-crossings, see Sequence A108304 in \cite{Slo}. \subsection{Class $I_n(120,210)$} Let $B=\{120,210\}$. 
By applying our algorithm to $I_n(B)$, we obtain the generating tree $\mathcal{T}[B]$ as follows: \begin{align*} &\mbox{Root: }a_1,\\ &\mbox{ Rules: } a_m\rightsquigarrow a_{m+1}b_{m,1}\cdots b_{m,m},\,\, b_{m,j}\rightsquigarrow b_{m+1,j}\cdots b_{m+2-j,1}b_{m+1,j}b_{m+1-j,1}\cdots b_{m+1-j,m+1-j}, \end{align*} where $a_m=0^m$ with $m\geq1$ and $b_{m,j}=0^mj$ with $1\leq j\leq m$. It is not hard to prove, by using Lemma \ref{lem1}, that this is indeed the generating tree $\mathcal{T}[B]$. We define $A_m(x)$ and $B_{m,j}(x)$ to be the generating functions for the number of nodes in the subtrees $\mathcal{T}(B;a_m)$ and $\mathcal{T}(B;b_{m,j})$, respectively. Thus, the generating tree $\mathcal{T}[B]$ leads to \begin{align*} A_m(x)&=x+xA_{m+1}(x)+x(B_{m,1}(x)+\cdots+B_{m,m}(x)),\quad m\geq1,\\ B_{m,j}(x)&=x+x\sum_{i=1}^jB_{m+1-j+i,i}(x)+xB_{m+1,j}(x)+x\sum_{i=1}^{m+1-j}B_{m+1-j,i}(x),\quad 1\leq j\leq m. \end{align*} We define $A(x,v)=\sum_{m\geq1}A_m(x)v^{m-1}$ and $B(x,v,u)=\sum_{m\geq1}\sum_{j=1}^mB_{m,j}(x)v^{m-1}u^{m-j}$. Then the above recurrence can be written as \begin{align} A(x,v)&=\frac{x}{1-v}+\frac{x}{v}(A(x,v)-A(x,0))+xB(x,v,1),\label{eqT7a1}\\ B(x,v,u)&=\frac{x}{(1-v)(1-vu)}+\frac{x}{uv(1-v)}(B(x,v,u)-B(x,v,0))\nonumber\\ &+\frac{x}{uv}(B(x,v,u)-B(x,v,0))+\frac{x}{1-v}B(x,uv,1).\label{eqT7a2} \end{align} Then by taking $v=x$ in \eqref{eqT7a1}, we obtain \begin{theorem}\label{thm120210} The generating function $\sum_{n\geq0}|I_n(120,210)|x^{n+1}$ is equal to $A(x,0)$, which satisfies the following functional equation: $$A(x,0)=\frac{x}{1-x}+xB(x,x,1).$$ \end{theorem} Applying this theorem, we obtain the first $20$ terms of $A(x,0)$: 1, 2, 6, 23, 102, 499, 2625, 14601, 84847, 510614, 3161964, 20050770, 129718404, 853689031, 5701759424, 38574689104, 263936457042, 1824032887177, 12718193293888, 89386742081688. 
Notice that the generating function $B(x,xv,0)$ can also be expressed as follows: by \eqref{eqT7a2} with $u=1$, we have \begin{align*} B(x,v,1)&=\frac{xv}{(1-v)(v(1-v)-2x)}-\frac{x(2-v)}{v(1-v)-2x}B(x,v,0). \end{align*} Using this with \eqref{eqT7a2}, we obtain \begin{align*} B(x,v,u/v)&=\frac{x}{(1-v)(1-u)}+\frac{x}{u(1-v)}(B(x,v,u/v)-B(x,v,0))\\ &+\frac{x}{u}(B(x,v,u/v)-B(x,v,0))\\ &+\frac{x}{1-v}\left(\frac{xv}{(1-v)(v(1-v)-2x)}-\frac{x(2-v)}{v(1-v)-2x}B(x,v,0)\right). \end{align*} Note that $$B(x,\frac{x(v-2)}{v-1},0)=x+\frac{5-4v}{1-v}x^2+\frac{16v^2-38v+23}{(1-v)^2}x^3+\cdots.$$ Hence, by replacing $v$ with $\frac{x(v-2)}{v-1}$, we obtain \begin{align*} B(x,v,0)&=\frac{(vx-2v-2x+2)x}{v^2x+v^2-4vx-v+4x}B(x,\frac{x(v-2)}{v-1},0)\\ &+\frac{x(v^2-vx-v+2x)}{(vx-v-2x+1)(v^2x+v^2-4vx-v+4x)}. \end{align*} Replacing $v$ by $vx$, we obtain that the generating function satisfies \begin{align*} B(x,vx,0)&=\frac{vx^2-2vx-2x+2}{v^2x^2+v^2x-4vx-v+4}B(x,\frac{x(2-vx)}{1-vx},0)\\ &+\frac{x(v^2x-vx-v+2)}{(vx^2-vx-2x+1)(v^2x^2+v^2x-4vx-v+4)}\\ &=x+(v+3)x^2+(v^2+4v+11)x^3+(v^3+5v^2+17v+47)x^4+\cdots. \end{align*} \section{Concluding remarks} We introduced an algorithmic approach based on generating trees that yields enumerative results for inversion sequences with many different pattern restrictions. In this paper, we presented applications of our algorithm only for classes $I_n(B)$ where $B$ consists of either a single pattern or a pair of patterns of length three. In subsequent work, we will consider $I_n(B)$ with $B\subseteq \mathcal{P}_3$ and $|B|\geq 3$, and many other pattern classes with more surprising results and conjectures.
{ "timestamp": "2022-10-11T02:24:32", "yymm": "2210", "arxiv_id": "2210.04489", "language": "en", "url": "https://arxiv.org/abs/2210.04489", "abstract": "We introduce an algorithmic approach based on generating tree method for enumerating the inversion sequences with various pattern-avoidance restrictions. For a given set of patterns, we propose an algorithm that outputs either an accurate description of the succession rules of the corresponding generating tree or an ansatz. By using this approach, we determine the generating trees for the pattern-classes $I_n(000, 021), I_n(100, 021)$, $I_n(110, 021), I_n(102, 021)$, $I_n(100,012)$, $I_n(011,201)$, $I_n(011,210)$ and $I_n(120,210)$. Then we use the kernel method, obtain generating functions of each class, and find enumerating formulas. Lin and Yan studied the classification of the Wilf-equivalences for inversion sequences avoiding pairs of length-three patterns and showed that there are 48 Wilf classes among 78 pairs. In this paper, we solve six open cases for such pattern classes.", "subjects": "Combinatorics (math.CO)", "title": "An algorithmic approach based on generating trees for enumerating pattern-avoiding inversion sequences", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9863631675246405, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.7087950412216346 }
https://arxiv.org/abs/1403.7460
On Expansion of a Solution of General Non-autonomous Polynomial Differential Equation
We give a recursive formula for an expansion of a solution of a general non-autonomous polynomial differential equation. The formula is given on the algebraic level with the use of the shuffle product. This approach minimizes the number of integrations at each order of the expansion. Using the combinatorics of trees, we estimate the radius of convergence of the expansion.
\section{Introduction} \label{sec:Introduction} Consider a non-autonomous polynomial differential equation, known also as a generalized Abel differential equation, \begin{align} \label{eq:PreMain} \dot x(t) &= u_0(t) + u_1(t)x(t) +\cdots+ u_i(t)x^i(t) + \cdots + u_n(t) x^n(t), \\ \nonumber x(0) &= x_0, \end{align} with a solution $x:[0,T]\to \RR$ on a small segment of the reals. This class of differential equations includes: for $n=1$ the linear equation, with a well-known formula for the general solution; for $n=2$ the Riccati equation, well known both for theoretical \cite{Redheffer56Solutions,Redheffer57Riccati,Carinena07Riccati,Carinena11Riccati} and practical (see \cite{Carinena2011Lie} and references therein at the beginning of section 4) reasons; for $n=3$ the Abel differential equation of the first kind, studied both theoretically \cite{Mak2002New} and for practical reasons (\cite{Mak2001Solutions,Harko2003Relativistic,Xu2011Short} and references in \cite{Mak2002New,Carinena11Geometric}); and for $n >3$ the generalized Abel differential equations \cite{Alwash2005Periodic, Alwash2007Periodic}. Viewing $X_i = x^i \frac \partial {\partial x}$, for $i=0,\ldots,n$, as vector fields on $\RR$, one can, following Fliess \cite{Fliess81Fonctionnelles} (see also \cite{Kawski97NoncommutativePower, Gray2002Fliess} and \cite{Gray2008NonCausal} with references therein), expand the solution of the equation in terms of iterated integrals \begin{align*} \int_0^{t} \int_0^{t_k}\cdots \int_0^{t_2}u_{{i_k}}(t_k)\cdots u_{{i_1}}(t_1) \, dt_1\ldots dt_{k-1} dt_k \end{align*} and iterated differential operators $X_{i_1}\cdots X_{i_k}$ acting on the identity function $h(x) = x$ and evaluated at the zero point. In this approach one does not use the specific form of the vector fields, i.e. the fact that they are of polynomial type.
In this article we show another approach to expanding a solution of the above equation in terms of iterated integrals, using an important feature of Chen's iterated integral mapping, namely that it is a shuffle algebra homomorphism (see the comment after formula (\ref{eq:IteratedIntDef})). In fact, with the use of Chen's mapping (\ref{eq:IteratedIntDef}) we will be able to consider a purely algebraic problem instead of the analytic one. More precisely, assuming that the solution of (\ref{eq:PreMain}) can be expanded in terms of iterated integrals, we show that an algebraic equation must be satisfied in the space of non-commutative series on $n+1$ letters. It will be easy to show existence of the solution of the algebraic equation by a recursive formula for its homogeneous parts. Chen's mapping then gives us the expansion of the solution of the initial problem, as we state in Theorem \ref{thm:Analytic}. This is done in section \ref{sec:Existece}. Then, in section \ref{sec:CountingTrees}, by counting elements of a class of trees in two different ways we show convergence of the defined expansion of $x(t)$ for small times -- this is stated in Theorem \ref{thm:Main} (in section \ref{sec:Existece}). As an application of this general approach we consider, in section \ref{sec:Examples}, the cases of the linear equation (i.e., $n=1$), the Riccati equation ($n=2$) and the one where there are only two non-vanishing summands. In the first case we reestablish a well-known formula for the general solution, and in the second case we deduce convergence of the series defining coordinates of the second kind connected with an $\LieAlgebra{a}(1)$-type involutive distribution \cite{Pietrzkowski12Explicit}. Finally, in section \ref{sec:Comparison}, we compare the Chen-Fliess approach with the one given in this article. It turns out that in the latter case the number of integrals to compute grows significantly more slowly with the order of approximation than in the former.
\section{Existence and convergence of an expansion} \label{sec:Existece} Let $n\in \NN$ \footnote{throughout the article we assume $\NN = \Set{0,1,2,3,\ldots}$ } , $T > 0$, and let $u_0,\ldots, u_n : [0,T] \to \RR$ be measurable and bounded (by a constant $M \in \RR$) functions. Consider a non-autonomous polynomial differential equation \begin{align} \label{eq:Main} \dot x(t) &= u_0(t) + \NewtonSymbol n 1 u_1(t)x(t) +\cdots+ \NewtonSymbol{n}{i} u_i(t)x^i(t) + \cdots + u_n(t) x^n(t), \\ \nonumber x(0) &= 0. \end{align} Two comments are in order. Firstly, the Newton symbols occurring in the above formula are included for convenience -- without these constants it would be harder to estimate the radius of convergence of the series defined below. Secondly, we assume the initial value equals zero. This is without loss of generality in the sense that the linear change of variables $x \to x - x_0$ transforms the equation with initial value $x_0$ into another equation of the same form with different $u_i$'s. Integrating both sides of the equation we get an integral equation \begin{align} \label{eq:MainIntegral} x(t) = \int_0^{t} u_0(s) + \NewtonSymbol n 1 \, u_1(s)x(s) + \cdots + u_n(s) x^n(s)\, ds. \end{align} By Carathéodory's theorem, for a small $\epsilon> 0$ there exists an absolutely continuous solution $\Solution : [0,\epsilon]\to\RR$ of the initial equation (\ref{eq:Main}) in the sense that $\Solution$ satisfies the integral equation (\ref{eq:MainIntegral}) for $t \in [0,\epsilon]$. We want to express the solution $\Solution$ by means of iterated integrals of products of the $u_i$'s. In order to do this we introduce some algebraic tools. To each function $u_i$ we assign a formal variable $a_i$, which we call a letter. The set of all letters $\Letters = \Set{a_0,\ldots,a_n}$ is called an alphabet.
Juxtaposing letters, we obtain words of an arbitrary length $k\in\NN$; the set of such words is denoted by $\WordsOfLength k = \SetSuchThat{b_1\cdots b_k}{b_1,\ldots, b_k \in \Letters}$. For $k=0$ the set $\WordsOfLength 0 = \Set{\EmptyWord}$ contains only one -- empty -- word. The set of all words is denoted by $\WordsOfLength{} = \bigcup_{k=0}^\infty \WordsOfLength{k}$. Juxtaposition gives rise to an associative, noncommutative product on the set of words, $\WordsOfLength{}\times\WordsOfLength{}\ni(v,w) \mapsto v\cdot w = vw \in \WordsOfLength{}$, called the concatenation product; the set $\WordsOfLength{}$ with the concatenation product and the neutral element $\EmptyWord\in\WordsOfLength{}$ is then a free monoid generated by $\Letters$. Taking $\RR$-linear combinations of words and bilinearly extending the concatenation product we get the $\RR$-algebras $\Polynomials{\RR}{\Letters}$ of noncommutative polynomials on $\Letters$ and $\Series{\RR}{\Letters}$ of noncommutative series on $\Letters$. In both algebras we can consider the bilinear product $\ShuffleProduct:\Series{\RR}{\Letters} \otimes\Series{\RR}{\Letters}\to \Series{\RR}{\Letters}$ -- the shuffle product -- defined recursively for words by $\EmptyWord\ShuffleProduct w = w\ShuffleProduct \EmptyWord = w$ for any $w\in\WordsOfLength{}$, and \begin{align} \label{eq:ShuffleDef} (vb)\ShuffleProduct(wc) = (v\ShuffleProduct(wc))\cdot b + ((vb)\ShuffleProduct w)\cdot c \end{align} for all $b,c\in\Letters$ and $v, w\in\WordsOfLength{}$. It is easy to see that the shuffle product is commutative; thus, with $\EmptyWord$ as the neutral element, it gives rise to an additional commutative algebra structure on $\Polynomials{\RR}{\Letters}$ and $\Series{\RR}{\Letters}$. We will use both products -- concatenation and shuffle -- in our considerations.
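The recursive definition (\ref{eq:ShuffleDef}) translates directly into code. Below is a minimal Python sketch (the representation and names are ours), with words as tuples of letters and linear combinations of words as integer-valued counters.

```python
from collections import Counter

def shuffle(v, w):
    """Shuffle product of two words (tuples of letters), returned as a Counter
    mapping words to integer coefficients, following the recursion
    (vb) sh (wc) = (v sh (wc)) . b + ((vb) sh w) . c."""
    if not v:
        return Counter({w: 1})
    if not w:
        return Counter({v: 1})
    out = Counter()
    for word, c in shuffle(v[:-1], w).items():   # (v sh (wc)) . b
        out[word + (v[-1],)] += c
    for word, c in shuffle(v, w[:-1]).items():   # ((vb) sh w) . c
        out[word + (w[-1],)] += c
    return out
```

For example, one checks that $a\ShuffleProduct b = ab + ba$, that $a\ShuffleProduct a = 2\,aa$, and more generally that $a^{\ShuffleProduct k} = k!\, a^k$, an identity used in section \ref{sec:Examples}.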
It is important to indicate the priority of the shuffle product over the concatenation product in all formulas of this article, so that $v\ShuffleProduct w \cdot a$ always means $(v\ShuffleProduct w) \cdot a$. On $\Series{\RR}{\Letters}$ we introduce a natural scalar product $\ScalarProduct{\cdot}{\cdot}{} : \Series{\RR}{\Letters} \times \Series{\RR}{\Letters} \to \RR$, which for words $v,w \in \WordsOfLength{}$ is given by \begin{align*} \ScalarProduct{v}{w}{} = \Cases{ \Case 1 {v=w} \\ \Case 0 {v\neq w} }. \end{align*} For $S\in\Series{\RR}{\Letters}$, let $S_k \in \Polynomials{\RR}{\Letters}$ be the $k$-degree homogeneous part of $S$, i.e., $$ S_k = \sum_{v\in\WordsOfLength{k}} \ScalarProduct{S}{v}{} \, v. $$ Clearly, $S = \sum_{k=0}^\infty S_k$. Define the linear map $\IteratedInt{t}:\Polynomials{\RR}{\Letters}\to \RR$ by $\IteratedInt t (\EmptyWord) = 1$, and \begin{align*} \WordsOfLength{k}\ni v = a_{i_1}\cdots a_{i_k} \mapsto \IteratedInt{t}(v) := \int_0^{t}u_{{i_k}}(t_k) \int_0^{t_k}\cdots \int_0^{t_2}u_{{i_1}}(t_1) \, dt_1\ldots dt_{k-1} dt_k. \end{align*} Equivalently, the map can be defined recursively by \begin{align} \label{eq:IteratedIntDef} \IteratedInt t(va_i) := \int_0^t \IteratedInt s(v)\, u_i(s)\, ds \end{align} for any $ v\in\WordsOfLength{}$ and $a_{i}\in\Letters$. Since the $u_i$'s are bounded, the definition makes sense for all $t \geq 0$. One can check that $\IteratedInt t$ is in fact a shuffle algebra homomorphism, i.e., $\IteratedInt t (v\ShuffleProduct w) = \IteratedInt t(v)\,\IteratedInt t(w)$ (see \cite{Chen68Algebraic,Reutenauer93FreeLie}), which is a crucial feature in what follows. For a general series $S\in\Series{\RR}{\Letters}$ the homomorphism $\IteratedInt t$ is obviously not well defined, since $\IteratedInt t (S)$ can be divergent.
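The recursion (\ref{eq:IteratedIntDef}) also gives a direct way to evaluate $\IteratedInt t$ numerically. The sketch below (a crude trapezoid-rule discretization; names are ours) illustrates the shuffle homomorphism property on the simplest example $v = w = a_0$, where $\IteratedInt t(a_0)^2 = \IteratedInt t(a_0 \ShuffleProduct a_0) = 2\,\IteratedInt t(a_0 a_0)$.

```python
def upsilon(word, us, T, N=2000):
    """Evaluate the iterated integral of `word` at time T via the recursion
    Upsilon^t(v a_i) = int_0^t Upsilon^s(v) u_i(s) ds,
    discretized with the composite trapezoid rule on N subintervals.
    `word` is a sequence of letter indices, `us` the list of functions u_i."""
    h = T / N
    grid = [j * h for j in range(N + 1)]
    vals = [1.0] * (N + 1)                   # Upsilon^t(empty word) = 1
    for i in word:
        f = [vals[j] * us[i](grid[j]) for j in range(N + 1)]
        integ = [0.0]
        for j in range(1, N + 1):
            integ.append(integ[-1] + 0.5 * h * (f[j - 1] + f[j]))
        vals = integ
    return vals[-1]

# one letter a_0 with u_0(s) = s:  Upsilon^1(a_0) = 1/2, Upsilon^1(a_0 a_0) = 1/8
us = [lambda s: s]
lhs = upsilon((0,), us, 1.0) ** 2            # Upsilon(a_0)^2
rhs = 2 * upsilon((0, 0), us, 1.0)           # Upsilon(a_0 sh a_0) = 2 Upsilon(a_0 a_0)
```

Within discretization error, `lhs` and `rhs` agree, as the homomorphism property predicts.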
We restrict the definition of $\IteratedInt t $ to series $S\in\Series{\RR}{\Letters}$ and times $t \geq 0$ for which the series $$\sum_{k=0}^\infty \IteratedInt t (S_k)$$ is convergent. Coming back to the initial problem, assume that there exists a series $\MainSeries\in\Series{\RR}{\Letters}$ such that the solution $\Solution$ of (\ref{eq:Main}) satisfies $\Solution(t) = \IteratedInt{t}(\MainSeries)$ for $t\in[0,\epsilon]$ (in particular $\IteratedInt{t}(\MainSeries)$ is convergent). Using the recursive definition (\ref{eq:IteratedIntDef}) of $\IteratedInt{t}$ and the fact that $\IteratedInt{t}$ is a shuffle algebra homomorphism, we get from (\ref{eq:MainIntegral}) that \begin{align*} \IteratedInt{t}(\MainSeries) = \IteratedInt{t}(a_0 + \NS n 1 \, \MainSeries\cdot a_1 + \NS n 2\, \MainSeries\ShuffleProduct\MainSeries\cdot a_2 + \ldots + \MainSeries^{\ShuffleProduct n}\cdot a_n), \end{align*} where we abbreviate $\NS n i = \NewtonSymbol n i$, and $\MainSeries^{\ShuffleProduct n}$ is defined recursively in a natural way, i.e., $\MainSeries^{\ShuffleProduct 0} = \EmptyWord$ and $\MainSeries^{\ShuffleProduct n} = \MainSeries\ShuffleProduct \MainSeries^{\ShuffleProduct (n-1)}$. Now the point is that we can forget, for a moment, about the homomorphism and consider only the algebraic equation \begin{align} \label{eq:MainSeries} \MainSeries = a_0 + \NS n 1\, \MainSeries\cdot a_1 + \NS n 2\, \MainSeries\ShuffleProduct\MainSeries\cdot a_2 + \ldots + \MainSeries^{\ShuffleProduct n}\cdot a_n. \end{align} \begin{proposition} \label{prop:Existence} There exists a unique solution $\MainSeries\in\Series{\RR}{\Letters}$ of the algebraic equation (\ref{eq:MainSeries}).
\end{proposition} \begin{proof} The equation under consideration must be satisfied for each homogeneous part, so we can split it into the following series of equations \begin{align*} \MainSeries_0 &= 0, & \MainSeries_1 &= a_0, \\ \MainSeries_2 &= \NS n 1 \, \MainSeries_1\cdot a_1, & \MainSeries_3 &= \NS n 1 \, \MainSeries_2\cdot a_1 + \NS n 2 \, \MainSeries_1\ShuffleProduct\MainSeries_1\cdot a_2, \end{align*} and for arbitrary $k\in\NN$ \begin{align} \label{eq:Recurence} \MainSeries_{k+1} &= \sum_{i=1}^n \NewtonSymbol n i \sum_{\MultiIndex{l}\in\MultiIndexSet{i}} \MainSeries_{l_1}\ShuffleProduct\cdots\ShuffleProduct\MainSeries_{l_i} \cdot a_i, \end{align} where the second sum is taken over multi-indices $\MultiIndex{l} = (l_1,\ldots,l_i)$ in $$ \MultiIndexSet{i}= \SetSuchThat{(l_1,\ldots,l_i)\in\NN^i}{l_1,\ldots,l_i \geq 1,\, l_1 + \cdots + l_i = k }. $$ We see that the homogeneous parts of the series $\MainSeries$ are defined recursively; therefore the series is defined uniquely. \end{proof} Observe that from the recursive definition of $\IteratedInt t$ and the property $\IteratedInt t (v\ShuffleProduct w) = \IteratedInt t (v) \, \IteratedInt t (w)$ we get \begin{align} \label{eq:SolutionKOne} \Solution_{k+1}(t) &= \sum_{i=1}^n \NewtonSymbol n i \sum_{\MultiIndex{l}\in\MultiIndexSet{i}} \int_0^t \Solution_{l_1}(s)\cdots\Solution_{l_i}(s) \, u_i(s)\, ds, \end{align} where we use the abbreviation $\Solution_k(t) = \IteratedInt t (\MainSeries_k)$. By this definition $\Solution(t) = \sum_{k=0}^\infty \Solution_k (t)$. Moreover, any permutation of $(l_1,\ldots,l_i)$ gives the same expression under the integral. For $\MultiIndex{l} \in \MultiIndexSet{i}$ denote by $R(\MultiIndex{l})$ the number of such permutations, i.e., $$ R(\MultiIndex{l}) = \Cardinality\SetSuchThat{\sigma \in \Sigma_i}{l_1 = l_{\sigma (1)},\ldots, l_i = l_{\sigma (i)}}.
$$ Then \begin{align} \label{eq:SolutionKTwo} \Solution_{k+1}(t) &= \sum_{i=1}^n \NewtonSymbol n i \sum_{\MultiIndex{l}\in\MultiIndexSetOrdered{i}} R(\MultiIndex{l}) \int_0^t \Solution_{l_1}(s)\cdots\Solution_{l_i}(s) \, u_i(s)\, ds, \end{align} where the second sum is taken over $$ \MultiIndexSetOrdered{i}= \SetSuchThat{(l_1,\ldots,l_i)\in\NN^i}{1 \leq l_1 \leq l_2 \leq \cdots \leq l_i,\, l_1 + \cdots + l_i = k }. $$ We state this in the following theorem. \begin{theorem} \label{thm:Analytic} Let $\Solution_1(t) = \int_0^t u_0(s) \, ds$ and, for $k \geq 1$, let $\Solution_{k+1}(t)$ be given recursively by \eqref{eq:SolutionKOne} or \eqref{eq:SolutionKTwo}. Then $\Solution(t) = \sum_{k=1}^\infty \Solution_k (t)$ is a formal solution of the differential equation (\ref{eq:Main}). \end{theorem} \begin{remark} \label{rem:Growth} It is worth noticing that for a fixed $k \geq 1$, the number of integrals in formula \eqref{eq:SolutionKTwo} is $\sum_{i=1}^n \Cardinality{\MultiIndexSetOrdered{i}}$. This is at most $\sum_{i=1}^\infty \Cardinality{\MultiIndexSetOrdered{i}}$, which is the number of partitions of $k$. The first ten of these numbers are 1, 2, 3, 5, 7, 11, 15, 22, 30, 42. It means that the number of integrals that we have to perform to compute $\Solution_{k+1}$ grows quite slowly. In section \ref{sec:Comparison} we show that this growth is much slower than the growth of the number of non-zero components in the Chen-Fliess expansion. \end{remark} There remains the question under what assumptions the solution of the algebraic equation is in the domain of the homomorphism $\IteratedInt t : \Series{\RR}{\Letters} \supset\Domain{\IteratedInt t } \to \RR$, i.e., when $\sum_k\IteratedInt{t} (\MainSeries_k)$ is convergent. In order to solve this, we need to compute the number of words (with multiplicities) in each homogeneous part of $\MainSeries$.
So for $S\in\Polynomials \RR \Letters$ let us introduce the following definition \begin{align*} \NumberOfWords S = \sum_{v\in\WordsOfLength{}} |\ScalarProduct{S}{v}{}|. \end{align*} \begin{proposition} \label{prop:NumberOfWords} If $\MainSeries_k$ is the $k$-degree homogeneous part of the solution $\MainSeries$ of the algebraic equation (\ref{eq:MainSeries}), then for $k \geq 1$, $ \NumberOfWords\MainSeries_k = ((n-1)(k-1) +1)\cdot((n-1)(k-2) +1)\cdots n $ (in particular $\NumberOfWords\MainSeries_1 = 1$) and $\NumberOfWords\MainSeries_0 = 0$. \end{proposition} In particular, for $n=0$, $\NumberOfWords\MainSeries_1 = 1$ and $\NumberOfWords\MainSeries_k = 0$ otherwise; for $n = 1$, $\NumberOfWords\MainSeries_k = 1$; for $n = 2$, $\NumberOfWords\MainSeries_k = k!$; for $n = 3$, $\NumberOfWords\MainSeries_k = (2k-1)!!$; for $n = 4$, $\NumberOfWords\MainSeries_k = (3k-2)!!!$, and so on. The proposition will be proved in section \ref{sec:CountingTrees}. Now we state the theorem about convergence of the expansion. \begin{theorem} \label{thm:Main} Let $n\in\NN$ and assume $u_0,\ldots, u_n:[0,T]\to\RR$ are measurable functions such that $|u_i| \leq M$ for some $M>0$. Let $\MainSeries\in\Series{\RR}{\Letters}$ be the unique solution of the algebraic equation (\ref{eq:MainSeries}). Then the series $\sum_k \Solution_k(t) = \sum_k \IteratedInt t (\MainSeries_k)$ is absolutely convergent for $0 \leq t < \min\Set{T, 1/(M(n-1))}$ if $n\geq 2$, and for $0\leq t \leq T$ if $n=0,1$, and $\Solution(t) = \IteratedInt t (\MainSeries)$ is the solution of the differential equation (\ref{eq:Main}) on the same segment. \end{theorem} \begin{proof} For $v\in\WordsOfLength k $ the iterated integral $\IteratedInt t (v)$ is in fact taken over a $k$-dimensional simplex of $k$-dimensional measure $t^k/k!$. Since the $u_i$'s are bounded by $M$, we have $|\IteratedInt t (v)| \leq (M t)^k/k!$.
Therefore, \begin{align*} |\IteratedInt t (\MainSeries_k)| \leq \NumberOfWords\MainSeries_k\, \frac{(M t)^k}{k!} = \frac{ ((n-1)(k-1) +1)\cdot((n-1)(k-2) +1)\cdots n}{k!}\, {(Mt)^k}. \end{align*} Since the ratio of consecutive bounds equals $\frac{((n-1)k+1)Mt}{k+1} \xrightarrow[k\to\infty]{} (n-1)Mt$, the series $\sum_k \IteratedInt t (\MainSeries_k)$ is convergent for $t < 1/(M(n-1))$ if $n \geq 2$, and for all $t \leq T$ if $n=1$. For $n=0$ the statement is obvious. \end{proof} \section{Counting trees} \label{sec:CountingTrees} In this section we prove Proposition \ref{prop:NumberOfWords}. In order to do this we consider certain classes of trees. It turns out that the number of trees in these particular classes equals $\NumberOfWords \MainSeries_k$ on the one hand and $((n-1)(k-1) +1)\cdot((n-1)(k-2) +1)\cdots n$ on the other hand. For $k,n\in\NN$ let $\Trees{n}{k}$ denote the set of planar, rooted, full $n$-ary and increasingly labeled trees on $k$ vertices. Recall that a tree is rooted if there exists a distinguished vertex called the root; is full $n$-ary if each vertex has exactly none or $n$ children; is on $k$ vertices if the number of vertices with $n$ children (parent vertices) equals $k$; is increasingly labeled if the parent vertices are labeled by natural numbers from 1 to $k$, and the labels increase along each branch starting at the root (in particular the root is labeled by "$1$"). A leaf of a tree is a non-parent vertex, i.e., a vertex without children. It is important to note that the number of leaves in each tree in $\Trees{n}{k}$ is constant and equals $(n-1)k +1$. Indeed, using induction on $k$ we see that for $k=0$ the only tree in $ \Trees{n}{0}$ has no parent vertices, so the root is the only leaf; each tree in $\Trees{n}{k}$ can be obtained from a tree $\Tree \in\Trees{n}{k-1}$ by adding $n$ leaves to a certain leaf of $\Tree$, so the number of leaves increases by $(n-1)$. Now we count the cardinality of $\Trees{n}{k}$ in two different ways.
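Both counts can be cross-checked by machine for small $n$ and $k$. The Python sketch below (names are ours) implements the decomposition at the root with multinomial coefficients, to be compared against the product formula of the lemmas that follow.

```python
from math import factorial, prod

def multinomial(ls):
    """Multinomial coefficient (sum ls)! / (l_1! ... l_n!)."""
    return factorial(sum(ls)) // prod(factorial(l) for l in ls)

def weak_compositions(k, n):
    """All (l_1, ..., l_n) in N^n with l_1 + ... + l_n = k."""
    if n == 1:
        yield (k,)
        return
    for first in range(k + 1):
        for rest in weak_compositions(k - first, n - 1):
            yield (first,) + rest

def count_rec(n, k, cache={}):
    """#T_n^k via the root decomposition: the root carries label 1, and the
    remaining k-1 labels are distributed among the n subtrees."""
    if k == 0:
        return 1
    if (n, k) not in cache:
        cache[(n, k)] = sum(multinomial(l) * prod(count_rec(n, li) for li in l)
                            for l in weak_compositions(k - 1, n))
    return cache[(n, k)]

def count_closed(n, k):
    """The product ((n-1)(k-1)+1)((n-1)(k-2)+1) ... n, empty for k = 0."""
    return prod((n - 1) * (j - 1) + 1 for j in range(1, k + 1))
```

For example, `count_rec(2, k)` reproduces $k!$ and `count_rec(3, k)` reproduces $(2k-1)!!$.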
\begin{lemma} \label{lem:Cardinality} The cardinality of $\Trees{n}{k}$ equals $\Cardinality{\Trees{n}{k}} = ((n-1)(k-1) +1)\cdot((n-1)(k-2) +1)\cdots n$ for $k\geq 1$, and $\Cardinality{\Trees{n}{0}}=1$. \end{lemma} \begin{proof} The case $n=0$ is trivial. Fix $n\in\NN$ s.t. $n\geq 1$. We proceed by induction on $k\in\NN$. For $k=0$ there is only one tree, so the statement is correct. Assume $\Cardinality{\Trees{n}{k}} = ((n-1)(k-1) +1)\cdot((n-1)(k-2) +1)\cdots n$. Each tree in $\Trees{n}{k+1}$ comes from a unique tree $\Tree$ in $\Trees{n}{k}$ by attaching the label "$k+1$" and $n$ child vertices to a leaf of $\Tree$. Since the number of leaves of $\Tree$ equals $(n-1)k +1$, we obtain the result. \end{proof} \begin{lemma} \label{lem:Recurrence} For $k\in\NN $ the cardinality of $\Trees{n}{k+1}$ equals \begin{align*} \Cardinality{\Trees{n}{k+1}} &= \sum_{\MultiIndex{l}\in\MultiIndexSetZero{n}} \NewtonSymbolGeneral{k}{l}{n}\, \Cardinality{\Trees{n}{l_1}}\cdots \Cardinality{\Trees{n}{l_n}}, \end{align*} where the sum is taken over multi-indices $\MultiIndex{l} = (l_1,\ldots,l_n)$ in $$ \MultiIndexSetZero{n}= \SetSuchThat{(l_1,\ldots,l_n)\in\NN^n}{l_1 + \cdots + l_n = k }. $$ \end{lemma} \begin{proof} First of all observe that for $k\in\NN$ each tree in $\Trees{n}{k+1}$ is uniquely given by $n$ trees $\Tree_1\in\Trees{n}{l_1},\ldots,\Tree_n\in\Trees{n}{l_n}$ such that $l_1 + \cdots + l_n = k$, and a partition of the set $\Set{2,\ldots,k+1}$ into $n$ disjoint sets $I_1,\ldots, I_n$ of cardinality $\Cardinality I_i = l_i$ for $i=1,\ldots,n$ (we do not assume $I_i\neq \emptyset$), i.e., \begin{align*} \Trees{n}{k+1} \sim \bigsqcup_{\MultiIndex{l}\in\MultiIndexSetZero{n}} {\Trees{n}{l_1}}\times\cdots \times{\Trees{n}{l_n}} \times I(\MultiIndex{l}), \end{align*} where $I(\MultiIndex{l})$ is the set of all partitions of the set $\Set{2,\ldots,k+1}$ into $n$ disjoint sets $I_1,\ldots, I_n$ s.t. $\Cardinality I_i = l_i$ for $i=1,\ldots,n$.
Indeed, the root of a given tree $\Tree \in \Trees{n}{k+1}$ has $n$ child vertices $\Vertex_1,\ldots,\Vertex_n$. Each $\Vertex_i$ is the root of a certain maximal subtree $\tilde\Tree_i$ of $\Tree$. Suppose that $\tilde\Tree_i$ has $l_i$ parent vertices, which are labeled by some numbers $2 \leq a_i^1 < \cdots < a_i^{l_i} \leq k+1$. Obviously, $l_1 + \cdots + l_n = k$. Changing the label "$a^j_i$" into the label "$j$" we obtain a tree $\Tree_i\in\Trees{n}{l_i}$. Defining $I_i = \Set{a_i^1, \ldots , a_i^{l_i}}$ for $i=1,\ldots,n$, we have a partition of $\Set{2,\ldots,k+1}$ into $n$ disjoint sets, i.e., $\Set{2,\ldots, k+1} = I_1\cup\cdots\cup I_n$. It is clear how to invert this procedure, which gives uniqueness. Using the above correspondence it is easy to establish the formula in the lemma, since there are $\NewtonSymbolGeneral{k}{l}{n}$ possible partitions of the set $\Set{2,\ldots,k+1}$ into $n$ disjoint parts $I_1,\ldots,I_n$ such that $\Cardinality I_i = l_i \in \NN$, i.e., $\Cardinality I(\MultiIndex{l}) = \NewtonSymbolGeneral{k}{l}{n}$. \end{proof} We are now ready to prove Proposition \ref{prop:NumberOfWords}.
\begin{proof}[Proof of Proposition \ref{prop:NumberOfWords}] First of all observe that for $i\in\NN$, $l_1,\ldots,l_i\in\NN$, and words $v_1\in\WordsOfLength{l_1},\ldots,v_i\in\WordsOfLength{l_i}$, the number of words in the shuffle product $v_1\ShuffleProduct\cdots\ShuffleProduct v_i$ equals $$\NumberOfWords(v_1\ShuffleProduct\cdots\ShuffleProduct v_i) = \NewtonSymbolGeneral{(l_1 + \cdots + l_i)}{l}{i}.$$ Using this fact, the homogeneity of the polynomials $\MainSeries_l$, and the recursive formula (\ref{eq:Recurence}), we get \begin{align} \label{eq:One} \NumberOfWords\MainSeries_{k+1} &= \sum_{i=1}^n \NewtonSymbol n i \sum_{\MultiIndex{l}\in\MultiIndexSet{i}} \NumberOfWords\left(\MainSeries_{l_1}\ShuffleProduct\cdots\ShuffleProduct\MainSeries_{l_i}\right) \\ \nonumber &= \sum_{i=1}^n \NewtonSymbol n i \sum_{\MultiIndex{l}\in\MultiIndexSet{i}} \NewtonSymbolGeneral{(l_1 + \cdots + l_i)}{l}{i}\cdot \NumberOfWords\MainSeries_{l_1}\cdots\NumberOfWords\MainSeries_{l_i} \end{align} where $\MultiIndexSet{i}$ contains multi-indices $(l_1,\ldots,l_i)\in\NN^i$ such that ${l_1 + \cdots + l_i = k }$ and, importantly, $l_1,\ldots,l_i \geq 1$. In order to get rid of the first sum, we introduce the following notation \begin{align*} \Number_k = \Cases{ \Case{1}{k=0} \\ \Case{\NumberOfWords\MainSeries_k}{k\neq 0} } \end{align*} and allow $l_1,\ldots,l_i$ to equal $0$.
Then we rewrite (\ref{eq:One}) as \begin{align} \label{eq:Two} \Number_{k+1} = \sum_{\MultiIndex{l}\in\MultiIndexSetZero{n}} \NewtonSymbolGeneral{(l_1 + \cdots + l_n)}{l}{n}\cdot \Number_{l_1}\cdots\Number_{l_n}, \end{align} where $\MultiIndexSetZero{n}= \SetSuchThat{(l_1,\ldots,l_n)\in\NN^n}{ l_1 + \cdots + l_n = k }.$ Indeed, if $l_1,\ldots,l_n \in \NN$ and only $i$ of them, say $\hat l_1,\ldots,\hat l_i$, are nonzero, then \begin{align*} \NewtonSymbolGeneral{(l_1 + \cdots + l_n)}{l}{n}\cdot \Number_{l_1}\cdots\Number_{l_n} = \NewtonSymbolGeneral{(\hat l_1 + \cdots + \hat l_i)}{\hat l}{i}\cdot \Number_{\hat l_1}\cdots\Number_{\hat l_i}. \end{align*} Clearly, there are $\NewtonSymbol{n}{i}$ different multi-indices $(l_1,\ldots,l_n)$ satisfying this condition, and this is why the Newton symbol disappears in formula (\ref{eq:Two}). Finally, we see that by Lemma \ref{lem:Recurrence} the recursive formula (\ref{eq:Two}) for the numbers $\Number_k$ coincides with the one for the cardinalities of trees $\Cardinality\Trees{n}{k}$. Since the sequences coincide for $k = 0$, i.e., $\Number_0 = \Cardinality\Trees{n}{0} = 1$, we conclude using Lemma \ref{lem:Cardinality} that $$ \NumberOfWords\MainSeries_k = \Number_k = ((n-1)(k-1) +1)\cdot((n-1)(k-2) +1)\cdots n $$ for $k \geq 1$. The fact that $\NumberOfWords\MainSeries_0 = 0$ is trivial. \end{proof} \begin{remark} The above proof can be simplified for $n = 0, 1$, when $\NumberOfWords\MainSeries_k \leq 1$, but also for $n = 2$. In this case the recursive formula (\ref{eq:Recurence}) gives \begin{align*} \NumberOfWords\MainSeries_{k+1} &= 2\,\NumberOfWords\MainSeries_{k} + \sum_{j=1}^{k-1} \NewtonSymbol{k}{j} \NumberOfWords\MainSeries_{j}\,\NumberOfWords\MainSeries_{k-j}. \end{align*} Assuming the inductive hypothesis $\NumberOfWords\MainSeries_l = l!$ for $l\leq k$ we obtain \begin{align*} \NumberOfWords\MainSeries_{k+1} &= 2\, k! + \sum_{j=1}^{k-1} \NewtonSymbol{k}{j} j! (k-j)! = 2\, k! + (k-1)\, k!
= (k+1)!\, . \end{align*} \end{remark} \section{Examples} \label{sec:Examples} In this section we discuss the three simplest cases, $n=0,1,2$, and the case where only $u_0$ and $u_n$ are non-vanishing. We will need one additional intuitive notation. Namely, for $S\in\Series{\RR}{\Letters}$ such that $\ScalarProduct{S}{\EmptyWord}{} = 0$ we define the shuffle exponential \begin{align*} \ExpShuffle{S} = \sum_{k=0}^\infty \frac{S^{\ShuffleProduct k}}{k!}, \end{align*} where we recall that $S^{\ShuffleProduct 0} = \EmptyWord$ and $S^{\ShuffleProduct k} = S\ShuffleProduct S^{\ShuffleProduct (k-1)}$. If $n=0$, then the equation (\ref{eq:Main}) is $\dot x(t) = u_0(t), x(0) = 0$ and obviously the solution is $\Solution(t) = \IteratedInt{t}(\MainSeries)$, where $\MainSeries = \MainSeries_1 = a_0$ is homogeneous of degree 1. Let us pass to the case $n=1$, when (\ref{eq:Main}) is a linear equation $\dot x(t) = u_0(t) + u_1(t) x(t)$, $x(0) = 0$, which can be solved by variation of parameters. Let us see how it can be done using the series $\MainSeries$. Using the recursive formula (\ref{eq:Recurence}) \begin{align*} \MainSeries_0 &= 0, & \MainSeries_1 & = a_0, & \MainSeries_{k+1} &= \MainSeries_k\cdot a_1, \end{align*} we get $\MainSeries = a_0\cdot(\EmptyWord + a_1 + a_1^2 + a_1^3 + \cdots).$ If we use the identity $a_1^k = a_1^{\ShuffleProduct k}/k!$, we get \begin{align} \label{eq:MainSeriesNOne} \MainSeries = a_0\cdot \ExpShuffle{a_1}. \end{align} This expression looks nice, but there is a problem: the factor $a_0$ is on the left-hand side, and therefore the expression does not simplify when we apply $\IteratedInt{t}$ to it. In order to obtain the solution in the standard form, we prove the following lemma.
\begin{lemma} For $a_0, a_1 \in \Letters$ we have $$\sum_{k=0}^{\infty} a_0a_1^k = \sum_{k,l=0}^\infty (-1)^l\, a_1^k\ShuffleProduct(a_1^l a_0).$$ \end{lemma} \begin{proof} Observe first that for $k,l\in\NN$ \begin{align} \label{eq:Three} a_1^k\ShuffleProduct(a_1^l a_0) = a_1^la_0a_1^k + \NewtonSymbol{l+1}{1}a_1^{l+1} a_0 a_1^{k-1}+\cdots+ \NewtonSymbol{l+k}{k}a_1^{l+k} a_0. \end{align} Indeed, for $k=0$ the formula is correct. Using the inductive hypothesis for each $m \leq k$, and the defining formula (\ref{eq:ShuffleDef}) for the shuffle product, we get \begin{align*} a_1^{k+1}\ShuffleProduct(a_1^l a_0) &= (a_1^{k}\ShuffleProduct(a_1^l a_0))a_1 + (a_1^{k+1}\ShuffleProduct a_1^l)a_0 \\ &= a_1^la_0a_1^{k+1} + \NewtonSymbol{l+1}{1}a_1^{l+1} a_0 a_1^{k} +\cdots+ \NewtonSymbol{l+k}{k}a_1^{l+k} a_0 a_1 + (a_1^{k+1}\ShuffleProduct a_1^l)a_0. \end{align*} Since $a_1^{k+1}\ShuffleProduct a_1^l = \NewtonSymbol{l+k+1}{k+1}a_1^{l+k+1}$, we obtain formula (\ref{eq:Three}). Using the formula proved above we see that \begin{align*} \sum_{k,l=0}^\infty (-1)^l\, a_1^k\ShuffleProduct(a_1^l a_0) &= \sum_{k,l=0}^\infty \sum_{m=0}^{k} (-1)^l\, \NewtonSymbol{l+m}{m}a_1^{l+m} a_0 a_1^{k-m} \\ &= \sum_{k',l'=0}^\infty \left[\sum_{m=0}^{l'} (-1)^{l'-m}\, \NewtonSymbol{l'}{m}\right]\, a_1^{l'} a_0 a_1^{k'}, \end{align*} where in the last line we change the order of summation by setting $k'=k-m$ and $l'=l+m$. Since the expression in the square brackets equals $0^{l'}$, the sum over $l'$ reduces to the single summand with $l' = 0$, and therefore \begin{align*} \sum_{k,l=0}^\infty (-1)^l\, a_1^k\ShuffleProduct(a_1^l a_0) &= \sum_{k'=0}^\infty a_0 a_1^{k'}. \end{align*} This ends the proof.
\end{proof} From the lemma it follows that \begin{align*} \MainSeries &= \sum_{k=0}^{\infty} a_0a_1^k = \sum_{k,l=0}^\infty (-1)^l\, a_1^k\ShuffleProduct(a_1^l a_0) = \sum_{k=0}^\infty \frac {a_1^{\ShuffleProduct k}}{k!} \ShuffleProduct \left(\sum_{l=0}^\infty \frac{(-a_1)^{\ShuffleProduct l}}{l!}\cdot a_0\right) \\ &= \ExpShuffle{a_1}\ShuffleProduct(\ExpShuffle{-a_1}\cdot a_0). \end{align*} Since $\IteratedInt{t}:\Series{\RR}{\Letters}\to \RR$ is a shuffle-algebra homomorphism, it follows that $\IteratedInt{t}(\ExpShuffle{S}) = \exp(\IteratedInt{t}(S))$ for all series $S$ such that $\ScalarProduct{S}{\EmptyWord}{} = 0$, and therefore \begin{align*} \Solution(t) = \IteratedInt{t}(\MainSeries) = \exp\left(\int_0^t u_1(s)\, ds\right)\, \int_0^t \exp\left(-\int_0^s u_1(\tau)\, d\tau\right)\, u_0(s)\, ds, \end{align*} which is the standard formula. For $n=2$ the equation under consideration is \begin{align} \label{eq:Riccati} \dot x(t) &= u_0(t) + 2 u_1(t) x(t) + u_2(t) x^2(t), & x(0) &= 0, \end{align} which is a general Riccati equation. In this case, the series $\MainSeries$ is the unique solution of \begin{align} \label{eq:MainSeriesTwo} \MainSeries = a_0 + 2\, \MainSeries\cdot a_1 + \MainSeries\ShuffleProduct\MainSeries\cdot a_2, \end{align} and therefore $\MainSeries = \sum_k \MainSeries_k$, where the $\MainSeries_k$ are given by the recurrence \begin{align} \label{eq:RecurenceTwo} \MainSeries_0 &= 0, & \MainSeries_1 & = a_0, & \MainSeries_{2} &= 2\, \MainSeries_1\cdot a_1, & \MainSeries_{k+1} &= 2\, \MainSeries_k\cdot a_1 + \sum_{{l}=1}^{k-1} \MainSeries_{l}\ShuffleProduct\MainSeries_{k-l} \cdot a_2. \end{align} Let us mention that the Riccati equation is a Lie-Scheffers system of the type $\mathfrak{a}_1$ (see \cite{Carinena07Riccati,Carinena11Riccati} and \cite{Redheffer56Solutions,Redheffer57Riccati}).
More precisely, if we take vector fields \begin{align*} X_0(x) & = \frac \partial {\partial x}, & X_1(x) &= 2x \frac \partial {\partial x}, & X_2(x) &= x^2 \frac \partial {\partial x} \end{align*} on $\RR$, then they satisfy the following commutation relations \begin{align*} [X_0,X_1] &= 2X_0, & [X_0, X_2] &= X_1, & [X_1, X_2] &= 2 X_2. \end{align*} This means that the vector fields span a simple Lie algebra of the type $\mathfrak{a}_1$ (isomorphic to $\mathfrak{sl}(2,\RR)$), and thus \eqref{eq:Riccati} -- equivalent to $\dot x(t) = \sum u_i(t) X_i$ -- is a Lie-Scheffers system of this type. The solution of such a system, in terms of iterated integrals of the $u_i$'s, was given in \cite{Pietrzkowski12Explicit}. Let us recall the main theorems of that article. \begin{theorem}[Theorem 1 in \cite{Pietrzkowski12Explicit}] \label{thm:Pietrzkowski12Explicit} Let $X_a, X_b, X_c \in \TangentFields{\Manifold}$ be smooth tangent vector fields on a manifold $\Manifold$ satisfying $[X_a,X_b] = 2X_a,\, [X_a, X_c] = -X_b,\, [X_b, X_c] = 2 X_c$. Let $u_a, u_b, u_c : [0,T]\to \RR$ be fixed measurable functions. Then (locally) the solution $x:[0,T]\to\Manifold$ of the differential equation \begin{align*} \dot x(t) & = u_c(t) X_c + u_b(t) X_b + u_a(t) X_a, & x(0) &= x_0 \in \Manifold \end{align*} is of the form \begin{align} \label{eq:Solution} x(t) = \VectorFlow{\Xi_c(t) X_c} \VectorFlow{\Xi_b(t) X_b} \VectorFlow{\Xi_a(t) X_a} (x_0). \end{align} Here, $\Xi_a, \Xi_b, \Xi_c : [0,T] \to \RR$ are given by $\Xi_d(t) := \Upsilon^t(\WordsInSeriesOfLength{d}{})$ (for $d = a, b, c$), where \begin{align} \label{eq:ThreeFormulas} \WordsInSeriesOfLength{a}{} &= a\cdot \ExpShuffle{2\AOneSeries}, & \WordsInSeriesOfLength{b}{} &= {\AOneSeries}, & \WordsInSeriesOfLength{c}{} &= \ExpShuffle{2\AOneSeries}\cdot c, \end{align} and $\AOneSeries\in\Series{\RR}{\Letters}$ is the unique solution of the algebraic equation \begin{align*} \AOneSeries = b - a\cdot \ExpShuffle{2\AOneSeries}\cdot c. 
\end{align*} In particular, we have \begin{align*} b - a\cdot\WordsInSeriesOfLength{c}{} = \WordsInSeriesOfLength{b}{} = b - \WordsInSeriesOfLength{a}{}\cdot c. \end{align*} \end{theorem} \begin{theorem}[Theorem 2 in \cite{Pietrzkowski12Explicit}] \label{thm:Riccati} For fixed measurable functions $u_a, u_b, u_c:[0,T]\to\RR$ the function $\Xi_a:[0,T]\to\RR$, defined in Theorem \ref{thm:Pietrzkowski12Explicit} by $\Xi_a(t) = \Upsilon^t(a\cdot \ExpShuffle{2\AOneSeries})$, is (locally) the solution of the Riccati equation: \begin{align*} \dot\Xi_a(t) &= u_a(t) + 2u_b(t)\, \Xi_a(t) - u_c(t)\, \Xi_a^2(t), & \Xi_a(0) &= 0. \end{align*} \end{theorem} Observe that, taking $X_a = X_0$, $X_b = X_1$, $X_c = - X_2$ (and therefore $c = -a_2$), $u_a = u_0$, $u_b = u_1$, $u_c = u_2$, and $\Xi_a(t) = x(t)$, the system \eqref{eq:Riccati} can be put into the context of the above theorems in the following way. From Theorem \ref{thm:Riccati} we conclude that the solution of \eqref{eq:Riccati} is $x(t) = \IteratedInt t (\MainSeries)$, where \begin{align*} \MainSeries = a_0\cdot \ExpShuffle{2\AOneSeries}, \end{align*} and $\AOneSeries\in\Series{\RR}{\Letters}$ is, by Theorem \ref{thm:Pietrzkowski12Explicit}, the unique solution of the algebraic equation \begin{align} \label{eq:AOneSeries} \AOneSeries = a_1 + a_0\cdot \ExpShuffle{2\AOneSeries}\cdot a_2. \end{align} Additionally, from the last line in Theorem \ref{thm:Pietrzkowski12Explicit} we conclude that \begin{align} \label{eq:AOneSeriesAdd} \AOneSeries = a_1 + \MainSeries\cdot a_2. \end{align} A recursive formula for $\AOneSeries$ was also given in the cited article, but observe that the algebraic equation (\ref{eq:MainSeriesTwo}) for $\MainSeries$ is in fact simpler than the equation (\ref{eq:AOneSeries}) for $\AOneSeries$. Consequently, it is reasonable to invert this statement and say that the series $\AOneSeries$ is given by \eqref{eq:AOneSeriesAdd}, where $\MainSeries$ is the solution of (\ref{eq:MainSeriesTwo}). 
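Both the lemma above and the recurrence (\ref{eq:RecurenceTwo}) lend themselves to mechanical verification. The following sketch (an illustration only; words over the alphabet are encoded as tuples of the integers $0,1,2$) checks the lemma's identity degree by degree, then builds the homogeneous parts of the Riccati series from the recurrence and compares the truncated expansion of $\dot x = 1 + x^2$, $x(0)=0$, with its exact solution $\tan t$. For constant controls, the iterated integral of a length-$k$ word $a_{i_1}\cdots a_{i_k}$ is simply $u_{i_1}\cdots u_{i_k}\, t^k/k!$.

```python
import math

def shuffle(u, v):
    """Shuffle product of two words (tuples of letters) as a dict word -> coeff,
    using the recursion (ua) sh (vb) = (u sh (vb)) a + ((ua) sh v) b."""
    if not u:
        return {v: 1}
    if not v:
        return {u: 1}
    out = {}
    for w, c in shuffle(u[:-1], v).items():       # append the last letter of u
        out[w + (u[-1],)] = out.get(w + (u[-1],), 0) + c
    for w, c in shuffle(u, v[:-1]).items():       # append the last letter of v
        out[w + (v[-1],)] = out.get(w + (v[-1],), 0) + c
    return out

def lemma_sides(max_len):
    """Both sides of the lemma's identity, truncated to words of length <= max_len;
    the letters a0, a1 are encoded as 0, 1."""
    lhs = {(0,) + (1,) * k: 1 for k in range(max_len)}
    rhs = {}
    for k in range(max_len):
        for l in range(max_len - k):              # each summand has word length k + l + 1
            for w, c in shuffle((1,) * k, (1,) * l + (0,)).items():
                rhs[w] = rhs.get(w, 0) + (-1) ** l * c
    return lhs, {w: c for w, c in rhs.items() if c != 0}

def riccati_parts(depth):
    """Homogeneous parts P_1, ..., P_depth of the series for the Riccati equation
    x' = u0 + 2 u1 x + u2 x^2, built from the recurrence."""
    P = {1: {(0,): 1}}
    for k in range(1, depth):
        nxt = {}
        for w, c in P[k].items():                 # the term 2 P_k . a1
            nxt[w + (1,)] = nxt.get(w + (1,), 0) + 2 * c
        for l in range(1, k):                     # the terms (P_l sh P_{k-l}) . a2
            for w1, c1 in P[l].items():
                for w2, c2 in P[k - l].items():
                    for w, c in shuffle(w1, w2).items():
                        nxt[w + (2,)] = nxt.get(w + (2,), 0) + c1 * c2 * c
        P[k + 1] = nxt
    return P

def evaluate(P, u, t):
    """Truncated expansion for constant controls u = (u0, u1, u2)."""
    return sum(c * math.prod(u[i] for i in w) * t ** len(w) / math.factorial(len(w))
               for part in P.values() for w, c in part.items())
```

`lemma_sides(6)` returns two identical dictionaries, and with $u_0=u_2=1$, $u_1=0$ (so that \eqref{eq:Riccati} becomes $\dot x = 1 + x^2$) the value `evaluate(riccati_parts(7), (1, 0, 1), 0.1)` agrees with `math.tan(0.1)` to roughly $10^{-9}$; for instance, the length-three part is $2\,a_0a_0a_2$ (plus a word containing $a_1$ that vanishes here), whose integral $t^3/3$ matches the Taylor series of $\tan$.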
Now using Theorem \ref{thm:Main} we get the following corollary about the $\LieAlgebra{a}_1$-type Lie-Scheffers system considered in Theorem \ref{thm:Pietrzkowski12Explicit}. \begin{proposition} In the context of Theorem \ref{thm:Pietrzkowski12Explicit}, if $|u_d| \leq M$ for an $M>0$ ($d = a,b,c$), then the solution \eqref{eq:Solution} exists for $0 \leq t < \min\Set{T, 1/M}$. \end{proposition} \begin{proof} By Theorem \ref{thm:Analytic} the function $\IteratedInt t (\MainSeries)$ is well defined for $0 \leq t < \min\Set{T, 1/M}$. The above observations (in particular formula \eqref{eq:AOneSeriesAdd}) imply that $\IteratedInt t (\AOneSeries)$ is also well defined in this interval. Finally, by formulas \eqref{eq:ThreeFormulas} each function $\Xi_d(t) := \IteratedInt t (\WordsInSeriesOfLength{d}{})$ ($d=a, b, c$) is defined for $0 \leq t < \min\Set{T, 1/M}$, too. \end{proof} Let us observe that in each of the discussed cases the solution is of the form $\MainSeries = a_0\cdot\ExpShuffle{L}$, where $L\in\Series{\RR}{\Letters}$ is such that $\ScalarProduct{L}{\EmptyWord}{} = 0$, and the series $L$ in case $n$ reduces to $L$ in case $n-1$ when taking $u_n \equiv 0$. Indeed, $L = 0$ for $n=0$, $L = a_1$ for $n=1$, and $L = \AOneSeries$ for $n=2$ reduces, by (\ref{eq:AOneSeries}), to $a_1$ for $u_2 \equiv 0$. This observation suggests a question: does the same hold for all $n\in\NN$? Since the Riccati equation is essentially the only differential equation on the real line which is connected with the action of a group (namely the special linear group $SL(2)$) \cite{Carinena99Integrability, Carinena2011Lie}, one could anticipate that a generalization is impossible. Nevertheless, the problem remains open. The next example we consider is the one where there are only two non-vanishing summands, i.e., $\dot x(t) = \NewtonSymbol n m u(t)x^m(t) + u_n(t) x^n(t)$, $x(0) = 0$, and $0\leq m < n$ are fixed. 
The case $m\neq 0$ has the trivial solution $x(t) \equiv 0$, so in fact we consider \begin{align} \label{eq:n} \dot x(t) &= u_0(t) + u_n(t) x^n(t), & x(0) &= 0 \end{align} with $n\geq 1$ fixed. \begin{proposition} Let $u_0, u_n :[0,T]\to\RR$ be measurable, bounded functions. Then the solution of \eqref{eq:n} is $x(t) = \sum_{k=0}^{\infty} x_{k}(t)$, where the $x_{k}(t)$ are recursively given by $x_0(t) = \int_0^t u_0(s)\, ds$, and $$x_{k}(t) = \sum_{\MultiIndex l \in N(k)} \NewtonSymbolGeneral n l k \int_0^t (x_0(s))^{ l_1} \cdots (x_{k-1}(s))^{ l_{k}} \, u_n(s)\, ds,$$ where $N(k) = \SetSuchThat{(l_1,\ldots,l_{k}) \in \NN ^k}{n = l_1+\cdots +l_k,\, k -1 = l_2 + 2l_3 + \cdots + (k-1)l_k}.$ \end{proposition} Let us write the first few components of the expansion given in the above proposition. \begin{align*} x_0(t) & = \IteratedInt t (a_0) = \int_0^t u_0(s)\, ds \, , \\ x_1(t) & = \int_0^t (x_0(s))^n \, u_n(s)\, ds \, , \\ x_2(t) & = n \int_0^t (x_0(s))^{n-1} x_1(s) \, u_n(s)\, ds\, , \\ x_3(t) & = \NewtonSymbol n 2 \int_0^t (x_0(s))^{n-2} (x_1(s))^2\, u_n(s)\, ds + n\int_0^t (x_0(s))^{n-1} x_2(s) \, u_n(s)\, ds\, , \\ x_4(t) & = \NewtonSymbol n 3 \int_0^t (x_0(s))^{n-3} (x_1(s))^3\, u_n(s)\, ds \\ & \quad + n(n-1) \int_0^t (x_0(s))^{n-2} x_2(s) x_1(s) \, u_n(s)\, ds + n \int_0^t (x_0(s))^{n-1} x_3(s) \, u_n(s)\, ds\, . \end{align*} \begin{proof} The algebraic equation \eqref{eq:MainSeries} associated with the differential equation \eqref{eq:n} is \begin{align} \label{eq:MainSeriesN} \MainSeries = a_0 + \MainSeries^{\ShuffleProduct n}\cdot a_n. \end{align} Let us first show that the only non-vanishing homogeneous parts of $\MainSeries$ are $\MainSeries_{kn+1}$, where $k\in\NN$. We prove it by induction on $k$. The $k$-th hypothesis is that $\MainSeries_{kn+l} =0$ for all $l = 2,\ldots,n$. For $k=0$ the hypothesis is clearly correct. Assume it is correct for $k < K$ and let us prove that $\MainSeries_{Kn+2} = \cdots = \MainSeries_{Kn+n} = 0$. 
Using \eqref{eq:MainSeriesN} and the induction hypothesis we see that \begin{align*} \MainSeries_{Kn+l} = \sum_{(\MultiIndex{p},\MultiIndex{m})\in \tilde N} C(\MultiIndex{p},\MultiIndex{m})\cdot \MainSeries_1^{\ShuffleProduct p_1}\ShuffleProduct\cdots \ShuffleProduct \MainSeries_{(K-1)n + 1}^{\ShuffleProduct p_{K}}\ShuffleProduct \MainSeries_{Kn+2}^{\ShuffleProduct m_2}\ShuffleProduct\cdots \ShuffleProduct \MainSeries_{Kn + l-1}^{\ShuffleProduct m_{l-1}} \cdot a_n, \end{align*} where the sum is taken over all $(\MultiIndex{p},\MultiIndex{m}) = (p_1,\ldots,p_K,m_2,\ldots, m_{l-1}) \in \tilde N\subset \NN^{K+l-2}$, \begin{align*} \tilde N = \{n = p_1+\cdots +p_K &+ m_2 +\cdots+ m_{l-1}, \\ Kn + l -1 & = p_1 + \cdots + p_{K}((K-1)n+1) \\ & \quad + m_2(Kn+2)+\cdots+ m_{l-1}(Kn + l-1)\}, \end{align*} and $C(\MultiIndex{p},\MultiIndex{m})= \frac{n!}{\MultiIndex p !\, \MultiIndex m !}$, $\MultiIndex p ! = p_1!\cdots p_K!$, $\MultiIndex m ! = m_2!\cdots m_{l-1}!$. If $m_2 + \cdots + m_{l-1} \geq 1$, then from the second equality defining $\tilde N$ we have \begin{align*} l - 1 & = p_1 + p_2(n+1)+ \cdots + p_{K}((K-1)n+1) \\ &\quad + 2m_2 + \cdots + (l-1)m_{l-1} + (m_2 + \cdots + m_{l-1} -1)Kn, \end{align*} which is not less than $n$ (by the first equality defining $\tilde N$), a contradiction. If $m_2 + \cdots + m_{l-1} = 0$, then $p_1 + \cdots + p_K = n$ and therefore \begin{align*} l - 1 = n(1 + p_2 + 2p_3 + \cdots + (K-1)p_K - K). \end{align*} But $n$ does not divide $l-1 \in\Set{ 1,\ldots, n-1}$. This implies $\tilde N = \emptyset$ and $\MainSeries_{Kn +l} =0$ for $l = 2,\ldots, n$. 
We conclude that the solution of \eqref{eq:MainSeriesN} is of the form $\MainSeries = \sum_{k=0}^\infty \MainSeries_{kn +1} .$ Now, similarly as above, we see from \eqref{eq:MainSeriesN} that \begin{align*} \MainSeries_{kn+1} = \sum_{\MultiIndex{l}\in N(k)} \frac{n!}{l_1!\cdots l_k!}\cdot\MainSeries_1^{\ShuffleProduct l_1}\ShuffleProduct \MainSeries_{n+1}^{\ShuffleProduct l_2}\ShuffleProduct\cdots \ShuffleProduct \MainSeries_{(k-1)n + 1}^{\ShuffleProduct l_{k}} \cdot a_n, \end{align*} where the sum is taken over \begin{align*} N(k) = \SetSuchThat{(l_1,\ldots,l_{k}) \in \NN ^k}{n = l_1+\cdots +l_k,\, kn = l_1 + l_2(n+1)+ \cdots + l_{k}((k-1)n+1)}. \end{align*} Using the first equation defining $N(k)$ we simplify the second equation defining $N(k)$ as follows: \begin{align*} kn & = l_1 + \cdots + l_k + n(l_2 + 2l_3 + \cdots + (k-1)l_k) \\ & = n(1 + l_2 + 2l_3 + \cdots + (k-1)l_k). \end{align*} Therefore, $$N(k) = \SetSuchThat{(l_1,\ldots,l_{k}) \in \NN ^k}{n = l_1+\cdots +l_k,\, k -1 = l_2 + 2l_3 + \cdots + (k-1)l_k}.$$ Denoting $x_k(t) = \IteratedInt t (\MainSeries_{kn +1})$ and using the homomorphic property of $\IteratedInt t $ we obtain the assertion of the proposition. \end{proof} \section{Comparison with the Chen-Fliess approach} \label{sec:Comparison} In this section we compare the number of non-zero iterated integrals in two approaches: the one given in this article, and the Chen-Fliess one. Recall that in the latter approach \cite{Fliess81Fonctionnelles} we assume we have a differential equation \begin{align*} \dot x(t) &= u_0(t)\, X_0(x(t)) + u_1(t)\, X_1(x(t)) + \cdots + u_n(t)\, X_n(x(t)), \\ \nonumber x(0) &= 0, \end{align*} where $X_i(x) = x^i \frac \partial {\partial x}$, for $i=0,\ldots,n$, are vector fields on $\RR$. 
The solution is given by \begin{align} \label{eq:ChenFliess} x(t) = \sum_{v\in\WordsOfLength{}}\IteratedInt t (v) \, X_v(x)(0), \end{align} where for $v = a_{i_1}\cdots a_{i_k} \in \WordsOfLength {} $ we define $X_v(x)(0) := X_{i_1}\cdots X_{i_k}(x)(0)$ as a composition of vector fields acting on the function $h(x) = x$ and evaluated at the initial value $x_0 = 0$. Since $X_i(x)(0) \neq 0$ only for $i=0$, the sum can be significantly reduced. Our aim is to eliminate all unnecessary summands. Since the second derivative satisfies $\frac {\partial^2} {\partial x^2} x = 0$, we need to compute $X_v(x)(0)$ only modulo second and higher derivatives. \begin{lemma} For $k \geq 2$ and $v = a_{i_1}\cdots a_{i_k} \in\WordsOfLength {k}$ we have $$X_v = i_k(i_k + i_{k-1} - 1)(i_k + i_{k-1} + i_{k-2} - 2)\cdots (i_k + \cdots + i_{2} - k+2)x^{i_1 + \cdots + i_{k} - k+1} \frac {\partial} {\partial x} \mod \frac {\partial^2} {\partial x^2}.$$ \end{lemma} \begin{proof} We use induction on $k$. The case $k =2$ is clear. Assume $w = a_{i_2}\cdots a_{i_{k+1}} \in \WordsOfLength {k}$ and $v = a_{i_1}w \in \WordsOfLength {k+1}$. By the induction hypothesis $X_w = I x^{\alpha} \frac {\partial} {\partial x} \mod \frac {\partial^2} {\partial x^2},$ where $I = i_{k+1}(i_{k+1} + i_{k} - 1)\cdots (i_{k+1} + \cdots + i_{3} - k+2)$, $\alpha = i_2 + \cdots + i_{k+1} - k+1$, and thus $X_v = x^{i_1 + \alpha-1} I \alpha \frac {\partial} {\partial x} \mod \frac {\partial^2} {\partial x^2}$. This ends the proof. 
\end{proof} Using this lemma we conclude that for $v = a_{i_1}\cdots a_{i_k} \in\WordsOfLength {k}$, $X_v(x)(0) \neq 0$ only if $i_1 + \cdots + i_{k} = k-1$, and therefore \eqref{eq:ChenFliess} simplifies to \begin{align*} x(t) = \sum_{k=1}^\infty \sum_{\MultiIndex i \in M_0(k)} \IteratedInt t (a_{\MultiIndex i}) \, i_k(i_k + i_{k-1} - 1)\cdots (i_k + \cdots + i_{2} - k+2), \end{align*} where $a_\MultiIndex {i} := a_{i_1}\cdots a_{i_k}$, and the second sum is taken over all multi-indices $\MultiIndex i = (i_1,\ldots, i_k)$ in the set $M_0(k)\subset (\Set{0,\ldots,n})^k$ given by one equality and $k-1$ inequalities: \begin{align*} i_k + \cdots + i_{1} - k+1 & = 0, & i_k + \cdots + i_{2} - k+2 & \geq 0, \\ & & & \vdots \\ & & i_k + i_{k-1} - 1 & \geq 0, \\ & & i_k & \geq 0. \end{align*} At the $k$-th step of approximation there are $\Cardinality M_0(k)$ non-trivial integrals to compute. If we assume $n=\infty$, one can compute that these are the Catalan numbers, i.e., $\Cardinality M_0(k) = \frac 1 {k+1} \NewtonSymbol{2k}{k}$. The first ten of these numbers are 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796. We see that this growth is much faster than the growth 1, 2, 3, 5, 7, 11, 15, 22, 30, 42 of the number of integrals needed to compute the $k$-th step of the expansion in our approach, as we mentioned in Remark \ref{rem:Growth}. \section{Concluding remarks} In this article we formulated a scheme for expanding a solution of a general non-autonomous polynomial differential equation. The time-dependent homogeneous parts of the expansion were expressed in terms of iterated integrals. The formula for each of these parts was given recursively by (\ref{eq:Recurence}). The advantage of our approach is that it operates on the algebraic level. We use the shuffle product, which is an algebraic analogue of the multiplication of iterated integrals. 
Therefore, the algebraic formula can be easily transformed into the analytic one giving the expansion of the solution of the initial problem, as we stated in Theorem \ref{thm:Analytic}. Finally, there is still work to be done. One direction for development is to write an explicit formula for the algebraic series $\MainSeries$, preferably with the use of the shuffle product. It would also be important to find a deeper algebraic structure of this solution. Another direction is to rewrite the scheme for systems of non-autonomous polynomial differential equations and estimate the radius of convergence in this case. This is important, for example, for integrating higher order Lie-Scheffers systems. \section*{Acknowledgements} The author was partially supported by the Polish Ministry of Research and Higher Education grant NN201 607540, 2011-2014. \bibliographystyle{amsalpha}
https://arxiv.org/abs/1403.7460
On Expansion of a Solution of General Non-autonomous Polynomial Differential Equation
We give a recursive formula for an expansion of a solution of a general non-autonomous polynomial differential equation. The formula is given on the algebraic level with a use of shuffle product. This approach minimizes the number of integrations on each order of expansion. Using combinatorics of trees we estimate the radius of convergence of the expansion.
https://arxiv.org/abs/2005.05195
Solving Large-Scale Sparse PCA to Certifiable (Near) Optimality
Sparse principal component analysis (PCA) is a popular dimensionality reduction technique for obtaining principal components which are linear combinations of a small subset of the original features. Existing approaches cannot supply certifiably optimal principal components with more than $p=100s$ of variables. By reformulating sparse PCA as a convex mixed-integer semidefinite optimization problem, we design a cutting-plane method which solves the problem to certifiable optimality at the scale of selecting k=5 covariates from p=300 variables, and provides small bound gaps at a larger scale. We also propose a convex relaxation and greedy rounding scheme that provides bound gaps of $1-2\%$ in practice within minutes for $p=100$s or hours for $p=1,000$s and is therefore a viable alternative to the exact method at scale. Using real-world financial and medical datasets, we illustrate our approach's ability to derive interpretable principal components tractably at scale.
\section{Introduction} In the era of big data, interpretable methods for compressing a high-dimensional dataset into a lower-dimensional set which shares the same essential characteristics are imperative. Since the work of \citet{hotelling1933analysis}, principal component analysis (PCA) has been one of the most popular approaches for completing this task. Formally, given centered data $\bm{A} \in \mathbb{R}^{n \times p}$ and its normalized empirical covariance matrix $\bm{\Sigma}:=\frac{\bm{A} \bm{A}^\top }{{n-1}} \in \mathbb{R}^{p \times p}$, PCA selects one or more leading eigenvectors of $\bm{\Sigma}$ and subsequently projects $\bm{A}$ onto these eigenvectors. This can be achieved in $O(p^3)$ time by taking a singular value decomposition {\color{black}$\bm{\Sigma}=\bm{U}\bm{\Lambda}\bm{U}^\top$}. A common criticism\footnote{\color{black}A second criticism of PCA is that, as set up here, it uses the sample correlation or covariance matrix. This is a drawback, because sample covariance matrices are poorly conditioned estimators which over-disperse the sample eigenvalues, particularly in high-dimensional settings. In practice, this can be rectified by, e.g., using a shrinkage estimator \citep[see, e.g.,][]{ledoit2004well}. We do not do so here for simplicity, but we recommend doing so if using the techniques developed in this paper in practice.} of PCA is that the columns of {\color{black}$\bm{U}$} are not interpretable, since each eigenvector is a linear combination of all $p$ original features. This causes difficulties because: \begin{itemize}\itemsep0em \item {\color{black}In medical applications such as cancer detection, PCs generated during exploratory data analysis need to supply interpretable modes of variation \citep{hsu2014sparse}.} \item In scientific applications such as protein folding, each original co-ordinate axis has a physical interpretation, and the reduced set of co-ordinate axes should too. 
\item In finance applications such as investing capital across index funds, each non-zero entry in each eigenvector used to reduce the feature space incurs a transaction cost. \item If $p \gg n$, PCA suffers from a curse of dimensionality and becomes physically meaningless \citep{amini2008high}. \end{itemize} One common method for obtaining interpretable principal components is to stipulate that they are sparse, i.e., maximize variance while containing at most $k$ non-zero entries. This approach leads to the following non-convex mixed-integer quadratically constrained problem \citep[see][]{d2005direct}: \begin{align}\label{OriginalSPCA} \max_{\bm{x} \in \mathbb{R}^p} \ \bm{x}^\top \bm{\Sigma} \bm{x} \quad \text{s.t.} \quad \bm{x}^\top \bm{x}= 1,\ \vert \vert \bm{x} \vert \vert_0 \leq k, \end{align} where the constraint $\vert \vert \bm{x} \vert \vert_0 \leq k$ forces variance to be explained in a compelling fashion. \subsection{Background and Literature Review} Owing to sparse PCA's fundamental importance in a variety of applications including best subset selection \citep{d2008optimal}, natural language processing \citep{zhang2012sparse}, compressed sensing \citep{candes2007dantzig}, and clustering \citep{luss2010clustering}, three distinct classes of methods for addressing Problem \eqref{OriginalSPCA} have arisen. 
Namely, (a) heuristic methods which obtain high-quality sparse PCs in an efficient fashion but do not supply guarantees on the quality of the solution, (b) convex relaxations which obtain certifiably near-optimal solutions by solving a convex relaxation and rounding, and (c) exact methods which obtain certifiably optimal solutions, albeit in exponential time. \paragraph{Heuristic Approaches: } The importance of identifying a small number of interpretable principal components has been well-documented in the literature since the work of \citet{hotelling1933analysis} \citep[see also][]{jeffers1967two}, giving rise to many distinct heuristic approaches for obtaining high-quality solutions to Problem \eqref{OriginalSPCA}. Two interesting such approaches are to rotate dense principal components to promote sparsity \citep{kaiser1958varimax, richman1986rotation, jolliffe1995rotation}, or apply an $\ell_1$ penalty term as a convex surrogate to the cardinality constraint \citep{jolliffe2003modified, zou2006sparse}. Unfortunately, the former approach does not provide performance guarantees, while the latter approach {still results in} a non-convex optimization problem. More recently, motivated by the need to rapidly obtain high-quality sparse principal components at scale, a wide variety of first-order heuristic methods have emerged. The first such \textit{modern} heuristic was developed by \citet{journee2010generalized}, and involves combining the power method with thresholding and re-normalization steps. By pursuing similar ideas, several related methods have since been developed \citep[see][]{witten2009penalized, hein2010inverse, richtarik2012alternating, luss2013conditional, yuan2013truncated}. Unfortunately, while these methods are often very effective in practice, they sometimes badly fail to recover an optimal sparse principal component, and a practitioner using a heuristic method typically has no way of knowing when this has occurred. 
Indeed, \citet{berk2017} recently compared $7$ heuristic methods, including most of those reviewed here, on $14$ instances of sparse PCA, and found that none of the heuristic methods successfully recovered an optimal solution in all $14$ cases {(i.e., no heuristic was right all the time).} \paragraph{Convex Relaxations: } Motivated by the shortcomings of heuristic approaches on high-dimensional datasets, and the successful application of semi-definite optimization in obtaining high-quality approximation bounds in other applications \citep[see][]{goemans1995improved, wolkowicz2012handbook}, a variety of convex relaxations have been proposed for sparse PCA. The first such convex relaxation was proposed by \citet{d2005direct}, who reformulated sparse PCA as the rank-constrained mixed-integer semidefinite optimization problem (MISDO): \begin{equation}\label{sdospca1} \begin{aligned} \max_{\bm{X} \succeq \bm{0}} \quad \langle \bm{\Sigma}, \bm{X} \rangle\ \text{s.t.} \ \mathrm{tr}(\bm{X})=1, \ \Vert\bm{X}\Vert_0 \leq k^2,\ \mathrm{Rank}(\bm{X})=1, \end{aligned} \end{equation} where $\bm{X}$ models the outer product $\bm{x}\bm{x}^\top$. {\color{black}Note that, for a rank-one matrix $\bm{X}$, the constraint $\Vert \bm{X}\Vert_0 \leq k^2$ in \eqref{sdospca1} is equivalent to the constraint $\Vert\bm{x}\Vert_0 \leq k$ in \eqref{OriginalSPCA}, since a vector $\bm{x}$ is $k$-sparse if and only if its outer product $\bm{x}\bm{x}^\top$ is $k^2$-sparse.} {After performing this reformulation,} \citet{d2005direct} relaxed both the cardinality and rank constraints and instead solved \begin{equation}\label{sdospca1.relax} \begin{aligned} \max_{\bm{X} \succeq \bm{0}} \quad \langle \bm{\Sigma}, \bm{X} \rangle\ \text{s.t.} \ \mathrm{tr}(\bm{X})=1, \ \Vert\bm{X}\Vert_1 \leq k, \end{aligned} \end{equation} which supplies a valid upper bound on Problem \eqref{OriginalSPCA}'s objective. The semidefinite approach has since been refined in a number of follow-up works. 
Among others, \citet{d2008optimal}, building upon the work of \citet{ben2002tractable}, proposed a different semidefinite relaxation which supplies a sufficient condition for optimality via the primal-dual KKT conditions, and \citet{d2014approximation} analyzed the quality of the semidefinite relaxation in order to obtain high-quality approximation bounds. A common theme in these approaches is that they require solving large-scale semidefinite optimization problems. This presents difficulties for practitioners because state-of-the-art implementations of interior point methods such as \verb|Mosek| require $O(p^6)$ memory to solve Problem \eqref{sdospca1.relax}, and therefore currently cannot solve instances of Problem \eqref{sdospca1.relax} with $p \geq 300$ \citep[see][for a recent comparison]{bertsimas2019polyhedral}. {\color{black}Techniques other than interior point methods, e.g., ADMM or augmented Lagrangian methods as reviewed in \cite{majumdar2019survey}, could also be used to solve Problem \eqref{sdospca1.relax}, although they tend to require more runtime than IPMs to obtain a solution of a similar accuracy and can be numerically unstable for problem sizes where IPMs run out of memory \citep{majumdar2019survey}.} {A number of works have also studied the statistical estimation properties of Problem \eqref{sdospca1.relax}, by assuming an underlying probabilistic model. 
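Stepping back to the relaxation \eqref{sdospca1.relax} itself, its validity rests on a simple observation: for any feasible $k$-sparse unit vector $\bm{x}$, the matrix $\bm{X} = \bm{x}\bm{x}^\top$ satisfies $\mathrm{tr}(\bm{X}) = \Vert\bm{x}\Vert_2^2 = 1$ and, by Cauchy-Schwarz, $\Vert\bm{X}\Vert_1 = \Vert\bm{x}\Vert_1^2 \leq k\Vert\bm{x}\Vert_2^2 = k$. A quick pure-Python illustration of this containment (a sketch on random $k$-sparse unit vectors; function names are ours):

```python
import random

def random_k_sparse_unit(p, k, rng):
    """A random unit vector in R^p with at most k non-zero entries."""
    support = rng.sample(range(p), k)
    x = [0.0] * p
    for i in support:
        x[i] = rng.gauss(0.0, 1.0)
    norm = sum(v * v for v in x) ** 0.5
    return [v / norm for v in x]

def relaxation_feasible(x, k, tol=1e-9):
    """Check that X = x x^T satisfies tr(X) = 1 and ||X||_1 <= k."""
    trace = sum(v * v for v in x)
    l1 = sum(abs(a * b) for a in x for b in x)   # ||X||_1 = (||x||_1)^2
    return abs(trace - 1.0) < tol and l1 <= k + tol

rng = random.Random(0)
assert all(relaxation_feasible(random_k_sparse_unit(30, 5, rng), 5)
           for _ in range(100))
```

Every lifted matrix from a feasible point of \eqref{OriginalSPCA} thus lies in the feasible set of \eqref{sdospca1.relax}, which is why the relaxation's optimal value upper-bounds the sparse PCA optimum.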
Among others, \citet{amini2008high} have demonstrated the asymptotic consistency of Problem \eqref{sdospca1.relax} under a spiked covariance model once the number of samples used to generate the covariance matrix exceeds a certain threshold; see \cite{vu2012minimax, berthet2013optimal, wang2016statistical} for further results in this direction, and \cite{miolane2018phase} for a recent survey.} { In a complementary direction, \citet{dey2018convex} has recently questioned the modeling paradigm of lifting $\bm{x}$ to a higher dimensional space by instead considering the following (tighter) relaxation of sparse PCA in the original problem space \begin{align}\label{prob:l1relax_small} \max_{\bm{x} \in \mathbb{R}^p}\quad \bm{x}^\top \bm{\Sigma}\bm{x}\quad \text{s.t.}\quad \Vert\bm{x}\Vert_2=1, {\color{black}\Vert \bm{x}\Vert_1 \leq \sqrt{k}}. \end{align} Interestingly, Problem \eqref{prob:l1relax_small}'s relaxation provides a $\left(1+\sqrt{\tfrac{k}{k+1}}\right)^2$-factor approximation bound on Problem \eqref{OriginalSPCA}'s objective, while Problem \eqref{sdospca1.relax}'s upper bound may be exponentially larger in the worst case \citep{amini2008high}. This additional tightness, however, comes at a price: Problem \eqref{prob:l1relax_small} is NP-hard to solve—indeed, providing a constant-factor guarantee on sparse PCA is NP-hard \citep{magdon2017np}—and thus \eqref{prob:l1relax_small} is best formulated as a MIO, while Problem \eqref{sdospca1.relax} can be solved in polynomial time. } More recently, by building on the work of \citet{kim2001second}, \citet[]{bertsimas2019polyhedral} introduced a second-order cone relaxation of \eqref{sdospca1} which scales to $p=1000s$, and matches the semidefinite bound after imposing a small number of cuts. Moreover, it typically supplies bound gaps of less than $5\%$. However, it does not supply an \textit{exact} certificate of optimality, which is often desirable, for instance in medical applications. 
A fundamental drawback of existing convex relaxation techniques is that they are not coupled with rounding schemes for obtaining high-quality feasible solutions. This is problematic, because optimizers are typically interested in obtaining high-quality solutions, rather than certificates. In this paper, we take a step in this direction, by deriving new convex relaxations that naturally give rise to greedy and random rounding schemes. The fundamental point of difference between our relaxations and existing relaxations is that we derive our relaxations by rewriting sparse PCA as a MISDO and dropping an integrality constraint, rather than using more ad-hoc techniques. \paragraph{Exact Methods:} Motivated by the successful application of mixed-integer optimization for solving statistical learning problems such as best subset selection \citep{bertsimas2020sparse} and sparse classification \citep{bertsimas2017sparse}, several exact methods for solving sparse PCA to certifiable optimality have been proposed. The first branch-and-bound algorithm for solving Problem \eqref{OriginalSPCA} was proposed by \citet{moghaddam2006spectral}, by applying norm equivalence relations to obtain valid bounds. However, \citet{moghaddam2006spectral} did not couple their approach with high-quality initial solutions and tractable bounds to prune partial solutions. Consequently, they could not scale their approach beyond $p=40$. A more sophisticated branch-and-bound scheme was recently proposed by \citet{berk2017}, which couples tighter Gershgorin Circle Theorem bounds \citep[][Chapter 6]{horn1990matrix} with a fast heuristic due to \cite{yuan2013truncated} to solve problems up to $p=250$. However, their method cannot scale beyond $p=100$s, because the bounds obtained are too weak to avoid enumerating a sizeable portion of the tree. 
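The combinatorial core these exact methods grapple with can be made concrete: for any fixed support $S$ with $|S| = k$, the inner maximization in \eqref{OriginalSPCA} equals $\lambda_{\max}(\bm{\Sigma}_{S,S})$, the leading eigenvalue of the corresponding principal submatrix, so tiny instances can be solved outright by enumerating all $\binom{p}{k}$ supports. A minimal pure-Python sketch (illustrative only; power iteration for the leading eigenvalue):

```python
import itertools

def leading_eigenvalue(M, iters=500):
    """Leading eigenvalue of a small symmetric PSD matrix via power iteration."""
    n = len(M)
    x = [1.0 / n ** 0.5] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = sum(yi * yi for yi in y) ** 0.5
        if lam == 0.0:
            return 0.0
        x = [yi / lam for yi in y]
    return lam

def sparse_pca_bruteforce(Sigma, k):
    """Optimal value and support of sparse PCA by enumerating all k-subsets."""
    p = len(Sigma)
    best_val, best_S = -1.0, None
    for S in itertools.combinations(range(p), k):
        sub = [[Sigma[i][j] for j in S] for i in S]
        val = leading_eigenvalue(sub)
        if val > best_val:
            best_val, best_S = val, S
    return best_val, best_S
```

For instance, with $\bm{\Sigma}$ = [[1.5, 0, 0, 0], [0, 1, 0.9, 0], [0, 0.9, 1, 0], [0, 0, 0, 0.5]] and $k=2$, coordinates 1 and 2 are coupled strongly enough that the optimal support is $\{1, 2\}$ with value $1.9$, beating the largest single variance $1.5$. The exponential number of supports is precisely what branch-and-bound and cutting-plane methods are designed to avoid enumerating.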
Recently, the authors developed a framework for reformulating convex mixed-integer optimization problems with logical constraints \citep[see][]{bertsimas2019unified}, and demonstrated that this framework allows a number of problems of practical relevance to be solved to certifiable optimality via a cutting-plane method. In this paper, we build upon this work by reformulating Problem \eqref{OriginalSPCA} as a \textit{convex} mixed-integer semidefinite optimization problem, and leverage this reformulation to design a cutting-plane method which solves sparse PCA to certifiable optimality. A key feature of our approach is that we need not solve any semidefinite subproblems. Rather, we use {concepts} from SDO to design a semidefinite-free approach which uses simple linear algebra techniques. { Concurrently with our initial submission, \citet{li2020exact} also attempted to reformulate sparse PCA as an MISDO, and proposed valid inequalities for strengthening their formulation and local search algorithms for obtaining high-quality solutions at scale. Our work differs in the following two ways. First, we propose strengthening the MISDO formulation using the Gershgorin circle theorem and demonstrate that this allows our MISDO formulation to scale to problems with $p=100$s of features, while they do not, to our knowledge, solve any MISDOs to certifiable optimality where $p>13$. Second, we develop tractable second-order cone relaxations and greedy rounding schemes which allow practitioners to obtain certifiably near optimal sparse principal components even in the presence of $p=1,000$s of features. More remarkable than the differences between the works, however, are the similarities: more than $15$ years after \citet{d2005direct}'s landmark paper first appeared, both works proposed reformulating sparse PCA as an MISDO less than a week apart. 
In our view, this demonstrates that the ideas contained in both works transcend sparse PCA, and can perhaps be applied to other problems in the optimization literature which have not yet been formulated as MISDOs.} \subsection{Contributions and Structure} The main contributions of the paper are twofold. First, we reformulate sparse PCA exactly as a mixed-integer semidefinite optimization problem, a reformulation which is, to the best of our knowledge, novel. Second, we leverage this MISDO formulation to design efficient algorithms for solving non-convex mixed-integer quadratic optimization problems, such as sparse PCA, to certifiable optimality or {\color{black}within $1-2\%$ of optimality in practice} at a larger scale than existing state-of-the-art methods. The structure and detailed contributions of the paper are as follows: \begin{itemize}\itemsep0em \item In Section \ref{sec:exact.misdo}, we reformulate Problem \eqref{OriginalSPCA} as a mixed-integer SDO. { We propose a cutting-plane method which solves it to certifiable optimality in Section \ref{sec:exact.oa}. Our algorithm decomposes the problem into a purely binary master problem and a semidefinite separation problem. Interestingly, we show in Section \ref{sec:exact.subpb} that the separation problems can be solved efficiently via a leading eigenvalue computation and do not require any SDO solver. Finally, {\color{black}the} Gershgorin Circle theorem has been empirically successful for deriving upper bounds on the objective value of \eqref{OriginalSPCA} \citep{berk2017}. 
We theoretically analyze the quality of such bounds in Section \ref{sec:exact.circle} and show in Section \ref{sec:exact.oval} that tighter bounds derived from Brauer's ovals of Cassini theorem can also be imposed via mixed-integer second-order cone constraints.} \item In Section \ref{sec:relaxandround}, we analyze the semidefinite reformulation's convex relaxation, and introduce a greedy rounding scheme (Section \ref{ssec:relax.bool}) which supplies high-quality solutions to Problem \eqref{OriginalSPCA} in polynomial time, {\color{black} together with a sub-optimality gap (see numerical experiments in Section \ref{sec:numres})}. To further improve the quality of the rounded solution and the optimality gap, we introduce strengthening inequalities (Section \ref{ssec:validineq}). While solving the strengthened formulation exactly would result in an intractable MISDO problem, solving its relaxation and rounding the solution is an efficient strategy for returning high-quality solutions with a {\color{black} numerical certificate} of near-optimality. \item In Section \ref{sec:numres}, we apply the cutting-plane and greedy rounding methods to derive optimal and near-optimal sparse principal components for problems in the UCI dataset. We also compare our method's performance against the method of \citet{berk2017}, and find that our exact cutting-plane method performs comparably, while our relax+round approach successfully scales to problems an order of magnitude larger {\color{black}and often returns solutions which outperform the exact method at sizes to which the exact method cannot currently scale}. A key feature of our numerical success is that we sidestep the computational difficulties in solving SDOs at scale by proposing semidefinite-free methods for solving the convex relaxations, i.e., solving second-order cone relaxations.
\end{itemize} \paragraph{Notation: } We let nonbold face characters such as $b$ denote scalars, lowercase bold faced characters such as $\bm{x}$ denote vectors, uppercase bold faced characters such as $\bm{X}$ denote matrices, and calligraphic uppercase characters such as $\mathcal{Z}$ denote sets. We let $[p]$ denote the set of running indices $\{1, \ldots, p\}$. We let $\mathbf{e}$ denote a vector of all $1$'s, $\bm{0}$ denote a vector of all $0$'s, and $\mathbb{I}$ denote the identity matrix, with dimension implied by the context. We also use an assortment of matrix operators. We let $\langle \cdot,\cdot \rangle$ denote the Euclidean inner product between two matrices, $\Vert \cdot \Vert_F$ denote the Frobenius norm of a matrix, $\Vert \cdot \Vert_\sigma$ denote the spectral norm of a matrix, $\Vert \cdot \Vert_*$ denote the nuclear norm of a matrix, $\bm{X}^\dag$ denote the Moore-Penrose pseudoinverse of a matrix $\bm{X}$, and $S_+^p$ denote the $p \times p$ positive semidefinite cone; see \citet{horn1990matrix} for a general theory of matrix operators. \section{An Exact Mixed-Integer Semidefinite {\color{black} Optimization Algorithm}}\label{sec:reformulation} In Section \ref{sec:exact.misdo}, we reformulate Problem \eqref{OriginalSPCA} as a convex mixed-integer semidefinite optimization problem. From this formulation, we propose an outer-approximation scheme (Section \ref{sec:exact.oa}) which, as we show in Section \ref{sec:exact.subpb}, does not require solving any semidefinite problems.
We improve the convergence of the algorithm by deriving high-quality upper bounds on Problem \eqref{OriginalSPCA}'s objective value in Sections \ref{sec:exact.circle} and \ref{sec:exact.oval}. {\color{black} \subsection{A Mixed-Integer Semidefinite Reformulation} \label{sec:exact.misdo} } Starting from the rank-constrained SDO formulation \eqref{sdospca1}, we introduce binary variables $z_i$ to model whether $X_{i,j}$ is non-zero, via the logical constraint $X_{i,j}=0$ if $z_i=0$; note that we need not require that $X_{i,j}=0$ if $z_j=0$, since $\bm{X}$ is a symmetric matrix. By enforcing the logical constraint via $-M_{i,j}z_i \leq X_{i,j}\leq M_{i,j}z_i$ for sufficiently large $M_{i,j}>0$, Problem \eqref{sdospca1} becomes \begin{align*} \max_{\bm{z} \in \{0, 1\}^p: \bm{e}^\top \bm{z} \leq k} \ \max_{\bm{X} \in S^p_+} \quad & \langle \bm{\Sigma}, \bm{X} \rangle\\ \text{s.t.} \quad & \mathrm{tr}(\bm{X})=1,\ -M_{i,j}z_i \leq X_{i,j} \leq M_{i,j}z_i \ \forall i, j \in [p],\ \mathrm{Rank}(\bm{X})=1. \end{align*} To obtain a MISDO reformulation, we omit the rank constraint. In general, omitting a rank constraint generates a relaxation and induces some loss of optimality. Remarkably, the omission is lossless in this case. Indeed, the objective is convex (in fact linear), so it is maximized at an extreme point of the feasible region, and such an extreme point is a rank-one matrix $\bm{X}$.
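As a quick numerical sanity check of this observation (a minimal sketch in \texttt{numpy}; the random covariance matrix is purely illustrative), the linear objective over $\{\bm{X} \succeq \bm{0} : \mathrm{tr}(\bm{X})=1\}$ attains its maximum $\lambda_{\max}(\bm{\Sigma})$ at the rank-one matrix $\bm{x}\bm{x}^\top$ built from a leading eigenvector:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
Sigma = A @ A.T  # an illustrative covariance matrix

# Leading eigenpair of Sigma (eigh returns eigenvalues in ascending order).
eigvals, eigvecs = np.linalg.eigh(Sigma)
lam_max, x = eigvals[-1], eigvecs[:, -1]

# The rank-one matrix X = x x^T is feasible (PSD, trace one) and attains
# <Sigma, X> = lambda_max(Sigma), which upper-bounds <Sigma, X> over the
# whole set {X PSD : tr(X) = 1}, since <Sigma, X> <= lambda_max * tr(X).
X = np.outer(x, x)
assert np.isclose(np.trace(X), 1.0)
assert np.isclose(np.sum(Sigma * X), lam_max)
assert np.linalg.matrix_rank(X) == 1
```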
We formalize this observation in the following theorem; note that a similar result (although in the context of computing Restricted Isometry constants, and with a different proof) exists \citep[][]{gally2016computing}: \begin{theorem}\label{thm:misdpreformthm} Problem \eqref{OriginalSPCA} attains the same optimal objective value as the problem: \begin{equation}\label{misdpprimal} \begin{aligned} \max_{\bm{z} \in \{0, 1\}^p: \bm{e}^\top \bm{z} \leq k} \ \max_{\bm{X} \in S^p_+} \quad & \langle \bm{\Sigma}, \bm{X} \rangle&\\ \text{s.t.} \quad & \mathrm{tr}(\bm{X})=1 \quad & \\ & \vert X_{i,j}\vert \leq M_{i,j}z_i \quad & \forall i, j \in [p],\\ & \color{black}\sum_{j=1}^p \vert X_{i,j}\vert \leq \sqrt{k} z_i \quad & \forall i \in [p], \end{aligned} \end{equation} where $M_{i,i}=1$, {\color{black}and} $M_{i,j}=\frac{1}{2}$ if $j \neq i$. \end{theorem} \begin{remark} Observe that if {\color{black} $k \geq \sqrt{p}$ and} we set $M_{i,j}=1 \ \forall i,j \in [p]$ in Problem \eqref{misdpprimal} {\color{black}and omit the valid inequality $\sum_{j=1}^p \vert X_{i,j}\vert \leq \sqrt{k} z_i$,} then the optimal value of the continuous relaxation is trivially $\lambda_{\max}(\bm{\Sigma})$. Indeed, letting $\bm{x}$ be a leading eigenvector of the unconstrained problem (where $\Vert \bm{x}\Vert_2=1$), we can set $z_i=\vert x_i\vert \geq \vert x_i \vert \vert x_j\vert${\color{black}, where the inequality holds since $\Vert \bm{x}\Vert_2=1$,} and $X_{i,j}=x_ix_j$, meaning {\color{black}(a) $\sum_i z_i=\Vert \bm{x}\Vert_1 \leq \sqrt{p} \leq k$ {\color{black}by norm equivalence} and (b) $\vert X_{i,j}\vert \leq z_i$}, and thus $(\bm{X}, \bm{z})$ solves this continuous relaxation. Therefore, setting $M_{i,j}=\frac{1}{2}$ if $j\neq i$ {\color{black}and/or imposing the valid inequality $\sum_{j=1}^p \vert X_{i,j}\vert \leq \sqrt{k} z_i$} is necessary for obtaining non-trivial relaxations {\color{black}whenever $k$ is large}.
\end{remark} {\color{black} } \begin{proof} It suffices to demonstrate that for any feasible solution to \eqref{OriginalSPCA} we can construct a feasible solution to \eqref{misdpprimal} with an equal or greater payoff, and vice versa. \begin{itemize}\itemsep0em \item Let $\bm{x} \in \mathbb{R}^{p}$ be a feasible solution to \eqref{OriginalSPCA}. Then, {\color{black}since $\Vert \bm{x}\Vert_1 \leq \sqrt{k}$}, $(\bm{X}:=\bm{x}\bm{x}^\top, \bm{z})$ is a feasible solution to \eqref{misdpprimal} with equal cost, where $z_i=1$ if $\vert x_i\vert>0$ and $z_i=0$ otherwise. \item Let $(\bm{X}, \bm{z})$ be a feasible solution to Problem \eqref{misdpprimal}, and let $\bm{X}=\sum_{i=1}^p \sigma_i \bm{x}_i\bm{x}_i^\top$ be an eigendecomposition of $\bm{X}$, where $\bm{e}^\top \bm{\sigma}=1, \bm{\sigma} \geq \bm{0}$, {\color{black}and $\Vert \bm{x}_i\Vert_2=1 \ \forall i \in [p]$}. Observe that $\Vert\bm{x}_i\Vert_0 \leq k\ \forall i \in [p],$ since we can perform the eigendecomposition on the submatrix of $\bm{X}$ induced by $\bm{z}$, and ``pad'' out the remaining entries of each $\bm{x}_i$ with $0$s to obtain the decomposition of $\bm{X}$. Accordingly, let us set $\hat{\bm{x}}:=\arg\max_{i}[\bm{x}_i^\top \bm{\Sigma}\bm{x}_i]$. Then, $\hat{\bm{x}}$ is a feasible solution to \eqref{OriginalSPCA} with an equal or greater payoff. \end{itemize} Finally, we let $M_{i,i}=1$ and $M_{i,j}=\frac{1}{2}$ if $i \neq j$, as the $2 \times 2$ minors imply $X_{i,j}^2 \leq X_{i,i}X_{j,j}\leq \frac{1}{4}$ whenever $i \neq j$ \citep[c.f.][Lemma 1]{gally2016computing}. \end{proof} Theorem \ref{thm:misdpreformthm} reformulates Problem \eqref{OriginalSPCA} as a mixed-integer SDO. Therefore, we can solve Problem \eqref{misdpprimal} using general branch-and-cut techniques for semidefinite optimization problems \citep[see][]{gally2018framework, kobayashi2019branch}.
However, this approach is not scalable, as it requires solving a large number of semidefinite subproblems, and the community does not know how to efficiently warm-start interior point methods (IPMs) for SDOs. Alternatively, we propose a saddle-point reformulation of Problem \eqref{misdpprimal} which avoids the computational difficulty of solving a large number of SDOs by exploiting problem structure, as we will show in Section \ref{sec:exact.subpb}. The following result reformulates Problem \eqref{misdpprimal} as a max-min saddle-point problem amenable to outer-approximation: \begin{theorem}\label{thm:saddlepointtheorem} Problem \eqref{misdpprimal} attains the same optimal value as the following problem: \begin{align}\label{prob:saddlepointproblem} \max_{\bm{z} \in \{0, 1\}^p: \ \bm{e}^\top \bm{z} \leq k} \quad f(\bm{z})\\ \label{eqn:separation} \quad \text{ where } \quad f(\bm{z}):= \min_{\lambda \in \mathbb{R}, \bm{\alpha} \in \mathbb{R}^{p \times p} {\color{black},\bm{\beta} \in \mathbb{R}^p}} \quad & \lambda +\sum_{i=1}^p {z}_i {\color{black}\left(\sum_{j=1}^p M_{i,j}\max(0, \vert \alpha_{i,j}\vert-\beta_i)+{\color{black}\sqrt{k}\beta_i}\right)}\\ \text{s.t.} \quad & \lambda\mathbb{I}+\bm{\alpha} \succeq \bm{\Sigma}.\nonumber \end{align} \end{theorem} \begin{remark} The above theorem demonstrates that $f(\bm{z})$ is concave in $\bm{z}$, since it rewrites $f$ as an infimum of functions which are linear in $\bm{z}$ \citep[][]{boyd2004convex}.
\end{remark} \begin{proof} Let us {\color{black}introduce auxiliary variables $U_{i,j}$ to model the absolute value of $X_{i,j}$ and rewrite the inner optimization problem of \eqref{misdpprimal} as \begin{equation}\label{prob:sparsepcainnerprimal} \begin{aligned}\color{black} f(\bm{z}):= \quad & \max_{\bm{X} \succeq \bm{0}, \bm{U}} \quad & \langle \bm{\Sigma}, \bm{X}\rangle\\ \text{s.t.} \quad & \mathrm{tr}(\bm{X})=1, \quad & & [\lambda]\\ & \color{black} U_{i,j} \leq M_{i,j}z_i \ &\forall i, j \in [p], \quad & [\sigma_{i,j}]\\ & \color{black} \vert X_{i,j}\vert \leq U_{i,j} \ &\forall i, j \in [p],\quad & [\alpha_{i,j}]\\ & \color{black} \sum_{j=1}^p U_{i,j} \leq \sqrt{k} z_i &\forall i \in [p], \quad & [\beta_{i}]\\ \end{aligned} \end{equation} where we associate dual multipliers with primal constraints in square brackets.} For $\bm{z}$ such that $\bm{e}^\top \bm{z} \geq 1$, the maximization problem induced by $f(\bm{z})$ satisfies Slater's condition \citep[see, e.g.,][Chapter 5.2.3]{boyd2004convex}; hence strong duality applies and leads to {\color{black} \begin{align*} f(\bm{z}) = \min_{\substack{\lambda \\ \bm{\sigma},\bm{\alpha},\bm{\beta} \geq \bm{0}}} \quad & \lambda+\sum_{i,j}\sigma_{i,j}M_{i,j}z_i+\sum_{i=1}^p \beta_i \sqrt{k}z_i\\ \text{s.t.} \quad & \lambda \mathbb{I}+\bm{\alpha} \succeq \bm{\Sigma}, \quad \vert \alpha_{i,j}\vert \leq \sigma_{i,j}+\beta_i. \end{align*}} {\color{black} We eliminate $\bm{\sigma}$ from the dual problem above by optimizing over $\sigma_{i,j}$ and setting $\sigma^\star_{i,j}=\max(0, \vert \alpha_{i,j}\vert-\beta_i)$.} Note that for $\bm{z}=\bm{0}$, the primal subproblem is infeasible and the dual subproblem has objective $-\infty$, but this case can safely be ignored since $\bm{z}=\bm{0}$ is certainly suboptimal.
\end{proof} \subsection{A Cutting-Plane Method} \label{sec:exact.oa} Theorem \ref{thm:saddlepointtheorem} shows that evaluating $f(\bm{\hat{z}})$ yields, at no additional cost, the globally valid overestimator $$f(\bm{z}) \leq f(\hat{\bm{z}})+\bm{g}_{\hat{\bm{z}}}^\top(\bm{z}-\hat{\bm{z}}),$$ where $\bm{g}_{\hat{\bm{z}}}$ is a supergradient of $f$ at $\bm{\hat{z}}$. In particular, we have $$g_{\hat{\bm{z}},i}={\color{black}\left(\sum_{j=1}^p M_{i,j}\max\left(0, \vert \alpha_{i,j}^\star(\bm{\hat{z}})\vert-\beta_i^\star(\hat{\bm{z}})\right)+{\color{black}\sqrt{k}\beta_i^\star(\hat{\bm{z}})}\right)},$$ where {\color{black}$\bm{\alpha}^\star(\hat{\bm{z}})$, $\bm{\beta}^\star(\hat{\bm{z}})$ constitute an optimal choice of $(\bm{\alpha}, \bm{\beta})$ for a fixed $\hat{\bm{z}}$}. This observation leads to an efficient strategy for maximizing $f(\bm{z})$: iteratively maximizing and refining a piecewise linear upper estimator of $f(\bm{z})$. This strategy is called outer-approximation (OA), and was originally proposed by \citet{duran1986outer}. OA works by iteratively constructing estimators of the following form at each {\color{black}iteration} $t$: \begin{align} f^t(\bm{z})=\min_{1 \leq i \leq t} \left\{f(\bm{z}_i)+\bm{g}_{\bm{z}_i}^\top (\bm{z}-\bm{z}_i)\right\}. \end{align} After constructing each overestimator, we maximize $f^t(\bm{z})$ over $\{\bm{z} \in \{0, 1\}^p: \bm{e}^\top \bm{z} \leq k\}$ to obtain $\bm{z}_t$, and evaluate $f(\cdot)$ and its supergradient at $\bm{z}_t$. This procedure yields a non-increasing sequence of overestimators $\{f^t(\bm{z}_t)\}_{t=1}^T$ which converges to the optimal value of $f(\bm{z})$ within a finite number of iterations {\color{black}$T \leq {p \choose 1}+\ldots+{p \choose k}$}, since the feasible set is finite and OA never visits a point twice. Additionally, we can avoid solving a different MILO at each OA iteration by integrating the entire algorithm within a single branch-and-bound tree, as proposed by \cite{quesada1992lp}, using \verb|lazy constraint callbacks|.
Lazy constraint callbacks are now standard components of modern MILO solvers such as \verb|Gurobi| or \verb|CPLEX|, and substantially speed up OA. We formalize this procedure in Algorithm \ref{alg:cuttingPlaneMethod}; note that $\partial f(\bm{z}_{t+1})$ denotes the set of supergradients of $f$ at $\bm{z}_{t+1}$. \begin{algorithm*} \caption{An outer-approximation method for Problem \eqref{OriginalSPCA}} \label{alg:cuttingPlaneMethod} \begin{algorithmic}\normalsize \REQUIRE Initial solution $\bm{z}_1$ \STATE $t \leftarrow 1 $ \REPEAT \STATE Compute $(\bm{z}_{t+1}, \theta_{t+1})$, a solution of {\vspace{-2mm} \begin{align*} \max_{\bm{z} \in\{0, 1\}^p: \bm{e}^\top \bm{z} \leq k, \theta} \: \theta \quad \mbox{ s.t. } \theta \leq f(\bm{z}_i) + \bm{g}_{\bm{z}_i}^\top (\bm{z}-\bm{z}_i) \ \forall i \in [t], \end{align*}}\vspace{-5mm} \STATE Compute $f(\bm{z}_{t+1})$ and $\bm{g}_{\bm{z}_{t+1}} \in \partial f(\bm{z}_{t+1})$ by solving \eqref{eqn:separation} \STATE $t \leftarrow t+1 $ \UNTIL{$ f(\bm{z}_t)-\theta_t \leq \varepsilon$} \RETURN $\bm{z}_t$ \end{algorithmic} \end{algorithm*} \subsection{A { Semidefinite-free} Subproblem Strategy} \label{sec:exact.subpb} Our derivation and analysis of Algorithm \ref{alg:cuttingPlaneMethod} indicate that we can solve Problem \eqref{OriginalSPCA} to certifiable optimality by solving a (potentially large) number of semidefinite subproblems \eqref{eqn:separation}, which might be prohibitive in practice. Therefore, we now derive a computationally efficient subproblem strategy which crucially does not require solving \textit{any} semidefinite programs.
Formally, we have the following result: \begin{theorem}\label{compefficientsubproblem}\color{black} For any $\bm{z} \in \{0, 1\}^p: \bm{e}^\top \bm{z} \leq k$, an optimal choice of the dual variables in \eqref{eqn:separation} is given by \begin{align}\color{black} \lambda=\lambda_{\max}\left(\bm{\Sigma}_{1,1}\right),\ \hat{\bm{\alpha}}=\begin{pmatrix} \hat{\bm{\alpha}}_{1,1} & \hat{\bm{\alpha}}_{1,2}\\ \hat{\bm{\alpha}}_{1,2}^\top & \hat{\bm{\alpha}}_{2,2}\end{pmatrix}=\begin{pmatrix} \bm{0} & \bm{0}\\ \bm{0} & \bm{\Sigma}_{2,2}-\lambda \mathbb{I}+\bm{\Sigma}_{1,2}^\top \left(\lambda \mathbb{I}-\bm{\Sigma}_{1,1}\right)^\dag \bm{\Sigma}_{1,2}\end{pmatrix},\\ {\color{black}\beta_{i}=(1-z_i)\begin{pmatrix}\vert\hat{\alpha}_{i,1}\vert, \vert\hat{\alpha}_{i,2}\vert, \ldots, \vert\hat{\alpha}_{i,i}\vert, \vert\hat{\alpha}_{i,i}\vert, \ldots, \vert\hat{\alpha}_{i,p}\vert\end{pmatrix}_{[\ceil{\sqrt{k}\ }]} \ \forall i \in [p],} \end{align} where $\lambda_{\max}(\cdot)$ denotes the leading eigenvalue of a matrix; $\hat{\bm{\alpha}}=\begin{pmatrix} \hat{\bm{\alpha}}_{1,1} & \hat{\bm{\alpha}}_{1,2}\\ \hat{\bm{\alpha}}_{1,2}^\top & \hat{\bm{\alpha}}_{2,2}\end{pmatrix}$ is a {\color{black}permutation of $\hat{\bm{\alpha}}$} such that $\hat{\bm{\alpha}}_{1,1}$ (resp. $\hat{\bm{\alpha}}_{2,2}$) collects the entries of $\hat{\bm{\alpha}}$ where $z_i=z_j=1$ (resp. $z_i=z_j=0$); $\bm{\Sigma}$ is partitioned similarly{\color{black}; and $(\bm{x})_{[k]}$ denotes the $k$th largest element of $\bm{x}$.} \end{theorem} \begin{remark} By Theorem \ref{compefficientsubproblem}, Problem \eqref{eqn:separation} can be solved by computing the leading eigenvalue of $\bm{\Sigma}_{1,1}$ and solving a linear system. This justifies our claim that we need not solve any SDOs in our algorithmic strategy. \end{remark} \begin{proof} We appeal to strong duality and complementary slackness. Observe that, for any $\bm{z} \in \{0, 1\}^p$, $f(\bm{z})$ is the optimal value of a maximization problem over a closed, convex, and compact set.
Therefore, an optimal primal solution $\bm{X}^\star$ exists. Moreover, since the primal has non-empty relative interior {with respect to the non-affine constraints, it satisfies the Slater constraint qualification and} strong duality holds \citep[see, e.g.,][Chapter 5.2.3]{boyd2004convex}. Therefore, by complementary slackness \citep[see, e.g.,][Chapter 5.5.2]{boyd2004convex}, there must exist some dual-optimal solution $(\lambda, \hat{\bm{\alpha}}, \bm{\beta})$ which obeys complementarity with $\bm{X}^\star$. Moreover, $\vert X_{i,j}\vert \leq M_{i,j}$ is implied by $\mathrm{tr}(\bm{X})=1, \bm{X} \succeq \bm{0}$, while $\sum_{j=1}^p \vert X_{i,j}\vert \leq z_i \sqrt{k}$ is implied by $\vert X_{i,j}\vert \leq M_{i,j}z_i$ and $\bm{e}^\top \bm{z} \leq k$. Therefore, by complementary slackness, we can take the constraints $\vert X_{i,j}\vert \leq M_{i,j}z_i$, {\color{black}$\sum_{j=1}^p \vert X_{i,j}\vert \leq z_i \sqrt{k}$} to be inactive when $z_i=1$ without loss of generality, which implies that $\hat{\alpha}_{i,j}^\star, {\color{black} \beta_i^\star}=0$ if $z_i=1$ in some dual-optimal solution. Moreover, we also have $\hat{\alpha}_{i,j}^\star=0$ if $z_j=1$, since $\hat{\bm{\alpha}}$ obeys the dual feasibility constraint $\lambda \mathbb{I}+\hat{\bm{\alpha}}\succeq \bm{\Sigma}$ and is therefore symmetric. Next, observe that, by strong duality, $\lambda=\lambda_{\max}(\bm{\Sigma}_{1,1})$ in this dual-optimal solution, since $\bm{\alpha}$ only takes non-zero values where $z_i=z_j=0$ and therefore does not contribute to the objective{\color{black}, and similarly for $\bm{\beta}$}. Next, observe that, by strong duality and complementary slackness, any dual feasible $(\lambda, \hat{\bm{\alpha}}, {\color{black} \bm{\beta}})$ satisfying the above conditions is dual-optimal.
Therefore, we need to find an $\hat{\bm{\alpha}}_{2,2}$ such that \begin{align*} \begin{pmatrix}\lambda\mathbb{I} -\bm{\Sigma}_{1,1} & -\bm{\Sigma}_{1,2}\\ -\bm{\Sigma}_{2,1} & \lambda\mathbb{I}+\hat{\bm{\alpha}}_{2,2}-\bm{\Sigma}_{2,2} \end{pmatrix}\succeq \bm{0}. \end{align*} By the generalized Schur complement lemma \citep[see][Equation 2.41]{boyd1994linear}, this matrix is PSD if and only if \begin{enumerate}\itemsep0em \item $\lambda\mathbb{I} -\bm{\Sigma}_{1,1} \succeq \bm{0}$, \item $\left(\mathbb{I}-(\lambda\mathbb{I} -\bm{\Sigma}_{1,1})(\lambda\mathbb{I} -\bm{\Sigma}_{1,1})^\dag\right) \bm{\Sigma}_{1,2}=\bm{0}$, and \item $\lambda \mathbb{I}+\hat{\bm{\alpha}}_{2,2}-\bm{\Sigma}_{2,2}\succeq \bm{\Sigma}_{1,2}^\top \left(\lambda \mathbb{I}-\bm{\Sigma}_{1,1}\right)^\dag \bm{\Sigma}_{1,2}$. \end{enumerate} The first two conditions hold {because, as argued above, $\lambda$ is optimal and therefore feasible, and these conditions are independent of $\hat{\bm{\alpha}}_{2,2}$}. Therefore, it suffices to pick $\hat{\bm{\alpha}}_{2,2}$ so that the third condition holds. We achieve this by setting $\hat{\bm{\alpha}}_{2,2}$ so that the PSD constraint in condition (3) holds with equality. Finally, let us {\color{black} optimize for} $\bm{\beta}$ to obtain stronger cuts (when $z_i=0$ we can pick any feasible $\beta_i$, but optimizing to make the supergradient entry $\partial f(\bm{z})_i$ as small as possible gives stronger cuts). {\color{black} This is equivalent to solving the following univariate minimization problem for each $\beta_i$: \begin{align*} \min_{\beta_i} \left(\sum_{j=1}^p M_{i,j}\max(0, \vert \alpha_{i,j}\vert-\beta_i)+{\color{black}\sqrt{k}\beta_i}\right).
\end{align*} Moreover, it is a standard result from max-$k$ optimization \citep[see, e.g.,][]{zakeri2014optimization, todd2018max} that this is achieved by setting $\beta_i$ to be the $\lceil \sqrt{k}\ \rceil$th largest element of $\{{\alpha}_{i,j}\}_{j \in [p]} \cup \{\alpha_{i,i}\}$ in absolute magnitude, where we include $\alpha_{i,i}$ twice since $M_{i,i}=1$ while $M_{i,j}=1/2$ if $j \neq i$. } \end{proof} \subsection{Strengthening the Master Problem via the Gershgorin Circle Theorem} \label{sec:exact.circle} To accelerate Algorithm \ref{alg:cuttingPlaneMethod}, we strengthen the master problem by imposing bounds derived from the Gershgorin circle theorem. Formally, we have the following result, which can be deduced from \citet[Theorem 6.1.1]{horn1990matrix}: \begin{theorem}\label{thm:circletheorem} For any vector $\bm{z} \in \{0, 1\}^p$, we have the following upper bound on $f(\bm{z})$: \begin{align} f(\bm{z}) \leq \max_{j \in [p]: z_j=1}\sum_{i \in [p]}z_i \vert \Sigma_{i,j}\vert. \end{align} \end{theorem} Observe that this bound cannot be used to \textit{directly} strengthen Algorithm \ref{alg:cuttingPlaneMethod}'s master problem, since the bound is not convex in $\bm{z}$. Nonetheless, it can be successfully applied if we (a) impose a big-M assumption on Problem \eqref{OriginalSPCA}'s optimal objective and (b) introduce $p$ additional binary variables $\bm{s} \in \{0, 1\}^p: \bm{e}^\top \bm{s}=1$ {which model whether the $i$th Gershgorin disc is active; recall that each eigenvalue is contained in the union of the discs}.
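Theorem \ref{thm:circletheorem}'s bound is easy to verify numerically; the following sketch (assuming \texttt{numpy}; the matrix and support are arbitrary choices for illustration) compares $f(\bm{z})$, i.e., the leading eigenvalue of the submatrix induced by $\bm{z}$, with the Gershgorin upper bound:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
Sigma = A @ A.T                       # illustrative covariance matrix
support = np.array([0, 3, 5])         # an arbitrary support with k = 3
sub = Sigma[np.ix_(support, support)]

# f(z) is the leading eigenvalue of the submatrix induced by z.
f_z = np.linalg.eigvalsh(sub)[-1]

# Gershgorin bound: the largest absolute column sum of the submatrix.
gershgorin = np.max(np.sum(np.abs(sub), axis=0))
assert f_z <= gershgorin + 1e-9
```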
Formally, we impose the following valid inequalities in the master problem: \begin{align}\label{eqn:gershgorincircle} \exists \bm{s} \in \{0, 1\}^p: \ & \theta \leq \sum_{i \in [p]} z_i \vert \Sigma_{i,j}\vert+M(1-s_j) \ \forall j \in [p], \bm{e}^\top \bm{s}=1, \bm{s} \leq \bm{z}, \end{align} { where $\theta$ is the epigraph variable maximized in the master problem stated in Algorithm \ref{alg:cuttingPlaneMethod}{, and $M$ is an upper bound on the sum of the $k$ largest absolute entries in any column of $\bm{\Sigma}$.} Note that we set $\bm{s}\leq \bm{z}$ since if $z_i=0$ the $i$th column of $\bm{\Sigma}$ does not feature in the relevant submatrix of $\bm{\Sigma}$.} In the above inequalities, a valid $M$ is given by any bound on the optimal objective. Since Theorem {\color{black}\ref{thm:circletheorem}} supplies one such bound for any given $\bm{z}$, we can compute \begin{align} M:=\max_{j \in [p]}\max_{\bm{z} \in \{0, 1\}^p: \bm{e}^\top \bm{z} \leq k} \ \sum_{i \in [p]}z_i \vert \Sigma_{i,j}\vert, \end{align} {which can be done in $O(p^2)$ time. {\color{black}To further improve Algorithm \ref{alg:cuttingPlaneMethod}, we also make use of the Gershgorin circle theorem before generating each outer-approximation cut. Namely, at a given node in a branch-and-bound tree, there are indices $i$ where $z_i$ has been fixed to $1$, indices $i$ where $z_i$ has been fixed to $0$, and indices $i$ where $z_i$ has not yet been fixed. 
Accordingly, we compute the worst-case Gershgorin bound by taking the worst-case bound over each index $j$ such that $z_j$ has not yet been fixed to $0$, i.e., $$\max_{j: z_j\neq 0}\left\{\max_{\bm{s} \in \{0, 1\}^p: \bm{e}^\top \bm{s} \leq k}\left\{\sum_{i \in [p]}s_i \vert \Sigma_{i,j}\vert \ \text{s.t.} \ s_i=0\ \text{if}\ z_i=0, s_i=1 \ \text{if}\ z_i=1\right\}\right\}.$$ If this bound is larger than our incumbent solution's objective value then we generate an outer-approximation cut; otherwise, the entire subtree rooted at this node does not contain an optimal solution, and we instruct the solver to avoid exploring this node via a \verb|callback|.} { Our numerical results in Section \ref{sec:numres} echo the empirical findings of \citet{berk2017} and indicate that Algorithm \ref{alg:cuttingPlaneMethod} performs substantially better when the Gershgorin bound is supplied in the master problem. Therefore, it is interesting to theoretically investigate the tightness, or at least the quality, of Gershgorin's bound. We supply some results in this direction in the following proposition: \begin{proposition}\label{prop:gershgorinthmapprox} Suppose that $\bm{\Sigma}$ is a scaled diagonally dominant matrix as defined by \cite{boman2005factor}, i.e., there exists some vector $\bm{d}>\bm{0}$ such that $$d_i\Sigma_{i,i} \geq \sum_{j \in [p]: j \neq i}d_j\vert \Sigma_{i,j}\vert \ \forall i \in [p].$$ Then, letting $\rho:=\max_{i,j \in [p]} \{\frac{d_i}{d_j}\}$, the Gershgorin circle theorem provides a $(1+\rho)$-factor approximation, i.e., \begin{align} f(\bm{z}) \leq \max_{j \in [p]}\left\{\sum_{i \in [p]} z_i \vert \Sigma_{i,j}\vert \right\}\leq (1+\rho) f(\bm{z}) \quad \forall \bm{z} \in \{0, 1\}^p. \end{align} \end{proposition} {\color{black} \begin{remark} Observe that, for a fixed $\bm{z}$, the ratio $\rho:=\max_{i,j \in [p]} \{\frac{d_i}{d_j}\}$ need only be computed over indices $i,j$ such that $z_i=z_j=1$.
Moreover, for a partially specified $\bm{z}$—which might arise at an intermediate node in a branch-and-bound tree generated by Algorithm \ref{alg:cuttingPlaneMethod}—the ratio $\rho$ need only be computed over indices $i$ where $z_i$ is unspecified or set to $1$. This suggests that the quality of the Gershgorin bound improves upon branching. \end{remark} } \begin{remark} In particular, if $\bm{\Sigma} \in S^p_+$ is a diagonal matrix, then {\color{black}Equation \eqref{eqn:gershgorincircle}'s} bound is tight, which follows from the fact that the spectrum of $\bm{\Sigma}$ and the discs coincide if and only if $\bm{\Sigma}$ is diagonal \citep[see, e.g.,][Chapter 6]{horn1990matrix}. Alternatively, if $\bm{\Sigma}$ is a diagonally dominant matrix then $\rho=1$ and the Gershgorin circle theorem provides a $2$-factor approximation. \end{remark} \begin{proof} Scaled diagonally dominant matrices have scaled diagonally dominant principal minors—this is trivially true because $$d_i\Sigma_{i,i} \geq \sum_{j \in [p]: j \neq i}d_j\vert \Sigma_{i,j}\vert \ \forall i \in [p]\implies d_i\Sigma_{i,i} \geq \sum_{j \in [p]: j \neq i}d_j z_j\vert \Sigma_{i,j}\vert \ \forall i \in [p]: z_i=1$$for the same vector $\bm{d}>\bm{0}$, and therefore the following chain of inequalities holds: \begin{align*} f(\bm{z}) \leq & \max_{j \in [p]}\{\sum_{i \in [p]} z_i \vert \Sigma_{i,j}\vert \}=\max_{j \in [p]}\{z_j\Sigma_{j,j}+ \sum_{i \in [p]: j \neq i}z_i\vert \Sigma_{i,j}\vert \}\\ & \leq \max_{j \in [p]}\{z_j\Sigma_{j,j}+\sum_{i \in [p]: j \neq i}\rho \frac{d_i}{d_j}z_i\vert \Sigma_{i,j}\vert\}\leq (1+\rho)\max_{j \in [p]}\{z_j\Sigma_{j,j}\}\leq (1+\rho) f(\bm{z}) \quad \forall \bm{z} \in \{0, 1\}^p, \end{align*} where the second inequality follows because $\rho \geq \frac{d_j}{d_i}$ and hence $\rho\frac{d_i}{d_j}\geq 1$, the third inequality follows from the scaled diagonal dominance of the principal submatrices of $\bm{\Sigma}$, and the fourth inequality holds because the leading eigenvalue of a PSD matrix is at least as large as
each diagonal entry. \end{proof} } To make clear the extent to which our numerical success depends upon Theorem \ref{thm:circletheorem}, our results in Section \ref{sec:numres} present implementations of Algorithm \ref{alg:cuttingPlaneMethod} both with and without the bound. { \subsection{Beyond Gershgorin: Further Strengthening via Brauer's Ovals of Cassini} \label{sec:exact.oval} Given the relevance of Gershgorin's bound, we propose, in this section, a stronger, yet more expensive to implement, upper bound, based on a generalization of the Gershgorin circle theorem, namely Brauer's ovals of Cassini. First, we derive a new upper bound on $f(\bm{z})$ that is at least as strong as the one presented in Theorem \ref{thm:circletheorem} and often strictly stronger \citep[][Chapter 6]{horn1990matrix}: \begin{theorem}\label{thm:cassini1} For any vector $\bm{z} \in \{0, 1\}^p$, we have the following upper bound on $f(\bm{z})$: \begin{align} \label{eqn:bound.ovals} f(\bm{z}) \leq \max_{i,j \in [p]: i>j, z_i=z_j=1} \left\{\frac{\Sigma_{i,i}+\Sigma_{j,j}}{2}+\frac{\sqrt{(\Sigma_{i,i}-\Sigma_{j,j})^2+4R_i(\bm{z}) R_j(\bm{z})}}{2}\right\}, \end{align} where $R_i(\bm{z}):=\sum_{j \in [p]: j \neq i} z_j \vert \Sigma_{i,j}\vert$ is the absolute sum of off-diagonal entries in the $i$th column of the submatrix of $\bm{\Sigma}$ induced by $\bm{z}$. \end{theorem} \begin{proof} Let us first recall that, per \citet{brauer1946limits}'s original result, all eigenvalues of a matrix $\bm{\Sigma} \in S^p_+$ are contained in the union of the following $p(p-1)/2$ ovals of Cassini: \begin{align*} \bigcup_{i \in [p], j \in [p]: i < j} \left\{\lambda \in \mathbb{R}_+: \vert \lambda-\Sigma_{i,i}\vert \vert \lambda-\Sigma_{j,j}\vert \leq R_i R_j \right\}, \end{align*} where $R_i:=\sum_{j \in [p]: j \neq i} \vert \Sigma_{i,j}\vert$ is the absolute sum of off-diagonal entries in the $i$th column of $\bm{\Sigma}$.
Next, let us observe that, if $\lambda$ is a dominant eigenvalue of a PSD matrix $\bm{\Sigma}$, then $\lambda \geq \Sigma_{i,i} \ \forall i$ and, in the $(i,j)$th oval, the bound reduces to \begin{align}\label{eqn:dominanteigenvaluebound} \lambda^2 -\lambda(\Sigma_{i,i}+\Sigma_{j,j})+\Sigma_{i,i}\Sigma_{j,j}-R_i R_j \leq 0, \end{align} which, by the quadratic formula, yields the upper bound $\frac{\Sigma_{i,i}+\Sigma_{j,j}}{2}+\frac{\sqrt{(\Sigma_{i,i}-\Sigma_{j,j})^2+4R_i R_j}}{2}$. The result follows because if $z_i=0$ the $i$th row of $\bm{\Sigma}$ cannot be used to bound $f(\bm{z})$. \end{proof} Theorem \ref{thm:cassini1}'s inequality can be enforced numerically via mixed-integer second-order cone constraints. Indeed, the square root term in \eqref{eqn:bound.ovals} can be modeled using a second-order cone constraint, and the bilinear terms only involve binary variables and can be linearized. By completing the square in Equation \eqref{eqn:dominanteigenvaluebound}, we find that \eqref{eqn:bound.ovals} is equivalent to the following system of $p(p-1)/2$ mixed-integer second-order cone inequalities: \begin{align*} \left(\theta-\frac{1}{2}(\Sigma_{i,i}+\Sigma_{j,j})\right)^2 \leq \sum_{s,t \in [p]: s\neq i, t \neq j} W_{s,t}\vert \Sigma_{i,s}\Sigma_{j,t}\vert +\frac{1}{4}(\Sigma_{i,i}-\Sigma_{j,j})^2+M(1-s_{i,j}) \ \forall i,j \in [p]: i <j,\\ \sum_{i,j \in [p]: i<j} s_{i,j}=1, \ s_{i,j} \leq \min(z_i, z_j) \ \forall i,j \in [p]: i<j, \ s_{i,j} \in \{0, 1\} \ \forall i,j \in [p]: i<j, \end{align*} where $W_{s,t}=z_s z_t$ is a product of binary variables which can be modeled using, e.g., the {\color{black}McCormick} inequalities $\max(0, z_s+z_t-1) \leq W_{s,t} \leq \min(z_s, z_t)$, and $M$ is an upper bound on the right-hand side of the inequality for any $i,j: i \neq j$, which can be computed in $O(p^3)$ time in much the same manner as the big-$M$ constant was computed in the previous section.
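The following sketch (again assuming \texttt{numpy}, with an arbitrary matrix and support) illustrates numerically that the ovals-of-Cassini bound of Theorem \ref{thm:cassini1} is a valid upper bound on $f(\bm{z})$ and is no weaker than the Gershgorin bound of Theorem \ref{thm:circletheorem}:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 8))
Sigma = A @ A.T                        # illustrative covariance matrix
support = np.array([1, 2, 6])          # an arbitrary support with k = 3
sub = Sigma[np.ix_(support, support)]

lam = np.linalg.eigvalsh(sub)[-1]      # f(z), the dominant eigenvalue
d = np.diag(sub)                       # diagonal entries (nonnegative, PSD)
R = np.sum(np.abs(sub), axis=1) - np.abs(d)  # absolute off-diagonal row sums

# Brauer's ovals-of-Cassini bound, maximized over all pairs i < j.
m = len(support)
cassini = max(
    (d[i] + d[j]) / 2 + np.sqrt((d[i] - d[j]) ** 2 + 4 * R[i] * R[j]) / 2
    for i in range(m) for j in range(i + 1, m)
)
# Gershgorin bound, for comparison: the largest absolute column sum.
gershgorin = np.max(d + R)

assert lam <= cassini + 1e-9           # a valid upper bound on f(z)
assert cassini <= gershgorin + 1e-9    # no weaker than Gershgorin
```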
Note that we do not make use of these inequalities directly in our numerical experiments, due to their high computational cost. However, an interesting extension would be to introduce the binary variables dynamically, via branch-and-cut-and-price \citep{barnhart1998branch}. Since the bound derived from the ovals of Cassini (Theorem \ref{thm:cassini1}) is at least as strong as the Gershgorin circle bound (Theorem \ref{thm:circletheorem}), it satisfies the same approximation guarantee (Proposition \ref{prop:gershgorinthmapprox}). In particular, it is tight when $\bm{\Sigma}$ is diagonal and provides a $2$-factor approximation for diagonally dominant matrices. In fact, we now prove a stronger result and demonstrate that Theorem \ref{thm:cassini1} provides a $2$-factor bound on $f(\bm{z})$ for doubly diagonally dominant matrices—a broader class of matrices than diagonally dominant matrices \citep[see][for a general theory]{li1997doubly}: \begin{proposition} Let $\bm{\Sigma}\in S^p_+$ be a doubly diagonally dominant matrix, i.e., \begin{align*} \Sigma_{i,i}\Sigma_{j,j}\geq R_i R_j \ \forall i,j \in [p]: i >j, \end{align*} where $R_i:=\sum_{j \in [p]: j \neq i} \vert \Sigma_{i,j} \vert$ is the absolute sum of the off-diagonal entries in the $i$th column of $\bm{\Sigma}$. Then, we have that \begin{align} f(\bm{z}) \leq \max_{i,j \in [p]: i>j, z_i=z_j=1} \left\{\frac{\Sigma_{i,i}+\Sigma_{j,j}}{2}+\frac{\sqrt{(\Sigma_{i,i}-\Sigma_{j,j})^2+4R_i(\bm{z}) R_j(\bm{z})}}{2}\right\} \leq 2f(\bm{z}). \end{align} \end{proposition} \begin{proof} Observe that if $\Sigma_{i,i}\Sigma_{j,j}\geq R_i R_j$ then $$\sqrt{(\Sigma_{i,i}-\Sigma_{j,j})^2+4 R_i R_j}\leq \sqrt{(\Sigma_{i,i}-\Sigma_{j,j})^2+4 \Sigma_{i,i}\Sigma_{j,j}}=\Sigma_{i,i}+\Sigma_{j,j}.$$ The result then follows in essentially the same fashion as Proposition \ref{prop:gershgorinthmapprox}.
\end{proof} } \section{Convex Relaxations and Rounding Methods}\label{sec:relaxandround} For large-scale instances, high-quality solutions can be obtained by solving a convex relaxation of Problem \eqref{misdpprimal} and rounding the optimal solution. In Section \ref{ssec:relax.bool}, we propose relaxing $\bm{z} \in \{0, 1\}^p$ in \eqref{misdpprimal} to $\bm{z} \in [0, 1]^p$ and applying a greedy rounding scheme. We further tighten this relaxation using second-order cone constraints in Section \ref{ssec:validineq}. \subsection{A Boolean Relaxation and a Greedy Rounding Method} \label{ssec:relax.bool} We first consider a Boolean relaxation of \eqref{misdpprimal}, which we obtain\footnote{\color{black}Note that we omit the {\color{black}$\sum_{j=1}^p \vert X_{i,j}\vert \leq \sqrt{k} z_i$} constraints when we develop our convex relaxations, since they are essentially dominated by the $\Vert \bm{X}\Vert_1 \leq k$ constraint we introduce in the next section; we introduced these inequalities to improve our semidefinite-free subproblem strategy for the exact method.} by relaxing $\bm{z} \in \{0, 1\}^p$ to $\bm{z} \in [0, 1]^p$. This gives $\displaystyle \max_{\bm{z} \in [0, 1]^p: \bm{e}^\top \bm{z} \leq k} \: f(\bm{z})$, {\color{black}i.e.}, \begin{equation} \begin{aligned}\label{prob:lprelax2} \max_{\substack{\bm{z} \in [0, 1]^p}: \bm{e}^\top \bm{z} \leq k} \: \max_{\bm{X} \succeq \bm{0}} \quad & \langle \bm{\Sigma}, \bm{X} \rangle \ \text{s.t.} \ \mathrm{tr}(\bm{X})=1, \vert X_{i,j}\vert \leq M_{i,j}z_i\ \forall i,j \in [p]. \end{aligned} \end{equation} A useful strategy for obtaining a high-quality feasible solution is to solve \eqref{prob:lprelax2} and set $z_i=1$ for the $k$ indices corresponding to the largest entries $z^\star_j$ of an optimal solution to \eqref{prob:lprelax2}, {\color{black}as proposed for general integer optimization problems, in a randomized fashion, by \cite{raghavan1987randomized}}. We formalize this in Algorithm \ref{alg:greedymethod}.
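To make the rounding step concrete, here is a minimal Python sketch (illustrative only; the fractional $\bm{z}^\star$ and the covariance matrix below are made up rather than produced by actually solving \eqref{prob:lprelax2}). Once the support is fixed, the inner maximization over $\bm{X}$ reduces to computing the dominant eigenvalue of the corresponding principal submatrix:

```python
import math

def lambda_max(A, iters=500):
    # Power iteration for the dominant eigenvalue of a symmetric PSD matrix.
    n = len(A)
    v = [1.0 / math.sqrt(n)] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

def greedy_round(z_frac, Sigma, k):
    # Keep the k indices with the largest fractional z-values, then
    # evaluate f(z) on the induced principal submatrix.
    support = sorted(range(len(z_frac)), key=lambda i: -z_frac[i])[:k]
    sub = [[Sigma[i][j] for j in support] for i in support]
    return sorted(support), lambda_max(sub)

Sigma = [[4.0, 1.0, 0.0, 0.0],
         [1.0, 3.0, 0.0, 0.0],
         [0.0, 0.0, 2.0, 0.5],
         [0.0, 0.0, 0.5, 1.0]]
z_frac = [0.9, 0.7, 0.3, 0.1]   # hypothetical relaxation output
support, val = greedy_round(z_frac, Sigma, 2)
```

Here the rounded support is the first two coordinates, and the resulting objective is the top eigenvalue $(7+\sqrt{5})/2 \approx 4.618$ of the $2\times 2$ block $\begin{psmallmatrix}4&1\\1&3\end{psmallmatrix}$.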
{\color{black}We remark that rounding strategies for sparse PCA have previously been proposed \citep[see][]{fountoulakis2017randomized, dey2017sparse, chowdhury2020approximation}; however, the idea of rounding $\bm{z}$ and then optimizing for $\bm{X}$ appears to be new.} \begin{algorithm*} \caption{A greedy rounding method for Problem \eqref{OriginalSPCA}} \label{alg:greedymethod} \begin{algorithmic}\normalsize \REQUIRE Covariance matrix $\bm{\Sigma}$, sparsity parameter $k$ \STATE Compute $\bm{z}^\star$, an optimal solution of \eqref{prob:lprelax2} or \eqref{spca:sdpplussocp} \STATE Construct $\bm{z} \in \{0, 1\}^p: \bm{e}^\top \bm{z}=k$ such that $z_i\geq z_j$ if $z^\star_i \geq z^\star_j$. \STATE Compute $\bm{X}$, an optimal solution of \vspace{-2mm} \begin{align*} \max_{\bm{X} \in S^p_+} \ \langle \bm{\Sigma}, \bm{X} \rangle \ \text{s.t.} \ \mathrm{tr}(\bm{X})=1, X_{i,j}=0 \ \text{if} \ z_i z_j=0 \ \forall i,j \in [p]. \end{align*} \vspace{-5mm} \RETURN $\bm{z}, \bm{X}$. \end{algorithmic} \end{algorithm*} { \begin{remark}\label{remark:scalability} Our numerical results in Section \ref{sec:numres} reveal that explicitly imposing a PSD constraint on $\bm{X}$ in the relaxation \eqref{prob:lprelax2}—or the ones derived later in the following section—prevents our approximation algorithm from scaling to larger problem sizes than the exact Algorithm \ref{alg:cuttingPlaneMethod} can already solve. Therefore, to improve scalability, the semidefinite cone can be safely approximated via its second-order cone relaxation, $X_{i,j}^2 \leq X_{i,i}X_{j,j}\ \forall i,j \in [p]$, plus a small number of cuts of the form $\langle \bm{X}, \bm{x}_t\bm{x}_t^\top\rangle \geq 0$, as presented in \citet{bertsimas2019polyhedral}.
\end{remark} } { \begin{remark} Rather than relaxing and greedily rounding $\bm{z}$, one could consider a higher-dimensional relax-and-round scheme where we let $\bm{Z}$ model the outer product $\bm{z}\bm{z}^\top$ via $\bm{Z}\succeq \bm{z}\bm{z}^\top$, $\max(0, z_i+z_j-1)\leq Z_{i,j} \leq \min(z_i, z_j) \ \forall i,j \in [p]$, $Z_{i,i}=z_i$, and require that $\sum_{i,j \in [p]}Z_{i,j} \leq k^2$. Indeed, a natural ``round'' component of such a relax-and-round scheme is precisely Goemans-Williamson rounding \citep[][]{goemans1995improved,bertsimas1998semidefinite}, which in theory performs at least as well as greedy rounding. Unfortunately, some preliminary numerical experiments indicated that Goemans-Williamson rounding is not actually much better than greedy rounding in practice, and is considerably more expensive to implement. Therefore, we defer the details of the Goemans-Williamson scheme to Appendix \ref{sec:relax.goemans}, and do not consider it any further in this paper. \end{remark} } \subsection{Valid Inequalities for Strengthening Convex Relaxations}\label{ssec:validineq} We now propose valid inequalities which allow us to improve the quality of the convex relaxations discussed previously. Note that as convex relaxations and random rounding methods are two sides of the same coin \citep{barak2014rounding}, applying these valid inequalities also improves the quality of the randomly rounded solutions. \begin{theorem} Let $\mathcal{P}_{strong}$ denote the optimal objective value of the following problem: \begin{equation} \label{spca:sdpplussocp} \begin{aligned} \max_{\bm{z} \in [0, 1]^p: \bm{e}^\top \bm{z} \leq k} \max_{\bm{X} \in S^p_+} \ \langle \bm{\Sigma}, \bm{X} \rangle \ \text{s.t.} \quad & \mathrm{tr}(\bm{X})=1, \vert X_{i,j}\vert \leq M_{i,j}z_i\ \forall i,j \in [p] ,\\ &\sum_{j \in [p]} X_{i,j}^2 \leq X_{i,i}z_i \ \forall i \in [p], \ \Vert \bm{X}\Vert_1 \leq k.
\end{aligned} \end{equation} Then, \eqref{spca:sdpplussocp} is a stronger relaxation than \eqref{prob:lprelax2}, i.e., the following inequalities hold: \begin{align*} \max_{\bm{z} \in [0, 1]^p: \bm{e}^\top \bm{z} \leq k} f(\bm{z})\geq \mathcal{P}_{strong} \geq \max_{\bm{z} \in \{0, 1\}^p: \bm{e}^\top \bm{z} \leq k} f(\bm{z}). \end{align*} Moreover, suppose an optimal solution to \eqref{spca:sdpplussocp} is of rank one. Then, the relaxation is tight: $$\mathcal{P}_{strong}= \max_{\bm{z} \in \{0, 1\}^p: \bm{e}^\top \bm{z} \leq k} f(\bm{z}).$$ \end{theorem} \begin{proof} The first inequality $\max_{\bm{z} \in [0, 1]^p: \bm{e}^\top \bm{z} \leq k} f(\bm{z})\geq \mathcal{P}_{strong}$ is trivial. The second inequality holds because \eqref{spca:sdpplussocp} is indeed a valid relaxation of Problem \eqref{OriginalSPCA}. Indeed, $\| \bm{X} \|_1 \leq k$ follows from the cardinality and big-M constraints. The semidefinite constraint $\bm{X} \succeq \bm{0}$ imposes second-order cone constraints on the $2\times 2$ minors of $\bm{X}$, $X_{i,j}^2 \leq z_i X_{i,i} X_{j,j}$, which can be aggregated into $\sum_{j \in [p]} X_{i,j}^2 \leq X_{i,i} z_i $ \citep[see][for derivations]{bertsimas2019polyhedral}. Finally, suppose that an optimal solution to Problem \eqref{spca:sdpplussocp} is of rank one, i.e., the optimal matrix $\bm{X}$ can be decomposed as $\bm{X}=\bm{x}\bm{x}^\top$. Then, the SOCP inequalities imply that $\sum_{j \in [p]} x_i^2 x_j^2 \leq x_i^2 z_i$. However, $\sum_{j \in [p]}x_j^2=\mathrm{tr}(\bm{X})=1$, which implies that $x_i^2 \leq x_i^2 z_i$, i.e., $z_i=1$ for any index $i$ such that $\vert x_i\vert>0$. Since $\bm{e}^\top \bm{z} \leq k$, this implies that $\Vert \bm{x}\Vert_0 \leq k$, i.e., $\bm{X}$ also solves Problem \eqref{sdospca1}.
\end{proof} { As our numerical experiments will demonstrate, and despite the simplicity of our rounding mechanism in Algorithm \ref{alg:greedymethod}, the relaxation \eqref{spca:sdpplussocp} provides high-quality solutions to the original sparse PCA problem \eqref{OriginalSPCA}, without introducing any additional variables.} {\color{black}We remark that other inequalities, including the second-order cone inequalities proposed in \citet[Lemma 2 (ii)]{li2020exact}, could further improve the convex relaxation; we leave integrating these inequalities within our framework as future work.} \section{Numerical Results}\label{sec:numres} We now assess the numerical behavior of the algorithms proposed in Sections \ref{sec:reformulation} and \ref{sec:relaxandround}. To bridge the gap between theory and practice, we provide \verb|Julia| code implementing the convex relaxation and greedy rounding procedure described above on GitHub\footnote{\href{github.com/ryancorywright/ScalableSPCA.jl}{https://github.com/ryancorywright/ScalableSPCA.jl}}. The code requires a conic solver such as \verb|Mosek| and several open-source Julia packages to be installed. \subsection{Performance of Exact Methods} In this section, we apply Algorithm \ref{alg:cuttingPlaneMethod} to medium- and large-scale sparse principal component analysis problems, with and without Gershgorin circle theorem bounds in the master problem. All experiments were implemented in \verb|Julia| $1.3$, using \verb|Gurobi| $9.1$ and \verb|JuMP.jl| $0.21.6$, and performed on a standard MacBook Pro laptop, with a $2.9$GHz $6$-core Intel i9 CPU, using $16$ GB DDR4 RAM.
We compare our approach to the branch-and-bound algorithm\footnote{\color{black}The solve times for their method, as reported here, differ from those reported in \citet{berk2017} due to a small typo in their implementation (line $110$ of their branchAndBound.jl code should read ``if $y[i]==-1$ $||$ $y[i]==1$'', not ``if $y[i]==-1$'', in order to correctly compute the Gershgorin circle theorem bound); correcting this is necessary to ensure that we obtain correct results from their method.} developed by \cite{berk2017} on the UCI \verb|pitprops|, \verb|wine|, \verb|miniboone|, \verb|communities|, \verb|arrhythmia| and \verb|micromass| datasets, both in terms of runtime and the number of nodes expanded; we refer to \cite{berk2017, bertsimas2019polyhedral} for descriptions of these datasets. Note that we normalized all datasets before running the methods (i.e., we compute the leading sparse principal components of correlation matrices). Additionally, we warm-start all methods with the solution from the method of \cite{yuan2013truncated}, to maintain a fair comparison. Table \ref{tab:comparison} reports the time for Algorithm \ref{alg:cuttingPlaneMethod} (with and without Gershgorin circle theorem bounds in the master problem) and the method of \cite{berk2017} to identify the leading $k$-sparse principal component for {\color{black}$k \in \{5, 10, 20\}$}, along with the number of nodes expanded and the number of outer-approximation cuts generated. We impose a relative optimality tolerance of $10^{-3}$ for all approaches{\color{black}, i.e., we terminate each method when $(UB-LB)/UB\leq 10^{-3}$, where $UB$ denotes the current objective bound and $LB$ denotes the current incumbent objective value}. Note that $p$ denotes the dimensionality of the correlation matrix, and $k \leq p$ denotes the target sparsity. { \begin{table}[h] \centering\footnotesize \caption{{\color{black}Runtime in seconds per approach. We impose a time limit of $600$s.
If a solver fails to converge, we report the relative bound gap at termination in brackets }} \begin{tabular}{@{}l l l r r r r r r r r r@{}} \toprule Dataset & $p$ & $k$ & \multicolumn{3}{c@{\hspace{0mm}}}{Alg. \ref{alg:cuttingPlaneMethod}} & \multicolumn{3}{c@{\hspace{0mm}}}{Alg. \ref{alg:cuttingPlaneMethod}+ Circle Theorem} & \multicolumn{2}{c@{\hspace{0mm}}}{Method of B.+B.} \\ \cmidrule(l){4-6} \cmidrule(l){7-9} \cmidrule(l){10-11} & & & Time(s) & Nodes & Cuts & Time(s) & Nodes & Cuts & Time(s) & Nodes \\\midrule Pitprops & $13$ & $5$ & \color{black} $0.30$ & \color{black} $1,608$ & \color{black} $1,176$ & \color{black} $\textbf{0.06}$ & \color{black} $38$ & \color{black} $8$ & $1.49$ & $22$ \\ & & $10$ & \color{black} $0.14$ & \color{black} $414$ & \color{black} $387$ & \color{black} $\textbf{0.02}$ & \color{black} $18$ & \color{black} $21$ & $\textbf{0.02}$ & $14$ \\\midrule Wine & $13$ & $5$ & \color{black} $0.57$ & \color{black} $2,313$ & \color{black} $1,646$ & \color{black} $\textbf{0.02}$ & \color{black} $46$ & \color{black} $11$ & $0.04$ & $34$ \\ & & $10$ & \color{black} $0.17$ & \color{black} $376$ & \color{black} $311$ & $\color{black} 0.03$ & \color{black} $54$ & \color{black} $58$ & $\textbf{0.02}$ & $12$ \\\midrule Miniboone & $50$ & $5$ & \color{black} $\textbf{0.01}$ & \color{black} $0$ & \color{black} $11$ & \color{black} $\textbf{0.01}$ & \color{black} $0$ & \color{black} $3$ & $0.04$ & $2$ \\ & & $10$ & \color{black} $\textbf{0.01}$ & \color{black} $0$ & \color{black} $16$ & \color{black} $0.02$ & \color{black} $0$ & \color{black} $3$ & $0.04$ & $2$ \\ & & \color{black} $20$ & \color{black} $0.03$ & \color{black} $0$ & \color{black} $26$ & \color{black} $\textbf{0.01}$ & \color{black} $0$ & \color{black} $3$ & \color{black} $1.30$ & \color{black} $5,480$ \\\midrule Communities & $101$ & $5$ & \color{black} ($2.87\%$) & \color{black} $28,462$ & \color{black} $25,483$ & \color{black} $\textbf{0.20}$ & \color{black} $201$ & \color{black} $3$ & 
$0.57$ & $101$ \\ & & $10$ & \color{black}($13.3\%$) & \color{black} $37,479$ & \color{black} $36,251$ & \color{black} $\textbf{0.34}$ & \color{black} $406$ & \color{black} $39$ & $0.94$ & $1,298$ \\ & & \color{black} $20$ & \color{black} ($39.6\%)$ & \color{black} $24,566$ & \color{black} $24,632$ & \color{black} ($12.1\%)$ & \color{black} $42,120$ & \color{black} $37,383$ & \color{black} $(\textbf{9.97\%})$ & \color{black} $669,500$ \\\midrule Arrhythmia & $274$ & $5$ & \color{black} ($18.1\%$) & \color{black} $22,771$ & \color{black} $20,722$ & \color{black}$6.07$ & \color{black} $135$ & \color{black} $1,233$ & $\textbf{4.17}$ & $1,469$ \\ & & $10$ & \color{black} ($32.6\%)$ & \color{black} $19,500$ & \color{black} $19,314$ & \color{black} ($2.92\%$) & \color{black} $15,510$ & \color{black} $6,977$ & $\textbf{(0.83\%)}$ & $471,680$ \\ & & \color{black} $20$ & \color{black} $(74.4\%)$ & \color{black} $33,773$ & \color{black} $12,374$& \color{black} ($24.3\%$) & \color{black} $33,123$ & \color{black} $19,662$ & \color{black} $(\textbf{18.45\%})$ & \color{black} $311,400$ \\\midrule Micromass & $1300$ & $5$ & \color{black} $(1.29\%)$ & \color{black} $3,859$ & \color{black} $3,099$ & \color{black} $163.60$ & \color{black} $2,738$ & \color{black} $6$ & $\textbf{24.31}$ & $1,096$ \\ & & $10$ & \color{black} $(10.6\%)$ & \color{black} $3,366$ & \color{black} $3,369$ & \color{black} $\textbf{241.86}$ & \color{black} $3,233$ & \color{black} $121$ & $362.4$ & $36,690$ \\ & & \color{black} \color{black} $20$ & \color{black} ($35.9\%)$ & \color{black} $2,797$ & \color{black} $2,839$ & \color{black} ($35.9\%$) & \color{black} $2,676$ & \color{black} $2,115$ & \color{black} $(\textbf{10.34}\%)$ & \color{black} $31,990$ \\ \bottomrule \end{tabular} \label{tab:comparison} \end{table} } Our main findings from these experiments are as follows: \begin{itemize}\setlength\itemsep{0em} \item For smaller problems, the strength of Algorithm \ref{alg:cuttingPlaneMethod}'s cuts allows it 
to outperform state-of-the-art methods such as the method of \cite{berk2017}. Moreover, for larger problem sizes, the adaptive branching strategy of \cite{berk2017} {\color{black}performs comparably to} Algorithm \ref{alg:cuttingPlaneMethod}. This suggests that {\color{black}the relative merits of both approaches are roughly even, and which method is preferable may depend on the problem data.} \item Generating outer-approximation cuts and valid upper bounds from the Gershgorin circle theorem are both powerful ideas, but the greatest aggregate power appears to arise from intersecting these bounds, rather than using one bound alone. \item {\color{black}Once both $k$ and $p$ are sufficiently large (e.g. $p>300$ and $k>10$), no approach is able to solve the problem to provable optimality within $600$s. This motivates our study of convex relaxations and randomized rounding methods in the next section.} \end{itemize} \vspace{-5mm} \subsection{Convex Relaxations and Randomized Rounding Methods} In this section, we apply Algorithm \ref{alg:greedymethod} to obtain high-quality convex relaxations and feasible solutions for the datasets studied in the previous subsection, and compare our relaxation to a different convex relaxation developed by \citet{d2008optimal}, in terms of the quality of the upper bound and the resulting greedily rounded solutions. All experiments were implemented using the same specifications as in the previous section. { Note that \citet{d2008optimal}'s upper bound\footnote{ Strictly speaking, \citet{d2008optimal} do not actually write down this formulation in their work. Indeed, their bound involves dual variables which cannot be used directly to generate feasible solutions via greedy rounding.
However, the fact that this bound and \citep[Problem (8)]{d2008optimal} are dual to each other follows directly from strong semidefinite duality, and therefore we refer to this formulation as being due to \cite{d2008optimal} (it essentially is).} which we compare against is: \begin{equation} \begin{aligned}\label{prob:lprelax3} \max_{\substack{\bm{z} \in [0, 1]^p}: \bm{e}^\top \bm{z} \leq k} \: \max_{\bm{X} \succeq \bm{0}, \bm{P}_i \succeq \bm{0}\ \forall i \in [p]} \quad & \sum_{i \in [p]}\ \langle \bm{a}_i\bm{a}_i^\top, \bm{P}_i\rangle \ \text{s.t.} \ \mathrm{tr}(\bm{X})=1,\ \mathrm{tr}(\bm{P}_i)=z_i, \ \bm{X}\succeq \bm{P}_i \ \forall i \in [p], \end{aligned} \end{equation} where the vectors $\bm{a}_i$ arise from a Cholesky decomposition $\bm{\Sigma}=\sum_{i=1}^p \bm{a}_i\bm{a}_i^\top$, and we obtain feasible solutions from this relaxation by greedily rounding an optimal $\bm{z}$ in the bound \textit{\`{a} la} Algorithm \ref{alg:greedymethod}. {\color{black} To allow for a fair comparison,} we also consider augmenting this formulation with the inequalities derived in Section \ref{ssec:validineq} to obtain the following stronger, yet more expensive to solve, relaxation: \begin{equation} \label{spca:sdpplussocp2} \begin{aligned} \max_{\substack{\bm{z} \in [0, 1]^p}: \bm{e}^\top \bm{z} \leq k} \: \max_{\substack{\bm{X} \succeq \bm{0},\\ \bm{P}_i \succeq \bm{0}\ \forall i \in [p]}} \quad & \sum_{i \in [p]}\ \langle \bm{a}_i\bm{a}_i^\top, \bm{P}_i\rangle \ \text{s.t.} \ \mathrm{tr}(\bm{X})=1,\ \mathrm{tr}(\bm{P}_i)=z_i, \ \bm{X}\succeq \bm{P}_i \ \forall i \in [p],\\ &\sum_{j \in [p]} X_{i,j}^2 \leq X_{i,i}z_i \ \forall i \in [p], \ \Vert \bm{X}\Vert_1 \leq k.
\end{aligned} \end{equation} } {\color{black} We first apply these relaxations on datasets where Algorithm \ref{alg:cuttingPlaneMethod} terminates, so that the optimal solution is known and can be compared against.} We report the quality of both methods, with and without the additional inequalities discussed in Section \ref{ssec:validineq}, in Tables \ref{tab:comparison_convrelaxations}-\ref{tab:comparison_convrelaxations2} respectively\footnote{For the instances of \eqref{prob:lprelax3} or \eqref{spca:sdpplussocp2} where $p>13$, we used SCS version $2.1.1$ (with default parameters) instead of Mosek, since Mosek required more memory than was available in our computing environment, and SCS takes an augmented Lagrangian approach which is less numerically stable but requires significantly less memory. That is, \eqref{prob:lprelax3}'s formulation is too expensive to solve via IPMs on a laptop when $p=50$.}. \begin{table}[h] \centering\footnotesize \caption{{\color{black}Quality of relaxation gap (upper bound vs. optimal solution, denoted R. gap), objective gap (rounded solution vs. optimal solution, denoted O. gap) and runtime in seconds per method.}} \begin{tabular}{@{}l l l r r r r r r@{}} \toprule Dataset & $p$ & $k$ & \multicolumn{3}{c@{\hspace{0mm}}}{Alg. \ref{alg:greedymethod} with \eqref{prob:lprelax2}} & \multicolumn{3}{c@{\hspace{0mm}}}{Alg. \ref{alg:greedymethod} with \eqref{prob:lprelax3}} \\ \cmidrule(l){4-6} \cmidrule(l){7-9} & & & R. gap $(\%)$& O.
gap $(\%)$ & Time(s) \\\midrule Pitprops & $13$ & $5$ &$23.8\%$ & $0.00\%$ &$0.02$ &$23.8\%$ & $16.1\%$ &$0.46$\\ & & $10$ & $1.10\%$ &$0.30\%$ &$0.03$ &$1.10\%$ & $1.33\%$ &$0.46$\\\midrule Wine & $13$ & $5$ &$36.8\%$ & $0.00\%$ &$0.02$ &$36.8\%$ & $40.4\%$ &$0.433$\\ & & $10$ &$2.43\%$ & $0.26\%$ &$0.03$ &$2.43\%$ & $15.0\%$ &$0.463$\\\midrule Miniboone & $50$ & $5$ & $781.3\%$ & $235.6\%$ & $7.37$ & $781.2\%$ & $34.7\%$ & $1,191.0$\\ & & $10$ & $340.6\%$ & $117.6\%$ & $7.50$ & $340.6\%$ & $44.9\%$ & $1,102.6$\\ & & \color{black} $20$ & \color{black} $120.3\%$ & \color{black} $38.08\%$ & \color{black} $6.25$ & \color{black} $120.3\%$ & \color{black} $31.9\%$ & \color{black} $1,140.2$ \\ \bottomrule \end{tabular} \label{tab:comparison_convrelaxations} \end{table} \begin{table}[h] \centering\footnotesize \caption{\color{black} Quality of relaxation gap (upper bound vs. optimal solution-denoted R. gap), objective gap (rounded solution vs. optimal solution-denoted O. gap) and runtime in seconds per method, with additional inequalities from Section \ref{ssec:validineq}.} \begin{tabular}{@{}l l l r r r r r r@{}} \toprule Dataset & $p$ & $k$ & \multicolumn{3}{c@{\hspace{0mm}}}{Alg. \ref{alg:greedymethod} with \eqref{spca:sdpplussocp}} & \multicolumn{3}{c@{\hspace{0mm}}}{Alg. \ref{alg:greedymethod} with \eqref{spca:sdpplussocp2}} \\ \cmidrule(l){4-6} \cmidrule(l){7-9} & & & R. gap $(\%)$& O. gap $(\%)$ & Time(s) & R. gap $(\%)$ & O. 
gap $(\%)$ & Time(s) \\\midrule Pitprops & $13$ & $5$ & $0.71\%$ &$0.00\%$ & $0.17$ &$1.53\%$ & $0.00\%$ &$0.55$ \\ & & $10$ & $0.12\%$ &$0.00\%$ & $0.27$ &$1.10\%$ & $0.00\%$ &$3.27$\\\midrule Wine & $13$ & $5$ &$1.56\%$ & $0.00\%$ &$0.24$ &$2.98\%$ & $15.03\%$ &$0.95$\\ & & $10$ &$0.40\%$ & $0.00\%$ &$0.22$ &$2.04\%$ & $0.00\%$ &$1.15$\\\midrule Miniboone & $50$ & $5$ & $0.00\%$ & $0.00\%$ & $163.3$ & $0.00\%$ & $0.01\%$ & $500.7$\\ & & $10$ & $0.00\%$ & $0.00\%$ & $148.5$ & $0.00\%$ & $0.02\%$ & $489.9$\\%\midrule & & \color{black} $20$ & \color{black} $0.00\%$ & \color{black} $0.00\%$ & \color{black} $194.5$ & \color{black} $0.00\%$ & \color{black} $0.00\%$ & \color{black} $776.3$\\ \bottomrule \end{tabular} \label{tab:comparison_convrelaxations2} \end{table} Observe that applying Algorithm \ref{alg:greedymethod} without the additional inequalities (Table \ref{tab:comparison_convrelaxations}) yields rather poor relaxations and randomly rounded solutions. However, by intersecting our relaxations with the additional inequalities from Section \ref{ssec:validineq} (Table \ref{tab:comparison_convrelaxations2}), we obtain extremely high-quality relaxations. Indeed, with the additional inequalities, Algorithm \ref{alg:greedymethod} using formulation \eqref{spca:sdpplussocp} identifies the optimal solution in all instances (0\% O. gap), and always supplies a bound gap of less than $2\%$. Moreover, in terms of obtaining high-quality solutions, the new inequalities allow Problem \eqref{spca:sdpplussocp} to perform as well as or better than Problem \eqref{prob:lprelax3}, despite optimizing over one semidefinite matrix, rather than $p+1$ semidefinite matrices. This suggests that Problem \eqref{spca:sdpplussocp} should be considered as a viable, more scalable and more accurate alternative to existing SDO relaxations such as Problem \eqref{prob:lprelax3}. For this reason, we shall only consider Problem \eqref{spca:sdpplussocp}'s formulation for the rest of the paper.
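For instances as small as those above, the quantity being bounded can in principle be computed by brute force, since $f(\bm{z})$ for a binary $\bm{z}$ is the dominant eigenvalue of the principal submatrix indexed by the support of $\bm{z}$. The following Python sketch (purely illustrative and practical only for tiny $p$ and $k$; the matrix is hypothetical, and this is not how the optimal values in our tables were obtained) makes this explicit by enumerating all size-$k$ supports:

```python
import itertools
import math

def lambda_max(A, iters=500):
    # Power iteration for the dominant eigenvalue of a symmetric PSD matrix.
    n = len(A)
    v = [1.0 / math.sqrt(n)] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

def sparse_pca_brute_force(Sigma, k):
    # f(z) is the dominant eigenvalue of the principal submatrix
    # indexed by supp(z); scan all supports of size k.
    best_val, best_support = -1.0, None
    for S in itertools.combinations(range(len(Sigma)), k):
        sub = [[Sigma[i][j] for j in S] for i in S]
        val = lambda_max(sub)
        if val > best_val:
            best_val, best_support = val, S
    return best_val, best_support

Sigma = [[4.0, 1.0, 0.0, 0.0],
         [1.0, 3.0, 0.0, 0.0],
         [0.0, 0.0, 2.0, 0.5],
         [0.0, 0.0, 0.5, 1.0]]
val, support = sparse_pca_brute_force(Sigma, 2)
```

The $\binom{p}{k}$ enumeration grows far too quickly to be useful at the scales in our experiments, which is precisely why the relaxations above matter.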
We remark, however, that the key drawback of applying these methods is that, as implemented in this section, they do not scale to sizes beyond those which Algorithm \ref{alg:cuttingPlaneMethod} successfully solves. This is a drawback because Algorithm \ref{alg:cuttingPlaneMethod} supplies an exact certificate of optimality, while these methods do not. In the following set of experiments, we therefore investigate numerical techniques to improve the scalability of Algorithm \ref{alg:greedymethod}. \subsection{Scalable Dual Bounds and {\color{black}Randomized} Rounding Methods} To improve the scalability of Algorithm \ref{alg:greedymethod}, we relax the PSD constraint on $\bm{X}$ in \eqref{prob:lprelax2} and \eqref{spca:sdpplussocp}. With these enhancements, we demonstrate that Algorithm \ref{alg:greedymethod} can be successfully scaled to generate high-quality bounds for $1000s \times 1000s$ matrices. { As discussed in Remark \ref{remark:scalability}, we first replace the PSD constraint $\bm{X} \succeq \bm{0}$ by requiring that the $p(p-1)/2$ two-by-two minors of $\bm{X}$ are non-negative: $X_{i,j}^2 \leq X_{i,i} X_{j,j}$. Second, we consider adding $20$ linear inequalities of the form $\langle \bm{X}, \bm{x}_t\bm{x}_t^\top\rangle \geq 0$, for some vector $\bm{x}_t$ \citep[see][for a discussion]{bertsimas2019polyhedral}.} Table \ref{tab:comparison_convrelaxations3} reports the performance of Algorithm \ref{alg:greedymethod} (with the relaxation \eqref{spca:sdpplussocp}) under these two approximations of the positive semidefinite cone, denoted ``Minors'' and ``Minors + 20 inequalities'' respectively. {\color{black}Note that we report the entire duality gap (i.e., we do not break the gap down into its relaxation and objective gap components) since, as reflected in Table \ref{tab:comparison}, some of these instances are currently too large to solve to optimality.} \begin{table}[h] \centering\footnotesize \caption{Quality of bound gap (rounded solution vs.
upper bound) and runtime in seconds of Algorithm \ref{alg:greedymethod} with \eqref{spca:sdpplussocp}, outer-approximation of the PSD cone.} \begin{tabular}{@{}l l l r r r r@{}} \toprule Dataset & $p$ & $k$ & \multicolumn{2}{c@{\hspace{0mm}}}{Minors} & \multicolumn{2}{c@{\hspace{0mm}}}{Minors + 20 inequalities} \\ \cmidrule(l){4-5} \cmidrule(l){6-7} & & & Gap $(\%)$ & Time(s) & Gap $(\%)$ & Time(s) \\\midrule Pitprops & $13$ & $5$ & $1.51\%$ & $0.02$ &$0.72\%$ &$0.36$ \\ & & $10$ & $5.29\%$ & $0.02$ &$1.12\%$ &$0.36$ \\\midrule Wine & $13$ & $5$ & $2.22\%$ & $0.02$ &$1.59\%$ &$0.38$ \\ & & $10$ & $3.81\%$ & $0.02$ &$1.50\%$ &$0.37$ \\\midrule Miniboone & $50$ & $5$ & $0.00\%$ & $0.11$ &$0.00\%$ &$0.11$ \\ & & $10$ & $0.00\%$ & $0.12$ &$0.00\%$ &$0.12$ \\ & & \color{black} $20$ & \color{black} $0.00\%$ & \color{black} $0.39$ & \color{black} $0.00\%$ & \color{black} $0.39$ \\ \midrule Communities & $101$ & $5$ & $0.07\%$ & $0.67$ &$0.07\%$ &$14.8$ \\ & & $10$ & $0.66\%$ & $0.68$ &$0.66\%$ &$14.4$ \\ & & \color{black} $20$ & \color{black} $3.32\%$ & \color{black} $1.84$ & \color{black} $2.23\%$ & \color{black} $33.5$ \\\midrule Arrhythmia & $274$ & $5$ & $3.37\%$ & $27.2$ &$1.39\%$ &$203.6$ \\ & & $10$ & $3.01\%$ & $25.6$ &$1.33\%$ &$184.0$ \\ & & \color{black} $20$ & \color{black} $8.87\%$ & \color{black} $21.8$ & \color{black} $4.48\%$ & \color{black} $426.8$ \\\midrule Micromass & $1300$ & $5$ & $0.04\%$ & $239.4$ &$0.01\%$ &$4,639$ \\ & & $10$ & $0.63\%$ & $232.6$ &$0.32\%$ &$6,392$ \\ & & \color{black} $20$ & \color{black} $13.1\%$ & \color{black} $983.5$ & \color{black} $5.88\%$ & \color{black} $16,350$ \\ \bottomrule \end{tabular} \label{tab:comparison_convrelaxations3} \end{table} Observe that if we impose constraints on the $2\times 2$ minors only then we obtain a solution {\color{black} certifiably} within {\color{black}$13\%$} of optimality in seconds (resp. minutes) for $p=100$s (resp. $p=1000$s). 
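To see why the $2\times 2$ minor constraints alone can leave a sizeable gap, while a handful of cuts $\langle \bm{X}, \bm{x}_t\bm{x}_t^\top\rangle \geq 0$ can close much of it, consider the following small Python check on a standard counterexample (the matrix is not from our experiments): it satisfies every minor constraint $X_{i,j}^2 \leq X_{i,i}X_{j,j}$ yet is not PSD, and the vector $\bm{x}=(1,-1,1)^\top$ exposes a violated cut:

```python
# A matrix satisfying every 2x2 minor constraint, yet not PSD:
# (1, -1, 1) is an eigenvector with eigenvalue -1.
X = [[1.0, 1.0, -1.0],
     [1.0, 1.0, 1.0],
     [-1.0, 1.0, 1.0]]
n = 3
minors_ok = all(X[i][j] ** 2 <= X[i][i] * X[j][j]
                for i in range(n) for j in range(n) if i != j)
x = [1.0, -1.0, 1.0]
# For a PSD matrix, the cut <X, x x^T> = x^T X x must be >= 0.
quad = sum(x[i] * X[i][j] * x[j] for i in range(n) for j in range(n))
print(minors_ok, quad)  # prints: True -3.0
```

Adding the single cut generated by this $\bm{x}$ (in practice, an eigenvector of a negative eigenvalue of the current iterate) excludes this matrix from the relaxed feasible region, which is the mechanism behind the ``Minors + 20 inequalities'' columns.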
Moreover, adding $20$ linear inequalities, we obtain a solution within $6\%$ of optimality in minutes (resp. hours) for $p=100$s (resp. $p=1000$s). {\color{black}Furthermore, the bound gaps compare favorably to those of Algorithm \ref{alg:cuttingPlaneMethod} and the method of \cite{berk2017} on instances which these methods could not solve to certifiable optimality. For instance, for the Arrhythmia dataset when $k=20$, we obtain a bound gap of less than $9\%$ in $20$s, while the method of \cite{berk2017} obtains a bound gap of $18.45\%$ in $600$s. This illustrates the value of the proposed relax+round method on datasets which are currently too large to be optimized over exactly.} To conclude this section, we explore Algorithm \ref{alg:greedymethod}'s ability to scale to even higher-dimensional datasets in a high-performance setting, by running the method on one Intel Xeon E5--2690 v4 2.6GHz CPU core using 600 GB RAM. Table \ref{tab:comparison_convrelaxations4} reports the method's scalability and performance on the Wilshire $5000$ dataset and the \verb|Gisette| and \verb|Arcene| UCI datasets. For the \verb|Gisette| dataset, we report on the method's performance when we include the first $3,000$ and $4,000$ rows/columns (as well as all $5,000$ rows/columns). Similarly, for the \verb|Arcene| dataset, we report on the method's performance when we include the first $6,000$, $7,000$ or $8,000$ rows/columns. We do not report results for the \verb|Arcene| dataset for $p>8,000$, as computing this requires more memory than was available (i.e., $>600$ GB RAM). We do not report the method's performance when we impose linear inequalities for the PSD cone, as solving the relaxation without them is already rather time consuming.
Moreover, we do not impose the $2 \times 2$ minor constraints to save memory, do not impose $\vert X_{i,j}\vert \leq M_{i,j}z_i$ {\color{black}when $p \geq 4000$} to save even more memory, and report the overall bound gap, as improving upon the randomly rounded solution is challenging in a high-dimensional setting. \begin{table}[h] \centering\footnotesize \caption{Quality of bound gap (rounded solution vs. upper bound) and runtime in seconds.} \begin{tabular}{@{}l l l r r @{}} \toprule Dataset & $p$ & $k$ & \multicolumn{2}{c@{\hspace{0mm}}}{Algorithm \ref{alg:greedymethod} (SOC relax)+Inequalities}\\ \cmidrule(l){4-5} & & & Bound gap $(\%)$ & Time(s) \\\midrule Wilshire $5000$ & $2130$ & $5$ & $0.38\%$ & $1,036$\\ & & $10$ & $0.24\%$ & $1,014$\\ & & \color{black} $20$ & \color{black} $0.36\%$ & \color{black} $1,059$\\ \midrule Gisette & $3000$ & $5$ & $1.67\%$ & $2,249$\\ & & $10$ & $35.81\%$ & $2,562$\\ & & \color{black} $20$ & \color{black} $10.61\%$ & \color{black} $3,424$\\ \midrule Gisette & $4000$ & $5$ & $1.55\%$ & $1,402$\\ & & $10$ & $54.4\%$ & $1,203$\\ & & \color{black} $20$ & \color{black} $11.84\%$ & \color{black} $1,435$\\ \midrule Gisette & $5000$ & $5$ & $1.89\%$ & $2,169$\\ & & $10$ & $2.22\%$ & $2,455$\\ & & \color{black} $20$ & \color{black} $7.16\%$ & \color{black} $2,190$\\\midrule Arcene & $6000$ & $5$ & $0.01\%$ & $3,333$\\ & & $10$ & $0.06\%$ & $3,616$\\ & & \color{black} $20$ & \color{black} $0.14\%$ & \color{black} $3,198$\\\midrule Arcene & $7000$ & $5$ & $0.03\%$ & $4,160$\\ & & $10$ & $0.05\%$ & $4,594$ \\ & & \color{black} $20$ & \color{black} $0.25\%$ & \color{black} $4,730$\\ \midrule Arcene & $8000$ & $5$ & $0.02\%$ & $6,895$\\ & & $10$ & $0.17\%$ & $8,479$\\ & & \color{black} $20$ & \color{black} $0.21\%$ & \color{black} $6,335$\\ \bottomrule \end{tabular} \label{tab:comparison_convrelaxations4} \end{table} These results suggest that if we solve the SOC relaxation using a first-order method rather than an interior point method, our 
approach could successfully generate certifiably near-optimal PCs when $p$ is in the $10,000$s, particularly if combined with a feature screening technique \citep[see][]{d2008optimal, atamturk2020feature}. {\color{black} \subsection{Performance of Exact and Approximate Methods on Synthetic Data} We now compare the exact and approximate methods against existing state-of-the-art methods in a spiked covariance matrix setting. We use the experimental setup laid out in \citet[Section 7.1]{d2008optimal}. We recover the leading principal component of a test matrix\footnote{\color{black}This statement of the test matrix differs from that of \citet[Section 7.1]{d2008optimal}, who write $\bm{\Sigma}=\bm{U}^\top \bm{U}+\sigma \bm{v}\bm{v}^\top$, rather than $\bm{\Sigma}=\frac{1}{n}\bm{U}^\top \bm{U}+\frac{\sigma}{\Vert \bm{v}\Vert_2^2} \bm{v}\bm{v}^\top$. However, it agrees with their source code.} $\bm{\Sigma} \in S^{p}_+$, where $p=150$, $\bm{\Sigma}=\frac{1}{n}\bm{U}^\top \bm{U}+\frac{\sigma}{\Vert \bm{v}\Vert_2^2} \bm{v}\bm{v}^\top$, $\bm{U} \in [0, 1]^{150 \times 150}$ is a noise matrix with i.i.d. standard uniform entries, $\bm{v} \in \mathbb{R}^{150}$ is a vector of signals such that \begin{align} v_i=\begin{cases} 1, & \text{if} \ i \leq 50,\\ \frac{1}{i-50}, & \text{if} \ 51 \leq i \leq 100,\\ 0, & \text{otherwise,} \end{cases} \end{align} and $\sigma=2$ is the signal-to-noise ratio. The methods which we compare are: \begin{itemize}\itemsep0em \item \textbf{Exact}: Algorithm \ref{alg:cuttingPlaneMethod} with Gershgorin inequalities and a time limit of $600$s. \item \textbf{Approximate:} Algorithm \ref{alg:greedymethod} with Problem \eqref{spca:sdpplussocp}, the SOC outer-approximation of the PSD cone, no PSD cuts, and the additional SOC inequalities.
\item \textbf{Greedy:} as proposed by \cite{moghaddam2006spectral} and laid out in \citep[Algorithm 1]{d2008optimal}, start with a solution $\bm{z}$ of cardinality $1$ and iteratively augment this solution vector with the index which gives the maximum variance contribution. Note that \cite{d2008optimal} found this method outperformed the $3$ other methods (approximate greedy, thresholding and sorting) they considered in their work. \item \textbf{Truncated Power Method:} as proposed by \citet{yuan2013truncated}, alternate between applying the power method to the solution vector and truncating the vector to ensure that it is $k$-sparse. Note that \cite{berk2017} found that this approach performed better than $5$ other state-of-the-art methods across the real-world datasets studied in the previous section of this paper and often matched the performance of the method of \cite{berk2017}; indeed, it functions as a warm-start for the latter method. \item \textbf{Sorting:} sort the entries of $\bm{\Sigma}_{i,i}$ by magnitude and set $z_i=1$ for the $k$ largest entries of $\bm{\Sigma}$, as studied in \cite{d2008optimal}. This naive method serves as a benchmark for the value of optimization in the more sophisticated methods considered here. \end{itemize} Figure \ref{fig:sensitivitytok} depicts the ROC curve (true positive rate vs. false positive rate for recovering the support of $\bm{v}$) over $20$ synthetic random instances, as we vary $k$ for each instance. We observe that among all methods, the sorting method is the least accurate, with a substantially larger false detection rate for a given true positive rate than the remaining methods (AUC$=0.7028$). The truncated power method and our exact method\footnote{\color{black}Note that the exact method would dominate the remaining methods if given an unlimited runtime budget.
Its poor performance reflects its inability to find the true optimal solution within $600$ seconds.} then offer a substantial improvement over sorting, with respective AUCs of $0.7482$ and $0.7483$. The greedy method then offers a modest improvement over them (AUC$=0.7561$) and the approximate relax+round method is the most accurate (AUC$=0.7593$). In addition to support recovery, Figure \ref{fig:sensitivitytok2} reports average runtime (left panel) and average optimality gap (right panel) over the same instances. Observe that among all methods, only the exact and the approximate relax+round methods provide optimality gaps, i.e., {\color{black} numerical certificates} of near optimality. On this metric, relax+round supplies average bound gaps of $1\%$ or less on all instances, while the exact method typically supplies bound gaps of $30\%$ or more. This comparison illustrates the tightness of the valid inequalities from Section \ref{ssec:validineq} that we included in the relaxation. Moreover, the relax+round method converges in less than one minute on all instances. All told, the relax+round method is the best performing method overall, although if $k$ is set to be sufficiently close to $0$ or $p$ all methods behave comparably. In particular, the relax+round method should be preferred over the exact method, even though the exact method performs better at smaller problem sizes. 
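For readers who wish to reproduce these baselines, the truncated power method and the sorting heuristic are straightforward to sketch. The following is a minimal numpy sketch (not the authors' code; all constants are illustrative) run on a spiked covariance instance of the kind described above:

```python
import numpy as np

def truncated_power_method(Sigma, k, iters=200, seed=0):
    """Alternate a power-method step with hard-thresholding to k entries."""
    rng = np.random.default_rng(seed)
    p = Sigma.shape[0]
    x = rng.standard_normal(p)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = Sigma @ x
        y[np.argsort(np.abs(y))[:-k]] = 0.0  # keep the k largest-magnitude coordinates
        nrm = np.linalg.norm(y)
        if nrm == 0.0:
            break
        x = y / nrm
    return x

def sorting_method(Sigma, k):
    """Pick the k largest diagonal entries, then take the leading
    eigenvector of the induced principal submatrix."""
    idx = np.sort(np.argsort(np.diag(Sigma))[-k:])
    _, V = np.linalg.eigh(Sigma[np.ix_(idx, idx)])
    x = np.zeros(Sigma.shape[0])
    x[idx] = V[:, -1]
    return x

# Spiked covariance instance in the spirit of the experiment above
# (p = 150, sigma = 2); the random seed and k are illustrative choices.
rng = np.random.default_rng(42)
p, k, sigma = 150, 10, 2.0
U = rng.random((p, p))
v = np.array([1.0 if i < 50 else (1.0 / (i - 49) if i < 100 else 0.0)
              for i in range(p)])
Sigma = U.T @ U / p + sigma * np.outer(v, v) / (v @ v)

x_tp = truncated_power_method(Sigma, k)
x_sort = sorting_method(Sigma, k)
```

Both outputs are unit-norm and $k$-sparse by construction, so their objective values $\bm{x}^\top \bm{\Sigma}\bm{x}$ can be compared directly against the upper bound $\lambda_{\max}(\bm{\Sigma})$.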
\begin{figure}[h]\centering \includegraphics[scale=0.6]{roccurve_synthetic.pdf} \caption{\color{black}ROC curve over $20$ synthetic instances where $p=150$ and $k_{\text{true}}=100$ (the methods are not given $k_{\text{true}}$).} \label{fig:sensitivitytok} \end{figure} \begin{figure}[h] \begin{subfigure}[t]{.45\linewidth} \includegraphics[scale=0.4]{runtimes_synthetic.pdf} \end{subfigure} \begin{subfigure}[t]{.45\linewidth} \centering \includegraphics[scale=0.4]{boundgap_synthetic.pdf} \end{subfigure} \caption{\color{black}Average time to compute a solution (left) and average optimality gap (right) over $20$ synthetic instances where $p=150$ and $k_{\text{true}}=100$ (the methods are not given $k_{\text{true}}$).} \label{fig:sensitivitytok2} \end{figure} } {\color{black} \subsection{Summary and Guidelines From Experiments} In summary, our main findings from our numerical experiments are as follows: \begin{itemize} \item For small or medium scale problems where $p \leq 100$ or $k \leq 10$, exact methods such as Algorithm \ref{alg:cuttingPlaneMethod} or the method of \cite{berk2017} reliably obtain certifiably optimal or near-optimal solutions in a short amount of time, and should therefore be preferred over other methods. However, for larger-scale sparse PCA problems, exact methods currently do not scale as well as approximate or heuristic methods. \color{black} \item For larger-scale sparse PCA problems, our proposed combination of solving a second-order cone relaxation and rounding greedily reliably supplies certifiably near-optimal solutions in practice (if not in theory) in a relatively small amount of time. Moreover, it outperforms other state-of-the-art heuristics including the greedy method of \cite{moghaddam2006spectral, d2008optimal} and the Truncated Power Method of \cite{yuan2013truncated}.
Accordingly, it should be considered a reliable and more accurate alternative for problems where $p=1000$s.\color{black} \item In practice, for even larger-scale problem sizes, we recommend using a combination of these methods: a computationally cheaper method (with $k$ set in the $1000$s) as a feature screening method, to be followed by the approximate relax+round method (with $k$ set in the $100$s) and/or the exact method, if time permits. \end{itemize} } \section{Three Extensions and their Mixed-Integer Conic Formulations} We conclude by discussing {\color{black}three} extensions of sparse PCA where our methodology applies. \subsection{Non-Negative Sparse PCA} One potential extension to this paper would be to develop a certifiably optimal algorithm for non-negative sparse PCA \citep[see][for a discussion]{zass2007nonnegative}, i.e., develop a tractable reformulation of \begin{align*} \max_{\bm{x} \in \mathbb{R}^p} \quad & \langle \bm{x}\bm{x}^\top, \bm{\Sigma} \rangle \ \text{s.t.}\ \bm{x}^\top \bm{x}=1, \bm{x} \geq \bm{0}, \Vert \bm{x}\Vert_0 \leq k. \end{align*} Unfortunately, we cannot develop a MISDO reformulation of non-negative sparse PCA by adapting Theorem \ref{thm:misdpreformthm} \textit{mutatis mutandis}. Indeed, while we can still set $\bm{X}=\bm{x}\bm{x}^\top$ and relax the rank-one constraint, if we do so then, by the non-negativity of $\bm{x}$, lifting $\bm{x}$ yields: \begin{equation}\label{misdpprimal_cp} \begin{aligned} \max_{\bm{z} \in \{0, 1\}^p: \bm{e}^\top \bm{z} \leq k} \ \max_{\bm{X} \in \mathcal{C}_n} \quad & \langle \bm{\Sigma}, \bm{X} \rangle\\ \text{s.t.} \quad & \mathrm{tr}(\bm{X})=1,\ X_{i,j}=0 \ \text{if} \ z_i=0, \ X_{i,j}=0 \ \text{if} \ z_j=0 \ \forall i, j \in [p], \end{aligned} \end{equation} where $\mathcal{C}_n:=\{\bm{X}:\ \exists \ \bm{U} \geq \bm{0}, \bm{X}=\bm{U}^\top \bm{U}\}$ denotes the completely positive cone, which is NP-hard to separate over and cannot currently be optimized over tractably \citep{dong2013separating}.
Nonetheless, we can develop relatively tractable mixed-integer conic upper and lower bounds for non-negative sparse PCA. Indeed, we can obtain a fairly tight upper bound by replacing the completely positive cone with the larger doubly non-negative cone $\mathcal{D}_n:=\{\bm{X} \in S^p_+: \bm{X} \geq \bm{0}\}$, which is a high-quality outer-approximation of $\mathcal{C}_n$, indeed exact when $k \leq 4$ \citep{burer2009difference}. Unfortunately, this relaxation is not tight in general, since the extreme rays of the doubly non-negative cone are not necessarily rank-one when $k \geq 5$ \citep{burer2009difference}. Nonetheless, to obtain feasible solutions which supply lower bounds, we could inner approximate the completely positive cone with the cone of non-negative scaled diagonally dominant matrices \citep[see][]{ahmadi2019dsos,bostanabad2018inner}. \subsection{Sparse PCA on Rectangular Matrices} A second extension would be to extend our methodology to the non-square case: \begin{align} \max_{\bm{x} \in \mathbb{R}^m, \bm{y} \in \mathbb{R}^n} \quad & \bm{x}^\top \bm{A}\bm{y} \ \text{s.t.} \ \Vert \bm{x}\Vert_2=1, \Vert \bm{y}\Vert_2=1, \Vert \bm{x}\Vert_0 \leq k, \Vert \bm{y}\Vert_0 \leq k. \end{align} Observe that computing the spectral norm of a matrix $\bm{A}$ is equivalent to: \begin{align} \max_{\bm{X} \in \mathbb{R}^{m \times n}} \quad \langle \bm{A}, \bm{X}\rangle \ \text{s.t.} \ \begin{pmatrix} \bm{U} & \bm{X}\\ \bm{X}^\top & \bm{V}\end{pmatrix} \succeq \bm{0}, \mathrm{tr}(\bm{U})+\mathrm{tr}(\bm{V})=2, \end{align} where, in an optimal solution, $\bm{U}$ stands for $\bm{x}\bm{x}^\top$, $\bm{V}$ stands for $\bm{y}\bm{y}^\top$ and $\bm{X}$ stands for $\bm{x}\bm{y}^\top$; this can be seen by taking the dual of \citep[Equation 2.4]{recht2010guaranteed}.
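As a quick numerical sanity check of this variational characterization, one can confirm with numpy that the top singular pair attains the spectral norm and yields a feasible PSD block matrix (the matrix $\bm{A}$ below is random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))          # illustrative rectangular matrix
U, s, Vt = np.linalg.svd(A)
x, y = U[:, 0], Vt[0, :]                 # top singular pair, both unit vectors

# The optimum of the variational problem equals the largest singular value,
# attained by U = x x^T, V = y y^T, X = x y^T.
obj = x @ A @ y
M = np.block([[np.outer(x, x), np.outer(x, y)],
              [np.outer(y, x), np.outer(y, y)]])
trace_sum = np.trace(np.outer(x, x)) + np.trace(np.outer(y, y))
min_eig = np.linalg.eigvalsh(M)[0]       # M = [x; y][x; y]^T is PSD, rank one
```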
Therefore, by using the same argument as in the positive semidefinite case, we can rewrite sparse PCA on rectangular matrices as the following MISDO: \begin{equation} \begin{aligned} \max_{\bm{w} \in \{0, 1\}^m, \bm{z} \in \{0, 1\}^n}\max_{\bm{X} \in \mathbb{R}^{m \times n}} \quad & \langle \bm{A}, \bm{X}\rangle \\ \text{s.t.} & \ \begin{pmatrix} \bm{U} & \bm{X}\\ \bm{X}^\top & \bm{V}\end{pmatrix} \succeq \bm{0}, \mathrm{tr}(\bm{U})+\mathrm{tr}(\bm{V})=2,\\ & U_{i,j}=0 \ \text{if} \ w_i=0\ \forall i,j \in [m], \\ & V_{i,j}=0 \ \text{if} \ z_i=0\ \forall i,j \in [n], \bm{e}^\top \bm{w} \leq k, \bm{e}^\top \bm{z} \leq k. \end{aligned} \end{equation} \subsection{Sparse PCA with Multiple Principal Components} A third extension where our methodology is applicable is the problem of obtaining multiple principal components simultaneously, rather than deflating $\bm{\Sigma}$ after obtaining each principal component. As there are multiple definitions of this problem, we now discuss the extent to which our framework encompasses each case. \paragraph{Common Support:} Perhaps the simplest extension of sparse PCA to a multi-component setting arises when all $r$ principal components have common support. By retaining the vector of binary variables $\bm{z}$ and employing the Ky-Fan theorem \citep[c.f.][Theorem 2.3.8]{wolkowicz2012handbook} to cope with multiple principal components, we obtain the following formulation in much the same manner as previously: \begin{align} \max_{\bm{z} \in \{0, 1\}^p: \bm{e}^\top \bm{z} \leq k}\ \max_{\bm{X} \in S^p_+} \quad & \langle \bm{X}, \bm{\Sigma}\rangle\ \text{s.t.} \ \bm{0} \preceq \bm{X} \preceq \mathbb{I}, \ \mathrm{tr}(\bm{X})=r,\ X_{i,j}=0 \ \text{if} \ z_{i}=0\ \forall i, j \in [p]. \end{align} Notably, the logical constraint $X_{i,j}=0$ if $z_i=0$, which formed the basis of our subproblem strategy, still successfully models the sparsity constraint.
This suggests that (a) one can derive an equivalent subproblem strategy under common support, and (b) a cutting-plane method for common support should scale as well as in the single-component case. \paragraph{Disjoint Support:} In a sparse PCA problem with disjoint support \citep{vu2012minimax}, simultaneously computing the first $r$ principal components is equivalent to solving: \begin{equation} \begin{aligned} \max_{\substack{\bm{z} \in \{0, 1\}^{p \times r}: \bm{e}^\top \bm{z}_t \leq k\ \forall t \in [r], \\\bm{z}\bm{e} \leq \bm{e}}} \max_{\bm{W} \in \mathbb{R}^{p \times r}} \quad & \langle \bm{W}\bm{W}^\top, \bm{\Sigma}\rangle\\ \text{s.t.} \quad & \bm{W}^\top \bm{W}=\mathbb{I}_{r},\ W_{i,t}=0 \ \text{if} \ z_{i,t}=0\ \forall i \in [p], t \in [r], \end{aligned} \end{equation} where $z_{i,t}$ is a binary variable denoting whether feature $i$ is a member of the $t$th principal component. By applying the technique used to derive Theorem \ref{thm:misdpreformthm} \textit{mutatis mutandis}, and invoking the Ky-Fan theorem \citep[c.f.][Theorem 2.3.8]{wolkowicz2012handbook} to cope with the rank-$r$ constraint, we obtain: \begin{equation} \begin{aligned} \max_{\substack{\bm{z} \in \{0, 1\}^{p \times r}: \bm{e}^\top \bm{z}_t \leq k\ \forall t \in [r], \\\bm{z}\bm{e} \leq \bm{e}}} \max_{\bm{X} \in S^p} \quad & \langle \bm{X}, \bm{\Sigma}\rangle\\ \text{s.t.} \quad & \bm{0} \preceq \bm{X} \preceq \mathbb{I}, \ \mathrm{tr}(\bm{X})=r,\ X_{i,j}=0 \ \text{if} \ Y_{i,j}=0\ \forall i, j \in [p], \end{aligned} \end{equation} where $Y_{i,j}=\sum_{t=1}^r z_{i,t}z_{j,t}$ is a binary matrix denoting whether features $i$ and $j$ are members of the same principal component; this problem can be addressed by a cutting-plane method in much the same manner as when $r=1$. {\color{black} \section*{Acknowledgments} We are grateful to the three anonymous referees and the associate editor for many valuable comments which improved the paper. } {\footnotesize \bibliographystyle{abbrvnat}
https://arxiv.org/abs/1809.03729
The Maximum Number of Three Term Arithmetic Progressions, and Triangles in Cayley Graphs
Let $G$ be a finite Abelian group. For a subset $S \subseteq G$, let $T_3(S)$ denote the number of length three arithmetic progressions in $S$ and Prob[$S$] $= \frac{1}{|S|^2}\sum_{x,y \in S} 1_S(x+y)$. For any $q \ge 1$ and $\alpha \in [0,1]$, and any $S \subseteq G$ with $|S| = \frac{|G|}{q+\alpha}$, we show $\frac{T_3(S)}{|S|^2}$ and Prob[$S$] are bounded above by $\max\left(\frac{q^2-\alpha q+\alpha^2}{q^2},\frac{q^2+2\alpha q+4\alpha^2-6\alpha+3}{(q+1)^2},\gamma_0\right)$, where $\gamma_0 < 1$ is an absolute constant. As a consequence, we verify a graph theoretic conjecture of Gan, Loh, and Sudakov for Cayley graphs.
\section{Introduction} The study of arithmetic progressions in subsets of integers and general Abelian groups is a central topic in additive combinatorics and has led to the development of many fascinating areas of mathematics. A famous result on three term arithmetic progressions (3APs) is Roth's theorem, which, in its finitary form, says that for each $\lambda > 0$, for $N$ large, any subset $S \subseteq \{1,\dots, N\}$ of size $|S| \ge \lambda N$ contains a 3AP. \vspace{3mm} Once Roth's theorem ensures that all subsets of a given size have a 3AP, one can generate many 3APs. For example, Varnavides [4] proved that for each $\lambda > 0$, there is some $c > 0$ so that for all large $N$, every subset $S \subseteq \{1,\dots,N\}$ with $|S| \ge \lambda N$ contains at least $cN^2$ 3APs. A natural question is then how many 3APs a subset of $\{1,\dots,N\}$ of a prescribed size can have. We look at this question in the group theoretic setting. \vspace{3mm} Fix $\lambda \in (0,1)$. Let $p$ be a large prime and consider subsets $S \subseteq \mathbb{Z}_p$ of size $|S| = \lfloor \lambda p \rfloor$. If $T_3(S)$ denotes the number of 3APs in $S$, namely, the number of $x,d \in \mathbb{Z}_p$ with $x,x+d,x+2d \in S$, then Croot [1] showed that $$\lim_{p \to \infty} \max_{\substack{S \subseteq \mathbb{Z}_p \\ |S| = \lfloor \lambda p \rfloor}} \frac{T_3(S)}{|S|^2}$$ exists, and then Green and Sisask [2] proved that the limit is in fact $\frac{1}{2}$, for all $\lambda$ less than some absolute constant. In $\mathbb{Z}_n$, for $n$ not prime, the situation is quite different, since subgroups have many 3APs relative to their size. In this paper, we nevertheless get an upper bound, useful when the size of $S$ is ``far" from dividing $n$. 
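To fix ideas, $T_3(S)$ can be computed by brute force for small $n$; the sketch below (an illustration, not part of the proofs) also exhibits the phenomenon just mentioned, namely that a subgroup $H \le \mathbb{Z}_n$ attains $T_3(H)/|H|^2 = 1$:

```python
def count_3aps(S, n):
    """T_3(S): number of pairs (x, d) in Z_n x Z_n with x, x+d, x+2d all in S."""
    members = set(S)
    return sum(1 for x in range(n) for d in range(n)
               if x in members
               and (x + d) % n in members
               and (x + 2 * d) % n in members)
```

For the subgroup $H = \{0,3,6\}$ of $\mathbb{Z}_9$, every pair $(x,d) \in H^2$ yields a 3AP in $H$, so $T_3(H) = |H|^2 = 9$.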
\vspace{3mm} \begin{theorem} There is an absolute constant $\gamma_1 < 1$ so that for any finite Abelian group $G$ of odd order, and for any $q \in \mathbb{N}, \alpha \in [0,1]$, $$\max_{\substack{S \subseteq G \\ |S| = \frac{|G|}{q+\alpha}}} \frac{T_3(S)}{|S|^2} \le \max\left(\frac{q^2-\alpha q+\alpha^2}{q^2},\frac{q^2+2\alpha q+4\alpha^2-6\alpha+3}{(q+1)^2},\gamma_1\right).$$ \end{theorem} \vspace{1mm} Related to $\frac{T_3(S)}{|S|^2} = \frac{1}{|S|^2} \sum_{x,y \in S} 1_S(\frac{x+y}{2})$ is the quantity $\frac{1}{|S|^2}\sum_{x,y \in S} 1_S(x+y)$. This quantity, which we denote $\Probb[S]$, arises in the expression for the number of triangles in a Cayley graph with generating set $S$. Precisely, let $G$ be an additive group of size $n$ and $S \subseteq G$ a symmetric set not containing $0$. Connect $x,y \in G$ iff $x-y \in S$. We obtain an undirected graph on $G$ with no self loops. The number of triangles in our graph is $$\frac{1}{6}\sum_{a,b,c \in G} 1_S(a-b)1_S(b-c)1_S(a-c).$$ Let $x = a-b$ and $y = b-c$. Then ranging over $c,b,a$ is equivalent to ranging over $c,y,x$ and thus $$|T| = \frac{1}{6}\sum_{x,y,c} 1_S(x)1_S(y)1_S(x+y) = \frac{1}{6}n\sum_{x,y \in S} 1_S(x+y) = \frac{1}{6}n|S|^2\Probb[S].$$ Quite recently, Gan, Loh, and Sudakov [3] resolved a conjecture of Engbers and Galvin regarding the maximum number of independent sets of size $3$ that a graph with a given minimum degree and fixed size can have. Phrased in complementary graphs, they showed that given a maximum degree $d$ and a positive integer $n \le 2d+2$, the maximum number of triangles that a graph on $n$ vertices with maximum degree $d$ can have is ${d+1 \choose 3}+{n-(d+1) \choose 3}$. This immediately raised the question of what the maximum is for $n > 2d+2$. They conjectured the following. \vspace{3mm} \begin{conjecture*}[Gan-Loh-Sudakov] Fix $d \ge 2$. 
For any positive integer $n$, if we write $n = q(d+1)+r$ for $0 \le r \le d$, then the maximum number of triangles that a graph on $n$ vertices with maximum degree $d$ can have is $q{d+1 \choose 3}+{r \choose 3}$. \end{conjecture*} \vspace{3mm} For each $d,n$, an example of a graph achieving $q{d+1 \choose 3}+{r \choose 3}$ is simply a disjoint union of $K_{d+1}$'s and a $K_r$. The conjecture for a Cayley graph on an additive group $G$ with generating set $S$, $|S| = \frac{|G|}{q+\alpha}$, takes the form $\Probb[S] \le \frac{q+\alpha^3}{q+\alpha}$, up to smaller order terms. We verify the conjecture for Cayley graphs when $q \ge 7$. \vspace{3mm} \begin{theorem} There is an absolute constant $\gamma_0 < 1$ so that the following holds. Let $G$ be a finite Abelian group and take $q \in \mathbb{N}, \alpha \in [0,1]$. Then for any symmetric subset $S \subseteq G$ with $|S| = \frac{|G|}{q+\alpha}$, $$\frac{1}{|S|^2}\sum_{x,y \in S} 1_S(x+y) \le \max\left(\frac{q^2-\alpha q+\alpha^2}{q^2},\frac{q^2+2\alpha q+4\alpha^2-6\alpha+3}{(q+1)^2},\gamma_0 \right).$$ Consequently, the Gan-Loh-Sudakov conjecture holds for Cayley graphs with generating set $S$ of size $|S| \le \frac{n}{7}$. \end{theorem} \vspace{3mm} We give a Fourier analytic proof of Theorems 1 and 2. Here is a quick high-level overview of the argument. We express the relevant ``probability'' (either $\frac{1}{|S|^2}\sum_{x,y \in S} 1_S(\frac{x+y}{2})$ or $\frac{1}{|S|^2}\sum_{x,y \in S} 1_S(x+y)$) in terms of the Fourier coefficients of $1_S$. If the probability is large, then some nonzero Fourier coefficient must be large. We deduce that (a dilate of) the residues of $S$ of a certain modulus concentrate near $0$. Since there won't be ``wraparound'' near 0, this allows us to transfer the problem to $\mathbb{Z}$, which is a setting where it's easier to bound the relevant probabilities. We can show from the result in $\mathbb{Z}$ that we in fact must have many residues be $0$.
This allows us to conclude that $S$ is very close to a subgroup. Induction and a purely combinatorial argument finish the job from there. \vspace{3mm} Here is an outline of the paper. We first set our notation for Fourier analysis on $\mathbb{Z}_n$. Then we give the proof of Theorems 1 and 2, modulo two Lemmas, which we prove afterwards. We then show the calculations deducing the Gan-Loh-Sudakov conjecture from our main theorem. Finally, we prove Theorems 1 and 2 when $q=1$. \section{Fourier Analysis on $\mathbb{Z}_n$} In this section, we briefly fix our notation for Fourier analysis on $\mathbb{Z}_n$ and obtain the Fourier representation of the relevant quantities in the proofs to be given below. For a function $f: \mathbb{Z}_n \to \mathbb{C}$, define its (finite) Fourier transform $\widehat{f} : \mathbb{Z}_n \to \mathbb{C}$ by $$\widehat{f}(m) := \frac{1}{n}\sum_{x \in \mathbb{Z}_n} f(x)e^{-2\pi i \frac{xm}{n}}.$$ The following well-known equalities are straightforward. $$\sum_{m \in \mathbb{Z}_n} |\widehat{f}(m)|^2 = \frac{1}{n} \sum_{x \in \mathbb{Z}_n} |f(x)|^2$$ $$ f(x) = \sum_{m \in \mathbb{Z}_n} \widehat{f}(m)e^{2\pi i \frac{xm}{n}}.$$ Let $S$ be a symmetric subset of $\mathbb{Z}_n$.
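These conventions, and the identity for $\Probb[S]$ derived from them below, are easy to sanity-check numerically. In the sketch that follows, numpy's unnormalized FFT is divided by $n$ to match our $\frac{1}{n}$-normalized transform; the set $S \subseteq \mathbb{Z}_{11}$ is an arbitrary symmetric example:

```python
import numpy as np

n = 11
S = [1, 10, 3, 8]          # symmetric mod 11: -1 = 10, -3 = 8
f = np.zeros(n)
f[S] = 1.0
fhat = np.fft.fft(f) / n   # numpy's fft omits the 1/n factor used here

# Plancherel with these conventions
plancherel_lhs = np.sum(np.abs(fhat) ** 2)
plancherel_rhs = np.sum(np.abs(f) ** 2) / n

# Prob[S] computed directly and via the cubic sum of Fourier coefficients
d = len(S)
direct = sum(f[(x + y) % n] for x in S for y in S) / d ** 2
via_fourier = (n ** 2 / d ** 2) * np.sum(fhat ** 3)
```

For symmetric $S$ the coefficients $\widehat{1_S}(m)$ are real, so the imaginary part of the Fourier-side expression vanishes up to rounding error.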
Then, $\frac{1}{|S|^2}\sum_{x,y \in S} 1_S(x+y) = $ $$\frac{1}{|S|^2} \sum_{x,y \in \mathbb{Z}_n} \left[\sum_{m_1 \in \mathbb{Z}_n} \widehat{1_S}(m_1)e^{2\pi i \frac{xm_1}{n}}\right]\left[\sum_{m_2 \in \mathbb{Z}_n} \widehat{1_S}(m_2)e^{2\pi i \frac{ym_2}{n}}\right] \left[\sum_{m_3 \in \mathbb{Z}_n} \widehat{1_S}(m_3)e^{2\pi i \frac{(x+y)m_3}{n}}\right]$$ $$= \frac{1}{|S|^2}\sum_{m_1,m_2,m_3 \in \mathbb{Z}_n} \widehat{1_S}(m_1)\widehat{1_S}(m_2)\widehat{1_S}(m_3) \left[\sum_{x \in \mathbb{Z}_n} e^{2\pi i \frac{x(m_1+m_3)}{n}}\right] \left[\sum_{y \in \mathbb{Z}_n} e^{2\pi i \frac{y(m_2+m_3)}{n}}\right],$$ and using $$\sum_{x \in \mathbb{Z}_n} e^{2\pi i \frac{xk}{n}} = \begin{cases} n & k \equiv 0 \pmod{n} \\ 0 & k \not \equiv 0 \pmod{n} \end{cases},$$ we obtain $$\frac{1}{|S|^2}\sum_{x,y \in S} 1_S(x+y) =\frac{n^2}{|S|^2} \sum_{m \in \mathbb{Z}_n} \widehat{1_S}(-m)\widehat{1_S}(-m)\widehat{1_S}(m).$$ However, the symmetry of $S$ implies that $\widehat{1_S}(m) = \widehat{1_S}(-m)$ for each $m \in \mathbb{Z}_n$. Therefore, $$\Probb[S] = \frac{1}{|S|^2}\sum_{x,y \in S} 1_S(x+y) = \frac{n^2}{|S|^2} \sum_{m \in \mathbb{Z}_n} \widehat{1_S}(m)^3.$$ Similarly, for any subset $S \subseteq \mathbb{Z}_n$, $$\frac{1}{|S|^2}\sum_{x,y \in S} 1_S(\frac{x+y}{2}) = \frac{n^2}{|S|^2}\sum_{m \in \mathbb{Z}_n} \widehat{1_S}(m)^2\widehat{1_S}(-2m).$$ \vspace{3mm} \section{Proof of Theorems 1 and 2} We induct on $q$. We discuss the base case $q=1$ in section 6. Take some $q \ge 2$ and $\alpha \in [0,1]$. Let $S \subseteq \mathbb{Z}_n$ be a symmetric\footnote{In the 3AP setting, we do not assume $S$ is symmetric.} subset with $|S| = \frac{n}{q+\alpha}$. \vspace{3mm} Let $\gamma = \max(\frac{q^2-\alpha q+\alpha^2}{q^2},\frac{q^2+2\alpha q+4\alpha^2-6\alpha+3}{(q+1)^2},\gamma_0)$. Assume, for the sake of contradiction, that $\Probb[S] \ge \gamma$. 
Then, as explained in section 2 (writing $d := |S|$), $$\sum_m \widehat{1_S}(m)^3 \ge \frac{d^2}{n^2}\gamma.$$ Note $\widehat{1_S}(0)^3 = \frac{d^3}{n^3}$, so, since $\widehat{1_S}(m)$ is real for each $m$\footnote{In the 3AP setting, we instead do $\gamma \frac{d^2}{n^2}-\frac{d^3}{n^3} \le \sup_{m \not = 0} |\widehat{1_S}(-2m)| \cdot [\frac{d}{n}-\frac{d^2}{n^2}]$. Then we take $m_0$ with $|\widehat{1_S}(m_0)| \ge \frac{d}{n}\mu$. Finally, we can translate $S$ so that $\widehat{1_S}(m_0)$ is real and positive.}, $$\gamma\frac{d^2}{n^2}-\frac{d^3}{n^3} \le \sum_{m \not = 0} \widehat{1_S}(m)^3 \le \left(\sup_{m \not = 0} \widehat{1_S}(m)\right) \cdot \sum_{m \not = 0} \widehat{1_S}(m)^2 = \left(\sup_{m \not = 0} \widehat{1_S}(m)\right) \cdot [\frac{d}{n}-\frac{d^2}{n^2}],$$ where we used Plancherel in the last step. Take $m_0 \not = 0$ with $$\widehat{1_S}(m_0) \ge \frac{d}{n}\frac{\gamma-\frac{d}{n}}{1-\frac{d}{n}} =: \frac{d}{n}\mu.$$ Then, $$\mu \le \frac{1}{d}\sum_{x \in S} e^{2\pi i \frac{m_0}{n}x} = \frac{1}{d}\sum_{x \in S} e^{2\pi i \frac{m_0/g}{n/g}x},$$ where $g := \gcd(m_0,n)$. Let $$A = \{x \in S : 2\pi \frac{m_0/g}{n/g}x \in [-2\pi/3,2\pi/3] \pmod{2\pi}\}$$ $$B = S\setminus A.\footnote{In the 3AP setting, we let $A = \{x \in S : 2\pi \frac{m_0/g}{n/g}x \in [-\frac{\pi}{2},\frac{\pi}{2}] \pmod{2\pi}\}$ and $B = S \setminus A$.}$$ Then, since $\widehat{1_S}(m_0)$ is real, $$d\mu \le \sum_{x \in S} \cos(2\pi \frac{m_0/g}{n/g}x) \le |A|+(d-|A|)(-\frac{1}{2}),$$ which implies $$ \frac{|A|}{d} \ge \frac{2\mu+1}{3}. \footnote{In the 3AP setting, we get $d\mu \le |A|+(d-|A|)0$ and thus $\frac{|A|}{d} \ge \mu$.} $$ For $z \in B$, $$\#\{(x,y) \in S^2 : x+y = z\} \le d$$ and for $z \in A$, $$\#\{(x,y) \in B\times A : x+y = z\} \le |B|$$ $$\#\{(x,y) \in S\times B : x+y=z\} \le |B|$$ $$\#\{(x,y) \in A \times A : x+y = z\} =: C_z.
\footnote{In the 3AP setting, the sets will merely have $2z$ instead of $z$; the same estimates thus hold.}$$ Therefore, $$d^2\Probb[S] \le d|B|+2|A|\hspace{1mm}|B|+\sum_{z \in A} C_z$$ $$= d(d-|A|)+2|A|(d-|A|)+|A|^2\Probb[A].$$ \noindent So, we must have $$\Probb[A] \ge \frac{\gamma+2\frac{|A|^2}{d^2}-\frac{|A|}{d}-1}{\frac{|A|^2}{d^2}}.$$ If we let $f(x) = \frac{\gamma+2x^2-x-1}{x^2}$, then $f'(x) = -2\gamma x^{-3}+x^{-2}+2x^{-3}$ is positive for $x > 0$. We've shown $\frac{|A|}{d} \ge \frac{2\mu+1}{3} =: \nu \footnote{In the 3AP setting, we have $\nu := \mu$.}$, so we get that $$\Probb[A] \ge \frac{\gamma+2\nu^2-\nu-1}{\nu^2} =: \beta.$$ We now argue that the weight at $0$ must be large. For each $i \in [-\frac{1}{3}\frac{n}{g},\frac{1}{3}\frac{n}{g}]$, let $S_i = \{x \in S : x \equiv i \pmod{n/g}\}$. Let $a_i = |S_i|$. Note that for each $i,j \in [-\frac{1}{3}\frac{n}{g},\frac{1}{3}\frac{n}{g}]$ such that $i+j \in [-\frac{1}{3}\frac{n}{g},\frac{1}{3}\frac{n}{g}]$, $$\#\{(x_i,y_j,z_{i+j}) \in S_i\times S_j \times S_{i+j} : x_i+y_j = z_{i+j}\} \le \min(|S_i|\hspace{1mm} |S_j|, |S_i| \hspace{1mm} |S_{i+j}|, |S_j| \hspace{1mm} |S_{i+j}|). \footnote{In the 3AP setting, we'll be looking at $[-\frac{1}{4}\frac{n}{g},\frac{1}{4}\frac{n}{g}]$ instead. Also, we'll have $2z_{\frac{i+j}{2}} \in S_{\frac{i+j}{2}}$ instead of $z_{i+j} \in S_{i+j}$, and $|S_{\frac{i+j}{2}}|$ instead of $|S_{i+j}|$. This alters Lemma 1 not too significantly.} $$ What makes $0$ unique is that $0+0 = 0$, so that $\#\{(x_0,y_0,z_0) \in S_0^3 : x_0+y_0 = z_0\}$ cannot be upper bounded by the potentially smaller terms $|S_i|$, $i \not = 0$.
Note that the sets whose size we just bounded account for all the terms in the computation of $\Probb[A]$, since, by our choice of $A$, there is no ``wraparound''.\footnote{In the 3AP setting, the lack of wraparound for $x,y \in [-\frac{1}{4}\frac{n}{g},\frac{1}{4}\frac{n}{g}] \pmod{n/g}$ follows from the fact that either $x+y$ is even and then of course $\frac{x+y}{2} \in [-\frac{1}{4}\frac{n}{g},\frac{1}{4}\frac{n}{g}]$, or it's odd and then $\frac{x+y}{2} = (x+y)\frac{n+1}{2} = \frac{x+y-1}{2}+\frac{g-1}{2}\frac{n}{g}+\frac{\frac{n}{g}+1}{2} = \frac{x+y-1}{2}+\frac{\frac{n}{g}+1}{2} \pmod{n/g}$; since $\frac{x+y-1}{2} \in [-\frac{1}{4}\frac{n}{g},\frac{1}{4}\frac{n}{g}]$ we therefore see that $\frac{x+y}{2} \not\in [-\frac{1}{4}\frac{n}{g},\frac{1}{4}\frac{n}{g}] \pmod{n/g}$.} Take $\gamma_0$ so that $\beta > \frac{9}{10}$ (for any $q,\alpha$). $\gamma_0 = 0.949$ works\footnote{In the 3AP setting, we get a larger value for $\gamma_1$, but of course, a value less than $1$.}. Then Lemma 1 applies and we obtain $$\frac{|S_0|}{|A|} \ge \Probb[A] \ge \beta.$$ It should be noted that we already get a contradiction if $g < \beta\nu d$ since we clearly must have $|S_0| \le g$. In any event, we argue that this large a weight at $0$ forces $S$ to be close enough to the subgroup $\{0,\frac{n}{g},\frac{2n}{g},\dots,\frac{(g-1)n}{g}\}$ for us to get a direct upper bound on $\Probb[S]$.
For ease, let $$D = \{x \in S : x \equiv 0 \pmod{n/g}\}$$ $$E = S\setminus D.$$ Then, $$\Probb[S] = \frac{1}{d^2}\sum_{x,y \in S} 1_S(x+y)$$ $$= \frac{|D|^2}{d^2}\frac{1}{|D|^2}\sum_{x,y \in D} 1_S(x+y) + \frac{2}{d^2}\sum_{x \in D, y \in E} 1_S(x+y) + \frac{1}{d^2}\sum_{x,y \in E} 1_S(x+y).$$ Using that $D$ is contained in a subgroup disjoint from $E$, we have the following (in)equalities: $$\sum_{x,y \in D} 1_S(x+y) = \sum_{x,y \in D} 1_D(x+y)$$ $$\sum_{x \in D, y\in E} 1_S(x+y) = \sum_{x \in D, y\in E} 1_E(x+y) = \sum_{y \in E} \sum_{x \in D} 1_{-y+E}(x) \le \sum_{y \in E} |E|$$ $$\sum_{x,y \in E} 1_S(x+y) \le |E|^2. \footnote{In the 3AP setting, we replace $x+y$ with $\frac{x+y}{2}$. If $x,y \in D$, then $\frac{x+y}{2} \in D$. And if $x \in D, y \in E$, then $x+y$ can't be in $2^{-1}D = D$. The three analogous (in)equalities thus hold.} $$ Hence, $$\Probb[S] \le \frac{|D|^2}{d^2}\Probb[D]+\frac{3}{d^2}|E|^2.$$ A cheaper ``approximation'' argument, similar to the one used previously, that does not capitalize on the fact that $D$ is contained in a subgroup disjoint from $E$, would only yield an upper bound for $\Probb[S]$ larger than $1$. Note $\frac{|D|}{d} = \frac{|D|}{|A|}\frac{|A|}{d} \ge \beta\nu$. Let $\eta = \frac{|D|}{d}, k = \frac{n}{g} \in \mathbb{N}, q' = \lfloor \frac{g}{|D|} \rfloor$, and $\alpha' = \frac{g}{|D|}-q'$.
Then by induction and the obvious observation that $\Probb[D]$ is independent of whether the ambient group is $\mathbb{Z}_n$ or $\{0,\frac{n}{g},\dots,(g-1)\frac{n}{g}\}$, $$\Probb[D] \le \max\left(\frac{(q')^2-\alpha' q'+(\alpha')^2}{(q')^2},\frac{(q')^2+2\alpha' q'+4(\alpha')^2-6\alpha'+3}{(q'+1)^2},\gamma_0\right);$$ hence, $$\Probb[S] \le \eta^2 \max\left(\frac{(q')^2-\alpha' q'+(\alpha')^2}{(q')^2},\frac{(q')^2+2\alpha' q'+4(\alpha')^2-6\alpha'+3}{(q'+1)^2},\gamma_0\right) +3(1-\eta)^2.$$ Note that the induction is justified, as $q' = \lfloor \frac{g}{|D|} \rfloor \le \frac{g}{|D|} < q$, since $\frac{g}{|D|} \le \frac{n/2}{\beta \nu d} \le \frac{n/2}{\frac{3}{4}d} = \frac{2}{3}(q+\alpha)$, where we used that $\beta \nu \ge \frac{3}{4}$, which holds for $q \ge 2$. We finish by appealing to Lemma 2, which indeed applies when $\beta\nu \ge \frac{3}{4}$. \vspace{3mm} The above proof readily extends to an arbitrary finite Abelian group. Fix $r \ge 1$ and positive integers $n_1,\dots, n_r$. Let $n = n_1\dots n_r$ and $S$ be a subset of $\mathbb{Z}_{n_1}\times \dots \times \mathbb{Z}_{n_r}$ of size $|S| = \frac{n}{q+\alpha}$. Since $\widehat{1_S}(0,\dots,0) = \frac{|S|}{n}$ and Plancherel holds, there is some $(m_1,\dots,m_r) \not = (0,\dots,0)$ with $$\frac{d}{n}\mu := \frac{d}{n}\frac{\gamma-\frac{d}{n}}{1-\frac{d}{n}} \le \widehat{1_S}(m_1,\dots,m_r) = \frac{1}{n}\sum_{(x_1,\dots,x_r) \in S} e^{2\pi i (\frac{m_1x_1}{n_1}+\dots+\frac{m_rx_r}{n_r})}.$$ Analogous to before, letting $A = \{(x_1,\dots,x_r) \in S : 2\pi(\frac{m_1x_1}{n_1}+\dots+\frac{m_rx_r}{n_r}) \in [\frac{-2\pi}{3},\frac{2\pi}{3}] \pmod{2\pi}\}$, we must have $\frac{|A|}{d} \ge \frac{2\mu+1}{3}$. Let $S_j = \{(x_1,\dots,x_r) \in S : e^{2\pi i (\frac{m_1x_1}{n_1}+\dots+\frac{m_rx_r}{n_r})} = e^{2\pi i \frac{j}{n}}\}$. Then, as before, we must have $\frac{|S_0|}{|A|} \ge \beta$.
But $S_0$ is a subgroup of $\mathbb{Z}_{n_1} \times \dots \times \mathbb{Z}_{n_r}$, so the same inductive argument finishes the job. \qed \vspace{6mm} \section{Proof of Lemmas} \begin{lemma} Fix $d \ge 1$ and $\epsilon \in [0,\frac{1}{10})$. Let $\{a_j\}_{j \in \mathbb{Z}}$ be a collection of non-negative integers such that $\sum_{i \in \mathbb{Z}} a_i = d$ and $a_j = a_{-j}$ for each $j \in \mathbb{Z}$. Then if $$\sum_{i,j} \min(a_ia_j,a_ia_{i+j},a_ja_{i+j}) \ge (1-\epsilon)d^2,$$ we must have that $$a_0 \ge (1-\epsilon)d.$$ \end{lemma} \begin{proof} Define $\supp(a_j) := \supp((a_j)_{j \in \mathbb{Z}}) := \#\{n \ge 1 : a_n \not = 0\}$. We induct on $\supp(a_j)$, with base case $\supp(a_j)=0$ obvious. Let $(a_j)_{j \in \mathbb{Z}}$ have $\supp(a_j) =: N+1$. Let $n+1$ be the largest index $j$ for which $a_j \not = 0$. First assume that $a_{n+1} \le \frac{1}{10}d$. Define $(b_j)_{j \in \mathbb{Z}}$ via $b_j = a_j$ if $|j| \le n$ and $b_j = 0$ if $|j| \ge n+1$. Then $b_j = b_{-j}$ for $j \in \mathbb{Z}$, $\supp(b_j) \le N$, and $\sum_{j \in \mathbb{Z}} b_j = d-2a_{n+1}$. Note that $$A_{n+1} := \sum_{i,j} \min(a_ia_j,a_ia_{i+j},a_ja_{i+j})$$ $$ \le \sum_{i,j} \min(b_ib_j,b_ib_{i+j},b_jb_{i+j}) + 2\sum_{k=1}^n a_ka_{n+1} + 4\sum_{-n \le k \le -1} a_{n+1}a_k + 2a_{n+1}^2+4a_{n+1}^2$$ $$=: A_n + 6a_{n+1}(\frac{d-a_0-2a_{n+1}}{2}) + 6a_{n+1}^2.$$ Here we counted the number of ways $n+1$ or $-(n+1)$ can occur as $i+j$ for $i,j \not = 0$, then the number of ways $n+1$ or $-(n+1)$ can occur as $i$ or $j$ with no $0$ as the other coordinate, and then accounted for the terms $(i,j) = (n+1,-(n+1)),(-(n+1),n+1),$ $(n+1,0),(-(n+1),0),(0,n+1)$, and $(0,-(n+1))$. If $A_{n+1} \ge (1-\epsilon)d^2$, then $$(*) \hspace{10mm} A_n \ge (1-\epsilon)d^2-3a_{n+1}(d-a_0).$$ We first show $3a_0 \ge (1+2\epsilon)d$. 
Bounding $a_0 \ge 0$ in (*) gives $$A_n \ge \frac{(1-\epsilon)d^2-3a_{n+1}d}{(d-2a_{n+1})^2}(d-2a_{n+1})^2.$$ To apply the inductive hypothesis to $(b_j)_{j \in \mathbb{Z}}$ with total weight $d-2a_{n+1}$, we must check that $$1-\frac{(1-\epsilon)d^2-3a_{n+1}d}{(d-2a_{n+1})^2} < \frac{1}{10}.$$ It suffices to show $$1-\frac{(1-\epsilon)d^2-3a_{n+1}d}{(d-2a_{n+1})^2} < \epsilon.$$ Rearranging gives $$a_{n+1} < \frac{1-4\epsilon}{4(1-\epsilon)}d,$$ which is true for $\epsilon < 1/10$ and $a_{n+1} < \frac{d}{10}$. Hence, by induction, $$3a_0 \ge 3\left[\frac{(1-\epsilon)d^2-3a_{n+1}d}{(d-2a_{n+1})^2}\right](d-2a_{n+1}) = 3\frac{(1-\epsilon)d^2-3a_{n+1}d}{(d-2a_{n+1})}.$$ This is larger than $(1+2\epsilon)d$ iff $$a_{n+1} < \frac{2-5\epsilon}{7-4\epsilon}d.$$ This is true for $\epsilon < 1/10$ and $a_{n+1} < d/10$. Now, let $\alpha$ be such that $$(1-\epsilon)d^2-3a_{n+1}(d-2a_{n+1}-a_0)-6a_{n+1}^2 = (1-\alpha)(d-2a_{n+1})^2.$$ Then, assuming $\alpha < \frac{1}{10}$, we can use induction to get that $$a_0 \ge (1-\alpha)(d-2a_{n+1}).$$ So to finish the induction, it suffices to show that $$(1-\alpha)(d-2a_{n+1}) \ge (1-\epsilon)d,$$ which is equivalent to $$\frac{(1-\epsilon)d^2-3a_{n+1}(d-a_0)}{d-2a_{n+1}} \ge (1-\epsilon)d,$$ which, after simplifying, is equivalent to $$3a_0 \ge (1+2\epsilon)d,$$ which we have proven. Therefore, all we need to do is prove $\alpha < \frac{1}{10}$. It suffices to show $\alpha < \epsilon$. But, as we've just noted, $(1-\alpha)(d-2a_{n+1}) \ge (1-\epsilon)d$, so $\alpha \le 1-\frac{(1-\epsilon)d}{d-2a_{n+1}} \le 1-\frac{(1-\epsilon)d}{d} = \epsilon$, as desired. \vspace{3mm} We finish by arguing that we in fact must have $a_{n+1} < \frac{d}{10}$ for $\epsilon < \frac{1}{10}$.
First note $$\sum_{i,j} a_ia_j - \sum_{i,j} \min(a_ia_j,a_ia_{i+j},a_ja_{i+j}) \ge 4\sum_{1 \le k \le n} a_ka_{n+1}+2a_{n+1}^2.$$ Therefore, we have that $$d^2 \ge (1-\epsilon)d^2 + 4a_{n+1}(\frac{d-a_0-2a_{n+1}}{2})+2a_{n+1}^2$$ and hence, $$2a_{n+1}^2-2a_{n+1}(d-a_0)+\epsilon d^2 \ge 0.$$ As one can verify, the proof given above (for $a_{n+1} < \frac{d}{10}$) works regardless of what $a_{n+1}$ is, if $a_0 > (\frac{1+2\epsilon}{3})d$. Therefore, we may assume $a_0 \le (\frac{1+2\epsilon}{3})d$ and get that we must have $$2a_{n+1}^2 - 2a_{n+1}(\frac{2-2\epsilon}{3})d+\epsilon d^2 \ge 0.$$ So, $\frac{a_{n+1}}{d} < \frac{\frac{2-2\epsilon}{3}-\sqrt{(\frac{2-2\epsilon}{3})^2-2\epsilon}}{2}$ or $\frac{a_{n+1}}{d} > \frac{\frac{2-2\epsilon}{3}+\sqrt{(\frac{2-2\epsilon}{3})^2-2\epsilon}}{2}$. However, the first expression in $\epsilon$ is less than $\frac{1}{10}$ for $\epsilon < \frac{1}{10}$, and the second expression is greater than $\frac{1}{2}$ for $\epsilon < \frac{1}{10}$. Since we clearly can't have $a_{n+1} > \frac{d}{2}$, we're done. \end{proof} \vspace{3mm} \begin{remark*} It should be noted that the largest we can possibly take $\epsilon$ in the statement of Lemma 1 is $\epsilon = \frac{2}{9}$. Consider, for example, $a_0,a_{-1},a_1 = \frac{d}{3}$. Extending Lemma 1 from $\epsilon < \frac{1}{10}$ to $\epsilon < \frac{2}{9}$ will just slightly lower the value of $\gamma_0$, and will not allow one to get all the way down to $q \le 3$. \end{remark*} \begin{remark*} In the 3AP setting we may not necessarily have that $a_j = a_{-j}$ for each $j \in \mathbb{Z}$. However, a suitable adjustment of the given proof shows that, for $\epsilon$ small enough, $\sum_{i,j} \min(a_ia_j,a_ia_{\frac{i+j}{2}},a_ja_{\frac{i+j}{2}}) \ge (1-\epsilon)d^2$ implies $a_j \ge (1-\epsilon)d$ for some $j$. We can then just translate $S$ to assume $j=0$. 
\end{remark*} \vspace{3mm} \begin{lemma} For $q\in \mathbb{N}, \alpha \in [0,1]$, define $$F(q,\alpha) = \max\left(\frac{q^2-\alpha q+\alpha^2}{q^2},\frac{q^2+2\alpha q+4\alpha^2-6\alpha+3}{(q+1)^2},\gamma_0 \right).$$ For any $q\ge 2, \alpha \in [0,1], 1 \le k \le q, \eta \in (\frac{3}{4},1]$, if we let $q' = \lfloor \frac{q+\alpha}{k\eta} \rfloor$ and $\alpha' = \frac{q+\alpha}{k\eta} - q'$, then $$\eta^2F(q',\alpha')+3(1-\eta)^2 < F(q,\alpha).$$ \end{lemma} \begin{proof} Fix any $q,k,q' \ge 1$ and $\alpha \in [0,1]$. Substitute $\eta = \frac{q+\alpha}{(q'+\alpha')k}$ and let $$f(\alpha') := \frac{(q+\alpha)^2}{k^2}\frac{1}{(q'+\alpha')^2}F(q',\alpha')+3(1-\frac{q+\alpha}{(q'+\alpha')k})^2.$$ We show that $f(\alpha')$ attains its maximum at (one of) the extreme values of $\alpha'$. Define $$f_1(\alpha') := \frac{(q+\alpha)^2}{k^2}\frac{1}{(q'+\alpha')^2}\frac{(q')^2-\alpha' q'+(\alpha')^2}{(q')^2}+3(1-\frac{q+\alpha}{(q'+\alpha')k})^2$$ $$f_2(\alpha') := \frac{(q+\alpha)^2}{k^2}\frac{1}{(q'+\alpha')^2}\frac{(q')^2+2\alpha' q'+4(\alpha')^2-6\alpha'+3}{(q'+1)^2}+3(1-\frac{q+\alpha}{(q'+\alpha')k})^2.$$ A straightforward computation shows $$f_1'(\alpha') = \frac{q+\alpha}{k^2}\frac{1}{(q'+\alpha')^3} \cdot $$ $$\bigg[ (2\alpha'-q')(\alpha'+q')(q+\alpha)-2((\alpha')^2-2q'\alpha'+(q')^2)(q+\alpha)+6(k(\alpha'+q')-(q+\alpha))\bigg]$$ $$f_2'(\alpha') = \frac{q+\alpha}{k^2}\frac{1}{(q'+\alpha')^3} \cdot$$ $$ \bigg[(\alpha'+q')(8\alpha'+2(q'-3))(q+\alpha)-2(4(\alpha')^2+2(q'-3)\alpha'+(q')^2+3)(q+\alpha)+6(k(\alpha'+q')-(q+\alpha))\bigg]$$ In each $f_j'(\alpha')$, the quadratic term in $\alpha'$ inside the brackets vanishes. Therefore, each bracketed expression is linear in $\alpha'$. In $f_1'(\alpha')$ the coefficient of $\alpha'$ is $q'(q+\alpha)+4q'(q+\alpha)+6k$, which is positive. Similarly, the coefficient of $\alpha'$ in $f_2'(\alpha')$ is $8q'(q+\alpha)+2(q'-3)(q+\alpha)-4(q'-3)(q+\alpha)+6k = (6q'+6)(q+\alpha)+6k$, which is positive.
Hence, $f_1(\alpha'),f_2(\alpha')$ attain their maximum values only at the extreme values of $\alpha'$. Since $f(\alpha') = \max(f_1(\alpha'),f_2(\alpha'))$\footnote{Clearly $\eta^2 \gamma_0+3(1-\eta)^2 \le \gamma_0$ for $\eta \in (\frac{3}{4},1)$, since $\gamma_0 > \frac{3}{7}$. So, we assume $F(q',\alpha') \not = \gamma_0$.}, we see that $f(\alpha')$ attains its maximum at (one of) the extreme values of $\alpha'$. \vspace{3mm} Suppose $\frac{q+\alpha}{(q'+\alpha')k} < 1$ for some $\alpha' \in (0,1)$. Then $\frac{q+\alpha}{(q'+1)k} < 1$. Note $\alpha' = 1 \implies F(q',\alpha') = 1$, and $\eta^2+3(1-\eta)^2$ is increasing for $\eta > \frac{3}{4}$. Since $\eta > \frac{3}{4}$ and since $\eta < 1$, we take $\eta = \frac{q+\alpha}{q+1}$ (since $q'k \in \mathbb{N}$). We obtain $\frac{q^2+2\alpha q+4\alpha^2-6\alpha+3}{(q+1)^2}$, which, of course, is at most $F(q,\alpha)$. \vspace{3mm} If $\frac{q+\alpha}{q'k} < 1$, then we take $\alpha' = 0$ and argue as above. Otherwise, the extreme value of $\alpha'$ is the one making $\eta = 1$, namely $\alpha'_{crit} = \frac{q+\alpha}{k}-q'$. At $\eta = 1$, our desired inequality becomes $F(q',\alpha'_{crit}) \le F(q,\alpha)$. Since $\alpha'_{crit} \in [0,1]$ and $q' \in \mathbb{N}$, we have $q' = \lfloor \frac{q+\alpha}{k} \rfloor, \alpha'_{crit} = \{\frac{q+\alpha}{k}\}$, the fractional part. Therefore, it just suffices to show, generally, that $$q,k \ge 1, \alpha \in [0,1] \implies F(\lfloor \frac{q+\alpha}{k} \rfloor, \{\frac{q+\alpha}{k}\}) \le F(q,\alpha).$$ \vspace{3mm} Clearly, the inequality holds if $F(\lfloor \frac{q+\alpha}{k} \rfloor, \{\frac{q+\alpha}{k}\}) = \gamma_0$. If $q=2$, then either $k=1$ and the inequality is an equality, or $k=2$ and $F(\lfloor \frac{q+\alpha}{k} \rfloor, \{\frac{q+\alpha}{k}\}) = F(1, \frac{\alpha}{2}) = 1-\frac{\alpha}{2}+\frac{\alpha^2}{4}$, while $F(q,\alpha) \ge \frac{4-2\alpha+\alpha^2}{4} = 1-\frac{\alpha}{2}+\frac{\alpha^2}{4}$. So, assume $q \ge 3$.
\vspace{3mm} Note that $\frac{q^2-\alpha q+\alpha^2}{q^2} = 1-\frac{\alpha}{q}+(\frac{\alpha}{q})^2$ is decreasing in $\frac{\alpha}{q}$ if $\frac{\alpha}{q} < \frac{1}{2}$. And for $q \ge 3$, $\frac{\alpha}{q}, \frac{\{\frac{q+\alpha}{k}\}}{\lfloor \frac{q+\alpha}{k}\rfloor} < \frac{1}{2}$. Therefore, to show that $$\frac{\lfloor \frac{q+\alpha}{k} \rfloor^2-\{\frac{q+\alpha}{k}\}\lfloor \frac{q+\alpha}{k}\rfloor+\{\frac{q+\alpha}{k}\}^2}{\lfloor \frac{q+\alpha}{k} \rfloor^2} \le \frac{q^2-\alpha q +\alpha^2}{q^2},$$ it suffices to show $$\frac{\{\frac{q+\alpha}{k}\}}{\lfloor \frac{q+\alpha}{k} \rfloor} \ge \frac{\alpha}{q}.$$ But $q\{\frac{q+\alpha}{k}\} = q(\frac{q+\alpha}{k}-\lfloor \frac{q+\alpha}{k}\rfloor)$, so the inequality reduces to $\frac{q}{k} \ge \lfloor \frac{q+\alpha}{k} \rfloor$, which is true since $\lfloor \frac{q+\alpha}{k}\rfloor = \lfloor \frac{q}{k} \rfloor$, since if $\frac{q}{k} < m \in \mathbb{N}$, then $\frac{q}{k} \le m-\frac{1}{k}$. Next, observe that $$\frac{q^2+2\alpha q+4\alpha^2-6\alpha+3}{(q+1)^2} = \frac{(q+1)^2-(2-2\alpha)(q+1)+(2-2\alpha)^2}{(q+1)^2},$$ so since $\frac{2-2\alpha}{q+1} \le \frac{1}{2}$ for $q \ge 3$, as before it suffices to show that $$\frac{2-2\{\frac{q+\alpha}{k}\}}{\lfloor \frac{q+\alpha}{k}\rfloor +1} \ge \frac{2-2\alpha}{q+1}.$$ However, substituting $\{\frac{q+\alpha}{k}\} = \frac{q+\alpha}{k}-\lfloor \frac{q+\alpha}{k} \rfloor$, collecting terms with $q+\alpha$, and simplifying yields the equivalent $$\lfloor \frac{q+\alpha}{k} \rfloor + 1 \ge \frac{q+1}{k}.$$ And this is clearly true. \end{proof} \vspace{6mm} \section{Verifying the Gan-Loh-Sudakov Conjecture for Cayley Graphs} We verify that our bound implies the bound in the Gan-Loh-Sudakov conjecture when $q \ge 7$. Take a finite Abelian group $G$ and a symmetric subset $S \subseteq G$ not containing $0$. Let $n=|G|$, $S_0 = S\cup\{0\}$, $d= |S|$, $q = \lfloor \frac{n}{|S_0|} \rfloor$, and $\alpha = \frac{n}{|S_0|}-q$. 
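As a sanity check, the floor-type inequality that closed the proof of Lemma 2, $F(\lfloor \frac{q+\alpha}{k} \rfloor, \{\frac{q+\alpha}{k}\}) \le F(q,\alpha)$, can also be confirmed by brute force over small parameters. The Python sketch below uses exact rational arithmetic; the parameter ranges and the step $\frac{1}{8}$ for $\alpha$ are arbitrary choices, and the constant $\gamma_0$ is dropped from both maxima, which is harmless since a pointwise bound between the two-term maxima persists after adjoining $\gamma_0$ to both sides.

```python
from fractions import Fraction
from math import floor

def F2(q, a):
    """max of the two rational branches of F(q, alpha), with gamma_0 omitted."""
    f1 = (q * q - a * q + a * a) / Fraction(q * q)
    f2 = (q * q + 2 * a * q + 4 * a * a - 6 * a + 3) / Fraction((q + 1) ** 2)
    return max(f1, f2)

# brute-force check of F(floor((q+a)/k), {(q+a)/k}) <= F(q, a)
for q in range(2, 16):
    for k in range(1, q + 1):          # 1 <= k <= q guarantees q' >= 1
        for j in range(9):             # alpha runs over multiples of 1/8
            a = Fraction(j, 8)
            t = (q + a) / k
            qp, ap = floor(t), t - floor(t)
            assert F2(qp, ap) <= F2(q, a), (q, k, a)
```

The exact arithmetic matters here: several of the boundary cases (for instance $k=q$, where $\alpha' = \alpha/q$) are exact equalities, which floating point could spuriously violate.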
The benefit of working with $S_0$ is that the graph-theoretic bound takes the simpler form $$|T_{conj}| \le q{d+1 \choose 3}+{r \choose 3} = q{|S_0| \choose 3}+{\alpha|S_0| \choose 3},$$ where $r = n-q|S_0| = \alpha|S_0|$. Note $$\Probb[S_0] = \frac{1}{|S_0|^2}\sum_{x,y \in S_0} 1_{S_0}(x+y) = \frac{1}{|S_0|^2}\left[\sum_{x,y \in S} 1_{S_0}(x+y) + 2\sum_{y \in S} 1_{S_0}(y)+1_{S_0}(0+0)\right].$$ Taking into account that for each $x \in S$ there is exactly one $y \in S$ for which $x+y = 0$, we see $$\Probb[S] = \frac{|S_0|^2}{|S|^2}\left[\Probb[S_0]-\frac{3|S|+1}{|S_0|^2}\right].$$ The number of triangles in our Cayley graph is thus $$\frac{1}{6}n|S|^2\Probb[S] = \frac{1}{6}(q+\alpha)|S_0|^3\left[\Probb[S_0]-\frac{3|S|+1}{|S_0|^2}\right].$$ For ease, let $M = \max\left(\frac{q^2-\alpha q+\alpha^2}{q^2},\frac{q^2+2\alpha q+4\alpha^2-6\alpha+3}{(q+1)^2},\gamma_0\right)$ so that, by Theorem 2 applied to $S_0$ (which is symmetric), we may bound the number of triangles by $$\frac{1}{6}(q+\alpha)M |S_0|^3 - \frac{1}{6}(q+\alpha)|S_0|(3|S_0|-2).$$ As one may check, this is at most $q{|S_0| \choose 3}+{\alpha |S_0| \choose 3}$ iff $$[(q+\alpha^3)-(q+\alpha)M]|S_0|^3+[3\alpha-3\alpha^2]|S_0|^2 \ge 0.$$ Therefore, it suffices to have $M \le \frac{q+\alpha^3}{q+\alpha}$. We have $\gamma_0 \le \frac{q+\alpha^3}{q+\alpha}$ for all $q \ge 7$ and any $\alpha \in [0,1]$. And, for any $q \ge 1, \alpha \in [0,1]$, $$\frac{q+\alpha^3}{q+\alpha} - \frac{q^2-\alpha q+\alpha^2}{q^2} = \frac{\alpha^3(q^2-1)}{q^2(q+\alpha)},$$ $$\frac{q+\alpha^3}{q+\alpha}-\frac{q^2+2\alpha q+4\alpha^2-6\alpha+3}{(q+1)^2} = \frac{(1-\alpha)^2(q-1)((2+\alpha)q+3\alpha)}{(q+1)^2(q+\alpha)}$$ are non-negative. \vspace{3mm} \section{Base Case $q=1$} We finish by proving Theorems 1 and 2 when $|S| = \frac{n}{1+\alpha}$ for some $\alpha \in [0,1]$.
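The bound to be proven here, $\Probb[S] \le 1-\alpha+\alpha^2$, is cheap to spot-check by simulation before giving the argument. The following Python sketch samples random subsets of $\mathbb{Z}_{24}$ in the $q=1$ regime (the group size, seed, and number of trials are all arbitrary choices) and compares exactly:

```python
import random
from fractions import Fraction

def prob_S(S, n):
    """Prob[S] = |S|^{-2} * #{(x, y) in S^2 : x + y in S}, computed exactly."""
    T = set(S)
    hits = sum((x + y) % n in T for x in S for y in S)
    return Fraction(hits, len(S) ** 2)

random.seed(0)
n = 24
for _ in range(300):
    d = random.randint((n + 1) // 2, n)   # q = 1 regime: |S| >= n/2
    S = random.sample(range(n), d)
    alpha = Fraction(n, d) - 1            # |S| = n/(1+alpha), alpha in [0,1]
    assert prob_S(S, n) <= 1 - alpha + alpha ** 2
```

Using `Fraction` keeps both sides exact, so the comparison is free of rounding artifacts even when the bound is attained.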
Note $$\sum_{y \in S}\sum_{x \in G} 1_S(x+y) = \sum_{y \in S} |S| = |S|^2.$$ So, $$\sum_{x,y \in S} 1_S(x+y) = |S|^2-\sum_{x \not \in S} \sum_{y \in S} 1_S(x+y) = |S|^2-\sum_{x \not \in S} |(-x+S)\cap S|.$$ By pigeonhole, $|(-x+S)\cap S| \ge 2|S|-n$, and thus, $$|S|^2\Probb[S] \le |S|^2-\sum_{x \not \in S} (2|S|-n) = |S|^2(1-\alpha+\alpha^2).$$ As $1-\alpha+\alpha^2 = \frac{q^2-\alpha q+\alpha^2}{q^2}$ for $q=1$, Theorem 2 is established. Replacing $S$ with $2S$ in the appropriate places establishes Theorem 1 as well. \vspace{3mm} \section{Acknowledgments} I would like to thank Po-Shen Loh for telling me the graph theoretic conjecture. I would also like to thank Adam Sheffer and Cosmin Pohoata for helpful comments. \vspace{3mm}
https://arxiv.org/abs/1304.1826
Concentration inequalities for non-Lipschitz functions with bounded derivatives of higher order
Building on the inequalities for homogeneous tetrahedral polynomials in independent Gaussian variables due to R. Latała, we provide a concentration inequality for not necessarily Lipschitz functions $f\colon \R^n \to \R$ with bounded derivatives of higher orders, which holds when the underlying measure satisfies a family of Sobolev type inequalities $\|g- \E g\|_p \le C(p)\|\nabla g\|_p$. Such Sobolev type inequalities hold, e.g., if the underlying measure satisfies the log-Sobolev inequality (in which case $C(p) \le C\sqrt{p}$) or the Poincaré inequality (then $C(p) \le Cp$). Our concentration estimates are expressed in terms of tensor-product norms of the derivatives of $f$. When the underlying measure is Gaussian and $f$ is a polynomial (not necessarily tetrahedral or homogeneous), our estimates can be reversed (up to a constant depending only on the degree of the polynomial). We also show that for polynomial functions, analogous estimates hold for arbitrary random vectors with independent sub-Gaussian coordinates. We apply our inequalities to general additive functionals of random vectors (in particular linear eigenvalue statistics of random matrices) and the problem of counting cycles of fixed length in Erdős–Rényi random graphs, obtaining new estimates, optimal in a certain range of parameters.
\section{Introduction} Concentration of measure inequalities are one of the basic tools in modern probability theory (see the monograph \cite{LedouxConcBook}). The prototypic result for all concentration theorems is arguably the Gaussian concentration inequality \cite{BorellGaussianConc,SCGaussianConc}, which asserts that if $G$ is a standard Gaussian vector in $\mathbb{R}^n$ and $f\colon \mathbb{R}^n \to \mathbb{R}$ is a 1-Lipschitz function, then for all $t > 0$, \begin{displaymath} \mathbb P(|f(G) - \mathbb E f(G)|\ge t) \le 2\exp(-t^2/2). \end{displaymath} Over the years the above inequality has found numerous applications in the analysis of Gaussian processes, as well as in asymptotic geometric analysis (e.g. in modern proofs of Dvoretzky type theorems). Its applicability in geometric situations comes from the fact that it is dimension free and all norms in $\mathbb{R}^n$ are Lipschitz with respect to one another. However, there are some probabilistic or combinatorial situations, when one is concerned with functions that are not Lipschitz. The most basic case is the probabilistic analysis of polynomials in independent random variables, which arise naturally, e.g., in the study of multiple stochastic integrals, in discrete harmonic analysis as elements of the Fourier expansions on the discrete cube or in numerous problems of random graph theory, to mention just the famous subgraph counting problem \cite{JaRuInf, JanOleRu,ChatterjeeTr,DeMarcoKahnTr,DeMarcoKahnCl}. The concentration of measure or more generally integrability properties for polynomials have attracted a lot of attention in the last forty years. In particular Bonami \cite{Bon} and Nelson~\cite{N} provided hypercontractive estimates (Khintchine type inequalities) for polynomials on the discrete cube and in the Gauss space, which have been later extended to other random variables by Kwapie\'{n} and Szulga \cite{KwapienSzulga} (see also \cite{KwapienWoyczynski}). 
Khintchine type inequalities have been also obtained in the absence of independence for polynomials under log-concave measures by Bourgain \cite{BourgainConvPolynomials}, Bobkov \cite{BobkovPolynomials}, Nazarov-Sodin-Volberg \cite{NazarovSodinVolberg} and Carbery-Wright \cite{CarberyWright}. Another line of research is to provide two sided estimates of moments of polynomials in terms of deterministic functions of the coefficients. Borell \cite{Bo} and Arcones-Gin\'{e} \cite{AG} provided such two sided bounds for homogeneous polynomials in Gaussian variables. They were expressed in terms of expectations of suprema of certain empirical processes. Talagrand \cite{TalagrandNewConc} and Bousquet-Boucheron-Lugosi-Massart \cite{BLMEntropy, BBLM} obtained counterparts of these results for homogeneous tetrahedral\footnote{A multivariate polynomial is called tetrahedral if all variables appear in it in power at most one.} polynomials in Rademacher variables and {\L}ochowski \cite{Loch} and Adamczak \cite{AdLogSobConv} for random variables with log-concave tails. Inequalities of this type, while implying (up to constants) hypercontractive bounds, have a serious downside as the analysis of the empirical processes involved is in general difficult. It is therefore important to obtain two-sided bounds in terms of purely deterministic quantities. Such bounds for random quadratic forms in independent symmetric random variables with log-concave tails have been obtained by Lata{\l}a \cite{L1} (the case of linear forms was solved earlier by Gluskin and Kwapie\'n in \cite{GK}, whereas bounds for quadratic forms in Gaussian variables were obtained by Hanson-Wright \cite{HansonWright}, Borell \cite{Bo} and Arcones-Gin\'{e} \cite{AG}). Their counterparts for multilinear forms of arbitrary degree in nonnegative random variables with log-concave tails have been derived by Lata{\l}a and {\L}ochowski \cite{LL}. As for the symmetric case, the general problem is still open. 
An important breakthrough has been obtained by Lata{\l}a \cite{L2}, who proved two-sided estimates for Gaussian chaoses of arbitrary order, that is for homogeneous tetrahedral polynomials of arbitrary degree in independent Gaussian variables (we recall his bounds below as they are the starting point for our investigations). For general symmetric random variables with log-concave tails similar bounds are known only for chaoses of order at most three \cite{Chaos3d}. Polynomials in independent random variables have also been investigated in relation to combinatorial problems, e.g. subgraph counting \cite{JaRuInf, JanOleRu,ChatterjeeTr,DeMarcoKahnTr,DeMarcoKahnCl}. The best known result for general polynomials in this area has been obtained by Kim and Vu \cite{KimVuConc,VuConc}, who presented a family of powerful inequalities for $[0,1]$-valued random variables. Over the last decade they have been applied successfully to handle many problems in probabilistic combinatorics. Some recent inequalities for polynomials in so-called subexponential random variables have also been obtained by Schudy and Sviridenko \cite{SchudySviridenko1,SchudySviridenko2}. They are a generalization of the special case of exponential random variables in \cite{LL} and are expressed in terms of quantities similar to those considered by Kim-Vu. Since it is beyond the scope of this paper to give a precise account of all the concentration inequalities for polynomials, we refer the reader to the aforementioned sources and recommend also the monographs \cite{KwapienWoyczynski,dlPg}, where some parts of the theory are presented in a uniform way. As already mentioned we will present in detail only the results from \cite{L2}, which are our main tool as well as motivation.
As for concentration results for general non-Lipschitz functions, the only reference we are aware of, which addresses this question is \cite{GuillinJoulin}, where the Authors obtain interesting inequalities for stationary measures of certain Markov processes and functions satisfying a Lyapunov type condition. Their bounds are not comparable to the ones which we present in this paper. On the one hand they work in a more general Markov process setting, on the other hand, when specialized, e.g., to quadratic forms of Gaussian vectors, they do not recover optimal inequalities given in \cite{Bo,AG,L2} (see Section 4 in \cite{GuillinJoulin}). Since the language of \cite{GuillinJoulin} is very different from ours, we will not describe the inequalities obtained therein and refer the interested reader to the original paper. Let us now proceed to the presentation of our results. To do this we will first formulate a two sided tail and moment inequality for homogeneous tetrahedral polynomials in i.i.d. standard Gaussian variables due to Lata{\l}a \cite{L2}. To present it in a concise way we need to introduce some notation which we will use throughout the article. For a positive integer $n$ we will denote $[n] = \{1,\ldots,n\}$. The cardinality of a set $I$ will be denoted by $\# I$. For ${\bf i} = (i_1,\ldots,i_d) \in [n]^d$ and $I\subseteq[d]$ we write ${\bf i}_{I}=(i_{k})_{k\in I}$. We will also denote $|{\bf i}| = \max_{j\le d} {i_j}$. Consider thus a $d$-indexed matrix $A = (a_{i_1,\ldots,i_d})_{i_1,\ldots,i_d = 1}^n$, such that $a_{i_1,\ldots,i_d} = 0$ whenever $i_j = i_k$ for some $j\neq k$, a sequence $g_1,\ldots,g_n$ of i.i.d. $\mathcal{N}(0,1)$ random variables and define \begin{align}\label{eq:Z_def} Z = \sum_{{\bf i} \in [n]^d} a_{{\bf i}} g_{i_1}\cdots g_{i_d}. \end{align} Without loss of generality we can assume that the matrix $A$ is symmetric, i.e., for all permutations $\sigma\colon [n]\to [n]$, $a_{i_1,\ldots,i_d} = a_{\sigma(i_1),\ldots,\sigma(i_d)}$. 
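As a quick numerical sanity check of the normalization in \eqref{eq:Z_def} in the simplest nontrivial case $d=2$: for a symmetric matrix $A$ with zero diagonal one has $\mathbb{E} Z = 0$ and $\operatorname{Var} Z = 2\sum_{i,j} a_{ij}^2$, which the following Python simulation confirms (dimension, seed, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)            # tetrahedral: vanishing diagonal

# Z = sum_{i != j} a_ij g_i g_j; the i = j terms below vanish since A_ii = 0
G = rng.standard_normal((200_000, n))
Z = np.einsum("ij,ki,kj->k", A, G, G)

var_exact = 2 * (A ** 2).sum()      # Var Z = 2 ||A||_HS^2 for symmetric A, zero diagonal
assert abs(Z.mean()) < 0.1
assert abs(Z.var() / var_exact - 1) < 0.05
```

The variance identity follows from writing $Z = 2\sum_{i<j} a_{ij} g_i g_j$ as a sum of uncorrelated terms, each with variance $4a_{ij}^2$.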
Let now $P_d$ be the set of partitions of $\{1,\ldots,d\}$ into nonempty, pairwise disjoint sets. For a partition $\mathcal{J} =\{J_1,\ldots,J_k\}$, and a $d$-indexed matrix $A = (a_{\bf i})_{{\bf i} \in [n]^d}$ (not necessarily symmetric or with zeros on the diagonal), define \begin{equation}\label{Gaussian_norm_def_intro} \|A\|_{{\cal J}}=\sup\Big\{\sum_{{\bf i}\in [n]^d} a_{{\bf i}}\prod_{l=1}^k x\ub{l}_{\mathbf{i}_{J_l}}\colon \|(x\ub{l}_{\mathbf{i}_{J_l}})\|_2\leq 1, 1\leq l\leq k \Big\}, \end{equation} where $\|(x_{{\bf i}_{J_l}})\|_2 = \sqrt{\sum_{|{\bf i}_{J_l}|\le n} x_{{\bf i}_{J_l}}^2}$. Thus, e.g., \begin{align*} \|(a_{ij})_{i,j\le n}\|_{\{1,2\}}&= \sup\{ \sum_{i,j\le n} a_{ij}x_{ij}\colon \sum_{i,j\le n} x_{ij}^2 \le 1\} = \sqrt{\sum_{i,j\le n}a_{ij}^2} = \|(a_{ij})_{i,j\le n}\|_{\textup{HS}},\\ \|(a_{ij})_{i,j\le n}\|_{\{1\}\{2\}}&= \sup\{ \sum_{i,j\le n} a_{ij}x_iy_j\colon \sum_{i\le n} x_{i}^2\le 1,\sum_{j\le n}y_j^2 \le 1\} = \|(a_{ij})_{i,j\le n}\|_{\ell_2^n\to \ell_2^n},\\ \|(a_{ijk})_{i,j,k\le n}\|_{\{1,2\} \{3\}} &= \sup\{ \sum_{i,j,k\le n} a_{ijk}x_{ij}y_k\colon \sum_{i,j\le n} x_{ij}^2\le 1,\sum_{k\le n}y_k^2 \le 1\}. \end{align*} From the functional analytic perspective the above norms are injective tensor product norms of $A$ seen as a multilinear form on $ (\mathbb{R}^{n})^d$ with the standard Euclidean structure. We are now ready to present the inequalities by Lata{\l}a. Below, as in the whole article, $C_d$ denotes a constant which depends only on $d$. The values of $C_d$ may differ between occurrences.
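For $d=2$ the two partition norms worked out above are available through standard linear algebra routines, which makes the definitions easy to experiment with; a minimal Python sketch (the matrix is random and its size arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

# ||A||_{{1,2}}: supremum over a single HS-normalized test array, attained
# at x = A / ||A||_HS, so it equals the Frobenius (Hilbert-Schmidt) norm
hs = np.linalg.norm(A, "fro")

# ||A||_{{1}{2}}: supremum over pairs of unit vectors, i.e. the operator
# (spectral) norm of A on ell_2^n
op = np.linalg.norm(A, 2)

# every partition norm is dominated by the full Hilbert-Schmidt norm
assert op <= hs
```

The final assertion is a special case of the general fact that $\|A\|_{\mathcal J} \le \|A\|_{\{1,\ldots,d\}}$ for every partition $\mathcal J$, since the supremum defining $\|A\|_{\mathcal J}$ runs over a subset of the HS-normalized test arrays.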
\begin{theorem}\label{thm:Latala_intro} For any $d$-indexed symmetric matrix $A = (a_{{\bf i}})_{{\bf i} \in [n]^d}$ such that $a_{\bf i} = 0$ if $i_j = i_k$ for some $j\neq k$, the random variable $Z$, defined by \eqref{eq:Z_def} satisfies for all $p \ge 2$, \begin{displaymath} C_d^{-1} \sum_{\mathcal{J}\in P_d} p^{\#\mathcal{J}/2} \|A\|_{\mathcal{J}} \le \|Z\|_p \le C_d \sum_{\mathcal{J}\in P_d} p^{\#\mathcal{J}/2} \|A\|_{\mathcal{J}}. \end{displaymath} As a consequence, for all $t > 1$, \begin{displaymath} C_d^{-1}\exp\Big(-C_d\min_{\mathcal{J}\in P_d} \Big(\frac{t}{\|A\|_\mathcal{J}}\Big)^{2/\#\mathcal{J}}\Big) \le \mathbb P(|Z| \ge t) \le C_d\exp\Big(-\frac{1}{C_d}\min_{\mathcal{J}\in P_d} \Big(\frac{t}{\|A\|_\mathcal{J}}\Big)^{2/\#\mathcal{J}}\Big). \end{displaymath} \end{theorem} It is worth noting that for $\#\mathcal{J} > 1$, the norms $\|A\|_{\mathcal{J}}$ are not unconditional in the standard basis (decreasing coefficients of the matrix may not result in decreasing the norm). Moreover, for specific matrices they may not be easy to compute. On the other hand, for any $d$-indexed matrix $A$ and any $\mathcal{J} \in P_d$, we have $\|A\|_\mathcal{J} \le \|A\|_{\{1,\ldots,d\}} = \sqrt{\sum_{{\bf i}} a_{\bf i}^2}$. Using this fact in the upper estimates above allows one to recover (up to constants depending on $d$) hypercontractive estimates for homogeneous tetrahedral polynomials due to Nelson. Our main result is an extension of the upper bound given in the above theorem to more general random functions and measures. Below we present the most basic setting we will work with and state the corresponding theorems. Some additional extensions are deferred to the main body of the article. We will consider a random vector $X$ in $\mathbb{R}^n$, which satisfies the following family of Sobolev inequalities.
For any $p \ge 2$ and any smooth integrable function $f\colon \mathbb{R}^n \to \mathbb{R}$, \begin{align}\label{eq:sobolev_def} \|f(X)-\mathbb E f(X)\|_p \le L\sqrt{p}\Big\||\nabla f(X)|\Big\|_p, \end{align} for some constant $L$ (independent of $p$ and $f$), where $|\cdot|$ is the standard Euclidean norm on $\mathbb{R}^n$. It is known (see \cite{Aida-Stroock-1994} and Theorem \ref{thm:AidaStroock} below) that if $X$ satisfies the logarithmic Sobolev inequality with constant $D_{LS}$, then it satisfies \eqref{eq:sobolev_def} with $L = \sqrt{D_{LS}/2}$. We remark that there are many criteria for a random vector to satisfy the logarithmic Sobolev inequality (see e.g. \cite{LedouxConcBook,BakryEmery,BobkovGoetze,BartheMilmanTransference,KlartagCube}), so in particular our assumption \eqref{eq:sobolev_def} can be verified for many random vectors of interest. Our first result is the following theorem, which provides moment estimates and concentration for $D$-times differentiable functions. The estimates are expressed by $\|\cdot\|_{\mathcal{J}}$ norms of derivatives of the function (which we will identify with multi-indexed matrices). We will denote the $d$-th derivative of $f$ by $\mathbf{D}^d f$. \begin{theorem}\label{thm:main_intro} Assume that a random vector $X$ in $\mathbb{R}^n$ satisfies the inequality \eqref{eq:sobolev_def} with constant $L$. Let $f\colon \mathbb{R}^n \to \mathbb{R}$ be a function of the class $\mathcal{C}^D$. For all $p \ge 2$ if $\mathbf{D}^Df(X) \in L^p$, then \begin{displaymath} \|f(X) - \mathbb E f(X)\|_p \le C_D\Big(L^D\sum_{\mathcal{J} \in P_D} p^{\frac{\#\mathcal{J}}{2}} \Big\|\|\mathbf{D}^Df(X)\|_\mathcal{J}\Big\|_p + \sum_{1\le d\le D-1}L^d\sum_{\mathcal{J}\in P_d} p^{\frac{\#\mathcal{J}}{2}} \|\mathbb E \mathbf{D}^df(X)\|_\mathcal{J}\Big). 
\end{displaymath} In particular, if $\mathbf{D}^Df(x)$ is uniformly bounded on $\mathbb{R}^n$, then setting \begin{displaymath} \eta_f(t) = \min\left(\min_{\mathcal{J}\in P_D}\Big(\frac{t}{L^{D}\sup_{x\in \mathbb{R}^n}\|\mathbf{D}^D f(x)\|_\mathcal{J}}\Big)^{\frac{2}{\#\mathcal{J}}},\min_{1\le d\le D-1}\min_{\mathcal{J}\in P_d} \Big(\frac{t}{L^{d}\|\mathbb E \mathbf{D}^d f(X)\|_\mathcal{J}}\Big)^{\frac{2}{\#\mathcal{J}}}\right) \end{displaymath} we obtain for $t > 0$, \begin{displaymath} \mathbb P(|f(X) - \mathbb E f(X)| \ge t ) \le 2\exp\Big(-\frac{1}{C_D} \eta_f(t)\Big). \end{displaymath} \end{theorem} The above theorem is quite technical, so we will now provide a few comments, comparing it to known results. \paragraph{1.} It is easy to see that if $D = 1$, Theorem \ref{thm:main_intro} reduces (up to absolute constants) to the Gaussian-like concentration inequality, which can be obtained from \eqref{eq:sobolev_def} by Chebyshev's inequality (applied to general $p$ and optimized). \paragraph{2.} If $f$ is a homogeneous tetrahedral polynomial of degree $D$, then the tail and moment estimates of Theorem \ref{thm:main_intro} coincide with those from Lata{\l}a's Theorem. Thus Theorem \ref{thm:main_intro} provides an extension of the upper bound from Lata{\l}a's result to a larger class of measures and functions (however we would like to stress that our proof relies heavily on Lata{\l}a's work). \paragraph{3.} If $f$ is a general polynomial of degree $D$, then $\mathbf{D}^D f(x)$ is constant on $\mathbb{R}^n$ (and thus equal to $\mathbb E \mathbf{D}^D f(X)$). Therefore in this case the function $\eta_f$ appearing in Theorem \ref{thm:main_intro} can be written in a simplified form \begin{align}\label{eq:eta_def_poly} \eta_f(t) = \min_{1\le d\le D}\min_{\mathcal{J}\in P_d} \Big(\frac{t}{L^{d}\|\mathbb E \mathbf{D}^d f(X)\|_\mathcal{J}}\Big)^{2/\#\mathcal{J}}.
\end{align} \paragraph{4.} For polynomials in Gaussian variables, the estimates given in Theorem \ref{thm:main_intro} can be reversed, like in Theorem \ref{thm:Latala_intro}. More precisely we have the following theorem, which provides an extension of Theorem \ref{thm:Latala_intro} to general polynomials. \begin{theorem}\label{thm:Gaussian_intro} If $G$ is a standard Gaussian vector in $\mathbb{R}^n$ and $f\colon \mathbb{R}^n \to \mathbb{R}$ is a polynomial of degree $D$, then for all $p \ge 2$, \begin{displaymath} C_D^{-1} \sum_{1\le d\le D}\sum_{\mathcal{J}\in P_d} p^{\frac{\#\mathcal{J}}{2}} \|\mathbb E \mathbf{D}^df(G)\|_\mathcal{J} \le \|f(G) - \mathbb E f(G)\|_p \le C_D \sum_{1\le d\le D}\sum_{\mathcal{J}\in P_d} p^{\frac{\#\mathcal{J}}{2}} \|\mathbb E \mathbf{D}^df(G)\|_\mathcal{J}. \end{displaymath} Moreover for all $t > 0$, \begin{displaymath} \frac{1}{C_D}\exp\Big(-C_D \eta_f(t)\Big) \le \mathbb P(|f(G) - \mathbb E f(G)| \ge t) \le C_D\exp\Big(-\frac{1}{C_D} \eta_f(t)\Big), \end{displaymath} where \begin{align*} \eta_f(t) = \min_{1\le d\le D}\min_{\mathcal{J}\in P_d} \Big(\frac{t}{\|\mathbb E \mathbf{D}^d f(G)\|_\mathcal{J}}\Big)^{2/\#\mathcal{J}}. \end{align*} \end{theorem} \paragraph{5.} It is well known that concentration of measure for general Lipschitz functions fails e.g. on the discrete cube and one has to impose some additional convexity assumptions to get sub-Gaussian concentration \cite{TalCube}. It turns out that if we restrict to polynomials, estimates in the spirit of Theorems \ref{thm:Latala_intro} and \ref{thm:main_intro} still hold. To formulate our result in full generality recall the definition of the $\psi_2$ Orlicz norm of a random variable $Y$, \begin{displaymath} \|Y\|_{\psi_2} = \inf\Big\{ t > 0\colon \mathbb E \exp\Big(\frac{Y^2}{t^2}\Big) \le 2\Big\}. \end{displaymath} By integration by parts and Chebyshev's inequality $\|Y\|_{\psi_2} < \infty$ is equivalent to a sub-Gaussian tail decay for $Y$. 
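As a concrete illustration of the definition, for a Rademacher variable $Y$ (a symmetric $\pm 1$ variable) one has $\mathbb E \exp(Y^2/t^2) = \exp(1/t^2)$, so $\|Y\|_{\psi_2} = (\log 2)^{-1/2}$. A short Python bisection sketch recovering this value directly from the definition (the initial search bracket is an arbitrary choice):

```python
from math import exp, log, sqrt

def psi2_rademacher():
    """Smallest t with E exp(Y^2/t^2) = exp(1/t^2) <= 2, for Y = +-1."""
    lo, hi = 0.5, 5.0          # lo is infeasible, hi is feasible
    for _ in range(60):
        mid = (lo + hi) / 2
        if exp(1.0 / mid ** 2) <= 2.0:
            hi = mid
        else:
            lo = mid
    return hi

# agrees with the closed form 1/sqrt(log 2) ~ 1.2011
assert abs(psi2_rademacher() - 1 / sqrt(log(2))) < 1e-9
```

The same bisection scheme applies whenever $\mathbb E \exp(Y^2/t^2)$ can be evaluated, since it is monotone decreasing in $t$.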
We have the following result for polynomials in sub-Gaussian random vectors with independent components. \begin{theorem}\label{thm:subgaussian_intro} Let $X = (X_1,\ldots,X_n)$ be a random vector with independent components, such that for all $i \le n$, $\|X_i\|_{\psi_2} \le L$. Then for every polynomial $f\colon \mathbb{R}^n \to \mathbb{R}$ of degree $D$ and every $p \ge 2$, \begin{displaymath} \|f(X) - \mathbb E f(X)\|_p \le C_D \sum_{d=1}^D L^d \sum_{\mathcal{J}\in P_d} p^{\#\mathcal{J}/2}\|\mathbb E \mathbf{D}^d f(X)\|_\mathcal{J}. \end{displaymath} As a consequence, for any $t > 0$, \begin{displaymath} \mathbb P\Big(|f(X) - \mathbb E f(X)| \ge t\Big) \le 2\exp\Big(-\frac{1}{C_D}\eta_f(t)\Big), \end{displaymath} where \begin{align*} \eta_f(t) = \min_{1\le d\le D}\min_{\mathcal{J}\in P_d} \Big(\frac{t}{L^{d}\|\mathbb E \mathbf{D}^d f(X)\|_\mathcal{J}}\Big)^{2/\#\mathcal{J}}. \end{align*} \end{theorem} \paragraph{6.} We postpone the applications of our theorems to subsequent sections of the article and here we announce only that apart from polynomials we apply Theorem \ref{thm:main_intro} to additive functionals and $U$-statistics of random vectors, in particular to linear eigenvalue statistics of random matrices, obtaining bounds which complement known estimates by Guionnet and Zeitouni \cite{GZConc}. Theorem \ref{thm:subgaussian_intro} is applied to the problem of subgraph counting in large random graphs. In the special case when one counts copies of a given small cycle, our result allows us to obtain optimal inequalities for random graphs $G(n,p)$, with $p \to 0$ slowly, namely $p \ge n^{-\frac{k-2}{2(k-1)}} \log^{-\frac12} n$, where $k$ is the length of the cycle. To the best of our knowledge, these are the best currently known inequalities for this range of $p$. \paragraph{7.} Let us now briefly discuss optimality of our inequalities.
The lower bound in Theorem \ref{thm:Gaussian_intro} clearly shows that Theorem \ref{thm:main_intro} is optimal in the class of measures and functions it covers, up to constants depending only on $D$. As for Theorem \ref{thm:subgaussian_intro}, it is similarly optimal in the class of random vectors with independent sub-Gaussian coordinates. In concrete combinatorial applications involving $0$-$1$ random variables this theorem may however be suboptimal. This can be seen already for $D = 1$, for a linear combination of independent Bernoulli variables $X_1,\ldots,X_n$ with $\mathbb P(X_i = 1) = 1 - \mathbb P(X_i=0) = p$. When $p$ becomes small, the tail bound for such variables given e.g. by the Chernoff inequality is more subtle than what can be obtained from general inequalities for sums of sub-Gaussian random variables and the fact that $\|X_i\|_{\psi_2}$ is of order $(\log(2/p))^{-1/2}$. Roughly speaking, this is the reason why our estimates for random graphs come with a restriction on the speed at which $p \to 0$. At the same time our inequalities still give results comparable to what can be obtained from other general inequalities for polynomials. As already noted in the survey \cite{JaRuInf}, bounds obtained from various general inequalities for the subgraph-counting problem may not be directly comparable, i.e. those performing well in one case may perform worse in others. Similarly, our inequalities cannot in general be compared, e.g., to the estimates by Kim and Vu. For this reason, and since it would require introducing new notation, we will not discuss these inequalities here and only indicate, when presenting applications of Theorem \ref{thm:subgaussian_intro}, several situations in which our inequalities perform better or worse than those by Kim and Vu. Let us only mention that the Kim--Vu inequalities, similarly to ours, are expressed in terms of higher order derivatives of the polynomials.
However, Kim and Vu (as well as Schudy and Sviridenko) look at maxima of absolute values of partial derivatives, which does not lead to the tensor-product norms we consider. While in the general sub-Gaussian case such tensor-product norms cannot be avoided (in view of Theorem \ref{thm:Gaussian_intro}), this is not necessarily the case for $0$-$1$ random variables. \bigskip The organization of the paper is as follows. First, in Section \ref{sec:Notation}, we introduce the notation used in the paper; next, in Section \ref{sec:general_nonlipschitz}, we give the proof of Theorem \ref{thm:main_intro} together with some generalizations and examples of applications. In Section \ref{sec:Gaussian} we prove Theorem \ref{thm:Gaussian_intro}, whereas in Section \ref{sec:subgaussian} we present the proof of Theorem \ref{thm:subgaussian_intro} and applications to subgraph counting problems. In Section \ref{sec:Weibull} we provide further refinements of the estimates from Section \ref{sec:general_nonlipschitz} in the case of independent random variables satisfying modified log-Sobolev inequalities (they are deferred to the end of the article as they are more technical than those of Section \ref{sec:general_nonlipschitz}). In the Appendix we collect some additional facts used in the proofs. \paragraph{Acknowledgement} We would like to thank Michel Ledoux and Sandrine Dallaporta for interesting discussions concerning tail estimates for linear eigenvalue statistics of random matrices. \section{Notation}\label{sec:Notation} \paragraph{Sets and indices} For a positive integer $n$ we will denote $[n] = \{1,\ldots,n\}$. The cardinality of a set $I$ will be denoted by $\# I$. For ${\bf i} = (i_1,\ldots,i_d)\in [n]^d$ and $I\subseteq[d]$ we write ${\bf i}_{I}=(i_{k})_{k\in I}$. We will also denote $|{\bf i}| = \max_{j\le d} i_j$.
For a finite set $A$ and an integer $d \ge 0$ we set \[ A\uu{d} = \{{\bf i} = (i_1,\ldots,i_d) \in A^d\colon \forall_{j,k \in \{1,\ldots,d\}} \ j\neq k \Rightarrow i_j\neq i_k\} \] (i.e. $A\uu{d}$ is the set of $d$-indices with pairwise distinct coordinates). Accordingly we will denote $n\uu{d} = n(n-1)\cdots(n-d+1)$. By $P_d$ we will denote the family of partitions of $[d]$ into nonempty, pairwise disjoint sets. For a finite set $I$ by $\ell_2(I)$ we will denote the finite dimensional Euclidean space $\mathbb{R}^I$ endowed with the standard Euclidean norm $|x|_2 = \sqrt{\sum_{i\in I} x_i^2}$. Whenever there is no risk of confusion we will denote the standard Euclidean norm simply by $|\cdot|$. \paragraph{Multi-indexed matrices} For a function $f\colon \mathbb{R}^n \to \mathbb{R}$ by $\mathbf{D}^d f(x)$ we will denote the ($d$-indexed) matrix of its derivatives of order $d$, which we will identify with the corresponding symmetric $d$-linear form. If $M = (M_{\bf i})_{{\bf i}\in [n]^d}$, $N = (N_{\bf i})_{{\bf i}\in [n]^d}$ are $d$-indexed matrices, we define $\langle M,N\rangle =\sum_{{\bf i}\in [n]^d} M_{\bf i} N_{\bf i}$. Thus for all vectors $y_1,\ldots,y_d \in \mathbb{R}^n$ we have $\mathbf{D}^d f(x) (y_1,\ldots,y_d) = \langle \mathbf{D}^d f(x),y_1\otimes\cdots\otimes y_d\rangle$, where $y_1\otimes\cdots\otimes y_d = (y_{i_1}y_{i_2}\cdots y_{i_d})_{{\bf i} \in [n]^d}$. We will also define the Hadamard product $M \circ N$ of two such matrices as the $d$-indexed matrix with entries $(M\circ N)_{\bf i} = M_{\bf i} N_{\bf i}$ (pointwise multiplication of entries). Let us also define the notion of ``generalized diagonals'' of a $d$-indexed matrix $A = (a_{\bf i})_{{\bf i} \in [n]^d}$. For a fixed set $K \subseteq [d]$ with $\#K > 1$, the ``generalized diagonal'' corresponding to $K$ is the set of indices $\{{\bf i} \in [n]^d\colon i_k = i_l\;\textrm{for}\; k,l \in K\}$.
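To illustrate the norms $\|\cdot\|_\mathcal{J}$ appearing in our main results, consider the case $d = 2$, when $P_2$ consists of the two partitions $\{[2]\}$ and $\{\{1\},\{2\}\}$. For a matrix $A = (a_{ij})_{i,j \le n}$,
\begin{displaymath}
\|A\|_{\{1,2\}} = \Big(\sum_{i,j\le n} a_{ij}^2\Big)^{1/2}, \qquad \|A\|_{\{1\}\{2\}} = \sup\Big\{\sum_{i,j\le n} a_{ij}\alpha_i\beta_j \colon |\alpha|\le 1,\, |\beta| \le 1\Big\},
\end{displaymath}
i.e. these are respectively the Hilbert--Schmidt norm and the operator norm of $A$.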
\paragraph{Constants} We will use the letter $C$ to denote absolute constants and $C_a$ for constants depending only on some parameter $a$. In both cases the values of such constants may differ between occurrences. \section{A concentration inequality for non-Lipschitz functions}\label{sec:general_nonlipschitz} In this section we prove Theorem \ref{thm:main_intro}. Let us first state our main tool, which is an inequality by Lata{\l}a in a decoupled version. \begin{theorem}[Lata{\l}a, \cite{L2}]\label{thm_Latala_dec} Let $A = (a_{\bf i})_{{\bf i} \in [n]^d}$ be a $d$-indexed matrix with real entries and let $G_1,G_2,\ldots,G_d$ be i.i.d. standard Gaussian vectors in $\mathbb{R}^n$. Let $Z = \langle A, G_1\otimes\cdots\otimes G_d\rangle$. Then for every $p\ge 2$, \begin{displaymath} C_d^{-1}\sum_{\mathcal{J}\in P_d}p^{\#\mathcal{J}/2}\|A\|_\mathcal{J} \le \|Z\|_p \le C_d\sum_{\mathcal{J}\in P_d}p^{\#\mathcal{J}/2}\|A\|_\mathcal{J}. \end{displaymath} \end{theorem} Thanks to general decoupling inequalities for $U$-statistics \cite{dlPMS1}, which we recall in the Appendix (Theorem \ref{thm:decoupling}), the above theorem is formally equivalent to Theorem \ref{thm:Latala_intro}. In fact, in \cite{L2} Lata{\l}a first proves the above version. In the proof of Theorem \ref{thm:main} we will need just Theorem \ref{thm_Latala_dec} (in particular, in this part of the article we do not need any decoupling inequalities). From now on we will work in a more general setting than in Theorem \ref{thm:main_intro} and assume that $X$ is a random vector in $\mathbb{R}^n$ such that for all $p\ge 2$ there exists a constant $L_X(p)$ such that for all bounded $\mathcal{C}^1$ functions $f \colon \mathbb{R}^n \to \mathbb{R}$, \begin{equation}\label{eq_main_assumption_2} \|f(X) - \mathbb E f(X)\|_p \le L_X(p) \Big\||\nabla f(X)|\Big\|_p.
\end{equation} Clearly in this situation the above inequality generalizes to all $\mathcal{C}^1$ functions (if the right-hand side is finite then the left-hand side is well defined and the inequality holds). Let now $G$ be a standard $n$-dimensional Gaussian vector, independent of $X$. Using the Fubini theorem together with the fact that for some absolute constant $C$, all $x \in \mathbb{R}^n$ and $p \ge 2$, $C^{-1}\sqrt{p}|x| \le \|\langle x, G\rangle\|_p \le C\sqrt{p}|x|$, we can linearise the right-hand side above and write \eqref{eq_main_assumption_2} equivalently (up to absolute constants) as \begin{align}\label{eq:linearization} \|f(X) - \mathbb E f(X)\|_p \le \frac{C L_X(p)}{\sqrt{p}} \Big\|\langle \nabla f(X),G\rangle\Big\|_p. \end{align} We remark that a similar linearisation has been used by Maurey and Pisier to provide a simple proof of the Gaussian concentration inequality \cite{PisierProbabMethods,PisierVolume} (see the remark following Theorem \ref{thm:main} below). Inequality \eqref{eq:linearization} has an advantage over \eqref{eq_main_assumption_2} as it allows for iteration, leading to the following simple proposition. \begin{prop}\label{prop:moment_Poincare} Let $X$ be an $n$-dimensional random vector satisfying \eqref{eq_main_assumption_2} and let $f \colon \mathbb{R}^n \to \mathbb{R}$ be a $\mathcal{C}^D$ function. Let moreover $G_1,\ldots,G_D$ be independent standard Gaussian vectors in $\mathbb{R}^n$, independent of $X$. Then for all $p \ge 2$, if $\mathbf{D}^D f(X) \in L^p$, then \begin{align}\label{eq:moment_estimate} \|f(X) - \mathbb E f(X)\|_p \le& \frac{C^{D}L_X(p)^D}{p^{D/2}}\|\langle \mathbf{D}^D f(X), G_1\otimes\cdots\otimes G_D\rangle\|_p \\ &+ \sum_{1\le d\le D-1} \frac{C^{d}L_X(p)^d}{p^{d/2}}\|\langle \mathbb E_X \mathbf{D}^d f(X), G_1\otimes\cdots\otimes G_d\rangle\|_p\nonumber. \end{align} \end{prop} \begin{proof} We proceed by induction on $D$.
For $D = 1$ the assertion of the proposition coincides with \eqref{eq:linearization}, which (as already noted) is equivalent to \eqref{eq_main_assumption_2}. Let us assume that the proposition holds for $D-1$. Applying \eqref{eq:moment_estimate} with $D-1$ in place of $D$, we obtain \begin{align}\label{eq:main_prop_aux_1} \|f(X) - \mathbb E f(X)\|_p \le& \frac{C^{D-1}L_X(p)^{D-1}}{p^{(D-1)/2}}\|\langle \mathbf{D}^{D-1} f(X),G_1\otimes\cdots\otimes G_{D-1}\rangle\|_p\\ &+ \sum_{d=1}^{D-2} \frac{C^{d}L_X(p)^{d}}{p^{d/2}}\|\langle \mathbb E_X \mathbf{D}^d f(X),G_1\otimes\cdots\otimes G_d\rangle\|_p.\nonumber \end{align} Applying now the triangle inequality in $L^p$, we get \begin{align} \|\langle \mathbf{D}^{D-1} f(X),G_1\otimes\cdots\otimes G_{D-1}\rangle\|_p \le& \|\langle \mathbf{D}^{D-1} f(X) - \mathbb E_X \mathbf{D}^{D-1}f(X),G_1\otimes\cdots\otimes G_{D-1}\rangle\|_p \nonumber\\ &+ \|\langle \mathbb E_X \mathbf{D}^{D-1}f(X),G_1\otimes\cdots\otimes G_{D-1}\rangle\|_p.\label{eq:main_prop_aux_2} \end{align} Let us now apply \eqref{eq:linearization} conditionally on $G_1,\ldots,G_{D-1}$ to the function $f_1(x) = \langle \mathbf{D}^{D-1} f(x),G_1\otimes\cdots\otimes G_{D-1}\rangle$. Since $\langle \mathbf{D}^{D-1} f(X) - \mathbb E_X \mathbf{D}^{D-1}f(X),G_1\otimes\cdots\otimes G_{D-1}\rangle = f_1(X) - \mathbb E_X f_1(X)$ and $\langle \nabla f_1(X),G_D\rangle= \langle \mathbf{D}^D f(X),G_1\otimes\cdots\otimes G_D\rangle$, we obtain \begin{align*} &\mathbb E_X |\langle \mathbf{D}^{D-1} f(X)- \mathbb E_X\mathbf{D}^{D-1} f(X),G_1\otimes\cdots\otimes G_{D-1}\rangle|^p \\ &\le \frac{C^pL_X(p)^p}{p^{p/2}} \mathbb E_{X,G_D} |\langle \mathbf{D}^D f(X),G_1\otimes\cdots\otimes G_D\rangle|^p. \end{align*} To finish the proof it is now enough to integrate this inequality with respect to the remaining Gaussian vectors and combine the obtained estimate with \eqref{eq:main_prop_aux_1} and \eqref{eq:main_prop_aux_2}.
\end{proof} Let us now specialize to the case when $L_X(p) = Lp^\gamma$ for some $L>0,\gamma \ge 1/2$. Combining the above proposition with Lata{\l}a's Theorem \ref{thm_Latala_dec}, we obtain immediately the following theorem, a special case of which is Theorem \ref{thm:main_intro}. \begin{theorem}\label{thm:main} Assume that $X$ is a random vector in $\mathbb{R}^n$, such that for some constants $L>0,\gamma\ge 1/2$, all smooth functions $f$ and all $p \ge 2$, \begin{align}\label{eq:Sobolev_gamma} \|f(X)-\mathbb E f(X)\|_p \le Lp^\gamma\Big\||\nabla f(X)|\Big\|_p. \end{align} Then for any function $f\colon \mathbb{R}^n \to \mathbb{R}$ of class $\mathcal{C}^D$ and any $p \ge 2$, if $\mathbf{D}^D f(X) \in L^p$, then \begin{align*} \|f(X)-\mathbb E f(X)\|_p \le& C_D\Big(\sum_{\mathcal{J}\in P_D} L^Dp^{(\gamma - 1/2)D + \#\mathcal{J}/2} \Big\|\|\mathbf{D}^D f(X)\|_\mathcal{J}\Big\|_p\\ &+\sum_{1\le d\le D-1} \sum_{\mathcal{J}\in P_d} L^d p^{(\gamma-1/2)d + \# \mathcal{J}/2}\|\mathbb E \mathbf{D}^d f(X)\|_{\mathcal{J}}\Big). \end{align*} If $\mathbf{D}^D f$ is bounded uniformly on $\mathbb{R}^n$, then for all $t > 0$, \begin{displaymath} \mathbb P(|f(X)-\mathbb E f(X)| \ge t) \le 2\exp\Big(-\frac{1}{C_D}\eta_f(t)\Big), \end{displaymath} where \begin{align*} \eta_f(t) &= \min(A,B),\\ A &= \min_{\mathcal{J}\in P_D}\Big(\Big(\frac{t}{L^{D}\sup_{x\in \mathbb{R}^n}\|\mathbf{D}^D f(x)\|_\mathcal{J}}\Big)^{2/((2\gamma-1) D+ \#\mathcal{J})}\Big),\\ B &= \min_{1\le d\le D-1} \min_{\mathcal{J}\in P_d} \Big(\Big(\frac{t}{L^{d}\|\mathbb E \mathbf{D}^d f(X)\|_\mathcal{J}}\Big)^{2/((2\gamma -1)d+\#\mathcal{J})}\Big). \end{align*} \end{theorem} \begin{proof} The first part is a straightforward combination of Proposition \ref{prop:moment_Poincare} and Theorem \ref{thm_Latala_dec}.
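Indeed, applying Theorem \ref{thm_Latala_dec} conditionally on $X$ to the matrix $\mathbf{D}^D f(X)$, and to the deterministic matrices $\mathbb E_X \mathbf{D}^d f(X) = \mathbb E \mathbf{D}^d f(X)$ for $d \le D-1$, we get
\begin{displaymath}
\|\langle \mathbf{D}^D f(X), G_1\otimes\cdots\otimes G_D\rangle\|_p \le C_D \sum_{\mathcal{J}\in P_D} p^{\#\mathcal{J}/2}\Big\|\|\mathbf{D}^D f(X)\|_\mathcal{J}\Big\|_p
\end{displaymath}
and $\|\langle \mathbb E \mathbf{D}^d f(X), G_1\otimes\cdots\otimes G_d\rangle\|_p \le C_d \sum_{\mathcal{J}\in P_d} p^{\#\mathcal{J}/2}\|\mathbb E \mathbf{D}^d f(X)\|_\mathcal{J}$, which combined with \eqref{eq:moment_estimate} and $L_X(p) = Lp^\gamma$ gives the asserted moment bound.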
The second part follows from the first one by Chebyshev's inequality $\mathbb P(|Y|\ge e\|Y\|_p) \le \exp(-p)$ applied with $p = \eta_f(t)/C_D$ (note that if $\eta_f(t)/C_D\le 2$ then one can make the tail bound asserted in the theorem trivial by adjusting the constants). \end{proof} \paragraph{Remark} In \cite{PisierProbabMethods,PisierVolume} Pisier presents a stronger inequality than \eqref{eq:Sobolev_gamma} with $\gamma = 1/2$. More specifically, he proves that if $X,G$ are independent standard centred Gaussian vectors in $\mathbb{R}^n$, $E$ is a Banach space and $f \colon \mathbb{R}^n \to E$ is a $\mathcal{C}^1$ function, then for every convex function $\Phi \colon E \to \mathbb{R}$, \begin{align}\label{eq_Pisier} \mathbb E \Phi(f(X) - \mathbb E f(X)) \le \mathbb E \Phi\Big(L \langle \nabla f(X),G\rangle\Big), \end{align} where $L = \frac{\pi}{2}$. As noted in \cite{LedOle}, Caffarelli's contraction principle \cite{Caffarelli_contraction} implies that, e.g., a random vector $X$ with density $e^{-V}$, where $V\colon \mathbb{R}^n \to \mathbb{R}$ satisfies $D^2 V \ge \lambda \mathrm{Id}$ for some $\lambda > 0$, satisfies the above inequality with $L = \frac{\pi}{2\sqrt{\lambda}}$ (where $G$ is still a standard Gaussian vector independent of $X$). Therefore in this situation an approach similar to that in the proof of Proposition \ref{prop:moment_Poincare} can be used for functions $f$ with values in a general Banach space. Moreover, a counterpart of Lata{\l}a's results is known for chaoses with values in a Hilbert space (to the best of our knowledge this observation has not been published; in fact it can be quite easily obtained from the version for real-valued chaoses). Thus in this case we can obtain a counterpart of Theorem \ref{thm:main} (with $\gamma = 1/2$) for Hilbert space-valued functions. In the case of a general Banach space, two-sided estimates for Banach space-valued Gaussian chaoses are not known.
Still, one can use some known inequalities (such as hypercontractivity or the Borell--Arcones--Gin\'{e} inequality) instead of Theorem \ref{thm_Latala_dec} and thus obtain new concentration bounds. We remark that if one uses hypercontractivity, one can obtain explicit dependence of the constants on the degree of the polynomial, since explicit constants are known for hypercontractive estimates of (Banach space-valued) Gaussian chaoses and one can keep track of them during the proof. We skip the details. \bigskip In view of Theorem \ref{thm:main} a natural question arises: for which measures is the inequality \eqref{eq:Sobolev_gamma} satisfied? Before we provide examples, for technical reasons let us recall the definition of the length of the gradient of a locally Lipschitz function. For a metric space $(\mathcal{X},d)$, a locally Lipschitz function $f\colon \mathcal{X} \to \mathbb{R}$ and $x \in \mathcal{X}$, we define \begin{align}\label{eq:length_of_gradient} |\nabla f|(x) = \limsup_{d(x,y)\to 0} \frac{|f(y)-f(x)|}{d(x,y)}. \end{align} If $\mathcal{X} = \mathbb{R}^n$ and $f$ is differentiable at $x$, then clearly $|\nabla f|(x)$ coincides with the Euclidean length of the usual gradient $\nabla f(x)$. For this reason, with slight abuse of notation, we will write $|\nabla f(x)|$ instead of $|\nabla f|(x)$. We will consider only measures on $\mathbb{R}^n$; however, since we allow measures which are not necessarily absolutely continuous with respect to the Lebesgue measure, at some points in the proofs we will work with the above abstract definition. Going back to the question of measures satisfying \eqref{eq:Sobolev_gamma}, it is well known (see e.g.
\cite{Milman_role_iso}) that if $X$ satisfies the Poincar\'e inequality \begin{align}\label{eq:Poincare} {\rm Var\,}(f(X)) \le D_{Poin}\mathbb E|\nabla f(X)|^2 \end{align} for all locally Lipschitz bounded functions, then $X$ satisfies \eqref{eq:Sobolev_gamma} with $\gamma = 1$ and $L = C\sqrt{D_{Poin}}$ (recall that $C$ always denotes a universal constant). Assume now that $X$ satisfies the logarithmic Sobolev inequality \begin{equation}\label{eq_log_Sobolev} \mathrm{Ent} f^2(X) \le D_{LS} \mathbb E |\nabla f(X)|^2 \end{equation} for locally Lipschitz bounded functions, where for a nonnegative random variable $Y$, \begin{displaymath} \mathrm{Ent} Y = \mathbb E Y\log Y - \mathbb E Y\log(\mathbb E Y). \end{displaymath} Then, by the results from \cite{Aida-Stroock-1994}, it follows that $X$ satisfies \eqref{eq:Sobolev_gamma} with $\gamma = 1/2$ and $L = \sqrt{D_{LS}/2}$. We will now generalize this observation to measures satisfying the so-called modified logarithmic Sobolev inequality (introduced in \cite{Gentil-Guillin-Miclo-2005}). We will present it in greater generality than needed for proving \eqref{eq:Sobolev_gamma}, since we will use it later (in Section \ref{sec:Weibull}) to prove refined concentration results for random vectors with independent Weibull coordinates. Let $\beta \in (2,\infty)$. We will say that a random vector $Y \in \mathbb{R}^k$ satisfies a $\beta$-modified logarithmic Sobolev inequality if for every locally Lipschitz bounded positive function $f \colon \mathbb{R}^k \to \mathbb{R}$, \begin{align}\label{eq:modifiedLS} \mathrm{Ent} f^2(Y) \le D_{LS_\beta} \Big(\mathbb E |\nabla f(Y)|^2 + \mathbb E \frac{|\nabla f(Y)|^\beta}{f(Y)^{\beta-2}}\Big). \end{align} Let us also introduce two quantities measuring the length of the gradient in product spaces. Consider a locally Lipschitz function $f \colon \mathbb{R}^{mk} \to \mathbb{R}$, where we identify $\mathbb{R}^{mk}$ with the $m$-fold Cartesian product of $\mathbb{R}^k$.
Let $x = (x_1,\ldots,x_m)$, where $x_i \in \mathbb{R}^k$. For each $i=1,\ldots,m$, let $|\nabla_i f(x)|$ be the length of the gradient of $f$, treated as a function of $x_i$ only, with the other coordinates fixed. Now for $r \ge 1$, set \begin{displaymath} |\nabla f(x)|_r = \Big(\sum_{i=1}^m |\nabla_i f(x)|^r\Big)^{1/r}. \end{displaymath} Note that if $f$ is differentiable at $x$, then $|\nabla f(x)|_2 = |\nabla f(x)|$ (the Euclidean length of the ``true'' gradient), whereas for $k = 1$ (and $f$ differentiable), $|\nabla f(x)|_r$ is the $\ell_r^m$ norm of $\nabla f(x)$. \begin{theorem}\label{thm:AidaStroock} Let $\beta \in [2,\infty)$ and $Y$ be a random vector in $\mathbb{R}^k$, satisfying \eqref{eq:modifiedLS}. Consider a random vector $X = (X_1,\ldots,X_m)$ in $\mathbb{R}^{mk}$, where $X_1,\ldots,X_m$ are independent copies of $Y$. Then for any locally Lipschitz $f \colon \mathbb{R}^{mk} \to \mathbb{R}$ such that $f(X)$ is integrable, and $p \ge 2$, \begin{equation}\label{ineq:sobolev-ineq-from-beta-modified} \|f(X) - \mathbb E f(X)\|_p \le C_\beta D_{LS_\beta}^{1/2} p^{1/2}\Big\||\nabla f(X)|_2\Big\|_p + D_{LS_\beta}^{1/\beta}p^{1/\alpha}\Big\||\nabla f(X)|_\beta\Big\|_p, \end{equation} where $\alpha = \frac{\beta}{\beta-1}$ is the H\"older conjugate of $\beta$. \end{theorem} In particular using the above theorem with $m = 1$ and $k = n$, we obtain the following \begin{cor}If $X$ is a random vector in $\mathbb{R}^n$ which satisfies the $\beta$-modified log-Sobolev inequality \eqref{eq:modifiedLS}, then it satisfies \eqref{eq:Sobolev_gamma} with $\gamma = \frac{\beta-1}{\beta} \ge \frac{1}{2}$ and $L = C_\beta \max(D_{LS_\beta}^{1/2},D_{LS_\beta}^{1/\beta})$. \end{cor} We remark that in the class of logarithmically concave random vectors, the $\beta$-modified log-Sobolev inequality is known to be equivalent to concentration for 1-Lipschitz functions of the form $\mathbb P(|f(X) - \mathbb E f(X)| \ge t) \le 2\exp(-c t^{\beta/(\beta-1)})$ \cite{MilmanProperties}. 
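Let us note how the above corollary follows from Theorem \ref{thm:AidaStroock}: for $m = 1$ there is only one block, so $|\nabla f(x)|_2 = |\nabla f(x)|_\beta = |\nabla f(x)|$, and since $p^{1/2} \le p^{1/\alpha}$ for $p \ge 1$ (as $1/\alpha = (\beta-1)/\beta \ge 1/2$), the right-hand side of \eqref{ineq:sobolev-ineq-from-beta-modified} is at most
\begin{displaymath}
(C_\beta + 1)\max\big(D_{LS_\beta}^{1/2}, D_{LS_\beta}^{1/\beta}\big)\, p^{(\beta-1)/\beta} \Big\||\nabla f(X)|\Big\|_p,
\end{displaymath}
i.e. \eqref{eq:Sobolev_gamma} holds with $\gamma = \frac{\beta-1}{\beta}$.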
\begin{proof}[Proof of Theorem \ref{thm:AidaStroock}] By the tensorization property of entropy (see e.g. \cite{LedouxConcBook}, Proposition 5.6) we get for all positive locally Lipschitz bounded functions $f \colon \mathbb{R}^{mk} \to \mathbb{R}$, \begin{align}\label{eq:modified_LS_tensorized} \mathrm{Ent} f^2(X) \le D_{LS_\beta}\Big(\mathbb E |\nabla f(X)|_2^2 + \sum_{i=1}^m \mathbb E \frac{|\nabla_i f(X)|^\beta}{f(X)^{\beta -2}}\Big). \end{align} Following~\cite{Aida-Stroock-1994}, consider now any locally Lipschitz bounded $f > 0$ and denote $F(t) = \mathbb E f(X)^t$. For $t > 2$, \[ F'(t) = \mathbb E \left( f(X)^t \log f(X) \right) \] and \[ \begin{split} \frac{d}{dt} \left( \mathbb E f(X)^t \right)^{2/t} &= \frac{d}{dt} F(t)^{2/t} = F(t)^{2/t} \cdot \frac{d}{dt} \left( \frac{2}{t} \log F(t) \right) \\[1ex] &= F(t)^{2/t} \left( \frac{2}{t} \frac{F'(t)}{F(t)} - \frac{2}{t^2} \log F(t) \right) = \frac{2}{t^2} F(t)^{\frac{2}{t} - 1} \left( t F'(t) - F(t) \log F(t) \right) \\[1ex] &= \frac{2}{t^2} \left( \mathbb E f(X)^t \right)^{\frac{2}{t} - 1} \left( \mathbb E \left( f(X)^t \log f(X)^t \right) - \left(\mathbb E f(X)^t \right) \log \left(\mathbb E f(X)^t \right) \right). \end{split} \] By \eqref{eq:modified_LS_tensorized} applied to the function $g = f^{t/2} = \varphi \circ f $ where $\varphi(u) = |u|^{t/2}$, \begin{displaymath} \frac{d}{dt} \left( \mathbb E f(X)^t \right)^{2/t} \le \frac{2}{t^2} \left( \mathbb E f(X)^t \right)^{\frac{2}{t} - 1} \cdot D_{LS_\beta} \Big(\mathbb E |\nabla (\varphi \circ f)(X)|_2^2 + \mathbb E |\nabla (\varphi\circ f)(X)|_\beta^\beta f(X)^{t(2-\beta)/2}\Big). 
\end{displaymath} By the chain rule and the H{\"o}lder inequality for the pair of conjugate exponents $t/2, t/(t-2)$, \[ \begin{split} \mathbb E \left|\nabla (\varphi \circ f)(X)\right|_2^2 &= \mathbb E \big( \left|\varphi'(f(X)) \right| \cdot \left| \nabla f(X)\right|_2 \big)^2 \\[1ex] &\le \left( \mathbb E |\nabla f(X)|_2^t \right)^{2/t} \left( \mathbb E \left(\varphi'(f(X))\right)^{2t/(t-2)} \right)^{(t-2)/t} \\[1ex] &= \big\||\nabla f(X)|_2\big\|_t^2 \cdot \left( \frac{t^2}{4} \right) \left(\mathbb E f(X)^t \right)^{1 - \frac{2}{t}}. \end{split} \] Similarly, for $t \ge \beta$, \begin{align*} \mathbb E |\nabla(\varphi\circ f)(X)|_\beta^\beta f(X)^{t(2-\beta)/2} & = \frac{t^\beta}{2^\beta}\mathbb E f(X)^{(t/2-1)\beta}|\nabla f(X)|_\beta^\beta f(X)^{t(2-\beta)/2}\\ & = \frac{t^\beta}{2^\beta} \mathbb E f(X)^{t-\beta} |\nabla f(X)|_\beta^\beta\\ &\le \frac{t^\beta}{2^\beta} (\mathbb E f(X)^t)^{1-\beta/t} (\mathbb E |\nabla f(X)|_\beta^t)^{\beta/t}\\ &= \frac{t^\beta}{2^\beta} (\mathbb E f(X)^t)^{1-\beta/t} \big\| |\nabla f(X)|_\beta\big\|_t^\beta. \end{align*} Thus we get for $\beta \le t \le p$, \begin{displaymath} \frac{d}{dt} \left( \mathbb E f(X)^t \right)^{2/t} \le \frac{D_{LS_\beta}}{2}\big\||\nabla f(X)|_2\big\|_p^2 + \frac{D_{LS_\beta}}{2^{\beta-1}}t^{\beta -2}(\mathbb E f(X)^t)^{(2-\beta)/t}\big\||\nabla f(X)|_\beta\big\|_p^\beta. \end{displaymath} Denote $ a = \frac{D_{LS_\beta}}{2}\big\||\nabla f(X)|_2\big\|_p^2$, $b = \frac{D_{LS_\beta}}{2^{\beta-1}}\big\||\nabla f(X)|_\beta\big\|_p^\beta$, $g(t) = \left( \mathbb E f(X)^t \right)^{2/t}$. The above inequality can be written as \begin{displaymath} g^{\beta/2-1}\frac{d}{dt} g \le g^{\beta/2 - 1} a + t^{\beta - 2} b \end{displaymath} for $t \in [\beta,p]$ or, denoting $G = g^{\beta/2}$, \begin{displaymath} \frac{d}{dt} G \le \frac{\beta}{2}(G^{(\beta -2)/\beta} a + t^{\beta-2}b). 
\end{displaymath} For $\varepsilon > 0$ consider now the function $H_\varepsilon(t) = (g(\beta) + a (t-\beta) + b^{2/\beta} t^{2 - 2/\beta}+\varepsilon)^{\beta/2}$. We have \begin{displaymath} H_\varepsilon(\beta) > G(\beta) \end{displaymath} and \begin{align*} \frac{d}{dt} H_\varepsilon(t) = \frac{\beta}{2} H_\varepsilon(t)^{(\beta-2)/\beta} (a + (2-2/\beta)t^{1-2/\beta}b^{2/\beta}) \ge \frac{\beta}{2}(H_\varepsilon(t)^{(\beta-2)/\beta}a + t^{\beta -2}b), \end{align*} where we used the assumption $\beta \ge 2$. Using the last three inequalities together with the fact that for fixed $t \ge 0$ the function $x \mapsto x^{(\beta-2)/\beta}a + t^{\beta-2}b$ is nondecreasing on $[0,\infty)$, we obtain that $G(t) \le H_\varepsilon(t)$ for all $t \in [\beta,p]$, which by taking $\varepsilon \to 0^+$ implies that for $p \ge \beta$, \begin{displaymath} g(p) = G(p)^{2/\beta} \le H_0(p)^{2/\beta} \le g(\beta) + \frac{D_{LS_\beta}}{2}(p-\beta)\big\||\nabla f(X)|_2\big\|_p^2 + \frac{D_{LS_\beta}^{2/\beta}}{2} p^{2-2/\beta}\big\||\nabla f(X)|_\beta\big\|_p^2, \end{displaymath} i.e., \begin{equation}\label{eq:intermediate_ineq} \|f(X)\|_p^2 \le \|f(X)\|_\beta^2 + \frac{D_{LS_\beta}}{2}(p-\beta)\big\||\nabla f(X)|_2\big\|_p^2 + \frac{D_{LS_\beta}^{2/\beta}}{2} p^{2-2/\beta}\big\||\nabla f(X)|_\beta\big\|_p^2. \end{equation} The above inequality has been proved so far for strictly positive, locally Lipschitz functions (the boundedness assumption can be easily removed by truncation and passage to the limit). For the case of a general locally Lipschitz function $f$, take any $\varepsilon>0$ and consider $\tilde{f} = |f| + \varepsilon$. Since $\tilde{f}$ is strictly positive and locally Lipschitz, the above inequality holds also for $\tilde{f}$. Taking $\varepsilon \to 0^+$, we can now extend \eqref{eq:intermediate_ineq} to arbitrary locally Lipschitz $f$. Finally, assume $f \colon \mathbb{R}^{mk} \to \mathbb{R}$ is locally Lipschitz and $f(X)$ is integrable.
Applying~\eqref{eq:intermediate_ineq} to $f - \mathbb E f(X)$ instead of $f$ and taking the square root, we obtain \begin{displaymath} \|f(X) - \mathbb E f(X)\|_p \le \|f(X) - \mathbb E f(X)\|_\beta + \sqrt{D_{LS_\beta}(p-\beta)}\big\||\nabla f(X)|_2\big\|_p + D_{LS_\beta}^{1/\beta} p^{1/\alpha}\big\||\nabla f(X)|_\beta\big\|_p \end{displaymath} for $p \ge \beta$. For $p \in [2, \beta]$, since \eqref{eq:modifiedLS} implies the Poincar\'e inequality with constant $D_{LS_\beta}/2$ (see Proposition 2.3 in \cite{Gentil-Guillin-Miclo-2005}), we get \[ \|f(X) - \mathbb E f(X)\|_p \le C D_{LS_\beta}^{1/2} p\big\| |\nabla f(X)|_2\big\|_p \] (see the remark following \eqref{eq:Poincare}). These two estimates yield~\eqref{ineq:sobolev-ineq-from-beta-modified} with $C_\beta = C \sqrt{\beta}$. \end{proof} \subsection{Applications of Theorem \ref{thm:main_intro}} Let us now present some applications of the estimates established in the previous section. For simplicity we will restrict to the basic setting presented in Theorem \ref{thm:main_intro}. \subsubsection{Polynomials} A typical application of Theorem \ref{thm:main_intro} is to obtain tail inequalities for multivariate polynomials in the random vector $X$. The constants involved in such estimates do not depend on the dimension, but only on the degree of the polynomial. As already mentioned in the introduction, our results in this setting can be considered a transference of the inequalities by Lata{\l}a from the tetrahedral Gaussian case to the case of not necessarily product random vectors and general polynomials. \subsubsection{Additive functionals and related statistics} We will now consider three related classes of additive statistics of a random vector, which often arise in various problems. \paragraph{Additive functionals} Let $X$ be a random vector in $\mathbb{R}^n$ satisfying \eqref{eq:sobolev_def}.
For a function $f\colon \mathbb{R} \to \mathbb{R}$ define the random variable \begin{align}\label{eq:additive_def} Z_f = f(X_1)+\ldots+f(X_n). \end{align} It is classical and follows from \eqref{eq:sobolev_def} by a simple application of the Chebyshev inequality that if $f$ is smooth with $\|f'\|_\infty \le \alpha$, then for all $t > 0$, \begin{align}\label{eq:additive_D1} \mathbb P\big(|Z_f - \mathbb E Z_f| \ge t\big) \le e^2\exp\Big(-\frac{t^2}{e^2 nL^2\alpha^2}\Big). \end{align} Using Theorem \ref{thm:main_intro} we can easily obtain inequalities which hold if $f$ is a polynomial-like function, i.e., if $\|f^{(D)}\|_\infty < \infty$ for some $D$. Note that the derivatives of the function $F(x_1,\ldots,x_n) = f(x_1)+\ldots+f(x_n)$ have a very simple diagonal form. In consequence, calculating their $\|\cdot\|_\mathcal{J}$ norms is simple. More precisely, we have \begin{displaymath} \mathbf{D}^d F(x) = {\rm diag}_d \Big(f\ub{d}(x_1),\ldots,f\ub{d}(x_n)\Big), \end{displaymath} where ${\rm diag}_d(x_1,\ldots,x_n)$ stands for the $d$-indexed matrix $(a_{{\bf i}})_{{\bf i}\in[n]^d}$ such that $a_{\bf i} = x_i$ if $i_1 = \ldots = i_d = i$ and $0$ otherwise. It is easy to see that if $\mathcal{J} = \{[d]\}$, then $\|{\rm diag}_d(x_1,\ldots,x_n)\|_\mathcal{J} = \sqrt{x_1^2+\ldots+x_n^2}$ and if $\# \mathcal{J} \ge 2$, then $\|{\rm diag}_d(x_1,\ldots,x_n)\|_\mathcal{J} = \max_{i\le n}|x_i|$. Therefore we obtain the following corollary to Theorem \ref{thm:main_intro}. We will apply it in the next section to linear eigenvalue statistics of random matrices. \begin{cor}\label{cor:additive_1} Let $X$ be a random vector in $\mathbb{R}^n$ satisfying \eqref{eq:sobolev_def}, $f \colon \mathbb{R} \to \mathbb{R}$ a $\mathcal{C}^D$ function, such that $\|f^{(D)}\|_\infty < \infty$ and $Z_f$ is defined by \eqref{eq:additive_def}. 
Then for all $t > 0$, \begin{align*} \mathbb P(|Z_f - \mathbb E Z_f| \ge t) &\le 2\exp\Big(-\frac{1}{C_D}\min\Big(\frac{t^2}{L^{2D}n\|f^{(D)}\|_\infty^2},\frac{t^{2/D}}{L^2\|f^{(D)}\|_\infty^{2/D}}\Big)\Big)\\ &+2\exp\Big(-\frac{1}{C_D}\min_{1\le d\le D-1}\Big(\frac{t^2}{L^{2d} \sum_{i=1}^n (\mathbb E f^{(d)}(X_i))^2 }\Big)\Big)\\ &+ 2\exp\Big(-\frac{1}{C_D}\min_{2\le d \le D-1}\Big(\frac{t^{2/d}}{L^2 \max_{i\le n} |\mathbb E f^{(d)}(X_i)|^{2/d}}\Big)\Big). \end{align*} \end{cor} Clearly, the case $D=1$ of the above corollary recovers \eqref{eq:additive_D1} up to constants. Moreover, using Theorem \ref{thm:Gaussian_intro} (to be proved in Section \ref{sec:Gaussian}), one can see that for $f(x) = x^D$ and $X$ being a standard Gaussian vector in $\mathbb{R}^n$, the estimate of the corollary is optimal up to absolute constants (in this case, since $Z_f$ is a sum of independent random variables, one can also use estimates from \cite{HOMS}). \paragraph{Additive functionals of partial sums} Let us now consider a slightly more involved additive functional of the form \begin{align}\label{eq:additive_def_2} S_f = \sum_{i=1}^n f\Big(\sum_{j=1}^i X_j\Big). \end{align} Such random variables arise, e.g., in the study of additive functionals of random walks (see e.g. \cite{SkorohodSlobodenjuk,BorodinIbragimov}). For simplicity we will only discuss what can be obtained directly for Lipschitz functions $f$ and what Theorem \ref{thm:main_intro} gives for $f$ with bounded second derivative. Thus, let $F(x) = \sum_{i=1}^n f(\sum_{j=1}^i x_j)$. We have $\frac{\partial}{\partial x_i} F(x) = \sum_{l\ge i} f'(\sum_{j\le l} x_j)$. Therefore \begin{displaymath} \big\||\nabla F|\big\|_\infty^2 \le \|f'\|_\infty^2 \sum_{i=1}^n (n-i+1)^2 = \frac{1}{6} n(n+1)(2n+1) \|f'\|_\infty^2, \end{displaymath} which, when combined with \eqref{eq:sobolev_def} and Chebyshev's inequality, yields \begin{displaymath} \mathbb P(|S_f - \mathbb E S_f| \ge t) \le 2\exp\Big(-\frac{t^2}{C L^2 n^3 \|f'\|_\infty^2}\Big).
\end{displaymath} Now, let us assume that $f \in \mathcal{C}^2$ and $f''$ is bounded. We have \begin{displaymath} |\mathbb E \nabla F(X)|^2 = \sum_{i=1}^n\bigg(\sum_{l=i}^n \mathbb E f'\Big(\sum_{j=1}^l X_j\Big)\bigg)^2. \end{displaymath} Moreover, \begin{displaymath} \frac{\partial^2}{\partial x_i \partial x_j} F(x_1,\ldots,x_n) = \sum_{l \ge i \vee j}f''\Big(\sum_{k=1}^l x_k\Big) \end{displaymath} and thus \begin{align*} \|\mathbf{D}^2 F(x)\|_{\{1,2\}}^2= \sum_{i,j=1}^n \Big(\sum_{l = i \vee j}^n f''\Big(\sum_{k=1}^l x_k\Big)\Big)^2 \le 2\|f''\|_\infty^2 \sum_{i=1}^n\sum_{j=i}^n (n-j+1)^2 \le Cn^4\|f''\|_\infty^2. \end{align*} Since $\mathbf{D}^2 F$ is a symmetric bilinear form, we have \begin{align*} \|\mathbf{D}^2 F(x)\|_{\{1\}\{2\}} &\le \sup_{|\alpha| \le 1} \sum_{i,j=1}^n \sum_{l = i \vee j}^n\Big|f''\Big(\sum_{k=1}^l x_k\Big)\Big||\alpha_i||\alpha_j|\\ &\le \sup_{|\alpha| \le 1} \|f''\|_\infty \sum_{l=1}^n \big(\sum_{i \le l} |\alpha_i|\big)^2 \le \sup_{|\alpha| \le 1} \|f''\|_\infty \sum_{l=1}^n l\sum_{i\le l} \alpha_i^2 \le Cn^2 \|f''\|_\infty. \end{align*} Using the above estimates and Theorem \ref{thm:main_intro} we obtain \begin{align*} \mathbb P(|S_f - \mathbb E S_f| \ge t) \le 2\exp\Big(-\frac{1}{C L^2}\min\Big(\frac{t^2}{\sum_{i=1}^n \big( \sum_{l=i}^n \mathbb E f'(\sum_{j=1}^l X_j) \big)^2 },\frac{t}{n^2\|f''\|_\infty}\Big)\Big). \end{align*} To effectively bound the sub-Gaussian coefficient in the above inequality one should use some additional information about the structure of the vector $X$. For a given function $f$ it is of order at most $n^5$, but if, e.g., the function $f$ is even and $X$ is symmetric, it clearly vanishes. In this case we get \begin{displaymath} \mathbb P(|S_f - \mathbb E S_f| \ge t) \le 2\exp\Big(-\frac{1}{CL^2}\frac{t}{n^2\|f''\|_\infty}\Big). \end{displaymath} One can check that if, for instance, $X$ is a standard Gaussian vector in $\mathbb{R}^n$ and $f(x) = x^2$, then this estimate is tight up to the value of the constant $C$.
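To see this, write $W_i = \sum_{j\le i} X_j$, so that $S_f = \sum_{i=1}^n W_i^2$. Since $(W_1,\ldots,W_n)$ is a centred Gaussian vector with ${\rm Cov}(W_i,W_l) = \min(i,l)$, the Wick formula gives
\begin{displaymath}
{\rm Var\,}(S_f) = 2\sum_{i,l=1}^n ({\rm Cov}(W_i,W_l))^2 = 2\sum_{i,l=1}^n \min(i,l)^2 \ge c n^4,
\end{displaymath}
so the fluctuations of $S_f$ are indeed of order $n^2$, which matches the above tail bound for $t$ of order $n^2$.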
\paragraph{$U$-statistics} Our last application in this section will concern $U$-statistics (for simplicity of order 2) of the random vector $X$, i.e., random variables of the form \begin{displaymath} U = \sum_{i,j \le n, i\neq j} h_{ij}(X_i,X_j), \end{displaymath} where $h_{ij}\colon \mathbb{R}^2 \to \mathbb{R}$ are smooth functions. Without loss of generality let us assume that $h_{ij}(x,y) = h_{ji}(y,x)$. A simple application of Chebyshev's inequality and \eqref{eq:sobolev_def} gives that if $\mathbf{D} h_{ij}$ are uniformly bounded on $\mathbb{R}^2$ then for all $t > 0$, \begin{align*} \mathbb P(|U - \mathbb E U| \ge t) &\le 2 \exp\Big(-\frac{1}{C L^2}\frac{t^2}{\sup_{x \in \mathbb{R}^n}\sum_{i=1}^n (\sum_{j\neq i} \frac{\partial}{\partial x}h_{ij}(x_i,x_j))^2}\Big) \\ &\le 2\exp\Big(-\frac{1}{C L^2}\frac{t^2}{n^3 \max_{i\neq j}\|\frac{\partial}{\partial x} h_{ij}\|_\infty^2}\Big). \end{align*} For $h_{ij}$ of class $\mathcal{C}^2$ with bounded derivatives of second order, a direct application of Theorem \ref{thm:main_intro} gives \begin{align*} \mathbb P(|U - \mathbb E U| \ge t) \le 2\exp\Big(-\frac{1}{C}\min\Big(\frac{t^2}{L^4 \alpha^2},\frac{t^2}{L^2 \beta^2},\frac{t}{L^2 \gamma}\Big)\Big), \end{align*} where \begin{align*} \alpha^2 & = \sup_{x\in \mathbb{R}^n} \Big\{\sum_{i,j\le n,i\neq j} \Big(\frac{\partial^2}{\partial x\partial y} h_{ij}(x_i,x_j)\Big)^2 + \sum_{i=1}^n\Big(\sum_{j\neq i} \frac{\partial^2}{\partial x^2} h_{ij}(x_i,x_j)\Big)^2\Big\}\\ &\le n^2\max_{i\neq j}\Big\|\frac{\partial^2}{\partial x\partial y} h_{ij}\Big\|_\infty^2 + n^3\max_{i\neq j}\Big\|\frac{\partial^2}{\partial x^2} h_{ij}\Big\|_\infty^2,\\ \beta^2 & = \sum_{i=1}^n \Big(\sum_{j\neq i}\mathbb E \frac{\partial}{\partial x} h_{ij}(X_i,X_j)\Big)^2\le n^3 \max_{i\neq j} |\mathbb E \frac{\partial}{\partial x} h_{ij}(X_i,X_j)|^2,\\ \gamma & = \sup_{x \in \mathbb{R}^n} \sup_{|\alpha|,|\beta| \le 1} \Big\{\sum_{i,j\le n,i\neq j} \frac{\partial^2}{\partial x\partial y} h_{ij}(x_i,x_j)\alpha_i\beta_j +
\sum_{i=1}^n\alpha_i\beta_i \sum_{j\neq i} \frac{\partial^2}{\partial x^2} h_{ij}(x_i,x_j)\Big\}\\ &\le n \Big(\max_{i\neq j}\Big\|\frac{\partial^2}{\partial x\partial y} h_{ij}\Big\|_\infty + \max_{i\neq j} \Big\|\frac{\partial^2}{\partial x^2} h_{ij}\Big\|_\infty\Big). \end{align*} In particular, if $h_{ij} = h$, a function with bounded derivatives of second order, we get $\alpha^2 = \mathcal{O}(n^3)$, $\beta^2 = \mathcal{O}(n^3)$, $\gamma = \mathcal{O}(n)$, which shows that the oscillations of $U$ are of order at most $\mathcal{O}(n^{3/2})$. In the case of $U$-statistics of independent random variables, generated by bounded $h$, this is a well-known fact, corresponding to the CLT and classical Hoeffding inequalities for $U$-statistics. We remark that in the so-called non-degenerate case, i.e., when ${\rm Var\,} (\mathbb E_X h(X,Y)) > 0$, $n^{3/2}$ is indeed the right normalization in the CLT for $U$-statistics (see e.g. \cite{dlPg}). \subsubsection{Linear statistics of eigenvalues of random matrices \label{subsubsec:matrices}} We will now use Corollary \ref{cor:additive_1} to obtain tail inequalities for linear eigenvalue statistics of random Wigner matrices. We remark that the other inequalities considered in the previous section could also be applied in the random matrix setting, yielding in particular estimates on $U$-statistics of eigenvalues (which have been recently investigated by Lytova and Pastur \cite{LP}). We will focus on linear eigenvalue statistics (additive functionals in the language of the previous section) and obtain inequalities involving as a sub-Gaussian term a Sobolev norm of the function $f$ with respect to the semicircle law (the limiting spectral distribution for Wigner ensembles). We refer the reader to the monographs \cite{AGZ,BS,Mehta,PSbook} for basic facts concerning random matrices. Consider thus a real symmetric $n\times n$ random matrix $A$ ($n \ge 2$) and let $\lambda_1\le \ldots\le \lambda_n$ be its eigenvalues.
We will be interested in concentration inequalities for functionals of the form \begin{displaymath} Z = \sum_{i=1}^n f(\lambda_i/\sqrt{n}). \end{displaymath} In \cite{GZConc} Guionnet and Zeitouni obtained concentration inequalities for $Z$ with Lipschitz $f$, assuming that the entries of $A$ are independent and satisfy the log-Sobolev inequality with some constant $L$. More specifically, they prove that for all $t > 0$, \begin{align*} \mathbb P(|Z - \mathbb E Z| \ge t) \le 2\exp\Big(-\frac{t^2}{8L\|f'\|_\infty^2}\Big). \end{align*} (In fact they treat a more general case of banded matrices, but for simplicity we will focus on the basic case.) As a corollary to Theorem \ref{thm:main_intro} we present below an inequality which complements the above result. Our aim is to replace the strong parameter $\|f'\|_\infty$ controlling the sub-Gaussian tail by a weaker Sobolev norm with respect to the semicircular law \begin{displaymath} d\rho(x) = \frac{1}{2\pi} \sqrt{4 - x^2} \Ind{(-2,2)}(x)\,dx. \end{displaymath} (Recall that this is the limiting spectral distribution for Wigner matrices.) Under additional smoothness assumptions on the function $f$, this can be done in a window $|t| \le c_f n$, where $c_f$ depends on $f$. \begin{prop}\label{prop:linear-statistics} Assume the entries of the matrix $A$ are independent (modulo symmetry conditions), mean zero and variance one random variables, satisfying the logarithmic Sobolev inequality~\eqref{eq_log_Sobolev} with constant $L^2$. If $f$ is $\mathcal{C}^2$ with bounded second derivative, then for all $t > 0$, \begin{align}\label{ineq:linear-statistics-prop} \mathbb P(|Z - \mathbb E Z| \ge t) \le 2 \exp \left( - \frac{1}{C_L} \left( \frac{t^2}{\int_{-2}^2 f'^2 \, d\rho + n^{-2/3} \norm{f''}_\infty^2} \land \frac{nt}{\norm{f''}_\infty} \right) \right).
\end{align} \end{prop} \paragraph{Remark} The case $f(x) = x^2$ shows that under the assumptions of Proposition~\ref{prop:linear-statistics} one cannot expect a tail behaviour better than exponential for large $t$. Indeed, since $Z = \frac{1}{n}(\lambda_1^2 + \ldots + \lambda_n^2) = \frac{1}{n}\sum_{i,j \le n} A_{ij}^2$, even if $A$ is a matrix with standard Gaussian entries, for all $t > 0$, $\mathbb P(|Z-\mathbb E Z| \ge t) > \frac{1}{C} \exp( -C (t^2 \land nt))$. \paragraph{Remark} A similar inequality to~\eqref{ineq:linear-statistics-prop} holds in the case of Hermitian matrices with independent entries as well. In the proof given below, one should invoke an appropriate result concerning the speed of convergence of the spectral distribution of Wigner matrices to the semicircular law. \begin{proof} Let us identify the random matrix $A$ with a random vector $\tilde{A} = (A_{ij})_{1\le i\le j\le n}$ having values in $\mathbb{R}^{n(n+1)/2}$ endowed with the standard Euclidean norm $|\tilde{A}| = \left( \sum_{1 \le i \le j \le n} A_{ij}^2\right)^{1/2}$. Note that $\|A\|_{\textup{HS}} \le \sqrt{2} |\tilde{A}|$. By independence of coordinates of $\tilde{A}$ and the tensorization property of the logarithmic Sobolev inequality (see, e.g., \cite[Corollary 5.7]{LedouxConcBook}), $\tilde{A}$ also satisfies~\eqref{eq_log_Sobolev} with constant $L^2$. Furthermore, by the Hoffman-Wielandt inequality (see, e.g., \cite[Lemma 2.1.19]{AGZ}), which asserts that if $B, C$ are two $n \times n$ real symmetric (or Hermitian) matrices and $\lambda_i(B), \lambda_i(C)$ are their respective eigenvalues arranged in nondecreasing order, then \begin{displaymath} \sum_{i=1}^n |\lambda_i(B) - \lambda_i(C)|^2 \le \|B - C\|_{\textup{HS}}^2, \end{displaymath} the map $\tilde{A} \mapsto (\lambda_1/\sqrt{n}, \ldots, \lambda_n/\sqrt{n}) \in \mathbb{R}^n$ is $\sqrt{2/n}$-Lipschitz.
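The Hoffman-Wielandt inequality just quoted also lends itself to a direct numerical check. The following sketch (NumPy, with two arbitrary random symmetric test matrices; it is of course no substitute for the cited proof) verifies the inequality with eigenvalues sorted in nondecreasing order.

```python
import numpy as np

# Numerical check of the Hoffman-Wielandt inequality for two arbitrary
# random real symmetric test matrices:
#   sum_i |lambda_i(B) - lambda_i(C)|^2 <= ||B - C||_HS^2.
rng = np.random.default_rng(0)
n = 50
B = rng.normal(size=(n, n)); B = (B + B.T) / 2
C = rng.normal(size=(n, n)); C = (C + C.T) / 2

lam_B = np.linalg.eigvalsh(B)             # eigenvalues in nondecreasing order
lam_C = np.linalg.eigvalsh(C)
lhs = np.sum((lam_B - lam_C) ** 2)
rhs = np.linalg.norm(B - C) ** 2          # squared Hilbert-Schmidt norm
assert lhs <= rhs + 1e-9
```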
Therefore, the random vector $(\lambda_1/\sqrt{n}, \ldots, \lambda_n/\sqrt{n})$ satisfies~\eqref{eq_log_Sobolev} with constant $2L^2/n$. In consequence, by the results from~\cite{Aida-Stroock-1994} (see also Theorem~\ref{thm:AidaStroock}), $(\lambda_1/\sqrt{n}, \ldots, \lambda_n/\sqrt{n})$ also satisfies~\eqref{eq:sobolev_def} with constant $L/\sqrt{n}$. Applying Corollary~\ref{cor:additive_1} with $D=2$ we obtain \begin{equation}\label{ineq:linear-statistics-d-2} \mathbb P(|Z - \mathbb E Z| \ge t) \le 2\exp \left(-\frac{1}{C L^2} \left( \frac{t^2}{n^{-1}\sum_{i=1}^n (\mathbb E f'(\lambda_i/\sqrt{n}))^2 + L^2 n^{-1} \norm{f''}^2_\infty} \land \frac{n t}{\norm{f''}_\infty}\right) \right). \end{equation} In what follows we shall estimate from above the term $n^{-1}\sum_{i=1}^n (\mathbb E f'(\lambda_i/\sqrt{n}))^2$ from~\eqref{ineq:linear-statistics-d-2}. First, by Jensen's inequality \begin{equation}\label{ineq:rm-sobolev-norm-1} \frac{1}{n}\sum_{i=1}^n (\mathbb E f'(\lambda_i/\sqrt{n}))^2 \le \mathbb E \left(\frac{1}{n}\sum_{i=1}^n f'(\lambda_i/\sqrt{n})^2 \right) = \int_\mathbb{R} (f')^2 d\mu, \end{equation} where $\mu$ is the expected spectral measure of the matrix $n^{-1/2}A$. According to Wigner's theorem, for a fixed $f$, $\mu$ converges to the semicircular law as $n \to \infty$ and thus $\int_\mathbb{R} (f')^2 \, d\mu \to \int_{-2}^2 (f')^2 \, d\rho$. A non-asymptotic bound on the term $\int_\mathbb{R} f'^2 \, d\mu$ can be obtained using the result of Bobkov, G{\"o}tze and Tikhomirov~\cite{Bobkov-Goetze-Tikhomirov-2010} on the speed of convergence of the expected spectral distribution of real Wigner matrices to the semicircular law. Since each entry of $A$ satisfies the logarithmic Sobolev inequality with constant $L^2$, it also satisfies the Poincar{\'e} inequality with the same constant (see e.g.~\cite[Chapter 5]{LedouxConcBook}). 
Therefore, Theorem 1.1 from~\cite{Bobkov-Goetze-Tikhomirov-2010} gives \begin{equation}\label{ineq:bgt-kolmogorov-distance} \sup_{x \in \mathbb{R}} |F_\mu(x) - F_\rho(x)| \le C_L n^{-2/3}, \end{equation} where $F_\mu$ and $F_\rho$ are the distribution functions of $\mu$ and $\rho$, respectively. The decay of $1-F_\mu(x)$ and $F_\mu(x)$ as $x \to \infty$ and $x \to -\infty$ (resp.) can be obtained using the sub-Gaussian concentration of $\lambda_n/\sqrt{n}$ and $\lambda_1/\sqrt{n}$, which is, e.g., a consequence of~\eqref{eq:sobolev_def} for the vector of eigenvalues of $n^{-1/2} A$. For example, for any $t \ge 0$, \begin{align}\label{ineq:decay-of-F-1} \mathbb P\left(\frac{\lambda_n}{\sqrt{n}} \ge \mathbb E \frac{\lambda_n}{\sqrt{n}} + t \right) &\le 2 \exp\left(- \frac{1}{C} \frac{n t^2}{L^2} \right). \end{align} Using the classical technique of $\delta$-nets for estimating the operator norm of a matrix (see e.g.~\cite{PisierVolume}) and the fact that the entries of $A$ are sub-Gaussian (as they satisfy the logarithmic Sobolev inequality), one gets $\mathbb E \lambda_n \le \mathbb E \|A\|_{\text{op}} \le C L \sqrt{n}$, which together with~\eqref{ineq:decay-of-F-1} yields \begin{equation}\label{ineq:decay-of-F} 1 - F_\mu(CL + t) \le \mathbb P\left(\frac{\lambda_n}{\sqrt{n}} \ge CL + t \right) \le 2 \exp\left(- \frac{1}{C} \frac{n t^2}{L^2} \right) \end{equation} for all $t \ge 0$. Clearly, the same inequality holds for $F_\mu(-CL - t)$. Integrating by parts, \begin{equation}\label{eq:bobkov-goetze-tikhomirov-int-by-parts} \int_\mathbb{R} f'^2 \, d\mu = \int_\mathbb{R} f'^2 \, d\rho + \int_\mathbb{R} \left( f'(x)^2 \right)' (F_\rho(x) - F_\mu(x)) \, dx.
\end{equation} Combining the uniform estimate~\eqref{ineq:bgt-kolmogorov-distance} with~\eqref{ineq:decay-of-F} and using an elementary inequality $2 x y \le x^2 + y^2$, we estimate the last integral in~\eqref{eq:bobkov-goetze-tikhomirov-int-by-parts} as follows: \begin{multline}\label{ineq:bgt-error-term} \left| \int_\mathbb{R} \left(f'(x)^2 \right)' (F_\mu(x) - F_\rho(x)) \, dx \right| \\[1ex] \le \int_\mathbb{R} \left| 2 f'(x) f''(x) \right| \left( \norm{F_\mu - F_\rho}_\infty \land 2\exp\left(-\frac{n}{C} \frac{ \text{dist}(x, [-CL, CL])^2}{L^2} \right) \right) \, dx \\[1ex] \le \int_\mathbb{R} f'(x)^2 \, d\nu(x) + \nu(\mathbb{R}) \norm{f''}_\infty^2, \end{multline} where \[ d\nu(x) = \left( C_L n^{-2/3} \land 2\exp\left(-\frac{\text{dist}(x, [-CL, CL])^2}{2 \sigma^2}\right) \right) dx, \qquad \text{and} \qquad \sigma^2 = \frac{C L^2}{2n}. \] We proceed to estimate the last two terms from~\eqref{ineq:bgt-error-term}. Take $r > 0$ such that \begin{align}\label{eq:on-r-1} 2 e^{-r^2/(2\sigma^2)} = C_L n^{-2/3} \end{align} or put $r=0$ if no such $r$ exists. Note that if we assume $C_L \ge 1$, as we obviously can, then \begin{align}\label{eq:on-r-2} r \le C L n^{-1/2} \sqrt{\log n}. \end{align} We shall need the following estimates, which are easy consequences of the standard estimate for a Gaussian tail: \begin{align}\label{ineq:gaussian-tail-1} \int_r^\infty e^{-y^2/(2\sigma^2)} \, dy \le C \sigma e^{-r^2/(2\sigma^2)} \le C_L \sigma n^{-2/3} \le C_L n^{-7/6}, \end{align} and \begin{equation}\label{ineq:gaussian-tail-2} \begin{split} \int_r^\infty y^2 e^{-y^2/(2\sigma^2)} \, dy &\le \left( \int_0^\infty y^4 e^{-y^2/(2\sigma^2)} \, dy \right)^{1/2} \left( \int_r^\infty e^{-y^2/(2\sigma^2)} \, dy \right)^{1/2} \\[1ex] &\le C_L \sigma^{5/2} (\sigma n^{-2/3})^{1/2} \le C_L n^{-11/6}.
\end{split} \end{equation} Now, \eqref{eq:on-r-1}, \eqref{eq:on-r-2} and \eqref{ineq:gaussian-tail-1} yield \begin{align}\label{ineq:nu-R} \nu(\mathbb{R}) \le (CL + r) C_L n^{-2/3} + 4\int_r^\infty e^{-y^2/(2\sigma^2)} \, dy \le C_L n^{-2/3}. \end{align} We shall also need the estimate for $\int_\mathbb{R} x^2 \, d\nu(x)$ which follows from~\eqref{eq:on-r-1}, \eqref{eq:on-r-2} and~\eqref{ineq:gaussian-tail-2}: \begin{align}\label{ineq:nu-x2} \int_\mathbb{R} x^2 \, d\nu(x) = \frac23 (CL+r)^3 C_L n^{-2/3} + 4 \int_r^\infty (CL+y)^2 e^{-y^2/(2\sigma^2)} \, dy \le C_L n^{-2/3}. \end{align} In order to estimate $\int_\mathbb{R} f'^2 \, d\nu$, take any $x_0 \in [-2,2]$ such that $|f'(x_0)|^2 \le \int_{-2}^2 f'^2 \, d\rho$, and use $|f'(x)| \le |f'(x_0)| + |x-x_0| \norm{f''}_\infty$ to obtain \begin{align*} \int_\mathbb{R} f'(x)^2 \, d\nu(x) &\le 2 \Big( \int_{-2}^2 f'^2 \, d\rho \Big) \nu(\mathbb{R}) + 2\norm{f''}^2_\infty \int_\mathbb{R} |x-x_0|^2 \, d\nu(x) \\[1ex] &\le 2 \Big( \int_{-2}^2 f'^2 \, d\rho \Big) \nu(\mathbb{R}) + 4 \norm{f''}^2_\infty x_0^2 \nu(\mathbb{R}) + 4\norm{f''}^2_\infty \int_\mathbb{R} x^2 \, d\nu(x). \end{align*} Plugging \eqref{ineq:nu-R} and \eqref{ineq:nu-x2} into the above yields \begin{align}\label{ineq:nu-f2} \int_\mathbb{R} f'(x)^2 \, d\nu(x) \le C_L n^{-2/3} \left( \int_{-2}^2 f'^2 \, d\rho + \norm{f''}_\infty^2 \right). \end{align} In turn, plugging~\eqref{ineq:nu-R} and~\eqref{ineq:nu-f2} into~\eqref{ineq:bgt-error-term} and then combining with~\eqref{eq:bobkov-goetze-tikhomirov-int-by-parts} we finally get \[ \int_\mathbb{R} f'^2 \, d\mu \le (1 + C_L n^{-2/3}) \int_\mathbb{R} f'^2 \, d\rho + C_L n^{-2/3} \norm{f''}_\infty^2 \] which, combined with~\eqref{ineq:linear-statistics-d-2} and \eqref{ineq:rm-sobolev-norm-1}, completes the proof.
\end{proof} \paragraph{Remark} With some more work (using truncations or working directly on moments) one can extend the above proposition to the case when $|f''(x)| \le a(1+|x|^k)$ for some non-negative integer $k$ and $a \in \mathbb{R}$. In this case we obtain \[ \mathbb P\big(|Z-\mathbb E Z| \ge t\big) \le 2 \exp\left(-\left(\frac{t^2}{C_L \int_{-2}^2 f'^2 \,d\rho + C_{L,k} n^{-2/3} a^2} \land \frac{n}{C_{L,k}} \left(\frac{t}{a}\right)^{\frac{2}{k+2}} \right) \right). \] We also remark that to obtain the inequality \eqref{ineq:linear-statistics-d-2} one does not have to use independence of the entries of $A$; it is enough to assume that the vector $\tilde{A}$ satisfies the inequality \eqref{eq:sobolev_def}. \section{Two-sided estimates of moments for Gaussian polynomials}\label{sec:Gaussian} We will now prove Theorem \ref{thm:Gaussian_intro}, showing that in the case of general polynomials in Gaussian variables, the estimates of Theorem \ref{thm:main_intro} are optimal (up to constants depending only on the degree of the polynomial). In the special case of tetrahedral polynomials this follows from Lata{\l}a's Theorem \ref{thm:Latala_intro} and the following result by Kwapie\'n. \begin{theorem}[Kwapie\'n, \cite{KwaDec}]\label{thm_Kwapien} If $X = (X_1,\ldots,X_n)$ where $X_i$ are independent symmetric random variables, $Q$ is a multivariate tetrahedral polynomial of degree $D$ with coefficients in a Banach space $E$ and $Q_d$ is its homogeneous part of degree $d$, then for any symmetric convex function $\Phi \colon E \to \mathbb{R}_+$ and any $d \in \{0,1, \ldots, D\}$, \begin{displaymath} \mathbb E\Phi(Q_d(X)) \le \mathbb E\Phi(C_d Q(X)).
\end{displaymath} \end{theorem} Indeed, when combined with Theorem \ref{thm:Latala_intro} and the triangle inequality, the above theorem gives the following \begin{cor} \label{cor_tetrahedral} Let \begin{displaymath} Z = \sum_{0\le d \le D} \sum_{{\bf i} \in [n]^d} a\ub{d}_{\bf i} g_{i_1}\cdots g_{i_d}, \end{displaymath} where $A_d = (a\ub{d}_{\bf i})_{{\bf i}\in [n]^d}$ is a $d$-indexed symmetric matrix of real numbers such that $a\ub{d}_{\bf i} = 0$ if $i_k = i_l$ for some $k\neq l$ (we adopt the convention that for $d=0$ we have a single number $a\ub{0}_\emptyset$). Then for any $p\ge 2$, \begin{displaymath} C_D^{-1}\sum_{0\le d\le D} \sum_{\mathcal{J} \in P_d} p^{\#\mathcal{J}/2} \|A_d\|_\mathcal{J} \le \|Z\|_p \le C_D \sum_{0\le d\le D} \sum_{\mathcal{J} \in P_d} p^{\#\mathcal{J}/2} \|A_d\|_\mathcal{J}. \end{displaymath} \end{cor} The strategy of proof of Theorem \ref{thm:Gaussian_intro} is very simple and relies on infinite divisibility of Gaussian random vectors, which will help us approximate the law of a general polynomial in Gaussian variables by the law of a tetrahedral polynomial, for which we will use Corollary \ref{cor_tetrahedral}. \medskip It will be convenient to have the polynomial $f$ represented as a combination of multivariate Hermite polynomials: \begin{equation}\label{eq:f-as-Hermite} f(x_1, \ldots, x_n) = \sum_{d=0}^D \sum_{{\bf d} \in \Delta_d^n} a_{\bf d} h_{d_1}(x_1) \cdots h_{d_n}(x_n), \end{equation} where \[ \Delta_d^n = \{ {\bf d} = (d_1, \ldots, d_n) \colon \forall_{k \in [n]}\ d_k \ge 0 \text{ and } d_1 + \cdots + d_n = d \} \] and $h_d(x) = (-1)^d e^{x^2/2} \frac{d^d}{dx^d} e^{-x^2/2}$ is the $d$-th Hermite polynomial. Let $(W_t)_{t \in [0,1]}$ be a standard Brownian motion. Consider standard Gaussian random variables $g = W_1$ and, for any positive integer $N$, \[ g_{j,N} = \sqrt{N} (W_{\frac{j}{N}} - W_{\frac{j-1}{N}}), \quad j = 1, \ldots, N.
\] For any $d \ge 0$, we have the following representation of $h_d(g) = h_d(W_1)$ as a multiple stochastic integral (see~\cite[Example 7.12 and Theorem 3.21]{JansonGHS}), \[ h_d(g) = d! \int_0^1 \! \int_0^{t_d} \! \cdots \! \int_0^{t_2} \, dW_{t_1} \cdots dW_{t_{d-1}} dW_{t_d}. \] Approximating the multiple stochastic integral leads to \begin{equation}\label{eq:Hermite-as-tetrahedral-polynomial} \begin{split} h_d(g) &= d! \lim_{N \to \infty} N^{-d/2} \sum_{1 \le j_1 < \cdots < j_d \le N} g_{j_1, N} \cdots g_{j_d, N} \\[1ex] &= \lim_{N \to \infty} N^{-d/2} \sum_{{\bf j} \in [N]\uu{d}} g_{j_1, N} \cdots g_{j_d, N}, \end{split} \end{equation} where the limit is in $L^2(\Omega)$ (see \cite[Theorem 7.3 and formula (7.9)]{JansonGHS}) and actually the convergence holds in any $L^p$ (see~\cite[Theorem 3.50]{JansonGHS}). We remark that instead of multiple stochastic integrals with respect to the Wiener process we could use the CLT for canonical $U$-statistics (see \cite[Chapter 4.2]{dlPg}); however, the stochastic integral framework seems more convenient as it allows us to put all the auxiliary variables on the same probability space. Now, consider $n$ independent copies $(W_t\ub{i})_{t \in [0,1]}$ of the Brownian motion ($i=1, \ldots, n$) together with the corresponding Gaussian random variables: $g\ub{i} = W_1\ub{i}$ and, for $N \ge 1$, \[ g_{j, N}\ub{i} = \sqrt{N} (W_{\frac{j}{N}}\ub{i} - W_{\frac{j-1}{N}}\ub{i}), \quad j = 1, \ldots, N. \] In the lemma below we state the representation of a multivariate Hermite polynomial in the variables $g\ub{1}, \ldots, g\ub{n}$ as a limit of tetrahedral polynomials in the variables $g_{j, N}\ub{i}$. To this end we introduce some more notation. Let \[ G\ub{n, N} = (g_{1,N}\ub{1}, \ldots, g_{N,N}\ub{1}, \ g_{1,N}\ub{2}, \ldots, g_{N,N}\ub{2}, \ \ldots,\ g_{1,N}\ub{n}, \ldots, g_{N,N}\ub{n}) = (g_{j,N}\ub{i})_{(i,j)\in [n]\times [N]} \] be a Gaussian vector with $n \times N$ coordinates.
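For $d = 2$, the approximation \eqref{eq:Hermite-as-tetrahedral-polynomial} reduces to the elementary identity $N^{-1}\sum_{j_1 \neq j_2} g_{j_1,N} g_{j_2,N} = W_1^2 - \sum_{j} (W_{j/N} - W_{(j-1)/N})^2$, which converges to $h_2(W_1) = W_1^2 - 1$ as the quadratic variation concentrates around $1$. The following sketch (NumPy; the sample size and seed are arbitrary) checks this on a simulated Brownian path.

```python
import numpy as np

# Simulation of the d = 2 case of the tetrahedral approximation of h_d(g):
#   N^{-1} * sum_{j_1 != j_2} g_{j_1,N} g_{j_2,N}  ->  h_2(W_1) = W_1^2 - 1.
rng = np.random.default_rng(2)
N = 200_000
dW = rng.normal(0.0, np.sqrt(1.0 / N), size=N)   # Brownian increments on [0, 1]
g = np.sqrt(N) * dW                              # the variables g_{j,N}
W1 = dW.sum()                                    # W_1, a standard Gaussian

tetra = (g.sum() ** 2 - np.sum(g ** 2)) / N      # off-diagonal double sum / N
h2 = W1 ** 2 - 1.0                               # h_2 evaluated at W_1
# The error equals |1 - sum_j dW_j^2|, which is of order N^{-1/2}
assert abs(tetra - h2) < 0.05
```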
We identify here the set $[nN]$ with $[n]\times [N]$ via the bijection $(i,j) \leftrightarrow (i-1)N+j$. We will also identify the sets $([n]\times [N])^d$ and $[n]^d\times [N]^d$ in a natural way. For $d \ge 0$ and ${\bf d} \in \Delta_d^n$, let \[ I_{{\bf d}} = \big\{ {\bf i} \in [n]^d \colon \forall_{l \in [n]} \, \# {\bf i}^{-1}(\{l\}) = d_l \big\}, \] and define a $d$-indexed matrix $B_{{\bf d}}\ub{N}$ of $n^d$ blocks each of size $N^d$ as follows: for ${\bf i} \in [n]^d$ and ${\bf j} \in [N]^d$, \[ \big(B_{{\bf d}}\ub{N}\big)_{({\bf i}, {\bf j})} = \begin{cases} \frac{d_1! \cdots d_n!}{d!} N^{-d/2} & \text{if ${\bf i} \in I_{{\bf d}}$ and $({\bf i}, {\bf j}) := \big((i_1, j_1), \ldots, (i_d, j_d)\big) \in ([n] \times [N])\uu{d},$} \\[1ex] 0 & \text{otherwise.} \end{cases} \] \begin{lemma}\label{lemma:multivariate-Hermite-as-tetrahedral-polynomial} With the above notation, for any $p > 0$, \[ \big\langle B_{{\bf d}}\ub{N}, (G\ub{n,N})^{\otimes d} \big\rangle \xrightarrow[N \to \infty]{} h_{d_1}(g\ub{1}) \cdots h_{d_n}(g\ub{n}) \quad \text{in $L^p(\Omega)$}. \] \end{lemma} \begin{proof} Using~\eqref{eq:Hermite-as-tetrahedral-polynomial} for each $h_{d_i}(g\ub{i})$, \begin{multline*} h_{d_1}(g\ub{1}) \cdots h_{d_n}(g\ub{n}) \\ = \lim_{N \to \infty} N^{-d/2} \sum_{\substack{(j_1^{(1)}, \ldots, j_{d_1}^{(1)}) \in [N]^{\underline{d_1}} \\ \vdots \\ (j_1^{(n)}, \ldots, j_{d_n}^{(n)}) \in [N]^{\underline{d_n}} }} \big( g_{j_1\ub{1},N}\ub{1} \cdots g_{j_{d_1}\ub{1},N}\ub{1} \big) \cdots \big( g_{j_1\ub{n},N}\ub{n} \cdots g_{j_{d_n}\ub{n},N}\ub{n} \big). \end{multline*} For each $N$, the right-hand side equals \begin{displaymath} \frac{1}{\# I_{{\bf d}}} N^{-d/2} \sum_{{\bf i} \in I_{{\bf d}}} \sum_{\substack{{\bf j} \in [N]^d \text{ s.t.} \\ ({\bf i}, {\bf j}) \in ([n]\times[N])\uu{d}}} g_{j_1, N}\ub{i_1} \cdots g_{j_d, N}\ub{i_d} = \big\langle B_{{\bf d}}\ub{N}, (G\ub{n,N})^{\otimes d} \big\rangle, \end{displaymath} since $\# I_{{\bf d}} = \frac{d!}{d_1!
\cdots d_n!}$. \end{proof} Note that $B_{\bf d}\ub{N}$ is symmetric, i.e., for any ${\bf i} \in [n]^d$, ${\bf j} \in [N]^d$, if $\pi \colon [d] \to [d]$ is a permutation and ${\bf i}' \in [n]^d$, ${\bf j}' \in [N]^d$ are such that $\forall_{k \in [d]} \; i'_k = i_{\pi(k)}$ and $j'_k = j_{\pi(k)}$, then \[ \big( B_{\bf d}\ub{N} \big)_{({\bf i}',{\bf j}')} = \big( B_{\bf d}\ub{N} \big)_{({\bf i},{\bf j})}. \] Moreover, $B_{\bf d}\ub{N}$ has zeros on ``generalized diagonals'', i.e., $\big( B_{\bf d}\ub{N} \big)_{({\bf i},{\bf j})} = 0$ if $(i_k, j_k) = (i_l, j_l)$ for some $k \neq l$. \begin{proof}[Proof of Theorem \ref{thm:Gaussian_intro}]Let us first note that it is enough to prove the moment estimates; the tail bound follows from them by the Paley-Zygmund inequality (see e.g. the proof of Corollary 1 in \cite{L2}). Moreover, the upper bound on moments follows directly from Theorem~\ref{thm:main_intro}. For the lower bound we use Lemma~\ref{lemma:multivariate-Hermite-as-tetrahedral-polynomial} to approximate the $L^p$ norm of $f(G)-\mathbb E f(G)$ with that of a tetrahedral polynomial, for which we can use the lower bound from Corollary~\ref{cor_tetrahedral}. Assuming $f$ is of the form~\eqref{eq:f-as-Hermite}, Lemma~\ref{lemma:multivariate-Hermite-as-tetrahedral-polynomial} together with the triangle inequality implies \[ \lim_{N \to \infty} \Big\|\sum_{d=1}^D \Big\langle \sum_{{\bf d} \in \Delta_d^n} a_{\bf d} B_{\bf d}\ub{N}, \big(G\ub{n,N}\big)^{\otimes d} \Big\rangle \Big\|_p = \big\|f(G) - \mathbb E f(G)\big\|_p \] for any $p > 0$, where $G = (g\ub{1}, \ldots, g\ub{n} )$. It therefore remains to relate $\big\|\sum_{{\bf d}} a_{\bf d} B_{\bf d}\ub{N}\big\|_{\mathcal{J}}$ with $\norm{\mathbb E \mathbf{D}^d f(G)}_{\mathcal{J}}$ for any $d \ge 1$ and $\mathcal{J} \in P_d$.
In fact we shall prove that \begin{equation}\label{eq:description-via-diff} \lim_{N \to \infty} \Big\|\sum_{{\bf d} \in \Delta_d^n} a_{\bf d} B_{\bf d}\ub{N}\Big\|_{\mathcal{J}} = \frac{1}{d!} \norm{\mathbb E \mathbf{D}^d f(G)}_{\mathcal{J}}, \end{equation} which will end the proof. Fix $d \ge 1$ and $\mathcal{J} \in P_d$. For any ${\bf d} \in \Delta_d^n$ define a symmetric $d$-indexed matrix $(b_{\bf d})_{{\bf i} \in [n]^d}$ as \[ (b_{\bf d})_{\bf i} = \begin{cases} \frac{d_1! \cdots d_n!}{d!} & \text{if ${\bf i} \in I_{\bf d},$} \\ 0 & \text{otherwise.} \end{cases} \] and a symmetric $d$-indexed matrix $(\tilde{B}_{\bf d}\ub{N})_{({\bf i}, {\bf j}) \in ([n] \times [N])^d}$ as \[ (\tilde{B}_{\bf d}\ub{N})_{({\bf i}, {\bf j})} = N^{-d/2} (b_{\bf d})_{\bf i} \quad \text{for all ${\bf i} \in [n]^d$ and ${\bf j} \in [N]^d.$} \] It is a simple observation that \begin{equation}\label{eq:blown-matrices} \Big\| \sum_{{\bf d} \in \Delta_d^n} a_{\bf d} \tilde{B}_{\bf d}\ub{N} \Big\|_\mathcal{J} = \Big\| \sum_{{\bf d} \in \Delta_d^n} a_{\bf d} (b_{\bf d})_{{\bf i} \in [n]^d} \Big\|_\mathcal{J}. \end{equation} On the other hand, for any ${\bf d} \in \Delta_d^n$, the matrices $\tilde{B}_{\bf d}\ub{N}$ and $B_{\bf d}\ub{N}$ differ at no more than $\# I_{\bf d} \cdot \#([N]^d \setminus [N]\uu{d})$ entries. More precisely, if $\mathcal{J}_0 = \{ [d] \}$ (a trivial partition of $[d]$ into one set), then \[ \big\| \tilde{B}_{\bf d}\ub{N} - B_{\bf d}\ub{N} \big\|_\mathcal{J}^2 \le \big\| \tilde{B}_{\bf d}\ub{N} - B_{\bf d}\ub{N} \big\|_{\mathcal{J}_0}^2 \le \frac{d_1! \cdots d_n!}{d!} N^{-d} (N^d - N\uu{d}) \longrightarrow 0 \quad \text{as $N \to \infty$}. 
\] Thus the triangle inequality for the $\|\cdot\|_\mathcal{J}$ norm together with~\eqref{eq:blown-matrices} yields \begin{equation}\label{eq:B-and-b} \lim_{N \to \infty} \Big\|\sum_{{\bf d} \in \Delta_d^n} a_{\bf d} B_{\bf d}\ub{N}\Big\|_{\mathcal{J}} = \Big\| \sum_{{\bf d} \in \Delta_d^n} a_{\bf d} (b_{\bf d})_{{\bf i} \in [n]^d} \Big\|_\mathcal{J}. \end{equation} Finally, note that \begin{equation}\label{eq:Df-and-b} \mathbb E \mathbf{D}^d f(G) = d! \sum_{{\bf d} \in \Delta_d^n} a_{\bf d} (b_{\bf d})_{{\bf i} \in [n]^d}. \end{equation} Indeed, using the identity on Hermite polynomials, $h_k'(x) = k h_{k-1}(x)$ ($k \ge 1$), we obtain $\mathbb E h_k\ub{l}(g) = k! \delta_{k,l}$ for $k,l\ge 0$, where $f\ub{l}$ stands for the $l$-th derivative of $f$, and thus, for any ${\bf d} \in \Delta_d^n$, \[ \big(\mathbb E \mathbf{D}^d h_{d_1}(g\ub{1}) \cdots h_{d_n}(g\ub{n})\big)_{\bf i} = d! (b_{\bf d})_{\bf i} \quad \text{for each ${\bf i} \in [n]^d$}. \] Now, \eqref{eq:Df-and-b} follows by linearity. Combining it with~\eqref{eq:B-and-b} proves~\eqref{eq:description-via-diff}. \end{proof} \paragraph{Remark} Note that the above infinite-divisibility argument can also be used to prove the upper bound on moments in Theorem \ref{thm:Gaussian_intro} (giving a proof independent of the one relying on Theorem \ref{thm:main_intro}). \section{Polynomials in independent sub-Gaussian random variables}\label{sec:subgaussian} In this section we prove Theorem \ref{thm:subgaussian_intro}. Before we proceed with the core of the proof, we will need to introduce some auxiliary inequalities for the norms $\|\cdot\|_\mathcal{J}$ as well as some additional notation. \subsection{Properties of $\|\cdot\|_\mathcal{J}$ norms} The first inequality we will need is fairly standard and given in the following lemma (it is a direct consequence of the definition of the norms $\|\cdot\|_{\mathcal{J}}$).
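To fix ideas, for $d = 2$ the two partition norms are concrete: $\|A\|_{\{1,2\}}$ is the Hilbert-Schmidt norm and $\|A\|_{\{1\}\{2\}}$ the operator norm. The sketch below (NumPy, with arbitrary random test data satisfying $\|v_1\|_\infty, \|v_2\|_\infty \le 1$) verifies the tensor-product inequality of the following lemma in this case.

```python
import numpy as np

# For d = 2: ||A||_{{1,2}} is the Hilbert-Schmidt norm, ||A||_{{1}{2}} the
# operator norm.  We check ||A o (v1 (x) v2)||_J <= ||A||_J ||v1||_oo ||v2||_oo
# for both partitions, on arbitrary random test data.
rng = np.random.default_rng(3)
n = 30
A = rng.normal(size=(n, n))
v1 = rng.uniform(-1, 1, size=n)
v2 = rng.uniform(-1, 1, size=n)

B = A * np.outer(v1, v2)                  # entrywise product A o (v1 tensor v2)
bound = np.abs(v1).max() * np.abs(v2).max()
assert np.linalg.norm(B) <= np.linalg.norm(A) * bound + 1e-9        # J = {1,2}
assert np.linalg.norm(B, 2) <= np.linalg.norm(A, 2) * bound + 1e-9  # J = {1}{2}
```

The operator-norm case follows from writing $A \circ (v_1 \otimes v_2) = D_{v_1} A D_{v_2}$ with diagonal matrices $D_{v_i}$.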
\begin{lemma}\label{lem_tensor_product} For any $d$-indexed matrix $A = (a_{\bf i})_{{\bf i} \in [n]^d}$ and any vectors $v_1,\ldots,v_d \in \mathbb{R}^n$ we have for all $\mathcal{J} \in P_d$, \begin{displaymath} \|A\circ \otimes_{i=1}^d v_i\|_\mathcal{J} \le \|A\|_\mathcal{J}\prod_{i=1}^d \|v_i\|_\infty. \end{displaymath} \end{lemma} To formulate subsequent inequalities we need some auxiliary notation concerning $d$-indexed matrices. We will treat matrices as functions from $[n]^d$ into the real line, which in particular allows us to use the notation of indicator functions and, for a set $C \subseteq [n]^d$, to write $\Ind{C}$ for the matrix $(a_{\bf i})$ such that $a_{\bf i} = 1$ if ${\bf i}\in C$ and $0$ otherwise. Note that for $\#\mathcal{J} > 1$, $\|\cdot\|_\mathcal{J}$ is not unconditional in the standard basis, i.e., in general it is not true that $\|A\circ\Ind{C}\|_\mathcal{J} \le \|A\|_\mathcal{J}$. One situation in which this inequality holds is when $C$ is of the form $C = \{{\bf i}\colon i_{k_1} = j_1,\ldots,i_{k_l} = j_l\}$ for some $1 \le k_1<\ldots<k_l \le d$ and $j_1,\ldots,j_l \in [n]$ (which follows from Lemma \ref{lem_tensor_product}). This corresponds to setting to zero all coefficients which are outside a ``generalized row'' of a matrix and leaving the coefficients in this row intact. Later we will need another inequality of this type, which will allow us to select a ``generalized diagonal'' of a matrix. The corresponding estimate is given in the following \begin{lemma}\label{lem_diagonal_selection} Let $A = (a_{\bf i})_{{\bf i}\in [n]^d}$ be a $d$-indexed matrix and let $C \subseteq [n]^d$ be of the form $C = \{{\bf i}\colon i_k = i_l \;\textrm{for} \; k,l \in K\}$, with $K \subseteq [d]$. Then for every $\mathcal{J} \in P_d$, $\|A\circ \Ind{C}\|_\mathcal{J} \le \|A\|_\mathcal{J}$. \end{lemma} \begin{proof} Since $\Ind{C_1\cap C_2} = \Ind{C_1} \circ \Ind{C_2}$, it is enough to consider the case $\#K = 2$, i.e.
$C = \{{\bf i}\colon i_k = i_l\}$ for some $1\le k<l\le d$. Let $\mathcal{J} = \{J_1,\ldots,J_m\}$. We will consider two cases. \paragraph{1.} The numbers $k$ and $l$ are separated by the partition $\mathcal{J}$. Without loss of generality we can assume that $k\in J_1$, $l\in J_2$. Then \begin{align}\label{eq_diag_selection} &\|A\circ \Ind{C}\|_\mathcal{J} \\ &= \sup_{\|x\ub{j}_{{\bf i}_{J_j}}\|_2\le 1\colon j \ge 3}\Big(\sup_{\|x\ub{1}_{{\bf i}_{J_1}}\|_2,\|x\ub{2}_{{\bf i}_{J_2}}\|_2\le 1} \sum_{|{\bf i}_{J_1}|\le n}\sum_{|{\bf i}_{J_2}|\le n}\ind{i_k=i_l}\Big(\sum_{|{\bf i}_{(J_1\cup J_2)^c}|\le n} a_{\bf i} x\ub{3}_{{\bf i}_{J_3}}\cdots x\ub{m}_{{\bf i}_{J_m}}\Big)x\ub{1}_{{\bf i}_{J_1}}x\ub{2}_{{\bf i}_{J_2}}\Big).\nonumber \end{align} For any $x\ub{3}_{{\bf i}_{J_3}},\ldots,x\ub{m}_{{\bf i}_{J_m}}$, consider the matrix \begin{displaymath} B_{{\bf i}_{J_1},{\bf i}_{J_2}} = \Big(\sum_{|{\bf i}_{(J_1\cup J_2)^c}|\le n} a_{\bf i} x\ub{3}_{{\bf i}_{J_3}}\cdots x\ub{m}_{{\bf i}_{J_m}}\Big)_{{\bf i}_{J_1},{\bf i}_{J_2}} \end{displaymath} acting from $\ell_2([n]^{J_1})$ to $\ell_2([n]^{J_2})$. For fixed $x\ub{3}_{{\bf i}_{J_3}},\ldots,x\ub{m}_{{\bf i}_{J_m}}$ the inner expression on the right hand side of (\ref{eq_diag_selection}) is the operator norm of the block-diagonal matrix obtained from $B_{{\bf i}_{J_1},{\bf i}_{J_2}}$ by setting to zero entries in off-diagonal blocks. Therefore it is not greater than the operator norm of $B_{{\bf i}_{J_1},{\bf i}_{J_2}}$, which allows us to write \begin{align*} \|A\circ \Ind{C}\|_\mathcal{J} &\le \sup_{\|x\ub{j}_{{\bf i}_{J_j}}\|_2\le 1\colon j \ge 3}\Big(\sup_{\|x\ub{1}_{{\bf i}_{J_1}}\|_2,\|x\ub{2}_{{\bf i}_{J_2}}\|_2\le 1} \sum_{|{\bf i}_{J_1}|\le n}\sum_{|{\bf i}_{J_2}|\le n}\Big(\sum_{|{\bf i}_{(J_1\cup J_2)^c}|\le n} a_{\bf i} x\ub{3}_{{\bf i}_{J_3}}\cdots x\ub{m}_{{\bf i}_{J_m}}\Big)x\ub{1}_{{\bf i}_{J_1}}x\ub{2}_{{\bf i}_{J_2}}\Big)\\ & = \|A\|_\mathcal{J}. 
\end{align*} \paragraph{2.} There exists $j$ such that $k,l \in J_j$. Without loss of generality we can assume that $j = 1$. We have \begin{align*} \|A\circ \Ind{C}\|_\mathcal{J} &= \sup_{\|x\ub{j}_{{\bf i}_{J_j}}\|_2\le 1\colon j \ge 2}\Big(\sup_{\|x\ub{1}_{{\bf i}_{J_1}}\|_2\le 1} \sum_{|{\bf i}_{J_1}|\le n}\ind{i_k=i_l}\Big(\sum_{|{\bf i}_{J_1^c}|\le n} a_{\bf i} x\ub{2}_{{\bf i}_{J_2}}\cdots x\ub{m}_{{\bf i}_{J_m}}\Big)x\ub{1}_{{\bf i}_{J_1}}\Big)\\ & = \sup_{\|x\ub{j}_{{\bf i}_{J_j}}\|_2\le 1\colon j \ge 2}\Big(\sum_{|{\bf i}_{J_1}|\le n}\ind{i_k=i_l}\Big(\sum_{|{\bf i}_{J_1^c}|\le n} a_{\bf i} x\ub{2}_{{\bf i}_{J_2}}\cdots x\ub{m}_{{\bf i}_{J_m}}\Big)^2\Big)^{1/2}\\ &\le \sup_{\|x\ub{j}_{{\bf i}_{J_j}}\|_2\le 1\colon j \ge 2}\Big(\sum_{|{\bf i}_{J_1}|\le n}\Big(\sum_{|{\bf i}_{J_1^c}|\le n} a_{\bf i} x\ub{2}_{{\bf i}_{J_2}}\cdots x\ub{m}_{{\bf i}_{J_m}}\Big)^2\Big)^{1/2} = \|A\|_\mathcal{J}. \end{align*} \end{proof} For a partition $\mathcal{K} = \{K_1,\ldots,K_m\} \in P_d$ define \begin{align}\label{eq_level_set_def} L(\mathcal{K}) = \{{\bf i}\in[n]^d\colon i_k= i_l \;\textrm{iff}\; \exists_{j\le m}\; k,l\in K_j\}. \end{align} Thus $L(\mathcal{K})$ is the set of all indices for which the partition into level sets is equal to $\mathcal{K}$. \begin{cor} \label{cor_norm_monotonicity} For any $\mathcal{J,K} \in P_d$ and any $d$-indexed matrix $A$, \begin{displaymath} \|A\circ \Ind{L(\mathcal{K})}\|_\mathcal{J} \le 2^{\#\mathcal{K}(\#\mathcal{K}-1)/2}\|A\|_\mathcal{J}. \end{displaymath} \end{cor} \begin{proof} By Lemma \ref{lem_diagonal_selection} and the triangle inequality for any $k< l$, $\|A\circ \ind{i_k\neq i_l}\|_\mathcal{J} = \|A - A\circ\ind{i_k=i_l}\|_\mathcal{J} \le 2\|A\|_\mathcal{J}$. 
Now it is enough to note that $L(\mathcal{K})$ can be expressed as an intersection of $\#\mathcal{K}$ ``generalized diagonals'' and $\#\mathcal{K}(\#\mathcal{K}-1)/2$ sets of the form $\{{\bf i}\colon i_k\neq i_l\}$ where $k < l$ and use again Lemma \ref{lem_diagonal_selection} together with the above inequality. \end{proof} \subsection{Proof of Theorem \ref{thm:subgaussian_intro}} Let us first note that the tail bound of Theorem \ref{thm:subgaussian_intro} follows from the moment estimate and Chebyshev's inequality in the same way as in Theorems \ref{thm:main_intro} and \ref{thm:main}. We will therefore focus on the moment bound. The method of proof relies on a reduction to the Gaussian case via decoupling inequalities, symmetrization and the contraction principle. To carry out this strategy we will need the following representation of $f$. \begin{align}\label{eq_poly_rep_1} f(x) = \sum_{0\le d\le D} \sum_{m=0}^d \sum_{{k_1, \ldots, k_m > 0}\atop{k_1+\ldots+k_m=d}} \sum_{{\bf i} \in [n]\uu{m}} c_{(i_1,k_1),\ldots,(i_m,k_m)}\ub{d} x_{i_1}^{k_1}x_{i_2}^{k_2}\cdots x_{i_m}^{k_m}, \end{align} where the coefficients $c_{(i_1,k_1),\ldots,(i_m,k_m)}\ub{d}$ satisfy \begin{align}\label{eq:poly_symmetry} c_{(i_1,k_1),\ldots,(i_m,k_m)}\ub{d} = c_{(i_{\pi_1},k_{\pi_1}),\ldots,(i_{\pi_m},k_{\pi_m})}\ub{d} \end{align} for all permutations $\pi \colon [m] \to [m]$. At this point we would like to explain the convention regarding indices which we will use throughout this section. It is rather standard, but we prefer to draw the reader's attention to it, as we will use it extensively in what follows. Namely, we will treat the sequence ${\bf k} = (k_1,\ldots,k_m)$ as a function acting on $[m]$ and taking values in positive integers. In particular if $m=0$, then $[m] = \emptyset$ and there exists exactly one function ${\bf k}\colon [m]\to \mathbb{N}\setminus\{0\}$ (the empty function).
Moreover by convention this function satisfies $\sum_{i=1}^m k_i = 0$ (as the summation runs over an empty set). Therefore, for $d=0$ and $m=0$ the subsum over $k_1,\ldots,k_m$ and ${\bf i}$ above is equal to the free coefficient of the polynomial (which can be denoted by $c_\emptyset\ub{0}$), since the summation over $k_1,\ldots,k_m$ runs over a one-element set containing the empty index/function and for this index there is exactly one index ${\bf i}\colon [m]\to \{1,\ldots,n\}$, which belongs to $[n]^{\underline{m}}$ (again the empty-index). Here we also use the convention that a product over an empty set is equal to one. On the other hand, for $d>0$, the contribution from $m=0$ is equal to zero (as the empty index ${\bf k}$ does not satisfy the constraint $k_1+\ldots+k_m = d$ and so the summation over $k_1,\ldots,k_m$ runs over the empty set). Using \eqref{eq_poly_rep_1} together with independence of $X_1,\ldots,X_n$, one may write \begin{displaymath} f(X) - \mathbb E f(X) = \sum_{1\le d\le D} \sum_{m=1}^d \sum_{{k_1, \ldots, k_m > 0}\atop{k_1+\ldots+k_m=d}} \sum_{{\bf i}\in[n]\uu{m}} c_{(i_1,k_1),\ldots,(i_m,k_m)}\ub{d} \sum_{\emptyset \neq J \subseteq [m]} \prod_{j\in J} (X_{i_j}^{k_j} - \mathbb E X_{i_j}^{k_j})\prod_{j\notin J} \mathbb E X_{i_j}^{k_j}. 
\end{displaymath} Rearranging the terms and using \eqref{eq:poly_symmetry} together with the triangle inequality, we obtain \begin{displaymath} |f(X) - \mathbb E f(X) | \le \sum_{1\le d\le D} \sum_{a=1}^d \sum_{{k_1, \ldots, k_a > 0}\atop{k_1+\ldots+k_a=d}} \Big|\sum_{{\bf i} \in [n]\uu{a}} d_{i_1,\ldots,i_a}\ub{k_1,\ldots,k_a} (X_{i_1}^{k_1} - \mathbb E X_{i_1}^{k_1})\cdots (X_{i_a}^{k_a} - \mathbb E X_{i_a}^{k_a})\Big|, \end{displaymath} where \begin{align*} d_{i_1,\ldots,i_a}\ub{k_1,\ldots,k_a} = \sum_{a\le m \le D}\sum_{{k_{a+1},\ldots,k_m > 0\colon}\atop{k_1+\ldots+k_m \le D}}\sum_{{i_{a+1},\ldots,i_m\colon}\atop{(i_1,\ldots,i_m)\in [n]\uu{m}}}\binom{m}{a}c_{(i_1,k_1),\ldots,(i_m,k_m)}\ub{k_1+\ldots+k_m}\mathbb E X_{i_{a+1}}^{k_{a+1}}\cdots \mathbb E X_{i_m}^{k_m}. \end{align*} Note that \eqref{eq:poly_symmetry} implies that for every permutation $\pi\colon [a]\to [a]$, \begin{align}\label{eq:poly_symmetry_d} d_{i_1,\ldots,i_a}\ub{k_1,\ldots,k_a} = d_{i_{\pi_1},\ldots,i_{\pi_a}}\ub{k_{\pi_1},\ldots,k_{\pi_a}}. \end{align} Now let $X\ub{1},\ldots,X\ub{D}$ be independent copies of the random vector $X$ and $(\varepsilon_i\ub{j})_{i\le n,j\le D}$ an array of i.i.d. Rademacher variables independent of $(X\ub{j})_j$.
For each $k_1,\ldots,k_a$, by decoupling inequalities (Theorem \ref{thm:decoupling} in the Appendix) applied to the functions \begin{displaymath} h_{i_1,\ldots,i_a}\ub{k_1,\ldots,k_a}(x_1,\ldots,x_a) = d_{i_1,\ldots,i_a}\ub{k_1,\ldots,k_a}(x_1^{k_1} - \mathbb E X_{i_1}^{k_1})\cdots(x_a^{k_a} - \mathbb E X_{i_a}^{k_a}) \end{displaymath} and standard symmetrization inequalities (applied conditionally $a$ times) we obtain \begin{align}\label{eq:after_decoupling} &\|f(X) - \mathbb E f(X) \|_p\\ &\le C_D\sum_{d=1}^D \sum_{a=1}^d \sum_{{k_1, \ldots, k_a > 0}\atop{k_1+\ldots+k_a=d}} \bigg\|\sum_{{\bf i} \in[n]\uu{a}} d_{i_1,\ldots,i_a}\ub{k_1,\ldots,k_a} \Big((X_{i_1}\ub{1})^{k_1} - \mathbb E (X_{i_1}\ub{1})^{k_1}\Big)\cdots \Big((X_{i_a}\ub{a})^{k_a} - \mathbb E (X_{i_a}\ub{a})^{k_a}\Big)\bigg\|_p\nonumber\\ &\le C_D\sum_{d=1}^D \sum_{a=1}^d \sum_{{k_1, \ldots, k_a > 0}\atop{k_1+\ldots+k_a=d}} \bigg\|\sum_{{\bf i} \in[n]\uu{a}} d_{i_1,\ldots,i_a}\ub{k_1,\ldots,k_a} \Big(\varepsilon_{i_1}\ub{1}(X_{i_1}\ub{1})^{k_1} \cdots \varepsilon_{i_a}\ub{a}(X_{i_a}\ub{a})^{k_a} \Big)\bigg\|_p\nonumber \end{align} (note that in the first part of Theorem \ref{thm:decoupling} one does not impose any symmetry assumptions on the functions $h_{\bf i}$). We will now use the following standard comparison lemma (for the reader's convenience its proof is presented in the Appendix). \begin{lemma}\label{lemma_comp_Gauss} For any positive integer $k$, if $Y_1,\ldots,Y_n$ are independent symmetric variables with $\|Y_i\|_{\psi_{2/k}} \le M$, then \begin{displaymath} \|\sum_{i=1}^n a_i Y_i\|_p \le C_k M\|\sum_{i=1}^n a_i g_{i1}\cdots g_{ik} \|_p, \end{displaymath} where $g_{ij}$ are i.i.d. $\mathcal{N}(0,1)$ variables.
\end{lemma} Note that for any positive integer $k$ we have $\|X_i^k\|_{\psi_{2/k}} = \|X_i\|_{\psi_2}^k \le L^k$, so \eqref{eq:after_decoupling} together with the above lemma (used repeatedly and conditionally) yield \begin{multline}\label{eq_now_Gaussian} \|f(X) - \mathbb E f(X) \|_p \\ \le C_D\sum_{1\le d\le D} L^d\sum_{a=1}^d \sum_{{k_1, \ldots, k_a > 0}\atop{k_1+\ldots+k_a=d}} \Big\|\sum_{{\bf i} \in[n]\uu{a}} d_{i_1,\ldots,i_a}\ub{k_1,\ldots,k_a} (g\ub{1}_{i_1,1}\cdots g\ub{1}_{i_1,k_1})\cdots(g\ub{a}_{i_a,1}\cdots g\ub{a}_{i_a,k_a})\Big\|_p, \end{multline} where $(g_{i,k}\ub{j})$ is an array of i.i.d. standard Gaussian variables. Consider now multi-indexed matrices $B_1,\ldots,B_D$ defined as follows. For $1\le d\le D$, and a multi-index ${\bf r} = (r_1,\ldots,r_d)\in [n]^d$ let $\mathcal{I} = \{I_1,\ldots,I_a\}$ be the partition of $\{1,\ldots,d\}$ into the level sets of ${\bf r}$ and $i_1,\ldots,i_a$ be the values corresponding to the level sets $I_1,\ldots,I_a$. Define moreover \begin{align*} b\ub{d}_{r_1,\ldots,r_d} = d\ub{\#I_1,\ldots,\#I_a}_{i_1,\ldots,i_a} \end{align*} (note that thanks to \eqref{eq:poly_symmetry_d} this definition does not depend on the order of $I_1,\ldots,I_a$). Finally, define the $d$-indexed matrix $B_d = (b\ub{d}_{{\bf r}})_{{\bf r}\in [n]^d}$. Let us also define for $k_1,\ldots,k_a > 0$, $\sum_{i=1}^a k_i = d$ the partition $\mathcal{K}(k_1,\ldots,k_a) \in \mathcal{P}_d$ by splitting the set $\{1,\ldots,d\}$ into consecutive intervals of length $k_1,\ldots,k_a$, i.e., $\mathcal{K} = \{K_1,\ldots,K_a\}$, where for $l = 1,\ldots,a$, $K_l = \{1+\sum_{i=1}^{l-1} k_i,2+\sum_{i=1}^{l-1} k_i, \ldots,\sum_{i=1}^{l} k_i\}$. 
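The bookkeeping with the partitions $\mathcal{K}(k_1,\ldots,k_a)$ and the level sets $L(\mathcal{K})$ of \eqref{eq_level_set_def} can be made concrete with a minimal Python sketch (an illustration only, outside the formal argument; it uses $0$-based index positions instead of the $1$-based convention of the text):

```python
from itertools import product

def interval_partition(ks):
    # 0-based analogue of K(k_1,...,k_a): split {0,...,d-1} into
    # consecutive blocks of sizes k_1,...,k_a
    blocks, start = set(), 0
    for k in ks:
        blocks.add(frozenset(range(start, start + k)))
        start += k
    return blocks

def level_sets(r):
    # partition of the index positions into the level sets of the multi-index r
    levels = {}
    for pos, val in enumerate(r):
        levels.setdefault(val, set()).add(pos)
    return {frozenset(s) for s in levels.values()}

# L(K) for K = K(2,1,1) and n = 3: the indices (i,i,j,k) with i,j,k
# pairwise distinct, so there are n(n-1)(n-2) = 6 of them
K = interval_partition((2, 1, 1))
n, d = 3, 4
members = [r for r in product(range(n), repeat=d) if level_sets(r) == K]
assert len(members) == n * (n - 1) * (n - 2)
assert all(r[0] == r[1] and len(set(r)) == 3 for r in members)
```

For instance, $(1,1,0,2)$ belongs to $L(\mathcal{K}(2,1,1))$ while $(1,1,1,2)$ does not, since its level-set partition merges the first three positions.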
Applying Theorem \ref{thm_Latala_dec} to the right hand side of (\ref{eq_now_Gaussian}), we obtain \begin{align*} &\|f(X) - \mathbb E f(X) \|_p \\ & \le C_D\sum_{1\le d\le D} L^d\sum_{a=1}^d \sum_{{k_1, \ldots, k_a > 0}\atop{k_1+\ldots+k_a=d}}\Big\|\Big\langle B_d \circ \Ind{L(\mathcal{K}(k_1,\ldots,k_a))},\bigotimes_{j=1}^a \bigotimes_{k=1}^{k_j} (g\ub{j}_{i,k})_{i\le n}\Big\rangle \Big\|_p\nonumber \\ &\le C_D\sum_{1\le d\le D}L^d \sum_{a=1}^d \sum_{{k_1, \ldots, k_a > 0}\atop{k_1+\ldots+k_a=d}} \sum_{\mathcal{J} \in P_{d}} p^{\#\mathcal{J}/2}\|B_d \circ \Ind{L(\mathcal{K}(k_1,\ldots,k_a))}\|_\mathcal{J}. \end{align*} Note that for all $k_1,\ldots,k_a$ by Corollary \ref{cor_norm_monotonicity} we have $\|B_d\circ\Ind{L(\mathcal{K}(k_1,\ldots,k_a))}\|_\mathcal{J} \le C_d \|B_d\|_\mathcal{J}$. Thus we obtain \begin{align*} \|f(X) - \mathbb E f(X) \|_p\le C_D\sum_{1\le d\le D} L^d\sum_{\mathcal{J} \in P_{d}} p^{\#\mathcal{J}/2}\|B_d \|_\mathcal{J}. \end{align*} Our next goal is to replace $B_d$ in the above inequality by $\mathbb E \mathbf{D}^d f(X)$. To this end we will analyse the structure of the coefficients of $B_d$ and compare them with the integrated partial derivatives of $f$. Let us first calculate $\mathbb E \mathbf{D}^d f(X)$. Consider ${\bf r}\in [n]^d$ such that $i_1,\ldots,i_a$ are its distinct values, taken $l_1,\ldots,l_a$ times respectively. We have \begin{multline*} \mathbb E \frac{\partial^{d} f}{\partial x_{r_1}\cdots\partial x_{r_d}}(X) = \sum_{k_1\ge l_1,\ldots,k_a\ge l_a}\sum_{a\le m \le D} \sum_{{k_{a+1},\ldots,k_m > 0}\atop{k_1+\ldots+k_m \le D}} \sum_{{i_{a+1},\ldots,i_m}\atop{(i_1,\ldots,i_m) \in [n]\uu{m}}} \\ \Bigg[\binom{m}{a}a!c\ub{k_1+\ldots+k_m}_{(i_1,k_1),\ldots,(i_m,k_m)}\prod_{j=1}^a \mathbb E X_{i_j}^{k_j-l_j}\prod_{j=a+1}^m \mathbb E X_{i_j}^{k_j}\prod_{j=1}^a \frac{k_j!}{(k_j - l_j)!}\Bigg], \end{multline*} where we have used \eqref{eq:poly_symmetry}.
By comparing this with the definition of $b\ub{d}_{r_1,\ldots,r_d}$ and $d\ub{k_1,\ldots,k_a}_{i_1,\ldots,i_a}$ one can see that the sub-sum of the right hand side above corresponding to the choice $k_1 = l_1,\ldots,k_a = l_a$ is equal to $a!l_1!\cdots l_a! b\ub{d}_{r_1,\ldots,r_d}$. In particular for $d=D$, since $l_1+\ldots+l_a = D$, we have \begin{displaymath} \mathbb E \frac{\partial^{D} f}{\partial x_{r_1}\cdots\partial x_{r_D}}(X) = a! l_1! \cdots l_a! b\ub{D}_{r_1,\ldots,r_D} \end{displaymath} and so \begin{align*} \|B_D\|_\mathcal{J}\le \sum_{\mathcal{K}\in \mathcal{P}_D} \|B_D\circ \Ind{L(\mathcal{K})}\|_\mathcal{J} \le \sum_{\mathcal{K}\in \mathcal{P}_D} \|\mathbb E \mathbf{D}^D f(X)\circ \Ind{L(\mathcal{K})}\|_\mathcal{J} \le C_D \|\mathbb E \mathbf{D}^D f(X)\|_\mathcal{J}, \end{align*} where in the last inequality we used Corollary \ref{cor_norm_monotonicity}. Therefore if we prove that for all $d < D$ and all partitions $\mathcal{I} = \{I_1,\ldots,I_a\},\mathcal{J} = \{J_1,\ldots,J_b\} \in P_d$, \begin{align}\label{eq_subgaussian_to_prove} \|a!\#I_1!\cdots \#I_a! (B_d \circ \Ind{L(\mathcal{I})}) - \mathbb E \mathbf{D}^d f(X)\circ \Ind{L(\mathcal{I})}\|_\mathcal{J} \le C_D \sum_{d< k \le D} L^{k-d} \sum_{{\mathcal{K} \in P_k}\atop {\#\mathcal{K} = \#\mathcal{J}}}\|B_k\|_\mathcal{K}, \end{align} then by a simple reverse induction (using again Corollary \ref{cor_norm_monotonicity}) we will obtain \begin{align*} \sum_{1\le d\le D} L^d \sum_{\mathcal{J} \in P_{d}} p^{\#\mathcal{J}/2}\|B_d \|_\mathcal{J} \le C_D\sum_{1\le d\le D} L^d\sum_{\mathcal{J} \in P_{d}} p^{\#\mathcal{J}/2}\|\mathbb E \mathbf{D}^d f(X)\|_\mathcal{J}, \end{align*} which will end the proof of the theorem. Fix any $d < D$ and partitions $\mathcal{I} = \{I_1,\ldots,I_a\}, \mathcal{J} = \{J_1,\ldots,J_b\} \in P_d$. Denote $l_i = \# I_i$.
For every sequence $k_1,\ldots,k_a$ such that $k_i \ge l_i$ for $i\le a$ and there exists $i \le a$ such that $k_i > l_i$, let us define a $d$-indexed matrix $E\ub{d,k_1,\ldots,k_a}_\mathcal{I} = (e\ub{d,k_1,\ldots,k_a}_{\bf r})_{{\bf r} \in [n]^d}$, such that $e\ub{d,k_1,\ldots,k_a}_{\bf r} = 0$ if ${\bf r} \notin L(\mathcal{I})$ and for ${\bf r} \in L(\mathcal{I})$, \begin{align*} e\ub{d,k_1,\ldots,k_a}_{\bf r} = \sum_{a\le m\le D}\sum_{{k_{a+1},\ldots,k_m > 0}\atop{k_1+\ldots+k_m \le D}} \sum_{{i_{a+1},\ldots,i_m}\atop{(i_1,\ldots,i_m) \in[n]^{\underline{m}}}} \binom{m}{a}c\ub{k_1+\ldots+k_m}_{(i_1,k_1),\ldots,(i_m,k_m)}\prod_{j=1}^a \mathbb E X_{i_j}^{k_j-l_j}\prod_{j=a+1}^m \mathbb E X_{i_j}^{k_j}, \end{align*} where $i_1,\ldots,i_a$ are the values of ${\bf r}$ corresponding to the level sets $I_1,\ldots,I_a$. We then have \begin{align*} \sum_{{k_1\ge l_1,\ldots,k_a \ge l_a}\atop{\exists_i k_i > l_i}} a!\frac{k_1!}{(k_1-l_1)!}\cdots \frac{k_a!}{(k_a-l_a)!}E_\mathcal{I}\ub{d,k_1,\ldots,k_a} = \mathbb E \mathbf{D}^d f(X)\circ \Ind{L(\mathcal{I})} - a!l_1!\cdots l_a! B_d\circ \Ind{L(\mathcal{I})}. \end{align*} Since we do not pay attention to constants depending only on $D$, by the above formula and the triangle inequality, to prove (\ref{eq_subgaussian_to_prove}) it is enough to show that for all sequences $k_1,\ldots,k_a$ such that $k_i \ge l_i$ for $i\le a$ and there exists $i \le a$ such that $k_i > l_i$ one has \begin{align}\label{eq_subgaussian_reduced} \|E\ub{d,k_1,\ldots,k_a}_\mathcal{I}\|_{\mathcal{J}} \le C_D L^{\sum_{j\le a} (k_j-l_j)}\|B_{k_1+\ldots+k_a}\|_\mathcal{K} \end{align} for some partition $\mathcal{K}\in \mathcal{P}_{k_1+\ldots+k_a}$ with $\# \mathcal{K} = \#\mathcal{J}$ (note that $\sum_{j\le a} l_j = d$). Therefore in what follows we will fix $k_1,\ldots,k_a$ as above and to simplify the notation we will write $E\ub{d}$ instead of $E\ub{d,k_1,\ldots,k_a}_\mathcal{I}$ and $e\ub{d}_{\bf r}$ instead of $e\ub{d,k_1,\ldots,k_a}_{\bf r}$. 
Fix therefore any partition $\tilde{\mathcal{I}} = \{\tilde{I}_1,\ldots,\tilde{I}_a\} \in \mathcal{P}_{k_1+\ldots+k_a}$ such that $\#\tilde{I}_i = k_i$ and $I_i \subseteq \tilde{I}_i$ for all $i \le a$ (the specific choice of $\tilde{\mathcal{I}}$ is irrelevant). Finally define a $(k_1+\ldots+k_a)$-indexed matrix $\tilde{E}\ub{k_1+\ldots+k_a} = (\tilde{e}\ub{k_1+\ldots+k_a}_{\bf r})_{{\bf r}\in [n]^{k_1+\ldots+k_a}}$ by setting \begin{align}\label{eq:E_tilde_construction} \tilde{e}\ub{k_1+\ldots+k_a}_{\bf r} = e\ub{d}_{{\bf r}_{[d]}} \ind{{\bf r}\in L(\mathcal{\tilde{I}})}. \end{align} In other words the new matrix is created by embedding the $d$-indexed matrix into a ``generalized diagonal'' of a $(k_1+\ldots+k_a)$-indexed matrix by adding $\sum_{j\le a} (k_j-l_j)$ new indices and assigning to them the values of old indices (for each $j \le a$ we add $k_j-l_j$ times the common value attained by ${\bf r}_{\{1,\ldots,d\}}$ on $I_j$). Recall now the definition of the coefficients $b\ub{d}_{\bf r}$ and note that for any ${\bf r} \in L(\mathcal{\tilde{I}})\subseteq [n]^{k_1+\ldots+k_a}$ we have $\tilde{e}\ub{k_1+\ldots+k_a}_{\bf r} = b\ub{k_1+\ldots+k_a}_{\bf r}\prod_{j=1}^a\mathbb E X_{i_j}^{k_j-l_j}$, where for $j\le a$, $i_j$ is the value of ${\bf r}$ on its level set $\tilde{I}_j$. This means that $\tilde{E}\ub{k_1+\ldots+k_a} = (B_{k_1+\ldots+k_a}\circ \Ind{L(\mathcal{\tilde{I}})})\circ (\otimes_{s=1}^{k_1+\ldots+k_a} v_s)$, where $v_s = (\mathbb E X_i^{k_j-l_j})_{i\le n}$ if $s = \min I_j$ for some $j \le a$, and $v_s = (1,\ldots,1)$ otherwise.
Since $\|v_s\|_\infty \le (C_DL)^{k_j-l_j}$ if $s = \min I_j$ for some $j \le a$, and $\|v_s\|_\infty = 1$ otherwise, by Lemma \ref{lem_tensor_product} this implies that for any $\mathcal{K} \in P_{k_1+\ldots+k_a}$, \begin{align}\label{eq_subgaussian_aux} \|\tilde{E}\ub{k_1+\ldots+k_a}\|_\mathcal{K} \le (C_D L)^{\sum_{j\le a}(k_j-l_j)}\|B_{k_1+\ldots+k_a}\circ \Ind{L(\tilde{\mathcal{I}})}\|_\mathcal{K} \le C_DL^{\sum_{j\le a}(k_j-l_j)}\|B_{k_1+\ldots+k_a}\|_\mathcal{K}, \end{align} where in the last inequality we used Corollary \ref{cor_norm_monotonicity}. We will now use the above inequality to prove \eqref{eq_subgaussian_reduced}. Consider the unique partition $\mathcal{K} = \{K_1,\ldots,K_b\}$ of $\{1,\ldots,k_1+\ldots+k_a\}$ satisfying the following two conditions: \begin{itemize} \item for each $j\le b$, $J_j \subseteq K_j$, \item for each $s \in \{d+1,\ldots,k_1+\ldots+k_a\}$, if $s \in \tilde{I}_j$ and $\pi(s) := \min \tilde{I}_j\in J_k$, then $s \in K_k$. In other words, all indices $s$ which in the construction of $\tilde{\mathcal{I}}$ were added to $I_j$ (i.e., elements of $\tilde{I}_j\setminus I_j$) are now added to the unique element of $\mathcal{J}$ containing $\pi(s) = \min \tilde{I}_j = \min I_j$. \end{itemize} Now, it is easy to see that $\|E\ub{d}\|_\mathcal{J}\le \|\tilde{E}\ub{k_1+\ldots+k_a}\|_\mathcal{K}$. Indeed, consider an arbitrary $x\ub{j} = (x_{{\bf r}_{J_j}}\ub{j})_{|{\bf r}_{J_j}|\le n}$, $j=1,\ldots,b$, satisfying $\|x\ub{j}\|_2\le 1$. Define $y\ub{j} = (y_{{\bf r}_{K_j}}\ub{j})_{|{\bf r}_{K_j}|\le n}$, $j =1,\ldots,b$ by the formula \begin{displaymath} y\ub{j}_{{\bf r}_{K_j}} = x\ub{j}_{{\bf r}_{K_j\cap[d]}}\prod_{s\in K_j \setminus [d]}\ind{r_s = r_{\pi(s)}}. \end{displaymath} We have $\|y\ub{j}\|_2 = \|x\ub{j}\|_2\le 1$.
Moreover, by the construction of the matrix $\tilde{E}\ub{k_1+\ldots+k_a}$ (recall \eqref{eq:E_tilde_construction}), we have \begin{displaymath} \sum_{|{\bf r}_{[d]}|\le n} e_{{\bf r}_{[d]}}\ub{d} \prod_{j=1}^b x\ub{j}_{{\bf r}_{J_j}} = \sum_{|{\bf r}_{[k_1+\ldots+k_a]}|\le n} \tilde{e}_{{\bf r}_{[k_1+\ldots+k_a]}}\ub{k_1+\ldots+k_a} \prod_{j=1}^b x\ub{j}_{{\bf r}_{J_j}} = \sum_{|{\bf r}_{[k_1+\ldots+k_a]}|\le n} \tilde{e}_{{\bf r}_{[k_1+\ldots+k_a]}}\ub{k_1+\ldots+k_a} \prod_{j=1}^b y\ub{j}_{{\bf r}_{K_j}} \end{displaymath} (in the last equality we used the fact that if ${\bf r} \in L(\tilde{\mathcal{I}})$, then for $s > d$, $r_{\pi(s)} = r_s$ and so $y\ub{j}_{{\bf r}_{K_j}} = x\ub{j}_{{\bf r}_{K_j\cap[d]}} = x\ub{j}_{{\bf r}_{J_j}}$). By taking the supremum over $x\ub{j}$ one thus obtains $\|E\ub{d}\|_\mathcal{J}\le \|\tilde{E}\ub{k_1+\ldots+k_a}\|_\mathcal{K}$. Combining this inequality with (\ref{eq_subgaussian_aux}) proves (\ref{eq_subgaussian_reduced}) and thus (\ref{eq_subgaussian_to_prove}). This ends the proof of Theorem \ref{thm:subgaussian_intro}. \subsection{Application: Subgraph counting in random graphs} We will now apply the results of Section \ref{sec:subgaussian} to some special cases of the problem of subgraph counting in Erd\H{o}s-R{\'e}nyi random graphs $G(n,p)$, which is often used as a test model for deviation inequalities for polynomials in independent random variables. More specifically, we will investigate the problem of counting cycles of fixed length. It turns out that in some ranges of the parameters Theorem \ref{thm:subgaussian_intro} gives optimal inequalities (leading to improvements of known results), whereas in other regimes the estimates it gives are suboptimal. \medskip Let us first describe the setting (in a slightly more general form than needed for our example). We will consider undirected graphs $G = (V,E)$, where $V$ is a finite set of vertices and $E$ is the set of edges (i.e. two-element subsets of $V$).
By $V_G = V(G)$ and $E_G = E(G)$ we mean the set of vertices and edges (respectively) of a graph $G$. Also, $v_G = v(G)$ and $e_G = e(G)$ denote the number of vertices and edges in $G$. We say that a graph $H$ is a subgraph of a graph $G$ if $V_H \subseteq V_G$ and $E_H \subseteq E_G$ (thus a subgraph is not necessarily induced). Graphs $H$ and $G$ are isomorphic if there is a bijection $\pi\colon V_H \to V_G$ such that for all distinct $v,w \in V_H$, $\{\pi(v),\pi(w)\} \in E_G$ iff $\{v,w\} \in E_H$. For $p \in [0,1]$ consider now the Erd\H{o}s-R{\'e}nyi random graph $G = G(n,p)$, i.e., a graph with $n$ vertices (we will assume that $V_G = [n]$) in which each edge is included independently with probability $p$. In what follows we will be concerned with the number of copies of a given graph $H = ([k],E_H)$ in the graph $G$, i.e., the number of subgraphs of $G$ which are isomorphic to $H$. We will denote this random variable by $Y_H(n,p)$. To relate $Y_H(n,p)$ to polynomials, let us consider the family $C(n,2)$ of two-element subsets of $[n]$ and the family of independent random variables $X = (X_{e})_{e \in C(n,2)}$, such that $\mathbb P(X_{e} = 1) = 1 - \mathbb P(X_{e} = 0) = p$ (i.e., $X_{e}$ indicates whether the edge $e$ has been selected or not). Denote moreover by $\textup{Aut}(H)$ the automorphism group of $H$ and note that \begin{displaymath} Y_H(n,p) = \frac{1}{\#\textup{Aut}(H)} \sum_{{\bf i} \in [n]\uu{k}} \prod_{{v,w \in [k]}\atop{v < w, \{v,w\} \in E(H)}} X_{\{i_v,i_w\}}. \end{displaymath} The right-hand side above is a homogeneous tetrahedral polynomial of degree $e_H$.
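For concreteness, the polynomial representation of $Y_H(n,p)$ can be sanity-checked by brute force; the following Python sketch (an illustration only, not part of the argument) verifies for $H = K_3$ that the sum over injective triples counts every triangle exactly $\#\textup{Aut}(K_3) = 6$ times:

```python
import random
from itertools import combinations, permutations

n, p = 7, 0.5
random.seed(0)
# a sample of G(n, p): X[e] = 1 iff the edge e was selected
X = {frozenset(e): int(random.random() < p) for e in combinations(range(n), 2)}

# polynomial representation of Y_{K_3}: sum over injective ordered triples,
# divided by #Aut(K_3) = 6
poly_count = sum(
    X[frozenset((i, j))] * X[frozenset((j, k))] * X[frozenset((i, k))]
    for i, j, k in permutations(range(n), 3)
) // 6

# direct count: 3-element vertex sets spanning a triangle
direct_count = sum(
    X[frozenset((i, j))] * X[frozenset((j, k))] * X[frozenset((i, k))]
    for i, j, k in combinations(range(n), 3)
)
assert poly_count == direct_count
```

The same pattern works for any fixed $H$: sum the edge-indicator products over injective maps ${\bf i}\colon [k]\to [n]$ and divide by $\#\textup{Aut}(H)$.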
Moreover the variables $X_{\{v,w\}}$ satisfy \begin{displaymath} \mathbb E \exp\Big(X_{\{v,w\}}^2\log(1/p)\Big) = 1 - p + p\cdot\frac{1}{p} \le 2 \end{displaymath} and \begin{displaymath} \mathbb E \exp\Big(X_{\{v,w\}}^2\log 2\Big) \le 2, \end{displaymath} which implies that $\|X_{\{v,w\}}\|_{\psi_2} \le (\log(1/p))^{-1/2}\wedge (\log(2))^{-1/2} \le \sqrt{2} (\log(2/p))^{-1/2}$. We can thus apply Theorem~\ref{thm:subgaussian_intro} to $Y_H(n,p)$ and obtain \begin{align}\label{eq:general_graph} \mathbb P\big(|Y_{H}(n,p) - \mathbb E Y_{H}(n,p) |\ge t\big)\le 2\exp\bigg(-\frac{1}{C_k}\min_{1 \le d\le k}\min_{\mathcal{J} \in P_d} \Big(\frac{t}{L_p^d\|\mathbb E\mathbf{D}^d f(X)\|_\mathcal{J}}\Big)^{2/\#\mathcal{J}}\bigg), \end{align} where $L_p = \sqrt{2} \big(\log(2/p)\big)^{-1/2}$ and $f \colon \mathbb{R}^{C(n,2)} \to \mathbb{R}$ is given by \begin{align*} f((x_{e})_{e \in C(n,2)}) = \frac{1}{\# \textup{Aut}(H)} \sum_{{\bf i} \in [n]\uu{k}} \prod_{{v,w \in [k]}\atop{v < w, \{v,w\} \in E(H)}} x_{\{i_v,i_w\}}. \end{align*} Deviation inequalities for subgraph counts have been studied by many authors; see e.g. \cite{KimVuConc,JaRuInf,VuConc,JanOleRu,KimVUtr,ChatterjeeTr,DeMarcoKahnTr,DeMarcoKahnCl}. As it turns out, the lower tail $\mathbb P(Y_H(n,p) \le \mathbb E Y_H(n,p) - t)$ is easier to handle than the upper tail $\mathbb P(Y_H(n,p) \ge \mathbb E Y_H(n,p) +t)$, and it is also lighter. Since our inequalities concern $|Y_H(n,p) - \mathbb E Y_H(n,p)|$, we cannot hope to recover optimal lower tail estimates; however, we can still hope to get bounds which in some range of the parameters $n,p$ agree with optimal upper tail estimates. Of particular importance in the literature is the law of large numbers regime, i.e., the case when $t = \varepsilon \mathbb E Y_H(n,p)$.
In~\cite{JanOleRu} the authors prove that for every $\varepsilon>0$ such that $\mathbb P\big(Y_H(n,p) \ge (1+\varepsilon)\mathbb E Y_H(n,p)\big) > 0$, \begin{equation}\label{eq:JOR} \exp\left(-C(H,\varepsilon)M_H^\ast(n,p)\log\frac{1}{p}\right) \le \mathbb P\big(Y_H(n,p) \ge (1+\varepsilon)\mathbb E Y_H(n,p)\big) \le \exp\big(-c(H,\varepsilon)M_H^\ast(n,p)\big) \end{equation} for certain constants $c(H,\varepsilon), C(H,\varepsilon)$ and a certain function $M_H^\ast(n,p)$. Since the general definition of $M_H^\ast$ is rather involved, we will skip the details (in the examples considered in the sequel we will provide specific formulas). Note that if one disregards the constants depending only on $H$ and $\varepsilon$, the lower and upper estimates above differ by the factor $\log(1/p)$ in the exponent. To the best of our knowledge, providing lower and upper bounds for general $H$ which agree up to multiplicative constants in the exponent (depending only on $H$ and $\varepsilon$, but not on $n$ or $p$) is an open problem. We will now specialize to the case when $H$ is a cycle. For simplicity we will first present the case of the triangle $K_3$ (the clique with three vertices). For this graph the upper bound from \cite{JanOleRu} has recently been strengthened to match the lower one (up to a constant depending only on $\varepsilon$) by Chatterjee \cite{ChatterjeeTr} and by DeMarco and Kahn \cite{DeMarcoKahnTr} (who also obtained a similar result for general cliques \cite{DeMarcoKahnCl}). In the next section we show that if $p$ is not too small, the inequality \eqref{eq:general_graph} also allows us to recover the optimal upper bound. In Section \ref{sec:cycles} we provide an upper bound for cycles of arbitrary (fixed) length $k$, which is optimal for $p \ge n^{-\frac{k-2}{2(k-1)}} \log^{-\frac12} n$. \subsubsection{Counting triangles} Assume that $H = K_3$ and let us analyse the behaviour of $\|\mathbb E \mathbf{D}^d f(X)\|_\mathcal{J}$ for $d = 1,2,3$.
Of course in this case $\#\textup{Aut}(H) = 6$. We have for any $e = \{v,w\}$, $v,w \in [n]$, \begin{displaymath} \frac{\partial}{\partial x_e} f(x) = \sum_{i \in [n]\setminus\{v,w\}} x_{\{i,v\}} x_{\{i,w\}} \end{displaymath} and so $\|\mathbb E \mathbf{D} f(X)\|_{\{1\}}= (n-2)p^2 \sqrt{n(n-1)/2} \le n^2p^2$. For $e_1 = e_2$ or when $e_1$ and $e_2$ do not have a common vertex, we have $\frac{\partial^2}{\partial x_{e_1}\partial x_{e_2}} f = 0$, whereas for $e_1,e_2$ sharing exactly one vertex, we have \begin{displaymath} \frac{\partial^2}{\partial x_{e_1}\partial x_{e_2}} f (x) = x_{\{v,w\}}, \end{displaymath} where $v,w$ are the vertices of $e_1,e_2$ distinct from the common one. Therefore \begin{displaymath} \mathbb E \mathbf{D}^2 f(X) = p (\ind{\textrm{$e_1,e_2$ have exactly one common vertex}})_{e_1,e_2 \in C(n,2)}. \end{displaymath} Using the fact that $\mathbb E \mathbf{D}^2 f(X)$ is symmetric and for each $e_1$ the sum of entries of $\mathbb E \mathbf{D}^2 f(X)$ in the row corresponding to $e_1$ equals $2p(n-2)$, we obtain $\|\mathbb E \mathbf{D}^2 f(X)\|_{\{1\}\{2\}} = 2p(n-2) \le 2pn$. One can also easily see that $\|\mathbb E \mathbf{D}^2 f(X)\|_{\{1,2\}} = p\sqrt{n(n-1)(n-2)} \le pn^{3/2}$. Finally \begin{displaymath} \frac{\partial^3}{\partial x_{e_1}\partial x_{e_2} \partial x_{e_3}} f = \ind{\textrm{$e_1,e_2,e_3$ form a triangle}} \end{displaymath} and thus $\|\mathbb E \mathbf{D}^3f(X)\|_{\{1,2,3\}} = \sqrt{n(n-1)(n-2)} \le n^{3/2}$. Moreover, due to symmetry we have \begin{displaymath} \|\mathbb E \mathbf{D}^3 f(X)\|_{\{1,2\}\{3\}} = \|\mathbb E \mathbf{D}^3 f(X)\|_{\{1,3\}\{2\}} = \|\mathbb E \mathbf{D}^3 f(X)\|_{\{2,3\}\{1\}}. \end{displaymath} Consider arbitrary $(x_{e_1})_{e_1 \in C(n,2)}$ and $(y_{e_2,e_3})_{e_2,e_3 \in C(n,2)}$ of norm one. 
We have \begin{align*} &\sum_{e_1,e_2,e_3} \ind{\textrm{$e_1,e_2,e_3$ form a triangle}}x_{e_1}y_{e_2,e_3} \le \sqrt{\sum_{e_1} \Big( \sum_{e_2,e_3}\ind{\textrm{$e_1,e_2,e_3$ form a triangle}}y_{e_2,e_3} \Big)^2}\\ &\le \sqrt{\sum_{e_1} \Big( \sum_{e_2,e_3}\ind{\textrm{$e_1,e_2,e_3$ form a triangle}} \Big) \Big( \sum_{e_2,e_3}\ind{\textrm{$e_1,e_2,e_3$ form a triangle}}y_{e_2,e_3}^2 \Big) }\\ &= \sqrt{2(n-2)} \sqrt{ \sum_{e_2,e_3} y_{e_2,e_3}^2 \sum_{e_1} \ind{\textrm{$e_1,e_2,e_3$ form a triangle}} } \le \sqrt{2(n-2)}, \end{align*} where the first two inequalities follow by the Cauchy-Schwarz inequality and the last one from the fact that for each $e_2,e_3$ there is at most one $e_1$ such that $e_1,e_2,e_3$ form a triangle. We have thus obtained $\|\mathbb E \mathbf{D}^3 f(X)\|_{\{1,2\}\{3\}} = \|\mathbb E \mathbf{D}^3 f(X)\|_{\{1,3\}\{2\}} = \|\mathbb E \mathbf{D}^3 f(X)\|_{\{2,3\}\{1\}} \le \sqrt{2n}$. It remains to estimate $\|\mathbb E \mathbf{D}^3 f(X)\|_{\{1\}\{2\}\{3\}}$. For all $(x_e)_{e\in C(n,2)}$, $(y_e)_{e\in C(n,2)}$, $(z_e)_{e\in C(n,2)}$ of norm one we have by the Cauchy-Schwarz inequality \begin{align*} &\sum_{e_1,e_2,e_3}\ind{\textrm{$e_1,e_2,e_3$ form a triangle}}x_{e_1}y_{e_2} z_{e_3} = \sum_{(i_1,i_2,i_3) \in [n]\uu{3}}x_{\{i_1,i_2\}}y_{\{i_2,i_3\}}z_{\{i_1,i_3\}}\\ &\le \sum_{i_1 \in [n]} \Big(\sum_{(i_2,i_3) \in ([n]\setminus{\{i_1\}})\uu{2}} x_{\{i_1,i_2\}}^2 z_{\{i_1,i_3\}}^2\Big)^{1/2}\Big(\sum_{(i_2,i_3) \in ([n]\setminus{\{i_1\}})\uu{2}} y_{\{i_2,i_3\}}^2\Big)^{1/2}\\ &\le \sqrt{2}\sum_{i_1 \in [n]}\Big(\sum_{i_2\in [n]\setminus{\{i_1\}}} x_{\{i_1,i_2\}}^2\Big)^{1/2}\Big(\sum_{i_3\in [n]\setminus{\{i_1\}}} z_{\{i_1,i_3\}}^2\Big)^{1/2}\\ &\le \sqrt{2}\Big(\sum_{(i_1,i_2)\in [n]\uu{2}} x_{\{i_1,i_2\}}^2\Big)^{1/2}\Big(\sum_{(i_1,i_3)\in [n]\uu{2}} z_{\{i_1,i_3\}}^2\Big)^{1/2} \le 2^{3/2}, \end{align*} which gives $\|\mathbb E \mathbf{D}^3 f(X)\|_{\{1\}\{2\}\{3\}} \le 2^{3/2}$. 
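The combinatorial facts underlying the bounds for $\mathbb E \mathbf{D}^2 f(X)$ can also be double-checked numerically. The sketch below (a Python illustration, not part of the proof) verifies that the $0$-$1$ matrix indexed by pairs of edges sharing exactly one vertex has constant row sums $2(n-2)$ (hence, being symmetric with nonnegative entries and constant row sums, operator norm $2(n-2)$) and squared Hilbert-Schmidt norm $n(n-1)(n-2)$:

```python
from itertools import combinations

n = 6
edges = list(combinations(range(n), 2))

def share_one_vertex(e1, e2):
    return len(set(e1) & set(e2)) == 1

# row sums of the matrix (1{e1, e2 share exactly one vertex})_{e1, e2}:
# an edge {v, w} meets 2(n-2) other edges in exactly one vertex
row_sums = [sum(share_one_vertex(e1, e2) for e2 in edges) for e1 in edges]
assert all(s == 2 * (n - 2) for s in row_sums)

# squared Hilbert-Schmidt norm: ordered pairs of edges sharing one vertex
# (choose the common vertex, then an ordered pair of distinct endpoints)
hs_sq = sum(share_one_vertex(e1, e2) for e1 in edges for e2 in edges)
assert hs_sq == n * (n - 1) * (n - 2)
```

Multiplying by $p$ recovers $\|\mathbb E \mathbf{D}^2 f(X)\|_{\{1\}\{2\}} = 2p(n-2)$ and $\|\mathbb E \mathbf{D}^2 f(X)\|_{\{1,2\}} = p\sqrt{n(n-1)(n-2)}$.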
Using \eqref{eq:general_graph} together with the above estimates, we obtain \begin{prop} \label{prop:triangle} For any $t >0$, \begin{multline*} \mathbb P\big(|Y_{K_3}(n,p) - \mathbb E Y_{K_3}(n,p) | \ge t\big)\\ \le 2\exp\Big(-\frac{1}{C}\min\Big(\frac{t^2}{L_p^6 n^3 + L_p^4p^2n^3 + L_p^2p^4 n^4},\frac{t}{L_p^3n^{1/2} +L_p^2 p n },\frac{t^{2/3}}{L_p^2}\Big)\Big), \end{multline*} where $L_p = \big(\log(2/p)\big)^{-1/2}$. \end{prop} In particular for $t = \varepsilon \mathbb E Y_{K_3}(n,p) = \varepsilon \binom{n}{3} p^3$, \begin{multline*} \mathbb P\big(|Y_{K_3}(n,p) - \mathbb E Y_{K_3}(n,p) | \ge \varepsilon \mathbb E Y_{K_3}(n,p) \big) \\ \le 2\exp\Big(-\frac{1}{C} \min\Big(\varepsilon^2 n^3p^6 \log^3(2/p), (\varepsilon^2 \land \varepsilon^{2/3}) n^2p^2\log(2/p)\Big)\Big). \end{multline*} Thus for $p \ge n^{-\frac14}\log^{-\frac12} n$ we obtain \begin{displaymath} \mathbb P\big(|Y_{K_3}(n,p) - \mathbb E Y_{K_3}(n,p) | \ge \varepsilon \mathbb E Y_{K_3}(n,p) \big) \le 2\exp\Big(-\frac{1}{C}(\varepsilon^2 \land \varepsilon^{2/3}) n^2p^2\log(2/p)\Big). \end{displaymath} By Corollary 1.7 in \cite{JanOleRu}, if $p \ge 1/n$, then $\frac{1}{C} n^2p^2 \le M^\ast_{K_3}(n,p) \le C n^2p^2$ (recall \eqref{eq:JOR}) and so for $p \ge n^{-1/4} \log^{-1/2} n$ the estimate obtained from the above proposition is optimal. As already mentioned, the optimal estimate has recently been obtained in the full range of $p$ by Chatterjee, DeMarco and Kahn. Unfortunately, it seems that using our general approach we are not able to recover the full strength of their result. From Proposition \ref{prop:triangle} one can also see that Theorem \ref{thm:subgaussian_intro}, when specialized to polynomials in $0$-$1$ random variables, is not directly comparable with the family of Kim-Vu inequalities.
As shown in \cite{JaRuInf} (see Table 2 therein), various inequalities by Kim and Vu give for the triangle counting problem exponents $-\min(n^{1/3}p^{1/6}, n^{1/2}p^{1/2})$, $-n^{3/2}p^{3/2}$, $-np$ (disregarding logarithmic factors). Thus for ``large'' $p$ our inequality performs better than those by Kim and Vu, whereas for ``small'' $p$ this is not the case (note that the Kim-Vu inequalities give meaningful bounds for $p \ge C n^{-1}$ while ours only for $p \ge C n^{-1/2}$). As already mentioned in the introduction, the fact that our inequalities degenerate for small $p$ is not surprising, as even for sums of independent $0$-$1$ random variables, when $p$ becomes small, general inequalities for sums of independent random variables with sub-Gaussian tails do not recover the correct tail behaviour (the $\|\cdot\|_{\psi_2}$ norm of the summands becomes much larger than the variance). \subsubsection{Counting cycles} \label{sec:cycles} We will now generalize Proposition \ref{prop:triangle} to cycles of arbitrary length. If $H$ is a cycle of length $k$, then by Corollary 1.7 in~\cite{JanOleRu}, $\frac{1}{C} n^2p^2 \le M^\ast_{H}(n,p) \le C n^2p^2$ for $p\ge 1/n$. Thus the bounds for the upper tail from~\eqref{eq:JOR} imply that for $p \ge 1/n$, \begin{displaymath} \exp\big(-C(k,\varepsilon)n^2p^2\log(1/p)\big) \le \mathbb P\big(Y_H(n,p) \ge (1+\varepsilon)\mathbb E Y_H(n,p) \big) \le \exp\big(-c(k,\varepsilon)n^2p^2\big) \end{displaymath} for every $\varepsilon > 0$ for which the above probability is not zero. We will show that, similarly to the triangle case, Theorem~\ref{thm:subgaussian_intro} allows us to strengthen the upper bound if $p$ is not too small with respect to $n$. More precisely, we have the following \begin{prop}\label{prop:cycles} Let $H$ be a cycle of length $k$.
Then for every $t > 0$, \begin{displaymath} \mathbb P\big(|Y_H(n,p) - \mathbb E Y_H(n,p)| \ge t\big) \le 2\exp\Big(-\frac{1}{C_k} \Big(\frac{t^2}{L_p^{2k}n^k} \wedge \min_{\substack{1\le l\le d\le k \colon\\ d<k\;\textup{or}\; l>1}}\Big(\frac{t^{2/l}}{L_p^{2d/l} p^{2(k-d)/l}n^{(2k-d-l)/l}}\Big)\Big)\Big), \end{displaymath} where $L_p = \big(\log(2/p)\big)^{-1/2}$. In particular for every $\varepsilon > 0$ and $p \ge n^{-\frac{k-2}{2(k-1)}}\log^{-1/2} n$, \begin{displaymath} \mathbb P\big(Y_H(n,p) \ge (1+\varepsilon)\mathbb E Y_H(n,p)\big) \le 2\exp\Big(-\frac{1}{C_k} (\varepsilon^2 \land \varepsilon^{2/k}) n^2p^2\log(2/p)\Big). \end{displaymath} \end{prop} To prove the above proposition we need to estimate the corresponding $\|\cdot\|_\mathcal{J}$ norms. Since a major part of the argument does not rely on the fact that $H$ is a cycle and bounds on $\|\cdot\|_\mathcal{J}$ norms may be of independent interest, we will now consider arbitrary graphs. Let thus $H$ be a fixed graph with no isolated vertices. Similarly to~\cite{JanOleRu}, it will be more convenient to count ``ordered'' copies of a graph $H$ in $G(n,p)$. Namely, for $H = ([k], E_H)$, each sequence of $k$ distinct vertices in the clique $K_n$, ${\bf i} \in [n]^{\underline{k}}$ determines an ordered copy $G_{{\bf i}}$ of $H$ in $K_n$, where $G_{\bf i} = {\bf i}(H)$, i.e., $V(G_{\bf i}) = {\bf i}([k])$ and $E(G_{\bf i}) = \{ {\bf i}(e) \colon e \in E(H) \} = \left\{ \{i_u, i_v\} \colon \{u, v\} \in E(H) \right\}$. Define \[ X_H(n,p) := \sum_{{\bf i} \in [n]^{\underline{k}}} \ind{G_{{\bf i}} \subseteq G(n,p)} = \sum_{{\bf i} \in [n]^{\underline{k}}} \; \prod_{\tilde{e} \in E(G_{\bf i})} X_{\tilde{e}}. \] Clearly $X_H(n,p) = \# \textup{Aut}(H) Y_H(n,p)$ and $X_H(n,p) = f(X)$ where \begin{align}\label{eq:counting_function} f(x) := \sum_{{\bf i} \in [n]^{\underline{k}}} \; \prod_{\tilde{e} \in E(G_{{\bf i}})} x_{\tilde{e}} = \sum_{{\bf i} \in [n]^{\underline{k}}} \; \prod_{e \in E(H)} x_{{\bf i}(e)}. 
\end{align} A sequence of distinct edges $(\tilde{e}_1, \ldots, \tilde{e}_d) \in E(K_n)^{\underline{d}}$ determines a subgraph $G_0 \subseteq K_n$ with $V(G_0) = \bigcup_{i=1}^d \tilde{e}_i$, $E(G_0) = \{\tilde{e}_1, \ldots, \tilde{e}_d\}$. Note that \[ \partial_{G_0} f(x) := \frac{\partial^d f(x)}{\partial x_{\tilde{e}_1} \cdots \partial x_{\tilde{e}_d}} = \sum_{{\bf i} \in [n]^{\underline{k}} \colon G_{{\bf i}} \supseteq G_0} \; \prod_{\tilde{e} \in E(G_{{\bf i}}) \setminus E(G_0)} x_{\tilde{e}} \] and thus \[ \mathbb E \partial_{G_0} f(X) = p^{e(H) - d} \#\{ {\bf i} \in [n]^{\underline{k}} \colon G_0 \subseteq G_{\bf i}\}. \] Consider ${\bf e} = (e_1, \ldots, e_d) \in E(H)^{\underline{d}}$ and let $H_0({\bf e})$ be the subgraph of $H$ with $V(H_0({\bf e})) = \bigcup_{i=1}^d e_i$, $E(H_0({\bf e})) = \{e_1, \ldots, e_d\}$. Clearly, for any ${\bf i} \in [n]^{\underline{k}}$, ${\bf i}(H_0({\bf e})) \subseteq G_{\bf i}$. We write $(e_1, \ldots, e_d) \simeq (\tilde{e}_1, \ldots, \tilde{e}_d)$ if there exists ${\bf i} \in [n]^{\underline{k}}$ such that ${\bf i}(e_j) = \tilde{e}_j$ for $j = 1, \ldots, d$. Note that given $(\tilde{e}_1, \ldots, \tilde{e}_d) \in E(K_n)^{\underline{d}}$ and the corresponding graph $G_0$, \begin{align*} \#\{ {\bf i} \in [n]^{\underline{k}} \colon G_0 \subseteq G_{\bf i}\} &= \sum_{{\bf e} \in E(H)^{\underline{d}}} \#\{ {\bf i} \in [n]^{\underline{k}} \colon {\bf i}(e_j) = \tilde{e}_j \text{ for $j = 1, \ldots, d$}\} \\ &= \sum_{{\bf e} \in E(H)^{\underline{d}} } 2^{s(H_0({\bf e}))} (n - v(H_0({\bf e})))^{\underline{k - v(H_0({\bf e}))}} \ind{(\tilde{e}_1, \ldots, \tilde{e}_d) \simeq {\bf e}}, \end{align*} where for a graph $G$, $v(G)$ is the number of vertices of $G$ and $s(G)$ is the number of edges in $G$ that are adjacent to no other edge.
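Since the counting machinery above is purely combinatorial, the relation $X_H(n,p) = \#\textup{Aut}(H)\, Y_H(n,p)$ between ordered and unordered copy counts can be sanity-checked by brute force on small instances. The sketch below (in Python; the graph size, edge probability and seed are arbitrary choices for illustration, not values taken from the text) does this for $H = C_k$ a cycle, for which $\#\textup{Aut}(C_k) = 2k$ (the dihedral group: $k$ rotations times $2$ reflections).

```python
# Brute-force check of X_H = #Aut(H) * Y_H for H = C_k (a cycle of length k).
# Illustrative sketch only: n, p, k and the seed are arbitrary assumptions.
import itertools
import random

def random_graph(n, p, seed=0):
    """Edge set of a G(n, p) sample, as frozensets of vertex pairs."""
    rng = random.Random(seed)
    return {frozenset(e) for e in itertools.combinations(range(n), 2)
            if rng.random() < p}

def ordered_cycle_copies(edges, n, k):
    """X_H: sequences of k distinct vertices whose consecutive pairs
    (cyclically) are all edges of the graph."""
    count = 0
    for seq in itertools.permutations(range(n), k):
        if all(frozenset((seq[j], seq[(j + 1) % k])) in edges for j in range(k)):
            count += 1
    return count

def unordered_cycle_copies(edges, n, k):
    """Y_H: each subgraph copy of C_k counted once.  A canonical
    representative per copy: start at the smallest vertex and break the
    reflection symmetry by requiring seq[1] < seq[-1]."""
    count = 0
    for verts in itertools.combinations(range(n), k):
        for seq in itertools.permutations(verts):
            if seq[0] == min(verts) and seq[1] < seq[-1]:
                if all(frozenset((seq[j], seq[(j + 1) % k])) in edges
                       for j in range(k)):
                    count += 1
    return count

n, p, k = 7, 0.5, 4
E = random_graph(n, p)
X = ordered_cycle_copies(E, n, k)
Y = unordered_cycle_copies(E, n, k)
assert X == 2 * k * Y  # each copy of C_k is hit by exactly 2k vertex sequences
```

Running the same check with $k = 3$ recovers the factor $6 = \#\textup{Aut}(K_3)$ from the triangle case.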
Therefore, \[ \mathbb E \mathbf{D}^d f(X) = p^{e(H) - d} \sum_{{\bf e} \in E(H)^{\underline{d}} } 2^{s(H_0({\bf e}))} (n - v(H_0({\bf e})))^{\underline{k - v(H_0({\bf e}))}} \left( \ind{(\tilde{e}_1, \ldots, \tilde{e}_d) \simeq {\bf e}} \right)_{(\tilde{e}_1, \ldots, \tilde{e}_d)}. \] Let $\mathcal{J}$ be a partition of $[d]$. By the triangle inequality for the norms $\norm{\cdot}_{\mathcal{J}}$, \begin{equation}\label{norms-for-subgraph-count} \norm{\mathbb E \mathbf{D}^d f(X)}_{\mathcal{J}} \le p^{e(H) - d} \sum_{{\bf e}\in E(H)^{\underline{d}}} 2^{s(H_0({\bf e}))} n^{k - v(H_0({\bf e}))} \norm{\left( \ind{(\tilde{e}_1, \ldots, \tilde{e}_d) \simeq {\bf e}} \right)_{(\tilde{e}_1, \ldots, \tilde{e}_d)}}_\mathcal{J}. \end{equation} The norms appearing on the right hand side of~(\ref{norms-for-subgraph-count}) are handled by the following \begin{lemma}\label{lemma:norms-for-cycles} Fix $1 \le d \le e(H)$, ${\bf e} = (e_1, \ldots, e_d) \in E(H)^{\underline{d}}$ and $\mathcal{J} = \{J_1,\ldots,J_l\} \in P_d$. Let $H_0 = H_0({\bf e})$ and for $r = 1, \ldots, l$, let $H_r$ be the subgraph of $H_0$ spanned by the set of edges $\{e_j \colon j \in J_r\}$. Then, \begin{multline*} \norm{\left( \ind{(\tilde{e}_1, \ldots, \tilde{e}_d) \simeq (e_1, \ldots, e_d)} \right)_{(\tilde{e}_1, \ldots, \tilde{e}_d)}}_{\mathcal{J}} \le 2^{-s(H_0) + \frac12 \sum_{r=1}^l s(H_r)} \\ \times n^{\frac12 \# \{ v \in V(H_0) \colon \textup{$v \in V(H_r)$ for exactly one $r \in [l]$} \}}. \end{multline*} \end{lemma} \begin{proof} We shall bound the sum \begin{equation}\label{sum-defining-the-norm-for-subgraph-count} \sum_{\tilde{e}_1, \ldots, \tilde{e}_d \in E(K_n)} \ind{(\tilde{e}_1, \ldots, \tilde{e}_d) \simeq {\bf e}} \prod_{r=1}^l x_{(\tilde{e}_j)_{j \in J_r}}^{(r)} \end{equation} under the constraints $\sum_{(\tilde{e}_j)_{j \in J_r} \in E(K_n)^{J_r}} \left(x_{(\tilde{e}_j)_{j \in J_r}}^{(r)}\right)^2 \le 1$ for $r = 1, \ldots, l$. Note that we can assume $x^{(r)} \ge 0$ for all $r \in [l]$.
Rewrite the sum~(\ref{sum-defining-the-norm-for-subgraph-count}) as the sum over a sequence of vertices instead of edges: \[ 2^{-s(H_0)} \sum_{{\bf i} \in [n]^{\underline{V(H_0)}}} \; \prod_{r=1}^l x_{({\bf i}(e_j))_{j \in J_r}}^{(r)}, \] where for two sets $A,B$, $A^{\underline{B}}$ is the set of 1-1 functions from $B$ to $A$. Further note that it is enough to prove the desired bound for the sum \begin{equation}\label{sum2-for-the-norm-for-subgraph-count} 2^{-s(H_0)} \sum_{{\bf i} \in [n]^{\underline{V(H_0)}}} \; \prod_{r=1}^l y_{{\bf i}_{V(H_r)}}^{(r)} \end{equation} under the constraints $2^{-s(H_r)} \sum_{{\bf i} \in [n]^{\underline{V(H_r)}}} \left( y_{{\bf i}_{V(H_r)}}^{(r)} \right)^2 \le 1$ for each $r = 1, \ldots, l$. Indeed, given $x$'s, for each $r = 1, \ldots, l$ and all ${\bf i} \in [n]^{\underline{V(H_r)}}$ take $y_{{\bf i}_{V(H_r)}}^{(r)} = x_{({\bf i}(e_j))_{j \in J_r}}^{(r)}$ and notice that the sum~(\ref{sum2-for-the-norm-for-subgraph-count}) equals the sum~(\ref{sum-defining-the-norm-for-subgraph-count}) while the constraints for $x$'s imply the constraints for $y$'s. Finally, by homogeneity and the fact that the sum \eqref{sum2-for-the-norm-for-subgraph-count} does not depend on the full graph structure but only on the sets of vertices of the graphs $H_r$, the lemma will follow from the following statement: For a sequence of finite, non-empty sets $V_1, \ldots, V_l$, let $V = V_1 \cup \ldots \cup V_l$. Then \begin{equation}\label{sum3-for-the-norm-for-subgraph-count} \sum_{{\bf i} \in [n]^{\underline{V}}} \; \prod_{r=1}^l y_{{\bf i}_{V_r}}^{(r)} \le n^{\frac12 \# \{ v \in V \colon \text{$v \in V_r$ for exactly one $r \in [l]$} \}} \end{equation} for $y^{(1)}, \ldots, y^{(l)} \ge 0$ satisfying \begin{equation}\label{constraint-for-then-norm-for-subgraph-count} \sum_{{\bf i} \in [n]^{\underline{V_r}}} \left( y_{{\bf i}_{V_r}}^{(r)} \right)^2 \le 1. \end{equation} We prove (\ref{sum3-for-the-norm-for-subgraph-count}) by induction on $\# V$.
For $V = \emptyset$ (and $l=0$), (\ref{sum3-for-the-norm-for-subgraph-count}) holds trivially. For the induction step fix any $v_0 \in V$ and put $R = \{r \in [l] \colon v_0 \in V_r\}$. We write \[ \sum_{{\bf i} \in [n]^{\underline{V}}}\;\prod_{r=1}^l y_{{\bf i}_{V_r}}^{(r)} = \sum_{{\bf i} \in [n]^{\underline{V \setminus \{v_0\}}}} \left( \left(\prod_{r \in [l] \setminus R} y_{{\bf i}_{V_r}}^{(r)} \right) \sum_{i_{v_0} \in [n] \setminus {\bf i}(V \setminus \{v_0\})} \; \prod_{r \in R} y_{{\bf i}_{V_r}}^{(r)} \right). \] We bound the inner sum using the Cauchy-Schwarz inequality. If $\# R \ge 2$, we get \[ \sum_{i_{v_0} \in [n] \setminus {\bf i}(V \setminus \{v_0\})} \; \prod_{r \in R} y_{{\bf i}_{V_r}}^{(r)} \le \prod_{r \in R} \left( \sum_{i_{v_0} \in [n] \setminus {\bf i}(V \setminus \{v_0\})} \left( y_{{\bf i}_{V_r}}^{(r)} \right)^2 \right)^{1/2}, \] and if $R = \{r_0\}$ then \[ \sum_{i_{v_0} \in [n] \setminus {\bf i}(V \setminus \{v_0\})} y_{{\bf i}_{V_{r_0}}}^{(r_0)} \le \sqrt{n} \left( \sum_{i_{v_0} \in [n] \setminus {\bf i}(V \setminus \{v_0\})} \left( y_{{\bf i}_{V_{r_0}}}^{(r_0)} \right)^2 \right)^{1/2}. \] Now, for each $r \in R$ put $W_r = V_r \setminus \{v_0\}$ and define \[ z_{{\bf i}_{W_r}}^{(r)} = \left( \sum_{i_{v_0} \in [n] \setminus {\bf i}(W_r )} \left( y_{{\bf i}_{V_r}}^{(r)} \right)^2 \right)^{1/2} \text{ for all ${\bf i}_{W_r} \in [n]^{\underline{W_r}}$}. \] Note that if $W_r = \emptyset$ then $z^{(r)}$ is a scalar and by~(\ref{constraint-for-then-norm-for-subgraph-count}), $0 \le z^{(r)} \le 1$. For $r \in [l] \setminus R$, just put $W_r = V_r$ and $z^{(r)} \equiv y^{(r)}$. Let $L = \{ r \in [l] \colon W_r \neq \emptyset\}$. Combining the estimates obtained above, we arrive at \[ \sum_{{\bf i} \in [n]^{\underline{V}}} \; \prod_{r=1}^l y_{{\bf i}_{V_r}}^{(r)} \le (\sqrt{n})^{\ind{\text{$v_0 \in V_r$ for exactly one $r \in [l]$}}} \sum_{{\bf i} \in [n]^{\underline{V \setminus \{v_0\}}}} \; \prod_{r \in L} z_{{\bf i}_{W_r}}^{(r)}. 
\] Now we use the induction hypothesis for the sequence of sets $(W_r)_{r \in L}$ and the vectors $z^{(r)}$, $r \in L$ (note that $\sum_{{\bf i}\in[n]^{\underline{W_r}}}(z_{{\bf i}_{W_r}}\ub{r})^2 \le 1$). \end{proof} \paragraph{Remark} The bound in Lemma~\ref{lemma:norms-for-cycles} is essentially optimal, at least for large $n$, say $n \ge 2 k$. To see this let us analyse optimality of~(\ref{sum3-for-the-norm-for-subgraph-count}) under the constraints~(\ref{constraint-for-then-norm-for-subgraph-count}) (it is easy to see that this is equivalent to the optimality in the original problem). Denote $V_0 = \{ v \in V \colon v \in V_r \text{ for exactly one $r \in [l]$}\}$. Fix any ${\bf i}^{(0)} \in [n]^{\underline{k}}$. Then for $r =1, \ldots, l$ take \[ y_{{\bf i}_{V_r}}^{(r)} = \begin{cases} n^{-\frac12 \#(V_r \cap V_0)} & \text{if ${\bf i}_{V_r \setminus V_0} \equiv {\bf i}_{V_r \setminus V_0}^{(0)}$} \\ 0 & \text{otherwise.} \end{cases} \] The vectors $y\ub{r}$ satisfy the constraints~(\ref{constraint-for-then-norm-for-subgraph-count}) and \[ \begin{split} \sum_{{\bf i} \in [n]^{\underline{V}}} \; \prod_{r=1}^l y_{{\bf i}_{V_r}}^{(r)} &= \sum_{{\bf i} \in [n]^{\underline{V}} \colon {\bf i}_{V \setminus V_0} \equiv {\bf i}_{V \setminus V_0}^{(0)}} \prod_{r=1}^l n^{-\frac12 \#(V_r \cap V_0)} \\[1ex] &= \left(n - \#(V \setminus V_0)\right)^{\underline{\# V_0}} \, n^{-\frac12 \# V_0} \ge (n/2)^{\# V_0} n^{-\frac12 \# V_0} = 2^{-\# V_0} n^{\frac12 \# V_0}. \end{split} \] \medskip Combining Lemma \ref{lemma:norms-for-cycles} with \eqref{norms-for-subgraph-count} we obtain \begin{lemma} \label{le:norm_estimates_cycles} Let $H$ be any graph with $k$ vertices, which are not isolated, and let $f$ be defined by \eqref{eq:counting_function}. 
Then for any $1\le d\le e(H)$ and any $\mathcal{J} = \{J_1,\ldots,J_l\} \in P_d$, \begin{multline*} \|\mathbb E \mathbf{D}^d f(X)\|_\mathcal{J} \\ \le p^{e(H)-d}\sum_{{\bf e}\in E(H)^{\underline{d}}}2^{\frac{1}{2}\sum_{r=1}^l s(H_r({\bf e}))} n^{k - v(H_0({\bf e})) + \frac{1}{2}\#\{v\in V(H_0({\bf e}))\colon \textup{$v \in V(H_r({\bf e}))$ for exactly one $r\in[l]$}\}}, \end{multline*} where for ${\bf e} \in E(H)^{\underline{d}}$ and $r \in [l]$, $H_r({\bf e})$ is the subgraph of $H_0({\bf e})$ spanned by $\{e_j\colon j\in J_r\}$. \end{lemma} We are now ready for \begin{proof}[Proof of Proposition \ref{prop:cycles}] We will use Lemma \ref{le:norm_estimates_cycles} to estimate $\|\mathbb E \mathbf{D}^d f(X)\|_\mathcal{J}$ for any $d \le k$ and $\mathcal{J} \in P_d$ with $\#\mathcal{J} = l$. Note that for any ${\bf e}\in E(H)^{\underline{d}}$, \begin{multline*} v(H_0({\bf e})) - \frac12 \# \{ v \in V(H_0({\bf e})) \colon \text{$v \in V(H_r({\bf e}))$ for exactly one $r \in [l]$}\} \\[1ex] = \frac12 \big( v(H_0({\bf e})) + \#\{ v \in V(H_0({\bf e})) \colon \text{$v$ belongs to more than one $V(H_r({\bf e}))$}\} \big) \\[1ex] \begin{cases} = k/2 & \text{if $d = k$ and $l=1$,} \\[1ex] \ge \frac12 (d+l) & \text{otherwise}, \end{cases} \end{multline*} where, to get the inequality in the second case, we used the fact that each vertex of $H$ has degree two and the inclusion-exclusion formula. Thus we obtain \begin{align*} \|\mathbb E\mathbf{D}^k f(X)\|_{\{[k]\}} &\le n^{k/2}, \\ \|\mathbb E\mathbf{D}^d f(X)\|_\mathcal{J} &\le C_k p^{k-d}n^{k - \frac{1}{2}d - \frac{1}{2}l} \quad \text{if $d < k$ or $l > 1$}. \end{align*} Together with~\eqref{eq:general_graph} this yields the first inequality of the proposition. Using the fact that $\mathbb E Y_H(n,p) \ge \frac{1}{C_k} n^k p^k$, the second inequality follows by simple calculations.
\end{proof} \section{Refined inequalities for polynomials in independent random variables satisfying the modified log-Sobolev inequality}\label{sec:Weibull} In this section we refine the inequalities which can be obtained from Theorem \ref{thm:main} for polynomials in independent random variables satisfying the $\beta$-modified log-Sobolev inequality \eqref{eq:modifiedLS} with $\beta > 2$. To this end we will use Theorem \ref{thm:AidaStroock} together with a result from \cite{Chaos3d}, which is a counterpart of Theorem \ref{thm_Latala_dec} for homogeneous tetrahedral polynomials in general independent symmetric random variables with log-concave tails, albeit only for polynomials of degree at most 3. We recall that for a set $I$, by $P_I$ we denote the family of partitions of $I$ into pairwise disjoint, nonempty sets. Theorems 3.1, 3.2 and 3.4 from \cite{Chaos3d}, specialized to Weibull variables, can be translated into \begin{theorem}\label{thm:Weibull_chaos} Let $\alpha \in [1,2]$ and let $Y_1,\ldots,Y_n$ be a sequence of i.i.d. symmetric random variables satisfying $\mathbb P(|Y_i| \ge t) = \exp(-t^\alpha)$. Define $Y = (Y_1,\ldots,Y_n)$ and let $Z_1,\ldots,Z_d$ be independent copies of $Y$. Consider a $d$-indexed matrix $A$.
Define also \begin{equation}\label{eq:mdpA} m_d(p, A) = \sum_{I\subseteq [d]} \sum_{\mathcal{J} \in P_I}\sum_{\mathcal{K}\in P_{[d]\setminus I}} p^{\#\mathcal{J}/2 + \#\mathcal{K}/\alpha}\|A\|_{\mathcal{J}|\mathcal{K}}, \end{equation} where for $\mathcal{J} = \{J_1,\ldots,J_r\}\in P_I$ and $\mathcal{K} = \{K_1,\ldots,K_k\} \in P_{[d]\setminus I}$, \begin{align*} \|A\|_{\mathcal{J}|\mathcal{K}} =\sum_{s_1\in K_1,\ldots,s_k \in K_k}\sup\Big\{&\sum_{{\bf i}\in[n]^d} a_{{\bf i}}\prod_{l=1}^r x\ub{l}_{\mathbf{i}_{J_l}}\prod_{l=1}^k y\ub{l}_{\mathbf{i}_{K_l}}\colon \|(x\ub{l}_{\mathbf{i}_{J_l}})\|_2\leq 1, \,\textrm{for $1\le l \le r$}, \\ &\sum_{i_{s_l} \le n}\|(y\ub{l}_{\mathbf{i}_{K_l}})_{{\bf i}_{K_l \setminus \{s_l\}}}\|_2^\alpha \leq 1,\,\textrm{for $1\le l \le k$}\Big\}. \end{align*} If $d \le 3$, then for any $p \ge 2$, \begin{displaymath} C_d^{-1}m_d(p, A)\le \|\langle A,Z_1\otimes\cdots\otimes Z_d\rangle\|_p \le C_d m_d(p, A). \end{displaymath} Moreover, if $\alpha = 1$, then the above inequality holds for all $d \ge 1$. \end{theorem} Before we proceed, let us provide a few specific examples of the norms $\|A\|_{\mathcal{J}|\mathcal{K}}$, which for $\alpha < 2$ are more complicated than in the Gaussian case. In what follows, $\beta = \frac{\alpha}{\alpha -1}$ (with $\beta = \infty$ for $\alpha = 1$). For $d=1$, \begin{align*} \|(a_i)\|_{\{1\}| \emptyset} &= \sup\big\{ \sum a_i x_i \colon \sum x_i^2 \le 1 \big\} = |(a_i)|_2, \\ \|(a_i)\|_{\emptyset| \{1\}} &= \sup\big\{ \sum a_i y_i \colon \sum |y_i|^\alpha \le 1 \big\} = |(a_i)|_\beta.
\end{align*} For $d=2$, $\|(a_{ij})\|_{\{1,2\}| \emptyset} = \|(a_{ij})\|_{\textup{HS}}$, $\|(a_{ij})\|_{\{1\}\{2\}| \emptyset} = \|(a_{ij})\|_{\ell_2 \to \ell_2}$, \begin{align*} \|(a_{ij})\|_{\{1\}|\{2\}} &= \sup\big\{ \sum a_{ij} x_i y_j \colon \sum x_i^2 \le 1, \sum |y_j|^\alpha \le 1\big\} = \|(a_{ij})\|_{\ell_\alpha \to \ell_2}, \\ \|(a_{ij})\|_{\{2\}|\{1\}} &= \sup\big\{ \sum a_{ij} y_i x_j \colon \sum x_j^2 \le 1, \sum |y_i|^\alpha \le 1\big\} = \|(a_{ij})\|_{\ell_2 \to \ell_\beta}, \\ \|(a_{ij})\|_{\emptyset| \{1\}\{2\}} &= \sup\big\{\sum a_{ij} y_i z_j \colon \sum |y_i|^\alpha \le 1, \sum |z_j|^\alpha \le 1\big\} = \|(a_{ij})\|_{\ell_\alpha \to \ell_\beta}, \end{align*} and \begin{align*} \|(a_{ij})\|_{\emptyset| \{1,2\}} &= \sup\big\{\sum a_{ij} y_{ij} \colon \sum_i \big(\sum_j y_{ij}^2\big)^{\frac{\alpha}{2}} \le 1\big\} + \sup\big\{\sum a_{ij} y_{ij} \colon \sum_j \big(\sum_i y_{ij}^2\big)^{\frac{\alpha}{2}} \le 1\big\} \\ &= \Big(\sum_i \big( \sum_j a_{ij}^2 \big)^{\beta/2} \Big)^{1/\beta} + \Big(\sum_j \big( \sum_i a_{ij}^2 \big)^{\beta/2} \Big)^{1/\beta}. \end{align*} For $d=3$, we have, for example, \begin{align*} \|(a_{ijk})\|_{\{2\}|\{1\}\{3\}} &= \sup\big\{ \sum a_{ijk} y_i x_j z_k \colon \sum |x_j|^2 \le 1, \sum |y_i|^\alpha \le 1, \sum |z_k|^\alpha \le 1 \big\}, \\ \|(a_{ijk})\|_{\{2\}|\{1,3\}} &= \sup\big\{ \sum a_{ijk} x_j y_{ik} \colon \sum x_j^2 \le 1, \sum_i \big( \sum_k y_{ik}^2 \big)^{\frac{\alpha}{2}} \le 1 \big\} \\ &+ \sup\big\{ \sum a_{ijk} x_j y_{ik} \colon \sum x_j^2 \le 1, \sum_k \big( \sum_i y_{ik}^2 \big)^{\frac{\alpha}{2}} \le 1 \big\}, \\ \|(a_{ijk})\|_{\emptyset|\{1\} \{2,3\}} &= \sup\big\{ \sum a_{ijk} y_i z_{jk} \colon \sum |y_i|^\alpha \le 1, \sum_j \big( \sum_k z_{jk}^2 \big)^{\frac{\alpha}{2}} \le 1 \big\} \\ &+ \sup\big\{ \sum a_{ijk} y_i z_{jk} \colon \sum |y_i|^\alpha \le 1, \sum_k \big( \sum_j z_{jk}^2 \big)^{\frac{\alpha}{2}} \le 1 \big\}. 
\end{align*} In particular, from Theorem \ref{thm:Weibull_chaos} it follows that for $\alpha \in [1,2]$, if $Y = (Y_1, \ldots, Y_n)$ is as in Theorem~\ref{thm:Weibull_chaos}, then for every $x \in \mathbb{R}^n$, \begin{displaymath} \frac{1}{C}(\sqrt{p}|x|_2 + p^{1/\alpha}|x|_\beta) \le \|\langle x,Y\rangle\|_p \le C(\sqrt{p}|x|_2 + p^{1/\alpha}|x|_\beta), \end{displaymath} where $|\cdot|_r$ stands for the $\ell_r^n$ norm (see also~\cite{GK}). Thus, for $\beta \in (2, \infty)$, the inequality of Theorem \ref{thm:AidaStroock}, for $m=n$, $k = 1$ and a $\mathcal{C}^1$ function $f \colon \mathbb{R}^n \to \mathbb{R}$, can be written in the form \begin{equation}\label{ineq:sobolev-beta} \|f(X) - \mathbb E f(X)\|_p \le C_\beta \|\langle \nabla f(X),Y\rangle\|_p. \end{equation} This allows for induction, just as in the proof of Proposition \ref{prop:moment_Poincare}, except that instead of Gaussian vectors we will have independent copies of $Y$. We can thus repeat the proof of Theorem \ref{thm:main}, using the above observation and Theorem \ref{thm:Weibull_chaos} instead of Theorem \ref{thm_Latala_dec}. This argument will then yield the following proposition, which is a counterpart of Theorem \ref{thm:main}. At the moment we can prove it only for $D \le 3$; clearly, generalizing Theorem \ref{thm:Weibull_chaos} to chaoses of arbitrary degree would immediately imply it for general $D$. \begin{prop}\label{prop:Weibull} Let $X = (X_1,\ldots,X_n)$ be a random vector in $\mathbb{R}^n$, with independent components. Let $\beta \in (2,\infty)$ and assume that for all $i \le n$, $X_i$ satisfies the $\beta$-modified logarithmic Sobolev inequality with constant $D_{LS_\beta}$. Let $f \colon \mathbb{R}^n \to \mathbb{R}$ be a $\mathcal{C}^D$ function. Define \[ m(p, f) = \big\| m_D(p, \mathbf{D}^D f(X)) \big\|_p + \sum_{1 \le d \le D-1} m_d(p, \mathbb E \mathbf{D}^d f(X)), \] where $m_d(p, A)$ is defined by~\eqref{eq:mdpA} with $\alpha = \frac{\beta}{\beta-1}$.
If $D\le 3$ then for $p\ge 2$, \begin{displaymath} \|f(X)-\mathbb E f(X)\|_p \le C_{\beta,D_{LS_\beta}} m(p, f). \end{displaymath} As a consequence, for all $p \ge 2$, \begin{displaymath} \mathbb P\big(|f(X) - \mathbb E f(X)| \ge C_{\beta,D_{LS_\beta}} m(p, f)\big) \le e^{-p}. \end{displaymath} \end{prop} \paragraph{Remarks} \paragraph{1.} For $\beta = 2$, the estimates of the above proposition agree with those of Theorem \ref{thm:main_intro}. For $\beta > 2$, it improves on what can be obtained from Theorem \ref{thm:main} in two aspects (of course just for $D \le 3$). First, the exponent of $p$ is smaller, as $(\gamma-1/2)d +\#(\mathcal{J}\cup\mathcal{K})/2 = (1/\alpha -1/2)d+\#\mathcal{J}/2 + \#\mathcal{K}/2 \ge \#\mathcal{J}/2 + \#\mathcal{K}/\alpha$. Second, $\|A\|_{\mathcal{J}\cup \mathcal{K}} \ge \|A\|_{\mathcal{J}|\mathcal{K}}$ (since for $\alpha < 2$, $|x|_\alpha \ge |x|_2$, so the supremum on the left hand side is taken over a larger set). \paragraph{2.} From results in \cite{Chaos3d} it follows that if $f$ is a tetrahedral polynomial of degree $D$ and $X_i$ are i.i.d. symmetric random variables satisfying $\mathbb P(|X_i| \ge t) = \exp(-t^\alpha)$, then the inequalities of Proposition \ref{prop:Weibull} can be reversed (up to constants), i.e., \begin{displaymath} \|f(X) - \mathbb E f(X)\|_p \ge \frac{1}{C_D}m(p, f). \end{displaymath} This is true for any positive integer $D$. \paragraph{3.} One can also consider another functional inequality, which may be regarded as a counterpart of \eqref{eq:modifiedLS} for $\beta = \infty$. We say that a random vector $X$ in $\mathbb{R}^n$ satisfies the Bobkov-Ledoux inequality if for all locally Lipschitz positive functions $f$ such that $|\nabla f(x)|_\infty := \max_{1\le i \le n} |\frac{\partial }{\partial x_i} f(x)| \le d_{BL}f(x)$ for all $x$, \begin{align}\label{eq:BL} \mathrm{Ent} f^2(X) \le D_{BL} \mathbb E |\nabla f(X)|^2.
\end{align} This inequality was introduced in \cite{BobLed_exp} to provide a simple proof of Talagrand's two-level concentration for the symmetric exponential measure in $\mathbb{R}^n$. Here $|\frac{\partial }{\partial x_i} f(x)|$ is defined as the ``partial length of the gradient'' (see \eqref{eq:length_of_gradient}). Thus, for differentiable functions, $|\nabla f|_\infty$ coincides with the $\ell_\infty^n$ norm of the ``true'' gradient. In view of Theorem \ref{thm:AidaStroock} it is natural to conjecture that the Bobkov-Ledoux inequality implies \begin{align}\label{eq:Sobolev_exp} \|f(X) - \mathbb E f(X)\|_p \le C\Big(\sqrt{p}\big\||\nabla f(X)|\big\|_p + p\big\| |\nabla f(X)|_\infty \big\|_p\Big), \end{align} which in turn implies~\eqref{ineq:sobolev-beta} with $Y = (Y_1, \ldots, Y_n)$ being a vector of independent symmetric exponential variables and some $C_\infty < \infty$. This would yield an analogue of Proposition~\ref{prop:Weibull} for $\beta = \infty$, this time with no restriction on $D$. Unfortunately, at present we do not know whether the implication \eqref{eq:BL} $\implies$ \eqref{eq:Sobolev_exp} holds true or even if \eqref{eq:Sobolev_exp} holds for the symmetric exponential measure in $\mathbb{R}^n$. We are only able to prove the following weaker inequality, which is however not sufficient to obtain a counterpart of Proposition \ref{prop:Weibull} for $\beta = \infty$. \begin{prop}\label{prop:exp} If $X$ is a random vector in $\mathbb{R}^n$, which satisfies \eqref{eq:BL}, then for any locally Lipschitz function $f \colon \mathbb{R}^n \to \mathbb{R}$, and any $p \ge 2$, \begin{displaymath} \|f(X) - \mathbb E f(X)\|_p \le 3\Big(D_{BL}^{1/2} \sqrt{p}\big\||\nabla f(X)|\big\|_p + d_{BL}^{-1}p \big\||\nabla f(X)|_\infty\big\|_\infty\Big). \end{displaymath} \end{prop} \begin{proof} To simplify the notation we suppress the argument $X$. In what follows $\|\cdot\|_p$ denotes the $L_p$ norm with respect to the distribution of $X$.
Let us fix $p \ge 2$ and consider $f_1 = \max(f, \|f\|_p /2)$. We have \begin{align}\label{eq:properties_f_1} \|f_1\|_p &\ge \|f\|_p,\\ \|f_1\|_2 &\le \frac12 \|f\|_p + \|f\|_2,\nonumber\\ \|f_1\|_p &\le \frac{3}{2}\|f\|_p \le 3 \min f_1.\nonumber \end{align} Moreover, $f_1$ is locally Lipschitz and we have the pointwise estimates $|\nabla f_1|\le |\nabla f|$, $|\nabla f_1|_\infty \le |\nabla f|_\infty$. Assume now that we have proved that \begin{align}\label{eq:BL_auxiliary} \|f_1\|_p \le \|f_1\|_2 + \sqrt{\frac{D_{BL}}{2}}\sqrt{p}\big\||\nabla f_1| \big\|_p + \frac{3p}{2d_{BL}}\big\||\nabla f_1|_\infty\big\|_\infty. \end{align} Then, together with the first two inequalities of~\eqref{eq:properties_f_1}, it yields \begin{align*} \|f\|_p &\le \|f_1\|_p \le \|f_1\|_2 + \sqrt{\frac{D_{BL}}{2}}\sqrt{p}\big\||\nabla f_1| \big\|_p + \frac{3p}{2d_{BL}}\big\||\nabla f_1|_\infty\big\|_\infty\\ & \le \frac{1}{2}\|f\|_p + \|f\|_2 + \sqrt{\frac{D_{BL}}{2}}\sqrt{p}\big\||\nabla f| \big\|_p + \frac{3p}{2d_{BL}}\big\||\nabla f|_\infty\big\|_\infty, \end{align*} which gives \begin{equation}\label{eq:BL_fp} \|f\|_p \le 2\Big(\|f\|_2 + \sqrt{\frac{D_{BL}}{2}}\sqrt{p}\big\||\nabla f| \big\|_p + \frac{3p}{2d_{BL}}\big\||\nabla f|_\infty\big\|_\infty\Big). \end{equation} Since \eqref{eq:BL} implies the Poincar\'{e} inequality with constant $D_{BL}/2$ (see e.g. Proposition 2.3 in \cite{Gentil-Guillin-Miclo-2005}), we can conclude the proof by applying~\eqref{eq:BL_fp} to $|f - \mathbb E f|$ (as in the proof of Theorem \ref{thm:AidaStroock}). Thus it is enough to prove \eqref{eq:BL_auxiliary}. From now on we are going to work with the function $f_1$ only, so for brevity we will drop the subscript and write $f$ instead of $f_1$. Assume $\|f\|_p \ge \frac{3p}{2d_{BL}} \||\nabla f|_\infty \|_\infty$ (otherwise~\eqref{eq:BL_auxiliary} is trivially satisfied).
Then, using the third inequality of \eqref{eq:properties_f_1}, for $2\le t \le p$ and all $x \in \mathbb{R}^n$, \begin{displaymath} |\nabla f^{t/2}(x)|_\infty \le \frac{t}{2} f^{t/2-1}(x) |\nabla f(x)|_\infty \le \frac{3}{2} f^{t/2}(x) \frac{p|\nabla f(x)|_\infty}{\|f\|_p} \le d_{BL} f^{t/2}(x). \end{displaymath} We can thus apply~\eqref{eq:BL} with $f^{t/2}$, which together with H\"older's inequality gives \begin{displaymath} \mathrm{Ent} f^t \le D_{BL} \mathbb E |\nabla f^{t/2}|^2 \le D_{BL}\frac{t^2}{4}\mathbb E \big( f^{t-2} |\nabla f|^2 \big) \le D_{BL}\frac{t^2}{4} \big\||\nabla f|\big\|_t^2 (\mathbb E f^t)^{1 - \frac{2}{t}}. \end{displaymath} Now, as in the proof of Theorem \ref{thm:AidaStroock}, we have \begin{displaymath} \frac{d}{dt} (\mathbb E f^t)^{2/t} = \frac{2}{t^2} (\mathbb E f^t)^{\frac{2}{t} - 1} \mathrm{Ent} f^t \le \frac{D_{BL}}{2}\big\||\nabla f|\big\|_p^2, \end{displaymath} which upon integrating gives \begin{displaymath} \|f\|_p^2 \le \|f\|_2^2 + \frac{D_{BL}}{2} p \big\||\nabla f|\big\|_p^2, \end{displaymath} which clearly implies \eqref{eq:BL_auxiliary}. \end{proof} \section{Appendix} \subsection{Decoupling inequalities} Let us here state the main decoupling result for $U$-statistics (Theorem 1 in \cite{dlPMS1}). \begin{theorem}\label{thm:decoupling} For natural numbers $n \ge d$ let $(X_i)_{i=1}^n$ be a sequence of independent random variables with values in a measurable space $(S,\mathcal{S})$ and let $(X\ub{j}_i)_{i=1}^n$ $j= 1,\ldots,d$ be $d$ independent copies of this sequence. Let $B$ be a separable Banach space and for each ${\bf i} \in [n]\uu{d}$ let $h_{\bf i}\colon S^d \to B$ be a measurable function. Then for all $t > 0$, \begin{displaymath} \mathbb P\Big(\Big\|\sum_{{\bf i} \in [n]\uu{d}} h_{\bf i} (X_{i_1},\ldots,X_{i_d})\Big\|> t\Big) \le C_d \mathbb P\Big(\Big\|\sum_{{\bf i} \in [n]\uu{d}} h_{\bf i} (X\ub{1}_{i_1},\ldots,X\ub{d}_{i_d})\Big\|> t/C_d\Big). 
\end{displaymath} In consequence, for all $p \ge 1$, \begin{displaymath} \Big\|\sum_{{\bf i} \in [n]\uu{d}} h_{\bf i} (X_{i_1},\ldots,X_{i_d})\Big\|_p \le C_d \Big\|\sum_{{\bf i} \in [n]\uu{d}} h_{\bf i} (X\ub{1}_{i_1},\ldots,X\ub{d}_{i_d})\Big\|_p. \end{displaymath} If moreover the functions $h_{\bf i}$ are symmetric in the sense that, for all $x_1,\ldots,x_d \in S$ and all permutations $\pi\colon [d]\to [d]$, $h_{i_1,\ldots,i_d}(x_1,\ldots,x_d) = h_{i_{\pi_1},\ldots,i_{\pi_d}}(x_{\pi_1},\ldots,x_{\pi_d})$, then for all $t > 0$, \begin{displaymath} \mathbb P\Big(\Big\|\sum_{{\bf i} \in [n]\uu{d}} h_{\bf i} (X\ub{1}_{i_1},\ldots,X\ub{d}_{i_d})\Big\|> t\Big) \le C_d \mathbb P\Big(\Big\|\sum_{{\bf i} \in [n]\uu{d}} h_{\bf i} (X_{i_1},\ldots,X_{i_d})\Big\|> t/C_d\Big) \end{displaymath} and in consequence, for all $p \ge 1$, \begin{displaymath} \Big\|\sum_{{\bf i} \in [n]\uu{d}} h_{\bf i} (X\ub{1}_{i_1},\ldots,X\ub{d}_{i_d})\Big\|_p \le C_d\Big\|\sum_{{\bf i} \in [n]\uu{d}} h_{\bf i} (X_{i_1},\ldots,X_{i_d})\Big\|_p. \end{displaymath} \end{theorem} \subsection{Proof of Lemma \ref{lemma_comp_Gauss}} Without loss of generality, we can assume that $M = 1$. It is easy to see that for some constant $C_k$ and $t > 1$, $\mathbb P(C_k|g_{i1}\cdots g_{ik}| > t) \ge 2\exp(-t^{2/k})$. Since $ \mathbb P(|Y_i|\ge t) \le 2\exp(-t^{2/k})$, we get \begin{displaymath} \mathbb P(|Y_i\Ind{|Y_i|\ge 1}| \ge t) \le \mathbb P(C_k |g_{i1}\cdots g_{ik}| > t). \end{displaymath} Therefore, using the inverse of the distribution function, we can define i.i.d. copies $\tilde{Y}_i$ of $|Y_i\Ind{|Y_i|\ge 1}|$ and i.i.d. copies $Z_i$ of $|g_{i1}\cdots g_{ik}|$, such that $\tilde{Y}_i \le C_k Z_i$ pointwise. We may assume that these copies are defined on a common probability space with $Y_i$ and $g_{ij}$. Let $(\varepsilon_i)$ be a sequence of i.i.d. Rademacher variables independent of all the variables introduced so far. We can now write
\begin{align*} \|\sum_{i=1}^n a_i Y_i\|_p & = \|\sum_{i=1}^n a_i \varepsilon_i|Y_i|\|_p\\ &\le \|\sum_{i=1}^n a_i \varepsilon_i|Y_i\ind{|Y_i|< 1}|\|_p + \|\sum_{i=1}^n a_i \varepsilon_i|Y_i\ind{|Y_i|\ge 1}|\|_p\\ &= \|\sum_{i=1}^n a_i \varepsilon_i|Y_i\ind{|Y_i|< 1}|\|_p + \|\sum_{i=1}^n a_i \varepsilon_i\tilde{Y}_i\|_p\\ &\le \|\sum_{i=1}^n a_i \varepsilon_i\|_p + C_k\|\sum_{i=1}^n a_i \varepsilon_i Z_i\|_p\\ &\le C_k \|\sum_{i=1}^n a_i \varepsilon_i \mathbb E_Z Z_i\|_p + C_k\|\sum_{i=1}^n a_i \varepsilon_iZ_i\|_p\\ &\le C_k \|\sum_{i=1}^n a_i \varepsilon_i Z_i\|_p = C_k\|\sum_{i=1}^n a_i g_{i1}\cdots g_{ik}\|_p, \end{align*} where in the second inequality we used the contraction principle (once conditionally on $\tilde{Y}_i$'s and $Z_i$'s) and in the third one Jensen's inequality. \bibliographystyle{abbrv}
https://arxiv.org/abs/1701.04369
Arithmetic degrees and dynamical degrees of endomorphisms on surfaces
For a dominant rational self-map on a smooth projective variety defined over a number field, Kawaguchi and Silverman conjectured that the (first) dynamical degree is equal to the arithmetic degree at a rational point whose forward orbit is well-defined and Zariski dense. We prove this conjecture for surjective endomorphisms on smooth projective surfaces. For surjective endomorphisms on any smooth projective varieties, we show the existence of rational points whose arithmetic degrees are equal to the dynamical degree. Moreover, we prove that there exists a Zariski dense set of rational points having disjoint orbits if the endomorphism is an automorphism.
\section{Introduction}\label{intro} Let $k$ be a number field, $X$ a smooth projective variety over $\overline{k}$, and $f\colon X\dashrightarrow X$ a dominant rational self-map on $X$ over $\overline{k}$. Let $I_f \subset X$ be the indeterminacy locus of $f$. Let $X_f (\overline{k})$ be the set of $\overline{k}$-rational points $P$ on $X$ such that $f^n(P) \notin I_f$ for every $n \geq 0$. For $P\in X_f(\overline{k}),$ its {\it forward $f$-orbit} is defined as $\mathcal{O}_f(P):=\{ f^n(P):n\geq 0\}.$ Let $H$ be an ample divisor on $X$ defined over $\overline{k}$. The ({\it first}) {\it dynamical degree} of $f$ is defined by $$\delta_f :=\lim_{n\to \infty} ((f^n)^\ast H \cdot H^{\dim X -1})^{1/n}.$$ The first dynamical degree of a dominant rational self-map on a smooth complex projective variety was first defined by Dinh and Sibony in \cite{dinhsib, dinh}. In \cite{Tru}, Truong gave an algebraic definition of dynamical degrees. The {\it arithmetic degree}, introduced by Silverman in \cite{Gm}, of $f$ at a $\overline{k}$-rational point $P\in X_f(\overline{k})$ is defined by $$\alpha _f(P):=\lim_{n\to \infty} h_H^+(f^n(P))^{1/n}$$ if the limit on the right hand side exists. Here, $h_H\colon X(\overline{k})\longrightarrow [0,\infty )$ is the (absolute logarithmic) Weil height function associated with $H$, and we put $h_H^+:=\max\{ h_H,1\}$. We thus have two quantities associated with the iteration of $f$. It is natural to consider the relation between dynamical degrees and arithmetic degrees. In this direction, Kawaguchi and Silverman formulated the following conjecture. \begin{conj}[{The Kawaguchi--Silverman conjecture (see \cite[Conjecture 6]{rat})}] \label{KS} For every $\overline{k}$-rational point $P\in X_f(\overline{k})$, the arithmetic degree $\alpha_f(P)$ exists.
Moreover, if the forward $f$-orbit $\mathcal{O} _f(P)$ is Zariski dense in $X$, the arithmetic degree $\alpha _f(P)$ is equal to the dynamical degree $\delta_f$, i.e., we have $$\alpha _f(P)=\delta _f.$$ \end{conj} \begin{rmk}\label{rem_for_KS} Let $X$ be a complex smooth projective variety with $\kappa(X)>0$, $\Phi: X \dashrightarrow W$ the Iitaka fibration of $X$, and $f \colon X \dashrightarrow X$ a dominant rational self-map on $X$. Nakayama and Zhang proved that there exists an automorphism $g \colon W \longrightarrow W$ of finite order such that $\Phi \circ f = g \circ \Phi$ (see \cite[Theorem A]{NaZh}). This implies that no dominant rational self-map on a smooth projective variety of positive Kodaira dimension has a Zariski dense orbit. So the latter half of Conjecture \ref{KS} is meaningful only for smooth projective varieties of non-positive Kodaira dimension. However, we do not use their result in this paper. \end{rmk} When $f$ is a dominant {\it endomorphism} (i.e.~$f$ is defined everywhere), the existence of the limit defining the arithmetic degree was proved in \cite{ab1}, but the convergence is not known in general. It seems difficult at the moment to prove Conjecture \ref{KS} in full generality. In this paper, we prove Conjecture \ref{KS} for all surjective endomorphisms on smooth projective surfaces: \begin{thm}\label{Theorem:MainTheorem} Let $k$ be a number field, $X$ a smooth projective surface over $\overline k$, and $f \colon X \longrightarrow X$ a surjective endomorphism on $X$. Then Conjecture \ref{KS} holds for $f$. \end{thm} As by-products of our arguments, we also obtain the following two cases for which Conjecture \ref{KS} holds: \begin{thm}[Theorem {\ref{thm3.3.1}}]\label{Theorem:BirationalOnSurfaces} Let $k$ be a number field, $X$ a smooth projective irrational surface over $\overline k$, and $f \colon X \dashrightarrow X$ a birational automorphism on $X$. Then Conjecture \ref{KS} holds for $f$.
\end{thm} \begin{thm}[Theorem {\ref{thm3.3.2}}]\label{Theorem:ToricEndo} Let $k$ be a number field, $X$ a smooth projective toric variety over $\overline k$, and $f\colon X\longrightarrow X$ a toric surjective endomorphism on $X$. Then Conjecture \ref{KS} holds for $f$. \end{thm} As we will see in the proof of Theorem \ref{Theorem:MainTheorem}, there does not always exist a Zariski dense orbit for a given self-map. For instance, a self-map of a variety of positive Kodaira dimension cannot have a Zariski dense orbit (see Remark \ref{rem_for_KS}). So it is also important to consider whether a self-map has a $\overline k$-rational point whose orbit has full arithmetic complexity, that is, whose arithmetic degree coincides with the dynamical degree. We prove that such a point always exists for any surjective endomorphism on any smooth projective variety. \begin{thm}\label{thm_existence} Let $k$ be a number field, $X$ a smooth projective variety over $\overline k$, and $f \colon X \longrightarrow X$ a surjective endomorphism on $X$. Then there exists a $\overline k$-rational point $P \in X(\overline k)$ such that $\alpha_f(P)=\delta_f$. \end{thm} If $f$ is an automorphism, we can construct a ``large'' collection of points whose orbits have full arithmetic complexity. \begin{thm}\label{thm_large collection} Let $k$ be a number field, $X$ a smooth projective variety over $\overline k$, and $f \colon X \longrightarrow X$ an automorphism. Then there exists a subset $S \subset X( \overline{k})$ which satisfies all of the following conditions. \begin{enumerate} \item[\rm (1)] For every $P \in S$, $ \alpha_{f}(P)=\delta_{f}$. \item[\rm (2)] For $P, Q \in S$ with $P \neq Q$, $ \mathcal{O}_{f}(P) \cap \mathcal{O}_{f}(Q) = \emptyset$. \item[\rm (3)] $S$ is Zariski dense in $X$.
\end{enumerate} \end{thm} \begin{rmk}\label{results} Kawaguchi, Silverman, and the second author proved Conjecture \ref{KS} in the following cases (for details, see \cite{ab1}, \cite{eg}, \cite{sano1}, \cite{Gm}, \cite{ab2}). \begin{itemize} \item[(1)] (\cite[Theorem 2 (a)]{eg}) $f$ is an endomorphism and the N\'eron--Severi group of $X$ has rank one. \item[(2)] (\cite[Theorem 2 (b)]{eg}) $f$ is the extension to $\mathbb{P} ^N$ of a regular affine automorphism on $\mathbb{A} ^N$. \item[(3)] (\cite[Theorem A]{surf}, \cite[Theorem 2 (c)]{eg}) $X$ is a smooth projective surface and $f$ is an automorphism on $X$. \item[(4)] (\cite[Proposition 19]{Gm}) $f$ is the extension to $\mathbb{P}^N$ of a monomial endomorphism on $\mathbb{G}_m^N$ and $P\in \mathbb{G} _m^N(\overline{k})$. \item[(5)] (\cite[Corollary 31]{ab1}, \cite[Theorem 2]{ab2}) $X$ is an abelian variety. Note that any rational map between abelian varieties is automatically a morphism. \item[(6)] (\cite[Theorem 1.3]{sano1}) $f$ is an endomorphism and $X$ is the product $\prod_{i=1}^n X_i$ of smooth projective varieties, with the assumption that each variety $X_i$ satisfies one of the following conditions: \begin{itemize} \item the first Betti number of $(X_i)_{\mathbb{C}}$ is zero and the N\'eron--Severi group of $X_i$ has rank one, \item $X_i$ is an abelian variety, \item $X_i$ is an Enriques surface, or \item $X_i$ is a $K3$ surface. \end{itemize} \item[(7)] (\cite[Theorem 1.4]{sano1}) $f$ is an endomorphism and $X$ is the product $X_1\times X_2$ of positive dimensional varieties such that one of $X_1$ or $X_2$ is of general type. (In fact, there do not exist Zariski dense forward $f$-orbits on such $X_1\times X_2$.) \end{itemize} \end{rmk} \subsection*{Notation} \begin{itemize} \item Throughout this paper, we fix a number field $k$. \item A \textit{variety} always means an integral separated scheme of finite type over $\overline k$ in this paper.
\item A {\it divisor} on a variety $X$ means a divisor on $X$ defined over $\overline{k}$. \item An \textit{endomorphism} on a variety $X$ means a morphism from $X$ to itself defined over $\overline{k}$. A \textit{non-trivial endomorphism} is a surjective endomorphism which is not an automorphism. \item A \textit{curve} (resp.~\textit{surface}) simply means a smooth projective variety of dimension 1 (resp.~dimension 2) unless otherwise stated. \item For any curve $C$, the genus of $C$ is denoted by $g(C)$. \item When we say that $P$ is a point of $X$ or write as $P\in X$, it means that $P$ is a $\overline{k}$-rational point of $X$. \item The N\'eron--Severi group of a smooth projective variety $X$ is denoted by $\NS(X)$. It is well-known that $\NS(X)$ is a finitely generated abelian group. We put $\NS(X)_\mathbb{R} := \NS(X)\otimes_\mathbb{Z} \mathbb{R}.$ \item The symbols $\equiv$, $\sim$, $\sim_{\mathbb Q}$ and $\sim_{{\mathbb R}}$ mean algebraic equivalence, linear equivalence, ${\mathbb Q}$-linear equivalence, and ${\mathbb R}$-linear equivalence, respectively. \item Let $X$ be a smooth projective variety and $f \colon X \dashrightarrow X$ a dominant rational self-map. A point $P \in X_f(\overline k)$ is called \textit{preperiodic} if the forward $f$-orbit $\mathcal O_f(P)$ of $P$ is a finite set. This is equivalent to the condition that $f^n(P)=f^m(P)$ for some $n, m \geq 0$ with $n \neq m$. \item Let $f$, $g$ and $h$ be real-valued functions on a domain $S$. The equality $f = g + O(h)$ means that there is a positive constant $C$ such that $|f(x)-g(x)| \leq C |h(x)|$ for every $x \in S$. The equality $f=g + O(1)$ means that there is a positive constant $C'$ such that $|f(x)-g(x)| \leq C'$ for every $x \in S$. \end{itemize} \subsection*{Outline of this paper} In Section \ref{Section:Recall}, we recall the definitions and some properties of dynamical and arithmetic degrees. 
In Section \ref{Section:Reductions}, we first recall some reduction lemmata for Conjecture \ref{KS}, which were proved in \cite{sano1} and \cite{ab2}. Then, we prove the birational invariance of the arithmetic degree, and prove Theorem \ref{Theorem:BirationalOnSurfaces} and Theorem \ref{Theorem:ToricEndo}. In Section \ref{Section:EndomorphismsOnSurfaces}, we reduce Theorem \ref{Theorem:MainTheorem} to three cases, namely ${\mathbb P}^1$-bundles over curves, hyperelliptic surfaces, and surfaces of Kodaira dimension one. In Section \ref{Section:RuledSurface}, we recall fundamental properties of ${\mathbb P}^1$-bundles over curves. In Section \ref{Section:KSCforRuled}, Section \ref{Section:HyperEllipticSurface}, and Section \ref{Section:EllipticSurface}, we prove Theorem \ref{Theorem:MainTheorem} in each of the cases described in Section \ref{Section:EndomorphismsOnSurfaces}. Finally, in Section \ref{Section:ExistenceOfOrbits}, we prove Theorem \ref{thm_existence} and Theorem \ref{thm_large collection}. \section{Dynamical degree and arithmetic degree}\label{Section:Recall} Let $H$ be an ample divisor on a smooth projective variety $X$. The ({\it first}) {\it dynamical degree} of a dominant rational self-map $f\colon X \dashrightarrow X$ is defined by $$\delta_f :=\lim_{n\to \infty} ((f^n)^\ast H \cdot H^{\dim X-1})^{1/n}.$$ The limit defining $\delta_f$ exists, and $\delta _f$ does not depend on the choice of $H$ (see \cite[Corollary 7]{dinh}, \cite[Proposition 1.2]{guedj}). Note that if $f$ is an endomorphism, we have $(f^n)^{\ast}=(f^\ast )^n$ as a linear self-map on $\NS (X)$. But if $f$ is merely a rational self-map, then $(f^n)^{\ast}\neq (f^\ast )^n$ in general.
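\begin{rmk}
The following standard example illustrates both the definition of $\delta_f$ and the failure of $(f^n)^\ast =(f^\ast )^n$ for rational self-maps. For the quadratic Cremona involution $\sigma \colon {\mathbb P}^2 \dashrightarrow {\mathbb P}^2$, $[x:y:z] \mapsto [yz:xz:xy]$, we have $\sigma^\ast H \equiv 2H$ for a line $H$, while $\sigma^2 = \mathrm{id}$ as a rational map, so $(\sigma^2)^\ast H \equiv H$ but $(\sigma^\ast)^2 H \equiv 4H$; in particular, $\delta_\sigma = 1$. On the other hand, for the surjective endomorphism $f \colon {\mathbb P}^2 \longrightarrow {\mathbb P}^2$, $[x:y:z] \mapsto [x^2:y^2:z^2]$, we have $(f^n)^\ast H = (f^\ast)^n H \equiv 2^n H$, and hence $\delta_f = 2$.
\end{rmk}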
\begin{rmk}[{\cite[Proposition 1.2 (iii)]{dinh}, \cite[Remark 7]{rat}}] \label{n-th power of delta} Let $\rho ((f ^n)^\ast)$ be the spectral radius of the linear self-map $(f^n)^\ast \colon \NS (X)_\mathbb{R} \longrightarrow \NS (X)_\mathbb{R}.$ The dynamical degree $\delta_f$ is equal to the limit $\lim_{n\to \infty} (\rho ((f ^n)^\ast ))^{1/n}.$ Thus we have $\delta_{f^n}=\delta_f^n$ for every $n\geq 1$. \end{rmk} Let $X_f(\overline{k})$ be the set of points $P$ on $X$ such that $f$ is defined at $f^n(P)$ for every $n \geq 0.$ The {\it arithmetic degree} of $f$ at a point $P\in X_f(\overline{k})$ is defined as follows. Let $$h_H\colon X(\overline{k})\longrightarrow [0,\infty )$$ be the (absolute logarithmic) Weil height function associated with $H$ (see \cite[Theorem B.3.2]{HS}). We put $$h_H^+(P):=\max\left\{ h_H(P),1\right\}.$$ We call \begin{align*} \overline{\alpha} _f(P)&:=\limsup_{n\to \infty} h_H^+(f^n(P))^{1/n}\text{ and}\\ \underline{\alpha} _f(P)&:=\liminf_{n\to \infty} h_H^+(f^n(P))^{1/n} \end{align*} {\it the upper arithmetic degree} and {\it the lower arithmetic degree} of $f$ at $P$, respectively. It is known that $\overline{\alpha}_f(P)$ and $\underline{\alpha}_f(P)$ do not depend on the choice of $H$ (see \cite[Proposition 12]{rat}). If $\overline{\alpha}_f(P)=\underline{\alpha}_f(P)$, the limit $$\alpha _f(P):=\lim_{n\to \infty} h_H^+(f^n(P))^{1/n}$$ is called {\it the arithmetic degree of} $f$ {\it at} $P$. \begin{rmk} Let $D$ be a divisor on $X$, $H$ an ample divisor on $X$, and $f$ a dominant rational self-map on $X$. Take $P \in X_f(\overline k)$. Then we can easily check that \begin{align*} \overline{\alpha}_f(P)&\geq \limsup_{n\to\infty}h_D^+(f^n(P))^{1/n}, \text{ and}\\ \underline{\alpha}_f(P)&\geq \liminf_{n\to\infty}h_D^+(f^n(P))^{1/n}. \end{align*} So when these limits exist, we have \begin{align*} \alpha_f(P)&\geq \lim_{n\to\infty}h_D^+(f^n(P))^{1/n}.
\end{align*} \end{rmk} \begin{rmk}\label{convergence} When $f$ is an endomorphism, the existence of the limit defining the arithmetic degree $\alpha_f(P)$ was proved by Kawaguchi and Silverman in \cite[Theorem 3]{ab1}. But it is not known in general. \end{rmk} \begin{rmk}\label{upperineq} The inequality $\overline{\alpha}_f(P)\leq \delta_f$ was proved by Kawaguchi and Silverman, and the third author (see \cite[Theorem 4]{rat}, \cite[Theorem 1.4]{Matsuzawa}). Hence, in order to prove Conjecture \ref{KS}, it is enough to prove the opposite inequality $\underline{\alpha}_f(P)\geq \delta_f$. \end{rmk} \section{Some reductions for Conjecture \ref{KS}}\label{Section:Reductions} \subsection{Reductions}\label{Subsection:Reductions} We recall some lemmata that are useful for reducing certain cases of Conjecture \ref{KS} to easier ones. \begin{lem}\label{Lemma:iterate} Let $X$ be a smooth projective variety and $f\colon X \longrightarrow X$ a surjective endomorphism. Then Conjecture \ref{KS} holds for $f$ if and only if Conjecture \ref{KS} holds for $f^t$ for some $t\geq 1$. \end{lem} \begin{proof} See \cite[Lemma 3.3]{sano1}. \end{proof} \begin{lem}[{\cite[Lemma 6]{ab2}}]\label{Lemma:ReductionByFiniteMorphisms} Let $\psi \colon X \longrightarrow Y$ be a finite surjective morphism between smooth projective varieties. Let $f_X\colon X\longrightarrow X$ and $f_Y\colon Y\longrightarrow Y$ be surjective endomorphisms on $X$ and $Y$, respectively. Assume that $\psi\circ f_X=f_Y\circ\psi$. Then Conjecture \ref{KS} holds for $f_X$ if and only if Conjecture \ref{KS} holds for $f_Y$. \end{lem} \begin{proof} Since $\psi$ is a finite surjective morphism, we have $\dim X=\dim Y$. For a point $P\in X(\overline k)$, the forward $f_X$-orbit $\mathcal{O}_{f_X}(P)$ is Zariski dense in $X$ if and only if the forward $f_Y$-orbit $\mathcal{O}_{f_Y}(\psi(P))$ is Zariski dense in $Y$. Let $H$ be an ample divisor on $Y$. Then $\psi^\ast H$ is an ample divisor on $X$.
Hence, we can calculate the dynamical degree and the arithmetic degree of $f_X$ as follows: \begin{align*} \delta_{f_X} &= \lim_{n\to\infty} ((f_X^n)^\ast \psi^\ast H \cdot (\psi^\ast H)^{\dim X-1})^{1/n}\\ &= \lim_{n\to\infty} (\psi^\ast (f_Y^n)^\ast H \cdot (\psi^\ast H)^{\dim Y-1})^{1/n}\\ &= \lim_{n\to\infty} (\deg(\psi)((f_Y^n)^\ast H \cdot H^{\dim Y-1}))^{1/n}\\ &=\delta_{f_Y}. \end{align*} Similarly, \begin{align*} \alpha_{f_X}(P) &= \lim_{n\to\infty} h^+_{\psi^\ast H} (f_X^n(P))^{1/n}\\ &= \lim_{n\to\infty} h^+_H (\psi \circ f_X^n(P))^{1/n}\\ &= \lim_{n\to\infty} h^+_H (f_Y^n\circ \psi (P))^{1/n}\\ &= \alpha_{f_Y}(\psi(P)). \end{align*} Our assertion follows from these calculations. \end{proof} \subsection{Birational invariance of the arithmetic degree} We show that the arithmetic degree is invariant under birational conjugacy. \begin{lem}\label{lem4.2.1} Let $\mu \colon X \dashrightarrow Y$ be a birational map of smooth projective varieties. Take Weil height functions $h_X, h_Y$ associated with ample divisors $H_X, H_Y$ on $X, Y$, respectively. Then there are constants $M \in \mathbb R_{>0}$ and $M' \in \mathbb R$ such that $$h_X(P) \geq M h_Y(\mu(P))+M'$$ for any $P \in X(\overline k) \setminus I_\mu(\overline k)$. \end{lem} \begin{proof} Take a smooth projective variety $Z$ and a birational morphism $p \colon Z \longrightarrow X$ such that $p$ is an isomorphism over $X \setminus I_\mu$ and $q=\mu \circ p \colon Z \longrightarrow Y$ is a morphism. Set $E=p^*p_*q^*H_Y-q^*H_Y$. Then $E$ is a $p$-exceptional divisor on $Z$ such that $-E$ is $p$-nef. By the negativity lemma (cf.~\cite[Lemma 3.39]{KoMo}), $E$ is an effective and $p$-exceptional divisor on $Z$. Take a sufficiently large integer $N$ such that $NH_X-p_*q^*H_Y$ is very ample.
Then, for $P \in X \setminus I_\mu$, we have \begin{align*} h_X(P) &= \frac{1}{N} (h_{NH_X-p_*q^*H_Y}(P)+h_{p_*q^*H_Y}(P))+O(1) \\ &\geq \frac{1}{N} h_{p_*q^*H_Y}(P)+O(1) \\ &= \frac{1}{N} h_{p^*p_*q^*H_Y}(p^{-1}(P)) +O(1) \\ &= \frac{1}{N} \left(h_{q^*H_Y}(p^{-1}(P))+h_E(p^{-1}(P))\right)+O(1) \\ &= \frac{1}{N} h_Y(\mu(P)) +\frac{1}{N} h_E(p^{-1}(P))+O(1). \end{align*} We know that $h_E \geq O(1)$ on $Z \setminus \Supp E$ (cf.~\cite[Theorem B.3.2(e)]{HS}). Since $\Supp E \subset p^{-1}(I_\mu)$, we get $h_E(p^{-1}(P)) \geq O(1)$ for $P \in X \setminus I_\mu$. Combining these, we obtain $h_X(P) \geq (1/N)h_Y(\mu(P))+O(1)$ for $P \in X \setminus I_\mu$. \end{proof} \begin{thm}\label{Theorem:BirationalInvariance} Let $f \colon X \dashrightarrow X$ and $g \colon Y \dashrightarrow Y$ be dominant rational self-maps on smooth projective varieties and $\mu \colon X \dashrightarrow Y$ a birational map such that $g \circ \mu = \mu \circ f$. \begin{itemize} \item[{\rm (i)}] Let $U \subset X$ be a Zariski open subset such that $\mu|_U \colon U \longrightarrow \mu(U)$ is an isomorphism. Then $\overline \alpha_f(P)=\overline \alpha_g(\mu(P))$ and $\underline \alpha_f(P)=\underline \alpha_g(\mu(P))$ for $P \in X_f(\overline k) \cap \mu^{-1}(Y_g(\overline k))$ such that $\mathcal O_f(P) \subset U(\overline k)$. \item[{\rm (ii)}] Take $P \in X_f(\overline k) \cap \mu^{-1}(Y_g(\overline k))$. Assume that $\mathcal O_f(P)$ is Zariski dense in $X$ and both $\alpha_f(P)$ and $\alpha_g(\mu(P))$ exist. Then $\alpha_f(P)=\alpha_g(\mu(P))$. \end{itemize} \end{thm} \begin{proof} (i) Applying Lemma \ref{lem4.2.1} to both $\mu$ and $\mu^{-1}$, we find constants $M_1, L_1 \in \mathbb R_{>0}$ and $M_2, L_2 \in \mathbb R$ such that \begin{equation} M_1h_Y(\mu(P))+M_2 \leq h_X(P) \leq L_1h_Y(\mu(P))+L_2 \tag{$*$} \end{equation} for $P \in U(\overline k)$. The claimed equalities follow from ($*$).
(ii) Since $\mathcal O_f(P)$ is Zariski dense in $X$, we can take a subsequence $\{f^{n_k}(P)\}_k$ of $\{f^n(P)\}_n$ contained in $U$. Using ($*$) again, it follows that $$\alpha_f(P)=\lim_{k \to \infty}h_X^+(f^{n_k}(P))^{1/n_k} =\lim_{k \to \infty}h_Y^+(g^{n_k}(\mu(P)))^{1/n_k}=\alpha_g(\mu(P)).$$ \end{proof} \begin{rmk} In \cite{Gm}, Silverman dealt with a height function on $\mathbb{G}_m^n$ induced by an open immersion $\mathbb{G}_m^n\hookrightarrow {\mathbb P}^n$ and proved Conjecture \ref{KS} for monomial maps on $\mathbb{G}_m^n$. It seems that it had not been checked in the literature that the arithmetic degrees of endomorphisms on quasi-projective varieties do not depend on the choice of open immersions into projective varieties. Now by Theorem \ref{Theorem:BirationalInvariance}, the arithmetic degree of a rational self-map on a quasi-projective variety at a point does not depend on the choice of an open immersion of the quasi-projective variety into a projective variety. Furthermore, by the birational invariance of dynamical degrees, we can state Conjecture \ref{KS} for rational self-maps on quasi-projective varieties, such as semi-abelian varieties. \end{rmk} \subsection{Applications of the birational invariance} In this subsection, we prove Theorem \ref{Theorem:BirationalOnSurfaces} and Theorem \ref{Theorem:ToricEndo} as applications of Theorem \ref{Theorem:BirationalInvariance}. \begin{thm}[Theorem \ref{Theorem:BirationalOnSurfaces}]\label{thm3.3.1} Let $X$ be an irrational surface and $f \colon X \dashrightarrow X$ a birational automorphism on $X$. Then Conjecture \ref{KS} holds for $f$. \end{thm} \begin{proof} Take a point $P \in X_f(\overline k)$. If $\mathcal O_f(P)$ is finite, the limit $\alpha_f(P)$ exists and is equal to 1. Next, assume that the closure $\overline{\mathcal O_f(P)}$ of $\mathcal O_f(P)$ has dimension 1. Let $Z$ be the normalization of $\overline{\mathcal O_f(P)}$ and $\nu \colon Z \longrightarrow X$ the induced morphism.
Then an endomorphism $g \colon Z \longrightarrow Z$ satisfying $\nu \circ g=f \circ \nu$ is induced. Take a point $P' \in Z$ such that $\nu(P')=P$. Then $\alpha_g(P')=\alpha_f(P)$ since $\nu$ is finite. It follows from \cite[Theorem 2]{ab1} that $\alpha_g(P')$ exists (note that \cite[Theorem 2]{ab1} holds for possibly non-surjective endomorphisms on possibly reducible normal varieties). Therefore $\alpha_f(P)$ exists. Assume that $\mathcal O_f(P)$ is Zariski dense. If $\delta_f=1$, then $1 \leq \underline \alpha_f(P) \leq \overline \alpha_f(P) \leq \delta_f =1$ by Remark \ref{upperineq}, so $\alpha_f(P)$ exists and $\alpha_f(P)=\delta_f=1$. So we may assume that $\delta_f >1$. Since $X$ is irrational and $\delta_f >1$, $\kappa(X)$ must be non-negative (cf.~\cite[Theorem 0.4, Proposition 7.1 and Theorem 7.2]{defa}). Take a birational morphism $\mu \colon X \longrightarrow Y$ to the minimal model $Y$ of $X$ and let $g \colon Y \dashrightarrow Y$ be the birational automorphism on $Y$ defined as $g=\mu \circ f \circ \mu^{-1}$. Then $g$ is in fact an automorphism since, if $g$ has indeterminacy, $Y$ must have a $K_Y$-negative curve. It is obvious that $\mathcal O_g(\mu(P))$ is also Zariski dense in $Y$. Since $\mu(\Exc(\mu))$ is a finite set, there is a positive integer $n_0$ such that $\mu(f^n(P))= g^n(\mu(P)) \not\in \mu(\Exc(\mu))$ for $n \geq n_0$. So we have $f^n(P) \not\in \Exc(\mu)$ for $n \geq n_0$. Replacing $P$ by $f^{n_0}(P)$, we may assume that $\mathcal O_f(P) \subset X \setminus \Exc(\mu)$. Applying Theorem \ref{Theorem:BirationalInvariance} (i) to $P$, it follows that $\alpha_f(P)=\alpha_g(\mu(P))$. We know that $\alpha_g(\mu(P))$ exists since $g$ is a morphism. So $\alpha_f(P)$ also exists. The equality $\alpha_g(\mu(P))=\delta_g$ holds as a consequence of Conjecture \ref{KS} for automorphisms on surfaces (cf.~Remark \ref{results} (3)). Since dynamical degree is invariant under birational conjugacy, it follows that $\delta_g=\delta_f$. 
So we obtain the equality $\alpha_f(P)=\delta_f$. \end{proof} \begin{thm}[Theorem \ref{Theorem:ToricEndo}]\label{thm3.3.2} Let $X$ be a smooth projective toric variety and $f\colon X\longrightarrow X$ a toric surjective endomorphism on $X$. Then Conjecture \ref{KS} holds for $f$. \end{thm} \begin{proof} Let $\mathbb G_m^d \subset X$ be the torus embedded as an open dense subset in $X$. Then $f|_{\mathbb G_m^d} \colon \mathbb G_m^d \longrightarrow \mathbb G_m^d$ is a homomorphism of algebraic groups by assumption. Let $\mathbb G_m^d \subset \mathbb P^d$ be the natural embedding of $\mathbb G_m^d$ into the projective space $\mathbb P^d$ and let $g \colon \mathbb P^d \dashrightarrow \mathbb P^d$ be the induced rational self-map. Then $g$ is a monomial map. Take $P \in X(\overline k)$ such that $\mathcal O_f(P)$ is Zariski dense. Note that $\alpha_f(P)$ exists since $f$ is a morphism. Since $\mathcal O_f(P)$ is Zariski dense and $f(\mathbb G_m^d) \subset \mathbb G_m^d$, there is a positive integer $n_0$ such that $f^n(P) \in \mathbb G_m^d$ for $n \geq n_0$. By replacing $P$ by $f^{n_0}(P)$, we may assume that $\mathcal O_f(P) \subset \mathbb G_m^d$. Applying Theorem \ref{Theorem:BirationalInvariance} (i) to $P$, it follows that $\alpha_f(P)=\alpha_g(P)$. The equality $\alpha_g(P)=\delta_g$ holds as a consequence of Conjecture \ref{KS} for monomial maps (cf.~Remark \ref{results} (4)). Since the dynamical degree is invariant under birational conjugacy, it follows that $\delta_g=\delta_f$. So we obtain the equality $\alpha_f(P)=\delta_f$. \end{proof} \section{Endomorphisms on surfaces}\label{Section:EndomorphismsOnSurfaces} We start to prove Theorem \ref{Theorem:MainTheorem}. Since Conjecture \ref{KS} for automorphisms on surfaces was already proved by Kawaguchi (see Remark \ref{results} (3)), it is sufficient to prove Theorem \ref{Theorem:MainTheorem} for \textit{non-trivial} endomorphisms, that is, surjective endomorphisms which are not automorphisms.
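\begin{rmk}
To fix ideas, a simple example of a non-trivial endomorphism is $f \colon {\mathbb P}^1 \times {\mathbb P}^1 \longrightarrow {\mathbb P}^1 \times {\mathbb P}^1$, $(x,y) \mapsto (x^2, y^3)$. Writing $\NS({\mathbb P}^1 \times {\mathbb P}^1) = {\mathbb Z}H_1 \oplus {\mathbb Z}H_2$ for the classes of the fibers of the two projections, we have $f^\ast H_1 = 2H_1$ and $f^\ast H_2 = 3H_2$, so $\deg(f) = 6$ and $\delta_f = \rho(f^\ast) = 3$ (cf.~Remark \ref{n-th power of delta}).
\end{rmk}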
Let $f \colon X \longrightarrow X$ be a non-trivial endomorphism on a surface. First we divide the proof of Theorem \ref{Theorem:MainTheorem} according to the Kodaira dimension of $X$. (I) $\kappa(X)=-\infty$; we need the following result due to Nakayama. \begin{lem}[cf.~{\cite[Proposition 10]{Nakayama}}]\label{lem_inv} Let $f \colon X \longrightarrow X$ be a non-trivial endomorphism on a surface $X$ with $\kappa(X)=-\infty$. Then there is a positive integer $m$ such that $f^m(E)=E$ for any irreducible curve $E$ on $X$ with negative self-intersection. \end{lem} \begin{proof} See {\cite[Proposition 10]{Nakayama}}. \end{proof} Let $\mu \colon X \longrightarrow X'$ be the contraction of a $(-1)$-curve $E$ on $X$. By Lemma \ref{lem_inv}, there is a positive integer $m$ such that $f^m(E)=E$. Then $f^m$ induces an endomorphism $f' \colon X' \longrightarrow X'$ such that $\mu \circ f^m= f' \circ \mu$. Using Lemma \ref{Lemma:iterate} and Theorem \ref{Theorem:BirationalInvariance}, the assertion of Theorem \ref{Theorem:MainTheorem} for $f$ follows from that for $f'$. Continuing this process, we may assume that $X$ is relatively minimal. When $X$ is irrational and relatively minimal, $X$ is a $ {\mathbb{P}}^{1}$-bundle over a curve $C$ with $g(C) \geq 1$. When $X$ is rational and relatively minimal, $X$ is isomorphic to $\mathbb P^2$ or the Hirzebruch surface $\mathbb F_n= \mathbb P(\mathcal O_{\mathbb P^1} \oplus \mathcal O_{\mathbb P^1}(-n))$ for some $n \geq 0$ with $n \neq 1$. Note that Conjecture \ref{KS} holds for surjective endomorphisms on projective spaces (see Remark \ref{results} (1)). (II) $\kappa(X)=0$; for surfaces with non-negative Kodaira dimension, we use the following result due to Fujimoto. \begin{lem}[cf.~{\cite[Lemma 2.3 and Proposition 3.1]{Fujim}}]\label{lem_min} Let $f \colon X \longrightarrow X$ be a non-trivial endomorphism on a surface $X$ with $\kappa(X) \geq 0$. Then $X$ is minimal and $f$ is \'etale. 
\end{lem} \begin{proof} See \cite[Lemma 2.3 and Proposition 3.1]{Fujim}. \end{proof} So $X$ is either an abelian surface, a hyperelliptic surface, a K3 surface, or an Enriques surface. Since $f$ is \'etale, we have $\chi(X, \mathcal O_X)= \deg(f) \chi(X, \mathcal O_X)$. Now $\deg(f) \geq 2$ by assumption, so $\chi(X, \mathcal O_X)=0$ (cf.~\cite[Corollary 2.4]{Fujim}). Hence $X$ must be either an abelian surface or a hyperelliptic surface because K3 surfaces and Enriques surfaces have non-zero Euler characteristic $\chi(X, \mathcal O_X)$. Note that Conjecture \ref{KS} is valid for endomorphisms on abelian varieties (see Remark \ref{results} (5)). (III) $\kappa(X)=1$; this case will be treated in Section \ref{Section:EllipticSurface}. (IV) $\kappa(X)=2$; the following fact is well-known. \begin{lem}\label{Lemma:EndomorphismsOnCurvesOfGeneralType} Let $X$ be a smooth projective variety of general type. Then any surjective endomorphism on $X$ is an automorphism. Furthermore, the automorphism group $\Aut(X)$ of $X$ is finite. \end{lem} \begin{proof} See {\cite[Proposition 2.6]{Fujim}}, {\cite[Theorem 11.12]{Iitaka}}, or {\cite[Corollary 2]{Matsumura}}. \end{proof} So there is no non-trivial endomorphism on $X$. In summary, the remaining cases for the proof of Theorem \ref{Theorem:MainTheorem} are the following: \begin{itemize} \item Non-trivial endomorphisms on $ {\mathbb{P}^{1}}$-bundles over a curve. \item Non-trivial endomorphisms on hyperelliptic surfaces. \item Non-trivial endomorphisms on surfaces of Kodaira dimension 1. \end{itemize} \begin{rmk}\label{rmk_classification} Fujimoto and Nakayama gave a complete classification of surfaces which admit non-trivial endomorphisms (cf.~\cite[Theorem 1.1]{FN2}, \cite[Proposition 3.3]{Fujim}, \cite[Theorem 3]{Nakayama}, and \cite[Appendix to Section 4]{FN1}).
\end{rmk} \section{Some properties of ${\mathbb P}^1$-bundles over curves}\label{Section:RuledSurface} In this section, we recall and prove some properties of ${\mathbb P}^1$-bundles (see \cite[Chapter V.2]{AG}, \cite{Homma1}, \cite{Homma2} for details). Throughout this section, let $X$ be a ${\mathbb P}^1$-bundle over a curve $C$. Let $\pi\colon X\longrightarrow C$ be the projection. \begin{prop}\label{Proposition:StructureOfRuledSurfaces} We can represent $X$ as $X\cong {\mathbb P}(\mathcal{E})$, where $\mathcal{E}$ is a locally free sheaf of rank 2 on $C$ such that $H^0(\mathcal{E})\neq 0$ but $H^0(\mathcal{E}\otimes \mathcal{L})=0$ for all invertible sheaves $\mathcal{L}$ on $C$ with $\deg \mathcal{L}<0$. The integer $e:=-\deg \mathcal{E}$ does not depend on the choice of such $\mathcal E$. Furthermore, there is a section $\sigma\colon C\longrightarrow X$ with image $C_0$ such that $\mathcal{O}_X(C_0)\cong \mathcal{O}_X(1).$ \end{prop} \begin{proof} See \cite[V, Proposition 2.8]{AG}. \end{proof} \begin{lem} The Picard group and the N\'eron--Severi group of $X$ have the following structure: \begin{align*} \Pic(X)&\cong \mathbb{Z} \oplus \pi^\ast \Pic(C),\\ \NS(X)&\cong \mathbb{Z} \oplus \pi^\ast \NS(C) \cong \mathbb{Z} \oplus \mathbb{Z}. \end{align*} Furthermore, the image $C_0$ of the section $\sigma\colon C\longrightarrow X$ in Proposition \ref{Proposition:StructureOfRuledSurfaces} generates the first direct factor of $\Pic(X)$ and $\NS(X)$. \end{lem} \begin{proof} See \cite[V, Proposition 2.3]{AG}. \end{proof} \begin{lem}\label{Lemma:FiberPreservingConditions} Let $F\in \NS(X)$ be the class of a fiber $\pi^{-1}(p)=\pi^\ast p$ over a point $p\in C(\overline{k})$, and $e$ the integer defined in Proposition \ref{Proposition:StructureOfRuledSurfaces}. Then the intersection numbers of the generators of $\NS(X)$ are as follows: \begin{align*} F \cdot F&= 0,\\ F \cdot C_0&= 1,\\ C_0 \cdot C_0&= -e.
\end{align*} \end{lem} \begin{proof} It is easy to see that the equalities $F \cdot F=0$ and $F \cdot C_0=1$ hold. For the last equality, see \cite[V, Proposition 2.9]{AG}. \end{proof} We say that a surjective endomorphism $f$ on $X$ {\it preserves fibers} if there is an endomorphism $f_C$ on $C$ such that $\pi\circ f=f_C\circ \pi$. In our situation, since there is a section $\sigma \colon C\longrightarrow X$, $f$ preserves fibers if and only if, for any point $p\in C$, there is a point $q\in C$ such that $f(\pi^{-1}(p)) \subset \pi^{-1}(q)$. The following lemma appears in \cite[p.~18]{Amerik} in a more general form, and the proof in the general case is similar to ours; since we need it only for ${\mathbb P}^1$-bundles over a curve, we treat only that case. \begin{lem}\label{Lemma:FiberPreserving} For any surjective endomorphism $f$ on $X$, the iterate $f^2$ preserves fibers. \end{lem} \begin{proof} By the projection formula, the fibers of $\pi\colon X \longrightarrow C$ can be characterized as connected curves having intersection number zero with every fiber $F_p=\pi^\ast p$, $p\in C$. Hence, to check that the iterate $f^2$ sends fibers to fibers, it suffices to show that $(f^2)^\ast (\pi^\ast \NS(C)_{\mathbb R}) = \pi^\ast\NS(C)_{\mathbb R}$. Since $\pi^\ast\NS(C)_{\mathbb R}$ is a hyperplane in $\NS(X)_{\mathbb R}$ such that any divisor class $D$ from this hyperplane satisfies $D \cdot D = 0$, its pullback $f^\ast\pi^\ast \NS(C)_{\mathbb R}$ is a hyperplane with the same property. There are at most two such hyperplanes, because the self-intersection form $\NS(X)_{\mathbb R} \longrightarrow{\mathbb R}$ is a quadratic form in the coefficients with respect to the basis $C_0$, $F$, and its zero locus is therefore a union of at most two lines. Hence, $f^*$ fixes or interchanges them and so $(f^2)^*$ fixes them. \end{proof} \begin{lem}\label{Lemma:EquivalenceOfFiberPreserving} A surjective endomorphism $f$ preserves fibers if and only if there exists a non-zero integer $a$ such that $f^\ast F \equiv aF$.
Here, $F$ is the numerical class of a fiber. \end{lem} \begin{proof} Assume $f^{*}F\equiv aF$. For any point $p\in C$, we set $F_p:=\pi^{-1}(p)=\pi^\ast p$. If $f$ does not preserve fibers, there is a point $p \in C$ such that $f(F_{p}) \cdot F>0$. Now we can calculate the intersection number as follows: \begin{align*} 0&= F \cdot aF = F \cdot (f^\ast F) = F_{p} \cdot (f^\ast F)\\ &= (f_\ast F_{p}) \cdot F =\deg(f |_{F_{p}})\cdot (f(F_{p}) \cdot F) > 0. \end{align*} This is a contradiction. Hence $f$ preserves fibers. Next, assume that $f$ preserves fibers. Write $f^\ast F =aF+bC_0$. Then we can also calculate the intersection number as follows: \begin{align*} b&= F \cdot (aF+bC_0) = F \cdot f^\ast F = (f_\ast F) \cdot F\\ &= \deg(f |_F)\cdot(F \cdot F) = 0. \end{align*} Further, by the injectivity of $f^\ast$, we have $a\neq 0$. The proof is complete. \end{proof} \begin{lem}\label{Lemma:PositivityOfInvariant} If $\mathcal{E}$ splits, i.e., if there is an invertible sheaf $\mathcal{L}$ on $C$ such that $\mathcal{E}\cong \mathcal{O}_C \oplus \mathcal{L}$, the invariant $e$ of $X= {\mathbb P} (\mathcal{E})$ is non-negative. \end{lem} \begin{proof} See \cite[V, Example 2.11.3]{AG}. \end{proof} \begin{lem}\label{Lemma:Ampleness} Assume that $e\geq 0$. Then for a divisor $D=aF+bC_0\in \NS(X)$, the following properties are equivalent. \begin{itemize} \item $D$ is ample. \item $a>be$ and $b>0.$ \end{itemize} \end{lem} \begin{proof} See \cite[V, Proposition 2.20]{AG}. \end{proof} We can prove a result stronger than Lemma \ref{Lemma:FiberPreserving} as follows. \begin{lem} Assume that $e>0$. Then any surjective endomorphism $f\colon X\longrightarrow X$ preserves fibers. \end{lem} \begin{proof} By Lemma \ref{Lemma:EquivalenceOfFiberPreserving}, it is enough to prove $f^\ast F \equiv aF$ for some integer $a>0$. We can write $f^\ast F \equiv aF+bC_0$ for some integers $a,b\geq 0$. 
Since we have $$ a F + b C_0 = ( a - be ) F + b ( e F + C_0 ) $$ and $f^\ast$ preserves the nef cone and the ample cone, the class $f^\ast F$, like $F$, lies on the boundary of the nef cone; hence $a-be=0$ or $b = 0$ holds. We have \begin{align*} 0&= \deg(f)(F \cdot F) = (f_\ast f^\ast F) \cdot F\\ &= (f^\ast F) \cdot (f^\ast F) = (aF+bC_0) \cdot (aF+bC_0)\\ &= 2ab-b^2e = b(2a-be). \end{align*} So $b=0$ or $2a-be=0$ holds. If $b\neq 0$, then $a-be=0$ and $2a-be=0$, so $a=0$ and hence $be=0$. Since $e>0$, we obtain $b=0$, a contradiction. Consequently, we get $b=0$ and $f^*F \equiv aF$. \end{proof} \begin{lem}\label{degrees} Fix a fiber $F=F_p$ for a point $p\in C(\overline{k})$. Let $f$ be a surjective endomorphism on $X$ preserving fibers, $f_{C}$ the endomorphism on $C$ satisfying $\pi \circ f=f_C \circ \pi$, and $f_F:=f|_F\colon F\longrightarrow f(F)$ the restriction of $f$ to the fiber $F$. Set $f ^\ast F\equiv aF$ and $f^\ast C_0\equiv cF+dC_0$. Then we have $a=\deg(f_C)$, $d=\deg(f_F)$, $\deg(f)=ad$, and $\delta_f=\max\{ a,d\}$. \end{lem} \begin{proof} Our assertions follow from the following equalities of divisor classes in $\NS(X)$ and of intersection numbers: \begin{align*} aF &= f^\ast F = f^\ast \pi^\ast p\\ &= \pi^\ast f_C^\ast p = \pi^\ast (\deg(f_C) p)\\ &= \deg(f_C)\pi^\ast p = \deg(f_C)F,\\ \deg(f)F &= f_\ast f^\ast F = f_\ast f^\ast \pi^\ast p\\ &= f_\ast \pi^\ast f_C^\ast p = f_\ast \pi^\ast (\deg(f_C)p)\\ &= \deg(f_C) f_\ast F = \deg(f_C) \deg(f_F) f(F)\\ &= \deg(f_C) \deg(f_F) F,\\ \deg(f) &= \deg(f)C_0 \cdot F = (f_\ast f^\ast C_0) \cdot F\\ &= (f^\ast C_0) \cdot (f^\ast F)= (cF+dC_0) \cdot aF = ad. \end{align*} The last assertion $\delta_f=\max\{a,d \}$ follows from the functoriality of $f^\ast$ and the equality $\delta_f=\lim_{n\to\infty}\rho((f^n)^\ast)^{1/n}$ (cf.~Remark \ref{n-th power of delta}). \end{proof} \begin{lem}\label{Lemma:CulculationOfDynamicalDegree} Let notation be as in Lemma \ref{degrees}. Assume that $e\geq 0$.
Then both $F$ and $C_0$ are eigenvectors of $f^* \colon \NS(X)_\mathbb R \longrightarrow \NS(X)_\mathbb R$. Further, if $e$ is positive, then we have $\deg(f_C)= \deg (f_F)$. \end{lem} \begin{proof} Set $f^\ast F = aF$ and $f^\ast C_0= cF+dC_0$ in $\NS(X)$. Then we have \begin{align*} -ead &= -e\deg f = (f_\ast f^\ast C_0) \cdot C_0\\ &= (f^\ast C_0)^2 = (cF+dC_0)^2 = 2cd-ed^2. \end{align*} Hence, we get $c=e(d-a)/2$. We have the following equalities in $\NS(X)$: \begin{align*} f^\ast(eF + C_0) = aeF + (cF+dC_0) = (ae+c)F+dC_0. \end{align*} Since $f^\ast D$ is ample if and only if $D$ is ample, the pullback $f^\ast$ preserves the nef cone and its boundary, which consists of the rays spanned by $F$ and $eF+C_0$. As $f^\ast(eF+C_0)=(ae+c)F+dC_0$ is not proportional to $F$ (we have $d=\deg(f_F)\geq 1$), it follows that $eF+C_0$ is an eigenvector of $f^\ast$. Thus, we have \begin{align*} de = ae+c = ae+e(d-a)/2 = e(d+a)/2. \end{align*} Therefore, the equality $e(d-a)=0$ holds. So $c=e(d-a)/2=0$ holds. Further, we assume that $e>0$. Then it follows that $d-a=0$. So we have $\deg(f_C)=a=d=\deg(f_F)$. \end{proof} The following lemma is used in Subsection \ref{Subsection:Elliptic}. \begin{lem}\label{lem_three_sections} Let $\mathcal{L}$ be a non-trivial invertible sheaf of degree $0$ on a curve $C$ with $g(C)\geq 1$, $\mathcal{E}=\mathcal{O}_C\oplus \mathcal{L}$, and $X={\mathbb P}(\mathcal{E})$. Let $C_0, C_1$ be the images of the sections corresponding to the projections $\mathcal{E}\longrightarrow \mathcal{L}$ and $\mathcal{E}\longrightarrow \mathcal{O}_C$. If $\sigma\colon C \longrightarrow X$ is a section such that $(\sigma(C))^2=0$, then $\sigma(C)$ is equal to $C_0$ or $C_1$. \end{lem} \begin{proof} Note that $e=0$ in this case and thus $(C_{0}^{2})=0$. Moreover, $ \mathcal{O}_{X}(C_{0}) \cong \mathcal{O}_{X}(1)$ and $ \mathcal{O}_{X}(C_{1}) \cong \mathcal{O}_{X}(1) {\otimes} \pi^{*} \mathcal{L}^{-1}$. Let $F$ be the numerical class of a fiber. Set $\sigma(C) \equiv aC_{0}+bF$. Then $a=(\sigma(C)\cdot F)=1$ and $2ab=(\sigma(C)^{2})=0$. Thus $\sigma(C)\equiv C_{0}$.
Therefore, $ \mathcal{O}_{X}(\sigma(C)) \cong \mathcal{O}_{X}(C_{0}) {\otimes} \pi^{*} \mathcal{N}$ for some invertible sheaf $ \mathcal{N}$ of degree $0$ on $C$. Then \begin{align*} 0&\neq H^{0}( X,\mathcal{O}_{X}( \sigma(C))) = H^{0}(C, \pi_{*} \mathcal{O}_{X}(C_{0}) {\otimes} \mathcal{N})\\ &= H^{0}(C, (\mathcal{L} {\oplus} \mathcal O_C)\otimes \mathcal{N}) \end{align*} and this implies $ \mathcal{N}\cong \mathcal{O}_{C}$ or $ \mathcal{N} \cong \mathcal{L}^{-1}$. Hence $ \mathcal{O}_{X}(\sigma(C))$ is isomorphic to $\mathcal{O}_{X}(C_{0})$ or $\mathcal{O}_{X}(C_{0}) {\otimes} \pi^{*} \mathcal{L}^{-1}= \mathcal{O}_{X}(C_{1})$. Since $ \mathcal{L}$ is non-trivial, we have $H^{0}( \mathcal{O}_{X}(C_{0}))=H^{0}( \mathcal{O}_{X}(C_{1}))=\overline{k}$ and we get $\sigma(C)=C_{0}$ or $C_{1}$. \end{proof} \section{${\mathbb P}^1$-bundles over curves}\label{Section:KSCforRuled} In this section, we prove Conjecture \ref{KS} for non-trivial endomorphisms on ${\mathbb P}^1$-bundles over curves. We divide the proof according to the genus of the base curve. \subsection{$\mathbb P^1$-bundles over $\mathbb P^1$}\label{Subsection:Hirzebruch} \begin{thm}\label{prop:Hirz} Let $\pi\colon X\longrightarrow \mathbb P^1$ be a ${\mathbb P}^1$-bundle over $\mathbb P^1$ and $f \colon X \longrightarrow X$ a non-trivial endomorphism. Then Conjecture \ref{KS} holds for $f$. \end{thm} \begin{proof} Take a locally free sheaf $\mathcal E$ of rank $2$ on $\mathbb P^1$ such that $X\cong {\mathbb P}(\mathcal{E})$ and $\deg\mathcal{E} =-e$ (cf.~Proposition \ref{Proposition:StructureOfRuledSurfaces}). Then $\mathcal{E}$ splits (see \cite[V, Corollary 2.14]{AG}). When $X$ is isomorphic to ${\mathbb P}^1\times {\mathbb P}^1$, i.e.~the case of $e=0$, the assertion holds by \cite[Theorem 1.3]{sano1}.
When $X$ is not isomorphic to ${\mathbb P}^1 \times {\mathbb P}^1$, i.e.~the case of $e>0$, the endomorphism $f$ preserves fibers and induces an endomorphism $f_{ {\mathbb{P}}^{1}}$ on the base curve $ {\mathbb{P}}^{1}$. By Lemma \ref{Lemma:CulculationOfDynamicalDegree}, we have $\delta_f=\delta_{f_{{\mathbb P}^1}}$. Fix a point $p \in {\mathbb P}^1$ and set $F=\pi^\ast p$. Let $P\in X(\overline{k})$ be a point whose forward $f$-orbit is Zariski dense in $X$. Then the forward $f_{{\mathbb P}^1}$-orbit of $\pi (P)$ is also Zariski dense in ${\mathbb P}^1$. Now the assertion follows from the following computation. \begin{align*} \alpha_{f}(P) &\geq \lim_{n\to\infty} h_F(f^n(P))^{1/n}= \lim_{n\to \infty} h_{\pi^\ast p}(f^n(P))^{1/n}\\ &= \lim_{n\to \infty} h_{p}(\pi\circ f^n(P))^{1/n} = \lim_{n\to \infty} h_{p}(f_{{\mathbb P}^1}^n \circ \pi(P))^{1/n} =\delta_{f_{{\mathbb P}^1}}=\delta_{f}. \end{align*} \end{proof} \subsection{$\mathbb P^1$-bundles over genus one curves}\label{Subsection:Elliptic} In this subsection, we prove Conjecture \ref{KS} for all endomorphisms on a ${\mathbb P}^1$-bundle over a curve $C$ of genus one. The following result is due to Amerik. Note that Amerik in fact proved it for $\mathbb P^1$-bundles over varieties of arbitrary dimension (cf.~\cite{Amerik}). \begin{lem}[Amerik]\label{Lemma:FiniteBaseChange} Let $X=\mathbb P(\mathcal E)$ be a $\mathbb P^1$-bundle over a curve $C$. If $X$ has a fiber-preserving surjective endomorphism whose restriction to a general fiber has degree greater than 1, then $\mathcal{E}$ splits into a direct sum of two line bundles after a finite base change. Furthermore, if $\mathcal E$ is semistable, then $\mathcal{E}$ splits into a direct sum of two line bundles after an \'etale base change. \end{lem} \begin{proof} See \cite[Theorem 2 and Proposition 2.4]{Amerik}. \end{proof} The following lemma is used when we take the base change by an \'etale cover of a genus one curve.
\begin{lem}\label{Lemma:EndomorphismOnTrivializedFibration} Let $E$ be a curve of genus one with an endomorphism $f\colon E\longrightarrow E$. If $g\colon E' \longrightarrow E$ is a finite \'etale covering of $E$, there exists a finite \'etale covering $h\colon E''\longrightarrow E'$ and an endomorphism $f '\colon E'' \longrightarrow E''$ such that $f \circ g\circ h=g\circ h \circ f'$. Furthermore, we can take $h$ so that $E''=E$. \end{lem} \begin{proof} First, since $E'$ is an \'etale cover of the genus one curve $E$, the curve $E'$ also has genus one. Fixing a rational point $p\in E'(\overline{k})$ and its image $g(p)\in E(\overline{k})$ as the origins, we regard $E$ and $E'$ as elliptic curves and $g$ as an isogeny between them. Let $h:=\hat{g}\colon E\longrightarrow E'$ be the dual isogeny of $g$. The morphism $f$ decomposes as $f=\tau_c \circ \psi$ for a homomorphism $\psi$ and a translation map $\tau_c$ by $c\in E(\overline{k})$. Fix a rational point $c'\in E(\overline{k})$ such that $[\deg(g)](c')=c$ and consider the translation map $\tau_{c'}$, where $[\deg(g)]$ is the multiplication by $\deg(g)$. We set $f'=\tau_{c'} \circ\psi$. Then we have the following equalities. \begin{align*} &\hphantom{=} f \circ g\circ h = \tau_c \circ \psi \circ g \circ \hat{g}\\ &= \tau_c \circ \psi \circ [\deg(g)] = \tau_c \circ [\deg(g)] \circ \psi\\ &= [\deg(g)]\circ \tau_{c'} \circ \psi = g\circ h \circ f'. \end{align*} This proves the assertion. \end{proof} \begin{prop}\label{prop:reduction elliptic} Let $ \mathcal{E}$ be a locally free sheaf of rank $2$ on a genus one curve $C$ and $X = {\mathbb{P}}( \mathcal{E})$. Suppose that Conjecture \ref{KS} holds for any non-trivial endomorphism on ${\mathbb{P}}( \mathcal{O}_{C} \oplus \mathcal{L})$, where $ \mathcal{L}$ is a line bundle of degree zero on $C$. Then Conjecture \ref{KS} holds for any non-trivial endomorphism on $X={\mathbb P}(\mathcal{E})$ for arbitrary $\mathcal{E}$.
\end{prop} \begin{proof} By Lemma \ref{Lemma:FiberPreserving} and Lemma \ref{Lemma:iterate}, we may assume that $f$ preserves fibers. We can prove Conjecture \ref{KS} in the case of $\deg(f |_F)=1$ in the same way as in the case of $g(C)=0$ since $\deg(f |_F)=1\leq \deg(f_C)$. Since we are considering the case of $g(C)=1$, if $\mathcal{E}$ is indecomposable, then $\mathcal{E}$ is semistable (see \cite[10.2 (c), 10.49]{Mukai} or \cite[V, Exercise 2.8 (c)]{AG}). By Lemma \ref{Lemma:FiniteBaseChange}, if $\deg(f |_F)>1$ and $\mathcal{E}$ is indecomposable, there is a finite \'etale covering $g\colon E\longrightarrow C$ such that $E\times _C X\cong {\mathbb P}(\mathcal{O}_E \oplus \mathcal{L})$ for some invertible sheaf $\mathcal{L}$ on $E$. Furthermore, by Lemma \ref{Lemma:EndomorphismOnTrivializedFibration}, we can take $E$ equal to $C$, and there is an endomorphism $f_C'\colon C\longrightarrow C$ satisfying $f_C\circ g=g\circ f_C'$. Then, by the universal property of the fiber product $X\times_{C,g}C$, an endomorphism $f'\colon X\times _{C,g}C\longrightarrow X\times _{C,g}C$ is induced. By Lemma \ref{Lemma:ReductionByFiniteMorphisms}, it is enough to prove Conjecture \ref{KS} for the endomorphism $f'$. Thus, we may assume that $\mathcal{E}$ is decomposable, i.e., $X\cong {\mathbb P}(\mathcal{O}_C\oplus \mathcal{L})$. Then the invariant $e$ is non-negative by Lemma \ref{Lemma:PositivityOfInvariant}. When $e$ is positive, the proof is completed in the same way as the proof of Theorem \ref{Theorem:MainTheorem} in the case of $g(C)=0$. When $e=0$, we have $\deg \mathcal{L}=0$ and the assertion holds by the assumption. \end{proof} In the rest of this subsection, we keep the following notation. Let $C$ be a genus one curve and $ \mathcal{L}$ an invertible sheaf on $C$ with degree $0$. Let $X= {\mathbb{P}}( \mathcal{O}_{C} \oplus \mathcal{L})= {\rm Proj} ({\rm Sym}( \mathcal{O}_{C} \oplus \mathcal{L}))$ and $\pi \colon X \longrightarrow C$ the projection.
When $ \mathcal{L}$ is trivial, we have $X \cong C \times {\mathbb{P}}^{1}$, and by \cite[Theorem 1.3]{sano1}, Conjecture \ref{KS} is true for $X$. Thus we may assume $ \mathcal{L}$ is non-trivial. In this case, we have two sections of $\pi \colon X \longrightarrow C$ corresponding to the projections $\mathcal{O}_{C} \oplus \mathcal{L} \longrightarrow \mathcal{L}$ and $\mathcal{O}_{C} \oplus \mathcal{L} \longrightarrow \mathcal{O}_{C}$. Let $C_{0}$ and $C_{1}$ denote the images of these sections. Then we have $ \mathcal{O}_{X}(C_{0})= \mathcal{O}_{X}(1)$ and $ \mathcal{O}_{X}(C_{1})= \mathcal{O}_{X}(1) {\otimes} \pi^{*} \mathcal{L}^{-1}$. Since $ \mathcal{L}$ is non-trivial, we have $C_{0}\neq C_{1}$. But since $\deg \mathcal{L}=0$, $C_{0}$ and $C_{1}$ are numerically equivalent. Thus $(C_{0}\cdot C_{1})=(C_{0}^{2})=0$ and therefore $C_{0}\cap C_{1}=\emptyset$. Let $f$ be a non-trivial endomorphism on $X$ such that there is a surjective endomorphism $f_{C} \colon C\longrightarrow C$ with $\pi \circ f=f_{C} \circ \pi$. \begin{lem}\label{Lemma:torsion case} When $ \mathcal{L}$ is a torsion element of $\Pic C$, Conjecture \ref{KS} holds for $f$. \end{lem} \begin{proof} We fix an algebraic group structure on $C$. Since $ \mathcal{L}$ is torsion, there exists a positive integer $n>0$ such that $[n]^{*} \mathcal{L} \cong \mathcal{O}_{C}$. Then the base change of $\pi \colon X \longrightarrow C$ by $[n] \colon C\longrightarrow C$ is the trivial $\mathbb P^1$-bundle ${\mathbb{P}}^{1} \times C \longrightarrow C$. Applying Lemma \ref{Lemma:EndomorphismOnTrivializedFibration} to $g=[n]$, we get a finite morphism $h \colon C\longrightarrow C$ such that the base change of $\pi \colon X \longrightarrow C$ by $h \colon C\longrightarrow C$ is $ {\mathbb{P}}^{1} \times C \longrightarrow C$ and there exists a finite morphism $f_{C}' \colon C \longrightarrow C$ with $f_{C} \circ h=h\circ f_{C}'$.
Then $f$ induces a non-trivial endomorphism $f' \colon {\mathbb{P}}^{1} \times C \longrightarrow {\mathbb{P}}^{1} \times C$. By \cite[Theorem 1.3]{sano1}, Conjecture \ref{KS} holds for $f'$. By Lemma \ref{Lemma:ReductionByFiniteMorphisms}, Conjecture \ref{KS} holds also for $f$. \end{proof} Now, let $F$ be the numerical class of a fiber of $\pi$. By Lemma \ref{Lemma:CulculationOfDynamicalDegree}, we have \begin{align*} &f^{*}F \equiv aF,\\ &f^{*}C_{0} \equiv bC_{0} \end{align*} for some integers $a,b\geq1$. Note that $a=\deg f_C$, $b=\deg f|_F$ and $ab=\deg f$ (cf.~Lemma \ref{degrees}). \begin{lem}\label{Lemma: images of C_{i}} \ \begin{enumerate} \item[\rm (1)] One of the following holds: $f(C_{0})=C_{0}$; $f(C_{0})=C_{1}$; or $f(C_{0})\cap C_0=f(C_{0})\cap C_1=\emptyset$. The same is true for $f(C_{1})$. \item[\rm (2)] If $f(C_{0})\cap C_{i}=\emptyset$ for $i=0,1$, then the base change of $\pi \colon X \longrightarrow C$ by $f_{C} \colon C \longrightarrow C$ is isomorphic to $ {\mathbb{P}}^{1} \times C$. In particular, $f_{C}^{*} \mathcal{L} \cong \mathcal{O}_{C}$ and $ \mathcal{L}$ is a torsion element of $\Pic C$. The same conclusion holds under the assumption that $f(C_{1})\cap C_{i}=\emptyset$ for $i=0,1$. \end{enumerate} \end{lem} \begin{proof} (1) Since $f^{*}C_{i} \equiv bC_{i}$, $C_{0}\equiv C_{1}$ and $(C_{0}^{2})=0$, we have $(f_{*}C_{i}\cdot C_{j})=0$ for every $i$ and $j$. Thus the assertion follows. (2) Assume $f(C_{0})\cap C_{i}=\emptyset$ for $i=0,1$. Consider the following Cartesian diagram. \[ \xymatrix{ Y \ar[r]^{g} \ar[d]_{\pi'} & X \ar[d]^{\pi}\\ C \ar[r]^{f_{C}} & C } \] Then $Y$ is a $\mathbb P^1$-bundle over $C$ associated with the vector bundle $ \mathcal{O}_{C} \oplus f_{C}^{*} \mathcal{L}$. The pull-backs $C_{i}'=g^{-1}(C_{i})$, $i=0,1$, are sections of $\pi'$. By the projection formula, we have $(C_{i}'^{2})=0$. Let $\sigma \colon C \longrightarrow X$ be the section with $\sigma (C) = C_{0}$.
Since $\pi \circ f \circ \sigma=f_{C}$, we get a section $s \colon C \longrightarrow Y$ of $\pi'$. \[ \xymatrix{ &C \ar[ldd]_{s} \ar[d]^{\sigma} \ar@/_11mm/[lddd]_{\rm id}\\ &X \ar[d]^{f}\\ Y \ar[d]^{\pi'} \ar[r]^{g} & X \ar[d]^{\pi}\\ C \ar[r]_{f_{C}} &C } \] Note that $g(s(C))=f(C_{0}) \neq C_{0}, C_{1}$. Thus $s(C), C_{0}', C_{1}'$ are distinct sections of $\pi'$. Moreover, by the projection formula, we have $(s(C)\cdot C_{0}')=0$; since $s(C)$ is a section, this forces $s(C)\equiv C_{0}'$ and $(s(C)^{2})=0$. Thus $s(C), C_{0}', C_{1}'$ are three distinct sections with square zero, numerically equivalent to each other. If $f_{C}^{*} \mathcal{L}$ were non-trivial, Lemma \ref{lem_three_sections} would imply that $s(C)$ coincides with $C_{0}'$ or $C_{1}'$, a contradiction. Hence $f_{C}^{*} \mathcal{L} \cong \mathcal{O}_{C}$ and $Y \cong {\mathbb{P}}^{1} \times C$. Since $f_{C}^{*}\colon \Pic^{0}C\longrightarrow \Pic^{0}C$ is an isogeny, the kernel of $f_{C}^{*}$ is finite and thus $ \mathcal{L}$ is a torsion element of $\Pic C$. \end{proof} \begin{lem}\label{Lemma: images of C_{i}2} \ \begin{enumerate} \item[\rm (1)] Suppose that \begin{itemize} \item $ \mathcal{L}$ is non-torsion in $\Pic C$, \item $f(C_{0})=C_{0}\ \text{or}\ C_{1}$, and \item $f(C_{1})=C_{0}\ \text{or}\ C_{1}$. \end{itemize} Then $f(C_{0})=C_{0}$ and $f(C_{1})=C_{1}$, or $f(C_{0})=C_{1}$ and $f(C_{1})=C_{0}$. \item[\rm (2)] If the equalities $f(C_{0})=C_{0}$ and $f(C_{1})=C_{1}$ hold, then $f^{*}C_{i} \sim_{ {\mathbb{Q}}} bC_{i}$ for $i=0$ and $1$. \end{enumerate} \end{lem} \begin{proof} (1) Assume that $f(C_{0})=C_{0}$ and $f(C_{1})=C_{0}$. Then $f_{*}C_{0}=aC_{0}$ and $f_{*}C_{1}=aC_{0}$ as cycles. Since $f_{C}^{*} \colon \Pic^{0}C \longrightarrow \Pic^{0}C$ is surjective, there exists a degree zero divisor $M$ on $C$ such that $f_{C}^{*} \mathcal{O}_{C}(M) \cong \mathcal{L}$. Then $C_{1} \sim C_{0}-\pi^{*}f_{C}^{*}M$. Hence \[ aC_{0}=f_{*}C_{1} \sim (f_{*}C_{0}-f_{*}\pi^{*}f_{C}^{*}M)=(aC_{0}-f_{*}\pi^{*}f_{C}^{*}M) \] and \[ 0 \sim f_{*}\pi^{*}f_{C}^{*}M \sim f_{*}f^{*}\pi^{*}M\sim (\deg f) \pi^{*}M. \] Thus $\pi^{*}M$ is torsion and so is $M$.
This implies that $ \mathcal{L}$ is torsion, which contradicts the assumption. The same argument shows that the case when $f(C_{0})=C_{1}$ and $f(C_{1})=C_{1}$ does not occur. (2) In this case, we have $f_{*}C_{0} \sim aC_{0}$. We can write $f^{*}C_{0} \sim bC_{0}+\pi^{*}D$ for some degree zero divisor $D$ on $C$. Thus \[ (\deg f)C_{0} \sim f_{*}f^{*}C_{0} \sim abC_{0}+f_{*}\pi^{*}D=(\deg f)C_{0}+f_{*}\pi^{*}D \] and $f_{*}\pi^{*}D\sim 0$. Since $f_{C}^{*} \colon \Pic^{0}C \longrightarrow \Pic^{0}C$ is surjective, there exists a degree zero divisor $D'$ on $C$ such that $f_{C}^{*}D' \sim D$. Then \[ 0\sim f_{*}\pi^{*}D\sim f_{*}\pi^{*}f_{C}^{*}D' \sim f_{*}f^{*}\pi^{*}D' \sim (\deg f)\pi^{*}D'. \] Hence $\pi^{*}D' \sim_{ {\mathbb{Q}}}0$ and $D' \sim _{ {\mathbb{Q}}} 0$. Therefore $D \sim_{ {\mathbb{Q}}} 0$ and $f^{*}C_{0} \sim_{ {\mathbb{Q}}} bC_{0}$. Similarly, we have $f^\ast C_1 \sim _{\mathbb Q} bC_1$. \end{proof} \begin{lem}\label{Lemma: canonical height zero} Suppose $a<b$. If $f^{*}C_{i} \sim_{ {\mathbb{Q}}} bC_{i}$ for $i=0,1$, then the line bundle $\mathcal{L}$ is a torsion element of $\Pic C$. \end{lem} \begin{proof} Let $L$ be a divisor on $C$ such that $ {\mathcal{O}}_{C}(L) \cong \mathcal{L}$. Note that $C_{1}\sim C_{0}-\pi^{*}L$. Thus \[ f^{*}\pi^{*}L\sim f^{*}(C_{0}-C_{1})\sim_\mathbb Q bC_{0}-bC_{1} \sim b\pi^{*}L \] and $f_{C}^{*}L\sim_\mathbb Q bL$ hold. Thus, by the following lemma, $\mathcal{L}$ is a torsion element. \end{proof} \begin{lem}\label{prop:torsion} Let $a,b$ be integers such that $1\leq a<b$. Let $C$ be a curve of genus one defined over an algebraically closed field $k$. Let $f_C\colon C \longrightarrow C$ be an endomorphism with $\deg f_C=a$. If $L$ is a divisor on $C$ of degree $0$ satisfying \[ f_C^\ast L\sim_{\mathbb Q} bL, \] then the divisor $L$ is a torsion element of $\Pic^0(C)$. \end{lem} \begin{proof} By the definition of ${\mathbb Q}$-linear equivalence, we have $f_C^\ast rL \sim brL$ for some positive integer $r$.
Since the curve $C$ is of genus one, the group $\Pic^0(C)$ is an elliptic curve. Assume the (group) endomorphism \[ f_C^\ast -[b]\colon \Pic^0(C)\longrightarrow \Pic^0(C) \] is the $0$ map. Then we have the equalities $a=\deg f_C =\deg f_C^\ast =\deg [b]=b^2$. But this contradicts the inequality $1\leq a <b$. Hence the map $f_C^\ast -[b]$ is an isogeny, and $\Ker (f_C^\ast -[b])\subset \Pic^0(C)$ is a finite group scheme. In particular, the order of $rL \in \Ker (f_C^\ast -[b])(k)$ is finite. Thus, $L$ is a torsion element. \end{proof} \begin{rmk} We can actually prove the following. Let $X$ be a smooth projective variety over $\overline{\mathbb Q}$ and $f \colon X \longrightarrow X$ be a surjective morphism over $\overline{\mathbb Q}$ with first dynamical degree $\delta$. If an ${\mathbb R}$-divisor $D$ on $X$ satisfies \[ f^{*}D \sim_{{\mathbb R}} \lambda D \] for some $\lambda>\delta$, then one has $D \sim_{{\mathbb R}} 0$. \begin{proof}[Sketch of the proof] Consider the canonical height \[ \hat{h}_{D}(P)=\lim_{n \to \infty}h_{D}(f^{n}(P))/\lambda^{n} \] where $h_{D}$ is a height associated with $D$ (cf.\ \cite{callsilv}). If $\hat{h}_{D}(P)\neq 0$ for some $P$, then we can prove $ \overline{\alpha}_{f}(P) \geq \lambda$. This contradicts the fact $\delta \geq \overline{\alpha}_{f}(P)$ and the assumption $\lambda > \delta$. Thus one has $\hat{h}_{D}=0$ and therefore $h_{D}=\hat{h}_{D}+O(1)=O(1)$. By a theorem of Serre, we get $D \sim_{{\mathbb R}} 0$. \end{proof} \end{rmk} \begin{prop}\label{prop:split deg zero case} Let $ \mathcal{L}$ be an invertible sheaf of degree zero on a genus one curve $C$ and $X= {\mathbb{P}}( \mathcal{O}_{C}\oplus \mathcal{L})$. For any non-trivial endomorphism $f \colon X \longrightarrow X$, Conjecture \ref{KS} holds. \end{prop} \begin{proof} By Lemma \ref{Lemma:torsion case} and Lemma \ref{prop:torsion}, we may assume $a \geq b$.
In this case, $\delta_{f}=a$ and Conjecture \ref{KS} can be proved as in the proof of Theorem \ref{prop:Hirz}. \end{proof} \begin{proof}[Proof of Theorem \ref{Theorem:MainTheorem} for ${\mathbb P}^1$-bundles over genus one curves] As we argued at the beginning of Section \ref{Section:EndomorphismsOnSurfaces}, we may assume that the endomorphism $f\colon X\longrightarrow X$ is not an automorphism. Then the assertion follows from Proposition \ref{prop:reduction elliptic} and Proposition \ref{prop:split deg zero case}. \end{proof} \begin{rmk} In the above setting, the line bundle $ \mathcal{L}$ is actually an eigenvector for $f_{C}^{*}$ up to linear equivalence. More precisely, for a ${\mathbb P}^1$-bundle $\pi \colon X= {\mathbb{P}}( \mathcal{O}_{C} \oplus \mathcal{L}) \longrightarrow C$ over a curve $C$ with $\deg \mathcal{L}=0$ and an endomorphism $f \colon X\longrightarrow X$ that induces an endomorphism $f_{C}\colon C \longrightarrow C$, there exists an integer $t$ such that $ f_{C}^{*} \mathcal{L} \cong \mathcal{L}^{t}$. Indeed, let $C_{0}$ and $C_{1}$ be the sections defined above. Since $(f^{*}(C_{0})\cdot C_{0})=0$, we can write $ \mathcal{O}_{X}(f^{-1}(C_{0})) \cong \mathcal{O}_{X}(mC_{0}) {\otimes} \pi^{*} \mathcal{N}$ for some integer $m$ and some degree zero line bundle $ \mathcal{N}$ on $C$. Since \begin{align*} 0&\neq H^{0}( \mathcal{O}_{X}(f^{-1}(C_{0}))) = H^{0}(\mathcal{O}_{X}(mC_{0}) {\otimes} \pi^{*} \mathcal{N})\\ &=H^{0}(\Sym^{m}( \mathcal{O}_{C}\oplus \mathcal{L}) {\otimes} \mathcal{N})=\bigoplus_{i=0}^{m}H^{0}( \mathcal{L}^{i} {\otimes} \mathcal{N}), \end{align*} we have $\mathcal{N} \cong \mathcal{L}^{r}$ for some $-m \leq r \leq0$. Thus $f^{*} \mathcal{O}_{X}(C_{0}) \cong \mathcal{O}_{X}(mC_{0}) {\otimes}\pi^{*} \mathcal{L}^{r}$. The key is the calculation of global sections using the projection formula.
Since $ \mathcal{O}_{X}(C_{1}) \cong \mathcal{O}_{X}(C_{0}) {\otimes} \pi^{*} \mathcal{L}^{-1}$, we have $\pi_{*} \mathcal{O}_{X}(mC_{1}) \cong \pi_{*} \mathcal{O}_{X}(mC_{0}) {\otimes} \mathcal{L}^{-m}$. Moreover, since $C_{0}$ and $C_{1}$ are numerically equivalent, we can similarly get $f^{*} \mathcal{O}_{X}(C_{1}) \cong \mathcal{O}_{X}(mC_{0}) {\otimes} \pi^{*} \mathcal{L}^{s}$ for some integer $s$. Thus, $f^{*}\pi^{*} \mathcal{L} \cong \pi^{*} \mathcal{L}^{r-s}$. Therefore, $\pi^{*}f_{C}^{*} \mathcal{L} \cong \pi^{*} \mathcal{L}^{r-s}$. Since $\pi^{*} \colon \Pic C \longrightarrow \Pic X$ is injective, we get $f_{C}^{*} \mathcal{L} \cong \mathcal{L}^{r-s}$. \end{rmk} \subsection{$\mathbb P^1$-bundles over curves of genus $\geq 2$} \label{Subsection:general type base curve} By the following proposition, Conjecture \ref{KS} trivially holds in this case. \begin{prop} Let $C$ be a curve with $g(C)\geq2$ and $\pi \colon X\longrightarrow C$ be a $ {\mathbb{P}}^{1}$-bundle over $C$. Let $f \colon X \longrightarrow X$ be a surjective endomorphism. Then there exists an integer $t > 0$ such that $f^{t}$ is a morphism over $C$, that is, $f^t$ satisfies $\pi \circ f^{t}=\pi$. In particular, $f$ admits no Zariski dense orbit. \end{prop} \begin{proof} By Lemma \ref{Lemma:FiberPreserving}, we may assume that $f$ induces a surjective endomorphism $f_{C} \colon C \longrightarrow C$ with $\pi \circ f=f_{C} \circ \pi$. Since $C$ is of general type, $f_{C}$ is an automorphism of finite order and the assertion follows. \end{proof} \begin{rmk} The fact that $f$ does not admit any Zariski dense orbits also follows from the Mordell conjecture (Faltings's theorem). Indeed, assume there exists a Zariski dense orbit $\mathcal O_f(P)$ on $X$. Then $\pi(\mathcal O_f(P))$ is also Zariski dense in $C$. We may assume that $X, C, f, \pi, P$ are defined over a number field $K$. 
Since $g(C)\geq2$, by the Mordell conjecture, the set of $K$-rational points $C(K)$ is finite and therefore $\pi(\mathcal O_f(P))$ is also finite. This is a contradiction. \end{rmk} \section{Hyperelliptic surfaces}\label{Section:HyperEllipticSurface} \begin{thm} Let $X$ be a hyperelliptic surface and $f \colon X \longrightarrow X$ a non-trivial endomorphism on $X$. Then Conjecture \ref{KS} holds for $f$. \end{thm} \begin{proof} Let $\pi \colon X \longrightarrow E$ be the Albanese map of $X$. By the universality of $\pi$, there is a morphism $g \colon E \longrightarrow E$ satisfying $\pi \circ f = g \circ \pi$. It is well-known that $E$ is a genus one curve, $\pi$ is a surjective morphism with connected fibers, and there is an \'etale cover $\phi \colon E' \longrightarrow E$ such that $X'=X \times_E E' \cong F \times E'$, where $F$ is a genus one curve (cf.~\cite[Chapter 10]{Badescu}). In particular, $X'$ is an abelian surface. By Lemma \ref{Lemma:EndomorphismOnTrivializedFibration}, taking a further \'etale base change, we may assume that there is an endomorphism $h \colon E' \longrightarrow E'$ such that $\phi \circ h=g \circ \phi$. Let $\pi' \colon X' \longrightarrow E'$ and $\psi \colon X' \longrightarrow X$ be the induced morphisms. Then, by the universality of fiber products, there is a morphism $f' \colon X' \longrightarrow X'$ satisfying $\pi' \circ f'= h \circ \pi'$ and $\psi \circ f' = f \circ \psi$. Applying Lemma \ref{Lemma:ReductionByFiniteMorphisms}, it is enough to prove Conjecture \ref{KS} for the endomorphism $f'$. Since $X'$ is an abelian variety, Conjecture \ref{KS} holds for $f'$ by \cite[Corollary 31]{ab1} and \cite[Theorem 2]{ab2}. \end{proof} \section{Surfaces with $\kappa(X)=1$}\label{Section:EllipticSurface} Let $f \colon X \longrightarrow X$ be a non-trivial endomorphism on a surface $X$ with $\kappa(X)=1$. In this section we shall prove that $f$ does not admit any Zariski dense forward $f$-orbit.
Although this result is a special case of \cite[Theorem A]{NaZh} (see Remark \ref{rem_for_KS}), we will give a simpler proof of it. By Lemma \ref{lem_min}, $X$ is minimal and $f$ is \'etale. Since $f$ is \'etale and $\deg(f) \geq 2$, we have $\chi(X, \mathcal O_X)=\deg(f)\chi(X, \mathcal O_X)$ and hence $\chi(X, \mathcal O_X)=0$. Let $\phi = \phi_{|mK_X|} \colon X \longrightarrow \mathbb P^N=\mathbb P H^0(X, mK_X)$ be the Iitaka fibration of $X$ and set $C_0=\phi(X)$. Since $f$ is \'etale, it induces an automorphism $g \colon \mathbb P^N \longrightarrow \mathbb P^N$ such that $\phi \circ f = g \circ \phi$ (cf.~\cite[Lemma 3.1]{FN2}). The restriction of $g$ to $C_0$ gives an automorphism $f_{C_0}\colon C_0 \longrightarrow C_0$ such that $\phi \circ f= f_{C_0} \circ \phi$. Take the normalization $\nu \colon C \longrightarrow C_0$ of $C_0$. Then $\phi$ factors as $X \overset{\pi}{\longrightarrow} C \overset{\nu}{\longrightarrow} C_0$ and $\pi$ is an elliptic fibration. Moreover, $f_{C_0}$ lifts to an automorphism $f_C\colon C \longrightarrow C$ such that $\pi \circ f=f_C \circ \pi$. So we obtain an elliptic fibration $\pi \colon X \longrightarrow C$ and an automorphism $f_C$ on $C$ such that $\pi\circ f=f_C\circ \pi$. In this situation, the following holds. \begin{thm}\label{inv_of_fibers} Let $X$ be a surface with $\kappa(X)=1$, $\pi \colon X \longrightarrow C$ an elliptic fibration, $f \colon X \longrightarrow X$ a non-trivial endomorphism, and $f_C \colon C \longrightarrow C$ an automorphism such that $\pi \circ f=f_C \circ \pi$. Then $f_C^t=\mathrm{id}_C$ for some positive integer $t$. \end{thm} \begin{proof} Let $\{ P_1, \ldots, P_r \}$ be the points over which the fibers of $\pi$ are multiple fibers (possibly $r=0$, i.e.~$\pi$ does not have any multiple fibers). We denote by $m_i$ the multiplicity of the fiber $\pi^*P_i$ for every $i$. Then we have the canonical bundle formula: $$K_X = \pi^*(K_C + L)+ \sum_{i=1}^r \frac{m_i-1}{m_i} \pi^*P_i,$$ where $L$ is a divisor on $C$ such that $\deg(L)=\chi(X, \mathcal O_X)$.
Then $\deg(L)=0$ because $f$ is \'etale and $\deg(f) \geq 2$ (cf.~Lemma \ref{lem_min}). Since $\kappa(X)=1$, the divisor $K_C+L+\sum_{i=1}^r \frac{m_i-1}{m_i} P_i$ must have positive degree. So we have \begin{equation*} 2(g(C)-1)+\sum_{i=1}^r \frac{m_i-1}{m_i}>0. \tag{$*$} \end{equation*} For any $i$, set $Q_i=f_C^{-1}(P_i)$. Then $\pi^* Q_i = \pi^* f_C^*P_i =f^*\pi^*P_i$ is a multiple fiber. So $(f_C)|_{\{P_1, \ldots, P_r\}}$ is a permutation of $\{ P_1, \ldots, P_r \}$ since $f_C$ is an automorphism. We divide the proof into three cases according to the genus $g(C)$ of $C$: (1) $g(C) \geq 2$; then the automorphism group of $C$ is finite. So $f_C^t=\mathrm{id}_C$ for a positive integer $t$. (2) $g(C)=1$; by ($*$), it follows that $r \geq 1$. For a suitable $t$, all $P_i$ are fixed points of $f_C^t$. We put the algebraic group structure on $C$ such that $P_1$ is the identity element. Then $f_C^t$ is a group automorphism on $C$. So $f_C^{ts}= \mathrm{id}_C$ for a suitable $s$ since the group of group automorphisms on $C$ is finite. (3) $g(C)=0$; again by ($*$), it follows that $r \geq 3$. For a suitable $t$, all $P_i$ are fixed points of $f_C^t$. Then $f_C^t$ fixes at least three points, which implies that $f_C^t$ is in fact the identity map. \end{proof} Immediately we obtain the following corollary. \begin{cor} Let $f \colon X \longrightarrow X$ be a non-trivial endomorphism on a surface $X$ with $\kappa(X)=1$. Then there does not exist any Zariski dense $f$-orbit. \end{cor} Therefore Conjecture \ref{KS} trivially holds for non-trivial endomorphisms on surfaces of Kodaira dimension 1. \section{Existence of a rational point $P$ satisfying $\alpha_f(P)=\delta_f$} \label{Section:ExistenceOfOrbits} In this section, we prove Theorem \ref{thm_existence} and Theorem \ref{thm_large collection}. Theorem \ref{thm_existence} follows from the following lemma.
A subset $\Sigma \subset V( \overline{k})$ is called a {\it set of bounded height} if for one (equivalently, every) ample divisor $A$ on $V$, the height function $h_{A}$ associated with $A$ is a bounded function on $\Sigma$. \begin{lem}\label{lem:existence} Let $X$ be a smooth projective variety and $f \colon X \longrightarrow X$ a surjective endomorphism with $\delta_{f}>1$. Let $D \not \equiv 0$ be a nef ${\mathbb R}$-divisor such that $f^{*}D \equiv \delta_{f} D$. Let $V \subset X$ be a closed subvariety of positive dimension such that $(D^{\dim V}\cdot V)>0$. Then there exists a non-empty open subset $U \subset V$ and a set $\Sigma \subset U( \overline{k})$ of bounded height such that for every $P \in U( \overline{k})\setminus \Sigma$ we have $ \alpha_{f}(P)=\delta_{f}$. \end{lem} \begin{rmk} By a Perron--Frobenius-type result of \cite[Theorem]{Birkhoff}, there is a nef ${\mathbb R}$-divisor $D\not \equiv 0$ satisfying the condition $f^\ast D \equiv \delta_f D$ since $f^\ast$ preserves the nef cone. \end{rmk} \begin{proof} Fix a height function $h_{D}$ associated with $D$. For every $P\in X( \overline{k})$, the following limit exists (cf.~\cite[Theorem 5]{rat}): \[ \hat{h}(P)=\lim_{n \to \infty} \frac{h_{D}(f^n(P))}{\delta_{f}^{n}} \] The function $\hat{h}$ has the following properties (cf.~\cite[Theorem 5]{rat}). \begin{enumerate} \item[(i)] $\hat{h}=h_{D}+O(\sqrt{h_{H}})$ where $H$ is any ample divisor on $X$ and $h_{H}\geq1$ is a height function associated with $H$. \item[(ii)] If $\hat{h}(P)>0$, then $ \alpha_{f}(P)=\delta_{f}$. \end{enumerate} Since $(D^{\dim V}\cdot V)>0$, we have $({D|_{V}}^{\dim V})>0$ and $D|_{V}$ is big. Thus we can write $D|_{V} \sim_{{\mathbb R}} A+E$ with an ample ${\mathbb R}$-divisor $A$ and an effective ${\mathbb R}$-divisor $E$ on $V$. Therefore we have \[ \hat{h}|_{V( \overline{k})}=h_{A}+h_{E}+O(\sqrt{h_{A}}) \] where $h_{A}, h_{E}$ are height functions associated with $A,E$ and $h_{A}$ is taken to be $h_{A}\geq 1$.
In particular, there exists a positive real number $B>0$ such that $h_{A}+h_{E}-\hat{h}|_{V( \overline{k})}\leq B \sqrt{h_{A}}$. Then we have the following inclusions. \begin{align*} \{ P\in V( \overline{k})\mid \hat{h}(P)\leq0\} &\subset \{ P\in V( \overline{k})\mid h_{A}(P)+h_{E}(P)\leq B\sqrt{h_{A}(P)}\}\\ &\subset \Supp E \cup \{ P\in V( \overline{k})\mid h_{A}(P)\leq B\sqrt{h_{A}(P)}\}\\ &= \Supp E \cup \{ P\in V( \overline{k})\mid h_{A}(P)\leq B^{2}\}. \end{align*} Hence we can take $U= V \setminus \Supp E$ and $\Sigma=\{ P\in U( \overline{k})\mid \hat{h}(P)\leq0\}$. \end{proof} \begin{cor}\label{cor:pos curve} Let $X$ be a smooth projective variety of dimension $N$ and $f \colon X \longrightarrow X$ a surjective endomorphism. Let $C$ be an irreducible curve which is a complete intersection of ample effective divisors $H_{1},\ldots ,H_{N-1}$ on $X$. Then for infinitely many points $P$ on $C$, we have $ \alpha_{f}(P)=\delta_{f}$. \end{cor} \begin{proof} We may assume $\delta_{f}>1$. Let $D$ be as in Lemma \ref{lem:existence}. Then $(D\cdot C)=(D\cdot H_{1}\cdots H_{N-1})>0$ (cf.~\cite[Lemma 20]{rat}). Since $C( \overline{k})$ is not a set of bounded height, the assertion follows from Lemma \ref{lem:existence}. \end{proof} To prove Theorem \ref{thm_large collection}, we need the following theorem which is a corollary of the dynamical Mordell--Lang conjecture for \'etale finite morphisms. \begin{thm}[Bell--Ghioca--Tucker {\cite[Corollary 1.4]{bgt}}]\label{dml} Let $f \colon X \longrightarrow X$ be an \'etale finite morphism of a smooth projective variety $X$. Let $P \in X( \overline{k})$. If the orbit $ \mathcal{O}_{f}(P)$ is Zariski dense in $X$, then any proper closed subvariety of $X$ intersects $ \mathcal{O}_{f}(P)$ in at most finitely many points. \end{thm} \begin{proof}[Proof of Theorem \ref{thm_large collection}] We may assume $\dim X \geq 2$.
Since we are working over $ \overline{k}$, we can write the set of all proper closed subvarieties of $X$ as \[ \{V_{i} \subsetneq X\mid i=0,1,2,\ldots\}. \] By Corollary \ref{cor:pos curve}, we can take a point $P_{0}\in X\setminus V_{0}$ such that $ \alpha_{f}(P_{0})=\delta_{f}$. Assume we can construct $P_{0},\ldots ,P_{n}$ satisfying the following conditions. \begin{enumerate} \item $ \alpha_{f}(P_{i})=\delta_{f}$ for $i=0,\ldots ,n$. \item $ \mathcal{O}_{f}(P_{i}) \cap \mathcal{O}_{f}(P_{j})=\emptyset$ for $i\neq j$. \item $P_{i} \notin V_{i}$ for $i=0,\ldots ,n$. \end{enumerate} Now, take a complete intersection curve $C \subset X$ satisfying the following conditions. \begin{itemize} \item For $i=0,\ldots, n$, $C \not \subset \overline{ \mathcal{ O}_{f}(P_{i})}$ if $ \overline{ \mathcal{ O}_{f}(P_{i})} \neq X$. \item For $i=0,\ldots, n$, $C \not \subset \overline{ \mathcal{ O}_{f^{-1}}(P_{i})}$ if $ \overline{ \mathcal{ O}_{f^{-1}}(P_{i})} \neq X$. \item $C \not \subset V_{n+1}$. \end{itemize} By Theorem \ref{dml}, if $ \mathcal{O}_{f^{\pm}}(P_{i})$ is Zariski dense in $X$, then $ \mathcal{O}_{f^{\pm}}(P_{i}) \cap C$ is a finite set. By Corollary \ref{cor:pos curve}, there exists a point $$P_{n+1}\in C \setminus \left(\bigcup_{0\leq i\leq n} \mathcal{O}_{f}(P_{i}) \cup \bigcup_{0\leq i\leq n} \mathcal{O}_{f^{-1}}(P_{i})\cup V_{n+1}\right) $$ such that $ \alpha_{f}(P_{n+1})=\delta_{f}$. Then $P_{0},\ldots, P_{n+1}$ satisfy the same conditions. Therefore we get a subset $S=\{P_{i}\mid i=0,1,2,\ldots\}$ of $X$ which satisfies the desired conditions. \end{proof} \section*{Acknowledgements} The authors would like to thank Professors Tetsushi Ito, Osamu Fujino, and Tomohide Terasoma for helpful advice. They would also like to thank Takeru Fukuoka and Hiroyasu Miyazaki for answering their questions.
https://arxiv.org/abs/1701.04369
Arithmetic degrees and dynamical degrees of endomorphisms on surfaces
For a dominant rational self-map on a smooth projective variety defined over a number field, Kawaguchi and Silverman conjectured that the (first) dynamical degree is equal to the arithmetic degree at a rational point whose forward orbit is well-defined and Zariski dense. We prove this conjecture for surjective endomorphisms on smooth projective surfaces. For surjective endomorphisms on any smooth projective varieties, we show the existence of rational points whose arithmetic degrees are equal to the dynamical degree. Moreover, we prove that there exists a Zariski dense set of rational points having disjoint orbits if the endomorphism is an automorphism.
https://arxiv.org/abs/1403.0023
Superspecial rank of supersingular abelian varieties and Jacobians
An abelian variety defined over an algebraically closed field k of positive characteristic is supersingular if it is isogenous to a product of supersingular elliptic curves and is superspecial if it is isomorphic to a product of supersingular elliptic curves. In this paper, the superspecial condition is generalized by defining the superspecial rank of an abelian variety, which is an invariant of its p-torsion. The main results in this paper are about the superspecial rank of supersingular abelian varieties and Jacobians of curves. For example, it turns out that the superspecial rank determines information about the decomposition of a supersingular abelian variety up to isomorphism; namely it is a bound for the maximal number of supersingular elliptic curves appearing in such a decomposition.
\section{Introduction} If $A$ is a principally polarized abelian variety of dimension $g$ defined over an algebraically closed field $k$ of positive characteristic $p$, then the multiplication-by-$p$ morphism $[p]=\ver \circ \frob$ is inseparable. Typically, $A$ is {\it ordinary} in that the Verschiebung morphism $\ver$ is separable, a condition equivalent to the number of $p$-torsion points of $A$ being $p^g$, or the only slopes of the $p$-divisible group of $A$ being $0$ and $1$, or the $p$-torsion group scheme of $A$ being isomorphic to $(\ZZ/p \oplus \boldsymbol{\mu}_p)^g$. Yet the abelian varieties which capture great interest are those which are as far from being ordinary as possible. In dimension $g=1$, an elliptic curve is {\it supersingular} if it has no points of order $p$; if the only slope of its $p$-divisible group is $1/2$; or if its $p$-torsion group scheme is isomorphic to the unique local-local ${\rm BT}_1$ group scheme of rank $p^2$, which we denote by $I_{1,1}$. These characterizations are no longer equivalent for a principally polarized abelian variety $A$ of higher dimension $g$. One says that $A$ has {\it $p$-rank $0$} when $A$ has no points of order $p$; that $A$ is {\it supersingular} when the only slope of its $p$-divisible group is $1/2$; and that $A$ is {\it superspecial} when its $p$-torsion group scheme is isomorphic to $I_{1,1}^g$. If $A$ is supersingular, then it has $p$-rank $0$, but the converse is false for $g \geq 3$. If $A$ is superspecial, then it is supersingular, but the converse is false for $g \geq 2$. The Newton polygon and Ekedahl-Oort type of an abelian variety usually do not determine the decomposition of the abelian variety. In fact, for any prime $p$ and formal isogeny type $\eta$ other than the supersingular one, there exists an absolutely simple abelian variety over $k$ having Newton polygon $\eta$ \cite{lenstraoort}. On the other hand, consider the following results about supersingular and superspecial abelian varieties.
\begin{theorem} (Oort) Let $A/k$ be a principally polarized abelian variety. \begin{enumerate} \item Then $A$ is supersingular if and only if it is isogenous to a product of supersingular elliptic curves by \cite[Theorem 4.2]{O:sub} (which uses \cite[Theorem 2d]{tate:endo}). \item Then $A$ is superspecial if and only if it is isomorphic to a product of supersingular elliptic curves \cite[Theorem 2]{oort75}, see also \cite[Theorem 4.1]{Nygaard}. \end{enumerate} \end{theorem} The motivation for this paper was to find ways to measure the extent to which supersingular non-superspecial abelian varieties decompose up to isomorphism. The $a$-number $a:={\rm dim}_k {\rm Hom}(\boldsymbol{\alpha}_p, A[p])$ gives some information about this; if $A$ has $p$-rank $0$, then the number of factors in the decomposition of $A$ up to isomorphism is bounded above by the $a$-number, see \cite[Lemma 5.2]{SummerA}. However, a supersingular abelian variety with large $a$-number could still be indecomposable up to isomorphism. This paper is about another invariant of $A$, the {\it superspecial rank}, which we define in Section \ref{Sssdef} as the number of (polarized) factors of $I_{1,1}$ appearing in the $p$-torsion group scheme of $A$. In Proposition \ref{Pexists}, we determine which superspecial ranks occur for supersingular abelian varieties. The superspecial rank of Jacobians also has an application involving Selmer groups, see Section \ref{Sselmer}. In Section \ref{Sdecompose}, we define another invariant of $A$, the {\it elliptic rank}, which is the maximum number of elliptic curves appearing in a decomposition of $A$ up to isomorphism. In Proposition \ref{Pssrank=ssE}, we prove an observation of Oort which states that, for a supersingular abelian variety $A$, the elliptic rank equals the number of rank 2 factors in the $p$-divisible group $A[p^\infty]$. 
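To fix ideas, here is the smallest nontrivial instance of these invariants; this dimension-two illustration is ours, assembled from Lemma \ref{lemsanda} and Proposition \ref{Psselliptic} below. A supersingular principally polarized abelian surface $A$ has $f(A)=0$ and $a(A)\in\{1,2\}$, and:

```latex
\[
  a(A)=2 \;\Longleftrightarrow\; A[p]\simeq I_{1,1}^{2}
         \;\Longleftrightarrow\; s(A)=2
  \qquad \text{(superspecial; elliptic rank $2$),}
\]
\[
  a(A)=1 \;\Longrightarrow\; s(A)=0
  \qquad \text{(elliptic rank $0$; $A$ is indecomposable up to isomorphism).}
\]
```

The first case is Oort's superspecial dichotomy in its smallest form; the second is the generic supersingular surface (cf.~Lemma \ref{sswiths=0}).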
Proposition \ref{Psselliptic} states that the elliptic rank is bounded by the superspecial rank for an abelian variety of $p$-rank $0$. As a result, for an abelian variety $A$ of $p$-rank zero, the superspecial rank gives an upper bound for the maximal number of dimension one factors in a decomposition of $A$ up to isomorphism; this upper bound is most interesting for supersingular abelian varieties, which decompose completely up to isogeny. In Section \ref{Sjacobian}, we apply this observation to prove some results about the superspecial rank and elliptic rank of Jacobians of curves. For example, in characteristic $2$, Application \ref{App1general} states that the superspecial rank of the Jacobian of any hyperelliptic curve of $2$-rank $r$ is bounded by $1 + r$, while its elliptic rank is bounded by $1+2r$. The superspecial ranks of all the Hermitian curves are computed in Section \ref{Sherm}; in particular, when $n$ is even the elliptic rank of the Hermitian curve $X_{p^n}$ is zero. The authors thank the organizers of the 2013 Journ\'ees Arithm\'etiques, the referee for valuable comments, Ritzenthaler for help with the French abstract, and Oort for sharing the idea for Proposition \ref{Pssrank=ssE} and more generally for being a source of inspiration for this work. The first-named author was partially supported by grants from the Simons Foundation (204164) and the NSA (H98230-14-1-0161 and H98230-15-1-0247). The second-named author was partially supported by NSF grants DMS-11-01712 and DMS-15-02227. \section{Notation} All geometric objects in this paper are defined over an algebraically closed field $k$ of characteristic $p>0$. Some objects are defined over the ring $W(k)$ of Witt vectors over $k$. Let $\sigma$ denote the Frobenius automorphism of $k$ and its lift to $W(k)$. Let $A$ be a principally polarized abelian variety of dimension $g$ over $k$. Here are some relevant facts about $p$-divisible groups and $p$-torsion group schemes. 
\subsection{The $p$-divisible group} By the Dieudonn\'e-Manin classification \cite{maninthesis}, there is an isogeny of $p$-divisible groups \[A[p^\infty] \sim \oplus_{\lambda=\frac{d}{c+d}} \til G_{c,d}^{m_\lambda},\] where $(c,d)$ ranges over pairs of relatively prime nonnegative integers, and $\til G_{c,d}$ denotes a $p$-divisible group of codimension $c$, dimension $d$, and thus height $c+d$. The Dieudonn\'e module $\til D_\lambda := \dieu_*(\til G_{c,d})$ (see \ref{subsecdefcartier} below) is a free $W(k)$-module of rank $c+d$. Over $\operatorname{Frac}W(k)$, there is a basis $x_1, \ldots, x_{c+d}$ for $\til D_\lambda$ such that $F^{d}x_i=p^c x_i$. The Newton polygon of $A$ is the data of the numbers $m_\lambda$; it admits an interpretation as the $p$-adic Newton polygon of the operator $F$ on $\dieu_*(A[p^\infty])$. The abelian variety $A$ is {\it supersingular} if and only if $\lambda=\frac{1}{2}$ is the only slope of its $p$-divisible group $A[p^\infty]$. Letting $\til{I}_{1,1}=\til G_{1,1}$ denote the $p$-divisible group of dimension $1$ and height $2$, one sees that $A$ is supersingular if and only if $A[p^\infty] \sim \til{I}_{1,1}^g$. \subsection{The $p$-torsion group scheme} The multiplication-by-$p$ morphism $[p]:A \to A$ is a finite flat morphism of degree $p^{2g}$. The {\it $p$-torsion group scheme} of $A$ is \[A[p]= \ker[p] = \ker(\ver\circ \frob), \] where $\frob:A \to A^{(p)}$ denotes the relative Frobenius morphism and $\ver: A^{(p)} \to A$ is the Verschiebung morphism. In fact, $A[p]$ is a $\bt_1$ group scheme as defined in \cite[2.1, Definition 9.2]{O:strat}; it is killed by $[p]$, with $\ker(\frob) = \im(\ver)$ and $\ker(\ver) = \im(\frob)$. The principal polarization on $A$ induces a principal quasipolarization on $A[p]$, i.e., an anti-symmetric isomorphism $\psi:A[p] \to A[p]^D$. (This definition must be modified slightly if $p=2$.) Summarizing, $A[p]$ is a principally quasipolarized (pqp) $\bt_1$ group scheme of rank $p^{2g}$.
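The slope bookkeeping in the Dieudonn\'e-Manin decomposition is mechanical enough to script. The following Python sketch is our own illustration (the function names and the encoding of a $p$-divisible group as a list of $(c,d,m_\lambda)$ tuples are assumptions, not notation from the paper): it lists the slopes $\lambda=d/(c+d)$, each with multiplicity $m_\lambda(c+d)$, and tests the supersingularity criterion $A[p^\infty]\sim \til{I}_{1,1}^{g}$.

```python
from fractions import Fraction

def newton_slopes(factors):
    """Slopes of a p-divisible group given as a list of (c, d, m) tuples,
    one per isogeny factor G_{c,d}^m with gcd(c, d) = 1.  Each factor
    contributes the slope d/(c+d) with multiplicity m*(c+d), i.e. once
    per unit of height."""
    slopes = []
    for c, d, m in factors:
        slopes += [Fraction(d, c + d)] * (m * (c + d))
    return sorted(slopes)

def is_supersingular(factors):
    """Supersingular <=> the only slope is 1/2 <=> every factor is G_{1,1}."""
    return all((c, d) == (1, 1) for c, d, _ in factors)

# Ordinary elliptic curve: G_{1,0} x G_{0,1}, slopes 0 and 1.
assert newton_slopes([(1, 0, 1), (0, 1, 1)]) == [Fraction(0), Fraction(1)]
# Supersingular abelian surface: G_{1,1}^2, slope 1/2 with total height 4.
assert is_supersingular([(1, 1, 2)])
```

For an ordinary abelian variety of dimension $g$ the input would be $[(1,0,g),(0,1,g)]$, giving slopes $0$ and $1$ each with multiplicity $g$.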
Isomorphism classes of pqp $\bt_1$ group schemes over $k$ (also known as Ekedahl-Oort types) have been completely classified \cite[Theorem 9.4 \& 12.3]{O:strat}, building on unpublished work of Kraft \cite{kraft} (which did not include polarizations) and of Moonen \cite{M:group} (for $p \geq 3$). (When $p=2$, there are complications with the polarization which are resolved in \cite[9.2, 9.5, 12.2]{O:strat}.) \subsection{Covariant Dieudonn\'e modules} \label{subsecdefcartier} The $p$-divisible group $A[p^\infty]$ and the $p$-torsion group scheme $A[p]$ can be described using covariant Dieudonn\'e theory; see e.g., \cite[15.3]{O:strat}. Briefly, let $\til \EE = \til\EE(k) = W(k)[F,V]$ denote the non-commutative ring generated by semilinear operators $F$ and $V$ with relations \begin{equation} \label{Efv} FV=VF=p, \ F \lambda = \lambda^\sigma F, \ \lambda V=V \lambda^\sigma, \end{equation} for all $\lambda \in W(k)$. There is an equivalence of categories $\dieu_*$ between $p$-divisible groups over $k$ and $\til\EE$-modules which are free of finite rank over $W(k)$. Similarly, let $\EE = \til \EE \otimes_{W(k)} k$ be the reduction of the Cartier ring mod $p$; it is a non-commutative ring $k[F,V]$ subject to the same constraints as \eqref{Efv}, except that $FV = VF = 0$ in $\EE$. Again, there is an equivalence of categories $\dieu_*$ between finite commutative group schemes (of rank $2g$) annihilated by $p$ and $\EE$-modules of finite dimension ($2g$) over $k$. If $M = \dieu_*(G)$ is the Dieudonn\'e module over $k$ of $G$, then a principal quasipolarization $\psi:G \to G^D$ induces a nondegenerate symplectic form \begin{equation} \label{eqdefpolar} \xymatrix{ \ang{\cdot,\cdot}:M \times M \ar[r]& k } \end{equation} on the underlying $k$-vector space of $M$, subject to the additional constraint that, for all $x$ and $y$ in $M$, \begin{equation} \label{eqproppolar} \ang{Fx,y} = \ang{x,Vy}^\sigma.
\end{equation} If $A$ is the Jacobian of a curve $X$, then there is an isomorphism of $\EE$-modules between the {\em contravariant} Dieudonn\'e module over $k$ of ${\rm Jac}(X)[p]$ and the de Rham cohomology group $H^1_{\rm dR}(X)$ by \cite[Section 5]{Oda}. The canonical principal polarization on $\operatorname{Jac}(X)$ then induces a canonical isomorphism $\dieu_*(\operatorname{Jac}(X)[p]) \simeq H^1_{\rm dR}(X)$; we will use this identification without further comment. For elements $A_1, \ldots, A_r \in \EE$, let $\EE(A_1, \ldots, A_r)$ denote the left ideal $\sum_{i=1}^r \EE A_i$ of $\EE$ generated by $\{A_i \mid 1 \leq i \leq r\}$. \subsection{The $p$-rank and $a$-number} \label{Sprankanumber} \label{Sanumber} For a $\bt_1$ group scheme $G/k$, the {\it $p$-rank} of $G$ is $f={\rm dim}_{\FF_p} {\rm Hom}(\boldsymbol{\mu}_p, G)$ where $\boldsymbol{\mu}_p$ is the kernel of Frobenius on $\GG_m$. Then $p^f$ is the cardinality of $G(k)$. The {\it $a$-number} of $G$ is \[a={\rm dim}_k {\rm Hom}(\boldsymbol{\alpha}_p, G),\] where $\boldsymbol{\alpha}_p$ is the kernel of Frobenius on $\GG_a$. It is well-known that $0 \leq f \leq g$ and $1 \leq a +f \leq g$. Moreover, since $\boldsymbol{\mu}_p$ and $\boldsymbol{\alpha}_p$ are both simple group schemes, the $p$-rank and $a$-number are additive; \begin{equation} \label{eqfadditive} f(G\oplus H) = f(G)+f(H)\text{ and }a(G\oplus H) = a(G)+a(H). \end{equation} If $\til G$ is a $p$-divisible group, its $p$-rank and $a$-number are those of its $p$-torsion; $f(\til G) = f(\til G[p])$ and $a(\til G) = a(\til G[p])$. Similarly, if $A$ is an abelian variety, then $f(A) = f(A[p])$ and $a(A) = a(A[p])$. \subsection{The Ekedahl-Oort type} \label{Seotype} As in \cite[Sections 5 \& 9]{O:strat}, the isomorphism type of a pqp ${\rm BT}_1$ group scheme $G$ over $k$ can be encapsulated into combinatorial data. 
If $G$ is symmetric with rank $p^{2g}$, then there is a {\it final filtration} $N_1 \subset N_2 \subset \cdots \subset N_{2g}$ of ${\mathbb D}_*(G)$ as a $k$-vector space which is stable under the action of $V$ and $F^{-1}$ such that $i={\rm dim}(N_i)$ \cite[5.4]{O:strat}. The {\it Ekedahl-Oort type} of $G$ is \[\nu=[\nu_1, \ldots, \nu_g], \ {\rm where} \ {\nu_i}={\rm dim}(V(N_i)).\] The $p$-rank is ${\rm max}\{i \mid \nu_i=i\}$ and the $a$-number equals $g-\nu_g$. There is a restriction $\nu_i \leq \nu_{i+1} \leq \nu_i +1$ on the Ekedahl-Oort type. There are $2^g$ Ekedahl-Oort types of length $g$ since all sequences satisfying this restriction occur. By \cite[9.4, 12.3]{O:strat}, there are bijections between (i) Ekedahl-Oort types of length $g$; (ii) pqp ${\rm BT}_1$ group schemes over $k$ of rank $p^{2g}$; and (iii) pqp Dieudonn\'e modules of dimension $2g$ over $k$. \begin{example}\label{exi11} {\em The group scheme $I_{1,1}$.} There is a unique ${\rm BT}_1$ group scheme of rank $p^2$ which has $p$-rank $0$, which we denote $I_{1,1}$. It fits in a non-split exact sequence \begin{equation} \label{eqdefi11} 0 \to \boldsymbol{\alpha}_p \to I_{1,1} \to \boldsymbol{\alpha}_p \to 0. \end{equation} The structure of $I_{1,1}$ is uniquely determined over $\overline{\FF}_p$ by this exact sequence. The image of $\boldsymbol{\alpha}_p$ is the kernel of $\frob$ and $\ver$. The Dieudonn\'e module of $I_{1,1}$ is $$M_{1,1} := \dieu_*(I_{1,1}) \simeq \EE/\EE(F+V).$$ If $E$ is a supersingular elliptic curve, then the $p$-torsion group scheme $E[p]$ is isomorphic to $I_{1,1}$. \end{example} \section{Superspecial rank} \label{Sssrank} Let $A$ be a principally polarized abelian variety defined over an algebraically closed field $k$ of characteristic $p >0$. \subsection{Superspecial} First, recall the definition of the superspecial property. 
\begin{definition} One says that $A/k$ is {\it superspecial} if it satisfies the following equivalent conditions: \begin{enumerate} \item The $a$-number of $A$ equals $g$. \item The group scheme $A[p]$ is isomorphic to $I_{1,1}^g$. \item The Dieudonn\'e module over $k$ of $A[p]$ is isomorphic to $M_{1,1}^g$. \item $A$ is isomorphic (as an abelian variety without polarization) to the product of $g$ supersingular elliptic curves. \end{enumerate} \end{definition} A superspecial abelian variety is defined over $\overline{\FF}_p$, and thus over a finite field. For every $g \in \NN$ and prime $p$, the number of superspecial principally polarized abelian varieties of dimension $g$ defined over $\overline{\FF}_p$ is finite and non-zero. \subsection{Definition of superspecial rank} \label{Sssdef} Recall (Example \ref{exi11}) that the $p$-torsion group scheme of a supersingular elliptic curve is isomorphic to $I_{1,1}$, the unique local-local pqp ${\rm BT}_1$ group scheme of rank $p^2$. From \eqref{eqdefi11}, it follows that $I_{1,1}$ is not simple as a group scheme. However, $I_{1,1}$ is simple in the category of $\bt_1$ group schemes since $\boldsymbol{\alpha}_p$ is not a $\bt_1$ group scheme. \begin{definition} Let $G/k$ be a $\bt_1$ group scheme. A {\it superspecial factor} of $G$ is a group scheme $H \subset G$ with $H \simeq I_{1,1}^s$. \end{definition} By the equivalence of categories $\dieu_*$, superspecial factors of $G$ of rank $2s$ are in bijection with $\EE$-submodules $N \subset \dieu_*(G)$ with $N \simeq (\EE/\EE(F+V))^s$; we call such an $N$ a {\it superspecial factor} of $M=\dieu_*(G)$. Now suppose $(G, \psi)/k$ is a pqp $\bt_1$ group scheme. A superspecial factor $H$ of $G$ is {\it polarized} if the isomorphism $\psi: G \to G^D$ restricts to an isomorphism $\psi_H: H \to G^D \twoheadrightarrow H^D$. 
Equivalently, a superspecial factor $N$ of $(\dieu_*(G), \ang{\cdot,\cdot})$ is polarized if the nondegenerate symplectic form $\ang{\cdot,\cdot}:M \times M \to k$ restricts to a non-degenerate symplectic form $\ang{\cdot,\cdot}:N \times N \to k$. \begin{definition} Let $G=(G,\psi)/k$ be a pqp $\bt_1$ group scheme. The {\it superspecial rank} $s(G)$ of $G$ is the largest integer $s$ for which $G$ has a polarized superspecial factor of rank $2s$. \end{definition} Since $I_{1,1}$ is simple in the category of $\bt_1$ group schemes, the superspecial rank $s$ has an additive property similar to that for the $p$-rank and $a$-number \eqref{eqfadditive}; if $G$ and $H$ are pqp $\bt_1$ group schemes, then \begin{equation} \label{eqsadditive} s(G\oplus H) = s(G)+s(H). \end{equation} A $\bt_1$ group scheme $G$ may fail to be simple (i.e., admit a nontrivial $\bt_1$ subgroup scheme $0\subsetneq H \subsetneq G$) and yet still be indecomposable (i.e., admit no isomorphism $G \simeq H\oplus K$ with $H$ and $K$ nonzero). This distinction vanishes in the category of pqp $\bt_1$ group schemes: \begin{lemma} \label{lemdecompbt1} Let $G/k$ be a pqp $\bt_1$ group scheme, and let $H\subset G$ be a pqp $\bt_1$ sub-group scheme. Let $N = \dieu_*(H) \subseteq M = \dieu_*(G)$, and let $P$ be the orthogonal complement of $N$ in $M$. Then $P$ is a pqp sub-Dieudonn\'e module of $M$, and $G$ admits a decomposition $G\simeq H\oplus K$ as pqp $\bt_1$ group schemes, where $K\subseteq G$ is the sub-group scheme with $\dieu_*(K) = P$. \end{lemma} Lemma \ref{lemdecompbt1} is essentially present in \cite[Section 5]{kraft}; see, e.g., \cite[9.8]{O:strat}. \begin{proof} The $k$-vector space $P$ is an $\EE$-module if it is stable under $F$ and $V$. It suffices to check that, for $\beta\in P$, $F \beta \in P$ and $V \beta \in P$. 
If $\alpha \in N$, the relation \eqref{eqproppolar} implies that \[ \ang{F\beta, \alpha} = \ang{\beta, V \alpha}^\sigma = 0^\sigma = 0 \] and \[ \ang{V\beta, \alpha} = \ang{\beta,F \alpha}^{\sigma^{-1}} = 0^{\sigma^{-1}}=0. \] Thus $F \beta$ and $V \beta$ are in the orthogonal complement $P$ of $N$. Since $H$ is polarized, the restriction of $\ang{\cdot,\cdot}$ to $N$ is perfect and so the restriction of $\ang{\cdot,\cdot}$ to $P$ is perfect as well. Since $\dieu_*$ is an equivalence of categories, there is a decomposition $G\simeq H \oplus K$ as pqp group schemes. It remains to verify that $K$ is a $\bt_1$ group scheme, i.e., that $\ker(\frob) = \im(\ver)$ and $\ker(\ver) = \im(\frob)$. In terms of Dieudonn\'e modules, this is equivalent to the property that $\ker F\rest P = V(P)$ and $\ker V\rest P = F(P)$. This, in turn, follows from the analogous statement for $M$ and $N$ and from the fact that the decomposition $M = N \oplus P$ is stable under $F$ and $V$. \end{proof} \begin{lemma} \label{lemsplitss} Let $G/k$ be a pqp $\bt_1$ group scheme of $p$-rank $f$ and $a$-number $a$, and let $H\subset G$ be a maximal polarized superspecial factor. Then $G \simeq H \oplus K$ for a pqp $\bt_1$ group scheme $K$ with respective $p$-rank, superspecial rank and $a$-number $f(K)=f$, $s(K) =0$, and $a(K) = a-s$. \end{lemma} \begin{proof} The existence of the decomposition $G\simeq H\oplus K$ follows from Lemma \ref{lemdecompbt1}; the assertions about the $p$-rank, superspecial rank and $a$-number of $K$ follow from the additivity of these quantities, \eqref{eqfadditive} and \eqref{eqsadditive}. \end{proof} Since one can always canonically pull off the \'etale and toric components of a finite group scheme over a perfect field, Lemma \ref{lemsplitss} admits a further refinement: \begin{lemma} \label{Lpulloffs} Let $G/k$ be a pqp $\bt_1$ group scheme with $f(G)=f$, $s(G) = s$, and $a(G) = a$. 
Then there is a local-local pqp $\bt_1$ group scheme $B$ such that \[ G \simeq (\ZZ/p\oplus \boldsymbol{\mu}_p)^f \oplus I_{1,1}^s \oplus B \] where $f(B) = s(B) = 0$ and $a(B) = a-s$. \end{lemma} \begin{proof} Since $k$ is perfect and $G$ is self-dual, there is a canonical decomposition of pqp group schemes $G \simeq (\ZZ/p\oplus \boldsymbol{\mu}_p)^f \oplus H$. Then $f(H) = 0$, $s(H) = s(G)$, and $a(H) = a(G)$. Now invoke Lemma \ref{lemsplitss}. \end{proof} Let $A$ be a principally polarized abelian variety of dimension $g$. On one hand, $A$ is superspecial if and only if $s(A[p]) = g$. On the other hand, if $A$ is ordinary, then $s(A[p]) =0$. More generally: \begin{lemma} \label{lemsanda} Let $G/k$ be a pqp $\bt_1$ group scheme of rank $p^{2g}$; let $f = f(G)$, $a=a(G)$, and $s = s(G)$. \begin{alphabetize} \item Then $0 \le s \le a \le g-f$. \item If $a = g-f$, then $G \simeq (\ZZ/p\oplus \boldsymbol{\mu}_p)^f \oplus I_{1,1}^a$ and $s=a$. \item If $a\not = g-f$, then $s<a$. \end{alphabetize} \end{lemma} \begin{proof} Write $G \simeq (\ZZ/p \oplus \boldsymbol{\mu}_p)^f \oplus B_1$ with $B_1 \simeq I_{1,1}^s \oplus B$ as in Lemma \ref{Lpulloffs}. \begin{alphabetize} \item Then $a \leq g-f$, since (using additivity) $a(G) = a(B_1)$, and $B_1$ has rank $p^{2(g-f)}$. Moreover, $s \le a$ since $a(I_{1,1}^s) = s$. \item This is true since the only pqp ${\rm BT}_1$ group scheme of rank $p^{2(g-f)}$ with $p$-rank $0$ and $a$-number $g-f$ is $I_{1,1}^{g-f}$, which has superspecial rank $g-f$ by definition. \item The hypothesis $a \not = g-f$ implies that $B$ is non-trivial. Then $a > s$ since the $a$-number of the local-local group scheme $B$ is at least $1$.
\end{alphabetize} \end{proof} \subsection{Unpolarized superspecial rank} If $G/k$ is a $\bt_1$ group scheme, or indeed any $p$-torsion finite commutative group scheme, then there is also an obvious notion of an {\em unpolarized} superspecial rank, namely, the largest $u$ such that there is an inclusion $I_{1,1}^{u}\hookrightarrow G$. In this section, we briefly explore some of the limitations of this notion. For integers $r,s \ge 1$, let $J_{r,s}$ be the $\bt_1$ group scheme with Dieudonn\'e module \[ M_{r,s} := \dieu_*(J_{r,s}) = \EE/\EE(F^r+V^s). \] \begin{lemma} \label{lemirs} Suppose $r,s \ge 2$. Then \begin{alphabetize} \item $J_{r,s}$ is an indecomposable local-local $\bt_1$ group scheme. \item There exists an inclusion $\iota:I_{1,1} \hookrightarrow J_{r,s}$. \end{alphabetize} \end{lemma} \begin{proof} Part (a) is standard. Indeed, using the relations $F^r = -V^s$ and $FV = VF = p$, one sees that $F$ and $V$ act nilpotently, and thus $J_{r,s}$ is local-local; in particular, it has $p$-rank zero. Note that $M_{r,s}$ is generated over $\EE$ by a single element $x$ such that $F^r x = -V^sx$. It follows that $a(J_{r,s}) = 1$. The additivity relation \eqref{eqfadditive} now implies that $M_{r,s}$ is indecomposable. For (b), let $y \in M_{r,s}$ be an element such that $Fy = - Vy \not = 0$. Since $r,s \geq 2$, the element $y= F^{r-1}x+V^{s-1}x$ is suitable. Then there is an inclusion $\iota_*:M_{1,1} \to M_{r,s}$ which sends a generator of $M_{1,1}$ to $y$. \end{proof} If $r = s$, then $J_{r,s}$ is self-dual and admits a principal quasipolarization; in this case, let $H_{r,s} = J_{r,s}$. If $r\not = s$, then the Cartier dual of $J_{r,s}$ is $J_{s,r}$; in this case, $H_{r,s}:= J_{r,s} \oplus J_{s,r}$ admits a principal quasipolarization. In spite of Lemma \ref{lemirs}, we find: \begin{lemma} \label{lemsshrs0} Suppose $r, s \ge 2$. For any principal quasipolarization on $H_{r,s}$, the superspecial rank of $H_{r,s}$ is zero. 
\end{lemma} \begin{proof} If $r=s$, this is immediate, since $H_{r,r}$ is indecomposable by Lemma \ref{lemirs} and yet a polarized superspecial factor of positive rank would induce a factorization by Lemma \ref{lemdecompbt1}. Now suppose $r\not = s$. The argument used in the classification of polarizations on superspecial $p$-divisible groups in \cite[Section 6.1]{LO} shows that for some $u \in \st{1,2}$, there exists an inclusion $\iota:I_{1,1}^u \hookrightarrow H_{r,s}$ with $G := \iota(I_{1,1}^u)$ polarized. If $G$ is contained in either $J_{r,s}$ or $J_{s,r}$ (and in particular if $u=1$), then we may argue as before. Otherwise, consider the sum of $G$ and $J_{r,s}$ inside $H_{r,s}$, which is {\em not} direct since $G\cap J_{r,s} \simeq I_{1,1}$ is nontrivial. By Lemma \ref{lemdecompbt1}, $G$ has a complement $K$ in $G+J_{r,s}$. Then $J_{r,s} \simeq I_{1,1}\oplus K$, contradicting the indecomposability of $J_{r,s}$. \end{proof} \subsection{Superspecial ranks of abelian varieties} \label{SssA} If $A/k$ is a principally polarized abelian variety, we define its superspecial rank to be that of its $p$-torsion group scheme; $s(A) = s(A[p])$. Lemma \ref{lemsanda} gives constraints between the $p$-rank, $a$-number, and superspecial rank of $A$. It turns out that these are the only constraints on $f$, $a$ and $s$: \begin{proposition} Given integers $g,f,a,s$ such that $0 \leq s < a < g-f$, there exists a principally polarized abelian variety $A/k$ of dimension $g$ with $p$-rank $f$, $a$-number $a$ and superspecial rank $s$. \end{proposition} \begin{proof} By \cite[Theorem 1.2]{O:strat}, it suffices to show that there exists a pqp ${\rm BT}_1$ group scheme $G$ of rank $p^{2g}$ with $p$-rank $f$, $a$-number $a$ and superspecial rank $s$. Set \[g_1=g-f-s, \ {\rm and} \ a_1=a-s,\] and note that $a_1 \geq 1$ and $g_1-a_1 \geq 1$ by hypothesis.
Considering \[G = (\ZZ/p \oplus \boldsymbol{\mu}_p)^f \oplus I_{1,1}^s \oplus B,\] together with the product polarization, allows one to reduce to the case of finding a pqp ${\rm BT}_1$ group scheme $B$ of rank $p^{2g_1}$ with $p$-rank $0$, $a$-number $a_1$ and superspecial rank $0$. This is possible as follows. Consider the word $w$ in $F$ and $V$ given by \[w=F^{g_1-a_1+1}(VF)^{a_1-1}V^{g_1-a_1+1}.\] Then $w$ is simple and symmetric with length $2g_1$, and thus the corresponding $\bt_1$ group scheme admits a canonical principal quasipolarization \cite[9.11]{O:strat}. Let $L_1, \ldots, L_{2g_1} \in \{F, V\}$ be such that $w=L_1 \cdots L_{2g_1}$. Consider variables $z_1, \ldots, z_{2g_1}$ with $z_{2g_1+1}=z_1$. As in \cite[Section 9.8]{O:strat}, the word $w$ defines the structure of a Dieudonn\'e module on $N_w=\oplus_{i} k \cdot z_i$ as follows: if $L_i=F$, let $F(z_i)=z_{i+1}$ and $V(z_{i+1})=0$; if $L_i=V$, let $V(z_{i+1})=z_i$ and $F(z_i)=0$. The $a$-number is the number of generators for $N_w$ as an $\EE$-module. By construction, $N_w$ has $a$-number $a_1$. Since $g_1-a_1+1 \geq 2$, then $N_w$ has superspecial rank $0$. \end{proof} We now focus on supersingular abelian varieties. \begin{lemma} \label{sswiths=0} For every $g \geq 2$ and prime $p$, a generic supersingular principally polarized abelian variety of dimension $g$ over $k$ has superspecial rank $0$. \end{lemma} \begin{proof} A generic supersingular principally polarized abelian variety has $p$-rank $0$ and $a$-number $1$ \cite[Section 4.9]{LO}. This forces its Ekedahl-Oort type to be $[0,1, \ldots, g-1]$, its Dieudonn\'e module to be $M_{g,g}$, and its superspecial rank to be zero (Lemma \ref{lemsshrs0}) since $g \ge 2$. \end{proof} It is not difficult to classify the values of the superspecial rank which occur for supersingular abelian varieties.
\begin{proposition} \label{Pexists} For every $g \geq 2$ and prime $p$, there exists a supersingular principally polarized abelian variety of dimension $g$ over $k$ with superspecial rank $s$ if and only if $0 \leq s \leq g-2$ or $s=g$. \end{proposition} \begin{proof} It is impossible for the superspecial rank to be $g-1$ since there are no local-local pqp ${\rm BT}_1$ group schemes of rank $p^2$ other than $I_{1,1}$. For the reverse implication, let $E$ be a supersingular elliptic curve. The case $s=g$ is realized by the superspecial variety $E^g$ with the product polarization. Now suppose $0 \leq s \leq g-2$, and recall that there exists a supersingular principally polarized abelian variety $A_1/k$ of dimension $g-s$ with $a=1$. Its Dieudonn\'e module is $M_{g-s,g-s}$. In particular, $s(A_1)=0$ since $g-s \geq 2$ (Lemma \ref{lemsshrs0}). Then $A=E^s \times A_1$, together with the product polarization, is a supersingular principally polarized abelian variety over $k$ with dimension $g$ and $s(A)=s$. \end{proof} \begin{example} Let $A/k$ be a supersingular principally polarized abelian variety of dimension $3$. Then the $a$-number $a=a(A)$ satisfies $1 \leq a \leq 3$. \begin{alphabetize} \item{If $a=1$,} then $A[p] \simeq J_{3,3}$, which has superspecial rank $s=0$. \item{If $a=2$,} then $A[p^\infty] \simeq \til G_{1,1} \times \til Z$ where $\til Z$ is supergeneral of height $4$ and $a(\til Z)=1$ \cite{odaoort}. Then $s(\til Z[p]) = 0$ (Lemma \ref{lemsanda}(c)) and thus $s(A)=1$. \item{If $a=3$,} then $A$ has superspecial rank $s=3$. \end{alphabetize} \end{example} \subsection{Application of superspecial rank to Selmer groups} \label{Sselmer} Here is another motivation for studying the superspecial rank of Jacobians. The superspecial rank equals the rank of the Selmer group associated with a particular isogeny of abelian varieties over function fields in positive characteristic. Let $K$ be the function field of a smooth projective connected curve $X$ over $k$. Let ${\mathcal E}$ be a constant supersingular elliptic curve over $K$.
Consider the multiplication-by-$p$ isogeny $f=[p]: {\mathcal E} \to {\mathcal E}$ of abelian varieties over $K$. Recall the Tate-Shafarevich group \[{\mbox{\textcyr{Sh}}}(K, {\mathcal E})_f={\rm Ker}({\mbox{\textcyr{Sh}}}(K, {\mathcal E}) \stackrel{f}{\to} {\mbox{\textcyr{Sh}}}(K, {\mathcal E})),\] where \[{\mbox{\textcyr{Sh}}}(K,{\mathcal E})={\rm Ker}(H^1(K, {\mathcal E}) \to \prod_{v} H^1(K_v, {\mathcal E}))\] and $v$ runs over all places of $K$. The Selmer group ${\rm Sel}(K, f)$ is the subgroup of elements of $H^1(K, {\rm Ker}(f))$ whose restriction to $H^1(K_v, {\rm Ker}(f))$ lies in \[{\rm Sel}(K_v, f) = {\rm Im}({\mathcal E}(K_v) \to H^1(K_v, {\rm Ker}(f))),\] for all $v$. There is an exact sequence \[0 \to {\mathcal E}(K)/f({\mathcal E}(K)) \to {\rm Sel}(K,f) \to {\mbox{\textcyr{Sh}}}(K, {\mathcal E})_f \to 0.\] Here is an earlier result, rephrased using the terminology of this paper, which provides motivation for studying the superspecial rank. \begin{theorem} (Ulmer) The rank of ${\rm Sel}(K, [p])$ is the superspecial rank of ${\rm Jac}(X)$ \cite[Proposition 4.3]{Ulmer}. \end{theorem} \section{Elliptic curve summands of abelian varieties} \label{Sdecompose} Let $A/k$ be a principally polarized abelian variety of dimension $g$. In this section, we define the elliptic rank of $A$ to be the maximum number of elliptic curves appearing in a decomposition of $A$ up to isomorphism. When $A$ has $p$-rank $0$, the elliptic rank is bounded by the superspecial rank (Proposition \ref{Psselliptic}). Proposition \ref{Pssrank=ssE} states that the elliptic rank is the number of rank $2$ factors in the $p$-divisible group $A[p^\infty]$ when $A$ is supersingular.
\subsection{Elliptic rank} \begin{definition} The {\it elliptic rank} $e(A)$ of $A$ is \begin{equation} \label{eqdefe} e(A):={\rm max} \{e \mid \iota: A \stackrel{\simeq}{\to} A_1 \times (\times_{i=1}^e E_i)\}, \end{equation} where $E_1, \ldots, E_e$ are elliptic curves, $A_1$ is an abelian variety of dimension $g-e$, and $\iota$ is an isomorphism of abelian varieties over $k$. \end{definition} (We remind the reader that many ``cancellation problems'' for abelian varieties have negative answers \cite{shioda77}, and that the abelian variety $A_1$ in \eqref{eqdefe} is not necessarily unique.) Here are some properties of the elliptic rank. \begin{proposition} \label{Psselliptic} If $A$ has $p$-rank $0$, then the elliptic rank is bounded by the superspecial rank: $e(A) \leq s(A)$. \end{proposition} \begin{proof} If $A$ has $p$-rank $0$, then the elliptic curves $E_1, \ldots, E_e$ in a maximal decomposition of $A$ are supersingular. Each supersingular elliptic curve in the decomposition contributes a factor of $\EE/\EE(F+V)$ to the Dieudonn\'e module $\dieu_*(A[p])$. These factors together give a polarized subgroup scheme of $A[p]$ isomorphic to $I_{1,1}^e$, whence $e(A) \leq s(A)$. \end{proof} The proof of Proposition \ref{Pexists} shows that, for every $g \geq 2$ and prime $p$, there exists a supersingular principally polarized abelian variety of dimension $g$ over $k$ with elliptic rank $e$ if and only if $0 \leq e \leq g-2$ or $e=g$. \begin{remark} \label{Rabssimple} It is clear that $e(A) =0$ if $A$ is simple and ${\rm dim}(A)>1$. Recall from \cite{lenstraoort} that there exists a simple abelian variety $A$ with formal isogeny type $\eta$, for each non-supersingular Newton polygon $\eta$. It follows from Proposition \ref{Psselliptic} that there exist abelian varieties $A$ with $s(A) > 0$ and $e(A) =0$ for all dimensions $g \geq 4$. \end{remark} \subsection{Superspecial rank for $p$-divisible groups} We briefly sketch a parallel version of superspecial rank in the category of $p$-divisible groups, rather than $p$-torsion group schemes.
Many of the notions and results in Section \ref{Sssdef} generalize to truncated Barsotti-Tate groups of arbitrary level, and indeed to Barsotti-Tate, or $p$-divisible, groups. Let $\til G$ be a pqp $p$-divisible group, and let $\til H \subseteq \til G$ be a sub-$p$-divisible group. We say that $\til H$ is polarized if the principal quasipolarization on $\til G$ restricts to one on $\til H$. Lemma \ref{lemdecompbt1} admits an analogue for $p$-divisible groups; for such an $\til H$, there exists a pqp complement $\til K$ such that $\til G \simeq \til H \oplus \til K$. Let $\til I_{1,1}$ be the $p$-divisible group whose Dieudonn\'e module is \[ \til M_{1,1} = \dieu_*(\til I_{1,1}) \simeq \til\EE/\til\EE(F+V); \] then $\til I_{1,1}[p] \simeq I_{1,1}$. With this preparation, we define the superspecial rank $\til s(\til G)$ of a pqp $p$-divisible group $\til G$ as the largest value of $s$ for which there exists a sub-pqp $p$-divisible group of $\til G$ isomorphic to $\til I_{1,1}^{s}$. Since a decomposition of a $p$-divisible group induces a decomposition on its finite levels, it follows that \begin{equation} \til s(\til G) \le s(\til G[p]). \end{equation} Similarly, if $A/k$ is a principally polarized abelian variety, then any decomposition of $A$ induces a decomposition of its $p$-divisible group. So if $A$ has $p$-rank $0$, then \begin{equation} e(A) \le \til s(A[p^\infty]). \end{equation} We thank Oort for suggesting the following result: \begin{proposition}\label{Pssrank=ssE} Let $A/k$ be a supersingular principally polarized abelian variety. Then \[ e(A) = \til s(A[p^\infty]). \] \end{proposition} \begin{proof} Let $\til M$ be the Dieudonn\'e module $\til M= \dieu_*(A[p^\infty])$, and let $E/k$ be a supersingular elliptic curve. Since $A$ is principally polarized, $\til M$ is principally quasipolarized. Let $\til s = \til s(A[p^\infty])$. 
By the same proof as for Lemma \ref{lemdecompbt1}, there is a decomposition of pqp Dieudonn\'e modules \begin{equation} \label{eqdecomptilM} \til M \simeq \til M_{1,1}^{\til s} \oplus \til N, \end{equation} where $\til N$ has superspecial rank zero. By \cite[Theorem 6.2]{ogusSS}, since $\til M$ is supersingular, \eqref{eqdecomptilM} induces a corresponding decomposition \begin{equation} \label{eqdecompabvar} A \simeq E^{\til s} \oplus A_1, \end{equation} where $A_1$ is a principally polarized abelian variety of dimension $g-\til{s}$ with $\til s(A_1[p^\infty]) = 0$. Thus $e(A) \geq \til{s}$ and the result follows. \end{proof} \begin{remark} In fact, it is not hard to give a direct proof that the existence of decomposition \eqref{eqdecomptilM} implies the existence of \eqref{eqdecompabvar}. Indeed, since $A$ is supersingular, there exists an isogeny $\psi:E^g \to A$, which induces an isogeny of $p$-divisible groups $\psi[p^\infty]: \til I_{1,1}^g \to A[p^\infty]$. Let $H = \ker (\psi[p^\infty])$; it is a finite group scheme, and is thus also a sub-group scheme of $E^g$. Since $\End(\til I_{1,1})$ is a maximal order in a division ring over $\ZZ_p$, it is a (noncommutative) principal ideal domain (see also \cite[p.\ 335]{Li:ss}). By the theory of elementary divisors for such rings (e.g., \cite[Chapter 3, Theorem 18]{jacobsonrings}), there is an isomorphism $\til I_{1,1}^g \simeq \til I_{1,1}^{\til s} \times \til I_{1,1}^{g-\til s}$ under which $H$ is contained in $0\times \til I_{1,1}^{g-\til s}$. Since $\End(E^g[p^n]) \simeq \End(\til I_{1,1}^g [p^n])$ for each $n \in \NN$, there is an analogous decomposition $E^g \simeq E^{\til s} \times E^{g-\til s}$ under which $H$ is contained in $0\times E^{g-\til s}$. Then $A = E^g/H \simeq E^{\til s} \oplus A_1$, where $A_1$ is supersingular but has superspecial rank zero. \end{remark} \subsection{An open question} Consider a principally polarized abelian variety $A/k$.
By Remark \ref{Rabssimple}, if $A$ is not supersingular, then it can be absolutely simple ($e(A)=0$) and yet have positive superspecial rank ($s(A)>0$). (Similarly, if $A$ admits ordinary elliptic curves as factors, then it is possible to have $e(A)>0$ while $s(A)=0$.) However, if $A$ has $p$-rank $0$, there are {\em a priori} inequalities \[ e(A) \le \til s(A[p^\infty]) \le s(A[p]). \] Proposition \ref{Pssrank=ssE} shows the first inequality is actually an equality when $A$ is supersingular. This leads one to ask the following: \begin{question} \label{Qask} \begin{enumerate} \item If $A/k$ is supersingular, is $e(A)=s(A)$? \item If $\til G$ is a supersingular pqp $p$-divisible group, is $\til{s}(\til{G})=s(\til{G}[p])$? \end{enumerate} \end{question} The two parts of Question \ref{Qask} have the same answer by Proposition \ref{Pssrank=ssE}. Here is one difficulty in answering this question. \begin{remark} The $p$-divisible group $\til I_{1,1}$ is isomorphic (over $k$) to the $p$-divisible group $H_{1,1}$ introduced in \cite[5.2]{dejongoort}. Consequently, it is {\em minimal} in the sense of \cite[page 1023]{oortminimal}; if $\til M$ is any Dieudonn\'e module such that $(\til M \otimes_W k) \simeq M_{1,1}^{\oplus s}$, then there is an isomorphism $\til M \simeq \til M_{1,1}^{\oplus s}$. In spite of this, because of difficulties with extensions (see, e.g., \cite[Remark 3.2]{oortminimal}), one cannot immediately conclude that $\til{M}$ admits $\til M_{1,1}^{\oplus s}$ as a summand if $\til M/p \til{M}$ has superspecial rank $s$. Indeed, Lemma \ref{lemsshrs0} indicates that an appeal to minimality alone is insufficient; any argument must make use of the principal quasipolarization. \end{remark} \section{Superspecial rank of supersingular Jacobians} \label{Sjacobian} If $X/k$ is a (smooth, projective, connected) curve, its superspecial and elliptic ranks are those of its Jacobian: $s(X)=s({\rm Jac}(X))$ and $e(X)=e({\rm Jac}(X))$. 
In this section, we address the question of which superspecial ranks occur for Jacobians of (supersingular) curves. First, recall that there is a severe restriction on the genus of a superspecial curve. \begin{theorem} \label{Tekedahl} (Ekedahl) If $X/k$ is a superspecial curve of genus $g$, then $g \leq p(p-1)/2$ \cite[Theorem 1.1]{Ekedahl}, see also \cite{Baker}. \end{theorem} For example, if $p=2$, then the genus of a superspecial curve is at most $1$. The Hermitian curve $X_p: y^p+y=x^{p+1}$ is a superspecial curve realizing the upper bound of Theorem \ref{Tekedahl}. In Section \ref{Shyp}, we determine the superspecial ranks of all hyperelliptic curves in characteristic $2$. We determine the superspecial rank of the Jacobians of Hermitian curves in Section \ref{Sherm}. In both cases, this gives an upper bound for the elliptic rank. \subsection{Supersingular Jacobians} \label{SssJ} Recall that a curve $X/\FF_q$ is {\it supersingular} if the Newton polygon of $L(X/\FF_q, t)$ is a line segment of slope $1/2$ or, equivalently, if the Jacobian of $X$ is supersingular. Note that a curve $X/\FF_q$ is supersingular if and only if $X$ is minimal over $\FF_{q^c}$ for some $c \in \NN$. Van der Geer and Van der Vlugt proved that there exists a supersingular curve of every genus in characteristic $p=2$ \cite{VdGVdV}. For $p \geq 3$, it is unknown whether there exists a supersingular curve of every genus. An affirmative answer would follow from a conjecture about deformations of reducible supersingular curves \cite[Conjecture 8.5.7]{oort:rend}. There are many constructions of supersingular curves having arbitrarily large genus. Recall (from the proof of Proposition \ref{Pexists} and the remarks after Proposition \ref{Psselliptic}) that there exists a (non-simple) supersingular principally polarized abelian variety of dimension $g$ over $k$ with elliptic rank $e$ if and only if $0 \leq e \leq g-2$ or $e=g$. In light of this, one can ask the following question.
\begin{question} \label{QssJac} Given a prime $p$, an integer $g \geq 2$, and $0 \leq e \leq g-2$, does there exist a smooth curve $X$ over $\overline{\FF}_p$ of genus $g$ whose Jacobian is supersingular and has elliptic rank $e$? \end{question} The answer to Question \ref{QssJac} is yes when $g=2,3$ and $e=0$. To see this, recall from the proof of Lemma \ref{sswiths=0} that a generic supersingular principally polarized abelian variety of dimension $g$ has Dieudonn\'e module $\EE/\EE(F^g+V^g)$, which has superspecial rank $s=0$. When $g=2,3$, such an abelian variety is the Jacobian of a smooth curve with $e=0$. One expects that the answer to Question \ref{QssJac} is also yes when $g=3$ and $e=1$. To see this, let $E$ be a supersingular elliptic curve. Let $A$ be a supersingular, non-superspecial abelian surface. The $3$-dimensional abelian variety $B= A \times E$ is supersingular and has superspecial rank $1$. If there is a principal polarization on $B$ which is not the product polarization, then $B$ is the Jacobian of a smooth curve. Question \ref{QssJac} is open for $g\geq 4$. \subsection{Superspecial rank of hyperelliptic curves when $p=2$} \label{Shyp} In this section, suppose $k$ is an algebraically closed field of characteristic $p=2$. Application \ref{App1} states that the superspecial rank of a hyperelliptic curve over $k$ with $2$-rank $0$ is either 0 or 1. More generally, Application \ref{App1general} states that the superspecial rank of a hyperelliptic curve over $k$ with $2$-rank $r$ is bounded by $1+r$. A hyperelliptic curve $Y$ over $k$ is defined by an Artin-Schreier equation \[y^2+y=h(x),\] for some non-constant rational function $h(x) \in k(x)$. In \cite{EP13}, the authors determine the structure of the Dieudonn\'e module $M$ of ${\rm Jac}(Y)$ for all hyperelliptic curves $Y$ in characteristic $2$.
A surprising feature is that the isomorphism class of $M$ depends only on the orders of the poles of $h(x)$, and not on the location of the poles or otherwise on the coefficients of $h(x)$. In particular, consider the case that the $2$-rank of $Y$ is $0$, or equivalently, that $h(x)$ has only one pole. In this case, the Ekedahl-Oort type is $[0,1,1,2,2, \ldots, \lfloor \frac{g}{2} \rfloor]$ \cite[Corollary 5.3]{EP13}. The $a$-number is $\lceil \frac{g}{2} \rceil$. \begin{application} \label{App1} Let $Y$ be a hyperelliptic curve of genus $g$ with $2$-rank $0$ defined over an algebraically closed field of characteristic $2$. Then the superspecial rank of ${\rm Jac}(Y)$ is $s=1$ if $g \equiv 1 \pmod 3$ and is $s=0$ otherwise. The elliptic rank of ${\rm Jac}(Y)$ is $e \leq 1$ if $g \equiv 1 \pmod 3$ and $e=0$ otherwise. \end{application} \begin{proof} This follows by applying the algorithm in \cite[Section 5.2]{EP13}. Specifically, by \cite[Proposition 5.10]{EP13} (where $c=g$), the Dieudonn\'e module of the group scheme with Ekedahl-Oort type $[0,1,1,2,2, \ldots, \lfloor \frac{g}{2} \rfloor]$ is generated by variables $X_j$ for $\lceil (g+1)/2 \rceil \leq j \leq g$ subject to the relations $F^{e(j)+1}(X_j)+V^{\epsilon(\iota(j))+1}(X_{\iota(j)})=0$, where the notation is defined in \cite[Notation 5.9]{EP13}. Then $M_{1,1}$ occurs as a summand if and only if there is some $j$ such that $e(j)=\epsilon(j)=0$ and $j=\iota(j)$. The condition $e(j)=0$ is equivalent to $j$ being odd. The conditions $\epsilon(j)=0$ and $j=\iota(j)$ imply that $2g-2j+1=g-(j-1)/2$, which is possible only if $g \equiv 1 \bmod 3$. If $g \equiv 1 \bmod 3$, then $j=(2g+1)/3$, so the maximal rank of a summand isomorphic to $I_{1,1}^s$ is $s=1$. \end{proof} \begin{remark} It is not known exactly which natural numbers $g$ can occur as the genus of a supersingular hyperelliptic curve over $\overline{\FF}_2$.
On one hand, if $g=2^s-1$, then there does not exist a supersingular hyperelliptic curve of genus $g$ over $\overline{\FF}_2$ \cite{SZss}. On the other hand, if $h(x)=xR(x)$ for an additive polynomial $R(x)$ of degree $2^{s}$, then $Y$ is supersingular of genus $2^{s-1}$ \cite{VdGVdV92}. If $s$ is even, then Application \ref{App1} shows that ${\rm Jac}(Y)$ has no elliptic curve factors in a decomposition up to isomorphism, even though it decomposes completely into elliptic curves up to isogeny. \end{remark} More generally, we now determine the superspecial ranks of hyperelliptic curves in characteristic $2$ having arbitrary $2$-rank. Consider the divisor of poles \[{\rm div}_\infty (h(x)) = \sum_{j=0}^{r} d_j P_j.\] By Artin-Schreier theory, one can suppose that $d_j$ is odd for all $j$. Then ${\rm Jac}(Y)$ has genus $g$ satisfying $2g+2=\sum_{j=0}^r (d_j+1)$ by the Riemann-Hurwitz formula \cite[IV, Prop.\ 4]{Se:lf} and has $2$-rank $f=r$ by the Deuring-Shafarevich formula \cite[Theorem 4.2]{Subrao} or \cite[Cor.\ 1.8]{Crew}. These formulae imply that, for a given genus $g$ (and $2$-rank $r$), there is another discrete invariant of a hyperelliptic curve $Y/k$, namely a partition of $2g+2$ into $r+1$ positive even integers $d_j + 1$. In \cite{EP13}, the authors prove that the Ekedahl-Oort type of $Y$ depends only on this discrete invariant. Specifically, consider the variable $x_j:=(x-P_j)^{-1}$, which is the inverse of a uniformizer at the branch point $P_j$ in ${\mathbb P}^1$ (with $x_j=x$ if $P_j = \infty$). Then $h(x)$ has a partial fraction decomposition of the form \[h(x)=\sum_{j=0}^r h_{j} \big(x_j\big),\] where $h_j(x) \in k[x]$ is a polynomial of degree $d_j$. Let $c_j=(d_j-1)/2$ and note that $g=r+\sum_{j=0}^r c_j$. For $0 \leq j \leq r$, consider the Artin-Schreier $k$-curve $Y_j$ with affine equation $y^2 - y = h_j(x)$. Let $E_0$ be an ordinary elliptic curve over $k$. 
Then \cite[Theorem 1.2]{EP13} states that the de Rham cohomology of $Y$ decomposes, as a module under the actions of Frobenius $F$ and Verschiebung $V$, as: \[ H^1_{\rm dR}(Y) \simeq H^1_{\rm dR}(E_0)^{r} \oplus \bigoplus_{j=0}^r H^1_{\rm dR}(Y_j). \] Since $E_0$ is ordinary, it has superspecial rank $0$. The superspecial rank of ${\rm Jac}(Y)$ is thus the sum of the superspecial ranks of ${\rm Jac}(Y_j)$. Applying Application \ref{App1} to $\{Y_j\}_{j=0}^r$ proves the following. \begin{application} \label{App1general} Consider a hyperelliptic curve $Y$ defined over an algebraically closed field of characteristic $2$. Then $Y$ is defined by an equation of the form $y^2+y=h(x)$ with ${\rm div}_\infty (h(x)) = \sum_{j=0}^{r} d_j P_j$ and $d_j$ odd. Recall that $Y$ has genus $g=r+\sum_{j=0}^r c_j$ where $c_j=(d_j-1)/2$ and $2$-rank $r$. The superspecial rank of ${\rm Jac}(Y)$ equals the number of $j$ such that $c_j \equiv 1 \bmod 3$. In particular, $s({\rm Jac}(Y)) \leq 1 + r$ and $e({\rm Jac}(Y)) \leq 1 + 2r$. \end{application} \subsection{Hermitian curves} \label{Sherm} The last examples of the paper concern the superspecial rank for one of the three classes of (supersingular) Deligne-Lusztig curves: the Hermitian curves $X_q$, where $q=p^n$ and $p$ is an arbitrary prime. In most cases, the superspecial (and elliptic) ranks are quite small, which is somewhat surprising since these curves are exceptional from many perspectives. Let $q=p^n$. The Hermitian curve $X_q$ has affine equation \[y^q + y = x^{q+1}.\] It is supersingular with genus $g=q(q-1)/2$. It is maximal over $\FF_{q^2}$ because $\#X_q\left(\FF_{q^2}\right)=q^3+1$. The zeta function of $X_q$ is \[Z(X_q/\FF_q, t)=\frac{(1+qt^2)^g}{(1-t)(1-qt)}.\] In fact, $X_q$ is the unique curve of this genus which is maximal over $\FF_{q^2}$ \cite{ruckstich}. This was used to prove that $X_q$ is the Deligne-Lusztig variety for ${\rm Aut}(X_q)={\rm PGU}(3,q)$ \cite[Proposition 3.2]{HansenDL}.
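The maximality count $\#X_q(\FF_{q^2})=q^3+1$ can be checked by brute force for small $q$. The sketch below is our own; it hand-rolls arithmetic in $\FF_9 = \FF_3(i)$ with $i^2=-1$ (note $x^2+1$ is irreducible over $\FF_3$) and counts the affine points of $X_3\colon y^3+y=x^4$ over $\FF_9$:

```python
p = 3  # check X_3 : y^3 + y = x^4 over F_9 = F_3(i), i^2 = -1

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

def mul(u, v):
    # (a + b i)(c + d i) = (ac - bd) + (ad + bc) i, since i^2 = -1
    a, b = u
    c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def power(u, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, u)
    return r

F9 = [(a, b) for a in range(p) for b in range(p)]
affine = sum(1 for x in F9 for y in F9
             if add(power(y, p), y) == power(x, p + 1))
# together with the single point at infinity: q^3 + 1 points in total
assert affine + 1 == p ** 3 + 1
```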
By \cite[Proposition 14.10]{Gross}, the $a$-number of $X_q$ is \[a=p^n(p^{n-1}+1)(p-1)/4,\] which equals $g$ when $n=1$, equals $g/2$ when $n=2$, and is approximately $g/2$ for $n \geq 3$. In particular, $X_{p^n}$ is superspecial if and only if $n=1$. In \cite{PW12}, for all $q=p^n$, the authors determine the Dieudonn\'e module $\dieu_*(X_q) = \dieu_*(\operatorname{Jac}(X_q)[p])$, complementing earlier work in \cite{dum95, dum99}. In particular, \cite[Theorem 5.13]{PW12} states that the distinct indecomposable factors of the Dieudonn\'e module $\dieu_*(X_q)$ are in bijection with orbits of $\ZZ/(2^n+1) \setminus \{0\}$ under $\times 2$. Each factor's structure is determined by the combinatorics of the orbit, which depends only on $n$ and not on $p$. The multiplicities of the factors do depend on $p$. For example, when $n=2$, the Dieudonn\'e module of $X_{p^2}$ is $M_{2,2}^{g/2}$, which has superspecial rank $0$ (Lemma \ref{lemsshrs0}). Here is an application of these results. \begin{application} \label{App2} The elliptic rank of the Jacobian of the Hermitian curve $X_{p^n}$ equals $0$ if $n$ is even and is at most $(\frac{p(p-1)}{2})^n$ if $n$ is odd. \end{application} \begin{proof} By Proposition \ref{Psselliptic}, $e({\rm Jac}(X_{p^n})) \leq s({\rm Jac}(X_{p^n}))$. Applying \cite[Application 6.1]{PW12}, the factor $\EE/\EE(F+V)$ occurs in the Dieudonn\'e module if and only if there is an orbit of length 2 in $\ZZ/(2^n+1)$ under $\times 2$. This happens if and only if there is an element of order three in $\ZZ/(2^n+1)$, which is true if and only if $n$ is odd. If $n$ is even, this shows that $\EE/\EE(F+V)$ is not a factor of the Dieudonn\'e module and $s({\rm Jac}(X_{p^n}))=0$. If $n$ is odd, the multiplicity of this factor is $s({\rm Jac}(X_{p^n}))=(\frac{p(p-1)}{2})^n$. \end{proof}
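The orbit criterion used in this proof is easy to verify computationally. The sketch below is our own; it lists the orbit lengths of $\ZZ/(2^n+1)\setminus\{0\}$ under multiplication by $2$ and confirms that an orbit of length $2$ occurs exactly when $3 \mid 2^n+1$, that is, when $n$ is odd:

```python
def orbit_lengths(n):
    # orbit lengths of Z/(2^n + 1) \ {0} under multiplication by 2
    N = 2 ** n + 1
    seen, lengths = set(), []
    for x in range(1, N):
        if x in seen:
            continue
        orbit, y = set(), x
        while y not in orbit:
            orbit.add(y)
            y = (2 * y) % N
        seen |= orbit
        lengths.append(len(orbit))
    return lengths

# an orbit of length 2 exists exactly when 3 | 2^n + 1, i.e. when n is odd
for n in range(1, 9):
    assert (2 in orbit_lengths(n)) == (n % 2 == 1)
```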
https://arxiv.org/abs/1904.07761
Trace operators of the bi-Laplacian and applications
We study several trace operators and spaces that are related to the bi-Laplacian. They are motivated by the development of ultraweak formulations for the bi-Laplace equation with homogeneous Dirichlet condition, but are also relevant to describe conformity of mixed approximations. Our aim is to have well-posed (ultraweak) formulations that assume low regularity, under the condition of an $L_2$ right-hand side function. We pursue two ways of defining traces and corresponding integration-by-parts formulas. In one case one obtains a non-closed space. This can be fixed by switching to the Kirchhoff-Love traces from [Führer, Heuer, Niemi, An ultraweak formulation of the Kirchhoff-Love plate bending model and DPG approximation, Math. Comp., 88 (2019)]. Using different combinations of trace operators we obtain two well-posed formulations. For both of them we report on numerical experiments with the DPG method and optimal test functions. In this paper we consider two and three space dimensions. However, with the exception of a given counterexample in an appendix (related to the non-closedness of a trace space), our analysis applies to any space dimension larger than or equal to two.
\section{Introduction} The bi-Laplace operator and biharmonic functions continue to generate sustained interest in the mathematics community. In numerical analysis alone, MathSciNet lists well over 500 publications with these keywords in their titles. An early overview of numerical methods for the Dirichlet problem of the bi-Laplacian is given by Glowinski and Pironneau in \cite{GlowinskiP_79_NMF}. A more recent discussion can be found in the introduction of \cite{CockburnDG_09_HSD}. Our interest in this operator arose while studying the Kirchhoff--Love plate bending model and its numerical approximation by the discontinuous Petrov--Galerkin method with optimal test functions (DPG method). It is well known that the Kirchhoff--Love model (with constant coefficients) reduces to the bi-Laplace equation when considering the deflection of the plate as the only unknown. In this paper we introduce and analyze trace operators that stem from the bi-Laplacian and relate to integration-by-parts formulas. Such operators are of general interest as they characterize interface conditions for (piecewise) sufficiently smooth functions to be globally in the domain of the bi-Laplacian, or the subordinated Laplacian when considering the Laplacian of the unknown as independent unknown. Specifically, this analysis is required to construct conforming finite element spaces of minimal regularity. Regularity is a delicate issue when splitting the bi-Laplace equation into two Laplace equations (explicitly, or implicitly through a mixed formulation). Early papers on this technique are by Ciarlet and Raviart \cite{CiarletR_74_MFE}, and Monk \cite{Monk_88_IFE}. Regularity issues at corners have been analyzed, e.g., in \cite{GerasimovSS_12_CGP,DeCosterNS_15_SBD}. Thus, our aim is to use the least possible regularity subject to a given right-hand side function in $L_2$.
We note that Zulehner \cite{Zulehner_15_CRM} presents a formulation (and space) where less regular right-hand side functions are permitted. We consider Dirichlet boundary conditions, that is, a clamped plate in the two-dimensional case. Here we only note that, in principle, it is possible to study different boundary conditions, but the regularity of solutions will depend on them and some technical details can be tricky. In the rest of this paper we motivate our definitions and analysis by requirements for the DPG method. For instance, the right-hand side function to be in $L_2$ is such a requirement. Considering this method, there are good reasons to use ultraweak variational formulations. From the mathematical point of view they simplify the analysis of well-posedness as they allow for exact representations of adjoint operators, cf.~\cite{DemkowiczG_11_ADM}. From a practical point of view they give access to approximations of field variables that are close to optimal in the $L_2$ sense, cf., e.g.,~\cite{DemkowiczH_13_RDM,HeuerK_17_RDM} for singularly perturbed problems. For general second order elliptic problems, the $L_2$-optimality up to higher order terms is proved in \cite{Fuehrer_SDM}. Now, since field variables of ultraweak formulations are only $L_2$-elements, the inherent regularity of the underlying problem is passed onto appearing traces. Therefore, the study of trace spaces is at the heart of proving well-posedness of ultraweak formulations. As explained before, the appearing traces (and trace spaces) are equally relevant for the underlying problem and other variational formulations as they precisely describe the notion of conformity and represent tools for its study. It is the nature of DPG methods to use product test spaces (defined on meshes). This is a fundamental paradigm proposed by Demkowicz and Gopalakrishnan in \cite{DemkowiczG_11_CDP}. For that reason, our traces will live in product spaces related to the boundaries of elements. 
Nevertheless, our results will apply to operations on domains without mesh, simply by using meshes that consist of a single element. The remainder of this paper is as follows. In the next section we fix our model problem and introduce a setting needed to develop ultraweak variational formulations. This approach motivates the framework in which we study trace and jump operators, and trace spaces, and is presented in \S\ref{sec_traces_jumps}. Aiming at lowest regularity, we first develop a setting where the unknown $u$ of the bi-Laplace equation and its Laplacian (as independent unknown) are considered as elements of the same regularity ($L_2$-functions whose Laplacian is in $L_2$). This is done in \S\ref{sec_trace1}. Later, in \S\ref{sec_VF1}, we present a variational formulation based on this framework, state its well-posedness and equivalence with the model problem (Theorem~\ref{thm_stab1}), and prove the quasi-optimal convergence of the induced DPG scheme (Theorem~\ref{thm_DPG1}). For this formulation, discrete subspaces with good approximation properties seem to require coupled basis functions (trace components are not independent). This limits the practicality of the induced DPG scheme. We therefore also consider the option of using more regular test functions (then trace components can be approximated separately). This change gives rise to different trace operators ($\traceDt{}$ acting on $u$, and $\tracetD{}$ acting on $\sigma=\Delta u$) and spaces. They are studied in \S\ref{sec_trace2}. Unfortunately, it turns out that the image of $\tracetD{}$ is not closed (this is proved in Appendix~\ref{sec_app1}). We therefore embed this space in a larger, closed trace space known from our Kirchhoff--Love traces studied in \cite{FuehrerHN_19_UFK}. The corresponding variational formulation and DPG scheme are presented in \S\ref{sec_VF2}, stating well-posedness and quasi-optimal convergence by Theorems~\ref{thm_stab2} and~\ref{thm_DPG2}, respectively.
Proofs of Theorems~\ref{thm_stab1} and~\ref{thm_stab2} are given in \S\ref{sec_proofs}. We do not provide a discrete analysis here, but we do present some numerical experiments in \S\ref{sec_num}. They illustrate expected convergence properties. One conclusion of our analysis is that the solution $u$ to the bi-Laplace equation with right-hand side function in $L_2$ and homogeneous boundary condition satisfies $u\in H^2$. Since we have not seen this result in the literature, we state and prove it in Appendix~\ref{sec_reg}. There are, however, related $H^2$-regularity results for the bi-Laplacian by Girault and Raviart in \cite[\S{5}]{GiraultR_86_FEM} (dimensions $2$ and $3$), and by De~Coster \emph{et al.} \cite{DeCosterNS_15_SBD} for corner-type domains in $\ensuremath{\mathbb{R}}^2$. Throughout the paper, $a\lesssim b$ means that $a\le cb$ with a generic constant $c>0$ that is independent of the underlying mesh (except for possible general restrictions like shape-regularity of elements). Similarly, we use the notation $a\simeq b$. \section{Model problem} \label{sec_model} Let $\Omega\subset\ensuremath{\mathbb{R}}^d$ ($d\in\{2,3\}$) be a bounded simply connected Lipschitz domain. (We remark that our analysis and results will apply to any space dimension $d\ge 2$, with the exception of Lemma~\ref{la_trtD} with respect to the Dirac distributions and the counterexample of Appendix~\ref{sec_app1}. Nevertheless, we restrict ourselves to $d\in\{2,3\}$ since we will make use of some results from \cite{FuehrerHN_19_UFK} which are true in any space dimension $d\ge 2$, but are only formulated for $d\in\{2,3\}$.) The boundary of $\Omega$ is denoted by $\Gamma=\partial\Omega$ with exterior unit normal vector $\ensuremath{\mathbf{n}}$.
For given $f\in L_2(\Omega)$ our model problem is \begin{subequations} \label{prob} \begin{alignat}{3} \Delta^2 u &= f && \quad\text{in}\ \Omega\label{p1},\\ u = \partial_\ensuremath{\mathbf{n}} u &= 0 && \quad\text{on}\ \Gamma.\label{p2} \end{alignat} \end{subequations} We intend to develop an ultraweak formulation of \eqref{prob} with product test spaces. To this end we consider a mesh $\ensuremath{\mathcal{T}}$ that consists of general non-intersecting Lipschitz elements. To the mesh $\ensuremath{\mathcal{T}}=\{T\}$ we assign the skeleton $\ensuremath{\mathcal{S}}=\{\partial T;\;T\in\ensuremath{\mathcal{T}}\}$. Introducing $\sigma:=\Delta u$, we test the two equations $\Delta\sigma=f$, $\Delta u-\sigma=0$ on any $T\in\ensuremath{\mathcal{T}}$ by sufficiently smooth functions $v$ and $\tau$, respectively, and integrate by parts twice. This formally gives \[ \vdual{\sigma}{\Delta v}_T +\dual{\partial_\ensuremath{\mathbf{n}}\sigma}{v}_{\partial T} -\dual{\sigma}{\partial_\ensuremath{\mathbf{n}} v}_{\partial T} +\vdual{u}{\Delta\tau}_T + \dual{\partial_\ensuremath{\mathbf{n}} u}{\tau}_{\partial T} - \dual{u}{\partial_\ensuremath{\mathbf{n}}\tau}_{\partial T} -\vdual{\sigma}{\tau}_T = \vdual{f}{v}_T, \] where $\vdual{\cdot}{\cdot}_T$ denotes the $L_2(T)$-duality. We still have to interpret the dualities on $\partial T$ denoted by $\dual{\cdot}{\cdot}_{\partial T}$. 
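For the reader's convenience we make the two integration by parts steps explicit; this is Green's second identity, valid formally for sufficiently smooth functions:

```latex
% Green's second identity on an element T (formal computation):
\begin{align*}
  \vdual{\Delta\sigma}{v}_T
  &= -\vdual{\nabla\sigma}{\nabla v}_T + \dual{\partial_\ensuremath{\mathbf{n}}\sigma}{v}_{\partial T}
   = \vdual{\sigma}{\Delta v}_T
     + \dual{\partial_\ensuremath{\mathbf{n}}\sigma}{v}_{\partial T}
     - \dual{\sigma}{\partial_\ensuremath{\mathbf{n}} v}_{\partial T},
\end{align*}
```

and analogously with $(u,\tau)$ in place of $(\sigma,v)$. Inserting these identities into $\vdual{\Delta\sigma}{v}_T=\vdual{f}{v}_T$ and $\vdual{\Delta u-\sigma}{\tau}_T=0$ and adding the results yields the relation displayed above.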
Summing over $T\in\ensuremath{\mathcal{T}}$, we obtain, again formally, \begin{align} \label{VFa} &\vdual{u}{\Delta \tau}_\ensuremath{\mathcal{T}} + \vdual{\sigma}{\Delta v-\tau}_\ensuremath{\mathcal{T}} \nonumber\\ & + \sum_{T\in\ensuremath{\mathcal{T}}} \dual{\partial_\ensuremath{\mathbf{n}}\sigma}{v}_{\partial T} - \sum_{T\in\ensuremath{\mathcal{T}}} \dual{\sigma}{\partial_\ensuremath{\mathbf{n}} v}_{\partial T} + \sum_{T\in\ensuremath{\mathcal{T}}} \dual{\partial_\ensuremath{\mathbf{n}} u}{\tau}_{\partial T} - \sum_{T\in\ensuremath{\mathcal{T}}} \dual{u}{\partial_\ensuremath{\mathbf{n}} \tau}_{\partial T} = \vdual{f}{v}. \end{align} Here and in the following, $\vdual{\cdot}{\cdot}_\ensuremath{\mathcal{T}}$ denotes the $L_2$-duality in the product space $L_2(\ensuremath{\mathcal{T}})$, meaning that appearing differential operators are taken piecewise with respect to $T\in\ensuremath{\mathcal{T}}$. Below, we also use the notation of differential operators with index $\ensuremath{\mathcal{T}}$ to indicate piecewise operations, e.g., $\vdual{\Delta_\cT u}{v}=\vdual{\Delta u}{v}_\ensuremath{\mathcal{T}}$. Furthermore, from now on, $\ensuremath{\mathbf{n}}$ denotes a generic unit normal vector on $\partial T$ ($T\in\ensuremath{\mathcal{T}}$) and $\Gamma$, pointing outside $T$ and $\Omega$, respectively. Before returning to our formulation \eqref{VFa} we need to study trace operators to give a meaning to the skeleton dualities appearing in \eqref{VFa}. This will be done next, before returning to \eqref{VFa} in \S\ref{sec_VF1}, and again in \S\ref{sec_VF2}. 
\section{Traces and jumps} \label{sec_traces_jumps} \subsection{Spaces and norms} Given $T\in\ensuremath{\mathcal{T}}$, and sufficiently smooth scalar (respectively, symmetric tensor) function $z:\;T\to\ensuremath{\mathbb{R}}$ (respectively, $\mathbf{\Theta}:\;T\to\ensuremath{\mathbb{R}}^{d\times d}$), we define the norms $\|\cdot\|_{\Delta,T}$, $\|\cdot\|_{2,T}$ and $\|\cdot\|_\trddiv{T}$ by \begin{align*} \|z\|_{\Delta,T}^2 &:= \|z\|_T^2 + \|\Delta z\|_T^2,\quad \|z\|_{2,T}^2 := \|z\|_T^2 + \|\boldsymbol{\varepsilon}\nabla z\|_T^2,\quad \|\mathbf{\Theta}\|_\trddiv{T}^2 := \|\mathbf{\Theta}\|_T^2 + \|{\rm div\,}{\rm\bf div\,}\mathbf{\Theta}\|_T^2. \end{align*} Here, $\|\cdot\|_T$ is the $L_2(T)$-norm (for scalar and tensor-valued functions), $\boldsymbol{\varepsilon}(\cdot):=\frac 12(\nabla(\cdot)+\nabla(\cdot)^\mathsf{T})$ denotes the symmetric gradient, that is, $\boldsymbol{\varepsilon}\nabla z$ is the Hessian of $z$, ${\rm div\,}\!$ is the standard divergence operator, and ${\rm\bf div\,}\!$ is the divergence applied row-wise to tensors. Analogously, we use the corresponding norms on $\Omega$ where we drop the index $T$. For instance, $\|\cdot\|$ is the $L_2(\Omega)$-norm. We also need the $L_2(\Omega)$-bilinear form $\vdual{\cdot}{\cdot}$, for scalar and tensor functions. We define the spaces $\HD{T}$ and $H^2(T)$ as the closures of $\ensuremath{\mathcal{D}}(\overline{T})$ with respect to the norms $\|\cdot\|_{\Delta,T}$ and $\|\cdot\|_{2,T}$, respectively. Correspondingly, $\HdDiv{T}$ is the closure of the space of smooth symmetric tensors on $T$ with respect to $\|\cdot\|_\trddiv{T}$. Analogously, $\HD{\Omega}$ and $H^2_0(\Omega)$ are the respective closures of $\ensuremath{\mathcal{D}}(\overline{\Omega})$ and $\ensuremath{\mathcal{D}}(\Omega)$, with norms $\|\cdot\|_{\Delta}$ and $\|\cdot\|_2$, and $\HdDiv{\Omega}$ is the closure with respect to $\|\cdot\|_\mathrm{dDiv}$ of the space of smooth symmetric tensors on $\Omega$. 
Given the mesh $\ensuremath{\mathcal{T}}$, we will need the induced product spaces \begin{align*} \HD{\ensuremath{\mathcal{T}}} &:= \{z\in L_2(\Omega);\; z|_T\in \HD{T}\ \forall T\in\ensuremath{\mathcal{T}}\},\\ H^2(\ensuremath{\mathcal{T}}) &:= \{z\in L_2(\Omega);\; z|_T\in H^2(T)\ \forall T\in\ensuremath{\mathcal{T}}\},\\ \HdDiv{\ensuremath{\mathcal{T}}} &:= \{\mathbf{\Theta}\in \ensuremath{\mathbb{L}}_2^s(\Omega);\; \mathbf{\Theta}|_T\in\HdDiv{T}\ \forall T\in\ensuremath{\mathcal{T}}\} \end{align*} with canonical product norms $\|\cdot\|_{\Delta,\ensuremath{\mathcal{T}}}$, $\|\cdot\|_{2,\ensuremath{\mathcal{T}}}$, and $\|\cdot\|_\trddiv{\ensuremath{\mathcal{T}}}$, respectively. Here, $\ensuremath{\mathbb{L}}_2^s$ indicates the space of symmetric $L_2$-tensors on the indicated domain. \subsection{Traces and jumps, part one} \label{sec_trace1} We define linear operators $\traceD{T}:\;\HD{T}\to \HD{T}'$ for $T\in\ensuremath{\mathcal{T}}$ by \begin{align} \label{trDT} \dual{\traceD{T}(z)}{v}_{\partial T} := \vdual{\Delta v}{z}_T - \vdual{v}{\Delta z}_T \quad\forall v\in \HD{T}, \end{align} and observe that, for sufficiently smooth functions $v$ and $z$, \begin{equation} \label{trDT_classical} \dual{\traceD{T}(z)}{v}_{\partial T} = \dual{z}{\partial_\ensuremath{\mathbf{n}} v}_{\partial T} - \dual{v}{\partial_\ensuremath{\mathbf{n}} z}_{\partial T} \end{equation} with $L_2(\partial T)$-duality $\dual{\cdot}{\cdot}_{\partial T}$ and standard trace and normal derivative. In other words, the trace operator $\traceD{T}$ can deliver standard traces (trace and normal derivative) on $\partial T$ when one deviates from the setting of a map from $\HD{T}$ to its dual. This will be further discussed in \S\ref{sec_trace2} below. Note the duality \[ \dual{\traceD{T}(z)}{v}_{\partial T} = - \dual{\traceD{T}(v)}{z}_{\partial T} \quad\forall z,v\in\HD{T}. \] The range of $\traceD{T}$ is \[ \bHD{\partial T} := \traceD{T}(\HD{T})\quad (T\in\ensuremath{\mathcal{T}}).
\] Switching from individual elements $T\in\ensuremath{\mathcal{T}}$ to the whole of $\ensuremath{\mathcal{T}}$, a collective trace operator is defined by \[ \traceD{}:\; \left\{\begin{array}{cll} \HD{\Omega} & \to & \HD{\ensuremath{\mathcal{T}}}',\\ z & \mapsto & \traceD{}(z) := (\traceD{T}(z))_T \end{array}\right., \] with duality \begin{align} \label{trD_duality} \dual{\traceD{}(z)}{v}_\ensuremath{\mathcal{S}} &:= \sum_{T\in\ensuremath{\mathcal{T}}} \dual{\traceD{T}(z)}{v}_{\partial T} \quad(z\in\HD{\Omega},\ v\in \HD{\ensuremath{\mathcal{T}}}). \end{align} To define a trace space that reflects the homogeneous boundary condition under consideration, we make use of the operator $\traceD{\Omega}$ that is defined like $\traceD{T}$ by replacing $T$ with $\Omega$: \begin{align} \label{trace_Omega} \dual{\traceD{\Omega}(z)}{v}_\Gamma := \vdual{z}{\Delta v} - \vdual{\Delta z}{v} \quad (z,v\in\HD{\Omega}). \end{align} Then, with \[ \HDz{\Omega} := \ker(\traceD{\Omega}), \] we introduce the product trace spaces \[ \bHDzz{\ensuremath{\mathcal{S}}} := \traceD{}(\HDz{\Omega}) \ \subset\ \bHD{\ensuremath{\mathcal{S}}} := \traceD{}(\HD{\Omega}) \ \subset\ \HD{\ensuremath{\mathcal{T}}}'. \] Below, we refer to elements of such skeleton trace spaces in the form, e.g., $\wat{\boldsymbol{v}}=(\wat{\boldsymbol{v}}_T)_{T\in\ensuremath{\mathcal{T}}}$. The local and global trace spaces are equipped with the canonical trace norms, \begin{align*} &\|\wat{\boldsymbol{v}}\|_\trD{\partial T} = \inf\{\|v\|_{\Delta,T};\; v\in \HD{T},\ \traceD{T}(v)=\wat{\boldsymbol{v}}\} \quad (\wat{\boldsymbol{v}}\in\bHD{\partial T},\ T\in\ensuremath{\mathcal{T}}), \\ &\|\wat{\boldsymbol{v}}\|_\trD{\ensuremath{\mathcal{S}}} = \inf\{\|v\|_\Delta;\; v\in \HD{\Omega},\ \traceD{}(v)=\wat{\boldsymbol{v}}\} \qquad (\wat{\boldsymbol{v}}\in\bHD{\ensuremath{\mathcal{S}}}\cup\bHDzz{\ensuremath{\mathcal{S}}}). 
\end{align*} (Obviously, $\bHDzz{\ensuremath{\mathcal{S}}}$ is a subspace of $\bHD{\ensuremath{\mathcal{S}}}$. But here, and in some instances below, we write $\bHD{\ensuremath{\mathcal{S}}}\cup\bHDzz{\ensuremath{\mathcal{S}}}$ to stress the fact that both spaces are furnished with the same norm.) Alternative norms are defined by duality, \begin{align*} \|\wat{\boldsymbol{v}}\|_{\Delta',\partial T} &:= \sup_{0\not=z\in\HD{T}} \frac{\dual{\wat{\boldsymbol{v}}}{z}_{\partial T}}{\|z\|_{\Delta,T}} \quad (\wat{\boldsymbol{v}}\in \bHD{\partial T},\ T\in\ensuremath{\mathcal{T}}), \\ \|\wat{\boldsymbol{v}}\|_{\Delta',\ensuremath{\mathcal{S}}} &:= \sup_{0\not=z\in\HD{\ensuremath{\mathcal{T}}}} \frac{\dual{\wat{\boldsymbol{v}}}{z}_{\ensuremath{\mathcal{S}}}}{\|z\|_{\Delta,\ensuremath{\mathcal{T}}}} \quad (\wat{\boldsymbol{v}}\in \bHD{\ensuremath{\mathcal{S}}}\cup\bHDzz{\ensuremath{\mathcal{S}}}). \end{align*} Here, the dualities on $\partial T$ and $\ensuremath{\mathcal{S}}$ are given by the corresponding trace operations, \eqref{trDT} for the local spaces and \eqref{trD_duality} on $\ensuremath{\mathcal{S}}$. For instance, the duality between $\wat{\boldsymbol{v}}\in\bHD{\partial T}$ and $z\in\HD{T}$ is $\dual{\wat{\boldsymbol{v}}}{z}_{\partial T}=\vdual{\Delta z}{v}_T-\vdual{z}{\Delta v}_T$ with arbitrary $v\in\HD{T}$ such that $\traceD{T}(v)=\wat{\boldsymbol{v}}$. \begin{lemma} \label{la_tr_unity} The identity \[ \|\wat{\boldsymbol{z}}\|_{\Delta',\partial T} = \|\wat{\boldsymbol{z}}\|_\trD{\partial T}\quad \forall \wat{\boldsymbol{z}}\in \bHD{\partial T},\ T\in \ensuremath{\mathcal{T}} \] holds, so that \[ \traceD{T}:\; \HD{T}\to \bHD{\partial T} \] has unit norm and $(\bHD{\partial T},\|\cdot\|_{\Delta',\partial T})$ is closed. \end{lemma} \begin{proof} The proof is essentially identical to that of Lemma~3.2 in \cite{FuehrerHN_19_UFK}. We just need to replace spaces, operators and norms by the ones used here. For the convenience of the reader we repeat the proof.
The estimate $\|\wat{\boldsymbol{z}}\|_{\Delta',\partial T}\le \|\wat{\boldsymbol{z}}\|_\trD{\partial T}$ is due to the boundedness \begin{align*} \dual{\traceD{T}(z)}{v}_{\partial T} &\le \|z\|_{\Delta,T} \|v\|_{\Delta,T} \quad\forall z,v\in\HD{T},\ T\in\ensuremath{\mathcal{T}}. \end{align*} To show the other direction we consider an element $T\in\ensuremath{\mathcal{T}}$ and $\wat{\boldsymbol{z}}\in\bHD{\partial T}$, and define $v\in\HD{T}$ by solving \begin{align} \label{prob_dd_z} \vdual{\Delta v}{\Delta \delta\!v}_T + \vdual{v}{\delta\!v}_T = \dual{\wat{\boldsymbol{z}}}{\delta\!v}_{\partial T} \quad\forall \delta\!v\in\HD{T}. \end{align} One deduces that \begin{align} \label{pde_dd_z} \Delta^2 v + v = 0\quad\text{in}\ L_2(T). \end{align} We then define $z\in\HD{T}$ as the solution to \begin{align} \label{prob_dd_QQ} \vdual{\Delta z}{\Delta\delta\!z}_T + \vdual{z}{\delta\!z}_T = \dual{\traceD{T}(\delta\!z)}{v}_{\partial T} \quad\forall\delta\!z\in \HD{T}. \end{align} Again, it holds \begin{align} \label{pde_dd_QQ} \Delta^2 z + z = 0\quad\text{in}\ L_2(T). \end{align} Let us show that $z=\Delta v$. To this end we define $z^*:=\Delta v$ and find that \( \Delta z^* = -v, \) cf.~\eqref{pde_dd_z}. Using this relation, and the definitions of $z^*$ and $\traceD{T}$, cf.~\eqref{trDT}, we obtain \begin{align*} \vdual{\Delta z^*}{\Delta\delta\!z}_T + \vdual{z^*}{\delta\!z}_T &= -\vdual{v}{\Delta\delta\!z}_T + \vdual{\Delta v}{\delta\!z}_T = \dual{\traceD{T}(\delta\!z)}{v}_{\partial T} \end{align*} for any $\delta\!z\in\HD{T}$. This shows that $z^*$ solves \eqref{prob_dd_QQ}, that is, $z=z^*=\Delta v$. Due to this relation and $\Delta z = -v$, it follows by \eqref{prob_dd_z} that \begin{align*} \dual{\traceD{T}(z)}{\delta\!v}_{\partial T} &= \vdual{z}{\Delta\delta\!v}_T - \vdual{\Delta z}{\delta\!v}_T \\ &= \vdual{\Delta v}{\Delta\delta\!v}_T + \vdual{v}{\delta\!v}_T = \dual{\wat{\boldsymbol{z}}}{\delta\!v}_{\partial T} \quad\forall\delta\!v\in\HD{T}. 
\end{align*} In other words, $\traceD{T}(z)=\wat{\boldsymbol{z}}$. This relation together with selecting $\delta\!v=v$ in \eqref{prob_dd_z} and $\delta\!z=z$ in \eqref{prob_dd_QQ}, shows that \begin{align*} \dual{\wat{\boldsymbol{z}}}{v}_{\partial T} = \|v\|_{\Delta,T}^2 = \|z\|_{\Delta,T}^2. \end{align*} Noting that \( \|z\|_{\Delta,T} = \|\wat{\boldsymbol{z}}\|_\trD{\partial T} \) by \eqref{pde_dd_QQ}, this relation finishes the proof of the norm identity. The space $\bHD{\partial T}$ is closed as the image of a bounded below operator. \end{proof} \begin{prop} \label{prop_D_jump} (i) For $z\in\HD{\ensuremath{\mathcal{T}}}$ it holds \[ z\in\HD{\Omega} \quad\Leftrightarrow\quad \dual{\traceD{}(v)}{z}_\ensuremath{\mathcal{S}} = 0\quad\forall v\in\HDz{\Omega} \] and \[ z\in\HDz{\Omega} \quad\Leftrightarrow\quad \dual{\traceD{}(v)}{z}_\ensuremath{\mathcal{S}} = 0\quad\forall v\in\HD{\Omega}. \] (ii) The identity \begin{align*} \sum_{T\in\ensuremath{\mathcal{T}}} \|\wat{\boldsymbol{z}}_T\|_\trD{\partial T}^2 = \|\wat{\boldsymbol{z}}\|_\trD{\ensuremath{\mathcal{S}}}^2 \quad\forall \wat{\boldsymbol{z}}=(\wat{\boldsymbol{z}}_T)_T \in \bHD{\ensuremath{\mathcal{S}}}\cup\bHDzz{\ensuremath{\mathcal{S}}} \end{align*} holds true. \end{prop} \begin{proof} The proof of (i) follows the standard procedure, cf.~\cite[Proof of Theorem 2.3]{CarstensenDG_16_BSF} and \cite[Proof of Proposition 3.8(i)]{FuehrerHN_19_UFK}. For $z\in\HD{\Omega}$ and $v\in\HDz{\Omega}$ we have that \begin{align*} -\dual{\traceD{}(z)}{v}_\ensuremath{\mathcal{S}} = \dual{\traceD{}(v)}{z}_\ensuremath{\mathcal{S}} &\overset{\mathrm{def}}= \sum_{T\in\ensuremath{\mathcal{T}}} \vdual{\Delta z}{v}_T - \vdual{z}{\Delta v}_T = \vdual{\Delta z}{v} - \vdual{z}{\Delta v} = \dual{\traceD{\Omega}(v)}{z}_\Gamma = 0. \end{align*} The penultimate step is due to \eqref{trace_Omega}, and the last identity holds since $\traceD{\Omega}(v)=0$ by definition of $\HDz{\Omega}$. 
This is the direction ``$\Rightarrow$'' in both statements of part (i). Now, for given $z\in\HD{\ensuremath{\mathcal{T}}}$ with $\dual{\traceD{}(v)}{z}_\ensuremath{\mathcal{S}} = 0$ for any $v\in\HDz{\Omega}$ we have in the distributional sense \[ \Delta z(v)=\vdual{z}{\Delta v}=\vdual{\Delta z}{v}_\ensuremath{\mathcal{T}} - \dual{\traceD{}(v)}{z}_\ensuremath{\mathcal{S}} = \vdual{\Delta z}{v}_\ensuremath{\mathcal{T}}\quad\forall v\in\mathcal{D}(\Omega). \] Therefore, $\Delta z\in L_2(\Omega)$, that is, $z\in\HD{\Omega}$. Analogously, if $\dual{\traceD{}(v)}{z}_\ensuremath{\mathcal{S}} = 0$ for any $v\in\HD{\Omega}$, we conclude as before that $z\in\HD{\Omega}$. Then, \[ 0 = \dual{\traceD{}(v)}{z}_\ensuremath{\mathcal{S}} = \vdual{v}{\Delta z} - \vdual{\Delta v}{z} = -\dual{\traceD{\Omega}(z)}{v}_\Gamma \quad\forall v\in\HD{\Omega} \] implies that $\traceD{\Omega}(z)=0$, cf.~\eqref{trace_Omega}. That is, $z\in\HDz{\Omega}$. It remains to prove (ii). Here we follow \cite[Proof of Proposition 3.8(ii)]{FuehrerHN_19_UFK}. By definition of the norms it holds $\sum_{T\in\ensuremath{\mathcal{T}}} \|\wat{\boldsymbol{z}}_T\|_\trD{\partial T}^2 \le \|\wat{\boldsymbol{z}}\|_\trD{\ensuremath{\mathcal{S}}}^2$ for any $\wat{\boldsymbol{z}}=(\wat{\boldsymbol{z}}_T)_T\in\bHD{\ensuremath{\mathcal{S}}}\cup\bHDzz{\ensuremath{\mathcal{S}}}$. To show the other bound let $\wat{\boldsymbol{z}}=(\wat{\boldsymbol{z}}_T)_T\in \bHD{\ensuremath{\mathcal{S}}}\cup\bHDzz{\ensuremath{\mathcal{S}}}$ be given with $z\in\HD{\Omega}$ such that $\traceD{}(z)=\wat{\boldsymbol{z}}$. Furthermore, for any $T\in\ensuremath{\mathcal{T}}$, there exists $\tilde z_T\in\HD{T}$ such that $\traceD{T}(\tilde z_T)=\wat{\boldsymbol{z}}_T$ and $\|\tilde z_T\|_{\Delta,T}=\|\wat{\boldsymbol{z}}_T\|_\trD{\partial T}$. 
Defining $\tilde z\in\HD{\ensuremath{\mathcal{T}}}$ by $\tilde z|_T:=\tilde z_T$ ($T\in\ensuremath{\mathcal{T}}$) we find with part (i) that $\tilde z\in\HD{\Omega}$ with $\traceD{}(\tilde z)=\wat{\boldsymbol{z}}$. Therefore, \begin{align*} \sum_{T\in\ensuremath{\mathcal{T}}} \|\wat{\boldsymbol{z}}_T\|_\trD{\partial T}^2 &= \sum_{T\in\ensuremath{\mathcal{T}}} \|\tilde z_T\|_{\Delta,T}^2 = \|\tilde z\|_\Delta^2 \ge \|\wat{\boldsymbol{z}}\|_\trD{\ensuremath{\mathcal{S}}}^2, \end{align*} which is the remaining bound. \end{proof} \begin{prop} \label{prop_D_trace} The following identity holds: \[ \|\wat{\boldsymbol{z}}\|_{\Delta',\ensuremath{\mathcal{S}}} = \|\wat{\boldsymbol{z}}\|_\trD{\ensuremath{\mathcal{S}}} \quad\forall\wat{\boldsymbol{z}}\in\bHD{\ensuremath{\mathcal{S}}}. \] In particular, \[ \traceD{}:\; \HD{\Omega}\to \bHD{\ensuremath{\mathcal{S}}},\qquad \traceD{}:\; \HDz{\Omega}\to \bHDzz{\ensuremath{\mathcal{S}}} \] have unit norm and $\bHD{\ensuremath{\mathcal{S}}}$, $\bHDzz{\ensuremath{\mathcal{S}}}$ are closed. \end{prop} \begin{proof} With these tools at hand, the proof is standard (cf., e.g., \cite[Theorem~2.3]{CarstensenDG_16_BSF} and \cite[Proposition 3.5]{FuehrerHN_19_UFK}).
By definition of the involved norms, a duality argument in product spaces, Lemma~\ref{la_tr_unity} and Proposition~\ref{prop_D_jump}(ii), one finds that \begin{align*} \|\wat{\boldsymbol{z}}\|_{\Delta',\ensuremath{\mathcal{S}}}^2 &= \Bigl(\sup_{0\not=v\in\HD{\ensuremath{\mathcal{T}}}} \frac{\sum_{T\in\ensuremath{\mathcal{T}}}\dual{\wat{\boldsymbol{z}}_T}{v}_{\partial T}}{\|v\|_{\Delta,\ensuremath{\mathcal{T}}}}\Bigr)^2 = \sum_{T\in\ensuremath{\mathcal{T}}} \sup_{0\not=v\in\HD{T}} \frac{\dual{\wat{\boldsymbol{z}}_T}{v}_{\partial T}^2}{\|v\|_{\Delta,T}^2} \\ &= \sum_{T\in\ensuremath{\mathcal{T}}} \|\wat{\boldsymbol{z}}_T\|_{\Delta',\partial T}^2 = \sum_{T\in\ensuremath{\mathcal{T}}} \|\wat{\boldsymbol{z}}_T\|_\trD{\partial T}^2 = \|\wat{\boldsymbol{z}}\|_\trD{\ensuremath{\mathcal{S}}}^2\qquad\forall\wat{\boldsymbol{z}}\in\bHD{\ensuremath{\mathcal{S}}}. \end{align*} The spaces $\bHD{\ensuremath{\mathcal{S}}}$ and $\bHDzz{\ensuremath{\mathcal{S}}}$ are closed as the images of bounded below operators. \end{proof} \subsection{Traces and jumps, part two} \label{sec_trace2} As it is not straightforward to discretize the range of $\traceD{T}$ (where the trace components are coupled), we proceed to introduce different trace operators and spaces. According to the regularity of $u$ (the solution of \eqref{prob}) and $\Delta u$ (which will be represented by an independent variable) we consider two different cases. \subsubsection{Trace of $u$.} Let us start by defining a trace operator that takes $H^2(T)$ instead of $\HD{T}$ as domain ($T\in\ensuremath{\mathcal{T}}$). It is the restriction of $\traceD{T}$, cf.~\eqref{trDT}, \[ \traceDt{T}:\; \left\{\begin{array}{cll} H^2(T) & \to & \HD{T}',\\ v & \mapsto & \traceDt{T}(v):=\traceD{T}(v) \end{array}\right.. \] As before, we have the duality relation \[ \dual{\traceDt{T}(v)}{z}_{\partial T} = - \dual{\traceD{T}(z)}{v}_{\partial T} \quad\forall v\in H^2(T),\ z\in\HD{T}.
\] The corresponding collective trace operator (including boundary conditions) is \[ \traceDt{}:\; \left\{\begin{array}{cll} H^2_0(\Omega) & \to & \HD{\ensuremath{\mathcal{T}}}',\\ v & \mapsto & \traceDt{}(v) := (\traceDt{T}(v))_T \end{array}\right. \] with duality \begin{align} \label{trDt_duality} \dual{\traceDt{}(v)}{z}_\ensuremath{\mathcal{S}} &:= \sum_{T\in\ensuremath{\mathcal{T}}} \dual{\traceDt{T}(v)}{z}_{\partial T} \quad(v\in H^2_0(\Omega),\ z\in\HD{\ensuremath{\mathcal{T}}}). \end{align} The ranges of these operators are denoted by \[ \bHDt{\partial T} := \traceDt{T}(H^2(T)) \quad (T\in\ensuremath{\mathcal{T}}) \quad\text{and}\quad \bHDtzz{\ensuremath{\mathcal{S}}} := \traceDt{}(H^2_0(\Omega)). \] As before, the local and global trace spaces are equipped with canonical trace norms, \begin{align*} \|\wat{\boldsymbol{v}}\|_\trt{\partial T} &:= \inf\{\|v\|_{2,T};\; v\in H^2(T),\ \traceD{T}(v)=\wat{\boldsymbol{v}}\} &&(\wat{\boldsymbol{v}}\in\bHDt{\partial T},\ T\in\ensuremath{\mathcal{T}}),\\ \|\wat{\boldsymbol{v}}\|_\trt{\ensuremath{\mathcal{S}}} &:= \inf\{\|v\|_2;\; v\in H^2_0(\Omega),\ \traceD{}(v)=\wat{\boldsymbol{v}}\} &&(\wat{\boldsymbol{v}}\in\bHDtzz{\ensuremath{\mathcal{S}}}), \end{align*} and alternative norms are induced by the respective duality, \begin{align*} \|\wat{\boldsymbol{v}}\|_{\Delta',\partial T} &:= \sup_{0\not=z\in\HD{T}} \frac{\dual{\wat{\boldsymbol{v}}}{z}_{\partial T}}{\|z\|_{\Delta,T}} && (\wat{\boldsymbol{v}}\in \bHDt{\partial T},\ T\in\ensuremath{\mathcal{T}}),\\ \|\wat{\boldsymbol{v}}\|_{\Delta',\ensuremath{\mathcal{S}}} &:= \sup_{0\not=z\in\HD{\ensuremath{\mathcal{T}}}} \frac{\dual{\wat{\boldsymbol{v}}}{z}_{\ensuremath{\mathcal{S}}}}{\|z\|_{\Delta,\ensuremath{\mathcal{T}}}} && (\wat{\boldsymbol{v}}\in \bHDtzz{\ensuremath{\mathcal{S}}}). 
\end{align*} It goes without saying that the dualities on $\partial T$ and $\ensuremath{\mathcal{S}}$ are defined by the corresponding trace operations, \eqref{trDT} (generically for any local space) and \eqref{trDt_duality} on $\ensuremath{\mathcal{S}}$. For instance, the duality $\dual{\wat{\boldsymbol{v}}}{z}_{\partial T}$ between $\wat{\boldsymbol{v}}\in\bHDt{\partial T}$ and $z\in\HD{T}$ is $\vdual{\Delta z}{v}_T-\vdual{z}{\Delta v}_T$ with arbitrary $v\in H^2(T)$ such that $\traceDt{T}(v)=\wat{\boldsymbol{v}}$. It is immediate that all the trace operators are bounded, both with respect to the respective canonical trace norm and the respective duality norm. \begin{remark} \label{rem_tr} The trace operator $\traceDt{T}$ gives rise to two components, $\traceDt{T}(v)=(v|_{\partial T},\partial_\ensuremath{\mathbf{n}} v|_{\partial T})$ for $v\in H^2(T)$. On a non-smooth boundary $\partial T$, they are generally not independent. That is, this trace operator does not map surjectively onto the product space of separate traces, $v|_{\partial T}$ and $\partial_\ensuremath{\mathbf{n}} v|_{\partial T}$, cf.~Grisvard~\cite{Grisvard_85_EPN}. In \cite{CostabelD_96_IBS}, Costabel and Dauge discuss this subject including dual spaces. \end{remark} \begin{prop} \label{prop_Dt_trace} The following identity holds: \[ \|\wat{\boldsymbol{v}}\|_{\Delta',\ensuremath{\mathcal{S}}} = \|\wat{\boldsymbol{v}}\|_\trt{\ensuremath{\mathcal{S}}} \quad\forall\wat{\boldsymbol{v}}\in\bHDtzz{\ensuremath{\mathcal{S}}}. \] In particular, \[ \traceDt{}:\; H^2_0(\Omega)\to \bHDtzz{\ensuremath{\mathcal{S}}} \] has unit norm and $\bHDtzz{\ensuremath{\mathcal{S}}}$ is closed. \end{prop} \begin{proof} Let $\wat{\boldsymbol{v}}=(\wat{\boldsymbol{v}}_T)_T\in\bHDtzz{\ensuremath{\mathcal{S}}}$ be given. By definition of the norms, one sees that $\|\wat{\boldsymbol{v}}\|_{\Delta',\ensuremath{\mathcal{S}}}\le \|v\|_\Delta$ for any $v\in H^2_0(\Omega)$ with $\traceDt{}(v)=\wat{\boldsymbol{v}}$.
Since \begin{equation} \label{eq_Dt} \|\Delta v\|=\|\boldsymbol{\varepsilon}\nabla v\| \quad\forall v\in H^2_0(\Omega) \end{equation} (cf.~\cite[(1.2.8)]{Ciarlet}) we conclude that $\|\wat{\boldsymbol{v}}\|_{\Delta',\ensuremath{\mathcal{S}}} \le \|\wat{\boldsymbol{v}}\|_\trt{\ensuremath{\mathcal{S}}}$. To show the other inequality, we define $v_T\in H^2(T)$ ($T\in\ensuremath{\mathcal{T}}$) as the solution to \[ \bigl({\rm div\,}{\rm\bf div\,}\boldsymbol{\varepsilon}\nabla v_T + v_T =\bigr)\ \Delta^2 v_T+v_T = 0\quad\text{in}\quad T,\qquad \traceDt{T}(v_T)=\wat{\boldsymbol{v}}_T, \] and introduce functions $v$, $z$ with $v|_T=v_T$ and $z|_T=\Delta v_T$ ($T\in\ensuremath{\mathcal{T}}$). We conclude that $v\in H^2_0(\Omega)$ and $\|v\|_2=\|\wat{\boldsymbol{v}}\|_\trt{\ensuremath{\mathcal{S}}}$. Furthermore, since $\Delta z_T=-v_T$, $z\in\HD{\ensuremath{\mathcal{T}}}$, and also using relation \eqref{eq_Dt} we find that \[ \|z\|_{\Delta,\ensuremath{\mathcal{T}}}^2 = \sum_{T\in\ensuremath{\mathcal{T}}} \|\Delta v_T\|_T^2 + \|v_T\|_T^2 = \|v\|_\Delta^2=\|v\|_2^2. \] Finally, we observe that \begin{align*} \|v\|_2^2 &= \|v\|_\Delta^2 = \sum_{T\in\ensuremath{\mathcal{T}}} \vdual{\Delta v_T}{\Delta v_T}_T + \vdual{v_T}{v_T}_T = \sum_{T\in\ensuremath{\mathcal{T}}} -\dual{\traceDt{T}(v_T)}{\Delta v_T}_{\partial T} = -\dual{\wat{\boldsymbol{v}}}{z}_\ensuremath{\mathcal{S}}. \end{align*} Here, we made use of the relation $\Delta^2 v_T+v_T = 0$. Collecting the findings we conclude that \[ \|\wat{\boldsymbol{v}}\|_\trt{\ensuremath{\mathcal{S}}}^2 = \|v\|_2^2 = \|z\|_{\Delta,\ensuremath{\mathcal{T}}}^2 = -\dual{\wat{\boldsymbol{v}}}{z}_\ensuremath{\mathcal{S}}. \] This yields \[ \|\wat{\boldsymbol{v}}\|_\trt{\ensuremath{\mathcal{S}}} \le \|\wat{\boldsymbol{v}}\|_{\Delta',\ensuremath{\mathcal{S}}} \] and finishes the proof. 
\end{proof} \begin{remark} \label{rem_tr_local} Comparing the results for our trace operators $\traceD{}$ (Proposition~\ref{prop_D_trace}) and $\traceDt{}$ (Proposition~\ref{prop_Dt_trace}) one notices that there is no result for the local operator $\traceDt{T}$ that corresponds to Lemma~\ref{la_tr_unity}. The reason for the lack of such a local property is that relation~\eqref{eq_Dt} requires homogeneous boundary conditions. \end{remark} \begin{prop} \label{prop_tDt_jump} For $z\in\HD{\ensuremath{\mathcal{T}}}$ it holds \[ z\in\HD{\Omega} \quad\Leftrightarrow\quad \dual{\traceDt{}(v)}{z}_\ensuremath{\mathcal{S}} = 0\quad\forall v\in H^2_0(\Omega). \] \end{prop} \begin{proof} The proof is analogous to that of Proposition~\ref{prop_D_jump}(i). The direction ``$\Rightarrow$'' follows by integration by parts and density arguments. The other direction is proved by taking $z\in\HD{\ensuremath{\mathcal{T}}}$ with $\dual{\traceDt{}(v)}{z}_\ensuremath{\mathcal{S}} = 0$ for any $v\in H^2_0(\Omega)$, and concluding that $\Delta z\in L_2(\Omega)$ so that $z\in\HD{\Omega}$. \end{proof} \subsubsection{Trace of $\Delta u$.} Now let us turn to possible trace operations for $\Delta u$ ($u$ representing a function with a regularity according to the solution of \eqref{prob}). Obviously, since $f\in L_2(\Omega)$ by assumption, $\Delta u\in\HD{\Omega}$ by \eqref{p1}. That is why we have considered the trace operator $\traceD{}$ in \S\ref{sec_trace1}. Since we have restricted the domain for the definition of $\traceDt{}$, duality considerations reveal that we now have to consider extended traces by testing with $H^2$-functions. 
This seems to force us to define an operator \begin{equation} \label{trtDT} \tracetD{T}:\; \left\{\begin{array}{cll} \HD{T} & \to & H^2(T)',\\ z & \mapsto & \tracetD{T}(z):=\traceD{T}(z) \end{array}\right.\quad (T\in\ensuremath{\mathcal{T}}) \end{equation} with corresponding collective trace operator $\tracetD{}$, and with trace norms and norms defined by duality with $H^2$. Again, this operator gives rise to two components, \[ (\cdot)|_{\partial T}:\;\left\{\begin{array}{clc} \HD{T} & \to & \{z\in H^2(T);\; \partial_\ensuremath{\mathbf{n}} z|_{\partial T}=0\}'\\ v & \mapsto & z\mapsto \dual{\tracetD{T}(v)}{z}_{\partial T} \end{array}\right.\qquad (T\in\ensuremath{\mathcal{T}}) \] and \[ (\partial_\ensuremath{\mathbf{n}}\,\cdot)|_{\partial T}:\;\left\{\begin{array}{clc} \HD{T} & \to & \{z\in H^2(T);\; z|_{\partial T}=0\}'\\ v & \mapsto & z\mapsto -\dual{\tracetD{T}(v)}{z}_{\partial T} \end{array}\right.\qquad (T\in\ensuremath{\mathcal{T}}), \] cf.~\eqref{trDT_classical}. For a smooth boundary $\partial T$, the two components are independent, since in that case the operator $\tracetD{T}$ maps $\HD{T}$ onto $H^{-3/2}(\partial T)\times H^{-1/2}(\partial T):=H^{3/2}(\partial T)'\times H^{1/2}(\partial T)'$. Here, $H^{3/2}(\partial T)$ denotes the space of traces onto $\partial T$ of $H^2(T)$-functions, and $H^{1/2}(\partial T)$ is that of the normal derivatives. Glowinski and Pironneau give details in \cite[Props 2.3, 2.4]{GlowinskiP_79_NMF} and refer to Lions and Magenes for a proof, see~\cite[Chapter 2: Theorem 6.5, Section 9.8 (p.~213)]{LionsMagenes}. However, on a polygonal element $T$, the trace operator is not surjective onto $H^{-3/2}(\partial T)\times H^{-1/2}(\partial T)$. This has been indicated by Costabel and Dauge in \cite{CostabelD_96_IBS}. Furthermore, it turns out that in general the operator $\tracetD{T}$ is not bounded below. We give a counterexample in the appendix. For these reasons we avoid employing the seemingly obvious choice \eqref{trtDT}.
Instead, we take a trace operator defined in \cite{FuehrerHN_19_UFK}. It can be interpreted as an extension of $\tracetD{T}$ to a larger domain, see Lemma~\ref{la_trtD} below. Let us repeat some definitions and needed properties from \cite{FuehrerHN_19_UFK}. We introduce trace operators $\traceDD{T}:\;\HdDiv{T}\to H^2(T)'$ for $T\in\ensuremath{\mathcal{T}}$ by \begin{align} \label{trT_dd} \dual{\traceDD{T}(\mathbf{\Theta})}{z}_{\partial T} := \vdual{{\rm div\,}{\rm\bf div\,}\mathbf{\Theta}}{z}_T - \vdual{\mathbf{\Theta}}{\boldsymbol{\varepsilon}\nabla z}_T, \end{align} with the collective variant defined as \[ \traceDD{}:\; \left\{\begin{array}{cll} \HdDiv{\Omega} & \to & H^2(\ensuremath{\mathcal{T}})',\\ \mathbf{\Theta} & \mapsto & \traceDD{}(\mathbf{\Theta}) := (\traceDD{T}(\mathbf{\Theta}))_T \end{array}\right. \] with duality \begin{align} \label{tr_dd} \dual{\traceDD{}(\mathbf{\Theta})}{z}_\ensuremath{\mathcal{S}} := \sum_{T\in\ensuremath{\mathcal{T}}} \dual{\traceDD{T}(\mathbf{\Theta})}{z}_{\partial T}. \end{align} The range of $\traceDD{}$ is denoted by \begin{align*} \ensuremath{\mathbf{H}}^{-3/2,-1/2}(\ensuremath{\mathcal{S}}) := \traceDD{}(\HdDiv{\Omega}) \end{align*} and provided with the trace norm \begin{align*} \|\wat{\boldsymbol{q}}\|_\trddiv{\ensuremath{\mathcal{S}}} &:= \inf \Bigl\{\|\mathbf{\Theta}\|_{{\rm div\,}{\rm\bf div\,}};\; \mathbf{\Theta}\in \HdDiv{\Omega},\ \traceDD{}(\mathbf{\Theta})=\wat{\boldsymbol{q}}\Bigr\} \end{align*} or the duality norm \begin{align*} \|\wat{\boldsymbol{q}}\|_{-3/2,-1/2,\ensuremath{\mathcal{S}}} &:= \sup_{0\not=z\in H^2(\ensuremath{\mathcal{T}})} \frac{\dual{\wat{\boldsymbol{q}}}{z}_\ensuremath{\mathcal{S}}}{\|z\|_{2,\ensuremath{\mathcal{T}}}}, \quad \wat{\boldsymbol{q}}\in \ensuremath{\mathbf{H}}^{-3/2,-1/2}(\ensuremath{\mathcal{S}}). 
\end{align*} Here, the duality is defined as \begin{align} \label{tr_dd_dual} \dual{\wat{\boldsymbol{q}}}{z}_\ensuremath{\mathcal{S}} := \sum_{T\in\ensuremath{\mathcal{T}}} \dual{\wat{\boldsymbol{q}}_T}{z}_{\partial T} \end{align} with \[ \dual{\wat{\boldsymbol{q}}}{z}_{\partial T} := \dual{\traceDD{T}(\mathbf{\Theta})}{z}_{\partial T} \quad\text{for } \mathbf{\Theta}\in \HdDiv{T}\text{ with } \traceDD{T}(\mathbf{\Theta}) = \wat{\boldsymbol{q}}=(\wat{\boldsymbol{q}}_T)_T, \] as in \eqref{trT_dd} and \eqref{tr_dd}. \begin{prop}[{\cite[Proposition 5]{FuehrerHN_19_UFK}}] \label{prop_dd_trace} It holds the identity \[ \|\wat{\boldsymbol{q}}\|_{-3/2,-1/2,\ensuremath{\mathcal{S}}} = \|\wat{\boldsymbol{q}}\|_\trddiv{\ensuremath{\mathcal{S}}} \quad\forall\wat{\boldsymbol{q}}\in\ensuremath{\mathbf{H}}^{-3/2,-1/2}(\ensuremath{\mathcal{S}}). \] In particular, \[ \traceDD{}:\; \HdDiv{\Omega}\to \ensuremath{\mathbf{H}}^{-3/2,-1/2}(\ensuremath{\mathcal{S}}) \] has unit norm and $\ensuremath{\mathbf{H}}^{-3/2,-1/2}(\ensuremath{\mathcal{S}})$ is closed. \end{prop} \begin{prop}[{\cite[Proposition 8]{FuehrerHN_19_UFK}}] \label{prop_gg_jump} For $z\in H^2(\ensuremath{\mathcal{T}})$ the following equivalence holds, \[ z\in H^2_0(\Omega)\quad\Leftrightarrow\quad \dual{\wat{\boldsymbol{q}}}{z}_\ensuremath{\mathcal{S}}=0 \quad\forall\wat{\boldsymbol{q}}\in \ensuremath{\mathbf{H}}^{-3/2,-1/2}(\ensuremath{\mathcal{S}}). \] \end{prop} Now, the connection between $\tracetD{}$ and $\traceDD{}$ is as follows. 
For given $\sigma\in\HD{\Omega}$, we have $\mathbb{D}(\sigma):=\begin{pmatrix}\sigma & 0\\ 0 & \sigma\end{pmatrix}\in\HdDiv{\Omega}$ since ${\rm div\,}{\rm\bf div\,}\mathbb{D}(\sigma)=\Delta\sigma$, and one concludes that
\begin{align*}
\dual{\tracetD{}(\sigma)}{v}_\ensuremath{\mathcal{S}} &= \vdual{\sigma}{\Delta v}_\ensuremath{\mathcal{T}} - \vdual{\Delta\sigma}{v} \\
&= \vdual{\mathbb{D}(\sigma)}{\boldsymbol{\varepsilon}\nabla v}_\ensuremath{\mathcal{T}} - \vdual{{\rm div\,}{\rm\bf div\,}\mathbb{D}(\sigma)}{v} = -\dual{\traceDD{}(\mathbb{D}(\sigma))}{v}_\ensuremath{\mathcal{S}} \quad\forall v\in H^2(\ensuremath{\mathcal{T}}).
\end{align*}
It is clear that $\mathbb{D}:\;\HD{\Omega}\to\HdDiv{\Omega}$ is not surjective. Furthermore, since traces of images of $\mathbb{D}$ do not have jump terms at vertices of $\ensuremath{\mathcal{T}}$, which are present in the case of traces of $\HdDiv{\Omega}$ (see~\cite{FuehrerHN_19_UFK}), it follows that $\tracetD{}$ does not map surjectively onto $\ensuremath{\mathbf{H}}^{-3/2,-1/2}(\ensuremath{\mathcal{S}})$. We record this observation.
\begin{lemma} \label{la_trtD}
\[
\tracetD{} = -\traceDD{}\circ\mathbb{D}:\; \HD{\Omega} \to \ensuremath{\mathbf{H}}^{-3/2,-1/2}(\ensuremath{\mathcal{S}})
\]
is bounded but not surjective. In particular, Dirac distributions at boundary points, $\delta_e:\;z\mapsto z|_T(e)$ ($e\in\Gamma\cap\overline{T}$, $T\in\ensuremath{\mathcal{T}}$, $z\in H^2(\ensuremath{\mathcal{T}})$ with $\mathrm{supp}(z)=\overline{T}$) are elements of $\ensuremath{\mathbf{H}}^{-3/2,-1/2}(\ensuremath{\mathcal{S}})$ but not of $\tracetD{}(\HD{\Omega})$.
\end{lemma}
The fact that $\delta_e\not\in\tracetD{}(\HD{\Omega})$ is illustrated in Appendix~\ref{sec_app2}.
\section{First variational formulation and DPG approximation} \label{sec_VF1}
Let us continue to develop a variational formulation of \eqref{prob}.
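As a brief aside, the pointwise identity ${\rm div\,}{\rm\bf div\,}\mathbb{D}(\sigma)=\Delta\sigma$ used in Lemma~\ref{la_trtD} is elementary; the following symbolic sketch (a sanity check only, not part of the analysis) confirms it in two dimensions for a generic smooth $\sigma$.

```python
import sympy as sp

x, y = sp.symbols("x y")
sigma = sp.Function("sigma")(x, y)          # generic smooth scalar field

# D(sigma) = diag(sigma, sigma)
Theta = sp.Matrix([[sigma, 0], [0, sigma]])

# div div Theta = sum_{i,j} d_i d_j Theta[i, j]
coords = (x, y)
div_div = sum(sp.diff(Theta[i, j], coords[i], coords[j])
              for i in range(2) for j in range(2))

# compare with the Laplacian of sigma
laplace = sp.diff(sigma, x, 2) + sp.diff(sigma, y, 2)
assert sp.simplify(div_div - laplace) == 0  # identity confirmed
```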
Considering the trace operator $\traceD{}$ from \S\ref{sec_trace1}, our preliminary formulation \eqref{VFa} now reads \[ \vdual{u}{\Delta \tau}_\ensuremath{\mathcal{T}} + \vdual{\sigma}{\Delta v-\tau}_\ensuremath{\mathcal{T}} - \dual{\traceD{}(\sigma)}{v}_\ensuremath{\mathcal{S}} - \dual{\traceD{}(u)}{\tau}_\ensuremath{\mathcal{S}} = \vdual{f}{v}. \] In this case, test functions $v$ and $\tau$ come from $\HD{\ensuremath{\mathcal{T}}}$. Therefore, introducing independent trace variables $\wat{\boldsymbol{\sigma}}:=\traceD{}(\sigma)$, $\wat{\boldsymbol{u}}:=\traceD{}(u)$, and spaces \[ \ensuremath{\mathcal{U}}_1 := L_2(\Omega)\times L_2(\Omega)\times \bHDzz{\ensuremath{\mathcal{S}}} \times \bHD{\ensuremath{\mathcal{S}}},\qquad \ensuremath{\mathcal{V}}_1 := \HD{\ensuremath{\mathcal{T}}}\times\HD{\ensuremath{\mathcal{T}}} \] with respective norms \begin{align*} \|(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})\|_{\ensuremath{\mathcal{U}}_1}^2 &:= \|u\|^2 + \|\sigma\|^2 + \|\wat{\boldsymbol{u}}\|_\trD{\ensuremath{\mathcal{S}}}^2 + \|\wat{\boldsymbol{\sigma}}\|_\trD{\ensuremath{\mathcal{S}}}^2,\qquad \|(v,\tau)\|_{\ensuremath{\mathcal{V}}_1}^2 := \|v\|_{\Delta,\ensuremath{\mathcal{T}}}^2 + \|\tau\|_{\Delta,\ensuremath{\mathcal{T}}}^2, \end{align*} our first ultraweak variational formulation of \eqref{prob} is \begin{align} \label{VF1} (u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})\in \ensuremath{\mathcal{U}}_1:\quad b_1(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}};v,\tau) = L(v,\tau) \quad\forall (v,\tau)\in\ensuremath{\mathcal{V}}_1, \end{align} in strong form written as $B_1(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})=L\in\ensuremath{\mathcal{V}}_1'$. 
Here,
\begin{align} \label{b1}
b_1(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}};v,\tau) := \vdual{u}{\Delta \tau}_\ensuremath{\mathcal{T}} + \vdual{\sigma}{\Delta v-\tau}_\ensuremath{\mathcal{T}} - \dual{\wat{\boldsymbol{u}}}{\tau}_\ensuremath{\mathcal{S}} - \dual{\wat{\boldsymbol{\sigma}}}{v}_\ensuremath{\mathcal{S}},
\end{align}
\( L(v,\tau) := \vdual{f}{v}, \) and $\dual{\cdot}{\cdot}_\ensuremath{\mathcal{S}}$ refers to the duality between $\bHD{\ensuremath{\mathcal{S}}}$ (including $\bHDzz{\ensuremath{\mathcal{S}}}$) and $\HD{\ensuremath{\mathcal{T}}}$ implied by \eqref{trD_duality}.
\begin{theorem} \label{thm_stab1}
The operator $B_1:\;\ensuremath{\mathcal{U}}_1\to\ensuremath{\mathcal{V}}_1'$ is continuous and bounded below. In particular, for any function $f\in L_2(\Omega)$, there exists a unique and stable solution $(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})\in \ensuremath{\mathcal{U}}_1$ to \eqref{VF1},
\[
\|u\| + \|\sigma\| + \|\wat{\boldsymbol{u}}\|_{\Delta,\ensuremath{\mathcal{S}}} + \|\wat{\boldsymbol{\sigma}}\|_{\Delta,\ensuremath{\mathcal{S}}} \lesssim \|f\|
\]
with a hidden constant that is independent of $f$ and $\ensuremath{\mathcal{T}}$. Furthermore, \eqref{prob} and \eqref{VF1} are equivalent: If $u\in H^2_0(\Omega)$ solves \eqref{prob} then $\ensuremath{\mathbf{u}}:=(u,\Delta u,\traceD{}(u),\traceD{}(\Delta u))$ solves \eqref{VF1}; and if $\ensuremath{\mathbf{u}}=(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})$ solves \eqref{VF1} then $u$ is an element of $H^2_0(\Omega)$ and solves \eqref{prob}.
\end{theorem}
For a proof of this theorem we refer to Section~\ref{sec_proofs}. A DPG approximation with optimal test functions based on formulation \eqref{VF1} is as follows.
We select discrete spaces $\ensuremath{\mathcal{U}}_{1,h}\subset\ensuremath{\mathcal{U}}_1$ and test spaces $\ensuremath{\mathcal{V}}_{1,h}:={\rm T}_1(\ensuremath{\mathcal{U}}_{1,h})\subset\ensuremath{\mathcal{V}}_1$ where ${\rm T}_1:\;\ensuremath{\mathcal{U}}_1\to\ensuremath{\mathcal{V}}_1$ is the \emph{trial-to-test operator} defined by
\[
\ip{{\rm T}_1(\ensuremath{\mathbf{u}})}{\ensuremath{\mathbf{v}}}_{\ensuremath{\mathcal{V}}_1} = b_1(\ensuremath{\mathbf{u}},\ensuremath{\mathbf{v}})\quad\forall\ensuremath{\mathbf{v}}\in\ensuremath{\mathcal{V}}_1.
\]
Here, $\ip{\cdot}{\cdot}_{\ensuremath{\mathcal{V}}_1}$ is the inner product in $\ensuremath{\mathcal{V}}_1$ that generates the selected norm $\bigl(\|\cdot\|_{\Delta,\ensuremath{\mathcal{T}}}^2+\|\cdot\|_{\Delta,\ensuremath{\mathcal{T}}}^2\bigr)^{1/2}$. Then, an approximation $\ensuremath{\mathbf{u}}_h=(u_h,\sigma_h,\wat{\boldsymbol{u}}_h,\wat{\boldsymbol{\sigma}}_h)\in\ensuremath{\mathcal{U}}_{1,h}$ is defined as the solution to
\begin{align} \label{DPG1}
b_1(\ensuremath{\mathbf{u}}_h,\ensuremath{\mathbf{v}}) = L(\ensuremath{\mathbf{v}}) \quad\forall\ensuremath{\mathbf{v}}\in\ensuremath{\mathcal{V}}_{1,h}.
\end{align}
Being a minimum residual method, it delivers the best approximation of the exact solution in the residual norm $\|B_1(\cdot)\|_{\ensuremath{\mathcal{V}}_1'}$, cf., e.g.,~\cite{DemkowiczG_11_ADM}. Using the equivalence of the norms $\|B_1(\cdot)\|_{\ensuremath{\mathcal{V}}_1'}$ and $\|\cdot\|_{\ensuremath{\mathcal{U}}_1}$ stated by Theorem~\ref{thm_stab1}, we obtain its quasi-optimal convergence in the latter norm.
\begin{theorem} \label{thm_DPG1}
Let $f\in L_2(\Omega)$ be given and let $\ensuremath{\mathbf{u}}$ be the solution of \eqref{VF1}. For any finite-dimensional subspace $\ensuremath{\mathcal{U}}_{1,h}\subset\ensuremath{\mathcal{U}}_1$ there exists a unique solution $\ensuremath{\mathbf{u}}_h\in\ensuremath{\mathcal{U}}_{1,h}$ to \eqref{DPG1}.
It satisfies the quasi-optimal error estimate
\[
\|\ensuremath{\mathbf{u}}-\ensuremath{\mathbf{u}}_h\|_{\ensuremath{\mathcal{U}}_1} \lesssim \|\ensuremath{\mathbf{u}}-\ensuremath{\mathbf{w}}\|_{\ensuremath{\mathcal{U}}_1} \quad\forall\ensuremath{\mathbf{w}}\in\ensuremath{\mathcal{U}}_{1,h}
\]
with a hidden constant that is independent of $f$, $\ensuremath{\mathcal{T}}$ and $\ensuremath{\mathcal{U}}_{1,h}$.
\end{theorem}
\section{Second variational formulation and DPG approximation} \label{sec_VF2}
Let us reconsider the preliminary formulation \eqref{VFa}. We make use of the regularity $u\in H^2_0(\Omega)$. The trace variable $\wat{\boldsymbol{u}}$ then represents $\traceDt{}(u)\in\bHDtzz{\ensuremath{\mathcal{S}}}$ instead of $\traceD{}(u)\in\bHDzz{\ensuremath{\mathcal{S}}}$. We then use test functions $v\in H^2(\ensuremath{\mathcal{T}})$ instead of $v\in\HD{\ensuremath{\mathcal{T}}}$. This means that we have a trace $\traceDD{}\circ\mathbb{D}(\Delta u)\in\ensuremath{\mathbf{H}}^{-3/2,-1/2}(\ensuremath{\mathcal{S}})$ (cf.~Lemma~\ref{la_trtD}) rather than $\traceD{}(\Delta u)\in\bHD{\ensuremath{\mathcal{S}}}$. This corresponds to using the spaces
\[
\ensuremath{\mathcal{U}}_2 := L_2(\Omega)\times L_2(\Omega)\times \bHDtzz{\ensuremath{\mathcal{S}}} \times \ensuremath{\mathbf{H}}^{-3/2,-1/2}(\ensuremath{\mathcal{S}}),\qquad \ensuremath{\mathcal{V}}_2 := H^2(\ensuremath{\mathcal{T}})\times\HD{\ensuremath{\mathcal{T}}}
\]
with respective norms
\begin{align*}
\|(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})\|_{\ensuremath{\mathcal{U}}_2}^2 &:= \|u\|^2 + \|\sigma\|^2 + \|\wat{\boldsymbol{u}}\|_{2,\ensuremath{\mathcal{S}}}^2 + \|\wat{\boldsymbol{\sigma}}\|_\trddiv{\ensuremath{\mathcal{S}}}^2,\qquad \|(v,\tau)\|_{\ensuremath{\mathcal{V}}_2}^2 := \|v\|_{2,\ensuremath{\mathcal{T}}}^2 + \|\tau\|_{\Delta,\ensuremath{\mathcal{T}}}^2.
\end{align*}
The corresponding ultraweak variational formulation is:
\begin{align} \label{VF2}
(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})\in \ensuremath{\mathcal{U}}_2:\quad b_2(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}};v,\tau) = L(v,\tau) \quad\forall (v,\tau)\in\ensuremath{\mathcal{V}}_2.
\end{align}
Here, the bilinear form $b_2$ is defined similarly to $b_1$ in \eqref{b1}, namely
\begin{align*}
b_2(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}};v,\tau) := \vdual{u}{\Delta \tau}_\ensuremath{\mathcal{T}} + \vdual{\sigma}{\Delta v-\tau}_\ensuremath{\mathcal{T}} - \dual{\wat{\boldsymbol{u}}}{\tau}_\ensuremath{\mathcal{S}} + \dual{\wat{\boldsymbol{\sigma}}}{v}_\ensuremath{\mathcal{S}}.
\end{align*}
Specifically, the duality $\dual{\wat{\boldsymbol{u}}}{\tau}_\ensuremath{\mathcal{S}}$ is the one induced by \eqref{trDt_duality}, as before, and $\dual{\wat{\boldsymbol{\sigma}}}{v}_\ensuremath{\mathcal{S}}$ is defined by \eqref{tr_dd_dual}. For consistency with the trace definitions we have changed the sign in front of the latter duality, cf.~Lemma~\ref{la_trtD}. We refer to the strong form of \eqref{VF2} as $B_2(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})=L\in\ensuremath{\mathcal{V}}_2'$.
\begin{theorem} \label{thm_stab2}
The operator $B_2:\;\ensuremath{\mathcal{U}}_2\to\ensuremath{\mathcal{V}}_2'$ is continuous and bounded below. In particular, for any function $f\in L_2(\Omega)$, there exists a unique solution $(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})$ of \eqref{VF2}. It satisfies the bound
\[
\|u\| + \|\sigma\| + \|\wat{\boldsymbol{u}}\|_{2,\ensuremath{\mathcal{S}}} + \|\wat{\boldsymbol{\sigma}}\|_{\trddiv{\ensuremath{\mathcal{S}}}} \lesssim \|f\|
\]
with a hidden constant that is independent of $f$ and $\ensuremath{\mathcal{T}}$.
Furthermore, \eqref{prob} and \eqref{VF2} are equivalent: If $u\in H^2_0(\Omega)$ solves \eqref{prob} then $\ensuremath{\mathbf{u}}:=(u,\Delta u,\traceDt{}(u),\traceDD{}(\mathbb{D}(\Delta u)))$ solves \eqref{VF2}; and if $\ensuremath{\mathbf{u}}=(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})$ solves \eqref{VF2} then $u\in H^2_0(\Omega)$ solves \eqref{prob}.
\end{theorem}
A proof of this result is given in Section~\ref{sec_proofs}. The corresponding DPG approximation uses discrete spaces $\ensuremath{\mathcal{U}}_{2,h}\subset\ensuremath{\mathcal{U}}_2$ and test spaces $\ensuremath{\mathcal{V}}_{2,h}:={\rm T}_2(\ensuremath{\mathcal{U}}_{2,h})\subset\ensuremath{\mathcal{V}}_2$ where the trial-to-test operator ${\rm T}_2:\;\ensuremath{\mathcal{U}}_2\to\ensuremath{\mathcal{V}}_2$ is defined by
\[
\ip{{\rm T}_2(\ensuremath{\mathbf{u}})}{\ensuremath{\mathbf{v}}}_{\ensuremath{\mathcal{V}}_2} = b_2(\ensuremath{\mathbf{u}},\ensuremath{\mathbf{v}})\quad\forall\ensuremath{\mathbf{v}}\in\ensuremath{\mathcal{V}}_2
\]
with inner product $\ip{\cdot}{\cdot}_{\ensuremath{\mathcal{V}}_2}$ that induces the norm $\bigl(\|\cdot\|_{2,\ensuremath{\mathcal{T}}}^2+\|\cdot\|_{\Delta,\ensuremath{\mathcal{T}}}^2\bigr)^{1/2}$ in $\ensuremath{\mathcal{V}}_2$. The approximation $\ensuremath{\mathbf{u}}_h=(u_h,\sigma_h,\wat{\boldsymbol{u}}_h,\wat{\boldsymbol{\sigma}}_h)\in\ensuremath{\mathcal{U}}_{2,h}$ is defined as before,
\begin{align} \label{DPG2}
b_2(\ensuremath{\mathbf{u}}_h,\ensuremath{\mathbf{v}}) = L(\ensuremath{\mathbf{v}}) \quad\forall\ensuremath{\mathbf{v}}\in\ensuremath{\mathcal{V}}_{2,h}.
\end{align}
Again, this scheme converges quasi-optimally, cf.~Theorem~\ref{thm_DPG1}.
\begin{theorem} \label{thm_DPG2}
Let $f\in L_2(\Omega)$ be given and let $\ensuremath{\mathbf{u}}$ be the solution of \eqref{VF2}.
For any finite-dimensional subspace $\ensuremath{\mathcal{U}}_{2,h}\subset\ensuremath{\mathcal{U}}_2$ there exists a unique DPG approximation $\ensuremath{\mathbf{u}}_h=(u_h,\sigma_h,\wat{\boldsymbol{u}}_h,\wat{\boldsymbol{\sigma}}_h)\in\ensuremath{\mathcal{U}}_{2,h}$ defined by \eqref{DPG2}. It satisfies the quasi-optimal error estimate
\[
\|\ensuremath{\mathbf{u}}-\ensuremath{\mathbf{u}}_h\|_{\ensuremath{\mathcal{U}}_2} \lesssim \|\ensuremath{\mathbf{u}}-\ensuremath{\mathbf{w}}\|_{\ensuremath{\mathcal{U}}_2} \quad\forall\ensuremath{\mathbf{w}}\in\ensuremath{\mathcal{U}}_{2,h}
\]
with a hidden constant that is independent of $f$, $\ensuremath{\mathcal{T}}$ and $\ensuremath{\mathcal{U}}_{2,h}$.
\end{theorem}
\section{Proofs of Theorems~\ref{thm_stab1} and~\ref{thm_stab2}} \label{sec_proofs}
We start by showing unique and stable solvability of the (self-)adjoint problem to \eqref{prob}, with continuous spaces.
\begin{lemma} \label{la_adj2}
For given $g_1,g_2\in L_2(\Omega)$, there exists a unique solution $(v,\tau)\in H^2_0(\Omega)\times\HD{\Omega}$ of
\begin{subequations} \label{adj}
\begin{alignat}{3}
\Delta v - \tau &= g_1 &&\quad\text{in}\ \Omega, \label{a1}\\
\Delta\tau &= g_2 &&\quad\text{in}\ \Omega. \label{a2}
\end{alignat}
\end{subequations}
It satisfies
\[
\|v\|_2 + \|\tau\|_\Delta \lesssim \|g_1\| + \|g_2\|
\]
with a constant that is independent of $g_1$, $g_2$ and $\ensuremath{\mathcal{T}}$.
\end{lemma}
\begin{proof}
We write a variational formulation for $v$. Applying $\Delta$ to \eqref{a1} and using \eqref{a2} gives the relation
\[
\Delta(\Delta v-g_1) = g_2\quad\text{in}\quad L_2(\Omega).
\]
Testing with $\delta\!v\in H^2_0(\Omega)$ and integrating by parts, we see that $v\in H^2_0(\Omega)$ solves
\[
\vdual{\Delta v}{\Delta\delta\!v} = \vdual{g_1}{\Delta\delta\!v} + \vdual{g_2}{\delta\!v} \quad\forall\delta\!v\in H^2_0(\Omega).
\] By standard arguments, this problem has a unique solution with bound \[ \|\boldsymbol{\varepsilon}\nabla v\|^2 = \|\Delta v\|^2 \le \Bigl(\|g_1\|^2 + \|g_2\|^2\Bigr)^{1/2} \|v\|_\Delta. \] Here, we made use of \eqref{eq_Dt}. Using Poincar\'e's inequality $\|v\|\lesssim \|\boldsymbol{\varepsilon}\nabla v\|$ we conclude that \[ \|v\|_2 \lesssim \|g_1\| + \|g_2\|. \] A unique solution $(v,\tau)$ of \eqref{adj} is then obtained by setting $\tau:=\Delta v-g_1$, with bound \[ \|\tau\| + \|\Delta\tau\| = \|\Delta v-g_1\| + \|g_2\| \lesssim \|g_1\| + \|g_2\|. \] This finishes the proof. \end{proof} \subsection{Proof of Theorem~\ref{thm_stab1}.} \paragraph{Well-posedness of \eqref{VF1}.} We check the standard conditions. The boundedness of $b_1$ and $L$ holds by definition of the norms in $\ensuremath{\mathcal{U}}_1$ and $\ensuremath{\mathcal{V}}_1$. The injectivity of the adjoint operator $B_1^*:\;\ensuremath{\mathcal{V}}_1\to\ensuremath{\mathcal{U}}_1'$ can be seen as follows. Let $(v,\tau)\in\ensuremath{\mathcal{V}}_1$ be such that $b_1(\ensuremath{\mathbf{u}};v,\tau)=0$ for any $\ensuremath{\mathbf{u}}=(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})\in\ensuremath{\mathcal{U}}_1$. The selection of $\ensuremath{\mathbf{u}}=(0,0,\wat{\boldsymbol{u}},0)$ for any $\wat{\boldsymbol{u}}\in\bHDzz{\ensuremath{\mathcal{S}}}$ reveals that $\tau\in\HD{\Omega}$ by Proposition~\ref{prop_D_jump}(i). Analogously, selecting $\ensuremath{\mathbf{u}}=(0,0,0,\wat{\boldsymbol{\sigma}})$ with arbitrary $\wat{\boldsymbol{\sigma}}\in\bHD{\ensuremath{\mathcal{S}}}$, Proposition~\ref{prop_D_jump}(i) shows that $v\in\HDz{\Omega}$. We conclude that $(v,\tau)\in\HDz{\Omega}\times\HD{\Omega}$ solves $\tau=\Delta v$ and $\Delta\tau=0$. It follows that $\Delta^2 v = 0$. Since $v\in\HDz{\Omega}$, so that $\traceD{\Omega}(v)=0$, relation \eqref{trace_Omega} shows that $\|\Delta v\|^2=\vdual{\Delta^2 v}{v}=0$. In particular, $\tau=\Delta v=0$. 
Now, defining $z\in H^2_0(\Omega)$ as the solution to $\Delta^2 z=v$, and again using \eqref{trace_Omega}, we find that
\(
\vdual{v}{v}=\vdual{v}{\Delta^2 z}=\vdual{\Delta v}{\Delta z}=0,
\)
that is, $v=0$. It remains to check the inf--sup condition
\begin{equation} \label{infsup_D}
\|B_1\ensuremath{\mathbf{u}}\|_{\ensuremath{\mathcal{V}}_1'} \gtrsim \|\ensuremath{\mathbf{u}}\|_{\ensuremath{\mathcal{U}}_1}\quad\forall\ensuremath{\mathbf{u}}\in\ensuremath{\mathcal{U}}_1.
\end{equation}
To this end we employ the technique proposed by Carstensen {\em et al.} in \cite{CarstensenDG_16_BSF}. To simplify reading, let us relate our notation to the one in \cite{CarstensenDG_16_BSF}.
\begin{align*}
&X=\ensuremath{\mathcal{U}}_1,\quad X_0 = L_2(\Omega)\times L_2(\Omega),\quad \hat X=\bHDzz{\ensuremath{\mathcal{S}}}\times\bHD{\ensuremath{\mathcal{S}}},\\
&Y=\ensuremath{\mathcal{V}}_1,\quad Y_0=\HDz{\Omega}\times\HD{\Omega},\quad b(\cdot,\cdot)=b_1(\cdot,\cdot),\\
&b_0(x,y)=b_1(u,\sigma,0,0;v,\tau)=\vdual{u}{\Delta \tau}_\ensuremath{\mathcal{T}} + \vdual{\sigma}{\Delta v-\tau}_\ensuremath{\mathcal{T}}\quad \text{with $x=(u,\sigma)$, $y=(v,\tau)$},\\
&\hat b(\hat x,y)=b_1(0,0,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}};v,\tau)= -\dual{\wat{\boldsymbol{u}}}{\tau}_\ensuremath{\mathcal{S}} - \dual{\wat{\boldsymbol{\sigma}}}{v}_\ensuremath{\mathcal{S}}\quad \text{with $\hat x=(\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})$, $y=(v,\tau)$}.
\end{align*} According to \cite[Theorem~3.3]{CarstensenDG_16_BSF} it suffices to show the two inf--sup properties \begin{align} \label{infsup1} &\text{\cite[Ass.~3.1]{CarstensenDG_16_BSF}:}\ \sup_{0\not=(v,\tau)\in\HDz{\Omega}\times\HD{\Omega}} \frac{b_1(u,\sigma,0,0;v,\tau)}{\|(v,\tau)\|_{\ensuremath{\mathcal{V}}_1}} \gtrsim \|u\| + \|\sigma\| \quad\forall u, \sigma\in L_2(\Omega), \\ \label{infsup2} &\text{\cite[(18)]{CarstensenDG_16_BSF}:}\qquad \sup_{0\not=(v,\tau)\in\ensuremath{\mathcal{V}}_1} \frac{\dual{\wat{\boldsymbol{u}}}{\tau}_\ensuremath{\mathcal{S}} + \dual{\wat{\boldsymbol{\sigma}}}{v}_\ensuremath{\mathcal{S}}}{\|(v,\tau)\|_{\ensuremath{\mathcal{V}}_1}} \gtrsim \|\wat{\boldsymbol{u}}\|_\trD{\ensuremath{\mathcal{S}}} + \|\wat{\boldsymbol{\sigma}}\|_\trD{\ensuremath{\mathcal{S}}} \nonumber\\ &\hspace*{0.55\textwidth} \forall (\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})\in \bHDzz{\ensuremath{\mathcal{S}}}\times\bHD{\ensuremath{\mathcal{S}}}, \end{align} and the identity \[ \HDz{\Omega}\times\HD{\Omega} = \{(v,\tau)\in\ensuremath{\mathcal{V}}_1;\; \dual{\wat{\boldsymbol{u}}}{\tau}_\ensuremath{\mathcal{S}} + \dual{\wat{\boldsymbol{\sigma}}}{v}_\ensuremath{\mathcal{S}} = 0\ \forall (\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})\in \bHDzz{\ensuremath{\mathcal{S}}}\times\bHD{\ensuremath{\mathcal{S}}}\}. \] This identity is true by Proposition~\ref{prop_D_jump}. Lemma~\ref{la_adj2} shows that \eqref{infsup1} holds: \begin{align} \label{B1_below} \Bigl(\|u\|^2 + \|\sigma\|^2\Bigr)^{1/2} &= \sup_{0\not=(g_1,g_2)\in L_2(\Omega)\times L_2(\Omega)} \frac {\vdual{u}{g_1} + \vdual{\sigma}{g_2}} {(\|g_1\|^2+\|g_2\|^2)^{1/2}} \nonumber\\ &\lesssim \sup_{0\not=(v,\tau)\in H^2_0(\Omega)\times\HD{\Omega}} \frac {b_1(u,\sigma,0,0;v,\tau)} {(\|v\|_2^2+\|\tau\|_\Delta^2)^{1/2}} \\ \nonumber &\le \sup_{0\not=(v,\tau)\in \HDz{\Omega}\times\HD{\Omega}} \frac {b_1(u,\sigma,0,0;v,\tau)} {\|(v,\tau)\|_{\ensuremath{\mathcal{V}}_1}} \quad\forall u, \sigma\in L_2(\Omega). 
\end{align} Finally, Proposition~\ref{prop_D_trace} shows that \eqref{infsup2} is satisfied. This finishes the proof of \eqref{infsup_D}, and of the theorem. \paragraph{Equivalence of \eqref{prob} and \eqref{VF1}.} By construction of \eqref{VF1}, any solution $u\in H^2_0(\Omega)$ of \eqref{prob} provides a solution $\ensuremath{\mathbf{u}}:=(u,\Delta u,\traceD{}(u),\traceD{}(\Delta u))\in\ensuremath{\mathcal{U}}_1$ of \eqref{VF1}. In fact, the regularity $u\in\HDz{\Omega}$, together with $f\in L_2(\Omega)$, is sufficient for this conclusion. To see the other direction we use that \eqref{VF1} is uniquely solvable. Its solution $\ensuremath{\mathbf{u}}=(u,\sigma,\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})$ satisfies $u\in \HDz{\Omega}$ and solves $\Delta^2u = f$ in $\Omega$, as can be seen as follows. Selecting smooth test functions $v$ and $\tau$ with supports on individual elements, one obtains $\sigma=\Delta_\cT u$ and $\Delta_\cT\sigma=f$, first in the distributional sense and then in $L_2(\Omega)$ by the regularity $\sigma,f\in L_2(\Omega)$. Second, denoting as usual $\wat{\boldsymbol{u}}=(\wat{\boldsymbol{u}}_T)_T$, $\wat{\boldsymbol{\sigma}}=(\wat{\boldsymbol{\sigma}}_T)_T$, and using test functions $v,\tau\in\ensuremath{\mathcal{D}}(\overline{T})$ for $T\in\ensuremath{\mathcal{T}}$, one concludes that $\wat{\boldsymbol{u}}_T=\traceD{T}(u)$ and $\wat{\boldsymbol{\sigma}}_T=\traceD{T}(\sigma)$ for any $T\in\ensuremath{\mathcal{T}}$ so that $u\in\HDz{\Omega}$ and $\sigma\in\HD{\Omega}$ by Proposition~\ref{prop_D_jump}. Altogether, $u\in\HDz{\Omega}$ solves $\Delta^2u=f$. Since any such function $u$ leads to a solution of \eqref{VF1}, as noted before, one concludes the stronger regularity $u\in H^2_0(\Omega)$ by uniqueness of \eqref{VF1}. Therefore, $u\in H^2_0(\Omega)$ solves \eqref{prob}. \subsection{Proof of Theorem~\ref{thm_stab2}.} The proof of Theorem~\ref{thm_stab2} is analogous to the one of Theorem~\ref{thm_stab1}. 
The equivalence between \eqref{prob} and \eqref{VF2} holds as before. To show the well-posedness of \eqref{VF2} we repeat the steps that show the well-posedness of \eqref{VF1} where we only have to replace the corresponding ingredients. Specifically, the injectivity of $B_2^*:\;\ensuremath{\mathcal{V}}_2\to\ensuremath{\mathcal{U}}_2'$ is obtained by using Propositions~\ref{prop_tDt_jump} and~\ref{prop_gg_jump} instead of Proposition~\ref{prop_D_jump}(i) to deduce the continuity $(v,\tau)\in H^2_0(\Omega)\times\HD{\Omega}$ of $(v,\tau)\in\ensuremath{\mathcal{V}}_2$ satisfying $b_2(\ensuremath{\mathbf{u}};v,\tau)=0$ $\forall\ensuremath{\mathbf{u}}\in\ensuremath{\mathcal{U}}_2$. Then Lemma~\ref{la_adj2} shows that $(v,\tau)=0$. The inf--sup condition for $B_2$, corresponding to \eqref{infsup_D}, is shown by the same framework, based on the two inf--sup conditions \begin{align} \label{infsup1b} &\sup_{0\not=(v,\tau)\in H^2_0(\Omega)\times\HD{\Omega}} \frac{b_2(u,\sigma,0,0;v,\tau)}{\|(v,\tau)\|_{\ensuremath{\mathcal{V}}_2}} \gtrsim \|u\| + \|\sigma\| \quad\forall u, \sigma\in L_2(\Omega), \\ \label{infsup2b} &\sup_{0\not=(v,\tau)\in\ensuremath{\mathcal{V}}_2} \frac{\dual{\wat{\boldsymbol{u}}}{\tau}_\ensuremath{\mathcal{S}} + \dual{\wat{\boldsymbol{\sigma}}}{v}_\ensuremath{\mathcal{S}}}{\|(v,\tau)\|_{\ensuremath{\mathcal{V}}_2}} \gtrsim \|\wat{\boldsymbol{u}}\|_{2,\ensuremath{\mathcal{S}}} + \|\wat{\boldsymbol{\sigma}}\|_\trddiv{\ensuremath{\mathcal{S}}} \quad\forall (\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})\in \bHDtzz{\ensuremath{\mathcal{S}}}\times\ensuremath{\mathbf{H}}^{-3/2,-1/2}(\ensuremath{\mathcal{S}}), \end{align} and the identity \[ H^2_0(\Omega)\times\HD{\Omega} = \{(v,\tau)\in\ensuremath{\mathcal{V}}_2;\; -\dual{\wat{\boldsymbol{u}}}{\tau}_\ensuremath{\mathcal{S}} + \dual{\wat{\boldsymbol{\sigma}}}{v}_\ensuremath{\mathcal{S}} = 0\ \forall (\wat{\boldsymbol{u}},\wat{\boldsymbol{\sigma}})\in 
\bHDtzz{\ensuremath{\mathcal{S}}}\times\ensuremath{\mathbf{H}}^{-3/2,-1/2}(\ensuremath{\mathcal{S}})\}.
\]
This identity is true by Propositions~\ref{prop_tDt_jump} and~\ref{prop_gg_jump}, and \eqref{infsup1b} holds as we have seen with \eqref{B1_below}. Finally, Propositions~\ref{prop_Dt_trace} and~\ref{prop_dd_trace} show that \eqref{infsup2b} is satisfied.
\section{Numerical examples} \label{sec_num}
According to Theorems~\ref{thm_DPG1} and~\ref{thm_DPG2}, any conforming subspaces $\ensuremath{\mathcal{U}}_{1,h}\subset\ensuremath{\mathcal{U}}_1$ and $\ensuremath{\mathcal{U}}_{2,h}\subset\ensuremath{\mathcal{U}}_2$ yield quasi-optimal approximations $\ensuremath{\mathbf{u}}_{1,h}\in\ensuremath{\mathcal{U}}_{1,h}$ and $\ensuremath{\mathbf{u}}_{2,h}\in\ensuremath{\mathcal{U}}_{2,h}$, respectively, of the solution(s) $\ensuremath{\mathbf{u}}_1=(u,\Delta u,\traceD{}(u),\traceD{}(\Delta u))$ and $\ensuremath{\mathbf{u}}_2=(u,\Delta u,\traceDt{}(u),\traceDD{}(\mathbb{D}(\Delta u)))$ to \eqref{VF1} and \eqref{VF2}, respectively. (In fact, $\ensuremath{\mathbf{u}}_1=\ensuremath{\mathbf{u}}_2$.) Here, $u\in H^2_0(\Omega)$ solves \eqref{prob}, and $\ensuremath{\mathbf{u}}_{1,h}$ and $\ensuremath{\mathbf{u}}_{2,h}$ are the solutions of \eqref{DPG1} and \eqref{DPG2}, respectively. The construction of discrete subspaces of $\ensuremath{\mathcal{U}}_1$ and $\ensuremath{\mathcal{U}}_2$, and the analysis of their approximation properties, are subjects of ongoing research. In the case of the Kirchhoff--Love model we have presented a fully discrete analysis in \cite{FuehrerH_19_FDD}. Here, we only select some discrete spaces in an \emph{ad hoc} fashion and present the corresponding convergence results without proving any convergence orders. Also, the construction of appropriate Fortin operators (needed to take the approximation of optimal test functions into account) is left open.
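Algebraically, a DPG scheme with (approximated) optimal test functions such as \eqref{DPG1} or \eqref{DPG2} reduces to normal equations: with a stiffness matrix $B$ representing the bilinear form on trial and enriched test bases, a Gram matrix $G$ of the test-space inner product, and a load vector $l$, one solves $B^\top G^{-1}B\,\mathbf{x} = B^\top G^{-1}l$. The following sketch uses random stand-in matrices (all names hypothetical; an actual implementation assembles $B$ and $G$ elementwise):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trial, n_test = 8, 20                       # enriched test space: n_test > n_trial

B = rng.standard_normal((n_test, n_trial))    # B[i, j] = b(u_j, v_i)  (stand-in)
l = rng.standard_normal(n_test)               # l[i] = L(v_i)          (stand-in)
A = rng.standard_normal((n_test, n_test))
G = A @ A.T + n_test * np.eye(n_test)         # SPD Gram matrix of the test inner product

# optimal test functions have coefficient matrix G^{-1} B; the DPG system
# becomes the normal equations  B^T G^{-1} B x = B^T G^{-1} l
GinvB = np.linalg.solve(G, B)
Ginvl = np.linalg.solve(G, l)
x = np.linalg.solve(B.T @ GinvB, B.T @ Ginvl)

# built-in error estimator: residual measured in the dual (G^{-1}) norm
r = B @ x - l
eta = np.sqrt(r @ np.linalg.solve(G, r))
```

By construction, $\mathbf{x}$ minimizes the residual in the $G^{-1}$-norm, which is the discrete counterpart of the minimum-residual property invoked for Theorems~\ref{thm_DPG1} and~\ref{thm_DPG2}.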
Test functions are approximated by selecting identical meshes for ansatz and test spaces, and increasing polynomial degrees in the test spaces (see~\cite{FuehrerH_19_FDD} for details). Specifically, we consider the two-dimensional case $d=2$, and use regular triangular meshes $\ensuremath{\mathcal{T}}$ of shape-regular elements, with mesh parameter $h:=h_\ensuremath{\mathcal{T}} := \max_{T\in\ensuremath{\mathcal{T}}} \mathrm{diam}(T)$. The DPG method provides a built-in error estimator, the residual norm $\eta:=\|B_i(\ensuremath{\mathbf{u}}_i-\ensuremath{\mathbf{u}}_{i,h})\|_{\ensuremath{\mathcal{V}}_i'}$. (We generically use $\eta$ and select $i=1$ or $i=2$ as needed.) By the product form of the test spaces, $\eta$ is composed of local element contributions
\(
\eta^2 = \sum_{T\in\ensuremath{\mathcal{T}}} \eta(T)^2.
\)
For the example with singular solution we use these indicators to drive adaptive DPG schemes, based on newest-vertex bisection and D\"orfler marking with parameter one half.
\subsection{Example with smooth solution}\label{sec_ex_smooth}
We take $\Omega = (0,1)^2$ and use the manufactured solution $u(x,y)=x^2(1-x)^2y^2(1-y)^2$. To compare the approximations given by the schemes \eqref{DPG1} and \eqref{DPG2}, we use piecewise constant functions on uniform meshes for $u_{i,h}$ and $\sigma_{i,h}$, and traces of the reduced Hsieh--Clough--Tocher (HCT) functions for both $\wat{\boldsymbol{u}}_{i,h}$ and $\wat{\boldsymbol{\sigma}}_{i,h}$ ($i=1,2$). These HCT traces use piecewise cubic polynomials for (standard) traces on edges and piecewise linear polynomials for normal derivatives on edges, subject to the regularity stemming from $H^2(\Omega)$-functions. For the reduced HCT elements we refer to \cite{Ciarlet_78_IEE}, and the traces we use are described in \cite{FuehrerHN_19_UFK}. Figure~\ref{fig_smooth} presents the $L_2(\Omega)$ approximation errors for $u$ and $\sigma=\Delta u$ along with the corresponding residual $\eta$.
The results for scheme \eqref{DPG1} are on the left and for \eqref{DPG2} on the right. It appears that in both cases we have the asymptotic behavior $\|u-u_{i,h}\|\simeq\|\sigma-\sigma_{i,h}\|\simeq\eta=\ensuremath{\mathcal{O}}(h)$. This is expected for lowest-order approximations of a smooth function.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.45\textwidth]{VF1_smooth.pdf}
\includegraphics[width=0.45\textwidth]{VF2_smooth.pdf}
\end{center}
\caption{Errors generated by scheme \eqref{DPG1} (left) and scheme \eqref{DPG2} (right) for the smooth example from \S\ref{sec_ex_smooth}.} \label{fig_smooth}
\end{figure}
\subsection{Example with singular solution}\label{sec_ex_sing}
The next example is taken from \cite{FuehrerHN_19_UFK}. We consider the non-convex domain from Figure~\ref{fig_domain} with reentrant corner at $(0,0)$. The outer angle at this corner is $\tfrac3{4}\pi$. We take the manufactured solution
\begin{align*}
u(r,\varphi) = r^{1+\alpha}(\cos( (\alpha+1)\varphi)+C \cos( (\alpha-1)\varphi))
\end{align*}
with polar coordinates $(r,\varphi)$ centered at the origin. It satisfies
\(
\Delta^2 u = 0 =: f.
\)
For the boundary conditions we prescribe the values of $u|_\Gamma$ and $\nabla u|_\Gamma$. The parameters $\alpha$ and $C$ are chosen such that $u$ and its normal derivative vanish on the boundary edges that meet at the origin. Here, we have $\alpha\approx 0.673583432147380$ and $C\approx 1.234587795273723$. We have $u\in H^{2+\alpha-\varepsilon}(\Omega)$ for every $\varepsilon>0$, but $\Delta u\not\in H^1(\Omega)$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.4\textwidth]{domain.pdf}
\end{center}
\caption{The non-convex domain with initial mesh.} \label{fig_domain}
\end{figure}
The numerical results for the two schemes \eqref{DPG1} (on the left) and \eqref{DPG2} (on the right) are shown in Figure~\ref{fig_sing}. As before, we plot the $L_2(\Omega)$-errors for $u$ and $\sigma=\Delta u$ along with the corresponding residual $\eta$.
In both cases the schemes converge at a low rate when using quasi-uniform meshes (curves without label ``adap''), variant \eqref{DPG1} being extremely slow. The rates exhibited by the second scheme are as expected from the regularity of $\sigma$. However, scheme \eqref{DPG1} seems to suffer from the approximation of $\wat{\boldsymbol{\sigma}}_h$ by smooth $H^2$-traces. This is clearly not an efficient basis. We can only claim convergence based on a density argument. We have also used adaptive variants of both DPG schemes (curves with label ``adap'' in the same figures). It turns out that the second scheme \eqref{DPG2} recovers its optimal rate of $\ensuremath{\mathcal{O}}(\dim(\ensuremath{\mathcal{U}}_{2,h})^{-1/2})$. On the other hand, the residual $\eta$ and the error $\|\sigma-\sigma_{1,h}\|$ of the first scheme converge as slowly as before. Again, this seems to be caused by the inappropriate basis for $\wat{\boldsymbol{\sigma}}_{1,h}$. It is an open problem to construct discrete trace spaces that improve the convergence rate of scheme \eqref{DPG1} for non-smooth solutions.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.45\textwidth]{VF1_sing.pdf}
\includegraphics[width=0.45\textwidth]{VF2_sing.pdf}
\end{center}
\caption{Errors generated by schemes \eqref{DPG1} (left) and \eqref{DPG2} (right) for the singular example from \S\ref{sec_ex_sing}.} \label{fig_sing}
\end{figure}
\clearpage
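For reproducibility of \S\ref{sec_ex_sing}, the exponent $\alpha$ can be recomputed as the nontrivial root of the characteristic equation $\sin^2(\alpha\omega)=\alpha^2\sin^2(\omega)$ for corner singularities of the clamped bi-Laplacian (Grisvard's theory), with interior angle $\omega=2\pi-\tfrac34\pi=\tfrac54\pi$. This equation is not derived in the present paper, so the following bisection sketch is a cross-check under that assumption.

```python
import math

omega = 2 * math.pi - 3 * math.pi / 4   # interior angle 5*pi/4 at the reentrant corner

def f(alpha):
    # sin^2(alpha*omega) - alpha^2 * sin^2(omega): assumed characteristic
    # equation for clamped-plate corner singularities (cf. Grisvard)
    return math.sin(alpha * omega) ** 2 - alpha ** 2 * math.sin(omega) ** 2

# bracket the nontrivial root in (0, 1): f(0.5) > 0 > f(0.8)
a, b = 0.5, 0.8
for _ in range(100):                    # plain bisection
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m

alpha = 0.5 * (a + b)
print(alpha)                            # ~ 0.6735834, matching the value used above
```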
{ "timestamp": "2019-04-17T02:32:26", "yymm": "1904", "arxiv_id": "1904.07761", "language": "en", "url": "https://arxiv.org/abs/1904.07761", "abstract": "We study several trace operators and spaces that are related to the bi-Laplacian. They are motivated by the development of ultraweak formulations for the bi-Laplace equation with homogeneous Dirichlet condition, but are also relevant to describe conformity of mixed approximations.Our aim is to have well-posed (ultraweak) formulations that assume low regularity, under the condition of an $L_2$ right-hand side function. We pursue two ways of defining traces and corresponding integration-by-parts formulas. In one case one obtains a non-closed space. This can be fixed by switching to the Kirchhoff-Love traces from [Führer, Heuer, Niemi, An ultraweak formulation of the Kirchhoff-Love plate bending model and DPG approximation, Math. Comp., 88 (2019)]. Using different combinations of trace operators we obtain two well-posed formulations. For both of them we report on numerical experiments with the DPG method and optimal test functions.In this paper we consider two and three space dimensions. However, with the exception of a given counterexample in an appendix (related to the non-closedness of a trace space), our analysis applies to any space dimension larger than or equal to two.", "subjects": "Numerical Analysis (math.NA); Analysis of PDEs (math.AP)", "title": "Trace operators of the bi-Laplacian and applications", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9863631675246404, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.7087950352772961 }
https://arxiv.org/abs/1812.11466
Exact Guarantees on the Absence of Spurious Local Minima for Non-negative Rank-1 Robust Principal Component Analysis
This work is concerned with the non-negative rank-1 robust principal component analysis (RPCA), where the goal is to recover the dominant non-negative principal components of a data matrix precisely, where a number of measurements could be grossly corrupted with sparse and arbitrary large noise. Most of the known techniques for solving the RPCA rely on convex relaxation methods by lifting the problem to a higher dimension, which significantly increase the number of variables. As an alternative, the well-known Burer-Monteiro approach can be used to cast the RPCA as a non-convex and non-smooth $\ell_1$ optimization problem with a significantly smaller number of variables. In this work, we show that the low-dimensional formulation of the symmetric and asymmetric positive rank-1 RPCA based on the Burer-Monteiro approach has benign landscape, i.e., 1) it does not have any spurious local solution, 2) has a unique global solution, and 3) its unique global solution coincides with the true components. An implication of this result is that simple local search algorithms are guaranteed to achieve a zero global optimality gap when directly applied to the low-dimensional formulation. Furthermore, we provide strong deterministic and probabilistic guarantees for the exact recovery of the true principal components. In particular, it is shown that a constant fraction of the measurements could be grossly corrupted and yet they would not create any spurious local solution.
\section{Introduction} \begin{sloppypar} The principal component analysis (PCA) is perhaps the most widely-used dimension-reduction method that reveals the components with maximum variability in high-dimensional datasets. In particular, given the data matrix $X\in\mathbb{R}^{m\times n}$, where each row corresponds to a data sample with size $n$, the goal is to recover its most dominant component under the {rank-1} spiked model\footnote[1]{There are more general models under which the PCA is shown to be useful (see~\cite{jolliffe2011principal} for more details). We use the {rank-1} spiked model since it fits into our framework and is often used as a baseline to evaluate the performance of the PCA.} \end{sloppypar} \begin{equation}\label{spike} X = \beta\mathbf{u}\mathbf{v}^\top+S \end{equation} where $\beta$ determines the signal-to-noise ratio, $S$ is the additive noise matrix, and $\mathbf{u}$ and $\mathbf{v}$ are two unknown unit norm vectors. If the data matrix $X$ is symmetric (for instance, it corresponds to a sample covariance matrix), then~\eqref{spike} can be modified as \begin{equation} X = \beta\mathbf{v}\bv^\top+S \end{equation} Depending on the nature of the noise matrix, different methods have been proposed in the literature to recover the principal components from (partial) observations of $X$. The problem of recovering $\beta$, $\mathbf{u}$, and $\mathbf{v}$ under a Gaussian and sparse noise is conventionally referred to as PCA and robust PCA (or RPCA), respectively. The properties of both PCA and its robust analog have been heavily studied in the literature and their applications span from quantitative finance to health care and neuroscience ~(\cite{hull1990pricing, caprihan2008application, brenner2000adaptive}). Recently, a special focus has been devoted to further exploiting the prior knowledge on the principal components, such as sparsity~(\cite{zou2006sparse}) and nonlinearity~(\cite{gorban2008principal}). 
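For concreteness, an instance of the rank-1 spiked model~\eqref{spike} with a sparse noise matrix can be generated as follows. This is a minimal sketch; the corruption probability `d` and the noise magnitude `noise_mag` are illustrative parameters, not taken from the paper.

```python
import numpy as np

def rank1_spiked(beta, m, n, d=0.1, noise_mag=5.0, seed=0):
    """Sample X = beta * u v^T + S with unit-norm u, v and a sparse
    noise matrix S whose entries are nonzero with probability d."""
    rng = np.random.default_rng(seed)
    u = rng.random(m)
    u /= np.linalg.norm(u)
    v = rng.random(n)
    v /= np.linalg.norm(v)
    # sparse, arbitrarily large noise: each entry is nonzero w.p. d
    S = (rng.random((m, n)) < d) * noise_mag * rng.standard_normal((m, n))
    return beta * np.outer(u, v) + S, u, v, S
```

With `d = 0` the sampled matrix is exactly rank one; increasing `d` adds the sparse gross corruptions that the robust variant of the PCA is designed to reject.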
One such piece of prior knowledge that appears in different applications is the non-negativity of the principal components~(\cite{montanari2016non}). In this scenario, one needs to solve the PCA or the RPCA under the additional constraints $\mathbf{u}, \mathbf{v}\geq 0$. While the non-negative PCA has been recently studied in~\cite{montanari2016non}, the main focus of our work is on its robust variant, where the noise matrix is assumed to be sparse and the goal is the \textit{exact} recovery of the non-negative vectors $\mathbf{u}$ and $\mathbf{v}$. Note that the non-negativity of principal components naturally arises in many real-world problems. In what follows, we will present two classes of real-world applications for which the non-negative RPCA is useful. \vspace{2mm} \noindent{\bf 1. Non-negative matrix factorization:} Extracting the dominant principal component of a symmetric or asymmetric data matrix appears in many applications, and examples are ubiquitous. For instance, an important problem in astronomy is the recovery of non-negative astronomical signals from the covariance matrix of photometric observations~(\cite{ren2018non}). The measured data samples are prone to sparse and random outliers. Similarly, one can extract moving objects from video frames via non-negative matrix factorization by treating the background as the dominant low-rank component in the video frames and the moving objects as sparse noise (the non-negativity of the data is due to the non-negative values of the pixels)~(\cite{lee1999learning, candes2011robust}). We will conduct a case study on this application later in the paper. \vspace{2mm} \noindent{\bf 2.
Gene networks:} Gene activities can be captured by the samples collected from different organs, and are described by multi-spiked models~(\cite{lazzeroni2002plaid}): \begin{equation} X = X_0+\sum_{i = 1}^k \mathbf{u}_{(i)}\mathbf{v}_{(i)}^\top \end{equation} where the $(i,j)^{\text{th}}$ entry of $X$ measures the strength of the participation of gene $i$ in sample $j$ and $X_0$ is an offset. Furthermore, $k$ is the number of gene-blocks, and $\mathbf{u}_{(i)}$ and $\mathbf{v}_{(i)}$ measure the participation of different genes and samples in the $i^{\text{th}}$ gene-block. The participation vectors are non-negative and the measurements can be subject to malfunctioning of the measurement tools. Therefore, the problem of obtaining $\mathbf{u}_{(i)}$ and $\mathbf{v}_{(i)}$ can be cast as a non-negative RPCA with multiple principal components. \vspace{2mm} The seminal work by~\cite{candes2011robust} proposes a sparsity-promoting convex relaxation for the RPCA that is capable of the exact recovery of $\mathbf{u}$ and $\mathbf{v}$. Upon defining $W = \mathbf{u}\mathbf{v}^\top$, the convex relaxation of the RPCA is defined as \begin{align}\label{opt_rpca} \min_{W\in\mathbb{R}^{m\times n}}\ \ & \|W\|_*+\lambda\|\mathcal{P}_{\Omega}(X-W)\|_1 \end{align} where $\|W\|_*$ is the nuclear norm of $W$, serving as a penalty on the rank of the recovered matrix $W$, and $\|\cdot\|_1$ denotes the element-wise $\ell_1$ norm. Furthermore, $\mathcal{P}_{\Omega}(\cdot)$ is the projection onto the set of matrices with the same support as the measurement set $\Omega$. Therefore, upon defining $S = X-W$ as the corruption or noise matrix, $\|\mathcal{P}_{\Omega}(X-W)\|_1$ plays the role of promoting sparsity in the estimated noise matrix. After finding an optimal $W$, the matrix can then be decomposed into the desired vectors $\mathbf{u}$ and $\mathbf{v}$, provided that the relaxation is exact.
Notice that the problem is convexified via lifting from $n+m$ variables on $(\mathbf{u},\mathbf{v})$ to $nm$ variables on $W$. Despite the convexity of the lifted problem, its dimension makes it prohibitive to solve in high-dimensional settings. To circumvent this issue, one popular approach is to resort to an alternative formulation, inspired by~\cite{burer2003nonlinear} (commonly known as the Burer-Monteiro technique): \begin{align}\label{norm1} \min_{\mathbf{u}\in\mathbb{R}_+^{m}, \mathbf{v}\in\mathbb{R}_+^{n}}\quad \|\mathcal{P}_{\Omega}(X-\mathbf{u}\mathbf{v}^\top)\|_1 \end{align} Despite the non-convexity of~\eqref{norm1}, its smooth counterpart (with or without non-negativity constraints) defined as \begin{align}\label{norm2} \min_{\mathbf{u}\in\mathbb{R}^{m}, \mathbf{v}\in\mathbb{R}^{n}}\quad \underbrace{\|\mathcal{P}_{\Omega}(X-\mathbf{u}\mathbf{v}^\top)\|^2_F}_{g(\mathbf{u},\mathbf{v})} \end{align} has been widely used in matrix completion/sensing and is known to possess \textit{benign global landscape}, i.e., every local solution is also global and every saddle point has a direction with a strictly negative curvature~(\cite{bhojanapalli2016global, ge2016matrix, ge2017no}). This will be stated below. \begin{theorem}[Informal, Benign Landscape~(\cite{ge2017no})]\label{thm_diff} Under some technical conditions, a regularized version of~\eqref{norm2} has benign landscape: every local minimum is global and every saddle point has a direction with a strictly negative curvature. \end{theorem} In particular, both symmetric and asymmetric matrix completion (or matrix sensing) under dense Gaussian noise can be cast as~\eqref{norm2} and in light of the above theorem, they have benign landscape. However, it is well-known that such smooth norms are incapable of correctly identifying and rejecting sparse-but-large noise/outliers in the measurements. 
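This failure mode can be checked numerically: evaluated at the true components, the $\ell_1$ objective of~\eqref{norm1} grows only linearly in the magnitude of a single outlier, while the squared Frobenius objective of~\eqref{norm2} grows quadratically and is therefore dominated by large sparse corruptions. The $2\times 2$ instance below is purely illustrative.

```python
import numpy as np

def l1_obj(X, u, v, mask):
    """The l1 objective ||P_Omega(X - u v^T)||_1."""
    return np.abs((X - np.outer(u, v)) * mask).sum()

def frob_obj(X, u, v, mask):
    """The smooth counterpart ||P_Omega(X - u v^T)||_F^2."""
    return (((X - np.outer(u, v)) * mask) ** 2).sum()

u = np.array([1.0, 1.0])
v = np.array([1.0, 1.0])
mask = np.ones((2, 2))          # full measurements
X = np.outer(u, v)
X[0, 0] += 100.0                # one grossly corrupted entry
```

At the truth `(u, v)`, the $\ell_1$ cost equals the outlier magnitude ($100$) while the squared Frobenius cost is its square ($10{,}000$), which is why a least-squares fit gets pulled toward the outlier instead of rejecting it.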
Despite the generality of Theorem~\ref{thm_diff} within the realm of smooth norms, it does not address the following important question: \textit{Does the non-smooth and non-negative {rank-1} RPCA~\eqref{norm1} have benign landscape?} \subsection{The Issue with the Known Proof Techniques} To understand the inherent difficulty of examining the landscape of~\eqref{norm1}, it is essential to explain why the existing proof techniques for the absence of spurious local minima in matrix sensing/completion cannot naturally be extended to their robust counterparts. In general, the main idea in the literature behind proving the benign landscape of matrix sensing/completion is based on analyzing the gradient and the Hessian of the objective function. More precisely, for every point that satisfies $\nabla g(\mathbf{u}, \mathbf{v}) = 0$ and does not correspond to a global minimum, it suffices to find a \textit{global} direction of descent $\mathbf{d}$ such that $\mathrm{vec}(\mathbf{d})^\top\nabla^2g(\mathbf{u}, \mathbf{v})\mathrm{vec}(\mathbf{d})<0$, where $\mathrm{vec}(\mathbf{d})$ is the vectorized version of $\mathbf{d}$ and $\nabla^2g(\mathbf{u}, \mathbf{v})$ is the Hessian of $g(\mathbf{u}, \mathbf{v})$. Such a direction certifies that every stationary point that is not globally optimal must be either a local maximum or a saddle point with a direction of strictly negative curvature. However, this approach cannot be used to prove similar results for~\eqref{norm1} mainly because the objective function of~\eqref{norm1} is non-differentiable and, hence, the Hessian is not well-defined. This difficulty calls for a new methodology for analyzing the landscape of the robust and non-smooth PCA, a goal that is at the core of this work.
{\section{Contributions} \label{sec:contributions}} {In this work, we characterize the landscape of both the symmetric non-negative rank-1 RPCA defined as \begin{equation}\tag{SN-RPCA}\label{snpca} \min_{\mathbf{u}\in\mathbb{R}^n_+}\quad \underbrace{\|\mathcal{P}_{\Omega}(X-\mathbf{u}\bu^\top)\|_1 + R_{\beta}(\mathbf{u})}_{ f_{\mathrm{reg}}(\mathbf{u})} \end{equation} and its asymmetric counterpart defined as \begin{equation}\tag{AN-RPCA}\label{anpca} \min_{\mathbf{u}\in\mathbb{R}^m_+, \mathbf{v}\in\mathbb{R}^n_+}\quad \underbrace{\|\mathcal{P}_{\Omega}(X-\mathbf{u}\mathbf{v}^\top)\|_1+R_{\beta}(\mathbf{u},\mathbf{v})}_{f_{\mathrm{reg}}(\mathbf{u}, \mathbf{v})} \end{equation} In particular, we fully characterize the stationary points of these optimization problems, under both deterministic and probabilistic models for the measurement index $\Omega$ and the noise matrix $S$. The functions $R(\mathbf{u})$ and $R(\mathbf{u},\mathbf{v})$ are regularization functions that prevent the solutions from \textit{blowing up}; roughly speaking, they penalize the points whose norm is greater than $\beta$, but do not change the landscape otherwise. The exact definitions of these regularization functions will be presented later in Section~\ref{sec6}. \begin{remark}\label{remark_rankr} The focus of this paper is on the symmetric and non-symmetric RPCA under the rank-1 spiked model. A natural extension to this model is its rank-$r$ variant: \begin{align} X = UV^\top+S \end{align} where $U := \begin{bmatrix} \mathbf{u}_1 & \cdots & \mathbf{u}_r \end{bmatrix}\in\mathbb{R}^{m\times r}_+$ and $V := \begin{bmatrix} \mathbf{v}_1 & \cdots & \mathbf{v}_r \end{bmatrix}\in\mathbb{R}^{n\times r}_+$ are non-negative matrices encompassing the $r$ principal components of the model (the symmetric version can be defined in a similar manner). Furthermore, similar to the rank-1 case, $S$ is a sparse noise matrix. 
Under this rank-$r$ spiked model, the aim of the non-negative \textbf{rank-$\bf r$} RPCA is to recover the non-negative matrices $U$ and $V$ given a subset of the elements of the noisy measurement matrix $X$. In Section~\ref{sec:rankr}, we will elaborate on the technical difficulties behind this extension. In addition, we will provide some empirical evidence to support that the developed results may hold for the general non-negative rank-$r$ RPCA with $r\geq 2$. \end{remark} } \begin{definition} Given the set $\Omega$, two graphs are defined below: \begin{itemize} \item[-] The sparsity graph $\mathcal{G}(\Omega)$ induced by $\Omega$ for an instance of~\eqref{snpca} is defined as a graph with the vertex set $V :=\{1,2,...,n\}$ that includes an edge $(i,j)$ if $(i,j)\in\Omega$. \item[-] The bipartite sparsity graph $\mathcal{G}_{m, n}(\Omega)$ induced by $\Omega$ for an instance of~\eqref{anpca} is defined as a graph with the vertex partitions $V_u :=\{1,2,...,m\}$ and {$V_v :=\{m+1,m+2,...,m+n\}$} that includes an edge $(i,j)$ if $(i,j-m)\in\Omega$. \end{itemize} Furthermore, define $\Delta(\mathcal{G}(\Omega))$ and $\delta(\mathcal{G}(\Omega))$ as the maximum and minimum degrees of the nodes in $\mathcal{G}(\Omega)$, respectively. Similarly, $\Delta(\mathcal{G}_{m,n}(\Omega))$ and $\delta(\mathcal{G}_{m,n}(\Omega))$ are used to refer to the maximum and minimum degrees of the nodes in $\mathcal{G}_{m,n}(\Omega)$, respectively. \end{definition} \begin{definition} The sets of~\textbf{bad/corrupted} and~\textbf{good/correct} measurements are defined as ${B} = \{(i,j)| (i,j)\in\Omega, S_{ij}\not=0\}$ and ${G} = \{(i,j)| (i,j)\in\Omega, S_{ij}=0\}$, respectively. \end{definition} Based on the above definitions, the sparsity graph is allowed to include self-loops. For a positive vector $\mathbf{x}$, we denote its maximum and minimum values with $x_{\max}$ and $x_{\min}$, respectively. 
Furthermore, define $\kappa(\mathbf{x}) = \frac{x_{\max}}{x_{\min}}$ as the condition number of the vector $\mathbf{x}$. The first result of this paper develops deterministic conditions on the measurement set $\Omega$ and the sparsity pattern of the noise matrix $S$ to guarantee that the positive {rank-1} RPCA has benign landscape. Let $\mathbf{u} ^*$ and $(\mathbf{u} ^*, \mathbf{v} ^*)$ denote the true principal components of~\eqref{snpca} and~\eqref{anpca}, respectively. \begin{theorem}[Informal, Deterministic Guarantee]\label{thm_inf_det} {Assuming that $\mathbf{u} ^*, \mathbf{v} ^*>0$, there exist regularization functions $R(\mathbf{u})$ and $R(\mathbf{u},\mathbf{v})$ such that the following statements hold with overwhelming probability: \begin{itemize} \item[1.]~\eqref{snpca} has no spurious local minimum and has a unique global minimum that coincides with the true component, provided that $\mathcal{G}(G)$ has \textbf{no bipartite component} and \begin{equation} \kappa(\mathbf{u}^*)^4\Delta(\mathcal{G}(B))\lesssim\delta(\mathcal{G}(G)) \end{equation} \item[2.]~\eqref{anpca} has no spurious local minimum and has a unique global minimum that coincides with the true components, provided that $\mathcal{G}_{m,n}(G)$ is \textbf{connected} and \begin{equation} \max\left\{\kappa(\mathbf{u}^*)^4, \kappa(\mathbf{v}^*)^4\right\}\Delta(\mathcal{G}_{m,n}(B))\lesssim\delta(\mathcal{G}_{m,n}(G)) \end{equation} \end{itemize}} \end{theorem} Theorem~\ref{thm_inf_det} puts forward a set of deterministic conditions for the absence of spurious local solutions in~\eqref{snpca} and~\eqref{anpca} as well as the uniqueness of the global solution. Notice that no upper bound is assumed on the values of the nonzero entries in the noise matrix. The reasoning behind the conditions imposed on the minimum and maximum degrees of the nodes in the sparsity graph of the measurement set is to ensure the identifiability of the problem. 
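For a given instance, the graph-theoretic conditions of Theorem~\ref{thm_inf_det} are straightforward to verify. The sketch below uses a pure-Python BFS two-coloring; it ignores isolated vertices, treats self-loops (which the sparsity graph is allowed to contain) as odd cycles, and, for illustration only, takes the absolute constant hidden in $\lesssim$ to be $1$.

```python
from collections import defaultdict, deque

def has_bipartite_component(edges, n):
    """True iff some connected component of the graph on {0,...,n-1}
    is bipartite (Theorem 2 requires that NO such component exists).
    A self-loop makes its component non-bipartite; isolated vertices
    are skipped."""
    adj = defaultdict(set)
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    color = {}
    for s in range(n):
        if s in color or s not in adj:
            continue
        color[s] = 0
        queue, bipartite = deque([s]), True
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y == x:                  # self-loop => odd cycle
                    bipartite = False
                elif y not in color:
                    color[y] = 1 - color[x]
                    queue.append(y)
                elif color[y] == color[x]:  # odd cycle found
                    bipartite = False
        if bipartite:
            return True
    return False

def degree_condition(good, bad, n, kappa):
    """Illustrative check of kappa^4 * Delta(G(B)) <= delta(G(G)),
    with the hidden constant taken to be 1."""
    def deg(edges):
        d = [0] * n
        for i, j in edges:
            d[i] += 1
            if i != j:
                d[j] += 1
        return d
    return kappa ** 4 * max(deg(bad)) <= min(deg(good))
```

For example, a triangle of correct measurements has no bipartite component, whereas a path does, which already rules out exact recovery by Theorem~\ref{thm_inf_det}.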
We will elaborate more on this subtle point later in Section~\ref{sec6}. Furthermore, we will show later in the paper that some of the conditions delineated in Theorem~\ref{thm_inf_det}---such as the strict positivity of $\mathbf{u} ^*$ and $\mathbf{v} ^*$, as well as the absence of bipartite components in $\mathcal{G}(G)$ for~\eqref{snpca}---are also necessary for the exact recovery. The second main result of this paper investigates~\eqref{snpca} and~\eqref{anpca} under random sampling and noise structures. In particular, suppose that each element (in the symmetric case, each element of the upper triangular part) of $S$ is nonzero with probability $d$. Then, for every $(i,j)$, we have \begin{equation} X_{ij} = \left\{ \begin{array}{ll} u^*_iv^*_j& \text{with probability}\ 1-d\\ \text{arbitrary}& \text{with probability}\ d \end{array} \right. \end{equation} Furthermore, suppose that every element of $X$ is measured with probability $p$. In other words, every $(i,j)$ belongs to $\Omega$ with probability $p$. Finally, we assume that the noise and sampling events are independent. 
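One way to generate an instance of this random model (for the asymmetric case) is sketched below; Gaussian entries stand in for the "arbitrary" corruptions, and `noise_scale` is an illustrative parameter.

```python
import numpy as np

def random_instance(u, v, p, d, noise_scale=10.0, seed=0):
    """Each entry equals u_i v_j w.p. 1-d and is corrupted otherwise;
    independently, each entry is observed (placed in Omega) w.p. p."""
    rng = np.random.default_rng(seed)
    m, n = len(u), len(v)
    corrupt = rng.random((m, n)) < d
    X = np.where(corrupt,
                 noise_scale * rng.standard_normal((m, n)),  # "arbitrary"
                 np.outer(u, v))
    omega = list(zip(*np.nonzero(rng.random((m, n)) < p)))
    return X, omega, corrupt
```

In the symmetric case one would instead corrupt only the upper-triangular part and mirror it, as described above.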
\begin{theorem}[Informal, Probabilistic Guarantee]\label{thm_inf_prob} {Assuming that $\mathbf{u} ^*, \mathbf{v} ^*>0$, there exist regularization functions $R(\mathbf{u})$ and $R(\mathbf{u},\mathbf{v})$ such that the following statements hold with overwhelming probability: \begin{itemize} \item[1.]~\eqref{snpca} has no spurious local minimum and has a unique global minimum that coincides with the true component, provided that \begin{equation}\label{upper} p\gtrsim\frac{\kappa(\mathbf{u}^*)^4\log n}{n},\qquad d\lesssim\frac{1}{\kappa(\mathbf{u}^*)^4} \end{equation} \item[2.]~\eqref{anpca} has no spurious local minimum and has a unique global minimum that coincides with the true components, provided that \begin{equation} p\gtrsim\frac{\kappa(\mathbf{w}^*)^4n\log n}{m^2},\qquad d\lesssim\frac{r}{\kappa(\mathbf{w}^*)^4} \end{equation} where $\mathbf{w}^* = \begin{bmatrix} {\mathbf{u}^*}^\top & {\mathbf{v}^*}^\top \end{bmatrix}^\top$, $r = m/n$, and $n\geq m$. \end{itemize}} \end{theorem} A number of interesting corollaries can be obtained based on Theorem~\ref{thm_inf_prob}. For instance, it can be inferred that the exact recovery is guaranteed even if the number of grossly corrupted measurements is on the same order as the total number of measurements, provided that $\frac{u^*_{\max}}{u^*_{\min}}$ is uniformly bounded from above. In addition to the absence of spurious local minima and the uniqueness of the global minimum, the next proposition states that the true solution can be recovered via local search algorithms for non-smooth optimization. \begin{proposition}[Informal, Global Convergence]\label{prop_global} Under the assumptions of Theorem~\ref{thm_inf_det} and~\ref{thm_inf_prob}, local search algorithms converge to the true solutions of~\eqref{snpca} and~\eqref{anpca} with overwhelming probability. \end{proposition} Starting from Section~\ref{sec2}, we will delve into the detailed analysis of the symmetric and asymmetric non-negative RPCA. 
In particular, we will analyze~\eqref{snpca} and~\eqref{anpca} under different deterministic and probabilistic settings and provide formal versions of Theorems~\ref{thm_inf_det} and~\ref{thm_inf_prob}. \section{Numerical Results}\label{sec:num} {In this section, we demonstrate the efficacy of the above-mentioned results in different experiments. To this end, we first briefly introduce the recently developed sub-gradient method~\cite{li2018nonconvex} that is specifically tailored to non-smooth and non-convex problems, such as those considered in this paper. The main advantage of the sub-gradient algorithm compared to other state-of-the-art methods is its extremely simple implementation; we present a sketch of the algorithm for solving the non-symmetric positive RPCA below\footnote{Note that this is a slightly modified version of the sub-gradient algorithm in~\cite{li2018nonconvex} to ensure the positivity of the iterates.} (the symmetric version can be solved using a similar algorithm with slight modifications): \vspace{2mm} {\begin{algorithm}[H] \SetAlgoLined {\textbf{Initialization:} Strictly positive initial point $\mathbf{w}_0 = \begin{bmatrix} \mathbf{u}_0^\top & \mathbf{v}_0^\top \end{bmatrix}^\top$ and step size $\mu_0$}\; \For{$k = 0,1,\dots$}{ set $\mathbf{d}_k$ as a sub-gradient of $f_{\mathrm{reg}}(\mathbf{u}_k,\mathbf{v}_k)$ defined in~\eqref{anpca}\; set $\mu_k$ according to a geometrically diminishing rule such that $\mathbf{w}_k-\mu_k\mathbf{d}_k$ is strictly positive\; set $\mathbf{w}_{k+1} = \mathbf{w}_k-\mu_k\mathbf{d}_k$\; } \caption{Sub-gradient algorithm}\label{algo1} \end{algorithm} } \vspace{2mm} \noindent It has been shown in~\cite{li2018nonconvex} that, under certain conditions on the initial point $\mathbf{w}_0$, the initial step size $\mu_0$, and the update rule for $\mu_k$, the iterates $\mathbf{w}_0,\mathbf{w}_1,\dots$ converge to the globally optimal solution at a \textit{linear} rate, provided that $\mathbf{w}_0$ is sufficiently
close to the optimal solution. The closeness of $\mathbf{w}_0$ to $\mathbf{w}^*$ is required partly to avoid becoming stuck at a spurious local minima. This requirement can be relaxed for the positive RPCA due to the absence of undesired spurious local solutions, as proven in this paper. It is also worthwhile to mention that, even though we use the sub-gradient algorithm to solve the positive RPCA, it will be shown in Section~\ref{sec8} that the results of this paper guarantee that a large class of local-search algorithms converge to the globally optimal solution of~\eqref{snpca} or~\eqref{anpca}. All of the following simulations are run on a laptop computer with an Intel Core i7 quad-core 2.50 GHz CPU and 16GB RAM. The reported results are for a serial implementation in MATLAB R2017b. } \begin{figure*} \centering \subfloat[RPCA]{\label{fig_rpca} \includegraphics[width=.46\columnwidth]{norm1_heatmap_ver4.eps}} \hspace{0cm} \subfloat[PCA]{\label{fig_runtime} \includegraphics[width=.5\columnwidth]{runtime_large.eps}} \caption{ \footnotesize (a) The performance of the randomly initialized sub-gradient method for~\eqref{snpca}. The intensity of the color is proportional to the exact recovery rate of the true solution (darker blue implies higher recovery rate). (b) The runtime of the sub-gradient method for~\eqref{snpca}. For each dimension, it shows the average runtime and its min-max interval over 100 independent trials.} \label{rpca_vs_pca} \end{figure*} \subsection{Exact Recovery:}\label{subsec:exact} To demonstrate the strength of the above-mentioned results, we consider thousands of randomly generated instances of the positive {rank-1} RPCA with different sizes and noise levels. In particular, the dimension of the instances ranges from $10$ to $100$. For each instance, the elements of $\mathbf{u}^*$ are uniformly chosen from the interval $[0,2]$. Note that $\mathbf{u}^*$ will be strictly positive with probability one. 
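The sub-gradient iteration of Algorithm~\ref{algo1} can be rendered in a few lines of Python. This is a minimal sketch, not the authors' MATLAB implementation: the regularizer $R_{\beta}$ is omitted, the step-size constants `mu0` and `rho` are illustrative, and the step is halved as needed to keep the iterates strictly positive, as in the footnote above.

```python
import numpy as np

def subgradient_rpca(X, mask, u0, v0, mu0=0.1, rho=0.98, iters=500):
    """Sub-gradient descent on ||P_Omega(X - u v^T)||_1 with a
    geometrically diminishing step size (regularizer omitted)."""
    u, v, mu = u0.copy(), v0.copy(), mu0
    for _ in range(iters):
        R = np.sign(X - np.outer(u, v)) * mask
        gu, gv = -R @ v, -R.T @ u        # a sub-gradient of the l1 loss
        while np.any(u - mu * gu <= 0) or np.any(v - mu * gv <= 0):
            mu *= 0.5                    # preserve strict positivity
        u, v = u - mu * gu, v - mu * gv
        mu *= rho                        # geometric decay
    return u, v
```

On a small noiseless instance with full measurements, starting from a perturbed positive point, the iteration drives the $\ell_1$ residual toward zero while every iterate stays strictly positive.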
Furthermore, each element of the upper triangular part of the symmetric noise matrix $S$ is set to $2$ with probability $d$ and $0$ with probability $1-d$. Figure~\ref{fig_rpca} shows the performance of randomly initialized sub-gradient method for the symmetric positive {rank-1} RPCA. {We declare that a solution is recovered exactly if $\|\mathbf{u}\bu^\top - \mathbf{u}^*{\mathbf{u}^*}^\top\|_F/\|\mathbf{u}^*{\mathbf{u}^*}^\top\|_F\leq 10^{-4}$}. For each dimension and noise probability, we consider 100 randomly generated instances of the problem and demonstrate its exact recovery rate. The heatmap shows the exact recovery rate of the sub-gradient method, when directly applied to~\eqref{snpca}. It can be observed that the algorithm has recovered the globally optimal solution even when $35\%$ of the entries in the data matrix were severely corrupted with the noise. In contrast, even a highly sparse additive noise in the data matrix prevents the sub-gradient method from recovering the true solution, when applied to the smooth problem~\eqref{norm2}. {Figure~\ref{fig_runtime} shows the graceful scalability of the sub-gradient algorithm when applied to~\eqref{snpca}. It can be seen that the algorithm is highly efficient. In particular, its average runtime varies from $0.88$ seconds for $n = 100$ to $43.20$ seconds for $n = 1000$.} \vspace{5mm} \subsection{The Emergence of Local Solutions} Recall that $\mathbf{u}^*$ and $\mathbf{v}^*$ are both assumed to be strictly positive. In what follows, we will illustrate that relaxing these conditions to non-negativity gives rise to spurious local solutions. Consider an instance of the symmetric {non-negative rank-1} RPCA with the parameters \begin{equation} \mathbf{u}^* = \begin{bmatrix} 1 & 1 & 0 \end{bmatrix}^\top,\qquad S = 0,\qquad \Omega = \{1,2,3\}^2\backslash\{(3,3)\} \end{equation} Notice that $\mathbf{u}^*$ consists of two strictly positive and one zero entries. 
Furthermore, this is a noiseless scenario where $\Omega$ consists of all possible measurements except for one. To examine the existence of spurious local solutions in this example, $10000$ randomly initialized trials of the sub-gradient method are run and the normalized distances between the obtained and true solutions are displayed in Figure~\ref{hist}. Based on this histogram, about $20\%$ of the trials converge to spurious local solutions, implying that they are ubiquitous in this instance. This experiment shows why the positivity of the true solution is crucial and cannot be relaxed. We will formalize and prove this statement later in Section~\ref{sec3}. \begin{figure} \centering \includegraphics[width=.45\columnwidth]{Histogram.eps} \caption{ \footnotesize The normalized distance between the obtained solution using the randomly initialized sub-gradient method and the true solution.} \label{hist} \end{figure} \vspace{5mm} \subsection{Moving Object Detection} In video processing, one of the most important problems is to detect anomalies or moving objects in different frames of a video. In particular, given a video sequence, the goal is to separate the nearly-static or slowly-changing background from the dynamic foreground objects~(\cite{cucchiara2003detecting}). Based on this observation,~\cite{candes2011robust} has proposed to model the background as a low-rank component, and the dynamic foreground as the sparse noise. In particular, suppose that the video sequence consists of $d_f$ gray-scale frames, each with a resolution of $d_m\times d_n$ pixels. The data matrix $X$ is defined as an asymmetric $d_md_n \times d_f$ matrix whose $i^{\text{th}}$ column is the vectorized version of the $i^{\text{th}}$ frame.
Therefore, the moving object detection problem can be cast as the recovery of the non-negative vectors $\mathbf{u}\in\mathbb{R}^{d_md_n}_+$ and $\mathbf{v}\in\mathbb{R}^{d_f}_+$, as well as the sparse matrix $S\in\mathbb{R}^{d_md_n\times d_f}$, such that \begin{equation}\label{model} X \approx \mathbf{u}\mathbf{v}^\top+S \end{equation} Note that the background may not always have a rank-1 representation. However, we will show that~\eqref{model} is sufficiently accurate if the background is relatively static. Furthermore, notice that when the background is completely static, the elements of $\mathbf{v}$ should be equal to one. However, this is not desirable in practice since the background may change due to varying illuminations, which can be captured by the variable vector $\mathbf{v}$. Each entry of $X$ is an integer between 0 (darkest) and 255 (brightest). To ensure the positivity of the true components, we increase each element of $X$ by 1 without affecting the performance of the method. The considered test case is borrowed from the work by~\cite{toyama1999wallflower}\footnote{The video frames are publicly available at~\url{https://www.microsoft.com/en-us/research/project/test-images-for-wallflower-paper/}.} and is a sequence of video frames taken from a room, where a person walks in, sits on a chair, and uses a phone. We consider 100 gray-scale frames of the sequence, each with the resolution of $120\times 160$ pixels. Therefore, $X$, $\mathbf{u}$, and $\mathbf{v}$ belong to $\mathbb{R}^{19,200\times 100}_+$, $\mathbb{R}^{19,200}_+$, and $\mathbb{R}^{100}_+$, respectively. Figure~\ref{movedObj} shows that the sub-gradient method with a random initialization can recover the moving object, which is in accordance with the theoretical results of this paper. 
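The data-matrix construction just described can be sketched as follows (a small helper, assuming the frames are given as a list of $d_m\times d_n$ arrays with pixel values in $0,\dots,255$):

```python
import numpy as np

def frames_to_data_matrix(frames):
    """Stack d_f gray-scale frames (each d_m x d_n) into the
    (d_m * d_n) x d_f matrix X whose i-th column is the vectorized
    i-th frame, shifted by 1 so the true components are strictly
    positive."""
    return np.stack([np.asarray(f, dtype=float).reshape(-1)
                     for f in frames], axis=1) + 1.0
```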
\begin{figure} \centering \subfloat{\label{bw1} \includegraphics[width=.3\columnwidth]{bw1.png}} \subfloat{\label{bw2} \includegraphics[width=.3\columnwidth]{bw2.png}} \subfloat{\label{bw3} \includegraphics[width=.3\columnwidth]{bw3.png}} \subfloat{\label{pic1_2} \includegraphics[width=.3\columnwidth]{pic1_2.png}} \subfloat{\label{pic2_2} \includegraphics[width=.3\columnwidth]{pic2_2.png}} \subfloat{\label{pic3_2} \includegraphics[width=.3\columnwidth]{pic3_2.png}} \caption{ \footnotesize The performance of the sub-gradient method in the moving object detection problem. The first row shows 3 out of 100 gray-scale frames in the studied test case that contain the moving objects. The second row shows the outcome of~\eqref{snpca} solved using randomly initialized sub-gradient method. }\label{movedObj} \end{figure} {\section{Related Work}} \vspace{2mm} {\subsection{Non-convex and Low-rank Optimization}} \vspace{2mm} A considerable amount of work has been carried out to understand the inherent difficulty of solving low-rank optimization problems both locally and globally. \vspace{2mm} \noindent{\bf Convexification:} Recently, there has been a pressing need to develop efficient methods for solving large-scale nonconvex optimization problems that naturally arise in data analytics and machine learning~(\cite{dumais1998inductive, sharif2014cnn, bottou2018optimization, zhang2018large, olfat2017spectral}). One promising approach for making these large-scale problems more {tractable} is to resort to their convex surrogates; these methods started to receive a great deal of attention after the seminal works by~\cite{donoho2006most} and~\cite{candes2006stable} on the \textit{compressive sensing} and have been extended to emerging problems in machine learning, such as fairness~(\cite{olfat2017spectral}), robust polynomial regression~(\cite{molybog2018conic, Madani2018conic}), and neural networks~(\cite{bach2017breaking}), to name a few. 
Nonetheless, the size of today's problems has been a major impediment to the tractability of these methods. In practice, the dimension of the real-world problems is overwhelmingly large, often surpassing the ability of these seemingly efficient convex methods to solve the problem in a reasonable amount of time. Due to this so-called \textit{curse of dimensionality}, the common practice is to deploy fast local search algorithms directly applied to the original nonconvex problem with the hope of converging to acceptable solutions. Roughly speaking, these methods can only guarantee the local optimality, thus exposing themselves to potentially large optimality gaps. However, a recent line of work has shown that a surprisingly large class of nonconvex problems, including matrix completion/sensing~(\cite{bhojanapalli2016global, ge2016matrix, ge2017no, zhu2017global}), phase retrieval~(\cite{sun2018geometric}), and dictionary recovery~(\cite{sun2017complete}) have \textit{benign global landscape}, i.e., every local solution is also global and every saddle point has a direction with a strictly negative curvature (see~\cite{chi2018nonconvex} for a comprehensive survey on the related problems). {More recently, the work by~\cite{zhang2018primal} has introduced a unified framework that shows the benign landscape of nonconvex low-rank optimization problems with general loss functions, provided that they satisfy certain restricted convexity and smoothness properties.} This enables most of the saddle-escaping local search algorithms to converge to a global solution, thereby resulting in a zero optimality gap~(\cite{ge2015escaping}). \vspace{2mm} \noindent{\bf Benign landscape:} As mentioned before, it has been recently shown that many low-rank optimization problems can be cast as smooth-but-nonconvex optimization problems that are free of spurious local minima. 
These methods heavily rely on the notion of \textit{restricted isometry property} (RIP)---a property that was initially introduced by~\cite{candes2005decoding} and has been used ever since as a metric to measure a norm-preserving property of the objective function. In general, these methods have two major drawbacks: 1) they can only target a narrow set of nearly-isotropic instances~(\cite{zhang2018much}), and 2) their proof technique depends on the differentiability of the objective function, a condition that is not satisfied for non-smooth norms such as $\ell_1$. To the best of our knowledge, the work by~\cite{josz2018theory} is the only one that studies the landscape of the $\ell_1$ minimization problem, where the authors consider the tensor decomposition problem under full and perfect measurements. Our work is somewhat related to~\cite{ma2018gradient}, which derives similar conditions for the absence of spurious local solutions in non-negative rank-1 matrix completion, but for the smooth Frobenius-norm minimization problem.
\vspace{2mm}
\noindent{\bf PCA with prior information:} With an exponential growth in the size and dimensionality of the real-world datasets, it is often necessary to exploit additional prior information in PCA. In many real-world applications, prior knowledge from the underlying physics of the problem---such as non-negativity~(\cite{montanari2016non}), sparsity~(\cite{zou2006sparse}), robustness~(\cite{candes2011robust}), and nonlinearity~(\cite{gorban2008principal})---can be taken into account to perform more efficient, consistent, and accurate PCA.
\vspace{2mm}
\noindent{\bf Numerical algorithms for non-smooth optimization:} Numerical algorithms for non-smooth optimization problems can be traced back to the work by Clarke on the extended definitions of gradients and directional derivatives, commonly known as generalized derivatives~(\cite{clarke1990optimization}).
Intuitively, for non-smooth functions, the gradient in the classical sense ceases to exist at a subset of the points in the domain. The Clarke generalized derivative circumvents this issue by associating a convex set of generalized gradients with such points, even if the original problem is non-convex. In the domain of unconstrained non-smooth optimization, earlier works have introduced simple algorithms that converge to approximate Clarke-stationary points~(\cite{goldstein1977optimization, chaney1978extension}). More recent methods take advantage of the fact that many non-smooth optimization problems are smooth in every open dense subset of their domains. This implies that the objective function is smooth with probability one at a randomly drawn point. This observation lays the groundwork for several gradient-sampling-based algorithms for both unconstrained and constrained non-smooth optimization problems~(\cite{burke2005robust, curtis2012sequential}). {As mentioned before, a sub-gradient method has been recently proposed by~\cite{li2018nonconvex} for solving the RPCA, where the authors prove linear convergence of the algorithm to the true components, provided that the initial point is chosen sufficiently close to the globally optimal solution.}
\vspace{2mm}
{\subsection{Comparison to the Existing Results on RPCA} Similar to non-convex matrix sensing and completion, most of the existing results on the RPCA operate in a \textit{lifted} space of the variables via different convex relaxations, and they do not incorporate the positivity constraints in the problem. In what follows, we will explain the advantages of our proposed method compared to these results.
\vspace{2mm}
\noindent{\bf Positivity constraints:} In the present work, we show that the positivity of the true components is both sufficient and (almost) necessary for the absence of spurious local solutions.
We use this prior knowledge to obtain sharp deterministic and probabilistic guarantees on the absence of spurious local minima for the RPCA based on the Burer-Monteiro formulation. For instance, we show that a constant fraction of the measurements can be grossly corrupted without introducing any spurious local solution. Since these results rely heavily on the positivity of the true components, it is unclear whether similar ``no spurious local minima'' results hold for the general case without the positivity assumption. The statistical properties of these types of constraints have also been shown to be useful in the classical PCA by~\cite{montanari2016non}, where the authors show that by imposing positivity constraints on the principal components, one can guarantee their consistent recovery with a smaller signal-to-noise ratio. It is also worthwhile to mention that the incorporation of the non-negativity/positivity constraints in low-rank matrix recovery can be traced back to some earlier works on the non-negative matrix factorization problem~(\cite{lee1999learning, hoyer2004non}).
\vspace{2mm}
\noindent{\bf Computational savings:} Similar to the convexification techniques in nonconvex optimization, most of the classical results on the RPCA \textit{relax} the inherent non-convexity of the problem by lifting it to higher dimensions~(\cite{candes2011robust, chandrasekaran2011rank, zhou2010stable, hsu2011robust}). In particular, by moving from vector to matrix variables, they guarantee the convexity of the problem at the expense of significantly increasing the number of variables. In this work, we show that such lifting is not necessary for the positive rank-1 RPCA since---despite the non-convexity of the problem---it is free of spurious local solutions and, hence, simple local search algorithms converge to the true components when directly applied to its original formulation.
\vspace{2mm}
\noindent {\bf Sharp guarantees with mild conditions:} In general, most of the existing results on RPCA that guarantee the recovery of the true components fall into two categories. First, a large class of methods relies on deterministic conditions on the spectra of the dominant components and/or the structure of the sparse noise~(\cite{hsu2011robust, chandrasekaran2011rank, yi2016fast}). For instance, the works by~\cite{hsu2011robust, chandrasekaran2011rank} require the regularization coefficient to be within a specific interval that is defined in terms of the true principal components. Furthermore, the algorithm proposed by~\cite{yi2016fast} requires prior knowledge on the density of the sparse noise matrix. Although theoretically significant, these types of conditions cannot be easily verified or met in practice. With the goal of bypassing such stringent conditions, the second category of research has studied the RPCA under probabilistic models. These types of guarantees were popularized by~\cite{candes2011robust,wright2009robust} and they do not rely on any prior knowledge of the true components or the density of the noise matrix. However, their success is contingent upon specific random models on the sparse noise or the spectra of the true components, neither of which may be satisfied in practice. In contrast, the method proposed here does not rely on any prior knowledge of the true solution, other than the availability of an upper bound on the maximum absolute value of the elements in the principal components\footnote{Note that in most cases, these types of upper bounds can be immediately inferred from domain knowledge; see e.g. our discussion on the moving object detection problem.}. Furthermore, unlike the previous works, our results encompass \textit{both deterministic and probabilistic} models under random sampling.
} {\section{Preliminaries}}\label{sec2} A \textbf{directional derivative} of a locally Lipschitz and possibly non-smooth function $h(\mathbf{x})$ at $\mathbf{x}$ in the direction $\mathbf{d}$ is defined as
\begin{equation}
h'(\mathbf{x},\mathbf{d}) := \lim_{t\downarrow 0}\frac{h(\mathbf{x}+t\mathbf{d})-h(\mathbf{x})}{t}
\end{equation}
provided that the limit exists. Based on this definition, $\bar{\mathbf{u}}$ is \textbf{directional-minimum-stationary} (or D-min-stationary) for~\eqref{snpca} if $f'(\bar{\mathbf{u}},\mathbf{d})\geq 0$ for every \textit{feasible} direction $\mathbf{d}$, i.e., a direction that satisfies $d_i\geq 0$ for every index $i$ with $\bar{u}_i = 0$. Similarly, $\bar{\mathbf{u}}$ is \textbf{directional-maximum-stationary} (or D-max-stationary) for~\eqref{snpca} if $f'(\bar{\mathbf{u}},\mathbf{d})\leq 0$ for every feasible $\mathbf{d}$. Finally, $\bar{\mathbf{u}}$ is \textbf{directional-stationary} (or D-stationary) for~\eqref{snpca} if it is either D-min- or D-max-stationary\footnote{Note that the notion of D-stationary points is often used in lieu of D-min-stationary in the literature. However, we use a slightly more general definition in this paper to account for the local maxima of~\eqref{snpca}.}. Every local minimum (maximum) $\bar{\mathbf{u}}$ must be D-min (max)-stationary for $f(\mathbf{u})$. On the other hand, $\bar{\mathbf{u}}$ cannot be a D-stationary point if $f(\mathbf{u})$ has both strictly positive and strictly negative directional derivatives at that point; in that case, $\bar{\mathbf{u}}$ is neither a local maximum nor a local minimum. A local solution of a minimization problem is referred to as~\textbf{spurious local} (or simply local) if there exists another feasible point with a strictly smaller objective value; a solution is~\textbf{globally optimal} (or simply global) if no such point exists.
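To make these definitions concrete, the one-sided limit above can be probed numerically with difference quotients. The following sketch is our own illustration (the toy instance, tolerance, and random direction sampling are arbitrary choices, not part of the paper): it tests whether a candidate point of the symmetric noiseless objective appears D-min-stationary over sampled feasible directions.

```python
import numpy as np

def f(u, u_star, omega):
    """Symmetric noiseless objective: sum over observed pairs (i, j) in Omega
    of |u_i u_j - u*_i u*_j|."""
    return sum(abs(u[i]*u[j] - u_star[i]*u_star[j]) for (i, j) in omega)

def directional_derivative(u, d, u_star, omega, t=1e-7):
    """One-sided difference quotient approximating f'(u, d) as t -> 0+."""
    return (f(u + t*d, u_star, omega) - f(u, u_star, omega)) / t

def looks_d_min_stationary(u, u_star, omega, n_dirs=200, tol=1e-4, seed=0):
    """Check f'(u, d) >= 0 over randomly sampled feasible directions,
    i.e., directions with d_i >= 0 whenever u_i = 0."""
    rng = np.random.default_rng(seed)
    for _ in range(n_dirs):
        d = rng.standard_normal(u.size)
        d[u == 0] = np.abs(d[u == 0])   # enforce feasibility of the direction
        d /= np.linalg.norm(d)
        if directional_derivative(u, d, u_star, omega) < -tol:
            return False                # found a strictly decreasing direction
    return True

u_star = np.array([1.0, 2.0, 0.5])
omega = [(0, 1), (1, 2), (0, 2), (0, 0)]               # a toy sparsity pattern
assert looks_d_min_stationary(u_star, u_star, omega)   # the global minimum
assert not looks_d_min_stationary(np.array([2.0, 1.0, 1.0]), u_star, omega)
```

Note that random sampling can only falsify stationarity, never certify it; this is why the proofs below construct explicit descent directions instead.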
Finally, a \textbf{vertex partitioning} of a non-empty bipartite graph is the partition of its vertices into two groups such that no two vertices within the same group are adjacent. {\bf Notation:} Upper-case, bold lower-case, and lower-case letters are used to denote matrices, vectors, and scalars, respectively. The spaces of non-negative real $n\times 1$ vectors and $m\times n$ matrices are denoted by $\mathbb{R}^n_+$ and $\mathbb{R}^{m\times n}_+$, respectively. The symbols $\|{W}\|_1$ and $\|W\|_F$ denote the element-wise $\ell_1$ norm and Frobenius norm of $W$, respectively. The $(i,j)^{\text{th}}$ entry of a matrix $W$ is denoted by $W_{ij}$, whereas the $i^{\text{th}}$ entry of a vector ${\bf w}$ is denoted by $w_i$. Given the sequences $f_1(n)$ and $f_2(n)$, the notation $f_1(n) \lesssim f_2(n)$ or equivalently $f_1(n) = O(f_2(n))$ means that there exists a number $c_1\in[0,\infty)$ such that $f_1(n)\leq c_1f_2(n)$ for all $n$. Similarly, the notation $f_1(n) \gtrsim f_2(n)$ or $f_1(n) = \Omega(f_2(n))$ means that there exists a number $c_2>0$ such that $f_1(n)\geq c_2f_2(n)$ for all $n$. The indicator function $\mathbb{I}_{x\geq\alpha}$ takes the value $1$ if $x\geq\alpha$ and $0$ otherwise. For an event $\mathcal{E}$, the notation $\mathbb{P}(\mathcal{E})$ denotes the probability of its occurrence. { For a random variable $X$, the symbol $\mathbb{E}\{X\}$ denotes its expected value. For notational simplicity and unless stated otherwise, we will refer to the non-negative (or positive) rank-1 RPCA as the non-negative (or positive) RPCA in the sequel.}
\vspace{2mm}
\section{Base Case: Noiseless Non-negative RPCA}\label{sec3} In this section, we consider the noiseless version of both the symmetric and asymmetric non-negative RPCA. While not entirely obvious, the subsequent arguments are at the core of our proofs for the general noisy case.
In the noiseless scenario,~\eqref{snpca} is reduced to
{\begin{equation}\tag{P1-Sym}\label{p2-sym}
\min_{\mathbf{u}\geq 0}\quad \underbrace{\sum_{(i,j)\in\Omega}|u_iu_j-u_i^*u_j^*|}_{f(\mathbf{u})}
\end{equation}}
%
For the asymmetric problem~\eqref{anpca}, the solution is invariant to scaling. In other words, if $(\mathbf{u}, \mathbf{v})$ is a solution to~\eqref{anpca}, then $(\frac{1}{q}\mathbf{u}, q\mathbf{v})$ is also a valid solution with the same objective value, for every scalar $q>0$. To circumvent the issue of invariance to scaling, it is common to balance the norms of $\mathbf{u}$ and $\mathbf{v}$ by penalizing their difference. Therefore, similar to the works by~\cite{ge2017no, zheng2016convergence, yi2016fast}, we consider the following regularized variant of~\eqref{anpca}:
{\begin{equation}\label{p2-asym}
\min_{\mathbf{u}\geq 0, \mathbf{v}\geq 0}\quad \underbrace{\|\mathcal{P}_{\Omega}(X-\mathbf{u}\mathbf{v}^\top)\|_1 + \alpha|\mathbf{u}^\top\mathbf{u} - \mathbf{v}^\top\mathbf{v}|}_{f_{\mathrm{asym}}(\mathbf{u}, \mathbf{v})}
\end{equation}}
for an arbitrary constant $\alpha>0$ (note that the positivity of $\alpha$ is the only condition required in this work). To deal with the asymmetric case, we first convert it to a symmetric problem via a simple concatenation of variables. Define $\mathbf{w} = [\mathbf{u}^\top\ \ \mathbf{v}^\top]^\top$, $\mathbf{w}^* = [{\mathbf{u}^*}^\top\ \ {\mathbf{v}^*}^\top]^\top$, and $\bar\Omega = \{(i,j)| (i,j-m)\in\Omega\}$. Based on these definitions, one can symmetrize~\eqref{p2-asym} as follows:
{\begin{equation}\tag{P1-Asym}\label{p2_asym_sym}
\min_{\mathbf{w}\geq 0}\quad \underbrace{\sum_{(i,j)\in\bar\Omega}|w_iw_j-w^*_iw^*_j|+\alpha\left|\sum_{i=1}^{m}w_i^2-\sum_{j = m+1}^{m+n}w_j^2\right|}_{f_{\mathrm{sym}}(\mathbf{w})}
\end{equation}}
To simplify the notation, we drop the subscript from $f_{\mathrm{sym}}(\mathbf{w})$ whenever there is no ambiguity in the context.
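The concatenation above can be checked mechanically: on a small random instance, the symmetrized objective over $\mathbf{w}=[\mathbf{u}^\top\ \mathbf{v}^\top]^\top$ and the shifted index set $\bar\Omega$ coincides with the regularized asymmetric objective. The sketch below is our own illustration; the dimensions, sampling rate, and value of $\alpha$ are arbitrary.

```python
import numpy as np

# Hypothetical small instance: u* in R^m, v* in R^n, observed index set Omega.
m, n, alpha = 3, 4, 0.5
rng = np.random.default_rng(1)
u_star = rng.uniform(0.5, 2.0, m)
v_star = rng.uniform(0.5, 2.0, n)
omega = [(i, j) for i in range(m) for j in range(n) if rng.random() < 0.8]

def f_asym(u, v):
    """Regularized asymmetric objective: l1 fit on Omega plus the norm-balancing term."""
    fit = sum(abs(u[i]*v[j] - u_star[i]*v_star[j]) for (i, j) in omega)
    return fit + alpha*abs(u @ u - v @ v)

def f_sym(w):
    """Symmetrized objective over w = [u; v], with the shifted index set
    Omega_bar = {(i, j+m) : (i, j) in Omega} (0-indexed)."""
    omega_bar = [(i, j + m) for (i, j) in omega]
    w_star = np.concatenate([u_star, v_star])
    fit = sum(abs(w[i]*w[j] - w_star[i]*w_star[j]) for (i, j) in omega_bar)
    return fit + alpha*abs(w[:m] @ w[:m] - w[m:] @ w[m:])

u = rng.uniform(0.1, 2.0, m)
v = rng.uniform(0.1, 2.0, n)
assert np.isclose(f_asym(u, v), f_sym(np.concatenate([u, v])))
```

At the true components the fit term vanishes, so only the norm-balancing penalty remains; this is why the theorems below characterize the minimizers through the balance condition on $\|\mathbf{u}\|$ and $\|\mathbf{v}\|$.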
\subsection{Deterministic Guarantees} {\bf Symmetric case:} First, we introduce deterministic conditions to guarantee a benign landscape for~\eqref{p2-sym}.
\begin{theorem}\label{thm1} Suppose that $\mathbf{u}^*>0$ and $\mathcal{G}(\Omega)$ has no bipartite component. Then, the following statements hold for \eqref{p2-sym}:
\begin{itemize}
\item[1.] It does not have any spurious local minimum;
\item[2.] The point $\mathbf{u} = \mathbf{u}^*$ is the unique global minimum;
\item[3.] In the positive orthant, the point $\mathbf{u} = \mathbf{u}^*$ is the only D-stationary point.
\end{itemize}
Additionally, if $\mathcal{G}(\Omega)$ is connected, the following statements hold for \eqref{p2-sym}:
\begin{itemize}
\item[4.] The points $\mathbf{u} = \mathbf{u}^*$ and $\mathbf{u} = 0$ are the only D-min-stationary points;
\item[5.] The point $\mathbf{u} = 0$ is a local maximum.
\end{itemize}
\end{theorem}
The above theorem has a number of important implications for~\eqref{p2-sym}: 1) it has no spurious local solution, 2) $\mathbf{u} = \mathbf{u}^*$ is its unique global solution, and 3) every feasible point $\mathbf{u}>0$ with $\mathbf{u} \not= \mathbf{u}^*$ admits a direction with a strictly negative directional derivative. Additionally, if $\mathcal{G}(\Omega)$ is connected, the feasible points of~\eqref{p2-sym} with zero entries either have a strictly negative directional derivative or coincide with the origin, which is a local maximum with a strictly negative curvature. Therefore, these points are not local/global minima and can be easily avoided using local search algorithms. To prove Theorem~\ref{thm1}, we first need the following important lemma.
\begin{lemma}\label{l1} Suppose that $\mathcal{G}(\Omega)$ has no bipartite component and $\mathbf{u}^*>0$.
Then, for every D-min-stationary point $\mathbf{u}$ of~\eqref{p2-sym}, we have $\mathbf{u}[c] >0$ or $\mathbf{u}[c] = 0$, where $\mathbf{u}[c]$ is the sub-vector of $\mathbf{u}$ induced by the $c^{\text{th}}$ component of $\mathcal{G}(\Omega)$. \end{lemma}
\begin{proof} See Appendix~\ref{app_l1}. \end{proof}
Now, we are ready to present the proof of Theorem~\ref{thm1}.
\vspace{2mm}
\noindent{\bf Proof of Theorem~\ref{thm1}:} We prove the first three statements. Note that Statement 5 can be easily verified and Statement 4 is implied by Lemma~\ref{l1} and Statement 3. Suppose that $\mathbf{u}\not=\mathbf{u}^*$ is a local minimum. Note that if $u_i = 0$ for some $i$, Lemma~\ref{l1} implies that $\mathbf{u}[c] = 0$ for the $c^{\text{th}}$ component that includes node $i$. However, a strictly positive perturbation of $\mathbf{u}[c]$ decreases the objective function and, therefore, $\mathbf{u}$ cannot be a local minimum. Hence, it is enough to consider the case $\mathbf{u}>0$. We show that $\mathbf{u}$ cannot be D-stationary. This immediately certifies the validity of the first three statements. First, we prove that
\begin{equation}\label{eq5}
\min_{k\in\Omega_i}\frac{u^*_k}{u_k}\leq \frac{u_i}{u^*_i}\leq\max_{k\in\Omega_i}\frac{u^*_k}{u_k}
\end{equation}
for every $i\in\{1,\cdots, n\}$, where $\Omega_i = \{j|(i,j)\in\Omega\}$. By contradiction and without loss of generality, suppose that ${u_i}/{u^*_i}> \max_{k\in\Omega_i}{u^*_k}/{u_k}$ for some $i$. This implies that $u_iu_j>u^*_iu^*_j$ for every $j\in\Omega_i$. Therefore, a negative or positive perturbation of $u_i$ results in a strictly negative or strictly positive directional derivative, respectively, contradicting the D-stationarity of $\mathbf{u}$.
With no loss of generality, assume that the sparsity graph $\mathcal{G}(\Omega)$ is connected (since the arguments made in the sequel can be readily applied to every disjoint component of $\mathcal{G}(\Omega)$) and that the following ordering holds:
\begin{equation}\label{eq17}
0<\frac{u^*_1}{u_1}\leq \frac{u^*_2}{u_2}\leq \cdots\leq \frac{u^*_n}{u_n}
\end{equation}
Therefore, due to~\eqref{eq5}, we have
\begin{equation}\label{eq7}
0<\frac{u^*_1}{u_1}\leq \min_{k\in\Omega_i}\frac{u^*_k}{u_k}\leq \frac{u_i}{u^*_i}\leq\max_{k\in\Omega_i}\frac{u^*_k}{u_k}\leq \frac{u^*_n}{u_n}
\end{equation}
for every $i\in\{1,\cdots, n\}$. Since $\mathbf{u} \not=\mathbf{u}^*$, there exists some index $t$ such that $u_t\not=u^*_t$. We claim that $u^*_n/u_n>1$. Indeed, if $u^*_n/u_n\leq 1$, then~\eqref{eq17} yields $u^*_t/u_t\leq 1$, and since $u_t\not=u^*_t$, in fact $u^*_t/u_t<1$, i.e., $u_t/u^*_t>1$; this contradicts~\eqref{eq7}. Now, define the sets
\begin{align}
& T_1 = \left\{i|\frac{u^*_i}{u_i} = \frac{u^*_n}{u_n}, 1\leq i\leq n\right\}\label{eq8}\\
& T_2 = \left\{j|\frac{u_j}{u^*_j} = \frac{u^*_n}{u_n}, 1\leq j\leq n\right\}\label{eq9}
\end{align}
Moreover, define the set $N = V\backslash(T_1\cup T_2)$ and let $\mathbf{d}$ be
\begin{equation}\label{eqd}
{d}_i= \left\{
\begin{array}{ll}
\frac{u_i}{u_n} &\text{if}\ i\in T_1\\
-\frac{u_i}{u_n} &\text{if}\ i\in T_2\\
0 &\text{if}\ i\in N\\
\end{array}
\right.
\end{equation}
Define a perturbation of $\mathbf{u}$ as $\hat{\mathbf{u}} = \mathbf{u}+\mathbf{d}\epsilon$, where $\epsilon>0$ is chosen to be sufficiently small. Next, the effect of the above perturbation on different terms of~\eqref{p2-sym} will be analyzed. To this end, we divide $\Omega$ into four sets
\begin{itemize}
\item[1.]
$(i,j)\in\Omega$ and $i,j\in T_1$: In this case, since $u_i<u^*_i$ and $u_j<u^*_j$, one can write \begin{align} |\hat{u}_i\hat{u}_j-u^*_iu^*_j| = u^*_iu^*_j - \hat{u}_i\hat{u}_j &= u^*_iu^*_j - \left(u_i\!+\!\frac{u_i}{u_n}\epsilon\right)\left(u_j\!+\!\frac{u_j}{u_n}\epsilon\right) \nonumber\\ &= |u_iu_j-u^*_iu^*_j| -\left(\frac{2u_iu_j}{u_n}\right)\epsilon - \left(\frac{u_iu_j}{u_n^2}\right)\epsilon^2 \end{align} where we have used the assumption $\mathbf{u}^*, \mathbf{u}>0$. \item[2.] $(i,j)\in\Omega$ and $i,j\in T_2$: In this case, since $u_i>u^*_i$ and $u_j>u^*_j$, one can write \begin{align} |\hat{u}_i\hat{u}_j-u^*_iu^*_j| = \hat{u}_i\hat{u}_j-u^*_iu^*_j &= \left(u_i\!-\!\frac{u_i}{u_n}\epsilon\right)\left(u_j\!-\!\frac{u_j}{u_n}\epsilon\right) \!- u^*_iu^*_j\nonumber\\ &= |u_iu_j-u^*_iu^*_j| -\left(\frac{2u_iu_j}{u_n}\right)\!\epsilon + \left(\frac{u_iu_j}{u_n^2}\right)\!\epsilon^2 \end{align} where we have used the assumption $\mathbf{u}^*, \mathbf{u}>0$. \item[3.] $(i,j)\in\Omega$, $i\in N$, and $j\in T_1\cup T_2$: According to the definitions of $T_1$ and $T_2$, we have \begin{equation} \frac{u_i}{u^*_i}<\frac{u^*_n}{u_n},\qquad \frac{u^*_i}{u_i}<\frac{u^*_n}{u_n} \end{equation} Now, if $j\in T_1$, one can write \begin{equation} \frac{u_i}{u^*_i}<\frac{u^*_j}{u_j}\implies u_iu_j<u^*_iu^*_j \end{equation} which implies that \begin{equation} |\hat{u}_i\hat{u}_j-u^*_iu^*_j| = u^*_iu^*_j - \hat{u}_i\hat{u}_j = u^*_iu^*_j - u_i\left(u_j+\frac{u_j}{u_n}\epsilon\right)= |{u}_i{u}_j-u^*_iu^*_j|-\left(\frac{u_iu_j}{u_n}\right)\!\epsilon \end{equation} Similarly, if $j\in T_2$, one can verify that \begin{equation} |\hat{u}_i\hat{u}_j-u^*_iu^*_j|= |{u}_i{u}_j-u^*_iu^*_j|-\left(\frac{u_iu_j}{u_n}\right)\!\epsilon \end{equation} \item[4.] 
$(i,j)\in\Omega$, $i\in T_1$, and $j\in T_2$: In this case, note that
\begin{equation}
|\hat{u}_i\hat{u}_j-u^*_iu^*_j| = \left|\left(u_i+\frac{u_i}{u_n}\epsilon\right)\left(u_j-\frac{u_j}{u_n}\epsilon\right)-u^*_iu^*_j\right|\leq |{u}_i{u}_j-u^*_iu^*_j|+\left(\frac{u_iu_j}{u_n^2}\right)\epsilon^2
\end{equation}
\end{itemize}
The above analysis entails that---unless $N$ is empty and the subgraphs of $\mathcal{G}(\Omega)$ induced by $T_1$ and by $T_2$ contain no edges---$f'(\mathbf{u},\mathbf{d})<0$ and $f'(\mathbf{u},-\mathbf{d})>0$, implying that $\mathbf{u}$ cannot be D-stationary. On the other hand, these conditions force $\mathcal{G}(\Omega)$ to be bipartite, which is a contradiction. This completes the proof.~\hfill$\blacksquare$
Next, we show that $\mathbf{u}^*>0$ is \textit{almost} necessary to guarantee the absence of spurious local minima for~\eqref{p2-sym}.
\begin{proposition}\label{prop1} {Assume that $\mathbf{u}^*\geq 0$ and that $\mathbf{u}^*\not=0$ with $u_i^* = 0$ for some $i$. Then, upon choosing $\Omega = \{1,\dots,n\}^2\backslash \{(i,i)\}$,~\eqref{p2-sym} has a spurious local minimum.} \end{proposition}
\begin{proof} See Appendix~\ref{app_prop1}. \end{proof}
{The above proposition shows that if $\mathbf{u}^*$ is non-negative with at least one zero element, then even in the almost perfect scenario where the set $\Omega$ includes all of the measurements except for one, the problem may not be free of spurious local minima.}
The next proposition shows that the assumption on the absence of bipartite components in $\mathcal{G}(\Omega)$ is also necessary for the uniqueness of the global solution.
\begin{proposition}\label{prop2} Given any vector $\mathbf{u}^*>0$ and set $\Omega$, suppose that $\mathcal{G}(\Omega)$ has a bipartite component. Then, the global solution of~\eqref{p2-sym} is not unique. \end{proposition}
\begin{proof} Without loss of generality, suppose that $\mathcal{G}(\Omega)$ is a connected bipartite graph.
For any vector $\mathbf{u}^*>0$, the solution $\mathbf{u} = \mathbf{u}^*$ is globally optimal for~\eqref{p2-sym}. Suppose that the bipartite graph $\mathcal{G}(\Omega)$ partitions the entries of $\mathbf{u}$ into two sets $V_1$ and $V_2$ such that $u_n\in V_1$. Based on some simple algebra, one can easily verify that, for a sufficiently small $\epsilon>0$, the solution
\begin{equation}
\hat{u}_i\leftarrow \left\{
\begin{array}{ll}
u_i+\frac{u_i}{u_n}\epsilon &\text{if}\ i\in V_1\\
u_i-\frac{u_i}{u_n+\epsilon}\epsilon &\text{if}\ i\in V_2\\
\end{array}
\right.
\end{equation}
is also globally optimal for~\eqref{p2-sym}. \end{proof}
{\begin{remark} Suppose that $\mathbf{u}^*$ is a globally optimal solution of~\eqref{p2-sym} and that $\mathcal{G}(\Omega)$ includes a bipartite component. Then, according to Proposition~\ref{prop2}, the part of $\mathbf{u}^*$ whose elements correspond to the nodes in this bipartite component can be \textit{perturbed} to attain another globally optimal solution, thereby resulting in the \textbf{non-uniqueness of the global solution}. On the other hand, the connectedness assumption is required to eliminate the undesirable stationary points on the \textit{boundary} of the feasible region. Roughly speaking, the elements of the vector variable $\mathbf{u}$ corresponding to different disconnected components can behave independently from each other, giving rise to spurious D-stationary points in the problem. To elaborate, recall that $\mathbf{u}[c]$ is a sub-vector of $\mathbf{u}$ induced by the $c^{\text{th}}$ component of $\mathcal{G}(\Omega)$. Based on Lemma~\ref{l1}, the D-stationary points restricted to each disjoint component of $\mathcal{G}(\Omega)$ are either strictly positive or equal to zero.
Therefore, upon having two disconnected components $c_1$ and $c_2$, the points $\mathbf{u}' = \begin{bmatrix} {\mathbf{u}^*[c_1]}^\top & 0 \end{bmatrix}^\top$ and $\mathbf{u}'' = \begin{bmatrix} 0 & {\mathbf{u}^*[c_2]}^\top \end{bmatrix}^\top$ are indeed D-stationary points of~\eqref{snpca}, thereby resulting in \textbf{spurious stationary points}. \end{remark}} \noindent{\bf Asymmetric case:} Next, we consider~\eqref{p2-asym} in the noiseless scenario by analyzing its symmetrized counterpart~\eqref{p2_asym_sym}. Based on the construction of $\bar{\Omega}$, the corresponding sparsity graph $\mathcal{G}(\bar{\Omega})$ is bipartite. On the other hand, according to Proposition~\ref{prop2}, the existence of a bipartite component in $\mathcal{G}(\bar\Omega)$ makes a part of the solution~\textit{invariant to scaling}, which subsequently results in the non-uniqueness of the global minimum. The additional regularization term in~\eqref{p2_asym_sym} is introduced to circumvent this issue by penalizing the difference in the norms of $\mathbf{u}$ and $\mathbf{v}$. \begin{theorem}\label{thm1_asym} Suppose that $\mathbf{w}^*>0$ and $\mathcal{G}(\bar\Omega)$ is connected. Then, the following statements hold for \eqref{p2_asym_sym}: \begin{itemize} \item[1.] The points $\mathbf{w} = 0$ and $\mathbf{w}$ with the properties $\mathbf{w}\bw^\top = \mathbf{w}^*{\mathbf{w}^*}^\top$ and $\sum_{i=1}^{m}w^{2}_i=\sum_{j = m+1}^{m+n}w^{2}_j$ are the only D-min-stationary points; \item[2.] The point $\mathbf{w} = 0$ is a local maximum; \item[3.] In the positive orthant, the point $\mathbf{w}$ with the properties $\mathbf{w}\bw^\top = \mathbf{w}^*{\mathbf{w}^*}^\top$ and $\sum_{i=1}^{m}w^{2}_i=\sum_{j = m+1}^{m+n}w^{2}_j$ is the only D-stationary point. \end{itemize} \end{theorem} \begin{proof} See Appendix~\ref{app_thm1_asym}. \end{proof} \begin{remark} Notice that, unlike the symmetric case, Theorem~\ref{thm1_asym} requires the connectedness of $\mathcal{G}(\bar\Omega)$. 
This is due to the additional regularization term in~\eqref{anpca}. In particular, similar arguments do not necessarily hold for the disjoint components of $\mathcal{G}(\bar\Omega)$ because of the coupling nature of the regularization term. \end{remark} \subsection{Probabilistic Guarantees} Next, we consider the random sampling regime. Similar to the previous subsection, we first focus on the symmetric case. \vspace{2mm} \noindent{\bf Symmetric case:} Suppose that every element of the upper triangular part of the matrix $\mathbf{u}^*{\mathbf{u}^*}^\top$ is measured independently with probability $p$. In other words, for every $(i,j)\in\{1,2,...,n\}^2$ and $i\leq j$, the probability of $(i,j)$ belonging to $\Omega$ is equal to $p$. {\begin{theorem}\label{thm2} Suppose that $n\geq 2$, $\mathbf{u}^*>0$, and $p\geq \min\left\{1,\frac{(2\eta+2)\log n + 2}{n-1}\right\}$ for some constant $\eta\geq 1$. Then, the following statements hold for \eqref{snpca} with probability of at least $1-\frac{3}{2}n^{-\eta}$: \begin{itemize} \item[1.] The points $\mathbf{u} = \mathbf{u} ^*$ and $\mathbf{u} = 0$ are the only D-min-stationary points; \item[2.] The point $\mathbf{u} = 0$ is a local maximum; \item[3.] In the positive orthant, the point $\mathbf{u} = \mathbf{u} ^*$ is the only D-stationary point. \end{itemize} \end{theorem}} {Before presenting the proof of Theorem~\ref{thm2}, we note that the required lower bound on $p$ is to guarantee that the random graph $\mathcal{G}(\Omega)$ is connected with high probability. This implies that Theorem~\ref{thm1} can be invoked to verify the statements of Theorem~\ref{thm2}. It is worthwhile to mention that the classical results on \textit{Erd\"os-R\'enyi} graphs characterize the \textit{asymptotic} properties of $\mathcal{G}(\Omega)$ as $n$ approaches infinity. 
In particular, it is shown by~\cite{erdds1959random} that with the choice of $p = \frac{\log n+c}{n}$ for some $c>0$, $\mathcal{G}(\Omega)$ becomes connected with probability approaching $e^{-e^{-c}}$ as $n\to\infty$. In contrast, we introduce the following non-asymptotic result characterizing the probability that $\mathcal{G}(\Omega)$ is connected and non-bipartite for any finite $n\geq 2$, and subsequently use it to prove Theorem~\ref{thm2}.
\begin{lemma}\label{l2} Given a constant $\eta\geq 1$, suppose that $p\geq \min\left\{1,\frac{(2\eta+2)\log n + 2}{n-1}\right\}$ and $n\geq 2$. Then, $\mathcal{G}(\Omega)$ is connected and non-bipartite with probability of at least $1-\frac{3}{2}n^{-\eta}$. \end{lemma}
\begin{proof} See Appendix~\ref{app_l2}. \end{proof}
%
%
\noindent\textbf{Proof of Theorem~\ref{thm2}:} The proof immediately follows from Theorem~\ref{thm1} and Lemma~\ref{l2}.~\hfill$\blacksquare$}
\vspace{2mm}
Similar to the deterministic case, we will show that both assumptions $\mathbf{u}^*>0$ and $p\gtrsim \log n/n$ are \textit{almost} necessary for the successful recovery of the global solution of~\eqref{p2-sym}. In particular, it will be proven that relaxing $\mathbf{u}^*>0$ to $\mathbf{u}^*\geq 0$ will result in an instance that possesses a spurious local solution with non-negligible probability. Furthermore, it will be shown that the choice $p \approx \log n/n$ is optimal (modulo a $\log n$ factor) for the unique recovery of the global solution.
\begin{proposition}\label{prop3} Assuming that $\mathbf{u}^*\geq 0$ with $u^*_i = 0$ for some $i\in\{1,\dots,n\}$ and that $p<1$,~\eqref{p2-sym} has a spurious local minimum with probability of at least $1-p>0$. \end{proposition}
\begin{proof} Suppose that $\mathbf{u}^*\geq 0$ and there exists an index $i$ such that $u^*_i = 0$. The proof of Proposition~\ref{prop1} can be used to show that excluding the measurement $(i,i)$ gives rise to a spurious local minimum.
This occurs with probability $1-p$. The details are omitted due to their similarity to the proof of Proposition~\ref{prop1}. \end{proof}
\begin{proposition}\label{prop4} Given any $\mathbf{u}^*>0$, suppose that $np\rightarrow 0$ as $n\rightarrow\infty$. Then, the global solution of~\eqref{p2-sym} is not unique with probability approaching one. \end{proposition}
\begin{proof} See Appendix~\ref{app_prop4}. \end{proof}
\noindent{\bf Asymmetric case:} Consider~\eqref{p2-asym} under a random sampling regime, where each element of $\mathbf{u}^*{\mathbf{v}^*}^\top$ is independently observed with probability $p$. Next, the analog of Theorem~\ref{thm2} for the asymmetric case is provided.
{\begin{theorem}\label{thm2_asym} Suppose that $n,m\geq 2$, $\mathbf{w}^*>0$, and $p\geq \min\left\{1,\frac{(m+n)((1+\eta)\log(mn)+1)}{(m-1)(n-1)}\right\}$ for some constant $\eta\geq 1$. Then, the following statements hold for \eqref{p2_asym_sym} with probability of at least $1-2(mn)^{-\eta}-4(mn)^{-2\eta}$:
\begin{itemize}
\item[1.] The points $\mathbf{w} = 0$ and $\mathbf{w}$ with the properties $\mathbf{w}\bw^\top = \mathbf{w}^*{\mathbf{w}^*}^\top$ and $\sum_{i=1}^{m}w^{2}_i=\sum_{j = m+1}^{m+n}w^{2}_j$ are the only D-min-stationary points;
\item[2.] The point $\mathbf{w} = 0$ is a local maximum;
\item[3.] In the positive orthant, the point $\mathbf{w}$ with the properties $\mathbf{w}\bw^\top = \mathbf{w}^*{\mathbf{w}^*}^\top$ and $\sum_{i=1}^{m}w^{2}_i=\sum_{j = m+1}^{m+n}w^{2}_j$ is the only D-stationary point.
\end{itemize}
\end{theorem}
Before presenting the proof of Theorem~\ref{thm2_asym}, we note that $\mathcal{G}(\bar{\Omega})$ no longer corresponds to an {Erd\"os-R\'enyi} random graph due to its bipartite structure. Therefore, we present the analog of Lemma~\ref{l2} for random bipartite graphs.
\begin{lemma}\label{l_bipartite} Given a constant $\eta\geq 1$, suppose that $p\geq \min\left\{1,\frac{(m+n)((1+\eta)\log(mn)+1)}{(m-1)(n-1)}\right\}$ and $m,n\geq 2$.
Then, $\mathcal{G}(\bar{\Omega})$ is connected with probability of at least $1-2(mn)^{-\eta}-4(mn)^{-2\eta}$. \end{lemma}
\begin{proof} See Appendix~\ref{app_l_bipartite}. \end{proof}
%
\vspace{2mm}
\noindent{\bf Proof of Theorem~\ref{thm2_asym}:} The proof immediately follows from Theorem~\ref{thm1_asym} and Lemma~\ref{l_bipartite}.~\hfill$\blacksquare$
Before proceeding, we note that, similar to the classical results on the \textit{Erd\"os-R\'enyi} graphs, there are asymptotic results guaranteeing the connectedness of a random bipartite graph as a function of $p$. In particular,~\cite{saltykov1995number} shows that $\mathcal{G}(\bar{\Omega})$ is connected with probability approaching 1 as $m+n\to \infty$, provided that $p \geq 3\left(1+\frac{m}{n}\right)^{-1}\frac{(n+m)\log(n+m)}{nm}$. Lemma~\ref{l_bipartite} offers another lower bound on $p$ that matches this threshold (modulo a constant factor), while being non-asymptotic in nature. In particular, it characterizes the probability that the random bipartite graph is connected for \textit{all $m,n\geq 2$}. }
\section{Extension to Noisy Positive RPCA}\label{sec6} In this section, we show that additive sparse noise with arbitrary values does not drastically change the landscape of the RPCA. In other words, a limited number of grossly wrong measurements will not introduce any spurious local solution to the positive RPCA. The key idea is to prove that the direction of descent introduced in the previous section remains valid when the measurements are not perfect, i.e., when they are subject to sparse noise. To this end, consider the following problem in the symmetric case:
{\begin{equation}\label{p3_sym}
\min_{\mathbf{u}\geq 0}\quad \underbrace{\sum_{(i,j)\in\Omega}|u_iu_j-X_{ij}|}_{f(\mathbf{u})}
\end{equation}}
where
\begin{equation}
X = \mathbf{u}^*{\mathbf{u}^*}^\top+S
\end{equation}
is the matrix of true measurements perturbed with sparse noise.
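Before analyzing the landscape of the noisy symmetric problem, it is instructive to see a local search method run on it. The following projected sub-gradient sketch is our own illustration (the instance, the step-size schedule $0.05/\sqrt{t+1}$, and the iteration budget are arbitrary choices, not the algorithm analyzed in the paper): one grossly corrupted entry leaves an unavoidable residual, but the iterates can still drive the objective toward its optimal value.

```python
import numpy as np

def f(u, X, omega):
    """Noisy symmetric objective: sum over observed (i, j) of |u_i u_j - X_ij|."""
    return sum(abs(u[i]*u[j] - X[i, j]) for (i, j) in omega)

def projected_subgradient(X, omega, n, steps=2000, lr0=0.05, seed=0):
    """A sub-gradient of |u_i u_j - X_ij| is sign(u_i u_j - X_ij)*(u_j e_i + u_i e_j);
    each step moves against the summed sub-gradient, then projects onto u >= 0."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.5, 1.5, n)                 # random positive initialization
    best = u.copy()
    for t in range(steps):
        g = np.zeros(n)
        for (i, j) in omega:
            s = np.sign(u[i]*u[j] - X[i, j])
            g[i] += s*u[j]
            g[j] += s*u[i]
        u = np.maximum(u - lr0/np.sqrt(t + 1)*g, 0.0)
        if f(u, X, omega) < f(best, X, omega):
            best = u.copy()                      # keep the best iterate
    return best

# Hypothetical instance: rank-1 positive ground truth plus one gross corruption.
n = 6
u_star = np.linspace(1.0, 2.0, n)
X = np.outer(u_star, u_star)
X[0, 1] = X[1, 0] = 50.0                         # arbitrarily large sparse noise
omega = [(i, j) for i in range(n) for j in range(i, n)]
u_hat = projected_subgradient(X, omega, n)
```

At $\mathbf{u}=\mathbf{u}^*$ the only nonzero residual is the corrupted entry, so the optimal value is $|u^*_1u^*_2-50|$ here; with a diminishing step size the best iterate approaches this value on this toy instance.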
Similarly, consider the following problem for the asymmetric case: \begin{equation}\label{p3_aasym} \min_{\mathbf{u}\geq 0, \mathbf{v}\geq 0}\quad \sum_{(i,j)\in\Omega}|u_iv_j-{X}_{ij}|+\alpha\left|\sum_{i=1}^{m}u_i^2-\sum_{j = 1}^{n}v_j^2\right| \end{equation} where $\alpha$ is an arbitrary positive number. After symmetrization,~\eqref{p3_aasym} can be re-written as \begin{equation}\label{p3_asym} \min_{\mathbf{w}\geq 0}\quad \underbrace{\sum_{(i,j)\in\bar\Omega}|w_iw_j-\bar{X}_{ij}|+\alpha\left|\sum_{i=1}^{m}w_i^2-\sum_{j = m+1}^{m+n}w_j^2\right|}_{f(\mathbf{w})} \end{equation} where \begin{equation}\label{Xbar} \bar{X} = \mathbf{w}^*{\mathbf{w}^*}^\top+\bar{S} \end{equation} for $\bar{X}\in\mathbb{R}^{(n+m)\times (n+m)}$ and \begin{equation}\label{Sbar} \bar{S} = \begin{bmatrix} 0 & S\\ S^\top & 0 \end{bmatrix} \end{equation} Furthermore, define $\bar{B} = \{(i,j): (i,j)\in\bar\Omega, \bar{S}_{ij}\not=0\}$ and $\bar{G} = \{(i,j): (i,j)\in\bar\Omega, \bar{S}_{ij}=0\}$ as the sets of bad and good measurements for the symmetrized problem, respectively. In this work, we do not impose any assumption on the maximum value of the nonzero elements of $S$. However, without loss of generality, one may assume that $\mathbf{u}^*{\mathbf{u}^*}^\top+S > 0$ and $\mathbf{w}^*{\mathbf{w}^*}^\top+\bar{S} > 0$; otherwise, the non-positive elements can be discarded due to the assumptions $\mathbf{u}^*>0$ and $(\mathbf{u}^*, \mathbf{v}^*)>0$. In fact, we impose a slightly stronger condition in this work. \begin{assumption}\label{assum1} There exists a constant $c\in (0,1]$ such that $S_{ij}+u^*_iu^*_j>cu^{*^2}_{\min}$ and $\bar{S}_{ij}+w^*_iw^*_j>cw^{*^2}_{\min}$ for~\eqref{p3_sym} and~\eqref{p3_asym}, respectively. \end{assumption} \subsection{Identifiability} Intuitively, the non-negative RPCA under the unknown-but-sparse noise is more challenging to solve than its noiseless counterpart.
In particular, one may consider~\eqref{p3_sym} as a variant of~\eqref{p2-sym} discussed in the previous section, where the locations of the bad measurements are unknown; if these locations were known, they could have been discarded to reduce the problem to~\eqref{p2-sym}. If the measurements are subject to unknown noise, one of the main issues arises from the identifiability of the solution. To further elaborate, we offer an example below. \begin{example} Suppose that $X(\epsilon) = (e_1+\mathbf{1}\epsilon)(e_1+\mathbf{1}\epsilon)^\top$, where $e_1$ is the first unit vector and $\mathbf{1}$ is a vector of ones. Assuming that $\Omega = \{1,...,n\}^2$, one can decompose $X(\epsilon)$ in two forms \begin{subequations} \begin{align} & X(\epsilon) = \underbrace{(e_1+\mathbf{1}\epsilon)(e_1+\mathbf{1}\epsilon)^\top}_{\mathbf{u}^*_1{\mathbf{u}^*_1}^\top}+\underbrace{0}_{S_1}\\ & X(\epsilon) = \underbrace{\mathbf{1}\mathbf{1}^\top\epsilon^2}_{\mathbf{u}^*_2{\mathbf{u}^*_2}^\top}+\underbrace{e_1e_1^\top+\mathbf{1}e_1^\top\epsilon+e_1\mathbf{1}^\top\epsilon}_{S_2} \end{align} \end{subequations} For every $\epsilon>0$, both $S_1$ and $S_2$ can be considered as sparse matrices since the number of nonzero elements in each of these matrices is at most $O(n)$. However, unless more restrictions on the number of nonzero elements at each row or column of $S$ are imposed, it is impossible to distinguish between these two cases. This implies that the solution is not identifiable. \end{example} In order to ensure that the solution is identifiable in the symmetric case, we assume that $\Delta(\mathcal{G}(B))\leq \eta\cdot \delta(\mathcal{G}(G))$ for some constant $\eta\leq 1$ to be defined later. Roughly speaking, this implies that at each row of the measurement matrix, the number of good measurements should be at least as large as the number of bad ones.
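The two decompositions in the example above are easy to verify numerically. The following sketch (with illustrative values of $n$ and $\epsilon$) confirms that both pairs sum to $X(\epsilon)$ and that $S_2$ has exactly $2n-1$ nonzero entries, i.e., $O(n)$:

```python
import numpy as np

n, eps = 6, 0.1
e1 = np.eye(n)[0]          # first unit vector
ones = np.ones(n)
X = np.outer(e1 + eps * ones, e1 + eps * ones)

# Decomposition 1: the rank-1 term absorbs everything, S1 = 0.
U1 = np.outer(e1 + eps * ones, e1 + eps * ones)
S1 = np.zeros((n, n))

# Decomposition 2: rank-1 term eps^2 * 1 1^T; the remainder is supported
# on the first row and column only, hence has 2n - 1 nonzeros.
U2 = eps ** 2 * np.outer(ones, ones)
S2 = np.outer(e1, e1) + eps * np.outer(ones, e1) + eps * np.outer(e1, ones)

assert np.allclose(U1 + S1, X) and np.allclose(U2 + S2, X)
assert np.count_nonzero(S2) == 2 * n - 1
```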
Similar to the work by~\cite{ge2016matrix, ge2017no}, we consider the regularized version of the problem, as in \begin{equation}\tag{P2-Sym}\label{p3_sym_reg} \min_{\mathbf{u}\geq 0}\quad \underbrace{\sum_{(i,j)\in\Omega}|u_iu_j-X_{ij}|+ R(\mathbf{u})}_{f_{\mathrm{reg}}(\mathbf{u})} \end{equation} where $R(\mathbf{u})$ is a regularizer defined as \begin{equation} R(\mathbf{u}) = \lambda\sum_{i = 1}^{n}\left(u_i-\beta\right)^4\mathbb{I}_{u_i\geq\beta} \end{equation} for some fixed parameters $\lambda$ and $\beta$ to be specified later. Similarly, one can define an analogous regularization for~\eqref{p3_asym} as \begin{equation}\tag{P2-Asym}\label{p3_asym_reg} \min_{\mathbf{w}\geq 0}\quad \underbrace{\sum_{(i,j)\in\bar\Omega}|w_iw_j-\bar{X}_{ij}|+\alpha\left|\sum_{i=1}^{m}w_i^2-\sum_{j = m+1}^{m+n}w_j^2\right|+R(\mathbf{w})}_{f_{\mathrm{reg}}(\mathbf{w})} \end{equation} with \begin{equation} R(\mathbf{w}) = \lambda\sum_{i = 1}^{m+n}\left(w_i-\beta\right)^4\mathbb{I}_{w_i\geq\beta} \end{equation} for some fixed parameters $\lambda$ and $\beta$ to be specified later. Note that the defined regularization function is convex in its domain. In particular, it eliminates candidate solutions that are far from the true solution. Without loss of generality and to streamline the presentation, it is assumed that $u^*_{\max} = w^*_{\max} = 1$ in the sequel. % % \begin{lemma}\label{l4} Consider the parameter $c$ defined in Assumption~\ref{assum1}. The following statements hold: \begin{itemize} \item[-] By choosing $\beta = 1$ and $\lambda = n/2$, any D-stationary point $\mathbf{u}>0$ of~\eqref{p3_sym_reg} satisfies the inequalities $(c/2)u^{*^2}_{\min}\leq u_{\min}\leq u_{\max}\leq 2$. \item[-] By choosing $\beta = 1$ and $\lambda = (m+n)/2$, any D-stationary point $\mathbf{w}>0$ of~\eqref{p3_asym_reg} satisfies the inequalities $(c/2)w^{*^2}_{\min}\leq w_{\min}\leq w_{\max}\leq 2$. \end{itemize} \end{lemma} \begin{proof} See Appendix~\ref{app_l4}.
\end{proof} \subsection{Deterministic Guarantees} In what follows, the deterministic conditions under which~\eqref{p3_sym_reg} and~\eqref{p3_asym_reg} have benign landscape will be investigated. The results of this subsection will be the building blocks for the derivation of the main theorems for both symmetric and asymmetric positive RPCA under the random sampling and noise regime. Note that the analysis of the landscape will be more involved in this case since the effect of the regularizer should be taken into account. \vspace{2mm} \noindent{\bf Symmetric case:} Recall that, for the sparsity graph $\mathcal{G}(\Omega)$, $\Delta(\mathcal{G}(\Omega))$ and $\delta(\mathcal{G}(\Omega))$ correspond to its maximum and minimum degrees, respectively. \begin{theorem}\label{thm4} Suppose that \begin{itemize} \item[i.] $\mathbf{u}^*>0$; \item[ii.] ${\delta(\mathcal{G}(G))}>({48/c^2}){\kappa(\mathbf{u}^*)^4}{\Delta(\mathcal{G}(B))}$; \item[iii.] $\mathcal{G}(\Omega)$ has no bipartite component. \end{itemize} Then, with the choice of $\beta = 1$ and $\lambda = n/2$ for the parameters of the regularization function $R(\mathbf{u})$, the following statements hold for \eqref{p3_sym_reg}: \begin{itemize} \item[1.] It does not have any spurious local minimum; \item[2.] The point $\mathbf{u} = \mathbf{u}^*$ is the unique global minimum; \item[3.] In the positive orthant, the point $\mathbf{u} = \mathbf{u}^*$ is the only D-stationary point. \end{itemize} Additionally, if $\mathcal{G}(\Omega)$ is connected, the following statements hold for \eqref{p3_sym_reg}: \begin{itemize} \item[4.] The points $\mathbf{u} = \mathbf{u}^*$ and $\mathbf{u} = 0$ are the only D-min-stationary points; \item[5.] The point $\mathbf{u} = 0$ is a local maximum. \end{itemize} \end{theorem} \begin{proof} See Appendix~\ref{app_thm4}. \end{proof} \vspace{2mm} \noindent{\bf Asymmetric case:} Theorem~\ref{thm4} has the following natural extension to asymmetric problems. 
\begin{theorem}\label{thm4_asym} Suppose that \begin{itemize} \item[i.] $\mathbf{w}^*>0$; \item[ii.] ${\delta(\mathcal{G}(\bar G))}>({48}/c^2){\kappa(\mathbf{w}^*)^4}{\Delta(\mathcal{G}(\bar B))}$; \item[iii.] $\mathcal{G}(\bar G)$ is connected. \end{itemize} Then, with the choice of $\beta = 1$ and $\lambda = (m+n)/2$ for the parameters of the regularization function $R(\mathbf{w})$, the following statements hold for \eqref{p3_asym_reg}: \begin{itemize} \item[1.] The points $\mathbf{w} = 0$ and $\mathbf{w}$ with the properties $\mathbf{w}\bw^\top = \mathbf{w}^*{\mathbf{w}^*}^\top$ and $\sum_{i=1}^{m}w^{2}_i=\sum_{j = m+1}^{m+n}w^{2}_j$ are the only D-min-stationary points; \item[2.] The point $\mathbf{w} = 0$ is a local maximum; \item[3.] In the positive orthant, the point $\mathbf{w}$ with the properties $\mathbf{w}\bw^\top = \mathbf{w}^*{\mathbf{w}^*}^\top$ and $\sum_{i=1}^{m}w^{2}_i=\sum_{j = m+1}^{m+n}w^{2}_j$ is the only D-stationary point. \end{itemize} \end{theorem} \begin{proof} The proof is omitted due to its similarity to that of Theorem~\ref{thm4}. \end{proof} \subsection{Probabilistic Guarantees} As an extension to our previous results, we analyze the landscape of the noisy non-negative RPCA with randomness both in the location of the samples and in the structure of the noise matrix. Suppose that for the symmetric case, with probability $d$, each element of the upper triangular part of $X$ is independently corrupted with an arbitrary noise value. In other words, for every $(i,j)$ with $i\leq j$, one can write \begin{equation} X_{ij} = \left\{ \begin{array}{ll} u^*_iu^*_j& \text{with probability}\ 1-d\\ \text{arbitrary}& \text{with probability}\ d \end{array} \right. \end{equation} Furthermore, similar to the preceding section, suppose that every element of the upper triangular part of $X = \mathbf{u}^*{\mathbf{u}^*}^\top+S$ is independently measured with probability $p$. 
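To make the observation model above concrete, the following sketch (an illustration; the fixed corruption value and the instance size are arbitrary assumptions) generates the randomly sampled and corrupted symmetric measurements and evaluates the regularized objective of~\eqref{p3_sym_reg}:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_noisy(u_star, p, d, bad_value=10.0):
    """Observe each upper-triangular entry of u* u*^T with probability p;
    an observed entry is replaced by an arbitrary value with probability d."""
    n = len(u_star)
    Omega, X = [], np.outer(u_star, u_star)
    for i in range(n):
        for j in range(i, n):
            if rng.random() < p:
                Omega.append((i, j))
                if rng.random() < d:
                    X[i, j] = X[j, i] = bad_value  # "bad" measurement
    return Omega, X

def f_reg(u, X, Omega, lam, beta):
    """Regularized objective of (P2-Sym): l1 residual plus
    R(u) = lam * sum_i (u_i - beta)^4 on entries exceeding beta."""
    res = sum(abs(u[i] * u[j] - X[i, j]) for (i, j) in Omega)
    return res + lam * np.sum(np.maximum(u - beta, 0.0) ** 4)

u_star = np.array([0.6, 1.0, 0.8, 0.9])
Omega, X = sample_noisy(u_star, p=1.0, d=0.0)
# With full sampling and no corruption, the true factor attains objective 0.
assert abs(f_reg(u_star, X, Omega, lam=len(u_star) / 2, beta=1.0)) < 1e-12
```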
The randomness in the location of the measurements and noise is naturally extended to the asymmetric case by considering the symmetrized $\bar{X}$ and $\bar{S}$ defined in~\eqref{Xbar} and~\eqref{Sbar}, respectively. \vspace{2mm} \noindent{\bf Symmetric case:} First, the main result in the symmetric case is presented below. \begin{theorem}\label{thm5} {Suppose that \begin{itemize} \item[i.] $n\geq 2$, \item[ii.] $\mathbf{u}^*>0$, \item[iii.] $d<\frac{1}{(144/c^2)\kappa(\mathbf{u}^*)^4+1}$, \item[iv.] $p>\frac{(1740/c^2)\kappa(\mathbf{u}^*)^4(1+\eta)\log n}{n}$, \end{itemize} for some $\eta> 0$. Then, with the choice of $\beta = 1$ and $\lambda = n/2$ for the parameters of the regularization function $R(\mathbf{u})$, the following statements hold for \eqref{p3_sym_reg} with probability of at least $1-3n^{-\eta}$: \begin{itemize} \item[1.] The points $\mathbf{u} = \mathbf{u} ^*$ and $\mathbf{u} = 0$ are the only D-min-stationary points; \item[2.] The point $\mathbf{u} = 0$ is a local maximum; \item[3.] In the positive orthant, the point $\mathbf{u} = \mathbf{u} ^*$ is the only D-stationary point. \end{itemize}} \end{theorem} To prove Theorem~\ref{thm5}, we first present the following lemma on the concentration of the minimum and maximum degrees of random graphs. \begin{lemma}\label{l7} {Consider a random graph $\mathcal{G}(n,p)$. Given a constant $\eta>0$, the inequality \begin{align} &\mathbb{P}\left(\Delta(\mathcal{G}(n,p))\geq\max\left\{\frac{3np}{2},18(1+\eta)\log n\right\}\right)\leq n^{-\eta}\label{eq72} \end{align} holds for every $0< p\leq 1$. Furthermore, we have \begin{align} \mathbb{P}\left(\delta(\mathcal{G}(n,p))\leq\frac{np}{2}\right)\leq n^{-\eta} \end{align} provided that $p\geq \frac{12(1+\eta)\log n}{n}$.} \end{lemma} \begin{proof} See Appendix~\ref{app_l7}.
\end{proof} \begin{remark} Note that since the degree of each node in $\mathcal{G}(n,p)$ is concentrated around $np$ with high probability, one may speculate that $\Delta(\mathcal{G}(n,p))$ and $\delta(\mathcal{G}(n,p))$ should also concentrate around $np$ for all values of $p$ and hence the inclusion of $18(1+\eta)\log n$ in~\eqref{eq72} may seem redundant. Surprisingly, this is not the case in general. In fact, it can be shown that if $p = 1/n$ (and hence $np=1$), there exists a node whose degree is lower bounded by ${\log n}/{\log\log n}$ with high probability. This explains the reasoning behind the inclusion of $18(1+\eta)\log n$ in the lemma. \end{remark} \noindent\textbf{Proof of Theorem~\ref{thm5}:} {In light of Lemma~\ref{l2}, the bounds on $p$ and $d$ guarantee that $\mathcal{G}(G)$ is connected and non-bipartite with probability of at least $1-\frac{3}{2}n^{-430(1+\eta)}$. Therefore, the proof is completed by invoking Theorem~\ref{thm4}, provided that the second condition of Theorem~\ref{thm4} holds. Define the events $\mathcal{E}_1 = \left\{\Delta(\mathcal{G}(B))\leq\max\left\{\frac{3npd}{2}, 18(1+\eta)\log n\right\}\right\}$ and $\mathcal{E}_2 = \left\{\delta(\mathcal{G}(G))\geq\frac{np(1-d)}{2}\right\}$. Observe that Lemma~\ref{l7} together with the bounds on $p$ and $d$ results in the inequalities \begin{subequations} \begin{align} & \mathbb{P}\left(\mathcal{E}_1\right)\geq 1-n^{-\eta}\label{eq81}\\ & \mathbb{P}\left(\mathcal{E}_2\right)\geq 1-n^{-144\eta}\label{eq82} \end{align} \end{subequations} This in turn implies that the events $\mathcal{E}_1$ and $\mathcal{E}_2$ occur with probability of at least $1-n^{-\eta}-n^{-144\eta}$. Conditioned on these events, it suffices to show that \begin{equation}\label{eqvalid} \frac{np(1-d)}{2}>\frac{48}{c^2}\kappa(\mathbf{u}^*)^4\max\left\{\frac{3npd}{2}, 18(1+\eta)\log n\right\} \end{equation} in order to certify the validity of the second condition of Theorem~\ref{thm4}. 
It can be easily verified that the assumed upper and lower bounds on $p$ and $d$ guarantee the validity of~\eqref{eqvalid}. Therefore, a simple union bound and the fact that $n^{-\eta}>\frac{3}{2}n^{-430(1+\eta)}$ imply that the conditions of Theorem~\ref{thm4} are satisfied with probability of at least $1-3n^{-\eta}$.~\hfill$\blacksquare$} \vspace{2mm} A number of interesting corollaries can be derived based on Theorem~\ref{thm5}. \begin{corollary}\label{cor1} Suppose that $p$ is a positive number independent of $n$ and $d \lesssim \log n/n$. Then, under an appropriate choice of parameters for the regularization function, the statements of Theorem~\ref{thm5} hold with overwhelming probability, provided that $\kappa(\mathbf{u}^*) \lesssim ({n/\log n})^{1/4}$. \end{corollary} Corollary~\ref{cor1} implies that, roughly speaking, if the total number of measurements is sufficiently large (i.e., on the order of $n^2$), then on the order of $n\log n$ bad measurements with arbitrary magnitudes will not introduce any spurious local solution to the problem. Under such circumstances, the required upper bound on the ratio between the maximum and the minimum entries of $\mathbf{u}^*$ becomes more relaxed as the dimension of the problem grows. \begin{corollary}\label{cor2} Suppose that $p$ is a positive number independent of $n$ and that $d \lesssim n^{\epsilon-1}$ for some $\epsilon\in [0,1)$. Then, under an appropriate choice of parameters for the regularization function, the statements of Theorem~\ref{thm5} hold with overwhelming probability, provided that $\kappa(\mathbf{u}^*) \lesssim n^{(1-{\epsilon})/{4}}$. \end{corollary} Corollary~\ref{cor2} describes an interesting trade-off between the sparsity level of the noise and the maximum allowable variation in the entries of $\mathbf{u}^*$; roughly speaking, as $\kappa(\mathbf{u}^*)$ decreases, a larger number of noisy elements can be added to the problem without creating any spurious local minimum.
The next corollary shows that a constant fraction of the measurements can be grossly corrupted without affecting the landscape of the problem, provided that $\kappa(\mathbf{u}^*)$ is uniformly bounded from above. \begin{corollary}\label{cor3} Suppose that $p$ and $d$ are positive numbers independent of $n$ and that $d < \frac{1}{(144/c^2)+1}$. Then, under an appropriate choice of parameters for the regularization function, the statements of Theorem~\ref{thm5} hold with overwhelming probability, provided that $\kappa(\mathbf{u}^*)\leq \left(\frac{1-d}{(144/c^2)d}\right)^{1/4}$. \end{corollary} \noindent{\bf Asymmetric case:} The aforementioned results on the symmetric positive RPCA under random sampling and noise will be generalized to the asymmetric case below. \begin{theorem}\label{thm6} {Define $r = m/n$ and suppose that \begin{itemize} \item[i.] $n\geq m\geq 2$, \item[ii.] $\mathbf{w}^*>0$, \item[iii.] $d<\frac{r}{(144/c^2)\kappa(\mathbf{w}^*)^4+r}$, \item[iv.] $p>\frac{(1740/c^2)\kappa(\mathbf{w}^*)^4(1+\eta)n\log n}{m^2}$, \end{itemize} for some $\eta>0$. Then, with the choice of $\beta = 1$ and $\lambda = (m+n)/2$ for the parameters of the regularization function $R(\mathbf{w})$, the following statements hold for \eqref{p3_asym_reg} with probability of at least $1-10n^{-\eta}$: \begin{itemize} \item[1.] The points $\mathbf{w} = 0$ and $\mathbf{w}$ with the properties $\mathbf{w}\bw^\top = \mathbf{w}^*{\mathbf{w}^*}^\top$ and $\sum_{i=1}^{m}w^{2}_i=\sum_{j = m+1}^{m+n}w^{2}_j$ are the only D-min-stationary points; \item[2.] The point $\mathbf{w} = 0$ is a local maximum; \item[3.] In the positive orthant, the point $\mathbf{w}$ with the properties $\mathbf{w}\bw^\top = \mathbf{w}^*{\mathbf{w}^*}^\top$ and $\sum_{i=1}^{m}w^{2}_i=\sum_{j = m+1}^{m+n}w^{2}_j$ is the only D-stationary point. \end{itemize}} \end{theorem} To prove Theorem~\ref{thm6}, we derive a concentration bound on the minimum and maximum degrees of random bipartite graphs.
Define $\mathcal{G}(m, n, p)$ as a bipartite graph with the vertex partitions $V_u = \{1,\cdots,m\}$ and $V_v = \{m+1, \cdots, m+n\}$ where each edge is independently included in the graph with probability $p$. \begin{lemma}\label{l10} {Consider a random bipartite graph $\mathcal{G}(m,n,p)$. Given a constant $\eta>0$, the inequality \begin{align}\label{eq722} \mathbb{P}\left(\Delta(\mathcal{G}(m,n,p))\geq\max\left\{\frac{3np}{2}, \frac{18(1+\eta)n\log n}{m}\right\}\right)\leq2n^{-\eta} \end{align} holds for every $0< p\leq 1$. Furthermore, we have \begin{align} \mathbb{P}\left(\delta(\mathcal{G}(m,n,p))\leq\frac{mp}{2}\right)\leq 2n^{-\eta} \end{align} provided that $p\geq {12(1+\eta)\log n}/m$.} \end{lemma} \begin{proof} See Appendix~\ref{app_l10}. \end{proof} \noindent{\bf Proof of Theorem~\ref{thm6}:} {The bounds on $p$ and $d$ indeed guarantee that $\mathcal{G}(\bar{G})$ is connected with overwhelming probability. Based on this fact, the result of Lemma~\ref{l10} and the proof of Theorem~\ref{thm5} can be combined to arrive at this theorem. The details are omitted for brevity.~\hfill$\blacksquare$ \begin{remark} The presented probability guarantees for RPCA share some similarities with those derived for noisy matrix completion in~\cite{ge2017no, ge2016matrix}. In particular, according to Theorems~\ref{thm5} and~\ref{thm6} and similar to the results of~\cite{ge2017no, ge2016matrix}, the probability of having a spurious local solution decreases polynomially with respect to the dimension of the problem. Furthermore, similar to our work, the required lower bound on the sampling probability $p$ in~\cite{ge2017no, ge2016matrix} scales polynomially with respect to the condition number of the true solution. Finally, for non-symmetric noisy matrix completion problem,~\cite{ge2017no} shows that the required lower bound on $p$ scales as $\frac{\log n}{m}$. 
Comparing this dependency with the one introduced in Theorem~\ref{thm6}, it can be inferred that our proposed lower bound is higher by a factor of $\frac{n}{m}$; this is not surprising considering the fundamentally different natures of these problems. \end{remark}} \vspace{2mm} \section{Global Convergence of Local Search Algorithms}\label{sec8} So far, it has been shown that the positive RPCA is free of spurious local minima. Furthermore, it has been proven that the global solution is the only D-stationary point in the positive orthant. The question of interest in this section is: How can this unique D-stationary point be obtained? Before answering this question, we will take a detour and revisit the notion of stationarity for smooth optimization problems. Recall that $\bar{\mathbf{x}}$ is a stationary point of a differentiable function $f(\mathbf{x})$ if and only if $\nabla f(\bar{\mathbf{x}}) = 0$ and, under some mild conditions, basic local search algorithms will converge to a stationary point. Therefore, the uniqueness of the stationary point for a smooth optimization problem immediately implies convergence to the global solution. Extra caution should be taken when dealing with non-smooth optimization. In particular, the convergence of classical local search algorithms may fail to hold since the gradient and/or Hessian of the function may not exist at every iteration. To deal with this issue, different local search algorithms have been introduced to guarantee convergence to generalized notions of stationary points for non-smooth optimization, such as directional-stationary (which is used in this paper) or Clarke-stationary (to be defined next).
For a non-smooth and locally Lipschitz function $h(\mathbf{x})$ over the convex set $\mathcal{X}$, define the Clarke generalized directional derivative at the point $\bar{\mathbf{x}}$ in the feasible direction $\mathbf{d}$ as \begin{equation} h^\circ(\bar{\mathbf{x}},\mathbf{d}) := \underset{\begin{subarray}{c} \mathbf{y}\rightarrow \bar{\mathbf{x}}\\ t\downarrow 0 \end{subarray}}{\lim\sup}\frac{h(\mathbf{y}+t\mathbf{d})-h(\mathbf{y})}{t} \end{equation} Note the difference between the ordinary directional derivative $h'(\bar{\mathbf{x}},\mathbf{d})$ and its Clarke generalized counterpart: in the latter, the limit is taken with respect to a~\textit{variable} vector $\mathbf{y}$ that approaches $\bar{\mathbf{x}}$, rather than taking the limit exactly at $\bar{\mathbf{x}}$. The Clarke differential of $h(\mathbf{x})$ at $\bar{\mathbf{x}}$ is defined as the following set~(\cite{clarke1990optimization}): \begin{equation}\label{partial} \partial_C h(\bar{\mathbf{x}}) := \{\mathbf{\psi} | h^\circ(\bar{\mathbf{x}},\mathbf{d})\geq\langle\mathbf{\psi}, \mathbf{d}\rangle, \forall\mathbf{d}\in\mathbb{R}^{n}\ \text{such that}\ \bar{\mathbf{x}}+\mathbf{d}\in \mathcal{X}\} \end{equation} where $\mathcal{X}$ is the feasible set of the problem. A point $\bar{\mathbf{x}}$ is Clarke-stationary (or C-stationary) if $0\in\partial_C h(\bar{\mathbf{x}})$, or equivalently, $h^\circ(\bar{\mathbf{x}},\mathbf{d})\geq 0$ for every feasible direction $\mathbf{d}$. It is well known that C-stationarity is a weaker condition than D-min-stationarity. In particular, every D-min-stationary point is C-stationary, but not all C-stationary points are D-min-stationary.
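The gap between the two notions can be seen on the scalar function $h(x) = -|x|$ at $\bar{x}=0$: the ordinary directional derivative is $h'(0,d) = -|d| < 0$, so the origin is not D-min-stationary, yet $h^\circ(0,d) = |d| \geq 0$ (the limsup over nearby base points $y$ picks up slope $+1$ from one side), so the origin is C-stationary. A numerical sketch of this example (finite-difference approximations, not exact limits):

```python
def h(x):
    return -abs(x)

def directional(h, x, d, t=1e-8):
    """One-sided directional derivative h'(x, d) via a forward difference."""
    return (h(x + t * d) - h(x)) / t

# Ordinary directional derivative at 0: strict descent in both directions,
# so 0 is not D-min-stationary.
assert directional(h, 0.0, 1.0) < 0 and directional(h, 0.0, -1.0) < 0

def clarke_estimate(h, x, d, t=1e-8):
    """Crude lower estimate of h°(x, d): take the sup of difference quotients
    over base points y near x (the limsup in the definition ranges over y -> x)."""
    ys = [x + s for s in (-1e-4, -1e-5, 0.0, 1e-5, 1e-4)]
    return max((h(y + t * d) - h(y)) / t for y in ys)

# Along y < 0 the quotient for d = 1 equals +1 (and symmetrically for d = -1),
# so the Clarke derivative is nonnegative in every direction: 0 is C-stationary.
assert clarke_estimate(h, 0.0, 1.0) >= 0 and clarke_estimate(h, 0.0, -1.0) >= 0
```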
\begin{sloppypar} On the other hand, although some local search algorithms converge to D-min-stationary points for problems with special structures~(\cite{cui2018composite}), the most well-known numerical algorithms for non-smooth optimization---such as gradient sampling, sequential quadratic programming, and exact penalty algorithms---can only guarantee the C-stationarity of the obtained solutions~(\cite{burke2005robust, curtis2012sequential, fasano2014linesearch}). Therefore, it remains to study whether the global solution of the positive RPCA is the only C-stationary point. To answer this question, we need the following two lemmas. \end{sloppypar} \begin{lemma}\label{l1_clarke} The following statements hold: \begin{itemize} \item[-] If $h: \mathcal{X}\rightarrow \mathbb{R}$ and $g: \mathcal{X}\rightarrow \mathbb{R}$ are continuously differentiable at $\bar{\mathbf{x}}\in \mathcal{X}$, then $(h+g)^\circ(\bar{\mathbf{x}},\mathbf{d}) = h^\circ(\bar{\mathbf{x}},\mathbf{d})+g^\circ(\bar{\mathbf{x}},\mathbf{d})$ for every feasible direction $\mathbf{d}$. \item[-] If $h: \mathcal{X}\rightarrow \mathbb{R}$ is continuously differentiable at $\bar{\mathbf{x}}\in \mathcal{X}$, then $h^\circ(\bar{\mathbf{x}},\mathbf{d}) = h'(\bar{\mathbf{x}},\mathbf{d})$ for every feasible direction $\mathbf{d}$. \end{itemize} \end{lemma} \begin{proof} Refer to the textbook by~\cite{clarke1990optimization}. \end{proof} \begin{lemma}\label{l2_clarke} Let $h_1(\mathbf{x}), h_2(\mathbf{x}), ..., h_m(\mathbf{x}):\mathcal{X}\rightarrow \mathbb{R}$ be continuous and locally Lipschitz functions at $\bar{\mathbf{x}}\in\mathcal{X}$. Define \begin{equation} h(\mathbf{x}) = \max_{1\leq i\leq m} h_i(\mathbf{x}) \end{equation} and let $I(\bar{\mathbf{x}})$ be the set of indices $i$ such that $h(\bar{\mathbf{x}}) = h_i(\bar{\mathbf{x}})$.
Then, \begin{equation} h^\circ(\bar{\mathbf{x}},\mathbf{d})\leq \max_{i\in I(\bar{\mathbf{x}})}h_i^\circ(\bar{\mathbf{x}},\mathbf{d}) \end{equation} for every feasible direction $\mathbf{d}$. \end{lemma} \begin{proof} Consider a feasible point $\mathbf{y}\in \mathcal{B}(\bar{\mathbf{x}},\epsilon)\cap\mathcal{X}$, where $\mathcal{B}(\bar{\mathbf{x}},\epsilon)$ is the Euclidean ball with the center $\bar{\mathbf{x}}$ and radius $\epsilon$. First, we prove that $I(\mathbf{y})\subseteq I(\bar{\mathbf{x}})$ for sufficiently small $\epsilon>0$. Notice that $h_j(\bar{\mathbf{x}})<h_i(\bar{\mathbf{x}})$ for every $i\in I(\bar{\mathbf{x}})$ and $j\in\{1,...,m\}\backslash I(\bar{\mathbf{x}})$. Therefore, due to the continuity of $h_i(\cdot)$ for every $i\in\{1,...,m\}$, it follows that there exists $\bar\epsilon>0$ such that $h_j(\mathbf{y})<h_i(\mathbf{y})$ for every $\mathbf{y}\in \mathcal{B}(\bar{\mathbf{x}},\epsilon)\cap \mathcal{X}$ with $0<\epsilon<\bar{\epsilon}$. This implies that $I(\mathbf{y}+t\mathbf{d})\subseteq I(\mathbf{y})\subseteq I(\bar{\mathbf{x}})$ for every $\mathbf{y}\in \mathcal{B}(\bar{\mathbf{x}},\epsilon)\cap \mathcal{X}$ and every feasible direction $\mathbf{d}$ with sufficiently small $\epsilon>0$ and $t>0$.
Now, note that \begin{align} h(\mathbf{y}+t\mathbf{d})-h(\mathbf{y}) = \max_{i\in I(\mathbf{y}+t\mathbf{d})} \left(h_i(\mathbf{y}+t\mathbf{d})-h_i(\mathbf{y})\right)\leq \max_{i\in I(\bar{\mathbf{x}})} \left(h_i(\mathbf{y}+t\mathbf{d})-h_i(\mathbf{y})\right) \end{align} This implies that \begin{align} h^\circ(\bar{\mathbf{x}}, \mathbf{d}) = \underset{\begin{subarray}{c} \mathbf{y}\rightarrow \bar{\mathbf{x}}\\ t\downarrow 0 \end{subarray}}{\lim\sup}\frac{h(\mathbf{y}+t\mathbf{d})-h(\mathbf{y})}{t}\leq \max_{i\in I(\bar{\mathbf{x}})} \left\{\underset{\begin{subarray}{c} \mathbf{y}\rightarrow \bar{\mathbf{x}}\\ t\downarrow 0 \end{subarray}}{\lim\sup}\frac{h_i(\mathbf{y}+t\mathbf{d})-h_i(\mathbf{y})}{t}\right\} = \max_{i\in I(\bar{\mathbf{x}})} h^\circ_i(\bar{\mathbf{x}}, \mathbf{d}) \end{align} This completes the proof. \end{proof} Based on the above lemmas, we develop the following theorem. \begin{theorem}\label{thm14} Under the conditions of Theorem~\ref{thm4} and assuming that $\mathcal{G}(\Omega)$ is connected, the global solution and the origin are the only C-stationary points of the symmetric positive RPCA. A similar result holds for the asymmetric positive RPCA. \end{theorem} \begin{proof} Without loss of generality, we only consider the symmetric case. At a given point $\mathbf{u}$, the function $f(\mathbf{u})$ is locally Lipschitz and can be written as \begin{equation} f(\mathbf{u}) = \sum_{(i,j)\in\Omega}\max\{u_iu_j-X_{ij}, -u_iu_j+X_{ij}\} = \max_{\sigma\in \mathcal{M}}f_\sigma(\mathbf{u}) \end{equation} where $\mathcal{M}$ is the class of functions from $\Omega$ to $\{-1,+1\}$ and $f_{\sigma}(\mathbf{u})$ is defined as \begin{equation} f_{\sigma}(\mathbf{u}) = \sum_{(i,j)\in\Omega}\sigma(i,j)(u_iu_j-X_{ij}). \end{equation} Hence, \begin{equation} f_{\mathrm{reg}}(\mathbf{u}) = R(\mathbf{u})+\max_{\sigma\in \mathcal{M}}f_\sigma(\mathbf{u}) \end{equation} Notice that each function $f_{\sigma}(\mathbf{u})$ is differentiable and locally Lipschitz for every $\sigma\in\mathcal{M}$.
By contradiction, suppose that there exists $\mathbf{u}\geq 0$ such that $\mathbf{u} \not\in \left\{\mathbf{u}^*, 0\right\}$ and $0\in\partial_C f_{\mathrm{reg}}(\mathbf{u})$. Furthermore, define $I(\mathbf{u})$ as the set of all functions $\sigma\in\mathcal{M}$ for which $f_{\sigma}(\mathbf{u}) = f(\mathbf{u})$. Using the proof technique developed in Theorem~\ref{thm4}, one can easily verify that there exists a feasible direction $\mathbf{d}$ such that $f'_{\sigma}(\mathbf{u},\mathbf{d})+R'(\mathbf{u},\mathbf{d})<0$ for every $\sigma\in I(\mathbf{u})$. By invoking Lemma~\ref{l1_clarke} for every $\sigma\in I(\mathbf{u})$, it can be concluded that $f^\circ_{\sigma}(\mathbf{u},\mathbf{d})+R^\circ(\mathbf{u},\mathbf{d})<0$. This, together with Lemma~\ref{l2_clarke}, certifies that $f_{\mathrm{reg}}^\circ(\mathbf{u},\mathbf{d})<0$, hence contradicting the assumption $0\in\partial_C f_{\mathrm{reg}}(\mathbf{u})$. \end{proof} {\section{Discussions on Extension to Rank-$r$}\label{sec:rankr} So far, we have characterized the conditions under which the non-negative rank-1 RPCA has no spurious local solution. However, the following question has been left unanswered: \textit{Can these results be extended to the general non-negative \textbf{rank-$\bf r$} RPCA?} As a first step toward answering this question and similar to our analysis in the rank-1 case, we consider the noiseless symmetric non-negative rank-$r$ RPCA defined as \begin{equation}\tag{P1-Sym-$r$}\label{p2-sym_r} \min_{U\in\mathbb{R}^{n\times r}_+}\quad f(U) = \|\mathcal{P}_{\Omega}(U^*{U^*}^\top-UU^\top)\|_1 \end{equation} \noindent Indeed, a fundamental roadblock in extending the results of Section~\ref{sec3} to~\eqref{p2-sym_r} is the implicit \textit{rotational symmetry} in the solution: given a rotation matrix $R$ and a solution $\tilde{U}$ to~\eqref{p2-sym_r}, $\tilde UR$ is another feasible solution with $f(\tilde UR) = f(\tilde U)$, provided that $\tilde UR$ is a non-negative matrix. 
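This rotational symmetry is easy to see in a small numerical sketch (an illustrative $r=2$ factor with arbitrary values): any rotation preserves the Gram matrix $UU^\top$, and hence the objective, while a suitable rotation of a strictly positive factor yields a non-negative factor with a zero entry.

```python
import math

# A strictly positive n x 2 factor (hypothetical values).
U = [[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]]

def rotate(U, theta):
    """Right-multiply U by a 2 x 2 rotation matrix R(theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c * a - s * b, s * a + c * b] for a, b in U]

def gram(U):
    """Gram matrix U U^T, which determines the l1 objective."""
    return [[sum(x * y for x, y in zip(r1, r2)) for r2 in U] for r1 in U]

# Rotations leave U U^T unchanged, so f(U R) = f(U) whenever U R is feasible.
V = rotate(U, 0.3)
assert all(abs(a - b) < 1e-9
           for r1, r2 in zip(gram(U), gram(V)) for a, b in zip(r1, r2))

# Rotating so that the row with the smallest angle hits the axis yields a
# non-negative factor with a zero entry, even though U is strictly positive.
W = rotate(U, -math.atan2(1.0, 2.0))
assert min(w for row in W for w in row) > -1e-9
assert any(abs(w) < 1e-9 for row in W for w in row)
```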
In the rank-1 case, this does not pose any problem since $R = 1$ is the only possible value. However, for the general rank-$r$ case with $r\geq 2$, this rotational symmetry undermines the strict positivity assumption of the true components. In particular, even if the true solution $U^*$ is strictly positive, there exists a rotation matrix $R$ such that $U^*R$ is non-negative with at least one zero entry. This in turn implies that Lemma~\ref{l1} and, as a consequence, the technique used in Theorem~\ref{thm1} may not be readily extended to the rank-$r$ cases. Despite the theoretical difficulties in extending the presented results to the general rank-$r$ instances, we have indeed observed---through thousands of simulations---that in general, the sub-gradient method introduced in Section~\ref{sec:num} successfully converges to a solution $U$ that satisfies $UU^\top = U^*{U^*}^\top$, even if the measurement matrix is corrupted with a surprisingly dense noise matrix. To illustrate this, we consider randomly generated instances of the problem with the dimension $n = 100$ and the rank $r \in \{2,3,4,5\}$. For each instance, the elements of $U^*$ are uniformly chosen from the interval $[0.5, 2.5]$. Furthermore, each element in the upper triangular part of the noise matrix $S$ is set to $2$ and $0$ with probabilities $d$ and $1-d$, respectively. For each rank $r$ and the noise probability $d$, we consider 500 independent instances of the problem and solve them using the randomly initialized sub-gradient method. Similar to Subsection~\ref{subsec:exact}, we assume that a solution is recovered exactly if $\|UU^\top-U^*{U^*}^\top\|_F/\|U^*{U^*}^\top\|_F\leq 10^{-4}$. Figure~\ref{rankr} demonstrates the ratio of the instances for which the sub-gradient method successfully recovers the true solution. 
As illustrated in this figure, $d$ can be as large as $0.30$, $0.28$, $0.26$, and $0.25$ to guarantee a success rate of at least $90\%$ when $r$ is equal to $2$, $3$, $4$, and $5$, respectively. This empirical study suggests that one of the following statements may hold for the positive rank-$r$ RPCA: (1) it is devoid of spurious local minima, or (2) its spurious local minima can be escaped efficiently using the sub-gradient method. Further investigation of this direction is left as an enticing challenge for future research. \begin{figure} \centering \includegraphics[width=.45\columnwidth]{success_rate_rankr.eps} \caption{ \footnotesize The success rate of the randomly initialized sub-gradient method for the positive rank-$r$ RPCA.} \label{rankr} \end{figure} } \section{Conclusion} This paper deals with the non-negative {rank-1} robust principal component analysis (RPCA), where the goal is to recover the true non-negative principal component of the data matrix exactly, using partial and potentially noisy measurements of the data matrix. The main difference between the RPCA and its classical counterpart is the sparse-but-arbitrarily-large values of the additive noise. The most commonly known methods for solving the RPCA are based on convex relaxations, where the problem is \textit{convexified} at the expense of significantly increasing the number of variables. In this work, we show that the original non-convex and non-smooth $\ell_1$ formulation of the positive {rank-1} RPCA problem based on the well-known Burer-Monteiro approach has benign landscape, i.e., it does not have any spurious local solution and has a unique global solution that coincides with the true components. In particular, we provide strong deterministic and statistical guarantees for the benign landscape of the positive {rank-1} RPCA and show that the absence of spurious local solutions is guaranteed to hold with a surprisingly large number of corrupted measurements. 
While the results on ``no spurious local minima'' are ubiquitous for smooth problems related to matrix completion and sensing, to the best of our knowledge, the results presented in this paper are the first to prove the absence of spurious local minima when the objective function is non-smooth. {Finally, through extensive simulations, we provide strong evidence suggesting that the proposed results may hold for the general non-negative rank-$r$ RPCA. The extension of our theoretical results to this generalized problem is left as future work.} \section*{Acknowledgments} The authors are grateful to Javad Lavaei, Richard Zhang, and Cedric Josz for insightful discussions on earlier versions of this manuscript. Moreover, the authors thank Richard Zhang for his assistance in providing us with the code for the simulations. This work was supported by grants from ONR, AFOSR and NSF. \bibliographystyle{IEEEtran}
https://arxiv.org/abs/1806.11566
Analysis and preconditioning of parameter-robust finite element methods for Biot's consolidation model
In this paper we consider a three-field formulation of the Biot model which has the displacement, the total pressure, and the pore pressure as unknowns. For parameter-robust stability analysis, we first show a priori estimates of the continuous problem with parameter-dependent norms. Then we study finite element discretizations which provide parameter-robust error estimates and preconditioners. For finite element discretizations we consider standard mixed finite element as well as stabilized methods for the Stokes equations, and the complete error analysis of semidiscrete solutions is given. Abstract forms of parameter-robust preconditioners are investigated by the operator preconditioning approach. The theoretical results are illustrated with numerical experiments.
\section{Introduction} In poroelastic media saturated by fluids, the behavior of the porous medium and the saturating fluid flow is described by Biot's consolidation model \cite{MR0066874}. Poroelasticity models are widely used in geophysics and petroleum engineering applications, so the development of finite element methods for poroelastic models began more than four decades ago \cite{VermeerVerruijt1981,ZienkiewiczShiomi1984} and is still an active research area \cite{MuradLoula1992,MuradLoula1994,MuradThomeeLoula1996,KorsaweStarke2005,PhillipsWheeler2007a, PhillipsWheeler2007b,Yi2013,ChenLuoFeng2013,LeeEtAl2017,berger2015stabilized,NabilRiviere2018,bause2017space}. Poroelasticity models for practical applications involve widely varying ranges of parameters. For example, geophysical materials are compressible solids, whereas most soft biological tissues are modelled as incompressible or nearly incompressible materials. It turns out that the different parameter ranges are intimately related to the accuracy of numerical methods and the construction of efficient iterative solvers. Therefore, one of the main interests in numerical methods for the Biot model is robustness over the model parameter ranges, and there are various recent studies on parameter-robust numerical methods \cite{Lee2016,Hong-Kraus,Feng-Ge-Li,Fu,Lee2018,OyarzuaRuizBaier2016} and efficient solvers \cite{hu2017nonconforming,rodrigo2017new,LeeEtAl2017,BaerlandLeeMardalWinther2017}. Recently, a new three-field formulation for the Biot model was independently introduced in \cite{LeeEtAl2017} and \cite{OyarzuaRuizBaier2016} with different foci of interest. In \cite{LeeEtAl2017}, the main interest is the construction of preconditioners robust for various parameters (large bulk and shear moduli, small hydraulic conductivity, and small time step sizes). In \cite{OyarzuaRuizBaier2016}, the main interest is optimal error estimates robust for large bulk modulus.
The two main purposes of this work are to provide a comprehensive a priori error analysis of time-dependent solutions of the three-field formulation and to extend the analysis to stabilized numerical methods. In \cite{OyarzuaRuizBaier2016}, stability of the static system is proved using compactness of a linear operator and error estimates are obtained with a standard argument, but a complete error analysis for time-dependent problems was not given. In contrast, we do not use the compactness argument because it is difficult to extend the resulting error estimates to time-dependent solutions. Instead, we utilize improved energy-type estimates and prove a priori error estimates for time-dependent solutions without using the Gronwall inequality. We also consider stabilized methods in this paper and provide a complete error analysis and an abstract form of parameter-robust preconditioners. The paper is organized as follows. In Section~2, we introduce preliminary materials including notations, definitions, and the variational formulation of the Biot model. In Section~3, we discuss stability of the system and prove energy-type estimates of solutions. In Section~4, we discuss finite element discretizations and the a priori error estimates of semidiscrete solutions. In Section~5, we prove stability of the static system with respect to parameter-dependent norms and propose abstract forms of parameter-robust preconditioners. Finally, we present numerical results illustrating convergence of errors and parameter-robust performance of preconditioners in Section~6. \section{Preliminaries} \label{sec:prelim} \subsection{Notations} Let $\Omega$ be a bounded polygonal domain with Lipschitz continuous boundary in ${\mathbb R}^n$ with $n=2$ or $3$. For a nonnegative integer $m$, $H^m (\Omega)$, $H^m(\Omega; {\mathbb R}^n)$ denote the standard ${\mathbb R}$- and ${\mathbb R}^n$-valued Sobolev spaces based on the $L^2$ norm.
For a Banach space $\mc{X}$ and $(a, b) \subset {\mathbb R}$, $C^0 (a, b; \mc{X})$ denotes the set of functions $f : (a, b) \rightarrow \mc{X}$ which are continuous in $t \in (a,b)$. For an integer $m \geq 1$ we define \begin{align*} C^m (a, b ; \mc{X}) = \{ f \, | \, \partial^{i}f/\partial t^{i} \in C^0(a, b\,;\mc{X}), \, 0 \leq i \leq m \}, \end{align*} where $\partial f/\partial t$ is the time derivative in the sense of the Fr\'echet derivative in $\mc{X}$ (see e.g., \cite{Yosida-book}). We also define the space-time norm \begin{align*} \| f \|_{L^p(a,b; \mc{X})} = \begin{cases} \left( \int_a^b \| f \|_{\mc{X}}^p ds \right)^{1/p}, \quad 1 \leq p < \infty, \\ \operatorname{esssup}_{t \in (a,b)} \| f \|_{\mc{X}}, \quad p = \infty. \end{cases} \end{align*} If a time interval $J$ is clear from context, then we use $L^p \mc{X}$ to denote $L^p(J; \mc{X})$ for simplicity. We define the space-time Sobolev spaces $W^{k,p}(J; \mc{X})$ for a nonnegative integer $k$ and $1 \leq p \leq \infty$ as the closure of $C^k (J; \mc{X})$ with the norm $\| f \|_{W^{k,p} \mc{X}} = \sum_{i=0}^k \| \partial^i f / \partial t^i \|_{L^p \mc{X}}$. For simplicity we adopt the convention $\| f, g \|_\mc{X} = \| f \|_\mc{X} + \| g \|_\mc{X}$, and $\dot{f}$ is used to denote the time derivative of $f$. We use $\mathcal{T}_h$ to denote a shape-regular triangulation of $\Omega$ for which $h$ is the maximum diameter of triangles (or tetrahedra), and $\mathcal{E}_h$ is the corresponding set of edges (faces). For $E \in \mathcal{E}_h$ and functions $\boldsymbol{f}, \boldsymbol{g} : \mathcal{E}_h \rightarrow {\mathbb R}^n$ we define \begin{align*} \langle \boldsymbol{f}, \boldsymbol{g} \rangle_E = \int_E \boldsymbol{f} \cdot \boldsymbol{g} \,ds, \qquad \langle \boldsymbol{f}, \boldsymbol{g} \rangle = \sum_{E \in \mathcal{E}_h} \langle \boldsymbol{f}, \boldsymbol{g} \rangle_E.
\end{align*} For an integer $k \geq 0$ and for each $T \in \mc{T}_h$, $\mc{P}_k(T)$ is the space of polynomials of degree $\le k$ on $T$, and $\mc{P}_k(\mc{T}_h)$ denotes the space \algns{ \mc{P}_k(\mc{T}_h) = \case{ \LRc{q \in H^1(\Omega) \;:\; q|_T \in \mc{P}_k(T), \; T \in \mc{T}_h } \quad \text{if } k \ge 1 \\ \LRc{q \in L^2(\Omega) \;:\; q|_T \in \mc{P}_k(T), \; T \in \mc{T}_h } \quad \text{if } k = 0 \\ } . } For a vector space ${\mathbb X}$, we use $\mathcal{P}_k(G; {\mathbb X})$ and $\mathcal{P}_k(\mathcal{T}_h; {\mathbb X})$ to denote the spaces of ${\mathbb X}$-valued polynomials with the same conditions. \subsection{Biot's consolidation model} Throughout this paper we restrict our interest to quasistatic consolidation problems, so the acceleration term is ignored. In our description of the model, $\boldsymbol{u}$ is the displacement of the porous medium, $p$ is the pore pressure, $\boldsymbol{f}$ is the body force, and $g$ is the fluid mass change rate. The governing equations of Biot's consolidation model with an isotropic elastic porous medium are \subeqns{eq:strong-eq}{ \label{eq:strong-eq1}-\div \LRp{ 2 \mu \epsilon(\boldsymbol{u}) + (\lambda \div \bs{u} - \alpha p) \Bbb{I} } &= \boldsymbol{f}, \\ \label{eq:strong-eq2} s_0 \dot{p} + \alpha \div \dot{\boldsymbol{u}} - \div (\underline{\boldsymbol{\kappa}} \nabla p) &= g, } where $\mu$ and $\lambda$ are the Lam\'e coefficients, $s_0 \geq 0 $ is the constrained specific storage coefficient, $\underline{\boldsymbol{\kappa}}$ is the hydraulic conductivity tensor, $\alpha>0$ is the Biot--Willis constant, which is close to 1, and $\Bbb{I}$ is the identity matrix. We assume that $\mu$ is uniformly bounded above and below by positive constants. We assume that $\lambda$ has a uniformly positive lower bound, but $\lambda$ may not have a uniform upper bound; the limit $\lambda = +\infty$ corresponds to an incompressible solid matrix.
We assume that there are constants $c_0, c_1$ such that \algns{ 0 \le c_0 \le s_0 (x) \le c_1 , \qquad x \in \Omega. } We remark that $s_0$ is related to $\alpha$, the porosity $\phi$, and the bulk moduli of the solid and fluid. Under the assumption that $\phi$ is uniform with $0 < \phi < \alpha$, if the solid is not incompressible, then $s_0 \ge C/\lambda$ holds with a constant $C$ of order one. However, $s_0$ may vanish on a subdomain if $\lambda = + \infty$ on the subdomain and the fluid is incompressible. The hydraulic conductivity tensor $\underline{\boldsymbol{\kappa}} = \underline{\boldsymbol{\kappa}}(x)$ is positive definite with uniform lower and upper bounds $\kappa_0, \kappa_1 >0$, i.e., \algns{ \kappa_{0} | \xi |^2 \le \xi^T \underline{\boldsymbol{\kappa}}(x) \xi \le \kappa_{1} | \xi |^2 , \qquad \forall \;0 \not = \xi \in {\mathbb R}^n,\quad \text{a.e.} \; x \in \Omega . } For details on the derivation of these equations from physical modelling, we refer to standard porous media texts, e.g., \cite{anandarajah2010computational}. For well-posedness of the problem, the equations \eqref{eq:strong-eq} need appropriate boundary and initial conditions. We assume that there are two partitions of $\partial \Omega$, \begin{align*} \partial \Omega = \Gamma_p \cup \Gamma_f, \qquad \partial \Omega = \Gamma_d \cup \Gamma_t, \qquad | \Gamma_d |, |\Gamma_p| > 0 \end{align*} where $| \Gamma |$ is the $(n-1)$-dimensional Lebesgue measure of $\Gamma$.
We also assume that boundary conditions are given as \begin{align} \label{eq:bc} p(t) = 0 \text{ on } \Gamma_p, \quad - \underline{\boldsymbol{\kappa}} \nabla p(t) \cdot \boldsymbol{n} = 0 \text{ on } \Gamma_f, \quad \boldsymbol{u}(t) = 0 \text{ on } \Gamma_d, \quad \underline{\boldsymbol{\sigma}}(t) \boldsymbol{n} = 0 \text{ on } \Gamma_t, \end{align} for all $t \in (0, T]$ where $\boldsymbol{n}$ is the outward unit normal vector field on $\partial \Omega$ and $\underline{\boldsymbol{\sigma}} := 2 \mu \epsilon(\boldsymbol{u}) + (\lambda \div \bs{u} - \alpha p) \Bbb{I}$, the Cauchy stress tensor. Here we only consider the homogeneous boundary condition for simplicity but our method can be easily extended to problems with nonhomogeneous boundary conditions. We also assume that given initial data $p(0), \boldsymbol{u}(0)$ and $\boldsymbol{f}(0)$ satisfy the compatibility condition \eqref{eq:strong-eq1}. Well-posedness of this system under these assumptions can be found in \cite{MR1790411}. \subsection{The formulation with the displacement, total and pore pressures} In \cite{LeeEtAl2017}, a formulation of the Biot model with three unknowns was introduced in order to obtain finite element discretizations of the Biot model with parameter-robust preconditioning. Introduction of a new unknown $p_{t} := \lambda \div \boldsymbol{u} - \alpha p_{p}$, which will be called total pressure, gives an additional equation $\div \bs{u} - \lambda^{-1} (p_{t} + \alpha p_{p})= 0$. Therefore, we consider a system \subeqns{eq:upp-eq}{ \label{eq:upp-eq1} - \div \LRp{ 2 \mu \epsilon(\boldsymbol{u}) } - \nabla p_{t} &= \boldsymbol{f}, \\ \label{eq:upp-eq2} \div \bs{u} - \lambda^{-1} (p_{t} + \alpha p_{p}) &= 0, \\ \label{eq:upp-eq3} - \alpha \lambda^{-1} \dot{p}_{t} - \LRp{s_0 + \alpha^2 \lambda^{-1} } \dot{p}_{p} + \div ( \underline{\bs{\kappa}} \nabla p_{p}) &= -g . 
} Let us define function spaces \algns{ \bs{V} = \LRc{ \bs{v} \in {H}^1 (\Omega; {\mathbb R}^n) \;:\; \bs{v}|_{\Gamma_d} = 0 }, \quad Q_{t} = L^2(\Omega), \quad Q_{p} = \{ q \in H^1(\Omega) \;:\; q|_{\Gamma_p} = 0 \} , } and consider the following variational form of \eqref{eq:upp-eq}: ({\bf VP}) For initial data $(\bs{u}(0), p_{t}(0), p_{p}(0)) \in \bs{V} \times Q_{t} \times Q_{p}$ satisfying \subeqns{eq:comp-cond}{ \label{eq:comp-cond-1} \LRp{2 \mu \epsilon(\boldsymbol{u} (0)), \epsilon(\boldsymbol{v}) } + \LRp{ p_{t} (0), \div \boldsymbol{v} } &= \LRp{ \boldsymbol{f} (0), \boldsymbol{v} } & & \forall \bs{v} \in \bs{V}, \\ \label{eq:comp-cond-2} \div \bs{u} (0) - \lambda^{-1} (p_{t} (0) + \alpha p_{p} (0)) &= 0 , } find $(\bs{u}, p_{t}, p_{p}) \in C^1(0,T; \bs{V}) \times C^1(0,T; Q_{t}) \times C^1(0,T; Q_{p})$ such that \subeqns{eq:weak-upp-eq}{ \label{eq:weak-upp-eq1} \LRp{2 \mu \epsilon(\boldsymbol{u}), \epsilon(\boldsymbol{v}) } + \LRp{ p_{t} , \div \boldsymbol{v} } &= \LRp{ \boldsymbol{f}, \boldsymbol{v} } & & \forall \boldsymbol{v} \in \boldsymbol{V}, \\ \label{eq:weak-upp-eq2} \LRp{ \div \bs{u} , q_{t} } - \LRp{\lambda^{-1} p_{t} , q_{t} } - \LRp{ \alpha \lambda^{-1} p_{p} , q_{t} } &= 0 & & \forall q_{t} \in Q_{t} , \\ \label{eq:weak-upp-eq3} - \LRp{ \alpha \lambda^{-1} \dot{p}_{t}, q_{p} } - \LRp{ \LRp{s_0 + \alpha^2 \lambda^{-1} } \dot{p}_{p} , q_{p} } - \LRp{ \underline{\bs{\kappa}} \nabla p_{p} , \nabla q_{p} } &= -\LRp{ g, q_{p} } & & \forall q_{p} \in Q_{p} . } \section{Energy estimates and stability} In this section, we discuss stability of the system \eqref{eq:weak-upp-eq} with parameter-dependent norms. The line of argument in this stability analysis will also lead to the a priori error analysis in the next section.
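Before proceeding, we record a short verification, included for completeness, that the three-field system \eqref{eq:upp-eq} is equivalent to the original problem \eqref{eq:strong-eq} with $p = p_{p}$. By the definition of the total pressure,
\algns{
2 \mu \epsilon(\bs{u}) + (\lambda \div \bs{u} - \alpha p_{p}) \Bbb{I} = 2 \mu \epsilon(\bs{u}) + p_{t} \Bbb{I} ,
}
so \eqref{eq:strong-eq1} reduces to \eqref{eq:upp-eq1}. Moreover, differentiating \eqref{eq:upp-eq2} in time gives $\div \dot{\bs{u}} = \lambda^{-1} ( \dot{p}_{t} + \alpha \dot{p}_{p} )$, hence
\algns{
s_0 \dot{p}_{p} + \alpha \div \dot{\bs{u}} = \alpha \lambda^{-1} \dot{p}_{t} + \LRp{ s_0 + \alpha^2 \lambda^{-1} } \dot{p}_{p} ,
}
and \eqref{eq:strong-eq2} becomes \eqref{eq:upp-eq3} after multiplication by $-1$.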
Let us first define parameter-dependent norms \algns{ \nor{\bs{v}}_{\bs{V}} = \LRp{2\mu \epsilon(\bs{v}), \epsilon(\bs{v})}^{\half}, \quad \nor{q_{p}}_{1,\kappa} = \LRp{\underline{\bs{\kappa}} \nabla q_{p}, \nabla q_{p}}^{\half}, \quad \norw{q_{t}}{Q_{t}} = ((2\mu)^{-1} q_{t}, q_{t})^{\half}, } and for a nonnegative function (or a positive semidefinite tensor) $w$, $\nor{q }_{0,w}$ denotes $\nor{q }_{0,w} = \LRp{w q, q}^{\half}$. We will use $\bs{V}'$ and $Q_{t}'$ to denote the dual spaces of $\bs{V}$ and $Q_{t}$, respectively. We also use $H^{-1}$ to denote the dual space of $H_{\Gamma_p}^1$ with the norm \algns{ \norw{q}{-1} = \sup_{p \in H_{\Gamma_p}^1} \frac{\LRp{p, q}}{\norw{\nabla p}{0}} . } \begin{theorem} Assume that $\bs{f} \in W^{1,2}(0,T; \bs{V}')$, $g \in L^2(0,T; L^2) \cap W^{1,1}(0,T; H^{-1})$, and initial data $(\bs{u}(0), p_{t}(0), p_{p}(0))$ satisfying \eqref{eq:comp-cond} are given. If $(\bs{u}, p_{t}, p_{p})$ is a solution of \eqref{eq:weak-upp-eq}, then \algn{ \label{eq:estm_1} &\nor{\bs{u}}_{L^{\infty}(0,t; \bs{V})} + \nor{p_{t} - \alpha p_{p}}_{L^\infty(0,t; L_{\lambda^{-1}}^2)} + \nor{p_{p}}_{L^\infty(0,t; L_{s_0}^2)} + \nor{p_{p}}_{L^2(0,t; H_{\kappa}^1)} \\ &\quad \lesssim \nor{\bs{u}(0)}_{\bs{V}} + \nor{p_{t}(0) - \alpha p_{p}(0)}_{0,\lambda^{-1}} + \nor{p_{p}(0)}_{0,s_0} + \nor{\bs{f}}_{W^{1,1}(0,t; \bs{V}')} \notag \\ &\qquad + \min \LRc{{c_0}^{-\half} \nor{g}_{\LtL{1}{}{2}} , 2 \kappa_0^{-\half} \nor{g}_{\LtH{2}{}{-1}} } , \notag \\ \label{eq:estm_2} &\nor{\dot{\bs{u}}}_{L^{2}(0,t; \bs{V})} + \nor{\dot{p_{t}} - \alpha \dot{p_{p}}}_{L^2(0,t; L_{\lambda^{-1}}^2)} + \nor{\dot{p_{p}}}_{L^2(0,t; L_{s_0}^2)} + \nor{p_{p}}_{L^\infty(0,t; H_{\kappa}^1)} \\ \notag &\quad \lesssim \nor{p_{p}(0)}_{1,\kappa} + \nor{\dot{\bs{f}}}_{L^2(0,t; \bs{V}')} \\ \notag & \qquad + \min \{ c_0^{-\half} \nor{g}_{L^2(0,t; L^2)}, \kappa_0^{-\half} \nor{g}_{W^{1,1}(0,t; H^{-1})} \} \\ \label{eq:estm_3} &\nor{p_{t}}_{\LtV{\infty}{Q_{t}}} \le C_0 (
\nor{\bs{u}}_{L^\infty(0,t; \bs{V})} + \nor{\bs{f}}_{L^\infty(0,t; \bs{V}')} ) } with $C_0$ depending on $\Omega$ and $\mu$. The constants in \eqref{eq:estm_1} and \eqref{eq:estm_2} are independent of $\Omega$ and parameters. \end{theorem} \begin{proof} We first prove \eqref{eq:estm_1}. Taking $\bs{v} = \dot{\bs{u}}$ in \eqref{eq:weak-upp-eq1}, $q_{t} = - p_{t}$ in the time differentiation of \eqref{eq:weak-upp-eq2}, $q_{p} = -p_{p}$ in \eqref{eq:weak-upp-eq3}, and adding the three equations together, we have \algn{ \label{eq:energy-eq} \half \frac{d}{dt} \LRp{ \nor{\bs{u}}_{\bs{V}}^2 + \nor{p_{t} - \alpha p_{p}}_{0,\lambda^{-1}}^2 + \nor{p_{p}}_{0,s_0}^2 } + \nor{p_{p}}_{1,\kappa}^2 = (\bs{f}, \dot{\bs{u}}) + (g, p_{p}) . } Let us define $X(s) \ge 0$ and $Y(s) \ge 0$ for $s \ge 0$ as \algns{ X(s)^2 &= \nor{\bs{u}(s)}_{\bs{V}}^2 + \nor{p_{t}(s) - \alpha p_{p}(s)}_{0,\lambda^{-1}}^2 + \nor{p_{p}(s)}_{0,s_0}^2 , \\ Y(s)^2 &= \int_0^s \nor{p_{p}(r)}_{1,\kappa}^2 dr . } Then integration of \eqref{eq:energy-eq} from 0 to $t$ gives \algns{ \half(X(t)^2 - X(0)^2) + Y(t)^2 = \int_0^t \LRs{(\bs{f}(s), \dot{\bs{u}}(s)) + (g(s),p_{p}(s)) } \,ds . } By integration by parts in time, \algns{ \int_0^t \LRs{(\bs{f}(s), \dot{\bs{u}}(s)) }\,ds = (\bs{f}(t), \bs{u}(t)) - (\bs{f}(0), \bs{u}(0)) - \int_0^t (\dot{\bs{f}}(s), \bs{u}(s)) \,ds , } therefore we have \algn{ \notag &\half(X(t)^2 - X(0)^2) + Y(t)^2 \\ \label{eq:int-ineq1} &\quad = (\bs{f}(t), \bs{u}(t)) - (\bs{f}(0), \bs{u}(0)) + \int_0^t \LRs{(- \dot{\bs{f}}(s), \bs{u}(s)) + (g(s),p_{p}(s)) } \,ds \\ \notag &\quad \le (\nor{\bs{f}}_{L^\infty(0,t; \bs{V}')} + \|{\dot{\bs{f}}}\|_{L^1(0,t; \bs{V}')} ) \nor{\bs{u}}_{L^\infty(0,t;\bs{V})} \\ \notag &\qquad + \min\{ c_0^{-\half} \nor{g}_{L^1(0,t; L^2)} \nor{p_{p}}_{L^\infty(0,t; L_{s_0}^2)}, \kappa_0^{-\half} \nor{g}_{L^2(0,t; H^{-1})} Y(t) \} .
} To prove \eqref{eq:estm_1} for \algns{ \nor{\bs{u}}_{L^{\infty}(0,t; \bs{V})} + \nor{p_{t} - \alpha p_{p}}_{L^\infty(0,t; L_{\lambda^{-1}}^2)} + \nor{p_{p}}_{L^\infty(0,t; L_{s_0}^2)}, } note that it suffices to show the estimate for $t \in (0,T]$ such that $X(t) = \max_{s \in (0,t]} X(s)$, so we assume this maximality condition of $X(t)$. From the above inequality we can derive \algns{ &X(t)^2 + 2 Y(t)^2 \\ &\quad \le X(0)^2 + 2 \LRp{ ( \nor{\bs{f}}_{L^\infty(0,t; \bs{V}')} + \|{\dot{\bs{f}}}\|_{L^1(0,t; \bs{V}')}) + c_0^{-\half} \nor{g}_{L^1(0,t; L^2)} } X(t) } or \algns{ X(t)^2 + 2 Y(t)^2 &\le X(0)^2 + 2 ( \nor{\bs{f}}_{L^\infty(0,t; \bs{V}')} + \|{\dot{\bs{f}}}\|_{L^1(0,t; \bs{V}')} ) X(t) \\ & \quad + 2 \kappa_0^{-\half} \nor{g}_{L^2(0,t; H^{-1})} Y(t) . } Applying Young's inequality, we can obtain either \algns{ X(t)^2 &\le 2 X(0)^2 + 4 \LRp{ ( \nor{\bs{f}}_{L^\infty(0,t; \bs{V}')} + \|{\dot{\bs{f}}}\|_{L^1(0,t; \bs{V}')}) + {c}_0^{-\half} \nor{g}_{L^1(0,t; L^2)} }^2 } or \algns{ X(t)^2 \le 2 X(0)^2 + 4 ( \nor{\bs{f}}_{L^\infty(0,t; \bs{V}')} + \|{\dot{\bs{f}}}\|_{L^1(0,t; \bs{V}')} )^2 + 2 \kappa_0^{-1} \nor{g}_{\LtH{2}{}{-1}}^2 , } thus \algn{ \label{eq:Xt-estm} X(t) &\lesssim X(0) + ( \nor{\bs{f}}_{L^\infty(0,t; \bs{V}')} + \|{\dot{\bs{f}}}\|_{L^1(0,t; \bs{V}')} ) \\ \notag &\quad + \min \{ {c}_0^{-\half} \nor{g}_{L^1(0,t; L^2)}, \kappa_0^{-\half} \nor{{g}}_{L^2(0,t; H^{-1})} \} . } Note that $\nor{\bs{u}}_{L^{\infty}(0,t; \bs{V})}, \nor{p_{t} - \alpha p_{p}}_{L^\infty(0,t; L_{\lambda^{-1}}^2)} , \nor{p_{p}}_{L^\infty(0,t; L_{s_0}^2)} \le X(t)$ due to the maximality of $X(t)$. Then \eqref{eq:estm_1} for \algns{ \nor{\bs{u}}_{L^{\infty}(0,t; \bs{V})} + \nor{p_{t} - \alpha p_{p}}_{L^\infty(0,t; L_{\lambda^{-1}}^2)} + \nor{p_{p}}_{L^\infty(0,t; L_{s_0}^2)} } follows from the above inequality. We remark that this estimate can be extended to all $t \in (0, T]$, and we will use this estimate for general $t$ below. 
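Both alternatives above are instances of the same elementary consequence of Young's inequality, which we record for the reader's convenience: if $X, Y \ge 0$ satisfy
\algns{
X^2 + 2 Y^2 \le a^2 + 2 b X + 2 c Y
}
with $a, b, c \ge 0$, then the bounds $2 b X \le 2 b^2 + \half X^2$ and $2 c Y \le c^2 + Y^2$ give $X^2 \le 2 a^2 + 4 b^2 + 2 c^2$. The two displayed estimates correspond to the choices $c = 0$ and $c = \kappa_0^{-\half} \nor{g}_{L^2(0,t; H^{-1})}$, respectively.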
To complete the proof of \eqref{eq:estm_1}, we need to estimate $Y(t)$ without the assumption $X(t) = \max_{s \in (0,t]} X(s)$. From \eqref{eq:int-ineq1} we get \algns{ Y(t)^2 \le \half X(0)^2 + \LRp{ ( \nor{\bs{f}}_{L^\infty(0,t; \bs{V}')} + \|{\dot{\bs{f}}}\|_{L^1(0,t; \bs{V}')}) + c_0^{-\half} \nor{g}_{\LtL{1}{}{2}} } X(\bar{t}) } or \algns{ \half Y(t)^2 \le \half X(0)^2 + (\nor{\bs{f}}_{L^\infty(0,t; \bs{V}')} + \|{\dot{\bs{f}}}\|_{L^1(0,t; \bs{V}')}) X(\bar{t}) + \half \kappa_0^{-1} \nor{g}_{\LtH{2}{}{-1}}^2 } where $X(\bar{t}) = \max_{s \in [0,t]} X(s)$. Combining these with \eqref{eq:Xt-estm}, the proof of \eqref{eq:estm_1} is completed. We now prove \eqref{eq:estm_2}. For this, we take $q_{p} = - \dot{p}_{p}$ in \eqref{eq:weak-upp-eq3}, $\bs{v} = \dot{\bs{u}}$ in the time derivative of \eqref{eq:weak-upp-eq1}, $q_{t} = - \dot{p}_{t}$ in the time derivative of \eqref{eq:weak-upp-eq2}, and add all the equations together. Then we have \algn{ \label{eq:d-energy-eq} &\nor{\dot{\bs{u}}(t)}_{\bs{V}}^2 + \nor{\dot{p}_{t}(t) - \alpha \dot{p}_{p}(t)}_{0,\lambda^{-1}}^2 + \nor{\dot{p}_{p}(t)}_{0,s_0}^2 + \half \frac{d}{dt} \nor{p_{p}(t)}_{1,\kappa}^2 \\ \notag &\quad = (\dot{\bs{f}}(t), \dot{\bs{u}}(t)) + (g(t), \dot{p}_{p}(t)) . } If $s_0$ is non-degenerate with $s_0 \ge c_0 > 0$, by Young's inequality, \algns{ &\half \nor{\dot{\bs{u}}(t)}_{\bs{V}}^2 + \nor{\dot{p}_{t}(t) - \alpha \dot{p}_{p}(t)}_{0,\lambda^{-1}}^2 + \half \nor{\dot{p}_{p}(t)}_{0,s_0}^2 + \half \frac{d}{dt} \nor{p_{p}(t)}_{1,\kappa}^2 \\ &\quad \le \half \nor{\dot{\bs{f}}(t)}_{\bs{V}'}^2 + \half c_0^{-1} \nor{g(t)}_0^2 .
} Integrating this from 0 to $t$ gives \mltln{ \label{eq:stab-energy-estm1} \nor{p_{p}(t)}_{1,\kappa}^2 + \int_0^t \LRs{\nor{\dot{\bs{u}}(s)}_{\bs{V}}^2 + 2\nor{\dot{p}_{t}(s) - \alpha \dot{p}_{p}(s)}_{0,\lambda^{-1}}^2 + \nor{\dot{p}_{p}(s)}_{0,s_0}^2 } \,ds \\ \quad \le \nor{p_{p}(0)}_{1,\kappa}^2 + \int_0^t \LRs{ \nor{\dot{\bs{f}}(s)}_{\bs{V}'}^2 + c_0^{-1} \nor{g(s)}_0^2 } \,ds . } When $s_0$ is degenerate, we integrate \eqref{eq:d-energy-eq} from 0 to $t$ and get \algn{ \label{eq:aux_estm} &\int_0^t \LRs{\nor{\dot{\bs{u}}(s)}_{\bs{V}}^2 + \nor{\dot{p}_{t}(s) - \alpha \dot{p}_{p}(s)}_{0,\lambda^{-1}}^2 + \nor{\dot{p}_{p}(s)}_{0,s_0}^2 } ds + \half \nor{p_{p}(t)}_{1,\kappa}^2 \\ \notag &\quad = \half \nor{p_{p}(0)}_{1,\kappa}^2 + \int_0^t \LRs{(\dot{\bs{f}}(s), \dot{\bs{u}}(s)) + (g(s), \dot{p}_{p}(s))} \,ds \\ \notag &\quad = \half \nor{p_{p}(0)}_{1,\kappa}^2 + \int_0^t \LRs{(\dot{\bs{f}}(s), \dot{\bs{u}}(s)) - (\dot{g}(s), p_{p}(s))} \,ds + (g(t), p_{p}(t)) - (g(0), p_{p}(0)) . } Since $\nor{p_{p}(t)}_{1,\kappa} \le \nor{p_{p}}_{L^\infty(0,t; H_{\kappa}^1)}$, without loss of generality, we may assume that $\nor{p_{p}(t)}_{1,\kappa} = \nor{p_{p}}_{L^\infty(0,t; H_{\kappa}^1)}$. Then the above formula gives \algns{ &\int_0^t \LRs{\nor{\dot{\bs{u}}(s)}_{\bs{V}}^2 + \nor{\dot{p}_{t}(s) - \alpha \dot{p}_{p}(s)}_{0,\lambda^{-1}}^2 + \nor{\dot{p}_{p}(s)}_{0,s_0}^2 } ds + \half \nor{p_{p}(t)}_{1,\kappa}^2 \\ &\quad \le \half \nor{p_{p}(0)}_{1,\kappa}^2 + \nor{\dot{\bs{f}}}_{L^2(0,t; \bs{V}')} \nor{\dot{\bs{u}}}_{L^2(0,t; \bs{V})} \\ &\qquad + \kappa_0^{-\half} \LRp{\nor{\dot{g}}_{L^1(0,t; H^{-1})} + \nor{g}_{L^\infty(0,t;H^{-1})} } \nor{p_{p}}_{L^\infty(0,t; H_{\kappa}^1)} .
} By Young's inequality, we have \mltln{ \label{eq:stab-energy-estm2} \int_0^t \LRs{\nor{\dot{\bs{u}}(s)}_{\bs{V}}^2 + 2\nor{\dot{p}_{t}(s) - \alpha \dot{p}_{p}(s)}_{0,\lambda^{-1}}^2 + 2\nor{\dot{p}_{p}(s)}_{0,s_0}^2 } ds + \half \nor{p_{p}(t)}_{1,\kappa}^2 \\ \quad \le \nor{p_{p}(0)}_{1,\kappa}^2 + \nor{\dot{\bs{f}}}_{L^2(0,t; \bs{V}')}^2 + 2\kappa_0^{-1} \LRp{\nor{\dot{g}}_{L^1(0,t; H^{-1})} + \nor{g}_{L^\infty(0,t;H^{-1})} }^2 . } Combining \eqref{eq:stab-energy-estm1} and \eqref{eq:stab-energy-estm2}, we have \algns{ \nor{p_{p}(t)}_{1,\kappa} &\lesssim \nor{p_{p}(0)}_{1,\kappa} + \nor{\dot{\bs{f}}}_{L^2(0,t; \bs{V}')} \\ &\quad + \min \{ c_0^{-\half} \nor{g}_{L^2(0,t; L^2)}, \kappa_0^{-\half} (\nor{\dot{g}}_{\LtH{1}{}{-1}} + \nor{g}_{\LtH{\infty}{}{-1}}) \} , } so \eqref{eq:estm_2} for $\nor{p_{p}}_{L^\infty(0,t; H_{\kappa}^1)}$ is proved. We can also estimate $$\int_0^t \left[{\nor{\dot{\bs{u}}(s)}_{\bs{V}}^2 + \nor{\dot{p}_{t}(s) - \alpha \dot{p}_{p}(s)}_{0,\lambda^{-1}}^2 + \nor{\dot{p}_{p}(s)}_{0,s_0}^2 } \right] ds$$ from \eqref{eq:stab-energy-estm1} and \eqref{eq:stab-energy-estm2} with the estimate of $\nor{p_{p}}_{L^\infty(0,t; H_{\kappa}^1)}$. The argument is completely analogous to the estimate of $\nor{p_{p}}_{L^2(0,t; H_{\kappa}^1)}$, so we omit details. Finally, we prove \eqref{eq:estm_3}. From the inf-sup condition \algns{ \inf_{0 \not = q_{t} \in Q_{t}} \sup_{0 \not = \bs{v} \in \bs{V}} \frac{(\div \bs{v}, q_{t})}{\nor{\epsilon(\bs{v})}_{0} \nor{q_{t}}_{0}} \ge C, } for any given $q_{t}$, there exists $\bs{v} \in \bs{V}$ such that $(\div \bs{v}, q_{t}') = (q_{t}, q_{t}')$ for all $q_{t}' \in Q_{t}$, and $\nor{\epsilon(\bs{v})}_0 \le C_{\Omega} \nor{q_{t}}_0$ with $C_{\Omega}$ depending only on $\Omega$.
If we take $\bs{v}$ as such an element in $\bs{V}$ with $q_{t} = \frac{1}{2\mu} p_{t}$, then we can check that \algns{ \nor{\bs{v}}_{\bs{V}} \le \sqrt{2 \mu_1} \nor{\bs{v}}_1 \le C_{\Omega} \sqrt{\frac{\mu_1}{\mu_0}} \nor{p_{t}}_{Q_{t}} } with $\mu_1 := \nor{\mu}_{L^\infty}$, $\frac{1}{\mu_0} := \nor{\frac{1}{\mu}}_{L^\infty}$. From the estimate of $\nor{\bs{u}}_{L^{\infty}(0,t; \bs{V})}$ and \eqref{eq:weak-upp-eq1}, \algns{ \nor{p_{t}}_{Q_{t}}^2 = - (2\mu \epsilon(\bs{u}), \epsilon(\bs{v})) + (\bs{f}, \bs{v}) \le C_{\Omega} \sqrt{ \frac{\mu_1}{\mu_0}} \LRp{ \nor{\bs{u}}_{\bs{V}} + \nor{\bs{f}}_{\bs{V}'} } \nor{p_{t}}_{Q_{t}} } holds, and therefore \algn{ \label{eq:stab-pt-estm} \nor{p_{t}}_{\LtV{\infty}{Q_{t}}} \le C_{\Omega} \sqrt{ \frac{\mu_1}{\mu_0}} ( \nor{\bs{u}}_{L^\infty(0,t; \bs{V})} + \nor{\bs{f}}_{L^\infty(0,t; \bs{V}')} ) . } \end{proof} \section{Discretization with finite elements} In this section we discuss finite element discretization of \eqref{eq:weak-upp-eq} and the a priori error analysis of numerical solutions. We are interested in discretizations which are robust for the parameters, including arbitrarily large $\lambda>0$ and merely nonnegative $s_0 \ge 0$. Note that the limit case $\lambda = \infty$ decouples \eqref{eq:weak-upp-eq} into two separate problems, the Stokes equation and a time-dependent Darcy flow problem. Therefore, it is natural to combine two finite element methods, one for the Stokes equation for $(\bs{u}, p_{t})$ and the other for the Darcy flow problem for $p_{p}$. For discretizations of the Stokes equation, standard mixed methods with conforming finite elements are natural choices, but stabilized methods for the Stokes equation are sometimes preferred due to their smaller number of degrees of freedom. Therefore we propose formulations covering some low-order stabilized methods for discretization of $(\bs{u}, p_{t})$ with the a priori error analysis.
In the model Stokes problem the parameter $\mu$ is usually normalized to 1. However, in most practical poroelasticity problems $\mu$ is a function on $\Omega$ with large values, so we assume that $1 \lesssim \mu_{\min} \le \mu \le \mu_{\max}$ in $\Omega$ and that the ratio $\mu_{\max}/ \mu_{\min}$ is bounded. For discretizations of $p_{p}$, the standard method with Lagrange finite elements is the simplest numerical method, but it does not give numerical solutions with local mass conservation. In this paper we use the enriched Galerkin method, with which a locally mass conservative flux can be obtained via local post-processing. However, our error analysis can be extended to other discretization methods of the Poisson equation, including continuous and various discontinuous Galerkin methods. \subsection{Finite element methods for the Stokes and Poisson equations} In this subsection we introduce the mixed and stabilized methods for the Stokes equation of $(\bs{u}, p_{t})$ and the Lagrange finite elements for the Poisson equation of $p_{p}$. In this section we denote by $\bs{V}_{h}$, $Q_{t,h}$, $Q_{p,h}$ the finite element spaces for the unknowns $\bs{u}$, $p_{t}$, $p_{p}$, and assume that $\bs{V}_{h} \subset \bs{V}$, $Q_{t,h} \subset Q_{t}$, and $Q_{p,h} \subset Q_{p}$. We will use $k_{\bs{u}}$, $k_{p_{t}}$, $k_{p_{p}}$ to denote the maximum polynomial approximation orders of $\bs{V}_{h}$, $Q_{t,h}$, $Q_{p,h}$ in the $L^2$ norm. To describe the mixed and stabilized methods for $\bs{V}_{h}$ and $Q_{t,h}$, let us consider an auxiliary problem to find $(\bs{u}, p_{t}) \in \bs{V} \times Q_{t}$ such that \algn{ \label{eq:aux-stokes} \LRp{2 \mu \epsilon(\bs{u}), \epsilon(\bs{v}) } + (p_{t}, \div \bs{v}) = (\bs{f}_1, \bs{v}), \qquad (\div \bs{u}, q_{t}) = (f_2, q_{t}) } for all $(\bs{v}, q_{t}) \in \bs{V} \times Q_{t}$.
First, we can use stable mixed finite elements $(\bs{V}_{h}, Q_{t,h})$, i.e., the pair $(\bs{V}_{h}, Q_{t,h})$ satisfies the inf-sup condition \algn{ \label{eq:mixed-inf-sup} \inf_{0 \not = q_{t} \in Q_{t,h}} \sup_{0 \not = \bs{v} \in \bs{V}_{h}} \frac{(\div \bs{v}, q_{t})}{\nor{\nabla\bs{v}}_{0} \nor{q_{t}}_{0}} \ge C >0 } with a constant $C$ independent of $h$. A similar inf-sup condition holds with denominator $\norw{\bs{v}}{\bs{V}}\norw{q_{t}}{Q_{t}}$ by rescaling of norms, and the inf-sup constant depends on the constant of Korn's inequality and $\mu_{\max} / \mu_{\min}$. For stabilized discretizations of \eqref{eq:aux-stokes}, we consider methods of the form \algns{ \mc{B}(\bs{u}_{h}, p_{t,h}; \bs{v}, q_{t}) &:= (2 \mu \epsilon(\bs{u}_{h}), \epsilon(\bs{v})) + (p_{t,h}, \div \bs{v}) + (\div \bs{u}_{h}, q_{t}) - s_h(p_{t,h}, q_{t}), \\ F(\bs{v}, q_{t}) &:= (\bs{f}_1, \bs{v}) + (f_2, q_{t}) + \tilde{s}_h(\bs{f}_1, q_{t}) } with a stabilization bilinear form $s_h$ on $Q_{t,h} \times Q_{t,h}$ and a form $\tilde{s}_h$ such that \algns{ | s_h(p_{t}, q_{t}) | \lesssim \nor{p_{t}}_{Q_{t}} \nor{q_{t}}_{Q_{t}}. } The discretization of \eqref{eq:aux-stokes} is to find $(\bs{u}_{h}, p_{t,h}) \in \bs{V}_{h} \times Q_{t,h}$ such that \algn{ \label{eq:stab-stokes} \mc{B}(\bs{u}_{h}, p_{t,h}; \bs{v}, q_{t}) = F(\bs{v}, q_{t}) \qquad (\bs{v}, q_{t}) \in \bs{V}_{h} \times Q_{t,h} . } We assume that this discretization is consistent (with sufficiently regular exact solutions) and also assume that an inf-sup condition \algn{ \label{eq:stab-inf-sup} \inf_{(\bs{u}, p_{t}) \in \bs{V}_{h} \times Q_{t,h}} \sup_{(\bs{v}, q_{t}) \in \bs{V}_{h} \times Q_{t,h}} \frac{\mc{B}(\bs{u}, p_{t}; \bs{v}, q_{t})}{ (\norw{\bs{u}}{\bs{V}} + \norw{p_{t}}{Q_{t}}) (\norw{\bs{v}}{\bs{V}} + \norw{q_{t}}{Q_{t}}) } \ge C > 0 } holds with $C$ independent of $h$ and parameters.
We remark here that there are known stabilized methods satisfying \eqref{eq:stab-inf-sup}, for example, \begin{subequations} \label{eq:stab-method1} \algn{ \bs{V}_{h} &= \mc{P}_1(\mc{T}_h; {\mathbb R}^n), & Q_{t,h} &= \mc{P}_0(\mc{T}_h), \\ \quad s_h (p_{t}, q_{t}) &= \frac{\gamma_2}{2\mu} \sum_{e \in \mc{E}_h} h_e^{-1} \LRa{\jump{p_{t}}, \jump{q_{t}}}_e , & \tilde{s}_h &= 0, } \end{subequations} with $\jump{q_{t}}$ the jump of $q_{t}$ across edges/faces (cf. \cite{KechkarSilvester1992}), and \begin{subequations} \label{eq:stab-method2} \algn{ \bs{V}_{h} &= \mc{P}_1(\mc{T}_h ; {\mathbb R}^n), & Q_{t,h} &= \mc{P}_1(\mc{T}_h), \\ s_h(p_{t}, q_{t}) &= \frac{\gamma_2}{2\mu} \sum_{T \in \mc{T}_h} h_T^2 \LRp{\nabla p_{t}, \nabla q_{t} }_{T}, & \tilde{s}_h(\bs{f}, q_{t}) &= - \frac{\gamma_2}{2\mu} \sum_{T \in \mc{T}_h} h_T^2 (\bs{f}, \nabla q_{t}) } \end{subequations} where $\gamma_2 >0$ is a parameter depending on the shape regularity of the meshes. These stabilized methods were proposed in \cite{KechkarSilvester1992} and \cite{BrezziPitkaranta1984}, respectively. For more on stabilized methods for the Stokes equation, we refer to \cite{FrancaHughesStenberg2008}. For $Q_{p,h}$ we use the standard Lagrange finite elements. 
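As an illustration of the stabilization in \eqref{eq:stab-method2}, the following sketch (a 1D toy analogue with our own helper name, not the discretization of this paper) assembles $s_h(p, q) = \frac{\gamma_2}{2\mu}\sum_T h_T^2 (\nabla p, \nabla q)_T$ for continuous piecewise linear pressures on a uniform mesh. The assembled matrix is symmetric positive semidefinite and constants lie in its kernel.

```python
import numpy as np

def bp_stabilization_1d(n_elems, gamma2=1.0, mu=1.0, length=1.0):
    """Assemble a 1D toy analogue of the Brezzi-Pitkaranta form
        s_h(p, q) = (gamma2 / (2*mu)) * sum_T h_T^2 (p', q')_T
    for continuous P1 pressures on a uniform mesh of (0, length)."""
    h = length / n_elems
    S = np.zeros((n_elems + 1, n_elems + 1))
    # P1 element "stiffness" on an interval of length h: (1/h) [[1,-1],[-1,1]]
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n_elems):
        idx = [e, e + 1]
        S[np.ix_(idx, idx)] += (gamma2 / (2.0 * mu)) * h**2 * ke
    return S
```

For the nodal values of $p(x) = x$ on $(0,1)$ one gets exactly $s_h(p,p) = \gamma_2 h^2 / (2\mu)$, which exhibits the $h^2$ scaling of the perturbation introduced by the stabilization.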
\subsection{Semidiscrete error analysis} The semidiscrete formulation of \eqref{eq:weak-upp-eq} is to find $(\bs{u}_{h}, p_{t,h}, p_{p,h}) \in C^1(0,T;\bs{V}_{h} ) \times C^1(0,T; Q_{t,h} ) \times C^1(0,T; Q_{p,h} )$ such that \subeqns{eq:weak-upp-semi}{ \label{eq:weak-upp-semi1} \LRp{2 \mu \epsilon(\bs{u}_{h} ), \epsilon(\bs{v} ) } + \LRp{ p_{t,h} , \div \bs{v} } &= (\bs{f}, \bs{v}), \\ \label{eq:weak-upp-semi2} \LRp{ \div \bs{u}_{h} , q_{t} } - s_h \LRp{p_{t,h}, q_{t}} - \LRp{ \lambda^{-1} p_{t,h} , q_{t} } - \LRp{\alpha \lambda^{-1} p_{p,h} , q_{t}} &= \tilde{s}_h(\bs{f}, q_{t}), \\ \label{eq:weak-upp-semi3} - \LRp{ \alpha \lambda^{-1} \dot{p}_{t,h} , q_{p} } - \LRp{\LRp{s_0 + \alpha^2 \lambda^{-1} } \dot{p}_{p,h} , q_{p} } - \LRp{\underline{\bs{\kappa}} \nabla p_{p,h} , \nabla q_{p} } &= \LRp{ g, q_{p} } } for any $\bs{v} \in \bs{V}_{h}$, $q_{t} \in Q_{t,h}$, $q_{p} \in Q_{p,h}$. Note that $s_h = \tilde{s}_h=0$ if we use mixed methods for $(\bs{u}_{h}, p_{t,h})$. Suppose that $(\bs{u}, p_{t}, p_{p})$ is an exact solution of \eqref{eq:weak-upp-eq} and $(\bs{u}_{h}, p_{t,h}, p_{p,h})$ is a numerical solution of \eqref{eq:weak-upp-semi}, and define \algns{ e_{\bs{u}}(t) := \bs{u} (t) - \bs{u}_{h}(t), \quad e_{p_{t}}(t) := p_{t}(t) - p_{t,h}(t), \quad e_{p_{p}}(t) := p_{p}(t) - p_{p,h}(t) . 
} For some interpolations $(\Pi_h^{\bs{V}} \bs{u} (t), \Pi_h^{Q_{t}} p_{t}(t), \Pi_h^{Q_{p}} p_{p}(t)) \in \bs{V}_{h} \times Q_{t,h} \times Q_{p,h}$, which will be defined below, we split the errors into two parts as \algn{ \label{eq:u-split} e_{\bs{u}}(t) &= e_{\bs{u}}^I(t) + e_{\bs{u}}^h(t) := (\boldsymbol{u}(t) - \Pi_h^{\bs{V}} \boldsymbol{u}(t)) + (\Pi_h^{\bs{V}} \boldsymbol{u}(t) - \boldsymbol{u}_h (t)), \\ \label{eq:pt-split} e_{p_{t}}(t) &= e_{p_{t}}^I(t) + e_{p_{t}}^h(t) := (p_{t}(t) - \Pi_h^{Q_{t}} p_{t}(t)) + (\Pi_h^{Q_{t}} p_{t}(t) - p_{t,h}(t) ), \\ \label{eq:pf-split} e_{p_{p}}(t) &= e_{p_{p}}^I(t) + e_{p_{p}}^h(t) := (p_{p}(t) - \Pi_h^{Q_{p}} p_{p}(t)) + (\Pi_h^{Q_{p}} p_{p}(t) - p_{p,h}(t) ). } We define $\Pi_h^{\bs{V}} \bs{u} (t)$ and $\Pi_h^{Q_{t}} p_{t}(t)$ as the solution of the following auxiliary problem: \\ {\bf (AP1)} Find $(\Pi_h^{\bs{V}} \bs{u} (t), \Pi_h^{Q_{t}} p_{t}(t)) \in \bs{V}_{h} \times Q_{t,h}$ such that \algns{ \LRp{2 \mu \epsilon(\Pi_h^{\bs{V}} \bs{u} (t)), \epsilon(\bs{v} ) } + \LRp{ \Pi_h^{Q_{t}} p_{t}(t) , \div \bs{v} } &= (\bs{f}(t), \bs{v}), \\ \LRp{ \div \Pi_h^{\bs{V}} \bs{u} (t), q_{t} } - s_h \LRp{ \Pi_h^{Q_{t}} p_{t}(t), q_{t}} &= (\div \bs{u}(t), q_{t}) + \tilde{s}_h(\bs{f}(t), q_{t}) } for any $(\bs{v}, q_{t}) \in \bs{V}_{h} \times Q_{t,h}$. \\ The stability of mixed methods (when $s_h = \tilde{s}_h = 0$) or stabilized methods guarantees the well-posedness of this problem, and furthermore, standard error analyses of mixed or stabilized methods for the Stokes equation give \algn{ \label{eq:up-intp} \norw{\bs{u}(t) - \Pi_h^{\bs{V}} \bs{u} (t)}{\bs{V}} + \|p_{t}(t) - \Pi_h^{Q_{t}} p_{t}(t)\|_{Q_{t}} \lesssim h^m (\norw{\bs{u}(t)}{m+1} + \norw{p_{t}(t)}{m}) } with $m \le \min \{ k_{\bs{u}}-1, k_{p_{t}} \}$, depending on the regularity of $\bs{u}(t)$ and $p_{t}(t)$. 
We define $\Pi_h^{Q_{p}} p_{p}(t)$ as the solution of another auxiliary problem: \\ {\bf (AP2)} Find $\Pi_h^{Q_{p}}p_{p}(t) \in Q_{p,h}$ such that \algns{ (\underline{\bs{\kappa}} \nabla \Pi_h^{Q_{p}}p_{p}, \nabla q_{p}) = (\underline{\bs{\kappa}} \nabla p_{p}, \nabla q_{p}) \qquad \forall q_{p} \in Q_{p,h} . } It is well known that \algn{ \label{eq:pf-H1} \| p_{p}(t) - \Pi_h^{Q_{p}} p_{p}(t) \|_{1, \kappa} \lesssim \kappa_0^{-\half} h^m \norw{p_{p}(t)}{m+1} } holds with $m \le k_{p_{p}} - 1$, depending on the regularity of $p_{p}(t)$. If $\Omega$ satisfies the full elliptic regularity assumption and $\underline{\boldsymbol{\kappa}}$ is a Lipschitz continuous scalar field on $\Omega$, then \algn{ \label{eq:pf-L2} \|{p_{p}(t) - \Pi_h^{Q_{p}} p_{p}(t)}\|_0 \lesssim h \norw{p_{p}(t) - \Pi_h^{Q_{p}} p_{p}(t)}{1,\kappa} } holds as well. Before proving the a priori error estimates, we discuss compatible numerical initial data. Note that \eqref{eq:weak-upp-semi1}, \eqref{eq:weak-upp-semi2} are algebraic equations, so our problem is a system of differential-algebraic equations. When the backward Euler method is used for time discretization, compatible numerical initial data are not essential because the algebraic equations are satisfied after one time step. However, numerical initial data satisfying these algebraic equations can be important for the stability of numerical methods when other time discretization methods, such as the Crank--Nicolson method, are used. 
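The elliptic projection {\bf (AP2)} and the first-order $H^1$ estimate \eqref{eq:pf-H1} can be illustrated by a small 1D computation ($\kappa = 1$, piecewise linear elements, homogeneous Dirichlet data; the function names and the toy setting are ours, not the paper's discretization):

```python
import numpy as np

def ritz_projection_h1_error(n, p=lambda x: np.sin(np.pi * x),
                             dp=lambda x: np.pi * np.cos(np.pi * x)):
    """H^1-seminorm error of the P1 elliptic (Ritz) projection of p on (0,1),
    with kappa = 1 and homogeneous Dirichlet data: a toy analogue of (AP2)."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    # P1 stiffness matrix on a uniform mesh
    main = (2.0 / h) * np.ones(n + 1)
    main[0] = main[-1] = 1.0 / h
    K = np.diag(main) + np.diag(-np.ones(n) / h, 1) + np.diag(-np.ones(n) / h, -1)
    # right-hand side (p', v') computed elementwise with 2-point Gauss quadrature
    b = np.zeros(n + 1)
    gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)
    for e in range(n):
        xm, hl = 0.5 * (x[e] + x[e + 1]), 0.5 * h
        for g in gp:
            xg = xm + hl * g
            b[e] += hl * dp(xg) * (-1.0 / h)      # basis slope of left node
            b[e + 1] += hl * dp(xg) * (1.0 / h)   # basis slope of right node
    # impose homogeneous Dirichlet boundary conditions and solve
    ph = np.zeros(n + 1)
    ph[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])
    # H^1-seminorm error via quadrature
    err2 = 0.0
    for e in range(n):
        xm, hl = 0.5 * (x[e] + x[e + 1]), 0.5 * h
        dph = (ph[e + 1] - ph[e]) / h
        for g in gp:
            err2 += hl * (dp(xm + hl * g) - dph) ** 2
    return np.sqrt(err2)
```

Halving $h$ should roughly halve the $H^1$-seminorm error, i.e., the observed rate is close to $1$, matching $m \le k_{p_{p}} - 1$ for piecewise linears.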
In order to have compatible numerical initial data, we can use the solution of \algns{ \LRp{2 \mu \epsilon(\bs{u}_{h} ), \epsilon(\bs{v} ) } + \LRp{ p_{t,h} , \div \bs{v} } &= (\bs{f}(0), \bs{v}), \\ \LRp{ \div \bs{u}_{h} , q_{t} } - s_h \LRp{p_{t,h}, q_{t}} - \LRp{ \lambda^{-1} p_{t,h} , q_{t} } - \LRp{\alpha \lambda^{-1} p_{p,h} , q_{t}} &= \tilde{s}_h(\bs{f}(0), q_{t}), \\ - \LRp{\alpha^2 \lambda^{-1} p_{t,h} , q_{p} } - \LRp{\underline{\bs{\kappa}} \nabla p_{p,h} , \nabla q_{p} } &= - \LRp{\alpha^2 \lambda^{-1} p_{t}(0) , q_{p} } \\ &\quad - \LRp{\underline{\bs{\kappa}} \nabla p_{p}(0) , \nabla q_{p} } } as numerical initial data. Since this is a stabilized saddle point problem satisfying an inf-sup condition, it is rather standard to show that the numerical initial data obtained from this problem are a good approximation of the initial data of the continuous problem. For simplicity of presentation, in the theorem below we assume that the exact solutions are sufficiently regular, so that the maximal approximation orders in the Bramble--Hilbert lemma can be attained. In addition, we assume that $\Pi_h^{Q_{p}} p_{p}$ is an approximation of $p_{p}$ with optimal order in the $L^2$ norm, i.e., \eqref{eq:pf-L2} holds. \begin{theorem} \label{thm:eh-estm} Suppose that $(\bs{u}, p_{t}, p_{p})$ is the solution of \eqref{eq:weak-upp-eq} with initial data $(\bs{u}(0), p_{t}(0), p_{p}(0))$, and $(\bs{u}_{h}, p_{t,h}, p_{p,h})$ is the solution of \eqref{eq:weak-upp-semi} with numerical initial data $(\bs{u}_{h}(0), p_{t,h}(0), p_{p,h}(0)) \in \bs{V}_{h} \times Q_{t,h} \times Q_{p,h}$ satisfying \eqref{eq:weak-upp-semi1}, \eqref{eq:weak-upp-semi2}, and \algn{ \label{eq:init-approx1} \norw{p_{t}(0) - p_{t,h}(0)}{Q_{t}} \lesssim h^{k_{p_{t}}} \norw{p_{t}(0)}{k_{p_{t}}} , \\ \label{eq:init-approx2} \norw{p_{p}(0) - p_{p,h}(0)}{0} \lesssim h^{k_{p_{p}}} \norw{p_{p}(0)}{k_{p_{p}}} . 
} Then \algn{ \label{eq:semi-error1} &\nor{\Pi_h^{\bs{V}} \bs{u} - \bs{u}_{h}}_{L^\infty(0,t; \bs{V})} + \norw{\Pi_h^{Q_{t}} p_{t} - p_{t,h}}{L^\infty(0,t; Q_{t})} \\ \notag &+ \nor{\Pi_h^{Q_{p}}p_{p} - p_{p,h}}_{L^\infty(0,t; L_{s_0}^2)} + \nor{\Pi_h^{Q_{p}}p_{p} - p_{p,h}}_{L^2(0,t; H_{\kappa}^1)} \\ \notag & \quad \lesssim h^k \LRp{\nor{p_{t}(0) }_{H^k} + \nor{p_{p}(0)}_{H^k} + \nor{\dot{p}_{t}}_{L^1(0,t; H^k)} + \nor{\dot{p}_{p}}_{L^1(0,t; H^k)} } } and \algn{ \label{eq:semi-error2} &\nor{\Pi_h^{\bs{V}} \dot{\bs{u}} - \dot{\bs{u}}_h}_{L^2(0,t; \bs{V})} + \norw{\Pi_h^{Q_{t}}\dot{p}_{t} - \dot{p}_{t,h}}{L^2(0,t; Q_{t})} \\ \notag &+ \nor{\Pi_h^{Q_{p}}\dot{p}_{p} - \dot{p}_{p,h}}_{L^2(0,t; L_{s_0}^2)} + \nor{\Pi_h^{Q_{p}} p_{p} - p_{p,h}}_{L^\infty(0,t; H_{\kappa}^1)} \\ \notag & \quad \lesssim \nor{\Pi_h^{Q_{p}}p_{p}(0) - p_{p,h}(0)}_{1, \kappa} + h^{k} \norw{\dot{p}_{t}, \dot{p}_{p}}{\LtH{2}{}{k}} } hold with $k = \min \{ k_{p_{t}}, k_{p_{p}} \}$. \end{theorem} \begin{proof} The difference of \eqref{eq:weak-upp-eq} and \eqref{eq:weak-upp-semi} gives \algns{ \LRp{2 \mu \epsilon(e_{\bs{u}}), \epsilon(\bs{v} ) } + \LRp{ e_{p_{t}} , \div \bs{v} } &= 0, \\ \LRp{ \div e_{\bs{u}} , q_{t} } + s_h \LRp{p_{t,h}, q_{t}} - \LRp{ \lambda^{-1} e_{p_{t}} , q_{t} } - \LRp{\alpha \lambda^{-1} e_{p_{p}} , q_{t}} &= - \tilde{s}_h(\bs{f}, q_{t}), \\ - \LRp{\alpha \lambda^{-1} \dot{e}_{p_{t}} , q_{p} } - \LRp{\LRp{s_0 + \alpha^2 \lambda^{-1} } \dot{e}_{p_{p}}, q_{p} } - \LRp{ \underline{\bs{\kappa}} \nabla e_{p_{p}} , \nabla q_{p} } &= 0 . 
} From the decomposition \eqref{eq:u-split}--\eqref{eq:pf-split} and the equations of {\bf (AP1)}, {\bf (AP2)}, we obtain the reduced error equations \subeqns{eq:err-eq}{ \label{eq:err-eq1} & \LRp{2 \mu \epsilon(e_{\bs{u}}^h), \epsilon(\bs{v} ) } + \LRp{ e_{p_{t}}^h , \div \bs{v} } = 0 , \\ \label{eq:err-eq2} &\LRp{ \div e_{\bs{u}}^h , q_{t} } - s_h \LRp{e_{p_{t}}^h, q_{t}} - \LRp{ \lambda^{-1} (e_{p_{t}}^h - \alpha e_{p_{p}}^h), q_{t} } \\ \notag & \qquad = \LRp{ \lambda^{-1} e_{p_{t}}^I , q_{t} } + (\alpha \lambda^{-1} e_{p_{p}}^I , q_{t}) , \\ \label{eq:err-eq3} &- \LRp{ \alpha \lambda^{-1} \dot{e}_{p_{t}}^h , q_{p} } - \LRp{\LRp{s_0 + \alpha^2 \lambda^{-1} } \dot{e}_{p_{p}}^h , q_{p} } - \LRp{ \underline{\bs{\kappa}} \nabla e_{p_{p}}^h , \nabla q_{p} } \\ \notag &\qquad = \LRp{ \alpha \lambda^{-1} \dot{e}_{p_{t}}^I , q_{p} } - \LRp{\LRp{s_0 + \alpha^2 \lambda^{-1} } \dot{e}_{p_{p}}^I , q_{p} } } for any $\bs{v} \in \bs{V}_{h}$, $q_{t} \in Q_{t,h}$, $q_{p} \in Q_{p,h}$. {\bf Proof of \eqref{eq:semi-error1} }: We take $\bs{v} = \dot{e}_{\bs{u}}^h$ in \eqref{eq:err-eq1}, $q_{t} = -e_{p_{t}}^h$ in the time derivative of \eqref{eq:err-eq2}, $q_{p} = - e_{p_{p}}^h$ in \eqref{eq:err-eq3}, and add them together. Then we have \algn{ \label{eq:err-energy-eq} \half \frac{d}{dt} \LRp{ \nor{e_{\bs{u}}^h}_{\bs{V}}^2 + s_h(e_{p_{t}}^h, e_{p_{t}}^h) + \nor{e_{p_{t}}^h - \alpha e_{p_{p}}^h}_{0,\lambda^{-1}}^2 + \nor{e_{p_{p}}^h}_{0,s_0}^2 } + \| e_{p_{p}}^h \|_{1, \kappa}^2 \\ \notag = -\LRp{ \lambda^{-1} (\dot{e}_{p_{t}}^I - \alpha \dot{e}_{p_{p}}^I) , e_{p_{t}}^h - \alpha e_{p_{p}}^h} + \LRp{s_0 \dot{e}_{p_{p}}^I , e_{p_{p}}^h } . 
} Defining \algns{ X(s)^2 &= \nor{e_{\bs{u}}^h(s)}_{\bs{V}}^2 + s_h(e_{p_{t}}^h(s), e_{p_{t}}^h(s)) + \nor{e_{p_{t}}^h(s) - \alpha e_{p_{p}}^h(s)}_{0,\lambda^{-1}}^2 + \nor{e_{p_{p}}^h(s)}_{0,s_0}^2, } and integrating \eqref{eq:err-energy-eq} from 0 to $t$, we have \algn{ \label{eq:err-int-ineq1} &\half (X(t)^2 - X(0)^2) + \int_0^t \| e_{p_{p}}^h(s) \|_{1,\kappa}^2 \,ds \\ \notag & \quad = \int_0^t \LRs{-\LRp{ \lambda^{-1} (\dot{e}_{p_{t}}^I(s) - \alpha \dot{e}_{p_{p}}^I(s)) , e_{p_{t}}^h(s) - \alpha e_{p_{p}}^h(s)} + \LRp{s_0 \dot{e}_{p_{p}}^I (s), e_{p_{p}}^h (s) } } ds \\ \notag & \quad \le \norw{ \dot{e}_{p_{t}}^I - \alpha \dot{e}_{p_{p}}^I}{\LtL{1}{\lambda^{-1}}{2}} \norw{e_{p_{t}}^h - \alpha e_{p_{p}}^h}{\LtL{\infty}{\lambda^{-1}}{2}} \\ \notag & \qquad + \norw{ \dot{e}_{p_{p}}^I}{\LtL{1}{s_0}{2}} \norw{e_{p_{p}}^h}{\LtL{\infty}{s_0}{2}} . } Adopting the argument of the estimate of $X(t)$ in the previous section, we may assume that $X(t) = \max_{s \in (0,t]} X(s)$ without loss of generality. Then \algns{ \half X(t)^2 \le \half X(0)^2 + \left( \norw{ \dot{e}_{p_{t}}^I - \alpha \dot{e}_{p_{p}}^I}{\LtL{1}{\lambda^{-1}}{2}} + \norw{ \dot{e}_{p_{p}}^I}{\LtL{1}{s_0}{2}} \right) X(t) . } Solving this quadratic inequality for $X(t)$, we obtain \algns{ X(t) \le X(0) + 2 \left( \norw{ \dot{e}_{p_{t}}^I - \alpha \dot{e}_{p_{p}}^I}{\LtL{1}{\lambda^{-1}}{2}} + \norw{ \dot{e}_{p_{p}}^I}{\LtL{1}{s_0}{2}} \right) . } As a corollary, assuming the exact solution is sufficiently smooth, we obtain \mltln{ \label{eq:euh-estm} \norw{e_{\bs{u}}^h}{L^\infty(0, t; \bs{V}) } + \max_{s \in [0,t]} s_h(e_{p_{t}}^h, e_{p_{t}}^h)^{\half} + \nor{e_{p_{t}}^h - \alpha e_{p_{p}}^h}_{L^\infty(0,t; L_{\lambda^{-1}}^2)} \\ + \norw{e_{p_{p}}^h}{\LtL{\infty}{s_0}{2}} \lesssim X(0) + h^{k} \norw{ \dot{p}_{t}, \dot{p}_{p} }{\LtH{1}{}{k}} } where $k = \min \{ k_{p_{t}}, k_{p_{p}} \}$. 
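The quadratic-inequality step above, from $\frac12 X(t)^2 \le \frac12 X(0)^2 + M X(t)$ to $X(t) \le X(0) + 2M$, amounts to bounding the positive root of the quadratic $X^2/2 - MX - X(0)^2/2$; a quick numerical sanity check (the helper name is ours):

```python
import numpy as np

def max_admissible(X0, M):
    """Largest X >= 0 satisfying (1/2)*X**2 <= (1/2)*X0**2 + M*X,
    i.e. the positive root of X**2/2 - M*X - X0**2/2 = 0."""
    return M + np.sqrt(M**2 + X0**2)
```

Since $\sqrt{M^2 + X_0^2} \le M + X_0$ for nonnegative $M$, $X_0$, the root never exceeds $X_0 + 2M$.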
Note that the implicit constant in this estimate is independent of the parameter scales, i.e., it is uniform for large $\mu$, arbitrarily large $\lambda$, small $\kappa_{0}$ and $\kappa_1$, and small or degenerate $s_0$. For mixed methods, the equation \eqref{eq:err-eq1} and the inf-sup condition \eqref{eq:mixed-inf-sup} can be used to obtain \algn{ \label{eq:pt-estm} \norw{e_{p_{t}}^h}{L^\infty(0, t; Q_{t})} \lesssim X(0) + h^{k} \norw{ \dot{p}_{t}, \dot{p}_{p} }{\LtH{1}{}{k}} ,\qquad k = \min \{ k_{p_{t}}, k_{p_{p}} \} . } In the case of stabilized methods, for any $t \in (0, T]$, there exists $(\bs{v}, q_{t})$ such that $\nor{\bs{v}}_{\bs{V}} + \nor{q_{t}}_{Q_{t}} \le 1$ and \mltlns{ \nor{e_{\bs{u}}^h(t)}_{\bs{V}} + \nor{e_{p_{t}}^h(t)}_{Q_{t}} \lesssim \\ \LRp{2 \mu \epsilon(e_{\bs{u}}^h(t)), \epsilon(\bs{v} ) } + \LRp{ e_{p_{t}}^h (t), \div \bs{v} } + \LRp{ \div e_{\bs{u}}^h (t), q_{t} } - s_h \LRp{e_{p_{t}}^h (t), q_{t}} . } Using this $(\bs{v}, q_{t})$ with \eqref{eq:err-eq1} and \eqref{eq:err-eq2}, we get \algn{ \label{eq:upt-estm} &\nor{e_{\bs{u}}^h(t)}_{\bs{V}} + \nor{e_{p_{t}}^h(t)}_{Q_{t}} \\ \notag &\quad \lesssim \LRp{ \lambda^{-1} (e_{p_{t}}^h (t) - \alpha e_{p_{p}}^h(t)), q_{t} } + \LRp{ \lambda^{-1} e_{p_{t}}^I (t), q_{t} } + (\alpha \lambda^{-1} e_{p_{p}}^I (t), q_{t}) \\ \notag &\quad \lesssim \nor{e_{p_{t}}^h (t) - \alpha e_{p_{p}}^h (t) }_{0,\lambda^{-1}} + \nor{e_{p_{t}}^I (t) - \alpha e_{p_{p}}^I(t)}_{0,\lambda^{-1}} \\ \notag &\quad \lesssim X(0) + h^{k} \norw{ \dot{p}_{t}, \dot{p}_{p} }{\LtH{1}{}{k}} , \qquad k = \min \{ k_{p_{t}}, k_{p_{p}} \}, } where we used \eqref{eq:euh-estm} in the last inequality. To estimate $\nor{e_{p_{p}}^h}_{L^2(0,t; H_{\kappa}^1)}$, we use \eqref{eq:err-int-ineq1} and get \algns{ \half X(t)^2 + \int_0^t \nor{e_{p_{p}}^h(s)}_{1,\kappa}^2 \,ds &\le \half X(0)^2 + \norw{ \dot{e}_{p_{t}}^I - \alpha \dot{e}_{p_{p}}^I}{\LtL{1}{\lambda^{-1}}{2}} X(t) \\ &\quad + \norw{ \dot{e}_{p_{p}}^I}{\LtL{1}{s_0}{2}} X(t) . 
} By Young's inequality, \algn{ \label{eq:epfh-H1-estm} \nor{e_{p_{p}}^h}_{L^2(0,t; H_{\kappa}^1)} \lesssim X(0) + h^{k} \norw{ \dot{p}_{t}, \dot{p}_{p} }{\LtH{1}{}{k}} , \qquad k = \min \{ k_{p_{t}}, k_{p_{p}} \} . } To complete the proof, we need to estimate $X(0)$. Recall that $(\bs{u}_{h}(0), p_{t,h}(0), p_{p,h}(0))$ satisfies \eqref{eq:weak-upp-semi1} and \eqref{eq:weak-upp-semi2} at $t=0$. Recall also that $(\Pi_h^{\bs{V}} \bs{u}(0), \Pi_h^{Q_{t}} p_{t}(0))$ satisfies {\bf (AP1)} at $t= 0$. Noting that $\div \bs{u}(0) = \lambda^{-1} p_{t}(0) + \lambda^{-1} \alpha p_{p}(0)$, $(e_{\bs{u}}^h (0), e_{p_{t}}^h(0), e_{p_{p}}^h(0))$ satisfies \algns{ &\LRp{2 \mu \epsilon(e_{\bs{u}}^h (0) ), \epsilon(\bs{v} ) } + \LRp{ e_{p_{t}}^h(0) , \div \bs{v} } = 0 , \\ &\LRp{ \div e_{\bs{u}}^h(0) , q_{t} } - s_h \LRp{e_{p_{t}}^h(0), q_{t}} = \LRp{ \lambda^{-1} (e_{p_{t}}^I (0) - \alpha e_{p_{p}}^I(0)), q_{t} } } for all $\bs{v} \in \bs{V}_{h}$, $q_{t} \in Q_{t,h}$, and therefore \algns{ \nor{e_{\bs{u}}^h(0)}_{\bs{V}} + \nor{e_{p_{t}}^h(0)}_{Q_{t}} \lesssim h^k \nor{p_{t} (0), p_{p}(0)}_{H^k}, \qquad k = \min \{ k_{p_{t}}, k_{p_{p}} \} . } From the boundedness of $s_h(\cdot, \cdot)$, \algns{ X(0) \lesssim \nor{e_{\bs{u}}^h(0)}_{\bs{V}} + \nor{e_{p_{t}}^h(0)}_{Q_{t}} + \nor{e_{p_{p}}^h(0)}_{L^2_{s_0}} \lesssim h^k \nor{p_{t} (0), p_{p}(0)}_{H^k}, } with $k = \min \{ k_{p_{t}}, k_{p_{p}} \}$. {\bf Proof of \eqref{eq:semi-error2} }: We now estimate $\| e_{p_{p}}^h \|_{L^\infty(0,t; H_\kappa^1)}$. For this, we take $\bs{v} = \dot{e}_{\bs{u}}^h$ in the time derivative of \eqref{eq:err-eq1}, $q_{t} = - \dot{e}_{p_{t}}^h$ in the time derivative of \eqref{eq:err-eq2}, $q_{p} = -\dot{e}_{p_{p}}^h$ in \eqref{eq:err-eq3}, and add the equations together. 
Then \algn{ \label{eq:err-d-energy-eq} &\nor{\dot{e}_{\bs{u}}^h(t)}_{\bs{V}}^2 + \nor{\dot{e}_{p_{t}}^h(t) - \alpha \dot{e}_{p_{p}}^h(t)}_{0,\lambda^{-1}}^2 + \nor{\dot{e}_{p_{p}}^h(t)}_{0,s_0}^2 + \half \frac{d}{dt} \| e_{p_{p}}^h(t) \|_{1,\kappa }^2 \\ \notag &\quad = -\LRp{ \lambda^{-1} \dot{e}_{p_{t}}^I - \alpha \lambda^{-1} \dot{e}_{p_{p}}^I , \dot{e}_{p_{t}}^h - \alpha \dot{e}_{p_{p}}^h} + \LRp{s_0 \dot{e}_{p_{p}}^I , \dot{e}_{p_{p}}^h} } Integrating it from 0 to $t$ and using Young's inequality, we get \mltlns{ \half \| e_{p_{p}}^h(t) \|_{1,\kappa}^2 + \int_0^t \LRs{ \nor{\dot{e}_{\bs{u}}^h(s)}_{\bs{V}}^2 + \half \nor{\dot{e}_{p_{t}}^h(s) - \alpha \dot{e}_{p_{p}}^h(s)}_{0,\lambda^{-1}}^2 + \half \nor{\dot{e}_{p_{p}}^h(s)}_{0,s_0}^2} \,ds \\ \le \half \| e_{p_{p}}^h(0) \|_{1,\kappa}^2 + \half \int_0^t \LRs{ \nor{\dot{e}_{p_{t}}^I(s) - \alpha \dot{e}_{p_{p}}^I(s)}_{0,\lambda^{-1}}^2 + \nor{\dot{e}_{p_{p}}^I(s)}_{0,s_0}^2 }\,ds . } In particular, \algns{ &\| e_{p_{p}}^h(t) \|_{1,\kappa} + \nor{\dot{e}_{\bs{u}}^h}_{L^2(0,t; \bs{V})} + \nor{\dot{e}_{p_{t}}^h - \alpha \dot{e}_{p_{p}}^h}_{L^2(0,t; L^2_{\lambda^{-1}})} + \nor{\dot{e}_{p_{p}}^h}_{L^2(0,t; L^2_{s_0})} \\ &\lesssim \| e_{p_{p}}^h(0) \|_{1,\kappa} + \nor{\dot{e}_{p_{t}}^I - \alpha \dot{e}_{p_{p}}^I}_{\LtL{2}{\lambda^{-1}}{2}} + \nor{\dot{e}_{p_{p}}^I}_{\LtL{2}{s_0}{2}} \\ &\lesssim \| e_{p_{p}}^h(0) \|_{1,\kappa} + h^{k} \norw{\dot{p}_{t}, \dot{p}_{p}}{\LtH{2}{}{k}} , \qquad k = \min\{k_{p_{t}}, k_{p_{p}} \} . } In this estimate, the implicit constants are uniformly bounded for small $\kappa_{0}$, $\kappa_{1}$, large $\mu$, arbitrarily large $\lambda$, and small or degenerate $s_0$. 
\end{proof} \begin{cor} Under the same assumptions as in Theorem~\ref{thm:eh-estm} and the additional assumption \algn{ \label{eq:init-approx3} \norw{p_{p}(0) - p_{p,h}(0)}{1,\kappa} \lesssim h^{k_{p_{p}}-1} \norw{p_{p}(0)}{H^{k_{p_{p}}}}, } we can show that \algns{ &\nor{ \bs{u} - \bs{u}_{h}}_{L^\infty(0,t; \bs{V})} + \norw{ p_{t} - p_{t,h}}{L^\infty(0,t; Q_{t})} + \nor{ p_{p} - p_{p,h}}_{L^\infty(0,t; L_{s_0}^2)} \\ \notag & \quad \lesssim h^k \LRp{ \nor{\bs{u} }_{L^\infty(0,t; H^k)} + \nor{p_{t}}_{W^{1,1}(0,t; H^k)} + \nor{p_{p}}_{W^{1,1}(0,t; H^k)} } } with $k = \min \{k_{\bs{u}}-1, k_{p_{t}}, k_{p_{p}} \}$, \algns{ & \nor{p_{p} - p_{p,h}}_{L^2(0,t; H_{\kappa}^1)} \\ \notag & \quad \lesssim h^k \LRp{\nor{p_{t}(0) }_{H^k} + \nor{p_{p}(0)}_{H^k} + \nor{\dot{p}_{t}}_{L^1(0,t; H^k)} + \nor{\dot{p}_{p}}_{L^1(0,t; H^{k})} + \nor{p_{p}}_{L^2(0,t; H^{k+1})}} } with $k = \min \{k_{p_{t}}, k_{p_{p}}-1 \}$, \algns{ &\nor{\dot{\bs{u}} - \dot{\bs{u}}_h}_{L^2(0,t; \bs{V})} + \norw{\dot{p}_{t} - \dot{p}_{t,h}}{L^2(0,t; Q_{t})} + \nor{\dot{p}_{p} - \dot{p}_{p,h}}_{L^2(0,t; L_{s_0}^2)} \\ \notag & \quad \lesssim \nor{\Pi_h^{Q_{p}}p_{p}(0) - p_{p,h}(0)}_{1, \kappa} + h^{k} \norw{\dot{\bs{u}}, \dot{p}_{t}, \dot{p}_{p}}{\LtH{2}{}{k}} } with $k = \min \{ k_{\bs{u}} - 1, k_{p_{t}}, k_{p_{p}} \}$, and \algns{ &\nor{p_{p} - p_{p,h}}_{L^\infty(0,t; H_{\kappa}^1)} \lesssim h^k \LRp{ \nor{p_{p} }_{L^\infty(0,t; H^{k+1})}+ \norw{\dot{p}_{t}, \dot{p}_{p}}{\LtH{2}{}{k}} } } with $k= \min\{k_{p_{p}}-1, k_{p_{t}} \}$ hold. \end{cor} \begin{proof} These assertions follow easily from the results in Theorem~\ref{thm:eh-estm} and the triangle inequality, so we omit the details. \end{proof} \section{Parameter-robust preconditioning} In this section we discuss preconditioners for the finite element discretizations that are robust with respect to certain parameter scales. 
In most applications, the parameters $\mu$, $\lambda$, $\kappa$ are in the ranges \algn{ \label{eq:param-range} 0 < \kappa_{0}, \kappa_{1} \ll 1 \ll \mu \lesssim \lambda \leq + \infty . } It turns out that preconditioners which are efficient for the model problem with unit parameter values do not perform well for problems with realistic parameter values. In fact, the construction of preconditioners robust for all variations of the parameters in \eqref{eq:param-range} is the motivation of \cite{LeeEtAl2017}, where abstract forms of parameter-robust block diagonal preconditioners are studied for discretizations with Taylor--Hood and MINI elements. Therefore we focus only on preconditioners for discretizations with the two stabilized methods in \eqref{eq:stab-method1} and \eqref{eq:stab-method2}. Following the approach in \cite{LeeEtAl2017}, we first define parameter-dependent discrete norms on $\bs{V}_{h}$, $Q_{t,h}$, $Q_{p,h}$, and show the stability of the system in these parameter-dependent norms. Then we can derive abstract forms of block diagonal preconditioners based on the parameter-dependent norms. The numerical results presented in the last section show that the performance of algebraic multigrid block diagonal preconditioners based on these abstract forms is robust with respect to the parameter scales. Before we define the parameter-dependent norms, we consider fully discrete schemes of the system in order to reduce the preconditioning problem to a static one. 
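The rationale behind norm-based block diagonal preconditioners can be seen on a small Stokes-type example: when the stabilized system and the norm blocks carry the same scaling in $\mu$, the spectrum of the preconditioned operator is invariant under rescaling of $\mu$. The sketch below uses toy 1D difference operators (placeholders, not the discretizations of this paper) and checks this invariance with NumPy.

```python
import numpy as np

def stokes_blocks(n, mu, gamma=0.1):
    """Toy stabilized Stokes matrices on a 1D grid (illustration only):
    viscous block 2*mu*K0, divergence-like coupling B, pressure
    stabilization scaled by 1/(2*mu), and the block diagonal
    preconditioner built from the parameter-dependent norms."""
    K0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD model stiffness
    B = np.eye(n) - np.eye(n, k=-1)                           # difference operator
    C0 = gamma * K0                                           # stabilization (SPSD)
    A = np.block([[2.0 * mu * K0, B.T],
                  [B, -C0 / (2.0 * mu)]])                     # symmetric indefinite
    P = np.block([[2.0 * mu * K0, np.zeros((n, n))],
                  [np.zeros((n, n)), (np.eye(n) + C0) / (2.0 * mu)]])
    return A, P

def precond_spectrum(A, P):
    """Eigenvalues of P^{-1} A via the congruent symmetric form L^{-1} A L^{-T},
    where P = L L^T is the Cholesky factorization of the SPD preconditioner."""
    L = np.linalg.cholesky(P)
    Linv = np.linalg.inv(L)
    return np.sort(np.linalg.eigvalsh(Linv @ A @ Linv.T))
```

The invariance follows from the similarity $P(\mu)^{-1} A(\mu) = S^{-1} P(1)^{-1} A(1) S$ with $S = \operatorname{diag}(\sqrt{2\mu}\, I,\ I/\sqrt{2\mu})$, which is exactly the scaling built into the parameter-dependent norms.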
In a fully discrete scheme of \eqref{eq:weak-upp-semi} with time step size $\Delta t >0$, we solve the static system \subeqns{eq:weak-upp-static}{ \label{eq:weak-upp-static1} \LRp{2 \mu \epsilon(\bs{u}_{h} ), \epsilon(\bs{v} ) } + \LRp{ p_{t,h} , \div \bs{v} } &= (\tilde{\bs{f}}, \bs{v}), \\ \label{eq:weak-upp-static2} \LRp{ \div \bs{u}_{h} , q_{t} } - s_h \LRp{p_{t,h}, q_{t}} - \LRp{ \lambda^{-1} p_{t,h} , q_{t} } - \LRp{\alpha \lambda^{-1} p_{p,h} , q_{t}} &= (\tilde{f}, q_{t}), \\ \label{eq:weak-upp-static3} - \LRp{ \alpha \lambda^{-1} p_{t,h} , q_{p} } - \LRp{\LRp{s_0 + \alpha^2 \lambda^{-1} } p_{p,h} , q_{p} } - \LRp{\underline{\bs{\kappa}} \nabla p_{p,h} , \nabla q_{p} } &= \LRp{ \tilde{g}, q_{p} } } for all $(\bs{v}, q_{t}, q_{p}) \in \bs{V}_{h} \times Q_{t,h} \times Q_{p,h}$ at each time step, where $\underline{\bs{\kappa}}$ now stands for $\Delta t \, \underline{\bs{\kappa}}$ with the $\underline{\bs{\kappa}}$ of the previous section, and $\tilde{\bs{f}}$, $\tilde{f}$, $\tilde{g}$ are right-hand side terms depending on the time discretization scheme. Let us define norms on $\bs{V}_{h}$, $Q_{t,h}$, $Q_{p,h}$ by \algns{ \nor{\bs{v}}_{\bs{V}_{h}}^2 &= (2 \mu \epsilon(\bs{v}) , \epsilon(\bs{v})) , \qquad \nor{q_{t}}_{Q_{t,h}}^2 = ((2\mu)^{-1} q_{t}, q_{t}) + s_h(q_{t}, q_{t}) , \\ \nor{q_{p}}_{Q_{p,h}}^2 &= \norw{q_{p}}{0,s_0}^2 + (\underline{\bs{\kappa}} \nabla q_{p}, \nabla q_{p}) , } and let $\mathcal{X}_h = \bs{V}_{h} \times Q_{t,h} \times Q_{p,h}$ be the Hilbert space with the norm \algns{ \norw{(\bs{v}, q_{t}, q_{p})}{\mathcal{X}_h}^2 = \norw{\bs{v}}{\bs{V}_{h}}^2 + \norw{q_{t}}{Q_{t,h}}^2 + \norw{q_{p}}{Q_{p,h}}^2 . 
} We define a linear operator $\mathcal{A}$ from $\mathcal{X}_h$ to its dual space $\mathcal{X}_h^*$ using the left-hand side of \eqref{eq:weak-upp-static} as \algns{ &\LRa{\mathcal{A} (\bs{u}, p_{t}, p_{p}), (\bs{v}, q_{t}, q_{p})}_{(\mathcal{X}_h^*, \mathcal{X}_h)} \\ &\quad = \LRp{2 \mu \epsilon(\bs{u} ), \epsilon(\bs{v} ) } + \LRp{ p_{t} , \div \bs{v} } + \LRp{ \div \bs{u} , q_{t} } - s_h \LRp{p_{t}, q_{t}} - \LRp{ \lambda^{-1} p_{t} , q_{t} } - \LRp{\alpha \lambda^{-1} p_{p} , q_{t}} \\ &\qquad - \LRp{ \alpha \lambda^{-1} p_{t} , q_{p} } - \LRp{\LRp{s_0 + \alpha^2 \lambda^{-1} } p_{p} , q_{p} } - \LRp{\underline{\bs{\kappa}} \nabla p_{p} , \nabla q_{p} } } for $(\bs{u}, p_{t}, p_{p}), (\bs{v}, q_{t}, q_{p}) \in \mathcal{X}_h$, where $\LRa{\cdot, \cdot}_{(\mathcal{X}_h^*, \mathcal{X}_h)}$ is the duality pairing of $\mathcal{X}_h$ and $\mathcal{X}_h^*$. We claim that $\mc{A}$ is an isomorphism from $\mathcal{X}_h$ to $\mathcal{X}_h^*$ such that $\norw{\mc{A}}{L(\mathcal{X}_h, \mathcal{X}_h^*)}$ and $\norw{\mc{A}^{-1}}{L(\mathcal{X}_h^*, \mathcal{X}_h)}$ are independent of mesh sizes and the parameters in the ranges of \eqref{eq:param-range}. \begin{theorem} There exists ${\beta} >0$, independent of the scales of $\mu$, $\underline{\bs{\kappa}}$, $\lambda$ in \eqref{eq:param-range}, and the mesh sizes, such that the following inf-sup condition holds: \begin{align*} \inf_{ (\bs{u} , p_{t}, p_{p}) \in \mathcal{X}_h } \sup_{(\bs{v} , q_{t}, q_{p}) \in \mathcal{X}_h } \frac{( {\mathcal{A}} (\bs{u}, p_{t}, p_{p}), (\bs{v}, q_{t}, q_{p}))_{(\mathcal{X}_h^* , \mathcal{X}_h)} } {\| (\bs{u}, p_{t}, p_{p}) \|_{\mathcal{X}_h} \| (\bs{v}, q_{t}, q_{p}) \|_{\mathcal{X}_h} } \geq {\beta} . 
\end{align*} \end{theorem} \begin{proof} To prove the assertion, for given $(0, 0, 0) \not = (\bs{u} , p_{t}, p_{p}) \in \mathcal{X}_h$, we will find $(\bs{v} , q_{t}, q_{p}) \in \mathcal{X}_h$ such that \begin{align} \label{eq:inf-sup-sub1} \| (\bs{v} , q_{t}, q_{p}) \|_{\mathcal{X}_h} &\leq C \| (\bs{u} , p_{t}, p_{p}) \|_{\mathcal{X}_h}, \\ \label{eq:inf-sup-sub2} ({\mathcal{A}} (\bs{u} , p_{t}, p_{p}), (\bs{v} , q_{t}, q_{p}))_{(\mathcal{X}_h^*, \mathcal{X}_h )} &\geq C' {\| (\bs{u} , p_{t}, p_{p}) \|_{\mathcal{X}_h}^2} , \end{align} with $C, C'>0$ independent of the scales of $\mu$, $\lambda$, $\underline{\bs{\kappa}}$, and mesh sizes. Suppose that $(0, 0, 0) \not = (\bs{u}, p_{t}, p_{p}) \in \mathcal{X}_h$ is given. For stabilized methods, there exist $C_1, C_2 >0$ independent of mesh sizes and parameters such that \algns{ \sup_{\bs{v} \in \bs{V}_{h}} \frac{(\div \bs{v}, q_{t})}{\norw{\bs{v}}{\bs{V}}} \ge 2C_1 \norw{q_{t}}{Q_{t}} - 2C_2 (s_h(q_{t}, q_{t}) )^{\half} \qquad \forall q_{t} \in Q_{t,h} . } From this there exists $\bs{w} \in \bs{V}_{h}$ such that \algn{ \label{eq:w-estm} (\div \bs{w}, p_{t}) \ge \LRp{ {C_1 \norw{p_{t}}{Q_{t}} - C_2 (s_h(p_{t}, p_{t}) )^{\half} } }\norw{\bs{w}}{\bs{V}} . } Since both sides of this inequality are positively homogeneous in $\bs{w}$, we may rescale $\bs{w}$ so that $\norw{\bs{w}}{\bs{V}} = \norw{p_{t}}{Q_{t}}$. To prove \eqref{eq:inf-sup-sub1} and \eqref{eq:inf-sup-sub2}, we set $\bs{v} = \bs{u} + \delta \bs{w}$, $q_{t} = - p_{t}$, $q_{p} = -p_{p}$ with a constant $\delta>0$ which will be determined later. One can check that \begin{align} \notag \| (\bs{v} , q_{t}, q_{p}) \|_{\mathcal{X}_h} \leq {\sqrt{2(1 + \delta^2 )}} \| (\bs{u} , p_{t}, p_{p}) \|_{\mathcal{X}_h} , \end{align} and \eqref{eq:inf-sup-sub1} follows if $\delta$ is independent of the parameters and mesh sizes. 
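The coefficient bookkeeping for the constants chosen later in this proof ($\theta = 2/C_1$, $\eta = 2C_2/C_1$, $\delta = \min\{C_1/2,\, C_1/(2C_2^2)\}$) can also be verified numerically; the sketch below (helper name is ours) checks that the resulting coefficients of $\nor{\bs{u}}_{\bs{V}}^2$ and $s_h(p_{t}, p_{t})$ in the lower bound are at least $\tfrac12$, and that the coefficient of $\nor{p_{t}}_{Q_{t}}^2$ equals $\delta C_1/2$.

```python
import numpy as np

def lower_bound_coefficients(C1, C2):
    """Coefficients of |u|_V^2, |p_t|_{Q_t}^2 and s_h(p_t, p_t) in the lower
    bound of the bilinear form, for the choices theta = 2/C1, eta = 2*C2/C1,
    delta = min(C1/2, C1/(2*C2**2)) made at the end of the proof."""
    theta = 2.0 / C1
    eta = 2.0 * C2 / C1
    delta = min(C1 / 2.0, C1 / (2.0 * C2**2))
    a_u = 1.0 - delta * theta / 2.0                       # coefficient of |u|_V^2
    a_pt = delta * (C1 - 1.0 / (2.0 * theta) - C2 / (2.0 * eta))  # of |p_t|^2
    a_sh = 1.0 - delta * C2 * eta / 2.0                   # of s_h(p_t, p_t)
    return a_u, a_pt, a_sh, delta
```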
To establish \eqref{eq:inf-sup-sub2} and determine $\delta$, we use the previously chosen $\bs{v}$, $q_{t}$, $q_{p}$, and \eqref{eq:w-estm} to have \begin{align} \notag &\LRa{ {\mathcal{A}} (\bs{u} , p_{t}, p_{p}), (\bs{v} , q_{t}, q_{p})}_{(\mathcal{X}_h^* , \mathcal{X}_h)} \\ \label{eq:inf-sup-bilinear} &= \| \bs{u} \|_{\bs{V}_{h}}^2 + \delta (2 \mu \epsilon(\bs{u}), \epsilon(\bs{w})) + \delta (\div \bs{w}, p_{t}) + s_h(p_{t}, p_{t}) \\ \notag & \quad + ( \lambda^{-1} p_{t}, p_{t}) + ((s_0 + \alpha^2 \lambda^{-1}) p_{p}, p_{p}) + 2 (\alpha \lambda^{-1}p_{t}, p_{p}) + (\underline{\bs{\kappa}} \nabla p_{p}, \nabla p_{p}) . \end{align} By Young's inequality and the fact that $\norw{\bs{w}}{\bs{V}} = \norw{p_{t}}{Q_{t}}$, we also have \begin{align*} & \delta (2 \mu \epsilon(\boldsymbol{u}), \epsilon(\boldsymbol{w})) \leq \frac{\delta \theta}{2} \| \boldsymbol{u} \|_{\bs{V}}^2 + \frac{\delta}{2 \theta} \| \bs{w} \|_{\bs{V}}^2 \le \frac{\delta \theta}{2} \| \bs{u} \|_{\bs{V}}^2 + \frac{\delta}{2 \theta} \| p_{t} \|_{Q_{t}}^2 \quad \forall \theta >0. \end{align*} By \eqref{eq:w-estm} and Young's inequality, \algns{ \delta (\div \bs{w}, p_{t}) &\ge \delta \LRp{C_1 \norw{p_{t}}{Q_{t}} - C_2 (s_h(p_{t}, p_{t}) )^{\half} } \norw{\bs{w}}{\bs{V}} \\ &\ge \delta C_1 \norw{p_{t}}{Q_{t}}^2 - \delta C_2 \LRp{\frac{\eta}{2} s_h(p_{t}, p_{t}) + \frac{1}{2\eta} \norw{p_{t}}{Q_{t}}^2 } } for any $\eta >0$. From these we can get \algns{ &\LRa{ {\mathcal{A}} (\bs{u} , p_{t}, p_{p}), (\bs{v} , q_{t}, q_{p})}_{(\mathcal{X}_h^* , \mathcal{X}_h)} \\ &\ge \LRp{ 1 - \frac{\delta \theta}{2} } \nor{\bs{u}}_{\bs{V}}^2 + \delta \LRp{C_1 - \frac{1}{2\theta} - \frac{C_2}{2 \eta} } \nor{p_{t}}_{Q_{t}}^2 + \LRp{1 - \delta \frac{C_2 \eta}{2} } s_h(p_{t}, p_{t}) \\ &\quad + \nor{ p_{t} - \alpha p_{p}}_{\lambda^{-1}}^2 + \nor{p_{p}}_{s_0}^2 + \nor{ p_{p}}_{1,\kappa}^2 . 
} We now set \algns{ \theta = \frac{2}{C_1},\qquad \eta = \frac{2 C_2}{C_1}, \qquad \delta = \min \LRb{\frac{C_1}{2}, \frac{C_1}{2 C_2^2} }, } and get \mltlns{ \LRa{ {\mathcal{A}} (\bs{u} , p_{t}, p_{p}), (\bs{v} , q_{t}, q_{p})}_{(\mathcal{X}_h^* , \mathcal{X}_h)} \\ \ge \half \| \bs{u} \|_{\bs{V}}^2 + \half s_h(p_{t}, p_{t}) + \frac{\delta C_1}{2} \norw{p_{t}}{Q_{t}}^2 + \norw{p_{p}}{0,s_0}^2 + \nor{p_{p}}_{1,\kappa}^2 . } Since $C_1$, $C_2$ are independent of parameters and mesh sizes, so is $\delta$, and therefore \eqref{eq:inf-sup-sub1} and \eqref{eq:inf-sup-sub2} are proved. \end{proof} \begin{table}[h] \begin{center} \begin{tabular}{c | c c | c c | c c | c c} \multirow{3}{*}{$N$} & \multicolumn{2}{c}{$\nor{p_{t} - p_{t,h}}_{L^2}$}& \multicolumn{2}{c}{$\nor{p_{p} - p_{p,h}}_{L^2}$}& \multicolumn{2}{c}{$\nor{\bs{u} - \bs{u}_{h}}_{H^1}$}& \multicolumn{2}{c}{$\nor{p_{p} - p_{p,h}}_{1,\kappa}$}\\ \cline{2-9}& error & rate & error & rate & error & rate & error & rate \\ \hline 8 & 4.342e-02 & $-$ & 3.527e-03 & $-$ & 5.725e-02 & $-$ & 1.127e-01 & $-$ \\ 16 & 1.071e-02 & 2.02 & 8.826e-04 & 2.00 & 1.424e-02 & 2.01 & 5.642e-02 & 1.00 \\ 32 & 2.669e-03 & 2.00 & 2.207e-04 & 2.00 & 3.559e-03 & 2.00 & 2.822e-02 & 1.00 \\ 64 & 6.668e-04 & 2.00 & 5.519e-05 & 2.00 & 8.897e-04 & 2.00 & 1.411e-02 & 1.00 \\ 128 & 1.667e-04 & 2.00 & 1.380e-05 & 2.00 & 2.225e-04 & 2.00 & 7.056e-03 & 1.00 \\ \hline \end{tabular} \caption{Errors and convergence rates with the lowest order Taylor--Hood finite elements} \label{TH-conv} \end{center} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{c | c c | c c | c c | c c} \multirow{3}{*}{$N$} & \multicolumn{2}{c}{$\nor{p_{t} - p_{t,h}}_{L^2}$}& \multicolumn{2}{c}{$\nor{p_{p} - p_{p,h}}_{L^2}$}& \multicolumn{2}{c}{$\nor{\bs{u} - \bs{u}_{h}}_{H^1}$}& \multicolumn{2}{c}{$\nor{p_{p} - p_{p,h}}_{1,\kappa}$}\\ \cline{2-9}& error & rate & error & rate & error & rate & error & rate \\ \hline 8 & 6.024e+00 & $-$ & 3.549e-03 & $-$ & 7.017e+00 & $-$ 
& 1.139e-01 & $-$ \\ 16 & 3.748e+00 & 0.68 & 1.409e-03 & 1.33 & 4.438e+00 & 0.66 & 5.723e-02 & 0.99 \\ 32 & 1.519e+00 & 1.30 & 5.280e-04 & 1.42 & 1.793e+00 & 1.31 & 2.843e-02 & 1.01 \\ 64 & 4.642e-01 & 1.71 & 1.643e-04 & 1.68 & 5.463e-01 & 1.71 & 1.415e-02 & 1.01 \\ 128 & 1.259e-01 & 1.88 & 4.586e-05 & 1.84 & 1.568e-01 & 1.80 & 7.061e-03 & 1.00 \\ \hline \end{tabular} \caption{Errors and convergence rates with the Brezzi--Pitk\"{a}ranta stabilized method} \label{BP-conv} \end{center} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{c | c c | c c | c c | c c} \multirow{3}{*}{$N$} & \multicolumn{2}{c}{$\nor{p_{t} - p_{t,h}}_{L^2}$}& \multicolumn{2}{c}{$\nor{p_{p} - p_{p,h}}_{L^2}$}& \multicolumn{2}{c}{$\nor{\bs{u} - \bs{u}_{h}}_{H^1}$}& \multicolumn{2}{c}{$\nor{p_{p} - p_{p,h}}_{1,\kappa}$}\\ \cline{2-9}& error & rate & error & rate & error & rate & error & rate \\ \hline 8 & 5.051e+00 & $-$ & 1.149e-02 & $-$ & 5.816e+00 & $-$ & 1.223e-01 & $-$ \\ 16 & 2.604e+00 & 0.96 & 2.726e-03 & 2.08 & 2.906e+00 & 1.00 & 5.772e-02 & 1.08 \\ 32 & 1.349e+00 & 0.95 & 6.126e-04 & 2.15 & 1.402e+00 & 1.05 & 2.835e-02 & 1.03 \\ 64 & 6.832e-01 & 0.98 & 1.490e-04 & 2.04 & 6.762e-01 & 1.05 & 1.413e-02 & 1.01 \\ 128 & 3.296e-01 & 1.05 & 3.787e-05 & 1.98 & 3.082e-01 & 1.13 & 7.058e-03 & 1.00 \\ \hline \end{tabular} \caption{Errors and convergence rates with the $\mc{P}_1$-$\mc{P}_0$ stabilized method} \label{CGDG-conv} \end{center} \end{table} The above stability in the parameter-dependent norm $\mc{X}_h$ suggests an abstract form of preconditioner \algn{ \label{eq:precond} \boldsymbol{P} = \pmat{P_{\bs{u}} & & \\ & P_{p_{t}} & \\ & & P_{p_{p}} } } with $P_{\bs{u}}$, $P_{p_{t}}$, $P_{p_{p}}$ which are (approximate) inverses of the maps \algns{ \bs{u} \mapsto -\div (2 \mu \epsilon (\bs{u})), \qquad p_{t} \mapsto (1/\mu) p_{t}, \qquad p_{p} \mapsto (s_0 + \alpha^2 \lambda^{-1})p_{p} - \div (\underline{\bs{\kappa}} \nabla p_{p}) . 
} \begin{table}[th] \begin{center} \begin{tabular}{>{\small}c | >{\small}c | >{\small}c || >{\small}c >{\small}c >{\small}c >{\small}c } \hline & & & \multicolumn{4}{>{\small}c}{$\kappa$} \\ $N$ & $\mu$ & $\lambda/\mu$ & $10^0$ & $10^{-3}$ & $10^{-6}$ & $10^{-9}$ \\ \hline \hline \multirow{9}{*} {$16$} & \multirow{3}{*} {$10^0$} & $10^0$ & $44 \; (0.19)$& $56 \; (0.24)$& $65 \; (0.27)$& $65 \; (0.27)$\\ & & $10^3$ & $59 \; (0.27)$& $55 \; (0.24)$& $70 \; (0.29)$& $67 \; (0.27)$\\ & & $10^6$& $60 \; (0.25)$& $54 \; (0.22)$& $42 \; (0.18)$& $53 \; (0.22)$\\ \cline{3-7} & \multirow{3}{*} {$10^3$} & $10^0$ & $41 \; (0.17)$& $41 \; (0.17)$& $54 \; (0.22)$& $62 \; (0.25)$\\ & & $10^3$& $59 \; (0.25)$& $59 \; (0.25)$& $52 \; (0.22)$& $67 \; (0.29)$\\ & & $10^6$ & $59 \; (0.25)$& $59 \; (0.25)$& $51 \; (0.22)$& $38 \; (0.16)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $41 \; (0.17)$& $41 \; (0.17)$& $41 \; (0.17)$& $54 \; (0.23)$\\ & & $10^3$ & $59 \; (0.25)$& $59 \; (0.25)$& $59 \; (0.25)$& $52 \; (0.22)$\\ & & $10^6$ & $59 \; (0.25)$& $59 \; (0.25)$& $59 \; (0.25)$& $51 \; (0.22)$\\ \cline{3-7} \cline{2-7} \multirow{9}{*} {$32$} & \multirow{3}{*} {$10^0$} & $10^0$ & $46 \; (0.41)$& $55 \; (0.53)$& $67 \; (0.63)$& $67 \; (0.61)$\\ & & $10^3$ & $61 \; (0.56)$& $56 \; (0.52)$& $73 \; (0.66)$& $69 \; (0.61)$\\ & & $10^6$ & $62 \; (0.58)$& $55 \; (0.54)$& $42 \; (0.40)$& $54 \; (0.51)$\\ \cline{3-7} & \multirow{3}{*} {$10^3$} & $10^0$ & $42 \; (0.41)$& $42 \; (0.41)$& $52 \; (0.50)$& $63 \; (0.58)$\\ & & $10^3$ & $61 \; (0.58)$& $61 \; (0.58)$& $52 \; (0.48)$& $68 \; (0.60)$\\ & & $10^6$ & $61 \; (0.59)$& $61 \; (0.60)$& $52 \; (0.49)$& $38 \; (0.36)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $42 \; (0.41)$& $42 \; (0.41)$& $42 \; (0.41)$& $52 \; (0.47)$\\ & & $10^3$ & $60 \; (0.58)$& $61 \; (0.59)$& $61 \; (0.62)$& $52 \; (0.53)$\\ & & $10^6$ & $60 \; (0.52)$& $61 \; (0.59)$& $61 \; (0.63)$& $52 \; (0.53)$\\ \cline{3-7} \cline{2-7} \multirow{9}{*} {$64$} & 
\multirow{3}{*} {$10^0$} & $10^0$ & $46 \; (1.47)$& $55 \; (1.91)$& $67 \; (1.98)$& $68 \; (2.16)$\\ & & $10^3$ & $61 \; (1.88)$& $56 \; (1.72)$& $72 \; (2.26)$& $69 \; (2.37)$\\ & & $10^6$ & $61 \; (2.19)$& $56 \; (1.88)$& $42 \; (1.40)$& $53 \; (1.93)$\\ \cline{3-7} & \multirow{3}{*} {$10^3$} & $10^0$ & $42 \; (1.51)$& $42 \; (1.37)$& $50 \; (1.52)$& $63 \; (1.86)$\\ & & $10^3$ & $61 \; (1.85)$& $61 \; (1.97)$& $52 \; (1.84)$& $66 \; (2.05)$\\ & & $10^6$ & $61 \; (2.05)$& $61 \; (2.14)$& $52 \; (1.82)$& $37 \; (1.34)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $42 \; (1.55)$& $42 \; (1.60)$& $42 \; (1.43)$& $50 \; (1.86)$\\ & & $10^3$ & $60 \; (2.10)$& $61 \; (2.19)$& $61 \; (2.12)$& $52 \; (1.84)$\\ & & $10^6$ & $60 \; (2.26)$& $61 \; (1.86)$& $61 \; (1.88)$& $52 \; (1.63)$\\ \cline{3-7} \cline{2-7} \multirow{9}{*} {$128$} & \multirow{3}{*} {$10^0$} & $10^0$ & $46 \; (7.30)$& $55 \; (8.39)$& $67 \; (10.19)$& $70 \; (10.53)$\\ & & $10^3$ & $63 \; (10.27)$& $56 \; (8.78)$& $72 \; (11.37)$& $68 \; (10.63)$\\ & & $10^6$ & $63 \; (10.50)$& $56 \; (9.09)$& $41 \; (6.36)$& $52 \; (7.50)$\\ \cline{3-7} & \multirow{3}{*} {$10^3$} & $10^0$ & $42 \; (6.53)$& $42 \; (7.12)$& $50 \; (7.80)$& $63 \; (9.11)$\\ & & $10^3$ & $62 \; (9.13)$& $62 \; (10.03)$& $52 \; (8.69)$& $66 \; (10.63)$\\ & & $10^6$ & $62 \; (9.87)$& $62 \; (10.10)$& $52 \; (8.29)$& $37 \; (5.94)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $42 \; (6.72)$& $42 \; (6.54)$& $42 \; (5.86)$& $50 \; (6.95)$\\ & & $10^3$ & $62 \; (8.48)$& $62 \; (8.49)$& $61 \; (8.50)$& $52 \; (7.18)$\\ & & $10^6$ & $62 \; (8.45)$& $62 \; (8.32)$& $62 \; (8.34)$& $52 \; (7.02)$\\ \cline{3-7} \cline{2-7} \multirow{9}{*} {$256$} & \multirow{3}{*} {$10^0$} & $10^0$ & $46 \; (25.82)$& $54 \; (30.11)$& $65 \; (35.70)$& $69 \; (38.01)$\\ & & $10^3$ & $62 \; (35.41)$& $56 \; (31.50)$& $71 \; (40.29)$& $70 \; (38.55)$\\ & & $10^6$ & $63 \; (35.49)$& $55 \; (31.34)$& $41 \; (23.18)$& $51 \; (29.03)$\\ \cline{3-7} &
\multirow{3}{*} {$10^3$} & $10^0$ & $43 \; (24.18)$& $43 \; (24.33)$& $51 \; (28.79)$& $59 \; (32.19)$\\ & & $10^3$ & $60 \; (34.44)$& $60 \; (33.83)$& $51 \; (28.70)$& $64 \; (35.27)$\\ & & $10^6$ & $60 \; (34.06)$& $60 \; (33.44)$& $50 \; (28.59)$& $37 \; (21.02)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $42 \; (24.00)$& $42 \; (23.66)$& $42 \; (24.02)$& $50 \; (28.52)$\\ & & $10^3$ & $60 \; (33.83)$& $60 \; (33.90)$& $60 \; (33.89)$& $51 \; (28.97)$\\ & & $10^6$ & $60 \; (33.80)$& $60 \; (33.91)$& $60 \; (34.40)$& $50 \; (28.71)$\\ \cline{3-7} \hline \end{tabular} \caption{Number of iterations and wall-clock time for one solve with the Taylor--Hood element} \label{table:TH-precond} \end{center} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{>{\small}c | >{\small}c | >{\small}c || >{\small}c >{\small}c >{\small}c >{\small}c } \hline & & & \multicolumn{4}{>{\small}c}{$\kappa$} \\ $N$ & $\mu$ & $\lambda/\mu$ & $10^0$ & $10^{-3}$ & $10^{-6}$ & $10^{-9}$ \\ \hline \hline \multirow{9}{*} {$16$} & \multirow{3}{*} {$10^0$} & $10^0$ & $21 \; (0.07)$& $32 \; (0.10)$& $30 \; (0.09)$& $30 \; (0.09)$\\ & & $10^3$ & $22 \; (0.07)$& $20 \; (0.06)$& $28 \; (0.09)$& $26 \; (0.08)$\\ & & $10^6$ & $22 \; (0.07)$& $20 \; (0.06)$& $16 \; (0.05)$& $23 \; (0.07)$\\ \cline{3-7} & \multirow{3}{*} {$10^3$} & $10^0$ & $17 \; (0.06)$& $20 \; (0.06)$& $30 \; (0.09)$& $28 \; (0.09)$\\ & & $10^3$ & $21 \; (0.07)$& $21 \; (0.07)$& $19 \; (0.06)$& $26 \; (0.08)$\\ & & $10^6$ & $21 \; (0.07)$& $21 \; (0.07)$& $18 \; (0.06)$& $14 \; (0.05)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $17 \; (0.06)$& $17 \; (0.06)$& $19 \; (0.06)$& $30 \; (0.10)$\\ & & $10^3$ & $21 \; (0.07)$& $21 \; (0.07)$& $21 \; (0.07)$& $19 \; (0.06)$\\ & & $10^6$ & $21 \; (0.07)$& $21 \; (0.07)$& $21 \; (0.07)$& $18 \; (0.06)$\\ \cline{3-7} \cline{2-7} \multirow{9}{*} {$32$} & \multirow{3}{*} {$10^0$} & $10^0$ & $24 \; (0.11)$& $37 \; (0.17)$& $35 \; (0.16)$& $35 \; (0.16)$\\ & & $10^3$ & $29 \;
(0.13)$& $25 \; (0.12)$& $33 \; (0.16)$& $31 \; (0.14)$\\ & & $10^6$ & $29 \; (0.14)$& $25 \; (0.12)$& $20 \; (0.09)$& $26 \; (0.12)$\\ \cline{3-7} & \multirow{3}{*} {$10^3$} & $10^0$ & $19 \; (0.09)$& $22 \; (0.11)$& $35 \; (0.16)$& $32 \; (0.15)$\\ & & $10^3$ & $26 \; (0.13)$& $26 \; (0.13)$& $23 \; (0.11)$& $30 \; (0.15)$\\ & & $10^6$ & $26 \; (0.12)$& $26 \; (0.12)$& $23 \; (0.11)$& $18 \; (0.09)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $19 \; (0.09)$& $19 \; (0.09)$& $22 \; (0.11)$& $35 \; (0.16)$\\ & & $10^3$ & $26 \; (0.12)$& $26 \; (0.12)$& $26 \; (0.12)$& $23 \; (0.11)$\\ & & $10^6$ & $26 \; (0.13)$& $26 \; (0.13)$& $26 \; (0.12)$& $23 \; (0.11)$\\ \cline{3-7} \cline{2-7} \multirow{9}{*} {$64$} & \multirow{3}{*} {$10^0$} & $10^0$ & $26 \; (0.31)$& $40 \; (0.48)$& $38 \; (0.41)$& $38 \; (0.39)$\\ & & $10^3$ & $35 \; (0.39)$& $31 \; (0.34)$& $38 \; (0.43)$& $36 \; (0.37)$\\ & & $10^6$ & $35 \; (0.40)$& $30 \; (0.35)$& $24 \; (0.28)$& $29 \; (0.36)$\\ \cline{3-7} & \multirow{3}{*} {$10^3$} & $10^0$ & $21 \; (0.25)$& $24 \; (0.29)$& $37 \; (0.44)$& $36 \; (0.39)$\\ & & $10^3$ & $32 \; (0.38)$& $32 \; (0.35)$& $28 \; (0.31)$& $35 \; (0.41)$\\ & & $10^6$ & $32 \; (0.38)$& $32 \; (0.35)$& $27 \; (0.30)$& $20 \; (0.23)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $20 \; (0.23)$& $21 \; (0.24)$& $24 \; (0.27)$& $37 \; (0.44)$\\ & & $10^3$ & $32 \; (0.38)$& $32 \; (0.38)$& $32 \; (0.37)$& $27 \; (0.32)$\\ & & $10^6$ & $32 \; (0.38)$& $32 \; (0.38)$& $32 \; (0.38)$& $27 \; (0.32)$\\ \cline{3-7} \cline{2-7} \multirow{9}{*} {$128$} & \multirow{3}{*} {$10^0$} & $10^0$ & $28 \; (1.08)$& $44 \; (1.71)$& $43 \; (1.54)$& $42 \; (1.51)$\\ & & $10^3$ & $39 \; (1.63)$& $33 \; (1.33)$& $42 \; (1.75)$& $40 \; (1.51)$\\ & & $10^6$ & $39 \; (1.64)$& $33 \; (1.36)$& $26 \; (1.07)$& $31 \; (1.31)$\\ \cline{3-7} & \multirow{3}{*} {$10^3$} & $10^0$ & $23 \; (0.98)$& $25 \; (1.05)$& $40 \; (1.70)$& $39 \; (1.54)$\\ & & $10^3$ & $37 \; (1.56)$& $37 \; (1.57)$& $32 \; 
(1.36)$& $40 \; (1.74)$\\ & & $10^6$ & $37 \; (1.60)$& $37 \; (1.57)$& $32 \; (1.33)$& $23 \; (1.00)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $22 \; (0.95)$& $23 \; (0.99)$& $25 \; (1.07)$& $40 \; (1.72)$\\ & & $10^3$ & $36 \; (1.59)$& $36 \; (1.55)$& $36 \; (1.53)$& $31 \; (1.36)$\\ & & $10^6$ & $36 \; (1.51)$& $36 \; (1.53)$& $36 \; (1.54)$& $31 \; (1.28)$\\ \cline{3-7} \cline{2-7} \multirow{9}{*} {$256$} & \multirow{3}{*} {$10^0$} & $10^0$ & $28 \; (4.83)$& $46 \; (8.01)$& $46 \; (8.10)$& $44 \; (6.71)$\\ & & $10^3$ & $42 \; (8.16)$& $35 \; (7.12)$& $45 \; (8.65)$& $44 \; (7.78)$\\ & & $10^6$ & $42 \; (7.55)$& $35 \; (6.38)$& $26 \; (5.02)$& $33 \; (6.27)$\\ \cline{3-7} & \multirow{3}{*} {$10^3$} & $10^0$ & $25 \; (4.80)$& $26 \; (5.19)$& $42 \; (7.19)$& $43 \; (6.62)$\\ & & $10^3$ & $41 \; (7.53)$& $41 \; (7.82)$& $34 \; (6.34)$& $42 \; (8.35)$\\ & & $10^6$ & $41 \; (7.47)$& $41 \; (7.16)$& $34 \; (6.63)$& $26 \; (5.22)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $22 \; (4.54)$& $24 \; (4.73)$& $26 \; (4.97)$& $41 \; (7.65)$\\ & & $10^3$ & $40 \; (7.56)$& $40 \; (8.14)$& $40 \; (7.57)$& $33 \; (6.48)$\\ & & $10^6$ & $40 \; (7.80)$& $40 \; (8.04)$& $40 \; (7.89)$& $33 \; (6.70)$\\ \cline{3-7} \hline \end{tabular} \caption{Number of iterations and wall-clock time for one solve with the Brezzi--Pitk\"{a}ranta stabilized method} \label{table:BP-precond} \end{center} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{>{\small}c | >{\small}c | >{\small}c || >{\small}c >{\small}c >{\small}c >{\small}c } \hline & & & \multicolumn{4}{>{\small}c}{$\kappa$} \\ $N$ & $\mu$ & $\lambda/\mu$ & $10^0$ & $10^{-3}$ & $10^{-6}$ & $10^{-9}$ \\ \hline \hline \multirow{9}{*} {$16$} & \multirow{3}{*} {$10^0$} & $10^0$ & $24 \; (0.08)$& $38 \; (0.12)$& $37 \; (0.12)$& $37 \; (0.12)$\\ & & $10^3$ & $33 \; (0.11)$& $30 \; (0.10)$& $39 \; (0.13)$& $36 \; (0.12)$\\ & & $10^6$ & $33 \; (0.11)$& $29 \; (0.10)$& $22 \; (0.07)$& $30 \; (0.10)$\\ \cline{3-7} &
\multirow{3}{*} {$10^3$} & $10^0$ & $20 \; (0.07)$& $23 \; (0.08)$& $37 \; (0.12)$& $35 \; (0.11)$\\ & & $10^3$ & $31 \; (0.10)$& $32 \; (0.11)$& $29 \; (0.09)$& $38 \; (0.12)$\\ & & $10^6$ & $31 \; (0.10)$& $32 \; (0.10)$& $28 \; (0.09)$& $22 \; (0.08)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $19 \; (0.06)$& $20 \; (0.07)$& $23 \; (0.08)$& $37 \; (0.12)$\\ & & $10^3$ & $31 \; (0.10)$& $31 \; (0.10)$& $32 \; (0.10)$& $29 \; (0.09)$\\ & & $10^6$ & $31 \; (0.10)$& $31 \; (0.10)$& $32 \; (0.10)$& $28 \; (0.09)$\\ \cline{3-7} \cline{2-7} \multirow{9}{*} {$32$} & \multirow{3}{*} {$10^0$} & $10^0$ & $26 \; (0.13)$& $42 \; (0.20)$& $41 \; (0.20)$& $41 \; (0.20)$\\ & & $10^3$ & $38 \; (0.19)$& $34 \; (0.17)$& $43 \; (0.21)$& $40 \; (0.19)$\\ & & $10^6$ & $38 \; (0.19)$& $33 \; (0.16)$& $26 \; (0.13)$& $33 \; (0.16)$\\ \cline{3-7} & \multirow{3}{*} {$10^3$} & $10^0$ & $21 \; (0.11)$& $24 \; (0.12)$& $40 \; (0.20)$& $39 \; (0.19)$\\ & & $10^3$ & $36 \; (0.18)$& $36 \; (0.19)$& $32 \; (0.17)$& $42 \; (0.22)$\\ & & $10^6$ & $36 \; (0.18)$& $36 \; (0.19)$& $32 \; (0.16)$& $24 \; (0.13)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $21 \; (0.11)$& $21 \; (0.11)$& $24 \; (0.13)$& $40 \; (0.20)$\\ & & $10^3$ & $35 \; (0.18)$& $36 \; (0.19)$& $36 \; (0.19)$& $32 \; (0.17)$\\ & & $10^6$ & $35 \; (0.18)$& $36 \; (0.19)$& $37 \; (0.19)$& $32 \; (0.16)$\\ \cline{3-7} \cline{2-7} \multirow{9}{*} {$64$} & \multirow{3}{*} {$10^0$} & $10^0$ & $26 \; (0.33)$& $43 \; (0.54)$& $43 \; (0.50)$& $43 \; (0.49)$\\ & & $10^3$ & $39 \; (0.48)$& $35 \; (0.43)$& $46 \; (0.56)$& $41 \; (0.48)$\\ & & $10^6$ & $39 \; (0.47)$& $35 \; (0.43)$& $27 \; (0.34)$& $35 \; (0.43)$\\ \cline{3-7} & \multirow{3}{*} {$10^3$} & $10^0$ & $22 \; (0.28)$& $25 \; (0.31)$& $41 \; (0.52)$& $40 \; (0.46)$\\ & & $10^3$ & $37 \; (0.46)$& $38 \; (0.47)$& $34 \; (0.41)$& $43 \; (0.55)$\\ & & $10^6$ & $37 \; (0.45)$& $38 \; (0.48)$& $34 \; (0.42)$& $25 \; (0.33)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & 
$10^0$ & $21 \; (0.28)$& $22 \; (0.28)$& $24 \; (0.32)$& $41 \; (0.48)$\\ & & $10^3$ & $36 \; (0.42)$& $37 \; (0.46)$& $38 \; (0.50)$& $33 \; (0.43)$\\ & & $10^6$ & $36 \; (0.44)$& $37 \; (0.43)$& $38 \; (0.45)$& $33 \; (0.39)$\\ \cline{3-7} \cline{2-7} \multirow{9}{*} {$128$} & \multirow{3}{*} {$10^0$} & $10^0$ & $28 \; (1.25)$& $46 \; (1.98)$& $46 \; (1.86)$& $46 \; (1.77)$\\ & & $10^3$ & $42 \; (1.82)$& $38 \; (1.67)$& $49 \; (2.13)$& $43 \; (1.62)$\\ & & $10^6$ & $42 \; (1.67)$& $37 \; (1.48)$& $28 \; (1.12)$& $37 \; (1.58)$\\ \cline{3-7} & \multirow{3}{*} {$10^3$} & $10^0$ & $24 \; (1.04)$& $27 \; (1.06)$& $43 \; (1.70)$& $44 \; (1.65)$\\ & & $10^3$ & $40 \; (1.75)$& $40 \; (1.78)$& $37 \; (1.61)$& $46 \; (2.02)$\\ & & $10^6$ & $40 \; (1.58)$& $40 \; (1.71)$& $35 \; (1.53)$& $28 \; (1.18)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $23 \; (1.03)$& $24 \; (1.07)$& $26 \; (1.18)$& $43 \; (1.80)$\\ & & $10^3$ & $38 \; (1.64)$& $39 \; (1.56)$& $40 \; (1.83)$& $36 \; (1.68)$\\ & & $10^6$ & $38 \; (1.82)$& $39 \; (1.72)$& $40 \; (1.72)$& $35 \; (1.47)$\\ \cline{3-7} \cline{2-7} \multirow{9}{*} {$256$} & \multirow{3}{*} {$10^0$} & $10^0$ & $29 \; (5.25)$& $46 \; (9.32)$& $49 \; (7.97)$& $48 \; (7.97)$\\ & & $10^3$ & $50 \; (9.03)$& $44 \; (8.35)$& $57 \; (10.94)$& $54 \; (9.90)$\\ & & $10^6$ & $50 \; (9.34)$& $44 \; (8.31)$& $33 \; (6.76)$& $43 \; (8.90)$\\ \cline{3-7} & \multirow{3}{*} {$10^3$} & $10^0$ & $25 \; (4.85)$& $28 \; (5.29)$& $45 \; (9.21)$& $47 \; (8.14)$\\ & & $10^3$ & $48 \; (9.19)$& $48 \; (9.40)$& $43 \; (7.84)$& $55 \; (10.93)$\\ & & $10^6$ & $48 \; (9.76)$& $48 \; (9.52)$& $43 \; (8.55)$& $32 \; (6.44)$\\ \cline{3-7} & \multirow{3}{*} {$10^6$} & $10^0$ & $24 \; (4.82)$& $24 \; (4.84)$& $27 \; (5.40)$& $44 \; (9.02)$\\ & & $10^3$ & $46 \; (8.77)$& $47 \; (8.23)$& $47 \; (8.00)$& $42 \; (7.22)$\\ & & $10^6$ & $46 \; (7.82)$& $47 \; (7.97)$& $48 \; (8.07)$& $42 \; (7.11)$\\ \cline{3-7} \hline \end{tabular} \caption{Number of iterations and 
wall-clock time for one solve with the $\mc{P}_1$--$\mc{P}_0$ stabilized method} \label{table:CGDG-precond} \end{center} \end{table} \section{Numerical results} In this section we present the results of numerical experiments. All numerical experiments are performed with FEniCS version 2017.2.0. In the first set of experiments, we verify the convergence of the finite element methods. The computational domain $\Omega$ is the unit square $[0,1]\times [0,1]$, divided into $N \times N$ uniform squares, i.e., $h = 1/N$; each square is then divided into two triangles to obtain the triangulation $\mathcal{T}_h$. To illustrate the convergence of the errors, we consider a manufactured solution of the problem with \algns{ \bs{u} = \pmat{\sin(\pi x) \sin(1 + t) \\ \sin y \sin t }, \qquad p = x^2 y^2 \cos t } and parameters $\mu = 10$, $\lambda = 15$, $\alpha = 1$, $s_0 = 1$, $\kappa = 1$. For boundary conditions we impose Dirichlet conditions for $\bs{u}$ on $\Gamma_d := \{0\} \times [0,1] \cup \{1\} \times [0,1]$ and for $p_{p}$ on $\Gamma_p := \partial \Omega$. We consider the lowest order Taylor--Hood element, the Brezzi--Pitk\"{a}ranta stabilized method (cf. \eqref{eq:stab-method1}), and the $\mc{P}_1$--$\mc{P}_0$ stabilized method (cf. \eqref{eq:stab-method2}). We use the backward Euler time discretization with time step $\Delta t = h^2$, and the errors are computed at $t = 0.5$. Convergence rates of the errors under mesh refinement are given in Tables~\ref{TH-conv}--\ref{CGDG-conv}. Although parameter-robust preconditioning for mixed methods is already studied in \cite{LeeEtAl2017}, we show the results of the mixed method and the stabilized methods for comparison.
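As a quick sanity check (not part of the paper's computations), the rate columns can be reproduced directly from the reported errors: since $h$ is halved at each refinement, the observed rate between consecutive errors $e_1, e_2$ is $\log_2(e_1/e_2)$. A minimal sketch, using the first error column of Table~\ref{TH-conv}:

```python
import math

# ||p_t - p_{t,h}||_{L^2} errors for N = 8, 16, 32, 64, 128 (Table "TH-conv")
errors = [4.342e-02, 1.071e-02, 2.669e-03, 6.668e-04, 1.667e-04]

# rate between consecutive refinements: log2(e_coarse / e_fine)
rates = [math.log2(e1 / e2) for e1, e2 in zip(errors, errors[1:])]
rounded = [round(r, 2) for r in rates]  # matches the table's rate column
```

The rounded values reproduce the printed rates $2.02, 2.00, 2.00, 2.00$, confirming the expected second-order convergence of the Taylor--Hood discretization.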
To construct preconditioners based on \eqref{eq:precond} for the mixed methods, we use the algebraic multigrid method for the $\bs{u}$ and $p_{p}$ blocks but the Jacobi preconditioner for the $p_{t}$ block, as in \cite{LeeEtAl2017}: \algns{ \pmat{\text{AMG} (A_{\bs{u}}) & & \\ & \text{Jacobi}(A_{p_{t}}) & \\ & & \text{AMG}(A_{p_{p}}) } } where $A_{\bs{u}}$, $A_{p_{t}}$, $A_{p_{p}}$ are the matrices obtained from the bilinear forms \algns{ (2 \mu \epsilon(\bs{u}), \epsilon(\bs{v})), \quad ((2\mu)^{-1} p_{t}, q_{t}) , \quad ((s_0 + \alpha^2 \lambda^{-1}) p_{p}, q_{p}) + (\underline{\bs{\kappa}} \nabla p_{p}, \nabla q_{p}) . } For the stabilized methods our preconditioners have the form \algns{ \pmat{\text{AMG} (A_{\bs{u}}) & & \\ & \text{AMG}(A_{p_{t}}) & \\ & & \text{AMG}(A_{p_{p}}) } } where $A_{\bs{u}}$, $A_{p_{t}}$, $A_{p_{p}}$ are the matrices obtained from \algns{ (2 \mu \epsilon(\bs{u}), \epsilon(\bs{v})), \quad ((2\mu)^{-1} p_{t}, q_{t}) + s_h(p_{t}, q_{t}), \quad ((s_0 + \alpha^2 \lambda^{-1}) p_{p}, q_{p}) + (\underline{\bs{\kappa}} \nabla p_{p}, \nabla q_{p}) } for each stabilized method, and the MinRes algorithm is used as the iterative solver. For the algebraic multigrid method we use the Hypre AMG package. To test the robustness of these preconditioners under mesh refinement and varying parameter values, we consider the cases $N = 16, 32, 64, 128, 256$, $\mu = 1, 10^3, 10^6$, $\lambda/\mu = 1, 10^3, 10^6$, and scalar $\kappa = 1, 10^{-3}, 10^{-6}, 10^{-9}$. In each case we solve only the static problem with randomly generated right-hand side vectors, record the number of iterations needed to reach a relative tolerance of $10^{-6}$, and measure the wall-clock time for one solve by averaging over 10 solves with different right-hand side vectors. The results are given in Tables~\ref{table:TH-precond}--\ref{table:CGDG-precond}.
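The observed parameter robustness can be illustrated with a deliberately simplified caricature (a hypothetical scalar model, not the discrete operators above): if each preconditioner block is a spectrally equivalent approximation of the corresponding parameter-weighted form, the condition number of the preconditioned operator depends only on the quality of the block approximations, not on $\mu$, $\lambda$, or $\kappa$ — which is why Krylov iteration counts stay flat across the parameter sweep.

```python
# Toy scalar caricature of the block-diagonal preconditioner: each "block"
# of the system and of the preconditioner is a single positive weight taken
# from the parameter-dependent bilinear forms. "quality" models inexact
# (AMG-like) block inverses that are spectrally equivalent up to these factors.
def preconditioned_condition(mu, lam, s0, alpha, kappa, quality=(0.9, 1.1, 1.05)):
    # diagonal "operator": displacement, total pressure, pore pressure blocks
    blocks_A = [2.0 * mu, 1.0 / (2.0 * mu), s0 + alpha ** 2 / lam + kappa]
    blocks_P = [a * q for a, q in zip(blocks_A, quality)]  # inexact blocks
    eigs = [a / p for a, p in zip(blocks_A, blocks_P)]     # spectrum of P^{-1} A
    return max(eigs) / min(eigs)

# the condition number is the same for every parameter combination
baseline = preconditioned_condition(1.0, 1.0, 1.0, 1.0, 1.0)
for mu in (1.0, 1e3, 1e6):
    for kappa in (1.0, 1e-6, 1e-9):
        c = preconditioned_condition(mu, 1e6 * mu, 1.0, 1.0, kappa)
        assert abs(c - baseline) < 1e-12
```

In the scalar model the parameter weight cancels exactly in each block; in the finite element setting the same cancellation holds up to the norm-equivalence constants of Theorem-level stability, which is the content of the operator preconditioning argument.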
One can see that the iteration counts in Tables~\ref{table:TH-precond}--\ref{table:CGDG-precond} are quite robust with respect to the parameter values and mesh refinements. The stabilized methods require significantly less computational time on the same meshes, so they can be advantageous for accelerating simulations, but the price to pay is their lower accuracy, as seen before. \section{Conclusion} In this paper we studied the three-field formulation of the Biot model which has the displacement, the total pressure, and the pore pressure as unknowns. We first carried out a comprehensive investigation of the a priori estimates for the continuous problem. Then we studied finite element discretizations with parameter-robust stability and parameter-robust preconditioning of the discretizations. For the finite element discretizations we considered standard mixed finite elements as well as stabilized methods for the Stokes equations, and complete error estimates for the semidiscrete solutions of the Biot model were proved. For parameter-robust preconditioning, we showed parameter-robust stability of the system and derived an abstract form of robust preconditioners. The theoretical results are illustrated with numerical experiments. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
https://arxiv.org/abs/1108.1888
3-manifolds with nonnegative Ricci curvature
For a noncompact 3-manifold with nonnegative Ricci curvature, we prove that either it is diffeomorphic to $\mathbb{R}^3$ or the universal cover splits. As a corollary, it confirms a conjecture of Milnor in dimension 3.
\section{\bf{Introduction}} Let $M$ be a complete manifold with nonnegative Ricci curvature. It is a fundamental question in geometry to determine the restrictions this places on the topology of $M$. Recall that in the 2-dimensional case, the Ricci curvature is the same as the Gaussian curvature $K$. It is a well-known result that if $K \geq 0$, the universal cover is conformal to either $\mathbb{S}^2$ or $\mathbb{C}$. Let us consider 3-manifolds with nonnegative Ricci curvature. Using the Ricci flow, Hamilton \cite{[H]} classified all compact 3-manifolds with nonnegative Ricci curvature: he proved that the universal cover is diffeomorphic to $\mathbb{S}^3$, $\mathbb{S}^2 \times \mathbb{R}$, or $\mathbb{R}^3$. In the latter two cases, the metric is a Riemannian product over each $\mathbb{R}$ factor. For the noncompact case, there are some partial classification results. Anderson--Rodriguez \cite{[AR]} and Shi \cite{[Sh]} classified these manifolds under an upper bound on the sectional curvature. Zhu \cite{[Z1]} proved that if the volume grows like $r^3$, then the manifold is contractible. Based on Schoen and Yau's work \cite{[SY1]}, Zhu \cite{[Z2]} also proved that if the Ricci curvature is quasi-positive, then the manifold is diffeomorphic to $\mathbb{R}^3$. In the late 1970s, Yau initiated a program of using minimal surfaces to study 3-manifolds, and this method has turned out to be very powerful. For example, Schoen and Yau proved the famous positive mass conjecture \cite{[SY2]}\cite{[SY3]}, and Meeks and Yau \cite{[MY1]}\cite{[MY2]} proved the loop theorem, the sphere theorem, and the Dehn lemma, together with their equivariant forms. In \cite{[SY1]}, Schoen and Yau proved that a complete noncompact 3-manifold with positive Ricci curvature is diffeomorphic to $\mathbb{R}^3$; they also announced the classification of complete noncompact 3-manifolds with nonnegative Ricci curvature. In this note we classify complete noncompact 3-manifolds with nonnegative Ricci curvature in full generality.
The proof is based on the minimal surface theory developed by Schoen and Fischer-Colbrie \cite{[FS]}, Schoen and Yau \cite{[SY1]}, and Schoen \cite{[S]}. We will use the following theorem frequently. \begin{theorem}\label{thm1}(Schoen--Yau \cite{[SY1]}) Let $M^3$ be a complete 3-manifold with nonnegative Ricci curvature. Let $\Sigma$ be a complete oriented stable minimal surface in $M$. Then $\Sigma$ is totally geodesic, and the Ricci curvature of $M$ normal to $\Sigma$ vanishes at all points on $\Sigma$. \end{theorem} Our main result is the following: \begin{theorem}\label{thm2} Let $M^3$ be a complete noncompact 3-manifold with nonnegative Ricci curvature. Then either $M^3$ is diffeomorphic to $\mathbb{R}^3$ or the universal cover of $M^3$ is isometric to a Riemannian product $N^2 \times \mathbb{R}$, where $N^2$ is a complete 2-manifold with nonnegative sectional curvature. \end{theorem} In \cite{[M]}, Milnor proposed the following conjecture: \begin{conj} If a complete manifold has nonnegative Ricci curvature, then the fundamental group is finitely generated. \end{conj} \begin{cor} Milnor's conjecture is true in dimension $3$. \end{cor} \noindent\emph{Proof of the corollary.} If $M$ is diffeomorphic to $\mathbb{R}^3$, then the conclusion is obvious. Otherwise, by Theorem \ref{thm2}, $M$ has nonnegative sectional curvature. Hence the corollary follows from a result of Gromov \cite{[G]}. \qed \begin{center} \bf {\quad Acknowledgment} \end{center} The author would like to thank Professors Richard Schoen, Jiaping Wang, and Shing-Tung Yau for their interest in this note. He also thanks Chenxu He for informing him of the paper \cite{[E]}. \section{\bf{Proof of the Theorem}} \noindent\emph{Proof of Theorem \ref{thm2}.} We assume $M$ is not flat; otherwise the conclusion is obvious. Let us review Schoen and Yau's argument in \cite{[SY1]}. Assume first that $M$ is simply connected. If $\pi_2(M) \neq 0$, then according to Lemma 2 in \cite{[SY1]}, $M$ must have at least two ends.
From the Cheeger--Gromoll splitting theorem \cite{[CG]}, the universal cover splits. So we assume $\pi_2(M) = 0$. Therefore, the universal cover of $M$ is contractible. If $M$ is not simply connected, Schoen and Yau \cite{[SY1]} proved that $\pi_1(M)$ must have no torsion elements. Thus, after replacing $M$ by a suitable covering, we may assume that $\pi_1(M) = \mathbb{Z}$ and that $M$ is orientable. Let $\gamma$ be a Jordan curve representing the generator of the fundamental group of $M$. Consider an exhaustion of $M$ by $\Omega_i$, where $\partial \Omega_i$ is a disjoint union of smooth 2-manifolds. We may assume that $\gamma$ lies in each $\Omega_i$. By Poincar\'{e} duality for manifolds with boundary, there exists an oriented surface $\Sigma_i \subset \Omega_i$ such that $\partial \Sigma_i \subset \partial \Omega_i$; moreover, the oriented intersection number of $\Sigma_i$ with $\gamma$ is 1. We would like to minimize the area among all surfaces which are in the same homology class as $\Sigma_i$ and have the same boundary as $\Sigma_i$. We can perturb the metric near $\partial \Omega_i$ such that the mean curvature is positive with respect to the outer normal vector. So there exists a minimizing surface for each $i$, which we still call $\Sigma_i$. For each $i$, the intersection of $\Sigma_i$ with $\gamma$ is nonempty. Therefore, a subsequence of $\Sigma_i$ converges to an oriented stable minimal surface $\Sigma$ in $M$. If the Ricci curvature is strictly positive on $M$, then this contradicts Theorem \ref{thm1}. Let us deal with the case when the Ricci curvature is nonnegative. For a fixed point $p \in M$, we may assume that $p$ does not lie on $\gamma$; otherwise we perturb $\gamma$ a little bit so that $p$ is not on $\gamma$.
According to the result in \cite{[E]} by Ehrlich, we can perturb the metric such that the Ricci curvature is strictly positive in a small annulus around $p$, while the metric remains the same outside the annulus (this means that inside the ball bounded by the annulus, the Ricci curvature might be negative). For the reader's convenience, we give the details as follows. According to the well-known formula, if $g(t) = e^{2tf}g_0$ and $|v|_{g(0)} = 1$, then $$Ric^t(v, v) = e^{-2tf}(Ric(v, v) - t(n-2)\nabla^2 f(v, v) - t\Delta f + t^2(n-2)(v(f)^2-|\nabla f|^2))$$ where $n = \dim(M) = 3$. Define $r$ to be the distance function to $p$. For a very small $R > 0$, consider the function $\rho = R-r$ for $\frac{R}{2}< r < R$. Then we extend $\rho$ to be a positive smooth function for $0 \leq r < \frac{R}{2}$. Define $f = -\rho^5$; then for $|v| = 1$, $$Ric^t(v, v)=e^{2t\rho^5}(Ric(v, v) + t(n-2)\nabla^2 (\rho^5)(v, v) + t\Delta (\rho^5) + t^2(n-2)(v(\rho^5)^2-|\nabla \rho^5|^2)).$$ Now $\nabla^2 (\rho^5)(v, v) = 20\rho^3v(\rho)^2+ 5\rho^4\nabla^2(\rho)(v, v)$; therefore, \begin{equation} Ric^t(v, v) \geq e^{2t\rho^5}(Ric(v, v) + 20t\rho^3+ 5t\rho^4(\Delta \rho+ (n-2)\nabla^2 (\rho)(v, v)) -25(n-2)t^2\rho^8). \end{equation} From now on, we restrict $r$ such that $\lambda R < r < R$, where $\lambda > \frac{1}{2}$ is to be determined. Using the fact that near $p$ the manifold is almost Euclidean, for small $R$ we have $$|\Delta \rho + (n-2)\nabla^2 \rho(v, v)| \leq \frac{9(2n-3)}{8(R-\rho)}.$$ We plug this into (1). So for all small $t$, $g(t)$ has strictly positive Ricci curvature in the annulus $B_p(R)\backslash B_p(\lambda R)$ for $\lambda = \frac{7}{8}$. The metric remains the same outside $B_p(R)$. The deformation is $C^4$ continuous with respect to the metric and $C^{\infty}$ with respect to $t$.
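For completeness, the expansion of the Laplacian term used in passing to (1), which the computation above leaves implicit, follows from $|\nabla \rho| = 1$ where $\rho = R - r$ is smooth: $$\Delta (\rho^5) = 20\rho^3 |\nabla \rho|^2 + 5\rho^4 \Delta \rho = 20\rho^3 + 5\rho^4 \Delta \rho, \qquad |\nabla \rho^5|^2 = 25\rho^8.$$ Inequality (1) then results from dropping the nonnegative terms $t(n-2)\,20\rho^3 v(\rho)^2$ and $t^2(n-2)\,v(\rho^5)^2$.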
We apply this perturbation finitely many times so that the Ricci curvature is positive on $\gamma$ (each time we perturb the metric a little bit around a point) and the Ricci curvature is nonnegative except in a small neighborhood of $p$. Then we can minimize the area as before. This will yield a complete stable minimal surface $\Sigma$. Now the claim is that $\Sigma$ must pass through the small neighborhood of $p$. If this is not true, then along $\Sigma$ the Ricci curvature is nonnegative, while the normal Ricci curvature is strictly positive somewhere on $\gamma$. This contradicts Theorem \ref{thm1}. Using $t$ to denote the deformation parameter, we shrink the size of the neighborhood of $p$ where the Ricci curvature might be negative. So we get a sequence of metrics on $M$ and, for each metric, a stable minimal surface passing through a small neighborhood of $p$. We may let $t\to 0$ sufficiently fast so that these metrics converge to the initial metric in the $C^4$ sense. Taking the limit of a subsequence of these complete minimal surfaces, we obtain a complete oriented stable minimal surface passing through $p$, with respect to the initial metric. According to Theorem \ref{thm1}, this surface is totally geodesic with vanishing normal Ricci curvature. Since the manifold is not flat, there exists a neighborhood $U$ such that the scalar curvature is strictly positive in $U$. Consider a point $p \in U$ and a sequence of points $p_i \to p$, where all $p_i \in U$. Through each $p_i$, there exists a complete totally geodesic surface $H_i$. So a subsequence of $H_i$ converges to a complete totally geodesic surface $H$ through $p$. We assume that the normal vector of $H_i$ at $p_i$ converges to the normal vector of $H$ at $p$. We can choose the $p_j$ so that for any $j > i$, $p_j$ does not lie on $H_i$. Therefore, for all large $i$, $H_i$ does not coincide with $H$. By the assumption on $U$, $H_i$ and $H$ are not flat.
They have nonnegative sectional curvature, so they are conformal to $\mathbb{C}$. The normal bundle is trivial. We denote the unit normal vector of $H$ by $N$. For any $x \in H$, when $k$ is very large, we shall construct a piece $\Sigma_k \subset H_k$. For a shortest geodesic on $H$ connecting $p$ and $x$, we write $x = \exp_p(v)$ where $v \in T_{p}{H}$. If the geodesic is not unique, then we just choose one. We parallel transport the vector $v$ along the shortest geodesic connecting $p$ and $p_k$ to obtain a tangent vector $u_k$ at $p_k$. Then we project $u_k$ to $T_{p_k}(H_k)$ to get $v_k \in T_{p_k}(H_k)$. Define a point $x_k = \exp_{p_k}v_k$. Since we may have multiple choices of $v$, the $x_k$ may differ. However, when $k$ is very large, these $x_k$ are close to $x$, since $p_k \to p$ and the normal vector of $H_k$ at $p_k$ converges to the normal vector of $H$ at $p$. Moreover, these $x_k$ belong to the same piece of $H_k$, i.e., the $H_k$-distances between them are very small, since $H_k$ and $H$ are simply connected. Let $r = \frac{1}{10}inj_M(x)$, where $inj_M(x)$ denotes the injectivity radius of $M$ at $x$. Define $\Sigma_k = B_{H_k}(x_k, r)$. From the construction of $x_k$, for $k$ large, the normal vector of $H$ at $x$ and the normal vector of $H_k$ at $x_k$ are close in the obvious sense, as the normal vectors of $H$ and $H_k$ are parallel along each surface. Since $x_k$ is very close to $x$, $inj_M(x_k) \geq \frac{1}{2}inj_M(x) \geq r$. Therefore $dist_M(\partial B_{H_k}(x_k, r), x) \geq r - dist_M(x_k, x) > 5 dist_M(x, x_k)$ for $k$ large. Thus if $l$ is the normalized shortest geodesic connecting $x$ and $\Sigma_k$, then $l$ intersects the interior of $\Sigma_k$, say at the point $\overline x_k$. The triangle inequality implies that $dist_{H_k}(x_k, \overline x_k) \leq 2 dist_M(x, x_k)$. Therefore, the unit normal vector of $H$ at $x$ and the unit normal vector of $H_k$ at $\overline x_k$ are close in the obvious sense.
Denote the initial tangent vector of $l$ at $x$ by $e$. The oriented distance is defined by $d_k(x) = dist_M(x, \Sigma_k)Sign(\langle e, N\rangle)$ for $x \in H$, where $Sign(t) = 1$ when $t > 0$, $Sign(t) = -1$ when $t < 0$, and $Sign(t) = 0$ when $t = 0$. For any $x \in H$, $d_k(x)$ is well defined and smooth for $k$ sufficiently large. Via the second variation of arc length, there is a nice pinching estimate for the Hessian of $d_k(x)$ when $d_k(x)$ is very small, namely, $$-d_k(x)(R_{NijN}+Sign(d_k(x))\epsilon(k, x)) \leq (d_k(x))_{ij} \leq -d_k(x)(R_{NijN}-Sign(d_k(x))\epsilon(k, x))$$ where $\lim\limits_{k \to \infty}\epsilon(k, x) = 0$ and the convergence is uniform on any compact set of $H$. In the above estimate, we have used the fact that for $k$ large, the normal direction of $H_k$ at $\overline x_k$ and the normal direction of $H$ at $x$ are close in the obvious sense. Since $d_k$ does not vanish identically, after a suitable rescaling, a subsequence converges to a nonzero function $f$ as $k \to \infty$. Then $f$ satisfies \begin{equation} f_{ij} + fR_{NijN} = 0 \end{equation} where $f_{ij}$ is the Hessian of $f$ on $H$ with the induced metric. Moreover, $\Delta f = 0$: taking the trace of (2) gives $\Delta f + f\,Ric(N, N) = 0$, and the normal Ricci curvature $Ric(N, N)$ vanishes identically. \begin{remark} We use the rescaled distance function to approximate the variational vector field on $H$. If the surfaces $H_k$ and $H$ are properly embedded, then we can simply define $d_k(x) = dist_M(x, H_k)Sign(\langle e, N\rangle)$. We define the function $d_k(x)$ as in the last paragraph because in the final part of the paper, when we try to show that $M$ is simply connected at infinity, we obtain stable minimal surfaces which could be immersed and improper. \end{remark} \begin{lemma} $f$ is constant. \end{lemma} \begin{proof} First, $H$ is conformal to $\mathbb{C}$, since it is not flat and the Gaussian curvature is nonnegative.
We may assume that $f$ changes sign; otherwise, by the Liouville property for positive harmonic functions on $H$, $f$ is constant. We observe that the vanishing set of $f$ consists of geodesics on $H$, since $\nabla f$ is parallel along the vanishing set of $f$ (the Hessian of $f$ vanishes where $f$ vanishes; see (2)). Moreover, these geodesics do not intersect: otherwise $\nabla f = 0$ along one geodesic, and combining this with (2) we would find $f \equiv 0$, a contradiction. Now suppose the zero set of $f$ contains at least two distinct geodesics, say $L_1, L_2$. We claim that $L_1, L_2$ are proper on $H$. The reason is this: we can write $f$ as the real part of a holomorphic function $h = f + ig$, since $f$ is harmonic. By the Cauchy--Riemann equations, along the vanishing set of $f$, $g$ is strictly monotonic, and $|\nabla g|$ is constant along $L_1$ and $L_2$ (since $|\nabla f|$ is constant on each of these two geodesics). But $|h|$ is bounded on any compact subset of $H$; therefore $L_1$ and $L_2$ are properly embedded in $H$. Consider the function $d(x) = dist_H(x, L_2)$ for $x \in L_1$. From the Hessian comparison, we can show that $d'' \leq 0$. Since $L_1$ and $L_2$ never intersect, $d > 0$, and a positive concave function on $\mathbb{R}$ is constant, so $d(x) \equiv d_0$. Using the Hessian comparison again, we find the metric to be flat in the domain $\Omega$ bounded by $L_1$ and $L_2$ on $H$. Therefore the scalar curvature of the ambient space vanishes on $\Omega$. Considering (2), we find that $f$ is linear on $\Omega$. However, the vanishing set of $f$ has two components, which is a contradiction. Thus the vanishing set of $f$ consists of a single geodesic. By the monotonicity of $g$, for any $t \in \mathbb{R}$, there exists exactly one solution to the equation $h(z) = it$. By the big Picard theorem for entire functions, infinity cannot be an essential singularity of the entire function $h$, since $h$ takes each value $it$ only once. Therefore, $h$ is a polynomial.
Using again that there exists exactly one solution to the equation $h(z) = it$, we find $h$ to be a linear function. After a conformal transformation, we may assume $f = x$ on the complex plane. Suppose the metric on $H$ is given by $ds^2 = e^{2\rho}(dx^2 + dy^2)$ in Cartesian coordinates on $\mathbb{C}$. Let $e_1 = \frac{\partial}{\partial x}, e_2 = \frac{\partial}{\partial y}$; then $$\langle \nabla_{e_1}e_1, e_1\rangle = e^{2\rho}\rho_1, \quad \langle \nabla_{e_1}e_1, e_2\rangle = -\langle \nabla_{e_2}e_1, e_1\rangle = -e^{2\rho}\rho_2.$$ Therefore $$\nabla_{e_1}e_1 = \rho_1e_1 - \rho_2e_2.$$ Similarly $$\nabla_{e_1}e_2 = \nabla_{e_2}e_1 = \rho_2e_1 + \rho_1e_2, \quad \nabla_{e_2}e_2 = \rho_2e_2 - \rho_1e_1.$$ So the Hessian of $f$ is given by $$f_{11} = 0 - (\nabla_{e_1}e_1)f = -\rho_1, \quad f_{12} = 0 - (\nabla_{e_1}e_2)f = -\rho_2, \quad f_{22} = 0 - (\nabla_{e_2}e_2)f = \rho_1.$$ Let us write (2) as $f_{ij} + f\tau_{ij} = 0$. Therefore, the norm of the tensor $\tau$ is $$|\tau_{ij}| = \frac{\sqrt{2}|\nabla_E \rho|}{|x|e^{2\rho}}$$ (here $\nabla_E$ and $\Delta_E$ denote the gradient and the Laplacian with respect to the standard metric on $\mathbb{C}$); indeed, from $\tau_{ij} = -f_{ij}/f$ with $f = x$ we get $\tau_{11} = -\tau_{22} = \rho_1/x$ and $\tau_{12} = \rho_2/x$, so $|\tau|^2 = e^{-4\rho}\sum_{i,j}\tau_{ij}^2 = 2|\nabla_E\rho|^2/(x^2e^{4\rho})$. Since the Ricci curvature of the ambient manifold is nonnegative and the normal Ricci curvature vanishes, $|\tau_{ij}| \leq \sqrt{2}K$, where $K = -\frac{\Delta_E \rho}{e^{2\rho}}$ is the Gaussian curvature of the surface. Therefore $$\frac{|\nabla_E \rho|}{|x|} \leq -\Delta_E \rho.$$ Let $h = -\rho$, so $$\Delta_E h \geq \frac{|\nabla_E h|}{|x|} \geq \frac{|\nabla_E h|}{r}$$ where $r^2 = x^2 + y^2$. By the Cohn--Vossen inequality, $\int K ds^2 \leq 2\pi$. Therefore, $$\int \frac{|\nabla_E h|}{|x|}dxdy \leq \int \Delta_E h\, dxdy< \infty.$$ Define $$g(t) = \int_{B(t)} \frac{|\nabla_E h|}{r}dxdy$$ where $B(t)$ is the Euclidean disk centered at the origin with radius $t$.
We have $$t\int_{\partial B(t)}\frac{|\nabla_E h|}{r}dl \geq \int_{B(t)} \Delta_E h\, dxdy \geq \int_{B(t)} \frac{|\nabla_E h|}{r}dxdy.$$ That is to say, $$tg' \geq g.$$ Solving this inequality and using that $g$ is bounded, we find that $g \equiv 0$: indeed, $(g(t)/t)' = (tg'(t)-g(t))/t^2 \geq 0$, so $g(t)/t$ is nondecreasing, and if $g(t_0) > 0$ for some $t_0 > 0$, then $g(t) \geq (g(t_0)/t_0)\,t$ for all $t \geq t_0$, contradicting the boundedness of $g$. Therefore $H$ is flat. But this contradicts the assumption that $H$ is not flat. Thus the lemma is proved. \end{proof} Plugging this result into (2), it turns out that $R_{iNNj} = 0$ on $H$, so in fact the rank of the Ricci curvature at $p$ is 2. Therefore, through each point close to $p$, there is a unique totally geodesic surface. From linear algebra, we see that these surfaces vary smoothly. By the calculus of variations, the variational vector field of each surface satisfies equation (2). According to the lemma, after a reparametrization, we may assume the variational vector fields of these surfaces are given by $\nu = N$. We call these surfaces $\Sigma_t$, $-\epsilon < t < \epsilon$. Given a point $x \in \Sigma_t$, if $X \in T_x\Sigma_t$, then $\nabla_XN=0$, as $\Sigma_t$ is totally geodesic. Since $N = \nu$, we may extend $X$ in a small neighborhood of $x$ in $M$ such that $X \in T\Sigma_t$ and $[X, N] = 0$. We have $\langle\nabla_NN, X\rangle = -\langle\nabla_NX, N\rangle= -\langle\nabla_XN, N\rangle = 0$. Since $X \in T_x\Sigma_t$ is arbitrary, $\nabla_NN = 0$. Thus the unit normal vector of these surfaces is parallel, and all the $\Sigma_t$ are isometric to $\Sigma_0$ via the integral curves of the variational vector field. Let $I$ be the maximal connected interval of $t$ such that there exists a local isometry $F: \Sigma \times I \to M$ with $F(\Sigma, 0) = \Sigma_0$. From the definition of $I$, it is easy to see that $I$ is closed. Let $c(t)$ denote the integral curve of the normal vector field $N$ with $c(0) = p$. Then the scalar curvature at $c(t)$ is the same for all $t \in I$, since $F$ is a local isometry.
Also, $I$ is open: since the scalar curvature at $c(t)$ is positive for any $t\in I$, we can extend $I$ a little beyond its endpoints. Therefore we have a local isometry $F: \Sigma \times \mathbb{R} \to M$, which means that the universal cover of $M$ splits. Now assume that $M$ is contractible. To prove that $M$ is diffeomorphic to $\mathbb{R}^3$, by a topological result of Stallings \cite{[St]}, it suffices to prove that $M$ is simply connected at infinity and irreducible. Suppose $M$ is not simply connected at infinity. This means that there exists a sequence of closed curves $\sigma_i$ tending to infinity such that for any immersed disk $D_i$ with $\partial D_i = \sigma_i$, we have $D_i \cap K \neq \emptyset$, where $K$ is a fixed compact set of $M$. We may assume these disks are area minimizing. By the compactness and regularity result in Theorem 3 of \cite{[S]}, a subsequence of $D_i$ converges to a complete stable minimal surface, which could be immersed and improper. We can then apply the same argument as before. For the reader's convenience, we give some details here. Given a point $p\in M$, we perturb the metric so that $Ric > 0$ in $K\backslash B_p(r)$ and $Ric \geq 0$ in $M\backslash B_p(r)$. Then for the perturbed metric, we have a complete immersed (not necessarily proper) stable minimal surface $\Sigma_i$ which intersects $K$, and thus intersects $B_p(r)$ at some $p_i$. The surfaces $(\Sigma_i, p_i)$ have uniform regularity in any compact set in $M$. When the perturbation becomes smaller and smaller, a subsequence of $(\Sigma_i, p_i)$ converges to a stable minimal surface $(\Sigma, p)$. According to Theorem \ref{thm1}, $\Sigma$ is totally geodesic and the normal Ricci curvature vanishes. Then we can use the arguments on pages 4, 5 and 6 to show that $M$ splits, which contradicts the assumption that $M$ is not simply connected at infinity. To prove that $M$ is irreducible, we can invoke the solution of the Poincaré conjecture by Perelman \cite{[P1]}\cite{[P2]}\cite{[P3]}.
Therefore $M$ is diffeomorphic to $\mathbb{R}^3$. This completes the proof of Theorem \ref{thm2}. \qed \bigskip
[arXiv:1108.1888, ``3-manifolds with nonnegative Ricci curvature'' (math.DG), 2011. Abstract: For a noncompact 3-manifold with nonnegative Ricci curvature, we prove that either it is diffeomorphic to $\mathbb{R}^3$ or the universal cover splits. As a corollary, it confirms a conjecture of Milnor in dimension 3.]
https://arxiv.org/abs/2003.02754
Generalizations of the Ruzsa-Szemerédi and rainbow Turán problems for cliques
Considering a natural generalization of the Ruzsa-Szemerédi problem, we prove that for any fixed positive integers $r,s$ with $r<s$, there are graphs on $n$ vertices containing $n^{r}e^{-O(\sqrt{\log{n}})}=n^{r-o(1)}$ copies of $K_s$ such that any $K_r$ is contained in at most one $K_s$. We also give bounds for the generalized rainbow Turán problem $\operatorname{ex}(n, H,$rainbow-$F)$ when $F$ is complete. In particular, we answer a question of Gerbner, Mészáros, Methuku and Palmer, showing that there are properly edge-coloured graphs on $n$ vertices with $n^{r-1-o(1)}$ copies of $K_r$ such that no $K_r$ is rainbow.
\section{Introduction}\label{sec.intrKrKsrainbow} The famous Ruzsa--Szemerédi or $(6,3)$-problem is to determine how many edges there can be in a 3-uniform hypergraph on $n$ vertices if no six vertices span three or more edges. This rather specific-sounding problem turns out to have several equivalent formulations, and bounds in both directions have had many applications. It is not difficult to prove an upper bound of $O(n^2)$: one first observes that if two edges have two vertices in common, then neither of them can intersect any other edges, and after removing all such pairs of edges one is left with a linear hypergraph, for which the bound is trivial. Brown, Erdős and Sós \cite{sos1973existence} gave a construction achieving $\Omega(n^{3/2})$ edges and asked whether the maximum is $o(n^2)$. The argument sketched in the previous paragraph shows that this question is equivalent to asking whether a graph on $n$ vertices such that no edge is contained in more than one triangle must contain $o(n^2)$ triangles. A positive answer to this question was given by Ruzsa and Szemerédi \cite{ruzsa1978triple}, who obtained a bound of $O(n^2/\log^*n)$ with the help of Szemerédi's regularity lemma. They also gave a construction showing that the number of triangles can be as large as $n^2e^{-O(\sqrt{\log n})}=n^{2-o(1)}$, so the exponent in their upper bound cannot be improved. One of the applications they gave of their upper bound was an alternative proof of Roth's theorem. Indeed, let $A$ be a subset of $\{1,\dots,N\}$ that contains no arithmetic progression of length $3$. Define a tripartite graph $G$ with vertex classes $X=\{1,2,\dots,N\}$, $Y=\{1,2,\dots,2N\}$ and $Z=\{1,2,\dots,3N\}$, where if $x\in X$, $y\in Y$ and $z\in Z$, then $xy$ is an edge if and only if $y-x\in A$, $yz$ is an edge if and only if $z-y\in A$ and $xz$ is an edge if and only if $(z-x)/2\in A$.
Note that these are the edges of the triangles with vertices belonging to triples of the form $(x,x+a,x+2a)$ with $x\in X$ and $a\in A$. If $xyz$ is a triangle in this graph, then $a=y-x, b=z-y, c=(z-x)/2$ satisfy $a,b,c\in A$ and $a+b=2c$, which gives us an arithmetic progression of length $3$ in $A$ unless $y-x=z-y$. Thus, the only triangles are the `degenerate' ones of the form $(x,x+a,x+2a)$, which implies that each edge is contained in at most one triangle. Therefore, the number of triangles is $o(n^2)$ (where $n=6N$). We also have that for each $a\in A$ there are $N$ triangles of the form $(x,x+a,x+2a)$, so $|A|=o(N)$. As Ruzsa and Szemerédi also observed, this argument can be turned round: it tells us that if $A$ has density $\alpha$, then there is a graph with $6N$ vertices and $\alpha N^2$ triangles such that each edge is contained in at most one triangle. Since Behrend proved \cite{behrend1946sets} that there exists a subset $A$ of $\{1,\dots,N\}$ of size $Ne^{-O(\sqrt{\log{N}})}$ that does not contain an arithmetic progression of length $3$, this gives the lower bound mentioned above. Several related questions have been studied, as well as applications and generalizations of the Ruzsa--Szemerédi problem: see for example \cite{alon2006extremal,alon2012nearly}. A natural generalization that we believe has not been considered is the following generalized Turán problem. \begin{question}\label{question_RuzsaSzemeredigen} Let $r$ and $s$ be positive integers with $1\leq r<s$. Let $G$ be a graph on $n$ vertices such that any of its subgraphs isomorphic to $K_r$ is contained in at most one subgraph isomorphic to $K_s$. What is the largest number of copies of $K_s$ that $G$ can contain? \end{question} The Ruzsa--Szemerédi problem is the case $r=2, s=3$ of Question \ref{question_RuzsaSzemeredigen}, and the answer is trivially $\Theta(n)$ if $r=1$. One can easily deduce from the graph removal lemma an upper bound of $o(n^r)$ when $r\geq 2$. 
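The correspondence between 3-AP-free sets and the ``each edge in at most one triangle'' property can be checked concretely. The following small sketch (the set $A$ and the parameters are our own toy choices, not from the paper) builds the tripartite graph above from a 3-AP-free set and confirms that its only triangles are the $N|A|$ degenerate ones, each edge lying in exactly one of them.

```python
from itertools import combinations
from collections import Counter

def roth_graph(N, A):
    """Tripartite graph on classes X, Y, Z whose edges are the sides of the
    'degenerate' triangles (x, x+a, x+2a) with x in X and a in A."""
    edges = set()
    for x in range(1, N + 1):
        for a in A:
            edges.add(frozenset({('X', x), ('Y', x + a)}))
            edges.add(frozenset({('Y', x + a), ('Z', x + 2 * a)}))
            edges.add(frozenset({('X', x), ('Z', x + 2 * a)}))
    return edges

def triangles(edges):
    """All triples of vertices whose three pairs are all edges."""
    verts = sorted({v for e in edges for v in e})
    return [t for t in combinations(verts, 3)
            if all(frozenset(p) in edges for p in combinations(t, 2))]

A = {1, 2, 4, 8}   # contains no 3-term arithmetic progression
N = 10
tris = triangles(roth_graph(N, A))
# how many triangles each edge belongs to
edge_use = Counter(frozenset(p) for t in tris for p, q in [(0, 0)] for p in combinations(t, 2))
edge_use = Counter(frozenset(p) for t in tris for p in combinations(t, 2))
```

At this toy scale the count is exactly $N|A| = 40$; a non-degenerate triangle would force an arithmetic progression $a, (a+b)/2, b$ inside $A$.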
In the case $r=2$, the construction for the lower bound can be generalized (for example, by using $h$-sum-free sets from \cite{alon2006characterization}) to get a lower bound of $n^2e^{-O(\sqrt{\log{n}})}$. However, there is no obvious way of generalizing the algebraic construction for $r\geq 3$. We shall present a geometric construction instead, in order to prove the following result, which is the first of the two main results of this paper. \begin{theorem}\label{theorem_KrKs} For each $1\leq r<s$ and positive integer $n$ there is a graph on $n$ vertices with $n^{r}e^{-O(\sqrt{\log{n}})}=n^{r-o(1)}$ copies of $K_s$ such that every $K_r$ is contained in at most one $K_s$. \end{theorem}\medskip We shall also use a modification of our construction to answer a question about rainbow colourings. Given an edge-colouring of a graph $G$, we say that a subgraph $H$ is \textit{rainbow} if all of its edges have different colours. We denote by $\operatorname{ex}^*(n,H)$ the maximal number of edges that a graph on $n$ vertices can contain if it can be properly edge-coloured (that is, no two edges of the same colour meet at a vertex) in such a way that it contains no rainbow copy of $H$. The rainbow Turán problem (i.e., the problem of estimating $\operatorname{ex}^*(n,H)$) was introduced by Keevash, Mubayi, Sudakov and Verstra\"ete \cite{keevash2007rainbow}, and was studied for several different families of graphs $H$, such as complete bipartite graphs \cite{keevash2007rainbow}, even cycles \cite{keevash2007rainbow,das2013rainbow} and paths \cite{johnston2016rainbow,ergemlidze2019rainbow}. Gerbner, Mészáros, Methuku and Palmer \cite{gerbner2019generalized} considered the following generalized rainbow Turán problem (analogous to a generalization of the usual Turán problem introduced by Alon and Shikhelman \cite{alon2016many}). 
Given two graphs $H$ and $F$, let $\operatorname{ex}(n,H,\textnormal{rainbow-}F)$ denote the maximal number of copies of $H$ that a properly edge-coloured graph on $n$ vertices can contain if it has no rainbow copy of $F$. Note that $\operatorname{ex}^*(n,H)$ is the special case $\operatorname{ex}(n,K_2,\textnormal{rainbow-}H)$. The authors of \cite{gerbner2019generalized} focused on the case $H=F$ and obtained several results, for example when $H$ is a path, cycle or a tree, and also gave some general bounds. One of their concluding questions was the following. \begin{question}[Gerbner, Mészáros, Methuku and Palmer \cite{gerbner2019generalized}]\label{question_rbKr} What is the order of magnitude of $\operatorname{ex}(n,K_r,\textnormal{rainbow-}K_r)$ for $r\geq 4$? \end{question} For fixed $r$, a straightforward double-counting argument shows that if $H$ has $r$ vertices, then $\operatorname{ex}(n,H,\textnormal{rainbow-}H)=O(n^{r-1})$. Indeed, if $G$ is a graph with $n$ vertices that contains no rainbow copy of $H$, then every copy of $H$ contains two edges of the same colour. But the number of such pairs of edges is at most $\binom n2\frac{n-2}2=O(n^3)$, since there are at most $\frac{n-2}2$ edges with the same colour as any given edge, and each such pair can be extended to at most $r!n^{r-4}$ copies of $H$. The authors above improved this bound to $o(n^{r-1})$, and gave an example that shows that $\operatorname{ex}(n,K_r,\textnormal{rainbow-}K_r)=\Omega(n^{r-2})$. They also asked whether there is a graph $H$ for which the exponent $r-1$ in the upper bound is sharp. Our next result shows that $H=K_r$ is such a graph. \begin{theorem}\label{theorem_rainbow} For each $r\geq 4$ we have $\operatorname{ex}(n,K_r,\textnormal{rainbow-}K_r)=n^{r-1-o(1)}$.
\end{theorem} Note that a triangle is always rainbow in a proper edge-colouring, so we trivially have $\operatorname{ex}(n,K_r,\textnormal{rainbow-}K_r)=0$ for $r<4$.\medskip In fact, our method can be used to prove the following more general result. \begin{restatable}{theorem}{rainbowgeneral}\label{theorem_rbgeneral} Let $r\geq 4$ and let $H$ be a graph equipped with a proper edge-colouring that contains no rainbow $K_r$. Suppose that for each vertex $v$ of $H$ there is a point $p_v\in\mathbb{R}^m$, and for each colour $\kappa$ in the colouring there is a non-zero vector $z_\kappa$ such that for every edge $vw$ of colour $\kappa$, $z_\kappa$ is a linear combination of $p_v$ and $p_w$ with non-zero coefficients. Then $\operatorname{ex}(n,H,\textnormal{rainbow-}K_r)\geq n^{m_0-o(1)}$, where $m_0$ is the dimension of the subspace of $\mathbb{R}^m$ spanned by the points $p_v$. \end{restatable} It is easy to see that Theorem \ref{theorem_rainbow} is a special case of Theorem \ref{theorem_rbgeneral}, but Theorem \ref{theorem_rbgeneral} also allows us to determine the behaviour of $\operatorname{ex}(n,H,\textnormal{rainbow-}K_r)$ for several other natural choices of $H$. We give some examples in Section \ref{sec_examples}. Theorem \ref{theorem_rbgeneral} is `almost equivalent' to the following, slightly weakened, alternative version.\bigskip \noindent \textbf{Theorem \ref{theorem_rbgeneral}$'$.} \textit{Let $r\geq 4$, let $H$ be a graph, and let $c$ be a proper edge-colouring of $H$ without a rainbow $K_r$. Suppose that for each vertex $v\in V(H)$ we have a vector $p_v\in\mathbb{R}^{m-1}$, and for each colour $\kappa$ of $c$ the lines through the pairs $p_v, p_w$ with $c(vw)=\kappa$ are either all parallel, or all go through the same point and that point is different from $p_v,p_w$ unless $p_v=p_w$. Assume that no $(m-2)$-dimensional affine subspace contains all the points $p_v$.
Then $\operatorname{ex}(n,H,\textnormal{rainbow-}K_r)\geq n^{m-o(1)}$.}\bigskip It is easy to see that Theorem \ref{theorem_rbgeneral}$'$ is equivalent to the weakened version of Theorem \ref{theorem_rbgeneral} where we make the additional assumption that each $p_v$ is non-zero. Indeed, given a configuration of points $p_v$ as in Theorem \ref{theorem_rbgeneral} (with $m=m_0$), we can project it from the origin to an appropriate affine $(m-1)$-dimensional subspace not going through the origin to get a configuration as in Theorem \ref{theorem_rbgeneral}$'$. Conversely, a configuration of points $p_v$ as in Theorem \ref{theorem_rbgeneral}$'$ gives a configuration as in Theorem \ref{theorem_rbgeneral} by taking the points $(p_v,1)\in \mathbb{R}^m$. \section{The idea of the construction, and a preliminary lemma}\label{subsec_constr} We now briefly describe the construction used in our proof of Theorem \ref{theorem_KrKs}. For simplicity, we focus on the case $r=2, s=3$, i.e., the Ruzsa--Szemerédi problem. Consider the $d$-dimensional sphere $S^d=\{x\in\mathbb{R}^{d+1}:\Vert x\Vert=1\}$. (We will choose $d$ to be about $\sqrt{\log{n}}$.) Join two points of the sphere by an edge if the angle between the corresponding vectors is between $2\pi/3-\delta$ and $2\pi/3+\delta$, where $\delta$ is some appropriately chosen small number (roughly $e^{-\sqrt{\log{n}}}$). Then there are `few' triangles containing any given edge, since if $xy$ is an edge then any point $z$ such that $xyz$ is a triangle is restricted to lie in a small neighbourhood around the point $-(x+y)$. However, there are `many' edges, since the edge-neighbourhood of a point is a set of points around a codimension-1 surface, which is much larger than the neighbourhood of a single point.
Then any edge is expected to be in $n^{-o(1)}$ triangles and there are $n^{2-o(1)}$ edges. After some modification, we get a graph with $n^{2-o(1)}$ triangles in which any edge extends to at most one triangle. The general construction is quite similar. We want to define the edges in such a way that knowing the position of any $r$ of the vertices of a $K_s$ restricts the remaining $s-r$ vertices to small neighbourhoods around certain points, but knowing the position of $i$ points with $i<r$ only restricts the remaining points to a neighbourhood of a codimension-$i$ surface. For example, when $(r,s)=(3,4)$, we can define our graph by joining two points if the angle between the corresponding vectors is close to the angle given by two vertices of a regular tetrahedron (centred at the origin). In fact, our construction and the construction of Ruzsa and Szemer\'edi based on the Behrend set are more similar than they might at first appear, which also explains why they give similar bounds (namely $n^2e^{-O(\sqrt{\log{n}})}$ for the case $r=2,s=3$). Behrend's construction \cite{behrend1946sets} of a large set with no arithmetic progression of length 3 starts by observing that for any positive integers $k,d$ there is some $m$ such that the grid $\{1,\dots,k\}^d$ intersects the sphere $\{x\in\mathbb{R}^d:\Vert x \Vert^2=m\}$ in a set $A$ consisting of at least $k^d/(dk^2)$ points. This set $A$ has no arithmetic progression of length $3$. (In Behrend's construction, this is transformed into a subset of $\mathbb{Z}$ using an appropriate map, but this is unnecessary for our purposes.) Repeating the construction from Section \ref{sec.intrKrKsrainbow}, we define a tripartite graph $G$ on vertex set $X\cup Y\cup Z$ where $X=\{1,\dots,k\}^d, Y=\{1,\dots,2k\}^d, Z=\{1,\dots,3k\}^d$, and edges given by the edges of the triangles $(x,x+a,x+2a)\in X\times Y\times Z$ for $x\in X, a\in A$. 
Explicitly, for $x\in X, y\in Y, z\in Z$, we join $x$ and $y$ if $\Vert x-y\Vert =m^{1/2}$ (and $y_i\geq x_i$ for all $i$), we join $y$ and $z$ if $\Vert z-y\Vert =m^{1/2}$ (and $z_i\geq y_i$), and we join $x$ and $z$ if $\Vert x-z\Vert =2m^{1/2}$ (and $z_i\geq x_i$). This gives the same phenomenon as our construction: the neighbourhood of a point $x$ is given by a codimension-1 condition, but the joint neighbourhood of two points is a single point, since $y$ must be the midpoint of $x$ and $z$. \medskip We conclude this section with the following technical fact, whose proof we include for completeness. Given unit vectors $v,w$, we write $\angle(v,w)$ for the angle between $v$ and $w$ -- that is, for $\cos^{-1}(\langle v,w\rangle)$. \begin{lemma}\label{lemma_spherebounds} There exist constants $0<\alpha<B$ such that the following holds. Let $d$ be a positive integer, let $0<\rho\leq 2$ and let $v\in S^d$. Let $X_\rho=\{w\in S^d: \Vert v-w\Vert <\rho\}$. Let $\mu$ denote the usual probability measure on $S^d$. Then \[\alpha^d\rho^d\leq \mu(X_\rho)\leq B^d\rho^d.\] Furthermore, for any $-1<\xi<1$ there exists $\beta>0$ such that for every positive integer $d$, every point $v\in S^d$, and every $0\leq\delta\leq 2$, the set $Y_{\xi,\delta}=\{w\in S^d: |\langle v,w\rangle-\xi|<\delta\}$ has \[\mu(Y_{\xi,\delta})\geq \beta^d\delta.\] \end{lemma} \begin{proof} Using the usual spherical coordinate system, we see that for $0\leq \varphi\leq \pi$ the set $Z_\varphi=\{w\in S^d: \angle(v,w)<\varphi\}$ satisfies \begin{equation}\label{eq_spherecalc} \mu(Z_\varphi)=\frac{\int_{0}^{\varphi} \sin^{d-1}{\theta}\diff\theta}{\int_{0}^{\pi} \sin^{d-1}{\theta}\diff\theta}. \end{equation} But we have $\theta\geq \sin\theta\geq \frac{2}{\pi}\theta$ for $0\leq \theta\leq \pi/2$. Thus, ${\int_{0}^{t} \sin^{d-1}{\theta}\diff\theta}$ is between $\frac{c_1^{d-1}}{d}t^d$ and $\frac{1}{d}t^d$ for all $0\leq t\leq \pi /2$ (for some constant $0<c_1<1$). 
Using this bound for both the numerator and the denominator in \eqref{eq_spherecalc}, we deduce that $\alpha_0^d\varphi^{d}\leq \mu(Z_\varphi)\leq B_0^d\varphi^d$ for some absolute constants $0<\alpha_0<B_0$. But if $\angle(v,w)=\varphi$ and $\Vert v-w\Vert =\rho\leq 2$, then $\rho\leq \varphi\leq \rho\pi/2$, so $Z_{\rho} \subseteq X_\rho\subseteq Z_{\rho\pi/2}$. The first claim follows. For the second claim, let $0<\varphi<\pi$ be such that $\xi=\cos\varphi$ and let $\epsilon_0=\min\{\varphi/2,(\pi-\varphi)/2\}$. Write $W_{\varphi,\epsilon}=\{w\in S^d: \varphi-\epsilon<\angle(v,w)<\varphi+\epsilon\}$. For $0<\epsilon<\epsilon_0$ we have \[ \mu(W_{\varphi,\epsilon})=\frac{\int_{\varphi-\epsilon}^{\varphi+\epsilon} \sin^{d-1}{\theta}\diff\theta}{\int_{0}^{\pi} \sin^{d-1}{\theta}\diff\theta}.\] But also $\sin\theta\geq \min\{\sin(\varphi-\epsilon_0),\sin(\varphi+\epsilon_0)\}\geq\sin(\epsilon_0)$ when $\varphi-\epsilon_0\leq \theta\leq \varphi+\epsilon_0$. Writing $\beta_0=\sin(\epsilon_0)>0$, it follows that whenever $\epsilon<\epsilon_0$, then $\mu(W_{\varphi,\epsilon})\geq\frac{2\epsilon \beta_0^{d-1}}{\pi}$. However, we have $|\cos(\theta)-\cos(\varphi)|\leq |\theta-\varphi| $, so $Y_{\xi,\delta}\supseteq W_{\varphi,\min\{\delta,\epsilon_0/2\}}$. Choosing some sufficiently small $\beta$, the second claim follows. \end{proof} \section{The generalized Ruzsa--Szemerédi problem} In this section we prove the first of our main results, Theorem \ref{theorem_KrKs}. In the case $r=2, s=3$, the construction is based, as we saw in Section~\ref{subsec_constr}, on the observation that if we wish to find three vectors in $S^d=\{x\in \mathbb{R}^{d+1}: \Vert x\Vert =1\}$ in such a way that the angle between any two of them is $120^\circ$, and if we choose the vertices one by one, then there are $d$ degrees of freedom for the first vertex and $d-1$ for the second, but the third is then uniquely determined.
This gives us an example of a `continuous graph' with `many' edges, such that each edge is in exactly one triangle, and a suitable perturbation and discretization of this graph gives us a finite graph with $n^{2-o(1)}$ triangles such that each edge belongs to at most one triangle. To generalize this to arbitrary $(r,s)$ we need to find a configuration of $s$ unit vectors (where by `configuration' we mean an $s\times s$ symmetric matrix that specifies the angles, or equivalently inner products, between the unit vectors) with the property that if we choose the points of the configuration one by one, then for $i\leq r$ the $i$\textsuperscript{th} point can be chosen with $d+1-i$ degrees of freedom, but from the $(r+1)$\textsuperscript{st} point onwards all points are uniquely determined. It turns out that all we have to do is choose an arbitrary collection of $s$ points $p_1,\dots,p_s$ in general position from the sphere $S^{r-1}$ and take the angles $\angle(p_i,p_j)$. To see that this works, suppose we wish to choose $x_1,\dots,x_s\in S^d$ one by one in such a way that $\langle x_i,x_j\rangle=\langle p_i,p_j\rangle$ for every $i,j$. Suppose that we have chosen $x_1,\dots,x_r$ and let $V$ be the $r$-dimensional subspace that they generate. Let $u_{r+1}$ be the orthogonal projection of $x_{r+1}$ to $V$. Then $\langle u_{r+1},x_i\rangle=\langle x_{r+1},x_i\rangle$ for each $i\leq r$, and $u_{r+1}\in V$, so $u_{r+1}$ is uniquely determined. Furthermore, since the inner products $\langle p_i,p_j\rangle$ are equal to the inner products $\langle x_i,x_j\rangle$ when $i,j\leq r$ and to $\langle x_i,u_{r+1}\rangle$ when $i\leq r, j=r+1$, and $p_{r+1}$ is a unit vector, it must be that $u_{r+1}$ is a unit vector, which implies that $x_{r+1}=u_{r+1}$. Since this argument made no use of the ordering of the vectors, it follows that any $r$ vectors in a configuration determine the rest, as claimed.
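This projection argument can be illustrated numerically in the simplest case $r=2$, $s=3$ (a toy check of ours, with the reference configuration at mutual angle $120^\circ$, so all target inner products are $-1/2$): the projection $u$ of any admissible third vector onto the span of the first two is forced by a $2\times 2$ Gram system, and it already has unit norm, so the third vector must equal $u$.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Two unit vectors in R^4 (so d = 3) with inner product cos(120°) = -1/2.
x1 = [1.0, 0.0, 0.0, 0.0]
x2 = [-0.5, math.sqrt(3) / 2, 0.0, 0.0]

# u = a*x1 + b*x2 must satisfy <u, x1> = <u, x2> = -1/2.
# Solve the 2x2 Gram system by Cramer's rule.
g11, g12, g22 = dot(x1, x1), dot(x1, x2), dot(x2, x2)
det = g11 * g22 - g12 ** 2
a = (-0.5 * g22 - g12 * (-0.5)) / det
b = (g11 * (-0.5) - (-0.5) * g12) / det
u = [a * p + b * q for p, q in zip(x1, x2)]   # here u = -(x1 + x2)
norm_u = math.sqrt(dot(u, u))
```

Since $u$ has norm $1$, any unit vector with the prescribed inner products has zero component orthogonal to the span, i.e. it equals $u$.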
We shall now use this observation as a guide for constructing a finite graph with many copies of $K_s$ such that each $K_r$ is contained in at most one $K_s$. As above, pick $s$ `reference' points $p_1, \dots, p_s$ in general position on the sphere $S^{r-1}$. Since for any set $B\subseteq \{1,\dots,s\}$ of size $r$ the points $p_b$ ($b\in B$) form a basis of $\mathbb{R}^r$, we may write, for any $a$, \[p_a=\sum_{b\in B}{\lambda_{B,a,b}p_b}\] for some real constants $\lambda_{B,a,b}$. \medskip For any $c>0$ and positive integers $N,d$ we define an $s$-partite random graph $G_{N,d,c}$ as follows. (The graph will also depend on $r,s,p_1,\dots,p_s$, but for readability we drop these dependencies from the notation.) Consider the usual probability measure on the $d$-sphere $S^d$. Pick, independently and uniformly at random, $sN$ points $x_{a,i}$ ($1\leq a\leq s, 1\leq i\leq N$) on $S^d$: these points form the vertex set. Join two points $x_{a,i}$ and $x_{b,j}$ by an edge if $a\not =b$ and $|\langle x_{a,i},x_{b,j}\rangle-\langle p_a,p_b\rangle|<c$. Write $V_a=\{x_{a,i}: 1\leq i\leq N\}$ so that $G_{N,d,c}$ is $s$-partite with classes $V_1,\dots,V_s$. We also define a graph $G_{N,d,c}'$ as follows. Let $M_0$ be the maximum among all values of $|\lambda_{B,a,b}|$ and $\lambda_{B,a,b}^2$, and let $M=2(r+1)\sqrt{M_0}$. Then $G_{N,d,c}'$ is obtained from $G_{N,d,c}$ by deleting all vertices $x_{a,i}$ for which there is another vertex $x_{a,j}$ ($i\not =j$) such that $\Vert x_{a,i}-x_{a,j}\Vert <M\sqrt{c}$. This graph is designed to be finite and to have the property that any copy of $K_s$ must be close to a configuration with angles determined by the points $p_1,\dots,p_s$. The vertex deletions are there to ensure that the vertices are reasonably well separated. This will imply that no $K_r$ is contained in more than one $K_s$, since once $r$ vertices of a $K_s$ are chosen, the remaining vertices are constrained to lie in small neighbourhoods. 
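The construction of $G_{N,d,c}'$ can be simulated directly. Below is a small sketch for $r=2$, $s=3$ (so $p_1,p_2,p_3$ are at mutual angle $120^\circ$, all $\lambda_{B,a,b}$ equal $-1$, $M_0=1$ and $M=6$); the parameters $N,d,c$ are toy values of ours, far smaller than in the proof, so the resulting graph is very sparse, but the structural property established in the next lemma can be checked on the output.

```python
import math
import random
from itertools import combinations, product
from collections import Counter

random.seed(0)

def rand_sphere(d):
    """Uniform random point on S^d, via normalized Gaussians."""
    v = [random.gauss(0.0, 1.0) for _ in range(d + 1)]
    n = math.sqrt(sum(t * t for t in v))
    return tuple(t / n for t in v)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# toy parameters (ours): r = 2, s = 3, targets all equal to -1/2
N, d, c = 60, 2, 0.001
M = 2 * (2 + 1) * math.sqrt(1.0)     # M = 2(r+1)*sqrt(M_0) with M_0 = 1
classes = [[rand_sphere(d) for _ in range(N)] for _ in range(3)]

# deletion step of G': drop any vertex having a same-class vertex
# within distance M*sqrt(c)
kept = []
for cls in classes:
    kept.append([x for x in cls
                 if all(math.dist(x, y) >= M * math.sqrt(c)
                        for y in cls if y is not x)])

def edge(x, y):
    return abs(dot(x, y) - (-0.5)) < c

tris = [t for t in product(*kept)
        if edge(t[0], t[1]) and edge(t[1], t[2]) and edge(t[0], t[2])]
use = Counter(frozenset([p, q]) for t in tris for p, q in combinations(t, 2))
```

At realistic scales the number of triangles grows like $N^{2-o(1)}$; at this toy scale the point is only that, after the deletion step, no edge can lie in two triangles.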
\begin{lemma} The graph $G_{N,d,c}'$ has the property that any of its subgraphs isomorphic to $K_r$ is contained in at most one subgraph isomorphic to $K_s$ (for any choices of $r,s,p_1,\dots,p_s,N,d,c$). \end{lemma} \begin{proof} Let $x_{a_1,i_1},\dots, x_{a_r,i_r}$ be points that form a $K_r$. Then necessarily all $a_t$ are distinct. Suppose that we have two extensions $H_1,H_2$ of this $K_r$ to a $K_s$. Then both $H_1$ and $H_2$ intersect each class $V_a$ in exactly one point. We now show that for each $a$ this point must be the same for $H_1$ and $H_2$, which will imply the lemma. Suppose that $H_1$ intersects $V_a$ in point $x$. Write $B=\{a_1,\dots,a_r\}$ and abbreviate $x_{a_t,i_t}$ to $x_{a_t}$. Then \begin{align*} \left\Vert x-\sum_{t=1}^{r}\lambda_{B,a,a_t}x_{a_t}\right\Vert ^2&=\left\langle x-\sum_{t=1}^{r}\lambda_{B,a,a_t}x_{a_t}, x-\sum_{t=1}^{r}\lambda_{B,a,a_t}x_{a_t}\right\rangle\\ &=\langle x,x\rangle-2\sum_{t=1}^r{\lambda_{B,a,a_t}\left\langle x, x_{a_t}\right\rangle}+\sum_{t,t'=1}^{r}{\lambda_{B,a,a_t}\lambda_{B,a,a_{t'}}\langle x_{a_t}, x_{a_{t'}}\rangle}\\ &\leq 1 -2\sum_{t=1}^r\lambda_{B,a,a_t}\langle p_a,p_{a_t}\rangle+2c\sum_{t=1}^r{|\lambda_{B,a,a_t}|}\\ &\hspace{2cm}+\sum_{t,t'=1}^{r}{\lambda_{B,a,a_t}\lambda_{B,a,a_{t'}}\langle p_{a_t},p_{a_{t'}}\rangle}+c\sum_{t,t'=1}^{r}{|\lambda_{B,a,a_t}\lambda_{B,a,a_{t'}}|}\\ &\leq \left\langle p_a-\sum_{t=1}^{r}\lambda_{B,a,a_t}p_{a_t},p_a-\sum_{t=1}^{r}\lambda_{B,a,a_t}p_{a_t}\right\rangle + 2rcM_0+r^2cM_0\\ &= (r^2+2r)cM_0. \end{align*} It follows that $\left\Vert x-\sum_{t=1}^{r}\lambda_{B,a,a_t}x_{a_t}\right\Vert < (r+1)\sqrt{M_0c}$. Similarly, if $H_2$ intersects $V_a$ in point $y$ then $\left\Vert y-\sum_{t=1}^{r}\lambda_{B,a,a_t}x_{a_t}\right\Vert < (r+1)\sqrt{M_0c}$. Hence $\Vert x-y\Vert < 2(r+1)\sqrt{M_0c}=M\sqrt c$. By the definition of $G'_{N,d,c}$, we must have $x=y$.
\end{proof} To prove Theorem \ref{theorem_KrKs}, it suffices to show that the expected number of copies of $K_s$ in $G_{N,d,c}'$ is at least $N^{r}e^{-O(\sqrt{\log{N}})}$ for suitable choices of $d$ and $c$. For this purpose we shall use the following technical lemma. For later convenience (in Section \ref{sec_rainbow}), we state it in a slightly more general form than required here, to allow the possibility that $r=s$ and the possibility that $p_1,\dots,p_s$ are not in general position (but still span $\mathbb{R}^r$). \begin{restatable}{lemma}{appendixlemma}\label{lemma_probabilityKs} Let $1\leq r\leq s$ be positive integers and let $p_1, \dots, p_s$ be points on $S^{r-1}$ such that $p_1,\dots, p_r$ form a basis of $\mathbb{R}^r$. Then there exist constants $\alpha>0$ and $h$ such that for any $d\geq r$ and $0<c<1$ the probability that a set $\{x_{a}:1\leq a \leq s\}$ of random unit vectors (chosen independently and uniformly) on $S^d$ satisfies $|\langle x_a,x_b\rangle-\langle p_a,p_b\rangle|<c$ for all $a,b$ is at least $\alpha^{d}c^{d(s-r)/2+h}$. \end{restatable} We may think of the conclusion of Lemma \ref{lemma_probabilityKs} as follows. The dominant (smallest) factor in the probability above is the factor $c^{d(s-r)/2}$. The probability should be close to this because if we imagine placing the $s$ points one by one and we have already picked $x_1,\dots,x_i$ joined to each other, then \begin{itemize} \item if $i<r$, then $x_{i+1}$ is restricted to a neighbourhood of a codimension-$i$ surface, so with reasonably large probability (comparable to $c^{i}$) it is connected to all previous vertices; \item if $i\geq r$, then the linear dependencies between the points restrict $x_{i+1}$ to be in a ball of radius about $c^{1/2}$ around a certain point, which has measure about $c^{d/2}$ (which is much smaller than $c^{r}$). \end{itemize} The proof of Lemma \ref{lemma_probabilityKs} is given in an appendix. 
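As a quick sanity check of the lemma, one can estimate this probability by direct sampling. The sketch below (our own; the instance $r=2$, $s=3$ with equilateral reference points, the seed, and the trial count are arbitrary choices) reuses the same random samples for both tolerances, so the two estimates are automatically monotone in $c$.

```python
import numpy as np

def joint_probability(P, d, c, trials=20000, seed=1):
    """Monte Carlo estimate of the probability in the lemma: s independent
    uniform points on S^d whose pairwise inner products are all within c
    of the reference values <p_a, p_b>."""
    rng = np.random.default_rng(seed)
    s = P.shape[0]
    target = P @ P.T
    off_diag = ~np.eye(s, dtype=bool)
    hits = 0
    for _ in range(trials):
        X = rng.standard_normal((s, d + 1))
        X /= np.linalg.norm(X, axis=1, keepdims=True)
        gram = X @ X.T
        if np.all(np.abs(gram - target)[off_diag] < c):
            hits += 1
    return hits / trials

angles = 2 * np.pi * np.arange(3) / 3     # equilateral p_1, p_2, p_3 on S^1
P = np.column_stack([np.cos(angles), np.sin(angles)])
p_small = joint_probability(P, d=2, c=0.2)
p_big = joint_probability(P, d=2, c=0.4)
```

Since the same seed is used for both calls, the event for the smaller tolerance is contained in the event for the larger one sample by sample, matching the monotonicity in $c$ of the lower bound in the lemma.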
\begin{proof}[Proof of Theorem \ref{theorem_KrKs}] By Lemma \ref{lemma_spherebounds}, there are constants $c_0,B,C$ such that if $c<c_0$ then the probability that a given vertex $x_{a,i}$ is removed from $G_{N,d,c}$ when forming $G_{N,d,c}'$ is at most $NB^d(M\sqrt{c})^d\leq NC^dc^{d/2}$. Here $B>0$ is an absolute constant and the constants $C,c_0>0$ depend on $r,s,p_1,\dots,p_s$ only. Moreover, the event `$x_{a,i}$ is removed' is independent of any event of the form `$x_{1,i_1},\dots,x_{s,i_s}$ form a $K_s$ in $G_{N,d,c}$'. Using Lemma \ref{lemma_probabilityKs}, we deduce that the probability that $x_{1,i_1},\dots,x_{s,i_s}$ are all contained in $G_{N,d,c}'$ and form a $K_s$ is at least $(1-sNC^dc^{d/2})\alpha^dc^{d(s-r)/2+h}$ (where $\alpha, h$ depend on $r,s,p_1,\dots,p_s$ only). So the expected number of copies of $K_s$ in $G_{N,d,c}'$ is at least \[N^s(1-sNC^dc^{d/2})\alpha^dc^{d(s-r)/2+h}.\] If $c=(2sNC^d)^{-2/d}$, then this is at least \begin{equation}\label{eq_optimisation} \frac{1}{2}N^s\alpha^d\frac{1}{(2sC^d)^{s-r}}N^{r-s}(2sNC^d)^{-2h/d}\geq \eta^dN^{r-2h/d}=N^re^{-Ed-(2h/d)\log{N}} \end{equation} for some constants $\eta>0$ and $E$ not depending on $N,d$. Choosing $d=\lfloor \sqrt{\log{N}}\rfloor $, this is $N^re^{-O(\sqrt{\log{N}})}$, and $c<c_0$ when $N$ is sufficiently large. The result follows, as $G_{N,d,c}'$ has at most $sN$ vertices. \end{proof} Note that our proof in fact also gives the correct (and trivial) lower bound $\Theta(n)$ in the case $r=1$, since if $r=1$ then $h=0$ so we may choose $d$ to be a constant and get $\Theta(N)$ in \eqref{eq_optimisation}. \section{Generalized rainbow Turán numbers for complete graphs}\label{sec_rainbow} We now turn to the proofs of our results about generalized rainbow Turán numbers (Theorems \ref{theorem_rainbow} and \ref{theorem_rbgeneral}). First we recall a general result of Gerbner, Mészáros, Methuku and Palmer \cite{gerbner2019generalized}, which can be proved using the graph removal lemma. 
\begin{proposition}[Gerbner, Mészáros, Methuku and Palmer \cite{gerbner2019generalized}]\label{proposition_rbHH} For any graph $H$ on $r$ vertices, we have $\operatorname{ex}(n,H,\textnormal{rainbow-}H)=o(n^{r-1})$. \end{proposition} In particular, we know that $\operatorname{ex}(n,K_r,\textnormal{rainbow-}K_r)=o(n^{r-1})$. We would like to match this with a lower bound of the form $n^{r-1-o(1)}$. Before we prove such a bound, let us briefly discuss the ideas that underlie the proof. It is easy to show that a lower bound $\operatorname{ex}(n,K_4,\textnormal{rainbow-}K_4)\geq n^{3-o(1)}$ would imply that $\operatorname{ex}(n,K_r,\textnormal{rainbow-}K_r)\geq n^{r-1-o(1)}$ for all $r\geq 4$, so it suffices to consider the case $r=4$. Now observe that if $G$ is a properly edge-coloured graph with no rainbow $K_4$, then every triangle of $G$ is contained in at most three copies of $K_4$. (Indeed, if the vertices of the $K_3$ are $x,y,z$, then the only way that adding a further vertex $w$ can lead to a non-rainbow $K_4$ is if $wx$ has the same colour as $yz$, $wy$ has the same colour as $xz$ or $wz$ has the same colour as $xy$. But since the edge-colouring is proper, we cannot find more than one $w$ such that the same one of these three events occurs.) So it is natural to expect that our construction for Theorem \ref{theorem_KrKs} is relevant here. To see how a similar construction gives the desired result, it is helpful, as earlier, to look at a simpler continuous example that serves as a guide to the construction. Consider the graph where the vertex set is $S^d$ and two unit vectors $v,w$ are joined if and only if $\langle v,w\rangle=-1/3$ (here $-1/3$ is the cosine of the angle between the vectors from the origin to two distinct vertices of a regular tetrahedron centred at the origin). Then any $K_4$ in this graph must be given by the vertices of a regular tetrahedron. We colour an edge by the line that joins the origin to the midpoint of that edge. 
This is a proper colouring with the property that opposite edges have the same colour, so each $K_4$ is 3-coloured in this colouring. The construction we are about to describe is a suitable perturbation and discretization of this one. For the discretized graph, we will again have `near-regular' tetrahedra forming $K_4$s. To ensure that each copy of $K_4$ is still rainbow, we shall have to modify the colouring slightly. We shall take only certain `allowed lines' as colours, and we shall colour an edge by the allowed line that is closest to the line through the midpoint (if that line is not very far -- otherwise we delete the edge). We need to choose the allowed lines in such a way that no two allowed lines are close (so that near-regular tetrahedra are still 3-coloured), but a large proportion of lines are close to an allowed line (so that not too many edges are deleted). This can be achieved using the following lemma. \begin{lemma}\label{lemma_directions} There exists $\delta>0$ with the following property. For any $0< c_1< 1$ we can choose $L\geq (\delta/c_1)^d$ points $q_1,\dots,q_L$ on $S^d$ such that $\Vert q_i-\epsilon q_j\Vert \geq 3c_1$ for any $i\not =j$ and any $\epsilon\in\{1,-1\}$. \end{lemma} \begin{proof} Take a maximal set of points satisfying the condition above. Then the balls of radius $3c_1$ around the points $\pm q_1,\dots,\pm q_L$ cover the entire sphere. But any such ball covers a proportion of surface area at most $(Bc_1)^d$ for some constant $B$ (by Lemma \ref{lemma_spherebounds}). Therefore $2L(Bc_1)^d\geq 1$, which gives the result. \end{proof} One can prove Theorem \ref{theorem_rainbow} using the method described above. However, the proof naturally yields the more general Theorem \ref{theorem_rbgeneral} (which is restated below), so that is what we shall do. 
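A set of directions as in Lemma \ref{lemma_directions} can be generated greedily. The sketch below is our own illustration (the candidate-pool size and the seed are arbitrary): it builds a family of points on $S^d$ that is $3c_1$-separated even after identifying antipodal points, exactly the property required of the allowed lines.

```python
import numpy as np

def greedy_directions(d, c1, candidates=3000, seed=2):
    """Greedily build points on S^d with ||q_i - eps*q_j|| >= 3*c1 for all
    i != j and eps in {1, -1}, as in the lemma.  (The lemma's maximal set
    corresponds to running until no further point can be added; here we
    simply scan a finite random pool of candidates.)"""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(candidates):
        q = rng.standard_normal(d + 1)
        q /= np.linalg.norm(q)
        # keep q only if it is far from every kept point and its antipode
        if all(min(np.linalg.norm(q - p), np.linalg.norm(q + p)) >= 3 * c1
               for p in kept):
            kept.append(q)
    return np.array(kept)

Q = greedy_directions(d=2, c1=0.05)
```

The separation invariant is maintained by construction; the lemma's volume argument is what guarantees that a maximal such family has at least $(\delta/c_1)^d$ points.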
Essentially, we can prove a lower bound of $n^{m-o(1)}$ for a graph $H$ whenever we can draw $H$ in $\mathbb{R}^m$ in such a way that for each colour there is a line through the origin meeting (the line of) each edge of that colour, and the vertices of the graph span $\mathbb{R}^m$. \rainbowgeneral* \begin{proof} Passing to a subspace, we may assume that $m=m_0$ and $\{p_v: v\in V(H)\}$ spans $\mathbb{R}^m$. Furthermore, by rescaling we may assume that each $z_\kappa$ and each non-zero $p_v$ has unit length. Write $V_0=\{v\in V(H): p_v=0\}$ and $V_1=\{v\in V(H): p_v\not =0\}$. For each $c>0$ and any two positive integers $N,d$, we define a (random) graph $F_{N,d,c}$ as follows. The vertex set of $F_{N,d,c}$ has $|H|$ parts labelled by the vertices of $H$. If $v\in V_0$ then there is a single point $x_{v,1}=0$ in the part labelled by $v$. If $v\in V_1$, then we pick (uniformly and independently at random) $N$ points $x_{v,1},\dots,x_{v,N}$ on $S^d$: these will be the vertices in the part labelled by $v$. We join two vertices $x_{v,a}$ and $x_{w,b}$ by an edge if and only if $vw\in E(H)$ and $|\langle x_{v,a},x_{w,b}\rangle-\langle p_v,p_w\rangle|<c$. By assumption, we know that for each edge $vw$ of colour $\kappa$ there exist non-zero real coefficients $\lambda_{\kappa,v}, \lambda_{\kappa,w}$ such that $z_\kappa=\lambda_{\kappa,v}p_v+\lambda_{\kappa,w}p_w$. Let $\lambda$ be the minimum and $M_0$ the maximum over all values of $|\lambda_{\kappa,v}|$. Write $c_1=(12M_0^2c)^{1/2}$. Form a new graph $F_{N,d,c}'$ out of $F_{N,d,c}$ by removing any vertex $x_{v,i}$ for which there is another vertex $x_{v,j}$ (with $j\not =i$) such that $\Vert x_{v,i}-x_{v,j}\Vert \leq \frac{2}{\lambda}c_1$. (The exact values of the constants are not particularly important -- they were chosen so that the graph described below will be properly coloured with no rainbow $K_r$. That is, we could replace $12M_0^2$ and $2/\lambda$ by other sufficiently large constants.) 
Let $q_1,\dots, q_L$ be points on $S^d$ with $L\geq (\delta/c_1)^d$ such that $\Vert q_i-q_j\Vert \geq 3c_1$ for all $i\not =j$. Here $\delta$ is some positive (absolute) constant, and the existence of such a set follows from Lemma \ref{lemma_directions}. Also, pick independently and uniformly at random a rotation $R_\kappa\in \mathrm{SO}(d+1)$ for each colour $\kappa$ used in the edge-colouring of $H$. The probability measure we use on $\mathrm{SO}(d+1)$ is the usual (Haar) measure, so for any $q\in S^d$ the points $R_\kappa q$ are independently and uniformly distributed on $S^d$. We think of the points $R_\kappa q_l$ ($l=1,\dots,L$) as the allowed colours for the edges $x_{v,i}x_{w,j}$ when $vw\in E(H)$ has colour $\kappa$ (and we take different rotations for different colours to have independence). We form an edge-coloured graph $F_{N,d,c}''$ from $F_{N,d,c}'$ as follows. For any edge $x_{v,i}x_{w,j}$ of $F_{N,d,c}'$, we perform the following modification. Let $\kappa$ be the colour of $vw$ in $E(H)$, and let $\lambda_{\kappa,v}, \lambda_{\kappa,w}\not =0$ be as before, so that $z_\kappa=\lambda_{\kappa,v}p_v+\lambda_{\kappa,w}p_w$. \begin{itemize} \item If there is some $l$ with $\Vert \lambda_{\kappa,v}x_{v,i}+\lambda_{\kappa,w}x_{w,j}-R_\kappa q_l\Vert <c_1$, then we colour the edge $x_{v,i}x_{w,j}$ with colour $(\kappa, l)$. Note that such an $l$ must be unique since $\Vert R_\kappa q_l-R_\kappa q_{l'}\Vert \geq 3c_1$ if $l'\not =l$. \item Otherwise we delete the edge $x_{v,i}x_{w,j}$. \end{itemize} \textbf{Claim 1.} The edge-colouring of $F_{N,d,c}''$ is proper. \textbf{Proof.} Suppose that $x_{v,i}x_{w,j}$ and $x_{v,i}x_{w',j'}$ are both edges with colour $(\kappa, l)$. Then $vw$ and $vw'$ both have colour $\kappa$ in $E(H)$, thus $w=w'$. 
Also, \begin{align*} \Vert x_{w,j'}-x_{w,j}\Vert &\leq \frac{1}{|\lambda_{\kappa,w}|}\left(\Vert \lambda_{\kappa,v}x_{v,i}+\lambda_{\kappa,w}x_{w,j'}-R_\kappa q_l\Vert + \Vert \lambda_{\kappa,v}x_{v,i}+\lambda_{\kappa,w}x_{w,j}-R_\kappa q_l\Vert\right)\\ &\leq \frac{1}{|\lambda_{\kappa,w}|}2c_1\\ &\leq \frac{2}{\lambda}c_1. \end{align*} But then $j=j'$ by the definition of $F_{N,d,c}'$. So the edge-colouring of $F_{N,d,c}''$ is indeed proper.\qed\medskip \textbf{Claim 2.} There is no rainbow copy of $K_r$ in $F_{N,d,c}''$. \textbf{Proof.} Suppose that the vertices $x_{v_1,i_1},\dots,x_{v_r,i_r}$ form a $K_r$ in $F_{N,d,c}''$. Then $v_1, \dots, v_r$ form a $K_r$ in $H$. This $K_r$ is not rainbow (by assumption). By symmetry, we may assume that the edges $v_1v_2$ and $v_3v_4$ both have colour $\kappa$. Write $x_{a}$ for $x_{v_a,i_a}$ and $\lambda_a$ for $\lambda_{\kappa,v_a}$ for $a=1,2,3,4$. Then we have (recalling that $M_0=\max_{\kappa',v}{|\lambda_{\kappa',v}|}$) \begin{align*} \Vert \lambda_1x_1+\lambda_2x_2&-\lambda_3x_3-\lambda_4x_4\Vert^2\\ &=\langle \lambda_1x_1+\lambda_2x_2-\lambda_3x_3-\lambda_4x_4,\lambda_1x_1+\lambda_2x_2-\lambda_3x_3-\lambda_4x_4\rangle\\ &=\sum_{a=1}^{4}\lambda_a^2\Vert x_a\Vert^2+2\lambda_1\lambda_2\langle x_1,x_2 \rangle-2\lambda_1\lambda_3\langle x_1,x_3\rangle-2\lambda_1\lambda_4\langle x_1,x_4\rangle\\ &\hspace{2cm}-2\lambda_2\lambda_3\langle x_2,x_3\rangle-2\lambda_2\lambda_4\langle x_2,x_4\rangle+2\lambda_3\lambda_4\langle x_3,x_4\rangle\\ &\leq \sum_{a=1}^{4}\lambda_a^2\Vert p_{v_a}\Vert^2 +2\lambda_1\lambda_2\langle p_{v_1},p_{v_2} \rangle-2\lambda_1\lambda_3\langle p_{v_1},p_{v_3}\rangle-2\lambda_1\lambda_4\langle p_{v_1},p_{v_4}\rangle\\ &\hspace{2cm}-2\lambda_2\lambda_3\langle p_{v_2},p_{v_3}\rangle-2\lambda_2\lambda_4\langle p_{v_2},p_{v_4}\rangle+2\lambda_3\lambda_4\langle p_{v_3},p_{v_4}\rangle+12M_0^2c\\ &=\langle \lambda_1p_{v_1}+\lambda_2p_{v_2}-\lambda_3p_{v_3}-\lambda_4p_{v_4}, 
\lambda_1p_{v_1}+\lambda_2p_{v_2}-\lambda_3p_{v_3}-\lambda_4p_{v_4}\rangle+12M_0^2c\\ &=12M_0^2c. \end{align*} Since $c_1=(12M_0^2c)^{1/2}$, we get that $\Vert \lambda_1x_1+\lambda_2x_2-\lambda_3x_3-\lambda_4x_4\Vert\leq c_1$. But if $x_1x_2$ has colour $(\kappa,l)$ and $x_3x_4$ has colour $(\kappa,l')$, then \[\Vert q_l-q_{l'}\Vert\leq \Vert \lambda_1x_1+\lambda_2x_2-R_\kappa q_l\Vert+\Vert \lambda_3x_3+\lambda_4x_4-R_\kappa q_{l'}\Vert+\Vert \lambda_1x_1+\lambda_2x_2-\lambda_3x_3-\lambda_4x_4\Vert< 3c_1.\] It follows that $l=l'$ and hence the $K_r$ with vertices $x_{v_1,i_1},\dots,x_{v_r,i_r}$ is not rainbow.\qed \textbf{Claim 3.} The expected number of copies of $H$ in $F_{N,d,c}''$ is at least $N^{m-o(1)}$ if $d$ and $c$ are chosen appropriately. \textbf{Proof.} Pick arbitrary vertices $x_{v,i_v}$ in the classes (with $i_v=1$ if $v\in V_0$ and $1\leq i_v\leq N$ otherwise). We consider the probability that they form a copy of $H$ in $F_{N,d,c}''$. Write $x_v$ for $x_{v,i_v}$. Let $\epsilon>0$ be a small constant to be specified later. By Lemma \ref{lemma_probabilityKs}, we have \begin{equation}\label{eq_innerproductsgood} \mathbb{P}[|\langle x_v,x_w\rangle-\langle p_v,p_w\rangle|<\epsilon c\textnormal{ for all $v,w\in V(H)$}]\geq \alpha^d(\epsilon c)^{d(|V_1|-m)/2+h} \end{equation} for some constants $\alpha>0$ and $h$. Let $v\in V_1$. By Lemma \ref{lemma_spherebounds}, the probability that $x_v$ is removed when we form $F_{N,d,c}'$ is at most $NB_1^dc^{d/2}$, for some constant $B_1>0$ that does not depend on $N,d,c$. By independence, if $NB_1^dc^{d/2}<1$ then \begin{equation}\label{eq_notremoved} \mathbb{P}[\textnormal{none of the $x_v$ are removed when we form $F_{N,d,c}'$}]\geq (1-NB_1^dc^{d/2})^{|V_1|}. \end{equation} Finally, for each colour $\kappa$ in the colouring of $E(H)$, pick an edge $v_\kappa w_\kappa$ of that colour in $H$. 
Write $y_\kappa=\lambda_{\kappa,v_\kappa}x_{v_\kappa}+\lambda_{\kappa,w_\kappa}x_{w_\kappa}$ and $y_\kappa'=\frac{y_\kappa}{\Vert y_\kappa\Vert}$. Note that $\Vert y_\kappa\Vert\not =0$ with probability 1, since all $\lambda_{\kappa,v}$ are non-zero and at least one of $p_{v_\kappa}$ and $p_{w_{\kappa}}$ is non-zero. For each $\kappa$, if $\epsilon$ is sufficiently small then by Lemma \ref{lemma_spherebounds} we have \begin{equation}\label{eq_colourgood} \mathbb{P}[\textnormal{there is some $l_\kappa$ such that $\Vert y_\kappa'-R_\kappa q_{l_\kappa}\Vert <\epsilon c^{1/2}$}]\geq L\eta^d(\epsilon c^{1/2})^d\geq \eta_1^d\epsilon^d \end{equation} for some constants $\eta,\eta_1>0$. Observe that the events in \eqref{eq_innerproductsgood}, \eqref{eq_notremoved} and \eqref{eq_colourgood} (for all $\kappa$) are independent. It follows that \begin{equation}\label{eq_allhold} \mathbb{P}[\textnormal{the events in \eqref{eq_innerproductsgood}, \eqref{eq_notremoved}, and, for all $\kappa$, \eqref{eq_colourgood} hold}]\geq \gamma_\epsilon^dc^{d(|V_1|-m)/2+h}(1-NB_1^dc^{d/2})^{|V_1|} \end{equation} where $\gamma_\epsilon$ is some constant depending on $\epsilon$ (but not on $N,d,c$). We show that these events together imply that the $x_v$ form a copy of $H$, if $\epsilon$ is sufficiently small. The only property that we need to check is that no edge is removed when $F_{N,d,c}''$ is formed out of $F_{N,d,c}'$. Consider then an edge $uu'$ of $H$. Let $\kappa$ be its colour and write $v=v_\kappa, w=w_\kappa, y=y_\kappa, y'=y_\kappa', \lambda_v=\lambda_{\kappa,v}, \lambda_w=\lambda_{\kappa,w}, \lambda_u=\lambda_{\kappa,u}$, and $\lambda_{u'}=\lambda_{\kappa,u'}$. We have \begin{align*} \langle y,y\rangle &=\langle \lambda_vx_v+\lambda_wx_w,\lambda_vx_v+\lambda_wx_w\rangle\\ &=\langle \lambda_vp_v+\lambda_wp_w,\lambda_vp_v+\lambda_wp_w\rangle+O(\epsilon c)\\ &=\langle z_\kappa,z_\kappa\rangle+O(\epsilon c)\\ &=1+O(\epsilon c). 
\end{align*} So \[\Vert y-y'\Vert =|\Vert y\Vert-1|=O(\epsilon c).\] Furthermore, if we write $y''=\lambda_ux_u+\lambda_{u'}x_{u'}$, then \begin{align*} \langle y-y'',y-y''\rangle &=\langle \lambda_vx_v+\lambda_wx_w-\lambda_ux_u-\lambda_{u'}x_{u'}, \lambda_vx_v+\lambda_wx_w-\lambda_ux_u-\lambda_{u'}x_{u'} \rangle\\ &=\langle \lambda_vp_v+\lambda_wp_w-\lambda_up_u-\lambda_{u'}p_{u'}, \lambda_vp_v+\lambda_wp_w-\lambda_up_u-\lambda_{u'}p_{u'} \rangle+O(\epsilon c)\\ &=\langle z_\kappa-z_\kappa,z_\kappa-z_\kappa\rangle +O(\epsilon c)\\ &=O(\epsilon c). \end{align*} It follows that \begin{align*} \Vert y''-R_\kappa q_{l_\kappa}\Vert &\leq \Vert y''-y\Vert +\Vert y-y'\Vert +\Vert y'-R_\kappa q_{l_\kappa}\Vert\\ &\leq O((\epsilon c)^{1/2})+O(\epsilon c)+\epsilon c^{1/2}. \end{align*} This is indeed less than $c_1=(12M_0^2c)^{1/2}$ if $\epsilon$ is sufficiently small.\medskip Choosing $\epsilon$ appropriately, $\eqref{eq_allhold}$ gives that the expected number of copies of $H$ in $F_{N,d,c}''$ is at least \[N^{|V_1|}\gamma^dc^{(|V_1|-m)d/2+h}(1-NB_1^dc^{d/2})^{|V_1|}\] for some constant $\gamma$. Letting $c=\left(\frac{1}{2NB_1^d}\right)^{2/d}$ and $d=\lfloor\sqrt{\log{N}}\rfloor$, we get that the expected number of copies of $H$ in $F_{N,d,c}''$ is at least $N^{m-o(1)}$, which proves the claim.\qed The theorem follows from Claims 1, 2 and 3. \end{proof} \begin{proof}[Deduction of Theorem \ref{theorem_rainbow} from Theorem \ref{theorem_rbgeneral}] Given a complete graph $K_r$ on vertex set $\{1,\dots,r\}$, we can properly edge-colour it by giving the edges $12$ and $34$ the same colour $\kappa$, and giving arbitrary different colours to the remaining edges. Pick $r-1$ linearly independent points $p_2, p_3, \dots, p_r$ in $\mathbb{R}^{r-1}$, and let $p_1=p_2+p_3+p_4$. Let $z_\kappa=p_3+p_4=p_1-p_2$ and let $z_{\kappa'}=p_i+p_j$ when $ij$ is an edge of colour $\kappa'\not =\kappa$. 
Theorem \ref{theorem_rbgeneral} gives that $\operatorname{ex}(n,K_r,\textnormal{rainbow-}K_r)\geq n^{r-1-o(1)}$, and we have a matching upper bound by Proposition \ref{proposition_rbHH}. \end{proof} \section{Some applications of Theorem \ref{theorem_rbgeneral}}\label{sec_examples} We have already seen that Theorem \ref{theorem_rbgeneral} can be used to answer the question of Gerbner, Mészáros, Methuku and Palmer about the order of magnitude of $\operatorname{ex}(n,K_r,\textnormal{rainbow-}K_r)$. In this section we give some other examples of applications of the theorem. To show that our lower bounds are sharp, we shall use a simple proposition to give matching upper bounds. This will require the following definition. Given a graph $H$ and a proper edge-colouring $c$ of $H$, we say that a subset $V_0\subseteq V(H)$ is a \textit{$c$-spanning set} if there is an ordering $v_1,\dots,v_k$ of the vertices in $V(H)\setminus V_0$ such that for all $i$ there are some $u,u',w\in V_0\cup\{v_1,\dots,v_{i-1}\}$ such that $uu'\in E(H)$, $v_iw\in E(H)$ and $c(uu')=c(v_iw)$. In other words, we can add the remaining vertices to $V_0$ one by one in such a way that each new vertex is joined to some vertex of the current set by an edge whose colour already appears on an edge inside the current set. \begin{proposition}\label{proposition_rbsharp} Let $H$ and $F$ be graphs and let $r$ be a positive integer. Assume that for every proper edge-colouring $c$ of $H$ that does not contain a rainbow copy of $F$ there is a $c$-spanning set of size at most $r$. Then $\operatorname{ex}(n,H,\textnormal{rainbow-}F)=O(n^r)$. If we also have $r<|V(H)|$, and if for every such $c$ and every edge $e$ of $H$ there is a $c$-spanning set of size at most $r$ containing $e$, then $\operatorname{ex}(n,H,\textnormal{rainbow-}F)=o(n^r)$. \end{proposition} \begin{proof} Let $G$ be a graph on $n$ vertices and let $\kappa$ be a proper edge-colouring of $G$ without a rainbow copy of $F$. Let $G$ contain $M$ copies of $H$. 
Then we can partition the vertices into classes $X_v$ for $v\in V(H)$ in such a way that there are $\Omega(M)$ choices of $\mathbf{x}=(x_v)_{v\in V(H)}$ such that $x_v\in X_v$ and $v\mapsto x_v$ is a graph homomorphism from $H$ to $G$. (To see this, place each vertex independently, uniformly at random into one of the classes. If $\{x_v: v\in V(H)\}$ is an isomorphic copy of $H$ in $G$ (such that $v\mapsto x_v$ is the corresponding isomorphism), then we have $\mathbb{P}[x_v\in X_v\textnormal{ for all $v$}]=1/|V(H)|^{|V(H)|}$, so the expected number of such tuples $\mathbf{x}$ is $M/|V(H)|^{|V(H)|}=\Omega(M)$.) For each $\mathbf{x}$ as above pick a proper edge-colouring $c_{\mathbf{x}}: E(H)\to \{1,\dots, |E(H)|\}$ with the same colour pattern, that is, $c_{\mathbf{x}}(vw)=c_{\mathbf{x}}(v'w')$ if and only if $\kappa(x_vx_w)=\kappa(x_{v'}x_{w'})$ for all edges $vw,v'w'$ of $H$. Note that $c_\mathbf{x}$ cannot contain a rainbow copy of $F$. Since there are only boundedly many colourings $E(H)\to\{1,\dots,|E(H)|\}$ (a number depending on $H$ only), there is a colouring $c: E(H)\to \{1,\dots, |E(H)|\}$ that appears for $\Omega(M)$ choices of $\mathbf{x}$. Let $V_0$ be a $c$-spanning set of size at most $r$. Note that any $\mathbf{x}$ with $c_\mathbf{x}=c$ is determined by $(x_v)_{v\in V_0}$, since the edge-colouring is proper. But there are $O(n^r)$ choices for $(x_v)_{v\in V_0}$, hence $M=O(n^r)$. Now assume that $r<|V(H)|$ and that for every proper edge-colouring $c'$ of $H$ without a rainbow $F$ and every edge $e$ of $H$ there is a $c'$-spanning set of size at most $r$ that contains $e$. By the graph removal lemma and the first part of our proposition, we can remove $o(n^2)$ edges from $G$ so that the new graph $G'$ contains no copy of $H$. So it suffices to show that each edge appears in at most $O(n^{r-2})$ tuples $\mathbf{x}$ with $c_\mathbf{x}=c$. Given an edge $e=y_vy_w$ with $y_v\in X_v, y_w\in X_w, vw\in E(H)$ we can pick in $H$ a $c$-spanning set $V_{0,e}$ of size at most $r$ containing $vw$. 
Then any $\mathbf{x}$ with $c_\mathbf{x}=c$ and $x_v=y_v, x_w=y_w$ is determined by $(x_u)_{u\in V_{0,e}\setminus\{v,w\}}$, which gives the result. \end{proof} Now we give some sample applications of Theorem \ref{theorem_rbgeneral} and Proposition \ref{proposition_rbsharp}. We shall give two illustrations, but it is quite easy to generate additional examples. \subsection{Complete graphs}\label{rbkrks} Perhaps the most natural extension of Question \ref{question_rbKr} is to determine the behaviour of the function $\operatorname{ex}(n,K_r,\textnormal{rainbow-}K_s)$. Note that trivially $\operatorname{ex}(n,K_r,\textnormal{rainbow-}K_s)=\Theta(n^r)$ when $s>r$ (by taking a complete $r$-partite graph), and we have seen that $\operatorname{ex}(n,K_s,\textnormal{rainbow-}K_s)=n^{s-1-o(1)}$ (when $s\geq 4$). We also have $\operatorname{ex}(n,K_r,\textnormal{rainbow-}K_s)=0$ whenever $r\geq r_s$ for some integer $r_s$ depending on $s$. Indeed, if we have a $K_r$ with no rainbow copy of $K_s$, and the largest rainbow subgraph has order $t\leq s$, then any of the remaining $(r-t)$ vertices must be joined to this $K_t$ by one of the $\binom{t}{2}$ colours appearing in the $K_t$. But each such colour appears at most once at each vertex, giving $r=O(s^3)$. {In fact, Alon, Lefmann and Rödl showed \cite{alon1991anti} that $r_s=\Theta(s^3/\log{s})$.} However, the question is non-trivial for $s< r< r_s$. First note that $\operatorname{ex}(n,K_r,\textnormal{rainbow-}K_s)=o(n^{s-1})$ whenever $r\geq s$ by Proposition \ref{proposition_rbsharp} (since any maximal rainbow subgraph is a $c$-spanning set). The simplest case for the lower bound is $(r,s)=(5,4)$. In this case Theorem \ref{theorem_rbgeneral} gives a matching lower bound $n^{3-o(1)}$. Indeed, take an arbitrary proper edge-colouring of $K_5$ with no rainbow $K_4$, and take points $p_1,\dots,p_5$ in general position in $\mathbb{R}^3$. 
The existence of appropriate values of $z_\kappa$ follows from the fact that any four of the $p_i$ are linearly dependent (but any three are independent), and each colour is used at most twice. It is easy to deduce that $\operatorname{ex}(n,K_{s+1},\textnormal{rainbow-}K_s)=n^{s-1-o(1)}$ for all $s\geq 4$. When $s=4$ we have $r_s=7$ (in $K_7$ every triangle is contained in four copies of $K_4$, but we saw above that in a properly edge-coloured graph with no rainbow $K_4$ every triangle is contained in at most three), leaving the case $(r,s)=(6,4)$. Unfortunately, in this case Theorem \ref{theorem_rbgeneral} does not give a lower bound of $n^{3-o(1)}$. (To see this, observe that to get such a bound the corresponding points $p_v$ would all have to be non-zero. Then we can use the alternative formulation Theorem \ref{theorem_rbgeneral}$'$ to see that we would have to be able to draw a properly edge-coloured $K_6$ in the plane such that there is no rainbow $K_4$ and lines of edges of the same colour are either all parallel or go through the same point. Applying an appropriate projection and affine transformation, we may assume that we have two colour classes where the edges are all parallel, and these two parallel directions are perpendicular. This leaves essentially two cases to be checked, and neither of them yields an appropriate configuration.) However, we can still deduce a lower bound of $\operatorname{ex}(n,K_6,\textnormal{rainbow-}K_4)\geq n^{12/5-o(1)}$, as sketched below. We can take 6 points $p_0=0$ and $p_a=e^{2\pi i a/5}$ (for $a=1,\dots,5$), identifying $\mathbb{R}^2$ with $\mathbb{C}$: that is, the vertices of a regular pentagon together with its centre. We define a colouring $c$ as follows. Give parallel lines between vertices of the pentagon the same colour, and also give the same colour to the edge incident at the centre which is perpendicular to these lines (see Figure \ref{K6K4}). This gives a proper edge-colouring of $K_6$ and corresponding points in 2 dimensions for which the conditions of Theorem \ref{theorem_rbgeneral} are satisfied, giving a lower bound of $n^{2-o(1)}$. 
(The point $z_\kappa$ is chosen to be $p_a$ when $p_0p_a$ has colour $\kappa$.) This can be improved to $n^{12/5-o(1)}$ by a product argument as follows. Looking at the construction, we see that our graph $G$ is $6$-partite with classes $V_0,\dots,V_5$, at most $n$ vertices, and a proper edge-colouring $\kappa$ such that the following hold. \begin{itemize} \item There are (at least) $n^{2-o(1)}$ copies of $K_6$ in $G$. \item The class $V_0$ has size $1$. \item There is a $5$-colouring $c$ of the edges of $K_6$ (on vertex set $\{0,\dots,5\}$) with no rainbow $K_4$ such that whenever $v_{i_1},v_{i_2},v_{i_3},v_{i_4}$ form a $K_4$ in $G$ with $v_{i_j}\in V_{i_j}$, then $i_j\mapsto v_{i_j}$ gives an isomorphism of colourings between the restrictions of $c$ and $\kappa$ to the appropriate four-vertex graphs (i.e., $\kappa(v_{i_j}v_{i_l})=\kappa(v_{i_{j'}}v_{i_{l'}})$ if and only if $c(i_ji_l)=c(i_{j'}i_{l'})$). Moreover, this 5-colouring $c$ has the property that for all $i,j\in \{0,\dots,5\}$ there is a permutation of the vertices $\{0,\dots,5\}$ which is an automorphism of colourings and maps $i$ to $j$. (Indeed, we can take rotations of the pentagon when $i,j\not =0$, and we can take the permutation $(01)(34)$ when $i=0$, $j=1$.) \end{itemize} We construct a new graph as follows. For each $i\in\{0,\dots,5\}$, pick a permutation $\pi_i$ of $\{0,\dots,5\}$ which gives a colouring automorphism of $c$ and sends $i$ to $0$. Define a $6$-partite graph $G_i$ obtained from $G$ by permuting the vertex classes: $G_i$ has classes $V_0^i,\dots,V_5^i$ given by $V_a^i=V_{\pi_i(a)}$ and the same edge set as $G$. Let $G'$ be the product of these $6$-partite graphs, that is, it is $6$-partite with vertex classes $W_a=V_a^0\times V_a^1\times\dots\times V_a^5$, and two vertices $(v_0,\dots,v_5)\in W_a$ and $(w_0,\dots,w_5)\in W_b$ are joined by an edge if $v_iw_i\in E(G)$ for all $i$. Moreover, colour such an edge by colour $(\kappa(v_0w_0),\dots,\kappa(v_5w_5))$. 
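The two claimed properties of the base $5$-colouring of $K_6$ underlying this construction, properness and the absence of a rainbow $K_4$, can be verified mechanically. The sketch below fixes one concrete labelling (our own reading of the construction, hence an assumption): the chord $\{a,b\}$ of the pentagon receives colour $a+b \bmod 5$, which is constant exactly on parallel classes, and the spoke $\{0,a\}$ receives colour $2a \bmod 5$, the class of the chords perpendicular to it.

```python
from itertools import combinations

# Pentagon vertices 1..5 at angles 2*pi*a/5, centre 0.  Chords {a,b} are
# parallel iff a+b agrees mod 5, and the spoke {0,a} is perpendicular to
# exactly the chords {b,c} with b+c = 2a (mod 5), so the rule below
# realises "parallel chords plus the perpendicular spoke share a colour".
def colour(u, v):
    if u == 0:
        return (2 * v) % 5
    if v == 0:
        return (2 * u) % 5
    return (u + v) % 5

def is_proper():
    # no two edges sharing a vertex receive the same colour
    return all(colour(u, v) != colour(u, w)
               for u in range(6)
               for v, w in combinations([x for x in range(6) if x != u], 2))

def has_rainbow_K4():
    # a rainbow K_4 would need six pairwise distinct edge colours
    for quad in combinations(range(6), 4):
        cols = {colour(u, v) for u, v in combinations(quad, 2)}
        if len(cols) == 6:
            return True
    return False
```

Since only five colours are used in total, a rainbow $K_4$ (six distinctly coloured edges) is ruled out by pigeonhole; the properness check is the substantive part of the verification.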
It is easy to check that the colouring is proper, $G'$ contains no rainbow $K_4$, $G'$ has at most $n^5$ vertices in each class, and $G'$ contains at least $n^{12-o(1)}$ copies of $K_6$, giving the bound stated. \begin{figure}[h] \includegraphics[clip,trim=0.5cm 0.4cm 1cm 0.1cm, width=0.4\linewidth]{K6K4} \centering \caption{The colouring and points used for $(r,s)=(6,4)$ to get a lower bound.} \label{K6K4} \end{figure} This leaves some open questions about $\operatorname{ex}(n,K_r,\textnormal{rainbow-}K_s)$. It would be interesting to determine its order of magnitude for $(r,s)=(6,4)$, or for other pairs with $s<r<r_s$. \subsection{King's graphs} Given positive integers $k,l\geq 2$, write $H_{k,l}$ for the graph with vertex set $\{1,\dots,k\}\times\{1,\dots,l\}$ where $(a,b)$ and $(a',b')$ are joined by an edge if and only if they are distinct and $|a-a'|,|b-b'|\leq 1$. In other words, $H_{k,l}$ is the strong product of a path with $k$ points and a path with $l$ points, sometimes called the $k\times l$ king's graph. We can use our results to show that $\operatorname{ex}(n,H_{k,l},\textnormal{rainbow-}K_4)=n^{k+l-1-o(1)}$. First consider the upper bound. It is easy to see that any sequence of vertices $p_1,\dots,p_{k+l-1}$ is a $c$-spanning set (for all proper edge-colourings $c$ of $H_{k,l}$ without a rainbow $K_4$) if either of the following statements holds. \begin{enumerate} \item We have $p_1=(1,1)$, $p_{k+l-1}=(k,l)$ and $p_{i+1}-p_i\in\{(0,1),(1,0)\}$ for all $i$. \item We have $p_1=(1,l)$, $p_{k+l-1}=(k,1)$ and $p_{i+1}-p_i\in\{(0,-1),(1,0)\}$ for all $i$. \end{enumerate} (Indeed, this follows from the fact that we can add the other vertices one by one, creating a new copy of $K_4$ in our set in each step.) Since any edge is contained in such a sequence, Proposition \ref{proposition_rbsharp} gives $\operatorname{ex}(n,H_{k,l},\textnormal{rainbow-}K_4)=o(n^{k+l-1})$. 
For the lower bound, consider an edge-colouring $c$ of $H_{k,l}$ with $c((a,b)(a+1,b))=a$, where the other edges are given arbitrary distinct colours. This gives a proper edge-colouring of $H_{k,l}$ with no rainbow $K_4$. For each vertex $(a,b)$ of $H$, define $p_{a,b}\in \mathbb{R}^{k+l}$ to be the vector with $i$\textsuperscript{th} coordinate \begin{equation*} (p_{a,b})_i = \begin{cases*} 0 & if $i\not =a, k+b$ \\ 1 & if $i=a$\\ (-1)^a & if $i=k+b$ \end{cases*} \end{equation*} For each $1\leq a\leq k-1$ we let $z_a\in \mathbb{R}^{k+l}$ be the vector with all entries zero except the $a$\textsuperscript{th} and $(a+1)$\textsuperscript{th} coordinates which are $1$, and for each other colour $\kappa$ used in the colouring of $H_{k,l}$ we take $z_\kappa=p_v+p_w$, where $vw$ is the unique edge of colour $\kappa$. Then we have $p_{a,b}+p_{a+1,b}=z_a$, so the conditions of Theorem \ref{theorem_rbgeneral} are satisfied. The dimension of the subspace of $\mathbb{R}^{k+l}$ spanned by the vectors $p_{a,b}$ is at least $k+l-1$, since $p_{1,l},p_{1,l-1},\dots,p_{1,1},p_{2,1},p_{3,1},\dots,p_{k,1}$ are linearly independent. We get the required lower bound $n^{k+l-1-o(1)}$. \section*{Acknowledgement} We are grateful to Shagnik Das for pointing out that the behaviour of $r_s$ (in Subsection \ref{rbkrks}) was known.
https://arxiv.org/abs/1708.04869
Martingale Benamou--Brenier: a probabilistic perspective
In classical optimal transport, the contributions of Benamou-Brenier and McCann regarding the time-dependent version of the problem are cornerstones of the field and form the basis for a variety of applications in other mathematical areas. We suggest a Benamou-Brenier type formulation of the martingale transport problem for given $d$-dimensional distributions $\mu, \nu $ in convex order. The unique solution $M^*=(M_t^*)_{t\in [0,1]}$ of this problem turns out to be a Markov-martingale which has several notable properties: In a specific sense it mimics the movement of a Brownian particle as closely as possible subject to the conditions $M^*_0\sim\mu, M^*_1\sim \nu$. Similar to McCann's displacement-interpolation, $M^*$ provides a time-consistent interpolation between $\mu$ and $\nu$. For particular choices of the initial and terminal law, $M^*$ recovers archetypical martingales such as Brownian motion, geometric Brownian motion, and the Bass martingale. Furthermore, it yields a natural approximation to the local vol model and a new approach to Kellerer's theorem. This article is parallel to the work of Huesmann-Trevisan, who consider a related class of problems from a PDE-oriented perspective.
\section{Introduction} The roots of optimal transport as a mathematical field go back to Monge \cite{Mo81} and Kantorovich \cite{Ka42} who established its modern formulation. Important triggers for its steep development in the last decades were the seminal results of Benamou, Brenier, and McCann \cite{Br87, Br91, BeBr99, Mc94}. Today the field is famous for its striking applications in areas ranging from mathematical physics and PDE-theory to geometric and functional inequalities. We refer to \cite{Vi03, Vi09, AmGi13, Sa15} for comprehensive accounts of the theory. Recently there has also been interest in optimal transport problems where the transport plan must satisfy additional martingale constraints. Such problems arise naturally in robust finance, but are also of independent mathematical interest; for example, they have important consequences for the study of martingale inequalities (see e.g.\ \cite{BoNu13,HeObSpTo12,ObSp14}) and the Skorokhod embedding problem \cite{BeCoHu14, KaTaTo15}. Early papers to investigate such problems include \cite{HoNe12, BeHePe12, TaTo13, GaHeTo13, DoSo12, CaLaMa14}, and this topic is commonly referred to as martingale optimal transport. In view of the central role taken by the seminal results of Benamou, Brenier, and McCann on optimal transport for squared Euclidean distance, the related continuous time transport problem and McCann's displacement interpolation, it is intriguing to search for similar concepts also in the martingale context. While \cite{BeJu16, HeTo13} propose a martingale version of Brenier's monotone transport mapping, our starting point is the Benamou-Brenier continuous time transport problem which we restate here for comparison with the martingale analogues that we will consider subsequently. 
\subsection{Benamou-Brenier transport problem and McCann-interpolation in probabilistic terms} In view of the probabilistic nature of the results we present subsequently, it is convenient to recall some classical concepts and results of optimal transport in probabilistic language. Given probabilities $\mu, \nu $ in the space $\mathcal P_2(\mathbb{R}^d) $ of $d$-dimensional distributions with finite second moment, consider \begin{align}\label{MBBB}\tag{BB} T_2(\mu, \nu):=\inf_{X_t=X_0+\int_0^tv_s\, ds, X_0\sim \mu,X_1\sim \nu} {\mathbb E}\left[\int_0^1 |v_t|^2\, dt\right]. \end{align} Then by \cite{Br87} we have \begin{theorem}\label{BrenierTheoremIntro} Let $\mu, \nu \in \mathcal P_2(\mathbb{R}^d)$ and assume that $\mu$ is absolutely continuous with respect to Lebesgue measure. Then \eqref{MBBB} has a unique optimizer $X^*$. \end{theorem} \begin{remark}\label{UniquenessMeaning} In Theorem \ref{BrenierTheoremIntro} (and similarly below) the solution to \eqref{MBBB} is unique in the sense that there exists a unique probability measure on the pathspace $C([0,1])$ such that the canonical/identity process optimizes \eqref{MBBB}. \end{remark} In probabilistic terms, McCann's displacement interpolation can be defined by $[\mu,\nu]_t:= \mathrm{law}(X^*_t)$ where $t\in [0,1]$ and $\mu, \nu$, $X^*$ are as in Theorem \ref{BrenierTheoremIntro}. \begin{theorem}\label{DisplacementIntro} Let $\mu, \nu \in \mathcal P_2(\mathbb{R}^d)$ and assume that $\mu$ is absolutely continuous with respect to Lebesgue measure. Let $s, t, \lambda\in [0,1], s < t$. Then \begin{align}\left[[\mu, \nu]_s, [\mu, \nu]_t\right]_\lambda =[\mu, \nu]_{(1-\lambda) s + \lambda t }.\end{align} Moreover \begin{align} (t-s)\, T_2^{1/2}\, (\mu, \nu)= T_2^{1/2} ( [\mu, \nu]_s, [\mu, \nu]_t).\end{align} \end{theorem} Finally, the optimizer of \eqref{MBBB} is given through the gradient of a convex function. 
More precisely, by \cite{BeBr99}, we have \begin{theorem}\label{BrenierStructureTheoremIntro} Assume that $\mu$ is absolutely continuous with respect to Lebesgue measure and $\mu, \nu \in \mathcal P_2(\mathbb{R}^d)$. A candidate process $X$, $X_0\sim \mu, X_1\sim \nu$ is an optimizer if and only if $X_1= f(X_0)$, where $f$ is the gradient of a convex function $\phi:{\mathbb R}^d\to {\mathbb R}$ and all particles move with constant speed, i.e.\ $X_t= t X_1 + (1-t) X_0 = X_0 + t(X_1-X_0)$. \end{theorem} \subsection{Martingale counterparts} Let $\mu, \nu\in \mathcal P_2(\mathbb{R}^d)$ be in convex order (denoted $\mu\preceq_c \nu$) and write $B$ for Brownian motion on $\mathbb{R}^d$. We consider the optimization problem \begin{align}\label{MBMBB}\tag{MBB} MT(\mu, \nu):=\sup_{\substack{M_t=M_0+\int_0^t \sigma_s\, dB_s \\ M_0\sim \mu,M_1\sim \nu}} {\mathbb E}\left[\int_0^1 \mbox{tr}(\sigma_t)\, dt\right], \end{align} see also \eqref{MBMBBv2} below. We have \begin{theorem}\label{MainTheoremIntro} Assume that $\mu, \nu \in \mathcal P_2(\mathbb{R}^d)$ satisfy $\mu\preceq_c\nu$. Then \eqref{MBMBB} has an optimizer $M^*$ which is unique in law. \end{theorem} On the face of it, the optimization problems \eqref{MBBB} and \eqref{MBMBB} look rather different. However, it is not hard to see that both problems are equivalent to optimization problems that are much more obviously related. 
In Section \ref{sec causal} below we establish that \begin{align} X^*&=\text{argmin}_{X_0\sim\mu, X_1\sim\nu} W^2(X, \text{constant speed particle}),\\ M^*&=\text{argmin}_{M_0\sim\mu, M_1\sim\nu} W^2_c(M, \text{constant volatility martingale}),\label{CausalFormulation} \end{align} where $W^2$ denotes the Wasserstein distance with respect to the squared Cameron--Martin norm, while $W^2_c$ denotes an \emph{adapted} or \emph{causal} analogue\footnote{Causal transport plans generalize adapted processes in the same way as classical Kantorovich transport plans extend Monge maps.} (in the terminology of Lassalle \cite{La13}), see Section \ref{sec causal} for details. The reformulation in \eqref{CausalFormulation} allows for the following interpretation: $M^*$ is the process whose evolution follows the movement of a Brownian particle as closely as possible subject to the marginal conditions $M_0\sim \mu, M_1\sim \nu$. This motivates the name in the following definition. \begin{definition} \label{def d dim sBm} Let $\mu, \nu, M^*$ be as in Theorem \ref{MainTheoremIntro}. Then we call $M^*$ the \emph{stretched Brownian motion} (sBm) from $\mu$ to $\nu$. We define the \emph{martingale displacement interpolation} by \begin{align} [\mu, \nu]^M_t:= \mathrm{law}(M^*_t)\,, \end{align} for $t\in [0,1]$. \end{definition} In analogy to Theorem \ref{DisplacementIntro} we have \begin{theorem}\label{MDisplacementIntro} Assume that $\mu, \nu \in \mathcal P_2(\mathbb{R}^d)$ satisfy $\mu\preceq_c\nu$. Let $s, t, \lambda\in [0,1], s < t$. Then \begin{align}\left[[\mu, \nu]^M_s, [\mu, \nu]^M_t\right]^M_\lambda =[\mu, \nu]^M_{(1-\lambda) s + \lambda t }\,.\end{align} Moreover \begin{align} (t-s)\, MT^{2}\, (\mu, \nu)= MT^{2} ( [\mu, \nu]^M_s, [\mu, \nu]^M_t).\end{align} \end{theorem} \subsection{Structure of stretched Brownian motion} In the solution of the classical Benamou--Brenier transport problem, particles travel with constant speed along straight lines. 
In contrast, we will see that in the case of sBm\ the movement of individual particles mimics that of Brownian motion. Broadly speaking, the ``direction'' of these particles will be determined -- similar to the classical case -- by a mapping which is the gradient of a convex function. \medskip For simplicity, we first consider the particular case where $\mu, \nu$ with $\mu\preceq_c \nu$ are probabilities on the real line and $\mu$ is concentrated in a single point, i.e.\ $\mu= \delta_m$ where $m$ is the center of $\nu$. It turns out that in this case the sBm\ $M^*$ is precisely the ``Bass martingale'' \cite{Ba83} (or `Brownian martingale') with terminal distribution $\nu$. We briefly recall its construction: Pick $f:{\mathbb R}\to{\mathbb R}$ increasing such that $f(\gamma)=\nu$, where $\gamma$ is the standard Gaussian distribution on $\mathbb{R}$. Then set for $t\in [0,1]$ \begin{align}\label{MBBass} M_t:= \mathbb{E}[f(B_1)|\mathcal{F}_t]= \mathbb{E}[f(B_1)|B_t] = f_t(B_t), \end{align} where $B=(B_t)_{t\in [0,1]}$ denotes Brownian motion started in $B_0\sim \delta_{0}$, $(\mathcal{F}_t)_{t\in [0,1]}$ the Brownian filtration and $f_t(b):=\int f(b+y)\, d\gamma_{1-t}(y)$, $\gamma_s\sim N(0,s)$. Clearly $M$ is a continuous Markov martingale such that $M_0\sim \delta_m, M_1\sim \nu$. As a particular consequence of the results below we will see that $M$ is a stretched Brownian motion. \medskip To state our results for the general, multidimensional case we need to consider an extension of the Bass construction. 
Let $F: \mathbb{R}^d\to \mathbb{R}$ be a convex function and set \begin{align} \textstyle \label{eq f_t} f_t(b)=\int \nabla F(b + y)\gamma_{1-t}^d(dy), \end{align} where $\gamma_s^d$ denotes the centered $d$-dimensional Gaussian with covariance matrix $s\, \mbox{Id}$. If $B$ denotes $d$-dimensional Brownian motion started in $B_0 \sim \alpha$, we have \begin{align} \mathbb{E}[\nabla F(B_1)| \mathcal{F}_t]= f_t(B_t), \ t\in [0,1]. \end{align} \begin{definition}\label{def d dim ssBm} A continuous $\mathbb{R}^d$-valued martingale $M$ is a \emph{standard stretched Brownian motion (s$^2$Bm)} from $\mu$ to $\nu$ if there exist a probability measure $\alpha$ on $\mathbb{R}^d$ and a convex function $F:{\mathbb R}^d\to {\mathbb R}$ with $\nabla F(\alpha*\gamma^d)=\nu$, such that $$M_t = \mathbb{E}[\nabla F(B_1)| \mathcal{F}_t]\, \mbox{ and }\, M_0\sim \mu,$$ where $B$ is a Brownian motion with $B_0\sim \alpha$. \end{definition} Note that for $\alpha,\nu\in \mathcal P_2(\mathbb{R}^d)$ there exists a convex function $F$ with $\nabla F(\alpha*\gamma^d)=\nu$ and $F$ is $\alpha*\gamma^d$-unique up to an additive constant. (This is a consequence of Brenier's Theorem, see e.g.\ Theorem \ref{BrenierTheoremIntro} or \cite[Theorem 2.12]{Vi03}.) \begin{remark} Both Brownian motion and geometric Brownian motion are examples of standard stretched Brownian motion. \end{remark} We have the following results. \begin{theorem}\label{ThmStandardtoStretchedIntro} Let $\mu, \nu \in \mathcal P_2(\mathbb{R}^d)$ with $ \mu\preceq_c \nu$. If $M$ is a standard stretched Brownian motion from $\mu $ to $\nu$, then $M$ is an optimizer of \eqref{MBMBB}, i.e.\ $M$ is the stretched Brownian motion from $\mu$ to $\nu$. \end{theorem} \begin{theorem}\label{ThmStretchedtoStandardIntro}Let $\mu, \nu \in \mathcal P_2(\mathbb{R}^d)$ with $ \mu\preceq_c \nu$. 
Let $M^*$ be the stretched Brownian motion from $\mu $ to $\nu$, i.e.\ the optimizer of \eqref{MBMBB}. Write $M^{*,x}$ for the martingale $M^*$ conditioned on starting at $M^*_0= x$. Then for $\mu$-a.a.\ $x\in \mathbb{R}^d$ the martingale $M^{*,x}$ is a standard stretched Brownian motion. \end{theorem} As a particular consequence of these results, the notions sBm\ and s$^2$Bm\ coincide if $\mu$ is concentrated in a single point. However, the relation between sBm\ and s$^2$Bm\ is more complicated in general: A notable intricacy of the martingale transport problem is caused by the fact that, loosely speaking, certain regions of the space do not communicate with each other. Consider for a moment the particular case where $\mu, \nu$ are distributions on the real line. In this instance, a martingale transport problem can be decomposed into countably many ``minimal'' components and on each of these components the behaviour of the problem is fairly similar to the classical transport problem. We refer the reader to Section \ref{sec main dim 1} for the precise definition and only provide an illustrative example at this stage. \begin{example} Let $\mu:=1/2 (\lambda_{|[-3,-2]} + \lambda_{|[2,3]}),$ $\nu:=1/6 (\lambda_{|[-4,-1]} + \lambda_{|[1,4]})$. Then \emph{any} martingale $M, M_0 \sim \mu, M_1\sim\nu$ will satisfy the following: If $M_0 >0$, then $M_1>0$ and if $M_0 \leq0 $ then $M_1\leq 0$. I.e.\ the positive and the negative halfline do not ``communicate,'' and a problem of martingale transport should be considered on each of these parts of the space separately. \end{example} If the pair $(\mu, \nu)$ decomposes into more than one minimal component, as in the previous example, there exists no s$^2$Bm\ from $\mu$ to $\nu$. However, for the one-dimensional case we will establish the following: A martingale is a sBm\ if and only if it behaves like a s$^2$Bm\ on each minimal component, see Theorem \ref{MainTheoremOneDim}. 
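The non-communication in the example above can be checked numerically through the potential functions $u_\rho(x)=\int|x-y|\,\rho(dy)$: for measures with equal means, convex order $\mu\preceq_c\nu$ amounts to $u_\mu\le u_\nu$, and the contact $u_\mu(0)=u_\nu(0)$ is precisely what prevents any martingale coupling from moving mass across $0$. The following sketch (not taken from the paper; a plain midpoint discretisation in Python, with helper names of our own choosing) verifies both facts for the measures of the example.

```python
import numpy as np

# Discretise mu = 1/2 (Leb|[-3,-2] + Leb|[2,3]) and
# nu = 1/6 (Leb|[-4,-1] + Leb|[1,4]) by midpoint quadrature.
def uniform_mixture(intervals, densities, n=4000):
    pts, mass = [], []
    for (a, b), w in zip(intervals, densities):
        x = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
        pts.append(x)
        mass.append(np.full(n, w * (b - a) / n))  # mass of each cell
    return np.concatenate(pts), np.concatenate(mass)

y_mu, m_mu = uniform_mixture([(-3, -2), (2, 3)], [1 / 2, 1 / 2])
y_nu, m_nu = uniform_mixture([(-4, -1), (1, 4)], [1 / 6, 1 / 6])

def potential(x, y, m):
    # u(x) = int |x - y| d(measure)(y), evaluated on a grid of x's
    return np.abs(x[:, None] - y[None, :]) @ m

grid = np.linspace(-5.0, 5.0, 201)
u_mu = potential(grid, y_mu, m_mu)
u_nu = potential(grid, y_nu, m_nu)

assert abs(y_mu @ m_mu) < 1e-9 and abs(y_nu @ m_nu) < 1e-9  # equal means (= 0)
assert np.all(u_mu <= u_nu + 1e-8)                          # mu <=_c nu
# Contact point at the origin: both potentials equal 5/2 there.
assert abs(potential(np.array([0.0]), y_mu, m_mu)[0] - 2.5) < 1e-6
assert abs(potential(np.array([0.0]), y_nu, m_nu)[0] - 2.5) < 1e-6
```

Both potentials equal $5/2$ at the origin, so any martingale transport between these two measures must split at $0$, in line with the decomposition into minimal components described above.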
\medskip Notably, the challenges posed by non-communicating regions appear much more intricate for dimension $d\geq 2$, see the deep contributions of Ghoussoub--Kim--Lim \cite{GhKiLi16}, DeMarch--Touzi \cite{DMTo17} and Ob\l\'oj--Siorpaes \cite{ObSi17}. In particular, it is not yet fully understood how to break up a martingale transport problem into distinct pieces which mimic the behaviour of minimal components in the one-dimensional case. Below we will give special emphasis to the case $d=2$ under the additional regularity assumption that $\nu$ is absolutely continuous. This instance seems of particular interest since it allows one to recognize the geometric structure of the problem while avoiding the more intricate effects of non-minimality which are present in higher dimension. Based on the results of \cite{DMTo17,ObSi17} and a particular `monotonicity principle' we will be able to largely recover the main one-dimensional result (Theorem \ref{MainTheoremOneDim}) in the two-dimensional case, see Sections \ref{sec preliminaries}-\ref{sec main dim 2} below and specifically Theorem \ref{thm main} therein. We conjecture that a similar structural characterization of sBm\ can be established in general dimensions, pending future developments in the direction of \cite{DMTo17,ObSi17}. \subsection{Further remarks} \subsubsection{Discrete time version and monotonicity principle} The classical Benamou--Brenier transport formulation immediately reduces to the familiar discrete-time transport problem for squared distance costs. Similarly, the martingale version \eqref{MBMBB} can be reformulated as a discrete-time problem, more precisely a weak transport problem in the sense of \cite{GoRoSaTe14}. The discrete-time reformulation of \eqref{MBMBB} plays an important role in the derivation of our main results. To analyze the discrete problem we introduce a ``monotonicity principle'' for weak transport problems. 
The origin of this approach is the characterization of optimal transport plans in terms of $c$-cyclical monotonicity. In optimal transport, the potential of this concept has been recognized by Gangbo--McCann \cite{GaMc96}. More recently, variants of this idea have proved to be useful in a number of related situations, see \cite{KiPa13, BeGr14, Za14, GuTaTo15b, BeJu16, BeCoHu14, NuSt16, BeEdElSc16} among others. In view of this, it seems possible that the monotonicity principle for weak transport problems could also be of interest in its own right (cf.\ \cite{GoJu18, BaBePa18}, which appeared after we first posted this article). \subsubsection{Schr\"odinger problem} Our variational problem \eqref{MBMBB} is reminiscent of the celebrated Schr\"odinger problem, in which the idea is to minimize the relative entropy with respect to Wiener measure (or other Markov laws) over path-measures with fixed initial and final marginals. We refer to the survey \cite{Le14} and the references therein. Among the similarities, let us mention that the solution to the Schr\"odinger problem is unique and is a Markov law, and furthermore this problem also has a transport-like discrete-time reformulation which is fundamental to the dynamic path-space version. On the other hand, \eqref{MBMBB} and the Schr\"odinger problem are in a particular sense at opposite ends of the spectrum of probabilistic variational problems: we optimize over ``volatilities keeping the drift fixed'' whereas the latter optimizes over ``drifts keeping the volatility fixed.'' \subsubsection{Bass-martingale and Skorokhod embedding} The Bass-martingale \eqref{MBBass} was used by Bass \cite{Ba83} to solve the Skorokhod embedding problem. Hobson asked whether there are natural optimality properties related to this construction and if one could give a version with a non-trivial starting law. 
Problem \eqref{MBMBB} yields such an optimality property of the Bass construction, and stretched Brownian motion gives rise to a version of the Bass embedding with non-trivial starting law. Notably, a characterization of the Bass martingale in terms of an optimality property was first obtained in \cite{BeCoHuKa17}; however, the variational problem considered in that article refers to measure-valued martingales and appears rather different from the one considered in \eqref{MBMBB}. \subsubsection{Geometric Brownian motion} From the above results it is clear that Brownian motion is (up to an appropriate scaling of time) a s$^2$Bm\ between any of its marginals. In fact, the same holds for Brownian martingales $dM_t=\sigma dB_t$ for constant and time-independent $\sigma$. We find it notable that the same applies in the case of \emph{geometric} Brownian motion. \subsubsection{Kellerer's theorem and Lipschitz kernels} Kellerer's theorem \cite{Ke73} states that if a family of distributions $(\mu_t)_{t\in[0,1]}$ on the real line satisfies $s\leq t\Rightarrow \mu_s\preceq_c \mu_t,$ there exists a Markovian martingale $(X_t)_{t\in [0,1]}$ with $\mathrm{law}(X_t)=\mu_t$ for every $t$. In contemporary terms (see \cite{HPRY}), $(\mu_t)_{t\in [0,1]}$ is called a \emph{peacock} and $(X_t)_{t\in [0,1]}$ is a Markovian martingale associated to this peacock. The most technically involved part in establishing Kellerer's theorem is to prove that for $\mu\preceq_c\nu$ there exists a martingale transition kernel $P$ having the following \emph{Lipschitz-property}: A kernel $P: x\mapsto \pi_x $, $\nu(dy)= \int \mu(dx) \pi_x(dy)$, is called Lipschitz (or more precisely $1$-Lipschitz) if $\mathcal{W}_1(\pi_x,\pi_{x'})\leq |x-x'|$ for all $x,x'$. Kellerer's proof of the existence of Lipschitz-kernels is not constructive and employs Choquet's theorem. 
Other proofs are based on solutions to the Skorokhod problem for non-trivial starting law, see \cite{Lo08b, BeHuSt16}. Stretched Brownian motion yields a new construction of a Lipschitz-kernel: Given probabilities $\mu\preceq_c \nu$ on the real line, write $M^*$ for the sBm\ from $\mu$ to $\nu$; then $\mathrm{law}(M_1^*\,|\, M_0^*)$ is a Lipschitz kernel. We provide the argument in Corollary \ref{coro LM} below. The question of whether Kellerer's theorem can be extended to the case of marginal measures on $\mathbb{R}^d, d\geq 2$, remains open. While all previously known constructions of kernels used for the proof of Kellerer's theorem were inherently limited to dimension $d=1$, the approach sketched above seems more amenable to generalization. We intend to pursue this question further in future work. \subsubsection{Almost continuous diffusions / local volatility model}\label{LocalVolParagraph} Assume that $(\mu_t)_{t\in [0,1]}$ (where $\mu_t, t\in [0,1]$ are probabilities on the real line) is a peacock such that $t\mapsto \mu_t$ is continuous in the weak topology. Lowther \cite{Lo08b} establishes that an appropriate continuity condition makes the Markov martingale appearing in Kellerer's theorem unique. In his terms, there is a unique ``almost continuous'' martingale diffusion $M^{ac}$ such that $M^{ac}_t\sim \mu_t, t\in [0,1]$. Under further regularity conditions, $M^{ac}$ is precisely Dupire's local volatility model. Stretched Brownian motion yields a simple approximation scheme to $M^{ac}$. Write $M^n$ for the Markov martingale satisfying that for each $k\in \{1,\ldots, n\}$, $(M^n_t)_{t\in [(k-1)/n, k/n]}$ is (modulo the obvious affine time-change) stretched Brownian motion between $\mu_{(k-1)/n}$ and $\mu_{k/n}$. 
$M^n$ is then a continuous diffusion and based on Lowther's results \cite{Lo08b, Lo09} it is straightforward that \begin{align}\label{LocVolLimit} M^{ac}= \lim_{n\to \infty} M^n, \end{align} where the limit is in the sense of convergence of finite-dimensional distributions (cf.\ \cite{BeHuSt16}). \subsubsection{L\'evy processes} Many arguments in this article rely only on the independence and stationarity of increments of Brownian motion. Therefore a problem similar to \eqref{MBMBB}, but based on a reference L\'evy process instead, should conceivably exhibit properties similar to those we find in the Brownian case. In this direction it could be an interesting question to identify the outcome of the approximation procedure described in \eqref{LocVolLimit}. \subsubsection{Dual problem, related work} Optimization problems similar to \eqref{MBMBB} were first studied from a general perspective by Tan and Touzi \cite{TaTo13}, in particular establishing a duality theory for this type of problem. The dual viewpoint is also emphasized in \cite{HuTr17}, which is parallel to the present work. Among other results, \cite{HuTr17} derives a PDE that yields a sufficient condition for a flow of measures to optimize \eqref{MBMBB} or related cost criteria. \subsection{Outline of the article} In Section \ref{sec refined} we introduce the discrete-time variant of our optimization problem. We also prove some of the multidimensional results stated in the introduction and provide further properties of sBm\ (a dynamic programming principle for \eqref{MBMBB}, the Markov property of sBm). In Section \ref{sec main dim 1 and 2} we state our main results regarding the structure of sBm\ in dimensions one and two. In Section \ref{sec mono} we present a monotonicity principle for weak transport problems, which is crucial for our analysis in dimension two, but may also be of independent interest. In Section \ref{sec pending proofs} we conclude the proofs of our main results. 
Finally, in Section \ref{sec causal} we present further optimality properties of sBm\ and s$^2$Bm\ in terms of a (causal) optimal transport problem between martingale laws. \subsection{Notation} The set of probability measures on a set $\mathsf{X}$ will be denoted by $\mathcal P(\mathsf{X})$. For $\rho_1,\rho_2\in\mathcal P(\mathsf{X})$ we write $\Pi(\rho_1,\rho_2)$ for the set of all couplings of $\rho_1$ and $\rho_2$, i.e.\ all measures on the product space with marginals $\rho_1$ and $\rho_2$ respectively. Two probability measures $\mu, \nu \in \mathcal P(\mathbb{R}^d)$ are said to be in convex order, in short $\mu\preceq_c\nu$, if for all convex real valued functions $\phi$ it holds that $\int \phi \, d\mu\leq \int \phi \, d\nu.$ \medskip\\ \textit{In this article, we fix $\mu,\nu\in\mathcal P(\mathbb{R}^d)$, assume that $\mu\preceq_c\nu$ and that both measures have finite second moment. } \medskip\\ We denote by $\mathsf{M} (\mu,\nu)$ the set of all martingale couplings with marginals $\mu$ and $\nu$ (which is non-empty by Strassen's Theorem \cite{St65}), i.e.\ $$\mathsf{M} (\mu,\nu):=\{\pi\in\Pi(\mu,\nu): \mathbb{E}^\pi[(y-x)h(x)]=0\mbox{ for all $h:\mathbb{R}^d\to\mathbb{R}$ Borel bounded}\}.$$ For a generic measure $\pi$ on $\mathbb{R}^d\times\mathbb{R}^d$ we denote by $(\pi_x)_{x\in\mathbb{R}^d}$ the conditional transition kernel given the first coordinate or, equivalently, its disintegration w.r.t.\ the first marginal. For $\rho\in\mathcal P(\mathsf{X})$ and a measurable map $f:\mathsf{X}\to\mathsf{Y}$ we write $f(\rho)=\rho\circ f^{-1}$ for the pushforward of $\rho$ under $f$. 
For a set $A\subset \mathbb{R}^d$ we denote by $\mbox{aff}(A)$ the smallest affine vector space containing it, $\mbox{dim}(A)$ the dimension of $\mbox{aff}(A)$, $\mbox{ri}(A)$ the relative interior of $A$ (i.e.\ the interior of $A$ with respect to the relative topology of $\mbox{aff}(A)$ as inherited from the usual topology in $\mathbb{R}^d$), and $\partial A:= \overline{A}\backslash \text{ri}(A)$ the relative boundary. By $\mbox{co}(A)$ and $\overline{\mbox{co}}(A)$ we denote the convex hull and the closed convex hull of $A$ respectively. The relative face of $A$ at $a$ is defined by $\text{rf}_a(A)=\{y\in A: (a-\varepsilon(y-a),y+\varepsilon(y-a))\subset A,\text{ some }\varepsilon >0 \}$. For a set $\Gamma\subset \mathbb{R}^d\times\mathbb{R}^d$ we denote $\Gamma_x:=\{y:(x,y)\in\Gamma\}$ and $\mbox{proj}_1(\Gamma)$ the projection of $\Gamma$ onto the first coordinate. Given $\pi\in \mathsf{M} (\mu,\nu)$ we say that $\Gamma\subset \mathbb{R}^d\times\mathbb{R}^d$ is a martingale support for $\pi$ if $\pi(\Gamma)=1$ and $x\in \mbox{ri}(\Gamma_x)$ for $\mu$-a.e.\ $x$. Finally, we denote by $\lambda^d,\gamma^d,\gamma^d_t$ resp.\ the Lebesgue, standard Gaussian, and the Gaussian measure with covariance matrix $t\,\text{Id}$ in $\mathbb{R}^d$, and reserve the symbol $*$ for convolution. \section{Refined and auxiliary results in arbitrary dimensions}\label{sec refined} We start by restating our main optimization problem in a (slightly) more precise form. \begin{align}\label{MBMBBv2} MT:=MT(\mu,\nu):=\sup_{M_t=M_0+\int_0^t \sigma_s\, dB_s, M_0\sim \mu,M_1\sim \nu} {\mathbb E}\left[\int_0^1 \mbox{tr}(\sigma_t)\, dt\right]. \end{align} Here the supremum is taken over the class of all filtered probability spaces $(\Omega,\mathcal F, \mathbb{P})$, with $\sigma$ an $\mathbb{R}^{d\times d}$-valued $\mathcal F$-progressive process and $B$ a $d$-dimensional $\mathcal F$-Brownian motion, such that $M$ is a martingale. 
In fact, as a particular consequence of Theorem \ref{lem inequality static dynamic}, the choice of the underlying probability space is not relevant, provided that $(\Omega,\mathcal F, \mathbb{P})$ is rich enough to support an $\mathcal{F}_0$-measurable random variable with continuous distribution. By Doob's martingale representation theorem (see e.g.\ \cite[Theorem 4.2]{KaSh91}), the supremum above is the same if we optimize over all continuous $d$-dimensional local martingales from $\mu$ to $\nu$ with absolutely continuous cross-variation matrix (one then replaces the cost by the trace of the root of the Radon--Nikodym density of said matrix). We will also be interested in a ``static'' version of the above problem, just as the Benamou-Brenier formula is associated with the static optimal transport problem with quadratic cost \begin{align} WT:=WT(\mu,\nu):= \sup_{\substack{\{\pi_x\}_x,\, \text{mean}(\pi_x)=x \\ \int\mu(dx)\pi_x(dy)=\nu(dy) }}\int\mu(dx) \sup_{q\in\Pi(\pi_x,\gamma^d)}\int q(dm,db)\,\, m\cdot b\, . \label{static weak transport}\tag{$WOT$} \end{align} \noindent The tag $(WOT)$ reflects the fact that this is a weak optimal transport problem (the cost function is non-linear in the optimization variable). \begin{remark}\label{rem sup tp inf} Completing the square in \eqref{static weak transport} yields \begin{align}\label{eq weak Gozlan 2} d+\int|y|^2\, d\nu - 2\,WT= \inf_{\substack{\{\pi_x\}_x,\, \text{mean}(\pi_x)=x \\ \int\mu(dx)\pi_x(dy)=\nu(dy) }}\int\mu(dx) \mathcal{W}_2(\pi_x,\gamma^d)^2, \end{align} where $\mathcal{W}_2$ is the usual $L^2$ Wasserstein distance on $\mathcal P(\mathbb{R}^d)$ (and $d$ is the second moment of $\gamma^d$). The r.h.s.\ of \eqref{eq weak Gozlan 2} is clearly a weak transport problem in the setting of Gozlan et al.\ \cite{GoRoSaTe14,GoRoSaTe15}. \end{remark} \medskip We start by establishing the link between the static and dynamic problems introduced so far, and moreover, establish the uniqueness of optimizers in either case. 
As a corollary, this yields Theorem \ref{MainTheoremIntro} stated in the introduction. \begin{theorem} \label{lem inequality static dynamic} The static and the dynamic problems \eqref{static weak transport} and \eqref{MBMBB} are equivalent. More precisely, \begin{enumerate} \item $WT=MT<\infty$; \item \eqref{static weak transport} has a unique optimizer $\pi^*$; \item \eqref{MBMBB} has a unique-in-law optimizer $M^*$; \item $\pi^*=\mathrm{law}(M^*_0,M^*_1)$ and $M^*=G(\pi^*)$ for some function $G$, i.e.\ $M^*$ can be explicitly constructed from $\pi^*$. \end{enumerate} \end{theorem} \begin{proof} Let $M$ be feasible for \eqref{MBMBB}. By It\^o's formula and the martingale property of $M$ we have $$ \textstyle {\mathbb E}\left[\int_0^1 \mbox{tr}(\sigma_t)\, dt\right] = {\mathbb E}[M_1\cdot B_1-M_0\cdot B_0]= {\mathbb E}[M_1\cdot (B_1-B_0)]= {\mathbb E}[\,\, {\mathbb E}[ M_1\cdot (B_1-B_0) \, |\, M_0] \,\,] .$$ Letting $q_x = \mathrm{law}(M_1,B_1-B_0\,|\,M_0=x)$ we find $q_x\in\Pi(\pi_x\, ,\, \gamma^d)$ for $\pi_x = \mathrm{law}(M_1\,|\,M_0=x)$ and $$\textstyle {\mathbb E}\left[\int_0^1 \mbox{tr}(\sigma_t)\, dt\right] = \int\mu(dx)\int q_x(dm,db)~m\cdot b. $$ From this we easily conclude $WT \geq MT$. Now let $\pi$ be feasible for \eqref{static weak transport}. For each $x$ we can find $F^x(\cdot)$ convex such that $\nabla F^x(\gamma^d)=\pi_x$. We now define $M^x_t:=\mathbb{E}[\nabla F^x(B_1)|\mathcal{F}^B_t]$ for a given standard Brownian motion on $\mathbb{R}^d$ with Brownian filtration $\mathcal{F}^B$. Potentially enlarging our probability space we can assume the existence of a random variable $X$ independent of the Brownian motion $B$ with $X\sim\mu$. We denote the filtration (on the potentially bigger probability space) by $\mathcal F$. 
Since $M_0^x=\int y\pi_x(dy)=x$ and $\int\mu(dx)\pi_x(dy)=\nu(dy)$ we conclude that $\{M^{X}_t\}_{t\in [0,1]}$ is a continuous martingale from $\mu$ to $\nu$. By construction $$\textstyle \int\mu(dx) \sup_{q\in\Pi(\pi_x,\gamma^d)}\int q(dm,db)\,\, m\cdot b = \int\mu(dx) \int\gamma^d(db)\,\, b\cdot \nabla F^x(b) = {\mathbb E}\left[ \, \mathbb{E} \left[ B_1\cdot M_1^X |X \right ]\, \right ] , $$ and the last term equals $\textstyle\mathbb{E}[\int_0^1 \text{tr}(\sigma_t)dt ]$ as before ($\sigma$ can easily be computed from $\nabla F^x$). This proves $WT \leq MT$ and hence $WT = MT$. The finiteness $WT<\infty$ follows from $m\cdot b \leq |m|^2+|b|^2$ and the fact that $\nu$ and $\gamma^d$ have finite second moments; see \eqref{static weak transport}. To show that \eqref{static weak transport} is attained let us denote by $(\pi^n)_{n\in\mathbb{N}}$ (where $\pi^n(dx,dy)=\mu(dx)\pi^n_x(dy)$) an optimizing sequence. The set $\Pi(\mu,\nu)$ is weakly compact in $\mathcal P(\mathbb{R}^d\times\mathbb{R}^d)$. Moreover, the convex subset $\mathsf{M} (\mu,\nu)$ is weakly closed (hence weakly compact), see e.g.\ \cite[Theorem 7.12 (iv)]{Vi03}. By \cite[Theorem 3.7]{Ba97} we obtain the existence of a measurable kernel $x\mapsto \pi_x\in\mathcal{P}(\mathbb{R}^d)$ and a subsequence, still denoted by $(\pi^n)_n$, such that on a $\mu$-full set $$\textstyle\frac{1}{N}\sum_{n\leq N} \pi_x^n(dy)\to \pi_x(dy),$$ with respect to weak convergence in $\mathcal{P}(\mathbb{R}^d)$. In particular $\frac{1}{N}\sum_{n\leq N} \pi^n\to \pi$ in the weak topology in $\mathcal{P}(\mathbb{R}^d\times \mathbb{R}^d)$, where $\pi(dx,dy):=\mu(dx)\pi_x(dy)$. Since $\mathsf{M}(\mu,\nu)$ is closed, we have that $\pi\in\mathsf{M} (\mu,\nu)$. 
Finally, \begin{align*} WT& =\textstyle \lim_n \int\mu(dx) \sup_{q\in\Pi(\pi_x^n,\gamma^d)}\int q(dm,db)\,\, m\cdot b \\ & =\textstyle \lim_N \int\mu(dx) \frac{1}{N}\sum_{n\leq N}\sup_{q\in\Pi(\pi_x^n,\gamma^d)}\int q(dm,db)\,\, m\cdot b \\ & \leq\textstyle \lim_N \int\mu(dx) \sup_{q\in\Pi( \frac{1}{N}\sum_{n\leq N}\pi_x^n,\gamma^d)}\int q(dm,db)\,\, m\cdot b \\ & \leq\textstyle \int\mu(dx) \limsup_N\sup_{q\in\Pi( \frac{1}{N}\sum_{n\leq N}\pi_x^n,\gamma^d)}\int q(dm,db)\,\, m\cdot b \\ & \leq\textstyle \int\mu(dx) \sup_{q\in\Pi( \pi_x,\gamma^d)}\int q(dm,db)\,\, m\cdot b\, \,\,\leq WT\, . \end{align*} The first inequality holds by concavity of $\eta\mapsto H(\eta):= \sup_{q\in \Pi(\eta,\gamma^d)}\int q(dm,db)~m\cdot b$ w.r.t.\ convex combinations of measures. The second inequality is Fatou's lemma (in its limsup form), noticing that the integrand is bounded in $L^1(\mu)$ (the bound equals the sum of the second moments of $\nu$ and $\gamma^d$). The third inequality follows by weak convergence of the averaged kernel on a $\mu$-full set and upper semicontinuity of $H(\cdot)$. For uniqueness it suffices to notice that $H(\cdot)$ is actually strictly concave, which is an easy consequence of Brenier's Theorem. Hence, \eqref{static weak transport} is attained and we denote its unique optimizer by $\pi^*$. Taking $\pi^*$ we may build an optimizer $M^*$ for \eqref{MBMBB} as in the first part of the proof (as the values of the two problems agree). We finally establish the uniqueness of optimizers for \eqref{MBMBB}. Let $\tilde M$ be any such optimizer. From the previous considerations, we deduce that the law of $(\tilde M_0,\tilde M_1)$ is the unique optimizer $\pi^*$ of \eqref{static weak transport}. Conditioning on $\{\tilde M_0=x\}$ we thus have that $\tilde M$ connects $\delta_x$ to $\pi^*_x$. It follows that $\mu(dx)$-a.s.\ $\tilde M$ conditioned on $\{\tilde M_0=x\}$ is optimal between these marginals. 
Indeed, \begin{align}\textstyle \sup\limits_{N_t=x+\int_0^t \sigma_s\, dB_s,\,N_1\sim \pi^*_x} {\mathbb E}\left[\int_0^1 \mbox{tr}(\sigma_t)\, dt\right] = \sup\limits_{q\in\Pi(\pi^*_x,\gamma^d)}\int q(dm,db)\,\, m\cdot b\,\, , \label{eq aux conditioning} \end{align} by the results obtained so far, since if $\tilde M$ conditioned on $\{\tilde M_0=x\}$ were not optimal for the l.h.s.\ it could not deliver the equality $MT=WT$. So it suffices to show that the l.h.s.\ of \eqref{eq aux conditioning} is uniquely attained. But any candidate martingale $N$ with volatility $\sigma$ satisfies $\mathbb{E}[\int_0^1\text{tr}(\sigma_t)dt]= \mathbb{E}[ N_1\cdot B_1]$ (since here we can assume $B_0=0$). Hence, Brenier's Theorem implies that $ \tilde M_1= \nabla F^x(B_1)$ on $\{\tilde M_0=x\}$, for a convex function $F^x$. Since the optimal transport map $\nabla F^x$ is unique, and the martingale property determines the law of $\tilde M$ uniquely, we finally get $\tilde M=M^*$ in law. \end{proof} \begin{remark}\label{rem connection} The proof of Theorem \ref{lem inequality static dynamic} shows how to build the optimizer for \eqref{MBMBB} via the following procedure, making the statement $ M^*=G(\pi^*)$ in Theorem \ref{lem inequality static dynamic} (4) precise: \begin{enumerate} \item Find the unique optimizer $\pi^*$ of \eqref{static weak transport}. \item Find convex functions $F^x$ such that $\nabla F^x(\gamma^d)=\pi_x^*$. \item Define $M^x_t := \mathbb{E}[\nabla F^x(B_1)|B_t] = \int \nabla F^x(y+B_t)\gamma^d_{1-t}(dy)$. \item Take $X\sim \mu$ independent of $B$ and let $M_t:=M^X_t$. \end{enumerate} \noindent In particular, this proves Theorem \ref{ThmStretchedtoStandardIntro} in the introduction. \end{remark} We now establish further properties of the optimizer $M^*$ of \eqref{MBMBB}, which hold likewise in any number of dimensions. 
The first two of them will be important for the proofs of the results yet to come, namely that \eqref{MBMBB} obeys a dynamic programming principle and that $M^*$ is a strong Markov martingale. The final property, that $M^*$ is an ``optimal constant-speed'' interpolation between its marginals, is crucial for the interpretation of our martingale as an analogue of displacement interpolation in classical transport, and in particular proves Theorem \ref{MDisplacementIntro} in the introduction. For stopping times $0\leq\tau\leq T$ we define \begin{align}\label{intermediate def} V(\tau,T,\mu,\nu):=\sup_{\substack{M_r=X+\int_\tau^r \sigma_u\, dB_u,\, \tau\leq r\leq T \\ X\sim \mu,\,M_T\sim \nu}} {\mathbb E}\left[\int_\tau^T \mbox{tr}(\sigma_u)\, du\right], \end{align} so that $MT = V(0,1,\mu,\nu)$. \begin{lemma}[Dynamic programming principle]\label{lem DPP} For every stopping time $0\leq\tau \leq 1$ \begin{align}\label{eq DPP}\textstyle V(0,1,\mu,\nu) = \textstyle\sup\limits_{\substack{ M_s=M_0+\int_0^s \sigma_rdB_r \\ 0\leq s\leq \tau,\, M_0\sim\mu}}\left\{\mathbb{E}\left [ \int_0^\tau\text{tr}(\sigma_r)dr\right ]+ V(\tau,1,\mathrm{law}(M_\tau),\nu) \right\}, \end{align} with the convention that $\sup \emptyset = -\infty$. In particular if $M^*$ is optimal for $V(0,1,\mu,\nu)$, then: \begin{enumerate} \item $M^*|_{[\tau,1]}$ is optimal for $V(\tau,1,\mathrm{law}(M^*_\tau),\nu)$, \item $M^*|_{[0,\tau]}$ is optimal for $V(0,\tau,\mu,\mathrm{law}(M^*_\tau))$, \item A.s.\ we have $\mathrm{law}(M^*_1|M^*_s,s\leq\tau)= \mathrm{law}(M^*_1|M^*_\tau)$. \end{enumerate} \end{lemma} \begin{proof} Obviously the l.h.s.\ of \eqref{eq DPP} is smaller than the r.h.s.\ of \eqref{eq DPP}. 
Now take $M^1$ feasible for the r.h.s.\ (so that $M^1$ is adapted to a filtration $\{\mathcal F^1_{s\wedge\tau}\}_{s\geq 0}$, $B$ is a Brownian motion on $[0,\tau]$ adapted to it, and $dM^1= \sigma^{(1)}dB$). Let $M^2$ be optimal for $V(\tau,1,\mathrm{law}(M^1_\tau),\nu) $. By Remark \ref{rem connection} we may build $M^2$ from the starting distribution $M^1_\tau$ and the filtration $\mathcal F^2$ generated by this random variable and a Brownian motion $W$ independent of $\mathcal F^1$ (and hence of $M^1_\tau$), so $dM^2=\sigma^{(2)}dW$. We then build a continuous martingale $M$ on $[0,1]$ by setting it to $M^1$ on $[0,\tau]$ and $M^2$ on $(\tau,1]$, obtaining easily that $$\textstyle\mathbb{E}\left [ \int_0^\tau\text{tr}(\sigma^{(1)}_r)dr\right ]+ V(\tau,1,\mathrm{law}(M^1_\tau),\nu)=\mathbb{E}\left [ \int_0^\tau\text{tr}(\sigma^{(1)}_r)dr + \int_\tau^1\text{tr}(\sigma^{(2)}_r)dr\right ] .$$ Observing that $\tilde{B}_s=1_{[0,\tau]}(s)B_s+ 1_{(\tau,1]}(s)[B_\tau+W_s-W_\tau]$ is a Brownian motion for the concatenation of the filtrations $\mathcal F^1$ and $\mathcal F^2$, and that $dM=(1_{[0,\tau]}(s)\sigma^{(1)}_s + 1_{(\tau,1]}(s)\sigma^{(2)}_s)\, d\tilde B $, we see that the r.h.s.\ above is the cost of $M$ as a martingale starting at $\mu$ and ending at $\nu$, and so is smaller than $V(0,1,\mu,\nu)$. Let $M^*$ be optimal for $V(0,1,\mu,\nu)$. Using \eqref{eq DPP} it is straightforward to show Points $(1)$-$(2)$. But from this it follows that $M^*|_{[0,\tau]}$ is optimal for the r.h.s.\ of \eqref{eq DPP}. This, Point $(1)$, and the arguments in the previous paragraph show how to stitch together $M^*|_{[0,\tau]}$ and $M^*|_{[\tau,1]}$ to produce an optimizer $M$ for $V(0,1,\mu,\nu)$. But this must then coincide with $M^*$, by uniqueness. 
On the other hand $M_1$ is defined via $M^*_\tau$ and a Brownian motion independent of $\{M^*_s:s\leq \tau\}$, so $\mathrm{law}(M_1|M^*_s,s\leq\tau)= \mathrm{law}(M_1|M^*_\tau)$ and we conclude. \end{proof} { \begin{proposition} \label{disp mart interpolation} Let $M^*$ be the optimizer of \eqref{MBMBB} and set $$[\mu,\nu]^M_t:=\mathrm{law}(M^*_t).$$ Then $\mathrm{law}(M^*_0,M^*_t)$ is optimal for \eqref{static weak transport} between the marginals $\mu$ and $[\mu,\nu]^M_t$. Similarly, the optimizer of \eqref{MBMBB} between the same marginals is the time-changed martingale $s\in[0,1]\mapsto M^*_{st}$. Finally, for $0\leq r\leq t\leq 1$, we have \begin{align}\label{eq disp mart interpolation} WT(\,[\mu,\nu]^M_r\,,\,[\mu,\nu]^M_t)= MT(\,[\mu,\nu]^M_r\,,\,[\mu,\nu]^M_t)=\sqrt{t-r}\,MT(\mu,\nu) =\sqrt{t-r}\,WT(\mu,\nu). \end{align} \end{proposition} \begin{proof} We use the notation of Remark \ref{rem connection} and write $M^*_t=M^X_t=f^X_t(B_t)$ where $$\textstyle f^x_t(\cdot):= \int \nabla F^x(b+\cdot)\gamma^d_{1-t}(db).$$ Since $[\mu,\nu]^M_t=f_t^X(\sqrt{t}B_1)$, it is not difficult to see that $$N^*_s:=\mathbb{E}[f_t^X(\sqrt{t}B_1)|{\mathcal F}^B_s]=f_{st}^X(\sqrt{t}B_s),$$ is the optimizer of \eqref{MBMBB} from $\mu$ at $s=0$ to $[\mu,\nu]^M_t$ at $s=1$. Of course $N^*$ coincides (in law) with the time-changed martingale $s\mapsto M^*_{st}$, and by Theorem \ref{lem inequality static dynamic} we get the optimality of $\mathrm{law}(M^*_0,M^*_t)$. We next remark that $J(f^X_s)(B_s)$ is a matrix-valued martingale, where $J$ stands for the Jacobian, as can easily be seen from the convolution structure or from PDE arguments. Thus $\mathbb{E}[J(f_s^X)(B_s)]=\mathbb{E}[J(f_{st}^X)(\sqrt{t}B_s)]$. 
To identify the ``$\sigma$'' of $N^*$ and $M^*$ we observe that \begin{align*} dN^*_s &= \sqrt{t}J(f^X_{st})(\sqrt{t}B_s)dB_s,\\ dM^*_s & = J(f^X_s)(B_s)dB_s, \end{align*} by It\^o's formula. Putting everything together we find \begin{align*} \textstyle \mathbb{E}\left [\int_0^1 \sqrt{t}\,\mbox{tr}\big(J(f^X_{st})(\sqrt{t}B_s)\big)ds \right ]&=\textstyle \sqrt{t} \int_0^1 \mathbb{E}\left [ \mbox{tr}\big(J(f^X_{st})(\sqrt{t}B_s)\big)\right ]ds \\ & = \textstyle \sqrt{t} \int_0^1 \mathbb{E}\left [ \mbox{tr}\big(J(f^X_{s})(B_s)\big)\right ]ds \\ & = \textstyle \sqrt{t} \mathbb{E}\left [ \int_0^1\mbox{tr}\big(J(f^X_{s})(B_s)\big)ds \right ] , \end{align*} and again by Theorem \ref{lem inequality static dynamic} we get $$ MT([\mu,\nu]^M_0\,,\,[\mu,\nu]^M_t)=\sqrt{t}\,MT(\mu\,,\,\nu) .$$ The general case of \eqref{eq disp mart interpolation} follows similarly. \end{proof} Since Proposition \ref{disp mart interpolation} shows that $\mathrm{law}(M^*_0,M^*_t)$ is optimal for \eqref{static weak transport} between its marginals, Point $(3)$ of Lemma \ref{lem DPP} immediately implies \begin{corollary}\label{strong Markov} The unique optimizer $M^*$ of \eqref{MBMBB} has the strong Markov property. \end{corollary} \begin{remark} The identities \eqref{eq disp mart interpolation}, at least for the continuous-time problems, have been obtained in \cite[Remark 4.1]{HuTr17} in a more general setting, via a scaling argument. The interpretation of \eqref{eq disp mart interpolation} is clear: our optimal martingale is a constant-speed geodesic when distance is measured w.r.t.\ the square of our cost functional. \end{remark} } \section{Main results in dimensions one and two} \label{sec main dim 1 and 2} In this section we study finer structural properties of the unique optimizer of \eqref{MBMBB} established in the previous section. We obtain a full description in dimension one, a full description in dimension two under the additional Assumption \ref{ass negligible boundary}, and a partial description in general dimensions. 
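To illustrate the construction of Remark \ref{rem connection} and the scaling identity \eqref{eq disp mart interpolation} numerically, here is a minimal Monte Carlo sketch in a toy example of our own choosing (it does not appear in the text): $\mu=\delta_0$ and $\nu=\mathrm{law}(B_1^3+B_1)$, so that $\nabla F(b)=b^3+b$ is increasing (hence the gradient of a convex function), $M_t=\mathbb{E}[\nabla F(B_1)\,|\,B_t]=B_t^3+(4-3t)B_t$, and $MT(\mu,\nu)=WT(\mu,\nu)=\mathbb{E}[B_1^4+B_1^2]=4$.

```python
import math
import random

# Toy example (ours, for illustration only): mu = delta_0 and
# nu = law(grad F(B_1)) with grad F(b) = b^3 + b, an increasing map and
# hence the gradient of a convex F.  The stretched Brownian motion is
#   M_t = E[grad F(B_1) | B_t] = B_t^3 + (4 - 3t) B_t,
# and MT(mu, nu) = WT(mu, nu) = E[B_1^4 + B_1^2] = 3 + 1 = 4.
random.seed(0)
N = 200_000
g = [random.gauss(0.0, 1.0) for _ in range(N)]

# WT via the comonotone (one-dimensional Brenier) coupling of nu with gamma^1:
wt = sum((x ** 3 + x) * x for x in g) / N  # Monte Carlo estimate of 4

# Scaling identity: WT(mu, law(M_t)) = sqrt(t) * WT(mu, nu).
# law(M_t) is the push-forward of N(0,1) under
#   G -> t^{3/2} G^3 + (4 - 3t) sqrt(t) G,
# again increasing, so the comonotone coupling with gamma^1 is optimal:
t = 0.25
wt_t = sum((t ** 1.5 * x ** 3 + (4 - 3 * t) * math.sqrt(t) * x) * x for x in g) / N

print(round(wt, 1), round(wt_t, 1))  # close to 4 and sqrt(0.25) * 4 = 2
```

Since all maps involved are increasing, the suprema over couplings in \eqref{static weak transport} are attained by the comonotone coupling, which is what the sums above compute.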
\subsection{The one-dimensional case} \label{sec main dim 1} Let $\mu\preceq_c \nu$ be probability measures on the line with finite second moment. For a measure $\alpha$ on $\mathbb{R}$ and $x\in \mathbb{R}$ we write $u_\alpha(x):=\int |x-y|\, d\alpha(y)$. The convex order relation $\mu\preceq_c \nu $ is equivalent to $u_\mu\leq u_\nu$. We recall from \cite[Appendix A.1]{BeJu16} that the ``irreducible components of $(\mu,\nu)$'' are determined by the (unique) family of open disjoint intervals $\{I_k\}_{k\in\mathbb{N}}$ whose union equals the open set $$\textstyle \{u_\mu<u_\nu\}=\left\{ x\in\mathbb{R}: \int |x-y|\,d(\nu-\mu)(y)\neq 0\right\}.$$ One can then decompose $$\textstyle\mu=\eta+\sum_k \mu_k\,\,\,\,\,\,\mbox{ and }\,\,\,\,\,\,\nu=\eta+\sum_k \nu_k,$$ where $\mu_k=\mu|_{I_k}$, with $I_k=\{u_{\mu_k}<u_{\nu_k}\} $ and $\nu_k(\overline{I_k})=\mu_k(I_k)$, whereas $\eta$ is concentrated on $\mathbb{R}\backslash \bigcup_k I_k$. A useful and straightforward observation is that every martingale coupling from $\mu$ to $\nu$ (i.e.\ $\pi\in \mathsf{M}(\mu,\nu)$) is fully characterized by how it looks on the sets $I_k\times \overline{I_k}$. The restrictions $\pi_k:= \pi_{|I_k\times \overline{I_k}}= \pi_{|I_k\times \mathbb{R}}$ are still martingale couplings (in the sense that their respective disintegrations satisfy $\int y\, (\pi_k)_x(dy)=x$ for $\mu_k$-a.a.\ $x$) but with total mass $\mu_k(I_k)$ and marginals $\mu_k, \nu_k$. We can now state our main result for $d=1$, characterizing the structure of stretched Brownian motion. \begin{theorem}\label{MainTheoremOneDim} Let $\mu\preceq_c \nu$ be probability measures on the line with finite second moment. A candidate martingale $M$ is an optimizer of \eqref{MBMBB} if and only if it is a \emph{standard stretched Brownian motion} on each irreducible component $(\mu_k, \nu_k)$ of $(\mu,\nu)$. 
In particular, \emph{stretched Brownian motion} (sBm) is a \emph{standard stretched Brownian motion} (s$^2$Bm) in each irreducible component. \end{theorem} Let us explain the terminology used here. Saying that $M$ is s$^2$Bm\ on the irreducible components of $(\mu,\nu)$ concretely means that, conditionally on $M_0\in I_k$, $M$ is a s$^2$Bm\ from $\frac{1}{\mu_k(I_k)}\mu_k$ to $\frac{1}{\mu_k(I_k)}\nu_k$. We stress that in the present one-dimensional case Theorem \ref{MainTheoremOneDim} is significantly stronger than Theorem \ref{ThmStretchedtoStandardIntro}. We now prove the fact, first mentioned in the introduction, that in dimension one the transition kernel of stretched Brownian motion is Lipschitz: \begin{corollary}\label{coro LM} Let $\mu\preceq_c \nu$ be probability measures on the line with finite second moment, and let $M^*$ be the unique stretched Brownian motion from $\mu$ to $\nu$. Then the kernel $$x\mapsto \pi_x^*:=\mathrm{law}(M^*_1|M^*_0=x),$$ has the Lipschitz property: $\mathcal W_1(\pi^*_x,\pi^*_{x'})\leq |x-x'|$. \end{corollary} \begin{proof} By Theorem \ref{MainTheoremOneDim}, $M^*$ is a s$^2$Bm\ in each irreducible component. Assume first that $x<x'$ and that they belong to the same component. Conditioning on starting in this component, we can write $M^*_t={\mathbb E}[f(B_1)|B_t]$, with $f$ increasing and $B$ a Brownian motion with some starting law. Choose $y,y'$ such that $${\mathbb E}^y[f(B_1)]=x< x' = {\mathbb E}^{y'}[f(B_1 )]$$ and observe that this implies $y<y'$ since $f$ is increasing. This in turn implies that for a Brownian motion $B^0$ starting in zero the random vector $$(\,f(B_1^0+y)\,,\,f(B_1^{0} +y')\,),$$ is ordered and has marginals $\pi^*_x$ and $\pi^*_{x'}$. Hence \begin{align*} \mathcal{W}_1(\pi^*_x,\pi^*_{ x'})&\leq {\mathbb E}[\,|\, f(B_1^{0}+ y')- f(B_1^0+y) \,|\,]= {\mathbb E}^{ y'}[f(B_1)]-{\mathbb E}^y[ f(B_1) ]= x' - x. 
\end{align*} On the other hand, if $x,x'$ are not in the same component, we let $f$ and $g$ denote the increasing functions associated with the representations in terms of s$^2$Bm's. If $x<x'$ then the range of $f$ lies below the range of $g$, and we conclude much as in the above display. \end{proof} We now proceed towards the subtler extension of Theorem \ref{MainTheoremOneDim} to $d=2$. We omit the proof of Theorem \ref{MainTheoremOneDim} itself, since it is easily derived from the two-dimensional considerations (with less effort and without the additional assumptions). \subsection{Preliminaries}\label{sec preliminaries} We briefly discuss some aspects of the decomposition of martingale couplings in arbitrary dimensions; later this will mostly be used in dimension two. After this, we also provide an analytical result of great importance for the subsequent sections. \begin{definition} A convex paving $\mathcal{C}$ is a collection of disjoint relatively open convex sets from $\mathbb{R}^d$. Denoting $\bigcup \mathcal{C} := \bigcup_{C\in\mathcal{C}}C$, we will always assume $\mu(\bigcup \mathcal{C})=1$ for such objects. For $x\in \bigcup\mathcal{C}\subset\mathbb{R}^d$ we denote by $C(x)$ the unique element of $\mathcal{C}$ which contains $x$. We say that $\mathcal{C}$ is measurable (resp.\ $\mu$-measurable, universally measurable) if the function $x\mapsto \overline{C(x)}$ is Borel measurable (resp.\ $\mu$-measurable, universally measurable) as a map from $\mathbb{R}^d$ to the Polish space of all closed (convex) subsets of $\mathbb{R}^d$ equipped with the \textit{Wijsman} topology\footnote{The Wijsman topology on the collection of all closed subsets of a metric space $(\mathsf{X},d)$ is the weak topology generated by $\{ \mbox{dist}(x,\cdot) : x\in\mathsf{X}\} $, cf.\ \cite{DMTo17}.}. \end{definition} \begin{definition} Let $\Gamma\subset \mathbb{R}^d\times \mathbb{R}^d$ and $\pi\in \mathsf{M} (\mu,\nu) $. 
We say that a convex paving $\mathcal{C}$ is \begin{itemize} \item $\pi$-invariant if $\pi_x(\, \overline{C(x)} \,)=1$ for $\mu$-a.e.\ $x$, \item $\Gamma$-invariant if $\mbox{ri}(\Gamma_x)\subset C(x)$ for all $x\in \mbox{proj}_1(\Gamma)$. \end{itemize} \end{definition} Note that a natural order between convex pavings $\mathcal{C},\mathcal{C}'$ is given by $$\mathcal{C}\leq_\mu\mathcal{C}'\,\,\iff\,\, C(x)\subset C'(x)\,\, \mbox{ for }\mu\mbox{-a.e.\ }x,$$ in which case we say that $\mathcal{C}$ is finer than $\mathcal{C}'$ (and the latter is coarser than the former). The following two theorems are shown in \cite{GhKiLi16b,DMTo17,ObSi17}. \begin{theorem}[Ghoussoub-Kim-Lim \cite{GhKiLi16b}] Given $\pi\in \mathsf{M} (\mu,\nu)$ and $\Gamma\subset \mathbb{R}^d\times \mathbb{R}^d$ a martingale support for $\pi$, there is a finest $\Gamma$-invariant convex paving. We denote it by $\mathcal{C}_{\pi,\Gamma}$. \end{theorem} \begin{theorem}[De March-Touzi \cite{DMTo17}, Ob{\l}{\'o}j-Siorpaes \cite{ObSi17}] \label{thm DmTOS}There is a finest convex paving, denoted $\mathcal{C}_{\mu,\nu}$, which is $\pi$-invariant for all $\pi\in \mathsf{M} (\mu,\nu)$ simultaneously. Writing $\mathcal{C}_{\mu,\nu}=\{C_{\mu,\nu}(x)\}_{x\in \mathbb{R}^d}$, the function $x\mapsto C_{\mu,\nu}(x)$ is universally measurable. \end{theorem} If we knew that these two convex pavings coincided, this would streamline some of our proofs. For $d=1$ they do coincide, but already for $d=2$ this can fail. We will actually use another convex paving which incorporates ideas and properties from the above two. \begin{lemma}\label{lem existence paving} Given $\pi\in \mathsf{M} (\mu,\nu)$ there is a finest measurable $\pi$-invariant convex paving, which we denote $\mathcal{C}_\pi$. \end{lemma} This can be established by a close reading of \cite{DMTo17}, adapting the arguments therein (of course \cite{DMTo17} achieves much more!). 
We give a self-contained, shorter argument under the following additional hypothesis, which will also appear in Section \ref{sec main dim 2}. \begin{assumption}\label{ass negligible boundary}For all $\pi\in \mathsf{M} (\mu,\nu)$ and all convex pavings $\mathcal{C}$ we have $$ \pi_x(\,\overline{C(x)}\,)=1 \, \mu\mbox{-a.s.}\,\,\Rightarrow \pi_x(C(x))=1\,\mu\mbox{-a.s.}$$ In particular, for such $\mathcal{C}$ and $\pi$, $\mathcal{C}$ is $\pi$-invariant iff $\pi_x(C(x))=1$ $\mu$-a.s. \end{assumption} \begin{proof}[Proof of Lemma \ref{lem existence paving} under Assumption \ref{ass negligible boundary}] Inspired by \cite{DMTo17}, we introduce the optimization problem $$\textstyle \inf \{ \int\mu(dx)\,G(C(x)):\,\mathcal{C} \mbox{ is a $\pi$-invariant measurable convex paving} \},$$ where $G(C):= \mbox{dim}(C)+g_C(C)$ and $g_C$ is the standard Gaussian measure on $\mbox{aff}(C)$, i.e.\ the one obtained from the $\mbox{dim}(C)$-dimensional Lebesgue measure on $\mbox{aff}(C)$. Let $\mathcal{C}^n$ be an optimizing sequence of $\pi$-invariant convex pavings and let $\Omega$ be a set of $\mu$-full measure on which we have $\pi_x(\,\overline{C^n(x)}\,)=1$ for all $n$ (here $C^n(x)$ denotes an element of $\mathcal{C}^n$). Introduce for $x\in \Omega$ the relatively open convex sets $C_\pi(x):=\text{rf}_x\left( \bigcap C^n(x) \right )$. We have\footnote{Recall that $A\subset A'\Rightarrow \text{rf}_a\, A \subset \text{rf}_a\, A'$, that $a\in A\iff a\in\text{rf}_a(A)$ and that $\text{rf}_a(A)=\text{ri}\,A\iff a\in \text{ri}\,A $. } $x\in C_\pi(x)$ since $x\in \bigcap C^n(x) $. Moreover, $\mathcal{C}_\pi:=\{C_\pi(x):x\in\Omega\}$ forms a partition since already $\{\bigcap C^n(x):x\in\Omega\}$ is a partition. Let us establish that $\pi_x(\overline{C_\pi(x)})=1$. We first assume that \begin{align} \label{eq claim 1} \forall \,K\,\,\text{convex}:\,\,\text{ri}\,\overline{\text{co}}\,\text{supp}\,\pi_x\subset K\Rightarrow \pi_x\left(\overline{\text{rf}_xK}\right)=1. 
\end{align} Let us take $K:=\bigcap C^n(x)$. Since $\overline{C^n(x)}$ is closed, convex and satisfies $\pi_x(\overline{C^n(x)})=1$ we have $\overline{\text{co}}\,\text{supp}\,\pi_x\subset \overline{C^n(x)}$. On the other hand, $\overline{\text{co}}\,\text{supp}\,\pi_x$ cannot be contained in $\partial C^n(x)$ since by Assumption \ref{ass negligible boundary} we have $\pi_x(\partial C^n(x))=0$. By \cite[Corollary 6.5.2]{Ro70} we must then have $\text{ri}\,\overline{\text{co}}\,\text{supp}\,\pi_x\subset\text{ri}\, C^n(x)=C^n(x)$ for all $n$, so $\text{ri}\,\overline{\text{co}}\,\text{supp}\,\pi_x\subset\bigcap C^n(x)=K$. By \eqref{eq claim 1} we get $ \pi_x\left(\overline{\text{rf}_xK}\right)= \pi_x\left(\overline{C_\pi(x)}\right)=1$ as desired. All in all, $\mathcal{C}_\pi$ is a $\pi$-invariant convex paving, and since $C_\pi(x)\subset C^n(x)$ we find $\int\mu(dx)G(C_\pi(x))\leq \int\mu(dx)G(C^n(x))$, from which we get the optimality of $\mathcal{C}_\pi$. To finish the proof, let us establish \eqref{eq claim 1}. By the martingale property we easily see\footnote{Let $m=\text{dim}(\overline{\text{co}}\,\text{supp}\,\pi_x)$ and suppose $x\in\partial( \overline{\text{co}}\,\text{supp}\,\pi_x)$. We can then find an $(m-1)$-dimensional supporting hyperplane through $x$ with $\overline{\text{co}}\,\text{supp}\,\pi_x$ contained in one of the associated closed half-spaces. By the martingale property one obtains that $\text{supp}\,\pi_x$, and then $\overline{\text{co}}\,\text{supp}\,\pi_x$ too, must actually be contained in the hyperplane itself. Thus $\text{dim}(\overline{\text{co}}\,\text{supp}\,\pi_x) \leq m-1$, yielding a contradiction.} that $x\in \text{ri}\,\overline{\text{co}}\,\text{supp}\,\pi_x$. From this, $\text{ri}\,\overline{\text{co}}\,\text{supp}\,\pi_x =\text{rf}_x\,\left( \text{ri}\,\overline{\text{co}}\,\text{supp}\,\pi_x\right )\subset \text{rf}_x\,K$. 
Hence $\overline{\text{ri}\,\overline{\text{co}}\,\text{supp}\,\pi_x} \subset \overline{ \text{rf}_x\,K}$, whose l.h.s.\ equals $\overline{\text{co}}\,\text{supp}\,\pi_x$ by \cite[Theorem 6.3]{Ro70}, so \eqref{eq claim 1} follows. \end{proof} \begin{remark} The same proof, modulo obvious changes, establishes the existence of a finest measurable convex paving invariant for all $\pi\in \mathsf{M} (\mu,\nu)$ simultaneously. This however does not establish the existence of a maximally spreading martingale coupling as in \cite{DMTo17}. \end{remark} Here is a sufficient criterion for Assumption \ref{ass negligible boundary} to hold. \begin{lemma}\label{lem sufficiency for negligible boundary} Assumption \ref{ass negligible boundary} is satisfied if $d\in \{ 1,2\}$ and $\nu\ll\lambda^d$. \end{lemma} \begin{proof} This follows by arguments similar to those in \cite[Lemma C.1]{GhKiLi16b}. We omit the details. \end{proof} A direct consequence of Theorem \ref{thm DmTOS} and Assumption \ref{ass negligible boundary} is the decomposition of a martingale coupling into irreducible components. Notice the resemblance to the one-dimensional case explained in Section \ref{sec main dim 1}. \begin{proposition}\label{prop martingale decomposition} Let $\mathcal{C}_{\mu,\nu}=\{C_{\mu,\nu}(x)\}_{x\in\mathbb{R}^d}$ be the convex paving of Theorem \ref{thm DmTOS} and suppose that Assumption \ref{ass negligible boundary} holds. 
Then \begin{itemize} \item[(i)] we may decompose $$\textstyle \mu=\int \mu(~\cdot~ |K) dC_{\mu,\nu}(\mu)(K), \mbox{ and }\, \nu = \int \nu(~\cdot~ |K) dC_{\mu,\nu}(\mu)(K), $$ with $\mu(~\cdot~|K)\preceq_c \nu(~\cdot~|K)$ for $C_{\mu,\nu}(\mu)$-a.e.\ $K$; \item[(ii)] for any martingale coupling $\pi\in \mathsf{M} (\mu,\nu)$ we have that $$\pi(~\cdot~|K\times K) = \pi(~\cdot~|K\times \mathbb{R}^d\,)\, \mbox{ for } \, C_{\mu,\nu}(\mu)\mbox{-a.e.\ } K, $$ and this common measure has first and second marginals equal to $\mu(~\cdot~| K)$ and $\nu(~\cdot~| K)$ respectively; \item[(iii)] any martingale coupling $\pi\in \mathsf{M} (\mu,\nu)$ can be uniquely decomposed as $$\textstyle \pi = \int \pi(~\cdot~|K\times K)dC_{\mu,\nu}(\mu)(K).$$ \end{itemize} \end{proposition} \noindent The proof is just as in \cite[Appendix A.1]{BeJu16}, but simpler, thanks to the fact that under Assumption \ref{ass negligible boundary} \emph{martingales started on two neighbouring cells will not go on to reach the intersection of the boundaries of the cells}. We thus omit the proof. We finally present a technical lemma which will be extremely useful in the proofs of the main results in dimension two. \begin{lemma}\label{lem technical bla} Let $\eta$ be a probability measure on $\mathbb{R}^d$ with finite second moment, and let ${F}:\mathbb{R}^d\to \mathbb{R}$ be convex with $\nabla {F}(\gamma^d)=\eta$. Denote $V:=\text{aff}(\text{supp}(\eta))$ and let $P$ be the orthogonal projection onto $V$. Then there exists a convex function $\tilde F:V\to\mathbb{R}$ such that $\gamma^d$-a.s.\ $\nabla {F} = \nabla \tilde F\circ P$. For all $s>0$, the function $$\textstyle \mathbb{R}^d\ni b\mapsto f_s(b):=\int \nabla F(b+y)\gamma^d_{s}(dy) = \int \nabla \tilde F( Pb+z)P(\gamma^d_{s})(dz) \in \mathbb{R}^d,$$ has the following properties: \begin{enumerate} \item It is infinitely differentiable. \item Restricted to $V$, it is one-to-one. 
\item $\overline{f_s(\mathbb{R}^d)}=\overline{\text{co}}\,\nabla F(\mathbb{R}^d)$. \item $f_s(\gamma^d)$ is equivalent to the $m$-dimensional Lebesgue measure on $V$ restricted to ${\text{co}}\,\nabla F(\mathbb{R}^d)$, where $m=\text{dim}(V)$. \item $\text{supp}(f_s(\gamma^d_t))=\overline{\text{co}}\,\nabla F(\mathbb{R}^d)$ is convex and depends neither on $s>0$ nor on $t>0$. \end{enumerate} \end{lemma} \begin{proof} The $\gamma^d$-a.s.\ equality $\nabla {F} = \nabla \tilde F\circ P$ follows from Brenier's Theorem by taking $\nabla\tilde F$ mapping $P(\gamma^d)$ into $\eta$ and observing that $\nabla (\tilde F\circ P)=P((\nabla \tilde F )\circ P)=(\nabla \tilde F )\circ P$. Point (1) follows by change of variables and differentiation under the integral sign. Alternatively, one can argue via the classical backwards heat equation. Points (2), (3) and (5) follow from the full-support property of $\gamma^d$ in $\mathbb{R}^d$ and of $P(\gamma^d)$ in $V$. Point (4) is trivially true if $\eta$ is a Dirac delta (then $m=0$). Otherwise it suffices to consider the smooth function $V\ni v\mapsto\tilde{f}_s(v):= \int \nabla \tilde F( v+z)\tilde \gamma(dz)$, with $\tilde \gamma =P(\gamma^d_{s})$, and to prove that $\tilde{f}_s(\tilde \gamma) \sim \lambda_V|_{{\text{co}}\,\nabla F(\mathbb{R}^d)}$, where the latter denotes the $m$-dimensional Lebesgue measure on $V$ restricted to ${\text{co}}\,\nabla F(\mathbb{R}^d)$. Since $\tilde \gamma\sim \lambda_V$, we have by \cite[Theorem 4.8(i)]{Vi03} that $\lambda_V$-a.e.\ the Jacobian of $\tilde{f}_s$ is invertible. 
By the change of variables formula, it is easy to obtain that $\tilde f_s(\tilde \gamma)\ll \lambda_V$, and the previous observation together with the Monge-Amp\`ere equation \cite[Theorem 4.8(iii)]{Vi03} yields \begin{align} \lambda_V\text{-a.e.}\ r:\,\,\,\, \frac{d \tilde f_s(\tilde \gamma)}{d \lambda_V}(r) & = \left |\text{det}\left((J\tilde f_s)^{-1}(r)\right )\right | \frac{d\tilde \gamma}{d \lambda_V}\left( (\tilde f_s)^{-1}(r)\right) {\bf 1}_{\tilde f_s(V)}(r). \end{align} By Point (3), $\tilde f_s(V)={\text{co}}\,\nabla F(\mathbb{R}^d) $, and so we conclude $\tilde f_s(\tilde \gamma)\sim \lambda_V|_{{\text{co}}\,\nabla F(\mathbb{R}^d)}$ since under the latter measure the density $\frac{d \tilde f_s(\tilde \gamma)}{d \lambda_V|_{{\text{co}}\,\nabla F(\mathbb{R}^d)}}$ is a.e.\ non-vanishing. \end{proof} \subsection{The two-dimensional case} \label{sec main dim 2} Our first main result for $d=2$ is a characterization of the structure of sBm, providing a significantly strengthened version of Theorem \ref{ThmStretchedtoStandardIntro} in the introduction. \begin{theorem}\label{thm main} Let $\mu\preceq_c\nu$ be probability measures in $\mathbb{R}^2$ with finite second moments. Suppose $\nu\ll\lambda^2$, and let $M^*$ be the unique optimizer for \eqref{MBMBB}. Set $\pi^t=\mathrm{law}(M^*_0,M^*_t)$ for $0<t<1$. Then the stretched Brownian motion $M^*$ is a standard stretched Brownian motion on each cell of $\mathcal{C}_{\pi^t}$. \end{theorem} The second main result of this part is the optimality of s$^2$Bm\ whenever it can be built with respect to the coarser convex paving $\mathcal{C}_{\mu,\nu}$. 
Our proof of this result relies on the simplifying Assumption \ref{ass negligible boundary}, which as seen in Lemma \ref{lem sufficiency for negligible boundary} is \emph{verified in dimension two under the further requirement that $\nu$ be absolutely continuous}. We therefore place this result here, although in principle it is valid in arbitrary dimensions. \begin{theorem}\label{thm converse} Under Assumption \ref{ass negligible boundary}, if $M$ is a standard stretched Brownian motion on each cell\footnote{This means that for $C_{\mu,\nu}(\mu)$-a.e.\ $K$, the conditioning of $M$ on $\{M_0\in K\}$ is a standard stretched Brownian motion between the marginals $\mu(\cdot\,|K)$ and $\nu(\cdot\,|K)$ introduced in Proposition \ref{prop martingale decomposition}.} of the convex paving $\mathcal{C}_{\mu,\nu}$, then it is optimal for \eqref{MBMBB} (i.e.\ it is a sBm). \end{theorem} \begin{remark} The difference between Theorem \ref{ThmStandardtoStretchedIntro} and Theorem \ref{thm converse} is as follows: the first result says that standard stretched Brownian motion is optimal in its own right, whereas the second statement allows for more freedom in that the convex function in the definition of stretched Brownian motion may be chosen depending on the cell of $\mathcal{C}_{\mu,\nu}$. Therefore this result is a strengthened version of Theorem \ref{ThmStandardtoStretchedIntro}. \end{remark} \begin{remark} For dimension one ($d=1$), Theorem \ref{MainTheoremOneDim} establishes the existence of standard stretched Brownian motion and characterizes it as the sole optimizer. Both existence and optimality are understood with respect to the same (countable) convex paving. 
For two dimensions ($d=2$), Theorems \ref{thm main} and \ref{thm converse} and Lemma \ref{lem sufficiency for negligible boundary} establish, under the assumption that $\nu\ll\lambda^2$, the existence and optimality characterization of standard stretched Brownian motion. In this case, however, existence and optimality are understood with respect to potentially different convex pavings. \end{remark} The proofs of these results are deferred to Section \ref{sec pending proofs}. Theorem \ref{thm main} relies crucially on a monotonicity principle which we now establish and which seems of independent interest. { \section{A monotonicity principle for weak optimal transport problems}\label{sec mono} For this part only, we adopt a more general setting. Let $\mathsf X,\mathsf Y$ be Polish spaces and let $C:\mathsf X\times {\mathcal P}(\mathsf Y)\to \mathbb{R}\cup\{+\infty\}$ be Borel measurable. Consider for $\mu\in {\mathcal P}(\mathsf X),\nu\in {\mathcal P}(\mathsf Y)$ the optimization problem \begin{align}\label{eq defi gen Gozlan} \inf_{\pi\in\Pi(\mu,\nu)}\int_{\mathsf X}\mu(dx)C(x,\pi_x). \end{align} \noindent This is a weak (i.e.\ non-linear) transport problem in the sense of \cite{GoRoSaTe14,GoRoSaTe15} and the references therein. We now obtain a ``monotonicity principle'' for this problem, i.e.\ a finitistic ``zeroth-order'' necessary optimality condition. \begin{proposition} \label{prop monotonicity general} Suppose that \begin{itemize} \item Problem \eqref{eq defi gen Gozlan} is finite with optimizer $\pi$; \item $C$ is jointly measurable; \item $\mu(dx)$-a.e.\ the function $C(x,\cdot)$ is convex and lower semicontinuous. 
\end{itemize} Then there exists a Borel set $\Gamma\subset \mathsf X$ with $\mu(\Gamma)=1$ and the following property: \begin{center} if $x,x'\in \Gamma$ and $m_x,m_{x'}\in {\mathcal P}(\mathsf Y)$ satisfy $m_x+m_{x'}=\pi_x+\pi_{x'}$, then\end{center} $$C(x,\pi_x)+C(x',\pi_{x'})\leq C(x,m_x)+C(x',m_{x'}).$$ \end{proposition} \begin{proof} Let $$ {\mathcal D}:=\left\{ \left (\,(x,x'),( m_1,m_2)\,\right )\in \mathsf X^2\times {\mathcal P}(\mathsf Y)^2:\, \begin{array}{l} m_1+m_{2}=\pi_x+\pi_{x'}\,\text{, and }\\C(x,\pi_x)+C(x',\pi_{x'})>C(x,m_1)+C(x',m_{2}) \end{array} \right\},$$ which is an analytic set. By the Jankov-von Neumann uniformization theorem \cite[Theorem 18.1]{ke95} there is an analytically measurable function $$D:=\text{proj}_{\mathsf X^2}({\mathcal D})\ni (x,x') \mapsto (m^{(x,x')}_1,m^{(x,x')}_{2})\in {\mathcal P}(\mathsf Y)^2,$$ such that $(x,x',m^{(x,x')}_1,m^{(x,x')}_{2})\in {\mathcal D}$. Since $(x,x',m_1,m_2)\in {\mathcal D}\iff (x',x,m_2,m_1)\in {\mathcal D}$, one can check that we may actually assume that \begin{align}\label{eq sym of m} (m^{(x',x)}_1,m^{(x',x)}_{2})=(m^{(x,x')}_2,m^{(x,x')}_{1}). \end{align} Of course the set $D$ is likewise analytic. Extending $(m^{(\cdot,\cdot)}_1,m^{(\cdot,\cdot)}_2)$ to $(x,x')\notin D$ by setting it to $(\pi_x,\pi_{x'})$ preserves analytic measurability and the symmetry property \eqref{eq sym of m}. Assume that there exists $Q\in\Pi(\mu,\mu)$ such that $Q(D)>0$. We now show that this is in conflict with the optimality of $\pi$. By considering $\frac{Q+e(Q)}{2}$, where $e(x,x'):=(x',x)$, we may assume that $Q$ is symmetric. We first define \begin{align}\label{eq pi tilde better}\textstyle \tilde{\pi}(dx,dy):=\mu(dx)\int_{x'}Q_x(dx')m^{(x,x')}_1(dy), \end{align} which is legitimate owing to the measurability precautions we have taken. We will prove \begin{enumerate} \item $\tilde{\pi}\in \Pi(\mu,\nu)$, \item $\int\mu(dx) C(x,\pi_x)>\int\mu(dx) C(x,\tilde{\pi}_x)$.
\end{enumerate} For $(1)$: Evidently the first marginal of $\tilde{\pi}$ is $\mu$. On the other hand $$\textstyle \int_x\mu(dx)\tilde{\pi}_x(dy)= \int_x\mu(dx) \int_{x'}Q_x(dx')m^{(x,x')}_1(dy) = \int_{x,x'}Q(dx,dx')m^{(x,x')}_1(dy). $$ The last quantity is equal to $ \int_{x,x'}Q(dx,dx')m^{(x,x')}_{2}(dy) $ by symmetry of $Q$ and \eqref{eq sym of m}. So $$\textstyle \int_x\mu(dx)\tilde{\pi}_x(dy)= \int_{x,x'}Q(dx,dx')\frac{m^{(x,x')}_{1}+m^{(x,x')}_{2}}{2}(dy)=\int_{x,x'}Q(dx,dx')\frac{\pi_{x'}+\pi_{x}}{2}(dy)=\nu(dy),$$ by definition of $m_i^{(x,x')}$ and $Q$. Thus $\tilde{\pi}$ has second marginal $\nu$. For $(2)$: By convexity of $C(x,\cdot)$, the symmetry of $Q$ and \eqref{eq sym of m}, and by the assumption that on the $Q$-non-negligible set $D$ we have $C(x,\pi_x)+C(x',\pi_{x'})>C(x,m^{(x,x')}_1)+C(x',m^{(x,x')}_{2})$, we obtain \begin{align*} \textstyle\int_x\mu(dx)C(x,\tilde{\pi}_x)&= \textstyle\int_x\mu(dx)C\left(x,\int_{x'}Q_x(dx')m^{(x,x')}_1\right)\\ &\leq \textstyle \int_x\mu(dx)\int_{x'}Q_x(dx') C\left(x,m^{(x,x')}_1\right)\\ & =\textstyle \int_{x,x'}Q(dx,dx')C\left(x,m^{(x,x')}_1\right) \\&\textstyle =\int_{x,x'}Q(dx,dx') \frac {C\left(x,m^{(x,x')}_1\right)+ C\left(x',m^{(x,x')}_2\right) }{2} \\ & \textstyle < \int_{x,x'}Q(dx,dx') \frac {C(x,\pi_x)+ C(x',\pi_{x'})}{2} \\&\textstyle = \int_x\mu(dx)C(x,\pi_x). \end{align*} \noindent This contradicts the optimality of $\pi$. We conclude that no measure $Q$ with the stated properties exists. By ``Kellerer's lemma'' \cite[Proposition 2.1]{BeGoMaSc08}, which is also true for analytic sets, we obtain that $D$ is contained in a set of the form $(N\times \mathsf X)\cup (\mathsf X\times N)$ where $\mu(N)=0$. Letting $\Gamma:= N^c$, so that $\Gamma\times\Gamma\subset D^c$, we conclude. \end{proof} We now go back to the main framework in this article. The monotonicity principle will be crucially used, under the following guise, in order to prove the results in Section \ref{sec main dim 2}.
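\begin{remark} As a simple sanity check, which is not needed in the sequel: in the linear case $C(x,m)=\int c(x,y)\,m(dy)$, with $c$ bounded and continuous, say, the hypotheses of Proposition \ref{prop monotonicity general} are satisfied, and choosing the competitors $m_x:=\pi_{x'}$ and $m_{x'}:=\pi_x$ the conclusion reads $$\textstyle \int c(x,y)\,\pi_x(dy)+\int c(x',y)\,\pi_{x'}(dy)\;\leq\; \int c(x,y)\,\pi_{x'}(dy)+\int c(x',y)\,\pi_x(dy),$$ which is an integrated form of the classical cyclical monotonicity condition for pairs. \end{remark}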
For a kernel $\pi_x(dy)$ and $\tilde\mu(d\tilde x)=\frac12(\delta_x(d\tilde x)+\delta_{x'}(d\tilde x))$ we write $\pi_{\tilde x}(dy)\tilde\mu(d\tilde x)=\frac12(\delta_x\pi_x+\delta_{x'}\pi_{x'}).$ \begin{corollary} \label{prop monotonicity} Let $\pi$ be optimal for \eqref{static weak transport}. Then there exists $\Gamma\subset \mathbb{R}^d$ with $\mu(\Gamma)=1$ such that \begin{center} if $x,x'\in \Gamma$, then the measure $\frac{\delta_x\pi_x+\delta_{x'}\pi_{x'}}{2} $ is optimal for \end{center} \begin{align}\label{prob discrete mart} \inf_{\substack{\text{mean}(m_{x'})=x',\, \text{mean}(m_x)=x \\ (m_x+m_{x'})/2=(\pi_x+\pi_{x'})/2 }} \left\{ \mathcal{W}_2(m_x,\gamma^d)^2 + \mathcal{W}_2(m_{x'},\gamma^d)^2 \right \}. \end{align} \end{corollary} \begin{proof} Consider Proposition \ref{prop monotonicity general}, taking $\mathsf X=\mathsf Y=\mathbb{R}^d$ and setting $$C(x,m)= \mathcal{W}_2(m,\gamma^d)^2$$ if $\text{mean}(m)=x$, and $C(x,m)=+\infty$ otherwise. It is immediate that $C(x,\cdot)$ is convex and lower semicontinuous. Taking $\Gamma$ to be the $\mu$-full set given by Proposition \ref{prop monotonicity general}, the result follows. \end{proof} Observe that Problem \eqref{prob discrete mart} is of the same kind as \eqref{static weak transport}, with initial marginal $\frac{\delta_x+\delta_{x'}}{2}$ and terminal marginal $\frac{\pi_x+\pi_{x'}}{2}$. It follows as in Theorem \ref{lem inequality static dynamic} and Lemma \ref{lem DPP} that \eqref{prob discrete mart} has a continuous-time analogue, which enjoys the dynamic programming principle, and whose optimizer is a strong Markov martingale. This fact will be used repeatedly in what follows. \begin{remark} Of course there are versions of the results in this section for general $n$-tuples instead of pairs. Since we only use the version with pairs, we did not state the result in its most general form (cf.\ \cite{GoJu18, BaBePa18}).
\end{remark} } \section{Pending proofs} \label{sec pending proofs} \subsection{Proof of Theorems \ref{ThmStandardtoStretchedIntro} and \ref{thm converse}}\label{subsec converse} \begin{proof}[Proof of Theorem \ref{ThmStandardtoStretchedIntro}] Let $A:\mathbb{R}^d\to\mathbb{R}^d$ be in $L^2(\mu)$ and let $\phi,\psi:\mathbb{R}^d\to \mathbb{R}$ be conjugate convex functions. We start by proving that \begin{align}\textstyle \label{eq aux W lew P} WT \leq \int \phi \, d\nu -\int x\cdot A(x)\, d\mu +\int\mu(dx)\int \gamma^{(A(x))}(db)\psi (b), \end{align} where $\gamma^{(a)}:=\delta_a*\gamma^d$. First observe that $$\textstyle \sup_{q\in\Pi(\pi,\gamma)}\int q(dm,db)m\cdot b = \sup_{q\in\Pi(\pi,\gamma^{(a)})}\int q(dm,db)m\cdot [b-a].$$ Let us write $\textstyle\Sigma := \{ \, \{\pi_x\}_x :\, \text{mean}(\pi_x)=x \mbox{ and } \int\mu(dx)\pi_x(dy)=\nu(dy)\, \} $. From here, \begin{align*} WT &= \sup_{\{\pi_x\}_x \in \Sigma}\int \mu(dx) \sup_{q\in\Pi(\pi_x,\gamma^{(A(x))})}\int q(dm,db)m\cdot [b-A(x)] \\ &= \sup_{\{\pi_x\}_x \in \Sigma}\int \mu(dx)\left [-\int\pi_x(dm)m\cdot A(x) + \sup_{q\in\Pi(\pi_x,\gamma^{(A(x))})}\int q(dm,db)m\cdot b\right ]\\ &= \sup_{\{\pi_x\}_x \in \Sigma}\int \mu(dx)\left [-x\cdot A(x) + \sup_{q\in\Pi(\pi_x,\gamma^{(A(x))})}\int q(dm,db)m\cdot b\right ]\\ &\leq -\int x\cdot A(x)\, d\mu + \sup_{\{\pi_x\}_x \in \Sigma}\int\mu(dx) \sup_{q\in\Pi(\pi_x,\gamma^{(A(x))})}\int q(dm,db)[\phi(m)+\psi(b)]\\ &= -\int x\cdot A(x)\, d\mu + \int \phi~d\nu + \int\mu(dx)\int \gamma^{(A(x))}(db)~\psi(b) \end{align*} by the conjugacy relationship $m\cdot b\leq \phi(m)+\psi(b)$ and the defining property of $\Sigma$. Hence, \eqref{eq aux W lew P} follows. Let now $M$ be a standard stretched Brownian motion from $\mu$ to $\nu$ in the notation of Definition \ref{def d dim ssBm} and Equation \eqref{eq f_t}. By classical convex analysis arguments, or optimal transport theory, there exist conjugate convex functions $\phi,\psi$ such that $\lambda^d$-a.e.
($\gamma^d$-a.e.)\ $$\nabla F(b)\cdot b = \phi(\nabla F(b))+ \psi(b).$$ We also choose $A(x)=f_0^{-1}(x)$, which is well defined on $\text{supp}(\mu)$ by Lemma \ref{lem technical bla}. By definition $M_0=f_0(B_0)$, so that $\mu=f_0(\alpha)$ and $$\textstyle \int x\cdot A(x)\, d\mu(x) = \int x\cdot A(x)\,df_0(\alpha)(x)=\int f_0(x)\cdot x \,d\alpha(x) = \mathbb{E}[f_0(B_0)\cdot B_0]= \mathbb{E}[M_0\cdot B_0].$$ On the other hand \begin{align*}\textstyle \int\phi \, d\nu +\int\mu(dx)\int \gamma^{(A(x))}(db)\psi (b)& \textstyle = \mathbb{E}[\phi(M_1)] + \int A(\mu)(dx)\int \gamma^{(x)}(db)\psi(b)\\ & \textstyle = \mathbb{E}[\phi(M_1)] + \int A(\mu)(dx)\, \mathbb{E}[\psi(B_1)|B_0= x]\\ & \textstyle = \mathbb{E}[\phi(M_1)] + \mathbb{E}\big[\,\, \mathbb{E}[\psi(B_1)|B_0]\,\,\big]\\ &=\textstyle \mathbb{E}[\phi(M_1)+ \psi(B_1)], \end{align*} since $A(\mu)=\alpha=\mathrm{law}(B_0)$. So the r.h.s.\ of \eqref{eq aux W lew P} becomes in this case \begin{multline*}\textstyle\mathbb{E}[\phi(M_1)+ \psi(B_1)-M_0\cdot B_0]= \mathbb{E}[\phi(\nabla F(B_1))+ \psi(B_1)-M_0\cdot B_0]\\ \textstyle = \mathbb{E}[\nabla F(B_1)\cdot B_1-M_0\cdot B_0]=\mathbb{E}[M_1\cdot B_1-M_0\cdot B_0]= \mathbb{E}\left[ \int_0^1 \text{tr}(\sigma_t)dt \right ]. \end{multline*} Hence, Theorem \ref{lem inequality static dynamic} implies the optimality of $M$. \end{proof} We now work under Assumption \ref{ass negligible boundary}, still in arbitrary dimension $d$. \begin{proof}[Proof of Theorem \ref{thm converse}] We observe from Proposition \ref{prop martingale decomposition} that the optimization problem \eqref{static weak transport} can be decomposed / disintegrated along the cells of $\mathcal{C}_{\mu,\nu}$. Therefore, it suffices to verify optimality, for $C_{\mu,\nu}(\mu)$-a.e.\ $K$, for the corresponding transport problems with first and second marginals $\mu(~\cdot ~|K)$ and $\nu(~\cdot~|K)$ respectively.
This reduces the argument to the previous case of Theorem \ref{ThmStandardtoStretchedIntro}, and we conclude. \end{proof} \subsection{Proof of Theorem \ref{thm main}} Although this is eventually a two-dimensional result, we do not fix the dimension $d$ to be two until we explicitly say so. Let $M$ be the unique optimizer of \eqref{MBMBB}, where we drop the superscript $*$ for simplicity. By Theorem \ref{lem inequality static dynamic} this continuous-time martingale is associated to the unique two-step martingale $\pi$ optimizing \eqref{static weak transport}. Let $\nabla F^x$ be the optimal transport map pushing $\gamma^d$ to $\pi_x$. By Remark \ref{rem connection}, we know that conditionally on $M_0=x$ the martingale $M$ is given by \begin{align} \textstyle M^x_t: = f^x_t(B_t),\,\,\mbox{ where }\,\, f^x_t(\cdot):= \int \nabla F^x(b+\cdot)\gamma^d_{1-t}(db).\label{eq Mx} \end{align} We fix $0<t<1$ throughout. By Lemma \ref{lem technical bla} we find $B_t=(f^x_t)^{-1}(M^x_t)$. We denote \begin{align}\label{eq pixz} \pi_{x,y}:=\mathrm{law}(M_1|M_0=x,M_t=y)= \nabla F^x(\delta_{(f^x_t)^{-1}(y)}*\gamma^d_{1-t}). \end{align} {\bf Important convention:} For the rest of this section we make the convention that $x,y,z$ denote possible values of the random variables $M_0,M_t,M_1$ respectively. \begin{lemma}\label{lem type 1} Let $g$ be the unique gradient of a convex function such that $g(\gamma_{1-t}^d)=\pi_{x,y}$. Then $\nabla F^x(\cdot)= g(-(f_t^x)^{-1}(y)+\cdot)$. In particular $\nabla F^x$ is uniquely determined by the family of translates of $g$, which we denote by $$\text{type}(\pi_{x,y}):=\{ a\mapsto g(a-r):\, r\in\mathbb{R}^d\}.$$ \end{lemma} \begin{proof} For $r\in\mathbb{R}^d$ write $g_r(\cdot)=g(\cdot -r)$. Then, we have $\pi_{x,y}=g(\gamma_{1-t}^d)= g_{(f_t^x)^{-1}(y)}(\delta_{(f^x_t)^{-1}(y)}*\gamma^d_{1-t})$.
Hence, both $\nabla F^x$ and $g_{(f_t^x)^{-1}(y)}$ push forward $\delta_{(f^x_t)^{-1}(y)}*\gamma^d_{1-t}$ into $\pi_{x,y}$, and both are gradients of convex functions. By the uniqueness result in Brenier's theorem, it follows that they are equal. Thus knowing $\nabla F^x$ determines $g$ modulo translation. Conversely, knowing $\text{type}(\pi_{x,y})$ (i.e.\ the translates of $g$) determines $\nabla F^x$ upon finding the vector $r$ such that $\int g(r+a)\gamma^d_{1-t}(da)=y$. \end{proof} Let \begin{align}\label{eq pit} \pi^{t}:=\mathrm{law}(M_0,M_t) \end{align} and consider $$\mathcal{C}:=\{\mathcal{C}(x)\}_x:=\mathcal{C}_{\pi^t}\, ,$$ the minimal $\pi^{t}$-invariant measurable convex paving of Lemma \ref{lem existence paving}. We need to show that on each cell of $\mathcal{C}$, $M$ is a standard stretched Brownian motion, i.e.\ on each cell $\mathcal{C}(x)$, we need to find a convex function $F=F_{\mathcal{C}(x)}$ such that \begin{align} M^x_1= \nabla F( (f_0)^{-1}(x)+B_1 ), \label{eq to prove} \end{align} where $f_0$ and $F$ are related as in \eqref{eq Mx}. To this end, we introduce $$A(x):=\text{type}(\pi_x)=\{ a\mapsto \nabla F^x(a-r)\,:\,\, r\in\mathbb{R}^d\}$$ and we need to show that $A(x)$ is constant on each cell. We start by establishing a few preliminary results. \begin{lemma}\label{lem A constant} If $A(x)$ is constant in each cell of $\mathcal{C}$, then $M$ is a standard stretched Brownian motion on each of these cells. \end{lemma} \begin{proof} We argue as in Lemma \ref{lem type 1}. Fix an arbitrary $ x'\in\mathcal{C}(x)$. Then, we have $\nabla F^{x}(\cdot)= \nabla F^{x'}(r(x)+\cdot)$ for some $r(x)\in\mathbb{R}^d$, as $A(x)=A(x')$. Setting $\nabla F:=\nabla F^{x'}$ proves the claim. \end{proof} To make use of the previous lemma, we shall study the behaviour of the martingale $M$ for times in $[0,t]$ and $[t,1]$.
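\begin{remark} As a simple sanity check of the notions just introduced, which is not needed in the sequel: suppose the data are such that $\nabla F^x(b)=x+b$ for every $x$, i.e.\ conditionally on $M_0=x$ the martingale $M$ is simply a Brownian motion started at $x$ (so that $\nu=\mu*\gamma^d$). Then $f^x_t(b)=x+b$, and \eqref{eq pixz} gives $\pi_{x,y}=\delta_y*\gamma^d_{1-t}$. The gradient $g$ of Lemma \ref{lem type 1} is $g(a)=a+y$, so $\text{type}(\pi_{x,y})$ consists of all translations $a\mapsto a-r$ and in particular does not depend on $(x,y)$. Hence $A(\cdot)$ is constant, and Lemma \ref{lem A constant} exhibits $M$ (trivially) as a standard stretched Brownian motion, with $F(b)=\frac{|b|^2}{2}$ modulo translation. \end{remark}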
Let \begin{align*} \tilde{\pi}(dy,dz)&:= \text{argsup}\,\, V(t,1,\mathrm{law}(M_t),\nu), \end{align*} where $\tilde{\pi}$ is understood as the coupling of the initial and terminal marginals of the unique optimizer for $V(t,1,\mathrm{law}(M_t),\nu)$. For $\pi^t$ from \eqref{eq pit} we denote its disintegration w.r.t.\ the second marginal by $(\pi^t_y)_y$. Recall $\pi_{x,y}$ from \eqref{eq pixz}. \begin{lemma}\label{lem pixz pitilde z} For $\mathrm{law}(M_t)$-a.e.\ $y$ and $\pi^t_y$-a.e.\ $x$, we have $$\pi_{x,y}(dz)=\tilde{\pi}_y(dz).$$ \end{lemma} \begin{proof} We must have $\tilde{\pi}=\mathrm{law}(M_t,M_1)$, by Lemma \ref{lem DPP} (1). Thus $\tilde{\pi}_y=\mathrm{law}(M_1|M_t=y)$. On the other hand, $\pi_{x,y}=\mathrm{law}(M_1|M_t=y,M_0=x)$, so by Lemma \ref{lem DPP} we get $\pi_{x,y}(dz)=\tilde{\pi}_y(dz)$ for $\mathrm{law}(M_t)$-a.e.\ $y$ and $\pi^t_y$-a.e.\ $x$. \end{proof} The previous lemma shows that the type of $\pi_{x,y}$ in fact only depends on $y$. The next lemma shows that it likewise only depends on $x$: \begin{lemma}\label{lem pixz = A} For $\mu$-a.e.\ $x$ and $\pi^t_x$-a.e.\ $y$ we have $$\text{type}(\pi_{x,y})=A(x).$$ \end{lemma} \begin{proof} By Lemma \ref{lem type 1}, if $g\in \text{type}(\pi_{x,y})$ then $\nabla F^x$ is a translate of $g$ (the translation may depend on $x,y$). But this means conversely that $g$ is a translate of $\nabla F^x$, i.e.\ $g\in A(x)$. Reversing the steps gives the equality. \end{proof} We can now finish the proof of Theorem \ref{thm main}. In a nutshell, the key is to deal with the null sets in Lemmas \ref{lem pixz pitilde z} and \ref{lem pixz = A}. \textbf{From now on we assume that $d=2$.} \begin{proof}[Proof of Theorem \ref{thm main}] Lemma \ref{lem pixz = A} proves that for $\pi^t$-a.e.\ $(x,y)$, $\text{type}(\pi_{x,y})=A(x)$.
On the other hand Lemma \ref{lem pixz pitilde z} implies that for $\pi^t$-a.e.\ $(x,y)$, $\text{type}(\pi_{x,y})=D(y)$, where $D(y)$ is the common almost sure type of all $\pi_{x,y}$ which can be reached from $y$. By Fubini we have $$\pi^t(\,\{(x,y):A(x)=D(y)\}\,)=1.$$ We want to use this to show that $A(\cdot)$ is constant on the cells of $\mathcal{C}_{\pi^t}$. We first prove this for $$\mathcal{C}^t:=\{\,\text{ri}\,\mbox{supp}(\pi^t_x)\}_{x\in\mathbb{R}^d}.$$ By \eqref{eq Mx} and Lemma \ref{lem technical bla} (v) we have that $\text{supp}(\pi^t_x)=\text{supp}(\, \mathrm{law}(M_t|M_0=x)\,)=\overline{\text{co}}\,\nabla F^x(\mathbb{R}^d)$, which is convex. As in the final part of the proof of Lemma \ref{lem existence paving}, the martingale property implies $x\in \text{ri}\,\overline{\text{co}}\,\text{supp}\,\pi^t_x=\text{ri}\,\text{supp}\,\pi^t_x$, and by \cite[Theorem 6.3]{Ro70} we know $\overline{\text{ri}\,\text{supp}\,\pi^t_x}=\text{supp}\,\pi^t_x$. Hence, to show that $\mathcal{C}^t$ is a candidate $\pi^t$-invariant convex paving, it remains to show that the cells of $\mathcal{C}^t$ are pairwise disjoint or equal. By Proposition \ref{lem essential} below, there is a $\mu$-full set of initial positions with the property that, if $x,x'$ satisfy $\text{ri}\,\text{supp}\,\pi^t_x\bigcap \text{ri}\,\text{supp}\,\pi^t_{x'}\neq \emptyset$, then $A(x)=A(x')$, i.e.\ the types of $\pi_x$ and $\pi_{x'}$ coincide. This means that $\nabla F^x$ and $\nabla F^{x'}$ are translates of each other, implying that $\overline{\text{co}}\,\nabla F^x(\mathbb{R}^d)=\overline{\text{co}}\,\nabla F^{x'}(\mathbb{R}^d)$. From the previous paragraph, this shows $\text{supp}(\pi^t_x)=\text{supp}(\pi^t_{x'})$ and in particular $\text{ri}\,\text{supp}\,\pi^t_x = \text{ri}\,\text{supp}\,\pi^t_{x'}$. In one stroke this proves that $\mathcal{C}^t$ is a ($\pi^t$-invariant) convex paving and that $A(\cdot)$ is constant on its cells.
Since $\mathcal{C}_{\pi^t}$ is finer than $\mathcal{C}^t$, this proves that $A(\cdot)$ is constant in the cells of $\mathcal{C}_{\pi^t}$ as well, and we conclude the proof by Lemma \ref{lem A constant}. \end{proof} The crucial Proposition \ref{lem essential} below relies on the ``monotonicity principle'' of Proposition \ref{prop monotonicity general}, and more specifically on Corollary \ref{prop monotonicity}. \\ \textbf{For the rest of this section let $\Gamma$ be the $\mu$-full set of Corollary \ref{prop monotonicity}.} { \begin{proposition} \label{lem essential}There is a $\mu$-full set $S\subset \Gamma$ with the following property: If $x,x'\in S$ satisfy $\text{ri}\,\text{supp}\,\pi^t_x\bigcap \text{ri}\,\text{supp}\,\pi^t_{x'}\neq \emptyset$, then $A(x)=A(x')$. \end{proposition} \begin{proof} \textbf{Step 1:} By Lemma \ref{lem same dimension} below, for $x,x'\in\Gamma$ we have \begin{align}\textstyle &\text{dim}\,\text{ri}\,\text{supp}\,\pi_x^t = \text{dim}\, \text{ri}\,\text{supp}\,\pi_{x'}^t = \text{dim}\,\bra{\text{ri}\,\text{supp}\,\pi_x^t\cap \text{ri}\,\text{supp}\,\pi_{x'}^t}\Rightarrow \notag \\ & \pi_x^t(\text{ri}\,\text{supp}\,\pi_x^t\cap \text{ri}\,\text{supp}\,\pi_{x'}^t \cap\{\pi_{x,y}\neq\pi_{x',y}\})=\pi_{x'}^t(\text{ri}\,\text{supp}\,\pi_x^t\cap \text{ri}\,\text{supp}\,\pi_{x'}^t\cap\{\pi_{x,y}\neq\pi_{x',y}\}) =0. \label{case eq dim} \end{align} The goal is now to prove that the l.h.s.\ of \eqref{case eq dim} holds for the relevant pairs $x,x'\in\Gamma$. As we will see in the final step of the proof, the r.h.s.\ of \eqref{case eq dim} is a strengthening of the dynamic programming principle that allows one to deal with the null sets in Lemmas \ref{lem pixz pitilde z} and \ref{lem pixz = A} more effectively.
\textbf{Step 2:} By Lemma \ref{lem build better plan} we know that if $x,x'\in\Gamma$, then \begin{align*} \textstyle \text{dim}\,\text{ri}\,\text{supp}\,\pi^t_x &\textstyle =\text{dim}\, \text{ri}\,\text{supp}\,\pi^t_{x'}=1 \mbox{ and }\text{ri}\,\text{supp}\,\pi^t_x\bigcap \text{ri}\,\text{supp}\,\pi^t_{x'}\neq \emptyset \\ &\Rightarrow \text{dim}\,\bra{\text{ri}\,\text{supp}\,\pi_x^t\cap \text{ri}\,\text{supp}\,\pi_{x'}^t} =1. \end{align*} \textbf{Step 3:} By Lemma \ref{lem build better plan line to point}, we have for $x,x'\in\Gamma$ \begin{align*} \text{dim}\,\text{ri}\,\text{supp}\,\pi^t_x =1\mbox{ and } \text{ri}\,\text{supp}\,\pi^t_{x'}=\{x'\} \Rightarrow x'\notin \text{ri}\,\text{supp}\,\pi^t_x\,\, . \end{align*} \textbf{Step 4:} By Lemma \ref{lem build better plan surface to line}, we have for $x,x'\in\Gamma$ \begin{align*}\textstyle \text{dim}\,\text{ri}\,\text{supp}\,\pi^t_x =2\mbox{ and } \text{dim}\,\text{ri}\,\text{supp}\,\pi^t_{x'}=1\Rightarrow \text{ri}\,\text{supp}\,\pi^t_x \bigcap \text{ri}\,\text{supp}\,\pi^t_{x'}=\emptyset \,\, . \end{align*} \textbf{Step 5:} By Lemma \ref{lem build better plan surface to cloud of dots}, the family $\mathcal{C}^t_2:=\{\text{ri}\, \text{supp}\,\pi^t_x:\, x\in\Gamma,\,\text{dim}\, \text{ri}\, \text{supp}\,\pi^t_x=2\}$ consists of pairwise either disjoint or equal sets. As these are open sets, there can only be countably many different such sets (i.e.\ $\mathcal{C}^t_2$ is at most countable). If $C\in \mathcal{C}^t_2$ is such that $\mu(\{x:\text{ri}\, \text{supp}\,\pi^t_x =C\})=0$, then we discard this set $C$ from our convex paving. So we may assume, for $C\in \mathcal{C}^t_2$, that $\mu(\{x:\text{ri}\, \text{supp}\,\pi^t_{x} =C\})>0$. By Lemma \ref{lem build better plan surface to cloud of dots} the set $\{x'\in C: \text{ri}\, \text{supp}\,\pi^t_{x'} = \{x'\}\}$ is $\mu$-null under the assumption $\nu\ll\lambda^2$.
Hence, for each of the countably many $C\in \mathcal{C}^t_2$ we can discard a $\mu$-null set such that on a possibly smaller but still $\mu$-full subset of $\Gamma$, which we keep calling $\Gamma$ for simplicity, we have $$x,x'\in\Gamma,\, \text{dim}\, \text{ri}\, \text{supp}\,\pi^t_x=2 \,\,\mbox{ and }\,\, \{x'\} = \text{ri}\, \text{supp}\,\pi^t_{x'} \Rightarrow x' \notin \text{ri}\, \text{supp}\,\pi^t_x.$$ \textbf{Final Step:} By Steps 3, 4 and 5, we may assume that, for $x,x'$ in a $\mu$-full set, we have $$\textstyle\text{ri}\, \text{supp}\,\pi^t_x \bigcap \text{ri}\,\text{supp}\,\pi^t_{x'}\neq\emptyset\Rightarrow \text{dim}\,\text{ri}\, \text{supp}\,\pi^t_x = \text{dim}\,\text{ri}\, \text{supp}\,\pi^t_{x'}. $$ In this situation, if the common dimension on the r.h.s.\ is equal to one, then by Step 2 the dimension of the intersection on the l.h.s.\ is equal to one as well. On the other hand, if the common dimension on the r.h.s.\ is two, then automatically the dimension of the intersection is two (as an open convex set in $\mathbb{R}^2$). In either case, call $d^{(x,x')}$ this common dimension\footnote{Actually the case $d^{(x,x')}=2$ is settled by Lemma \ref{lem build better plan surface to cloud of dots}, but we prefer to give a general argument.}. We find ourselves in the setting of \eqref{case eq dim}, so by Step 1 we must have, with $I:=\text{ri}\,\text{supp}\,\pi_x^t\cap \text{ri}\,\text{supp}\,\pi_{x'}^t$, \begin{align}\label{eq good set} \pi_x^t(I \cap\{\pi_{x,y}\neq\pi_{x',y}\})=\pi_{x'}^t(I\cap\{\pi_{x,y}\neq\pi_{x',y}\}) =0.
\end{align} Possibly throwing away another $\mu$-null set, we know by Lemma \ref{lem pixz = A} that on a $\mu$-full set $S\subset \Gamma$ and for sets $\mathsf Y, \mathsf Y'$ with $\pi^t_x(\mathsf Y)=\pi^t_{x'}(\mathsf Y')=1$ it holds that \begin{itemize} \item[] $\text{type}(\pi_{x,y})= A(x),\, \forall y\in\mathsf Y,$ \item[] $\text{type}(\pi_{x',y})= A(x'),\, \forall y\in\mathsf Y'.$ \end{itemize} By Lemma \ref{lem technical bla}, $\pi^t_x$ is equivalent to $d^{(x,x')}$-dimensional Lebesgue measure on the $d^{(x,x')}$-dimensional open set $\text{ri}\, \text{supp}\,\pi^t_x$. Since $\text{ri}\, \text{supp}\,\pi^t_x \bigcap \text{ri}\,\text{supp}\,\pi^t_{x'}$ is a $d^{(x,x')}$-dimensional open subset, it has positive $d^{(x,x')}$-dimensional Lebesgue measure, and hence positive $\pi^t_x$-measure. Thus $\text{ri}\, \text{supp}\,\pi^t_x \bigcap \text{ri}\,\text{supp}\,\pi^t_{x'}\bigcap \mathsf Y$ has positive $\pi^t_x$-measure and positive $d^{(x,x')}$-dimensional Lebesgue measure. But then, again by Lemma \ref{lem technical bla}, this same set must have positive $\pi^t_{x'}$-measure. We conclude that $I\bigcap \mathsf Y\bigcap \mathsf Y'$ likewise has positive $\pi^t_{x'}$-measure. The symmetric argument shows that the same set has positive $\pi^t_{x}$-measure. But by \eqref{eq good set} the set $\{y:\pi_{x,y}=\pi_{x',y}\}$ is $\pi^t_x$-full in $I$. It follows that $$\textstyle I\bigcap \mathsf Y\bigcap \mathsf Y'\bigcap \{y:\pi_{x,y}=\pi_{x',y}\}$$ has positive $\pi^t_{x}$-measure, and by the same token positive $\pi^t_{x'}$-measure. In particular, $$\textstyle \mathsf Y\bigcap \{y:\pi_{x,y}=\pi_{x',y}\}\bigcap \mathsf Y' \neq \emptyset,$$ and taking $y$ in this intersection we find $$A(x)=\text{type}(\pi_{x,y})=\text{type}(\pi_{x',y})=A(x').
$$ \end{proof} \begin{lemma}\label{lem same dimension} We have $$\textstyle \left \{ (x,x'):\, \begin{array}{c} \text{dim}\,\text{ri}\,\text{supp}\,\pi_x^t=\text{dim}\, \text{ri}\,\text{supp}\,\pi_{x'}^t =\text{dim}\,\bra{\text{ri}\,\text{supp}\,\pi_x^t\bigcap \text{ri}\,\text{supp}\,\pi_{x'}^t},\\ \mbox{and either } \,\, \pi_x^t\left(\text{ri}\,\text{supp}\,\pi_x^t\bigcap \text{ri}\,\text{supp}\,\pi_{x'}^t \bigcap \{\pi_{x,y}\neq\pi_{x',y}\}\right) >0,\\ \mbox{or }\,\, \pi_{x'}^t\left(\text{ri}\,\text{supp}\,\pi_x^t\bigcap \text{ri}\,\text{supp}\,\pi_{x'}^t \bigcap\{\pi_{x,y}\neq\pi_{x',y}\}\right ) >0 \end{array} \right \}\bigcap (\Gamma\times\Gamma)=\emptyset. $$ \end{lemma} \begin{proof} Take $x,x'\in\Gamma$. By Corollary \ref{prop monotonicity}, the two-step martingale $\frac{\delta_x\pi_x+\delta_{x'}\pi_{x'}}{2}$ is optimal for \eqref{prob discrete mart}. Consider its continuous-time analogue, i.e.\ the martingale which, started at $x$, equals $M^x$ and, started at $x'$, equals $M^{x'}$ (cf.\ \eqref{eq Mx}), where both starting points have equal probability. We denote this continuous-time martingale by $M^{(x,x')}$. By construction, $\mathrm{law}(M^{(x,x')}_t|M_0^{(x,x')}=x)=\pi^t_x$ and likewise for $x'$. Similarly $\mathrm{law}(M^{(x,x')}_1|M_0^{(x,x')}=x,\, M_t^{(x,x')}=y)=\pi_{x,y} $ and the same holds for $x'$ instead of $x$. By optimality of $\frac{\delta_x\pi_x+\delta_{x'}\pi_{x'}}{2}$, also $M^{(x,x')}$ is optimal for the continuous-time analogue of \eqref{prob discrete mart}; hence, by dynamic programming (Lemma \ref{lem DPP}), we obtain sets $\mathsf Y, \mathsf Y'$ such that \begin{enumerate} \item[] $\pi_{x,y}=\mathrm{law}(M^{(x,x')}_1|M_t^{(x,x')}=y)$ for $y\in\mathsf Y$ with $\pi^t_x(\mathsf Y)=1$, \item[] $\pi_{x',y}=\mathrm{law}(M^{(x,x')}_1|M_t^{(x,x')}=y)$ for $y\in\mathsf Y'$ with $\pi^t_{x'}(\mathsf Y')=1$.
\end{enumerate} The important point is that this holds ``pointwise'' in $M^{(x,x')}_0\in\{x,x'\}$. Now assume further that $\text{dim}\,\text{ri}\,\text{supp}\,\pi_x^t=\text{dim}\, \text{ri}\,\text{supp}\,\pi_{x'}^t =\text{dim}\,\bra{\text{ri}\,\text{supp}\,\pi_x^t\bigcap \text{ri}\,\text{supp}\,\pi_{x'}^t}$, and call $d^{(x,x')}$ this common dimension. By Lemma \ref{lem technical bla} we have that $\pi^t_x$ and $\pi^t_{x'}$ restricted to $\text{ri}\,\text{supp}\,\pi_x^t\bigcap \text{ri}\,\text{supp}\,\pi_{x'}^t$ are equivalent to $d^{(x,x')}$-dimensional Lebesgue measure restricted to this same set. We write $$\textstyle I:=\text{ri}\,\text{supp}\,\pi_x^t\bigcap \text{ri}\,\text{supp}\,\pi_{x'}^t\,.$$ Necessarily $\mathsf Y\bigcap I$ is $\pi^t_x$-full in $I$, and therefore also $\pi^t_{x'}$-full in $I$. But then $\textstyle \mathsf Y\bigcap I\bigcap\mathsf Y'$ is $\pi^t_{x'}$-full in $I$ too. Inverting the roles of $x,x'$, this set must also be $\pi^t_{x}$-full in $I$. We conclude $$ \textstyle\pi_x^t(I\backslash (\mathsf Y\bigcap\mathsf Y'))=\pi_{x'}^t(I\backslash (\mathsf Y\bigcap\mathsf Y'))=0.$$ But on $\mathsf Y\bigcap\mathsf Y'$ we have $\pi_{x,y}=\mathrm{law}(M^{(x,x')}_1|M_t^{(x,x')}=y)=\pi_{x',y}$, so $$\textstyle \pi_x^t(I\bigcap \{\pi_{x,y}\neq\pi_{x',y}\})=\pi_{x'}^t(I\bigcap\{\pi_{x,y}\neq\pi_{x',y}\}) =0.$$ This concludes the proof. \end{proof} \begin{lemma} \label{lem build better plan} Put $${\mathcal V}:=\{(x,x'):\text{ri}\,\text{supp}\,\pi^t_x \mbox{ and }\text{ri}\,\text{supp}\,\pi^t_{x'} \mbox{ have dimension 1 and intersect in a singleton}\}.$$ Then, $${\mathcal V}\cap (\Gamma\times \Gamma)=\emptyset.$$ \end{lemma} \begin{proof} We use the same notation as in the proof of Lemma \ref{lem same dimension}, and assume $(x,x')\in{\mathcal V}$. By construction, $\mathrm{law}(M^{(x,x')}_t|M_0^{(x,x')}=x)=\pi^t_x$ and likewise for $x'$.
So $M^{(x,x')}_t$, conditioned to start at $x$ or at $x'$, lives in one of two line segments intersecting exactly in a single point ${p}\in\mathbb{R}^2$. By Lemma \ref{lem technical bla}, the paths of these martingales (restricted to times in $[0,t]$) evolve in different ``space-time'' strips that only intersect along the line $L:=\{(p,s):s\geq 0\}$. Let $\tau:=\inf\{s: (M_s^{(x,x')},s)\in L\}$. It follows that $0<\tau<t$ on a non-negligible set. The law of $\tau$ conditioned on the starting point of $M^{(x,x')}$ is equivalent to Lebesgue measure on $(0,1)$. The reason is that this is true for one-dimensional Brownian motion, and thanks to Lemma \ref{lem technical bla} the martingale $M^{(x,x')}$, conditioned to start, say, at $x$, is a one-dimensional Brownian motion after a continuous strictly increasing time-change. Hence for any set $E\subset (0,1)$ of positive Lebesgue measure we have $\mathbb{P}(\tau\in E\bigcap (0,t)\,|\,M^{(x,x')}_0=x )>0$ and $\mathbb{P}(\tau\in E\bigcap (0,t)\,|\,M^{(x,x')}_0=x' )>0$. Thus we observe that the law of $M_t^{(x,x')}$ given $\{M_s^{(x,x')}:\,s\leq \tau\wedge t\}$ is different from the law of $M_t^{(x,x')}$ given $M_{\tau\wedge t}^{(x,x')}$. Indeed, when $\tau<t$ (equivalently when $M_{\tau\wedge t}^{(x,x')}=p$) and $\tau\in E$, one cannot say for sure in which of the aforementioned strips the martingale will continue to evolve. On the contrary, by observing $\{M_s^{(x,x')}:\,s\leq \tau\wedge t\}$ and on $\{\tau<t\}\bigcap \{\tau\in E\}$, such a strip is completely determined. Therefore $M^{(x,x')}$ fails to have the strong Markov property. But then it cannot be optimal between its marginals, by Corollary \ref{strong Markov}, and so neither can $\frac{\delta_x\pi_x+\delta_{x'}\pi_{x'}}{2}$ be optimal for \eqref{prob discrete mart}. We conclude by Corollary \ref{prop monotonicity} that $(x,x')\notin \Gamma\times\Gamma$.
\end{proof} \begin{lemma} \label{lem build better plan line to point} We have \begin{align}\label{eq:210} \textstyle \{(x,x'):\text{dim}\,\text{ri}\,\text{supp}\,\pi^t_x =1,\, \text{ri}\,\text{supp}\,\pi^t_{x'} =\{x'\}, \, x'\in \text{ri}\,\text{supp}\,\pi^t_x \}\bigcap (\Gamma\times\Gamma)=\emptyset. \end{align} \end{lemma} \begin{proof} The proof is very similar to that of Lemma \ref{lem build better plan}. Let $(x,x')$ belong to the leftmost set in \eqref{eq:210}. Using the same notation, $M^{(x,x')}$ is a martingale which evolves in a space-time strip if started at $x$, and otherwise is constant equal to $x'$. We denote by $\tau$ the first hitting time of $\{(x',s):s\geq 0\}$. Since the martingale lives in a strip, the event $\{\tau<t\}$ has probability strictly greater than $1/2$. The strong Markov property of $M^{(x,x')}$ is destroyed at $\tau\wedge t$, since the knowledge of the past up to $\tau\wedge t$ reveals whether the martingale is constant or not thereafter. As before, by Corollary \ref{strong Markov} and Corollary \ref{prop monotonicity}, $M^{(x,x')}$ cannot be optimal and $(x,x')\notin \Gamma\times\Gamma$. \end{proof} \begin{lemma} \label{lem build better plan surface to line} We have $$\textstyle \{(x,x'):\text{dim}\,\text{ri}\,\text{supp}\,\pi^t_x =2,\, \text{dim}\,\text{ri}\,\text{supp}\,\pi^t_{x'} =1,\, \text{ri}\,\text{supp}\,\pi^t_x\bigcap \text{ri}\,\text{supp}\,\pi^t_{x'}\neq\emptyset \} \bigcap (\Gamma\times\Gamma)=\emptyset .$$ \end{lemma} \begin{proof} As in the previous proofs, we associate with $M^{(x,x')}$ the stopping time $\tau=\inf\{s: M^{(x,x')}_s\in \text{ri}\,\text{supp}\,\pi^t_{x'}\}$.
Taking $(x,x')$ in the leftmost set, it is tedious but not difficult to see that $$\mathrm{law}\left(\, (M^{(x,x')}_{\tau},\tau )\,|\,\tau\leq t,M^{(x,x')}_0=x\right ),\,\,\,\mbox{ and }\,\,\, \mathrm{law}\left( (M^{(x,x')}_U,U)\,|\, M^{(x,x')}_0=x' \right),$$ are equivalent to Lebesgue measure on $\text{ri}\,\text{supp}\,\pi^t_{x'}\times [0,t]$, where $U$ is uniformly distributed on $[0,t]$ and independent of everything. The point is that there is a common ``space-time'' set $E$ charged by the two aforementioned laws. But the behaviour of $M^{(x,x')}_t$ conditioned on its past up to $\tau\wedge t$ depends drastically on its starting position (e.g.\ whether it will evolve in a one- or two-dimensional set), whereas knowing, for example, that $(M^{(x,x')}_{\tau},\tau )\in E$ does not reveal the dimension of the set where the martingale will continue to evolve. This contradicts the strong Markov property and we conclude as before. \end{proof} \begin{lemma} \label{lem build better plan surface to cloud of dots} The family $$\mathcal{C}^t_2:=\{\text{ri}\, \text{supp}\,\pi^t_x:\,x\in \Gamma,\, \text{dim}\, \text{ri}\, \text{supp}\,\pi^t_x=2\},$$ consists of open sets which are pairwise disjoint or equal. Assuming $\nu\ll\lambda^2$, we have $$C\in \mathcal{C}^t_2\, \mbox{ and } \,\mu(\{x:\text{ri}\, \text{supp}\,\pi^t_{x} =C\})>0\Rightarrow \mu(\{x'\in C: \text{ri}\, \text{supp}\,\pi^t_{x'} = \{x'\}\})=0.$$ \end{lemma} \begin{proof} Let $\Lambda$ consist of all $(x,x')$ such that $$\textstyle\text{dim}\, \text{ri}\, \text{supp}\,\pi^t_x = 2 = \text{dim}\, \text{ri}\, \text{supp}\,\pi^t_{x'},\quad \text{ri}\, \text{supp}\,\pi^t_x\neq \text{ri}\, \text{supp}\,\pi^t_{x'},\quad \text{ri}\, \text{supp}\,\pi^t_x \bigcap \text{ri}\, \text{supp}\,\pi^t_{x'} \neq \emptyset .$$ As before we can show that $\Lambda$ cannot intersect $\Gamma\times\Gamma$.
We do not give the argument, to avoid repetition, but mention that instead of contradicting the strong Markov property it suffices to contradict the regular Markov property. This proves the first assertion. Now let $C\in \mathcal{C}^t_2$ be such that $\mu(\{x:\text{ri}\, \text{supp}\,\pi^t_{x} =C\})>0$, let $K:=\{x'\in C: \text{ri}\, \text{supp}\,\pi^t_{x'}=\{x'\}\}$, and suppose $\mu(K)>0$. We think of $K$ as a non-negligible cloud of dots where the martingale $M$ stays frozen. Since $M_0\in K\Rightarrow M_1\in K$, we have $\nu(K)>0$ and by assumption $\lambda^2(K)>0$. It follows that $\{M_t\in K\}$ is non-negligible, regardless of whether $M$ started in $K$ or in $\{x:\text{ri}\, \text{supp}\,\pi^t_{x} =C\} $ at time zero (in the latter case, by Lemma \ref{lem technical bla}). Since both sets of initial conditions are non-negligible, we contradict the regular Markov property of $M$. Indeed, on $\{M_t\in K\}$ the behaviour of $M$ after $t$ is drastically different depending on whether the starting condition at time zero lies in $K$ or in $\{x:\text{ri}\, \text{supp}\,\pi^t_{x} =C\} $. This contradicts the optimality of $M$, and we conclude $\mu(K)=0$. \end{proof} \section{Further optimality properties}\label{sec causal} Let $\mathbb{T}:=\{0=t_0\leq t_1\leq \dots\leq t_{n-1}\leq t_n=1\}\subset [0,1]$ be a finite subgrid. Suppose $M$ is a standard stretched Brownian motion from $\mu$ to $\nu$, so $M_t=f_t(B_t)$ for a Brownian motion starting with some distribution $\alpha$; see \eqref{eq f_t}. Then \begin{align*} M_{t_0}= f_0(B_{t_0}),\,\, M_{t_1}= f_1(B_{t_1}),\,\, \dots \,\, , \,\, M_{t_n}= f_n(B_{t_n}). \end{align*} Denote $\nu^{\mathbb{T}}:=\mathrm{law}(M_{t_0},M_{t_1},\dots,M_{t_n})$, the projection of $\nu$ onto the time indices in $\mathbb{T}$, and $\gamma^{\mathbb{T}}:=\mathrm{law}(B_{t_0},B_{t_1},\dots,B_{t_n}) $.
Finally, consider the \textit{adapted map} $$ [\mathbb{R}^d]^{n+1}\ni(b_0,\dots, b_{n})\mapsto f^{\mathbb{T}}(b_0,\dots, b_{n}):=(f_0(b_0),f_1(b_1),\dots, f_n(b_n))\in [\mathbb{R}^d]^{n+1} .$$ It follows that $$f^{\mathbb{T}}(\gamma^{\mathbb{T}})=\nu^{\mathbb{T}}.$$ Each component of $f^\mathbb{T}$ is \textit{increasing} in the sense that it is the gradient of a convex function. Such a map is an example of an ``increasing triangular transformation,'' as in \cite{BoKoMe05}. It can also be understood in terms of increasingness w.r.t.\ lexicographical order in case $\nu^\mathbb{T}$ has a density. In a sense properly explained in \cite{BaBeLiZa16}, $f^{\mathbb{T}}$ sends $\gamma^{\mathbb{T}}$ into $\nu^{\mathbb{T}}$ in a canonical, respectively optimal, way: see Proposition 5.6, respectively Corollary 2.10, therein. Since this is true no matter the subgrid $\mathbb{T}$, we are entitled to think of $M$ as an \textit{adapted increasing rearrangement} of the Brownian motion into a martingale with given initial and final laws. Also, the aforementioned canonical/optimal character of such rearrangements should translate into the optimality of $M$ as obtained in the previous section, and vice-versa. We now make this heuristic rigorous. Problem \eqref{MBMBB} is equivalent to \begin{align}\label{eq starting point} \inf_{M_t=M_0+\int_0^t \sigma_s\, dB_s, M_0\sim \mu,M_1\sim \nu} {\mathbb E}\left[\text{tr}\,\langle M-B\rangle_1\right], \end{align} since ${\mathbb E}\left[ \text{tr} \,\langle M-B\rangle_1\right] = {\mathbb E}\left[\text{tr}\,\langle M\rangle_1\right] + {\mathbb E}\left[\text{tr}\,\langle B\rangle_1\right]-2 {\mathbb E}\left[ \int_0^1 \text{tr}(\sigma_t)dt\right]$, and the first two quantities in the r.h.s.\ do not depend on the concrete coupling $(M,B)$. This also proves that for \eqref{eq starting point} it is irrelevant where $B$ is started.
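Indeed, $|M_t|^{2}-\text{tr}\,\langle M\rangle_t$ is a martingale, so this independence can be verified in one line, using only the marginal constraints $M_0\sim\mu$, $M_1\sim\nu$ and $\langle B\rangle_t=t\,\mathrm{Id}_d$:
\begin{align*}
{\mathbb E}\left[\text{tr}\,\langle M\rangle_1\right]
  &= {\mathbb E}\left[|M_1|^2\right]-{\mathbb E}\left[|M_0|^2\right]
   = \int |y|^2\,\nu(dy)-\int |x|^2\,\mu(dx),\\
{\mathbb E}\left[\text{tr}\,\langle B\rangle_1\right]
  &= \text{tr}\left(\mathrm{Id}_d\right) = d,
\end{align*}
so both terms are determined by $\mu$ and $\nu$ alone (finiteness requiring the second moments of $\mu$ and $\nu$).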
We now want to formulate a transport problem between laws of stochastic processes which is compatible with \eqref{eq starting point}. For ease of notation we denote $$\Omega:= C([0,1];\mathbb{R}^d).$$ \begin{definition} A causal coupling between $\mathbb{P}$ and $\mathbb{Q}$ is a probability measure $\pi$ on $\Omega\times\Omega$ with first and second marginals $\mathbb{P}$ and $\mathbb{Q}$ respectively, and satisfying the additional property \begin{align}\label{eq causality} \forall t,\forall A\in \mathcal{F}_t:\,\, \left ( \,\, \Omega\ni x\mapsto\pi_x(A)\in [0,1]\,\, \right ) \mbox{ is $(\mathbb{P},\mathcal{F}_t)$-measurable}, \end{align} where $\mathcal{F}$ is the $\mathbb{P}$-completed canonical filtration and $\pi_x$ is a regular conditional probability of $\pi$ w.r.t.\ the first marginal. We denote by $\Pi_c(\mathbb{P},\mathbb{Q})$ the set of all such $\pi$. We also denote by $\Pi_{bc}(\mathbb{P},\mathbb{Q})=\{\pi \in \Pi_c(\mathbb{P},\mathbb{Q}):\, e(\pi)\in \Pi_c(\mathbb{Q},\mathbb{P}) \}$, for $e(x,y)=(y,x)$, the set of bicausal couplings. \end{definition} We refer to \cite{La13,BaBeLiZa16,AcBaZa16,BaBeEdPi17} for more on this definition. In what follows, we write $(\omega,\bar{\omega})$ for a generic element in $\Omega\times\Omega$. \begin{lemma}\label{lem joint mart} Let $\mathbb{P}$ and $\mathbb{Q}$ be martingale laws, and $\pi \in \Pi_{bc}(\mathbb{P},\mathbb{Q})$. Then the canonical process on $\Omega\times\Omega$ is a $\pi$-martingale in its own filtration. \end{lemma} \begin{proof} One can easily see that under $\pi$ we have \begin{align*} \{\bar{\omega}_s: 0\leq s\leq t\} \mbox{ is $\pi$-conditionally independent from }\{\omega_s: 0\leq s\leq 1\} \mbox{ given }\{\omega_s: 0\leq s\leq t\},\\ \{\omega_s: 0\leq s\leq t\} \mbox{ is $\pi$-conditionally independent from }\{\bar{\omega}_s: 0\leq s\leq 1\} \mbox{ given }\{\bar{\omega}_s: 0\leq s\leq t\}, \end{align*} by bicausality.
The first property above, for $T>t$, implies $$\mathbb{E}^\pi[\omega_T|\,\{ \omega_s,\bar{\omega}_s,s\leq t\}\,] = \mathbb{E}^\pi[\omega_T|\,\{ \omega_s,s\leq t\}\,] = \mathbb{E}^{\mathbb{P}}[\omega_T|\,\{ \omega_s,s\leq t\}\,]=\omega_t.$$ The second property implies similarly $\mathbb{E}^\pi[\bar{\omega}_T|\,\{ \omega_s,\bar{\omega}_s,s\leq t\}\,]=\bar{\omega}_t$, so we conclude. \end{proof} Let us denote by $\mathbb{W}$ the Wiener measure (started at zero) on $\Omega$. The next lemma establishes the crucial connection between standard stretched Brownian motion and the present \emph{causal transport} setting. \begin{lemma}\label{lem identification causal} Let $M$ be standard stretched Brownian motion from $\mu$ to $\nu$, with $M_t=M_0+\int_0^t\sigma_s dB_s$. Then $$\mathrm{law}(B-B_0,M)\in \Pi_{bc}(\mathbb{W},\mathrm{law}(M)).$$ More generally, if $M$ is stretched Brownian motion and $B$ is as in Remark \ref{rem connection}, the same conclusion holds. \end{lemma} \begin{proof} Let $M$ be standard stretched Brownian motion from $\mu$ to $\nu$. By Lemma \ref{lem technical bla} there is an orthogonal projection $P$ such that $M_t=\tilde f_t(\tilde B_t)$, where $\tilde B_t =PB_t$. By the same result, the filtrations of $M$ and $\tilde B$ coincide. This shows that the coupling $\mathrm{law}(B-B_0,M)$ is causal from $\mathbb W$ to $\mathrm{law}(M)$. For the reverse causality, it suffices to observe that $\{\tilde B_{t+h}-\tilde B_t:h\geq 0\}$ is independent of $\{B_s:s\leq t\}$, so in particular given $\{M_s:s\leq t\}$ we have that $\{B_s-B_0:s\leq t\}$ and $\{M_s:s\leq 1\}$ are independent. The case of $M$ a stretched Brownian motion is similar, taking $B$ independent of $M_0$ and upon conditioning on the latter random variable.
\end{proof} We can now put the pieces together to obtain optimality of (standard) stretched Brownian motion in the sense of trajectorial laws. Let us fix a refining sequence of partitions $(P_n) $ of $[0,1]$ in order to define the quadratic variation $\langle \,\cdot\,\rangle$ pathwise on $C([0,1];\mathbb{R}^d)$ in the usual manner, namely $$\omega\mapsto \langle \omega \rangle_1^{i,j}:= \lim_{n\to\infty} \sum_{t_m\in P_n} (\omega^i_{t_{m+1}} -\omega^i_{t_m})(\omega^j_{t_{m+1}} -\omega^j_{t_m}) , $$ when the limit exists, and otherwise $+\infty$. We then consider \begin{align}\label{eq trajectorial} \inf_{\substack{\mathbb{Q} \in \mathsf{M}^c (\mu,\nu) \\ \pi\in \Pi_{bc}(\mathbb{W},\mathbb{Q})}}\mathbb{E}^{\pi}[\text{tr}\,\langle \omega-\bar{\omega} \rangle_1 ], \end{align} where $\mathsf{M}^c (\mu,\nu)$ denotes the set of laws of continuous martingales indexed by $[0,1]$ starting in $\mu$ and terminating in $\nu$. \begin{proposition}\label{prop causal} Problems \eqref{eq starting point} and \eqref{eq trajectorial} are equivalent. In particular, let $M^*$ be the optimizer of the former, i.e.\ stretched Brownian motion. Then ${\mathbb Q}^*:=\mathrm{law}(M^*)$ is optimal for the latter. \end{proposition} \begin{proof} Let $\mathbb{Q},\pi$ be feasible for \eqref{eq trajectorial}.
Since \begin{align*} \mathbb{E}^{\pi}[ \text{tr}\,\langle \omega-\bar{\omega} \rangle_1 ]&= \mathbb{E}^{\mathbb{W}}[\text{tr}\,\langle \omega \rangle_1 ] + \mathbb{E}^{\mathbb{Q}}[\text{tr}\,\langle \bar{\omega} \rangle_1 ]-2\mathbb{E}^{\pi}[\text{tr}\,\langle \omega,\bar{\omega} \rangle_1 ]\\ &= \mathbb{E}^{\mathbb{W}}[|\omega_1|^2-|\omega_0|^2 ] + \mathbb{E}^{\mathbb{Q}}[|\bar{\omega}_1|^2 - |\bar{\omega}_0|^2 ]-2\mathbb{E}^{\pi}[\text{tr}\,\langle \omega,\bar{\omega} \rangle_1 ], \end{align*} we can equivalently maximize $\mathbb{E}^{\pi}[\text{tr}\,\langle \omega,\bar{\omega} \rangle_1 ]$ in \eqref{eq trajectorial}, rather than minimizing $\mathbb{E}^{\pi}[\text{tr}\,\langle \omega-\bar{\omega} \rangle_1 ]$. However by Lemma \ref{lem joint mart} the canonical process is a $\pi$-martingale so $$\mathbb{E}^{\pi}[\text{tr}\,\langle \omega,\bar{\omega} \rangle_1 ] = \mathbb{E}^{\pi}[ \omega_1 \cdot \bar{\omega}_1]= \mathbb{E}^{\pi}[ \, \mathbb{E}^{\pi}[\omega_1 \cdot \bar{\omega}_1 \,|\, \bar{\omega}_0]\,] ,$$ by the product formula and as $\omega_0=0$ under $\pi$. Denoting $\pi_x=\mathrm{law}_{\mathbb{Q}}(\bar{\omega}_1|\bar{\omega}_0=x)$ and $q_x=\mathrm{law}_{\pi}((\bar{\omega}_1,\omega_1)|\bar{\omega}_0=x)$ we have that the first marginal of $q_x$ is $\pi_x$ and the second one is $\gamma^d$. Indeed, by bicausality $\mathrm{law}_{\pi}(\omega_1|\omega_0,\bar{\omega}_0)=\mathrm{law}_{\pi}(\omega_1|\omega_0)=\gamma^d$, so in particular $\mathrm{law}_{\pi}(\omega_1|\bar{\omega}_0)=\gamma^d$. Therefore \begin{align}\label{eq chain ineq} \mathbb{E}^{\pi}[\text{tr}\,\langle \omega,\bar{\omega} \rangle_1 ] = \int \mu(dx)\int q_x(dm,db)\, m\cdot b \leq \int \mu(dx)\sup_{q\in \Pi(\pi_x,\gamma^d) }\int q(dm,db)\, m\cdot b.
\end{align} By Theorem \ref{lem inequality static dynamic} we conclude that the value of \eqref{eq starting point} is greater than or equal to that of \eqref{eq trajectorial}. Let $M^*$ be the optimizer of \eqref{eq starting point} (equiv.\ of \eqref{MBMBB}). By Remark \ref{rem connection}, $M^*$ is built precisely by attaining the r.h.s.\ of \eqref{eq chain ineq} when maximizing over the kernels $\pi_x$. By the final part of Lemma \ref{lem identification causal} we may build a bicausal coupling $\pi$ so that in \eqref{eq chain ineq} we have equality. This proves that Problems \eqref{eq starting point} and \eqref{eq trajectorial} have the same value and that $\mathrm{law}(M^*)$ is optimal for the latter. \end{proof} \begin{remark} The discrete-time version of Problem \eqref{eq trajectorial} would show, in light of \cite{BaBeLiZa16}, that the optimal way to send a Gaussian random walk into a martingale is through the Knothe-Rosenblatt rearrangement (the unique increasing bicausal triangular transformation between its marginals). This is in tandem with the first paragraphs of the present part (once we switch to increments $b_i-b_{i-1}$). Via Proposition \ref{prop causal} we know that stretched Brownian motion attains Problem \eqref{eq trajectorial}. Hence, one can arguably describe stretched Brownian motion as the canonical/optimal Knothe-Rosenblatt rearrangement of Brownian motion with prescribed initial and final marginals. \end{remark} \section*{Acknowledgements} We thank Dario Trevisan for stimulating discussions at the outset of this project.
https://arxiv.org/abs/1308.4361
Inequalities with angular integrability and applications
We prove an extension of the Stein-Weiss weighted estimates for fractional integrals, in the context of $L^{p}$ spaces with different integrability properties in the radial and the angular direction. In this way, the classical estimates can be unified with their improved radial versions. A number of consequences are obtained: in particular we deduce more precise versions of weighted Sobolev embeddings, Caffarelli-Kohn-Nirenberg estimates, and Strichartz estimates for the wave equation, which extend the radial improvements to the case of arbitrary functions. Then we apply this technology in order to give new a priori assumptions on weak solutions of the Navier-Stokes equation so as to be able to conclude that they are smooth. The regularity criteria are given in terms of mixed radial-angular weighted Lebesgue space norms.
\chapter*{Introduction} We study the improvements due to angular regularity in the context of Sobolev embeddings and PDEs. It is well known that many fundamental inequalities in mathematical analysis get improvements under some additional symmetry assumptions. Such improvements are related to the geometric nature of the space and in particular to the action of a certain group of symmetries. This is not surprising because a symmetric function on a differentiable manifold can be considered as a function defined on a lower dimensional manifold on which stronger estimates are often available. Then such improved estimates can be extended to the whole manifold by the action of a certain group. In particular we work in a very simple setting by considering radially symmetric functions defined on $\mathbb{R}^{n}$. Such functions are indeed defined on $\mathbb{R}^{+}$ and $SO(n)$ acts to go back to $\mathbb{R}^{n}$. \noindent For instance the Hardy-Littlewood (\cite{Vilela01-a}, \cite{HidanoKurokawa08-a}, \cite{Rubin83-a}, \cite{DenapoliDrelichmanDuran11-a}), Caffarelli-Kohn-Nirenberg (\cite{DenapoliDrelichmanDuran10-a}) and Strichartz (\cite{DanconaCacciafesta11-a}, \cite{MachiharaNakamuraNakanishi05-a}, \cite{Sterbenz05-a}) inequalities on $\mathbb{R}^{n}$ get improvements if one restricts to radially symmetric functions. A natural question is whether and how this phenomenon persists if we replace the symmetry hypothesis with merely higher integrability in the angular variables.
We show that improvements can be obtained by working with the norms: \begin{equation*} \begin{array}{lcl} \|f\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}&=& \left( \int_{0}^{+\infty} \|f(\rho\ \cdot \ )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} \rho^{n-1}d \rho \right)^{\frac1p}, \\ \|f\|_{L^{\infty}_{|x|}L^{\widetilde{p}}_{\theta}}&=& \sup_{\rho>0}\|f(\rho\ \cdot \ )\|_{L^{\widetilde{p}}(\mathbb{S}^{n-1})}; \end{array} \end{equation*} \noindent we observe that such results interpolate between the improved versions for radially symmetric functions and the classical ones. \noindent We develop this technology extensively in the first chapter. The main results proved are extensions of the Hardy-Littlewood-Sobolev (theorem \ref{the:Our1Thm}, corollary \ref{cor:nonhom}) and Caffarelli-Kohn-Nirenberg inequalities (theorem \ref{the:Our2Thm}). Another interesting aspect is that the inequalities on which we focus are fundamental tools in the study of PDEs. For instance the Caffarelli-Kohn-Nirenberg inequality is really important in the study of the regularity of solutions of the Navier-Stokes equation, because it provides a priori estimates through interpolation of quantities related to the energy dissipation. So one can expect that this technology leads to successful results in these areas as well. We focus basically on the small data theory and on regularity criteria. We actually extend the criteria in \cite{YongZhou} and make a conjecture on a possible extension of the small data result in \cite{CKN}. \noindent As is well known, the well-posedness of the Cauchy Problem for the Navier-Stokes equation \begin{equation*} \left \{ \begin{array}{rcccl} \partial_{t}u + (u \cdot \nabla) u +\nabla p & = & \Delta u & \quad \mbox{in}& \quad \mathbb{R}^{+} \times \mathbb{R}^{n} \\ \nabla \cdot u & = & 0 & \quad \mbox{in} & \quad \mathbb{R}^{+} \times \mathbb{R}^{n} \\ u & = & u_{0} & \quad \mbox{in}& \quad \{0\} \times \mathbb{R}^{n}, \end{array}\right.
\end{equation*} is a big mathematical challenge, and only partial results have been obtained. Leray proved global existence of weak solutions for $L^{2}$ initial data in his pioneering work \cite{Ler}. On the other hand the uniqueness of Leray's solutions is still open, as is the propagation of the regularity of the initial datum $u_{0}$. The well-posedness theory is well developed for short times, or if one restricts to small $u_{0}$. In this scenario it is useful to establish at least regularity or uniqueness criteria, that is, to find a priori assumptions on the solutions under which regularity (or uniqueness) is guaranteed. We focus on the regularity in the space variables, the time regularity being a different and more difficult problem. This is actually to be expected from a physical viewpoint, because of the assumption of incompressibility of the fluid; see \cite{Ser}. It turns out that the a priori assumptions requested in order to get regularity are basically boundedness assumptions on $u, \nabla u$, or $\nabla \times u$. \noindent As mentioned we refer basically to the criteria in \cite{YongZhou}, in which the main novelty is to consider boundedness in weighted $L^{p}$ spaces with weights $|x|^{\alpha}$. More precisely the author works with solutions $u:[0,T]\times \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$, equipped with the norms: \begin{equation}\label{IntrodScaling} \||x|^{\alpha}u\|_{L^{s}_{T}L^{p}_{x}}, \qquad \frac{2}{s}+\frac{n}{p}=1-\alpha, \end{equation} where, of course, the index relation follows from scaling considerations. Under these boundedness assumptions regularity on the segment $(0,T)\times \{ 0 \}$ is achieved; more precisely $u$ is $C^{\infty}$ in the space variables. So the introduction of weights allows one to obtain local regularity criteria, where we mean only in a neighborhood of the origin.
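The index relation in \eqref{IntrodScaling} can be checked directly (a short verification, not spelled out above): the Navier-Stokes system is invariant under the scaling $u_{\lambda}(t,x)=\lambda u(\lambda^{2}t,\lambda x)$, and a change of variables gives
\begin{equation*}
\| |x|^{\alpha} u_{\lambda} \|_{L^{s}_{T/\lambda^{2}}L^{p}_{x}}
 = \lambda^{\,1-\alpha-\frac{2}{s}-\frac{n}{p}}\,
   \| |x|^{\alpha} u \|_{L^{s}_{T}L^{p}_{x}},
\end{equation*}
so the norm is invariant along the scaling exactly when $\frac{2}{s}+\frac{n}{p}=1-\alpha$.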
At first we show how for some choices of the indexes the criteria in \cite{YongZhou} are indeed global (the regularity is achieved in $(0,T)\times \mathbb{R}^{n}$). Then we get improvements by working with spaces with different integrability in the radial and angular directions; so instead of the norms (\ref{IntrodScaling}) we use: \begin{equation}\nonumber \| |x|^{\alpha} u \|_{L^{s}_{T}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} = \left( \int_{0}^{T} \left( \int_{0}^{+\infty} \|u(t, \rho \ \cdot \ )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} \ \rho^{\alpha p + n-1} \ d \rho \right)^{\frac{s}{p}} \ dt \right)^{\frac{1}{s}}, \end{equation} with $\frac{2}{s}+\frac{n}{p}=1-\alpha$. In this setting we get global regularity if sufficiently high values of $\widetilde{p}$ are considered. We observe, as is to be expected, two different behaviours in the ranges $\alpha <0$ and $\alpha > 0$; we show regularity in the case $\widetilde{p} \geq \widetilde{p}_{G}$ where $$ \widetilde{p}_{G}= \frac{(n-1)p}{\alpha p +n -1}; $$ and of course: \begin{equation*} p < \widetilde{p}_{G}, \qquad \mbox{if} \quad \alpha <0, \end{equation*} \begin{equation*} \widetilde{p}_{G} < p, \qquad \mbox{if} \quad \alpha > 0. \end{equation*} A similar analysis has been performed about the well-posedness with small data in mixed angular-radial weighted spaces equipped with the norms: \begin{equation} \||x|^{\alpha} u_{0} \|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} = \left( \int_{0}^{+\infty} \|u_{0}( \rho \ \cdot \ )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} \rho^{\alpha p + n-1} \ d \rho \right)^{\frac1p}, \end{equation} with again the critical scaling relationship $\alpha = 1 - \frac{n}{p}$. Here we find the critical value \begin{equation}\nonumber \widetilde{p}_{G} = \frac{(n-1)p}{p-1}, \end{equation} and the well-posedness is achieved for small data, with $\widetilde{p} \geq \widetilde{p}_{G}$.
Actually we have closely looked at the following heuristic: the weights $|x|^{\alpha}$, $\alpha<0$, localize, in some sense, the norms of the data (or of the solutions) near the origin. In this way local results are still available, but a loss of information far from the origin occurs. This information can be recovered in each case by a suitable amount of angular integrability. \noindent On the other hand the local (in the sense of localized near the origin) results have an intrinsic interest, and we also look at this problem. In theorem \ref{OurYZTheoremLoc} we prove local regularity for bounded solutions in $$ \| |x|^{\alpha} u \|_{L^{s}_{T}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \qquad \mbox{with} \quad \widetilde{p} \leq p. $$ This improves in a different direction the results of theorem \ref{YZTheorem}. We get local regularity under the assumption of sufficiently high angular integrability, i.e.\ for $\widetilde{p} \geq \widetilde{p}_{L}$ where \begin{equation*} \widetilde{p}_{L} = \left \{ \begin{array}{lcr} \frac{2(n-1)p}{(2 \alpha +1)p +2(n-1)} & \mbox{if} & -\frac{1}{2} \leq \alpha < 0 \\ && \\ \frac{2(n-1)p}{p +2(n-1)} & \mbox{if} & 0 < \alpha < 1. \end{array}\right. \end{equation*} The main technical tools we use consist of decay estimates for convolutions with the heat and Oseen kernels in the context of weighted $L^{p}_{|x|}L^{\widetilde{p}}_{\theta}$ spaces. Estimates in weighted spaces have been considered in the literature, but the information provided by the angular integrability leads to a really satisfactory admissibility range for the weights. The precise relation between the weights and the angular integrability is basically contained in relation (\ref{eq:condDL}) of corollary \ref{cor:nonhom}.
\noindent The small data theory in the context of weighted $L^{p}_{|x|}L^{\widetilde{p}}_{\theta}$ spaces with $\widetilde{p} < p$ is more delicate and we just make a conjecture about a possible improvement of theorem \ref{CKNSmallData}, in which the authors show regularity of the Leray solutions in the interior of the space-time parabola: $$ \Pi = \left\{ (t,x) \quad \mbox{s.t.} \quad t > \frac{|x|^{2}}{\varepsilon_{0} - \varepsilon} \right\}, $$ for a sufficiently small $\varepsilon_{0} > 0$ and data with $\||x|^{-1/2} \cdot\|_{L^{2}_{x}} = \varepsilon < \varepsilon_{0}$. The conjecture in section \ref{smallDataMixed} is made in order to cover the gap between this localized result and the classical well-posedness results (in particular we refer to theorem \ref{WeighKato}, which is a particular case of the Koch-Tataru theorem \ref{TatTheo}). \begin{remark} Of course by translations all the results are still valid if the norms and the weights are centered at a point $\bar{x} \neq 0$, i.e.\ if we consider \begin{equation*} \begin{array}{lcl} \| f \|_{L^{p}_{|x-\bar{x}|}L^{\widetilde{p}}_{\theta}}&=& \left( \int_{0}^{+\infty} \|f( \bar{x} + \rho \theta )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} \rho^{n-1}d \rho \right)^{\frac1p}, \\ \| |x-\bar{x}|^{\alpha} u \|_{L^{s}_{T}L^{p}_{|x-\bar{x}|}L^{\widetilde{p}}_{\theta}} &=& \left( \int_{0}^{T} \left( \int_{0}^{+\infty} \|u( \bar{x} + \rho \theta )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} \ \rho^{\alpha p + n-1} \ d \rho \right)^{\frac{s}{p}} \ dt \right)^{\frac{1}{s}}, \\ \||x-\bar{x}|^{\alpha} u_{0} \|_{L^{p}_{|x-\bar{x}|}L^{\widetilde{p}}_{\theta}} &=& \left( \int_{0}^{+\infty} \|u_{0}( \bar{x} + \rho \theta )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} \rho^{\alpha p + n-1} \ d \rho \right)^{\frac1p}, \end{array} \end{equation*} and so on. \end{remark} \noindent The content of Chapter \ref{SectInequality} is taken from \cite{DL}.
\chapter{Classical inequalities with angular integrability}\label{SectInequality} \noindent The goal of this section is to extend some classical estimates in the context of $L^{p}$ spaces to a setting in which the roles of the angular and radial integrability are well distinguished. In order to explain our purpose we start with a well known estimate of Walter Strauss \cite{Strauss77-a}, who proved \begin{equation}\label{eq:strauss} |x|^{\frac{n-1}{2}}|u(x)|\le C\|\nabla u\|_{L^{2}},\qquad|x|\ge1, \end{equation} for radial functions $u\in \dot H^{1}(\mathbb{R}^{n})$, $n\ge2$. This is an example of a well known general phenomenon: under suitable assumptions of symmetry, notably radial symmetry, classical estimates and embeddings of spaces admit substantial improvements. In the case of \eqref{eq:strauss}, a control on the $H^{1}$ norm of $u$ gives a pointwise bound and decay of $u$. Both are false in the general case. Radial and more general symmetric estimates have been extensively investigated, in view of their relevance for applications, especially to differential equations. \noindent This phenomenon is quite natural; indeed, symmetric functions can be regarded as functions defined on lower dimensional manifolds, on which stronger estimates are available, then extended by the action of some group of symmetries. Radial functions are essentially functions on $\mathbb{R}^{+}$, while the norms on $\mathbb{R}^{n}$ are recovered by the action of $SO(n)$, which introduces suitable dimensional weights connected to the volume form. \noindent In view of the gap between the symmetric and the non symmetric case, an interesting question arises: is it possible to quantify the defect of symmetry of functions and prove more general estimates which encompass all cases, and in particular reduce to radial estimates when applied to radial functions?
Heuristically, one should be able to improve on the general case by introducing some measure of the distance from the maximizers of the inequality, which typically have the greatest symmetry. \noindent The aim of this chapter is to give a partial positive answer to this question, through the use of the following type of mixed radial-angular norms: \begin{equation*} \begin{array}{lcl} \|f\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}&=& \left( \int_{0}^{+\infty} \|f(\rho\ \cdot\ )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} \rho^{n-1}d \rho \right)^{\frac1p}, \\ \|f\|_{L^{\infty}_{|x|}L^{\widetilde{p}}_{\theta}}&=& \sup_{\rho>0}\|f(\rho\ \cdot\ )\|_{L^{\widetilde{p}}(\mathbb{S}^{n-1})}. \end{array} \end{equation*} When the context is clear we shall write simply $L^{p}L^{\widetilde{p}}$. For $p=\widetilde{p}$ the norms reduce to the usual $L^{p}$ norms \begin{equation*} \|u\|_{L^{p}_{|x|}L^{p}_{\theta}}\equiv \|u\|_{L^{p}(\mathbb{R}^{n})}, \end{equation*} while for radial functions the value of $\widetilde{p}$ is irrelevant: \begin{equation*} \text{$u$ radial}\quad\implies\quad \|u\|_{L^{p}L^{\widetilde{p}}}\simeq \|u\|_{L^{p}(\mathbb{R}^{n})} \quad \forall p,\widetilde{p}\in[1,\infty]. \end{equation*} Notice also that the norms are increasing in $\widetilde{p}$. The idea of distinguishing radial and angular directions is not new and has proved successful in the context of Strichartz estimates and dispersive equations (see \cite{MachiharaNakamuraNakanishi05-a}, \cite{Sterbenz05-a}, \cite{DanconaCacciafesta11-a}; see also \cite{ChoOzawa09-a}). To give a flavour of the results which can be obtained, Strauss' estimate \eqref{eq:strauss} can be extended as follows: \begin{equation*} |x|^{\frac np-\sigma}|u(x)|\lesssim \||D|^{\sigma}u\|_{L^{p}L^{\widetilde{p}}},\qquad \frac{n-1}{\widetilde{p}}+\frac1p<\sigma <\frac np \end{equation*} for arbitrary non radial functions $u$ and all $1<p<\infty$, $1\le\widetilde{p}\le \infty$.
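The two facts recalled above (for radial functions the value of $\widetilde{p}$ is irrelevant; the norms increase with $\widetilde{p}$) can be illustrated numerically. The sketch below is ours, not from the text: it discretizes the mixed norm in dimension $n=2$ and, so that constants drop out exactly, takes the angular norm with respect to the normalized measure $d\theta/2\pi$ on $\mathbb{S}^{1}$; all function names are ours.

```python
import numpy as np

def mixed_norm(f, p, pt, n_rho=400, n_theta=256, rho_max=8.0):
    """Approximate ||f||_{L^p_{|x|} L^pt_theta} in dimension n = 2, i.e.
        ( int_0^inf ||f(rho .)||_{L^pt(S^1)}^p  rho  drho )^(1/p),
    with the angular L^pt norm taken w.r.t. d(theta)/(2*pi).
    f takes Cartesian coordinates (x1, x2)."""
    rho = np.linspace(rho_max / n_rho, rho_max, n_rho)            # radial grid
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)  # angular grid
    R, T = np.meshgrid(rho, theta, indexing="ij")
    vals = np.abs(f(R * np.cos(T), R * np.sin(T)))
    ang = np.mean(vals ** pt, axis=1) ** (1.0 / pt)  # ||f(rho .)||_{L^pt(S^1)}
    drho = rho_max / n_rho
    return (np.sum(ang ** p * rho) * drho) ** (1.0 / p)  # weight rho^(n-1), n = 2

gaussian = lambda x, y: np.exp(-(x ** 2 + y ** 2))          # radial test function
shifted = lambda x, y: np.exp(-((x - 1.0) ** 2 + y ** 2))   # non-radial test function

# radial function: the value of pt is (numerically) irrelevant;
# the exact value of this norm is 1/2 for the Gaussian above
print(mixed_norm(gaussian, 2, 2), mixed_norm(gaussian, 2, 6))  # both close to 0.5
# non-radial function: the norm increases with pt
print(mixed_norm(shifted, 2, 2) <= mixed_norm(shifted, 2, 6))  # True
```

The monotonicity in `pt` here is just the power-mean inequality applied angle-by-angle, which is why the normalized angular measure is the convenient choice for this check.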
\begin{remark} Of course by translations all the results we will prove hold with the norm \begin{equation*} \begin{array}{lcl} \|f\|_{L^{p}_{|x-\bar{x}|}L^{\widetilde{p}}_{\theta}}&=& \left( \int_{0}^{+\infty} \|f( \bar{x} + \rho \theta )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} \rho^{n-1}d \rho \right)^{\frac1p}, \\ \|f\|_{L^{\infty}_{|x-\bar{x}|}L^{\widetilde{p}}_{\theta}}&=& \sup_{\rho>0}\|f( \bar{x} + \rho \theta )\|_{L^{\widetilde{p}}(\mathbb{S}^{n-1})}; \end{array} \end{equation*} \end{remark} \section{The Stein-Weiss inequality} \noindent A central role in our approach will be played by the fractional integrals $$ (T_{\gamma}\phi)(x)=\int_{\mathbb{R}^{n}} \frac{\phi(y)}{|x-y|^{\gamma}}dy, \qquad 0<\gamma<n. $$ Weighted $L^{p}$ estimates for $T_{\gamma}$ are a fundamental problem of harmonic analysis, with a wide range of applications. Starting from the classical one dimensional case studied by Hardy and Littlewood, an exhaustive analysis has been made of the admissible classes of weights and ranges of indices (see \cite{Stein93-a} and the references therein). In the special case of power weights the optimal result is due to Stein and Weiss: \begin{theorem}[\cite{SteinWeiss58-b}]\label{SteinWeissThm} Let $n\geq 1$ and $1< p\le q<\infty$. Assume $\alpha,\beta,\gamma$ satisfy the set of conditions ($1=1/p+1/p'$) \begin{equation}\label{eq:condSW} \begin{split} &\beta<\frac nq,\quad \alpha<\frac{n}{p'},\quad 0<\gamma<n \\ &\alpha+\beta+\gamma=n+\frac nq-\frac np \\ &\alpha+\beta\ge0. \end{split} \end{equation} Then the following inequality holds \begin{equation}\label{eq:stw} \||x|^{-\beta}T_{\gamma}\phi\|_{L^{q}}\le C(\alpha,\beta,p,q)\cdot \||x|^{\alpha}\phi\|_{L^{p}}. \end{equation} \end{theorem} \noindent Conditions in the first line of \eqref{eq:condSW} are necessary to ensure integrability, while the necessity of the condition on the second line is due to scaling. 
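To make the scaling claim explicit (a short verification): for $\phi_{\lambda}(x):=\phi(\lambda x)$ one computes $T_{\gamma}\phi_{\lambda}(x)=\lambda^{\gamma-n}(T_{\gamma}\phi)(\lambda x)$, and hence the two sides of \eqref{eq:stw} rescale as
\begin{equation*}
\||x|^{-\beta}T_{\gamma}\phi_{\lambda}\|_{L^{q}}
 = \lambda^{\,\gamma-n+\beta-\frac{n}{q}}\,\||x|^{-\beta}T_{\gamma}\phi\|_{L^{q}},
\qquad
\||x|^{\alpha}\phi_{\lambda}\|_{L^{p}}
 = \lambda^{-\alpha-\frac{n}{p}}\,\||x|^{\alpha}\phi\|_{L^{p}},
\end{equation*}
and letting $\lambda\to0$ or $\lambda\to\infty$ forces the two exponents to coincide, i.e.\ $\alpha+\beta+\gamma=n+\frac nq-\frac np$.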
On the other hand, the sharpness of $\alpha+\beta\ge0$ is less obvious and follows from the results of \cite{SawyerWheeden92-a}. \noindent In the radial case the last condition can be relaxed and $\alpha+\beta$ is allowed to assume negative values. Radial improvements were noticed in \cite{Vilela01-a}, \cite{HidanoKurokawa08-a}, and the sharp result was obtained by Rubin \cite{Rubin83-a} and more recently by De Napoli, Drelichman and Dur\'an: \begin{theorem}[\cite{Rubin83-a},\cite{DenapoliDrelichmanDuran09-a}] \label{DeNapoli1Thm} Let $n,p,q,\alpha,\beta,\gamma$ be as in the statement of Theorem \ref{SteinWeissThm} but with the condition $\alpha+\beta\ge0$ relaxed to \begin{equation}\label{eq:condDDD} \alpha+\beta\ge(n-1)\left(\frac1q-\frac1p\right). \end{equation} Then estimate \eqref{eq:stw} is valid for all radial functions $\phi=\phi(|x|)$. \end{theorem} \noindent Using the $L^{p}_{|x|}L^{\widetilde{p}}_{\theta}$ norms we are able to prove the following general result which extends both theorems: \begin{theorem}\label{the:Our1Thm} Let $n \geq 2$ and $1<p\le q<\infty$, $1\le\widetilde{p}\le\widetilde{q}\le\infty$. Assume $\alpha,\beta,\gamma$ satisfy the set of conditions \begin{equation}\label{eq:cDL} \begin{split} &\beta<\frac nq,\quad \alpha<\frac{n}{p'},\quad 0<\gamma<n \\ &\alpha+\beta+\gamma=n+\frac nq-\frac np \\ &\alpha+\beta\ge(n-1) \left(\frac1q-\frac1p+\frac{1}{\widetilde{p}}-\frac{1}{\widetilde{q}}\right). \end{split} \end{equation} Then the following estimate holds: \begin{equation}\label{oHLS} \||x|^{-\beta}T_{\gamma} \phi\| _{L^{q}_{|x|}L^{\widetilde{q} }_{\theta}} \le C \| |x|^{\alpha} \phi\|_{L^{p}_{|x|}L^{\widetilde{p} }_{\theta}}.
\end{equation} The range of admissible $p,q$ indices can be relaxed to $1\le p\le q\le \infty$ in two cases: \begin{enumerate}\setlength{\itemindent}{-0pt} \renewcommand{\labelenumi}{\textit{(\roman{enumi})}} \item when the third inequality in \eqref{eq:cDL} is strict, or \item when the Fourier transform $\widehat{\phi}$ has support contained in an annulus $c_{1}R\le|\xi|\le c_{2} R$ ($c_{2}\ge c_{1}>0$, $R>0$); in this case \eqref{oHLS} holds with a constant independent of $R$. \end{enumerate} \end{theorem} \begin{remark} Notice that: \begin{enumerate}\setlength{\itemindent}{-0pt} \renewcommand{\labelenumi}{(\alph{enumi})} \item with the choices $q=\widetilde{q} $ and $p=\widetilde{p} $ (i.e.~in the usual $L^{p}$ norms) Theorem \ref{the:Our1Thm} reduces to Theorem \ref{SteinWeissThm}; \item if $\phi$ is radially symmetric, with the choice $\widetilde{q} =\widetilde{p} $, Theorem \ref{the:Our1Thm} reduces to Theorem \ref{DeNapoli1Thm}. Indeed, if $\phi$ is radially symmetric then $T_{\gamma}\phi$ is radially symmetric too, so that all choices for $\widetilde{q},\widetilde{p} $ are equivalent; \item obviously, the same estimate is true for general operators $T_{F}$ with nonradial kernels $F(x)$ satisfying \begin{equation*} T_{F}\phi(x)=\int F(x-y) \phi(y)dy, \qquad |F|\le C|x|^{-\gamma}. \end{equation*} \end{enumerate} \end{remark} \noindent The proof of Theorem \ref{the:Our1Thm} is based on two successive applications of Young's inequality for convolutions on suitable Lie groups: first we use the strong inequality on the rotation group $SO(n)$; then we use a Young inequality in the radial variable, which in some cases must be replaced by the weak Young-Marcinkiewicz inequality on the multiplicative group $(\mathbb{R}^{+},\cdot)$ with the Haar measure $d\rho/\rho$. The convenient idea of using convolution with respect to the measure $d\rho/\rho$ was introduced in \cite{DenapoliDrelichmanDuran09-a}.
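\noindent The second step can be previewed as follows: after the substitution $\rho=e^{t}$, convolution on $(\mathbb{R}^{+},\cdot)$ with respect to $d\rho/\rho$ becomes ordinary convolution on $\mathbb{R}$, for which Young's inequality $\|g*h\|_{L^{q}}\le\|g\|_{L^{r}}\|h\|_{L^{p}}$, $1+\frac1q=\frac1r+\frac1p$, is classical. A numerical sketch (profiles and exponents are arbitrary choices of ours, not from the text):

```python
import numpy as np

# Young's inequality on the multiplicative group (R^+, .) with Haar
# measure d rho/rho: after rho = exp(t) it is ordinary convolution on R.
# Exponents satisfy 1 + 1/q = 1/r + 1/p.  Profiles below are arbitrary.
q, r, p = 4.0, 2.0, 4.0 / 3.0
dt = 1e-2
t = np.arange(-20.0, 20.0, dt)
g = np.exp(-t * t)             # g(e^t) in the logarithmic variable
h = np.exp(-np.abs(t))         # h(e^t)
conv = np.convolve(g, h) * dt  # (g * h) sampled on the doubled t-grid

def norm(f, s):
    return (np.sum(np.abs(f) ** s) * dt) ** (1.0 / s)

assert norm(conv, q) <= norm(g, r) * norm(h, p)
```

With this normalization the discrete Riemann sums satisfy the same inequality exactly, since the grid spacing contributes the factor $h^{1+1/q-1/r-1/p}=1$.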
\begin{remark}\label{rem:nonhomogeneous} The operator $T_{\gamma}$ is a convolution with the homogeneous kernel $|x|^{-\gamma}$. Consider instead the convolution with a nonhomogeneous kernel \begin{equation*} S_{\gamma}\phi(x)=\int \frac{\phi(y)}{\bra{x-y}^{\gamma}}dy. \end{equation*} By the obvious pointwise bound \begin{equation*} |S_{\gamma}\phi(x)|\le T_{\gamma}|\phi|(x) \end{equation*} it is clear that $S_{\gamma}$ satisfies the same estimates as $T_{\gamma}$. However, the scaling invariance of the estimate is broken, and indeed something more can be proved, thanks to the smoothness of the kernel (see Lemma \ref{lem:xmu}): \end{remark} \begin{corollary}\label{cor:nonhom} Let $n \geq 2$ and $1\le p\le q\le\infty$, $1\le\widetilde{p}\le\widetilde{q}\le\infty$. Assume $\alpha,\beta,\gamma$ satisfy the set of conditions \begin{equation}\label{eq:condDL} \beta<\frac nq,\qquad \alpha<\frac{n}{p'},\qquad \alpha+\beta \ge (n-1) \left(\frac1q-\frac1p+\frac{1}{\widetilde{p}}-\frac{1}{\widetilde{q}}\right), \end{equation} \begin{equation}\label{eq:condabg} \alpha+\beta+\gamma>n\left(1+\frac1q-\frac1p\right). \end{equation} Then the following estimate holds: \begin{equation}\label{ourHLS} \||x|^{-\beta}S_{\gamma} \phi\| _{L^{q}_{|x|}L^{\widetilde{q} }_{\theta}} \le C \| |x|^{\alpha} \phi\|_{L^{p}_{|x|}L^{\widetilde{p} }_{\theta}}. \end{equation} \end{corollary} \noindent The first result we need is an explicit estimate of the angular part of the fractional integral $T_{\gamma}\phi$. Notice that a similar analysis in the radial case was done in \cite{DenapoliDrelichmanDuran09-a} (see Lemma 4.2 there). The following estimates are sharp: \begin{lemma}\label{lem:singint} Let $n\ge2$, $\nu>0$, and write $\bra{x}=(1+|x|^{2})^{1/2}$.
Then the integral \begin{equation*} I_{\nu}(x)=\int_{\mathbb{S}^{n-1}}|x-y|^{-\nu }dS(y) \qquad x\in \mathbb{R}^{n} \end{equation*} satisfies \begin{equation}\label{eq:stima0} |I_{\nu}(x)|\simeq\bra{x}^{-\nu }\qquad \text{for}\quad|x|\ge2, \end{equation} while for $|x|\le2$ we have \begin{equation}\label{stimaI} |I_{\nu}(x)| \simeq \left\{ \begin{array}{cc} 1& \mbox{if} \ \ \nu <n-1 \\ |\log{||x|-1|}| + 1& \mbox{if} \ \ \nu =n-1 \\ ||x|-1|^{n-1- \nu } & \mbox{if} \ \ \nu > n-1. \end{array} \right. \end{equation} \end{lemma} \begin{proof} We consider four different regimes according to the size of $|x|$. We write for brevity $I$ instead of $I_{\nu}$. \subsection*{First case: $|x|\geq 2$} For $x$ large and $|y|=1$ we have $|x-y| \simeq |x|$, hence $|I(x)| \simeq |x|^{-\nu } \simeq \langle x \rangle^{-\nu }$. This proves \eqref{eq:stima0}. \subsection*{Second case: $0 \leq |x| \leq \frac{1}{2}$} Clearly we have $|x-y| \simeq 1$ when $|y|=1$, and this implies $|I(x)| \simeq 1 \simeq \langle x \rangle^{-\nu }$. This is equivalent to \eqref{stimaI} when $|x|\le1/2$. \subsection*{Third case: $1 \leq |x| \leq 2$} This is the bulk of the computation since it contains the singular part of the integral, as $|x|\to1$. We write the integral in polar coordinates using the spherical angles $(\theta_{1},\theta_{2},...,\theta_{n-1})$ on $\mathbb{S}^{n-1}$, oriented in such a way that $\theta_{1}$ is the angle between $x$ and $y$. Using the notation $\sigma=|x-y|$, by the symmetry of $I(x)$ in $(\theta_{2},...,\theta_{n-1})$ we have $$ |I(x)| \simeq \int_{0}^{\pi} \sigma^{-\nu }(\sin{\theta_{1}})^{n-2}d\theta_{1}. 
$$ In order to rewrite the integral using $\sigma$ as a new variable, we compute $$ 2\sigma d\sigma = d(|x-y|^{2})=d(|x|^{2}+1 -2|x| \cos{\theta_{1}}) =2|x|\sin{\theta_{1}}d\theta_{1} $$ so we have $$ (\sin{\theta_{1}})^{n-2}d\theta_{1} = \frac{\sigma (\sin{\theta_{1}})^{n-3}}{|x|}d\sigma $$ and, noticing that $0\le|x|-1\le|x-y|=\sigma\le|x|+1$, $$ |I(x)| \simeq \int_{|x|-1}^{|x|+1} \sigma^{1-\nu }\frac{(\sin{\theta_{1}})^{n-3}}{|x|}d\sigma. $$ Now let $A$ be the area of the triangle with vertices $0$, $x$ and $y$: we have $2A=|x|\sin{\theta_{1}}$ so that $$ |I(x)| \simeq |x|^{2-n}\int_{|x|-1}^{|x|+1} \sigma^{1-\nu }A^{n-3}d \sigma. $$ Recalling Heron's formula for the area of a triangle as a function of the length of its sides we obtain $$ |I(x)| \simeq |x|^{2-n}\int_{|x|-1}^{|x|+1} \sigma^{1-\nu } \Bigl[(|x|+\sigma +1)(|x|+\sigma -1) (|x|+1 -\sigma)(\sigma +1 -|x|)\Bigr]^{\frac{n-3}{2}}d\sigma. $$ Notice that this formula is correct for all dimensions $n\ge2$. Now we split the integral as $I \simeq I_{1} + I_{2}$ with $$ I_{1}(x)= |x|^{2-n}\int_{|x|-1}^{|x|} \sigma^{1-\nu } \Bigl[ (|x|+\sigma +1) (|x|+\sigma -1) (|x|+1 -\sigma) (\sigma +1 -|x|) \Bigr]^{\frac{n-3}{2}} d\sigma $$ and $$ I_{2}(x) = |x|^{2-n}\int_{|x|}^{|x|+1} \sigma^{1-\nu } \Bigl[ (|x|+\sigma +1) (|x|+\sigma -1) (|x|+1 -\sigma) (\sigma +1 -|x|) \Bigr]^{\frac{n-3}{2}} d\sigma. $$ In the second integral $I_{2}$, recalling that $1\le|x|\le2$, we have \begin{equation*} |x|\simeq \sigma \simeq |x|+\sigma+1 \simeq |x|+\sigma-1 \simeq \sigma+1-|x| \simeq 1 \end{equation*} so that \begin{equation*} I_{2}\simeq \int_{|x|}^{|x|+1}(|x|+1-\sigma)^{\frac{n-3}{2}}d \sigma = \int_{0}^{1}(1-\sigma)^{\frac{n-3}{2}}d \sigma \simeq 1.
\end{equation*} In the first integral $I_{1}$, using that $1 \leq |x| \leq 2$ and $|x|-1 \leq \sigma \leq |x|$, we see that \begin{equation*} |x|\simeq (|x|+\sigma +1)\simeq (|x|+1 -\sigma)\simeq 1; \end{equation*} moreover, \begin{equation*} 1\le \frac{|x|+\sigma-1}{\sigma}\le 2 \quad\text{so that}\quad |x|+\sigma-1 \simeq \sigma \end{equation*} and we have $$ I_{1}(x) \simeq \int_{|x|-1}^{|x|} \sigma^{1-\nu + \frac{n-3}{2}} (\sigma+1 -|x|)^{\frac{n-3}{2}}d\sigma $$ or, after the change of variable $\sigma\to\sigma(|x|-1)$, $$ I_{1}(x)\simeq(|x|-1)^{n-1-\nu } \int_{1}^{1+\frac{1}{|x|-1}} (\sigma -1)^{\frac{n-3}{2}} \sigma ^{\frac{n-1}{2}-\nu }d\sigma . $$ Now split the last integral as $A+B$ where $$ A= (|x|-1)^{n-1-\nu }\int_{1}^{2} (\sigma -1)^{\frac{n-3}{2}}\sigma ^{\frac{n-1}{2}-\nu }d\sigma $$ and $$ B=(|x|-1)^{n-1-\nu } \int_{2}^{1+\frac{1}{|x|-1}}(\sigma -1)^{\frac{n-3}{2}} \sigma ^{\frac{n-1}{2}-\nu }d\sigma; $$ we have immediately $$ A \simeq (|x|-1)^{n-1 - \nu } $$ while, taking into account that $\sigma \simeq \sigma -1$ for $\sigma$ in $(2,1+\frac{1}{|x|-1})$, $$ B\simeq(|x|-1)^{n-1-\nu } \int_{2}^{1+\frac{1}{|x|-1}} \sigma ^{n-2-\nu }d\sigma $$ which gives \begin{equation}\label{andamentoB} B\simeq \left\{ \begin{array}{cc} 1& \mbox{if} \ \ \nu <n-1 \\ |\log{||x|-1|}| + 1& \mbox{if} \ \ \nu =n-1 \\ ||x|-1|^{n-1- \nu } & \mbox{if} \ \ \nu > n-1 \end{array} \right. \end{equation} Since $A\lesssim B$ and $I_{2}\simeq 1\lesssim B$, we conclude that $I\simeq B$, which is precisely \eqref{stimaI} in this region. \subsection*{Fourth case: $\frac{1}{2} \leq |x| \leq 1$} Using the inversion $x'=x/|x|^{2}$, which satisfies $|x'|=1/|x|$ and $|x-y|=|x|\,|x'-y|$ for $|y|=1$, we see that $I(x)=|x|^{-\nu}I(x')\simeq I(x')$, with $1\le|x'|\le2$ and $||x'|-1|\simeq||x|-1|$; thus the fourth case follows immediately from the third one, and this concludes the proof of the Lemma. \end{proof} \noindent We shall also need the following estimate which is proved in a similar way: \begin{lemma}\label{lem:singint2} Let $n\ge2$, $\nu>0$.
Then the integral \begin{equation*} J_{\nu}(x,\rho)=\int_{\mathbb{S}^{n-1}} \bra{x-\rho\theta}^{-\nu}dS(\theta) \qquad x\in \mathbb{R}^{n},\ \rho\ge0 \end{equation*} satisfies: \begin{equation}\label{eq:stimab0} |J_{\nu}(x,\rho)|\simeq\bra{x}^{-\nu }\qquad \text{for $\rho\le1$ or $|x|\ge 2\rho$}, \end{equation} \begin{equation}\label{eq:stimac0} |J_{\nu}(x,\rho)|\simeq\bra{\rho}^{-\nu }\qquad \text{for $|x|\le1$ or $\rho\ge 2|x|$}, \end{equation} while in the remaining case, i.e.~when $|x|\ge1$ and $\rho\ge1$ and $2^{-1}|x|\le\rho\le2|x|$, \begin{equation}\label{eq:stimabI} |J_{\nu}(x,\rho)| \simeq \left\{ \begin{array}{cc} \bra{\rho}^{-\nu}& \mbox{if} \ \ \nu <n-1 \\ \bra{\rho}^{-\nu} \log\left(\frac{2\bra{\rho}}{\bra{|x|-\rho}}\right) & \mbox{if} \ \ \nu =n-1 \\ \bra{\rho}^{1-n}\bra{|x|-\rho}^{n-1-\nu} & \mbox{if} \ \ \nu > n-1. \end{array} \right. \end{equation} As a consequence, one has $J_{\nu}\lesssim \bra{\rho+|x|}^{-\nu}$ when $\nu<n-1$ and $J_{\nu}\lesssim \bra{\rho+|x|}^{-\nu}\log(2\bra{\rho}+|x|)$ when $\nu=n-1$. \end{lemma} \begin{proof} The proof is similar to the proof of Lemma \ref{lem:singint}; we sketch the main steps. Estimates \eqref{eq:stimab0} and \eqref{eq:stimac0} are obvious, thus we focus on \eqref{eq:stimabI}. Write $r=|x|$, so that we are in the region $1/2\le r/\rho\le2$; we shall consider in detail the case \begin{equation*} 1\le \frac{r}{\rho}\le 2, \end{equation*} the remaining region being similar. Writing $|x-\rho\theta|=\rho\,|x/\rho-\theta|$ and using the same coordinates as before (with a slight abuse of notation, in the following formulas $|x|$ denotes the rescaled radius $r/\rho$), the integral is reduced to \begin{equation*} J_{\nu}(|x|,\rho)= |x|^{2-n}\int_{|x|-1}^{|x|+1}\bra{\rho \sigma}^{-\nu} A^{n-3}\sigma\ d \sigma \end{equation*} where $A$ is given by Heron's formula \begin{equation*} A(|x|,\sigma)^{2}= (|x|+\sigma+1) (|x|+\sigma-1) (|x|+1-\sigma) (\sigma+1-|x|). \end{equation*} We split the integral on the intervals $|x|\le\sigma\le|x|+1$ and $|x|-1\le\sigma\le|x|$.
The first piece gives \begin{equation*} I_{1}\simeq \bra{\rho}^{-\nu} \int_{|x|}^{|x|+1}(|x|+1-\sigma)^{\frac{n-3}{2}}d \sigma \end{equation*} and by the change of variable $\sigma\to \sigma(|x|+1)$ we obtain \begin{equation*} I_{1}(|x|,\rho)\simeq \bra{\rho}^{-\nu}. \end{equation*} For the second integral on $|x|-1\le\sigma\le|x|$, noticing that \begin{equation*} 1\le \frac{|x|+\sigma-1}{\sigma}\le2 \end{equation*} we have \begin{equation*} \begin{split} I_{2}\simeq & \int_{|x|-1}^{|x|} \bra{\rho \sigma}^{-\nu} \sigma^{\frac{n-1}{2}} (\sigma+1-|x|)^{\frac{n-3}{2}}d \sigma \\ = & (|x|-1)^{n-1} \int_{1}^{\frac{|x|}{|x|-1}} \bra{(r-\rho)\sigma}^{-\nu}\sigma^{\frac{n-1}{2}} (\sigma-1)^{\frac{n-3}{2}}d \sigma \end{split} \end{equation*} via the change of variables $\sigma\to \sigma(|x|-1)$ which gives $\rho \sigma\to(r-\rho)\sigma$. The part of the integral between 1 and 2 produces \begin{equation*} \simeq (|x|-1)^{n-1}\bra{r-\rho}^{-\nu}= \rho^{1-n}(r-\rho)^{n-1}\bra{r-\rho}^{-\nu} \end{equation*} while the remaining part between 2 and $|x|/(|x|-1)$ gives \begin{equation*} \begin{split} \simeq & (|x|-1)^{n-1} \int_{2}^{\frac{r}{r-\rho}} \bra{(r-\rho)\sigma}^{-\nu} \sigma^{n-2}d \sigma \\ = & \rho^{1-n} \int_{2(r-\rho)}^{r}\bra{\sigma}^{-\nu}\sigma^{n-2}d \sigma \\ \simeq\ & \rho^{1-n} \int_{2(r-\rho)}^{r} \frac{\sigma^{n-2}}{1+\sigma^{\nu}}d \sigma \end{split} \end{equation*} which can be computed explicitly. Summing up we obtain \eqref{eq:stimabI}. \end{proof} \noindent We are ready for the main part of the proof. By the isomorphism \begin{equation*} \mathbb{S}^{n-1}\simeq SO(n)/SO(n-1) \end{equation*} we can represent integrals on $\mathbb{S}^{n-1}$ in the form \begin{equation*} \int_{\mathbb{S}^{n-1}}g(y)dS(y)= c_{n} \int_{SO(n)}g(Ae)dA,\qquad n\ge2 \end{equation*} where $dA$ is the left Haar measure on $SO(n)$, and $e\in\mathbb{S}^{n-1}$ is a fixed arbitrary unit vector.
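\noindent Before entering the main argument, we remark that the power laws of Lemma \ref{lem:singint} are easy to test numerically. A sketch for $n=2$ and $\nu=3/2>n-1$, checking that $I_{\nu}(x)\,||x|-1|^{\nu-(n-1)}$ stabilizes as $|x|\to1^{+}$, as predicted by \eqref{stimaI} (trapezoidal quadrature on a mesh refined near the singularity; all numerical parameters are ours):

```python
import numpy as np

# Lemma lem:singint for n = 2, nu = 3/2 > n - 1: the prediction is
#     I_nu(x) ~ ||x| - 1|**(n - 1 - nu) = ||x| - 1|**(-1/2)  as |x| -> 1+.
nu = 1.5

def I(r):
    eps = abs(r - 1.0)
    # trapezoidal mesh on [0, pi], refined near the near-singularity t = 0
    t = np.concatenate([np.linspace(0.0, 10.0 * eps, 20001),
                        np.geomspace(10.0 * eps, np.pi, 20001)])
    f = (r * r + 1.0 - 2.0 * r * np.cos(t)) ** (-nu / 2.0)
    return 2.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))

ratios = [I(1.0 + eps) * eps ** (nu - 1.0) for eps in (1e-2, 1e-3, 1e-4)]
assert min(ratios) > 0 and max(ratios) / min(ratios) < 1.5
```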
Thus, via polar coordinates, a convolution integral can be written as follows (apart from inessential constants depending only on the space dimension $n$): \begin{equation*} \begin{split} F*\phi(x)= \int_{\mathbb{R}^{n}}F(x-y)\phi(y)dy &= \int_{0}^{\infty} \int_{\mathbb{S}^{n-1}} F(x-\rho \omega)\phi(\rho \omega)dS_{\omega}\rho^{n-1}d \rho \\ &\simeq \int_{0}^{\infty} \int_{SO(n)}F(x-\rho Be)\phi(\rho Be)dB\rho^{n-1}d \rho \end{split} \end{equation*} Hence the $L^{\widetilde{q}}$ norm of the convolution on the sphere can be written as \begin{equation*} \begin{split} \|F*\phi(|x|\theta)\|_{L^{\widetilde{q}}_{\theta}(\mathbb{S}^{n-1})} &\simeq \|F*\phi(|x|Ae)\|_{L^{\widetilde{q}}_{A}(SO(n))} \\ &\le \int_{0}^{\infty} \left\| \int_{SO(n)}F(|x|Ae-\rho Be)\phi(\rho Be)dB \right\|_{L^{\widetilde{q}}_{A}(SO(n))} \rho^{n-1}d \rho \end{split} \end{equation*} where $e$ is any fixed unit vector. By the change of variables $B\to AB^{-1}$ in the inner integral (and the invariance of the measure) this is equivalent to \begin{equation*} =\int_{0}^{\infty} \left\| \int_{SO(n)}F(AB^{-1}(|x|Be-\rho e))\phi(\rho AB^{-1}e)dB \right\|_{L^{\widetilde{q}}_{A}(SO(n))} \rho^{n-1}d \rho \end{equation*} If $F$ satisfies \begin{equation}\label{eq:rad} |F(x)|\le C f(|x|) \end{equation} for a radial function $f$, we can write \begin{equation*} |F(AB^{-1}(|x|Be-\rho e))|\le C f\left(\bigl||x|Be-\rho e\bigr|\right) \end{equation*} and we notice that the integral \begin{equation*} \int_{SO(n)}f\left(\bigl||x|Be-\rho e\bigr|\right) |\phi(\rho AB^{-1}e)|dB= g*h(A) \end{equation*} is a convolution on $SO(n)$ of the functions \begin{equation*} g(A)=f\left(\bigl||x|Ae-\rho e\bigr|\right),\qquad h(A)=|\phi(\rho Ae)|. 
\end{equation*} We can thus apply Young's inequality on $SO(n)$ (see e.g.~Theorem 1.2.12 in \cite{Grafakos08-a}) and we obtain, for any \begin{equation*} \widetilde{q},\widetilde{r},\widetilde{p} \in[1,+\infty] \quad\text{with}\quad 1+\frac{1}{\widetilde{q}}=\frac{1}{\widetilde{r}}+\frac{1}{\widetilde{p}}, \end{equation*} the estimate \begin{equation}\label{eq:firstest} \|F*\phi(|x|\theta)\|_{L^{\widetilde{q}}_{\theta}(\mathbb{S}^{n-1})} \lesssim \int_{0}^{\infty} \|f(||x|e-\rho\theta|)\|_{L^{\widetilde{r}}_{\theta}(\mathbb{S}^{n-1})} \|\phi(\rho\theta)\|_{L^{\widetilde{p}}_{\theta}(\mathbb{S}^{n-1})} \rho^{n-1}d \rho \end{equation} where we switched back to the coordinates of $\mathbb{S}^{n-1}$. Notice that the conditions on the indices imply in particular \begin{equation*} \widetilde{q}\ge\widetilde{p}. \end{equation*} Specializing $f$ to the choice \begin{equation*} f(|x|)=|x|^{-\gamma} \end{equation*} we get \begin{equation*}% \|F*\phi(|x|\theta)\|_{L^{\widetilde{q}}_{\theta}} \lesssim \int_{0}^{\infty} \rho^{-\gamma} \||\rho^{-1}|x|e-\theta|^{-\gamma}\|_{L^{\widetilde{r}}_{\theta}} \|\phi(\rho\theta)\|_{L^{\widetilde{p}}_{\theta}} \rho^{n-1}d \rho \end{equation*} which can be written in the form \begin{equation*} = |x|^{n-\alpha-\frac np-\gamma} \int_{0}^{\infty} \left( \frac{|x|}{\rho} \right)^{\alpha+\frac np-n+\gamma} \||\rho^{-1}|x|e-\theta|^{-\gamma}\|_{L^{\widetilde{r}}_{\theta}} \rho^{\alpha+\frac np} \|\phi(\rho\theta)\|_{L^{\widetilde{p}}_{\theta}} \frac{d\rho}{\rho} \end{equation*} or equivalently, recalling the scaling condition in \eqref{eq:cDL}, \begin{equation*} = |x|^{\beta-\frac nq} \int_{0}^{\infty} \left( \frac{|x|}{\rho} \right)^{-\beta+\frac nq} \||\rho^{-1}|x|e-\theta|^{-\gamma}\|_{L^{\widetilde{r}}_{\theta}} \rho^{\alpha+\frac np} \|\phi(\rho\theta)\|_{L^{\widetilde{p}}_{\theta}} \frac{d\rho}{\rho} \end{equation*} Following \cite{DenapoliDrelichmanDuran09-a}, we recognize that the last integral is a convolution in the multiplicative group $(\mathbb{R}^{+},\cdot)$ with
the Haar measure $d \rho/\rho$, which implies \begin{equation*} |x|^{-\beta+\frac nq} \|F*\phi(|x|\theta)\|_{L^{\widetilde{q}}_{\theta}} \lesssim g_{1}*h_{1}(|x|), \end{equation*} with \begin{equation*} g_{1}(\rho)=\rho^{-\beta+\frac nq} \||\rho e-\theta|^{-\gamma}\|_{L^{\widetilde{r}}_{\theta}},\qquad h_{1}(\rho)= \rho^{\alpha+\frac np} \|\phi(\rho\theta)\|_{L^{\widetilde{p}}_{\theta}}. \end{equation*} By the weak Young inequality with respect to the measure $d\rho/\rho$ (Theorem 1.4.24 in \cite{Grafakos08-a}) we obtain \begin{equation*} \begin{split} \||x|^{-\beta}F*\phi\|_{L^{q}L^{\widetilde{q}}}\equiv &\left\||x|^{-\beta+\frac nq} \|F*\phi(|x|\theta)\|_{L^{\widetilde{q}}_{\theta}} \right\|_{L^{q}(\rho^{-1}d\rho)} \\ \lesssim & \|h_{1}\|_{L^{p}(\rho^{-1}d\rho)} \|g_{1}\|_{L^{r,\infty}(\rho^{-1}d\rho)} \end{split} \end{equation*} that is to say \begin{equation}\label{eq:almostfin} \||x|^{-\beta}F*\phi\|_{L^{q}L^{\widetilde{q}}} \lesssim \|\phi\|_{L^{p}L^{\widetilde{p}}} \left\| \rho^{-\beta+\frac nq}\||\rho e-\theta|^{-\gamma}\| _{L^{\widetilde{r}}_{\theta}} \right\|_{L^{r,\infty}(\rho^{-1}d\rho)} \end{equation} provided \begin{equation*} q,r,p\in(1,+\infty)\qquad 1+\frac1q=\frac1r+\frac1p. \end{equation*} In particular this implies \begin{equation}\label{eq:qp} q>p. \end{equation} In order to complete the proof, it remains to check that the last norm in \eqref{eq:almostfin} is finite. Notice that, when $\widetilde{r}<\infty$, \begin{equation*} \||\rho e-\theta|^{-\gamma}\|_{L^{\widetilde{r}}_{\theta}}= I_{\gamma \widetilde{r}}(\rho e)^{\frac{1}{\widetilde{r}}} \end{equation*} where $I_{\nu}$ was defined and estimated in Lemma \ref{lem:singint}. On the other hand, when $\widetilde{r}=\infty$ one has directly \begin{equation}\label{eq:rinf} \widetilde{r}=\infty \quad\implies\quad \||\rho e-\theta|^{-\gamma}\|_{L^{\widetilde{r}}_{\theta}}\simeq |\rho-1|^{-\gamma}.
\end{equation} \noindent Using cutoffs, we split the $L^{r,\infty}$ norm into three regions $0\le \rho\le 1/2$, $\rho\ge2$ and $1/2\le \rho\le2$. \noindent In the region $0\le\rho\le1/2$, recalling \eqref{eq:stima0}-\eqref{stimaI} or \eqref{eq:rinf}, we have \begin{equation*} I_{\gamma \widetilde{r}}(\rho e)^{\frac{1}{\widetilde{r}}}\simeq 1 \quad\implies\quad \rho^{-\beta+\frac nq}I_{\gamma \widetilde{r}}(\rho e)^{\frac{1}{\widetilde{r}}} \in L^{1}(0,1/2; d\rho/\rho) \end{equation*} since by assumption $\beta<n/q$; thus the contribution of this part to the $L^{r,\infty}(d\rho/\rho)$ norm is finite. In the region $\rho\ge2$ we have \begin{equation*} I_{\gamma \widetilde{r}}(\rho e)^{\frac{1}{\widetilde{r}}}\simeq \rho^{-\gamma} \quad\implies\quad \rho^{-\beta+\frac nq}I_{\gamma \widetilde{r}}(\rho e)^{\frac{1}{\widetilde{r}}} \simeq \rho^{-\beta-\gamma+\frac nq} \in L^{1}(2,\infty; d\rho/\rho) \end{equation*} since the condition \begin{equation*} -\beta-\gamma+\frac nq<0 \quad\iff \quad \alpha<\frac{n}{p'} \end{equation*} is satisfied by \eqref{eq:cDL}, and again the contribution to the $L^{r,\infty}$ norm is finite. For the third region $1/2\le \rho\le2$, by estimate \eqref{stimaI}, we see that in the case $\gamma\widetilde{r}\le n-1$ one has again, for some $\sigma\ge0$, \begin{equation*} I_{\gamma \widetilde{r}}(\rho e)^{\frac{1}{\widetilde{r}}}\simeq \bigl(1+\bigl|\log|\rho-1|\bigr|\bigr)^{\sigma} \quad\implies\quad \rho^{-\beta+\frac nq}I_{\gamma \widetilde{r}}(\rho e)^{\frac{1}{\widetilde{r}}} \in L^{1}(1/2,2; d\rho/\rho) \end{equation*} On the other hand, in the case $\gamma\widetilde{r}>n-1$ (which includes the choice $\widetilde{r}=\infty$), we see that \begin{equation*} \rho^{-\beta+\frac nq}I_{\gamma \widetilde{r}}(\rho e)^{\frac{1}{\widetilde{r}}}\simeq |\rho-1|^{\frac{n-1}{\widetilde{r}}-\gamma} \in L^{r,\infty}(1/2,2; d\rho/\rho)\quad \iff \frac{n-1}{\widetilde{r}}-\gamma\ge-\frac1r.
\end{equation*} Recalling the relation between $q,r,p$ (resp.~$\widetilde{q},\widetilde{r},\widetilde{p}$) the last condition is equivalent to \begin{equation*} -\gamma \ge (n-1) \left( \frac1q-\frac1p-\frac{1}{\widetilde{q}}+\frac{1}{\widetilde{p}} \right)-\frac nq+\frac np-n \end{equation*} which is precisely the third of conditions \eqref{eq:cDL}. \noindent The weak Young inequality can be used in \eqref{eq:almostfin} only in the range $q,r,p\in(1,+\infty)$, which forces \begin{equation*} 1<p<q<\infty. \end{equation*} To cover the cases \begin{equation*} 1\le p<q\le\infty \end{equation*} we use instead the strong Young inequality: we can write \begin{equation}\label{eq:almostfin2} \||x|^{-\beta}F*\phi\|_{L^{q}L^{\widetilde{q}}} \lesssim \|\phi\|_{L^{p}L^{\widetilde{p}}} \left\| \rho^{-\beta+\frac nq}\||\rho e-\theta|^{-\gamma}\| _{L^{\widetilde{r}}_{\theta}} \right\|_{L^{r}(\rho^{-1}d\rho)} \end{equation} for the full range $q,r,p\in[1,+\infty]$. The previous arguments are still valid apart from the last step which must be replaced by \begin{equation*} \rho^{-\beta+\frac nq}I_{\gamma \widetilde{r}}(\rho e)^{\frac{1}{\widetilde{r}}}\simeq |\rho-1|^{\frac{n-1}{\widetilde{r}}-\gamma} \in L^{r}(1/2,2; d\rho/\rho)\quad \iff \frac{n-1}{\widetilde{r}}-\gamma>-\frac1r \end{equation*} and this implies that the inequality in the last condition of \eqref{eq:cDL} must be strict. \noindent The case \begin{equation*} 1<p=q<\infty \end{equation*} has already been covered. Indeed, in this case the scaling condition in \eqref{eq:cDL} implies \begin{equation*} \alpha+\beta+\gamma=n \quad\implies\quad \alpha+\beta>0 \end{equation*} since $\gamma<n$. Thus when $\widetilde{p}=\widetilde{q}$ the last inequality in \eqref{eq:cDL} is strict and we can apply the second part of the proof; the cases $\widetilde{p}<\widetilde{q}$ follow from the case $\widetilde{p}=\widetilde{q}$.
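\noindent The role of the weak norm in the borderline case can be isolated in a toy computation: near $\rho=1$ the kernel $g_{1}$ behaves like $f(\rho)=|\rho-1|^{-1/r}$, which lies in $L^{r,\infty}(d\rho/\rho)$ but not in $L^{r}$, since its distribution function satisfies $\bigl|\{f>\lambda\}\bigr|_{d\rho/\rho}\simeq2\lambda^{-r}$. A numerical sketch (the choice $r=2$ is ours):

```python
import numpy as np

# f(rho) = |rho - 1|**(-1/r) on (1/2, 2) is in weak L^r for d rho/rho:
# the measure of {f > lam} is log((1 + lam**-r)/(1 - lam**-r)) ~ 2 lam**-r.
r, dh = 2.0, 1e-6
rho = np.arange(0.5, 2.0, dh) + dh / 2.0
f = np.abs(rho - 1.0) ** (-1.0 / r)
for lam in (2.0, 4.0, 8.0):
    m = np.sum((f > lam) / rho) * dh   # m{f > lam} in the measure d rho/rho
    assert abs(lam ** r * m - 2.0) < 0.1
```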
\noindent To complete the proof, it remains to consider the case (ii) where we assume that the support of the Fourier transform $\widehat{\phi}$ is contained in an annular region of size $R$. By scaling invariance of the inequality, it is sufficient to consider the case $R=1$. Now let $\psi(x)$ be such that $\widehat{\psi}\in C^{\infty}_{c}$ and precisely \begin{equation*} \widehat{\psi}(\xi)=1 \quad\text{for $c_{1}'\le|\xi|\le c_{2}'$}, \qquad \widehat{\psi}(\xi)=0 \quad\text{for $|\xi|>2c_{2}'$ and $|\xi|<\frac12 c_{1}'$}, \end{equation*} for some constants $c'_{2}>c_{2}\ge c_{1}>c'_{1}>0$. This implies \begin{equation*} \phi =\mathsf{F}^{-1}(\widehat{\psi}\widehat{\phi}) =\psi*\phi \end{equation*} and we can write \begin{equation*} T_{\gamma}\phi=|x|^{-\gamma}*\psi*\phi= (T_{\gamma}\psi)*\phi. \end{equation*} Since $T_{\gamma}\psi=c\mathsf{F}^{-1} (|\xi|^{\gamma-n}\widehat{\psi}(\xi))$ is a Schwartz class function, we arrive at the estimates \begin{equation}\label{eq:allN} |T_{\gamma}\phi(x)|\le C_{\mu,\gamma}\bra{x}^{-\mu}*|\phi|\qquad \forall \mu\ge1. \end{equation} Here we can take $\mu$ arbitrarily large. Thus the proof of case (ii) is concluded by applying the following Lemma: \begin{lemma}\label{lem:xmu} Let $n\ge2$. Assume $1\le p\le q\le \infty$, $1\le \widetilde{p}\le \widetilde{q}\le \infty$ and $\alpha,\beta, \mu$ satisfy \begin{equation}\label{eq:condabmu} \beta<\frac nq,\qquad \alpha<\frac{n}{p'},\qquad \alpha+\beta\ge(n-1) \left(\frac1q-\frac1p+\frac{1}{\widetilde{p}}-\frac{1}{\widetilde{q}}\right), \end{equation} \begin{equation}\label{eq:condmu} \mu> -\alpha-\beta+n\left(1+\frac1q-\frac1p\right). \end{equation} Then the following estimate holds: \begin{equation}\label{eq:estmu} \||x|^{-\beta}\bra{x}^{-\mu}*\phi\| _{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}\lesssim \|\phi\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}.
\end{equation} \end{lemma} \begin{proof} Notice that, by \eqref{eq:condabmu}, the right hand side in \eqref{eq:condmu} is always strictly positive and never larger than $n$, thus it is sufficient to prove the lemma for $\mu$ in the range \begin{equation*} 0<\mu\le n. \end{equation*} By \eqref{eq:firstest} we have, for all $\widetilde{p},\widetilde{q},\widetilde{r}\in[1,+\infty]$ with $1+1/\widetilde{q}=1/\widetilde{r}+1/\widetilde{p}$, \begin{equation}\label{eq:quant} \|\bra{\cdot}^{-\mu}*|\phi|(|x|\theta)\| _{L^{\widetilde{q}}_{\theta}(\mathbb{S}^{n-1})}\lesssim \int_{0}^{\infty} J_{\mu\widetilde{r}}(|x|,\rho)^{\frac{1}{\widetilde{r}}} \|\phi(\rho \theta)\| _{L^{\widetilde{p}}_{\theta}(\mathbb{S}^{n-1})}\rho^{n-1} d\rho. \end{equation} Notice that when $\widetilde{r}=\infty$ we have \begin{equation*} \|\bra{|x|e-\rho \theta}^{-\mu}\|_{L^{\infty}_{\theta}}\lesssim \bra{|x|-\rho}^{-\mu}. \end{equation*} We write for brevity \begin{equation*} Q(|x|)\equiv |x|^{-\beta+\frac{n-1}{q}} \|\bra{\cdot}^{-\mu}*|\phi|(|x|\theta)\| _{L^{\widetilde{q}}_{\theta}},\qquad P(\rho)=\rho^{\alpha+\frac{n-1}{p}} \|\phi(\rho \theta)\| _{L^{\widetilde{p}}_{\theta}} \end{equation*} \begin{equation*} J(|x|,\rho)=J_{\mu\widetilde{r}}^{\frac{1}{\widetilde{r}}}(x,\rho) \quad\text{(resp. $\bra{|x|-\rho}^{-\mu}$ if $\widetilde{r}=\infty$).} \end{equation*} Thus \eqref{eq:quant} becomes \begin{equation}\label{eq:quantt} Q(\sigma)\lesssim \sigma^{-\beta+\frac{n-1}{q}} \int_{0}^{\infty}J(\sigma,\rho) \rho^{\frac{n-1}{p'}-\alpha} P(\rho) d\rho \end{equation} and the estimate to be proved \eqref{eq:estmu} can be written as \begin{equation}\label{eq:thesis} \|Q\|_{L^{q}(0,+\infty)}\lesssim \|P\|_{L^{p}(0,+\infty)}. \end{equation} Recall that the integrals of the form $J(\sigma,\rho)$ have been estimated in Lemma \ref{lem:singint2}. \noindent We split $Q$ into the sum of several terms corresponding to different regions of $\rho,\sigma$.
In the region $\sigma\le1$ we have $J(\sigma,\rho)\lesssim \bra{\rho}^{-\mu}$ so that \begin{equation}\label{eq:quant2} Q_{1}(\sigma)\lesssim \sigma^{-\beta+\frac{n-1}{q}} \int_{0}^{\infty}\bra{\rho}^{-\mu}\rho^{\frac{n-1}{p'}-\alpha} P(\rho) d \rho \end{equation} Thus we see that in this region \eqref{eq:thesis} follows simply from H\"older's inequality and the fact that $\alpha<n/p'$ and $\beta<n/q$. Similarly, it is easy to handle the part of the integral with $\rho\le1$ since we have then $J(\sigma,\rho)\lesssim \bra{\sigma}^{-\mu}$. Thus in the following we can restrict to $\sigma \gtrsim 1,\rho \gtrsim 1$. \noindent When $1 \lesssim \sigma \le \rho/2$ we have again $J(\sigma,\rho)\lesssim \bra{\rho}^{-\mu}$ and \eqref{eq:quantt} becomes \begin{equation}\label{eq:quant3} Q_{2}(\sigma)\lesssim \sigma^{-\beta+\frac{n-1}{q}} \int_{\sigma}^{\infty}\bra{\rho}^{-\mu}\rho^{\frac{n-1}{p'}-\alpha} P(\rho) d \rho \end{equation} If we assume \begin{equation}\label{eq:conda} \mu>\frac{n}{p'}-\alpha \end{equation} we can apply H\"older's inequality and we get \begin{equation*} Q_{2}(\sigma)\lesssim \sigma^{-\beta+\frac{n-1}{q}} \sigma^{\frac{n}{p'}-\mu-\alpha}\|P\|_{L^{p}}. \end{equation*} Now the right hand side is in $L^{q}(\sigma\ge1)$ provided \begin{equation}\label{eq:condb} \mu>\frac{n}{p'}-\alpha+\frac nq-\beta \equiv -\alpha-\beta+n\left(1+\frac1q-\frac1p\right) \end{equation} and we see that \eqref{eq:condb} implies \eqref{eq:conda} since $\beta<n/q$ by assumption.
When $1 \lesssim \rho \le \sigma/2$ we have $J(\sigma,\rho)\lesssim \bra{\sigma}^{-\mu}$ and \eqref{eq:quantt} becomes \begin{equation}\label{eq:quant3-4} Q_{4}(\sigma)\lesssim \sigma^{-\beta+\frac{n-1}{q}}\sigma^{-\mu} \int_{0}^{\sigma}\rho^{\frac{n-1}{p'}-\alpha} P(\rho) d \rho \end{equation} and by H\"older's inequality we have as before \begin{equation*} \lesssim \sigma^{-\beta+\frac{n-1}{q}} \sigma^{\frac{n}{p'}-\mu-\alpha} \|P\|_{L^{p}} \end{equation*} so that \eqref{eq:condb} is again sufficient to obtain \eqref{eq:thesis}. \noindent Finally, let $\sigma \gtrsim 1$, $\rho \gtrsim 1$ and $2^{-1}\sigma\le \rho\le 2 \sigma$. In this region we must treat differently the values of $\mu\widetilde{r}$ larger or smaller than $n-1$, and the case $\widetilde{r}=\infty$ is considered at the end. Assume first that $\mu\widetilde{r}>n-1$; then $J(\sigma,\rho)\lesssim \bra{\rho}^{\frac{1-n}{\widetilde{r}}} \bra{\sigma-\rho}^{\frac{n-1}{\widetilde{r}}-\mu}$, and using the relations \begin{equation*} \sigma \simeq \rho,\qquad \frac{1}{\widetilde{r}}=1+\frac{1}{\widetilde{q}}-\frac{1}{\widetilde{p}} \end{equation*} we see that \eqref{eq:quantt} reduces to \begin{equation}\label{eq:quant4} Q_{5}(\sigma)\lesssim \sigma^{-\alpha-\beta+(n-1) \left( \frac1q-\frac1p+\frac{1}{\widetilde{p}}-\frac{1}{\widetilde{q}} \right)} \int_{\sigma/2}^{2\sigma} \bra{\sigma-\rho}^{\frac{n-1}{\widetilde{r}}-\mu} P(\rho) d \rho. \end{equation} The last integral is (bounded by) a convolution of $P(\rho)$ with the function $\bra{\rho}^{\frac{n-1}{\widetilde{r}}-\mu}$. In order to estimate the $L^{q}(\sigma\ge1)$ norm of $Q_{5}$, we use first H\"older's then Young's inequality: \begin{equation*} \|Q_{5}\|_{L^{q}}\lesssim \|\bra{\sigma}^{-\epsilon}\|_{L^{q_{0}}} \|\bra{\rho}^{\frac{n-1}{\widetilde{r}}-\mu}\|_{L^{q_{1}}} \|P\|_{L^{p}} \end{equation*} where \begin{equation*} \epsilon=-\alpha-\beta+(n-1) \left( \frac1q-\frac1p+\frac{1}{\widetilde{p}}-\frac{1}{\widetilde{q}} \right),\qquad \frac1q=\frac{1}{q_{0}}+\frac{1}{q_{1}}+\frac1p-1.
\end{equation*} By assumption we have $\epsilon\ge0$. When $\epsilon>0$, in order for the norms to be finite we need \begin{equation*} \epsilon q_{0}> 1,\qquad \frac{n-1}{\widetilde{r}}-\mu<-\frac{1}{q_{1}} \end{equation*} which can be rewritten \begin{equation*} (n-1)\left( 1+\frac{1}{\widetilde{q}}-\frac{1}{\widetilde{p}} \right)-\mu +1+\frac1q-\frac1p < \frac1{q_{0}} <\epsilon \end{equation*} and we see that we can find a suitable $q_{0}$ provided the first side is strictly smaller than the last side; this condition is precisely equivalent to \eqref{eq:condb} again (recall also that $\mu\widetilde{r}>n-1$ and $0<\mu\le n$). The argument works also in the case $\epsilon=0$ by choosing $q_{0}=\infty$. \noindent If on the other hand $0<\mu\widetilde{r}<n-1$, we have $J_{\mu\widetilde{r}}^{\frac{1}{\widetilde{r}}}\lesssim \bra{\rho}^{-\mu}$ also in this region, so that \begin{equation*} Q_{5}(\sigma)\lesssim \sigma^{-\beta+\frac{n-1}{q}} \sigma^{\frac{n-1}{p'}-\alpha-\mu} \int_{\sigma/2}^{\sigma} P(\rho) d \rho \end{equation*} by $\sigma \simeq \rho$. H\"older's inequality gives \begin{equation*} Q_{5}(\sigma)\lesssim \sigma^{-\beta+\frac{n-1}{q}} \sigma^{\frac{n-1}{p'}-\alpha-\mu} \sigma^{\frac1{p'}}\|P\|_{L^{p}} \end{equation*} which leads to exactly the same computations as above and in the end to \eqref{eq:condb}. The case $\mu\widetilde{r}=n-1$ introduces a logarithmic term which does not change the integrability properties used here. \noindent It remains to consider the last region when $\widetilde{r}=\infty$, so that $J(\sigma,\rho)=\bra{\sigma-\rho}^{-\mu}$ and $1/\widetilde{p}-1/\widetilde{q}=1$. Then \begin{equation*} Q_{5}(\sigma)\lesssim \sigma^{-\beta+\frac{n-1}{q}} \int_{\sigma/2}^{2 \sigma}\bra{\sigma-\rho}^{-\mu} \rho^{\frac{n-1}{p'}-\alpha} P(\rho)d \rho \end{equation*} which is identical to \eqref{eq:quant4} with $\widetilde{r}=\infty$, thus the same computations apply and the proof is concluded.
\end{proof} \section{Weighted Sobolev embeddings}\label{SobEmb} \noindent In this section we write estimate (\ref{oHLS}) in the form of a Sobolev embedding. In this way we get also critical estimates in Besov Spaces. \noindent Recalling the pointwise bound \begin{equation}\label{eq:derivT} |u(x)|\le C T_{\lambda}(||D|^{n-\lambda}u|),\qquad 0<\lambda<n \end{equation} where $|D|^{\sigma}=(-\Delta)^{\frac{\sigma}{2}}$, we see that an immediate consequence of \eqref{oHLS} is the weighted Sobolev inequality \begin{equation}\label{eq:weightS} \||x|^{-\beta}u\|_{L^{q}L^{\widetilde{q}}}\lesssim \||x|^{\alpha}|D|^{\sigma}u\|_{L^{p}L^{\widetilde{p}}} \end{equation} provided $1<p\le q<\infty$, $1\le\widetilde{p}\le\widetilde{q}\le\infty$ and \begin{equation}\label{eq:condDLsob} \begin{split} &\beta<\frac nq,\quad \alpha<\frac{n}{p'},\quad 0<\sigma<n \\ &\alpha+\beta=\sigma+\frac nq-\frac np \\ &\alpha+\beta\ge(n-1) \left(\frac1q-\frac1p+\frac{1}{\widetilde{p}}-\frac{1}{\widetilde{q}}\right). \end{split} \end{equation} As usual, if the last condition is strict we can take $p,q$ in the full range $1\le p\le q\le \infty$. For instance, this implies the inequality \begin{equation}\label{eq:infsob} |x|^{-\beta}|u(x)|\lesssim \||x|^{\alpha}|D|^{\sigma}u\|_{L^{p}L^{\widetilde{p}}} \end{equation} provided $1\le p\le \infty$ and \begin{equation*} \begin{split} &\beta<0,\quad \alpha<\frac{n}{p'},\quad 0<\sigma<n \\ &\alpha+\beta=\sigma-\frac np \\ &\alpha+\beta>(n-1) \left(\frac{1}{\widetilde{p}}-\frac1p\right). \end{split} \end{equation*} If we choose $\alpha=0$ we have in particular for $p\in(1,\infty)$, $\widetilde{p}\in[1,\infty]$ \begin{equation}\label{eq:partsob} |x|^{\frac np-\sigma}|u(x)|\lesssim \||D|^{\sigma}u\|_{L^{p}L^{\widetilde{p}}},\qquad \frac{n-1}{\widetilde{p}}+\frac1p<\sigma <\frac np.
\end{equation} This extends to the non-radial case the radial inequalities in \cite{Strauss77-a}, \cite{Ni82-a}, \cite{ChoOzawa09-a} (see also \cite{ChoNakanishi10-a}) and many others; notice that in the radial case we can choose $\widetilde{p}=\infty$ to obtain the largest possible range. When $\sigma$ is an integer we can replace the fractional operator $|D|^{\sigma}$ with usual derivatives; see Corollary \ref{cor:integers} below for a similar argument. \noindent By similar techniques it is possible to derive nonhomogeneous estimates in terms of norms of the type $\|\bra{D}^{\sigma}u\|_{L^{p}}$; we omit the details. \subsection*{Critical estimates in Besov spaces}\label{sub:besov} \noindent Case (ii) in Theorem \ref{the:Our1Thm} is suitable for applications to spaces defined via Fourier decompositions, in particular Besov spaces. We recall the standard machinery: fix a $C^{\infty}_{c}$ radial function $\psi_{0}(\xi)$ equal to 1 for $|\xi|<1$ and vanishing for $|\xi|>2$, define a Littlewood-Paley partition of unity via $\phi_{0}(\xi)=\psi_{0}(\xi)-\psi_{0}(\xi/2)$, $\phi_{j}(\xi)=\phi_{0}(2^{-j}\xi)$, and decompose $u$ as $u=\sum_{j\in \mathbb{Z}}u_{j}$ where $u_{j}=\phi_{j}(D)u=\mathbb{F}^{-1}\phi_{j}(\xi)\mathbb{F}u$. Then the homogeneous Besov norm $\dot B^{s}_{p,1}$ is defined as \begin{equation}\label{eq:besovn} \|u\|_{\dot B^{s}_{p,1}}= \sum_{j\in \mathbb{Z}}2^{js}\|u_{j}\|_{L^{p}}. \end{equation} We can apply Theorem \ref{the:Our1Thm}-(ii) to each component $u_{j}$ in the full range of indices $1\le p\le q\le \infty$, with a constant independent of $j$.
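\noindent Explicitly, the componentwise estimate reads \begin{equation*} \||x|^{-\beta}T_{\gamma} u_{j}\| _{L^{q}_{|x|}L^{\widetilde{q} }_{\theta}} \le C \, \| |x|^{\alpha} u_{j}\|_{L^{p}_{|x|}L^{\widetilde{p} }_{\theta}}, \qquad j\in \mathbb{Z}, \end{equation*} with $C$ independent of $j$.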
By the standard trick $\widetilde{u}_{j}=u_{j-1}+u_{j}+u_{j+1}$, $u_{j}=\phi_{j}(D)\widetilde{u}_{j}$ we obtain the estimate \begin{equation}\label{eq:besovest} \||x|^{-\beta}T_{\gamma} u\| _{L^{q}_{|x|}L^{\widetilde{q} }_{\theta}} \le C \sum_{j\in \mathbb{Z}} \| |x|^{\alpha} \widetilde{u}_{j}\|_{L^{p}_{|x|}L^{\widetilde{p} }_{\theta}} \end{equation} for the full range $1\le p\le q\le \infty$, $1\le \widetilde{p}\le \widetilde{q}\le \infty$, with $\alpha,\beta,\gamma$ satisfying \eqref{eq:condDLsob}. The right hand side can be interpreted as a weighted norm of Besov type with different radial and angular integrability; spaces of this kind were already considered in \cite{ChoNakanishi10-a}. In the special case $\alpha=0$, $p=\widetilde{p}>1$ we obtain a standard Besov norm \eqref{eq:besovn}, and hence the estimate (with the optimal choice $\widetilde{q}=\widetilde{p}=p$) reduces to \begin{equation}\label{eq:besovtrue} \||x|^{-\beta}T_{\gamma} u\| _{L^{q}_{|x|}L^{p }_{\theta}} \le C \|u\|_{\dot B^{0}_{p,1}}. \end{equation} This estimate is weaker than \eqref{ourHLS} when the third condition in \eqref{eq:condDL} is strict, but in the case of equality it gives a new estimate: recalling \eqref{eq:derivT}, we have proved the following \begin{corollary}\label{cor:besov} For all $1< p\le q\le \infty$ we have \begin{equation}\label{eq:truesob} \||x|^{\frac {n-1}p-\frac {n-1}q} u\| _{L^{q}_{|x|}L^{p }_{\theta}} \le C \|u\|_{\dot B^{\frac1p-\frac1q}_{p,1}}. \end{equation} \end{corollary} \noindent If we restrict \eqref{eq:truesob} to radial functions and $q=\infty$, we obtain the well-known radial pointwise estimate \begin{equation}\label{eq:radialbes} |x|^{\frac {n-1}p}|u|\le C \|u\|_{\dot B^{1/p}_{p,1}}\qquad 1< p<\infty \end{equation} (see \cite{ChoOzawa09-a}, \cite{SickelSkrzypczak00-a}). \section{Caffarelli-Kohn-Nirenberg weighted interpolation inequalities} \noindent In this section we use the technology outlined before together with interpolation.
In this way we can extend the Caffarelli-Kohn-Nirenberg inequalities to the $L^{p}L^{\widetilde{p}}$ setting as well. Such inequalities are fundamental tools in mathematical analysis and in the theory of PDEs. In particular, they are very useful in the context of the Navier-Stokes equations, since they provide a priori estimates for weak solutions by interpolation of quantities related to the energy dissipation. We start from the family of inequalities on $\mathbb{R}^{n}$, $n\ge1$, \begin{equation}\label{eq:CKN} \||x|^{-\gamma}u\|_{L^{r}}\le C \||x|^{-\alpha}\nabla u\|^{a}_{L^{p}} \||x|^{-\beta}u\|^{1-a}_{L^{q}} \end{equation} for the range of parameters \begin{equation}\label{eq:rangeCKN} n\ge1,\qquad 1\le p<\infty,\qquad 1\le q<\infty,\qquad 0<r<\infty,\qquad 0< a\le1. \end{equation} Some conditions are immediately seen to be necessary for the validity of \eqref{eq:CKN}: to ensure local integrability we need \begin{equation}\label{eq:intCKN} \gamma<\frac nr\qquad \alpha<\frac np\qquad \beta<\frac nq \end{equation} and by scaling invariance we need to assume \begin{equation}\label{eq:scaCKN} \gamma-\frac nr= a\left(\alpha+1-\frac np\right)+ (1-a)\left(\beta-\frac nq\right). \end{equation} In \cite{CaffarelliKohnNirenberg84-a} the following remarkable result was proved, which improves and extends a number of earlier estimates including weighted Sobolev and Hardy inequalities: \begin{theorem}[\cite{CaffarelliKohnNirenberg84-a}]\label{the:CKN} Consider the inequalities \eqref{eq:CKN} in the range of parameters given by \eqref{eq:intCKN}, \eqref{eq:rangeCKN}, \eqref{eq:scaCKN}. Denote by $\mathbf\Delta$ the quantity \begin{equation}\label{eq:Delta} \mathbf\Delta=\gamma-a \alpha-(1-a)\beta \equiv a +n \left( \frac1r-\frac{1-a}{q}-\frac ap \right) \end{equation} (the identity in \eqref{eq:Delta} is a reformulation of the scaling relation \eqref{eq:scaCKN}).
Then the inequalities \eqref{eq:CKN} are true if and only if both the following conditions are satisfied: \begin{enumerate}\setlength{\itemindent}{-0pt} \renewcommand{\labelenumi}{\textit{(\roman{enumi})}} \item $\mathbf\Delta\ge0$ \item $\mathbf\Delta\le a$ when $\gamma-n/r=\alpha+1-n/p$. \end{enumerate} \end{theorem} \begin{remark}\label{rem:original} Notice that in the original formulation of \cite{CaffarelliKohnNirenberg84-a} the case $a=0$ was also considered, but with the introduction of an additional parameter forcing $\beta=\gamma$ when $a=0$. Thus the case $a=0$ becomes trivial in the original formulation; however, at least for $r>1$, a much larger range $0\le \gamma-\beta<n$ can be obtained by a direct application of the Hardy-Littlewood-Sobolev inequality, so strictly speaking the additional requirement $\beta=\gamma$ is not necessary. We think the formulation adopted here is cleaner. \noindent On the other hand, the necessity of (i) follows from the uniformity of the estimate with respect to translations, while the necessity of (ii) is proved by testing the inequality on the spikes $|x|^{\gamma-n/r}\log|x|^{-1}$ truncated near $x=0$. \end{remark} \noindent In \cite{DenapoliDrelichmanDuran09-a} the authors prove the following radial improvement of Theorem \ref{the:CKN}: \begin{theorem}[\cite{DenapoliDrelichmanDuran09-a}] \label{the:CKNDDD} Let $n\ge2$, let $\alpha,\beta,\gamma,r,p,q,a$ be in the range determined by \eqref{eq:intCKN}, \eqref{eq:rangeCKN}, \eqref{eq:scaCKN}, define $\mathbf\Delta$ as in \eqref{eq:Delta}, and assume that \begin{equation}\label{eq:CKNDDD} a\left(1-\frac np\right)\le \mathbf\Delta\le a,\qquad \alpha<\frac np-1, \end{equation} the first inequality being strict when $p=1$. Then estimate \eqref{eq:CKN} is true for all radial functions $u\in C^{\infty}_{c}(\mathbb{R}^{n})$.
\end{theorem} \noindent We have somewhat simplified the statement of Theorem 1.1 in \cite{DenapoliDrelichmanDuran09-a}; in particular, conditions (1.8)-(1.10) in that paper are equivalent to \eqref{eq:CKNDDD} here, as is readily seen. Notice that the condition $\mathbf\Delta\le a$ forces $r$ to be larger than 1. \noindent Using the $L^{p}L^{\widetilde{p}}$ norms we can extend both Theorems \ref{the:CKN} and \ref{the:CKNDDD}. For greater generality we prove an estimate with fractional derivatives \begin{equation*} |D|^{\sigma}=(-\Delta)^{\frac \sigma2},\qquad \sigma>0. \end{equation*} Our result is the following: \begin{theorem}\label{the:Our2Thm} Let $n\ge2$, $r,\widetilde{r},p,\widetilde{p},q,\widetilde{q}\in[1,+\infty)$, $0<a\le1$, $0<\sigma<n$ with \begin{equation}\label{eq:Ico} \gamma<\frac nr,\qquad \beta<\frac nq,\qquad \frac np-n<\alpha<\frac np-\sigma \end{equation} satisfying the scaling condition \begin{equation}\label{eq:scalingCKN} \gamma-\frac nr= a\left(\alpha+\sigma-\frac np\right)+ (1-a)\left(\beta-\frac nq\right). \end{equation} Define the quantities \begin{equation}\label{eq:Deltas} \mathbf\Delta=a \sigma+n \left( \frac1r-\frac{1-a}{q}-\frac ap \right),\qquad \widetilde{\mathbf\Delta}=a \sigma+n \left( \frac1\widetilde{r}-\frac{1-a}{\widetilde{q}}-\frac a\widetilde{p} \right) \end{equation} and assume further that \begin{equation}\label{eq:IIco} \mathbf\Delta+(n-1)\widetilde{\mathbf\Delta}\ge0, \end{equation} \begin{equation}\label{eq:IIIco} 1<p,\qquad a\left(\sigma-\frac np\right)<\mathbf\Delta\le a \sigma, \qquad a\left(\sigma-\frac n\widetilde{p}\right)\le \widetilde{\mathbf\Delta}\le a \sigma. \end{equation} Then the following interpolation inequality holds: \begin{equation}\label{eq:CKNnostra} \||x|^{-\gamma}u\|_{L^{r}_{|x|}L^{\widetilde{r}}_{\theta}}\le C \||x|^{-\alpha}|D|^{\sigma} u\|^{a}_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} \||x|^{-\beta}u\|^{1-a}_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}.
\end{equation} If one assumes strict inequality in \eqref{eq:IIco}, then the inequalities in \eqref{eq:IIIco} can be relaxed to non-strict inequalities. \end{theorem} \noindent When $\sigma$ is an integer, the condition on $\alpha$ from below can be dropped, and a slightly stronger estimate can be proved. We introduce the notation \begin{equation*} \||x|^{-\alpha}D^{\sigma} u\|_{L^{p}L^{\widetilde{p}}}= \sum_{|\nu|=\sigma} \||x|^{-\alpha}D^{\nu} u\|_{L^{p}L^{\widetilde{p}}},\qquad \nu=(\nu_{1},\dots,\nu_{n})\in \mathbb{N}^{n}. \end{equation*} Then we have: \begin{corollary}\label{cor:integers} Assume $\sigma=1,\dots,n-1$ is an integer. Then the following estimate holds \begin{equation}\label{eq:CKNnostraint} \||x|^{-\gamma}u\|_{L^{r}_{|x|}L^{\widetilde{r}}_{\theta}}\le C \||x|^{-\alpha}D^{\sigma} u\|^{a}_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} \||x|^{-\beta}u\|^{1-a}_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \end{equation} provided the parameters satisfy the same conditions as in the previous theorem, with the exception of the condition $\alpha>-n+n/p$, which is not necessary. \end{corollary} \begin{remark}\label{rem:comparCKN} If $\sigma=1$, Corollary \ref{cor:integers} contains both the original result of \cite{CaffarelliKohnNirenberg84-a} (for $\mathbf\Delta\le a$) and the radial improvement of \cite{DenapoliDrelichmanDuran09-a}. \noindent Indeed, if we choose $p=\widetilde{p}$, $q=\widetilde{q}$, $r=\widetilde{r}$ in Corollary \ref{cor:integers} we of course get $\mathbf\Delta=\widetilde{\mathbf\Delta}$, and selecting $\sigma=1$ we recover the original inequality \eqref{eq:CKN} in the range $0\le \mathbf\Delta\le a$. \noindent On the other hand, if $u$ is a radial function, estimate \eqref{eq:CKNnostraint} does not depend on the choice of $\widetilde{p},\widetilde{q},\widetilde{r}$ and we can let $\widetilde{\mathbf\Delta}$ assume an arbitrary value in the range \eqref{eq:IIIco}.
Thus if $\mathbf\Delta>a(\sigma-n/p)$ we can choose $\widetilde{\mathbf\Delta}=0$, while if $\mathbf\Delta=a(\sigma-n/p)$ we can choose $\widetilde{\mathbf\Delta}=\epsilon>0$ arbitrarily small, recovering the results of Theorem \ref{the:CKNDDD}. \end{remark} \noindent Again, Theorem \ref{the:Our2Thm} can be obtained as a consequence of Theorem \ref{the:Our1Thm}. \begin{proof} We begin by taking $0<a\le1$, and indices $r,\widetilde{r},s,\widetilde{s},q,\widetilde{q}\in[1,+\infty]$ such that \begin{equation}\label{eq:rsq} \frac1r=\frac as+\frac{1-a}q,\qquad \frac1\widetilde{r}=\frac a\widetilde{s}+\frac{1-a}\widetilde{q}. \end{equation} Then by two applications of H\"older's inequality we obtain the interpolation inequality \begin{equation}\label{eq:holder} \begin{split} \||x|^{-\gamma}u\|_{L^{r}L^{\widetilde{r}}} =& \|(|x|^{-\delta}u)^{a}(|x|^{-\beta}u)^{1-a}\|_{L^{r}L^{\widetilde{r}}} \\ \le& \|(|x|^{-\delta}u)^{a}\|_{L^{s/a}L^{\widetilde{s}/a}} \|(|x|^{-\beta}u)^{1-a}\|_{L^{q/(1-a)}L^{\widetilde{q}/(1-a)}} \\ =& \||x|^{-\delta}u\|^{a}_{L^{s}L^{\widetilde{s}}} \||x|^{-\beta}u\|^{1-a}_{L^{q}L^{\widetilde{q}}} \end{split} \end{equation} provided the exponents $\gamma,\delta,\beta$ are related by \begin{equation}\label{eq:expon} \gamma=a \delta+(1-a)\beta. \end{equation} Now we come to the main step of the proof. By Theorem \ref{the:Our1Thm} we know that \begin{equation*} \||x|^{-\delta}T_{\lambda}u\|_{L^{s}L^{\widetilde{s}}}\lesssim \||x|^{-\alpha}u\|_{L^{p}L^{\widetilde{p}}} \end{equation*} under suitable conditions on the indices.
Now using the well-known estimate \begin{equation}\label{eq:fractest} |u(x)|\le C_{\lambda,n} T_{\lambda}\left( \left| |D|^{n-\lambda}u \right| \right) \end{equation} the previous inequality can be equivalently written \begin{equation*} \||x|^{-\delta}u\|_{L^{s}L^{\widetilde{s}}}\lesssim \||x|^{-\alpha}|D|^{\sigma}u\|_{L^{p}L^{\widetilde{p}}},\qquad \sigma=n-\lambda \end{equation*} which together with \eqref{eq:holder} gives \begin{equation}\label{eq:final} \||x|^{-\gamma}u\|_{L^{r}L^{\widetilde{r}}}\lesssim \||x|^{-\alpha}|D|^{\sigma}u\|_{L^{p}L^{\widetilde{p}}}^{a} \||x|^{-\beta}u\|^{1-a}_{L^{q}L^{\widetilde{q}}}. \end{equation} The conditions on the indices are those given by \eqref{eq:rsq}, \eqref{eq:expon}, plus those listed in the statement of Theorem \ref{the:Our1Thm} (notice that we are using $-\alpha$ instead of $\alpha$). The complete list is the following: \begin{equation}\label{eq:tot1} r,s,q,\widetilde{r},\widetilde{s},\widetilde{q}\in[1,+\infty],\qquad 0<a\le1,\qquad 0<\sigma<n, \end{equation} \begin{equation}\label{eq:tot2} \frac1r=\frac as+\frac{1-a}q,\qquad \frac1\widetilde{r}=\frac a\widetilde{s}+\frac{1-a}\widetilde{q}, \end{equation} \begin{equation}\label{eq:tot3} 1<s\le p<\infty,\qquad 1\le\widetilde{s}\le\widetilde{p}\le \infty, \end{equation} \begin{equation}\label{eq:tot4} \gamma<\frac nr,\qquad \beta<\frac nq,\qquad -\alpha<\frac n{p'},\qquad \delta<\frac ns, \end{equation} \begin{equation}\label{eq:tot5} \gamma=a \delta+(1-a)\beta, \end{equation} \begin{equation}\label{eq:tot6} -\alpha+\delta+n-\sigma=n+\frac ns-\frac np, \end{equation} \begin{equation}\label{eq:tot7} -\alpha+\delta\ge (n-1)\left(\frac1s-\frac1p+\frac1\widetilde{p}-\frac1\widetilde{s}\right). \end{equation} Recall also that, when the last inequality \eqref{eq:tot7} is strict, we can allow the full range \begin{equation*} 1\le s\le p\le \infty.
\end{equation*} Our final task is to rewrite this set of conditions in a compact form, eliminating the redundant parameters $\delta,s,\widetilde{s}$. Define the two quantities \begin{equation*} \mathbf\Delta=a \sigma+n \left( \frac1r-\frac{1-a}{q}-\frac ap \right),\qquad \widetilde{\mathbf\Delta}=a \sigma+n \left( \frac1\widetilde{r}-\frac{1-a}{\widetilde{q}}-\frac a\widetilde{p} \right). \end{equation*} Then \eqref{eq:tot2} are equivalent to \begin{equation}\label{eq:tot2b} \mathbf\Delta=a \left(\sigma+\frac ns-\frac np\right),\qquad \widetilde{\mathbf\Delta}=a \left(\sigma+\frac n\widetilde{s}-\frac n\widetilde{p}\right) \end{equation} while \eqref{eq:tot6} is equivalent to \begin{equation}\label{eq:tot6b} \delta=\alpha+\frac{\mathbf\Delta}{a} \end{equation} and we can use \eqref{eq:tot2b}, \eqref{eq:tot6b} to replace $\delta,s,\widetilde{s}$ in the remaining relations. Condition \eqref{eq:tot5} becomes \begin{equation}\label{eq:tot5b} \mathbf\Delta=\gamma-a \alpha-(1-a)\beta, \end{equation} which is precisely the scaling condition, while \eqref{eq:tot7} becomes \begin{equation}\label{eq:tot7b} \mathbf\Delta+(n-1)\widetilde{\mathbf\Delta}\ge0. \end{equation} The last inequality in \eqref{eq:tot4}, $\delta<n/s$, can be written \begin{equation*}% \alpha<\frac np-\sigma \end{equation*} so that \eqref{eq:tot4} is replaced by \begin{equation}\label{eq:tot4b} \gamma<\frac nr,\qquad \beta<\frac nq,\qquad \frac np-n<\alpha<\frac np-\sigma. \end{equation} Finally, conditions \eqref{eq:tot3} translate to \begin{equation}\label{eq:tot3b} 1<p,\qquad a\left(\sigma-\frac np\right)<\mathbf\Delta\le a \sigma, \qquad a\left(\sigma-\frac n\widetilde{p}\right)\le \widetilde{\mathbf\Delta}\le a \sigma. 
\end{equation} When the inequality in \eqref{eq:tot7b} is strict, the last condition can be relaxed to \begin{equation}\label{eq:tot3b2} 1\le p,\qquad a\left(\sigma-\frac np\right)\le\mathbf\Delta\le a \sigma, \qquad a\left(\sigma-\frac n\widetilde{p}\right)\le \widetilde{\mathbf\Delta}\le a \sigma. \end{equation} \noindent We pass now to the proof of Corollary \ref{cor:integers}. Assume now that $\sigma$ is an integer, and that the inequality \begin{equation*} \||x|^{-\gamma}u\|_{L^{r}L^{\widetilde{r}}}\le C \||x|^{-\alpha}|D|^{\sigma} u\|^{a}_{L^{p}L^{\widetilde{p}}} \||x|^{-\beta}u\|^{1-a}_{L^{q}L^{\widetilde{q}}} \end{equation*} is true for a certain choice of the parameters as in the theorem, so that in particular \begin{equation*} \alpha< \frac np-\sigma<\frac np. \end{equation*} We shall then prove that the following inequalities are also true: \begin{equation}\label{eq:goalk} \||x|^{k-\gamma}u\|_{L^{r}L^{\widetilde{r}}}\le C \||x|^{k-\alpha}D^{\sigma} u\|^{a}_{L^{p}L^{\widetilde{p}}} \||x|^{k-\beta}u\|^{1-a}_{L^{q}L^{\widetilde{q}}} \end{equation} for all integers $k\ge0$, where we are using the shorthand notation \begin{equation*} \||x|^{k-\alpha}D^{\sigma} u\|_{L^{p}L^{\widetilde{p}}}= \sum_{|\nu|=\sigma} \||x|^{k-\alpha}D^{\nu} u\|_{L^{p}L^{\widetilde{p}}},\qquad (\nu=(\nu_{1},\dots,\nu_{n})\in \mathbb{N}^{n}). \end{equation*} This in particular implies that the condition on $\alpha$ from below can be dropped when $\sigma$ is an integer. \noindent When $k=0$, \eqref{eq:goalk} is obtained just by replacing $|D|^{\sigma}$ with $D^{\sigma}$ in the original inequality. The proof of this estimate is identical to the previous one; the only modification is to use, instead of \eqref{eq:fractest}, the stronger pointwise bound \begin{equation}\label{eq:fractest2} |u(x)|\le C_{\lambda,n} T_{\lambda}\left( \left| D^{n-\lambda}u \right| \right) \end{equation} which is valid for all $\lambda=1,\dots,n-1$.
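\noindent For instance, for $\lambda=n-1$ (that is, $\sigma=1$) the bound \eqref{eq:fractest2} follows from a classical averaging argument: writing $u(x)=-\int_{0}^{\infty}\partial_{r}\left[u(x+r\theta)\right]dr$ for every direction $\theta\in\mathbb{S}^{n-1}$, averaging over $\theta$ and performing the change of variables $y=x+r\theta$, one obtains \begin{equation*} |u(x)|\le \frac{1}{|\mathbb{S}^{n-1}|} \int_{\mathbb{R}^{n}}\frac{|\nabla u(y)|}{|x-y|^{n-1}}\,dy. \end{equation*}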
\noindent Now if we apply \eqref{eq:goalk} (with $k=0$) to a function of the form $|x|^{k}u$ for some $k\ge1$, we obtain \begin{equation*} \||x|^{k-\gamma}u\|_{L^{r}L^{\widetilde{r}}}\le C \||x|^{-\alpha}D^{\sigma}(|x|^{k}u)\|^{a}_{L^{p}L^{\widetilde{p}}} \||x|^{k-\beta}u\|^{1-a}_{L^{q}L^{\widetilde{q}}} \end{equation*} and to conclude the proof we see that it is sufficient to prove the inequality \begin{equation}\label{eq:interm} \||x|^{-\alpha}D^{\sigma}(|x|^{k}u)\|_{L^{p}L^{\widetilde{p}}}\lesssim \||x|^{k-\alpha}D^{\sigma} u\|_{L^{p}L^{\widetilde{p}}} \end{equation} for all $\alpha<n/p$, $1\le p,\widetilde{p}<\infty$, and integers $\sigma=1,\dots,n-1$, $k\ge1$. Notice indeed that all the conditions on the parameters (apart from $\alpha>-n+n/p$) are unchanged if we decrease $\gamma,\alpha,\beta$ by the same quantity. \noindent By induction on $k$ (and writing $\delta=-\alpha$), we are reduced to proving that for all $p,\widetilde{p}\in[1,\infty)$ and $1\le\sigma\le n-1$ \begin{equation}\label{eq:interm2} \||x|^{\delta}D^{\sigma}(|x|u)\|_{L^{p}L^{\widetilde{p}}}\lesssim \||x|^{1+\delta}D^{\sigma} u\|_{L^{p}L^{\widetilde{p}}},\qquad \delta>\sigma-\frac np. \end{equation} Using Leibniz's rule we reduce further to \begin{equation}\label{eq:interm3} \||x|^{1+\delta-\ell}u\|_{L^{p}L^{\widetilde{p}}}\lesssim \||x|^{1+\delta}D^{\ell} u\|_{L^{p}L^{\widetilde{p}}},\qquad \delta>\ell-\frac np \end{equation} for $\ell=1,\dots,n-1$, and by induction on $\ell$ this is implied by \begin{equation}\label{eq:interm4} \||x|^{\delta}u\|_{L^{p}L^{\widetilde{p}}}\lesssim \||x|^{1+\delta}\nabla u\|_{L^{p}L^{\widetilde{p}}},\qquad \delta>1-\frac np. \end{equation} In order to prove \eqref{eq:interm4}, consider first the radial case. When $u=\phi(|x|)$ is a radial, smooth, compactly supported function, we have \begin{equation*} \||x|^{\delta}u\|_{L^{p}L^{\widetilde{p}}}^{p}\simeq \int_{0}^{\infty}\rho^{\delta p+n-1}|\phi(\rho)|^{p}d\rho.
\end{equation*} Integrating by parts (the boundary terms vanish, since $\phi$ is compactly supported and $\delta p+n>0$) we get \begin{equation*} \begin{split} =&-\frac{p}{\delta p+n}\int_{0}^{\infty} \rho^{\delta p+n}|\phi|^{p-1}\left(|\phi(\rho)|\right)'d\rho \\ \lesssim & \int_{0}^{\infty}(\rho^{\delta p+n-1}|\phi|^{p})^{\frac{p-1}{p}} (\rho^{\delta p+p+n-1}|\phi'|^{p})^{\frac1p}d\rho \\ \lesssim & \||x|^{\delta}u\|^{p-1}_{L^{p}L^{\widetilde{p}}} \||x|^{1+\delta}\nabla u\|_{L^{p}L^{\widetilde{p}}} \end{split} \end{equation*} where in the last step we used H\"older's inequality; this implies \eqref{eq:interm4} in the radial case. If $u$ is not radial, define \begin{equation*} \phi(\rho)=\|u(\rho \theta)\|_{L^{\widetilde{p}}_{\theta}(\mathbb{S}^{n-1})} = \left( \int_{\mathbb{S}^{n-1}} |u(\rho \theta)|^{\widetilde{p}}dS_{\theta} \right)^{\frac1\widetilde{p}} \end{equation*} so that \begin{equation*} \||x|^{\delta}u\|_{L^{p}L^{\widetilde{p}}}\simeq \left(\int_{0}^{\infty}\rho^{\delta p+n-1}|\phi(\rho)|^{p}d\rho \right) ^{\frac1p}. \end{equation*} The proof in the radial case implies \begin{equation*} \||x|^{\delta}u\|_{L^{p}L^{\widetilde{p}}}\lesssim \||x|^{\delta+1}\phi'(|x|)\|_{L^{p}}; \end{equation*} moreover we have \begin{equation*} \begin{split} |\phi'(\rho)|\lesssim &\ \phi^{1-\widetilde{p}} \int_{\mathbb{S}^{n-1}} |u(\rho \theta)|^{\widetilde{p}-1}|\theta \cdot \nabla u|\ dS_{\theta} \\ \le &\ \phi^{1-\widetilde{p}} \left(\int_{\mathbb{S}}|u|^{\widetilde{p}}\right)^{\frac{\widetilde{p}-1}\widetilde{p}} \left(\int_{\mathbb{S}}|\nabla u|^{\widetilde{p}}\right)^{\frac1\widetilde{p}} =\|\nabla u(\rho \theta)\|_{L^{\widetilde{p}}_{\theta}(\mathbb{S}^{n-1})} \end{split} \end{equation*} and in conclusion we obtain \begin{equation*} \||x|^{\delta}u\|_{L^{p}L^{\widetilde{p}}}\lesssim \||x|^{\delta+1}\nabla u\|_{L^{p}L^{\widetilde{p}}} \end{equation*} as claimed. \end{proof} \section{Strichartz estimates for the wave equation} \noindent As a last example, we mention an application of our result to Strichartz estimates for the wave equation; a more detailed analysis will be conducted elsewhere.
The wave flow $e^{it|D|}$ on $\mathbb{R}^{n}$, $n\ge2$, satisfies the following estimates, usually called \emph{Strichartz estimates}: \begin{equation}\label{eq:strich} \||D|^{\frac nr+\frac1p-\frac n2}e^{it|D|}f\|_{L^{p}_{t}L^{r}_{x}} \lesssim \|f\|_{L^{2}} \end{equation} provided the indices $p,r$ satisfy \begin{equation}\label{eq:admWE} p\in[2,\infty],\qquad 0<\frac1r\le\frac12-\frac{2}{(n-1)p}. \end{equation} Here the $L^{p}_{t}L^{r}_{x}$ norms are defined as \begin{equation*} \|u(t,x)\|_{L^{p}_{t}L^{r}_{x}}= \left\| \|u(t,\cdot)\|_{L^{r}_{x}} \right\|_{L^{p}_{t}}. \end{equation*} In their most general version, the estimates were proved in \cite{GinibreVelo95-b}, \cite{KeelTao98-a}. Notice that in \eqref{eq:strich} we included the extension of the estimates which can be obtained via Sobolev embedding on $\mathbb{R}^{n}$. \noindent If the initial value $f$ is a radial function, the estimates admit an improvement, in the sense that conditions \eqref{eq:admWE} can be relaxed to \begin{equation}\label{eq:admradial} p\in[2,\infty],\qquad 0<\frac1r<\frac12-\frac{1}{(n-1)p}. \end{equation} This phenomenon is connected with the finite speed of propagation for the wave equation, and is usually deduced using the space-time decay properties of the equation. For a thorough discussion and a comprehensive history of such estimates see e.g.~\cite{JiangWangYu10-a} and the references therein. \noindent A different set of estimates is given by the \emph{smoothing estimates}, also known as Morawetz-type or weak dispersion estimates. These appear in a large number of versions; a particularly sharp one is the following, from \cite{FangWang08-a}: \begin{equation}\label{eq:smooWE} \||x|^{-\zeta}|D|^{\frac12-\zeta} e^{it|D|}f\|_{L^{2}_{t}L^{2}_{x}} \lesssim \|\Lambda^{\frac12-\zeta} f\|_{L^{2}},\qquad \frac12<\zeta <\frac n2.
\end{equation} Here the operator \begin{equation*} \Lambda=(1-\Delta_{\mathbb{S}^{n-1}})^{1/2} \end{equation*} is a function of the Laplace-Beltrami operator on the sphere and acts only on the angular variables; thus we see that the flow improves the angular regularity. Morawetz-type estimates are conceptually simpler than \eqref{eq:strich}, being related to more basic properties of the operators; indeed, $L^{2}$ estimates of this type can be proved for quite large classes of equations via multiplier methods. \noindent Corresponding estimates are known for the Schr\"odinger flow $e^{it \Delta}$, and M.C.~Vilela \cite{Vilela01-a} noticed that in the radial case they can be used to deduce Strichartz estimates via the radial Sobolev embedding. Following a similar idea for the wave flow, in combination with our refined estimates \eqref{eq:weightS}, we obtain an even better result, which strengthens the standard Strichartz estimates \eqref{eq:strich}-\eqref{eq:admWE} in terms of the mixed $L_{|x|}^{p}L_{\theta}^{\widetilde{p}}$ norms. Indeed, a special case of \eqref{eq:weightS} gives, for arbitrary functions $g(x)$, \begin{equation}\label{eq:special} \|g\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}\lesssim \||x|^{\alpha}|D|^{\alpha+\frac n2-\frac nq}g\|_{L^{2}},\qquad q,\widetilde{q}\in[2,\infty),\qquad \frac n2>\alpha\ge(n-1)\left(\frac1q-\frac1\widetilde{q}\right) \end{equation} with the exclusion of the case $\alpha=0$, $q=\widetilde{q}=2$.
Then by \eqref{eq:special} and \eqref{eq:smooWE} we obtain the refined Strichartz estimates \begin{equation}\label{eq:prestrich} \||x|^{-\delta}|D|^{\frac nq+\frac12-\frac n2-\delta}e^{it|D|}f\| _{L^{2}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \lesssim \|\Lambda^{-\epsilon} f\|_{L^{2}} \end{equation} provided \begin{equation}\label{eq:condpre} q,\widetilde{q}\in[2,+\infty),\qquad \delta<\frac nq,\qquad 0<\epsilon<\frac{n-1}{2},\qquad 0<\frac1q<\frac1\widetilde{q}-\frac{1}{2(n-1)} \end{equation} and \begin{equation}\label{eq:condpre2} \epsilon\le \delta+(n-1)\left(\frac1\widetilde{q}-\frac{1}{2(n-1)}-\frac1q\right). \end{equation} \noindent We will not exploit the results of this section elsewhere in the thesis; they should be regarded simply as an application of our point of view to a different class of problems. We hope to come back to Strichartz estimates in $L^{p}L^{\widetilde{p}}$ spaces in future works. \chapter{Introduction to the regularity problem for the Navier-Stokes equation}\label{chap2} \noindent In this chapter we introduce the Cauchy problem for the Navier-Stokes approximation of fluid motion in the whole space, that is, \begin{equation}\label{CauchyNS} \left \{ \begin{array}{rcccl} \partial_{t}u + (u \cdot \nabla) u +\nabla p -\Delta u & = & 0 & \quad \mbox{in}& \quad [0,T)\times \mathbb{R}^{n} \\ \nabla \cdot u & = & 0 & \quad \mbox{in} & \quad [0,T)\times \mathbb{R}^{n} \\ u & = & u_{0} & \quad \mbox{in}& \quad \{0\} \times \mathbb{R}^{n}. \end{array}\right. \end{equation} Here $u = (u_{1},\dots,u_{n})$ is the velocity field, $p$ is the pressure, and the viscosity has been set to one; no external force is acting. The first equation is Newton's law, while the second guarantees the incompressibility of the fluid. In order to enforce incompressibility also at time $t=0$, we only consider initial data $u_{0}$ such that $\nabla \cdot u_{0}=0$.
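\noindent To fix the ideas, we recall the formal a priori estimate at the heart of the theory of weak solutions: taking the scalar product of the first equation with $u$ and integrating over $\mathbb{R}^{n}$, both the transport term and the pressure term vanish thanks to $\nabla \cdot u =0$, and we obtain the energy identity \begin{equation*} \frac{1}{2}\frac{d}{dt}\|u(t)\|^{2}_{L^{2}(\mathbb{R}^{n})} + \|\nabla u(t)\|^{2}_{L^{2}(\mathbb{R}^{n})} = 0, \end{equation*} so that, at least formally, the kinetic energy is non-increasing and the total dissipation $\int_{0}^{T}\|\nabla u(t)\|^{2}_{L^{2}(\mathbb{R}^{n})}\,dt$ is controlled by $\|u_{0}\|^{2}_{L^{2}(\mathbb{R}^{n})}$.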
It is therefore useful to define the space \begin{equation} L^{2}_{\sigma}(\mathbb{R}^{n}) = \overline{ \left\{ u_{0} \in C^{\infty}_{0}(\mathbb{R}^{n})\quad \mbox{s.t.} \quad \int_{\mathbb{R}^{n}} |u_{0}|^{2} \ dx < +\infty, \ \nabla \cdot u_{0} =0 \right\} }. \end{equation} We will use the same notation for the norms of scalar, vector or tensor quantities; the meaning will be clear from the context. For instance we will write $\|p\|^{2}_{L^{2}(\mathbb{R}^{n})}=\int_{\mathbb{R}^{n}}|p|^{2} \ dx$, $\|u\|^{2}_{L^{2}(\mathbb{R}^{n})} = \int_{\mathbb{R}^{n}}\sum_{i=1}^{n}u_{i}^{2} \ dx$, $\|\nabla u\|^{2}_{L^{2}(\mathbb{R}^{n})} = \int_{\mathbb{R}^{n}}\sum_{i,j=1}^{n}(\partial_{i}u_{j})^{2} \ dx$. We will also write $u \in L^{2}(\mathbb{R}^{n})$ instead of $u \in ({L^{2}(\mathbb{R}^{n})})^{n}$, and so on. The well-posedness of (\ref{CauchyNS}) is a well-known mathematical challenge, and only partial results have been obtained. The main question is: given an initial datum $u_{0}$ in the Schwartz class, does there exist a unique global solution of problem (\ref{CauchyNS})? \noindent In this chapter we will give an excursus on classical theorems, starting with the pioneering works of Leray, Hopf, Serrin and Kato \cite{Ler,Hopf,Ser}. These classical results have been improved in many different directions, and in this thesis we will focus mainly on the weighted norm approach, which also has a wide reference literature; see \cite{YongZhou}, \cite{Kukavica}, et al. We will also briefly discuss \cite{Tat} and \cite{CKN}. The first is particularly relevant because it seems to provide a sharp version of the small data existence theory. The second is a celebrated landmark of the regularity theory. \section{Equivalence between the differential and integral formulation} \noindent In this section we give the integral formulation of problem (\ref{CauchyNS}). This formulation is very useful in order to study both local (in time) well-posedness and global well-posedness with small data.
In these cases, starting from the integral formulation it is immediate to regard equation (\ref{CauchyNS}) as a perturbed heat equation, so that fixed point techniques become available. Of course this is of no help when we look for global solutions with large initial data. We basically follow \cite{Lem}, and we omit almost all the proofs in order to keep the presentation compact; in any case the proofs are classical and can easily be found in the literature. Let us come back to the system: $$ \left \{ \begin{array}{rcl} \partial_{t}u + (u \cdot \nabla) u +\nabla p & = & \Delta u \\ \nabla \cdot u & = & 0 \\ u & = & u_{0}, \end{array}\right. $$ or in components $(i=1,...,n)$: $$ \left \{ \begin{array}{rcl} \partial_{t}u_{i} + \sum_{j=1}^{n}u_{j} \partial_{j} u_{i} +\partial_{i} p & = & \sum_{j=1}^{n}\partial_{jj}u_{i} \\ \sum_{i=1}^{n}\partial_{i} u_{i} & = & 0. \end{array}\right. $$ By taking the divergence of the first equation and using $\nabla \cdot u =0$ we get: \begin{eqnarray}\label{PressureRie} 0&=& \sum_{i=1}^{n} \partial_{i} \sum_{j=1}^{n}u_{j}\partial_{j}u_{i} + \Delta p \\ &=& \sum_{i,j=1}^{n}\partial_{i}\partial_{j}(u_{i}u_{j}) + \Delta p, \end{eqnarray} so $p$ can, at least formally, be recovered from $u$ through: \begin{equation} p= - \frac{1}{\Delta}\sum_{i,j=1}^{n}\partial_{i}\partial_{j}(u_{i}u_{j}). \end{equation} Using this relation, the system can be written as: \begin{equation}\label{CauchyNSInt} \left \{ \begin{array}{rclcl} u & = & e^{t\Delta}u_{0} - \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P} \nabla \cdot (u \otimes u) ds &\quad \mbox{in} \quad & [0,T)\times \mathbb{R}^{n} \\ \nabla \cdot u & = & 0 &\quad \mbox{in} \quad & [0,T)\times \mathbb{R}^{n}, \end{array}\right. \end{equation} where $(u\otimes u)_{i,j} = u_{i}u_{j}$ and $\mathbb{P}$ is formally: \begin{equation}\label{LerProj} \mathbb{P}f = f - \nabla \frac{1}{\Delta}\nabla \cdot f, \end{equation} or in components $$ (\mathbb{P} f)_{i} = f_{i} - \sum_{j=1}^{n}\frac{ \partial_{i}\partial_{j}}{\Delta} f_{j}.
$$ The operator $\mathbb{P}$, which is a very useful tool in the study of the Navier-Stokes problem, was introduced by Leray in \cite{Ler}. It is a projection operator onto the subspace of divergence free vector fields: it is in fact easy to show that $\mathbb{P}f=f \Leftrightarrow \nabla \cdot f =0$. In order to make the formal computation above precise, we first have to make sense of the operator $\mathbb{P}$. This is easy if we restrict to $f \in L^{2}(\mathbb{R}^{n})$; in this case $\mathbb{P}f$ is defined by: $$ \mathbb{P} f = f + (R \otimes R) f $$ or in components: $$ (\mathbb{P}f)_{i} = f_{i} + \sum_{j}R_{i}R_{j}f_{j}, $$ where $R_{j}$ is the Riesz transform in the direction $j$, defined by the symbol $i\frac{\xi_{j}}{|\xi|}$. This is a simple way to define $\mathbb{P}$, even if it can be defined on larger Banach spaces as a Calder\'on-Zygmund operator; details can be found in \cite{Lem}. In any case, we are mainly interested in the operator $\mathbb{P}(\nabla \cdot f)$, which appears in the integral formulation (\ref{CauchyNSInt}); it is defined componentwise by: $$ (\mathbb{P}\nabla \cdot (u \otimes u))_{i} = \sum_{j=1}^{n}\partial_{j} (u \otimes u)_{i,j} - \sum_{j,k=1}^{n}\frac{1}{\Delta} \partial_{i}\partial_{j}\partial_{k} (u \otimes u)_{j,k}. $$ In this case the additional differentiation allows one to extend the definition to a larger class of Banach spaces.
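\noindent The projection property is transparent on the Fourier side: from \eqref{LerProj} we get, for $f \in L^{2}(\mathbb{R}^{n})$, \begin{equation*} \widehat{\mathbb{P}f}(\xi) = \hat{f}(\xi) - \frac{\xi \left(\xi \cdot \hat{f}(\xi)\right)}{|\xi|^{2}}, \end{equation*} so that at each frequency $\xi$ the vector $\hat{f}(\xi)$ is projected orthogonally onto the hyperplane $\xi^{\perp}$. In particular $\mathbb{P}^{2}=\mathbb{P}$, $\|\mathbb{P}f\|_{L^{2}}\le \|f\|_{L^{2}}$, and $\mathbb{P}f=f$ if and only if $\xi \cdot \hat{f}(\xi)=0$ almost everywhere, that is, if and only if $\nabla \cdot f=0$.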
To give a precise definition (again from \cite{Lem}) we need two auxiliary spaces: \begin{definition} Let us define the spaces $WL^{\infty}(\mathbb{R}^{n}), L^{1}_{uloc}(\mathbb{R}^{n})$: \begin{itemize} \item the space $WL^{\infty}(\mathbb{R}^{n})$ is the Banach space of the Lebesgue measurable functions $\phi$ on $\mathbb{R}^{n}$ such that \begin{equation}\label{aaa} \sum_{k \in \mathbb{Z}^{n}} \sup_{x \in \{ k+[0,1]^{n} \}} | \phi(x)| < +\infty, \end{equation} equipped with the norm (\ref{aaa}); \item the space $L^{1}_{uloc}(\mathbb{R}^{n})$ is the space of locally integrable functions equipped with the norm: $$ \| f \|_{L^{1}_{uloc}(\mathbb{R}^{n})} = \sup_{k \in \mathbb{Z}^{n}} \| \mathbbm{1}_{k+[0,1]^{n}} f \|_{L^{1}(\mathbb{R}^{n})}. $$ \end{itemize} \end{definition} \noindent The following holds: \begin{lemma}[\cite{Lem}]\label{OseenDef} The operator $\frac{1}{\Delta}\partial_{i}\partial_{j}\partial_{k}$ is a convolution operator with a kernel $T_{i,j,k}$ admitting the decomposition: $$ T_{i,j,k}=\alpha_{i,j,k}+\partial_{i}\partial_{j}\beta_{k}, $$ where $\alpha_{i,j,k} \in WL^{\infty}(\mathbb{R}^{n})$ and $\beta_{k} \in L^{1}_{loc}(\mathbb{R}^{n})$. \end{lemma} \noindent By Lemma (\ref{OseenDef}) and the inclusions $$ L^{1}_{loc} * L^{1}_{uloc} \subset L^{1}_{uloc}, \qquad WL^{\infty}*L^{\infty} \subset L^{\infty}, $$ it turns out that $\mathbb{P}(\nabla \ \cdot)$ can be defined on the space $(L^{1}_{uloc}(\mathbb{R}^{n}))^{n \times n}$. Now we focus on some properties of convolutions with the Oseen kernel, so we consider: $$ \frac{1}{\Delta}\partial_{i}\partial_{j}e^{t\Delta}. $$ The following holds: \begin{lemma}[\cite{Lem}]\label{OseenKernelTheorem} Let $1\leq i,j \leq n$.
The operator $\frac{1}{\Delta}\partial_{i}\partial_{j}e^{t\Delta}$ is a convolution operator $O_{i,j}(t)*f_{j}$, with $O_{i,j}(t) \in (C^{\infty}(\mathbb{R}^{n}))^{n\times n}$, which satisfies the homogeneity: $$ O_{i,j}(t,x) = \frac{1}{t^{\frac{n}{2}}}O_{i,j}\left(1, \frac{x}{\sqrt{t}} \right), $$ and the decay: $$ (1+|x|)^{n+|\eta|}\partial^{\eta}O_{i,j} \in (L^{\infty}(\mathbb{R}^{n}))^{n\times n}, $$ for each multi-index $\eta$. \end{lemma} \noindent This is the main technical tool we need in order to study the properties of $ e^{t\Delta}\mathbb{P}(\nabla \ \cdot \ ) $, which acts on the tensor $u\otimes u$ through $$ (e^{t\Delta} \mathbb{P}(\nabla \cdot (u \otimes u)))_{i}= e^{t\Delta} \partial_{j}(u \otimes u)_{i,j} - e^{t\Delta}\frac{1}{\Delta} \partial_{i}\partial_{j}\partial_{k} (u \otimes u)_{j,k}. $$ The following holds: \begin{proposition}[\cite{Lem}]\label{OseenDecayFinale} Let $1\leq i,j,k \leq n$. The operator $e^{t\Delta} \mathbb{P} (\nabla \ \cdot \ )$ is a convolution operator $K_{i,j,k}(t)*f_{j,k}$, with $K_{i,j,k}(t) \in (C^{\infty}(\mathbb{R}^{n}))^{n\times n}$, which satisfies the homogeneity: $$ K_{i,j,k}(t,x) = \frac{1}{t^{\frac{n+1}{2}}}K_{i,j,k}\left(1, \frac{x}{\sqrt{t}} \right), $$ and the decay: $$ (1+|x|)^{n+ 1 +|\eta|}\partial^{\eta}K_{i,j,k} \in (L^{\infty}(\mathbb{R}^{n}))^{n\times n}, $$ for each multi-index $\eta$. \end{proposition} \noindent We finish by stating a useful equivalence result: \begin{theorem}[\cite{Lem}]\label{DiffvsInt} Let $u \in \bigcap_{s<T} L^{2}_{t}L^{2}_{uloc,x}((0,s) \times \mathbb{R}^{n})$. Then the following are equivalent: \begin{enumerate} \item $u$ is a weak solution of: \begin{equation} \left \{ \begin{array}{rcccl} \partial_{t}u + \mathbb{P} \nabla \cdot (u \otimes u) & = & \Delta u & \quad \mbox{in}& \quad [0,T)\times \mathbb{R}^{n} \\ \nabla \cdot u & = & 0 & \quad \mbox{in} & \quad [0,T)\times \mathbb{R}^{n} \\ u & = & u_{0} & \quad \mbox{in}& \quad \{0\} \times \mathbb{R}^{n}. \end{array}\right.
\end{equation} \item $u$ solves the integral problem: \begin{equation} \left \{ \begin{array}{rclcl} u & = & e^{t\Delta}u_{0} + \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P} \nabla \cdot (u \otimes u) ds &\quad \mbox{in} \quad & [0,T)\times \mathbb{R}^{n} \\ \nabla \cdot u & = & 0 &\quad \mbox{in} \quad & [0,T)\times \mathbb{R}^{n}. \end{array}\right. \end{equation} \end{enumerate} \end{theorem} \section{The Leray-Hopf solutions} \noindent The modern theory of the Navier-Stokes equation starts with Leray's work \cite{Ler}, in which global existence of weak solutions of (\ref{CauchyNS}) for $L^{2}$ initial data is proved. Such weak solutions also have a physical meaning, because they respect the energy dissipation. On the other hand, the existence theorem is obtained by compactness, and neither regularity nor uniqueness has been proved in the general case. In this section we briefly sketch the ideas of Leray's theory. We start with the definition of weak solution in the context of \cite{Ler}. \begin{definition}[Leray's solutions] The pair $(u,p)$ is a weak Leray solution of the Navier-Stokes system (\ref{CauchyNS}) in $[0,T) \times \mathbb{R}^{n}$ if the following holds: \begin{enumerate} \item There exist constants $E_{0}, E_{1}$ such that: \begin{equation} \label{EnergyBoundness} \int_{\mathbb{R}^{n}} |u(t, \cdot)|^{2} \ dx \leq E_{0}, \end{equation} for almost every $t \in (0,T)$, and \begin{equation}\label{EnergyBoundness2} \int_{0}^{T}\int_{\mathbb{R}^{n}} |\nabla u |^{2} \ dxdt \leq E_{1}; \end{equation} \item $(u,p)$ satisfies (\ref{CauchyNS}) in the sense of distributions in $[0,T) \times \mathbb{R}^{n}$, that is, \begin{equation}\label{DistribSense} \int_{0}^{T} \int_{\mathbb{R}^{n}} (\partial_{t} \phi + (u \cdot \nabla) \phi) u \ dxdt + \int_{\mathbb{R}^{n}} u_{0} \phi (x,0) \ dx = \int_{0}^{T} \int_{\mathbb{R}^{n}} (\nabla \phi \cdot \nabla) u \ dxdt, \end{equation} for each $\phi \in C^{\infty}_{c}([0,T) \times \mathbb{R}^{n})$ with $\nabla \cdot \phi =0$ and
\begin{equation}\label{DistrSense2} \int_{0}^{T} \int_{\mathbb{R}^{n}} u \cdot \nabla \phi \ dxdt=0, \quad \int_{0}^{T} \int_{\mathbb{R}^{n}} p \Delta \phi + \sum_{i,j=1}^{n} u_{i}u_{j} \partial_{i}\partial_{j} \phi =0, \end{equation} for each $\phi \in C^{\infty}_{c}([0,T) \times \mathbb{R}^{n})$. \item $u$ satisfies the energy inequality: \begin{equation} \label{Energy} \int_{\mathbb{R}^{n}} |u(t, \cdot)|^{2} + 2\int_{0}^{t}\int_{\mathbb{R}^{n}} |\nabla u |^{2} \ dxdt \leq \int_{\mathbb{R}^{n}} |u_{0}|^{2}, \end{equation} for each $t \in (0,T)$. \end{enumerate} \end{definition} \noindent Condition (\ref{Energy}) expresses the dissipation of kinetic energy (the first term of the sum) caused by friction (the second term). It can be justified by taking the scalar product of (\ref{CauchyNS}) with $2u$ and integrating by parts. \noindent It is well known that Leray weak solutions are weakly continuous (see \cite{Temam}), i.e. \begin{equation}\label{WC} \lim_{t \rightarrow s} \int_{\mathbb{R}^{n}} u(t,x)w(x) \ dx = \int_{\mathbb{R}^{n}} u(s,x) w(x) \ dx \end{equation} for all $w \in L^{2}(\mathbb{R}^{n})$, and so, if $u_{0} \in L^{2}(\mathbb{R}^{n})$, then \begin{equation}\label{WCin0} \lim_{t\rightarrow 0} \int_{\mathbb{R}^{n}} u(t,x)w(x) \ dx = \int_{\mathbb{R}^{n}} u_{0}(x) w(x) \ dx, \end{equation} for all $w \in L^{2}(\mathbb{R}^{n})$. This is the sense in which $u$ attains its initial datum. In \cite{Ler} the existence of a weak solution $u$ on $\mathbb{R}^{+} \times \mathbb{R}^{n}$ of (\ref{CauchyNS}) is proved for every $u_{0} \in L^{2}_{\sigma}(\mathbb{R}^{n})$. If we set the problem in $[0,T]\times \Omega$, with $\Omega \subset \mathbb{R}^{n}$ open and bounded, and we require a zero Dirichlet condition on $[0,T] \times \partial \Omega$, an analogous result is due to Hopf \cite{Hopf}. Let us now state the precise Leray result: \begin{theorem}[\cite{Ler}]\label{Leray'sTheorem} Let $u_{0} \in L^{2}_{\sigma}(\mathbb{R}^{n})$.
There exists a weak solution $u \in L^{\infty}(\mathbb{R}^{+}; L^{2}_{\sigma}(\mathbb{R}^{n})) \cap L^{2}(\mathbb{R}^{+}; \dot{H}^{1}(\mathbb{R}^{n}))$ of the Cauchy problem (\ref{CauchyNS}) in $\mathbb{R}^{+} \times \mathbb{R}^{n}$. Moreover $u$ weakly attains its initial datum, i.e. $$ \lim_{t \rightarrow 0} \int_{\mathbb{R}^{n}} (u(t,x)-u_{0}(x))w(x) \ dx =0, \qquad \forall w \in L^{2}(\mathbb{R}^{n}), $$ and the energy inequality holds: \begin{equation}\label{energy*} \int_{\mathbb{R}^{n}} |u(t, \cdot)|^{2} + 2\int_{0}^{t}\int_{\mathbb{R}^{n}} |\nabla u |^{2} \ dxdt \leq \int_{\mathbb{R}^{n}} |u_{0}|^{2}, \qquad \forall t \in \mathbb{R}^{+}. \end{equation} \end{theorem} \begin{remark} The choice $u_{0}\in L^{2}_{\sigma}$ is of course the most natural and physically relevant for the problem: in this way all initial data with bounded energy are covered. Anyway, as noticed, this generality leads to a poor well-posedness theory, in which neither uniqueness nor persistence of regularity is guaranteed. Instead, as will be seen in the next section, well posedness can be achieved if we restrict to small (in a suitable sense) initial data. \end{remark} \begin{proof} We only sketch the proof; details can be found widely in the literature. A possible way to obtain Leray's theorem is to consider the family of regularized systems: \begin{equation}\label{CauchyNSepsilon} \left \{ \begin{array}{rcccl} \partial_{t}u^{\varepsilon} + (u^{\varepsilon}*\rho^{\varepsilon} \cdot \nabla) u^{\varepsilon} +\nabla p & = & \Delta u^{\varepsilon} & \quad \mbox{in}& \quad \mathbb{R}^{+} \times \mathbb{R}^{n} \\ \nabla \cdot u^{\varepsilon} & = & 0 & \quad \mbox{in} & \quad \mathbb{R}^{+} \times \mathbb{R}^{n} \\ u^{\varepsilon} & = & u_{0} & \quad \mbox{in}& \quad \{0\} \times \mathbb{R}^{n}, \end{array}\right.
\end{equation} where $\rho^{\varepsilon}$ is a standard mollifier of size $\varepsilon$, that is: $$ \rho \in C^{\infty}_{c}(\mathbb{R}^{n}), \qquad \rho^{\varepsilon}=\frac{1}{\varepsilon^{n}}\rho\left(\frac{x}{\varepsilon}\right). $$ Now, if $u_{0}\in L^{2}_{\sigma}(\mathbb{R}^{n})$, there exists for each $\varepsilon$ a unique global (and smooth in space) solution $u^{\varepsilon}$ of problem (\ref{CauchyNSepsilon}). Furthermore the functions $u^{\varepsilon}$ satisfy the energy inequality: \begin{equation}\label{EnergYBoundEpsilon} \int_{\mathbb{R}^{n}} |u^{\varepsilon}(t, \cdot)|^{2} + 2 \int_{0}^{t}\int_{\mathbb{R}^{n}} |\nabla u^{\varepsilon} |^{2} \ dxdt \leq \int_{\mathbb{R}^{n}} |u_{0}|^{2}, \quad \forall t > 0, \end{equation} uniformly in $\varepsilon$. This follows simply by taking the scalar product of equation (\ref{CauchyNSepsilon}) with $u^{\varepsilon}$ and integrating by parts\footnote{By integration by parts, $\int_{0}^{t}\int_{\mathbb{R}^{n}} \sum_{i,j=1}^{n} (u_{j}^{\varepsilon}*\rho^{\varepsilon}) (\partial_{j} u_{i}^{\varepsilon}) u^{\varepsilon}_{i} = \int_{0}^{t}\int_{\mathbb{R}^{n}} \sum_{i=1}^{n}(\partial_{i} p) u_{i}^{\varepsilon} =0$, since $u^{\varepsilon}$ and $u^{\varepsilon}*\rho^{\varepsilon}$ are divergence free.}: \begin{eqnarray}\nonumber 0 &=&\int_{0}^{t}\int_{\mathbb{R}^{n}} \left( \partial_{t}u^{\varepsilon} + (u^{\varepsilon}*\rho^{\varepsilon} \cdot \nabla) u^{\varepsilon} +\nabla p -\Delta u^{\varepsilon} \right)\cdot u^{\varepsilon} \ dxdt \\ \nonumber &=& \int_{0}^{t}\int_{\mathbb{R}^{n}} \sum_{i=1}^{n}(\partial_{t}u_{i}^{\varepsilon}) u_{i}^{\varepsilon} + \sum_{i,j=1}^{n} (u_{j}^{\varepsilon}*\rho^{\varepsilon}) (\partial_{j} u_{i}^{\varepsilon}) u^{\varepsilon}_{i} + \sum_{i=1}^{n}(\partial_{i} p) u_{i}^{\varepsilon} -\sum_{i,j=1}^{n}(\partial_{j}\partial_{j} u_{i}^{\varepsilon}) u_{i}^{\varepsilon} \ dxdt \\ \nonumber &=& \int_{0}^{t}\int_{\mathbb{R}^{n}} \frac{1}{2}\partial_{t}|u^{\varepsilon}|^{2} \ dxdt + \int_{0}^{t}\int_{\mathbb{R}^{n}} |\nabla u^{\varepsilon}|^{2} \ dxdt \\ \nonumber &=& \frac{1}{2}\int_{\mathbb{R}^{n}} |u^{\varepsilon}(t,\cdot)|^{2} \ dx - \frac{1}{2}\int_{\mathbb{R}^{n}} |u_{0}|^{2} \ dx + \int_{0}^{t}\int_{\mathbb{R}^{n}} |\nabla u^{\varepsilon}|^{2} \ dxdt. \nonumber \end{eqnarray} \noindent Inequality (\ref{EnergYBoundEpsilon}) allows us to recover $u$ by compactness from the sequence $u^{\varepsilon}$. The solutions $u^{\varepsilon}$ are built as Picard sequences for the integral formulation of (\ref{CauchyNSepsilon}), i.e. \begin{equation}\label{CauchyNSIntEpsilon}\nonumber \left \{ \begin{array}{rclcl} u^{\varepsilon} & = & e^{t\Delta}u_{0} + \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P} \nabla \cdot ((u^{\varepsilon}*\rho^{\varepsilon}) \otimes u^{\varepsilon}) ds &\quad \mbox{in} \quad & \mathbb{R}^{+} \times \mathbb{R}^{n} \\ \nabla \cdot u^{\varepsilon} & = & 0 &\quad \mbox{in} \quad & \mathbb{R}^{+} \times \mathbb{R}^{n}. \end{array}\right. \end{equation} So $u^{\varepsilon}$ is the limit of: \begin{equation}\label{PicardForLerayEpsilon} \begin{array}{rcl} u^{\varepsilon}_{1} & = &e^{t\Delta}u_{0} \\ u^{\varepsilon}_{2} & = & e^{t \Delta}u_{0} + \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot ((u^{\varepsilon}_{1}*\rho^{\varepsilon}) \otimes u^{\varepsilon}_{1})(s)ds \\ u^{\varepsilon}_{n} &=& e^{t\Delta}u_{0} + \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot ((u^{\varepsilon}_{n-1}*\rho^{\varepsilon}) \otimes u^{\varepsilon}_{n-1})(s)ds. \end{array} \end{equation} \end{proof} \noindent Notice finally that $$ u \in \bigcap_{s<T} L^{2}_{t}L^{2}_{uloc,x}((0,s) \times \mathbb{R}^{n}), \quad \forall T \in \mathbb{R}^{+}, $$ because of (\ref{energy*}); so by Theorem \ref{DiffvsInt} we have the integral representation: $$ u = e^{t\Delta}u_{0} + \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P} \nabla \cdot (u \otimes u) ds.
$$ \section{Regularity criteria} \noindent As seen in Theorem (\ref{Leray'sTheorem}), weak global solutions of problem (\ref{CauchyNS}) are always available for initial data $u_{0}$ with bounded energy ($u_{0} \in L^{2}_{\sigma}(\mathbb{R}^{n})$). It is not known, on the other hand, whether such solutions are unique, nor whether they preserve the regularity of $u_{0}$. In this section we focus on partial regularity criteria. The philosophy of such criteria is that we can infer the uniqueness or regularity of weak solutions from some a priori information on the solution itself. In general a partial regularity criterion is an assertion of the following type: \ \noindent {\em Let $u$ be a Leray solution of (\ref{CauchyNS}). Assume furthermore some additional a priori properties (typically boundedness conditions) of $u$; then $u$ is the unique Leray solution of (\ref{CauchyNS}). Furthermore $u$ is $C^{\infty}$ in the space variables at each time $t>0$.} \ \noindent The first result of this kind goes back to Serrin \cite{Ser}. The author considers a slightly different setting: he works with weak solutions in open regions $\Omega \subseteq \mathbb{R}^{+} \times \mathbb{R}^{n}$, that is, pairs $(u,p)$ such that: \begin{equation}\label{WeakSolutionsInOmega} \int \int (\partial_{t} \phi + (u \cdot \nabla) \phi) u - (\nabla \phi \cdot \nabla) u = 0, \end{equation} for each $\phi \in C^{\infty}_{c}(\Omega)$ with $\nabla \cdot \phi =0$, and \begin{equation}\label{WeakSolutionsInOmegaDiv} \int \int u \cdot \nabla \phi = 0, \quad \int \int p \Delta \phi + \sum_{i,j=1}^{n} u_{i}u_{j} \partial_{i}\partial_{j} \phi =0, \end{equation} for each $\phi \in C^{\infty}_{c}(\Omega)$. Conditions (\ref{WeakSolutionsInOmega}), (\ref{WeakSolutionsInOmegaDiv}) are formally justified by taking the scalar product of the equations with the vector field $\phi$ and then integrating by parts.
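For instance, the transfer of the nonlinear term onto the test function is a standard computation, spelled out here for convenience: since $\nabla\cdot u = 0$,

```latex
\begin{align*}
\int\!\!\int_{\Omega} \big((u\cdot\nabla)u\big)\cdot\phi
&= \sum_{i,j}\int\!\!\int_{\Omega} u_{j}\,(\partial_{j}u_{i})\,\phi_{i}
 = -\sum_{i,j}\int\!\!\int_{\Omega} u_{i}\,\partial_{j}(u_{j}\phi_{i}) \\
&= -\sum_{i,j}\int\!\!\int_{\Omega} u_{i}\,u_{j}\,\partial_{j}\phi_{i}
 = -\int\!\!\int_{\Omega} \big((u\cdot\nabla)\phi\big)\cdot u,
\end{align*}
```

where in the third step the term $(\partial_{j}u_{j})\phi_{i}$ vanishes because $u$ is divergence free; the pressure term disappears as well, since $\int\!\!\int \nabla p\cdot\phi = -\int\!\!\int p\,\nabla\cdot\phi = 0$ for divergence free $\phi$.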
\begin{remark} Let $\Omega = (0,T) \times \Omega'$, where $\Omega'$ is an open subset of $\mathbb{R}^{n}$. Let then $\Psi$ be a scalar function such that: $$ \Delta \Psi =0 \quad \mbox{in} \quad \mathbb{R}^{n}, $$ and let $a:(0,T)\rightarrow \mathbb{R}$ be integrable in $(0,T)$. Then it is easy to check that: $$ u(t,x)= a(t)\nabla\Psi(x) $$ is a weak solution of (\ref{WeakSolutionsInOmega}). \end{remark} \noindent The remark shows that regularity in space and in time have to be analyzed in different ways\footnote{This is physically expected, by the incompressibility of the fluid. A small change of the fluid velocity localized in space instantly causes a global change of the fluid velocity, so the time derivatives of the velocity are expected to be more singular.}. While it is reasonable that mild a priori assumptions on $u$ are sufficient to get space regularity, stronger assumptions should be necessary in order to get time regularity. In this spirit the following holds: \begin{theorem}[\cite{Ser}, \cite{Struwe}, \cite{Sohr}]\label{SerrinRegularity} Let $u$ be a weak solution of the Navier-Stokes equation in the open space-time region $\Omega$, and define: $$ \Omega_{t}=(\{ t \} \times \mathbb{R}^{n}) \cap \Omega. $$ If $u \in L^{\infty}_{t}L^{2}_{\Omega_{t}}$, $\nabla u \in L^{2}_{t}L^{2}_{\Omega_{t}}$ and furthermore: \begin{equation}\label{SerrinScaling} \int_{0}^{+\infty} \| u \|^{s}_{L^{p}_{x}(\Omega_{t})}(t) \ dt < +\infty, \quad \frac{2}{s} + \frac{n}{p} \leq 1, \end{equation} then $u$ is $C^{\infty}$ in the space variables at each time $t$ such that $\Omega_{t} \neq \emptyset$. Assume in addition that: \begin{equation}\label{SerrinTimeDerivatives} \partial_{t} u \in L^{p}_{t}L^{2}_{\Omega_{t}}, \qquad p \geq 1. \end{equation} Then the $\partial_{x_{i}}u(t,x)$ are absolutely continuous in time and there exists a differentiable function $p(t,x)$ such that: $$ \partial_{t} u - \Delta u + ( u \cdot \nabla ) u = - \nabla p $$ almost everywhere in $\Omega$.
\end{theorem} \noindent Actually Serrin proved the theorem only in the case $\frac{2}{s} + \frac{n}{p} < 1$; the endpoint case was settled in \cite{Sohr}, \cite{Struwe}. The critical relation $\frac{2}{s} + \frac{n}{p}=1$ follows by requiring $L^{s}_{t}L^{p}_{x}$ invariance under the scaling: $$ u(t,x) \rightarrow \lambda u (\lambda^{2}t, \lambda x). $$ The regularity and the uniqueness of weak solutions are strictly related. A good example is the fact that under condition (\ref{SerrinScaling}) in $(0,T)\times \mathbb{R}^{n}$ uniqueness is also easily achieved. This is again due to Serrin (see \cite{Lem}). \begin{lemma}[J. Serrin]\label{SerrinUniqness} Let $u_{0} \in L^{2}_{\sigma}(\mathbb{R}^{n})$ and let $u$ be a Leray solution of (\ref{CauchyNS}). If furthermore \begin{equation} \int_{0}^{T} \| u \|^{s}_{L^{p}_{x}(\mathbb{R}^{n})}(t) \ dt < +\infty, \quad p \in (n,+\infty], \quad \frac{2}{s} + \frac{n}{p} =1, \end{equation} then $u$ is unique in $[0,T)$. \end{lemma} \noindent Another fundamental regularity result is due to Caffarelli, Kohn and Nirenberg \cite{CKN}. We again state it without proof, which is quite involved and can be found in the original paper or, in a simplified version, in \cite{Lin}. The authors give a local regularity criterion for system (\ref{CauchyNS}). A local criterion is an assertion of the following type: \ \noindent {\em Let $u$ be a Leray solution of (\ref{CauchyNS}) and $(s,y)$ a fixed point in the space-time $\mathbb{R}^{+}\times \mathbb{R}^{n}$. If $u$ satisfies some a priori boundedness condition near (in a sense to be specified) the point $(s,y)$, then $u$ is $C^{\infty}$ in the space variables at the point $(s,y)$.} \ \noindent To state the criterion in \cite{CKN} we need some preliminaries: \begin{definition} The parabolic cylinder $Q(s,y,r)$ with top centered at $(s,y)$ is the set: $$ Q(s,y,r) = B(y,r) \times (s-r^{2},s), $$ where $B(y,r)$ is the $n$-dimensional ball of radius $r$ centered at $y$.
The set $Q$ is important in the study of partial regularity of (\ref{CauchyNS}) because it is invariant under the scaling $(s,y) \rightarrow (\lambda^{2}s,\lambda y)$. \end{definition} \noindent We also need to consider a definition of weak solution different from Leray's one. \begin{definition}[Suitable solutions, \cite{CKN}]\label{SuitSolDef} Let $\Omega$ be an open subset of $\mathbb{R}^{+}\times \mathbb{R}^{n}$ and: $$ \Omega_{t} = (\{ t \} \times \mathbb{R}^{n}) \cap \Omega. $$ The pair $(u,p)$ is a suitable weak solution of the Navier-Stokes equation if: \begin{enumerate} \item $p \in L^{\frac{n+2}{4}}(\Omega)$; \item There exist constants $E_{0}, E_{1}$ such that: \begin{equation} \label{GenEnergyBoundness} \int_{\Omega_{t}} |u(t, \cdot)|^{2} \ dx \leq E_{0}, \end{equation} for almost every $t$ such that $\Omega_{t}\neq \emptyset$, and \begin{equation}\label{GenEnergyBoundness2} \int\int_{\Omega} |\nabla u |^{2} \ dxdt \leq E_{1}; \end{equation} \item $(u,p)$ satisfies (\ref{CauchyNS}) in the sense of distributions in $\Omega$; \item A generalized version of the energy inequality holds: \begin{equation} \label{GenEnergy} 2\int\int |\nabla u|^{2} \phi \leq \int\int |u|^{2}(\phi_{t} + \Delta \phi) + (|u|^{2} +2p)u \cdot \nabla \phi, \end{equation} for each function $\phi \in C^{\infty}_{c}(\Omega)$. \end{enumerate} \end{definition} \noindent The inequality (\ref{GenEnergy}) can be formally justified by taking the scalar product of the equation (\ref{CauchyNS}) with the vector field $\phi u$ and then integrating by parts. It is straightforward to check that Leray solutions are also suitable solutions. Furthermore, Definition (\ref{SuitSolDef}) is meaningful; in fact: \begin{theorem}[\cite{CKN}] Let $u_{0}\in L^{2}_{\sigma}(\Omega)$, with $\Omega$ an open subset of $\mathbb{R}^{n}$.
Then there exists a pair: $$(u,p): (\mathbb{R}^{+} \times \Omega, \mathbb{R}^{+} \times \Omega) \rightarrow (\mathbb{R}^{n}, \mathbb{R}),$$ such that $(u,p)$ is a suitable solution of the Navier-Stokes equation in $\mathbb{R}^{+} \times \Omega$. Furthermore $u$ attains $u_{0}$ as initial datum in the following sense: $$ \int_{\Omega}u(t,x) \phi(x) \ dx \rightarrow \int_{\Omega} u_{0}(x)\phi(x) \ dx, \qquad \mbox{as} \quad t\rightarrow 0, $$ for each $\phi \in L^{2}(\Omega)$. \end{theorem} Now we are ready to state the fundamental local regularity criterion: \begin{lemma}[\cite{CKN}]\label{CKNLemma} Let $n\geq 3$, let $\Omega$ be an open subset of $\mathbb{R}^{+}\times\mathbb{R}^{n}$ and let the pair $(u,p)$ be a suitable solution of the Navier-Stokes equation in $\Omega$. Let then $(s,y)$ be a point in $\Omega$. There exists an absolute constant $\varepsilon$ such that if: \begin{equation}\label{CKNINtegrability} \limsup_{r\rightarrow 0} \frac{1}{r^{n-2}} \int \int_{Q^{*}(r,s,y)} |\nabla u|^{2} \leq \varepsilon, \end{equation} where $Q^{*}(r,s,y) = Q(r,s + \frac{1}{8}r^{2}, y)$, then $u$ is regular ($C^{\infty}$ in the space variables) in a neighborhood of $(s,y)$. \end{lemma} \noindent In the following we will use a slightly different formulation of the lemma, which is more convenient in order to work with weighted spaces: \begin{lemma}[\cite{CKN},\cite{YongZhou}]\label{WeightedCKNLemma} Let $u_{0} \in L^{2}_{\sigma}(\mathbb{R}^{n})$ and let $u$ be a Leray solution of (\ref{CauchyNS}). Assume that \begin{equation}\label{Integrability} \int_{0}^{T} \int_{\mathbb{R}^{n}}|x|^{2-n} |\nabla u|^{2} \ dt dx < +\infty; \end{equation} then $u$ is regular ($C^{\infty}$ in the space variables) on the segment $(0,T)\times \{ 0 \}$.
\end{lemma} \begin{proof} Let $t \in (0,T)$; then \begin{eqnarray} & & \limsup_{r\rightarrow 0} \frac{1}{r^{n-2}}\int\int_{Q(r,t,0)} |\nabla u|^{2}(t,x) \ dxdt \nonumber\\ &=& \limsup_{r\rightarrow 0} \frac{1}{r^{n-2}}\int_{t-r^{2}}^{t}\int_{B(0,r)} |\nabla u|^{2}(t,x) \ dxdt \nonumber \\ & \leq & \limsup_{r\rightarrow 0} \int_{t-r^{2}}^{t}\int_{B(0,r)} |x|^{2-n}|\nabla u|^{2}(t,x) \ dxdt \nonumber \\ & \leq & \limsup_{r\rightarrow 0} \int_{t-r^{2}}^{t}\int_{\mathbb{R}^{n}} |x|^{2-n}|\nabla u|^{2}(t,x) \ dxdt =0, \nonumber \end{eqnarray} where we used that $|x|^{2-n} \geq r^{2-n}$ on $B(0,r)$ for $n \geq 3$, together with the absolute continuity of the integral (\ref{Integrability}). \end{proof} \noindent As suggested by this version of the Caffarelli-Kohn-Nirenberg lemma, a local boundedness condition, for instance (\ref{CKNINtegrability}), can be replaced by imposing boundedness in a suitable weighted $L^{p}$ space. We follow this point of view in the rest of the thesis. A complete set of partial regularity criteria can in fact be obtained by working with weighted norms, as shown in \cite{YongZhou}. Before stating these results we fix some notation: \begin{notation} Let $\alpha \in \mathbb{R}$, $p,s \in [1,+\infty)$. Let then $f:\mathbb{R}^{n} \rightarrow \mathbb{R}$, $F: \mathbb{R}^{+} \times \mathbb{R}^{n} \rightarrow \mathbb{R}$.
We will say: \begin{itemize} \item $f \in L^{p}_{|x|^{\alpha p}dx}$ if $$\Big( \int_{\mathbb{R}^{n}} |x|^{\alpha p} |f(x)|^{p} \ dx \Big)^{\frac{1}{p}} < +\infty,$$ and we denote this norm by $\|\cdot\|_{L^{p}_{|x|^{\alpha p}dx}}$, or by $\||x|^{\alpha}\cdot\|_{L^{p}_{x}}$; \item $f \in L^{\infty}_{|x|^{\alpha}dx}$ if $$\sup_{x\in \mathbb{R}^{n}} |x|^{\alpha} |f(x)| < +\infty,$$ and we denote this norm by $\|\cdot\|_{L^{\infty}_{|x|^{\alpha}dx}}$, or by $\||x|^{\alpha}\cdot\|_{L^{\infty}_{x}}$; \item $F \in L^{s}_{t}L^{p}_{|x|^{\alpha p}dx}$ if $$\Big( \int_{\mathbb{R}^{+}}\Big| \int_{\mathbb{R}^{n}} |x|^{\alpha p} |F(t,x)|^{p} \ dx \Big|^{\frac{s}{p}} \ dt \Big)^{\frac{1}{s}} < +\infty,$$ and we denote this norm by $\|\cdot\|_{L^{s}_{t}L^{p}_{|x|^{\alpha p}dx}}$, or by $\||x|^{\alpha}\cdot\|_{L^{s}_{t}L^{p}_{x}}$; \item $F \in L^{s}_{T}L^{p}_{|x|^{\alpha p}dx}$ if $$\Big( \int_{0}^{T}\Big| \int_{\mathbb{R}^{n}} |x|^{\alpha p} |F(t,x)|^{p} \ dx \Big|^{\frac{s}{p}} \ dt \Big)^{\frac{1}{s}} < +\infty,$$ and we denote this norm by $\|\cdot\|_{L^{s}_{T}L^{p}_{|x|^{\alpha p}dx}}$, or by $\||x|^{\alpha}\cdot\|_{L^{s}_{T}L^{p}_{x}}$. \end{itemize} \end{notation} \noindent We give similar definitions for $$L^{\infty}_{t}L^{p}_{|x|^{\alpha p}dx}, \quad L^{\infty}_{T}L^{p}_{|x|^{\alpha p}dx}, \quad L^{s}_{t}L^{\infty}_{|x|^{\alpha}dx}, \quad L^{s}_{T}L^{\infty}_{|x|^{\alpha}dx}.$$ \noindent As mentioned, partial regularity criteria in weighted Lebesgue spaces hold: \begin{theorem}[\cite{YongZhou}]\label{YZTheorem} Let $n \geq 3$ and let $u_{0}$ be a divergence free vector field such that $u_{0} \in H^{2}(\mathbb{R}^{n})$ and: \begin{equation}\label{Weightedu0bound} \| |x|^{1-\frac{n}{2}} u_{0}\|_{L^{2}_{x}} < +\infty.
\end{equation} If $u$ is a weak Leray solution of (\ref{CauchyNS}) such that: \begin{equation}\label{YZuBound1} \||x|^{\alpha} u \|_{L^{s}_{T}L^{p}_{x}} < +\infty, \end{equation} with \begin{equation}\label{YZCondition1} \frac{2}{s}+ \frac{n}{p} = 1-\alpha, \quad \frac{2}{1-\alpha} < s < +\infty, \quad \frac{n}{1-\alpha} < p < +\infty, \quad -1 \leq \alpha < +1; \end{equation} or \begin{equation}\label{YZuBound2} \||x|^{\alpha} u \|_{L^{2/(1-\alpha)}_{T}L^{\infty}_{x}} < +\infty, \quad -1 < \alpha < +1; \end{equation} or \begin{equation}\label{YZuBound3} \sup_{t \in (0,T)} \||x|^{\alpha} u \|_{L^{n/(1-\alpha)}_{x}} = \varepsilon, \quad -1 \leq \alpha \leq +1, \end{equation} with $\varepsilon$ sufficiently small; then $u$ is actually regular ($C^{\infty}$ in the space variables) on the segment $(0,T) \times \{ 0 \}$. \end{theorem} \noindent Of course regularity on $(0,T) \times \{\bar{x}\}$ is achieved if the weights and the norms are centered at $\bar{x}$ instead of at the origin. This is in fact a slightly different formulation of the theorem: in the original one the author gets global regularity by requiring $$ \sup_{\bar{x} \in \mathbb{R}^{n}} \||x-\bar{x}|^{1-\frac{n}{2}} u_{0}\|_{L^{2}_{x}} < +\infty, $$ and $$ \sup_{\bar{x} \in \mathbb{R}^{n}} \||x-\bar{x}|^{\alpha} u \|_{L^{s}_{T}L^{p}_{x}} < +\infty. $$ The formulation above is more useful for our purposes. \noindent This is a local regularity criterion: the weight $|x|^{\alpha}$ localizes the norms near the origin and provides regularity only at the points of $(0,T)\times \{ 0 \}$. We will show that this is actually the case only for negative values of $\alpha$; if $\alpha \geq 0$ then global regularity can be achieved. We will show that this is also the case if, taking $\alpha < 0$, we assume a suitable amount of angular integrability of the solution.
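The critical relation $\frac{2}{s}+\frac{n}{p}=1-\alpha$ appearing above can be checked directly against the Navier-Stokes scaling $u^{\lambda}(t,x)=\lambda u(\lambda^{2}t,\lambda x)$; this is a standard computation, spelled out here for convenience, using the change of variables $\tau=\lambda^{2}t$, $y=\lambda x$:

```latex
\begin{align*}
\||x|^{\alpha} u^{\lambda}\|_{L^{s}_{t}L^{p}_{x}}
&= \Big( \int_{0}^{\infty} \Big( \int_{\mathbb{R}^{n}}
   |x|^{\alpha p}\, \lambda^{p}\, |u(\lambda^{2}t,\lambda x)|^{p}\, dx
   \Big)^{\frac{s}{p}} dt \Big)^{\frac{1}{s}} \\
&= \Big( \int_{0}^{\infty} \lambda^{s(1-\alpha-\frac{n}{p})}
   \Big( \int_{\mathbb{R}^{n}} |y|^{\alpha p}\, |u(\tau,y)|^{p}\, dy
   \Big)^{\frac{s}{p}} \lambda^{-2}\, d\tau \Big)^{\frac{1}{s}}
 = \lambda^{1-\alpha-\frac{2}{s}-\frac{n}{p}}\,
   \||x|^{\alpha} u\|_{L^{s}_{t}L^{p}_{x}},
\end{align*}
```

so the weighted norm is scale invariant precisely when $\frac{2}{s}+\frac{n}{p}=1-\alpha$.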
\begin{remark} Notice that: \begin{enumerate} \item The first equation in (\ref{YZCondition1}) is the critical relation coming from requiring $L^{s}_{t}L^{p}_{|x|^{\alpha p}dx}$ invariance under the scaling: $$ u^{\lambda}: u(t,x) \rightarrow \lambda u (\lambda^{2}t,\lambda x); $$ \item The estimates (\ref{YZuBound2}), (\ref{YZuBound3}) are the endpoint versions of (\ref{YZuBound1}), obtained by setting $(s,p)$ equal to $(2/(1-\alpha), \infty)$ and $(\infty, n/(1-\alpha))$ respectively. These are consistent with the scaling relation (\ref{YZCondition1}); \item Condition (\ref{YZuBound3}) in the case $\alpha = 1$ becomes a smallness condition on the norm $\||x|^{\alpha} u \|_{L^{\infty}_{T}L^{\infty}_{x}}$, which implies that the possible behaviour of the strong solution can be $|x|^{-1+\varepsilon}$ in a neighbourhood of the origin for small $\varepsilon$. This recovers one of the main results proved in \cite{CKN}, \cite{Tian} for suitable weak solutions; \item Of course one can set $T=+\infty$ to get regularity for all times; \item The range of values for $\alpha$ does not depend on the dimension. \end{enumerate} \end{remark} \noindent An analogue of Theorem (\ref{YZTheorem}) holds by assuming a priori boundedness of $\nabla u$ in weighted Lebesgue spaces: \begin{theorem}[\cite{YongZhou}]\label{YZDerTheorem} Let $n\geq 3$ and let $u_{0}$ be a divergence free vector field such that $u_{0} \in H^{2}(\mathbb{R}^{n})$ and: \begin{equation}\label{Weightedu0boundDer} \||x|^{1-\frac{n}{2}} u_{0} \|_{L^{2}_{x}} < +\infty.
\end{equation} If $u$ is a weak Leray solution of (\ref{CauchyNS}) such that: \begin{equation}\label{YZDeruBound1} \||x|^{\alpha} \nabla u \|_{L^{s}_{T}L^{p}_{x}} < +\infty, \end{equation} with \begin{equation}\label{YZDerCondition1} \frac{2}{s}+ \frac{n}{p} = 2-\alpha, \quad 1 < s < +\infty, \quad \frac{n}{2 -\alpha} < p < +\infty, \quad -1 \leq \alpha < +2; \end{equation} or \begin{equation}\label{YZDeruBound2} \||x|^{\alpha} \nabla u \|_{L^{2/(2-\alpha)}_{T}L^{\infty}_{x}} < +\infty, \quad 0 < \alpha \leq +2; \end{equation} or \begin{equation}\label{YZDeruBound3} \sup_{t \in (0,T)} \||x|^{\alpha} \nabla u \|_{L^{n/(2-\alpha)}_{x}} = \varepsilon, \quad -1 \leq \alpha < +2, \end{equation} with $\varepsilon$ sufficiently small; then $u$ is actually regular ($C^{\infty}$ in the space variables) on the segment $(0,T) \times \{ 0 \}$. \end{theorem} \noindent We make again similar remarks. \begin{remark} Notice that: \begin{enumerate} \item The first equation in (\ref{YZDerCondition1}) is assumed again to ensure invariance with respect to the scaling: $$ u^{\lambda}: u(t,x) \rightarrow \lambda u(\lambda^{2} t,\lambda x), $$ in fact: \begin{eqnarray} \| |x|^{\alpha} \nabla u^{\lambda}\|_{L^{s}_{t}L^{p}_{x}} &=& \| |x|^{\alpha} \lambda^{2}(\nabla u)(\lambda^{2}t,\lambda x)\|_{L^{s}_{t}L^{p}_{x}} \nonumber \\ &=& \lambda^{2-\alpha-\frac{2}{s}-\frac{n}{p}}\| |x|^{\alpha}\nabla u\|_{L^{s}_{t}L^{p}_{x}}; \nonumber \end{eqnarray} \item The conditions $1<s$, $0 < \alpha$ in (\ref{YZDerCondition1}), (\ref{YZDeruBound2}) may seem artificial with respect to the more natural $\frac{2}{2-\alpha} \leq s$, $-1 \leq \alpha$, but they are necessary in order to work with $L^{s}_{t}L^{p}_{x}$ spaces with $s,p \geq 1$; \item By setting $\alpha =\frac{1}{2}$, $s=p=2$ a result similar to Lemma (\ref{CKNLemma}) is obtained; \item A global regularity result is again achieved by setting $T=+\infty$; \item The range of values for $\alpha$ again does not depend on the dimension.
\end{enumerate} \end{remark} \section{Well posedness with small data} \noindent A different approach to the existence and uniqueness of the solutions of (\ref{CauchyNS}) consists in considering small initial data. Such a strong restriction provides complete well posedness, i.e. regularity, uniqueness and decay of solutions. Solutions with small initial data have been deeply investigated since \cite{Kat}, and in \cite{Tat} the sharp case seems to be covered. The key point of the small data theory is that the nonlinear term $(u\cdot \nabla)u$ is negligible with respect to the others, so equation (\ref{CauchyNS}) can be interpreted as a perturbed heat equation. This point of view suggests performing a fixed point algorithm around the heat propagator. Let us start from Duhamel's representation: \begin{equation}\label{IntegralCauchyNS} \left \{ \begin{array}{rcll} u & = & e^{t \Delta}u_{0} + \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot (u \otimes u)(s)ds & \mbox{in} \quad [0,T)\times \mathbb{R}^{n} \\ \nabla \cdot u & = & 0 & \mbox{in} \quad [0,T) \times \mathbb{R}^{n}, \end{array}\right. \end{equation} where $\mathbb{P}$ is the Leray projection as defined in (\ref{LerProj}). So the solution is the sum of the linear propagator $e^{t\Delta}u_{0}$ and a bilinear term $B(u,u)= \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot (u \otimes u)(s)ds$. Then the Picard iteration can be performed: \begin{equation}\label{PicardForKato} \begin{array}{rcl} u_{1} & = &e^{t\Delta}u_{0} \\ u_{2} & = & e^{t \Delta}u_{0} + \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot (u_{1} \otimes u_{1})(s)ds \\ u_{n} &=& e^{t\Delta}u_{0} + \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot (u_{n-1} \otimes u_{n-1})(s)ds.
\end{array} \end{equation} It is easy to show the following: \begin{theorem}[\cite{Lem}]\label{Picard} Let $X_{T}$ be a Banach space of functions defined on $[0,T]\times \mathbb{R}^{n}$ such that the bilinear form: $$ B(u,v) = \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot (u \otimes v)(s)ds $$ is bounded from $X_{T} \times X_{T}$ to $X_{T}$. Let then $X_{0} \subset \mathbb{S}'(\mathbb{R}^{n})$ be such that $$ \|e^{t\Delta} f\|_{X_{T}} \leq C_{X_{0},X_{T}}\|f\|_{X_{0}} \qquad \forall t \in (0,T]. $$ Under these assumptions $u_{n}$ is a Cauchy sequence that converges to a solution of the integral problem (\ref{IntegralCauchyNS}). \end{theorem} \noindent Usually (see again \cite{Lem}) $X_{T}$ is called an admissible path space, while $X_{0}$ is called an adapted space. Furthermore, under a suitable smallness assumption on $\|u_{0}\|_{X_{0}}$, one can set $T=+\infty$ in Theorem \ref{Picard} and get a global existence result. Many adapted spaces have been considered in the literature since \cite{Kat}, where $u_{0} \in L^{n}$. This result has been generalized to the homogeneous Sobolev space $\dot{H}^{\frac{n}{p}-1}$ in \cite{Kat2,Kat3}, to Morrey spaces in \cite{Morrey}, to Besov spaces in \cite{Gall}, and of course many other references are possible. The biggest space in which the Picard iteration is possible seems to be $BMO^{-1}$, see \cite{Tat}. For instance the following continuous embeddings of adapted spaces hold: $$ \dot{H}^{\frac{n}{p}-1} \subset L^{n} \subset \dot{B}^{-1+\frac{n}{p}}_{p,\infty} \subset BMO^{-1}, \qquad p < \infty. $$ Even if, as mentioned, several choices are possible, we will basically work with weighted $L^{p}$ spaces. \begin{remark} Global well posedness with small data forces the adapted space $X_{0}$ to be invariant under the scaling $\lambda \rightarrow u_{0}^{\lambda}= \lambda u_{0}(\lambda x)$, i.e. $$ \|u_{0}^{\lambda}\|_{X_{0}} = \|u_{0}\|_{X_{0}}, \qquad \forall \lambda \in \mathbb{R}^{+}.
$$ This easily follows from the similarity property of equation (\ref{CauchyNS}). For instance, if we restrict to the $L^{p}$ scale, the adapted space with the right scaling is $L^{n}(\mathbb{R}^{n})$. Indeed, let $u_{0} \in L^{p}(\mathbb{R}^{n})$ with $p > n$. Of course the scaling \begin{equation} \begin{array}{rcl} u^{\lambda}(t,x) & = & \lambda u(\lambda^{2}t, \lambda x) \qquad \lambda > 0 \\ u_{0}^{\lambda}(x) & = & \lambda u_{0}(\lambda x) \end{array} \end{equation} leads to a one-parameter family of solutions of (\ref{CauchyNS}). If moreover a global well posedness result with small data were achieved, well posedness with arbitrarily large initial data would follow by letting $\lambda \rightarrow +\infty$. The same happens in the case $p<n$, letting $\lambda \rightarrow 0$. \end{remark} \noindent The first result in the small data theory goes back to \cite{Kat}: \begin{theorem}\label{SmallKato} Let $u_{0}$ be a divergence-free vector field on $\mathbb{R}^{n}$. There exists $\varepsilon >0$ such that if $$ \|u_{0}\|_{L^{n}(\mathbb{R}^{n})} < \varepsilon, $$ then there is a unique solution $u : \mathbb{R}^{+}\times \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ of the integral problem (\ref{IntegralCauchyNS}). Furthermore $u$ has the decay: \begin{equation}\label{KatSolDecay} \|u(t,\cdot)\|_{L^{q}(\mathbb{R}^{n})} \leq \frac{c_{q}}{t^{(1- \frac{n}{q})/2}}\|u_{0}\|_{L^{n}(\mathbb{R}^{n})}, \qquad t > 0, \end{equation} \begin{equation}\label{KatDerSolDecay} \|\partial u(t,\cdot)\|_{L^{q}(\mathbb{R}^{n})} \leq \frac{c_{q}}{t^{(2- \frac{n}{q})/2}}\|u_{0}\|_{L^{n}(\mathbb{R}^{n})}, \qquad t > 0, \end{equation} for all $q\in[n,+\infty]$. The solution $u$ also obeys the bound: \begin{equation}\label{KatSolBound} \|u\|_{L^{r}_{t}L^{q}_{x}} < +\infty, \quad \frac{2}{r} + \frac{n}{q} = 1, \quad n \leq q \leq \frac{n^{2}}{n-2}. \end{equation} \end{theorem} \noindent This is of course the simplest achievable small data result. As we mentioned, the optimal case seems to be covered by Koch and Tataru in \cite{Tat}.
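\begin{remark}
As a quick consistency check (not part of the original statement), let us verify that the decay rates in Theorem \ref{SmallKato} are exactly the ones dictated by the scaling of the equation. Since $\|u_{0}^{\lambda}\|_{L^{n}} = \|u_{0}\|_{L^{n}}$, the estimates must hold for the whole family $u^{\lambda}(t,x)=\lambda u(\lambda^{2}t,\lambda x)$ with constants independent of $\lambda$. A direct computation gives
$$
\|u^{\lambda}(t,\cdot)\|_{L^{q}(\mathbb{R}^{n})} = \lambda^{1-\frac{n}{q}} \|u(\lambda^{2}t,\cdot)\|_{L^{q}(\mathbb{R}^{n})},
\qquad
\|u^{\lambda}\|_{L^{r}_{t}L^{q}_{x}} = \lambda^{1-\frac{2}{r}-\frac{n}{q}}\|u\|_{L^{r}_{t}L^{q}_{x}};
$$
imposing the bound (\ref{KatSolDecay}) on $u^{\lambda}$ uniformly in $\lambda$ forces the decay rate $t^{-(1-\frac{n}{q})/2}$, while the second identity shows that the norm in (\ref{KatSolBound}) is scale invariant precisely when $\frac{2}{r}+\frac{n}{q}=1$.
\end{remark}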
To state their result we need some more definitions: \begin{definition}[$BMO(\mathbb{R}^{n})$] A tempered distribution $u$ belongs to the space $BMO(\mathbb{R}^{n})$ if \begin{equation}\label{BMONorm} \Big(\sup_{x,R>0} \frac{1}{|B(x,R)|}\int_{0}^{R^{2}}\int_{B(x,R)} |\nabla(e^{t\Delta} u)|^{2}(t,y) \ dydt\Big)^{\frac{1}{2}} < \infty. \end{equation} \end{definition} \noindent This is the Carleson characterization of $BMO(\mathbb{R}^{n})$. Other equivalent definitions can be found in \cite{Stein93-a}. The square root in (\ref{BMONorm}) is taken so that the quantity is a seminorm\footnote{Of course the quantity in (\ref{BMONorm}) vanishes on constant functions.}. \begin{remark} Let $w$ be the solution of the heat equation $$ \left \{ \begin{array}{rcl} \partial_{t}w - \Delta w & = & 0 \quad \mbox{in} \quad \mathbb{R}^{+} \times \mathbb{R}^{n} \\ w & = & v \quad \mbox{in} \quad \{0\} \times \mathbb{R}^{n}; \end{array}\right. $$ then of course $$ \|v\|_{BMO(\mathbb{R}^{n})} = \Big(\sup_{x,R>0} \frac{1}{|B(x,R)|}\int_{0}^{R^{2}}\int_{B(x,R)} | \nabla w|^{2}(t,y) \ dydt\Big)^{\frac{1}{2}}, $$ where we use the symbol $\|\cdot\|_{BMO(\mathbb{R}^{n})}$ even if the quantity is only a seminorm. \end{remark} \noindent The space $BMO^{-1}(\mathbb{R}^{n})$ is then defined by: \begin{definition}[$BMO^{-1}(\mathbb{R}^{n})$]\label{BMO^-1Def} A tempered distribution $v$ belongs to $BMO^{-1}(\mathbb{R}^{n})$ if: \begin{equation}\label{BMO^-1Norm} \| v \|_{BMO^{-1}(\mathbb{R}^{n})} = \Big(\sup_{x,R>0} \frac{1}{|B(x,R)|}\int_{0}^{R^{2}}\int_{B(x,R)} |e^{t\Delta} v|^{2}(t,y) \ dydt\Big)^{\frac{1}{2}} < \infty. \end{equation} In this case $\|\cdot\|_{BMO^{-1}(\mathbb{R}^{n})}$ is actually a norm. \end{definition} \noindent It is easy to show that if $v$ is a vector field such that $v_{i} \in BMO(\mathbb{R}^{n})$ for each $i$, then $\nabla \cdot v \in BMO^{-1}(\mathbb{R}^{n})$.
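\noindent Indeed, here is a sketch of the computation: since the heat semigroup commutes with derivatives,
$$
e^{t\Delta} \nabla \cdot v = \sum_{i=1}^{n} \partial_{i}\big(e^{t\Delta}v_{i}\big),
\qquad
\big|e^{t\Delta} \nabla \cdot v\big|^{2} \leq n \sum_{i=1}^{n} \big|\nabla \big(e^{t\Delta}v_{i}\big)\big|^{2}
$$
by the Cauchy-Schwarz inequality; plugging this into the Carleson integral defining $BMO^{-1}(\mathbb{R}^{n})$ and using the Carleson characterization of $BMO(\mathbb{R}^{n})$ for each component, we get
$$
\|\nabla \cdot v\|_{BMO^{-1}(\mathbb{R}^{n})} \leq C_{n} \Big( \sum_{i=1}^{n} \|v_{i}\|^{2}_{BMO(\mathbb{R}^{n})} \Big)^{\frac{1}{2}}
$$
for a dimensional constant $C_{n}$.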
The converse is also true, as stated by the following theorem, which establishes the precise relationship between the two spaces: \begin{theorem}[\cite{Lem}] Let $u$ be a tempered distribution. Then $u \in BMO^{-1}(\mathbb{R}^{n})$ if and only if there exist $v_{i} \in BMO(\mathbb{R}^{n})$ such that: $$ u= \sum_{i=1}^{n} \partial_{i} v_{i}. $$ \end{theorem} \noindent We also define the parabolic cylinder\footnote{This is a central object in the study of the regularity properties of (\ref{CauchyNS}) because it obeys the scaling $(t,x)\rightarrow (\lambda^{2}t,\lambda x)$; we will come back to this topic later.} centered at $x$ and with radius $R$: \begin{equation}\label{ParCyl} Q(x,R)=B(x,R)\times (0,R^{2}), \end{equation} and introduce the adapted space $X$ by: \begin{definition}[\cite{Tat}] \begin{equation} \|u\|_{X}= \sup_{t>0}t^{\frac{1}{2}}\|u(t)\|_{L^{\infty}(\mathbb{R}^{n})} + \Big(\sup_{x,R>0} \frac{1}{|B(x,R)|}\int_{Q(x,R)} |u|^{2}(t,y) \ dydt\Big)^{\frac{1}{2}}. \end{equation} \end{definition} \noindent We are now ready to state the theorem: \begin{theorem}[\cite{Tat}]\label{TatTheo} Let $u_{0}$ be a divergence-free vector field on $\mathbb{R}^{n}$. There exists $\varepsilon >0$ such that if $$ \|u_{0}\|_{BMO^{-1}(\mathbb{R}^{n})} < \varepsilon, $$ then there is a unique solution $u : \mathbb{R}^{+}\times \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ of the integral problem (\ref{IntegralCauchyNS}). Furthermore $u \in X$. \end{theorem} \noindent This result works with translation-invariant adapted spaces and, as mentioned, there is a wide literature on the topic. On the other hand, even if $X_{0}$ is not translation invariant, local regularity results are still available, but these cases have not been investigated as deeply. In the following we focus in particular on weighted spaces with power weights of the kind $|x|^{\alpha}$.
So the translation invariance is broken, but we still have to require invariance with respect to the scaling centered at the origin $$ \lambda \rightarrow \lambda u_{0}(\lambda x). $$ In this way the simplest available spaces are those endowed with the norms $$ \| |x|^{\alpha} \cdot \|_{L^{p}(\mathbb{R}^{n})}, \qquad \mbox{with} \quad \alpha = 1- \frac{n}{p}. $$ In the case $p=2$ a very interesting result has been obtained in \cite{CKN}: \begin{theorem}[Caffarelli-Kohn-Nirenberg]\label{CKNSmallData} Let $u_{0} \in L^{2}_{\sigma}(\mathbb{R}^{n})$ and let $u$ be a suitable weak solution of (\ref{CauchyNS}). There exists an absolute constant $\varepsilon_{0} > 0$ such that if $$ \| |x|^{1- n/2} u_{0} \|_{L^{2}(\mathbb{R}^{n})} = \varepsilon < \varepsilon_{0}, $$ then $u$ is regular ($C^{\infty}$ in the space variables) in the interior of the parabola $$ \Pi = \left\{ (t,x) \quad \mbox{s.t.} \quad t > \frac{|x|^{2}}{\varepsilon_{0}- \varepsilon} \right\}. $$ \end{theorem} \begin{remark} Of course, by translation the analogous result holds if one considers small data in the norm $\| |x-\bar{x}|^{\alpha} \cdot\|_{L^{p}(\mathbb{R}^{n})}$, for a fixed $\bar{x} \in \mathbb{R}^{n}$. \end{remark} \noindent We will come back to this result in the next chapter and show how it depends on the amount of angular integrability of $u_{0}$. By applying the technology developed in the first chapter, we suggest how to quantify precisely the gain of regularity that angular integrability provides. \chapter{Results in weighted setting with angular integrability} \noindent We apply the technology developed in Chapter \ref{SectInequality} to the study of the well posedness and regularity of (\ref{CauchyNS}). We take a close look at these problems in the setting of weighted $L^{p}$ spaces with different integrability in the radial and angular directions.
More precisely, we consider well posedness of (\ref{CauchyNS}) with $u_{0} \in L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}$, which is defined by \begin{equation}\label{MixedLpSect4} \begin{array}{lcl} L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta} &=& \Big\{ f \in L^{1}_{loc}(\mathbb{R}^{n}) \quad \mbox{s.t.} \\ && \left( \int_{0}^{+\infty} \|f(\rho\ \cdot\ )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} \rho^{\alpha p + n-1} d \rho \right)^{\frac1p} < +\infty \Big\}; \end{array} \end{equation} it is immediate to show that the above quantity is a norm, and we use the notation $$ \|\cdot\|_{L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}}, \quad \mbox{or} \quad \||x|^{\alpha}\cdot\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}. $$ This space is not translation invariant, as the classical adapted spaces are, but as shown in Theorem \ref{CKNSmallData} local regularity results are still achievable in this setting. \noindent We are then interested in regularity criteria in the context of the spaces: \begin{equation}\label{MixedLpSect4bis} \begin{array}{lcl} L^{s}_{T}L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta} &=& \Big\{ u \in L^{1}_{loc}( (0,T) \times \mathbb{R}^{n}) \quad \mbox{s.t.} \\ & & \left( \int_{0}^{T} \left( \int_{0}^{+\infty} \|u(t, \rho\ \cdot\ )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} \rho^{\alpha p + n-1} d \rho \right)^{\frac{s}{p}} dt \right)^{\frac1s} < +\infty \Big\}, \end{array} \end{equation} and again we denote these norms by $$ \|\cdot\|_{L^{s}_{T}L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}}, \quad \mbox{or} \quad \||x|^{\alpha}\cdot\|_{L^{s}_{T}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}. $$ We also use $\|\cdot\|_{L^{s}_{t}L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}}$ and $\||x|^{\alpha}\cdot\|_{L^{s}_{t}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}$ if $T=+\infty$. Of course, if we restrict to radially symmetric functions $u_{0}=u_{0}(|x|)$ (or $u=u(t,|x|)$), the norms reduce to the classical ones.
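\noindent For instance, for a radially symmetric $f(x)=f(|x|)$ one has $\|f(\rho\ \cdot\ )\|_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} = |\mathbb{S}^{n-1}|^{\frac{1}{\widetilde{p}}}|f(\rho)|$, hence a direct computation in polar coordinates gives
$$
\|f\|_{L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}}
= |\mathbb{S}^{n-1}|^{\frac{1}{\widetilde{p}}} \left( \int_{0}^{+\infty} |f(\rho)|^{p} \rho^{\alpha p + n-1} \, d\rho \right)^{\frac{1}{p}}
= |\mathbb{S}^{n-1}|^{\frac{1}{\widetilde{p}}-\frac{1}{p}}\, \| |x|^{\alpha} f \|_{L^{p}(\mathbb{R}^{n})},
$$
so for radial functions the angular exponent $\widetilde{p}$ only affects the constant.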
Now, it is well known that problem (\ref{CauchyNS}) is much simpler in a symmetric setting, where stronger results are achievable. The idea is to try to recover some of these improvements in the case of initial data or solutions with merely higher angular integrability. The functional spaces well suited to this purpose are indeed (\ref{MixedLpSect4}), (\ref{MixedLpSect4bis}). We find encouraging results by considering $L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}$ with $\alpha <0$ and large values of $\widetilde{p}$. The idea is that while the weight $|x|^{\alpha}$ localizes the norm near the origin, the angular $L^{\widetilde{p}}$ integrability, with large $\widetilde{p}$, provides a bound for large $|x|$. This heuristic will be formulated more precisely later, and will be useful to interpret all the following results. Of course, by translation one can consider power weights centered at some $\bar{x} \neq 0$; in this case all the norms have to be translated to $\bar{x}$. We now give some precise definitions of the spaces we are going to use. \begin{notation} Let $\alpha \in \mathbb{R}$, $p,s \in [1,+\infty)$. Let then $f:\mathbb{R}^{n} \rightarrow \mathbb{R}$, $F: \mathbb{R}^{+} \times \mathbb{R}^{n} \rightarrow \mathbb{R}$.
We will say: \begin{itemize} \item $f \in L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}$ if $$\Big( \int_{\mathbb{R}^{+}} \|f( \rho \ \cdot \ )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})}\rho^{\alpha p +n-1} \ d \rho \Big)^{\frac{1}{p}} < +\infty,$$ and we denote this norm by $\|\cdot\|_{L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}}$, or $\||x|^{\alpha}\cdot\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}$; \item $f \in L^{\infty}_{|x|^{\alpha}d|x|}L^{\widetilde{p}}_{\theta}$ if $$\sup_{ \rho > 0 } \rho^{\alpha} \|f( \rho \ \cdot \ )\|_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} < +\infty,$$ and we denote this norm by $\|\cdot\|_{L^{\infty}_{|x|^{\alpha}d|x|}L^{\widetilde{p}}_{\theta}}$, or $\||x|^{\alpha}\cdot\|_{L^{\infty}_{|x|}L^{\widetilde{p}}_{\theta}}$; \item $F \in L^{s}_{t}L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}$ if $$\Big( \int_{\mathbb{R}^{+}} \Big| \int_{\mathbb{R}^{+}} \|F(t, \rho \ \cdot \ )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} \rho^{\alpha p +n-1} \ d \rho \Big|^{\frac{s}{p}} \ dt \Big)^{\frac{1}{s}} < +\infty,$$ and we denote this norm by $\|\cdot\|_{L^{s}_{t}L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}}$, or $\||x|^{\alpha}\cdot\|_{L^{s}_{t}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}$; \item $F \in L^{s}_{T}L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}$ if $$\Big( \int_{0}^{T}\Big| \int_{\mathbb{R}^{+}} \|F(t,\rho \ \cdot \ )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} \rho^{\alpha p + n-1} \ d \rho \Big|^{\frac{s}{p}} \ dt \Big)^{\frac{1}{s}} < +\infty,$$ and we denote this norm by $\|\cdot\|_{L^{s}_{T}L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}}$, or $\||x|^{\alpha}\cdot\|_{L^{s}_{T}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}$.
\end{itemize} \end{notation} We give similar definitions for $$L^{\infty}_{t}L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}, \quad L^{\infty}_{T}L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}, \quad L^{s}_{t}L^{\infty}_{|x|^{\alpha}d|x|}L^{\widetilde{p}}_{\theta}, \quad L^{s}_{T}L^{\infty}_{|x|^{\alpha}d|x|}L^{\widetilde{p}}_{\theta}.$$ \section{Decay estimates for convolutions with heat and Oseen kernels} \noindent The most important technical tools we need are weighted decay estimates for convolutions with the heat and Oseen kernels. Even if results of this kind already exist in the literature, we cover a larger set of weights by considering $L^{p}_{|x|}L^{\widetilde{p}}_{\theta}$ spaces. In particular we show that the higher angular integrability in fact allows one to consider a larger set of weights. Corollary \ref{cor:nonhom} is the main ingredient in the proofs. \noindent To obtain a more compact notation, it is convenient to define the quantities: \begin{equation}\label{LambdaDef} \Lambda (\alpha, p, \widetilde{p}) = \alpha + \frac{n-1}{p} - \frac{n-1}{\widetilde{p}}, \end{equation} \begin{equation}\label{OmegaDef} \Omega (\alpha,p,s) = \alpha + \frac np + \frac 2s. \end{equation} We will also use the notation $\Lambda_{\alpha}$ instead of $\Lambda(\alpha,p,\widetilde{p})$ when the values of $p,\widetilde{p}$ are clear from the context. Let us start with the pointwise decay: \begin{proposition}\label{PDecayCor} Let $n\geq 2$, $1 < p \leq q < +\infty$ and $1 \leq \widetilde{p} \leq \widetilde{q} \leq +\infty$. Assume further that $\alpha, \beta$ satisfy the set of conditions \begin{equation}\label{eq:condDL(Heat)} \beta > -\frac nq,\qquad \alpha<\frac{n}{p'}, \qquad \Lambda (\alpha,p,\widetilde{p}) \geq \Lambda (\beta,q,\widetilde{q}).
\end{equation} Then for each multi-index $\eta$ the following estimates hold: \begin{enumerate} \item \begin{equation}\label{PHeatDer} \||x|^{\beta} \partial^{\eta} e^{t\Delta}u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \leq \frac{c_{\eta}}{t^{(|\eta| + \frac{n}{p}-\frac{n}{q} + \alpha-\beta)/2}} \| |x|^{\alpha} u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \qquad t>0, \end{equation} provided that $|\eta| + \frac{n}{p}-\frac{n}{q} + \alpha-\beta \geq0$; \item \begin{equation}\label{PGHeatDer} \||x|^{\beta} \partial^{\eta} e^{t\Delta} \mathbb{P} \nabla \cdot F\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \leq \frac{d_{\eta}}{t^{(1 + |\eta| + \frac{n}{p} -\frac{n}{q} +\alpha -\beta)/2}} \| |x|^{\alpha} F\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \qquad t>0, \end{equation} provided that $1+ |\eta| + \frac{n}{p}-\frac{n}{q} + \alpha-\beta > 0$. \end{enumerate} In particular: \begin{enumerate} \item \begin{equation}\label{PHeat} \||x|^{\beta} e^{t\Delta}u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \leq \frac{c_{0}}{t^{(\frac{n}{p}-\frac{n}{q} + \alpha-\beta)/2}} \| |x|^{\alpha} u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \qquad t>0, \end{equation} provided that $\frac{n}{p}-\frac{n}{q} + \alpha-\beta \geq0$; \item \begin{equation}\label{PGHeat} \||x|^{\beta} e^{t\Delta} \mathbb{P} \nabla \cdot F\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \leq \frac{d_{0}}{t^{(1 + \frac{n}{p} -\frac{n}{q} +\alpha -\beta)/2}} \| |x|^{\alpha} F\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \qquad t>0, \end{equation} provided that $1+ \frac{n}{p}-\frac{n}{q} + \alpha-\beta > 0$. The range of admissible $p,q$ indices can be relaxed to $1\leq p \leq q \leq +\infty$ provided that $\Lambda (\alpha,p,\widetilde{p}) > \Lambda (\beta,q,\widetilde{q})$. \end{enumerate} \end{proposition} \begin{proof} The proof follows from Corollary \ref{cor:nonhom} and scaling considerations.
First notice that: \begin{equation}\label{HeatScaling} e^{t\Delta} \phi = S_{\sqrt{t}}e^{\Delta}S_{1/\sqrt{t}} \phi, \end{equation} where $S_{\lambda}$ is defined by: \begin{equation} (S_{\lambda}\phi)(x)= \phi\Big(\frac{x}{\lambda}\Big). \end{equation} Notice also that: \begin{equation} \||x|^{\beta} \partial^{\eta} S_{\lambda} \phi \|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} = \lambda^{\frac{n}{q} +\beta -|\eta|}\||x|^{\beta}\partial^{\eta}\phi\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}. \end{equation} For each $u_{0} \in L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}$ and $t\in \mathbb{R}^{+}$, the function $e^{\Delta} S_{1/ \sqrt{t}} u_{0}$ is in the Schwartz class, so we can apply Corollary \ref{cor:nonhom} with arbitrarily large values of $\gamma$, and condition (\ref{eq:condabg}) is trivially satisfied. We get: \begin{eqnarray} \||x|^{\beta} \partial^{\eta} e^{t\Delta} u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} &=& \| |x|^{\beta}\partial^{\eta} S_{\sqrt{t}}e^{\Delta}S_{1/ \sqrt{t}} u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &=& t^{(\frac{n}{q} + \beta - |\eta|)/2} \||x|^{\beta} (\partial^{\eta} e^{\Delta})S_{1/\sqrt{t}} u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &\leq & \frac{c_{\eta}}{t^{(-\frac{n}{q} -\beta + |\eta |)/2}}\||x|^{\alpha}S_{1/ \sqrt{t}}u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} \nonumber \\ &=& \frac{c_{\eta}}{t^{(|\eta| + \frac{n}{p}-\frac{n}{q}+\alpha -\beta)/2}}\||x|^{\alpha} u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \nonumber \end{eqnarray} where we used the condition (\ref{eq:condDL}), i.e. $$ \Lambda(\alpha,p,\widetilde{p}) \geq \Lambda(\beta,q,\widetilde{q}). $$ The proof of (\ref{PGHeatDer}) is similar, but we have to work with the operator $e^{t\Delta} \mathbb{P} \nabla \cdot \ $.
As seen in Proposition \ref{OseenDecayFinale}, it is a convolution with a kernel $K$ such that \begin{equation}\label{OseenInProofScaling} K_{j,k,m}(t,x) = t^{-\frac{n+1}{2}}\, K_{j,k,m}\Big(1, \frac{x}{\sqrt{t}} \Big), \end{equation} and \begin{equation}\label{OseenInProof} (1+|x|)^{1 + n + |\mu |}\partial^{ \mu} K_{j,k,m}(1,\cdot) \in L^{\infty}(\mathbb{R}^{n}), \end{equation} for each multi-index $\mu$. From the scaling (\ref{OseenInProofScaling}), writing $K = K(1,\cdot)$, it follows that: \begin{equation}\label{OssenScaling} K(t) * \phi =\frac{1}{\sqrt{t}} S_{\sqrt{t}} \big( K * S_{1/\sqrt{t}} \phi \big). \end{equation} So we get \begin{eqnarray} \||x|^{\beta} \partial^{\eta} e^{t\Delta} \mathbb{P} \nabla \cdot F\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} &=& \| |x|^{\beta} \partial^{\eta}K(t) * F\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &=& \frac{1}{\sqrt{t}} \| |x|^{\beta}\partial^{\eta} S_{\sqrt{t}} \big( K * S_{1/ \sqrt{t}} F \big)\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &=& \frac{1}{\sqrt{t}}t^{(\frac{n}{q} + \beta - |\eta|)/2} \||x|^{\beta} (\partial^{\eta} K) * S_{1/ \sqrt{t}} F\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &\leq & \frac{d_{\eta}}{t^{(-\frac{n}{q} -\beta +1 + |\eta |)/2}}\||x|^{\alpha}S_{1/ \sqrt{t}} F\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} \nonumber \\ &=& \frac{d_{\eta}}{t^{(1+ |\eta| + \frac{n}{p}-\frac{n}{q}+\alpha -\beta)/2}}\||x|^{\alpha} F\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \nonumber \end{eqnarray} provided that $\Lambda_{\alpha} \geq \Lambda_{\beta}$. Notice that the optimal choice of $\gamma$ allowed by (\ref{OseenInProof}) is $\gamma = 1 + n + |\eta|$ and leads to $$\alpha - \beta + 1 + n + |\eta| > n \Big( 1+\frac{1}{q}- \frac{1}{p} \Big) \Rightarrow 1+ |\eta| + \frac{n}{p}-\frac{n}{q}+\alpha -\beta > 0.$$ \end{proof} \noindent It is remarkable that the restriction $\Lambda_{\alpha} \geq \Lambda_{\beta}$ can be removed if we localize the estimate in a space-time parabola above the origin.
The size of the parabola will depend on the value of the difference $\Lambda_{\alpha} - \Lambda_{\beta}$, and it increases as $\Lambda_{\alpha}- \Lambda_{\beta} \rightarrow 0^{-}$. In the limit case $\Lambda_{\alpha}=\Lambda_{\beta}$ we in fact recover Proposition \ref{PDecayCor}. \begin{proposition}\label{LocalPDecay} Let $n\geq 2$, $1 < p \leq q < +\infty$ and $1 \leq \widetilde{p} \leq \widetilde{q} \leq +\infty$. Assume further that $\alpha, \beta$ satisfy the set of conditions \begin{equation}\label{eq:condDL(Heat)Loc} \beta > -\frac nq,\qquad \alpha<\frac{n}{p'}, \qquad \Lambda (\alpha,p,\widetilde{p}) < \Lambda (\beta,q,\widetilde{q}), \end{equation} and define: $$ \Lambda_{\alpha,\beta}= \Lambda (\alpha,p,\widetilde{p}) - \Lambda (\beta,q,\widetilde{q}). $$ Let $\Pi(R)$ be the space-time parabola: $$ \Pi(R) = \Big\{ \frac{|x|}{\sqrt{t}} < R \Big\}. $$ Then for each multi-index $\eta$ the following estimates hold: \begin{enumerate} \item \begin{equation}\label{PHeatDerLoc} \| \mathbbm{1}_{\Pi(R)} |x|^{\beta} \partial^{\eta} e^{t\Delta}u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \leq \frac{c_{\eta}R^{-\Lambda_{\alpha,\beta}}}{t^{(|\eta| + \frac{n}{p}-\frac{n}{q} + \alpha-\beta)/2}} \| |x|^{\alpha} u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \qquad t>0, \end{equation} provided that $|\eta| + \frac{n}{p}-\frac{n}{q} + \alpha-\beta \geq0$; \item \begin{equation}\label{PGHeatDerLoc} \| \mathbbm{1}_{\Pi(R)} |x|^{\beta} \partial^{\eta} e^{t\Delta} \mathbb{P} \nabla \cdot F\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \leq \frac{d_{\eta}R^{-\Lambda_{\alpha,\beta}}}{t^{(1 + |\eta| + \frac{n}{p} -\frac{n}{q} +\alpha -\beta)/2}} \| |x|^{\alpha} F\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \qquad t>0, \end{equation} provided that $1+ |\eta| + \frac{n}{p}-\frac{n}{q} + \alpha-\beta > 0$.
\end{enumerate} In particular: \begin{enumerate} \item \begin{equation}\label{PHeatLoc} \| \mathbbm{1}_{\Pi(R)} |x|^{\beta} e^{t\Delta}u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \leq \frac{c_{0}R^{-\Lambda_{\alpha,\beta}}}{t^{(\frac{n}{p}-\frac{n}{q} + \alpha-\beta)/2}} \| |x|^{\alpha} u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \qquad t>0, \end{equation} provided that $\frac{n}{p}-\frac{n}{q} + \alpha-\beta \geq0$; \item \begin{equation}\label{PGHeatLoc} \| \mathbbm{1}_{\Pi(R)} |x|^{\beta} e^{t\Delta} \mathbb{P} \nabla \cdot F\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \leq \frac{d_{0}R^{-\Lambda_{\alpha,\beta}}}{t^{(1 + \frac{n}{p} -\frac{n}{q} +\alpha -\beta)/2}} \| |x|^{\alpha} F\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \qquad t>0, \end{equation} provided that $1+ \frac{n}{p}-\frac{n}{q} + \alpha-\beta > 0$. \end{enumerate} These estimates can also be formulated differently by setting $R^{-\Lambda_{\alpha,\beta}}= K$. For instance (\ref{PHeatDerLoc}) becomes: \begin{equation}\label{PHeatDerLoc2} \| \mathbbm{1}_{\Pi(K^{-1/\Lambda_{\alpha,\beta}})} |x|^{\beta} \partial^{\eta} e^{t\Delta}u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \leq \frac{c_{\eta}K}{t^{(|\eta| + \frac{n}{p}-\frac{n}{q} + \alpha-\beta)/2}} \| |x|^{\alpha} u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \qquad t>0, \end{equation} and similarly for the other estimates. \end{proposition} \begin{proof} We simply write $\Lambda$ instead of $\Lambda_{\alpha,\beta}$. Of course: $$ \Lambda < 0 \quad \Rightarrow \quad R^{-\Lambda} \Big| \frac{x}{\sqrt{t}} \Big|^{\Lambda} \geq 1, \quad \mbox{if} \quad (t,x)\in \Pi(R).
$$ Then we get, as in Proposition \ref{PDecayCor}: \begin{eqnarray} \|\mathbbm{1}_{\Pi(R)} |x|^{\beta} \partial^{\eta} e^{t\Delta} u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} &=& \|\mathbbm{1}_{\Pi(R)} |x|^{\beta}\partial^{\eta} S_{\sqrt{t}}e^{\Delta}S_{1/ \sqrt{t}} u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &\leq & \frac{R^{-\Lambda}}{t^{\Lambda/2}}\| |x|^{\beta +\Lambda}\partial^{\eta} S_{\sqrt{t}}e^{\Delta}S_{1/ \sqrt{t}} u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &=& \frac{R^{-\Lambda}}{t^{\Lambda /2}} t^{(\frac{n}{q} + \beta + \Lambda - |\eta|)/2} \||x|^{\beta +\Lambda} \partial^{\eta} e^{\Delta}S_{1/\sqrt{t}} u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &\leq & \frac{c_{\eta} R^{-\Lambda}}{t^{(-\frac{n}{q} -\beta + |\eta |)/2}}\||x|^{\alpha}S_{1/ \sqrt{t}}u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} \nonumber \\ &=& \frac{c_{\eta} R^{-\Lambda}}{t^{(|\eta| + \frac{n}{p}-\frac{n}{q}+\alpha -\beta)/2}}\||x|^{\alpha} u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \nonumber \end{eqnarray} where the index relations are consistent because: $$ \Lambda_{\alpha} \geq \Lambda(\Lambda_{\alpha,\beta} +\beta,q,\widetilde{q}) = \Lambda(\Lambda_{\alpha}-\Lambda_{\beta} + \beta,q,\widetilde{q})= \Lambda_{\alpha}. $$ The other inequalities can be proved in the same way. \end{proof} \begin{remark} We have just observed that, after localization in a space-time parabola, the inequalities hold with an additional factor $R^{-\Lambda}$. Notice that this factor goes to $1$ as $\Lambda \rightarrow 0^{-}$. To get a constant uniform in $\Lambda$, it is instead necessary to restrict the size of the parabola: if we choose the constant equal to $K$, we need to restrict to: $$ \left| \frac{x}{\sqrt{t}} \right| \leq K^{-\frac{1}{\Lambda}}. $$ Here, as $\Lambda \rightarrow 0^{-}$, the parabola fills the whole space-time.
\end{remark} \noindent From the pointwise decay, integral estimates can also be easily obtained: \begin{proposition}\label{IDecayCor} Let $n\geq 2$, $1 < p \leq q < \frac{np}{(|\eta| + \alpha-\beta)p + n-2}$, $r \in (1, +\infty)$ and $ 1 \leq \widetilde{p} \leq \widetilde{q} \leq +\infty$. Assume further that $\alpha, \beta$ satisfy the set of conditions \begin{equation}\label{eq:condDL(IHeat)} \beta > -\frac nq,\qquad \alpha<\frac{n}{p'}. \end{equation} Then for each multi-index $\eta$ the following estimate holds: \begin{equation}\label{IHeat} \| |x|^{\beta} \partial^{\eta} e^{t\Delta}u_{0}\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \leq c_{\eta} \| |x|^{\alpha} u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \end{equation} provided that: \begin{equation}\label{OmegaScaling} |\eta| + \Omega(\alpha, p, \infty) = \Omega (\beta,q,r), \quad \Lambda (\alpha,p,\widetilde{p}) \geq \Lambda (\beta,q,\widetilde{q}), \end{equation} together with the localized version \begin{equation}\label{IHeatLoc} \| \mathbbm{1}_{\Pi(R)} |x|^{\beta} \partial^{\eta} e^{t\Delta}u_{0}\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \leq c_{\eta} R^{-\Lambda_{\alpha,\beta}} \| |x|^{\alpha} u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \end{equation} provided that: \begin{equation}\label{OmegaScalingLoc} |\eta| + \Omega(\alpha, p, \infty) = \Omega (\beta,q,r), \quad \Lambda_{\alpha,\beta} = \Lambda (\alpha,p,\widetilde{p}) - \Lambda (\beta,q,\widetilde{q}) < 0, \end{equation} where we recall the definition of $\Pi(R)$: $$ \Pi(R) = \Big\{ \frac{|x|}{\sqrt{t}} < R \Big\}.
$$ \end{proposition} \begin{proof} By the pointwise decay \begin{equation} \||x|^{\beta} \partial^{\eta} e^{t\Delta}u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \leq \frac{c_{\eta}}{t^{(|\eta| + \frac{n}{p}-\frac{n}{q} + \alpha-\beta)/2}} \| |x|^{\alpha} u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \qquad t>0, \end{equation} it follows that $\partial^{\eta}e^{t\Delta}u_{0}$ is bounded in the Lorentz space $L^{r,\infty}(\mathbb{R}^{+}; L^{q}_{|x|^{\beta q}d|x|}L^{\widetilde{q}}_{\theta})$ if $|\eta| + \Omega(\alpha, p, \infty) = \Omega (\beta,q,r)$. In fact: \begin{eqnarray} \|\||x|^{\beta}\partial^{\eta}e^{t\Delta}u_{0}\|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}\|_{L^{r,\infty}_{t}} &\leq & c_{\eta} \left\| \frac{1}{t^{(|\eta| + \frac{n}{p}-\frac{n}{q} + \alpha-\beta)/2}} \| |x|^{\alpha} u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}\right\|_{L^{r,\infty}_{t}} \nonumber \\ &\leq & c_{\eta} \left\| \frac{1}{t^{(|\eta| + \frac{n}{p}-\frac{n}{q} + \alpha-\beta)/2}} \right\|_{L^{r,\infty}} \||x|^{\alpha}u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} \nonumber \\ &\leq & c_{\eta} \||x|^{\alpha}u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \nonumber \end{eqnarray} provided that: $$ (|\eta| + \frac{n}{p} -\frac{n}{q} +\alpha -\beta)/2 = \frac{1}{r} \Rightarrow |\eta| + \Omega(\alpha, p, \infty) = \Omega (\beta,q,r). $$ Now consider two tuples $(\alpha_{0},\beta_{0}, p_{0},\widetilde{p}_{0},q_{0},\widetilde{q}_{0},r_{0})$, $(\alpha_{1},\beta_{1}, p_{1},\widetilde{p}_{1},q_{1},\widetilde{q}_{1},r_{1})$ such that the assumptions of the theorem are satisfied, and in particular (\ref{eq:condDL(IHeat)}), (\ref{OmegaScaling}) hold.
Then we have the pair of bounded linear operators: \begin{equation} \partial^{\eta} e^{t\Delta} : \begin{array}{ccc} L^{p_{0}}_{|x|^{\alpha_{0} p_{0}}d|x|}L^{\widetilde{p}_{0}}_{\theta} & \longrightarrow & L^{r_{0},\infty}_{t} L^{q_{0}}_{|x|^{\beta_{0} q_{0}}d|x|}L^{\widetilde{q}_{0}}_{\theta} \\ & & \\ L^{p_{1}}_{|x|^{\alpha_{1} p_{1}}d|x|}L^{\widetilde{p}_{1}}_{\theta} & \longrightarrow & L^{r_{1},\infty}_{t} L^{q_{1}}_{|x|^{\beta_{1} q_{1}}d|x|}L^{\widetilde{q}_{1}}_{\theta} . \end{array} \end{equation} If we fix $\xi \in (0,1)$ we can perform real interpolation of operators with parameters $(\xi, r)$, provided that: \begin{equation}\label{rConstr} p_{\xi} < r_{\xi}, \end{equation} where $$ \frac{1}{p_{\xi}} = (1- \xi)\frac{1}{p_{0}} + \xi \frac{1}{p_{1}}, $$ and $q_{\xi},r_{\xi},\widetilde{q}_{\xi},\widetilde{p}_{\xi}$ are defined in the same way, while $$ \alpha_{\xi}= (1-\xi)\alpha_{0} +\xi \alpha_{1}, $$ and similarly for $\beta_{\xi}$. So we get the bounded operators: $$ \partial^{\eta}e^{t\Delta} : $$ $$ L^{p_{\xi}}_{|x|^{\alpha_{\xi} p_{\xi}}d|x|}L^{\widetilde{p}_{\xi}}_{\theta} \rightarrow \Big(L^{r_{0},\infty}_{t} L^{q_{0}}_{|x|^{\beta_{0} q_{0}}d|x|}L^{\widetilde{q}_{0}}_{\theta}, L^{r_{1},\infty}_{t} L^{q_{1}}_{|x|^{\beta_{1} q_{1}}d|x|}L^{\widetilde{q}_{1}}_{\theta} \Big)_{\xi,r} = L^{r}_{t} L^{q_{\xi}}_{|x|^{\beta_{\xi} q_{\xi}}d|x|}L^{\widetilde{q}_{\xi}}_{\theta}. $$ It is now straightforward to check that the indices $(\alpha_{\xi},\beta_{\xi}, \mbox{etc.})$ satisfy the relations (\ref{eq:condDL(IHeat)}), (\ref{OmegaScaling}) and the other assumptions. Furthermore, the constraints (\ref{OmegaScaling}) and (\ref{rConstr}) are equivalent to $q_{\xi} < \frac{np_{\xi}}{(|\eta| + \alpha_{\xi} - \beta_{\xi}) p_{\xi} + n-2}$. Of course this method misses the endpoints $p,q=1$. The estimates (\ref{IHeatLoc}) are proved in the same way by using the localized pointwise decay.
\end{proof} \noindent The estimate of the Duhamel term requires no interpolation: \begin{proposition}\label{DDecayCor} Let $n\geq 2$, $1 < p \leq 2q < +\infty$, $1 < s \leq 2r < +\infty $ and $1 \leq \widetilde{p} \leq 2\widetilde{q} \leq +\infty$. Assume further that $\alpha, \beta$ satisfy the set of conditions \begin{equation}\label{eq:condDL(DHeat)} \beta > -\frac nq,\qquad \alpha<\frac{n}{p'},\qquad 2 \Lambda (\alpha,p,\widetilde{p}) \geq \Lambda (\beta,q,\widetilde{q}). \end{equation} Then the following estimate holds: \begin{equation}\label{DGHeat} \left\||x|^{\beta} \partial^{\eta} \int_{0}^{t} e^{(t-s)\Delta} \mathbbm{P} \nabla \cdot (u \otimes u) \ ds \right\|_{L^{r}_{t}L^{q}_{x}L^{\widetilde{q}}_{\theta}} \leq d_{\eta} \| |x|^{\alpha} u \|^{2}_{L^{s}_{t}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \end{equation} for each multi index $\eta$, provided that: \begin{equation}\label{DGOmegaScaling} 2 \Omega(\alpha, p, s) = \Omega (\beta,q,r) + 1 - |\eta |. \end{equation} In particular the following holds: \begin{equation}\label{DGHeatq=p} \left\||x|^{\beta} \partial^{\eta} \int_{0}^{t} e^{(t-s)\Delta} \mathbbm{P} \nabla \cdot (u \otimes u) \ ds\right\|_{L^{r}_{t}L^{q}_{x}L^{\widetilde{q}}_{\theta}} \leq d_{\eta} \| |x|^{\beta} u \|^{2}_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}, \end{equation} for each multi index $\eta$, provided that: \begin{equation}\label{DGOmegaScalingq=p} \frac 2r + \frac nq = 1 - \beta - |\eta |, \qquad \Lambda(\beta,q,\widetilde{q}) \geq 0. \end{equation} The range of admissible $p,q$ indices can be relaxed to $1\leq p \leq q \leq +\infty$ provided that $2\Lambda (\alpha,p,\widetilde{p}) > \Lambda (\beta,q,\widetilde{q})$. 
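As a consistency check (assuming, as in the previous sections, the notation $\Omega(\gamma,m,l)=\gamma+\frac{n}{m}+\frac{2}{l}$), the diagonal case (\ref{DGHeatq=p}) follows from (\ref{DGOmegaScaling}) by the choice $(\alpha,p,s)=(\beta,q,r)$:

```latex
% With (\alpha,p,s)=(\beta,q,r), condition (DGOmegaScaling) reads
2\,\Omega(\beta,q,r) = \Omega(\beta,q,r) + 1 - |\eta|
\;\Longrightarrow\;
\Omega(\beta,q,r) = 1 - |\eta|
\;\Longleftrightarrow\;
\beta + \frac{n}{q} + \frac{2}{r} = 1 - |\eta|,
% which is exactly the first condition in (DGOmegaScalingq=p).
```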
\end{proposition} \begin{proof} By the Minkowski inequality and the estimates (\ref{PHeatDer}): \begin{eqnarray} & &\left\||x|^{\beta} \partial^{\eta} \int_{0}^{t} e^{(t-s)\Delta}\mathbb{P} \nabla \cdot F(x,s) \ ds \right\|_{L^{r}_{t}L^{q}_{x}L^{\widetilde{q}}_{\theta}} \nonumber \\ & \leq &\left\| \int_{\mathbb{R}^{+}} \| |x|^{\beta} \partial^{\eta} e^{(t-s)\Delta}\mathbb{P} \nabla \cdot F \|_{L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \ ds\right\|_{L^{r}_{t}} \nonumber \\ &\leq & d_{\eta} \left\| \int_{\mathbb{R}^{+}} \frac{1}{(t-s)^{(1 + |\eta | +\frac{n}{p_{0}} -\frac{n}{q} +\alpha_{0} -\beta)/2}}\||x|^{\alpha_0}F \|_{L^{p_0}_{|x|}L^{\widetilde{p}_0}_{\theta}} \ ds \right\|_{L^{r}_{t}}, \nonumber \end{eqnarray} provided that: \begin{equation}\label{DuamhCondRemark} \widetilde{p}_0 \leq \widetilde{q}, \quad p_0 \leq q, \quad 1 + |\eta | + \frac{n}{p_{0}} - \frac{n}{q} + \alpha_{0} - \beta > 0, \quad \Lambda_{\alpha_0} \geq \Lambda_{\beta}. \end{equation} Let then \begin{equation}\label{WYScalingInCor} 1+ \frac{1}{r} = \frac{1}{s_{0}} + \frac{1}{k}, \end{equation} and use the Young inequality in the following Lorentz spaces: $$ \| f * g \|_{L^{r}} \leq \| f \|_{L^{s_{0}}} \| g \|_{L^{k,\infty}}, $$ which is allowed if $1<r,s_{0},k <+\infty$; this is ensured by the assumptions $1< r,s < +\infty$. 
We get \begin{eqnarray} & &\left\||x|^{\beta} \partial^{\eta} \int_{0}^{t} e^{(t-s)\Delta}\mathbb{P} \nabla \cdot F(x,s) \ ds \right\|_{L^{r}_{t}L^{q}_{x}L^{\widetilde{q}}_{\theta}} \nonumber \\ &\leq & d_{\eta} \||x|^{\alpha_0} F\|_{L^{s_{0}}_{t}L^{p_{0}}_{|x|}L^{\widetilde{p}_{0}}_{\theta}} \left\| t^{-(1+|\eta | +\frac{n}{p_{0}} -\frac{n}{q} +\alpha_{0}-\beta)/2}\right\|_{L^{k,\infty}_{t}} \nonumber \\ & \leq & d_{\eta} \||x|^{\alpha_0} F\|_{L^{s_{0}}_{t}L^{p_{0}}_{|x|}L^{\widetilde{p}_{0}}_{\theta}}, \nonumber \end{eqnarray} provided that: \begin{equation}\label{DuhamProvvScal} p_0 \leq q, \quad (1 + |\eta | + \frac{n}{p_{0}} - \frac{n}{q} + \alpha_{0} - \beta)/2=\frac{1}{k}, \quad \Lambda_{\alpha_0} \geq \Lambda_{\beta}, \end{equation} because $$ \left\| t^{-1/k} \right\|_{L^{k,\infty}_{t}} =1. $$ Conditions (\ref{DuhamProvvScal}), (\ref{WYScalingInCor}) lead to: \begin{equation}\label{DuhamProvvScal2} \Omega(\alpha_{0}, p_{0}, s_{0}) = 1-|\eta | +\Omega(\beta,q,r). \end{equation} We now specify $F=u \otimes u$: \begin{eqnarray} & &\left\||x|^{\beta} \partial^{\eta} \int_{0}^{t} e^{(t-s)\Delta}\mathbb{P} \nabla \cdot (u\otimes u)(x,s) \ ds \right\|_{L^{r}_{t}L^{q}_{x}L^{\widetilde{q}}_{\theta}} \nonumber \\ &\leq & d_{\eta} \||x|^{\alpha_{0}} |u|^{2}\|_{L^{s_{0}}_{t}L^{p_{0}}_{|x|}L^{\widetilde{p}_{0}}_{\theta}} \nonumber \\ &\leq & d_{\eta} \||x|^{\alpha_{0}/2} |u|\|^{2}_{L^{2s_{0}}_{t}L^{2p_{0}}_{|x|}L^{2\widetilde{p}_{0}}_{\theta}} \nonumber \\ & \leq & d_{\eta} \||x|^{\alpha}u\|^{2}_{L^{s}_{t}L^{p}_{x}L^{\widetilde{p}}_{\theta}}, \nonumber \end{eqnarray} where we have set \begin{equation}\label{DuhamPivot} (\alpha_{0}/2, 2s_{0},2p_{0},2\widetilde{p}_{0})=(\alpha,s,p,\widetilde{p}). 
\end{equation} Notice that $2\Omega(\alpha,p,s)= \Omega(\alpha_{0},p_{0},s_{0})$, so (\ref{DuhamProvvScal2}) and (\ref{DuhamPivot}) lead to (\ref{DGOmegaScaling}) $$ 2\Omega(\alpha,p,s) = \Omega(\beta,q,r) +1 - |\eta |; $$ while condition (\ref{DuhamProvvScal}) leads to $$ 2\Lambda_{\alpha} \geq \Lambda_{\beta}. $$ Finally notice that (\ref{WYScalingInCor}) and (\ref{DuhamProvvScal}) imply the relationships: $$ r \geq s_{0}=s/2, \qquad q \geq p_{0}=p/2, \qquad \widetilde{q} \geq \widetilde{p}_{0}=\widetilde{p}/2. $$ These conditions are furthermore consistent with the choice $(\alpha,s,p,\widetilde{p})=(\beta,r,q,\widetilde{q})$; in this way we recover inequality (\ref{DGHeatq=p}): $$ \left\||x|^{\beta} \partial^{\eta} \int_{0}^{t} e^{(t-s)\Delta} \mathbbm{P} \nabla \cdot (u \otimes u) \ ds \right\|_{L^{r}_{t}L^{q}_{x}L^{\widetilde{q}}_{\theta}} \leq d_{\eta} \| |x|^{\beta} u \|^{2}_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}, $$ provided that $$ \Omega(\beta,q,r) = 1-|\eta|, \qquad \Lambda(\beta,q,\widetilde{q}) \geq 0. $$ \end{proof} \section{Regularity criteria in weighted spaces with angular integrability} As mentioned, the machinery developed so far is well suited to studying the regularity properties of weak solutions of (\ref{CauchyNS}) in weighted spaces. In particular we focus on theorem (\ref{YZTheorem}), in which regularity of a weak solution $u$ on the segment $(0,T)\times \{ 0 \}$ is shown, provided that the following a priori bound is satisfied: \begin{equation}\label{moreOrLess} \||x|^{\alpha} u \|_{L^{s}_{T}L^{p}_{x}} < +\infty, \quad \frac{2}{s} + \frac{n}{p} = 1- \alpha. \end{equation} Of course such norms are invariant with respect to the natural scaling of (\ref{CauchyNS}) centred at the origin: \begin{equation*} u \rightarrow \lambda u (\lambda^{2} t, \lambda x). \end{equation*} \noindent Now we are interested in what happens if we consider more or less angular integrability in (\ref{moreOrLess}). 
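Before proceeding, here is a sketch of the scale-invariance computation behind the index relation in (\ref{moreOrLess}), obtained by substituting $y=\lambda x$, $\tau=\lambda^{2}t$:

```latex
% With u_{\lambda}(t,x) := \lambda\, u(\lambda^{2}t,\lambda x):
\||x|^{\alpha} u_{\lambda}\|_{L^{s}_{t}L^{p}_{x}}
 = \lambda^{\,1-\alpha-\frac{n}{p}-\frac{2}{s}}\,\||y|^{\alpha} u\|_{L^{s}_{\tau}L^{p}_{y}},
% so the norm is invariant for every \lambda>0 precisely when
1-\alpha-\frac{n}{p}-\frac{2}{s}=0
\;\Longleftrightarrow\;
\frac{2}{s}+\frac{n}{p}=1-\alpha .
```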
We have results in two different directions: \noindent If we consider weights $|x|^{\alpha}$ with $\alpha < 0$: \begin{itemize} \item we get regularity on the segment $(0,T)\times \{ 0 \}$ as in (\ref{YZTheorem}), but by requiring boundedness in $L^{s}_{t}L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}$ with $\widetilde{p} < p$; \item we actually get global regularity (in $(0,T)\times \mathbb{R}^{n}$) by requiring boundedness in $L^{s}_{t}L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}$ with $\widetilde{p} > p$ and $\widetilde{p}$ large enough. \end{itemize} In the case $|x|^{\alpha}$, $\alpha \geq 0$ we show that: \begin{itemize} \item the assumptions in (\ref{YZTheorem}) actually lead to global regularity (in $(0,T)\times \mathbb{R}^{n}$); \item global regularity is achieved also if $\widetilde{p} < p$ with $\widetilde{p}$ large enough; precisely $\widetilde{p}_{G} \leq \widetilde{p} < p$, with $\widetilde{p}_{G}$ depending on the other parameters; \item regularity on $(0,T)\times \{ 0 \}$ is achieved in the range $\widetilde{p}_{L} \leq \widetilde{p} < \widetilde{p}_{G}$, with $\widetilde{p}_{L}$ depending on the other parameters. \end{itemize} The subscripts in $\widetilde{p}_{L},\widetilde{p}_{G}$ stand for $\widetilde{p}_{Local}$ and $\widetilde{p}_{Global}$. Such a scheme is explained by the following heuristic: if $\alpha < 0$ the weight $|x|^{\alpha}$ localizes, in some sense, the norm near the origin, so local regularity is to be expected, even for $\widetilde{p} < p$, but some boundedness condition at infinity is necessary in order to get global regularity. Such a condition is precisely high $L^{\widetilde{p}}$ integrability in the angular direction. If we consider instead $|x|^{\alpha}$ with $\alpha \geq 0$, the weight also guarantees boundedness for large $|x|$; furthermore the integrability in the angular direction can be relaxed to small values of $\widetilde{p}$. In this spirit we prove two extensions of theorem (\ref{YZTheorem}). 
In the first, under the hypothesis of higher angular integrability, we get global regularity. In the second we get regularity on the segment $(0,T)\times \{ 0 \}$, as in (\ref{YZTheorem}), but with weaker angular integrability. It is convenient to introduce the quantities: \begin{equation}\label{PtildeDef} \widetilde{p}_{L} = \left \{ \begin{array}{lcr} \frac{2(n-1)p}{(2 \alpha +1)p +2(n-1)} & \mbox{if} & -\frac{1}{2} \leq \alpha < 0 \\ && \\ \frac{2(n-1)p}{p +2(n-1)} & \mbox{if} & 0 \leq \alpha < 1, \end{array}\right. \end{equation} \begin{equation}\label{PtildeGDef} \widetilde{p}_{G} = \left \{ \begin{array}{lcr} \max \left( 4, \frac{(n-1)p}{\alpha p + n-1} \right) & \mbox{if} & \frac{1-n}{2} \leq \alpha < 0 \\ && \\ \frac{(n-1)p}{\alpha p + n-1} & \mbox{if} & 0 \leq \alpha \leq \frac{1}{2}, \end{array}\right. \end{equation} \noindent where, as mentioned, the subscripts stand for $\widetilde{p}_{Local}$ and $\widetilde{p}_{Global}$: these quantities are in fact the thresholds of angular integrability needed to get respectively local (on $(0,T)\times \{ 0 \}$) and global regularity. Notice that $$ \widetilde{p}_{L} < \widetilde{p}_{G}, \quad \mbox{if} \quad \alpha < 1/2, \qquad \widetilde{p}_{L} = \widetilde{p}_{G} \quad \mbox{if} \quad \alpha = 1/2; $$ and \begin{equation}\label{pLpGalphaPositivo} \widetilde{p}_{L} < p < \widetilde{p}_{G}, \qquad \mbox{if} \quad \alpha <0, \end{equation} \begin{equation}\label{pLpGalphaNegativo} \widetilde{p}_{L} < \widetilde{p}_{G} < p, \qquad \mbox{if} \quad \alpha > 0. \end{equation} The ranges of $\alpha$ on which $\widetilde{p}_{L}, \widetilde{p}_{G}$ have been defined are justified in the proofs below. \ \noindent The following global regularity criterion holds: \begin{theorem}\label{OurYZTheorem} Let $n \geq 3$, $u_{0} \in L^{2}_{\sigma}(\mathbb{R}^{n})$ and let $u$ be a weak Leray solution of (\ref{CauchyNS}). 
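To make the orderings (\ref{pLpGalphaPositivo}), (\ref{pLpGalphaNegativo}) concrete, here is a sample evaluation of (\ref{PtildeDef}), (\ref{PtildeGDef}) (the values $n=3$, $p=6$, $\alpha=\pm 1/4$ are chosen purely for illustration):

```latex
% n=3, p=6, \alpha=-1/4 (localizing weight):
\widetilde{p}_{L} = \frac{2\cdot 2\cdot 6}{(2\cdot(-\tfrac14)+1)\cdot 6 + 2\cdot 2} = \frac{24}{7},
\qquad
\widetilde{p}_{G} = \max\Big(4,\, \frac{2\cdot 6}{-\tfrac14\cdot 6 + 2}\Big) = 24,
\qquad \mbox{so} \quad \widetilde{p}_{L} < p < \widetilde{p}_{G};
% n=3, p=6, \alpha=+1/4:
\widetilde{p}_{L} = \frac{2\cdot 2\cdot 6}{6 + 2\cdot 2} = \frac{12}{5},
\qquad
\widetilde{p}_{G} = \frac{2\cdot 6}{\tfrac14\cdot 6 + 2} = \frac{24}{7},
\qquad \mbox{so} \quad \widetilde{p}_{L} < \widetilde{p}_{G} < p .
```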
\ \noindent If ${\bf \alpha \in [(1-n)/2,0)}$, $\alpha_{0} \in [(2-n)/2, 2/(2+n))$, $\frac{n}{1- \alpha} < p \leq \frac{1-n}{\alpha}$, and $$ \| |x|^{\alpha_{0}} u_{0}\|_{L^{p_{0}}_{|x|}L^{\widetilde{p}_{0}}_{\theta}} < +\infty $$ with: \begin{equation}\label{OurYZCond0} \alpha_{0} = 1- \frac{n}{p_{0}}, \quad \widetilde{p}_{0} \leq \frac{\widetilde{p}_{G}}{2}, \end{equation} \begin{equation}\label{OurYZCond0Complicata} \left \{ \begin{array}{lcr} 2 \leq p_{0} \leq \widetilde{p}_{G}/2 & \mbox{if} & \widetilde{p}_{G} \leq 2n \\ 2 \leq p_{0} \leq \widetilde{p}_{G}/2, \quad p_{0} < \frac{2\widetilde{p}_{G}}{\widetilde{p}_{G} - 2n} & \mbox{if} & \widetilde{p}_{G} > 2n; \end{array}\right. \end{equation} and \begin{equation}\label{OurYZuBound1} \| |x|^{\alpha} u \|_{L^{s}_{T}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} < +\infty, \end{equation} with: \begin{equation}\label{OurYZCondition1} \frac{2}{s}+ \frac{n}{p} = 1-\alpha, \end{equation} \begin{equation}\label{OurYZCondition2} \frac{2}{1-\alpha} < s < +\infty, \end{equation} \begin{equation}\label{OurYZCondition3} \widetilde{p} \geq \widetilde{p}_{G} = \max \left(4, \frac{(n-1)p}{\alpha p +n -1}\right); \end{equation} then actually $u$ is regular ($C^{\infty}$ in space variables) in $(0,T) \times \mathbb{R}^{n}$. \ \noindent The same holds if ${\bf \alpha \in [0,1/2]}$, $\alpha_{0} \in [(2-n)/2, 2/(2+n))$, $\max \left( 4, \frac{n}{1 - \alpha} \right) < p < +\infty$, or $p=4$, and $$ \| |x|^{\alpha_{0}} u_{0}\|_{L^{p_{0}}_{|x|}L^{\widetilde{p}_{0}}_{\theta}} < +\infty $$ with: \begin{equation}\label{OurYZCond0bis} \alpha_{0} = 1- \frac{n}{p_{0}}, \quad \widetilde{p}_{0} \leq \frac{p}{2}, \end{equation} \begin{equation}\label{OurYZCond0ComplicataBis} \left \{ \begin{array}{lcr} 2 \leq p_{0} \leq p/2 & \mbox{if} & p \leq 2n \\ 2 \leq p_{0} \leq p/2, \quad p_{0} < \frac{2 p}{p - 2n} & \mbox{if} & p > 2n; \end{array}\right. 
\end{equation} and \begin{equation}\label{OurYZuBound1bis} \| |x|^{\alpha} u \|_{L^{s}_{T}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} < +\infty, \end{equation} with: \begin{equation}\label{OurYZCondition1bis} \frac{2}{s}+ \frac{n}{p} = 1-\alpha, \end{equation} \begin{equation}\label{OurYZCondition2bis} \frac{2}{1-\alpha} < s < +\infty, \end{equation} \begin{equation}\label{OurYZCondition3bis} \widetilde{p} \geq \widetilde{p}_{G} = \frac{(n-1)p}{\alpha p +n -1}. \end{equation} \end{theorem} \begin{remark} By the relations (\ref{pLpGalphaPositivo}, \ref{pLpGalphaNegativo}) it turns out that in the case of localizing weights ($|x|^{\alpha}$, $\alpha <0$) additional angular integrability is required in order to get global regularity. We will come back to this point in the next section. On the other hand the weights $|x|^{\alpha}$, $\alpha > 0$, by providing additional boundedness at infinity, allow us to get global regularity with less angular integrability. \end{remark} \noindent Of course by translations $u$ is regular provided that the weights and the norms are centered at any point $\bar{x}$. \begin{remark} We give some remarks about the indices: \begin{itemize} \item The first conditions in (\ref{OurYZCond0}, \ref{OurYZCond0bis}) and the conditions (\ref{OurYZCondition1}, \ref{OurYZCondition1bis}) follow by requiring invariance with respect to the natural scalings for $u_{0}$ and $u$ respectively, i.e. $$ u_{0} \rightarrow \lambda u_{0}(\lambda x) $$ $$ u \rightarrow \lambda u(\lambda^{2} t, \lambda x); $$ \item Our method misses the endpoints $s=+\infty$, $p=+\infty$. The latter is recovered only in the case $\alpha \geq 0$, if we assume $\widetilde{p} > \widetilde{p}_{G}$; this is due to the use of proposition \ref{DDecayCor}; \item Of course we can set $T=+\infty$ to get regularity for all times; \item If $n > 3$ we get a gain in the negative values of $\alpha$ with respect to theorem \ref{YZTheorem}. 
We have in fact $\frac{1-n}{2} \leq \alpha$ instead of $-1 \leq \alpha$. This is also more satisfactory because it exhibits a dependence on the dimension. We have, on the other hand, a loss on the positive values of $\alpha$, namely $\alpha \leq \frac{1}{2}$ instead of $\alpha < 1$; \item It is interesting to notice that no direct correlation between the radial and angular integrability of the initial datum has to be required. \end{itemize} \end{remark} \ \noindent The assumptions on the initial datum are a bit complicated, and have to be considered as merely technical: for instance one can take $u_{0}$ in the Schwartz class without any real loss in the main content of the theorem, and in this way the formulation becomes simpler. The real information is the angular integrability of the solution required in order to get regularity, i.e. the hypothesis $\widetilde{p} \geq \widetilde{p}_{G}$. It comes from requiring $\Lambda (\alpha,p,\widetilde{p}) \geq 0$ in order to apply inequality (\ref{DGHeat}) with plain $L^{r}_{t}L^{q}_{x}$ spaces on the left-hand side. \ \begin{proof} We want to use the regularity criterion (\ref{SerrinRegularity}), so we need to show: \begin{equation}\label{OurYZCondition1Proof} \|u\|_{L^{r}_{T}L^{q}_{x}} < +\infty, \quad \mbox{with} \quad \frac{2}{r} +\frac{n}{q} =1. \end{equation} Let us start from the integral representation \begin{equation}\nonumber u = e^{t \Delta}u_{0} + \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot (u \otimes u)(s)ds. \end{equation} \noindent In order to get (\ref{OurYZCondition1Proof}) we distinguish the cases $\alpha \in [(1-n)/2,0)$ and $\alpha \in [0,1/2]$. \noindent \subsection*{Case ${\bf \alpha \in [(1-n)/2,0)}$} \begin{eqnarray}\nonumber \|u\|_{L^{r}_{T}L^{q}_{x}} &\leq & \|e^{t\Delta}u_{0}\|_{L^{r}_{T}L^{q}_{x}}+ \left\|\int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot (u \otimes u)(s) \ ds \right\|_{L^{r}_{T}L^{q}_{x}} \\ \nonumber &=& I + II. 
\end{eqnarray} By the scaling assumption and proposition (\ref{IDecayCor}) we get: \begin{equation}\label{I} I \leq c_{0} \||x|^{\alpha_{0}}u_{0}\|_{L^{p_{0}}_{|x|}L^{\widetilde{p}_{0}}_{\theta}}, \end{equation} provided that \begin{equation}\label{I2} p_{0} \leq q < \frac{np_{0}}{p_{0} - 2}, \quad \widetilde{p}_{0} \leq q, \quad \Lambda_{\alpha_{0}, p_{0}, \widetilde{p}_{0}} \geq 0. \end{equation} Actually the condition $\Lambda_{\alpha_{0}, p_{0}, \widetilde{p}_{0}} \geq 0$ is not necessary in order to prove the theorem. We assume it for now to avoid some technicalities in the proof; we will show at the end how it can be removed. To bound $II$ we use proposition (\ref{DDecayCor}) and scaling, so \begin{equation}\nonumber II \leq d_{0} \||x|^{\alpha}u\|^{2}_{L^{s}_{T}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \end{equation} provided that \begin{equation}\label{OurYZCond2Proof} \Lambda(\alpha,p, \widetilde{p}) \geq 0, \end{equation} \begin{equation}\label{OurYZCond3Proof} p/2, \ \widetilde{p} /2 \leq q, \quad s/2 \leq r. \end{equation} Condition (\ref{OurYZCond2Proof}) is ensured by $$ \widetilde{p} \geq \frac{(n-1)p}{\alpha p +n -1}. $$ Notice also that this condition, the scaling relation and $\alpha <0$ imply $\frac{n}{1-\alpha} < p \leq \frac{1-n}{\alpha}$. So the widest range for $p$ is attained as $\alpha \rightarrow 0^{-}$. The last point is to show that a pair $(r,q)$ satisfying (\ref{OurYZCond3Proof}) and $\frac{2}{r} + \frac{n}{q}=1$ actually exists. We choose $q = \widetilde{p}_{G}/2 = \frac{(n-1)p}{2(\alpha p + n-1)}$ if $\frac{(n-1)p}{\alpha p + n-1} \geq 4$, and $q = \widetilde{p}_{G} = 4$ otherwise. 
This is allowed by $(1-n)/2 \leq \alpha$; in fact if $\frac{(n-1)p}{\alpha p + n-1} \geq 4$, so that $q = \frac{(n-1)p}{2(\alpha p + n-1)}$, we get $$ \frac{2}{r} = 1- \frac{n}{q} = 1- \frac{2n \alpha}{n-1} - \frac{2n}{p} \Rightarrow \frac{2}{r} - \frac{4}{s} = \frac{1-n -2 \alpha}{n-1}, $$ so $$ (1-n)/2 \leq \alpha \Rightarrow s/2 \leq r; $$ while if $\frac{(n-1)p}{\alpha p + n-1} < 4$, so that $q=4$, we also get $p < 4$ and $$ \frac{2}{r} = 1- \frac{n}{4}, \ \frac{4}{s} > 2- \frac{2n}{p} \Rightarrow \frac{2}{r} - \frac{4}{s} < -1 +\frac{2n}{p} - \frac{n}{4} < 0. $$ \noindent Finally the condition (\ref{I2}) becomes $$ p_{0} \leq \frac{\widetilde{p}_{G}}{2} < \frac{np_{0}}{p_{0}-2}, $$ which by a straightforward calculation leads to (\ref{OurYZCond0Complicata}) and $\alpha_{0} \in [(2-n)/2, 2/(2+n))$. \subsection*{Case ${\bf \alpha \in [0,1/2]}$} The only difference is in the choice of $(r,q)$. Here we set $q = p/2$. In this way the condition (\ref{OurYZCond3Proof}) is ensured by $\alpha \leq 1/2$; in fact: $$ \frac{2}{r} = 1- \frac{2n}{p} \Rightarrow \frac{2}{r} - \frac{4}{s} = -1 + 2 \alpha, $$ so $$ \alpha \leq 1/2 \Rightarrow s/2 \leq r. $$ Notice that now we no longer have the restriction $p\leq \frac{1-n}{\alpha}$. Then the condition (\ref{I2}) becomes $$ p_{0} \leq \frac{p}{2} < \frac{np_{0}}{p_{0}-2}, $$ which by a straightforward calculation leads to (\ref{OurYZCond0ComplicataBis}), $\alpha_{0} \in [(2-n)/2, 2/(2+n))$ and $\max \left( 4, \frac{n}{1 - \alpha} \right) < p$ or $p=4$. \noindent We now show how the condition $\Lambda_{\alpha_{0}, p_{0}, \widetilde{p}_{0}} \geq 0$ can be removed. Let us call it simply $\Lambda$ and suppose $\Lambda < 0$. 
We can still use the localized estimate (\ref{IHeatLoc}) to get the bound $$ \|\mathbbm{1}_{\Pi(R)} u \|_{L^{r}_{T}L^{q}_{x}} \leq R^{-\Lambda} c_{0} \||x|^{\alpha_{0}} u_{0} \|_{L^{p_{0}}_{|x|}L^{\widetilde{p}_{0}}_{\theta}} + d_{0} \||x|^{\alpha} u \|_{L^{s}_{T}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} $$ where $\Pi(R)$ is the parabolic region $$ \Pi(R) = \left\{ (t,x) \in \mathbb{R}^{+} \times \mathbb{R}^{n} \quad \mbox{s.t.} \quad \left| \frac{x}{\sqrt{t}}\right| \leq R \right\}. $$ So regularity is achieved in $\Pi(R)$ and, as $R \rightarrow +\infty$, in the whole space-time. \end{proof} \noindent Then the following local regularity criterion holds: \begin{theorem}\label{OurYZTheoremLoc} Let $n \geq 3$, $u_{0} \in L^{2}_{\sigma}(\mathbb{R}^{n}) \cap H^{1}(\mathbb{R}^{n}) \cap L^{2}_{|x|^{2-n}dx}$, and let $u$ be a weak Leray solution of (\ref{CauchyNS}). \ \noindent If ${\bf \alpha \in [-1/2,0)}$, $\alpha_{0} \in \left[ 1-n, \frac{2-n}{2+n}\right)$, $\max \left(2, \frac{n}{1- \alpha} \right) < p < +\infty$ or $p=2$, and $$ \| |x|^{\alpha_{0}} u_{0}\|_{L^{p_{0}}_{|x|}L^{\widetilde{p}_{0}}_{\theta}} < +\infty $$ with: \begin{equation}\label{OurYZCond0Loc} \alpha_{0} = 1- \frac{n}{p_{0}}, \quad \widetilde{p}_{0} \leq \frac{p}{2}, \quad \Lambda_{\alpha_{0}, p_{0}, \widetilde{p}_{0}} \geq 0; \end{equation} \begin{equation}\label{OurYZCond0ComplicataLoc} \left \{ \begin{array}{lcr} 1 \leq p_{0} \leq p/2 & \mbox{if} & p \leq n \\ 1 \leq p_{0} \leq p/2, \quad p_{0} < \frac{p}{p - n} & \mbox{if} & p > n; \end{array}\right. 
\end{equation} and \begin{equation}\label{OurYZuBound1Loc} \| |x|^{\alpha} u \|_{L^{s}_{T}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} < +\infty, \end{equation} with: \begin{equation}\label{OurYZCondition1Loc} \frac{2}{s}+ \frac{n}{p} = 1-\alpha, \end{equation} \begin{equation}\label{OurYZCondition2Loc} \frac{2}{1-\alpha} < s < +\infty, \end{equation} \begin{equation}\label{OurYZCondition3Loc} \widetilde{p} \geq \widetilde{p}_{L} = \frac{2(n-1)p}{(2\alpha +1)p + 2(n-1)}; \end{equation} then $u$ is regular ($C^{\infty}$ in space variables) on the segment $(0,T) \times \{ 0 \}$. \ \noindent The same holds if ${\bf \alpha \in [0,1)}$, $\alpha_{0} \in \left[ 1-(1-\alpha)n, 1- (1-\alpha)\frac{2n}{2+n} \right)$, $\max \left( 2, \frac{n}{1 - \alpha} \right) < p < +\infty$ or $p=2$, and $$ \| |x|^{\alpha_{0}} u_{0}\|_{L^{p_{0}}_{|x|}L^{\widetilde{p}_{0}}_{\theta}} < +\infty $$ with: \begin{equation}\label{OurYZCond0bisLoc} \alpha_{0} = 1- \frac{n}{p_{0}}, \quad \widetilde{p}_{0} \leq \frac{p}{2}, \quad \Lambda_{\alpha_{0},p_{0},\widetilde{p}_{0}} \geq 0, \end{equation} \begin{equation}\label{OurYZCond0ComplicataBisLoc} \frac{1}{1-\alpha} \leq p_{0} \leq \frac{p}{2}, \quad p_{0} < \frac{p}{(1-\alpha)p -n}; \end{equation} and \begin{equation}\label{OurYZuBound1bisLoc} \| |x|^{\alpha} u \|_{L^{s}_{T}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} < +\infty, \end{equation} with: \begin{equation}\label{OurYZCondition1bisLoc} \frac{2}{s}+ \frac{n}{p} = 1-\alpha, \end{equation} \begin{equation}\label{OurYZCondition2bisLoc} \frac{2}{1-\alpha} < s < +\infty, \end{equation} \begin{equation}\label{OurYZCondition3bisLoc} \widetilde{p} \geq \widetilde{p}_{L} = \frac{2(n-1)p}{p + 2(n-1)}. \end{equation} \end{theorem} \noindent Of course by translations $u$ is regular on the segment $(0,T) \times \{ \bar{x} \}$, provided that all the norms and the weights are centered at $\bar{x}$. 
\begin{remark} We give again some remarks about the indices: \begin{itemize} \item The first conditions in (\ref{OurYZCond0Loc}, \ref{OurYZCond0bisLoc}) and the conditions (\ref{OurYZCondition1Loc}, \ref{OurYZCondition1bisLoc}) follow by requiring invariance with respect to the natural scalings for $u_{0}$ and $u$ respectively, i.e. $$ u_{0} \rightarrow \lambda u_{0}(\lambda x) $$ $$ u \rightarrow \lambda u(\lambda^{2} t, \lambda x); $$ \item Of course we can set $T=+\infty$ to get regularity for all times; \item We get a loss in the negative values of $\alpha$ with respect to theorem \ref{YZTheorem}; we have in fact $-\frac{1}{2} \leq \alpha$ instead of $-1 \leq \alpha$. \end{itemize} \end{remark} \ \noindent We again make complicated assumptions on the initial datum, but the main point is the angular integrability of the solution required in order to get regularity on $(0,T)\times \{ 0 \}$, i.e. the hypothesis $\widetilde{p} \geq \widetilde{p}_{L}$. It comes from requiring $2\Lambda (\alpha,p,\widetilde{p}) \geq \beta$ with $\beta \in [-1, 1)$, since in this way (\ref{DGHeat}) allows us to apply theorem \ref{YZTheorem} directly. \ \begin{proof} Here we want to use theorem (\ref{YZTheorem}) directly, so we need to show: \begin{equation}\label{OurYZCondition1ProofLoc} \| |x|^{\beta} u \|_{L^{r}_{T}L^{q}_{x}} < +\infty, \quad \mbox{with} \quad \frac{2}{r} +\frac{n}{q} =1 - \beta. \end{equation} Let us start from the integral representation \begin{equation}\nonumber u = e^{t \Delta}u_{0} + \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot (u \otimes u)(s)ds. \end{equation} \noindent In order to get (\ref{OurYZCondition1ProofLoc}) we distinguish the cases $\alpha \in [-1/2,0)$ and $\alpha \in [0,1)$. 
\noindent \subsection*{Case ${\bf \alpha \in [-1/2,0)}$} \begin{eqnarray}\nonumber \| |x|^{\beta} u \|_{L^{r}_{T}L^{q}_{x}} &\leq & \| |x|^{\beta} e^{t\Delta}u_{0}\|_{L^{r}_{T}L^{q}_{x}} \\ \nonumber &+& \left\| |x|^{\beta} \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot (u \otimes u)(s) \ ds \right\|_{L^{r}_{T}L^{q}_{x}} \\ \nonumber &=& I + II. \end{eqnarray} By the scaling assumption and proposition (\ref{IDecayCor}) we get: \begin{equation}\label{ILoc} I \leq c_{0} \||x|^{\alpha_{0}}u_{0}\|_{L^{p_{0}}_{|x|}L^{\widetilde{p}_{0}}_{\theta}} \end{equation} provided that \begin{equation}\label{I2Loc} p_{0} \leq q < \frac{np_{0}}{(\alpha_{0} - \beta)p_{0} +n-2}, \quad \widetilde{p}_{0} \leq q, \quad \Lambda_{\alpha_{0}, p_{0}, \widetilde{p}_{0}} \geq 0. \end{equation} To bound $II$ we use proposition (\ref{DDecayCor}) and scaling, so \begin{equation}\nonumber II \leq d_{0} \||x|^{\alpha} u \|^{2}_{L^{s}_{T}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \end{equation} provided that \begin{equation}\label{OurYZCond2ProofLoc} 2 \Lambda(\alpha,p, \widetilde{p}) \geq \beta, \end{equation} \begin{equation}\label{OurYZCond3ProofLoc} p/2, \ \widetilde{p} /2 \leq q, \quad s/2 \leq r. \end{equation} Condition (\ref{OurYZCond2ProofLoc}) is ensured by \begin{equation}\label{pLocInProofLoc} \widetilde{p} \geq \frac{2(n-1)p}{(2\alpha -\beta)p + 2(n-1)}. \end{equation} The last point is to show that a triple $(\beta,r,q)$ satisfying (\ref{OurYZCond3ProofLoc}) and consistent with the scaling relation actually exists. \noindent Because we are using theorem (\ref{YZTheorem}), we need to restrict to $-1\leq \beta$ and, in order to get the lowest possible value for $\widetilde{p}$, we actually choose $\beta =-1$. In this way the condition (\ref{pLocInProofLoc}) becomes (\ref{OurYZCondition3Loc}). With this choice of $\beta$ we have $$ \widetilde{p}_{L} \leq p \quad \mbox{if} \quad -1/2 \leq \alpha, $$ which is in fact the range of $\alpha$ to which we have restricted. 
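The claim just displayed amounts to the following elementary computation (a sketch, with $\widetilde{p}_{L}$ as in (\ref{PtildeDef}) and assuming the denominator positive):

```latex
% \widetilde{p}_{L} \leq p, where for \beta=-1 one has \widetilde{p}_{L} = \frac{2(n-1)p}{(2\alpha+1)p+2(n-1)}:
\frac{2(n-1)p}{(2\alpha+1)p+2(n-1)} \leq p
\;\Longleftrightarrow\;
2(n-1) \leq (2\alpha+1)p + 2(n-1)
\;\Longleftrightarrow\;
(2\alpha+1)\,p \geq 0
\;\Longleftrightarrow\;
\alpha \geq -\tfrac{1}{2}.
```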
Then we choose $q=p/2$, so by the scaling relations we get $$ \frac{2}{r} - \frac{4}{s} = 2 \alpha -2 \leq 0, $$ which is consistent with $s/2 \leq r$. Of course because of the choice $q= p/2$ and the scaling we have to require $$ \max \left( 2, \frac{n}{1 - \alpha} \right) < p, \qquad \mbox{or} \quad p=2. $$ Then the condition (\ref{I2Loc}) becomes $$ p_{0} \leq q < \frac{np_{0}}{2p_{0} - 2}, $$ which by a straightforward calculation leads to (\ref{OurYZCond0ComplicataLoc}), $\alpha_{0} \in \left[ 1-n, \frac{2-n}{2+n} \right)$. \subsection*{Case ${\bf \alpha \in [0,1)}$} The only difference is again in the choice of $(\beta, r, q)$. Because $\alpha \geq 0$ we can reach smaller values of $\widetilde{p}$, and we do so by requiring $2\alpha - \beta =1$ in (\ref{pLocInProofLoc}), so that $$ \widetilde{p} \geq \frac{2(n-1)p}{p + 2(n-1)}. $$ More precisely we choose $$ (\beta, r, q) = (2\alpha -1, s/2, p/2). $$ A simple calculation shows that this choice is consistent with the scaling relation. Now by the condition (\ref{I2Loc}) and scaling we have $$ p_{0} \leq q < \frac{np_{0}}{(2 - 2\alpha)p_{0} -2}, $$ which by a straightforward calculation leads to (\ref{OurYZCond0ComplicataBisLoc}), $\alpha_{0} \in \left[ 1-(1-\alpha)n, 1-(1-\alpha)\frac{2n}{2+n} \right)$. \end{proof} \section{Well posedness with small data in weighted $L^p$ spaces}\label{smallDataMixed} In chapter \ref{chap2} we introduced the fundamental results and ideas of the small data theory for solutions of the Navier-Stokes equations. In this section we come back to this topic in order to get results in weighted Lebesgue spaces with angular integrability. In particular we will focus on theorem \ref{CKNSmallData}. As observed, the small data theory is very well understood in the case of translation-invariant adapted spaces, but much less is known without this hypothesis. 
We will basically consider small data $u_{0}$ in the weighted space $L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}$, which is endowed with the norm $$ \left( \int_{0}^{+\infty} \|f( \rho \ \cdot\ )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})} \rho^{\alpha p + n-1} d \rho \right)^{\frac1p}. $$ Of course by translations all the results we are going to prove still hold if the norms and the weights are centred at some point $\bar{x}$ different from the origin. In order to develop a small data theory we then have to restrict to critical spaces, so we need invariance under the scaling $ u_{0} \rightarrow \lambda u_{0}(\lambda x) $, which, as observed before, is the right one for the initial datum of system (\ref{CauchyNS}). This leads to the scaling relation $$ \alpha = 1-\frac{n}{p}. $$ Notice that this family contains the critical spaces $L^{3}$ and $L^{2}_{|x|^{2-n} dx}$ considered in theorems \ref{CKNSmallData} and \ref{CKNSmallData}. Regular strong solutions are available for small data in $L^{3}$, while smallness in $L^{2}_{|x|^{2-n} dx}$ gives only regularity localized in the interior of a space-time parabola centered at the origin. We conjecture that such behaviour is typical of the power weights $|x|^{\alpha}$ with $\alpha <0$. This is not surprising because in that case, even if the norms are defined on the whole space $\mathbb{R}^{n}$, the weights give a kind of localization near the origin and a loss of information at infinity occurs. We then show that this information can be recovered by a certain amount of angular integrability, in which case global regularity in space and time is available. 
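The scaling relation above follows from a direct computation (a sketch, using the norm just defined and the substitution $\sigma=\lambda\rho$):

```latex
% With u_{0,\lambda}(x) := \lambda\, u_{0}(\lambda x):
\||x|^{\alpha} u_{0,\lambda}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}
 = \lambda \left( \int_{0}^{+\infty} \|u_{0}(\lambda\rho\ \cdot\ )\|^{p}_{L^{\widetilde{p}}(\mathbb{S}^{n-1})}
      \rho^{\alpha p + n-1}\, d\rho \right)^{\frac1p}
 = \lambda^{\,1-\alpha-\frac{n}{p}}\, \||x|^{\alpha} u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}},
% so invariance for every \lambda>0 forces 1-\alpha-\tfrac{n}{p}=0, i.e. \alpha = 1-\tfrac{n}{p}.
```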
This is the case as soon as $\widetilde{p} \geq \frac{(n-1)p}{p-1}$, as shown by the following \begin{theorem}\label{WeighKato} Let $n\geq 3$, $p \in [2,2+n]$, $ \widetilde{p} \in [1, +\infty]$ and $\alpha, \widetilde{p}$ such that: \begin{equation}\label{eq:condDL(Heat)1InThm} \alpha=1 - \frac{n}{p}, \qquad \widetilde{p} \geq \frac{(n-1)p}{p-1}. \end{equation} Let then $u_{0} \in L^{2}_{\sigma}(\mathbb{R}^{n})$. There exists $\varepsilon > 0$, depending on all the parameters, such that if $$ \||x|^{\alpha}u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} < \varepsilon $$ then there exists a unique global solution $u$ of the integral problem (\ref{IntegralCauchyNS}). Furthermore for all $p \leq q < \frac{np}{(\alpha - \beta)p + n-2}$, $\widetilde{p} \leq \widetilde{q}$, $r \in (1, +\infty)$ it holds that \begin{equation}\label{NotInfty} \||x|^{\beta} u\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} < 2c_{0}\varepsilon, \end{equation} provided that \begin{equation}\label{eq:condDL(Heat)2InThm} \Lambda(\alpha, p, \widetilde{p}) > \Lambda (\beta,q,\widetilde{q}), \qquad \frac{2}{r} + \frac{n}{q} = 1- \beta, \end{equation} and for all $p \leq q < \frac{np}{( 1 + \alpha - \beta)p + n-2}$, $\widetilde{p} \leq \widetilde{q}$, $r \in (1, +\infty)$ it holds that \begin{equation}\label{NotInftyDer} \||x|^{\beta} \nabla u\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} < 2c_{1}\varepsilon, \end{equation} provided that \begin{equation}\label{eq:condDL(Heat)2InThmDer} \Lambda(\alpha, p, \widetilde{p}) > \Lambda (\beta,q,\widetilde{q}), \qquad \frac{2}{r} + \frac{n}{q} = 2 - \beta. \end{equation} So $u$ is a regular ($C^{\infty}$ in the space variables) classical solution of (\ref{CauchyNS}). 
\end{theorem} \begin{proof} Let us start from the integral representation: $$ u = e^{t \Delta}u_{0} + \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot (u \otimes u)(s)ds $$ and consider the Picard sequence $$ \begin{array}{rcl} u_{1} & = &e^{t\Delta}u_{0} \\ u_{2} & = & e^{t \Delta}u_{0} + \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot (u_{1} \otimes u_{1})(s)ds \\ u_{n} &=& e^{t\Delta}u_{0} + \int_{0}^{t}e^{(t-s)\Delta}\mathbb{P}\nabla \cdot (u_{n-1} \otimes u_{n-1})(s)ds. \end{array} $$ We will show that $u_{n}$ is a Cauchy sequence in all the Banach spaces $L^{r}_{t}L^{q}_{|x|^{\beta q}d|x|}L^{\widetilde{q}}_{\theta}$ such that (\ref{eq:condDL(Heat)2InThm}) holds. Let us start by noticing that a straightforward calculation shows that $p \leq q < \frac{np}{(\alpha- \beta)p + n-2}$, $\widetilde{p} \leq \widetilde{q}$, $r \in (1,+\infty)$, (\ref{eq:condDL(Heat)2InThm}) and (\ref{eq:condDL(Heat)1InThm}) are satisfied by a non-empty set of indices. We use the weighted estimates (\ref{IHeat}), (\ref{DGHeatq=p}) with $\eta =0$ to bound by induction: \begin{eqnarray} \||x|^{\beta} u_{1}\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} & \leq & c_{0}\||x|^{\alpha}u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} = c_{0}\varepsilon \nonumber \\ \||x|^{\beta} u_{2}\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}& \leq & c_{0}\varepsilon + d_{0} \||x|^{\beta}u_{1}\|^{2}_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &\leq & 2c_{0}\varepsilon \nonumber \end{eqnarray} if we take a small $\varepsilon$ such that\footnote{At this step it would suffice to take $d_{0}c_{0}\varepsilon \leq 1$; the stronger condition $4d_{0}c_{0}\varepsilon \leq 1$ is used starting from the third step.} $4d_{0}c_{0}\varepsilon \leq 1$. 
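The inductive step rests on the following elementary bound (a sketch of the arithmetic, with $a_{n} = \||x|^{\beta} u_{n}\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}$):

```latex
% If a_{n} \leq c_{0}\varepsilon + d_{0}\, a_{n-1}^{2} and a_{n-1} \leq 2c_{0}\varepsilon, then
a_{n} \;\leq\; c_{0}\varepsilon + d_{0}\,(2c_{0}\varepsilon)^{2}
      \;=\; c_{0}\varepsilon\,\bigl(1 + 4 d_{0} c_{0}\varepsilon\bigr)
      \;\leq\; 2 c_{0}\varepsilon
\qquad \mbox{whenever} \quad 4 d_{0} c_{0}\varepsilon \leq 1.
```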
Then by induction \begin{eqnarray}\label{moment} \||x|^{\beta} u_{n}\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}& \leq & c_{0}\varepsilon + d_{0} \||x|^{\beta}u_{n-1}\|^{2}_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \\ &\leq & 2c_{0}\varepsilon \nonumber \end{eqnarray} again because $4d_{0}c_{0}\varepsilon \leq 1$. So the sequence is well defined in $L^{r}_{t}L^{q}_{|x|^{\beta q}d|x|}L^{\widetilde{q}}_{\theta}$. Now we have to show that $u_{n}$ is a Cauchy sequence. Again using (\ref{IHeat}), (\ref{DGHeatq=p}) we bound the differences: \begin{eqnarray} & & \||x|^{\beta} (u_{n} - u_{n-1})\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}\nonumber \\ & = & \left\||x|^{\beta}\int_{0}^{t}e^{(t-s)\Delta}\mathbb{P} \nabla \cdot (u_{n-1}\otimes u_{n-1} - u_{n-2}\otimes u_{n-2})(s)ds\right\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &\leq & \left\||x|^{\beta}\int_{0}^{t}e^{(t-s)\Delta}\mathbb{P} \nabla \cdot (u_{n-1}\otimes (u_{n-1}-u_{n-2}))(s)ds\right\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &+& \left\||x|^{\beta}\int_{0}^{t}e^{(t-s)\Delta}\mathbb{P} \nabla \cdot ((u_{n-1}-u_{n-2})\otimes u_{n-2})(s)ds\right\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &\leq & d_{0}\left(\||x|^{\beta}u_{n-1}\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} + \||x|^{\beta}u_{n-2}\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}\right)\||x|^{\beta}(u_{n-1} - u_{n-2})\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ & \leq & 4d_{0}c_{0}\varepsilon\||x|^{\beta} (u_{n-1} - u_{n-2})\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}, \nonumber \end{eqnarray} where we used the uniform bound (\ref{moment}) and the bilinear splitting $u_{n-1}\otimes u_{n-1} - u_{n-2}\otimes u_{n-2} = u_{n-1}\otimes (u_{n-1}-u_{n-2}) + (u_{n-1}-u_{n-2})\otimes u_{n-2}$.
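As a quick numerical sanity check of this contraction (a scalar caricature of ours with toy constants $c_{0}=d_{0}=1$, $\varepsilon=0.1$, not part of the proof), one can iterate the majorant recursion $a_{n} = c_{0}\varepsilon + d_{0}a_{n-1}^{2}$, which dominates the norms $\||x|^{\beta}u_{n}\|$:

```python
def picard_majorant(c0, d0, eps, steps=30):
    """Iterate the scalar majorant a_n = c0*eps + d0*a_{n-1}**2."""
    seq = [c0 * eps]                     # a_1 bounds the norm of u_1
    for _ in range(steps):
        seq.append(c0 * eps + d0 * seq[-1] ** 2)
    return seq

c0, d0, eps = 1.0, 1.0, 0.1              # toy constants: 4*d0*c0*eps = 0.4 <= 1
seq = picard_majorant(c0, d0, eps)
diffs = [b - a for a, b in zip(seq, seq[1:])]
```

Each difference satisfies $a_{n+1}-a_{n} = d_{0}(a_{n}+a_{n-1})(a_{n}-a_{n-1}) \leq 4d_{0}c_{0}\varepsilon\,(a_{n}-a_{n-1})$, which is exactly the uniform bound and geometric decay used in the proof.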
In $n-2$ steps we get: \begin{eqnarray} & & \||x|^{\beta} (u_{n}-u_{n-1})\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &\leq & (4d_{0}c_{0}\varepsilon)^{n-2} \||x|^{\beta}(u_{2}-u_{1})\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &=& (4d_{0}c_{0}\varepsilon)^{n-2} \left\||x|^{\beta}\int_{0}^{t}e^{(t-s)\Delta}\mathbb{P} \nabla \cdot (u_{1}\otimes u_{1})(s)ds\right\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &\leq & (4d_{0}c_{0}\varepsilon)^{n-2}d_{0}(c_{0}\varepsilon)^{2} \leq (4d_{0}c_{0}\varepsilon)^{n-2}\frac{c_{0}\varepsilon}{4}. \nonumber \end{eqnarray} Hence, taking $\varepsilon$ smaller if necessary so that $4d_{0}c_{0}\varepsilon \leq 1/2$, the differences $u_{n}-u_{n-1}$ are bounded by a geometric series, and it easily follows that $u_{n}$ is a Cauchy sequence. \noindent The regularity of $u$ is now a direct consequence of Theorem \ref{OurYZTheorem}. In particular, by setting\footnote{It is again straightforward to show that conditions (\ref{eq:condDL(Heat)2InThm}) are satisfied also under the restriction $\beta=0$, $q=\widetilde{q}$.} $\beta=0$, $q=\widetilde{q}$, it is possible to refer to the original Serrin result (\ref{SerrinRegularity}). Notice that $$ \Lambda(\alpha,p,\widetilde{p}) \geq 0 \Rightarrow \widetilde{p} \geq \frac{(n-1)p}{p-1}, $$ $$ p \leq q < \frac{np}{p-2} \Rightarrow p \in [2,2+n]. $$ In order to prove (\ref{NotInftyDer}) we use (\ref{IHeat}), (\ref{DGHeatq=p}) with $\eta = 1$, so that \begin{eqnarray} \||x|^{\beta} \nabla u_{1}\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} & \leq & c_{1}\||x|^{\alpha}u_{0}\|_{L^{p}_{|x|}L^{\widetilde{p}}_{\theta}} \leq c_{1}\varepsilon \nonumber \\ \||x|^{\beta} \nabla u_{2}\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}& \leq & c_{1}\varepsilon + d_{1} \||x|^{\beta} \nabla u_{1}\|^{2}_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \nonumber \\ &\leq & 2c_{1}\varepsilon \nonumber \end{eqnarray} if we take $\varepsilon$ small enough that $4d_{1}c_{1}\varepsilon \leq 1$.
Then by induction \begin{eqnarray}\label{momentDer} \||x|^{\beta} \nabla u_{n}\|_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}}& \leq & c_{1}\varepsilon + d_{1} \||x|^{\beta}\nabla u_{n-1}\|^{2}_{L^{r}_{t}L^{q}_{|x|}L^{\widetilde{q}}_{\theta}} \\ &\leq & 2c_{1}\varepsilon. \nonumber \end{eqnarray} \end{proof} \noindent Actually, this theorem is a particular case of the Koch-Tataru theorem, and can be proved directly by the methods in \cite{Tat} and the estimates in Proposition \ref{IDecayCor}. Nevertheless, we prefer a more direct proof, in particular for the bounds (\ref{NotInfty}), (\ref{NotInftyDer}). Moreover, regularity is a genuinely difficult problem for Koch-Tataru solutions (see \cite{Banica}), so we prefer to prove it directly in our case. \noindent The theorem shows that, if we work in $L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}$, an angular integrability sufficient to recover the loss of information for large $|x|$ is $$ \widetilde{p}_{G} = \frac{(n-1)p}{p-1}. $$ It is now very interesting to understand what happens in the range $$ p < \widetilde{p} < \widetilde{p}_{G}. $$ We have no definitive results in this direction, but a clear way to pursue them: we will show how this problem is connected to the behaviour of the Leray solutions close to the null solution. In order to simplify the notation as much as possible, let us consider just the spaces $$ L^{2}_{|x|^{-1}d|x|}L^{\widetilde{p}}_{\theta}, \quad 2 < \widetilde{p} < \widetilde{p}_{G}=4, $$ so we set $n=3$, $p=2$. This is also the most interesting case, because the quantities involved are related to real physical quantities. Let us now recall Theorem \ref{CKNSmallData}. \begin{theorem}[Caffarelli-Kohn-Nirenberg]\label{CKNSmallDataBis} Let $u_{0} \in L^{2}_{\sigma}(\mathbb{R}^{n})$ and let $u$ be a suitable weak solution of (\ref{CauchyNS}).
There exists an absolute constant $\varepsilon_{0} > 0$ such that if $$ \| |x|^{-1/2} u_{0} \|_{L^{2}(\mathbb{R}^{n})} = \varepsilon < \varepsilon_{0}, $$ then $u$ is regular ($C^{\infty}$ in the space variables) in the interior of the parabola \begin{equation}\label{PiLimit} \Pi_{2} = \left\{ (t,x) \quad \mbox{s.t.} \quad t > \frac{|x|^{2}}{\varepsilon_{0}- \varepsilon} \right\}. \end{equation} \end{theorem} \noindent We used the notation $\Pi_{2}$ to recall that the authors work with $L^{2}$ integrability in the angular variable. What we expect is a growth in the size of $\Pi_{\widetilde{p}}$ for bigger values of $\widetilde{p}$, and the filling of the whole space-time in the limit $\widetilde{p} \rightarrow 4^{-}$. To be more precise, we expect to show regularity in the set \begin{equation}\label{Conj1} \Pi_{\widetilde{p}} = \left\{ (t,x) \in \mathbb{R}^{+} \times \mathbb{R}^{n} \quad \mbox{s.t.} \quad t > \frac{c(\widetilde{p}) |x|^{2}}{\varepsilon_{0} - \varepsilon} \right\}, \end{equation} for small data in $L^{2}_{|x|^{-1}d|x|}L^{\widetilde{p}}_{\theta}$, and furthermore \begin{equation}\label{Conj2} c(\widetilde{p}) \rightarrow 0, \quad \mbox{as} \quad \widetilde{p} \rightarrow 4^{-}. \end{equation} In this way, the gap between Theorems \ref{CKNSmallDataBis} and \ref{WeighKato} would be completely bridged. It turns out that this behaviour is strictly connected with a possible improvement of Theorem \ref{CKNSmallData} in the limit $\varepsilon \rightarrow 0$. This is a non-trivial point, and of course of intrinsic interest. If we take the limit $\varepsilon \rightarrow 0$ in $(\ref{PiLimit})$ we get the maximal regularity set $$ \Pi = \left\{ t > \frac{|x|^{2}}{\varepsilon_{0}} \right\}. $$ On the other hand, it seems reasonable to conjecture improvements on the size of $\Pi$.
A possibility is \begin{equation}\label{PossibleImprov} \Pi_{\varepsilon} = \left\{ t > \frac{c(\varepsilon)|x|^{2}}{\varepsilon_{0} - \varepsilon} \right\}, \end{equation} with \begin{equation}\label{ConjVare} c(\varepsilon) \rightarrow 0, \quad \mbox{as} \quad \varepsilon \rightarrow 0. \end{equation} Let us now show how (\ref{ConjVare}) implies (\ref{Conj1}), (\ref{Conj2}). The idea is to perform a decomposition of the initial datum, inspired by a similar argument in \cite{Calderon}. We split $u_{0}$ as $$ u_{0}= v_{0} + w_{0} $$ with $$ \nabla \cdot v_{0}= \nabla \cdot w_{0} =0, $$ and $$ v_{0} \in L^{2}_{|x|^{-1}dx}, \quad w_{0} \in L^{2}_{|x|^{-1}d|x|}L^{4}_{\theta}. $$ Moreover we require that $$ \| |x|^{-1/2} w_{0}\|_{L^{2}_{|x|}L^{4}_{\theta}} \rightarrow 0 \quad \mbox{as} \quad \widetilde{p} \rightarrow 2^{+} $$ and $$ \| |x|^{-1/2} v_{0}\|_{L^{2}_{x}} \rightarrow 0 \quad \mbox{as} \quad \widetilde{p} \rightarrow 4^{-}. $$ In order to achieve such a decomposition we slightly modify the argument in \cite{Calderon}. Let $s >0$ and define $$ u_{0} = u_{0,<s} + u_{0,>s}, $$ where $u_{0,<s}$ is equal to $u_{0}$ if $|u_{0}| <s$ and is zero otherwise. Then $v_{0}, w_{0}$ are defined by $$ v_{0} = \lim_{t \rightarrow 0} e^{t\Delta} \mathbb{P} u_{0,>s}, $$ $$ w_{0} = \lim_{t \rightarrow 0} e^{t\Delta} \mathbb{P} u_{0,<s}. $$ It follows easily from the representation of $e^{t\Delta} \mathbb{P}$ as a convolution operator that the limits are attained in $L^{2}_{|x|^{-1}dx}$ and $L^{2}_{|x|^{-1}d|x|}L^{4}_{\theta}$ respectively, and that $\nabla \cdot v_{0}= \nabla \cdot w_{0} = 0$.
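Before running the interpolation argument, the exponent bookkeeping can be double-checked numerically (a sketch of ours; here $\theta$ is defined by $1/\widetilde{p} = (1-\theta)/2 + \theta/4$, so $\widetilde{p} = 4/(2-\theta)$, and the inessential constant $C$ appearing below is set to $1$):

```python
def ptilde(theta):
    """1/ptilde = (1 - theta)/2 + theta/4, i.e. ptilde = 4/(2 - theta)."""
    return 1.0 / ((1.0 - theta) / 2.0 + theta / 4.0)

def A(theta):
    """A_theta = (theta/(1-theta))**((1-theta)/(2-theta)), with C = 1."""
    return (theta / (1.0 - theta)) ** ((1.0 - theta) / (2.0 - theta))

def B(theta):
    """B_theta = (theta/(1-theta))**(-theta/(2-theta)), with C = 1."""
    return (theta / (1.0 - theta)) ** (-theta / (2.0 - theta))

# Verify the exponent identities on a grid of theta in (0, 1).
checks = []
for i in range(1, 100):
    th = i / 100.0
    p = ptilde(th)
    checks.append(
        abs(p - 4.0 / (2.0 - th)) < 1e-12
        and abs((1.0 - p / 4.0) - (1.0 - th) / (2.0 - th)) < 1e-12
        and abs((1.0 - p / 2.0) + th / (2.0 - th)) < 1e-12
        and 2.0 < p < 4.0
    )
```

The limiting behaviour $A_{\theta}\rightarrow 0$ as $\theta \rightarrow 0$ and $B_{\theta}\rightarrow 0$ as $\theta\rightarrow 1$ can be observed by evaluating near the endpoints.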
Furthermore, by a simple interpolation argument\footnote{See \cite{Calderon}.} \begin{equation}\label{Decomposition} \begin{array}{lcl} \| |x|^{-1/2} w_{0}\|_{L^{2}_{|x|}L^{4}_{\theta}} &\leq & C s^{1-\frac{\widetilde{p}}{4}} \| |x|^{-1/2} u_{0} \|^{\widetilde{p}/ 4}_{L^{2}_{|x|}L^{\widetilde{p}}_{\theta}} \\ && \\ \| |x|^{-1/2} v_{0}\|_{L^{2}_{x}} &\leq & C s^{1-\frac{\widetilde{p}}{2}} \| |x|^{-1/2} u_{0} \|^{\widetilde{p} /2}_{L^{2}_{|x|}L^{\widetilde{p}}_{\theta}}, \end{array} \end{equation} for each $s > 0$. Then we choose $s= \frac{\theta}{1-\theta}$, where $\theta$ is defined by $$ \frac{1}{\widetilde{p}} = \frac{1-\theta}{2} + \frac{\theta}{4}. $$ Notice also that $$ 1 - \frac{\widetilde{p}}{4} = \frac{1 - \theta}{2 - \theta}, \qquad 1- \frac{\widetilde{p}}{2} = - \frac{\theta}{2-\theta}. $$ To simplify the notation we set $$ A_{\theta} = C \left( \frac{\theta}{1-\theta} \right)^{\frac{1- \theta}{2 - \theta}}, \qquad B_{\theta} = C \left( \frac{\theta}{1-\theta} \right)^{- \frac{\theta}{2-\theta}}, $$ in such a way that (\ref{Decomposition}) becomes \begin{equation}\label{DecompositionBis} \begin{array}{lcl} \| |x|^{-1/2} w_{0}\|_{L^{2}_{|x|}L^{4}_{\theta}} &\leq & A_{\theta} \| |x|^{-1/2} u_{0} \|^{\widetilde{p}/4}_{L^{2}_{|x|}L^{\widetilde{p}}_{\theta}} \\ && \\ \| |x|^{-1/2} v_{0}\|_{L^{2}_{x}} &\leq & B_{\theta} \| |x|^{-1/2} u_{0} \|^{\widetilde{p} /2}_{L^{2}_{|x|}L^{\widetilde{p}}_{\theta}}. \end{array} \end{equation} A straightforward calculation shows that \begin{equation} \lim_{\theta \rightarrow 0} A_{\theta}=0, \quad \lim_{\theta \rightarrow 1} A_{\theta}=C; \end{equation} while \begin{equation}\label{ultima} \lim_{\theta \rightarrow 0} B_{\theta}=C, \quad \lim_{\theta \rightarrow 1} B_{\theta}=0.
\end{equation} \noindent Then we consider the Cauchy problems \begin{equation}\label{CauchyNSForW} \left \{ \begin{array}{rcl} \partial_{t}w + (w \cdot \nabla) w +\nabla p_{w} & = & \Delta w \\ \nabla \cdot w & = & 0 \\ w(0) & = & w_{0}, \end{array}\right. \end{equation} and \begin{equation}\label{CauchyNSForv} \left \{ \begin{array}{rcl} \partial_{t}v + (v \cdot \nabla) v + (v \cdot \nabla) w + (w \cdot \nabla) v +\nabla p_{v} & = & \Delta v \\ \nabla \cdot v & = & 0 \\ v(0) & = & v_{0}. \end{array}\right. \end{equation} Of course $u=v+w$, and the pressures $p_{v}, p_{w}$ can still be recovered from $v,w$ through $$ p_{w} = C \left( \sum_{i,j =1}^{n} R_{i}R_{j}(w_{i}w_{j}) \right), $$ and $$ p_{v} = C \left( \sum_{i,j =1}^{n} R_{i}R_{j}(v_{i}v_{j}) +2 \sum_{i,j =1}^{n} R_{i}R_{j}(v_{i}w_{j}) \right). $$ First, we notice that the global regularity of $w$ is ensured by Theorem \ref{WeighKato}, provided that $$ A_{\theta} \| |x|^{-1/2} u_{0} \|^{\widetilde{p}/4}_{L^{2}_{|x|}L^{\widetilde{p}}_{\theta}} < \varepsilon_{4}. $$ We used the notation $\varepsilon_{4}$ to emphasize that the smallness condition is on the $L^{2}_{|x|^{-1}d|x|}L^{4}_{\theta}$ norm of $w_{0}$. Then by (\ref{NotInfty}), (\ref{NotInftyDer}) we also get the bounds \begin{equation}\label{BoundOnW} \begin{array}{lcl} \| |x|^{\beta} w \|_{L^{r}_{t}L^{q}_{x}} &\leq & 2c_{0} \varepsilon_{4}, \quad \frac{2}{r} + \frac{3}{q} = 1-\beta \\ && \\ \| |x|^{\beta} \nabla w \|_{L^{r}_{t}L^{q}_{x}} &\leq & 2c_{1} \varepsilon_{4}, \quad \frac{2}{r} + \frac{3}{q} = 2-\beta. \end{array} \end{equation} These bounds are necessary in order to handle the terms $(v \cdot \nabla) w, (w \cdot \nabla) v$ in (\ref{CauchyNSForv}). Then, if we are able to prove\footnote{Actually we conjecture (\ref{PossibleImprov}), (\ref{ConjVare}) for the perturbed system (\ref{CauchyNSForv}).
The additional terms are anyway easily handled by using (\ref{BoundOnW}).} (\ref{PossibleImprov}), (\ref{ConjVare}), we get the following regularity set for $v$: $$ \Pi = \left\{ (t,x) \quad \mbox{s.t.} \quad t > \frac{c(\varepsilon)|x|^{2}}{\varepsilon_{0}- \varepsilon} \right\}, $$ with $\varepsilon = \| |x|^{-1/2}v_{0}\|_{L^{2}_{x}} \leq B_{\theta} \| |x|^{-1/2} u_{0} \|^{\widetilde{p} /2}_{L^{2}_{|x|}L^{\widetilde{p}}_{\theta}}$ sufficiently small. Then by using (\ref{ultima}) we get $c(\varepsilon) \rightarrow 0$ as $ \widetilde{p} \rightarrow 4^{-}$ (or $\theta \rightarrow 1$). \chapter*{Outlooks and remarks} The consequences of angular integrability in Sobolev embeddings and PDEs have been considered by many authors. We have focused mainly on the applications to the Navier-Stokes equation, but, as mentioned in Section \ref{SobEmb} (Chapter \ref{SectInequality}), this point of view is natural and useful in the context of Strichartz estimates for the wave and Schr\"odinger equations on $\mathbb{R}^{n}$. A comprehensive reference for its application to the wave equation is \cite{JiangWangYu10-a}. \noindent The consequences of higher angular integrability have been explored also in the context of the Dirac equation, see \cite{DanconaCacciafesta11-a}, \cite{MachiharaNakamuraNakanishi05-a}. \noindent We conclude by suggesting some additional consequences of Propositions \ref{IDecayCor}, \ref{DDecayCor}. As we have seen, the key point in global regularity results is the requirement \begin{equation}\label{LambdaOtlooks} \Lambda (\alpha, p, \widetilde{p}) \geq 0 \end{equation} for a solution $u$ bounded in the norm \begin{equation}\label{BoundOutlooks} \| |x|^{\alpha} u \|_{L^{s}_{t}L^{p}_{|x|}L^{\widetilde{p}}_{\theta}}, \qquad \frac{2}{s} + \frac{n}{p} = 1 - \alpha. \end{equation} The regularity and uniqueness problems are thus closely connected.
As an example we have given Theorem \ref{SerrinUniqness}, in which the uniqueness of weak solutions satisfying \begin{equation}\nonumber \| u \|_{L^{s}_{t}L^{p}_{x}} < +\infty \end{equation} is proved. In the same spirit, by applying Sobolev embeddings in $L^{p}_{|x|^{\alpha p}d|x|}L^{\widetilde{p}}_{\theta}$ spaces under condition (\ref{LambdaOtlooks}), we get uniqueness of weak solutions bounded in the norms (\ref{BoundOutlooks}). \noindent We have, however, mainly focused on Theorems \ref{OurYZTheorem}, \ref{OurYZTheoremLoc}, to show the difference between local and global regularity results. These theorems can be further extended in some directions. In particular, a larger set of indices for the $L^{p}$ integrability can be covered. The restrictions on $p$ from below, i.e. $$ \frac{n}{1-\alpha} < p, \qquad 4 \leq p \quad \mbox{or} \quad 2 \leq p, $$ can be relaxed by using, respectively, the local regularity criteria in \cite{CKN}, \cite{Kukavica}. We omit the details. \chapter*{Acknowledgments} The author would like to thank prof. Piero D'Ancona for the work done together, and for his constant suggestions and encouragement.
https://arxiv.org/abs/1709.01982
Stabilizing Weighted Graphs
An edge-weighted graph $G=(V,E)$ is called stable if the value of a maximum-weight matching equals the value of a maximum-weight fractional matching. Stable graphs play an important role in some interesting game theory problems, such as network bargaining games and cooperative matching games, because they characterize instances which admit stable outcomes. Motivated by this, in the last few years many researchers have investigated the algorithmic problem of turning a given graph into a stable one, via edge- and vertex-removal operations. However, all the algorithmic results developed in the literature so far only hold for unweighted instances, i.e., assuming unit weights on the edges of $G$.We give the first polynomial-time algorithm to find a minimum cardinality subset of vertices whose removal from $G$ yields a stable graph, for any weighted graph $G$. The algorithm is combinatorial and exploits new structural properties of basic fractional matchings, which are of independent interest. In particular, one of the main ingredients of our result is the development of a polynomial-time algorithm to compute a basic maximum-weight fractional matching with minimum number of odd cycles in its support. This generalizes a fundamental and classical result on unweighted matchings given by Balas more than 30 years ago, which we expect to prove useful beyond this particular application.In contrast, we show that the problem of finding a minimum cardinality subset of edges whose removal from a weighted graph $G$ yields a stable graph, does not admit any constant-factor approximation algorithm, unless $P=NP$. In this setting, we develop an $O(\Delta)$-approximation algorithm for the problem, where $\Delta$ is the maximum degree of a node in $G$.
\section{Introduction} Several interesting game theory problems are defined on networks, where the vertices represent players and the edges model the way players can interact with each other. In many such games, the structure of the underlying graph that describes the interactions among players is essential in determining the existence of stable outcomes for the corresponding games, i.e., outcomes where players have no incentive to deviate. Popular examples are \emph{cooperative matching} games, introduced by Shapley and Shubik~\cite{journals/ijgt/Shapley71}, and \emph{network bargaining} games, defined by Kleinberg and Tardos~\cite{conf/stoc/KleinbergT08}, both extensively studied in the game theory community. Instances of such games are described by a graph $G=(V,E)$ with edge weights $w \in \mathbb R^E_{\geq 0}$, where $V$ represents a set of players, and the value of a \emph{maximum-weight matching}, denoted as $\nu(G)$, is the total value that the players could get by interacting with each other. An important role in such games is played by so-called \emph{stable} graphs. An edge-weighted graph $G=(V,E)$ is called stable if the value $\nu(G)$ of a maximum-weight matching equals the value of a maximum-weight \emph{fractional} matching, denoted as $\nu_f(G)$. Formally, $\nu_f(G)$ is given by the optimal value of the standard linear programming relaxation of the matching problem, defined as \begin{equation}\tag{P} \nu_f(G) := \max\set{w^{\top}x:x(\delta(v))\leq 1 \; \forall v\in V, x\geq 0}. \end{equation} Here $x$ is a vector in $\mathbb R^E$, $\delta(v)$ denotes the set of edges incident to the node $v$, and for a set $F\subseteq E$, $x(F)=\sum_{e\in F} x_e$. Feasible solutions of the above LP are called \emph{fractional matchings}. The interplay between stable graphs and these network games is as follows.
In cooperative matching games \cite{journals/ijgt/Shapley71}, the goal is to find an allocation of the value $\nu(G)$ among the vertices, given as a vector $y \in \mathbb R^V_{\geq 0}$, such that no subset $S \subseteq V$ has an incentive to form a \emph{coalition} to deviate. This condition is formally defined by the constraints $\sum_{v \in S} y_v \geq \nu(G[S]), \forall S \subseteq V$, where $G[S]$ denotes the subgraph induced by $S$, and an allocation $y$ that satisfies the above set of constraints is called \emph{stable}. Deng et al.~\cite{journals/mor/DengIN99} proved that a stable allocation exists if and only if the graph describing the game is a stable graph. This is an easy consequence of LP duality. If $y$ is a stable allocation, then $y$ is a feasible dual solution of value $\nu(G)$, showing that $\nu_f(G)=\nu(G)$. Conversely, if $\nu_f(G)=\nu(G)$, then an optimal dual solution yields a stable allocation of $\nu(G)$. In network bargaining games \cite{conf/stoc/KleinbergT08}, each edge $e$ represents a deal of value $w_e$. A player can enter in a deal with at most one neighbor, and when a deal is made, the players have to agree on how to split the value of the deal between them. An outcome of the game is given by a pair $(M, y)$, where $M$ is a matching of $G$ and stands for the set of deals made by the players, and $y \in \mathbb R^V_{\geq 0}$ is an allocation vector representing how the deal values have been split. Kleinberg and Tardos have defined a notion of \emph{stable} outcome for such games, as well as a notion of \emph{balanced} outcome, i.e., outcomes where players have no incentive to deviate and, in addition, the deal values are ``fairly'' split among players. They proved that a balanced outcome exists if and only if a stable outcome exists, and this happens if and only if the graph $G$ describing the game is stable.
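As a concrete illustration (a brute-force sketch of ours, feasible only for tiny graphs), one can test stability directly: the unit-weight triangle has $\nu = 1 < \nu_f = 3/2$, hence it admits no stable allocation, while the unit-weight $4$-cycle has $\nu = \nu_f = 2$ and the uniform allocation $y \equiv 1/2$ is stable. Scanning only half-integral fractional matchings suffices here, since basic optimal solutions of (P) are half-integral:

```python
from itertools import combinations, product

def nu(edges):
    """nu(G): maximum weight of an integral matching, by brute force."""
    best = 0.0
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            ends = [x for (u, v, w) in sub for x in (u, v)]
            if len(ends) == len(set(ends)):        # edges in sub are disjoint
                best = max(best, sum(w for (_, _, w) in sub))
    return best

def nu_f(vertices, edges):
    """nu_f(G): optimum of (P) over the half-integral grid {0, 1/2, 1}^E."""
    best = 0.0
    for x in product((0.0, 0.5, 1.0), repeat=len(edges)):
        load = {v: 0.0 for v in vertices}
        for (u, v, w), xe in zip(edges, x):
            load[u] += xe
            load[v] += xe
        if all(l <= 1.0 for l in load.values()):
            best = max(best, sum(w * xe for (_, _, w), xe in zip(edges, x)))
    return best

def is_stable_allocation(vertices, edges, y):
    """Check sum_{v in S} y_v >= nu(G[S]) for every coalition S."""
    for k in range(len(vertices) + 1):
        for S in combinations(vertices, k):
            s = set(S)
            induced = [e for e in edges if e[0] in s and e[1] in s]
            if sum(y[v] for v in S) < nu(induced) - 1e-9:
                return False
    return True

triangle = ([0, 1, 2], [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)])
square = ([0, 1, 2, 3], [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 1.0)])
```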
Motivated by the above connection, in the last few years many researchers have investigated the algorithmic problem of turning a given graph into a stable one, by performing a minimum number of modifications on the input graph \cite{journals/mp/BockCKPS15,conf/ipco/AhmadianHS16,journals/tcs/ItoKKKO17,arxiv/ChandrasekaranG16,journals/mst/KonemannL015,journals/tcs/BiroBGKP,journals/ijgt/BiroKP12}. Two natural operations, which have a nice network game interpretation, are vertex-deletion and edge-deletion. They correspond to \emph{blocking players} and \emph{blocking deals}, respectively, in order to achieve stability in the corresponding games. Formally, a subset of vertices $S \subseteq V$ is called a \emph{vertex-stabilizer} if the graph $G\setminus S := G[V\setminus S]$ is stable. Similarly, a subset of edges $F \subseteq E$ is called an \emph{edge-stabilizer} if the graph $G\setminus F := (V, E\setminus F)$ is stable. The corresponding optimization problems, which are the focus of this paper, are: \smallskip \noindent {\bf Minimum Vertex-stabilizer:} \emph{Given an edge-weighted graph $G=(V,E)$, find a minimum-cardinality vertex-stabilizer.}\\ {\bf Minimum Edge-stabilizer:} \emph{Given an edge-weighted graph $G=(V,E)$, find a minimum-cardinality edge-stabilizer.} \smallskip The above problems have been studied quite intensively in the last few years on unweighted graphs. In particular, Bock et al.~\cite{journals/mp/BockCKPS15} have shown that finding a minimum-cardinality edge-stabilizer is hard to approximate within a factor of $(2-\varepsilon)$, assuming the Unique Games Conjecture (UGC) \cite{conf/stoc/Khot02}. On the positive side, they have given an approximation algorithm for the edge-stabilizer problem, whose approximation factor depends on the sparsity of the input graph $G$.
In other work, Ahmadian et al.~\cite{conf/ipco/AhmadianHS16} and Ito et al.~\cite{journals/tcs/ItoKKKO17} have shown independently that finding a minimum-cardinality vertex-stabilizer is a polynomial-time solvable problem. These (exact and approximate) algorithmic results, developed for unweighted instances, do not easily generalize when dealing with arbitrary edge-weights, since they heavily rely on the structure of maximum matchings in unweighted graphs. In fact, unweighted instances of the above problems exhibit a very nice property, as shown in \cite{journals/mp/BockCKPS15,conf/ipco/AhmadianHS16}: the removal of any inclusion-wise minimal edge-stabilizer (resp. vertex-stabilizer) from a graph $G$ \emph{does not decrease} the cardinality of a maximum matching in the resulting graph. This property ensures that there is at least one maximum-cardinality matching that survives in the modified graph, and this insight can be successfully exploited when designing (exact and approximate) algorithms. Unfortunately, it is not difficult to realize that this crucial property does not hold anymore when dealing with edge-weighted graphs (see Appendix I), and in fact, the development of algorithmic results for weighted graphs requires substantial new ideas. \paragraph{Our results and techniques.} Our main results are as follows. \smallskip \noindent {\bf Vertex-stabilizers}. We give the first polynomial-time algorithm to find a minimum-cardinality vertex-stabilizer $S$, in any weighted graph $G$. Our algorithm also ensures that $\nu(G\setminus S) \geq \frac{2}{3}\nu(G)$, i.e., the value of a maximum-weight matching is preserved up to a factor of $\frac{2}{3}$, and we show that this factor is tight in general. Specifically, as previously mentioned, a minimum-cardinality vertex-stabilizer for a weighted graph might decrease the value of a maximum-weight matching in the resulting graph. 
From a network bargaining perspective, this means we are decreasing the total value which the players are able to get, which is of course undesirable. However, we can show this is inevitable, since deciding whether there exists \emph{any} vertex-stabilizer $S$ that preserves the value of a maximum-weight matching (i.e., such that $\nu(G\setminus S)=\nu(G)$) is an NP-hard problem. Furthermore, we give an example of a graph $G$ where \emph{any} vertex-stabilizer $S$ decreases the value of a maximum-weight matching by a factor of essentially $\frac{1}{3}$, i.e. $\nu(G\setminus S) \leq \pr{\frac{2}{3} + \varepsilon} \nu(G)$ (for an arbitrarily small $\varepsilon >0$). This shows that the bounds of our algorithm are essentially best possible: the algorithm finds a vertex-stabilizer $S$ whose cardinality is the \emph{smallest} possible, and preserves the value of a maximum-weight matching up to a factor of $\frac{2}{3}$, that is the tightest factor that holds for all instances. The above result is based on two main ingredients. The first one is giving a lower bound on the cardinality of a minimum vertex-stabilizer, which generalizes the lower bound used in the unweighted setting, and is based on the structure of optimal basic solutions of (P). In particular, it was shown in \cite{conf/ipco/AhmadianHS16} that a lower bound on the cardinality of a vertex-stabilizer for unweighted graphs is given by the minimum number of \emph{odd-cycles} in the support of an optimal basic solution to (P). We show that this lower bound holds also for weighted graphs, though this generalization is not obvious (in fact, as we will show later, the same generalization does \emph{not} hold for edge-stabilizers). Consequently, our proof is much more involved, and requires different ideas. 
The second main ingredient is giving a polynomial-time algorithm for computing an optimal basic solution to (P) with the smallest number of odd-cycles in its support, which is of independent interest, as highlighted in the next paragraph. \smallskip \noindent {\bf Computing maximum fractional matchings with minimum cycle support.} The fractional matching polytope given by (P) has been extensively studied in the literature, and characterizing instances for which a maximum fractional matching equals an integral one is a natural graph theory question (see \cite{journals/mp/BockCKPS15,conf/ipco/AhmadianHS16}). It is well-known that basic solutions of (P) are half-integral, and the support of a basic solution is the disjoint union of a matching (given by 1-valued entries) and a set of odd-cycles (given by half-valued entries). Balas \cite{journal/nms/Balas81} gave a nice polynomial-time algorithm to compute a basic maximum fractional matching in an unweighted graph, with minimum number of odd-cycles in its support. This is a classical result in matching theory, which has been known for more than 30 years. In this paper, we generalize this result to arbitrary weighted instances, exploiting nice structural properties of basic fractional matchings. Our algorithm is based on combinatorial techniques, and we expect that this result will prove useful beyond this particular application. \smallskip \noindent {\bf Edge-stabilizers}. When dealing with edge-removal operations, the stabilizer problem becomes harder, already in the unweighted setting. It is shown in \cite{journals/mp/BockCKPS15} that finding a minimum edge-stabilizer is as hard as vertex cover, and whether the problem admits a constant-factor approximation algorithm is an interesting open question.
We here show that the answer to this question is negative for weighted graphs, since we prove that the minimum edge-stabilizer problem for a weighted graph $G$ does not admit any constant-factor approximation algorithm, unless $P=NP$. From an approximation point of view, we show that the algorithm we developed for the vertex-stabilizer problem translates into a $O(\Delta)$-approximation algorithm for the edge-stabilizer problem, where $\Delta$ is the maximum degree of a node in $G$. Once again, the analysis relies on proving a lower bound on the cardinality of a minimum edge-stabilizer. It was shown in \cite{journals/mp/BockCKPS15} that a lower bound on the cardinality of a minimum edge-stabilizer for unweighted graphs is again given by the minimum number of odd-cycles in the support of an optimal solution to (P) (called $\gamma(G)$). Interestingly, we show that, differently from the vertex-stabilizer setting, here this lower bound does not generalize, and $\gamma(G)$ \emph{is not} a lower bound on the cardinality of an edge-stabilizer for arbitrary weighted graphs. However, we are able to show that $\ceil{\gamma(G)/2}$ \emph{is} a lower bound on the cardinality of a minimum edge-stabilizer, and this is enough for our approximation purposes. \smallskip \noindent {\bf Additional results}. Lastly, we also generalize a result given in \cite{conf/ipco/AhmadianHS16} on finding a minimum vertex-stabilizer which avoids a fixed maximum matching $M$, on unweighted graphs. We prove that if $M$ is a maximum-weight matching of a weighted graph $G$, then finding a minimum vertex-stabilizer that is element-disjoint from $M$ is a polynomial-time solvable problem. Otherwise, if $M$ is not a maximum-weight matching, the problem is at least as hard as vertex cover. We supplement this result with a 2-approximation algorithm for this case, that is best possible assuming UGC.
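To make the quantity $\gamma(G)$ concrete, here is a brute-force sketch of ours (exponential in $|E|$, so for toy instances only): it scans the half-integral points of (P) whose half-valued edges decompose into vertex-disjoint odd cycles -- the supports of basic solutions -- and reports, among the optimal ones, the minimum number of odd cycles. A unit-weight triangle gives $\gamma = 1$, while two triangles joined by an edge admit an integral maximum-weight matching, so the minimum is $0$:

```python
from itertools import product

def half_support_cycles(vertices, half_edges):
    """Components of the half-valued edges; return their number if every
    component is an odd cycle, otherwise None."""
    adj = {v: [] for v in vertices}
    for u, v in half_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, cycles = set(), 0
    for s in vertices:
        if s in seen or not adj[s]:
            continue
        comp, stack = {s}, [s]
        while stack:                       # DFS to collect the component
            u = stack.pop()
            for t in adj[u]:
                if t not in comp:
                    comp.add(t)
                    stack.append(t)
        seen |= comp
        n_edges = sum(len(adj[v]) for v in comp) // 2
        is_odd_cycle = (n_edges == len(comp) and len(comp) % 2 == 1
                        and all(len(adj[v]) == 2 for v in comp))
        if not is_odd_cycle:
            return None
        cycles += 1
    return cycles

def min_odd_cycles_in_optimum(vertices, edges):
    """(nu_f, min number of odd cycles in the support of an optimal basic
    solution), scanning the half-integral grid {0, 1/2, 1}^E."""
    best_val, best_cyc = -1.0, None
    for x in product((0.0, 0.5, 1.0), repeat=len(edges)):
        load, val = {v: 0.0 for v in vertices}, 0.0
        for (u, v, w), xe in zip(edges, x):
            load[u] += xe
            load[v] += xe
            val += w * xe
        if any(l > 1.0 + 1e-9 for l in load.values()):
            continue
        half = [(u, v) for (u, v, w), xe in zip(edges, x) if xe == 0.5]
        cyc = half_support_cycles(vertices, half)
        if cyc is None:
            continue                       # not the support of a basic solution
        if val > best_val + 1e-9:
            best_val, best_cyc = val, cyc
        elif abs(val - best_val) <= 1e-9 and cyc < best_cyc:
            best_cyc = cyc
    return best_val, best_cyc

triangle = ([0, 1, 2], [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)])
bridged = (
    list(range(6)),
    [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0),
     (2, 3, 1.0),
     (3, 4, 1.0), (4, 5, 1.0), (3, 5, 1.0)],
)
```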
\paragraph{Related work.} Bir\'o et al.~\cite{journals/tcs/BiroBGKP} were the first to consider the edge-stabilizer problem in weighted graphs, and they showed NP-hardness for this case. Stabilizing a graph via different operations on the input graph (other than removing edges/vertices) has also been studied. In particular, Ito et al.~\cite{journals/tcs/ItoKKKO17} have given polynomial-time algorithms to stabilize an unweighted graph by adding edges and by adding vertices. Chandrasekaran et al.~\cite{arxiv/ChandrasekaranG16} have recently studied the problem of stabilizing unweighted graphs by fractionally increasing edge weights. Ahmadian et al.~\cite{conf/ipco/AhmadianHS16} have also studied the vertex-stabilizer problem on unweighted graphs, but in the more-general setting where there are (non-uniform) costs for removing vertices, and gave approximation algorithms for this case. Bir\'o et al.~\cite{journals/ijgt/BiroKP12} and K\"onemann et al.~\cite{journals/mst/KonemannL015} studied a variant of the problem where the goal is to compute a minimum-cardinality set of \emph{blocking pairs}, that are edges whose removal from the graph yields the existence of a fractional vertex cover of size at most $\nu(G)$ (but note that the resulting graph might not be stable). Mishra et al.~\cite{journals/algorithmica/MishraRSSS11} studied the problem of converting a graph into a \emph{K\"onig-Egerv\'ary graph}, via vertex-deletion and edge-deletion operations. A K\"onig-Egerv\'ary graph is a graph where the size of a maximum matching equals the size of an (integral) minimum vertex cover. They gave an $O(\log n \log \log n)$-approximation algorithm for the vertex-removal setting in unweighted graphs, and showed constant-factor hardness of approximation (assuming UGC) for both the minimum vertex-removal and edge-removal problem. \paragraph{Paper Organization.} In Section \ref{sec:preliminaries}, we give some preliminaries and discuss notation.
In Section \ref{sec:frac_matching}, we give a polynomial-time algorithm to compute an optimal basic solution to (P) with the minimum number of odd cycles in its support. This algorithm will be crucially used in Section \ref{sec:vertex_stabilizer}, where we give our results on vertex-stabilizers. Section \ref{sec:edge_stabilizer} reports our results on edge-stabilizers. Finally, our additional results can be found in Section \ref{sec:additional}. \section{Preliminaries and notation} \label{sec:preliminaries} A key concept that we will use is LP duality. The dual of (P) is given by \begin{equation}\tag{D} \tau_f(G) := \min\set{\mathbbm 1^{\top}y:y_u+y_v\geq w_{uv} \; \forall uv\in E, y\geq 0}. \end{equation} Just as feasible solutions to (P) are called fractional matchings, we call feasible solutions to (D) \emph{fractional $w$-vertex covers}. In fact, (D) is the standard LP relaxation of the problem of finding a minimum $w$-vertex cover, whose integer programming formulation is obtained by adding integrality constraints to (D). We also call basic feasible solutions to (P) \emph{basic fractional matchings}. An application of duality theory yields the following relationship: $\nu(G) \leq \nu_f(G) = \tau_f(G)$. Recall that a graph $G$ is \emph{stable} if $\nu(G) = \nu_f(G) = \tau_f(G)$. For a vector $x\in \mathbb R^{E}$ and any subset $F\subseteq E$, we denote by $x_{-F}\in \mathbb R^{E -F}$ the subvector obtained by dropping the entries corresponding to $F$. For any multisubset $F\subseteq E$, we define $x(F):=\sum_{e\in F}x_e$. Note that an element may be accounted for multiple times in the sum if it appears more than once in $F$. We denote by $\supp(x):=\set{e\in E:x_e\neq 0}$ the support of $x$. For any positive integer $k$, $[k]$ represents the set $\set{1,2,\dots,k}$. Given an undirected graph $G$, we denote by $n$ the number of vertices and by $m$ the number of edges.
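To make the quantities $\nu(G)$ and $\nu_f(G)$ concrete, the following sketch brute-forces both for the smallest unstable example, a triangle with unit weights. The graph encoding and helper names are ours, not from the paper; restricting the search for $\nu_f$ to half-integral vectors is justified by Theorem \ref{thm:basic} below.

```python
from itertools import product, combinations

# Triangle with unit weights: the smallest unstable graph.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
w = {e: 1.0 for e in E}

def nu(V, E, w):
    """nu(G): maximum weight of an (integral) matching, by brute force."""
    best = 0.0
    for k in range(len(E) + 1):
        for M in combinations(E, k):
            covered = [v for e in M for v in e]
            if len(covered) == len(set(covered)):   # edges are vertex-disjoint
                best = max(best, sum(w[e] for e in M))
    return best

def nu_f(V, E, w):
    """nu_f(G): maximum weight of a fractional matching.  Some optimal
    basic solution is half-integral, so enumerating x_e in {0, 1/2, 1}
    suffices on a small graph."""
    best = 0.0
    for x in product((0.0, 0.5, 1.0), repeat=len(E)):
        if all(sum(x[i] for i, e in enumerate(E) if v in e) <= 1 for v in V):
            best = max(best, sum(x[i] * w[e] for i, e in enumerate(E)))
    return best

print(nu(V, E, w))    # 1.0: any two triangle edges share a vertex
print(nu_f(V, E, w))  # 1.5: x_e = 1/2 on all three edges
```

Since $\nu(G)=1<\nu_f(G)=\frac{3}{2}$, the triangle is not stable; the optimal basic fractional matching puts $\frac{1}{2}$ on every edge, so its support consists of a single odd cycle.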
For edge weights $w\in\mathbb R^m_+$ and a matching $M$ in $G$, a path/walk is called \emph{$M$-alternating} if its edges alternately belong to $M$ and $E\setminus M$. Recall that a walk is defined like a path, except that vertices may be repeated. We say that an $M$-alternating path/walk is \emph{valid} if it starts with an $M$-exposed vertex or an edge in $M$, and ends with an $M$-exposed vertex or an edge in $M$ (see Figure \ref{fig:valid} for an example). A valid $M$-alternating path/walk $P$ is called \emph{$M$-augmenting} if $w(P\setminus M)>w(P\cap M)$. A cycle is also called $M$-alternating if its edges alternately belong to $M$ and $E\setminus M$. Note that an $M$-alternating cycle has even length. An $M$-alternating cycle $C$ is said to be $M$-augmenting if $w(C\setminus M)>w(C\cap M)$. \begin{figure}[H] \centering \def1.6{1.2} \begin{tikzpicture}[node distance=1.6 cm, inner sep=2.5pt, minimum size=2.5pt, auto] \node[vertex] (a1) {}; \node[vertex] (a2) [right of=a1] {}; \node[vertex] (a3) [right of=a2] {}; \node[vertex] (a4) [right of=a3] {}; \node[vertex] (a5) [right of=a4] {}; \node[vertex] (a6) [right of=a5] {}; \path[matched edge] (a1) -- (a2); \path[edge] (a2) -- (a3); \path[matched edge] (a3) -- (a4); \path[edge] (a4) -- (a5); \path[matched edge] (a5) -- (a6); \draw[decoration={brace,raise=12pt},decorate] (0,0) -- node[weight,above=18pt] {$P$} (5*1.6,0); \draw[decoration={brace,raise=12pt},decorate] (4*1.6,0) -- node[weight,below=18pt] {$P'$} (0,0); \end{tikzpicture} \caption{Examples of a valid and an invalid alternating path: $P$ is valid, while $P'$ is invalid.} \label{fig:valid} \end{figure} \begin{definition} An odd cycle $C=(e_1,e_2,\dots,e_{2k+1})$ is called an $M$-\emph{blossom} if $e_i\in M$ for all even $i$ and $e_i\notin M$ for all odd $i$. The vertex $v:=e_1\cap e_{2k+1}$ is called the \emph{base} of the blossom. The blossom is \emph{augmenting} if $v$ is $M$-exposed and $w(C\setminus M)>w(C\cap M)$.
\end{definition} \begin{definition}\label{def:flower} An \emph{$M$-flower} $C\cup P$ consists of an $M$-blossom $C$ with base $v_1$ and a valid $M$-alternating path $P=(v_1,v_2,\dots,v_k)$ where $v_1v_2\in M$. The vertex $v_k$ is called the \emph{root} of the flower. The flower is \emph{augmenting} if \[w(C\setminus M)+2w(P\setminus M)>w(C\cap M)+2w(P\cap M).\] \end{definition} Given an $M$-augmenting flower $C\cup P$, if we replace the vector that places $1$ on the edges of $M\cap (C\cup P)$ with the vector that places $\frac{1}{2}$ on the edges of $C$ and $1$ on the edges of $P\setminus M$, then the change in weight is exactly $\frac{1}{2}$ times the difference between the left-hand side and the right-hand side of the above inequality. So the inequality says precisely that this operation increases the weight. \begin{definition}\label{def:bicycle} An \emph{$M$-bi-cycle} $C\cup P\cup D$ consists of two $M$-blossoms $C,D$ with bases $v_1,v_k$ respectively and an odd $M$-alternating path $P=(v_1,v_2,\dots,v_k)$ where $v_1v_2,v_{k-1}v_k\in M$. The bi-cycle is \emph{augmenting} if \[w(C\setminus M)+2w(P\setminus M)+w(D\setminus M)>w(C\cap M)+2w(P\cap M)+w(D\cap M).\] \end{definition} Note that the structures defined in Definitions \ref{def:flower} and \ref{def:bicycle} might not be simple. For example, in a flower $C\cup P$, the path $P$ might intersect the blossom $C$ more than once. Figure \ref{fig:simple} illustrates some simple examples of these structures. Notice that a blossom is always simple.
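The weight change just described can be verified numerically. The sketch below fixes edge weights for a small flower (a triangle blossom with one matched edge and a two-edge stem whose first edge is matched; all numbers are our own illustrative choices) and checks that the operation changes the total weight by exactly $\frac{1}{2}$ times the difference between the two sides of the inequality in Definition \ref{def:flower}.

```python
# Edge weights of a small M-flower: triangle blossom C with one matched
# edge, and stem P with its first edge matched.  Numbers are arbitrary
# illustrative choices.
wC_out, wC_in = [3.0, 2.5], [4.0]   # w(C \ M) terms, w(C ∩ M) terms
wP_out, wP_in = [1.5], [2.0]        # w(P \ M) terms, w(P ∩ M) terms

old = sum(wC_in) + sum(wP_in)                         # 1 on M ∩ (C ∪ P)
new = 0.5 * (sum(wC_out) + sum(wC_in)) + sum(wP_out)  # 1/2 on C, 1 on P \ M

lhs = sum(wC_out) + 2 * sum(wP_out)   # w(C \ M) + 2 w(P \ M)
rhs = sum(wC_in) + 2 * sum(wP_in)     # w(C ∩ M) + 2 w(P ∩ M)

assert abs((new - old) - 0.5 * (lhs - rhs)) < 1e-9
print(new - old)   # 0.25: positive, so this flower is augmenting
```

Expanding $\frac{1}{2}w(C)+w(P\setminus M)-w(C\cap M)-w(P\cap M)$ gives exactly $\frac{1}{2}\big[(w(C\setminus M)+2w(P\setminus M))-(w(C\cap M)+2w(P\cap M))\big]$, which is what the assertion checks.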
\begin{figure}[ht] \centering \def1.6{1.2} \begin{minipage}{0.2\textwidth} \centering \begin{tikzpicture}[node distance=1.6 cm, inner sep=2.5pt, minimum size=2.5pt, auto] \node[vertex] (a1) {}; \node[vertex] (a2) [above left of=a1] {}; \node[vertex] (a3) [above right of=a1] {}; \node[vertex] (a4) [above of=a2] {}; \node[vertex] (a5) [above of=a3] {}; \path[edge] (a1) -- (a2); \path[edge] (a1) -- (a3); \path[matched edge] (a2) -- (a4); \path[matched edge] (a3) -- (a5); \path[edge] (a4) -- (a5); \end{tikzpicture} \end{minipage} \begin{minipage}{0.3\textwidth} \centering \begin{tikzpicture}[node distance=1.6 cm, inner sep=2.5pt, minimum size=2.5pt, auto] \node[vertex] (a1) {}; \node[vertex] (a2) [above left of=a1] {}; \node[vertex] (a3) [below left of=a1] {}; \node[vertex] (a4) [right of=a1] {}; \node[vertex] (a5) [right of=a4] {}; \path[edge] (a1) -- (a2); \path[edge] (a1) -- (a3); \path[matched edge] (a2) -- (a3); \path[matched edge] (a1) -- (a4); \path[edge] (a4) -- (a5); \end{tikzpicture} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \begin{tikzpicture}[node distance=1.6 cm, inner sep=2.5pt, minimum size=2.5pt, auto] \node[vertex] (a1) {}; \node[vertex] (a2) [above left of=a1] {}; \node[vertex] (a3) [below left of=a1] {}; \node[vertex] (a4) [left of=a2] {}; \node[vertex] (a5) [left of=a3] {}; \node[vertex] (a6) [right of=a1] {}; \node[vertex] (a7) [right of=a6] {}; \node[vertex] (a8) [right of=a7] {}; \node[vertex] (a9) [above right of=a8] {}; \node[vertex] (a10) [below right of=a8] {}; \path[edge] (a1) -- (a2); \path[edge] (a1) -- (a3); \path[matched edge] (a2) -- (a4); \path[matched edge] (a3) -- (a5); \path[edge] (a4) -- (a5); \path[matched edge] (a1) -- (a6); \path[edge] (a6) -- (a7); \path[matched edge] (a7) -- (a8); \path[edge] (a8) -- (a9); \path[edge] (a8) -- (a10); \path[matched edge] (a9) -- (a10); \end{tikzpicture} \end{minipage} \caption{Simple examples of a blossom, a flower and a bi-cycle.} \label{fig:simple} \end{figure} \ The 
significance of the structures defined above is given by the following theorem: \begin{theorem}[\cite{conf/stoc/KleinbergT08}]\label{thm:stable} If a graph is stable, then it has no $M$-augmenting flower or bi-cycle for any maximum-weight matching $M$. Otherwise, it has an $M$-augmenting flower or bi-cycle for every maximum-weight matching $M$. \end{theorem} We will need the following classical result on the structure of basic fractional matchings: \begin{theorem}[\cite{conf/smp/Balinski70}]\label{thm:basic} A fractional matching $x$ in $G=(V,E)$ is basic if and only if $x_e\in\set{0,\frac{1}{2},1}$ for all $e\in E$ and the edges $e$ having $x_e=\frac{1}{2}$ induce vertex-disjoint odd cycles in $G$. \end{theorem} Let $\hat{x}$ be a basic fractional matching in $G$. We partition the support of $\hat{x}$ into two parts. Define \[\mathscr{C}(\hat{x}) := \set{C_1,\dots,C_q} \quad \text{and} \quad M(\hat{x}) := \set{e\in E:\hat{x}_e=1}\] as the set of odd cycles such that $\hat{x}_e=\frac{1}{2}$ for all $e\in E(C_i)$ and the set of matched edges in $\hat{x}$ respectively. For ease of notation, we use $V(\mathscr{C}(\hat{x})) = \cup_{C\in \mathscr{C}(\hat{x})}V(C)$ and $E(\mathscr{C}(\hat{x})) = \cup_{C\in \mathscr{C}(\hat{x})}E(C)$ to denote the vertex set and edge set of $\mathscr{C}(\hat{x})$ respectively. We define two operations on the entries of $\hat{x}$ associated with certain edge sets of $G$: \begin{definition} By \emph{complementing} on $E'\subseteq E$, we mean replacing $\hat{x}_e$ by $\bar{x}_e=1-\hat{x}_e$ for all $e\in E'$. \end{definition} \begin{definition} By \emph{alternate rounding} on $C\in \mathscr{C}(\hat{x})$ at $v$ where $C=\set{e_1,\dots,e_{2k+1}}$ and $v=e_1\cap e_{2k+1}$, we mean replacing $\hat{x}_e$ by $\bar{x}_e = 0$ for all $e\in \set{e_1,e_3,\dots,e_{2k+1}}$ and $\bar{x}_e=1$ for all $e\in \set{e_2,e_4,\dots,e_{2k}}$. When $v$ is clear from the context, we just say alternate rounding on $C$.
\end{definition} Let $\mathcal{X}$ be the set of basic maximum-weight fractional matchings in $G$. Define \[\gamma(G) := \min_{\hat{x}\in \mathcal{X}}\size{\mathscr{C}(\hat{x})}.\] Note that $G$ is stable if and only if $\gamma(G)=0$. We will use the following terminology given in \cite{book/CookCPS98} for the description of Edmonds' maximum matching algorithm. Given a graph $G$ and a matching $M$, let $T$ be an $M$-alternating tree rooted at a vertex $r$. We denote by $A(T)$ and $B(T)$ the sets of nodes in $T$ at odd and even distance respectively from $r$. We call $T$ \emph{frustrated} if every edge of $G$ having one end in $B(T)$ has the other end in $A(T)$. Finally, the following theorem gives a sufficient condition for a graph to be Hamiltonian. \begin{theorem}[Ore's Theorem \cite{journals/amm/Ore60}] Let $G$ be a finite and simple graph with $n\geq 3$ vertices. If $\deg(u)+\deg(v)\geq n$ for every pair of distinct non-adjacent vertices $u$ and $v$, then $G$ is Hamiltonian. \end{theorem} \section{Maximum fractional matching with minimum support} \label{sec:frac_matching} In this section, we give a polynomial-time algorithm to compute a basic maximum-weight fractional matching $\hat{x}$ for a weighted graph $G$ with minimum number of odd cycles in its support, i.e., satisfying $\size{\mathscr{C}(\hat{x})} = \gamma(G)$. This algorithm will be used as a subroutine by our vertex-stabilizer algorithm, which we will develop in Section \ref{sec:vertex_stabilizer}. Our first step is to characterize basic maximum-weight fractional matchings which have more than $\gamma(G)$ odd cycles. Balas~\cite{journal/nms/Balas81} considered this problem on unweighted graphs, and gave the following characterization: \begin{theorem}[\cite{journal/nms/Balas81}] Let $\hat{x}$ be a basic maximum fractional matching in an unweighted graph $G$. 
If $\size{\mathscr{C}(\hat{x})}>\gamma(G)$, then there exists an $M(\hat{x})$-alternating path which connects two odd cycles $C_i,C_j \in \mathscr{C}(\hat{x})$. Furthermore, alternate rounding on the odd cycles and complementing on the path produces a basic maximum fractional matching $\bar{x}$ such that $\mathscr{C}(\bar{x})\subset\mathscr{C}(\hat{x})$. \end{theorem} We generalize this to weighted graphs. Before stating the theorem, we need to introduce the concept of \emph{connector} (see Figure \ref{fig:connector} for some examples): \begin{definition} Let $C$ be a cycle and $S_0,S_1,\dots,S_k$ be a partition of $V(C)$ such that $\size{S_0}$ is even and $k\geq 2$, where $S_0$ is allowed to be empty. Let $M$ be a perfect matching on the vertex set $S_0$. We call the graph $C\cup M$ a \emph{connector}. Each $S_i$ is called a \emph{terminal set} for $i\geq 1$. An edge $e\in M$ is called a \emph{chord} if $e\notin E(C)$. \end{definition} \begin{figure}[ht] \centering \begin{minipage}{0.3\textwidth} \centering \begin{tikzpicture}[inner sep=2.5pt, minimum size=2.5pt, auto] \def 9 {7} \def 1.3cm {1.3cm} \node[vertex,fill=gray!40] (u1) at ({360/9*1}:1.3cm) {}; \node[vertex] (u2) at ({360/9*2}:1.3cm) {}; \node[vertex] (u3) at ({360/9*3}:1.3cm) {}; \node[vertex] (u4) at ({360/9*4}:1.3cm) {}; \node[vertex,fill=gray] (u5) at ({360/9*5}:1.3cm) {}; \node[vertex] (u6) at ({360/9*6}:1.3cm) {}; \node[vertex,fill=gray!40] (u7) at ({360/9*7}:1.3cm) {}; \path[edge] (u1) -- (u2); \path[edge] (u2) -- (u3); \path[matched edge] (u3) -- (u4); \path[edge] (u4) -- (u5); \path[edge] (u5) -- (u6); \path[edge] (u6) -- (u7); \path[edge] (u7) -- (u1); \path[matched edge] (u2) -- (u6); \end{tikzpicture} \end{minipage} \begin{minipage}{0.3\textwidth} \centering \begin{tikzpicture}[inner sep=2.5pt, minimum size=2.5pt, auto] \def 9 {9} \def 1.3cm {1.3cm} \node[vertex,fill=gray!40] (v1) at ({360/9*1}:1.3cm) {}; \node[vertex] (v2) at ({360/9*2}:1.3cm) {}; \node[vertex,fill=gray!40] (v3) at 
({360/9*3}:1.3cm) {}; \node[vertex] (v4) at ({360/9*4}:1.3cm) {}; \node[vertex] (v5) at ({360/9*5}:1.3cm) {}; \node[vertex] (v6) at ({360/9*6}:1.3cm) {}; \node[vertex] (v7) at ({360/9*7}:1.3cm) {}; \node[vertex,fill=gray] (v8) at ({360/9*8}:1.3cm) {}; \node[vertex] (v9) at ({360/9*9}:1.3cm) {}; \path[edge] (v1) -- (v2); \path[edge] (v2) -- (v3); \path[edge] (v3) -- (v4); \path[matched edge] (v4) -- (v5); \path[edge] (v5) -- (v6); \path[edge] (v6) -- (v7); \path[edge] (v7) -- (v8); \path[edge] (v8) -- (v9); \path[edge] (v9) -- (v1); \path[matched edge] (v2) -- (v7); \path[matched edge] (v6) -- (v9); \end{tikzpicture} \end{minipage} \caption{Two examples of connectors. Bold edges indicate $M$. Vertices of the same color belong to the same terminal set. White vertices are the ones in $S_0$.} \label{fig:connector} \end{figure} Connectors are useful because of the following property: \begin{lemma} \label{lem:connector} Let $C\cup M$ be a connector. For every terminal set $S_i$, there exists an $M$-augmenting path in the connector from a vertex $v \in S_i$ to a vertex $u \in S_j$, for some $j\neq i$. \end{lemma} \begin{proof} For every $e\in M\cap E(C)$, contract $e$ and smooth away the vertex formed after the contraction (smoothing is the reverse operation of subdivision). The only edges in $M$ that survive this process are the chords. Fix $i$ and identify all the vertices in $S_i$ into a single vertex $v_i$. Denote the resulting (multi)graph as $G=(V,E)$. Observe that there exists an $M$-augmenting path from $S_i$ to $S_j$ in $C\cup M$ if and only if there exists an $M$-augmenting path from $v_i$ to $S_j$ in $G$ where $i\neq j$. Hence, we will work on the reduced graph $G$. Apply Edmonds' maximum matching algorithm on $G$ initialized with the matching $M\cap E$, and construct an $M$-alternating tree starting with the exposed vertex $v_i$. 
There are two possibilities: either we find an augmenting path from $v_i$ to $S_j$ for some $j\neq i$ or a frustrated tree rooted at $v_i$. For the purpose of contradiction, suppose we get a frustrated tree $T$ rooted at $v_i$. Let $\widetilde{T}=T\cup D$, where $D=\set{uv\notin E(T):u\in A(T), v\in B(T)}$. Note that we do not have edges connecting two nodes in $B(T)$, otherwise $T$ is not a frustrated tree. We claim that each pseudonode in $T$ is incident to at least two unmatched edges in $\widetilde{T}$. Let $v$ be a pseudonode in $T$, and $S(v)$ be the subset of vertices in $G$ that are contained in $v$ (after expanding pseudonodes). Note that $S(v)\subset V$ because there are at least two exposed vertices in $G$. Let $\delta_{G\setminus M}(\cdot)$ denote the cut function on $G\setminus M$. Since $G\setminus M$ is 2-edge-connected, we have $\size{\delta_{G\setminus M}(S(v))}\geq 2$. These edges are present in $\widetilde{T}$ because otherwise we can extend the alternating tree $T$. It follows that $v$ is incident to at least two unmatched edges in $\widetilde{T}$. Let $uv$ be a matched edge in $T$ where $u\in A(T)$ and $v\in B(T)$. We claim that $\deg_{\widetilde{T}}(u)\leq \deg_{\widetilde{T}}(v)$. Note that $\deg_{\widetilde{T}}(u)$ is either 2 or 3. This is because $u$ is not a pseudonode, and $\deg_G(w)=3$ for every $M$-covered vertex $w$ in $G$. If $v$ is not a pseudonode, then $\deg_{\widetilde{T}}(v)=3$ as all edges in $\delta_G(v)$ are accounted for in $\widetilde{T}$. Otherwise, if $v$ is a pseudonode, then by the previous claim $v$ is incident to at least two unmatched edges in $\widetilde{T}$. So $\deg_{\widetilde{T}}(v)\geq 3$. Now, observe that $\widetilde{T}$ is a bipartite graph as the node set can be partitioned into $A(T)$ and $B(T)$ where $\size{B(T)}=\size{A(T)}+1$. For every $v\in A(T)$, let $M(v)$ be its matched neighbour in $B(T)$. The extra node in $B(T)$ is the root of $T$, which has degree at least one in $\widetilde{T}$.
Summing up the node degrees in $A(T)$, we obtain \[\sum_{v\in A(T)}\deg_{\widetilde{T}}(v) \leq \sum_{v\in A(T)}\deg_{\widetilde{T}}(M(v)) < \sum_{v\in A(T)}\deg_{\widetilde{T}}(M(v)) + 1 \leq \sum_{v\in B(T)}\deg_{\widetilde{T}}(v)\] which is a contradiction. \end{proof} Let $y$ be a minimum fractional $w$-vertex cover in $G$. We say that an edge $uv$ is \emph{tight} if $y_u+y_v=w_{uv}$. Similarly, we say that a path is tight if all of its edges are tight. \begin{theorem}\label{thm:cycles} Let $\hat{x}$ be a basic maximum-weight fractional matching and $y$ be a minimum fractional $w$-vertex cover in $G$. If $\size{\mathscr{C}(\hat{x})}> \gamma(G)$, then there exists \begin{enumerate}[noitemsep,topsep=0pt] \item[(i)] a vertex $v\in V(C_i)$ for some odd cycle $C_i\in \mathscr{C}(\hat{x})$ such that $y_v=0$; or \item[(ii)] a tight $M(\hat{x})$-alternating path $P$ which connects two odd cycles $C_i,C_j\in\mathscr{C}(\hat{x})$; or \item[(iii)] a tight and valid $M(\hat{x})$-alternating path $P$ which connects an odd cycle $C_i\in\mathscr{C}(\hat{x})$ and a vertex $v\notin V(\mathscr{C}(\hat{x}))$ such that $y_v=0$. \end{enumerate} Furthermore, alternate rounding on the odd cycles and complementing on the path produces a basic maximum-weight fractional matching $\bar{x}$ such that $\mathscr{C}(\bar{x})\subset \mathscr{C}(\hat{x})$. \end{theorem} \begin{proof} We will start by proving the second part of the theorem, namely that alternate rounding and complementing produces a basic maximum-weight fractional matching with fewer odd cycles. For Case (i), let $\bar{x}$ be the basic fractional matching obtained by alternate rounding on $C_i$ at $v$. Since $y_v=0$, both $\bar{x}$ and $y$ satisfy complementary slackness. Hence, $\bar{x}$ is optimal to (P) and $\mathscr{C}(\bar{x})=\mathscr{C}(\hat{x})\setminus C_i$. For Case (ii), denote $u=V(P)\cap V(C_i)$ and $v=V(P)\cap V(C_j)$ as the endpoints of $P$.
Let $\bar{x}$ be the basic fractional matching obtained by alternate rounding on $C_i,C_j$ at $u,v$ respectively and complementing on $P$. Note that $u$ and $v$ are exposed after the alternate rounding, and covered after complementing. Since $\bar{x}$ and $y$ satisfy complementary slackness, $\bar{x}$ is optimal to (P) and $\mathscr{C}(\bar{x})=\mathscr{C}(\hat{x})\setminus\set{C_i,C_j}$. For Case (iii), denote $u=V(P)\cap V(C_i)$ and $v\notin V(\mathscr{C}(\hat{x}))$ as the endpoints of $P$. Let $\bar{x}$ be the basic fractional matching obtained by alternate rounding on $C_i$ at $u$ and complementing on $P$. Since $y_v=0$, both $\bar{x}$ and $y$ satisfy complementary slackness. Thus, $\bar{x}$ is optimal to (P) and $\mathscr{C}(\bar{x})=\mathscr{C}(\hat{x})\setminus C_i$. Next, we prove the first part of the theorem. We may assume $y_v>0$ for every vertex $v\in V(\mathscr{C}(\hat{x}))$. Let $x^*$ be a basic maximum-weight fractional matching in $G$ such that $\size{\mathscr{C}(x^*)}=\gamma(G)$. Define $N(\hat{x}) := M(\hat{x})\setminus E(\mathscr{C}(x^*))$ and $N(x^*) := M(x^*)\setminus E(\mathscr{C}(\hat{x}))$. Consider the following subgraph \[J = (V,N(\hat{x}) \triangle N(x^*)).\] Since $N(\hat{x})$ and $N(x^*)$ are matchings in $G$, $J$ is made up of vertex-disjoint paths and cycles of $G$. For each such path or cycle, its edges alternately belong to $N(\hat{x})$ or $N(x^*)$. Moreover, its intermediate vertices are disjoint from $\mathscr{C}(\hat{x})$ and $\mathscr{C}(x^*)$. Since $\hat{x}$ and $x^*$ are maximum-weight fractional matchings in $G$, every path in $J$ is tight by complementary slackness. If there exists a path in $J$ which connects two odd cycles from $\mathscr{C}(\hat{x})$, then we are done. If there exists a path in $J$ which connects an odd cycle from $\mathscr{C}(\hat{x})$ and a vertex $v\notin V(\mathscr{C}(\hat{x})\cup\mathscr{C}(x^*))$, then $y_v=0$ because $v$ is either exposed by $M(\hat{x})$ or $M(x^*)$. Hence, we are also done. 
So we may assume every path in $J$ belongs to one of the following three categories: \begin{enumerate}[noitemsep,topsep=0pt] \item[(a)] Vertex-disjoint from $\mathscr{C}(\hat{x})$ and $\mathscr{C}(x^*)$. \item[(b)] Starts and ends at the same cycle. \item[(c)] Connects an odd cycle from $\mathscr{C}(\hat{x})$ and an odd cycle from $\mathscr{C}(x^*)$. \end{enumerate} Note that by the second part of the theorem, there is no path in $J$ which connects two odd cycles from $\mathscr{C}(x^*)$ or an odd cycle from $\mathscr{C}(x^*)$ and a vertex $v\notin V(\mathscr{C}(\hat{x})\cup \mathscr{C}(x^*))$. We say that two odd cycles $C_i$ and $C_j$ are \emph{adjacent} if $V(C_i)\cap V(C_j)\neq \emptyset$ or if they are connected by a path in $J$. \begin{claim} Every cycle in $\mathscr{C}(\hat{x})$ is adjacent to a cycle in $\mathscr{C}(x^*)$. \end{claim} \begin{proof} Let $C$ be an odd cycle in $\mathscr{C}(\hat{x})$. For every vertex $v\in V(C)$, since we assumed $y_v>0$, by complementary slackness it is either $M(x^*)$-covered or belongs to $V(\mathscr{C}(x^*))$. If $v\in V(\mathscr{C}(x^*))$, then we are done. So we may assume that every vertex in $C$ is $M(x^*)$-covered. Let $uv\in M(x^*)$ where $u\in V(C)$ and $v\notin V(C)$. Observe that $uv$ is the first edge of a path in $J$, so it either ends at an odd cycle in $\mathscr{C}(x^*)$ or $C$. Since $C$ has an odd number of vertices, by the pigeonhole principle there exists a path in $J$ which connects $C$ and an odd cycle in $\mathscr{C}(x^*)$. \end{proof} Recall that we assumed no two cycles in $\mathscr{C}(\hat{x})$ are adjacent. We also know that no two cycles in $\mathscr{C}(x^*)$ are adjacent. Since $\size{\mathscr{C}(\hat{x})}>\size{\mathscr{C}(x^*)}$, by the previous claim there exists an odd cycle in $\mathscr{C}(x^*)$ which is adjacent to at least two odd cycles in $\mathscr{C}(\hat{x})$. Let $C^*\in \mathscr{C}(x^*)$ be adjacent to $C_1,\dots,C_k\in \mathscr{C}(\hat{x})$ for some $k\geq 2$.
For every $i\in[k]$, define \[S_i:=\set{v\in V(C^*):v\in V(C_i) \text{ or } \exists\; \text{a path in }J \text{ from }v \text{ to }C_i}\] and $S_0:=V(C^*)\setminus \cup_{i=1}^kS_i$. Note that $y_v>0$ for every vertex $v\in V(C^*)$; otherwise, alternate rounding on $C^*$ as in Case (i) would yield a basic maximum-weight fractional matching with fewer than $\gamma(G)$ odd cycles. Hence, by complementary slackness every vertex in $S_0$ is $M(\hat{x})$-covered. Let $v\in S_0$. It is either matched to another vertex in $S_0$ or is an endpoint of a path in $J$ whose other endpoint is also a vertex in $S_0$. Hence, $\size{S_0}$ is even. Moreover, $S_i\neq \emptyset$ for all $i\geq 1$, and the sets $S_0,\dots,S_k$ partition $V(C^*)$. Let $\mathcal{P}$ be the set of paths in $J$ that start and end at $C^*$, and consider the subgraph $C^*\cup \mathcal{P}$. We claim that there exists an $M(\hat{x})$-alternating path from $S_i$ to $S_j$ in $C^*\cup \mathcal{P}$ for some $j\neq i$. Since every path in $\mathcal{P}$ starts and ends with an edge in $M(\hat{x})$, we can perform the following reduction: contract every path in $\mathcal{P}$ into a single edge in $M(\hat{x})$. It is easy to see that an $M(\hat{x})$-alternating path from $S_i$ to $S_j$ in $C^*\cup \mathcal{P}$ corresponds to an $M(\hat{x})$-alternating path from $S_i$ to $S_j$ in the reduced graph. Then, observe that the reduced graph along with the matching $M(\hat{x})$ forms a connector. By Lemma \ref{lem:connector}, there exists an $M(\hat{x})$-alternating path $P$ from $S_i$ to $S_j$ in $C^*\cup \mathcal{P}$ for some $j\neq i$. Let $v_i\in S_i$ and $v_j\in S_j$ be the endpoints of $P$. Let $P_i$ and $P_j$ be the paths in $J$ connecting $v_i$ to $C_i$ and $v_j$ to $C_j$ respectively. If $v_i\in V(C_i)$, set $P_i=\emptyset$. Similarly if $v_j\in V(C_j)$, set $P_j=\emptyset$. Then, $P_i\cup P\cup P_j$ forms a tight $M(\hat{x})$-alternating path which connects $C_i$ and $C_j$. \end{proof} Given a basic maximum-weight fractional matching $\hat{x}$ in $G$, we would like to reduce the number of odd cycles in $\mathscr{C}(\hat{x})$ to $\gamma(G)$.
One way to accomplish this is to search for the structures described in Theorem \ref{thm:cycles}. Fix a minimum fractional $w$-vertex cover $y$ in $G$. Let $G'$ be the unweighted graph obtained by applying the following operations to $G$ (see Figure \ref{fig:auxgraph}): \begin{enumerate}[noitemsep,topsep=0pt] \item[(a)] Delete all non-tight edges. \item[(b)] Add a vertex $z$. \item[(c)] For every vertex $v\in V$ where $\hat{x}(\delta(v))=1$ and $y_v=0$, add the edge $vz$. \item[(d)] For every vertex $v\in V$ where $\hat{x}(\delta(v))=0$ and $y_v=0$, add the vertex $v'$ and the edges $vv',v'z$. \item[(e)] Shrink every odd cycle $C_i\in \mathscr{C}(\hat{x})$ into a pseudonode $i$. \end{enumerate} Note that none of the edges in $M(\hat{x})$ and $\mathscr{C}(\hat{x})$ were deleted because they are tight. Consider the edge set $M':=M(\hat{x})\cup \set{vv':v\in V}$. It is easy to see that $M'$ is a matching in $G'$. The significance of the auxiliary graph $G'$ is given by the following lemma: \begin{figure}[ht] \centering \def1.6{1} \begin{tikzpicture}[node distance=1.6 cm, inner sep=2.5pt, minimum size=2.5pt, auto] \node [vertex] (z) at (0,4) {z}; \node [vertex] (m1) at (1,1) {}; \node [vertex] (m2) [above right of=m1] {}; \node [vertex] (m3) at (3,1.5) {}; \node [vertex] (m4) [right of=m3] {}; \node [vertex,fill=gray!70] (p1) at (-0.5,1) {}; \node [vertex,fill=gray!70] (p2) at (-1.5,1.5) {}; \node [vertex,fill=gray!70] (p3) at (0.3,2) {}; \node [vertex] (u1) at (-4,1.5) {}; \node [vertex] (v1) at (-4,3) {}; \node [vertex] (u2) at (-2.5,1.5) {}; \node [vertex] (v2) at (-2.5,3) {}; \draw (0,1.5) ellipse (5.5cm and 1cm); \path [matched edge] (m1) -- (m2); \path [matched edge] (m3) -- (m4); \path [matched edge] (u1) -- (v1); \path [matched edge] (u2) -- (v2); \path (z) edge (m2); \path (z) edge (v1); \path (z) edge (v2); \path (z) edge (p1); \end{tikzpicture} \caption{The auxiliary graph $G'$ and the matching $M'$. Vertices in the ellipse are from the original graph $G$. 
Gray vertices represent pseudonodes.} \label{fig:auxgraph} \end{figure} \begin{lemma} \label{lem:augpaths} $M'$ is a maximum matching in $G'$ if and only if $\size{\mathscr{C}(\hat{x})}=\gamma(G)$. \end{lemma} \begin{proof} $(\Rightarrow)$ Let $\hat{x}$ be a basic maximum-weight fractional matching where $\size{\mathscr{C}(\hat{x})}>\gamma(G)$ and $y$ be a minimum fractional $w$-vertex cover in $G$. Applying Theorem \ref{thm:cycles} yields three cases. In Case (i), there exists a vertex $v\in C_i$ for some odd cycle $C_i\in \mathscr{C}(\hat{x})$ such that $y_v=0$. Then, the edge $iz$ is an $M'$-augmenting path in $G'$. In Case (ii), there exists a tight $M(\hat{x})$-alternating path $P$ in $G$ connecting two odd cycles $C_i,C_j\in \mathscr{C}(\hat{x})$. In $G'$, $P$ is an $M'$-augmenting path whose endpoints are pseudonodes $i$ and $j$. In Case (iii), there exists a tight and valid $M(\hat{x})$-alternating path $P$ in $G$ connecting an odd cycle $C_i\in\mathscr{C}(\hat{x})$ and a vertex $v\notin V(\mathscr{C}(\hat{x}))$ such that $y_v=0$. If $v$ is $M(\hat{x})$-covered, then $P+vz$ is an $M'$-augmenting path in $G'$. Otherwise, $P+vv'+v'z$ is an $M'$-augmenting path in $G'$. Thus, $M'$ is not a maximum matching in $G'$. $(\Leftarrow)$ Assume $M'$ is not a maximum matching in $G'$. Then, there exists an $M'$-augmenting path $P$ in $G'$. If both of its endpoints are pseudonodes $i$ and $j$, then $P$ is a tight $M(\hat{x})$-alternating path in $G$ which connects $C_i$ and $C_j$. So we may assume the endpoints of $P$ are a pseudonode $i$ and $z$. If $iz\in E(P)$, then there exists a vertex $v\in V(C_i)$ such that $y_v=0$. If $vz\in E(P)$ for some $v\in V$, then $y_v=0$ and $v$ is $M(\hat{x})$-covered. Hence, $P-vz$ is a tight and valid $M(\hat{x})$-alternating path in $G$ connecting $C_i$ and $v$. Otherwise, $v'z\in E(P)$ for some $v\in V$, which implies that $y_v=0$ and $v$ is $M(\hat{x})$-exposed. 
Hence, $P-vv'-v'z$ is a tight and valid $M(\hat{x})$-alternating path in $G$ connecting $C_i$ and $v$. By Theorem \ref{thm:cycles}, $\size{\mathscr{C}(\hat{x})}>\gamma(G)$. \end{proof} Thus, searching for the structures in Theorem \ref{thm:cycles} is equivalent to searching for an $M'$-augmenting path in $G'$. This immediately gives us an algorithm to generate a basic maximum-weight fractional matching with $\gamma(G)$ odd cycles. \begin{algorithm}[H] \caption{Minimize number of odd cycles} \label{alg:mincycle} Compute a basic maximum-weight fractional matching $\hat{x}$ in $G$\; Compute a minimum fractional $w$-vertex cover $y$ in $G$\; Construct $G'$ and $M'$\; \While{$\exists$ an $M'$-exposed pseudonode $r$ in $G'$}{ Grow an $M'$-alternating tree $T$ rooted at $r$ using Edmonds' algorithm \cite{journals/cjm/Edmonds65}\; \uIf{an $M'$-augmenting $r$-$s$ path $P'$ is found in $G'$}{ Let $P$ be the corresponding tight $M(\hat{x})$-alternating path in $G$\; \uIf{$s$ is a pseudonode}{ Alternate round on $C_r,C_s$ and complement on $P$\; } \Else{ Alternate round on $C_r$ and complement on $P$\; } Update $G'$ and $M'$ } \Else{ $G'\leftarrow G'\setminus V(T)$ } } \Return $\hat{x}$ \end{algorithm} After an $M'$-augmenting path $P'$ is found, let $\bar{x}$ denote the new basic maximum-weight fractional matching in $G$ obtained by alternate rounding and complementing $\hat{x}$. We can update $G'$ as follows. If $s$ is a pseudonode, we unshrink $C_r$ and $C_s$ in $G'$ because $\mathscr{C}(\bar{x})=\mathscr{C}(\hat{x})\setminus\set{C_r,C_s}$. Otherwise, $s=z$ and we only unshrink $C_r$. Then, there are two cases. In the first case, we have $vz\in E(P')$ for some $v\in V$. Observe that $\hat{x}(\delta(v))=1$ but $\bar{x}(\delta(v))=0$. Hence we replace the edge $vz$ with edges $vv',v'z$. In the second case, we have $v'z\in E(P')$ for some $v\in V$. This implies $\hat{x}(\delta(v))=0$ but $\bar{x}(\delta(v))=1$. So we replace edges $vv',v'z$ with the edge $vz$. 
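To illustrate the two support operations applied by Algorithm \ref{alg:mincycle} after an augmenting path is found, here is a small self-contained sketch (our own encoding, with edges as vertex pairs): alternate rounding on a 5-cycle of $\mathscr{C}(\hat{x})$ at its base, followed by complementing along a two-edge tight path, mirroring the situation in Case (iii) of Theorem \ref{thm:cycles} where the path's far endpoint $b$ is assumed to satisfy $y_b=0$.

```python
def alternate_round(x, cycle):
    """Alternate rounding on an odd cycle C = (e_1, ..., e_{2k+1}),
    listed so that the base is the vertex shared by e_1 and e_{2k+1}:
    odd-indexed edges are set to 0, even-indexed edges to 1."""
    x = dict(x)
    for i, e in enumerate(cycle, start=1):
        x[e] = 1.0 if i % 2 == 0 else 0.0
    return x

def complement(x, edges):
    """Complementing on an edge set: replace x_e by 1 - x_e."""
    x = dict(x)
    for e in edges:
        x[e] = 1.0 - x[e]
    return x

# A 5-cycle carrying 1/2 on every edge (one cycle of C(x)), plus a
# tight path (1, 'a'), ('a', 'b') where ('a', 'b') is currently matched.
C = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]   # base vertex: 1
x = {e: 0.5 for e in C}
x[(1, 'a')] = 0.0
x[('a', 'b')] = 1.0

x = alternate_round(x, C)                  # base vertex 1 becomes exposed
x = complement(x, [(1, 'a'), ('a', 'b')])  # ... and is covered again

print(sorted(e for e, xe in x.items() if xe == 1.0))
# [(1, 'a'), (2, 3), (4, 5)]
```

The result is an integral matching on this portion of the graph covering every cycle vertex, with $b$ left exposed; since $y_b=0$ by assumption, complementary slackness is preserved, exactly as in the proof of Theorem \ref{thm:cycles}.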
\begin{theorem} \label{thm:mincycles} Algorithm \ref{alg:mincycle} computes a basic maximum-weight fractional matching with $\gamma(G)$ odd cycles in polynomial time. \end{theorem} \begin{proof} There are $O(n)$ vertex-disjoint odd cycles in $\mathscr{C}(\hat{x})$. At every iteration, we eliminate at least one odd cycle from $\mathscr{C}(\hat{x})$ or a frustrated tree from $G'$. Hence, there are $O(n)$ iterations, and Algorithm \ref{alg:mincycle} terminates in polynomial time. Next, we prove correctness. Suppose we obtain an $M'$-frustrated tree $T$. Every edge in $T$ has one endpoint in $A(T)$ and another endpoint in $B(T)$. Every edge in $\delta_{G'}(T)$ has one endpoint in $A(T)$ and another endpoint outside $T$. Since the matching in $T$ remains unchanged in every iteration, this property continues to hold throughout the execution of the algorithm. Thus, $T$ is a frustrated tree in every subsequent iteration. This implies that the last matching generated by the algorithm is maximum. By Lemma \ref{lem:augpaths}, we have $\size{\mathscr{C}(\hat{x})}=\gamma(G)$. \end{proof} We remark here that in Algorithm \ref{alg:mincycle}, we can avoid solving linear programs to obtain $\hat{x}$ and $y$ in Steps 1 and 2. They can be computed using a simple duplication technique by Nemhauser and Trotter~\cite{journals/mp/NemhauserT75}, which involves solving the problem on a suitable bipartite graph. \section{Computing vertex-stabilizers} \label{sec:vertex_stabilizer} The goal of this section is to prove the following theorem: \begin{theorem} \label{thm:vert_stabilizer} There exists a polynomial-time algorithm that computes a minimum vertex-stabilizer $S$ for a weighted graph $G$. Moreover, $\nu(G\setminus S)\geq \frac{2}{3}\nu(G)$. \end{theorem} \noindent Let us start by discussing a lower bound on the size of a minimum vertex-stabilizer.
\paragraph{Lower bound.} Here we prove that $\gamma(G)$ is a lower bound on the number of vertices to remove in order to stabilize a graph. Recall that a graph is stable if and only if $\gamma(G)=0$. One strategy to prove such a bound is to show that $\gamma(G)$ does not decrease by too much when we remove a vertex. Indeed, we prove that $\gamma(G)$ drops by at most 1 when a vertex is deleted (Lemma \ref{lem:gamma_vert}). We first develop a couple of claims. \begin{claim} \label{clm:frac} Let $\hat{x}$ and $y$ be a basic maximum-weight fractional matching and a minimum fractional $w$-vertex cover in $G$ respectively. Pick a vertex $s$ from any odd cycle $C\in \mathscr{C}(\hat{x})$. If $\bar{x}$ is the fractional matching obtained by alternate rounding on $C$ at $s$, then $\bar{x}_{-\delta(s)}$ and $y_{-s}$ are a basic maximum-weight fractional matching and a minimum fractional $w$-vertex cover in $G\setminus s$ respectively. \end{claim} \begin{proof} First, notice that $\bar{x}_{-\delta(s)}$ is a basic fractional matching and $y_{-s}$ is a fractional $w$-vertex cover in $G\setminus s$. We will show that they satisfy complementary slackness. Let $uv$ be an edge where $\bar{x}_{uv}>0$. If $uv\in E(C)$, then $\hat{x}_{uv}>0$ and so $y_u+y_v=w_{uv}$. Otherwise, $\bar{x}_{uv}=\hat{x}_{uv}$, so tightness is inherited from the optimal pair $(\hat{x},y)$. Next, let $v\neq s$ be a vertex in $C$ where $y_v>0$. We only need to check the vertices in $C$ because $\hat{x}_e=0$ for every edge $e\in\delta(s)\setminus E(C)$. Since $v$ is $M(\bar{x})$-covered, we have $\bar{x}(\delta(v))=1$. Therefore, $\bar{x}_{-\delta(s)}$ and $y_{-s}$ form a primal-dual optimal pair. \end{proof} The following operation allows us to switch between fractional matchings on a set of edges: \begin{definition} Let $x$ and $x'$ be fractional matchings in $G$. By \emph{switching} on $E'\subseteq E$ from $x$ to $x'$, we mean replacing $x_e$ by $x'_e$ for all $e\in E'$. \end{definition} Switching does not necessarily yield a feasible fractional matching.
Hence, we will only use it on the components of a specific subgraph of $G$: \begin{claim} \label{clm:switch} Given two basic fractional matchings $x$ and $x'$, let $H$ be the subgraph of $G$ induced by $\supp(x+x')$. For any component $K$ in $H$, switching on $E(K)$ from $x$ to $x'$ yields a basic fractional matching in $G$. \end{claim} \begin{proof} Let $\bar{x}$ denote the vector obtained by switching on $E(K)$ from $x$ to $x'$. We first show that $\bar{x}$ is a feasible fractional matching in $G$. For the purpose of contradiction, suppose there exists a vertex $v\in V(K)$ such that $\bar{x}(\delta(v))>1$. Since $\bar{x}_e = x'_e$ for all $e\in E(K)$ and $\bar{x}_e=x_e$ for all $e\notin E(K)$, we have $0<\bar{x}(\delta(v)\cap E(K)) \leq 1$ and $0<\bar{x}(\delta(v)\setminus E(K)) \leq 1$. So there exists an edge $f\in \delta(v)\setminus E(K)$ such that $x_f>0$. But then $f\in\supp(x+x')$, and since $f$ is incident to $v\in V(K)$, the component $K$ would contain $f$, a contradiction. It is easy to see that $\bar{x}$ is basic. \end{proof} \begin{lemma}\label{lem:gamma_vert} For every vertex $v\in V$, $\gamma(G\setminus v)\geq \gamma(G)-1$. \end{lemma} \begin{proof} Let $x^*$ be a basic maximum-weight fractional matching in $G$ such that $\size{\mathscr{C}(x^*)}=\gamma(G)$. Let $y$ be a minimum fractional $w$-vertex cover in $G$. For the purpose of contradiction, suppose there exists a vertex $u\in V$ such that $\gamma(G\setminus u)<\gamma(G)-1$. There are two cases: \smallskip \noindent \emph{Case 1: $u\in V(C)$ for some odd cycle $C\in \mathscr{C}(x^*)$.} Let $\bar{x}$ be the fractional matching obtained from $x^*$ by alternate rounding on $C$ at $u$. By Claim \ref{clm:frac}, we know that $\bar{x}_{-\delta(u)}$ is a basic maximum-weight fractional matching and $y_{-u}$ is a minimum fractional $w$-vertex cover in $G\setminus u$. We first give a proof sketch for this case. If $\bar{x}_{-\delta(u)}$ is not an optimal basic solution yielding $\gamma(G\setminus u)$ odd cycles, then one of the structures given by Theorem \ref{thm:cycles} must exist.
The same structure would then exist with respect to the basic solution $x^*$, which yields a contradiction since $x^*$ is an optimal basic solution with $\gamma(G)$ odd cycles. For notational convenience, we can use $\mathscr{C}(\bar{x})$ and $M(\bar{x})$ to refer to the odd cycles and matched edges of $\bar{x}_{-\delta(u)}$ respectively because $\mathscr{C}(\bar{x})=\mathscr{C}\pr{\bar{x}_{-\delta(u)}}$ and $M(\bar{x})=M\pr{\bar{x}_{-\delta(u)}}$. Since $\size{\mathscr{C}(\bar{x})}=\size{\mathscr{C}(x^*)}-1=\gamma(G)-1>\gamma(G\setminus u)$, Theorem \ref{thm:cycles} tells us that $G\setminus u$ contains one of the following structures. The first structure is a vertex $v\in V(C_i)$ for some odd cycle $C_i\in \mathscr{C}(\bar{x})$ such that $y_v=0$. However, since $C_i\in \mathscr{C}(x^*)$, by Theorem \ref{thm:cycles} we arrive at the contradiction $\size{\mathscr{C}(x^*)}>\gamma(G)$. The second structure is a tight and valid $M(\bar{x})$-alternating path $P$ which connects two odd cycles $C_i,C_j\in\mathscr{C}(\bar{x})$, or an odd cycle $C_i\in \mathscr{C}(\bar{x})$ and a vertex $v\notin V(\mathscr{C}(\bar{x}))$ such that $y_v=0$. Note that $C_i,C_j\in \mathscr{C}(x^*)$. If $V(P)\cap V(C)=\emptyset$, then $P$ is also a tight and valid $M(x^*)$-alternating path in $G$ which connects $C_i$ and $C_j$, or $C_i$ and $v$. Otherwise, let $s$ be the endpoint of $P$ in $V(C_i)$ and let $t$ denote the first vertex of $C$ encountered while traversing along $P$ from $s$; then the $s$-$t$ subpath of $P$ is a tight $M(x^*)$-alternating path which connects $C_i,C\in \mathscr{C}(x^*)$. In either case, we obtain the contradiction $\size{\mathscr{C}(x^*)}>\gamma(G)$ by Theorem \ref{thm:cycles}. \smallskip \noindent \emph{Case 2: $u\notin V(\mathscr{C}(x^*))$.} If $u$ is $M(x^*)$-exposed, then $\nu_f(G\setminus u)=\nu_f(G)$ and $\gamma(G\setminus u)=\gamma(G)$. So we may assume $u$ is $M(x^*)$-covered.
Let $\hat{x}$ be a basic maximum-weight fractional matching in $G\setminus u$ such that $\size{\mathscr{C}(\hat{x})}< \gamma(G)-1$. Define $N(\hat{x}):=M(\hat{x})\setminus E(\mathscr{C}(x^*))$ and $N(x^*):=M(x^*)\setminus E(\mathscr{C}(\hat{x}))$. Consider the subgraph $J=(V,N(x^*)\triangle N(\hat{x}))$. Note that $u$ is covered by $N(x^*)$ and exposed by $N(\hat{x})$. Let $P$ be the component in $J$ which contains $u$. We know that $P$ is a path with $u$ as an endpoint. Let $v$ be the other endpoint of $P$. There are 3 subcases, but before jumping into them, we first give an overview of how we arrive at a contradiction in each subcase. We show that one can move from $x^*$ to a new solution $\tilde x$ such that: \begin{enumerate}[noitemsep,topsep=0pt] \item[(i)] $\tilde{x}$ is a basic maximum-weight fractional matching for a subgraph $G'$ obtained by deleting at most 1 vertex from a cycle of $\mathscr{C}(x^*)$; and \item[(ii)] $\size{\mathscr{C}(\tilde{x})}<\gamma(G')$. \end{enumerate} Clearly, both of the above properties cannot hold, so this yields a contradiction. \smallskip \emph{Subcase 2.1: $v\in C$ for some odd cycle $C\in \mathscr{C}(x^*)$.} In this subcase, the path $P$ has even length. Let $\bar{x}$ be the fractional matching obtained from $x^*$ by alternate rounding on $C$ at $v$. By Claim \ref{clm:frac}, $\bar{x}_{-\delta(v)}$ is a basic maximum-weight fractional matching in $G\setminus v$. Let $H$ be the subgraph of $G$ induced by $\supp(\hat{x}+\bar{x})$ (see Figure \ref{fig:subcase1} for an example). Note that $\hat{x}_e+\bar{x}_e=0$ for every edge $e\notin E(P)$ which is incident to a vertex in $P$. Thus, $P$ is a component in $H$. Since $\size{\mathscr{C}(\bar{x})}=\gamma(G)-1>\size{\mathscr{C}(\hat{x})}$, there exists a component $K$ in $H$ which has more odd cycles from $\mathscr{C}(\bar{x})$ than $\mathscr{C}(\hat{x})$. 
Switching on $K$ from $\bar{x}_{-\delta(v)}$ to $\hat{x}$ yields a basic fractional matching in $G\setminus v$ with less than $\gamma(G)-1$ odd cycles. To yield a contradiction to Case 1, it is left to show that it is maximum-weight. This is because we are deleting a vertex $v$ from an odd cycle of $\mathscr{C}(x^*)$, but $\gamma(G\setminus v)$ decreases by more than 1. Now, since $\hat{x}$ and $\bar{x}_{-\delta(v)}$ are maximum-weight fractional matchings in $G\setminus u$ and $G\setminus v$ respectively, we have $\sum_{e\in E(K)}w_e\hat{x}_e = \sum_{e\in E(K)}w_e\bar{x}_e$ because $u,v\notin V(K)$. Thus, the resulting matching is indeed maximum-weight in $G\setminus v$. \begin{figure}[ht] \centering \def1.6{1.3} \begin{tikzpicture}[node distance=1.6 cm, inner sep=2.5pt, minimum size=2.5pt, auto] \node [vertex,label=below:\small{$v$}] (v1) {}; \node [vertex] (v2) [above right of=v1] {}; \node [vertex] (v3) [below right of=v1] {}; \node [vertex] (v4) [right of=v2] {}; \node [vertex] (v5) [right of=v3] {}; \node [vertex] (v6) [left of=v1] {}; \node [vertex] (v7) [left of=v6] {}; \node [vertex] (v8) [left of=v7] {}; \node [vertex,label=below:\small{$u$}] (v9) [left of=v8] {}; \path [matched edge] (v2) -- (v4); \path [matched edge] (v3) -- (v5); \path [dashed edge] (v1) -- (v2); \path [dashed edge] (v1) -- (v3); \path [dashed edge] (v4) -- (v5); \path [selected edge] (v1) -- (v6); \path [matched edge] (v6) -- (v7); \path [selected edge] (v7) -- (v8); \path [matched edge] (v8) -- (v9); \node[weight] (c) [right of=v1] {$C$}; \node[weight] (k) at (1.25,1.75) {$K$}; \draw plot [smooth cycle] coordinates {(0,1) (-1,2) (2,2.5) (3,1) (2,0.5)}; \end{tikzpicture} \caption{An example of the graph induced by $\supp(\hat{x}+\bar{x})$ in Subcase 2.1. 
Black bold edges are in $M(\bar{x})$ while gray bold edges are in $M(\hat{x})$.} \label{fig:subcase1} \end{figure} \smallskip \emph{Subcase 2.2: $v\in C$ for some odd cycle $C\in \mathscr{C}(\hat{x})$.} In this subcase, the path $P$ has odd length. Let $\bar{x}$ be the fractional matching obtained from $\hat{x}$ by alternate rounding on $C$ at $v$. By Claim \ref{clm:frac}, $\bar{x}_{-\delta(v)}$ is a basic maximum-weight fractional matching in $G\setminus\set{u,v}$. Let $H$ be the subgraph of $G$ induced by $\supp(x^*+\bar{x})$ (see Figure \ref{fig:subcase2} for an example). Note that $x^*_e+\bar{x}_e=0$ for every edge $e\notin E(P)$ incident to a vertex in $P$. Thus, $P$ is a component in $H$. Since $\size{\mathscr{C}(\bar{x})} = \size{\mathscr{C}(\hat{x})} - 1 < \gamma(G)-2<\size{\mathscr{C}(x^*)}$, there exists a component $K$ in $H$ which has more odd cycles from $\mathscr{C}(x^*)$ than $\mathscr{C}(\bar{x})$. Switching on $K$ from $x^*$ to $\bar{x}$ yields a basic fractional matching in $G$ with less than $\gamma(G)$ odd cycles. To yield a contradiction, it is left to show that it is maximum-weight. Since $x^*$ and $\bar{x}_{-\delta(v)}$ are maximum-weight fractional matchings in $G$ and $G\setminus\set{u,v}$ respectively, we have $\sum_{e\in E(K)}w_ex^*_e=\sum_{e\in E(K)}w_e\bar{x}_e$ because $u,v\notin V(K)$. Thus, the resulting basic fractional matching is maximum-weight in $G$. 
\begin{figure}[ht] \centering \def1.6{1.3} \begin{tikzpicture}[node distance=1.6 cm, inner sep=2.5pt, minimum size=2.5pt, auto] \node [vertex,label=below:\small{$v$}] (v1) {}; \node [vertex] (v2) [above right of=v1] {}; \node [vertex] (v3) [below right of=v1] {}; \node [vertex] (v4) [right of=v2] {}; \node [vertex] (v5) [right of=v3] {}; \node [vertex] (v6) [left of=v1] {}; \node [vertex] (v7) [left of=v6] {}; \node [vertex,label=below:\small{$u$}] (v8) [left of=v7] {}; \path [selected edge] (v2) -- (v4); \path [selected edge] (v3) -- (v5); \path [dashed edge] (v1) -- (v2); \path [dashed edge] (v1) -- (v3); \path [dashed edge] (v4) -- (v5); \path [matched edge] (v1) -- (v6); \path [selected edge] (v6) -- (v7); \path [matched edge] (v7) -- (v8); \node[weight] (c) [right of=v1] {$C$}; \node[weight] (k) at (1.25,1.75) {$K$}; \draw plot [smooth cycle] coordinates {(0,1) (-1,2) (2,2.5) (3,1) (2,0.5)}; \end{tikzpicture} \caption{An example of the graph induced by $\supp(x^*+\bar{x})$ in Subcase 2.2. Black bold edges are in $M(x^*)$ while gray bold edges are in $M(\bar{x})$.} \label{fig:subcase2} \end{figure} \smallskip \emph{Subcase 2.3: $v\notin V(\mathscr{C}(x^*)\cup \mathscr{C}(\hat{x}))$.} Let $H$ be the subgraph of $G$ induced by $\supp(x^*+\hat{x})$ (see Figure \ref{fig:subcase3} for an example). Note that $x^*_e+\hat{x}_e=0$ for every edge $e\notin E(P)$ which is incident to a vertex in $P$. Thus, the path $P$ is a component in $H$. Since $\size{\mathscr{C}(x^*)}>\gamma(G)-1>\size{\mathscr{C}(\hat{x})}$, there exists a component $K$ in $H$ which has more odd cycles from $\mathscr{C}(x^*)$ than $\mathscr{C}(\hat{x})$. Switching on $K$ from $x^*$ to $\hat{x}$ yields a basic fractional matching in $G$ with less than $\gamma(G)$ odd cycles. To yield a contradiction, it is left to show that it is maximum-weight. 
Since $x^*$ and $\hat{x}$ are maximum-weight fractional matchings in $G$ and $G\setminus u$ respectively, we have $\sum_{e\in E(K)}w_e x^*_e=\sum_{e\in E(K)}w_e\hat{x}_e$ because $u\notin V(K)$. This implies that the resulting basic fractional matching is maximum-weight in $G$. \end{proof} \begin{figure}[ht] \centering \def1.6{1.3} \begin{tikzpicture}[node distance=1.6 cm, inner sep=2.5pt, minimum size=2.5pt, auto] \node [vertex,label=below:\small{$v$}] (v1) {}; \node [vertex] (v2) [left of=v1] {}; \node [vertex] (v3) [left of=v2] {}; \node [vertex] (v4) [left of=v3] {}; \node [vertex,label=below:\small{$u$}] (v5) [left of=v4] {}; \path [selected edge] (v1) -- (v2); \path [matched edge] (v2) -- (v3); \path [selected edge] (v3) -- (v4); \path [matched edge] (v4) -- (v5); \node[weight] (k) at (1.25,1.75) {$K$}; \draw plot [smooth cycle] coordinates {(0,1) (-1,2) (2,2.5) (3,1) (2,0.5)}; \end{tikzpicture} \caption{An example of the graph induced by $\supp(x^*+\hat{x})$ in Subcase 2.3. Black bold edges are in $M(x^*)$ while gray bold edges are in $M(\hat{x})$.} \label{fig:subcase3} \end{figure} As a corollary to the above lemma, we obtain the claimed lower bound. \begin{lemma}\label{lem:bound_vert} For every vertex-stabilizer $S$ of $G$, $\size{S}\geq \gamma(G)$. \end{lemma} \paragraph{The algorithm.} The algorithm we use to stabilize a graph is very simple: it computes a basic maximum-weight fractional matching $\hat{x}$ in $G$ with $\gamma(G)$ odd cycles (this can be done using Algorithm \ref{alg:mincycle}) and a minimum fractional $w$-vertex cover $y$ in $G$, and then removes \emph{one} vertex from every cycle in $\mathscr{C}(\hat{x})$, namely, the vertex with the least $y$-value in the cycle. Algorithm \ref{alg:vert} formalizes this. 
\begin{algorithm}[H] Initialize $S \gets \emptyset$ \; Compute a minimum fractional $w$-vertex cover $y$ in $G$\; Compute a basic maximum-weight fractional matching $\hat{x}$ in $G$ with $\gamma(G)$ odd cycles\; Let $\mathscr{C}(\hat{x})=\set{C_1,C_2,\dots,C_{\gamma(G)}}$\; \For{$i = 1$ \textbf{to} $\gamma(G)$}{ Let $v_i= \argmin_{v\in V(C_i)}y_v$\; $S\gets S+v_i$ } \Return $S$ \caption{Minimum vertex-stabilizer} \label{alg:vert} \end{algorithm} \noindent We are now ready to prove the main theorem stated at the beginning of the section, Theorem \ref{thm:vert_stabilizer}. \begin{proof}[Proof of Theorem \ref{thm:vert_stabilizer}.] Let $S=\set{v_1,v_2,\dots,v_{\gamma(G)}}$ be the set of vertices returned by the algorithm. Let $\bar{x}$ be the vector obtained from $\hat{x}$ by alternate rounding on $C_i$ at $v_i$ for each $i$. By Claim \ref{clm:frac}, $\bar{x}_{-\cup_{i=1}^{\gamma(G)}\delta(v_i)}$ is a basic maximum-weight fractional matching in $G\setminus S$. Note that it is also a maximum-weight integral matching in $G\setminus S$. Thus, $\nu(G\setminus S)=\nu_f(G\setminus S)$ and $G\setminus S$ is stable. Moreover, $S$ is minimum by Lemma \ref{lem:bound_vert}. It is left to show that $\nu(G\setminus S)\geq \frac{2}{3}\nu(G)$. For every odd cycle $C_i\in \mathscr{C}(\hat{x})$, we have \[y_{v_i}\leq \frac{y(V(C_i))}{\size{V(C_i)}}\leq \frac{y(V(C_i))}{3}\] because $v_i$ has the smallest fractional $w$-vertex cover value in $C_i$. From Claim \ref{clm:frac}, we also know that $y_{-S}$ is a minimum fractional $w$-vertex cover in $G\setminus S$. Then, \[\nu(G\setminus S) = \tau_f(G\setminus S) = \mathbbm 1^{\top}y-\sum_{i=1}^{\gamma(G)}y_{v_i}\geq \mathbbm 1^{\top}y - \frac{1}{3}\sum_{i=1}^{\gamma(G)}y(V(C_i)) \geq \mathbbm 1^{\top}y - \frac{1}{3}\mathbbm 1^{\top}y = \frac{2}{3}\tau_f(G) \geq \frac{2}{3}\nu(G).\] \end{proof} Note that removing any single vertex from each cycle of $\mathscr{C}(\hat{x})$ yields a minimum-cardinality vertex-stabilizer.
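The selection rule of Algorithm \ref{alg:vert} fits in a few lines; the sketch below (our own notation, with $y$ stored as a dictionary of cover values) only illustrates the rule, not the computation of $\hat{x}$ and $y$.

```python
def pick_stabilizer(odd_cycles, y):
    """From each odd cycle of the basic optimal solution, pick the
    vertex with the smallest fractional w-vertex-cover value y_v;
    the bound y_{v_i} <= y(V(C_i))/3 yields the 2/3 weight guarantee."""
    return [min(cycle, key=lambda v: y[v]) for cycle in odd_cycles]

y = {1: 2.0, 2: 0.5, 3: 1.5, 4: 1.0, 5: 0.0, 6: 2.0}
print(pick_stabilizer([[1, 2, 3], [4, 5, 6]], y))  # [2, 5]
```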
The reason we chose the vertex with the smallest $y_v$ is to preserve the value of the original maximum-weight matching by a factor of $\frac{2}{3}$. \paragraph{Tightness of the matching bound.} A natural question is whether it is possible to design an algorithm that always returns a vertex-stabilizer $S$ satisfying $\nu(G\setminus S) \geq \alpha \nu(G)$, for some $\alpha > \frac{2}{3}$. We report here an example showing that, in general, this is not possible since the bound of $\frac{2}{3}$ can be asymptotically tight. Consider the graph $G$ in Figure \ref{fig:twothirds} for some sufficiently small $\varepsilon>0$. It is unstable because it is an augmenting flower. The maximum-weight matching is given by the bold edges. For any vertex stabilizer $S$, \[\nu(G\setminus S)\leq 2 = \frac{2}{3-\varepsilon}\pr{3-\varepsilon} = \frac{2}{3-\varepsilon}\nu(G)\] \begin{figure}[ht] \centering \def1.6{1.5} \begin{tikzpicture}[node distance=1.6 cm, inner sep=2.5pt, minimum size=2.5pt, auto] \node [vertex] (v1) {}; \node [vertex] (v2) [above left of=v1] {}; \node [vertex] (v3) [below left of=v1] {}; \node [vertex] (v4) [right of=v1] {}; \path [matched edge] (v2) -- node [weight, left] {$2$} (v3); \path (v1) edge node [weight, above right] {$2$} (v2); \path [matched edge] (v1) -- node [weight, above] {$1-\varepsilon$} (v4); \path (v1) edge node [weight, below right] {$2$} (v3); \end{tikzpicture} \caption{An example showing that the bound of $\frac{2}{3}$ is asymptotically tight.} \label{fig:twothirds} \end{figure} Another natural question is whether one can at least distinguish if, for a specific instance, there exists a vertex-stabilizer $S$ such that $\nu(G\setminus S)=\nu(G)$. Once again, we show that the answer is negative. Specifically, let us call a vertex-stabilizer $S$ \emph{weight-preserving} if $\nu(G\setminus S)=\nu(G)$. We show that finding such a vertex-stabilizer is hard in general. 
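The tightness example in Figure \ref{fig:twothirds} can be sanity-checked by brute force. In the sketch below the triangle vertices are labeled $1,2,3$, with $1$ also carrying the stem edge to $4$, and $\varepsilon=0.1$; labels and the value of $\varepsilon$ are chosen by us for illustration.

```python
from itertools import combinations

def max_weight_matching(edges):
    """Brute-force maximum-weight matching: enumerate all edge subsets
    and keep the heaviest vertex-disjoint one (fine for tiny graphs)."""
    best = 0.0
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            covered = [x for (u, v, _) in sub for x in (u, v)]
            if len(covered) == len(set(covered)):
                best = max(best, sum(w for (_, _, w) in sub))
    return best

eps = 0.1
# Triangle on {1, 2, 3} with weight-2 edges, stem edge 1-4 of weight 1 - eps.
flower = [(1, 2, 2), (1, 3, 2), (2, 3, 2), (1, 4, 1 - eps)]
print(max_weight_matching(flower))  # nu(G) = 3 - eps
# Removing triangle vertex 1 stabilizes the graph, but nu drops to 2.
print(max_weight_matching([e for e in flower if 1 not in e[:2]]))
```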
The proof is based on a reduction from the independent set problem, similar to the one given by Bir\'{o} et al.~\cite{journals/tcs/BiroBGKP}. \begin{theorem} \label{thm:hardness_vert} Deciding whether a graph has a weight-preserving vertex-stabilizer is NP-complete. \end{theorem} \begin{proof} The problem is clearly in NP because any yes-instance can be verified using a weight-preserving vertex-stabilizer in polynomial time. To prove NP-hardness, we give a reduction from the independent set problem. Let $G=(V,E)$ and $k$ be an independent set instance, where $V=\set{v_1,v_2,\dots,v_n}$. The independent set problem asks to determine whether $G$ has an independent set of size at least $k$. We may assume $2\leq k\leq n$. We construct the gadget graph $G^*$ as follows. First, set the weight on every edge in $E$ to 1. For each $v_i\in V$, add a vertex $v'_i$ and the edge $v_iv'_i$ with weight 1. Denote this set of new vertices as $V'=\set{v'_1,v'_2,\dots,v'_n}$. Next, create $k$ pairwise-disjoint copies of the triangle $C_i=(V_i,E_i)$, where $V_i=\set{a_i,b_i,c_i}$, $E_i=\set{a_ib_i,b_ic_i,a_ic_i}$ and the weight on every edge in $E_i$ is 4. Finally, add the edge $b_iv_j$ for every $i\in [k]$ and $j\in [n]$ with weight 2.
(See Figure \ref{fig:hardness_vert}) \begin{figure}[ht] \centering \def1.6{1.5} \def0.4{0.4} \begin{tikzpicture}[node distance=1.6 cm, inner sep=2.5pt, minimum size=2.5pt, auto] \node [vertex,label=below left:\small{$v_1$}] (v1) {}; \node [vertex,label=below left:\small{$v_2$}] (v2) [right of=v1, xshift=0.4*1.6 cm] {}; \node [font=\large] (d1) [right of=v2, xshift=0.4*1.6 cm] {$\dots$}; \node [vertex,label=below left:\small{$v_k$}] (v3) [right of=d1, xshift=0.4*1.6 cm] {}; \node [font=\large] (d2) [right of=v3, xshift=0.4*1.6 cm] {$\dots$}; \node [font=\large] (d3) [above of=d1, yshift=0.4*1.6 cm] {$\dots$}; \node [vertex,label=below left:\small{$v_n$}] (v4) [right of=d2, xshift=0.4*1.6 cm] {}; \node [vertex,label=below left:\small{$v'_1$}] (w1) [below of=v1] {}; \node [vertex,label=below left:\small{$v'_2$}] (w2) [below of=v2] {}; \node [vertex,label=below left:\small{$v'_k$}] (w3) [below of=v3] {}; \node [vertex,label=below left:\small{$v'_n$}] (w4) [below of=v4] {}; \node [vertex,label=below left:\small{$b_1$}] (b1) [above of=v1] {}; \node [vertex,label=below left:\small{$b_2$}] (b2) [above of=v2] {}; \node [vertex,label=below left:\small{$b_k$}] (b3) [above of=v3] {}; \node [vertex] (a1) [above left of=b1, xshift=0.3*1.6 cm] {}; \node [vertex] (a2) [above left of=b2, xshift=0.3*1.6 cm] {}; \node [vertex] (a3) [above left of=b3, xshift=0.3*1.6 cm] {}; \node [vertex] (c1) [above right of=b1, xshift=-0.3*1.6 cm] {}; \node [vertex] (c2) [above right of=b2, xshift=-0.3*1.6 cm] {}; \node [vertex] (c3) [above right of=b3, xshift=-0.3*1.6 cm] {}; \path (v1) edge (w1); \path (v2) edge (w2); \path (v3) edge (w3); \path [matched edge] (v4) -- (w4); \path (v3) edge [bend right=15] (v4); \path [matched edge] (a1) -- (c1); \path (b1) edge (a1); \path (b1) edge (c1); \path [matched edge] (b1) -- (v1); \path (b1) edge (v2); \path (b1) edge (v3); \path (b1) edge (v4); \path [matched edge] (a2) -- (c2); \path (b2) edge (a2); \path (b2) edge (c2); \path (b2) edge (v1); \path 
[matched edge] (b2) -- (v2); \path (b2) edge (v3); \path (b2) edge (v4); \path [matched edge] (a3) -- (c3); \path (b3) edge (a3); \path (b3) edge (c3); \path (b3) edge (v1); \path (b3) edge (v2); \path [matched edge] (b3) -- (v3); \path (b3) edge (v4); \path [fill=gray,opacity=0.25] (-0.4*1.6,-0.4*1.6) -- (-0.4*1.6,0.4*1.6) -- (5*1.6+6*0.4*1.6,0.4*1.6) -- (5*1.6+6*0.4*1.6,-0.4*1.6) -- (-0.4*1.6,-0.4*1.6); \draw[decoration={brace,raise=15pt},decorate] (-0.4*1.6,1.6+0.05) -- node[weight,left=18pt] {$w_e=4$} (-0.4*1.6,1.75*1.6-0.05); \draw[decoration={brace,raise=15pt},decorate] (-0.4*1.6,0.05) -- node[weight,left=18pt] {$w_e=2$} (-0.4*1.6,1.6-0.05); \draw[decoration={brace,raise=15pt},decorate] (-0.4*1.6,-1.6+0.05) -- node[weight,left=18pt] {$w_e=1$} (-0.4*1.6,-0.05); \end{tikzpicture} \caption{The gadget graph $G^*$.} \label{fig:hardness_vert} \end{figure} Our goal is to show that $G$ has an independent set of size at least $k$ if and only if $G^*$ has a weight-preserving vertex-stabilizer. Before proving the main result, we first derive some properties of maximum-weight matchings in $G^*$. \begin{claim}\label{clm:matching1} If $M$ is a maximum-weight matching in $G^*$, then $M\cap E=\emptyset$. \end{claim} \begin{proof} For the purpose of contradiction, suppose there exists an edge $v_iv_j\in M\cap E$. Then, $(v'_i,v_i,v_j,v'_j)$ forms an augmenting path, which is a contradiction. \end{proof} \begin{claim}\label{clm:matching2} If $M$ is a maximum-weight matching in $G^*$, every $b_i$ is matched to some $v_j$ in $M$. \end{claim} \begin{proof} For the purpose of contradiction, suppose there exists an $i\in[k]$ such that $b_iv_j\notin M$ for all $j\in [n]$. Then, $b_i$ is either $M$-exposed or matched to $a_i$ or $c_i$. Since $k\leq n$, by the pigeonhole principle there exists an $\ell\in [n]$ such that $b_jv_\ell\notin M$ for all $j\in [k]$. By Claim \ref{clm:matching1}, $v_\ell v'_\ell\in M$. 
If $b_i$ is $M$-exposed, then $(b_i,v_\ell,v'_\ell)$ forms an augmenting path. Otherwise, we may without loss of generality assume $a_ib_i\in M$. Then, $(c_i,a_i,b_i,v_\ell,v'_\ell)$ forms an augmenting path. We have reached a contradiction. \end{proof} \begin{claim}\label{clm:matching3} $\nu(G^*)=n+5k$ \end{claim} \begin{proof} Let $M$ be a maximum-weight matching in $G^*$. By Claim \ref{clm:matching2}, there are $k$ edges of the form $b_iv_j$ in $M$. Hence, there are also $k$ edges of the form $a_ic_i$ in $M$. Moreover, we have $n-k$ edges of the form $v_iv'_i$ in $M$. This gives a total weight of $2k+4k+n-k=n+5k$. \end{proof} \begin{claim}\label{clm:matching4} The set of inessential vertices in $G^*$ is $V'$. \end{claim} \begin{proof} It is easy to see that the vertices in $G$ and $\cup_{i=1}^kC_i$ are essential because they are covered by every maximum-weight matching in $G^*$. Let $v'_i\in V'$ and $M$ be a maximum-weight matching in $G^*$ such that $v_iv'_i\in M$. Since $k\geq 1$, there exist $j\in[k]$ and $\ell\in[n]$ such that $b_jv_\ell\in M$. Define a new matching $M':=M+b_jv_i-b_jv_\ell+v_\ell v'_\ell-v_iv'_i$. Note that $M'$ is a maximum-weight matching in $G^*$ and $v'_i$ is $M'$-exposed. Thus, $v'_i$ is inessential. \end{proof} For the forward direction, let $S$ be an independent set of $G$ where $\size{S}=k$. Without loss of generality, assume $S=\set{v_1,v_2,\dots,v_k}$. Let $M$ be the matching defined by \[M:=\set{a_ic_i,b_iv_i:1\leq i\leq k}\cup\set{v_iv'_i:k< i\leq n}.\] Since $w(M)=n+5k$, by Claim \ref{clm:matching3} it is a maximum-weight matching in $G^*$. We claim that $S':=\set{v'_1,v'_2,\dots, v'_k}$ is a weight-preserving vertex-stabilizer of $G^*$. First, note that the matching $M$ survives after removing $S'$ from $G^*$. Hence, $M$ is a maximum-weight matching in $G^*\setminus S'$. It is left to show that $G^*\setminus S'$ is stable. 
We define a fractional $w$-vertex cover $y$ on $G^*\setminus S'$ as follows: \[y_v=\begin{cases} 2, &\text{ if }v\in\cup_{i=1}^k\set{a_i,b_i,c_i}\\ 1, &\text{ if }v\in\cup_{i=k+1}^n v_i\\ 0, &\text{ otherwise}. \end{cases}\] It is easy to check that $y_u+y_v\geq w_{uv}$ holds for every edge $uv$ of $G^*\setminus S'$ with $uv\notin E$. Let $v_iv_j\in E$. Since $S$ is an independent set, at most one of $v_i$ and $v_j$ is in $S$. This implies that $y_{v_i}+y_{v_j}\geq 1$. So $y$ is indeed a fractional $w$-vertex cover. Since $\tau_f(G^*\setminus S')=n+5k=\nu(G^*\setminus S')$, $G^*\setminus S'$ is stable. For the converse, let $S'$ be a weight-preserving vertex-stabilizer of $G^*$. We know that $S'\subseteq V'$ by Claim \ref{clm:matching4} because $S'$ does not contain essential vertices. Let $M$ be a maximum-weight matching in $G^*\setminus S'$. Since $\nu(G^*\setminus S')=\nu(G^*)$, $M$ is also a maximum-weight matching in $G^*$. We claim that $\size{S'}=k$. If $\size{S'}<k$, then there exists an $i\in[n]$ such that $v_iv'_i\in G^*\setminus S'$ and $b_jv_i\in M$ for some $j\in[k]$ (such an $i$ exists by Claim \ref{clm:matching2} and the pigeonhole principle, since fewer than $k$ pendant vertices were removed). Then, $a_jc_j\in M$, so $C_j\cup (b_j,v_i,v'_i)$ forms an augmenting flower. This is a contradiction to Theorem \ref{thm:stable} because $G^*\setminus S'$ is stable. On the other hand, if $\size{S'}>k$, then $w(M)=6k+n-\size{S'}<n+5k$. This is also a contradiction because $\nu(G^*\setminus S')=\nu(G^*)$. It follows that $\size{S'}=k$. Without loss of generality, assume $S'=\set{v'_1,v'_2,\dots,v'_k}$. Let $S:=\set{v_1,v_2,\dots,v_k}$. We claim that every $v_i\in S$ is matched to some $b_j$ in $M$. For the purpose of contradiction, suppose there exists $v_i\in S$ such that $v_i$ is $M$-exposed. By the pigeonhole principle, there exists $j\in [n]$ such that $v_jv'_j\in G^*\setminus S'$ and $b_\ell v_j\in M$ for some $\ell\in [k]$. Then, $(v_i,b_\ell,v_j,v'_j)$ forms an augmenting path, which is a contradiction. It is left to show that $S$ is an independent set.
For the purpose of contradiction, suppose there exist $v_i,v_j\in S$ such that $v_iv_j\in E$. Let $b_pv_i,b_qv_j\in M$ for some $p,q\in [k]$. Then, $C_p\cup (b_p,v_i,v_j,b_q) \cup C_q$ forms an augmenting bi-cycle. By Theorem \ref{thm:stable}, $G^*\setminus S'$ is unstable, which is a contradiction. Thus, $S$ is an independent set and $\size{S}\geq k$. \end{proof} \section{Computing edge-stabilizers} \label{sec:edge_stabilizer} In contrast to the vertex-stabilizer problem, finding a minimum edge-stabilizer is computationally difficult. Bir\'{o} et al.~\cite{journals/tcs/BiroBGKP} proved that the problem is NP-hard on weighted graphs. We strengthen this result by showing the following hardness of approximation: \begin{theorem} \label{thm:hardness_edge} There is no constant factor approximation for the minimum edge-stabilizer problem unless $P=NP$. \end{theorem} \begin{proof} We construct a gap-producing reduction from the independent set problem. Let $G=(V,E)$ and $k$ be an independent set instance where $V=\set{v_1,v_2,\dots,v_n}$. The independent set problem asks to determine whether $G$ has an independent set of size at least $k$. We may assume $2\leq k\leq n$. Let $\rho\geq 1$ be an integer. We construct the gadget graph $G^*=(V^*,E^*)$ as follows. For every edge $v_iv_j\in E$, replace it with $\rho k$ paths of length 3, i.e. $(v_i,u^\ell_{ij},u^\ell_{ji},v_j)$ for $\ell\in [\rho k]$. Assign weight 1 to every edge in the paths. For each $v_i\in V$, add a vertex $v'_i$ and the edge $v_iv'_i$ with weight 1. Next, create $k$ pairwise-disjoint copies of the complete graph on $2\rho k+1$ vertices. Denote each of them as $H_i$ and assign weight 4 to every edge in $H_i$. In addition, for every $H_i$, label one of the vertices as $b_i$. Finally, add the edge $b_iv_j$ for every $i\in [k]$ and $j\in [n]$ with weight 2. 
(See Figure \ref{fig:hardness_edge}) \begin{figure}[ht] \centering \def1.6{1.5} \def0.4{0.4} \begin{tikzpicture}[node distance=1.6 cm, inner sep=2.5pt, minimum size=2.5pt, auto] \node [vertex,label=below left:\small{$v_1$}] (v1) {}; \node [vertex,label=below left:\small{$v_2$}] (v2) [right of=v1, xshift=0.4*1.6 cm] {}; \node [font=\large] (d1) [right of=v2, xshift=0.4*1.6 cm] {$\dots$}; \node [vertex,label=below left:\small{$v_k$}] (v3) [right of=d1, xshift=0.4*1.6 cm] {}; \node [font=\large] (d2) [right of=v3, xshift=0.4*1.6 cm] {$\dots$}; \node [font=\large] (d3) [above of=d1, yshift=0.4*1.6 cm] {$\dots$}; \node [vertex,label=below left:\small{$v_n$}] (v4) [right of=d2, xshift=0.4*1.6 cm] {}; \node [vertex,label=below left:\small{$v'_1$}] (w1) [below of=v1] {}; \node [vertex,label=below left:\small{$v'_2$}] (w2) [below of=v2] {}; \node [vertex,label=below left:\small{$v'_k$}] (w3) [below of=v3] {}; \node [vertex,label=below left:\small{$v'_n$}] (w4) [below of=v4] {}; \node [vertex,label=below left:\small{$b_1$}] (b1) [above of=v1] {}; \node [vertex,label=below left:\small{$b_2$}] (b2) [above of=v2] {}; \node [vertex,label=below left:\small{$b_k$}] (b3) [above of=v3] {}; \draw (b1) to[out=130,in=-130] (-0.4*1.6,2*1.6) to[out=40,in=140] (0.4*1.6,2*1.6) to[out=-50,in=50] (b1); \draw (b2) to[out=130,in=-130] (1.6,2*1.6) to[out=40,in=140] (1.6+2*0.4*1.6,2*1.6) to[out=-50,in=50] (b2); \draw (b3) to[out=130,in=-130] (3*1.6+2*0.4*1.6,2*1.6) to[out=40,in=140] (3*1.6+4*0.4*1.6,2*1.6) to[out=-50,in=50] (b3); \node [weight] (H1) [above of=b1, yshift=-0.7*0.4*1.6 cm] {$H_1$}; \node [weight] (H2) [above of=b2, yshift=-0.7*0.4*1.6 cm] {$H_2$}; \node [weight] (H3) [above of=b3, yshift=-0.7*0.4*1.6 cm] {$H_k$}; \path (v1) edge (w1); \path (v2) edge (w2); \path (v3) edge (w3); \path [matched edge] (v4) -- (w4); \path[fill=gray,opacity=0.5] (v3) to [bend right=7] (v4) to [bend left=22] (v3); \path [matched edge] (b1) -- (v1); \path (b1) edge (v2); \path (b1) edge (v3); \path (b1) 
edge (v4); \path (b2) edge (v1); \path [matched edge] (b2) -- (v2); \path (b2) edge (v3); \path (b2) edge (v4); \path (b3) edge (v1); \path (b3) edge (v2); \path [matched edge] (b3) -- (v3); \path (b3) edge (v4); \path [fill=gray,opacity=0.25] (-0.4*1.6,-0.4*1.6) -- (-0.4*1.6,0.4*1.6) -- (5*1.6+6*0.4*1.6,0.4*1.6) -- (5*1.6+6*0.4*1.6,-0.4*1.6) -- (-0.4*1.6,-0.4*1.6); \node [circle,draw,thick,inner sep=1.5pt] (small) [below of=d2,yshift=0.8*1.6 cm] {}; \node [circle,draw,thick] (big) [minimum size=3.3cm,above of=v4, xshift=-0.4*1.6 cm, yshift=1.5*0.4*1.6 cm] {}; \path (small) edge [thick] (big); \node [minimum size=0pt,inner sep=0pt] (s1) [left of=big,xshift=-0.1 cm,yshift=0.3 cm] {}; \node [minimum size=0pt,inner sep=0pt] (s2) [left of=big,xshift=-0.15cm,yshift=-0.1 cm] {}; \node [minimum size=0pt,inner sep=0pt] (s3) [left of=big,xshift=0.05 cm,yshift=-0.8 cm] {}; \node [minimum size=0pt,inner sep=0pt] (s4) [right of=big,xshift=0.1cm,yshift=0.3cm] {}; \node [minimum size=0pt,inner sep=0pt] (s5) [right of=big,xshift=0.15cm,yshift=-0.1cm] {}; \node [minimum size=0pt,inner sep=0pt] (s6) [right of=big,xshift=-0.05cm,yshift=-0.8cm] {}; \node [vertex,label=above:\small{$u^1_{kn}$}] (u1) [left of=big,xshift=0.7cm,yshift=0.7cm] {}; \node [vertex] (u2) [left of=big,xshift=0.7 cm,yshift=0.1 cm] {}; \node [vertex,label=above:\small{$u^{\rho k}_{kn}$}] (u3) [left of=big,xshift=0.7cm,yshift=-1.1 cm] {}; \node [vertex,label=above:\small{$u^1_{nk}$}] (u4) [right of=big,xshift=-0.7cm,yshift=0.7 cm] {}; \node [vertex] (u5) [right of=big,xshift=-0.7cm,yshift=0.1 cm] {}; \node [vertex, label=above:\small{$u^{\rho k}_{nk}$}] (u6) [right of=big,xshift=-0.7cm,yshift=-1.1 cm] {}; \node [font=\large] (d4) [below of=big, xshift=0 cm, yshift=1.1 cm] {$\vdots$}; \path [matched edge] (u1) -- (u4); \path [matched edge] (u2) -- (u5); \path [matched edge] (u3) -- (u6); \path (s1) edge (u1); \path (s2) edge (u2); \path (s3) edge (u3); \path (u4) edge (s4); \path (u5) edge (s5); \path (u6) edge 
(s6); \draw[decoration={brace,raise=15pt},decorate] (-0.4*1.6,1.6+0.05) -- node[weight,left=18pt] {$w_e=4$} (-0.4*1.6,2.1*1.6-0.05); \draw[decoration={brace,raise=15pt},decorate] (-0.4*1.6,0.05) -- node[weight,left=18pt] {$w_e=2$} (-0.4*1.6,1.6-0.05); \draw[decoration={brace,raise=15pt},decorate] (-0.4*1.6,-1.6+0.05) -- node[weight,left=18pt] {$w_e=1$} (-0.4*1.6,-0.05); \end{tikzpicture} \caption{The gadget graph $G^*$.} \label{fig:hardness_edge} \end{figure} \begin{claim}\label{clm:gap1} If $G$ has an independent set of size at least $k$, then $G^*$ has an edge-stabilizer of size at most $k$. \end{claim} \begin{proof} Let $S$ be an independent set in $G$ where $\size{S}=k$. Without loss of generality, we may assume $S=\set{v_1,v_2,\dots,v_k}$. Let $F=\cup_{i=1}^k v_iv'_i$. We claim that $F$ is an edge-stabilizer of $G^*$. Let $M_i$ be a perfect matching in $H_i\setminus b_i$ for all $i\in [k]$. Let $\hat{M}:=\cup_{\ell=1}^{\rho k}\set{u^\ell_{ij}u^\ell_{ji}:v_iv_j\in E}$. Define the matching $M$ in $G^*\setminus F$ as \[M:=\hat{M}\cup M_1\cup\dots\cup M_k \cup \set{b_iv_i:1\leq i\leq k} \cup \set{v_iv'_i:k< i\leq n}\] Note that $w(M)=(m+4k)\rho k + n + k$. In order to show that $G^*\setminus F$ is stable, it suffices to exhibit a fractional $w$-vertex cover of the same weight. Let $y\in \mathbb R^{\size{V^*}}_+$ be a vector defined by \[y_v=\begin{cases} 2, &\text{ if }v\in \cup_{i=1}^k V(H_i)\\ 1, &\text{ if }v\in \cup_{i=k+1}^n v_i \; \text{ or } \; v=u^\ell_{ij} \text{ where }i\leq k\\ \frac{1}{2}, &\text{ if }v=u^\ell_{ij} \text{ where } i,j> k\\ 0, &\text{ otherwise}. \end{cases}\] It is easy to check that $y_u+y_v\geq w_{uv}$ for all $uv\in E^*$. Hence, $y$ is a fractional $w$-vertex cover in $G^*$. Since $S$ is an independent set, there are no edges of the form $u^\ell_{ij}u^\ell_{ji}$ where $i,j\leq k$. 
Then, \[\mathbbm 1^{\top}y=2(2\rho k+1)k+n-k+m\rho k = 4\rho k^2 + n + k + m\rho k = (m+4k)\rho k + n + k= w(M),\] which implies that $G^*\setminus F$ is stable. \end{proof} \begin{claim}\label{clm:gap2} If $G$ does not have an independent set of size at least $k$, then every edge-stabilizer of $G^*$ has size at least $(\rho+1)k$. \end{claim} \begin{proof} We prove the contrapositive. Assume $G^*$ has an edge-stabilizer $F$ such that $\size{F}<(\rho+1)k$. Let $M$ be a maximum-weight matching in $G^*\setminus F$. Let $c$ denote the number of edges removed from the complete graphs, i.e. $c:=\size{F\cap \cup_{i=1}^k E(H_i)}$. We first show that $c<2\rho k-1$. By Ore's theorem, at least $2\rho k-1$ edges must be removed from $H_i$ in order to make it non-Hamiltonian. Let $\mathcal{H}:=\set{i:H_i\setminus F \mbox{ is Hamiltonian}}$. Then, $\size{\mathcal{H}}\geq k - \frac{c}{2\rho k-1}$. For every $i\in \mathcal{H}$, $b_iv_j\in M$ for some $j\in[n]$, otherwise $H_i\setminus F$ contains an augmenting blossom because it has an odd number of vertices. Thus, $v_jv'_j\in F$, otherwise $(H_i\setminus F)\cup(b_i,v_j,v'_j)$ contains an augmenting flower. Since $M$ is a matching, the edges $v_jv'_j$ arising from distinct $i\in\mathcal{H}$ are distinct, and none of them lies in $\cup_{i=1}^k E(H_i)$; hence $c+\size{\mathcal{H}}\leq\size{F}$, and we have \begin{align*} c + \size{\mathcal{H}} &< (\rho+1)k \\ c + \pr{k-\frac{c}{2\rho k-1}} &< (\rho+1)k \\ c \pr{1 - \frac{1}{2\rho k-1}} &< \rho k \\ c \pr{\frac{2\rho k-2}{2\rho k-1}} &< \rho k \\ c &< (2\rho k-1)\pr{\frac{\rho k}{2\rho k-2}} \\ c &< 2\rho k-1 \end{align*} where the last inequality uses $\rho k\geq 2$. Since $c<2\rho k-1$, no $H_i\setminus F$ is non-Hamiltonian, so $\size{\mathcal{H}}=k$. Without loss of generality, we may assume $b_iv_i\in M$ for every $i\in [k]$. Then, $\cup_{i=1}^k v_iv'_i\subseteq F$. We claim that $S=\set{v_1,v_2,\dots,v_k}$ is an independent set in $G$. For the purpose of contradiction, suppose there exists an edge $v_iv_j\in E$ for some $i,j\in[k]$. Let $\mathcal{P}_{ij}=\cup_{\ell=1}^{\rho k}(v_i,u^\ell_{ij},u^\ell_{ji},v_j)$ denote the set of paths between $v_i$ and $v_j$.
Since $\size{F\cap E(\mathcal{P}_{ij})}<(\rho+1)k-k=\rho k$, at least one path $(v_i,u^t_{ij},u^t_{ji},v_j)\in\mathcal{P}_{ij}$ is present in $G^*\setminus F$. Observe that $u^t_{ij}u^t_{ji}\in M$, and $(H_i\setminus F) \cup (b_i,v_i,u^t_{ij},u^t_{ji},v_j,b_j)\cup (H_j\setminus F)$ contains an augmenting bi-cycle. Thus, by Theorem \ref{thm:stable}, $G^*\setminus F$ is unstable, which is a contradiction. \end{proof} Now, suppose we have an $\alpha$-approximation algorithm for the minimum edge-stabilizer problem for some constant $\alpha\geq 1$. Set $\rho = \ceil{\alpha}$ and construct the gadget graph $G^*$ as shown above. Run this algorithm on $G^*$ and let $F$ be the returned edge-stabilizer. Let OPT be the size of a minimum edge-stabilizer in $G^*$. If $G$ has an independent set of size at least $k$, then by Claim \ref{clm:gap1} we have $\text{OPT} \leq k$ and $\size{F}\leq \alpha\cdot \text{OPT} \leq \rho k$. On the other hand, if $G$ does not have an independent set of size at least $k$, then by Claim \ref{clm:gap2} we have $\text{OPT}\geq (\rho+1)k>\rho k$. Since $\size{F}\geq \text{OPT}$, this implies $\size{F}>\rho k$. Therefore, we can use this algorithm to decide the independent set problem in polynomial time. \end{proof} In this section, we prove that Algorithm \ref{alg:vert} is an $O(\Delta)$-approximation algorithm for the minimum edge-stabilizer problem. We first need to establish a lower bound on the optimal solution. The next example shows that, unlike in the unweighted case, $\gamma(G)$ is not a lower bound on the size of a minimum edge-stabilizer for weighted graphs. Let $G$ be the unstable graph depicted in Figure \ref{fig:bound_edge}. The unique maximum-weight matching is shown on the left, while the unique maximum-weight fractional matching is shown on the right. Gray edges have value $\frac{1}{2}$. Even though $\gamma(G)=2$, the edge with weight 0.5 is a minimum edge-stabilizer.
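The values $\nu(G)$ and $\nu_f(G)$ for this small example can be verified by brute force. The sketch below is an illustration we add for convenience (it is not part of the analysis); it encodes the nine weighted edges of Figure \ref{fig:bound_edge} and enumerates half-integral solutions, which suffices for $\nu_f$ because basic optimal fractional matchings are half-integral.

```python
from itertools import product
from fractions import Fraction

# Edges of the example graph (vertices v1..v8, 0-indexed here), with the
# weights from the figure; edge (v4, v5) is the one of weight 0.5.
EDGES = [(1, 2, 2), (0, 1, 2), (0, 2, 2), (0, 3, 1),
         (3, 4, Fraction(1, 2)), (4, 5, 1), (5, 6, 2), (5, 7, 2), (6, 7, 2)]

def max_weight(edges, values):
    """Maximum of sum_e w_e * x_e over x_e in `values` with x(delta(v)) <= 1.
    With values {0, 1} this is nu(G); with {0, 1/2, 1} it is nu_f(G)."""
    best = Fraction(0)
    for x in product(values, repeat=len(edges)):
        load = [Fraction(0)] * 8
        for x_e, (u, v, _) in zip(x, edges):
            load[u] += x_e
            load[v] += x_e
        if all(l <= 1 for l in load):
            best = max(best, sum(x_e * w for x_e, (_, _, w) in zip(x, edges)))
    return best

INT = [Fraction(0), Fraction(1)]
HALF = [Fraction(0), Fraction(1, 2), Fraction(1)]

nu, nu_f = max_weight(EDGES, INT), max_weight(EDGES, HALF)
deleted = [e for e in EDGES if (e[0], e[1]) != (3, 4)]  # remove the 0.5 edge
nu_d, nu_f_d = max_weight(deleted, INT), max_weight(deleted, HALF)
```

Here $\nu(G)=6<\nu_f(G)=6.5$, so $G$ is unstable, while after deleting the single edge of weight $0.5$ both values equal $6$: that one edge is already an edge-stabilizer, even though $\gamma(G)=2$.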
\begin{figure}[ht] \centering \begin{minipage}{0.49\textwidth} \centering \begin{tikzpicture}[node distance=1.6 cm, inner sep=2.5pt, minimum size=2.5pt, auto] \node [vertex] (v1) {}; \node [vertex] (v2) [above left of=v1] {}; \node [vertex] (v3) [below left of=v1] {}; \node [vertex] (v4) [right of=v1] {}; \node [vertex] (v5) [right of=v4] {}; \node [vertex] (v6) [right of=v5] {}; \node [vertex] (v7) [above right of=v6] {}; \node [vertex] (v8) [below right of=v6] {}; \path [matched edge] (v2) -- node [weight, left] {$2$} (v3); \path (v1) edge node [weight, above right] {$2$} (v2); \path (v1) edge node [weight, below right] {$2$} (v3); \path [matched edge] (v1) -- node [weight, above] {$1$} (v4); \path (v4) edge node [weight, above] {$0.5$} (v5); \path [matched edge] (v5) -- node [weight, above] {$1$} (v6); \path (v6) edge node [weight, above left] {$2$} (v7); \path (v6) edge node [weight, below left] {$2$} (v8); \path [matched edge] (v7) -- node [weight, right] {$2$} (v8); \end{tikzpicture} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \begin{tikzpicture}[node distance=1.6 cm, inner sep=2.5pt, minimum size=2.5pt, auto] \node [vertex] (v1) {}; \node [vertex] (v2) [above left of=v1] {}; \node [vertex] (v3) [below left of=v1] {}; \node [vertex] (v4) [right of=v1] {}; \node [vertex] (v5) [right of=v4] {}; \node [vertex] (v6) [right of=v5] {}; \node [vertex] (v7) [above right of=v6] {}; \node [vertex] (v8) [below right of=v6] {}; \path [selected edge] (v2) -- node [weight, left] {$2$} (v3); \path [selected edge] (v1) -- node [weight, above right] {$2$} (v2); \path [selected edge] (v1) -- node [weight, below right] {$2$} (v3); \path (v1) edge node [weight, above] {$1$} (v4); \path [matched edge] (v4) -- node [weight, above] {$0.5$} (v5); \path (v5) edge node [weight, above] {$1$} (v6); \path [selected edge] (v6) -- node [weight, above left] {$2$} (v7); \path [selected edge] (v6) -- node [weight, below left] {$2$} (v8); \path [selected edge] (v7)
-- node [weight, right] {$2$} (v8); \end{tikzpicture} \end{minipage} \caption{An example showing that $\gamma(G)$ is not a lower bound.} \label{fig:bound_edge} \end{figure} However, we can prove the following. \begin{lemma} \label{lem:gamma_edge} For every edge $e\in E$, $\gamma(G\setminus e)\geq \gamma(G)-2$. \end{lemma} \begin{proof} Let $x^*$ be a basic maximum-weight fractional matching in $G$ such that $\size{\mathscr{C}(x^*)}=\gamma(G)$. Let $y$ be a minimum fractional $w$-vertex cover in $G$. Pick an edge $ab\in E$. If $x^*_{ab}=0$, then $\nu_f(G\setminus ab)=\nu_f(G)$ and $\gamma(G\setminus ab)= \gamma(G)$. So we may assume $x^*_{ab}\in \set{\frac{1}{2},1}$. Let $G'$ be the graph obtained by replacing the edge $ab$ with edges $ab',b'a',a'b$ where $w_{ab'}=w_{b'a'}=w_{a'b}=w_{ab}$ and $a',b'\notin V$. Define the vectors $\hat{x}$ and $\hat{y}$ as \[\hat{x}_e = \begin{cases} 1-x^*_{ab}, &\text{ if }e=b'a',\\ x^*_{ab}, &\text{ if }e\in \set{ab',a'b},\\ x^*_e, &\text{ otherwise}. \end{cases} \qquad \hat{y}_v = \begin{cases} y_a, &\text{ if }v=a',\\ y_b, &\text{ if }v=b',\\ y_v, &\text{ otherwise}. \end{cases}\] Note that $\hat{x}$ is a basic fractional matching while $\hat{y}$ is a fractional $w$-vertex cover in $G'$. Furthermore, they satisfy complementary slackness conditions as $\hat{y}_u+\hat{y}_v=w_{uv}$ for all $uv\in\set{ab',b'a',a'b}$ and $\hat{x}(\delta(v))=1$ for all $v\in \set{a,a',b,b'}$. Hence, they form a primal-dual optimal pair. Since $\size{\mathscr{C}(\hat{x})}=\gamma(G)$, we have $\gamma(G')\leq \gamma(G)$. We claim that $\gamma(G')=\gamma(G)$. For the purpose of contradiction, suppose $\gamma(G')<\gamma(G)$. By Theorem \ref{thm:cycles}, $G'$ contains one of the following: \smallskip \emph{Structure (i):} a vertex $v\in V(C_i)$ for some odd cycle $C_i\in \mathscr{C}(\hat{x})$ such that $\hat{y}_v=0$. If $v\in V$, then $v\in V(\mathscr{C}(x^*))$ and $y_v=0$. 
Otherwise, $v\in\set{a',b'}$, which implies $a,b\in V(\mathscr{C}(x^*))$ and $y_a=0$ or $y_b=0$. By Theorem \ref{thm:cycles}, we arrive at the contradiction $\gamma(G)<\size{\mathscr{C}(x^*)}$. \smallskip \emph{Structure (ii):} a tight $M(\hat{x})$-alternating path $P$ connecting two odd cycles $C_i,C_j\in \mathscr{C}(\hat{x})$. If $P$ does not have any intermediate vertices from $\set{a,a',b,b'}$, then it is also a tight $M(x^*)$-alternating path in $G$ connecting two odd cycles from $\mathscr{C}(x^*)$. Otherwise, $ab',b'a',a'b\in E(P)$ and $C_i,C_j\in \mathscr{C}(x^*)$. Then, $(P\cup ab)\setminus\set{ab',b'a',a'b}$ is a tight $M(x^*)$-alternating path in $G$ connecting $C_i$ and $C_j$. By Theorem \ref{thm:cycles}, we arrive at the contradiction $\gamma(G)<\size{\mathscr{C}(x^*)}$. \smallskip \emph{Structure (iii):} a tight and valid $M(\hat{x})$-alternating path $P$ connecting an odd cycle $C_i\in \mathscr{C}(\hat{x})$ and a vertex $v\notin V(\mathscr{C}(\hat{x}))$ such that $\hat{y}_v=0$. If $P$ does not have any intermediate vertices from $\set{a,a',b,b'}$, then $v\notin\set{a,a',b,b'}$. Hence, $P$ is also a tight and valid $M(x^*)$-alternating path in $G$ connecting an odd cycle from $\mathscr{C}(x^*)$ and $v$. If $ab',b'a',a'b\in E(P)$, then $v\notin\set{a',b'}$ and $C_i\in \mathscr{C}(x^*)$. Thus, $(P\cup ab)\setminus \set{ab',b'a',a'b}$ is a tight and valid $M(x^*)$-alternating path in $G$ connecting $C_i$ and $v$. If $a'b\in E(P)$ but $ab',b'a'\notin E(P)$, then $v=a'$ and $y_a=0$. So $P+ab-a'b$ is a tight and valid $M(x^*)$-alternating path in $G$ connecting $C_i$ and $a$. If $ab'\in E(P)$ but $b'a',a'b\notin E(P)$, then $v=b'$ and $y_b=0$. So $P+ab-ab'$ is a tight and valid $M(x^*)$-alternating path in $G$ connecting $C_i$ and $b$. By Theorem \ref{thm:cycles}, we arrive at the contradiction $\gamma(G)<\size{\mathscr{C}(x^*)}$. \smallskip Thus, we have shown that $\gamma(G')=\gamma(G)$.
Since $G'\setminus\set{a',b'}=G\setminus ab$, applying Lemma \ref{lem:gamma_vert} yields \[\gamma(G\setminus ab)=\gamma\pr{G'\setminus\set{a',b'}}\geq \gamma(G')-2 = \gamma(G)-2.\] \end{proof} As a corollary to the above lemma, we obtain the following lower bound. \begin{lemma} For every edge-stabilizer $F$ of $G$, $\size{F}\geq \ceil{\frac{\gamma(G)}{2}}$. \end{lemma} Since Algorithm \ref{alg:vert} deletes $\gamma(G)$ vertices, at most $\gamma(G)\Delta$ edges are deleted, proving the following. \begin{theorem} There exists an efficient $O(\Delta)$-approximation algorithm for the minimum edge-stabilizer problem. \end{theorem} \section{Forcing an outcome} \label{sec:additional} Given a set of deals $M$, we now look at the problem of removing as few players as possible in order to make $M$ realizable as a stable outcome. This corresponds to finding a minimum vertex-stabilizer $S$ with the additional constraint that $M$ is a maximum-weight matching in $G\setminus S$. Note that, in particular, this implies $S\cap V(M)=\emptyset$. A solution to this problem is called an \emph{$M$-vertex-stabilizer}. We would like to point out that the following variants of the problem, which are along the lines of Chandrasekaran et al.~\cite{arxiv/ChandrasekaranG16}, are polynomial-time solvable: find a weight vector $w'$ such that $M$ is a stable outcome for $(G,w')$ so as to minimize $\|w-w'\|_1$ or $\|w-w'\|_\infty$ (or even the $\ell_p$ norm $\|w-w'\|_p$). This is an inverse-optimization problem that can be cast as an LP (or convex program for the $\ell_p$ norm) by exploiting complementary slackness. Ahmadian et al.~\cite{conf/ipco/AhmadianHS16} previously showed that the $M$-vertex-stabilizer problem is polynomial-time solvable on unweighted graphs when $M$ is a maximum matching in $G$. We prove that when $M$ is an arbitrary matching in $G$, the problem becomes hard: \begin{theorem}\label{thm:hardness_mvert} The $M$-vertex-stabilizer problem is NP-hard on unweighted graphs.
Furthermore, no efficient $(2-\varepsilon)$-approximation algorithm exists for any $\varepsilon>0$ assuming UGC. \end{theorem} \begin{proof} We give an approximation-preserving reduction from the vertex cover problem. Let $G=(V,E)$ be a vertex cover instance. For every edge $uv\in E$, replace it with an augmenting path of length three, i.e. $(u,u',v',v)$ where $u'v'\in M$. Denote the resulting (unweighted) graph as $G'=(V',E')$. We will show that every vertex cover in $G$ corresponds to an $M$-vertex-stabilizer in $G'$ and vice versa. This implies that the reduction is approximation-preserving and the inapproximability results for the vertex cover problem \cite{journals/am/DinurS05,journals/jcss/KhotR08} carry over to the problem of finding a minimum $M$-vertex-stabilizer. Observe that $G'$ does not contain any alternating cycle or blossom. This implies that there is no augmenting cycle, flower or bi-cycle in $G'$. Let $S$ be a vertex cover of $G$. Then, $G'\setminus S$ has no augmenting path because every augmenting path in $G'$ corresponds to an edge in $G$. Thus, $M$ is a maximum matching in $G'\setminus S$ and hence $S$ is an $M$-vertex stabilizer. For the converse, suppose $S$ is an $M$-vertex stabilizer of $G'$. Note that $S\subseteq V$ as every vertex in $V'\setminus V$ is $M$-covered. Then, $G\setminus S$ has no edges because every edge in $G$ corresponds to an augmenting path in $G'$. It follows that $S$ is a vertex cover for $G$. This concludes the proof of inapproximability. \end{proof} On unweighted graphs, every instance of this problem admits a solution. However, this is not the case for weighted graphs. Consider an $M$-augmenting bi-cycle. It is unstable by Theorem \ref{thm:stable}, but does not have an $M$-vertex-stabilizer because every vertex is $M$-covered. 
In general, if the graph contains an $M$-augmenting path whose endpoints are $M$-covered, or an $M$-augmenting cycle, or an $M$-augmenting flower whose root is $M$-covered, or an $M$-augmenting bi-cycle, then it does not have an $M$-vertex-stabilizer. We would like to point out that recognizing an infeasible instance of the $M$-vertex-stabilizer problem can be done in polynomial time. In particular, we prove that: \begin{theorem}\label{thm:mvert_stabilizer} The $M$-vertex-stabilizer problem admits an efficient 2-approximation algorithm. Furthermore, if $M$ is a maximum-weight matching, then it can be solved in polynomial time. \end{theorem} We first sketch the main ideas. Given a weighted graph $G$ and a matching $M$, the algorithm searches for the structures which prevent $G$ from being stable or $M$ from being a maximum-weight matching. Among all such structures, the ones that can be tampered with are augmenting paths with at least one $M$-exposed endpoint and augmenting flowers with an $M$-exposed root. The algorithm then proceeds to delete these $M$-exposed vertices. If there exist augmenting paths whose endpoints are both $M$-exposed, the problem becomes hard because we do not know which endpoint is optimal to remove. In this case, the algorithm removes both endpoints, thus yielding a 2-approximation. Note that this last case cannot happen if $M$ is maximum-weight, and this explains why the problem becomes polynomial-time solvable. Kleinberg and Tardos \cite{conf/stoc/KleinbergT08} were the first to give a method of locating these structures. It involves solving a certain linear program using the dynamic programming algorithm of Aspvall and Shiloach \cite{journals/siamcomp/AspvallS80}. We use a slightly different algorithm for finding these structures, which in fact, will allow us to prove a strengthened version of Theorem \ref{thm:mvert_stabilizer} (Theorem \ref{thm:mvert_stabilizer_2}). 
Our algorithm relies on searching for augmenting walks of a certain length, via a slight modification of the dynamic programming algorithm given by Aspvall and Shiloach \cite{journals/siamcomp/AspvallS80}. Using this as a subroutine, we design an algorithm for the $M$-vertex-stabilizer problem. \subsection{Finding augmenting walks} The first ingredient is an algorithm to find augmenting walks of a certain length. We say that a valid $M$-alternating $uv$-walk $P$ of length at most $k$ is \emph{optimal} if for any other valid $M$-alternating $uv$-walk $P'$ of length at most $k$, we have $w(P\setminus M)-w(P\cap M)\geq w(P'\setminus M) - w(P'\cap M)$. Given a source vertex $s$ and an integer $k\in \mathbb Z_+$, the algorithm searches for optimal valid alternating $sv$-walks of length at most $k$ for all $v\in V$. The significance of optimality is as follows. Let $P$ be an optimal valid alternating $sv$-walk returned by the algorithm. If $P$ is augmenting, then we have found an augmenting $sv$-walk of length at most $k$. Otherwise, we can conclude that there are no augmenting $sv$-walks of length at most $k$ by the optimality of $P$. The inner workings of our algorithm are similar to the Grapevine algorithm given by Aspvall and Shiloach \cite{journals/siamcomp/AspvallS80}. In their paper, the Grapevine algorithm is used as a subroutine to solve linear systems of the form $Ax\leq b$, where each constraint contains at most two variables. They first construct an auxiliary graph to model the relationship between variables and constraints. The Grapevine algorithm is then run on this auxiliary graph to compute lower and upper bounds on each variable. We now give an overview of our algorithm. For every vertex $v\in V$, we define two variables $y_1(v)$ and $y_2(v)$. In iteration $i$, if $v$ is $M$-exposed, we would like $y_1(v)$ to represent the quantity $w(P\setminus M)-w(P\cap M)$, where $P$ is an optimal valid alternating $sv$-walk of length at most $i$.
On the other hand, if $v$ is $M$-covered, we would like $y_2(v)$ to represent the quantity $w(P\setminus M)-w(P\cap M)$, where $P$ is an optimal valid alternating $sv$-walk of length at most $i$. We first initialize $y_1(s)=0$ and $y_2(s)=-\infty$ if $s$ is $M$-covered, and $y_1(s)=y_2(s)=0$ if $s$ is $M$-exposed. For every other vertex $v\neq s$, we set $y_1(v)=y_2(v)=-\infty$. At every iteration, each vertex $v$ determines whether it could increase its $y_1(v)$ value by replacing it with $y_2(u)+w_{uv}$ for some $uv\in E\setminus M$, and similarly, whether it could increase its $y_2(v)$ value by replacing it with $y_1(u)-w_{uv}$ for some $uv\in M$. In a way, this is analogous to the Bellman-Ford algorithm for computing shortest paths. The main difference is that we are maintaining two variables for each vertex, instead of one. This ensures the walk we obtain is $M$-alternating and valid. Also, notice that we are adding the weights of unmatched edges and subtracting the weights of matched edges. This will give us our desired quantity $w(P\setminus M)-w(P\cap M)$. \begin{algorithm}[H] \SetKwInOut{Input}{Input} Initialize vectors $y_1,y_2,z_1,z_2\in \mathbb R^n$\; \uIf{$s$ is $M$-covered}{ $y_1(s) \leftarrow 0$\; $y_2(s) \leftarrow -\infty$\; } \Else{ $y_1(s) \leftarrow 0$\; $y_2(s) \leftarrow 0$\; } \ForEach{vertex $v\neq s$}{ $y_1(v)\leftarrow -\infty$\; $y_2(v)\leftarrow -\infty$\; } \For{$i=1$ to $k$}{ \ForEach{vertex $v$}{ $\displaystyle z_1(v)\leftarrow\max_{u:uv\in E\setminus M}\set{y_2(u)+w_{uv}}$\; $\displaystyle z_2(v)\leftarrow\max_{u:uv\in M}\set{y_1(u)-w_{uv}}$\; } \ForEach{vertex $v$}{ $y_1(v)\leftarrow\max(y_1(v),z_1(v))$\; $y_2(v)\leftarrow\max(y_2(v),z_2(v))$\; } } \Return{$y_1,y_2$} \caption{Optimal valid $M$-alternating $sv$-walks of length at most $k$} \label{alg:optwalk} \end{algorithm} In the algorithm, we take the maximum of the empty set to be $-\infty$. 
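For concreteness, the dynamic program above can be transcribed into a short program. The sketch below is our own illustration (function and variable names are hypothetical); it maintains the two labels $y_1,y_2$ per vertex and computes $z_1,z_2$ from the labels of the previous iteration, exactly as in Algorithm \ref{alg:optwalk}.

```python
import math

def optimal_walk_values(n, edges, matching, s, k):
    """Sketch of Algorithm [alg:optwalk]: for each vertex v, compute the best
    value w(P \\ M) - w(P cap M) over valid M-alternating s-v walks of length
    at most k, where y1[v] tracks walks whose last edge is unmatched (or the
    empty walk) and y2[v] tracks walks whose last edge is matched.
    `edges` is a list of (u, v, w); `matching` is a set of frozensets {u, v}."""
    NEG = -math.inf
    unmatched = [[] for _ in range(n)]  # neighbours via edges not in M
    matched = [[] for _ in range(n)]    # neighbours via edges in M
    for u, v, w in edges:
        if frozenset((u, v)) in matching:
            matched[u].append((v, w))
            matched[v].append((u, w))
        else:
            unmatched[u].append((v, w))
            unmatched[v].append((u, w))
    covered = [bool(matched[v]) for v in range(n)]
    y1, y2 = [NEG] * n, [NEG] * n
    y1[s] = 0.0
    y2[s] = NEG if covered[s] else 0.0
    for _ in range(k):
        # z1, z2 are computed from the labels of the previous iteration only,
        # mirroring the two separate "foreach" loops of the pseudocode.
        z1 = [max((y2[u] + w for u, w in unmatched[v]), default=NEG)
              for v in range(n)]
        z2 = [max((y1[u] - w for u, w in matched[v]), default=NEG)
              for v in range(n)]
        y1 = [max(a, b) for a, b in zip(y1, z1)]
        y2 = [max(a, b) for a, b in zip(y2, z2)]
    return y1, y2
```

For instance, on a path $(s,a,b)$ with unmatched edge $sa$ of weight $3$ and matched edge $ab$ of weight $1$, the walk $(s,a,b)$ gives $y_2(b)=3-1=2$ after two iterations; the value being positive signals an augmenting $sb$-walk.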
For the analysis, we will use $y^i_1(v)$ and $y^i_2(v)$ to denote the value of $y_1(v)$ and $y_2(v)$ respectively at the end of iteration $i$, for all $i\leq k$. Note that $y^0_1(v)$ and $y^0_2(v)$ refer to the initial values of $y_1(v)$ and $y_2(v)$ before the main ``for'' loop. The following lemmas verify our intuition. \begin{lemma}\label{lem:covered} Let $v$ be an $M$-covered vertex. If there is no valid $M$-alternating $sv$-walk of length at most $k$, then $y_2(v)=-\infty$. Otherwise, there exists an optimal valid $M$-alternating $sv$-walk $P$ of length at most $k$, and $y_2(v)=w(P\setminus M)-w(P\cap M)$. \end{lemma} \begin{proof} We start by proving the contrapositive of the first statement. Let $v$ be an $M$-covered vertex where $y_2(v)$ is finite. We proceed by induction on $k$. We look at two base cases. When $k=1$, $y_2(v)=y^0_1(s)-w_{sv}$ where $sv\in M$. So $(s,v)$ is our desired walk. When $k=2$, if $y_2(v)$ was updated in iteration 1, then this reduces to the previous case. Otherwise, $y_2(v)=y^0_2(s)+w_{su}-w_{uv}$ for some $uv\in M$. Since $y^0_2(s)$ is finite, $s$ is $M$-exposed and $(s,u,v)$ is our desired walk. For the inductive hypothesis, assume the statement holds for some $k\geq 2$. Consider the case $k+1$. Let $j$ be the last iteration in which $y_2(v)$ was updated, i.e. $y_2(v)=y_1^{j-1}(u)-w_{uv}$ for some $uv\in M$. We may assume $j>2$, otherwise we are back at the base cases. Since the update of $y_2(v)$ was triggered by the update of $y_1(u)$, we know that $y_1(u)$ was updated at iteration $j-1$, i.e. $y_1^{j-1}(u)=y^{j-2}_2(t)+w_{tu}$ for some $tu\in E\setminus M$. Similarly, since the update of $y_1(u)$ was triggered by the update of $y_2(t)$, we know that $y_2(t)$ was updated at iteration $j-2>0$. This implies that $t$ is $M$-covered. As $y^{j-2}_2(t)$ is finite and $j-2<k$, by the inductive hypothesis there exists a valid $M$-alternating $st$-walk of length at most $j-2$.
Appending $(t,u,v)$ to this walk yields a valid $M$-alternating $sv$-walk of length at most $j\leq k+1$. Next, we prove the second statement. Let $v$ be an $M$-covered vertex where a valid $M$-alternating $sv$-walk of length at most $k$ exists. Since the number of such walks is finite, there exists an optimal one. Among all such optimal walks, let $P$ be the shortest one in terms of number of edges. We proceed by induction on $k$. We look at two base cases. When $k=1$, $P=(s,v)$ and $y_2(v)=-w_{sv}=w(P\setminus M)-w(P\cap M)$. When $k=2$, if $\size{E(P)}=1$ then this reduces to the previous case. Otherwise, $P=(s,u,v)$ for some $uv\in M$ and $y_2(v)=w_{su}-w_{uv}=w(P\setminus M)-w(P\cap M)$. For the inductive hypothesis, assume the statement holds for some $k\geq 2$. Consider the case $k+1$. We may assume $P$ has length exactly $k+1$, otherwise by the inductive hypothesis we are done. Denote $P=(v_0,v_1,\dots,v_{k+1})$ where $v_0=s$ and $v_{k+1}=v$. Then, $P'=(v_0,v_1,\dots,v_{k-1})$ is an optimal valid $M$-alternating $sv_{k-1}$-walk of length at most $k-1$. Since $v_{k-1}$ is $M$-covered, by the inductive hypothesis we have $y^{k-1}_2(v_{k-1})=w(P'\setminus M)-w(P'\cap M)$. We also know that $y^{k-1}_2(v_{k-1})+w_{v_{k-1}v_k}\geq y^{k-1}_2(u)+w_{uv_k}$ for all $uv_k\in E\setminus M$ because $P$ is optimal. We claim that $y^{k-1}_2(v_{k-1})+w_{v_{k-1}v_k}>y^{k-1}_1(v_k)$. For the purpose of contradiction, suppose otherwise. Then, \begin{align*} y_2^k(v) &\geq y_1^{k-1}(v_k)-w_{v_kv} \\ &\geq y^{k-1}_2(v_{k-1})+w_{v_{k-1}v_k}-w_{v_kv} \\ &= w(P'\setminus M) - w(P'\cap M) +w_{v_{k-1}v_k}-w_{v_kv} \\ &= w(P\setminus M) - w(P\cap M). \end{align*} Since $y_2^k(v)$ is finite, by the first part of the lemma there exists a valid $M$-alternating $sv$-walk of length at most $k$. So by the inductive hypothesis, $y_2^k(v)=w(Q\setminus M)-w(Q\cap M)$ where $Q$ is an optimal valid $M$-alternating $sv$-walk of length at most $k$. 
Note that $w(Q\setminus M)-w(Q\cap M)=w(P\setminus M)-w(P\cap M)$ because $P$ is optimal. However, $Q$ is shorter than $P$, which is a contradiction. Thus, we obtain $y_1^k(v_k)=y_2^{k-1}(v_{k-1})+w_{v_{k-1}v_k}$ and \[y_2(v) = y_1^k(v_k)-w_{v_kv} = y_2^{k-1}(v_{k-1})+w_{v_{k-1}v_k}-w_{v_kv} = w(P\setminus M) - w(P\cap M).\] \end{proof} \begin{lemma}\label{lem:exposed} Let $v$ be an $M$-exposed vertex. If there is no valid $M$-alternating $sv$-walk of length at most $k$, then $y_1(v)=-\infty$. Otherwise, there exists an optimal valid $M$-alternating $sv$-walk $P$ of length at most $k$, and $y_1(v)=w(P\setminus M)-w(P\cap M)$. \end{lemma} \begin{proof} We start by proving the contrapositive of the first statement. Let $v$ be an $M$-exposed vertex where $y_1(v)$ is finite. We proceed by induction on $k$. We look at two base cases. When $k=0$, we have $v=s$ and the empty path $(s)$ is our desired walk. When $k=1$, if $y_1(v)$ was never updated, then $v=s$ and so this reduces to the previous case. Otherwise, $y_1(v)=y^0_2(s)+w_{sv}$. Since $y^0_2(s)$ is finite, $s$ is $M$-exposed and $(s,v)$ is our desired walk. For the inductive hypothesis, assume the statement holds for some $k\geq 1$. Consider the case $k+1$. Let $j$ be the last iteration in which $y_1(v)$ was updated, i.e. $y_1(v)=y_2^{j-1}(u)+w_{uv}$ for some $uv\in E\setminus M$. We may assume $j>1$, otherwise we are back at the base cases. Since the update of $y_1(v)$ was triggered by the update of $y_2(u)$, we know that $y_2(u)$ was updated at iteration $j-1>0$. This implies that $u$ is $M$-covered. As $y^{j-1}_2(u)$ is finite and $j-1\leq k$, by Lemma \ref{lem:covered} there exists a valid $M$-alternating $su$-walk of length at most $j-1$. Appending $(u,v)$ to this walk yields a valid $M$-alternating $sv$-walk of length at most $j\leq k+1$. Next, we prove the second statement. Let $v$ be an $M$-exposed vertex where a valid $M$-alternating $sv$-walk of length at most $k$ exists. 
Since the number of such walks is finite, there exists an optimal one. Among all such optimal walks, let $P$ be the shortest one in terms of number of edges. We proceed by induction on $k$. We look at two base cases. When $k=0$, $P=(s)$ and $y_1(v)=0=w(P\setminus M)-w(P\cap M)$. When $k=1$, if $\size{E(P)}=0$ then this reduces to the previous case. Otherwise, $P=(s,v)$ and $y_1(v)=w_{sv}=w(P\setminus M)-w(P\cap M)$. For the inductive hypothesis, assume the statement holds for some $k\geq 1$. Consider the case $k+1$. We may assume $P$ has length exactly $k+1$, otherwise by the inductive hypothesis we are done. Denote $P=(v_0,v_1,\dots,v_{k+1})$ where $v_0=s$ and $v_{k+1}=v$. Then, $P'=(v_0,v_1,\dots,v_k)$ is an optimal valid $M$-alternating $sv_k$-walk of length at most $k$. Since $v_k$ is $M$-covered, by Lemma \ref{lem:covered} we have $y^k_2(v_k)=w(P'\setminus M)-w(P'\cap M)$. We also know $y^k_2(v_k)+w_{v_kv}\geq y^k_2(u)+w_{uv}$ for all $uv\in E\setminus M$ because $P$ is optimal. We claim that $y^k_2(v_k)+w_{v_kv}>y^k_1(v)$. For the purpose of contradiction, suppose otherwise. Then \[y^k_1(v)\geq y^k_2(v_k)+w_{v_kv}=w(P'\setminus M)-w(P'\cap M) + w_{v_kv} = w(P\setminus M)-w(P\cap M).\] Since $y_1^k(v)$ is finite, by the first part of the lemma there exists a valid $M$-alternating $sv$-walk of length at most $k$. So by the inductive hypothesis, $y_1^k(v)=w(Q\setminus M)-w(Q\cap M)$ where $Q$ is an optimal valid $M$-alternating $sv$-walk of length at most $k$. Note that $w(Q\setminus M)-w(Q\cap M)=w(P\setminus M)-w(P\cap M)$ because $P$ is optimal. However, $Q$ is shorter than $P$, which is a contradiction. Thus, we obtain $y_1(v)=y_2^k(v_k)+w_{v_kv}=w(P\setminus M)-w(P\cap M)$.
\end{proof} \subsection{The algorithm} The reason we look for augmenting walks is the following: \begin{lemma}\label{lem:augwalk} An augmenting $uv$-walk contains an augmenting $uv$-path, an augmenting cycle, an augmenting flower rooted at $u$ or $v$, or an augmenting bi-cycle. \end{lemma} \begin{proof} We first prove the following claim: \begin{claim}\label{clm:decompose} If $P$ is an alternating walk, then it can be decomposed into $P=P_1P_2\dots P_\ell$ such that: \begin{enumerate}[noitemsep,topsep=0pt] \item[(i)] Every $P_i$ is an alternating path, an alternating cycle or a blossom. \item[(ii)] There is no $i$ such that $P_i$ and $P_{i+1}$ are both alternating paths or both blossoms. \end{enumerate} \end{claim} \begin{proof} Let $P=(v_1,v_2,\dots,v_t)$ be an $M$-alternating walk. We proceed by induction on $t$. For the base case $t=2$, $P$ is an alternating path of length 1 as there are no loops in $G$. Suppose the claim is true for $t\leq k$ for some $k\geq 2$. Consider the case $t=k+1$. We may assume $P$ is not simple. Let $j$ be the smallest index such that $v_j=v_i$ for some $i<j$. Decompose $P$ into $P_1=(v_1,v_2,\dots,v_i)$, $P_2=(v_i,v_{i+1},\dots,v_j)$ and $P_3=(v_j,v_{j+1},\dots,v_t)$. $P_1$ is a (possibly empty) alternating path while $P_2$ is an alternating cycle or a blossom. Since $P_3$ is an $M$-alternating walk with fewer edges, by the inductive hypothesis it can be decomposed into $P_3=P'_1P'_2\dots P'_\ell$ where every $P'_i$ is an alternating path, an alternating cycle or a blossom. Moreover, there are no consecutive paths or blossoms in this decomposition. Note that $P'_1$ is not a blossom because $P_3$ starts with an edge in $M$. Thus, $P=P_1P_2P'_1P'_2\dots P'_\ell$ is our desired decomposition. \end{proof} Let $P$ be an $M$-augmenting $uv$-walk. Using Claim \ref{clm:decompose}, decompose $P$ into $P=P_1P_2\dots P_k$. If $P_i$ is an augmenting cycle for some $i\in [k]$, then we are done.
So we may assume that no alternating cycle in the decomposition is augmenting. Note that $P_k$ is not an alternating cycle, otherwise $P$ is not valid because it ends with an unmatched edge whose endpoints are $M$-covered. Let $P'$ be the alternating $uv$-walk obtained by dropping all the alternating cycles in the decomposition. It is easy to see that $P'$ is still augmenting. Repeat this process until we are left with an augmenting $uv$-walk $P^*=P^*_1P^*_2\dots P^*_\ell$ such that every $P^*_i$ is an alternating path or a blossom. If $P^*_1$ is a blossom, then $u$ is $M$-exposed because the first edge of $P^*$ is not in $M$. Similarly, if $P^*_\ell$ is a blossom, then $v$ is $M$-exposed because the last edge of $P^*$ is not in $M$. In both cases, since $P^*$ does not have any $M$-exposed intermediate vertices, we get $u=v$ and $\ell=1$. This implies that $P^*$ is an augmenting blossom with base $u$, which is trivially an augmenting flower with root $u$. Thus, we may assume $P^*_1$ and $P^*_\ell$ are alternating paths. If $\ell=1$, then $P^*$ is an augmenting $uv$-path. Otherwise, from Claim \ref{clm:decompose} we know that $P^*_i$ is an alternating path for all odd $i$ while $P^*_i$ is a blossom for all even $i$. Observe that $P^*_1\cup P^*_2$ and $P^*_{\ell-1}\cup P^*_\ell$ form flowers rooted at $u$ and $v$ respectively, where the former is simple while the latter might not be simple. Moreover, $P^*_{2i}\cup P^*_{2i+1} \cup P^*_{2i+2}$ form bi-cycles for all $i\in \br{\frac{\ell-3}{2}}$. Since $2w(P^*\setminus M)>2w(P^*\cap M)$ and \[2w(P^*)=2w(P^*_1)+w(P^*_2) + \sum_{i=1}^{(\ell-3)/2}\pr{w(P^*_{2i})+2w(P^*_{2i+1})+w(P^*_{2i+2})} + w(P^*_{\ell-1}) + 2w(P^*_\ell),\] at least one of them is augmenting.
\end{proof} We are now ready to present the algorithm for the $M$-vertex-stabilizer problem: \begin{algorithm}[H] Initialize $S\leftarrow\emptyset$ \; \ForEach{$M$-exposed vertex $u$}{ Search for $M$-augmenting $uv$-walks of length at most $3n$ using Algorithm \ref{alg:optwalk}\; \uIf{$\exists$ an $M$-augmenting $uu$-walk or $uv$-walk for some $M$-covered vertex $v$}{ $S\leftarrow S\cup\set{u}$\; $G\leftarrow G\setminus u$\; } } \ForEach{$M$-exposed vertex $u$}{ Search for $M$-augmenting $uv$-walks of length at most $n$ using Algorithm \ref{alg:optwalk}\; \uIf{$\exists$ an $M$-augmenting $uv$-walk for some $M$-exposed vertex $v$}{ $S\leftarrow S\cup\set{u,v}$\; $G\leftarrow G\setminus\set{u,v}$\; } } \uIf{$w(M)<\nu_f(G)$}{ \Return ``INFEASIBLE'' \; } \uElse{ \Return $S$\; } \caption{$M$-vertex-stabilizer} \label{alg:mvert} \end{algorithm} Let $S_1$ denote the set of $M$-exposed vertices in $G$ which are roots of augmenting flowers or endpoints of augmenting paths whose other endpoint is $M$-covered. Note that given a feasible instance, every $M$-vertex-stabilizer contains $S_1$. We prove a stronger statement than Theorem \ref{thm:mvert_stabilizer}: \begin{theorem}\label{thm:mvert_stabilizer_2} The $M$-vertex-stabilizer problem admits an efficient 2-approximation algorithm. Furthermore, if $M$ is a maximum-weight matching in $G\setminus S_1$, then it is polynomial-time solvable. \end{theorem} \begin{proof} Let $G$ be the input graph and $M$ be a matching in $G$. Let $R$ be the set of $M$-exposed vertices in $G$, and let $R'$ be any subset of $R$. If $G$ contains an augmenting path whose endpoints are $M$-covered or an augmenting cycle, then it is also present in $G\setminus R'$. Hence $M$ is not a maximum-weight matching in $G\setminus R'$, so $R'$ is not an $M$-vertex-stabilizer. Similarly, if $G$ contains an augmenting flower whose root is $M$-covered or an augmenting bi-cycle, then it is also present in $G\setminus R'$.
By Theorem \ref{thm:stable}, $G\setminus R'$ is not stable. In these two cases, there is no $M$-vertex-stabilizer. Since $S\subseteq R$ and $w(M)<\nu_f(G\setminus S)$, the algorithm will return ``INFEASIBLE''. Thus, we may assume $G$ does not contain any of the aforementioned structures. The only structure which can make $G$ unstable is an augmenting flower whose root is $M$-exposed. In addition, the only structure which can prevent $M$ from being a maximum-weight matching in $G$ is an augmenting path with at least one $M$-exposed endpoint. \begin{claim} Let $u$ be an $M$-exposed vertex. Then, $u$ is the root of an augmenting flower if and only if there exists an augmenting $uu$-walk of length at most $3n$. \end{claim} \begin{proof} Let $C\cup P$ be an augmenting flower rooted at $u$, where $C=(v_1,v_2,\dots,v_j,v_1)$ is the blossom and $P=(u_1,u_2,\dots,u_k)$ is the valid alternating path. Assume $u_1=v_1$ and $u_k=u$. Let $P^{-1}=(u_k,u_{k-1},\dots,u_1)$ denote the reverse of path $P$. Then, $Q=P^{-1}CP$ is a valid alternating $uu$-walk, and its length is at most $3n$. Moreover, since \[w(Q\setminus M) = w(C\setminus M) + 2w(P\setminus M) > w(C\cap M) + 2w(P\cap M) = w(Q\cap M),\] it is augmenting. For the converse, let $P$ be an augmenting $uu$-walk of length at most $3n$. By Lemma \ref{lem:augwalk}, $P$ contains an augmenting flower rooted at $u$. \end{proof} \begin{claim} Let $u$ be an $M$-exposed vertex and $v$ be an $M$-covered vertex. If there is no augmenting flower rooted at $u$, then there exists an augmenting $uv$-path if and only if there exists an augmenting $uv$-walk of length at most $3n$. \end{claim} \begin{proof} A $uv$-path is trivially a $uv$-walk. For the converse, let $P$ be an augmenting $uv$-walk of length at most $3n$. By Lemma \ref{lem:augwalk}, $P$ contains an augmenting $uv$-path. \end{proof} By the two claims above, the set of vertices collected in the first ``for'' loop of the algorithm is exactly $S_1$.
Let $S^*$ be a minimum $M$-vertex-stabilizer. Then, $S_1\subseteq S^*$. Now, the only structure which can prevent $M$ from being a maximum-weight matching in $G\setminus S_1$ is an augmenting path whose endpoints are both $M$-exposed. \begin{claim} Let $u$ and $v$ be $M$-exposed vertices. There exists an augmenting $uv$-path in $G\setminus S_1$ if and only if there exists an augmenting $uv$-walk of length at most $n$ in $G\setminus S_1$. \end{claim} \begin{proof} A $uv$-path is trivially a $uv$-walk. For the converse, let $P$ be an augmenting $uv$-walk of length at most $n$. By Lemma \ref{lem:augwalk}, $P$ contains an augmenting $uv$-path. \end{proof} Let $S_2$ be the set of vertices collected in the second ``for'' loop of the algorithm. At every iteration, a pair of vertices is added to $S_2$ because they are the endpoints of an augmenting path. Note that at least one of them is in $S^*$, otherwise this augmenting path is present in $G\setminus S^*$. Thus, we have $\size{S^*}\geq \size{S_1}+\frac{1}{2}\size{S_2}\geq \frac{1}{2}\size{S}$. The matching $M$ is maximum-weight in $G\setminus(S_1\cup S_2)$ because there are no augmenting paths or cycles. Moreover, $G\setminus(S_1\cup S_2)$ is stable because it does not contain any augmenting flowers or bi-cycles. Thus, $S=S_1\cup S_2$ is an $M$-vertex-stabilizer. Finally, if $M$ is a maximum-weight matching in $G\setminus S_1$, then $S_2=\emptyset$. We get $\size{S}=\size{S_1}\leq \size{S^*}$ implying that $S$ is optimal. \end{proof} \bibliographystyle{abbrv}
https://arxiv.org/abs/1709.01982
Stabilizing Weighted Graphs
An edge-weighted graph $G=(V,E)$ is called stable if the value of a maximum-weight matching equals the value of a maximum-weight fractional matching. Stable graphs play an important role in some interesting game theory problems, such as network bargaining games and cooperative matching games, because they characterize instances which admit stable outcomes. Motivated by this, in the last few years many researchers have investigated the algorithmic problem of turning a given graph into a stable one, via edge- and vertex-removal operations. However, all the algorithmic results developed in the literature so far only hold for unweighted instances, i.e., assuming unit weights on the edges of $G$. We give the first polynomial-time algorithm to find a minimum cardinality subset of vertices whose removal from $G$ yields a stable graph, for any weighted graph $G$. The algorithm is combinatorial and exploits new structural properties of basic fractional matchings, which are of independent interest. In particular, one of the main ingredients of our result is the development of a polynomial-time algorithm to compute a basic maximum-weight fractional matching with minimum number of odd cycles in its support. This generalizes a fundamental and classical result on unweighted matchings given by Balas more than 30 years ago, which we expect to prove useful beyond this particular application. In contrast, we show that the problem of finding a minimum cardinality subset of edges whose removal from a weighted graph $G$ yields a stable graph does not admit any constant-factor approximation algorithm, unless $P=NP$. In this setting, we develop an $O(\Delta)$-approximation algorithm for the problem, where $\Delta$ is the maximum degree of a node in $G$.
https://arxiv.org/abs/1810.11882
Knotting Probability of Equilateral Hexagons
For a positive integer $n\ge 3$, the collection of $n$-sided polygons embedded in $3$-space defines the space of geometric knots. We will consider the subspace of equilateral knots, consisting of embedded $n$-sided polygons with unit length edges. Paths in this space determine isotopies of polygons, so path-components correspond to equilateral knot types. When $n\le 5$, the space of equilateral knots is connected. Therefore, we examine the space of equilateral hexagons. Using techniques from symplectic geometry, we can parametrize the space of equilateral hexagons with a set of measure preserving action-angle coordinates. With this coordinate system, we provide new bounds on the knotting probability of equilateral hexagons.
\section{Introduction} Classically a knot can be defined as a closed, non self-intersecting smooth curve embedded in Euclidean $3$-space. Two knots are considered to be equivalent if one can be smoothly deformed into another. The question of whether or not two given knots are equivalent proves to be a difficult problem. Much of the theory is devoted to developing techniques to answer this question. The study of the invariance of knots has been of interest to not only mathematicians but also biologists, physicists, and computer scientists. Prominent examples of knotting appear in polymers, specifically DNA and proteins. In the early 1970's, it was discovered that enzymes called topoisomerases cause the DNA to change its form. Type II topoisomerases bind to two segments of double-stranded DNA, split one of the segments, transport the other through the break, and reseal the break. These studies suggest that the topological configuration, or the knotting, plays a role in understanding the behavior of these enzymes. Sometimes the arbitrary flexibility and lack of thickness in the classical theory of knots do not accurately depict the physical constraints of objects in nature. This inspires questions in the field of physical knot theory and models that seek to capture some of the physical properties. A question of focus in this paper is about the statistical distribution of knot types as a function of the length. For example, what is the probability that an $n$-edged polygon is knotted? The study of knots from a probabilistic viewpoint provides an understanding of typical knots. There are many ways to model random knots. The model we will consider is that of closed polygonal curves in $\mathbb{R}^3$. A knot is realized by joining $n$ line segments. In addition, we restrict the length of the segments to be equal. We identify each $n$-sided polygonal curve with the $3n$-tuple of vertex coordinates which define it.
This gives a correspondence between points in $\mathbb{R}^{3n}$ and $n$-sided polygons in $\mathbb{R}^3$. We consider the $2(n-3)$-dimensional subspace of equilateral polygons equivalent up to translations and rotations. Using techniques from symplectic geometry, we study the space of equilateral hexagons. Suppose $P$ is an equilateral hexagon. We prove that the probability that $P$ is knotted is at most $\frac{14-3\pi}{192}<\frac{1}{42}$. \section{Background} There are various ways to define a knot, all of which capture the intuitive notion of a knotted loop. We will start by defining a polygonal knot. For any two distinct points in $3$-space, $p$ and $q$, let $[p,q]$ denote the line segment joining them. For an ordered set of distinct points, $(p_1, p_2, \dots, p_n)$, the union of the segments $[p_1, p_2], [p_2,p_3],\dots, [p_{n-1},p_n],$ and $[p_n,p_1]$ is called a closed polygonal curve. If each segment intersects exactly two other segments, intersecting each only at an endpoint, then the curve is said to be simple. \begin{definition} A polygonal knot, $K$, is a simple, closed polygonal curve in $\mathbb{R}^3$. \end{definition} If a polygonal knot, $P$, has $n$ vertices, we will call $P$ an $n$-sided polygonal knot. We label the vertices of $P$ as $v_{1}, v_{2}, \ldots, v_{n}$. We call the segments $[v_i,v_{i+1}]$ the edges of $P$ and label the edges of $P$ as $e_{1}, e_{2}, \ldots, e_{n}$, where $e_{1}=[v_1,v_2], e_2=[v_2, v_3], \dots, e_{n-1}=[v_{n-1},v_n]$, and $e_n=[v_n,v_1]$. In addition, we will select a distinguished vertex, $v_{1}$, called a root and a choice of orientation. \begin{figure}[h] \centerline{\includegraphics[width=.35\textwidth]{hex.pdf}\hspace{1.5cm}\includegraphics[width=.3\textwidth]{hept.pdf}} \caption{The figure on the left shows a 6-sided, rooted, oriented polygonal trefoil knot.
The figure on the right shows a 7-sided, rooted, oriented polygonal figure-8 knot.} \label{fig:label} \end{figure} With a distinguished vertex and orientation, we can view $P$ as a point in $\mathbb{R}^{3n}$ by listing the coordinates of the vertices starting with $v_1$ then following the orientation. Not all points in $\mathbb{R}^{3n}$ will correspond to simple polygonal curves. Therefore we define the discriminant set, in the spirit of Vassiliev \cite{Birman1993}. \begin{definition} The discriminant, $\Sigma^{(n)}$, is all points in $\mathbb{R}^{3n}$ that correspond to non-embedded polygonal knots. \end{definition} A polygonal knot in ${\mathbb R}^{3}$ fails to be embedded when two or more of the edges intersect. For an $n$-sided polygonal knot there are $\frac{1}{2}n(n-3)$ pairs of non-adjacent edges. So $\Sigma^{(n)}$ is the union of the closures of the $\frac{1}{2}n(n-3)$ real semi-algebraic cubic varieties, each piece consisting of polygons with a single intersection between non-adjacent edges \cite{Randell1,Randell2}. By excluding these singular points, we are left with an open set in $\mathbb{R}^{3n}$ corresponding to embedded polygons in $\mathbb{R}^3$. \begin{definition} The embedding space for rooted, oriented $n$-sided polygonal knots, denoted $Geo(n)$, is defined to be $ \mathbb{R}^{3n}-\Sigma^{(n)}$. \end{definition} The space $Geo(n)$ is an open $3n$-dimensional manifold. A continuous path $h:[0,1]\to Geo(n)$ is an isotopy of polygonal knots. \begin{definition} Two $n$-sided polygonal knots are geometrically equivalent if they lie in the same path-component of $Geo(n)$. \end{definition} Path components are in bijective correspondence with the geometric knot types realizable with $n$ edges. Any polygon that is in the same path-component as the regular planar $n$-gon is then geometrically equivalent to the unknot. Next we will consider polygonal knots with unit length edges.
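The count of non-adjacent edge pairs can be checked by direct enumeration; a minimal sketch (the function name is ours):

```python
def nonadjacent_pairs(n):
    # Count unordered pairs of non-adjacent edges of an n-gon; edges i and j
    # (indices mod n) are adjacent exactly when they share an endpoint,
    # i.e. when j - i is congruent to 1 or n-1 mod n.
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if (j - i) % n not in (1, n - 1))
```

The enumeration agrees with the closed form $\frac{1}{2}n(n-3)$.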
Consider the function $f:Geo(n)\to \mathbb{R}^{n}$ where $(v_{1},v_{2},\ldots, v_{n})\mapsto (||v_{1}-v_{2}||,||v_{2}-v_{3}||,\ldots,||v_{n}-v_{1}||)$. \begin{definition} Let $f^{-1}((1,1,\ldots,1))=Equ(n)$ be the embedding space for rooted, oriented, $n$-sided equilateral knots. \end{definition} Since $Equ(n)$ is the preimage of the regular value $(1,1,\ldots,1)$ under the smooth map $f$, $Equ(n)$ is a $2n$-dimensional manifold. Similar to the space of geometric knots, path-components correspond to the equilateral knot types realizable with $n$ edges. In this paper, we will focus on equilateral polygonal knots. Every triangle is planar. A quadrilateral can be folded along a diagonal to become planar. It is also known that any pentagon can be deformed to a planar pentagon \cite{Randell2}. Therefore $Equ(n)$ is connected for $n\le 5$ and the case of hexagons is the first interesting example. Jorge Calvo \cite{Calvo} proves that $Equ(6)$ has five path-components. One component of $Equ(6)$ corresponds to the unknot, two to the right-handed trefoil and two to the left-handed trefoil. In order to distinguish between the different components, he introduces new knot invariants for equilateral hexagonal knots. First let $H=(v_{1},v_{2},\ldots,v_{6})\in Equ(6)$. \begin{definition} Let $H\in Equ(6)$. The curl of $H$, denoted $curl(H)$, is defined by $curl(H)=\text{sign}((v_{3}-v_{1})\times(v_{5}-v_{1})\cdot(v_{2}-v_{1}))$. \end{definition} If $v_{1}, v_{3}$ and $v_{5}$ are on the $xy$-plane, oriented counter-clockwise, then $curl(H)$ denotes the sign of the $z$-coordinate of $v_{2}$. So, in a sense, some knots curl up while others curl down. We will describe a second invariant of the hexagonal knot that distinguishes its topological knot type. \begin{definition} Define $T_i$ to be the interior of the triangular disk spanned by $(v_{i-1}, v_{i}, v_{i+1})$. \end{definition} Using a right-hand rule, we orient each $T_i$ as shown in Figure \ref{fig:T2}.
\begin{figure}[h] \centerline{\includegraphics[width=.35\textwidth]{T2.pdf}} \caption{In this figure, the triangular disk $T_2$ is shaded and the orientation from the right-hand rule is shown.} \label{fig:T2} \end{figure} \begin{definition} Define $\Delta_{i}$, for $i=2,4,$ and $6$, to be the algebraic intersection number of $T_{i}$ with $H$. \end{definition} \begin{figure}[h] \centerline{\includegraphics[width=.35\textwidth]{Delta2.pdf}} \caption{This figure shows an example of a hexagonal knot in which $\Delta_2=1$.} \label{fig:D2} \end{figure} The following lemma distinguishes the topological knot type using the algebraic intersection numbers. \begin{lemma}(\cite{Calvo})\label{algintersection} Let $H\in Equ(6)$. Then \begin{enumerate} \item $H$ is a right-handed trefoil iff $\Delta_{i}=1$ for all $i$, \item $H$ is a left-handed trefoil iff $\Delta_{i}=-1$ for all $i$, \item $H$ is an unknot iff $\Delta_{i}=0$ for some $i\in\{2,4,6\}$. \end{enumerate} \end{lemma} Combining the notion of curl with the appropriate intersections from Lemma \ref{algintersection}, we arrive at Calvo's Geometric Knot Invariant, Joint Chirality-Curl. \begin{definition}(\cite{Calvo}) Let $H\in Equ(6)$. Define Joint Chirality-Curl $$J(H)=(\Delta_{2}\Delta_{4}\Delta_{6},\Delta_{2}^2\Delta_{4}^2\Delta_{6}^2 curl(H)).$$ \end{definition} The Joint Chirality-Curl distinguishes between the five components of $Equ(6)$. \begin{thm}(\cite{Calvo}) Let $H\in Equ(6)$. Then \[ J(H) = \begin{cases} (0,0) & \text{iff $H$ is the unknot}\\ (1,c) & \text{iff $H$ is a right-handed trefoil with } curl(H)=c\\ (-1,c)& \text{iff $H$ is a left-handed trefoil with } curl(H)=c \end{cases} \] \end{thm} The five components of $Equ(6)$ are due to the choice of a root and orientation.
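The curl is an elementary sign computation; a minimal numerical sketch (our own code, taking only the four vertices that enter the definition):

```python
def curl(v1, v2, v3, v5):
    # curl(H) = sign((v3 - v1) x (v5 - v1) . (v2 - v1)), computed from the
    # vertex coordinates; returns 1, -1, or 0 (degenerate configuration).
    a = tuple(p - q for p, q in zip(v3, v1))
    b = tuple(p - q for p, q in zip(v5, v1))
    c = tuple(p - q for p, q in zip(v2, v1))
    cross = (a[1] * b[2] - a[2] * b[1],
             a[2] * b[0] - a[0] * b[2],
             a[0] * b[1] - a[1] * b[0])
    triple = sum(x * y for x, y in zip(cross, c))
    return (triple > 0) - (triple < 0)
```

With $v_1$, $v_3$, $v_5$ in the $xy$-plane and counter-clockwise, this reduces to the sign of the $z$-coordinate of $v_2$, as noted above.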
Consider the automorphisms $r$ and $s$ on $Equ(6)$ defined by $$ r\langle v_1, v_2, v_3, v_4, v_5, v_6\rangle=\langle v_1, v_6, v_5, v_4, v_3, v_2\rangle$$ $$ s\langle v_1, v_2, v_3, v_4, v_5, v_6\rangle=\langle v_2, v_3, v_4, v_5, v_6, v_1\rangle.$$ These automorphisms act on $Equ(6)$ by reversing or shifting the order of the vertices of each hexagon. They generate the dihedral group of order twelve. \begin{thm}(\cite{Calvo})\label{2.5Calvo} Suppose $\Gamma$ is a subgroup of the dihedral group $\langle r,s\rangle$. Then $Geo(6)/\Gamma$ has five components if and only if $\Gamma$ is contained in the index-$2$ subgroup $\langle s^2,rs \rangle$. Otherwise, $Geo(6)/\Gamma$ has three components. \end{thm} \begin{cor}(\cite{Calvo}) The spaces $Geo(6)/\langle s\rangle $ of non-rooted oriented embedded hexagons, and $Geo(6)/\langle r,s\rangle$ of non-rooted non-oriented embedded hexagons, each consist of three path-components. \end{cor} Next we will discuss definitions and results from symplectic geometry, specifically toric symplectic manifolds \cite{symplectic}, that apply to knot spaces. \begin{definition} A symplectic manifold, $M$, is an even-dimensional manifold with a closed, non-degenerate $2$-form, $\omega$, called the symplectic form. \end{definition} Since $\omega$ is non-degenerate, there is a canonical isomorphism between the tangent and cotangent bundles, namely $$TM\to T^*M: X\mapsto \iota(X)\omega=\omega(X,\cdot ).$$ \begin{definition} A symplectomorphism of a symplectic manifold $(M,\omega)$ is a diffeomorphism $\psi\in Diff(M)$ that preserves the symplectic form. The group of symplectomorphisms of $M$ is denoted $Symp(M,\omega)$. \end{definition} Since $\omega$ is non-degenerate, the homomorphism $T_q M\to T_q^*M: v\mapsto \iota(v)\omega_q$ is bijective.
Thus there is a one-to-one correspondence between vector fields and $1$-forms via $$\chi (M)\to \Omega^1(M):X\mapsto \iota (X)\omega.$$ \begin{definition} A vector field $X\in \chi (M)$ is called symplectic if $\iota (X)\omega$ is closed. Denote the space of symplectic vector fields by $\chi (M,\omega)$. \end{definition} \begin{prop}\cite{symplectic}\label{symp} Let $M$ be a closed manifold. If $t\mapsto \psi_t\in Diff(M)$ is a smooth family of diffeomorphisms generated by a family of vector fields $X_t\in \chi(M)$ via $$ \frac{d}{dt}\psi_t=X_t \circ \psi_t, \hspace{1cm} \psi_0=id,$$ then $\psi_t\in Symp(M,\omega)$ for every $t$ if and only if $X_t\in \chi(M,\omega)$ for every $t$. \end{prop} Now consider a smooth function $H:M\to \mathbb{R}$. \begin{definition} The vector field $X_H:M\to TM$ determined by the identity $dH=\iota(X_H)\omega$ is called the Hamiltonian vector field associated to $H$. \end{definition} If $M$ is closed, then by Proposition \ref{symp}, the vector field $X_H$ generates a smooth $1$-parameter group of diffeomorphisms $\phi^t_H\in Diff(M)$ such that $$\frac{d}{dt} \phi^t_H=X_H\circ \phi^t_H,\hspace{1cm} \phi^0_H=id,$$ called a Hamiltonian flow associated to $H$. The identity $$dH(X_H)=(\iota(X_H)\omega)(X_H)=\omega(X_H,X_H)=0$$ shows that $X_H$ is tangent to the level sets of $H$. A useful example to consider is the unit sphere $S^2$, where $\omega$ is the standard area form. If $S^2=\{(x_1, x_2, x_3) : \sum_j x_j^2=1 \}$, then $\omega_x(u,v)= \langle x,u\times v\rangle $ for $u,v\in T_x S^2$. Consider cylindrical polar coordinates $(\theta,x_3)$ for $\theta\in [0,2\pi)$ and $x_3 \in [-1,1]$. Let $H$ be the height function $x_3$ on $S^2$. The level sets are circles at constant height. The Hamiltonian flow $\phi^t_H$ rotates each circle at constant speed and $X_H$ is the vector field $\frac{\partial }{\partial \theta}$. Thus $\phi^t_H$ is the rotation of the sphere about its vertical axis through the angle $t$.
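A quick numerical illustration of this example (our own sketch): the time-$t$ flow is rotation about the vertical axis, which visibly preserves both the sphere and the level sets of the height function.

```python
import math

def height_flow(p, t):
    # Hamiltonian flow of H = x3 on the unit sphere: rotate the point p
    # about the vertical axis through the angle t; the height x3 (the
    # conserved quantity) is untouched.
    x, y, z = p
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t),
            z)
```

Each horizontal circle is carried to itself at constant speed, exactly as described above.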
Consider a smooth map $[0,1]\times M\to M:(t,q)\mapsto \psi_t(q)$ such that $\psi_t\in Symp(M,\omega)$ and $\psi_0=id$. A family of such symplectomorphisms is called a symplectic isotopy of $M$. The isotopy is generated by a unique family of vector fields $X_t:M\to TM$ such that $\frac{d}{dt}\psi_t=X_t\circ\psi_t$. If all of the $1$-forms $\iota(X_t)\omega$ are exact, then there exists a smooth family of Hamiltonian functions $H_t:M\to \mathbb{R}$ such that $\iota (X_t)\omega=dH_t$. In this case, $\psi_t$ is called a Hamiltonian isotopy. \begin{definition} A symplectomorphism, $\psi$, is called Hamiltonian if there exists a Hamiltonian isotopy $\psi_t\in Symp(M,\omega)$ from $\psi_0=id$ to $\psi_1=\psi$. \end{definition} \begin{definition} A Hamiltonian action of $S^1$ on $(M,\omega)$ is a $1$-parameter subgroup $\mathbb{R}\to Symp(M): t\mapsto \psi_t$ of $Symp(M)$ where $\psi_{t+1}=\psi_t$ for all $t$ and which is the integral of a Hamiltonian vector field $X_H$. \end{definition} The Hamiltonian function $H:M\to \mathbb{R}$ in this case is called the moment map. If $k$ such symmetries commute, we have an action of a torus, $T^{k}$, on $M$. Then the moment map $\mu: M\to{\mathbb R}^{k}$ yields a $k$-dimensional vector of conserved quantities. If $k$ is half the dimension of $M$, then $M$ is called toric symplectic. From theorems of Atiyah \cite{Atiyah:1982re} and Guillemin-Sternberg \cite{GuillStern}, the image of $\mu$ is a convex polytope, $P$, called the moment polytope. Moreover, the vertices of the moment polytope are the images under $\mu$ of the fixed points of the Hamiltonian torus action. In addition, the torus action preserves the fibers of the moment map. If we can invert $\mu$, we get a map $\alpha:P\times T^{n}\to M$ called the action-angle map. The previous example of the unit sphere $S^2$ is a toric symplectic manifold, with the circle action given by rotation about the $z$-axis. The moment map $H:S^2\to \mathbb{R}$ is the height function, the conserved quantity as the sphere rotates.
The image of $H$ is a convex polytope, namely the interval $[-1,1]$. The fibers of $H$ are horizontal circles of constant height, which are preserved under the action. Lastly, the circle $S^1$ has half the dimension of $S^2$. The toric symplectic structure on the sphere naturally carries over to a toric symplectic structure on the product of spheres. This gives a toric symplectic structure on the space of open random walks or open polygons. We will consider the subspace of closed random walks. Let $Pol(n)$ be the $2n$-dimensional space of possibly singular polygons in $\mathbb{R}^3$ with edge lengths one. We will consider the quotient space $Pol_0(n)=Pol(n)/\text{SO}(3)$ of equilateral polygons up to translations and rotations. Jason Cantarella and Clayton Shonkwiler \cite{JC} describe the almost toric symplectic structure of $Pol_0(n)$. We summarize some of the important information below. To define the toric action, consider any triangulation, $T$, of an equilateral planar regular $n$-gon. Let $d_{i}$ be the lengths of the $n-3$ diagonals of the triangulation. These diagonals, along with the edges of the polygon, form $n-2$ triangles which each obey $3$ triangle inequalities. Therefore the lengths of the diagonals and the edge lengths must obey a set of $3(n-2)$ triangle inequalities, called the triangulation inequalities. \begin{thm}\cite{Kapovich,Howardmanon,Hitchin} The following facts are known: \begin{itemize} \item $Pol_0(n)$ is a possibly singular $(2n-6)$-dimensional symplectic manifold. The symplectic volume is equal to the standard measure. \item To any triangulation $T$ of the standard $n$-gon we can associate a Hamiltonian action of the torus $T^{n-3}$ on $Pol_0(n)$, where $\theta_i$ acts by folding the polygon around the $i^{th}$ diagonal of the triangulation. \item The moment map $\mu: Pol_0(n)\to \mathbb{R}^{n-3}$ for a triangulation $T$ records the lengths $d_i$ of the $n-3$ diagonals of the triangulation.
\item The inverse image $\mu^{-1}(int(P))\subset Pol_0(n)$ of the interior of the moment polytope $P$ is a toric symplectic manifold. \end{itemize} \end{thm} The moment polytope, $P_n$, is defined by the triangulation inequalities for $T$. The vertices of the moment polytope represent degenerate polygons which extremize several triangulation inequalities. Figure \ref{pentagon} shows a triangulation of an equilateral pentagon and the corresponding moment polytope. \begin{figure}[h] \centerline{\includegraphics[width=.5\textwidth]{pentagon.pdf}\includegraphics[width=.55\textwidth]{pentagonpoly.pdf}} \caption{The left image shows the fan triangulation of an equilateral pentagon, where all diagonals share a common vertex. The lengths of the diagonals, $d_1$ and $d_2$, satisfy six triangle inequalities. The figure on the right shows the moment polytope of $Pol_0(5)$ corresponding to the fan triangulation.} \label{pentagon} \end{figure} The action-angle map $\alpha:P_n\times T^{n-3} \to Pol_0(n)$ for a triangulation $T$ is given by first constructing the $n-2$ triangles using the diagonal lengths, $d_i$, and edge lengths of $1$ and then joining them in $3$-space with dihedral angles given by the $\theta_i$. The polygon is the boundary of this triangulated surface. This construction only makes sense for polygons equivalent up to translations and rotations, which is why the quotient by $\text{SO}(3)$ is necessary. An example of an equilateral pentagon is shown in Figure \ref{pentagon2}. \begin{figure}[h] \centerline{\includegraphics[width=.35\textwidth]{pent1.pdf}\includegraphics[width=.35\textwidth]{pent2.pdf}\includegraphics[width=.35\textwidth]{pent3.pdf}} \caption{The figure shows how to construct an equilateral pentagon from the action-angle map $\alpha: P_5\times T^2\to Pol_0(5)$ for the fan triangulation. A point $(d_1, d_2)$ in the moment polytope gives the information needed to construct three triangles.
Then a point $(\theta_1, \theta_2)\in T^2$ gives instructions on how to attach the triangles along the diagonals. The boundary of the triangulated surface is the equilateral pentagon.} \label{pentagon2} \end{figure} The following theorem will be used in Section 4 when calculating the knotting probability of equilateral hexagons. \begin{thm}(Duistermaat-Heckman)\cite{DuistHeck}\label{DH} Suppose $M$ is a $2n$-dimensional toric symplectic manifold with moment polytope $P$, $T^{n}$ is the $n$-torus and $\alpha$ inverts the moment map. If we take the standard measure on the $n$-torus and the uniform measure on $\text{int}(P)$, then the map $\alpha:\text{int}(P)\times T^{n}\to M$ parametrizing a full-measure subset of $M$ in action-angle coordinates is measure-preserving. In particular, if $f:M\to \mathbb{R}$ is any integrable function then $$\int_M f(x)\text{ d}m=\int_{P\times T^n} f(d_1,\cdots,d_n,\theta_1,\cdots,\theta_n)\text{ dVol}_{\mathbb{R}^n}\wedge d\theta_1\wedge\cdots \wedge d\theta_n $$ and if $f(d_1, \cdots,d_n,\theta_1,\cdots,\theta_n)=f_d(d_1, \cdots,d_n)f_\theta(\theta_1,\cdots,\theta_n)$ then $$\int_M f(x)\text{ d}m=\int_{P} f_d(d_1,\cdots,d_n)dVol_{\mathbb{R}^n}\int_{T^n} f_\theta(\theta_1,\cdots,\theta_n)d\theta_1\wedge\cdots \wedge d\theta_n. $$\\ \end{thm} \section{Symplectic Structure of the Space of Equilateral Hexagons} \subsection{Action-Angle Coordinates} In order to describe the action-angle coordinates on the space of equilateral hexagons, we first must consider the quotient space of $Equ(6)$. \begin{definition} Let $Equ_0(6)=Equ(6)/\text{SO}(3)\times\mathbb{R}^3$ be the embedding space of rooted, oriented equilateral hexagonal knots up to translations and rotations. \end{definition} Let $H=(v_{1},v_{2},v_{3},v_{4},v_{5},v_{6})\in Equ(6)$. We can translate $H$ so that $v_{1}=(0,0,0)$. Additionally we rotate $H$ so that $v_{3}$ is on the positive $x$-axis and $v_5$ on the upper-half $xy$-plane.
Therefore $v_1, v_3$, and $v_5$ are on the $xy$-plane in a counter-clockwise orientation. In this section, we will consider this to be the standard position for $H\in Equ_0(6)$. Next we can choose any triangulation of the standard planar equilateral hexagon to form our action-angle coordinates. We will use one of the triangulations that has a central triangle. \begin{definition} Let the $T_{135}$ triangulation be the triangulation of the regular planar equilateral hexagon which has diagonals connecting $v_{1}$ to $v_{3}$, $v_{3}$ to $v_{5}$, and $v_{5}$ to $v_{1}$, with lengths $d_{1}$, $d_{2}$, and $d_{3}$, respectively. \end{definition} \begin{figure}[h] \centerline{\includegraphics[width=.4\textwidth]{Ttriangulation.pdf}} \caption{This figure shows the $T_{135}$ triangulation of an equilateral hexagon.} \label{fig:label} \end{figure} The lengths of the diagonals of the $T_{135}$ triangulation obey the following triangulation inequalities: \begin{equation*} \begin{aligned}[c] 0\le& d_1 \le2,\\ 0\le& d_2 \le2,\\ 0\le& d_3 \le2,\\ \end{aligned} \quad \text{and}\quad \begin{aligned}[c] d_3\le& d_1+d_2,\\ d_1\le&d_3+d_2,\\ d_2\le&d_3+d_1.\\ \end{aligned} \end{equation*} \begin{definition} The $T_{135}$ triangulation polytope, $P_6$, is the moment polytope for $Pol_0(6)$ corresponding to the $T_{135}$ triangulation and is determined by the triangulation inequalities. \end{definition} \begin{figure}[h] \centerline{\includegraphics[width=.5\textwidth]{poly.pdf}} \caption{This figure shows the $T_{135}$ triangulation polytope.} \label{fig:polytope} \end{figure} Let $\theta_{i}$ be the dihedral angle around diagonal $d_{i}$, where the regular planar hexagon has all angles $\pi$. Then the action-angle map for $T_{135}$, $\alpha:P_6\times T^3\to Pol_0(6)$ allows us to parametrize any $H\in Equ_{0}(6)$ as $H=(d_{1},d_{2}, d_{3}, \theta_{1}, \theta_{2}, \theta_{3})$.
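This parametrization is easy to realize numerically. The sketch below is our own code (with our own names and sign conventions): it checks the triangulation inequalities for membership in $P_6$, then builds a hexagon from $(d_1,d_2,d_3,\theta_1,\theta_2,\theta_3)$ by placing the central triangle in the $xy$-plane in standard position and erecting the three isosceles triangles over the diagonals, with the convention that $\theta_i=\pi$ is the flat, outward-folded position. All six edges come out with unit length, and the diagonals with lengths $d_i$.

```python
import math

def in_P6(d1, d2, d3):
    # The T_135 triangulation inequalities defining the moment polytope P_6:
    # each diagonal lies in [0, 2], and the three diagonals satisfy the
    # triangle inequalities of the central triangle.
    return (all(0.0 <= d <= 2.0 for d in (d1, d2, d3))
            and d3 <= d1 + d2 and d1 <= d2 + d3 and d2 <= d1 + d3)

def hexagon(d1, d2, d3, t1, t2, t3):
    # Central triangle of the T_135 triangulation: v1 at the origin,
    # v3 on the positive x-axis, v5 in the upper half of the xy-plane.
    v1 = (0.0, 0.0, 0.0)
    v3 = (d1, 0.0, 0.0)
    x5 = (d1**2 - d2**2 + d3**2) / (2.0 * d1)
    v5 = (x5, math.sqrt(d3**2 - x5**2), 0.0)

    def apex(a, b, theta):
        # Apex of the isosceles triangle with two unit sides erected over the
        # diagonal from a to b, folded out of the plane by dihedral angle theta.
        dx, dy = b[0] - a[0], b[1] - a[1]
        L = math.hypot(dx, dy)
        mx, my = (a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0
        nx, ny = dy / L, -dx / L               # outward in-plane normal
        h = math.sqrt(1.0 - (L / 2.0) ** 2)    # apex height over the diagonal
        c, s = math.cos(theta), math.sin(theta)
        return (mx - h * c * nx, my - h * c * ny, h * s)

    return [v1, apex(v1, v3, t1), v3, apex(v3, v5, t2), v5, apex(v5, v1, t3)]

def dist(p, q):
    return math.dist(p, q)
```

For any point in the interior of $P_6$ and any triple of dihedral angles, the resulting closed polygon is equilateral with unit edges, and its diagonals $v_1v_3$, $v_3v_5$, $v_5v_1$ have lengths $d_1$, $d_2$, $d_3$.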
To construct an equilateral hexagonal knot in $Equ_0(6)$, first choose a point $(d_1, d_2, d_3)\in P_6$ and construct four triangles: one with lengths $d_1, d_2$, and $d_3$ and three isosceles triangles with two side lengths $1$ and third side $d_i$. The triangle with side lengths $d_1, d_2$, and $d_3$ is placed on the $xy$-plane with $v_1$ the origin and $v_3$ on the positive $x$-axis. Then a point $(\theta_1,\theta_2,\theta_3)$ in the torus $T^3$ gives instructions on how to connect the three remaining triangles. \begin{figure}[h] \centerline{\includegraphics[width=.6\textwidth]{trexex.pdf}\includegraphics[width=.45\textwidth]{trefexample.pdf}} \caption{Given a point $(d_1,d_2,d_3)\in P_6$, four triangles are formed. Then given a triple of angles, the triangles are connected to form an equilateral hexagonal trefoil.} \label{fig:label} \end{figure} For $H\in Equ_0(6)$ in standard position, the action-angle coordinates arising from the $T_{135}$ triangulation give the following coordinates for the vertices of $H$: \begin{align*} v_1 =& \Big(0,0,0\Big)\\ v_{2}=& \Big(\frac{d_{1}}{2}, \frac{1}{2}\sqrt{4-(d_{1})^2}\text{ cos}(\theta_{1}),\frac{1}{2}\sqrt{4-(d_{1})^2}\text{ sin}(\theta_{1})\Big),\\ v_{3}=& \Big(d_{1},0,0\Big),\\ v_{4}=&\Big(\frac{3(d_{1})^2-(d_{2})^2+(d_{3})^2}{4d_{1}}-\frac{d}{4d_{1}d_{2}}\sqrt{4-(d_{2})^2}\text{ cos}(\theta_{2}), \frac{d}{4d_{1}}-\\&\frac{(d_{1})^2+(d_{2})^2-(d_{3})^2}{4d_{1}d_{2}}\sqrt{4-(d_{2})^2}\text{ cos}(\theta_{2}), \frac{1}{2}\sqrt{4-(d_{2})^2}\text{ sin}(\theta_{2})\Big),\\ v_{5} =& \Big(\frac{(d_{1})^2-(d_{2})^2+(d_{3})^2}{2d_{1}}, \frac{d}{2d_{1}}, 0\Big),\\ v_{6} =& \Big(\frac{(d_{1})^2-(d_{2})^2+(d_{3})^2}{4d_{1}}+\frac{d}{4d_{1}d_{3}}\sqrt{4-(d_{3})^2}\text{ cos}(\theta_{3}), \frac{d}{4d_{1}}-\\&\frac{(d_{1})^2-(d_{2})^2+(d_{3})^2}{4d_{1}d_{3}}\sqrt{4-(d_{3})^2}\text{ cos}(\theta_{3}), \frac{1}{2}\sqrt{4-(d_{3})^2}\text{ sin}(\theta_{3})\Big), \end{align*} where $d=\sqrt{2(d_{1}d_{2})^2+2(d_{1}d_{3})^2+
2(d_{2}d_{3})^2-(d_{1})^4-(d_{2})^4-(d_{3})^4}$.\\ Recall that the geometric knot invariant for hexagons, Joint Chirality-Curl, distinguishes two types each of right-handed and left-handed trefoils, with $curl(H)=\text{sign}((v_{3}-v_{1})\times(v_{5}-v_{1})\cdot(v_{2}-v_{1}))$. The following two lemmas give a relation between the curl of a hexagon and the possible dihedral angles. \begin{lemma}\label{curl1} Let $H\in Equ_0(6)$. Let $H$ be parametrized using action-angle coordinates from the $T_{135}$ triangulation. If $H$ has Joint Chirality-Curl $(\pm1,1)$, then $\theta_i\in (0,\pi)$ for $i=1,2,3$. \end{lemma} \textit{Proof:}\ Let $H\in Equ_{0}(6)$ be parametrized with action-angle coordinates $(d_1,d_2,d_3,\theta_1,\theta_2,\theta_3)$ arising from the $T_{135}$ triangulation. Let $H$ be in standard position. Since $v_1$, $v_3$, and $v_5$ are on the $xy$-plane oriented in a counter-clockwise direction, $curl(H)$ equals the sign of the $z$-coordinate of $v_2$. Therefore if $curl(H)=1$, then $\theta_1\in(0,\pi)$. Suppose that $\theta_2, \theta_3\in (\pi, 2\pi)$. Then both $e_4$ and $e_5$ lie below the $xy$-plane and can not pierce $T_2$. Thus $\Delta_2=0$ and $H$ has Joint Chirality-Curl $(0,0)$. If $\theta_2\in(\pi, 2\pi)$ and $\theta_3\in(0,\pi)$, then neither $e_6$ nor $e_1$ can pierce $T_4$ and $\Delta_4=0$. Similarly, if $\theta_3\in(\pi, 2\pi)$ and $\theta_2\in(0,\pi)$, then $\Delta_6=0$. Therefore if $curl(H)=1$, then $\theta_{i}\in (0,\pi)$ for all $i\in\{1,2,3\}$. \hfill$\square$ \begin{lemma}\label{curl-1} Let $H\in Equ_0(6)$. Let $H$ be parametrized using action-angle coordinates from the $T_{135}$ triangulation. If $H$ has Joint Chirality-Curl $(\pm1,-1)$, then $\theta_i\in (\pi,2\pi)$ for $i=1,2,3$. \end{lemma} \textit{Proof:}\ Let $H\in Equ_{0}(6)$ be parametrized with action-angle coordinates $(d_1,d_2,d_3,\theta_1,\theta_2,\theta_3)$ arising from the $T_{135}$ triangulation. Let $H$ be in standard position. 
If $curl(H)=-1$, then $\theta_1\in(\pi,2\pi)$. By an argument similar to that of Lemma \ref{curl1}, if either or both of $\theta_2$ and $\theta_3$ lie between $0$ and $\pi$, then $J(H)=(0,0)$. Therefore if $curl(H)=-1$ and $H$ is a trefoil, then $\theta_{i}\in (\pi, 2\pi)$ for all $i\in\{1,2,3\}$. \hfill$\square$\\ \subsection{Equilateral, Right-Handed, Positive Curl, Hexagonal Trefoils} In this section, we will determine constraints on the values of $d_1, d_2, d_3, \theta_1, \theta_2,$ and $\theta_3$ in order to have a right-handed hexagonal trefoil with positive curl. First we will define a set of inequalities that must be satisfied in order for $H\in Equ_{0}(6)$ in standard position to have Joint Chirality-Curl $(1,1)$. \begin{prop}\label{disc} Let $H\in Equ_0(6)$ and parametrize $H$ with action-angle coordinates arising from the $T_{135}$ triangulation. If $J(H)=(1,1)$, then the following nine functions must be positive: \begin{align*} f_1= & d_2\sqrt{4-(d_2)^2}\text{sin}(\theta_2)\big(d_3d-((d_1)^2-(d_2)^2+(d_3)^2)\sqrt{4-(d_3)^2}\text{cos}(\theta_3)\big)-\\& d_3\sqrt{4-(d_3)^2}\text{sin}(\theta_3)\big(d_2d-((d_1)^2+(d_2)^2-(d_3)^2)\sqrt{4-(d_2)^2}\text{cos}(\theta_2)\big),\\ g_1= & \sqrt{4-(d_2)^2}\Big(\frac{-(d_1)^2+(d_2)^2+(d_3)^2}{2d_2 d_3}\text{cos}(\theta_2)\text{sin}(\theta_3)+\text{sin}(\theta_2)\text{cos}(\theta_3)\Big)\\ &-\frac{d\text{sin}(\theta_3)}{2d_3},\\ h_1= & \sqrt{4-(d_3)^2}\Big(\frac{-(d_1)^2+(d_2)^2+(d_3)^2}{2d_2 d_3}\text{cos}(\theta_3)\text{sin}(\theta_2)+\text{sin}(\theta_3)\text{cos}(\theta_2)\Big) \\ &-\frac{d\text{sin}(\theta_2)}{2d_2},\\ f_2 = & d_3\sqrt{4-(d_3)^2}\text{sin}(\theta_3)\big(d_1d-((d_1)^2+(d_2)^2-(d_3)^2)\sqrt{4-(d_1)^2}\text{cos}(\theta_1)\big)- \\& d_1\sqrt{4-(d_1)^2}\text{sin}(\theta_1)\big(d_3d-(-(d_1)^2+(d_2)^2+(d_3)^2)\sqrt{4-(d_3)^2}\text{cos}(\theta_3)\big),\\ g_2 = &\sqrt{4-(d_3)^2}\Big(\frac{(d_1)^2-(d_2)^2+(d_3)^2}{2d_1 d_3}\text{cos}(\theta_3)\text{sin}(\theta_1)+\text{sin}(\theta_3)\text{cos}(\theta_1)\Big) \\ 
&-\frac{d\text{sin}(\theta_1)}{2d_1},\\ h_2 = &\sqrt{4-(d_1)^2}\Big(\frac{(d_1)^2-(d_2)^2+(d_3)^2}{2d_1 d_3}\text{cos}(\theta_1)\text{sin}(\theta_3)+\text{sin}(\theta_1)\text{cos}(\theta_3)\Big) \\ &-\frac{d\text{sin}(\theta_3)}{2d_3},\\ f_3 = & d_1\sqrt{4-(d_1)^2}\text{sin}(\theta_1)\big(d_2d-(-(d_1)^2+(d_2)^2+(d_3)^2)\sqrt{4-(d_2)^2}\text{cos}(\theta_2)\big)-\\&d_2\sqrt{4-(d_2)^2}\text{sin}(\theta_2)\big(d_1d-((d_1)^2-(d_2)^2+(d_3)^2)\sqrt{4-(d_1)^2}\text{cos}(\theta_1)\big),\\ g_3 = &\sqrt{4-(d_1)^2}\Big(\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}\text{cos}(\theta_1)\text{sin}(\theta_2)+\text{sin}(\theta_1)\text{cos}(\theta_2)\Big)\\ & -\frac{d\text{sin}(\theta_2)}{2d_2},\\ h_3 = &\sqrt{4-(d_2)^2}\Big( \frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}\text{cos}(\theta_2)\text{sin}(\theta_1)+\text{sin}(\theta_2)\text{cos}(\theta_1)\Big) \\ & -\frac{d\text{sin}(\theta_1)}{2d_1}. \end{align*} \end{prop} \textit{Proof:}\ Let $H\in Equ_0(6)$ and parametrize $H$ with action-angle coordinates from the $T_{135}$ triangulation. If $curl(H)=1$, then $\theta_i\in (0,\pi)$ for all $i$ by Lemma \ref{curl1}. Recall that for a hexagon $H\in Equ_0(6)$ to be a right-handed trefoil, the algebraic intersection numbers $\Delta_i$ must equal one for $i=2,4,6$. First we will consider $\Delta_4=1$. This means that the triangular disk $T_{4}$ containing $v_{3}, v_{4},$ and $v_{5}$ must be pierced by either the edge $e_{6}$ or $e_1$ so that the orientation on the edge agrees with the orientation on $T_4$ coming from a right-hand rule. If $\theta_2\in (0,\pi)$, then $e_6$ must pierce $T_4$ for $\Delta_4=1$. In order for the line going through $v_{1}$ and $v_{6}$ to pierce $T_{4}$, the following must be positive: \begin{align} (v_{6}-v_{1})\times(v_{4}-v_{1})\centerdot(v_{3}-v_{1})&>0,\\ (v_{6}-v_{1})\times(v_{5}-v_{1})\centerdot(v_{4}-v_{1})&>0,\\ (v_{6}-v_{1})\times(v_{3}-v_{1})\centerdot(v_{5}-v_{1})&>0. 
\end{align} \begin{figure}[h] \centerline{\includegraphics[width=.5\textwidth]{discexample.pdf}} \caption{This figure shows the case where $e_6$ pierces $T_4$.} \end{figure} In addition, the plane containing $v_{3}, v_{4},$ and $v_{5}$ must separate $v_{1}$ from $v_{6}$. So the following must be negative: \begin{align} \big((v_{6}-v_{3})\times(v_{5}-v_{3})\centerdot(v_{4}-v_{3})\big)\big((v_{1}-v_{3})\times(v_{5}-v_{3})\centerdot(v_{4}-v_{3})\big)<0. \end{align} Since $\theta_i\in(0,\pi)$ for all $i$, $(v_{6}-v_{1})\times(v_{3}-v_{1})\centerdot(v_{5}-v_{1})$ is always positive. Negating $(4)$ leaves three inequalities that must be satisfied so that $e_{6}$ pierces $T_4$. Letting $H$ be in standard position and evaluating the remaining three with action-angle coordinates gives three functions $f_1, g_1, h_1$ that must be positive for $H$ to have Joint Chirality-Curl $(1,1)$. Similarly, for $\Delta_2=1$ and $\Delta_6=1$, the functions $f_2, g_2, h_2$ and $f_3, g_3, h_3$ must be positive, respectively. Therefore if $H$ is in standard position and parametrized with action-angle coordinates from the $T_{135}$ triangulation, $f_i, g_i$, and $h_i$ must be positive for $H$ to have Joint Chirality-Curl $(1,1)$. \hfill$\square$\\ Next we will prove constraints on the action-angle coordinates to get a right-handed, positive curl trefoil. \begin{lemma}\label{equilateral} Let $H\in Equ_{0}(6)$ and parametrize $H$ with action-angle coordinates coming from the $T_{135}$ triangulation. If $H$ has Joint Chirality-Curl $(1,1)$, then the lengths of the diagonals $d_i$ must be distinct. \end{lemma} \textit{Proof:}\ Let $H$ be in $Equ_{0}(6)$. Parametrize $H$ with action-angle coordinates coming from the $T_{135}$ triangulation, so $H=(d_1,d_2,d_3,\theta_1,\theta_2,\theta_3)$ and $H$ is in standard position. Suppose $d_1=d_2=d_3=x$, for some $x\in(0,2)$. Then $v_1=(0,0,0), v_3=(x,0,0)$, and $v_5=(\frac{x}{2}, \frac{\sqrt{3}x}{2},0)$. First we will consider the case when $x=\sqrt{3}$. 
When $\theta_i=0$ for all $i$, $H$ is planar but singular with vertices $v_2$, $v_4$, and $v_6$ coinciding in a single point, $(\frac{\sqrt{3}}{2},\frac{1}{2},0)$. Additionally $e_1=e_6$, $e_2=e_3$, and $e_4=e_5$. As $\theta_1$ increases from $0$ to $2\pi$, $v_{2}$ traverses a circle, $c_2$, of radius $\frac{1}{2}$ centered at $(\frac{\sqrt{3}}{2},0,0)$ lying in a plane parallel to the $yz$-plane. Similarly, as $\theta_3$ increases from $0$ to $2\pi$, $v_6$ moves along a circle, $c_6$, of radius $\frac{1}{2}$ centered at the midpoint of the diagonal connecting $v_1$ and $v_5$. Therefore $e_1$ sweeps out a circular cone, $C_{12}$, with vertex the origin and base circle $c_2$ and $e_6$ sweeps out a circular cone, $C_{61}$, with vertex the origin and base circle $c_6$. The circles $c_2$ and $c_6$ only intersect when $\theta_1=\theta_3=0$. Therefore the respective cones only intersect in the segment from $(0,0,0)$ to $(\frac{\sqrt{3}}{2},\frac{1}{2},0)$, corresponding to edges $e_1$ and $e_6$ coinciding when $\theta_1=\theta_3=0$. This implies that $e_2$ can not pierce $T_6$. Thus if $x=\sqrt{3}$, $H$ can not have Joint Chirality-Curl $(1,1)$. Next we consider the case when $\sqrt{3}<x<2$. When $\theta_1=\theta_2=\theta_3=0$, $H$ is planar and embedded. Hence $H$ is unknotted. Cones $C_{12}$ and $C_{61}$, formed by edges $e_1$ and $e_6$ as $\theta_1$ and $\theta_3$ vary, do not intersect. Therefore neither $T_2$ nor $T_6$ will be pierced by $H$, so $H$ will remain unknotted. Now consider the case when $0<x<\sqrt{3}$. If $\theta_i=\text{cos}^{-1}(\frac{\sqrt{3}x}{3\sqrt{4-x^2}})$ for all $i$, then $v_2, v_4$, and $v_6$ coincide. If $\theta_i\in(\text{cos}^{-1}(\frac{\sqrt{3}x}{3\sqrt{4-x^2}}),\pi)$ for any $i\in \{1,2,3\}$ then $H$ will be unknotted. Therefore suppose that $\theta_i\in (0,\text{cos}^{-1}(\frac{\sqrt{3}x}{3\sqrt{4-x^2}}))$ for all $i$. If $\theta_1=\theta_3=0$, then $e_2$ and $e_5$ intersect in a point on the $xy$-plane. 
If $1\le x<\sqrt{3}$, then the point of intersection lies in the interior of the triangle with vertices $v_1$, $v_3$, and $v_5$. As $\theta_1$ and $\theta_3$ increase from $0$ to $\text{cos}^{-1}(\frac{\sqrt{3}x}{3\sqrt{4-x^2}})$, $e_2$ and $e_5$ continue to intersect in a point. In order for $e_2$ to pierce $T_6$, we need $\theta_3>\theta_1$. Similarly $e_1$ and $e_4$ intersect when $\theta_1=\theta_2$. In order for $e_4$ to pierce $T_2$, we need $\theta_1>\theta_2$. When $\theta_2=\theta_3$, $e_6$ and $e_3$ intersect. In order for $e_6$ to pierce $T_4$, we need $\theta_2>\theta_3$. This implies that $\theta_3>\theta_1>\theta_2>\theta_3$, a contradiction. When $0<x<1$ and $\theta_1=\theta_3=0$, then $e_2$ and $e_5$ intersect in a point that lies in the exterior of the triangle with vertices $v_1$, $v_3$, and $v_5$. In order for $e_2$ to pierce $T_6$, we need $\theta_3<\theta_1$. In order for $e_4$ to pierce $T_2$, we need $\theta_1<\theta_2$. In order for $e_6$ to pierce $T_4$, we need $\theta_2<\theta_3$. This implies that $\theta_3<\theta_1<\theta_2<\theta_3$, a contradiction. Therefore when $d_1=d_2=d_3$, $H$ can not have Joint Chirality-Curl $(1,1)$. \hfill$\square$\\ From Lemma \ref{curl1}, we know that all three dihedral angles must be in the interval $(0,\pi)$ for $curl(H)=1$. Next, given any admissible triple of diagonal lengths, we prove tighter constraints on the dihedral angles so that $H$ has Joint Chirality-Curl $(1,1)$. \begin{lemma}\label{angles} Let $H\in Equ_{0}(6)$ and parametrize $H$ using action-angle coordinates with the $T_{135}$ triangulation. If $H$ has Joint Chirality-Curl $(1,1)$, then $\theta_{i}\in (0,\pi)$ for all $i\in\{1,2,3\}$ and $\theta_{1}+\theta_{2}<\pi$, $\theta_{1}+\theta_{3}<\pi$, and $\theta_{2}+\theta_{3}<\pi$. \end{lemma} \textit{Proof:}\ Let $H\in Equ_{0}(6)$ be parametrized with action-angle coordinates\\ $(d_1,d_2,d_3,\theta_1,\theta_2,\theta_3)$ arising from the $T_{135}$ triangulation. 
If $H$ is a right-handed trefoil with $curl(H)=1$, then from Lemma \ref{curl1} we know $\theta_{i}\in (0,\pi)$ for all $i\in\{1,2,3\}$. If $\theta_i\in(0,\frac{\pi}{2})$ for all $i\in\{1,2,3\}$, then clearly $\theta_{1}+\theta_{2}<\pi$, $\theta_{1}+\theta_{3}<\pi$, and $\theta_{2}+\theta_{3}<\pi$. Additionally, if $\theta_i, \theta_j\in(0,\frac{\pi}{2})$, for any two distinct $i,j\in\{1,2,3\}$, then $\theta_i+\theta_j< \pi$. Next we will show that if $\theta_1\in(\frac{\pi}{2},\pi)$, $\theta_2, \theta_3\in(0,\frac{\pi}{2})$ and $H$ has Joint Chirality-Curl $(1,1)$, then $\theta_{1}+\theta_{2}<\pi$ and $\theta_{1}+\theta_{3}<\pi$. Towards a contradiction, suppose that $\theta_1\in(\frac{\pi}{2},\pi)$, $\theta_2, \theta_3\in(0,\frac{\pi}{2})$, and $\theta_1+\theta_2= \pi$. Substituting $\theta_1=\pi-\theta_2$ into equation $g_3$ from Proposition \ref{disc} and using the facts that $\text{cos}(\pi-\theta_2)=-\text{cos}(\theta_2)$ and $\text{sin}(\pi-\theta_2)=\text{sin}(\theta_2)$, we obtain the following \begin{align*} g_3=\sqrt{4-(d_1)^2}\Big(-\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}+1\Big)\text{sin}(\theta_2)\text{cos}(\theta_2)-\frac{d\text{sin}(\theta_2)}{2d_2}. \end{align*} We make the same substitutions into equation $h_3$ to obtain \begin{align*} h_3=\sqrt{4-(d_2)^2}\Big(\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}-1\Big)\text{sin}(\theta_2)\text{cos}(\theta_2)-\frac{d\text{sin}(\theta_1)}{2d_1}. \end{align*} If $\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}-1\ge0$, then $g_3$ is negative. Fix $d_1,d_2, d_3,$ and $\theta_2$, and now consider $g_3$ as a function of $\theta_1$. Since $\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}-1\ge0$, then $\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}>0$. 
This implies that the derivative of $g_3$ with respect to $\theta_1$, $$\sqrt{4-(d_1)^2}\Big(\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}(-\text{sin}(\theta_1))\text{sin}(\theta_2)+\text{cos}(\theta_1)\text{cos}(\theta_2)\Big),$$ is negative for $\theta_1\in(\frac{\pi}{2},\pi)$ and $\theta_2\in(0,\frac{\pi}{2})$. Since $g_3$ is negative for $\theta_1=\pi-\theta_2$ and $g_3$ is decreasing, $g_3$ is negative for all $\theta_1>\pi-\theta_2$. Suppose instead that $\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}-1<0$; then $h_3$ is negative. This means that the plane, $P_2$, containing vertices $v_1, v_2$, and $v_3$ does not separate $v_4$ and $v_5$ when $\theta_1=\pi-\theta_2$. Therefore as $\theta_1$ increases to $\pi$ so that $\theta_1>\pi-\theta_2$, $P_2$ will not separate $v_4$ and $v_5$. Thus $h_3$ is negative for $\theta_1>\pi-\theta_2$. Since both $g_3$ and $h_3$ must be positive for $H$ to be a right-handed trefoil with $curl(H)=1$, we have reached a contradiction. Therefore if $\theta_1\in(0,\pi)$, $\theta_2, \theta_3\in(0,\frac{\pi}{2})$, and $\theta_1+\theta_2\ge \pi$, $H$ can not have Joint Chirality-Curl $(1,1)$. Now suppose that $\theta_1\in(\frac{\pi}{2},\pi)$, $\theta_2, \theta_3\in(0,\frac{\pi}{2})$, and $\theta_1+\theta_3= \pi$. Similar to the previous argument, we will substitute $\theta_1=\pi-\theta_3$ into equation $g_2$ and use the facts that $\text{cos}(\pi-\theta_3)=-\text{cos}(\theta_3)$ and $\text{sin}(\pi-\theta_3)=\text{sin}(\theta_3)$. This results in the following: \begin{align*} g_2&=\sqrt{4-(d_3)^2}\Big(\frac{(d_1)^2-(d_2)^2+(d_3)^2}{2d_1 d_3} -1\Big)\text{cos}(\theta_3)\text{sin}(\theta_3)-\frac{d\text{sin}(\theta_1)}{2d_1}. \end{align*} Making the same substitutions into $h_2$ gives \begin{align*} h_2&=\sqrt{4-(d_1)^2}\Big(-\frac{(d_1)^2-(d_2)^2+(d_3)^2}{2d_1 d_3} +1\Big)\text{cos}(\theta_3)\text{sin}(\theta_3)-\frac{d\text{sin}(\theta_3)}{2d_3}. \end{align*} If $\frac{(d_1)^2-(d_2)^2+(d_3)^2}{2d_1 d_3} -1\le 0$, then $g_2$ is negative. 
If $\frac{(d_1)^2-(d_2)^2+(d_3)^2}{2d_1 d_3} -1> 0$, then $h_2$ is negative. Since both equations must be positive to have Joint Chirality-Curl $(1,1)$, we have reached a contradiction. Therefore if $H$ is a right-handed trefoil with $curl(H)=1$ and $\theta_1\in(\frac{\pi}{2},\pi)$, $\theta_2, \theta_3\in(0,\frac{\pi}{2})$, then $\theta_{1}+\theta_{2}<\pi$ and $\theta_{1}+\theta_{3}<\pi$. The cases when $\theta_2\in(\frac{\pi}{2}, \pi)$, $\theta_1, \theta_3\in(0,\frac{\pi}{2})$ and $\theta_3\in(\frac{\pi}{2}, \pi)$, $\theta_1, \theta_2\in(0,\frac{\pi}{2})$ are proven in the same manner. Thus if $H$ has Joint Chirality-Curl $(1,1)$, then $\theta_{i}\in (0,\pi)$ for all $i\in\{1,2,3\}$ and $\theta_{1}+\theta_{2}<\pi$, $\theta_{1}+\theta_{3}<\pi$, and $\theta_{2}+\theta_{3}<\pi$. \hfill$\square$\\ For $H\in Equ_{0}(6)$ to be a right-handed trefoil with $curl(H)=1$, only one of the dihedral angles can be greater than $\pi/2$ with the additional condition that the sum of any two angles must be less than $\pi$. This portion of the cube $[0,2\pi]^3$ is shown in Figure \ref{torusportion}. \begin{figure}[h] \centerline{\includegraphics[width=.7\textwidth]{torusangles.pdf}} \caption{The corresponding angles from Lemma \ref{angles} for an equilateral hexagon to have Joint Chirality-Curl $(1,1)$.} \label{torusportion} \end{figure} When all three diagonals from the $T_{135}$ triangulation have equal length, $H$ can not have Joint Chirality-Curl $(1,1)$. So, we continue our analysis with the case where two of the diagonals have equal lengths.\\ Suppose $d_1=d_2$. Then the vertices $v_2$ and $v_4$ coincide when $\theta_1=\theta_2=0$. This occurs when $d_3=d_1\sqrt{4-(d_1)^2}$. If $d_3\ge d_1\sqrt{4-(d_1)^2}$, then $H$ has Joint Chirality-Curl $(0,0)$ for all $\theta_i$. Now we will consider the different ranges of $\theta_i$ for $H$ to have Joint Chirality-Curl $(1,1)$ from Lemma \ref{angles}. 
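The admissible angle region just described can also be measured numerically. The sketch below (helper names are ours) estimates the proportion of the torus $[0,2\pi]^3$ allowed by these constraints; a direct integration of the same inequalities gives volume $\pi^3/4$, i.e.\ $1/32$ of the torus.

```python
import math
import random

def admissible_angles(t1, t2, t3):
    """Necessary angle conditions for Joint Chirality-Curl (1,1): each dihedral
    angle lies in (0, pi) and the sum of any two of them is less than pi."""
    return (all(0 < t < math.pi for t in (t1, t2, t3))
            and t1 + t2 < math.pi and t1 + t3 < math.pi and t2 + t3 < math.pi)

rng = random.Random(1)
n = 200_000
hits = sum(admissible_angles(rng.uniform(0, 2*math.pi),
                             rng.uniform(0, 2*math.pi),
                             rng.uniform(0, 2*math.pi)) for _ in range(n))
frac = hits / n   # close to 1/32 = 0.03125
```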
Next we consider the case where the three diagonals of the $T_{135}$ triangulation are distinct.\\ \begin{lemma}\label{polytopeportionR+} Let $H\in Equ_{0}(6)$ and parametrize $H$ using action-angle coordinates with the $T_{135}$ triangulation. Suppose $d_{1}, d_{2},$ and $d_{3}$ are distinct and let $d_{i}>d_{j}, d_{k}$. If $J(H)=(1,1)$ then $\theta_{i}\in(0,\pi)$ and $\theta_{j},\theta_{k}\in(0,\pi/2)$. Moreover, if $d_{i}>\sqrt{(d_{j})^2+(d_{k})^2}$ then $\theta_{i}\in(\pi/2,\pi)$.\\ \end{lemma} \textit{Proof:}\ Let $H\in Equ_{0}(6)$ be in standard position so that $v_1$, $v_3$, and $v_5$ are on the $xy$-plane. Suppose that the lengths of the diagonals are distinct and that $d_2>d_3>d_1$. We will show that if $H$ has Joint Chirality-Curl $(1,1)$ then $\theta_2\in (0,\pi)$ and $\theta_1,\theta_3\in(0,\frac{\pi}{2})$. Let $l_2$ be the line in the $xy$-plane perpendicular to the segment connecting $v_1$ and $v_3$ and passing through its midpoint, $m_2$. Similarly, we define $l_4$ and $l_6$ for segments connecting $v_3$ to $v_5$ and $v_5$ to $v_1$ respectively. The three lines intersect in a unique point, $k$, the circumcenter of the triangle with vertices $v_1$, $v_3$, and $v_5$. Moreover, the orthogonal projection of $v_i$ onto the $xy$-plane lies on $l_i$ as the corresponding dihedral angle varies, and $k$ is the projection of the point where all three apex vertices coincide, if such a point exists. \begin{figure}[h] \centerline{\includegraphics[width=.55\textwidth]{circumeter.pdf}\includegraphics[width=.55\textwidth]{circumeterobtuse.pdf}} \caption{The figures show the triangle spanned by vertices $(v_1 ,v_3 ,v_5 )$ with perpendicular bisector $l_i$. In the figure on the left, $(d_2)^2 < (d_1)^2+(d_3)^2$ and so $k$ lies in the interior of the triangle. In the figure on the right, $(d_2)^2 >(d_1)^2+(d_3)^2$ and so $k$ lies in the exterior of the triangle.} \label{obtuse} \end{figure} Since $d_3>d_1$, $l_4$ intersects the segment connecting $v_1$ to $v_5$ instead of the segment connecting $v_1$ and $v_3$. 
Suppose towards contradiction that $\theta_1\in(\frac{\pi}{2},\pi)$. Then the plane perpendicular to the $xy$-plane containing $l_4$ separates $e_4$ and $T_2$. Therefore $H$ can not have Joint Chirality-Curl $(1,1)$ if $\theta_1\in(\frac{\pi}{2},\pi)$. Let $\phi_1$ be the angle for $\theta_1$ where $v_2$ projects onto $k$. If $\theta_1\in (\phi_1,\frac{\pi}{2})$, then $e_4$ and $T_2$ are still separated by the plane through $l_4$. Therefore $\theta_1\in (0,\phi_1)$. Next suppose that $\theta_3\in(\frac{\pi}{2},\pi)$. Since $d_2>d_3$, $l_2$ intersects the segment connecting $v_3$ and $v_5$. This means the plane perpendicular to the $xy$-plane containing $l_2$ separates $e_2$ and $T_6$. Therefore we have reached a contradiction and $\theta_3\in (0,\frac{\pi}{2})$. Let $\phi_3$ be the angle for $\theta_3$ for which $v_6$ projects onto $k$. In order for $e_2$ to pierce $T_6$, we need $\theta_3\in (0,\phi_3)$. Let $\phi_2$ be the angle for $\theta_2$ for which $v_4$ projects onto $k$. Let $p$ be the point where $e_1$ intersects $e_4$ when $\theta_1=0$ and $\theta_2=\pi$. In order for $e_4$ to intersect $T_2$, $e_4$ must intersect the cone spanned by $e_1$. The two cones will intersect along an arc connecting $p$ to the point which projects onto $k$. If $\theta_2<\phi_2$, then $e_4$ no longer intersects the cone spanned by $e_1$. Therefore $\theta_2\in (\phi_2,\pi)$ for $H$ to have Joint Chirality-Curl $(1,1)$. If $d_2>\sqrt{(d_1)^2+(d_3)^2}$ then the triangle with vertices $v_1$, $v_3$, and $v_5$ is obtuse, as shown in Figure \ref{obtuse}. Therefore $k$ lies in the exterior of the triangle and $\phi_2>\frac{\pi}{2}$. Hence if $d_2>\sqrt{(d_1)^2+(d_3)^2}$ then $\theta_1,\theta_3\in (0,\frac{\pi}{2})$ and $\theta_2\in (\frac{\pi}{2},\pi)$. \hfill$\square$\\ The moment polytope corresponding to the $T_{135}$ triangulation is split into three equal regions, depending on which diagonal length is largest. 
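The role of the circumcenter $k$ in this proof is easy to make explicit. In the sketch below (function names are ours), we place the central triangle in standard position, intersect two perpendicular bisectors to find $k$, and check that $k$ lies inside the triangle precisely in the acute case:

```python
import math

def triangle_v135(d1, d2, d3):
    """Vertices v1, v3, v5 of the central triangle in standard position."""
    d = math.sqrt(2*(d1*d2)**2 + 2*(d1*d3)**2 + 2*(d2*d3)**2
                  - d1**4 - d2**4 - d3**4)
    return (0.0, 0.0), (d1, 0.0), ((d1**2 - d2**2 + d3**2)/(2*d1), d/(2*d1))

def circumcenter(d1, d2, d3):
    """Intersection k of the perpendicular bisectors l_2, l_4, l_6."""
    v1, v3, v5 = triangle_v135(d1, d2, d3)
    x = d1/2                                # on l_2, the bisector of v1 v3
    y = (d3**2 - d1*v5[0]) / (2*v5[1])      # equidistant from v1 and v5
    return (x, y)

def inside(pt, tri):
    """True if pt lies strictly inside the counter-clockwise triangle tri."""
    (ax, ay), (bx, by), (cx, cy) = tri
    def cross(ox, oy, ux, uy, px, py):
        return (ux - ox)*(py - oy) - (uy - oy)*(px - ox)
    px, py = pt
    return (cross(ax, ay, bx, by, px, py) > 0
            and cross(bx, by, cx, cy, px, py) > 0
            and cross(cx, cy, ax, ay, px, py) > 0)

# Acute central triangle: (d_2)^2 = 1.44 < 2 = (d_1)^2 + (d_3)^2, so k is interior.
assert inside(circumcenter(1.0, 1.2, 1.0), triangle_v135(1.0, 1.2, 1.0))
# Obtuse central triangle: (d_2)^2 = 2.56 > 2, so k is exterior.
assert not inside(circumcenter(1.0, 1.6, 1.0), triangle_v135(1.0, 1.6, 1.0))
```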
The surface $d_1=\sqrt{(d_2)^2+(d_3)^2}$ divides the third of the polytope where $d_1>d_2, d_3$ into two regions, one for acute triangles and one for obtuse triangles, as shown in Figure \ref{fig:dividepolytope}. \begin{figure}[h] \centerline{\includegraphics[width=.5\textwidth]{thridpolytope.pdf}\includegraphics[width=.5\textwidth]{polytopeobtuse.pdf}} \caption{The figure on the left shows the portion of the moment polytope, $P_6$, where $d_1>d_2, d_3$. The figure on the right shows the portion of the moment polytope where additionally $(d_1)^2>(d_2)^2+(d_3)^2.$} \label{fig:dividepolytope} \end{figure} \subsection{Equilateral, Left-Handed, Positive Curl, Hexagonal Trefoils} In this section, we will discuss constraints for $H$ to be a left-handed hexagonal trefoil with positive curl. \begin{prop}\label{discL+} Let $H\in Equ_0(6)$ and parametrize $H$ with action-angle coordinates arising from the $T_{135}$ triangulation. If $J(H)=(-1,1)$, then $f_i<0$, $g_i>0$, and $h_i>0$, for all $i$, where \begin{align*} f_1= & d_2\sqrt{4-(d_2)^2}\text{sin}(\theta_2)\big(d_3d-((d_1)^2-(d_2)^2+(d_3)^2)\sqrt{4-(d_3)^2}\text{cos}(\theta_3)\big)-\\& d_3\sqrt{4-(d_3)^2}\text{sin}(\theta_3)\big(d_2d-((d_1)^2+(d_2)^2-(d_3)^2)\sqrt{4-(d_2)^2}\text{cos}(\theta_2)\big),\\ g_1= & \sqrt{4-(d_2)^2}\Big(\frac{-(d_1)^2+(d_2)^2+(d_3)^2}{2d_2 d_3}\text{cos}(\theta_2)\text{sin}(\theta_3)+\text{sin}(\theta_2)\text{cos}(\theta_3)\Big)\\ &-\frac{d\text{sin}(\theta_3)}{2d_3},\\ h_1= & \sqrt{4-(d_3)^2}\Big(\frac{-(d_1)^2+(d_2)^2+(d_3)^2}{2d_2 d_3}\text{cos}(\theta_3)\text{sin}(\theta_2)+\text{sin}(\theta_3)\text{cos}(\theta_2)\Big) \\ &-\frac{d\text{sin}(\theta_2)}{2d_2},\\ f_2 = & d_3\sqrt{4-(d_3)^2}\text{sin}(\theta_3)\big(d_1d-((d_1)^2+(d_2)^2-(d_3)^2)\sqrt{4-(d_1)^2}\text{cos}(\theta_1)\big)- \\& d_1\sqrt{4-(d_1)^2}\text{sin}(\theta_1)\big(d_3d-(-(d_1)^2+(d_2)^2+(d_3)^2)\sqrt{4-(d_3)^2}\text{cos}(\theta_3)\big),\\ g_2 = &\sqrt{4-(d_3)^2}\Big(\frac{(d_1)^2-(d_2)^2+(d_3)^2}{2d_1 
d_3}\text{cos}(\theta_3)\text{sin}(\theta_1)+\text{sin}(\theta_3)\text{cos}(\theta_1)\Big) \\ &-\frac{d\text{sin}(\theta_1)}{2d_1},\\ h_2 = &\sqrt{4-(d_1)^2}\Big(\frac{(d_1)^2-(d_2)^2+(d_3)^2}{2d_1 d_3}\text{cos}(\theta_1)\text{sin}(\theta_3)+\text{sin}(\theta_1)\text{cos}(\theta_3)\Big) \\ &-\frac{d\text{sin}(\theta_3)}{2d_3},\\ f_3 = & d_1\sqrt{4-(d_1)^2}\text{sin}(\theta_1)\big(d_2d-(-(d_1)^2+(d_2)^2+(d_3)^2)\sqrt{4-(d_2)^2}\text{cos}(\theta_2)\big)-\\&d_2\sqrt{4-(d_2)^2}\text{sin}(\theta_2)\big(d_1d-((d_1)^2-(d_2)^2+(d_3)^2)\sqrt{4-(d_1)^2}\text{cos}(\theta_1)\big),\\ g_3 = &\sqrt{4-(d_1)^2}\Big(\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}\text{cos}(\theta_1)\text{sin}(\theta_2)+\text{sin}(\theta_1)\text{cos}(\theta_2)\Big)\\ & -\frac{d\text{sin}(\theta_2)}{2d_2},\\ h_3 = &\sqrt{4-(d_2)^2}\Big( \frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}\text{cos}(\theta_2)\text{sin}(\theta_1)+\text{sin}(\theta_2)\text{cos}(\theta_1)\Big) \\ & -\frac{d\text{sin}(\theta_1)}{2d_1}. \end{align*} \end{prop} \textit{Proof:}\ Let $H\in Equ_0(6)$ and parametrize $H$ using action-angle coordinates for the $T_{135}$ triangulation. If $curl(H)=1$, then $\theta_i\in (0,\pi)$ for all $i$. Additionally, if $H$ has Joint Chirality-Curl $(-1,1)$, then all algebraic intersection numbers $\Delta_i$ must be negative. First we will consider the condition that $\Delta_4=-1$, meaning the algebraic intersection of $T_4$ with $H$ is $-1$. Since $\theta_2\in (0,\pi)$, this means that $e_1$ must pierce $T_4$. Therefore the line going through $v_1$ and $v_2$ must pass through $T_4$ and the following three inequalities must be satisfied: \begin{align} (v_2-v_1)\times(v_4-v_1)\centerdot(v_3-v_1)>0,\\ (v_2-v_1)\times(v_5-v_1)\centerdot(v_4-v_1)>0,\\ (v_2-v_1)\times(v_3-v_1)\centerdot(v_5-v_1)>0. \end{align} Since $\theta_i\in(0,\pi)$, $(v_2-v_1)\times(v_3-v_1)\centerdot(v_5-v_1)$ is always positive. Suppose $H$ is in standard position. 
Evaluating the remaining two expressions with action-angle coordinates from the $T_{135}$ triangulation results in $h_3>0$ and $f_3<0$. Additionally, the plane containing $v_3, v_4$, and $v_5$ must separate $v_1$ and $v_2$. Therefore the following must be negative: $$\big( (v_2-v_3)\times(v_5-v_3)\centerdot(v_4-v_3)\big)\big( (v_1-v_3)\times(v_5-v_3)\centerdot (v_4-v_3)\big)<0.$$ This constraint is equivalent to $g_3>0$. Similarly, the conditions that $\Delta_2=-1$ and $\Delta_6=-1$ are equivalent to $f_2<0, g_2>0, h_2>0$ and $f_1<0, g_1>0, h_1>0$, respectively. Therefore if $H\in Equ_0(6)$ is in standard position and has Joint Chirality-Curl $(-1,1)$, then $f_i<0$, $g_i>0$, and $h_i>0$ for all $i$. \hfill$\square$\\ Next we determine the possible dihedral angles for an equilateral hexagon to have Joint Chirality-Curl $(-1,1)$. \begin{lemma}\label{angles(-1,1)} Let $H\in Equ_{0}(6)$ and parametrize $H$ using action-angle coordinates with the $T_{135}$ triangulation. If $H$ has Joint Chirality-Curl $(-1,1)$, then $\theta_{i}\in (0,\pi)$ for all $i\in\{1,2,3\}$ and $\theta_{1}+\theta_{2}<\pi$, $\theta_{1}+\theta_{3}<\pi$, and $\theta_{2}+\theta_{3}<\pi$. \end{lemma} \textit{Proof:}\ Let $H\in Equ_{0}(6)$ be parametrized with action-angle coordinates $(d_1,d_2,d_3,\theta_1,\theta_2,\theta_3)$ arising from the $T_{135}$ triangulation. If $H$ has Joint Chirality-Curl $(-1,1)$, then from Lemma \ref{curl1} we know $\theta_{i}\in (0,\pi)$ for all $i\in\{1,2,3\}$. If $\theta_i\in(0,\frac{\pi}{2})$ for all $i\in\{1,2,3\}$, then clearly $\theta_{1}+\theta_{2}<\pi$, $\theta_{1}+\theta_{3}<\pi$, and $\theta_{2}+\theta_{3}<\pi$. Additionally if $\theta_i, \theta_j\in(0,\frac{\pi}{2})$, for any two distinct $i,j\in\{1,2,3\}$, then $\theta_i+\theta_j< \pi$. Next we will show that if $\theta_1\in(\frac{\pi}{2},\pi)$, $\theta_2, \theta_3\in(0,\frac{\pi}{2})$ and $H$ has Joint Chirality-Curl $(-1,1)$, then $\theta_{1}+\theta_{2}<\pi$ and $\theta_{1}+\theta_{3}<\pi$. 
Towards a contradiction, suppose that $\theta_1\in(\frac{\pi}{2},\pi)$, $\theta_2, \theta_3\in(0,\frac{\pi}{2})$, and $\theta_1+\theta_2= \pi$. Substituting $\theta_1=\pi-\theta_2$ into equation $g_3$ from Proposition \ref{discL+} and using the facts that $\text{cos}(\pi-\theta_2)=-\text{cos}(\theta_2)$ and $\text{sin}(\pi-\theta_2)=\text{sin}(\theta_2)$, we obtain the following \begin{align*} g_3&=\sqrt{4-(d_1)^2}\Big(-\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}+1\Big)\text{sin}(\theta_2)\text{cos}(\theta_2)-\frac{d\text{sin}(\theta_2)}{2d_2}. \end{align*} We make the same substitutions into equation $h_3$ to obtain \begin{align*} h_3&=\sqrt{4-(d_2)^2}\Big(\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}-1\Big)\text{sin}(\theta_2)\text{cos}(\theta_2)-\frac{d\text{sin}(\theta_1)}{2d_1}. \end{align*} If $\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}-1\ge0$, then $g_3$ is negative. Fix $d_1,d_2, d_3,$ and $\theta_2$, and now consider $g_3$ as a function of $\theta_1$. Since $\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}-1\ge0$, then $\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}>0$. This implies that the derivative of $g_3$ with respect to $\theta_1$, $$\sqrt{4-(d_1)^2}\Big(\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}(-\text{sin}(\theta_1))\text{sin}(\theta_2)+\text{cos}(\theta_1)\text{cos}(\theta_2)\Big),$$ is negative for $\theta_1\in(\frac{\pi}{2},\pi)$ and $\theta_2\in(0,\frac{\pi}{2})$. Since $g_3$ is negative for $\theta_1=\pi-\theta_2$ and $g_3$ is decreasing, $g_3$ is negative for all $\theta_1>\pi-\theta_2$. Next suppose $\frac{(d_1)^2+(d_2)^2-(d_3)^2}{2d_1 d_2}-1<0$, then $h_3$ is negative. Thus $h_3$ is negative for $\theta_1>\pi-\theta_2$. Since both $g_3$ and $h_3$ must be positive for $H$ to have Joint Chirality-Curl $(-1,1)$, we have reached a contradiction. Therefore if $\theta_1\in(0,\pi)$, $\theta_2, \theta_3\in(0,\frac{\pi}{2})$, and $\theta_1+\theta_2\ge \pi$, $H$ can not have Joint Chirality-Curl $(-1,1)$. 
Now suppose that $\theta_1\in(\frac{\pi}{2},\pi)$, $\theta_2, \theta_3\in(0,\frac{\pi}{2})$, and $\theta_1+\theta_3= \pi$. Similar to the previous argument, we will substitute $\theta_1=\pi-\theta_3$ into equation $g_2$ and use the facts that $\text{cos}(\pi-\theta_3)=-\text{cos}(\theta_3)$ and $\text{sin}(\pi-\theta_3)=\text{sin}(\theta_3)$. This results in the following: \begin{align*} g_2&=\sqrt{4-(d_3)^2}\Big(\frac{(d_1)^2-(d_2)^2+(d_3)^2}{2d_1 d_3} -1\Big)\text{cos}(\theta_3)\text{sin}(\theta_3)-\frac{d\text{sin}(\theta_1)}{2d_1}. \end{align*} Making the same substitutions into $h_2$ gives \begin{align*} h_2&=\sqrt{4-(d_1)^2}\Big(-\frac{(d_1)^2-(d_2)^2+(d_3)^2}{2d_1 d_3} +1\Big)\text{cos}(\theta_3)\text{sin}(\theta_3)-\frac{d\text{sin}(\theta_3)}{2d_3}. \end{align*} If $\frac{(d_1)^2-(d_2)^2+(d_3)^2}{2d_1 d_3} -1\le 0$, then $g_2$ is negative. If $\frac{(d_1)^2-(d_2)^2+(d_3)^2}{2d_1 d_3} -1> 0$, then $h_2$ is negative. Since both equations must be positive to have Joint Chirality-Curl $(-1,1)$, we have reached a contradiction. Therefore if $H$ is a left-handed trefoil with $curl(H)=1$ and $\theta_1\in(\frac{\pi}{2},\pi)$, $\theta_2, \theta_3\in(0,\frac{\pi}{2})$, then $\theta_{1}+\theta_{2}<\pi$ and $\theta_{1}+\theta_{3}<\pi$. Again the cases when $\theta_2\in(\frac{\pi}{2}, \pi)$, $\theta_1, \theta_3\in(0,\frac{\pi}{2})$ and $\theta_3\in(\frac{\pi}{2}, \pi)$, $\theta_1, \theta_2\in(0,\frac{\pi}{2})$ are proven in the same manner. Thus if $H$ has Joint Chirality-Curl $(-1,1)$, then $\theta_{i}\in (0,\pi)$ for all $i\in\{1,2,3\}$ and $\theta_{1}+\theta_{2}<\pi$, $\theta_{1}+\theta_{3}<\pi$, and $\theta_{2}+\theta_{3}<\pi$. \hfill$\square$\\ Next we consider the case where the three diagonals of the $T_{135}$ triangulation are distinct.\\ \begin{lemma}\label{polytopeportionL+} Let $H\in Equ_{0}(6)$ and parametrize $H$ using action-angle coordinates with the $T_{135}$ triangulation. Suppose $d_{1}, d_{2},$ and $d_{3}$ are distinct and let $d_{i}>d_{j}, d_{k}$. 
If $J(H)=(-1,1)$ then $\theta_{i}\in(0,\pi)$ and $\theta_{j},\theta_{k}\in(0,\pi/2)$. Moreover, if $d_{i}>\sqrt{(d_{j})^2+(d_{k})^2}$ then $\theta_{i}\in(\pi/2,\pi)$.\\ \end{lemma} \textit{Proof:}\ Let $H\in Equ_{0}(6)$ be in standard position so that $v_1$, $v_3$, and $v_5$ are on the $xy$-plane. Suppose that the lengths of the diagonals are distinct and that $d_2>d_1>d_3$. We will show that if $H$ has Joint Chirality-Curl $(-1,1)$ then $\theta_2\in (0,\pi)$ and $\theta_1,\theta_3\in(0,\frac{\pi}{2})$. Let $l_2$ be the perpendicular bisector of the segment connecting $v_1$ and $v_3$. Similarly, we define $l_4$ and $l_6$ to be the perpendicular bisectors of the segments connecting $v_3$ to $v_5$ and $v_5$ to $v_1$, respectively. The three lines intersect in a unique point, $k$, the circumcenter of the triangle spanned by $(v_1, v_3, v_5)$. The orthogonal projection of $v_i$ onto the $xy$-plane lies on $l_i$. In addition, $k$ is the orthogonal projection of where all vertices coincide, if such a point exists. Since $d_1>d_3$, $l_4$ intersects the segment connecting $v_1$ to $v_3$ instead of the segment connecting $v_1$ and $v_5$. Suppose towards contradiction that $\theta_3\in(\frac{\pi}{2},\pi)$. Then the plane perpendicular to the $xy$-plane containing $l_4$ separates $e_3$ and $T_6$. Therefore $H$ can not have Joint Chirality-Curl $(-1,1)$ if $\theta_3\in(\frac{\pi}{2},\pi)$. Next suppose that $\theta_1\in(\frac{\pi}{2},\pi)$. Since $d_2>d_1$, $l_6$ intersects the segment connecting $v_3$ and $v_5$. This means the plane perpendicular to the $xy$-plane containing $l_6$ separates $e_5$ and $T_2$. Therefore we have reached a contradiction and $\theta_1\in (0,\frac{\pi}{2})$. Let $\psi_3$ be the angle for $\theta_3$ for which $v_6$ projects onto $k$. If $\theta_3\in(\psi_3,\pi)$, then the plane perpendicular to the $xy$-plane containing $l_4$ still separates $e_3$ and $T_6$. Thus if $H$ has Joint Chirality-Curl $(-1,1)$, then $\theta_3\in(0,\psi_3)$. 
Let $\psi_1$ be the angle for $\theta_1$ for which $v_2$ projects onto $k$. If $e_5$ is to intersect $T_2$, then $\theta_1\in(0,\psi_1)$. Let $\psi_2$ be the angle for $\theta_2$ for which $v_4$ projects onto $k$. Since $\theta_1\in (0,\psi_1)$ and $\theta_3\in (0,\psi_3)$, we must have $\theta_2\in (\psi_2,\pi)$ for $H$ to have Joint Chirality-Curl $(-1,1)$. If $d_2>\sqrt{(d_1)^2+(d_3)^2}$ then the triangle spanned by $(v_1, v_3, v_5)$ is obtuse. Therefore $k$ lies outside the triangle spanned by $(v_1, v_3, v_5)$ and $\psi_2>\frac{\pi}{2}$. Hence if $d_2>\sqrt{(d_1)^2+(d_3)^2}$ then $\theta_1,\theta_3\in (0,\frac{\pi}{2})$ and $\theta_2\in (\frac{\pi}{2},\pi)$. \hfill$\square$\\ \section{Knotting Probability of Hexagonal Trefoils} In this section, we will discuss the probability that a random equilateral hexagon is knotted. It has been proven that at least $\frac{1}{3}$ of hexagons with total length $2$ are unknotted \cite{JC}. Using action-angle coordinates and Calvo's geometric invariant $curl$, Cantarella and Shonkwiler \cite{JC} prove that at least $\frac{1}{2}$ of the space of equilateral hexagons consists of unknots. In order to gain intuition on the tightness of these bounds, we performed a Monte Carlo experiment. We randomly sampled a point in the moment polytope $P_6$ and a point in the cube $[0,2\pi]^3$. We then tested whether these points satisfy the necessary constraints, described in Proposition \ref{disc}, for a hexagonal trefoil with Joint Chirality-Curl $(1,1)$. Taking a sample size of $10$ million configurations and repeating this experiment multiple times, we found that on average the fraction of $(1,1)$ trefoils is $3.426005\times 10^{-5}$, with standard deviation $2.241511 \times 10^{-6}$. Since there are four types of trefoils, we estimate that the knotting probability for equilateral hexagons is about $1.370402 \times 10^{-4}$. Using the lemmas from the previous section, we improve the theoretical bound.
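A minimal version of this sampling experiment can be run with the standard library alone. Since the full Joint Chirality-Curl test of Proposition \ref{disc} is not restated here, the sketch below instead estimates the volume fractions that enter the bound proved next ($\frac{1}{2}$, $\frac{\pi-2}{6}$ and $\frac{1}{192}$ are the values derived in the following proof), together with the closed-form arithmetic of the final bound; describing $P_6$ as the triangle-inequality region of $[0,2]^3$ is our assumption, consistent with its stated volume $4$.

```python
import math
import random

random.seed(0)
N = 200_000

in_p6 = obtuse_d1 = angle_region = 0
for _ in range(N):
    d1, d2, d3 = (random.uniform(0, 2) for _ in range(3))
    # Assumption: P_6 is the subset of [0,2]^3 where the three diagonals
    # satisfy the triangle inequalities (consistent with volume 4 = half of 8).
    if d1 < d2 + d3 and d2 < d1 + d3 and d3 < d1 + d2:
        in_p6 += 1
        # Portion of P_6 where d_1 is largest and the angle opposite d_1 is obtuse.
        if d1 > d2 and d1 > d3 and d1 * d1 > d2 * d2 + d3 * d3:
            obtuse_d1 += 1
    t1, t2, t3 = (random.uniform(0, 2 * math.pi) for _ in range(3))
    # Angle region from the lemmas: theta_1 in (pi/2, pi),
    # theta_2, theta_3 in (0, pi/2), theta_1+theta_2 < pi, theta_1+theta_3 < pi.
    if (math.pi / 2 < t1 < math.pi and 0 < t2 < math.pi / 2
            and 0 < t3 < math.pi / 2 and t1 + t2 < math.pi and t1 + t3 < math.pi):
        angle_region += 1

print(in_p6 / N)          # should be close to 1/2
print(obtuse_d1 / in_p6)  # should be close to (pi - 2)/6
print(angle_region / N)   # should be close to 1/192

# Exact arithmetic behind the final bound:
assert math.isclose((math.pi / 2 - 1) / 192 + (2 - math.pi / 2) / 48,
                    7 / 192 - math.pi / 128)
assert math.isclose(2 * (7 / 192 - math.pi / 128), (14 - 3 * math.pi) / 192)
```

The Monte Carlo fractions agree with the analytic volumes to within sampling error, which is a useful sanity check on the integrations carried out in the proof.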
\begin{thm} The probability that an equilateral hexagon is knotted is at most $\frac{14-3\pi}{192}$. \end{thm} \textit{Proof:}\ Let $H \in Equ_0(6)$. We will choose the $T_{135}$ triangulation of $H$ to form our set of action-angle coordinates: $\alpha:P_6\times T^{3}\to Pol_0(6)$. Since almost all of $Pol_0(6)$ is a toric symplectic manifold, Theorem \ref{DH} holds for integrals over this space. First we will bound the probability that $H=(d_1,d_2,d_3,\theta_1,\theta_2, \theta_3)$ has $curl=1$. Suppose that $H$ is in general position so that the lengths of the diagonals are distinct. Without loss of generality, we assume that $d_1$ is the largest of the three diagonals. The moment polytope, $P_6$, corresponding to the $T_{135}$ triangulation is $\frac{1}{2}$ of the cube $[0,2]^3$. Therefore the volume of $P_6$ is $4$. The conditions specifying which diagonal is greater than the other two divide the moment polytope into $3$ regions of equal volume $\frac{4}{3}$. From Lemma \ref{polytopeportionR+} and Lemma \ref{polytopeportionL+}, if $d_1>d_2,d_3$ and $curl(H)=1$, then $\theta_1\in(0,\pi)$, $\theta_2\in(0,\frac{\pi}{2})$, and $\theta_3\in(0,\frac{\pi}{2})$. Additionally, if $(d_1)^2>(d_2)^2+(d_3)^2$, then $\theta_1\in(\frac{\pi}{2},\pi)$. From Lemma \ref{angles} and Lemma \ref{angles(-1,1)}, we know that $\theta_1+\theta_2<\pi$ and $\theta_1+\theta_3<\pi$, as shown in Figure \ref{torussmallest}. \begin{figure}[h] \centerline{\includegraphics[width=.5\textwidth]{polytopeobtuse.pdf}\includegraphics[width=.45\textwidth]{torussmallestportion.pdf}} \caption{The figure on the left shows the portion of the moment polytope $P_6$ where $d_1>d_2,d_3$ and $(d_1)^2>(d_2)^2+(d_3)^2$.
The figure on the right shows the portion of cube $[0,2\pi]^3$ where $\theta_1\in(\frac{\pi}{2},\pi)$, $\theta_2,\theta_3\in(0,\frac{\pi}{2})$, $\theta_1+\theta_2<\pi$, and $\theta_1+\theta_3<\pi$.} \label{torussmallest} \end{figure} Using standard integration, we calculate that the volume of the portion of $P_6$ where $d_1>d_2$, $d_1>d_3$, and $(d_1)^2>(d_2)^2+(d_3)^2$ is equal to $\frac{2(\pi-2)}{3}$. The ratio of this volume to that of a third of $P_6$ is $\frac{\pi}{2}-1$. The region where $\theta_1\in(\frac{\pi}{2},\pi)$, $\theta_2\in(0,\frac{\pi}{2})$, $\theta_3\in(0,\frac{\pi}{2})$, $\theta_1+\theta_2<\pi$ and $\theta_1+\theta_3<\pi$ is $\frac{1}{192}$ of the cube $[0,2\pi]^3$. The portion of $P_6$ where $d_1>d_2,d_3$ and $(d_1)^2<(d_2)^2+(d_3)^2$ occupies the fraction $2-\frac{\pi}{2}$ of the volume of $\frac{1}{3}$ of $P_6$. The region where $\theta_1\in(0,\pi)$, $\theta_2\in(0,\frac{\pi}{2})$, $\theta_3\in(0,\frac{\pi}{2})$, $\theta_1+\theta_2<\pi$ and $\theta_1+\theta_3<\pi$, shown in Figure \ref{torussmall}, is $\frac{1}{48}$ of the cube $[0,2\pi]^3$. \begin{figure}[h] \centerline{\includegraphics[width=.5\textwidth]{polytopeacute.pdf}\includegraphics[width=.45\textwidth]{torussmallportion.pdf}} \caption{The figure on the left shows the portion of the moment polytope $P_6$ where $d_1>d_2,d_3$ and $(d_1)^2<(d_2)^2+(d_3)^2$. The figure on the right shows the portion of cube $[0,2\pi]^3$ where $\theta_1\in(0,\pi)$, $\theta_2,\theta_3\in(0,\frac{\pi}{2})$, $\theta_1+\theta_2<\pi$, and $\theta_1+\theta_3<\pi$.} \label{torussmall} \end{figure} Therefore the probability that $curl(H)=1$ is bounded above by $$\Big(\frac{\pi}{2}-1\Big)\Big(\frac{1}{192}\Big)+\Big(2-\frac{\pi}{2}\Big)\Big(\frac{1}{48}\Big)=\frac{7}{192}-\frac{\pi}{128}.$$ Making a similar argument for $curl(H)=-1$, we see that the knotting probability is at most $$2\Big(\frac{7}{192}-\frac{\pi}{128}\Big)=\frac{14-3\pi}{192},$$ as desired.
\hfill$\square$ \\ \newpage \section{Acknowledgements} The author would like to thank Ken Millett for his guidance on this project while at the University of California, Santa Barbara. The author would also like to thank Jorge Calvo, Jason Cantarella, and Clay Shonkwiler, whose work inspired this project. \bibliographystyle{JHEP3}
https://arxiv.org/abs/1512.04797
Pontryagin maximum principle for optimal sampled-data control problems
In this short communication, we first recall a version of the Pontryagin maximum principle for general finite-dimensional nonlinear optimal sampled-data control problems. This result was recently obtained in [L. Bourdin and E. Trélat, Optimal sampled-data control, and generalizations on time scales, arXiv:1501.07361, 2015]. Then we discuss the maximization condition for optimal sampled-data controls that can be seen as an average of the weak maximization condition stated in the classical Pontryagin maximum principle for optimal (permanent) controls. Finally, applying this theorem, we solve a linear-quadratic example based on the classical parking problem.
\section{Introduction} Optimal control theory is concerned with the analysis of controlled dynamical systems, where one aims at steering such a system from a given configuration to some desired target by minimizing some criterion. The Pontryagin maximum principle (in short, PMP), established at the end of the 50's for general finite-dimensional nonlinear continuous-time dynamics (see \cite{pont}, and see \cite{gamk} for the history of this discovery), is certainly the milestone of the classical optimal control theory. It provides a first-order necessary condition for optimality, by asserting that any optimal trajectory must be the projection of an extremal. The PMP then reduces the search of optimal trajectories to a boundary value problem posed on extremals. Optimal control theory, and in particular the PMP, has an immense field of applications in various domains, and it is not our aim here to list them. We speak of a {\em purely continuous-time optimal control problem}, when both the state $q$ and the control $u$ evolve continuously in time, and the control system under consideration has the form $$ \dot q(t) = f(t,q(t),u(t)), \; \text{for a.e. } t \in \mathbb{R}^+, $$ where $q(t) \in \mathbb{R}^n$ and $u(t) \in \Omega \subset \mathbb{R}^m$. Such models assume that the control is permanent, that is, the value of $u(t)$ can be chosen at each time $t \in \mathbb{R}^+$. We refer the reader to textbooks on continuous optimal control theory such as \cite{agrach,Bon-Chy03,trel2,bres,brys,BulloLewis,hest,Jurdjevic,lee,pont,Schattler,seth,trel} for many examples of theoretical or practical applications. We speak of a {\em purely discrete-time optimal control problem}, when both the state $q$ and the control $u$ evolve in a discrete way in time, and the control system under consideration has the form $$ q_{k+1}-q_k = f(k,q_k,u_k), \; k \in \mathbb{N}, $$ where $q_k \in \mathbb{R}^n$ and $u_k \in \Omega \subset \mathbb{R}^m$. 
As in the continuous case, such models assume that the control is permanent, that is, the value of $u_k$ can be chosen at each time $k \in \mathbb{N}$. A version of the PMP for such discrete-time control systems has been established in \cite{Halkin,holt2,holt} under appropriate convexity assumptions. The considerable development of the discrete-time control theory was in particular motivated by the need of considering digital systems or discrete approximations in numerical simulations of differential control systems (see the textbooks \cite{bolt,cano,mord,seth}). It can be noted that some early works devoted to the discrete-time PMP (like \cite{fan}) are mathematically incorrect. Some counterexamples were provided in \cite{bolt} (see also \cite{mord}), showing that, as is now well known, the exact analogous of the continuous-time PMP does not hold at the discrete level. More precisely, the maximization condition of the continuous-time PMP cannot be expected to hold in general in the discrete-time case. Nevertheless, a weaker condition can be derived, in terms of nonpositive gradient condition (see Theorem 42.1 in \cite{bolt}). We speak of an {\em optimal sampled-data control problem}, when the state $q$ evolves continuously in time, whereas the control $u$ evolves in a discrete way in time. This hybrid situation is often considered in practice for problems in which the evolution of the state is very quick (and thus can be considered continuous) with respect to that of the control. We often speak, in that case, of {\em digital control}. This refers to a situation where, due for instance to hardware limitations or to technical difficulties, the value $u(t)$ of the control can be chosen only at times $t=kT$, where $T>0$ is fixed and $k \in \mathbb{N}$. This means that, once the value $u(kT)$ is fixed, $u(t)$ remains constant over the time interval $[kT,(k+1)T)$. Hence the trajectory $q$ evolves according to $$ \dot q(t) = f(t,q(t),u(kT)), \; \text{for a.e. 
} t \in [kT,(k+1)T),\; k\in\mathbb{N}. $$ In other words, this {\em sample-and-hold} procedure consists of ``freezing'' the value of $u$ at each \textit{controlling time} $t=kT$ on the corresponding \textit{sampling time interval} $[kT,(k+1)T)$, where $T$ is called the \textit{sampling period}. In this situation, the control of the system is clearly nonpermanent. To the best of our knowledge, classical optimal control theory does not treat general nonlinear optimal sampled-data control problems, but concerns either purely continuous-time, or purely discrete-time optimal (permanent) control problems. In \cite{bourdin-trelat-pontryagin2} we provided a version of the PMP that can be applied to general nonlinear optimal sampled-data control problems.\footnote{Actually we established in \cite{bourdin-trelat-pontryagin2} a PMP in the much more general framework of {\em time scales}, which unifies and extends continuous-time and discrete-time issues. But it is not our aim here to enunciate this result in its whole generality.} In this short communication, we first recall in Section~\ref{section1} the above-mentioned PMP. Then a discussion is provided concerning the maximization condition for optimal sampled-data controls, which can be seen as an average of the weak maximization condition stated in the classical PMP for optimal (permanent) controls. Finally, in Section~\ref{section2}, we solve a linear-quadratic example based on the classical parking problem. \section{Main result}\label{section1} Let $m$, $n$ and $j$ be nonzero integers. In the sequel, we denote by $\langle \cdot , \cdot \rangle_n$ the classical scalar product in $\mathbb{R}^n$. Let $T>0$ be an arbitrary sampling period. In what follows, for any real number $t$, we denote by $E(t)$ the integer part of $t$, defined as the unique integer such that $E(t)\leq t< E(t)+1$. Note that $k = E(t/T)$ whenever $kT\leq t<(k+1)T$.
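The sample-and-hold rule above is straightforward to simulate. The following sketch (a toy illustration, not from the paper) integrates a scalar system $\dot x = -x + u$ with the control frozen on each sampling interval $[kT,(k+1)T)$; for this linear system the per-interval closed form $x((k+1)T)=x(kT)e^{-T}+u(kT)(1-e^{-T})$ gives an independent check on the crude Euler sub-stepping.

```python
import math

def simulate_zoh(f, x0, controls, T, substeps=5000):
    """Integrate x' = f(t, x, u) where the control is held constant at
    controls[k] on the sampling interval [kT, (k+1)T), i.e. k = E(t/T)."""
    x, t = x0, 0.0
    dt = T / substeps
    for u in controls:                 # one pass per sampling interval
        for _ in range(substeps):
            x += dt * f(t, x, u)       # forward Euler inside the interval
            t += dt
    return x

# Zero-order hold on x' = -x + u with T = 0.5 and two held control values.
x = simulate_zoh(lambda t, x, u: -x + u, 1.0, [0.3, 0.3], 0.5)

# Exact per-interval update for this linear system, applied twice:
e = math.exp(-0.5)
exact = (1.0 * e + 0.3 * (1 - e)) * e + 0.3 * (1 - e)
print(abs(x - exact))  # small Euler discretization error
```

Any off-the-shelf ODE integrator could replace the inner Euler loop; the essential point is only that $u$ changes value at the controlling times $t=kT$ and nowhere else.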
In this section, we are interested in the general nonlinear optimal sampled-data control problem given by {\small \begin{equation*} {\bf (OSDCP)} \; \left\{\begin{split} & \min \int_0^{t_f} f^0 ( \tau, q(\tau), u( kT ) ) \, d\tau , \quad \textrm{with}\ k=E(\tau/T) , \\ & \dot q(t) = f ( t,q(t), u( kT ) ), \quad \textrm{with}\ k=E(t/T) , \\[5pt] & u(kT) \in \Omega , \\[5pt] & g(q(0),q(t_f)) \in \mathrm{S} . \end{split}\right. \end{equation*}}Here, $f: \mathbb{R}\times \mathbb{R}^{n} \times \mathbb{R}^{m} \rightarrow \mathbb{R}^{n}$, $f^0: \mathbb{R}\times \mathbb{R}^{n} \times \mathbb{R}^{m} \rightarrow \mathbb{R}$ and $g: \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^j$ are of class $\mathscr{C}^1$, and $\Omega$ (resp., $\mathrm{S}$) is a non-empty closed convex subset of $\mathbb{R}^m$ (resp., of $\mathbb{R}^j$). The final time $t_f \geq 0$ can be fixed or not. Recall that $g$ is said to be submersive at a point $(q_1,q_2) \in \mathbb{R}^n \times \mathbb{R}^n$ if the differential of $g$ at this point is surjective. We define as usual the Hamiltonian $H:\mathbb{R}\times\mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^m\rightarrow \mathbb{R}$ by $$ H(t,q,p,p^0,u) = \langle p, f(t,q,u) \rangle_{n} + p^0 f^0(t,q,u) . $$ \subsection{Statement} In \cite{bourdin-trelat-pontryagin2} we proved the following theorem. \begin{theorem}[PMP for ${\bf (OSDCP)}$]\label{thmmainintro} If a trajectory $q$, defined on $[0,t_f]$ and associated with a sampled-data control $u$, is an optimal solution of ${\bf (OSDCP)}$, then there exists a nontrivial couple $(p,p^0)$, where $p : [0,t_f] \rightarrow \mathbb{R}^n$ is an absolutely continuous mapping (called adjoint vector) and $p^0 \leq 0$, such that the following conditions hold: \begin{itemize} \item \textbf{Extremal equations}: $$ \dot q(t) = \partial_p H (t,q(t),p(t),p^0,u(kT)), $$ $$ \dot p(t) = -\partial_q H (t,q(t),p(t),p^0,u(kT)) ,$$ for almost every $t\in[0,t_f)$, with $k=E(t/T)$. 
\\ \item \textbf{Maximization condition}:\\ For every controlling time $kT \in [0,t_f)$ such that $(k+1)T \leq t_f$, we have \begin{multline}\label{secondconditionintro} \Big\langle \dfrac{1}{T} \int_{kT}^{(k+1)T} \partial_u H (\tau,q(\tau),p(\tau),p^0,u(kT)) \; d\tau \\ , \; y-u(kT) \Big\rangle_{m} \leq 0, \end{multline} for every $y \in \Omega$. In the case where $kT \in [0,t_f)$ with $(k+1)T > t_f$, the above maximization condition is still valid provided $\frac{1}{T}$ is replaced with $\frac{1}{t_f - kT}$ and $(k+1)T$ is replaced with $t_f$. \\ \item \textbf{Transversality conditions on the adjoint vector}:\\ If $g$ is submersive at $(q(0),q(t_f))$, then the nontrivial couple $(p,p^0)$ can be selected to satisfy $$ p(0) = - \partial_1 g (q(0),q(t_f))^\top \psi, $$ $$ p(t_f) = \partial_2 g (q(0),q(t_f)) ^\top \psi, $$ where $-\psi$ belongs to the orthogonal of $\mathrm{S}$ at the point $g (q(0),q(t_f)) \in \mathrm{S}$. \\ \item \textbf{Transversality condition on the final time}:\\ If the final time is left free in the optimal sampled-data control problem ${\bf (OSDCP)}$ and if $t_f>0$, then the nontrivial couple $(p,p^0)$ can moreover be selected to satisfy \begin{equation*} H(t_f, q(t_f), p(t_f),p^0,u(k_f T) ) = 0, \end{equation*} where $k_f =E(t_f/T)$ whenever $t_f \notin \mathbb{N} T$, and $k_f=E(t_f/T)-1$ whenever $t_f \in \mathbb{N} T$. \end{itemize} \end{theorem} The maximization condition~\eqref{secondconditionintro}, which is satisfied for every $y \in \Omega$, gives a necessary condition allowing one to compute $u(kT)$, in general, for all controlling times $kT \in [0,t_f)$. We will solve in Section~\ref{section2} an example of an optimal sampled-data control problem, and show how these computations can be done in a simple way. \begin{remark}\label{remarknormaliser} As is well known, the nontrivial couple $(p,p^0)$ of Theorem~\ref{thmmainintro}, which is a Lagrange multiplier, is defined up to a multiplicative scalar.
Defining as usual an \textit{extremal} as a quadruple $(q,p,p^0,u)$ solution of the extremal equations, an extremal is said to be \textit{normal} whenever $p^0\neq 0$ and \textit{abnormal} whenever $p^0=0$. In the normal case $p^0\neq 0$, it is usual to normalize the Lagrange multiplier so that $p^0=-1$. \end{remark} \begin{remark}\label{remarkconditionsterminales} Let us describe some typical situations of terminal conditions $g(q(0),q(t_f)) \in \mathrm{S}$ in ${\bf (OSDCP)}$, and of the corresponding transversality conditions on the adjoint vector. \begin{itemize} \item If the initial and final points are fixed in ${\bf (OSDCP)}$, that is, if we impose $q(0) = q_0$ and $q(t_f) = q_f$, then $j=2n$, $g(q_1,q_2) = (q_1,q_2)$ and $\mathrm{S} = \{ q_0 \} \times \{ q_f \}$. In that case, the transversality conditions on the adjoint vector give no additional information. \item If the initial point is fixed, that is, if we impose $q(0) = q_0$, and if the final point is left free in ${\bf (OSDCP)}$, then $j=n$, $g(q_1,q_2) = q_1$ and $\mathrm{S} = \{ q_0 \} $. In that case, the transversality conditions on the adjoint vector imply that $p(t_f) = 0$. Moreover, we have $p^0 \neq 0$\footnote{Indeed, if $p^0 =0$, then the adjoint vector $p$ is trivial from the extremal equation and from the final condition $p(t_f)=0$. This leads to a contradiction since the couple $(p,p^0)$ has to be nontrivial.} and we can normalize the Lagrange multiplier so that $p^0=-1$ (see Remark~\ref{remarknormaliser}). \item If the periodic condition $q(0)=q(t_f)$ is imposed in ${\bf (OSDCP)}$, then $j=n$, $g(q_1,q_2) = q_1- q_2$ and $\mathrm{S} = \{ 0 \}$. In that case, the transversality conditions on the adjoint vector yield that $p(0) = p(t_f)$. \end{itemize} We stress that, in all examples above, the function $g$ is indeed a submersion. 
\end{remark} \begin{remark} In \cite{bourdin-trelat-pontryagin2} we also provided a result stating the existence of optimal solutions for ${\bf (OSDCP)}$, under some appropriate compactness and convexity assumptions. Actually, once the existence of an optimal solution is established, the necessary conditions provided in Theorem~\ref{thmmainintro} may be used to prove its uniqueness. \end{remark} \subsection{Averaging of the classical weak maximization condition}\label{section12} Let us compare the maximization condition~\eqref{secondconditionintro} with that of the classical PMP. Let us consider the following general nonlinear optimal (permanent) control problem \begin{equation*} {\bf (OCP)} \; \left\{\begin{split} & \min \int_0^{t_f} f^0 ( \tau, q(\tau), u( \tau ) ) \, d\tau , \\ & \dot q(t) = f ( t,q(t), u( t ) ),\\[5pt] & u(t) \in \Omega , \\[5pt] & g(q(0),q(t_f)) \in \mathrm{S} . \end{split}\right. \end{equation*} In the sequel, we will denote by $u^*$ an optimal (permanent) control. In the case of ${\bf (OCP)}$, the statement of the classical PMP coincides with that of Theorem~\ref{thmmainintro}, except for the maximization condition~\eqref{secondconditionintro}.\footnote{Actually the transversality condition on the final time is slightly different. Precisely, if the final time is left free in the optimal control problem ${\bf (OCP)}$ and if $t_f>0$, then the nontrivial couple $(p,p^0)$ can be selected such that the function $t \mapsto H(t,q(t),p(t),p^0,u^*(t))$ is equal almost everywhere to a continuous function vanishing at $t=t_f$.} Indeed, the maximization condition in the classical PMP is famously given by \begin{equation}\label{eqstrongmaxcondition} u^*(t) \in \argmax_{y \in \Omega} H(t,q(t),p(t),p^0,y) , \end{equation} for a.e. $t \in [0,t_f)$.
Note that \eqref{eqstrongmaxcondition} can be directly weakened as follows: \begin{equation}\label{eqweakmaxcondition} \left\langle \partial_u H (t,q(t),p(t),p^0,u^*(t)) , y-u^*(t) \right\rangle_{m} \leq 0, \end{equation} for every $y \in \Omega$ and for a.e. $t \in [0,t_f)$. If the classical PMP is stated with the nonpositive gradient condition~\eqref{eqweakmaxcondition}, one speaks of the \textit{weak formulation of the classical PMP}.\footnote{As mentioned in the introduction, only the weak formulation of the classical PMP can be extended to the discrete case. To extend the strong formulation of the classical PMP to the discrete case, one has to consider additional convexity assumptions on the dynamics, see Remark~\ref{hamiltonianconcave} or \cite{Halkin,holt2,holt} for example.} \medskip It is worth emphasizing that the maximization condition~\eqref{secondconditionintro} given in Theorem~\ref{thmmainintro} can be seen as an average of the weak maximization condition~\eqref{eqweakmaxcondition} given in the classical PMP. For this reason we speak of a \textit{nonpositive average gradient condition}. \begin{remark}\label{hamiltonianconcave} In the case where the Hamiltonian $H$ is concave in $u$, the strong and the weak formulations of the classical PMP are obviously equivalent. In a similar way, if $H$ is concave in $u$, note that the maximization condition~\eqref{secondconditionintro} in Theorem~\ref{thmmainintro} can be written as $$ u(kT) \in \argmax_{y \in \Omega} \, \dfrac{1}{T} \int_{kT}^{(k+1)T} H (\tau,q(\tau),p(\tau),p^0,y) \; d\tau ,$$ for all controlling times $kT \in [0,t_f)$. In the case where $(k+1)T > t_f$, the above maximization condition is still valid provided $\frac{1}{T}$ is replaced with $\frac{1}{t_f - kT}$ and $(k+1)T$ is replaced with $t_f$. In that case we speak of \textit{pointwise maximization of the average Hamiltonian}.
\end{remark} \section{The parking problem}\label{section2} In this section, we consider the classical double integrator $$ \ddot{q} = u , \quad u \in [-1,1], $$ which can represent a car with position $q \in \mathbb{R}$ and with bounded acceleration $u$ acting as the control. Let us study the classical problem of parking the car at the origin, from an initial position $M > 0$ and with a fixed final time $t_f > 0$, minimizing the energy $$ \int_0^{t_f} u^2 \, d\tau. $$ In the sequel we first review the classical permanent control case (solved with the help of the classical PMP). Then we solve the sampled-data control case with the help of Theorem~\ref{thmmainintro} and compare the two situations. \subsection{Review of the permanent control case}\label{section21} The above optimal control problem, in the permanent control case, can be summarized as follows: \begin{equation*} \left\{\begin{split} & \min \int_0^{t_f} u(\tau)^2 \, d\tau , \\[5pt] & \left( \begin{array}{c} \dot{q}_1(t) \\ \dot{q}_2(t) \end{array} \right) = \left( \begin{array}{c} q_2(t) \\ u(t) \end{array} \right), \\[5pt] & u(t) \in [-1,1] , \\[5pt] & \left( \begin{array}{c} q_1(0) \\ q_2(0) \end{array} \right) = \left( \begin{array}{c} M \\ 0 \end{array} \right), \quad \left( \begin{array}{c} q_1(t_f) \\ q_2(t_f) \end{array} \right) = \left( \begin{array}{c} 0 \\ 0 \end{array} \right). \end{split}\right. \end{equation*} In the sequel we assume that $t_f^2 > 4M$ in order to ensure the existence of a solution. From the classical PMP, one can prove that, if $4M < t_f^2 < 6M$, the optimal (permanent) control $u^*$ is given by $$ u^* (t) = \left\lbrace \begin{array}{lcl} -1 & \text{if} & 0 \leq t \leq t_1, \\ \\ \dfrac{2t-t_f}{\sqrt{3(t_f^2 - 4M)}} & \text{if} & t_1 \leq t \leq t_f - t_1 , \\ \\ 1 & \text{if} & t_f - t_1 \leq t \leq t_f, \end{array} \right. $$ where $ t_1 = \frac{1}{2} (t_f - \sqrt{3(t_f^2 - 4M)}) < \frac{t_f}{2}$, see Figure~\ref{Fig1}.
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=0.8] \draw[->] (-0.5,0)--(10,0); \draw[->] (0,-2)--(0,2); \draw [red,domain=0:3*((3-sqrt(3))/2)] plot (\x,{-1}); \draw [red,domain=(9-3*((3-sqrt(3))/2)):9] plot (\x,{1}); \draw [red,domain=3*((3-sqrt(3))/2):(9-3*((3-sqrt(3))/2))] plot (\x,{ (2*\x-9)/(3*sqrt(3)) }); \node at (-0.3,-0.35) {$0$}; \node at (9,0) {$|$}; \node at (9,-0.5) {$t_f$}; \node at (1.9,0) {$|$}; \node at (1.9,-0.5) {$t_1$}; \node at (7.09,0) {$|$}; \node at (7.09,-0.5) {$t_f-t_1$}; \node at (6,1) {\textcolor{red}{$u^*$}}; \node at (0,1) {$-$}; \node at (0,-1) {$-$}; \end{tikzpicture} \caption{Optimal (permanent) control, if $4M < t_f^2 < 6M$}\label{Fig1} \end{center} \end{figure} If $6M \leq t_f^2$, one can prove that the optimal (permanent) control $u^*$ is given by $$ u^*(t) = \dfrac{6M}{t_f^3 } (2t - t_f), \quad t \in [0,t_f],$$ see Figure~\ref{Fig2}. \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=0.8] \draw[->] (-0.5,0)--(10,0); \draw[->] (0,-2)--(0,2); \draw [red,domain=0:8] plot (\x,{ (12*(2*\x-8))/(16*8)}); \node at (-0.3,-0.35) {$0$}; \node at (8,0) {$|$}; \node at (8,-0.5) {$t_f$}; \node at (6,0.8) {\textcolor{red}{$u^*$}}; \node at (0,1) {$-$}; \node at (0,-1) {$-$}; \end{tikzpicture} \caption{Optimal (permanent) control, if $6M \leq t_f^2 $}\label{Fig2} \end{center} \end{figure} \subsection{The sampled-data control case}\label{sectionexamplesampled} In this section, we consider the corresponding optimal sampled-data control problem given by \begin{equation*} \left\{\begin{split} & \min \int_0^{t_f} u(kT)^2 \, d\tau , \quad \textrm{with}\ k=E(\tau/T) , \\[5pt] & \left( \begin{array}{c} \dot{q}_1(t) \\ \dot{q}_2(t) \end{array} \right) = \left( \begin{array}{c} q_2(t) \\ u(kT) \end{array} \right), \quad \textrm{with}\ k=E(t/T) , \\[5pt] & u(kT) \in [-1,1] , \\[5pt] & \left( \begin{array}{c} q_1(0) \\ q_2(0) \end{array} \right) = \left( \begin{array}{c} M \\ 0 \end{array} \right), \quad \left( \begin{array}{c} q_1(t_f) 
\\ q_2(t_f) \end{array} \right) = \left( \begin{array}{c} 0 \\ 0 \end{array} \right), \end{split}\right. \end{equation*} where $T > 0$ is a fixed sampling period. In order to avoid the case where a controlling time $kT$ is such that $(k+1)T > t_f$, and to simplify the presentation, we assume that $t_f = KT$ for some $K \in \mathbb{N}^*$. Let us apply Theorem~\ref{thmmainintro} in the normal case $p^0 = -1$. From the extremal equations, the adjoint vector $p=(p_1 \; p_2)^\top$ is such that $p_1$ is constant and $p_2 (t) = p_1(t_f - t)+p_2 (t_f)$ is affine. The maximization condition~\eqref{secondconditionintro} provides $$ \frac{1}{T} (y-u(kT)) \int_{kT}^{(k+1)T} \big( p_2(\tau) - 2 u(kT) \big) \; d\tau \leq 0, $$ that is, $$ (y-u(kT)) \left[ - 2 u(kT) +p_1 \left( t_f - kT - \dfrac{T}{2} \right) + p_2 (t_f) \right] \leq 0, $$ for all $k=0,\ldots,K-1$ and all $y \in [-1,1]$. Let us write this maximization condition as $$ (y-u(kT)) \Gamma_k ( u(kT) ) \leq 0, $$ for all $k=0,\ldots,K-1$ and all $y \in [-1,1]$, where $\Gamma_k : [-1,1] \to \mathbb{R}$ is a decreasing affine function. It clearly follows that \begin{itemize} \item if $\Gamma_k (-1) < 0$, then $u(kT) = -1$; \item if $\Gamma_k (1) > 0$, then $u(kT) = 1$; \item if $\Gamma_k (-1) > 0$ and $\Gamma_k (1) < 0$, then $u(kT)$ is the unique solution of $\Gamma_k ( x ) = 0$ given by $$ u(kT) = \dfrac{1}{2} \left[ p_1 \left(t_f - kT - \dfrac{T}{2} \right) + p_2 (t_f) \right] . $$ \end{itemize} Hence, for each couple $(p_1,p_2(t_f))$, the above method allows one to compute explicitly the associated values $u(kT)$ for all $k=0,\ldots,K-1$. Unfortunately, the transversality conditions on the adjoint vector do not provide any additional information on the values of $p_1$ and $p_2 (t_f)$, see Remark~\ref{remarkconditionsterminales}.
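These case distinctions translate directly into code. The sketch below (ours, not the authors' implementation) takes the illustrative values $M=2$, $t_f=4$, $T=1$ used in the numerical results that follow, computes the controls associated with a candidate couple $(p_1,p_2(t_f))$ from the sign analysis of $\Gamma_k$, integrates the double integrator exactly on each sampling interval, and then adjusts $(p_1,p_2(t_f))$ by a finite-difference Newton iteration so that the final constraints $q_1(t_f)=q_2(t_f)=0$ hold; the initial guess $(-0.75,1.5)$ is the permanent-case multiplier pair obtained from $u^*=p_2/2$.

```python
def simulate(p1, p2f, M=2.0, tf=4.0, T=1.0):
    """Given a candidate (p1, p2(tf)), pick each u(kT) from the sign analysis
    of the decreasing affine map Gamma_k and integrate the dynamics exactly."""
    K = int(round(tf / T))
    q1, q2, controls = M, 0.0, []
    for k in range(K):
        # Root of Gamma_k(x) = -2x + p1*(tf - kT - T/2) + p2(tf), clipped to
        # the admissible set [-1, 1] (the two saturated cases of the text).
        u = 0.5 * (p1 * (tf - k * T - T / 2) + p2f)
        u = max(-1.0, min(1.0, u))
        controls.append(u)
        q1 += q2 * T + 0.5 * u * T * T  # exact update: constant acceleration
        q2 += u * T
    return q1, q2, controls

def shoot(p1, p2f, iters=20, h=1e-7):
    """Newton iteration (finite-difference Jacobian) on the shooting map
    (p1, p2(tf)) -> (q1(tf), q2(tf)), driving the final state to (0, 0)."""
    for _ in range(iters):
        f1, f2, _ = simulate(p1, p2f)
        a11 = (simulate(p1 + h, p2f)[0] - f1) / h
        a21 = (simulate(p1 + h, p2f)[1] - f2) / h
        a12 = (simulate(p1, p2f + h)[0] - f1) / h
        a22 = (simulate(p1, p2f + h)[1] - f2) / h
        det = a11 * a22 - a12 * a21
        p1 -= (a22 * f1 - a12 * f2) / det
        p2f -= (a11 * f2 - a21 * f1) / det
    return p1, p2f

p1, p2f = shoot(-0.75, 1.5)   # initial guess from the permanent control case
q1f, q2f, us = simulate(p1, p2f)
print(us)  # sampled-data controls u(0), u(T), u(2T), u(3T)
```

For these parameters no control saturates, so the shooting map is affine in $(p_1,p_2(t_f))$ and the iteration converges essentially in one step, to $(p_1,p_2(t_f))=(-0.8,1.6)$ with controls $(-0.6,-0.2,0.2,0.6)$, a symmetric profile consistent with the qualitative shape of $u^*$.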
As a consequence, and as usual, we proceed to a numerical shooting method on the application $$ (p_1,p_2(t_f)) \longmapsto (q_1(t_f),q_2(t_f)) $$ in order to guarantee the final constraints $q_1(t_f) = q_2 (t_f) = 0$.\footnote{In order to initiate the shooting method, we take the values of $p_1$ and $p_2 (t_f)$ from the classical permanent control case, see Section~\ref{section21}.} Finally we obtain the following numerical results. The values $u(kT)$ are represented with blue crosses and the red curve corresponds to the optimal (permanent) control $u^*$ obtained in Section~\ref{section21}. \begin{itemize} \item With $M=2$, $t_f=3$ (in the case $4M < t_f^2 < 6M$) and for $T=1$, $T=0.5$, $T=0.1$ and $T=0.01$, we obtain: \begin{center} \includegraphics[scale=0.18]{img1.eps} \includegraphics[scale=0.18]{img2.eps} \includegraphics[scale=0.18]{img3.eps} \includegraphics[scale=0.18]{img4.eps} \end{center} \item With $M=2$, $t_f=4$ (in the case $6M \leq t_f^2$) and for $T=1$, $T=0.5$, $T=0.1$ and $T=0.01$, we obtain: \begin{center} \includegraphics[scale=0.18]{img5.eps} \includegraphics[scale=0.18]{img6.eps} \includegraphics[scale=0.18]{img7.eps} \includegraphics[scale=0.18]{img8.eps} \end{center} \end{itemize} \begin{remark} The previous numerical results naturally lead us to ask about the convergence of the optimal sampled-data control to the optimal (permanent) control when the sampling period $T$ tends to $0$. Actually, this natural question also emerges from the maximization condition~\eqref{secondconditionintro}, which can be seen as an average of the weak maximization condition of the classical PMP, see Section~\ref{section12}. Indeed, note that the averaging interval shrinks as the sampling period $T$ is reduced. Similarly, an important scientific perspective concerns the convergence of the optimal trajectory associated with a sampled-data control to the optimal trajectory associated with a permanent control.
These important issues both constitute a forthcoming research project of the two authors of this note. \end{remark} \begin{remark} Note that the above graphics only represent (by blue crosses) the discrete values $u(kT)$ of the sampled-data control $u$ at each controlling time $t=kT$. Let us provide some graphics representing the \textit{sample-and-hold} procedure consisting of ``freezing'' the control at each controlling time $kT$ on the corresponding sampling time interval $[kT,(k+1)T)$. We fix $T = 0.5$ and we consider first $(M,t_f) = (2,3)$, then $(M,t_f) = (2,4)$. We obtain: \begin{center} \includegraphics[scale=0.18]{img9.eps} \includegraphics[scale=0.18]{img99.eps} \end{center} \end{remark} \begin{ack} The second author was partially supported by the Grant FA9550-14-1-0214 of the EOARD-AFOSR. \end{ack}
https://arxiv.org/abs/1910.02530
On the Hausdorff dimension of Riemann's non-differentiable function
Recent findings show that the classical Riemann's non-differentiable function has a physical and geometric nature as the irregular trajectory of a polygonal vortex filament driven by the binormal flow. In this article, we give an upper estimate of its Hausdorff dimension. We also adapt this result to the multifractal setting. To prove these results, we recalculate the asymptotic behavior of Riemann's function around rationals from a novel perspective, underlining its connections with the Talbot effect and Gauss sums, with the hope that it is useful to give a lower bound of its dimension and to answer further geometric questions.
\section{Introduction} \subsection{Riemann's non-differentiable function} In a lecture at the Royal Prussian Academy of Sciences in Berlin in 1872, Weierstrass \cite{Weierstrass1872} explained, against the belief of the time, that a continuous function need not have a well-defined derivative, proposing the famous Weierstrass functions, \begin{equation}\label{eq:Weierstrass_Function} W(x) = \sum_{n=1}^\infty a^n\,\cos (2\pi b^n x), \qquad 0 < a < 1, \quad b>1, \quad ab \geq 1, \end{equation} as counterexamples. However, his main motivation to tackle this problem was the function \begin{equation}\label{RiemannFunctionOriginal} R(x) = \sum_{n=1}^{\infty}{\frac{ \sin{ (n^2x ) } }{ n^2 }} \end{equation} proposed by Riemann some years earlier. Riemann is believed to have claimed that $R$ was continuous but nowhere differentiable. Even though no written or oral proof survives, \eqref{RiemannFunctionOriginal} became widely known as Riemann's non-differentiable function. Weierstrass claimed that this conjecture was a \textit{somewhat difficult} problem, and indeed he was correct, since a century had to pass until Gerver \cite{Gerver1970} disproved the conjecture in 1970. He showed that $R$ is differentiable at points $\pi x$ where $x\in\mathbb{Q}$ is a quotient of two odd numbers, with derivative equal to $-1/2$. Previously, in 1916, Hardy \cite{Hardy1916} had shown that $R$ is not differentiable at $\pi x$ if $x$ is irrational. The problem was completely solved in 1971 by Gerver himself \cite{Gerver1971}, who showed that $R$ is not differentiable at the remaining rational points either. Later, Duistermaat \cite{Duistermaat1991}, Jaffard \cite{Jaffard1996} and Jaffard and Meyer \cite{JaffardMeyer1996} studied the regularity of $R$ in greater depth. 
In all these works, a common technique is to study a generalization of $R$ to the complex plane, \begin{equation}\label{PhiDuistermaat} \phi_D(t) = \sum_{n=1}^{\infty}{ \frac{e^{i \pi n^2 t}}{i \pi n^2} }, \end{equation} for which $\operatorname{Re}\phi_D(t) = R(\pi t)/\pi$. \subsection{A physical and geometric version of Riemann's function} Recently, De la Hoz and Vega \cite{delaHozVega2014} found a version of Riemann's non-differentiable function, \begin{equation}\label{Phi} \phi(t) = \sum_{k\in\mathbb{Z}}{\frac{ e^{ - 4\pi^2 i k^2 t } - 1 }{ -4\pi^2k^2 } }, \end{equation} in a novel context concerning the evolution of vortex filaments, thus giving it a fantastic geometric and physical interpretation. They showed that \eqref{Phi}, which is related to the previous $\phi_D$ by \begin{equation}\label{FromPhiDuistermaatToPhi} \phi(t) = -\frac{i}{2\pi}\phi_D(-4\pi t) + i t + \frac{1}{12}, \qquad \qquad \forall t \in \mathbb{R}, \end{equation} approximates accurately the trajectories of the corners of polygonal vortex filaments that follow the binormal flow, a model for the evolution of a single vortex filament that is represented by the vortex filament equation (VFE) or localized induction approximation (LIA), \begin{equation}\label{VFE} \boldsymbol{X}_t = \boldsymbol{X}_s \times \boldsymbol{X}_{ss}, \qquad \text{ or equivalently } \qquad \boldsymbol{X}_t = \kappa \, \boldsymbol{B}. \end{equation} Here, the vortex is represented by the curve $\boldsymbol{X} : \mathbb{R}^2 \to \mathbb{R}^3$ with variables $s$ and $t$, the arclength and the time respectively, and is given an initial condition $\boldsymbol{X}(s,0)$. Also, $\kappa = \kappa(s,t)$ represents the curvature and $\boldsymbol{B} = \boldsymbol{B}(s,t)$ is the binormal vector. The VFE was originally proposed by Da Rios \cite{DaRios1906}, though forgotten and rediscovered many times by different authors, as discussed in \cite{Ricca1991}. 
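As a quick sanity check, the relation \eqref{FromPhiDuistermaatToPhi} between $\phi$ and $\phi_D$ can be verified numerically with truncated series. The sketch below interprets the $k=0$ summand of \eqref{Phi} as its limit $it$; the truncation level and the test point are arbitrary choices.

```python
# Numerical check of phi(t) = -(i/(2 pi)) * phi_D(-4 pi t) + i t + 1/12,
# with both series truncated at N terms (the tails are O(1/N)).
import cmath

N = 4000  # truncation level (arbitrary)

def phi_D(t):
    # phi_D(t) = sum_{n >= 1} exp(i pi n^2 t) / (i pi n^2)
    return sum(cmath.exp(1j * cmath.pi * n * n * t) / (1j * cmath.pi * n * n)
               for n in range(1, N + 1))

def phi(t):
    # phi(t) = sum_{k in Z} (exp(-4 pi^2 i k^2 t) - 1) / (-4 pi^2 k^2),
    # with the k = 0 summand read as its limit, i*t.
    pi2 = 4 * cmath.pi ** 2
    s = 1j * t  # k = 0 term
    s += 2 * sum((cmath.exp(-pi2 * 1j * k * k * t) - 1) / (-pi2 * k * k)
                 for k in range(1, N + 1))
    return s

t = 0.137  # arbitrary test point
lhs = phi(t)
rhs = -1j / (2 * cmath.pi) * phi_D(-4 * cmath.pi * t) + 1j * t + 1.0 / 12
err = abs(lhs - rhs)
```

The discrepancy `err` is of the order of the truncation tails, a few times $10^{-5}$ for this $N$.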
A landmark result in the study of this equation is due to Hasimoto \cite{Hasimoto1972}, who established a direct connection between the VFE and the cubic nonlinear Schr\"odinger equation (NLS). The relationship works as follows: let $\kappa$ and $\tau$ be the curvature and torsion of the filament $\boldsymbol{X}$ that evolves according to the VFE, and define the complex-valued function \begin{equation}\label{Hasimoto} \psi(s,t) = \kappa(s,t) \, e^{i\int_0^s{ \tau(\sigma,t)\,d\sigma } }. \end{equation} This is often called the filament function. Hasimoto showed that $\psi$ satisfies \begin{equation}\label{NLS} \psi_t = i \psi_{ss} + \frac{i}{2}\, \left( \left| \psi \right|^2 + A(t) \right)\psi, \end{equation} where $A(t)$ is a real function of time. This function $A(t)$ causes no extra inconvenience in practice, because the function $\Psi(s,t) = \psi(s,t)\, e^{-i/2 \int_0^t A(\tau)d\tau}$ solves the standard cubic NLS \begin{equation} \Psi_t = i \Psi_{ss} + \frac{i}{2}\left| \Psi \right|^2\Psi. \end{equation} The usefulness of this transformation is evident because, provided that it can be inverted, it allows one to work directly with the cubic NLS. In principle, if $\psi$ is found, its definition yields $\kappa$ and $\tau$ directly, and the tangent vector is obtained by integrating the Frenet-Serret system. The curve is then recovered by integrating the tangent. Unfortunately, it is not always trivial to materialize these ideas. Even in the simple case of a partially straight filament with $\kappa = 0$, the Frenet-Serret frame is not well-defined! In fact, Hasimoto needed to assume this non-vanishing restriction for the curvature. However, Koiso \cite{Koiso1997} showed that a parallel frame can be used instead of the classic Frenet-Serret frame to remove this restriction, invert the transformation and recover $\boldsymbol{X}$. We are particularly interested in the evolution of closed vortex filaments. 
Think of cigarette smoke rings which, as we know, essentially maintain their shape while they travel. But what happens if the ring has the shape of a triangle? In \cite{KlecknerScheelerIrvine2014} this experiment was carried out with a clover-shaped filament, and its evolution is nothing close to that of the circular ring. De la Hoz and Vega \cite{delaHozVega2014} then showed that the triangle behaves in a similar way. More generally, they studied general regular polygonal vortices, and they showed that, surprisingly, their evolution is ruled by the Talbot effect, a phenomenon originally from optics. A numerical simulation of the evolution of the triangular vortex is available in \cite{KumarVideos} or in the video \url{https://youtu.be/f3HQFfTtFtU} by Sandeep Kumar. The video above also shows the trajectory of one of the corners of the triangle. These trajectories were also numerically simulated in \cite[Figure 2]{delaHozVega2014}; they turn out to be planar, and some of them are shown in Figure~\ref{fig:Numeric_Trajectories}. Comparing them to the image of $\phi$ \eqref{Phi} shown in Figure~\ref{FIG_Curva}, there is little doubt that this version of Riemann's non-differentiable function is a very good approximation of these trajectories. \begin{figure}[h] \includegraphics[width=\textwidth]{NumericTrajectories} \caption{Numerical simulations of the trajectory of a corner of the $M$-sided regular polygon, for $M=3,4,5$. Image by F. De la Hoz and L. Vega.} \label{fig:Numeric_Trajectories} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{Curva2} \caption{\small The set $\phi([0,1/(2\pi)]) \subset \mathbb C$. The resemblance to the numerical trajectories in Figure~\ref{fig:Numeric_Trajectories} is astonishing.} \label{FIG_Curva} \end{figure} Let us briefly explain why Riemann's function appears in this context. For that, we need to describe the evolution of polygonal vortices with the VFE. 
Let $M \in \mathbb N$ and let $\boldsymbol{X}_M$ be the solution to the VFE when the initial datum $\boldsymbol{X}_M(s,0)$ is a planar regular polygon of $M$ sides. One option is to parametrize it first in the interval $[0,2\pi)$ and then extend it periodically to $\mathbb R$, so that the problem becomes periodic in space. Thanks to Hasimoto's transformation, we can work with the filament function $\psi_M$ \eqref{Hasimoto} instead, so we need to parametrize the curvature and the torsion of the polygon. The torsion is zero because the polygon is planar. Regarding the curvature, we may think of each corner as a Dirac delta, so placing $M$ of them uniformly in $[0,2\pi)$ and extending periodically, it is reasonable to set \begin{equation}\label{RegularPolygonM} \psi_M(s,0) = \kappa_M(s,0) = \frac{2\pi}{M}\sum_{k \in \mathbb{Z}}{ \delta\left( s - \frac{2\pi}{M}k \right) }. \end{equation} We now perform some heuristic but clarifying computations. Instead of solving the NLS for $\psi_M$, let us forget about the nonlinearity and assume that $\psi_M$ solves the free Schr\"odinger equation \begin{equation}\label{FreeSchrodingerEquation} \psi_t = i\,\psi_{ss}. \end{equation} With the help of the Poisson summation formula, the well-known solution is \begin{equation}\label{FreeSchrodingerSolution} \psi_M(s,t) = e^{it\partial_s^2}\left(\frac{2\pi}{M}\,\sum_{k \in \mathbb{Z}}\delta(\cdot - \textstyle{\frac{2\pi}{M}}\, k) \right)(s) = \sum_{k\in\mathbb{Z}}{ e^{ i M k s - i M^2 k^2 t} }. \end{equation} To recover $\boldsymbol{X}_M$, we should integrate the Frenet-Serret system in $s$ to get the tangent, and integrate the latter also in $s$. Again, a very heuristic shortcut is to integrate $\psi_M$ twice in $s$, and since $\psi_M$ solves the free Schr\"odinger equation, that amounts to integrating it once in $t$. Thus, we would get \begin{equation}\label{PhiIntegral} \boldsymbol{X}_M(s,t) \approx i \, \int_0^t{ \sum_{k\in\mathbb{Z}}{ e^{ i M k s - i M^2 k^2 \tau} } \, d\tau }. 
\end{equation} The point $\boldsymbol{X}_M(0,0)$ represents a corner, whose trajectory is $\boldsymbol{X}_M(0,t)$. According to the definition of $\phi$ in \eqref{Phi}, we get \begin{equation} \boldsymbol{X}_M(0,t) \approx \sum_{k\in\mathbb{Z}}{\frac{ e^{ - i M^2 k^2 t } - 1 }{ -M^2 \, k^2 } } = \frac{4\pi^2}{M^2}\, \phi\left( \frac{M^2}{4\pi^2}\, t \right). \end{equation} In view of the resemblance of the numerical trajectories of Figure~\ref{fig:Numeric_Trajectories} and the image of $\phi$ in Figure~\ref{FIG_Curva}, this crude approximation is surprisingly precise. Moreover, the larger $M$ is, the better the matching, which suggests some kind of convergence of the trajectories $\boldsymbol{X}_M$ to $\phi$ when $M \to \infty$. The first result in this direction has been given recently by Banica and Vega \cite{BanicaVega2020-R} for initial polygons of $M = 2n + 1$ sides with a particular parametrization. For completeness, we reproduce their result here in a simplified way. To put ourselves in context, observe that the parametrization of the periodic data we considered in \eqref{RegularPolygonM} gives infinitely many loops around the polygon. \begin{thm*}{(\cite[Theorem 1.1]{BanicaVega2020-R})} Let $n \in \mathbb N$ and let the planar regular polygon of $2n+1$ sides be parametrized by $\boldsymbol{X}_n(s,0)$, which gives a single loop to the polygon when $|s| \leq n$ with its corners located at the integers, and which escapes to infinity by two straight lines when $|s|>n$. Then, $\lim_{n \to \infty}n\boldsymbol{X}_n(0,t) = \phi(t)$. \end{thm*} Hence, this theorem and the novel point of view give Riemann's non-differentiable function an intrinsic geometric and physical nature that makes its study from these perspectives an interesting topic. For instance, related to physics and the theory of turbulence, it was shown in \cite{BoritchevEceizabarrenaVilaca2019} that it is intermittent. However, in this article we focus on geometric aspects. 
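The equality in the last display, $\sum_{k\in\mathbb{Z}}(e^{-iM^2k^2t}-1)/(-M^2k^2) = (4\pi^2/M^2)\,\phi(M^2t/(4\pi^2))$, is exact term by term once the $k=0$ summand is read as its limit $it$. A minimal numerical confirmation, with truncated sums and arbitrary choices of the truncation level, $M$ and $t$:

```python
# Term-by-term check that sum_{k in Z} (exp(-i M^2 k^2 t) - 1)/(-M^2 k^2),
# with the k = 0 summand read as its limit i*t, equals
# (4 pi^2 / M^2) * phi(M^2 t / (4 pi^2)) for the truncated sums.
import cmath

K = 3000  # truncation level (arbitrary)

def corner_series(M, t):
    s = 1j * t  # k = 0 term
    s += 2 * sum((cmath.exp(-1j * M * M * k * k * t) - 1) / (-(M * M) * k * k)
                 for k in range(1, K + 1))
    return s

def phi(t):
    pi2 = 4 * cmath.pi ** 2
    s = 1j * t
    s += 2 * sum((cmath.exp(-pi2 * 1j * k * k * t) - 1) / (-pi2 * k * k)
                 for k in range(1, K + 1))
    return s

M, t = 3, 0.05  # arbitrary test values
lhs = corner_series(M, t)
rhs = (4 * cmath.pi ** 2 / M ** 2) * phi(M ** 2 * t / (4 * cmath.pi ** 2))
err = abs(lhs - rhs)
```

Since the two truncated sums agree summand by summand, `err` is of the order of floating-point rounding.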
\subsection{Geometric study of Riemann's function} A quick look at Figure~\ref{FIG_Curva} is enough to be convinced of the geometric complexity of Riemann's non-differentiable function. Related to this, for instance, its geometric differentiability was analyzed in \cite{Eceizabarrena2020}. It is also quite natural to wonder whether this is a fractal or not; this is precisely the focus of this paper. Questions about the dimension (either Hausdorff, Minkowski or others) of non-differentiable functions are popular. A famous, long-lived problem is to prove that the dimension of the graph of the Weierstrass function \eqref{eq:Weierstrass_Function} is $ 2 + \log(a)/\log(b)$, as was conjectured by Mandelbrot \cite{Mandelbrot1977} in 1977. While the result for the Minkowski dimension was proved in 1984 \cite{KaplanMalletParetYorke1984}, the conjecture for the Hausdorff dimension still resists, at least partially. Aside from a randomized version by Hunt \cite{Hunt1998}, the best result known is due to Shen \cite{Shen2018}, who in 2018 proved the conjecture for any $0<a<1$ and $b\in \mathbb{N}, b \geq 2$ using dynamical systems. Riemann's non-differentiable function is also an interesting case study, and in the spirit of Weierstrass' words, an even more difficult one due to the slower convergence of the series. The main result in the literature is by Chamizo and Córdoba \cite{ChamizoCordoba1999,Cordoba2008}, who proved that the Minkowski dimension of the graph of the original function \eqref{RiemannFunctionOriginal} is 5/4. Concerning the Hausdorff dimension, to my knowledge, no result is known yet. The discoveries in the context of vortex filaments, though, make us focus on the image of the complex-valued function \eqref{Phi} shown in Figure~\ref{FIG_Curva} rather than on the graph of the original function. The question of the dimension of the image of $\phi$ is in principle more challenging than that of the dimension of the graph of the original Riemann's function. 
Indeed, in the case of a graph we have complete control of the speed of the curve in the direction of the abscissa, while the image of a parametric curve can move in the plane arbitrarily. Also, the fact that Figure~\ref{FIG_Curva} is not a graph gives it plenty of self-intersections, which make its study harder. \subsection{Results} In this paper, we give a first approach to computing the Hausdorff dimension of the image of $\phi$. \begin{thm}\label{TheoremHausdorffDimension} The Hausdorff dimension of the image of Riemann's non-differentiable function $\phi$ defined in \eqref{Phi} satisfies \[ 1 \leq \operatorname{dim}_{\mathcal{H}}{ \phi(\mathbb{R}) } \leq \frac43. \] \end{thm} This theorem can be generalized to the context of multifractality, a very popular topic in the mathematical study of turbulence which deals with the local H\"older regularity of functions. Let us briefly introduce it. For $\alpha \geq 0$, a function $f$ is said to be $\alpha$-H\"older at $x_0 \in \mathbb{R}$, denoted $f\in C^{\alpha}(x_0)$, if there exists a polynomial $P$ with $\operatorname{deg}P \leq \alpha$ such that \[ \left| f(x_0+h) - P(h) \right| \leq C |h|^{\alpha}, \qquad \text{ when } h \text{ is small}. \] The H\"older exponent of $f$ at a given point $x_0$ is the maximal H\"older regularity of $f$ at $x_0$, \begin{equation}\label{MaximalHolderRegularity} \alpha_f(x_0) = \sup\left\{ \beta \geq 0 \mid f \in C^{\beta}(x_0) \right\}. \end{equation} Then, the Hausdorff dimension of the set of points with exponent $\alpha$, that is, \begin{equation}\label{SpectrumOfSingularitiesDef} d(\alpha) = \operatorname{dim}_{\mathcal{H}}\{ x \in \mathbb{R} \mid \alpha_f(x) = \alpha \}, \qquad \forall \alpha \geq 0, \end{equation} when regarded as a function of $\alpha$, is called the spectrum of singularities. This definition is usually extended to values of $\alpha$ yielding an empty set by setting their image to be $-\infty$. 
The spectrum of singularities is the principal object of study in multifractal analysis, and in fact a function is said to be multifractal if its spectrum of singularities is defined by \eqref{SpectrumOfSingularitiesDef} at least on an open interval of H\"older exponents $\alpha$. Riemann's non-differentiable function was shown to be a multifractal by Jaffard \cite{Jaffard1996}, who proved \begin{equation}\label{SpectrumOfSingularitiesJaffard} d_R(\alpha) = \left\{ \begin{array}{cl} 4\alpha - 2, & \quad \text{if } \alpha \in [1/2,3/4], \\ 0, & \quad \text{if } \alpha = 3/2, \\ -\infty, & \quad \text{otherwise}. \end{array} \right. \end{equation} The three functions $R$, $\phi_D$ and $\phi$ have the same regularity, so in fact \eqref{SpectrumOfSingularitiesJaffard} holds for all of them. With this result in hand, he also proved the validity of the Frisch-Parisi multifractal formalism \cite{FrischParisi1985} for Riemann's function. To prove \eqref{SpectrumOfSingularitiesJaffard}, Jaffard established a relationship between the H\"older exponent of $R$ at an irrational point and a particular irrationality exponent of that irrational point that is related to the rate of convergence of its sequence of approximations by continued fractions. Multifractality is, thus, a concept measured in the domain of a function. With the geometric interpretation of Riemann's function in mind, a natural question is whether the multifractality of $\phi$ is translated from its domain to its image $\phi(\mathbb{R})$. We prove a partial result in this direction. \begin{thm}\label{TheoremHausdorffDimensionGeneralised} Let $\phi$ be Riemann's non-differentiable function \eqref{Phi} and $D_{\sigma} = \{ x \in \mathbb{R} \mid \alpha_\phi(x) = \sigma \}$. Then, \[ \operatorname{dim}_{\mathcal{H}} \phi(D_{\alpha}) \leq \operatorname{dim}_{\mathcal{H}} \phi\Big(\bigcup_{\sigma \leq \alpha}D_{\sigma}\Big) \leq \frac{4\alpha - 2}{\alpha}, \qquad \forall \alpha \in [1/2,3/4]. 
\] \end{thm} Observe that according to \eqref{SpectrumOfSingularitiesJaffard}, the range $\alpha \in [1/2,3/4]$ is the only one of interest. Also, Theorem~\ref{TheoremHausdorffDimensionGeneralised} generalizes Theorem~\ref{TheoremHausdorffDimension} because the union $\cup_{\sigma \leq 3/4}D_{\sigma}$ covers the whole real line except a countable number of points, which are precisely those in $D_{3/2}$, the set of points where $\phi$ is differentiable. Then, the classical results of Hardy and Gerver imply that $D_{3/2} \subset \frac{1}{2\pi}\, \mathbb{Q}$, so $\operatorname{dim}_{\mathcal{H}} \phi(\mathbb{R}) = \operatorname{dim}_{\mathcal{H}} \phi(\cup_{\sigma \leq 3/4}D_{\sigma}) \leq 4/3$. Taking the periodic property \begin{equation}\label{PeriodicityPhi} \phi\left(t+\frac{1}{2\pi}\right) = \phi(t) + \frac{i}{2\pi}, \qquad \qquad \forall t \in \mathbb{R}, \end{equation} into account, Theorems~\ref{TheoremHausdorffDimension} and \ref{TheoremHausdorffDimensionGeneralised} can be proved using the asymptotic behavior of $\phi(t_x + h) - \phi(t_x)$ when $h \to 0$ for $x \in [0,1]$, where we denote $t_x = x/(2\pi)$. If $x = p/q \in \mathbb{Q}$ is an irreducible fraction, we will also write $t_{p/q} = t_{p,q}$. The proof of Theorem~\ref{TheoremHausdorffDimensionGeneralised} is also based on the classification of irrational points according to the rate of convergence of their approximations by continued fractions. \subsection{Auxiliary geometric result: the asymptotic behavior of $\phi$ around rationals} The asymptotic behavior of the original generalization $\phi_D$ of Riemann's function was computed by Duistermaat \cite{Duistermaat1991}. Thanks to it, he could explain the self-similar patterns of the graph of $R$ analytically. While one can get the asymptotic behavior of $\phi$ from Duistermaat's work using the relationship \eqref{FromPhiDuistermaatToPhi}, in this paper we will prove it directly. 
The reasons to do this are the following: \begin{itemize} \item We do the computations from a different and, arguably, more intuitive perspective. \item Like in \cite{Duistermaat1991}, the main vehicle will be the relationship between the modular group and the Jacobi $\theta$ function, but this new approach allows us to unravel the relationships with phenomena in other fields, like Gauss sums in number theory and the Talbot effect in optics. \item To prove Theorems~\ref{TheoremHausdorffDimension} and \ref{TheoremHausdorffDimensionGeneralised} it is enough to work with the leading terms of the asymptotics, which can easily be deduced from Duistermaat's work. Even so, we compute the asymptotic behavior of $\phi$ so that the machinery to prove future results is fully and explicitly available. It is the lower order terms which capture the self-similar properties of $\phi$, so they may be critical to tackle other geometric questions. For instance, it seems reasonable to think that they will be needed to obtain a lower bound for the Hausdorff dimension. They already proved to be vital in \cite{Eceizabarrena2020} to study the geometric differentiability of $\phi$. \end{itemize} For the sake of clarity, let us write here a simplified introductory version of the asymptotic behavior of $\phi$. It can be classified very cleanly, since the situation around any rational can be reduced to what happens around either 0 or 1/2. For the precise expressions I refer the reader to Propositions~\ref{thm:Asymptotic_At_0}, \ref{thm:Asymptotic_At_1_2}, \ref{thm:Asymptotic_At_Q013} and \ref{thm:Asymptotic_At_Q2}. \begin{prop}\label{TheoremAsymptoticsSimplified} Let $p,q \in \mathbb{Z}$ be such that $ 0 \leq p < q$ and $\operatorname{gcd}(p,q)=1$. 
The asymptotic behavior of $\phi$ around the rational point $t_{p,q} = (p/q)/(2\pi)$ depends on $q \pmod 4$ as follows: \begin{itemize} \item The asymptotic behavior of $\phi$ around 0 is \begin{equation} \phi(h) = \frac32\, \frac{1+i}{\sqrt{2\pi}} \, \left( h^{1/2} - \frac{8\pi^2}{3}\, i \, \left[ \frac16 - 2\phi\left( \frac{-1}{16\pi^2h} \right) \right] h^{3/2} + O\left( h^{5/2}\right) \right), \end{equation} and if $q \equiv 0,1,3 \pmod 4$, there exists an eighth root of unity $e_{p,q}$ such that \begin{equation} \phi(t_{p,q} + h) - \phi(t_{p,q}) \approx \frac{e_{p,q}}{q^{3/2}} \, \phi(q^2\, h). \end{equation} \item The asymptotic behavior of $\phi$ around $1/2$ is \begin{equation} \phi(t_{1,2} + h) - \phi(t_{1,2}) = -16\,\frac{1-i}{\sqrt{2}}\, \sum_{\substack{k=1 \\ k \text{ odd}}}^{\infty}{ \frac{e^{ik^2/(16h)}}{k^{2}} } \,h^{3/2} + O\left( h^{5/2} \right), \end{equation} and if $q \equiv 2 \pmod 4$, there exists an eighth root of unity $e_{p,q}$ such that \begin{equation} \phi(t_{p,q} + h) - \phi(t_{p,q}) \approx \frac{e_{p,q}}{q^{3/2}}\, \left( \phi(t_{1,2} + q^2\, h) - \phi(t_{1,2}) \right). \end{equation} \end{itemize} \end{prop} The second term in the asymptotic behavior around 0 captures the self-similar patterns of $\phi$ that can be identified in Figure~\ref{FIG_Curva}. Most importantly, this pattern appears around every rational $t_{p,q}$ with $q \equiv 0,1,3 \pmod 4$. This should play an important role in computing a lower bound for the Hausdorff dimension, but as already said, it is not needed to prove Theorems~\ref{TheoremHausdorffDimension} and \ref{TheoremHausdorffDimensionGeneralised}. Indeed, the following corollary with the leading order term is enough. \begin{cor}\label{thm:Global_Bound} Let $p, q \in \mathbb{N}$ be such that $\operatorname{gcd}(p,q) = 1$. Let also $M >0$. 
Then, there exists $C_M>0$, independent of $p$ and $q$, such that \begin{itemize} \item if $q \equiv 0,1,3 \pmod{4}$, \begin{equation} \left| \phi(t_{p,q}+h) - \phi(t_{p,q}) \right| \leq C_M \frac{|h|^{1/2}}{q^{1/2}}, \qquad \text{ whenever } |h| < \frac{M}{q^2}. \end{equation} \item if $q \equiv 2 \pmod{4}$, \begin{equation} \left| \phi(t_{p,q}+h) - \phi(t_{p,q}) \right| \leq C_M q^{3/2}\, |h|^{3/2}, \qquad \text{ whenever } |h| \leq \frac{M}{q^2}. \end{equation} \end{itemize} \end{cor} This corollary corresponds to Corollaries~\ref{thm:Global_Bound_Q013} and \ref{thm:Global_Bound_Q2} in the main text. \subsection{Discussion on a lower bound for the Hausdorff dimension} The theorems in this paper are a first approach to the Hausdorff dimension of Riemann's non-differentiable function in its version shown in Figure~\ref{FIG_Curva}, which represents the trajectory of a polygonal vortex filament. Of course, the objective now becomes determining whether the exact value of the dimension is precisely 4/3. Some difficulties with respect to previous works are the following. First, dealing with Riemann's function is more complicated than working with Weierstrass' function due to its quadratic rather than exponential convergence. Also, Figure~\ref{FIG_Curva} is not a graph, so the control over the abscissa direction is lost. What is more, the set self-intersects many times, in a way that seems difficult to measure. Regarding self-similarity, unlike exactly self-similar fractals that have finitely many scaling laws, Figure~\ref{FIG_Curva}, and especially the fact that the self-similar term in Proposition~\ref{TheoremAsymptoticsSimplified} is multiplied by the continuously decreasing term $h^{3/2}$, suggest that $\phi$ may have a continuum of scaling laws. 
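As a purely numerical illustration, not needed for the proofs, the leading $h^{1/2}$ behavior of $\phi$ at $0$ from Proposition~\ref{TheoremAsymptoticsSimplified} can be observed from a log-log slope of $|\phi(h)|$; the truncation level and the sample points below are arbitrary choices.

```python
# Log-log slope of |phi(h)| near 0: to leading order phi(h) behaves like a
# constant times h^{1/2}, so the fitted slope should be close to 1/2.
import cmath, math

K = 100000  # truncation level (arbitrary)

def phi(t):
    pi2 = 4 * cmath.pi ** 2
    s = 1j * t  # k = 0 summand, read as its limit
    s += 2 * sum((cmath.exp(-pi2 * 1j * k * k * t) - 1) / (-pi2 * k * k)
                 for k in range(1, K + 1))
    return s

h1, h2 = 1e-3, 1e-4  # arbitrary sample points
slope = ((math.log(abs(phi(h1))) - math.log(abs(phi(h2))))
         / (math.log(h1) - math.log(h2)))
```

The lower-order terms of the asymptotics perturb the slope only at relative size $O(h)$, so the estimate lands very close to $1/2$.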
There are some clues that vaguely suggest that the dimension might be 4/3, such as the fact that the cover used in the proof of Theorem~\ref{TheoremHausdorffDimension} would no longer cover the set if the diameters were made slightly smaller, and the fact that the estimates used are sharp. A possible line of attack comes from deepening the study of the multifractal setting of Theorem~\ref{TheoremHausdorffDimensionGeneralised}. In fact, analyzing the subsets $D_\alpha$ for a fixed $\alpha$ means studying irrationals with a fixed irrationality exponent, and this could be a way to isolate a set that has a single scaling, or at least a simpler scaling law. What we can be more confident about is that the dimension is strictly greater than 1, due to the self-similar patterns already mentioned. Even showing this would be an interesting contribution. \subsection{Structure of the document} Since Corollary~\ref{thm:Global_Bound} suffices to tackle Theorems~\ref{TheoremHausdorffDimension} and \ref{TheoremHausdorffDimensionGeneralised}, we begin by proving them in Section~\ref{Section_HausdorffDimension}. In Section~\ref{sec:Technical_Results} we prove some technical results corresponding to the multifractal setting of Theorem~\ref{TheoremHausdorffDimensionGeneralised}. In Section~\ref{sec:Asymptotics_Heuristics}, we explain the heuristics of how the asymptotic behavior of $\phi$ around rationals can be reduced to the asymptotic behavior around either 0 or $1/2$. We also explain how such a reduction is deeply related to Gauss sums and the Talbot effect. Then, in Section~\ref{Section_BaseCases} we compute the asymptotics around 0 and $1/2$, and in Section~\ref{Section_Rationals} we compute the asymptotics around rationals by making the already mentioned reduction rigorous. \begin{acknowledgements*} Special thanks to Luis Vega. I would also like to thank Fernando Chamizo, Albert Mas and Xavier Tolsa for interesting and useful discussions. 
The bulk of this work was developed while I was working at BCAM - Basque Center for Applied Mathematics. This research is supported by the Ministry of Education, Culture and Sport (Spain) under grant FPU15/03078 - Formaci\'on de Profesorado Universitario, by the ERCEA under the Advanced Grant 2014 669689 - HADE and also by the Basque Government through the BERC 2018-2021 program and by the Ministry of Science, Innovation and Universities: BCAM Severo Ochoa accreditation SEV-2017-0718. It is also supported by the Simons Foundation Collaboration Grant on Wave Turbulence (Nahmod's Award ID 651469). \end{acknowledgements*} \section{The Hausdorff dimension}\label{Section_HausdorffDimension} In this section, we prove Theorems~\ref{TheoremHausdorffDimension} and \ref{TheoremHausdorffDimensionGeneralised} based on Corollary~\ref{thm:Global_Bound}. Before going into the proofs, we recall that given $d \geq 0$, the $d$-Hausdorff content of diameter $\delta>0$ of a set $A \subset \mathbb{R}^n$ is \begin{equation}\label{HausdorffContent} \mathcal{H}_{\delta}^d(A) = \inf\left\{ \sum_{i \in I}\left(\operatorname{diam}U_i\right)^d \mid A \subset \bigcup_{i \in I}{U_i}, \quad \operatorname{diam}U_i < \delta\quad \forall i \in I, \quad I \text{ countable} \right\}, \end{equation} where the sets $U_i$ can be chosen to be open if needed. This is a decreasing function of $\delta$, and taking the limit $\delta \to 0$ yields the $d$-Hausdorff measure of $A$, \begin{equation}\label{HausdorffMeasure} \mathcal{H}^d(A) = \lim_{\delta \to 0}\mathcal{H}_{\delta}^d(A) = \sup_{\delta > 0}\mathcal{H}_{\delta}^d(A) . \end{equation} Finally, the Hausdorff dimension of $A$ is \begin{equation}\label{HausdorffDimension} \operatorname{dim}_{\mathcal{H}}A = \inf\{\,d\, \mid \, \mathcal{H}^d(A) = 0\} = \sup\{\,d\, \mid \, \mathcal{H}^d(A) = \infty\}. 
\end{equation} \subsection{Proof of Theorem~\ref{TheoremHausdorffDimension}}\label{Subsection_TheoremHausdorffDimension} The lower bound of Theorem~\ref{TheoremHausdorffDimension} is just a consequence of $\phi$ being a continuous and non-constant curve. Indeed, there exist $s,t \in \mathbb{R}$ with $s < t$ such that $\phi(s) \neq \phi(t)$. Let $[\phi(s),\phi(t)] \subset \mathbb{R}^2$ denote the line segment connecting $\phi(s)$ and $\phi(t)$, and $L$ its infinite extension. Then, the orthogonal projection $P_{\perp}:\phi([s,t]) \to L$ is a Lipschitz map, so \begin{equation} \operatorname{dim}_{\mathcal{H}}P_{\perp}\phi([s,t]) \leq \operatorname{dim}_{\mathcal{H}}\phi([s,t]), \end{equation} see, for instance, \cite[Proposition 3.3]{Falconer2014}. Since the continuity of $\phi$ implies $[\phi(s), \phi(t)] \subset P_{\perp}(\phi([s,t]))$, we get \begin{equation} \operatorname{dim}_{\mathcal{H}}\phi([0,1/(2\pi)]) \geq \operatorname{dim}_{\mathcal{H}}\phi([s, t]) \geq \operatorname{dim}_{\mathcal{H}}P_{\perp}\phi([s,t]) \geq \operatorname{dim}_{\mathcal{H}} [\phi(s), \phi(t)] = 1. \end{equation} Regarding the upper bound, it is enough to work with the set $\phi(\frac{1}{2\pi}\left((0,1)\cap\mathbb{I}\right))$, where $\mathbb{I}$ stands for the set of irrational numbers. This is because the periodic property \eqref{PeriodicityPhi} implies \[ \phi(\mathbb{R}) = \bigcup_{k \in \mathbb{Z}}{\phi\left( \left[ \frac{k}{2\pi}, \frac{k+1}{2\pi} \right] \right)} = \bigcup_{k\in\mathbb{Z}}{ \left( \phi([0,1/(2\pi)]) + \frac{i}{2\pi}\, k \right) }, \] and since the Hausdorff dimension of a countable union of sets is the supremum among the Hausdorff dimensions of each of the sets (see, for instance, \cite[Chapter 4]{Mattila1995}), we have \begin{equation} \operatorname{dim}_{\mathcal{H}}\phi(\mathbb{R}) = \sup_{k \in \mathbb{Z}} \operatorname{dim}_{\mathcal{H}} \left( \phi([0,1/(2\pi)]) + \frac{i}{2\pi}\, k \right). \end{equation} Of course, all such sets have the same Hausdorff dimension, so it is enough to work with, say, $k=0$. 
On the other hand, the set of rational points is countable and therefore has $\mathcal{H}^d$-measure zero for every $d >0$. Thus, $\phi([0,1/(2\pi)])$ has the same $\mathcal{H}^d$-measure as $\phi(\frac{1}{2\pi}\left((0,1)\cap\mathbb{I}\right))$. As a consequence, $\operatorname{dim}_{\mathcal{H}}\phi(\mathbb{R}) = \operatorname{dim}_{\mathcal{H}}\phi(\mathcal{I})$, where $\mathcal{I} = \frac{1}{2\pi}\left( (0,1) \cap \mathbb{I} \right)$. It will be enough to find a suitable countable cover of the set $\phi(\mathcal{I})$. First, we see that \begin{equation}\label{CoverOfIrrationals} (0,1) \cap \mathbb{I} \, \, \, \subset \bigcup_{\substack{1 \leq p < q \\ \operatorname{gcd}(p,q)=1 \\ q \geq Q_0}}{ B\left( \frac{p}{q}, \frac{1}{q^2} \right) }, \qquad \forall Q_0 \in \mathbb{N}. \end{equation} This cover is a direct consequence of the theory of continued fractions. Let $\rho \in (0,1) \cap \mathbb{I}$ and let $\rho_n = p_n/q_n$ be its convergents by continued fractions for $n \in \mathbb{N}$. These convergents are irreducible rationals such that $\lim_{n \to \infty}{q_n}=+\infty$ and $|\rho - p_n/q_n| < q_n^{-2}$ for every $n \in \mathbb{N}$. Consequently, no matter how large $Q_0 \in \mathbb{N}$ is, we can find $N_0 \in \mathbb{N}$ such that $q_n \geq Q_0$ and $|\rho - p_n/q_n| < q_n^{-2}$ for every $n > N_0$, hence \eqref{CoverOfIrrationals}. Let now the asymptotics in Corollary~\ref{thm:Global_Bound} with $p = p_n$ and $q=q_n$ be evaluated at $h = h_n = t_\rho - t_{p_n,q_n}$, so that $t_{p_n,q_n} + h_n = t_{\rho}$. Then, $|h_n| < 1/(2\pi q_n^2)$, which implies $q_n^{3/2}|h_n|^{3/2} < q_n^{-1/2}|h_n|^{1/2}$. Thus, there exists $C>0$ such that \begin{equation}\label{UpperBoundSimplified} |\phi(t_{\rho}) - \phi(t_{p_n,q_n})| \leq C\,\frac{|h_n|^{1/2}}{q_n^{1/2}} < \frac{C}{q_n^{3/2}}, \qquad \forall n \in \mathbb{N}. 
\end{equation} Thus, \eqref{CoverOfIrrationals} is translated to the image of $\phi$ because \eqref{UpperBoundSimplified} shows that \begin{equation}\label{CoverOfirrationalsPhi} \phi ( \mathcal{I} ) \subset \bigcup_{\substack{1 \leq p < q \\ \operatorname{gcd}(p,q)=1 \\ q \geq Q_0}}{ B\left( \phi\left(t_{p,q}\right), \frac{C}{q^{3/2}} \right) }, \qquad \forall Q_0 \in \mathbb{N}. \end{equation} Let $d >0$. This cover of $\phi(\mathcal I )$ yields an upper bound for the $d$-dimensional Hausdorff content \eqref{HausdorffContent} at scale $\delta = 2C/Q_0^{3/2}$, since we have \begin{equation}\label{eq:Estimation_Of_The_Hausdorff_Content} \mathcal{H}^d_{2C/Q_0^{3/2}}(\phi(\mathcal{I})) \leq \sum_{\substack{1 \leq p < q \\ \operatorname{gcd}(p,q)=1 \\ q \geq Q_0}}{ \left( \operatorname{diam} B\left( \phi\left(t_{p,q}\right), \frac{C}{q^{3/2}} \right) \right)^d } = (2C)^d\,\sum_{ q = Q_0}^{\infty}{ \frac{\varphi(q)}{q^{3d/2}} } \leq (2C)^d\,\sum_{ q = Q_0}^{\infty}{ \frac{1}{q^{3d/2 - 1}} } \end{equation} for every $Q_0 \in \mathbb{N}$. Here, $\varphi$ is Euler's totient function, whose trivial bound $\varphi(q) < q$, which in general cannot be essentially improved, we used above. Then, take the limit $Q_0 \to \infty$ so that \begin{equation} \mathcal{H}^d(\phi(\mathcal{I})) = \lim_{Q_0 \to \infty} \mathcal{H}^d_{2C/Q_0^{3/2}}(\phi(\mathcal{I})) \leq (2C)^d\, \lim_{Q_0 \to \infty} \sum_{ q = Q_0}^{\infty}{ \frac{1}{q^{3d/2 - 1}} } . \end{equation} The sum inside the limit converges if and only if $3d/2 - 1 > 1$, or equivalently if and only if $d > 4/3$, so \begin{equation} \mathcal{H}^d(\phi(\mathcal{I})) = 0, \qquad \forall d > 4/3. \end{equation} According to the definition of the Hausdorff dimension \eqref{HausdorffDimension}, this implies $ \operatorname{dim}_{\mathcal{H}}( \phi(\mathcal{I} )) \leq 4/3 $. 
\qed \begin{rem} Using the Dirichlet approximation theorem instead of the theory of continued fractions to obtain a cover like \eqref{CoverOfIrrationals} gives some extra information about $\mathcal{H}^{4/3}(\phi(\mathcal{I}))$. Dirichlet's theorem states that given a natural number $N \in \mathbb{N}$ and any irrational $\rho$, there exist $p,q \in \mathbb{Z}$ such that $1 \leq q \leq N$ and $|\rho - p/q| < 1/(qN)$. This implies that \begin{equation} (0,1) \cap \mathbb I \,\,\, \subset \bigcup_{\substack{ 1 \leq q \leq N \\ 1 \leq p \leq q }} B \left(\frac{p}{q}, \frac{1}{qN}\right), \qquad \forall N \in \mathbb{N}. \end{equation} Fix $N \in \mathbb N$ and let $p_N(\rho)/q_N(\rho)$ be the approximation of $\rho$ corresponding to $N$. Plugging this into \eqref{UpperBoundSimplified}, we get \begin{equation} \left| \phi(t_\rho) - \phi(t_{p_N(\rho),q_N(\rho)}) \right| \leq C\, \frac{ |\rho - p_N(\rho)/q_N(\rho)|^{1/2} }{ q_N(\rho)^{1/2} } \leq \frac{C}{ q_N(\rho)\, N^{1/2} }, \end{equation} which means that \begin{equation} \phi(\mathcal{I}) \subset \bigcup_{\substack{ 1 \leq q \leq N \\ 1 \leq p \leq q }} B \left( \phi(t_{p,q}), \frac{C}{q N^{1/2}} \right), \qquad \forall N \in \mathbb N. \end{equation} Moreover, the diameters of the balls satisfy $ 2C/(qN^{1/2}) \leq 2C\,N^{-1/2}$. Thus, for $1 \leq d < 2$, \begin{equation} \mathcal{H}^d_{2CN^{-1/2}}(\phi(\mathcal I)) \leq \sum_{\substack{ 1 \leq q \leq N \\ 1 \leq p \leq q }} \frac{(2C)^d}{(qN^{1/2})^d} = \frac{(2C)^d}{N^{d/2}}\,\sum_{ q=1 }^N \frac{1}{q^{d-1}} \leq (2C)^d\, \frac{N^{2- 3d/2}}{2-d}, \end{equation} which shows as before that $\mathcal{H}^d(\phi(\mathcal I)) = \lim_{N\to \infty} \mathcal{H}^d_{2CN^{-1/2}}(\phi(\mathcal I)) = 0$ for every $d > 4/3$, but also, more interestingly, \begin{equation} \mathcal{H}^{4/3}(\phi(\mathcal I)) = \lim_{N\to \infty} \mathcal{H}^{4/3}_{2CN^{-1/2}}(\phi(\mathcal I)) \leq \frac{(2C)^{4/3}}{2 - 4/3} = \frac32\, (2C)^{4/3}. 
\end{equation} \end{rem} \subsection{Proof of Theorem~\ref{TheoremHausdorffDimensionGeneralised}}\label{Subsection_TheoremHausdorffDimensionGeneralised} We follow the structure of the proof of Theorem~\ref{TheoremHausdorffDimension}, but we use deeper results that relate the rate of convergence of the approximations by continued fractions with the H\"older regularity coefficients defined in \eqref{MaximalHolderRegularity}. Let $p_n/q_n$ be the $n$-th convergent by continued fractions of $\rho \in (0,1) \cap \mathbb{I}$. As above, $| \rho - p_n/q_n | < q_n^{-2}$, but now we want to quantify how much smaller than $q_n^{-2}$ this error is. For that, define the sequence $(\gamma_n)_{n \in \mathbb{N}}$ by \begin{equation}\label{DefinitionOfGammaN} \left| \rho - \frac{p_n}{q_n} \right| = \frac{1}{q_n^{\gamma_n}} , \qquad \forall n \in \mathbb{N}. \end{equation} It is clear that $\gamma_n > 2$ for every $n \in \mathbb{N}$. Of all convergents, let us work only with the approximations with $q_n \equiv 0,1,3 \pmod{4}$, of which there are always infinitely many (see Lemma~\ref{lem:Continued_Fraction_Auxiliary}), and define \begin{equation}\label{DefinitionOfGamma} \begin{split} \gamma(\rho) & = \sup\left\{ \tau \mid \gamma_n \geq \tau \text{ for infinitely many } n \in \mathbb{N} \text{ such that } q_n \equiv 0,1,3 \,(\text{mod } 4) \right\} \\ & = \limsup_{\substack{n \to \infty \\ q_n \equiv 0,1,3 \,(\text{mod } 4)}} \gamma_n. \end{split} \end{equation} There is a direct connection between $\gamma$ and the H\"older exponent $\alpha_\phi$ \eqref{MaximalHolderRegularity}, given by \begin{equation}\label{CorrespondenceJaffard} \alpha_\phi(t_{\rho}) = \frac12 + \frac{1}{2\gamma(\rho)}. \end{equation} This identity is an adaptation of the original result for $\phi_D$ shown by Jaffard in \cite{Jaffard1996}; see Subsection~\ref{sec:CorrespondenceJaffard} for details and proof. 
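As a purely numerical illustration of \eqref{DefinitionOfGammaN} and \eqref{DefinitionOfGamma} (a hedged sketch, not part of the proof; the helper \texttt{cf\_convergents} is ours), one can compute the exponents $\gamma_n$ for a concrete irrational and observe that they always exceed $2$, and that for a badly approximable number such as the golden ratio they approach $2$:

```python
from math import floor, gcd, log, sqrt

def cf_convergents(x, n):
    # first n continued fraction convergents (p_k, q_k) of x
    p_prev, q_prev, p, q = 1, 0, floor(x), 1
    out, y = [(p, q)], x
    for _ in range(n - 1):
        y = 1.0 / (y - floor(y))        # next complete quotient
        a = floor(y)
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        out.append((p, q))
    return out

x = (1 + sqrt(5)) / 2                   # golden ratio, badly approximable
gammas = []
for p, q in cf_convergents(x, 14):
    assert gcd(p, q) == 1               # convergents are irreducible
    if q > 1:
        # gamma_n is defined by |x - p_n/q_n| = q_n^(-gamma_n)
        gammas.append(-log(abs(x - p / q)) / log(q))
assert all(g > 2 for g in gammas)       # always gamma_n > 2
assert gammas[-1] < 2.3                 # ...and close to 2 for the golden ratio
```

For Liouville-type numbers the $\gamma_n$ instead become arbitrarily large along a subsequence, which is the regime where small covers become efficient.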
The idea of the proof is that the definition of $\gamma_n$ allows us to improve the bound in \eqref{UpperBoundSimplified} because now, if $h_n = t_\rho - t_{p_n,q_n}$, \begin{equation}\label{eq:Estimations_With_Gamma_n} |\phi(t_{\rho}) - \phi(t_{p_n,q_n})| \leq C\,\frac{|h_n|^{1/2}}{q_n^{1/2}} \leq \frac{C}{q_n^{(1+\gamma_n)/2}} , \qquad \forall n \in \mathbb{N}, \end{equation} which is smaller than $C/q_n^{3/2}$, and then $\gamma(\rho)$ can be used to control the exponent $(1+\gamma_n)/2$. Thus, we take the set of points with fixed $\gamma(\rho)=\gamma$ and we cover it as in \eqref{CoverOfirrationalsPhi} but with balls of smaller diameter, yielding a better estimate for the Hausdorff dimension. Finally, the correspondence \eqref{CorrespondenceJaffard} connects these sets with the sets where $\phi$ has a given regularity. Define the sets of points with a given coefficient $\beta \geq 2$, \begin{equation}\label{CorrespondenceBetweenSets} R_{\beta} = \left\{ t_\rho \in \mathcal I \mid \gamma(\rho) = \beta \right\} = D_{\frac12 + \frac{1}{2\beta}} \cap \mathcal{I}, \end{equation} where $t_\rho = \rho/2\pi$, $D_\sigma = \{ x \in \mathbb{R} \, \mid \, \alpha(x) = \sigma \}$ and the last equality holds because of \eqref{CorrespondenceJaffard}. Let $\beta > 2$ and $t_{\rho} \in \cup_{\sigma \geq \beta}R_{\sigma}$, so that $\gamma(\rho) \geq \beta$. Then, choose $\epsilon >0$ such that $\gamma(\rho) - \epsilon \geq \beta - \epsilon > 2$. By definition of $\gamma(\rho)$, the set of indices \begin{equation} A_{\rho, \epsilon} = \{ n \in \mathbb{N} \mid q_n \equiv 0,1,3\, (\text{mod } 4) \text{ and } \gamma_n > \beta - \epsilon \} \end{equation} is infinite for all $\epsilon > 0$ as above, and hence, from \eqref{eq:Estimations_With_Gamma_n} we get \begin{equation} |\phi(t_{\rho}) - \phi(t_{p_n,q_n})| < \frac{C}{q_n^{(1+\beta - \epsilon)/2}} , \qquad \forall n \in A_{\rho,\epsilon}. 
\end{equation} As in \eqref{CoverOfirrationalsPhi}, this shows that \[ \phi \Big( \bigcup_{\sigma \geq \beta}R_{\sigma} \Big) \subset \bigcup_{\substack{ 1 \leq p < q \\ \operatorname{gcd}(p,q) = 1 \\ q \geq Q_0 }}B\left( \phi(t_{p,q}),\frac{C}{q^{(1+\beta - \epsilon)/2}} \right), \qquad \forall Q_0 \in \mathbb{N}. \] Repeating the same procedure as in \eqref{eq:Estimation_Of_The_Hausdorff_Content}, we get \[ \mathcal{H}^d \left( \phi \Big( \bigcup_{\sigma \geq \beta}R_{\sigma} \Big) \right) \leq C^d \lim_{Q_0 \to \infty}\sum_{q=Q_0}^{\infty}{ \frac{1}{q^{\frac{1+\beta - \epsilon}{2}d - 1}} } = 0, \qquad \forall d > \frac{4}{1+\beta - \epsilon}, \] so $\operatorname{dim}_{\mathcal{H}}\phi \Big( \bigcup_{\sigma \geq \beta}R_{\sigma} \Big) \leq d$ for every $d > 4/(1+\beta-\epsilon)$ and every $\epsilon >0$. Since this is valid for every $0 < \epsilon < \beta - 2$, we let $\epsilon \to 0$ to conclude that \[ \operatorname{dim}_{\mathcal{H}}\phi \Big( \bigcup_{\sigma \geq \beta}R_{\sigma} \Big) \leq \frac{4}{1+\beta}, \qquad \forall \beta > 2. \] By the correspondences \eqref{CorrespondenceJaffard} and \eqref{CorrespondenceBetweenSets}, we get the result for the H\"older regularity sets, \[ \operatorname{dim}_{\mathcal{H}}\phi \Big( \mathcal{I} \cap \bigcup_{\sigma \leq \alpha}D_{\sigma} \Big) \leq \frac{4\alpha-2}{\alpha}, \qquad \text{ for every } \quad \frac12 \leq \alpha < \frac34. \] This is also valid for $\alpha = 3/4$. Indeed, every irrational $\rho$ satisfies $\gamma(\rho) \geq 2$, which according to \eqref{CorrespondenceJaffard} means that $\alpha(t_\rho) \leq 3/4$. Hence every irrational $t_\rho$ belongs to $\mathcal{I} \cap \bigcup_{\sigma \leq 3/4}D_{\sigma}$, so the difference with the whole interval $ [0,1/(2\pi)]$ is a subset of the rationals $\{ t_x \mid x \in \mathbb{Q} \cap [0,1] \}$, an at most countable set, which has Hausdorff dimension $0$. 
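As an aside, the exponent arithmetic in the last step — that the bound $4/(1+\beta)$ becomes $(4\alpha-2)/\alpha$ under the correspondence $\alpha = \frac12 + \frac{1}{2\beta}$ from \eqref{CorrespondenceJaffard} — is a one-line computation, which can be confirmed in exact rational arithmetic (a hedged Python sketch, not part of the proof):

```python
from fractions import Fraction

# Under alpha = 1/2 + 1/(2*beta), the dimension bound 4/(1+beta) for the
# gamma-sets equals (4*alpha - 2)/alpha for the Holder regularity sets.
for beta in [Fraction(n, 8) for n in range(17, 100)]:   # rational beta > 2
    alpha = Fraction(1, 2) + 1 / (2 * beta)
    assert Fraction(4) / (1 + beta) == (4 * alpha - 2) / alpha
```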
Hence, according to Theorem~\ref{TheoremHausdorffDimension}, \begin{equation} \operatorname{dim}_{\mathcal{H}}\phi \Big( \mathcal{I} \cap \bigcup_{\sigma \leq 3/4}D_{\sigma} \Big) = \operatorname{dim}_{\mathcal{H}} \phi([0,1/(2\pi)]) \leq 4/3. \end{equation} As in the proof of Theorem~\ref{TheoremHausdorffDimension}, the theorem follows because the periodic property \eqref{PeriodicityPhi} implies that $\phi\left( \bigcup_{\sigma \leq \alpha}D_{\sigma}\right)$ is a countable union of translates of $\phi \left( \mathcal{I} \cap \bigcup_{\sigma \leq \alpha}D_{\sigma} \right)$. Also, the first inequality of the theorem is just a consequence of the inclusion $ D_{\alpha} \subset \bigcup_{\sigma \leq \alpha}D_{\sigma} $. \section{Technical results for Section~\ref{Section_HausdorffDimension}}\label{sec:Technical_Results} \subsection{Proof of the correspondence (\ref{CorrespondenceJaffard})}\label{sec:CorrespondenceJaffard} In \cite{Jaffard1996}, Jaffard proved \begin{equation}\label{eq:Connection_Alpha_Tau_Proof} \alpha_{\phi_D}(\rho) = \frac12 + \frac{1}{2\tau(\rho)}, \end{equation} where $\phi_D$ is Duistermaat's version \eqref{PhiDuistermaat}, $\alpha_{\phi_D}$ is the H\"older exponent of $\phi_D$ defined in \eqref{MaximalHolderRegularity} and \begin{equation}\label{eq:Definition_Of_Tau_Proof} \tau(x) = \sup \left\{ \tau \, : \, \Big| x - \frac{p_n}{q_n} \Big| < \frac{1}{q_n^\tau}, \, \text{ for infinitely many } \frac{p_n}{q_n} \text{ such that not both } p_n,q_n \text{ are odd}\, \right\}, \end{equation} which is similar to the irrationality exponent of $x$ \footnotemark \footnotetext{The irrationality exponent of an irrational $\rho$ is defined as \begin{equation} \mu(\rho) = \sup \left\{ \mu >0 \, : \, \Big| \rho - \frac{p}{q} \Big| < \frac{1}{q^\mu} \text{ for infinitely many rationals } \frac{p}{q} \, \right\}, \end{equation} and it can be proved (as in Lemma~\ref{thm:Lemma_Gamma_Tau_Aux}) that equivalently, if $p_n/q_n$ are the convergents of $\rho$, then 
\begin{equation} \mu(\rho) = \sup \left\{ \mu >0 \, : \, \Big| \rho - \frac{p_n}{q_n} \Big| < \frac{1}{q_n^\mu} \text{ for infinitely many } n \in \mathbb N \, \right\}. \end{equation} }. In this subsection, we check that \eqref{CorrespondenceJaffard} is the equivalent expression for $\phi$, where $\tau$ is replaced by $\gamma$ \eqref{DefinitionOfGamma}. It is clear from \eqref{FromPhiDuistermaatToPhi} that $\phi_D$ and $\phi$ share regularity properties. More precisely, $\phi$ has at $t_\rho = \rho/2\pi$ the regularity that $\phi_D$ has at $2\rho$, so \begin{equation} \alpha_{\phi}(t_{\rho}) = \alpha_{\phi_D}(2\rho). \end{equation} Therefore, from \eqref{eq:Connection_Alpha_Tau_Proof} we immediately get \begin{equation} \alpha_{\phi}(t_\rho) = \frac12 + \frac{1}{2\tau(2\rho)}. \end{equation} However, we want to connect $\alpha_{\phi}(t_\rho)$ directly with some irrationality exponent of $\rho$, not of $2\rho$. As is usual in this transition (see Section~\ref{Section_Heuristics}, \eqref{eq:Classification_Of_Rationals}), the condition that $p_n,q_n$ are not both odd for $\phi_D$ turns into $q_n \equiv 0,1,3 \pmod{4}$ for $\phi$, so we expect the correct exponent to be \begin{equation}\label{eq:Definition_Of_Gamma_Proof} \gamma(x) = \sup\left\{ \gamma \, : \, \Big| x - \frac{p_n}{q_n} \Big| < \frac{1}{q_n^\gamma} \text{ for infinitely many } n \in \mathbb{N} \text{ with } q_n \equiv 0,1,3 \,(\text{mod } 4) \right\}, \end{equation} which is the same as \eqref{DefinitionOfGamma}. We prove the following: \begin{lem}\label{thm:Lemma_Gamma_Tau} Let $x \in \mathbb R \setminus \mathbb Q$. Then, $\gamma(x) = \tau(2x)$. \end{lem} We split the proof into two steps. 
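The key continued-fraction input for the first step — any rational with $|x - p/q| < 1/(2q^2)$ is necessarily a convergent of $x$, by \cite[Theorem 19]{Khinchin1964} — can be probed numerically by brute force (a hedged sketch, not part of the proof; the helper \texttt{cf\_convergents} is ours):

```python
from math import floor, gcd, pi

def cf_convergents(x, n):
    # first n continued fraction convergents (p_k, q_k) of x
    p_prev, q_prev, p, q = 1, 0, floor(x), 1
    out, y = [(p, q)], x
    for _ in range(n - 1):
        y = 1.0 / (y - floor(y))
        a = floor(y)
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        out.append((p, q))
    return out

x = pi
convs = set(cf_convergents(x, 12))
for q in range(1, 2000):
    p = round(x * q)       # |x - p/q| < 1/(2q^2) forces p to be this integer
    if gcd(p, q) == 1 and abs(x - p / q) < 1 / (2 * q**2):
        assert (p, q) in convs   # such p/q must be a convergent (Khinchin, Thm 19)
```

For $x = \pi$ the rationals detected this way are $3/1$, $22/7$ and $355/113$, all of them convergents.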
First, we prove in Lemma~\ref{thm:Lemma_Gamma_Tau_Aux} that $\gamma$ and $\tau$ can be defined using any rational, that is, by \begin{equation}\label{eq:Definition_Of_Tau_R} \tau_R(x) = \sup \left\{ \tau \, : \, \Big| x - \frac{p}{q} \Big| < \frac{1}{q^\tau}, \, \text{ for infinitely many rationals } \frac{p}{q} \text{ with } p,q \text{ not both odd}\, \right\}, \end{equation} and \begin{equation}\label{eq:Definition_Of_Gamma_R} \gamma_R(x) = \sup\left\{ \gamma : \Big| x - \frac{p}{q} \Big| < \frac{1}{q^\gamma} \text{ for infinitely many rationals } \frac{p}{q} \text{ with } q \equiv 0,1,3 \,(\text{mod } 4) \right\}, \end{equation} where in both definitions all fractions must be irreducible. Then, we prove the equality of \eqref{eq:Definition_Of_Tau_R} and \eqref{eq:Definition_Of_Gamma_R} in Lemma~\ref{thm:Lemma_Gamma_Tau_R}. \begin{lem}\label{thm:Lemma_Gamma_Tau_Aux} Let $x \in \mathbb{R} \setminus \mathbb{Q}$. Then, $\tau_R(x) = \tau(x)$ and $\gamma_R(x) = \gamma(x)$. \end{lem} \begin{proof} We prove $\tau_R(x) = \tau(x)$; the proof for $\gamma$ is analogous. First, it is clear that \begin{equation}\label{GoodApproximation} \begin{split} & \left\{ \tau \, : \, \Big| x - \frac{p_n}{q_n} \Big| < \frac{1}{q_n^\tau} \, \text{ for infinitely many convergents } \frac{p_n}{q_n} \text{ with } p_n,q_n \text{ not both odd}\, \right\} \\ & \qquad \subset \left\{ \tau \, : \, \Big| x - \frac{p}{q} \Big| < \frac{1}{q^\tau} \, \text{ for infinitely many rationals } \frac{p}{q} \text{ with } p,q \text{ not both odd}\, \right\}, \end{split} \end{equation} so taking the supremum we get $\tau(x) \leq \tau_R(x)$. Let now $\tau$ be such that there are infinitely many rationals $p/q$ such that $p$ and $q$ are not both odd and $|x-p/q| < q^{-\tau}$. Assume that $\tau > 2$, so that \begin{equation} \Big| x - \frac{p}{q} \Big| < \frac{1}{q^\tau} \leq \frac{1}{2q^2} \quad \Longleftrightarrow \quad 2 \leq q^{\tau - 2} \end{equation} holds whenever $q > 2^{1/(\tau-2)}$. 
Since we are working with infinitely many rationals $p/q$, in particular infinitely many of them satisfy this last property. It is a property of continued fractions (see \cite[Theorem 19]{Khinchin1964}) that every rational $p/q$ with $|x - p/q| < 1/(2q^2)$ is a convergent of $x$, so there are infinitely many continued fraction convergents $p_n/q_n$ such that $|x - p_n/q_n| < q_n^{-\tau}$. Thus, \begin{equation}\label{eq:Part_In_Proof_Tau_Aux} \begin{split} & \left\{ \tau > 2 \, : \, \Big| x - \frac{p}{q} \Big| < \frac{1}{q^\tau} \, \text{ for infinitely many rationals } \frac{p}{q} \text{ not both odd}\, \right\} \\ & \qquad \subset \left\{ \tau > 2 \, : \, \Big| x - \frac{p_n}{q_n} \Big| < \frac{1}{q_n^\tau} \, \text{ for infinitely many convergents } \frac{p_n}{q_n} \text{ not both odd}\, \right\}. \end{split} \end{equation} To continue, we need to check that $ \tau(x) \geq 2$. This is a consequence of $|x - p_n/q_n| < q_n^{-2}$ being true for all $n \in \mathbb N$ and of the fact that there are infinitely many convergents $p_n/q_n$ with $p_n$ and $q_n$ not both odd (in fact, the consecutive values $p_n, q_n, p_{n-1},q_{n-1}$ cannot all be odd because $q_np_{n-1} - q_{n-1}p_n = (-1)^n$, see \cite[Theorem 2]{Khinchin1964}). Together with the inclusion \eqref{GoodApproximation} from the beginning of the proof, this implies that $2 \leq \tau(x) \leq \tau_R(x)$. Thus, we separate two cases. If $\tau_R(x)=2$, then $2 \leq \tau(x) \leq \tau_R(x) = 2$ and hence $\tau(x) = \tau_R(x)$. 
Otherwise, $\tau_R(x) > 2$, and by the definition of the supremum and by \eqref{eq:Part_In_Proof_Tau_Aux}, \begin{equation} \begin{split} \tau_R(x) & = \sup \left\{ \tau > 2 \, : \, \Big| x - \frac{p}{q} \Big| < \frac{1}{q^\tau} \, \text{ for infinitely many rationals } \frac{p}{q} \text{ not both odd}\, \right\} \\ & \leq \sup \left\{ \tau > 2 \, : \, \Big| x - \frac{p_n}{q_n} \Big| < \frac{1}{q_n^\tau} \, \text{ for infinitely many convergents } \frac{p_n}{q_n} \text{ not both odd}\, \right\} \\ & \leq \sup \left\{ \tau \geq 2 \, : \, \Big| x - \frac{p_n}{q_n} \Big| < \frac{1}{q_n^\tau} \, \text{ for infinitely many convergents } \frac{p_n}{q_n} \text{ not both odd}\, \right\} \\ & = \tau(x), \end{split} \end{equation} and the proof is complete. \end{proof} Thanks to Lemma~\ref{thm:Lemma_Gamma_Tau_Aux}, Lemma~\ref{thm:Lemma_Gamma_Tau} follows from the following. \begin{lem}\label{thm:Lemma_Gamma_Tau_R} Let $x \in \mathbb R \setminus \mathbb Q $. Then, $\gamma_R(x) = \tau_R(2x)$. \end{lem} \begin{proof} Rewrite $\tau_R(2x)$ as \begin{equation} \begin{split} \tau_R(2x) & = \sup \left\{ \tau \, : \, \Big| 2x - \frac{p}{q} \Big| < \frac{1}{q^\tau}, \, \text{ for infinitely many } \frac{p}{q} \text{ not both odd}\, \right\} \\ & = \sup \left\{ \tau \, : \, \Big| x - \frac{1}{2}\frac{p}{q} \Big| < \frac{1}{2q^\tau}, \, \text{ for infinitely many } \frac{p}{q} \text{ not both odd}\, \right\}. \end{split} \end{equation} We want to write the bound $1/(2q^\tau)$ in terms of the denominator of the new fraction $p/(2q)$, and there are two different cases: \begin{enumerate} \item If $p$ is even and $q$ is odd, then $p/(2q) = (p/2)/q$, and the denominator is $q$. We leave the condition as $|x - (p/2)/q| < 1/(2q^\tau)$. \item If $p$ is odd and $q$ is even, then $p/(2q)$ is already irreducible, and the denominator is $2q$. We rewrite the condition as $|x - p/(2q)| < 2^{\tau - 1}/(2q)^\tau$. 
\end{enumerate} The condition must hold for infinitely many rationals, so if we relabel as \begin{equation}\label{eq:P1} \tag{$P1_\tau$} \Big| x - \frac{p}{q} \Big| < \frac{1}{2q^\tau}, \qquad \text{ if } q \text{ odd } \end{equation} and \begin{equation}\label{eq:P2} \tag{$P2_\tau$} \Big| x - \frac{p}{q} \Big| < \frac{2^{\tau - 1}}{q^\tau}, \qquad \text{ if } q \equiv 0 \pmod{4}, \end{equation} then $\tau_R(2x)$ is equivalently given by \begin{equation} \tau_R(2x) = \sup \left\{ \tau \, : \, \text{ infinitely many } \frac{p}{q} \text{ satisfy their corresponding } \eqref{eq:P1} \text{ or } \eqref{eq:P2} \right\}, \end{equation} where the rationals have to be such that $q \equiv 0,1,3 \pmod{4}$. By Lemma~\ref{thm:Lemma_Gamma_Tau_Aux}, we know that $\tau_R(2x),\gamma_R(x) \geq 2$, so we may work only with $\tau, \gamma \geq 2$ throughout the proof. Fix $\epsilon >0$. With the definition of $\gamma_R(x)$ in mind, assume that $\gamma \geq 2$ is such that $|x - p/q| < 1/q^{\gamma+\epsilon}$ for infinitely many rationals with $q \equiv 0,1,3 \pmod{4}$. For those satisfying $q \equiv 0 \pmod 4$, \begin{equation} \frac{1}{q^{\gamma + \epsilon}} < \frac{2}{q^\gamma} \leq \frac{2^{\gamma - 1}}{q^\gamma} \end{equation} always holds, so (\textcolor{blue}{$P2_\gamma$}) holds. Also, for those with $q \equiv 1,3 \pmod 4$, \begin{equation} \frac{1}{q^{\gamma + \epsilon}} < \frac{1}{2q^\gamma} \, \Longleftrightarrow \, 2 < q^\epsilon, \end{equation} so (\textcolor{blue}{$P1_\gamma$}) holds for $q>2^{1/\epsilon}$. 
In short, all rationals that satisfy $q>2^{1/\epsilon}$, which are infinitely many, satisfy their corresponding (\textcolor{blue}{$P1_\gamma$}) or (\textcolor{blue}{$P2_\gamma$}), so \begin{equation} \begin{split} & \left\{ \gamma \geq 2 \mid \Big| x - \frac{p}{q} \Big| < \frac{1}{q^{\gamma+\epsilon}} \text{ for infinitely many } \frac{p}{q} \text{ with } q \equiv 0,1,3 \,(\text{mod } 4) \right\} \\ & \qquad \quad \subset \left\{ \tau \geq 2 \, : \, \text{ infinitely many } \frac{p}{q} \text{ satisfy } \eqref{eq:P1} \text{ or } \eqref{eq:P2} \right\}, \end{split} \end{equation} or equivalently, \begin{equation}\label{Epsilons} \begin{split} & \left\{ \sigma \geq 2+\epsilon \mid \Big| x - \frac{p}{q} \Big| < \frac{1}{q^{\sigma}} \text{ for infinitely many } \frac{p}{q} \text{ with } q \equiv 0,1,3 \,(\text{mod } 4) \right\} - \epsilon \\ & \qquad \quad \subset \left\{ \tau \geq 2 \, : \, \text{ infinitely many } \frac{p}{q} \text{ satisfy } \eqref{eq:P1} \text{ or } \eqref{eq:P2} \right\}. \end{split} \end{equation} If we assume that $\gamma_R(x) > 2$ and choose $\epsilon < \gamma_R(x)-2$, then $\gamma_R(x) > 2+\epsilon$ and the supremum of the set on the left hand side of \eqref{Epsilons} is $\gamma_R(x)-\epsilon$. Taking suprema in \eqref{Epsilons}, we get \begin{equation}\label{eq:Gamma_Less_Than_Tau} \gamma_R(x) > 2 \quad \Longrightarrow \quad \gamma_R(x) - \epsilon \leq \tau_R(2x), \qquad \forall \epsilon < \gamma_R(x) - 2. \end{equation} This is one of the inequalities we need. In particular, $2 < \gamma_R(x) - \epsilon \leq \tau_R(2x)$. Thus, \begin{equation}\label{eq:Big_Big_1} \gamma_R(x) > 2 \quad \Longrightarrow \quad \tau_R(2x) > 2. \end{equation} We look now for the reverse inequality. Let $\tau \geq 2$ and assume that there are infinitely many rationals satisfying their corresponding ($\textcolor{blue}{P1_{\tau+\epsilon}}$) or ($\textcolor{blue}{P2_{\tau+\epsilon}}$). 
For the rationals satisfying ($\textcolor{blue}{P1_{\tau+\epsilon}}$), \begin{equation} \Big| x - \frac{p}{q} \Big| < \frac{1}{2q^{\tau+\epsilon}} < \frac{1}{q^\tau} \end{equation} always holds, and for those satisfying ($\textcolor{blue}{P2_{\tau+\epsilon}}$), we have \begin{equation} \Big| x - \frac{p}{q} \Big| < \frac{2^{\tau +\epsilon - 1}}{q^{\tau+\epsilon}} < \frac{1}{q^\tau} \quad \Longleftrightarrow \quad 2^{\tau + \epsilon - 1} < q^\epsilon, \end{equation} which holds for all $q > 2^{(\tau + \epsilon -1 )/\epsilon}$. We are working with an infinite set of rationals, so infinitely many of them satisfy $q > 2^{(\tau + \epsilon -1 )/\epsilon}$. Thus, infinitely many of them, all with $q \equiv 0,1,3 \pmod 4$, satisfy $|x - p/q| < 1/q^\tau$. Hence, \begin{equation} \begin{split} & \left\{ \tau \geq 2 \, : \, \text{ infinitely many } \frac{p}{q} \text{ satisfy (} \textcolor{blue}{P1_{\tau+\epsilon}} \text{) or (} \textcolor{blue}{P2_{\tau+\epsilon}} \text{)} \right\} \\ & \qquad \quad \subset \left\{ \gamma \geq 2 \, : \, \Big| x - \frac{p}{q} \Big| < \frac{1}{q^{\gamma}} \text{ for infinitely many } \frac{p}{q} \text{ with } q \equiv 0,1,3 \,(\text{mod } 4) \right\}, \end{split} \end{equation} or equivalently, \begin{equation}\label{Epsilons2} \begin{split} & \left\{ \sigma \geq 2 + \epsilon \, : \, \text{ infinitely many } \frac{p}{q} \text{ satisfy (} \textcolor{blue}{P1_{\sigma}} \text{) or (} \textcolor{blue}{P2_{\sigma}} \text{)} \right\} - \epsilon \\ & \qquad \quad \subset \left\{ \gamma \geq 2 \, : \, \Big| x - \frac{p}{q} \Big| < \frac{1}{q^{\gamma}} \text{ for infinitely many } \frac{p}{q} \text{ with } q \equiv 0,1,3 \,(\text{mod } 4) \right\}. \end{split} \end{equation} As before, if we assume $\tau_R(2x) > 2$, then we choose $\epsilon < \tau_R(2x) - 2$ so that $2+\epsilon < \tau_R(2x)$. 
This implies that the supremum of the set on the left hand side of \eqref{Epsilons2} is precisely $\tau_R(2x) - \epsilon$, so we get \begin{equation}\label{eq:Tau_Less_Than_Gamma} \tau_R(2x) > 2 \quad \Longrightarrow \quad \tau_R(2x) - \epsilon \leq \gamma_R(x), \qquad \forall \epsilon < \tau_R(2x) - 2. \end{equation} In particular, $ 2 < \tau_R(2x) - \epsilon \leq \gamma_R(x)$, so we also get \begin{equation}\label{eq:Big_Big_2} \tau_R(2x) > 2 \quad \Longrightarrow \quad \gamma_R(x)>2. \end{equation} We are ready to conclude. Joining \eqref{eq:Big_Big_1} and \eqref{eq:Big_Big_2} gives \begin{equation} \gamma_R(x) = 2 \quad \Longleftrightarrow \quad \tau_R(2x) = 2. \end{equation} Also, when $\gamma_R(x), \tau_R(2x) >2$, from \eqref{eq:Gamma_Less_Than_Tau} and \eqref{eq:Tau_Less_Than_Gamma} we get \begin{equation} \gamma_R(x) - \epsilon \leq \tau_R(2x) \leq \gamma_R(x) + \epsilon, \qquad \forall \epsilon < \min \{ \gamma_R(x)-2, \tau_R(2x) - 2 \}. \end{equation} Consequently, $\gamma_R(x) = \tau_R(2x)$ and the proof is complete. \end{proof} \subsection{A lemma about continued fractions} \begin{lem}\label{lem:Continued_Fraction_Auxiliary} Let $\rho \in \mathbb{R} \setminus \mathbb{Q}$ and let $p_n/q_n$ be its continued fraction convergents. Then, for any $n \in \mathbb{N}$, $q_n$ and $q_{n+1}$ are not both even. Consequently, there exists a subsequence of convergents $p_{n_j}/q_{n_j}$ such that $q_{n_j}$ is odd for all $j \in \mathbb{N}$. \end{lem} \begin{proof} By contradiction, let $N \in \mathbb{N} $ be such that $q_N$ and $q_{N+1}$ are both even. It is a basic fact of continued fractions \cite[Theorem 1]{Khinchin1964} that if the continued fraction of $\rho$ is $[a_0;a_1,a_2,\ldots]$, then the convergents satisfy $q_{n+1} = a_{n+1}q_n + q_{n-1}$ for every $n \geq 1$. In particular, \begin{equation} q_{N-1} = q_{N+1} - a_{N+1}q_N \equiv 0 \pmod{2}, \end{equation} so $q_{N-1}$ is even. By induction, $q_n$ is even for every $n \leq N$. 
However, $p_0 / q_0 = a_0 = \lfloor \rho \rfloor$, so $q_0=1$, which is odd, a contradiction. Hence, there are never two consecutive convergents with even denominator, and there are infinitely many convergents with odd denominator. \end{proof} \section{The asymptotic behavior: heuristics}\label{sec:Asymptotics_Heuristics} We now turn to the asymptotic behavior of Riemann's non-differentiable function $\phi$. Recall that we are looking for the precise behavior of $\phi(t_x + h) - \phi(t_x)$ when $h \to 0$, where $t_x = x/2\pi$. We will always work with rationals $x=p/q$ such that $p$ and $q$ are coprime, and in that case we will often denote $t_{p/q}$ as $t_{p,q}$. In this section we explain the heuristics of this computation. The arguments here will be rigorously established in Sections~\ref{Section_BaseCases} and \ref{Section_Rationals}. \subsection{Overview}\label{sec:Overview} We mentioned in the introduction that Duistermaat \cite{Duistermaat1991} computed the asymptotic behavior of $\phi_D$ near rational points. For that, he first realized that the derivative of $\phi_D$ is directly related to the Jacobi $\theta$ function \begin{equation}\label{JacobiTheta} \theta(z) = \sum_{k \in \mathbb{Z}}{ e^{\pi i k^2 z} }, \qquad z \in \mathbb{H} = \{ z \in \mathbb{C} \mid \operatorname{Im}(z) >0 \}, \end{equation} because \begin{equation}\label{eq:Phi_Duistermaat_Derivative} \phi_D'(z) = \frac12 \left( \theta(z) - 1 \right), \qquad \forall z \in \mathbb{H}. \end{equation} The $\theta$ function interacts with the modular group $\Gamma$ of M\"obius transformations $\gamma$ that satisfy \begin{equation} \gamma(z) = \frac{az+b}{cz+d}, \qquad a,b,c,d \in \mathbb{Z}, \qquad ad-bc=1, \end{equation} which is a group under the operation of composition, generated by the transformations \begin{equation} S(z) = -\frac{1}{z} \qquad \text{ and } \qquad T(z) = z + 1; \qquad \qquad \Gamma = \langle S, T \rangle. 
\end{equation} It is well-known that the Jacobi $\theta$ function interacts very well with $S$, since the inversion identity \begin{equation}\label{InversionOfTheta} \theta \left( \frac{-1}{z} \right) = \sqrt{\frac{z}{i}}\, \theta(z), \qquad \forall z \in \mathbb H, \end{equation} holds with the principal branch of the square root. But $\theta$ interacts not with $T$ but with $T^2(z) = z+2$, since trivially \begin{equation}\label{PeriodicityOfTheta} \theta(z+2) = \theta(z), \qquad \forall z \in \mathbb H. \end{equation} Thus, the group linked to $\theta$ is the subgroup $\Gamma_\theta = \langle S,T^2 \rangle$, the so-called $\theta$-modular group. It can be equivalently written as \begin{equation}\label{eq:Theta_Modular_Group} \Gamma_\theta = \left\{ \,\, \gamma(z) = \frac{az+b}{cz+d} \quad \mid \quad a,b,c,d \in \mathbb{Z}, \, \, ad-bc = 1, \, \, a \equiv d \not\equiv b \equiv c \, (\text{mod } 2) \,\, \right\}. \end{equation} Properties~\eqref{InversionOfTheta} and \eqref{PeriodicityOfTheta} and the fact that $\Gamma_\theta$ is a group imply that for every $\gamma \in \Gamma_\theta$ there exists an identity relating $\theta(\gamma(z))$ with $\theta(z)$. In fact, it is \begin{equation}\label{eq:Theta_Transformation_General} \theta(\gamma(z)) = e_{\gamma}\,\sqrt{cz+d}\,\theta(z), \qquad \forall \gamma \in \Gamma_{\theta}, \end{equation} where $e_\gamma$ is an eighth root of unity depending only on $c$ and $d$. Details on the properties of the Jacobi $\theta$ function and of the modular group can be found in \cite{Apostol1990,SteinShakarchi2003}. Duistermaat used the transformation \eqref{eq:Theta_Transformation_General} in \eqref{eq:Phi_Duistermaat_Derivative} and integrated the identity to obtain an asymptotic expansion for $\phi_D(x) - \phi_D(r)$, where $r$ is the rational pole of the chosen $\gamma \in \Gamma_\theta$. 
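Both \eqref{InversionOfTheta} and \eqref{PeriodicityOfTheta} are easy to test numerically with a truncated series (a hedged sketch, not part of the argument; the truncation level is chosen so that the neglected tail is far below the tolerance at the sample points):

```python
import cmath

def theta(z, n_max=60):
    # truncated Jacobi theta: sum over |k| <= n_max of exp(pi i k^2 z), Im z > 0
    s = 1.0 + 0j
    for k in range(1, n_max + 1):
        s += 2 * cmath.exp(cmath.pi * 1j * k * k * z)
    return s

for z in [0.3 + 0.7j, -1.2 + 0.5j, 2.0 + 1.5j]:
    assert abs(theta(z + 2) - theta(z)) < 1e-12          # periodicity under T^2
    lhs = theta(-1 / z)
    rhs = cmath.sqrt(z / 1j) * theta(z)                  # principal branch
    assert abs(lhs - rhs) < 1e-9                         # inversion identity
```

Note that for $z$ in the upper half-plane, $z/i$ lies in the right half-plane, so the principal branch of the square root is unambiguous.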
Here, as stated in the introduction, instead of using \eqref{FromPhiDuistermaatToPhi} to translate the asymptotic behavior of $\phi_D$ to $\phi$, we will compute the asymptotic behavior of $\phi$ directly. In our case, the identity \eqref{eq:Phi_Duistermaat_Derivative} takes the form \begin{equation}\label{PhiWithTheta} \phi(t) = i \,\int_0^t{ \theta(-4\pi \tau)\,d\tau }, \end{equation} at least formally, because $\theta$ is not well-defined on $\mathbb{R}$. Then, the asymptotics at $t_x$ are \begin{equation}\label{AsymptoticWithTheta} \phi(t_x + h ) - \phi(t_x) = i \,\int_{t_x}^{t_x+h}{ \theta(-4\pi \tau)\,d\tau } = i \,\int_{t_x}^{t_x+h}{ \psi(0, \tau)\,d\tau } , \end{equation} where $\psi$ is the Schr\"odinger solution \eqref{FreeSchrodingerSolution}. This expression, together with the $\theta$-modular transformations, will allow us to reduce the asymptotics around any rational to the behavior around either $0$ or $t_{1,2}$. These two, on the other hand, can be computed by hand. This reduction is related to the Talbot effect and the generalized Gauss sums \begin{equation}\label{GaussSum} G(a,b,c) = \sum_{m = 0}^{c-1}{ e^{2\pi i \,\frac{a\,m^2 + b\,m}{c}} }, \qquad a,b \in \mathbb{Z}, \quad c \in \mathbb{N}. \end{equation} Indeed, we are going to see in Subsection~\ref{Section_Heuristics} that the Talbot effect, which happens at the level of $\psi$, combined with the pseudoconformal invariance of the Schr\"odinger solution \eqref{FreeSchrodingerSolution}, yields an iterative algorithm to reduce any Gauss sum $G(p,0,q)$ to the trivial $G(0,0,1)$ or $G(1,0,2)$. Thus, \eqref{AsymptoticWithTheta} suggests that this iterative algorithm can be translated to the level of $\phi$ to reduce the behavior around $t_{p,q}$ to either $t_{0,1}=0$ or $t_{1,2}$. In fact, these iterations will materialize in a single $\theta$-modular transformation, so the reduction will be the consequence of combining \eqref{eq:Theta_Transformation_General} and \eqref{AsymptoticWithTheta}. 
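The Gauss sums \eqref{GaussSum}, together with the reciprocity formula they satisfy (stated as \eqref{eq:Reciprocity_Formula} in the next subsection), can be checked directly (a hedged numerical sketch, not part of the argument):

```python
import cmath
from math import gcd, sqrt

def G(a, b, c):
    # generalized Gauss sum: sum over m = 0..c-1 of exp(2 pi i (a m^2 + b m)/c)
    return sum(cmath.exp(2j * cmath.pi * (a * m * m + b * m) / c) for m in range(c))

# Reciprocity: G(p,0,q) = sqrt(q/p) (1+i)/4 G(-q,0,4p) for coprime p, q.
for p, q in [(1, 3), (2, 5), (3, 7), (4, 9), (5, 12)]:
    assert gcd(p, q) == 1
    lhs = G(p, 0, q)
    rhs = sqrt(q / p) * (1 + 1j) / 4 * G(-q, 0, 4 * p)
    assert abs(lhs - rhs) < 1e-9
```

For instance, $G(1,0,3) = i\sqrt{3}$ and $G(2,0,5) = -\sqrt{5}$, the first of which matches the closed form $G(1,0,q)=\sqrt{q}\,(1+i)(1+(-i)^q)/2$.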
However, the algorithm does not supply the transformation explicitly, so we will compute it in Subsection~\ref{Section_LookingForTransformations} following ideas of \cite{Jaffard1996}. \subsection{Heuristics of the reduction: the Talbot effect and Gauss sums}\label{Section_Heuristics} The Talbot effect is an optical phenomenon consisting of the interference caused by diffracted light after crossing a grating with equidistant parallel slits. In 1836, Talbot \cite{Talbot1836} discovered a distance, nowadays called the Talbot distance, at which the interference pattern matches the original grating. Later, it was discovered that at every fraction $p/q$ of the Talbot distance, the interference pattern is a grating with $q$ times as many slits as the original (see \cite{BerryMarzoliSchleich2001}). It turns out that the Talbot effect is mathematically expressed in terms of the solution $\psi$ \eqref{FreeSchrodingerSolution} to the Schr\"odinger equation \cite{BerryKlein1996,MatsutaniOnishi2003}. More precisely, \begin{equation}\label{TalbotEffect} \psi(s,t_{p,q} ) = \sum_{k \in \mathbb{Z}}{ e^{2\pi i \left( k s - k^2\frac{p}{q} \right)} } = \frac{1}{q} \, \sum_{k \in \mathbb{Z}}{ \sum_{r=0}^{q-1}{ G(-p,r,q)\, \delta\left(s-k-\frac{r}{q}\right) } }, \end{equation} where $G(-p,r,q)$ are Gauss sums \eqref{GaussSum}; see \cite[Section 3.3]{delaHozVega2014} for the details. The Talbot effect \eqref{TalbotEffect} and the pseudoconformal symmetry of the Schr\"odinger equation can be used to compute Gauss sums iteratively. The basic idea is that a symmetry together with an invariant initial datum yields an invariance for the corresponding solution, provided uniqueness of solutions holds. For example, the free Schr\"odinger equation is translation invariant: if $u(s,t)$ is a solution, then so is $u(s+1,t)$. This symmetry takes the initial condition $u(s,0)$ to $u(s+1,0)$. 
In \eqref{FreeSchrodingerEquation}, $\psi_0(s) = \psi_0(s+1)$, so assuming uniqueness, the two solutions must also coincide, so $\psi(s,t) = \psi(s+1,t)$. We repeat this procedure with the pseudoconformal symmetry \begin{equation}\label{PseudoconformalTransformation} \mathcal{P}u(s,t) = \frac{1}{\sqrt{4\pi i t}} \, \overline{u}\left( \frac{s}{t},\frac{1}{t} \right) \,e^{is^2/(4t)}, \qquad \mathcal{P}u(s,0) = \mathcal{F}^{-1}\left( \overline{u}(4\pi \cdot,0) \right)(s) = \frac{1}{4\pi}\mathcal{F}^{-1}\overline{u}\left( \frac{s}{4\pi},0 \right), \end{equation} where the bar represents complex conjugation. Due to the Poisson summation formula, the initial datum $\psi_0(s) =\psi(s,0)$ satisfies $\widehat{\psi_0} = \psi_0 = \overline{\psi_0}$, so \begin{equation} \mathcal{P}\psi_0(s) = \frac{1}{4\pi}\psi_0\left( \frac{s}{4\pi}\right) . \end{equation} Then, if uniqueness of solution is assumed, we get \begin{equation} \mathcal{P}\psi(s,t) = \frac{1}{4\pi}\psi\left( \frac{s}{4\pi},\frac{t}{(4\pi)^2} \right). \end{equation} Rearranging the above leads to the pseudoconformal invariance of $\psi$, \begin{equation}\label{Pseudoconformal_Invariance} \psi(s,t) = \frac{1}{(4\pi i t)^{1/2}}\, e^{i s^2/(4t)}\, \overline{\psi}\left( \frac{s}{4\pi t}, \frac{1}{(4\pi)^2t} \right). \end{equation} The key point is that \eqref{Pseudoconformal_Invariance} allows the reduction \begin{equation}\label{Pseudoconformal_Reduction} t_{p,q} \to \frac{1}{(4\pi)^2}\,\frac{1}{t_{p,q}} = \frac{1}{2\pi}\,\frac{q}{4p} = t_{q,4p}. 
\end{equation} To see the effect of this at the level of Gauss sums, evaluate \eqref{Pseudoconformal_Invariance} at $t_{p,q}$ and use the Talbot effect \eqref{TalbotEffect} to get \begin{equation}\label{eq:TalbotAndPseudoconformal} \frac{1}{q}\,\sum_{k \in \mathbb{Z}} \sum_{r=0}^{q-1} G(-p,r,q)\,\delta\left(s - k -\frac{r}{q}\right) = \frac{e^{\frac{i\pi}{2}\frac{q}{p}s^2}}{2\,\sqrt{2ipq}} \,\sum_{k\in\mathbb{Z}}\sum_{r=0}^{4p-1} \overline{G(-q,r,4p)}\,\delta\left( s-\frac{2p}{q}k - \frac{r}{2q} \right). \end{equation} Compare the coefficients of the respective Dirac deltas at $s=0$ to get the well-known reciprocity formula for Gauss sums, \begin{equation}\label{eq:Reciprocity_Formula} G(p,0,q) = \sqrt{\frac{q}{p}}\, \frac{1+i}{4}\,G(-q,0,4p), \end{equation} which can be found, for instance, in \cite[Theorem 1.2.2]{BerndtEvansWilliams1998}. Gauss sums are easy to compute by hand when $q$ is small. For instance, \eqref{eq:Reciprocity_Formula} immediately implies the non-trivial $G(1,0,q) = \sqrt{q} \,(1+i)(1+(-i)^q)/2$ for every $q \in \mathbb{N}$. In the same way, we may combine it with the trivial modular property \begin{equation}\label{eq:Modularity_Formula} G(a,0,c) = G(a(\text{mod } c),0,c), \end{equation} to compute $G(p,0,q)$ iteratively. We do that in Algorithm~\ref{thm:Algorithm_For_Reduction}. We do not keep track of the multiplicative factors produced each time we use the reciprocity formula \eqref{eq:Reciprocity_Formula}; we only control the reduction of the variables $(p,q)$ of the Gauss sums. \begin{alg}\label{thm:Algorithm_For_Reduction} Let $p,q \in \mathbb{N}$ be coprime integers such that $q \neq 1,2,4$ and $p<q$. Denote by $R$ the reciprocity formula \eqref{eq:Reciprocity_Formula} and by $M$ the modularity formula \eqref{eq:Modularity_Formula}. \begin{itemize} \item If $p < q/2$, do $(p,q) \xrightarrow{R} (-q,4p) \xrightarrow{M} (4p-q,4p)$. \begin{itemize} \item If $p < q/4$, then $4p < q$. The denominator has been reduced.
\item If $q/4 < p < q/2$, iterate again $(4p-q,4p) \xrightarrow{R} (-p,4p-q) \xrightarrow{M} (3p-q,4p-q)$. Moreover, $0 <4p-q< q$. The denominator has been reduced. \end{itemize} \item If $q/2 < p < q$, do $(p,q) \xrightarrow{M} (p-q,q) \xrightarrow{R} (q,4(q-p))$. \begin{itemize} \item If $p > 3q/4$, then $4(q-p) < q$. The denominator has been reduced. \item If $q/2 < p < 3q/4$, iterate again $(q,4(q-p)) \xrightarrow{M} (4p - 3q,4(q-p)) \xrightarrow{R} (q-p,3q - 4p)$, where $3q-4p < q$. The denominator has been reduced. \end{itemize} \end{itemize} If $q=4$, then $(p,4) \xrightarrow{R} (-4,4p) = (-1,p) \xrightarrow{M} (p-1,p)$, where $p=1$ or $p=3$. Therefore, the denominator $q$ can always be reduced to $q=1$ or $q=2$. When $q=2$, then $(1,2) \xrightarrow{R} (-2,4) = (-1,2) \xrightarrow{M} (1,2)$, so the algorithm takes $q=2$ to itself. \end{alg} \begin{rem} In the same way that the reciprocity formula \eqref{eq:Reciprocity_Formula} is a consequence of the pseudoconformal invariance \eqref{Pseudoconformal_Invariance}, the modular property \eqref{eq:Modularity_Formula} can be seen as a consequence of the time periodicity of $\psi$, \begin{equation}\label{Periodic_Invariance} \psi(s,t) = \psi(s,t+1/2\pi), \end{equation} and corresponds to the time transformation \begin{equation}\label{eq:Modular_Reduction} t_{p,q} \to t_{p,q} + \frac{k}{2\pi} = \frac{1}{2\pi}\,\left( \frac{p}{q} + k \right) = t_{p+kq,q}, \qquad \forall k \in \mathbb{Z}.
\end{equation} \end{rem} In short, Algorithm~\ref{thm:Algorithm_For_Reduction} shows that for every irreducible rational number $p/q$ there exists a transformation $\gamma$, formed by a combination of \eqref{Pseudoconformal_Reduction} and \eqref{eq:Modular_Reduction} and carrying two associated transformations $a_\gamma$ and $b_\gamma$ coming from the corresponding \eqref{Pseudoconformal_Invariance} and \eqref{Periodic_Invariance}, such that \begin{equation}\label{eq:Identity_After_Transformation} \psi(s,t) = a_\gamma(s,t)\,\psi\left( b_\gamma(s,t),\gamma(t) \right) \end{equation} and either $\gamma(t_{p,q}) = t_{0,1} = 0$ or $\gamma(t_{p,q})=t_{1,2}$. This identity can now be plugged into \eqref{AsymptoticWithTheta}, so a change of variables $\gamma(t)=\tau$ should lead to the asymptotic behavior around $0$ or $t_{1,2}$. At this stage, we do not know an explicit expression for $\gamma$, but we can guess the nature of $\gamma$ anyway. For that, rewrite \eqref{AsymptoticWithTheta} by changing variables $r = 2\pi \tau$ as \begin{equation}\label{eq:Asymptotic_Ready_For_Reduction} \phi(t_x+ h) - \phi(t_x) = \frac{i}{2\pi}\, \int_{x}^{x+2\pi h}{ \psi(0,r/2\pi)\,dr} = \frac{i}{2\pi}\,\int_x^{x+2\pi h}{ \theta(-2r)\,dr }. \end{equation} This way, it is adapted to the setting of Algorithm~\ref{thm:Algorithm_For_Reduction} with $r \in (x,x+2\pi h)$ on the same scale as $p/q$. That means that the time transformations coming from \eqref{Pseudoconformal_Invariance} and \eqref{Periodic_Invariance} are applied to $\eta(r) = \theta(-2r)$. According to \eqref{Pseudoconformal_Reduction}, reciprocity changes $\eta(r) \to \eta(-1/(4r))$, that is, $\theta(r) \to \theta(-1/r)$. On the other hand, in view of \eqref{eq:Modular_Reduction} with $k=1$, modularity changes $\eta(r) \to \eta(r+1)$, that is, $\theta(r) \to \theta(r+2)$.
These two transformations, \begin{equation}\label{eq:Transformations_Rescaled} r \to -1/r \qquad \text{ and } \qquad r \to r + 2, \end{equation} are precisely the generators of the $\theta$-modular group $\Gamma_\theta$ \eqref{eq:Theta_Modular_Group}. Since $\gamma$ is a combination of both, it must be a $\theta$-modular transformation $\gamma \in \Gamma_\theta$. Observe that we have changed the scale in \eqref{eq:Asymptotic_Ready_For_Reduction} again, with a change of variables $2r=\sigma$. The proper setting is now \begin{equation}\label{eq:Asymptotic_Ready_For_Reduction_With_Modular_Group} \phi(t_x+ h) - \phi(t_x) = \frac{i}{4\pi}\,\int_{2x}^{2x+4\pi h}{ \theta(- \sigma)\,d\sigma }, \end{equation} and for $x = p/q$, since the reduction will yield asymptotics at $0$ or $t_{1,2}$, either $\gamma(2p/q)=0$ or $\gamma(2p/q)=1$ will hold. From now on, we will denote by $\tilde{p}/\tilde{q}$ the irreducible fraction of $2p/q$, so that \begin{equation}\label{eq:Definition_Of_TildePTildeQ} \begin{array}{lll} \tilde{p} = 2p, & \quad \tilde{q} = q, & \qquad \text{ if } q \text{ is odd, } \\ \tilde{p} = p, & \quad \tilde{q} = q/2, & \qquad \text{ if } q \text{ is even. } \end{array} \end{equation} At this point, we can guess which rational numbers can be sent to 0 and which cannot. Assume both $\tilde{p},\tilde{q}$ are odd and that $\gamma \in \Gamma_{\theta}$ is such that $\gamma(\tilde{p}/\tilde{q}) = 0$. The coefficients in the numerator of $\gamma$, $a$ and $b$ (see \eqref{eq:Theta_Modular_Group}), are coprime, so either $a = \tilde{q}$ and $b = -\tilde{p}$ or $a = -\tilde{q}$ and $b = \tilde{p}$ must hold. But then the parity condition in \eqref{eq:Theta_Modular_Group} is not preserved, hence such a $\gamma$ does not exist. These points correspond precisely to $p/q$ with $q \equiv 2\, (\text{mod } 4)$, because then $p$ is odd and $\tilde{p}/\tilde{q} = p/(q/2)$, where $q/2$ is odd.
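Both generators trace back to the reciprocity formula \eqref{eq:Reciprocity_Formula} and the modular property \eqref{eq:Modularity_Formula}, and these two identities are straightforward to confirm numerically. An illustrative Python sketch, evaluating \eqref{GaussSum} by brute force (the ranges are an arbitrary choice):

```python
import cmath
from math import sqrt

def G(a, b, c):
    # Brute-force evaluation of the generalized Gauss sum G(a,b,c)
    return sum(cmath.exp(2j * cmath.pi * (a * m * m + b * m) / c) for m in range(c))

for q in range(1, 16):
    for p in range(1, 16):
        # reciprocity: G(p,0,q) = sqrt(q/p) (1+i)/4 G(-q,0,4p)
        assert abs(G(p, 0, q) - sqrt(q / p) * (1 + 1j) / 4 * G(-q, 0, 4 * p)) < 1e-8, (p, q)
        # modularity: G(a,0,c) = G(a mod c, 0, c)
        assert abs(G(p, 0, q) - G(p % q, 0, q)) < 1e-8, (p, q)
```

Note that the numerical check of the reciprocity formula goes through for all positive $p,q$ in the tested range, not only coprime pairs, in agreement with the classical Landsberg--Schaar form of the identity.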
On the other hand, if $q \equiv 0 \, (\text{mod } 4)$, then $\tilde{p}/\tilde{q} = p/(q/2)$ with $p$ odd and $q/2$ even, and if $q \equiv 1,3 \, (\text{mod } 4)$, then $\tilde{p}/\tilde{q} = 2p/q$ with $2p$ even and $q$ odd. In Subsection~\ref{Section_LookingForTransformations}, we prove that the general scheme for the $\theta$-modular transformations corresponding to $t_{p,q}$ is \begin{equation}\label{eq:Classification_Of_Rationals} \begin{array}{lll} q \text{ odd } & \Longrightarrow & \tilde{p} = 2p, \quad \tilde{q} = q, \quad \,\,\, \exists \gamma \in \Gamma_{\theta} \text{ such that } \gamma(\tilde{p}/\tilde{q}) = 0. \\ q \equiv 0 \, (\text{mod } 4) & \Longrightarrow & \tilde{p} = p, \quad \tilde{q} = q/2, \quad \exists \gamma \in \Gamma_{\theta} \text{ such that } \gamma(\tilde{p}/\tilde{q}) = 0. \\ q \equiv 2 \, (\text{mod } 4) & \Longrightarrow & \tilde{p} = p, \quad \tilde{q} = q/2, \quad \exists \gamma \in \Gamma_{\theta} \text{ such that } \gamma(\tilde{p}/\tilde{q}) = 1. \end{array} \end{equation} We will also compute these transformations. \subsection{Formal reduction and $\theta$-modular functions}\label{Section_LookingForTransformations} We now compute the $\theta$-modular transformations of classification \eqref{eq:Classification_Of_Rationals} explicitly, which were essentially given in \cite{Jaffard1996}. Then, combining them with \eqref{eq:Asymptotic_Ready_For_Reduction_With_Modular_Group}, we will reduce the asymptotics around $t_{p,q}$ to either 0 or $t_{1,2}$ formally. The conclusions, though heuristic, are very enlightening. We determine the coefficients $a,b,c,d$ of $\gamma \in \Gamma_{\theta}$ as in \eqref{eq:Theta_Modular_Group} using continued fractions. Let $\tilde{p}_n/\tilde{q}_n$ be the $n$-th convergent of $\tilde{p}/\tilde{q}$ by continued fractions. As a rational number, it has finitely many convergents, so there exists $N \in \mathbb{N}$ such that $\tilde{p}/\tilde{q} = \tilde{p}_N/\tilde{q}_N$. 
Also, recall that $\tilde{p}_n\,\tilde{q}_{n-1} - \tilde{q}_n\,\tilde{p}_{n-1} = (-1)^{n-1}$ for every $n \leq N$. Details about continued fractions can be found in \cite{Khinchin1964}. \subsubsection{Transformation for rationals $p/q$ such that $q \equiv 0,1,3 \pmod{4}$} \label{Subsection_TransformationNotBothOdd} According to \eqref{eq:Classification_Of_Rationals}, these rationals can be sent to 0. Indeed, $\tilde{p}$ and $\tilde{q}$ are not both odd, so choose \begin{equation} a = \tilde{q}, \qquad b = -\tilde{p}. \end{equation} Since $\tilde{p} = \tilde{p}_N$ and $\tilde{q}=\tilde{q}_N$, the other coefficients will depend on $\tilde{p}_{N-1}$ and $\tilde{q}_{N-1}$: \begin{itemize} \item If $\tilde{p}_{N-1}$ and $\tilde{q}_{N-1}$ are not both odd, we choose \[ c = (-1)^{N-1}\,\tilde{q}_{N-1}, \qquad d = (-1)^N\,\tilde{p}_{N-1}, \] so that $ad-bc = (-1)^N \left( \tilde{q}\,\tilde{p}_{N-1} - \tilde{p}\,\tilde{q}_{N-1} \right) = (-1)^{2N} = 1$. \item If $\tilde{p}_{N-1}$ and $\tilde{q}_{N-1}$ are both odd, the above does not satisfy the parity conditions, so choose \[ c = (-1)^{N-1}\,\tilde{q}_{N-1} + \tilde{q}, \qquad d = (-1)^N\,\tilde{p}_{N-1}-\tilde{p}. \] \end{itemize} \begin{rem}\label{RemarkForC013} The choice of $c$ and $d$ is not unique. Indeed, parity and the determinant are preserved with $c' = c + 2k\tilde{q}$ and $d' = d - 2k\tilde{p}$ for any $k \in \mathbb{Z}$. If $k=1$, we may work with $\tilde{q} < c < 4\tilde{q}$ in both cases. If $k=-1$ in the first case and $k=-2$ in the second one, we may also work with $-4\tilde{q} < c < -\tilde{q}$. \end{rem} \subsubsection{Transformation for rationals $p/q$ such that $q \equiv 2 \pmod{4}$} \label{Subsection_TransformationBothOdd} According to \eqref{eq:Classification_Of_Rationals}, they cannot be sent to 0. 
In this case, both $\tilde{p}$ and $\tilde{q}$ are odd, so choose \[ a = (-1)^{N-1}\,\tilde{q}_{N-1} + \tilde{q}, \quad b = (-1)^N\tilde{p}_{N-1}-\tilde{p}, \quad c = (-1)^{N-1}\tilde{q}_{N-1}, \quad d= (-1)^N\,\tilde{p}_{N-1}. \] Indeed, $\tilde{p}_{N-1}$ and $\tilde{q}_{N-1}$ cannot both be odd, so parity conditions are preserved. Also $ad-bc = 1$. One can easily check that $\gamma(\tilde{p}/\tilde{q}) =1$. \begin{rem}\label{RemarkForC2} Here too, the choice of $a,b,c,d$ is not unique, since all properties are preserved if \[ \begin{array}{ll} a = (-1)^{N-1}\,\tilde{q}_{N-1} + (2k+1)\tilde{q}, & b = (-1)^N\tilde{p}_{N-1}-(2k+1)\tilde{p}, \\ c = (-1)^{N-1}\tilde{q}_{N-1} + 2k\tilde{q}, & d= (-1)^N\,\tilde{p}_{N-1}-2k\tilde{p}, \end{array} \] for any $k \in \mathbb{Z}$. With $k=1$, we may assume $\tilde{q} < c < 3\tilde{q}$, and with $k=-1$, we may work with $-3\tilde{q} < c < -\tilde{q}$. \end{rem} \subsubsection{Formal reduction}\label{Subsection_Reduction} Once we have the transformations, let us use them in \eqref{eq:Asymptotic_Ready_For_Reduction_With_Modular_Group} to reduce from $t_{p,q}$ to either 0 or $t_{1,2}$ formally. We begin with $0 < p \leq q$ coprime such that $q \equiv 0,1, 3 \pmod{4}$. We just saw that there exists $\gamma \in \Gamma_{\theta}$ such that $\gamma(\tilde{p}/\tilde{q}) = 0$. According to \eqref{eq:Asymptotic_Ready_For_Reduction_With_Modular_Group}, for $h \in \mathbb{R}$ we have \begin{equation}\label{eq:Asymptotic_Ready_For_Reduction_With_Modular_Group_Rationals} \phi(t_{p,q}+h) - \phi(t_{p,q}) = \frac{i}{4\pi} \,\int_{\tilde{p}/\tilde{q}}^{\tilde{p}/\tilde{q} + 4\pi h}{ \theta(-\sigma)\,d\sigma }.
\end{equation} Conjugate and use the transformation \eqref{eq:Theta_Transformation_General} with the $\gamma$ above so that \begin{equation}\label{eq:Before_Changing_Variables} \overline{ \phi(t_{p,q}+h) - \phi(t_{p,q}) } = \frac{\overline{i\,e_{\gamma}}}{4\pi}\, \int_{\tilde{p}/\tilde{q}}^{\tilde{p}/\tilde{q} + 4\pi h}{ \frac{\theta(\gamma(\sigma))}{ \sqrt{c\sigma+d} } \,d\sigma }. \end{equation} Now, change variables $\gamma(\sigma) = r$. Since $a = \tilde{q},\, b=-\tilde{p}$ and $ad-bc = 1$, we have \begin{equation}\label{eq:Changing_Variables} \gamma(x) = \frac{ax+b}{cx+d} \qquad \Longrightarrow \qquad \gamma^{-1}(x) = \frac{dx-b}{-cx+a}, \quad \gamma'(x) = \frac{1}{(cx+d)^2}. \end{equation} Then, the boundaries of the integral become $\gamma(\tilde{p}/\tilde{q})=0$ and \begin{equation}\label{eq:Upper_Boundary_Of_Integral} \gamma(\tilde{p}/\tilde{q} + 4\pi h) = \frac{4\pi \tilde{q}^2h}{1+4\pi c \tilde{q} h}. \end{equation} At this point, the cases $h>0$ and $h<0$ have to be considered separately. To avoid a null denominator, if $h\geq0$, following Subsections~\ref{Subsection_TransformationNotBothOdd} and \ref{Subsection_TransformationBothOdd} we let $c = c_+$ be such that $\tilde{q} < c_+ < 4\tilde{q}$. On the other hand, if $h<0$, choose $c = c_-$ such that $-4\tilde{q} < c_- < -\tilde{q}$. This way, we have $4\pi c \tilde{q} h \geq 0$ in both cases. With \eqref{eq:Upper_Boundary_Of_Integral} in mind, define \begin{equation}\label{eq:Definition_Of_B} b(h) = \frac{\tilde{q}^2h}{1+4\pi c_\pm \tilde{q} h} = \left\{ \begin{array}{ll} \frac{\tilde{q}^2h}{1+4\pi c_{+} \tilde{q} h}, & \text{when } h \geq 0, \\ \frac{\tilde{q}^2h}{1+4\pi c_{-} \tilde{q} h}, & \text{when } h < 0. \end{array} \right. 
\end{equation} Then, \eqref{eq:Before_Changing_Variables} turns into \begin{equation}\label{eq:After_Changing_Variables_013} \overline{ \phi(t_{p,q} + h ) - \phi(t_{p,q}) } = \frac{\overline{i\,e_{\gamma}}}{4\pi}\, \int_0^{4\pi b(h)}{ \frac{\theta(r)}{(\tilde{q}-c_{\pm}r)^{3/2}}\, dr }, \qquad \text{ for all } h. \end{equation} When $|h|$ is small, $b(h)$ behaves like $\tilde{q}^2h$, so the variable $r$ of the integral is small and $\tilde{q} - cr$ is similar to $\tilde{q}$. Thus, by \eqref{eq:Asymptotic_Ready_For_Reduction_With_Modular_Group_Rationals}, the asymptotic around $t_{p,q}$ will behave approximately as \begin{equation}\label{eq:Approximated_Asymptotic_013} \phi(t_{p,q} + h ) - \phi(t_{p,q}) \approx \frac{e_{\gamma}}{\tilde{q}^{3/2}}\, \frac{i}{4\pi} \, \int_0^{ 4\pi \tilde{q}^2 h }{ \theta(-r)\, dr } = \frac{e_{\gamma}}{\tilde{q}^{3/2}}\,\phi( \tilde{q}^2\,h). \end{equation} This means that when $h \to 0$, the behavior of $\phi$ around $t_{p,q}$ is essentially the same as around 0, except that we need to rescale by $\tilde{q}^2$ in the variable and by $\tilde{q}^{-3/2}$ in the image. On the other hand, if $q \equiv 2 \pmod{4}$, there exists $\gamma \in \Gamma_{\theta}$ such that $\gamma(\tilde{p}/\tilde{q}) = 1$. The same steps as before lead to \begin{equation}\label{eq:After_Changing_Variables_2} \overline{ \phi(t_{p,q} + h) - \phi(t_{p,q}) } = \frac{\overline{i\,e_{\gamma}}}{4\pi}\, \int_1^{1 + 4\pi b(h)}{ \frac{\theta(r)}{(\tilde{q}-c_{\pm}(r-1))^{3/2}}\, dr } . \end{equation} Like before, when $|h|$ is small we have $b(h) \approx \tilde{q}^2h$, so \begin{equation}\label{eq:Approximated_Asymptotic_2} \phi(t_{p,q}+h) - \phi(t_{p,q}) \approx \frac{\,e_{\gamma}}{\tilde{q}^{3/2}} \, \frac{i}{4\pi}\, \int_1^{1 + 4\pi \tilde{q}^2 h}{ \theta(-r)\, dr } = \frac{e_{\gamma}}{\tilde{q}^{3/2}}\,\left( \phi(t_{1,2} + \tilde{q}^2\,h) - \phi(t_{1,2}) \right). 
\end{equation} Thus, up to the same scaling as before, the behavior of $\phi$ around $t_{p,q}$ is essentially the same as around $t_{1,2}$ when $h \to 0$. We will make this formal reduction rigorous in Section~\ref{Section_Rationals}. However, that will be of no use if we do not know how $\phi$ behaves around 0 and $t_{1,2}$. We devote Section~\ref{Section_BaseCases} to computing the asymptotic behavior of $\phi$ around those two points by hand. \section{Asymptotic behavior around 0 and $t_{1,2}$}\label{Section_BaseCases} \subsection{Asymptotic behavior around $0$}\label{Section_AsymptoticIn0} Since $\phi(0)=0$, we need to compute an asymptotic expression for $\phi(h)$. The main idea, which can be traced back to Smith \cite{Smith1972}, is to use the Poisson summation formula. We begin assuming $h > 0$ and writing \begin{equation}\label{eq:Definition_Of_G} \phi(h) = -h\,\sum_{k \in \mathbb{Z}}{ g(2\pi k \sqrt{h}) }, \qquad \text{ where } \qquad g(x) = \frac{e^{-ix^2}-1}{x^2}. \end{equation} The Poisson summation formula (see \cite[Theorem 3.1.17]{Grafakos2009}) gives \begin{equation}\label{PoissonSummationFormulaUsed} \phi(h) = -\frac{\sqrt{h}}{2\pi}\,\sum_{k \in \mathbb{Z}}{\widehat{g} \left( \frac{k}{2\pi \sqrt{h}} \right)} \end{equation} if $|g(x)| + |\widehat{g}(x)| \leq C(1+|x|)^{-1-\delta}$ for some $C,\delta > 0$. The function $g$ satisfies this property because it is analytic, hence bounded on compact sets, and it decays like $|x|^{-2}$ as $|x| \to \infty$. To prove the property for $\widehat{g}$, we need the following lemma, very similar to \cite[Lemma 1]{OskolkovChakhkiev2010}.
\begin{lem}\label{thm:Fourier_Transform_Lemma} The Fourier transform of $g$ defined in \eqref{eq:Definition_Of_G} is \begin{equation} \widehat{g}(\xi) = 2\pi^2\,|\xi|\, \operatorname{erfc}\left( \frac{1-i}{\sqrt{2}}\,\pi\,|\xi| \right) - \sqrt{2\pi}\,(1+i)\,e^{i\pi^2 \xi^2},\qquad \forall \xi \in \mathbb{R}, \end{equation} where $\operatorname{erfc}(z) = 1-\operatorname{erf}(z)$ stands for the complementary error function and $\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}}\int_0^z{ e^{-w^2}\,dw }$ is the error function for $z \in \mathbb C$. Its asymptotic expansion for $x \in \mathbb{R}$ at infinity is \begin{equation}\label{eq:Error_Function_Asymptotic_Expansion} \operatorname{erfc}(x) = \frac{e^{-x^2}}{\sqrt{\pi}}\, \left( \frac{1}{x} + \sum_{n=1}^N{ (-1)^n\, \frac{(2n-1)!!}{2^n\,x^{2n+1}}} \right) + O\left( \frac{1}{x^{2N+3}} \right), \qquad \forall N \in \mathbb{N}. \end{equation} \end{lem} \begin{rem} The integral of the holomorphic function $e^{-w^2}$, $w \in \mathbb{C}$ in the definition of the error function can be computed along any path connecting 0 and $z$. \end{rem} \begin{proof} From the definition of $g$, integrating by parts we get \begin{equation} \widehat{g}(\xi) = -2i \, \int_{\mathbb{R}}{ e^{-ix^2}e^{-2\pi i \xi x}\,dx } + 2 \pi i \xi \, \int_{\mathbb{R}}{ \frac{e^{-2\pi i \xi x}}{x}\,dx } - 2\pi i \xi \int_{\mathbb{R}}{ \frac{e^{-ix^2}}{x}e^{-2\pi i \xi x}\,dx }. 
\end{equation} The first two integrals are the well-known \begin{equation}\label{eq:Fourier_Transform_Of_Gaussian_And_1/x} \mathcal{F}_x\left( e^{-ix^2} \right)(\xi) = \sqrt{\pi}\,\frac{1-i}{\sqrt{2}} \, e^{i\pi^2\xi^2}, \qquad \mathcal{F}_x\left(1/x \right)(\xi) = -\pi i \operatorname{sign}(\xi), \end{equation} while the third one is the convolution of both of them, that is, \begin{equation} \int_\mathbb{R} e^{i\pi^2x^2}\,\operatorname{sign}(\xi-x)\,dx = \int_{-\infty}^{\xi} e^{i\pi^2x^2}\,dx - \int_\xi^\infty e^{i\pi^2x^2}\,dx = \operatorname{sign}(\xi)\,\int_{-|\xi|}^{|\xi|}e^{i\pi^2x^2}\,dx. \end{equation} Hence, \begin{equation} \widehat{g}(\xi) = -\sqrt{2\pi}\,(1+i)\,e^{i\pi^2\xi^2} + 2\pi^2 |\xi| - 4\pi^2\sqrt{\pi}\,\frac{1-i}{\sqrt{2}}|\xi|\,\int_0^{|\xi|}{e^{i\pi^2y^2}\,dy}. \end{equation} The last integral is essentially $\operatorname{erf}(\textstyle{\frac{1-i}{\sqrt{2}}}\,\pi |\xi|)$, because with the path $\eta(t) = \textstyle{\frac{1-i}{\sqrt{2}}}\,\pi t$, $t \in (0,|\xi|)$ we get \begin{equation} \operatorname{erf}\left(\frac{1-i}{\sqrt{2}}\,\pi |\xi|\right) = \frac{2}{\sqrt{\pi}}\, \int_0^{|\xi|} e^{-\eta(t)^2}\,\eta'(t)\,dt = 2\sqrt{\pi}\,\frac{1-i}{\sqrt{2}} \, \int_0^{|\xi|} e^{i \pi^2 t^2}\,dt. \end{equation} Thus, \begin{equation} \widehat{g}(\xi) = -\sqrt{2\pi}\,(1+i)\,e^{i\pi^2\xi^2} + 2\pi^2 |\xi|\left( 1 -\operatorname{erf}\left( \frac{1-i}{\sqrt{2}}\pi |\xi| \right) \right). \end{equation} The asymptotic expansion of $\operatorname{erfc}(x)$ for $x \in \mathbb R$ is well-known and is obtained integrating its definition by parts $N$ times. \end{proof} Since the error function is analytic, so is $\widehat{g}$. 
Also, $\operatorname{erfc}(x) = \pi^{-1/2}e^{-x^2}\left( x^{-1} + O(x^{-3}) \right)$, so we get \begin{equation}\label{DecayForFourierTransformG} \begin{split} \widehat{g}(\xi) & = 2\pi^2\,|\xi|\, \pi^{-\frac12}e^{i \pi^2\xi^2}\left( \frac{1+i}{\sqrt{2}} \pi^{-1} |\xi|^{-1} + O(|\xi|^{-3}) \right) - \sqrt{2\pi}\,(1+i)\,e^{i\pi^2 \xi^2} = O(|\xi|^{-2}) \\ \end{split} \end{equation} when $|\xi| > 1$. Thus, the hypotheses for the Poisson summation formula are satisfied and \eqref{PoissonSummationFormulaUsed} holds. Given that $\widehat{g}(0) = \int_{\mathbb{R}}{g(x)\,dx} = -\sqrt{2\pi}\,(1+i)$ and that $g$ and $\widehat{g}$ are even, Lemma~\ref{thm:Fourier_Transform_Lemma} implies \begin{equation} \phi(h) = \frac{1+i}{\sqrt{2\pi}}\,\sqrt{h} - \frac{\sqrt{h}}{\pi}\sum_{k=1}^{\infty}{ \left( \frac{\pi k}{ \sqrt{h}}\, \operatorname{erfc}\left( \frac{1-i}{2\sqrt{2}}\, \frac{ k}{ \sqrt{h}} \right) - \sqrt{2\pi}\,(1+i)\,e^{\frac{i k^2}{4h} } \right) }. \end{equation} For each $k \in \mathbb{N}$ and for any $N \in \mathbb{N}$, the asymptotic expansion of $\operatorname{erfc}$ in \eqref{eq:Error_Function_Asymptotic_Expansion} gives \begin{equation} \frac{\pi k}{ \sqrt{h}}\, \operatorname{erfc}\left( \frac{1-i}{2\sqrt{2}}\, \frac{ k}{ \sqrt{h}} \right) - \sqrt{2\pi} \, e^{\frac{ik^2}{4h}}\, (1+i) = \sqrt{\pi}\frac{1+i}{\sqrt{2}} e^{\frac{ik^2}{4h}} \sum_{n=1}^N{ \frac{(2n-1)!!\,2^{n+1}\, h^n}{i^n\,k^{2n}} } + O\left( \frac{\sqrt{h}}{k} \right)^{2N+2} . \end{equation} Sum in $k \in \mathbb{N}$ and change the order of summation to get \begin{equation}\label{Asymptotic0Preliminary} \phi(h) = \frac{1+i}{\sqrt{2\pi}}\,\sqrt{h} - \frac{1-i}{\sqrt{2\pi}}\,\sum_{n=1}^{N}{ \frac{2^{n+1} \, (2n-1)!!}{i^{n-1}}\left( \sum_{k=1}^{\infty}{ \frac{e^{ ik^2/(4h)}}{k^{2n}} } \right) h^{n+\frac12}} + O\left( h^{N+\frac32} \right) \end{equation} for any $N \in \mathbb{N}$, which is the asymptotic behavior of $\phi$ around 0. 
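The leading term of \eqref{Asymptotic0Preliminary} can also be observed numerically, directly from the representation \eqref{eq:Definition_Of_G} for $h>0$. An illustrative Python sketch; the truncation level $K$ is an ad hoc choice, with a tail error of order $1/K$ because $|g(x)| \leq 2/x^2$:

```python
import cmath
import math

def g(x):
    # g from the representation of phi; g(0) = -i by continuity
    return -1j if x == 0.0 else (cmath.exp(-1j * x * x) - 1.0) / (x * x)

def phi(h, K=100_000):
    # phi(h) = -h * sum_{k in Z} g(2 pi k sqrt(h)), truncated at |k| <= K (h > 0)
    s = g(0.0) + 2.0 * sum(g(2.0 * math.pi * k * math.sqrt(h)) for k in range(1, K + 1))
    return -h * s

h = 1e-4
leading = (1 + 1j) * math.sqrt(h / (2 * math.pi))
# the sqrt(h) term dominates as h -> 0; the remainder is of lower order
assert abs(phi(h) - leading) < 0.1 * abs(leading)
```

The loose tolerance is deliberate: the comparison only illustrates that the $\sqrt{h}$ term captures $\phi(h)$ to leading order, not the size of the oscillatory corrections.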
For negative values $h<0$, the property $\phi(-h) = \overline{\phi(h)}$ implies that \eqref{Asymptotic0Preliminary} is correct up to determining $\sqrt{h} = \pm i \sqrt{|h|}$. Indeed, writing $h = -|h| < 0$ and conjugating \eqref{Asymptotic0Preliminary} we have \begin{equation} \phi(-|h|) = \frac{1-i}{\sqrt{2\pi}}\,\sqrt{|h|} - \frac{1+i}{\sqrt{2\pi}}\,\sum_{n=1}^{N}{ 2^{n+1}\, i^{n-1} \, (2n-1)!!\left( \sum_{k=1}^{\infty}{ \frac{e^{\frac{-ik^2}{4|h|}}}{k^{2n}} } \right) |h|^{n+\frac12}} + O\left( h^{N+\frac32} \right), \end{equation} while direct substitution in \eqref{Asymptotic0Preliminary} leads to \begin{equation} \phi(-|h|) = \frac{1+i}{\sqrt{2\pi}}\,\sqrt{-|h|} - \frac{1-i}{\sqrt{2\pi}}\,\sum_{n=1}^{N}{ \frac{2^{n+1} \, (2n-1)!!}{i^{n-1}}\left( \sum_{k=1}^{\infty}{ \frac{e^{\frac{ik^2}{-4|h|}}}{k^{2n}} } \right) (-|h|)^{n+\frac12}} + O\left( h^{N+\frac32} \right). \end{equation} These two expressions coincide if $\sqrt{-1} = -i$, so \eqref{Asymptotic0Preliminary} also holds for $h<0$, taking the branch of the complex square root with $\sqrt{-1} = -i$. In short, we have proved the following proposition. \begin{prop}\label{thm:Asymptotic_At_0} Let \begin{equation}\label{eq:Spiral_0} Y_n(h) = \sum_{k=1}^{\infty}{ \frac{e^{ik^2/(4h)}}{k^{2n}} }, \qquad n \in \mathbb{N}, \end{equation} and $N \in \mathbb{N}$. Then, \begin{equation}\label{eq:Asymptotic_0_Complete} \phi(h) = \frac{1+i}{\sqrt{2\pi}}\,\sqrt{h} - \frac{1-i}{\sqrt{2\pi}}\,\sum_{n=1}^{N}{ \frac{2^{n+1} \, (2n-1)!!}{i^{n-1}}\, Y_n(h) \, h^{n+\frac12}} + O\left( h^{N+\frac32} \right) \end{equation} for every $h \in \mathbb{R}$, where $\sqrt{-1} = -i$ if $h<0$. In particular, when $N=1$, we get the self-similar asymptotic expression \begin{equation}\label{eq:Asymptotic_0_Selfsimilar} \phi(h) = \frac32\,\frac{1+i}{\sqrt{2\pi}}\,\sqrt{h} - 4\pi^2\, \frac{1-i}{\sqrt{2\pi}} \left[ \frac16 - 2\phi\left( \frac{-1}{16\pi^2h} \right) \right] h^{3/2} + O\left( h^{5/2}\right).
\end{equation} \end{prop} The only thing left to prove is the self-similar expression \eqref{eq:Asymptotic_0_Selfsimilar}, which holds because \begin{equation}\label{SelfSimilarity0} Y_1(h) = \frac{\pi^2}{6} - \frac{i}{8h} - 2\pi^2\,\phi\left( \frac{-1}{16\pi^2h} \right). \end{equation} In turn, this last identity is easy to prove using \eqref{FromPhiDuistermaatToPhi}, given that $Y_1(h) = i\pi \phi_D(1/(4\pi h))$. \begin{figure}[h] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{Curva_0-v.2} \caption{Zoom of $\phi(\mathbb R)$ around $\phi(0)=0$, located on the lower left corner.} \label{fig:Zoom0} \end{subfigure} \hspace{0.05\textwidth} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{Curva_1_2} \caption{Zoom of $\phi(\mathbb R)$ around $\phi(t_{1,2})$, located in the center of the spiral.} \label{fig:Zoom12} \end{subfigure} \caption{Graphic visualization of the asymptotic behavior of $\phi$ around 0 and $t_{1,2}$. Compare Figure~\ref{fig:Zoom0} to Figure~\ref{FIG_Curva} to appreciate the self-similar patterns, which are analytically explained by \eqref{eq:Asymptotic_0_Selfsimilar} in Proposition~\ref{thm:Asymptotic_At_0}. In Figure~\ref{fig:Zoom12}, the spiraling pattern is a consequence of \eqref{eq:Asymptotic_At_12_Short} in Proposition~\ref{thm:Asymptotic_At_1_2} and the definition of $Z_1$ \eqref{Spiral12}. } \label{fig:ZoomBasic} \end{figure} \subsection{Asymptotic behavior around $t_{1,2}$}\label{Section_AsymptoticIn12} An easy way to deduce the asymptotic behavior of $\phi$ around $t_{1,2}$ is by means of the identity \begin{equation}\label{IdentityFrom12To0} \phi(h + t_{1,2}) = \frac18 + \frac{i}{4\pi} + \frac{\phi(4h)}{2} - \phi(h), \end{equation} which can be proved by splitting the sum in the definition of $\phi$ into the even and odd indices. 
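Identity \eqref{IdentityFrom12To0} is also easy to test numerically. Assuming the convention $\theta(t) = \sum_{k \in \mathbb{Z}} e^{i\pi k^2 t}$ implicit in \eqref{PhiWithTheta}, integrating the theta series term by term gives the absolutely convergent representation $\phi(t) = it + \frac{1}{2\pi^2}\sum_{k \geq 1}(1-e^{-4\pi^2 i k^2 t})\,k^{-2}$, which the following illustrative sketch truncates at an ad hoc level $K$ (recall that $t_{1,2} = 1/(4\pi)$):

```python
import cmath
import math

def phi(t, K=50_000):
    # phi(t) = i t + (1/(2 pi^2)) sum_{k>=1} (1 - exp(-4 pi^2 i k^2 t))/k^2, truncated at K;
    # the tail is bounded by 1/(pi^2 K) since each term is at most 2/k^2 in modulus
    s = sum((1.0 - cmath.exp(-4j * math.pi**2 * k * k * t)) / (k * k)
            for k in range(1, K + 1))
    return 1j * t + s / (2.0 * math.pi**2)

t12 = 1.0 / (4.0 * math.pi)   # t_{1,2}
h = 0.0123                    # an arbitrary sample point
lhs = phi(t12 + h)
rhs = 0.125 + 1j / (4.0 * math.pi) + phi(4.0 * h) / 2.0 - phi(h)
assert abs(lhs - rhs) < 1e-4
```

The same even/odd splitting that proves \eqref{IdentityFrom12To0} applies verbatim to the truncated sums, so the numerical discrepancy comes only from the three tails.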
What is more, evaluating it at $h=0$ gives $\phi(t_{1,2}) = 1/8 + i/(4\pi)$, so \begin{equation} \phi( t_{1,2} + h ) - \phi(t_{1,2}) = \frac{\phi(4h)}{2} - \phi(h). \end{equation} We can now use Proposition~\ref{thm:Asymptotic_At_0}. The leading square root terms cancel, so $h^{3/2}$ becomes the leading order. Moreover, the coefficients of the higher order terms are \begin{equation}\label{Spiral12} 4^n\,Y_n(4h) - Y_n(h) = 4^n \, Z_n(h), \qquad \text{where} \quad Z_n(h) = \sum_{\substack{k=1 \\ k \text{ odd}}}^{\infty}{ \frac{e^{ik^2/(16h)}}{k^{2n}} }. \end{equation} As a consequence, the asymptotic behavior of $\phi$ around $t_{1,2}$ can be written as follows. \begin{prop}\label{thm:Asymptotic_At_1_2} Let $N \in \mathbb{N}$. Then, \begin{equation} \phi(t_{1,2}+h) - \phi(t_{1,2}) = -\frac{1-i}{\sqrt{2\pi}}\, \sum_{n=1}^N{ \frac{2^{3n+1}\,(2n-1)!!}{i^{n-1}}\, Z_n(h)\, h^{n+\frac12} } + O\left( h^{N+\frac32} \right) \end{equation} for every $h \in \mathbb{R}$, where $\sqrt{-1} = -i$ when $h<0$. In particular, when $N=1$, \begin{equation}\label{eq:Asymptotic_At_12_Short} \phi(t_{1,2}+h) - \phi(t_{1,2}) = -16\,\frac{1-i}{\sqrt{2\pi}}\,Z_1(h)\,h^{3/2} + O\left( h^{5/2} \right). \end{equation} \end{prop} \begin{rem} The function $Z_1(h)$ winds around the origin in a circular pattern, and the closer $h$ gets to zero, the faster it turns. Since in \eqref{eq:Asymptotic_At_12_Short} it is multiplied by $h^{3/2}$, which tends to zero when $h \to 0$, this circular pattern turns into a spiral that concentrates at $\phi(t_{1,2})$ (see Figure~\ref{fig:Zoom12}). \end{rem} \begin{rem} Identities similar to \eqref{IdentityFrom12To0} can be obtained for other rationals such as $t_{1,3}$, $t_{1,4}$, $t_{1,6}$ and $t_{1,8}$. Consequently, one can prove the asymptotic behavior of $\phi$ around those points to any desired precision.
\end{rem} \section{Asymptotic behavior around rationals}\label{Section_Rationals} Once we know the asymptotic behavior around 0 and $t_{1,2}$, we compute the case of a general rational $t_{p,q}$. For that, we make the reduction process explained in Subsection~\ref{Section_LookingForTransformations} rigorous. First of all, the formal identity \eqref{AsymptoticWithTheta} on which the reduction is based is made precise by \begin{equation} \phi(t) = i\, \lim_{\epsilon \to 0^+} \int_0^t \theta(-4\pi \tau + i\epsilon)\,d\tau. \end{equation} This is a consequence of Fubini's theorem and the dominated convergence theorem. Consequently, we get the rigorous version of \eqref{eq:Asymptotic_Ready_For_Reduction_With_Modular_Group_Rationals}, \begin{equation}\label{eq:Asymptotic_With_Limit_2} \phi\left( t_{p,q} + h \right) - \phi(t_{p,q}) = \frac{i}{4\pi}\,\lim_{\epsilon \to 0^+} \int_{\tilde{p}/\tilde{q}}^{\tilde{p}/\tilde{q} + 4\pi h}{\theta(-\tau + i\epsilon)\,d\tau}. \end{equation} Now let $\gamma \in \Gamma_\theta$ and use the transformation \eqref{eq:Theta_Transformation_General} for the Jacobi $\theta$ function so that, after conjugation, \eqref{eq:Asymptotic_With_Limit_2} turns into \begin{equation}\label{eq:Asymptotic_At_Q013_Before_Integrating_By_Parts} \overline{ \phi(t_{p,q} + h) - \phi(t_{p,q}) } = \frac{1}{4\pi i e_\gamma}\lim_{\epsilon\to 0^+}\int_{\tilde{p}/\tilde{q}}^{ \tilde{p}/\tilde{q} + 4\pi h } \frac{ \theta(\gamma(\tau + i \epsilon)) }{\sqrt{ c(\tau + i\epsilon) + d } }\,d\tau.
\end{equation} Observing that $\phi'(z) = i\theta(-4\pi z)$ whenever $\operatorname{Im}z >0$, integrate by parts choosing \begin{equation} \begin{array}{ll} u = \frac{1}{\gamma'(\tau + i \epsilon)\, \sqrt{c(\tau + i \epsilon) + d}} = (c(\tau + i \epsilon) + d)^{3/2}, & du = \frac{3c}{2}\, \sqrt{c(\tau + i \epsilon) + d}\,d\tau, \\ dv = \theta(\gamma(\tau + i\epsilon))\,\gamma'(\tau + i\epsilon)\,d\tau, & v = 4\pi i\, \phi\left(-\frac{\gamma(\tau + i\epsilon)}{4\pi}\right), \end{array} \end{equation} which yields \begin{equation} \frac{1}{e_\gamma}\,\lim_{\epsilon \to 0}\Bigg[\phi\left(-\frac{\gamma(\tau + i\epsilon)}{4\pi}\right)\,(c(\tau + i \epsilon) + d)^{\frac32} \Bigg|_{\tilde{p}/\tilde{q}}^{\tilde{p}/\tilde{q} + 4\pi h} - \frac{3c}{2}\, \int_{\tilde{p}/\tilde{q}}^{\tilde{p}/\tilde{q} + 4\pi h} \phi\left(-\frac{\gamma(\tau + i\epsilon)}{4\pi}\right)\, \sqrt{c(\tau + i \epsilon) + d} \,d\tau \Bigg]. \end{equation} This allows us to work exclusively with $\phi$, which is well-defined on the real line. Clearly, we can now take the limit $\epsilon \to 0$ in the first term. In the second term, since the integration interval is finite, everything inside the integral is bounded independently of $\epsilon$. Thus, the limit can be taken inside by the dominated convergence theorem to get \begin{equation}\label{eq:Valid_For_Any_Gamma} \phi(t_{p,q} + h) - \phi(t_{p,q}) = e_\gamma \,\Bigg[\phi\left(\frac{\gamma(\tau)}{4\pi}\right)\,(c\tau + d)^{3/2} \Bigg|_{\tilde{p}/\tilde{q}}^{\tilde{p}/\tilde{q} + 4\pi h} - \frac{3c}{2}\, \int_{\tilde{p}/\tilde{q}}^{\tilde{p}/\tilde{q} + 4\pi h} \phi\left(\frac{\gamma(\tau)}{4\pi}\right)\, \sqrt{c\tau + d} \,d\tau \Bigg]. \end{equation} \subsection{Asymptotic behavior around $t_{p,q}$ with $q \equiv 0,1,3 \pmod{4}$}\label{Section_AsymptoticInQ013} Let $p/q$ be an irreducible fraction such that $q \equiv 0,1,3 \pmod{4}$.
In Subsection~\ref{Section_LookingForTransformations} we found $\gamma \in \Gamma_{\theta}$ such that $\gamma(\tilde{p}/\tilde{q}) = 0$, where $\tilde{p}/\tilde{q} = 2p/q$. Recalling \eqref{eq:Upper_Boundary_Of_Integral}, the definition of $b(h)$ in \eqref{eq:Definition_Of_B} and $c = c_\pm$, \begin{equation} \phi(t_{p,q} + h) - \phi(t_{p,q}) = e_\gamma\, \Bigg[ \frac{(1+4\pi c_\pm \tilde{q} h)^{3/2}}{\tilde{q}^ {3/2}} \, \phi\left( b(h) \right) - \frac32 c_\pm \, \int_{\tilde{p}/\tilde{q}}^{\tilde{p}/\tilde{q} + 4\pi h}{ \phi\left( \frac{\gamma(\tau)}{4\pi} \right) \sqrt{c_\pm \tau + d} \,d\tau} \Bigg] \end{equation} Change variables $\gamma(\tau)/4\pi = r$ as in \eqref{eq:Changing_Variables} to get \begin{equation}\label{eq:Asymptotic_Closed_Form_013} \phi(t_{p,q} + h) - \phi(t_{p,q}) = e_\gamma \left[ \frac{ \phi\left( b(h) \right)}{\left(\tilde{q} - 4\pi c_\pm b(h)\right)^{3/2}} - 6\pi c_{\pm}\,\int_0^{b(h)}{ \frac{\phi(r)}{(\tilde{q}-4\pi c_{\pm} r)^{5/2}}\,dr } \right]. \end{equation} We can already use the asymptotic behavior around 0 in $\phi(b(h))$ and $\phi(r)$ because $b(h)$ behaves like $\tilde{q}^2h$ when $h$ is small. For simplicity, call $b=b(h)$. Develop $(\tilde{q} - 4\pi c_\pm b)^{-3/2}$ and $(\tilde{q} - 4\pi c_\pm b)^{-5/2}$ using the Taylor series \begin{equation}\label{TaylorSeries} (1-x)^{-\alpha} = \sum_{n=0}^{\infty}{ \binom{n + \alpha - 1}{n}x^n }, \qquad |x| < 1, \end{equation} which can be done because $4\pi c_{\pm}b(h)/\tilde{q} < 1$ for all $h \in \mathbb{R}$. Also, develop $\phi(b)$ following Proposition~\ref{thm:Asymptotic_At_0} so that we get \begin{equation}\label{AsymptoticsWithB013} \phi(t_{p,q} + h) - \phi(t_{p,q}) = \frac{e_{\gamma}}{\tilde{q}^{3/2}} \left[ \frac{1+i}{\sqrt{2\pi}}\,b^{\frac12} + \left( 2\pi\,\frac{1+i}{\sqrt{2\pi}}\,\frac{c_{\pm}}{\tilde{q}} - 4\,\frac{1-i}{\sqrt{2\pi}}\,Y_1(b) \right)\,b^{\frac32} + O\left( b^{\frac52}\right) \right]. \end{equation} Computing further terms requires integrating $r^{3/2}Y_1(r)$. 
Using \eqref{TaylorSeries} again, expand \begin{equation}\label{TaylorSeriesOfB} \begin{split} & b(h)^{1/2} = \tilde{q}\,h^{1/2} \left( 1 - 2\pi\,c_{\pm}\tilde{q}h + O\left(c_{\pm}^2\,\tilde{q}^2\,|h|^{3/2} \right) \right), \qquad b(h)^{3/2} = \tilde{q}^3\,h^{3/2} \left(1 + O\left(\tilde{q}\,|c_{\pm}h | \right)\right), \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad b^{5/2}(h) =O\left(q^5|h|^{5/2} \right), \end{split} \end{equation} which according to the definition of $b(h)$ are valid only if $4\pi |c_{\pm}\tilde{q} h| < 1$. We use them to expand \eqref{AsymptoticsWithB013} in terms of $h$ and obtain \begin{equation} \phi(t_{p,q}+h) - \phi(t_{p,q}) = e_{\gamma}\,\left( \frac{1+i}{\sqrt{2\pi}}\,\frac{h^{1/2}}{\tilde{q}^{1/2}} -4\,\frac{1-i}{\sqrt{2\pi}}\,Y_1(b(h))\,\tilde{q}^{3/2}\,t^{3/2} + O\left( \tilde{q}^{\frac72}\,h^{\frac52} \right) \right), \end{equation} valid for $\tilde{q}^2\,h < 1/(4\pi c_+/\tilde{q})$ when $h >0$ and for $\tilde{q}^2\,|h| < 1/(4\pi |c_-|/\tilde{q})$ when $h <0$. This is the asymptotic behavior we looked for, which we write in the following proposition: \begin{prop}\label{thm:Asymptotic_At_Q013} Let $p, q \in \mathbb{N}$ such that $q \equiv 0,1,3 \, (\text{mod } 4)$, $p<q$ and $\operatorname{gcd}(p,q) = 1$. Define $\tilde{p}$ and $\tilde{q}$ so that $\tilde{p}/\tilde{q} = 2p/q$ is an irreducible fraction, and set \begin{equation} Y_1(h) = \sum_{k=1}^{\infty}{ \frac{e^{ik^2/(4h)}}{k^{2}} } \qquad \text{ and } \qquad b(h) = \left\{ \begin{array}{ll} \frac{\tilde{q}^2h}{1+4\pi c_{+} \tilde{q} h}, & \text{when } h \geq 0, \\ \frac{\tilde{q}^2h}{1+4\pi c_{-} \tilde{q} h}, & \text{when } h < 0, \end{array} \right. \end{equation} where $\tilde{q} \leq c_+, |c_-| \leq 4\tilde{q}$ as in Subsection~\ref{Section_LookingForTransformations}. 
Then, there exists a complex eighth root of unity $e_{p,q}$ depending only on $p$ and $q$ such that \begin{equation}\label{eq:Asymptotic_At_Q013_Principal} \phi(t_{p,q}+h) - \phi(t_{p,q}) = \frac{e_{p,q}}{\sqrt{\pi}}\frac{1+i}{\sqrt{2}}\,\Bigg( \frac{h^{1/2}}{\tilde{q}^{1/2}} + 4\,i\,Y_1(b(h))\,\tilde{q}^{3/2}\,h^{3/2} + O\left( \tilde{q}^{7/2}\,h^{5/2} \right) \Bigg), \end{equation} which is valid when $|h| \leq 1/(4\pi\frac{|c_\pm|}{\tilde{q}}\tilde{q}^2)$ and where $c_\pm = c_+$ when $h>0$ and $c_\pm = c_-$ when $h<0$. Also, $\sqrt{-1} = -i$ when $h < 0$. The corresponding the self-similar form is \begin{equation}\label{eq:Asymptotic_At_Q013_Selfsimilar} \begin{split} & \phi(t_{p,q}+h) - \phi(t_{p,q}) \\ & \qquad \qquad = \frac32\,\frac{e_{p,q}}{\sqrt{\pi}}\frac{1+i}{\sqrt{2}}\,\Bigg[ \frac{h^{1/2}}{\tilde{q}^{1/2}} + \frac{8\pi^2}{3} i \left( \frac16 - \frac{i}{2\pi}\,\frac{c_\pm}{\tilde{q}} - 2\phi\left( \frac{-1}{16\pi^2b(h)} \right) \right)\,\tilde{q}^{\frac32}\,h^{\frac32} + O\left( \tilde{q}^{\frac72}\,h^{\frac52} \right) \Bigg] \end{split} \end{equation} for the same values $|h| \leq 1/(4\pi\frac{|c_\pm|}{\tilde{q}}\tilde{q}^2)$ as above. Also equivalently, the above is rescaled as \begin{equation}\label{eq:Asymptotic_At_Q013_Rescaled} \begin{split} & \phi\left(t_{p,q}+\frac{h}{\tilde{q}^2}\right) - \phi(t_{p,q}) \\ & \qquad \qquad = \frac{3}{2\sqrt{\pi}}\,\frac{1+i}{\sqrt{2}}\,\frac{e_{p,q}}{\tilde{q}^{3/2}} \, \left[ h^{\frac12} + \frac{8\pi^2}{3} i \left( \frac16 - \frac{i}{2\pi}\,\frac{c_\pm}{\tilde{q}} - 2\phi\left( \frac{-1}{16\pi^2\beta(h)} \right) \right)h^{\frac32} + O\left( h^{\frac52} \right) \right] \end{split} \end{equation} for all $|h| \leq 1/(4\pi\frac{|c_\pm|}{\tilde{q}})$, where $\beta(h) = b(h/\tilde{q}^2)$. \end{prop} \begin{rem} The leading square root term is the cause of every right-angled corner in Figure~\ref{FIG_Curva}, since $\sqrt{-1} = \pm i$. 
Also, the self-similar patterns of $\phi$ in Figure~\ref{FIG_Curva} are analytically explained by the term $\phi(-1/(16\pi^2b(h)))$ in the expansions \eqref{eq:Asymptotic_At_Q013_Selfsimilar} and \eqref{eq:Asymptotic_At_Q013_Rescaled}. In fact, \eqref{eq:Asymptotic_At_Q013_Selfsimilar} is obtained from \eqref{eq:Asymptotic_At_Q013_Principal} via the identity \eqref{SelfSimilarity0} that we already used in the previous section. \end{rem} \begin{rem} Comparing \eqref{eq:Asymptotic_At_Q013_Rescaled} with \eqref{eq:Asymptotic_0_Selfsimilar} in Proposition~\ref{thm:Asymptotic_At_0}, we see that $\phi$ behaves around $t_{p,q}$ essentially the same way as around 0, except rescaling the variable by $\tilde{q}^{-2}$ and the image by $\tilde{q}^{3/2}$ and replacing $h$ with $\beta(h)$ in the self-similar term. This is the rigorous version of \eqref{eq:Approximated_Asymptotic_013} that we anticipated formally in Subsection~\ref{Section_LookingForTransformations}. \end{rem} \begin{rem}\label{ChoiceOfSquareRootQ013} In Proposition~\ref{thm:Asymptotic_At_Q013}, we claim $\sqrt{-1} = -i$ whenever $h<0$. The symmetry $\phi(-t) = \overline{\phi(t)}$ was enough to determine this around 0, but there is no such symmetry around $\phi(t_{p,q})$ for $q > 2$. However, we can work with the limit $h \to 0$ in the asymptotic expression of $\phi(t_{p,q}+h) - \phi(t_{p,q})$. Let $0 < |h| \ll 1$. We start with \eqref{eq:Asymptotic_Closed_Form_013}, where the leading term when $h \to 0$ is the first one. 
Indeed, $\lim_{h\to 0}b(h) = 0$, so by Proposition~\ref{thm:Asymptotic_At_0} we have \begin{equation} \frac{\phi(b(h))}{(\tilde{q} - 4\pi c_\pm b(h))^{3/2}} \sim \frac{\textstyle{\frac{1+i}{\sqrt{2\pi}}}\,b^{1/2}+ O(b^{3/2})}{\tilde{q}^{3/2}} \qquad \text{ when } h \to 0, \end{equation} and \begin{equation} \int_0^b\frac{\phi(r)}{(\tilde{q}-4\pi c_\pm r)^{5/2}}\,dr \sim \frac{1+i}{\sqrt{2\pi}} \int_0^b \frac{ r^{1/2}+ O(r^{3/2})}{\tilde{q}^{5/2}}\,dr = \frac{1+i}{\sqrt{2\pi}} \frac{ b^{3/2} + O(b^{5/2})}{\tilde{q}^{5/2}} \qquad \text{ when } h \to 0. \end{equation} Consequently, \begin{equation}\label{eq:Aux_Short_Asymptotic} 1 = \lim_{h \to 0}\frac{ \phi(t_{p,q} + h) - \phi(t_{p,q}) }{e_{p,q}\,\tilde{q}^{-3/2}\,\phi(b(h))}. \end{equation} Define $b_-(h)$ by \begin{equation} b(- h ) = -\frac{\tilde{q}^2 h}{1 + 4\pi c_- \tilde{q}h} = - b_-(h), \end{equation} so that $ \overline{\phi(b(-h))} = \phi(b_-(h) $. Therefore, evaluate \eqref{eq:Aux_Short_Asymptotic} in $-h$ and conjugate it so that \begin{equation} \begin{split} 1 & = e_{p,q} \,\lim_{h \to 0} \frac{ \overline{ \phi(t_{p,q} - h) - \phi(t_{p,q}) }}{\tilde{q}^{- 3/2} \, \overline{\phi(b(-h))} } = e_{p,q} \, \lim_{h \to 0} \frac{\overline{\phi(t_{p,q} - h) - \phi(t_{p,q})}}{\tilde{q}^{-3/2}\, \phi(b_-(h)) } \\ & = e_{p,q} \, \,\lim_{h \to 0} \frac{\overline{\phi(t_{p,q} - h) - \phi(t_{p,q})}}{\tilde{q}^{-3/2}\, \phi(b(h)) }\, \frac{\phi(b(h))}{ \phi(b_-(h)) } = e_{p,q} \, \,\lim_{h \to 0} \frac{\overline{\phi(t_{p,q} - h) - \phi(t_{p,q})}}{\tilde{q}^{-3/2}\, \phi(b(h)) } \\ & = e_{p,q}^2 \,\lim_{h \to 0} \frac{\overline{\phi(t_{p,q} - h) - \phi(t_{p,q})}}{\phi(t_{p,q} + h) - \phi(t_{p,q})}. \end{split} \end{equation} We used \eqref{eq:Aux_Short_Asymptotic} in the last equality, and \begin{equation} \lim_{h \to 0} \frac{\phi(b(h))}{\phi(b_-(h))} = \lim_{h \to 0} \frac{b(h)^{1/2}}{b_-(h)^{1/2}} = 1. \end{equation} in the previous one. 
Finally, using the asymptotic behavior in Proposition~\ref{thm:Asymptotic_At_Q013}, we get \begin{equation} 1 = e_{p,q}^2 \lim_{h \to 0} \frac{ \overline{ e_{p,q} (1+i) (-h)^{1/2} } }{ e_{p,q} (1+i) h^{1/2} } = e_{p,q}^2 \frac{\overline{e_{p,q}}}{e_{p,q}} \frac{1-i}{1+i} \overline{\sqrt{-1}} = -i \overline{\sqrt{-1}}, \end{equation} which implies that $\sqrt{-1} = -i$ must hold so that Proposition~\ref{thm:Asymptotic_At_Q013} works also for $h<0$. \end{rem} As a corollary, we show that the asymptotic behavior in Proposition~\ref{thm:Asymptotic_At_Q013} can be truncated in its first term independently of $q$, which is what we use in the proofs of Theorems~\ref{TheoremHausdorffDimension} and \ref{TheoremHausdorffDimensionGeneralised}. \begin{cor}\label{thm:Global_Bound_Q013} Let $p, q \in \mathbb{N}$ such that $q \equiv 0,1,3 \pmod{4}$ and $\operatorname{gcd}(p,q) = 1$. Given $M >0$, there exists $C_M>0$ independent of $p$ and $q$ such that \begin{equation} \left| \phi(t_{p,q}+h) - \phi(t_{p,q}) \right| \leq C_M \frac{|h|^{1/2}}{q^{1/2}}, \qquad \forall |h| < \frac{M}{q^2}. \end{equation} \end{cor} \begin{proof} The Taylor expansion that was used to get \eqref{AsymptoticsWithB013} works because $4\pi c_{\pm}b/\tilde{q} < 1$ for all $h \in \mathbb{R}$. However, $\lim_{h\to\infty}4\pi c_\pm b(h)/\tilde{q} = 1$, so we can truncate the series uniformly only if $4\pi c_\pm b(h)/\tilde{q} < \delta$ for some fixed $0<\delta<1$. That is equivalent to $|h| < (\textstyle{\frac{\delta}{4\pi \frac{|c|}{\tilde{q}}(1-\delta)}})/\tilde{q}^2$. Now, given $M>0$, since $\delta/(1-\delta)$ covers the whole positive real line for $\delta \in (0,1)$, there exists $0<\delta_M < 1$ such that $M = \textstyle{\frac{\delta_M}{16\pi(1-\delta_M)}}$. 
Since $|c_\pm |< 4\tilde{q}$, then $|h| < M/\tilde{q}^2$ means that $4\pi c_\pm b(h)/\tilde{q} < \delta_M$, and thus we can truncate \eqref{AsymptoticsWithB013}, in the sense that there exists $C_{\delta_M}>0$ such that \begin{equation}\label{eq:Truncation_In_Terms_Of_B} \left| \phi(t_{p,q}+h) - \phi(t_{p,q}) \right| \leq C_{\delta_M} \frac{|b(h)|^{1/2}}{\tilde{q}^{3/2}}. \end{equation} Now, if $4\pi |c_{\pm} \tilde{q} h| \geq 1$, then from the definition of $b(h)$ we have $|b(h)| \leq \tilde{q}^2 |h| /2$, so we get \begin{equation} \left| \phi(t_{p,q}+h) - \phi(t_{p,q}) \right| \leq \frac{C_{\delta_M}}{\sqrt{2}} \frac{|h|^{1/2}}{\tilde{q}^{1/2}}. \end{equation} Otherwise, if $4\pi |c_{\pm} \tilde{q} h| < 1$, then the bound is immediate from Proposition~\ref{thm:Asymptotic_At_Q013} because in particular we have $|h| < \tilde{q}^{-2}$ and then \begin{equation} \tilde{q}^{7/2}|h|^{5/2} < \tilde{q}^{3/2}|h|^{3/2} < \tilde{q}^{-1/2}|h|^{1/2} \end{equation} can be used in \eqref{eq:Asymptotic_At_Q013_Principal}. \end{proof} \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{Curva_1_8} \caption{Zoom of $\phi(\mathbb R)$ around $\phi(t_{1,8})$. Compare to Figure~\ref{FIG_Curva} to appreciate the self-similar pattern, which is analytically explained in \eqref{eq:Asymptotic_At_Q013_Selfsimilar} in Proposition~\ref{thm:Asymptotic_At_Q013}. Compare it also to the behavior of $\phi$ around 0 in Figure~\ref{fig:Zoom0}. Except for a rotation by $\pi/4$ radians, they are very similar. } \label{fig:Zoom18} \end{figure} \subsection{Asymptotic behavior around $t_{p,q}$ with $q \equiv 2 \pmod{4}$}\label{Section_AsymptoticInQ2} If $p/q$ is an irreducible fraction such that $q \equiv 2 \pmod{4}$, we saw that there exists $\gamma \in \Gamma_{\theta}$ satisfying $\gamma(\tilde{p}/\tilde{q}) = 1$, where $\tilde{p}/\tilde{q} = 2p/q$ is irreducible. 
The strategy is exactly the same as in Subsection~\ref{Section_AsymptoticInQ013}, except that when integrating by parts in \eqref{eq:Asymptotic_At_Q013_Before_Integrating_By_Parts} we choose \begin{equation} v = 4\pi i \, \left[ \phi\left(-\frac{\gamma(\tau+i\epsilon) }{4\pi} \right) - \phi\left(-\frac{\gamma(\tilde{p}/\tilde{q} + i\epsilon)}{4\pi} \right) \right] \end{equation} instead. Then, after taking the limit $\epsilon \to 0$ and changing variables $\gamma(\tau)/4\pi=r$ as before, we get \begin{equation}\label{eq:Asymptotic_Closed_Form_2} \phi(t_{p,q} + h) - \phi(t_{p,q}) = e_{\gamma} \Bigg[ \frac{\phi(t_{1,2} + b(h)) - \phi(t_{1,2})}{(\tilde{q}-4\pi c_{\pm} b(h))^{3/2}} - 6\pi \,c_{\pm} \int_0^{b(h)}{ \frac{\phi(t_{1,2} + r) - \phi(t_{1,2})}{(\tilde{q}-4\pi c_{\pm} r)^{5/2}}\,dr } \Bigg] \end{equation} for all $h \in \mathbb{R}$. Now develop $\phi(t_{1,2} + b(h)) - \phi(t_{1,2})$ using Proposition~\ref{thm:Asymptotic_At_1_2} and use the Taylor expansions \eqref{TaylorSeries} to get a series in terms of $b = b(h)$, \begin{equation}\label{eq:Asymptotic_With_B_2} \phi(t_{p,q}+h) - \phi(t_{p,q}) = e_{\gamma} \,\left[ -16\,\frac{1-i}{\sqrt{2\pi}}\, Z_1(b)\,\frac{b^{3/2}}{\tilde{q}^{3/2}} + \frac{1}{\tilde{q}^{3/2}}\,O\left( b^{5/2} \right) \right]. \end{equation} Finally, expanding the Taylor series for powers of $b(h)$ as in \eqref{TaylorSeriesOfB}, we get the asymptotic behavior we were looking for: \begin{prop}\label{thm:Asymptotic_At_Q2} Let $p, q \in \mathbb{N}$ such that $q \equiv 2 \pmod{4}$, $p<q$ and $\operatorname{gcd}(p,q) = 1$. 
Define $\tilde{p}$ and $\tilde{q}$ so that $\tilde{p}/\tilde{q} = 2p/q$ is an irreducible fraction, and set \begin{equation} Z_1(h) = \sum_{\substack{k=1 \\ k \text{ odd}}}^{\infty}{ \frac{e^{ik^2/(16h)}}{k^{2}} } \qquad \text{ and } \qquad b(h) = \left\{ \begin{array}{ll} \frac{\tilde{q}^2h}{1+4\pi c_{+} \tilde{q} h}, & \text{when } h \geq 0, \\ \frac{\tilde{q}^2h}{1+4\pi c_{-} \tilde{q} h}, & \text{when } h < 0, \end{array} \right. \end{equation} where $\tilde{q} \leq c_+, |c_-| \leq 3\tilde{q}$ as in Subsection~\ref{Section_LookingForTransformations}. Then, there exists a complex eighth root of unity $e_{p,q}$ depending only on $p$ and $q$ such that \begin{equation}\label{eq:Asymptotic_At_Q2_Principal} \phi(t_{p,q}+h) - \phi(t_{p,q}) = e_{p,q} \,\left( -16\,\frac{1-i}{\sqrt{2\pi}}\,Z_1(b(h))\, \tilde{q}^{3/2}\,h^{3/2} + O\left( \tilde{q}^{7/2} h^{5/2} \right) \right), \quad |h| < \frac{1}{4\pi \frac{|c_\pm|}{\tilde{q}}}\,\frac{1}{\tilde{q}^2}, \end{equation} where $c_\pm = c_+$ when $h>0$ and $c_\pm = c_-$ when $h<0$. Also, $\sqrt{-1} = -i$ when $h < 0$. Equivalently, rescaling the variable, \begin{equation}\label{eq:Asymptotic_At_Q2_Rescaled} \phi\left(t_{p,q}+\frac{h}{\tilde{q}^2}\right) - \phi(t_{p,q}) = \frac{e_{p,q}}{\tilde{q}^{3/2}} \,\left( -16\,\frac{1-i}{\sqrt{2\pi}}\,Z_1(\beta(h)) \,h^{3/2} + O\left( h^{5/2} \right) \right), \qquad |h| < \frac{1}{4\pi \frac{|c_\pm|}{\tilde{q}}}, \end{equation} where $\beta(h) = b(h/\tilde{q}^2)$. \end{prop} \begin{rem} Proposition~\ref{thm:Asymptotic_At_Q2} confirms what we formally deduced in\eqref{eq:Approximated_Asymptotic_2}, this is, that $\phi$ behaves around $t_{p,q}$ with $q \equiv 2 \pmod{4}$ essentially the same way as around $t_{1,2}$, except the usual rescaling and replacing $h$ with $\beta(h)$ in the argument of $Z_1$. \end{rem} The analogous result of Corollary~\ref{thm:Global_Bound_Q013} is also satisfied, with an equally analogous proof. 
\begin{cor}\label{thm:Global_Bound_Q2} Let $p, q \in \mathbb{N}$ such that $q \equiv 2 \pmod{4}$ and $\operatorname{gcd}(p,q) = 1$. Given $M>0$, there exists $C_M>0$ independent of $p$ and $q$ such that \begin{equation} \left| \phi(t_{p,q}+h) - \phi(t_{p,q}) \right| \leq C_M \, q^{3/2}\, h^{3/2}, \qquad \forall |h| \leq \frac{M}{q^2}. \end{equation} \end{cor} \bibliographystyle{acm}
{ "timestamp": "2021-07-19T02:13:35", "yymm": "1910", "arxiv_id": "1910.02530", "language": "en", "url": "https://arxiv.org/abs/1910.02530", "abstract": "Recent findings show that the classical Riemann's non-differentiable function has a physical and geometric nature as the irregular trajectory of a polygonal vortex filament driven by the binormal flow. In this article, we give an upper estimate of its Hausdorff dimension. We also adapt this result to the multifractal setting. To prove these results, we recalculate the asymptotic behavior of Riemann's function around rationals from a novel perspective, underlining its connections with the Talbot effect and Gauss sums, with the hope that it is useful to give a lower bound of its dimension and to answer further geometric questions.", "subjects": "Classical Analysis and ODEs (math.CA)", "title": "On the Hausdorff dimension of Riemann's non-differentiable function", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9863631659211718, "lm_q2_score": 0.7185943805178139, "lm_q1q2_score": 0.7087950281807142 }
https://arxiv.org/abs/1707.03281
Characterizations of Ideal Cluster Points
Given an ideal $\mathcal{I}$ on $\omega$, we prove that a sequence in a topological space $X$ is $\mathcal{I}$-convergent if and only if there exists a ``big'' $\mathcal{I}$-convergent subsequence. Then, we study several properties and show two characterizations of the set of $\mathcal{I}$-cluster points as classical cluster points of a filters on $X$ and as the smallest closed set containing ``almost all'' the sequence. As a consequence, we obtain that the underlying topology $\tau$ coincides with the topology generated by the pair $(\tau,\mathcal{I})$.
\section{Introduction}\label{sec:introduction} Following the concept of statistical convergence as a generalization of the ordinary convergence, Fridy \cite{MR1181163} introduced the statistical limit points and statistical cluster points of a real sequence $(x_n)$ as generalizations of accumulation points. A real number $\ell$ is said to be a \emph{statistical limit point} of $(x_n)$ if there exists a subsequence $(x_{n_k})$ such that $$\lim_{k\to \infty} x_{n_k} = \ell$$ and the set of indices $\{n_k: k \in \omega\}$ has positive upper asymptotic density (see Section \ref{sec:preliminaries} for definitions). Also, $\ell$ is called \emph{statistical cluster point} provided that $$\{n\in \omega: |x_n-\ell|<\varepsilon\}$$ has positive upper asymptotic density for every $\varepsilon>0$. He proved, among others, that these concepts are not equivalent. These notions have been studied in a number of recent papers, see e.g. \cite{PaoloMarek17, MR1372186, MR1416085, Leo17, LMM18, MR1260176, MR1758553}. Extensions of statistical convergence to more general spaces can be found in \cite{MR2904078, MR2463821, MR938459, MR1821765}, and to ideal convergence, see e.g. \cite{MR2835960, MR2320288, MR1838788, Leoproj}. Given an ideal $\mathcal{I}$ on the positive integers $\omega$, we investigate various properties of $\mathcal{I}$-cluster points and $\mathcal{I}$-limit points of sequences taking values in topological spaces $(X,\tau)$. 
The main contributions of the article are: \begin{enumerate}[label=(\roman*)] \item \label{itema1} a new characterization of $\mathcal{I}$-convergence: informally, a sequence $(x_n)$ is $\mathcal{I}$-convergent if and only if there exists a ``big'' $\mathcal{I}$-convergent subsequence (see Theorem \ref{lem:basic0}.\ref{item:basic4} and Corollary \ref{cor:cor1}); \item \label{itema2} the topology generated by the pair $(\tau,\mathcal{I})$ corresponds to the underlying topology $\tau$ (see Theorem \ref{thm:sametopology}); \item \label{itema3} a characterization of $\mathcal{I}$-cluster points as classical ``cluster points of the filter'' generated by the sequence (see Theorem \ref{thm:characterizationbourbaki}); \item \label{itema4} a characterization of the set of $\mathcal{I}$-cluster points as the smallest closed set containing ``almost all'' the sequence (see Theorem \ref{thm:char2}). \end{enumerate} \section{Preliminaries}\label{sec:preliminaries} Let $\mathrm{Fin}$ be the collection of finite subsets of $\omega$. The upper asymptotic density of a set $S\subseteq \omega$ is defined by $$ \mathrm{d}^\star(S):=\limsup_{n\to \infty} \frac{|S\cap [1,n]|}{n}\, $$ and we denote by $\mathcal{Z}$ the collection of all $S$ such that $\mathrm{d}^\star(S)=0$. Hence, a real number $\ell$ is a statistical cluster point of a given real sequence $(x_n)$ if and only if $\{n\in \omega: |x_n-\ell|<\varepsilon\}$ does not belong to $\mathcal{Z}$ for every $\varepsilon>0$. An ideal $\mathcal{I}$ on $\omega$ is a family of subsets of positive integers closed under taking finite unions and subsets of its elements. It is also assumed that $\mathcal{I}$ is different from the power set of $\omega$ and contains all the singletons. It is clear that $\mathrm{Fin}$ and $\mathcal{Z}$ are ideals. Many other examples can be found, e.g., in \cite[Chapter 1]{MR1711328} and \cite[Section 2]{MR3594409}. 
Intuitively, an ideal represents the collection of subsets of $\omega$ which are ``small'' in a suitable sense. We denote by $\mathcal{I}^\star:=\{A\subseteq \omega: A^c \in \mathcal{I}\}$ the \emph{filter dual} of $\mathcal{I}$ and by $\mathcal{I}^+$ the collection of $\mathcal{I}$\emph{-positive sets}, that is, the collection of all sets which do not belong to $\mathcal{I}$. \begin{defi}\label{iconverg} Given a topological space $X$, a sequence $x=(x_n)$ is said to be $\mathcal{I}$\emph{-convergent} to $\ell$, shortened with $x_n \to_{\mathcal{I}} \ell$, whenever $\{n: x_n \in U\} \in \mathcal{I}^\star$ for all neighborhoods $U$ of $\ell$. Moreover, let $\Gamma_x(\mathcal{I})$ denote the set of $\mathcal{I}$\emph{-cluster points} of $x$, that is, the set of all $\ell \in X$ such that $\{n: x_n \in U\} \in \mathcal{I}^+$ for all neighborhoods $U$ of $\ell$. \end{defi} Ordinary convergence corresponds to $\mathrm{Fin}$-convergence (thus, we shorten $x_n \to_{\mathrm{Fin}} \ell$ with $x_n \to \ell$) and statistical convergence to $\mathcal{Z}$-convergence. Now, one may worder whether $\mathcal{I}$-convergence corresponds to ordinary convergence with respect to another topology on the same base set. Essentially, it never happens. \begin{example} Let us assume that $\mathcal{I}\neq \mathrm{Fin}$ and $X$ is a topological space with at least two distinct points such that its topology $\tau$ is not the trivial topology $\tau_0$. Hence, there exists a set $I\in \mathcal{I}\setminus \mathrm{Fin}$; in particular, $I$ is infinite. Fix distinct $a,b \in X$ and define the sequence $(x_n)$ by $x_n=a$ whenever $n \notin I$ and $x_n=b$ otherwise. It follows by construction that $x_n \to_\mathcal{I} a$ in $(X,\tau)$. Let us assume, for the sake of contradiction, there exists a topology $\tau^\prime$ such that $x_n \to a$ in $(X,\tau^\prime)$. If there is a $\tau^\prime$-neighborhood $U$ of $a$ such that $b\notin U$, then $ \{n: x_n\notin U\} =I. 
$ This is impossible, since $I$ is not finite. Hence $b \in U$ whenever $a \in U$. By the arbitrariness of $a$ and $b$, we conclude that $\tau^\prime=\tau_0$. The converse is false: given $U \in \tau\setminus \tau_0$ and $u \in U$, then the constant sequence $(u)$ is not $\mathcal{I}$-convergent to $\ell$ provided that $\ell\notin U$. \end{example} Other notions of convergence have been defined in literature, considering properties of subsequences of $x$ with sufficiently many elements. \begin{defi}\label{def:istarconverg} Given a topological space $X$, a sequence $x=(x_n)$ is said to be $\mathcal{I}^\star$\emph{-convergent} to $\ell$, shortened with $x_n \to_{\mathcal{I}^\star} \ell$, whenever there exists a subsequence $(x_{n_k})$ such that $x_{n_k} \to \ell$ and $\{n_k: k \in \omega\} \in \mathcal{I}^\star$. Moreover, let $\Lambda_x(\mathcal{I})$ denote the set of $\mathcal{I}$\emph{-limit points} of $x$, that is, the set of all $\ell \in X$ such that there exists a subsequence $(x_{n_k})$ for which $x_{n_k} \to \ell$ and $\{n_k: k \in \omega\} \in \mathcal{I}^+$. \end{defi} At this point, recall that an ideal $\mathcal{I}$ is a \emph{P-ideal} if it is $\sigma$-directed modulo finite sets, i.e., for every sequence $(A_n)$ of sets in $\mathcal{I}$ there exists $A \in \mathcal{I}$ such that $A_n\setminus A$ is finite for all $n$; equivalent definitions were given, e.g., in \cite[Proposition 1]{MR2285579}. Moreover, given infinite sets $A,B \subseteq \omega$ such that $A$ has canonical enumeration $\{a_n: n \in \omega\}$, we say that $\mathcal{I}$ a \emph{G-ideal} if $$ A_B:=\{a_b: b \in B\} \in \mathcal{I}^\star\,\,\,\text{ if and only if }\,\,\,B\in \mathcal{I}^\star $$ provided that $A \in \mathcal{I}^\star$. This condition is strictly related to the so-called ``property (G)'' considered in \cite{MR3568092} and to the definition of invariant and thinnable ideals considered in \cite{Leo17, Leo17b}. 
Note that the class of G-ideals contains the ideals generated by $\alpha$-densities with $\alpha \ge -1$ (in particular, $\mathcal{I}_\mathrm{d}$ and the collection of logarithmic density zero sets), several summable ideals, and the \emph{P\'olya ideal}, i.e., $$ \mathcal{I}_{\mathfrak{p}}:=\left\{S\subseteq \omega: \mathfrak p^\star(S):=\lim_{s \to 1^-} \limsup_{n \to \infty} \frac{|S \cap [ns,n]|}{(1-s)n}=0\right\}, $$ see \cite[Section 2]{Leo17}. Among other things, the upper P\'olya density $\mathfrak p^\star$ has found a number of remarkable applications in analysis and economic theory, see e.g. \cite{MR1545027}, \cite{MR0003208} and \cite{MR1656470}. In this regard, we have the following basic result: points \ref{item:basic1}-\ref{item:basic2} can be shown by routine arguments, cf. \cite[Theorem 3.1]{MR2904078} and \cite[Section 2]{MR2463821} (we omit details); although not explicit in the literature, point \ref{item:basic3} can be considered folklore, see \cite[Theorem 3.2]{MR1844385} for the case $X$ being a metric space (we include the proof here for the sake of completeness); lastly, point \ref{item:basic4} provides a new characterization of $\mathcal{I}$-convergence (related results can be found in \cite[Theorem 3.4]{MR3568092} and \cite[Theorem 3.4]{Leo17}). \begin{thm}\label{lem:basic0} Let $X$ be a topological space and $\mathcal{I}$ be an ideal. 
Then: \begin{enumerate}[label={\rm (\roman{*})}] \item \label{item:basic1} $\mathcal{I}$-limits and $\mathcal{I}^\star$-limits are unique, provided $X$ is Hausdorff; \item \label{item:basic2} $\mathcal{I}^\star$-convergence implies $\mathcal{I}$-convergence; \item \label{item:basic3} $\mathcal{I}$-convergence implies $\mathcal{I}^\star$-convergence, provided $X$ is first countable and $\mathcal{I}$ is a P-ideal; \item \label{item:basic4} A sequence $(x_n) \in X^{\omega}$ is $\mathcal{I}$-convergent if and only if there exists an $\mathcal{I}$-convergent subsequence $(x_{n_k})$ such that $\{n_k: k \in \omega\} \in \mathcal{I}^\star$, provided $\mathcal{I}$ is a G-ideal. \end{enumerate} \end{thm} \begin{proof} \ref{item:basic3} Let $(x_n)$ be a sequence taking values in $X$ which is $\mathcal{I}$-convergent to some $\ell \in X$. Then, let $(U_j)$ be a countable decreasing local base at $\ell$ and, for each $j$, define $A_j:=\{n: x_n\notin U_j\}$. Hence, $A_j \in \mathcal{I}$ for each $j$, $(A_j)$ is increasing, and, since $\mathcal{I}$ is a P-ideal, there exists $A \in \mathcal{I}$ such that $A_j\setminus A$ is finite for all $j$. Denoting by $(n_k)$ the increasing sequence of integers in $A^c$ (which belongs to $\mathcal{I}^\star$), it follows that $x_{n_k} \to \ell$. Indeed, letting $V$ be a neighborhood of $\ell$ and $j \in \omega$ such that $U_j\subseteq V$, then the finiteness of $\{k: x_{n_k}\notin V\}$ follows by the fact that it has the same cardinality of $\{n_k: x_{n_k}\notin V\}$ and $ \{n_k: x_{n_k}\notin V\} \subseteq \{n_k: x_{n_k}\notin U_j\} \subseteq \{n \in A^c: x_n \notin U_j\} = A_j\setminus A. $ \ref{item:basic4} Let us suppose that $(x_n)$ is $\mathcal{I}$-convergent to $\ell \in X$. Fix also $I \in \mathcal{I}$ and let $(n_k)$ be the increasing enumeration of $I^c$. Then, it is claimed that the subsequence $(x_{n_k})$ is $\mathcal{I}$-convergent to $\ell$. 
Indeed, for each neighborhood $U$ of $\ell$, we have $\{n: x_n \notin U\} \in \mathcal{I}$ by hypothesis, hence $ \{n_k: x_{n_k} \in U\} = \{n: x_n \in U\}\setminus I=\omega\setminus (\{n: x_n \notin U\} \cup I) \in \mathcal{I}^\star. $ It follows by the fact that $\mathcal{I}$ is a G-ideal that $\{k: x_{n_k} \in U\} \in \mathcal{I}^\star$, that is, $x_{n_k} \to_{\mathcal{I}} \ell$. The converse can be shown similarly. \end{proof} It is well known that $\mathcal{Z}$ is a P-ideal (see e.g. \cite[Proposition 3.2]{MR632187}) and, as recalled before, it is also a G-ideal. Hence \begin{cor}\label{cor:cor1} Let $(x_n)$ be a sequence taking values in a topological space $X$. Then the following are equivalent: \begin{enumerate}[label={\rm (\roman{*})}] \item \label{item:eq1} $(x_n)$ is statistically convergent; \item \label{item:eq3} There exists a statistically convergent subsequence $(x_{n_k})$ with $\{n_k: k \in \omega\} \in \mathcal{Z}^\star$. \end{enumerate} If, in addition, $X$ is first countable, then they are also equivalent to: \begin{enumerate}[label={\rm (\roman{*})}] \setcounter{enumi}{2} \item \label{item:eq2} There exists a convergent subsequence $(x_{n_k})$ with $\{n_k: k \in \omega\} \in \mathcal{Z}^\star$; \end{enumerate} \end{cor} It is worth noting that the equivalence between \ref{item:eq1} and \ref{item:eq2} can be already found in \cite[Theorem 2.2]{MR2463821}, cf. also \cite[Theorem 1]{MR816582} and \cite[Theorem 1]{MR1260176}. We obtain also an abstract version of \cite[Theorem 2.3]{MR954458}, see also \cite[Proposition 1]{MR2835960} and \cite[Theorem 1]{MR2334006}; the proof goes verbatim, hence we omit it. \begin{cor}\label{cor:decomposition} Let $\mathcal{I}$ be a P-ideal and $(x_n)$ be a sequence taking values in a metrizable group \textup{(}with identity $0$\textup{)} such that $x_n \to_\mathcal{I} \ell$. 
Then, there exist sequences $(y_n)$ and $(z_n)$ such that: $x_n=y_n+z_n$ for all $n$, $y_n \to \ell$, and $\{n\in \omega: z_n\neq 0\} \in \mathcal{I}$. \end{cor} Recall that a real double sequence $x=(x_{n,m}: n,m \in\omega)$ has \emph{Pringsheim limit} $\ell$ provided that for every $\varepsilon>0$ there exists $k \in \omega$ such that $|x_{n,m}-\ell|<\varepsilon$ for all $n,m\ge k$. Identifying ideals on countable sets with ideals on $\omega$ through a fixed bijection, it is easily seen that this is equivalent to $x \to_{\mathcal{I}_{\mathrm{Pr}}} \ell$, where $\mathcal{I}_{\mathrm{Pr}}$ is the ideal defined by $$ \mathcal{I}_{\mathrm{Pr}}:=\left\{A\subseteq \omega \times \omega: \limsup_{n\to \infty}\, \sup \left\{k: (n,k) \in A\right\}<\infty\right\}. $$ Equivalently, $\mathcal{I}_{\mathrm{Pr}}$ is the ideal on $\omega \times \omega$ containing the complements of $[n,\infty)\times [n,\infty)$ for all $n \in \omega$. At this point, for each $n,m \in \omega$, let $\mu_{n,m}$ be the uniform probability measure on $\{1,\ldots,n\}\times \{1,\ldots,m\}$ and define the ideal $$ \mathcal{Z}_{\mathrm{Pr}}:=\left\{A\subseteq \omega\times \omega: \mu_{n,m}(A) \to_{\mathcal{I}_{\mathrm{Pr}}} 0\right\}. $$ Note that $\mathcal{I}_{\mathrm{Pr}}\subseteq \mathcal{Z}_{\mathrm{Pr}}$ and that $\mathcal{Z}_{\mathrm{Pr}}$ is a P-ideal. The notion of convergence of real double sequences $(x_{n,m})$ with respect to the ideal $\mathcal{Z}_{\mathrm{Pr}}$ has been recently introduced in \cite{MR2002719, MR2019757}; here, it has been simply defined ``statistical convergence'' of double sequences. Accordingly, it has been shown in \cite[Theorem 2]{MR2002719} that a real double sequence $(x_{n,m})$ is statistically convergent to $\ell$ if and only if there exist real double sequences $(y_{n,m})$ and $(z_{n,m})$ such that $y_{n,m}\to_{\mathcal{I}_{\mathrm{Pr}}} \ell$ and $\{(n,m): z_{n,m}\neq 0\}\in \mathcal{Z}_{\mathrm{Pr}}$. 
However, this is an immediate consequence of Corollary \ref{cor:decomposition}. \section{Ideal Cluster points}\label{sec:cluster} Given sequences $x$ and $y$ taking values in a topological space $X$, we say that they are $\mathcal{I}$\emph{-equivalent}, written $x\equiv_\mathcal{I} y$, if $\{n: x_n \neq y_n\} \in \mathcal{I}$ (it is easy to see that $\equiv_\mathcal{I}$ is an equivalence relation). The following lemmas, which collect and extend several results contained in \cite{MR2463821, MR1181163, MR1844385}, show some standard properties of $\mathcal{I}$-cluster and $\mathcal{I}$-limit points. \begin{lem}\label{lem:basic} Let $x$ and $y$ be sequences taking values in a topological space $X$ and fix ideals $\mathcal{I}\subseteq \mathcal{J}$. Then: \begin{enumerate}[label={\rm (\roman{*})}] \item \label{item:1} $\Lambda_x(\mathcal{J}) \subseteq \Lambda_x(\mathcal{I})$ and $\Gamma_x(\mathcal{J}) \subseteq \Gamma_x(\mathcal{I})$; \item \label{item:2} $\Lambda_x(\mathrm{Fin}) = \Gamma_x(\mathrm{Fin})$, provided $X$ is first countable; \item \label{item:3} $\Lambda_x(\mathcal{I}) \subseteq \Gamma_x(\mathcal{I})$; \item \label{item:4} $\Gamma_x(\mathcal{I})$ is closed; \item \label{item:5} $\Lambda_x(\mathcal{I})=\Lambda_y(\mathcal{I})$ and $\Gamma_x(\mathcal{I})=\Gamma_y({\mathcal{I}})$ provided $x\equiv_{\mathcal{I}} y$; \item \label{item:6} $\Gamma_x(\mathcal{I}) \cap K \neq \emptyset$, provided $K\subseteq X$ is compact and $\{n: x_n \in K\} \in \mathcal{I}^+$; \item \label{item:7} $\Lambda_x(\mathcal{I}) = \Gamma_x(\mathcal{I})=\{\ell\}$ provided $x_n \to_{\mathcal{I}^\star} \ell$ and $X$ is Hausdorff. \end{enumerate} \end{lem} \begin{proof} \ref{item:1} and \ref{item:2} easily follow from the definitions. In addition, \ref{item:3} is obvious if $\Lambda_x(\mathcal{I})=\emptyset$. Otherwise, fix $\ell \in \Lambda_x(\mathcal{I})$ and a neighborhood $U$ of $\ell$.
Then, there exists an increasing subsequence $(n_k)$ with $\{n_k\}\in\mathcal{I}^+$ such that $x_{n_k} \to \ell$, so that $S:=\{n_k: x_{n_k} \notin U\}$ is finite. This implies that $\{n_k\} \setminus S\subseteq \{n: x_n \in U\}$. To conclude, it is sufficient to note that $\{n_k\}\setminus S \notin \mathcal{I}$, therefore $\{n: x_n \in U\}\in \mathcal{I}^+$. Similarly, \ref{item:4} is clear if $\Gamma_x(\mathcal{I})=\emptyset$. Otherwise, let $y$ be an accumulation point of $\Gamma_x(\mathcal{I})$ and $U$ a neighborhood of $y$. Then, there exists $z \in \Gamma_x(\mathcal{I}) \cap U$. Let $V$ be a neighborhood of $z$ contained in $U$. Considering that $\{n: x_n \in V\} \subseteq \{n: x_n \in U\}$ and $\{n: x_n\in V\} \in \mathcal{I}^+$, we conclude that $y \in \Gamma_x(\mathcal{I})$. To prove \ref{item:5}, fix $\ell \in \Lambda_x(\mathcal{I})$, so that there exists a subsequence $(x_{n_k})$ such that $\{n_k\}\in \mathcal{I}^+$ and $x_{n_k} \to \ell$. Since $\{n: x_n\neq y_n\} \in \mathcal{I}$ and $\{n_k: x_{n_k} \neq y_{n_k}\} \subseteq \{n: x_n\neq y_n\}$, it follows that $S:=\{n_k: x_{n_k} = y_{n_k}\} \in \mathcal{I}^+$. Denoting by $(s_n)$ the canonical enumeration of $S$, we obtain $y_{s_n} \to \ell$, hence $\ell \in \Lambda_y({\mathcal{I}})$. By the arbitrariness of $\ell$, we have $\Lambda_x(\mathcal{I})\subseteq \Lambda_y({\mathcal{I}})$; therefore, by symmetry, $\Lambda_x(\mathcal{I})=\Lambda_y({\mathcal{I}})$. The other claim can be shown similarly. The proof of \ref{item:6} can be found in \cite[Theorem 6]{MR2923430}, cf. also \cite[Theorem 2.14]{MR2463821} for the case $\mathcal{I}=\mathcal{Z}$. Lastly, suppose that $x_n \to_{\mathcal{I}^\star} \ell$ so that $x_n \to_{\mathcal{I}} \ell$ by Theorem \ref{lem:basic0}.\ref{item:basic2} and, in particular, $\ell \in \Lambda_x(\mathcal{I})$. Also, thanks to \ref{item:3}, we have $\{\ell\}\subseteq \Lambda_x(\mathcal{I}) \subseteq \Gamma_x(\mathcal{I})$.
To conclude, let us suppose for the sake of contradiction that there exists an $\mathcal{I}$-cluster point $\ell^\prime$ of $x$ different from $\ell$. Fix disjoint neighborhoods $U$ and $U^\prime$ of $\ell$ and $\ell^\prime$, respectively. On the one hand, since $\ell^\prime$ is an $\mathcal{I}$-cluster point, we have $\{n:x_n \in U^\prime\} \in \mathcal{I}^+$. On the other hand, this is impossible since $\{n:x_n \in U^\prime\} \subseteq \{n: x_n\notin U\} \in \mathcal{I}$. This proves \ref{item:7}. \end{proof} It follows at once from Theorem \ref{lem:basic0}.\ref{item:basic3} and Lemma \ref{lem:basic}.\ref{item:7} that: \begin{cor}\label{lem:ilimit} Let $\mathcal{I}$ be a P-ideal and $(x_n)$ be a sequence taking values in a first countable Hausdorff space such that $x_n \to_\mathcal{I} \ell$. Then $\Lambda_x(\mathcal{I}) = \Gamma_x(\mathcal{I})=\{\ell\}$. \end{cor} The converse of Corollary \ref{lem:ilimit} does not hold in general: the real sequence $x$ defined by $x_n=n$ if $n$ is even and $x_n=0$ otherwise satisfies $\Lambda_x(\mathcal{Z}) = \Gamma_x(\mathcal{Z})=\{0\}$ while $x_n \not\to_{\mathcal{Z}} 0$. On the other hand, if the underlying space is compact, the condition is also sufficient; cf. \cite[Proposition 8]{MR1757066} for a special case. \begin{lem}\label{lem:converseabovelemmaconvergence} Let $\mathcal{I}$ be an ideal, let $(x_n)$ be a sequence in a first countable compact space $X$, and suppose that $\Gamma_x(\mathcal{I})=\{\ell\}$. Then $x_n \to_{\mathcal{I}} \ell$. In addition, if $\mathcal{I}$ is a P-ideal, then $x_n \to_{\mathcal{I}^\star} \ell$. \end{lem} \begin{proof} Let $(U_k)$ be a decreasing local base at $\ell$. Fix $k \in \omega$. For each $z \in X$ with $z\neq \ell$ there exists a neighborhood $U_z$ of $z$ such that $\{n \in \omega: x_n \in U_z\} \in \mathcal{I}$ (since $z \notin \Gamma_x(\mathcal{I})$).
Since $\{U_z: z \in X\setminus \{\ell\}\} \cup \{U_k\}$ is an open cover of $X$ and $X$ is compact, there exists a finite subcover $\{U_{z_1},\ldots,U_{z_m},U_k\}$; note that $U_k$ must belong to the subcover since, otherwise, we would have $\omega=\bigcup_{i\le m}\{n: x_n \in U_{z_i}\} \in \mathcal{I}$, which is impossible. In particular, $\{n \in \omega: x_n \in U_k\} \in \mathcal{I}^\star$. Therefore $x_n \to_{\mathcal{I}} \ell$. If, in addition, $\mathcal{I}$ is a P-ideal then $A_k:=\{n\in \omega: x_n \notin U_k\}$ is an increasing sequence in $\mathcal{I}$, hence there exists $A \in \mathcal{I}$ such that $A_k\setminus A \in \mathrm{Fin}$ for all $k$. It follows that $\{n \in A^c: x_n \notin U_k\}=A_k \cap A^c \in \mathrm{Fin}$ for all $k$, that is, $x_n \to_{\mathcal{I}^\star} \ell$. \end{proof} As an application, we obtain a generalization of \cite[Theorem 3]{MR1416085}: \begin{cor} Let $\mathcal{I}$ be an ideal and $(x_n)$ be a sequence in a first countable space $X$ such that $\{n \in \omega: x_n \notin K\} \in \mathcal{I}$ for some compact $K\subseteq X$. Then $x_n \to_{\mathcal{I}} \ell$ if and only if $\Gamma_x(\mathcal{I})=\{\ell\}$. \end{cor} Moreover, Lemma \ref{lem:basic}.\ref{item:5} can be strengthened if $X$ is a topological group: \begin{lem}\label{lem:strengtened} Let $x$ and $y$ be sequences taking values in a topological group $X$ \textup{(}written additively, with identity $0$\textup{)} and fix an ideal $\mathcal{I}$. Then: \begin{enumerate}[label={\rm (\roman{*})}] \item \label{item:lem1} $\Gamma_x(\mathcal{I})=\Gamma_y(\mathcal{I})$ provided $x_n-y_n \to_{\mathcal{I}} 0$; \item \label{item:lem2} $\Lambda_x(\mathcal{I})=\Lambda_y(\mathcal{I})$ provided $x_n-y_n \to_{\mathcal{I}^\star} 0$. \end{enumerate} \end{lem} \begin{proof} Let $z$ be the sequence defined by $z_n=x_n-y_n$. \ref{item:lem1} By hypothesis, $z_n \to_{\mathcal{I}} 0$ and $-z_n \to_{\mathcal{I}} 0$. Fix $\ell \in \Gamma_x(\mathcal{I})$ and let $U$ be a neighborhood of $\ell$.
By the continuity of the group operation, there exist neighborhoods $V$ and $W$ of $\ell$ and $0$, respectively, such that $V+W \subseteq U$. Considering that $\{n: x_n \in V\} \in \mathcal{I}^+$ and $\{n: -z_n \in W\} \in \mathcal{I}^\star$, it follows that $$ \{n: y_n \in U\}=\{n: x_n-z_n \in U\} \supseteq \{n: x_n \in V\} \cap \{n: -z_n \in W\} \in \mathcal{I}^+. $$ Since $\ell$ and $U$ were chosen arbitrarily, it follows that $\Gamma_x(\mathcal{I})\subseteq \Gamma_y(\mathcal{I})$. The opposite inclusion can be shown similarly. \ref{item:lem2} By hypothesis, $z_n \to_{\mathcal{I}^\star} 0$ and $-z_n \to_{\mathcal{I}^\star} 0$. Fix $\ell \in \Lambda_x(\mathcal{I})$, hence there exist $A \in \mathcal{I}^+$ and $B \in \mathcal{I}^\star$ such that $\lim_{a \in A}x_a=\ell$ and $\lim_{b \in B}-z_b=0$. Setting $C:=A\cap B \in \mathcal{I}^+$, it follows that $\lim_{c \in C}y_c=\lim_{c \in C}(x_c-z_c)=\ell$, therefore $\Lambda_x(\mathcal{I})\subseteq \Lambda_y(\mathcal{I})$. The opposite inclusion can be shown similarly. \end{proof} We recall that, under suitable assumptions on $X$ and $\mathcal{I}$, the collection of $\mathcal{I}$-cluster and $\mathcal{I}$-limit point sets can be characterized as the closed sets and $F_\sigma$ sets, respectively; see \cite[Theorem 3.1]{PaoloMarek17}, \cite[Section 2]{MR2463821}, \cite[Theorem 1.1]{MR1838788}, and \cite[Section 4]{MR1844385}. Moreover, the continuity of the map $x\mapsto \Gamma_x(\mathcal{I})$ has been investigated in \cite{MR1838788}. The next result establishes a connection between sets of cluster points with respect to different ideals (the proof is based on \cite[Theorem 2]{MR1181163} which focuses on the case $X=\mathbf{R}$, $\mathcal{I}=\mathcal{Z}$, and $\mathcal{J}=\mathrm{Fin}$). \begin{lem}\label{thm:Clrelation} Let $x$ be a sequence taking values in a strongly Lindel\"{o}f space $X$ and fix ideals $\mathcal{J}\subseteq \mathcal{I}$ such that $\mathcal{I}$ is a P-ideal.
Then, there exists an $\mathcal{I}$-equivalent sequence $y$ such that $\Gamma_x(\mathcal{I})=\Gamma_y(\mathcal{J})$ and $\{y_n: n \in \omega\}\subseteq \{x_n: n \in \omega\}$. \end{lem} \begin{proof} The claim is obvious if $\Gamma_x(\mathcal{I})=\Gamma_x({\mathcal{J}})$ (simply take $y=x$). Hence, let us suppose that $\Delta:=\Gamma_x({\mathcal{J}})\setminus \Gamma_x(\mathcal{I})\neq \emptyset$ and, for each $z \in \Delta$, let $U_z$ be a neighborhood of $z$ such that $\{n: x_n \in U_z\} \in \mathcal{I}$. Then $\{U_z: z \in \Delta\}$ is an open cover of $\Delta$. Since $X$ is strongly Lindel\"{o}f, there exists a countable subset $\{z_k: k \in \omega\}\subseteq \Delta$ such that $\{U_{z_k}: k \in \omega\}$ still covers $\Delta$. Moreover, since $\mathcal{I}$ is a P-ideal, there exists $I \in \mathcal{I}$ such that $\{n: x_n \in U_{z_k}\}\setminus I$ is finite for all $k$. At this point, let $(i_n)$ be the canonical enumeration of $\omega\setminus I$ and define the sequence $y$ by $y_n=x_{i_n}$ if $n \in I$ and $y_n=x_n$ otherwise. Since $\{n: x_n\neq y_n\}\subseteq I \in \mathcal{I}$, we have $x \equiv_\mathcal{I} y$, hence we obtain by Lemma \ref{lem:basic}.\ref{item:5} that $\Gamma_x(\mathcal{I})=\Gamma_y({\mathcal{I}})$. The claim follows by the fact that every $\mathcal{J}$-cluster point of $y$ is also an $\mathcal{I}$-cluster point, therefore $\Gamma_y(\mathcal{I})=\Gamma_y(\mathcal{J})$. \end{proof} Lastly, given a topological space $(X,\tau)$ and an ideal $\mathcal{I}$, define the family $$ \textstyle \tau(\mathcal{I}):=\left\{F^c \subseteq X: F=\bigcup_{x \in F^{\omega}}\Gamma_x(\mathcal{I})\right\}, $$ that is, $F$ is $\tau(\mathcal{I})$\emph{-closed} if and only if it is the union of $\mathcal{I}$-cluster points of $F$-valued sequences. In particular, it is immediate that $\tau=\tau(\mathrm{Fin})$. \begin{lem}\label{lem:easy} $\tau \subseteq \tau(\mathcal{I})$. \end{lem} \begin{proof} Let $F$ be a $\tau$-closed set.
Thanks to Lemma \ref{lem:basic}.\ref{item:1}, we have $ \textstyle F\subseteq \bigcup_{x \in F^{\omega}} \Gamma_x(\mathcal{I}) \subseteq \bigcup_{x \in F^{\omega}} \Gamma_x(\mathrm{Fin})=F, $ where the first inclusion is obtained by choosing the constant sequence $(f)$, for each fixed $f \in F$. Therefore, $F^c\in \tau(\mathcal{I})$.\end{proof} The converse holds under some additional assumptions: \begin{thm}\label{thm:sametopology} Assume that one of the following conditions holds: \begin{enumerate}[label={\rm (\roman{*})}] \item \label{item:top1} $X$ is sequentially strongly Lindel\"{o}f and $\mathcal{I}$ is a P-ideal; \item \label{item:top2} $X$ is first countable. \end{enumerate} Then $\tau=\tau(\mathcal{I})$. \end{thm} \begin{proof} Thanks to Lemma \ref{lem:easy}, it is sufficient to show that $\tau(\mathcal{I}) \subseteq \tau$. Let $F$ be a $\tau(\mathcal{I})$-closed set. Then, it is enough to show that if $\ell \in F$ is an $\mathcal{I}$-cluster point of some $F$-valued sequence $x$, it is also an ordinary limit point of some $F$-valued sequence $y$. \ref{item:top1} This follows directly by Lemma \ref{thm:Clrelation}, setting $\mathcal{J}=\mathrm{Fin}$. \ref{item:top2} Let $(U_k)$ be a decreasing local base at $\ell$. Then, there exists a subsequence $(x_{n_k})$ converging to $\ell$: to this aim, set $S_k:=\{n: x_n\in U_k\}$ for each $k$, fix $n_1 \in S_1$ arbitrarily and, for each $k\in \omega$, define $n_{k+1}:=\min S_{k+1}\setminus \{1,\ldots,n_k\}$ (note that this is possible since each $S_k$ is infinite). \end{proof} \section{Characterizations}\label{sec:charact} Given an ideal $\mathcal{I}$ and a sequence $x$ taking values in a topological space $X$, we define the $\mathcal{I}$\emph{-filter generated by} $x$ as $$ \mathscr{F}_x(\mathcal{I}):=\left\{Y\subseteq X: \{n: x_n \notin Y\} \in \mathcal{I}\right\}. 
$$ It is immediate that $\mathscr{F}_x(\mathcal{I})$ is a filter on $X$ with filter base $$ \mathcal{B}_x(\mathcal{I}):=\{\{x_n: n\notin I\}: I \in \mathcal{I}\}. $$ In addition, if $\mathcal{I}=\mathrm{Fin}$, then $\mathscr{F}_x(\mathcal{I})$ coincides with the standard filter generated by $x$, cf. \cite[Definition 7, p.64]{MR1726779}. With this notation, we are going to show that $\ell$ is an $\mathcal{I}$-cluster point of $x$ if and only if it is a cluster point of the filter $\mathscr{F}_x(\mathcal{I})$, that is, $\ell$ lies in the closure of all sets in the filter base $\mathcal{B}_x(\mathcal{I})$, cf. \cite[Definition 2, p.69]{MR1726779}. \begin{lem}\label{lem:firstinclusion} $\bigcap_{B \in \mathcal{B}_x(\mathcal{I})} \overline{B} \subseteq \Gamma_x(\mathcal{I})$. \end{lem} \begin{proof} Let us suppose that $\ell \in \bigcap_{I \in \mathcal{I}}\overline{\{x_n: n\notin I\}}$, that is, $\ell$ belongs to the closure of $\{x_n: n\notin I\}$ for each $I \in \mathcal{I}$. Suppose for the sake of contradiction that $\ell$ is not an $\mathcal{I}$-cluster point, i.e., there exists an open neighborhood $U$ of $\ell$ such that $J:=\{n: x_n \in U\}$ belongs to $\mathcal{I}$. Then, it follows that $\{x_n: n \notin J\} \in \mathcal{B}_x(\mathcal{I})$, hence $$ \textstyle \ell \in \bigcap_{B \in \mathcal{B}_x(\mathcal{I})} \overline{B}\subseteq \overline{\{x_n: n \notin J\}} = \overline{\{x_n: x_n \notin U\}} \subseteq U^c, $$ which is impossible since $\ell \in U$. \end{proof} However, if $X$ is first countable, then the converse also holds. \begin{thm}\label{thm:characterizationbourbaki} Let $\mathcal{I}$ be an ideal and $x$ be a sequence taking values in a first countable space $X$. Then $\Gamma_x(\mathcal{I})=\bigcap_{B \in \mathcal{B}_x(\mathcal{I})} \overline{B}$.
\end{thm} \begin{proof} Thanks to Lemma \ref{lem:firstinclusion}, it is sufficient to show that $\Gamma_x(\mathcal{I})\subseteq \bigcap_{B \in \mathcal{B}_x(\mathcal{I})} \overline{B}$. Let us suppose that $\ell$ is an $\mathcal{I}$-cluster point of $x$ and fix a decreasing local base $(U_k)$ at $\ell$, so that $S_k:=\{n: x_n \in U_k\} \in \mathcal{I}^+$ for all $k$. Fix also $I \in \mathcal{I}$ and note that $T_k:=S_k\setminus I \in \mathcal{I}^+$ for all $k$ (in particular, each $T_k$ is infinite). Then, we have to prove that $\ell \in \overline{\{x_n: n\notin I\}}$, i.e., there exists a subsequence $(x_{n_k})$ converging to $\ell$ such that $n_k \notin I$ for all $k$. To this aim, it is enough to fix $n_1 \in T_1$ arbitrarily and $n_{k+1}:=\min T_{k+1}\setminus \{1,\ldots,n_k\}$ for all $k \in \omega$. It follows by construction that $\lim_{k\to \infty}x_{n_k}=\ell$ and $n_k \notin I$ for all $k$. \end{proof} As a corollary, we obtain another proof of Lemma \ref{lem:basic}.\ref{item:4}, provided $X$ is first countable. We conclude with another characterization of the set of $\mathcal{I}$-cluster points, which subsumes the results contained in \cite{MR2040222}. \begin{thm}\label{thm:char2} Let $x$ be a sequence taking values in a regular Hausdorff space $X$ such that $\{n: x_n \notin K\} \in \mathcal{I}$ for some compact set $K$. Then $\Gamma_x(\mathcal{I})$ is the smallest closed set $C$ such that $\{n: x_n \notin U\} \in \mathcal{I}$ for all open sets $U$ containing $C$. \end{thm} \begin{proof} Fix $\kappa \in K$ and define the sequence $y$ by $y_n=\kappa$ if $x_n \notin K$ and $y_n=x_n$ otherwise. It follows by Lemma \ref{lem:basic}.\ref{item:6}-\ref{item:5} that $ \emptyset \neq \Gamma_x(\mathcal{I})=\Gamma_y(\mathcal{I}) \subseteq K.
$ Let also $\mathscr{C}$ be the family of closed sets $C$ such that $\{n: x_n \notin U\} \in \mathcal{I}$ for all open subsets $U\supseteq C$ (note that $\{n: x_n \notin U\} \in \mathcal{I}$ if and only if $\{n: y_n \notin U\} \in \mathcal{I}$). First, we show that $\Gamma_x(\mathcal{I}) \in \mathscr{C}$. Indeed, $\Gamma_x(\mathcal{I})$ is closed by Lemma \ref{lem:basic}.\ref{item:4}; moreover, let us suppose for the sake of contradiction that there exists an open set $U$ containing $\Gamma_x(\mathcal{I})$ such that $\{n: x_n \notin U\} \in \mathcal{I}^+$, that is, $\{n: y_n \notin U\}=\{n: y_n \in K\setminus U\} \in \mathcal{I}^+$. Considering that $K\setminus U$ is compact, we obtain by Lemma \ref{lem:basic}.\ref{item:6} that there exists an $\mathcal{I}$-cluster point of $y$ in $K\setminus U$. This contradicts the fact that $\Gamma_y(\mathcal{I})=\Gamma_x(\mathcal{I})\subseteq U$. Lastly, fix $C \in \mathscr{C}$ and let us suppose that $\Gamma_x(\mathcal{I})\setminus C\neq \emptyset$. Fix $\ell \in \Gamma_x(\mathcal{I})\setminus C$; by the regularity of $X$, there exist disjoint open sets $U$ and $V$ containing the closed sets $\{\ell\}$ and $K\cap C$, respectively. This is impossible: indeed, since $V\cup K^c$ is an open set containing $C$ and $\{n: x_n \notin K\} \in \mathcal{I}$, the set $\{n: x_n \notin V\} \subseteq \{n: x_n \notin V\cup K^c\} \cup \{n: x_n \notin K\}$ belongs to $\mathcal{I}$ by $C \in \mathscr{C}$; on the other hand, it contains $\{n: x_n \in U\} \in \mathcal{I}^+$, since $U$ and $V$ are disjoint and $\ell$ is an $\mathcal{I}$-cluster point. \end{proof} \subsection*{Acknowledgments} The authors are grateful to Szymon G\l ab (\L{}\'{o}d\'{z} University of Technology, PL) and Ond\v{r}ej Kalenda (Charles University, Prague) for several useful comments. \bibliographystyle{amsplain}
https://arxiv.org/abs/1503.09092
Efficiently decoding Reed-Muller codes from random errors
Reed-Muller codes encode an $m$-variate polynomial of degree $r$ by evaluating it on all points in $\{0,1\}^m$. We denote this code by $RM(m,r)$. The minimal distance of $RM(m,r)$ is $2^{m-r}$ and so it cannot correct more than half that number of errors in the worst case. For random errors one may hope for a better result. In this work we give an efficient algorithm (in the block length $n=2^m$) for decoding random errors in Reed-Muller codes far beyond the minimal distance. Specifically, for low rate codes (of degree $r=o(\sqrt{m})$) we can correct a random set of $(1/2-o(1))n$ errors with high probability. For high rate codes (of degree $m-r$ for $r=o(\sqrt{m/\log m})$), we can correct roughly $m^{r/2}$ errors. More generally, for any integer $r$, our algorithm can correct any error pattern in $RM(m,m-(2r+2))$ for which the same erasure pattern can be corrected in $RM(m,m-(r+1))$. The results above are obtained by applying recent results of Abbe, Shpilka and Wigderson (STOC, 2015), Kumar and Pfister (2015) and Kudekar et al. (2015) regarding the ability of Reed-Muller codes to correct random erasures. The algorithm is based on solving a carefully defined set of linear equations and thus it is significantly different than other algorithms for decoding Reed-Muller codes that are based on the recursive structure of the code. It can be seen as a more explicit proof of a result of Abbe et al. that shows a reduction from correcting erasures to correcting errors, and it also bears some similarities with the famous Berlekamp-Welch algorithm for decoding Reed-Solomon codes.
\section{Introduction} Consider the following challenge: \begin{quote} Given the truth table of a polynomial $f(\mathbf{x}) \in \mathbb{F}_2[x_1,\dots, x_m]$ of degree at most $r$, in which $1/2-o(1)$ fraction of the locations were flipped (that is, given the evaluations of $f$ over $\mathbb{F}_2^m$ with nearly half the entries corrupted), recover $f$ efficiently. \end{quote} \noindent If the errors are adversarial, then clearly this task is impossible for any degree bound $r \ge 2$, since there are two different quadratic polynomials that disagree on only $1/4$ fraction of the domain. Hence, we turn to considering {\em random} sets of errors of size $(1/2-o(1))2^m$, and we hope to recover $f$ with high probability (in this case, one may also consider the setting where each bit is independently flipped with probability $1/2-o(1)$. By standard Chernoff bounds, both settings are almost equivalent). Even in the random model, if every bit was flipped with probability exactly $1/2$, the situation is again hopeless: in this case the input is completely random and carries no information whatsoever about the original polynomial. It turns out, however, that even a very small relaxation leads to a dramatic improvement in our ability to recover the hidden polynomial: in this paper we prove, among other results, that even at corruption rate $1/2-o(1)$ and degree bound as large as $o(\sqrt{m})$, we can {\em efficiently} recover the {\em unique} polynomial $f$ whose evaluations were corrupted. Note that in the worst case, given a polynomial of such a high degree, an adversary can flip a tiny fraction of the bits --- just slightly more than $1/2^{\sqrt{m}}$ --- and prevent unique recovery of $f$, even if we do not require an efficient solution; and yet, in the average case, we can deal with flipping almost half the bits. 
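To make the corruption model of the challenge concrete, here is a small illustrative sketch (ours, not part of the paper): we build the truth table of a low-degree polynomial over $\mathbb{F}_2$ and flip a uniformly random set of its entries. The function names and the toy polynomial are our own choices.

```python
import itertools
import random

def truth_table(monomials, m):
    """Evaluation vector of an F_2 polynomial on all 2^m points of {0,1}^m.
    The polynomial is given by its set of monomials, each a tuple of
    variable indices (the empty tuple would be the constant 1)."""
    table = []
    for point in itertools.product((0, 1), repeat=m):
        value = 0
        for mono in monomials:
            if all(point[i] for i in mono):
                value ^= 1  # addition over F_2
        table.append(value)
    return table

def corrupt(table, num_errors, rng):
    """Flip a uniformly random set of num_errors coordinates."""
    received = list(table)
    for i in rng.sample(range(len(table)), num_errors):
        received[i] ^= 1
    return received

m = 4
f = [(0, 1), (2,)]            # f(x) = x0*x1 + x2, a degree-2 polynomial
codeword = truth_table(f, m)  # block length n = 2^m = 16
received = corrupt(codeword, 6, random.Random(0))
```

The decoding task discussed in this paper is then: given only `received` and the degree bound, recover `codeword`.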
Recasting the playful scenario above in a more traditional terminology, this paper deals with similar questions related to recovery of low-degree multivariate polynomials from their \emph{randomly} corrupted evaluations on $\mathbb{F}_2^m$, or in the language of coding theory, we study the problem of decoding \emph{Reed-Muller} codes under random errors in the \emph{binary symmetric channel (BSC)}. We turn to some background and motivation. \subsection{Reed-Muller Codes} Reed-Muller (RM) codes were introduced in 1954, first by Muller \cite{muller} and shortly after by Reed \cite{reed} who also provided a decoding algorithm. They are among the oldest and simplest codes to construct --- the codewords are multivariate polynomials of a given degree, and the encoding function is just their evaluation vectors. In this work we mainly focus on the most basic case where the underlying field is $\mathbb{F} = \mathbb{F}_2$, the field of two elements, although our techniques do generalize to larger finite fields. Over $\mathbb{F}_2$, the Reed-Muller code of degree $r$ in $m$ variables, denoted by $RM(m,r)$, has block length $n=2^m$, rate $\binom{m}{\le r}/2^m$ and its minimal distance is $2^{m-r}$. RM codes have been extensively studied with respect to decoding errors in both the worst case and random setting. We begin by giving a review of Reed-Muller codes and their use in theoretical computer science and then discuss our results. \subsubsection*{Background} Error-correcting codes (over both large and small finite fields) have been extremely influential in the theory of computation, playing a central role in some important developments in several areas such as cryptography (e.g.\ \cite{Shamir79} and \cite{BF90}), theory of pseudorandomness (e.g.\ \cite{bogdanov-viola}), probabilistic proof systems (e.g.\ \cite{BFL91,Sha92} and \cite{ALMSS98}) and many more. 
An important aspect of error correcting codes that received a lot of attention is designing efficient decoding algorithms. The objective is to come up with an algorithm that can correct a certain amount of errors in a received word. There are two settings in which this problem is studied: \medskip {\bf Worst case errors:} This is also referred to as errors in the \emph{Hamming model}~\cite{hamming50}. Here, the algorithm should recover the original message regardless of the error pattern, as long as there are not too many errors. The number of errors such a decoding algorithm can tolerate is upper bounded in terms of the distance of the code. The {\em distance} of the code $C$ is the minimum Hamming distance of any two codewords in $C$. If the distance is $d$, then one can {\em uniquely} recover from at most $d-1$ erasures and from $\lfloor (d-1)/2 \rfloor$ errors. For this model of worst-case errors it is easy to prove that Reed-Muller codes perform badly. They have relatively small distance compared to what random codes of the same rate can achieve (and also compared to explicit families of codes). Another line of work in Hamming's worst case setting concerns designing algorithms that can correct beyond the unique-decoding bound. Here there is no unique answer and so the algorithm returns a list of candidate codewords. In this case the number of errors that the algorithm can tolerate depends on the distance of the code. This question received a lot of attention and among the works in this area we mention the seminal works of Goldreich and Levin on Hadamard Codes \cite{GoldreichLevin89} and of Sudan \cite{Sudan97} and Guruswami and Sudan \cite{GuruswamiSudan99} on list decoding Reed-Solomon codes.
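The distance-based worst-case guarantees above are easy to compute for $RM(m,r)$, whose parameters were recalled earlier ($n=2^m$, dimension $\binom{m}{\le r}$, distance $2^{m-r}$). The following sketch (our own illustration, with names of our choosing) spells this out:

```python
from math import comb

def rm_parameters(m, r):
    """Block length, dimension, and minimal distance of RM(m, r) over F_2."""
    n = 2 ** m
    k = sum(comb(m, i) for i in range(r + 1))  # monomials of degree <= r
    d = 2 ** (m - r)
    return n, k, d

def unique_decoding(d):
    """A code of minimal distance d uniquely corrects up to d-1 erasures
    and up to floor((d-1)/2) worst-case errors."""
    return d - 1, (d - 1) // 2

n, k, d = rm_parameters(10, 3)  # n = 1024, k = 176, d = 128
max_erasures, max_errors = unique_decoding(d)
```

Note how small the worst-case error radius (here 63 out of 1024 coordinates) is compared to the random-error counts discussed in this paper.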
Recently, the list-decoding question for Reed-Muller codes was studied by Gopalan, Klivans and Zuckerman \cite{GopalanKZ08} and by Bhowmick and Lovett \cite{BhowmickL14}, who proved that the list decoding radius\footnote{The maximum distance $\eta$ for which the number of codewords within distance $\eta$ is only polynomially large (in $n$).} of Reed-Muller codes, over $\mathbb{F}_2$, is at least twice the minimum distance (recall that the unique decoding radius is half that quantity) and is smaller than four times the minimal distance, when the degree of the code is constant. \medskip {\bf Random errors:} A different setting in which decoding algorithms are studied is Shannon's model of random errors \cite{shannon48}. In Shannon's average-case setting (which we study here), a codeword is subjected to a random corruption, from which recovery should be possible {\em with high probability}. This random corruption model is called a {\em channel}. The two most basic ones, the Binary Erasure Channel (BEC) and the Binary Symmetric Channel (BSC), have a parameter $p$ (which may depend on $n$), and corrupt a message by independently replacing, with probability $p$, the symbol in each coordinate, with a ``lost'' symbol in the BEC($p$) channel, and with the complementary symbol in the BSC($p$) case. In his paper Shannon studied the optimal trade-off achievable for these channels (and many other channels) between the amount of corruption and the rate. For {\em every} $p$, the capacity of BEC($p$) is $1-p$, and the capacity of BSC($p$) is $1-h(p)$, where $h$ is the binary entropy function.\footnote{$h(p) = -p\log_2(p) - (1-p)\log_2(1-p)$, for $p\in (0,1)$, and $h(0)=h(1)=0$.} Shannon also proved that random codes achieve this optimal behavior. That is, for every $\epsilon>0$ there exist codes of rate $1-h(p)-\epsilon$ for the BSC (and rate $1-p-\epsilon$ for the BEC), that can decode from a fraction $p$ of errors (erasures) with high probability.
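The two channel capacities just mentioned are elementary to evaluate; the following sketch (our own, with function names of our choosing) computes $h(p)$ and both capacities:

```python
from math import log2

def h(p):
    """Binary entropy function h(p) = -p*log2(p) - (1-p)*log2(1-p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bec_capacity(p):
    """Capacity of the binary erasure channel with erasure probability p."""
    return 1 - p

def bsc_capacity(p):
    """Capacity of the binary symmetric channel with crossover probability p."""
    return 1 - h(p)
```

For instance, a BSC with crossover probability around $0.11$ has capacity close to $1/2$, while a BEC needs erasure probability $1/2$ to have the same capacity.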
For our purposes, it is more convenient to assume that the codeword is subjected to a fixed number $s$ of random errors. Note that by the Chernoff-Hoeffding bound, (see e.g., \cite{AlonSpencer}), the probability that more than $pn + \omega(\sqrt{pn})$ errors occur in BSC($p$) (or BEC($p$)) is $o(1)$, and so we can restrict ourselves to the case of a fixed number $s$ of random errors, by setting the corruption probability to be $p=s/n$. We refer to \cite{AbbeSW15} for further discussion on this subject. \subsubsection*{Decoding erasures to decoding errors} \label{sec:erasures} Recently, there has been a considerable progress in our understanding of the behavior of Reed-Muller codes under random erasures. In \cite{AbbeSW15}, Abbe, Shpilka and Wigderson showed that Reed-Muller codes achieve capacity for the BEC for both sufficiently low and sufficiently high rates. Specifically, they showed that $RM(m,r)$ achieves capacity for the BEC for $r = o(m)$ or $r > m - o(\sqrt{m/\log m})$. More recently, Kumar and Pfister~\cite{KumarPfister15} and Kudekar, Mondelli, \c{S}a\c{s}o\u{g}lu and Urbanke \cite{KudekarMSU15} independently showed that Reed-Muller codes achieve capacity for the BEC in the entire constant rate regime, that is $r \in [m/2-O(\sqrt{m}), m/2+O(\sqrt{m})]$. These regimes are pictorially represented in \autoref{fig:rm-bec-regime}. 
\begin{figure}[h] \begin{center} \begin{tikzpicture}[framed] \draw[thick] (0,0) -- (10,0); \draw (0,-0.25) -- (0,0.25); \draw (10,-0.25) -- (10,0.25); \draw (5,-0.25) -- (5,0.25); \node at (5,0.5) {$m/2$}; \node at (0,0.5) {$0$}; \node at (10,0.5) {$m$}; \draw[ultra thick,draw=red] (4,0) -- (6,0); \draw[ultra thick,draw=red] (0,0) -- (2,0); \draw[ultra thick,draw=red] (8.5,0) -- (10,0); \draw [decorate,decoration={brace,amplitude=10pt,mirror},yshift=-3pt] (0,0) -- (2,0) node [black,midway,yshift=-0.8cm] {\footnotesize $o(m)$}; \draw [decorate,decoration={brace,amplitude=10pt,mirror},yshift=-3pt] (8.5,0) -- (10,0) node [black,midway,yshift=-0.8cm] {\footnotesize $o(\sqrt{(m/\log m)})$}; \draw [decorate,decoration={brace,amplitude=10pt,mirror},yshift=-3pt] (4,0) -- (6,0) node [black,midway,yshift=-0.8cm] {\footnotesize $O(\sqrt{m})$}; \end{tikzpicture} \end{center} \caption{Regime of $r$ for which $RM(m,r)$ is known to achieve capacity for the BEC} \label{fig:rm-bec-regime} \end{figure} Another result proved by Abbe et al.\ \cite{AbbeSW15} is that Reed-Muller codes $RM(m,m-2r-2)$ can correct any \emph{error pattern} if the same \emph{erasure pattern} can be decoded in $RM(m,m-r-1)$. This reduction is appealing on its own, since it connects decoding from erasures --- which is easier in both an intuitive and an algorithmic manner --- with decoding from errors; but its importance is further emphasized by the progress made later by Kumar and Pfister and Kudekar et al., who showed that Reed-Muller codes can correct many erasures in the constant rate regime, right up to the channel capacity. Combined with the reduction, these results show that $RM(m,m-(2r+2))$ can cope with most error patterns of weight $(1-o(1))\binom{m}{\le r}$, which is the capacity of $RM(m,m-(r+1))$ for the BEC.
While this is polynomially smaller than what can be achieved in the Shannon model of errors for random codes of the same rate, this number is still much larger (super-polynomial) than the distance (and the list-decoding radius) of the code, which is $2^{2r+2}$. Also, since $RM\inparen{m, \frac{m}{2} - o(\sqrt{m})}$ can cope with $\inparen{\frac{1}{2} - o(1)}$-fraction of erasures, this translation implies that $RM(m,o(\sqrt{m}))$ can handle that many random errors. However, a shortcoming of the proof of Abbe et al.\ for the BSC is that it is existential. In particular it does not provide an efficient decoding algorithm. Thus, Abbe et al.\ left open the question of coming up with a decoding algorithm for Reed-Muller codes from random errors. \subsection{Our contributions}\label{sec:our-results} In this work we give an efficient decoding algorithm for Reed-Muller codes that matches the parameters given by Abbe et al. Following the aforementioned results about the erasure correcting ability of Reed-Muller codes, the results can be partitioned into the low-rate and the high-rate regimes. We begin with the result for the low rate case. \begin{theorem}[Low rate, informal]\label{thm:main-low-degree} Let $r < \delta \sqrt{m}$ for a small enough $\delta$. Then, there is an efficient algorithm that can decode $RM(m,r)$ from a random set of $(1 - o(1)) \cdot \binom{m}{\leq m/2 - r}$ errors. In particular, if $r = o(\sqrt{m})$, the algorithm can decode from $\inparen{\frac{1}{2} - o(1)} \cdot 2^m$ errors. The running time of the algorithm is $O(n^4)$ and it can be simulated in $\mathsf{NC}$. \end{theorem} For high rate Reed-Muller codes, we cannot hope to achieve such a high error correction capability as in the low rate case, even information theoretically. 
We do give, however, an algorithm that corrects many more errors (a super-polynomially larger number) than what the minimal distance of the code suggests, and its running time is also nearly linear in the block length of the code. \begin{theorem}[High rate, informal]\label{thm:main:informal} Let $r =o(\sqrt{m/\log m})$. Then, there is an efficient algorithm that can decode $RM(m, m-(2r+2))$ from a random set of $(1-o(1))\binom{m}{\le r}$ errors. Moreover, the running time of the algorithm is $2^m \cdot \mathrm{poly}(\binom{m}{\le r})$ and it can be simulated in $\mathsf{NC}$. \end{theorem} Recall that the block length of the code is $n=2^m$, and thus the running time is near linear in $n$ when $r=o(m)$. A general property of our algorithm is that it corrects any error pattern in $RM(m, m - 2r -2)$ for which the same {\em erasure} pattern in $RM(m,m - r-1)$ can be corrected. Stated differently, if an erasure pattern can be corrected in $RM(m, m-r-1)$ then the same pattern, where the ``lost'' symbol is replaced with arbitrary $0/1$ values, can be corrected in $RM(m, m-(2r+2))$. This property is useful when we know $RM(m,m - r-1)$ can correct a large set of erasures with high probability, that is, when $m-r-1$ falls in the \emph{red region} in \autoref{fig:rm-bec-regime}. Thus, our result has implications also beyond the above two instances. In particular, it may be the case that our algorithm performs well for other rates as well. For example, consider the following question and the theorem it implies. \begin{question}\label{Q:RM:BEC} Does $RM(m, m-r-1)$ achieve capacity for the BEC? \end{question} \begin{theorem}[informal]\label{thm:main:conj:informal} For any value $r$ for which the answer to \autoref{Q:RM:BEC} is positive, there exists an efficient algorithm that decodes $RM(m, m-2r-2)$ from a random set of $(1-o(1))\binom{m}{\le r}$ errors with probability $(1-o(1))$ (over the random errors). 
Moreover, the running time of the algorithm is $2^m \cdot \mathrm{poly}\inparen{\binom{m}{\le r}}$. \end{theorem} Recall that Abbe et al.\ \cite{AbbeSW15} also proved that the answer to \autoref{Q:RM:BEC} is positive for $r = m - o(m)$ (that is, for $RM(m,o(m))$), but this case does not help us as we need to consider $RM(m, m-(2r+2))$ and $m - (2r + 2) < 0$ in this case. The coding theory community seems to believe that the answer to \autoref{Q:RM:BEC} is positive for all values of $r$, and conjectures to that effect were made\footnote{The belief that RM codes achieve capacity is much older, but we did not trace back where it appears first.} in \cite{forney-road,arikan-RM,mondelli-RM}. Recent simulations have also suggested that the answer to the question is positive \cite{arikan-RM,mondelli-RM}. Thus, it seems natural to believe that the answer is positive for most values of $r$, even for $r =\Theta(m)$. In conclusion, the belief in the coding theory community suggests that our algorithm can decode a random set of roughly $\binom{m}{\le r}$ errors in $RM(m, m-(2r+2))$. For example, for $r=\rho\cdot m$, where $\rho<1/2$, the minimal distance of $RM(m, m-(2r+2))$ is roughly $2^{2\rho m}$, whereas our algorithm can decode from roughly $2^{h(\rho)m}$ random errors, where $h(\cdot)$ is the binary entropy function (assuming the answer to \autoref{Q:RM:BEC} is positive); the latter is a much larger quantity for every $\rho < 1/2$.\\ In \autoref{sec:abstraction}, we also present an abstraction of our decoding procedure that may be applicable to other linear codes. This is a generalization of the abstract Berlekamp-Welch decoder or ``error-locating pairs'' method of Duursma and K\"{o}tter~\cite{DuursmaK94} that connects decodable erasure patterns on a larger code to decodable error patterns.
A specific instantiation of this was observed by Abbe et al.\ \cite{AbbeSW15} by connecting decodable error patterns of any linear code $C$ to decodable erasure patterns of an appropriate ``tensor'' $C'$ of $C$ (by essentially embedding these codes in a large enough RM code). Although Abbe et al.\ did not provide an efficient decoding algorithm, the algorithm we present directly applies here (\autoref{sec:general}). The abstraction of the ``error-locating pairs'' method presented in \autoref{sec:abstraction} should hopefully be applicable in other contexts too, especially considering the generality of the results of \cite{KumarPfister15, KudekarMSU15}. \subsection{Related literature} In \autoref{sec:erasures} we surveyed the known results regarding the ability of Reed-Muller codes to correct random erasures. In this section we summarize the results known about recovering RM codes from random errors. Once again, it is useful to distinguish between the low rate and the high rate regime of Reed-Muller codes. We shall use $d$ to denote the distance of the code in context. For $RM(m,r)$ codes, $d = 2^{m-r}$. In \cite{krich}, the majority logic algorithm of \cite{reed} is shown to succeed in recovering all but a vanishing fraction of error patterns of weight up to $d \log d/4$ for all RM codes of positive rate. In \cite{dumer3}, Dumer showed for all $r$ such that $\min(r, m-r) = \omega(\log m)$ that most error patterns of weight at most $(d\log d/2) \cdot (1 - \frac{\log m}{\log d})$ can be recovered in $RM(m,r)$. To make sense of the parameters, we note that when $r = m - \omega(\log m)$ the weight is roughly $(d\log d/2)$. To compare this result to ours, we first consider the case when $r=m - o(\sqrt{m/\log m})$. Here the algorithm of \cite{dumer3} can correct roughly $2^{o(\sqrt{m/\log m})}$ random errors in $RM(m,r)$ whereas \autoref{thm:main:informal} gives an algorithm for correcting roughly $m^{o(\sqrt{m/ \log m})} \approx (d \log d)^{O(\log m)}$ random errors. 
Further, even for the case $r = (1 -\rho) m$, where $\rho<1/2$ is a constant, the bound in the above result of \cite{dumer3} is equal to $O(d\log d)$. On the other hand, assuming a positive answer to \autoref{Q:RM:BEC}, \autoref{thm:main:conj:informal} implies an efficient decoding algorithm for $RM(m,(1-\rho)m)$ that can decode from, roughly, $\binom{m}{\frac{1}{2}\rho m} = d^{O(\log 1/\rho)}$ random errors, for this case. \begin{figure}[h] \begin{center} \begin{tikzpicture}[framed] \draw[thick] (0,0) -- (12,0); \draw (0,-0.25) -- (0,0.25); \draw (12,-0.25) -- (12,0.25); \draw (6,-0.25) -- (6,0.25); \node at (6,-0.5) {\scriptsize $m/2$}; \node at (0,-0.5) {\scriptsize $0$}; \node at (12,-0.5) {\scriptsize $m$}; \draw [decorate,decoration={brace,amplitude=5pt},yshift=10pt] (0,0) -- (1,0) node [black,midway,yshift=0.4cm] {\scriptsize $\log m$}; \draw [decorate,decoration={brace,amplitude=5pt},yshift=10pt] (11,0) -- (12,0) node [black,midway,yshift=0.4cm] {\scriptsize $\log m$}; \draw (1,-0.25) -- (1,0.25); \draw (11,-0.25) -- (11,0.25); \draw (2,-0.25) -- (2,0.25); \node at (2,-0.5) {\scriptsize $o(\sqrt{m})$}; \draw (9,-0.25) -- (9,0.25); \draw [decorate,decoration={brace,amplitude=10pt,mirror},yshift=-20pt] (9,0) -- (12,0) node [black,midway,yshift=-0.7cm] {\scriptsize $o(\sqrt{m/\log m})$}; \node[anchor=east] at (-0.5,0) {\scriptsize Degree ($r$) of $RM(m,r)$:}; \node[anchor=east] at (-0.5,1.5) {\scriptsize \cite{dumer1,dumer2,dumer3}:}; \draw[very thick, draw=blue!80] (0,1.5) -- (0.9,1.5) node [black, midway, above] {\scriptsize $\approx n/2$ errors}; \draw[very thick, draw=red!80] (1.1,1.5) -- (10.9,1.5) node [black, midway, above] {\scriptsize $O(d \log d)$ errors} node [black, midway, below] {\scriptsize $O(n \log n)$ time algorithm}; \node[anchor=east] at (-0.5,3) {\scriptsize Our results:}; \draw[very thick, draw=blue!80] (0,3) -- (2,3) node [black, midway, above] {\scriptsize $\approx n/2$ errors} node [black, midway, below] {\scriptsize $O(n^4)$ time 
algo.}; \draw[very thick, draw=red!80] (9,3) -- (12,3) node [black, midway, above] {\scriptsize $(d \log d)^{O(\log m)}$ errors} node [black, midway, below] {\scriptsize $n^{1 + o(1)}$ time algo.}; \draw[very thick, loosely dotted, draw=brown!80] (2,3) -- (9,3) node [black!70, midway, above] {\scriptsize $(d \log d)^{\omega(1)}$ errors} node [black!70, midway, below] {\scriptsize assuming positive answer to \autoref{Q:RM:BEC}}; \end{tikzpicture} \end{center} \caption{Comparison with \cite{dumer1,dumer2,dumer3}} \label{fig:comparison-dumer} \end{figure} We now turn to other regimes of parameters, specifically RM codes of low rate. For the special case of $r=1,2$, \cite{hell} shows that $RM(m,r)$ codes are capacity-achieving. In \cite{sidel}, it is shown that RM codes of fixed order (i.e., $r=O(1)$) can decode most error patterns of weight up to $\frac{1}{2}n(1-\sqrt{c(2^r-1)m^r/ n r!})$, where $c> \ln(4)$. In \cite{AbbeSW15}, Abbe et al.\ settled the question for low order Reed-Muller codes proving that $RM(m,r)$ codes achieve capacity for the BSC when $r=o(m)$ \cite{AbbeSW15}. We note however that all the results mentioned here are existential in nature and do not provide an efficient decoding algorithm. A line of work by Dumer \cite{dumer1,dumer2} based on recursive algorithms (that exploit the recursive structure of Reed-Muller codes), obtains algorithmic results mainly for low-rate regimes. In \cite{dumer1}, it is shown that for a fixed degree, i.e., $r=O(1)$, an algorithm of complexity $O(n \log n)$ can correct most error patterns of weight up to $n(1/2 - \varepsilon)$ given that $\varepsilon$ exceeds $n^{-1/2^r}$. In \cite{dumer3}, this is improved to errors of weight up to $\frac{1}{2}n(1-(4m/d)^{1/2^r})$ for all $r=o(\log m )$. The case $r=\omega(\log m)$ is also covered in \cite{dumer3}, as described above. 
We note that all the efficient algorithms mentioned above (both for high- and low-rate) rely on the so called Plotkin construction of the code, that is, on its recursive structure (expanding an $m$-variate polynomial according to the $m$-th variable $f(x_1,\ldots,x_m)=x_m g(x_1,\ldots,x_{m-1})+h(x_1,\ldots,x_{m-1})$), whereas our approach is very different. We summarize and compare our results with \cite{dumer1,dumer2,dumer3} for various range of parameters in \autoref{fig:comparison-dumer} (degree is $r$ and distance is $d = 2^{m-r}$). The dotted region in \autoref{fig:comparison-dumer} corresponds to the uncovered region in \autoref{fig:rm-bec-regime} beyond $m/2$, via the connection given in \autoref{thm:main:conj:informal}. \subsection{Notation and terminology}\label{sec:notation} Before explaining the idea behind the proofs of our results we need to introduce some notation and parameters. We shall use the same notation as \cite{AbbeSW15}. \begin{itemize} \item We denote by $\mathbb{M}(m,r)$ the set of $m$-variate monomials over $\mathbb{F}_2$ of degree at most $r$. \item For non-negative integers $r \leq m$, $RM(m,r)$ denotes the Reed-Muller code whose codewords are the evaluation vectors of all multivariate polynomials of degree at most $r$ on $m$ boolean variables. The maximal degree $r$ is sometimes called the order of the code. The block length of the code is $n=2^m$, the dimension $k=k(m,r)=\sum_{i=0}^r \binom{m}{i} \stackrel{\mathrm{def}}{=} \binom{m}{\le r}$, and the distance $d=d(m,r)=2^{m-r}$. The code rate is given by $R=k(m,r)/n$. \item We use $E(m,r)$ to denote the ``evaluation matrix'' of parameters $m,r$, whose rows are indexed by all monomials in $\mathbb{M}(m,r)$, and whose columns are indexed by all vectors in $\mathbb{F}_2^m$. The value at entry $(M,\mathbf{u})$ is equal to $M(\mathbf{u})$. 
For $\mathbf{u}\in \mathbb{F}_2^m$, we denote by $\mathbf{u}^r$ the column of $E(m,r)$ indexed by $\mathbf{u}$, which is a $k$-dimensional vector, consisting of all evaluations of degree $\leq r$ monomials at $\mathbf{u}$. For a subset of columns $U \subseteq \mathbb{F}_2^m$ we denote by $U^r$ the corresponding submatrix of $E(m,r)$. \item $E(m,r)$ is a generator matrix for $RM(m,r)$. The duality property of Reed-Muller codes (see, for example, \cite{sloane-book}) states that $E(m,m-r-1)$ is a parity-check matrix for $RM(m,r)$, or equivalently, $E(m,r)$ is a parity-check matrix for $RM(m,m-r-1)$. \item We associate with a subset $U\subseteq \mathbb{F}_2^m$ its characteristic vector $\mathbbm{1}_U \in \mathbb{F}_2^n$. We often think of the vector $\mathbbm{1}_U$ as denoting either an {\em erasure pattern} or an {\em error pattern}. \item For a positive integer $n$, we use the standard notation $[n]$ for the set $\{1,2,\ldots,n\}$. \end{itemize} We next define what we call the degree-$r$ syndrome of a set. \begin{definition}[Syndrome] Let $r \le m$ be two positive integers. The \emph{degree-$r$ syndrome}, or simply \emph{$r$-syndrome} of a set $U=\inbrace{\mathbf{u}_1,\ldots,\mathbf{u}_t}\subseteq\mathbb{F}_2^m$ is the $\binom{m}{\le r}$-dimensional vector $\alpha$ whose entries are indexed by all monomials $M \in \mathbb{M}(m,r)$, such that \[ \alpha_M \stackrel{\mathrm{def}}{=} \sum_{i=1}^t M(\mathbf{u}_i). \] \end{definition} Note that this is nothing but the syndrome of the error pattern $\mathbbm{1}_U \in \mathbb{F}_2^{n}$ in the code $RM(m, m-r-1)$ (whose parity check matrix is the generator matrix of $RM(m,r)$). \subsection{Proof techniques}\label{sec:techniques} In this section we describe our approach for constructing a decoding algorithm. Recall that the algorithm has the property that it decodes in $RM(m, m-2r-2)$ any error pattern $U$ which is correctable from erasures in $RM(m, m-r-1)$.
Such patterns are characterized by the property that the columns of $E(m,r)$ corresponding to the elements of $U$ are linearly independent vectors. Thus, it suffices to give an algorithm that succeeds whenever the error pattern $\mathbbm{1}_U$ gives rise to such linearly independent columns, which happens with probability $1-o(1)$ for the regime of parameters mentioned in \autoref{thm:main-low-degree} and \autoref{thm:main:informal}. So let us assume from now on that the error pattern $\mathbbm{1}_U$ corresponds to a set of linearly independent columns in $E(m,r)$. Notice that by the choice of our parameters, our task is to recover $U$ from the degree-$(2r+1)$ syndrome of $U$. Furthermore, we want to do so efficiently. For convenience, let $t = |U|=(1-o(1))\binom{m}{\le r}$. Recall that the degree-$(2r+1)$ syndrome of $U$ is the $\binom{m}{\le 2r+1}$-long vector $\alpha$ such that for every monomial $M\in \mathbb{M}(m,2r+1)$, $\alpha_M = \sum_{i=1}^t M(\mathbf{u}_i)$. Imagine now that we could somehow find degree-$r$ polynomials $f_i(x_1,\ldots,x_m)$ satisfying $f_i(\mathbf{u}_j)=\delta_{i,j}$. Then, from knowledge of $\alpha$ and, say, $f_1$, we could compute the following sums: \[ \sigma_\ell = \sum_{i=1}^{t} (f_1 \cdot x_\ell) (\mathbf{u}_i), \quad \ell \in [m]. \] Indeed, if we know $\alpha$ and $f_1$ then we can compute each $\sigma_\ell$, as it just involves summing several coordinates of $\alpha$ (since $\deg(f_1\cdot x_\ell) \leq r+1$). We now observe that \[ \sigma_\ell = \sum_{i=1}^{t} (f_1 \cdot x_\ell) (\mathbf{u}_i) = (f_1 \cdot x_\ell) (\mathbf{u}_1) = (\mathbf{u}_1)_\ell. \] In other words, knowledge of such an $f_1$ would allow us to discover all the coordinates of $\mathbf{u}_1$, and hence $\mathbf{u}_1$ itself; similarly, we can recover every other $\mathbf{u}_i$ using $f_i$. Our approach is thus to find such polynomials $f_i$.
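The dual-polynomial trick can be checked numerically. Below is a minimal Python sketch of it on a toy instance: the helper names, the choice $m=4$, $r=1$, and the particular points are our own illustration, not part of the paper. It constructs $f_1$ by solving $U^r\mathbf{x}=\mathbf{e}_1$ over $\mathbb{F}_2$, then verifies that the sums $\sigma_\ell$ read off the coordinates of $\mathbf{u}_1$.

```python
import itertools

def monomials(m, r):
    # Multilinear monomials of degree <= r, encoded by their variable sets.
    return [S for k in range(r + 1) for S in itertools.combinations(range(m), k)]

def mono_eval(S, u):
    # Evaluate prod_{i in S} x_i at the point u in F_2^m (empty product = 1).
    return int(all(u[i] for i in S))

def solve_gf2(A, b):
    # Gaussian elimination over GF(2); returns one solution of A x = b, or None.
    rows = [row[:] + [bi] for row, bi in zip(A, b)]
    ncols, pivots, r = len(A[0]), [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b2 for a, b2 in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    if any(row[-1] for row in rows[r:]):
        return None  # inconsistent system
    x = [0] * ncols
    for i, c in enumerate(pivots):
        x[c] = rows[i][-1]
    return x

m, r = 4, 1
mons = monomials(m, r)
U = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 1)]  # the vectors u_j^r are independent

# Dual polynomial f_1 with f_1(u_j) = delta_{1,j}: solve U^r x = e_1 over F_2.
coeffs = solve_gf2([[mono_eval(S, u) for S in mons] for u in U], [1, 0, 0])

def f1(u):
    return sum(c * mono_eval(S, u) for c, S in zip(coeffs, mons)) % 2

# sigma_ell = sum_i (f_1 * x_ell)(u_i) reads off the ell-th coordinate of u_1.
sigma = [sum(f1(u) * u[ell] for u in U) % 2 for ell in range(m)]
print(sigma == list(U[0]))  # True
```

On this instance the solver returns $f_1 = x_1$ (the first variable), and the $\sigma_\ell$ indeed reproduce $\mathbf{u}_1 = (1,0,0,0)$.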
What we will do is set up a system of linear equations in the coefficients of an unknown degree $r$ polynomial $f$ and show that $f_1$ is the unique solution to the system. Indeed, showing that $f_1$ is a solution is easy and the hard part is proving that it is the unique solution. To explain how we set the system of equations, let us assume for the time being that we actually know $\mathbf{u}_1$. Let $f = \sum_{M\in \mathbb{M}(m,r)} c_M \cdot M$, where we think of $\{c_M\}$ as unknowns. Consider the following linear system: \begin{enumerate} \item $\sum\limits_{i=1}^t f(\mathbf{u}_i) \sspaced{=} f(\mathbf{u}_1) \sspaced{=} 1$, \item $\sum\limits_{i=1}^t (f \cdot M)(\mathbf{u}_i) \sspaced{=} M(\mathbf{u}_1)$, for all $M \in \mathbb{M}(m,r)$. \item $\sum\limits_{i=1}^t (f \cdot M \cdot (x_\ell + (\mathbf{u}_1)_\ell + 1))(\mathbf{u}_i) \sspaced{=} M(\mathbf{u}_1)$ for every $\ell \in [m]$ and for all $M \in \mathbb{M}(m,r)$. \end{enumerate} In words, we have a system of $2 + \binom{m}{\le r} + m\cdot \binom{m}{\le r}$ equations in $\binom{m}{\le r}$ variables (the coefficients of $f$). Observe that $f=f_1$ is indeed a solution to the system. To prove that it is the unique solution we rely on the fact that the columns of $U^r$ are linearly independent and hence expressing $\mathbf{u}_1^r$ as a linear combination of those columns can be done in a unique way. Now we explain what to do when we do not know $\mathbf{u}_1$. Let $\mathbf{v}=(v_1,\ldots,v_m)\in \mathbb{F}_2^m$. We modify the linear system above to: \begin{enumerate} \item $\sum\limits_{i=1}^t f(\mathbf{u}_i) \sspaced{=} f(\mathbf{v}) \sspaced{=} 1$, \item $\sum\limits_{i=1}^t (f \cdot M)(\mathbf{u}_i) \sspaced{=} M(\mathbf{v})$ for all $M \in \mathbb{M}(m,r)$. \label{item:in-span} \item $\sum\limits_{i=1}^t (f \cdot M \cdot (x_\ell + v_\ell + 1))(\mathbf{u}_i) \sspaced{=} M(\mathbf{v})$ for all $\ell \in [m]$ and $M \in \mathbb{M}(m,r)$. 
\label{item:in-set} \end{enumerate} Now the point is that one can prove that if a solution exists then it must be the case that $\mathbf{v}$ is an element of $U$. Indeed, the set of equations in \autoref{item:in-span} implies that $\mathbf{v}^r$ is in the linear span of the columns of $U^r$. The linear equations in \autoref{item:in-set} then imply that $\mathbf{v}$ must actually be in the set $U$. Notice that what we actually do amounts to setting, for every $\mathbf{v}\in\mathbb{F}_2^m$, a system of linear equations of size roughly $\binom{m}{\le r}$. Such a system can be solved in time $\mathrm{poly}\inparen{\binom{m}{\le r}}$. Thus, when we go over all $\mathbf{v}\in\mathbb{F}_2^m$ we get a running time of $2^m \cdot \mathrm{poly}\inparen{\binom{m}{\le r}}$, as claimed. Our proof can be viewed as an algorithmic version of the proof of Theorem 1.8 of Abbe et al.\ \cite{AbbeSW15}. That theorem asserts that when the columns of $U^r$ are linearly independent, the $(2r+1)$-syndrome of $U$ is unique. In their proof of the theorem they first use the $(2r)$-syndrome to claim that if $V$ is another set with the same $(2r)$-syndrome then the column span of $U^r$ is the same as that of $V^r$. Then, using the degree $(2r+1)$ monomials they deduce that $U=V$. This is similar to what our linear system does, but, in contrast, \cite{AbbeSW15} did not have an efficient algorithmic version of this statement. \section{Decoding Algorithm For Reed-Muller Codes}\label{sec:decode} We begin with the following basic linear algebraic fact. \begin{lemma} \label{lem:dual-poly} Let $\mathbf{u}_1,\dots, \mathbf{u}_t \in \mathbb{F}_2^m$ such that $\inbrace{\mathbf{u}_1^{r}, \dots, \mathbf{u}_t^{ r}}$ are linearly independent. Then, for every $i \in [t]$, there exists a polynomial $f_i$ so that for every $j \in [t]$, \[ f_i (\mathbf{u}_j) = \delta_{i,j} = \begin{cases} 1 & \text{if } i=j \\ 0 & \text{otherwise}. \end{cases} \] \end{lemma} \noindent For completeness, we give the short proof. 
\begin{proof} Consider the matrix $U^r \in \mathbb{F}_2^{t \times \binom{m}{\le r}}$ whose $i$-th row is $\mathbf{u}_i^{ r}$. A polynomial $f_i$ which satisfies the properties of the lemma is a solution to the linear system $U^r\mathbf{x}=\mathbf{e}_i$, where $\mathbf{e}_i \in \mathbb{F}_2^t$ is the $i$-th elementary basis vector (that is, $(\mathbf{e}_i)_j=\delta_{i,j}$), and the $\binom{m}{\le r}$ unknowns are the coefficients of $f_i$. Since the rows of $U^r$ are linearly independent by assumption, a solution indeed exists. \end{proof} \noindent The algorithm proceeds by making a guess $\mathbf{v}=(v_1,\dots, v_m) \in \mathbb{F}_2^m$ for one of the error locations. If we could come up with an efficient way to \emph{verify} that the guess is correct, this would immediately yield a decoding algorithm. We shall verify our guess by using the dual polynomials $f_1,\dots, f_t$ described above. We shall find them by solving a system of linear equations that can be constructed from the $(2r+1)$-syndrome of $\inbrace{\mathbf{u}_1,\dots, \mathbf{u}_t}$. We will need the following crucial, yet simple, observation. \begin{observation}\label{obs:evalsum-from-syndrome} Let $f$ be any $m$-variate polynomial of degree at most $2r+1$, and $\mathbf{u}_1,\dots, \mathbf{u}_t \in \mathbb{F}_2^m$. Then, the sum $\sum_{i=1}^t f(\mathbf{u}_i)$ can be computed given the $(2r+1)$-syndrome of $\inbrace{\mathbf{u}_1,\ldots,\mathbf{u}_t}$, in time $O\inparen{\binom{m}{\le 2r+1}}$. \end{observation} \begin{proof} For any $M \in \mathbb{M}(m,2r+1)$, denote $\alpha_M = \sum_{i=1}^t M(\mathbf{u}_i)$ (so that $\alpha = (\alpha_M)_{M \in \mathbb{M}(m,2r+1)}$ is precisely the syndrome of $\inbrace{\mathbf{u}_1,\ldots,\mathbf{u}_t}$).
Write $f = \sum_{M \in \mathbb{M}(m,2r+1)} c_M \cdot M$, where $c_M \in \mathbb{F}_2$, then \begin{align*} \sum_{i=1}^t f(\mathbf{u}_i) &\spaced{=} \sum_{i=1}^t \sum_{M \in \mathbb{M}(m,2r+1)} c_M \cdot M(\mathbf{u}_i)\\& \spaced{=} \sum_{M \in \mathbb{M}(m,2r+1)} c_M \inparen{\sum_{i=1}^t M(\mathbf{u}_i)} \spaced{=} \sum_{M \in \mathbb{M}(m,2r+1)} c_M \alpha_M. \qedhere \end{align*} \end{proof} The following lemma shows how to verify a guess for an error location. It is the main ingredient in the analysis of our algorithm and the reason why it works. Basically, the lemma gives a system of linear equations whose solution enables us to decide whether a given $\mathbf{v}\in \mathbb{F}_2^m$ is a corrupted coordinate or not, without knowledge of the set of errors $U$ but only of its syndrome. In a sense, this lemma is analogous to the Berlekamp-Welch algorithm, which also gives a system of linear equations whose solution reveals the set of erroneous locations (\cite{BerWel}, and see also the exposition in Chapter 13 of \cite{GRSBook}). \begin{lemma}[Main Lemma] \label{lem:decode} Let $\mathbf{u}_1,\dots, \mathbf{u}_t \in \mathbb{F}_2^m$ such that $\inbrace{\mathbf{u}_1^{r}, \dots, \mathbf{u}_t^{ r}}$ are linearly independent, and $\mathbf{v}=(v_1,\ldots,v_m) \in \mathbb{F}_2^m$. Suppose there exists a multilinear polynomial $f \in \mathbb{F}_2[x_1,\ldots,x_m]$ with $\deg(f) \le r$ such that for every monomial $M \in \mathbb{M}(m,r)$, \begin{enumerate} \item \label{item:sum1} $\sum\limits_{i=1}^t f(\mathbf{u}_i) \sspaced{=} f(\mathbf{v}) \sspaced{=} 1$, \item \label{item:2r} $\sum\limits_{i=1}^t (f \cdot M)(\mathbf{u}_i) \sspaced{=} M(\mathbf{v})$, and \item \label{item:2r+1} $\sum\limits_{i=1}^t (f \cdot M \cdot (x_\ell + v_\ell + 1))(\mathbf{u}_i) \sspaced{=} M(\mathbf{v})$ for every $\ell \in [m]$. \end{enumerate} Then there exists $i \in [t]$ such that $\mathbf{v}=\mathbf{u}_i$. 
\end{lemma} \noindent Observe that if indeed $\mathbf{v}=\mathbf{u}_i$ for some $i \in [t]$, then the polynomial $f_i$ guaranteed by \autoref{lem:dual-poly} satisfies those equations. Hence, the lemma should be interpreted as saying the converse: that if there exists such a solution, then $\mathbf{v}=\mathbf{u}_i$ for some $i$. Further, given the $(2r+1)$-syndrome of $\inbrace{\mathbf{u}_1,\ldots,\mathbf{u}_t}$ as input, \autoref{obs:evalsum-from-syndrome} shows that each of the above constraints is linear in the coefficients of $f$. Thus, finding such an $f$ is merely solving a system of $O\inparen{m\binom{m}{\leq r}}$ linear equations in $\binom{m}{\leq r}$ unknowns and can be done in $\mathrm{poly}\inparen{\binom{m}{\leq r}}$ time. \begin{proof}[Proof of \autoref{lem:decode}] Let $J = \inbrace{j \mid f(\mathbf{u}_j) = 1}$. Note that by \autoref{item:sum1} it holds that $J \neq \emptyset$. \begin{addmargin}[2em]{5em} \begin{subclaim} $\sum\limits_{i \in J} \mathbf{u}_i^{ r} =\mathbf{v}^{ r}$. \end{subclaim} \begin{myproof}{Subclaim} Let $M \in \mathbb{M}(m,r)$. We show that $\sum_{i \in J} M(\mathbf{u}_i) = M(\mathbf{v})$, i.e.,\ that the $M$'th coordinate of $\sum_{i \in J} \mathbf{u}_i^{ r}$ is equal to that of $\mathbf{v}^{ r}$. Indeed, as $f$ satisfies the constraints in \autoref{item:2r}, \[ M(\mathbf{v}) = \sum_{i=1}^t (f \cdot M)(\mathbf{u}_i) = \sum_{i \in J} (f \cdot M)(\mathbf{u}_i) + \sum_{i \not\in J} (f \cdot M)(\mathbf{u}_i) = \sum_{i \in J} M(\mathbf{u}_i). \myqedhere \] \end{myproof} \end{addmargin} For any $\ell \in [m]$, let $J_{\ell} = \inbrace{j \mid f(\mathbf{u}_j)=1 \;\text{and}\; (\mathbf{u}_j)_\ell = v_\ell} \subseteq J$. Observe that this definition implies that for every $j \in [t]$, the index $j$ is in $J_\ell$ if and only if $(f\cdot (x_\ell + v_\ell + 1))(\mathbf{u}_j)=1$. Using a similar argument, we can show the following.
\begin{addmargin}[2em]{5em} \begin{subclaim} For every $\ell \in [m]$, \begin{equation} \label{eq:sum-J_ell} \sum_{i \in J_\ell} \mathbf{u}_i^{ r} = \mathbf{v}^{ r}. \end{equation} \end{subclaim} \begin{myproof}{Subclaim} Again, for any $M \in \mathbb{M}(m,r)$ the constraints in \autoref{item:2r+1} imply that \[ M(\mathbf{v}) = \sum_{i=1}^t (f \cdot M \cdot (x_\ell + v_\ell + 1))(\mathbf{u}_i) = \sum_{i \in J_\ell} M(\mathbf{u}_i). \myqedhere \] \end{myproof} \end{addmargin} \noindent From the above claims, \[ \mathbf{v}^r = \sum_{i \in J} \mathbf{u}_i^r = \sum_{i\in J_1} \mathbf{u}_i^r = \dots = \sum_{i \in J_m} \mathbf{u}_i^r. \] By the linear independence of $\inbrace{\mathbf{u}_1^{ r}, \dots, \mathbf{u}_t^{r}}$, it follows that $J = J_1 = J_2 = \cdots = J_m$. Indeed, there is a unique linear combination of $\inbrace{\mathbf{u}_1^r,\ldots,\mathbf{u}_t^r}$ that gives $\mathbf{v}^r$. Now, if an index $j$ lies in the (non-empty) set $J = \bigcap_{\ell=1}^m J_\ell$, then $(\mathbf{u}_j)_\ell = v_\ell$ for every $\ell \in [m]$, that is, $\mathbf{u}_j = \mathbf{v}$. Hence there exists $i \in [t]$ so that $\mathbf{u}_i = \mathbf{v}$. \end{proof} \autoref{lem:decode} implies a natural algorithm for decoding from $t$ errors indexed by vectors $\inbrace{\mathbf{u}_1,\ldots,\mathbf{u}_t}$, assuming $\inbrace{\mathbf{u}_1^{ r}, \dots, \mathbf{u}_t^{ r}}$ are linearly independent, that we write down explicitly in \autoref{alg:decoding}. \begin{algorithm} \caption{: Reed-Muller Decoding} \label{alg:decoding} \begin{algorithmic}[1] \Require{The $(2r+1)$-syndrome of $\inbrace{\mathbf{u}_1,\ldots,\mathbf{u}_t}$} \State{$\mathcal{E} = \emptyset$} \ForAll{$\mathbf{v}=(v_1,\dots, v_m) \in \mathbb{F}_2^m$} \State{Solve for a polynomial $f \in \mathbb{F}_2[x_1,\dots, x_m]$ of degree at most $r$: \begin{itemize} \item $\sum\limits_{i=1}^t f(\mathbf{u}_i)=f(\mathbf{v}) = 1$, \item $\sum\limits_{i=1}^t (f \cdot M)(\mathbf{u}_i) = M(\mathbf{v})$ for all $M \in \mathbb{M}(m,r)$.
\item $\sum\limits_{i=1}^t (f \cdot M \cdot (x_\ell + v_\ell + 1))(\mathbf{u}_i) = M(\mathbf{v})$ for all $\ell \in [m]$ and $M \in \mathbb{M}(m,r)$. \end{itemize}} \If{there is a polynomial $f$ that satisfies the above system of equations} \State{Add $\mathbf{v}$ to the set $\mathcal{E}$. } \EndIf \EndFor \State{{\bf return} the set $\mathcal{E}$ as the error locations. } \end{algorithmic} \end{algorithm} \begin{theorem} \sloppy \label{thm:decode-algo} Given the $(2r+1)$-syndrome of $t$ unknown vectors $\inbrace{\mathbf{u}_1,\ldots,\mathbf{u}_t} \subseteq \mathbb{F}_2^m$ such that $\inbrace{\mathbf{u}_1^{ r}, \dots, \mathbf{u}_t^{ r}}$ are linearly independent, \autoref{alg:decoding} outputs $\inbrace{\mathbf{u}_1,\ldots,\mathbf{u}_t}$, runs in time $2^m \cdot \mathrm{poly}(\binom{m}{\le r})$ and can be realized using a circuit of depth $\mathrm{poly}(m) = \mathrm{poly}(\log n)$. \end{theorem} \begin{proof} The algorithm enumerates all vectors in $\mathbb{F}_2^m$, and for each candidate $\mathbf{v}$ checks whether there exists a solution to the linear system of $\mathrm{poly}(\binom{m}{\le r})$ equations in $\mathrm{poly}(\binom{m}{\le r})$ unknowns given in \autoref{lem:decode}. \autoref{obs:evalsum-from-syndrome} shows that this system of linear equations can be constructed from the $(2r+1)$-syndrome in $\mathrm{poly}(\binom{m}{\leq r})$ time. By \autoref{lem:dual-poly} and \autoref{lem:decode}, a solution to this system exists if and only if there is $i \in [t]$ so that $\mathbf{v}=\mathbf{u}_i$. The bound on the running time follows from the description of the algorithm. Furthermore, all $2^m=n$ linear systems can be solved in parallel, and each linear system can be solved with an $\mathsf{NC}^2$ circuit (see, e.g., \cite{MahajanV97}). 
\end{proof} Observe that the proof of correctness for \autoref{alg:decoding} is valid, for any value of $r$, whenever the set of error locations $\inbrace{\mathbf{u}_1,\ldots,\mathbf{u}_t}$ satisfies the property that $\inbrace{\mathbf{u}_1^r,\ldots,\mathbf{u}_t^r}$ are linearly independent. Therefore, we would like to apply \autoref{thm:decode-algo} in settings where $\inbrace{\mathbf{u}_1^r,\ldots,\mathbf{u}_t^r}$ are linearly independent with high probability. For the constant rate regime, Kumar and Pfister \cite{KumarPfister15} and Kudekar, Mondelli, \c{S}a\c{s}o\u{g}lu and Urbanke \cite{KudekarMSU15} proved that $RM(m, m-r-1)$ achieves capacity for $r=m/2 \pm O(\sqrt{m})$. \begin{theorem}[\cite{KumarPfister15}, Theorem 23] \label{thm:capacity-BEC-const-rate} Let $r \le m$ be integers such that $r=m/2\pm O(\sqrt{m})$. Then, for $t=(1-o(1))\binom{m}{\le r}$, with probability $1-o(1)$, for a set of vectors $\inbrace{\mathbf{u}_1,\ldots,\mathbf{u}_t} \subseteq \mathbb{F}_2^m$ chosen uniformly at random, it holds that $\inbrace{\mathbf{u}_1^{ r},\ldots,\mathbf{u}_t^{ r}}$ are linearly independent over $\mathbb{F}_2^{\binom{m}{\le r}}$. \end{theorem} Letting $r=m/2-o(\sqrt{m})$ and looking at the code $RM(m, m-2r-2) = RM(m, o(\sqrt{m}))$ so that $\binom{m}{\le r} = (1/2 - o(1))2^m$, we get the following statement, stated earlier as \autoref{thm:main-low-degree}. \begin{corollary} There exists a (deterministic) algorithm that is able to correct $t = (1/2 - o(1))2^m$ random errors in $RM(m, o(\sqrt{m}))$ with probability $1-o(1)$. The algorithm runs in time $2^m \cdot \inparen{\binom{m}{m/2-o(\sqrt{m})}}^3 \le n^4$. \end{corollary} Alternatively, we can pick $r=m/2-O(\sqrt{m})$ and correct $c \cdot 2^m$ random errors in the code $RM(m, O(\sqrt{m}))$, where $c$ is some positive constant that goes to zero as the constant hidden under the big $O$ increases.
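To make the decoding procedure concrete, here is a toy end-to-end run of the algorithm in Python, with $m=4$ and $r=1$, so the decoder sees only the degree-$3$ syndrome. The helper names and the specific error set are our own illustration; the linear system in the inner loop is the one from the Main Lemma, and consistency is checked by Gaussian elimination over $\mathbb{F}_2$.

```python
import itertools

def monomials(m, r):
    # Monomials of degree <= r, as frozensets of variable indices.
    return [frozenset(S) for k in range(r + 1)
            for S in itertools.combinations(range(m), k)]

def mono_eval(S, u):
    # Evaluate prod_{i in S} x_i at u in F_2^m (empty product = 1).
    return int(all(u[i] for i in S))

def solvable_gf2(A, b):
    # Is A x = b consistent over GF(2)?  Plain Gaussian elimination.
    rows = [row[:] + [bi] for row, bi in zip(A, b)]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b2 for a, b2 in zip(rows[i], rows[r])]
        r += 1
    return not any(row[-1] for row in rows[r:])

m, r = 4, 1
U = {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 1)}  # unknown errors; u_j^r independent

# The decoder's only input: the (2r+1)-syndrome, indexed by monomials of deg <= 3.
alpha = {S: sum(mono_eval(S, u) for u in U) % 2 for S in monomials(m, 2 * r + 1)}

low = monomials(m, r)  # unknowns: the coefficients c_M of f, deg(f) <= r

def consistent(v):
    A, b = [], []
    A.append([alpha[M] for M in low]); b.append(1)         # sum_i f(u_i) = 1
    A.append([mono_eval(M, v) for M in low]); b.append(1)  # f(v) = 1
    for M in low:
        # sum_i (f*M)(u_i) = M(v)
        A.append([alpha[Mp | M] for Mp in low]); b.append(mono_eval(M, v))
        # sum_i (f*M*(x_l + v_l + 1))(u_i) = M(v), for every l
        for l in range(m):
            A.append([(alpha[Mp | M | {l}] + (v[l] ^ 1) * alpha[Mp | M]) % 2
                      for Mp in low])
            b.append(mono_eval(M, v))
    return solvable_gf2(A, b)

found = {v for v in itertools.product((0, 1), repeat=m) if consistent(v)}
print(found == U)  # True: exactly the error locations are flagged
```

Note that the multilinear product of two monomials is just the union of their variable sets, which is why the syndrome entries $\alpha_{M' \cup M}$ and $\alpha_{M' \cup M \cup \{\ell\}}$ suffice to express every constraint.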
For the high-rate regime, recall the following capacity achieving result proved in \cite{AbbeSW15}: \begin{theorem}[\cite{AbbeSW15}, Theorem 4.5] \label{thm:random-linear-independent} \sloppy Let $\epsilon>0$, $r \le m$ be two positive integers and $t < \binom{m-\log(\binom{m}{\le r})-\log(1/\epsilon)}{\le r}$. Then, with probability at least $1-\epsilon$, for a set of vectors $\inbrace{\mathbf{u}_1,\ldots,\mathbf{u}_t} \subseteq \mathbb{F}_2^m$ chosen uniformly at random, it holds that $\inbrace{\mathbf{u}_1^{ r},\ldots,\mathbf{u}_t^{ r}}$ are linearly independent over $\mathbb{F}_2^{\binom{m}{\le r}}$. \end{theorem} Using \autoref{thm:random-linear-independent}, we apply \autoref{thm:decode-algo} to obtain the following corollary, which was stated informally as \autoref{thm:main:informal}. \begin{corollary} Let $\epsilon>0$, and $r \le m$ be two positive integers. Then there exists a (deterministic) algorithm that is able to correct $t = \left\lfloor \binom{m-\log(\binom{m}{\le r})-\log(1/\epsilon)}{\le r} \right \rfloor-1$ random errors in $RM(m, m-(2r+2))$ with probability at least $1-\epsilon$. The algorithm runs in time $2^m \cdot \mathrm{poly}\inparen{\binom{m}{\le r}}$. \end{corollary} \noindent If $r=o(\sqrt{m/\log m})$, the bound on $t$ is $(1-o(1))\binom{m}{\le r}$, as promised. More generally, a positive answer to \autoref{Q:RM:BEC} is equivalent to $\inbrace{\mathbf{u}_1^r, \ldots, \mathbf{u}_t^r}$ for $t=(1-o(1))\binom{m}{\le r}$ being linearly independent with probability $1-o(1)$ (see Corollary 2.9 in \cite{AbbeSW15}), and thus we also obtain the following corollary, which was stated informally as \autoref{thm:main:conj:informal}. \begin{corollary} Let $r \le m$ be two positive integers. Suppose that $RM(m, m-r-1)$ achieves capacity for the BEC. Then there exists a (deterministic) algorithm that is able to correct $(1-o(1))\binom{m}{\le r}$ random errors in $RM(m, m-(2r+2))$ with probability $1-o(1)$. 
The algorithm runs in time $2^m \cdot \mathrm{poly}\inparen{\binom{m}{\le r}}$. \end{corollary} We note that for all values of $r$, $2^m \cdot \mathrm{poly}\inparen{\binom{m}{\le r}}$ is polynomial in the block length $n=2^m$, and when $r=o(m)$ this is equal to $n^{1+o(1)}$. \section{Abstractions and Generalizations}\label{sec:abstraction} \subsection{An abstract view of the decoding algorithm} In this section we present a more abstract view of \autoref{alg:decoding}, in the spirit of the works by Pellikaan, Duursma and K\"{o}tter (\cite{Pellikaan92, DuursmaK94}) which abstract the Berlekamp-Welch algorithm (see also the exposition in \cite{SudanLectureNotes}). Stated in this way, it is also clear that the algorithm works over larger alphabets as well, so we no longer limit ourselves to binary alphabets. As shown in \cite{KumarPfister15}, Reed-Muller codes over $\mathbb{F}_q$ (sometimes referred to as {\em Generalized Reed-Muller codes}) also achieve capacity in the constant rate regime. We begin by giving the definition of a (pointwise) product of two vectors, and of two codes. \begin{defin} \label{def:star} Let $\mathbf{u},\mathbf{v} \in \mathbb{F}_q^n$. Denote by $\mathbf{u} * \mathbf{v} \in \mathbb{F}_q^n$ the vector $(\mathbf{u}_1\mathbf{v}_1, \ldots, \mathbf{u}_n \mathbf{v}_n)$. For $A,B \subseteq \mathbb{F}_q^n$ we similarly define $A*B = \setdef{\mathbf{u} * \mathbf{v}}{\mathbf{u} \in A, \mathbf{v} \in B}$. \end{defin} Following in the footsteps of \autoref{alg:decoding}, we wish to decode, in a code $C$, error patterns which are correctable from erasures in a related code $N$, through the use of an {\em error-locating code} $E$. Under some assumptions on $C, N$ and $E$, we can use a similar proof in order to do this. \begin{theorem} \label{thm:abstraction} Let $E, C, N \subseteq \mathbb{F}_q^n$ be codes with the following properties.
\begin{enumerate} \item \label{item:inclusion} $E*C \subseteq N$ \item \label{item:correction} For any pattern $\mathbbm{1}_{U}$ that is correctable from erasures in $N$, and for any coordinate $i \not \in U$ there exists a codeword $\mathbf{e} \in E$ such that $\mathbf{e}_j = 0$ for all $j \in U$ and $\mathbf{e}_i=1$. \end{enumerate} Then there exists an efficient algorithm that corrects in $C$ any pattern $\mathbbm{1}_U$, which is correctable from erasures in $N$.\end{theorem} To put things in perspective, earlier we set $C=RM(m, m-2r-2)$, $N=RM(m, m-r-1)$ and $E=RM(m,r+1)$. It is immediate to observe that \autoref{item:inclusion} holds in this case, and \autoref{item:correction} is guaranteed by \autoref{lem:dual-poly}: Indeed, consider the error pattern $U=\{\mathbf{u}_1, \ldots, \mathbf{u}_t\}$ and the dual polynomials $\{f_i\}_{i=1}^t$, and let $\mathbf{v} \not\in U$ be any other coordinate of the code. If there exists $j \in [t]$ such that $f_j (\mathbf{v})=1$, we can pick the codeword $g=f_j \cdot (1+x_\ell + \mathbf{v}_\ell)$, where $\ell$ is some coordinate such that $\mathbf{v}_\ell \neq (\mathbf{u}_j)_\ell$. $g$ has degree at most $r+1$ and so it is a codeword in $E$, and it can be directly verified that it satisfies the conditions of \autoref{item:correction}. If $f_j(\mathbf{v})=0$ for all $j$, we can pick $g=1-\sum_{i=1}^t f_i$. It is also worth pointing out the differences between our approach and the abstract Berlekamp-Welch decoder of Duursma and K\"{o}tter: They similarly set up codes $E, C$ and $N$ such that $E*C \subseteq N$. However, instead of \autoref{item:correction}, they require that for any $\mathbf{e} \in E$ and $\mathbf{c} \in C$, if $\mathbf{e} * \mathbf{c} = 0$ then $\mathbf{e}=0$ or $\mathbf{c}=0$ (or similar requirements regarding the distances of $E$ and $C$ that guarantee this property). This property, as well as the distance properties, do not hold in the case of Reed-Muller codes. 
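The inclusion $E*C \subseteq N$ in the Reed-Muller setting can be verified mechanically for small parameters. The following sketch (ours, with the toy parameters $m=4$, $r=0$) does so: since the pointwise product is bilinear, it suffices to check that every product of a generator of $E$ with a generator of $C$ lies in the span of the generators of $N$.

```python
# Toy check (ours) of E * C inside N for E = RM(4, 1), C = RM(4, 2), N = RM(4, 3).
import itertools

def points(m):
    return list(itertools.product([0, 1], repeat=m))

def monomial_eval(S, m):
    # evaluation vector over F_2^m of the monomial prod_{i in S} x_i
    return [int(all(p[i] for i in S)) for p in points(m)]

def rm_generators(m, d):
    # evaluations of all monomials of degree <= d: a generating set of RM(m, d)
    return [monomial_eval(S, m)
            for deg in range(d + 1)
            for S in itertools.combinations(range(m), deg)]

def star(u, v):
    # the pointwise product of Definition def:star
    return [a * b for a, b in zip(u, v)]

def rank_gf2(rows):
    # Gaussian elimination over F_2 on bitmask-encoded rows
    pivot, rank = {}, 0
    for row in rows:
        x = sum(b << i for i, b in enumerate(row))
        while x:
            h = x.bit_length() - 1
            if h in pivot:
                x ^= pivot[h]
            else:
                pivot[h] = x
                rank += 1
                break
    return rank

m, r = 4, 0
E = rm_generators(m, r + 1)          # RM(4, 1)
C = rm_generators(m, m - 2 * r - 2)  # RM(4, 2)
N = rm_generators(m, m - r - 1)      # RM(4, 3)
base = rank_gf2(N)
# adding any product e * c must not increase the rank, i.e. e * c is in span(N)
assert all(rank_gf2(N + [star(e, c)]) == base for e in E for c in C)
print("E * C is contained in N")
```

This is of course just the degree count, $\deg(e\cdot c)\le (r+1)+(m-2r-2)=m-r-1$, carried out in coordinates.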
Turning back to the proof of \autoref{thm:abstraction}, the algorithm and the proof of correctness turn out to be very short to describe at this level of generality. Given a word $\mathbf{y} \in \mathbb{F}_q^n$, the algorithm would solve the linear system $\mathbf{a} * \mathbf{y} = \mathbf{b}$, in unknowns $\mathbf{a} \in E$ and $\mathbf{b} \in N$. Under the hypothesis of the theorem, we show that common zeros of the possible solutions for $\mathbf{a}$ determine exactly the error locations. Once the locations of the errors are identified, correcting them is easy: we can replace the error locations by the symbol '?' and use an algorithm which corrects erasures (this can always be done efficiently, when unique decoding is possible, as this merely amounts to solving a system of linear equations). The algorithm is given in \autoref{alg:abstract}. \begin{algorithm} \caption{: Abstract Decoding Algorithm} \label{alg:abstract} \begin{algorithmic}[1] \Require{received word $\mathbf{y} \in \mathbb{F}_q^n$ such that $\mathbf{y} = \mathbf{c} + \mathbf{e}$, with $\mathbf{c} \in C$ and $\mathbf{e}$ is supported on a set $U$} \State{Solve for $\mathbf{a} \in E, \mathbf{b} \in N$, the linear system $\mathbf{a} * \mathbf{y} = \mathbf{b}$.} \State{Let $\inbrace{\mathbf{a}_1, \ldots, \mathbf{a}_k}$ be a basis for the solution space of $\mathbf{a}$, and let $\mathcal{E}$ denote the common zeros of $\setdef{\mathbf{a}_i}{i \in [k]}$.} \State{For every $j \in \mathcal{E}$, replace $\mathbf{y}_j$ with '?', to get a new word $\mathbf{y}'$.} \State{Correct $\mathbf{y}'$ from erasures in $C$.} \end{algorithmic} \end{algorithm} Note that in \autoref{thm:abstraction} we assume that the error pattern $U$ is correctable from erasures in $N$, whereas \autoref{alg:abstract} first computes a set of error locations $\mathcal{E}$ and then corrects $\mathbf{y}'$ from erasures in $C$. Thus, the proof of \autoref{thm:abstraction} can be divided into two steps.
The first, and the main one, will be to show that $\mathcal{E}=U$. The second, which is merely an immediate observation, will be to show that $U$ is also correctable from erasures in $C$. We begin with the second part: \begin{lemma} \label{lem:correct-from-erasures} Assume the setup of \autoref{thm:abstraction}, and let $U$ be any pattern which is correctable from erasures in $N$. Then $U$ is also correctable from erasures in $C$. \end{lemma} \begin{proof} We may assume that $U \neq \emptyset$, as otherwise the statement is trivial. Suppose on the contrary that $U$ is not correctable from erasures in $C$, that is, there exists a non-zero codeword $\mathbf{c} \in C$ supported on $U$. For any $\mathbf{a} \in E$, we have that $\mathbf{a} * \mathbf{c}$ is a codeword of $N$ which is supported on a subset of $U$. In order to reach a contradiction, we want to pick $\mathbf{a} \in E$ so that $\mathbf{a} * \mathbf{c}$ is a non-zero codeword of $N$, which contradicts the assumption that $U$ is correctable from erasures in $N$. Pick $i \in U$ so that $\mathbf{c}_i \neq 0$. Observe that if $U$ is correctable from erasures in $N$ then so is $U \setminus \inbrace{i}$. By \autoref{item:correction} in \autoref{thm:abstraction} with respect to the set $U \setminus \inbrace{i}$ there exists $\mathbf{a} \in E$ with $\mathbf{a}_i=1$. Thus, in particular $\mathbf{a} * \mathbf{c}$ is non-zero. \end{proof} We now prove the main part of \autoref{thm:abstraction}, that is, that under the assumptions stated in the theorem, \autoref{alg:abstract} correctly decodes (in $C$) any error pattern that is correctable from erasures in $N$. \begin{proof}[Proof of \autoref{thm:abstraction}] Write $\mathbf{y} = \mathbf{c} + \mathbf{e}$, so that $\mathbf{c} \in C$ is the transmitted codeword and $\mathbf{e}$ is supported on the set of error locations $U$.
As noted above, by \autoref{lem:correct-from-erasures} it is enough to show that under the assumptions of the theorem (in particular, that $U$ is correctable from erasures in $N$), the set of error locations $\mathcal{E}$ computed by \autoref{alg:abstract} equals $U$. In the following two claims, we argue that any solution $\mathbf{a}$ for the system vanishes on the error points, and then that for every other index $i$, there exists a solution whose $i$-th entry is non-zero (and so there must be a basis element for the solution space whose $i$-th entry is non-zero). The first claim states that every solution $\mathbf{a} \in E$ to the equation $\mathbf{a} * \mathbf{y} = \mathbf{b}$ vanishes on $U$, the support of $\mathbf{e}$. In the pointwise product notation, this is equivalent to showing that $\mathbf{a} * \mathbf{e} = 0$. \begin{addmargin}[2em]{5em} \begin{subclaim} For every $\mathbf{a} \in E, \mathbf{b} \in N$ such that $\mathbf{a} * \mathbf{y} = \mathbf{b}$, it holds that $\mathbf{a} * \mathbf{e} = 0$. \end{subclaim} \begin{myproof}{Subclaim} Since $\mathbf{a} * \mathbf{y} = \mathbf{b} \in N$ (by the assumption) and $\mathbf{a} * \mathbf{c} \in N$ (by \autoref{item:inclusion}), we get that $\mathbf{a}*\mathbf{e} = \mathbf{a} * \mathbf{y} - \mathbf{a} * \mathbf{c}$ is also a codeword in $N$. Furthermore, $\mathbf{a}*\mathbf{e}$ is also supported on $U$, and since $U$ is an erasure-correctable pattern in $N$, the only codeword that is supported on $U$ is the zero codeword. \end{myproof} \end{addmargin} \noindent To finish the proof, we show that for any $i \not\in U$, there is a solution $\mathbf{a}$ to the system of linear equations with $\mathbf{a}_i = 1$. \begin{addmargin}[2em]{5em} \begin{subclaim} For every $i \not \in U$ there exists $\mathbf{a} \in E, \mathbf{b} \in N$ such that $\mathbf{a}$ is 0 on $U$, $\mathbf{a}_i=1$ and $\mathbf{a}*\mathbf{y}=\mathbf{b}$.
\end{subclaim} \begin{myproof}{Subclaim} By \autoref{item:correction}, since $U$ is correctable from erasures in $N$, for every $i \not \in U$ we can pick $\mathbf{a} \in E$ such that $\mathbf{a}$ is 0 on $U$ and $\mathbf{a}_i = 1$. Set $\mathbf{b} = \mathbf{a} * \mathbf{y}$. It remains to be shown that $\mathbf{b}$ is a codeword of $N$. This follows from the fact that \[ \mathbf{b} = \mathbf{a} * \mathbf{c} + \mathbf{a} * \mathbf{e} = \mathbf{a} * \mathbf{c}, \] where the second equality follows from the fact that $\mathbf{a}$ is zero on $U$ (the support of $\mathbf{e}$). Finally, $\mathbf{a} * \mathbf{c}$ is a codeword of $N$ by \autoref{item:inclusion}. \end{myproof} \end{addmargin} These two claims complete the proof of the theorem. \end{proof} \subsection{Decoding of Linear Codes over $\mathbb{F}_2$} \label{sec:general} In \cite{AbbeSW15}, it is observed that their results for Reed-Muller codes imply that for {\em every} linear code $N$, every pattern which is correctable from erasures in $N$ is correctable from errors in what they call the ``degree-three tensoring'' of $N$. One can in fact use our \autoref{alg:decoding} almost verbatim to obtain an efficient version of this statement. However, here we remark that this is nothing but a special case of \autoref{thm:abstraction} with an appropriate setting of the codes $E,C,N$. We begin by briefly describing their definitions and their argument. The basic tool used by \cite{AbbeSW15} is embedding any parity check matrix in the matrix $E(m,1)$ for an appropriate choice of $m$. Let $N$ be any linear code of dimension $k$ over $\mathbb{F}_2$ and $H$ be its parity check matrix. For convenience, we first extend $N$ by adding a parity bit. This increases the block length by 1, does not decrease the distance and preserves the dimension. 
A parity check matrix for the extended code can be obtained from $H$ by constructing the matrix \[ H_0= \left( \begin{array}{cc} 1 & 1 \cdots 1 \\ 0 & \raisebox{-15pt}{{\huge\mbox{{$H$}}}} \\[-3ex] \vdots & \\ 0 & \end{array} \right). \] The main observation now is that $E(m,1)$ is an $(m+1)\times 2^m$ matrix that contains all vectors of the form $(1,\mathbf{v})$ for $\mathbf{v} \in \mathbb{F}_2^m$, so if we set $m=n-k$ to be the number of rows of $H$, we can pick a subset $S$ of the columns of $E(m,1)$ that correspond to the columns that appear in $H_0$. \cite{AbbeSW15} then define the degree-three tensoring of $N$, which is a code $C$ whose parity check matrix is $H_0^{\otimes 3}$: this is an $\binom{m}{\le 3} \times n$ matrix with rows indexed by tuples $i_1 < i_2 < i_3$, with the corresponding row being the pointwise product (as in \autoref{def:star}) of rows $i_1,i_2,i_3$ of $H_0$. One can then verify that \autoref{alg:decoding} can be used in order to correct (in $C$) any error pattern which is correctable from erasures in $N$, by using the algorithm with $r=1$ and having the error location guesses run only over the columns in $S$. A closer look reveals that this construction is in fact a special case of \autoref{thm:abstraction}. Given any linear binary code $N$ with parity check matrix $H$, the main observation of \cite{AbbeSW15} can be interpreted as saying that when we add a parity bit to $N$, we can embed $N$ in a puncturing of $RM(m, m-2)$ (whose parity check matrix is $E(m,1)$). We state it in the following claim: \begin{claim} \label{cl:embed} Let $N'$ denote the subcode of $RM(m, m-2)$ of all words that are 0 outside $S$. Then $N$ is precisely the restriction of $N'$ to the $S$ coordinates. \end{claim} \begin{proof} Let $\mathbf{b} \in N$. Then $H_0 \mathbf{b}=0$, i.e.\ the columns of $H_0$ indexed by the non-zero elements in $\mathbf{b}$ add up to 0.
Let $\mathbf{b}' \in \mathbb{F}_2^{2^m}$ denote the extension of $\mathbf{b}$ into a vector of length $2^m$ obtained by filling 0's in every coordinate not in $S$. Then $E(m,1)\mathbf{b}'=0$, since the same columns that appeared in $H_0$ appear in $E(m,1)$. This implies that $\mathbf{b}' \in N'$. Similarly, for every $\mathbf{b}' \in N'$, we can define $\mathbf{b}$ to be its restriction to $S$, and then $H_0\mathbf{b}=0$, i.e.\ $\mathbf{b} \in N$. \end{proof} The degree-three tensoring of $N$, which we denote by $C$, can then be similarly embedded in a puncturing of $RM(m, m-4)$, where again, only the coordinates in $S$ remain, and similarly $C$ can be seen to be the restriction to $S$ of the subcode $C'$ of $RM(m, m-4)$ that contains the words that are 0 outside $S$. Finally, we define the error locating code $E$ to be the restriction of $RM(m,2)$ to the coordinates of $S$. We now show that the conditions of \autoref{thm:abstraction} are satisfied in this case. We begin with \autoref{item:correction}. If $U$ is a correctable pattern in $N$, it means that the columns indexed by $U$ in $H_0$ are linearly independent. It follows that they are also linearly independent as columns in $E(m,1)$. Hence, using the same arguments as before we can find, for any coordinate $\mathbf{v} \not \in U$, a degree 2 polynomial $g$ such that $g(\mathbf{v})=1$ and $g$ restricted to $U$ is 0. Restricting the evaluations of $g$ to the subset of coordinates $S$, we get a codeword $\mathbf{e} \in E$ with the required property. As for \autoref{item:inclusion}: We first argue that $RM(m,2)*C' \subseteq N'$, since the degrees match and the property of vanishing outside $S$ is preserved under multiplication. Projecting back to the coordinates in $S$, we get that $E*C \subseteq N$. \section*{Acknowledgement} We would like to thank Avi Wigderson, Emmanuel Abbe and Ilya Dumer for helpful discussions and for commenting on an earlier version of the paper.
We thank Venkatesan Guruswami and anonymous reviewers for pointing out the abstraction of \autoref{alg:decoding} given in \autoref{sec:abstraction}. \bibliographystyle{customurlbst/alphaurlpp}
https://arxiv.org/abs/math/0504195
The Eulerian Distribution on Involutions is Indeed Unimodal
Let I_{n,k} (resp. J_{n,k}) be the number of involutions (resp. fixed-point free involutions) of {1,...,n} with k descents. Motivated by Brenti's conjecture which states that the sequence I_{n,0}, I_{n,1},..., I_{n,n-1} is log-concave, we prove that the two sequences I_{n,k} and J_{2n,k} are unimodal in k, for all n. Furthermore, we conjecture that there are nonnegative integers a_{n,k} such that $$ \sum_{k=0}^{n-1}I_{n,k}t^k=\sum_{k=0}^{\lfloor (n-1)/2\rfloor}a_{n,k}t^{k}(1+t)^{n-2k-1}. $$ This statement is stronger than the unimodality of I_{n,k} but is also interesting in its own right.
\section{Introduction} A sequence $a_0,a_1,\ldots,a_n$ of real numbers is said to be {\it unimodal} if for some $0\leq j\leq n$ we have $a_0\leq a_1\leq\cdots\leq a_j\geq a_{j+1}\geq\cdots \geq a_n$, and is said to be {\it log-concave} if $a_i^2\geq a_{i-1}a_{i+1}$ for all $1\leq i\leq n-1$. Clearly a log-concave sequence of \emph{positive} terms is unimodal. The reader is referred to Stanley's survey~\cite{Stanley89} for the surprisingly rich variety of methods to show that a sequence is log-concave or unimodal. As noticed by Brenti~\cite{Brenti}, even though log-concavity and unimodality have one-line definitions, to prove the unimodality or log-concavity of a sequence can sometimes be a very difficult task requiring the use of intricate combinatorial constructions or of refined mathematical tools. Let $\mathfrak{S}_n$ be the set of all permutations of $[n]:=\{1,\ldots,n\}$. We say that a permutation $\pi=a_1 a_2\cdots a_n\in\mathfrak{S}_n$ has a {\it descent} at $i$ ($1\leq i\leq n-1$) if $a_i>a_{i+1}$. The number of descents of $\pi$ is called its descent number and is denoted by ${\rm d}(\pi)$. A statistic on $\mathfrak{S}_n$ is said to be \emph{Eulerian}, if it is equidistributed with the descent number statistic. Recall that the polynomial $$ A_n(t)=\sum_{\pi\in \mathfrak{S}_n} t^{1+{\rm d}(\pi)}=\sum_{k=1}^{n}A(n,k)t^k $$ is called an {\it Eulerian polynomial}. It is well-known that the {\it Eulerian numbers} $A(n,k)$ ($1\leq k\leq n$) form a unimodal sequence, of which several proofs have been published: an analytical one, showing that the polynomial $A_n(t)$ has only real zeros \cite[p. 294]{Co}; one by induction, based on the recurrence relation of $A(n,k)$ (see \cite{Kurtz}); and combinatorial ones (see \cite{Gasharov, Stembridge}). Let $\mathcal{I}_n$ be the set of all involutions in $\mathfrak{S}_n$ and $\mathcal{J}_{n}$ the set of all fixed-point free involutions in $\mathfrak{S}_{n}$.
Define \begin{align*} I_n(t)&=\sum_{\pi\in \mathcal{I}_n} t^{{\rm d}(\pi)}=\sum_{k=0}^{n-1}I_{n,k}t^k,\\[5pt] J_{n}(t)&=\sum_{\pi\in \mathcal{J}_{n}} t^{{\rm d}(\pi)}=\sum_{k=0}^{n-1}J_{n,k}t^k. \end{align*} The first values of these polynomials are given in Table~1. \begin{table}[h] \caption{The polynomials $I_{n}(t)$ and $J_{n}(t)$ for $n\leq 6$.\label{table:in6}} \begin{center} {\footnotesize \begin{tabular}{|l|l|l|} \hline $n$ & $I_n(t)$ & $J_{n}(t)$ \\\hline 1 & 1 & 0 \\\hline 2 & $1+t$ & $t$ \\\hline 3 & $1+2t+t^2$ & 0 \\\hline 4 & $1+4t+4t^2+t^3$ & $t+t^2+t^3$ \\\hline 5 & $1+6t+12t^2+6t^3+t^4$ & 0 \\\hline 6 & $1+9t+28t^2+28t^3+9t^4+t^5$ & $t+3t^2+7t^3+3t^4+t^5$ \\\hline \end{tabular} } \end{center} \end{table} As one may notice from Table~\ref{table:in6}, the coefficients of $I_n(t)$ and $J_{n}(t)$ are \emph{symmetric} and \emph{unimodal} for $1\leq n\leq 6$. In fact, the symmetries were conjectured by Dumont and first proved by Strehl~\cite{Strehl}. Recently, Brenti (see \cite{Dukes}) conjectured that the coefficients of the polynomial $I_n(t)$ are \emph{log-concave} and Dukes~\cite{Dukes} has obtained some partial results on the unimodality of the coefficients of $I_n(t)$ and $J_{2n}(t)$. Note that, in contrast to the Eulerian polynomials $A_n(t)$, the polynomials $I_n(t)$ and $J_{2n}(t)$ may have \emph{non-real zeros}. In this paper we will prove that for $n\geq 1$, the two sequences $I_{n,0}, I_{n,1},\ldots, I_{n,n-1}$ and $J_{2n,1}, J_{2n,2}, \ldots, J_{2n,2n-1}$ are unimodal.
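As a quick sanity check, the entries of Table~\ref{table:in6} can be reproduced by brute force. The following Python sketch (ours, not part of the paper) enumerates the involutions of $\mathfrak{S}_n$, tallies descents, and recovers the coefficient lists of $I_n(t)$ and $J_n(t)$.

```python
# Brute-force check (ours) of Table 1 via enumeration of involutions.
from itertools import permutations

def descents(pi):
    return sum(pi[i] > pi[i + 1] for i in range(len(pi) - 1))

def poly(n, fixed_point_free=False):
    coeffs = [0] * max(n, 1)
    for pi in permutations(range(1, n + 1)):
        # involution: applying pi twice gives the identity
        if all(pi[pi[i] - 1] == i + 1 for i in range(n)):
            if fixed_point_free and any(pi[i] == i + 1 for i in range(n)):
                continue
            coeffs[descents(pi)] += 1
    return coeffs

print(poly(6))                         # [1, 9, 28, 28, 9, 1]
print(poly(6, fixed_point_free=True))  # [0, 1, 3, 7, 3, 1]
```

Both outputs match the row $n=6$ of Table~\ref{table:in6}, and the symmetry of the coefficient lists is already visible in these small cases.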
Our starting point is the known generating functions of the polynomials $I_n(t)$ and $J_{n}(t)$: \begin{align} \sum_{n=0}^{\infty}I_n(t)\frac{u^n}{(1-t)^{n+1}} &=\sum_{r=0}^{\infty}\frac{t^r}{(1-u)^{r+1}(1-u^2)^{r(r+1)/2}}, \label{eq:indt} \\[5pt] \sum_{n=0}^{\infty}J_n(t)\frac{u^n}{(1-t)^{n+1}} &=\sum_{r=0}^{\infty}\frac{t^r}{(1-u^2)^{r(r+1)/2}}, \label{eq:jndt} \end{align} which have been obtained by D\'esarm\'enien and Foata~\cite{DF} and Gessel and Reutenauer~\cite{GR} using different methods. We first derive linear recurrence formulas for $I_{n,k}$ and $J_{2n,k}$ in the next section and then prove the unimodality by induction in Section~3. We end this paper with further conjectures beyond the unimodality of the two sequences $I_{n,k}$ and $J_{2n,k}$. \section{Linear recurrence formulas for $I_{n,k}$ and $J_{2n,k}$} Since the recurrence formula for the numbers $I_{n,k}$ is a little more complicated than that for $J_{2n,k}$, we shall first prove it for the latter. \begin{thm}\label{lem:recj} For $n\geq 2$ and $k\geq 0$, the numbers $J_{2n,k}$ satisfy the following recurrence formula: \begin{align} 2nJ_{2n,k} &=[k(k+1)+2n-2]J_{2n-2,k}+2[(k-1)(2n-k-1)+1]J_{2n-2,k-1}\nonumber\\[5pt] &\quad +[(2n-k)(2n-k+1)+2n-2]J_{2n-2,k-2}. \label{eq:recj} \end{align} Here and in what follows $J_{2n,k}=0$ if $k<0$. \end{thm} \noindent {\it Proof.} Equating the coefficients of $u^{2n}$ in \eqref{eq:jndt}, we obtain \begin{equation}\label{fnt-jnt} \frac{J_{2n}(t)}{(1-t)^{2n+1}}=\sum_{r=0}^{\infty}{r(r+1)/2+n-1\choose n}t^r.
\end{equation} Since $$ {r(r+1)/2+n-1\choose n}=\frac{r(r-1)/2+r+n-1}{n}{r(r+1)/2+n-2\choose n-1}, $$ it follows from \eqref{fnt-jnt} that $$ \frac{J_{2n}(t)}{(1-t)^{2n+1}} =\frac{t^2}{2n}\left(\frac{J_{2n-2}(t)}{(1-t)^{2n-1}}\right)'' +\frac{t}{n}\left(\frac{J_{2n-2}(t)}{(1-t)^{2n-1}}\right)' +\frac{n-1}{n}\frac{J_{2n-2}(t)}{(1-t)^{2n-1}}, $$ or \begin{align} J_{2n}(t)&=\frac{t^2(1-t)^2}{2n}J_{2n-2}''(t) +\left[\frac{(2n-1)t^2(1-t)}{n}+\frac{t(1-t)^2}{n}\right]J_{2n-2}'(t) \nonumber\\[5pt] &\quad +\left[(2n-1)t^2+\frac{(2n-1)(1-t)t}{n} +\frac{(n-1)(1-t)^2}{n}\right]J_{2n-2}(t) \nonumber\\[5pt] &=\frac{t^4-2t^3+t^2}{2n}J_{2n-2}''(t) +\left[\frac{(2-2n)t^3}{n}+\frac{(2n-3)t^2}{n}+\frac{t}{n}\right]J_{2n-2}'(t) \nonumber\\[5pt] &\quad +\left[(2n-2)t^2+\frac{t}{n}+\frac{n-1}{n}\right]J_{2n-2}(t). \label{eq:j2n-der} \end{align} Equating the coefficients of $t^{k}$ in \eqref{eq:j2n-der} yields \begin{align*} J_{2n,k}&=\frac{(k-2)(k-3)}{2n}J_{2n-2,k-2} -\frac{(k-1)(k-2)}{n}J_{2n-2,k-1} +\frac{k(k-1)}{2n}J_{2n-2,k} \\[5pt] &\quad +\frac{(2-2n)(k-2)}{n}J_{2n-2,k-2}+\frac{(2n-3)(k-1)}{n}J_{2n-2,k-1} +\frac{k}{n}J_{2n-2,k} \\[5pt] &\quad +(2n-2)J_{2n-2,k-2}+\frac{1}{n}J_{2n-2,k-1} +\frac{n-1}{n}J_{2n-2,k}. \end{align*} After simplification, we obtain \eqref{eq:recj}. \qed \begin{thm}For $n\geq 3$ and $k\geq 0$, the numbers $I_{n,k}$ satisfy the following recurrence formula: \begin{align} nI_{n,k} &=(k+1)I_{n-1,k}+(n-k)I_{n-1,k-1}+[(k+1)^2+n-2]I_{n-2,k} \nonumber\\[5pt] &\quad +[2k(n-k-1)-n+3]I_{n-2,k-1}+[(n-k)^2+n-2]I_{n-2,k-2}. \label{eq:reci} \end{align} Here and in what follows $I_{n,k}=0$ if $k<0$. \end{thm} \noindent {\it Proof.} Extracting the coefficients of $u^{n}$ in \eqref{eq:indt}, we obtain \begin{equation}\label{eq:int-g} \frac{I_{n}(t)}{(1-t)^{n+1}} =\sum_{r=0}^{\infty}t^r\sum_{k=0}^{\lfloor n/2\rfloor} {r(r+1)/2+k-1\choose k}{r+n-2k\choose n-2k}.
\end{equation} Let $$ T(n,k):={x+k-1\choose k}{y-2k\choose n-2k}, $$ and $$ s(n):=\sum_{k=0}^{\lfloor n/2 \rfloor}T(n,k). $$ Applying Zeilberger's algorithm, the Maple package {\tt ZeilbergerRecurrence(T,n,k,s,0..n)} gives \begin{equation}\label{eq:zeil} (2x+y+n+1)s(n)+(y+1)s(n+1)-(n+2)s(n+2)= 0, \end{equation} i.e., $$ s(n)=\frac{y+1}{n}s(n-1)+\frac{2x+y+n-1}{n}s(n-2). $$ When $x=r(r+1)/2$ and $y=r$, we get \begin{equation}\label{g(n)} s(n)=\frac{r+1}{n}s(n-1)+\frac{r(r-1)+3r+n-1}{n}s(n-2). \end{equation} Now, from \eqref{eq:int-g} and \eqref{g(n)} it follows that \begin{align*} \frac{nI_{n}(t)}{(1-t)^{n+1}} &=t\left(\frac{I_{n-1}(t)}{(1-t)^{n}}\right)' +\frac{I_{n-1}(t)}{(1-t)^{n}} +t^2\left(\frac{I_{n-2}(t)}{(1-t)^{n-1}}\right)'' +3t\left(\frac{I_{n-2}(t)}{(1-t)^{n-1}}\right)'\\[5pt] &\quad+(n-1)\frac{I_{n-2}(t)}{(1-t)^{n-1}}, \end{align*} or \begin{align} nI_{n}(t) &=(t-t^2)I_{n-1}'(t)+[1+(n-1)t]I_{n-1}(t) +t^2(1-t)^2I_{n-2}''(t)\nonumber\\[5pt] &\quad+t(1-t)[3+(2n-5)t]I_{n-2}'(t) +(n-1)[1+t+(n-2)t^2]I_{n-2}(t). \label{eq:ing-rec} \end{align} Comparing the coefficients of $t^k$ in both sides of \eqref{eq:ing-rec}, we obtain \begin{align*} nI_{n,k} &=kI_{n-1,k}-(k-1)I_{n-1,k-1} +I_{n-1,k}+(n-1)I_{n-1,k-1} \\[5pt] &\quad +k(k-1)I_{n-2,k}-2(k-1)(k-2)I_{n-2,k-1}+(k-2)(k-3)I_{n-2,k-2} \\[5pt] &\quad +3kI_{n-2,k}+(2n-8)(k-1)I_{n-2,k-1}-(2n-5)(k-2)I_{n-2,k-2} \\[5pt] &\quad +(n-1)I_{n-2,k}+(n-1)I_{n-2,k-1}+(n-1)(n-2)I_{n-2,k-2}, \end{align*} which, after simplification, equals the right-hand side of \eqref{eq:reci}. \qed \begin{rmk} The recurrence formula \eqref{eq:zeil} can also be proved by hand as follows. It is easy to see that the generating function of $s(n)$ is \begin{equation}\label{eq:sn-gen} \sum_{n=0}^{\infty}s(n)u^n=(1-u^2)^{-x}(1-u)^{-y-1}. 
\end{equation} Differentiating \eqref{eq:sn-gen} with respect to $u$ implies that $$ \sum_{n=0}^{\infty}ns(n)u^{n-1} =\left(\frac{2ux}{1-u^2}+\frac{y+1}{1-u}\right)(1-u^2)^{-x}(1-u)^{-y-1}, $$ consequently, \begin{align} (1-u^2)\sum_{n=0}^{\infty}ns(n)u^{n-1} &=[(2x+y+1)u+y+1](1-u^2)^{-x}(1-u)^{-y-1}\nonumber \\[5pt] &=[(2x+y+1)u+y+1]\sum_{n=0}^{\infty}s(n)u^{n}. \label{eq:sn-deri} \end{align} Comparing the coefficients of $u^{n+1}$ in both sides of \eqref{eq:sn-deri}, we obtain $$ (n+2)s(n+2)-ns(n)=(2x+y+1)s(n)+(y+1)s(n+1), $$ which is equivalent to \eqref{eq:zeil}. \end{rmk} Note that the right-hand side of \eqref{eq:recj} (resp.~\eqref{eq:reci}) is invariant under the substitution $k\rightarrow 2n-k$ (resp.~$k\rightarrow n-1-k$), provided that the sequence $J_{2n-2,k}$ (resp.~the sequences $I_{n-1,k}$ and $I_{n-2,k}$) is symmetric. Thus, by induction we derive immediately the symmetry properties of $J_{2n,k}$ and $I_{n,k}$ (see \cite{DF, GR, Strehl}). \begin{cor} For $n,k\in\mathbb{N}$, we have $$ I_{n,k}=I_{n,n-1-k},\quad J_{2n,k}=J_{2n,2n-k}. $$ \end{cor} It would be interesting to find a combinatorial proof of the recurrence formulas \eqref{eq:recj} and \eqref{eq:reci}, since such a proof could hopefully lead to a combinatorial proof of the unimodality of these two sequences. \section{Unimodality of the sequences $I_{n,k}$ and $J_{2n,k}$} The following observation is crucial in our inductive proof of the unimodality of the sequences $I_{n,k}$ ($0\leq k\leq n-1$) and $J_{2n,k}$ ($1\leq k\leq 2n-1$). \begin{lem}\label{lem:sum-axi} Let $x_0,x_1,\ldots,x_n$ and $a_0,a_1,\ldots,a_n$ be real numbers such that $x_0\geq x_1\geq\cdots\geq x_n\geq 0$ and $a_0+a_1+\cdots+a_k \geq 0$ for all $k=0,1,\ldots,n.$ Then $$ \sum_{i=0}^{n}a_ix_i\geq 0. $$ \end{lem} Indeed, the above inequality follows from the identity: $$ \sum_{i=0}^{n}a_ix_i=\sum_{k=0}^{n}(x_k-x_{k+1})(a_0+a_1+\cdots+a_k), $$ where $x_{n+1}=0$. \begin{thm}\label{thm:uni-jn} The sequence $J_{2n,1},J_{2n,2},\ldots,J_{2n,2n-1}$ is unimodal.
\end{thm} \noindent {\it Proof.} By the symmetry of $J_{2n,k}$, it is enough to show that $J_{2n,k}\geq J_{2n,k-1}$ for all $2\leq k\leq n$. We proceed by induction on $n$. The case $n=2$ is clear. Suppose the sequence $J_{2n-2,k}$ is unimodal in $k$. By Theorem \ref{lem:recj}, one has \begin{align} 2n(J_{2n,k}-J_{2n,k-1}) =A_0 J_{2n-2,k}+A_1J_{2n-2,k-1}+A_2J_{2n-2,k-2}+A_3 J_{2n-2,k-3}, \label{eq:jn-rec} \end{align} where \begin{align*} A_0&=k^2+k+2n-2, \quad A_1=4nk-3k^2-6n+k+6,\\[5pt] A_2&=3k^2+4n^2-8nk-5k+12n-4,\quad A_3=3k-k^2+4nk-4n^2-8n. \end{align*} We have the following two cases: \begin{itemize} \item If $2\leq k\leq n-1$, then $$ J_{2n-2,k}\geq J_{2n-2,k-1}\geq J_{2n-2,k-2}\geq J_{2n-2,k-3} $$ by the induction hypothesis, and clearly \begin{align*} &A_0\geq 0,\quad A_0+A_1=2(k-1)(2n-k)+4 \geq 0, \\[5pt] &A_0+A_1+A_2=(2n-k)^2-3k+8n \geq 0,\quad A_0+A_1+A_2+A_3=0. \end{align*} Therefore, by Lemma~\ref{lem:sum-axi}, we have $$ J_{2n,k}-J_{2n,k-1} \geq 0. $$ \item If $k=n$, then $$ J_{2n-2,n-1}\geq J_{2n-2,n}=J_{2n-2,n-2}\geq J_{2n-2,n-3} $$ by symmetry and the induction hypothesis. In this case, we have $A_1=(n-2)(n-3)\geq 0$ and thus the corresponding condition of Lemma~\ref{lem:sum-axi} is satisfied. Therefore, we have $$ J_{2n,n}-J_{2n,n-1} \geq 0. $$ \end{itemize} This completes the proof. \qed \begin{thm}\label{thm:uni-in} The sequence $I_{n,0},I_{n,1},\ldots,I_{n,n-1}$ is unimodal. \end{thm} \noindent {\it Proof.} By the symmetry of $I_{n,k}$, it suffices to show that $I_{n,k}\geq I_{n,k-1}$ for all $1\leq k\leq (n-1)/2$. From Table \ref{table:in6}, it is clear that the sequences $I_{n,k}$ are unimodal in $k$ for $1\leq n\leq 6$. Now suppose $n\geq 7$ and the sequences $I_{n-1,k}$ and $I_{n-2,k}$ are unimodal in $k$. Replacing $k$ by $k-1$ in \eqref{eq:reci}, we obtain \begin{align} nI_{n,k-1} &=kI_{n-1,k-1}+(n-k+1)I_{n-1,k-2}+(k^2+n-2)I_{n-2,k-1} \nonumber\\[5pt] &\quad +[2(k-1)(n-k)-n+3]I_{n-2,k-2}+[(n-k+1)^2+n-2]I_{n-2,k-3}.
\label{eq:reci2} \end{align} Combining \eqref{eq:reci} and \eqref{eq:reci2} yields \begin{align} n(I_{n,k}-I_{n,k-1}) &=B_0I_{n-1,k}+B_1I_{n-1,k-1}+B_2I_{n-1,k-2} \nonumber\\[5pt] &\quad +C_0I_{n-2,k}+C_1I_{n-2,k-1}+C_2I_{n-2,k-2}+C_3I_{n-2,k-3}, \label{eq:inkk-1} \end{align} where \begin{align*} B_0 &=k+1,\quad B_1=n-2k,\quad B_2=-(n-k+1),\\[5pt] C_0 &=(k+1)^2+n-2,\quad C_1=2nk-3k^2-2k-2n+5, \\[5pt] C_2 &=n^2-4nk+3k^2+4n-2k-5,\quad C_3=-(n-k+1)^2-n+2. \end{align*} Notice that $I_{n-1,k}\geq I_{n-1,k-1}\geq I_{n-1,k-2}$ for $1\leq k\leq (n-1)/2$. By Lemma~\ref{lem:sum-axi}, we have \begin{equation}\label{eq:b0ink} B_0I_{n-1,k}+B_1I_{n-1,k-1}+B_2I_{n-1,k-2}\geq 0. \end{equation} It remains to show that \begin{equation}\label{eq:c0ink} C_0I_{n-2,k}+C_1I_{n-2,k-1}+C_2I_{n-2,k-2}+C_3I_{n-2,k-3}\geq 0, \quad\forall\,1\leq k\leq (n-1)/2. \end{equation} We need to consider the following two cases: \begin{itemize} \item If $1\leq k\leq (n-2)/2$, then $$ I_{n-2,k}\geq I_{n-2,k-1}\geq I_{n-2,k-2} \geq I_{n-2,k-3} $$ by the induction hypothesis, and \begin{align*} &C_0=(k+1)^2+n-2\geq 0,\quad C_0+C_1=(2k-1)(n-k-1)+k+3\geq 0, \\[5pt] &C_0+C_1+C_2=(n-k+1)^2+n-2\geq 0,\quad C_0+C_1+C_2+C_3=0. \end{align*} \item If $k=(n-1)/2$, then by symmetry and the induction hypothesis, $$ I_{n-2,k-1} \geq I_{n-2,k}=I_{n-2,k-2} \geq I_{n-2,k-3}. $$ In this case, we have $C_1=(n-3)(n-7)/4\geq 0$ for $n\geq 7$. \end{itemize} Therefore, by Lemma~\ref{lem:sum-axi} the inequality \eqref{eq:c0ink} holds. It follows from \eqref{eq:inkk-1}--\eqref{eq:c0ink} that $$ I_{n,k}-I_{n,k-1}\geq 0,\quad\forall\,1\leq k\leq (n-1)/2. $$ This completes the proof. 
\qed \section{Further remarks and open problems} Since $I_{n,k}=I_{n,n-1-k}$, we can rewrite $I_n(t)$ as follows: \begin{align*} I_n(t)=\sum_{k=0}^{n-1}I_{n,k}t^k =\begin{cases}\displaystyle\sum_{k=0}^{n/2-1}I_{n,k}t^k(1+t^{n-2k-1}), &\text{if $n$ is even,}\\[15pt] I_{n,(n-1)/2}t^{(n-1)/2}+\displaystyle\sum_{k=0}^{(n-3)/2}I_{n,k}t^k(1+t^{n-2k-1}), &\text{if $n$ is odd.} \end{cases} \end{align*} Applying the well-known formula $$x^{n}+y^{n} =\sum_{j=0}^{\lfloor n/2\rfloor}(-1)^{j}\frac{n}{n-j}{n-j\choose j}(xy)^j (x+y)^{n-2j}, $$ we obtain \begin{align} I_n(t)=\sum_{k=0}^{\lfloor (n-1)/2 \rfloor}a_{n,k}t^{k}(1+t)^{n-2k-1}, \label{eq:sym-int} \end{align} where $$ a_{n,k}= \begin{cases}\displaystyle\sum_{j=0}^{k}(-1)^{k-j} \frac{n-2j-1}{n-k-j-1}{n-k-j-1\choose k-j}I_{n,j},&\text{if $2k+1<n$,}\\[15pt] I_{n,k}+\displaystyle\sum_{j=0}^{k-1}(-1)^{k-j}\frac{n-2j-1}{n-k-j-1} {n-k-j-1\choose k-j}I_{n,j},&\text{if $2k+1=n$.} \end{cases} $$ The first values of $a_{n,k}$ are given in Table~\ref{table:ank}, which seems to suggest the following conjecture. \begin{conj}\label{conj:ank} For $n\geq 1$ and $k\geq 0$, the coefficients $a_{n,k}$ are nonnegative integers. 
\end{conj} \begin{table}[!h] \caption{Values of $a_{n,k}$ for $n\leq 16$ and $0\leq k\leq \lfloor (n-1)/2\rfloor$.\label{table:ank}} {\scriptsize \begin{center} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $k\setminus n$&1&2&3&4 & 5& 6& 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14& 15 & 16 \\\hline 0 & 1 & 1 & 1 & 1 & 1& 1& 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\\hline 1 & & & 0 & 1 & 2& 4& 6 & 9 & 12& 16 & 20 & 25 & 30 & 36& 42 & 49\\\hline 2 & & & & & 2& 6& 18& 39& 79& 141& 239& 379 & 579 & 849& 1211& 1680\\\hline 3 & & & & & & & 0 & 18& 78& 272& 722& 1716& 3626& 7160& 13206& 23263\\\hline 4 & & & & & & & & & 20& 124& 668& 2560& 8360& 23536& 59824& 139457\\\hline 5 & & & & & & & & & & & 32 & 700 & 4800& 24160& 95680& 325572\\\hline 6 & & & & & & & & & & & & & 440 & 5480& 44632& 257964\\\hline 7 & & & & & & & & & & & & & & & 2176& 44376\\\hline \end{tabular} \end{center} } \end{table} Since the coefficients of $t^k(1+t)^{n-2k-1}$ are symmetric and unimodal with center of symmetry at $(n-1)/2$, Conjecture~\ref{conj:ank} is stronger than the fact that the coefficients of $I_n(t)$ are symmetric and unimodal. A more interesting question is to give a combinatorial interpretation of $a_{n,k}$. Note that the Eulerian polynomials can be written as $$ A_n(t)=\sum_{k=1}^{\lfloor (n+1)/2 \rfloor}c_{n,k}t^k(1+t)^{n-2k+1}, $$ where $c_{n,k}$ is the number of increasing binary trees on $[n]$ with $k$ leaves and no vertices having left children only (see \cite{Branden,FS,Gasharov}). We now proceed to derive a recurrence relation for $a_{n,k}$. Set $x=x(t)=t/(1+t)^2$ and $$ P_n(x)=\sum_{k=0}^{\lfloor (n-1)/2 \rfloor}a_{n,k}x^k. $$ Then we can rewrite \eqref{eq:sym-int} as \begin{equation}\label{eq:intpx} I_n(t)=(1+t)^{n-1}P_{n}(x).
\end{equation} Differentiating \eqref{eq:intpx} with respect to $t$ we get \begin{align} I_n'(t)&=(n-1)(1+t)^{n-2}P_{n}(x)+(1+t)^{n-1}P_{n}'(x)x'(t), \label{eq:intxt} \\[5pt] I_n''(t)&=(n-1)(n-2)(1+t)^{n-3}P_{n}(x)+2(n-1)(1+t)^{n-2}P_{n}'(x)x'(t)\nonumber\\[5pt] &\quad +(1+t)^{n-1}P_{n}''(x)(x'(t))^2+(1+t)^{n-1}P_{n}'(x)x''(t), \\[5pt] x'(t)&=\frac{1-t}{(1+t)^3},\quad x''(t)=\frac{2t-4}{(1+t)^4}. \label{eq:intxt2} \end{align} Substituting \eqref{eq:intpx}--\eqref{eq:intxt2} into \eqref{eq:ing-rec}, we obtain \begin{align} &\hskip -2mm n(1+t)^{n-1}P_n(x) \nonumber\\[5pt] &=[1+(2n-2)t+t^2](1+t)^{n-3}P_{n-1}(x)+t(1-t)^2(1+t)^{n-5}P_{n-1}'(x)\nonumber\\[5pt] &\quad+[-(t^2+14t+1)(1-t)^2+(1+6t-18t^2+6t^3+t^4)n +4t^2n^2](1+t)^{n-5}P_{n-2}(x)\nonumber\\[5pt] &\quad+[3t(t^2-4t+1)(1-t)^2+4t^2(1-t)^2n](1+t)^{n-7}P_{n-2}'(x)\nonumber\\[5pt] &\quad+t^2(1-t)^4(1+t)^{n-9}P_{n-2}''(x). \label{eq:inxpnx} \end{align} Dividing the two sides of \eqref{eq:inxpnx} by $(1+t)^{n-1}$ and noticing that $t/(1+t)^2=x$, after a little manipulation we get \begin{align} nP_n(x)&=[1+(2n-4)x]P_{n-1}(x)+(x-4x^2)P_{n-1}'(x) \nonumber\\[5pt] &\quad +[(n-1)+(2n-8)x+4(n-3)(n-4)x^2]P_{n-2}(x) \nonumber\\[5pt] &\quad +[3x+(4n-30)x^2+(72-16n)x^3]P_{n-2}'(x)+(x^2-8x^3+16x^4)P_{n-2}''(x).\nonumber \end{align} Extracting the coefficients of $x^k$ yields \begin{align*} na_{n,k}&=a_{n-1,k}+(2n-4)a_{n-1,k-1}+ka_{n-1,k}-4(k-1)a_{n-1,k-1}\\[5pt] &\quad+(n-1)a_{n-2,k}+(2n-8)a_{n-2,k-1}+4(n-3)(n-4)a_{n-2,k-2}\\[5pt] &\quad+3ka_{n-2,k}+(4n-30)(k-1)a_{n-2,k-1}+(72-16n)(k-2)a_{n-2,k-2} \\[5pt] &\quad+k(k-1)a_{n-2,k}-8(k-1)(k-2)a_{n-2,k-1}+16(k-2)(k-3)a_{n-2,k-2}. \end{align*} After simplification, we obtain the following recurrence formula for $a_{n,k}$. 
\begin{thm}\label{thm:rec-ank} For $n\geq 3$ and $k\geq 0$, there holds \begin{align} na_{n,k} &=(k+1)a_{n-1,k}+(2n-4k)a_{n-1,k-1}+[k(k+2)+n-1]a_{n-2,k} \nonumber\\[5pt] &\quad+[(k-1)(4n-8k-14)+2n-8]a_{n-2,k-1}+4(n-2k)(n-2k+1)a_{n-2,k-2}, \label{eq:rec-ank} \end{align} where $a_{n,k}=0$ if $k<0$ or $k>(n-1)/2$. \end{thm} Note that, if $n\geq 2k+3$, then $$(k-1)(4n-8k-14)+2n-8>0 \quad\text{for any $k\geq 1$,}$$ and the other coefficients in \eqref{eq:rec-ank} are nonnegative as well. Therefore, Conjecture~\ref{conj:ank} would follow if one could show that $a_{2n+1,n}\geq 0$ and $a_{2n+2,n}\geq 0$. Finally, from \eqref{eq:sym-int} it is easy to see that \begin{align*} a_{2n+1,n}&=(-1)^nI_{2n+1}(-1)=\sum_{k=0}^{2n}(-1)^{n-k} I_{2n+1,k},\\[5pt] a_{2n+2,n}&=(-1)^{n}I_{2n+2}'(-1)=\sum_{k=1}^{2n+1}(-1)^{n+1-k} kI_{2n+2,k}. \end{align*} Thus, Conjecture~\ref{conj:ank} is equivalent to the {\it nonnegativity} of the above two alternating sums. Since $J_{2n,k}=J_{2n,2n-k}$, in the same manner as $I_{n}(t)$ we obtain \begin{align*} J_{2n}(t)=\sum_{k=1}^{n}b_{2n,k}t^{k}(1+t)^{2n-2k}, \end{align*} where $$ b_{2n,k}= \begin{cases}\displaystyle\sum_{j=1}^{k}(-1)^{k-j} \frac{2n-2j}{2n-k-j}{2n-k-j\choose k-j}J_{2n,j},&\text{if $k<n$,}\\[15pt] J_{2n,k}+\displaystyle\sum_{j=1}^{k-1}(-1)^{k-j}\frac{2n-2j}{2n-k-j} {2n-k-j\choose k-j}J_{2n,j},&\text{if $k=n$.} \end{cases} $$ Now, it follows from \eqref{fnt-jnt} that \begin{align*} J_{2n,k}=\sum_{i=0}^{k}(-1)^{k-i}{2n+1\choose k-i}{i(i+1)/2+n-1\choose i(i+1)/2-1} \end{align*} is a polynomial in $n$ of degree $d:=k(k+1)/2-1$ with leading coefficient $1/d!$, and so is $b_{2n,k}$. Thus, we have $\lim_{n\rightarrow+\infty} b_{2n,k}=+\infty$ for any fixed $k>1$. The first values of $b_{2n,k}$ are given in Table~\ref{table:bnk}, which seems to suggest \begin{conj}\label{conj:bnk} For $n\geq 9$ and $k\geq 1$, the coefficients $b_{2n,k}$ are nonnegative integers.
\end{conj} \begin{table}[h] \caption{Values of $b_{2n,k}$ for $2n\leq 24$ and $1\leq k\leq n$.\label{table:bnk}} {\scriptsize \begin{center} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $k\setminus 2n$ &2& 4& 6& 8& 10& 12& 14& 16& 18& 20& 22& 24 \\\hline 1 &1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1 \\\hline 2 & &$-1$&$-1$ &0& 2& 5& 9& 14& 20& 27& 35& 44 \\\hline 3 & & & 3&12& 36& 91& 201& 399& 728& 1242& 2007& 3102 \\\hline 4 & & & &$-7$&$-10$ &91&652&2593& 7902& 20401& 46852& 98494 \\\hline 5 & & & & & 25&219&1710&10532& 50165& 194139& 639968& 1861215 \\\hline 6 & & & & & &$-65$& 249&11319&122571& 841038& 4377636& 18747924 \\\hline 7 & & & & & & & 283& 6586&135545&1737505&15219292&101116704 \\\hline 8 & & & & & & & &$-583$& 33188&1372734&24412940&277963127 \\\hline 9 & & & & & & & & & 4417& 379029&16488999&367507439 \\\hline 10& & & & & & & & & & 1791& 3350211&203698690 \\\hline 11& & & & & & & & & & & 133107& 36903128 \\\hline 12& & & & & & & & & & & & 761785 \\\hline \end{tabular} \end{center} } \end{table} Similarly to the proof of Theorem~\ref{thm:rec-ank}, we can prove the following result. \begin{thm} \label{thm:rec-bnk} For $n\geq 2$ and $k\geq 1$, there holds \begin{align*} 2n b_{2n,k}&=[k(k+1)+2n-2]b_{2n-2,k}+[2+2(k-1)(4n-4k-3)]b_{2n-2,k-1}\\[5pt] &\quad+8(n-k+1)(2n-2k+1)b_{2n-2,k-2}, \end{align*} where $b_{2n,k}=0$ if $k<1$ or $k>n$. \end{thm} Theorem \ref{thm:rec-bnk} allows us to reduce the verification of Conjecture~\ref{conj:bnk} to the boundary case $b_{2n,n}\geq 0$ for $n\geq 9$. \vskip 5mm \noindent{\bf Acknowledgment.} The second author was supported by EC's IHRP Programme, within Research Training Network ``Algebraic Combinatorics in Europe," grant HPRN-CT-2001-00272. \renewcommand{\baselinestretch}{1}
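The entries of Table~\ref{table:ank} are easily reproduced by computer for small $n$. The following Python sketch (all function names are ours, not from the text) counts the involutions of $[n]$ by descents to obtain $I_{n,k}$, and then peels off the coefficients $a_{n,k}$ of the expansion \eqref{eq:sym-int}, using the fact that the lowest-degree term of the basis element $t^k(1+t)^{n-2k-1}$ is $t^k$, so that the change of basis is triangular:

```python
from itertools import permutations

def involution_descent_counts(n):
    """Return [I_{n,0}, ..., I_{n,n-1}]: involutions of [n] counted by descents."""
    counts = [0] * n
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 for i in range(n)):     # p is an involution
            d = sum(p[i] > p[i + 1] for i in range(n - 1))  # number of descents
            counts[d] += 1
    return counts

def binom_row(m):
    """Coefficients of (1+t)^m."""
    row = [1]
    for _ in range(m):
        row = [a + b for a, b in zip(row + [0], [0] + row)]
    return row

def gamma_expansion(coeffs):
    """Write sum_k c_k t^k as sum_k a_k t^k (1+t)^(n-1-2k).

    Since t^k (1+t)^(n-1-2k) has lowest-degree term t^k, the
    coefficients a_k can be peeled off in order of increasing k."""
    n = len(coeffs)                 # the polynomial has degree n-1
    rem = list(coeffs)
    a = []
    for k in range((n - 1) // 2 + 1):
        a.append(rem[k])
        basis = [0] * k + binom_row(n - 1 - 2 * k)
        for i, b in enumerate(basis):
            rem[i] -= a[-1] * b
    return a

print(gamma_expansion(involution_descent_counts(6)))  # -> [1, 4, 6]
```

For instance, $I_6(t)=1+9t+28t^2+28t^3+9t^4+t^5$ yields $a_{6,k}=[1,4,6]$, in agreement with the $n=6$ column of Table~\ref{table:ank}.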
https://arxiv.org/abs/1708.06682
Weighted isoperimetric inequalities in warped product manifolds
We prove some isoperimetric type inequalities in warped product manifolds, or more generally, multiply warped product manifolds. We then relate them to inequalities involving the higher order mean-curvature integrals. We also apply our results to obtain sharp eigenvalue estimates and some sharp geometric inequalities in space forms.
\section{Introduction}\label{sec: intro} The classical isoperimetric inequality in the plane states that for a simple closed curve in $\mathbb R^2$, we have $L^2\ge 4\pi A$, where $L$ is the length of the curve and $A$ is the area of the region enclosed by it. The equality holds if and only if the curve is a circle. The classical isoperimetric inequality has been generalized to hypersurfaces in higher dimensional Euclidean space, and to various ambient spaces. For these generalizations, we refer to the beautiful article by Osserman \cite{osserman1978isoperimetric} and the references therein. For a more modern account see \cite{ros2001isoperimetric}. Apart from two-dimensional manifolds and the standard space forms $\mathbb R^n$, $\mathbb H^n$ and $\mathbb S^n$, there are few manifolds for which the isoperimetric surfaces are known. According to \cite{bray2002isoperimetric}, known examples include $\mathbb R\times \mathbb H^n$, $\mathbb RP^3$, $\mathbb S^1\times \mathbb R^2$, $T^2\times \mathbb R$, $\mathbb R\times \mathbb S^n$, $\mathbb S^1\times \mathbb R^n$, $\mathbb S^1\times \mathbb S^2$, $\mathbb S^1\times \mathbb H^2$ and the Schwarzschild manifold, most of which are warped product manifolds over an interval or a circle. There are also many applications of the isoperimetric inequalities. For example, isoperimetric surfaces were used to prove some important cases of the Penrose inequality \cite{bray2001proof}, an inequality concerning the mass of black holes in general relativity. In this paper, we prove both classical and weighted isoperimetric results in warped product manifolds, or more generally, multiply warped product manifolds. We also relate them to inequalities involving the higher order mean-curvature integrals. Some applications to geometric inequalities and eigenvalues are also given. For the sake of simplicity, let us describe our main results on a warped product manifold.
The multiply warped product case is only notationally more complicated and presents no additional conceptual difficulty. Let $\displaystyle M=[0, \lambda)\times N$ ($\lambda\le \infty$) be a product manifold. Equip $M$ with the warped product Riemannian metric $\displaystyle g=dr^2+ s(r)^2 g_{N}$ for some continuous $s(r)\ge 0$, where $g_{N}$ is a Riemannian metric on the $m$-dimensional manifold $N$, which we assume to be compact and oriented. Define $B_R:=\{(r, \theta)\in M: r< R\}$. We define the functions $A(r)$ and $v(r)$ by \begin{align*} A(r):= s(r)^m \; \textrm{ and }\; v(r):= \int_{0}^{r} A(t) dt. \end{align*} Up to multiplicative constants, they are just the area of $\partial B_r$ and the volume of $B_r$ respectively. For a bounded domain $\Omega$ in $M$, we define $\Omega^\#$ to be the region $B_R$ which has the same volume as $\Omega$, i.e. $\mathrm{Vol} (B_R) =\mathrm{Vol} (\Omega) $. We denote the area of $\partial \Omega$ and $\partial \Omega^\#$ by $|\partial \Omega|$ and $|\partial \Omega^\#|$ respectively. One of our main results is the following isoperimetric theorem, which is a special case of Theorem \ref{thm weighted vol}. \begin{theorem}\label{thm1} Let $\Omega$ be a bounded open set in $(M,g)$ with Lipschitz boundary. Assume that \begin{enumerate} \item \label{cond: surj} The projection map $\pi: \partial \Omega\subset \mathbb [0, \lambda)\times N\to N$ defined by $(r, \theta)\mapsto \theta$ is surjective. \item\label{cond: mono} $s(r)$ is non-decreasing. \item\label{cond: convex} $A\circ v^{-1}$ is convex, or equivalently, $ss''-s'^2\ge 0$ if $s$ is twice differentiable. \end{enumerate} Then the isoperimetric inequality holds: $$|\partial \Omega|\ge|\partial \Omega^\#|.$$ \end{theorem} We remark that our notion of convexity does not require the function to be differentiable: $f$ is convex on $I$ if and only if $f((1-t)x+ty)\le (1-t)f(x)+tf(y)$ for any $t\in(0,1)$ and $x, y\in I$. 
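Condition \eqref{cond: convex} is easy to test numerically: sampling $u=v(r)$ and $A(r)$ on a grid, $A\circ v^{-1}$ is convex precisely when the chord slopes of $A$ plotted against $v$ are non-decreasing. The following is a minimal sketch of such a check (the function names are ours, not from the text); the two examples are the log-convex warping $s(r)=e^r$, for which $ss''-s'^2=0$, and the round sphere warping $s(r)=\sin r$, for which $ss''-s'^2=-1<0$ and the condition fails:

```python
import math

def chord_slopes(s, m, r_max, n=2000):
    """Sample u = v(r) and A(r) = s(r)^m on a grid; A o v^{-1} is convex
    iff the successive chord slopes of A versus v are non-decreasing."""
    h = r_max / n
    rs = [h * i for i in range(1, n + 1)]
    A = [s(r) ** m for r in rs]
    v, acc = [], 0.0
    for a in A:                       # v(r) = int_0^r A(t) dt (Riemann sum)
        acc += a * h
        v.append(acc)
    return [(A[i + 1] - A[i]) / (v[i + 1] - v[i]) for i in range(n - 1)]

# s(r) = e^r is log-convex (s s'' - s'^2 = 0), so A o v^{-1} is convex:
sl = chord_slopes(math.exp, m=2, r_max=2.0)
assert all(b >= a - 1e-9 for a, b in zip(sl, sl[1:]))

# s(r) = sin(r) has s s'' - s'^2 = -1 < 0, so convexity fails:
sl = chord_slopes(math.sin, m=2, r_max=1.5)
assert any(b < a for a, b in zip(sl, sl[1:]))
```

In the first example the slopes are in fact constant, reflecting that $A(v^{-1}(u))=2u+1$ is linear in $u$ when $s(r)=e^r$ and $m=2$.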
One feature of our result is that, apart from compactness, we do not impose any condition on the base manifold $N$. We will see in Section \ref{sec: necess} that without further restriction on $N$, our conditions are optimal in a certain sense. The expression $ss''-s'^2$ comes from the observation that if $s$ is twice differentiable, then as $v'=s^m$, the convexity of $A(v^{-1}(u))=s\left( v^{-1}\left( u \right)\right)^m$ is equivalent to \begin{equation}\label{eq: conv} \frac{d^2}{du^2} A \left(v^{-1}(u)\right) =\frac{m}{ s(r)^{m+2}}\left(s(r)s''(r)-s'(r)^2\right)\ge 0, \end{equation} where $r=v^{-1}(u)$. The expression $ss''-s'^2$ also has a number of geometric and physical meanings. It is related to the stability of the slice $\Sigma=\{r=r_0\}$ as a constant-mean-curvature (CMC) hypersurface (i.e. whether it is a minimizer of the area among nearby hypersurfaces enclosing the same volume). Indeed, it can be shown (\cite{GLW} Proposition 6.2) that $\lambda_1(g_N)\ge m\left(s'(r_0)^2-s(r_0)s''(r_0)\right)$ if and only if $\Sigma$ is a stable CMC hypersurface, where $\lambda_1(g_N)$ is the first Laplacian eigenvalue of $(N, g_N)$. It is also related to the so-called ``photon spheres'' in relativity (see \cite{GLW} Proposition 6.1). From \eqref{eq: conv}, $A\circ v^{-1}$ is convex if and only if $s$ is $\log$-convex. So if $f(r)$ is non-decreasing and convex, then $s(r)=\exp(f(r))$ satisfies the convexity condition. One example of such a function is $s(r)=e^r$. If $\partial \Omega$ is star-shaped, we can remove the assumption on the monotonicity of $s(r)$. \begin{theorem}\label{thm2} Suppose $\Omega$ is a bounded open set in $(M,g)$ with Lipschitz boundary. Assume that \begin{enumerate} \item $\partial \Omega$ is star-shaped in the sense that it is a graph over $N$, i.e. of the form $\partial \Omega=\{(r, \theta): r=\psi(\theta), \theta\in N\}$. \item\label{cond2 thm2} $A\circ v^{-1}$ is convex, or equivalently, $ss''-s'^2\ge 0$ if $s$ is twice differentiable.
\end{enumerate} Then the isoperimetric inequality holds: $$|\partial \Omega|\ge|\partial \Omega^\#|.$$ \end{theorem} If the classical isoperimetric inequality already holds on $(M,g)$, we can extend it by the following result, which is a special case of Theorem \ref{thm: main}. \begin{theorem}[Weighted isoperimetric inequality]\label{thm3} Let $\Omega$ be a bounded open set in $(M,g)$ with Lipschitz boundary. Assume that \begin{enumerate} \item The classical isoperimetric inequality holds on $(M,g)$, i.e. $|\partial \Omega|\ge|\partial \Omega^\#|$. \item $a(r)$ is a non-negative continuous function such that $\psi(r):=b(r)A(r)$ is non-decreasing, where $b(r):=a(r)-a(0)$. \item The function $\psi\circ v^{-1}$ is convex. \end{enumerate} Then \begin{align*} \int_{\partial \Omega} a(r)dS \ge\int_{\partial \Omega^{\#}} a(r)dS. \end{align*} If $b(r)A(r)>0$ for $r>0$, then the equality holds if and only if $r=\mathrm{constant}$, i.e. $\partial \Omega$ is a coordinate slice. \end{theorem} Using a volume-preserving flow, Guan, Li and Wang \cite{GLW} (see also \cite{guan2015mean}) recently proved the following related result, assuming $s$ is smooth ($\partial \Omega$ is called ``graphical'' in \cite{GLW}): \begin{theorem} (\cite[Theorem 1.2]{GLW}) \label{thm: GLW} Suppose $\Omega$ is a domain in $(M,g)$ with smooth star-shaped boundary. Assume that \begin{enumerate} \item The Ricci curvature $\mathrm{Ric}_N$ of $g_N$ satisfies $\mathrm{Ric}_N\ge (m-1)K g_N$, where $K>0$ is constant. \item $0\le s'^2-ss''\le K$. \end{enumerate} Then the isoperimetric inequality holds: $$|\partial \Omega|\ge|\partial \Omega^\#|.$$ If $s'^2-ss''<K$, then the equality holds if and only if $\partial \Omega$ is a coordinate slice $\{r=r_0\}$. \end{theorem} We note that our assumption $ss''-s'^2\ge 0$ in Theorem \ref{thm2} \textit{complements} that of \cite{GLW}. This does not contradict the result in \cite{GLW}.
In fact, we will show the necessity of this and other conditions in Section \ref{sec: necess} (cf. Proposition \ref{prop: necess}). We also notice that, except in the obvious case where $M$ has constant curvature, the equality holds only when $\partial \Omega$ is a coordinate slice. Indeed, combining Theorem \ref{thm: GLW} with Theorem \ref{thm3}, we can generalize Theorem \ref{thm: GLW} as follows. \begin{theorem}[Theorem \ref{thm: glw weighted}] Suppose $\Omega$ is a domain in $(M,g)$ with smooth star-shaped boundary. Assume that the Ricci curvature $\mathrm{Ric}_N$ of $g_N$ satisfies $\mathrm{Ric}_N\ge (m-1)K g_N$ and $0\le s'^2-ss''\le K$, where $K>0$ is constant. Suppose $a(r)$ is a positive continuous function such that $b(v^{-1}(u))\,s(v^{-1}(u))^m$ is convex in $u$, where $b(r):=a(r)-a(0)$. Then the weighted isoperimetric inequality holds: $$\int_{\partial \Omega}a(r)dS\ge \int_{\partial \Omega^\#}a(r)dS.$$ The equality holds if and only if either \begin{enumerate} \item $(M,g)$ has constant curvature, $a(r)$ is constant on $\partial \Omega$, and $\partial \Omega$ is a geodesic hypersphere, or \item $\partial \Omega$ is a slice $\{r=r_0\}$. \end{enumerate} \end{theorem} Combining Theorem \ref{thm: GLW}, Theorem \ref{thm2} and the proof of Proposition \ref{prop: necess}, we get the following general picture for the isoperimetric problem in warped product manifolds: \begin{theorem} Let $M$ be the product manifold $[0, \lambda)\times N$ equipped with the warped product metric $g=dr^2+s(r)^2 g_N$. \begin{enumerate} \item Suppose $s'^2-ss''\le 0$. Then the star-shaped isoperimetric hypersurfaces are precisely the coordinate slices $\{r=r_0\}$. \item Suppose $0\le s'^2-ss''\le K$ and $\mathrm{Ric}_N\ge (m-1)Kg_N$ where $K>0$ is constant. Then the star-shaped isoperimetric hypersurfaces are either geodesic hyperspheres if $(M,g)$ has constant curvature, or the coordinate slices $\{r=r_0\}$. \item Suppose $s'^2-ss''>K$ and $\lambda_1(g_N)\le mK$ where $K>0$ is constant.
Then the coordinate slices $\{r=r_0\}$ cannot be isoperimetric hypersurfaces. \end{enumerate} \end{theorem} We also prove isoperimetric-type theorems involving the integrals of higher order mean curvatures $(H_k)$ in warped product manifolds. For simplicity, let us state the result when the ambient space is $\mathbb R^n$ (Corollary \ref{cor: Rn}), which follows from a more general theorem (Theorem \ref{thm: higher}). \begin{theorem}[Corollary \ref{cor: Rn}] Let $\Sigma$ be a closed embedded hypersurface in $\mathbb R^{m+1}$ which is star-shaped with respect to $0$, and let $\Omega$ be the region enclosed by it. Assume that $H_k>0$ on $\Sigma$. Then for any integer $l\ge 0$, \begin{align*} n \beta_n^{-\frac{l-1}{n}}\mathrm{Vol}(\Omega)^{\frac{n-1+l}{n}} \le\int_{\Sigma} H_kr^{l+k}dS, \end{align*} where $n=m+1$ and $\beta_n$ is the volume of the unit ball in $\mathbb R^n$. If $l\ge 1$, the equality holds if and only if $\Sigma$ is a sphere centered at $0$. \end{theorem} Note that when $k=l=0$, this reduces to $n \beta_n^{\frac{1}{n}}\mathrm{Vol}(\Omega)^{\frac{n-1}{n}} \le \mathrm{Area}(\Sigma)$, which is the classical isoperimetric inequality. In fact, we prove a stronger result \eqref{ineq: chain}: \begin{align*} n \beta_n^{-\frac{l-1}{n}}\mathrm{Vol}(\Omega)^{\frac{n-1+l}{n}} \le \int_{\partial \Omega} H_0 r^{l} dS \le \int_{\partial \Omega} H_1 r^{l+1} dS \le \cdots \le \int_{\partial \Omega} H_k r^{l+k} dS. \end{align*} This can be compared to the following result of Guan-Li \cite[Theorem 2]{Guan-Li}: \begin{align*} \left(\frac{1}{ \beta_n }\mathrm{Vol}(\Omega)\right)^{\frac{1}{n}}\le \left(\frac{1}{n\beta_n}{\int_\Sigma H_0dS}\right)^{\frac{1}{n-1}}\le \cdots \le \left(\frac{1}{n\beta_n} \int_\Sigma H_k dS \right) ^{\frac{1}{n-k-1}} \end{align*} under the same assumptions. Some applications of the weighted isoperimetric inequalities will also be given in Section \ref{sec: eg} and Section \ref{sec: eigen}. The rest of this paper is organized as follows.
In Section \ref{sec: main}, we first prove Theorem \ref{thm3}. In Section \ref{sec: weighted vol}, we prove the isoperimetric inequality involving a weighted volume (Theorem \ref{thm weighted vol}), which implies Theorem \ref{thm1} and Theorem \ref{thm2}. Although Theorem \ref{thm3} can also be stated using the weighted volume, we prefer to prove the version involving only the ordinary volume for the sake of clarity, and indicate the changes needed to prove Theorem \ref{thm weighted vol}. In Section \ref{sec: eg}, we illustrate how we can obtain interesting geometric inequalities in space forms by using Theorem \ref{thm: main}. In Section \ref{sec: higher}, we introduce the weighted Hsiung-Minkowski formulas in warped product manifolds, and combine them with the isoperimetric theorem to obtain new isoperimetric results involving the integrals of the higher order mean curvatures. In Section \ref{sec: eigen}, we give further applications of our results to obtain some sharp eigenvalue estimates for the Steklov differential operator (also known as the Dirichlet-to-Neumann map) and a second order differential operator related to the extrinsic geometry of hypersurfaces. Finally, in Section \ref{sec: necess}, we show that the conditions of Theorem \ref{thm1} are all necessary by giving counterexamples where the isoperimetric inequality fails if any one of the conditions is violated. \section{Weighted isoperimetric inequality on multiply warped product manifolds}\label{sec: main} In this section, we first prove Theorem \ref{thm3}. As explained in Section \ref{sec: intro}, our result actually applies to multiply warped product manifolds with no additional difficulty. Let us now describe our setting. Let $\displaystyle M=[0, \lambda)\times \prod_{q=1}^{p}N_q$ ($\lambda\le \infty$) be a product manifold.
Equip $M$ with the warped product Riemannian metric $\displaystyle g:=dr^2+ \sum_{q=1}^{p}s_q(r)^2 g_{N_q}$ for some $s_q(r)\ge 0$, where $g_{N_q}$ is a Riemannian metric on the $m_q$-dimensional manifold $N_q$. Denote the $m$-dimensional volume of $\displaystyle N:=\prod_{q=1}^{p}N_q$ with respect to the product metric $\displaystyle \sum_{q=1}^{p}g_{N_q}$ by $|N|$, where $\displaystyle m=\sum_{q=1}^{p}m_q$. Define $B_R:=\{(r, \theta)\in M: r< R\}$. We define the functions $A(r)$, $v(r)$ and $V(r)$ by \begin{align*} A(r):= \prod_{q=1}^{p}s_q(r)^{m_q},\; v(r):= \int_{0}^{r} A(t) dt \;\textrm{ and }\; V(r):= |B_r| =|N| v(r). \end{align*} Here $|B_R|$ denotes the $(m+1)$-dimensional volume of $B_R$ with respect to $g$. Note that $\psi\circ v^{-1}$ is convex if and only if $\psi\circ V^{-1}$ is convex. For a bounded domain $\Omega$ in $M$, we define $\Omega^\#$ to be the region $B_R$ which has the same volume as $\Omega$, i.e. $|B_R|=|\Omega|$. In this paper, we will assume that $A(r)>0$ for $r>0$, with $A$ possibly vanishing only at $r=0$. However, if $\{r=0\}$ is identified as a point, we do not assume $g$ is smooth at this point; e.g. a metric with a conical singularity at this point is allowed. \begin{theorem}\label{thm: main} Let $\Omega$ be a bounded open set in $(M,g)$ with Lipschitz boundary. Assume that \begin{enumerate} \item\label{cond1} The classical isoperimetric inequality holds on $(M,g)$, i.e. $|\partial \Omega|\ge|\partial \Omega^\#|$. \item\label{cond2} $a(r)$ is a non-negative continuous function such that $b(r)A(r)$ is non-decreasing, where $b(r):=a(r)-a(0)$. \item \label{cond3} The function $b(V^{-1}(u))\,A(V^{-1}(u))$ is convex in $u$. \end{enumerate} Then \begin{align*} \int_{\partial \Omega} a(r)dS \ge\int_{\partial \Omega^{\#}} a(r)dS. \end{align*} If $b(r)A(r)>0$ for $r>0$, then the equality holds if and only if $r=\mathrm{constant}$, i.e. $\partial \Omega$ is a coordinate slice.
\end{theorem} One of the ingredients of the proof of Theorem \ref{thm: main} is Jensen's inequality. The use of Jensen's inequality in establishing isoperimetric results in $\mathbb R^n$ has also been employed in \cite{betta1999weighted}. Let us fix some notation. Suppose $\phi$ is a monotone function on $\mathbb R$ and $\mu$ is a probability measure on $X$ (i.e. $\mu(X)=1$). For a function $\rho: X\to \mathbb R$, we define $$\mathcal M_{\phi,\mu}[\rho]:=\phi^{-1}\left(\int_X \phi(\rho)d\mu\right).$$ The following form of Jensen's inequality will be useful to us. \begin{proposition} \label{prop: jensen} Let $\mu$ be a probability measure on $X$ and $\phi, \psi$ be functions on $\mathbb R$. Assume that $\phi^{-1}$ exists and that $\psi\circ \phi^{-1}$ is convex. Then $$\psi \left(\mathcal M_{\phi, \mu}[\rho]\right) \le\int_{X} \psi (\rho) d\mu. $$ Moreover, if $\psi$ is strictly increasing, then $\mathcal M_{\phi, \mu}[\rho] \le \mathcal M_{\psi, \mu}[\rho]$. \end{proposition} \begin{proof} Define $\Phi=\psi\circ \phi^{-1}$. Since $\Phi$ is convex, by Jensen's inequality, \begin{align*} \psi \left(\mathcal M_{\phi, \mu}[\rho]\right) =\Phi\left(\int_X \phi (\rho) d\mu\right) \le \int_{X}\Phi (\phi (\rho)) d\mu = \int_{X} \psi (\rho) d\mu. \end{align*} If $\psi$ is strictly increasing, applying $\psi^{-1}$ to the above inequality, we get $\mathcal M_{\phi, \mu}[\rho] \le \mathcal M_{\psi, \mu}[\rho]$. \end{proof} We now prove Theorem \ref{thm: main}. We remark that the reader may assume $p=1$ throughout without affecting the understanding of the proof. \begin{proof} Assume first $\Sigma=\partial \Omega$ is piecewise $C^1$, and that $\Sigma$ is a union of graphs over finitely many domains in $N$.
This means that there exist open, pairwise disjoint subsets $\{S_i\}_{i=1}^l$ of $N$ with Lipschitz boundary such that $\Sigma$ is represented by \begin{align*} \Sigma=\partial \Omega=\left\{(r, \theta): r=r_{i,j}(\theta), \theta\in \overline S_i, j\in\{1,\cdots, 2k_i\}, i\in\{1, \cdots, l\}\right\} \end{align*} and \begin{align*} \overline \Omega =\left\{(r, \theta): r_{i, 2\kappa-1}(\theta)\le r\le r_{i, 2\kappa}(\theta), \theta\in \overline S_i, \kappa\in\{1,\cdots, k_i\}, i\in\{1,\cdots, l\}\right\}, \end{align*} where \begin{align*} &r_{i,j}\in C^1(S_i)\cap C^0(\overline S_i),\quad j=1,\cdots, 2k_i, \\ &r_{i,1}(\theta)<\cdots< r_{i,2k_i}(\theta)\quad \textrm{for } \theta\in S_i, \\ &r_{i,1}(\theta) \begin{cases} =0\quad \textrm{if }(0, \theta)\in \Omega\\ >0\quad \textrm{if }(0, \theta)\notin \Omega. \end{cases} \end{align*} Let $\displaystyle S=\bigcup_{i=1}^l S_i$; then by direct computation, \begin{align*} \int_\Sigma a(r) dS =&\int_{S} a(r)\left(1+\sum_{q=1}^{p}s_q(r)^{-2}|\nabla_{N_q} \, r|^2_{g_{N_q}}\right)^{\frac{1}{2}}\prod_{q=1}^{p}s_q(r)^{m_q}d\mathrm{vol}_N. \end{align*} Here $\nabla_{N_q}$ is the connection with respect to $g_{N_q}$ and $d\mathrm{vol}_N$ is the $m$-dimensional volume form on $N$. Let \begin{equation*} I:=\int_{\partial \Omega} b(r)dS\textrm{ and }I^{\#}:=\int_{\partial \Omega^{\#}} b(r)dS. \end{equation*} We claim that \begin{equation}\label{ineq: claim} I\ge I^\#. \end{equation} Let $\psi(r)=b(r)A(r)=b(r)\prod_{q=1}^{p}s_q(r)^{m_q}$.
First of all, \begin{equation}\label{ineq: I} \begin{split} I =&\sum_{i=1}^{l}\sum_{j=1}^{2k_i}\int_{S_i}b(r_{i,j})\left(1+\sum_{q=1}^{p}s_q(r_{i,j})^{-2}\left|\nabla_{N_q}\,r_{i,j}\right|^2_{g_{N_q}}\right)^{\frac{1}{2}}\prod_{q=1}^{p}s_q(r_{i,j})^{m_q}d\mathrm{vol}_N\\ \ge&\sum_{i=1}^{l}\sum_{j=1}^{2k_i}\int_{S_i}b(r_{i,j})\prod_{q=1}^{p}s_q(r_{i,j})^{m_q}d\mathrm{vol}_N\\ =&\sum_{i=1}^{l}\sum_{j=1}^{2k_i}\int_{S_i} \psi(r_{i,j}) d\mathrm{vol}_N\\ \ge&\sum_{i=1}^{l}\int_{S_i} \psi(r_{i, 2k_i}) d\mathrm{vol}_N\\ =&\int_{N}\psi(\rho) d\mathrm{vol}_N \end{split} \end{equation} where we define $\rho(\theta):= \begin{cases} r_{i,2k_i}(\theta)\; &\textrm{if }\theta\in S_i,\\ 0 \;&\textrm{if }\theta\in N\setminus S \end{cases} $, noting that $b(0)=0$. On the other hand, for $B_R=\Omega^{\#}$, we have \begin{align*} |B_R| = |N| \int_{0}^{R} A(r)dr=|N|v(R). \end{align*} As $|B_R|=|\Omega|$, it is not hard to see by Fubini's theorem that \begin{equation}\label{eq: R} |B_R| =|\Omega|=\sum_{i=1}^{l}\sum_{j=1}^{2k_i}(-1)^j\int_{S_i} v(r_{i,j}) d\mathrm{vol}_N. \end{equation} Define $R_1$ by \begin{align*} \sum_{i=1}^{l}\sum_{j=1}^{2k_i}(-1)^j\int_{S_i} v(r_{i,j}) d\mathrm{vol}_N \le&\sum_{i=1}^{l}\int_{S_i} v(r_{i,2k_i}) d\mathrm{vol}_N=: |B_{R_1}|, \end{align*} noting that $v(r)$ is increasing. Then by the definition of $V$ and $\rho$, \begin{equation}\label{eq: R1} R_1 = V^{-1} \left(\int_{N} v(\rho) d\mathrm{vol}_N\right). \end{equation} Comparing with \eqref{eq: R}, we have $R_1\ge R$, and as $\psi(r)$ is non-decreasing, \begin{equation}\label{ineq: I sharp} I^{\#}=|N|b(R)A(R)=|N|\psi(R)\le |N| \psi(R_1). \end{equation} Now take $d\mu=\frac{1}{|N|}d\mathrm{vol}_N$.
As $\psi\circ V^{-1}$ is convex, by \eqref{ineq: I}, Jensen's inequality (Proposition \ref{prop: jensen}), \eqref{eq: R1} and \eqref{ineq: I sharp}, \begin{equation}\label{ineq: pf1} \begin{split} \frac{1}{|N|}I \ge\frac{1}{|N|}\int_{N}\psi(\rho) d\mathrm{vol}_N =&\int_{N}\psi(\rho) d\mu\\ \ge& \psi \left(\mathcal M_{V, \mu}[\rho]\right)\\ =& \psi \left(V^{-1}\left( \int_N V (\rho) d\mu\right)\right)\\ =& \psi\left(V^{-1}\left(\int_{N} v(\rho)d\mathrm{vol}_N \right)\right)\\ =&\psi(R_1)\\ \ge&\frac{1}{|N|}I^\#. \end{split} \end{equation} We have proved \eqref{ineq: claim}. Moreover, from \eqref{ineq: I}, if $\psi(r)>0$ for $r>0$, then $r_{i,j}$ must be locally constant and hence $r$ is constant on $\Sigma$. Finally, by the classical isoperimetric inequality, \begin{equation*} \begin{split} \int_{\partial \Omega} a(r)dS =&I+ a(0)\int_{\partial \Omega} dS\\ \ge &I^{\#}+ a(0)\int_{\partial \Omega^{\#}} dS\\ =&\int_{\partial \Omega^{\#}} a(r)dS. \end{split} \end{equation*} In general, for a domain $\Omega$ with Lipschitz boundary, we can approximate $\Omega$ by piecewise $C^1$ domains $\Omega_i$ which satisfy the above conditions. A standard approximation argument will then give the desired result. \end{proof} As explained in Section \ref{sec: intro}, combining Theorem \ref{thm: GLW} (\cite[Theorem 1.2]{GLW}) with Theorem \ref{thm: main}, we have \begin{theorem}\label{thm: glw weighted} Suppose $\Omega$ is a domain in $(M,g)$ with smooth star-shaped boundary. Assume that the Ricci curvature $\mathrm{Ric}_N$ of $g_N$ satisfies $\mathrm{Ric}_N\ge (m-1)K g_N$ and $0\le s'^2-ss''\le K$, where $K>0$ is constant. Suppose $a(r)$ is a positive continuous function such that $b(v^{-1}(u))\,s(v^{-1}(u))^m$ is convex in $u$, where $b(r):=a(r)-a(0)$.
Then the weighted isoperimetric inequality holds: $$\int_{\partial \Omega}a(r)dS\ge \int_{\partial \Omega^\#}a(r)dS.$$ The equality holds if and only if either \begin{enumerate} \item $(M,g)$ has constant curvature, $a(r)$ is constant on $\partial \Omega$ and $\partial \Omega$ is a geodesic hypersphere, or \item $\partial \Omega$ is a slice $\{r=r_0\}$. \end{enumerate} \end{theorem} \begin{proof} The inequality follows from Theorem \ref{thm: GLW} (\cite[Theorem 1.2]{GLW}) and Theorem \ref{thm: main} (see also Remark \ref{rmk: glw}~\eqref{item: mono}). Suppose the equality holds. We have two cases: (i) $a(r)|_{\partial \Omega}\not\equiv a(0)$ and (ii) $a(r)|_{\partial \Omega}\equiv a(0)$.\\ \noindent Case (i). Let $p\in \partial \Omega$ be such that $a(r(p))\ne a(0)$ and $r_0=r(p)$. Then $S=\{q\in \partial \Omega: r(q)=r_0\}$ is clearly closed in $\partial \Omega$. It is also open in $\partial \Omega$ because by \eqref{ineq: I}, $r$ is locally constant on $\{q\in \partial \Omega: a(r(q))\ne a(0)\}$. Therefore $S=\partial \Omega$ and so $\partial \Omega$ is the slice $\{r=r_0\}$. \\ \noindent Case (ii). In this case, we can without loss of generality assume $a\equiv 1$ on $\partial \Omega$. The equality asserts that $\partial \Omega$ is a smooth hypersurface which has minimum area among all graphical hypersurfaces bounding the same volume, and so by the first variation formulas (e.g. \cite[p. 1186]{osserman1978isoperimetric}), if $\Sigma_t$ is a variation of $\Sigma$ with normal variation $u \nu_{\Sigma_t}$, then \begin{align*} \left.\frac{d}{dt}\right|_{t=0}\mathrm{Area}(\Sigma_t) = m \int_{\Sigma} u H_1 dS=0 \end{align*} for all $u$ such that $$\left.\frac{d}{dt}\right|_{t=0}\mathrm{Vol}(\Omega_t) = \int_{\Sigma} u dS=0.$$ This implies that $\Sigma$ has constant mean curvature. It follows from \cite[Corollary 7]{montiel1999unicity} that either $(M,g)$ has constant curvature and $\partial \Omega$ is a geodesic hypersphere, or $\partial \Omega$ is a slice $\{r=r_0\}$.
\end{proof} \begin{remark}\label{rmk: glw} \begin{enumerate} \item\label{item: mono} The monotonicity of $s(r)$ is not assumed in Theorem \ref{thm: glw weighted} for the same reason as in Theorem \ref{thm star}. Also, from the proof of Theorem \ref{thm: main}, we only need the classical isoperimetric inequality to hold for star-shaped domains for Theorem \ref{thm: glw weighted} to hold. \item By direct computation, \begin{align*} &\frac{d^2}{du^2}\left(b( v^{-1}(u)) \;s (v^{-1}(u))^m\right)\\ =&\frac{1}{s(r)^{m+2}}\left(s(r)^2b''(r)+ms(r)s'(r)b'(r)+mb(r)\left(s(r)s''(r)-s'(r)^2\right)\right), \end{align*} where $r=v^{-1}(u)$. So the convexity of $b( v^{-1}(u)) \;s( v^{-1}(u))^m$ can be rephrased as \begin{equation}\label{eq: second der} s(r)^2b''(r)+ms(r)s'(r)b'(r)-mb(r)\left(s'(r)^2-s(r)s''(r)\right)\ge 0. \end{equation} This condition is often easier to check, as $v^{-1}$ is usually not very explicit. \end{enumerate} \end{remark} \section{Isoperimetric inequalities involving weighted volume}\label{sec: weighted vol} In this section, we consider a variant of Theorem \ref{thm: main} involving a weighted volume. In particular, we prove an isoperimetric result without assuming the classical isoperimetric inequality to hold on $M$. We consider the weighted volume defined by $$ \mathrm{Vol}_c(\Omega):=\int_{\Omega}c(r)dv_g$$ where $dv_g$ is the $(m+1)$-dimensional volume form with respect to $g$ and $c(r)> 0$ is a radially symmetric weight function. Obviously, this is just the ordinary volume if $c\equiv 1$. We define $\widetilde \Omega^\#$ to be the region $B_R$ which has the same weighted volume as $\Omega$, i.e. $\mathrm{Vol}_c(\widetilde \Omega^\#)=\mathrm{Vol}_c(\Omega)$. Our goal is to look for conditions such that \begin{align*} \int_{\partial \Omega}a(r)dS \ge\int_{\partial \widetilde \Omega^\#}a(r)dS.
\end{align*} We define the functions $A(r)$, $\widetilde v(r)$ and $\widetilde V(r)$ by \begin{align*} A(r):= \prod_{q=1}^{p}s_q(r)^{m_q},\; \widetilde v(r):= \int_{0}^{r}c(t) A(t) dt\quad \textrm{ and }\; \widetilde V(r):= \int_{B_r}c \,dv_g =|N|\widetilde v(r). \end{align*} \begin{theorem}\label{thm weighted vol} Let $\Omega$ be a bounded open set in $(M,g)$ with Lipschitz boundary. Assume that \begin{enumerate} \item\label{cond1'} The projection map $\pi: \partial \Omega\to N$ defined by $(r, \theta)\mapsto \theta$ is surjective. \item\label{cond2'} $a(r)$ is a positive continuous function such that $\psi(r):=a(r)A(r)$ is non-decreasing, \item \label{cond3'} The function $\psi\circ \widetilde V^{-1}$ is convex. \end{enumerate} Then \begin{align*} \int_{\partial \Omega} a(r)dS \ge\int_{\partial \widetilde\Omega^{\#}} a(r)dS. \end{align*} The equality holds if and only if $r=\mathrm{constant}$, i.e. $\partial \Omega$ is a coordinate slice. \end{theorem} \begin{proof} We use the same notations as the proof of Theorem \ref{thm: main}. Since the proof is similar, we only indicate where changes are made. We only prove the case where $\Sigma=\partial \Omega$ is piecewise $C^1$, and that $\Sigma$ is a union of graphs over finitely many domains on $N$. So we define $r_{i,j}$ as before. Note that $N=\bigcup_{i=1}^l \overline {S_i}$ by Condition \eqref{cond1'}. Let \begin{equation*} I:=\int_{\partial \Omega} a(r)dS\textrm{ and }I^{\#}:=\int_{\partial \widetilde\Omega^{\#}} a(r)dS. \end{equation*} As in \eqref{ineq: I}, \begin{equation*} \begin{split} I =&\sum_{i=1}^{l}\sum_{j=1}^{2k_i}\int_{S_i}a(r_{i,j})\prod_{q=1}^{p}\left(1+s_q(r_{i,j})^{-2}\left|\nabla_{N_q}\, r_{i,j}\right|^2_{g_{N_q}}\right)^{\frac{1}{2}}s_q(r_{i,j})^{m_q}d\mathrm{vol}_N\\ \ge&\sum_{i=1}^{l}\int_{S_i} \psi(r_{i, 2k_i}) d\mathrm{vol}_N\\ =&\int_{N}\psi(\rho) d\mathrm{vol}_N \end{split} \end{equation*} where $\rho: N\to [0, \infty)$ is defined by $\rho(\theta):= \max\{r: (r, \theta)\in \Sigma\} $. 
On the other hand, for $B_R=\widetilde\Omega^{\#}$, we have \begin{align*} \mathrm{Vol}_c (B_R) = |N| \int_{0}^{R} c(r)A(r)dr=|N|\widetilde v(R). \end{align*} As in \eqref{eq: R}, define $R_1$ by \begin{align*} \mathrm{Vol}_c (B_R) = \mathrm{Vol}_c(\Omega ) =&\sum_{i=1}^{l}\sum_{j=1}^{2k_i}(-1)^j\int_{S_i} \widetilde v(r_{i,j}) d\mathrm{vol}_N\\ \le&\sum_{i=1}^{l}\int_{S_i} \widetilde v(r_{i,2k_i}) d\mathrm{vol}_N=: \mathrm{Vol}_c(B_{R_1}) . \end{align*} Then analogous to \eqref{eq: R1} and \eqref{ineq: I sharp}, we have \begin{equation}\label{ineq: I sharp'} R_1 = \widetilde V^{-1} \left(\int_{N} \widetilde v(\rho) d\mathrm{vol}_N\right) \; \textrm{and}\; I^{\#}=|N|\psi(R)\le |N| \psi(R_1). \end{equation} As $\psi\circ \widetilde V^{-1}$ is convex, it is clear that we can proceed as in \eqref{ineq: pf1} to show that $I\ge I^\#$. The analysis of the equality case and the general case is proved similarly as in Theorem \ref{thm: main}. \end{proof} Theorem \ref{thm1} immediately follows from Theorem \ref{thm weighted vol}. For Theorem \ref{thm2}, note that the only place where we have used the monotonicity of $\psi$ (or $s$ in the context of Theorem \ref{thm2}) is \eqref{ineq: I sharp'}. But since $\Sigma$ is star-shaped, $R_1=R$ and the monotonicity condition is not needed. Let us state only the following version for later use. \begin{theorem}\label{thm star} Let $\Omega$ be a bounded open set in $(M,g)$ with Lipschitz star-shaped boundary. Suppose $a(r)$ is positive such that the function $\psi\circ \widetilde V^{-1}$ is convex, where $\psi(r)=a(r)A(r)$. Then \begin{align*} \int_{\partial \Omega} a(r)dS \ge\int_{\partial \widetilde\Omega^{\#}} a(r)dS. \end{align*} The equality holds if and only if $r=\mathrm{constant}$, i.e. $\partial \Omega$ is a coordinate slice. \end{theorem} \section{Some concrete examples}\label{sec: eg} In this section, we provide some concrete examples of how Theorem \ref{thm3} can be used to obtain some interesting geometric inequalities. 
In all the examples below, the metric $g$ on $M$ is given by $g=dr^2+s(r)^2 g_{\mathbb S^{m}}$, and the convexity of $b(V^{-1}(u)) s(V^{-1}(u))^m$ is directly checked by using \eqref{eq: second der}. The computations have all been verified by Mathematica. \begin{enumerate} \item On the Euclidean space $\mathbb R^{n}$, the warping function is $s(r)= r$. Choosing $a(r)=b(r)= r^k $, we have \begin{equation}\label{ineq: rk} \int_{\partial \Omega} r^k \,dS\ge \int_{\partial \Omega^\#} r^k \,dS \end{equation} if $k\ge 1$. When $k=0$, this is just the classical isoperimetric inequality. \item On the hyperbolic space $\mathbb H^n$, the warping function is $s(r)=\sinh r$. Choosing $a(r)=b(r)=\sinh^k (r)$, we have \begin{equation*} \int_{\partial \Omega} \sinh^k r \,dS\ge \int_{\partial \Omega^\#} \sinh^k r \,dS \end{equation*} if $k\ge 1$. Similarly we also have $$\int_{\partial \Omega} \cosh r \,dS\ge \int_{\partial \Omega^\#} \cosh r \,dS$$ and $$\int_{\partial \Omega} (\cosh r-1)^k \,dS\ge \int_{\partial \Omega^\#} (\cosh r-1)^k \,dS$$ if $k\ge 1$. \item On the open hemisphere $\mathbb S^{n}_+$, the warping function is $s(r)=\sin r$, $(0<r<\frac{\pi}{2})$. Choosing $a(r)=b(r)=\tan^k (r)$, we have $$\int_{\partial \Omega} \tan^k r \,dS\ge \int_{\partial \Omega^\#} \tan^k r \,dS$$ if $k\ge 1$ and $\Omega\subset \mathbb S^n_+$. Similarly we also have $$\int_{\partial \Omega} (1-\cos r)^k \,dS\ge \int_{\partial \Omega^\#} (1-\cos r)^k \,dS$$ if $k\ge 1$. \end{enumerate} In all the above examples, we can convert the inequalities into a form which involves the volume of $\Omega$: \begin{align*} \int_{\partial \Omega}a(r)dS\ge |N|a(R)A(R) \end{align*} where $R=V^{-1}(|\Omega|)$.
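The convexity checks above, which the text notes were verified with Mathematica, can also be reproduced with an open-source computer algebra system. The following sympy sketch (illustrative only, not part of the proofs) symbolically verifies the second-derivative formula behind \eqref{eq: second der} and factors the condition in the Euclidean case $s(r)=r$, $b(r)=r^k$, recovering the requirement $k\ge 1$.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
m, k = sp.symbols('m k', positive=True)
s = sp.Function('s')(r)
b = sp.Function('b')(r)

# In the variable u = v(r), where v'(r) = s(r)^m, one has d/du = s(r)^(-m) d/dr.
F = b * s**m
d2F_du2 = sp.diff(sp.diff(F, r) / s**m, r) / s**m

# Right-hand side of the remark: (s^2 b'' + m s s' b' + m b (s s'' - s'^2)) / s^(m+2)
claimed = (s**2 * b.diff(r, 2) + m * s * s.diff(r) * b.diff(r)
           + m * b * (s * s.diff(r, 2) - s.diff(r)**2)) / s**(m + 2)
identity_residual = sp.simplify(sp.expand(d2F_du2 - claimed))  # expect 0

# Euclidean example: s(r) = r, b(r) = r^k.  Condition (eq: second der) reads
# r^2 b'' + m r b' - m b >= 0, which factors as (k-1)(k+m) r^k >= 0, i.e. k >= 1.
euclid = sp.factor(r**2 * sp.diff(r**k, r, 2) + m * r * sp.diff(r**k, r) - m * r**k)
```

The same routine applied with $s(r)=\sinh r$ or $s(r)=\sin r$ reproduces the hyperbolic and spherical cases.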
For example, in $\mathbb R^n$, the inequality $$\int_{\partial \Omega}r^k dS\ge \int_{\partial \Omega^\#}r^k dS$$ is equivalent to \begin{equation}\label{ineq: Rn} \int_{\partial \Omega}r^k dS\ge n \beta_n ^{-\frac{k-1}{n}} \mathrm{Vol}(\Omega) ^{\frac{n-1+k}{n}} \end{equation} for $k\ge 0$, where $\beta_n$ is the volume of the unit ball $B$ in $\mathbb R^n$. For other spaces, the inequality is not as explicit because $V^{-1}$ is not explicit except when $n=2$. \section{Weighted isoperimetric theorems involving higher order mean curvatures}\label{sec: higher} In this section, we generalize the weighted isoperimetric inequality in a warped product manifold to some inequalities involving the weighted integrals of the higher order mean curvatures. This is closely related to the quermassintegral inequalities (\cite{Guan-Li}), which include as a special case the isoperimetric inequality, since the area integral can be interpreted as the integral of the zeroth mean curvature $H_0=1$. From now on, our warped product manifold is $M^{n} = [0,\lambda)\times {N}^{n-1}$ equipped with the metric $g=dr^2+s(r)^2 g_N$. Before stating the main theorem, we give some definitions which are useful in studying the extrinsic geometry of hypersurfaces. On a hypersurface $\Sigma$ in $M$, we define the normalized $k$-th mean curvature function \begin{eqnarray} H_k:=H_k(\Lambda)=\frac{1}{\binom{n-1}{k}}\sigma_k(\Lambda), \end{eqnarray} where $\Lambda=(\lambda_1,\cdots,\lambda_{n-1})$ are the principal curvature functions on $\Sigma$ and the homogeneous polynomial $\sigma_k$ of degree $k$ is the $k$-th elementary symmetric function \[ \sigma_k(\Lambda)=\sum_{i_1<\cdots<i_{k}}\lambda_{i_1}\cdots\lambda_{i_k}. \] We adopt the usual convention $\sigma_0=H_{0}=1$. The $k$-th Newton transformation $T_k: T\Sigma \rightarrow T \Sigma$ (cf. \cite{reilly1973variational}) is useful in studying the extrinsic geometry of $\Sigma$, and is defined as follows.
If we write \begin{equation*} T_k ( e_j ) =\sum_{i=1}^{n-1} ( T_k )_j^i e_{i}, \end{equation*} then $ (T_k) _j^i $ are given by $$ {(T_k)}_j^{\,i}= \frac 1 {k!} \sum_{\substack{1 \le i_1,\cdots, i_k \le n-1\\ 1\le j_1, \cdots, j_k \le n-1}} \delta^{i i_1 \ldots i_k}_{j j_1 \ldots j_k} B_{i_1}^{j_1}\cdots B_{i_k}^{j_k} $$ where $B$ is the second fundamental form of $\Sigma$. One also defines $T_0 = \mathrm{Id}$, the identity map. We define the vector field $X = s(r) \, \frac{\partial}{\partial r}$ and the potential function $c(r) = s'(r)$. Note that $X$ is a conformal Killing vector field: $\mathcal L_X g = 2 c g$ \cite[Lemma 2.2]{B2013}. The warped product manifold is somewhat special in that there exists a nontrivial conformal Killing vector field, which in turn leads to some nice formulas of Hsiung-Minkowski types. We will need the following weighted Hsiung-Minkowski formulas (cf. \cite[Proposition 2.1]{Kwo2016}, \cite[Proposition 1]{KLP}): \begin{proposition}[Weighted Hsiung-Minkowski formulas]\label{prop: HM} Suppose $\eta$ is a smooth function on a closed hypersurface $\Sigma$ in $M$, then for $1\le k \le n-1$, we have \begin{equation*} \label{weighted in} \begin{split} \int_\Sigma \eta c H_{k-1}dS =&\int_\Sigma \eta H_{k} \langle X, \nu\rangle dS -\frac{1}{k{{n-1}\choose k}}\int_\Sigma \eta \left(\mathrm{div}_\Sigma T_{k-1}\right)(X^T)dS\\ &-\frac{1}{k{{n-1}\choose k}}\int_\Sigma \langle T_{k-1}(X^T), \nabla _\Sigma \eta\rangle dS, \end{split} \end{equation*} where $\nu$ is the unit normal vector, $X=s (r) \partial _r$ and $X^T$ is the tangential component of $X$ onto $T\Sigma$. \end{proposition} \begin{proof} For completeness we sketch the proof here. Let $m=n-1$. 
We compute \begin{align*} &\mathrm{div}_\Sigma \left(\eta T_k (X^T)\right)\\ =&\langle T_k(\nabla \eta), X^T\rangle+ \eta(\mathrm{div}\;T_k)(X^T)+ \frac{1}{2}\eta\langle T_k, \iota^* (\mathcal{L}_Xg )\rangle -\eta\langle T_k, B\rangle \langle X, \nu\rangle\\ =&\langle T_k(\nabla \eta), X^T\rangle+ \eta(\mathrm{div}\;T_k)(X^T)+ c \eta\langle T_k, \iota^*g \rangle -\eta\langle T_k, B\rangle \langle X, \nu\rangle\\ =&\langle T_k(\nabla \eta), X^T\rangle+ \eta(\mathrm{div}\;T_k)(X^T)+ (m-k){m\choose k}c \eta H_k -(k+1){m\choose {k+1}}H_{k+1}\eta \langle X, \nu\rangle \end{align*} where $\iota$ is the inclusion of $\Sigma$ in $M$ and we used the fact that $\mathrm{tr}_\Sigma(T_k)= (m-k){m\choose k}H_k$ and $\langle T_k, B\rangle =(k+1){m\choose {k+1}} H_{k+1}$. Applying the divergence theorem will then give the result. \end{proof} \begin{lemma} \label{T positive} Suppose $N$ has constant curvature $K$ and $\Sigma$ is a star-shaped hypersurface with $H_p>0$. Assume that $s'>0$ for $r>0$ and $s'(r)^2-s(r)s''(r)\le K$. Then \begin{enumerate} \item\label{item: 1} For all $k \in \{1, \cdots, p-1\}$, we have $T_k>0$ and $H_k>0$. \item\label{item: 2} For $k \in \{2, \cdots, p\}$, \begin{equation}\label{eq: div} (\mathrm{div}_\Sigma T_{k-1}) (X^T)\ge 0, \end{equation} where $X^T$ is the tangential component of $X$ onto $T\Sigma$. \end{enumerate} \end{lemma} \begin{proof} This is essentially \cite[Lemma 1, Proposition 1]{KLP} or \cite[Section 2]{BE2013}, despite some minor differences in the assumptions. \eqref{item: 1} is proved in \cite[Lemma 1 (2b)]{KLP}. \eqref{item: 2} follows the same proof as in \cite[Proposition 1 (2)]{KLP}. In the proof of \cite[Proposition 1 (2)]{KLP}, it is assumed that $K>s'(r)^2-s(r)s''(r)$ (Condition (H4) in \cite{BE2013}, \cite{B2013}) and $\langle X, \nu\rangle >0$, but since we only require non-strict inequality in \eqref{eq: div}, the conclusion still holds under our assumption. 
We remark that we need $N$ to have constant curvature because conformal flatness of $g$ is essential in the formula of $\left(\mathrm{div}_{\Sigma}T_{k-1}\right)(X^T)$ on p. 393 in \cite{BE2013}. \end{proof} \begin{theorem}\label{thm: mean curvature} Suppose $\Omega$ is a domain in $M$ whose boundary is a smooth star-shaped hypersurface with $H_1 \ge 0$. Assume that $s'>0$ for $r>0$ and $s'(r)^2-s(r)s''(r)\ge 0$. Then for $l\ge 1$, \begin{align*} |N|^{-\frac{l-1}{n}} \left(n\int_{\Omega}c(r) dv\right)^{\frac{n+l-1}{n}} \le \int_{\partial \Omega} H_1 s(r)^{l+1} c(r)^{-1} \,dS. \end{align*} The equality holds if and only if $\partial \Omega$ is a slice. \end{theorem} \begin{proof} We will prove the following chain of inequalities: \begin{equation*} \begin{split} |N|^{-\frac{l-1}{n}} \left(n\int_{\Omega}c(r) dv\right)^{\frac{n+l-1}{n}} \le \int_{\partial \Omega} s(r)^{l} dS \le \int_{\partial \Omega} H_1 s(r)^{l+1} {c(r)}^{-1} \,dS. \end{split} \end{equation*} Define $a(r)=s(r)^l$. As $A(r)=s(r)^{n-1}$, $\psi(r)=s(r)^{n-1+l}$ and $\widetilde v(r)=\int_{0}^{r}s(t)^{n-1} c (t) dt=\frac{1}{n} s(r)^n$, we have \begin{align*} \psi\circ \widetilde v^{-1}(u) =& \left( n u\right)^{\frac{n-1+l}{n}} \end{align*} which is clearly convex as $l\ge 1$. So by Theorem \ref{thm star}, we have \begin{equation}\label{eq: star1} \int_{\partial \Omega}s(r)^l dS\ge \int_{\partial \widetilde \Omega^\#}s(r) ^l dS =|N|^{-\frac{l-1}{n}} \left(n\int_{\Omega}c(r) dv\right)^{\frac{n+l-1}{n}}. \end{equation} We now simply denote $s(r)$ by $s$ and $c(r)$ by $c$.
Applying the weighted Hsiung-Minkowski formula (Proposition \ref{prop: HM}), we have \begin{align*} \int_{\partial \Omega} s^l dS =\int_{\partial \Omega} c\frac{s^l}{c} dS =&\int_{\partial \Omega} \frac{s^l}{c} H_1 \langle X, \nu \rangle-\frac{1}{n-1} \int_{\partial \Omega} \left\langle \nabla \left(\frac{s^l}{c}\right), s \nabla r\right\rangle dS\\ =&\int_{\partial \Omega} \frac{s^l}{c} H_1 \langle X, \nu \rangle-\frac{1}{n-1} \int_{\partial \Omega} \left\langle \frac{s^{l-1} }{c^2}(lc^2-ss'')\nabla r, s \nabla r\right\rangle dS\\ \le&\int_{\partial \Omega} \frac{s^l}{c} H_1 \langle X, \nu \rangle dS\\ \le&\int_{\partial \Omega} \frac{s^{l+1} }{c} H_1 dS. \end{align*} \end{proof} To relate the weighted volume to the integral of the higher order mean curvatures, we need stronger assumptions. \begin{theorem}\label{thm: higher} Suppose $N$ has constant curvature $K$ and $\Sigma$ is a closed star-shaped hypersurface which is the boundary of a domain $\Omega$. Assume that $s'>0$ for $r>0$ and $0\le s'(r)^2-s(r)s''(r)\le K$ and $H_k>0$ on $\Sigma$. Then for $l\ge 1$, \begin{align*} |N|^{-\frac{l-1}{n}} \left(n\int_{\Omega}c(r) \, dv\right)^{\frac{n+l-1}{n}} \le \int_{\partial \Omega} H_k s(r)^{l+k} c(r)^{-k} \,dS. \end{align*} The equality holds if and only if $\Sigma$ is a slice. \end{theorem} \begin{proof} We actually prove the following stronger statement: \begin{equation}\label{ineq: chain} \begin{split} |N|^{-\frac{l-1}{n}} \left(n\int_{\Omega}c(r) dv\right)^{\frac{n+l-1}{n}} \le \int_{\partial \Omega} s(r)^{l} dS \le& \int_{\partial \Omega} H_1 s(r)^{l+1} c(r)^{-1} dS\\ \le& \cdots\\ \le& \int_{\partial \Omega} H_k s(r)^{l+k} c(r)^{-k} dS. \end{split} \end{equation} The first two inequalities have already been proved in Theorem \ref{thm: mean curvature}. We prove the remaining inequalities by induction. Again denote $s(r)$ by $s$ and $c(r)$ by $c$. 
By the weighted Hsiung-Minkowski formula (Proposition \ref{prop: HM}) and Lemma \ref{T positive}, for $1\le j\le k-1$, we have \begin{align*} &\int_{\partial \Omega} \frac{s^{l+j}}{c ^j }H_j dS\\ =&\int_{\partial \Omega} \frac{s^{l+j} }{c^{j+1} } H_{j+1} \langle X, \nu \rangle dS -\frac{1}{j{{n-1}\choose j}} \int_{\partial \Omega} \left\langle \frac{s^{l+j-1} }{c ^{j+2} }((l+j)c^2 -(j+1)ss'' )T_j(\nabla r), s \nabla r\right\rangle dS\\ \le&\int_{\partial \Omega} \frac{s^{l+j} }{c^{j+1} } H_{j+1} \langle X, \nu \rangle dS\\ \le&\int_{\partial \Omega} \frac{s^{l+j+1} }{c^{j+1} } H_{j+1}dS. \end{align*} \end{proof} We note that the conditions in Theorem \ref{thm: mean curvature} and Theorem \ref{thm: higher} are satisfied in the following space forms: \begin{enumerate} \item The Euclidean space $\mathbb R^n$ with metric $dr^2+r^2 g_{\mathbb S^{n-1}}$. \item The hyperbolic space $\mathbb H^n$ with metric $dr^2+\sinh^2 r g_{\mathbb S^{n-1}}$. \item The open hemisphere $\mathbb S^n_+$ with metric $dr^2+\sin^2 r g_{\mathbb S^{n-1}}$. \end{enumerate} In the following corollaries, we denote the point $\{r=0\}$ by $0$ and the volume of the unit ball in $\mathbb R^n$ by $\beta_n$. We obtain the following corollaries. \begin{corollary}\label{cor: Rn} Let $\Sigma$ be a closed embedded hypersurface in $\mathbb R^{m+1}$ which is star-shaped with respect to $0$ and $\Omega$ is the region enclosed by it. Assume that $H_k>0$ on $\Sigma$. Then for any integer $l\ge 0$, \begin{align*} n \beta_n^{-\frac{l-1}{n}}\mathrm{Vol}(\Omega)^{\frac{n-1+l}{n}} \le\int_{\Sigma} H_kr^{l+k}dS. \end{align*} \end{corollary} \begin{proof} The case where $l\ge 1$ follows directly from Theorem \ref{thm: higher}. If $k=l=0$, this is the ordinary isoperimetric inequality. So \eqref{eq: star1} is still true when $l=0$, and we can perform induction starting from this case to show the assertion when $l= 0$. 
\end{proof} \begin{remark} If $l=1$, the inequality becomes \begin{align*} n\mathrm{Vol} (\Omega) \le \int _{\partial \Omega} H_kr^{k+1}dS. \end{align*} In particular, if $k=1$, it is easily seen from the proof that the assumption can be weakened to $H_1\ge 0$ because $T_0=\mathrm{id}$ is always positive. Corollary \ref{cor: Rn} extends \eqref{ineq: Rn} in Section \ref{sec: eg} and also generalizes \cite[Theorem 2]{KM2014}. See also \cite[Theorem 2]{kwong2015monotone}. \end{remark} \begin{corollary} Let $\Sigma$ be a closed embedded hypersurface in $\mathbb H^{m+1}$ which is star-shaped with respect to $0$ and let $\Omega$ be the region enclosed by it. Assume that $H_k>0$ on $\Sigma$. Then for any integer $l\ge 1$, \begin{align*} n \beta_n^{-\frac{l-1}{n}} \left(\int_{\Omega}\cosh r \,dv\right)^{\frac{n+l-1}{n}} \le \int_{\partial \Omega} H_k \sinh^l r\tanh ^k r \,dS. \end{align*} \end{corollary} \begin{corollary} Let $\Sigma$ be a closed embedded hypersurface in $\mathbb S_+^{m+1}$ which is star-shaped with respect to $0$ and let $\Omega$ be the region enclosed by it. Assume that $H_k>0$ on $\Sigma$. Then for any integer $l\ge 1$, \begin{align*} n \beta_n^{-\frac{l-1}{n}} \left(\int_{\Omega}\cos r \, dv\right)^{\frac{n+l-1}{n}} \le \int_{\partial \Omega} H_k \sin^l r\tan ^k r \,dS. \end{align*} \end{corollary} It is also possible to prove results analogous to Theorem \ref{thm: higher} for standard space forms by extending the inequalities in Section \ref{sec: eg} using the weighted Hsiung-Minkowski inequalities; we will not do this here for the sake of simplicity. \section{Applications to eigenvalue estimates}\label{sec: eigen} In this section, we apply Theorem \ref{thm3} to obtain some sharp eigenvalue estimates. We define $\lambda_1(T_k)$ to be the first eigenvalue of the symmetric second order differential operator $-\mathrm{div}(T_k\circ\nabla )$ on $\Sigma$.
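The eigenvalue estimate below, like the proof of Proposition \ref{prop: HM}, uses the pointwise identities $\mathrm{tr}_\Sigma(T_k)=(m-k)\binom{m}{k}H_k$ and $\langle T_k, B\rangle=(k+1)\binom{m}{k+1}H_{k+1}$. In a frame diagonalizing the second fundamental form, $(T_k)_i^i$ is the $k$-th elementary symmetric polynomial of the principal curvatures other than $\lambda_i$, so both identities can be sanity-checked numerically; a plain-Python sketch (illustrative only, with arbitrary random principal curvatures):

```python
import itertools
import math
import random

def sigma(k, lams):
    # k-th elementary symmetric polynomial of the principal curvatures
    if k == 0:
        return 1.0
    return sum(math.prod(c) for c in itertools.combinations(lams, k))

random.seed(0)
m, k = 5, 2
lams = [random.uniform(0.1, 2.0) for _ in range(m)]  # principal curvatures

# With B diagonalized, T_k is diagonal and (T_k)_i^i = sigma_k of the other curvatures.
Tk_diag = [sigma(k, lams[:i] + lams[i + 1:]) for i in range(m)]

Hk = sigma(k, lams) / math.comb(m, k)            # normalized k-th mean curvature
Hk1 = sigma(k + 1, lams) / math.comb(m, k + 1)

trace_Tk = sum(Tk_diag)                              # tr(T_k)
pairing = sum(t * l for t, l in zip(Tk_diag, lams))  # <T_k, B>
```

Here `trace_Tk` agrees with $(m-k)\binom{m}{k}H_k$ and `pairing` with $(k+1)\binom{m}{k+1}H_{k+1}$, since each degree-$k$ (resp. degree-$(k+1)$) monomial is counted the appropriate number of times.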
Note that $\lambda_1(T_0)$ is just the first Laplacian eigenvalue. We now give an application of our main result to eigenvalue estimation. The following theorem generalizes \cite[Theorem 1.2]{wang2010isoperimetric}, which corresponds to the case where $k=0$. \begin{theorem}\label{thm: lambda} \label{thm: lambda1} Let $\Sigma$ be a closed embedded hypersurface in $\mathbb R^{m+1}$ enclosing a region $\Omega$. Then \begin{align*} \lambda_1(T_k)\le \frac{(m-k){m\choose k}\beta_n^{\frac{1}{n}}}{n \mathrm{Vol}(\Omega)^{\frac{n+1}{n}}}\int_{\partial \Omega} H_k dS \end{align*} where $n=m+1$ and $\beta_n$ is the volume of the unit ball in $\mathbb R^n$. The equality holds if and only if $\Sigma$ is a sphere. \end{theorem} \begin{proof} By a suitable translation, we can assume that $\int_{\partial \Omega}x_i\,dS=0$ for $i=1, \cdots, n$. By Theorem \ref{thm3}, we have \begin{equation}\label{ineq: volr} \int_{\partial \Omega} r^2 dS \ge \int_{\partial \Omega^\#}r^2 dS = n \beta_n^{-\frac{1}{n}} \mathrm{Vol}(\Omega)^{\frac{n+1}{n}}. \end{equation} By the variational characterization of $\lambda_1(T_k)$ and the fact that $\textrm{tr}_\Sigma(T_k)=(m-k){m\choose k}H_k$, we have \begin{equation}\label{ineq: lambda} \begin{split} \lambda_1(T_k) \int_{\partial \Omega} r^2 \,dS =&\lambda_1(T_k)\sum_{i=1}^n \int_{\partial \Omega} {x_i} ^2 \,dS \le \int_{\partial \Omega} \sum_{i=1}^n \langle T_k (\nabla x_i), \nabla x_i\rangle dS\\ =&\int_{\partial \Omega}\sum_{j,l=1}^m \left(\sum_{i=1}^n(\nabla _{e_j}x_i)(\nabla _{e_l}x_i) \right)(T_k)_j^l dS\\ =&\int_{\partial \Omega} \mathrm{tr}_\Sigma(T_k) dS\\ =&\int_{\partial \Omega} (m-k){m\choose k}H_k dS. \end{split} \end{equation} Therefore, combining the two inequalities, we have \begin{align*} n \beta_n^{-\frac{1}{n}} \mathrm{Vol}(\Omega)^{\frac{n+1}{n}} \lambda_1(T_k) \le (m-k){m\choose k}\int_\Sigma H_k dS.
\end{align*} If the equality holds, then by Theorem \ref{thm3}, $\partial \Omega$ is a sphere. \end{proof} \begin{corollary} Let $\Sigma$ be a closed embedded hypersurface in $\mathbb R^{m+1}$ enclosing a region $\Omega$. Then the first Laplacian eigenvalue $\lambda_1(\Sigma)$ on $\Sigma$ satisfies \begin{align*} \lambda_1(\Sigma)\le \frac{m\beta_n^{\frac{1}{n}} \mathrm{Area}( \Sigma)}{n\mathrm{Vol}(\Omega)^{\frac{n+1}{n}}}. \end{align*} The equality is attained if and only if $\Omega$ is a ball. \end{corollary} To state our next result, we need to define the Steklov eigenvalues, as follows. Let $(\Omega,g)$ be a compact Riemannian manifold with smooth boundary $\partial \Omega=\Sigma$. The first nonzero Steklov eigenvalue is defined as the smallest eigenvalue $p\ne0$ of the following Steklov problem (\cite{Stekloff}) \begin{equation}\label{eq: stek} \begin{cases} \Delta f =0\quad &\textrm{on }\Omega\\ \frac{\partial f}{\partial \nu}=p f \quad &\textrm{on }\partial \Omega \end{cases} \end{equation} where $\nu$ is the unit outward normal of $\partial \Omega$. Physically, this describes the stationary heat distribution in a body $\Omega$ whose flux through $\partial \Omega$ is proportional to the temperature on $\partial \Omega$. It is known that the Steklov boundary problem \eqref{eq: stek} has a discrete spectrum $$0=p_0< p_1\le p_2\le\cdots \to \infty.$$ Moreover, $p_1$ has the following variational characterization (e.g. \cite[Equation 2.3]{KS}) \begin{equation}\label{eq: p1} p_1(\Omega)=\min_{\int_{\partial \Omega}f dS=0} \frac{\int_\Omega|\nabla f|^2 dv}{\int_{\partial \Omega}f^2 dS}. \end{equation} We will now prove an upper bound of $p_1(\Omega)$ with techniques similar to those in Theorem \ref{thm: lambda}. \begin{theorem} For a domain $\Omega$ in $\mathbb R^n$ with smooth boundary, the first Steklov eigenvalue $p_1$ satisfies \begin{equation*} p_1(\Omega)\le \left(\frac{\beta_n}{\mathrm{Vol}(\Omega)}\right)^{\frac{1}{n}}.
\end{equation*} The equality is attained if and only if $\Omega$ is a ball. \end{theorem} \begin{proof} As in the proof of Theorem \ref{thm: lambda}, we assume $\int_{\partial \Omega}x_i dS=0$ for $i=1, \cdots, n$. So by \eqref{ineq: volr} and \eqref{eq: p1}, \begin{align*} n \beta_n^{-\frac{1}{n}} \mathrm{Vol}(\Omega)^{\frac{n+1}{n}} \le \int_{\partial \Omega} r^2 dS =& \sum_{i=1}^{n}\int_{\partial \Omega} x_i^2 dS\\ \le& \sum_{i=1}^{n}\frac{1}{p_1} \int_{\Omega}|\nabla x_i|^2dv\\ =&\frac{1}{p_1} n\mathrm{Vol}(\Omega). \end{align*} From this we obtain \begin{equation*} p_1\le \left(\frac{\beta_n}{\mathrm{Vol}(\Omega)}\right)^{\frac{1}{n}}. \end{equation*} If the equality holds, then by Theorem \ref{thm3}, $\partial \Omega$ is a sphere. \end{proof} \section{The necessity of the conditions}\label{sec: necess} In this section, we examine the necessity of the conditions in Theorem \ref{thm1}, the classical isoperimetric inequality. First, we consider the condition that $A\circ V^{-1}$ is a convex function (Assumption \ref{cond: convex}), or equivalently that $ss''-s'^2\ge 0$ if $s$ is twice differentiable. We will show that this condition is necessary in the following sense: \begin{proposition}\label{prop: necess} Suppose $s(r_0)s''(r_0)-s'(r_0)^2<0$; then there exists a compact Riemannian manifold $(N, g_N)$ such that for the warped product manifold $([0, \lambda)\times N, dr^2+s(r)^2g_N)$, the coordinate slice $\Sigma=\{r=r_0\}$ fails to be area-minimizing among nearby hypersurfaces enclosing the same volume. \end{proposition} \begin{proof} The second variation formula for area on a constant-mean-curvature hypersurface $\Sigma$, which is a critical point of the area functional subject to the constraint that a fixed amount of volume is enclosed, reads (e.g.
\cite[Equation (1)]{CY}) \begin{align*} \left.\frac{d^2}{dt^2}\right|_{t=0}\mathrm{Area}(\Sigma_t) = \int_{\Sigma} \left(|\nabla_\Sigma u|^2-\left(|B|^2+\mathrm{Ric}_g (\nu, \nu)\right)u^2\right)dS \end{align*} where the 1-parameter family of deformations of $\Sigma$ is given by $\Sigma_t=\phi(\Sigma, t)$ with $\Sigma_0=\Sigma$, $|B|^2$ is the square norm of the second fundamental form, $u$ is a function on $\Sigma$ and the normal variation is $u\nu_{\Sigma_t}$. In order to preserve the volume, we require $\int_\Sigma u dS=0$. Suppose $s(r_0)s''(r_0)-s'(r_0)^2=-k<0$. Choose a compact Riemannian manifold $(N, g_N)$ such that its first Laplacian eigenvalue $\lambda_1(g_N)<m k$. Such an $N$ can, for example, be a sphere $\mathbb S^m(R)$ with sufficiently large radius $R$ such that $\frac{1}{R^2}<k$. Let $M=[0, \lambda)\times N$ equipped with the metric $g=dr^2+s(r)^2 g_N$; then it is straightforward to compute that \begin{equation}\label{eq: ric} |B|^2+\mathrm{Ric}_g(\nu, \nu)=m\left(\frac{s'(r_0)^2-s(r_0)s''(r_0)}{s(r_0)^2}\right) \end{equation} on the hypersurface $\Sigma=\{r=r_0\}$. Now let $u$ be a first eigenfunction of the Laplacian on $(N, g_N)$, regarded as a function on $\Sigma$. Then $u$ is an eigenfunction on $\Sigma$ with eigenvalue $\frac{\lambda_1(g_N)}{s(r_0)^2}$, so $u$ satisfies $\int_\Sigma u dS=0$ and \begin{align*} \int_\Sigma |\nabla_\Sigma u|^2 dS=\frac{\lambda_1(g_N)}{s(r_0)^2}\int_\Sigma u^2 dS. \end{align*} Therefore the second variation formula becomes \begin{align*} \left.\frac{d^2}{dt^2}\right|_{t=0}\mathrm{Area}(\Sigma_t) =& \int_{\Sigma} \left(|\nabla_\Sigma u|^2-\left(|B|^2+\mathrm{Ric}_g (\nu, \nu)\right)u^2\right)dS\\ =& \left(\lambda_1(g_N)-m\left(s'(r_0)^2-s(r_0)s''(r_0)\right)\right)\int_{\Sigma} \frac{u^2}{s(r_0)^2} dS\\ <&0. \end{align*} It follows that $\Sigma$ fails to be area-minimizing among nearby hypersurfaces enclosing the same volume.
\end{proof} \begin{remark} The necessity of the condition $ss''-s'^2\ge -1$ when $N$ is the unit sphere is also discussed in \cite{chunhe2016necessary}. See also \cite[Remark 6.3]{GLW}. We notice that one of the conditions of Theorem 1.2 (isoperimetric inequality) in \cite{GLW} is that $-k\le ss''-s'^2\le 0$. Proposition \ref{prop: necess} does not contradict the result in \cite{GLW} because it is assumed that $\mathrm{Ric}_N \ge (m-1)k g_N$ in \cite{GLW} while in our example, $\mathrm{Ric}_N<(m-1)k g_N$. Indeed, as already noted in \cite{GLW}, the condition that $\mathrm{Ric}_N \ge (m-1)k g_N$ guarantees that $\lambda_1(g_N)\ge m k$ by Lichnerowicz' theorem \cite{Lich}. \end{remark} It is also easy to see that the surjectivity of the projection map $\pi: \partial \Omega\to N$ is necessary if $s(0)>0$, or equivalently, $A(0)>0$. Indeed, if $A(0)>0$, then $|B_r|\to 0$ but $\mathrm{Area}(\partial B_r)=|N|A(r)\to |N|A(0)>0$ as $r\to 0^+$. If we take $\Omega$ to be a small enough geodesic ball around a point which is far from $r=0$, then the isoperimetric inequality clearly fails as $\partial \Omega$ has smaller area than $\partial B_r$, where $B_r=\Omega^\#$. We now show that in the case where $s(0)=0$, Assumption \ref{cond: surj} in Theorem \ref{thm1} cannot be removed unless we impose further restrictions on $s'(0)$. \begin{proposition}\label{prop: s'(0)} Suppose $s(0)=0$. If $s'(0) >\left(\frac{n \beta_n}{|N|}\right)^{\frac{1}{n-1}}$, then Theorem \ref{thm1} fails if Assumption \ref{cond: surj} is removed. In particular, if $N=\mathbb S^{n-1}$ and $s'(0)> 1$, then Theorem \ref{thm1} fails if Assumption \ref{cond: surj} is removed. \end{proposition} \begin{proof} We use the notation $B_R(0)=\{(r, \theta): r<R\}$, and denote by $B_R(p)$ the geodesic ball of radius $R$ around a point $p$ in $M$ with $r(p)\ne 0$. We will compare the areas of $\partial B_r(p)$ and $\partial B_r(0)$ for small $r$. By Taylor's theorem, $s(r)=s'(0)r+O(r^2)$ as $r\to 0^+$.
Then \begin{align*} |B_r(0)|=&\frac{|N|}{n} s'(0)^{n-1}r^n+O(r^{n+1})\quad \textrm{ and }\\ |\partial B_r(0)|=&|N| s(r)^{n-1}= |N| s'(0)^{n-1}r^{n-1}+O(r^n). \end{align*} On the other hand, if $r(p)\ne 0$, then $|B_r(p)|=\beta_n r^n +O(r^{n+2})$ and $|\partial B_r(p)|=n \beta_n r^{n-1}+O(r^{n+1})$ (cf. \cite[Theorem 3.1]{gray1974volume}). Fix a small geodesic ball $B_{r}(p)$. If $|B_{r_0}(0)|=|B_{r}(p)|$, we must have $r_0=\left(\frac{n \beta_n}{|N|}\right)^{\frac{1}{n}}s'(0)^{-\frac{n-1}{n}} r+O({r}^2)$. So \begin{align*} |\partial B_{r_0}(0)| =&|N|s(r_0)^{n-1}\\ =&|N| s'(0)^{n-1}\left(\left(\frac{n \beta_n}{|N|}\right)^{\frac{1}{n}}s'(0)^{-\frac{n-1}{n}} r\right)^{n-1}+O({r}^{n})\\ =&|N|^{\frac{1}{n}}\left(n \beta_n\right)^{\frac{n-1}{n}} s'(0)^{\frac{n-1}{n}} r ^{n-1}+O({r}^{n}). \end{align*} Therefore, for the isoperimetric inequality to hold, it is necessary that $|N|^{\frac{1}{n}}\left(n \beta_n\right)^{\frac{n-1}{n}} s'(0)^{\frac{n-1}{n}} \le n \beta_n$, i.e. $s'(0)\le \left(\frac{n \beta_n}{|N|}\right)^{\frac{1}{n-1}}$. In particular, if $N=\mathbb S^{n-1}$, this means that $s'(0)\le 1$. \end{proof} In particular, Proposition \ref{prop: s'(0)} implies that for a given $s(r)$ satisfying all the conditions in Theorem \ref{thm1} and such that $s(0)=0$ and $s'(0)>0$, we can choose a Riemannian manifold $N$ with large enough volume $|N|$ such that $s'(0) >\left(\frac{n \beta_n}{|N|}\right)^{\frac{1}{n-1}}$ (for example by choosing a sphere with large enough radius); the isoperimetric inequality will then fail if we drop Assumption \ref{cond: surj}. Alternatively, we can also fix $N$ and rescale $s(r)$, which does not affect the assumptions in Theorem \ref{thm1}. Finally, we give an example in which the isoperimetric inequality fails when all assumptions except Assumption \eqref{cond: mono} hold. In view of Theorem \ref{thm2}, the counterexample must not be a star-shaped hypersurface.
For convenience, we take the interval to be $[1, \infty)$ and $N$ to be any compact $m$-dimensional manifold. Let $s(r)=r^{-\frac{1}{m}}$, which is a decreasing function. As $\log s(r)=-\frac{1}{m}\log r$ is convex, \eqref{cond: convex} is satisfied. For the region $\Omega=\{R_1<r<R_2\}$, $|\Omega| =|N|(\log R_2 -\log R_1)$ and $|\partial \Omega| = |N|({R_1}^{-1}+{R_2}^{-1})$. If we take $R_2=e R_1$ and let $R_1\to \infty$, then $|\Omega|\equiv |N|$ while $|\partial \Omega|\to 0$. Therefore the isoperimetric inequality fails.
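The failure in this example can also be observed numerically: with $A(r)=s(r)^m=1/r$, the volume of $\Omega=\{R_1<r<R_2\}$ depends only on the ratio $R_2/R_1$, while the boundary area decays like $1/R_1$. A plain-Python sketch (illustrative only):

```python
import math

# s(r) = r^(-1/m) on [1, oo), so A(r) = s(r)^m = 1/r (we work per unit volume of N).
def vol(R1, R2, steps=200000):
    # |Omega| / |N| = integral of dr / r over [R1, R2] = log(R2 / R1), midpoint rule
    dr = (R2 - R1) / steps
    return sum(1.0 / (R1 + (i + 0.5) * dr) for i in range(steps)) * dr

def area(R1, R2):
    # |boundary of Omega| / |N| = A(R1) + A(R2) = 1/R1 + 1/R2
    return 1.0 / R1 + 1.0 / R2

R1 = 100.0
R2 = math.e * R1  # the volume is then log(e) = 1 for every choice of R1
```

Scaling $R_1$ up keeps `vol` fixed while `area` shrinks, so no bound of the form $|\partial\Omega|\ge f(|\Omega|)$ with $f>0$ can hold.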
https://arxiv.org/abs/1306.5351
Divisors on graphs, binomial and monomial ideals, and cellular resolutions
We study various binomial and monomial ideals arising in the theory of divisors, orientations, and matroids on graphs. We use ideas from potential theory on graphs and from the theory of Delaunay decompositions for lattices to describe their minimal polyhedral cellular free resolutions. We show that the resolutions of all these ideals are closely related and that their $\mathbb{Z}$-graded Betti tables coincide. As corollaries, we give conceptual proofs of conjectures and questions posed by Postnikov and Shapiro, by Manjunath and Sturmfels, and by Perkinson, Perlman, and Wilmes. Various other results related to the theory of chip-firing games on graphs also follow from our general techniques and results.
\section{Introduction} This work is concerned with the development of new connections between the theory of divisors on graphs, potential theory, the theory of lattices, Delaunay decompositions, and commutative algebra. \subsection{Divisors on graphs} Let $G$ be a graph. Let $\Div(G)$ be the free abelian group generated by $V(G)$. An element of $\Div(G)$ is a formal sum of vertices with integer coefficients and is called a {\em divisor} on $G$. We denote by ${\mathcal M}(G)$ the group of integer-valued functions on the vertices. The {\em Laplacian operator} $\Delta : {\mathcal M}(G) \to \Div(G)$ is defined by \[\Delta(f) = \sum_{v \in V(G)} \sum_{\{v,w\} \in E(G)} (f(v) - f(w)) (v) \ .\] The group of {\em principal divisors} is defined as the image of the Laplacian operator and is denoted by $\Prin(G)$. Two divisors $D_1$ and $D_2$ are called {\em linearly equivalent} if their difference is a principal divisor. This gives an equivalence relation on the set of divisors. The set of equivalence classes forms a finitely generated abelian group which is called the {\em Picard group} of $G$. If $G$ is connected, then the finite (torsion) part of the Picard group has cardinality equal to the number of spanning trees of $G$. This group has appeared in the literature under many different names; in theoretical physics and in probability it was first introduced as the ``abelian sandpile group'' or ``abelian avalanche group'' in the context of self-organized critical phenomena \cite{Bak, Dhar1, Gabrielov}. In arithmetic geometry, it appears implicitly in the study of component groups of N\'eron models of Jacobians of algebraic curves \cite{Raynaud, Lorenzini1}. In algebraic graph theory this group appeared under the name ``Jacobian group'' or ``Picard group'' in the study of flows and cuts in graphs \cite{Bacher}. The study of a certain chip-firing game on graphs led to the definition of this group under the name ``critical group'' \cite{Biggs97, Biggs99}. 
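For a concrete feel for these objects, the following sketch (our own naming, with $K_4$ as example) builds the Laplacian from an adjacency matrix and computes the order of the torsion part of the Picard group as a cofactor of the Laplacian, which equals the number of spanning trees by the Matrix-Tree theorem:

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A; applying row v of L to f gives Delta(f)(v)."""
    adj = np.asarray(adj)
    return np.diag(adj.sum(axis=1)) - adj

def picard_torsion_order(adj):
    """Order of the torsion part of Pic(G): by the Matrix-Tree theorem it
    equals any cofactor of L (delete one row and the matching column),
    i.e. the number of spanning trees of G."""
    L = laplacian(adj)
    return round(np.linalg.det(L[1:, 1:]))

# K_4 has 4^{4-2} = 16 spanning trees (Cayley's formula).
K4 = (1 - np.eye(4)).astype(int)
print(picard_torsion_order(K4))  # 16
```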
We recommend the recent survey article \cite{Levine1} for a short but more detailed overview of the subject. \medskip The theory of divisors on graphs closely mirrors the theory of divisors on algebraic curves. In fact, Baker and Norine in \cite{BN1} prove a version of Riemann-Roch theorem in this setting via a combinatorial argument. It was immediately realized (in \cite{Gathmann, MZ}) that this divisor theory has a natural extension to {\em metric graphs} (or {\em abstract tropical curves}). This theory, however, has resisted a more conceptual and cohomological interpretation. \medskip Associated to $G$ there is a canonical ideal which encodes the equivalences of divisors on $G$. This ideal is already implicitly defined in Dhar's seminal paper \cite{Dhar1}, but it was first introduced in \cite{CoriRossinSalvy02}. Let $K$ be a field and let ${\mathbf R}=K[{\mathbf x}]$ be the polynomial ring in variables $\{x_v: v \in V(G)\}$. The canonical binomial ideal is defined as ${\mathbf I}_G:= \langle {\mathbf x}^{D_1} - {\mathbf x}^{D_2} : \, D_1 \sim D_2 \text{ both nonnegative divisors}\rangle$. A related monomial ideal, which we denote by ${\mathbf M}_G^q$, is a certain initial ideal of ${\mathbf I}_G$ which is defined after fixing a vertex $q \in V(G)$ (see \S\ref{sec:divcomm}). This ideal, for the case of complete graphs, was extensively studied in \cite{PostnikovShapiro04}. In \cite{MadhuBernd}, Riemann-Roch theory for graphs is linked to Alexander duality (see \S\ref{sec:alex}) for the ideal ${\mathbf M}_G^q$. \subsection{Minimal free resolutions} Let ${\mathsf A}$ be an abelian group and let $R$ be an ${\mathsf A}$-graded polynomial ring over $K$. Let ${\mathfrak m}$ denote the ideal consisting of all polynomials with zero constant term. We require the ${\mathsf A}$-grading to be ``nice'', in the sense that a version of Nakayama's lemma holds (see \S\ref{sec:nicegrading}). 
For a graded $R$-module $M$, a graded free resolution of $M$ is an exact sequence of the form \[ \mathcal{F} \, : 0 \rightarrow \cdots \rightarrow F_{i} \xrightarrow{\varphi_{i}} F_{i-1} \rightarrow \cdots \rightarrow F_0 \xrightarrow{\varphi_{0}} M \rightarrow 0 \] where all $F_i$'s are free $R$-modules and all differential maps $\varphi_i$'s are graded. This resolution is called {\em minimal} if $\varphi_{i+1}(F_{i+1}) \subseteq {\mathfrak m} F_i$ for all $i \geq 0$. The $i$-th {\em Betti number} $\beta_{i}(M)$ of $M$ is the rank of $F_i$. The $i$-th {\em graded Betti number} in degree ${\mathsf j} \in {\mathsf A}$, denoted by $\beta_{i,{\mathsf j}}(M)$, is the rank of the degree ${\mathsf j}$ part of $F_i$. If the grading is ``nice'' then any finitely generated graded $R$-module has a minimal free resolution, and the numbers $\beta_{i,{\mathsf j}}(M)$ and $\beta_{i}(M)$ are independent of the choice of the minimal resolution. These integers encode very subtle numerical information about the module $M$. Many invariants of $M$ (e.g. its Hilbert series) can be computed using these Betti numbers. \medskip There is a standard way to write down a complex of graded modules from a cell complex ${\mathcal C}$. Namely, one can label $0$-dimensional cells of ${\mathcal C}$ by monomials, and then extend the labeling to arbitrary faces by labeling each face $F$ with the least common multiple of the monomial labels on the vertices in $F$. The resulting labeled cell complex leads to a complex of free graded $R$-modules \[ {\mathcal F}_{{\mathcal C}} = \bigoplus_{\emptyset \ne F \in {\mathcal C}}{R(-{\mathbf m}_F)} \] where ${\mathbf m}_F$ denotes the monomial label of the face $F$. 
The differential of ${\mathcal F}_{{\mathcal C}}$ is the homogenized differential of the cell complex ${\mathcal C}$; if $[F]$ denotes the generator of ${R(-{\mathbf m}_F)}$ we have \[ \partial([F]) = \sum_{{\rm codim(F,F')=1} \atop F' \subset F}{\varepsilon (F,F') \frac{{\mathbf m}_F}{{\mathbf m}_{F'}} \ [F']} \] where $\varepsilon (F,F') \in \{-1, +1\}$ denotes the incidence function indicating the orientation of $F'$ in the boundary of $F$. \medskip This construction is so general that the resulting complex is expected not to be exact. In the rare case that we do get an exact sequence, the pair $({\mathcal F}, \partial)$ is called a {\em cellular free resolution} which was first studied in \cite{BayerSturmfels}. If all cells are polyhedral, $({\mathcal F}, \partial)$ is called a {\em polyhedral cellular free resolution}. If moreover all ${{\mathbf m}_F}/{{\mathbf m}_{F'}}$ appearing in the differential maps are non-units in $R$, then we have a {\em minimal polyhedral cellular free resolution}. \subsection{Outline and our results} Our first goal is to give a minimal polyhedral cellular free resolution for the ideal ${\mathbf I}_G$. Quite surprisingly, many ideas from potential theory on graphs, from lattices and Delaunay decomposition, and from (a generalized version of) the notion of total unimodularity (developed in \S\ref{sec:divisors} and \S\ref{sec:latticesec}) fit together nicely to give a direct and self-contained solution to this problem. This is worked out in \S\ref{sec:IGresol}. Note that as a result we obtain a whole family (as $G$ varies) of ideals with minimal polyhedral cellular free resolution. For complete graphs this is the Scarf complex and for trees this is the Koszul complex. \medskip We then step back and define two more ideals; the {\em graphic Lawrence ideal} ${\mathbf J}_G$ and one of its initial ideals ${\mathbf O}_G^q$ (defined after fixing a vertex), which we call the {\em graphic oriented matroid ideal}. 
These are special classes of more general ideals studied in \cite{Popescu} and \cite{novik2002syzygies}. They are intimately related to {\em graphic hyperplane arrangements} and to {\em Delaunay decomposition of cut lattices} reviewed in \S\ref{sec:arrg}. In \S\ref{sec:matroidlawrence} we take a close look at these ideals, review some general known results, and prove some new results for our special situation. \medskip Roughly speaking, the ideals ${\mathbf J}_G$ and ${\mathbf O}_G^q$ can be thought of as ``orientation'' variants of the ``divisor'' ideals ${\mathbf I}_G$ and ${\mathbf M}_G^q$. A powerful technique in the theory of divisors on graphs and chip-firing games is to relate divisors to orientations. Given an orientation, one can form a divisor by reading off the associated indegrees or outdegrees (see, e.g., \cite[Theorem~2.3]{Lovasz91}, \cite[Theorem~3.3]{BN1}, \cite{HopPerk}, \cite{FarbodFatemeh}, and \cite{ABKS}). Our next main result shows that, algebraically, there is a good justification for the strength of this method. We show that the relation between the ideals ${\mathbf J}_G$ and ${\mathbf I}_G$ (and similarly ${\mathbf O}_G^q$ and ${\mathbf M}_G^q$) can be understood via {\em regular sequences}. This is the content of \S\ref{sec:reg} and \S\ref{sec:main}. \medskip These regular sequences allow us to compare many algebraic properties and constructions for the ideals ${\mathbf J}_G$ and ${\mathbf I}_G$ (and similarly ${\mathbf O}_G^q$ and ${\mathbf M}_G^q$). For example, one immediate corollary is to obtain a minimal polyhedral cellular free resolution for the ideal ${\mathbf I}_G$ from a minimal polyhedral cellular free resolution for the ideal ${\mathbf J}_G$. This resolution is essentially equivalent to the one obtained by our potential theoretic considerations (see Remark~\ref{rmk:ResolRelation}). 
We also obtain a minimal polyhedral cellular free resolution for the ideal ${\mathbf M}_G^q$ from a minimal polyhedral cellular free resolution for the ideal ${\mathbf O}_G^q$. It follows that all these resolutions are closely related to Delaunay decompositions of the lattice of integral coboundaries (which we call the {\em integral cut lattice}) and to the graphic hyperplane arrangement. Moreover, the ${\mathbb Z}$-graded Betti numbers of all these ideals coincide. So ${\mathbf M}_G^q$ and ${\mathbf O}_G^q$ are examples of ``nice'' initial ideals in the sense of \cite{conca}, meaning that one can read the Betti numbers of the original ideal from the initial ideal (see \cite{boocher,Fatemeh} for some results in this direction). Also, we obtain, automatically, an interpretation of the Betti numbers in terms of the number of faces of various dimensions in the graphic hyperplane arrangement, or equivalently, the number of orbits of the Delaunay cells of various dimensions in the cut or principal lattice. These interpretations also imply that Betti numbers can be read from the number of {\em acyclic partial orientations} of $G$ (see Remark~\ref{sec:nopartorient}, Example~\ref{exam:1}, and Theorem~\ref{thm:betti_coincide}). As a corollary, it follows that the Betti tables of all these ideals are independent of the base field $K$. \medskip For complete graphs, minimal polyhedral cellular free resolutions for ${\mathbf M}_G^q$ and ${\mathbf I}_G$ were given in \cite{PostnikovShapiro04} and \cite{MadhuBernd}, respectively. The case of general graphs was left open in both works. Our work generalizes these constructions to arbitrary graphs, puts their constructions into a larger context, and resolves several questions and conjectures from these papers. We should mention that minimal free resolutions and the Betti numbers for both ${\mathbf M}_G^q$ and ${\mathbf I}_G$ were first established in \cite{FarbodFatemeh} and independently in \cite{Madhu}. 
The first Betti number for ${\mathbf I}_G$ was computed in \cite{horia}. A minimal {\em cellular} resolution for ${\mathbf M}_G^q$ was given in \cite{Anton}. Very recently, the Betti numbers for ${\mathbf M}_G^q$ were also computed in \cite{hopkins}. \medskip We also remark that it is possible to directly give a minimal polyhedral cellular free resolution for the ideal ${\mathbf M}_G^q$ by our potential theoretic techniques in \S\ref{sec:IGresol}, but we have chosen to skip the details of this construction here as all the main ideas appear elsewhere in this paper. Moreover, an essentially equivalent (see Remark~\ref{rmk:cells}(ii)) solution for ${\mathbf M}_G^q$ has recently (and independently) appeared in \cite{Anton}, where the solution for ${\mathbf I}_G$ is left as an open problem. \medskip Our techniques allow us to revisit some of the foundational results on {\em chip-firing} games and related fields. For example, we remark that our potential theoretic interpretation of Gr\"obner weights relating ${\mathbf I}_G$ to ${\mathbf M}_G^q$ gives a new proof of the result in \cite{FarbodMatt12} interpreting $q$-reduced divisors as divisors of {\em minimum total potential} (see Remark~\ref{rmk:reducedpotential}). A related problem is to describe the whole Gr\"obner cone of the initial ideal ${\mathbf M}_G^q$. This was a question of Bernd Sturmfels which we completely answer in \S\ref{sec:grocone}. We show that the rays of the Gr\"obner cone associated to ${\mathbf M}_G^q$ correspond, in a precise sense, to Green's functions. \medskip The equality of the Betti tables of all of our ideals allows one to prove many numerical facts about one ideal by looking instead at another ideal in this family. We consider a few such examples in \S\ref{sec:conseq}. One example is the computation of multiplicities. Many applications are expected, and will appear in future work. \section{Notation and background} \label{sec:Background} Throughout, we assume ${\mathbb N}$ contains zero. 
All rings are commutative with $1$. \medskip A {\em graph} means a finite, connected, unweighted multigraph with no loops. As usual, the set of vertices and edges of a graph $G$ are denoted by $V(G)$ and $E(G)$. For $A \subseteq V(G)$, we denote by $A^c$ the complement of $A$ in $V(G)$. We set $n=|V(G)|$ and $m=|E(G)|$. For a set of vertices $S$, the induced subgraph of $G$ with the vertex set $S$ is denoted by $G[S]$. \medskip Let ${\mathbb E}(G)$ denote the set of oriented edges of $G$; for each edge in $E(G)$ there are two edges $e$ and $\bar{e}$ in ${\mathbb E}(G)$. So we have $|{\mathbb E}(G)|=2m$. An element $e$ of ${\mathbb E}(G)$ is called an {\em oriented} edge, and $\bar{e}$ is called the {\em inverse} of $e$. We have a map \[ \begin{aligned} {\mathbb E}(G) &\rightarrow V(G) \times V(G) \\ e &\mapsto (e_{+}, e_{-}) \end{aligned} \] sending an oriented edge $e$ to its head (or its terminal vertex) $e_{+}$ and its tail (or its initial vertex) $e_{-}$. Note that $\bar{e}_{+}=e_{-}$ and $\bar{e}_{-}=e_{+}$. Given disjoint nonempty subsets $A,B$ of $V(G)$ we define \[ {\mathbb E}(A,B) = \{e \in {\mathbb E}(G): e_+ \in A ,e_- \in B\} \ . \] \medskip \begin{figure}[ht] \begin{center} \begin{tikzpicture} [scale = .30, very thick = 20mm] \node (n2) at (20,6) [Cwhite] {$e_-$}; \node (n3) at (28,6) [Cwhite] {$e_+$}; \node (n2) at (24,3.8) [Cwhite] {$\bar{e}$}; \node (n3) at (24,8) [Cwhite] {$e$}; \node (n2) at (21,6) [Cgray] {}; \node (n3) at (27,6) [Cgray] {}; \path[blue][ ->] (n2) edge [bend left=35] node { } (n3); \path[ <-] (n2) edge [bend right=35] node { } (n3); \end{tikzpicture} \caption{Oriented edges, head, and tail} \end{center} \end{figure} \medskip \begin{figure}[h!] 
\begin{center} \begin{tikzpicture} [scale = .30, very thick = 20mm] \node (n1) at (14,11) [Cgray] {}; \node (n2) at (11,6) [Cgray] {}; \node (n3) at (17,6) [Cgray] {}; \foreach \from/\to in {n1/n3,n1/n2,n2/n3} \draw[black][] (\from) -- (\to); \node (n1) at (24,11) [Cgray] {}; \node (n2) at (21,6) [Cgray] {}; \node (n3) at (27,6) [Cgray] {}; \path[ ->] (n3) edge [bend right=15] node { } (n1); \path[ ->] (n1) edge [bend right=15] node { } (n2); \path[ ->] (n2) edge [bend right=15] node { } (n3); \path[blue][ ->] (n1) edge [bend right=15] node { } (n3); \path[blue][ ->] (n3) edge [bend right=15] node { } (n2); \path[blue][ ->] (n2) edge [bend right=15] node { } (n1); \node (n1) at (34,11) [Cgray] {}; \node (n2) at (31,6) [Cgray] {}; \node (n3) at (37,6) [Cgray] {}; \path[ ->] (n3) edge [bend right=15] node { } (n1); \path[ ->] (n1) edge [bend right=15] node { } (n2); \path[ ->] (n2) edge [bend right=15] node { } (n3); \end{tikzpicture} \caption{Graph $K_3$, its oriented edges, and a fixed orientation} \end{center} \medskip \begin{center} \begin{tikzpicture} [scale = .30, very thick = 20mm] \node (n1) at (14,11) [Cgray] {}; \node (n2) at (11,6) [Cgray] {}; \node (n3) at (17,6) [Cgray] {}; \draw[black][] (n1) -- (n2); \path[ ->] (n3) edge [bend right=15] node { } (n1); \path[ ->] (n2) edge [bend right=15] node { } (n3); \node (n1) at (24,11) [Cgray] {}; \node (n2) at (21,6) [Cgray] {}; \node (n3) at (27,6) [Cgray] {}; \path[ ->] (n3) edge [bend right=15] node { } (n1); \path[ ->] (n1) edge [bend right=15] node { } (n2); \path[ ->] (n2) edge [bend right=15] node { } (n3); \path[blue][ ->] (n2) edge [bend right=15] node { } (n1); \end{tikzpicture} \caption{Two equivalent ways to draw a partial orientation} \end{center} \end{figure} An {\em orientation} of $G$ is a choice of subset ${\mathcal O} \subset {\mathbb E}(G)$ such that ${\mathbb E}(G)$ is the disjoint union of ${\mathcal O}$ and $\bar{{\mathcal O}}=\{\bar{e}: \ e \in {\mathcal O} \}$. 
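These conventions are mechanical enough to encode directly; a minimal sketch (our own naming) of the oriented-edge, head, tail, and inverse maps:

```python
# Each undirected edge {a, b} yields two oriented edges e = (a, b) and
# ebar = (b, a); by the convention above, e_- is the tail and e_+ the head.
def oriented_edges(edges):
    EE = []
    for a, b in edges:
        EE.append((a, b))
        EE.append((b, a))
    return EE

def tail(e): return e[0]            # e_-
def head(e): return e[1]            # e_+
def inverse(e): return (e[1], e[0]) # ebar

EE = oriented_edges([(0, 1), (1, 2), (0, 2)])   # K_3
assert len(EE) == 2 * 3                         # |E(G)| = 2m
for e in EE:
    # ebar_+ = e_-  and  ebar_- = e_+
    assert head(inverse(e)) == tail(e) and tail(inverse(e)) == head(e)
print(EE)
```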
An orientation is called {\em acyclic} if it contains no directed cycle. A {\em partial orientation} of $G$ is a choice of subset ${\mathcal P} \subset {\mathbb E}(G)$ that strictly contains an orientation ${\mathcal O}$ of $G$. For a partial orientation ${\mathcal P}$, the associated (connected) {\em partition} is the partition of $G$ into totally cyclic subgraphs with edges $\{e , \bar{e} \in {\mathcal P}\}$. A partial orientation is called {\em acyclic} if the induced orientation on the graph obtained by contracting all its totally cyclic components is acyclic. \medskip Let ${\mathcal O}$ be an orientation of $G$. A vertex $q$ is called a {\em source} for ${\mathcal O}$ if $q=e_-$ for every $e \in {\mathcal O}$ which is incident to $q$. Let ${\mathcal P}$ be a partial orientation of $G$. Let $H$ be the associated connected component containing the vertex $q$. Then $q$ is called a {\em source} for ${\mathcal P}$ if $H$ corresponds to a source in the graph obtained by contracting all components of ${\mathcal P}$ (see Example~\ref{exam:1}). \medskip For an abelian group $A$, we let $C^0(G,A)$ denote the set of all $A$-valued functions on $V(G)$. It is endowed with the bilinear form \[ \langle f_1, f_2\rangle = \sum_{v\in V(G)}{f_1(v)f_2(v)} \ . \] Also, $C^1(G, A)$ will denote the space of all $A$-valued functions $g$ on ${\mathbb E}(G)$ such that $g(\bar{e})=-g(e)$ for all $e \in {\mathbb E}(G)$. After fixing an orientation ${\mathcal O} \subset {\mathbb E}(G)$ we have $C^1(G, A) = C_{{\mathcal O}}^1(G, A) \oplus C_{\bar{{\mathcal O}}}^1(G, A)$, where $C_{{\mathcal O}}^1(G, A)$ denotes the space of all $A$-valued functions on ${\mathcal O}$. 
The group $C^1(G, A)$ (and therefore $C_{{\mathcal O}}^1(G, A)$) is endowed with the bilinear form \begin{equation} \langle g_1, g_2 \rangle = \sum_{e \in {\mathcal O}}{g_1(e)g_2(e)} = \frac{1}{2} \sum_{e \in {\mathbb E}(G)}{g_1(e)g_2(e)} \end{equation} The usual coboundary map $d \colon C^0(G,A) \rightarrow C^1(G,A)$ is defined by \[ (d f) (e) = f(e_+) - f(e_-) = -(d f) (\bar{e}) \ . \] After fixing an orientation ${\mathcal O} \subset {\mathbb E}(G)$, we obtain the restricted coboundary map $d_{\mathcal O} \colon C^0(G,A) \to C_{{\mathcal O}}^1(G,A)$. \medskip Let $R$ be a commutative ring with $1$. We let $C_0(G, R)$ denote the free $R$-module generated by $V(G)$. Elements of $C_0(G, R)$ are of the form $\sum_{v \in V(G)} {a_v (v)}$ for $a_v \in R$. It is endowed with a bilinear form induced by $\langle (u) , (v)\rangle =\delta_{v}(u)$ for $u,v \in V(G)$. Here $\delta_{v}(u)$ denotes the usual Kronecker delta function. \medskip Likewise, we let $C_1(G, R)$ denote the free $R$-module generated by ${\mathbb E}(G)$. Elements of $C_1(G, R)$ are of the form $\sum_{e \in {\mathbb E}(G)} {a_e (e)}$ for $a_e \in R$. It is endowed with a bilinear form induced by \[ \langle (e) , (e')\rangle = \begin{cases} 1, &\text{if $e'=e$}\\ -1, &\text{if $e'=\bar{e}$}\\ 0, &\text{otherwise} \end{cases} \] for $e,e' \in {\mathbb E}(G)$. The usual boundary map $\partial\colon C_1(G, R) \rightarrow C_0(G, R)$ is defined by \[ \partial(e)=(e_+)-(e_-) \ . \] The bilinear forms defined above provide canonical isomorphisms $C_0(G,R) \cong C^0(G,R)$ and $C_1(G,R) \cong C^1(G,R)$. Then the maps $\partial$ and $d$ are adjoint with respect to these bilinear forms. We let $e^\ast \in C^1(G,R)$ denote the image of $(e) \in C_1(G, R)$ under this isomorphism, i.e. \[ e^\ast := \langle (e) , \cdot \rangle \ . \] The characteristic function of $v$ or $\chi_v= \delta_{v} \in C^0(G, R)$ is the image of $(v) \in C_0(G,R)$ under the canonical isomorphism. \medskip \medskip Let $K$ be a field. 
Associated to $G$ we define two polynomial rings: \begin{itemize} \item Let ${\mathbf R}=K[{\mathbf x}]$ denote the polynomial ring in $n$ variables $\{x_v: v \in V(G)\}$. \item Let ${\mathbf S}=K[{\mathbf y}]$ denote the polynomial ring in $2m$ variables $\{y_e: e \in {\mathbb E}(G)\}$ or $\{y_e, y_{\bar{e}}: e \in {\mathcal O}\}$ (for any orientation ${\mathcal O}$). \end{itemize} \section{Divisors and potential theory on graphs} \label{sec:divisors} Following \cite{BN1}, we let $\Div(G)$ be the free abelian group generated by $V(G)$. Equivalently, $\Div(G) = C_0(G, {\mathbb Z})$. An element of $\Div(G)$ is written as $\sum_{v \in V(G)} a_v (v)$ for $a_v \in {\mathbb Z}$ and is called a {\em divisor} on $G$. The coefficient $a_v$ in $D$ is denoted by $D(v)$. A divisor $D$ is called {\em effective} if $D(v) \geq 0$ for all $v\in V(G)$. The set of effective divisors is denoted by $\Div_{+}(G)$. We write $D \leq E$ if $E-D \in \Div_{+}(G)$. For $D \in \Div(G)$, let $\deg(D) = \sum_{v \in V(G)} D(v)$. Given disjoint nonempty subsets $A,B$ of $V(G)$ one can assign a divisor $D(A,B)= \sum_{v \in A} |\{ w \in B: \{v,w\} \in E(G)\}| \ (v)$. \medskip We denote by ${\mathcal M}(G)$ the group of integer-valued functions on the vertices. Equivalently, ${\mathcal M}(G)=C^0(G, {\mathbb Z})$. For $A \subseteq V(G)$, $\chi_A \in {\mathcal M}(G)$ denotes the $\{0,1\}$-valued characteristic function of $A$. 
The {\em Laplacian operator} $\Delta \colon {\mathcal M}(G) \to \Div(G)$ is defined by \[\Delta(f) = \sum_{v \in V(G)} \sum_{\{v,w\} \in E(G)} (f(v) - f(w)) (v) \ .\] \begin{Remark} \label{rmk:selfadjoint} With the identification ${\mathcal M}(G)=C^0(G, {\mathbb Z})$ and $\Div(G)=C_0(G, {\mathbb Z})$ and the canonical isomorphism $C_1(G,R) \cong C^1(G,R)$, the operator $\Delta$ is identified with $\partial_{\mathcal O} d_{\mathcal O} \colon C^0(G, {\mathbb Z}) \rightarrow C_0(G,{\mathbb Z})$, where $\partial_{\mathcal O}$ and $d_{\mathcal O}$ denote the usual (restricted) boundary and coboundary maps for an arbitrary orientation ${\mathcal O}$. Somewhat more canonically, $\Delta = \frac{1}{2}\partial d$. It follows that $\Delta$ is a self-adjoint operator. \end{Remark} \medskip The group of {\em principal divisors} is defined as the image of the Laplacian operator and is denoted by $\Prin(G)$. It is easy to check that $\Prin(G) \subseteq \Div^0(G)$ where $\Div^0(G)$ denotes the set consisting of divisors of degree zero. The quotient $\Pic^0(G) = \Div^0(G) / \Prin(G)$ is a finite group whose cardinality is the number of spanning trees of $G$ (see, e.g., \cite{FarbodMatt12} and references therein). The full {\em Picard group} of $G$ is defined as \[ \Pic(G) = \Div(G) / \Prin(G) \] which is isomorphic to ${\mathbb Z} \oplus \Pic^0(G)$. Since $\Prin(G) \subseteq \Div^0(G)$, the map $\deg \colon \Div(G) \rightarrow {\mathbb Z}$ descends to a well-defined map $\deg \colon \Pic(G) \rightarrow {\mathbb Z}$. Two divisors $D_1$ and $D_2$ are called {\em linearly equivalent} if they become equal in $\Pic(G)$. In this case we write $D_1 \sim D_2$. \subsection{Divisors and potential theory} \label{sec:potential} For $p,q \in V(G)$ let the {\em Green's function} $j_q(p,\cdot)$ denote the unique (${\mathbb Q}$-valued) solution to the Laplace equation $\Delta f = (p)-(q)$ satisfying $f(q)=0$. 
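For small graphs this definition is directly computable: deleting row and column $q$ from the Laplacian and inverting solves all the equations $\Delta f = (p)-(q)$, $f(q)=0$, at once (in the spirit of \cite[Construction~3.1]{FarbodMatt12}; the code and names below are only an illustrative sketch of ours). The assertions check $j_q(p,q)=0$, $j_q(p,v)=j_q(v,p)$, and $0 \le j_q(p,v) \le j_q(p,p)$:

```python
import numpy as np

def green_matrix(adj, q=0):
    """J[p, v] = j_q(p, v): the unique solution f of Delta f = (p) - (q)
    with f(q) = 0.  Deleting row and column q from the Laplacian and
    inverting solves all these systems at once (the equation at q is
    implied, since the columns of the Laplacian sum to zero)."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    L = np.diag(adj.sum(axis=1)) - adj
    keep = [v for v in range(n) if v != q]
    J = np.zeros((n, n))
    J[np.ix_(keep, keep)] = np.linalg.inv(L[np.ix_(keep, keep)])
    return J

K4 = 1 - np.eye(4)                 # complete graph on 4 vertices
J = green_matrix(K4, q=0)
assert np.allclose(J[0], 0) and np.allclose(J[:, 0], 0)  # j_q(q, v) = j_q(p, q) = 0
assert np.allclose(J, J.T)                               # j_q(p, v) = j_q(v, p)
assert np.all(J >= -1e-12)                               # 0 <= j_q(p, v)
assert np.all(J <= np.diag(J)[:, None] + 1e-12)          # j_q(p, v) <= j_q(p, p)
print(np.round(J, 3))
```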
If we think of graph $G$ as an electrical network (in which each edge is a resistor having unit resistance) then $j_q(p,v)$ denotes the electric potential at $v$ if one unit of current enters the network at $p$ and exits at $q$, with $q$ grounded (i.e., zero potential). It is easy to check that $j_q(p,q)=0$, $j_q(p,v)=j_q(v,p)$, and $0 \leq j_q(p,v) \leq j_q(p,p)$ (see \cite{ChinburgRumely,BX06}). \cite[Construction~3.1]{FarbodMatt12} explains how to compute these functions using basic linear algebra. \medskip There exists a positive definite, symmetric bilinear form \[ \langle \cdot \, , \cdot \rangle_{\en} \colon \; \Div^0(G) \times \Div^0(G) \rightarrow {\mathbb Q} \] \[ \langle D_1 , D_2 \rangle_{\en} = \sum_{u,v \in V(G)}{D_1(u)j_q(u,v)D_2(v)} \] which is a canonical (i.e. independent of the choice of $q$) pairing on $\Div^0(G)$ (see \cite{FarbodMonodromy,FarbodMatt12}). It is called the {\em energy pairing} on $\Div^0(G)$. \medskip Let $\mathbf{1}$ denote the all-$1$'s divisor. For $D \in \Div(G)$ and $q\in V(G)$, following \cite{FarbodMatt12}, the {\em total potential functional} is defined as \[ \begin{aligned} b_q(D)&=\langle \mathbf{1} - n (q), D - \deg(D)(q)\rangle_{\en}\\ &= \sum_{v}\sum_{p}{j_q(p,v)D(v)} \ . \end{aligned} \] \subsection{Divisors and commutative algebra} \label{sec:divcomm} Any effective divisor $D$ gives rise to a monomial \[ {\mathbf x}^D := \prod_{v \in V(G)}{x_{v}^{D(v)}} \in {\mathbf R}\ . \] Associated to every graph $G$ there is a canonical ideal in ${\mathbf R}$ which encodes the linear equivalences of divisors on $G$: \[ \begin{aligned} {\mathbf I}_G &:= \langle {\mathbf x}^{D_1} - {\mathbf x}^{D_2} : \, D_1 \sim D_2 \text{ both effective divisors}\rangle \\ &= \span_K \{ {\mathbf x}^{D_1} - {\mathbf x}^{D_2} : \, D_1 \sim D_2 \text{ both effective divisors}\} \end{aligned} \] which was first introduced in \cite{CoriRossinSalvy02}. This ideal is graded by both $\Pic(G)$ and ${\mathbb Z}$. 
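Concretely, each proper nonempty subset $A \subset V(G)$ gives a pair of linearly equivalent effective divisors $D(A^c, A) \sim D(A, A^c)$, since their difference is the principal divisor $\Delta(\chi_A)$; the binomials ${\mathbf x}^{D(A^c, A)} - {\mathbf x}^{D(A, A^c)}$ therefore lie in ${\mathbf I}_G$. A sketch checking this on the $4$-cycle (example and names are ours):

```python
import numpy as np
from itertools import combinations

# 4-cycle C_4: vertices 0-1-2-3-0.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
n = 4
L = np.diag(adj.sum(axis=1)) - adj          # Laplacian: L @ f = Delta(f)

for k in range(1, n):                       # proper nonempty subsets A
    for A in combinations(range(n), k):
        chi = np.zeros(n, dtype=int)
        chi[list(A)] = 1                    # characteristic function chi_A
        Ac = [v for v in range(n) if v not in A]
        # D(A, A^c)(v) = #edges from v in A into A^c, and vice versa.
        D_A_Ac = np.array([adj[v, Ac].sum() if v in A else 0 for v in range(n)])
        D_Ac_A = np.array([adj[v, list(A)].sum() if v not in A else 0 for v in range(n)])
        # Their difference is Delta(chi_A), a principal divisor, so
        # D_Ac_A ~ D_A_Ac and x^{D_Ac_A} - x^{D_A_Ac} lies in I_G.
        assert np.array_equal(D_A_Ac - D_Ac_A, L @ chi)
print("all cut binomials verified on C_4")
```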
\begin{Remark} It is shown in \S\ref{sec:nicegrading} (and in \cite{FarbodFatemeh}) that, although $\Pic(G)$ has torsion elements, it provides a ``nice'' grading in the sense that Nakayama's lemma holds with respect to this grading and the concept of $\Pic(G)$-graded minimal free resolution makes sense in this context. \end{Remark} \medskip Once we fix a vertex $q$, there is a natural term order that gives rise to a particularly nice Gr\"obner basis for ${\mathbf I}_G$. This term order was also introduced in \cite{CoriRossinSalvy02}. Consider a total ordering of the set of variables $\{x_v: v \in V(G)\}$ compatible with the distances of vertices from $q$ in $G$: \begin{equation}\label{eq:dist} \dist(w,q) < \dist(v,q) \, \implies \, x_w < x_v \ . \end{equation} Here, the distance between two vertices in a graph is the number of edges in a shortest path connecting them. This ordering can be thought of as an ordering on the vertices induced by running the breadth-first search (BFS) algorithm starting at the root vertex $q$. The term order $<_q$ will denote the graded reverse lexicographic ordering (grevlex) on ${\mathbf R}$ induced by the total ordering on the variables given in \eqref{eq:dist}. \medskip The initial ideal ${\mathbf M}_G^q:=\ini_{<_q}({\mathbf I}_G)$ for $({\mathbf I}_G, <_q)$ is canonically defined (up to the choice of the distinguished vertex $q$). This ideal is extensively studied in \cite{PostnikovShapiro04}, where it is denoted by $M_G$. This ideal is naturally equipped with $\Div(G)$ (fine) and ${\mathbb Z}$ (coarse) gradings. One of the main results of \cite{CoriRossinSalvy02} is the following theorem -- see also \cite[Section~5]{FarbodFatemeh} where this result is reproved and generalized to higher syzygy modules. \begin{Theorem} \label{thm:Cori} A Gr\"obner basis of $({\mathbf I}_G, <_q)$ is \[ \left\{{\mathbf x}^{D(A^c, A)}-{\mathbf x}^{D(A, A^c)} : A \subsetneq V(G) , q\in A \right\} \ . 
\] Moreover, \begin{itemize} \item[(i)] $\LM({\mathbf x}^{D(A^c, A)}-{\mathbf x}^{D(A, A^c)})={\mathbf x}^{D(A^c, A)}$. \item[(ii)] It suffices to consider only those subsets $A$ of $V(G)$ such that both $G[A]$ and $G[A^c]$ are connected. In this case we obtain a {\em minimal Gr\"obner basis} of $({\mathbf I}_G, <_q)$. \end{itemize} \end{Theorem} As we will see, the minimal Gr\"obner basis described in part (ii) is also a minimal generating set (see also \cite{FarbodFatemeh}). \medskip \subsection{Potential theory and Gr\"obner weight functionals for ${\mathbf I}_G$}\label{sec:wt1} Let $\vartheta \in C^0(G, {\mathbb R})$ and think of it as a linear functional $\vartheta \colon \Div(G) \rightarrow {\mathbb R}$. For $f =\sum{c_i {\mathbf x}^{D_i}} \in {\mathbf R}$ the $\vartheta$-degree of $f$, denoted by $\deg_\vartheta(f)$, is the maximum value of $\vartheta(D_i)$. The $\vartheta$-initial form of $f$ is the sum of all terms $c_i {\mathbf x}^{D_i}$ such that $\vartheta(D_i)$ is maximum. For an ideal $I \subset {\mathbf R}$, the $\vartheta$-initial ideal $\ini_\vartheta(I)$ is the ideal generated by all $\vartheta$-initial forms. \medskip Fix a term order $<$ for ${\mathbf R}$. The functional $\vartheta$ is said to {\em represent $<$ for $I$} if $\ini_\vartheta(I)=\ini_{<}(I)$. It is known that for any term order $<$ and any ideal $I$, there is a {\em non-negative} and {\em integer-valued} functional representing $<$ for $I$ (\cite[Proposition~1.11]{SturmfelsGrobnerConvex}). \medskip In our situation there is a nice and direct interaction between Gr\"obner theory and potential theory. \begin{Lemma} \label{lem:bq} $b_q\colon \Div(G) \rightarrow {\mathbb Q}$ is a non-negative rational-valued functional representing $<_q$ for ${\mathbf I}_G$. \end{Lemma} \begin{proof} For $D \in \Div(G)$ we know $b_q(D)= \sum_{v,p}{j_q(p,v)D(v)}$, so the non-negativity and rationality follows immediately. 
By Theorem~\ref{thm:Cori}, it suffices (see \cite[proof of Proposition~1.11]{SturmfelsGrobnerConvex}) to check that for any $A \subsetneq V(G)$ with $q\in A$, we have \[ b_q(D(A^c, A)) > b_q(D(A, A^c)) \ . \] But $D(A, A^c)-D(A^c, A)=\Delta(\chi_A)$, where $\chi_A$ denotes the $\{0,1\}$-valued characteristic function of $A$. The Laplacian operator $\Delta$ is self-adjoint (see Remark~\ref{rmk:selfadjoint}), which means \[\sum_{v}{f(v) \Delta(g)(v)}=\sum_{v}{g(v) \Delta(f)(v)}\] for all $f, g \in {\mathcal M}(G)$. Therefore for all $f \in {\mathcal M}(G)$ we have \begin{equation}\label{eq:useful} \begin{aligned} \sum_{v}{j_q(p,v)\Delta(f)(v)} &= \sum_{v}{f(v)\Delta(j_q(p, \cdot))(v)} \\ &= \sum_{v}{f(v)(\delta_p(v)-\delta_q(v))} \\ &= f(p)-f(q) \ . \end{aligned} \end{equation} Therefore we have \[ \begin{aligned} b_q(D(A, A^c)-D(A^c, A)) &= b_q(\Delta(\chi_A)) \\ &= \sum_{v,p}{j_q(p,v)\Delta(\chi_A)(v)} \\ &= \sum_{p}{(\chi_A(p)-\chi_A(q))} \ . \end{aligned} \] The result now follows, because for any set $A \subsetneq V(G)$ with $q\in A$, we have $\chi_A(q)=1$, and there exists a vertex $p \in A^c$ with $\chi_A(p)=0$. \end{proof} \begin{Remark} \label{rmk:reducedpotential} $q$-reduced divisors (or $G$-parking functions with respect to $q$) can be defined as the normal forms of ${\mathbf R} / {\mathbf I}_G$ with respect to the Gr\"obner basis described in Theorem~\ref{thm:Cori}. It easily follows from Lemma~\ref{lem:bq} that a $q$-reduced divisor is precisely the unique (in each equivalence class) minimizer of the $b_q$ functional. See \cite{FarbodMatt12} for a precise statement and a different proof of this fact. \end{Remark} \medskip \begin{Definition}\label{def:M_intwt} We let $\vartheta_q$ denote the non-negative, integral functional associated to $b_q$ (i.e. obtained from $b_q$ by clearing the denominators). Clearly, $\vartheta_q$ will also represent $<_q$ for ${\mathbf I}_G$. 
\end{Definition} \subsection{Gr\"obner cone of ${\mathbf M}_G^q$} \label{sec:grocone} A modification of the proof of Lemma~\ref{lem:bq} shows that the rays of the Gr\"obner cone associated to ${\mathbf M}_G^q$, in a precise sense, correspond to Green's functions. The weight functional $\eta \in C^0(G, {\mathbb R})$ defined by $\eta(D)=\sum_{v \in V(G)}{\eta(v) D(v)}$ is in the Gr\"obner cone if and only if for any set $B \ne \emptyset$ with $q\not\in B$ we have \begin{equation}\label{eq:wt1char} \eta(\Delta(\chi_B)) = \sum_{v \in V(G)}{\eta(v) \Delta(\chi_B)(v)} = \sum_{v \in V(G)}{\chi_B(v) \Delta(\eta)(v)} >0 \ . \end{equation} In particular, for each vertex $p \ne q$, setting $B=\{p\}$ we must have: \begin{equation} \label{eq:MConeCond} \gamma_p := \Delta(\eta)(p) >0 \ . \end{equation} This condition is also sufficient because for all $B \ne \emptyset$ with $q\not\in B$ we have \[ \eta(\Delta(\chi_B))=\sum_{v \in V(G)}{\chi_B(v)\gamma_v}=\sum_{v \in B}{\gamma_v} \ . \] \medskip It follows that $\eta \in C^0(G, {\mathbb R})$ is a solution to $\Delta(\eta) = \gamma$ for the degree zero divisor $\gamma := \Delta(\eta) = \sum_{p \in V(G)} \gamma_p (p)$, where we set $\gamma_q := \Delta(\eta)(q)$. From the definition of the Green's function $j_q(p, v)$, and the fact that the Laplacian operator has a $1$-dimensional zero-eigenspace generated by the all-$1$ function $\mathbf{1}$, we obtain: \begin{equation}\label{eq:wt1} \eta=\sum_{p \in V(G)}\gamma_p j_q(p, \cdot ) + k \cdot \mathbf{1} \end{equation} for some constant $k \in {\mathbb R}$. We summarize these observations in the following theorem. \begin{Theorem} The weight functional $\eta \in C^0(G, {\mathbb R})$ represents $<_q$ for ${\mathbf I}_G$ if and only if there exist $k \in {\mathbb R}$ and real numbers $\gamma_p >0$ (for $p \in V(G)$) such that \[ \eta=\sum_{p \in V(G)}\gamma_p j_q(p, \cdot ) + k \cdot \mathbf{1} \ .
\] \end{Theorem} \medskip In other words $\eta$, up to constant functions, is in the interior of the cone generated by the vectors $(j_q(p,v))_{v \in V(G)}$ for various $p \in V(G)$. Note that these vectors are independent because the matrix $(j_q(p,v))_{p,v \in V(G) \backslash \{q\}}$ is invertible (see \cite[Construction~3.1]{FarbodMatt12}). The question of describing this Gr\"obner cone was asked by Bernd Sturmfels. \medskip \section{Lattices, Delaunay decompositions, total unimodularity, and infinite arrangements} \label{sec:latticesec} \subsection{Lattices and Delaunay decompositions} \label{sec:lattice} Let $\Lambda$ be a free ${\mathbb Z}$-module (abelian group), endowed with a positive definite symmetric bilinear pairing $\beta \colon \Lambda \times \Lambda \rightarrow {\mathbb Z}$. The pair $(\Lambda, \beta)$ (or just $\Lambda$, when $\beta$ is understood) is called a {\em free bilinear form space over ${\mathbb Z}$} or, more concisely, an {\em abstract ${\mathbb Z}$-lattice}. \medskip Let $(\Lambda, \beta)$ be an abstract ${\mathbb Z}$-lattice. We let $\Lambda_{{\mathbb R}} := \Lambda \otimes {\mathbb R}$. The bilinear pairing $\beta$ naturally extends to a bilinear pairing $\beta_{\mathbb R}$ on $\Lambda_{\mathbb R}$ by $\beta_{\mathbb R}(a \otimes {\mathbf u},b \otimes {\mathbf v} )=ab \ \beta({\mathbf u}, {\mathbf v})$. The dual ${\mathbb Z}$-module $\Lambda^{\vee} := \Hom_{{\mathbb Z}}(\Lambda , {\mathbb Z})$ is contained (via extension of scalars) in the dual real vector space $\Lambda_{{\mathbb R}}^{\vee} := \Hom_{{\mathbb Z}}(\Lambda , {\mathbb R}) = \Hom_{{\mathbb R}}(\Lambda_{\mathbb R}, {\mathbb R})= \Lambda^\vee \otimes {\mathbb R}$. The {\em non-degeneracy} of $\beta$ is the statement that the homomorphism \[ \begin{aligned} \Psi\colon \Lambda &\rightarrow \Lambda^\vee \\ {\mathbf v} &\mapsto \beta({\mathbf v} , \cdot) \end{aligned} \] is injective. Clearly every positive definite bilinear pairing is automatically non-degenerate. 
Therefore the natural extension $\Psi_{\mathbb R} \colon \Lambda_{\mathbb R} \rightarrow \Lambda_{\mathbb R}^\vee$ is also injective (e.g., because ${\mathbb R}$ is a flat ${\mathbb Z}$-module). Since these vector spaces have the same dimension, it follows that $\Psi_{\mathbb R}$ is indeed an isomorphism. In other words, in the language of bilinear forms, $\beta_{\mathbb R}$ is a {\em perfect pairing}\footnote{A perfect pairing is sometimes called a {\em unimodular pairing} in the literature. We will avoid this terminology to avoid confusion.} on $\Lambda_{\mathbb R}$. So, in this situation, any $\varphi \in \Lambda_{\mathbb R}^\vee$ is of the form $\varphi (\cdot) = \beta_{\mathbb R} ({\mathbf a} , \cdot)$ for some ${\mathbf a} \in \Lambda_{\mathbb R}$. \medskip Let $d \colon \Lambda_{\mathbb R} \times \Lambda_{\mathbb R} \rightarrow {\mathbb R}$ be any distance function on $\Lambda_{\mathbb R}$. The {\em Delaunay decomposition} of $\Lambda_{\mathbb R}$ with respect to the lattice $\Lambda$ and the distance function $d$ (not necessarily induced by the bilinear form) is defined as the collection of cells \[ A_{{\mathbf p}} = {\rm conv.hull} \{ {\mathbf s} \in \Lambda: d({\mathbf p},{\mathbf s}) \text{ is minimal}\} \] as ${\mathbf p}$ varies in $\Lambda_{\mathbb R}$. It is a classical fact (essentially due to Voronoi and Delaunay) that the collection of Delaunay cells $\{A_{{\mathbf p}}\}$ gives a locally finite, cellular decomposition (face to face tiling) of $\Lambda_{\mathbb R}$ which is invariant under the action of $\Lambda$ (see, e.g., \cite{Conway}). \medskip \subsection{Total unimodularity} Consider a (not necessarily minimal) finite set $\{\varphi_i\}_{i\in I}$ of generators for the free ${\mathbb Z}$-module $\Lambda^\vee$. Extension of scalars gives an inclusion $\Lambda^\vee \hookrightarrow \Lambda_{\mathbb R}^\vee$.
Clearly, for any subset $J \subseteq I$ such that $\{\varphi_i\}_{i\in J}$ generates $\Lambda^\vee$ as a ${\mathbb Z}$-module, the collection $\{\varphi_i \}_{i\in J}$ spans $\Lambda_{\mathbb R}^\vee$ as a real vector space (here we have identified $\varphi_i \otimes 1$ with $\varphi_i$). The converse is, of course, not true in general. \medskip \begin{Definition} Let $(\Lambda, \beta)$ be an abstract ${\mathbb Z}$-lattice. A finite set $\{\varphi_i\}_{i\in I}$ of generators for $\Lambda^\vee$ is called {\em totally unimodular} if for any subset $J \subseteq I$ such that the collection $\{\varphi_i\}_{i\in J}$ spans $\Lambda_{\mathbb R}^\vee$ as a real vector space, the collection $\{\varphi_i\}_{i\in J}$ generates $\Lambda^\vee$ as a ${\mathbb Z}$-module. \end{Definition} \begin{Example} Let $\Lambda={\mathbb Z}^2$, generated by ${\mathbf e}_1$ and ${\mathbf e}_2$, endowed with the obvious bilinear pairing induced by $\langle {\mathbf e}_i, {\mathbf e}_j \rangle = \delta_{i}(j)$. Let ${\mathbf e}_i^\ast \in ({\mathbb Z}^2)^\vee$ denote the dual basis element ${\mathbf e}_i^\ast({\mathbf e}_j)=\delta_{i}(j)$. Then \begin{itemize} \item $\{{\mathbf e}_1^\ast, {\mathbf e}_2^\ast, {\mathbf e}_1^\ast+{\mathbf e}_2^\ast\}$ generates $\Lambda^\vee$ and is totally unimodular. \item $\{{\mathbf e}_1^\ast, {\mathbf e}_2^\ast, {\mathbf e}_1^\ast+2{\mathbf e}_2^\ast\}$ generates $\Lambda^\vee$ but is not totally unimodular. The subcollection $\{{\mathbf e}_1^\ast, {\mathbf e}_1^\ast+2{\mathbf e}_2^\ast\}$ spans $({\mathbb R}^2)^\vee$ as a real vector space, but it does not generate $({\mathbb Z}^2)^\vee$. For example, ${\mathbf e}_2^\ast$ will not be in the ${\mathbb Z}$-module it generates. \end{itemize} \end{Example} \begin{Example}\label{ex:WU} The primary and most well-known examples of total unimodularity arise from totally unimodular matrices or, more generally, weakly unimodular matrices.
An $r \times m$ ($r \leq m$) integer matrix $A=(a_{ij})$ is called {\em weakly unimodular} if every $r \times r$ square submatrix of $A$ has determinant in the set $\{-1, 0 , 1\}$. If every square submatrix of $A$ has determinant in the set $\{-1, 0 , 1\}$, then $A$ is called a {\em totally unimodular} matrix. Any totally unimodular matrix is weakly unimodular. A weakly unimodular matrix which contains the identity matrix of size $r$ is automatically totally unimodular. \medskip Let $A$ be a weakly unimodular matrix. Let $\Lambda$ denote the row space $\Image(A^T) \hookrightarrow {\mathbb Z}^m$ with the bilinear pairing induced by the natural bilinear pairing on ${\mathbb Z}^m$. For $1 \leq j \leq m$ let $\varphi_j \in \Lambda^\vee$ denote the restriction of ${\mathbf e}_j^\ast \in ({\mathbb Z}^m)^\vee$ to $\Lambda$. Concretely, if we denote the $i$-th row ($1 \leq i \leq r$) of $A$ by ${\mathbf v}_i$, then each $\varphi_j$ is defined by $\varphi_j({\mathbf v}_i)=a_{ij}$. By Cramer's rule, the collection $\{\varphi_1, \ldots, \varphi_m\}$ is totally unimodular precisely because $A$ is weakly unimodular. \end{Example} \subsection{Infinite hyperplane arrangements} \label{sec:infarrag} Consider a finite collection $\{\varphi_i\}_{i\in I} \subset \Lambda_{\mathbb R}^\vee$ spanning $\Lambda_{\mathbb R}^\vee$ as a vector space over ${\mathbb R}$. For each ${\mathbf p} \in \Lambda_{\mathbb R}$ we denote by $C_{\mathbf p}$ the polyhedron in $\Lambda_{\mathbb R}$ defined by \[ C_{\mathbf p} = \{{\mathbf s} \in \Lambda_{\mathbb R}: \lfloor \varphi_i({\mathbf p}) \rfloor \leq \varphi_i({\mathbf s}) \leq \lceil \varphi_i({\mathbf p}) \rceil \text{ for all } i \in I \} \ . \] As usual, $\lfloor x \rfloor$ denotes the largest integer $n \leq x$, and $\lceil x \rceil$ denotes the smallest integer $n \geq x$. Clearly $C_{\mathbf s}=C_{\mathbf p}$ for all ${\mathbf s} \in \relint(C_{\mathbf p})$. 
We denote by ${\mathcal H}(\Lambda_{\mathbb R}, \{\varphi_i\}_{i\in I})$ the collection of all polyhedra $C_{\mathbf p}$ for ${\mathbf p} \in \Lambda_{\mathbb R}$. \medskip The following result is well known for the case of totally unimodular matrices (Example~\ref{ex:WU}) (see, e.g., \cite{OdaSeshadri, Erdahl}). We give a proof suited for our general setting. \begin{Theorem}\label{thm:totunim} Fix a finite collection $\{\varphi_i\}_{i\in I} \subset \Lambda_{\mathbb R}^\vee$ which spans $\Lambda_{\mathbb R}^\vee$ as a vector space over ${\mathbb R}$. \begin{itemize} \item[(i)] ${\mathcal H}(\Lambda_{\mathbb R}, \{\varphi_i\}_{i\in I})$ is a polyhedral cell decomposition of $\Lambda_{\mathbb R}$ by bounded convex polyhedra. This cell decomposition is invariant under translation by the group \[\{{\mathbf s} \in \Lambda_{\mathbb R}: \varphi_i({\mathbf s}) \in {\mathbb Z} \text{ for all } i \in I \}\] which is contained in the set of $0$-dimensional polyhedra in ${\mathcal H}(\Lambda_{\mathbb R}, \{\varphi_i\}_{i\in I})$. \item[(ii)] If, further, $\{\varphi_i\}_{i\in I} \subset \Lambda^\vee$ and it generates $\Lambda^\vee$, then ${\mathcal H}(\Lambda_{\mathbb R}, \{\varphi_i\}_{i\in I})$ is invariant under the translation action of $\Lambda$, which is contained in the set of $0$-dimensional polyhedra in ${\mathcal H}(\Lambda_{\mathbb R}, \{\varphi_i\}_{i\in I})$. \item[(iii)] If, further, $\{\varphi_i\}_{i\in I}$ is totally unimodular, then $\Lambda$ coincides with the set of $0$-dimensional polyhedra in ${\mathcal H}(\Lambda_{\mathbb R}, \{\varphi_i\}_{i\in I})$. Moreover, ${\mathcal H}(\Lambda_{\mathbb R}, \{\varphi_i\}_{i\in I})$ coincides with the Delaunay decomposition of $\Lambda_{\mathbb R}$ with respect to the lattice $\Lambda$ and the metric induced by \begin{equation} \label{eq:norm} \lVert {\mathbf p}\rVert^2 = \sum_{i \in I}{\lvert \varphi_i({\mathbf p}) \rvert^2} \ .
\end{equation} \end{itemize} \end{Theorem} \begin{proof} Consider the map \[ \begin{aligned} \varPhi \colon \Lambda_{\mathbb R} &\longrightarrow {\mathbb R}^I \\ {\mathbf p} &\mapsto (\varphi_i({\mathbf p}))_{i \in I} \ . \end{aligned} \] Since $\{\varphi_i\}_{i\in I}$ spans $\Lambda_{\mathbb R}^\vee$ we know $\varPhi$ is injective. Let $\{\varepsilon_i\}_{i \in I}$ denote the standard basis of ${\mathbb R}^I$, and let $\{\varepsilon_i^\ast\}_{i \in I}$ denote the dual basis of $({\mathbb R}^I)^\vee$. Then ${\mathcal H}({\mathbb R}^I, \{\varepsilon_i^\ast\}_{i \in I})$ is clearly the Delaunay decomposition of ${\mathbb R}^I$ with respect to the lattice ${\mathbb Z}^I$ with its standard pairing (induced by $\langle \varepsilon_i , \varepsilon_j \rangle = \delta_{i}(j)$). The decomposition ${\mathcal H}(\Lambda_{\mathbb R}, \{\varphi_i\}_{i\in I})$ is the decomposition of $\Lambda_{\mathbb R}$ induced by $\varPhi$ from this Delaunay decomposition of ${\mathbb R}^I$. It consists of $\varPhi^{-1}(C)$ for various cells $C$ in the Delaunay decomposition of ${\mathbb R}^I$ with $\varPhi^{-1}(\relint(C)) \ne \emptyset$. \medskip \noindent (i) immediately follows from the above considerations. \medskip \noindent For (ii) note that, since $\{\varphi_i\}_{i\in I}$ generates $\Lambda^\vee$, we have $\Lambda = \varPhi^{-1}({\mathbb Z}^I)$. \medskip \noindent For (iii), let $A=\varPhi^{-1}(C)$ for a cell $C$ in the Delaunay decomposition of ${\mathbb R}^I$ with $\varPhi^{-1}(\relint(C)) \ne \emptyset$. By the total unimodularity assumption, $A$ is $0$-dimensional if and only if $A=\{{\mathbf s}\}$ for some ${\mathbf s} \in \Lambda$. Let $B$ be a cell in the Delaunay decomposition of $\Lambda_{\mathbb R}$. By definition this means there exists some ${\mathbf p}_0 \in \Lambda_{\mathbb R}$ such that \[ B=A_{{\mathbf p}_0}= {\rm conv.hull} \{ {\mathbf s} \in \Lambda: \lVert {\mathbf p}_0-{\mathbf s}\rVert \text{ is minimal}\} \ .
\] Consider $\varPhi({\mathbf p}_0) \in {\mathbb R}^I$, and let $B'$ denote the corresponding Delaunay cell in ${\mathbb R}^I$, i.e. \[ B'=B'_{\varPhi({\mathbf p}_0)}= {\rm conv.hull} \{ {\mathbf a} \in {\mathbb Z}^I: \langle \varPhi({\mathbf p}_0)-{\mathbf a}, \varPhi({\mathbf p}_0)-{\mathbf a}\rangle \text{ is minimal}\} \ . \] $B$ is obviously contained in $\varPhi^{-1}(B')$. However the convex polyhedron $\varPhi^{-1}(B')$ is the convex hull of its $0$-dimensional faces. Therefore $B=\varPhi^{-1}(B')$. \end{proof} \begin{Remark} \label{rmk:delon} \begin{itemize} \item[] \item[(i)] Under the total unimodularity assumption, by Theorem~\ref{thm:totunim}(iii), we obtain a finite polyhedral cell decomposition of the quotient torus $\Lambda_{\mathbb R} / \Lambda$. This cell decomposition is essential in the study of our binomial ideals. \item[(ii)] If the totally unimodular collection is coming from a weakly unimodular matrix as in Example~\ref{ex:WU}, then the norm in \eqref{eq:norm} coincides with the standard norm induced by the bilinear form $\beta_{\mathbb R}$. This is because the $\varphi_j$'s are precisely the restriction of the ${\mathbf e}_j^\ast$'s to $\Lambda_{\mathbb R}$. \end{itemize} \end{Remark} \section{Potential theory and the cellular free resolution of ${\mathbf I}_G$} \label{sec:IGresol} Here we use potential theory and the energy pairing to give a self-contained and direct solution to the problem of finding a minimal polyhedral cellular free resolution of the ideal ${\mathbf I}_G$. \subsection{Minimal cellular free resolutions} \label{sec:res} Let $S$ be a polynomial ring in $r$ variables. Let ${\mathcal C}$ be a regular cell complex. If we {\em label} the vertices ($0$-dimensional cells) by monomials in $S$, we may extend the labeling to arbitrary faces by labeling an arbitrary face $F$ with the {\em least common multiple} of the monomial labels on the vertices of $F$. 
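Since the least common multiple of monomials is computed by taking componentwise maxima of exponent vectors, the labeling rule can be sketched as follows (a minimal illustration of ours, with made-up exponent vectors rather than notation from the text):

```python
def lcm_label(exponent_vectors):
    # lcm of monomials x^a, x^b, ... = x^(componentwise max of a, b, ...)
    return [max(col) for col in zip(*exponent_vectors)]

# A hypothetical 2-cell whose vertices carry the labels x^(2,0,1),
# x^(0,3,1), x^(1,1,2) receives the label x^(2,3,2):
assert lcm_label([[2, 0, 1], [0, 3, 1], [1, 1, 2]]) == [2, 3, 2]
```

The same rule applies verbatim to Laurent monomials with negative exponents, which is the situation for the module ${\mathbf U}_G$ below.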
In this way we obtain a {\em labeled cell complex}, which leads to a complex of free ${\mathbb Z}^r$-graded $S$-modules \begin{equation} \label{eq:freecomplex} {\mathcal F}_{{\mathcal C}} = \bigoplus_{\emptyset \ne F \in {\mathcal C}}{S(-{\mathbf m}_F)} \end{equation} where ${\mathbf m}_F$ denotes the monomial label of the face $F$. The homological degree of $S(-{\mathbf m}_F)$ is $\dim(F)$. Let $[F]$ denote the generator of $S(-{\mathbf m}_F)$. The differential of ${\mathcal F}_{{\mathcal C}}$ is the homogenized differential of the cell complex ${\mathcal C}$: \[ \partial([F]) = \sum_{{\rm codim(F,F')=1} \atop F' \subset F}{\varepsilon (F,F') \frac{{\mathbf m}_F}{{\mathbf m}_{F'}} \ [F']} \] where $\varepsilon (F,F') \in \{-1, +1\}$ denotes the incidence function indicating the orientation of $F'$ in the boundary of $F$ (see \cite[IX.5]{Massey} or \cite[Section 6.2]{Bruns}). Note that the length of $({\mathcal F}, \partial)$ is the dimension of ${\mathcal C}$. \medskip It is shown in \cite[Proposition~1.2]{BayerSturmfels} that the complex $({\mathcal F}, \partial)$ is exact if and only if every subcomplex ${\mathcal C}_{\leq {\mathbf m}}$ (i.e. the subcomplex of ${\mathcal C}$ consisting of all cells whose labels divide the monomial ${\mathbf m}$) is acyclic over $K$ (i.e. its homology with $K$ coefficients is only in degree $0$). In this case $({\mathcal F}, \partial)$ is called a {\em cellular free resolution}. If all cells are polyhedral it is called a {\em polyhedral cellular free resolution}. It is a {\em minimal} cellular free resolution if all ${{\mathbf m}_F}/{{\mathbf m}_{F'}}$ appearing in the differential maps are non-units. See \cite{BayerSturmfels} for more details. \subsection{Principal lattice with the energy pairing} Recall the ${\mathbb Z}$-module $\Prin(G)$ is defined as the image of the Laplacian operator $\Delta \colon {\mathcal M}(G) \to \Div(G)$. We have introduced two different canonical bilinear forms on this group. 
One is the bilinear form induced from the bilinear form on $C_0(G,{\mathbb Z})=\Div(G)$ defined in \S\ref{sec:Background}. The bilinear form that is most relevant in this section is the one induced from the energy pairing defined in \S\ref{sec:potential}. \begin{Definition} By a {\em principal lattice} we will mean the pair $(\Prin(G), \langle \cdot , \cdot \rangle_{\en})$ where \[ \langle \cdot , \cdot \rangle_{\en} \colon \Prin(G) \times \Prin(G) \rightarrow {\mathbb Z} \] is the restriction of the energy pairing to $\Prin(G) \subseteq \Div^0(G)$. \end{Definition} \begin{Remark} It is easy to see (using \eqref{eq:useful}) that if $D \in \Prin(G)$ then for all $E \in \Div^0(G)$ we have $\langle E , D \rangle_{\en} \in {\mathbb Z}$ and therefore \begin{itemize} \item[(i)] The restriction of the energy pairing to $\Prin(G)$ is ${\mathbb Z}$-valued. \item[(ii)] The energy pairing descends to a well-defined pairing on $\Pic^0(G)$, which is shown to be non-degenerate in \cite{FarbodMonodromy}. \end{itemize} \end{Remark} The principal lattice is an abstract ${\mathbb Z}$-lattice in the sense of \S\ref{sec:lattice}. Its ambient vector space $\Prin(G)_{\mathbb R} = \Prin(G) \otimes {\mathbb R}$ coincides with $\Div_{{\mathbb R}}^0(G) = \Div^0(G) \otimes {\mathbb R} \subset C_0(G,{\mathbb R})$. Our next goal is to find a nice collection of functionals for this lattice. For each $e \in {\mathbb E}(G)$ we define the functional $\zeta_e \in \Div_{{\mathbb R}}^0(G)^{\vee}$ by \[ \zeta_e(\cdot) = \langle \partial(e) , \cdot \rangle_{\en} \ . \] \begin{Lemma} \label{lem:lookgraphical} \begin{itemize} \item[] \item[(i)] Any $D \in \Div_{{\mathbb R}}^0(G)$ is of the form $D=\Delta(f)$ for some $f \in C^0(G, {\mathbb R})$. \item[(ii)] For $D = \Delta(f) \in \Div_{{\mathbb R}}^0(G)$ we have $\zeta_e(D)=(d f)(e)$. \end{itemize} \end{Lemma} \begin{proof} (i) The kernel of $\Delta$ on $C^0(G, {\mathbb R})$ consists of the constant functions, so the image of $\Delta$ has dimension $\lvert V(G) \rvert - 1$ and therefore coincides with $\Div_{{\mathbb R}}^0(G)$.
(ii) We have, using \eqref{eq:useful} \[ \begin{aligned} \langle \partial(e) , D \rangle_{\en} &=\langle \partial(e) , \Delta(f) \rangle_{\en}\\ &=\sum_{u,v \in V(G)}{(\delta_{e_+}(u)-\delta_{e_-}(u))j_q(u,v)\Delta(f)(v)}\\ &=\sum_{u\in V(G)}{(\delta_{e_+}(u)-\delta_{e_-}(u)) \sum_{v \in V(G)}j_q(u,v)\Delta(f)(v)}\\ &=\sum_{u \in V(G)}{(\delta_{e_+}(u)-\delta_{e_-}(u))(f(u)-f(q))}\\ &= f(e_+)-f(e_-) \ . \end{aligned} \] \end{proof} \begin{Proposition}\label{prop:rightfunctionals} \begin{itemize} \item[] \item[(i)] $\{\zeta_e\}_{e \in {\mathbb E}(G)} \subset \Prin(G)^\vee$. \item[(ii)] $\{\zeta_e\}_{e \in {\mathbb E}(G)}$ generates $\Prin(G)^\vee$. \item[(iii)] $\{\zeta_e\}_{e \in {\mathbb E}(G)}$ is totally unimodular for the principal lattice. \end{itemize} \end{Proposition} \begin{proof} (i) We need to show that $\zeta_e(D) \in {\mathbb Z}$ for all $D \in \Prin(G)$. Let $D = \Delta(f)$ for $f \in {\mathcal M}(G)$. Then by Lemma~\ref{lem:lookgraphical}(ii) $\zeta_e(D) = (d f)(e)$ which is an integer because $f$ is integer-valued. \medskip (ii) Let $\zeta$ be an arbitrary element of $\Prin(G)^\vee$. We need to show that $\zeta=\sum_{e \in {\mathbb E}(G)}{a_e\zeta_e}$ for some integers $a_e$. Since $\zeta \in \Div_{{\mathbb R}}^0(G)^\vee$ and $\langle \cdot , \cdot \rangle_{\en}$ is positive definite (and therefore non-degenerate), we must have $\zeta(\cdot) = \langle {\mathbf a} , \cdot \rangle_{\en}$ for some ${\mathbf a} \in \Div_{{\mathbb R}}^0(G)$ (see \S\ref{sec:lattice}). For all $ p \in V(G) \backslash \{q\}$ we have (see \eqref{eq:useful}) \begin{equation} \label{eq:integervalues} \begin{aligned} \langle {\mathbf a} , \Delta(\chi_p) \rangle_{\en} &=\sum_{u,v \in V(G)}{{\mathbf a}(u) j_q(u,v)\Delta(\chi_p)(v)}\\ &=\sum_{u\in V(G)}{{\mathbf a}(u) \sum_{v \in V(G)}j_q(u,v)\Delta(\chi_p)(v)}\\ &=\sum_{u \in V(G)}{{\mathbf a}(u) (\chi_p(u)-\chi_p(q))}\\ &= {\mathbf a}(p) \ . 
\end{aligned} \end{equation} Since $\zeta \in \Prin(G)^\vee$ we must have ${\mathbf a}(p) = \langle {\mathbf a} , \Delta(\chi_p) \rangle_{\en} \in {\mathbb Z}$ for all $p \in V(G) \backslash \{q\}$. Since ${\mathbf a}(q)=-\sum_{p \ne q}{{\mathbf a}(p)}$ we obtain ${\mathbf a} \in \Div^0(G)$. Let \begin{equation}\label{eq:telescope} {\mathbf a} = \sum_{p \in V(G)}{{\mathbf a}(p)(p)}=\sum_{p \ne q}{{\mathbf a}(p)((p)-(q))} \ . \end{equation} Since $G$ is connected, for each $p \ne q$ there is a directed path from $q$ to $p$ consisting of some oriented edges $\{e^{(i)}\}_{1 \leq i \leq \ell}$ such that $e^{(1)}_-=q$, $e^{(\ell)}_+=p$, and $e^{(i)}_+ = e^{(i+1)}_-$ for $ 1 \leq i \leq \ell-1$. We may write \[ (p)-(q)=\sum_{i=1}^{\ell}{(e^{(i)}_+ -e^{(i)}_-)}=\sum_{i=1}^{\ell}{\partial (e^{(i)})} \ . \] Substituting this in \eqref{eq:telescope}, we conclude that ${\mathbf a}=\sum_{e \in {\mathbb E}(G)}{a_e \partial(e)}$ for some integers $a_e$. Therefore $\zeta=\sum_{e \in {\mathbb E}(G)}{a_e\zeta_e}$ as we want. \medskip (iii) Assume $J \subseteq {\mathbb E}(G)$ is such that the collection $\{\zeta_e\}_{e \in J}$ spans $\Div_{{\mathbb R}}^0(G)^\vee$ as a real vector space. We need to show that $\{\zeta_e\}_{e \in J}$ also generates $\Prin(G)^\vee$ as a ${\mathbb Z}$-module. Let $\zeta$ be an arbitrary element of $\Prin(G)^\vee$. Then $\zeta = \sum_{e \in J}{b_e\zeta_e}$ for some $b_e \in {\mathbb R}$ because $\{\zeta_e\}_{e \in J}$ spans $\Div_{{\mathbb R}}^0(G)^\vee$. In other words \[ \zeta(\cdot) = \langle {\mathbf b} , \cdot \rangle_{\en} \quad \text{ with }\quad {\mathbf b} = \sum_{e \in J}{b_e\partial(e)} \] for some $b_e \in {\mathbb R}$. We need to show that $b_e \in {\mathbb Z}$ for all $e \in J$. A computation identical to \eqref{eq:integervalues} shows that we have ${\mathbf b} \in \Div^0(G)$. 
It is a well-known classical fact (due to Poincar\'e) that the incidence matrix of $G$ is totally unimodular (see, e.g., \cite[Proposition~5.3]{BiggsBook} and \S\ref{sec:cutlattice}). So $\sum_{e \in J}{b_e\partial(e)} \in \Div^0(G)$ automatically implies that all $b_e$'s must be integers. \end{proof} \begin{Remark} It also follows from the proof of Proposition~\ref{prop:rightfunctionals}(ii) that \begin{itemize} \item[(i)] $\Prin(G)^\vee \cong \Div^0(G)$ and a canonical isomorphism is furnished by the energy pairing. \item[(ii)] $C_1(G, {\mathbb Z}) \xrightarrow{\partial} C_0(G, {\mathbb Z}) \xrightarrow{\deg} {\mathbb Z} \rightarrow 0$ is an exact sequence. This statement, when ${\mathbb Z}$ is replaced with ${\mathbb R}$, is classical (see, e.g., \cite[Proposition~12.1 and Proposition~28.1]{Biggs97}). \end{itemize} \end{Remark} \medskip We are now ready to apply the results in \S\ref{sec:infarrag} to this setting. \begin{Theorem} \label{thm:PrinDel} Let ${\mathcal H}(\Div^0_{\mathbb R}(G), \{\zeta_e\}_{e\in {\mathbb E}(G)}) = \{C_{{\mathbf a}}\}$ be the collection of all polyhedra \begin{equation} \label{eq:Ca} C_{\mathbf a} = \{{\mathbf b} \in \Div^0_{\mathbb R}(G): \lfloor \zeta_e({\mathbf a}) \rfloor \leq \zeta_e({\mathbf b}) \leq \lceil \zeta_e({\mathbf a}) \rceil \text{ for all } e \in {\mathbb E}(G) \} \end{equation} as ${\mathbf a}$ varies in $\Div^0_{\mathbb R}(G)$. Then \begin{itemize} \item[(i)] $\{C_{{\mathbf a}}\}$ is a polyhedral cell decomposition of $\Div^0_{\mathbb R}(G)$ by bounded convex polyhedra. \item[(ii)] The cell decomposition $\{C_{{\mathbf a}}\}$ is invariant under the translation by the lattice $\Prin(G)$. \item[(iii)] The set of $0$-dimensional cells in $\{C_{{\mathbf a}}\}$ coincides with $\Prin(G)$.
\item[(iv)] $\{C_{{\mathbf a}}\}$ is the same as the Delaunay cell decomposition of $\Div^0_{\mathbb R}(G)$ with respect to the lattice $\Prin(G)$ and the metric induced by the norm \begin{equation} \label{eq:norm2} \lVert {\mathbf p} \rVert = \sqrt{\langle {\mathbf p}, {\mathbf p} \rangle_{\en}} = \sqrt{{\mathcal E}({\mathbf p})} \ . \end{equation} \item[(v)] $\{C_{{\mathbf a}}\}$ descends to a finite polyhedral cell decomposition of $\Div^0_{\mathbb R}(G) / \Prin(G)$. \end{itemize} \end{Theorem} \begin{proof} This result follows from Proposition~\ref{prop:rightfunctionals}, Theorem~\ref{thm:totunim}, and Remark~\ref{rmk:delon}(i). We only need to show that the norm defined in \eqref{eq:norm2} is compatible with the one considered in \eqref{eq:norm}. By Lemma~\ref{lem:lookgraphical}(i) any ${\mathbf p} \in \Div^0_{\mathbb R}(G)$ is of the form $\Delta(f)$ for some $f \in C^0(G, {\mathbb R})$. By \eqref{eq:useful}, Lemma~\ref{lem:lookgraphical}(ii), and Remark~\ref{rmk:selfadjoint} we have \[ \begin{aligned} {\mathcal E}({\mathbf p}) &= \langle \Delta(f), \Delta(f) \rangle_{\en} \\ &= \sum_{u \in V(G)}{f(u) \Delta(f) (u)} \\ &= \frac{1}{2}\sum_{u \in V(G)}{f(u) (\partial d f) (u)} \\ &=\frac{1}{2}\sum_{e \in {\mathbb E}(G)}{(d f)(e)(d f)(e)}\\ &=\frac{1}{2}\sum_{e \in {\mathbb E}(G)}{\lvert \zeta_e({\mathbf p}) \rvert ^2}\ . \end{aligned} \] So the norm defined in \eqref{eq:norm2} is proportional to the norm defined in \eqref{eq:norm} and they induce the same Delaunay cell decomposition. \end{proof} \medskip The Delaunay cell decomposition $\{C_{{\mathbf a}}\}$ of Theorem~\ref{thm:PrinDel} will be denoted by ${\Del}(\Prin(G))$. The induced finite cell decomposition of $\Div^0_{\mathbb R}(G) / \Prin(G)$ will be denoted by ${\Del}(\Prin(G)) / \Prin(G)$.
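To make these objects concrete, the following sketch (ours, not part of the original text) verifies Lemma~\ref{lem:lookgraphical}(ii) and the identity ${\mathcal E}({\mathbf p})=\frac{1}{2}\sum_{e \in {\mathbb E}(G)}\lvert\zeta_e({\mathbf p})\rvert^2$ on the triangle $K_3$ with base vertex $q$; the Green's function is hardcoded as the inverse of the reduced Laplacian, and all helper names are ours:

```python
from fractions import Fraction

# Triangle K_3 with vertices 0, 1, 2 (q = 0) and both orientations of each
# edge, matching the convention that E(G) contains e together with its reversal.
V = [0, 1, 2]
q = 0
edges = [(0, 1), (1, 2), (2, 0)]
EE = edges + [(b, a) for (a, b) in edges]

def laplacian(f):
    # Delta(f)(v) = sum over directed edges e with head e_+ = v of f(e_+) - f(e_-)
    return [sum(f[h] - f[t] for (t, h) in EE if h == v) for v in V]

# Green's function j_q(u, v): inverse of the reduced Laplacian [[2,-1],[-1,2]]
# (row and column q removed), extended by zero on row/column q.
JRED = [[Fraction(2, 3), Fraction(1, 3)], [Fraction(1, 3), Fraction(2, 3)]]

def j(u, v):
    return Fraction(0) if q in (u, v) else JRED[u - 1][v - 1]

def energy(D, E):
    # Energy pairing <D, E>_en = sum_{u,v} D(u) j_q(u, v) E(v)
    return sum(D[u] * j(u, v) * E[v] for u in V for v in V)

def zeta(e, D):
    # zeta_e(D) = <partial(e), D>_en with partial(e) = (e_+) - (e_-)
    t, h = e
    bd = [0] * len(V)
    bd[h], bd[t] = bd[h] + 1, bd[t] - 1
    return energy(bd, D)

f = [0, 3, 5]        # an integer-valued function on the vertices
D = laplacian(f)     # hence a principal divisor, of degree zero
assert sum(D) == 0

# Lemma (lookgraphical)(ii): zeta_e(Delta(f)) = f(e_+) - f(e_-) = (df)(e)
for (t, h) in EE:
    assert zeta((t, h), D) == f[h] - f[t]

# The norm identity from the proof above: <D,D>_en = (1/2) sum_e |zeta_e(D)|^2
assert energy(D, D) == Fraction(1, 2) * sum(zeta(e, D) ** 2 for e in EE)
```

Exact rational arithmetic via \texttt{fractions.Fraction} keeps the Green's function entries (which have denominators equal to the number of spanning trees) free of floating-point error.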
\medskip \begin{Remark} \label{rmk:cells} \begin{itemize} \item[] \item[(i)] Since $\zeta_{\bar{e}}=-\zeta_e$ for all $e \in {\mathbb E}(G)$ we could alternatively define $C_{\mathbf a}$ in \eqref{eq:Ca} as \[ \{{\mathbf b} \in \Div^0_{\mathbb R}(G): \zeta_e({\mathbf b}) \leq \lceil \zeta_e({\mathbf a}) \rceil \text{ for all } e \in {\mathbb E}(G) \} \ . \] It follows that open cells in this cell complex correspond precisely to equivalence classes of points, where ${\mathbf a} \sim {\mathbf b}$ if and only if $\lceil \zeta_e({\mathbf a}) \rceil = \lceil \zeta_e({\mathbf b}) \rceil$ for all $e \in {\mathbb E}(G)$. \item[(ii)] By Lemma~\ref{lem:lookgraphical}(ii) the local picture at the origin is the image of the graphic arrangement defined in \S\ref{sec:BG} under the map $\Delta$. \item[(iii)] The cell complexes ${\Del}(\Prin(G))$ and ${\Del}(\Prin(G)) / \Prin(G)$ are related to the cell complexes ${\Del}(L(G))$ and ${\Del}(L(G)) / L(G)$ (defined in \S\ref{sec:cutlattice}) by the (restricted) boundary map (see Remark~\ref{rmk:isometry} and Remark~\ref{rmk:ResolRelation}). The finite cell complex ${\Del}(L(G)) / L(G)$ and the finite cell complex ${\Del}(\Prin(G)) / \Prin(G)$ have the same $f$-vector (i.e. the same number of $i$-dimensional faces for all $i$). \end{itemize} \end{Remark} \medskip The following lemma will be used in the proof of Theorem~\ref{thm:Ures}. \begin{Lemma}\label{lem:contract} Fix a divisor $E \in \Div(G)$. The subcomplex of $\Del(\Prin(G))$ on the lattice points $P(E)=\{D \in \Prin(G): D \leq E\}$ is a polyhedral subdivision of a contractible space. \end{Lemma} \begin{proof} $P(E)$ is precisely the set of lattice points inside the closed convex polytope $Q(E)=\{{\mathbf a} \in \Div^0_{\mathbb R}(G): {\mathbf a} \leq E\}$. The subcomplex of $\Del(\Prin(G))$ consisting of cells on the lattice points $P(E)$ consists of all Delaunay cells on these lattice points. Recall $\Del(\Prin(G))$ is a tiling of the ambient space. 
Therefore this subcomplex forms a space which is homotopy equivalent to the polytope $Q(E)$ itself, and hence contractible. \end{proof} \subsection{Labeling $\Del(\Prin(G))$ and the minimal free resolution of ${\mathbf I}_G$} \label{sec:labelDelPrin} Let ${\mathbf T} = K[{\mathbf x},{\mathbf x}^{-1}]$ denote the Laurent polynomial ring in variables $\{x_v: v \in V(G)\}$. Clearly ${\mathbf T}$ is a module over ${\mathbf R}$. Consider the ${\mathbf R}$-submodule ${\mathbf U}_G \subset {\mathbf T}$ generated by Laurent monomials $\{{\mathbf x}^D: D \in \Prin(G)\}$. This Laurent monomial module ${\mathbf U}_G$ may be thought of as the ``universal cover'' of ${\mathbf I}_G$ and many questions about ${\mathbf I}_G$ can be reduced to questions about ${\mathbf U}_G$. For example, the free resolutions of ${\mathbf U}_G$ and ${\mathbf I}_G$ are closely related. See \cite{BayerSturmfels} for an extensive study of this relation. Since the only effective divisor in $\Prin(G)$ is the all-$0$ divisor, the results of \cite{BayerSturmfels} apply to our situation. \medskip Consider the cell decomposition $\Del(\Prin(G))$. By Theorem~\ref{thm:PrinDel} the set of $0$-dimensional cells in $\Del(\Prin(G))$ is precisely $\Prin(G)$. We will label each $0$-cell $D \in \Prin(G)$ by the Laurent monomial ${\mathbf x}^D$. As usual, we let the label of any other cell be the least common multiple of the labels of its vertices. This labeled cell complex leads to a complex of free $\Div(G)$-graded ${\mathbf R}$-modules \[ {\mathcal F}_G:= {\mathcal F}_{\Del(\Prin(G))} = \bigoplus_{\emptyset \ne F \in \Del(\Prin(G))}{{\mathbf R}(-{\mathbf m}_F)} \] where ${\mathbf m}_F$ denotes the monomial label of the face $F$. Let $[F]$ denote the generator of ${\mathbf R}(-{\mathbf m}_F)$.
The differential of ${\mathcal F}_G$ is the homogenized differential (boundary) operator of the cell complex $\Del(\Prin(G))$: \begin{equation} \label{eq:differntials} \partial([F]) = \sum_{{\rm codim(F,F')=1} \atop F' \subset F}{\varepsilon (F,F') \frac{{\mathbf m}_F}{{\mathbf m}_{F'}} \ [F']} \end{equation} where $\varepsilon (F,F') \in \{-1, +1\}$ denotes the incidence function indicating the orientation of $F'$ in the boundary of $F$. \medskip \begin{Lemma}\label{lem:nounit} \begin{itemize} \item[] \item[(i)] Let ${\mathbf a} \in \Div^0_{\mathbb R}(G)$. Then ${\mathbf a}(v)=\sum_{e_+=v}\zeta_e({\mathbf a})$. \item[(ii)] Let $F=C_{\mathbf a}$ be a cell in $\Del(\Prin(G))$ corresponding to a point ${\mathbf a} \in \Div^0_{\mathbb R}(G)$ (i.e. ${\mathbf a} \in \relint(F)$). Then ${\mathbf m}_F= {\mathbf x}^E$ where $E \in \Div(G)$ is defined by \begin{equation} \label{eq:prinlabels} E(v)= \sum_{e_+=v} \lceil \zeta_e({\mathbf a}) \rceil \ . \end{equation} \item[(iii)] For distinct faces $F' \subsetneq F$ of $\Del(\Prin(G))$ we have ${\mathbf m}_F \ne {\mathbf m}_{F'}$. \end{itemize} \end{Lemma} \begin{proof} (i) By Lemma~\ref{lem:lookgraphical}(i) we may write ${\mathbf a} = \Delta(f)$ for some $f \in C^0(G, {\mathbb R})$. By definition we have $\Delta(f) = \sum_{v}\sum_{e_+=v}(f(e_+)-f(e_-)) (v)$. Therefore, it follows from Lemma~\ref{lem:lookgraphical}(ii) that ${\mathbf a}(v)=\sum_{e_+=v}\zeta_e({\mathbf a})$. (ii) follows from (i) and the fact that open cells in $\Del(\Prin(G))$ correspond precisely to equivalence classes of points, where ${\mathbf a} \sim {\mathbf b}$ if and only if $\lceil \zeta_e({\mathbf a}) \rceil = \lceil \zeta_e({\mathbf b}) \rceil$ for all $e \in {\mathbb E}(G)$ (Remark~\ref{rmk:cells}(i)). \medskip (iii) Let $F=C_{\mathbf a}$ for ${\mathbf a} \in \relint(F)$ and $F'=C_{{\mathbf a}'}$ for ${\mathbf a}' \in \relint(F')$.
Since ${\mathbf a}'$ is in $F$ as well, it satisfies $\zeta_e({\mathbf a}') \leq \lceil \zeta_e({\mathbf a}) \rceil$ for all $e \in {\mathbb E}(G)$. Therefore we have $\lceil \zeta_e({\mathbf a}')\rceil \leq \lceil \zeta_e({\mathbf a}) \rceil$. But since $F' \ne F$ there must exist some $e$ such that $\zeta_e({\mathbf a}') \in {\mathbb Z}$ but $\zeta_e({\mathbf a}) \not \in {\mathbb Z}$ and therefore $\lceil \zeta_e({\mathbf a}')\rceil < \lceil \zeta_e({\mathbf a}) \rceil$. The result now follows from part (ii) because for this edge, by \eqref{eq:prinlabels}, the exponent of $x_{e_+}$ in ${\mathbf m}_{F'}$ must be strictly less than the exponent of $x_{e_+}$ in ${\mathbf m}_{F}$. \end{proof} \begin{Theorem}\label{thm:Ures} The complex $({\mathcal F}_G, \partial)$ is a minimal $\Div(G)$-graded free resolution of the module ${\mathbf U}_G$ over ${\mathbf R}$. \end{Theorem} \begin{proof} We need to show two things: \begin{itemize} \item[(i)] $({\mathcal F}_G, \partial)$ is exact, i.e. $({\mathcal F}_G, \partial)$ is a cellular free resolution of ${\mathbf U}_G$. \item[(ii)] For distinct faces $F' \subsetneq F$ of $\Del(\Prin(G))$ with $\codim(F,F')=1$ we have ${\mathbf m}_F \ne {\mathbf m}_{F'}$, i.e. no unit of ${\mathbf R}$ appears in differential maps and the resolution $({\mathcal F}_G, \partial)$ is minimal. \end{itemize} By \cite[Proposition~1.2]{BayerSturmfels}, we know (i) is equivalent to \begin{itemize} \item[(i')] For each $E \in \Div(G)$, the subcomplex of $\Del(\Prin(G))$ on the lattice points $\{D \in \Prin(G): D \leq E\}$ is acyclic over the field $K$, i.e. its reduced homology $\widetilde{H}_i$ with $K$ coefficients vanishes for all $i \geq 0$. \end{itemize} (i') follows from Lemma~\ref{lem:contract} and (ii) follows from Lemma~\ref{lem:nounit}(iii). \end{proof} From Theorem~\ref{thm:Ures} and \cite[Corollary~3.7]{BayerSturmfels} we immediately obtain the following theorem. 
\begin{Theorem}\label{thm:Ires} The quotient cell complex $\Del(\Prin(G)) / \Prin(G)$ supports a $\Pic(G)$-graded minimal free resolution for ${\mathbf I}_G$. \end{Theorem} \begin{Example} \label{ex:PrinResol} Consider the graph $K_3$ with a fixed orientation as in Figure~\ref{fig:K30}. \begin{figure}[h!] \begin{center} \begin{tikzpicture} [scale = .30, very thick = 20mm] \node (n1) at (34,12) [Cwhite] {$u_1$}; \node (n2) at (30,6) [Cwhite] {$u_3$}; \node (n3) at (38,6) [Cwhite] {$u_2$}; \node (n1) at (34,5.2) [Cwhite] {$e_1$}; \node (n2) at (31.7,9) [Cwhite] {$e_3$}; \node (n3) at (36.3,9) [Cwhite] {$e_2$}; \node (n1) at (34,11) [Cgray] {}; \node (n2) at (31,6) [Cgray] {}; \node (n3) at (37,6) [Cgray] {}; \foreach \from/\to in {n3/n1,n1/n2,n2/n3} \draw[black][->] (\from) -- (\to); \end{tikzpicture} \caption{Graph $K_3$ and a fixed orientation ${\mathcal O}$} \label{fig:K30} \end{center} \end{figure} The lattice $\Prin(G)$ is two-dimensional and is depicted in Figure~\ref{fig:PrinLattice}. This lattice ``lives in'' $C_0(G,{\mathbb R})=\span\{(u_1),(u_2),(u_3)\} \cong {\mathbb R}^3$. In the picture $c_1=\Delta(\chi_{u_1})=2(u_1)-(u_2)-(u_3)$, $c_2=\Delta(\chi_{u_2})=-(u_1)+2(u_2)-(u_3)$, and $c_3=\Delta(\chi_{u_3})=-(u_1)-(u_2)+2(u_3)$. \begin{figure}[h!]
\begin{center} \begin{tikzpicture} [scale = .30, very thick = 20mm] \node (n42) at (20,1) [Cblack] {}; \node (n41) at (14,1) [Cblack] {}; \node (n43) at (26,1) [Cblack] {}; \node (n44) at (32,1) [Cblack] {}; \node (n11) at (14,11) [Cblack] {}; \node (n12) at (20,11) [Cblack] {}; \node (n13) at (26,11) [Cblack] {}; \node (n14) at (32,11) [Cblack] {}; \node (n21) at (11,6) [Cblack] {}; \node (n22) at (17,6) [Cblack] {}; \node (n23) at (23,6) [Cblack] {}; \node (n24) at (29,6) [Cblack] {}; \node (n25) at (35,6) [Cblack] {}; \node (n4) at (14,1) [Cblack] {}; \node (n1) at (14,11) [Cblack] {}; \node (n2) at (11,6) [Cblack] {}; \node (n3) at (17,6) [Cblack] {}; \node (n51) at (23,-4) [Cblack] {}; \node (n52) at (29,-4) [Cblack] {}; \node (n70) at (20,-8.3) [C0] {}; \node (n18) at (35,16) [C0] {$\zeta_{e_1}=0$}; \node (n40) at (6.7,1) [C0] {$\zeta_{e_2}=0$}; \node (n10) at (17,16) [C0] {$\zeta_{e_3}=0$}; \node (n181) at (38,11) [C0] {$\zeta_{e_1}=1$}; \node (n401) at (3.7,6) [C0] {$\zeta_{e_2}=1$}; \node (n101) at (11,16) [C0] {$\zeta_{e_3}=1$}; \node (m24) at (30.5,6.8) [C0] {$c_1$}; \node (mm) at (30.5,-4.5) [C0] {$c_2$}; \node (m42) at (19,0.17) [C0] {$c_3$}; \node (m42) at (25,0.17) [C0] {$0$}; \foreach \from/\to in {n40/n41, n401/n21} \draw[black][dashed] (\from) -- (\to); \foreach \from/\to in {n14/n18, n70/n51,n25/n181} \draw[green][dashed] (\from) -- (\to); \foreach \from/\to in {n10/n12,n1/n3, n101/n11} \draw[red][dashed] (\from) -- (\to); \foreach \from/\to in {n11/n14,n21/n25,n41/n44,n51/n52} \draw[black][] (\from) -- (\to); \foreach \from/\to in {n11/n22,n22/n42,n42/n51,n12/n23,n23/n43,n43/n52,n13/n24,n24/n44,n14/n25,n21/n41} \draw[red][] (\from) -- (\to); \foreach \from/\to in {n11/n21,n12/n22,n22/n41,n13/n23,n23/n42,n14/n24,n24/n43,n43/n51,n25/n44,n44/n52} \draw[green][] (\from) -- (\to); \end{tikzpicture} \caption{The lattice $(\Prin(G), \langle \cdot , \cdot \rangle_{\en})$ and the associated cellular decomposition of the ambient space $\Div_{{\mathbb 
R}}^0(G)$} \label{fig:PrinLattice} \end{center} \end{figure} The cell decomposition $\Del(\Prin(G))$ is the Delaunay decomposition of $\Div^0_{{\mathbb R}}(G)$ with respect to the principal lattice and the energy distance (Theorem~\ref{thm:PrinDel}(iv)) which coincides with the infinite hyperplane arrangement \eqref{eq:Ca}. The quotient cell complex $\Del(\Prin(G))/\Prin(G)$ of the torus has one $0$-cell $\{v\}$ (orbit of the origin), three $1$-cells $\{e,e',e''\}$ (orbits of green, red, and black edges), and two $2$-cells $\{f,f'\}$ (orbits of upward and downward triangles). In Figure~\ref{fig:fundPrin} we have chosen a fundamental domain for the lattice, and have labeled all cells of this fundamental domain according to the recipe described in the beginning of \S\ref{sec:labelDelPrin} or, equivalently, in Lemma~\ref{lem:nounit}(ii). For simplicity we have used $x_i$ instead of $x_{u_i}$. The labeled cell complex in Figure~\ref{fig:fundPrin} is enough to completely describe a minimal free resolution for both ${\mathbf I}_G$ and ${\mathbf U}_G$. Concretely, the minimal resolution of ${\mathbf I}_G$ is as follows: \[ 0 \rightarrow {\mathbf R}(-{\mathbf m}_{f}) \oplus {\mathbf R}(-{\mathbf m}_{f'}) \xrightarrow{\partial_2} {\mathbf R}(-{\mathbf m}_{e}) \oplus {\mathbf R}(-{\mathbf m}_{e'})\oplus {\mathbf R}(-{\mathbf m}_{e''}) \xrightarrow{\partial_1} {\mathbf R}(-{\mathbf m}_v) \ . \] As usual, assume $[F]$ denotes the generator of ${\mathbf R}(-{\mathbf m}_F)$. Let \[ {\mathbf m}_e = x_1^2 \ , \quad {\mathbf m}_{e'} = x_1 x_2 \ , \quad {\mathbf m}_{e''} = x_2^2 \ , \] \[ {\mathbf m}_f = x_1^2 x_2 \ , \quad {\mathbf m}_{f'}=x_1 x_2^2 \ . 
\] The homogenized differential operator (see \eqref{eq:differntials}) $(\partial_1, \partial_2)$ of the cell complex is described as follows: \[ \partial_1([e])= \frac{x_1^2}{1}[v] -\frac{x_1^2}{\frac{x_1^2}{x_2x_3}}[v] = ({x_1^2} - {x_2x_3})[v]\ , \] \[ \partial_1([e'])= \frac{x_1 x_2}{\frac{x_1x_2}{x_3^2}}[v] - \frac{x_1x_2}{1}[v] = ({x_3^2} - {x_1x_2})[v]\ , \] \[ \partial_1([e''])= \frac{x_2^2}{\frac{x_2^2}{x_1x_3}}[v] - \frac{x_2^2}{1}[v] = ({x_1x_3} - {x_2^2})[v]\ , \] \[ \partial_2([f])= \frac{x_1^2 x_2}{x_1^2}[e] - \frac{x_1^2 x_2}{\frac{x_1^2 x_2}{x_3}}[e''] + \frac{x_1^2 x_2}{x_1 x_2}[e'] = {x_2}[e] - {x_3}[e''] + {x_1}[e'] \ , \] \[ \partial_2([f'])= \frac{x_1 x_2^2}{\frac{x_1x_2^2}{x_3}}[e] - \frac{x_1 x_2^2}{x_2^2}[e''] + \frac{x_1 x_2^2}{x_1 x_2}[e'] = {x_3}[e] - {x_1}[e''] + {x_2}[e'] \ . \] Clearly ${\mathbf I}_G$ is the image of $\partial_1$ after identifying $[v]$ with $1 \in {\mathbf R}$ (see Theorem~\ref{thm:Cori}). Note that, since the labeling is compatible with the action of the lattice, any other fundamental domain would give rise to the exact same description of the differential maps. \begin{figure}[h!] 
\begin{center} \begin{tikzpicture} [scale = .60, very thick = 20mm] \node (n4) at (14,1) [Cgray] {}; \node (n1) at (14,11) [Cgray] {}; \node (n2) at (11,6) [Cblack] {}; \node (n3) at (17,6) [Cgray] {}; \node (m1) at (14,0) [C0] {$\frac{x_2^2}{x_1x_3}$}; \node (m1) at (14,12.2) [C0] {$\frac{x_1^2}{x_2x_3}$}; \node (m3) at (18.3,6) [C0] {$\frac{x_1 x_2}{x_3^2}$}; \node (m) at (10.3,6) [C0] {$1$}; \foreach \from/\to in {n4/n2} \draw[red][] (\from) -- (\to); \foreach \from/\to in {n1/n2} \draw[green][] (\from) -- (\to); \foreach \from/\to in {n2/n3} \draw[black][] (\from) -- (\to); \foreach \from/\to in {n1/n3} \draw[red][dashed] (\from) -- (\to); \foreach \from/\to in {n4/n3} \draw[green][dashed] (\from) -- (\to); \node (m10) at (11.3,8.5) [C0] {\textcolor{gray}{$x_1^2$}}; \node (m14) at (11.3,3.5) [C0] {\textcolor{gray}{$x_2^2$}}; \node (m23) at (14.1,6.86) [C0] {\textcolor{gray}{$x_1x_2$}}; \node (m023) at (17,9.0) [C0] {\textcolor{gray}{$\frac{x_1^2x_2}{x_3}$}}; \node (m023) at (17,3.0) [C0] {\textcolor{gray}{$\frac{x_1 x_2^2}{x_3}$}}; \node (m123) at (14,8.2) [C0] {\textcolor{blue}{$x_1^2 x_2$}}; \node (m423) at (14,4.2) [C0] {\textcolor{blue}{$x_1x_2^2$}}; \end{tikzpicture} \caption{A choice of fundamental domain with labels} \label{fig:fundPrin} \end{center} \end{figure} \end{Example} \medskip \begin{Remark} \label{rmk:isometry} It follows from the computation \[ \begin{aligned} \langle \Delta(f), \Delta(g) \rangle_{\en} &= \sum_{u \in V(G)}{f(u) \Delta(g) (u)} \\ &= \sum_{u \in V(G)}{f(u) (\partial_{{\mathcal O}} d_{{\mathcal O}} g) (u)} \\ &=\sum_{e \in {\mathbb E}(G)}{(d_{{\mathcal O}} f)(e)(d_{{\mathcal O}} g)(e)} \end{aligned} \] that there is an isometry between the principal lattice $(\Prin(G) , \langle \cdot, \cdot \rangle_{\en})$ and the {\em cut lattice} (lattice of integral cocycles) $(L(G) , \langle \cdot , \cdot \rangle )$ defined in \S\ref{sec:cutlattice}.
It is natural to ask whether there are other ideals defined directly in terms of the cut lattice and, if so, whether there are nice relations between these ideals. These questions will be answered in this work (see \S\ref{sec:relate}, especially Remark~\ref{rmk:ResolRelation}). \end{Remark} \begin{Remark} It is possible to give a polyhedral cellular free resolution of the ideal ${\mathbf M}_G^q$ using the local picture at the origin of $\Del(\Prin(G))$ (or, alternatively, using the graphic hyperplane arrangement -- see Remark~\ref{rmk:cells}(ii)) and study its Gr\"obner relation with ${\mathbf I}_G$, similar to what we will do for ${\mathbf O}_G^q$ in relation to ${\mathbf J}_G$ in \S\ref{sec:matroidlawrence}. Instead, we will show (in \S\ref{sec:relate}) that one could alternatively relate ${\mathbf I}_G$ to ${\mathbf J}_G$ and ${\mathbf M}_G^q$ to ${\mathbf O}_G^q$ via a regular sequence. As a corollary, this gives an alternate way to describe polyhedral cellular free resolutions of all these ideals and to compare their Betti numbers. \end{Remark} \begin{Remark} The minimal free resolution of ${\mathbf M}_G^q$ is a Koszul complex when $G$ is a tree because ${\mathbf M}_G^q$ is generated by the variables $\{x_v : v \ne q\}$ (see Theorem~\ref{thm:Cori}). When $G$ is a complete graph, the minimal free resolution of ${\mathbf M}_G^q$ is given by a Scarf complex (see, e.g., \cite[Corollary~6.9]{PostnikovShapiro04}). \end{Remark} \section{Graphs, arrangements, and integral cuts} \label{sec:arrg} \subsection{Graphic arrangements and connected partitions} \label{sec:BG} Following \cite{GreenZaslavsky}, we define the {\em graphic hyperplane arrangement} as follows. An important feature that we want to emphasize in this section is that this arrangement naturally ``lives in'' the Euclidean space $C^0(G,{\mathbb R})$, i.e. the vector space of all real-valued functions on $V(G)$ endowed with the bilinear form \[ \langle f_1, f_2\rangle = \sum_{v\in V(G)}{f_1(v)f_2(v)} \ . 
\] Recall that $C^1(G, {\mathbb R})$ denotes the vector space of real-valued functions on ${\mathbb E}(G)$ and $d \colon C^0(G,{\mathbb R}) \rightarrow C^1(G,{\mathbb R})$ denotes the usual coboundary map. For each edge $e \in {\mathbb E}(G)$, let ${\mathcal H}_{e} \subset C^0(G,{\mathbb R})$ denote the hyperplane \[ {\mathcal H}_{e} = \{f \in C^0(G,{\mathbb R}): (d f) (e)=0 \} \ . \] Note that ${\mathcal H}_{\bar{e}}={\mathcal H}_{e}$. Consider the arrangement \[ {\mathcal H}'_G= \{{\mathcal H}_e : e \in {\mathbb E}(G)\} \] in $C^0(G,{\mathbb R})$. Since $G$ is connected, we know $\bigcap_{e \in {\mathbb E}(G)}{{\mathcal H}_e}$ is the $1$-dimensional space of constant functions on $V(G)$, which is the same as the kernel of $d$. We define the {\em graphic arrangement} corresponding to $G$, denoted by ${\mathcal H}_G$, to be the restriction of ${\mathcal H}'_G$ to the hyperplane \begin{equation}\label{eq:kerperp} (\Ker(d))^{\perp} = \{f \in C^0(G, {\mathbb R}) : \sum_{v\in V(G)}f(v)=0\} \ . \end{equation} \medskip The intersection poset of ${\mathcal H}_G$ (i.e. the collection of nonempty intersections of hyperplanes ${\mathcal H}_e$ ordered by reverse inclusion) is naturally isomorphic to the poset of connected partitions of $G$ (i.e. partitions of $V(G)$ whose blocks induce connected subgraphs). See, e.g., \cite[p.112]{GreenZaslavsky}. \medskip It is well-known that there is a one-to-one correspondence between acyclic orientations of $G$ and the regions of ${\mathcal H}_G$ (see, e.g., \cite[Lemma~7.1 and Lemma~7.2]{GreenZaslavsky}). Given any function $f \in C^0(G,{\mathbb R})$ one can label each vertex $v$ with the real number $f(v)$. In this way we obtain an acyclic partial orientation of $G$ by directing $u$ to $v$ if $f(u) < f(v)$. Recall this means we have an acyclic orientation on the graph $G/f$ obtained by contracting all unoriented edges (i.e. all edges $\{u,v\}$ with $f(u)=f(v)$).
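This construction is easy to carry out by hand or by machine; the following sketch (helper names are ours) orients edges so that the $f$-minimizing block becomes the unique source, the sign convention under which Lemma~\ref{lem:BG} holds:

```python
# Sketch: orient each edge from the smaller to the larger f-value;
# ties stay unoriented and are contracted in G/f.
def partial_orientation(edges, f):
    oriented, contracted = [], []
    for u, v in edges:
        if f[u] < f[v]:
            oriented.append((u, v))        # directed u -> v
        elif f[v] < f[u]:
            oriented.append((v, u))        # directed v -> u
        else:
            contracted.append((u, v))      # f(u) = f(v): edge of G/f
    return oriented, contracted

# K_3 with f(u1) = 1 and f(u2) = f(u3) = 0: the edge {u2, u3} is contracted
# and the merged vertex {u2, u3} is the unique source.
edges = [("u1", "u2"), ("u2", "u3"), ("u3", "u1")]
f = {"u1": 1, "u2": 0, "u3": 0}
assert partial_orientation(edges, f) == ([("u2", "u1"), ("u3", "u1")], [("u2", "u3")])
```

The resulting orientation is automatically acyclic, since $f$ increases strictly along every oriented edge.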
\medskip We are mainly interested in acyclic orientations of $G$ with a {\em unique source} at $q \in V(G)$. For this purpose, we fix a real number $c>0$ and define \[ {\mathcal H}^{q,c}=\{f\in C^0(G, {\mathbb R}): f(q)=-c \} \ . \] The restriction of the arrangement ${\mathcal H}_G$ to ${\mathcal H}^{q,c}$ will be denoted by ${\mathcal H}_G^{q,c}$. We denote the {\em bounded complex} (i.e. the polyhedral complex consisting of bounded cells) of ${\mathcal H}_G^{q,c}$ by ${\mathcal B}_G^{q,c}$. \begin{Remark} \begin{itemize} \item[] \item[(i)] By \eqref{eq:kerperp}, the restriction of ${\mathcal H}_G$ to ${\mathcal H}^{q,c}$ coincides with the restriction of ${\mathcal H}_G$ to \[ ({\mathcal H}^{q,c})'=\{f\in C^0(G, {\mathbb R}): \sum_{v \ne q}f(v)=c \} \ . \] \item[(ii)] We will see in \S\ref{sec:grobrel} (e.g. Lemma~\ref{lem:labels}(ii)) that it is most natural (although not necessary) to choose $0 <c<1$. \end{itemize} \end{Remark} The following lemma relates regions of ${\mathcal B}_G^{q,c}$ to acyclic orientations with unique source at $q$ (see also \cite[Theorem~7.3]{GreenZaslavsky}). \begin{Lemma} \label{lem:BG} Each $f \in {\mathcal B}_G^{q,c}$ gives an acyclic partial orientation of $G$ with a unique source at $q$. {In particular $f(v) \geq f(q)$ for any edge $\{v,q\} \in E(G)$.} \end{Lemma} \begin{proof} Since we are considering the orientation on $G / f$, we may assume $f(u) \ne f(v)$ for any $\{u, v\} \in E(G)$. Since any acyclic orientation of $G$ has at least one source vertex\footnote{It is an elementary fact that {\em any} acyclic orientation of $G$ has at least one source and one sink.}, it suffices to show that no vertex $v \ne q$ can be a source in the orientation corresponding to $f$. Let $w$ be a vertex such that $f(w)$ is maximum (i.e. $f(w) \geq f(v)$ for all $v \in V(G)$). To obtain a contradiction, assume $s \ne q$ is a source and therefore $f(v) > f(s)$ for all $\{v, s\} \in E(G)$.
Recall that $\chi_v$ denotes the characteristic function of $v \in V(G)$. It follows that \[ f_t=f + t (\chi_w-\chi_s) \in C^0(G, {\mathbb R}) \] also belongs to the same cell as $f$ for {\em any} $t \geq 0$. This is because: \begin{itemize} \item $f_t(q)=f(q)=-c$: note that $s \ne q$ by assumption. Moreover, since $f(q) = -c$ and $\sum_{v \ne q}{f(v)}= c >0$, there must be at least one vertex $v$ with $f(v) > 0 > f(q)$. Therefore $f(q)$ cannot be maximum among $f(v)$'s, which means $w \ne q$. \item $\sum_{v\in V(G)}{f_t(v)}=\sum_{v \in V(G)}{f(v)}+t-t=0$. \item If $\{u,v\} \in E(G)$, we have $f_t(u) > f_t(v)$ if and only if $f(u) > f(v)$. Note that $f_t$ and $f$ differ only in places $w$ and $s$. So this claim follows from the fact that $f_t(w)=f(w)+t \geq f(w)$ and $f(s) \geq f(s)-t = f_t(s)$. \end{itemize} However, not all $f_t$ for $t \geq 0$ can be contained in the bounded complex because they constitute a ray in $C^0(G, {\mathbb R})$ emanating from $f$. \end{proof} \medskip \begin{Remark} \label{sec:nopartorient} It follows (see also \cite[Corollary~7.3]{GreenZaslavsky}) that the number of $i$-dimensional cells in ${\mathcal B}_G^{q,c}$ is equal to the number of acyclic partial orientations of $G$ with $(i+2)$ (connected) components having a unique source at $q$. For an example, see Example~\ref{exam:1}. \end{Remark} \subsection{Lattice of integral cuts and graphic infinite arrangements} \label{sec:cutlattice} Fix an arbitrary orientation ${\mathcal O} \subset {\mathbb E}(G)$. Consider the restricted coboundary map $d_{\mathcal O} : C^0(G,{\mathbb Z}) \to C_{{\mathcal O}}^1(G,{\mathbb Z})$ and the usual bilinear form on $C_{\mathcal O}^1(G,{\mathbb Z})$ defined by \begin{equation}\label{eq:LGpairing} \langle g_1, g_2 \rangle = \sum_{e \in {\mathcal O}}{g_1(e)g_2(e)} \ . 
\end{equation} The {\em lattice of integral cuts} (with respect to the orientation ${\mathcal O}$) is by definition the group of integral coboundaries $\Image(d_{\mathcal O})$ inside $C_{\mathcal O}^1(G,{\mathbb Z})$ with its bilinear form induced from \eqref{eq:LGpairing}. It is denoted by $L(G,{\mathcal O})$. When the orientation is clear we simply denote it by $L(G)$. \begin{Remark} \label{rmk: orientvslattice} Consider the (unrestricted) coboundary map \[ d : C^0(G,{\mathbb Z}) \to C^1(G,{\mathbb Z}) \cong C_{{\mathcal O}}^1(G,{\mathbb Z}) \oplus C_{\bar{{\mathcal O}}}^1(G,{\mathbb Z}) \ . \] Its image $\Lambda=\Image(d)$ is isomorphic to the lattice $\{(a,-a) : a \in L(G,{\mathcal O})\}$. The choice of the orientation ${\mathcal O}$ gives a splitting of $C^1(G,{\mathbb Z})$ and of $\Lambda$. \end{Remark} \medskip We may identify $C^0(G,{\mathbb Z})$ with ${\mathbb Z}^{V(G)}$ and $C_{\mathcal O}^1(G,{\mathbb Z})$ with ${\mathbb Z}^{{\mathcal O}}$. If we also fix a labeling on the vertices and edges of the graph, then $d_{\mathcal O}$ is represented by the matrix $B^T$, where $B$ is the $n \times m$ vertex-edge incidence matrix of $G$. In this case, the lattice of integral cuts $L(G)$ is $\Image(B^T) \hookrightarrow {\mathbb Z}^m$. It is a well-known classical fact (due to Poincar\'e) that the matrix $B$ is totally unimodular in the sense of Example~\ref{ex:WU} (see, e.g., \cite[Proposition~5.3]{BiggsBook}). Therefore Theorem~\ref{thm:totunim}(iii) and Remark~\ref{rmk:delon} apply to this situation. The Delaunay cell decomposition corresponding to the lattice $L(G)$ will be denoted by ${\Del}(L(G))$. \medskip Here we list some properties of $L(G)$ from \cite{Tutte}. Elements of $L(G)$ are integral $1$-coboundaries. A $1$-coboundary is called {\em elementary} if it has minimal nonempty support in $L(G)$. An elementary element $f \in L(G)$ is called {\em primitive} if $f(e) \in \{-1,0,+1\}$ for all $e \in {\mathcal O}$.
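On a graph as small as $K_3$ the total unimodularity of $B$ can be verified by brute force; the following sketch (with an assumed orientation and the sign convention $+1$ at $e_+$, $-1$ at $e_-$) also checks that $B B^T$ is the Laplacian whose rows are the vectors $c_1, c_2, c_3$ of the running example:

```python
from itertools import combinations

def det(M):
    # Determinant by Laplace expansion along the first row (fine for tiny matrices).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# Vertex-edge incidence matrix of K_3 (rows u1, u2, u3; columns e1, e2, e3).
B = [[ 0,  1, -1],
     [ 1, -1,  0],
     [-1,  0,  1]]

# Every square minor lies in {-1, 0, 1}, i.e. B is totally unimodular.
minors = [det([[B[i][j] for j in cols] for i in rows])
          for k in (1, 2, 3)
          for rows in combinations(range(3), k)
          for cols in combinations(range(3), k)]
assert all(m in (-1, 0, 1) for m in minors)

# B B^T is the Laplacian of K_3, whose rows are c1, c2, c3.
BBt = [[sum(B[i][k] * B[j][k] for k in range(3)) for j in range(3)] for i in range(3)]
assert BBt == [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
```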
It follows from the total unimodularity that every elementary element of $L(G)$ is an integral multiple of a primitive element of $L(G)$ (see, e.g., \cite[\S{1} and \S5]{Tutte}). Primitive elements of $L(G)$ correspond precisely to {\em bonds} (i.e. minimal edge-cuts or, equivalently, edge-cuts whose two sides induce connected subgraphs) (see, e.g., \cite[\S1.3]{Tutte}). If $f,g \in L(G)$, we say that $g$ {\em conforms} to $f$ if $f(e)g(e) > 0$ for all $e \in {\mathcal O}$ with $g(e) \ne 0$. For any $0 \ne f \in L(G)$, there exists a primitive element conforming to $f$ (\cite[1.23]{Tutte}). Moreover, $f$ can be represented as a sum of primitive elements, each conforming to $f$ (\cite[1.24]{Tutte}). \section{Graphic oriented matroid ideal and Lawrence ideal} \label{sec:matroidlawrence} We next study some natural ideals associated to the cell complexes introduced in \S\ref{sec:arrg}. See \cite{Popescu} and \cite{novik2002syzygies} for a more general study of such constructions. \subsection{Graphic oriented matroid ideal} An {\em oriented hyperplane arrangement} is a real hyperplane arrangement along with a choice of a ``positive side'' for each hyperplane. Equivalently, one may fix a set of linear forms vanishing on the hyperplanes to fix the ``orientation''. For any oriented hyperplane arrangement one can define (see \cite{novik2002syzygies}) the associated {\em oriented matroid ideal}: let $\{h_i\}$ be $m$ nonzero linear forms defining the hyperplane arrangement ${\mathcal A}$ with hyperplanes ${\mathcal H}_i = \{ {\mathbf p} \in V : h_i({\mathbf p})=c_i\}$ in a real affine space $V$. The oriented matroid ideal associated to ${\mathcal A}$ is the ideal in $2m$ variables of the form: \[ {\mathbf O}_{{\mathcal A}} = \langle {\mathbf m}({\mathbf p}) : {\mathbf p} \in V \rangle \subset K[{\mathbf w},{\mathbf z}] \] where for each ${\mathbf p} \in V$ \[ {\mathbf m}({\mathbf p})=\prod_{h_i({\mathbf p})>c_i}w_i \prod_{h_i({\mathbf p})<c_i}z_i \ . 
\] Note that any two points in the relative interior of a cell will give rise to the same monomial. \medskip Consider the hyperplane arrangement ${\mathcal H}_G^{q,c}$ (defined in \S\ref{sec:BG}) which is contained in a codimension $2$ affine subspace of $C^0(G, {\mathbb R})$. Fixing an orientation ${\mathcal O}$ of the graph $G$ will fix the linear forms $(d f)(e)=f(e_{+})-f(e_{-})$ for $e \in {\mathcal O}$ and gives an orientation to the hyperplane arrangement ${\mathcal H}_G^{q,c}$. The oriented matroid ideal associated to this oriented hyperplane arrangement ${\mathcal H}_G^{q,c}$ will be denoted by ${\mathbf O}_G^q$ (instead of ${\mathbf O}_{{\mathcal H}_G^{q,c}}$) and will be called the {\em graphic oriented matroid ideal} associated to $G$ and $q$. It follows from the discussion in \S\ref{sec:BG} that this ideal is independent of the choice of the real number $c>0$. In this situation, we may consider the variables ${\mathbf w}$ as $\{y_e: e \in {\mathcal O}\}$ and the variables ${\mathbf z}$ as $\{y_{\bar{e}}: e \in {\mathcal O}\}$ and then ${\mathbf O}_G^q \subset {\mathbf S}$. \subsection{Graphic Lawrence ideal} For any embedded integral lattice $L \hookrightarrow {\mathbb Z}^m$ one can define (see \cite[Chapter 7]{SturmfelsGrobnerConvex}) a binomial ideal ${\mathbf J}_L$ in $2m$ variables, called the {\em Lawrence ideal} of $L$, by the following formula: \[ {\mathbf J}_L= \langle {\mathbf w}^{a^+}{\mathbf z}^{a^-}-{\mathbf w}^{a^-}{\mathbf z}^{a^+}: a^+,a^-\in {\mathbb N}^m,\ a={a^+}-{a^-}\in L \rangle \subset K[{\mathbf w},{\mathbf z}] \ . \] When the lattice $L$ is unimodular, the Lawrence ideal ${\mathbf J}_L$ is called unimodular (\cite{Popescu}). \medskip For simplicity, the unimodular Lawrence ideal associated to the unimodular lattice of integral cuts $L(G)$ will be denoted by ${\mathbf J}_G$ (instead of ${\mathbf J}_{L(G)}$) and will be called the {\em graphic Lawrence ideal} of $G$. 
Again, we may consider the variables ${\mathbf w}$ as $\{y_e: e \in {\mathcal O}\}$ and the variables ${\mathbf z}$ as $\{y_{\bar{e}}: e \in {\mathcal O}\}$ and then ${\mathbf J}_G \subset {\mathbf S}$. \medskip \subsection{Labeling ${\mathcal B}_G^{q,c}$ and the minimal free resolution of ${\mathbf O}_G^q$} \label{sec:labelBG} The bounded polyhedral cell complex ${\mathcal B}_G^{q,c}$ (defined in \S\ref{sec:BG}) supports a minimal free resolution for the ideal ${\mathbf O}_G^q$. To see this, we need to label the vertices of ${\mathcal B}_G^{q,c}$ appropriately: each vertex $f \in {\mathcal B}_G^{q,c}$ is labeled by the monomial \begin{equation} \label{eq:BLabels} {\mathbf m}(f) = \prod_{e \in {\mathbb E}(G) \atop (d f)(e)>0}{y_{e}} \ . \end{equation} \begin{Remark} Fixing an orientation ${\mathcal O}$ will result in the factorization of ${\mathbf m}(f)$ as \[ {\mathbf m}(f) =\prod_{e \in {\mathcal O} \atop f(e_+) -f(e_-)>0}{y_{e}} \prod_{e \in {\mathcal O} \atop f(e_-) -f(e_+)>0}{y_{\bar{e}}} \ . \] \end{Remark} In this way, we obtain a labeling of all cells by the least common multiple construction. It is easily seen that the label of any cell will be ${\mathbf m}(f)$ (as in \eqref{eq:BLabels}) for any point $f$ in the relative interior of that cell. The following result is an application of \cite[Theorem~1.3(b)]{novik2002syzygies} for the hyperplane arrangement ${\mathcal H}_G^{q,c}$. \begin{Theorem} \label{thm:bounded} The labeled polyhedral cell complex ${\mathcal B}_G^{q,c}$ gives a $C^1(G,{\mathbb Z})$-graded minimal free resolution for ${\mathbf O}_G^q$. In particular, ${\mathbf O}_G^q$ is minimally generated by the monomials ${\mathbf m}(f)$, as $f$ ranges over the vertices of ${\mathcal B}_G^{q,c}$. \end{Theorem} The fact that there is no unit in the corresponding differential maps is immediate from the description of the labelings.
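The labeling rule \eqref{eq:BLabels} is easy to evaluate in examples. A small sketch (the encoding of edges and the string names for the variables $y_e$, $y_{\bar e}$ are ours), run on $K_3$ with an assumed orientation:

```python
# Sketch: compute m(f) for a point f. Edges of the fixed orientation O are
# given as (name, tail e_-, head e_+); we record y_e when f strictly
# increases along e, and y_ebar when it strictly decreases.
def vertex_label(f, oriented_edges):
    mono = []
    for name, tail, head in oriented_edges:
        if f[head] > f[tail]:
            mono.append("y_" + name)
        elif f[tail] > f[head]:
            mono.append("y_" + name + "bar")
    return sorted(mono)

# K_3 with an assumed orientation e1: u3 -> u2, e2: u2 -> u1, e3: u1 -> u3,
# q = u3, c = 1/2, and the point f = (1/2, 0, -1/2) (so f(q) = -c, sum f = 0):
O = [("e1", "u3", "u2"), ("e2", "u2", "u1"), ("e3", "u1", "u3")]
f = {"u1": 0.5, "u2": 0.0, "u3": -0.5}
assert vertex_label(f, O) == ["y_e1", "y_e2", "y_e3bar"]
```

Here $f$ strictly increases along $e_1$ and $e_2$ and decreases along $e_3$, so ${\mathbf m}(f)= y_{e_1} y_{e_2} y_{\bar e_3}$.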
All subcomplexes $({\mathcal B}_G^{q,c})_{\leq {\mathbf m}}$ are in fact contractible, by a result of Bj\"orner and Ziegler (\cite[Theorem~4.5.7]{orientedmatroid}). See \cite{novik2002syzygies} for more details, and Example~\ref{exam:1} and Figure~\ref{fig:arrangement} for an example. \subsection{Labeling $\Del(L(G))$ and the minimal free resolution of ${\mathbf J}_G$} \label{sec:labelDel} Fix an arbitrary orientation ${\mathcal O} \subset {\mathbb E}(G)$ of $G$ and consider the lattice of integral cuts $L(G)$ as in \S\ref{sec:cutlattice}. As we have already discussed, it comes equipped with a canonical polyhedral cell decomposition of the ambient real vector space $L(G)_{\mathbb R}=L(G) \otimes {\mathbb R} = \Image (d_{\mathcal O} \colon C^0(G, {\mathbb R}) \rightarrow C_{{\mathcal O}}^1(G, {\mathbb R}))$. This polyhedral cell decomposition, denoted by $\Del(L(G))$, can be thought of as an infinite hyperplane arrangement (Theorem~\ref{thm:totunim}(iii)), or more naturally, as the Delaunay decomposition of the ambient space with respect to the lattice $L(G)$ and the metric induced by its natural pairing \eqref{eq:LGpairing} (see Remark~\ref{rmk:delon}(ii)). We make this a labeled cell complex by assigning the label \begin{equation} \label{eq:blabel} {\mathbf b}(a)=\prod_{e \in {\mathbb E}(G)} { {y_{e}^{a(e)}} } \end{equation} to each vertex $a \in L(G) \hookrightarrow C^1(G, {\mathbb R})$. \begin{Remark} Fixing an orientation ${\mathcal O}$ will result in the factorization of this Laurent monomial as \[ {\mathbf b}(a)=\prod_{e \in {\mathcal O}} { {y_{e}^{a(e)}} } \prod_{e \in \bar{{\mathcal O}}} { {y_{e}^{-a(e)}} } = \prod_{e \in {\mathcal O}} { {y_{e}^{a(e)}} } / \prod_{e \in {\mathcal O}} { {y_{\bar{e}}^{a(e)}} } \] for $a \in L(G)$. \end{Remark} As usual, we extend the labeling to all faces by the least common multiple rule. The associated complex of free $C^1(G, {\mathbb Z})$-graded ${\mathbf S}$-modules (see \S\ref{sec:res}) is not ${\mathbf S}$-finite.
By \cite[Theorem~3.1]{Popescu} this complex is a minimal cellular free resolution of the (Laurent) monomial module generated by the labels of the lattice points in $L(G)$. This Laurent monomial module can be thought of as the ``universal cover'' of ${\mathbf J}_G$; the Delaunay cell complex is invariant under the translation by $L(G)$ (Theorem~\ref{thm:totunim} and Remark~\ref{rmk:delon}), and the labeling is also compatible with this action. So we obtain a well defined finite cell complex on the quotient torus $L(G)_{\mathbb R} / L(G)$, which we denote by $\Del(L(G)) / L(G)$. The following theorem is an application of \cite[Theorem~3.5]{Popescu} (or \cite[Theorem~3.2]{BayerSturmfels}) to our setting. \begin{Theorem} \label{thm:Jresol} The quotient cell complex $\Del(L(G)) / L(G)$ supports a $(C^1(G, {\mathbb Z})/{\Lambda})$-graded minimal free resolution for ${\mathbf J}_G$. \end{Theorem} Here $\Lambda$ is the image of the (unrestricted) coboundary map $d : C^0(G,{\mathbb Z}) \to C^1(G,{\mathbb Z})$ (see Remark~\ref{rmk: orientvslattice}). \medskip \subsection{Gr\"obner relation between ${\mathbf J}_G$ and ${\mathbf O}_G^q$} \label{sec:grobrel} Recall that the hyperplane arrangement ${\mathcal H}_G^{q,c}$ is naturally sitting inside $C^0(G, {\mathbb R})$, and the Delaunay decomposition $\Del(L(G))$ is an infinite hyperplane arrangement naturally sitting inside $C_{{\mathcal O}}^1(G, {\mathbb R})$. The obvious map between these ambient spaces is the (restricted) coboundary map $d_{\mathcal O} \colon C^0(G, {\mathbb R}) \rightarrow C_{{\mathcal O}}^1(G, {\mathbb R})$. As we will see, this map relates the corresponding hyperplane arrangements and cell complexes, and this relation translates into precise algebraic relations between ${\mathbf J}_G$ and ${\mathbf O}_G^q$. 
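Before relating the two ideals, it may help to see the Lawrence recipe of \S\ref{sec:matroidlawrence} in coordinates. In the sketch below (the exponent-pair encoding is ours) a monomial $w^u z^v$ is stored as the pair of exponent tuples $(u,v)$:

```python
# Sketch: the Lawrence binomial of a lattice vector a in L(G) is
# w^{a+} z^{a-} - w^{a-} z^{a+}, returned here as its two monomials.
def lawrence_binomial(a):
    a_plus  = tuple(max(x, 0) for x in a)    # a^+
    a_minus = tuple(max(-x, 0) for x in a)   # a^-
    return (a_plus, a_minus), (a_minus, a_plus)

# For K_3 (orientation as in the running example) the cut d_O(chi_{u1})
# has, with one choice of signs, coordinates (0, 1, -1) on (e1, e2, e3);
# its binomial is w2 z3 - w3 z2, i.e. y_{e2} y_{\bar e_3} - y_{\bar e_2} y_{e3}.
m1, m2 = lawrence_binomial((0, 1, -1))
assert m1 == ((0, 1, 0), (0, 0, 1))
assert m2 == ((0, 0, 1), (0, 1, 0))
```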
\medskip First note that $\Ker(d) =\Ker(d_{\mathcal O})$ is the $1$-dimensional space of constant functions on $V(G)$, and we have \[ L(G)_{\mathbb R} = \Image (d_{\mathcal O}) \cong C^0(G, {\mathbb R}) / \Ker(d) \cong C^0(G, {\mathbb R}) \cap (\Ker(d))^{\perp} \ . \] Let $e \in {\mathbb E}(G)$. Under the induced isomorphism $d_{\mathcal O} \colon C^0(G, {\mathbb R}) \cap (\Ker(d))^{\perp} \xrightarrow{\sim} L(G)_{\mathbb R}$, the hyperplane \[ {\mathcal H}_{e}|_{(\Ker(d))^{\perp}} = \{f \in C^0(G,{\mathbb R}): (d f) (e)=0 \} \cap (\Ker(d))^{\perp} \] is mapped to the hyperplane \[ {\mathcal G}_e=\{a \in L(G)_{\mathbb R}: \varphi_e(a) = 0\} \ , \] where $\varphi_e$ is the restriction of the functional $e=e^{\ast\ast} \in C_1(G, {\mathbb Z})$ to $L(G)_{\mathbb R}$. By Example~\ref{ex:WU}, Proposition~\ref{thm:totunim}(iii), and Remark~\ref{rmk:delon}(ii), the hyperplanes ${\mathcal G}_e$ are precisely the hyperplanes passing through the origin in $\Del(L(G))$. Recall from \S\ref{sec:BG} that the hyperplane arrangement ${\mathcal H}_G^{q,c}$ has another hyperplane defined by \begin{equation} \label{eq:lastplane} ({\mathcal H}^{q,c})'|_{(\Ker(d))^{\perp}} =\{f\in C^0(G, {\mathbb R}): \sum_{v \ne q}{f(v)}=c \} \cap (\Ker(d))^{\perp} \ . \end{equation} The real vector space $L(G)_{\mathbb R}$ is spanned by $\{d_{\mathcal O}(\chi_v): v \ne q\}$. Under the induced isomorphism $d_{\mathcal O} \colon C^0(G, {\mathbb R}) \cap (\Ker(d_{\mathcal O}))^{\perp} \xrightarrow{\sim} L(G)_{\mathbb R}$, the hyperplane \eqref{eq:lastplane} is mapped to the affine hyperplane \[ {\mathcal G}^{q,c}=\{a \in C^1(G, {\mathbb R}): a = \sum_{v \ne q}{f(v) d_{\mathcal O}(\chi_v)} \text{ with } \sum_{v \ne q}{f(v)}=c\} \ . \] This is a hyperplane passing through all points $\{c \cdot d_{\mathcal O}(\chi_v): v \ne q\}$. \medskip We denote the restriction of the arrangement $\{{\mathcal G}_e\}_{e \in {\mathbb E}(G)}$ to the affine hyperplane ${\mathcal G}^{q,c}$ by ${\mathcal G}_G^{q,c}$. 
It follows that ${\mathcal G}_G^{q,c}$, up to a linear transformation, coincides with the arrangement ${\mathcal H}_G^{q,c}$, and therefore its bounded complex, which we denote by ${\mathcal A}_G^{q,c}$, may be identified with ${\mathcal B}_G^{q,c}$. \medskip Next we show that these geometric considerations nicely relate the labeling of ${\mathcal B}_G^{q,c}$ by monomials (described in \S\ref{sec:labelBG}) with the natural labeling of ${\mathcal A}_G^{q,c}$ induced by $\Del(L(G))$ (described in \S\ref{sec:labelDel}). For this purpose, we will see that it is most natural to assume $0<c<1$. With this assumption, if the hyperplane ${\mathcal G}^{q,c}$ intersects a Delaunay cell $C$, then ${C}$ must contain the origin. By the least common multiple labeling rule, this means that all such cells $C$ have monomial labels in ${\mathbf S}$. \medskip To concretely describe these induced monomial labels, it suffices to find the labels of the vertices in ${\mathcal G}_G^{q,c}$ induced from the labels of the rays in the central hyperplane arrangement $\{ {\mathcal G}_e: e \in {\mathbb E}(G)\}$. These rays correspond to bonds $d_{\mathcal O}(\chi_B)$ for $B \subset V(G)$ (see \S\ref{sec:cutlattice}). Such a ray intersects ${\mathcal G}^{q,c}$ if and only if for some real number $t>0$ we have \[ t d_{\mathcal O}(\chi_B) = \sum_{v \ne q}{f(v)d_{\mathcal O}(\chi_v)} \ , \] or equivalently \[ d_{\mathcal O}(t\chi_B-\sum_{v \ne q}{f(v)\chi_v})=0 \ . \] Since the kernel of $d_{\mathcal O}$ consists of constant functions we must have \begin{equation} \label{eq:eval} t\sum_{v \in B}{\chi_v}-\sum_{v \ne q}{f(v)\chi_v} = k \sum_{v}{\chi_v} \end{equation} for some constant $k \in {\mathbb R}$. \medskip We claim that $q \not\in B$. Indeed, if $q \in B$, then evaluating \eqref{eq:eval} at $q$ we obtain $k=t$ and therefore \[t\sum_{v \in B^c}{\chi_v}=-\sum_{v \ne q}{f(v)\chi_v} \ .\] This implies that $f(v)=-t <0$ for $v \in B^c$ and $f(v)=0$ for $v \in B \backslash \{q\}$.
But this is impossible: these values give $\sum_{v \ne q}{f(v)} \leq 0$, whereas $\sum_{v \ne q}{f(v)}=c>0$ by assumption. \medskip Since $q \not \in B$, by evaluating \eqref{eq:eval} at $q$ we obtain $k=0$ and therefore \[ t\sum_{v \in B}{\chi_v}=\sum_{v \ne q}{f(v)\chi_v} \ , \] which implies that $f(v)=t$ for $v \in B$ and $f(v) = 0$ for $v \in B^c \backslash \{q\}$. Since $\sum_{v \ne q}{f(v)}=c$, we must have $t=\frac{c}{|B|}$. Conversely, for any nonempty subset $B \subset V(G) \backslash \{q\}$, the ray corresponding to the simple cut $d_{\mathcal O}(\chi_B)$ intersects ${\mathcal G}^{q,c}$ at the point $\frac{c}{|B|}d_{\mathcal O}(\chi_B)$. If we fix $0<c<1$, then we always have $0<\frac{c}{|B|} <1$, which means that the point of intersection belongs to a cell in $\Del(L(G))$ containing the origin. We summarize these observations in the following proposition. \begin{Proposition} \label{prop:rays} Let $\emptyset \ne B \subset V(G)$. The ray corresponding to the bond $d_{\mathcal O}(\chi_B)$ intersects ${\mathcal G}^{q,c}$ if and only if $q \not \in B$. If $0<c<1$, then the point of intersection belongs to a cell in $\Del(L(G))$ containing the origin. \end{Proposition} The vertices of ${\mathcal A}_G^{q,c}$ are the points of intersection with these rays. To each vertex of ${\mathcal A}_G^{q,c}$ we may assign the label corresponding to the $1$-dimensional cell of $\Del(L(G))$ containing that vertex. If we assume $0<c<1$, this is a (non-Laurent) monomial label that coincides with the labeling rule for ${\mathcal B}_G^{q,c}$ described in \S\ref{sec:labelBG}. From this point of view, it is straightforward to describe these labels combinatorially. \begin{Lemma} \label{lem:labels} For any $A \subsetneq V(G)$ with $q \in A$, the following holds. 
\begin{itemize} \item[(i)] The label of the point $d_{\mathcal O}(\chi_{A^c})$ in the labeled complex $\Del(L(G))$ is \[ {\mathbf b}(d_{\mathcal O}(\chi_{A^c})) = \frac{\prod_{e \in {\mathbb E}(A^c, A)}{y_e} }{\prod_{e \in {\mathbb E}(A,A^c)}{y_e} } \ .\] \item[(ii)] For $0<c<1$, the induced label on the vertex of ${\mathcal A}_G^{q,c}$ corresponding to the bond $d_{\mathcal O}(\chi_{A^c})$ is \[ \prod_{e \in {\mathbb E}(A^c, A)}{y_e} \ . \] \end{itemize} \end{Lemma} \begin{proof} (i) By \eqref{eq:blabel} we have \[ \begin{aligned} {\mathbf b}(d_{\mathcal O}(\chi_{A^c}))&=\prod_{e \in {\mathbb E}(G)} { {y_{e}^{d(\chi_{A^c})(e)}} } \\ &=\prod_{e \in {\mathbb E}(G)} { {y_{e}^{\chi_{A^c}(e_+)-\chi_{A^c}(e_-)} } } \\ &= \frac{\prod_{e \in {\mathbb E}(A^c, A)}{y_e} }{\prod_{e \in {\mathbb E}(A,A^c)}{y_e} } \ . \end{aligned} \] (ii) The label of the origin is ${\mathbf b}(\mathbf{0})=1$. Therefore, by the least common multiple construction, the label of the $1$-dimensional cell $\{\mathbf{0}, d_{\mathcal O}(\chi_{A^c}) \}$ in $\Del(L(G))$ is $\prod_{e \in {\mathbb E}(A^c, A)}{y_e}$. The result now follows from Proposition~\ref{prop:rays}. \end{proof} \medskip Since the labeled complex ${\mathcal A}_G^{q,c}$ (for $0 <c<1$) coincides with the labeled complex ${\mathcal B}_G^{q,c}$, we might as well think of the ideal ${\mathbf O}_G^q$ as constructed from ${\mathcal A}_G^{q,c}$. The advantage of this point of view is a precise Gr\"obner relation between ${\mathbf O}_G^q$ and ${\mathbf J}_G$ coming from the relation between ${\mathcal A}_G^{q,c}$ and $\Del(L(G))$ described above. \begin{Lemma}\label{lem:bij} Intersecting the cells of $\Del(L(G))$ with the hyperplane ${\mathcal G}^{q,c}$ induces a bijection between $(i+1)$-dimensional cells of $\Del(L(G)) / L(G)$ and $i$-dimensional cells of ${\mathcal A}_G^{q,c}$ for all $0 \leq i \leq n-2$. 
\end{Lemma} \begin{proof} It suffices to only consider cells in $\Del(L(G))$ containing the origin; all other cells in $\Del(L(G))$ can be obtained by translating such cells by $L(G)$. The primitive (or indecomposable) elements of $L(G)$ correspond to bonds (see \S\ref{sec:cutlattice}). Therefore the vertex set of any cell in $\Del(L(G))$ containing the origin is of the form $\{\mathbf{0}\} \cup P$ for some $P \subset \{d_{\mathcal O}(\chi_B) : \emptyset \ne B \subset V(G)\}$. Since $d_{\mathcal O}(\chi_{B^c})=-d_{\mathcal O}(\chi_B)$, it suffices to restrict our attention to the case where $P \subset \{d_{\mathcal O}(\chi_B) : \emptyset \ne B \subset V(G), q \not \in B\}$. By Proposition~\ref{prop:rays}, these are precisely those cells that have nonempty intersection with ${\mathcal G}^{q,c}$. \end{proof} \begin{Proposition} \label{prop:grobrelation} \begin{itemize} \item[] \item[(i)] A generating set for the ideal ${\mathbf J}_G$ is \[ \left\{ {\prod_{e \in {\mathbb E}(A^c, A)}{y_e} }-{\prod_{e \in {\mathbb E}(A,A^c)}{y_e} } : A \subsetneq V(G), q \in A \right\} .\] If we consider only those subsets $A$ of $V(G)$ such that both $G[A]$ and $G[A^c]$ are connected, then we have a minimal generating set for ${\mathbf J}_G$. \item[(ii)] The minimal generating set in part (i) is also a Gr\"obner basis with respect to {\em any} term order (i.e. is a universal Gr\"obner basis). \item[(iii)] A minimal generating set for the ideal ${\mathbf O}_G^q$ is \[ \left\{{\prod_{e \in {\mathbb E}(A^c, A)}{y_e} } : A \subsetneq V(G), q \in A, G[A] \text{ and } G[A^c] \text{ are connected} \right\} .\] \item[(iv)] ${\mathbf O}_G^q$ is the initial ideal of ${\mathbf J}_G$ with respect to any term order $\prec_q$ with the property that \[ {\prod_{e \in {\mathbb E}(A,A^c)}{y_e} } \ \prec_q {\prod_{e \in {\mathbb E}(A^c, A)}{y_e} } \] for every $A \subsetneq V(G)$ with $q \in A$ such that both $G[A]$ and $G[A^c]$ are connected. 
\end{itemize} \end{Proposition} \begin{proof} (i) It follows from the discussion in \S\ref{sec:res}, Theorem~\ref{thm:Jresol} and \cite[proof of Theorem~3.2]{BayerSturmfels} that a minimal generating set for ${\mathbf J}_G$ is given by the binomials \[ \frac{{\mathbf m}_F}{{\mathbf m}_{F'}} - \frac{{\mathbf m}_F}{{\mathbf m}_{\mathbf{0}}} \ , \] where $F$ ranges over a fundamental set of representatives of $1$-cells in $\Del(L(G))$ connecting $\mathbf{0}$ to a vertex $F'=d_{\mathcal O}(\chi_{A^c})$ for $A \subsetneq V(G)$ with $q \in A$. By Lemma~\ref{lem:labels}(i), we have \[ {\mathbf m}_{F'} = {\mathbf b}(d_{\mathcal O}(\chi_{A^c})) = \frac{\prod_{e \in {\mathbb E}(A^c, A)}{y_e} }{\prod_{e \in {\mathbb E}(A,A^c)}{y_e}} , \quad {\mathbf m}_{\mathbf{0}} = 1 , \]\[ {\mathbf m}_{F} = \lcm({\mathbf m}_{F'}, {\mathbf m}_{\mathbf{0}}) = \prod_{e \in {\mathbb E}(A^c, A)}{y_e} \] and therefore \[ \frac{{\mathbf m}_F}{{\mathbf m}_{F'}} - \frac{{\mathbf m}_F}{{\mathbf m}_{\mathbf{0}}}= \prod_{e \in {\mathbb E}(A, A^c)}{y_e}-\prod_{e \in {\mathbb E}(A^c, A)}{y_e} \ . \] The rest of part (i) is immediate. \medskip (ii) follows from the general fact that in any Lawrence ideal, a minimal binomial generating set is a Gr\"obner basis with respect to any term order (\cite[Theorem~7.1]{SturmfelsGrobnerConvex}). In our concrete situation, one can also easily verify (as in the proof of Theorem~\ref{thm:Cori} given in \cite[Theorem~5.1]{FarbodFatemeh}) that the $S$-polynomial of the two binomials corresponding to the cuts $(A,A^c)$ and $(B,B^c)$ can be reduced to zero by the binomials corresponding to the cuts $(A \backslash B, (A \backslash B)^c )$ and $(B \backslash A, (B \backslash A)^c )$. 
\medskip (iii) It follows from the discussion in \S\ref{sec:res}, Theorem~\ref{thm:bounded}, and the fact that the labeled cell complex ${\mathcal A}_G^{q,c}$ coincides with the labeled complex ${\mathcal B}_G^{q,c}$, that a minimal generating set for ${\mathbf O}_G^q$ is given by the monomials ${\mathbf m}_F$ as $F$ varies over the vertices of the bounded cell complex ${\mathcal A}_G^{q,c}$. By Proposition~\ref{prop:rays} and Lemma~\ref{lem:labels}(ii), these labels are precisely of the form \[ \prod_{e \in {\mathbb E}(A^c, A)}{y_e} \] for $A \subsetneq V(G)$ with $q \in A$ such that the edges between $(A,A^c)$ form a bond. \medskip (iv) follows from (ii) and (iii). \end{proof} \medskip \begin{Example}\label{exam:1} Consider the graph $G$ depicted in Figure~\ref{fig:graph} with the fixed orientation ${\mathcal O}$. Let $q$ be the distinguished (red) vertex at the bottom. Acyclic partial orientations of $G$ with unique source at $q$ are depicted in Figures~\ref{fig:2partition}--\ref{fig:4partition}. 
\begin{figure}[h] \begin{center} \begin{tikzpicture} [scale = .18, very thick = 10mm] \node (n4) at (4,1) [Cred] {}; \node (n1) at (4,11) [Cgray] {}; \node (n2) at (1,6) [Cgray] {}; \node (n3) at (7,6) [Cgray] {}; \foreach \from/\to in {n4/n2,n1/n3} \draw[] (\from) -- (\to); \foreach \from/\to in {n2/n1,n4/n3,n1/n4} \draw[] (\from) -- (\to); \node (n4) at (24,1) [Cred] {}; \node (m4) at (25,6) [Cwhite] {$e_5$}; \node (n1) at (24,11) [Cgray] {}; \node (m1) at (26.5,9.5) [Cwhite] {$e_3$}; \node (n2) at (21,6) [Cgray] {}; \node (m1) at (21.5,9.5) [Cwhite] {$e_1$}; \node (n3) at (27,6) [Cgray] {}; \node (m1) at (26.5,2.5) [Cwhite] {$e_4$}; \node (m1) at (21.5,2.5) [Cwhite] {$e_2$}; \foreach \from/\to in {n4/n2,n3/n1,n1/n2,n4/n3,n4/n1} \draw[black][->] (\from) -- (\to); \end{tikzpicture} \caption{Graph $G$ and a fixed orientation ${\mathcal O}$} \label{fig:graph} \end{center} \end{figure} \begin{figure}[h] \begin{center} \begin{tikzpicture} [scale = .17, very thick = 10mm] \node (n4) at (4,1) [Cred] {}; \node (n1) at (4,11) [Cgray] {}; \node (n2) at (1,6) [Cgray] {}; \node (n3) at (7,6) [Cgray] {}; \foreach \from/\to in {n4/n2,n1/n3} \draw[] (\from) -- (\to); \foreach \from/\to in {n2/n1,n4/n3,n4/n1} \draw[blue][->] (\from) -- (\to); \node (n4) at (14,1) [Cred] {}; \node (n1) at (14,11) [Cgray] {}; \node (n2) at (11,6) [Cgray] {}; \node (n3) at (17,6) [Cgray] {}; \foreach \from/\to in {n4/n2,n3/n1,n4/n1} \draw[blue][->] (\from) -- (\to); \foreach \from/\to in {n1/n2,n4/n3} \draw[] (\from) -- (\to); \node (n4) at (24,1) [Cred] {}; \node (n1) at (24,11) [Cgray] {}; \node (n2) at (21,6) [Cgray] {}; \node (n3) at (27,6) [Cgray] {}; \foreach \from/\to in {n1/n2,n1/n3} \draw[] (\from) -- (\to); \foreach \from/\to in {n4/n2,n4/n3,n4/n1} \draw[blue][->] (\from) -- (\to); \node (n4) at (34,1) [Cred] {}; \node (n1) at (34,11) [Cgray] {}; \node (n2) at (31,6) [Cgray] {}; \node (n3) at (37,6) [Cgray] {}; \foreach \from/\to in {n4/n2,n4/n3} \draw[] (\from) -- (\to); \foreach 
\from/\to in {n2/n1,n3/n1,n4/n1} \draw[blue][->] (\from) -- (\to); \node (n4) at (44,1) [Cred] {}; \node (n1) at (44,11) [Cgray] {}; \node (n2) at (41,6) [Cgray] {}; \node (n3) at (47,6) [Cgray] {}; \foreach \from/\to in {n1/n3,n4/n3,n4/n1} \draw[] (\from) -- (\to); \foreach \from/\to in {n1/n2,n4/n2} \draw[blue][->] (\from) -- (\to); \node (n4) at (54,1) [Cred] {}; \node (n1) at (54,11) [Cgray] {}; \node (n2) at (51,6) [Cgray] {}; \node (n3) at (57,6) [Cgray] {}; \foreach \from/\to in {n1/n2,n4/n2,n4/n1} \draw[] (\from) -- (\to); \foreach \from/\to in {n1/n3,n4/n3} \draw[blue][->] (\from) -- (\to); \end{tikzpicture} \caption{Acyclic partial orientations with $2$ components} \label{fig:2partition} \end{center} \medskip \begin{center} \begin{tikzpicture} [scale = .17, very thick = 10mm] \node (n4) at (-6,13) [Cred] {}; \node (n1) at (-6,23) [Cgray] {}; \node (n2) at (-9,18) [Cgray] {}; \node (n3) at (-3,18) [Cgray] {}; \foreach \from/\to in {n2/n4} \draw[] (\from) -- (\to); \foreach \from/\to in {n4/n3,n2/n1,n3/n1,n4/n1} \draw[blue][->] (\from) -- (\to); \node (n4) at (4,13) [Cred] {}; \node (n1) at (4,23) [Cgray] {}; \node (n2) at (1,18) [Cgray] {}; \node (n3) at (7,18) [Cgray] {}; \foreach \from/\to in {n3/n4} \draw[] (\from) -- (\to); \foreach \from/\to in {n4/n2,n2/n1,n3/n1,n4/n1} \draw[blue][->] (\from) -- (\to); \node (n4) at (14,13) [Cred] {}; \node (n1) at (14,23) [Cgray] {}; \node (n2) at (11,18) [Cgray] {}; \node (n3) at (17,18) [Cgray] {}; \foreach \from/\to in {n1/n3} \draw[] (\from) -- (\to); \foreach \from/\to in {n4/n2,n4/n3, n2/n1,n4/n1} \draw[blue][->] (\from) -- (\to); \node (n4) at (24,13) [Cred] {}; \node (n1) at (24,23) [Cgray] {}; \node (n2) at (21,18) [Cgray] {}; \node (n3) at (27,18) [Cgray] {}; \foreach \from/\to in {n1/n2} \draw[] (\from) -- (\to); \foreach \from/\to in {n1/n3, n3/n4,n2/n4,n1/n4} \draw[blue][<-] (\from) -- (\to); \end{tikzpicture} \caption{Acyclic partial orientations with $3$ components} \label{fig:3partition} \end{center} 
\medskip \begin{center} \begin{tikzpicture} [scale = .17, very thick = 10mm] \node (n4) at (4,13) [Cred] {}; \node (n1) at (4,23) [Cgray] {}; \node (n2) at (1,18) [Cgray] {}; \node (n3) at (7,18) [Cgray] {}; \foreach \from/\to in {n2/n1,n2/n4} \draw[blue][<-] (\from) -- (\to); \foreach \from/\to in {n4/n3,n3/n1,n4/n1} \draw[blue][->] (\from) -- (\to); \node (n4) at (14,13) [Cred] {}; \node (n1) at (14,23) [Cgray] {}; \node (n2) at (11,18) [Cgray] {}; \node (n3) at (17,18) [Cgray] {}; \foreach \from/\to in {n1/n2,n3/n1} \draw[blue][<-] (\from) -- (\to); \foreach \from/\to in {n4/n3,n4/n2,n4/n1} \draw[blue][->] (\from) -- (\to); \node (n4) at (24,13) [Cred] {}; \node (n1) at (24,23) [Cgray] {}; \node (n2) at (21,18) [Cgray] {}; \node (n3) at (27,18) [Cgray] {}; \foreach \from/\to in {n1/n2,n1/n3, n3/n4,n2/n4,n1/n4} \draw[blue][<-] (\from) -- (\to); \node (n4) at (34,13) [Cred] {}; \node (n1) at (34,23) [Cgray] {}; \node (n2) at (31,18) [Cgray] {}; \node (n3) at (37,18) [Cgray] {}; \foreach \from/\to in {n2/n1,n3/n1, n3/n4,n2/n4,n1/n4} \draw[blue][<-] (\from) -- (\to); \end{tikzpicture} \caption{Acyclic partial orientations with $4$ components} \label{fig:4partition} \end{center} \end{figure} \medskip Consider the arrangement ${\mathcal H}'_G=\{{\mathcal H}_{e_1},\ldots, {\mathcal H}_{e_5}\}$. The graphic arrangement ${\mathcal H}_G^{q,c}$ (for some $c > 0$) is two-dimensional and is depicted in Figure~\ref{fig:arrangement}. Its bounded complex ${\mathcal B}_G^{q,c}$ is the bounded part of this figure. Recall that the graphic arrangement ``lives in'' $C^0(G,{\mathbb R})$, which may be identified with ${\mathbb R}^4$ after fixing a labeling of the vertices. For each hyperplane labeled ${\mathcal H}_e$, the small arrow next to it denotes the side where $(d f)(e) >0$. The hyperplane ${\mathcal H}_{\bar{e}}$ coincides with ${\mathcal H}_e$, but its arrow will be reversed. We have also labeled the $0$-cells according to \eqref{eq:BLabels}. 
\setlength{\unitlength}{1.1pt} \begin{figure}[h!] \begin{center} \begin{picture}(100,195)(0,-55) \thicklines \put(60,0){\circle*{4}} \put(15,-10){$y_{\bar{e}_1} y_{e_4} y_{e_5}$} \put(26,66){\circle*{4}} \put(-23,65){$y_{e_2} y_{e_3} y_{e_5}$} \put(160,0){\circle*{4}} \put(130,-10){$y_{\bar{e}_3} y_{e_4}$} \put(-40,0){\circle*{4}} \put(-95,-10){$y_{\bar{e}_1} y_{e_3} y_{e_5}$} \put(60,100){\circle*{4}} \put(24,98){$y_{e_1} y_{e_2}$} \put(60,50){\circle*{4}} \put(15,45){$y_{e_2} y_{e_4} y_{e_5}$} \put(60,0){\line(0,-1){30}} \put(60,50){\line(2,-1){100}} \put(60,50){\line(-2,1){65}} \put(60,100){\line(0,1){30}} \put(60,-20){\vector(-1,0){10}} \put(56,-43){$H_{e_3}$} \put(175,0){\vector(0,1){10}} \put(180,10){$H_{e_2}$} \put(-50,-10){\vector(1,-1){10}} \put(-60,-43){$H_{e_4}$} \put(30,130){\line(1,-1){20}} \put(160,0){\line(1,-1){20}} \put(170,-10){\vector(-2,-3){7}} \put(193,-40){$H_{e_5}$} \put(14,73){\vector(2,3){7}} \put(-30,80){$H_{e_1}$} \thicklines \put(-40,0){\line(1,0){230}} \put(-40,0){\line(-1,0){30}} \put(60,0){\line(-1,0){80}} \put(60,0){\line(0,1){130}} \put(160,0){\line(3,-2){35}} \put(-40,0){\line(1,1){130}} \put(-40,0){\line(-1,-1){30}} \put(160,0){\line(-1,1){130}} \put(160,0){\line(1,-1){30}} \put(-40,-10){${\mathbf p}_1$} \put(64,-10){${\mathbf p}_2$} \put(155,10){${\mathbf p}_3$} \put(63,55){${\mathbf p}_4$} \put(21,74){${\mathbf p}_5$} \put(65,98){${\mathbf p}_6$} \end{picture} \caption{${\mathcal H}_G^q$, ${\mathcal B}_G^q$, and the monomial labels on the vertices} \label{fig:arrangement} \end{center} \end{figure} The polynomial ring ${\mathbf S}$ has $10$ variables: \[\{y_e, y_{\bar{e}}: e \in {\mathcal O}\} = \{y_{e_1},y_{e_2},y_{e_3},y_{e_4},y_{e_5} ; y_{\bar{e}_1},y_{\bar{e}_2},y_{\bar{e}_3},y_{\bar{e}_4},y_{\bar{e}_5}\} \ .\] By Theorem~\ref{thm:bounded}, the associated oriented matroid ideal ${\mathbf O}_G^q$ is minimally generated by the labels of the $0$-cells: \begin{equation} \label{eq:OGex} {\mathbf O}_G^q=\langle y_{\bar{e}_1} 
y_{e_4} y_{e_{5}}, y_{e_2}y_{e_3}y_{e_5}, y_{\bar{e}_3} y_{e_4}, y_{\bar{e}_1} y_{e_3}y_{e_5}, y_{e_1} y_{e_2},y_{e_2}y_{e_4}y_{e_5}\rangle \ . \end{equation} Note that the indices appearing in the minimal generating set correspond precisely to the oriented edges leaving the part of the partition containing $q$ (i.e. the blue edges in Figure~\ref{fig:2partition}). This is exactly what Proposition~\ref{prop:grobrelation}(iii) predicts. The lattice of integral cuts $L(G)$ is $3$-dimensional. Instead of drawing it, we may directly write a minimal generating set for ${\mathbf J}_G$ using Proposition~\ref{prop:grobrelation}(i): \[ {\mathbf J}_{G} =\langle y_{\bar{e}_1} y_{e_4} y_{e_{5}}-y_{{e}_1} y_{\bar{e}_4} y_{\bar{e}_{5}}, y_{e_2}y_{e_3}y_{e_5}-y_{\bar{e}_2}y_{\bar{e}_3}y_{\bar{e}_5}, y_{\bar{e}_3}y_{e_4}-y_{{e}_3}y_{\bar{e}_4}, y_{\bar{e}_1} y_{e_3}y_{e_5}-y_{{e}_1} y_{\bar{e}_3}y_{\bar{e}_5},\] \[ y_{e_1} y_{e_2}-y_{\bar{e}_1} y_{\bar{e}_2},y_{e_2}y_{e_4}y_{e_5}-y_{\bar{e}_2}y_{\bar{e}_4}y_{\bar{e}_5}\rangle\ . \] The first term in each binomial is the leading term with respect to the term order $\prec_q$. The bounded complex ${\mathcal B}_G^q$ has six $0$-cells $\{{\mathbf p}_1, \ldots , {\mathbf p}_6\}$, nine $1$-cells $\{E_1, \ldots , E_9\}$, and four $2$-cells $\{F_1, \ldots , F_4\}$. These numbers correspond to the acyclic partial orientations in Figures~\ref{fig:2partition}, \ref{fig:3partition}, and \ref{fig:4partition}, as well as to the Betti numbers of ${\mathbf O}_G^q$ and ${\mathbf J}_G$. Moreover, ${\mathcal B}_G^q$ supports a minimal free resolution for ${\mathbf O}_G^q$. 
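One can also check Proposition~\ref{prop:grobrelation}(iv) directly on these generators: taking the leading term of each binomial generator of ${\mathbf J}_G$ with respect to $\prec_q$ recovers the corresponding monomial generator of ${\mathbf O}_G^q$ in \eqref{eq:OGex}; for instance
\[
\ini_{\prec_q}\!\left( y_{e_1} y_{e_2}-y_{\bar{e}_1} y_{\bar{e}_2} \right) = y_{e_1} y_{e_2} \ , \qquad \ini_{\prec_q}\!\left( y_{\bar{e}_3}y_{e_4}-y_{{e}_3}y_{\bar{e}_4} \right) = y_{\bar{e}_3}y_{e_4} \ .
\]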
To explicitly describe this minimal resolution, let \[ E_1=\{{\mathbf p}_1,{\mathbf p}_2\} , \quad E_2=\{{\mathbf p}_2,{\mathbf p}_3\}, \quad E_3=\{{\mathbf p}_1,{\mathbf p}_5\}, \quad E_4=\{{\mathbf p}_2,{\mathbf p}_4\}, \quad E_5=\{{\mathbf p}_3,{\mathbf p}_4\} \] \[ E_6=\{{\mathbf p}_4,{\mathbf p}_5\}, \quad E_7=\{{\mathbf p}_5,{\mathbf p}_6\} , \quad E_8=\{{\mathbf p}_4,{\mathbf p}_6\} , \quad E_9=\{{\mathbf p}_3,{\mathbf p}_6\} \ , \] \[ F_1=\{{\mathbf p}_1,{\mathbf p}_2,{\mathbf p}_4,{\mathbf p}_5\}, \quad F_2=\{{\mathbf p}_2,{\mathbf p}_3,{\mathbf p}_4\} , \quad F_3=\{{\mathbf p}_4,{\mathbf p}_5,{\mathbf p}_6\}, \quad F_4=\{{\mathbf p}_3,{\mathbf p}_4,{\mathbf p}_6\} \ . \] We extend the labeling on the vertices to the whole ${\mathcal B}_G^q$ by the least common multiple construction. For example, \[ {\mathbf m}_{E_2}=y_{\bar{e}_1}y_{\bar{e}_3} y_{e_4} y_{e_5}, \ {\mathbf m}_{E_4}=y_{\bar{e}_1} y_{e_2} y_{e_4} y_{e_5}, \ {\mathbf m}_{E_5}=y_{e_2} y_{\bar{e}_3} y_{e_4} y_{e_5} , \ {\mathbf m}_{E_6}=y_{e_2}y_{e_3} y_{e_4} y_{e_5}, \ \] \[ {\mathbf m}_{F_2}= y_{\bar{e}_1} y_{e_2} y_{\bar{e}_3} y_{e_4}y_{e_5}\ . \] Then the minimal resolution of ${\mathbf O}_G^q$ is as follows. \[ 0 \rightarrow \bigoplus_{i=1}^4{\mathbf S}(-{\mathbf m}_{F_i}) \xrightarrow{\partial_2} \bigoplus_{i=1}^9{\mathbf S}(-{\mathbf m}_{E_i}) \xrightarrow{\partial_1} \bigoplus_{i=1}^6{\mathbf S}(-{\mathbf m}_{{\mathbf p}_i}) \xrightarrow{\partial_0} {\mathbf S} \twoheadrightarrow {\mathbf S} /{\mathbf O}_G^q \ . \] As usual, assume $[F]$ denotes the generator of ${\mathbf S}(-{\mathbf m}_F)$. The homogenized differential operator of the cell complex $(\partial_0, \partial_1, \partial_2)$ is as described in \eqref{eq:differntials}. 
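To spell out the least common multiple rule in one instance before computing differentials: reading the vertex labels off Figure~\ref{fig:arrangement}, the labels of $E_2=\{{\mathbf p}_2,{\mathbf p}_3\}$ and $F_2=\{{\mathbf p}_2,{\mathbf p}_3,{\mathbf p}_4\}$ listed above arise as
\[
{\mathbf m}_{E_2}=\lcm({\mathbf m}_{{\mathbf p}_2}, {\mathbf m}_{{\mathbf p}_3}) = \lcm(y_{\bar{e}_1} y_{e_4} y_{e_5},\, y_{\bar{e}_3} y_{e_4}) = y_{\bar{e}_1}y_{\bar{e}_3} y_{e_4} y_{e_5} \ ,
\]
\[
{\mathbf m}_{F_2}=\lcm({\mathbf m}_{{\mathbf p}_2}, {\mathbf m}_{{\mathbf p}_3}, {\mathbf m}_{{\mathbf p}_4}) = \lcm(y_{\bar{e}_1} y_{e_4} y_{e_5},\, y_{\bar{e}_3} y_{e_4},\, y_{e_2} y_{e_4} y_{e_5}) = y_{\bar{e}_1} y_{e_2} y_{\bar{e}_3} y_{e_4} y_{e_5} \ .
\]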
For example, \[ \partial_0([{\mathbf p}_i])= {\mathbf m}_{{\mathbf p}_i} = {\mathbf m}({\mathbf p}_i) \ , \] \[ \partial_1([E_6])= \frac{y_{e_2}y_{e_3} y_{e_4} y_{e_5}}{y_{e_2} y_{e_4} y_{e_5}}[{\mathbf p}_4] -\frac{y_{e_2}y_{e_3} y_{e_4} y_{e_5}}{y_{e_2}y_{e_3} y_{e_5}}[{\mathbf p}_5] = y_{e_3}[{\mathbf p}_4] - y_{e_4}[{\mathbf p}_5]\ , \] \medskip \[ \begin{aligned} \partial_2([F_2]) &= \frac{y_{\bar{e}_1} y_{e_2} y_{\bar{e}_3} y_{e_4}y_{e_5}}{y_{\bar{e}_1}y_{\bar{e}_3} y_{e_4} y_{e_5}}[E_2] -\frac{y_{\bar{e}_1} y_{e_2} y_{\bar{e}_3} y_{e_4}y_{e_5}}{y_{\bar{e}_1} y_{e_2} y_{e_4} y_{e_5}}[E_4]+\frac{y_{\bar{e}_1} y_{e_2} y_{\bar{e}_3} y_{e_4}y_{e_5}}{y_{e_2} y_{\bar{e}_3} y_{e_4} y_{e_5}}[E_5] \\ &= y_{e_2}[E_2] - y_{\bar{e}_3}[E_4] + y_{\bar{e}_1}[E_5] \ . \end{aligned} \] Although ${\mathbf J}_G$ has the same Betti table as ${\mathbf O}_G^q$, it is not possible to read the minimal free resolution for ${\mathbf J}_G$ directly from ${\mathcal B}_G^q$; one really needs to consider the cell decomposition of the torus $L(G)_{{\mathbb R}}/L(G)$. \end{Example} \begin{Example} \label{ex:CutResol} Consider the graph $K_3$ with a fixed orientation as in Figure~\ref{fig:K3}. \begin{figure}[h!] \begin{center} \begin{tikzpicture} [scale = .30, very thick = 20mm] \node (n1) at (34,12) [Cwhite] {$u_1$}; \node (n2) at (30,6) [Cwhite] {$u_3$}; \node (n3) at (38,6) [Cwhite] {$u_2$}; \node (n1) at (34,5.2) [Cwhite] {$e_1$}; \node (n2) at (31.7,9) [Cwhite] {$e_3$}; \node (n3) at (36.3,9) [Cwhite] {$e_2$}; \node (n1) at (34,11) [Cgray] {}; \node (n2) at (31,6) [Cred] {}; \node (n3) at (37,6) [Cgray] {}; \foreach \from/\to in {n3/n1,n1/n2,n2/n3} \draw[black][->] (\from) -- (\to); \end{tikzpicture} \caption{Graph $K_3$ and a fixed orientation ${\mathcal O}$} \label{fig:K3} \end{center} \end{figure} The lattice of integral cuts $L(G)$ is two-dimensional and is depicted in Figure~\ref{fig:CutLattice}. 
This picture should be compared with Figure~\ref{fig:PrinLattice} (see Remark~\ref{rmk:isometry}). This lattice ``lives in'' $C_{{\mathcal O}}^1(G,{\mathbb R})=\span\{e_1^*,e_2^*,e_3^*\} \cong {\mathbb R}^3$. In the picture $a_1=d_{{\mathcal O}}(\chi_{u_1})=e^\ast_2-e^\ast_3$, $a_2=d_{{\mathcal O}}(\chi_{u_2})=e^\ast_1-e^\ast_2$, and $a_3=d_{{\mathcal O}}(\chi_{u_3})=e^\ast_3-e^\ast_1$. \begin{figure}[h!] \begin{center} \begin{tikzpicture} [scale = .30, very thick = 20mm] \node (n42) at (20,1) [Cblack] {}; \node (n41) at (14,1) [Cblack] {}; \node (n43) at (26,1) [Cblack] {}; \node (n44) at (32,1) [Cblack] {}; \node (n11) at (14,11) [Cblack] {}; \node (n12) at (20,11) [Cblack] {}; \node (n13) at (26,11) [Cblack] {}; \node (n14) at (32,11) [Cblack] {}; \node (n21) at (11,6) [Cblack] {}; \node (n22) at (17,6) [Cblack] {}; \node (n23) at (23,6) [Cblack] {}; \node (n24) at (29,6) [Cblack] {}; \node (n25) at (35,6) [Cblack] {}; \node (n4) at (14,1) [Cblack] {}; \node (n1) at (14,11) [Cblack] {}; \node (n2) at (11,6) [Cblack] {}; \node (n3) at (17,6) [Cblack] {}; \node (n51) at (23,-4) [Cblack] {}; \node (n52) at (29,-4) [Cblack] {}; \node (n61) at (27,2.8) [Cgray] {}; \node (n62) at (27,-.7) [Cgray] {}; \node (n72) at (27,-9) [C0] {}; \node (n70) at (20,-8.3) [C0] {}; \node (n18) at (35,16) [C0] {$\varphi_1=0$}; \node (n40) at (6.7,1) [C0] {$\varphi_2=0$}; \node (n10) at (17,16) [C0] {$\varphi_3=0$}; \node (n181) at (38,11) [C0] {$\varphi_1=1$}; \node (n401) at (3.7,6) [C0] {$\varphi_2=1$}; \node (n101) at (11,16) [C0] {$\varphi_3=1$}; \node (m24) at (30.5,6.8) [C0] {$a_1$}; \node (mm) at (30.5,-4.5) [C0] {$a_2$}; \node (m42) at (19,0.17) [C0] {$a_3$}; \node (m42) at (25,0.17) [C0] {$0$}; \node (n71) at (27,16) [C0] {${\mathcal G}^{q,c}$}; \foreach \from/\to in {n71/n62,n72/n61} \draw[blue][dashed] (\from) -- (\to); \foreach \from/\to in {n40/n41, n401/n21} \draw[black][dashed] (\from) -- (\to); \foreach \from/\to in {n14/n18, n70/n51,n25/n181} \draw[green][dashed] 
(\from) -- (\to); \foreach \from/\to in {n10/n12,n1/n3, n101/n11} \draw[red][dashed] (\from) -- (\to); \foreach \from/\to in {n11/n14,n21/n25,n41/n44,n51/n52} \draw[black][] (\from) -- (\to); \foreach \from/\to in {n11/n22,n22/n42,n42/n51,n12/n23,n23/n43,n43/n52,n13/n24,n24/n44,n14/n25,n21/n41} \draw[red][] (\from) -- (\to); \foreach \from/\to in {n11/n21,n12/n22,n22/n41,n13/n23,n23/n42,n14/n24,n24/n43,n43/n51,n25/n44,n44/n52} \draw[green][] (\from) -- (\to); \foreach \from/\to in {n62/n61} \draw[blue][] (\from) -- (\to); \node (n61) at (27,2.8) [Cgray] {}; \node (n62) at (27,-.7) [Cgray] {}; \node (n63) at (27,1) [Cgray] {}; \end{tikzpicture} \caption{Cut lattice $L(G)$} \label{fig:CutLattice} \end{center} \end{figure} The cell decomposition $\Del(L(G))$ is the Delaunay decomposition of $L(G)_{\mathbb R}$ with respect to the cut lattice and the usual Euclidean metric (cf. Remark~\ref{rmk:delon}(ii)), which coincides with an infinite hyperplane arrangement (Theorem~\ref{thm:totunim}(ii) and \S\ref{sec:cutlattice}). The hyperplanes at the origin are defined by $\varphi_i = e_i |_{L(G)_{\mathbb R}} = 0$. The quotient cell decomposition $\Del(L(G))/L(G)$ of the torus $L(G)_{\mathbb R} / L(G)$ has one $0$-cell $\{{\mathbf p}\}$ (the orbit of the origin), three $1$-cells $\{E,E',E''\}$ (the orbits of the green, red, and black edges), and two $2$-cells $\{F,F'\}$ (the orbits of the upward and downward triangles). Assume that $q=u_3$ is the distinguished vertex. The hyperplane ${\mathcal G}^{q,c}$ is the hyperplane passing through points $c a_1$ and $c a_2$. In the figure $c$ is roughly $\frac{1}{3}$. The bounded complex of the intersection of this hyperplane with the arrangement at the origin is denoted by a solid blue segment. This is ${\mathcal A}_G^{q,c}$, which is combinatorially equivalent to ${\mathcal B}_G^{q,c}$ (via the coboundary map). 
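As a quick consistency check on the generators in Figure~\ref{fig:CutLattice}: since the constant functions span $\Ker(d_{\mathcal O})$, the images of the vertex characteristic functions must satisfy
\[
a_1+a_2+a_3 = d_{\mathcal O}(\chi_{u_1}+\chi_{u_2}+\chi_{u_3}) = 0 \ ,
\]
and indeed $(e^\ast_2-e^\ast_3)+(e^\ast_1-e^\ast_2)+(e^\ast_3-e^\ast_1)=0$.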
In Figure~\ref{fig:fund}, we have chosen a fundamental domain for the lattice, and have labeled all cells of this fundamental domain according to the recipe described in \S\ref{sec:labelDel}. This labeling induces a labeling on ${\mathcal A}_G^{q,c}$ (compatible with the labeling of ${\mathcal B}_G^{q,c}$), which is also given in the figure. The labeled cell complexes in Figure~\ref{fig:fund} are enough to completely describe minimal free resolutions for ${\mathbf J}_G$ and for ${\mathbf O}_G^q$. Concretely, the minimal resolution of ${\mathbf J}_G$ is as follows: \[ 0 \rightarrow {\mathbf S}(-{\mathbf m}_{F}) \oplus {\mathbf S}(-{\mathbf m}_{F'}) \xrightarrow{\partial_2} {\mathbf S}(-{\mathbf m}_{E}) \oplus {\mathbf S}(-{\mathbf m}_{E'})\oplus {\mathbf S}(-{\mathbf m}_{E''}) \xrightarrow{\partial_1} {\mathbf S}(-{\mathbf m}_{{\mathbf p}}) \ . \] As before, $[F]$ denotes the generator of ${\mathbf S}(-{\mathbf m}_F)$. The labels of cells in $\Del(L(G)) / L(G)$ are: \[ {\mathbf m}_E = y_{e_2}y_{\bar{e}_3} \ , \quad {\mathbf m}_{E'} = y_{e_1}y_{\bar{e}_3} \ , \quad {\mathbf m}_{E''} = y_{e_1}y_{\bar{e}_2} \ , \] \[ {\mathbf m}_F = y_{e_1} y_{e_2}y_{\bar{e}_3} \ , \quad {\mathbf m}_{F'}= y_{e_1}y_{\bar{e}_2}y_{\bar{e}_3} \ . 
\] The homogenized differential operator (see \eqref{eq:differntials}) of the cell complex $(\partial_1, \partial_2)$ is described as follows: \[ \partial_1([E])= \frac{y_{e_2}y_{\bar{e}_3}}{1}[{\mathbf p}] -\frac{y_{e_2}y_{\bar{e}_3}}{\frac{y_{e_2}y_{\bar{e}_3}}{y_{\bar{e}_2}y_{e_3}}}[{\mathbf p}] = ({y_{e_2}y_{\bar{e}_3}} - {y_{\bar{e}_2}y_{e_3}})[{\mathbf p}]\ , \] \[ \partial_1([E'])= \frac{y_{e_1}y_{\bar{e}_3}}{\frac{y_{e_1}y_{\bar{e}_3}}{y_{\bar{e}_1}y_{e_3}}}[{\mathbf p}] - \frac{y_{e_1}y_{\bar{e}_3}}{1}[{\mathbf p}] = ({y_{\bar{e}_1}y_{e_3}}-{y_{e_1}y_{\bar{e}_3}})[{\mathbf p}]\ , \] \[ \partial_1([E''])= \frac{y_{e_1}y_{\bar{e}_2}}{\frac{y_{e_1}y_{\bar{e}_2}}{y_{\bar{e}_1}y_{e_2}}}[{\mathbf p}] - \frac{y_{e_1}y_{\bar{e}_2}}{1}[{\mathbf p}] = ({y_{\bar{e}_1}y_{e_2}} - {y_{e_1}y_{\bar{e}_2}})[{\mathbf p}]\ , \] \[ \partial_2([F])= \frac{y_{e_1}y_{e_2}y_{\bar{e}_3}}{y_{e_2}y_{\bar{e}_3}}[E] - \frac{y_{e_1}y_{e_2}y_{\bar{e}_3}}{\frac{y_{e_1}y_{e_2}y_{\bar{e}_3}}{y_{e_3}}}[E''] + \frac{y_{e_1}y_{e_2}y_{\bar{e}_3}}{y_{e_1}y_{\bar{e}_3}}[E'] = {y_{e_1}}[E] - {y_{e_3}}[E''] + {y_{e_2}}[E'] \ , \] \[ \partial_2([F'])= \frac{y_{e_1}y_{\bar{e}_2}y_{\bar{e}_3}}{\frac{y_{e_1}y_{\bar{e}_2}y_{\bar{e}_3}}{{y_{\bar{e}_1}}}}[E] - \frac{y_{e_1}y_{\bar{e}_2}y_{\bar{e}_3}}{y_{e_1}y_{\bar{e}_2}}[E''] + \frac{y_{e_1}y_{\bar{e}_2}y_{\bar{e}_3}}{y_{e_1}y_{\bar{e}_3}}[E'] = {y_{\bar{e}_1}}[E] - {y_{\bar{e}_3}}[E''] + {y_{\bar{e}_2}}[E'] \ . \] Clearly ${\mathbf J}_G$ is the image of $\partial_1$ after identifying $[{\mathbf p}]$ with $1$ (see Proposition~\ref{prop:grobrelation}). Since the labeling is compatible with the action of the lattice, any translation of this fundamental domain would give rise to the exact same description of the differential maps. The minimal resolution of ${\mathbf O}_G^q$ can be read from the bounded complex ${\mathcal A}_G^{q,c}$. 
If we identify the name of each cell in ${\mathcal A}_G^{q,c}$ with the name of the associated cell in $\Del(L(G))$, we have \[ 0 \rightarrow {\mathbf S}(-{\mathbf m}_{F}) \oplus {\mathbf S}(-{\mathbf m}_{F'}) \xrightarrow{\tilde{\partial}_1} {\mathbf S}(-{\mathbf m}_{E}) \oplus {\mathbf S}(-{\mathbf m}_{E'})\oplus {\mathbf S}(-{\mathbf m}_{E''}) \xrightarrow{\tilde{\partial}_0} {\mathbf S} \ , \] where \[ \tilde{\partial}_0([E])={\mathbf m}_E = y_{e_2}y_{\bar{e}_3} \ , \] \[ \tilde{\partial}_0([E'])={\mathbf m}_{E'} = y_{e_1}y_{\bar{e}_3} \ , \] \[ \tilde{\partial}_0([E''])={\mathbf m}_{E''} = y_{e_1}y_{\bar{e}_2} \ , \] \[ \tilde{\partial}_1([F])= \frac{y_{e_1} y_{e_2}y_{\bar{e}_3}}{y_{e_2}y_{\bar{e}_3}}[E] - \frac{y_{e_1} y_{e_2}y_{\bar{e}_3}}{y_{e_1}y_{\bar{e}_3}}[E'] = y_{e_1}[E]-y_{e_2}[E']\ , \] \[ \tilde{\partial}_1([F'])= \frac{y_{e_1}y_{\bar{e}_2}y_{\bar{e}_3}}{y_{e_1}y_{\bar{e}_3}}[E'] - \frac{y_{e_1}y_{\bar{e}_2}y_{\bar{e}_3}} {y_{e_1}y_{\bar{e}_2}} [E''] = y_{\bar{e}_2}[E'] - y_{\bar{e}_3} [E''] \ . \] The ideal ${\mathbf O}_G^q$ is the image of $\tilde{\partial}_0$ (see Proposition~\ref{prop:grobrelation}). This example is, of course, closely related to Example~\ref{ex:PrinResol}. The general relationship between these two constructions is explained in Remark~\ref{rmk:ResolRelation}. \begin{figure}[h!] 
\begin{center} \begin{tikzpicture} [scale = .60, very thick = 20mm] \node (n4) at (14,1) [Cgray] {}; \node (n1) at (14,11) [Cgray] {}; \node (n2) at (11,6) [Cblack] {}; \node (n3) at (17,6) [Cgray] {}; \node (m1) at (14,0) [C0] {$\frac{y_{e_1}y_{\bar{e}_2}}{y_{\bar{e}_1}y_{{e}_2}}$}; \node (m1) at (14,12.2) [C0] {$\frac{y_{e_2}y_{\bar{e}_3}}{y_{\bar{e}_2}y_{{e}_3}}$}; \node (m3) at (18.3,6) [C0] {$\frac{y_{e_1}y_{\bar{e}_3}}{y_{\bar{e}_1}y_{{e}_3}}$}; \node (m) at (10.3,6) [C0] {$1$}; \foreach \from/\to in {n4/n2} \draw[red][] (\from) -- (\to); \foreach \from/\to in {n1/n2} \draw[green][] (\from) -- (\to); \foreach \from/\to in {n2/n3} \draw[black][] (\from) -- (\to); \foreach \from/\to in {n1/n3} \draw[red][dashed] (\from) -- (\to); \foreach \from/\to in {n4/n3} \draw[green][dashed] (\from) -- (\to); \node (m10) at (11.3,8.5) [C0] {\textcolor{gray}{$y_{e_2}y_{\bar{e}_3}$}}; \node (m14) at (11.3,3.5) [C0] {\textcolor{gray}{$y_{e_1}y_{\bar{e}_2}$}}; \node (m23) at (14.1,6.86) [C0] {\textcolor{gray}{$y_{e_1}y_{\bar{e}_3}$}}; \node (m023) at (17,9.0) [C0] {\textcolor{gray}{$\frac{y_{e_1}y_{e_2}y_{\bar{e}_3}}{y_{{e}_3}}$}}; \node (m023) at (17,3.0) [C0] {\textcolor{gray}{$\frac{y_{e_1}y_{\bar{e}_2}y_{\bar{e}_3}}{y_{\bar{e}_1}}$}}; \node (m123) at (14,8.2) [C0] {\textcolor{blue}{$y_{e_1}y_{e_2}y_{\bar{e}_3}$}}; \node (m423) at (14,4.2) [C0] {\textcolor{blue}{$y_{e_1}y_{\bar{e}_2}y_{\bar{e}_3}$}}; \node (nn4) at (23,3) [Cgray] {}; \node (nn1) at (23,8) [Cgray] {}; \node (nn0) at (23,5.5) [Cgray] {}; \node (m1) at (24,5.5) [C0] {\textcolor{gray}{$y_{e_1}y_{\bar{e}_3}$}}; \node (m1) at (24,3) [C0] {\textcolor{gray}{$y_{e_1}y_{\bar{e}_2}$}}; \node (m1) at (24,8) [C0] {\textcolor{gray}{$y_{e_2}y_{\bar{e}_3}$}}; \node (m1) at (25,4.25) [C0] {\textcolor{blue}{$y_{e_1}y_{e_2}y_{\bar{e}_3}$}}; \node (m1) at (25,6.75) [C0] {\textcolor{blue}{$y_{e_1}y_{e_2}y_{\bar{e}_3}$}}; \foreach \from/\to in {nn4/nn1} \draw[blue][] (\from) -- (\to); \node (nn4) at (23,3) [Cgray] {}; \node 
(nn1) at (23,8) [Cgray] {}; \node (nn0) at (23,5.5) [Cgray] {}; \end{tikzpicture} \caption{A choice of fundamental domain with labels (left) , ${\mathcal A}_G^{q,c}$ with its induced labels (right)} \label{fig:fund} \end{center} \end{figure} \end{Example} \subsection{Potential theory and Gr\"obner weight functionals for ${\mathbf J}_G$} \label{sec:PotJG} Let $C_0(G,{\mathbb R})$ denote the real vector space spanned by $V(G)$, and let $C_1(G,{\mathbb R})$ denote the real vector space spanned by ${\mathbb E}(G)$. The usual boundary operator $\partial \colon C_1(G,{\mathbb R}) \rightarrow C_0(G,{\mathbb R})$ is defined by \[ (\partial(\sigma))(v) = \sum_{e_+=v}{\sigma(e)} - \sum_{e_-=v}{\sigma(e)} \ . \] An element $\sigma \in C_1(G,{\mathbb R})$ gives a map $\sigma \colon C^1(G,{\mathbb Z}) \rightarrow {\mathbb R}$ by sending $f$ to $f(\sigma)$. So it may be thought of as a weight functional for the ideal ${\mathbf J}_G$. Our next goal is to study the weight functionals $\sigma \in C_1(G, {\mathbb R})$ that represent the term order $\prec_q$ in Proposition~\ref{prop:grobrelation}(iv). For our application, a very important class of examples arises from weight functionals representing $<_q$ for ${\mathbf I}_G$ as studied in \S\ref{sec:wt1} (see Lemma~\ref{lem:bq}, Definition~\ref{def:M_intwt}, or \eqref{eq:wt1}). \begin{Proposition} \label{prop:plusworks} Let $\vartheta \in C^0(G, {\mathbb R})$ be any weight functional representing $<_q$ for ${\mathbf I}_G$ (i.e. ${\mathbf M}_G^q=\ini_{\vartheta}{({\mathbf I}_G)}$). Then the $1$-chain $\sigma \in C_1(G,{\mathbb R})$ defined by \[ \sigma(e) = \vartheta(e_+) \quad \text{for all } e \in {\mathbb E}(G) \] represents a term order $\prec_q$ for ${\mathbf J}_G$ with ${\mathbf O}_G^q = \ini_{\sigma}({\mathbf J}_G)$. 
\end{Proposition} \begin{proof} By Proposition~\ref{prop:grobrelation}, the term order $\prec_q$ is characterized by requiring \[ {\prod_{e \in {\mathbb E}(A,A^c)}{y_e} } \ \prec_q {\prod_{e \in {\mathbb E}(A^c, A)}{y_e} } \] for every $A \subsetneq V(G)$, where $q \in A$ with $G[A]$ and $G[A^c]$ connected. Since (see Lemma~\ref{lem:labels}) \[ \frac{\prod_{e \in {\mathbb E}(A^c, A)}{y_e} }{\prod_{e \in {\mathbb E}(A,A^c)}{y_e} }= \prod_{e \in {\mathbb E}(G)} { {y_{e}^{d(\chi_{A^c})(e)}} } \ , \] we have ${\mathbf O}_G^q = \ini_{\sigma}({\mathbf J}_G)$ if and only if \begin{equation}\label{eq:sigpos} \sigma(d(\chi_{A^c}))=\sum_{e \in {\mathbb E}(G)} {\sigma(e) \cdot (d(\chi_{A^c}))(e)} >0 \end{equation} for all bonds $d(\chi_{A^c})$ associated to $A \subsetneq V(G)$ with $q \in A$. Since $\partial$ is the adjoint of $d$, \eqref{eq:sigpos} is equivalent to \begin{equation}\label{eq:sigmapos} \sum_{v \in V(G)}{(\partial(\sigma))(v) \cdot \chi_{A^c}(v) } > 0 \ . \end{equation} Since $\sigma(e) = \vartheta(e_+)$, we have \[ \begin{aligned} (\partial(\sigma))(v) &= \sum_{e_+=v}{\sigma(e)} - \sum_{e_-=v}{\sigma(e)} \\ &= \sum_{e_+=v}{\vartheta(e_+)} - \sum_{e_-=v}{\vartheta(e_+)} \\ &= \deg(v){\vartheta(v)} - \sum_{\{u,v\} \in E(G)}{\vartheta(u)} \\ &= \Delta(\vartheta)(v) \ . \end{aligned} \] Therefore (see \eqref{eq:wt1char}) \[ \sum_{v \in V(G)}{(\partial(\sigma))(v) \cdot \chi_{A^c}(v) } = \sum_{v \in V(G)}{\Delta(\vartheta)(v) \cdot \chi_{A^c}(v) } >0 \] and \eqref{eq:sigmapos} holds. \end{proof} \begin{Definition} \label{def:O_intwt} Let $\vartheta_q \in C^0(G,{\mathbb Z})$ denote the non-negative, integral functional defined in Definition~\ref{def:M_intwt}. We denote by $\lambda_q$ the associated non-negative, integral weight functional in $C_1(G,{\mathbb R})$ defined by \[ \lambda_q(e) = \vartheta_q(e_+) \quad \text{for all } e \in {\mathbb E}(G) \] as in Proposition~\ref{prop:plusworks}.
\end{Definition} \subsection{Gr\"obner cone of ${\mathbf O}_G^q$} \label{sec:Ocone} Next we will describe the Gr\"obner cone associated to ${\mathbf O}_G^q$. As in \S\ref{sec:grocone}, this cone is intimately related to potential theory and Green's functions. The description of this cone is most elegant when $G$ does not have a cut vertex. Cut vertices introduce linear subspaces in the Gr\"obner cone and are slightly tedious (but similar) to deal with. Throughout this section, we will therefore assume that $G$ is $2$-vertex-connected. This condition is equivalent to assuming that the lattice $L(G)$ is indecomposable (\cite[Proposition~4]{Bacher}). \begin{Proposition} Assume $G$ is $2$-vertex-connected. Then $\sigma \in C_1(G, {\mathbb R})$ represents a term order $\prec_q$ for ${\mathbf J}_G$ with ${\mathbf O}_G^q = \ini_{\sigma}({\mathbf J}_G)$ if and only if for all $p \in V(G) \backslash \{q\}$ we have \[ \beta_p :=(\partial(\sigma))(p) > 0 \ . \] \end{Proposition} \begin{proof} We have already seen that $\sigma \in C_1(G, {\mathbb R})$ represents a term order $\prec_q$ for ${\mathbf J}_G$ with ${\mathbf O}_G^q = \ini_{\sigma}({\mathbf J}_G)$ if and only if \eqref{eq:sigmapos} holds for all bonds $d(\chi_{A^c})(e)$ associated to $A \subsetneq V(G)$ with $q \in A$. Since we have assumed there is no cut vertex, the star of every vertex gives a bond, so it is necessary (setting $A^c=\{p\}$ for $p\ne q$ in \eqref{eq:sigmapos}) to have $\beta_p=(\partial(\sigma))(p) > 0$. This condition is also sufficient because then for any bond $d(\chi_{A^c})(e)$ associated to $A \subsetneq V(G)$ with $q \in A$, we get \[ \sum_{v \in V(G)}{(\partial(\sigma))(v) \cdot \chi_{A^c}(v) }=\sum_{v \in V(G)}{\beta_v \cdot \sum_{p \in A^c} \chi_p(v) } = \sum_{p \in A^c}{\beta_p} >0 \] and \eqref{eq:sigmapos} holds. 
\end{proof} Therefore $\sigma \in C_1(G, {\mathbb R})$ is a solution to $\partial(\sigma)=\beta$ for $\beta=\sum_{p \in V(G)}{\beta_p (p)}$ in $\Div^0(G)$ with $\beta_p >0$ for $p \ne q$. \medskip After identifying $C_1(G,{\mathbb R})$ with $C^1(G, {\mathbb R})$ (by sending $e$ to $e^\ast$) we have the orthogonal (``Hodge'') decomposition \[ C_1(G,{\mathbb R}) \cong \Ker(\partial) \oplus \Image(d) \ . \] Let $\sigma = \sigma' + \sigma''$ for $\sigma' \in \Ker(\partial)$ and $\sigma'' = d(\psi) \in \Image(d)$ for $\psi \in C^0(G,{\mathbb R})$. Then $\partial(\sigma)=\beta$ if and only if $\partial d(\psi) = \partial(\sigma'')=\beta$. By Remark~\ref{rmk:selfadjoint}, $\partial d = 2\Delta$, so \[ \Delta{\psi} = \frac{1}{2}\beta \ . \] It follows from the definition of the Green's function $j_q(p, v)$, together with the fact that the Laplacian operator has a one-dimensional zero-eigenspace generated by $\mathbf{1}$, that: \[ \psi=\frac{1}{2}\sum_{p \in V(G)}{\beta_p j_q(p, \cdot )} + k \cdot \mathbf{1} \] for some constant $k \in {\mathbb R}$. Therefore \[ \sigma(e)=\sigma'(e) + \sigma''(e) =\sigma'(e) + (d(\psi))(e) = \sigma'(e) + \frac{1}{2}\sum_{p \in V(G)}{\beta_p (j_q(p, e_+ ) - j_q(p, e_- ))}\ . \] We summarize these observations in the following theorem. \begin{Theorem} Assume $G$ is $2$-vertex-connected. The $1$-chain $\sigma \in C_1(G, {\mathbb R})$ represents $\prec_q$ for ${\mathbf J}_G$ if and only if there exist $\sigma' \in \Ker(\partial)$ and real numbers $\beta'_p >0$ (for $p \in V(G)$) such that \[ \sigma(e)= \sigma'(e) + \sum_{p \in V(G)}{\beta'_p (j_q(p, e_+ ) - j_q(p, e_- ))} \] for all $e \in {\mathbb E}(G)$. \end{Theorem} In other words $\sigma$ (up to an element of the ``extended cycle space'' $\Ker(\partial)$) is in the interior of the cone generated by the vectors $(j_q(p, e_+ ) - j_q(p, e_- ))_{e \in {\mathbb E}(G)}$ for various $p \in V(G)$. It is easy, using \cite[Construction~3.1]{FarbodMatt12}, to show that these vectors are linearly independent.
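To make the decomposition concrete, the following sketch (a toy computation, not from the paper: the graph $K_3$ with $q=0$ and the data $\beta$, $\psi$ are invented for illustration) verifies $\Delta\psi = \tfrac{1}{2}\beta$ and $\partial(d\psi) = \beta$ exactly over $\mathbb{Q}$, with oriented edges encoded as pairs $(e_-, e_+)$:

```python
from fractions import Fraction as F

edges = [(0, 1), (1, 2), (2, 0)]                  # one orientation per edge of K3
EE = edges + [(v, u) for (u, v) in edges]         # bidirected edge set E(G)

beta = {0: F(-2), 1: F(1), 2: F(1)}               # beta in Div^0(G), beta_p > 0 for p != q = 0
psi  = {0: F(0), 1: F(1, 2), 2: F(1, 2)}          # solves Delta(psi) = beta/2, normalized psi_q = 0

# sigma = d(psi), i.e. sigma(e) = psi(e_+) - psi(e_-)
sigma = {(u, v): psi[v] - psi[u] for (u, v) in EE}

def boundary(v):
    # (del sigma)(v) = sum over e_+ = v minus sum over e_- = v
    return sum(sigma[e] for e in EE if e[1] == v) - sum(sigma[e] for e in EE if e[0] == v)

def laplacian(f, v):
    # Delta(f)(v) = deg(v) f(v) - sum of f over neighbors of v
    nbrs = [u for (u, w) in EE if w == v]
    return len(nbrs) * f[v] - sum(f[u] for u in nbrs)

for v in (0, 1, 2):
    assert laplacian(psi, v) == beta[v] / 2       # Delta(psi) = beta/2
    assert boundary(v) == beta[v]                 # del(d psi) = 2 Delta(psi) = beta
```

The factor of $2$ between $\partial d$ and $\Delta$ arises exactly as in Remark~\ref{rmk:selfadjoint}, because each undirected edge is counted in both orientations.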
\section{Regular sequences, minimal free resolutions, and flat families} \label{sec:reg} \subsection{``Nice'' gradings and Nakayama's lemma for polynomial rings} \label{sec:nicegrading} Let $S$ be a polynomial ring over $K$ in $r$ variables $\{z_1, \ldots, z_r\}$. Let ${\mathfrak m}$ denote the ideal consisting of all polynomials with zero constant term. Let $M$ be a finitely generated ${\mathbb Z}$-graded module over $S$. Nakayama's lemma for ${\mathbb Z}$-graded polynomial rings is the statement that ${\mathfrak m} M=M$ implies $M=0$. The proof of this lemma is significantly simpler than the proof of the analogous statement for local rings; taking $i$ to be the least integer such that $M_i \ne 0$, we see that the graded piece $M_i$ cannot appear in ${\mathfrak m} M$, so ${\mathfrak m} M \ne M$ unless $M=0$. The above version of Nakayama's lemma is a statement about ${\mathbb Z}$-graded polynomial rings and modules. It naturally extends to other gradings, provided that the grading is ``nice''. Let ${\mathsf A}$ be an abelian group, and assume the polynomial ring $S$ is endowed with an ${\mathsf A}$-valued degree map (semigroup homomorphism) $\deg_{A} \colon {\mathbb N}^r \rightarrow {\mathsf A}$. Let $S_{{\mathbf a}}$ denote the $K$-vector space consisting of all homogeneous polynomials having degree ${\mathbf a} \in {\mathsf A}$. Then $S$ has the direct sum decomposition \[ S=\bigoplus_{{\mathbf a} \in {\mathsf A}} S_{{\mathbf a}} \] satisfying $S_{\mathbf a} \cdot S_{\mathbf b} \subseteq S_{{\mathbf a}+{\mathbf b}}$.
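The degree bookkeeping behind the graded Nakayama argument above can be mimicked in a few lines (a toy model, not from the paper: a module is represented only by the set of degrees in which it is nonzero):

```python
# Toy model of the graded Nakayama argument: multiplying by the irrelevant
# ideal m (homogeneous degrees >= 1) strictly raises the least degree in
# which a nonzero finitely generated Z-graded module lives.
M_degrees = {2, 3, 5}          # invented: degrees d with M_d != 0
m_degrees = {1, 2, 3}          # degrees of homogeneous elements of m

mM_degrees = {d + e for d in M_degrees for e in m_degrees}
assert min(mM_degrees) > min(M_degrees)    # hence mM != M unless M = 0
```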
\begin{Definition} \label{def:nice} We call an ${\mathsf A}$-grading of $S$ ``nice'' if there exists a group homomorphism $u' \colon {\mathsf A} \rightarrow {\mathbb Z}$ such that the semigroup homomorphism $u:=u' \circ \deg_A \colon {\mathbb N}^r \rightarrow {\mathbb Z}$ has the following properties \begin{itemize} \item[(i)] $u({\mathbf v}) \geq 0$ for all ${\mathbf v} \in {\mathbb N}^r$, \item[(ii)] $u({\mathbf v}) = 0$ if and only if ${\mathbf v} = (0, 0, \ldots, 0)$. \end{itemize} \end{Definition} \medskip For a ``nice'' ${\mathsf A}$-grading of $S$, we automatically have $S_{\mathbf{0}} = K$. This is because $S_{\mathbf{0}}$ is spanned by the set of all monomials ${\mathbf z}^{{\mathbf v}}$ satisfying $\deg_A({\mathbf v})=\mathbf{0}$. Since $u'(\mathbf{0})= 0 $, it follows that $u({\mathbf v}) = 0$ and (ii) implies that ${\mathbf v} = (0, 0, \ldots, 0)$. It follows that, when we have a ``nice'' grading, $\bigoplus_{{\mathbf a} \in {\mathsf A} \backslash \{\mathbf{0}\}} S_{{\mathbf a}}$ coincides with the maximal ideal ${\mathfrak m}$ consisting of all polynomials with zero constant term. It is clear that the usual (coarse) ${\mathbb Z}$-grading is ``nice'' in the above sense. The following example generalizes the (fine) ${\mathbb Z}^r$-grading. \begin{Example}\label{ex:positive} Let $\omega= (\omega_1, \omega_2, \ldots , \omega_r)$ be an integral positive (i.e. $\omega_i \in {\mathbb Z}_{>0}$) weight vector. Let ${\mathbf e}_i$ denote the standard vector having $1$ in position $i$ and $0$ elsewhere. Consider the grading $\deg_\omega \colon {\mathbb N}^r \rightarrow \bigoplus_{i=1}^{r} {\mathbb Z} \omega_i {\mathbf e}_i$ defined by sending ${\mathbf e}_i$ to $\omega_i {\mathbf e}_i$. This is a ``nice'' grading. Indeed, let $u' \colon \bigoplus_{i=1}^{r} {\mathbb Z} \omega_i {\mathbf e}_i \rightarrow {\mathbb Z}$ be the group homomorphism defined by sending $\omega_i {\mathbf e}_i$ to $\omega_i$. 
Then the induced map $u \colon {\mathbb N}^r \rightarrow {\mathbb Z}$ is defined by sending ${\mathbf e}_i$ to $\omega_i$, and (i) and (ii) immediately follow from the positivity of the $\omega_i$'s. These are the ``positive multigradings'' in the sense of \cite[Definition~8.7]{MillerSturmfels}. \end{Example} \begin{Example} Consider the polynomial ring ${\mathbf R}=K[{\mathbf x}]$ in variables $\{x_v : v \in V(G)\}$. Each monomial is of the form ${\mathbf x}^D$ for some effective divisor $D \in \Div_+(G)$. Consider the $\Pic(G)$-grading defined by the semigroup homomorphism $\Div_+(G) \rightarrow \Pic(G)$ sending $D$ to its equivalence class $[D]$. This is a ``nice'' grading via the map $u' \colon \Pic(G) \rightarrow {\mathbb Z}$ sending $[D]$ to $\deg([D]) = \sum_{v} D(v)$. This is a well-defined homomorphism because all principal divisors have degree $0$. The induced map $u \colon \Div_+(G) \rightarrow {\mathbb Z}$ sends the effective divisor $D$ to $\deg(D)=\sum_{v} D(v)$. It is immediate that (i) and (ii) hold. See \cite[Section~2.2]{FarbodFatemeh} for more details. Note that if $G$ is not a tree then $\Pic(G)$ contains torsion elements. This example shows that our definition is robust enough to handle gradings with groups which are not necessarily torsion-free. \end{Example} Let $S$ be graded by ${\mathsf A}$. An $S$-module $M$ is called {\em ${\mathsf A}$-graded} if it is endowed with a decomposition $M =\bigoplus_{{\mathbf a} \in {\mathsf A}} M_{{\mathbf a}}$ as a direct sum of graded components such that $S_{{\mathbf a}} M_{{\mathbf b}} \subseteq M_{{\mathbf a}+{\mathbf b}}$ for all ${\mathbf a},{\mathbf b} \in {\mathsf A}$. \begin{Lemma}[Nakayama's lemma for ``nicely'' graded polynomial rings] \label{lem:nakayama} Assume $S$ is a polynomial ring endowed with a ``nice'' ${\mathsf A}$-grading. Let $M$ be a finitely generated ${\mathsf A}$-graded $S$-module. Then ${\mathfrak m} M=M$ implies $M=0$. \end{Lemma} \begin{proof} Suppose $M\ne 0$. 
Let $u' \colon {\mathsf A} \rightarrow {\mathbb Z}$ be as in Definition~\ref{def:nice}. Write $M=\bigoplus_{{\mathbf a} \in {\mathsf A}} M_{{\mathbf a}}$. For any graded piece $M_{{\mathbf a}}$, let $\ell(M_{{\mathbf a}})$ denote the integer $u'({\mathbf a})$. Let $\ell(M) = \min{\ell(M_{{\mathbf a}})}$, the minimum being over all ${\mathbf a} \in {\mathsf A}$ with $M_{{\mathbf a}} \ne 0$. Since $M$ is assumed to be finitely generated, $\ell(M) > - \infty $. Since $u'({\mathbf a}) \geq 1$ whenever ${\mathbf a} \ne \mathbf{0}$ and $S_{{\mathbf a}} \ne 0$ (recall that ${\mathfrak m} = \bigoplus_{{\mathbf a} \ne \mathbf{0}}{S_{{\mathbf a}}}$), we have $\ell({\mathfrak m} M) > \ell(M)$ and therefore ${\mathfrak m} M \ne M$. \end{proof} \subsection{Regular sequences and homogeneous systems of parameters} Recall that for a commutative ring $S$ and an $S$-module $M$, an element $s \in S$ is called a {\em nonzerodivisor} on $M$ if $sm = 0$ implies $m = 0$ for $m \in M$. An {\em $M$-regular sequence} is a sequence $s_1, \ldots, s_d \in S$ such that \begin{itemize} \item[(i)] $M/(s_1, \ldots , s_{d})M \ne 0$, \item[(ii)] $s_i$ is a nonzerodivisor on $M/(s_1, \ldots , s_{i-1})M$ for $i = 1, \ldots, d$. \end{itemize} \begin{Remark} \label{rmk:weakvsstrong} In our application $S$ will always be a ``nicely'' graded polynomial ring, $M \ne 0$ will be a finitely generated graded $S$-module, and the $s_i$'s will be polynomials with zero constant term. In this situation (i) is automatically satisfied. This follows from Lemma~\ref{lem:nakayama}: if $s_i \in {\mathfrak m}$ and $M/(s_1, \ldots , s_{d})M = 0$, then we must have $M = {\mathfrak m} M$. But Nakayama's lemma would then imply that $M=0$. \end{Remark} \medskip \begin{Lemma}\label{lem:unit} Assume that $s_1,\ldots,s_d \in S$ is an $M$-regular sequence and $\varepsilon_1,\ldots,\varepsilon_d$ are units in $S$. Then $\varepsilon_1 s_1,\ldots,\varepsilon_d s_d$ is also an $M$-regular sequence. \end{Lemma} \begin{proof} Clearly $\varepsilon_1 s_1$ is a nonzerodivisor on $M$.
We need to show that $\varepsilon_i s_i$ is a nonzerodivisor on $M/(\varepsilon_1 s_1,\ldots, \varepsilon_{i-1}s_{i-1})M$ for all $i > 1$. First note that $(\varepsilon_1 s_1,\ldots,\varepsilon_{i-1} s_{i-1})M=(s_1,\ldots,s_{i-1})M$. Assume that $(\varepsilon_i s_i)m\in (s_1,\ldots,s_{i-1})M$ for some $m \in M$. Then $s_i m = \varepsilon_i^{-1}(\varepsilon_i s_i)m \in (s_1,\ldots,s_{i-1})M$, and since $s_i$ is a nonzerodivisor on $M/(s_1, \ldots, s_{i-1})M$ we conclude that $m \in (s_1,\ldots,s_{i-1})M$, as desired. \end{proof} \medskip \begin{Lemma}\label{lem:localization} Let $S$ be a ring, $M$ be an $S$-module, and $N$ be a flat $S$-module. If $s_1,\ldots,s_d \in S$ is an $M$-regular sequence then $s_1,\ldots,s_d$ is also an $(M \otimes_S N)$-regular sequence, provided that $(s_1,\ldots,s_d)(M \otimes_S N) \ne (M \otimes_S N)$. \end{Lemma} For a proof see, e.g., \cite[Proposition~1.1.2]{Bruns}. \medskip It is not necessarily true that every permutation of the $s_i$'s is again a regular sequence. For example, $x, y(1-x), z(1-x)$ is a regular sequence for $K[x,y,z]$ (as a module over itself), but $y(1-x), z(1-x), x$ is not: the class of $y$ is nonzero in $K[x,y,z]/(y(1-x))$, and yet $z(1-x) \cdot y = z \cdot y(1-x) \equiv 0$ there. However, in situations where Nakayama's lemma applies, permuting a regular sequence is harmless. The following theorem, for local rings, is proved in \cite[Proposition~1.1.6]{Bruns}. \begin{Theorem} \label{lem:herzog} Let $S$ be a polynomial ring endowed with a ``nice'' ${\mathsf A}$-grading. Let $M$ be a finitely generated ${\mathsf A}$-graded $S$-module. Assume $s_1,\ldots,s_d$ is an $M$-regular sequence consisting of elements in ${\mathfrak m}$. Then any permutation of $s_1,\ldots,s_d$ is also an $M$-regular sequence. \end{Theorem} \begin{proof} It suffices to show that if $s_1, s_2$ is an $M$-regular sequence then $s_2,s_1$ is also an $M$-regular sequence (see \cite[proof of Proposition~1.1.6]{Bruns}).
\begin{itemize} \item $s_2$ is a nonzerodivisor on $M$: let $N$ denote the kernel of the map $M \rightarrow M$ sending $m$ to $s_2 m$. For each $z \in N$ we have $s_2z = 0$ and therefore $s_2z+s_1M = s_1M$. Since $s_2$ is a nonzerodivisor on $M/s_1M$ by assumption, we must have $z \in s_1M$ or $z = s_1 z'$ for some $z' \in M$. But then $s_1(s_2 z')=s_2(s_1 z') = 0$ and since $s_1$ is a nonzerodivisor on $M$ we must have $s_2z' = 0$ and $z' \in N$. So we have shown that $N \subseteq s_1 N$ and therefore $N=s_1N$. Since $s_1 \in {\mathfrak m}$ by assumption, we obtain $N= s_1N \subseteq {\mathfrak m} N \subseteq N$ or ${\mathfrak m} N = N$, and by Lemma~\ref{lem:nakayama} we get $N=0$, which is what we want. \item $s_1$ is a nonzerodivisor on $M /s_2M$: if $s_1 (z+s_2 M) = s_2 M$ for some $z \in M$, then $s_1 z \in s_2M$ or $s_1 z = s_2 z'$ for some $z' \in M$. But then $s_2 z' \in s_1M$, or equivalently $s_2(z' +s_1M)=s_1M$. Since $s_2$ is a nonzerodivisor on $M/s_1M$, this means that $z' \in s_1M$, so $z' = s_1 m$ for some $m \in M$. But $s_1z = s_2 s_1 m$ implies $z = s_2m$ because $s_1$ is a nonzerodivisor on $M$. Therefore $z \in s_2M$, which is what we want. \end{itemize} Remark~\ref{rmk:weakvsstrong} completes the proof. \end{proof} \medskip Consider polynomial rings with ${\mathbb Z}$-gradings. In this ${\mathbb Z}$-graded setting, an {\em h.s.o.p.} (homogeneous system of parameters) for $M$ is defined as a set $\{\theta_1, \ldots, \theta_{\dim(M)} \} \subset S$ of homogeneous elements of positive degree such that $\dim (M / (\theta_1, \ldots, \theta_{\dim(M)})M) = 0$. Here $\dim(\cdot)$ denotes the Krull dimension. Equivalently, $\{\theta_1, \ldots, \theta_{d}\} \subset S$ is an h.s.o.p. if and only if $d=\dim(M)$ and $M$ is a finitely generated $K[\theta_1, \ldots, \theta_{d}]$-module. Clearly the property of being an h.s.o.p. does not change under permutation. \medskip By definition, $\depth(M)$ is the length of the longest homogeneous $M$-regular sequence.
In general $\depth(M) \leq \dim(M)$. If $\depth(M)=\dim(M)$, then $M$ is called {\em Cohen-Macaulay}. \begin{Theorem}\label{thm:CMhsop} Assume $M$ has an h.s.o.p. Then $M$ is Cohen-Macaulay if and only if every h.s.o.p. is an $M$-regular sequence. \end{Theorem} For a proof see, e.g., \cite[p.35]{Stanley96}. \medskip An {\em l.s.o.p.} (linear system of parameters) for $M$ is an h.s.o.p., all of whose elements have degree one. \subsection{Linear systems of parameters and squarefree monomial ideals} Consider the polynomial ring $K[{\mathbf z}]$ in variables ${\mathbf z}= \{z_1, \ldots , z_r \}$. Monomial ideals are the ${\mathbb N}^r$-graded ideals of $K[{\mathbf z}]$. An ideal is squarefree if it is generated by squarefree monomials. Given an abstract simplicial complex $\Sigma$, the squarefree monomial ideal in $K[{\mathbf z}]$ defined as \[ I_{\Sigma}= \langle {\mathbf z}^{\tau} : \tau \not \in \Sigma \rangle \] is called the {\em Stanley-Reisner ideal} of $\Sigma$. The {\em Stanley-Reisner ring} (or {\em face ring}) $K[\Sigma]$ is, by definition, $K[{\mathbf z}] / I_{\Sigma}$. In fact, this gives a bijective correspondence between squarefree monomial ideals inside $K[{\mathbf z}]$ and abstract simplicial complexes on the vertices $\{z_1, \ldots, z_r\}$ (see, e.g., \cite[Chapter~1]{MillerSturmfels}). The simplicial complex $\Sigma$ is called Cohen-Macaulay if $K[\Sigma]$ is Cohen-Macaulay. A (pure) ``shellable'' simplicial complex is Cohen-Macaulay (see, e.g., \cite[Chapter~III]{Stanley96} or \cite[Chapter~13]{MillerSturmfels}). In general, $\dim(K[\Sigma])$ is equal to the maximal cardinality of the faces of $\Sigma$ (see, e.g., \cite[p.53]{Stanley96}). \medskip Given a degree one element $\theta=\sum_{i}\alpha_i z_i$ and a face $\tau \in \Sigma$, by {\it restriction of $\theta$ to $\tau$} we mean \[\theta|_\tau=\sum_{z_i\in \tau} \alpha_i z_i \ .\] For squarefree monomial ideals, there is a nice characterization of l.s.o.p.
which was first given in \cite{KindKleinschmidt}. \begin{Lemma} \label{lem:Sta} Let $K[\Sigma]$ be a Stanley-Reisner ring of Krull dimension $d$, and let $\{\theta_1,\ldots,\theta_d\} \subset K[\Sigma]$ be a set of elements of degree one. Then the following are equivalent: \begin{itemize} \item[(i)] $\{\theta_1,\ldots,\theta_d \}$ is an l.s.o.p. for $K[\Sigma]$, \item[(ii)] for every facet $\tau$ of $\Sigma$ the restrictions $\theta_1|_{\tau},\ldots,\theta_d|_{\tau}$ span a vector space of dimension equal to $|\tau|$ (the cardinality of $\tau$). \end{itemize} \end{Lemma} For a proof see, e.g., \cite[pp.81-82]{Stanley96}. \subsection{Regular sequences and free resolutions} Let $S$ be a polynomial ring with its usual ${\mathbb Z}$-grading, and $M$ be a graded $S$-module. Assume that \[ {\mathcal F} \colon 0 \rightarrow \cdots \rightarrow F_{i} \xrightarrow{\varphi_{i}} F_{i-1} \rightarrow \cdots \rightarrow F_0 \xrightarrow{\varphi_{0}} M \rightarrow 0 \] is a graded free resolution. We may form the free complex of $S/(s)$-modules \[ {\mathcal F}\otimes_{S} S/(s) \colon 0 \rightarrow \cdots \rightarrow F_{i}\otimes_{S} S/(s) \xrightarrow{\varphi_{i} \otimes {\rm id}} F_{i-1}\otimes_{S} S/(s) \rightarrow \cdots \rightarrow F_0\otimes_{S} S/(s)\ . \] \medskip The following theorem is a slight generalization of \cite[Lemma~3.15]{EisenbudSyz} (see also \cite[Proposition~1.1.5]{Bruns}). \begin{Theorem} \label{thm:Eis} Assume $s$ is a nonzerodivisor on $S$ and on $M$. Then \begin{itemize} \item[(i)] ${\mathcal F}\otimes_{S} S/(s)$ is a free resolution of $M /(s)M$. \item[(ii)] If $s \in {\mathfrak m}$ (i.e. if $s$ has zero constant term) and if ${\mathcal F}$ is a {\em minimal} free resolution of $M$, then ${\mathcal F}\otimes_{S} S/(s)$ is a minimal free resolution of $M / (s)M$. 
\item[(iii)] If $s$ is homogeneous of positive degree, then the ${\mathbb Z}$-graded Betti numbers of $M$ over $S$ coincide with the ${\mathbb Z}$-graded Betti numbers of $M/(s)M$ over the graded ring $S/(s)$. \item[(iv)] If $s\in {\mathfrak m}$ and if ${\mathcal F}$ is a {\em minimal cellular free resolution} of $M$, then ${\mathcal F}\otimes_{S} S/(s)$ is a {\em minimal cellular free resolution} of $M / (s)M$. \end{itemize} \end{Theorem} \begin{proof} (i) We compute the homology of the complex ${\mathcal F}\otimes_{S} S/(s)$. By definition, this homology is computed by the $\Tor$ functor: \[ \Tor^{S}_{i}(M, S/(s)) = \Ker(\varphi_{i} \otimes {\rm id}) / \Image(\varphi_{i+1} \otimes {\rm id}) \ . \] To compute $\Tor^{S}_{i}(M, S/(s))$ consider the exact sequence (for the nonzerodivisor $s$ on $S$) \[ 0 \rightarrow S \xrightarrow{s} S \xrightarrow{\epsilon} S/(s) \rightarrow 0 \] which can be seen as a free resolution of $S/(s)$. Tensoring this resolution with $M$ on the left we obtain the complex \[ 0 \rightarrow M \xrightarrow{{\rm id} \otimes s} M \xrightarrow{{\rm id} \otimes \epsilon} M \otimes_S S/(s) \rightarrow 0 \ . \] Again by definition, $\Tor^{S}_{0}(M, S/(s)) = M \otimes_S S/(s) = M/(s)M$, $\Tor^{S}_{1}(M, S/(s)) = \Ker({\rm id} \otimes s) =0$ (because $s$ is a nonzerodivisor on $M$), and $\Tor^{S}_{i}(M, S/(s)) = 0$ for $i >1$. \medskip (ii) ${\mathcal F}$ is a minimal free resolution of $M$ if and only if there are no $S$-units in the matrices corresponding to $\varphi_i$ ($i \geq 1$). The matrix corresponding to $\varphi_i \otimes \id$ in ${\mathcal F}\otimes_{S} S/(s)$ is the same as the matrix corresponding to $\varphi_i$, except that its entries are considered as elements in $S/(s)$. If an entry $u$ is a unit in $S/(s)$ then there exists $u' \in S$ such that $(u+(s))(u'+(s))=1+(s)$, or equivalently $uu'-1 \in (s)$. But this is not possible: by minimality the entry $u$ lies in ${\mathfrak m}$, so $uu' \in {\mathfrak m}$, while $uu'-1 \in (s) \subseteq {\mathfrak m}$; together these would force $1 \in {\mathfrak m}$.
\medskip (iii) It follows from part (ii) that a minimal free resolution of $M$ turns into a minimal free resolution of $M/(s)M$. When $s$ is homogeneous, the degrees of the graded parts remain the same. \medskip (iv) Assume the minimal free resolution ${\mathcal F}$ is supported on a labeled cell complex ${\mathcal D}$. Then ${\mathcal F}\otimes_{S} S/(s)$ is supported on the same cell complex whose labels are now considered as elements of $S/(s)$. \end{proof} We remark that one can use Theorem~\ref{thm:Eis} repeatedly and obtain a similar result for regular sequences. \subsection{Regular sequences and flat families} The purpose of this section is to give a generalization (and a complete proof) of \cite[Proposition 15.15]{Eisenbud} in Proposition~\ref{prop:Eis}. \medskip Let $S$ be a polynomial ring in $r$ variables $\{z_1 , \ldots , z_r\}$, and let $S[t]$ be the polynomial ring with one extra indeterminate $t$ over $S$. Let $\omega \in \Hom({\mathbb Z}^r,{\mathbb Z})$ be an integral weight functional. For any $g=\sum_{i}u_i m_i\in S$, where the $u_i$'s are nonzero constants in $K$ and the $m_i$'s are distinct monomials in $S$, we define $\deg_\omega(g):= \max_i \omega(m_i)$. The ``lift'' of $g$ to $S[t]$ with respect to $\omega$ is \[ \tilde{g}=t^b g(t^{-\omega(z_1)}z_1,\ldots,t^{-\omega(z_r)}z_r) \] in which $b=\deg_\omega(g)$. For any ideal $I \subset S$ we define the ideal \[ \tilde{I} = \langle \tilde{g} : g\in I \rangle \subset S[t]\ . \] It follows from the definition that \begin{equation}\label{eq:ini} (S[t]/\tilde{I})/t(S[t]/\tilde{I})\cong S/\ini_\omega(I) \ . \end{equation} In other words, $\tilde{g}$ modulo $t$ is precisely $\ini_\omega(g)$. \medskip For a proof of the following result, see \cite[Theorem~15.17]{Eisenbud}. \begin{Theorem} \label{thm:flat} For any ideal $I\subset S$, \begin{itemize} \item[(i)] The $K[t]$-algebra $S[t]/\tilde{I}$ is a free (and thus flat) $K[t]$-module.
\item[(ii)] The map \[ \varphi \colon ({S[t]}/\tilde{I})\otimes_{K[t]} {K[t,t^{-1}]} \rightarrow (S/I)[t,t^{-1}] \] induced by \[z_i\mapsto t^{\omega(z_i)} z_i\] gives an isomorphism of $K[t]$-algebras. \end{itemize} \end{Theorem} Note that for the map $\varphi$ we have $\varphi(\tilde{g})=t^b g$ (where $b=\deg_\omega(g)$) and $\varphi(\tilde{I})=I$. \medskip \begin{Proposition}\label{prop:Eis} Let $I$ be a graded ideal and $\omega$ be a positive integral weight functional. Assume that $f_1,\ldots,f_d\in S$ are such that \[ \ini_\omega(f_1),\ldots,\ini_\omega(f_d) \] is an $(S/\ini_\omega(I))$-regular sequence. Then \[f_1,\ldots,f_d\] is an $(S/I)$-regular sequence. \end{Proposition} \begin{proof} Let $M=S[t]/\tilde{I}$. By Theorem~\ref{thm:flat}(i), $M$ is a free $K[t]$-module. Therefore $t$ is a nonzerodivisor on $M$. By \eqref{eq:ini}, $M/tM \cong S/\ini_\omega(I)$ and $\tilde{g}$ modulo $t$ equals $\ini_\omega(g)$ for all $g \in S$. Therefore, the hypothesis is precisely the statement that $t, \tilde{f}_1,\ldots,\tilde{f}_d$ is an $M$-regular sequence. But $t, \tilde{f}_1,\ldots,\tilde{f}_d$ are all homogeneous elements with respect to the ``nice'' grading of $M$ defined by $\deg(z_i) = \omega(z_i)$ and $\deg(t)=1$ (see Example~\ref{ex:positive}). Therefore, by Theorem~\ref{lem:herzog}, the permutation $\tilde{f}_1,\ldots,\tilde{f}_d, t$ is also an $M$-regular sequence. The module ${K[t,t^{-1}]}$ is the localization of $K[t]$ with respect to $t$, therefore it is a flat $K[t]$-module. By Lemma~\ref{lem:localization} (and Remark~\ref{rmk:weakvsstrong}), $\tilde{f}_1,\ldots,\tilde{f}_d$ is also a regular sequence on $M':= M \otimes_{K[t]} {K[t,t^{-1}]}$. By Theorem~\ref{thm:flat} (ii), the map \[ \varphi \colon M' \rightarrow (S/I)[t,t^{-1}]\ \textit{ with }\ z_i\mapsto t^{\omega(z_i)} z_i \] is an isomorphism. 
Therefore $\varphi(\tilde{f}_1)=t^{b_1}f_1,\ldots,\varphi(\tilde{f}_d)=t^{b_d}f_d$ is a regular sequence on $(S/I)[t,t^{-1}]$ (here $b_i=\deg_\omega(f_i)$). Since the $t^{b_i}$'s are units in $(S/I)[t,t^{-1}]$, by Lemma~\ref{lem:unit}, $f_1,\ldots,f_d$ is also a regular sequence on $(S/I)[t,t^{-1}]$. Therefore $f_1,\ldots,f_d$ is a regular sequence on $S/I$ (because $(S/I)[t,t^{-1}]$ is faithfully flat over $S/I$). \end{proof} \begin{Remark} \label{rmk:posnotneeded} When $f_1, f_2, \ldots, f_d$ are ${\mathbb Z}$-homogeneous, the positivity of $\omega$ in Proposition~\ref{prop:Eis} is unnecessary. This is because $\omega$ and $\omega+ c \mathbf{1}$ (for any $c>0$) behave the same on these ${\mathbb Z}$-homogeneous forms, and $\omega+ c \mathbf{1}$ is positive for $c$ sufficiently large. \end{Remark} \section{Regular sequences for ${\mathbf O}_G^q$ and ${\mathbf J}_G$} \label{sec:main} \subsection{Linear system of parameters for ${\mathbf O}_G^q$} The ideal ${\mathbf O}_G^q\subset {\mathbf S}$ is a squarefree monomial ideal. Let $\Sigma_G^q$ denote its associated simplicial complex on $2m$ vertices $\{y_e: e \in {\mathbb E}(G)\}$. \medskip For each spanning tree $T$ of $G$, let ${\mathcal O}_T$ denote the orientation of $T$ with a unique source at $q$ (i.e. the orientation obtained by orienting all paths away from $q$). For an example, see Figure~\ref{fig:spanning trees}. \begin{Proposition}\label{prop:primdecom} \begin{itemize} \item[] \item[(i)] The number of facets of $\Sigma_G^q$ is the same as the number of spanning trees of $G$. For each spanning tree $T$, the corresponding facet $\tau_T$ is: \[ \tau_T = \{y_e: e \in {\mathbb E}(G) \backslash {\mathcal O}_T\} \ . \] \item[(ii)] For each spanning tree $T$ of $G$, let $P_T=\langle y_e: e \in {\mathcal O}_T \rangle$. The minimal prime decomposition of ${\mathbf O}_G^q$ is \[ {\mathbf O}_G^q = \bigcap_{T} P_T \ , \] the intersection being over all spanning trees of $G$. \item[(iii)] For each facet $\tau$ of $\Sigma_G^q$ we have $|\tau|=2m-n+1$. Therefore \[\dim(K[\Sigma_G^q])=2m-n+1 \ .\] \item[(iv)] $\Sigma_G^q$ is Cohen-Macaulay.
\end{itemize} \end{Proposition} \begin{proof} (i) By Proposition~\ref{prop:grobrelation}, we know that ${\mathbf O}_G^q$ is generated by monomials of the form $\prod_{e \in {\mathbb E}(A^c,A)}{y_e}$, where $q \in A \subsetneq V(G)$ and ${\mathbb E}(A^c,A) \subset {\mathbb E}(G)$ denotes the set of oriented edges from $A$ to its complement $A^c$. First we show that for each spanning tree $T$, the monomial $m_T:=\prod_{e \in {\mathbb E}(G) \backslash {\mathcal O}_T}{y_e}$ does not belong to ${\mathbf O}_G^q$. Clearly $m_T \in {\mathbf O}_G^q$ if and only if $m_T$ is divisible by one of the given generators $\prod_{e \in {\mathbb E}(A^c,A)}{y_e}$. But \[ \prod_{e \in {\mathbb E}(A^c,A)}{y_e} \,\,\, \mid \, \prod_{e \in {\mathbb E}(G) \backslash {\mathcal O}_T}{y_e} \quad\quad \iff \quad\quad {\mathbb E}(A^c,A) \subseteq ({\mathbb E}(G) \backslash {\mathcal O}_T) \ . \] However, it follows from the definition of ${\mathcal O}_T$ that it must contain some element of ${\mathbb E}(A^c,A)$ for any $A$. This shows that $\tau_T =\{y_e: e \in {\mathbb E}(G) \backslash {\mathcal O}_T\}$ is a face in the simplicial complex $\Sigma_G^q$. Next we show that $\tau_T$ must be a facet; for $f \in {\mathcal O}_T$ removing $f$ from the tree gives a partition of $V(T)=V(G)$ into two connected subsets $B$ and $B^c$ with $f_- \in B$ and $f_+ \in B^c$. Then the monomial $m_T \cdot y_f$ is divisible by $\prod_{e \in {\mathbb E}(B^c,B)}{y_e}$. It remains to show that for any monomial $m = \prod_{e \in F}{y_e}$ that does not belong to ${\mathbf O}_G^q$ we have $F\subseteq ({\mathbb E}(G) \backslash {\mathcal O}_T)$ for some spanning tree $T$. To show this, we repeatedly use the fact that $m$ is not divisible by generators of the form $\prod_{e \in {\mathbb E}(A^c,A)}{y_e}$ for various $A$, and construct a spanning tree $T$. This procedure is explained in Algorithm~\ref{alg:FindTree}. 
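This greedy construction can also be sketched directly in code (a toy transcription with made-up data, not part of the paper: oriented edges are pairs $(e_-, e_+)$, and edges leaving $A$ play the role of ${\mathbb E}(A^c,A)$):

```python
# Sketch of the greedy spanning-tree search: grow A from {q}, always
# leaving A along an oriented edge that avoids F.
def find_tree(vertices, EE, F, q):
    A, T = {q}, []
    while A != set(vertices):
        # such an edge exists because the monomial is not divisible by the
        # bond generator attached to the current cut (A, A^c)
        e = next((u, v) for (u, v) in EE if u in A and v not in A and (u, v) not in F)
        T.append(e)
        A.add(e[1])                              # A = A union {e_+}
    return T

edges = [(0, 1), (1, 2), (2, 0)]                 # a triangle, q = 0
EE = edges + [(v, u) for (u, v) in edges]        # both orientations
F = {(1, 0), (0, 2), (2, 1)}                     # support of a monomial not in O_G^q
T = find_tree(range(3), EE, F, q=0)
assert set(F).isdisjoint(T)                      # F is contained in E(G) \ O_T
assert len(T) == 2                               # a spanning tree on 3 vertices
```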
Note that if $\prod_{e \in F}{y_e}$ is not divisible by $\prod_{e \in {\mathbb E}(A^c,A)}{y_e}$ then there exists an $e \in {\mathbb E}(A^c,A)$ such that $e \not \in F$. The orientation ${\mathcal O}_T$ is also induced by Algorithm~\ref{alg:FindTree}. \begin{algorithm} \caption{Finding a facet containing a given monomial not belonging to ${\mathbf O}_G^q$} \KwIn{\\ A monomial $m = \prod_{e \in F}{y_e}$ not belonging to ${\mathbf O}_G^q$.} \KwOut{\\ A spanning tree $T$ such that $F\subseteq ({\mathbb E}(G) \backslash {\mathcal O}_T)$.} \BlankLine {\bf Initialization:} \\$A=\{q\}$, \\$T=\emptyset$. \BlankLine \While{$A \ne V(G)$}{ Find an oriented edge $e$ such that $e \in {\mathbb E}(A^c,A)$ and $e \not \in F$,\\ $T=T \cup \{e\}$, \\ $A=A \cup \{e_+\}$, } Output $T$. \label{alg:FindTree} \end{algorithm} \medskip (ii) follows from (i) and \cite[Theorem~1.7]{MillerSturmfels}. \medskip (iii) follows from (i) and the fact that $\dim(K[\Sigma_G^q])$ is equal to the maximal cardinality of the faces of $\Sigma_G^q$. \medskip (iv) The Krull dimension of $K[\Sigma_G^q]={\mathbf S} / {\mathbf O}_G^q$ is $2m-n+1$ by part (iii). By the Auslander--Buchsbaum formula (for graded rings and modules, see \cite[page~437]{Singular}), \[ \depth({\mathbf S} / {\mathbf O}_G^q) = \depth({\mathbf S}) - \pd_{\mathbf S}({\mathbf S} / {\mathbf O}_G^q) = 2m-n+1 \] because $\pd_{\mathbf S}({\mathbf S} / {\mathbf O}_G^q)=n-1$ by Theorem~\ref{thm:bounded}. Therefore $\dim({\mathbf S} / {\mathbf O}_G^q)=\depth({\mathbf S} / {\mathbf O}_G^q)$ and $K[\Sigma_G^q]$ is Cohen-Macaulay. \end{proof} \medskip \begin{Remark} \begin{itemize} \item[] \item[(i)] Proposition~\ref{prop:primdecom}(iv) can be strengthened; the simplicial complex $\Sigma_G^q$ is in fact shellable. Since ${\mathbf J}_G$ is the lattice ideal associated to the free abelian group $\Lambda = \Image{\partial^\ast}$, it is a toric ideal (in the sense of \cite[Chapter~4]{SturmfelsGrobnerConvex}).
$\Sigma_G^q$ is precisely the {\em initial complex} of ${\mathbf J}_G$ with respect to $\prec_q$ (in the sense of \cite[Chapter~8]{SturmfelsGrobnerConvex}). Let $\sigma \in C_1(G, {\mathbb R})$ be any weight functional representing the term order $\prec_q$ for ${\mathbf J}_G$ (e.g. $\vartheta_q$ of Definition~\ref{def:O_intwt} -- see also \S\ref{sec:Ocone}). By \cite[Theorem~8.3]{SturmfelsGrobnerConvex} $\sigma$ provides us with a regular triangulation of $\Sigma_G^q$. This is accomplished by ``lifting'' each point $y_e$ into the next dimension by the height $\sigma(e)$, and then projecting back the lower face of the resulting positive cone. This is a unimodular triangulation because the ideal ${\mathbf O}_G^q$ is squarefree (\cite[Corollary~8.9]{SturmfelsGrobnerConvex}). The associated Gr\"obner fan studied in \S\ref{sec:PotJG} coincides with the associated secondary fan of this triangulation. It is well-known that given any regular triangulation, one can obtain shelling orders using the {\em line shelling} technique (see, e.g., \cite[Theorem~9.5.10]{TriangulationBook}). \item[(ii)] A minimal free resolution of the Alexander dual of ${\mathbf O}_G^q$ can be obtained by the construction given in \cite{BatziesWelker} (see also \cite{AntonFatemeh}). \end{itemize} \end{Remark} \begin{Example}\label{example:primdecom} Consider the graph in Example~\ref{exam:1}. For the spanning tree in Figure~\ref{fig:onefacets} we have \[\tau_T=\{y_e: e \in {\mathbb E}(G)\} \backslash \{y_{e_1},y_{e_3},y_{e_4} \}=\{y_{e_2},y_{e_5},y_{\bar{e}_1},y_{\bar{e}_2},y_{\bar{e}_3},y_{\bar{e}_4},y_{\bar{e}_5}\}\ ,\] (which is the same as $\tau_8$ in Example~\ref{exam:Delta}). Moreover, $P_T = \langle y_{e_1},y_{e_3},y_{e_4} \rangle$. \end{Example} \begin{figure}[h!] 
\begin{center} \begin{tikzpicture} [scale = .22, very thick = 15mm] \node (n4) at (4,1) [Cgray] {}; \node (n2) at (4,11) [Cgray] {}; \node (n1) at (1,6) [Cgray] {}; \node (n3) at (7,6) [Cgray] {}; \foreach \from/\to in {n4/n2,n1/n4} \draw[lightgray][dashed] (\from) -- (\to); \foreach \from/\to in {n2/n1,n4/n3,n3/n2} \draw[] (\from) -- (\to); \node (n4) at (24,1) [Cgray] {}; \node (n2) at (24,11) [Cgray] {}; \node (n1) at (21,6) [Cgray] {}; \node (n3) at (27,6) [Cgray] {}; \foreach \from/\to in {n4/n2,n1/n4} \draw[lightgray][dashed] (\from) -- (\to); \foreach \from/\to in {n2/n1,n4/n3,n3/n2} \draw[blue][->] (\from) -- (\to); \end{tikzpicture} \caption{Spanning tree $T$ and its orientation ${\mathcal O}_T$} \label{fig:onefacets} \end{center} \end{figure} \medskip We are now ready to give a particularly nice l.s.o.p. for ${\mathbf O}_G^q$. Note that since $K[\Sigma_G^q]={\mathbf S} / {\mathbf O}_G^q$ is Cohen-Macaulay, every h.s.o.p. (in particular every l.s.o.p.) is regular (Theorem~\ref{thm:CMhsop}). \medskip First we introduce some notation. For each $v \in V(G)$ we choose a distinguished incoming edge to $v$ and denote it by $e_v$. In other words, we fix a distinguished subset $\{e_v: v \in V(G)\ \} \subset {\mathbb E}(G)$ of cardinality $n$ in such a way that $(e_v)_{+}=v$. \medskip For each $v$ define the set of linear forms \[ {\mathcal L}_v = \{y_e - y_{e_v}:\ e \in {\mathbb E}(G) \ , e \ne e_v \ , e_{+}=(e_v)_{+}=v \} \] and let \begin{equation} \label{eq:L} {\mathcal L}= \bigcup_{v\in V(G)}{{\mathcal L}_v} \ . \end{equation} We also let \[ {\mathcal L}^{(q)}= {\mathcal L} \cup \{y_{e_q}\} \ . \] Clearly, $|{\mathcal L}_v|=\deg(v)-1$ for $v\in V(G)$, $|{\mathcal L}|=2m-n$, and $|{\mathcal L}^{(q)}|=2m-n+1$. \begin{Proposition} \label{prop:reg} The set ${\mathcal L}^{(q)}$ forms an l.s.o.p. (and thus a regular sequence) for $K[\Sigma_G^q]={\mathbf S}/{\mathbf O}_G^q$. \end{Proposition} \begin{proof} We will use the criterion in Lemma~\ref{lem:Sta}. 
Note that by Proposition~\ref{prop:primdecom}(iii), $\dim K[\Sigma_G^q]=|{\mathcal L}^{(q)}|$. For each facet $\tau$ and each vertex $v \ne q$, by Proposition~\ref{prop:primdecom}(i), all but one variable $y_e$ with $e_+=v$ appear in $\tau$. Again by Proposition~\ref{prop:primdecom}(i), all variables $y_e$ with $e_+=q$ appear in $\tau$. It follows that the dimension of the vector space spanned by the restrictions of forms in ${\mathcal L}^{(q)}$ to the facet $\tau$ is equal to $\sum_{v} (\deg(v)-1)+1=2m-n+1$ which is equal to $|\tau|$ by Proposition~\ref{prop:primdecom}(iii), and the conditions in Lemma~\ref{lem:Sta} are satisfied. \end{proof} \medskip \begin{Example}\label{exam:Delta} For the graph in Example~\ref{exam:1}, ${\mathbf O}_G^q$ is the Stanley-Reisner ideal of the simplicial complex $\Sigma_G^q$ given by facets \begin{figure}[h] \begin{center} \begin{tikzpicture} [scale = .28, very thick = 10mm] \node (n4) at (24,1) [Cred] {}; \node (n1) at (24,11) [Cgray] {}; \node (n2) at (21,6) [Cgray] {}; \node (n3) at (27,6) [Cgray] {}; \node (n41) at (24,-0.5) [Cwhite] {$u_4$}; \node (n11) at (24,12.5) [Cwhite] {$u_2$}; \node (n21) at (19.5,6) [Cwhite] {$u_1$}; \node (n31) at (28.5,6) [Cwhite] {$u_3$}; \node (m1) at (21.5,9.5) [Cwhite] {$e_1$}; \node (m1) at (21.5,2.5) [Cwhite] {$e_2$}; \node (m1) at (26.5,9.5) [Cwhite] {$e_3$}; \node (m1) at (26.5,2.5) [Cwhite] {$e_4$}; \node (m4) at (25,6) [Cwhite] {$e_5$}; \foreach \from/\to in {n4/n2,n3/n1,n1/n2,n4/n3,n4/n1} \draw[black][->] (\from) -- (\to); \end{tikzpicture} \caption{Graph $G$ and a fixed orientation} \label{fig:graph1} \end{center} \end{figure} \begin{figure}[h!] 
\begin{center} \begin{tikzpicture} [scale = .18, very thick = 10mm] \node (n4) at (4,1) [Cgray] {}; \node (n1) at (4,11) [Cgray] {}; \node (n2) at (1,6) [Cgray] {}; \node (n3) at (7,6) [Cgray] {}; \foreach \from/\to in {n2/n1,n1/n3,n4/n2} \draw[blue][->] (\from) -- (\to); \foreach \from/\to in {n4/n1,n3/n4} \draw[lightgray][dashed] (\from) -- (\to); \node (n4) at (14,1) [Cgray] {}; \node (n1) at (14,11) [Cgray] {}; \node (n2) at (11,6) [Cgray] {}; \node (n3) at (17,6) [Cgray] {}; \foreach \from/\to in {n4/n2,n1/n3,n4/n1} \draw[blue][->] (\from) -- (\to); \foreach \from/\to in {n1/n2,n4/n3} \draw[lightgray][dashed] (\from) -- (\to); \node (n4) at (24,1) [Cgray] {}; \node (n1) at (24,11) [Cgray] {}; \node (n2) at (21,6) [Cgray] {}; \node (n3) at (27,6) [Cgray] {}; \foreach \from/\to in {n4/n2,n4/n3} \draw[lightgray][dashed] (\from) -- (\to); \foreach \from/\to in {n4/n1,n1/n3,n1/n2} \draw[blue][->] (\from) -- (\to); \node (n4) at (34,1) [Cgray] {}; \node (n1) at (34,11) [Cgray] {}; \node (n2) at (31,6) [Cgray] {}; \node (n3) at (37,6) [Cgray] {}; \foreach \from/\to in {n4/n1,n1/n3} \draw[lightgray][dashed] (\from) -- (\to); \foreach \from/\to in {n2/n1,n4/n2,n4/n3} \draw[blue][->] (\from) -- (\to); \node (n4) at (44,1) [Cgray] {}; \node (n1) at (44,11) [Cgray] {}; \node (n2) at (41,6) [Cgray] {}; \node (n3) at (47,6) [Cgray] {}; \foreach \from/\to in {n1/n2,n3/n1} \draw[lightgray][dashed] (\from) -- (\to); \foreach \from/\to in {n4/n2,n4/n3,n4/n1} \draw[blue][->] (\from) -- (\to); \node (n4) at (54,1) [Cgray] {}; \node (n1) at (54,11) [Cgray] {}; \node (n2) at (51,6) [Cgray] {}; \node (n3) at (57,6) [Cgray] {}; \foreach \from/\to in {n1/n3,n4/n2} \draw[lightgray][dashed] (\from) -- (\to); \foreach \from/\to in {n4/n3,n4/n1,n1/n2} \draw[blue][->] (\from) -- (\to); \node (n4) at (64,1) [Cgray] {}; \node (n1) at (64,11) [Cgray] {}; \node (n2) at (61,6) [Cgray] {}; \node (n3) at (67,6) [Cgray] {}; \foreach \from/\to in {n1/n2,n4/n1} \draw[lightgray][dashed] (\from) -- 
(\to); \foreach \from/\to in {n4/n3,n4/n2,n3/n1} \draw[blue][->] (\from) -- (\to); \node (n4) at (74,1) [Cgray] {}; \node (n1) at (74,11) [Cgray] {}; \node (n2) at (71,6) [Cgray] {}; \node (n3) at (77,6) [Cgray] {}; \foreach \from/\to in {n4/n2,n1/n4} \draw[lightgray][dashed] (\from) -- (\to); \foreach \from/\to in {n4/n3,n3/n1,n1/n2} \draw[blue][->] (\from) -- (\to); \end{tikzpicture} \caption{Spanning trees $T$ and orientations ${\mathcal O}_T$ corresponding to $\tau_1,\tau_2,\ldots,\tau_8$} \label{fig:spanning trees} \end{center} \end{figure} \[\tau_1=\{y_{e_1},y_{e_3},y_{e_4},y_{e_5},y_{\bar{e}_2},y_{\bar{e}_4},y_{\bar{e}_5}\}, \ \tau_2=\{y_{e_1},y_{e_3},y_{e_4},y_{\bar{e}_1},y_{\bar{e}_2},y_{\bar{e}_4},y_{\bar{e}_5}\},\] \[ \tau_3=\{y_{{e}_2},y_{e_3},y_{e_4},y_{\bar{e}_1},y_{\bar{e}_2},y_{\bar{e}_4},y_{\bar{e}_5}\}, \ \tau_4=\{y_{e_1},y_{e_3},y_{{e}_5},y_{\bar{e}_2},y_{\bar{e}_3},y_{\bar{e}_4},y_{\bar{e}_5}\},\] \[ \tau_5=\{y_{e_1},y_{e_3},y_{\bar{e}_1},y_{\bar{e}_2},y_{\bar{e}_3},y_{\bar{e}_4}, y_{\bar{e}_5}\}, \ \tau_6=\{y_{e_2},y_{e_3},y_{\bar{e}_1},y_{\bar{e}_2},y_{\bar{e}_3},y_{\bar{e}_4},y_{\bar{e}_5}\}, \] \[ \tau_7=\{y_{e_1},y_{e_5},y_{\bar{e}_1},y_{\bar{e}_2},y_{\bar{e}_3},y_{\bar{e}_4},y_{\bar{e}_5}\}, \ \tau_8=\{y_{e_2},y_{e_5},y_{\bar{e}_1},y_{\bar{e}_2},y_{\bar{e}_3},y_{\bar{e}_4},y_{\bar{e}_5}\}. \] See Proposition~\ref{prop:primdecom}(i), Example~\ref{example:primdecom}, and Figure~\ref{fig:spanning trees}. If we choose $\{e_1, e_3, e_4,\bar{e}_4\}$ as our distinguished set of incoming edges to vertices, we have \[ {\mathcal L}_{u_1}=\{y_{e_2}-y_{e_1}\} \ , \quad {\mathcal L}_{u_2}=\{y_{\bar{e}_1}-y_{e_3}, y_{e_5}-y_{e_3}\} \ , \] \[ {\mathcal L}_{u_3}=\{y_{\bar{e}_3}-y_{e_4}\} \ , \quad {\mathcal L}_{u_4}=\{y_{\bar{e}_2}-y_{\bar{e}_4}, y_{\bar{e}_5}-y_{\bar{e}_4}\} \ . 
\] Therefore \[ {\mathcal L}= \bigcup_{v \in V(G)} {\mathcal L}_v = \{y_{e_2}-y_{e_1}, y_{\bar{e}_1}-y_{e_3}, y_{e_5}-y_{e_3}, y_{\bar{e}_3}-y_{e_4}, y_{\bar{e}_2}-y_{\bar{e}_4}, y_{\bar{e}_5}-y_{\bar{e}_4} \} \] and \[ {\mathcal L}^{(q)}={\mathcal L}\cup \{y_{\bar{e}_4} \} \ . \] Note that $|{\mathcal L}^{(q)}|=7=2 \times 5 - 4 +1$. The restrictions of linear forms of ${\mathcal L}^{(q)}$ to $\tau_1$ are \[ {\mathcal L}^{(q)} | _{\tau_1}= \{ -y_{e_1}, -y_{e_3}, y_{e_5}-y_{e_3}, -y_{e_4}, y_{\bar{e}_2}, y_{\bar{e}_5}, y_{\bar{e}_4} \} \ , \] which span a vector space of dimension $|\tau_1|=7=2 \times 5 - 4 +1$. Similarly, the restrictions of the linear forms of ${\mathcal L}^{(q)}$ to the other $\tau_i$'s span a vector space of dimension $|\tau_i|$. \end{Example} \medskip \subsection{Linear system of parameters for ${\mathbf J}_G$} Next we use Proposition~\ref{prop:Eis} to give a regular sequence for ${\mathbf S}/{\mathbf J}_G$. \begin{Proposition}\label{prop:regularJ} The set ${\mathcal L}$ forms a regular sequence for ${\mathbf S}/{\mathbf J}_G$. \end{Proposition} \begin{proof} Let $\lambda_q \in C_1(G,{\mathbb R})$ be the integral, non-negative weight functional defined in Definition~\ref{def:O_intwt}. Any element of ${\mathcal L}$ is of the form $g=y_e - y_{e_v}$ with $e_+=(e_v)_+=v$ for some $v \in V(G)$. Since $\lambda_q(e)=\lambda_q(e_v)$ depends only on $v$ by the construction in Proposition~\ref{prop:plusworks}, we obtain $\ini_{\lambda_q}(g)=g$ and $\tilde{g}=g$. Therefore $\{\ini_{\lambda_q}(g) : g \in {\mathcal L}\} = {\mathcal L}$ which is a regular sequence on ${\mathbf S} / \ini_{\lambda_q}({\mathbf J}_G)={\mathbf S} / {\mathbf O}_G^q$ by Proposition~\ref{prop:reg}. So we may apply Proposition~\ref{prop:Eis} to conclude that ${\mathcal L}$ is a $({\mathbf S}/{\mathbf J}_G)$-regular sequence. \end{proof} \begin{Remark} It follows from Theorem~\ref{thm:CMhsop} and Proposition~\ref{prop:CM} that ${\mathcal L}$ also forms a (partial) l.s.o.p. 
for ${\mathbf S}/{\mathbf J}_G$. \end{Remark} \section{${\mathbf I}_G$ from ${\mathbf J}_G$ and ${\mathbf M}_G^q$ from ${\mathbf O}_G^q$} \label{sec:relate} A common and powerful technique in the theory of divisors on graphs and chip-firing games is to relate divisors to orientations. Given an orientation, one can form a divisor from the associated indegrees or outdegrees (see, e.g., \cite[Theorem~2.3]{Lovasz91}, \cite[Theorem~3.3]{BN1}, \cite{HopPerk}, \cite{FarbodFatemeh}, and \cite{ABKS}). Algebraically, the strength of this method is well justified by its relation to the regular sequences studied in \S\ref{sec:main}. \medskip Recall that ${\mathbf R}=K[{\mathbf x}]$ denotes the polynomial ring in $n$ variables $\{x_v: v \in V(G)\}$ and ${\mathbf S}=K[{\mathbf y}]$ denotes the polynomial ring in $2m$ variables $\{y_e: e \in {\mathbb E}(G)\}$. There is a canonical surjective $K$-algebra homomorphism \[ \phi \colon {\mathbf S} \rightarrow {\mathbf R} \] defined by sending $y_e$ to $x_{e_+}$ for all $e \in {\mathbb E}(G)$. The kernel of this map is precisely the ideal generated by ${\mathcal L}$ (defined in \eqref{eq:L}), which we denote by ${\mathfrak a}= \langle {\mathcal L} \rangle$. The induced isomorphism \[ \bar{\phi} \colon {\mathbf S} /{\mathfrak a} \xrightarrow{\sim} {\mathbf R} \] is the ``algebraic indegree map'', and it relates the ideals ${\mathbf I}_G$ and ${\mathbf M}_G^q$ to the ideals ${\mathbf J}_G$ and ${\mathbf O}_G^q$. \begin{Proposition}\label{prop:specialize} \begin{itemize} \item[] \item[(i)] $\bar{\phi}({\mathbf J}_G +{\mathfrak a}) = {\mathbf I}_G$. In other words $\bar{\phi}$ induces an isomorphism $({\mathbf S} / {\mathbf J}_G ) \otimes_{{\mathbf S}} ({\mathbf S} / {\mathfrak a}) \cong {\mathbf R} / {\mathbf I}_G$. \item[(ii)] $\bar{\phi}({\mathbf O}_G^q +{\mathfrak a}) = {\mathbf M}_G^q$. 
In other words $\bar{\phi}$ induces an isomorphism $({\mathbf S} / {\mathbf O}_G^q) \otimes_{{\mathbf S}} ({\mathbf S} / {\mathfrak a}) \cong {\mathbf R} / {\mathbf M}_G^q$. \end{itemize} \end{Proposition} \begin{proof} The map $\bar{\phi}$ sends $\prod_{e \in {\mathbb E}(A^c,A)}{y_e} + {\mathfrak a}$ to ${\mathbf x}^{D(A^c,A)}$. So the proposition immediately follows from examining the generating sets described in Theorem~\ref{thm:Cori} and in Proposition~\ref{prop:grobrelation}. \end{proof} \begin{Remark} \label{rmk:Rtilde} The variables $y_e$ with $e_+=q$ do not appear in the support of any element of ${\mathbf O}_G^q$ (see Proposition~\ref{prop:grobrelation}(iii)). Likewise, the variable $x_q$ does not appear in the support of any element of ${\mathbf M}_G^q$ (see Theorem~\ref{thm:Cori}). Therefore we also have an isomorphism $\bar{\phi}({\mathbf O}_G^q + \langle {\mathcal L}^{(q)} \rangle) = \bar{\phi}({\mathbf O}_G^q +{\mathfrak a} + \langle y_{e_q} \rangle) \cong {\mathbf M}_G^q+\langle x_q \rangle$. In other words $({\mathbf S} / {\mathbf O}_G^q) \otimes_{{\mathbf S}} ({\mathbf S} / \langle {\mathcal L}^{(q)}\rangle) \cong \tilde{{\mathbf R}} / {\mathbf M}_G^q$, where $\tilde{{\mathbf R}} = K[\{x_v\}_{ v \ne q}]$. \end{Remark} \medskip \begin{Theorem} \label{thm:betti_coincide} \begin{itemize} \item[] \item[(i)] The polyhedral cell complex ${\mathcal B}_G^{q,c}$ (equivalently, ${\mathcal A}_G^{q,c}$) supports a $\Div(G)$-graded (and ${\mathbb Z}$-graded) minimal free resolution for ${\mathbf M}_G^q$. \item[(ii)] The quotient labeled cell complex $\Del(L(G)) / L(G)$ supports a $\Pic(G)$-graded (and ${\mathbb Z}$-graded) minimal free resolution for ${\mathbf I}_G$. \item[(iii)] The ${\mathbb Z}$-graded Betti diagrams of ${\mathbf J}_G$, ${\mathbf I}_G$, ${\mathbf O}_G^q$, and ${\mathbf M}_G^q$ coincide. 
\end{itemize} \end{Theorem} \begin{proof} (i) By Theorem~\ref{thm:bounded}, we know that ${\mathcal B}_G^{q,c}$ gives a $C^1(G,{\mathbb Z})$-graded minimal free resolution for ${\mathbf S} / {\mathbf O}_G^q$. The same statement is true if we replace ${\mathcal B}_G^{q,c}$ with ${\mathcal A}_G^{q,c}$ by the discussion in \S\ref{sec:grobrel}. By Theorem~\ref{thm:Eis}(iv) and Proposition~\ref{prop:reg}, if we replace all the labels ${\mathbf m}_F$ with ${\mathbf m}_F + {\mathfrak a}$, we obtain a minimal cellular free resolution for $({\mathbf S} / {\mathbf O}_G^q)\otimes_{{\mathbf S}} ({\mathbf S} / {\mathfrak a}) \cong {\mathbf R} / {\mathbf M}_G^q$ (see Proposition~\ref{prop:specialize}(ii)). Alternatively we could replace all labels ${\mathbf m}_F$ with ${\mathbf m}_F + \langle {\mathcal L}^{(q)} \rangle$ to obtain a minimal cellular free resolution for $({\mathbf S} / {\mathbf O}_G^q) \otimes_{{\mathbf S}} ({\mathbf S} / \langle {\mathcal L}^{(q)}\rangle) \cong \tilde{{\mathbf R}} / {\mathbf M}_G^q$. The new labels are easily seen to be $\Div(G)$- and ${\mathbb Z}$-homogeneous, and the resulting minimal free resolution is $\Div(G)$- and ${\mathbb Z}$-graded. \medskip (ii) follows similarly from Theorem~\ref{thm:Jresol}, Theorem~\ref{thm:Eis}(iv), Proposition~\ref{prop:regularJ}, and Proposition~\ref{prop:specialize}(i). \medskip (iii) The fact that the (ungraded) Betti numbers of ${\mathbf J}_G$ and ${\mathbf O}_G^q$ coincide follows from Lemma~\ref{lem:bij}. By the labeling compatibility described in Lemma~\ref{lem:labels} the ${\mathbb Z}$-graded Betti numbers of ${\mathbf J}_G$ and ${\mathbf O}_G^q$ coincide as well. Since all elements of ${\mathcal L}$ are homogeneous (linear) forms, the relabeling of cells described above (in passing from ${\mathbf J}_G$ to ${\mathbf I}_G$ and from ${\mathbf O}_G^q$ to ${\mathbf M}_G^q$) does not change the ${\mathbb Z}$-degrees (see also Theorem~\ref{thm:Eis}(iii)). 
Therefore the ${\mathbb Z}$-graded Betti diagrams of all four ideals coincide. \end{proof} \begin{Remark} \begin{itemize} \item[] \item[(i)] Recall from Remark~\ref{sec:nopartorient} that the number of $i$-dimensional cells in ${\mathcal B}_G^{q,c}$ is equal to the number of acyclic partial orientations of $G$ with $(i+2)$ (connected) components having unique source at $q$. So one immediately obtains a combinatorial description of the (ungraded) Betti numbers in terms of acyclic partial orientations. This interpretation for the Betti numbers of ${\mathbf I}_G$ was conjectured in \cite{perkinson} and proved in \cite{FarbodFatemeh} and \cite{Madhu}. \item[(ii)] Fix an orientation ${\mathcal O}$. Then it also follows from Proposition~\ref{prop:primdecom} (alternatively, \cite[Corollary~2.7]{novik2002syzygies}) that $\{y_e-y_{\bar{e}} : e \in {\mathcal O}\}$ is a regular sequence for ${\mathbf S}/{\mathbf O}_G^q$. The ideal ${\mathbf{Mat}}_G$ obtained from ${\mathbf O}_G^q$ by identifying the $y_e$'s with the $y_{\bar{e}}$'s is called the (unoriented) matroid ideal of $G$ and it has the same ${\mathbb Z}$-graded Betti diagram as ${\mathbf J}_G$, ${\mathbf I}_G$, ${\mathbf O}_G^q$, and ${\mathbf M}_G^q$. \end{itemize} \end{Remark} \medskip \begin{Example}\label{exam:3} We return to Example~\ref{exam:1}. We described the sequence ${\mathcal L}^{(q)}$ in Example~\ref{exam:Delta}. For simplicity we let $x_i=x_{u_i}$. By sending $\{y_{e_2}, y_{e_1}\}$ to $x_1$, $\{y_{\bar{e}_1}, y_{e_5}, y_{e_3}\}$ to $x_{2}$, and $\{y_{\bar{e}_3}, y_{e_4}\}$ to $x_{3}$, ${\mathbf O}_G^q$ in \eqref{eq:OGex} is sent to the ideal \[ \langle x_2^2 x_3,x_1 x_2^2,x_3^2,x_2^{3},x_1^{2},x_1x_2x_3\rangle \] which is precisely ${\mathbf M}_G^q=\ini_{<_q}({\mathbf I}_G)$ by Theorem~\ref{thm:Cori}(ii). The minimal cellular free resolution of ${\mathbf M}_G^q$ is obtained from the minimal cellular free resolution of ${\mathbf O}_G^q$ (described in Example~\ref{exam:1}) by ``relabeling'' (i.e. 
by replacing each $y_e$ with $x_{e_+}$). We first relabel the complex in Figure~\ref{fig:arrangement} to obtain Figure~\ref{fig:arrangement2}. The resulting labeled complex gives a minimal free resolution for ${\mathbf M}_G^q$ which is precisely the minimal free resolution of ${\mathbf O}_G^q$ ``relabeled''. Concretely, we first extend the labels ${\mathbf m}'({\mathbf p}_i)$ on the vertices to the whole of ${\mathcal B}_G^q$ by the least common multiple construction. For example, \[ {\mathbf m}_{E_2}=y_{\bar{e}_1}y_{\bar{e}_3} y_{e_4} y_{e_5} \mapsto {\mathbf m}'_{E_2}= x_2^2 x_3^2 \ , \] \[ {\mathbf m}_{E_4}=y_{\bar{e}_1} y_{e_2} y_{e_4} y_{e_5} \mapsto {\mathbf m}'_{E_4}= x_1x_2^2x_3\ , \] \[ {\mathbf m}_{E_5}=y_{e_2} y_{\bar{e}_3} y_{e_4} y_{e_5} \mapsto {\mathbf m}'_{E_5}= x_1x_2x_3^2\ , \] \[ {\mathbf m}_{E_6}=y_{e_2}y_{e_3} y_{e_4} y_{e_5} \mapsto {\mathbf m}'_{E_6}=x_1x_2^2x_3 \ , \] \[ {\mathbf m}_{F_2}= y_{\bar{e}_1} y_{e_2} y_{\bar{e}_3} y_{e_4}y_{e_5} \mapsto {\mathbf m}'_{F_2}= x_1x_2^2x_3^2\ . \] The minimal resolution of ${\mathbf M}_G^q$ is as follows. \[ 0 \rightarrow \bigoplus_{i=1}^4{\mathbf R}(-{\mathbf m}'_{F_i}) \xrightarrow{\partial'_2} \bigoplus_{i=1}^9{\mathbf R}(-{\mathbf m}'_{E_i}) \xrightarrow{\partial'_1} \bigoplus_{i=1}^6{\mathbf R}(-{\mathbf m}'_{{\mathbf p}_i}) \xrightarrow{\partial'_0} {\mathbf R} \twoheadrightarrow {\mathbf R} /{\mathbf M}_G^q \ . \] Let $[[F]]$ denote the generator of ${\mathbf R}(-{\mathbf m}'_F)$. The homogenized differential operator of the cell complex $(\partial'_0,\partial'_1, \partial'_2)$ is as described in \eqref{eq:differntials}. For example: \[ \partial'_0([[{\mathbf p}_i]])= {\mathbf m}'_{{\mathbf p}_i} = {\mathbf m}'({\mathbf p}_i) \ , \] \[ \partial'_1([[E_6]]) = x_2[[{\mathbf p}_4]] - x_3[[{\mathbf p}_5]]\ , \] \[ \partial'_2([[F_2]]) =x_1[[E_2]] - x_3[[E_4]] + x_2[[E_5]] \ . 
\] Although ${\mathbf J}_G$ and ${\mathbf I}_G$ have the same Betti table as ${\mathbf O}_G^q$ and ${\mathbf M}_G^q$, it is not possible to read the minimal free resolutions for ${\mathbf J}_G$ or ${\mathbf I}_G$ directly from ${\mathcal B}_G^q$; one really needs to consider the cell decomposition of $L(G)_{{\mathbb R}}/L(G)$ or of $\Div_{{\mathbb R}}^0(G)/\Prin(G)$. \end{Example} \medskip \setlength{\unitlength}{1.1pt} \begin{figure}[h!] \begin{center} \begin{picture}(100,195)(0,-55) \thicklines \put(60,0){\circle*{4}} \put(35,-13){$x_2^2x_3$} \put(26,66){\circle*{4}} \put(-13,60){$x_1x_2^2$} \put(160,0){\circle*{4}} \put(140,-13){$x_3^2$} \put(-40,0){\circle*{4}} \put(-75,-13){$x_2^3$} \put(60,100){\circle*{4}} \put(40,98){$x_1^2$} \put(60,50){\circle*{4}} \put(20,45){$x_1x_2x_3$} \put(60,0){\line(0,-1){30}} \put(60,50){\line(2,-1){100}} \put(60,50){\line(-2,1){65}} \put(60,100){\line(0,1){30}} \put(60,-20){\vector(-1,0){10}} \put(56,-43){$H_{e_3}$} \put(175,0){\vector(0,1){10}} \put(180,10){$H_{e_2}$} \put(-50,-10){\vector(1,-1){10}} \put(-60,-43){$H_{e_4}$} \put(30,130){\line(1,-1){20}} \put(160,0){\line(1,-1){20}} \put(170,-10){\vector(-2,-3){7}} \put(193,-40){$H_{e_5}$} \put(14,73){\vector(2,3){7}} \put(-30,80){$H_{e_1}$} \thicklines \put(-40,0){\line(1,0){230}} \put(-40,0){\line(-1,0){30}} \put(60,0){\line(-1,0){80}} \put(60,0){\line(0,1){130}} \put(160,0){\line(3,-2){35}} \put(-40,0){\line(1,1){130}} \put(-40,0){\line(-1,-1){30}} \put(160,0){\line(-1,1){130}} \put(160,0){\line(1,-1){30}} \put(-40,-10){${\mathbf p}_1$} \put(64,-10){${\mathbf p}_2$} \put(155,10){${\mathbf p}_3$} \put(63,55){${\mathbf p}_4$} \put(21,74){${\mathbf p}_5$} \put(65,98){${\mathbf p}_6$} \end{picture} \caption{The relabeled bounded complex ${\mathcal B}_G^{q,c}$ giving a minimal free resolution of ${\mathbf M}_G^{q}$} \label{fig:arrangement2} \end{center} \end{figure} \begin{Remark} \label{rmk:ResolRelation} There is an isometry between the principal lattice $(\Prin(G) , \langle 
\cdot, \cdot \rangle_{\en})$ and the cut lattice $(L(G) , \langle \cdot , \cdot \rangle )$ (Remark~\ref{rmk:isometry}). So the Delaunay decompositions $\Del(\Prin(G))$ and $\Del(L(G))$ are combinatorially equivalent (compare Figure~\ref{fig:PrinLattice} with Figure~\ref{fig:CutLattice}) and the relabeling of cells in $\Del(L(G))$ described above corresponds to the labels that were given to cells of $\Del(\Prin(G))$ in \S\ref{sec:labelDelPrin}. Therefore the resolution of ${\mathbf I}_G$ described in Theorem~\ref{thm:Ires} coincides with the resolution of ${\mathbf I}_G$ obtained from the resolution of ${\mathbf J}_G$ in Theorem~\ref{thm:Jresol} by ``relabeling'' as in Theorem~\ref{thm:betti_coincide}. For example, the resolution of ${\mathbf I}_G$ described in Example~\ref{ex:PrinResol} can alternatively be obtained from the resolution of ${\mathbf J}_G$ described in Example~\ref{ex:CutResol}. It is straightforward to give an alternate {\em proof} for Theorem~\ref{thm:Ures} and Theorem~\ref{thm:Ires} using these observations. \end{Remark} \medskip \section{Some consequences of our main results} \label{sec:conseq} \subsection{Cohen-Macaulayness} \label{sec:CM} For a polynomial ring $S$, a term order $<$ and an ideal $I \subset S$, it is known that $S/I$ is Cohen-Macaulay if and only if $S/\ini_{<}(I)$ is Cohen-Macaulay (see, e.g., \cite[Corollary 3.3.5]{HHBook}). \begin{Proposition}\label{prop:CM} The modules ${\mathbf S} / {\mathbf O}_G^q$, ${\mathbf R} / {\mathbf M}_G^q$, ${\mathbf S} / {\mathbf J}_G$, and ${\mathbf R} / {\mathbf I}_G$ are all Cohen-Macaulay. \end{Proposition} \begin{proof} By Proposition~\ref{prop:primdecom}(iv) we have that ${\mathbf S}/{\mathbf O}_G^q$ is Cohen-Macaulay. For ${\mathbf R} / {\mathbf M}_G^q$, first observe that by Theorem~\ref{thm:Cori} the variable $x_q$ does not appear in the support of any of the given monomial generators of ${\mathbf M}_G^q$, so $x_q$ is a nonzerodivisor on ${\mathbf R} / {\mathbf M}_G^q$. This implies that $\depth({\mathbf R} / {\mathbf M}_G^q)\geq 1$. 
On the other hand, $\dim({\mathbf R}/{\mathbf M}_G^q)=1$. One way to see this is the following: by Proposition~\ref{prop:primdecom}(iii) we know that $\dim({\mathbf S} / {\mathbf O}_G^q)=2m-n+1$. Since ${\mathcal L}^{(q)}={\mathcal L} \cup \{y_{e_q}\}$ is an l.s.o.p. for ${\mathbf S} / {\mathbf O}_G^q$, we deduce by Proposition~\ref{prop:specialize}(ii) that $\dim({\mathbf R}/{\mathbf M}_G^q)=\dim(({\mathbf S} / {\mathbf O}_G^q )\otimes_{{\mathbf S}}({\mathbf S} / {\mathfrak a}))=\dim(({\mathbf S} / {\mathbf O}_G^q)/{\mathfrak a}({\mathbf S} / {\mathbf O}_G^q))=1$. Therefore ${\mathbf R}/{\mathbf M}_G^q$ is also Cohen-Macaulay. Since $\ini_{\prec_q}({\mathbf J}_G)={\mathbf O}_G^q$ and $\ini_{<_q}({\mathbf I}_G)={\mathbf M}_G^q$, we immediately conclude that ${\mathbf S}/{\mathbf J}_G$ and ${\mathbf R}/{\mathbf I}_G$ are also Cohen-Macaulay. \end{proof} \medskip \subsection{Multiplicities} \label{sec:mult} For a finitely generated (graded) module $M$ of dimension $d>0$ over a polynomial ring, the {\em multiplicity} of $M$ is defined to be the leading coefficient of the Hilbert polynomial of $M$ (i.e. the polynomial giving $i \mapsto \dim_K(M_i)$ for $i \gg 0$). We will denote this quantity by $e(M)$. Since the Hilbert polynomial is completely determined by the Betti table (see, e.g., \cite[Theorem~8.20 and Proposition~8.23]{MillerSturmfels}), the multiplicity is also determined by the Betti table. The following result easily follows. \begin{Theorem} \[ e({\mathbf S}/{\mathbf O}_G^q)=e({\mathbf S}/{\mathbf J}_G)=e({\mathbf R}/{\mathbf M}_G^q)=e({\mathbf R}/{\mathbf I}_G)=\kappa(G)\ , \] where $\kappa(G)$ denotes the number of spanning trees of $G$. \end{Theorem} \begin{proof} All these ideals have the same Betti table and hence the same multiplicity. It suffices to compute the multiplicity of ${\mathbf S}/{\mathbf O}_G^q=K[\Sigma_G^q]$. 
By Proposition~\ref{prop:primdecom}(ii), we have \[ {\mathbf O}_G^q = \bigcap_{T} P_T \ , \] the intersection being over all spanning trees of $G$. By Proposition~\ref{prop:primdecom}(iii), we have $\dim({\mathbf S}/{\mathbf O}_G^q)=2m-n+1$. Also, for each spanning tree $T$ we have $P_T=\langle y_e: e \in {\mathcal O}_T \rangle$ and therefore \[ \dim({\mathbf S}/P_T)=2m-n+1\quad \text{and} \quad e({\mathbf S}/P_T)=1 \ . \] In this situation (see, e.g., \cite[Lemma 5.3.11]{Singular}) we have \[ e({\mathbf S}/{\mathbf O}_G^q)=\sum_{T}e({\mathbf S}/P_T) \ , \] the sum being over all spanning trees of $G$. \end{proof} For ${\mathbf R} / {\mathbf I}_G$, the multiplicity was recently computed in \cite{OCarroll} using a different method. \subsection{Alexander dual of ${\mathbf M}_G^q$ and cocellular free resolution} \label{sec:alex} In \cite{MadhuBernd}, Riemann-Roch theory for graphs is linked to Alexander duality for the ideal ${\mathbf M}_G^q$. Recall that ${\mathbf M}_G^q \subset \tilde{{\mathbf R}} = K[\{x_v\}_{ v \ne q}]$ (see Remark~\ref{rmk:Rtilde}). Here we quickly study the Alexander dual of ${\mathbf M}_G^q$ and use Theorem~\ref{thm:bounded} to obtain its minimal cocellular free resolution. \medskip We define the divisor \[ {\mathbf a} = \sum_{v \in V(G)}{(\deg(v))(v)} \ . \] It follows from Theorem~\ref{thm:Cori} and Theorem~\ref{thm:betti_coincide}(i) that: \begin{itemize} \item[(i)] ${\mathbf M}_G^q$ is generated in degrees preceding ${\mathbf a}$. \item[(ii)] ${\mathbf M}_G^q+\langle \{x_v^{{\mathbf a}(v)+1}\}_{v \ne q} \rangle = {\mathbf M}_G^q$; this is because for each $v\neq q$ in $V(G)$, the star of the vertex $v$ forms a cut and therefore $x_v^{\deg(v)} \in {\mathbf M}_G^q$. \item[(iii)] All face labels in the labeled cell complex ${\mathcal B}_G^{q,c}$ resolving ${\mathbf M}_G^q$ (as in Theorem~\ref{thm:betti_coincide}(i)) divide ${\mathbf x}^{{\mathbf a}+\mathbf{1}}$. 
In fact a stronger statement is true: all vertex labels divide ${\mathbf x}^{{\mathbf a}}$. \end{itemize} Consider the cell complex ${\mathcal B}_G^{q,c}$ with labels ${\mathbf m}_F'$ for cells $F$ as in the proof of Theorem~\ref{thm:betti_coincide}(i). Relabel each cell $F$ with ${\mathbf x}^{{\mathbf a}+\mathbf{1}}/{\mathbf m}_F'$. For simplicity, let us call ${\mathcal B}_G^{q,c}$ with its new labels ${\mathcal D}$. Let ${\mathcal D}_{\leq {\mathbf a}}$ denote the subcomplex consisting of cells with labels dividing ${\mathbf x}^{{\mathbf a}}$. Let $({\mathbf M}_G^q)^{[{\mathbf a}]}$ denote the Alexander dual of ${\mathbf M}_G^q$ with respect to ${\mathbf a}$ (\cite[Definition~5.20]{MillerSturmfels}). In this setting, \cite[Theorem~5.37]{MillerSturmfels} gives the following result: \begin{Proposition} The polyhedral complex ${\mathcal D}_{\leq {\mathbf a}}$ supports a minimal (cocellular) resolution for the ideal $({\mathbf M}_G^q)^{[{\mathbf a}]}$. \end{Proposition} This observation has been made (independently) in \cite{Anton}. See \cite[Section~5.3]{MillerSturmfels} for more details. Here we give an example to illustrate this result. \begin{Example}\label{exam:dual3} The complex ${\mathcal D}_{\leq {\mathbf a}}$ associated to Example~\ref{exam:3} and Figure~\ref{fig:arrangement2} is depicted in Figure~\ref{fig:dualfig} in blue. The ideal $({\mathbf M}_G^q)^{[{\mathbf a}]}$ is minimally generated as \[ ({\mathbf M}_G^q)^{[{\mathbf a}]}=\langle x_{1}x_{2}^2x_{3}^2, x_{1}x_{2}^3x_{3}, x_{1}^2x_{2}^2x_{3}, x_{1}^2x_{2}x_{3}^2 \rangle \ . \] \end{Example} \medskip \setlength{\unitlength}{1.1pt} \begin{figure}[h!] 
\begin{center} \begin{picture}(100,195)(0,-55) \thicklines \put(60,0){\circle*{4}} \put(26,66){\circle*{4}} \put(46,66){\textcolor{blue}{\circle*{4}}} \put(13,86){\textcolor{blue}{$x_1^2x_2^2x_3$}} \put(75,66){\textcolor{blue}{\circle*{4}}} \put(75,86){\textcolor{blue}{$x_1^2x_2x_3^2$}} \put(46,33){\textcolor{blue}{\circle*{4}}} \put(13,18){\textcolor{blue}{$x_1x_2^3x_3$}} \put(75,33){\textcolor{blue}{\circle*{4}}} \put(75,18){\textcolor{blue}{$x_1x_2^2x_3^2$}} \thicklines \put(46,66){\textcolor{blue}{\line(1,0){28}}} \put(46,66){\textcolor{blue}{\line(0,-1){31}}} \put(75,33){\textcolor{blue}{\line(-1,0){28}}} \put(75,33){\textcolor{blue}{\line(0,1){32}}} \put(160,0){\circle*{4}} \put(-40,0){\circle*{4}} \put(60,100){\circle*{4}} \put(60,50){\circle*{4}} \put(60,0){\line(0,-1){30}} \put(60,50){\line(2,-1){100}} \put(60,50){\line(-2,1){65}} \put(60,100){\line(0,1){30}} \put(56,-43){$H_{e_3}$} \put(180,10){$H_{e_2}$} \put(-60,-43){$H_{e_4}$} \put(30,130){\line(1,-1){20}} \put(160,0){\line(1,-1){20}} \put(193,-40){$H_{e_5}$} \put(-30,80){$H_{e_1}$} \thicklines \put(-40,0){\line(1,0){230}} \put(-40,0){\line(-1,0){30}} \put(60,0){\line(-1,0){80}} \put(60,0){\line(0,1){130}} \put(160,0){\line(3,-2){35}} \put(-40,0){\line(1,1){130}} \put(-40,0){\line(-1,-1){30}} \put(160,0){\line(-1,1){130}} \put(160,0){\line(1,-1){30}} \end{picture} \caption{The bounded complex supporting a minimal free resolution of $({\mathbf M}_G^{q})^{[{{\mathbf a}}]}$} \label{fig:dualfig} \end{center} \end{figure} \section*{Acknowledgments} The first author is grateful to Volkmar Welker for his support and many helpful conversations during this project. She is also grateful to J\"urgen Herzog for helpful discussions related to regular sequences and flat families. The second author would like to thank his advisor Matthew Baker for his support throughout this project and for many useful conversations. We would like to thank Bernd Sturmfels for useful discussions. 
\input{DivisorsResolutions.bbl} \end{document}
https://arxiv.org/abs/1706.09374
Yet again on polynomial convergence for SDEs with a gradient-type drift
Bounds on convergence rate to the invariant distribution for a class of stochastic differential equations (SDEs) with a gradient-type drift are obtained.
\section{Introduction} Let us consider a stochastic differential equation in $\mathbb R^d$ \begin{equation}\label{eq1} dX_t = dB_t - \nabla U (X_t)\,dt \end{equation} with initial data \begin{equation}\label{eq2} X_0 = x. \end{equation} Here $B_t, \,\, t\geq 0$, is a $d$-dimensional Brownian motion, $X_t$ takes values in $\mathbb R^d$, and $U$ is a non-negative function with $U(0)=0$ and $\lim_{|x|\to\infty}U(x) = +\infty$. The function $U$ is assumed to be locally bounded and locally $C^1$. The aim of this paper is to establish ergodic properties of the Markov process $X_t$, namely, existence and uniqueness of its invariant probability measure, and to estimate the rate of convergence to the invariant measure by a bound which does not depend on the first derivatives of the function $U$. Such a problem -- about bounds not depending explicitly on \(\nabla U\) -- was posed, and in a particular case solved, in \cite{Ver01}. Here we extend and relax some of the assumptions from \cite{Ver01}. It is widely known that the rate of convergence may be derived from estimates of the type \begin{equation}\label{eq3} \mathbb E_x \tau^k \leq C(1+|x|^m), \end{equation} \begin{equation}\label{eq4} \sup_{t\ge 0} \mathbb E_x |X_t|^m \leq C(1+|x|^{m'}), \end{equation} for some $k>1$, $m, m', C > 0$, where $\tau = \inf (t\geq 0: \,\, |X_t| \le K)$ for some $K>0$; see, e.g., \cite{Ka, Ve1}, et al. In particular, for SDEs (\ref{eq1}) with a {\em bounded} $\nabla U$ it can be derived from (\ref{eq3}) and (\ref{eq4}) that \begin{equation}\label{eq5} var(\mu^x_t-\mu^{inv}) \leq P(x)(1+t)^{-k'}, \end{equation} with any $k'<k$ and some function $P$ growing in \(x\) at infinity. ~ \noindent Bounds like (\ref{eq3}) were obtained under various assumptions and for various classes of processes by many authors; see, in particular, \cite{AIM, Ka, La}, \cite{MW} -- \cite{Ve2} and the references therein. Yet, for SDEs all assumptions were usually -- except in the paper \cite{Ver01} -- stated in terms of \(\nabla U\).
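Although not part of the arguments of this paper, the recurrent behaviour described above is easy to observe numerically. The following minimal Euler--Maruyama sketch simulates equation (\ref{eq1}) in dimension one with the purely illustrative choice $U(x)=3\ln(1+x^2)$ (so that $V(\xi)/\log\xi \to 6$); all parameter values here are ours and are not taken from the paper.

```python
import math
import random

# Illustration only: Euler-Maruyama scheme for dX_t = dB_t - U'(X_t) dt in d = 1,
# with the hypothetical choice U(x) = 3*log(1 + x^2), hence U'(x) = 6x/(1 + x^2).
random.seed(0)

def drift(x):
    # -U'(x) for U(x) = 3*log(1 + x^2)
    return -6.0 * x / (1.0 + x * x)

dt = 0.01
x = 5.0                       # initial condition X_0 = 5
running_max = abs(x)
second_moment = 0.0
n_steps = 100_000             # total time horizon T = 1000

for _ in range(n_steps):
    x += drift(x) * dt + random.gauss(0.0, math.sqrt(dt))
    running_max = max(running_max, abs(x))
    second_moment += x * x / n_steps

print(running_max, second_moment)
```

With the invariant density proportional to $e^{-2U(x)}=(1+x^2)^{-6}$, the trajectory settles quickly near the origin and its empirical second moment stays small, in line with moment bounds of the type (\ref{eq4}).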
See also \cite{GRS, Mal} where stronger sub-exponential bounds were established under another standing assumption. In \cite{Ve1} and \cite{Ve2} a recurrence condition $$ -p = \limsup (\nabla U(x),x) < 0 $$ was used to obtain bounds like (\ref{eq5}). Here the goal is to use an analogue of the latter condition, but in terms of the limiting behaviour of the function \(U\) itself, similarly to \cite{Ver01} but under weaker assumptions. \section{Main results} \subsection{Earlier results} Let us briefly recall some earlier results from \cite{Ver01}, where, in fact, a slightly more general equation was considered. Assume \begin{equation}\label{eq7a} \sup_{x,x':\, |x-x'|\le 1}\; (U(x)-U(x')) < \infty \end{equation} and let the structure of the function $U$ be as follows: \begin{eqnarray}\label{eq7} U(x) = U^1(x) + U^2(x), \quad U^1(x) = V(|x|), \quad \langle U^2(x),x\rangle \equiv 0. \end{eqnarray} The function \(V\) here is assumed to be in the class \(C^1(0, \infty)\). In particular, the ``essential'' divergent part $U^1$ of the drift has a central symmetry property, while the other divergent part $U^2$ is orthogonal to the direction $x$ at any point $x$. Assume that the following recurrence condition is satisfied: \begin{equation}\label{eq8} -\lim_{ \xi \to\infty} \frac{V(\xi)}{ \log \xi} + d = -p < 0. \end{equation} \begin{Proposition}[\cite{Ver01}]\label{Theorem1} Let (\ref{eq7a})--(\ref{eq7}) and (\ref{eq8}) with $p>1/2$ be satisfied. Then for any $0<k<p+1/2$ and $\varepsilon>0$ small enough the estimate (\ref{eq3}) holds with $m=2k+\varepsilon$ and some $C=C_\varepsilon$, and the estimate (\ref{eq4}) is valid with any $m<2p-1$ and $m'=m+2\varepsilon$. Moreover, there exists a unique invariant measure for the Markov process $X_t$. \end{Proposition} \begin{Proposition}[\cite{Ver01}]\label{Theorem2} Let (\ref{eq7a})--(\ref{eq7}) and (\ref{eq8}) with $p>1/2$ be satisfied.
Then the bound (\ref{eq5}) holds true with any $k'<k<p+1/2$ and $\tilde P(x) = C_\varepsilon (1+|x|^{m})$, $m=2k+\varepsilon$, with any $\varepsilon>0$ small enough and some $C$. If, moreover, $p>3/2$, then the bound (\ref{eq5a}) holds true with any $k'<k<p-1/2$ and $\hat P(x) = C_\varepsilon (1+|x|^{m})$, $m=2k+\varepsilon$, with any $\varepsilon>0$ small enough and some $C$. \end{Proposition} The assumption $p>3/2$ relates to the critical value $3/2$ in \cite{Ve1}. \subsection{\bf New results} Below \([a]\) denotes the integer part of \(a\in \mathbb R^1\). \begin{Theorem}\label{Thm1} Let there exist \(1/2<p_2\le p_1\) such that \begin{equation}\label{eq8b} 0<p_2 \le \frac{V(\xi)}{\log \xi} - d \le p_1, \end{equation} for all \(\xi >0\) large enough. Then the bound (\ref{eq4}) holds true with \(m' = m + 2(p_1 - p_2)\) and \(m=2k(1+p_1-p_2)\). Moreover, for any positive integer \(\displaystyle k<1+\frac{2p_2-1}{2(1+p_1-p_2)}\) and \(m=2k(1+p_1-p_2)\), the bound (\ref{eq3}) holds. Moreover, there is a unique invariant probability measure \(\mu^{inv}\), and for any \(0<k'<k\) and any \(t\ge 0\), \begin{equation}\label{eq5a} var(\mu^x_t-\mu^{inv}) \leq P(x)(1+t)^{-k'}, \end{equation} with some polynomial function \(P(x)\). \end{Theorem} \begin{remark} Note that \(k=1\) is included in the range of values for which the bound (\ref{eq3}) will be established. The assumption (\ref{eq8b}) may be replaced by a similar one with \(\limsup_{|\xi|\to\infty}\) and \(\liminf_{|\xi|\to\infty}\) instead of exact inequalities, which may or may not change the resulting statement slightly, depending on whether or not the value \(\displaystyle \frac{2p_2-1}{2(1+p_1-p_2)}\) is an integer. Also, depending on whether the same value is an integer, the range of \(k\) for which the bound (\ref{eq3}) holds true may change a little. We do not pursue the inspection of all these possible changes here.
Let us mention that the assumption (\ref{eq7a}) is needed for the ``local mixing''; a detailed explanation may be found in \cite{Ver01}. \end{remark} \section{Proof} {\bf 1.} As in \cite{Ver01}, due to comparison theorems for SDEs with reflection and the assumption on the structure of the drift, one gets \begin{eqnarray} & \displaystyle |X_t| \le y_t, \nonumber \\ \nonumber \\ & \displaystyle dy_t = d\bar w_t + \left(\frac{d}{y_t} - V'(y_t) \right)dt + d\varphi_t \equiv d\bar w_t - \bar V'(y_t)dt + d\varphi_t,\label{eqy} \end{eqnarray} where $\bar w$ is a 1-dimensional Wiener process, $y$ is a solution of the SDE above with a non-sticky boundary condition at (any) point $K>0$, $\varphi$ is its local time at $K$, and $\bar V'(y) = V'(y) - d/y $; in other words, we let \[ \bar V(y) = V(y) - d \ln y, \quad y>0. \] Condition (\ref{eq8b}) can be rewritten in the form \[ \xi^{2p_2} \le \exp(2\bar V(\xi)) \le \xi^{2p_1}, \quad \xi \ge K. \] ~ \noindent {\bf 2.} The invariant density of the process $y_t$ with $K=|x|$ has the form $$ C(|x|) \exp\left(-2\bar V(y)\right), \quad y > |x|. $$ The integral in the normalizing identity converges because \(\exp(-2\bar V(y)) \le y^{-2p_2}\) with \(2p_2 > 1\); on the other hand, \(\exp(-2\bar V(y)) \ge y^{-2p_1}\), which implies the upper estimate $$ C(|x|) = \left(\int_{|x|}^\infty \, \exp(-2\bar V(y))\,dy\right)^{-1} \le \left(\int_{|x|}^\infty y^{-2p_1}\,dy\right)^{-1} = (2p_1-1) |x|^{2p_1 -1}, $$ for the values of \(|x|\) large enough. For smaller values of \(|x|\), convergence of the integral cannot be destroyed because in some bounded neighbourhood of zero the function \( \exp\left(-2\bar V(y)\right)\) is bounded. Note that for small values of \(|x|\) the expressions \(\displaystyle \left(\int_{|x|}^\infty \, \exp(-2\bar V(y))\,dy\right)^{-1}\) are smaller, which means that in all cases for some \(C_0\), \[ C(|x|) \le (2p_1-1) |x|^{2p_1 -1} \wedge C_0.
\] ~ \noindent {\bf 3.} The inequality (\ref{eq4}) with any real value \(m< 2p_2 -1\) and with \(m' = m + 2(p_1 - p_2)\) (where \(m'\) need not be an integer either) follows from a direct calculation, \begin{eqnarray*} & \displaystyle \mathbb E_x |X_t|^m \le \mathbb E_{|x|} |y_t|^{m} \le C(|x|)\,\int_{|x|}^{\infty} \,\xi^m \exp(-2\bar V(\xi))\,d\xi \\\\ & \displaystyle \le (C |x|^{2p_1-1} \wedge C_0)\,\int_{|x|}^{\infty} \,\xi^m \xi^{-2p_2}\,d\xi \le C |x|^{ m + 2(p_1-p_2)} \end{eqnarray*} (here the constants \(C\) may be different on different lines and even on the same line), which is true for any $x$ large enough, due to comparison theorems for the processes $y_t$ with different initial data $y_0$. For {\em any \(x\)} -- not necessarily large -- this implies the bound (\ref{eq4}), as required. ~ \noindent {\bf 4.} Denote $v^q(\xi) = \mathbb E_\xi\gamma^q$ for any integer \(q\ge 0\), where $\gamma = \inf (t:\; y_t\le K)$, and let $L$ denote the generator of $y_t$. By virtue of the identity $$ \left( \int_0^\gamma 1\,dt \right)^q = q \int_0^\gamma\, \left(\int_t^\gamma \, 1\,ds\right)^{q-1}\,dt, $$ it follows that $$ v^q(y_0) = q \mathbb E_{y_0} \int_0^\gamma v^{q-1}(y_t)\,dt, $$ for any $q$ such that the integral in the right hand side converges. In turn, this implies the equation (for example, by It\^o's or Dynkin's formula) \begin{equation}\label{eq10} Lv^{q} = - q v^{q-1}, \quad (q\ge 1) \end{equation} (cf. \cite[Theorem 13.17]{D}, where the equation is explained differently and under another, stronger assumption). Evidently, one boundary value for the latter equation is $v^q(K)= 0$. The ``second boundary value'' usual for a PDE of the second order is seemingly missing here. The formula for the solution below can be justified by the following limiting procedure. Let \(N>K\) be the second boundary (later \(N\) will go to infinity).
Let $v_N^q(\xi) = \mathbb E_\xi\gamma_N^q$ for any integer \(q\ge 0\), $\gamma_N = \inf (t:\; y^N_t\le K)$, where the process \(y^N_t\) is a solution of an equation similar to (\ref{eqy}) but with another non-sticky reflection at \(N\). Note that all solutions are strong and, hence, may be constructed on the same probability space; see, e.g., \cite{Ver81} for SDEs with one boundary, and the results of that paper extend easily to the case of two finite boundaries. Clearly, \(y^N_t\le y_t\) for any \(t\) and \(N\), and \(\gamma_N\uparrow \gamma\) as \(N\uparrow \infty\). So, by monotone convergence, \(v_N^q \uparrow v^q\) for all values of \(q\) (even if the limit \(v^q\) is not finite). The sequence of functions \(v_N^q(\xi)\) satisfies the equations (\ref{eq10}) with boundary conditions \[ v_N^q(K) =0, \quad (v_N^q)'(N) = 0. \] The solution of this boundary value problem reads \[ v_N^q(\xi) = 2q \int_K^\xi\, \exp(2\bar V(y_1))\,dy_1\, \int_{y_{1}}^N v_N^{q-1}(y_2) \exp(-2\bar V(y_2))\,dy_2, \quad K\le \xi \le N, \] which may be verified by a direct calculation. Hence, letting \(N\to\infty\) and arguing by induction in \(q\), the function \(v^q(\xi)\) is given by the following formula via the function $v^{q-1}$: \begin{equation}\label{eq11} v^q(\xi) = 2q \int_K^\xi\, \exp(2\bar V(y_1))\,dy_1\, \int_{y_{1}}^\infty v^{q-1}(y_2) \exp(-2\bar V(y_2))\,dy_2. \end{equation} By another induction this implies the following inequalities (with \(v^0 \equiv 1\)): \begin{eqnarray*} v^1(\xi) =2\int_K^\xi\, \exp(2\bar V(y_1))\,dy_1\, \int_{y_{1}}^\infty v^{0}(y_2) \exp(-2\bar V(y_2))\,dy_2 \\\\ = 2\int_K^\xi\, \exp(2\bar V(y_1))\,dy_1\, \int_{y_{1}}^\infty \exp(-2\bar V(y_2))\,dy_2 \le 2\int_K^\xi\, y_1^{2p_1}\,dy_1\, \int_{y_{1}}^\infty y_2^{-2p_2}\,dy_2 \\\\ = C\int_K^\xi\, y_1^{2p_1-2p_2+1}\,dy_1 = C (\xi^{2(p_1-p_2)+2} - K^{2(p_1-p_2)+2}) \le C \xi^{2(p_1-p_2)+2}, \end{eqnarray*} under the condition that \(p_2>1/2\) (otherwise the inner integral diverges).
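The elementary integral computation for \(v^1\) can be double-checked symbolically, replacing \(\exp(\pm 2\bar V)\) by the corresponding power majorants. This is only a sanity check of the exponent \(2(p_1-p_2)+2\) for one sample pair of admissible values (our choice \(p_1=2\), \(p_2=3/2\)), not part of the proof.

```python
import sympy as sp

# Sanity check of the v^1 bound with sample values p1 = 2, p2 = 3/2 (so p2 > 1/2):
# replace exp(2*Vbar(y)) by its majorant y**(2*p1) and exp(-2*Vbar(y)) by y**(-2*p2).
y1, y2, xi, K = sp.symbols('y1 y2 xi K', positive=True)
p1, p2 = sp.Integer(2), sp.Rational(3, 2)

inner = sp.integrate(y2**(-2*p2), (y2, y1, sp.oo))   # converges since 2*p2 > 1
v1 = 2*sp.integrate(y1**(2*p1) * inner, (y1, K, xi))

# The leading power of xi should be 2*(p1 - p2) + 2 = 3 for these values.
print(sp.expand(v1), sp.degree(sp.expand(v1), xi))
```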
Further, \begin{eqnarray*} & \displaystyle v^2(\xi) =4\int_K^\xi\, \exp(2\bar V(y_1))\,dy_1\, \int_{y_{1}}^\infty v^{1}(y_2) \exp(-2\bar V(y_2))\,dy_2 \\\\ & \displaystyle \le C \int_K^\xi\, \exp(2\bar V(y_1))\,dy_1\, \int_{y_{1}}^\infty y_2^{2(p_1-p_2)+2} \exp(-2\bar V(y_2))\,dy_2 \\\\ & \displaystyle \le C \int_K^\xi\, y_1^{2p_1}\,dy_1\, \int_{y_{1}}^\infty y_2^{2(p_1-p_2)+2 - 2p_2}\,dy_2 \\\\ & \displaystyle = C \int_K^\xi\, y_1^{2p_1}\, y_{1}^{2p_1-4p_2+3}\,dy_1 = C (\xi^{4(p_1-p_2)+4} - K^{4(p_1-p_2)+4}) \\\\ & \displaystyle \le C \xi^{4(p_1-p_2+1)}, \end{eqnarray*} where in the computation it was assumed that \(2p_1 - 4p_2 + 2 < -1\), that is, \(p_1 < 2p_2 -3/2\); otherwise the inner integral diverges. Since \(p_1\ge p_2\) from the beginning, this necessarily implies \(p_2 > 3/2\). ~ \noindent Next, \begin{eqnarray*} & \displaystyle v^3(\xi) =6\int_K^\xi\, \exp(2\bar V(y_1))\,dy_1\, \int_{y_{1}}^\infty v^{2}(y_2) \exp(-2\bar V(y_2))\,dy_2 \\\\ & \displaystyle \le C \int_K^\xi\, \exp(2\bar V(y_1))\,dy_1\, \int_{y_{1}}^\infty y_2^{4(p_1-p_2+1)} \exp(-2\bar V(y_2))\,dy_2 \\\\ & \displaystyle \le C \int_K^\xi\, y_1^{2p_1}\,dy_1\, \int_{y_{1}}^\infty y_2^{4(p_1-p_2+1) - 2p_2}\,dy_2 \\\\ & \displaystyle = C \int_K^\xi\, y_1^{2p_1}\, y_{1}^{4p_1-6p_2+5}\,dy_1 = C (\xi^{6(p_1-p_2+1)} - K^{6(p_1-p_2+1)}) \le C \xi^{6(p_1-p_2+1)}. \end{eqnarray*} For the inner integral to converge, the values of \(p_1, p_2\) must satisfy \(4p_1 - 6p_2 + 4 < -1\), that is, \(\displaystyle p_1 < \frac32 p_2-\frac54\). Due to the condition \(p_1 \ge p_2\), this necessarily implies \(\displaystyle p_2 > \frac52\). Note that, as usual, the constants \(C\) may be different for each \(q\) and even from line to line. It looks plausible that the general formula -- as long as the integrals converge -- reads \begin{equation}\label{vq} v^{q} (\xi) \leq C_{q} \xi^{2q(1+p_1-p_2)}.
\end{equation} The base cases having been established, let us show the induction step. Assume that for $q=n-1$ the formula is valid with some constant \(C_{n-1}\), that is, \[ v^{n-1} (\xi) \leq C_{n-1} \xi^{2(n-1)(1+p_1-p_2)}. \] Then for $q=n$ (as long as the integrals in the computation below converge) we have \begin{eqnarray*} & \displaystyle v^{n}(\xi) =2n \int_K^\xi\, \exp(2\bar V(y_1))\,dy_1\, \int_{y_{1}}^\infty v^{n-1}(y_2) \exp(-2\bar V(y_2))\,dy_2 \\\\ & \displaystyle \le 2n \int_K^\xi\, {y_1}^{2p_1}\,dy_1\, \int_{y_{1}}^\infty C_{n-1}\, y_2^{2(n-1)(p_1-p_2+1)} {y_2}^{-2p_2}\,dy_2 \\\\ & \displaystyle = C_n \int_K^\xi\, y_1^{2p_1}\,y_1^{2n-1+2(n-1)p_1-2np_2}\,dy_1= C_n \int_K^\xi\, y_1^{2n-1+2np_1-2np_2}\,dy_1 \le C_n \xi^{2n(p_1-p_2+1)}. \end{eqnarray*} Hence, indeed, by induction the formula (\ref{vq}) is established. The values of \(q\) for which the integrals in the computation converge must satisfy the bound \[ 2(q-1)(1+p_1-p_2) -2p_2 < -1, \] that is, \[ q < q_0:= 1 + \frac{2p_2-1}{2(1+p_1-p_2)} = \frac{1+2p_1}{2(1+p_1-p_2)}. \] As a consequence, necessarily \(p_2 > q-1/2\). Recall that in this paper only integer values of \(q\) are used; however, \(q_0\) introduced above need not be an integer, but in any case \(q_0 > 1\). Also, note that if \(p_1=p_2 = p\) as in \cite{Ver01}, then the latter inequality \(q<q_0\) reduces to \(q< p + 1/2\), precisely as in \cite{Ver01}. ~ {\bf 5.} By virtue of the established bounds (\ref{eq3})--(\ref{eq4}), the bound (\ref{eq5}) on convergence towards the stationary measure follows from various sources (cf., e.g., \cite{Ve2, Ver01}, et al.), and, hence, in this brief presentation we skip the details of this step. The existence of the invariant probability measure may be justified via the Harris--Khasminsky principle based on (\ref{eq3}) with any \(k\ge 1\). Its uniqueness follows, for example, from the bound (\ref{eq4}). This completes the proof of Theorem \ref{Thm1}.
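The exponent bookkeeping behind the induction step and the threshold \(q_0\) in the proof above can be verified symbolically. This checks the arithmetic only, in the same notation as above, and adds nothing to the argument itself.

```python
import sympy as sp

n, p1, p2, p, m = sp.symbols('n p1 p2 p m', positive=True)

# Induction step: from v^{n-1}(xi) <= C*xi**(2*(n-1)*(1+p1-p2)), the inner integrand
# has exponent 2*(n-1)*(1+p1-p2) - 2*p2; each integration raises the exponent by 1,
# and the outer integrand carries the extra factor y1**(2*p1).
after_outer = (2*(n - 1)*(1 + p1 - p2) - 2*p2 + 1) + 2*p1 + 1
assert sp.simplify(after_outer - 2*n*(1 + p1 - p2)) == 0

# The threshold q0 reduces to p + 1/2 when p1 = p2 = p, as in [Ver01].
q0 = 1 + (2*p2 - 1)/(2*(1 + p1 - p2))
assert sp.simplify(q0.subs({p1: p, p2: p}) - (p + sp.Rational(1, 2))) == 0

# Step 3 moment bound: exponent (2*p1 - 1) from C(|x|) plus (m + 1 - 2*p2)
# from the integral gives m + 2*(p1 - p2).
assert sp.simplify((2*p1 - 1) + (m + 1 - 2*p2) - (m + 2*(p1 - p2))) == 0
print("exponent bookkeeping verified")
```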
\section*{Acknowledgements} For the first author, this study was funded by the Russian Foundation for Basic Research grant \mbox{17-01-00633$\_$a}. For the second author, this study was funded by the Russian Academic Excellence Project ``5-100'' and by the Russian Science Foundation project 17-11-01098.
https://arxiv.org/abs/1203.1400
Isoperimetric inequality under Kähler Ricci flow
Let $({\M}, g(t))$ be a Kähler Ricci flow with positive first Chern class. We prove a uniform isoperimetric inequality for all time. In the process we also prove a Cheng-Yau type log gradient bound for positive harmonic functions on $({\M}, g(t))$, and a Poincaré inequality without assuming the Ricci curvature is bounded from below.
\section{Introduction} In this paper we study K\"ahler Ricci flows \begin{equation} \label{krf} \partial_t g_{i\bar j} = - R_{i\bar j} + g_{i\bar{j}} = \partial_i \partial_{\bar j} u, \quad t>0, \end{equation} on a compact K\"ahler manifold ${\bf M}$ of complex dimension $m=n/2$ with positive first Chern class; here $u$ is the Ricci potential. Given an initial K\"ahler metric $g_{i\bar j}(0)$, H. D. Cao \cite{Ca:1} proved that (\ref{krf}) has a solution for all time $t$. Recently, many results concerning the long time and uniform behavior of (\ref{krf}) have appeared. For example, when the curvature operator or the bisectional curvature is nonnegative, it is known that solutions to (\ref{krf}) stay smooth as time goes to infinity (see \cite{CCZ:1}, \cite{CT:1} and \cite{CT:2} for examples). In the general case, Perelman (cf. \cite{ST:1}) proved that the scalar curvature $R$ is uniformly bounded, and that the Ricci potential $u(\cdot, t)$ is uniformly bounded in $C^1$ norm with respect to $g(t)$. When the complex dimension is $m=2$ and $({{\bf M}}, g(t))$ is a solution to (\ref{krf}), it is proved in \cite{CW:1} that the isoperimetric constant of $({{\bf M}}, g(t))$ is bounded from below by a uniform constant. We mention that an isoperimetric estimate for the Ricci flow on the two-sphere was already proven by Hamilton in \cite{Ha:1}. In this paper, we prove that in all complex dimensions the isoperimetric constant of $({{\bf M}}, g(t))$ is bounded from below by a uniform constant. This extends the result of Chen-Wang mentioned above. This result seems to add more weight to the belief that the K\"ahler Ricci flow converges to a K\"ahler Ricci soliton as $t \to \infty$, except possibly on a subvariety of complex codimension $2$. To make the statement precise, let us introduce some notation and definitions.
We use ${\bf M}$ to denote a compact Riemannian manifold and $g(t)$ to denote the metric at time $t$; $d(x, y, t)$ is the geodesic distance under $g(t)$; $B(x, r, t) = \{ y \in {{\bf M}} \ | \ d(x, y, t) < r \}$ is the geodesic ball of radius $r$, under the metric $g(t)$, centered at $x$, and $|B(x, r, t)|_{g(t)}$ is the volume of $B(x, r, t)$ under $g(t)$; $d g(t)$ is the volume element. We also reserve $R=R(x, t)$ for the scalar curvature under $g(t)$. When the time variable $t$ is not explicitly used, we may also suppress it in the notations mentioned above. \medskip The main result of the paper is the following theorem. \begin{theorem} \label{thm1.1} Let $({{\bf M}}, g(t))$, $\partial_t g_{i\bar j} = - R_{i\bar j} + g_{i\bar{j}}$, be a K\"ahler Ricci flow on an $n$ real dimensional compact K\"ahler manifold with positive first Chern class. Then there exists a uniform constant $S_0$, depending only on the initial metric $g(0)$ and a numerical constant $C$, such that \[ \left[ \int_{{\bf M}} | u |^{n/(n-1)} d g(t) \right]^{(n-1)/n} \le S_0 \int_{{\bf M}} | \nabla u | dg(t) + \frac{C}{|{\bf M}|^{1/n}_{g(t)}} \int_{{\bf M}} | u| dg(t) \] for all $ u \in C^\infty({\bf M})$. \end{theorem} {\it Remark. It is well known that Theorem \ref{thm1.1} implies a uniform lower bound for the isoperimetric constant of $({{\bf M}}, g(t))$, i.e. there exists a positive constant $c_0$, depending only on the initial metric, such that \[ I({{\bf M}}, g(t)) \equiv \inf_{D \subset {\bf M}} \, \frac{ | \partial D |}{ \left[ \min \{ |D|, \, |{{\bf M}}-D|\} \right]^{(n-1)/n} } \ge c_0. \] Here all the volumes are with respect to $g(t)$, and $D$ is a subdomain of ${\bf M}$ such that $\partial D$ is an $(n-1)$-dimensional submanifold of ${\bf M}$. A proof can be found, e.g., in \cite[Section 5.1]{CLN:1}. } \medskip The proof of the theorem is based on the following properties of K\"ahler Ricci flows on a compact manifold with positive first Chern class. \vskip 0.5cm \noindent {\it Property A.
Let $({{\bf M}}, g(t))$ be a K\"ahler Ricci flow (\ref{krf}) on a compact manifold with positive first Chern class. There exist uniform positive constants $C$ and $\kappa$ such that, for all $t>0$: \\ 1. \quad $| R(g(t))| \le C$; \\ 2. \quad $diam ({\bf M}, g(t)) \le C$; \\ 3. \quad $\Vert u \Vert_{C^1} \le C$; \\ 4. \quad $|B(x, r, t)|_{g(t)} \ge \kappa r^n$ for all $r \in (0, diam ({\bf M}, g(t)))$; \\ 5. \quad $ |B(x, r, t)|_{g(t)} \le \kappa^{-1} r^n $ for all $r>0$. } \vskip 0.5cm \noindent {\it Property B. Under the same assumption as in Property A, there exists a uniform constant $S_2$ such that the following $L^2$ Sobolev inequality holds: \\ \[ \left( \int_{{\bf M}} v^{2n/(n-2)} d g(t) \right)^{(n-2)/n} \le S_2 \left( \int_{{\bf M}} | \nabla v |^2 dg(t) + \int_{{\bf M}} v^2 dg(t) \right) \] for all $ v \in C^\infty({\bf M}, g(t))$.\\ } Properties A1--A4 are due to Perelman (cf. \cite{ST:1}); Property B was first established in \cite{Z07:1} (see also \cite{Ye:1}, \cite{Z10:1}). Property A5 can be found in \cite{Z11:1} and also in \cite{CW:2}. The rest of the paper is organized as follows. In Section 2, we prove some gradient bounds for harmonic functions on $({{\bf M}}, g(t))$. Since the bounds do not rely on the usual lower bound on the Ricci curvature, they may be of independent interest. Using these bounds, we prove the theorem in Section 3. \medskip \section{gradient bounds for harmonic functions} In order to prove the theorem, in this section we state and prove a number of results on harmonic functions on certain manifolds with a fixed metric. These results are well known when the manifold has nonnegative Ricci curvature, a property that is not available for us. Since some of these results may be of independent interest, we will also deal with the real variable case and impose some conditions which are more general than needed for the proof of the theorems in Section 1. As the metric is independent of time in this section, we will suppress the time variable $t$.
In this section, the basic assumptions on the $n$ real dimensional manifolds ${\bf M}$ are {\it Assumption 1. $L^2$ Sobolev inequality: there is a positive constant $\alpha$ such that \[ \left( \int_{{\bf M}} u^{2n/(n-2)} d g \right)^{(n-2)/n} \le \alpha \left( \int_{{\bf M}} | \nabla u |^2 dg + \int_{{\bf M}} u^2 dg \right) \] for all $ u \in C^\infty({\bf M})$. } {\it Assumption 2. There exists a positive constant $\kappa$ such that \[ \kappa r^n \le |B(x, r)| \le \kappa^{-1} r^n, \qquad x \in {\bf M}, \quad 0 < r< diam({\bf M}) \le 1. \]} {\it Assumption 3. There exist a smooth function $L=L(x)$ and two smooth parallel $(2, 2)$ tensor fields $P$ and $Q$ such that the Ricci curvature is given by \[ R_{ij} =P^{kl}_{ij} \partial_k \partial_l L + Q^{kl}_{ij} g_{kl} \] in local coordinates. Moreover, $\Vert P \Vert_\infty \le 1$, $\Vert Q \Vert_\infty \le 1$. Here $\partial_k \partial_l L $ is the Hessian of $L$. } Note that Assumption 3 includes, as a special case, the formula for the Ricci curvature on K\"ahler manifolds, $\partial_i\partial_{\bar j} u = g_{i\bar j}-R_{i\bar j}$, where $u$ is the Ricci potential. \begin{lemma} \label{ledumean} Suppose $({\bf M}, g)$ is a compact Riemannian manifold of real dimension $n$ satisfying Assumptions 1, 2, 3. Let $u$ be a smooth harmonic function in $B(x_0, r)$, where $x_0 \in {\bf M}$ and $ r \le diam({\bf M})$. Then there exists a positive constant $C_0=C_0(\alpha, \kappa, \Vert \nabla L \Vert_\infty)$ such that \[ \sup_{x \in B(x_0, r/2)} |\nabla u(x)| \le C_0 \frac{1}{r} \left(\frac{1}{r^n} \int_{B(x_0, r)} u^2 dg \right)^{1/2}. \] \end{lemma} \proof Since $u$ solves $\Delta u=0$, by Bochner's formula we have \begin{equation} \label{dddu} \Delta |\nabla u |^2 = 2 | Hess \, u |^2 + 2 Ric (\nabla u, \nabla u).
\end{equation} Given $\sigma \in (0, 1)$, let $\psi=\psi(x)$ be a standard Lipschitz cut-off function such that $\psi(x)=0$ when $x \in B(x_0, r)^c$, $0 \le \psi \le 1$, $\psi(x)=1$ when $x \in B(x_0, \sigma r)$, and $|\nabla \psi| \le \frac{4}{(1-\sigma) r}$. We mention that no second order derivatives of $\psi$ are involved, so it suffices that $\psi$ be Lipschitz. For clarity of presentation, we write \[ f = |\nabla u|^2. \] Given a number $p \ge 1$, using $f^{2p-1} \psi^2$ as a test function on (\ref{dddu}), after a routine calculation we derive \begin{equation} \label{dfpsi} \aligned &\frac{2p-1}{p^2} \int_{B(x_0, r)} |\nabla (f^p \psi)|^2 dg \\ &\le \frac{C}{(1-\sigma)^2 r^2} \int_{B(x_0, r)} f^{2p} dg -2 \int_{B(x_0, r)} | Hess \, u |^2 f^{2p-1} \psi^2 dg - 2 \int_{B(x_0, r)} Ric (\nabla u, \nabla u) f^{2p-1} \psi^2 dg\\ &\equiv I_1 + I_2 +I_3. \endaligned \end{equation} Now we want to absorb part of $I_3$ by $I_2$, which is a good term. In local orthonormal coordinates, we denote by $u_i$ the $i$-th component of $\nabla u$. Then, by Assumption 3, we have, after integrating by parts, \[ \aligned I_3 &= - 2 \int R_{ij} u_i u_j f^{2p-1} \psi^2 dg\\ &=-2 \int P^{kl}_{ij} (\partial_k \partial_l L) \, u_i u_j f^{2p-1} \psi^2 dg - 2 \int Q^{kl}_{ij} g_{kl} u_i u_j f^{2p-1} \psi^2 dg\\ &=2 \int P^{kl}_{ij} (\partial_l L) \, (\partial_k u_i) u_j f^{2p-1} \psi^2 dg + 2 \int P^{kl}_{ij} (\partial_l L) \, u_i (\partial_k u_j) f^{2p-1} \psi^2 dg\\ & \qquad+2 \int P^{kl}_{ij} (\partial_l L) \, u_i u_j \partial_k (f^{2p-1} \psi^2) dg - 2 \int Q^{kl}_{ij} g_{kl} u_i u_j f^{2p-1} \psi^2 dg. \endaligned \] Here we also used the assumption that the $P$ tensor is parallel. To control the second-to-last term in the above identity, we notice that $ |u_i u_j | \le |\nabla u|^2 = f$ and that \[ f \partial_k (f^{2p-1} \psi^2) =(\partial_k f^p) \psi^2 \frac{2p-1}{p} f^p + 2 f^{2p} (\partial_k \psi) \psi.
\] From here, using Young's inequality, it is easy to see that \[ I_3 \le \frac{2p-1}{2p^2} \int |\nabla (f^p \psi)|^2 dg + \int | Hess \, u |^2 f^{2p-1} \psi^2 dg + C p \left(\frac{ \Vert \nabla L \Vert^2_\infty}{[(1-\sigma) r]^2} + \Vert \nabla L \Vert^2_\infty +1 \right) \int f^{2p} \psi^2 dg. \] Substituting this into (\ref{dfpsi}), we deduce \[ \int |\nabla (f^p \psi)|^2 dg \le C p^2 \left(\frac{ \Vert \nabla L \Vert^2_\infty}{[(1-\sigma) r]^2} + \Vert \nabla L \Vert^2_\infty +1 \right) \int f^{2p} \psi^2 dg. \] Since $diam ({\bf M}) \le 1$ by Assumption 2, the last inequality implies \[ \int |\nabla (f^p \psi)|^2 dg \le C p^2 \frac{ \Vert \nabla L \Vert^2_\infty + 1}{[(1-\sigma) r]^2} \int f^{2p} \psi^2 dg. \] Using the $L^2$ Sobolev inequality in Assumption 1 and Moser's iteration, we deduce, by choosing $p= (n/(n-2))^i$ and replacing $(1-\sigma)r$ by $\left(1/4\right)^i r$, $i=0, 1, 2, ...$, that \begin{equation} \label{du3r/4} \sup_{x \in B(x_0, r/2)} |\nabla u(x)|^2 \le C_1 \frac{1}{r^n} \int_{B(x_0, 3r/4)} |\nabla u|^2 dg \end{equation} where $C_1=C_1(\alpha, \kappa, \Vert \nabla L \Vert_\infty)$. We observe that, although the number $p$ appears on the right-hand side of the inequality preceding (\ref{du3r/4}), its growth is only exponential in $i$. As is well known, such factors are absorbed by the Moser iteration process, just like the term $1/(1-\sigma)^2$. Next we take $\sigma=3/4$ in the definition of the cut-off function $\psi$. Using $u \psi^2$ as a test function on $\Delta u =0$, we infer, after a routine calculation, \[ \int_{B(x_0, 3r/4)} |\nabla u|^2 dg \le \frac{C}{r^2} \int_{B(x_0, r)} u^2 dg. \] Here $C$ is a numerical constant. Combining the last two inequalities we arrive at \[ \sup_{x \in B(x_0, r/2)} |\nabla u(x)|^2 \le C_0 \frac{1}{r^{n+2}} \int_{B(x_0, r)} u^2 dg \] where $C_0=C_0(\alpha, \kappa, \Vert \nabla L \Vert_\infty)$. \qed The next lemma is simply the $L^2$ mean value inequality for the Laplace and heat equations under Assumptions 1 and 2.
Since the result is well known (Grigoryan \cite{Gr:1} and Saloff-Coste \cite{Sa:1}), we omit the proof. \begin{lemma} \label{lemvp} Let ${\bf M}$ be a manifold satisfying Assumptions 1, 2. Suppose $u$ is a smooth harmonic function in $B(x_0, r)$, where $x_0 \in {\bf M}$ and $ r \le diam({\bf M})$. Then there exists a positive constant $C_1=C_1(\alpha, \kappa)$ such that \[ \sup_{x \in B(x_0, r/2)} |u(x)| \le C_1 \left(\frac{1}{r^n} \int_{B(x_0, r)} u^2 dg \right)^{1/2}. \] Suppose $u$ is a solution of the heat equation $\Delta u -\partial_t u =0$ in the space-time cube $B(x_0, r) \times [t_0-r^2, t_0]$. Then \[ \sup_{(x, t) \in B(x_0, r/2) \times [t_0 - r^2/4, \, t_0]} |u(x, t)| \le C_1 \left(\frac{1}{r^{n+2}} \int^{t_0}_{t_0-r^2}\int_{B(x_0, r)} u^2 dg ds \right)^{1/2}. \] \end{lemma} The next lemma provides bounds for the Green's function of the Laplacian and its gradient. \begin{lemma} \label{leDGbound} Let ${\bf M}$ be a manifold satisfying Assumptions 1, 2 and 3. Assume also $diam({\bf M})>\beta>0$ for a positive constant $\beta$. Let $\Gamma_0$ be the Green's function of the Laplacian $\Delta$ on ${\bf M}$. Then there exists a positive constant $C_0=C_0(\alpha, \beta, \kappa, \Vert \nabla L \Vert_\infty)$ such that (a). $ |\Gamma_0(x, y)| \le \frac{C_0}{d(x, y)^{n-2}}$, $x, y \in {\bf M}$, (b). $ |\nabla_x \Gamma_0(x, y)| \le \frac{C_0}{d(x, y)^{n-1}}$, $x, y \in {\bf M}$. \end{lemma} \proof Once (a) is proven, (b) is just a consequence of (a) and Lemma \ref{ledumean} applied on the ball $B(x, d(x, y)/2)$. So now we just need to prove (a). On a compact manifold ${\bf M}$, we know that \begin{equation} \label{gammaG} \Gamma_0(x, y) = \int^\infty_0 \left( G(x, t, y)- \frac{1}{|{\bf M}|} \right) dt \end{equation} where $G$ is the fundamental solution of the heat equation $\Delta u - \partial_t u=0$. We remark that the metric is fixed here. So we need to bound $G$.
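Formula (\ref{gammaG}) can be illustrated on the model manifold $S^1$ (a case not covered by the dimension restrictions here, but one where everything is explicit): integrating the heat kernel minus $1/|{\bf M}|$ in time reproduces the classical Green's function of the circle. The following self-contained numerical check is ours and is not part of the proof.

```python
import math

# On S^1 with |M| = 2*pi, the heat kernel is
#   G(x,t,y) = 1/(2*pi) + (1/pi) * sum_{k>=1} exp(-k^2 t) * cos(k*theta), theta = x - y,
# so integrating G - 1/|M| over t in (0, infinity) term by term gives
#   Gamma_0 = (1/pi) * sum_{k>=1} cos(k*theta)/k^2,
# whose classical closed form on [0, 2*pi] is pi/6 - theta/2 + theta^2/(4*pi).
def green_series(theta, kmax=100_000):
    return sum(math.cos(k * theta) / k**2 for k in range(1, kmax + 1)) / math.pi

def green_closed(theta):
    return math.pi / 6 - theta / 2 + theta**2 / (4 * math.pi)

for theta in (0.5, 1.0, 2.5):
    print(theta, green_series(theta), green_closed(theta))
```

The truncated Fourier series and the closed form agree to within the tail error $O(1/k_{\max})$, confirming the time-integral representation on this model case.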
Under Assumptions 1 and 2, Grigoryan \cite{Gr:1} and Saloff-Coste \cite{Sa:1} proved that there exist positive constants $A_1, A_2, A_3$, depending only on $\alpha$ and $\kappa$, such that \begin{equation} \label{Gaub} G(x, t, y) \le A_1 ( 1 + \frac{1}{t^{n/2}} ) e^{- A_2 d(x, y)^2/t}. \end{equation} Fixing $x, y$ and $t$, we write $u=u(z, l) =G(z, l, y)$ and regard it as a solution of the heat equation in the cube $B(x, r) \times [t-r^2, t]$. Here $r=\sqrt{t}/2$. Extending Lemma \ref{ledumean} to the parabolic case in a routine manner, we know that \[ |\nabla u(x, t)| \le \frac{C_1}{r} \left(\frac{1}{r^{n+2}} \int^t_{t-r^2}\int_{B(x, r)} u^2 dg ds \right)^{1/2}. \] Substituting (\ref{Gaub}) into the right-hand side, we find that \[ | \nabla_x G(x, t, y)| \le A_1 ( 1 + \frac{1}{t^{(n+1)/2}} ) e^{- A_2 d(x, y)^2/t}. \] Here the constants $A_1$ and $A_2$ may have changed. It is well known that this gradient bound and the upper bound (\ref{Gaub}) together imply a Gaussian lower bound for the heat kernel $G$; see, e.g., \cite[p. 1165]{CD:1}. Now, by \cite{Sa:1}, the following $L^2$ Poincar\'e inequality holds: for any $u \in C^\infty({\bf M})$, $r \in (0, diam({\bf M})]$, \begin{equation} \label{l2Poin} \int_{B(x, r/2)} | u - \bar u_{B(x, r/2)}|^2 dg \le A_3 r^2 \int_{B(x, r)} |\nabla u|^2 dg. \end{equation} By a trick of Jerison \cite{J:1}, which uses only the volume doubling property, one has \begin{equation} \label{goodl2Poin} \int_{B(x, r)} | u - \bar u_{B(x, r)}|^2 dg \le C A_3 r^2 \int_{B(x, r)} |\nabla u|^2 dg. \end{equation} Here $C$ depends only on $\kappa$. We mention that some of the cited results were stated for complete noncompact manifolds, but they are also valid for complete, closed manifolds as long as the diameters are uniformly bounded. Let $u_0 \in C^\infty({\bf M})$ be a function such that $\int_{\bf M} u_0 dg =0$.
Then the function \begin{equation} \label{uxt=} u(x, t) = \int_{\bf M} \left( G(x, t, z)- \frac{1}{|{\bf M}|} \right) u_0(z) dg(z) \end{equation} is a solution to the heat equation such that $\int_{\bf M} u(x, t) dg(x)=0$. By the $L^2$ Poincar\'e inequality with $r=diam ({\bf M})$, we have \[ \int_{\bf M} u^2 dg \le C \, A_3 \, diam({\bf M})^2 \int_{\bf M} |\nabla u|^2 dg \le C A_3 \int_{\bf M} |\nabla u|^2 dg, \] since $diam({\bf M}) \le 1$ by assumption. From this we deduce \[ \frac{d}{d t} \int_{\bf M} u^2 dg = - 2 \int_{\bf M} |\nabla u|^2 dg \le -2 (CA_3)^{-1} \int_{\bf M} u^2 dg, \] and consequently \[ \int_{\bf M} u^2(z, s) dg \le e^{ - 2 (CA_3)^{-1} s} \int_{\bf M} u^2_0(z) dg, \qquad s>0. \] Recall that we assume $diam({\bf M})>\beta>0$. For $t \ge \beta^2$, we can apply Lemma \ref{lemvp} to get \[ u^2(x, t) \le C^2_1 \frac{1}{\beta^{n+2}} \int^t_{t-\beta^2} \int_{{\bf M}} u^2(z, s) dg ds. \] Combining this with the previous inequality, we arrive at \[ u^2(x, t) \le C_2 e^{ - 2 (CA_3)^{-1} t} \int_{\bf M} u^2_0(z) dg, \] where $C_2=C_2(\alpha, \beta, \kappa, A_3)$. By (\ref{uxt=}), this means \[ \left[ \int_{\bf M} \left( G(x, t, z)- \frac{1}{|{\bf M}|} \right) u_0(z) dg \right]^2 \le C_2 e^{ - 2 (CA_3)^{-1} t} \int_{\bf M} u^2_0(z) dg. \] Fixing $x \in {\bf M}$ and $t \ge \beta^2$, and taking $u_0(z) = G(x, t, z)- \frac{1}{|{\bf M}|}$ in the above inequality, we obtain \begin{equation} \label{intg-1m} \int_{\bf M} \left( G(x, t, z)- \frac{1}{|{\bf M}|} \right)^2 dg \le C_2 e^{ - 2 (CA_3)^{-1} t}, \quad t \ge \beta^2. \end{equation} Fixing $x$, the function $h(z, t) \equiv G(x, t, z)- \frac{1}{|{\bf M}|}$ is also a solution to the heat equation. Applying the mean value inequality in Lemma \ref{lemvp} on the cube $B(y, \beta) \times [t-\beta^2, t]$, we infer \[ h^2(y, t) \le C^2_1 \frac{1}{\beta^{n+2}} \int^t_{t-\beta^2} \int_{{\bf M}} h^2(z, s) dg ds.
\] That is, \[ \left(G(x, t, y)- \frac{1}{|{\bf M}|}\right)^2 \le C^2_1 \frac{1}{\beta^{n+2}} \int^t_{t-\beta^2} \int_{{\bf M}} \left( G(x, s, z)- \frac{1}{|{\bf M}|} \right)^2 dg ds. \] Substituting (\ref{intg-1m}) into the last inequality, we deduce \begin{equation} \label{Gboundt>b} \left| G(x, t, y)- \frac{1}{|{\bf M}|} \right| \le C_3 e^{- C_4 t}, \qquad t \ge \beta^2, \end{equation} where $C_3, C_4$ depend only on $\alpha, \beta, \kappa$ and $A_3$, which in turn depends only on $\alpha, \kappa$. From (\ref{gammaG}), \[ \aligned \Gamma_0(x, y) &= \int^{\beta^2}_0 \left( G(x, t, y)- \frac{1}{|{\bf M}|} \right) dt + \int^\infty_{\beta^2} \left( G(x, t, y)- \frac{1}{|{\bf M}|} \right) dt\\ &\equiv I_1 + I_2. \endaligned \] Using the bound (\ref{Gaub}) on $I_1$ and (\ref{Gboundt>b}) on $I_2$, we derive, after a simple integration, \[ |\Gamma_0(x, y) | \le \frac{C_0}{d(x, y)^{n-2}}, \] where $C_0$ depends only on $\alpha, \beta, \kappa$. This proves part (a) of the lemma. As mentioned earlier, part (b) follows from part (a) and Lemma \ref{ledumean}. \qed The next result is a Cheng-Yau type log gradient estimate. Although it is not used in the proof of the theorems, it may be of independent interest. \begin{proposition} Let ${\bf M}$ be a manifold satisfying Assumptions 1, 2 and 3. Let $u$ be a positive harmonic function in the geodesic ball $B(x, 2r)$, which is properly contained in ${\bf M}$. Then there exists a positive constant $C$, depending only on the controlling constants in Assumptions 1-3, such that \[ \sup_{B(x, r)} | \nabla \ln u | \le \frac{C}{r} \] when $ r \in (0, 1]$. \end{proposition} \proof For convenience, we use the following notation: \[ h \equiv \ln u, \quad F \equiv | \nabla h|^2. \] Following \cite{CY:1}, it is well known that $\Delta h = - F$ and \[ \Delta F = - 2 \nabla h \nabla F + 2 | Hess \, h|^2 + 2 Ric (\nabla h, \nabla h). \] Consider the function \begin{equation} \label{w=} w \equiv F^{5n}.
\end{equation} By a routine calculation, we know that, for any $p \ge 1$, \begin{equation} \label{ddwp} \Delta w^p \ge - 2 \nabla h \nabla w^p + 10 n p F^{5np-1} | Hess \, h|^2 + 10 n p F^{5np-1} Ric (\nabla h, \nabla h). \end{equation} Given $\sigma \in (0, 1)$, let $\psi=\psi(x)$ be a standard smooth cut-off function such that $\psi(x)=0$ when $x \in B(x_0, r)^c$, $0 \le \psi \le 1$, $\psi(x)=1$ when $x \in B(x_0, \sigma r)$, and $|\nabla \psi| \le \frac{4}{(1-\sigma) r}$. Using $w^p \psi^2$ as a test function on (\ref{ddwp}), we deduce, after a straightforward calculation, that \begin{equation} \label{dwpp} \aligned \int |\nabla(w^p \psi)|^2 dg &\le -10 n p \int F^{5np-1} \, | Hess \, h|^2 w^p \psi^2 dg + 2 \int \nabla h \nabla w^p \, w^p \psi^2 dg\\ &\qquad - 10 n p \int F^{5np-1} Ric (\nabla h, \nabla h) w^p \psi^2 dg\\ &\equiv I_1 + I_2 + I_3. \endaligned \end{equation} Next we will show that the negative term $I_1$ dominates $I_2$ and $I_3$, modulo some harmless terms. Observe that \[ \aligned I_2 &= \int \psi^2 \nabla h \nabla w^{2p} dg \\ &= -2 \int \psi \nabla \psi \nabla h \, w^{2p} dg - \int \psi^2 \Delta h \, w^{2p} dg. \endaligned \] Recall that $\Delta h = -F = -|\nabla h|^2$. Hence, by Young's inequality, for any given $\epsilon>0$, \begin{equation} \label{i2<} I_2 \le (\epsilon +1) \int F w^{2 p} \psi^2 dg + \epsilon^{-1} \Vert \nabla \psi \Vert^2_\infty \int w^{2p} \psi^2 dg. \end{equation} It takes a little longer to prove the bound for $I_3$. By our condition on the Ricci curvature $R_{ij}$, we have \[ I_3 = -10 n p \int F^{5np-1} ( P^{kl}_{ij} \partial_k \partial_l L + Q^{kl}_{ij} g_{kl}) \, \partial_i h \partial_j h \, w^p \psi^2 dg.
\] After integration by parts, this becomes \begin{equation} \label{i3=} \aligned I_3 &= 10 np (5np-1) \int F^{5 np-2} \partial_k F \, P^{kl}_{ij} \partial_l L \, \partial_i h \partial_j h \, w^p \psi^2 dg \\ & \qquad + 10 n p \int F^{5 np-1} \, P^{kl}_{ij} \partial_l L \, (\partial_k \partial_i h) \, \partial_j h \, w^p \psi^2 dg\\ & \qquad + 10 n p \int F^{5 np-1} \, P^{kl}_{ij} \partial_l L \, \partial_i h \, (\partial_k \partial_j h) \, w^p \psi^2 dg \\ & \qquad + 10 n p \int F^{5 np-1} \, P^{kl}_{ij} \partial_l L \, \partial_i h \, \partial_j h \, \partial_k (w^p \psi) \psi dg \\ & \qquad + 10 n p \int F^{5 np-1} \, P^{kl}_{ij} \partial_l L \, \partial_i h \, \partial_j h \, w^p \psi \partial_k \psi \, dg\\ &\qquad -10 n p \int F^{5np-1} Q^{kl}_{ij} g_{kl}\, \partial_i h \partial_j h \, w^p \psi^2 dg\\ &\equiv T_1 + ... + T_6. \endaligned \end{equation} Let us bound $T_i$, $i=1, ..., 6$. Observe that \[ | T_1 | \le 10 np (5np-1) \Vert \nabla L \Vert_\infty \int F^{5 np-2} |\nabla F | \, | \nabla h |^2 \, w^p \psi^2 dg. \] Since $| \nabla h |^2 = F$, we deduce, using $w^p=F^{ 5np}$, \[ \aligned | T_1 | &\le 10 np (5np-1) \Vert \nabla L \Vert_\infty \int F^{5 np-1} |\nabla F | \, w^p \psi^2 dg\\ &\le 10 np \Vert \nabla L \Vert_\infty \int |\nabla w^p | \, w^p \psi^2 dg. \endaligned \] Thus, after a little calculation, we obtain \begin{equation} \label{t1<} | T_1 | \le \frac{1}{10} \int | \nabla (w^p \psi) |^2 dg + c p^2 \Vert \nabla L \Vert^2_\infty \int w^{2 p} \psi^2 dg + c \Vert \nabla \psi \Vert^2_\infty \int_{supp \, \psi} w^{2 p} dg. \end{equation} Next, \[ \aligned | T_2 | &\le 10 np \Vert \nabla L \Vert_\infty \int F^{5 np-1} \, | Hess \, h | \, | \nabla h | \, w^p \psi^2 dg \\ & \le np \int F^{5 np-1} \, | Hess \, h |^2 \, \, w^p \psi^2 dg + c np \Vert \nabla L \Vert^2_\infty \int F^{5 np-1} \, |\nabla h |^2 \, \, w^p \psi^2 dg.
\endaligned \] Recalling again that $|\nabla h |^2 = F$ and the definition of $I_1$, we deduce \begin{equation} \label{t2<} | T_2 | \le - \frac{I_1}{10} + c np \Vert \nabla L \Vert^2_\infty \int \, w^{2 p} \psi^2 dg. \end{equation} Since $T_3$ is similar to $T_2$, we also have \begin{equation} \label{t3<} | T_3 | \le - \frac{I_1}{10} + c np \Vert \nabla L \Vert^2_\infty \int \, w^{2 p} \psi^2 dg. \end{equation} By Young's inequality, \[ | T_4 | \le \frac{1}{2} \int | \nabla ( w^p \psi)|^2 dg + 50 n^2 p^2 \Vert \nabla L \Vert^2_\infty \int F^{10 np-2} \, |\nabla h |^4 \psi^2\, dg. \] Since $F = |\nabla h|^2 $ and $w=F^{5n}$, this shows \begin{equation} \label{t4<} | T_4 | \le \frac{1}{2} \int | \nabla ( w^p \psi)|^2 dg + c p^2 \Vert \nabla L \Vert^2_\infty \int w^{2 p} \psi^2\, dg. \end{equation} Next, \[ |T_5| \le 10 np \Vert \nabla L \Vert_\infty \, \Vert \nabla \psi \Vert_\infty \int F^{5np-1} | \nabla h|^2 w^p \psi dg, \] which becomes \begin{equation} \label{t5<} |T_5| \le 10 np \Vert \nabla L \Vert_\infty \, \Vert \nabla \psi \Vert_\infty \int w^{2p} \psi dg. \end{equation} Lastly, \begin{equation} \label{t6<} | T_6 | \le 10 np \int F^{5np-1} | \nabla h|^2 w^p \psi^2 dg = 10 np \int w^{2 p} \psi^2 dg. \end{equation} Substituting (\ref{t1<})-(\ref{t6<}) into (\ref{i3=}), we find that \begin{equation} \label{i3<} | I_3 | \le \frac{|I_1|}{5} + \frac{3}{5} \int | \nabla ( w^p \psi)|^2 dg + c \frac{ p^2 \Vert \nabla L \Vert^2_\infty +1}{[(1-\sigma) r]^2} \int_{supp \, \psi} w^{2p} dg. \end{equation} Here we recall that \[ I_1= -10 n p \int F^{5np-1} \, | Hess \, h|^2 w^p \psi^2 dg. \] Using the inequality \[ | Hess \, h |^2 \ge \frac{1}{n} ( \Delta h )^2 = \frac{1}{n} |\nabla h |^4, \] we find that \[ I_1 = \frac{I_1}{2} + \frac{I_1}{2} \le \frac{I_1}{2} - 5p \int F^{5np-1} \, | \nabla h|^4 w^p \psi^2 dg, \] which implies, since $w=F^{ 5n}$ and $F = | \nabla h |^2$, that \begin{equation} \label{i1<2} I_1 \le \frac{I_1}{2} - 5p \int F w^{2 p} \psi^2 dg.
\end{equation} Substituting (\ref{i1<2}), (\ref{i3<}) and (\ref{i2<}) into (\ref{dwpp}), we deduce \[ \aligned \int |\nabla(w^p \psi)|^2 dg &\le \frac{I_1}{2} - 5p \int F w^{2 p} \psi^2 dg + (\epsilon +1) \int F w^{2 p} \psi^2 dg + \epsilon^{-1} \Vert \nabla \psi \Vert^2_\infty \int w^{2p} \psi^2 dg \\ &\qquad + \frac{|I_1|}{5} + \frac{3}{5} \int | \nabla ( w^p \psi)|^2 dg + c \frac{ p^2 \Vert \nabla L \Vert^2_\infty +1}{[(1-\sigma) r]^2} \int_{supp \, \psi} w^{2p} dg. \endaligned \] Since $p \ge 1$, we can take $\epsilon=1$ and obtain \[ \int |\nabla(w^p \psi)|^2 dg + \int (w^p \psi)^2 dg \le c \frac{ p^2 \Vert \nabla L \Vert^2_\infty +1}{[(1-\sigma) r]^2} \int_{supp \, \psi} w^{2p} dg, \] where $c$ may have changed in value. By the Sobolev inequality in Assumption 1, this implies \[ \left( \int (w^p \psi)^{2n/(n-2)} dg \right)^{(n-2)/n} \le c \alpha \frac{ p^2 \Vert \nabla L \Vert^2_\infty +1}{[(1-\sigma) r]^2} \int_{supp \, \psi} w^{2p} dg. \] From this, the standard Moser iteration implies \[ \sup_{B(x, \sigma r)} w^2 \le \frac{C(\alpha, n, \Vert \nabla L \Vert^2_\infty)}{(1-\sigma)^n r^n} \int_{B(x, r)} w^2 dg \] for $r \in (0, 1]$ and $\sigma \in (0, 1)$. Using $w=F^{5n}$, we arrive at \[ \sup_{B(x, \sigma r)} F \le \left( \frac{C(\alpha, n, \Vert \nabla L \Vert^2_\infty)}{(1-\sigma)^n r^n} \int_{B(x, r)} F^{10n} dg \right)^{1/(10n)} \] for $r \in (0, 1]$ and $\sigma \in (0, 1)$. Using the volume doubling property and an algebraic trick (see, e.g., \cite{LS:1}), we deduce \begin{equation} \label{F<} \sup_{B(x, r/2)} F \le \frac{C(\alpha, n, \Vert \nabla L \Vert^2_\infty)}{r^n} \int_{B(x, r)} F dg \end{equation} for $r \in (0, 1]$. Using integration by parts, it is known that \[ \int_{B(x, r)} F dg = \int_{B(x, r)} |\nabla (\ln u)|^2 dg \le C \frac{|B(x, 2 r)|}{ r^2} \le c r^{n-2}, \] where we have used Assumption 2. Substituting this into (\ref{F<}), we arrive at \[ \sup_{B(x, r/2)} |\nabla (\ln u)| \le \frac{C(\alpha, n, \Vert \nabla L \Vert^2_\infty)}{r}, \] proving the proposition.
\qed \section{Proof of the Theorem} \proof (Theorem \ref{thm1.1}). For simplicity of presentation, we omit the time variable in the proof. It is also clear that we can take $\bar u =0$. {\it Step 1.} \medskip Pick $ u \in C^\infty({\bf M})$. Since $\bar u=0$, we have \[ u(x) = -\int_{\bf M} \Gamma_0(x, y) \Delta u(y) dg(y), \] where $\Gamma_0$ is the Green's function of the Laplacian on ${\bf M}$. Pick a small ball $B(x, r)$. Then \[ \aligned u(x) &= -\lim_{r \to 0} \int_{{\bf M}-B(x, r)} \Gamma_0(x, y) \Delta u(y) dg(y)\\ & = \lim_{r \to 0} \int_{{\bf M}-B(x, r)} \nabla \Gamma_0(x, y) \nabla u(y) dg(y) - \lim_{r \to 0}\int_{\partial B(x, r)} \Gamma_0(x, y) \partial_n u(y) dS. \endaligned \] Here we have used integration by parts. Note that $|\Gamma_0(x, y)| \le \frac{C_0}{d(x, y)^{n-2}}$ by Lemma \ref{leDGbound}. Also, the area of $\partial B(x, r)$, the small sphere of radius $r$, is bounded from above by $C r^{n-1}$. So the second limit is $0$. We mention that one does not need a uniform-in-time bound for $|\partial B(x, r)|$, since we are freezing a time and taking the limit $r \to 0$. Hence \[ u(x) = \int_{\bf M} \nabla \Gamma_0(x, y) \nabla u(y) dg(y). \] According to Lemma \ref{leDGbound}, this implies \begin{equation} \label{u<I1du} |u(x)| \le C_0 \int_{\bf M} \frac{|\nabla u(y)|}{d(x, y)^{n-1}} dg(y) \equiv C_0 I_1(|\nabla u|) (x). \end{equation} Here $I_1$ is the Riesz potential of order $1$. We claim that there exists a constant $C_1$, depending only on the constant $\kappa$ in Property A 5, such that \begin{equation} \label{I1mf} |I_1(f)(x)| \le C_1 [ M(f) (x)]^{1-(1/n)} \, \Vert f \Vert^{1/n}_1 \end{equation} for all smooth functions $f$ on ${\bf M}$. Here $M(f)$ is the Hardy-Littlewood maximal function. The proof given here is more or less the same as in the Euclidean case (see \cite{Zi:1}, p. 86), under Property A 5, i.e. $\kappa r^n \le |B(x, r)| \le \kappa^{-1} r^n$.
Let $\delta$ be a positive number. Then \[ \aligned |I_1(f)(x)| &\le \int_{B(x, \delta)} \frac{|f(y)|}{d(x, y)^{n-1}} dg + \int_{B^c(x, \delta)} \frac{|f(y)|}{d(x, y)^{n-1}} dg\\ &\le \Sigma^\infty_{j=0} \int_{\{ 2^{-j-1} \delta \le d(x, y) <2^{-j} \delta \}} \frac{|f(y)|}{d(x, y)^{n-1}} dg + \delta^{1-n} \int_{\bf M} |f(y)| dg\\ & \le \Sigma^\infty_{j=0} (2^{(j+1)}/\delta)^{n-1} |B(x, 2^{-j} \delta)| \frac{1}{|B(x, 2^{-j} \delta)|} \int_{B(x, 2^{-j} \delta)} |f(y)|dg + \delta^{1-n} \int_{\bf M} |f(y)| dg\\ &\le \Sigma^\infty_{j=0} (2^{(j+1)}/\delta)^{n-1} |B(x, 2^{-j} \delta)| \, M(f)(x) + \delta^{1-n} \Vert f \Vert_1. \endaligned \] By Property A 5, \[ |B(x, 2^{-j} \delta)| \le \kappa^{-1} (2^{-j} \delta)^n. \] Combining the last two inequalities, we deduce \[ |I_1(f)(x)| \le C \kappa^{-1} \delta \, M(f)(x) + \delta^{1-n} \Vert f \Vert_1, \] which implies (\ref{I1mf}) by taking $\delta =[ M(f)(x)/\Vert f \Vert_1]^{-1/n}$. We remark that if $\delta>diam ({\bf M})$, then the integral $\int_{B^c(x, \delta)} \frac{|f(y)|}{d(x, y)^{n-1}} dg$ is regarded as zero. Since Properties A 4-5 imply the volume doubling property, it is well known that the maximal operator is bounded from $L^1({\bf M})$ to weak $L^1({\bf M})$, i.e. there is a positive constant $C_2$, depending only on $\kappa$, such that \[ \beta | \{ x \, | \, M(f)(x)> \beta \} | \le C_2 \Vert f \Vert_1 \] for all $\beta>0$. A short proof can be found, e.g., in Chapter 3 of Folland's book \cite{Fo:1}. The proof there is written for the Euclidean space but, as indicated below, it works for all metric spaces with the volume doubling property. Pick $x \in S_\beta \equiv \{ x \, | \, M(f)(x)> \beta \}$. Then, by the definition of $M(f)(x)$, there exists a radius $r_x>0$ such that \[ \frac{1}{|B(x, r_x)|} \int_{B(x, r_x)} |f(y)| dg > \beta. \] Note that the family of balls $\{ B(x, r_x) \, | \, x \in S_\beta \}$ is an open cover of $S_\beta$.
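Before continuing, note that the choice $\delta =[ M(f)(x)/\Vert f \Vert_1]^{-1/n}$ above balances the two terms: both $\delta \, M(f)(x)$ and $\delta^{1-n}\Vert f\Vert_1$ become $M(f)(x)^{1-1/n}\Vert f\Vert_1^{1/n}$, which is exactly the right-hand side of (\ref{I1mf}). A quick numerical sanity check of this exponent algebra (the values standing in for $M(f)(x)$ and $\Vert f\Vert_1$ below are arbitrary placeholders):

```python
# check: delta = (||f||_1 / M)^(1/n) makes delta*M and delta^(1-n)*||f||_1
# both equal to M^(1-1/n) * ||f||_1^(1/n)
for n in (3, 4, 7):
    for M, f1 in ((2.0, 5.0), (10.0, 0.5)):
        delta = (f1 / M) ** (1.0 / n)
        target = M ** (1 - 1.0 / n) * f1 ** (1.0 / n)
        assert abs(delta * M - target) < 1e-9 * target
        assert abs(delta ** (1 - n) * f1 - target) < 1e-9 * target
```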
Since the manifold is compact, by a well-known covering argument for compact metric spaces, there exists a finite subfamily $\{ B(x_i, r_{x_i}) \, | \, i=1, ..., m \} $ of disjoint balls such that $\{ B(x_i, 3 r_{x_i}) \, | \, i=1, ..., m \}$ covers $S_\beta$. Using the volume doubling property, one has \[ \beta |S_\beta| \le \beta \Sigma_i |B(x_i, 3 r_{x_i})| \le C \Sigma_i \beta |B(x_i, r_{x_i})| \le C \Vert f \Vert_1. \] Combining this with (\ref{I1mf}), we obtain, for all $\alpha>0$, \[ \aligned | \{ x \, | \, I_1(f)(x)> \alpha \} |& \le | \{ x \, | \, M(f)(x)> \frac{\alpha^{n/(n-1)}}{\Vert f \Vert^{1/(n-1)}_1 C^{n/(n-1)}_1}\} | \\ &\le C_2 C^{n/(n-1)}_1 \Vert f \Vert^{1/(n-1)}_1 \alpha^{-n/(n-1)} \Vert f \Vert_1. \endaligned \] Thus \begin{equation} \label{I1weak} \alpha^{n/(n-1)} | \{ x \, | \, I_1(f)(x)> \alpha \} | \le C_2 C^{n/(n-1)}_1 \Vert f \Vert^{n/(n-1)}_1. \end{equation} By (\ref{u<I1du}) we have \[ | \{ x \, | \, |u(x)|>\alpha \} | \le | \{ x \, | \, I_1(|\nabla u|)(x)>\alpha C^{-1}_0 \} |, \] which implies, via (\ref{I1weak}) with $f=|\nabla u|$, the following statement: if $\bar u =0$, then for all $\alpha>0$ it holds that \begin{equation} \label{uweak} \alpha^{n/(n-1)} | \{ x \, | \, |u(x)|> \alpha \} | \le C_3 \Vert \nabla u \Vert^{n/(n-1)}_1. \end{equation} Here $C_3$ is a constant depending only on the controlling constants in Properties A and B. {\it Step 2.} \medskip Now we will convert the weak type inequality (\ref{uweak}) into the desired $L^1$ Sobolev inequality, using an argument based on the idea in \cite{FGW:1}; see also \cite{CDG:1}. Define the sets \[ D_k = \{ x \, | \, |u(x)| > 2^k \}, \quad k \in \mathbb{Z}. \] Then \[ \Vert u \Vert_p =\left( \Sigma^\infty_{k=-\infty} \int_{D_k -D_{k+1}} |u(x)|^p dg \right)^{1/p}, \] where $p=n/(n-1)$ here and later in the proof.
This shows \begin{equation} \label{up<sum} \Vert u \Vert_p \le \left( \Sigma^\infty_{k=-\infty} 2^{(k+1) p} | D_k| \right)^{1/p} = \left( \Sigma^\infty_{k=-\infty} 2^{(k+1)p } | \{ x \, | \, |u(x)|>2^k \}| \right)^{1/p}. \end{equation} Now we define \[ g_k=g_k(x)= \begin{cases} 2^{k-1}, \qquad x \in D_k = \{ x \, | \, |u(x)| > 2^k \},\\ |u(x)|-2^{k-1}, \qquad x \in D_{k-1}-D_k= \{ x \, | \, 2^{k-1} < |u(x)| \le 2^k \},\\ 0, \qquad x \in D^c_{k-1}= \{ x \, | \, |u(x)| \le 2^{k-1} \}. \end{cases} \] It is clear that $g_k$ is a Lipschitz function such that $0 \le g_k \le |u|/2$. Observe that \[ D_k \subset \{ x \, | \, g_k(x) = 2^{k-1} \} \subset \{ x \, | \, g_k(x) >2^{k-2} \} \subset \{ x \, | \, |g_k(x)-\bar g_k| >2^{k-3} \} \cup \{ x \, | \, \bar g_k >2^{k-3} \}. \] Here $\bar g_k$ is the average of $g_k$ on ${\bf M}$. Hence \begin{equation} \label{Dk<sum} \aligned |D_k| &\le |\{ x \, | \, |g_k(x)-\bar g_k| >2^{k-3} \}| + | \{ x \, | \, \bar g_k >2^{k-3} \}|\\ &\equiv T_{k1} + T_{k2}. \endaligned \end{equation} Note that the average of the function $g_k-\bar g_k$ is $0$. Thus we can apply (\ref{uweak}), with $u$ there being replaced by $g_k-\bar g_k$, to deduce \begin{equation} \label{Tk1} T_{k1}=|\{ x \, | \, |g_k(x)-\bar g_k| >2^{k-3} \}| \le C C_3 2^{-p k} \Vert \nabla g_k \Vert^{p}_1. \end{equation} To treat $T_{k2}$, recall that $g_k \le |u|/2$, which implies \[ \bar g_k \le \Vert u \Vert_1 /(2 |{\bf M}|). \] Therefore \[ T_{k2} = | \{ x \, | \, \bar g_k >2^{k-3} \}| \le | \{ x \, | \, \frac{\Vert u \Vert_1}{|{\bf M}|} >2^{k-2} \}|. \] This shows that \begin{equation} \label{Tk2} \aligned T_{k2} = \begin{cases} 0, \quad \text{when} \quad k > 2 + \log_2 \frac{\Vert u \Vert_1}{|{\bf M}|} \equiv k_0,\\ |{\bf M}|, \quad \text{when} \quad k \le k_0.
\end{cases} \endaligned \end{equation} Substituting (\ref{Tk1}) and (\ref{Tk2}) into (\ref{Dk<sum}), we deduce \[ \aligned |D_k| \le \begin{cases} C C_3 2^{-p k} \Vert \nabla g_k \Vert^{p}_1, \quad \text{when} \quad k > k_0,\\ C C_3 2^{-p k} \Vert \nabla g_k \Vert^{p}_1+ |{\bf M}|, \quad \text{when} \quad k \le k_0. \end{cases} \endaligned \] Substituting this into (\ref{up<sum}) and using the Minkowski inequality, we obtain \[ \Vert u \Vert_p \le C_4 \Sigma^\infty_{k=-\infty} \Vert \nabla g_k \Vert_1 + C |{\bf M}|^{1/p} \Sigma^{[k_0] +1}_{k=-\infty} 2^k. \] Here $[k_0]$ is the greatest integer less than or equal to $k_0$. Note that the supports of $\nabla g_k$ are disjoint and $\nabla g_k = \nabla |u|$ on these supports. Also, by the definition of $k_0$ in (\ref{Tk2}), we have $2^{k_0} = 4 \Vert u \Vert_1/|{\bf M}|$. Hence \[ \Vert u \Vert_p \le C_4 \Vert \nabla u \Vert_1 + C |{\bf M}|^{1/p} \Vert u \Vert_1/|{\bf M}|, \] which implies, since $p=n/(n-1)$, \[ \Vert u \Vert_{n/(n-1)} \le C_4 \Vert \nabla u \Vert_1 + C \frac{1}{|{\bf M}|^{1/n}} \Vert u \Vert_1. \] Here $C$ is a numerical constant. This proves Theorem \ref{thm1.1}. \qed Two final remarks are in order. Let $\alpha$ be the average of $u$ in ${\bf M}$. Then, by Theorem \ref{thm1.1}, we have \[ \Vert u-\alpha \Vert_{n/(n-1)} \le C \Vert \nabla u \Vert_1 + C \frac{1}{|{\bf M}|^{1/n}} \Vert u -\alpha \Vert_1. \] Since the average of $u-\alpha$ is zero, inequality (\ref{u<I1du}) implies \[ |u(x)-\alpha| \le C_0 \int_{\bf M} \frac{|\nabla u(y)|}{d(x, y)^{n-1}} dg(y). \] After integration, using the $\kappa$ noninflating property, we find that \[ \Vert u -\alpha \Vert_1 \le C \, diam ({\bf M}) \Vert \nabla u \Vert_1. \] By the $\kappa$ noncollapsing property of ${\bf M}$, there holds $diam ({\bf M}) \le C |{\bf M}|^{1/n}$. This shows the usual isoperimetric inequality \[ \Vert u-\alpha \Vert_{n/(n-1)} \le C \Vert \nabla u \Vert_1.
\] Notice that the $L^2$ Poincar\'e inequality (\ref{goodl2Poin}), \cite{Z11:1}, and Section 9 of \cite{Ch:1} imply the following long-time convergence result: {\it the K\"ahler Ricci flow in Theorem \ref{thm1.1} converges sequentially in time, in the Gromov-Hausdorff topology, to a compact metric space satisfying the $L^2$ Poincar\'e inequality and the volume doubling condition.} By Cheeger's Theorem 11.7 \cite{Ch:1}, the limit space can be equipped with a differentiable structure almost everywhere. {\bf Acknowledgment.} Q. S. Z. would like to thank Professors L. Capogna, X. X. Chen, D. Jerison, Bing Wang and Bun Wong for helpful conversations. Part of the work was done when he was a visiting professor at Nanjing University under a Siyuan Foundation grant, the support of which is gratefully acknowledged. Both of us wish to thank the referee for checking the paper carefully and making helpful corrections and suggestions.
\bigskip \noindent (arXiv:1203.1400 [math.DG], ``Isoperimetric inequality under K\"ahler Ricci flow''.)
\bigskip \noindent arXiv:2212.11216

\begin{center} {\bf Optimal cycles enclosing all the nodes of a $k$-dimensional hypercube} \end{center}

\noindent {\bf Abstract.} We solve the general problem of visiting all the $2^k$ nodes of a $k$-dimensional hypercube by using a polygonal chain that has minimum link-length, and we show that this optimal value is given by $h(2,k):=3 \cdot 2^{k-2}$ if and only if $k \in \mathbb{N}-\{0,1\}$. Furthermore, for any $k$ above one, we constructively prove that it is possible to visit once and only once all the aforementioned nodes, $H(2,k):=\{\{0,1\} \times \{0,1\} \times \dots \times \{0,1\}\} \subset \mathbb{R}^k$, with a cycle (i.e., a closed path) having only $3 \cdot 2^{k-2}$ links.
\section{Introduction} \label{sec:Intr} Given $k \in \mathbb{N}-\{0,1\}$, let the finite subset of $2^k$ points belonging to the Euclidean space $\textit{E} \subset \mathbb{R}^k$ be defined as $H(2,k):=\{\{0,1\} \times \{0,1\} \times \dots \times \{0,1\}\} \subset \mathbb{R}^k$, where the symbol ``$\times$'' denotes the well-known Cartesian product. Then, the problem of joining all the nodes of $H(2,k)$ with a connected set of segments having minimum cardinality is equivalent to asking ourselves which is the minimum-link covering tree embedding all the nodes of a $k$-dimensional hypercube. This question is not an open problem: in 2014, Dumitrescu and T\'oth (see \cite{Dumitrescu:12}, Figure 2) showed that it is sufficient to connect all the $2^{k-1}$ pairs of opposite nodes with as many segments so that all of them (i.e., exactly $2^{k-1}$ line segments) meet in the center, $\textnormal{C} \equiv \left(\frac{1}{2}, \frac{1}{2}, \dots, \frac{1}{2} \right)$. Now, it is easy to see why the number of (line) segments of the above-mentioned minimum-link covering tree for $H(2,k)$ cannot match the link-length of any polygonal chain covering the same set of $2^k$ nodes. Thus, we are interested in solving a multidimensional \textit{thinking outside the box} problem, quite similar to the $k$-dimensional generalization of the infamous \textit{nine dots puzzle} \cite{Loyd:7, Chein:20, Kershaw:21}, which was definitively solved in 2020 by Ripà (see \cite{Ripa:11}): the crucial special case of finding a minimum-link polygonal chain covering any given $H(2,k):=\{\{0,1\} \times \{0,1\} \times \dots \times \{0,1\}\} \subset \mathbb{R}^k$, a fascinating challenge belonging to the general problem of finding a minimum-link covering trail for every set $H(n,k)$ (see \cite{OEIS:31}\&\hspace{-0.1mm}\cite{Ripa:22}). In order to introduce the original results of the present paper (Section \ref{sec:2}), let us first give a few definitions.
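As an aside, the Dumitrescu--T\'oth covering tree recalled above is easy to verify mechanically: pairing each node $v$ with its antipode $\mathbf{1}-v$ produces exactly $2^{k-1}$ segments, and every such segment has the center $\textnormal{C}$ as its midpoint. The following Python sketch (illustrative only; the helper name \texttt{antipodal\_star} is ours) confirms both facts for small $k$:

```python
from itertools import product

def antipodal_star(k):
    """The 2^(k-1) segments joining opposite nodes of the k-cube."""
    segs = set()
    for v in product((0, 1), repeat=k):
        w = tuple(1 - c for c in v)
        segs.add((min(v, w), max(v, w)))  # one entry per unordered pair
    return segs

for k in range(2, 9):
    segs = antipodal_star(k)
    assert len(segs) == 2 ** (k - 1)  # size of the minimum-link covering tree
    for a, b in segs:
        # each segment's midpoint is the centre (1/2, ..., 1/2)
        assert all(ai + bi == 1 for ai, bi in zip(a, b))
```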
\begin{definition} \label{def1.1} Let $\mathcal{P}(m):=(\rm{S_1})$-$(\rm{S_2})$-$\dots$-$({\rm{S}}_{m+1})$, a polygonal chain consisting of $m$ links, be well-defined through the sequence of its $m+1$ vertices, so that $\mathcal{P}(m) \equiv \{\overline{{\rm{S_1 S_2}}} \cup \overline{{\rm{{S_2 S_3}}}} \cup \dots \cup \overline{{\rm{S}}_{m}{\rm{S}}_{m+1}}\}$. \linebreak In particular, for any $d \in \mathbb{Z}^+ : d \leq m+1$, let the $d$-th vertex of $\mathcal{P}(m) \subset \mathbb{R}^k$ be univocally identified by the $k$-tuple $(x_1, x_2, \dots, x_k)$ (e.g., given $\mathcal{P}(3):=(0,0)$-$(1,0)$-$(1,1)$-$(0,1)$ and $d=2$, we have $x_1(S_d)=1 \wedge x_2(S_d)=0$, since $(x_1(S_2), x_2(S_2)) \equiv (1,0)$ for the aforementioned minimum-link polygonal chain covering $H(2,2):=\{{\{0,1\} \times \{0,1\}}\}$). \end{definition} \begin{definition} \label{def1.2} According to Definition \ref{def1.1}, let $h(2,k)$ denote the link-length of the minimum-link polygonal chain $\mathcal{P}(h(2,k)):=(\rm{S_1})$-$(\rm{S_2})$-$\dots $-$({\rm{S}}_{h(2,k)+1})$ visiting all the nodes $H(2,k):=\{\{0,1\} \times \{0,1\} \times \dots \times \{0,1\}\}$ of the $k$-dimensional hypercube $\{[0, 1] \times [0, 1] \times \dots \times [0, 1]\}$. \end{definition} \begin{definition} \label{def1.3} Let $\mathcal{P}(h(2,k))$ be a (possibly self-intersecting) path if no element of the set $H(2,k)$ belongs to more than one link of $\mathcal{P}(h(2,k))$. Then, let $\mathcal{P}(h(2,k))$ be a cycle if it is a path such that $(\rm{S_1}) \equiv ({\rm{S}}_{m+1})$ (i.e., we call a cycle any closed path). Furthermore, we define as ``perfect covering cycle'', $\mathcal{\bar{C}}(h(2,k))$, any closed path such that no element of the set $\{\rm{S_\textit{j}} \mid \textit{j}=1, 2, \dots, \textit{m}+1\}$ belongs to more than two links of the given covering path for $H(2,k)$.
\end{definition} Lastly, for clarity's sake, let us specify that we will use \textit{vertices} and \textit{links} when we are referring to the turning points (usually we consider Steiner points which do not belong to $\{[0, 1] \times [0, 1] \times \dots \times [0, 1]\} \subset \mathbb{R}^k$) of the polygonal chains that we are taking into account, whereas we will prefer \textit{nodes} and \textit{edges} for the respective subsets of points entirely belonging to the given $k$-dimensional hypercube. Since the aim of this paper is to find polygonal chains, embedding $H(2,k)$, which are optimal with respect to the number of line segments, we immediately point out that any covering cycle for $H(2,k)$ is a covering path for the same set of points, and it is also a polygonal chain covering all the $2^k$ nodes of the hypercube $\{[0, 1] \times [0, 1] \times \dots \times [0, 1]\} \subset \mathbb{R}^k$. Now, a constructive proof of the existence of covering cycles for $H(2,2)$ and $H(2,3)$ having a link-length of only $h(2,2)=3$ and $h(2,3)=6$, respectively, has already been given in Reference \cite{Ripa:13} (see pages 163-164 and Figures 7 to 9 of the aforementioned paper, which provides an optimal upper bound for any $k \in \{2,3\}$ that is also achievable by taking into account only covering cycles, instead of generic polygonal chains, since $H(2,2) \subset \mathcal{P}(3)=\left(\frac{1}{2}, \frac{3}{2}\right)$-$(2, 0)$-$(-1, 0)$-$\left(\frac{1}{2}, \frac{3}{2}\right)$ and $H(2,3) \subset \mathcal{P}(6)=\left(\frac{1}{2}, \frac{1}{2}, \frac{3}{2}\right)$-$(2,2,0)$-$(-1,-1,0)$-$\left(\frac{1}{2}, \frac{1}{2}, \frac{3}{2}\right)$-$(2,-1,0)$-$(-1,2,0)$-$\left(\frac{1}{2}, \frac{1}{2}, \frac{3}{2}\right)$). Thus, from here on, let us assume that $k \geq 2$ is given.
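The two covering cycles recalled above can be verified by direct computation: using exact rational arithmetic, one checks that every node of $H(2,2)$ (respectively, $H(2,3)$) lies on some link of the corresponding closed chain. A minimal Python sketch (illustrative only; the helper names are ours):

```python
from fractions import Fraction as Fr
from itertools import product

def on_segment(p, a, b):
    # p lies on the closed segment a-b iff p = a + t*(b - a) for some t in [0, 1]
    t = None
    for pi, ai, bi in zip(p, a, b):
        di = Fr(bi) - Fr(ai)
        if di == 0:
            if Fr(pi) != Fr(ai):
                return False
        else:
            s = (Fr(pi) - Fr(ai)) / di
            if t is None:
                t = s
            elif s != t:
                return False
    return t is not None and 0 <= t <= 1

def covers(chain, nodes):
    segs = list(zip(chain, chain[1:]))
    return all(any(on_segment(v, a, b) for a, b in segs) for v in nodes)

half, th = Fr(1, 2), Fr(3, 2)
cycle2 = [(half, th), (2, 0), (-1, 0), (half, th)]                      # 3 links
cycle3 = [(half, half, th), (2, 2, 0), (-1, -1, 0), (half, half, th),
          (2, -1, 0), (-1, 2, 0), (half, half, th)]                     # 6 links
assert covers(cycle2, list(product((0, 1), repeat=2)))
assert covers(cycle3, list(product((0, 1), repeat=3)))
```

Both chains are closed (first and last vertex coincide), so they witness $h(2,2) \le 3$ and $h(2,3) \le 6$ even within the class of cycles.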
The goal of the present research paper is to show that $h_l(2,k)=3 \cdot 2^{k-2}$ is a valid lower bound for any polygonal chain visiting all the elements of $H(2,k)$, and to constructively prove that the aforementioned lower bound is equal to the upper bound $h_u(2,k)=3 \cdot 2^{k-2}$ \cite{OEIS:30}, which returns the minimum link-length of any perfect covering cycle for the same set of nodes. It follows that we are going to prove that $h_u(2,k)=h_l(2,k)$ by providing optimal covering cycles consisting of $3 \cdot 2^{k-2}$ links for any $k \in \mathbb{N}-\{0,1\}$, and this result will be shown in the next section. \section{Main Result} \label{sec:2} In order to prove that $h_l(2,k)=3 \cdot 2^{k-2}$ holds for any $k \in \mathbb{N}-\{0,1\}$, we will introduce the following lemma. \begin{lemma}\label{lemma 1} For any $k \in \mathbb{N}-\{0,1\}$, it is not possible to visit more than $4$ distinct elements of the set $H(2,k):=\{\{0,1\} \times \{0,1\} \times \dots \times \{0,1\}\} \subset \mathbb{R}^k$ by using a polygonal chain with $3$ links. \end{lemma} \begin{proof} We prove Lemma \ref{lemma 1} by studying the generic trail $\mathcal{P}(3) \equiv \{\overline{\rm{S_1 S_2}} \cup \overline{\rm{S_2 S_3}} \cup \overline{\rm{S_3 S_4}}\}$ that passes through $4$ (distinct) nodes of $H(2, k)$. Then, we will show that there is no choice for these four nodes (and also for the considered Steiner points) that implies the existence of a fifth node belonging to $\mathcal{P}(3)$. We will start by demonstrating that there is no trail $\mathcal{P}(2)$ that, passing through $3$ nodes, visits a fourth one. Considering that we need to pass through at least $3$ nodes, and being able to visit at most $2$ nodes with the first segment, we can impose that $\overline{{\rm S}_1{\rm S}_2}$ passes through $2$ nodes. Although, for convenience, we will impose ${\rm S}_1 \equiv {\rm V}_1 \equiv {\rm O}$, the obtained result can be extended to any choice of ${\rm S}_1$.
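A fact used repeatedly in this proof is that no three nodes of $H(2,k)$ are collinear, so that a single line can visit at most $2$ of them. This can be confirmed exhaustively for small $k$ with a short Python sketch (illustrative only):

```python
from itertools import product, combinations

def collinear(p, q, r):
    u = [qi - pi for pi, qi in zip(p, q)]
    v = [ri - pi for pi, ri in zip(p, r)]
    # u and v are parallel iff every 2x2 minor vanishes
    return all(u[i] * v[j] == u[j] * v[i]
               for i in range(len(u)) for j in range(i + 1, len(u)))

for k in range(2, 7):
    nodes = list(product((0, 1), repeat=k))
    assert not any(collinear(p, q, r) for p, q, r in combinations(nodes, 3))
```

The general case follows from the fact that two distinct nodes differ by a vector with entries in $\{-1,0,1\}$, so a third collinear binary point would leave the unit cube.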
\vspace{2mm} Let ${\rm S}_j$ be the origin of a given half-line $q_j$. Given a parameter $t_j\in \mathbb{R}:t_j\geq 0$, the corresponding parametric equation is of the form $q_j = {\rm S}_j + t_j \cdot {\rm \vv{{\rm S}_j{\rm V}_{j+1}}}$, where ${\rm V}_j$ indicates the $j$-th node of $H(2,k)$ visited by $\mathcal{P}(m)$. In this way, for $t_j = 1$, each of the Cartesian coordinates of the nodes must assume the value $0$ or $1$. Our goal is to show that this happens only for $t_j = 1$ or, if this occurs for other values of $t_j$, that the visited node is a node already visited previously. Since the Steiner point ${\rm S}_{j+1}$ belongs to the considered half-line, let us denote by $\bar{t_j}$ the value of the parameter $t_j$ such that ${\rm S}_{j+1}={\rm S}_{j}+ \bar{t_j} \cdot {\rm \vv{{\rm S}_j{\rm V}_{j+1}}}$ (i.e., ${\rm S}_{j+1}:={\rm S}_{j+1}(\bar{t_j})$). \vspace{2mm} The generic point ${\rm S}_2$ is obtained as the last endpoint of a segment passing through ${\rm V}_2$, \begin{equation}\label{S2} {\rm S}_2 = {\rm S}_1 + \bar{t_1} \cdot \vv{{\rm S}_1{\rm V}_2}. \end{equation} From here on, we will indicate the $i$-th coordinate of the generic point ${\rm P}$, belonging to the Euclidean space $\mathbb{R}^k$, as $x_i({\rm P})$ (e.g., ${\rm P} \equiv (x_1({\rm P}), x_2({\rm P}), \dots, x_k({\rm P}))$). \vspace{2mm} Consequently, from Equation (\ref{S2}), it follows that \begin{equation} x_i\left({\rm S}_2\right)= \bar{t_1} \cdot x_i\left({\rm V}_2\right), \end{equation} where $\bar{t_1} \geq1$. \vspace{4mm} Similarly, we will have that \vspace{2mm} \begin{equation} {\rm S}_3 = {\rm S}_2 + \bar{t_2} \cdot \vv{ {\rm S}_2{\rm V}_3}.
\end{equation} Hence, \begin{equation} \begin{split} x_i\left({\rm S}_3\right)&= \bar{t_1} \cdot x_i\left({\rm V}_2\right) + \bar{t_2} \cdot \left(x_i\left({\rm V}_3\right)-\bar{t_1} \cdot x_i\left({\rm V}_2\right)\right)\\ &= \bar{t_1} \cdot x_i\left({\rm V}_2\right) \cdot \left(1-\bar{t_2}\right) + \bar{t_2} \cdot x_i\left({\rm V}_3\right), \end{split} \end{equation} where $\bar{t_2} \geq 1$. Let us consider the segment $\overline{{\rm S}_2{\rm S}_3}$. By disregarding the node ${\rm V}_3$, obtained by imposing $t_2 = 1$, we verify that there does not exist any node ${\rm V_j}$ of $H(2,k)$, belonging to $\overline{{\rm S}_2{\rm S}_3}$, which has not been previously visited. Thus, for $i \leq k$, we need to study all the $x_i({\rm V}_j)$ equations, showing that there are no solutions such that $0<t_2<1$. It is not necessary to continue beyond point ${\rm V}_3$, studying solutions for $t_2>1$: if we encountered a node of the set $H(2,k)$ after point ${\rm V}_3$, then we would necessarily have to study also the case in which that node represents the furthest one and, with $t_2 <1$, we would find the point studied previously. As a result, we have to study the above mentioned $x_i({\rm V}_j)$ equations, \begin{equation} \begin{cases} \bar{t_1} \cdot x_i({\rm V}_2) + t_2 \cdot (x_i({\rm V}_3)-\bar{t_1} \cdot x_i({\rm V}_2))=x_i({\rm V}_j) \\ \bar{t_1}>1 \\ 0<t_2<1 \end{cases} \end{equation} Hence, \begin{align} &x_i({\rm V}_2)=x_i({\rm V}_3)=x_i({\rm V}_j)=0 \Rightarrow t_2\in \mathbb{R}:0<t_2<1,\\ &x_i({\rm V}_2)=x_i({\rm V}_j)=1,\,x_i({\rm V}_3)=0 \Rightarrow t_2=\frac{\bar{t_1}-1}{\bar{t_1}}. \end{align} Consequently, all the solutions imply $x_i({\rm V}_3)=0$. If $x_i({\rm V}_3)=0$ holds for all $i \leq k$, then ${\rm V}_3={\rm V}_1$. Now, we are finally ready to study the generic trail $\mathcal{P}(3) \equiv \{\overline{\rm{S_1 S_2}} \cup \overline{\rm{S_2 S_3}} \cup \overline{\rm{S_3 S_4}}\}$.
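Since the case analysis above is finite, it can also be checked by brute force. The following Python sketch (merely a sanity check outside the formal proof, run with arbitrary sample values of $\bar{t_1}>1$) enumerates the binary triples $(x_i({\rm V}_2), x_i({\rm V}_3), x_i({\rm V}_j))$ and confirms that every admissible solution with $0<t_2<1$ forces $x_i({\rm V}_3)=0$:

```python
from fractions import Fraction
from itertools import product

def t2_solution(v2, v3, vj, t1):
    # Solve t1*v2 + t2*(v3 - t1*v2) = vj for t2 (one coordinate at a time).
    a = v3 - t1 * v2          # coefficient multiplying t2
    b = vj - t1 * v2          # remaining constant term
    if a == 0:
        return 'any' if b == 0 else None
    return b / a              # exact rational arithmetic via Fraction

def admissible_triples(t1):
    # Binary triples (x_i(V2), x_i(V3), x_i(Vj)) compatible with some 0 < t2 < 1.
    keep = []
    for v2, v3, vj in product((0, 1), repeat=3):
        t2 = t2_solution(v2, v3, vj, t1)
        if t2 == 'any' or (t2 is not None and 0 < t2 < 1):
            keep.append((v2, v3, vj))
    return keep
```

For instance, `admissible_triples(Fraction(2))` keeps only the triples `(0, 0, 0)` and `(1, 0, 1)`, both with $x_i({\rm V}_3)=0$.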
Thanks to the results discussed above, we know that if $\overline{\rm{S_2 S_3}}$ visits two nodes, then $\overline{\rm{S_1 S_2}}$ and $\overline{\rm{S_3 S_4}}$ visit one node each, so we can impose (from the beginning) that $\overline{\rm{S_1 S_2}}$ visits two nodes of $H(2,k)$. Such a trail, $\mathcal{P}(3)$, can be built starting from the just described trail $\mathcal{P}(2)$, by simply adding a fourth generic Steiner point whose coordinates satisfy \begin{equation} {\rm S}_4 = {\rm S}_3 + \bar{t_3} \cdot \vv{ {\rm S}_3{\rm V}_4}, \end{equation} so we have \begin{equation} \begin{split} x_i({\rm S}_4) = & \bar{t_1} \cdot x_i({\rm V}_2) \cdot (1-\bar{t_2}) + \bar{t_2} \cdot x_i({\rm V}_3) + \bar{t_3} \cdot (x_i({\rm V}_4)-(\bar{t_1} \cdot x_i({\rm V}_2) \cdot (1-\bar{t_2}) + \bar{t_2} \cdot x_i({\rm V}_3))) \\ =&(\bar{t_1} \cdot x_i({\rm V}_2) \cdot (1-\bar{t_2}) + \bar{t_2} \cdot x_i({\rm V}_3)) \cdot (1-\bar{t_3}) + \bar{t_3} \cdot x_i({\rm V}_4), \end{split} \end{equation} \vspace{-6mm} \noindent with $\bar{t_3} \geq 1$. \vspace{2mm} Before moving on to the segment $\overline{{\rm S}_3{\rm S}_4}$, let us make some considerations for a better understanding of the next step of the present proof. We have that $\bar{t_1} \geq 1$ and $\bar{t_2}> 1$, since $\bar{t_2} = 1$ would imply ${\rm S}_3 = {\rm V}_3$. It follows that $\overline{{\rm S}_3{\rm S}_4}$ would visit ${\rm V}_2$ and ${\rm V}_3$, whereas it could not visit other nodes, since the set $H(2,k)$ has no more than $2$ collinear nodes. Under these constraints, the following results hold, and we can use them to find the solutions of Equation (\ref{xiV4}). \begin{enumerate} \item $\bar{t_1} \cdot (1-\bar{t_2}) <0$, since $\bar{t_1}> 0$ and $(1-\bar{t_2}) <0$.
\item $\bar{t_1} \cdot (1-\bar{t_2}) + \bar{t_2} <1$, since $\bar{t_1}=1$ implies $\bar{t_1} \cdot (1-\bar{t_2}) + \bar{t_2} =1$ and\\ $\frac{\partial}{\partial \bar{t_1}}(\bar{t_1} \cdot (1-\bar{t_2}) + \bar{t_2})<0$ $\forall \; \bar{t_1}>1,\bar{t_2}>1$. \end{enumerate} Now, we consider the segment $\overline{{\rm S}_3{\rm S}_4}$. By disregarding the node ${\rm V}_4$, obtained by imposing $t_3 = 1$, we verify that there does not exist any unvisited node ${\rm V}_j$ of $H(2,k)$ belonging to $\overline{{\rm S}_3{\rm S}_4}$. Thus, \begin{equation}\label{xiV4} \resizebox{.999\hsize}{!}{$ \begin{cases} \bar{t_1}\cdot x_i({\rm V}_2) \cdot(1-\bar{t_2}) + \bar{t_2}\cdot x_i({\rm V}_3) + t_3\cdot (x_i({\rm V}_4)-(\bar{t_1}\cdot x_i({\rm V}_2) \cdot(1-\bar{t_2}) + \bar{t_2}\cdot x_i({\rm V}_3)))=x_i({\rm V}_j) \\ \bar{t_1}>1\\ \bar{t_2}>1\\ 0<t_3<1\\ \end{cases}$}\hspace{-3.5mm}. \end{equation} Hence, \begin{align} & x_i({\rm V}_2)=x_i({\rm V}_3)=x_i({\rm V}_4)=x_i({\rm V}_j)=0 \Rightarrow t_3\in \mathbb{R}:0<t_3<1, \; & \label{V=V=0}\\ &x_i({\rm V}_2)=x_i({\rm V}_3)=1,x_i({\rm V}_4)=x_i({\rm V}_j)=0 \wedge \bar{t_1}=\frac{\bar{t_2}}{\bar{t_2}-1} \Rightarrow t_3\in \mathbb{R}:0<t_3<1, \label{V2=V3}\\ &x_i({\rm V}_2)=x_i({\rm V}_4)=0,x_i({\rm V}_3)=x_i({\rm V}_j)=1 \Rightarrow t_3=\frac{\bar{t_2}-1}{\bar{t_2}}, \label{V2=V4}\\ &x_i({\rm V}_2)=x_i({\rm V}_3)=x_i({\rm V}_4)=1,x_i({\rm V}_j)=0 \Rightarrow t_3=\frac{\bar{t_1}(1-\bar{t_2})+\bar{t_2}}{\bar{t_1}(1-\bar{t_2})+\bar{t_2}-1}. \label{V2=V3=V4} \end{align} There cannot be two indices $i, i'$ such that $(x_i({\rm V}_2)=x_i({\rm V}_3)=1,\,x_i({\rm V}_4)=x_i({\rm V}_j)=0)\wedge (x_{i'}({\rm V}_2)=x_{i'}({\rm V}_3)=x_{i'}({\rm V}_4)=1,\,x_{i'}({\rm V}_j)=0)$ (see (\ref{V2=V3})\&(\ref{V2=V3=V4})), since by imposing $\bar{t_1} = \frac{\bar{t_2}}{\bar{t_2}-1}$ we obtain $t_3 = 0$.
The uniqueness of the indices that simultaneously verify (\ref{V=V=0}), (\ref{V2=V3})\&(\ref{V2=V4}) implies that ${\rm V}_1$ is visited twice, while the uniqueness of the indices that simultaneously verify (\ref{V=V=0}), (\ref{V2=V3=V4})\&(\ref{V2=V4}) implies that ${\rm V}_2$ is visited twice. Therefore, it is not possible to join more than $4$ nodes of the given set with a polygonal chain consisting of only $3$ links, and this concludes the proof of Lemma \ref{lemma 1}. \end{proof} By invoking Lemma \ref{lemma 1}, we can easily prove the following theorem. \begin{theorem}\label{lower bound} Let $k\in \mathbb{N}-\{0,1\}$ be given. The link-length of the covering trail $\mathcal{P}(h(2,k))$ for the set $H(2,k)$ satisfies $h(2,k)\geq 3\cdot 2^{k-2}$. \end{theorem} \begin{proof} There is a total of $2^k$ nodes to be visited. By Lemma \ref{lemma 1}, we can join a maximum of $4$ nodes with a polygonal chain of link-length $3$. Since $H(2,k)$ consists of $\frac{2^k}{4}$ groups of $4$ nodes, the $4$ nodes of each of these groups require a minimum of $3$ segments to be visited. Therefore, $h(2,k) \geq 3 \cdot 2^{k-2}$. \end{proof} Now, we need to find the shortest possible covering path, $\mathcal{P}(h(2,k))$. For this purpose, it is possible to prove the following result. \begin{theorem}\label{upper bound 1} Given $H(2,k)$ with $k\in \mathbb{N}-\{0,1\}$, it is always possible to construct a covering cycle $\mathcal{P}(h(2,k))$ of link-length $h(2,k) = 3 \cdot 2^{k-2}$. \end{theorem} \begin{proof} It is possible to create an algorithm that generates a covering circuit for $H(2,k)$ whose link-length exactly coincides with the lower bound stated by Theorem \ref{lower bound}. The algorithm is valid for any finite number of dimensions. \vspace{2mm} First of all, we notice that it is always possible to join the four nodes of a rectangle with a covering circuit of link-length $3$.
In fact, given the set $\{ \{0,b\} \times \{0,1\} \}$, we can consider the covering circuit $({\rm S_1})$-$({\rm S_2})$-$({\rm S}_3)$-$({\rm S}_{1})$, with the elements of the set $\{{\rm S_1},{\rm S_2},{\rm S_3} \}$ given by \begin{align*} &{\rm S}_1\equiv \left(\frac{b}{2},\frac{3}{2} \right);\; & & \\ &{\rm S}_2 \equiv (-b,0);\; & {\rm S}_1+\frac{1}{3}\cdot \vv{{\rm S}_1{\rm S}_2}=(0,1); & \nonumber\\ &{\rm S}_3\equiv (2\cdot b,0);\; & {\rm S}_2+\frac{1}{3}\cdot\vv{{\rm S}_2{\rm S}_3}=(0,0), & \;\;\; \; {\rm S}_2+\frac{2}{3}\cdot \vv{{\rm S}_2{\rm S}_3}=(b,0); \nonumber \\ &{\rm S}_4\equiv {\rm S}_1; \; & {\rm S}_3+\frac{2}{3}\cdot\vv{{\rm S}_3{\rm S}_4}=(b,1). & \nonumber \end{align*} We consider the sheaf of planes that has in common the line $r:={\rm C}+t\cdot \vv{e_k}$\hspace{1mm}, where \linebreak ${\rm C}\equiv \left(\frac{1}{2}, \frac{1}{2},\dots,\frac{1}{2},0 \right)$ and $\vv{e_k}:= (0,0, \dots,0,1)$ is the last vector of the canonical basis. These planes have parametric equation ${\rm C}+t\cdot \vv{e_k} + u\cdot \vv{s}$, with $\vv{e_k}$ and $\vv{s}$ being linearly independent vectors. Let $\vv{s_l}\in \{\{-\frac{1}{2}\}\times\{-\frac{1}{2},\frac{1}{2}\}\times\{-\frac{1}{2},\frac{1}{2}\}\times \cdots \times\{-\frac{1}{2},\frac{1}{2}\}\times\{0\}\}\subset \mathbb{R}^{k}$, $l \in \mathbb{N}_0 : l<2^{k-2}$, denote the generic vector of this family. If $t=u=1$, then we obtain the point ${\rm V}_{4l+1} \in H(2,k)$ such that\\ ${\rm V}_{4l+1}\equiv (0,x_2({\rm V}_{4l+1}),x_3({\rm V}_{4l+1}),\dots,x_{k-1}({\rm V}_{4l+1}),1)$, where \begin{equation} \begin{cases} x_i({\rm V}_{4l+1})= 1 & \text{if} \; x_i(\vv{{ s}_{l}})=+\frac{1}{2}\\ x_i({\rm V}_{4l+1})= 0 & \text{if} \; x_i(\vv{{ s}_{l}})=-\frac{1}{2} \end{cases}.
\end{equation} If $t=0 \wedge u=1$, then we obtain the point ${\rm V}_{4l+2} \in H(2,k)$ such that\\ ${\rm V}_{4l+2}\equiv (0,x_2({\rm V}_{4l+2}),x_3({\rm V}_{4l+2}),\dots,x_{k-1}({\rm V}_{4l+2}),0)$, where \begin{equation} \begin{cases} x_i({\rm V}_{4l+2})= 1 & \text{if} \; x_i(\vv{{ s}_{l}})=+\frac{1}{2}\\ x_i({\rm V}_{4l+2})= 0 & \text{if} \; x_i(\vv{{ s}_{l}})=-\frac{1}{2} \end{cases}. \end{equation} If $t=0 \wedge u=-1$, then we obtain the point ${\rm V}_{4l+3} \in H(2,k)$ such that\\ ${\rm V}_{4l+3}\equiv (1,x_2({\rm V}_{4l+3}),x_3({\rm V}_{4l+3}),\dots,x_{k-1}({\rm V}_{4l+3}),0)$, where \begin{equation} \begin{cases} x_i({\rm V}_{4l+3})= 0 & \text{if} \; x_i(\vv{{ s}_{l}})=+\frac{1}{2}\\ x_i({\rm V}_{4l+3})= 1 & \text{if} \; x_i(\vv{{ s}_{l}})=-\frac{1}{2} \end{cases}. \end{equation} If $t=1 \wedge u=-1$, then we obtain the point ${\rm V}_{4l+4} \in H(2,k)$ such that\\ ${\rm V}_{4l+4}\equiv (1,x_2({\rm V}_{4l+4}),x_3({\rm V}_{4l+4}),\dots,x_{k-1}({\rm V}_{4l+4}),1)$, where \begin{equation} \begin{cases} x_i({\rm V}_{4l+4})= 0 & \text{if} \; x_i(\vv{{ s}_{l}})=+\frac{1}{2}\\ x_i({\rm V}_{4l+4})= 1 & \text{if} \; x_i(\vv{{ s}_{l}})=-\frac{1}{2} \end{cases}. \end{equation} Letting $\{\sigma\}_l:=\{\sigma_l : l=0,1,2,\dots,2^{k-2}-1\}$ be the set of the planes containing $r$ and ${\rm C} + \vv{s_l}$, in total we will have exactly $4$ nodes of $H(2,k)$ lying on each one of the aforementioned planes (i.e., there are $2^{k-2}$ planes $\sigma_l$ such that $0 \leq l < 2^{k-2}$, since there are exactly $2^{k-2}$ tuples of size $k-2$ with entries in the set $\{\frac{1}{2},-\frac{1}{2}\}$). Since each of the $2^{k-2}$ tuples is different from all the others, it follows that there does not exist any pair of positive integers $(j,j') : j \neq j'$ such that ${\rm V}_{j} \equiv {\rm V}_{j'}$. Consequently, each plane contains exactly $4$ nodes that do not belong to any other plane $\sigma_l$.
Thus, there are $2^{k-2}$ planes that include $4$ different nodes each, for a total of $2^k$ distinct nodes. Since a $k$-dimensional hypercube has exactly $2^k$ nodes, we conclude that each point lies on one and only one plane $\sigma_l$. Let $l \in \mathbb{N}_0 : 0 \leq l \leq 2^{k-2}-1$ be given. Then, the four points ${\rm V}_{4l+1}$, ${\rm V}_{4l+2}$, ${\rm V}_{4l+3}$, and ${\rm V}_{4l+4}$ identify the nodes of a rectangle with base $1$ and height $\sqrt{k-1}$. In fact, $\vv{e_k}$ and $\vv{s_l}$ form an orthogonal pair, since the $k$-th coordinate of $\vv{s_l}$ is $0$. We have already shown that, in the coordinates $(t,u)$ of this orthogonal pair, the four nodes are in position $(0,-1)$; $(0,1)$; $(1,1)$; $(1,-1)$, so that they are vertices of a rectangle of base $1$ and height $2$. Consequently, being $\frac{1}{2}\sqrt{k-1}$ the magnitude of the vector $\vv{s_l}$, we get a height of $2 \cdot \frac{1}{2}\sqrt{k-1}=\sqrt{k-1}$ in the canonical basis of $\mathbb{R}^k$. Finally, we have $2^{k-2}$ rectangles, whose vertices can be covered by $2^{k-2}$ covering circuits of link-length $3$ (see Figures 1\&2).
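The plane-by-plane construction above can be sanity-checked numerically. The following Python sketch (not part of the formal proof) rebuilds the points ${\rm C} + t\,\vv{e_k} + u\,\vv{s_l}$ for $(t,u)\in\{(1,1),(0,1),(0,-1),(1,-1)\}$ over all $2^{k-2}$ choices of $\vv{s_l}$, and verifies that they are exactly the $2^k$ distinct vertices of the hypercube:

```python
from itertools import product

def hypercube_points_from_planes(k):
    # C = (1/2, ..., 1/2, 0) and e_k = (0, ..., 0, 1), as in the proof.
    C = [0.5] * (k - 1) + [0.0]
    e_k = [0.0] * (k - 1) + [1.0]
    pts = []
    # s_l: first coordinate -1/2, middle k-2 coordinates in {-1/2, 1/2}, last 0.
    for mid in product((-0.5, 0.5), repeat=k - 2):
        s = [-0.5] + list(mid) + [0.0]
        for t, u in ((1, 1), (0, 1), (0, -1), (1, -1)):  # V_{4l+1}, ..., V_{4l+4}
            pts.append(tuple(C[i] + t * e_k[i] + u * s[i] for i in range(k)))
    return pts
```

For any small $k$, `set(hypercube_points_from_planes(k))` equals the full vertex set $\{0,1\}^k$, with no repetitions among the $2^k$ generated points.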
\begin{figure}[H] \begin{center} \includegraphics[width=15cm]{Figura_1.png} \end{center} \caption{The minimum-link perfect covering cycle $\mathcal{\bar{C}}(h(2,2)):=\left(\frac{1}{2},\frac{3}{2}\right)$-$(-1,0)$-$(2,0)$-$\left(\frac{1}{2},\frac{3}{2}\right)$\\ joins all the nodes of $H(2,2)$ (picture realized with GeoGebra \cite{Geogebra:32}).} \label{fig:Figure_1} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=12cm]{Figura_2.png} \end{center} \caption{The minimum-link closed polygonal chain $\mathcal{P}(6):=\left(\frac{1}{2},\frac{1}{2},2 \right)$-$\left(-\frac{1}{2},-\frac{1}{2},0 \right)$-$(\frac{3}{2},\frac{3}{2},0)$-$\left(\frac{1}{2},\frac{1}{2},2 \right)$-$(-\frac{1}{2},\frac{3}{2},0)$-$(\frac{3}{2},-\frac{1}{2},0)$-$\left(\frac{1}{2},\frac{1}{2},2 \right)$\\ visits all the nodes of $H(2,3)$ once and only once (picture realized with GeoGebra \cite{Geogebra:32}).} \label{fig:Figure_2} \end{figure} Lastly, using the described covering circuits, we get a circuit that starts and ends at a point lying on the generating line of the sheaf of planes that contains all the planes belonging to $\{\sigma\}_l$. Thus, \vspace{-2mm} \begin{equation} \resizebox{.99\hsize}{!}{$ \{{\rm V}_{4l+1}, {\rm V}_{4l+2}, {\rm V}_{4l+3}, {\rm V}_{4l+4}\} \hspace{-0.1mm} \subset \hspace{-0.1mm} P_m(4) \Rightarrow \exists! \hspace{0.8mm} l \in \{0,1,\dots,2^{k-2}-1\} : \left({\rm S}_{3l+1}\right)\hspace{-1mm}\textnormal{-}\hspace{-1mm}\left({\rm S}_{3l+2}\right)\hspace{-1mm}\textnormal{-}\hspace{-1mm}\left({\rm S}_{3l+3}\right)\hspace{-1mm}\textnormal{-}\hspace{-1mm}\left({\rm S}_{3l+4}\right) \hspace{-0.1mm}\subset \hspace{-0.1mm} \{\sigma\}_l $}.
\end{equation} As shown in Figure 3, we obtain a covering cycle for $H(2,k)$ by repeating, for every $l \in \{0, 1, 2, \dots, 2^{k-2}-1\}$, the covering circuit described by \begin{align} &{\rm S}_{3l+1}\equiv {\rm C}+\frac{3}{2}\cdot \vv{e_k}; & \nonumber \\ &{\rm S}_{3l+2}\equiv {\rm C} + 3\cdot \vv{s_l}& {\rm S}_{3l+1}+\frac{1}{3}\cdot \vv{{\rm S}_{3l+1}{\rm S}_{3l+2}}\equiv{\rm V}_{4l+1}; \nonumber \\ &{\rm S}_{3l+3}\equiv {\rm C}- 3 \cdot \vv{s_l} & {\rm S}_{3l+2}+\frac{1}{3}\cdot\vv{{\rm S}_{3l+2}{\rm S}_{3l+3}}\equiv{\rm V}_{4l+2}, \nonumber \\ & & {\rm S}_{3l+2}+\frac{2}{3}\cdot\vv{{\rm S}_{3l+2}{\rm S}_{3l+3}}\equiv{\rm V}_{4l+3}; \nonumber \\ &{\rm S}_{3l+4}\equiv {\rm S}_{3l+1} & {\rm S}_{3l+3}+\frac{2}{3}\cdot\vv{{\rm S}_{3l+3}{\rm S}_{3l+4}}\equiv{\rm V}_{4l+4}. \nonumber \end{align} \begin{figure}[H] \begin{center} \includegraphics[width=\linewidth]{Figura_3_1.png} \end{center} \caption{The minimum-link closed polygonal chain $\mathcal{P}(12):=\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{3}{2} \right)$-$\left(-1,-1,-1, 0 \right)$-$(2,2,2,0)$-$\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{3}{2} \right)$-$(-1,-1,2,0)$-$(2,2,-1,0)$-\\$\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{3}{2} \right)$-$(-1,2,-1,0)$-$(2,-1,2,0)$-$\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{3}{2} \right)$-$(-1,2,2,0)$-$(2,-1,-1,0)$-$\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{3}{2} \right)$\\ joins all the nodes of $H(2,4)$ (picture realized with GeoGebra \cite{Geogebra:32}).} \label{fig:Figure_4} \end{figure} Therefore, we have constructively proven that $h(2,k)\leq 3\cdot 2^{k-2}$, for any $k \in \mathbb{N}-\{0,1\}$. \end{proof} \vspace{2mm} Lastly, we note that it is also possible to generate a covering cycle that does not have coincident Steiner points, except for the first and the last one, by following a variation of the previous algorithm, as shown by Corollary \ref{upper bound 2} (see also Figures 4\&5).
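The circuit formulas above can likewise be checked by direct computation. The sketch below (a sanity check only, taking $\vv{e_k}$ as the direction of the line $r$) verifies, in exact rational arithmetic, that each segment passes through the claimed nodes at the claimed fractions:

```python
from itertools import product
from fractions import Fraction as F

def check_circuit(k):
    # C = (1/2, ..., 1/2, 0); e = e_k, the direction of the line r.
    half = F(1, 2)
    C = [half] * (k - 1) + [F(0)]
    e = [F(0)] * (k - 1) + [F(1)]

    def add(p, c, v):
        # p + c * v, componentwise
        return [p[i] + c * v[i] for i in range(k)]

    def seg(a, b, lam):
        # point at parameter lam on the segment from a to b
        return [a[i] + lam * (b[i] - a[i]) for i in range(k)]

    for mid in product((-half, half), repeat=k - 2):
        s = [-half] + list(mid) + [F(0)]
        S1 = add(C, F(3, 2), e)
        S2 = add(C, F(3), s)
        S3 = add(C, F(-3), s)
        S4 = S1
        V1 = add(add(C, F(1), e), F(1), s)    # C + e_k + s_l
        V2 = add(C, F(1), s)                  # C + s_l
        V3 = add(C, F(-1), s)                 # C - s_l
        V4 = add(add(C, F(1), e), F(-1), s)   # C + e_k - s_l
        assert seg(S1, S2, F(1, 3)) == V1
        assert seg(S2, S3, F(1, 3)) == V2
        assert seg(S2, S3, F(2, 3)) == V3
        assert seg(S3, S4, F(2, 3)) == V4
    return True
```

For $k=2$ this reproduces the cycle of Figure 1, and for $k=3,4$ the cycles of Figures 2 and 3.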
\begin{figure}[H] \begin{center} \includegraphics[width=\linewidth]{Figura_3_2.png} \end{center} \caption{The minimum-link perfect covering cycle $\mathcal{\bar{C}}(h(2,3)):=\left(\frac{1}{2},\frac{1}{2},2 \right)$-$\left(-\frac{1}{2},-\frac{1}{2},0 \right)$-$\left(\frac{4}{3},\frac{4}{3},0 \right)$-$\left(\frac{1}{2},\frac{1}{2},\frac{5}{2} \right)$-$\left(-\frac{1}{3},\frac{4}{3},0 \right)$-$\left(\frac{3}{2},-\frac{1}{2},0 \right)$-$\left(\frac{1}{2},\frac{1}{2},2 \right)$\\ joins all the nodes of $H(2,3)$ (picture realized with GeoGebra \cite{Geogebra:32}).} \label{fig:Figure_3} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=\linewidth]{Figura_5_1.png} \end{center} \caption{The minimum-link perfect covering cycle $\mathcal{\bar{C}}(h(2,4))\hspace{-1mm}:=\hspace{-1mm}\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{3}{2} \right)$-$\left(-1,-1,-1,0 \right)$-$\left(\frac{3}{2},\frac{3}{2},\frac{3}{2},0 \right)$-$\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},2 \right)$-$\left(-\frac{1}{2},-\frac{1}{2},\frac{3}{2},0 \right)$-$\left(\frac{4}{3},-\frac{1}{3},-\frac{1}{3},0 \right)$-\\$\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{5}{2} \right)$-$\left(-\frac{1}{3},\frac{4}{3},-\frac{1}{3},0 \right)$-$\left(\frac{5}{4},-\frac{1}{4},\frac{5}{4},0 \right)$-$\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},3 \right)$-$\left(-\frac{1}{4},\frac{5}{4},\frac{5}{4},0 \right)$-$\left(2,-1,-1,0 \right)$-$\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{3}{2} \right)$\\ visits all the nodes of $H(2,4)$ once and only once (picture realized with GeoGebra \cite{Geogebra:32}).} \label{fig:Figure_5} \end{figure} \begin{corollary} \label{upper bound 2} Given $H(2,k)$ with $k\in \mathbb{N}-\{0,1\}$, it is always possible to construct a perfect covering cycle $\bar{\mathcal{C}}(h(2,k)) := \{\overline{{\rm{S_1 S_2}}} \cup \overline{{\rm{{S_2 S_3}}}} \cup \dots \cup \overline{{\rm{S}}_{3 \cdot 2^{k-2} }{\rm{S}}_{1}}\}$ having exactly $3 \cdot 2^{k-2}$ distinct Steiner points and 
such that ${\rm{S}}_{3 \cdot 2^{k-2} + 1} \equiv {\rm{S}}_{1}$. \end{corollary} \begin{proof} We constructively prove the corollary by following the same approach introduced in the proof of Theorem \ref{upper bound 1}, taking into account that the Steiner points of the type ${\rm S}_{3l+1}$, which lie on the straight line $r$, must have coordinates $x_{k}({\rm S}_{3l+1})$ different from each other. We can choose $x_{k}({\rm S}_{3l+1}):= \frac{l+2}{2}$ to obtain \begin{align} &{\rm S}_{3l+1}\equiv {\rm C}+\frac {l+2} {2}\cdot \vv{e_k}; & \\ &{\rm S}_{3l+2}\equiv {\rm C}+\left(1+\frac{1}{l}\right)\cdot \vv{s_l} & {\rm S}_{3l+1}+\frac{l+1}{2l+3}\cdot\vv{{\rm S}_{3l+1}{\rm S}_{3l+2}}\equiv{\rm V}_{4l+1}; \nonumber \\ &{\rm S}_{3l+3}\equiv{\rm C}-\left(1+\frac{1}{l}\right)\cdot \vv{s_l} & {\rm S}_{3l+2}+\frac{l+1}{2l+3}\cdot\vv{{\rm S}_{3l+2}{\rm S}_{3l+3}}\equiv{\rm V}_{4l+2}; \nonumber \\ & &{\rm S}_{3l+2}+\frac{l+2}{2l+3}\cdot\vv{{\rm S}_{3l+2}{\rm S}_{3l+3}}\equiv{\rm V}_{4l+3}; \nonumber \\ &{\rm S}_{3l+4}\equiv {\rm C}+\frac {l+3} {2}\cdot \vv{e_k} & {\rm S}_{3l+3}+\frac{l+2}{2l+3}\cdot\vv{{\rm S}_{3l+3}{\rm S}_{3l+4}}\equiv{\rm V}_{4l+4}. \nonumber \end{align} Therefore, for any given $k \in \mathbb{N}-\{0,1\}$, we have provided a perfect covering cycle for $H(2,k)$ which is characterized by a link-length of $3 \cdot 2^{k-2}$ and such that no Steiner point is visited more than once, with the only exception of the starting/ending point, ${\rm S}_1 \equiv {\rm S}_{1+3 \cdot 2^{k-2}}$. This concludes the proof of Corollary \ref{upper bound 2}.
\end{proof} \section{Conclusion} Although for any $H(2,k)$ we have constructively shown the existence of perfect covering cycles whose link-length is equal to $3 \cdot 2^{k-2}$, the problem of finding an analogous formula concerning optimal covering paths for any given set $H(n,k):=\{1, 2, \dots, n\} \times \{ 1, 2, \dots, n\} \times \dots \times \{ 1, 2, \dots, n\} \subset \mathbb{R}^k$, such that $n \geq 4 \wedge k \geq 3$, remains completely open \cite{Ripa:22} (e.g., we can only say that $h(4,3) \in \{21, 22, 23\}$ \cite{OEIS:31}). \section*{Acknowledgments} We sincerely thank Luca Onnis for his kind assistance during the initial phase of the present paper. \bibliographystyle{plain}
https://arxiv.org/abs/1602.05908
Efficient approaches for escaping higher order saddle points in non-convex optimization
Local search heuristics for non-convex optimization are popular in applied machine learning. However, in general it is hard to guarantee that such algorithms even converge to a local minimum, due to the existence of complicated saddle point structures in high dimensions. Many functions have degenerate saddle points such that the first and second order derivatives cannot distinguish them from local optima. In this paper we use higher order derivatives to escape these saddle points: we design the first efficient algorithm guaranteed to converge to a third order local optimum (while existing techniques are at most second order). We also show that it is NP-hard to extend this further to finding fourth order local optima.
\section{Algorithm for Finding Third Order Optimal Points} \label{sec:alg} We design an algorithm that is guaranteed to converge to a third order local minimum. Throughout this section we assume both Assumptions~\ref{assump:lipschitzhessian} and \ref{assump:lipthird}\footnote{Note that we actually only care about a {\em level set} ${\cal L} = \{x|f(x) \le f(x^{(0)})\}$; as long as this set is bounded, Assumption~\ref{assump:lipschitzhessian} follows from Assumption~\ref{assump:lipthird}.}. The main intuition of the algorithm is similar to the proof of Theorem~\ref{thm:thirdcondition}: the algorithm tries to make improvements using first, second or third order information. However, the nature of the third order condition makes it challenging for the algorithm to guarantee progress. Consider a potential local minimum point $x$. It is very easy to check whether $\nabla f(x) \ne 0$ or $\lambda_{\min}(\nabla^2 f(x)) < 0$, and to make progress using the corresponding directions. However, to verify Condition 3 in Definition~\ref{cond:third}, we need to do it in the right subspace. The na\"ive guess is that we should take the eigensubspace of $\nabla^2 f(x)$ with eigenvalue at most 0. However, this is not correct because even if $x$ is a second order local minimum that does not satisfy the third order condition, it is still possible to have a sequence of $x^{(i)}$'s that converge to $x$ with all $\nabla^2 f(x^{(i)})$ being {\em strictly} positive definite. Hence all the $x^{(i)}$'s appear to satisfy Condition 3 in Definition~\ref{cond:third}. We do not want the algorithm to spend too much time around this point $x$, so we need to identify a subspace that may have some positive eigenvalues.
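As a toy illustration of the degenerate saddle points at stake (our own example, not from the text): the function $f(x,y) = x^2 + y^3$ has a critical point at the origin where first and second order information looks locally optimal, while the third derivative in the Hessian's null direction reveals a descent direction.

```python
def f(x, y):
    return x ** 2 + y ** 3

# At the origin: grad f = (2x, 3y^2) = (0, 0), and the Hessian diag(2, 6y)
# equals diag(2, 0), which is positive semidefinite -- so the first and
# second order tests both pass.  Yet d^3 f / dy^3 = 6 != 0 along the
# Hessian's null direction e_y, and a small step along -e_y decreases f:
assert f(0.0, -0.1) < f(0.0, 0.0)
```

This is exactly the situation where the eigensubspace with nonpositive eigenvalues has to be examined through third order information.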
In order to make sure we can find a vector for which the contribution from the third order term is larger than that of the second order term, we define the {\em competitive subspace} below: \begin{definition}[eigensubspace] For any symmetric matrix $M$, let its eigendecomposition be $M = \sum_{i=1}^n \lambda_i v_i v_i^\top$ (where the $\lambda_i$'s are eigenvalues and $\|v_i\| = 1$), and let ${\cal S}_{\tau}(M)$ denote the span of the eigenvectors with eigenvalue at most $\tau$. That is, $$ {\cal S}_{\tau}(M) = \mbox{span}\{v_i|\lambda_i \le \tau\}. $$ \end{definition} \begin{definition}[competitive subspace]\label{def:competitive} For any $Q > 0$, and any point $z$, let the competitive subspace ${\cal S}(z)$ be the largest eigensubspace ${\cal S}_\tau(\nabla^2 f(z))$ such that, if we let $C_Q(z)$ be the norm of the third order derivatives in this subspace, $$ C_Q(z) = \|\mbox{Proj}_{{\cal S}(z)} \nabla^3 f(z)\|_F, $$ then $\tau \le C_Q(z)^2/(12LQ^2)$. If no such subspace exists, then let ${\cal S}(z)$ be empty and $C_Q(z) = 0$. \end{definition} Similar to $\mu(z)$ in Definition~\ref{def:mu}, $C_Q(z)$ can be viewed as a measure of how approximately Condition 3 in Definition~\ref{cond:third} is satisfied. If both $\mu(z)$ and $C_Q(z)$ are $0$, then the point $z$ satisfies the third order necessary conditions. Intuitively, the competitive subspace is a subspace where the eigenvalues of the Hessian are small, but the Frobenius norm of the third order derivative is large. Therefore we are likely to make progress using the third order information. The parameters in Definition~\ref{def:competitive} are set so that if there is a unit vector $u\in {\cal S}(z)$ such that $[\nabla^3 f(z)](u,u,u) \ge \|\mbox{Proj}_{{\cal S}(z)} \nabla^3 f(z)\|_F/Q$ (see Theorem~\ref{thm:approx}), then we can find a new point where the sum of the second, third and fourth order terms can be bounded (see Lemma~\ref{lem:thirdorderstep}).
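A minimal numerical sketch of how the enumeration behind Definition~\ref{def:competitive} could be organized (illustrative only; the function name, the way $L$ and $Q$ are passed, and the handling of ties are our own choices):

```python
import numpy as np

def competitive_subspace(H, T, L, Q):
    # H: Hessian (n x n), T: third-derivative tensor (n x n x n).
    # Enumerate the n nested candidate subspaces spanned by the eigenvectors
    # with smallest eigenvalues, and keep the largest one whose eigenvalue
    # threshold tau satisfies tau <= C_Q^2 / (12 * L * Q^2).
    lam, V = np.linalg.eigh(H)        # eigenvalues in ascending order
    best = None                       # None plays the role of the empty subspace
    for m in range(1, len(lam) + 1):
        P = V[:, :m]                  # basis of the m "lowest" eigenvectors
        # project T onto the subspace in every mode, then take the Frobenius norm
        Tp = np.einsum('abc,ai,bj,ck->ijk', T, P, P, P)
        C = np.linalg.norm(Tp)
        tau = lam[m - 1]
        if tau <= C ** 2 / (12 * L * Q ** 2):
            best = (P, C)
    return best
```

For instance, with $H = \mathrm{diag}(0, 10)$ and a tensor whose only nonzero entry is $T_{111} = 6$, the routine keeps the one-dimensional subspace of the zero eigenvalue with $C_Q = 6$.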
\begin{remark} The competitive subspace in Definition~\ref{def:competitive} can be computed in polynomial time, see Algorithm~\ref{alg:compet}. The main idea is that we can compute the eigendecomposition of the Hessian $\nabla^2 f(z) = \sum_{i=1}^n \lambda_i v_iv_i^\top$, and then there are only $n$ different subspaces ($\mbox{span}\{v_n\},\mbox{span}\{v_{n-1},v_n\},$ $\ldots,\mbox{span}\{v_1,v_2,\ldots v_n\}$). We can enumerate over all of them, and check for which subspaces the norm of the third order derivative is large. \end{remark} Now we are ready to state the algorithm. The algorithm is a combination of the cubic regularization algorithm and a third order step that tries to use the third order derivative in order to improve the function value in the competitive subspace. \begin{algorithm} \begin{algorithmic} \FOR{$i = 0$ \TO $t-1$} \STATE $z^{(i)} = \mbox{CubicReg}(x^{(i)})$. \STATE Let $\epsilon_1 = \|\nabla f(z^{(i)})\|$, \STATE Let ${\cal S}(z), C_Q(z)$ be the competitive subspace of $f(z)$ (Definition~\ref{def:competitive}). \IF {$C_Q(z)\ge Q(24\epsilon_1 L)^{1/3}$} \STATE $u = \mbox{Approx}(\nabla^3 f(z^{(i)}), {\cal S})$. \STATE $x^{(i+1)} = z^{(i)}-\frac{C_Q(z)}{LQ}u$. \ELSE \STATE $x^{(i+1)} = z^{(i)}$. \ENDIF \ENDFOR \end{algorithmic} \caption{Third Order Optimization}\label{alg:main} \end{algorithm} Suppose we have the following approximation guarantee for Algorithm~\ref{alg:approx}: \begin{algorithm} \begin{algorithmic} \REQUIRE Tensor $T$, subspace ${\cal S}$. \ENSURE unit vector $u\in {\cal S}$ such that $T(u,u,u) \ge \|\mbox{Proj}_{\cal S} T\|_F/Q$. \REPEAT \STATE Let $\hat{u}$ be a random standard Gaussian in subspace ${\cal S}$. \STATE Let $u = \hat{u}$ \UNTIL $|T(u,u,u)| \ge \|\mbox{Proj}_{\cal S} T\|_F/Bn^{1.5}$ for a fixed constant $B$ \RETURN $u$ if $T(u,u,u)>0$ and $-u$ otherwise.
\end{algorithmic} \caption{Approximate Tensor Norms}\label{alg:approx} \end{algorithm} \begin{theorem}\label{thm:approx} There is a universal constant $B$ such that the expected number of iterations of Algorithm~\ref{alg:approx} is at most $2$, and the output of $\mbox{Approx}$ is a unit vector $u$ that satisfies $T(u,u,u) \ge \|\mbox{Proj}_{\cal S} T\|_F/Q$ for $Q = Bn^{1.5}$. \end{theorem} The proof of this theorem follows directly from anti-concentration (see Appendix~\ref{app:approx}). Notice that there are other algorithms that can potentially give a better approximation (a lower value of $Q$), which would improve the rate of our algorithm. However, in this paper we do not try to optimize the dependency on the dimension $n$; that is left as an open problem. By the choice of the parameters in the algorithm, we can get the following guarantee (which is analogous to Theorem~\ref{thm:cubicreg}): \begin{lemma}\label{lem:thirdorderstep} Suppose $C_Q(z) \ge Q(24\epsilon_1 L)^{1/3}$, $u$ is a unit vector in ${\cal S}(z)$ and $[\nabla^3 f(z)](u,u,u) \ge \|\mbox{Proj}_{{\cal S}(z)} \nabla^3 f(z)\|_F/Q$. Let $x' = z- (C_Q(z)/LQ) \cdot u$. Then we have $$ f(x') \le f(z) - \frac{C_Q(z)^4}{24L^3Q^4}. $$ \end{lemma} \begin{proof} Let $\epsilon = C_Q(z)/LQ$; then by Lemma~\ref{lem:lipbound} we know $$ f(x') \le f(z) - \frac{\epsilon^3C_Q(z)}{6Q} + \epsilon_1 \epsilon + \epsilon_2 \epsilon^2/2 + L\epsilon^4/24. $$ Here $\epsilon_1 = \|\nabla f(z)\|$, and $\epsilon_2 \le \frac{C_Q(z)^2}{12 LQ^2}$ by the construction of the subspace.
By the choice of parameters, we know the terms $\epsilon_1\epsilon, \epsilon_2 \epsilon^2/2, L\epsilon^4/24$ are all bounded by $\frac{\epsilon^3C_Q(z)}{24Q}$, therefore $$ f(x') \le f(z) - \frac{\epsilon^3C_Q(z)}{24Q} =f(z) - \frac{C_Q(z)^4}{24L^3Q^4}. $$ \end{proof} Using this lemma, and Theorem~\ref{thm:cubicreg} for cubic regularization, we can show that both progress measures go to 0 as the number of steps increases (this is analogous to Theorem~\ref{thm:cubicreg2}). \begin{theorem} \label{thm:thirdfinite} Suppose the algorithm starts at $x_0$, and $f$ attains its global minimum value $f(x^*)$. Then in one of the $t$ iterations we have \begin{enumerate} \item $\mu(z) \le \left(\frac{12(f(x_0)-f(x^*))}{Rt}\right)^{1/3}$. \item $C_Q(z) \le \max\left\{Q(24\|\nabla f(z)\| L)^{1/3}, Q\left(\frac{24L^3(f(x_0)-f(x^*))}{t} \right)^{1/4}\right\}.$ \end{enumerate} \end{theorem} Recall that $\mu(z) = \max\left\{\sqrt{\frac{1}{R}\|\nabla f(z)\|}, -\frac{2}{3R}\lambda_n(\nabla^2 f(z))\right\}$ intuitively measures how much first and second order progress the algorithm can make. The value $C_Q(z)$, as defined in Definition~\ref{def:competitive}, is a measure of how much third order progress the algorithm can make. The theorem shows both values go to 0 as $t$ increases (note that even the first term $Q(24\|\nabla f(z)\| L)^{1/3}$ in the bound for $C_Q(z)$ goes to 0, because $\|\nabla f(z)\|$ goes to 0). \begin{proof} By the guarantees of Theorem~\ref{thm:cubicreg} and Lemma~\ref{lem:thirdorderstep}, we know the sequence of points $x^{(0)}, z^{(0)}, \ldots, x^{(i)}, z^{(i)}, \ldots$ has non-increasing function values. Also, $$ \sum_{i=1}^t \left(f(x^{(i-1)}) - f(x^{(i)})\right) \le f(x_0) - f(x^*). $$ So there must be an iteration where $f(x^{(i-1)}) - f(x^{(i)}) \le \frac{f(x_0) - f(x^*)}{t}$. If $\mu(z) > \left(\frac{12(f(x_0)-f(x^*))}{Rt}\right)^{1/3}$, then Theorem~\ref{thm:cubicreg} implies $f(x^{(i-1)}) - f(z^{(i-1)}) > \frac{f(x_0) - f(x^*)}{t}$, which is impossible.
On the other hand, if $C_Q(z) > \max\left\{Q(24\|\nabla f(z)\| L)^{1/3}, Q\left(\frac{24L^3(f(x_0)-f(x^*))}{t} \right)^{1/4}\right\}$, then the third order step makes progress, and we know $f(z^{(i-1)}) - f(x^{(i)}) > \frac{f(x_0) - f(x^*)}{t}$, which is again impossible. \end{proof} We can also show that when $t$ goes to infinity the algorithm converges to a third order local minimum (similar to Theorem~\ref{thm:cubicreg3}). \begin{theorem} \label{thm:thirdlimit} When $t$ goes to infinity, the values $f(x^{(t)})$ converge. If the level set ${\cal L}(f(x_0)) = \{x|f(x)\le f(x_0)\}$ is compact, then the sequence of points $x^{(t)}, z^{(t)}$ has nonempty limit points, and every limit point $x$ satisfies the third order necessary conditions. \end{theorem} \begin{proof} By Theorem~\ref{thm:cubicreg} and Lemma~\ref{lem:thirdorderstep}, we know the function value is non-increasing and has a lower bound $f(x^*)$, so the value must converge. The existence of limit points is guaranteed by the compactness of the level set. The only thing left to prove is that every limit point $x$ must satisfy the third order necessary conditions. Notice that $f(x^{(0)}) - \lim_{t\to \infty} f(x^{(t)}) \ge \sum_{i=0}^\infty \frac{R\mu(z^{(i)})^3}{12}+\frac{C_Q(z^{(i)})^4}{24L^3 Q^4}$, so $\lim_{i\to \infty}\mu(z^{(i)}) = 0$ and $\lim_{i\to\infty} C_Q(z^{(i)}) = 0$. We further know $\lim_{i\to\infty} \|z^{(i)} - x^{(i)}\| = 0$. Therefore, without loss of generality, a limit point $x$ is also a limit point of the sequence $z^{(i)}$, and $\lim_{i\to \infty} \|\nabla f(z^{(i)})\| = 0$. Also, we know $H = \nabla^2 f(x)$ is PSD, because otherwise points near $x$ would have $\mu(z^{(i)})$ bounded away from zero and $x$ could not be a limit point. Now we only need to check the third order condition. Assume towards contradiction that the third order condition does not hold. Then we know the Hessian has a subspace ${\cal P}$ with $0$ eigenvalues, and the third order derivative has norm at least $\epsilon$ in this subspace.
By matrix perturbation theory, when $z$ is very close to $x$, ${\cal P}$ is very close to ${\cal S}_\epsilon(z)$ as $\epsilon\to 0$; on the other hand, the third order derivative at $z$ also converges to $\nabla^3 f(x)$ (by the Lipschitz condition), so ${\cal S}_\epsilon(z)$ will eventually be a competitive subspace and $C_Q(z)$ is at least $\epsilon/2$ for all $z$ sufficiently close to $x$. However, this is impossible as $\lim_{i\to\infty} C_Q(z^{(i)}) = 0$.
\end{proof}
\begin{remark}
Note that not every third order local minimum can be a limit point of Algorithm~\ref{alg:main}. This is because if $f(x)$ has very large third order derivatives but a relatively small Hessian, even though the Hessian might be positive definite (so $x$ is in fact a local minimum), Algorithm~\ref{alg:main} may still find a non-empty competitive subspace, and will be able to reduce the function value and escape from this point. An example is the function $f(x) = x^2 - 100x^3 + x^4$: $x = 0$ is a local minimum, but the algorithm can escape from it and find the global minimum.
\end{remark}
In the most general case it is hard to get a convergence rate for the algorithm because the function may have higher order local minima. However, if the function has nice properties then it is possible to prove polynomial rates of convergence.
\begin{definition}[strict third order saddle] We say a function has the {\em strict third order saddle} property if there exist constants $\alpha, c_1, c_2, c_3, c_4 > 0$ such that for any point $x$ one of the following is true:
\begin{enumerate}
\setlength{\itemsep}{0pt}
\item $\|\nabla f(x)\| \ge c_1$.
\item $\lambda_n(\nabla^2 f(x)) \le -c_2$.
\item $C_Q(x) \ge c_3$.
\item There is a local minimum $x^*$ such that $\|x-x^*\| \le c_4$ and the function is $\alpha$-strongly convex restricted to the region $\{x|\|x-x^*\| \le 2c_4\}$.
\end{enumerate}
\label{def:strictthirdsaddle}
\end{definition}
This is a generalization of the strict saddle functions defined in \cite{ge2015escaping}.
Even if a function has degenerate saddle points, it may still satisfy this condition.
\begin{corollary}
When $t \ge \mbox{poly}(n, L, R, Q, f(x_0) - f(x^*)) \max\{(1/c_1)^{1.5}, (1/c_2)^3, (1/c_3)^{4.5}\}$, there must be a point $z^{(i)}$ with $i\le t$ that is in case 4 in Definition~\ref{def:strictthirdsaddle}.
\end{corollary}
\begin{proof}
We use $\tilde{O}$ notation to focus on the polynomial dependency on $t$ and hide polynomial dependencies on all other parameters. By Theorem~\ref{thm:thirdfinite}, we know there must be a $z^{(i)}$ which satisfies $\mu(z^{(i)}) \le \tilde{O}((1/t)^{1/3})$ and $C_Q(z) \le \tilde{O}(\max\{(1/t)^{1/4}, \|\nabla f(z)\|^{1/3}\})$. By the definition of $\mu$ (Definition~\ref{def:mu}), we know $\|\nabla f(z)\| \le \tilde{O}(\mu(z)^2) = \tilde{O}(t^{-2/3})$ and $\lambda_n(\nabla^2 f(z)) \ge -\tilde{O}(t^{-1/3})$. Using the fact that $\|\nabla f(z)\| \le \tilde{O}(\mu(z)^2) = \tilde{O}(t^{-2/3})$, we know
$$C_Q(z) \le \tilde{O}(\max\{(1/t)^{1/4}, \|\nabla f(z)\|^{1/3}\}) = \tilde{O}(t^{-2/9}).$$
Therefore, when $t \ge \mbox{poly}(n, L, R, Q, f(x_0) - f(x^*)) \max\{(1/c_1)^{1.5}, (1/c_2)^3, (1/c_3)^{4.5}\}$, the point $z$ must satisfy
\begin{enumerate}
\setlength{\itemsep}{0pt}
\item $\|\nabla f(z)\| < c_1$;
\item $\lambda_n (\nabla^2 f(z)) > -c_2$;
\item $C_Q(z) < c_3$.
\end{enumerate}
Therefore the first three cases in Definition~\ref{def:strictthirdsaddle} cannot happen and $z$ must be near a local minimum.
\end{proof}
\section{Omitted Proofs}
\subsection{Omitted Proofs in Section~\ref{sec:condition}}
\label{app:third}
\begin{lemma} [Lemma~\ref{lem:lipbound} Restated]
For any $x,y$, we have
$$ \left|f(y) - f(x) - \inner{\nabla f(x), y-x} - \frac{1}{2}(y-x)^\top \nabla^2 f(x) (y-x) - \frac{1}{6} \nabla^3 f(x)(y-x, y-x, y-x)\right| \le \frac{L}{24}\|y-x\|^4. $$
\end{lemma}
\begin{proof}
The proof follows by repeatedly integrating from $x$ to $y$. First we have
$$ \nabla^2 f(x+u(y-x)) = \nabla^2 f(x) + \left[\int_0^u \nabla^3 f(x+v(y-x)) dv\right](y-x).
$$
By the Lipschitz condition on the third order derivative, we know
$$ \|\nabla^3 f(x+v(y-x)) - \nabla^3 f(x)\|_F \le Lv\|x-y\|. $$
Combining the two we have
$$ \nabla^2 f(x+u(y-x)) = \nabla^2 f(x) + u[\nabla^3 f(x)](y-x) + h(u), $$
where $h(u) = \left[\int_0^u (\nabla^3 f(x+v(y-x)) - \nabla^3 f(x)) dv\right](y - x)$, so $\|h(u)\|_F \le \frac{L}{2}u^2\|x-y\|^2$. Now we use the integral for the gradient of $f$:
\begin{align*}
\nabla f(x+t(y-x)) & = \nabla f(x) + \left[\int_0^t \nabla^2 f(x+u(y-x))du\right](y-x) \\
&= \nabla f(x) + t\nabla^2 f(x) (y-x) + \frac{t^2}{2}[\nabla^3 f(x)](y-x,y-x) + g(t),
\end{align*}
where $g(t) = \left[\int_0^t h(u) du\right](y-x)$; by the bound on $h(u)$ we know $\|g(t)\| \le \frac{L}{6}t^3\|x-y\|^3$. Finally, we have
\begin{align*}
f(y) & = f(x) + \int_0^1 \inner{\nabla f(x+t(y-x)),y-x} dt \\
&= f(x) + \inner{\nabla f(x), y-x} + \frac{1}{2}(y-x)^\top \nabla^2 f(x) (y-x) + \frac{1}{6} \nabla^3 f(x)(y-x)^{\otimes 3} + \int_0^1 \inner{g(t),y-x} dt.
\end{align*}
The last term is bounded by $\|y-x\| \int_0^1\|g(t)\| dt \le \frac{L}{24}\|x-y\|^4$.
\end{proof}
\begin{theorem}[Theorem~\ref{thm:thirdcondition} restated]
Given a function $f$ that satisfies Assumption~\ref{assump:lipthird}, a point $x$ is third order optimal if and only if it satisfies Condition~\ref{cond:third}.
\end{theorem}
\begin{proof}
(necessary condition $\to$ third order minimal) By Lemma~\ref{lem:lipbound} we know
$$f(y) \ge f(x) + \inner{\nabla f(x), y-x} + \frac{1}{2}(y-x)^\top \nabla^2 f(x) (y-x) + \frac{1}{6} \nabla^3 f(x)(y-x)^{\otimes 3}- \frac{L}{24}\|y-x\|^4.$$
Now let $\alpha$ be the smallest nonzero eigenvalue of $\nabla^2 f(x)$. Let $U$ be the nullspace of $\nabla^2 f(x)$ and $V$ be its orthogonal complement. We break $\nabla^3 f(x)$ into two tensors $G_1$ and $G_2$, where $G_1$ is the projection to $V\otimes V\otimes V$ and $V\otimes V\otimes U$ (and its symmetries), and $G_2$ is the projection to $V\otimes U\otimes U$ (and its symmetries).
Note that $\nabla^3 f(x) = G_1+G_2$ because the projection on $U\otimes U\otimes U$ is 0 by the third condition. Let $\beta$ be the maximum of the injective norms of $G_1$ and $G_2$. Now we know for any $u\in U$ and $v\in V$,
$$ f(x+u+v) - f(x) \ge \frac{1}{2} \alpha \|v\|^2 - \frac{\beta}{6} \|u\| \|v\|^2 - \frac{\beta}{6} \|u\|^2 \|v\| - \frac{L}{24}\|u+v\|^4. $$
Now, if $\epsilon \le \alpha/\beta$, because $\|u\| \le \epsilon$ it is easy to see that the sum of the first two terms is at least $\frac{1}{3} \alpha \|v\|^2$. Now we can take the minimum of
$$ \frac{\alpha}{3} \|v\|^2 - \frac{\beta}{6}\|u\|^2 \|v\|. $$
The minimum is achieved when $\|v\| = \beta\|u\|^2/4\alpha$ and the minimum value is $- \beta^2\|u\|^4/48\alpha$. Therefore when $\|u+v\|\le \alpha/\beta$ we have
$$ f(x+u+v) - f(x) \ge - \left(\frac{\beta^2}{\alpha}+\frac{L}{24}\right) \|u+v\|^4. $$
(third order minimal$\to$necessary condition) Assume towards contradiction that the necessary condition is not satisfied, but the point $x$ is third order local optimal. If the necessary condition is not satisfied, then one of the following three cases happens:
In the first case the gradient $\nabla f(x) \ne 0$. In this case, if we let $L'$ be an upper bound on the operator norms of the second and third order derivatives, then we know
$$ f(x-\epsilon \nabla f(x)) \le f(x) - \epsilon \|\nabla f(x)\|^2 + \frac{\epsilon^2L'}{2}\|\nabla f(x)\|^2 +\frac{\epsilon^3 L'}{6}\|\nabla f(x)\|^3 + \frac{\epsilon^4L}{24}\|\nabla f(x)\|^4. $$
When $\epsilon\|\nabla f(x)\| \le 1$ and $\epsilon (2L'/3+L/24) \le 1/2$, we have
$$ f(x-\epsilon \nabla f(x)) \le f(x) - \frac{\epsilon}{2} \|\nabla f(x)\|^2. $$
Therefore the point cannot be a third order local minimum.
In the second case, $\nabla f(x) = 0$ but $\lambda_{\min}(\nabla^2 f(x)) < 0$. Let $\|u\|=1$ be a unit vector such that $u^\top (\nabla^2 f(x))u = -c < 0$.
Let $L'$ be the injective norm of $\nabla^3 f(x)$, then
$$ f(x + \epsilon u) \le f(x) - \frac{c\epsilon^2}{2} + \frac{\epsilon^3 L'}{6} + \frac{\epsilon^4 L}{24}. $$
Therefore whenever $\epsilon < \min\{\sqrt{3c/L},3c/(4L')\}$ we have $f(x+\epsilon u) \le f(x) - \frac{c\epsilon^2}{4}$. The point $x$ cannot be a third order local minimum.
The third case is when $\nabla f(x) = 0$ and $\nabla^2 f(x)$ is positive semidefinite, but there is a direction $\|u\|=1$ such that $u^\top (\nabla^2 f(x)) u = 0$ and $[\nabla^3 f(x)](u,u,u) \ne 0$. Without loss of generality we assume $[\nabla^3 f(x)](u,u,u) = -c < 0$ (if it is positive we take $-u$), then
$$ f(x+\epsilon u) \le f(x) - c\epsilon^3/6 + L\epsilon^4/24. $$
Therefore whenever $\epsilon < 2c/L$ we have $f(x+\epsilon u) \le f(x) - c\epsilon^3/12$, so $x$ cannot be a third order local minimum.
\end{proof}
\subsection{Algorithm for Competitive Subspace, Proof of Theorem~\ref{thm:approx}}
\begin{algorithm}
\begin{algorithmic}
\REQUIRE Function $f$, point $z$, Hessian $M = \nabla^2 f(z)$, third order derivative $T = \nabla^3 f(z)$, approximation ratio $Q$, Lipschitz bound $L$.
\ENSURE Competitive subspace ${\cal S}(z)$ and $C_Q(z)$.
\STATE Compute the eigendecomposition $M = \sum_{i=1}^n \lambda_i v_iv_i^\top$.
\FOR{$i = 1$ \TO $n$}
\STATE Let ${\cal S} = \mbox{span}\{v_i,v_{i+1},...,v_n\}$.
\STATE Let $C_Q = \|\mbox{Proj}_{\cal S} T\|_F$.
\IF{$\frac{C_Q^2}{12LQ^2} \ge \lambda_i$}
\RETURN ${\cal S}, C_Q$.
\ENDIF
\ENDFOR
\RETURN ${\cal S} = \emptyset, C_Q = 0$.
\end{algorithmic}
\caption{Algorithm for computing the competitive subspace\label{alg:compet}}
\end{algorithm}
\begin{theorem}[Theorem~\ref{thm:approx} restated]
There is a universal constant $B$ such that the expected number of iterations of Algorithm~\ref{alg:approx} is at most $2$, and the output of $\mbox{Approx}$ is a unit vector $u$ that satisfies $T(u,u,u) \ge \|\mbox{Proj}_{\cal S} T\|_F/Q$ for $Q = Bn^{1.5}$.
\end{theorem}
\begin{proof}
We use the anti-concentration property of Gaussian random variables.
\begin{theorem}[anti-concentration\citep{carbery2001distributional}]
Let $x\in {\mathbb R}^n$ be a Gaussian variable $x\sim N(0,I)$. For any polynomial $p(x)$ of degree $d$, there exists a constant $\kappa$ such that
$$ \Pr[|p(x)|\le \epsilon \sqrt{\Var[p(x)]}] \le \kappa \epsilon^{1/d}. $$
\end{theorem}
In our case $d = 3$, and we can choose a universal constant $\epsilon$ such that the probability of $|p(x)|$ being small is bounded by $1/3$. It is easy to check that the variance is lower bounded by the squared Frobenius norm, so
$$ \Pr[|T(\hat{u},\hat{u},\hat{u})|\ge \epsilon \|\mbox{Proj}_{\cal S} T\|_F] \ge 2/3. $$
On the other hand, with high probability the norm of the Gaussian vector $\hat{u}$ is at most $2\sqrt{n}$. Therefore with probability at least $1/2$ we have both $|T(\hat{u},\hat{u},\hat{u})| \ge \epsilon \|\mbox{Proj}_{\cal S} T\|_F$ and $\|\hat{u}\| \le 2\sqrt{n}$, and hence for the normalized vector $u = \hat{u}/\|\hat{u}\|$, $|T(u,u,u)| \ge \frac{\epsilon}{8n^{1.5}} \|\mbox{Proj}_{\cal S} T\|_F$. Choosing $B = 8/\epsilon$ implies the theorem.
\end{proof}
\label{app:approx}
\subsection{Proof of Theorem~\ref{thm:hard}}
\label{app:hard}
\begin{theorem} [Theorem~\ref{thm:hard} restated]
It is NP-hard to find a fourth order local minimum of a function $f(x)$, even if $f$ is guaranteed to be well-behaved.
\end{theorem}
\begin{proof}
We reduce the problem of verifying nonnegativity of degree 4 polynomials to the problem of finding a fourth order local minimum. Given a degree 4 homogeneous polynomial $f(x)$, we can write it as a symmetric fourth order tensor $T\in {\mathbb R}^{n^4}$. Without loss of generality we can rescale $T$ so that $\|T\|_F \le 1$ and therefore $\|T\| \le 1$. Now we define the function $g(x) = f(x)+\|x\|^6$. We first show that this function is well-behaved.
\begin{claim}
$g(x)$ is well-behaved.
\end{claim}
\begin{proof}
Since $g(x)$ is a polynomial with bounded coefficients, clearly it is infinitely differentiable and satisfies condition 2. For condition 1, notice that $g(\vec{0}) = 0$, and for all $\|x\| > 1$ we have $g(x) \ge \|x\|^6-\|x\|^4 > 0$, so the global minimizer must be at a point within the unit $\ell_2$ ball. Finally, for any $\|x\| = 1$, we know $g(tx) = f(x)t^4 + t^6$, which is always increasing when $t\ge 1$ since $|f(x)| \le 1$.
\end{proof}
Next we show that if $f(x)$ is nonnegative, then $\vec{0}$ is the unique fourth order local minimizer.
\begin{claim}
If $f(x)$ is nonnegative, then $\vec{0}$ is the unique fourth order local minimizer of $g(x)$.
\end{claim}
\begin{proof}
Suppose $x\ne 0$ is a local minimizer of $g(x)$ of order at least 1. Let $u = x/\|x\|$. We consider the function $g(tu) = f(u)t^4 + t^6$. Clearly the only first order local minimizer of $g(tu)$ is at $t = 0$. Therefore $x$ cannot be a first order local minimizer of $g(x)$.
\end{proof}
Finally, we show that if $f(x)$ has a negative direction, then all the local minimizers of $g(x)$ must have negative value in $f$.
\begin{claim}
If $f$ takes a negative value somewhere, then every fourth order local minimum $x$ of $g(x)$ satisfies $f(x) < 0$.
\end{claim}
\begin{proof}
Suppose $x\ne 0$ is a fourth order local minimum of $g(x)$. Then in particular $t=1$ must be a fourth order local minimum of $g(tx) = f(x)t^4 +t^6\|x\|^6$. This is only possible if $f(x) < 0$ (the derivative $4f(x)t^3 + 6\|x\|^6 t^5$ must vanish at $t=1$, which forces $f(x) = -\frac{3}{2}\|x\|^6 < 0$). On the other hand, for $x = 0$, suppose $\|z\|=1$ is a direction where $f(z) < 0$; then $g(\vec{0}) - g(tz) = -f(z)t^4 - t^6 = |f(z)|t^4 - t^6 = \Omega(t^4)$ for small $t$, so $x = 0$ is not a fourth order local minimum.
\end{proof}
The theorem follows immediately from the three claims.
\end{proof}
\section{Conclusion}
Complicated structures of saddle points are a major problem for optimization algorithms. In this paper we investigate the possibilities of using higher order derivatives in order to avoid degenerate saddle points.
We give the first algorithm that is guaranteed to find a 3rd order local minimum, which can solve some problems caused by degenerate saddle points. However, we also show that the same ideas cannot be generalized to higher orders. There are still many open problems related to degenerate saddle points and higher order optimization algorithms. Are there interesting classes of functions that satisfy the strict 3rd order saddle property (Definition~\ref{def:strictthirdsaddle})? Can we design a 3rd order optimization algorithm for constrained optimization? We hope this paper inspires more research in these directions, eventually leading to efficient optimization algorithms whose performance does not suffer from degenerate saddle points.
\section{Introduction}
Recent trends in applied machine learning have been dominated by the use of large-scale non-convex optimization, e.g. deep learning. However, analyzing non-convex optimization in high dimensions is very challenging. Current theoretical results are mostly negative, concerning the hardness of reaching the globally optimal solution. Less attention is paid to the issue of reaching a locally optimal solution. In fact, even this is computationally hard in the worst case~\citep{nie2015hierarchy}.
The hardness arises due to the diversity and ubiquity of critical points in high dimensions. In addition to local optima, the set of critical points also consists of saddle points, which possess directions along which the objective value improves. Since the objective function can be arbitrarily bad at these points, it is important to develop strategies to escape them, in order to reach a local optimum. The problem of saddle points is compounded in high dimensions. Due to the curse of dimensionality, the number of saddle points grows exponentially for many problems of interest, e.g.~\citep{auer1996exponentially,cartwright2013number,auffinger2013complexity}.
Ordinary gradient descent can remain stuck at a saddle point for an arbitrarily long time before making progress. A few recent works have addressed this issue, either by incorporating second order Hessian information~\citep{nesterov2006cubic} or through noisy stochastic gradient descent~\citep{ge2015escaping}. These works, however, require the Hessian matrix at the saddle point to have a strictly negative eigenvalue, termed the {\em strict saddle} condition. The time to escape the saddle point depends (polynomially) on the magnitude of this negative eigenvalue. Some structured problems such as complete dictionary learning, phase retrieval and orthogonal tensor decomposition possess this property~\citep{sun2015nonconvex}. On the other hand, for problems without the strict saddle property, the above techniques can converge to a saddle point, which is disguised as a local minimum when only first and second order information is used. We address this problem in this work, and extend the notion of second order optimality to higher order optimality conditions. We propose a new efficient algorithm that is guaranteed to converge to a third order local minimum, and show that it is NP-hard to find a fourth order local minimum.
Our results are relevant for a wide range of non-convex problems which possess degenerate critical points. At these points, the Hessian matrix is singular. Such points arise due to symmetries in the optimization problem, e.g., permutation symmetry in a multi-layer neural network. Singularities also arise in over-specified models, where the model capacity (such as the number of neurons in neural networks) exceeds the complexity of the target function. Here, certain neurons can be eliminated (i.e. have weights set to zero), and such critical points possess the so-called {\em elimination singularity}~\citep{wei2008dynamics}. Alternatively, two neurons can have the same weight, and this is known as {\em overlap singularity}~\citep{wei2008dynamics}.
The Hessian matrix is singular at such critical points. This behavior is not limited to neural networks: it has also been studied in overspecified Gaussian mixtures, radial basis function networks, ARMA models of time series~\citep{amari2006singularities,wei2008dynamics}, and student-teacher networks, also known as {\em soft committee models}~\citep{saad1995line,inoue2003line}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{functions}
\caption{Examples of Degenerate Saddle Points: (a) Monkey Saddle $-3x^2y+y^3$, $(0,0)$ is a second order local minimum but not a third order local minimum; (b) $x^2y+y^2$, $(0,0)$ is a third order local minimum but not a fourth order local minimum; (c) ``wine bottle'', the bottom of the bottle is a connected set with degenerate Hessian; (d) ``inverted wine bottle'': the points on the circle with degenerate Hessian are actually saddle points and not local minima.}\label{fig:functions}
\end{figure}
The current trend in practice is to use overspecified models~\citep{giles2001overfitting}. Theoretically, bad local optima are guaranteed to disappear in neural networks under massive levels of overspecification~\citep{safran2015quality}. On the other hand, as discussed above, the saddle point problem is compounded in these overspecified models. Empirically, the presence of singular saddle points is found to slow down learning substantially~\citep{saad1995line,inoue2003line,amari2006singularities,wei2008dynamics}. Intuitively, these singular saddle points are surrounded by {\em plateaus} or flat regions with a sub-optimal objective value. In these regions, neither gradient nor Hessian information leads to a direction that improves the function value. Therefore these regions can ``fool'' ordinary first and second order algorithms, which may remain stuck there for long periods of time. Higher order derivatives are needed to classify the point as either a local optimum or a saddle point.
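To make the failure of first and second order tests concrete, the following small numerical sketch (ours, for illustration only; not code from the paper) checks the monkey saddle from Figure~\ref{fig:functions}(a): the gradient and Hessian both vanish at the origin, yet the function decreases cubically along a suitable direction.

```python
import numpy as np

# Monkey saddle f(x, y) = -3x^2 y + y^3: both the gradient and the
# Hessian vanish at the origin, so no first or second order test can
# detect that the origin is not a local minimum.
def f(x, y):
    return -3 * x**2 * y + y**3

def grad(x, y):
    return np.array([-6 * x * y, -3 * x**2 + 3 * y**2])

def hess(x, y):
    return np.array([[-6 * y, -6 * x],
                     [-6 * x,  6 * y]])

assert np.allclose(grad(0.0, 0.0), 0.0)
assert np.allclose(hess(0.0, 0.0), 0.0)

# Third order behavior reveals a descent direction: along the ray
# (t, t) the value is f(t, t) = -2 t^3 < 0 = f(0, 0).
for t in [0.1, 0.01, 0.001]:
    assert f(t, t) < f(0.0, 0.0)
```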
In this work, we tackle this challenging problem of escaping such higher order saddle points.
\subsection{Summary of Results}
We call a point $x$ a $p^{{\mbox{\tiny th}}}$ order local minimum if for any nearby point $y$, $f(x) - f(y) \le o(\|x-y\|^p)$ (see Definition~\ref{def:pthorder}). We give a necessary and sufficient condition for a point $x$ to be a third order local minimum (see Section~\ref{sec:condition}). Similar conditions (for even higher order) have been discussed in previous works; however, their algorithmic implications were not known. We design an algorithm that is guaranteed to find a third order local minimum.
\begin{theorem} (Informal)
There is an algorithm that always converges to a third order local minimum (see Theorem~\ref{thm:thirdlimit}). Also, in polynomial time the algorithm can find a point that is ``similar'' to a third order local minimum (see Theorem~\ref{thm:thirdfinite}).
\end{theorem}
By ``similar'' we mean the point $x$ approximately satisfies the necessary and sufficient condition for a third order local minimum (see Condition~\ref{cond:third}): the gradient $\nabla f(x)$ is small, the Hessian $\nabla^2 f(x)$ is almost positive semidefinite (p.s.d.), and in every subspace where the Hessian is small, the norm of the third order derivative is also small. To the best of our knowledge this is the first algorithm that is guaranteed to converge to a third order local minimum.
The algorithm alternates between a second order step (for which we use cubic regularization~\citep{nesterov2006cubic}) and a third order step. The third order step first identifies a ``competitive subspace'' where the third order derivative has a much larger norm than the second order derivative. It then tries to find a good direction in this subspace to make an improvement. For more details see Section~\ref{sec:alg}.
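The alternation just described can be summarized by the following skeleton (a sketch of ours, not the paper's implementation; `cubic_reg_step` and `third_order_step` are hypothetical placeholders for the subroutines detailed in Sections~\ref{sec:overview} and~\ref{sec:alg}):

```python
# Hypothetical skeleton of the alternating scheme: each iteration runs a
# second order (cubic regularization) step followed by a third order step.
def alternating_minimize(f, x0, t, cubic_reg_step, third_order_step):
    x = x0
    for _ in range(t):
        z = cubic_reg_step(f, x)    # second order progress, measured by mu(z)
        x = third_order_step(f, z)  # third order progress, measured by C_Q(z)
    return x

# Tiny usage demo on f(x) = x^2 with toy stand-ins for the subroutines:
# a plain gradient step, and an identity third order step.
x_final = alternating_minimize(
    lambda x: x * x, 1.0, 50,
    cubic_reg_step=lambda f, x: x - 0.2 * 2 * x,  # gradient step, lr 0.2
    third_order_step=lambda f, z: z)
assert abs(x_final) < 1e-6
```

The real subroutines carry progress guarantees (Theorem~\ref{thm:cubicreg} and Lemma~\ref{lem:thirdorderstep}); the skeleton only shows how the two phases interleave.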
We also show that it is NP-hard to find a fourth order local minimum:
\begin{theorem} (Informal)
Even for a well-behaved function, it is NP-hard to find a fourth order local minimum (see Theorem~\ref{thm:hard}).
\end{theorem}
\subsection{Related Work}
A popular approach to overcoming saddle points is to incorporate second order information. However, the popular second order approach of Newton's method is not suitable since it converges to an arbitrary critical point, and does not distinguish between a local minimum and a saddle point. Directions along eigenvectors of the Hessian matrix with negative eigenvalues help in escaping the saddle point. A simple solution is then to use these directions whenever gradient descent improvements are small (which signals the approach towards a critical point)~\citep{frieze1996learning,vempala2011structure}. A more elegant framework is the so-called trust region method~\citep{dauphin2014identifying,sun2015nonconvex}, which involves optimizing the second order Taylor approximation of the objective function in a local neighborhood of the current point. Intuitively, this objective ``switches'' smoothly between first order and second order updates.~\citet{nesterov2006cubic} propose adding a cubic regularization term to this Taylor approximation. In a beautiful result, they show that in each step this cubic regularized objective can be solved optimally due to hidden convexity, and that overall the algorithm converges to a local optimum in bounded time. We give an overview of this algorithm in Section~\ref{sec:overview}. \cite{baes2009estimate} generalizes this idea to use higher order Taylor expansions; however, the optimization problem is intractable even for the third order Taylor expansion with a quartic regularizer.
\citet{ge2015escaping} recently showed that it is possible to escape saddle points using only first order information based on noisy stochastic gradient descent (SGD), in polynomial time in high dimensions.
\citet{lee2016gradient} showed that even without adding noise, in the limit gradient descent converges to a (second order) local minimum with random initialization. In many applications, these first-order algorithms are far cheaper than the computation of the Hessian eigenvectors.~\citet{nie2015hierarchy} proposes using a hierarchy of semi-definite relaxations to compute all the local optima which satisfy first and second order necessary conditions.
All the above works deal with local optimality based on second order conditions. When the Hessian matrix is singular and p.s.d., higher order derivatives are required to determine whether it is a local optimum or a saddle point. Higher order optimality conditions, both necessary and sufficient, have been characterized before, e.g.~\citep{bernstein1984systematic,warga1986higher}. But these conditions are not efficiently computable, and it is NP-hard to determine local optimality given such information about higher order derivatives~\citep{murty1987some}.
\section{Hardness of Finding a Fourth Order Local Minimum}
In this section we show it is hard to find a fourth order local minimum even if the function we consider is very well-behaved.
\begin{definition}[Well-behaved function]
We say a function $f$ is well-behaved if it is infinitely differentiable and satisfies:
\begin{enumerate}
\setlength{\itemsep}{0pt}
\item $f(x)$ has a global minimizer at some point $\|x\|\le 1$.
\item $f(x)$ has bounded first five derivatives for $\|x\| \le 1$.
\item For any direction $\|x\| = 1$, $f(tx)$ is increasing for $t \ge 1$.
\end{enumerate}
\end{definition}
Clearly, all local minimizers of a well-behaved function lie within the unit $\ell_2$ ball, and $f(x)$ is smooth with bounded derivatives within the unit $\ell_2$ ball. These functions also satisfy Assumptions~\ref{assump:lipschitzhessian} and \ref{assump:lipthird}.
All the algorithms mentioned in previous sections can work in this case and find a local minimum up to order 3. However, this is not possible for fourth order.
\begin{theorem} \label{thm:hard}
It is NP-hard to find a fourth order local minimum of a function $f(x)$, even if $f$ is guaranteed to be well-behaved.
\end{theorem}
The main idea of the proof comes from the fact that we cannot even verify the nonnegativity of a degree 4 polynomial (hence there are cases where we cannot verify whether a point is a fourth order local minimum or not).
\begin{theorem}\label{thm:hardverify}\cite{nesterov2000squared,hillar2013most}
It is NP-hard to tell whether a degree 4 homogeneous polynomial $f(x)$ is nonnegative.
\end{theorem}
\begin{remark}
The NP-hardness of deciding nonnegativity of degree 4 polynomials has been proved in several ways. In \cite{nesterov2000squared} the reduction is from the SUBSET SUM problem, which results in a polynomial that can have exponentially large coefficients and does not rule out an FPTAS. The reduction in \cite{hillar2013most}, on the other hand, relies on the hardness of copositive matrices, which in turn depends on the hardness of INDEPENDENT SET~\citep{dickinson2014computational}. This reduction gives a polynomial whose coefficients can be bounded by $\mbox{poly}(n)$, and a polynomial gap that rules out an FPTAS.
\end{remark}
To prove Theorem~\ref{thm:hard} we only need to reduce the nonnegativity problem in Theorem~\ref{thm:hardverify} to the problem of finding a fourth order local minimum. We can convert a degree 4 polynomial to a well-behaved function by adding a degree 6 regularizer $\|x\|^6$. We shall show that when the degree 4 polynomial is nonnegative, the point $\vec{0}$ is the only fourth order local minimum; when the degree 4 polynomial has a negative direction, every fourth order local minimum must have negative function value. The details are deferred to Section~\ref{app:hard}.
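The reduction can be sketched numerically (our illustration, not the paper's code; `make_f_g` is a hypothetical helper): given a symmetric 4th order tensor $T$ representing the degree 4 homogeneous polynomial $f$, we form $g(x) = f(x) + \|x\|^6$ and check that along a negative direction of $f$ the origin fails to be a fourth order local minimum.

```python
import numpy as np

# From a symmetric 4th order tensor T (a degree 4 homogeneous
# polynomial f), build g(x) = f(x) + ||x||^6 as in the reduction.
def make_f_g(T):
    def f(x):
        return np.einsum('ijkl,i,j,k,l->', T, x, x, x, x)
    def g(x):
        return f(x) + np.linalg.norm(x) ** 6
    return f, g

# Example with a negative direction: f(x) = -x_0^4 is not nonnegative.
# Along u = e_0 we get g(t u) = -t^4 + t^6 < 0 = g(0) for small t, so
# the origin is not a fourth order local minimum of g, matching the
# second half of the argument above.
n = 3
T = np.zeros((n, n, n, n))
T[0, 0, 0, 0] = -1.0
f, g = make_f_g(T)
u = np.zeros(n); u[0] = 1.0
assert all(g(t * u) < g(np.zeros(n)) for t in (0.5, 0.1, 0.01))
```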
\section{Preliminaries}
In this section we first introduce the classification of critical points. Next, since we often work with the third order derivative, which we treat as an order 3 tensor, we introduce the necessary tensor notations.
\subsection{Critical Points}
Throughout the paper we consider functions $f:{\mathbb R}^n \to {\mathbb R}$ whose first three derivatives exist. We represent the derivatives by $\nabla f(x) \in {\mathbb R}^n$, $\nabla^2 f(x) \in {\mathbb R}^{n\times n}$ and $\nabla^3 f(x)\in {\mathbb R}^{n^3}$, where
$$ [\nabla f(x)]_i = \frac{\partial}{\partial x_i} f(x), [\nabla^2 f(x)]_{i,j} = \frac{\partial^2}{\partial x_i\partial x_j} f(x), [\nabla^3 f(x)]_{i,j,k} = \frac{\partial^3}{\partial x_i\partial x_j\partial x_k} f(x). $$
For such a smooth function $f(x)$, we say $x$ is a {\em critical point} if $\nabla f(x) = \vec{0}$. Traditionally, critical points are classified into four cases according to the Hessian matrix:
\begin{enumerate}
\setlength{\itemsep}{0pt}
\item (Local Minimum) All eigenvalues of $\nabla^2 f(x)$ are positive.
\item (Local Maximum) All eigenvalues of $\nabla^2 f(x)$ are negative.
\item (Strict saddle) $\nabla^2 f(x)$ has at least one positive and one negative eigenvalue.
\item (Degenerate) $\nabla^2 f(x)$ has only nonnegative or only nonpositive eigenvalues, with at least one eigenvalue equal to 0.
\end{enumerate}
As we shall see later in Section~\ref{sec:overview}, for the first three cases second order algorithms can either find a direction to reduce the function value (in the case of a local maximum or strict saddle), or correctly assert that the current point is a local minimum. However, second order algorithms cannot handle degenerate saddle points. Degeneracy of the Hessian indicates the presence of a {\em gutter} structure, where a set of connected points all have the same value, and all are local minima, maxima or saddle points~\citep{dauphin2014identifying}.
See for example Figure~\ref{fig:functions} (c), (d). If the Hessian at a critical point $x$ is p.s.d., even if it has 0 eigenvalues we can say the point is a second order local minimum: for any $y$ that is sufficiently close to $x$, we have $f(x) - f(y) = o(\|x-y\|^2)$. That is, although there might be a vector $y$ that makes the function value decrease, the amount of decrease is a lower order term compared to $\|x-y\|^2$. In this paper we consider higher order local minima:
\begin{definition}[$p$-th order local minimum]\label{def:pthorder}A critical point $x$ is a $p$-th order local minimum, if there exist constants $C,\epsilon> 0$ such that for every $y$ with $\|y-x\|\le \epsilon$,
$$ f(y) \ge f(x) - C \|x-y\|^{p+1}. $$
\end{definition}
Every critical point is a first order local minimum, and every point that satisfies the second order necessary condition ($\nabla f(x) = 0, \nabla^2 f(x)\succeq 0$) is a second order local minimum.
\subsection{Matrix and Tensor Notations}
For a vector $v \in {\mathbb R}^n$, we use $\|v\|$ to denote its $\ell_2$ norm. For a matrix $M \in {\mathbb R}^{n\times n}$, we use $\|M\|$ to denote its spectral (operator) norm. All the matrices we consider are symmetric matrices, and they can be decomposed using the eigendecomposition:
$$ M = \sum_{i=1}^n \lambda_i v_i v_i^\top. $$
In this decomposition the $v_i$'s are orthonormal vectors, and the $\lambda_i$'s are the eigenvalues of $M$. We always assume $\lambda_1\ge \lambda_2 \ge \ldots \ge \lambda_n$. We use $\lambda_1(M)$ to denote the largest eigenvalue and $\lambda_n(M)$ to denote the smallest eigenvalue. By the properties of symmetric matrices we also know $\|M\| = \max\{|\lambda_1(M)|, |\lambda_n(M)|\}$. We use $\|M\|_F$ to denote the Frobenius norm of the matrix, $\|M\|_F = \sqrt{\sum_{i,j\in [n]} M_{i,j}^2}$. The third order derivative is represented by an $n\times n\times n$ tensor $T$.
We use the following multilinear notation to simplify expressions involving tensors:
\begin{definition}[Multilinear notations]
Let $T\in {\mathbb R}^{n\times n\times n}$ be a third order tensor. Let $U\in {\mathbb R}^{n\times n_1}$, $V\in {\mathbb R}^{n\times n_2}$ and $W\in {\mathbb R}^{n\times n_3}$ be three matrices, then the multilinear form $T(U,V,W)$ is a tensor in ${\mathbb R}^{n_1\times n_2\times n_3}$ that is equal to
$$ [T(U,V,W)]_{p,q,r} = \sum_{i,j,k\in [n]} T_{i,j,k} U_{i,p}V_{j,q}W_{k,r}. $$
\end{definition}
In particular, for vectors $u,v,w\in {\mathbb R}^n$, $T(u,v,w)$ is a number that depends linearly on each of $u,v$ and $w$ (similar to $u^\top M v$ for a matrix); $T(u,v,I)$ is a vector in ${\mathbb R}^n$ (similar to $Mu$ for a matrix); $T(u,I,I)$ is a matrix in ${\mathbb R}^{n\times n}$.
The Frobenius norm of a tensor $T$ is defined similarly as for matrices: $\|T\|_F = \sqrt{\sum_{i,j,k\in [n]} T_{i,j,k}^2}$. The spectral norm (also called injective norm) of a tensor is defined as
$$\|T\| = \max_{\|u\| = 1, \|v\| =1, \|w\| = 1} T(u,v,w).$$
We say a tensor is symmetric if $T_{i,j,k} = T_{\pi(i,j,k)}$ for any permutation $\pi$ of the indices. For symmetric tensors the spectral norm is also equal to $\|T\| = \max_{\|u\| = 1} T(u,u,u)$. In both cases it is NP-hard to compute the spectral norm of a tensor~\citep{hillar2013most}.
We will often need to project a tensor $T$ to a subspace ${\cal P}$. Let $P$ be the projection matrix to the subspace ${\cal P}$; we use the notation $\mbox{Proj}_{\cal P} T$ to denote $T(P,P,P)$. Intuitively, $[T(P,P,P)](u,v,w) = T(Pu, Pv, Pw)$, that is, the projected tensor applied to vectors $u,v,w$ is equivalent to the original tensor applied to the projections of $u,v,w$.
\section{Overview of Nesterov's Cubic Regularization}\label{sec:overview}
In this section we review the guarantees of Nesterov's Cubic Regularization algorithm~\citep{nesterov2006cubic}.
We will use this algorithm as a key step later in Section~\ref{sec:alg}, and prove analogous results for third order local minima. The algorithm requires that the first two derivatives exist and the following smoothness constraint holds: \begin{assumption}[Lipschitz-Hessian]\label{assump:lipschitzhessian} $$ \forall x,y, \|\nabla^2 f(x) - \nabla^2 f(y)\| \le R\|x-y\|. $$ \end{assumption} At a point $x$, the algorithm tries to find a nearby point $z$ that optimizes the degree-two Taylor expansion $f(x) + \inner{\nabla f(x), z-x} + \frac{1}{2}(z-x)^\top(\nabla^2 f(x))(z-x)$, with the cubic distance $\frac{R}{6}\|z-x\|^3$ as a regularizer. See Algorithm~\ref{alg:cubic} for one iteration of the algorithm. The final algorithm generates a sequence of points $x^{(0)},x^{(1)},x^{(2)}, \ldots$ where $x^{(i+1)} = \mbox{CubicReg}(x^{(i)})$. \begin{algorithm} \begin{algorithmic} \REQUIRE function $f$, current point $x$, Hessian smoothness $R$ \ENSURE Next point $z$ that satisfies Theorem~\ref{thm:cubicreg}. \STATE Let $z = \arg\min_z f(x) + \inner{\nabla f(x), z-x} + \frac{1}{2}(z-x)^\top(\nabla^2 f(x))(z-x) + \frac{R}{6}\|z-x\|^3$. \RETURN $z$ \end{algorithmic} \caption{CubicReg~\citep{nesterov2006cubic}}\label{alg:cubic} \end{algorithm} The optimization problem that Algorithm~\ref{alg:cubic} tries to solve may seem difficult, as it has a cubic regularizer $\|z-x\|^3$. However, \cite{nesterov2006cubic} showed that it is possible to solve this optimization problem in polynomial time. For each point $z$, define $\mu(z)$ to measure how close the point $z$ is to satisfying the second order optimality condition: \begin{definition} \label{def:mu} $\mu(z) = \max\left\{\sqrt{\frac{1}{R}\|\nabla f(z)\|}, -\frac{2}{3R}\lambda_n(\nabla^2 f(z))\right\}$ \end{definition} When $\mu(z) = 0$ we know $\nabla f(z) = 0$ and $\nabla^2 f(z) \succeq 0$, which satisfies the second order necessary conditions (and in fact implies that $z$ is a second order local minimum). 
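For intuition, here is a minimal numerical sketch of one CubicReg step (ours, assuming \texttt{numpy}/\texttt{scipy}). It replaces the polynomial-time subproblem solver of \cite{nesterov2006cubic} with a brute-force grid search plus local refinement, which is only feasible in two dimensions, but it lets us observe the guarantee of Theorem~\ref{thm:cubicreg} on a toy saddle:

```python
import numpy as np
from scipy.optimize import minimize

def cubic_reg_step(f, grad, hess, x, R):
    """One (brute force) CubicReg step: minimize the regularized model
    m(s) = <g, s> + 1/2 s^T H s + R/6 ||s||^3 over s = z - x."""
    g, H = grad(x), hess(x)
    def model(s):
        return g @ s + 0.5 * s @ H @ s + R / 6 * np.linalg.norm(s) ** 3
    # coarse global grid search (2-d only), then local refinement
    grid = np.linspace(-6, 6, 121)
    S = np.array([[a, b] for a in grid for b in grid])
    s0 = S[np.argmin([model(s) for s in S])]
    s = minimize(model, s0).x
    return x + s

# toy saddle: f(x) = x1^2 - x2^2; its third derivative is 0, so any R > 0
# is a valid Hessian-Lipschitz constant
f = lambda x: x[0]**2 - x[1]**2
grad = lambda x: np.array([2*x[0], -2*x[1]])
hess = lambda x: np.diag([2.0, -2.0])

x = np.array([1.0, 1.0]); R = 1.0
z = cubic_reg_step(f, grad, hess, x, R)
# guarantee of Theorem thm:cubicreg: f(z) <= f(x) - R ||z-x||^3 / 12
assert f(z) <= f(x) - R * np.linalg.norm(z - x)**3 / 12 + 1e-8
```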
When $\mu(z)$ is small we can say that the point $z$ approximately satisfies the second order optimality condition. For one step of the algorithm the following guarantees can be proven\footnote{All of the guarantees stated here correspond to setting the regularizer $R$ to be exactly equal to the smoothness constant in Assumption~\ref{assump:lipschitzhessian}.}: \begin{theorem}\label{thm:cubicreg} \citep{nesterov2006cubic} Suppose $z = \mbox{CubicReg}(x)$, then $\|z-x\| \ge \mu(z)$ and $f(z) \le f(x) - R\|z-x\|^3/12$. \end{theorem} Using Theorem~\ref{thm:cubicreg}, \cite{nesterov2006cubic} obtained strong convergence results for the sequence $x^{(0)},x^{(1)},x^{(2)}, \ldots$ \begin{theorem}\label{thm:cubicreg2}\citep{nesterov2006cubic} If $f(x)$ is bounded below by $f(x^*)$, then $\lim_{i\to \infty} \mu(x^{(i)}) = 0$, and for any $t \ge 1$ we have $$ \min_{1\le i\le t} \mu(x^{(i)}) \le \frac{8}{3}\cdot \left(\frac{3(f(x^{(0)}) - f(x^*))}{2t R}\right)^{1/3}. $$ \end{theorem} This theorem shows that within the first $t$ iterations, we can find a point that ``looks similar'' to a second order local minimum in the sense that the gradient is small and the Hessian does not have a negative eigenvalue with large absolute value. It is also possible to prove stronger guarantees for the limit points of the sequence: \begin{theorem}\label{thm:cubicreg3}\citep{nesterov2006cubic} If the level set ${\cal L}(x^{(0)}) := \{x|f(x) \le f(x^{(0)})\}$ is bounded, then the following limit exists: $$ \lim_{i\to\infty} f(x^{(i)}) = f^*. $$ The set $X^*$ of the limit points of this sequence is non-empty. Moreover, $X^*$ is a connected set such that for any $x\in X^*$ we have $$ f(x) = f^*, \nabla f(x) = \vec{0}, \nabla^2 f(x)\succeq 0. $$ \end{theorem} Therefore the algorithm always converges to a set of points that are all second order local minima. 
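Iterating the step of Algorithm~\ref{alg:cubic} illustrates these convergence theorems numerically. The sketch below (ours, assuming \texttt{numpy}/\texttt{scipy}, using the same brute-force subproblem solver idea as above, so it is a toy and not Nesterov's efficient method) starts near the strict saddle of $f(x) = (x_1^2-1)^2 + x_2^2$ at the origin, escapes it, and drives $\mu$ toward $0$:

```python
import numpy as np
from scipy.optimize import minimize

# f(x) = (x1^2 - 1)^2 + x2^2: strict saddle at the origin, minima at (+-1, 0)
f = lambda x: (x[0]**2 - 1)**2 + x[1]**2
grad = lambda x: np.array([4*x[0]*(x[0]**2 - 1), 2*x[1]])
hess = lambda x: np.array([[12*x[0]**2 - 4, 0.0], [0.0, 2.0]])
R = 50.0  # a crude upper bound on the Hessian Lipschitz constant on the level set

def cubic_reg_step(x):
    g, H = grad(x), hess(x)
    model = lambda s: g @ s + 0.5 * s @ H @ s + R/6 * np.linalg.norm(s)**3
    grid = np.linspace(-2, 2, 81)
    S = np.array([[a, b] for a in grid for b in grid])
    s = minimize(model, S[np.argmin([model(s) for s in S])]).x
    return x + s

x = np.array([0.1, 0.5])
for _ in range(60):
    x = cubic_reg_step(x)

def mu(z):
    lam_min = np.linalg.eigvalsh(hess(z))[0]
    return max(np.sqrt(np.linalg.norm(grad(z)) / R), -2/(3*R) * lam_min)

assert mu(x) < 0.05                              # approximately second order stationary
assert abs(abs(x[0]) - 1) < 0.05 and abs(x[1]) < 0.05  # escaped the saddle at 0
```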
\section{Third Order Necessary Condition} \label{sec:condition} In this section we present a condition for a point to be a third order local minimum, and show that it is necessary and sufficient for a class of smooth functions. Proofs are deferred to Appendix~\ref{app:third}. All the functions we consider satisfy the following natural smoothness condition: \begin{assumption} [Lipschitz third Order] We assume the first three derivatives of $f(x)$ exist, and for any $x,y\in {\mathbb R}^n$, $$ \|\nabla^3 f(x) - \nabla^3 f(y)\|_F \le L\|x-y\|. $$\label{assump:lipthird} \end{assumption} Under this assumption, we state our conditions for a point to be a third order local minimum. \begin{definition}[Third-order necessary condition]\label{cond:third} A point $x$ satisfies the third-order necessary condition, if \begin{enumerate} \item $\nabla f(x) = 0$. \item $\nabla^2 f(x) \succeq 0$. \item For any $u$ that satisfies $u^\top (\nabla^2 f(x))u = 0$, $[\nabla^3 f(x)](u,u,u) = 0$. \end{enumerate} \end{definition} We first note that this condition can be verified in polynomial time. \begin{claim} The conditions in Definition~\ref{cond:third} can be verified in polynomial time given the derivatives $\nabla f(x), \nabla^2 f(x)$ and $\nabla^3 f(x)$. \end{claim} \begin{proof} It is easy to check whether $\nabla f(x) = 0$ and $\nabla^2 f(x) \succeq 0$. We can also use the SVD to compute the subspace ${\cal P}$ such that $u^\top (\nabla^2 f(x)) u = 0$ if and only if $u\in {\cal P}$. Now we can compute the projection of $\nabla^3 f(x)$ onto the subspace ${\cal P}$, and we claim the third condition is violated if and only if the projection is nonzero. If the projection is zero, then clearly $[\nabla^3 f(x)](u,u,u)$ is 0 for any $u\in {\cal P}$. 
On the other hand, if the projection $Z$ is nonzero, let $u$ be a standard Gaussian vector supported on the subspace ${\cal P}$ (unit variance in every direction of ${\cal P}$); then we know ${\mathbb E}\bigl[\bigl([\nabla^3 f(x)](u,u,u)\bigr)^2\bigr] \ge \|Z\|_F^2 > 0$, so there must exist a $u\in {\cal P}$ such that $[\nabla^3 f(x)](u,u,u) \ne 0$. \end{proof} \begin{theorem}\label{thm:thirdcondition} Given a function $f$ that satisfies Assumption~\ref{assump:lipthird}, a point $x$ is a third order local minimum if and only if it satisfies the conditions in Definition~\ref{cond:third}. \end{theorem} Before proving the theorem, we first show a bound on $f(y)$ via the Taylor expansion of $f$ at the point $x$. \begin{lemma} \label{lem:lipbound} For any $x,y$, we have $$ |f(y) - f(x) - \inner{\nabla f(x), y-x} - \frac{1}{2}(y-x)^\top \nabla^2 f(x) (y-x) - \frac{1}{6} \nabla^3 f(x)(y-x, y-x, y-x)| \le \frac{L}{24}\|y-x\|^4. $$ \end{lemma} The lemma can be proved by integrating the third order derivative three times and bounding the differences. Details are deferred to Appendix~\ref{app:third}. This lemma allows us to ignore the fourth order term $\|y-x\|^4$ and focus on the order-3 Taylor expansion when $\|y-x\|$ is small. To prove Theorem~\ref{thm:thirdcondition}, intuitively, the ``only if'' direction (local minimum to necessary condition) is easy because if any condition in Definition~\ref{cond:third} is violated, we can use that particular derivative to find a direction that improves the function value. For the ``if'' direction (necessary condition to third order local minimum), the main challenge is to balance the contribution we get from the positive part of the Hessian matrix and the third order derivatives. For details see Appendix~\ref{app:third}.
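As a sanity check, the verification procedure from the claim above can be run on $f(x) = x_1^2 + x_2^3$ at the origin, where the first two conditions hold but the third fails (the origin is a degenerate saddle). A short sketch (ours, assuming \texttt{numpy}):

```python
import numpy as np

# Derivatives of f(x) = x1^2 + x2^3 at the origin
grad = np.zeros(2)
H = np.diag([2.0, 0.0])
T = np.zeros((2, 2, 2)); T[1, 1, 1] = 6.0   # third derivative of x2^3

# Conditions 1 and 2 of Definition cond:third hold at the origin
assert np.allclose(grad, 0) and np.linalg.eigvalsh(H)[0] >= -1e-12

# Subspace P = {u : u^T H u = 0} = null space of H, computed via the SVD
U, sing, _ = np.linalg.svd(H)
B = U[:, sing < 1e-12]          # orthonormal basis of the null space
P = B @ B.T                     # projection matrix onto P
TP = np.einsum('ijk,ip,jq,kr->pqr', T, P, P, P)   # projection of the third derivative

# Nonzero projection => condition 3 is violated: not a third order local minimum
assert np.linalg.norm(TP) > 1e-8
```

Indeed, moving along $e_2$ in the negative direction decreases $f$ at third order, matching the violated condition.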
https://arxiv.org/abs/0909.5021
Properties of Translating Solutions to Mean Curvature Flow
In this paper, we study the convexity, interior gradient estimate, Liouville type theorem and asymptotic behavior at infinity of translating solutions to mean curvature flow as well as the nonlinear flow by powers of the mean curvature.
\section*{1. Introduction and Main Results} \setcounter{section}{1} \setcounter{equation}{0} In this paper, we study the convexity, interior gradient estimate, Liouville type theorem and asymptotic behavior at infinity of the solutions to the equation \begin{eqnarray} \label {1.1} a_{ij}u_{ij}:=(\delta_{ij}-\frac{u_iu_j}{1+|\nabla u|^2})u_{ij} =(\frac{1}{\sqrt{1+|\nabla u|^2}})^{\alpha-1}, \forall x \in R^n \end{eqnarray} where the constant $\alpha>0$, and we have used the notation $u_i=\frac{\partial u}{\partial x_i}$ and the summation convention. It is called the {\sl translating soliton equation} of the nonlinear evolution flow of hypersurfaces by powers ($\frac{1}{\alpha}$) of the mean curvature. This nonlinear flow was studied in [10, 11] and has important applications in minimal surfaces [2] and isoperimetric inequalities [11]. When $\alpha =1,$ (1.1) reduces to \begin{equation} \label {1.2}div (\frac{Du}{\sqrt{1+|Du|^2}})=\frac{1}{\sqrt{1+|Du|^2}}\ \ in \ \ R^n, \end{equation} which plays a key role in classifying the type II-singularities of mean curvature flows [4,5,15]. Scaling the space and time variables in a proper way near type II-singularity points on surfaces evolved by the mean curvature vector with a mean convex initial surface, Huisken-Sinestrari [4,5] and White [15] proved that the limit flow can be represented as $M_t=\{ (x, u(x)+t)\in R^{n+1}: x\in R^n, t\in R\}$ and is also a solution (called a translating solution or soliton) to mean curvature flow. Equivalently, $u$ is a solution to equation (1.2). Therefore, the classification of type II-singularities of mean curvature flow is reduced to the classification of solutions of equation (1.2). There was a well-known conjecture among geometric flow researchers which asserts that any complete strictly convex solution of (1.2) is radially symmetric [15]. 
A few years ago, Wang [13] proved this conjecture for $n=2$ and found a non-radially symmetric solution of (1.2) for $n>2.$ Sheng and Wang [12] used a direct argument to study the singularity profile in mean curvature flow. One natural question concerns the asymptotic behavior at infinity of the solutions to (1.2), or more generally to equation (1.1). Obviously, the first step is to make clear the asymptotic behavior of radially symmetric solutions of (1.1). This leads us to prove the following Theorem 1.1 in Section 2. {\bf Theorem 1.1} { \sl Equation (1.1) has a unique solution of the form $u(x)=r(|x|)$ up to a translation in $R^{n+1}.$ Moreover, the function $r\in C^2[0, \infty )$ satisfies \begin{equation} \label {1.3} r''(t)>0,\ \ and \ \ \frac{t}{n}<r'(t)\bigl(1+ (r'(t))^2 \bigr)^{\frac{\alpha-1}{2}}<\frac {t}{n-1}\end{equation} for all $t>0,$ and \begin{equation} \label {1.4} r(t)=\frac {t^2}{2(n-1)}-\ln t +C_1 - \frac {(n-1)(n-4)}{2}t^{-2}+o(t^{-2}) \quad \hbox{if } \, \alpha =1 \end{equation} \begin{equation} \label {1.5} r(t)=\frac{\alpha}{\alpha+1}\bigl( \frac {1}{n-1} \bigr)^{1/\alpha} t^{1+\frac{1}{\alpha}} - C(\alpha, n) t^{1-\frac{1}{\alpha}} +o( t^{1-\frac{1}{\alpha}} ), \quad \hbox {if } \, \alpha \not =1 \end{equation} as $t\to \infty ,$ where $C_1$ is a constant depending on $r(1)$ and $$ C(\alpha, n)=\frac{1}{\alpha-1} (n-1)^{1/\alpha} \bigl( \frac{1}{\alpha (n-1)}+ \frac{\alpha -1}{2 }\bigr). $$ } We should mention that when $\alpha=1$, (1.3) was proved in [8] and an asymptotic result similar to (1.4) was proven in [1]. Our method for general $\alpha>0$ is different and yields more properties of the solutions. See Section 2 for details. Another natural question, formulated explicitly as an open problem in [13], is whether any solution of (1.2) is strictly convex. We will prove the following Theorem 1.2, which is related to this question, in Section 3. 
\vskip0.2cm {\bf Theorem 1.2} {\sl Let $u\in C^2(R^n)$ be a convex solution of equation (1.1). If $u$ is strictly convex in some nonempty set, then $u$ is strictly convex in $R^n.$ In particular, $u$ is strictly convex in $R^n$ if $u(x)\to \infty $ as $|x|\to \infty ;$ and when $\alpha =1$ (i.e., $ u$ is a solution to (1.2)) and $u(x)\to \infty $ as $|x|\to \infty ,$ then after a rotation of the coordinate system, $\lim_{h\to \infty}h^{-2}u(hx)\to \sum_{i=1}^kx_i^2$ in $R^n$ for some $k\geq 2,$ and $u$ is radially symmetric if $n=2.$} \vskip0.2cm This theorem generalizes the main results in [6], which assert that the Hessian $(D^2 u(x))$ has constant rank for all $x\in R^n$ if $(D^2 u(x))$ is positive semi-definite and $ \Delta u=f(u, \nabla u)$ in $R^n,$ where $f\in C^{2, \alpha}$ is strictly positive and convex in $u.$ In the case of Minkowski space [9], a similar convexity result was proved by the second author in [7]. Theorem 1.1 tells us that the radially symmetric solution of (1.1) has growth of order $1+\frac{1}{\alpha}$ at infinity. This order tends to 1 as $\alpha $ goes to $\infty $. Motivated by this, we have the following Liouville result. \vskip0.2cm {\bf Theorem 1.3} {\sl There is no nonnegative solution $u\in C^3(R^n)$ of (1.1) such that \begin{eqnarray} \lim_{|x|\rightarrow\infty}\frac{|u(x)|}{|x|}=0.\nonumber \end{eqnarray}} \vskip0.2cm We will prove this theorem in Section 5. For this purpose, in Section 4 we use the gradient estimate techniques of Xu-Jia Wang in [14] and the methods in [3] to obtain the following interior gradient result for equation (1.1). \vskip0.2cm {\bf Theorem 1.4} {\sl Suppose $u\in C^3(B_r(0))$ is a nonnegative solution of (1.1), then $$|\nabla u(0)|\leq\exp\{C_1+C_2\frac{M^2}{r^2}\},$$ where $M=\sup_{x\in B_r(0)}u(x),$\ $C_i$ $(i=1,2)$ are constants depending only on $n$ and $\alpha.$ } \section*{2. 
Asymptotic Behavior - Proof of Theorem 1.1 } \setcounter{section}{2} \setcounter{equation}{0} We will use a few lemmas to prove Theorem 1.1. The main difficulty is to prove the asymptotic expansions (1.4) and (1.5). {\bf Lemma 2.1} { \sl Suppose that $u(x)=r(|x-x_0|)+u(x_0)$ for some $x_0\in R^n$ and for all $x\in R^n.$ Then $u\in C^2(R^n)$ is a solution of (1.1) if and only if $r\in C^2(0, \infty )$ satisfies \begin{equation} \label {2.1} \frac {r''}{1+(r')^2}+\frac {n-1}{t}r'= \bigl(1+(r')^2 \bigr)^{\frac{1-\alpha}{2}}, \forall t\in (0, \infty )\end{equation} and \begin{equation} \label {2.2} r(0)=r'(0)=0.\end{equation}} {\bf Proof.} We may assume $x_0=0.$ Let $e_i$ be the unit vector in the positive $x_i$-axis. Since $r(t)=u( t e_i)-u(0)=u(-t e_i)-u(0)$ for all $t\geq 0 ,$ it follows that $r\in C^2[0, \infty )$ and that $r$ satisfies (2.2) if and only if $u\in C^2(R^n). $ Writing (1.1) in terms of $r,$ we see that (1.1) is equivalent to (2.1). {\bf Lemma 2.2} {\sl If $y\in C^1(-\infty , \infty )$ satisfies \begin{equation} \label {2.3} y' +[ (n-1)y (1+y^2)^{\frac{\alpha-1}{2}}-e^s](1+y^2)^{\frac{3-\alpha}{2}}=0, \forall s\in (-\infty , \infty ) \end{equation} and \begin{equation} \label {2.4}\lim_{s \to -\infty } \frac {y(s)(1+y^2)^{\frac{\alpha-1}{2}}}{e^s}=\frac {1}{n}, \end{equation} then $r(t)=\int_0^t y(\ln s) ds \in C^2[0, \infty )$ satisfies (2.1) and (2.2).} {\bf Proof.} (2.1) can be verified directly by (2.3). Note that (2.4) implies $$r'(0):=\lim_{t\to 0^+}r'(t)=\lim_{s \to -\infty } y(s) =0=r(0)$$ and $$r''(0):=\lim_{t\to 0^+}r''(t)=\lim_{s \to -\infty } \frac {y'(s) }{e^s}=\frac {1}{n}$$ by equation (2.3) as well as equation (2.1). 
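Before turning to the rigorous lemmas, the shooting problem (2.3)-(2.4) can be explored numerically. The following sketch (ours, not part of the original argument; assuming \texttt{numpy}/\texttt{scipy}) integrates (2.3) forward from the asymptotic condition (2.4) and checks the two-sided bound of (1.3), namely $e^s/n < y(1+y^2)^{\frac{\alpha-1}{2}} < e^s/(n-1)$ in the variable $t=e^s$, in the case $n=2$, $\alpha=1$:

```python
import numpy as np
from scipy.integrate import solve_ivp

n, alpha = 2, 1.0   # alpha = 1: the mean curvature flow soliton equation (1.2)

def g(y):
    # g(y) = y (1 + y^2)^((alpha-1)/2)
    return y * (1 + y**2) ** ((alpha - 1) / 2)

def rhs(s, y):
    # equation (2.3): y' = -[(n-1) g(y) - e^s] (1 + y^2)^((3-alpha)/2)
    return -((n - 1) * g(y) - np.exp(s)) * (1 + y**2) ** ((3 - alpha) / 2)

s0, s1 = -8.0, 2.0
y0 = np.exp(s0) / n   # matches the asymptotic condition (2.4) as s -> -infty
sol = solve_ivp(rhs, (s0, s1), [y0], rtol=1e-10, atol=1e-12)
y_end = sol.y[0, -1]

# two-sided bound from Theorem 1.1: e^s/n < g(y(s)) < e^s/(n-1)
assert np.exp(s1) / n < g(y_end) < np.exp(s1) / (n - 1)
```

The bound is numerically robust because, along (2.3), the quantity $z(s)=(n-1)e^{-s}g(y)-1$ is pushed back into $(-\frac1n,0)$ whenever it reaches either endpoint.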
{\bf Lemma 2.3} {\sl There exists a $y\in C^1(-\infty , \infty )$ solving (2.3) and (2.4) such that \begin{equation} \label {2.5} y'(s)>0 \ \ and \ \ \frac {e^s}{n}<y(s)(1+y^2)^{\frac{\alpha -1}{2}}<\frac {e^s}{n-1}, \forall s\in (-\infty , \infty ).\end{equation} Moreover, the function $z(s):= (n-1)e^{-s}y(s)(1+y^2)^{\frac{\alpha-1}{2}}-1$ satisfies \begin{equation} \label {2.6} \lim_{s\to -\infty} z(s)=-\frac {1}{n}, \ \ \lim_{s\to \infty} z(s)=0\ \ and \ \ z'(s)>0, \forall s\in (-\infty , \infty ). \end{equation}} {\bf Proof.} By local existence we have a function $r \in C^2[0, \varepsilon )$ which solves (2.1) and (2.2) in $(0, \varepsilon )$ for some $\varepsilon >0$ (see [8]). Then $y(s)=r'(t) , t=e^s $ satisfies (2.3) in $(-\infty , \ln \varepsilon )$ and \begin{equation} \label {2.7} \lim _{s\to -\infty } y(s)=0.\end{equation} Let $(-\infty , T)$ be the maximal interval for which $y$ solves (2.3). First we prove $T=\infty .$ In fact, we shall show that \begin{equation} \label {2.8} y'(s)\geq 0 \ \ and \ \ y(s)(1+y^2)^{\frac{\alpha-1}{2}}\leq \frac {e^s}{n-1}, \forall s\in (-\infty , T ).\end{equation} For convenience, we may define $$ g(y)=y(1+y^2)^{\frac{\alpha-1}{2}}. $$ Note that $\alpha >0$ and $$ g'(y)=(1+y^2)^{\frac{\alpha-3}{2}} (1+\alpha y^2)>0, $$ so $g$ is strictly increasing. Suppose that there exists $s_0\in (-\infty , T)$ such that $ y'(s_0)<0 ;$ then $g(y(s_0))>\frac {e^{s_0}}{n-1}$ by (2.3). We claim that \begin{equation} \label {2.9} y'(s)<0 ,\ \ and\ \ g(y(s))>\frac {e^s}{n-1}, \forall s\in (-\infty , s_0 ).\end{equation} Otherwise, there exists an $s_1<s_0$ such that $$ y'(s_1)=0 ,\ \ g(y(s_1))=\frac {e^{s_1}}{n-1}\ \ and \ \ y'(s)<0 ,\ \ \forall s\in (s_1 , s_0 ).$$ Then $ \frac {e^{s_1}}{n-1} =g(y(s_1))>g(y(s_0))>\frac {e^{s_0}}{n-1}.$ This contradiction implies that (2.9) holds, and hence $\lim _{s\to -\infty }g(y(s))>\frac {e^{s_0}}{n-1}, $ which contradicts (2.7). 
Therefore we have proven (2.8), and then $T=\infty $ follows by the standard existence theory of ordinary differential equations. Next, we shall prove (2.6). Since $y$ satisfies (2.3) and (2.8) in $(-\infty , \infty),$ the function $z(s):= (n-1)e^{-s}g(y(s))-1$ satisfies \begin{equation} \label {2.10} z' +nz +1+ \alpha (n-1)z y^2=0 \ \ and \ \ z\leq 0, \forall s\in (-\infty , \infty ).\end{equation} Thus, $(ze^{ns})'\geq -e^{ns}$ for all $s\in (-\infty , \infty ).$ Integrating this inequality over $(-\infty , s)$ and then using (2.7) we obtain \begin{equation} \label {2.11} z(s)\geq -\frac {1}{n}, \forall s\in (-\infty , \infty ).\end{equation} Now we make two observations. The first one is that $z$ has no local maxima. Indeed, if $z'(s_0)=0$ for some $s_0$, we can obtain $z''(s_0)> 0$ since $z$ satisfies \begin{equation} \label {2.12} z''+n z' + \alpha (n-1) z' y^2+2 \alpha (n-1) zy y'=0 , \forall s\in (-\infty , \infty )\end{equation}. The second observation is that $z'(s)>0$ for all $s\in (-\infty , \infty).$ Otherwise, $z'(s_0)\leq 0$ for some $s_0.$ Then $z'(s)\leq 0$ for all $s\leq s_0, $ since $z$ has no local maxima. This, together with (2.11), implies that $\lim_{k\to -\infty } z'(s_k)= 0$ for some sequence $s_k\to -\infty .$ Thus, $\lim_{k\to -\infty } z(s_k)=-\frac {1}{n}$ by (2.10). Since (2.10) and (2.11) imply $z'(s)<0$ in $(-\infty , s_0),$ we see that $z(s)<-\frac{1}{n}$ for $s\in (-\infty, s_0),$ which contradicts (2.11). Now that $z'(s)>0$ for all $s\in (-\infty , \infty),$ by (2.10) and (2.11) we have $\lim_{s\to -\infty } z(s)=-\frac {1}{n}$ and $\lim_{s\to \infty } z(s)=0 .$ This proves (2.6), which, together with equation (2.3), implies (2.5). This completes the proof of Lemma 2.3. {\bf Lemma 2.4} {\sl Let $y$ be a function as in Lemma 2.3. 
Then $r(t)=\int_1^t y(\ln s) ds $ satisfies (1.4) and (1.5).} {\bf Proof.} It follows from (2.6) that $\lim_{k\to \infty } z'(s_k)= 0$ for some sequence $s_k\to \infty .$ By this, we claim that \begin{equation} \label {2.13} \lim_{s\to \infty } z'(s)= 0. \end{equation} Assume that this is not true; then there exists a sequence of local maxima $\theta_k\to \infty $ of $z'(s)$. Note that we have $z''(\theta _k)=0.$ From (2.6), we can derive $$g(y(s))=\frac{e^s}{n-1} (1+o(1)) \ \ and \ \ y(s)=(\frac{1}{n-1})^{1/\alpha} e^{s/\alpha} (1+o(1)) $$ as $ s \to \infty .$ By (2.10) and (2.6), we have $\lim_{s \to \infty } z'(s)e^{-2s/\alpha}= 0.$ By the definition of $z$, we have $$ 2\alpha (n-1) z y y'= \frac{ 2\alpha (n-1)z e^s [ z' + (z+1) ] y}{ (1+y^2)^{\frac{\alpha-3}{2}} (1+\alpha y^2)} =2(n-1)^{2-2/\alpha} z [ z' + (z+1) ] e^{2s/\alpha} (1+o(1)). $$ This, together with (2.12) and (2.6) again, implies $\lim_{k\to \infty } z'(\theta _k)=0,$ a contradiction. The claim is hence proven. Then it follows from (2.13), (2.10) and (2.6) that \begin{equation} z(s)=-\frac{1}{\alpha(n-1)}y^{-2}(1+o(1))=-\frac{1}{\alpha} (n-1)^{\frac{2}{\alpha}-1} e^{-\frac{2s}{\alpha}} (1+o(1)) , \ \ as \ \ s\to \infty .\end{equation} and hence \begin{equation} g(y)=\frac{1}{n-1} e^s \bigl(1-\frac{1}{\alpha} (n-1)^{\frac{2}{\alpha}-1} e^{-\frac{2s}{\alpha}} \bigr)(1+o(1)), \ \ as \ \ s\to \infty . \end{equation} Therefore, by straightforward computation we obtain \begin{equation} \label {2.14} y= e^{\frac {s}{\alpha}} \bigl( \bigl(\frac{1}{n-1}\bigr)^{1/\alpha}-B(\alpha, n) e^{-\frac{2s}{\alpha}} \bigr) (1+o(1)), \ \ as \ \ s\to \infty , \end{equation} where $$ B(\alpha, n)=(n-1)^{1/\alpha} \bigl( \frac{1}{\alpha^2 (n-1)}+ \frac{\alpha -1}{2 \alpha}\bigr). $$ This implies (1.5). In particular, when $\alpha =1$, we have $$ y=\frac{1}{n-1}e^s -e^{-s} +o(e^s). $$ In theory, we can repeat the above procedure to obtain higher order expansions. 
Take the simple case $\alpha =1$ as an example: we can let $w(s)=-\frac {e^{2s}}{n-1}z(s)-1.$ Then \begin{equation} \label {2.15} \lim_{s\to \infty }w(s)=0,\end{equation} and equation (2.10) reads as \begin{equation} \label {2.16} w' +(n-2)w+n-2 +\frac {e^{2s}}{n-1}w-2(1+w)^2+(n-1)e^{-2s}(1+w)^3=0.\end{equation} Thus, \begin{equation} \label {2.17} w'' + w'[(n-2) +\frac {e^{2s}}{n-1} -4(1+w)+\frac{3(n-1)}{e^{2s}}(1+w)^2]+\frac {2e^{2s}}{n-1}w - \frac{2(n-1)}{e^{2s}}(1+w)^3 =0.\end{equation} Since (2.15) means $\lim_{k\to \infty }w'(s_k)=0$ for some sequence $s_k\to \infty, $ we claim that \begin{equation} \label {2.18} \lim_{s\to \infty }w'(s)=0.\end{equation} If not, then there exists a $\delta >0 $ and a sequence $\theta_k\to \infty$ such that $w''(\theta _k)=0$ and $w'(\theta _k)>\delta$ (or $w'(\theta _k)<-\delta$). But it follows from (2.15) and (2.16) that \begin{equation} \label {2.19} \lim_{k\to \infty } [w'(\theta _k)+\frac{1}{n-1}e^{2\theta_k}w(\theta _k)]= 4-n.\end{equation} Taking $s=\theta_k$ in (2.17) and using (2.15) and (2.19), we have $$\lim_{k\to \infty }[ \frac {n-4 }{n-1}w(\theta _k)-\frac {4(w^2(\theta _k)+w(\theta _k))}{n-1}+\frac {w'(\theta _k)}{n-1}] e^{2\theta _k} =(4-n)(n-2)-4(4-n),$$ which implies $\lim_{k\to \infty } w'(\theta_k)=0,$ a contradiction. Thus, using (2.15) and (2.18) we can rewrite (2.16) as $$ w=-(n-1)(n-4)e^{-2s}(1+o(1))\ \ as \ \ s\to \infty.$$ By the definition of $w$, we have $$z(s)=-(n-1)e^{-2s}+(n-1)^2(n-4)e^{-4s}+o(e^{-4s})\ \ as \ \ s\to \infty,$$ which implies $$y(s)= \frac {e^s}{n-1}-e^{-s}+(n-1)(n-4)e^{-3s}+o(e^{-3s}) \ \ as \ \ s\to \infty . $$ Therefore, $r(t)=\int_1^t y(\ln s) ds $ satisfies (1.4). {\bf Proof of Theorem 1.1:} The existence follows from Lemmas 2.1-2.4, while the uniqueness follows from the well-known comparison principle. \section*{3. 
Convexity - Proof of Theorem 1.2 } \setcounter{section}{3} \setcounter{equation}{0} {\bf Lemma 3.1} {\sl Let $u\in C^2(R^n).$ Suppose that there is a constant $c$ such that the set $\Omega _c=\{x\in R^n : u(x)<c\}$ is nonempty and bounded. If $u=c$ on $\partial \Omega _c ,$ then the Hessian matrix $(u_{ij}(x_0))>0$ (positive definite) for some $x_0\in \Omega _c .$} {\bf Proof.} By the assumption we see that $u=c$ on $\partial \Omega _c$ and $$u(x_1)=\min_{\overline{\Omega _c} }u(x)<c$$ for some $x_1\in \Omega _c.$ This implies that the function $$U(x)=u(x)-\frac {c-u(x_1)}{2(diam (\Omega _c))^2} |x-x_1|^2$$ must attain an interior minimum in $\Omega _c.$ Consequently, $(u_{ij}(x_0))>0$ for some $x_0\in \Omega _c ,$ which implies the desired result. {\bf Lemma 3.2} {\sl Let $u\in C^2(R^n)$ be a convex solution of equation (1.1). If the set $\Omega _0 =\{x\in R^n : (u_{ij}(x))>0\}$ is nonempty, then $ \Omega _0=R^n .$} {\bf Proof.} We follow the arguments of Theorem 1.3 by the second author in [7]. Suppose on the contrary that there exists an $x_1\in R^n \backslash \Omega _0.$ We will derive a contradiction. We may assume $\Omega _0$ is nonempty and connected. (Otherwise, we replace it by one of its connected components.) Then there exists a short segment $l\subset \Omega _0$ such that $\bar {l}\cap \partial \Omega _0=\{x_1\} .$ Take $x_2\in l$ and $\varepsilon >0$ such that $\overline{B_{\varepsilon } (x_2)}\subset \Omega _0.$ Translating the ball $B_{\varepsilon } (x_2)$ along the line $l$ toward $x_1$ we come to a point $\bar{x}$ where the ball and $\partial \Omega _0$ touch for the first time. 
It follows that \begin{equation} \label {3.1}\bar{x}\in R^n \backslash\Omega _0, \ \ \ B_{\varepsilon } (x_0)\subset \Omega _0 \ \ \ and \ \ \ \overline{ B_{\varepsilon } (x_0)} \cap \partial \Omega _0=\{\bar{x}\}\end{equation} for some $x_0\in \Omega _0.$ Moreover, the minimum eigenvalue $\lambda (x)$ of the Hessian $(u_{ij}(x))$ satisfies $\lambda (\bar{x})=0.$ By a coordinate translation and rotation we may arrange that \begin{equation} \label {3.2} \bar{x}=0, \ \ u(0)=0, \ \ \nabla u(0)=0 \ \ and \ \ u_{11}(0)=\lambda (0)=0.\end{equation} Thus, the origin $0\in \partial B_{\varepsilon } (x_0) $ and \begin{equation} \label {3.3} (u_{ij}(x))>0\ \ in \ \ B_{\varepsilon } (x_0).\end{equation} Rewrite equation (1.1) as \begin{equation} \label {3.4} \Delta u = A(|\nabla u|^2)u_i u_j u_{i j}+B(|\nabla u |^2)\ \ in \ \ R^n ,\end{equation} where $A(t)=\frac {1}{t+1}$ and $B(t)=(1+t)^{\frac {1-\alpha}{2}}$ are both analytic for $t>-1.$ Differentiating (3.4) twice with respect to $\frac {\partial}{\partial x_1},$ we have \begin{eqnarray}\label {3.5} \Delta u_{11}&=& 4[A'' u_i u_j u_{i j}+B'' ]u_l u_{l 1}u_m u_{m1}+2[A' u_{i} u_j u_{i j}+B']u_{m 1} u_{m 1}\nonumber \\ & +& 2[A' u_{i} u_j u_{i j}+B']u_{m } u_{m 11}+ 8A' u_{m } u_{m 1}u_{i1} u_j u_{i j}\nonumber \\ &+& 4A' u_{m } u_{m 1}u_{i} u_j u_{i j1} +2A u_{i11} u_j u_{i j}\nonumber \\ &+& 2A u_{i1} u_{j1} u_{i j} +4A u_{i1} u_{j} u_{i j1}\nonumber\\ &+& A u_{i} u_{j} u_{i j11} \ \ in \ \ R^n . \end{eqnarray} Since $u$ is analytic in $R^n ,$ we expand $u_{11}$ at $x=0$ as a power series to obtain $u_{11}(x)=P_k(x)+R(x)$ for all $x\in B_{\delta}(0)$ for some $\delta>0$ such that $\overline {B_{\varepsilon }(x_0)}\subset B_{\delta}(0) $ (one can choose a smaller $\varepsilon $ in advance if necessary), where $ P_k(x)$ is the lowest order term, which, by (3.2) and (3.3), is a nonzero homogeneous polynomial of degree $k$, and $R(x)$ is the rest. 
Note that $k\geq 2$ by the convexity of $u.$ It follows from (3.3) that $u_{i i}u_{11}-(u_{i1})^2>0$ in $B_{\varepsilon }(x_0).$ Summing over $i$ we have \begin{equation} \label {3.6} \Delta u\, u_{11}>\sum_{j=1}^n u_{j1}^2\geq u_{i1}^2\end{equation} for each $i=1, 2, \cdots , n.$ We claim that each $u_{i 1}$ is of order at least $\frac {k}{2}.$ Otherwise, we expand $u_{i1}$ at $x=0$ as a power series so that the lowest order term $h(x)$ must be a nonzero homogeneous polynomial of degree less than $\frac{k}{2}$. Choose $$a=(a_1, a_2, \cdots , a_n)\in B_{\varepsilon }(x_0)\backslash \{x\in B_{\varepsilon }(x_0): h(x)=0\}$$ so that the segment $$L=\{ ta: t\in (0, 1)\} \subset B_{\varepsilon }(x_0).$$ Now restricting (3.6) to $L,$ multiplying both sides by $t^{-k}$ and then letting $t\to 0^+,$ we see that the limit of the left-hand side of (3.6) is a nonzero constant multiplied by $\Delta u(0)$, which equals $1$ by (3.4), but the limit of the right-hand side is $+\infty .$ This is a contradiction. Therefore, each $u_{i 1}$ is of order at least $\frac {k}{2}.$ Hence $u_{ij1},$ $u_{11i}$ and $ u_{11ij}$ are of order at least $\frac {k}{2}-1 ,$ $k-1$ and $k-2$ respectively. Also note that each $u_{i}$ is of order at least 1 by (3.2). With these facts one can check that the right-hand side of equation (3.5) is of order at least $k ;$ while the left-hand side, $\Delta u_{11},$ is either of order $k-2,$ or $\Delta P_k=0$ for all $x\in B_{\varepsilon }(x_0).$ Since the first case is impossible by comparing the orders of the two sides, we obtain that $P_k$ is a harmonic polynomial in $ B_{\varepsilon }(x_0).$ We claim that $P_k\geq 0$ for all $x\in B_{\varepsilon }(x_0).$ Otherwise, there exists $a=(a_1, a_2, \cdots , a_n)\in B_{\varepsilon }(x_0)$ such that $P_k(a)<0.$ Then $$\frac{u_{11}(ta)}{t^k}=P_k(a)+\frac{R(ta)}{t^k}, \ \ \forall t\in (0, 1),$$ which implies $\lim_{t\to 0^+}\frac{u_{11}(ta)}{t^k}=P_k(a)<0,$ contradicting the fact that $u_{11}>0$ in $B_{\varepsilon }(x_0) $ (see (3.3)). 
Now we use the strong maximum principle to see that $P_k>0$ for all $x\in B_{\varepsilon }(x_0).$ But $P_k(0)=0,$ and it follows from Hopf's lemma that $\frac {\partial P_k}{\partial \nu}(0)<0,$ where $\nu $ is the unit outward normal to the sphere $\partial B_{\varepsilon }(x_0).$ This means that the degree of $P_k$ is only one, contradicting the fact that $k\geq 2.$ This contradiction proves the lemma. {\bf Proof of Theorem 1.2:} It follows directly from Lemmas 3.1 and 3.2. Note that if $u(x)\to \infty $ as $|x|\to \infty ,$ then $u$ is strictly convex by Lemmas 3.1 and 3.2. Therefore, when $\alpha =1$, we use the results in [13] to conclude that after a rotation of the coordinate system, $\lim_{h\to \infty}h^{-2}u(hx)\to \sum_{i=1}^kx_i^2$ in $R^n$ for some $k\geq 2$ by [13; Theorem 1.3], and $u$ is radially symmetric if $n=2$ by [13; Theorem 1.1]. \section*{4. Interior Gradient Estimate - Proof of Theorem 1.4 } \setcounter{section}{4} \setcounter{equation}{0} Let $G(x,\xi)=g(x)\varphi(u)\log u_{\xi}(x),$ where $u$ satisfies the hypothesis of Theorem 1.4, $g(x)=1-\frac{|x|^2}{r^2},\,\varphi(u)=1+\frac{u}{M},\,M=\sup_{x\in B_r(0)}u(x).$ Suppose that $\sup\{G(x,\xi) : x\in B_r(0),\xi\in S^{n-1}\}$ is attained at a point $x_0$ and in the direction $e_1$. 
Then at $x_0,$\,$u_i(x_0)=0$ for $i\geq 2$ since directional derivatives attain their maximum along the gradient direction, and so $a_{11}=\frac{1}{1+u_1^2},\,a_{ii}=1$ for $i\geq2,$ $a_{ij}=0$ for $i\neq j.$ Following the arguments from (1.2) to (1.4) in [14], at $x_0$ we have \begin{eqnarray}\label {4.1} 0=(\log G)_i=\frac{g_i}{g}+\frac{\varphi^{\prime}}{\varphi}u_i+\frac{u_{1i}}{u_1\log u_1} \end{eqnarray} and \begin{eqnarray} 0\geq a_{ii}(\log G)_{ii} &\geq & \frac{f_1}{u_1\log u_1}+\frac{\varphi^{\prime}}{\varphi}f+\frac{u_{11}^2}{2(1+u_1^2)^2\log u_1}-\frac{2n}{gr^2}-\frac{4}{Mr},\nonumber \end{eqnarray} where $$f(x)\equiv (\sqrt{1+|\nabla u(x)|^2})^{1-\alpha}, \ \ f_1(x)\equiv f_{x_1}(x).$$ In particular, $$ f(x_0)=(\sqrt{1+u_1^2(x_0)})^{1-\alpha}, \ \ f_1(x_0)= (1-\alpha)(1+u_1(x_0)^2)^{\frac{-\alpha -1}{2}}u_1(x_0)u_{11}(x_0).$$\\ Suppose $G(x_0, e_1)$ is large enough so that $\log u_1>1$ and $|\frac{g^{\prime}}{g}|\leq\frac{\varphi^{\prime}}{2\varphi}u_1$ at $x_0;$ then by (4.1) we can obtain $u_{11}\leq -\frac{\varphi^{\prime}}{2\varphi}u_1^2\log u_1<0.$ Therefore in the case $\alpha \geq1$, \begin{eqnarray} \frac{f_1}{u_1\log u_1}+\frac{\varphi^{\prime}}{\varphi}f = \frac{(1-\alpha)u_{11}}{\log u_1}(1+u_1^2)^{\frac{-\alpha-1}{2}}+\frac{\varphi^{\prime}}{\varphi}(1+u_1^2)^{\frac{1-\alpha}{2}}\geq0\nonumber \end{eqnarray} and \begin{eqnarray}\label{4.2} \frac{u_{11}^2}{(1+u_1^2)^2}&=&\frac{u_1^2\log^2u_1}{(1+u_1^2)^2}(\frac{g^{\prime}}{g}+\frac{\varphi^{\prime}}{\varphi}u_1)^2\nonumber\\ &\geq&\frac{u_1^4\log^2u_1}{(1+u_1^2)^2}(\frac{\varphi^{\prime}}{2\varphi})^2\nonumber\\ &\geq&\frac{{\varphi^{\prime}}^2}{8\varphi^2}\log^2u_1\nonumber\\ &\geq&\frac{\log^2u_1}{32M^2}. 
\end{eqnarray}\\ If $0<\alpha<1,$ then there exists a positive integer $m$ such that $\alpha m>1$. We may suppose that $G(x_0, e_1)$ is still suitably large so that $\log u_1>1$ and $|\frac{g^{\prime}}{g}|\leq\frac{\varphi^{\prime}}{m\varphi}u_1$ at $x_0$; then by (4.1) we have \begin{eqnarray}\label{4.3} \frac{f_1}{u_1\log u_1}+\frac{\varphi^{\prime}}{\varphi}f & = &\frac{(1-\alpha)u_{11}}{\log u_1}(1+u_1^2)^{\frac{-\alpha-1}{2}}+\frac{\varphi^{\prime}}{\varphi}(1+u_1^2)^{\frac{1-\alpha}{2}}\nonumber\\ &=&(1+u_1^2)^{\frac{-\alpha-1}{2}}[\frac{(1-\alpha)u_{11}}{\log u_1}+\frac{\varphi^{\prime}}{\varphi}(1+u_1^2)]\nonumber\\ &=&(1+u_1^2)^{\frac{-\alpha-1}{2}}[-\frac{(1-\alpha)g_1u_1}{g}-\frac{(1-\alpha)\varphi^{\prime}u_1^2}{\varphi} +\frac{\varphi^{\prime}}{\varphi}(1+u_1^2)]\nonumber\\ &=&(1+u_1^2)^{\frac{-\alpha-1}{2}}[-\frac{(1-\alpha)g_1u_1}{g}+\frac{\varphi^{\prime}}{\varphi}(1+\alpha u_1^2)]\nonumber\\ &\geq&(1+u_1^2)^{\frac{-\alpha-1}{2}}\frac{\varphi^{\prime}}{m\varphi}[(\alpha-1)u_1^2+m+m\alpha u_1^2]\geq0.
\end{eqnarray}\\ Obviously, in this case (4.2) becomes \begin{eqnarray} \frac{u_{11}^2}{(1+u_1^2)^2} \geq C\frac{\log^2u_1}{M^2},\nonumber \end{eqnarray} where the constant $C$ depends only on $\alpha .$ \\ Summing up the above two cases, we see that at $x_0$ and for any $\alpha>0,$ \begin{eqnarray} 0\geq a_{ii}(\log G)_{ii} &\geq & \frac{f_1}{u_1\log u_1}+\frac{\varphi^{\prime}}{\varphi}f+\frac{u_{11}^2}{2(1+u_1^2)^2\log u_1}-\frac{2n}{gr^2}-\frac{4}{Mr}\nonumber\\ &\geq&C\frac{\log u_1}{M^2}-\frac{2n}{gr^2}-\frac{4}{Mr}.\nonumber \end{eqnarray} Recall that we have assumed $\log u_1(x_0)>1.$ Then we obtain \begin{eqnarray} g(x_0)\log u_1(x_0) \leq C_3\frac{M}{r}+C_4\frac{M^2}{r^2},\nonumber \end{eqnarray} where $C_3$ and $C_4$ depend only on $n$ and $\alpha$.\\ Since $ 1\leq\varphi\leq 2 ,$ we use the Cauchy inequality to get \begin{eqnarray} G(x_0,e_1)&=&g(x_0)\varphi(u(x_0))\log u_1(x_0)\nonumber\\ &\leq&2C_3\frac{M}{r}+2C_4\frac{M^2}{r^2}\nonumber\\ &\leq &C_1(n,\alpha)+C_2(n,\alpha)\frac{M^2}{r^2}.\nonumber \end{eqnarray} On the other hand, for any $\xi\in S^{n-1},$ \begin{eqnarray} G(x_0,e_1)\geq G(0,\xi)\geq\log u_{\xi}(0),\nonumber \end{eqnarray} which implies \begin{eqnarray} u_{\xi}(0)\leq\exp\{C_1+C_2\frac{M^2}{r^2}\}.\nonumber \end{eqnarray} Noting that $\xi $ can be any vector in $S^{n-1},$ we have $$|\nabla u(0)|\leq\exp\{C_1+C_2\frac{M^2}{r^2}\}.$$ This proves Theorem 1.4. \section*{5. Liouville Type Theorem - Proof of Theorem 1.3 } \setcounter{section}{5} \setcounter{equation}{0} Suppose $u\in C^3(R^n)$ is a nonnegative solution of (1.4), and \begin{equation} |u(x)|=o(|x|)\quad \text{as} \quad |x|\rightarrow\infty. \end{equation} We will prove that $u\equiv \mathrm{constant}.$ Since no constant is a solution of (1.4), this will prove the theorem. By Theorem 1.4 and (5.1) we have \begin{equation} |\nabla u(x)|\leq C,\quad \forall x\in R^n.
\end{equation} We claim that $$\nabla u(x)=0,\quad \forall x\in R^n.$$ Otherwise, $\nabla u(y)\neq 0$ for some $y.$ We may assume $y=0$, and thus \begin{equation} |\nabla u(0)|\geq\delta>0 \end{equation} for some positive $\delta . $ We will derive a contradiction. Let $$ G(x,\xi)=g(x)\varphi(u)u_{\xi}(x),$$ where $$g(x)=1-\frac{|x|^2}{r^2},\,\varphi(u)=(1-\frac{u}{M})^{-\beta},\,M=4\sup\{|u(x)|,x\in B_r(0)\}$$ and $\beta \in(0,1)$ is to be determined. Suppose $\sup\{G(x,\xi):\,x\in B_r(0),\,\xi\in S^{n-1}\}$ is attained at a point $x_0$ and in the direction $e_1$.\\ By (5.3) we have \begin{equation} g(x_0)\geq\delta_1,\ u_1(x_0)\geq\delta_1 \end{equation} for some $\delta_1>0$ depending only on $\delta$.\\ Then at $x_0$, \begin{eqnarray} 0=(\log G)_i=\frac{u_{1i}}{u_1}+\frac{g_i}{g}+\frac{\varphi^{\prime}}{\varphi}u_i \end{eqnarray} and \begin{eqnarray} (\log G)_{ij}&=&\frac{u_{1ij}}{u_1}+(\frac{\varphi^{\prime\prime}}{\varphi}-2\frac{{\varphi^{\prime}}^2}{\varphi^2})u_iu_j +\frac{\varphi^{\prime}}{\varphi}u_{ij} +(\frac{g_{ij}}{g} -2\frac{g_ig_j}{g^2})\nonumber\\ &-&\frac{\varphi^{\prime}}{g\varphi}(g_iu_j+g_ju_i)\nonumber \end{eqnarray} where we have used (5.5). Note that at $x_0$, $u_i=0$ for $i\geq2,$ $a_{11}=\frac{1}{1+u_1^2},\ a_{ii}=1$ for $i\geq2,$ and $a_{ij}=0$ for $i\neq j.$ Therefore at $x_0,$ \begin{eqnarray} 0\geq a_{ii}(\log G)_{ii}&=&\frac{a_{ii}u_{1ii}}{u_1}+ (\frac{\varphi^{\prime\prime}}{\varphi}-2\frac{{\varphi^{\prime}}^2}{\varphi^2}) \frac{u_1^2}{1+u_1^2}+\frac{\varphi^{\prime}}{\varphi}f\nonumber\\ &+&a_{ii}(\frac{g_{ii}}{g} -2\frac{g_i^2}{g^2})-\frac{2g_1\varphi^{\prime}u_1}{g\varphi(1+u_1^2)}.
\end{eqnarray} By (5.4) we obtain \begin{eqnarray} -\frac{2g_1\varphi^{\prime}u_1}{g\varphi(1+u_1^2)}&=&\frac{4\beta}{M-u}\cdot\frac{x_1}{gr^2} \cdot\frac{u_1}{1+u_1^2}\nonumber\\ &\geq&\frac{-16\beta}{3M\delta_1r}\nonumber\\ &=&-\frac{C_1}{Mr} \end{eqnarray} and \begin{eqnarray} a_{ii}(\frac{g_{ii}}{g} -2\frac{g_i^2}{g^2})&=&-\frac{2}{g r^2}(\frac{1}{1+u_1^2}+n-1)-\frac{8x_1^2}{(1+u_1^2)g^2r^4} -\sum_{i=2}^n \frac{8x_i^2}{g^2r^4}\nonumber\\ &\geq&-\frac{C_2}{r^2}, \end{eqnarray} where constants $C_1,C_2$ depend only on $n,$ $\delta_1$ and $\beta$.\\ Differentiating equation (1.4) with respect to $x_1$, we have $$\sum_{i=1}^n u_{1ii}=f_1+(\frac{u_iu_j}{1+u_1^2}u_{ij})_{x_1}$$ which implies $$\frac{u_{111}}{1+u_1 ^2}=f_1-\sum_{i=2}^nu_{1ii}$$ at $x_0$. Thus, we obtain that at $x_0,$ \begin{eqnarray} \frac{a_{ii}u_{1ii}}{u_1}&=&\frac{1}{u_1}[f_1+\frac{2u_1u_{11}^2}{(1+u_1^2)^2} +\sum_{i\geq2}\frac{2u_1}{1+u_1^2}u_{1i}^2]\nonumber\\ &\geq&\frac{1}{u_1}f_1\nonumber\\ &=&(1-\alpha)(1+u_1^2)^{-\frac{\alpha+1}{2}}u_{11}.\nonumber \end{eqnarray} Observing that (5.5) implies $u_{11}=-\frac{u_1^2\varphi^{\prime}}{\varphi}-\frac{u_1g_1}{g}$ at $x_0$, we see that \begin{eqnarray} \frac{a_{ii}u_{1ii}}{u_1}+\frac{\varphi^{\prime}}{\varphi}f&\geq&(1-\alpha)(1+u_1^2)^{-\frac{\alpha+1}{2}}u_{11} +\frac{\varphi^{\prime}}{\varphi}(1+u_1^2)^{\frac{1-\alpha}{2}}\nonumber\\ &=&(1-\alpha)(1+u_1^2)^{-\frac{\alpha+1}{2}}[-\frac{u_1^2\varphi^{\prime}}{\varphi}-\frac{u_1g_1}{g}] +\frac{\varphi^{\prime}}{\varphi}(1+u_1^2)^{\frac{1-\alpha}{2}}\nonumber\\ &=&\frac{\varphi^{\prime}}{\varphi}(1+u_1^2)^{-\frac{\alpha+1}{2}}[\alpha u_1^2-u_1^2+1+u_1^2]- (1-\alpha)(1+u_1^2)^{-\frac{\alpha+1}{2}}\frac{u_1g_1}{g}\nonumber\\ &=&\frac{\alpha}{M-u}(1+u_1^2)^{-\frac{\alpha+1}{2}}(\alpha u_1^2+1)+2(1-\alpha)(1+u_1^2)^{-\frac{\alpha+1}{2}}\frac{u_1x_1}{gr^2}.\nonumber\\ & \geq & \frac{1}{r}(1+u_1^2)^{-\frac{\alpha+1}{2}}[ \frac{r}{M-u}\alpha (\alpha u_1^2+1)-2|1-\alpha| \frac{u_1}{\delta_1}]. 
\nonumber \end{eqnarray} Since $\delta_1<u_1\leq C$ and $\lim_{r\to \infty} \frac{r}{M-u}=+\infty$ by (5.1), we have for $r$ large enough, \begin{eqnarray} \frac{a_{ii}u_{1ii}}{u_1}+\frac{\varphi^{\prime}}{\varphi}f\geq0. \end{eqnarray} Inserting (5.7), (5.8) and (5.9) into (5.6), we obtain that at $x_0$ and for large $r$, \begin{eqnarray} (\frac{\varphi^{\prime\prime}}{\varphi}-2\frac{{\varphi^{\prime}}^2}{\varphi^2}) \frac{u_1^2}{1+u_1^2}\leq\frac{C_1}{Mr}+\frac{C_2}{r^2}. \end{eqnarray} Now choose $\beta \in (0,1)$ so that \begin{eqnarray} \frac{\varphi^{\prime\prime}}{\varphi}-2\frac{{\varphi^{\prime}}^2}{\varphi^2}=\frac{\beta (1-\beta) }{(M-u)^2}\geq \frac{1}{10 M^2}. \end{eqnarray} Then (5.10) implies \begin{eqnarray} \frac{u_1^2}{1+u_1^2}\leq 10 [ C_1\frac{M}{r}+C_2\frac{M^2}{r^2}] \nonumber \end{eqnarray} at $x_0$ and for large $r$. Letting $r\rightarrow\infty,$ by (5.1) we obtain $u_1(x_0)=0,$ contradicting (5.4). This proves Theorem 1.3. \newpage \section* { References} \begin{enumerate} \itemsep -2pt \item[1] S. Altschuler, S. B. Angenent and Y. Giga, Mean curvature flow through singularities for surfaces of rotation, J. Geom. Anal., 5(1995), no. 3, 293-358. \item[2] T. H. Colding and W. P. Minicozzi II, Width and mean curvature flow, preprint, arXiv: 0705.3827vz, 2007. \item[3] D. Gilbarg and N. S. Trudinger, Elliptic partial differential equations of second order, 2nd edition, Springer-Verlag, 1983. \item[4] G. Huisken and C. Sinestrari, Mean curvature flow singularities for mean convex surfaces, Calc. Var. Partial Differ. Equations, 8(1999), 1-14. \item[5] G. Huisken and C. Sinestrari, Convexity estimates for mean curvature flow and singularities of mean convex surfaces, Acta Math., 183(1999), 45-70. \item[6] N. Korevaar and J. Lewis, Convex solutions to certain equations have constant rank Hessians, Arch. Rational Mech. Anal., 97(1987), 19-32. \item[7] H.Y.
Jian, Translating solitons of mean curvature flow of noncompact spacelike hypersurfaces in Minkowski space, J. Differential Equations, 220(2006), 147-162. \item[8] H. Y. Jian, Q. H. Liu and X. Q. Chen, Convexity and symmetry of translating solitons in mean curvature flows, Chin. Ann. Math., 26B(2005), 413-422. \item[9] Y. N. Liu and H. Y. Jian, Evolution of spacelike hypersurfaces by mean curvature minus external force field in Minkowski space, to appear in Advanced Nonlinear Studies, Aug. 2009. \item[10] F. Schulze, Evolution of convex hypersurfaces by powers of the mean curvature, Math. Z., 251(2005), 721-733. \item[11] F. Schulze, Nonlinear evolution by mean curvature and isoperimetric inequalities, J. Differential Geom., 79(2008), 197-241. \item[12] W. M. Sheng and X. J. Wang, Singularity profile in the mean curvature flow, arXiv: math.DG/0902.2261v2. \item[13] X. J. Wang, Translation solutions to the mean curvature flow, arXiv: math.DG/0404326v1 (submitted to Ann. Math., 2003). \item[14] X. J. Wang, Interior gradient estimates for mean curvature equations, Math. Z., 228(1998), 73-81. \item[15] B. White, The nature of singularities in mean curvature flow of mean-convex sets, J. Amer. Math. Soc., 16(2003), 123-138. \end{enumerate} \end{document}
https://arxiv.org/abs/2002.11291
Fitting the trajectories of particles in the equatorial plane of a magnetic dipole with epicycloids
In this paper we discuss epicycloid approximation of the trajectories of charged particles in axisymmetric magnetic fields. Epicycloid trajectories are natural in the Guiding Center approximation, and we study in detail the errors arising in this approach. We also discuss how, using exact results for particle motion in the equatorial plane of a magnetic dipole, the accuracy of this approximation can be significantly extended. We also show that the epicycloids approximate the trajectory of a charged particle more accurately than the position of the particle along the trajectory.
\section{Introduction} The study of the trajectory of a charged particle in a magnetic dipole field is an age-old problem in electromagnetism, with many applications such as auroral theory and plasma physics. Although the motion of particles in magnetic fields has been studied in the literature\cite{2,3,4,6}, a large part of these results are expressed through elliptic integrals, which are quite complicated. Therefore, whether a simpler and more intuitive physical model can be used to describe the motion of a particle in a magnetic field has become a question worth exploring. The prolate epicycloid\cite{5} is a fascinating curve formed by the superposition of two rotating motions of a point in the plane, as shown in Fig.1. A mechanical example: when a pinion rolls without slipping along a large gear, a point fixed outside the pinion and rotating with the same angular speed as the pinion traces a prolate epicycloid. A number of other physical phenomena in nature can also be explained by the prolate epicycloid. In this paper, we propose the physical model of the prolate epicycloid to explain the trajectory of a charged particle in a magnetic dipole field when the trajectory is bounded, since the two curves have the same qualitative shape. The fitting results are accurate, which suggests the approach is worthwhile. Moreover, the particle trajectories in other types of axisymmetric magnetic fields\cite{7} are fitted by the prolate epicycloid, prolate hypocycloid and curtate hypocycloid, respectively\cite{5,8}. \section{The model of prolate epicycloid, prolate hypocycloid and curtate hypocycloid} \subsection{Prolate epicycloid} There is a fixed circle with radius $a$, a moving circle with radius $b$, and a point moving in a circle of radius $R$ relative to the moving circle.
We assume the point moves in a circle with the same angular velocity as the moving circle, while the moving circle rolls without slipping along the fixed circle. If the moving circle rolls without slipping outside the fixed circle, the track of a prolate epicycloid is formed, as shown in Fig.1. The parametric equations\cite{5,8} of the prolate epicycloid are \begin{equation} \begin{cases} x = (a + b)\cos \theta-R\cos(\frac{a + b}{b} \theta) \\ y = (a + b)\sin \theta-R\sin(\frac{a + b}{b} \theta) \end{cases} \end{equation} here, $\theta$ is the revolution angle. \begin{figure}[h!] \centering \includegraphics[scale=0.5]{1.eps} \caption{Prolate epicycloid when each parameter $a$=16m, $b$ = 2m, $R$ = 8m. The parameters $a$, $b$, $R$ are indicated.}\label{1} \end{figure} \subsection{Prolate hypocycloid and curtate hypocycloid} When the moving circle rolls without slipping inside the fixed circle, a prolate hypocycloid is formed, as shown in Fig.2, and a curtate hypocycloid is formed if $R<\mid b\mid$, as shown in Fig.3. The parametric equations\cite{5,8} of the prolate hypocycloid differ from those of the prolate epicycloid in that $b$ becomes $-b$, as shown in Eq.(2), while the parametric equations of the curtate hypocycloid and the prolate hypocycloid are the same. \begin{equation} \begin{cases} x = (a - b)\cos\theta-R\cos(\frac{a - b}{-b} \theta) \\ y = (a -b)\sin\theta-R\sin(\frac{a - b}{-b} \theta) \end{cases} \end{equation} \begin{figure}[h!] \centering \includegraphics[scale=0.39]{2.eps} \caption{Prolate hypocycloid when each parameter $a$=16m, $b$ = 2m, $R$ =8m. The parameters $a$, $b$, $R$ are indicated.}\label{2} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.37]{3.eps} \caption{Curtate hypocycloid when each parameter $a$=11m, $b$ =6m, $R$ =3m.
The parameters $a$, $b$, $R$ are indicated.}\label{3} \end{figure} \section{Fitting of prolate epicycloid model and particle motion in magnetic dipole equatorial field} \subsection{Differential equation of motion} We assume that the magnetic dipole is located at the origin and that the equatorial plane of the dipole is the $xoy$ plane, to which the magnetic field direction is perpendicular. According to Newton's second law and the Lorentz force\cite{1}, one obtains \begin{equation} m \textbf{a}=e \begin{vmatrix} {\hat{\textbf i}}&\hat{{\textbf j}}&\hat{{\textbf k}} \\ v_\text{x} & v_\text{y}&0 \\ 0&0&B_\text{z}\\ \end{vmatrix} \end{equation} The differential equations for the trajectory of the charged particle are then \begin{equation} \begin{cases} m\frac{d^2 x}{dt^2}= e\frac{dy}{dt}B_\text{z} \\ m\frac{d^2 y}{dt^2}= -e\frac{dx}{dt}B_\text{z} \end{cases} \end{equation} The magnetic field intensity at a point in the equatorial plane of the magnetic dipole can be expressed as \begin{equation} \textbf{B} _\text{z} = \frac{\mu_0 \textbf{P}_m} {4\pi(x^2+y^2)^{3/2}} \end{equation} To simplify the notation it is convenient to introduce the constant\cite{7} \begin{equation*} A =\frac{e\mu_0 P_m}{4\pi m} \end{equation*} so that Eq.(4) can be written as \begin{equation} \begin{cases} \frac{d^2 x}{dt^2}= \frac{A}{(x^2+y^2)^{3/2}}\frac{dy}{dt}\\ \frac{d^2 y}{dt^2}= -\frac{A}{(x^2+y^2)^{3/2}}\frac{dx}{dt} \end{cases} \end{equation} In this paper, we suppose that the magnitude of $\textbf{P}_m $ is $\frac{2\pi}{\mu_0}\times 10^{-13} $A$\cdot$m and its direction is vertically upward, $e= -1.602\times 10^{-19}$C, $m=9.1 \times 10^{-31}$kg. \subsection{Fitting of prolate epicycloid model and particle motion} Comparing Fig.1 with the trajectory of a charged particle in the magnetic dipole equatorial field when the trajectory is bounded, the motion trajectory and the prolate epicycloid are considered to be the same type of curve.
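Eq.(6) is straightforward to integrate numerically. The sketch below uses a hand-rolled classical fourth-order Runge-Kutta step; the field constant $A=0.05$ is an illustrative choice of our own (not a value computed from the constants of the text), while the initial state matches the example of Section 3.2.4. Since the magnetic force is always perpendicular to the velocity, the speed must be conserved, which gives a useful sanity check on the integration.

```python
import math

def accel(x, y, vx, vy, A=0.05):
    """Right-hand side of Eq.(6): acceleration in the dipole equatorial plane."""
    r3 = (x * x + y * y) ** 1.5
    return (A / r3) * vy, -(A / r3) * vx

def rk4_step(state, dt, A=0.05):
    """One classical Runge-Kutta step for the state (x, y, vx, vy)."""
    def deriv(s):
        x, y, vx, vy = s
        ax, ay = accel(x, y, vx, vy, A)
        return (vx, vy, ax, ay)
    k1 = deriv(state)
    k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = deriv([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# Initial condition of Section 3.2.4: (x, y, vx, vy) = (0.15, 0, 0, -0.1).
state = [0.15, 0.0, 0.0, -0.1]
for _ in range(10000):            # 10 s of motion at dt = 1e-3
    state = rk4_step(state, 1e-3)
speed = math.hypot(state[2], state[3])  # |v| is conserved: the force does no work
```

The trajectory produced this way is what the epicycloid model is fitted against; any drift in `speed` would indicate integration error rather than physics.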
\subsubsection{An expression for $\overline{\omega}$} Given the initial conditions, we can always set up a Cartesian coordinate system in which the particle is released on the positive $x$ axis.\\ We first define the following abbreviations for this section:\\ $\theta$ = angle of revolution;\\ $\frac{a}{b}\theta$ = angle of rotation;\\ $\omega$ = angular velocity of revolution;\\ $\overline{\omega}$ = average angular velocity of revolution;\\ $\frac{a}{b}\omega$ = angular velocity of rotation;\\ $v_j=\omega l$ = revolution speed;\\ $v_i=\frac{a}{b}\omega R$ = rotation speed;\\ $\alpha$ = angle between $v_i$ and $v_j$;\\ $l$ = distance of the charged particle from the origin.\\ \begin{figure}[h!] \centering \includegraphics[scale=0.5]{4.eps} \caption{Velocity analysis diagram of the charged particle; the angles of the triangle formed by the dashed lines are $\alpha$, $\frac{a}{b}\theta-\alpha$, $\pi-\frac{a}{b}\theta$.}\label{4} \end{figure} We decompose the particle velocity into the revolution speed and the rotation speed. From the conservation of kinetic energy we have \begin{equation} (\omega l + \frac{a}{b}\omega R\cos\alpha)^2+(\frac{a}{b}\omega R \sin\alpha)^2 = v^2 \end{equation} Fig.4 shows the three internal angles of the triangle formed by the charged particle, the center of the moving circle and the center of the fixed circle. According to the law of cosines, the following equations are obtained.
\begin{equation*} (a+b)^2+R^2-l^2=-2(a+b)R\cos\frac{a}{b}\theta \end{equation*} \begin{equation*} l^2+R^2-(a+b)^2=2lR\cos\alpha \end{equation*} Then Eq.(7) can be expressed as \begin{equation} \omega^2[(a+b)^2+(R+\frac{aR}{b})^2+2(a+b)(1+\frac{a}{b})R\cos\frac{a}{b}\theta]=v^2 \end{equation} Integrating this complicated equation gives \begin{equation} \begin{split} \int_{0}^\theta\![(a+b)^2+(R+\frac{aR}{b})^2+2(a+b)(1+\frac{a}{b})R\cos\frac{a}{b}\theta]d\theta \\ =\int_0^t \frac{v^2}{\omega}dt+C \end{split} \end{equation} At the discrete values $\theta =\frac{2\pi b}{a}n$ $(n=0,1,2,3,\ldots)$, the corresponding time $t$ is linear in $\theta$, and $\omega$ is constant as well. In addition, $\theta = 0$ at $t = 0$. So the result of Eq.(9) is \begin{equation} [(a+b)^2+(R+\frac{aR}{b})^2]\theta=\frac{v^2 t}{\omega} \end{equation} The average angular velocity of revolution is given by \begin{equation*} \overline{\omega}=\frac{v}{\sqrt{(a+b)^2+(R+\frac{aR}{b})^2}} \end{equation*} \subsubsection{An expression for the parameters $a, b, R$} During the motion of the charged particle there are a maximum distance from the origin $r_{max}$, a minimum distance $r_{min}$, and a period $T$ over the course of one loop. Assuming $v<v_c/4$\cite{3}, \begin{equation*} r_{max}=\frac{2r_0}{1+\sqrt{1-4(v/v_c)}} \end{equation*} \begin{equation*} r_{min}=\frac{2r_0}{1+\sqrt{1+4(v/v_c)}} \end{equation*} where $v_c=\frac{A}{r^2_0}$, and $\rho=r_0$ is the distance of the charged particle from the origin at the instant when the force on it points toward the magnetic dipole. E. H. Avrett\cite{3} described the period $T$ of one loop in detail; when $V=v_c/v$ is sufficiently large, it can be obtained by series expansion: \begin{equation} T=\frac{2\pi r_0}{v_c}(1+\frac{15}{2V^2}+\frac{315}{4V^4}+\cdot\cdot\cdot) \end{equation} This provides a convenient numerical method.
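As a numerical illustration of the formulas above, the turning radii and the loop period can be evaluated directly. The particular values $r_0=0.15$, $v=0.1$ and $A=0.05$ below are our own illustrative choices, not numbers taken from the text:

```python
import math

def orbit_parameters(r0, v, A):
    """Turning radii, loop period, and epicycloid parameters, following the
    quoted formulas (valid for bounded motion, v < v_c/4 with v_c = A/r0**2)."""
    v_c = A / r0 ** 2
    if not v < v_c / 4:
        raise ValueError("orbit not bounded: need v < v_c/4")
    r_max = 2 * r0 / (1 + math.sqrt(1 - 4 * v / v_c))
    r_min = 2 * r0 / (1 + math.sqrt(1 + 4 * v / v_c))
    V = v_c / v
    # Truncated series for the loop period T, Eq.(11).
    T = (2 * math.pi * r0 / v_c) * (1 + 15 / (2 * V ** 2) + 315 / (4 * V ** 4))
    R = (r_max - r_min) / 2          # gyration radius of the fitted epicycloid
    a_plus_b = (r_max + r_min) / 2   # drift radius of the fitted epicycloid
    return r_max, r_min, T, R, a_plus_b

r_max, r_min, T, R, a_plus_b = orbit_parameters(r0=0.15, v=0.1, A=0.05)
```

Note that $r_{min}<r_0<r_{max}$ always holds, which is a quick consistency check on the two square-root formulas.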
So the parameters $a,b,R$ satisfy the following relations: \begin{equation*} R=\frac{r_{max}-r_{min}}{2} \end{equation*} \begin{equation*} a+b=\frac{r_{max}+r_{min}}{2} \end{equation*} \begin{equation*} \frac{a}{b}=\frac{2\pi}{\overline{\omega}T} \end{equation*} \subsubsection{Fitted motion equations of the charged particle} If the direction of the initial release velocity of the charged particle is perpendicular to the $x$ axis, the initial release position must be at $r_{max}$ or $r_{min}$, and the parametric equations of the charged particle are \begin{equation} \begin{cases} x= (a + b)\cos[sgn(B_z)\omega t]-sgn(A\cdot v) R\cos[sgn(B_z)\frac{a + b}{b}\omega t]\\ y= (a + b)\sin[sgn(B_z)\omega t]- sgn(A\cdot v) R\sin[sgn(B_z)\frac{a + b}{b}\omega t] \end{cases} \end{equation} For other initial conditions, the result is discussed in detail by K. Kabin and G. Bonner\cite{7}: $t_{min}$ denotes the time to reach the $\rho_{min}$ turning point and is given by \begin{equation*} t_{min}=\frac{sgn(v_{\rho_0})}{\sqrt{A^2-v^2_\bot}}(R(\rho_0)-\frac{\rho_{min}+\rho_{max}}{2}(\frac{\pi}{2}-\arctan\frac{\rho_{min}+\rho_{max}-2\rho_0}{2R(\rho_0)}) ) \end{equation*} where \begin{equation} R(\rho)=\frac{|v_\rho|\rho}{\sqrt{A^2-v^2_\bot}} \end{equation} So the analytical expression of the trajectory of the charged particle can be obtained as \begin{equation} \begin{cases} \begin{split} x= (a + b)\cos[sgn(B_z)\omega (t-t_0)+\mu] \\ -sgn(A\cdot v) R\cos[sgn(B_z)\frac{a + b}{b}\omega (t-t_0)+\mu]\\ y= (a + b)\sin[sgn(B_z)\omega (t-t_0)+\mu]\\ - sgn(A\cdot v) R\sin[sgn(B_z)\frac{a + b}{b}\omega (t-t_0)+\mu] \end{split} \end{cases} \end{equation} where $t_0$=$t_{min}$ when $sgn(A\cdot v)$ is positive, $t_0$=$t_{min}+\frac{T}{2}$ when $sgn(A\cdot v)$ is negative, and $\mu$ satisfies the condition that $y(0)=0$. \subsubsection{Comparison and verification} C. Graef, S. Kusaka\cite{2} and A. R.
Juarez\cite{4} start from the same point: the equations of motion of a charged particle in the equatorial plane of a magnetic dipole, written in polar coordinates. The resulting equations of motion are \begin{equation} \begin{cases} \frac{d^2r}{ds^2}-r(\frac{d\varphi}{ds})^2= -\frac{1}{r^2}\frac{d\varphi}{ds}\\ \frac{1}{r}\frac{d}{ds}[r^2(\frac{d\varphi}{ds})]=\frac{1}{r^2}\frac{dr}{ds}\\ (\frac{dr}{ds})^2+r^2 (\frac{d\varphi}{ds})^2=1 \end{cases} \end{equation} E. H. Avrett\cite{3} gave an implicit description of the orbit: \begin{equation} r =\frac{2r_{0}}{1+\sqrt{1-4(\nu/\nu_{c}\sin\alpha)}} \end{equation} K. Kabin\cite{6} gave another way to write it: \begin{equation} \rho =\frac{p_{\varphi}+\sqrt{p_{\varphi}^2+4C\nu\sin\alpha}}{2\nu\sin\alpha} \end{equation} Next, the fitted parametric equations will be verified. First, the initial conditions for the charged particle are taken as (x, y, \.{x}, \.{y})=(0.15, 0, 0, -0.1). For convenience, $r_{max}=0.15$, $r_{min}= 0.10455$ and $\frac{a}{b}=19.014$ are read off from the real trajectory in Fig.5(a). So the analytic expression of the prolate epicycloid is \begin{equation*} \begin{cases} x=0.1272748\cos(-0.21187t)+0.0227252\cos(-4.2374t)\\ y=0.1272748\sin(-0.21187t)+0.0227252\sin(-4.2374t) \end{cases} \end{equation*} which is described in Fig.5(b). \begin{figure}[h!] \begin{minipage}[t]{0.4\textwidth} \centering \includegraphics[scale=0.26]{5.eps} \end{minipage} \begin{minipage}[t]{0.4\textwidth} \centering \includegraphics[scale=0.46]{6.eps} \end{minipage} \caption{Comparison of real and fitted trajectories. (a). The particle trajectory in a magnetic dipole equatorial field for a movement time of $28$s. (b). The trajectory of the prolate epicycloid for a movement time of $28$s.} \label{5} \end{figure} We define the error rate of the fitted particle trajectory as $$\eta=\frac{t_0-t^*}{t_0}$$ where $t_0$ is the time of the actual motion of the charged particle and $t^*$ is the time of the fitted motion.
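As a sanity check, the fitted expression can be sampled directly: the distance of the fitted curve from the origin must oscillate between $r_{min}=0.10455$ and $r_{max}=0.15$. A minimal sketch (writing the curve parameter as $t$):

```python
import math

# Fitted prolate epicycloid of Section 3.2.4 (curve parameter: time t).
def fitted_trajectory(t):
    x = 0.1272748 * math.cos(-0.21187 * t) + 0.0227252 * math.cos(-4.2374 * t)
    y = 0.1272748 * math.sin(-0.21187 * t) + 0.0227252 * math.sin(-4.2374 * t)
    return x, y

# Sample the 28 s window shown in Fig.5 and record the distance from the origin.
radii = [math.hypot(*fitted_trajectory(0.01 * k)) for k in range(2801)]
r_lo, r_hi = min(radii), max(radii)  # should approach r_min = 0.10455, r_max = 0.15
```

The extreme radii of the curve are $|0.1272748 \pm 0.0227252|$, i.e. exactly $(r_{max}+r_{min})/2 \pm (r_{max}-r_{min})/2$, confirming the parameter identification of Section 3.2.2.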
By comparing Fig.5(a) and Fig.5(b), the error ratio is $\eta_1=0.2143\%$. The fit is evidently excellent, which shows that the parametric equations obtained by our fitting are meaningful. \section{Particle motion in an axisymmetric magnetic field} Continuing the above work, we study the motion of a charged particle in an axisymmetric magnetic field. The magnetic field intensity at a point in an axisymmetric magnetic field\cite{7} can be expressed as \begin{equation} \textbf{B} _\text{z} = \frac{\mu_0 \textbf{P}_m} {4\pi(x^2+y^2)^{n/2}} \end{equation} In particular, when $n=3$, the axisymmetric magnetic field is a magnetic dipole field. \subsection{Case: n$>$0} The case $n=1$ is special: K. Kabin and G. Bonner\cite{7} obtained the motion of a charged particle in this magnetic field by numerically finding a root of a transcendental equation. Here, we take $A=5$ and the initial condition (x, y, \.{x}, \.{y})$=(3,0,0,2)$. The real trajectory and the trajectory fitted by a prolate epicycloid are compared in Fig.6. \begin{figure}[h!] \centering \begin{minipage}[t]{0.4\textwidth} \centering \includegraphics[scale=0.45]{7.eps} \end{minipage} \begin{minipage}[t]{0.4\textwidth} \centering \includegraphics[scale=0.45]{8.eps} \end{minipage} \caption{Comparison of real and fitted trajectories. (a). The particle trajectory in an axisymmetric magnetic field for a movement time of $70$s. (b). The trajectory of the prolate epicycloid for a movement time of $70$s.} \label{6} \end{figure} The error ratio for this comparison is $\eta_2=2.5\%$. When $n=2$, we take $A=0.05$ and (x, y, \.{x}, \.{y})$=(0.15,0,0,-0.1)$. The real trajectory and the trajectory fitted by a prolate epicycloid are compared in Fig.7, and the error ratio for this comparison is $\eta_3=0.5667\%$. \begin{figure}[h!]
\centering \begin{minipage}[t]{0.4\textwidth} \centering \includegraphics[scale=0.45]{9.eps} \end{minipage} \begin{minipage}[t]{0.4\textwidth} \centering \includegraphics[scale=0.45]{10.eps} \end{minipage} \caption{Comparison of real and fitted trajectories. (a). The particle trajectory in an axisymmetric magnetic field for a movement time of $30$s. (b). The trajectory of the prolate epicycloid for a movement time of $30$s.} \label{7} \end{figure} Our experiments show that all of these cases can be explained by a prolate epicycloid when the particle trajectory is bounded and the magnetic field satisfies $B _\text{z}\sim \frac{1}{(x^2+y^2)^{n/2}}$ with $n>0$. A further observation is that the accuracy of the fitted trajectory increases with $n$. \subsection{Case: n$<$0 } There are two mathematical curve models that can account for the particle trajectories, though not for every initial condition. \subsubsection{Fitting of prolate hypocycloid and particle motion} Since the parametric equations of the prolate hypocycloid are obtained from those of the prolate epicycloid by replacing $b$ with $-b$, the average angular velocity of revolution for the prolate hypocycloid can be expressed as $$\overline{\omega}=\frac{v}{\sqrt{(a-b)^2+(R+\frac{aR}{-b})^2}}$$ Here is a typical example: we set $n = -3$, $ \frac{\mu_0 \overrightarrow{P}_m}{4\pi}= 10$, and the initial conditions (x, y, \.{x}, \.{y}) for the charged particle are $(0.15,0,0,-0.003)$. The same verification method is used to plot the trajectory fitted by a prolate hypocycloid, as shown in Fig.8. \begin{figure}[h!] \centering \begin{minipage}[t]{0.4\textwidth} \centering \includegraphics[scale=0.47]{11.eps} \end{minipage} \begin{minipage}[t]{0.4\textwidth} \centering \includegraphics[scale=0.49]{12.eps} \end{minipage} \caption{Comparison of real and fitted trajectories. (a). The particle trajectory in an axisymmetric magnetic field for a movement time of $900$s. (b).
The trajectory of the prolate hypocycloid for a movement time of $900$s.} \label{8} \end{figure} By comparison it is found that the two results are similar, but not identical, which motivates further study of a more refined model. \subsubsection{Fitting of curtate hypocycloid and particle motion} We set $n = -3$, $ \frac{\mu_0 \overrightarrow{P}_m}{4\pi}= 10$, and the initial conditions (x, y, \.{x}, \.{y}) $=(0.15,0,0,0.2)$ for the charged particle. Although the fit for the trajectory shown in Fig.9(a) and 9(b) is rough, it is consistent with the overall tendency of the particle trajectory. \begin{figure}[h!] \centering \begin{minipage}[t]{0.4\textwidth} \centering \includegraphics[scale=0.47]{13.eps} \end{minipage} \begin{minipage}[t]{0.4\textwidth} \centering \includegraphics[scale=0.47]{14.eps} \end{minipage} \caption{Comparison of real and fitted trajectories. (a). The particle trajectory in an axisymmetric magnetic field for a movement time of $55$s. (b). The trajectory of the curtate hypocycloid for a movement time of $55$s.} \label{9} \end{figure} \section{Conclusion} By establishing and fitting with the physical model of the prolate epicycloid, a simple analytic expression is obtained in the Cartesian coordinate system. This is an excellent model for explaining the motion of a charged particle in an axisymmetric magnetic field with $B\sim1/\rho^n$ ($n>0$) when the particle trajectory is bounded; a particular example is the particle motion in a magnetic dipole field, corresponding to $n=3$. When $B\sim1/\rho^n$ ($n<0$), the trajectory can be explained roughly by the prolate hypocycloid and the curtate hypocycloid, and further research will continue. It is hoped that the results can promote the study of the dynamics of the radiation belts and ring current particles. \section{Acknowledgments} The author X. Y. Wang would like to acknowledge Dr. Quanjin Wang for valuable discussions. This project is supported by the National Natural Science Foundation of China under Grant No.
11705076. We acknowledge the HongLiu Support Funds for Excellent Youth Talents of Lanzhou University of Technology. \section*{References}
https://arxiv.org/abs/2202.05246
Monotone Learning
The amount of training-data is one of the key factors which determines the generalization capacity of learning algorithms. Intuitively, one expects the error rate to decrease as the amount of training-data increases. Perhaps surprisingly, natural attempts to formalize this intuition give rise to interesting and challenging mathematical questions. For example, in their classical book on pattern recognition, Devroye, Gyorfi, and Lugosi (1996) ask whether there exists a {monotone} Bayes-consistent algorithm. This question remained open for over 25 years, until recently Pestov (2021) resolved it for binary classification, using an intricate construction of a monotone Bayes-consistent algorithm. We derive a general result in multiclass classification, showing that every learning algorithm A can be transformed to a monotone one with similar performance. Further, the transformation is efficient and only uses a black-box oracle access to A. This demonstrates that one can provably avoid non-monotonic behaviour without compromising performance, thus answering questions asked by Devroye et al (1996), Viering, Mey, and Loog (2019), Viering and Loog (2021), and by Mhammedi (2021). Our transformation readily implies monotone learners in a variety of contexts: for example it extends Pestov's result to classification tasks with an arbitrary number of labels. This is in contrast with Pestov's work which is tailored to binary classification. In addition, we provide uniform bounds on the error of the monotone algorithm. This makes our transformation applicable in distribution-free settings. For example, in PAC learning it implies that every learnable class admits a monotone PAC learner. This resolves questions by Viering, Mey, and Loog (2019); Viering and Loog (2021); Mhammedi (2021).
\section{Introduction} In this work we study the following fundamental question. Pick some standard learning algorithm~$A$, and consider training it for some natural task using a dataset of $n$ examples. \begin{center} {Does feeding $A$ more training data \emph{provably} reduce its population loss?} \end{center} E.g., would increasing the number of examples from $n$ to $n+1$ improve its loss? How about~$2n$? Can one guarantee improvement in this case? What about~$2^n$, or even $2^{2^n}$? Would that be sufficient? Can one at least assure that the loss will not deteriorate? Intuitively, the answer should be yes: indeed, the more often we face a certain task, the better we typically get at solving it. This basic intuition is reflected in many works in theoretical and applied machine learning. For example, \cite*{SSbook} assert in their book that the learning curve starts decreasing when the number of examples surpasses the VC dimension (page~153); \cite*{DudaHartStork01} state in their book that for real-world problems the learning curve is monotone (Subsection 9.6.7). Similar statements are made by a variety of other works; a partial list includes~\cite*{Gu01perf,Tax08curves,Weiss2014GeneratingWL}. We refer the reader to the thorough survey by \cite*{Viering21curves} for an extensive discussion about monotone and non-monotone learning curves (Section 6). On the other hand, one might argue that in order to successfully learn complex functions, the algorithm must dedicate time and resources to exploring larger and larger sets of hypotheses, and consequently exhibit a non-monotone behaviour. For example, any Bayes consistent learning algorithm must consider arbitrarily complex hypotheses (since it is able to approximate arbitrary functions). Indeed, one can show that consistent algorithms such as \emph{Nearest-Neighbors} can demonstrate such non-monotone behaviour \citep*{devroye1996probabilistic}. 
This intuition is reflected in Chapter~6 of the book by \cite*{devroye1996probabilistic}, in which it is conjectured that no Bayes-consistent rule can be monotone (Problem~6.16). \vspace{2mm} These intuitive considerations inspire a host of theoretical questions. Is it really the case that ``more data $=$ better generalization''? Is it at least the case for natural algorithms and natural learning tasks? Perhaps it is too much to expect that the addition of a single example will lead to better performance, but maybe if one doubles the training set then better performance is guaranteed? Can we at least guarantee that the performance does not deteriorate? \paragraph{Notation.} We focus on multiclass classification with respect to the zero/one loss and use standard learning theoretic notation (see, e.g., \cite*{SSbook}). Let $X$ be a set called the domain and let $Y$ denote the label-set. We assume that $Y$ is finite, w.l.o.g.\ $Y=[k]=\{0,1,\ldots, k-1\}$. For a set $Z$, let $Z^\star :=\cup_{n=0}^\infty Z^n$ denote the space of all finite sequences with elements from $Z$. A {\em hypothesis (or classifier)} is a function $h:X\to Y$. An {\em example} is a pair $z=(x,y)\in X\times Y$. A {\em sample} $S\in (X\times Y)^\star$ is a (finite) sequence of examples. A {\em learning rule} is a mapping from $(X\times Y)^\star$ to $Y^X$; i.e., the input is a finite sample and the output is a hypothesis. Given a distribution $D$ over $X\times Y$ and a hypothesis $h$, the {\em (population) loss} of $h$ with respect to $D$ is $\Err(h) = \E_{(x,y)\sim D}1[h(x)\neq y]$. Given a sample $S=\{(x_i,y_i)\}_{i=1}^m$, the {\em (empirical) loss} of~$h$ with respect to $S$ is $\EmpS(h) =\frac{1}{m}\sum_{i=1}^m{1[h(x_i)\neq y_i]}$, where $1[\cdot]$ is the indicator function. 
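The zero/one losses above are straightforward to state in code; the following minimal sketch (the names are ours, not the paper's) fixes the conventions used in the rest of this outline.

```python
# Illustrative sketch of the notation above (names are ours): a
# hypothesis is a function X -> Y, and the empirical loss of h on a
# sample S is the fraction of examples (x, y) with h(x) != y.

def empirical_loss(h, sample):
    """Err_S(h) = (1/m) * sum over (x, y) in S of 1[h(x) != y]."""
    if not sample:
        return 0.0
    return sum(h(x) != y for x, y in sample) / len(sample)

# A toy binary sample and a constant hypothesis:
h_const = lambda x: 0
S_toy = [(1, 0), (2, 1), (3, 0), (4, 1)]
```

Here `h_const` errs exactly on the examples labelled $1$, so its empirical loss on `S_toy` is $1/2$.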
\subsubsection*{Monotone Learners} \begin{definition}[Monotone learning rule]\label{def:mon} A learning rule $M$ is said to be \emph{monotone} w.r.t.\ a distribution~$D$ if \[(\forall m):\E_{S\sim D^{m}}\Bigl[\Err\bigl(M(S)\bigr)\Bigr] \geq \E_{S\sim D^{m+1}}\Bigl[\Err\bigl(M(S)\bigr)\Bigr].\] That is, the expected population loss of $M$ is monotone non-increasing in the size of its training set. \end{definition} The following theorem is the main result in this work; it asserts that every learning algorithm can be efficiently transformed to a monotone one with competitive generalization guarantees. \begin{theorem}[Every learner can be monotonized]\label{t:main} Consider the setting of multiclass classification into $k\in\mathbb{N}$ labels. Then, every learning algorithm $A$ can be efficiently converted to a learning algorithm $M=M(A)$ such that $M$ has only black-box oracle access to~$A$ and \begin{enumerate} \item $M$ is monotone with respect to \emph{every} distribution $D$. \item $M$'s performance is \emph{competitive} with that of $A$: for every source distribution $D$, \[\Bigl(\forall m\Bigr)\Bigl(\exists m' \text{ s.t. } \frac{m}{30}-1\leq m'\leq m\Bigr):\E_{S\sim D^{m}}\Bigl[\Err\bigl(M(S)\bigr)\Bigr] \leq \E_{S\sim D^{m'}}\Bigl[\Err\bigl(A(S)\bigr)\Bigr] + O\Bigl(\sqrt{\frac{\log m}{m}}\Bigr).\] \end{enumerate} \end{theorem} Theorem~\ref{t:main} affirmatively answers questions posed by \cite*{devroye1996probabilistic}, \cite*{viering2019open}, \cite*{Viering21curves}, and \cite*{mhammedi2021risk}. In addition, Theorem~\ref{t:main} readily implies monotone learners in a variety of contexts: \begin{corollary}[Bayes-Consistent Monotone Learners] For every $k\in\mathbb{N}$ there exists a Bayes-consistent monotone learner for multiclass classification into $k$ labels. \end{corollary} Indeed, this follows by applying the transformation to {\it any} Bayes-consistent learner (for example, $k$-nearest neighbor). 
This extends the result of Pestov, who focused on binary classification and designed a clever histogram-based Bayes-consistent algorithm. Moreover, while Pestov's algorithm and analysis are tailored to the binary case, our argument is more general, and at the same time conceptually (and arguably technically) simpler. The existence of a monotone Bayes-consistent learner remained open for over 25 years after it was posed by \cite*{devroye1996probabilistic}. Theorem~\ref{t:main} is also applicable in other contexts. In fact, the distribution-free regret bound on the learning rate of the monotone learner allows one to apply it in the PAC setting, where monotone learners were not known to exist~\cite*{viering2019open,Viering21curves}: \begin{corollary}[Monotone PAC Learners] Let $k\in\mathbb{N}$ and let $\mathcal{H}\subseteq [k]^X$ be a PAC learnable hypothesis class. Then, there exists a monotone agnostic PAC learner for $\mathcal{H}$. \end{corollary} Indeed, this follows by applying the transformation to any PAC learning algorithm for $\mathcal{H}$ (say, any empirical risk minimizer). Note that the learning rate of the resulting monotone PAC learner is suboptimal by a multiplicative $\sqrt{\log m}$ factor\footnote{The optimal PAC learning rate is proportional to $\sqrt{1/m}$.}. We leave the exploration of the optimal monotone PAC learning rate to future work. \subsection{Informal Explanation} A learning rule, even if it is Bayes consistent, does not have any reason, a priori, to be monotone. Indeed, the fact that the expected error converges to the Bayes error does not mean the convergence happens monotonically, and it could very well be that the error strictly increases between $m$ and~$m+1$ infinitely often. Let us examine the difficulties one encounters when trying to convert a Bayes consistent learner into one that is monotone. 
\paragraph{Convergent Learners are Sparsely Monotone.} Given any learning algorithm $A$ that has the following convergence property: \[ \lim_{m\to\infty} \E_{S\sim D^m}\left[\Err(A(S))\right] = e_D, \] with $\E_{S\sim D^m}\left[\Err(A(S))\right]\ge e_D$ for infinitely many $m$ (which, for example, holds for any Bayes-consistent learner), we immediately get that there exists a subsequence $m_{1},\ldots,m_{k},\ldots$ of indices over which the error is monotonically non-increasing, i.e., \[ \forall j\ge i,\, \E_{S\sim D^{m_{j}}}\left[\Err(A(S))\right] \le \E_{S\sim D^{m_{i}}}\left[\Err(A(S))\right]. \] However, the above subsequence of indices is \emph{distribution-dependent}, which means that we cannot guarantee that if we increase the sample size from some value $m$ to some other value $m'$, the error will be smaller for all distributions. \paragraph{Sparse to Dense Monotonicity.} We first observe that we could relax the monotonicity requirement to hold only for infinitely many steps (of arbitrary size). Indeed, while the monotonicity requirement of Definition \ref{def:mon} is written for $m$ and~$m+1$, we observe that if we had a learner that satisfies a \emph{sparse} version of this inequality, such as \begin{equation}\label{eq:sparse-mon} (\forall m)(\exists m'\ge m):\E_{S\sim D^{m}}\Bigl[\Err\bigl(M(S)\bigr)\Bigr] \geq \E_{S\sim D^{m'}}\Bigl[\Err\bigl(M(S)\bigr)\Bigr]\,, \end{equation} it would be easy to convert it into a learner that satisfies Definition \ref{def:mon} without affecting the limit of its population loss as $m\to\infty$. Indeed, the above condition guarantees that there is an infinite sequence $m_1,\ldots,m_k,\ldots$ of indices over which the algorithm is guaranteed not to increase its expected loss. This sequence can be defined as $m_1=1$ and $m_{k+1}=m'(m_k)$. From this, one can create a learner that, given $m$ examples with $m\in [m_k,m_{k+1})$, simply ignores $m-m_k$ examples from the training set. 
The monotonicity condition would then be satisfied with equality between $m_k$ and~$m_{k+1}$ since the output of our algorithm would be unchanged (in expectation). \paragraph{Running over Prefixes.} So in order to convert an arbitrary learning rule into one that is monotone, while still retaining its convergence properties, the main idea is to run the algorithm on prefixes of the training sample and measure the loss of the produced hypotheses in order to pick the best one. As the sample size increases, the pool of hypotheses to choose from will increase and the best one from a larger pool will thus have a smaller loss than from a smaller pool. This idea has been previously explored by~\cite*{viering2020making,mhammedi2021risk}. However, implementing this idea turns out to be a subtle task. Indeed, we can only \emph{estimate} the loss of the hypotheses (by setting aside some examples and computing their empirical loss), so there is always some possibility that we choose a worse hypothesis (which empirically looks better). To illustrate the issue, let's consider the simplest possible situation where the base algorithm has produced a hypothesis $h_0$ on a prefix of the sample, and another hypothesis $h_1$ on a longer prefix. If $\Err(h_1)\le \Err(h_0)$, the base algorithm is already monotone, but in the case $\Err(h_1)> \Err(h_0)$, any wrapper algorithm would have to choose between outputting $h_0$ or $h_1$. Unfortunately, this choice will necessarily worsen the error (unless the wrapper always outputs $h_0$ deterministically, in which case it would not manage to track the performance of the base algorithm). Indeed, the expected loss of any wrapper would be a convex combination of $\Err(h_1)$ and $\Err(h_0)$ and would be strictly larger than $\Err(h_0)$. 
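The prefix-running idea above can be sketched in code; this is a deliberately naive version (our illustration, with a toy base learner, not the paper's algorithm $M$), and, as just discussed, selecting by a noisy empirical estimate does not by itself guarantee monotonicity.

```python
# Naive "running over prefixes" wrapper (our illustration): train the
# base algorithm A on nested prefixes of the first half of the sample,
# then keep the candidate with the smallest empirical loss on the
# held-out second half.  The selection is based on a noisy estimate,
# so this alone does NOT guarantee monotonicity.

def empirical_loss(h, sample):
    return sum(h(x) != y for x, y in sample) / max(len(sample), 1)

def prefix_select(A, sample, num_prefixes=4):
    half = len(sample) // 2
    train, holdout = sample[:half], sample[half:]
    sizes = [max(1, (t + 1) * half // num_prefixes) for t in range(num_prefixes)]
    candidates = [A(train[:n]) for n in sizes]
    return min(candidates, key=lambda h: empirical_loss(h, holdout))

def majority_learner(S):
    """Toy base algorithm: predict the most common label in S."""
    labels = [y for _, y in S]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority
```

For instance, on a sample that is mostly labelled $1$, `prefix_select(majority_learner, S)` returns a hypothesis that predicts $1$ everywhere.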
So we cannot simply take the output of the base algorithm; the idea is to \emph{regularize} it, i.e., make it possibly a little worse, but in such a way that this regularization can be reduced as the sample size increases, thus letting us guarantee monotonicity. \paragraph{How to Make the Learner Worse?} The question thus becomes: given a hypothesis $h$, is there a way to produce a hypothesis $h'$ that is guaranteed to be \emph{worse} than $h$ (i.e., $\Err(h')>\Err(h)$) and, possibly, to control how much worse it is? The first idea that comes to mind is to sometimes output a label which we know is incorrect. This would be possible if we had some extra label at our disposal: e.g., in binary classification we would allow the learner to output something different from $0$ or $1$, say $\bot$, and count $\bot$ as a mistake. But this is somewhat artificial and would require changing the nature of the algorithm's predictions. So the second idea that comes to mind is to just add noise to the output of the algorithm, i.e., to randomly pick a different label than the one predicted. Unfortunately, in the context of classification, adding noise does not guarantee that the loss is made worse! Indeed, in binary classification, if $\Err(h)>1/2$, adding some uniform noise to the output would make the error closer to $1/2$, hence better and not worse. \paragraph{How to Make the Learner Always Better Than Some Value?} So we see that if we could, given a hypothesis $h$, which could have error larger than $1/2$, return one that is guaranteed to have error less than $1/2$, we could then make the latter worse by adding uniform noise. This brings us to our last key idea: we symmetrize the output by replacing the hypothesis produced by the base algorithm with the better (in terms of empirical error) of $h$ and $1-h$. 
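The effect of label noise in the binary case can be made explicit with a one-line computation (an illustrative addition, consistent with the two observations above): if a regularizer outputs $1-h(x)$ with probability $\eta$ and $h(x)$ otherwise, and we call the result $h_\eta$, then, since the flipped prediction errs exactly when $h$ is correct,
\[
\Err(h_\eta)=(1-\eta)\Err(h)+\eta\bigl(1-\Err(h)\bigr)=\Err(h)+\eta\bigl(1-2\Err(h)\bigr),
\]
so the noise strictly increases the loss precisely when $\Err(h)<1/2$.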
We can then guarantee (we will prove it below) that the expected loss of this \emph{symmetrized} output is less than $1/2$, so adding noise strictly increases its loss, making room for reducing the loss when we choose between $h_0$ and~$h_1$. Symmetrization is more subtle in the context of multiclass classification with $k>2$ labels; the idea there is to replace $h$ with the best of the $k$ hypotheses obtained by composing $h$ with a cyclic permutation of the labels. For simplicity, we focus on the binary case in this outline. \paragraph{Formalization.} Let us now write down some of the ideas above more formally. The overall approach is to run the base algorithm on a prefix of size $n$ of the sample $S$ to obtain some hypothesis~$h_0$, and apply some transformation (which we call \emph{regularization}) to $h_0$, which consists of symmetrizing and adding noise, to obtain $R(h_0,S)$. We then perform the same operation on a longer prefix of size $N>n$ of $S$ to obtain $R(h_1,S)$, and decide whether to use $h_0$ or $h_1$ by estimating their respective errors (on an additional subset of $N$ examples from $S$). 
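A minimal sketch of such a regularizer in the binary case (our rendering of the above, with hypothetical names; the actual parameters are specified later in the paper):

```python
import random

# Sketch of the binary regularizer R (our illustration): symmetrize by
# taking the empirically better of h and its negation 1-h, and with a
# small probability eta output a uniformly random element of {h, 1-h}
# instead (the noise that strictly increases the loss).

def empirical_loss(h, sample):
    return sum(h(x) != y for x, y in sample) / max(len(sample), 1)

def negate(h):
    return lambda x: 1 - h(x)

def regularize(h, sample, eta, rng=random):
    candidates = [h, negate(h)]
    if rng.random() < eta:  # noise branch: uniform over {h, 1-h}
        return rng.choice(candidates)
    # symmetrization branch: empirical risk minimizer over {h, 1-h}
    return min(candidates, key=lambda f: empirical_loss(f, sample))
```

With $\eta=0$ the noise branch never fires, so a hypothesis whose empirical error exceeds $1/2$ is simply replaced by its negation.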
If we denote by $p_N$ the probability of choosing $h_1$ over $h_0$, we see that the expected error of the resulting procedure will have the form\footnote{We will later provide more details about how to split the training sample so as to guarantee independence and decouple the expectations appropriately.} \[ p_N \E_{S\sim D^{N}}\Bigl[\Err\bigl(R(h_1,S)\bigr)\Bigr] + (1-p_N) \E_{S\sim D^{N}}\Bigl[\Err\bigl(R(h_0,S)\bigr)\Bigr] \] and if we want to satisfy Inequality \eqref{eq:sparse-mon}, this quantity would have to be smaller than the expected error of our procedure run on $n$ examples, i.e., we would want \[ p_N \E_{S\sim D^{N}}\Bigl[\Err\bigl(R(h_1,S)\bigr)\Bigr] + (1-p_N) \E_{S\sim D^{N}}\Bigl[\Err\bigl(R(h_0,S)\bigr)\Bigr] \le \E_{S\sim D^{n}}\Bigl[\Err\bigl(R(h_0,S)\bigr)\Bigr]. \] As discussed above, without regularization, i.e., if $R$ is the identity, there is no way to guarantee this inequality for every pair $h_0,h_1$, and the problematic case is when $h_1$ is worse than $h_0$, or when $R(h_1,S)$ is worse than $R(h_0,S)$. If we rewrite the above condition as follows: \[ p_N \left( \E_{S\sim D^{N}}\Bigl[\Err\bigl(R(h_1,S)\bigr)\Bigr] - \E_{S\sim D^{N}}\Bigl[\Err\bigl(R(h_0,S)\bigr)\Bigr] \right) \le \E_{S\sim D^{n}}\Bigl[\Err\bigl(R(h_0,S)\bigr)\Bigr] - \E_{S\sim D^{N}}\Bigl[\Err\bigl(R(h_0,S)\bigr)\Bigr] \] we see that in order for it to be satisfied even when $R(h_1,S)$ is worse than $R(h_0,S)$, we need \begin{enumerate} \item $p_N$ has to be small enough (to make the left-hand side small enough). \item The regularization over $N$ examples has to be strictly better than the regularization over $n$ examples (to make the right-hand side positive and large enough). 
\end{enumerate} \subsection{Technical Contributions} The technical contributions of this work can roughly be partitioned into two parts: \begin{itemize} \item[(i)] We develop a general axiomatic framework for constructing transformations which compile arbitrary learners to monotone learners with similar guarantees (Section~\ref{sec:general}). We attempt to state this framework in an abstract manner, with the hope that it might be useful for other loss functions. In a nutshell, this framework reduces the task to constructing, for every hypothesis $h$, a \emph{small} and \emph{symmetric} class $B_h$ such that $h\in B_h$ and $B_h$ can be learned by a monotone learner; for example, in the context of binary classification ($Y=\{0,1\}$) we use $B_h=\{h, 1-h\}$. More generally, in the context of multiclass classification ($Y=[k]=\{0,\ldots,k-1\}$), we use $B_h=\{s_i\circ h : i\in[k]\}$, where $s_i$ is the cyclic permutation mapping a label $y$ to $y+i \mod k$. \item[(ii)] In Sections~\ref{sec:binary} and \ref{sec:multiclass} we use our general framework to prove Theorem~\ref{t:main}. In Section~\ref{sec:binary} we focus on the case of binary classification; this section serves as a warmup to the general multiclass setting, which is considered in Section~\ref{sec:multiclass}. The most technical proof in this work is that of Proposition~\ref{prop:multiclass}, specifically Lemma~\ref{le:monmulti}, which asserts that the randomized ERM over $B_h$ is monotone: recall that for a hypothesis $h:X\to \{0,\ldots, k-1\}$, the class $B_h$ consists of the $k$ cyclic permutations of $h$: $B_h = \{s_i\circ h : i\in[k]\}$, where $s_i$ is a cyclic permutation mapping $y\mapsto y+i\mod k$. The randomized ERM is the algorithm which, given an input sample $S$, outputs an empirical risk minimizer from $B_h$ drawn uniformly at random. 
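In code, the base class $B_h$ and the randomized ERM described above look roughly as follows (a sketch under our naming; the formal versions appear in the sequel):

```python
import random

# Sketch (our naming) of the base class B_h and the randomized ERM:
# B_h consists of the k cyclic shifts s_i o h, where s_i(y) = (y+i) mod k,
# and the randomized ERM returns a uniformly random empirical risk
# minimizer in B_h.

def cyclic_shifts(h, k):
    """B_h = {s_i o h : i in [k]} with s_i(y) = (y + i) mod k."""
    return [lambda x, i=i: (h(x) + i) % k for i in range(k)]

def empirical_loss(f, sample):
    return sum(f(x) != y for x, y in sample) / max(len(sample), 1)

def randomized_erm(h, sample, k, rng=random):
    B = cyclic_shifts(h, k)
    best = min(empirical_loss(f, sample) for f in B)
    minimizers = [f for f in B if empirical_loss(f, sample) == best]
    return rng.choice(minimizers)
```

For instance, with $k=3$ and the constant hypothesis $h\equiv 0$, the class consists of the three constant hypotheses $0,1,2$, and the ERM picks whichever matches the sample best.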
To prove Proposition~\ref{prop:multiclass} we exploit the following symmetry exhibited by $B_h$: for any example~$(x,y)$ there exists a unique $h'\in B_h$ such that $h'(x)=y$. This implies, via a symmetrization argument and via Chebyshev's sum inequality\footnote{Chebyshev's sum inequality asserts that if $a_1\leq a_2\leq\ldots \leq a_n$ and $b_1\geq b_2\geq\ldots\geq b_n$ then $\frac{1}{n}\sum_{i=1}^n a_i\cdot \frac{1}{n}\sum_{i=1}^n b_i \geq \frac{1}{n}\sum_{i=1}^n a_ib_i$.} the desired monotonicity \citep*{hardy1988inequalities}. We also note that the upper bound on the rate in Theorem~\ref{t:main} is \emph{independent} of the number of labels $k$. To achieve this we once again appeal to the symmetric structure of $B_h$, and show that $B_h$ satisfies uniform convergence with a rate that is independent of $k$. \end{itemize} \subsection{Related Work} The idea of monotone learning curves for universally consistent learners was first discussed by \cite*{devroye1996probabilistic}. This problem attracted little attention until recently, when \cite*{viering2019open} considered monotone learning in a variety of contexts (e.g., when the goal is to learn a fixed hypothesis class) and under more general loss functions. \cite*{viering2020making}, \cite*{Viering21curves}, and \cite*{mhammedi2021risk} considered the problem of transforming a given learner to a monotone one using a \emph{wrapper algorithm}. \cite*{viering2020making} and \cite*{mhammedi2021risk} derive weaker forms of monotonicity and leave open the question of whether such a transformation exists. In this work we resolve this problem in the context of multiclass classification. The question posed by~\cite*{devroye1996probabilistic} was finally resolved by \cite*{pestov2021universally}. Pestov's result applies to binary classification, and here we prove an extension to general multiclass classification. 
\paragraph{Other Notions of Monotonicity.} \cite*{viering2020making} propose to relax the requirement of monotonicity in expectation into \emph{high-probability} and \emph{eventual monotonicity}. \paragraph{Other Notions of Consistency.} It is important to note that the open problem proposed by \cite{viering2019open} concerns consistency with respect to a fixed class of functions, and is therefore less general than the universal consistency of \cite*{devroye1996probabilistic}. \paragraph{Comparing with Pestov's result.} While \cite*{pestov2021universally} builds a specific algorithm rather than a generic wrapper, and considers only binary classification, there are some similarities between his approach and ours that are worth highlighting. Indeed, his algorithm consists of the following three ingredients: \begin{enumerate} \item Consider prefixes of the input sample of (exponentially) increasing size \item Perform a majority vote over a partition of the input domain \item Decide (empirically) whether or not to split each element of the partition into smaller pieces \end{enumerate} The first ingredient is similar to our (and others') approach of guaranteeing monotonicity on an infinite sequence of indices (what we call sparse monotonicity above), the second one bears some similarity to our symmetrization approach, since the majority vote amounts to comparing $h$ and $1-h$, and the last one is comparable to our update procedure, which decides whether to continue using $h_0$ or to switch to $h_1$. However, there is one important difference, which is key to obtaining a uniform bound on the excess loss of our monotone algorithm. 
Indeed, Pestov does not regularize by adding noise, which forces him to refine a partition element only under some very restrictive conditions (the conditional loss on the element should be close neither to $1/2$ nor to $0$ or~$1$); this in turn requires very large increases of the sample size between two stages, resulting in a slower, non-uniform convergence rate (more specifically, he does not provide an explicit formula for computing~$N$ from~$n$). \section{General Framework}\label{sec:general} Given an algorithm $A$ which maps an input sample $S$ to an output hypothesis $A(S)$, we construct a monotone algorithm $M$ using two intermediate algorithms: \begin{enumerate} \item A \emph{regularization} algorithm $R$ is an algorithm that takes as input a sample $S$ and a hypothesis~$h$ and returns a (possibly randomized) hypothesis $R(h,S)$. It might be useful/intuitive to think about $R(h,S)$ as a smooth/regularized version of $h$. \item An \emph{update} algorithm $U$ is an algorithm that takes as input a sample~$S$ and two hypotheses: (i) $h_0$, called the \emph{current} hypothesis, and (ii) $h_1$, called the \emph{candidate} hypothesis. The algorithm then outputs a hypothesis denoted by $U(h_0,h_1,S)\in\{h_0,h_1\}$. Intuitively,~$U$ chooses whether to replace the current hypothesis $h_0$ with the candidate hypothesis $h_1$, when the latter has smaller error. \end{enumerate} The monotone algorithm will then be constructed in an iterative manner from $A$ by applying $A$ to prefixes of the training sample of increasing size and using the \emph{update} algorithm at each step to decide whether to keep the current hypothesis or to update it to the new one (built on a longer prefix). At the end, we output a regularized version of the currently chosen hypothesis. The \emph{update} algorithm ensures that with high probability we update the hypothesis only when the new hypothesis has better (smaller) loss than the previous one. 
But since there is still a small chance we have updated to a worse hypothesis, the regularization step will be used to correct for the corresponding additional expected loss. See Figure~\ref{fig:algM} for a more precise description of the algorithm $M$. \begin{figure} \begin{tcolorbox} \begin{center} {\bf The Monotonizing Algorithm $M$} \end{center} \begin{itemize} \item Input: a learning algorithm $A$, a regularization algorithm $R$, an update algorithm $U$, an increasing function $b:\mathbb{N}\rightarrow \mathbb{N}$ such that $b(0)=1$, and a sample $S\sim D^m$ of iid examples from $D$. \item Output: a hypothesis determined as follows: \begin{enumerate} \item If $m< 2\cdot b(1) + b(0)$ then output $R(f,Z)$, where $f=A(\emptyset)$ and $Z$ consists of the first example from $S$. \item Else, let $T\geq 2$ be maximal such that $b(T-1)+\sum_{t=0}^{T-1} b(t)\le m$; discard from $S$ the last $m - \bigl(b(T-1)+\sum_{t=0}^{T-1}b(t)\bigr)$ examples. \item Partition the remaining examples into $T+1$ blocks $\{B_t\}_{t=0}^{T}$ where $\lvert B_t\rvert=b(t)$ for $t\leq T-1$ and $\lvert B_T\rvert = b(T-1)$. \item Let $S_t$ denote the union of the first $t$ blocks: $S_t := \bigcup_{i=0}^t B_i$. \item Set $f_{0}:=A(\emptyset)$. \item For each $t=1,\ldots,T-1$ perform the following operations: \begin{enumerate} \item Compute the new candidate hypothesis using $A$: $h_{t}:=A(S_{t-1})$. \item Choose the new hypothesis between $f_{t-1},h_t$: $f_{t}:=U(f_{t-1}, h_{t}, B_{t})$. \end{enumerate} \item On the last block, output $R(f_{T-1},B_{T})$. \end{enumerate} \end{itemize} \end{tcolorbox} \caption{\label{fig:algM} Pseudo-code for the transformation $M$ which converts any learning rule $A$ to a monotone one with similar guarantees. The algorithm proceeds by running $A$ on increasing prefixes of the input sample and applying a carefully tailored model selection to the outputs of $A$. Item 1 handles a trivial base case and can be ignored on a first read. 
Note that the last two blocks have identical size $\lvert B_{T-1}\rvert = \lvert B_T\rvert =b(T-1)$. This is because the last block serves for regularization, while the first $T-1$ blocks are used for training and model selection.} \end{figure} \paragraph{A Framework for Proving Monotonicity.} We introduce several conditions on the update and regularization algorithms which guarantee the success of Algorithm $M$ when applied to \emph{any} learning algorithm $A$. We then analyze our algorithms by showing that they satisfy these conditions. Let us begin by introducing some notation: given a hypothesis $h$, denote by $\mathtt{LR}_n(h)$ the quantity \begin{equation}\label{eq:rnn} \mathtt{LR}_n(h):= \E_{S\sim D^{n}}\Err R(h,S) \end{equation} where the expectation is with respect to the sample $S$ of size $n$. \begin{definition}[Sufficient Conditions for Monotonicity] \label{def:suff-mon} Let $R$ be a regularization algorithm, let~$U$ be an update algorithm. We say that \emph{$(R,U)$ are successful} if for every source distribution $D$ the following conditions are satisfied: \begin{enumerate}[{\bf(C1)}] \item After regularization, the expected loss of the update algorithm is \emph{non-increasing}: \[ (\forall n\in\mathbb{N}) (\exists N\ge n) (\forall h_0,h_1): \E_{S\sim D^N}\mathtt{LR}_N(U(h_0,h_1,S)) \le \mathtt{LR}_n(h_0). \] For $n\in\mathbb{N}$, let $N(n)$ denote the smallest number $N>n$ for which the above holds. \item The \emph{update} algorithm competes with the new hypothesis at a small cost: \[ (\forall n\in\mathbb{N})(\forall h_0,h_1): \E_{S\sim D^n}\mathtt{LR}_n(U(h_0,h_1,S)) \le \Err(h_1)+c(n), \] for some function $c$ such that $\lim_{n\to\infty} c(n)=0$. \end{enumerate} \end{definition} \begin{prop}[Monotonicity and Competitiveness of $M$]\label{c:final} Let $R,U$ be a regularization and update algorithm such that $(R,U)$ are successful. 
Define $b:\mathbb{N}\to\mathbb{N}$ according to the recurrence $b(0)=1$ and $b(t+1)=N(b(t))$, where $N(\cdot)$ is the function defined in Condition (C1) of Definition \ref{def:suff-mon}. Consider algorithm $M$ with $(R,U)$ and $b$ as inputs. Then, for every algorithm $A$, applying $M$ on $A$ yields a monotone algorithm whose performance is competitive with that of $A$: for every source distribution $D$ and every sample size $m\geq 2b(1) + b(0)$, \[ \E_{S\sim D^m}\Err(M(S)) \le \E_{S\sim D^m}\Err(A(S_{T-2})) + c(b(T-1)), \] where $T$ and $S_{T-2}$ are as in the pseudo-code of $M$ in Figure~\ref{fig:algM}. \end{prop} \begin{proof} The case of $m < 2b(1) + b(0)$ is trivial, so we assume that $m \geq 2b(1) + b(0)$. We start by proving monotonicity. We only need to consider the case where a new block is added (otherwise, the additional examples are simply discarded and thus the expected error is unchanged). In this case, it is sufficient to consider the effect of adding a new block $B_{T}$ and prove that \begin{equation}\label{eq:monsufficient} \E \mathtt{LR}_N(f_{T-1})\leq \E \mathtt{LR}_n(f_{T-2}), \end{equation} for $n=b(T-2)$ and $N=N(n)=b(T-1)$. Conditioned on $S_{T-2}$, the hypotheses $f_{T-2}$ and~$h_{T-1}$ are fixed and \[f_{T-1}=U(f_{T-2},h_{T-1},B_{T-1})\] is a function of $B_{T-1}$ (which is not under conditioning). Therefore, \begin{align*} \E\bigl[\mathtt{LR}_N(f_{T-1}) \big\vert S_{T-2}\bigr] &= \E_{B_{T-1}\sim D^N}\bigl[\mathtt{LR}_N(U(f_{T-2},h_{T-1},B_{T-1}))\big\vert S_{T-2}\bigr]\\ &\leq \E\bigl[\mathtt{LR}_n(f_{T-2})\big\vert S_{T-2}\bigr]. \tag{Condition (C1)} \end{align*} Equation~(\ref{eq:monsufficient}) now follows by taking expectation over $S_{T-2}$. For the second part, observe that Condition (C2) implies that \begin{align*} \E_{S\sim D^m}\Err(M(S)) &= \E\Err R(f_{T-1},B_{T}) \\ &= \E \mathtt{LR}_N\bigl(U(f_{T-2},h_{T-1},B_{T-1})\bigr)\\ &\le \E\Err(h_{T-1}) + c(N) \tag{Condition (C2)}\\ &= \E\Err(A(S_{T-2})) + c(b(T-1)). 
\end{align*} \end{proof} \subsection{The Base-Class Approach} We design two algorithms using the framework above: one for binary classification, which serves as a ``warmup'', and a more general one for multiclass classification. In this subsection we describe a common abstraction of these two algorithms. We hope this abstraction will be useful in other contexts as well (e.g., for other loss functions). The common abstraction boils down to assuming that every hypothesis $h$ has an associated \emph{simple} hypothesis class $B_h$ such that $h\in B_h$. For example, in the context of binary classification ($Y=\{0,1\}$) our $B_h$ will consist of two hypotheses: $h$ and its negation $1-h$; i.e., $B_h=\{h, 1-h\}$. More generally, in multiclass classification with $k$ labels, our $B_h$ will consist of $k$ hypotheses. We require that $B_h$ is ``well-behaved'' in a precise sense, which we describe next. Consider the following randomized empirical risk minimizer over $B_h$. \begin{tcolorbox} \begin{center} {\bf Algorithm $G$: Randomized Empirical Risk Minimization} \end{center} \begin{itemize} \item Input: a hypothesis $h\in Y^X$ and a sample $S\in (X\times Y)^n$ \item Set $B_h^\star = \{f\in B_h : \EmpS(f)=\min_{g\in B_h}\EmpS(g)\}$ \item Output: a uniformly random hypothesis from $B_h^\star$, which is denoted by $G(h,S)$. \end{itemize} In words, $G(h,S)$ is a random empirical risk minimizer in $B_h$. \end{tcolorbox} Note that on the empty sample, $G(h,\emptyset)$ is a random hypothesis drawn uniformly from $B_h$. Let $\mathtt{LG}_n(h)$ denote the expected loss of $G(h,S)$ where $S$ is of size $n$: \[ \mathtt{LG}_n(h) := \E_{S\sim D^n} \Err(G(h,S))\,. \] \begin{definition}[Successful Base-Class]\label{def:base-class} Let $h\mapsto B_h$ be a mapping which associates with every hypothesis $h$ a finite hypothesis class $B_h$ such that $h\in B_h$. 
This mapping is called \emph{successful} if the following properties are satisfied: \begin{enumerate} \item The loss of the randomized ERM over $B_h$ is monotone: \[ (\forall D)(\forall h)(\forall n): \mathtt{LG}_{n+1}(h)\le \mathtt{LG}_n(h)\,. \] \item There exists a function $e(n)$ with $\lim_{n\to\infty} e(n)=0$ such that every $B_h$ satisfies uniform convergence with rate $e(n)$: \[ (\forall D)(\forall h)(\forall n): \E_{S\sim D^n}\Bigl[\max_{f\in B_h}\bigl\lvert\EmpS(f) - \Err(f) \bigr\rvert\Bigr] \leq e(n). \] \item The loss of the random ERM on the empty sample is independent of $h$: \[ (\forall D)(\exists \alpha)(\forall h): \mathtt{LG}_0(h)=\alpha\,. \] In other words, the expected loss of a uniform random hypothesis from $B_h$ is equal to a constant $\alpha$ which does not depend on $h$. ($\alpha$ can depend on the source distribution~$D$.) \end{enumerate} \end{definition} \begin{remark} The first condition above could be relaxed to requiring $\mathtt{LG}_{N}(h)\le \mathtt{LG}_n(h)$ for some $N\ge n$ large enough. Indeed, this would be sufficient to derive Condition (C1). But we will show that the randomized ERM that we consider actually satisfies the stronger property of monotonicity for $N=n+1$. \end{remark} \begin{remark} The second item above implies (via a triangle inequality) that the randomized ERM $G(h,\cdot)$ is competitive with~$h$: \begin{equation}\label{eq:ERMcomp} (\forall D)(\forall h)(\forall n): 0\leq \mathtt{LG}_n(h)-\min_{f\in B_h}\Err(f) \leq 2e(n). \end{equation} (Note that $\min_{f\in B_h}\Err(f)\leq \Err(h)$, since $h\in B_h$.) \end{remark} \subsubsection*{The Regularization and Update Algorithms} We next describe how a successful map $h\mapsto B_h$ yields a successful pair of regularization and update algorithms. \begin{tcolorbox} \begin{center} {\bf The Regularization Algorithm $R$} \end{center} \begin{itemize} \item Input: a hypothesis $h\in Y^X$, a sample $S\in (X\times Y)^n$. 
\item Output: with probability $1-\eta_n$ output $G(h,S)$, and with probability $\eta_n$ output $G(h,\emptyset)$. Here $\eta_n\in (0,1)$ is a decreasing function of $n$ satisfying $\lim_{n\to\infty}\eta_n=0$. \end{itemize} \end{tcolorbox} \begin{tcolorbox} \begin{center} {\bf The Update Algorithm $U$} \end{center} \begin{itemize} \item Input: two hypotheses $h_0,h_1\in Y^X$, a sample $S\in (X\times Y)^{n}$. \item Compute $\min_{f\in B_{h_0}}\EmpS(f)$ and $\min_{f\in B_{h_1}}\EmpS(f)+\epsilon_n$, and output $h_0$ if the first quantity is at most the second, and $h_1$ otherwise. Here $\epsilon_n\in (0,1)$ is a decreasing function of $n$ satisfying $\lim_{n\to\infty}\epsilon_n=0$. \end{itemize} \end{tcolorbox} \begin{prop}[Base-class]\label{prop:base-class} Assume $h\mapsto B_h$ is successful with uniform convergence rate~$e(n)$. Then, the update algorithm $U$ and the regularization algorithm $R$ with parameters \[\eta_n = \frac{1}{2\sqrt{n}}\text{ and } \epsilon_n=\sqrt{{\ln(64 n)}/{n}}+2e(n)\] satisfy Condition (C1) with $N(n) = 4n$ and Condition (C2) with $c(n) = 2\eta_n+3\epsilon_n$. \end{prop} The rest of this section is dedicated to proving Proposition~\ref{prop:base-class}. We begin by collecting some simple properties of the regularization and update algorithms, and then continue to establish Conditions (C1) and (C2). \subsection{Basic Properties of $R$ and $U$} Fix a source distribution $D$. Recall the definition of $\mathtt{LR}_n$ (Equation~\ref{eq:rnn}): \[\mathtt{LR}_n(h)= \E_{S\sim D^{n}}\Err\bigl(R(h,S)\bigr). \] We next present some basic properties of $\mathtt{LR}_n$ which will be useful in proving Proposition~\ref{prop:base-class}.
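To make the three procedures concrete, here is a minimal executable sketch (an illustration, not taken from the paper): hypotheses are represented as Python callables, the loss is the $0/1$ loss, and ties in $U$ are resolved in favor of $h_0$, matching the strict inequality in the definition of the update probability used in the analysis.

```python
import random

def emp_loss(f, S):
    """Empirical 0/1 loss of hypothesis f on a sample S = [(x, y), ...]."""
    if not S:
        return 0.0
    return sum(1 for x, y in S if f(x) != y) / len(S)

def G(B_h, S, rng=random):
    """Algorithm G: a uniformly random empirical risk minimizer in B_h."""
    losses = [emp_loss(f, S) for f in B_h]
    best = min(losses)
    return rng.choice([f for f, l in zip(B_h, losses) if l == best])

def R(B_h, S, eta, rng=random):
    """Regularization R: with probability eta run G on the empty sample."""
    return G(B_h, [] if rng.random() < eta else S, rng)

def U(B_h0, B_h1, h0, h1, S, eps):
    """Update U: switch to h1 only if B_h1 beats B_h0 by margin eps."""
    if min(emp_loss(f, S) for f in B_h0) > min(emp_loss(f, S) for f in B_h1) + eps:
        return h1
    return h0

# Tiny binary demo with B_h = {h, 1 - h}:
h0 = lambda x: 0                      # constant hypothesis
h1 = lambda x: x % 2                  # parity hypothesis
B0 = [h0, lambda x: 1 - h0(x)]
B1 = [h1, lambda x: 1 - h1(x)]
S = [(i, i % 2) for i in range(10)]   # labels follow parity
erm = G(B1, S)                        # the unique ERM in B1 is h1
chosen = U(B0, B1, h0, h1, S, eps=0.1)  # B1 wins by margin 0.5 > 0.1
```

Note that in the binary case $B_{h_0}=B_{h_1}$ whenever $h_1\in\{h_0,1-h_0\}$, so $U$ only ever switches between hypotheses produced at different stages, whose base classes can differ.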
A simple calculation yields the following relationship between $\mathtt{LG}_n$ and $\mathtt{LR}_n$: for every hypothesis~$h$ and every $n$, \begin{align} \mathtt{LR}_n(h) &= (1-\eta_n)\cdot \mathtt{LG}_n(h) + \eta_n\cdot \mathtt{LG}_0(h) \notag\\ &=(1-\eta_n)\cdot \mathtt{LG}_n(h) + \eta_n\cdot \alpha\label{e:ddefnoise}\\ &= \mathtt{LG}_n(h) + \eta_n\left(\alpha - \mathtt{LG}_n(h) \right)\label{e:ddefnoise2}. \end{align} The following claim asserts that $\mathtt{LR}_n$ is monotone: \begin{claim}\label{c:rmonn} For all $n,m$, if $m\ge n$ and $\eta_m\le \eta_n\le 1$ then \[ \mathtt{LR}_n(h)-\mathtt{LR}_m(h) \ge (\eta_n-\eta_m)\left(\alpha-\mathtt{LG}_m(h)\right)\,, \] and in particular, \[ \forall h,\,\mathtt{LR}_m(h) \le \mathtt{LR}_n(h)\,. \] \end{claim} \begin{proof} By Equation \eqref{e:ddefnoise}: \begin{align} \mathtt{LR}_n(h) - \mathtt{LR}_m(h) &= (1-\eta_n)\left(\mathtt{LG}_n(h) - \mathtt{LG}_m(h)\right) + (\eta_n-\eta_m)\left(\alpha-\mathtt{LG}_m(h)\right)\notag\\ &\ge (\eta_n-\eta_m)\left(\alpha-\mathtt{LG}_m(h)\right) \tag{$\mathtt{LG}_m(h) \le \mathtt{LG}_n(h)$ by Definition \ref{def:base-class}}\\ &\ge 0 \tag{$\mathtt{LG}_m(h)\le \mathtt{LG}_0(h) = \alpha$ by Definition \ref{def:base-class}} \end{align} which proves the claim. \end{proof} Lastly, observe that $R$ deteriorates the performance of $h$ by at most $\eta_n+2e(n)$: \begin{claim}\label{c:Rcomp} For all $n$, \[ \mathtt{LR}_n(h) \le \Err(h)+\eta_n + 2e(n)\,. \] \end{claim} \begin{proof} \begin{align*} \mathtt{LR}_n(h) &= \mathtt{LG}_n(h) + \eta_n\left(\alpha - \mathtt{LG}_n(h) \right) \tag{Equation~\eqref{e:ddefnoise2}}\\ &\leq \mathtt{LG}_n(h) + \eta_n \tag{$\mathtt{LG}_n(h)\le \mathtt{LG}_0(h)=\alpha \le1$}\\ &\leq \Err(h)+\eta_n + 2e(n). \tag{Equation~\eqref{eq:ERMcomp}} \end{align*} \end{proof} \subsection{Update Probability} We first provide upper and lower bounds on the probability of making an update. \begin{lemma}\label{le:prob} Let $u_n=\epsilon_n-2e(n) = \sqrt{\ln(64n)/n}$.
(Recall the definition of $\epsilon_n$ and $\eta_n$ in Proposition~\ref{prop:base-class}.) If $\mathtt{LR}_n(h_1)>\mathtt{LR}_n(h_0)$ then \[ \Pr_{S\sim D^n}[U(h_0,h_1,S)=h_1] \le 4\exp\bigl({-nu_n^2/2}\bigr) = \frac{1}{2\sqrt{n}} = \eta_n\,, \] and if $\mathtt{LR}_n(h_1)<\mathtt{LR}_n(h_0)-2\epsilon_n$ then \[ \Pr_{S\sim D^n}[U(h_0,h_1,S)=h_1] \ge 1-4\exp\bigl({-nu_n^2/2}\bigr)\,. \] \end{lemma} \begin{proof} To simplify the calculations below, for a hypothesis $h$ let \begin{align} \ErrS(h)&:=\min_{f\in B_{h}}\EmpS(f)\label{eq:lstars}\\ \Errs(h)&:=\min_{f\in B_{h}}\Err(f)\label{eq:lstard} \end{align} We first connect the performance of the regularized versions of the hypotheses $h_0$ and $h_1$ to the optimal errors $\Errs(h_0)$ and $\Errs(h_1)$ over the corresponding base classes. From Equation~\eqref{e:ddefnoise} we obtain \begin{equation}\label{e:rg} \mathtt{LR}_n(h_1)-\mathtt{LR}_n(h_0) = (1-\eta_n)(\mathtt{LG}_n(h_1)-\mathtt{LG}_n(h_0))\,. \end{equation} Recall from the pseudo-code of the update algorithm $U$ that the probability of update, which we denote $p_n$, is given by \[ p_n:=\Pr_{S\sim D^n}[U(h_0,h_1,S)=h_1]=\Pr_{S\sim D^n}\Bigl[\ErrS(h_0)-\ErrS(h_1) >\epsilon_n \Bigr].
\] Hence we have the following implications: \begin{align} \mathtt{LR}_n(h_1)>\mathtt{LR}_n(h_0) &\Rightarrow \mathtt{LG}_n(h_1) > \mathtt{LG}_n(h_0) \tag{by Equation~\eqref{e:rg}}\\ &\Rightarrow \Errs(h_1) + 2e(n) > \Errs(h_0) \tag{by Equation~\eqref{eq:ERMcomp}}\\ &\Rightarrow p_n \le \Pr\left[\lvert\ErrS(h_0)-\Errs(h_0)\rvert+\lvert\ErrS(h_1)-\Errs(h_1)\rvert>\epsilon_n - 2e(n)\right]\notag \end{align} and similarly \begin{align} \mathtt{LR}_n(h_0)>\mathtt{LR}_n(h_1) + 2\epsilon_n &\Rightarrow \mathtt{LG}_n(h_0) > \mathtt{LG}_n(h_1) + 2\epsilon_n\tag{by Equation~\eqref{e:rg} and $\eta_n<1$}\\ &\Rightarrow \Errs(h_0) + 2e(n) > \Errs(h_1) + 2\epsilon_n \tag{by Equation~\eqref{eq:ERMcomp}}\\ &\Rightarrow 1-p_n \le \Pr\left[\lvert\ErrS(h_0)-\Errs(h_0)\rvert+\lvert\ErrS(h_1)-\Errs(h_1)\rvert>\epsilon_n - 2e(n)\right]\notag \end{align} Since $\epsilon_n-2e(n)=u_n$, in both cases it remains to upper bound \[ \Pr\left[\lvert\ErrS(h_0)-\Errs(h_0)\rvert+\lvert\ErrS(h_1)-\Errs(h_1)\rvert> u_n\right]\,, \] which is at most \[ \Pr\left[\lvert\ErrS(h_0)-\Errs(h_0)\rvert>u_n/2 \right] +\Pr\left[\lvert\ErrS(h_1)-\Errs(h_1)\rvert>u_n/2 \right]. \] We can apply McDiarmid's inequality~\citep*{Mcdiarmid} to both terms. Recall that (a special case of) McDiarmid's inequality asserts that $\Pr[\lvert Z-\E Z\rvert \geq \epsilon]\leq2\exp(-2n\epsilon^2)$ for every random function $Z=Z(V_1,\ldots, V_n)$, where the $V_i$'s are independent, and such that $Z$ is stable in the sense that $\lvert Z(\vec v') - Z(\vec v'')\rvert \leq 1/n$ for every pair of vectors $\vec v', \vec v''$ whose Hamming distance is $1$. Here it is applied to $Z(S)=\ErrS(h)=\min_{f\in B_{h}}\EmpS(f)$. We thus get the following upper bound: \[ 2\cdot 2\exp\bigl({-2n(u_n/2)^2}\bigr) = 4\exp\bigl({-nu_n^2/2}\bigr)\,. \] \end{proof} \subsection{Condition (C1)} \begin{lemma}\label{lem:c1parameters} Under the assumptions of Proposition~\ref{prop:base-class}, Condition (C1) is satisfied with $N(n)=4n$.
\end{lemma} \begin{proof} Let $n\in\mathbb{N}$ and $N=N(n)=4n$. Set $p_N:=\Pr_{S\sim D^N}[U(h_0,h_1,S)=h_1]$. The left-hand side of (C1) can then be written as \begin{align} \E_{S\sim D^N}\mathtt{LR}_N(U(h_0,h_1,S)) &= p_N\mathtt{LR}_N(h_1)+(1-p_N)\mathtt{LR}_N(h_0). \label{eq:cnn} \end{align} Assume first that $\mathtt{LR}_N(h_1)\le \mathtt{LR}_n(h_0)$. In this case, since $N>n$, we have $\mathtt{LR}_N(h_0)\le \mathtt{LR}_n(h_0)$. Thus, Equation~\eqref{eq:cnn} shows that $\E_{S\sim D^N}\mathtt{LR}_N(U(h_0,h_1,S))$ is a convex combination of two terms, both $\leq \mathtt{LR}_n(h_0)$; hence $\E_{S\sim D^N}\mathtt{LR}_N(U(h_0,h_1,S))\le \mathtt{LR}_n(h_0)$. Thus, assume that $\mathtt{LR}_N(h_1)>\mathtt{LR}_n(h_0)$. By Equation~\eqref{eq:cnn}, \[ \E_{S\sim D^N}\Bigl[\mathtt{LR}_N(U(h_0,h_1,S))-\mathtt{LR}_n(h_0)\Bigr]=p_N\left(\mathtt{LR}_N(h_1)-\mathtt{LR}_N(h_0)\right)+\mathtt{LR}_N(h_0)-\mathtt{LR}_n(h_0)\,. \] Therefore, it suffices to show that in this case \[ p_N\bigl(\mathtt{LR}_N(h_1)-\mathtt{LR}_N(h_0)\bigr)\le\mathtt{LR}_n(h_0)-\mathtt{LR}_N(h_0). \] Denoting $\eta'=\eta_n-\eta_N\ge 0$, we have, by Claim \ref{c:rmonn}: \begin{equation} \mathtt{LR}_N(h_0)-\mathtt{LR}_n(h_0) \le -\eta'\bigl(\alpha-\mathtt{LG}_N(h_0)\bigr). \end{equation} Further, because $\mathtt{LG}_N(\cdot)\leq \alpha$, \[ \mathtt{LR}_N(h_1)-\mathtt{LR}_N(h_0) = (1-\eta_N)\left(\mathtt{LG}_N(h_1)-\mathtt{LG}_N(h_0)\right) \le \alpha-\mathtt{LG}_N(h_0). \] The last two inequalities yield: \begin{align*} p_N\bigl(\mathtt{LR}_N(h_1)-\mathtt{LR}_N(h_0)\bigr)&\le p_N\Bigl(\alpha-\mathtt{LG}_N(h_0)\Bigr)\\ \eta'\left(\alpha-\mathtt{LG}_N(h_0)\right) &\le \mathtt{LR}_n(h_0)-\mathtt{LR}_N(h_0). \end{align*} Hence, Condition (C1) is satisfied provided that $p_N \le \eta'$. By Lemma~\ref{le:prob}, we have \[ p_N\le 2\cdot 2\exp\bigl({-2N(u_N/2)^2}\bigr) = 4\exp\bigl({-Nu_N^2/2}\bigr) = 4\exp\Bigl({-\frac{1}{2}\ln (64 N)}\Bigr) = \frac{1}{2\sqrt{N}}\,.
\] Now we can verify that for $N=4n$, \[\eta'=\frac{1}{2\sqrt{n}}-\frac{1}{2\sqrt{N}}= \frac{1}{2\sqrt{N}},\] and hence \[ p_N\le \eta', \] which concludes the proof. \end{proof} \subsection{Condition (C2)} \begin{lemma}\label{lem:parameters} Under the assumptions of Proposition~\ref{prop:base-class}, Condition (C2) is satisfied with \[ c(n)=2\eta_n+3\epsilon_n\,. \] \end{lemma} \begin{proof} Let $q_n:=\Pr_{S\sim D^n}[U(h_0,h_1,S)=h_0]$ be the probability of not switching to $h_1$. Thus, \begin{align*} \E_{S\sim D^n}\mathtt{LR}_n(U(h_0,h_1,S)) &= q_n\mathtt{LR}_n(h_0) + (1-q_n)\mathtt{LR}_n(h_1). \end{align*} Therefore, if $\mathtt{LR}_n(h_0)\leq \mathtt{LR}_n(h_1) + 2\epsilon_n$ then: \begin{align*} \E_{S\sim D^n}\mathtt{LR}_n(U(h_0,h_1,S))&\leq \mathtt{LR}_n(h_1) + 2\epsilon_n\\ &\leq \Err(h_1) + \eta_n + 2e(n) + 2\epsilon_n \tag{by Claim~\ref{c:Rcomp}} \\ &\leq \Err(h_1) + 2\eta_n + 3\epsilon_n \tag{$2e(n)\leq \epsilon_n$}\,. \end{align*} Thus, the conclusion holds in this case. Therefore, assume that $\mathtt{LR}_n(h_0)> \mathtt{LR}_n(h_1) + 2\epsilon_n$. In this case, \begin{align*} \E_{S\sim D^n}\mathtt{LR}_n(U(h_0,h_1,S)) &= q_n\mathtt{LR}_n(h_0) + (1-q_n)\mathtt{LR}_n(h_1)\\ &\leq q_n + \mathtt{LR}_n(h_1) \tag{$q_n,\mathtt{LR}_n(h_0)\in [0,1]$}\\ &\leq \mathtt{LR}_n(h_1) + \eta_n \tag{by Lemma~\ref{le:prob}}\\ &\leq \Err(h_1) + 2\eta_n + 2e(n)\tag{by Claim~\ref{c:Rcomp}} \\ &\leq \Err(h_1) + 2\eta_n + \epsilon_n \tag{$2e(n)\leq \epsilon_n$}\,. \end{align*} \end{proof} \section{Binary Classification}\label{sec:binary} In this section we prove that in the setting of binary classification, one can associate with every hypothesis $h$ a base class $B_h$ such that Definition~\ref{def:base-class} is satisfied. \begin{proposition}[Successful Base-Class: Binary Classification] Let the label-space $Y$ be $Y=\{0,1\}$. For each $h:X\to Y$ let $B_h=\{h, 1-h\}$.
Then, the mapping $h\mapsto B_h$ is successful with uniform convergence rate $e(n) = 1/\sqrt{n}$ and with $\alpha=1/2$. \end{proposition} \begin{proof} That $e(n)=1/\sqrt{n}$ follows by an elementary probabilistic argument (essentially bounding the variance of a binomial random variable): \begin{align} e(n) &= \E_{S\sim D^n}\max\left(\left|\EmpS(h)-\Err(h)\right|, \left|\EmpS(1-h)-\Err(1-h)\right|\right)\notag\\ &= \E_{S\sim D^n}\left|\EmpS(h)-\Err(h)\right|\notag\\ &\le \sqrt{\E_{S\sim D^n}\left(\EmpS(h)-\Err(h)\right)^2}\tag{By Jensen's inequality}\\ &= \sqrt{\mathtt{Var}(\EmpS(h))}\notag\\ &\le \frac{1}{\sqrt{n}}.\notag \end{align} Also, showing that $\alpha=1/2$ is simple: indeed, for every distribution~$D$: \begin{align*} \mathtt{LG}_{0}(h) = \frac{1}{2}\Err(h) + \frac{1}{2}\Err(1-h) = \frac{1}{2}\Err(h) + \frac{1}{2}\bigl(1-\Err(h)\bigr) = \frac{1}{2}. \end{align*} Thus, it remains to prove the first item in Definition~\ref{def:base-class}: \begin{lemma}\label{c:gmon} For all $n,m$ such that $m\ge n$ we have \[ \forall h,\,\mathtt{LG}_m(h) \le \mathtt{LG}_n(h). \] \end{lemma} We note that \cite*{pestov2021universally} proved this statement when $m,n$ are odd. (See Lemma 3.1 in \citep*{pestov2021universally}.) We defer the proof to the next section, where we derive a more general result that applies to multiclass classification with an arbitrary number of labels $k$. (See Lemma~\ref{le:monmulti}.) \end{proof} \section{Multiclass Classification}\label{sec:multiclass} We consider the multiclass case with $k<\infty$ labels. A hypothesis is a map from $X$ to $[k]$. To define $B_h$ we introduce the cyclic permutations $s_0,\ldots,s_{k-1}$ of $[k]$ (where $s_i(j)=(j+i) \bmod k$) and let $B_h=\{ s_i\circ h : 0\le i\le k-1\}$. \begin{proposition}[Successful Base-Class: Multiclass Classification]\label{prop:multiclass} Let the label-space $Y$ be $Y=[k]$. For each $h:X\to Y$ let $B_h=\{ s_i\circ h : 0\le i\le k-1\}$.
Then, the mapping $h\mapsto B_h$ is successful with uniform convergence rate $e(n) = 36/\sqrt{n}$ and with $\alpha=\frac{k-1}{k}$. \end{proposition} \begin{proof} Let $h\in H$. We begin with the simplest part, namely that $\alpha = \frac{k-1}{k}$. Notice that the events \[{E}_i = \{(x,y) : s_i \circ h (x) = y\}\] are pairwise disjoint; in fact they form a partition of $X\times Y$, because for each $(x,y)$ there is a unique~$i$ such that $s_i\circ h (x) = y$. Therefore, for every distribution $D$ over $X\times Y$: \begin{align*} \mathtt{LG}_{0}(h) &= \frac{1}{k}\sum_{i=1}^k\Err( s_i\circ h)\\ &= \frac{1}{k}\sum_{i=1}^k\bigl(1- D({E}_i)\bigr)\\ &= \frac{1}{k}\bigl(k - \sum_{i=1}^k D({E}_i)\bigr)\\ &=\frac{k-1}{k}. \end{align*} To see that $e(n)=36/\sqrt{n}$, notice that the family of events $\mathcal{E} = \{E_i : i\leq k\}$ consists of pairwise disjoint events and therefore its VC dimension is $1$. By VC theory (see\footnote{Theorem 1.16 in \citep*{lugosi2002pattern} gives $\E\sup_{E\in\mathcal{E}} \left\lvert D_S(E)-D(E)\right\rvert\le \frac{24}{\sqrt{n}} \int_0^1 \sqrt{\ln (2N(\epsilon))} d\epsilon$, where $N(\epsilon)$ denotes the covering number of the family $\mathcal{E}$. Here, since the $E_i$'s are disjoint, $N(\epsilon)\le 1+1/\epsilon$, and we have $\int_0^1 \sqrt{\ln (2+2/\epsilon)} d\epsilon \approx 1.22 \le \frac{3}{2}$, so we get $\E\max_{i} \left\lvert D_S(E_i)-D(E_i)\right\rvert\le \frac{36}{\sqrt{n}}$.} e.g.,~\citep*{lugosi2002pattern}) \begin{align*} \frac{36}{\sqrt{n}}&\geq \E_{S\sim D^n}\max_i\bigl\lvert D_S(E_i) - D(E_i)\bigr\rvert\\ &= \E_{S\sim D^n}\max_i\bigl\lvert \bigl(1-D_S(E_i)\bigr) - \bigl(1-D(E_i)\bigr)\bigr\rvert\\ &= \E_{S\sim D^n}\max_i\bigl\lvert \EmpS(s_i\circ h) - \Err(s_i\circ h )\bigr\rvert. \end{align*} Above, $D_S$ denotes the empirical distribution induced by the sample $S=\{(x_i,y_i)\}_{i=1}^n$ (i.e., $D_S(E) = \frac{1}{n}\sum_{i=1}^{n}1[(x_i,y_i)\in E]$). Thus, it remains to prove the first item in Definition~\ref{def:base-class}: \begin{lemma}\label{le:monmulti} For all $n,m$ such that $m\ge n$ we have \[ \forall h,\,\mathtt{LG}_m(h) \le \mathtt{LG}_n(h). \] \end{lemma} \begin{proof} By induction, it suffices to consider the case of $m=n+1$. We first reformulate this problem in simpler terms. Recall that any example $z=(x,y)$ is classified correctly by exactly one of the $h_j$'s in $B_h$. Thus, partition the domain into sets $E_1,\ldots,E_k$ (with $k=\lvert B_h\rvert$) such that $z\in E_j\Leftrightarrow h_j(x)=y$. Thus, we have $\Pr(E_j)=1-\Err(h_j)$. To simplify the expressions below we denote \[ p_i=1-\Err(h_i),\,\, q_i=\Err(h_i)\,. \] Let $S=\{z_i\}_{i=1}^n\sim D^n$ be an i.i.d.\ sample and let $i\leq n$. Define $X_i$ to be the random variable which is equal to the unique index $j$ such that $z_i\in E_j$. (I.e., $h_j$ correctly classifies $z_i$.) Notice that the variables $(X_1,\ldots,X_n)$ are i.i.d.\ with values in $[k]$ and distribution given by $\Pr(X_i=j)=p_j$ for all $i\leq n,j\leq k$. We can then express $\mathtt{LG}_n(h)$ as the expectation over the sample $L=(X_1,\ldots,X_n)$ of the quantity \[ f(L):=\frac{1}{\lvert I \rvert}\sum_{j\in I}q_j\,, \] where $I=I(L)$ is the set of indices $j$ such that $\sum_{i=1}^n 1[X_i=j]$ is largest. (Equivalently, such that $h_j$ is an empirical risk minimizer in $B_h$ with respect to the sample $S$.) Let $L$ be a sample of $n+1$ such variables $(X_1,\ldots,X_{n+1})$ (corresponding to an input sample $S\sim D^{n+1}$), and let $L^{-i}$ denote the sample $L$ without its $i$-th element. By symmetry and by linearity of expectation, we have \[ \mathtt{LG}_{n+1}(h)-\mathtt{LG}_n(h) = \frac{1}{n+1}\sum_{i=1}^{n+1}\E_{L} \left[f(L)-f(L^{-i})\right]. \] We first study the above quantity when conditioned on $\lvert I\rvert>1$ (i.e., there are at least $2$ empirical risk minimizers in $B_h$).
Notice that when $\lvert I\rvert > 1$, every $i\leq n+1$ satisfies $I_{i}\subseteq I$, where $I_{i}=I(L^{-i})$. Also notice that $I_{i}=I$ if and only if $X_i\notin I$. Thus, \begin{align} &\sum_{i=1}^{n+1}\E_{L} \left[\left.f(L)-f(L^{-i}) \right| |I|>1\right]\notag\\ & = \sum_{i=1}^{n+1}\E_{L} \left[\left. 1[X_i\in I]\cdot \left(\frac{1}{|I|}\sum_{j\in I} q_j - \frac{1}{|I|-1}\sum_{j\in I,j\neq X_i} q_j \right)\right| |I|>1\right] \tag{if $X_i\notin I$, $f(L)=f(L^{-i})$}\\ & = \E_{L} \left[\left. \sum_{i=1}^{n+1}1[X_i\in I]\cdot \left(\frac{1}{|I|}\sum_{j\in I} q_j - \frac{1}{|I|-1}\sum_{j\in I,j\neq X_i} q_j \right)\right| |I|>1\right] \notag\\ & = \E_{L} \left[\left. \sum_{k\in I}\left(\sum_{i=1}^{n+1} 1[X_i=k]\right)\cdot \left(\frac{1}{|I|}\sum_{j\in I} q_j - \frac{1}{|I|-1}\sum_{j\in I,j\neq k} q_j \right)\right| |I|>1\right] \notag \end{align} Since all elements of $I$ have the same number of successes, the sum $\sum_{i=1}^{n+1} 1[X_i=k]$ is the same for all $k\in I$, and since \[ \frac{1}{|I|}\sum_{k\in I}\sum_{j\in I}q_j = \sum_{j\in I} q_j = \frac{1}{|I|-1}\sum_{k\in I}\sum_{j\in I,j\neq k} q_j \] this shows that \begin{equation}\label{eq:firstpart} \sum_{i=1}^{n+1}\E_{L} \left[\left.f(L)-f(L^{-i}) \right| |I|>1\right]=0\,. \end{equation} Now let us consider the case $|I|=1$. Let $J\supseteq I$ denote the set of almost minimizers (i.e., hypotheses which are either optimal or one mistake away from being optimal). We further condition on $J$ being some arbitrary fixed set $J_0$: \begin{align} &\E_{L} \left[\left.f(L)-f(L^{-(n+1)}) \right| |I|=1, J=J_0\right]\notag\\ &=\Pr\bigl[I= \{X_{n+1}\} \big\vert \lvert I\rvert = 1, J=J_0\bigr]\cdot \E_{L} \left[\left.f(L)-f(L^{-(n+1)}) \right| I=\{X_{n+1}\}, J=J_0\right]\tag{if $X_{n+1}\notin I$ then $f(L)-f(L^{-(n+1)})=0$}.
\end{align} Therefore, since $\Pr\bigl[I= \{X_{n+1}\}\big\vert \lvert I\rvert = 1, J=J_0\bigr]>0$, it is enough to consider \begin{align} \E_{L} \left[\left.f(L)-f(L^{-(n+1)}) \right| I=\{X_{n+1}\}, J=J_0\right] &= \sum_{i\in J_0} \Pr(X_{n+1}=i) \E_{L} \left[\left.f(L)-f(L^{-(n+1)}) \right| I=\{i\}, J=J_0\right]\notag\\ &= \sum_{i\in J_0} \Pr(X_{n+1}=i) \E_{L} \left[\left. q_i - \frac{1}{|J|}\sum_{j\in J}q_j \right| I=\{i\}, J=J_0\right]\notag\\ &= \sum_{i\in J_0} p_iq_i - \frac{1}{|J_0|}\sum_{i,j\in J_0}p_iq_j.\notag \end{align} To see that the above is non-positive we use Chebyshev's sum inequality, which asserts that if $a_1\leq a_2\leq\ldots \leq a_n$ and $b_1\geq b_2\geq\ldots\geq b_n$ then $\frac{1}{n}\sum a_i\cdot \frac{1}{n}\sum b_i \geq \frac{1}{n}\sum a_ib_i$~\citep*{hardy1988inequalities}. Thus, since $p_i=1-q_i$, this inequality implies that $\sum_{i\in J_0} p_iq_i \le \frac{1}{|J_0|}\sum_{i,j\in J_0}p_iq_j$, and so we get that the difference is non-positive. Hence, for every choice of~$J_0$: \[ \E_{L} \left[\left.f(L)-f(L^{-(n+1)}) \right| |I|=1, J=J_0\right] \le 0\,, \] which together with Equation~\eqref{eq:firstpart} allows us to conclude the proof of the lemma. \end{proof} This concludes the proof of Proposition~\ref{prop:multiclass}. \end{proof} \section{Wrapping Up}\label{sec:wrap} \begin{proof}[Proof of Theorem~\ref{t:main}.] Let $A$ be any learning algorithm. Proposition~\ref{prop:multiclass}, Proposition~\ref{prop:base-class}, and Proposition~\ref{c:final} imply the existence of a monotone algorithm $M$ such that for all $m \geq 2\cdot b(1) + b(0)$, \begin{equation}\label{eq:final} \E_{S\sim D^m}\Err(M(S)) \le \E_{S\sim D^m}\Err(A(S_{T-2})) + c(b(T-1)), \end{equation} where \begin{enumerate} \item $b(x)=4^x$, \item $T=T(m)$ is the maximal integer such that $b(T-1) + \sum_{t=0}^{T-1} b(t)\le m$, and \item $S_{T-2}$ is an i.i.d.\ sample from the source distribution $D$ of size $\sum_{t=0}^{T-2}b(t)$.
\item $c(x) = 2\cdot\frac{1}{2\sqrt{x}} + 3\Bigl(\sqrt{{\ln(64 x)}/{x}} + 2\cdot\frac{36}{\sqrt{x}}\Bigr) = O\Bigl(\sqrt{\frac{\log x}{x}}\Bigr)$. \end{enumerate} Since $M$ is monotone, it remains to prove that $M$'s performance is competitive with that of $A$. The case of input sample-size $m< 2b(1) + b(0)=9$ is trivial. So, assume $m\geq 9$ and hence Equation~\eqref{eq:final} holds. By Items 1--4 above it suffices to show that $\lvert S_{T-2}\rvert \geq (m/30)-1$ and that $b(T-1)\geq m/10$. We proceed by showing that $T=T(m)=\Theta(\log m)$. Recall that $T$ is the maximal positive integer such that \[m \geq 4^{T-1} + \sum_{t=0}^{T-1}4^t = 2\cdot 4^{T-1} + \frac{4^{T-1}-1}{4-1} = \frac{7}{3}\cdot 4^{T-1}-\frac{1}{3}.\] Thus, \[T =1 + \Bigl\lfloor \log_4 \Bigl( \frac{3m+1}{7}\Bigr)\Bigr\rfloor, \] and $b(T-1)$, $\lvert S_{T-2}\rvert$ satisfy: \[b(T-1) = 4^{T-1} = 4^{\lfloor \log_4 ( \frac{3m+1}{7})\rfloor} \geq \frac{1}{4}\cdot\frac{3m+1}{7}\geq \frac{m}{10}.\] \begin{align*} \lvert S_{T-2}\rvert &= \sum_{t=0}^{T-2}4^t\\ &= \frac{4^{T-1}-1}{3}\\ &\geq \frac{\frac{1}{4}\cdot\frac{3m+1}{7} - 1}{3}\\ &=\frac{m}{28} - \frac{9}{28} \geq \frac{m}{30} - 1. \end{align*} \end{proof} \section{Open Questions and Future Research} We conclude this manuscript with some suggestions of open problems for future research. \paragraph{Other Loss Functions.} While the abstract framework developed in Section~\ref{sec:general} extends to other (bounded) loss functions, the construction of the base classes $B_h$ is tailored to the zero/one loss. It will be interesting to explore to which loss functions one can extend Theorem~\ref{t:main}. \paragraph{Can Monotone Learning Rules Achieve Optimal Rates?} It will be interesting to explore whether the bound on the rate in Theorem~\ref{t:main} can be strengthened to retain optimal learning rates.
For example, for PAC learnable classes $\mathcal{H}\subseteq\{0,1\}^X$, the optimal learning rate in the agnostic setting scales like $\sqrt{d/m}$, and in the realizable setting like $d/m$, where $d$ is the VC dimension and $m$ is the input-sample size. Can these optimal rates be achieved by a monotone learning rule? Note that Theorem~\ref{t:main} is off by a $\log m$ factor. Another interesting setting to explore this question is the model of \emph{universal} learning, in which one focuses on distribution-dependent rates~\citep*{Bousquet21Universal}. In contrast with the distribution-free nature of PAC learning, some classes can be learned exponentially fast, at a rate which scales like $\exp(-n)$~\citep*{schuurmans:97,Bousquet21Universal}. Can such classes be learned monotonically at this fast rate? \paragraph{Monotone Empirical Risk Minimization.} Which classes admit a monotone empirical risk minimizer (ERM)? One of the key technical steps in our proof was to show that the class $B_h$ admits a monotone ERM. Our proof exploited the symmetry of $B_h$ (specifically, that each example is classified correctly by exactly one hypothesis in $B_h$). It will be interesting to determine which other classes admit monotone ERMs. In fact, as far as we know it is even open whether \emph{every} (learnable) class admits a monotone ERM. \section{Acknowledgements} We thank G\'{a}bor Lugosi for an insightful correspondence that helped to materialize the ideas leading to this work. \bibliographystyle{plainnat}
https://arxiv.org/abs/1103.1873
Characterizing finitary functions over non-archimedean RCFs via a topological definition of OVF-integrality
When $R$ is a non-archimedean real closed field we say that a function $f\in R(\bar{X})$ is finitary at a point $\bar{b}\in R^n$ if on some neighborhood of $\bar{b}$ the defined values of $f$ are in the finite part of $R$. In this note we give a characterization of rational functions which are finitary on a set defined by positivity and finiteness conditions. The main novel ingredient is a proof that OVF-integrality has a natural topological definition, which allows us to apply a known Ganzstellensatz for the relevant valuation. We also give some information about the Kochen geometry associated with OVF-integrality.
\subsection{Topological statement of OVF-integrality for RCVFs} Recall that a {\em real closed valued field} (or RCVF) is an OVF which is real closed. The theory RCVF is the model companion of the theory OVF. In this section we give a topological property which is equivalent to OVF-integrality over an RCVF. \begin{proposition} \label{topol} Let $(K,v,\le_K)$ be an RCVF, let $f\in K({\bar X})$, and let ${\bar b}\in K^n$. Then $f$ is OVF-integral at ${\bar b}$ if and only if there is some neighborhood $U$ of ${\bar b}$ such that for ${\bar x}\in U$, if $f({\bar x})$ is defined then it is in $O_K$. \end{proposition} \begin{proof} First assume that $f$ is not OVF-integral at ${\bar b}$, i.e.\ there exists some OVF-valuation $\tilde{v}$ near ${\bar b}$ such that $\tilde{v}(f) < 0$. Let $\gamma\in\Gamma_K$, and denote the $\gamma$-neighborhood of ${\bar b}$ by $U_\gamma({\bar b}) = \{{\bar x} \in K^n\ |\ \bigwedge_{i=1}^n v(x_i-b_i) > \gamma\}$. We need to find some ${\bar x} \in U_\gamma({\bar b})$ such that $f({\bar x}) \in K\setminus O_K$ (and in particular is defined). However, since $\tilde{v}$ is an OVF-valuation there is some order $\le_L$ on $L=K({\bar X})$ such that $(L,\tilde{v},\le_L)$ is an OVF, and the tuple ${\bar X}\in L^n$ satisfies $\bigwedge_{i=1}^n \big( v(X_i-b_i) > \gamma \big)\ \wedge\ v(f({\bar X})) < 0\ \wedge\ q({\bar X}) \ne 0$ (where $q$ is the denominator of $f$). This is a first-order formula over $(K,v,\le_K)$, which is an existentially closed OVF by virtue of being an RCVF (see for example~\cite{LY}, Section 3). Hence there is some ${\bar x}\in K^n$ satisfying the same formula, as required. For the other (and harder) direction, assume that for every $\gamma\in\Gamma_K$ there is some ${\bar x}_\gamma\in U_\gamma({\bar b})$ such that $f({\bar x}_\gamma)\in K\setminus O_K$.
Now let $P=P({\bar X})$ be the partial type over the valued field $(K,v)$ which says that the tuple ${\bar X}$: (i) is contained in $U_\gamma({\bar b})$ for every $\gamma\in \Gamma_K$; (ii) satisfies $v(f({\bar X}))<0$; (iii) is transcendental over the field $K$; and (iv) satisfies $v(g({\bar X}))\ge 0$ for every $g\in \mathscr{I}_{ord} = \{\frac{1}{1+r} : r({\bar X})\in K({\bar X}) \mbox{ is a sum of squares} \}$ (see \cite{HY}, the discussion following Lemma 4.1). We may assume without loss of generality that ${\bar x}_\gamma$ is transcendental over $K$, by continuity of $f$ (which is defined at ${\bar x}_\gamma$). It then clearly follows that $\frac{1}{1+r({\bar x}_\gamma)}\in [0,1] \subseteq O_K$. Therefore $P$ is indeed a partial type, i.e.\ it is consistent, hence $P$ has a realization ${\bar X}$ in some valued field $(\hat{L},\tilde{v})$ extending $(K,v)$. Now the restriction of $\tilde{v}$ to $L=K({\bar X})$ is an OVF-valuation (since $\mathscr{I}_{ord}$ has the extension property -- see \cite{HY}) which is near ${\bar b}$, and such that $\tilde{v}(f({\bar X}))<0$. It follows that $f$ is not OVF-integral at ${\bar b}$, as required. $\ \diamond$ \end{proof} An interesting conclusion from Proposition~\ref{topol} is that a basic Kochen-closed set (i.e.\ the OVF-integrality locus $V^{int}(f)$ of a single function $f$) is open. \section{The relevant Ganzstellensatz} Let $(K,v)$ be any valued field, let $L$ be a field extension of $K$, and assume $A\subseteq L$ is an $O_K$-algebra such that $A \cap K = O_K$. The set $T=\{1+ma\ :\ m\in {\cal{M}}_K, a\in A\}$ is multiplicative. The \emph{integral radical} of $A$ in $L$ is the integral closure (in $L$) of the localization $A_T$, and is denoted by $\sqrt[int]{A}$ (see \cite{HY}, Section 2).
Let ${\bar p}=(p_1,\ldots,p_m)$ be polynomials from $K[{\bar X}]$, let ${\bar g}=(g_1,\ldots,g_l)$ be rational functions from $K({\bar X})$, and define: \[S_{{\bar p},{\bar g}} = \left\{{\bar b} \in K^n\ |\ p_1({\bar b}),\ldots,p_m({\bar b})>0\ \wedge\ v(g_1({\bar b})),\ldots,v(g_l({\bar b}))\ge 0\right\} \] The following result by Lavi and the author (see \cite{LY}, Theorem 7.4) gives a characterization of rational functions which are OVF-integral on $S_{{\bar p},{\bar g}}$: first, let $Cone({\bar p})$ denote the positive cone generated by the polynomials $\{p_i\ |\ i\}$. We may assume $S_{{\bar p},{\bar g}}\neq \emptyset$, hence $-1\notin Cone({\bar p})$, and we may define $I_{\bar p} = \{\frac{1}{1 + f}\ |\ f\in Cone({\bar p})\}$. Finally, let $A_{{\bar p},{\bar g}}$ be the $O_K$-algebra generated by $I_{\bar p} \cup \{g_1,\ldots,g_l\}$ in $L=K({\bar X})$. \begin{theorem} \cite{LY} \label{ganz} Assume that $(K,v,\le_K)$ is a real closed valued field. Then for any $h\in L$, $h$ is OVF-integral on $S_{{\bar p},{\bar g}}$ if and only if $h\in \sqrt[int] {A_{{\bar p},{\bar g}}}$. \end{theorem} \section{Finitary functions over non-archimedean RCFs} Let $R$ be a non-archimedean RCF, and let $v$ be the canonical valuation whose ring of integers equals the finite part of $R$. Let ${\bar p}=(p_1,\ldots,p_m)$ be polynomials in $R[{\bar x}]$, let ${\bar g}=(g_1,\ldots,g_l)$ be rational functions in $L=R({\bar x})$, and define \[T=T_{{\bar p},{\bar g}} = \left\{{\bar b} \in R^n\ |\ p_1({\bar b}),\ldots,p_m({\bar b})>0\ \wedge\ g_1,\ldots,g_l \mbox{ are finitary at }{\bar b}\right\} \] Define $I_{\bar p} = \{\frac{1}{1 + f}\ |\ f\in Cone({\bar p})\}$, and let $A_{{\bar p},{\bar g}}$ be the $O_R$-algebra generated by $I_{\bar p} \cup \{g_1,\ldots,g_l\}$ in $L$. The pieces are now in place for the following: \begin{theorem} \label{main} For any $h\in L$, $h$ is finitary on $T_{{\bar p},{\bar g}}$ if and only if $h\in \sqrt[int] {A_{{\bar p},{\bar g}}}$.
\end{theorem} \begin{proof} By Proposition~\ref{topol} being finitary at a point is equivalent to OVF-integrality, hence we may apply Theorem~\ref{ganz} to the set \[S=S_{{\bar p},{\bar g}} = \left\{{\bar b} \in R^n\ |\ p_1({\bar b}),\ldots,p_m({\bar b})>0\ \wedge\ v(g_1({\bar b})),\ldots,v(g_l({\bar b}))\ge 0\right\} \] and conclude that $h$ is finitary on $S$ if and only if $h\in \sqrt[int] {A_{{\bar p},{\bar g}}}$. Clearly $S \subseteq T$, hence it is now sufficient to show that if $h$ is finitary (or equivalently, OVF-integral) on $S$ then it has this property on $T$ as well. But every generator of $A_{{\bar p},{\bar g}}$ is OVF-integral on $T_{{\bar p},{\bar g}}$ (for elements of $I_{\bar p}$ by Proposition 4.7 of \cite{LY}, for the $g_i$ by definition), and the collection of functions which are OVF-integral on some set is closed under passing to the generated $O_R$-algebra and taking the integral radical. Therefore by using the above characterization of functions which are OVF-integral on $S$ we are done. $\ \diamond$ \end{proof} \begin{remark} \label{last} The set $T$ above is contained in the Kochen closure of $S$ (see Definition~\ref{kg}), however it need not equal this Kochen closure: consider for example $p(X)=X$, and note that for any RCVF $K$ the Kochen closure of $S_p=\{x\in K\ |\ x>0\}$ equals $\{x\in K\ |\ x\ge 0\}$. It is instructive to note here that for RCVFs the Kochen closure of any set $Q$ is contained in the usual closure of $Q$, however they need not be equal. For example the Kochen closure of $Q=\{(x,0)\in K^2\ |\ x>0\}$ does not contain the point $(0,0)$ (thanks to $f=\frac{Y}{X}$). The reason that the Kochen closure is sensitive to the ambient variety is that the latter determines the set of possible directions. 
More dramatically, although the Kochen closure of $\{(x,y)\in K^2\ |\ x\ne 0\}$ equals $K^2$, the Kochen closure of the half-plane $H=\{(x,y)\in K^2\ |\ x>0\}$ does not contain the point $(0,0)$, this time thanks to $f=\frac{X}{X+Y^2}$ for example. This last example gives a better appreciation of the set of possible `directions' that we have in mind. \end{remark} Note that the set $T$ defined in Theorem~\ref{main} is actually open, hence one can show more directly that being finitary on $T$ is equivalent to OVF-integrality on $T$: all one needs is the easier direction of Proposition~\ref{topol} and Remark~\ref{conserv} (as done in~\cite{L}, Corollary 3.8). A similar remark applies to the open set $S$, of course. However, there are sets $T'$ which are not open for which there is a Ganzstellensatz, and if one wishes to generalize Theorem~\ref{main} to such sets the full force of Proposition~\ref{topol} seems to be required. For example, in an unpublished work of Haskell and the author we prove a Ganzstellensatz for sets defined by equalities; therefore one can also obtain a Ganzstellensatz for sets defined by weak inequalities (a function is OVF-integral on the union $\{x\ |\ p(x)\ge 0\} = \{x\ |\ p(x) > 0\} \cup \{x\ |\ p(x)= 0\}$ exactly when it is in the intersection of the relevant integral radicals). If we wish to characterize functions which are finitary on the non-open set \[T' = \left\{{\bar b} \in R^n\ |\ p_1({\bar b}),\ldots,p_m({\bar b})\ge 0\ \wedge\ g_1,\ldots,g_l \mbox{ are finitary at }{\bar b}\right\} \] then we would need to produce suitable OVF-valuations near points on the boundary of $T'$, as done in the proof of Proposition~\ref{topol}.
https://arxiv.org/abs/1103.1873
Characterizing finitary functions over non-archimedean RCFs via a topological definition of OVF-integrality
When $R$ is a non-archimedean real closed field we say that a function $f\in R(\bar{X})$ is finitary at a point $\bar{b}\in R^n$ if on some neighborhood of $\bar{b}$ the defined values of $f$ are in the finite part of $R$. In this note we give a characterization of rational functions which are finitary on a set defined by positivity and finiteness conditions. The main novel ingredient is a proof that OVF-integrality has a natural topological definition, which allows us to apply a known Ganzstellensatz for the relevant valuation. We also give some information about the Kochen geometry associated with OVF-integrality.
https://arxiv.org/abs/2208.02147
Weighted composition operators from the Bloch space to weighted Banach spaces on bounded homogeneous domains
We study the bounded and the compact weighted composition operators from the Bloch space into the weighted Banach spaces of holomorphic functions on bounded homogeneous domains, with particular attention to the unit polydisk. For bounded homogeneous domains, we characterize the bounded weighted composition operators and determine the operator norm. In addition, we provide sufficient conditions for compactness. For the unit polydisk, we completely characterize the compact weighted composition operators, as well as provide computable estimates on the operator norm.
\section{Introduction} Let $X$ and $Y$ be Banach spaces of holomorphic functions on a domain $\Omega \subset \mathbb{C}^n$. For holomorphic functions $\psi:\Omega \to \mathbb{C}$ and $\varphi:\Omega\to\Omega$, the \textit{weighted composition operator} from $X$ to $Y$ with symbols $\psi$ and $\varphi$ is defined as $$\wcompop{\psi}{\varphi} f = \psi(f\circ\varphi),$$ for $f \in X$. The weighted composition operator is a common generalization of the \textit{multiplication operator} $M_\psi f = \psi f$ and the \textit{composition operator} $C_\varphi f = f\circ\varphi$, called the component operators. The study of weighted composition operators on the Bloch space of the unit disk $\mathbb{D}$ began with the work of Ohno and Zhao \cite{OhnoZhao:2001}, where the boundedness and compactness were characterized. In higher dimensions, these operators on the Bloch space have been studied on the unit polydisk by Chen, Stevi\'c, and Zhou \cite{ChenStevicZhao}, and on bounded homogeneous domains by the author and Colonna \cite{AllenColonna:2010}. The bounded and compact weighted composition operators from the Bloch space to $H^\infty$ were characterized by Ohno \cite{Ohno:2001} and Hosokawa, Izuchi, and Ohno \cite{HosokawaIzuchiOhno:2005} in the one-dimensional case, and by Li and Stevi\'c \cite{LiStevic:2007-II} in the case of the unit ball. In \cite{AllenColonna:2010}, the author and Colonna characterized the boundedness, determined the operator norm, and gave a sufficient condition for compactness in the case of a general bounded homogeneous domain in $\mathbb{C}^n$. Composition and multiplication operators on $\Hinfu{\mu}(\mathbb{D})$ were first studied by Bonet, Doma\'{n}ski, Lindstr\"{o}m and Taskinen \cite{BonetDomaniskiLindstromTaskinen:1998}, and then later by Bonet, Doma\'{n}ski and Lindstr\"{o}m \cite{BonetDomaniskiLindstrom:1999,BonetDomaniskiLindstrom:1999-II}.
The weighted composition operators on these spaces were studied by Contreras and Hern\'{a}ndez-D\'{i}az \cite{ContrerasHernandezDiaz:2000}, and by Galindo and Lindstr\"{o}m \cite{GalindoLindstrom:2010}. In \cite{Stevic:2008}, Stevi\'c determined the norm of the bounded weighted composition operators from the Bloch space to the weighted Banach space $\Hinfu{\mu}$ of the unit ball by means of the more general $\alpha$-Bloch spaces. Likewise, Yang characterized the bounded and compact weighted composition operators from the Bloch space to $\Hinfu{\mu}$ on the unit ball via the $\alpha$-Bloch space \cite{Yang:2009}. Zhu characterized the weighted composition operators from the Bloch space to $\Hinfu{\mu}$ on the unit ball by means of studying the $F(p,q,s)$ spaces, for which the Bloch space is a special case \cite{Zhu:2009}. These operators have not previously been studied for spaces of functions defined on the unit polydisk. A difficulty in the study of such operators on domains in several variables is the fact that the function theory of the unit ball $\mathbb{B}_n$ and that of the unit polydisk $\mathbb{D}^n$ are vastly different. In recent years, work has been done to generalize these two spaces in an effort to consolidate results. The bounded homogeneous domains are a natural generalization of both $\mathbb{B}_n$ and $\mathbb{D}^n$, and spaces of holomorphic functions on such domains have been studied. In this paper, we generalize the study of weighted composition operators from the Bloch space to the weighted Banach spaces from $\mathbb{B}_n$ and $\mathbb{D}^n$ to bounded homogeneous domains. This unifies the study of such operators, and allows for the study of these operators on spaces beyond $\mathbb{B}_n$ and $\mathbb{D}^n$, examples of which can be found in \cite{Vesentini:1967}. Currently the study of these operators in higher dimensions has taken place in the setting of the unit ball $\mathbb{B}_n$.
This paper introduces to the literature the study of these operators in the unit polydisk setting. As is normally the case, the techniques used are different from those used for the unit ball. In addition, the author and Colonna studied weighted composition operators from the Bloch space into $H^\infty$ on bounded homogeneous domains in \cite{AllenColonna:2010}. This paper extends that work to the weighted Banach spaces $\Hinfu{\mu}$. \subsection{Organization of the Paper} In Section \ref{Section:Preliminaries}, we define the Bloch space and weighted Banach spaces $\Hinfu{\mu}$ on a bounded homogeneous domain, as well as collect useful facts about Bloch functions on such domains. In Section \ref{Section:Boundedness}, we characterize the bounded weighted composition operators from $\mathcal{B}(D)$ into $\Hinfu{\mu}(D)$. In addition, we determine the operator norm of such operators. In Section \ref{Section:Compactness}, we develop sufficient conditions for the bounded weighted composition operators $W_{\psi,\varphi}:\mathcal{B}(D) \to \Hinfu{\mu}(D)$ to be compact. In Section \ref{Section:Polydisk}, we characterize the bounded and the compact weighted composition operators, as well as provide \qte{computable} estimates on the operator norm. Finally, in Section \ref{Section:Conclusions} we end with some concluding thoughts and open problems. \section{Preliminaries}\label{Section:Preliminaries} Let $D$ be a domain in $\mathbb{C}^n$. We denote by $H(D)$ the set of holomorphic functions from $D$ into $\mathbb{C}$, by $S(D)$ the set of holomorphic self-maps of $D$, and by $\mathrm{Aut}(D)$ the set of biholomorphic maps of $D$. The space $H^\infty(D)$ of bounded holomorphic functions on $D$ is a Banach algebra equipped with the norm $\supnorm{f} = \sup_{z \in D}\; \modu{f(z)}$. A domain $D$ is \textit{homogeneous} if $\mathrm{Aut}(D)$ acts transitively on $D$.
Every homogeneous domain is equipped with a canonical metric, called the \textit{Bergman metric}, invariant under the action of $\mathrm{Aut}(D)$ \cite{Helgason:1962}. For a continuous, strictly positive function $\mu:D \to \mathbb{R}_+$, called a \textit{weight}, the \textit{weighted Banach space} $\Hinfu{\mu}(D)$ is defined as $$\Hinfu{\mu}(D) = \left\{f \in H(D) : \sup_{z \in D}\; \mu(z)\modu{f(z)} < \infty\right\},$$ which is a Banach space under the norm $\munorm{f} = \sup_{z \in D}\; \mu(z)\modu{f(z)}$. Notice that if $\mu \equiv 1$ on $D$, then $\Hinfu{\mu}(D) = H^\infty(D)$. In \cite{Timoney:1980} and \cite{Timoney:1980-I}, Timoney defined and studied the Bloch functions on a bounded homogeneous domain. In this paper, we conform to his notation, as follows. Let $D$ be a bounded homogeneous domain. For $z \in D$ and $f \in H(D)$, define $$Q_f(z) = \sup_{u \in \mathbb{C}^n\setminus\{0\}}\; \frac{\modu{\nabla(f)(z)u}}{H_z(u,\conj{u})^{1/2}},$$ where $\nabla(f)(z)$ is the gradient of $f$ at $z$, for $u=(u_1,\dots,u_n)$, $$\nabla(f)(z)u = \sum_{k = 1}^n \frac{\partial f}{\partial z_k}(z)u_k,$$ and $H_z$ is the Bergman metric on $D$ at $z$. For the definition of the Bergman metric (also known as the Poincar\'{e} metric or distance), see \cite{Krantz:2000} Definition 1.4.14. The Bergman metric for the unit polydisk $\mathbb{D}^n$ is given by \begin{equation}\nonumber H_z(u,\conj{v}) = \sum_{j=1}^n \frac{u_j\conj{v_j}}{(1-\modu{z_j}^2)^2}, \end{equation} where $u,v \in \mathbb{C}^n$ and $z \in \mathbb{D}^n$. The \textit{Bloch space} $\mathcal{B}(D)$ on a bounded homogeneous domain $D$ is the set of all functions $f \in H(D)$ for which $$\beta_f = \sup_{z \in D}\; Q_f(z) < \infty.$$ Timoney proved that $\mathcal{B}(D)$ is a Banach space under the norm $$\blochnorm{f} = \modu{f(z_0)} + \beta_f,$$ where $z_0$ is some fixed point in $D$ \cite{Timoney:1980}. For convenience, we shall assume throughout that $0 \in D$ and choose $z_0 = 0$.
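As a concrete numerical aside (not part of the paper's development): for a linear function $f(w) = \sum_k c_k w_k$ on the polydisk, a weighted Cauchy-Schwarz argument gives the closed form $Q_f(z) = \big(\sum_k |c_k|^2(1-|z_k|^2)^2\big)^{1/2}$, which can be checked against random sampling of directions $u$. The sketch below implements both sides of this comparison; the choices of $f$ and $z$ are purely illustrative.

```python
import math
import random

def bergman_polydisk(z, u, v):
    """Bergman metric H_z(u, v-bar) of the unit polydisk at z."""
    return sum(uj * vj.conjugate() / (1 - abs(zj) ** 2) ** 2
               for zj, uj, vj in zip(z, u, v))

def Q_linear(c, z):
    """Closed form of Q_f(z) for the linear f(w) = sum_k c_k w_k,
    via weighted Cauchy-Schwarz (an illustration, not from the paper)."""
    return math.sqrt(sum(abs(ck) ** 2 * (1 - abs(zk) ** 2) ** 2
                         for ck, zk in zip(c, z)))

def Q_sampled(c, z, trials=20000, seed=0):
    """Monte Carlo lower bound for Q_f(z): sample directions u and take
    the largest ratio |grad f(z) u| / H_z(u, u-bar)^{1/2}."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(trials):
        u = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in c]
        grad = abs(sum(ck * uk for ck, uk in zip(c, u)))
        denom = math.sqrt(bergman_polydisk(z, u, u).real)
        best = max(best, grad / denom)
    return best
```

For $n = 1$ and $f(w) = w$, the closed form reduces to $Q_f(z) = 1 - |z|^2$, and every direction attains the supremum.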
The {\it $*$-little Bloch space of $D$} is the subspace of $\mathcal{B}(D)$ defined as $$\Bloch_{0^*}(D) = \left\{f \in \mathcal{B}(D) : \lim_{z \to \distbd{D}} Q_f(z) = 0\right\},$$ where $\distbd{D}$ is the distinguished boundary of $D$. In \cite{Timoney:1980}, Timoney also proved that $H^\infty(D)$ is a subspace of $\mathcal{B}(D)$ and for each $f \in H^\infty(D)$, $\blochnorm{f} \leq |f(0)|+ c\supnorm{f}$, where $c$ is a constant depending only on the domain $D$. In the rest of this section, we collect useful results on Bloch functions on bounded homogeneous domains. For a bounded homogeneous domain $D$, and $z \in D$, define $$\begin{aligned} \omega(z) &= \sup\;\{|f(z)| : f \in \mathcal{B}(D), f(0) = 0 \text{ and } \norm{f}_\mathcal{B} \leq 1\},\\ \omega_0(z) &= \sup\;\{|f(z)| : f \in \Bloch_{0^*}(D), f(0) = 0 \text{ and } \norm{f}_\mathcal{B} \leq 1\}. \end{aligned}$$ \begin{lemma}[Lemma 4.1 of \cite{AllenColonna:2010}]\label{omega inequality} Let $D$ be a bounded homogeneous domain in $\mathbb{C}^n$. For each $z \in D$, $\omega(z)$ and $\omega_0(z)$ are both finite. Moreover, $\omega_0(z) \leq \omega(z) \leq \rho(z,0)$, where $\rho(z,w)$ is the Poincar\'e distance between $z$ and $w$.\end{lemma} From Theorems 3.9 and 3.14 of \cite{Zhu:2004}, it follows that for $z \in \mathbb{B}_n$, \begin{equation}\label{omegaequalityball}\omega_0(z) = \omega(z) = \rho(z,0) = \frac{1}{2}\log\frac{1+\norm{z}}{1-\norm{z}}.\end{equation} It is not known if there are other bounded homogeneous domains for which equality holds. However, for the unit polydisk it was shown in \cite{AllenColonna:2010} that for $z \in \mathbb{D}^n$, \begin{equation}\label{rhopolydisk}\rho(z,0) \leq \frac{1}{2}\sum_{j=1}^n \log\frac{1+\modu{z_j}}{1-\modu{z_j}}.\end{equation} The quantities $\omega(z)$ and $\omega_0(z)$ play an important role in the theory of Bloch functions on bounded homogeneous domains, a role exercised through the use of the next lemma.
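Both the closed form (\ref{omegaequalityball}) and the polydisk bound (\ref{rhopolydisk}) are straightforward to evaluate; the following numerical sketch (illustrative only) implements them and confirms that for $n = 1$ the two expressions coincide.

```python
import math

def omega_ball(z):
    """omega(z) = rho(z, 0) on the unit ball B_n, via the closed form
    in (omegaequalityball)."""
    r = math.sqrt(sum(abs(zj) ** 2 for zj in z))
    return 0.5 * math.log((1 + r) / (1 - r))

def rho_polydisk_bound(z):
    """Upper bound (rhopolydisk) for rho(z, 0) on the unit polydisk D^n."""
    return 0.5 * sum(math.log((1 + abs(zj)) / (1 - abs(zj))) for zj in z)
```

For $n = 1$ both reduce to $\frac{1}{2}\log\frac{1+|z|}{1-|z|}$; for example both give $\frac{1}{2}\log 3$ at $z = 0.5$.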
\begin{lemma}[Lemma 4.2 of \cite{AllenColonna:2010}]\label{pointevaluationestimate} Let $D$ be a bounded homogeneous domain in $\mathbb{C}^n$ and let $f \in \mathcal{B}(D)$ (respectively, $f \in \Bloch_{0^*}(D)$). Then for all $z \in D$, we have $$\modu{f(z)} \leq \modu{f(0)} + \omega(z)\beta_f,$$ (respectively, $\modu{f(z)} \leq \modu{f(0)} + \omega_0(z)\beta_f)$. \end{lemma} \begin{lemma}[Theorem 3.1 of \cite{AllenColonna:2009}]\label{bergmancharacterization} Let $D$ be a bounded homogeneous domain in $\mathbb{C}^n$ and let $f:D \to \mathbb{C}$ be holomorphic. Then $f$ is Bloch if and only if $f$ is a Lipschitz map from $D$ under the Poincar\'e metric $\rho$ to the complex plane under the Euclidean metric. Furthermore $$\beta_f = \sup_{z \neq w}\; \frac{\modu{f(z)-f(w)}}{\rho(z,w)}.$$ \end{lemma} In particular, we have the following corollary. \begin{corollary}\label{bergmanestimate} Let $D$ be a bounded homogeneous domain in $\mathbb{C}^n$ and $f \in \mathcal{B}(D)$. Then for all $z,w \in D$, $$\modu{f(z)-f(w)} \leq \norm{f}_\mathcal{B}\rho(z,w).$$ \end{corollary} \begin{lemma}[Theorem 3.3 of \cite{AllenColonna:2009}]\label{convergenceintoBloch} Let $(f_n)$ be a sequence of Bloch functions on a bounded homogeneous domain $D$ in $\mathbb{C}^n$ which converges locally uniformly in $D$ to some holomorphic function $f$. If the sequence $(\beta_{f_n})$ is bounded, then $f$ is Bloch and $$\beta_f \leq \liminf_{n \to \infty} \beta_{f_n}.$$ That is, the function $f \mapsto \beta_f$ is lower semi-continuous on $\mathcal{B}$ under the topology of uniform convergence on compact subsets of $D$. \end{lemma} \section{Boundedness and Operator Norm}\label{Section:Boundedness} In this section, we characterize the bounded weighted composition operators, and determine the operator norm, from $\mathcal{B}(D)$ to $\Hinfu{\mu}(D)$ in terms of the following quantities.
For $\psi \in H(D)$ and $\varphi \in S(D)$, define $$\begin{aligned} \upsilon_\mu(\psi,\varphi) &= \sup_{z \in D}\; \mu(z)\modu{\psi(z)}\omega(\varphi(z)), \text{ and}\\ \upsilon_{0,\mu}(\psi,\varphi) &= \sup_{z \in D}\; \mu(z)\modu{\psi(z)}\omega_0(\varphi(z)). \end{aligned}$$ \begin{lemma}\label{upsilon inequality} Let $D$ be a bounded homogeneous domain in $\mathbb{C}^n$, $\psi \in H(D)$, and $\varphi \in S(D)$. If $\wcompop{\psi}{\varphi}:\mathcal{B}(D)\to\Hinfu{\mu}(D)$ is bounded, then $$\upsilon_{0,\mu}(\psi,\varphi) \leq \upsilon_\mu(\psi,\varphi) \leq \norm{\wcompop{\psi}{\varphi}}.$$ \end{lemma} \begin{proof} The first inequality is a corollary of Lemma \ref{omega inequality}. So it suffices to show that $\upsilon_\mu(\psi,\varphi) \leq \norm{\wcompop{\psi}{\varphi}}$. Let $f \in \mathcal{B}(D)$ with $\blochnorm{f} \leq 1$. For every $z \in D$, we have $$\norm{\wcompop{\psi}{\varphi}} \geq \munorm{\psi(f\circ \varphi)} \geq \mu(z)\modu{\psi(z)}\modu{f(\varphi(z))}.$$ Taking the supremum over all $f \in \mathcal{B}(D)$ with $\blochnorm{f} \leq 1$ and $f(0) = 0$, we obtain $$\mu(z)\modu{\psi(z)}\omega(\varphi(z)) \leq \norm{\wcompop{\psi}{\varphi}}.$$ Finally, taking the supremum over all $z \in D$, we have $\upsilon_\mu(\psi,\varphi) \leq \norm{\wcompop{\psi}{\varphi}}$, as desired. \end{proof} \begin{theorem}\label{boundednesstheorem} Let $D$ be a bounded homogeneous domain in $\mathbb{C}^n$, $\psi \in H(D)$, and $\varphi$ a holomorphic self-map of $D$. Then \begin{enumerate} \item[\listitem{a}] $\wcompop{\psi}{\varphi}:\mathcal{B}(D)\to\Hinfu{\mu}(D)$ is bounded if and only if $\psi \in \Hinfu{\mu}(D)$ and $\upsilon_\mu(\psi,\varphi)$ is finite.
Furthermore, if $\wcompop{\psi}{\varphi}$ is bounded, then $\norm{\wcompop{\psi}{\varphi}} = \max\;\left\{\munorm{\psi},\upsilon_\mu(\psi,\varphi)\right\}.$ \item[\listitem{b}] $\wcompop{\psi}{\varphi}:\Bloch_{0^*}(D)\to\Hinfu{\mu}(D)$ is bounded if and only if $\psi \in \Hinfu{\mu}(D)$ and $\upsilon_{0,\mu}(\psi,\varphi)$ is finite. Furthermore, if $\wcompop{\psi}{\varphi}$ is bounded, then $\norm{\wcompop{\psi}{\varphi}} = \max\;\left\{\munorm{\psi},\upsilon_{0,\mu}(\psi,\varphi)\right\}.$ \end{enumerate} \end{theorem} \begin{proof} We will prove (a), since the proof of (b) follows the same argument. First, assume $\wcompop{\psi}{\varphi}$ is bounded. Since the constant function $\mathbb{1} \in \mathcal{B}(D)$, we have $\psi = \wcompop{\psi}{\varphi} \mathbb{1} \in \Hinfu{\mu}(D)$. Also $\upsilon_\mu(\psi,\varphi)$ is finite by Lemma \ref{upsilon inequality}, which also implies \begin{equation}\label{norm lower bound}\max\left\{\munorm{\psi},\upsilon_\mu(\psi,\varphi)\right\} \leq \norm{\wcompop{\psi}{\varphi}}.\end{equation} Next, assume $\psi \in \Hinfu{\mu}(D)$ and $\upsilon_\mu(\psi,\varphi)$ is finite. By Lemma \ref{pointevaluationestimate}, for $f \in \mathcal{B}(D)$ with $\blochnorm{f} \leq 1$, and $z \in D$, $$\begin{aligned} \munorm{\wcompop{\psi}{\varphi} f} &= \sup_{z \in D}\; \mu(z)\modu{\psi(z)}\modu{f(\varphi(z))}\\ &\leq \sup_{z \in D}\; \mu(z)\modu{\psi(z)}\left(\modu{f(0)} + \omega(\varphi(z))\beta_f\right)\\ &\leq \munorm{\psi}\modu{f(0)} + \upsilon_\mu(\psi,\varphi)\beta_f\\ &\leq \munorm{\psi}\left(\blochnorm{f}-\beta_f\right) + \upsilon_\mu(\psi,\varphi)\beta_f\\ &\leq \munorm{\psi}\blochnorm{f} + \left(\upsilon_\mu(\psi,\varphi)-\munorm{\psi}\right)\beta_f\\ &\leq \max\left\{\munorm{\psi},\upsilon_\mu(\psi,\varphi)\right\}\blochnorm{f}. 
\end{aligned}$$ Thus, $\wcompop{\psi}{\varphi}$ is bounded, and taking the supremum over all $f \in \mathcal{B}(D)$ with $\blochnorm{f} \leq 1$, we obtain $$\norm{\wcompop{\psi}{\varphi}} \leq \max\left\{\munorm{\psi},\upsilon_\mu(\psi,\varphi)\right\},$$ as desired. \end{proof} \section{Compactness}\label{Section:Compactness} In this section, we characterize the compact weighted composition operators from $\mathcal{B}(D)$ into $\Hinfu{\mu}(D)$. First, we provide necessary and sufficient conditions in terms of the classical convergence of bounded sequences. Then, we provide sufficient conditions for the operator to be compact in terms of the ``little-oh" condition induced by the symbols. \begin{proposition}\label{compactnesscharacterization} Let $D$ be a bounded homogeneous domain in $\mathbb{C}^n$, $\psi \in H(D)$, and $\varphi \in S(D)$. Then $\wcompop{\psi}{\varphi}: \mathcal{B}(D) \to \Hinfu{\mu}(D)$ is compact if and only if for every bounded sequence $(f_k)$ in $\mathcal{B}(D)$ converging to 0 locally uniformly in $D$, the sequence $(\munorm{\psi(f_k\circ\varphi)})$ converges to 0 as $k \to \infty$. \end{proposition} \begin{proof} First, suppose $\wcompop{\psi}{\varphi}:\mathcal{B}(D)\to\Hinfu{\mu}(D)$ is compact. Let $(f_k)$ be a bounded sequence in $\mathcal{B}(D)$ which converges to 0 locally uniformly in $D$. By the compactness of $\wcompop{\psi}{\varphi}$, the sequence $(\psi(f_k\circ\varphi))$ contains a subsequence which converges in $\Hinfu{\mu}(D)$ to some function $f$. Since $(f_k)$ converges to 0 locally uniformly in $D$, for each $z \in D$ we have $\mu(z)\psi(z)f_k(\varphi(z)) \to 0$ as $k \to \infty$, so $f$ is identically 0. Applying this argument to an arbitrary subsequence of $(f_k)$ shows that $(\munorm{\psi(f_k\circ\varphi)})$ converges to 0 as $k\to \infty$. Conversely, suppose $(\munorm{\psi(f_k\circ\varphi)})$ converges to 0 as $k \to \infty$ for every bounded sequence $(f_k)$ in $\mathcal{B}(D)$ converging to 0 locally uniformly in $D$.
Let $(g_k)$ be a bounded sequence in $\mathcal{B}(D)$, and without loss of generality assume $\norm{g_k}_\mathcal{B} \leq 1$ for each $k \in \mathbb{N}$. To prove $\wcompop{\psi}{\varphi}$ is compact, it suffices to show there exists a subsequence $(g_{k_j})$ for which $(\psi(g_{k_j}\circ\varphi))$ converges in $\Hinfu{\mu}(D)$. Fix $z_0 \in D$, and without loss of generality assume $g_k(z_0) = 0$ for all $k \in \mathbb{N}$. By Corollary \ref{bergmanestimate}, $\modu{g_k(z)} \leq \rho(z,z_0)$ for all $z \in D$. Thus $(g_k)$ is uniformly bounded on every closed disk centered at $z_0$ in the Poincar\'e metric, and thus is uniformly bounded on compact subsets of $D$. By Montel's Theorem (see \cite{Scheidemann:2005}), there exists a subsequence $(g_{k_j})$ which converges locally uniformly to a function $g \in H(D)$. By Lemma \ref{convergenceintoBloch}, $g \in \mathcal{B}(D)$ with $\blochnorm{g} \leq 1$. Defining $h_{k_j} = g_{k_j} - g$, we see that $\blochnorm{h_{k_j}} \leq 2$, and thus $(h_{k_j})$ is a bounded sequence in $\mathcal{B}(D)$ converging to 0 locally uniformly in $D$. Thus $(\munorm{\psi(h_{k_j}\circ\varphi)})$ converges to 0 as $j \to \infty$. Therefore $(\psi(g_{k_j}\circ\varphi))$ converges to $\psi(g\circ\varphi)$ in $\Hinfu{\mu}(D)$. \end{proof} Although this is a characterization of the compact weighted composition operators from $\mathcal{B}(D)$ into $\Hinfu{\mu}(D)$, we wish to have such a characterization in terms of the symbols $\psi$ and $\varphi$. For a general bounded homogeneous domain, we can determine sufficient conditions for compactness. We are unable to obtain a complete characterization due to the presence of the $\omega(z)$ term. For the unit ball, we have a closed form for $\omega(z)$, and thus a complete characterization can be obtained. However, in general, this term cannot be removed, and we thus encounter a roadblock to complete characterizations of the weighted composition operators on bounded homogeneous domains.
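Before turning to sufficient conditions, we note that on the unit disk ($n = 1$), where $\omega(z) = \rho(z,0) = \frac{1}{2}\log\frac{1+|z|}{1-|z|}$ by (\ref{omegaequalityball}), the norm formula of Theorem \ref{boundednesstheorem} is directly computable. The sketch below uses the illustrative radial choices $\mu(z) = 1-|z|^2$, $\psi \equiv 1$, $\varphi(z) = z$ (hypothetical example data, not from the paper) and estimates $\munorm{\psi}$ and $\upsilon_\mu(\psi,\varphi)$ on a grid.

```python
import math

def omega_disk(r):
    # omega on the unit disk: (1/2) log((1+r)/(1-r)), for 0 <= r < 1
    return 0.5 * math.log((1 + r) / (1 - r))

def norm_estimate(mu, psi, phi, steps=4000):
    """Grid estimate of max{||psi||_mu, upsilon_mu(psi, phi)} for radial
    data on the unit disk; the symbols are illustrative assumptions."""
    sup_psi = 0.0
    sup_upsilon = 0.0
    for i in range(steps):
        r = (i + 0.5) / steps          # radius in (0, 1)
        m = mu(r) * abs(psi(r))
        sup_psi = max(sup_psi, m)
        sup_upsilon = max(sup_upsilon, m * omega_disk(abs(phi(r))))
    return max(sup_psi, sup_upsilon), sup_psi, sup_upsilon

# Hypothetical symbols: mu(z) = 1 - |z|^2, psi = 1, phi(z) = z.
norm, sup_psi, ups = norm_estimate(lambda r: 1 - r * r, lambda r: 1.0, lambda r: r)
```

For this choice $\upsilon_\mu(\psi,\varphi) \approx 0.45 < 1 = \munorm{\psi}$, so the maximum in Theorem \ref{boundednesstheorem} is attained by $\munorm{\psi}$.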
\begin{theorem}\label{compactnesssufficiency} Let $D$ be a bounded homogeneous domain in $\mathbb{C}^n$, $\psi \in H(D)$, and $\varphi \in S(D)$. Then $\wcompop{\psi}{\varphi}:\mathcal{B}(D)\to\Hinfu{\mu}(D)$ is compact if $\psi \in \Hinfu{\mu}(D)$ and $$\lim_{\varphi(z) \to \partial D} \frac{1}{2}\mu(z)\modu{\psi(z)}\omega(\varphi(z)) = 0.$$ \end{theorem} \begin{proof} Suppose $\psi \in \Hinfu{\mu}(D)$ and $\lim_{\varphi(z) \to \partial D} \frac{1}{2}\mu(z)\modu{\psi(z)}\omega(\varphi(z)) = 0$. By Proposition \ref{compactnesscharacterization}, it suffices to show that for any bounded sequence $(f_k)$ in $\mathcal{B}(D)$ converging to 0 locally uniformly in $D$, the sequence $(\munorm{\psi(f_k\circ\varphi)})$ converges to 0 as $k \to \infty$. Let $(f_k)$ be such a sequence, and without loss of generality assume $f_k(0) = 0$ and $\blochnorm{f_k} \leq 1$ for all $k \in \mathbb{N}$. Let $\varepsilon > 0$. By hypothesis, there exists $r > 0$ such that $\mu(z)\modu{\psi(z)}\omega(\varphi(z)) < \varepsilon$ whenever $d(\varphi(z),\partial D) < r$, where $d$ denotes the Euclidean distance. For such $z$, Lemma \ref{pointevaluationestimate} gives, for every $k \in \mathbb{N}$, $$\mu(z)\modu{\psi(z)}\modu{f_k(\varphi(z))} \leq \mu(z)\modu{\psi(z)}\omega(\varphi(z))\beta_{f_k} \leq \mu(z)\modu{\psi(z)}\omega(\varphi(z)) < \varepsilon.$$ On the other hand, the set $E_r = \{w \in D : d(w,\partial D) \geq r\}$ is a compact subset of $D$, so $(f_k)$ converges to 0 uniformly on $E_r$. Hence, for $k$ large enough, $\modu{f_k(\varphi(z))} < \varepsilon/\munorm{\psi}$ whenever $d(\varphi(z),\partial D) \geq r$, and for such $z$, $$\mu(z)\modu{\psi(z)}\modu{f_k(\varphi(z))} \leq \munorm{\psi}\modu{f_k(\varphi(z))} < \varepsilon.$$ Combining the two cases, for $k$ large enough we have $\munorm{\psi(f_k\circ\varphi)} \leq \varepsilon$. Therefore, $(\munorm{\psi(f_k\circ\varphi)})$ converges to 0 as $k \to \infty$, completing the proof. \end{proof} As was the case for such operators acting on the Bloch space \cite{AllenColonna:2010}, more information about $\omega(z)$ on these domains is needed to prove the necessity of these conditions.
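To see the sufficient condition in action on the unit disk, consider the hypothetical symbols $\psi \equiv 1$ and $\varphi$ the identity (illustrative choices, not from the paper). With the weight $\mu(z) = (1-|z|^2)^2$ the quantity $\frac{1}{2}\mu(z)|\psi(z)|\omega(z)$ vanishes as $|z| \to 1$, so Theorem \ref{compactnesssufficiency} yields compactness; with $\mu \equiv 1$ the quantity blows up, and indeed $\upsilon_\mu(\psi,\varphi)$ is then infinite, so the operator is not even bounded by Theorem \ref{boundednesstheorem}.

```python
import math

def decay_along_radius(mu, psi, radii):
    """Evaluate (1/2) mu(r) |psi(r)| omega(r) for phi = identity on the
    unit disk at the given radii; illustrative radial data only."""
    values = []
    for r in radii:
        omega = 0.5 * math.log((1 + r) / (1 - r))   # omega(z) = rho(z, 0)
        values.append(0.5 * mu(r) * abs(psi(r)) * omega)
    return values

radii = [0.9, 0.99, 0.999, 0.9999]
# mu(z) = (1 - |z|^2)^2: the quantity tends to 0, so the operator is compact.
vanishing = decay_along_radius(lambda r: (1 - r * r) ** 2, lambda r: 1.0, radii)
# mu = 1: the quantity blows up, so the sufficient condition fails.
not_vanishing = decay_along_radius(lambda r: 1.0, lambda r: 1.0, radii)
```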
However, we believe the sufficient condition is also necessary, and the next section offers some evidence in this direction. We end this section with the following conjecture. \begin{conjecture}\label{conjecture} Let $D$ be a bounded homogeneous domain in $\mathbb{C}^n$, $\psi \in H(D)$, and $\varphi$ a holomorphic self-map of $D$. Then $\wcompop{\psi}{\varphi}:\mathcal{B}(D)\to\Hinfu{\mu}(D)$ is compact if and only if $\psi \in \Hinfu{\mu}(D)$ and $$\lim_{\varphi(z) \to \partial D}\frac{1}{2} \mu(z)\modu{\psi(z)}\omega(\varphi(z)) = 0.$$\end{conjecture} \section{On the Unit Polydisk}\label{Section:Polydisk} In this section, we apply the results of the previous sections to the case when the bounded homogeneous domain is the unit polydisk $\mathbb{D}^n$. In this case, we show the sufficient condition for compactness in Theorem \ref{compactnesssufficiency} is also necessary. The weighted composition operator has not been studied in this setting before, and so the results in this section are new to the literature. To characterize the bounded weighted composition operators completely in terms of the symbols, we must establish a closed form for $\omega(z)$ on the domain. As stated previously, this is known for the unit ball. However, for the unit polydisk, such a form is not known. Thus, a literal application of the results from the previous two sections does not yield a characterization of boundedness completely in terms of the symbols. The same can be said for the operator norm and the sufficient condition for compactness. In this section, we will provide characterizations of boundedness and compactness completely in terms of the symbols, as well as provide norm estimates in a similar manner.
For $\psi \in H(\mathbb{D}^n)$ and $\varphi = (\varphi_1,\dots,\varphi_n) \in S(\mathbb{D}^n)$, define $$\vartheta_\mu(\psi,\varphi) = \sup_{z \in \mathbb{D}^n} \frac{1}{2}\mu(z)|\psi(z)|\sum_{j=1}^n\log\frac{1+|\varphi_j(z)|}{1-|\varphi_j(z)|}.$$ \begin{theorem}\label{boundednesstheorempolydisk} Let $\psi \in H(\mathbb{D}^n)$ and $\varphi=(\varphi_1,\dots,\varphi_n) \in S(\mathbb{D}^n)$. Then the following are equivalent: \begin{enumerate} \item[\listitem{a}] $\wcompop{\psi}{\varphi}:\mathcal{B}(\mathbb{D}^n)\to\Hinfu{\mu}(\mathbb{D}^n)$ is bounded. \item[\listitem{b}] $\wcompop{\psi}{\varphi}:\Bloch_{0^*}(\mathbb{D}^n)\to\Hinfu{\mu}(\mathbb{D}^n)$ is bounded. \item[\listitem{c}] $\psi \in \Hinfu{\mu}(\mathbb{D}^n)$ and $\vartheta_\mu(\psi,\varphi)$ is finite. \end{enumerate} \end{theorem} \begin{proof} The implication $(a) \Rightarrow (b)$ is clear. So assume $\wcompop{\psi}{\varphi}:\Bloch_{0^*}(\mathbb{D}^n) \to \Hinfu{\mu}(\mathbb{D}^n)$ is bounded. Then $\psi = \wcompop{\psi}{\varphi}\mathbb{1} \in \Hinfu{\mu}(\mathbb{D}^n)$. Fix $j \in \{1,\dots,n\}$ and $\lambda \in \mathbb{D}^n$. Then the function $$h_j(z) = \frac{1}{n(2+\log 4)}\mathrm{Log}\frac{4}{1-z_j\overline{\varphi_j(\lambda)}}$$ is in $\Bloch_{0^*}(\mathbb{D}^n)$ with $\blochnorm{h_j} \leq \frac{1}{n}$ (see \cite{Allen:2009}).
Define $$f(z) = \sum_{j=1}^n h_j(z) = \frac{1}{n(2+\log 4)}\sum_{j=1}^n \mathrm{Log}\frac{4}{1-z_j\overline{\varphi_j(\lambda)}}.$$ Then $f \in \Bloch_{0^*}(\mathbb{D}^n)$ with $\blochnorm{f} \leq \sum_{j=1}^n \blochnorm{h_j} \leq 1.$ Since $\wcompop{\psi}{\varphi}$ is bounded, we have $$\begin{aligned} \norm{\wcompop{\psi}{\varphi}} &\geq \munorm{\wcompop{\psi}{\varphi} f}\\ &\geq \mu(\lambda)\modu{\psi(\lambda)}\modu{f(\varphi(\lambda))}\\ &= \frac{2}{n(2+\log 4)}\frac{1}{2}\mu(\lambda)\modu{\psi(\lambda)}\sum_{j=1}^n\log\frac{4}{1-\modu{\varphi_j(\lambda)}^2}\\ &\geq \frac{1}{n(1+\log 2)} \frac{1}{2}\mu(\lambda)\modu{\psi(\lambda)}\sum_{j=1}^n\log\frac{1+\modu{\varphi_j(\lambda)}}{1-\modu{\varphi_j(\lambda)}}. \end{aligned}$$ Taking the supremum over all $\lambda \in \mathbb{D}^n$, we obtain \begin{equation}\label{newopbound}\vartheta_\mu(\psi,\varphi) = \sup_{\lambda \in \mathbb{D}^n}\; \frac{1}{2}\mu(\lambda)\modu{\psi(\lambda)}\sum_{j=1}^n\log\frac{1+\modu{\varphi_j(\lambda)}}{1-\modu{\varphi_j(\lambda)}} \leq n(1+\log 2)\norm{\wcompop{\psi}{\varphi}},\end{equation} which is finite since $\wcompop{\psi}{\varphi}$ is assumed to be bounded. Finally, suppose $\psi \in \Hinfu{\mu}(\mathbb{D}^n)$ and $\vartheta_\mu(\psi,\varphi)$ is finite. By Lemma \ref{omega inequality} and relation (\ref{rhopolydisk}), we have $$\upsilon_\mu(\psi,\varphi) = \sup_{z \in \mathbb{D}^n}\; \mu(z)\modu{\psi(z)}\omega(\varphi(z)) \leq \sup_{z \in \mathbb{D}^n}\; \frac{1}{2}\mu(z)\modu{\psi(z)}\sum_{j=1}^n\log\frac{1+\modu{\varphi_j(z)}}{1-\modu{\varphi_j(z)}} = \vartheta_\mu(\psi,\varphi),$$ which is finite. Thus by Theorem \ref{boundednesstheorem}, $\wcompop{\psi}{\varphi}$ is bounded from $\mathcal{B}(\mathbb{D}^n)$ to $\Hinfu{\mu}(\mathbb{D}^n)$, completing the proof. \end{proof} As mentioned before, without an equivalent form for $\omega(z)$ on $\mathbb{D}^n$, a norm equality purely in terms of the symbols cannot be established.
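Although no closed form for $\omega(z)$ is available on $\mathbb{D}^n$, the quantity $\vartheta_\mu(\psi,\varphi)$ itself is computable. As a numerical sketch, take the hypothetical symbols $\psi \equiv 1$ and $\varphi(z) = z/2$ with the weight $\mu(z) = (1-|z_1|^2)(1-|z_2|^2)$ on the bidisk (illustrative choices, not from the paper); since $\varphi$ maps $\mathbb{D}^2$ into a compact subset of $\mathbb{D}^2$, condition (c) of Theorem \ref{compactnesstheorem} holds vacuously for this pair.

```python
import math

def theta_estimate(mu, phi_abs, steps=200):
    """Grid estimate of vartheta_mu(psi, phi) on the bidisk with psi = 1,
    for radial data; illustrative assumptions only."""
    best = 0.0
    for i in range(steps):
        for j in range(steps):
            r1 = (i + 0.5) / steps
            r2 = (j + 0.5) / steps
            p1, p2 = phi_abs(r1, r2)
            s = (math.log((1 + p1) / (1 - p1))
                 + math.log((1 + p2) / (1 - p2)))
            best = max(best, 0.5 * mu(r1, r2) * s)
    return best

# Hypothetical data: mu(z) = (1-|z1|^2)(1-|z2|^2), phi(z) = z/2.
theta = theta_estimate(lambda r1, r2: (1 - r1 * r1) * (1 - r2 * r2),
                       lambda r1, r2: (r1 / 2, r2 / 2))
```

For this choice $\vartheta_\mu(\psi,\varphi) \approx 0.29$, and since $\munorm{\psi} = 1 > \vartheta_\mu(\psi,\varphi)$, both bounds of Theorem \ref{polynorm} equal $1$, so $\norm{W_{\psi,\varphi}} = 1$ here.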
The norm estimates we provide follow immediately from relations (\ref{omegaequalityball}), (\ref{rhopolydisk}), (\ref{newopbound}), and Theorem \ref{boundednesstheorem}. \begin{theorem}\label{polynorm} Let $\psi \in H(\mathbb{D}^n)$ and $\varphi=(\varphi_1,\dots,\varphi_n) \in S(\mathbb{D}^n)$. If $W_{\psi,\varphi}:\mathcal{B}(\mathbb{D}^n) \to \Hinfu{\mu}(\mathbb{D}^n)$ is bounded, then $$\max\;\left\{\|\psi\|_{\Hinfu{\mu}}, \frac{1}{n(1+\log 2)}\vartheta_\mu(\psi,\varphi)\right\} \leq \|W_{\psi,\varphi}\| \leq \max\left\{\|\psi\|_{\Hinfu{\mu}},\vartheta_\mu(\psi,\varphi)\right\}.$$ \end{theorem} Finally, we show the sufficient condition for compactness from Theorem \ref{compactnesssufficiency} is necessary. \begin{theorem}\label{compactnesstheorem} Let $\psi \in H(\mathbb{D}^n)$ and $\varphi=(\varphi_1,\dots,\varphi_n) \in S(\mathbb{D}^n)$. Then the following are equivalent: \begin{enumerate} \item[\listitem{a}] $\wcompop{\psi}{\varphi}:\mathcal{B}(\mathbb{D}^n)\to\Hinfu{\mu}(\mathbb{D}^n)$ is compact. \item[\listitem{b}] $\wcompop{\psi}{\varphi}:\Bloch_{0^*}(\mathbb{D}^n)\to\Hinfu{\mu}(\mathbb{D}^n)$ is compact. \item[\listitem{c}] $\psi \in \Hinfu{\mu}(\mathbb{D}^n)$ and $$\lim_{\varphi(z)\to\partial \mathbb{D}^n} \frac{1}{2}\mu(z)\modu{\psi(z)}\sum_{j=1}^n\log\frac{1+\modu{\varphi_j(z)}}{1-\modu{\varphi_j(z)}} =0.$$ \end{enumerate} \end{theorem} \begin{proof} The implication $(a) \Rightarrow (b)$ is clear. Suppose $\wcompop{\psi}{\varphi}:\Bloch_{0^*}(\mathbb{D}^n) \to \Hinfu{\mu}(\mathbb{D}^n)$ is compact. So by the boundedness of $\wcompop{\psi}{\varphi}$ and Theorem \ref{boundednesstheorempolydisk}, $\psi \in \Hinfu{\mu}(\mathbb{D}^n)$ and \begin{equation}\label{supboundpolydisk}\sup_{z \in \mathbb{D}^n}\;\frac{1}{2}\mu(z)\modu{\psi(z)}\sum_{j=1}^n\log\frac{1+\modu{\varphi_j(z)}}{1-\modu{\varphi_j(z)}} < \infty.\end{equation} Let $(z^{(k)})$ be a sequence in $\mathbb{D}^n$ such that $\varphi(z^{(k)}) \to \partial\mathbb{D}^n$ as $k \to \infty$. 
So there is an index $m \in \{1,\dots,n\}$ such that $\modu{\varphi_m(z^{(k)})} \to 1$ as $k \to \infty$. It follows that $$\sum_{j=1}^n \log\frac{1+\modu{\varphi_j(z^{(k)})}}{1-\modu{\varphi_j(z^{(k)})}} \to \infty$$ as $k \to \infty$. So by inequality (\ref{supboundpolydisk}), it must be the case that $$\lim_{k \to \infty} \mu(z^{(k)})\psi(z^{(k)}) = 0.$$ For $k \in \mathbb{N}$, define $$f_k(z) = \displaystyle\frac{\left(\mathrm{Log}\displaystyle\frac{4}{1-z_m\conj{\varphi_m(z^{(k)})}}\right)^2}{\log\displaystyle\frac{4}{1-\modu{\varphi_m(z^{(k)})}^2}}.$$ Then the sequence $(f_k)$ is bounded in $\Bloch_{0^*}(\mathbb{D}^n)$ and converges to 0 locally uniformly in $\mathbb{D}^n$ (see \cite{Allen:2009}). By the compactness of $\wcompop{\psi}{\varphi}$, we have \begin{eqnarray} \notag \frac{1}{2}\mu(z^{(k)})\modu{\psi(z^{(k)})}\log\frac{1+\modu{\varphi_m(z^{(k)})}}{1-\modu{\varphi_m(z^{(k)})}} &\leq& \frac{1}{2}\mu(z^{(k)})\modu{\psi(z^{(k)})}\log\frac{4}{1-\modu{\varphi_m(z^{(k)})}^2}\\ \notag &=& \frac{1}{2}\mu(z^{(k)})\modu{\psi(z^{(k)})f_k(\varphi(z^{(k)}))}\\ \label{polyestimate2}&\leq& \munorm{\psi(f_k\circ\varphi)} \to 0 \end{eqnarray} as $k \to \infty$. Now let $\ell \in \{1,\dots,n\}$ be such that $\modu{\varphi_\ell(z^{(k)})} \not\to 1$ as $k \to \infty$. So there exists $r \in (0,1)$ such that $\modu{\varphi_\ell(z^{(k)})}\leq r$ for all $k \in \mathbb{N}$. Since $\mu(z^{(k)})\psi(z^{(k)}) \to 0$ as $k \to \infty$, we have \begin{equation}\label{polyestimate3}\frac{1}{2}\mu(z^{(k)})\modu{\psi(z^{(k)})}\log\frac{1+\modu{\varphi_\ell(z^{(k)})}}{1-\modu{\varphi_\ell(z^{(k)})}} \leq \frac{1}{2}\mu(z^{(k)})\modu{\psi(z^{(k)})}\log\frac{1+r}{1-r} \to 0\end{equation} as $k \to \infty$. So by inequalities (\ref{polyestimate2}) and (\ref{polyestimate3}), we have $$\lim_{k \to \infty} \frac{1}{2}\mu(z^{(k)})\modu{\psi(z^{(k)})}\sum_{j=1}^n\log\frac{1+\modu{\varphi_j(z^{(k)})}}{1-\modu{\varphi_j(z^{(k)})}} = 0,$$ showing $(b) \Rightarrow (c)$.
Lastly, suppose $\psi \in \Hinfu{\mu}(\mathbb{D}^n)$ and $$\lim_{\varphi(z)\to\partial \mathbb{D}^n} \frac{1}{2}\mu(z)\modu{\psi(z)}\sum_{j=1}^n\log\frac{1+\modu{\varphi_j(z)}}{1-\modu{\varphi_j(z)}} =0.$$ Then from relation (\ref{rhopolydisk}), we obtain $$\lim_{\varphi(z)\to\partial\mathbb{D}^n} \frac{1}{2}\mu(z)\modu{\psi(z)}\omega(\varphi(z)) \leq \lim_{\varphi(z)\to\partial\mathbb{D}^n}\mu(z)\modu{\psi(z)}\sum_{j=1}^n\log\frac{1+\modu{\varphi_j(z)}}{1-\modu{\varphi_j(z)}} = 0.$$ Therefore, by Theorem \ref{compactnesssufficiency}, $\wcompop{\psi}{\varphi}$ is compact, proving $(c) \Rightarrow (a)$. \end{proof} \section{Conclusions and Open Questions}\label{Section:Conclusions} The general bounded homogeneous domains offer a framework by which traditional operator theory on spaces of holomorphic functions in several complex variables (specifically on the unit ball and the unit polydisk) can be unified. As with any generalization, the results we gain come with trade-offs. One of the trade-offs is the use of $\omega(z)$ in characterizing boundedness and compactness of the multiplication, composition, and weighted composition operators. For the unit ball, a closed form for $\omega(z)$ is known, and upper bounds are known for the unit polydisk. However, the absence of a closed form for the polydisk, and of any known bounds for other bounded homogeneous domains, presents a temporary roadblock. This roadblock presents itself when we consider conditions for compactness of the composition and weighted composition operators. We conclude this paper with the following open questions we hope researchers will take up. \begin{enumerate} \item[\listitem{1}] Is there a closed form for $\omega(z)$ on $\mathbb{D}^n$? Is there a sharper bound than $\rho(z,0)$? \item[\listitem{2}] Are there other specific bounded homogeneous domains for which a closed form for $\omega(z)$ can be determined? \item[\listitem{3}] Is Conjecture \ref{conjecture} true for all bounded homogeneous domains?
Is Conjecture \ref{conjecture} true for any other specific bounded homogeneous domain? \end{enumerate} \bibliographystyle{amsplain}
https://arxiv.org/abs/2212.09893
Diagonals separating the square of a continuum
A metric continuum $X$ is indecomposable if it cannot be put as the union of two of its proper subcontinua. A subset $R$ of $X$ is said to be continuumwise connected provided that for each pair of points $p,q\in R$, there exists a subcontinuum $M$ of $X$ such that $\{p,q\}\subset M\subset R$. Let $X^{2}$ denote the Cartesian square of $X$ and $\Delta$ the diagonal of $X^{2}$. In \cite{ka} it was asked if for a continuum $X$, distinct from the arc, $X^{2}\setminus \Delta$ is continuumwise connected if and only if $X$ is decomposable. In this paper we show that no implication in this question holds. For the proof of the non-necessity, we use the dynamic properties of a suitable homeomorphism of the Cantor set onto itself to construct an appropriate indecomposable continuum $X$.
\section{Introduction} A {\it continuum} is a compact connected non-degenerate metric space. A subcontinuum of a continuum $X$ is a nonempty connected closed subset of $X$, so singletons are subcontinua. An {\it arc} is a continuum homeomorphic to the interval $[0,1]$. Let $X^{2}$ denote the Cartesian square of $X$ and $\Delta$ the diagonal of $X^{2}$. The authors studied in \cite{immm} conditions under which $\Delta$ satisfies some of the properties described in \cite[Table 1]{bpv} and \cite[Definition 1.1]{mm}, for a number of examples and families of continua. Recently, H. Katsuura \cite{ka} proved that the arc is the only continuum $X$ for which $X^{2}\setminus \Delta$ is not connected, and he included the following question \cite[p. 4]{ka} (Katsuura mentioned that this question was suggested by Wayne Lewis of Texas Tech University in a private conversation): \begin{question}\label{dos} If $X$ is a continuum other than the arc, is $X^{2}\setminus \Delta$ continuumwise connected if and only if $X$ is decomposable? \end{question} As an application of dynamical systems to continuum theory, we use the dynamic properties of a particular homeomorphism from the Cantor set onto itself to prove that the necessity in Question \ref{dos} is not satisfied. In fact, we show that no implication in Question \ref{dos} holds. \section{No Sufficiency} A {\it map} is a continuous function. Given continua $X$ and $Y$, and a number $\varepsilon>0$, an $\varepsilon$-{\it map} is an onto map $f:X\rightarrow Y$ such that for each $y\in Y$, diameter$(f^{-1}(y))<\varepsilon$. The continuum $X$ is {\it arc-like} provided that for each $\varepsilon>0$, there exists an $\varepsilon$-map $f:X\rightarrow [0,1]$. \begin{lemma} \label{tres} Let $X$ be an arc-like continuum and $p,q\in X$ such that $p\neq q$. Then, for every continuum $K\subset X^2$ containing the points $(p,q)$ and $(q,p)$, $K\cap\Delta\neq\emptyset$. \end{lemma} \begin{proof} Let $d$ be a metric for $X$.
Let $K$ be a continuum in $X^2$ such that $(p,q),(q,p)\in K$. Suppose that $K\cap\Delta = \emptyset$. Hence, there exists $\varepsilon>0$ such that $d(x,y)>\varepsilon$ for every point $(x,y)\in K$. Since $X$ is an arc-like continuum, there exists an $\varepsilon$-map $\lambda:X \rightarrow [0,1]$. Then for each $(x,y)\in K$, $\lambda(x)\neq \lambda(y)$. By the connectedness of $K$, we obtain that either $\lambda(x)<\lambda(y)$ for each $(x,y)\in K$ or $\lambda(y)<\lambda(x)$ for each $(x,y)\in K$. This contradicts the fact that $(p,q),(q,p)\in K$. \end{proof} \begin{corollary}\label{cuatro} Let $X$ be an arc-like continuum. Then $X^{2}\setminus\Delta$ is not continuumwise connected. \end{corollary} Since there are decomposable arc-like continua (for example, the sin$(\frac{1}{x})$-curve), Corollary \ref{cuatro} shows that the sufficiency in Question \ref{dos} is not satisfied. We also proved in \cite[Section 7]{immm} that, in fact, the sin$(\frac{1}{x})$-curve belongs to a family of curves for which $\Delta$ satisfies a stronger property in $X^{2}$. S. B. Nadler, Jr. \cite[p. 329]{na2} called the compactifications of the ray $[0,\infty)$ whose remainder is an arc {\it Elsa continua}, and he proved that they are arc-like \cite[Lemma 6]{na1} (in fact, it is easy to show that a compactification of $[0,\infty)$ with non-degenerate remainder is arc-like if and only if the remainder is an arc-like continuum). Then each Elsa continuum $X$ is decomposable and $X^{2}\setminus \Delta$ is not continuumwise connected. \section{No Necessity} In this section we present an indecomposable continuum $X$ such that $X^{2}\setminus \Delta$ is continuumwise connected. \begin{definition}\label{uno} Let $X$ be a continuum and $A$ a subcontinuum of $X$ with $\mathrm{int}_{X}(A)=\emptyset$.
We say that $A$ is a continuum of {\rm colocal connectedness} in $X$ if for each open subset $U$ of $X$ with $A\subset U$ there exists an open subset $V$ of $X$ such that $A\subset V \subset U$ and $X\setminus V$ is connected. \end{definition} By \cite[Table 1]{bpv} and \cite[Section 1]{mm} it is known that if $\Delta$ satisfies Definition \ref{uno} in $X^{2}$ then $X^{2}\setminus \Delta $ is continuumwise connected. The construction and proof of the properties of $X$ strongly depend on the dynamic properties of a particular homeomorphism of the Cantor set onto itself. Let $C$ denote the Cantor ternary set. In this section we will use a homeomorphism $f:C\rightarrow C$ such that $f$ is minimal ($C$ does not contain any proper nonempty closed subset $A$ such that $f(A)=A$, or equivalently, every orbit of $f$ is dense) and $f$ is weakly mixing (for any two nonempty open subsets $U,V$ of $C^{2}$, there exists $n\in \mathbb {N}$ such that $(f\times f)^{n}(U)\cap V\neq\emptyset$). A recent reference for the existence of such homeomorphisms is \cite[Theorem 4.1]{bko}. It is known that the inverse $f^{-1}$ is also minimal \cite[Theorem 6.2 (e)]{ac}. By \cite[Proposition 2]{dk}, $f\times f$ has a dense orbit. We consider the space $X$ obtained by identifying in $C\times [0,1]$, for each $p\in C$, the points $(p,1)$ and $(f(p),0)$. Let $\varphi:C\times [0,1]\rightarrow X$ be the quotient mapping and let $\sigma=\varphi\times\varphi:(C\times[0,1])^{2}\rightarrow X^{2}$. Let $\rho$ be the metric on $C\times [0,1]$ given by $\rho((p,s),(q,t))=\vert p-q\vert+\vert s-t\vert$. Fix a metric $D$ for the space $X$. In the hypotheses of the following theorem we list the specific properties of the homeomorphism $f$ that we use. \begin{theorem} Suppose that $f:C\rightarrow C$ is a homeomorphism such that the orbits of $f$ and $f^{-1}$ are dense and $f\times f:C^{2}\rightarrow C^{2}$ has a dense orbit.
Then $X$ is an indecomposable continuum such that the diagonal $\Delta$ in $X^{2}$ is colocally connected. \end{theorem} \begin{proof} The indecomposability of $X$ is proved in \cite[Corollary, p. 552]{gu}. So we only need to show that $\Delta$ is colocally connected in $X^{2}$. In order to do this, take an open subset $U$ in $X^{2}$ such that $\Delta\subset U$. Then there exists $\varepsilon>0$ such that if $D(x,y)<\varepsilon$, then $(x,y)\in U$. Fix $\delta>0$ such that $\delta<\frac{1}{10}$ and if $(p,s),(q,t)\in C\times[0,1]$ and $\rho((p,s),(q,t))<2\delta$, then $D(\varphi(p,s),\varphi(q,t))<\frac{\varepsilon}{3}$ (hence $\sigma((p,s),(q,t))\in U$). Define $$V_{1}=\{((p,s),(q,t))\in (C\times[0,1])^{2}:\vert p-q\vert<\delta\text{ and }\vert t-s \vert<\delta\},$$ $$V_{2}=\{((p,s),(q,t))\in (C\times[0,1])^{2}:\vert p-f(q)\vert<\delta, s<\delta\text{ and }1-\delta<t\},$$ $$V_{3}=\{((p,s),(q,t))\in (C\times[0,1])^{2}:\vert f(p)-q\vert<\delta, t<\delta\text{ and }1-\delta<s\},$$ $$V_{0}=V_{1}\cup V_{2}\cup V_{3}$$ and $$K=(C\times[0,1])^{2}\setminus V_{0}.$$ We check that $K$ has the following properties. (a) $\sigma(K)$ is a continuum, and (b) $\Delta\subset X^{2}\setminus \sigma(K)\subset U$. In order to prove (a), observe that $V_{0}$ is open in $(C\times[0,1])^{2}$, so $K$ and $\sigma(K)$ are compact, so we only need to show that $\sigma(K)$ is connected. Let \begin{center} $M=(C\times\{0\})\times(C\times\{\frac{1}{2}\})$. \end{center} Notice that $M\subset K$. {\bf Claim 1.} Let $z=((p,s),(q,t))\in K$ be such that $s\leq t$. Then there exists a subcontinuum $A$ of $K$ such that $z,((p,0),(q,\frac{1}{2}))\in A$. We prove Claim 1. We consider two cases. {\bf Case 1.} $2\delta\leq s$ or $t\leq 1-\delta$. For each $r\in [0,1]$, let $u(r)=rs$ and $v(r)=rt+(1-r)(t-s)$. Define $\lambda(r)=((p,u(r)),(q,v(r)))$ and $A_{1}=\{\lambda(r):r\in[0,1]\}$. Then $z,((p,0),(q,t-s))\in A_{1}$. Take $r\in [0,1]$, observe that $u(r)\leq v(r)\leq t$ and $v(r)-u(r)=t-s$. 
Since $u(r)\leq v(r)$ and $\delta<1-\delta$, we have that $\lambda(r)\notin V_{3}$. Since $v(r)-u(r)=t-s$ and $z\notin V_{1}$, we have that $\lambda(r)\notin V_{1}$. In the case that $t\leq 1-\delta$, we have that $v(r)\leq 1-\delta$; hence $\lambda(r)\notin V_{2}$. Now we consider the case that $2\delta\leq s$. If $r\leq \frac{1}{2}$, then $\delta\leq(1-r)2\delta\leq (1-r)s$, so $t\leq1\leq 1-\delta +(1-r)s$. This implies that $v(r)=rt+(1-r)(t-s)\leq 1-\delta$. Thus $\lambda(r)\notin V_{2}$. If $\frac{1}{2}\leq r$, then $\delta\leq\frac{s}{2}\leq rs=u(r)$. Hence $\lambda(r)\notin V_{2}$. This completes the proof that for each $r\in [0,1]$, $\lambda(r)\in K$. Therefore $A_{1}\subset K$. Given $r\in [0,1]$, let $w(r)=r(t-s)+(1-r)\frac{1}{2}$ and $\eta(r)=((p,0),(q,w(r)))$. Let $A_{2}= \{\eta(r):r\in[0,1]\}$. Observe that $((p,0),(q,t-s)), ((p,0),(q,\frac{1}{2}))\in A_{2}$ and $\eta(r)\notin V_{3}$. Take $r\in [0,1]$. Since $\lambda(0)\in K$, we have that $((p,0),(q,t-s))\notin V_{1}\cup V_{2}$. Since $((p,0),(q,t-s))\notin V_{1}$, we have that either $\delta \leq \vert p-q\vert$ or $\delta\leq t-s$. If $\delta \leq \vert p-q\vert$, it is clear that $\eta(r)\notin V_{1}$. If $\delta\leq t-s$, since $\delta \leq\frac{1}{2}$, we have that $\delta \leq w(r)$. Thus $\eta(r)\notin V_{1}$. Since $((p,0),(q,t-s))\notin V_{2}$, we have that either $\delta \leq \vert p-f(q)\vert$ or $t\leq 1-\delta$. If $\delta \leq \vert p-f(q)\vert$, it is clear that $\eta(r)\notin V_{2}$. If $t\leq 1-\delta$, since $\frac{1}{2}\leq 1-\delta$, we have that $w(r)\leq 1-\delta$. Thus $\eta(r)\notin V_{2}$. We have shown that for each $r\in [0,1]$, $\eta(r)\notin V_{1}\cup V_{2}\cup V_{3}$. Hence $A_{2}\subset K$. Define $A=A_{1}\cup A_{2}$. Then $A$ has the required properties. {\bf Case 2.} $s<2\delta$ and $1-\delta<t$. Given $r\in [0,1]$, let $y(r)=r(1-\delta)+(1-r)t$ and $\gamma(r)=((p,s),(q,y(r)))$. Let $A_{3}=\{\gamma(r):r\in [0,1]\}$.
Notice that $z,((p,s),(q,1-\delta))\in A_{3}$ and for each $r\in [0,1]$, $1-\delta\leq y(r)$, so $\gamma(r)\notin V_{3}$ and $\delta<\frac{7}{10}\leq y(r)-s$. Thus $\gamma(r)\notin V_{1}$. Since $z\notin V_{2}$ and $1-\delta < t$, we have that either $\delta \leq \vert p-f(q)\vert$ or $\delta \leq s$, in both cases it is clear that $\gamma(r)\notin V_{2}$. This completes the proof that for each $r\in [0,1]$, $\gamma(r)\in K$. Therefore $A_{3}\subset K$. We apply Case 1 to the point $z_{0}=((p,s),(q,1-\delta))$, so there exists a subcontinuum $A_{4}$ of $K$ such that $z_{0},((p,0),(q,\frac{1}{2}))\in A_{4}$. Define $A=A_{3}\cup A_{4}$. Then $A$ has the required properties. This finishes the proof of Claim 1. By the symmetry of the roles of both coordinates in the definition of $V_{0}$, we obtain that the following claim also holds. {\bf Claim 2.} Let $z=((p,s),(q,t))\in K$ be such that $t\leq s$. Then there exists a subcontinuum $A$ of $K$ such that $z,((p,\frac{1}{2}),(q,0))\in A$. {\bf Claim 3.} Let $z=((p,s),(q,t))\in K$. Then there exists a subcontinuum $B$ of $\sigma(K)$ such that $\sigma(z)\in B$ and $B\cap \sigma(M)\neq \emptyset$. The proof for the case $s\leq t$ follows from Claim 1. So we may suppose that $t\leq s$. By Claim 2, there exists a subcontinuum $A_{4}$ of $K$ such that $z,((p,\frac{1}{2}),(q,0))\in A_{4}$. Let $A_{5}=\{((p,\frac{1}{2}+r),(q,r)):r\in [0,\frac{1}{2}]\}$. Clearly, $((p,\frac{1}{2}),(q,0)),((p,1),(q,\frac{1}{2}))\in A_{5}$ and $A_{5}\subset K$. Let $A=A_{4}\cup A_{5}$. Then $A$ is a subcontinuum of $K$, $z\in A$ and $\sigma((p,1),(q,\frac{1}{2}))=\sigma((f(p),0),(q,\frac{1}{2}))\in \sigma(A)\cap\sigma(M)$. Hence $B=\sigma(A)$ satisfies the required properties. {\bf Claim 4.} Let $z=((p,0),(q,\frac{1}{2}))\in M$. Then there exists a subcontinuum $E$ of $\sigma(K)$ such that $\sigma(z),\sigma((f(p),0),(f(q),\frac{1}{2}))\in E$. 
In order to prove the existence of $E$, let $A=\{((p,r),(q,\frac{1}{2}+r)):r\in [0,\frac{1}{2}]\}\cup\{((p,\frac{1}{2}+r),(f(q),r)):r\in[0,\frac{1}{2}]\}$. Since $\varphi(q,1)=\varphi(f(q),0)$ and $\varphi(p,1)=\varphi(f(p),0)$, we obtain that $E=\sigma(A)$ satisfies the required properties. Hence, Claim 4 is proved. Fix a point $(p_{0},q_{0})\in C^{2}$ such that $(p_{0},q_{0})$ has a dense orbit under $f\times f$. By Claim 4, there exists a sequence $B_{1},B_{2},\ldots$ of subcontinua of $\sigma(K)$ such that for each $n\in \mathbb{N}$, $\sigma((f^{n-1}(p_{0}),0),(f^{n-1}(q_{0}),\frac{1}{2})),\sigma((f^{n}(p_{0}),0),(f^{n}(q_{0}),\frac{1}{2}))\in B_{n}$. Then the set $B=B_{1}\cup B_{2}\cup\cdots$ is a connected subset of $\sigma(K)$ and $\sigma((p_{0},0),(q_{0},\frac{1}{2}))\in B$. Given a point $z=((p,0),(q,\frac{1}{2}))\in M$, since $(p_{0},q_{0})$ has a dense orbit under $f\times f$, we have that $\sigma(z)\in\cl_{X^{2}}\left(\left\{\sigma((f^{n}(p_{0}),0),(f^{n}(q_{0}),\tfrac{1}{2})):n\in\mathbb{N}\right\}\right)\subset \cl_{X^{2}}(B)$. This proves that $B\cup \sigma(M)$ is a connected subset of $\sigma(K)$. Hence, Claim 3 implies that $\sigma(K)$ is connected. This ends the proof of (a). In order to prove (b), take $\sigma(z)\in \Delta$, where $z=((p,s),(q,t))$. Then $\varphi(p,s)=\varphi(q,t)$. Thus either $(p,s)=(q,t)$ or ($t=1$ and $(p,s)=(f(q),0)$) or ($s=1$ and $(q,t)=(f(p),0)$). In the first case, $z\in V_{1}$, in the second $z\in V_{2}$ and in the third one, $z\in V_{3}$. In any case, $z\notin K$. We have shown that $\Delta\subset X^{2}\setminus \sigma(K)$. Now we prove that $X^{2}\setminus \sigma(K)\subset U$. Take an element $w=\sigma(z)$, where $z=((p,s),(q,t))\notin K$. Then $z\in V_{1}\cup V_{2}\cup V_{3}$. We consider three cases. {\bf Case 1.} $z\in V_{1}$. In this case, $\vert p-q\vert<\delta$ and $\vert t-s\vert<\delta$, and by the definition of $\delta$, $\sigma(z)\in U$. {\bf Case 2.} $z\in V_{2}$.
In this case, $\vert p-f(q)\vert<\delta, s<\delta$ and $1-\delta<t$. Then $\rho((p,s),(p,0))<\delta$, $\rho((p,0),(f(q),0))<\delta$ and $\rho((q,1),(q,t))<\delta$. So, $D(\varphi(p,s),\varphi(p,0))<\frac{\varepsilon}{3}$, $D(\varphi(p,0),\varphi(f(q),0))<\frac{\varepsilon}{3}$ and $D(\varphi(q,1),\varphi(q,t))<\frac{\varepsilon}{3}$. Since $\varphi(f(q),0)=\varphi(q,1)$, we have that $D(\varphi(p,s),\varphi(q,t))<\varepsilon$. By the choice of $\varepsilon$, we conclude that $\sigma(z)=(\varphi(p,s),\varphi(q,t))\in U$. {\bf Case 3.} $z\in V_{3}$. This case is similar to Case 2. This completes the proof that $X^{2}\setminus \sigma(K)\subset U$. Therefore (b) holds. Finally define $V=X^{2}\setminus \sigma(K)$. Then $V$ is an open subset of $X^{2}$ such that $\Delta\subset V\subset U$, and $X^{2}\setminus V$ is connected. This finishes the proof that $\Delta$ is colocally connected in $X^{2}$. \end{proof} \bigskip
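As an added sanity check (not part of the paper), the purely arithmetic estimates behind Cases 1 and 2 of Claim 1 — that the path $\lambda(r)$ always escapes $V_{2}$, and that $y(r)-s>\frac{7}{10}$ — can be verified numerically for the sample value $\delta=0.09<\frac{1}{10}$:

```python
import random

random.seed(0)
delta = 0.09  # a sample value with delta < 1/10, as required in the proof

def case1_ok(s, t, r):
    """Check the Case 1 estimates for lambda(r) = ((p, u(r)), (q, v(r)))."""
    u = r * s
    v = r * t + (1 - r) * (t - s)
    # u(r) <= v(r) <= t and v(r) - u(r) = t - s, as observed in the proof
    assert 0 <= u <= t + 1e-12 and u <= v + 1e-12 and v <= t + 1e-12
    assert abs((v - u) - (t - s)) < 1e-9
    # the dichotomy ruling out V_2: either v(r) <= 1-delta or u(r) >= delta
    return v <= 1 - delta + 1e-12 or u >= delta - 1e-12

# Case 1 hypothesis: s <= t and (2*delta <= s or t <= 1-delta)
for _ in range(10000):
    s = random.uniform(0, 1)
    t = random.uniform(s, 1)
    if 2 * delta <= s or t <= 1 - delta:
        assert case1_ok(s, t, random.random())

# Case 2 hypothesis: s < 2*delta and 1-delta < t; then y(r) = r(1-delta)+(1-r)t
# satisfies 1-delta <= y(r) <= t and y(r) - s > 7/10 (since delta < 1/10)
for _ in range(10000):
    s = random.uniform(0, 2 * delta)
    t = random.uniform(1 - delta, 1)
    r = random.random()
    y = r * (1 - delta) + (1 - r) * t
    assert 1 - delta - 1e-12 <= y <= t + 1e-12 and y - s > 7 / 10
```

The script only exercises the inequalities in the coordinates $s,t$; the conditions of $V_{1},V_{2},V_{3}$ involving $p$, $q$, $f(p)$, $f(q)$ are handled logically in the proof and are not sampled here.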
https://arxiv.org/abs/1506.03043
Disconjugacy characterization by means of spectral $(k,n-k)$ problems
This paper is devoted to the description of the interval of parameters for which the general linear $n^{\rm th}$-order equation\begin{equation}\label{e-Ln}T_n[M]\,u(t) \equiv u^{(n)}(t)+a_1(t)\, u^{(n-1)}(t)+\cdots +a_{n-1}(t)\, u'(t)+(a_{n}(t)+M)\,u(t)=0 \,,\quad t\in I\equiv[a,b],\end{equation}with $a_i\in C^{n-i}(I)$, is disconjugate on $I$. Such an interval is characterized by the closest to zero eigenvalues of this problem coupled with $(k,n-k)$ boundary conditions, given by\begin{equation}\label{e-k-n-k}u(a)=\cdots=u^{(k-1)}(a)=u(b)=\cdots=u^{(n-k-1)}(b)=0\,,\quad 1\leq k\leq n-1\,.\end{equation}
\section{Introduction} There is a huge number of works related to disconjugacy and its properties (see \cite{Cop, Lev, Sha} for details). From the second half of the past century till now this subject has attracted important researchers, who have established very interesting criteria to ensure such a property for particular equations. This is the case of \cite{Neh2}, where the disconjugacy of the second order equation $y''(t)+f(t)\, y(t)=0$, with $f \ge 0$, is characterized in terms of the least eigenvalue of the corresponding eigenvalue problem $y''(t)+\lambda \, f(t)\, y(t)=0$, $y(a)=y'(b)=0$. In \cite{Barr} the general equation $(p(t)\, y'(t))'+f(t)\, y(t)=0$ has been treated. In that paper a characterization by means of a variational approach is given. More recently, in \cite{ClaHin} sufficient conditions to ensure the disconjugacy of some second and fourth order equations are shown. In \cite{Neh1} sufficient conditions (and different necessary ones) for disconjugacy on $[a,+\infty)$ are obtained for linear differential equations of the form $y^{(2\,n)}(t)-(-1)^n\,p(t)\,y(t)=0$, with $p\ge 0$. Also, in \cite{Eli} conditions for disconjugacy of the linear differential equation $y^{(n)}(t)+p(t)\,y(t)=0$ in $[a,+\infty)$, with $p(t)$ of constant sign, are given. In \cite{Sak} the disconjugacy of the linear differential equation $(r(t)\,x'(t))'+p(t)\,x'(t)+q(t)\,x(t)=0$ in $[a,b]$ is studied. Sufficient conditions to guarantee the disconjugacy of the nonlinear $p$-Laplacian equation $\left( |u'(t)|^{p-1}\,u'(t)\right) '+q(t)\,|u(t)|^{p-2}\,u(t)=0$ on $[a,+\infty)$ have been obtained in \cite{NapPin}. The aim of this paper is to characterize the disconjugacy of the general $n$-th order linear differential operator $u^{(n)}(t)+a_1(t)\, u^{(n-1)}(t)+\cdots +a_{n-1}(t)\, u'(t)+a_{n}(t)\,u(t)$ on an arbitrary interval $[a,b]$.
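For $n=2$ the kind of characterization we pursue is classical: $u''(t)+M\,u(t)=0$ is disconjugate on $[a,b]$ exactly when $M<\left(\frac{\pi}{b-a}\right)^{2}$. The following script (an added numerical illustration, with the hypothetical choice $a=0$, $b=1$) recovers this threshold by shooting: it integrates the initial value problem $u(0)=0$, $u'(0)=1$ and bisects for the largest $M$ for which $u$ has no further zero in $(0,1]$.

```python
import math

def crosses(M, a=0.0, b=1.0, n=4000):
    """RK4-integrate u'' + M*u = 0 with u(a)=0, u'(a)=1 and report whether
    u vanishes again in (a, b], i.e. whether disconjugacy on [a, b] fails."""
    h = (b - a) / n
    u, v = 0.0, 1.0          # u and u'
    f = lambda u, v: (v, -M * u)
    for i in range(n):
        k1 = f(u, v)
        k2 = f(u + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(u + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(u + h * k3[0], v + h * k3[1])
        u = u + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v = v + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        if i > 0 and u <= 0.0:   # a second zero of u has appeared
            return True
    return False

# Bisect for the critical M on [0, 1]; the classical threshold is pi^2.
lo, hi = 1.0, 20.0           # crosses(1.0) is False, crosses(20.0) is True
for _ in range(60):
    mid = (lo + hi) / 2
    if crosses(mid):
        hi = mid
    else:
        lo = mid
M_crit = hi
```

The variable `M_crit` agrees with $\pi^{2}\approx 9.8696$ up to the integration tolerance, matching the classical second order criterion that the main theorem below generalizes to arbitrary order.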
Taking into account that the coefficient of $u$ can be uniquely decomposed as $$a_n(t)=\tilde{a}_n(t)+\frac{1}{b-a}\int_a^ba_n(s)\,ds, \quad t \in I,$$ it is obvious that such a problem is equivalent to studying the set of parameters $M$ for which the linear differential equation \eqref{e-Ln} is disconjugate on $I$. To this end, we assume that such a set is nonempty, i.e., there exists at least one $\bar{M}$ such that $T_n[\bar{M}]\,u(t)=0$ is disconjugate on $I$. Denote by $X_k$ the space of definition corresponding to the $(k,n-k)$ boundary value conditions given in \eqref{e-k-n-k}: { \begin{equation*} X_k= \left\lbrace u\in C^n(I)\ \mid u(a) =\dots=u^{(k-1)}(a)=u(b)=\dots=u^{(n-k-1)}(b)=0\right\rbrace. \end{equation*}} Now, in order to make the paper more readable, we introduce some preliminary concepts and results. \begin{definition} Let $a_k\in C^{n-k}(I)$ for $k=1,\dots,n$. The linear differential equation (\ref{e-Ln}) of order $n$ is said to be disconjugate on an interval $J$ if every nontrivial solution has, at most, $n-1$ zeros on $J$, multiple zeros being counted according to their multiplicity. \end{definition} The following results and definitions about this concept are collected in \cite[Chapter 3]{Cop}. \begin{theorem}\label{T::1} If the equations $L_1\,y=0$ and $L_2\,y=0$ are disconjugate on the interval $I$, then the composite equation $L_1\,(L_2\,y)=0$ is also disconjugate on $I$. \end{theorem} Defining the distance between two equations \eqref{e-Ln}$_1$ and \eqref{e-Ln}$_2$ by $\sup_{t\in I}\sum_{k=1}^n \left| a_{k,1}(t)-a_{k,2}(t)\right|$, we have the following result in the corresponding metric space. \begin{proposition} \label{P::1} The set of all disconjugate equations (\ref{e-Ln}) on a compact interval $I$ is connected and open.
\end{proposition} \begin{definition}\label{Def::1} Let $a\in\mathbb{R}$, and denote the first conjugate point of $a$ to the right for the linear differential equation (\ref{e-Ln}) by \[\eta_M(a)=\sup\left\lbrace b>a\quad\mid\quad \text{equation (\ref{e-Ln}) is disconjugate on } [a,b]\right\rbrace \in (a,\infty]\,.\] \end{definition} We consider a fundamental system of solutions $y_1[M](t),\dots,y_n[M](t)$ of equation \eqref{e-Ln}, where every $y_k[M](t)$ is uniquely determined by the following initial conditions: \[y_k^{(n-k)}[M](a)=1\,,\quad y_k^{(n-j)}[M](a)=0\,,\ j=1,\dots,n\,,\ j\neq k\,.\] Then, we denote the $n-1$ Wronskians as \begin{equation} W^n_k[M](t):=\left| \begin{array}{ccc} y_1[M](t)&\dots&y_k[M](t)\\ \vdots&\cdots&\vdots\\ y_1[M]^{(k-1)}(t)&\cdots&y_k[M]^{(k-1)}(t)\end{array}\right| \,,\quad k=1,\dots,n-1 \,. \end{equation} \begin{proposition}\label{P::3} There exists a solution of equation (\ref{e-Ln}) which verifies the boundary conditions $(k,n-k)$ on $[a,b]$ if, and only if, $W^n_{n-k}[M](b)=0$. \end{proposition} \begin{definition} Denote by $\omega_M(a)$ the least $b>a$, if one exists, at which one of the Wronskians $W^n_1[M](b),\dots,$ $W^n_{n-1}[M](b)$ vanishes. \end{definition} The next result gives us a relation between this concept and the one given in Definition \ref{Def::1}. \begin{proposition}\label{P::4} $\eta_M(a)=\omega_M(a)$. \end{proposition} \begin{proposition}\label{P::2} Let $b=\eta_M(a)$ and let $n-k\in\left\lbrace 1,\dots,n-1\right\rbrace $ be such that $W^n_{n-k}[M](b)=0$ and $W^n_{\ell}[M](b)\neq 0$ for every $\ell<n-k$. The corresponding solution of \eqref{e-Ln} with $(k,n-k)$ boundary conditions is uniquely determined up to a constant factor, and does not vanish on the open interval $(a,b)$. \end{proposition} Now, we are going to introduce the concept of Green's function related to the operator $T_n[M]$ coupled with boundary conditions (\ref{e-k-n-k}); see \cite{Cab} for details.
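Before doing so, it may help to see the objects just introduced in the simplest constant-coefficient case (an added example, not in the original): for $n=2$ and $T_2[M]\,u=u''+M\,u$ with $M>0$, the normalized fundamental system and the single Wronskian are explicit.

```latex
% Added example: n=2, u''+Mu=0 with M>0. The normalization
% y_k^{(n-k)}[M](a)=1, y_k^{(n-j)}[M](a)=0 for j\neq k gives
\[
  y_1[M](t)=\frac{\sin\bigl(\sqrt{M}\,(t-a)\bigr)}{\sqrt{M}}\,,\qquad
  y_2[M](t)=\cos\bigl(\sqrt{M}\,(t-a)\bigr)\,,
\]
% so the only Wronskian to monitor is
\[
  W^2_1[M](t)=y_1[M](t)\,,
\]
% whose first zero to the right of $a$ is $a+\pi/\sqrt{M}$. Hence
\[
  \eta_M(a)=\omega_M(a)=a+\frac{\pi}{\sqrt{M}}\,,
\]
% in agreement with Proposition \ref{P::4}.
```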
\begin{definition} We say that $g_M$ is a Green's function for problem (\ref{e-Ln})-(\ref{e-k-n-k}) if it satisfies the following properties: \begin{itemize} \item[$(g_1)$] $g_M$ is defined on the square $I\times I$ (except $t=s$ if $n=1$). \item[$(g_2)$] For $k=0,1,\dots,n-2$, the partial derivatives $\dfrac{\partial^k g_M}{\partial t^k}$ exist and they are continuous on $I\times I$. \item[$(g_3)$] $\dfrac{\partial^{n-1} g_M}{\partial t^{n-1}}$ and $\dfrac{\partial^n g_M}{\partial t^n}$ exist and they are continuous on the triangles $a\leq s<t\leq b$ and $a\leq t<s\leq b$. \item[$(g_4)$] For each $s\in(a,b)$, the function $t\rightarrow g_M(t,s)$ is a solution of the differential equation (\ref{e-Ln}) on $[a,s)\cup (s,b]$. \item[$(g_5)$] For each $t\in(a,b)$ there exist the lateral limits {\footnotesize \[\dfrac{\partial^{n-1} }{\partial t^{n-1}}g_M(t^-,t)=\dfrac{\partial^{n-1} }{\partial t^{n-1}}g_M(t,t^+)\quad\text{and}\quad \dfrac{\partial^{n-1} }{\partial t^{n-1}}g_M(t,t^-)=\dfrac{\partial^{n-1} }{\partial t^{n-1}}g_M(t^+,t)\]} and, moreover {\small \[\dfrac{\partial^{n-1} }{\partial t^{n-1}}g_M(t^+,t)-\dfrac{\partial^{n-1} }{\partial t^{n-1}}g_M(t^-,t)=\dfrac{\partial^{n-1} }{\partial t^{n-1}}g_M(t,t^-)-\dfrac{\partial^{n-1} }{\partial t^{n-1}}g_M(t,t^+)=1\,.\]} \item[$(g_6)$] For each $s\in(a,b)$, the function $t\rightarrow g_M(t,s)$ satisfies the boundary conditions $(k,n-k)$, i.e., {\small \[g_M(a,s)=\cdots=\dfrac{\partial^{k-1} }{\partial t^{k-1}}g_M(a,s)=g_M(b,s)=\cdots=\dfrac{\partial^{n-k-1} }{\partial t^{n-k-1}}g_M(b,s)=0\,.\]} \end{itemize} \end{definition} Denote the Green's function related to the operator $T_n[M]$ in $X_k$ as $g_{M,k}$.
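To make properties $(g_1)$–$(g_6)$ concrete, consider the model case $n=2$, $k=1$, $M=0$, that is $u''=\sigma$ with $u(a)=u(b)=0$, whose Green's function is the classical kernel below. The following script (an added numerical illustration, with the hypothetical choice $a=0$, $b=1$) checks the jump condition of $(g_5)$ and the representation $u(t)=\int_a^b g(t,s)\,\sigma(s)\,ds$ for $\sigma\equiv 1$, whose exact solution is $u(t)=\frac{(t-a)(t-b)}{2}$.

```python
a, b = 0.0, 1.0  # hypothetical choice of interval

def g(t, s):
    """Classical Green's function of u''(t) = sigma(t), u(a) = u(b) = 0."""
    if t <= s:
        return (t - a) * (s - b) / (b - a)
    return (s - a) * (t - b) / (b - a)

# (g_5): the jump of dg/dt across t = s equals 1.
s0 = 0.3
jump = (s0 - a) / (b - a) - (s0 - b) / (b - a)
assert abs(jump - 1.0) < 1e-12

# Representation u(t) = \int_a^b g(t,s) sigma(s) ds with sigma == 1:
# the exact solution of u'' = 1, u(a) = u(b) = 0 is (t - a)(t - b)/2.
n = 20000
for t in (0.1, 0.37, 0.5, 0.9):
    integral = sum(g(t, a + (b - a) * (j + 0.5) / n) * (b - a) / n
                   for j in range(n))          # midpoint rule
    assert abs(integral - (t - a) * (t - b) / 2) < 1e-6
    assert g(t, 0.5) <= 0   # nonpositive kernel: here n - k = 1 is odd
```

For general $T_n[M]$ the kernel is not explicit, which is precisely why the constant-sign and spectral characterizations collected next are useful.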
If equation \eqref{e-Ln} is disconjugate on $I$ and $u$ is a solution of problem $T_n[M]\,u(t)=\sigma(t)$, $t\in I$, with boundary conditions $(k,n-k)$, it is uniquely determined by the expression \[u(t)=\int_a^bg_{M,k}(t,s)\,\sigma(s)\,ds\,.\] We also mention a result which appears on \cite[Chapter 3, Section 6]{Cop} and that connects the disconjugacy and the sign of Green's function related to problem (\ref{e-Ln})-(\ref{e-k-n-k}). \begin{lemma}\label{L::1} If the linear differential equation (\ref{e-Ln}) is disconjugate on $I$ and $g(t,s)$ is the Green's function related to problem (\ref{e-Ln})-(\ref{e-k-n-k}), by defining $p(t)=(t-a)^k\,(t-b)^{n-k}$ we have that \[ g(t,s)\,p(t)\geq 0\,,\quad \forall\,(t,s)\in I \times I\quad \text{and}\quad \dfrac{g(t,s)}{p(t)}>0\,,\quad \forall\,(t,s)\in I\times (a,b)\,.\] \end{lemma} In the sequel, we introduce two conditions on $g_M(t,s)$ that will be used along the paper, see \cite[Section 1.8]{Cab}. \begin{itemize} \item[$(P_g$)] Suppose that there is a continuous function $\phi(t)>0$ for all $t\in (a,b)$ and $k_1,\ k_2\in \mathcal{L}^1(I)$, such that $0<k_1(s)<k_2(s)$ for a.e. $s\in I$, satisfying \[\phi(t)\,k_1(s)\leq g_M(t,s)\leq \phi(t)\, k_2(s)\,,\quad \text{for a. e. } (t,s)\in I \times I \,.\] \item[($N_g$)] Suppose that there is a continuous function $\phi(t)>0$ for all $t\in (a,b)$ and $k_1,\ k_2\in \mathcal{L}^1(I)$, such that $k_1(s)<k_2(s)<0$ for a.e. $s\in I$, satisfying \[\phi(t)\,k_1(s)\leq g_M(t,s)\leq \phi(t)\, k_2(s)\,,\quad \text{for a. e. }(t,s)\in I \times I\,.\] \end{itemize} Next result, which appears in \cite{CabSaa}, gives us a property of the operator under the disconjugacy hypothesis. \begin{lemma}\label{L::2} Let $\bar{M}\in\mathbb{R}$ be such that $T_n[\bar{M}]\,u(t)=0$ is disconjugate on $I$. 
Then the following properties are fulfilled: \begin{itemize} \item If $n-k$ is even, then $T_n[\bar{M}]$ is an inverse positive operator on $X_k$ and its related Green's function, $g_{\bar{M}}(t,s)$, satisfies ($P_g$). \item If $n-k$ is odd, then $T_n[\bar{M}]$ is an inverse negative operator on $X_k$ and its related Green's function satisfies ($N_g$). \end{itemize} \end{lemma} The following result, which appears in \cite[Theorem 3.2]{Neh}, shows a property of the eigenvalues of a disconjugate operator. \begin{theorem}\label{T::10} Let $\bar{M}\in\mathbb{R}$ be such that $T_n[\bar{M}]\,u(t)=0$ is disconjugate on $I$. Then \begin{itemize} \item If $n-k$ is even, there is no eigenvalue of $T_n[\bar{M}]$ on $X_k$ such that $\lambda<0$. \item If $n-k$ is odd, there is no eigenvalue of $T_n[\bar{M}]$ on $X_k$ such that $\lambda>0$. \end{itemize} \end{theorem} The next two results, see \cite[Section 1.8]{Cab}, ensure the existence of eigenvalues in different cases. \begin{theorem} Let $\bar{M}\in \mathbb{R}$ be fixed. If the operator $T_n[\bar{M}]$ is invertible in $X_k$ and its related Green's function satisfies condition $(P_g)$, then there exists $\lambda_1>0$, the least eigenvalue in absolute value of the operator $T_n[\bar{M}]$ in $X_k$. Moreover, there exists a nontrivial constant sign eigenfunction corresponding to the eigenvalue $\lambda_1$. \end{theorem} \begin{theorem} Let $\bar{M}\in \mathbb{R}$ be fixed. If the operator $T_n[\bar{M}]$ is invertible in $X_k$ and its related Green's function satisfies condition $(N_g)$, then there exists $\lambda_2<0$, the least eigenvalue in absolute value of the operator $T_n[\bar{M}]$ in $X_k$. Moreover, there exists a nontrivial constant sign eigenfunction corresponding to the eigenvalue $\lambda_2$. \end{theorem} Finally, we introduce the following sets, which characterize the intervals of constant sign of $g_M(t,s)$.
\begin{eqnarray} \nonumber P_T&=& \left\lbrace M\in \mathbb{R} \mid g_M(t,s)\geq 0\,,\quad \forall \,(t,s)\in I\times I\right\rbrace, \\\nonumber N_T&=& \left\lbrace M\in \mathbb{R} \mid g_M(t,s)\leq 0\,,\quad \forall\,(t,s)\in I\times I\right\rbrace. \end{eqnarray} The next results describe the structure of these two parameter sets; see \cite[Section 1.8]{Cab}. \begin{theorem}\label{T::6} Let $\bar{M}\in\mathbb{R}$ be fixed. Suppose that operator $T_n[\bar{M}]$ is invertible on $X_k$, its related Green's function is nonnegative on $I \times I$, it satisfies condition ($P_g$), and the set $P_T$ is bounded from above. Then $P_T=(\bar{M}-\lambda_1,\bar{M}-\bar{\mu}]$, with $\lambda_1>0$ the least positive eigenvalue of operator $T_n[\bar{M}]$ in $X_k$ and $\bar{\mu}<0$ such that $T_n[\bar{M}-\bar{\mu}]$ is invertible in $X_k$ and the related nonnegative Green's function $g_{\bar{M}-\bar{\mu}}$ vanishes at some points on the square $I\times I$. \end{theorem} \begin{theorem}\label{T::7} Let $\bar{M}\in\mathbb{R}$ be fixed. Suppose that operator $T_n[\bar{M}]$ is invertible in $X_k$, its related Green's function is nonpositive on $I\times I$, it satisfies condition ($N_g$), and the set $N_T$ is bounded from below. Then $N_T=[\bar{M}-\bar{\mu},\bar{M}-\lambda_2)$, with $\lambda_2<0$ the biggest negative eigenvalue of operator $T_n[\bar{M}]$ in $X_k$ and $\bar{\mu}>0$ such that $T_n[\bar{M}-\bar{\mu}]$ is invertible in $X_k$ and the related nonpositive Green's function $g_{\bar{M}-\bar{\mu}}$ vanishes at some points on the square $I\times I$. \end{theorem} \section{Characterization of disconjugacy} This section is devoted to proving the main result of this paper, which is the following. \begin{theorem}\label{T::aut} Let $\bar{M}\in\mathbb{R}$ and $n\geq 2$ be such that $T_n[\bar{M}]\,u(t)=0$ is a disconjugate equation on $I$.
Then, $T_n[M]\,u(t)=0$ is a disconjugate equation on $I$ if, and only if, $M\in(\bar{M}-\lambda_1,\bar{M}-\lambda_2)$, where \begin{itemize} \item $\lambda_1=+\infty$ if $n=2$ and, for $n>2$, $\lambda_1>0$ is the minimum of the least positive eigenvalues of $T_n[\bar{M}]$ in $X_k$, with $n-k$ even. \item $\lambda_2<0$ is the maximum of the biggest negative eigenvalues of $T_n[\bar{M}]$ in $X_k$, with $n-k$ odd. \end{itemize} \end{theorem} \begin{proof} Let $n>2$. First, we are going to see that the optimal interval of disconjugacy, $D_{\bar{M}}$, must necessarily be a subset of $(\bar{M}-\lambda_1,\bar{M}-\lambda_2)$. By Lemma \ref{L::1}, it is known that if $M\in D_{\bar{M}}$, then the Green's function related to operator $T_n[M]$ in $X_k$ is of constant sign, positive if $n-k$ is even and negative if $n-k$ is odd. Let $\widehat{k}\in\left\lbrace 1,\dots,n-1\right\rbrace $ be such that $n-\widehat{k}$ is even and $\lambda_1$ is attained as the least positive eigenvalue of $T_n[\bar{M}]$ in $X_{\widehat{k}}$. Using Lemma \ref{L::2} and Theorem \ref{T::6}, we can affirm that $g_{\widehat{M},\widehat{k}}$ changes sign on $I\times I$ for $\widehat{M}\leq\bar{M}-\lambda_1$; hence $\widehat{M}\notin D_{\bar{M}}$ for every $\widehat{M}\leq \bar{M}-\lambda_1$. In an analogous way, let $\widetilde{k}\in\left\lbrace 1,\dots,n-1\right\rbrace $ be such that $n-\widetilde{k}$ is odd and $\lambda_2$ is attained as the biggest negative eigenvalue of $T_n[\bar{M}]$ in $X_{\widetilde{k}}$. Using the same arguments, now with Lemma \ref{L::2} and Theorem \ref{T::7}, we can affirm that $g_{\widetilde{M},\widetilde{k}}$ does not have constant sign on $I\times I$ for $\widetilde{M}\geq\bar{M}-\lambda_2$; hence $\widetilde{M}\notin D_{\bar{M}}$ for every $\widetilde{M}\geq \bar{M}-\lambda_2$. Hence, we have proved that $D_{\bar{M}}\subset\left( \bar{M}-\lambda_1,\bar{M}-\lambda_2\right) $. \vspace{0.3cm} Let us now see that $D_{\bar{M}}=\left( \bar{M}-\lambda_1,\bar{M}-\lambda_2\right) $.
Denote $M_1=\inf D_{\bar{M}}$ and $M_2=\sup D_{\bar{M}}$. By Proposition \ref{P::1}, $D_{\bar{M}}$ must be an open interval; in particular, $M_j\neq\bar{M}$ for $j=1,2$. If $D_{\bar{M}}\neq (\bar{M}-\lambda_1,\bar{M}-\lambda_2)$, then at least one of the two following inequalities holds: either $M_1>\bar{M}-\lambda_1$ or $M_2<\bar{M}-\lambda_2$. Suppose that the first inequality is fulfilled. Since $T_n[M_1]\,u(t)=0$ is not a disconjugate equation on $I$, we have that $c=\eta(a)\leq b$. Using Proposition \ref{P::2}, we can ensure the existence of $\ell\in\left\lbrace 1,\dots,n-1\right\rbrace $ such that there exists a solution of $T_n[M_1]\,u(t)=0$ satisfying the boundary conditions $(n-\ell,\ell)$ on $[a,c]$. If $c=b$, then $\bar{M}-M_1\in(\lambda_2,\lambda_1)$ would be an eigenvalue of $T_n[\bar{M}]$ in $X_\ell$, which contradicts the definition of $\lambda_1$ when $n-\ell$ is even and that of $\lambda_2$ when $n-\ell$ is odd. So, we have that $c<b$. Using Proposition \ref{P::3}, we know that $W_\ell[M_1](c)=0$. Moreover, since $T_n[M]\,u(t)=0$ is a disconjugate equation on $I$ for $M\in (M_1,M_2)$, we can affirm that $W_\ell[M_1+\delta](t)\neq0$ for every $t\in(a,b]$ and every $0<\delta<M_2-M_1$. Since $W_\ell[M](t)$ is a continuous function of $M$, we can affirm that $W_\ell[M_1](t)$ is of constant sign on a neighborhood of $c$, so it has a double zero at $c$ as a function of $t$.
Using the expression of the derivative of the Wronskian given in \cite{Neh}, we know that \begin{equation}\label{Ec::dw}0=\dfrac{\partial}{\partial t}W^n_{\ell}[M_1](t)_{\mid t=c}=\left| \begin{array}{ccc} y_1[M_1](c)&\dots &y_\ell[M_1](c)\\ &\vdots&\\ y_1^{(\ell-2)}[M_1](c)&\dots&y_\ell^{(\ell-2)} [M_1](c)\\ y_1^{(\ell)}[M_1](c)&\dots&y_\ell^{(\ell)}[M_1](c)\end{array}\right|.\end{equation} We consider the following solution of \eqref{e-Ln}: \[y(t)=\left| \begin{array}{ccc} y_1[M_1](c)&\dots &y_\ell[M_1](c)\\ &\vdots&\\ y_1^{(\ell-2)}[M_1](c)&\dots&y_\ell^{(\ell-2)} [M_1](c)\\ y_1[M_1](t)&\dots&y_\ell[M_1](t)\end{array}\right|\,.\] Since it is a linear combination of $y_1[M_1],\dots, y_\ell[M_1]$, it is obvious that it has $n-\ell$ zeros at $a$. Since $W_\ell[M_1](c)=0$, $y$ trivially satisfies the boundary conditions $(n-\ell,\ell)$ on $[a,c]$. Moreover, using Proposition \ref{P::2}, since $c=\eta_{M_1}(a)$ we know that it does not vanish on the open interval $(a,c)$. Because of equality (\ref{Ec::dw}), it is not difficult to verify that this function also satisfies the boundary conditions $(n-\ell-1,\ell+1)$ on $[a,c]$. As a consequence, denoting by $g_{\bar{M},n-\ell}$ and $g_{\bar{M},n-\ell-1}$ the Green's functions related to problem \eqref{e-Ln}--\eqref{e-k-n-k} for $M=\bar{M}$, $b=c$ and $k=n-\ell$ or $k=n-\ell-1$, respectively, we deduce the following equalities for all $t\in [a,c]$: \[ y(t)=\int_a^c g_{\bar{M},n-\ell}(t,s)\,(\bar{M}-M_1)\,y(s)\,ds\,,\quad \text{and} \quad y(t)=\int_a^c g_{\bar{M},n-\ell-1}(t,s)\,(\bar{M}-M_1)\,y(s)\,ds\,.\] Using Lemma \ref{L::1}, we know that $g_{\bar{M},n-\ell}(t,s)$ and $g_{\bar{M},n-\ell-1}(t,s)$ have different constant signs on $[a,c]\times(a,c)$, so the last two equalities cannot be satisfied at the same time. Then we can affirm that $M_1=\bar{M}-\lambda_1$. With analogous arguments, we conclude that $M_2=\bar{M}-\lambda_2$. If $n=2$, the argument related to $\lambda_2$ is the same.
Suppose that there exists $M^*<\bar{M}$ such that equation \eqref{e-Ln} is not disconjugate on $[a,b]$; then $M_1<\bar{M}$ must be defined and so must $c=\eta_{M_1}(a)$. If $c=b$, this implies the existence of a positive eigenvalue of $T_n[\bar{M}]$ in $X_1$, which contradicts Theorem \ref{T::10}. Then, we can proceed analogously to the case $n>2$ with $c<b$ and arrive at a contradiction. So, our result is proved. \end{proof} \subsection{Particular cases} Since $u^{(n)}(t)=0$ is always a disconjugate equation on any interval (see \cite{Cop} for details), this result can obviously be applied to the operators $T_n[M]\,u(t)=u^{(n)}(t)+M\,u(t)$. So, in order to construct the optimal parameter set of disconjugacy, we only need to calculate the eigenvalues closest to zero. Up to eighth order, the eigenvalues of problems $(k,n-k)$ are explicitly obtained in \cite[Section 4]{CabSaa}. For instance, in the second order case we know that the closest to zero eigenvalue of $u''$ in $X_1$ is $-\pi^2$, so the optimal interval of disconjugacy is $(-\infty,\pi^2)$. Also, in third order we have that the least positive eigenvalue of the operator $u'''$ in $X_1$ is $\left( \lambda_3^1\right) ^3$ and the biggest negative eigenvalue of the operator $u'''$ in $X_2$ is $-\left( \lambda_3^1\right) ^3$, where $ \lambda_3^1\approxeq 4.23321$ is the least positive solution of \begin{equation*} \cos \left(\frac{1}{2} \sqrt{3} \lambda \right)-\sqrt{3} \sin \left(\frac{1}{2} \sqrt{3} \lambda \right)=e^{\frac{-3\,\lambda}{2}}\,. \end{equation*} So, we can affirm that $u'''(t)+M\,u(t)=0$ is a disconjugate equation if, and only if, $M\in \left( -\left( \lambda_3^1\right) ^3,\left( \lambda_3^1\right) ^3\right) $.
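The quoted value of $\lambda_3^1$ can be checked directly. As a minimal sketch (plain bisection over a hand-chosen bracket, not part of the paper), the following Python snippet locates the least positive root of the characteristic equation above; note that $\lambda=0$ is also a solution, so the bracket starts away from the origin:

```python
import math

# Characteristic equation for the extremal eigenvalues of u''' in X_1 / X_2:
#   cos(sqrt(3) L / 2) - sqrt(3) sin(sqrt(3) L / 2) = exp(-3 L / 2).
def f(lam):
    return (math.cos(math.sqrt(3) * lam / 2)
            - math.sqrt(3) * math.sin(math.sqrt(3) * lam / 2)
            - math.exp(-3 * lam / 2))

def bisect(fun, lo, hi, iters=80):
    """Plain bisection; assumes fun changes sign exactly once on [lo, hi]."""
    assert fun(lo) * fun(hi) < 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fun(lo) * fun(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam31 = bisect(f, 3.0, 4.5)   # least positive root (lambda = 0 is excluded)
print(round(lam31, 5))        # ~ 4.23321
```

Equivalently, since $\cos\theta-\sqrt3\sin\theta=2\cos(\theta+\pi/3)$, the root sits just below $\theta=3\pi/2$, which explains why $\lambda_3^1$ is close to $7\pi/(3\sqrt3)$.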
In fourth order, we obtain that the biggest negative eigenvalue of the operator $u^{(4)}$ in $X_1$ and $X_3$ is given by $-\left( \lambda_4^1\right) ^4$ and the least positive eigenvalue of the operator $u^{(4)}$ in $X_2$ is $\left( \lambda_4^2\right) ^4$, where $\lambda_4^1\approxeq 5.553$ is the least positive solution of \begin{equation*} \tan\left( \dfrac{\lambda}{\sqrt{2}}\right) =\tanh\left( \dfrac{\lambda}{\sqrt{2}}\right) \,, \end{equation*} and $\lambda_4^2\approxeq4.73004$ is the least positive solution of \begin{equation*} \cos (\lambda ) \cosh (\lambda )=1. \end{equation*} Hence, we can conclude that $u^{(4)}(t)+M\,u(t)=0$ is disconjugate on $[0,1]$ if, and only if, $M\in \left( -\left( \lambda_4^2\right) ^4,\left( \lambda_4^1\right) ^4\right) $. We point out that our result is also applicable to other kinds of operators, such as, for example, $T_6[M]\,u(t)=u^{(6)}(t)-8\,u^{(3)}(t)+M\,u(t)$ on $[0,1]$. It is not difficult to verify, by means of the characterization of the first conjugate point to the right of $a$ given in Proposition \ref{P::4}, that $T_6[0]\,u(t)=0$ is a disconjugate equation on $[0,1]$. So, we can apply Theorem \ref{T::aut}. From the self-adjoint character of the operator $T_6[0]$, one can conclude (see \cite{CabSaa} for details) that the eigenvalues related to the boundary conditions $(2,4)$ and $(4,2)$, and to $(5,1)$ and $(1,5)$, coincide. So, we only need to calculate the eigenvalues related to the $(1,5)$, $(2,4)$ and $(3,3)$ boundary conditions. Numerically, we obtain that: \begin{itemize} \item the biggest negative eigenvalue related to the boundary conditions $(5,1)$ is $\lambda_1\approxeq -(8.40247)^6$; \item the least positive eigenvalue related to the boundary conditions $(4,2)$ is $\lambda_2\approxeq (6.717)^6$; \item the biggest negative eigenvalue related to the boundary conditions $(3,3)$ is $\lambda_3\approxeq -(6.2835)^6$.
\end{itemize} Then we can conclude that $T_6[M]\,u(t)=0$ is a disconjugate equation on $[0,1]$ if, and only if, $M\in\left(-(6.717)^6,(6.2835)^6\right)$. \vspace{0.3cm} Let us now consider the operator $T_4[M]\,u(t)=u^{(4)}(t)+50\,u''(t)+M\,u(t)$. In this case, if we study the operator for $M=0$, we obtain \[W_2^4[0](t)=\frac{5 \sqrt{2} t \sin \left(5 \sqrt{2} t\right)+2 \cos \left(5 \sqrt{2} t\right)-2}{2500}\,,\] which changes sign on $[0,1]$. So, $T_4[0]\,u(t)=0$ is not a disconjugate equation on $[0,1]$. However, if we take $\bar{M}=200$, we can verify, studying its different Wronskians (see Propositions \ref{P::3} and \ref{P::4}), that $T_4[200]\,u(t)=0$ is a disconjugate equation on $[0,1]$. Hence we can apply Theorem \ref{T::aut} to this problem. Due to the fact that it is also a self-adjoint problem, we only need to obtain the eigenvalues related to the boundary conditions $(3,1)$ and $(2,2)$. The eigenvalue related to the boundary conditions $(3,1)$ is given by $-\lambda_1^4$, where $\lambda_1\approxeq 3.71137$ is the least positive solution of the equation \[\sqrt{25-\sqrt{425-\lambda ^4}} \sin \left(\sqrt{\sqrt{425-\lambda ^4}+25}\right)-\sqrt{\sqrt{425-\lambda ^4}+25} \sin \left(\sqrt{25-\sqrt{425-\lambda ^4}}\right)=0\,,\] and the eigenvalue related to the boundary conditions $(2,2)$ is given by $\lambda_2^4$, where $\lambda_2\approxeq 2.77939$ is the least positive solution of the equation {\scriptsize \[-2 \sqrt{200-\lambda ^4}+50 \sin \left(\sqrt{25-\sqrt{\lambda ^4+425}}\right) \sin \left(\sqrt{\sqrt{\lambda ^4+425}+25}\right)+2 \sqrt{200-\lambda ^4} \cos \left(\sqrt{25-\sqrt{\lambda ^4+425}}\right) \cos \left(\sqrt{\sqrt{\lambda ^4+425}+25}\right)=0\,.\]} Hence we conclude that $T_4[M]\,u(t)=0$ is a disconjugate equation if, and only if, \[M\in (200-\lambda_2^4,200+\lambda_1^4)\approxeq(140.324,389.73)\,.\] This characterization is also applicable to problems with non-constant coefficients.
For instance, let us consider the third order operator $T_3[M]\,u(t)=u^{(3)}(t)+\cos(10 t)\,u''(t)+M\,u(t)$ on $[0,1]$. Let us see that it is disconjugate for $M=0$. Since every nontrivial solution of the first order linear differential equation $L_1\,u(t)=u'(t)+\cos(10\,t)\,u(t)=0$ is of the form $u(t)= c_1\,e^{-\sin(10\,t)/10}$, with $c_1\neq 0$, and such a function never vanishes, we conclude that $L_1\,u(t)=0$ is disconjugate on any real interval. Also, it is well-known that the equation $L_2\,u(t)=u''(t)=0$ is disconjugate on any real interval. So, as a direct application of Theorem \ref{T::1}, we can affirm that $L_1\,L_2\,u(t)=T_3[0]\,u(t)=0$ is a disconjugate equation on any real interval. Now, using Propositions \ref{P::3} and \ref{P::4} again, we can obtain numerically the closest to zero eigenvalues related to the boundary conditions $(2,1)$ and $(1,2)$, which are $\lambda_1\approxeq -(4.33149)^3$ and $\lambda_2\approxeq (4.29055)^3$, respectively. So, we can affirm that $T_3[M]\,u(t)=0$ is a disconjugate equation on $[0,1]$ if, and only if, $M\in (-\lambda_2,-\lambda_1)$.
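As a quick sanity check on the constant-coefficient computations above, the fourth-order roots $\lambda_4^1$ and $\lambda_4^2$, as well as the sign change of $W_2^4[0]$ used in the example with $\bar{M}=200$, can be confirmed numerically. A minimal Python sketch (bisection brackets chosen by hand, not taken from the paper):

```python
import math

def bisect(fun, lo, hi, iters=80):
    """Plain bisection; assumes fun changes sign exactly once on [lo, hi]."""
    assert fun(lo) * fun(hi) < 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fun(lo) * fun(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Fourth-order constant-coefficient eigenvalue equations quoted above:
f1 = lambda lam: math.tan(lam / math.sqrt(2)) - math.tanh(lam / math.sqrt(2))
f2 = lambda lam: math.cos(lam) * math.cosh(lam) - 1.0
lam41 = bisect(f1, 4.6, 6.4)   # bracket avoids the pole of tan at lam = 3*pi/sqrt(2)
lam42 = bisect(f2, 4.0, 5.0)   # least positive root of cos(lam) cosh(lam) = 1

# Sign change of W_2^4[0](t) for u'''' + 50 u'' on [0, 1]:
def W24(t):
    x = 5.0 * math.sqrt(2) * t
    return (x * math.sin(x) + 2.0 * math.cos(x) - 2.0) / 2500.0

values = [W24(i / 1000.0) for i in range(1001)]
print(round(lam41, 3), round(lam42, 5), min(values) < 0 < max(values))
# ~ 5.553  4.73004  True
```

The sign change occurs because $x\sin x+2\cos x-2$ is negative near $x=\pi$ but positive again past $x=2\pi$, and $5\sqrt2>2\pi$.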
arXiv:1506.03043 (math.CA), \emph{Disconjugacy characterization by means of spectral of $(k,n-k)$ problems}.
https://arxiv.org/abs/1510.01670
Sketching for Simultaneously Sparse and Low-Rank Covariance Matrices
We introduce a technique for estimating a structured covariance matrix from observations of a random vector which have been sketched. Each observed random vector $\boldsymbol{x}_t$ is reduced to a single number by taking its inner product against one of a number of pre-selected vectors $\boldsymbol{a}_\ell$. These observations are used to form estimates of linear observations of the covariance matrix $\boldsymbol{\varSigma}$, which is assumed to be simultaneously sparse and low-rank. We show that if the sketching vectors $\boldsymbol{a}_\ell$ have a special structure, then we can use a straightforward two-stage algorithm that exploits this structure. We show that the estimate is accurate when the number of sketches is proportional to the maximum of the rank times the number of significant rows/columns of $\boldsymbol{\varSigma}$. Moreover, our algorithm takes direct advantage of the low-rank structure of $\boldsymbol{\varSigma}$ by only manipulating matrices that are far smaller than the original covariance matrix.
\section{Introduction} We introduce and analyze a technique for estimating the covariance matrix $\mtx{\varSigma}$ of an $\sN$-dimensional random vector $\vct{x}$ from samples $\vct{x}_1,\vct{x}_2,\ldots,\vct{x}_{Q}$. We will show that this matrix can be estimated accurately from {\em sketches} or {\em compressed measurements} of $\vct{x}$. Different methods for sketching covariance matrices that are either Toeplitz, sparse, or low-rank are studied in \cite{bioucas-dias14co} and \cite{dasarathy15sk}. Sketching of simultaneously structured covariance matrices (e.g., low-rank and sparse matrices or low-rank and Toeplitz matrices) was first considered in \cite{chen14es} and \cite{chen15ex}. In particular, it is shown in \cite{chen14es} that simultaneously $K\times K$-sparse and rank-$R$ covariance matrices can be recovered from $\msf{O}(K^2 R\log \sN)$ generic rank-one sketches through minimization of a mixture of the trace norm and the $\ell_1$ norm. It is recognized in \cite{chen14es} that the achieved sample complexity is suboptimal. In this paper, we show how the sample complexity can be improved using specifically tailored rank-one sketches. We also demonstrate how the estimation can be performed by manipulating matrices of dimension $\sN\timesR$, where $R$ is the rank of the target. As the samples $\vct{x}_t$ are presented, they are mapped into scalar values by taking an inner product against one of $L$ different vectors $\vct{a}_1,\ldots,\vct{a}_{L}$. If a fixed vector $\vct{a}_\ell$ is used on $T$ different samples, then we have the estimate \begin{equation} \label{eq:approxmean} \vct{a}_\ell^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\mtx{\varSigma}\vct{a}_\ell = \mb{\msf E}\left[\vct{a}_\ell^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\vct{x}\vx^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\vct{a}_\ell\right] \approx \frac{1}{T}\sum_{t=1}^{T}|\<\vct{x}_t,\vct{a}_{\ell}\>|^2.
\end{equation} The $Q$ data points are thus turned into $L$ measurements of $\mtx{\varSigma}$ of the form \begin{equation} \label{eq:ymeas} y[\ell] = \vct{a}_\ell^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\mtx{\varSigma}\vct{a}_\ell + \mathrm{noise}. \end{equation} The measurements $y[\ell]$ can be formed in a decentralized manner, and since they are scalars they are easy to store and communicate. We will show that by choosing the vectors $\vct{a}_\ell$ attentively, we can estimate $\mtx{\varSigma}$ from $L=\msf{O}(KR\log \sN)$ rank-one sketches, a number much smaller than the number of entries in the covariance matrix: $L\ll\sN^2$. The proposed sketching scheme is similar to the efficient measurement scheme recently proposed for compressive phase retrieval \cite{iwen15ro,bahmani15ef}. Moreover, our algorithm for estimating $\mtx{\varSigma}$ works by manipulating matrices in factored form, making it scalable for large-$\sN$ regimes. \subsection{Data model} The data points $\vct{x}_1,\ldots,\vct{x}_{Q}\in\mathbb{R}^{\sN}$ are independent realizations of a zero-mean random vector with covariance matrix $\mtx{\varSigma}$. We will consider the case where $\mtx{\varSigma}$ is simultaneously sparse and low-rank. This means that we can closely approximate $\mtx{\varSigma}$ with the factorization \[ \mtx{\varSigma} \approx \mtx{U}\mU^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}, \] where $\mtx{U}$ is an $\sN\timesR$ matrix with at most $K$ non-zero rows (we will assume $R\leqK\ll\sN$). $\mtx{\varSigma}$ itself can be well-approximated by a matrix with $K^2$ non-zero terms --- all but $K$ rows (or columns) will be approximately zero, and in each of these $K$ significant rows, all but $K$ entries will be approximately zero. Other than these properties of the covariance matrix, our framework does not depend heavily on the particulars of the distribution of the $\vct{x}_q$.
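To make the data model concrete, the following Python sketch (our illustration, not code from the paper; Gaussian samples are an added assumption here, since the framework itself does not require them) draws samples whose covariance is simultaneously row-sparse and low-rank, and checks the sketch estimate \eqref{eq:approxmean} for a single sketching vector:

```python
import numpy as np

# Check (1/T) sum_t |<x_t, a>|^2 ~ a' Sigma a for one sketching vector a,
# with Sigma = U U' simultaneously K-row-sparse and rank R.
# Samples are generated as x_t = U z_t with z_t ~ N(0, I_R) (an assumption
# made purely for this illustration).
rng = np.random.default_rng(0)
N, K, R, T = 100, 8, 2, 100_000

U = np.zeros((N, R))
U[:K, :] = rng.standard_normal((K, R))      # K-row-sparse factor
Sigma = U @ U.T                             # rank-R, K x K-sparse covariance

a = rng.standard_normal(N)                  # one sketching vector
z = rng.standard_normal((T, R))             # x_t = U z_t, so <x_t, a> = z_t' (U' a)
y_hat = np.mean((z @ (U.T @ a)) ** 2)       # empirical sketch

rel_err = abs(y_hat - a @ Sigma @ a) / (a @ Sigma @ a)
print(rel_err)                              # concentrates at rate ~ sqrt(2 / T)
```

Note that the samples never need to be stored: each one contributes a single scalar $\langle\vct{x}_t,\vct{a}_\ell\rangle^2$ to a running average.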
The algorithms below depend on having bounds on the approximation error in \eqref{eq:approxmean}; these bounds might be derived from other known properties of $\mtx{\varSigma}$ (e.g.\ its trace or the dynamic range of its eigenvalues) or of the distribution of $\vct{x}$. \subsection{Choosing the sketching vectors} \label{sec:sketchingvectors} Each data vector $\vct{x}_t$ is compressed into a single number by taking an inner product against one of $L$ different $\vct{a}_\ell\in\mathbb{R}^{\sN}$. We will denote the set of indices that use vector $\vct{a}_\ell$ as $\set{T}_\ell$. It is not critically important how the $Q$ observations are divided among the $\set{T}_\ell$ other than that the sets should have close to the same size. For simplicity, we will assume that all of the index sets have the same number of terms, $T:=|\set{T}_\ell|$ for all $\ell$, and so $Q=\sTL$. The guarantees for the estimation algorithm presented below depend on the $\vct{a}_\ell$ having certain properties. We restrict the $\vct{a}_\ell$ to lie in an $M$-dimensional subspace of $\mathbb{R}^{\sN}$, generating them using \begin{equation} \label{eq:asubspace} \vct{a}_{\ell} = \mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\vct{w}_\ell, \end{equation} where $\mtx{\varPsi}$ is an $M\times\sN$ matrix whose rows form a basis for this subspace. We will take the $\vct{w}_\ell$ to be randomly generated. The analysis below holds when the $\vct{w}_\ell$ are independent and distributed $\mathrm{Normal}(\vct{0},{\bf I})$. However, it is likely that the analysis could be generalized to other distributions, and the only thing we rely on algorithmically is that all $L$ vectors lie in a subspace as in \eqref{eq:asubspace}. Our analysis also requires that the matrix $\mtx{\varPsi}$ is a stable embedding for $2K$-sparse vectors.
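The subspace construction \eqref{eq:asubspace} gives each sketch a nested form: with $\vct{a}_\ell=\mtx{\varPsi}^{\mathsf T}\vct{w}_\ell$, the quadratic form $\vct{a}_\ell^{\mathsf T}\mtx{\varSigma}\vct{a}_\ell$ equals the matrix inner product of $\mtx{\varPsi}\mtx{\varSigma}\mtx{\varPsi}^{\mathsf T}$ with $\vct{w}_\ell\vct{w}_\ell^{\mathsf T}$. A short numerical sketch of this identity (the Gaussian $\mtx{\varPsi}$ below is an assumption for illustration; the algebra holds for any $\mtx{\varPsi}$):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, L = 60, 12, 5

Psi = rng.standard_normal((M, N)) / np.sqrt(M)   # stand-in for a RIP matrix
G = rng.standard_normal((N, N))
Sigma = G @ G.T                                  # any PSD matrix works for the identity

gaps = []
for _ in range(L):
    w = rng.standard_normal(M)
    a = Psi.T @ w                                # a_l = Psi' w_l
    lhs = a @ Sigma @ a                          # quadratic sketch of Sigma
    rhs = np.sum((Psi @ Sigma @ Psi.T) * np.outer(w, w))   # <Psi Sigma Psi', w w'>
    gaps.append(abs(lhs - rhs) / abs(lhs))
max_gap = max(gaps)
print(max_gap < 1e-10)                           # identity holds up to round-off
```

This is exactly the nesting exploited below: the sketches only "see" $\mtx{\varSigma}$ through the small $M\timesM$ matrix $\mtx{\varPsi}\mtx{\varSigma}\mtx{\varPsi}^{\mathsf T}$.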
We will assume below that $\mtx{\varPsi}$ obeys the {\em restricted isometry property} \cite{candes06ne} \begin{equation} \label{eq:rip} (1-\delta_{K})\|\vct{z}_1-\vct{z}_2\|_2^2 ~\leq~ \|\mtx{\varPsi}(\vct{z}_1-\vct{z}_2)\|_2^2 ~\leq~ (1+\delta_{K})\|\vct{z}_1-\vct{z}_2\|_2^2, \end{equation} for all $K$-sparse $\vct{z}_1,\vct{z}_2\in\mathbb{R}^{\sN}$. There are many examples of matrices that have this property for a number of rows proportional to the sparsity $K$ times a logarithmic factor; see \cite{vershynin12in,rauhut10co} for detailed overviews. If $\mtx{\varPsi}$ is populated with independent \emph{subgaussian} random variables, then \eqref{eq:rip} holds with high probability for $M\gtrsimK\log(\sN/K)$. More structured matrices that allow for fast computations also have this property. For example, if $\mtx{\varPsi}$ consists of $M$ randomly selected rows of an $\sN\times\sN$ Fourier matrix, then \eqref{eq:rip} holds with high probability for $M\gtrsimK\log^4(\sN)$. With these conditions in place, we can write the covariance measurements as \begin{align} \nonumber y[\ell] = \frac{1}{T}\sum_{t\in\set{T}_\ell}|\<\vct{x}_t,\vct{a}_\ell\>|^2 &\approx \vct{w}_\ell^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\mtx{\varPsi}\mtx{\varSigma}\mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\vct{w}_\ell \\ \label{eq:rank1ip} &= \<\mtx{\varPsi}\mtx{\varSigma}\mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}},\vct{w}_\ell\vct{w}_\ell^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\>. \end{align} That is, each $y[\ell]$ is the matrix inner product of $\mtx{\varPsi}\mtx{\varSigma}\mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}$ with an $M\timesM$ rank-1 random matrix. This gives the covariance matrix measurements a {\em nested structure}. We can write \[ \vct{y} = \linop{W}(\mtx{\varPsi}\mtx{\varSigma}\mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}) + \mathrm{noise}.
\] The $\sN\times\sN$ covariance matrix $\mtx{\varSigma}$ is first mapped to an $M\timesM$ matrix by applying $\mtx{\varPsi}$ to either side. Then $\linop{W}$ maps this result to $\mathbb{R}^{L}$ by taking the series of matrix inner products in \eqref{eq:rank1ip}. In the next section, we will see how this nested structure allows us to decouple the estimation process into a low-rank estimation stage followed by two sparse approximation stages. In Section~\ref{sec:analysis}, we present results that relate the number of sketches $L$ and their accuracy (which is controlled by $T$) to the accuracy of our estimation procedure. \subsection{Estimating the covariance} \label{sec:algorithm} Let $\mtx{\varSigma}^{\star}\in\mathbb{R}^{\sN\times\sN}$ be a rank-$R$ positive semidefinite matrix that has at most $K$ nonzero rows (and columns) with $R < L\ll \sN$. Given a matrix $\mtx{\varPsi}\in\mathbb{R}^{M\times\sN}$ and a linear operator $\linop{W}:\mtx{B}\mapsto\left[\vct{w}_{\ell}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\mtx{B}\vct{w}_\ell\right]_{\ell=1}^{L}$, where $\vct{w}_\ell\sim\mathrm{Normal}\left(\vct{0},{\bf I}\right)$, we consider the problem of estimating $\mtx{\varSigma}^{\star}$ from noisy linear measurements of the form \begin{align} \vct{y} & =\linop{W}\left(\mtx{\varPsi}\mtx{\varSigma}^\star\mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\right) + \vct{z}, \label{eq:measurements} \end{align} where $\vct{z}\in\mathbb{R}^{L}$ is the noise vector that is bounded as $\|\vct{z}\|_2\leq\varepsilon$. We propose the following two-stage procedure for estimation of $\mtx{\varSigma}^{\star}$. \begin{enumerate} \item {\bf Low-rank estimation stage}: Since $\mtx{\varSigma}^\star$ is low-rank, we know that $\mtx{\varPsi}\mtx{\varSigma}\mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}$ will be low-rank as well. Our first stage ``inverts'' the $\linop{W}(\cdot)$ operator by looking for a low-rank matrix that explains the measurements $\vct{y}$. 
There are a number of ways we might do this, but here we will use the standard convex relaxation that minimizes the trace norm (nuclear norm) in place of the rank: \begin{align} \widehat{\mtx{B}} & \in\argmin_{\mtx{B}\succcurlyeq{\bf 0}}\ \operatorname{trace}\left(\mtx{B}\right) \label{eq:pre-estimate}\\ &\hspace{3ex}\mr{subject\ to\ }\|\linop{W}\left(\mtx{B}\right)-\vct{y}\|_{2}\leq\varepsilon.\nonumber \end{align} The output $\widehat{\mtx{B}}$ of this first stage can be thought of as an estimate of $\mtx{\varPsi}\mtx{\varSigma}^\star\mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}$. \item {\bf Sparse estimation stage}: In the second stage, we invert the action of $\mtx{\varPsi}$ on the left and $\mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}$ on the right. It is conceivable that these two steps could be combined into a single sparse approximation step, but performing them sequentially leads to a natural analysis (as we will see in Section~\ref{sec:analysis}). \begin{enumerate} \item \label{enu:sparse-a} Since we are given $\widehat{\mtx{B}}\approx\mtx{\varPsi}\mtx{\varSigma}^\star\mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}$, we start by looking for a matrix with a small number of non-zero rows whose range is close to the range of $\mtx{\varSigma}^\star$. The fact that $\widehat{\mtx{B}}$ is approximately rank $R$ allows us to work with a rank $R$ approximation of $\widehat{\mtx{B}}$ and be computationally efficient. We start by computing the $M\timesR$ matrix $\widehat{\mtx{U}}$ of the top $R$ (unit-norm) eigenvectors of $\widehat{\mtx{B}}$. We can then look for a row-wise sparse $\sN\timesR$ matrix that is close to $\widehat{\mtx{B}}\widehat{\mtx{U}}$ after an application of $\mtx{\varPsi}$ on the left: \begin{align} \widehat{\mtx{Q}}_{1} & \in\argmin_{\mtx{Q}}\left\|\mtx{Q}\right\|_{1,2} \nonumber\\ &\hspace{3ex}\mr{subject\ to\ }\left\Vert \mtx{\varPsi}\mtx{Q}-\widehat{\mtx{B}}\widehat{\mtx{U}}\right\Vert _{F}\leq\frac{c_{1}\varepsilon}{\sqrt{L}}. 
\label{eq:sparseaopt} \end{align} The $\|\cdot\|_{1,2}$ norm above is a convex relaxation for the number of non-zero rows; $\|\mtx{Z}\|_{1,2} = \sum_{n=1}^{\sN}\|\mtx{Z}_{n,:}\|_2$, where $\mtx{Z}_{n,:}$ is the $n$th row of $\mtx{Z}$. Conceptually (and this is formalized in the analysis below), the output $\widehat{\mtx{Q}}_1$ can be used to form the approximation $\widehat{\mtx{Q}}_1\widehat{\mtx{U}}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\approx\mtx{\varSigma}^\star\mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}$. Thus we have effectively undone the action of $\mtx{\varPsi}$ on the left of $\mtx{\varSigma}^\star$. \item \label{enu:sparse-b} The second sparse approximation step does a similar computation on the row-space of the output above. We compute $\widehat{\mtx{V}}\in\mathbb{R}^{\sN\timesR}$, the matrix of the top $R$ left singular vectors of $\widehat{\mtx{Q}}_{1}$, and compute \begin{align} \widehat{\mtx{Q}}_{2} & \in\argmin_{\mtx{Q}}\left\Vert \mtx{Q}\right\Vert _{1,2} \nonumber\\ &\hspace{3ex}\mr{subject\ to\ }\left\Vert \mtx{\varPsi}\mtx{Q}- \widehat{\mtx{U}}\widehat{\mtx{Q}}_{1}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\widehat{\mtx{V}}\right\Vert _{F}\!\!\leq\!\frac{c_{2}\varepsilon}{\sqrt{L}}. \label{eq:sparsebopt} \end{align} Conceptually, the output $\widehat{\mtx{Q}}_{2}$ undoes the action of $\mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}$ on the right, giving us the approximation $\widehat{\mtx{Q}}_{2}\widehat{\mtx{V}}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\approx\mtx{\varSigma}^\star$. \end{enumerate} \end{enumerate} The output of the second stage is the pair of $\sN\timesR$ matrices $\widehat{\mtx{V}}$ and $\widehat{\mtx{Q}}_2$. These can be used to form an approximate factorization of $\mtx{\varSigma}^\star$.
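The two stages above call for convex solvers; as a hedged toy surrogate (our simplification, not the paper's algorithm), the first stage can at least be illustrated in the noiseless regime: with $L>M(M+1)/2$ sketches, ordinary least squares already inverts $\linop{W}$ and recovers $\mtx{\varPsi}\mtx{\varSigma}^\star\mtx{\varPsi}^{\mathsf T}$ exactly. The trace-norm program \eqref{eq:pre-estimate} is what reduces the requirement to $L\gtrsim RM$ and adds noise robustness.

```python
import numpy as np

# Toy, noiseless surrogate for the low-rank stage: invert W by least squares.
# Rows of A are vec(w w'), so y = A vec(B) encodes y_l = w_l' B w_l; the
# minimum-norm lstsq solution is symmetric and matches B exactly once the
# w_l w_l' span the M(M+1)/2-dimensional space of symmetric matrices.
rng = np.random.default_rng(2)
N, M, K, R, L = 40, 8, 5, 2, 80

Psi = rng.standard_normal((M, N)) / np.sqrt(M)
U = np.zeros((N, R)); U[:K] = rng.standard_normal((K, R))
Sigma = U @ U.T
B_true = Psi @ Sigma @ Psi.T

W = rng.standard_normal((L, M))
A = np.stack([np.outer(w, w).ravel() for w in W])     # y = A vec(B)
y = A @ B_true.ravel()

B_vec, *_ = np.linalg.lstsq(A, y, rcond=None)
B_hat = B_vec.reshape(M, M)
B_hat = 0.5 * (B_hat + B_hat.T)
print(np.linalg.norm(B_hat - B_true) < 1e-6)          # exact up to round-off
```

After this stage, everything else in the pipeline operates on $M\timesM$ and $\sN\timesR$ objects, which is the source of the scalability claimed below.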
In Section~\ref{sec:analysis} below, we give a mathematical guarantee on how close \begin{align} \widehat{\mtx{\varSigma}} & =\widehat{\mtx{Q}}_{2}\widehat{\mtx{V}}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}, \label{eq:estimate} \end{align} is to the true covariance $\mtx{\varSigma}^\star$. The constants $c_1$ and $c_2$ in the sparse estimation stage above are user-defined; the analysis below relies on a particular choice for these parameters. The algorithm above is carefully designed to be sublinear in the number of entries in the $\sN\times\sN$ matrix $\mtx{\varSigma}^\star$. Stage 1 above is an optimization program over the cone of $M\timesM$ positive semidefinite matrices, and we have seen that we can take $M\gtrsimK\log^\alpha\sN$. The sparse approximation stage exclusively handles $\sN\timesR$ matrices. This allows our algorithmic framework to scale to regimes where the dimension $\sN$ and the number of samples $Q$ are large. \subsection{Noise magnitude} In general, all the algorithm proposed above needs is a bound on the total size of the error in the measurements of the covariance matrix. With the model in \eqref{eq:approxmean} and \eqref{eq:ymeas}, this error is simply the deviation of a quadratic function of the random vector $\vct{x}$ from its mean. If we have information about the distribution of $\vct{x}$, we may be able to derive the desired bound in a principled way. To demonstrate this, suppose that $\vct{x}\sim\mr{Normal}(\mb{0},\mtx{\varSigma})$, which implies $\<\vct{x},\vct{a}_\ell\>\sim\mr{Normal}(0,\vct{a}_\ell^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\mtx{\varSigma}\vct{a}_\ell)$. This means that \[ y[\ell] = \frac{1}{T}\sum_{t\in\set{T}_\ell}|\<\vct{x}_t,\vct{a}_\ell\>|^2 \] is proportional to a Chi-squared random variable with $T$ degrees of freedom.
Since $\mb{\msf E}[y[\ell]]=\vct{a}_\ell^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\mtx{\varSigma}\vct{a}_\ell$, the mean energy of the $\ell$th component $e[\ell]:=y[\ell]-\vct{a}_\ell^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\mtx{\varSigma}\vct{a}_\ell$ of the error is \[ \mb{\msf E}[e[\ell]^2] = \frac{2(\vct{a}_\ell^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\mtx{\varSigma}\vct{a}_\ell)^2}{T}, \] and the total error $\|\vct{e}\|_2^2$ can be shown to concentrate around \begin{align*} \mb{\msf E}[\|\vct{e}\|_2^2] &= \frac{2}{T}\sum_{\ell=1}^L(\vct{a}_\ell^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\mtx{\varSigma}\vct{a}_\ell)^2. \end{align*} The resulting concentration bounds may depend on the (a priori unknown) covariance $\mtx{\varSigma}$, but they can usually be expressed in terms of a few coarse attributes of $\mtx{\varSigma}$. If the $\vct{a}_\ell$ are chosen as in Section~\ref{sec:sketchingvectors}, $\vct{a}_\ell^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\mtx{\varSigma}\vct{a}_\ell$ will itself be a weighted sum of Chi-squared random variables whose moments can be calculated in terms of the Frobenius and spectral norms of $\mtx{\varSigma}$. \section{Analysis} \label{sec:analysis} Our main theorem shows that for an appropriate number of sketches, the estimation algorithm detailed in Section~\ref{sec:algorithm} produces a provably good estimate of $\mtx{\varSigma}^\star$. In addition to the number of sketches $L$ being sufficiently large, we also assume that $\mtx{\varPsi}$ obeys the restricted isometry property in \eqref{eq:rip}.
\begin{thm} \label{thm:accuracy} There is a constant $C_1$ such that if \begin{equation} \label{eq:Lbound} L ~\geq~ C_1\cdot R\cdot M, \end{equation} then for appropriately chosen constants $c_{1}$ and $c_{2}$ in \eqref{eq:sparseaopt} and \eqref{eq:sparsebopt} above, with probability exceeding $1-\mathrm{e}^{-\msf{O}(M)}$ the estimate in \eqref{eq:estimate} obeys \begin{align*} \left\Vert \widehat{\mtx{\varSigma}}-\mtx{\varSigma}^{\star}\right\Vert _{F} & \leq\frac{C_2\varepsilon}{\sqrt{L}}, \end{align*} where $C_2$ is another constant which depends on $c_1,c_2,$ and the restricted isometry constant $\delta_{K}$. \end{thm} The theorem above gives us a uniform guarantee; it holds simultaneously for all rank-$R$ and row-wise $K$-sparse $\mtx{\varSigma}^\star$ for the same $\{\vct{a}_\ell\}$. We will sketch a proof below, withholding some of the details due to space constraints. The accuracy of the first stage is established through a simple application of a recent result in the theory of low-rank matrix recovery. The work \cite{kueng14lo} establishes uniform bounds on the recovery of low-rank matrices from inner products against a series of independent, rank-1 symmetric random matrices. This exactly describes the $\linop{W}(\cdot)$ operator in \eqref{eq:measurements}. A direct application of the main theorem in that paper shows that for $L$ as in \eqref{eq:Lbound}, we have \begin{alignat}{1} \left\Vert \widehat{\mtx{B}}-\mtx{\varPsi}\mtx{\varSigma}^{\star}\mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\right\Vert _{F} & \leq\frac{c\varepsilon}{\sqrt{L}}, \label{eq:low-rank-accuracy} \end{alignat} with high probability for an absolute constant $c>0$. The accuracy of the second stage follows from the lemma below, which we will prove at the end of the section. 
\begin{lem} \label{lem:master} Suppose that for some rank-$R$ and row-wise $K$-sparse matrix $\mtx{\varSigma}^{\sharp}$, a matrix $\mtx{B}^{\sharp}$, and a constant $\epsilon$ we have \begin{align*} \left\Vert \mtx{\varPsi}\mtx{\varSigma}^{\sharp}-\mtx{B}^{\sharp}\right\Vert _{F} & \leq\epsilon, \end{align*} where $\mtx{\varPsi}$ obeys \eqref{eq:rip} with a sufficiently small $\delta_{K}$. Let $\mtx{U}^{\sharp}$ denote the top $R$ right singular vectors of $\mtx{B}^{\sharp}$ and \begin{align} \mtx{Q}^{\sharp} & =\argmin_{\mtx{Q}}\ \left\Vert \mtx{Q}\right\Vert _{1,2} \label{eq:Q-opt}\\ &\hspace{3ex} \mr{subject\ to\ }\left\Vert \mtx{\varPsi}\mtx{Q}-\mtx{B}^{\sharp}\mtx{U}^{\sharp}\right\Vert _{F}\leq2\epsilon.\nonumber \end{align} Then we have \begin{align*} \left\Vert \mtx{Q}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}-\mtx{\varSigma}^{\sharp}\right\Vert _{F} & \leq C\epsilon, \end{align*} for some constant $C>0$ depending only on $\delta_{K}$. \end{lem} With $\widehat{\mtx{U}}$ as the top $R$ eigenvectors of the output of the first stage $\widehat{\mtx{B}}$, Lemma~\ref{lem:master} tells us that the result $\widehat{\mtx{Q}}_1$ of solving \eqref{eq:sparseaopt} in Stage 2a will obey \begin{align*} \left\Vert \widehat{\mtx{U}}\widehat{\mtx{Q}}_{1}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}-\mtx{\varPsi}\mtx{\varSigma}^{\star}\right\Vert _{F}= \left\Vert \widehat{\mtx{Q}}_{1}\widehat{\mtx{U}}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}-\mtx{\varSigma}^{\star}\mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\right\Vert _{F} & \leq\frac{c'\varepsilon}{\sqrt{L}}, \end{align*} for some constant $c'$. 
For stage 2b, with $\widehat{\mtx{V}}\in\mathbb{R}^{\sN\timesR}$ as the top $R$ left singular vectors of $\widehat{\mtx{Q}}_1$, we can again invoke the lemma and conclude that for some constant $C_2$ we have \begin{align*} \left\Vert \widehat{\mtx{\varSigma}}-\mtx{\varSigma}^{\star}\right\Vert _{F} &=\left\Vert \widehat{\mtx{Q}}_{2}\widehat{\mtx{V}}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}-\mtx{\varSigma}^{\star}\right\Vert _{F} \leq\frac{C_{2}\varepsilon}{\sqrt{L}}. \end{align*} It remains to prove Lemma~\ref{lem:master}. Because $\mtx{\varPsi}\mtx{\varSigma}^{\sharp}$ is rank-$R$ and $\mtx{B}^{\sharp}\mtx{U}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}$ is the best rank-$R$ approximation of $\mtx{B}^{\sharp}$, we have \begin{align*} \left\Vert \mtx{\varPsi}\mtx{\varSigma}^{\sharp}-\mtx{B}^{\sharp}\mtx{U}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}\right\Vert _{F} & \leq\left\Vert \mtx{\varPsi}\mtx{\varSigma}^{\sharp}\!-\!\mtx{B}^{\sharp}\right\Vert _{F}\!\! +\! \left\Vert \mtx{B}^{\sharp}\!-\!\mtx{B}^{\sharp}\mtx{U}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}\right\Vert _{F}\\ & \leq 2\left\Vert \mtx{\varPsi}\mtx{\varSigma}^{\sharp}-\mtx{B}^{\sharp}\right\Vert _{F} \leq 2\epsilon. 
\end{align*} Therefore, we can write \begin{align*} &\left\Vert \mtx{\varPsi}\mtx{\varSigma}^{\sharp}\left({\bf I}-\mtx{U}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}\right)\right\Vert _{F}^{2} + \left\Vert \left(\mtx{\varPsi}\mtx{\varSigma}^{\sharp}-\mtx{B}^{\sharp}\right)\mtx{U}^{\sharp}\right\Vert _{F}^{2} \\ & =\left\Vert \mtx{\varPsi}\mtx{\varSigma}^{\sharp}\left({\bf I} - \mtx{U}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}\right)\right\Vert _{F}^{2} + \left\Vert \left(\mtx{\varPsi}\mtx{\varSigma}^{\sharp}-\mtx{B}^{\sharp}\right)\mtx{U}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}\right\Vert _{F}^{2}\\ & =\left\Vert \mtx{\varPsi}\mtx{\varSigma}^{\sharp}-\mtx{B}^{\sharp}\mtx{U}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}\right\Vert _{F}^{2} \leq 4 \epsilon^{2}. \end{align*} In particular, since $\mtx{\varSigma}^\sharp$ is row-wise $K$-sparse, from \eqref{eq:rip} we have \begin{align} \sqrt{1-\delta_{K}}\left\Vert \mtx{\varSigma}^{\sharp}\left({\bf I}-\mtx{U}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}\right)\right\Vert _{F} & \leq\left\Vert \mtx{\varPsi}\mtx{\varSigma}^{\sharp}\left({\bf I}-\mtx{U}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}\right)\right\Vert _{F}\nonumber \\ & \leq2\epsilon. \label{eq:U-perp} \end{align} Moreover, we have already obtained \begin{align*} \left\Vert \mtx{\varPsi}\mtx{\varSigma}^{\sharp}\mtx{U}^{\sharp}-\mtx{B}^{\sharp}\mtx{U}^{\sharp}\right\Vert _{F} & \leq 2\epsilon. \end{align*} It follows from this latter bound that $\mtx{\varSigma}^{\sharp}\mtx{U}^{\sharp}$ is feasible in (\ref{eq:Q-opt}). Since $\mtx{\varPsi}$ obeys \eqref{eq:rip}, we can use standard results for compressive sensing of block-sparse signals (e.g.\ \cite{eldar09ro}) to guarantee that for some absolute constant $c>0$ we have \begin{align*} \left\Vert \mtx{Q}^{\sharp}-\mtx{\varSigma}^{\sharp}\mtx{U}^{\sharp}\right\Vert _{F} & \leq c\epsilon. 
\end{align*} Therefore, using \eqref{eq:U-perp} and with $C=\sqrt{c^{2}+\frac{4}{1-\delta_{K}}}$ we have \begin{align*} \left\Vert \mtx{Q}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}\!\!-\!\!\mtx{\varSigma}^{\sharp}\right\Vert _{F}^{2}\! \! &\! =\left\Vert \mtx{Q}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}\!\!-\!\mtx{\varSigma}^{\sharp}\mtx{U}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}\right\Vert _{F}^{2} \!\!\!+\! \left\Vert \!\mtx{\varSigma}^{\sharp}\!\left(\!{\bf I}\!-\!\mtx{U}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}\!\right)\right\Vert _{F}^{2}\\ &\! =\left\Vert \mtx{Q}^{\sharp}-\mtx{\varSigma}^{\sharp}\mtx{U}^{\sharp}\right\Vert _{F}^{2} +\left\Vert \mtx{\varSigma}^{\sharp}\left({\bf I} -\mtx{U}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}\right)\right\Vert _{F}^{2}\\ &\leq C^2\epsilon^2, \end{align*} and thereby $\left\Vert \mtx{Q}^{\sharp}\mtx{U}^{\sharp{\scalebox{0.5}[0.6]{$\mathsf{T}$}}}-\mtx{\varSigma}^{\sharp}\right\Vert _{F} \leq C\epsilon$. \section{Simulations} \label{sec:simulations} \begin{figure} \noindent\centering \includegraphics[width=1\columnwidth]{NEvsKR} \caption{The empirical $0.9$ quantile of estimation error vs. sparsity $K$ for rank $R$ in $\left\lbrace 2,4,8\right\rbrace$} \label{fig:NEvsK-R} \end{figure} We ran numerical simulations on synthetic data as follows. With $\sN=1000$ and for $R\in\left\lbrace 2,4,8 \right\rbrace$ and $K\in\left\lbrace 10,11,\dotsc,19\right\rbrace$, in each of the $100$ trials we generated an $\sN\times R$ matrix $\mtx{U}$ by drawing a $K\times R$ random matrix with iid standard Gaussian entries, modulating its columns by iid $\mr{Uniform}[0,1]$, and interleaving its rows with $\sN-K$ all-zero rows uniformly at random. Then $\mtx{\varSigma}^\star = \mtx{U}\mU^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}$ is selected as the target covariance matrix. 
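The synthetic-data recipe just described can be sketched in code (a scaled-down mock-up with arbitrary small dimensions and seed, not the exact experimental script):

```python
import random

random.seed(1)
N, K, R = 40, 6, 2

# K x R Gaussian core, columns modulated by iid Uniform[0,1] weights
w = [random.uniform(0.0, 1.0) for _ in range(R)]
core = [[random.gauss(0.0, 1.0) * w[r] for r in range(R)] for _ in range(K)]

# Embed the K nonzero rows at uniformly random positions among the N rows
support = sorted(random.sample(range(N), K))
U = [[0.0] * R for _ in range(N)]
for row, i in zip(core, support):
    U[i] = row

# Target covariance: Sigma* = U U^T (rank <= R and row-wise K-sparse by construction)
Sigma = [[sum(U[i][r] * U[j][r] for r in range(R)) for j in range(N)]
         for i in range(N)]

nonzero_rows = [i for i in range(N) if any(abs(v) > 0 for v in Sigma[i])]
print(len(nonzero_rows))   # at most K nonzero rows
```

By construction $\mtx{\varSigma}^\star=\mtx{U}\mtx{U}^{\mathsf{T}}$ is symmetric, has rank at most $R$, and has at most $K$ nonzero rows/columns, matching the structural model assumed by the estimator.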
We also set $M=\left\lceil 2K\left(1+\log\frac{\sN}{K}\right)\right\rceil$ and $L=3\sR M$. To compute the measurements we drew $2500$ iid samples $\vct{x}_t$ from $\mr{Normal}\left(\mb{0},\mtx{\varSigma}^\star\right)$ and $L$ samples of $\vct{a}_\ell = \mtx{\varPsi}^{\scalebox{0.5}[0.6]{$\mathsf{T}$}}\vct{w}_\ell$, where the $M\times\sN$ matrix $\mtx{\varPsi}$ is populated with iid $\mr{Normal}\left(0,\frac{1}{M}\right)$ entries and $\vct{w}_\ell \sim \mr{Normal}\left(\mb{0},{\bf I}\right)$. In each trial, we applied the proposed method with $\varepsilon = 2\sR K/\sqrt{T}$, $c_1 = \sqrt{3}$, and $c_2 =3$. Figure \ref{fig:NEvsK-R} illustrates the $0.9$ quantile of the relative error of the estimated covariance matrix as $K$ varies between $10$ and $19$ for $R=2$, $4$, and $8$. The error is almost flat as a function of $K$, in agreement with the theoretical analysis for the prescribed number of measurements. The higher variability for smaller values of $R$ is an effect of the smaller sample size. \bibliographystyle{IEEEtran}
https://arxiv.org/abs/1510.01670
Sketching for Simultaneously Sparse and Low-Rank Covariance Matrices
We introduce a technique for estimating a structured covariance matrix from observations of a random vector which have been sketched. Each observed random vector $\boldsymbol{x}_t$ is reduced to a single number by taking its inner product against one of a number of pre-selected vectors $\boldsymbol{a}_\ell$. These observations are used to form estimates of linear observations of the covariance matrix $\boldsymbol{\varSigma}$, which is assumed to be simultaneously sparse and low-rank. We show that if the sketching vectors $\boldsymbol{a}_\ell$ have a special structure, then we can use a straightforward two-stage algorithm that exploits this structure. We show that the estimate is accurate when the number of sketches is proportional to the maximum of the rank times the number of significant rows/columns of $\boldsymbol{\varSigma}$. Moreover, our algorithm takes direct advantage of the low-rank structure of $\boldsymbol{\varSigma}$ by only manipulating matrices that are far smaller than the original covariance matrix.
https://arxiv.org/abs/2005.05779
Instability of Defection in the Prisoner's Dilemma Under Best Experienced Payoff Dynamics
We study population dynamics under which each revising agent tests each strategy k times, with each trial being against a newly drawn opponent, and chooses the strategy whose mean payoff was highest. When k = 1, defection is globally stable in the prisoner's dilemma. By contrast, when k > 1 we show that there exists a globally stable state in which agents cooperate with probability between 28% and 50%. Next, we characterize stability of strict equilibria in general games. Our results demonstrate that the empirically plausible case of k > 1 can yield qualitatively different predictions than the case of k = 1 that is commonly studied in the literature.
\section{Introduction} The standard approach in game theory assumes that players form beliefs about the various uncertainties they face and then best respond to these beliefs. In equilibrium, the beliefs will be correct and the players will play a Nash equilibrium. However, in some economic environments where the players have limited information about the strategic situation, Nash equilibrium prediction is hard to justify. Consider the following example from \cite{osborne1998games}. You are new to town and are planning your route to work. How do you decide which road to take? You know that other people use the roads, but have no idea which road is most congested. One plausible procedure is to try each route several times and then permanently adopt the one that was (on average) best. The outcome of this procedure is stochastic: you may sample the route that is in fact the best on a day when a baseball game congests it. Once you select your route, you become part of the environment that determines other drivers' choices. This procedure is formalized as follows. Consider agents in a large population who are randomly matched to play a symmetric $n$-player game with a finite set of actions. Agents occasionally revise their action (which can also be interpreted as agents occasionally leaving the population and being replaced by new agents who base their behavior on the sampling procedure, as in the motivating example above). Each revising agent samples each feasible action $k$ times and chooses the action that yields the highest average payoff (applying some tie-breaking rule). This procedure induces a dynamic process according to which the distribution of actions in the population evolves (best experienced payoff dynamics: \citealp{sethi2000stability, sandholm2019best}). An $S(k)$ equilibrium {$\alpha^\ast$ is a rest point of the above dynamics. 
The equilibrium is locally stable if any distribution of actions in the population that is sufficiently close to $\alpha^\ast$ converges to $\alpha^\ast$, and globally stable if any distribution of actions in the population with support that includes all actions converges to $\alpha^\ast.$} The existing literature on payoff sampling equilibria (as surveyed below) has mainly focused on $S(1)$ equilibria, due to their tractability. It seems plausible that real-life behavior would rely on sampling each action more than once. A key insight of our analysis is that sampling actions several times might lead to qualitatively different results than sampling each action only once. In particular, in the prisoner's dilemma game, $S(1)$ dynamics yield the Nash equilibrium behavior, while $S(k)$ dynamics (for $k>1$) {may} induce substantial cooperation. Recall that each player in the prisoner's dilemma game has two actions, cooperation $c$ and defection $d$, and the payoffs are as in Table \ref{tab:Payoff-Matrix-of}, where $g, l > 0.$ \cite{sethi2000stability} has shown that defection is the unique $S(1)$ globally stable equilibrium. By contrast, our first main result (Theorem \ref{thm:mainresult}) shows that for any $k\geq2$, a game for which the gains from defection are not too large (specifically, $g,l<\frac{1}{k-1}$) admits a globally stable state in which the rate of cooperation is between $28\%$ and $50\%$. 
\begin{table}[h] \centering{} \begin{tabular}{cc|cc} & & \textcolor{red}{\emph{c }}\textcolor{red}{{} } & \textcolor{red}{\emph{d}}\tabularnewline \cline{2-4} \cline{3-4} \cline{4-4} \multirow{2}{*}{} & \textcolor{blue}{\emph{c}}\textcolor{blue}{{} } & \textcolor{blue}{1}~~,~~\textcolor{red}{1} & \textcolor{blue}{~~-$l$}~,~\textcolor{red}{1+$g$} \tabularnewline & \textcolor{blue}{\emph{d }}\textcolor{blue}{{} } & \textcolor{blue}{1+$g$}~,~\textcolor{red}{-$l$~~~} & \textcolor{blue}{0}~~,~~\textcolor{red}{0} \tabularnewline \end{tabular} \begin{centering} \caption{ Prisoner's Dilemma Payoff Matrix (${\color{blue}g},{\color{red}l}>0$) \label{tab:Payoff-Matrix-of}} \par\end{centering} \end{table} {Our remaining results characterize the local stability of strict equilibria for $k \ge 2$. Proposition \ref{prop:Sk-strict-PD} shows that defection in the prisoner's dilemma game is locally stable iff $l>\frac{1}{k-1}$. Theorem \ref{thm:unstable sn} extends the analysis to general symmetric games.} It presents a simple necessary and sufficient condition for {a strict symmetric equilibrium action $a^\ast$} to be $S(k)$ locally stable ({improving on} the conditions presented in \citealp{sethi2000stability, sandholm2020stability}). Roughly speaking, the condition is that in any set of actions $A^\prime$ that does not include $a^\ast$ there is an action that never yields the highest payoff when the corresponding sample includes a single occurrence of an action in $A^\prime$ and all the other sampled actions are $a^\ast$. { Theorem \ref{thm:asymmetric unstable sn} extends the characterization of local stability of strict equilibria to general asymmetric games.} \textbf{Outline}: {In the remaining parts of the Introduction we review the related literature, and compare our predictions with the experimental findings.} In Section \ref{sec:model}, we introduce our model and {the} solution concept. 
We analyze the prisoner's dilemma in Section \ref{sec:3}, and characterize the stability of strict equilibria in general symmetric games in Section \ref{sec:symmetric games}. {An extension of the analysis to asymmetric games is presented in Section \ref{sec:asymmetric games}.} \subsection{{Related Experimental Literature and Testable Predictions}\label{subsec:predictions}} { The typical length of a lab experiment, as well as the subjects' cognitive costs, is likely to limit the sample sizes used by subjects to test the various actions to small values such as $k=2$ or $k=3$ (because larger samples induce too-high costs of non-optimal play during the sampling stage, and require greater cognitive effort to analyze). Proposition \ref{pro:k-2-3} shows that for these small sample sizes of $k=2$ or $3$, the $S(k)$ dynamics admit a unique globally stable equilibrium, which depends on the parameter $l$. Specifically, everyone defecting is the globally stable equilibrium if $l>\frac{1}{k-1}$, while there is a substantial rate of cooperation between $24\%$ and $33\%$ if $l<\frac{1}{k-1}.$} {The predictions of our model agree quite well with the empirical findings of the meta-study of \citet{mengel2018risk} concerning the behavior of subjects playing the one-shot prisoner's dilemma. \citet[Tables A.3, B.5]{mengel2018risk} summarizes 29 sessions of lab experiments of that game in a ``stranger'' (one-shot) setting from 16 papers (with various values of $g$ and $l$, {both} with median 1; {the distribution of values is presented in Appendix \ref{sec-experiments-g-l}}). The average rate of cooperation in these experiments is $37\%$. 
Our predictions are also broadly consistent with the experimentally observed comparative statics with respect to $g$ and $l$, which is that the rate of cooperation is decreasing in $l$ but is independent of $g$ (see \citealp[Table B.5]{mengel2018risk}, where $l$ is called $RISK^{Norm}$ and $g$ is called $TEMPT^{Norm}$).}\footnote{{Theorem \ref{thm:mainresult} shows that for $g$ that is not too large (specifically, $g<\frac{1}{k-1}$), the minimal rate of cooperation in the globally stable state is $28\%$ (when $l<\frac{1}{k-1}$). Proposition \ref{pro:k-2-3} allows arbitrary large $g$, which has the modest impact of slightly decreasing the minimal globally stable rate of cooperation to 24\%.}} The empirically observed average cooperation rate of $37\%$ can also be explained by other theories. Specifically, it can be explained by agents making errors when the payoff differences are small (quantal response equilibrium, \citealp{mckelvey1995quantal}), or by agents caring about the payoffs of cooperative opponents (inequality aversion $\grave{\textrm{a}}$ la \citealp{fehr1999theory}, and reciprocity $\grave{\textrm{a}}$ la \citealp{rabin1993incorporating}). Our model has two advantages in comparison with these alternative models. First, our model is parameter free (for a fixed $k$), while the existing models may require tuning their parameters to fit the experimental data (such as the parameter describing the agents' error rates in a quantal response equilibrium). Second, the predictions of the existing models are arguably less compatible with the above-mentioned experimentally observed comparative statics. Quantal response equilibrium predicts that the cooperation rate decreases in both parameters. 
The other models predict that the cooperation rate decreases in $g$ (because an increasing $g$ increases the material payoff from defecting, while it does not change the payoff of a cooperative opponent), and their prediction with respect to $l$ is ambiguous, because increasing $l$ has two opposing effects: increasing the material gain from defection against a defecting opponent but decreasing the payoff of a cooperating opponent. Our predictions might have an even better fit with experiments in which subjects have only partial information about the payoff matrix (a setting that might be relevant to many real-life interactions), such as a ``black box'' setting in which players do not know the game's structure and observe only their realized payoffs (see, e.g., \citealp{nax2015directional, nax2016learning,burton2017social}). \subsection{Related {Theoretical} Literature}\label{sec:related literature} The payoff sampling dynamics approach employed in this paper was pioneered by \cite{osborne1998games} and \cite{sethi2000stability}. The approach has been used in a variety of applications, including bounded-rationality models in industrial organization (\citealp{spiegler2006competition,spiegler2006market}), coordination games (\citealp{ramsza2005stability}), trust and delegation of control (\citealp{rowthorn2008procedural}), market entry (\citealp{chmura2011minority}), ultimatum games (\citealp{mikekisz2013sampling}), common-pool resources (\citealp{cardenas2015stable}), contributions to public goods (\citealp{mantilla2018efficiency}), and finitely repeated games (\citealp{Raj}). 
Most of these papers mainly focus on $S(1)$ dynamics, in which each action is only sampled once.\footnote{{ See also the variant of the $S(1)$ dynamics presented in \cite{rustichini2003equilibria}, according to which after an initial phase of sampling each action once, each player in each round chooses the action that has yielded the highest average payoff so far.}} One exception is \cite{sandholm2019best}, which analyzes the stable $S(k)$ equilibrium in a centipede game and shows that it involves cooperative behavior even when the number of trials $k$ of each action is large. Another is \cite{sandholm2020stability}, which presents general stability and instability criteria of $S(k)$ equilibria in general classes of games, thus providing a unified way of deriving many of the specific results the above papers derive, as well as several new results. A related, alternative approach is \emph{action sampling dynamics} (or sample best-response dynamics), according to which each revising agent obtains a small random sample of other players' actions, and chooses the action that is a best reply to that sample (see, e.g., \citealp{sandholm2001almost, kosfeld2002myopic, kreindler2013fast, oyama2015sampling, heller2018social, salant2020statistical}). The action sampling approach is a plausible heuristic when the players know the payoff matrix and are capable of strategic thinking but do not know the exact distribution of actions in the population. \section{Model}\label{sec:model} We consider a unit-mass continuum of agents who are randomly matched to play a symmetric {$n$-player} game $G=\{A,u\}$, where $A = \{a_1, a_2, \dots, a_m\}$ is the (finite) set of actions and $u: A^n \rightarrow \mathbb{R}$ is the payoff function, {which is invariant to permutations of its second through $n$-th arguments. 
An agent taking action $a^1$ against opponents playing $a^2, \dots,a^n$, in any order, receives payoff $u(a^1,a^2, \dots,a^n)$.} Aggregate behavior in the population is described by a \textit{population state} $\alpha$ lying in the unit simplex $\Delta \equiv \{\alpha =(\alpha_{a_i})_{i=1}^m \in \mathbb{R}^m_{+} \ | \ \sum_{i=1}^m \alpha_{a_i} = 1\},$ with $\alpha_{a_i}$ representing the fraction of agents in the population using action $a_i .$ The standard basis vector $e_a \in \Delta$ represents the pure, or monomorphic, state in which all agents play action $a.$ Where no confusion is likely, we identify the action with the monomorphic state, denoting them both by $a$. The set of \emph{interior population states}, in which all actions are used by a positive mass of agents, is $Int(\Delta)\equiv \Delta \cap \mathbb{R}^m_{++}$. A sampling procedure involves the testing of the different actions against randomly drawn opponents, as explained next. Agents occasionally receive opportunities to switch actions (equivalently, this can be thought of as agents dying and being replaced by new agents). These opportunities do not depend on the currently used actions. That is, when the population state is $\alpha(t)$, the proportion of agents originally using an action $a$ out of the agents who revise between time $t$ and $t+dt$ is equal to their proportion in the population $\alpha_{a}(t)$. When an agent receives a revision opportunity, he tries each of the feasible actions $k$ times, using it each time against a newly drawn opponent from the population. Thus, the probability that the opponent's action is any $a \in A$ is $\alpha_{a}(t)$. The agent then chooses the action that yielded the highest mean payoff in these trials, employing some tie-breaking rule if more than one action yields the highest mean payoff. All of our results hold for any tie-breaking rule. Denote the probability that the chosen action is $a$ by $w_{a,k}(\alpha(t))$. 
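The revision procedure defining $w_{a,k}$ can be mimicked directly in a short simulation. The sketch below (illustrative only; the payoff parameters, the uniform tie-breaking rule, and the seed are arbitrary choices) estimates $w_{c,k}(\alpha)$ for the prisoner's dilemma of Table \ref{tab:Payoff-Matrix-of}:

```python
import random

random.seed(2)

def mean_payoff(action, opponents, g, l):
    """Average payoff of `action` over a k-trial sample of opponents."""
    if action == 'c':
        return sum(1.0 if o == 'c' else -l for o in opponents) / len(opponents)
    return sum(1.0 + g if o == 'c' else 0.0 for o in opponents) / len(opponents)

def estimate_w_c(alpha_c, k, g, l, trials=20000):
    """Monte-Carlo estimate of w_{c,k}(alpha): the probability that a revising
    agent adopts c after testing each action k times against opponents
    drawn iid from the population state alpha."""
    wins = 0
    for _ in range(trials):
        c_sample = ['c' if random.random() < alpha_c else 'd' for _ in range(k)]
        d_sample = ['c' if random.random() < alpha_c else 'd' for _ in range(k)]
        pc = mean_payoff('c', c_sample, g, l)
        pd = mean_payoff('d', d_sample, g, l)
        if pc > pd or (pc == pd and random.random() < 0.5):  # uniform tie-break
            wins += 1
    return wins / trials

w1 = estimate_w_c(0.4, k=1, g=0.5, l=0.5)
w2 = estimate_w_c(0.4, k=2, g=0.5, l=0.5)
print(w1, w2)   # w1 is close to alpha_c * alpha_d = 0.4 * 0.6 = 0.24; w2 exceeds w1
```

For $k=1$ the estimate reproduces the closed form $w_{c,1}(\alpha)=\alpha_c\alpha_d$ used in the analysis of the prisoner's dilemma below.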
As a result of the revision procedure described above, the expected change in the number of agents using an action $a$ during an infinitesimal time interval of duration $dt$ is \begin{equation}\label{eq:expected change} w_{a,k}(\alpha(t))dt - \alpha_a(t)dt. \end{equation}The first term in \eqref{eq:expected change} is an inflow term, representing the expected number of revising agents who switch to action $a$, while the second term is an outflow term, representing the expected number of revising agents who are currently playing that action. In the limit $dt\to 0,$ the rate of change of the fraction of agents using each action is given {in vector notation by} \begin{equation}\label{eq:BEP} \dot{\alpha} = w_{k}(\alpha(t)) - \alpha(t), \end{equation}{where $w_k$ is a vector whose $a$-th component is $w_{a,k}$.} The system of differential equations \eqref{eq:BEP} is called the $k$-\textit{payoff sampling dynamic}. Its rest points are called $S(k)$ equilibria. \begin{definition}[\citealp{osborne1998games}]\label{def:rest-point} \emph{A population state $\alpha^\ast\in\Delta$ is an \emph{$S(k)$ equilibrium} if $ w_{k}(\alpha^\ast) = \alpha^\ast$.} \end{definition} An equilibrium is (locally) asymptotically stable if a population beginning near it remains close and eventually converges to the equilibrium, and it is (almost) globally asymptotically stable if the population converges to it from any initial interior state. 
\begin{definition}\label{def:local-stability} \emph{An $S(k)$ equilibrium $\alpha^{*}$ is \emph{ asymptotically stable} if:} \emph{ \begin{enumerate} \item (\emph{Lyapunov stability}) for every neighborhood $U$ of $\alpha^{*}$ in $\Delta$ there is a neighborhood $V \subset U$ of $\alpha^{*}$ such that if $\alpha(0) \in V $, then $\alpha(t) \in U $ for all $t >0$; and \item there is some neighborhood $U$ of $\alpha^{*}$ in $\Delta $ such that all trajectories initially in $U$ converge to $\alpha^{*}$; that is, $\alpha\left(0\right)\in U$ implies $\lim_{t\rightarrow\infty}\alpha\left(t\right)=\alpha^{*}.$ \end{enumerate} } \end{definition} \begin{definition}\label{def:global-stability} \emph{An $S(k)$ equilibrium $\alpha^{*}$ is \emph{ globally asymptotically stable} if all interior trajectories converge to $\alpha^{*}$; that is, $\alpha\left(0\right)\in Int(\Delta)$ implies $\lim_{t\rightarrow\infty}\alpha\left(t\right)=\alpha^{*}.$} \end{definition} \section{ {The Prisoner's Dilemma}}\label{sec:3} This section focuses on the {(two-player)} prisoner's dilemma game. The set of actions is given by $A = \{c, d\}$, where $c$ is interpreted as cooperation and $d$ as defection. The payoffs are as described in Table \ref{tab:Payoff-Matrix-of}, with $g,l>0$: when both players cooperate they get payoff 1, when they both defect they get 0, and when one player defects and the other cooperates, the defector gets 1+$g$ and the cooperator gets $-l$. \subsection{{Stability of Defection}}\label{sec:prelim analysis}\label{sec:strict} { \citet[Example 5]{sethi2000stability} analyzes the $S(1)$ dynamics and shows that everyone defecting is globally stable.} \begin{claim} [\citealp{sethi2000stability}] \label{prop:S1} \label{claim:S1} {Defection} is $S(1)$ globally asymptotically stable. \end{claim} {The argument behind Claim \ref{claim:S1} is as follows. 
When an agent samples the action $c$ (henceforth, the \emph{$c$-sample}) her payoff is higher than when sampling the action $d$ (henceforth, the \emph{$d$-sample}) iff the opponent has cooperated in the $c$-sample and defected in the $d$-sample (which happens with probability $\alpha_c \cdot \alpha_d$). Therefore, the $1$-payoff sampling dynamic is given by \begin{align*}\dot{\alpha}_{c} & =w_{c,1}(\alpha)-\alpha_{c}=\alpha_{c} \cdot \alpha_{d}-\alpha_{c}=\alpha_{c}(1-\alpha_{c})-\alpha_{c}=-\alpha_{c}^{2} \ ~~~~~(<0 \text{ if } \alpha_c >0).\end{align*} } The unique rest point $\alpha^{*}_c =0$ is the unique $S(1)$ equilibrium, and it is easy to see that it is globally asymptotically stable. {Our next result shows that for $k \ge 2$ everyone defecting is no longer globally asymptotically stable (indeed, it is not even locally asymptotically stable) if $l<\frac{1}{k-1}$.} \begin{proposition} \label{prop:Sk-strict-PD} {Let $k \ge 2$ and assume} that\footnote{The stability of defection in the borderline case of $l=\frac{1}{k-1}$ depends on the tie-breaking rule, because observing a single $c$ in the $c$-sample and no $c$'s in the $d$-sample produces a tie between the two samples. If one assumes a uniform tie-breaking rule, then action $c$ wins with a probability of $\frac{k}{2} \epsilon - O(\epsilon^2)$, which is greater than $\epsilon$ if and only if $k>2$. Thus, with this rule, defection is stable if $k=2$ and unstable if $k>2$.} $l \neq \frac{1}{k-1}$. {Defection} is $S(k)$ asymptotically stable if and only if $l > \frac {1}{k-1}$. \end{proposition} {Proposition \ref{prop:Sk-strict-PD} is implied by the results of \citet{sandholm2020stability}, and as we show in Section \ref{subsec-applications}, it also follows from Theorem \ref{thm:unstable sn} below.} For completeness, we provide a direct sketch of proof. 
\begin{proof}[Sketch of Proof] Consider a population state in which a small fraction $\epsilon$ of the agents cooperate and the remaining $1-\epsilon$ agents defect. A revising agent most likely sees all the opponents defecting, both in the $c$-sample and in the $d$-sample. With a probability of approximately $k \epsilon$, the agent sees a single cooperation in the $c$-sample and no cooperation in the $d$-sample, and so $c$ yields a mean payoff of $\frac{1-(k-1)\cdot l}{k}$ and $d$ yields $0$. The former is higher iff $l<\frac{1}{k-1}$. Thus, if the last inequality holds, then the prevalence of cooperation gradually increases, and the population drifts away from the state where everyone defects. By contrast, if $l>\frac{1}{k-1}$, then cooperation yields the higher mean payoff only if the $c$-sample includes at least two cooperators, which happens with a negligible probability of order $\epsilon ^2$. Therefore, in this case, cooperation gradually dies out, and the population converges to the state where everyone defects. \end{proof} \subsection{{Stability of (Partial) Cooperation}}\label{sec:main analysis} Next we show that for any $k\geq2$, {if $g$ and $l$ are sufficiently small, then} the prisoner's dilemma game admits a globally asymptotically stable $S(k)$ equilibrium in which the frequency of cooperation is between $28\%$ and $50\%$ and is increasing in $k$. \begin{theorem}\label{thm:mainresult} {For} $k\geq2$ and $g,l<\frac{1}{k-1},$ the unique $S(k)$ globally asymptotically stable equilibrium $\alpha^k$ {in the prisoner's dilemma game} satisfies $0.28<\alpha^k_c <0.5$. Moreover, $\alpha^{k'}_c<\alpha^{k}_c$ for all $2 \le k'<k$. \end{theorem} { The intuition for Theorem \ref{thm:mainresult} is as follows. The condition $g,l<\frac{1}{k-1}$ implies that cooperation yields a higher average payoff than defection iff the opponent has cooperated more times in the $d$-sample than in the $c$-sample. 
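Under the condition $g,l<\frac{1}{k-1}$, this event is a comparison of two independent $\mathrm{Binomial}(k,\alpha_c)$ counts, so the equilibrium cooperation rate can be located numerically. The following standalone computation (a sanity check on Theorem \ref{thm:mainresult}, not part of its proof) finds the fixed point for several values of $k$:

```python
from math import comb

def win(k, p):
    """P[X > Y] for independent X, Y ~ Binomial(k, p): the probability that the
    d-sample contains strictly more cooperators than the c-sample."""
    pmf = [comb(k, j) * p**j * (1 - p)**(k - j) for j in range(k + 1)]
    tie = sum(q * q for q in pmf)
    return 0.5 * (1.0 - tie)

def fixed_point(k, tol=1e-12):
    """Solve win(k, p) = p on (0, 0.5) by bisection; win(k, p) - p is positive
    near 0 (roughly k*p - p) and negative at 0.5 (since win(k, 0.5) < 0.5)."""
    lo, hi = 1e-9, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if win(k, mid) > mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rates = {k: fixed_point(k) for k in (2, 3, 5, 10)}
print(rates)   # each rate lies in (0.28, 0.5) and increases with k
```

The computed cooperation rates fall in the interval $(0.28,0.5)$ and increase in $k$, as Theorem \ref{thm:mainresult} asserts.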
If $\alpha_c$ is close to zero, then the probability of this event is roughly $k \cdot \alpha_c>\alpha_c$. The symmetry between the two samples implies that the probability that the opponent has cooperated more times in the $c$-sample than in the $d$-sample is less than $0.5$. Thus, there exists $\alpha^k_c<0.5$, which is not close to zero, for which the probability of cooperation yielding a higher average payoff is equal to $\alpha^k_c$. The proof shows that this equality holds for $\alpha^k_c>0.28$, and that the equilibrium $\alpha^k$ is globally asymptotically stable. } \begin{proof} The proof of the theorem uses a number of claims, whose formal proofs are given in Appendix \ref{sec:proof-Thm1}. In what follows, we state each claim and present a sketch of proof. \noindent \textbf{Notation } For $j \leq k$, let $f_{k,p}(j)\equiv\binom{k}{j}p^j(1-p)^{k-j}$ be the probability mass function of a binomial random variable with parameters $k$ and $p.$ Let $Tie(k,p)=\sum_{j=0}^k (f_{k,p}(j))^2$ be the probability of having a tie between two independent binomial random variables with parameters $k,p$, and let $Win(k,p)=0.5\cdot (1-Tie(k,p))$ be the probability that the first random variable has a larger value than the second. Let $p \equiv \alpha_c$ denote the proportion of cooperating agents in the population. \begin{claim}\label{thm:BEPdynamic}\label{claim1} Assume that $g,l\in (0,\frac{1}{k-1})$. The $k$-payoff sampling dynamic is given by \begin{equation}\label{eq:BEPdyn} \dot{p} = Win(k,p)-p. \end{equation} \end{claim} \begin{proof}[Sketch of Proof] The condition $g,l<\frac{1}{k-1}$ implies that action $c$ has a higher mean payoff iff the $c$-sample includes more cooperating opponents than the $d$-sample does. The number of cooperators in each sample has a binomial distribution with parameters $k$ and $p$, and so the probability of $c$ having a higher mean payoff is $Win(k,p)$ (which we substitute in (\ref{eq:BEP})). \end{proof} For $k \geq 2$ and $0\leq p \leq 1$, denote the expression on the right-hand side of Eq.
\eqref{eq:BEPdyn} by \begin{equation} \label{eq:hn} h_{k}(p)=Win(k,p)-p. \end{equation} \begin{claim}\label{thm:claim1}\label{claim2} For $k \geq 2$, the function $h_k$ satisfies $h_k(0)=0, h_k(1) = -1$ and $h_k'(0) > 0.$ \end{claim} \begin{proof}[Sketch of Proof] When $p=0$ (resp., $=1$), in both samples all the opponents are defectors (resp., cooperators). It follows that $Win(k,p)=0$ for $p \in \{0,1\}$, which, in turn, implies that $h_k(0) = 0$ and $h_k(1) = -1$. Next, observe that for $p=\epsilon \ll 1$, $Win(k,p)\approx k\epsilon$, which is approximately the probability of having at least one cooperator in the $c$-sample. Thus, $h_k(\epsilon) \approx k\epsilon-\epsilon$, which implies that $h'_k(0)=k-1>0$. \end{proof} \begin{claim}\label{thm:claim2}\label{claim3} For $k \geq 2$, the expression $h_k(p)$ is concave in $p$, and satisfies $h_k(p) < h_{k+1}(p)$ for $p \in (0,1)$, $h_k\left(\frac{1}{2}\right) < 0$, and $\lim_{k \rightarrow \infty}h_k\left(\frac{1}{2}\right) = 0.$ \end{claim} \begin{proof}[Sketch of Proof] Observe that $Tie(k,p)$ is close to 1 when $p$ is close to either zero or one, and is smaller for intermediate $p$'s. The formal proof shows (by analyzing the characteristic function) that $Tie(k,p)$ is (1) convex in $p$, (2) decreasing in $k$ (i.e., the larger the number of observations in each sample, the smaller the probability of having exactly the same number of cooperators in both samples), and (3) converges to zero as $k$ tends to $\infty$.
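These properties of $Tie$, and the resulting behavior of $h_k$, are easy to check numerically. The following Python sketch (purely illustrative; the function names are ours) evaluates $f_{k,p}$, $Tie$, $Win$, and $h_k$ directly from their definitions:

```python
from math import comb

def f(k, p, j):
    """Binomial probability mass function f_{k,p}(j)."""
    return comb(k, j) * p**j * (1 - p)**(k - j)

def tie(k, p):
    """Tie(k, p): probability that two independent Bin(k, p) draws coincide."""
    return sum(f(k, p, j) ** 2 for j in range(k + 1))

def win(k, p):
    """Win(k, p): probability that the first draw strictly exceeds the second."""
    return 0.5 * (1 - tie(k, p))

def h(k, p):
    """h_k(p) = Win(k, p) - p, the right-hand side of the sampling dynamic."""
    return win(k, p) - p

# Tie(k, 1/2) is decreasing in k and tends to 0, so h_k(1/2) < 0 but -> 0:
print([round(tie(k, 0.5), 4) for k in (2, 4, 8, 16)])
# h_2(0.28) is (just barely) positive, which gives the bound p(2) > 0.28:
print(h(2, 0.28))  # approx 0.0013
```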
These findings imply that $h_k(p)=0.5 \cdot (1-Tie(k,p))-p$ is concave in $p$ and increasing in $k$, and that \[h_k\left(\frac{1}{2}\right) = \frac{1}{2}\cdot \left(1-Tie\left(k,\frac{1}{2}\right)\right)-\frac{1}{2} < \frac{1}{2}-\frac{1}{2} =0, \text{ ~~and}\] \[\lim_{k\rightarrow\infty}h_{k}\left(\frac{1}{2}\right)=\lim_{k\rightarrow\infty}\left(\frac{1}{2}\cdot \left(1-Tie\left(k,\frac{1}{2}\right)\right)-\frac{1}{2}\right)=\left(\frac{1}{2}-0\right)-\frac{1}{2}=0.\] \end{proof} It follows from Claims \ref{claim2} and \ref{claim3} that for $k\geq2$ the equation $h_k(p)=0$ has a unique solution in the interval $(0,1)$, that this solution $p(k)$ corresponds to an $S(k)$ globally asymptotically stable state, that it satisfies $p(k)<0.5$, and that it is increasing in $k$. To complete the proof of \cref{thm:mainresult}, it remains to show that $p(2)>0.28$. This inequality is an immediate corollary of the fact that for $p=0.28$, \[h_2(p) = 2p(1-p)^3 + p^2(1-p^2) - p \approx 0.001 > 0. ~~~~~\qedhere \] \end{proof} Figure \ref{figure1} shows the $S(k)$ payoff sampling dynamics and the $S(k)$ globally stable equilibria for various values of $k$. \begin{figure}[h] \centering \includegraphics[scale=0.8]{BEP_PD.pdf} \caption{The function $h_k$ and its zero $p(k)$ for various values of $k$.} \label{fig:BEP_PD}\label{figure1} \end{figure} \begin{remark} {Theorem \ref{thm:mainresult} shows that partial cooperation is globally stable for any fixed $k$ if the parameters $g$ and $l$ are sufficiently small (with the upper bound depending on $k$). At the same time, it is well known that, for fixed $g$ and $l$, defection is globally stable if $k$ is sufficiently large (\citealp[Prop. 4.3]{sandholm2020stability}; see also \citealp[Prop. 4]{osborne1998games}).} \end{remark} { Theorem \ref{thm:mainresult} leaves open the question of the stability of partial cooperation when either $g$ or $l$ is larger than $\frac{1}{k-1}$.
The next proposition answers this question for small sample sizes of $k=2$ or $3$, leaving full characterization of larger sample sizes for future research. As the proposition states, everyone defecting is globally asymptotically stable if $l>\frac{1}{k-1}$, while if the reverse inequality holds, the globally asymptotically stable state has a substantial rate of cooperation of between $24\%$ and $33\%$.} \begin{proposition}\label{pro:k-2-3} {For $k \in \{2,3\},$ the unique $S(k)$ globally asymptotically stable equilibrium $\alpha^k$ satisfies $\alpha^k_c=0$ if $l>\frac{1}{k-1}$, and $\alpha^k_c\in (0.24,0.33)$ if $l<\frac{1}{k-1}$.} \end{proposition} The proof of the proposition is presented in Appendix \ref{sub-k-2-3}. \section{{General Symmetric Games}}\label{sec:symmetric games} In this section we extend Proposition \ref{prop:Sk-strict-PD} by presenting a necessary and sufficient condition for $S(k)$ asymptotic stability of actions in general symmetric games. {Our characterization uses the following two definitions.} \subsection{Definitions}\label{subsec-defs} \begin{definition}\label{def:generic} A symmetric {$n$-player} game with payoff function $u : A^n \rightarrow \mathbb{R}$ is \textit{generic} if for any two sequences of action profiles $\left(\left(a_j^1, a_j^2,\dots,a_j^n\right)\right)_{j=1}^L$ and $\left(\left(\tilde{a}_j^1, \tilde{a}_j^2,\dots,\tilde{a}_j^n\right)\right)_{j=1}^L$ of equal length {$L$}, the equality \[ \sum_{j=1}^L u\left(a_j^1, a_j^2,\dots,a_j^n\right) = \sum_{j=1}^L u\left(\tilde{a}_j^1, \tilde{a}_j^2,\dots,\tilde{a}_j^n\right) \] implies $\left\{a_j^1\right\}_{j=1}^L = \left\{\tilde{a}_j^1\right\}_{j=1}^L$. \end{definition} {Thus, a symmetric game is generic if the sums of payoffs of two sequences of action profiles are equal only if every action that appears as the first action in one of the profiles in one sequence also appears as the first action in the other sequence. Note that this definition of genericity is rather weak. 
A stronger definition could be used instead; the use of a weak definition only strengthens our results below. Observe also that if each entry in the payoff matrix is independently drawn from a continuous (atomless) distribution, then the resulting random symmetric game is generic with probability one. Clearly, in a generic game, every pure Nash equilibrium is a strict equilibrium.} \begin{definition}\label{def:support} \emph{For an action $a^\ast$ in a symmetric {$n$-player} game, and for $a,a' \in A\backslash\left\{ a^{*}\right\}$:} \begin{enumerate} \item \emph{action} $a$ directly $S(k)$ supports $a'$ against $a^\ast$ \emph{if} {$$u(a', a,a^\ast,\dots,a^\ast) + (k-1)\cdot u(a', a^\ast,\dots,a^\ast) > k\cdot u(a^\ast,\dots,a^\ast); \emph{ and}$$} \item \emph{action} $a$ $S(k)$ supports $a'$ by spoiling $a^\ast$ \emph{if} $$ k\cdot u(a', a^\ast,\dots,a^\ast) > u(a^\ast, a,a^\ast,\dots,a^\ast)+(k-1) \cdot u(a^\ast, \dots,a^\ast) $$ $$\text{~~\textrm{\emph{and}}~~}u(a',a^\ast,\dots,a^\ast)>u(b,a^\ast,\dots,a^\ast)~~\forall b\notin \{a^\ast,a'\}. $$ \end{enumerate} {\emph{Action $a$ $S(k)$ \emph{single supports}, \emph{double supports,} or just \emph{supports} action $a'$ against $a^\ast$ if exactly one of conditions 1 and 2, both conditions, or at least one condition, respectively, holds. 
The notion of \emph{weak $S(k)$ support} and the related terms (weak direct support, weak support by spoiling, weak single support, and weak double support) are defined similarly, except that the strict inequalities in 1 and 2 are replaced by weak inequalities.}} \end{definition} {Less formally, action $a$ supports action $a'$ against action $a^\ast$ if a single appearance of $a$ in a population in which almost everyone plays $a^\ast$ can make the mean payoff in the $a'$-sample the highest one.} For an action $a$ that directly supports $a'$ (a supporter of $a'$, {in the terminology of} \citeauthor{sandholm2020stability}, \citeyear[p.12]{sandholm2020stability}), a single appearance of $a$ in the $a'$-sample (with all other actions being $a^\ast$) is sufficient to make the mean payoff larger than that yielded by $a^\ast$, and thus to make it the largest payoff. For $a$ that supports $a'$ by spoiling $a^\ast$ (a benefiting spoiler of $a^\ast$, in that terminology), a single appearance of $a$ in the $a^\ast$-sample is sufficient to make the mean payoff smaller than that yielded by $a'$. This makes the latter the largest mean payoff {if} $a'$ is the second-best reply to $a^\ast$ (while if another action $a''$ is second best, then $a''$ yields a higher payoff, assuming that the $a''$-sample includes only $a^\ast$'s). Note that, in a generic game, the second-best reply is unique: the set $\argmax_{b\neq a^{*}}u(b,a^\ast,\dots,a^\ast)$ is a singleton. {Also, in a generic game, the notions of support and weak support coincide: action $a$ $S(k)$ supports $a'$ against $a^\ast$ if and only if it weakly $S(k)$ supports $a'$ against $a^\ast$.} Observe that if $a^\ast$ is a symmetric equilibrium action, then $S(k)$ support against it is ``easier'' the smaller $k$ is: {if action $a$ (weakly) $S(k)$ supports $a'$ against $a^\ast$, then it (respectively, weakly) $S(k')$ supports $a'$ against $a^\ast$ for all $k'<k$.
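To make Definition \ref{def:support} concrete in the two-player case ($n=2$), here is a small Python sketch (ours, purely illustrative) that evaluates the two support conditions, applied to the prisoner's dilemma payoff convention used throughout the paper, $u(c,c)=1$, $u(c,d)=-l$, $u(d,c)=1+g$, $u(d,d)=0$:

```python
# Checking Definition 2 (direct S(k) support and support by spoiling) in a
# TWO-PLAYER symmetric game.  u[a][b] is the payoff to an a-player against b.
# Illustrative sketch only; the helper names are ours.

def directly_supports(u, a, a2, a_star, k):
    """Does action a directly S(k) support a2 against a*?  (Def. 2, item 1, n=2)"""
    return u[a2][a] + (k - 1) * u[a2][a_star] > k * u[a_star][a_star]

def spoils(u, a, a2, a_star, k):
    """Does action a S(k) support a2 by spoiling a*?  (Def. 2, item 2, n=2;
    the second-best-reply clause is vacuous when a2 is the only other action)."""
    return k * u[a2][a_star] > u[a_star][a] + (k - 1) * u[a_star][a_star]

def pd(g, l):
    """Prisoner's dilemma payoffs: u(c,c)=1, u(c,d)=-l, u(d,c)=1+g, u(d,d)=0."""
    return {'c': {'c': 1.0, 'd': -l}, 'd': {'c': 1.0 + g, 'd': 0.0}}

# For k = 2, cooperation directly supports itself against defection iff l < 1/(k-1) = 1:
print(directly_supports(pd(0.5, 0.5), 'c', 'c', 'd', 2))  # True  (l = 0.5 < 1)
print(directly_supports(pd(0.5, 1.5), 'c', 'c', 'd', 2))  # False (l = 1.5 > 1)
# Cooperation never supports itself by spoiling, since it raises a defector's payoff:
print(spoils(pd(0.5, 0.5), 'c', 'c', 'd', 2))             # False
```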
} \subsection{Result} {It is well known (\citealp{osborne1998games}) that an action $a^\ast$ can be an $S(k)$ equilibrium only if $a^\ast$ is a symmetric Nash equilibrium (otherwise, $w_{a^\ast,k}(e_{a^\ast})=0$, which contradicts $e_{a^\ast}$ being a rest point). Moreover, if the tie-breaking rule assigns positive probability to all co-winning actions, then $a^\ast$ must be a strict equilibrium (otherwise, $w_{a^\ast,k}(e_{a^\ast})<1$, which again contradicts $e_{a^\ast}$ being a rest point). Thus, being a strict symmetric equilibrium is essentially a necessary condition for an action to be $S(k)$ asymptotically stable. Our next result characterizes the conditions for a strict symmetric equilibrium action to be $S(k)$ asymptotically stable when $k \geq 2$ or $n \geq 3$}. \begin{theorem}\label{thm:unstable sn} {Suppose that $k \geq 2$ or $n \geq 3$. A} necessary and sufficient condition for a strict symmetric equilibrium action $a^\ast$ in a generic symmetric {$n$-player} game to be $S(k)$ asymptotically stable is that, for the set $A^\ast \equiv A \backslash \{a^\ast\}$, \begin{enumerate} \item [I.] {every nonempty subset $A' \subseteq A^\ast$ includes an action $a'$} that is not $S(k)$ supported {against $a^\ast$} by any action in $A'$. \end{enumerate} { In a non-generic game, condition I is still necessary for $S(k)$ asymptotic stability, and a sufficient condition is} \begin{enumerate} \item [II.] every nonempty subset $A' \subseteq A^\ast$ includes an action $a'$ that is not \emph{weakly} $S(k)$ supported against $a^\ast$ by any action in $A'$.
\end{enumerate} {In conditions I and II, ``is not S(k) supported by'' and ``is not weakly S(k) supported by'' can be replaced by ``does not S(k) support'' and ``does not weakly S(k) support'', respectively, as the conditions resulting from these replacements, I' and II', are equivalent to I and II.} \end{theorem} \begin{proof}[Sketch of Proof] Suppose that there is a nonempty subset of actions $A' \subseteq A^\ast$, with cardinality $m\ge 1$, such that each action $a' \in A'$ is supported by some action in $A'$. Consider an initial population state $\alpha$ in which a fraction $1-\epsilon$ of the agents play $a^\ast$ and $\frac{\epsilon}{m}$ play each of the actions in $A'$. Since each $a' \in A'$ is supported by some action {$a$} in $A'$, {there is} a probability of approximately {$k(n-1) \frac{\epsilon}{m}$} of having action {$a$} appear in the sample and thus making the mean payoff yielded by $a'$ the highest one. It follows that {$w_{a',k}(\alpha) \approx k(n-1) \frac{\epsilon}{m} > \frac{\epsilon}{m} = \alpha_{a'}$} for all $a' \in A'$. Thus the frequency of all actions in $A'$ increases, {which implies that $a^\ast$ is not asymptotically stable}. The formal proof (Appendix \ref{sec:B3}) formalizes this intuition, by examining the Jacobian matrix at $e_{a^\ast}$ and showing that it admits a positive eigenvalue. Next, suppose that every nonempty subset $A' \subseteq A^\ast$ includes an action $a'$ that is not {weakly} $S(k)$ supported by any action in $A'$. Consider a state in which $1- \epsilon$ of the agents play $a^\ast$. We know that there exists an action $a'$ that is not {weakly} $S(k)$ supported by any action in $A^\ast$. This implies that the probability of action $a'$ having the maximal mean payoff in an agent's sample is $O(\epsilon ^2)$, and, thus, $w_{a', k}(\alpha)=O(\epsilon ^2)$.
As the frequency of $a'$ becomes negligible, we can iterate the argument for $A' = A^\ast\backslash \{a'\}$, and find another action $a''$ for which $w_{a'', k}(\alpha)=O(\epsilon ^2)$, etc. The formal proof (Appendix \ref{sec:B3}) shows that (a) condition II implies that all the eigenvalues of the Jacobian matrix are negative, and (b) the phrase ``is not {weakly} $S(k)$ supported by'' can be replaced by ``does not {weakly} $S(k)$ support.'' \end{proof} {We remark that condition II is sufficient for} asymptotic stability also when $k=1$ {and $n=2$}. However, {condition I is not necessary in this case, as} demonstrated by defection in the prisoner's dilemma. { The following corollary shows that any strict symmetric equilibrium is characterized by a threshold $k_0< \infty$ that determines the equilibrium's asymptotic stability for any $k\ge 2$. (Note that the corollary does not give any information regarding $S(1)$ stability.) } \begin{corollary} {Let $a^\ast$ be a strict symmetric equilibrium action in a symmetric game. There exists an integer $k_0$ such that, for $k \ge 2$, action $a^\ast$ is $S(k)$ asymptotically stable iff $k \ge k_0$.} \end{corollary} \begin {proof} { Let $\bar k \ge 2$ be a sufficiently large integer such that, for any action $a' \neq a^\ast$ and action profiles $(a^1,...,a^n)$ and $(a'^1,...,a'^n),$ $$\bar k\cdot\left(u(a^\ast,...,a^\ast)-u(a',a^\ast,...,a^\ast)\right)>u(a^1,...,a^n)-u(a'^1,...,a'^n).$$ The inequality implies that there exists no pair of actions $a,a'\neq a^\ast$ such that action $a$ weakly supports action $a'$ against $a^\ast$, which in view of Theorem \ref{thm:unstable sn} implies that $a^\ast$ is an $S(\bar k)$ asymptotically stable equilibrium. Let $k_0$ be the smallest number for which $a^\ast$ is an $S(k_0)$ asymptotically stable equilibrium. This implies that there is no pair of actions $a,a'\neq a^\ast$ such that action $a$ $S(k_0)$ supports $a'$ against $a^\ast$. 
The last observation at the end of Section \ref{subsec-defs} implies that for any $k>k_0$ there is no pair of actions $a,a'\neq a^\ast$ such that action $a$ $S(k)$ supports $a'$ against $a^\ast$, which implies that, for $k\geq 2$, action $a^\ast$ is an $S(k)$ asymptotically stable equilibrium iff $k \geq k_0$. } \end{proof} \subsection{Applications}\label{subsec-applications} {In this subsection we demonstrate the usefulness of the necessary and sufficient conditions identified in Theorem \ref{thm:unstable sn} by studying the $S(k)$ asymptotic stability of strict equilibria in a number of applications.} { The first two applications deal with two-action games ($A=\{a^\ast,a'\}$). Observe that in such games condition I (resp., I') in Theorem \ref{thm:unstable sn} reduces to the requirement that the other action $a'$ does not $S(k)$ (resp., weakly) support itself against $a^\ast$.} \paragraph{\textbf{Prisoner's dilemma}} {Cooperation cannot support itself by spoiling, because it always increases the payoff of a defecting opponent. For $k \ge 2$, cooperation directly (resp., weakly) $S(k)$ supports itself against defection iff $l<\frac{1}{k-1}$ (resp., $\leq\frac{1}{k-1}$), because $$u(c,c)+(k-1)\cdot u(c,d)>k \cdot u(d,d)\Leftrightarrow~1+(k-1)\cdot(-l)>0~\Leftrightarrow~l<\frac{1}{k-1}.$$ By Theorem \ref{thm:unstable sn}, the last finding implies that defection is asymptotically stable if $l>\frac{1}{k-1}$ and is not asymptotically stable if $l<\frac{1}{k-1}$, which proves Proposition \ref{prop:Sk-strict-PD}.} \paragraph{\textbf{Public good games}} { Consider a symmetric $n$-player game, with $n \ge 2$, where the set of actions is $A=\{c,nc\}$, with $c$ and $nc$ interpreted as contributing or not contributing, respectively, a fixed amount of some private good. The amount of public good produced is $\varphi(l)$, where $l$ is the number of contributions and $\varphi$ is a nondecreasing production function with $\varphi(0)=0$ and $\varphi(1) < 1$. 
Each of the contributors gets a payoff of $\varphi(l)-1$, while for a non-contributor the payoff is $\varphi(l)$. The assumption $\varphi(1) < 1$ implies that no one contributing is a strict equilibrium (and it is the unique equilibrium if $\varphi(l+1)- \varphi(l)< 1$ for all $l$). Observe that contributing cannot support itself by spoiling, and it directly $S(k)$ supports itself against non-contributing iff $$u(c,c,nc,...,nc)+(k-1)\cdot u(c,nc,...,nc)>k\cdot u(nc,...,nc)\Leftrightarrow$$ $$\varphi(2)-1+(k-1) \cdot (\varphi(1)-1)>0 \Leftrightarrow k<1-\frac{1-\varphi(2)}{1-\varphi(1)} .$$ By Theorem \ref{thm:unstable sn}, the above finding implies that if $n \ge 3$ or $k \ge 2$, then a sufficient condition for not contributing to be $S(k)$ asymptotically stable is that $$k>1-\frac{1-\varphi(2)}{1-\varphi(1)} .$$ The corresponding weak inequality is a necessary condition.} \paragraph {\textbf{Coordination games}} {Consider a symmetric $n$-player game, with $n \ge 2$, where the set of actions is $A=\{a_1,...,a_m\}$, with $m \ge 2$. If all players choose the same action $a_j$, everyone gets payoff $u_j,$ where $u_1\ge u_2\ge...\ge u_m>0.$ If the players do not all choose the same action, then everyone gets zero. The game admits $m$ strict equilibria: everyone playing $a_1$, everyone playing $a_2$, ..., everyone playing $a_m$.} {Next we characterize the stability of the strict symmetric equilibrium action $a_l$, for any $1\le l\le m$. No action $a_i\neq a_l$ $S(k)$ supports an action $a_j \neq a_l$ against $a_l$ by spoiling, because $$k\cdot u(a_j,a_l,...,a_l)=0<u(a_l,a_i,a_l,\dots,a_l)+(k-1)\cdot u(a_l,...,a_l)=(k-1)\cdot u_l.$$ The only action that might directly $S(k)$ support $a_j \neq a_l$ against $a_l$ is $a_j$ itself, and this can happen only if $n=2$.
This is so because if $a_i \neq a_j$ or $n>2$, then the following inequality holds: $$u(a_j,a_i,a_l,...,a_l)+(k-1)\cdot u(a_j,a_l,...,a_l)=0<k\cdot u(a_l,...,a_l)=k\cdot u_l.$$ If $n=2$, then $a_j$ directly supports itself against $a_l$ iff $$u(a_j,a_j)+(k-1)\cdot u(a_j,a_l)>k\cdot u(a_l,a_l) \Leftrightarrow u_j+0>k \cdot u_l.$$ By Theorem \ref{thm:unstable sn}, this implies that all the strict equilibria are $S(k)$ asymptotically stable for any $k$ if there are at least three players. In the two-player case, and for $k \ge 2$, the strict symmetric equilibrium action $a_l$ is $S(k)$ asymptotically stable if $u_l>\frac{u_1}{k}$ and it is not $S(k)$ asymptotically stable if $u_l<\frac{u_1}{k}.$} \subsection{Comparison with \citet{sandholm2020stability}} We conclude this section by comparing Theorem \ref{thm:unstable sn} with the conditions for stability of strict equilibria presented in \citet[Section 5]{sandholm2020stability} (which, in turn, improve on the conditions presented in \citealp{sethi2000stability}). {For simplicity, the comparison focuses on the case of generic symmetric games.}\footnote{ {In addition}, our setting concerns only revising agents who test all feasible actions. The more general setting studied by \citeauthor{sandholm2020stability} allows dynamics in which revising agents test only some of the actions.} {As in Theorem \ref{thm:unstable sn}, we use the notation $A^\ast \equiv A \backslash \{a^\ast\}$.} \citet{sandholm2020stability} identify two necessary conditions for stability of a strict equilibrium. They are the negations of conditions 1 and 2 in the following proposition. \paragraph{\textbf{Adaptation of Proposition 5.4} \cite[]{sandholm2020stability}} \label{thm:sufficient} Let $a^\ast$ be a strict symmetric equilibrium {action} in a {generic} symmetric game. 
{For} $k\geq 2$, action $a^\ast$ is \emph{not} $S(k)$ asymptotically stable if either: \begin{enumerate} \item $\exists A' \subseteq A^\ast$ such that every $a'\in A'$ is directly supported by some action in $A'$; or \item $\exists A' \subseteq A^\ast$ such that every action $a'\in A'$ supports some action in $A'$. \end{enumerate} Theorem \ref{thm:unstable sn} strengthens this result by omitting ``directly'' from condition 1, thus weakening the condition. Moreover, it shows that this weaker condition (call it 1') is actually equivalent to condition 2, {and that both 1' and 2 are in fact} necessary \emph{and sufficient} conditions for asymptotic stability. \cite{sandholm2020stability} present the following sufficient condition for stability. \begin{definition} {\emph{Action} $a$ tentatively $S(k)$ supports action $a'$} by spoiling $a^\ast$ \emph{if} $${ k\cdot u(a', a^\ast,\dots,a^\ast) > u(a^\ast, a,a^\ast,\dots,a^\ast)+(k-1) \cdot u(a^\ast,\dots,a^\ast).}$$ \end{definition} \paragraph{\textbf{Adaptation of Prop. 5.9} \cite[]{sandholm2020stability}} Let $a^\ast$ be a strict symmetric equilibrium action in a generic symmetric game. {For} $k\geq 2$, action $a^\ast$ is $S(k)$ asymptotically stable if \begin{enumerate} \item [3.] there exists an ordering of $A^\ast$ such that no action $a$ {in this set} directly $S(k)$ supports or tentatively $S(k)$ supports by spoiling $a^\ast$ any weakly higher action $a'$. \end{enumerate} Theorem \ref{thm:unstable sn} strengthens this result by omitting ``tentatively'' from condition 3, thus weakening the condition and making it \emph{necessary} and sufficient for stability.\footnote{ Condition 3 is not necessary for asymptotic stability. 
In the {symmetric two-player} game defined by the following payoff matrix, action $a^\ast$ is $S(2)$ asymptotically stable as it satisfies the condition in Theorem \ref{thm:unstable sn}, yet it does not satisfy condition 3 due to action $a''$ tentatively supporting itself by spoiling.\newline $ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$ \begin{tabular}{|c|ccc|} \hline $a^{*}$ & 8 & 9 & 3\tabularnewline $a'$ & 7 & 5 & 2\tabularnewline $a''$ & 6 & 4 & 1\tabularnewline \hline \end{tabular} } Sufficiency still holds because the weaker condition (call it 3') implies that the lowest action in every subset $A' \subseteq A^\ast$ does not $S(k)$ support any action in $A'$. Necessity holds because it is not difficult to see that 3' is implied by condition {I} in Theorem \ref{thm:unstable sn}. {(Order $A^\ast$ by recursively removing an element that is not $S(k)$ supported against $a^\ast$ by any of the current elements.)} \section{{Asymmetric Games}}\label{sec:asymmetric games} {In what follows we adapt our model and the characterization of $S(k)$ asymptotic stability to asymmetric games. } {Each player $i$ has a finite set of actions $A_i$ and a payoff function $u_i: \prod_{j=1}^n {A_j} \ \rightarrow \mathbb{R}$. The player is represented by a distinct population of agents, the $i$-population, whose state $\alpha^i$ is an element of the unit simplex in ${\mathbb{R}}^{|A_i|}$. The state of all $n$ populations is given by $\alpha=(\alpha^1,\alpha^2,\dots,\alpha^n) \in \Delta$, where $\Delta$ is the Cartesian product of the players' unit simplices.} {The population state $\alpha$ determines for each player $i$ the probability vector $w^i_{k}(\alpha(t))$ specifying the probability that each of the player's actions yields the highest mean payoff in $k$ trials, employing some tie-breaking rule.
For any $k \ge 1$, the $k$-payoff sampling dynamic is given by \begin{equation}\label{eq:asymmetric BEP} \dot{\alpha}^i = w^i_{k}(\alpha(t)) - \alpha^i(t). \end{equation}A population state $\alpha^\ast$ is an $S(k)$ equilibrium if $ w_k^i(\alpha^\ast) = (\alpha^{\ast})^i$ for each player $i$. Asymptotic stability and global asymptotic stability are defined as in the symmetric case. } { The notion of supporting an action against an action profile $a^\ast=(a^\ast_1,a^\ast_2,\dots,a^\ast_n)$ is conceptually similar to that in symmetric games. To present it in a formally similar way, consider the disjoint union $A^\ast=\dot\bigcup_{i=1}^n{(A_i \backslash \{a^\ast_i\})}$. Each element of $A^\ast$ is of the form $a_i$: a specific action of a specified player $i$ such that $a_i\ne a_i^\ast$. For such an element, $(a_i,a^{\ast}_{-i})$ denotes the action profile in which player $i$ plays $a_i$ and all the other players play according to $a^\ast$. For $a_i,a_j \in A^\ast$ with $i \neq j$, $(a_i,a_j,a^{\ast}_{-ij})$ denotes the action profile in which player $i$ plays $a_i$, player $j$ plays $a_j$, and all the other players play according to $a^*$.} \begin{definition}\label{def:asymmetric_support} { \emph{For an action profile $a^\ast$ in an $n$-player game, and for $a_i,a_j \in A^\ast$:} \begin{enumerate} \item \emph{action} $a_i$ directly $S(k)$ supports $a_j$ against $a^\ast$ \emph{if} $i \ne j$ \emph{and} $$u_j(a_i,a_j,a^{\ast}_{-ij}) + (k-1)\cdot u_j(a_j,a^{\ast}_{-j}) > k\cdot u_j(a^\ast);\emph{ and}$$ \item \emph{action} $a_i$ $S(k)$ supports $a_j$ by spoiling $a^\ast$ \emph{if} $i \ne j$ \emph{and} $$ k\cdot u_j(a_j,a^{\ast}_{-j}) > u_j(a_i,a^{\ast}_{-i})+(k-1) \cdot u_j(a^\ast) \text{~~~\emph{and}~~~}u_j(a_j,a^ \ast_{-j})>u_j(b_j,a^\ast_{-j})~~\forall b_j\neq a_j\in A^\ast.$$ \end{enumerate} \emph{Action} $a_i$ $S(k)$ single supports, double supports, \emph{or just} supports \emph{action $a_j$ against $a^\ast$ if exactly one of conditions 1 and 2, both conditions, or 
at least one condition, respectively, holds.} Weak \emph{$S(k)$ support, and the related terms, are defined similarly, except that the strict inequalities in 1 and 2 are replaced by weak inequalities.}} \end{definition} Next we adapt the characterization of $S(k)$ asymptotic stability to asymmetric games. \begin{theorem}\label{thm:asymmetric unstable sn} {For $k \ge 2$, a necessary condition for a strict equilibrium $a^\ast$ in an $n$-player game to be $S(k)$ asymptotically stable is that condition I (equivalently, I') in Theorem \ref{thm:unstable sn} holds, and a sufficient condition is that II (equivalently, II') holds.} \end{theorem} {Obviously, the necessary conditions coincide with the sufficient ones if the game is generic, in the standard sense. The proof of the theorem, which is very similar to that of Theorem \ref{thm:unstable sn}, is presented in Appendix \ref{sec:asymmetric proof}. } {When the underlying game is symmetric, there are two different best experienced payoff dynamics that are applicable to it. The baseline, \emph{one-population} dynamics presented in Section \ref{sec:model} (specifically, (\ref{eq:BEP})) assumes a single population from which the players are sampled. Moreover, players are not assigned roles in the game; there is no player 1, player 2, etc. An alternative dynamics that can be applied to the game is the \emph{$n$-population} dynamics (\ref{eq:asymmetric BEP}). Although meant for asymmetric games, it can be used to study symmetric games in which the players are arbitrarily numbered, with the $i$-population representing player $i$.\footnote{{The $n$-population dynamics can also capture environments in which players from a single population play in all roles, the roles are observable, and a player conditions her action on her role.}}} {In other evolutionary dynamics it is often the case that stability under the one-population dynamics is not equivalent to stability under the $n$-population dynamics.
(For example, it is well known that the mixed equilibrium of a hawk-dove game is stable under the one-population replicator dynamics but is not stable under the two-population replicator dynamics.) Our next result shows that this is not the case here.} \begin{corollary}\label{cor-stric-tsymmetric} {For $ k \ge 2$, a strict symmetric equilibrium action $a^\ast$ in a symmetric $n$-player game is $S(k)$ asymptotically stable under the one-population dynamics (\ref{eq:BEP}) if and only if everyone playing $a^\ast$ is $S(k)$ asymptotically stable under the $n$-population dynamics (\ref{eq:asymmetric BEP}).} \end{corollary} {The simple proof, which is given in Appendix \ref{subsec:cor-proof}, relies on the fact that our two definitions of an action supporting another action against a strict symmetric equilibrium (action) essentially coincide when the underlying game is symmetric.} \subsection{Applications} {We conclude this section with demonstrating the usefulness of Theorem \ref{thm:asymmetric unstable sn} by applying it to the study of $S(k)$ asymptotic stability of strict equilibria in asymmetric prisoner's dilemma and hawk-dove games.} \begin{table}[h] \centering % \begin{tabular}{c|cc} \multicolumn{3}{c}{Asymmetric Prisoner's Dilemma}\tabularnewline & \textcolor{red}{\emph{$c_{2}$ }} & \textcolor{red}{\emph{$d_{2}$}}\tabularnewline \hline \textcolor{blue}{\emph{$c_{1}$}} & \textcolor{blue}{1}~~,~~\textcolor{red}{1} & \textcolor{blue}{~~-$l_{1}$}~,~\textcolor{red}{1+$g_{2}$}\tabularnewline \textcolor{blue}{\emph{$d_{1}$}} & \textcolor{blue}{1+$g_{1}$}~,~\textcolor{red}{-$l_{2}$~~~} & \textcolor{blue}{0}~~,~~\textcolor{red}{0}\tabularnewline \end{tabular}~~~~~~~~~~~~~~~% \begin{tabular}{c|cc} \multicolumn{3}{c}{Asymmetric Hawk-Dove}\tabularnewline & \textcolor{red}{\emph{$D_{2}$}} & \textcolor{red}{\emph{$H_{2}$}}\tabularnewline \hline \textcolor{blue}{\emph{$D_{1}$}} & \textcolor{blue}{1}~~,~~\textcolor{red}{1} & 
\textcolor{blue}{~~$l_{1}$}~,~\textcolor{red}{1+$g_{2}$}\tabularnewline \textcolor{blue}{\emph{$H_{1}$}} & \textcolor{blue}{1+$g_{1}$}~,~\textcolor{red}{$l_{2}$~~~} & \textcolor{blue}{0}~~,~~\textcolor{red}{0}\tabularnewline \end{tabular} \centering{}\caption{Payoff Matrices of Asymmetric Games (${\color{blue}g_{1},g_{2}},{\color{red}l_{1},l_{2}}>0$; in hawk-dove, also $\color{blue}l_{1},\color{red}l_{2}<1)\label{tab:asymmetric-games}$} \end{table} \paragraph{\textbf{Asymmetric prisoner's dilemma}} {The left-hand side of Table \ref {tab:asymmetric-games} presents the payoff matrix of an asymmetric prisoner's dilemma, in which the unique equilibrium is $d=(d_1,d_2)$, mutual defection. Action $c_1$ cannot support $c_2$ by spoiling, because cooperation increases the payoff of a defecting opponent. It directly (weakly) supports $c_2$ against $d$ iff $l_2 < \frac{1}{k-1}$ (respectively, $\le \frac{1}{k-1}$), because $$u_2(c_1,c_2)+(k-1)\cdot u_2(d_1,c_2)>k \cdot u_2(d)\Leftrightarrow~1+(k-1)\cdot(-l_2)>0.$$ The same holds with 1 and 2 interchanged. This implies, by Theorem \ref{thm:asymmetric unstable sn}, that for any $k \ge 2$ mutual defection is $S(k)$ asymptotically stable if $\max(l_1,l_2)>\frac{1}{k-1}$ and is not $S(k)$ asymptotically stable if $\max(l_1,l_2)<\frac{1}{k-1}.$ Note that, if $l_1=l_2$, then in accordance with Corollary \ref{cor-stric-tsymmetric} the condition for stability coincides with that of the one-population model in the symmetric prisoner's dilemma. } \paragraph{\textbf{Asymmetric hawk-dove}} {The right-hand side of Table \ref {tab:asymmetric-games} presents the payoff matrix of the asymmetric hawk-dove game. The game admits two strict equilibria, $(D_1,H_2)$ and $(H_1,D_2)$, in which one player plays hawk and the other plays dove. 
Observe that action $D_2$ can $S(k)$ support action $H_1$ against $(D_1,H_2)$ only by direct support, which is obtained iff \[ u_{1}\left(H_{1},D_{2}\right)+\left(k-1\right)\cdot u_{1}\left(H_{1},H_{2}\right)>k\cdot u_{1}\left(D_{1},H_{2}\right)\,\Leftrightarrow\,1+g_{1}>k\cdot l_{1}. \] Action $H_1$ can $S(k)$ support action $D_2$ against $(D_1,H_2)$ only by spoiling, which is obtained iff \[ k\cdot u_{2}\left(D_{1},D_{2}\right)>u_{2}\left(H_{1},H_{2}\right)+\left(k-1\right)\cdot u_{2}\left(D_{1},H_{2}\right)\,\Leftrightarrow\,k>\left(k-1\right)\cdot\left(1+g_{2}\right)\,\Leftrightarrow\,g_{2}<\frac{1}{k-1}. \] By Theorem \ref{thm:asymmetric unstable sn}, this implies that $(D_1,H_2)$ is asymptotically stable if $g_{1}<k\cdot l_{1}-1$ or $g_{2}>\frac{1}{k-1}$ and is not asymptotically stable if $g_{1}>k\cdot l_{1}-1$ and $g_{2}<\frac{1}{k-1}$. The conditions for asymptotic stability of the other strict equilibrium are obtained by interchanging 1 and 2. } \addcontentsline{toc}{section}{\protect\numberline{}Appendix} \renewcommand{\thesection}{\Alph{section}} \setcounter{section}{0} \section*{Appendix} \section{Proofs}\label{sec:proof-Thm1} \subsection{Proof of \cref{thm:BEPdynamic}} Recall that $p \equiv \alpha_c$ and $1-p \equiv \alpha_d$ denote the proportion of agents in the population playing actions $c$ and $d$, respectively. An agent's $c$-sample includes $k$ actions of the opponents. If $j \in \{0,...,k\}$ of these are $c$, then the agent's mean payoff (when playing $c$) is $j-(k-j)l$. Similarly, if $j' \in \{0, ... ,k\}$ of the sampled actions in the agent's $d$-sample are $c$, then the mean payoff (when playing action $d$) is $j'(1+g)$. The difference between the two payoffs is \[ (j-(k-j)l)-j'(1+g)=j-j'-((k-j)l+j'g). \]This expression is clearly negative if $j \le j'$. 
If $j \ge j'+1$, it is positive, since $(k-j)l+j'g \le \frac{k-j+j'}{k-1} \le 1$ and the first inequality becomes an equality only if $j=k$ and $j'=0$ while the second one does so only if $j = j'+1$. These conclusions prove that the $c$-sample yields a superior payoff iff it includes more cooperations than the $d$-sample does. As the number of cooperators in each sample has a binomial distribution with parameters $k$ and $p$, we conclude that $w_{c,k} = Win(k,p)$.
\subsection{Proof of \cref{thm:claim1}}\label{sec:A1} A binomial random variable with parameters $k,p$ has a degenerate distribution if $p\in \{0,1\}$, and so $Win(k,0)=Win(k,1)=0$. From Eq. \eqref{eq:hn} we get $h_{k}(0) =0-0=0, h_{k}(1)=0-1=-1$. To show that $h_k'(0) > 0$ for $k = 2,3,4, \dots$, we use the fact that \begin{align*} h_k(p) & = Win(k,p)-p = 0.5\left(1-Tie(k,p)\right)-p=0.5\Big(1-\sum_{j=0}^k (f_{k,p}(j))^2\Big)-p\\ & = 0.5\left(1-\left((1-p)^{2k}+O(p^2)\right)\right)-p ~~~~~~~(O(p^2)\ \text{denotes terms of degree} \geq 2)\\ & = 0.5\left(1-\left(1-2kp+O(p^2)\right)\right)-p = kp - p + O(p^2) = (k-1)p + O(p^2). \end{align*} From the above expression, it follows that $h_k'(0) = k-1>0$ for $k>1$.
\subsection{Proof of \cref{thm:claim2}}\label{sec:A2} Let $\{A_{j,p}\}_{j=1}^k,$ $\{B_{j,p}\}_{j=1}^k$ be $2k$ independent Bernoulli random variables with parameter $p.$ Then $X_{k,p} = \sum_{j=1}^k A_{j,p}$ and $Y_{k,p} = \sum_{j=1}^k B_{j,p}$ are i.i.d. binomial random variables with parameters $k$ and $p.$ Eq. \eqref{eq:hn} can be expressed as \begin{align}\label{eq:555} h_k(p) & = P(X_{k,p} > Y_{k,p}) - p \nonumber \\ & = \frac{1}{2}\left(P(X_{k,p} > Y_{k,p}) + P(X_{k,p} < Y_{k,p})\right) - p \hspace{0.1in} (\text{since} \ X_{k,p} \ \text{and} \ Y_{k,p} \ \text{are} \ i.i.d. ) \nonumber \\ & = \frac{1}{2}\left(1-P(X_{k,p} = Y_{k,p})\right) - p.
\end{align} \noindent For $j = 1,2,\dots,k,$ let $Z_{j,p} = A_{j,p} - B_{j,p}.$ Clearly, $\{Z_{j,p}\}_{j=1}^k$ are i.i.d., with distribution given by \[ P(Z_{j,p} = -1) = P(Z_{j,p} = 1) = pq \ \text{ and } \ P(Z_{j,p} = 0) = 1-2pq, \] where $q=1-p.$ Consider the characteristic function $\varphi_k(\cdot;p)$ of the random variable $Z_p^k = X_{k,p}-Y_{k,p} = \sum_{j=1}^k Z_{j,p}:$ \begin{align*} \varphi_k(t;p) & = \mathbb{E}\left[e^{itZ_p^k}\right] = \mathbb{E}\left[e^{it\sum_{j=1}^k Z_{j,p}}\right] = \left(\mathbb{E}[e^{itZ_{1,p}}]\right)^k \\ & = \left(e^{it(-1)}pq + e^{it(1)}pq + e^{it(0)}(1-2pq)\right)^k = \left(1+pq(e^{-it}+e^{it}-2)\right)^k \\ & = \left(1+2pq(\cos(t)-1)\right)^k \hspace{0.45in} (\text{since} \ e^{-it}+e^{it} = 2\cos(t)) \\ & = \left(1-4p(1-p)\sin^2\left(\frac{t}{2}\right)\right)^k \hspace{0.3in} \left(\text{since} \ q=1-p \ \text{and} \ \cos(t) = 1-2\sin^2\left(\frac{t}{2}\right)\right). \end{align*} The expression raised to the power $k$ in the last line is convex as a function of $p$, lies between $0$ and $1$, and, if $0 < p < 1$ and $t$ is not a whole multiple of $\pi$, differs from both $0$ and $1$. Therefore, the same is true for $\varphi_k(t;p)$; in particular, if $0 < p <1$ and $t$ is not a whole multiple of $\pi,$ then $\varphi_k(t;p) > \varphi_{k+1}(t;p)$ and $\lim_{k \rightarrow \infty}\varphi_k(t;p) = 0.$ It follows, in view of Eq. \eqref{eq:555} and the fact that (see \cref{thm:fact1}) \[ P(X_{k,p} = Y_{k,p}) = P(Z_p^k = 0) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\varphi_k(t;p)dt, \] that the function $h_k$ is concave and, for $0<p<1,$ the sequence $\left(h_k(p)\right)_{k=1}^{\infty}$ is strictly increasing and converges to $\frac{1}{2} - p.$ This completes the proof.
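The properties just established are easy to confirm numerically. The following is a minimal sketch in Python (the function name \texttt{h} and the tolerances are ours); it computes $h_k(p)$ from the tie probability, as in Eq.~\eqref{eq:555}, and checks monotonicity in $k$, the bound $h_k(p)<\frac{1}{2}-p$, the boundary values, and the slope $h_k'(0)=k-1$:

```python
from math import comb

def h(k, p):
    # h_k(p) = P(X > Y) - p for i.i.d. X, Y ~ Binomial(k, p);
    # by symmetry, P(X > Y) = (1 - P(X = Y)) / 2.
    pmf = [comb(k, j) * p**j * (1 - p)**(k - j) for j in range(k + 1)]
    tie = sum(q * q for q in pmf)       # P(X = Y)
    return (1 - tie) / 2 - p

p = 0.3
vals = [h(k, p) for k in range(1, 30)]
assert all(a < b for a, b in zip(vals, vals[1:]))   # increasing in k
assert all(v < 0.5 - p for v in vals)               # bounded by 1/2 - p
assert h(5, 0.0) == 0.0 and h(5, 1.0) == -1.0       # boundary values
assert abs(h(4, 1e-6) / 1e-6 - 3) < 1e-3            # slope at 0 is k - 1
```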
\begin{fact} \label{thm:fact1} $P(Z_p^k = 0) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\varphi_k(t;p)dt.$ \end{fact} \begin{factproof} From the definition of $\varphi_k(t;p),$ we have the following: \begin{align*} \int_{-\pi}^{\pi}\varphi_k(t;p)dt & = \int_{-\pi}^{\pi}\mathbb{E}\left[e^{itZ_p^k}\right]dt = \mathbb{E}\left[\int_{-\pi}^{\pi}e^{itZ_p^k}dt \right] = \mathbb{E}\left[\int_{-\pi}^{\pi}e^{itZ_p^k}\left(\mathds{1}_{\{Z_p^k = 0\}}+ \mathds{1}_{\{Z_p^k \neq 0\}}\right)dt\right] \\ & = \mathbb{E}\left[\int_{-\pi}^{\pi}e^{itZ_p^k}\mathds{1}_{\{Z_p^k = 0\}}dt\right] + \mathbb{E}\left[\int_{-\pi}^{\pi}e^{itZ_p^k}\mathds{1}_{\{Z_p^k \neq 0\}}dt\right] \\ & = \mathbb{E}\left[\int_{-\pi}^{\pi}1 \cdot \mathds{1}_{\{Z_p^k = 0\}}dt\right] + \mathbb{E}\left[0\right] = \mathbb{E}\left[2\pi \cdot \mathds{1}_{\{Z_p^k = 0\}}\right] = 2\pi\cdot P(Z_p^k = 0). \end{align*} From the above series of equalities, it follows that $P(Z_p^k = 0) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\varphi_k(t;p)dt.$ \end{factproof} \subsection{Proof of \cref{thm:unstable sn}}\label{sec:B3} For completeness, we present all details of the proof, although various steps are analogous to arguments presented in the proofs of \citet[Section 5]{sandholm2020stability}. Suppose, first, that the game is generic. 
Let $T$ be the nonnegative $|A^\ast| \times |A^\ast|$ matrix whose element in row $a'$ and column $a$ is \[ T_{a' a} = \begin{cases} 2 \ \text{if action } a \text{ double } S(k) \text{ supports action } a' \text{ against } a^\ast \\ 1 \ \text{if action } a \text{ single } S(k) \text{ supports action } a' \text{ against } a^\ast \\ 0 \ \text{if action } a \text{ does not } S(k) \text{ support action } a' \text{ against } a^\ast \\ \end{cases} \] We will now compute the Jacobian of the $k$-payoff sampling dynamic \eqref{eq:BEP} in the (monomorphic population state corresponding to the) strict symmetric equilibrium action $a^{*}.$ Suppose that the frequency $\alpha_{a^{*}}$ of action $a^{*}$ in the population is $1-\epsilon$, where $\epsilon>0$ is a small number. Denote by $\alpha^{*} \equiv \alpha|_{A^{*}}$ the frequencies of the actions in the set $A^{*}$, that is, $\alpha^{*}_{a} = \alpha_{a}$ for all $a \in A^{*}.$ Clearly, $\sum_{a \in A^{*}}\alpha^{*}_{a} = \epsilon$, which implies that the Euclidean norm of $\alpha^{*}$ is of order $\epsilon$, that is, $|\alpha^{*}| = \left(\sum_{a \in A^{*}}\alpha^{*2}_{a}\right)^{\frac{1}{2}} = O(\epsilon).$ The probability that a non-equilibrium action $a' \in A^\ast$ yields the best payoff is roughly equal to the probability that it yields a higher payoff than the strict symmetric equilibrium action $a^\ast$ does when both actions are tested $k$ times in the population state $\alpha$. When $a' \in A^\ast$ is tested $k$ times, with a very high probability (of $(1-\epsilon)^{k(n-1)}$) it encounters the equilibrium action $a^\ast$ each time. The probability that $a^\ast$ is encountered $k-2$ or fewer times is of order $\epsilon^2$ or higher, and these higher-order terms can be neglected for stability analysis. 
Action $a'$ yields a higher payoff than $a^\ast$ does in the following two cases:

Case 1: When $a'$ is tested, one of the $k(n-1)$ opponents plays a non-equilibrium action $a$ and the remaining opponents play $a^\ast.$ Action $a'$ obtains the highest mean payoff if $a$ directly $S(k)$ supports it against $a^\ast$, that is, $u(a', a,a^\ast,\dots,a^\ast) + (k-1)\cdot u(a', a^\ast,\dots,a^\ast) > k\cdot u(a^\ast, a^\ast,\dots,a^\ast)$.

Case 2: When $a^\ast$ is tested, one of the $k(n-1)$ opponents plays a non-equilibrium action $a$ and the others play $a^\ast.$ Action $a'$ obtains the maximal mean payoff if $a$ $S(k)$ supports it by spoiling $a^\ast$, that is, $ k\cdot u(a', a^\ast,\dots,a^\ast) > u(a^\ast, a,a^\ast,\dots,a^\ast)+(k-1) \cdot u(a^\ast,\dots,a^\ast)$ and $u(a', a^\ast,\dots,a^\ast)>u(b,a^\ast,\dots,a^\ast)$ for all $b \notin \{a', a^\ast\}$.

The probability that action $a'$ yields the best payoff is therefore given by \begin{align}\label{eq:BEP555} w_{a',k}(\alpha) & = k(n-1)\sum_{a \in A^\ast}T_{a' a}\alpha_{a} + O(|\alpha^\ast|^2), \end{align} and the $k$-payoff sampling dynamic \eqref{eq:BEP} can be written as \begin{align*} \dot{\alpha}_{a'} & = w_{a', k}(\alpha) - \alpha_{a'} = k(n-1)\sum_{a \in A^\ast}T_{a' a}\alpha_{a} - \alpha_{a'}+ O(|\alpha^\ast|^2). \end{align*} In matrix notation, \begin{equation}\label{eq:nfreq} \dot{\alpha}^\ast = f(\alpha^\ast) \equiv (k(n-1)T-I)\alpha^\ast + O(|\alpha^\ast|^2), \end{equation} where $I$ is the $|A^\ast| \times |A^\ast|$ identity matrix and $O(|\alpha^\ast|^2)$ here is an $|A^\ast|$-dimensional vector with elements of order $|\alpha^\ast|^2$ or higher. Let $J$ denote the Jacobian matrix of $f$ evaluated at the origin: \begin{equation*} J = \left.\frac{\partial f(\alpha^\ast)}{\partial \alpha^\ast}\right|_{\alpha^\ast = \underbrace{(0, 0, \dots , 0)}_{|A^\ast| \ \text{zeros}}} = k(n-1)T-I.
\end{equation*} The asymptotic stability of the system \eqref{eq:nfreq} can be analyzed by examining the eigenvalues of the Jacobian matrix $J.$ A sufficient condition for $a^\ast$ to be $S(k)$ asymptotically stable is that all the eigenvalues of $J$ have negative real parts (see, e.g., \citealp[Corollary 8.C.2]{sandholm2010population}). A sufficient condition for it \emph{not} to be $S(k)$ asymptotically stable is that at least one of the eigenvalues has a positive real part. The first condition holds, in particular, if the only eigenvalue of $T$ is zero, in other words, if the spectral radius $\rho$ of that matrix is $0$, as this condition means that the only eigenvalue of $J$ is $-1$. The second condition holds if $\rho\ge 1$. This is because the spectral radius of a nonnegative matrix is an eigenvalue with a nonnegative eigenvector (\citealp{johnson1985matrix}, Theorem 8.3.1), and so $\rho\ge 1$ implies that $J$ has the eigenvalue $k(n-1)\rho -1\ge 2\cdot 1-1>0$, with a corresponding nonnegative eigenvector. It therefore suffices to show that $\rho=0$ holds if condition I in the theorem holds and $\rho\ge 1$ holds if the condition does not hold. {Observe that conditions I and I' can be rephrased as follows: every principal submatrix of $T$ has a row or column, respectively, where all entries are zero. Therefore, to complete the proof of the theorem, it remains only to establish the following fact (which in particular proves the equivalence of I and I').} \begin{fact}\label{thm:fact3} Let $M$ be a square matrix of nonnegative integers, and let $\rho$ be its spectral radius. If every principal submatrix of $M$ has a row of zeros, then $\rho=0$. Otherwise, $\rho\ge 1$. The same is true when we replace ``row'' by ``column.'' \end{fact} \begin{factproof} Suppose that $\rho>0$. Let $v$ be a corresponding nonnegative right eigenvector. 
Since $(Mv)_i = \rho v_i > 0$ for every index $i$ with $v_i > 0$, the set $\alpha$ of all such indices defines a principal submatrix $M'$ (obtained from $M$ by deleting all rows and all columns with indices not in $\alpha$) with the property that every row includes at least one nonzero entry. Conversely, suppose that $M$ has a principal submatrix $M'$, defined by some set of indices $\alpha$, with the above property. Let $v$ be a column vector of $0$'s and $1$'s where an entry is $1$ if and only if its index lies in $\alpha$. It is easy to see that $Mv \ge v$. This vector inequality implies that $\rho\ge 1$ (\citealp{johnson1985matrix}, Theorem 8.3.2). The proof for ``column'' is obtained by replacing $M$ by its transpose. \end{factproof} {We now drop the assumption that the game is generic. This change means that cases 1 and 2 need to be extended by replacing the respective strict inequalities with weak ones. When a weak inequality in case 1 or 2 holds as equality, the probability of moving to action $a'$ depends on the tie-breaking rule. Adding these probabilities to $T_{a' a}$, for all $a',a \in A^\ast$, defines a new matrix $\bar{T} \ge T$, which replaces $T$ in \eqref{eq:nfreq}. If the probabilities were replaced by $1$'s, the result would be a matrix $\bar{\bar{T}} \ge \bar{T}$, where all entries are integers. In view of Fact \ref{thm:fact3}, conditions II and II' are both equivalent to the condition that the spectral radius of $\bar{\bar{T}}$ is $0$. That condition implies that the spectral radius of $\bar{T}$ is also $0$, which implies that $a^\ast$ is $S(k)$ asymptotically stable.} If condition I or I' does not hold, then the spectral radius of $T$ is $1$ or greater. The same is then true for $\bar{T}$, which implies that $a^\ast$ is not $S(k)$ asymptotically stable. 
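Fact \ref{thm:fact3} can also be verified by brute force on small instances. The sketch below (Python; the helper names are ours) uses the equivalence, valid for nonnegative matrices, between spectral radius $0$ and nilpotency ($M^n=0$), and checks the zero-row condition against it for all $3\times 3$ matrices with entries in $\{0,1,2\}$:

```python
from itertools import combinations, product

def row_condition(M):
    # every nonempty principal submatrix of M has an all-zero row
    n = len(M)
    return all(
        any(all(M[i][j] == 0 for j in S) for i in S)
        for r in range(1, n + 1)
        for S in combinations(range(n), r)
    )

def nilpotent(M):
    # for a nonnegative matrix, spectral radius 0 <=> M^n = 0
    n = len(M)
    A = M
    for _ in range(n - 1):
        A = [[sum(A[i][k] * M[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return all(A[i][j] == 0 for i in range(n) for j in range(n))

# exhaustive check of the equivalence stated in Fact 3
for flat in product(range(3), repeat=9):
    M = [list(flat[0:3]), list(flat[3:6]), list(flat[6:9])]
    assert row_condition(M) == nilpotent(M)
```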
\subsection{Proof of \cref{thm:asymmetric unstable sn}}\label{sec:asymmetric proof} {Let $T$ be the $|A^\ast| \times |A^\ast|$ matrix defined exactly as in the proof of \cref{thm:unstable sn}, and denote by $\alpha^{*} \equiv \alpha|_{A^{*}}$ the vector of dimension $\sum_{i=1}^n{(|A_i|-1)}$ whose components are the frequencies of the actions in the set $A^{*}$ (that is, the players' non-equilibrium actions). By arguments similar to those employed in that proof, if the game is generic, then the Jacobian $J$ of the $k$-payoff sampling dynamic \eqref{eq:asymmetric BEP} is given by $J=kT-I$, so that $$ \dot{\alpha}^\ast = (kT-I)\alpha^\ast + O(|\alpha^\ast|^2).$$ The rest of the proof, including the treatment of the non-generic case, is essentially the same as for \cref{thm:unstable sn}. } \subsection{{Proof of Proposition \ref{pro:k-2-3}: Global Stability for $k \in \{2,3\}$}}\label{sub-k-2-3} In this subsection, we characterize the $S(2)$ {and $S(3)$ globally asymptotically stable equilibria} explicitly for all parameter configurations $g, l$ {in the prisoner's dilemma}. {$S(2)$ analysis:} In Theorem \ref{thm:mainresult} we solved the case $g, l < 1.$ Here, we consider the remaining three cases. Recall that, when testing each action twice, we have: \[ \text{ When a player samples }\ c \text{ she} \begin{cases} \text{gets} \ 2 \ \text{with probability} \ p^2 \\ \text{gets} \ 1-l \ \text{with probability} \ 2p(1-p) \\ \text{gets} \ -2l \ \text{with probability} \ (1-p)^2 \end{cases} \] \[ \text{When a player samples}\ d \text{ she} \begin{cases} \text{gets} \ 2(1+g) \ \text{with probability} \ p^2 \\ \text{gets} \ 1+g \ \text{with probability} \ 2p(1-p) \\ \text{gets} \ 0 \ \text{with probability} \ (1-p)^2 \end{cases} \] \textbf{Case I:} $l < 1 < g.$ Action $c$ has a higher mean payoff iff the $c$-sample includes at least one cooperation and the $d$-sample does not include any cooperation. 
Thus, the $2$-\textit{payoff sampling dynamic} in this case is given by \begin{align*} \dot{p}& = p^2(1-p)^2 + 2p(1-p)(1-p)^2 - p = (1-p)^2(p^2 + 2p(1-p)) - p\\ & = (1-p)^2(1-(1-p)^2) - p = (1-p)^2 - (1-p)^4 - p. \end{align*} The rest points of the above dynamic are $0$ and $0.245.$ It is straightforward to verify that $0$ is unstable and that $0.245$ is globally stable. \textbf{Case II:} $g < 1 < l.$ Action $c$ has a higher mean payoff iff the $c$-sample includes two cooperations, while the $d$-sample includes at most one cooperation. The $2$-\textit{payoff sampling dynamic} in this case is given by $\dot{p} = p^2(1-p^2) - p$. The unique rest point is $0,$ which is globally stable. \textbf{Case III:} $g, l > 1.$ Action $c$ has a higher mean payoff iff the $c$-sample includes two cooperations, while the $d$-sample does not include any cooperation. The $2$-\textit{payoff sampling dynamic} in this case is given by $\dot{p} = p^2(1-p)^2 - p$. As in the previous case, the unique rest point is $0,$ which is globally stable. {$S(3)$ analysis:} {In Theorem \ref{thm:mainresult} we solved the case $g, l < \frac{1}{2}.$ Here, we consider the remaining cases. 
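The interior rest points reported in these case analyses, such as $0.245$ in the $S(2)$ Case I above and $0.323$ in the $S(3)$ Case I below, can be double-checked by solving $\dot p = 0$ numerically. A minimal sketch in Python (bisection; the helper names are ours):

```python
def bisect(f, lo, hi, tol=1e-12):
    # bisection on [lo, hi], assuming f changes sign there
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# S(2), Case I (l < 1 < g):  pdot = (1-p)^2 - (1-p)^4 - p
s2_case1 = lambda p: (1 - p)**2 - (1 - p)**4 - p
assert abs(bisect(s2_case1, 0.1, 0.9) - 0.245) < 5e-4
assert s2_case1(1e-4) > 0   # pdot > 0 just to the right of 0: 0 is unstable

# S(3), Case I:  pdot = p^2 (1-p)^2 (3-2p)(1+2p) + 3p(1-p)^5 - p
s3_case1 = lambda p: (p**2 * (1 - p)**2 * (3 - 2*p) * (1 + 2*p)
                      + 3 * p * (1 - p)**5 - p)
assert abs(bisect(s3_case1, 0.1, 0.9) - 0.323) < 5e-4
```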
Recall that, when testing each action thrice, we have:} \[ \text{When a player samples}\ c \text{ she} \begin{cases} \text{gets} \ 3 \ \text{with probability} \ p^3 \\ \text{gets} \ 2-l \ \text{with probability} \ 3p^2(1-p) \\ \text{gets} \ 1-2l \ \text{with probability} \ 3p(1-p)^2 \\ \text{gets} \ -2l \ \text{with probability} \ (1-p)^3 \end{cases} \] \[ \text{When a player samples}\ d \text{ she} \begin{cases} \text{gets} \ 3(1+g) \ \text{with probability} \ p^3 \\ \text{gets} \ 2(1+g) \ \text{with probability} \ 3p^2(1-p) \\ \text{gets} \ 1+g \ \text{with probability} \ 3p(1-p)^2 \\ \text{gets} \ 0 \ \text{with probability} \ (1-p)^3 \end{cases} \] { \textbf{Case I:} $l < \frac{1}{2}, \frac{1}{2} < g < 2$, and $g+l < 1.$ Action $c$ has a higher mean payoff iff, when the $c$-sample includes at least two cooperations, the $d$-sample includes at most one cooperation, or when the $c$-sample includes exactly one cooperation, the $d$-sample does not include any cooperation. Thus, the $3$-\textit{payoff sampling dynamic} in this case is given by \begin{align*} \dot{p} &= p^3(3p(1-p)^2+(1-p)^3) + 3p^2(1-p)(3p(1-p)^2+(1-p)^3) + 3p(1-p)^2 (1-p)^3 - p\\ & = p^2(1-p)^2(3-2p)(1+2p) + 3p(1-p)^5 - p. \end{align*} The rest points of the above dynamic are $0$ and $0.323.$ It is straightforward to verify that $0$ is unstable and that $0.323$ is globally stable.} { \textbf{Case II:} $l < \frac{1}{2}, \frac{1}{2} < g < 2$, and $g+l > 1.$ Action $c$ has a higher mean payoff iff, when the $c$-sample includes three cooperations, the $d$-sample includes at most one cooperation, or when the $c$-sample includes either one or two cooperations, the $d$-sample does not include any cooperation. Thus, the $3$-\textit{payoff sampling dynamic} in this case is given by \begin{align*} \dot{p} & = p^3(3p(1-p)^2+(1-p)^3) + 3p^2(1-p)(1-p)^3 + 3p(1-p)^2 (1-p)^3 - p \\ & = p^3(1-p)^2(1+2p) +3p(1-p)^4 - p. 
\end{align*} The rest points of the above dynamic are $0$ and $0.250.$ It is straightforward to verify that $0$ is unstable and that $0.250$ is globally stable.} { \textbf{Case III:} $l < \frac{1}{2}, g > 2.$ Action $c$ has a higher mean payoff iff, when the $c$-sample includes at least one cooperation, the $d$-sample does not include any cooperation. Thus, the $3$-\textit{payoff sampling dynamic} in this case is given by \begin{align*} \dot{p} & = p^3(1-p)^3 + 3p^2(1-p)(1-p)^3 + 3p(1-p)^2 (1-p)^3 - p \\ & = (1-(1-p)^3)(1-p)^3 - p = (1-p)^3 - (1-p)^6 - p. \end{align*} The rest points of the above dynamic are $0$ and $0.245.$ It is straightforward to verify that $0$ is unstable and that $0.245$ is globally stable.} { \textbf{Case IV:} $\frac{1}{2} < l < 2, g < \frac{1}{2}$ and $g+l<1.$ Action $c$ has a higher mean payoff iff, when the $c$-sample includes three cooperations, the $d$-sample includes at most two cooperations, or when the $c$-sample includes exactly two cooperations, the $d$-sample includes at most one cooperation. Thus, the $3$-\textit{payoff sampling dynamic} in this case is given by \begin{align*} \dot{p} & = p^3(1-p^3) + 3p^2(1-p)\left(3p(1-p)^2+(1-p)^3\right) - p \\ & = p^3(1-p^3) + 3p^2(1-p)^3(1+2p) - p. \end{align*} The unique rest point is $0,$ which is globally stable.} { \textbf{Case V:} $\frac{1}{2} < l < 2, g < \frac{1}{2}$, and $g+l>1.$ Action $c$ has a higher mean payoff iff, when the $c$-sample includes three cooperations, the $d$-sample includes at most two cooperations, or when the $c$-sample includes exactly two cooperations, the $d$-sample does not include any cooperation. Thus, the $3$-\textit{payoff sampling dynamic} in this case is given by \begin{align*} \dot{p} & = p^3(1-p^3) + 3p^2(1-p)(1-p)^3 - p = p^3(1-p^3) + 3p^2(1-p)^4 - p.
\end{align*} The unique rest point is $0,$ which is globally stable.} { \textbf{Case VI:} $\frac{1}{2} < l < 2, \frac{1}{2} < g < 2.$ Action $c$ has a higher mean payoff iff, when the $c$-sample includes three cooperations, the $d$-sample includes at most one cooperation, or when the $c$-sample includes exactly two cooperations, the $d$-sample does not include any cooperation. Thus, the $3$-\textit{payoff sampling dynamic} in this case is given by \begin{align*} \dot{p} & = p^3(3p(1-p)^2+(1-p)^3) + 3p^2(1-p)(1-p)^3 - p \\ & = p^3(1-p)^2(1+2p) +3p^2(1-p)^4 - p. \end{align*} The unique rest point is $0,$ which is globally stable.} { \textbf{Case VII:} $\frac{1}{2} < l < 2, g > 2.$ Action $c$ has a higher mean payoff iff, when the $c$-sample includes at least two cooperations, the $d$-sample does not include any cooperation. Thus, the $3$-\textit{payoff sampling dynamic} in this case is given by \begin{align*} \dot{p} & = p^3(1-p)^3 + 3p^2(1-p)(1-p)^3 - p = p^2(1-p)^3(3-2p) - p. \end{align*} The unique rest point is $0,$ which is globally stable.} { \textbf{Case VIII:} $l > 2, g < \frac{1}{2}.$ Action $c$ has a higher mean payoff iff, when the $c$-sample includes three cooperations, the $d$-sample includes at most two cooperations. Thus, the $3$-\textit{payoff sampling dynamic} in this case is given by $\dot{p} = p^3(1-p^3) - p.$ The unique rest point is $0,$ which is globally stable.} { \textbf{Case IX:} $l > 2, \frac{1}{2} < g < 2.$ Action $c$ has a higher mean payoff iff, when the $c$-sample includes three cooperations, the $d$-sample includes at most one cooperation. Thus, the $3$-\textit{payoff sampling dynamic} in this case is given by \begin{align*} \dot{p} & = p^3(3p(1-p)^2+(1-p)^3) - p = p^3(1-p)^2(1+2p) - p. \end{align*} The unique rest point is $0,$ which is globally stable.} { \textbf{Case X:} $l > 2, g > 2.$ Action $c$ has a higher mean payoff iff, when the $c$-sample includes three cooperations, the $d$-sample does not include any cooperation.
Thus, the $3$-\textit{payoff sampling dynamic} in this case is given by $\dot{p} = p^3(1-p)^3 - p.$ The unique rest point is $0,$ which is globally stable.} \subsection{Proof of Corollary \ref{cor-stric-tsymmetric}}\label{subsec:cor-proof} { Suppose that action $a^\ast$ is not $S(k)$ asymptotically stable under the one-population dynamics, and so there is a subset $A' \subseteq A\backslash \{a^\ast\}$ such that all actions in $A'$ are supported against $a^\ast$ by actions in $A'$. Let $\bar A'=\dot\bigcup_{i=1}^n{A'}$ be the disjoint union of $n$ copies of $A'.$ It follows immediately from Definitions \ref{def:support} and \ref{def:asymmetric_support} and the symmetry of the game that all actions in $\bar A'$ are supported by actions in $\bar A'$ against $\bar{a}^\ast \equiv (a^\ast,a^\ast,\dots,a^\ast)$. Therefore, the strategy profile $\bar{a}^\ast$ is not $S(k)$ asymptotically stable under the $n$-population dynamics.} {Conversely, suppose that the last conclusion holds, so that there is a subset $A' \subseteq A^\ast \equiv \dot\bigcup_{i=1}^n{(A \backslash \{a^\ast\})}$ such that all actions in $A'$ are supported by actions in $A'$ against $\bar{a}^\ast$. Let $\bar A'=\{a\in A \mid \text{there is some player } i \text{ and a corresponding action } a_i \in A'\text{ with }a=a_i\}$ be the set of all actions that are included in $A'$ for at least one player. It is easy to see that all actions in $\bar A'$ are supported against $a^\ast$ by actions in $\bar A'$, and so action $a^\ast$ is not $S(k)$ asymptotically stable under the one-population dynamics.} \section{{Values of $g$ and $l$ in Prisoner's Dilemma Experiments}}\label{sec-experiments-g-l} { Figure \ref{figure_gl} shows the values of $g$ and $l$ in the 29 experiments of the one-shot prisoner's dilemma (taken from 16 papers) as summarized in the meta-study of \citet[Table A.3]{mengel2018risk}. 
The figure shows that most of these experiments satisfy the condition for global stability of partial cooperation for $k=2$ (namely, $l<1$), and quite a few of them also satisfy the condition for $k=3$ ($l<0.5$).} \begin{figure} \caption{{Values of $g$ and $l$ in the 29 Experiments Summarized in \citet[Table A.3]{mengel2018risk} \label{figure_gl}}} \includegraphics[scale=0.6]{g-l-experiemnts.png} \end{figure} \newpage \begin{singlespace}\renewcommand{\baselinestretch}{1}\small\normalsize\bibliography{mybibtexdatabase}\end{singlespace} \end{document}
https://arxiv.org/abs/hep-th/9311094
The differential geometry of Fedosov's quantization
B. Fedosov has given a simple and very natural construction of a deformation quantization for any symplectic manifold, using a flat connection on the bundle of formal Weyl algebras associated to the tangent bundle of a symplectic manifold. The connection is obtained by affinizing, nonlinearizing, and iteratively flattening a given torsion free symplectic connection. In this paper, a classical analog of Fedosov's operations on connections is analyzed and shown to produce the usual exponential mapping of a linear connection on an ordinary manifold. A symplectic version is also analyzed. Finally, some remarks are made on the implications for deformation quantization of Fedosov's index theorem on general symplectic manifolds.
\section{Introduction} \label{sec-intro} In a remarkable paper \cite{fe:simple}, B. Fedosov presented a simple and very natural construction\footnote{Actually, this construction appeared in several earlier papers, such as \cite{fe:formal} and \cite{fe:index}.} of a deformation quantization for any symplectic manifold. The construction begins with a linear symplectic connection on the tangent bundle of a manifold and proceeds by iteration to produce a flat connection on the associated bundle of formal Weyl algebras. The aim of this paper is to give some geometric insight into Fedosov's construction by exploring some of its purely classical analogs. Our basic idea is that Fedosov's quantization procedure involves a sort of ``quantum exponential mapping.'' The paper is divided into two parts. The first part (Sections \ref{sec-deformation} through \ref{sec-further}) is mostly discursive and follows fairly closely the lecture given by the second author at the conference in honor of Bert Kostant's 60th birthday. The second part (Sections \ref{sec-construction} through \ref{sec:sympl}), more technical and logically self-contained, develops the proofs of some of the statements in the first part. \section{Deformation quantization and symplectic connections } \label{sec-deformation} The basic deformation quantization problem is to put a noncommutative (associative) product structure on the formal power series ring ${\cal A} [[\hbar]]$ over a commutative Poisson algebra ${\cal A} $. The physical origins of this particular formulation of the classical limit are obscure to this author (see Section 16.23 in \cite{bo:quantum} for an occurrence in a textbook), but the origins of its recent intensive study seem to lie in the work of Berezin \cite{be:quantization} and Bayen {\em et al} \cite{bffls:deformation}.
The deformed product, which is determined by the product of elements of ${\cal A} $ and can be written in the form $$ f*_{\hbar} g = \sum _{j} \hbar ^{j}B_{j}(f,g)$$ for bilinear operators $B_{j}$ on ${\cal A} $, is assumed to start with the original commutative product followed by the Poisson bracket, i.e., $$ f*_{\hbar} g = fg + (i\hbar/2)\{f,g\}+\cdots $$ It was already noted in \cite{bffls:deformation} that, when ${\cal A} $ is the algebra of $C^{\infty}$ functions on a symplectic manifold $P$, the term of order $\hbar^{2}$ in the deformed product is closely related to a torsion-free {\em symplectic connection} on $P$. If there exists such a connection which is {\em flat}, one can immediately write down a solution of the deformation quantization problem. In fact, $P$ is then covered by coordinate charts for which the transition maps are affine symplectic transformations, which leave invariant the Moyal product \cite{bffls:deformation} on $C^{\infty} ({\Bbb R} ^{2n})[[\hbar]]$. (The algebra with this product is often called the (smooth) {\bf \em Weyl Algebra}.) When $P$ does not admit a flat symplectic connection (or where such a connection cannot be invariant under an important symmetry group, such as in the case of the hyperbolic plane with its $PSL(2,{\Bbb R} )$ symmetry), it is necessary to ``piece together'' quantizations like the one above in a more elaborate way. The first such construction to work for all symplectic manifolds was that of DeWilde and Lecomte \cite{de-le:existence} who actually piece together quantizations on individual coordinate systems with nonlinear transition maps. A more geometric formulation of the ``piecing construction'' was given by Maeda, Omori, and Yoshioka in \cite{om-ma-yo:weyl}. Their construction has important elements in common with that of Fedosov, so we will describe it in some detail here. 
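For orientation, we recall one standard explicit form of the Moyal product. For $f,g \in C^{\infty} ({\Bbb R} ^{2n})[[\hbar]]$,
$$ f*_{\hbar}g=\sum_{j\geq 0}\frac{1}{j!}\left(\frac{i\hbar}{2}\right)^{j} \Pi^{a_{1}b_{1}}\cdots\Pi^{a_{j}b_{j}}\, (\partial_{a_{1}}\cdots\partial_{a_{j}}f)\, (\partial_{b_{1}}\cdots\partial_{b_{j}}g), $$
where $\Pi$ is the constant Poisson tensor inverse to the symplectic form and repeated indices are summed; the $j=0$ and $j=1$ terms are exactly $fg+(i\hbar/2)\{f,g\}$, as above.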
For any symplectic manifold $P$, we consider its tangent bundle $TP$ with the Poisson structure for which each fibre is a symplectic leaf, carrying the translation invariant symplectic 2-form associated with its structure as a symplectic vector space. It is very simple to quantize this Poisson manifold --- we simply quantize each fibre by the Moyal product, so that the noncommutative algebra $C^{\infty} (TP)[[\hbar]]$ may be identified with the set of ``smooth'' sections of a bundle whose fibre over each point $p$ of $P$ is the smooth Weyl algebra of the tangent space. Geometrically, we think of $C^{\infty} (TP)[[\hbar]]$ as the ``space of functions on the quantized tangent bundle''. To quantize $P$ itself, we may try to embed $C^{\infty} (P)[[\hbar]]$ as a subalgebra of this noncommutative $C^{\infty} (TP)[[\hbar]]$. The embedding onto the functions which are constant on fibres of $TP$ does not work, since this subalgebra is commutative, so we need a different embedding. In geometric terms, we are trying to replace the usual projection of $TP$ to $P$ by a different one. To see what this new projection should look like, we may examine the simplest case, where $P$ is a symplectic vector space. Denoting by $x$ some linear coordinates on $P$ and $(x,y)$ the corresponding coordinates on $TP$, the multiplication on the quantized $TP$ is just the ``Moyal product in $y,$ with $x$ as a parameter.'' If we identify each function $f(x)$ on $P$ with the function $Lf(x,y)=f(x+y)$ on $TP$, then taking the fibrewise Moyal product of $Lf$ and $Lg$ reproduces the ``Moyal product in $x$'' of $f$ and $g$. The operator $L$ is just pullback by the map $(x,y)\mapsto (x+y),$ which is nothing but the {\em exponential mapping} of $P$ with its flat affine structure. (Here, the deformation parameter $\hbar$ just comes along for the ride.)
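The fibrewise Moyal multiplication can be made completely concrete for polynomial observables in one degree of freedom. The following sketch (Python; the monomial-dictionary representation and helper names are ours, and $\hbar$ is fixed to a numerical value) implements the Moyal product on polynomials in $(x,p)$ and checks the relation $x*_{\hbar}p-p*_{\hbar}x=i\hbar$:

```python
from math import comb, factorial

# Polynomials in (x, p), stored as {(i, j): coeff} meaning coeff * x**i * p**j.
def dx(f):
    return {(i - 1, j): c * i for (i, j), c in f.items() if i > 0}

def dp(f):
    return {(i, j - 1): c * j for (i, j), c in f.items() if j > 0}

def add(f, g):
    out = dict(f)
    for m, c in g.items():
        out[m] = out.get(m, 0) + c
    return {m: c for m, c in out.items() if c != 0}

def mul(f, g):
    out = {}
    for (i, j), a in f.items():
        for (k, l), b in g.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), 0) + a * b
    return out

def power(d, f, n):
    for _ in range(n):
        f = d(f)
    return f

def moyal(f, g, hbar):
    # f * g = sum_n (i*hbar/2)^n / n! *
    #         sum_k (-1)^k C(n,k) (dx^(n-k) dp^k f) (dp^(n-k) dx^k g)
    deg = lambda h: max((i + j for (i, j) in h), default=0)
    out = {}
    for n in range(deg(f) + deg(g) + 1):
        coef = (1j * hbar / 2) ** n / factorial(n)
        for k in range(n + 1):
            piece = mul(power(dx, power(dp, f, k), n - k),
                        power(dp, power(dx, g, k), n - k))
            out = add(out, {m: coef * (-1) ** k * comb(n, k) * c
                            for m, c in piece.items()})
    return out

x, p = {(1, 0): 1}, {(0, 1): 1}
comm = add(moyal(x, p, 0.5), {m: -c for m, c in moyal(p, x, 0.5).items()})
assert comm == {(0, 0): 0.5j}   # x*p - p*x = i*hbar  (here hbar = 0.5)
```

Since all derivatives of a polynomial vanish beyond its total degree, the series is finite and the product is exact; associativity can be spot-checked in the same way.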
It is not hard to see that the same idea works for any symplectic manifold with a flat, torsion--free symplectic connection: pulling back functions from $P$ to $TP$ by the exponential map allows one to pull back the multiplication on the quantized tangent bundle to get the aforementioned Moyal-type deformation quantization of $P$. At this point, one might object that the connection on $P$ might not be complete, in which case its exponential map is not globally defined. In this case, it suffices to restrict the fibrewise Moyal product to the open subset of $TP$ which is the domain of the exponential map. When the connection on $P$ has nonvanishing torsion or curvature, the situation is much worse, in that the set of functions pulled back from $P$ by the exponential map is not closed under the multiplication on the quantized $TP$. Two remedies for this problem have been used in the literature. We may describe them roughly as follows. In \cite{om-ma-yo:weyl}, the local exponential mappings coming from a covering of $P$ by Darboux coordinate systems are ``patched together'' to produce a new projection which is no longer (at least not manifestly) the exponential map of a connection. This new projection is built at the ``quantum level'': that is, what is actually constructed is a patching together of modifications of the local pullback mappings from functions on the base to subalgebras of local sections of the bundle of Weyl algebras. In fact, this description is slightly oversimplified. It is not the bundle of smooth Weyl algebras but rather the bundle of formal Weyl algebras (formal power series around zero of functions on the tangent spaces) which must be used. Also, the parameter $\hbar$ now plays an essential role, in that the pullback of a function on the base is no longer independent of $\hbar$. 
Fedosov \cite{fe:simple}, on the other hand, beginning with a linear symplectic connection having zero torsion but nonvanishing curvature, lifts it to a connection on the bundle of formal Weyl algebras and then modifies the connection on this bundle, with structure Lie algebra the inner derivations, to make it flat. He then shows how the space of parallel sections of this bundle, which is clearly a subalgebra of the algebra of all sections, may be identified with the space $C^{\infty} (P)[[\hbar]]$. The idea which we propose in this paper is that Fedosov's construction can also be interpreted on the level of spaces. The parallel sections of the bundle of formal Weyl algebras are defined by the vanishing of covariant differentiation operators, which are derivations on the space of sections. When the space of sections is considered as the space of functions on the quantized $TP$ (more precisely, on a formal neighborhood of the zero section in the quantized $TP$), these derivations may be thought of as vector fields, determining a distribution on $TP$ which is transverse to the fibres, i.e. an {\bf \em Ehresmann connection}, on $TP.$ The flatness of Fedosov's connection is equivalent to the involutivity of the distribution, or the flatness of the Ehresmann connection. Thus, the parallel sections of the formal Weyl algebra bundle are interpreted as functions on the quantized $TP$ which are constant along the leaves of the quantum foliation which is tangent to the quantum Ehresmann connection. To pull back a function on $P,$ we first identify it with a function on the zero section, then extend it to a function on $TP$ constant along leaves. This is possible if the Ehresmann connection is not tangent to the zero section of $TP,$ but is rather transverse to the zero section, so that each leaf intersects the zero section in a unique point. 
To achieve this, even if the initial linear connection is flat, we must modify it by the addition of a ``translation'' term to obtain a connection with affine structure group. The somewhat mysterious fact that a parallel section for the Fedosov connection is not determined by its value at a single point can also be attributed to the addition of the translational term, which changes the way in which the covariant differential operator behaves with respect to the grading in the formal Weyl algebra. In other words, this fact is related to the non-determination of a $C^{\infty} $ function by its Taylor series at one point. An analogous situation occurs on the bundle of infinite jets of real-valued functions on $P$, which carries a ``flat connection'' for which the parallel sections are the infinite jets of functions on $P$. In a sense, what Fedosov's construction does is to identify this bundle of infinite jets of functions on $P$ with the bundle of formal functions on the fibres of $TP$ and then to pull back the natural flat connection from the former bundle to the latter. There is another way to see the necessity of ``affinizing'' the structure group of the connection. If a nonlinear flat connection on $TP$ has the zero section as a parallel section, then its linearization at the origin would be a flat linear connection. There are, in general, obstructions to the existence of such a connection; by allowing the zero vectors to move, we bypass these obstructions. \section{Classical analogs} \label{sec-classical} In the previous section, we used geometric language to give an intuitive picture of the constructions on quantized formal power series in \cite{fe:simple}. Now we will turn the tables and apply Fedosov's formal constructions in purely classical settings. A more detailed version of the following remarks is contained in the second part of this paper, beginning with Section \ref{sec-construction}. 
Suppose that we are simply given a differentiable manifold $P$ with a torsion-free linear connection on $TP$. This connection may be lifted to a connection on the associated bundle of algebras of formal power series on the tangent spaces of $P.$ If we extend the structure Lie algebra of this connection from the Lie algebra of linear vector fields on ${\Bbb R}^{m}$ ($m$ being the dimension of $P$) to the Lie algebra of formal vector fields, then Fedosov's iteration method can be used to produce, in a canonical way, a flat connection on this bundle for which the corresponding Ehresmann connection is transverse to the zero section. The leaves of the foliation given by the flat Ehresmann connection are the fibres of a ``mapping'' from a formal neighborhood of the zero section in $TP$ to $P$. We denote this formal mapping by $\mbox{EXP}.$ On the other hand, there is another well-known mapping from $TP$ to $P$ determined by a connection, namely the usual exponential mapping $\exp.$ What is the relation between these two mappings? Since both are produced in a canonical manner from the connection, it is natural to guess that they are equal. In fact this is true, as we prove below. The structure of the proof is of some interest. We first prove that, for a real-analytic connection, the flat connection given by Fedosov's iteration is given by convergent power series, so it actually defines a flat real-analytic Ehresmann connection on a neighborhood of the zero section. In this case, we prove by following geodesics that $\mbox{EXP}$ and $\exp$ coincide. Next, we show that $\mbox{EXP}$ can be defined in the purely formal context by ``extension by continuity,'' using the fact that the convergent power series (already the polynomials) are dense in all the power series with respect to the uniformity for which two power series are close if all their coefficients up to some high degree are equal. 
Finally, we conclude that $\mbox{EXP}$ and $\exp$ are equal since they agree on a dense subset. Since all the constructions are local, they then apply on any manifold, without any reference to a real-analytic structure. In a sense, we have shown that the iterative procedure used by Fedosov to ``flatten'' a connection, when applied to an ordinary manifold, is simply another way of constructing the usual exponential mapping. It is in this sense that we are tempted to say that his quantization procedure involves the pullback of functions from a symplectic manifold $P$ to its quantized $TP$ by a quantization of the exponential mapping of a given symplectic connection. Before taking this leap, though, we must be more careful, because the exponential mapping of a symplectic linear connection on a symplectic manifold does NOT in general define symplectic mappings from the tangent spaces with their constant symplectic structures to the manifold with its given symplectic structure. To produce from a symplectic torsion-free linear connection on $TP$ a symplectic nonlinear connection (on a formal neighborhood of the zero section), we must change the structure Lie algebra from the algebra of formal vector fields to the algebra of formal symplectic vector fields. In fact, with a view toward quantization, it is useful to use instead the one-dimensional extension of the latter algebra given by the infinite jets of functions, with the Poisson bracket Lie algebra structure. As is well known, this Lie algebra acts on itself by derivations of both the multiplication and the Poisson bracket. Now the iterative procedure can be applied again in this symplectic classical context to produce (formal) symplectic ``exponential'' mappings from the fibres of $TP$ to $P$. It would be interesting to have a more geometric construction of these mappings. 
Perhaps they are the same as those obtained by starting with the ordinary exponential mappings and then correcting these by using the deformation-method proof of Darboux's theorem in \cite{we:symplectic}. \section{Rigidity of quantization and Fedosov's index theorem} \label{sec-further} In the sections above, we have concentrated on Fedosov's construction of a deformation quantization. This construction was, however, merely the beginning of Fedosov's work, one of whose points of culmination is an index theorem for arbitrary symplectic manifolds \cite{fe:index}. This theorem is a generalization of the Atiyah-Singer theorem, to which it reduces when the symplectic manifold is a cotangent bundle. In this section, we wish to comment on the role which Fedosov's index theorem might play in resolving a fundamental question in the general theory of deformation quantization. The condition $$ f*_{\hbar} g = fg + (i\hbar/2)\{f,g\}+\cdots $$ which describes the $\hbar$ dependence of the product $*_{\hbar}$ says nothing about how the product behaves after the deformation parameter ``leaves the first-order neighborhood of zero.'' This weak condition seems to contrast with the rigidity implied by the special role of rational values of $q=e^{i\hbar}$ in the theory of quantum groups. The question thus arises as to whether the dependence on $\hbar$ should be ``rigidified'' by some supplementary condition(s). 
A possible source of such rigidity is suggested in \cite{we:classical}, where in the special case of quantizations of Moyal type on tori with translation-invariant Poisson structures it is shown that the Schwartz kernel of the bilinear multiplication operator $*_{\hbar}$ satisfies a Schr\"odinger equation in which Planck's constant plays the role of time, and the hamiltonian operator is the Poisson structure extended by translation invariance to become a differential operator on $P\times P.$ Although this kind of evolution equation for quantization may possibly extend to other situations where the notion of translation makes sense (e.g. on Lie groups), it is not at all clear how to extend it to general symplectic manifolds. Another kind of rigidity in the deformation parameter appears in the geometric quantization of K\"ahler manifolds by holomorphic sections of line bundles (and the extension of this quantization to general symplectic manifolds by the method of Toeplitz operators in \cite{bo-gu:spectral}). The deformation parameter $\hbar$ then occurs as a unit by which the cohomology class of the symplectic structure (which in physical examples carries the units of action) is divided to obtain a cohomology class with numerical values which, when integral, is the first Chern class of a complex line bundle. When this class is integral for $\hbar=\hbar_0,$ then as $\hbar$ runs over the set of values $\hbar_0 /k$, for $k$ a sufficiently large integer, the dimension of the space of holomorphic sections is given, according to the Riemann-Roch theorem, by a polynomial in $k$ of degree equal to half the dimension of the symplectic manifold, which is completely determined by the cohomology class of the symplectic structure and the total Chern class of the tangent bundle for an almost complex structure compatible with the symplectic structure. 
This shows that geometric quantization behaves very rigidly with respect to $\hbar.$ The index theory of Fedosov \cite{fe:index} provides an analog in deformation quantization of the rigidity described in the preceding paragraph. According to this theory, when his deformation quantization is extended from scalar functions to matrix-valued functions on $P,$ a notion of ``abstract elliptic operator'' can be defined. When a suitable trace is introduced, the index of such an operator can be defined. Fedosov's index theorem then expresses this index as a polynomial in $(1/\hbar)$ which is completely determined by the Chern character of a certain vector bundle over $P$ associated to the operator, the $\hat{A}$ class of the tangent bundle of $P$ (with an almost complex structure), and the cohomology class of the symplectic structure. Once again, this formula suggests that some quantizations (for which this formula is true) are ``better'' than others which might have the same first-order behavior with respect to $\hbar.$ \section{Construction of formal flat connections} \label{sec-construction} In this section we study in detail the classical analog of the flat connection $D$ constructed in \cite{fe:simple} for the quantum case. In this classical setting the Weyl algebra bundle is replaced by ${\cal A}={\cal F}(TP)$, the algebra of smooth functions on $TP$, or, if one restricts oneself to the study of formal power series on the fibres, by ${\cal A} = \Gamma( \bigcup_{p \in P} {\cal J}^\infty_0( T_pP,{\Bbb R}))$, where $ {\cal J}^\infty_0(T_pP,{\Bbb R})$ denotes the set of $\infty$-jets at 0 of real-valued functions on $T_p P$. The product on ${\cal A}$ in the non-symplectic setting is either pointwise multiplication for ${\cal A}={\cal F}(TP)$ or, for ${\cal A} = \Gamma( \bigcup_{p \in P} {\cal J}^\infty_0( T_p P,{\Bbb R}))$, the mapping induced by it on the set of $\infty$-jets. 
In the symplectic case studied in Section \ref{sec:sympl} this product is supplemented by the fibrewise Poisson bracket (i.e., the Poisson bracket on the fibres of $TP$ induced by the symplectic form $\omega$ on $P$) to produce a Poisson algebra structure. Let $\Lambda(P)$ be the set of forms on $P$. The contraction $(i_X \alpha)(\cdot)$ is defined as $\alpha(X, \cdot)$ for a vector $X$ and a form $\alpha$. As in \cite{fe:simple}, we can define operators $\delta$ and $\delta^{-1}$ on ${\cal A} \otimes \Lambda(P)$, given in local coordinates $(x^i)$ on $P$ and induced coordinates $(x^i,y^i)$ on $TP$ by:
\begin{equation}
\delta = d x^i \wedge \frac{\partial}{\partial y^i}
\end{equation}
and
\begin{equation} \label{eq:deltainv}
\delta^{-1} a = \frac{1}{p+q} \; i_{ y^i \frac{\partial}{\partial x^i}} a
\end{equation}
if $a$ is a $q$-form that is homogeneous of degree $p$ in $y$ with $p+q \neq 0$, and $\delta^{-1} a =0$ if $p=q=0$. Let ${\cal V}^{inv}(TP)$ denote the set of invariant vertical vector fields, i.e., of vertical lifts of vector fields on $P$. We can define the operators $\delta$ and $\delta^{-1}$ on forms with values in the `formal vertical vector fields' ${\cal A} \otimes {\cal V}^{inv}(TP) \otimes \Lambda(P)$ as well, by letting them act trivially on ${\cal V}^{inv}(TP)$. With this definition, the ``Hodge decomposition''
\begin{equation} \label{eq:delta}
a = a_{00} + (\delta^{-1} \delta + \delta \delta^{-1} ) a
\end{equation}
still holds for $a \in {\cal A} \otimes {\cal V}^{inv}(TP)\otimes\Lambda(P)$. Here, $a_{00}$ denotes the homogeneous part of $a$ which is a zero-form of degree 0 in the vertical coordinates $(y^i)$; i.e., it is simply an element of ${\cal V}^{inv}(TP)$. 
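The ``Hodge decomposition'' above can be checked directly in coordinates. The sketch below (an illustration for scalar-valued forms on ${\Bbb R}^2$; representing a $1$-form by its pair of components and a $2$-form by its single $dx^1\wedge dx^2$ component is our own convention) implements $\delta$ and $\delta^{-1}$ monomial by monomial and verifies $a = (\delta\delta^{-1}+\delta^{-1}\delta)a$ for a sample $1$-form, for which $a_{00}=0$:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

def scale(expr, q):
    # multiply each monomial, of degree p in (y1, y2), by 1/(p+q)
    out = sp.Integer(0)
    for t in sp.expand(expr).as_ordered_terms():
        if t == 0:
            continue
        p = sp.degree(t, y1) + sp.degree(t, y2)
        out += t / (p + q)
    return out

# delta = dx^i ^ d/dy^i on 0-forms and 1-forms (a 1-form is a pair (a1, a2))
def delta0(a):
    return (sp.diff(a, y1), sp.diff(a, y2))

def delta1(a):
    # dx1 ^ dx2 component of the resulting 2-form
    return sp.diff(a[1], y1) - sp.diff(a[0], y2)

# delta^{-1} = (p+q)^{-1} i_{y^i d/dx^i} on 1-forms and 2-forms
def dinv1(a):
    return y1*scale(a[0], 1) + y2*scale(a[1], 1)

def dinv2(b):
    s = scale(b, 2)
    return (-y2*s, y1*s)   # i_{y^i d/dx^i}(dx1 ^ dx2) = y1 dx2 - y2 dx1

# check a = delta delta^{-1} a + delta^{-1} delta a on a sample 1-form
a = (x1*y2 + y1**2*y2, x2*y1**3)
part1 = delta0(dinv1(a))
part2 = dinv2(delta1(a))
recon = tuple(sp.expand(part1[i] + part2[i]) for i in range(2))
assert recon == tuple(sp.expand(c) for c in a)
```

On $0$-forms the same decomposition reduces to Euler's identity $p\,a = y^i \partial a/\partial y^i$ for $a$ homogeneous of degree $p$ in $y$.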
We would like to construct from a given linear, torsion-free, nonflat connection $\partial$ a flat, nonlinear connection on $TP$ with structure group $\mbox{Diff}({\Bbb R}^n)$ (or the symplectomorphism group of ${\Bbb R}^n$ for even $n$ in the case of the Poisson algebra studied in Section \ref{sec:sympl}), or equivalently, a flat Ehresmann connection on $TP$. Locally, such a connection is represented by a one-form $X_{\mbox{\scriptsize tot}}$ on $P$ with values in the vertical vector fields, yielding the local expression for the curvature $\Omega$, i.e., the obstruction to the integrability of the horizontal distribution defining the connection:
\[ \Omega = d X_{\mbox{\scriptsize tot}} + 1/2 \comm{X_{\mbox{\scriptsize tot}}}{X_{\mbox{\scriptsize tot}}}, \]
where $\comm{\cdot }{\cdot }$ acts as the Lie bracket on the vector part and as the exterior product on the form part of $X$. A connection induces a covariant derivative $D$ on elements of ${\cal A}$, which is given in local coordinates by:
\[ D f =\left( \frac{\partial}{\partial x^i} f\right) d x^i + X_{\mbox{\scriptsize tot}} \cdot f. \]
By analogy with the quantum case in \cite{fe:simple}, $X_{\mbox{\scriptsize tot}}$ should be locally of the form
\begin{equation} \label{eq:Xansatz}
X_{\mbox{\scriptsize tot}} = - \frac{\partial}{\partial y^i} d x^i - \Gamma^i_{kl}(x) y^k \frac{\partial}{\partial y^i} d x^l + X ,
\end{equation}
where $\Gamma^i_{kl}(x)$ are the Christoffel symbols of the connection $\partial ,$ and $X$ is at least of second degree in $y$. The addition of the term $- \frac{\partial}{\partial y^i} d x^i$, which corresponds to the operator $\delta$, amounts to the transition from the linear connection $\partial $ to an affine connection with connection form $X_{\mbox{\scriptsize tot} }-X$ by the addition of the solder form \cite{ko-no:foundations}. The vanishing of the torsion for $\partial $ implies that the ``translational'' part of the curvature of the affine connection is zero. 
For instance, if the connection $\partial$ is flat, we may choose $X \equiv 0$ in $(\ref{eq:Xansatz})$ to get a flat connection again. In this case, the original horizontal distribution corresponding to $\partial$ is already integrable. For $P= {\Bbb R}^n$ with the canonical flat connection, the leaves of the horizontal distribution corresponding to $\partial$ are affine subspaces ${\Bbb R}^n \times \{v\} \subset {\Bbb R}^{2 n}$ parallel to the zero section $Z = {\Bbb R}^n \times \{0\} \cong P$. This foliation is rotated by the addition of the solder form in such a way that the leaf through $ (x,v)$ intersects the zero section in $(x+v,0)$ for arbitrary $(x,v) \in {\Bbb R}^{2n}$ (Figure 1). \begin{figure} \centering \mbox{\psfig{file=fed1.ps,width=6cm,angle=270} } \caption{Addition of the solder form to a flat linear connection ``rotates'' the parallel sections (from dotted to solid lines) so that they become transversal to the zero section.} \end{figure} In the general, non-flat case one may interpret the ansatz $(\ref{eq:Xansatz})$ as first going over from the structure group $\mbox{Gl}(n,{\Bbb R})$ with $n= \dim(P)$ to the affine group $\mbox{Gl}(n,{\Bbb R}) \lhd {\Bbb R}^{n}$ in order to get a horizontal distribution transversal to the zero section, and then deforming it in order to get a flat nonlinear connection, which has an integrable horizontal distribution and hence defines a foliation. Although the transition from a linear connection to an affine connection is canonical, one could in principle take any multiple of $\delta$ in order to define a generalized affine connection in the sense of \cite{ko-no:foundations}, i.e., a connection on the $\mbox{Gl}(n,{\Bbb R}) \lhd {\Bbb R}^{n}$-bundle of affine frames. However, the choice made in the ansatz is not only geometrically natural, but also distinguished by the validity of Theorem \ref{thm1} and Lemma \ref{lem1} below. 
The requirement of the vanishing of the curvature yields the condition:
\begin{equation} \label{eq:curvzero}
0 = \Omega = R - \delta X + \partial X + 1/2\comm{X}{X} ,
\end{equation}
where $ \partial X \stackrel{\mbox{\tiny {def}}}{=} d X - \comm{ \Gamma^i_{kl}(x) y^k \frac{\partial}{\partial y^i} d x^l}{X}$, and $R$ denotes the curvature tensor corresponding to $\partial$. The term $\comm{\frac{\partial}{\partial y^j} dx^j} {\Gamma^i_{kl}(x) y^k \frac{\partial}{\partial y^i} d x^l}$ which one would expect to appear in (\ref{eq:curvzero}) is equal to $\Gamma^{i}_{jl}(x) dx^{j}\wedge dx^{l} \frac{\partial}{\partial y^i},$ which vanishes because the connection is torsion-free. We first try to find a formal power series in $y$ for the solution of Equation (\ref{eq:curvzero}). Solutions of this equation are not unique; however, we will show that they are in one-to-one correspondence with formal vertical vector fields $a \in {\cal A} \otimes {\cal V}^{inv}(TP)$ of degree at least 3 in $y$ via the `normalization condition'
\begin{equation} \label{eq:bound}
\delta^{-1} X = a.
\end{equation}
We will see in Section \ref{sec:anal} that $\delta^{-1} X$ is just the $X$-dependent term that appears in the equation defining autoparallel curves. Hence, the case $a=0$ will prove to be special insofar as the term involving $X$ does not change autoparallel curves: in general $\delta^{-1} X$ can be interpreted as an acceleration term for autoparallel curves. Therefore it might appear natural to admit only the special case $a=0$. However, we will see in Section \ref{sec:sympl} that the normalization condition $\delta^{-1} X = 0$ is not appropriate in the symplectic case. (It must be replaced by the condition $\delta^{-1} r =0$ for the corresponding hamiltonian function $r$.) Hence, we will admit arbitrary values of $a$ in the non-symplectic case as well. 
Using $(\ref{eq:delta})$ and the vanishing of $X_{00}$ (since $X$ is a one-form), we get:
\begin{equation} \label{eq:iter}
X = \delta a + \delta^{-1}(R + \partial X + 1/2 \comm{X}{X}),
\end{equation}
which must be satisfied by any solution to $(\ref{eq:curvzero})$ and $(\ref{eq:bound})$. As in \cite{fe:simple}, this equation may be solved by iteration, yielding a unique power series for $X$. In spite of the more general normalization condition $(\ref{eq:bound})$, the argument in \cite{fe:simple} (see also the ``topological lemma'' in \cite{do:quantization}) showing that the $X$ found in this way is indeed a solution to $(\ref{eq:curvzero})$ is still valid: defining $A$ as the right hand side of equation $(\ref{eq:curvzero})$ with the recursively constructed $X$, we may check that $A$ fulfills the equation $ \delta A = \partial A +[X,A]$ and the condition $\delta^{-1} A= 0$ and hence vanishes identically. Hence, we have proved the following theorem.
\begin{theorem}
For each $a \in {\cal A} \otimes {\cal V}^{inv}(TP)$ at least of degree 3 in $y$ there is a unique formal flat connection, i.e., a solution to $(\ref{eq:curvzero})$, satisfying the normalization condition $(\ref{eq:bound})$.
\end{theorem}
\section{Analytic connections and the ``exponential map''} \label{sec:anal} In the last section we constructed a formal flat connection from a given linear, torsion-free connection. In this section we will show that this formal power series is convergent in a tubular neighborhood $U$ of the zero section if the manifold $P$, the connection $\partial$, and the element $a$ in the normalization condition are real analytic. The proof uses the method of majorants, as discussed in any standard reference on real analytic functions, such as \cite{kr-pa:primer}. 
\begin{theorem} \label{thm0} Let $P$ be a real analytic manifold, $\partial$ an analytic linear connection, and $a \in {\cal F}(TP)$ an analytic function such that $a$ and its derivatives up to order two in the direction of the fibres vanish on the zero section $Z$. Then the formal power series which is the solution of $(\ref{eq:iter})$ obtained by iteration is convergent in a neighborhood $U$ of $Z$ and hence defines an analytic Ehresmann connection on $ U \subset TP$. \end{theorem} \noindent {\em Proof: } Choose coordinates $(x^i)$ on $P$ and induced coordinates $(x^i,y^i)$ on $TP$. We define a linear operator $T$ on power series (divisible by $y$) in $(x,y)$ by its action on monomials in $y$:
\[ T y^k \phi(x) \stackrel{\mbox{\tiny {def}}}{=} \frac{1}{k} y^k \frac{d}{dx} \phi(x). \]
\noindent We will show:
\begin{enumerate}
\item \label{pr:anal1} The power series of the components $X^i_j$ of $X = X^i_j \frac{\partial}{\partial y^i} d x^j$ are majorized by $f(\sum x^i, \sum y^i)$, where $f(x,y)$ is a power series with no terms of order zero in $y$ which is a solution of
\begin{equation} \label{eq:major}
f(x,y) = A \left( \frac{y^2}{(1-\beta x)(1-\beta y)} + y T f(x,y) + \frac{y}{1-\beta x} f(x,y) + f(x,y)^2 \right)
\end{equation}
for sufficiently large $A>0$ and $\beta>0$.
\item \label{pr:anal2} $f(x,y)$ is convergent in a neighborhood of $(0,0)$.
\end{enumerate}
Hence, $X$ is analytic. To show \ref{pr:anal1}, we first observe that Equation $(\ref{eq:major})$ has a unique solution containing no term of order zero in $y$, which can be computed by iteration. We say that $f$ majorizes $X^i_j$ up to order $k$ in $y$ iff the absolute value of the coefficient of any monomial $x^\mu y^\nu$ in $X^i_j$ for arbitrary multiindices $\mu, \nu$ with $|\nu| \leq k$ is not larger than the respective coefficient in $f(\sum x^i, \sum y^i)$. Both $X^i_j$ and $f(\sum x^i, \sum y^i)$ contain no terms of order zero or one in $y$. 
As the coefficients of any analytic function $\rho(z) = \sum_{\mu \in {\Bbb N}^N} C^\mu z^\mu$ in variables $(z^1,\ldots,z^N)$ have to satisfy $|C^\mu| < A B^{|\mu|}$ for some constants $A,B$, the function $\rho$ is majorized by $\alpha/(1-\beta (z^1+...+z^N))$ for some $\alpha, \beta>0$. From this general statement it follows that there are $\alpha, \beta$ such that the components of $\delta a + \delta^{-1} R $ are majorized by $\alpha y^2 (1-\beta x)^{-1} (1-\beta y)^{-1}$, and the Christoffel symbols $\Gamma^i_{kl}$ are majorized by $\alpha (1-\beta x)^{-1} $, where $x=\sum x^i, y=\sum y^i$. Hence, from equations $(\ref{eq:iter})$ and $(\ref{eq:major})$ it is obvious that for $A > \alpha$ the power series $f(\sum x^i, \sum y^i)$ majorizes all $X^i_j$ up to order 2 in $y$. Now assume that $f(\sum x^i, \sum y^i)$ majorizes all $X^i_j$ up to some order $k \in {\Bbb N}$. Then, by construction of $T$, the components of $\delta^{-1} d X $ are majorized up to order $(k+1)$ by $ 2 N y T f(x,y)$, where $N=\dim P$. The components of $ 1/2 \comm{X}{X}$ are majorized by $2 N^2 f(x,y) \frac{d}{dy}f(x,y)$ up to order $k$. Thanks to the factor $1/(p+q)$ in the definition $(\ref{eq:deltainv})$ of $\delta^{-1}$, the components of $\delta^{-1} 1/2 \comm{X}{X}$ are majorized up to order $(k+1)$ by
\[4 N^3 \int_0^y f(x,\hat{y}) \frac{d}{d\hat{y}}f(x,\hat{y}) d\hat{y} = 2 N^3 f(x,y)^2. \]
Finally, the components of $\comm{ \Gamma^i_{kl} y^k \frac{\partial}{\partial y^i} dx^l}{X}$ are majorized up to order $k$ by $N^2 \alpha /(1-\beta x) (f + y \frac{d}{dy} f)$ and $\delta^{-1}\comm{ \Gamma^i_{kl} y^k \frac{\partial}{\partial y^i} dx^l}{X}$ is majorized up to order $(k+1)$ by
\[ 2 N^3 \int_0^y \alpha/(1-\beta x) (f(x,\hat{y}) + \hat{y} \frac{d}{d\hat{y}} f(x,\hat{y}) ) d \hat{y} = 2 N^3 \alpha /(1-\beta x) y f. \]
Hence, by equations $(\ref{eq:iter})$, $(\ref{eq:major})$, $X^i_j$ is majorized up to order $(k+1)$ in $y$ by $f(\sum x^i, \sum y^i)$, and \ref{pr:anal1} is shown. 
To prove \ref{pr:anal2} we show, again by induction on the degree in $y$, that $f$ is majorized by the solution $g(x,y)$, containing no terms of degree zero or one in $y$, of the purely algebraic equation
\begin{equation} \label{eq:major2}
g(x,y) = \tilde{A} \left( \frac{y^2}{(1-\beta x)(1-\beta y)} + 2 \frac{y}{1-\beta x} g(x,y) + g(x,y)^2 \right),
\end{equation}
where $\tilde{A}=\beta A$ if $\beta >1$, and $\tilde{A}=A$ otherwise. Obviously, $g$ majorizes $f$ up to order $2$. Furthermore, by iteration of equation $(\ref{eq:major2})$, $g$ has the property that the coefficient of $y^k$ consists of a sum of terms of the form $a/(1- \beta x)^l$ with $a\in {\Bbb R}^+, l \in {\Bbb N}, l\leq k$. Assume that $f$ is majorized by $g$ up to order $k$ in $y$. Since
\[ y T \frac{y^k}{(1-\beta x)^l} = \frac{l}{k} \frac{\beta y}{1-\beta x} \frac{y^k}{(1-\beta x)^l}, \]
$y T f(x,y)$ is majorized up to order $(k+1)$ by $ \beta y g(x,y)/(1-\beta x)$, and hence, by equations $(\ref{eq:major})$, $(\ref{eq:major2})$, $f$ is majorized by $g$ up to order $(k+1)$, so $f$ is majorized by $g$. The solutions to the quadratic equation $(\ref{eq:major2})$ are
\[ g(x,y) = - \frac{1}{2} \left( 2 \frac{y}{1-\beta x} - \frac{1}{\tilde{A}} \right) \pm \sqrt{ \frac{1}{4} \left( 2 \frac{y}{1-\beta x} - \frac{1}{\tilde{A}} \right)^2 - \frac{y^2}{(1-\beta x) (1-\beta y) }}. \]
Both solutions are obviously analytic functions in $(x,y)$, as the argument of the square root is an analytic function with nonvanishing zeroth degree term; the solution with the minus sign is the desired one, as it starts with a term quadratic in $y$. Hence $f$ is majorized by a convergent power series, and \ref{pr:anal2} is shown. \hfill Q.E.D. ~~~~\\ \medskip Since the Ehresmann connection defined by $X$ is flat, the horizontal distribution is integrable. 
Due to the $\delta$-term in $(\ref{eq:Xansatz})$, this distribution is transversal to the zero section $Z$, and for a sufficiently small tubular neighborhood $U \subset TP$ of $Z$ each leaf in $U$ intersects $Z$ in exactly one point. Hence, we can define an ``exponential mapping'' $\mbox{EXP}_p: T_pP \cap U \rightarrow P$ for any $p \in P$ which maps $v_p $ to the unique intersection point of the leaf through $v_p$ and the zero section in $TP$ (Figure 2). \begin{figure} \centering \mbox{\psfig{file=fed2.ps,width=6cm,angle=270} } \caption{The ``exponential mapping'' for a flat linear connection is found by following parallel sections from the fibres to the zero section.} \end{figure} With this definition, the normalization condition $(\ref{eq:bound})$ with $a=0$ is distinguished by the following theorem: \begin{theorem} \label{thm1} The mappings $\mbox{EXP}_p$ and the usual exponential mappings $\exp_p$ coincide on $U \cap T_pP$ for all $p \in P$ iff the choice $\delta^{-1}(X) = 0$ is made in (\ref{eq:bound}). \end{theorem} \noindent {\it Proof:} To prove the theorem, we first observe that $X$ determines a natural notion of autoparallel curves: Any curve $\gamma$ in $P$ may be lifted to a curve in $TP$ by taking the derivative $\dot{\gamma}$. A curve $\gamma$ is called autoparallel iff $\dot{\gamma}$ is horizontal, i.e., tangent to the horizontal distribution defining the connection. In local coordinates, this condition is of the form:
\begin{equation} \label{eq:dgl}
\ddot{x}^i(t) = - \dot{x}^i(t) - \Gamma^i_{k l}(x) \dot{x}^k \dot{x}^l + X^i_k(x,\dot{x}) \dot{x}^k ,
\end{equation}
where $ X^i_k(x,y) \frac{\partial}{\partial y^i} dx^k $ is the local coordinate expression for $X$. The relation between autoparallel curves and the `exponential mapping' $\mbox{EXP}$ is particularly simple in the case of $P={\Bbb R}^n$ with the standard flat connection $\partial$ (Figure 1). 
In this case, the equation for autoparallel curves is just $ \ddot{x}^i(t) = - \dot{x}^i(t)$ with the solution
\[ \dot{x}(t) = \dot{x}(0) e^{-t}, ~~~ x(t) = x(0) + \dot{x}(0) (1 - e^{-t}).\]
The term $- \dot{x}^i(t)$ coming from the $\delta$-term in $X_{\mbox{\scriptsize tot}}$ can be interpreted as a friction term slowing the particle down as it approaches the zero section (Figure 1). In particular, $\mbox{EXP}(x(0),v(0)) = \lim_{t \rightarrow \infty} x(t) = x(0) + v(0)$, and $\mbox{EXP}$ coincides with the usual exponential mapping of the connection $\partial$. In the general case, the interpretation of the first term on the right hand side of $(\ref{eq:dgl})$ as a friction term is still valid. Note that the last term in $(\ref{eq:dgl})$ is precisely $\delta^{-1}(X)$. Suppose that this is zero. Then the reparametrization $t = - \log(1-\tau)$ yields the usual geodesic equation corresponding to the connection $\partial$ for $\tilde{x}(\tau) \stackrel{\mbox{\tiny {def}}}{=} x(-\log(1-\tau))$. In particular, for sufficiently small initial velocity $v$, $\tilde{x}(\tau)$ is defined for $0 \leq \tau \leq 1$, and $x(t)$ is defined for $0 \leq t < \infty$. As $\lim_{t \rightarrow \infty} \tau(t) =\lim_{t \rightarrow \infty} (1-e^{-t}) =1$, we get: $\lim_{t \rightarrow \infty} x(t)= \lim_{\tau \rightarrow 1} \tilde{x}(\tau) = \exp(v)$ with $v= \dot{x}(0)$. On the other hand, by construction the curve $(x,\dot{x})(t)$ in $TP$ is contained in the leaf of the horizontal foliation through $v$. As $\lim_{t \rightarrow \infty} \dot{x}(t) = \lim_{t \rightarrow \infty} e^{-t} \frac{d}{d\tau} \tilde{x}(\tau) = 0$, $\lim_{t \rightarrow \infty} x(t)$ is just the intersection point of the leaf through $v$ and the zero section, i.e. $\mbox{EXP}(v)$. Hence, $\mbox{EXP}(v)=\exp(v)$ and we have proved that the condition $\delta^{-1}(X)=0 $ is sufficient for the maps $\mbox{EXP}$ and $\exp$ to coincide. 
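The limiting behavior in the flat model can also be observed numerically. The sketch below (a toy computation with our own parameter choices, not part of the proof) integrates the autoparallel equation $\ddot x = -\dot x$ on $P={\Bbb R}$ with a simple Euler step and checks that the trajectory tends to $x(0)+\dot x(0)$, in agreement with the usual exponential map of the flat connection:

```python
import math

def autoparallel_flat(x0, v0, dt=1e-3, t_max=20.0):
    # Euler integration of x'' = -x', the friction term coming from the
    # delta-part of X_tot; exact solution: x(t) = x0 + v0*(1 - exp(-t))
    x, v = x0, v0
    for _ in range(int(t_max / dt)):
        x += v * dt
        v += -v * dt
    return x

x0, v0 = 0.3, 1.7
x_final = autoparallel_flat(x0, v0)
# EXP(x0, v0) = lim_{t->inf} x(t) = x0 + v0
assert abs(x_final - (x0 + v0)) < 1e-6
# at finite time the Euler trajectory tracks the closed-form solution
assert abs(autoparallel_flat(x0, v0, t_max=1.0)
           - (x0 + v0*(1 - math.exp(-1.0)))) < 1e-3
```

The friction term makes the velocity decay like $e^{-t}$, so the numerically computed trajectory stalls at the point the exponential map would assign to the initial vector.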
To prove the converse we assume $\mbox{EXP}_p = \exp_p $ for all $p \in P$, and $\delta^{-1} X$ arbitrary. We first show the following lemma: \begin{lemma} \label{lem1} Let $\gamma$ be a geodesic corresponding to the connection $\partial$, $\tilde{\gamma} : {\Bbb R} \rightarrow P $ the reparametrized geodesic: $ t \mapsto \gamma(1-e^{-t})$, i.e., a solution to the differential equation \begin{equation} \label{eq:dgl_noX} \ddot{x}^i(t) = - \dot{x}^i(t) - \Gamma^i_{k l}(x) \dot{x}^k \dot{x}^l . \end{equation} Then, for all $t$: \[ \exp_{\tilde{\gamma}(t)} \left( \dot{ \tilde{\gamma}}(t) \right) = \exp_{\tilde{\gamma}(0)} \left( \dot{ \tilde{\gamma}}(0) \right) = \lim_{t \rightarrow \infty} {\tilde{\gamma}(t)} \] \end{lemma} \noindent {\bf Remark:} The lemma says that a solution of $(\ref{eq:dgl_noX})$ is a reparametrized geodesic that is slowed down by the `friction term' $- \dot{x}$ to just the extent that $\exp(\dot{\tilde{\gamma}}(t))$ is constant. \bigskip \noindent {\em Proof of the lemma:} Denote by $\gamma_2$ the geodesic starting at $\tilde{\gamma}(t_0)$ with initial velocity $\dot{\tilde{\gamma}}(t_0)$ for some fixed time $t_0$. As follows from the first part of the proof of Theorem \ref{thm1} above, $\tilde{\gamma}_2(t)= \gamma_2(1-e^{-t})$ again defines a solution of $(\ref{eq:dgl_noX})$. Obviously, this solution is obtained from $\tilde{\gamma}$ by a constant shift in time, as $ \dot{\tilde{\gamma}}_2(0)= \dot{\tilde{\gamma}}(t_0)$. Hence, $\lim_{t \rightarrow \infty} \tilde{\gamma}(t) = \lim_{t \rightarrow \infty} \tilde{\gamma}_2(t)$. On the other hand, we have already proved that $\lim_{t \rightarrow \infty} \tilde{\gamma}(t) = \exp_{\tilde{\gamma}(0)}(\dot{\tilde{\gamma}}(0))$ (and hence $\lim_{t \rightarrow \infty} \tilde{\gamma}_2(t) = \exp_{\tilde{\gamma}_2(0)}(\dot{\tilde{\gamma}}_2(0))$), so the lemma follows. \hfill Q.E.D. 
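The lemma can also be checked numerically in a simple curved example. The sketch below is our own toy setup (not part of the text): a one-dimensional connection with constant Christoffel symbol $\Gamma = c$, for which the exponential map has the closed form $\exp_p(w) = p + c^{-1}\log(1+cw)$. It integrates the analogue of equation $(\ref{eq:dgl_noX})$ and verifies that $\exp_{\tilde{\gamma}(t)}(\dot{\tilde{\gamma}}(t))$ stays constant along the solution, up to the integration error:

```python
import math

def check_lemma(c=0.5, x0=0.0, v0=0.2, t_max=20.0, dt=1e-4):
    """Integrate x'' = -x' - c*(x')**2, i.e. the 1-d analogue of
    (dgl_noX) for constant Christoffel symbol c, and track
    exp_{x(t)}(xdot(t)), which the lemma asserts is constant."""
    exp_map = lambda p, w: p + math.log(1.0 + c * w) / c  # closed-form exp
    x, v = x0, v0
    target = exp_map(x0, v0)            # invariant value at t = 0
    worst = 0.0
    for _ in range(int(t_max / dt)):
        x, v = x + v * dt, v + (-v - c * v * v) * dt
        worst = max(worst, abs(exp_map(x, v) - target))
    # x(t_max) should also approach the common value exp_{x0}(v0)
    return worst, abs(x - target)

worst, gap = check_lemma()
print(worst < 1e-3 and gap < 1e-3)  # → True
```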
~~~~\\ \medskip Using this lemma we will finish the proof of Theorem \ref{thm1}, i.e., we will show that the condition $\delta^{-1} X = 0$ is a necessary condition for the maps $\mbox{EXP}$ and $\exp$ to coincide. Assume that the curve $t \mapsto \frac{d}{dt} \gamma(1-e^{-t}) $ in $TP$ is not completely contained in the leaf through $v_q$, where $\gamma: t \mapsto \gamma(t)$ is a geodesic of $\partial$ with $\dot{\gamma}(0) = v_q $ for an arbitrary $v_q \in TP$. By continuity there is an open subset $O \subset {\Bbb R}$ such that $w(t) := \frac{d}{d t} \gamma(1-e^{-t}) $ is not contained in the leaf through $v_q$ for any $t \in O$. By assumption and the lemma, $\mbox{EXP}(w(t)) = \exp(w(t)) = \exp(v_q) = \mbox{EXP}(v_q)$, so we get infinitely many different leaves intersecting the zero section in the same point, in contradiction to the transversality of the horizontal distribution. Hence, the curve $t \mapsto \frac{d}{dt} \gamma(1-e^{-t}) $ is completely contained in the leaf through $v_q$, i.e., it is a horizontal curve, and therefore must fulfill both equations $(\ref{eq:dgl})$ and $(\ref{eq:dgl_noX})$. Obviously, this is only possible if the difference between the two equations vanishes, i.e., \[ X^i_k(x,\dot{x}) \dot{x}^k \equiv 0 \Leftrightarrow \delta^{-1} X = 0 ,\] and Theorem \ref{thm1} is proved. \hfill Q.E.D. ~~~\\ \medskip \noindent {\bf Remark: } Even when $\delta^{-1} X \neq 0$, the equality $\mbox{EXP}(v) = \lim_{t \rightarrow \infty} x(t)$, where $x(t)$ is the solution of $(\ref{eq:dgl})$ with $\dot{x}(0)=v$, still holds. To prove this statement we show that the `friction term' in $(\ref{eq:dgl})$ ensures that $\lim_{t \rightarrow \infty} \dot{x}(t)=0$ for sufficiently small initial velocity $v$. 
However, this is easy to see: Multiplying $(\ref{eq:dgl})$ by $\dot{x}^i$ and summing over $i$, we get: \[ \frac{1}{2} \frac{d}{d t}\sum_i \dot{x}^i \dot{x}^i = -\sum_i \dot{x}^i \dot{x}^i - \sum_i \dot{x}^i \left( \Gamma^i_{k l}(x) \dot{x}^k \dot{x}^l - X^i_k(x,\dot{x}) \dot{x}^k \right) \] As $X$ has degree at least two in $y$, for sufficiently small velocity \[ \Big| \sum_i \dot{x}^i \left( \Gamma^i_{k l}(x) \dot{x}^k \dot{x}^l - X^i_k(x,\dot{x}) \dot{x}^k \right)\Big| < \epsilon \sum_i \dot{x}^i \dot{x}^i \] for some $\epsilon$ with $0<\epsilon <1$. Hence, \[ (-1- \epsilon) \sum_i \dot{x}^i \dot{x}^i < \frac{1}{2} \frac{d}{dt} \sum_i \dot{x}^i \dot{x}^i < (-1 +\epsilon) \sum_i \dot{x}^i \dot{x}^i, \] \[ e^{-2(1+ \epsilon)t} \sum_i \dot{x}^i(0) \dot{x}^i(0) < \sum_i \dot{x}^i(t) \dot{x}^i(t) < e^{-2(1-\epsilon)t} \sum_i \dot{x}^i(0) \dot{x}^i(0), \] and \[ \lim_{t\rightarrow \infty} \dot{x}(t) =0 , \] but the curve $(x(t),\dot{x}(t))$ does not intersect the zero section at any finite time. \section{ The formal exponential map} In the last section we have shown how to construct the mapping $\mbox{EXP}$ in the analytic case, where the formal flat connection was convergent and defined a flat Ehresmann connection in the usual sense on some neighborhood of the zero section. In the non-analytic case we cannot expect the series for $X$ to be convergent. However, we will show that we can define a formal exponential map, i.e., an $\infty$-jet of mappings from the fibre $T_pP$ to $P$, and we will show that for this map Theorem \ref{thm1} still holds, if we replace $\exp_p$ by its $\infty$-jet as well. In order to define the formal exponential map we first show the following lemma for real-analytic manifolds: \begin{lemma} \label{lem:jets} Let $P$ be a real-analytic manifold, $\partial$ an analytic connection and $a$ an analytic function, $X$ the solution of $(\ref{eq:curvzero})$ with the normalization condition $(\ref{eq:bound})$. 
Then the $k$-jet of $\mbox{EXP}_p$ at $0_p \in T_pP$ for arbitrary $k \in {\Bbb N}, p \in P$ is completely determined by the $k$-jet at $p$ of the connection form corresponding to $\partial$ and the $(k+1)$-jet at $0_p$ of $a$. \end{lemma} \noindent {\em Proof: } It is sufficient to consider some small neighborhoods $V\subset P$ of $p \in P $ and $TV \subset TP$ of $0_p \in T_pP$. We choose coordinates $(x^i)$ on $V$ and show first that the $k$-jet of $\mbox{EXP}_p$ is determined by the $k$-jet of $X$. By the definition of the map $\mbox{EXP}_p$, we may compute the components $\mbox{EXP}_p^i$ of $\mbox{EXP}_p$ in the chosen coordinate system by taking the covariant constant continuation $f^i$ to $TV \subset TP$ of the coordinate functions $x^i$ and restricting them to the fibre $T_pP$: \[ \mbox{EXP}_p^i(y) = f^i(y) \restr{T_pP} \] with \[ D f^i = 0 , ~~~~~ f^i\restr{Z} = x^i ,\] where $Z$ denotes the zero section in $TP$. As the constructed connection is flat, we know that such functions $f^i$ exist. Rather than constructing them by the iterative procedure of \cite{fe:simple}, we will use a more geometric approach. The condition $D f^i = 0$ reads locally: \begin{equation} \label{eq:Dexp} \frac{\partial}{\partial x^k} f^i + M^m_k \frac{\partial}{\partial y^m} f^i = 0, \end{equation} where $M$ is the matrix with elements \begin{equation} \label{eq:defM} M^m_k = \delta^m_k + \Gamma^m_{jk} y^j -X^m_k, \end{equation} where $\delta^m_k $ equals $1$ if $m=k$, and $0$ otherwise. Due to the $\delta^m_k$ coming from the solder form the matrix valued function $M$ is invertible, and the inverse $M^{-1}$ is analytic at $0_p$ again. Furthermore, its $k$-jet at $0_p$ only depends on the $k$-jet of $M$ at $0_p$. 
Hence, we can solve (\ref{eq:Dexp}) for $ \frac{\partial}{\partial y^k} f^i$ yielding \begin{equation} \label{eq:Dexpinv} \frac{\partial}{\partial y^k} f^i + (M^{-1})^m_k \frac{\partial}{\partial x^m} f^i = 0, \end{equation} where the zero-jet of $M^{-1}$ at $0_p$ is just the unit matrix. $f^i$ is given for $y=0$, and we know that this set of partial differential equations has a unique solution, since the connection is flat. To compute it, we can integrate the differential equations successively: The equation for $ \frac{\partial}{\partial y^1} f^i$ determines $f^i$ on the $y^1$-axis, the equations for $\frac{\partial}{\partial y^k} f^i$ with $1\leq k\leq m$ determine $f^i$ on the subspace spanned by the first $m$ basis vectors. In each step the solution is an analytic function on the respective subspace, depending analytically on the initial conditions, by the Cauchy-Kovalevskaya theorem. Hence, $f^i$ is analytic at $0_p$. Furthermore, by comparing coefficients, we see that the $k$-jet of $f^i$ only depends on the $k$-jet of $M^{-1}$, and hence of $M$. Since the operator $\delta^{-1}$ contains a factor $y^i$, it follows from the iteration formula $(\ref{eq:iter})$ for $X$ that the $k$-jet of $X$ is determined by the $k$-jets of the Christoffel-symbols $\Gamma^i_{jk}$ of $\partial$ and the $(k+1)$-jet of $a$. By the definition $(\ref{eq:defM})$ this is true for $M$ as well, and the lemma is proved. \hfill Q.E.D. ~~~~\\ \medskip Using this lemma, we can define a formal exponential map in the non-analytic case: We use the topology on the $\infty$-jets generated by the basis of open sets consisting of sets of $\infty$-jets whose $k$-jets agree for some $k \in {\Bbb N} $, i.e., a sequence $(j^i)$ of $\infty$-jets converges to an $\infty$-jet $j$, if for any $k \in {\Bbb N}$ there is an $N(k) \in {\Bbb N}$ such that the $k$-jets of $j^r$ and $j$ agree for $r > N(k)$. 
In this topology the set of $\infty$-jets of analytic functions forms a dense subset of the set of all $\infty$-jets. (The polynomials are already dense.) Hence, we can define the $\infty$-jet $j^\infty_p(\EXP_p)$ of $\mbox{EXP}$ at $p$ in the non-analytic case by the continuous continuation to all $\infty$-jets of the map that assigns to the pair $(\partial,j^\infty_p(a))$, consisting of an analytic connection and the $\infty$-jet of an analytic function $a$ defining the normalization condition for $X$, the $\infty$-jet $j^\infty_p(\EXP_p)$. This continuation exists and is unique by Lemma \ref{lem:jets}. \begin{theorem} Theorem \ref{thm1} still holds in the non-analytic case, if we replace $\mbox{EXP}_p$ and $\exp_p$ by their $\infty$-jets. \end{theorem} \noindent {\em Proof:} If we impose the normalization condition $(\ref{eq:bound})$ with $a=0$ the direct statement of the theorem is an immediate consequence of Theorem \ref{thm1}, Lemma \ref{lem:jets}, and the fact that the $\infty$-jets of analytic functions are dense in the chosen topology. To show the converse, we use the fact that $j^\infty_p(\exp_p)$ coincides with $j^\infty_p(\EXP_p)$ for the choice $a=0$ in $(\ref{eq:bound})$ by the first part of the theorem. We denote for the moment the corresponding $X$ by $X^0$. Now, assume that the $k$-jet of $a$ in $(\ref{eq:bound})$ is different from zero for some $k >2$. Then, by $(\ref{eq:iter})$, the $(k-1)$-jets of $X$ and $X^0$ are different as well. This implies that the $(k-1)$-jets of $M$ and $M^{-1}$ in $(\ref{eq:Dexp}), (\ref{eq:Dexpinv})$ are different from those for $a=0$, so the corresponding $(k-1)$-jets of $j^\infty_p(\EXP_p)$ are different. \hfill Q.E.D. ~~~~\\ \medskip \section{ Flat symplectic connections} \label{sec:sympl} So far, we have studied the problem of constructing a flat connection for an arbitrary manifold $P$, without any additional structure. 
However, if we are interested in getting closer to the quantum case, then $P$ should be a classical phase space with its symplectic structure, and ${\cal F}(P)$ a Poisson algebra. In this case, the Weyl-algebra bundle of \cite{fe:simple} is replaced by ${\cal A}= {\cal F}(TP)$ (or ${\cal A} = \Gamma( \bigcup_{p \in P} {\cal J}^\infty_0(T_p P,{\Bbb R}))$) with the fibrewise Poisson-structure $\{,\}_{\mbox{\scriptsize fib}}$. The flat connection must be compatible with this additional structure, i.e., the following equation should hold for all $f,g \in {\cal A}$: \begin{equation} \label{eq:Dderiv} D\{f,g\}_{\mbox{\scriptsize fib}} = \{ D f, g \}_{\mbox{\scriptsize fib}} + \{ f , D g\}_{\mbox{\scriptsize fib}}. \end{equation} Hence, $\partial $ has to be a symplectic connection and $X(v)$ has to be a symplectic vector field on $T_{x}P$ for any $v \in T_{x}P$. As the fibres are vector spaces, $X$ must be globally hamiltonian, i.e., there must be an ${\cal A}$-valued one-form $r$ such that the components of the one-form $r$ are minus the hamiltonian functions of the components of the vector-valued one-form $X$. (The minus sign is chosen in order to guarantee that the Lie bracket of $X$ with itself corresponds to the Poisson bracket of $r$ with itself.) In this case, the normalization condition $\delta^{-1} X = 0$ is in general not admissible, as it will not give a symplectic vector field. This statement is an immediate consequence of the fact that the usual exponential mapping of a symplectic connection is generally not a local symplectomorphism, and the following theorem: \begin{theorem} The mapping $\mbox{EXP}_p$ is a local symplectomorphism iff $X(v)$ is a hamiltonian vector field for any $v \in TP.$ \end{theorem} \noindent {\em Proof: } Let $X(v)$ be a hamiltonian vector field for any $v \in TP$, with $X = X_r$ for some ${\cal A}$-valued one-form $r$. 
Then $D$ is of the form: \[ D a = \partial a + \{ \omega_{ij} y^i dx^j,a\}_{\mbox{\scriptsize fib}} + \{r,a\}_{\mbox{\scriptsize fib}} ,\] and has the property $(\ref{eq:Dderiv})$. We have to show: \begin{equation} \label{eq:sympl} \mbox{EXP}_p^* \{f,g\} = \{\mbox{EXP}_p^* f, \mbox{EXP}_p^* g\}_p \end{equation} for all $f,g \in {\cal F}(P)$, where $\{,\}_p$ is the Poisson bracket on the fibre $T_p P$ induced by the constant symplectic form $\omega(p)$, i.e., the restriction of $\{,\}_{\mbox{\scriptsize fib}} $ to the fibre $T_pP$. For any $ a \in {\cal F}(P)$ let $\sigma(a)$ denote the unique covariant constant continuation of $a$, i.e., $\sigma(a) \in {\cal A}$, $D \sigma(a) =0, \sigma(a) \restr{P \subset TP} =a$, where we have identified $P$ with the zero section in $TP$. By construction of $\mbox{EXP}$ we have: \begin{equation} \label{eq:exp} \mbox{EXP}_p ^* f = \sigma(f) \restr{T_p P} . \end{equation} Obviously, due to the $\delta$-term in $X_{\mbox{\scriptsize tot}}$, $\sigma(a)$ is locally of the form: \[\sigma(a)(x,y) = a(x) + \frac{\partial}{\partial x^i}a(x) y^i + O(y^2) \] and hence: \[ \{ \sigma(f), \sigma(g) \}_{\mbox{\scriptsize fib}} \restr{P} = \{ f, g \}, ~~~~~ D\{ \sigma(f), \sigma(g) \}_{\mbox{\scriptsize fib}} = 0~~~~~~~(\mbox{by (\ref{eq:Dderiv}))} \] \[\Rightarrow \{ \sigma(f), \sigma(g) \}_{\mbox{\scriptsize fib}} = \sigma(\{f,g\}) \] for any $f,g \in {\cal F}(P)$ by uniqueness of the covariant constant continuation of a function in ${\cal F}(P)$. Using $(\ref{eq:exp})$, equation $(\ref{eq:sympl})$ follows. \hfill Q.E.D. ~~~~~\\ \medskip For a given linear symplectic connection the set of its nonlinear ``flattenings'' satisfying $(\ref{eq:Dderiv})$ is in one-to-one correspondence with the functions $ f \in {\cal A}$ at least of degree 4 in the fibres by the condition: $\delta^{-1} r = f$, where $r$ is the unique one form in ${\cal A} \otimes \Lambda(P)$ which is a hamiltonian for $X$ and vanishes on $P \subset TP$. 
Indeed, as any $X$ is at least of second degree in the vertical coordinates, $r$ is at least of third, and $\delta^{-1} r$ at least of fourth degree. On the other hand, given $f$, we may write the analog of equation $(\ref{eq:iter})$ for $r$ instead of $X$, which is essentially the same (up to signs, since $X$ is minus the hamiltonian vector field of $r$, and up to the replacement of the commutator of vector fields by the Poisson bracket of functions). Hence, given $f$, we can uniquely construct $r$ as the solution of the equation: \begin{equation} \label{eq:rcurv} 0 = \hat{R} - \delta r + \partial r + 1/2\{r,r\} . \end{equation} with the condition \begin{equation} \label{eq:rcond} \delta^{-1} r = f, \end{equation} where the ${\cal A}$-valued two-form $\hat{R}$ is the hamiltonian for the curvature $R$ considered as a two form on $P$ with values in the vertical vector fields on $TP$, locally given by $\hat{R} = (1/4) \omega_{ij} R^j_{klm} y^i y^k d x^l d x^m $. This solution may again be iteratively computed: \begin{equation} \label{eq:riter} r = \delta f + \delta^{-1}(\hat{R} + \partial r + 1/2 \{r,r\}). \end{equation} We observe that the connection on $U \subset TP$ can be easily reconstructed from the mapping $\mbox{EXP}$: The leaf through any $0_p \in TP$ is given by $\bigcup_{q \in P} \mbox{EXP}_q^{-1}(\{0_p\})$, and the connection is defined by the tangent distribution to those leaves. 
Since the mappings $\mbox{EXP}_p$ for different symplectic connections $\partial$ and different `normalization conditions' $(\ref{eq:rcond})$ are all local symplectomorphisms, this observation implies the following theorem: \begin{theorem} The different Ehresmann connections constructed from different symplectic connections and different choices of $f$ are related by fibrewise symplectomorphisms, i.e., by a section in the bundle $\bigcup_{p \in P} \mbox{\it Sym}^{\,0_p}(T_pP)$ where $\mbox{\it Sym}^{\,0_p}(T_pP)$ denotes the group of (infinite jets of) symplectomorphisms of $T_pP$ with fixed point $0_p \in T_pP$. \end{theorem} This statement is the symplectic analog of Theorem 4.3 in \cite{fe:simple} for the quantum case, which states that two different `abelian connections' $D, \tilde{D}$ are related by some inner automorphism $U$ such that $\tilde{D} = D - [ D U \circ U^{-1}, \cdot ]$. (The theorem is formulated only for the normalization condition $\delta^{-1} r = 0$. However, this condition is not used in the proof, so it is valid for the more general normalization conditions studied by us as well.) For the condition $\delta^{-1} r = 0$, $\mbox{EXP}$ is in a suitable sense a `minimal deformation' of the usual exponential mapping such that the resulting mapping is symplectic: namely, as the symplectic connection $\partial$ defines a splitting of $TTP$ into horizontal and vertical subspaces, we can define the acceleration $a(v_p)$ of autoparallel curves due to $X$ and the friction term $- \delta$ at a point $v_p \in TP$ as the difference of the tangent vectors to the lifted autoparallel curve and the lifted geodesic corresponding to $\partial$ through that point, which is always a vertical vector. We identify $T(T_p P)$ and $T_pP $, and denote by $\uparrow$ the vertical lift: $\uparrow: T_p P \rightarrow T(T_p P), ~~ v\mapsto v^{\uparrow}$. 
In order to avoid notational confusion, we denote by $\tilde{\omega}_p $ the symplectic form on the vector space $T_pP$ induced by the symplectic form $\omega$ on $P$. \begin{theorem} The condition $\delta^{-1} r = 0$ for a solution of $(\ref{eq:rcurv})$ is equivalent to the requirement that the acceleration $a(v_p)$ is contained in the $\tilde{\omega}_p$-complement of $v_p^{\uparrow}$ for all $p \in P, v_p \in T_pP$, i.e., $\tilde{\omega}_p(v_p^\uparrow, a(v_p)) = 0 ~\forall v_p \in TP$. \end{theorem} \noindent {\it Proof: } Any solution $r$ of $(\ref{eq:rcurv})$ is of the form $(\ref{eq:riter})$ for some $f$. $a(v_p)$ may be written as \[ a(v_p) = - v_p^{\uparrow} + a^X(v_p), \] where the first term comes from the friction term $- \delta$ and $a^X(v_p)$ is the acceleration due to $X$. For arbitrary $f$, we conclude from $(\ref{eq:dgl})$ that $a^X(v_p)$ is just $(\delta^{-1} X)(v_p)$, which is nonzero even if $\delta^{-1} r =0$, as $\delta^{-1}$ explicitly contains the vertical coordinates $y$. Using the definition of $\delta^{-1}$ we get: \begin{equation} \label{eq:delXr} \delta^{-1} X = \frac{I+1}{I} X_{\delta^{-1} r} + (\omega^\#( I^{-1} r))^{\uparrow}, \end{equation} where $X_f$ denotes the hamiltonian vector field on the fibres of $TP$ corresponding to $f$ (i.e., $\tilde{\omega}^\#(d_y f)$), and $\frac{I+1}{I}, I^{-1}$ are linear operators acting on monomials in $y$ of total degree $i$ by multiplication with $\frac{i+1}{i}$ and $ i^{-1}$, respectively. By $(\ref{eq:riter})$ $r$ is of the form $r = \delta f + \delta^{-1} \alpha$ for a two form $\alpha \in {\cal A} \otimes \Lambda(P)$. 
Setting $V_p = \sum y^i \frac{\partial}{\partial x^i}$ and using $(\ref{eq:delXr})$ we get: \begin{eqnarray*} \tilde{\omega}_p(\delta^{-1} X , V_p^{\uparrow}) &=& \tilde{\omega}(\frac{I+1}{I} X_{f}, V_p^{\uparrow}) + \tilde{\omega}(\omega(p)^\#( I^{-1} (\delta f + \delta^{-1} \alpha))^{\uparrow} , V_p^\uparrow)\\ &=& (\delta f)(V_p) + (\frac{1}{I} \delta^{-1} \alpha)(V_p) \\ &=& (\delta f)(V_p) , \end{eqnarray*} where the last equality holds because for each monomial in $y$, the expression $ (\delta^{-1} \alpha)(V_p)$ is --- up to a real factor --- just $\alpha(V_p,V_p) = 0$. Hence, $\tilde{\omega}_p(v_p^\uparrow, a(v_p)) = 0~~ \forall v_p \in TP$ is fulfilled for $f \equiv 0$. On the other hand, since $ (\delta f)(v_{p}) = \frac{d}{d \lambda} f(x,\lambda y)\restr{\lambda=1} $, it follows from the fact that $$\tilde{\omega}_p(v_p^{\uparrow}, a(v_p)) = 0~~ \forall v_p \in TP$$ that $f(x,y)$ has to be independent of $y$ and hence zero, as the constant term in $f$ must vanish. \hfill Q.E.D. ~~~~ \\
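The last step rests on the identity $(\delta f)(v_{p}) = \frac{d}{d \lambda} f(x,\lambda y)\restr{\lambda=1}$, which is just Euler's relation: the radial derivative multiplies each monomial of degree $i$ in $y$ by $i$. A quick numerical check on a toy polynomial (our own example, not from the text):

```python
def f(y1, y2):
    # degree-3 part (3*y1^2*y2) plus degree-2 part (y1*y2)
    return 3.0 * y1**2 * y2 + y1 * y2

def radial_derivative(y1, y2, h=1e-6):
    # central difference of lambda -> f(lambda * y) at lambda = 1
    return (f((1 + h) * y1, (1 + h) * y2) - f((1 - h) * y1, (1 - h) * y2)) / (2 * h)

y1, y2 = 0.7, -1.3
# Euler's relation: weight each homogeneous part by its degree
exact = 3 * (3.0 * y1**2 * y2) + 2 * (y1 * y2)
print(abs(radial_derivative(y1, y2) - exact) < 1e-6)  # → True
```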
https://arxiv.org/abs/hep-th/9311094
The differential geometry of Fedosov's quantization
B. Fedosov has given a simple and very natural construction of a deformation quantization for any symplectic manifold, using a flat connection on the bundle of formal Weyl algebras associated to the tangent bundle of a symplectic manifold. The connection is obtained by affinizing, nonlinearizing, and iteratively flattening a given torsion free symplectic connection. In this paper, a classical analog of Fedosov's operations on connections is analyzed and shown to produce the usual exponential mapping of a linear connection on an ordinary manifold. A symplectic version is also analyzed. Finally, some remarks are made on the implications for deformation quantization of Fedosov's index theorem on general symplectic manifolds.
https://arxiv.org/abs/2008.00358
Relational Algorithms for k-means Clustering
This paper gives a k-means approximation algorithm that is efficient in the relational algorithms model. This is an algorithm that operates directly on a relational database without performing a join to convert it to a matrix whose rows represent the data points. The running time is potentially exponentially smaller than $N$, the number of data points to be clustered that the relational database represents. Few relational algorithms are known and this paper offers techniques for designing relational algorithms as well as characterizing their limitations. We show that given two data points as cluster centers, if we cluster points according to their closest centers, it is NP-Hard to approximate the number of points in the clusters on a general relational input. This is trivial for conventional data inputs and this result exemplifies that standard algorithmic techniques may not be directly applied when designing an efficient relational algorithm. This paper then introduces a new method that leverages rejection sampling and the $k$-means++ algorithm to construct an $O(1)$-approximate k-means solution.
\section{Analysis of the Weighting Algorithm} \label{sec:weight_algo_analysis} The goal in this subsection is to prove Theorem \ref{thm:coreset_kmeans} which states that the alternative weights form an $O(1)$-approximate coreset with high probability. Throughout our analysis, ``with high probability'' means that for any constant $\rho > 0$ the probability of the statement not being true can be made less than $\frac{1}{N^\rho}$ asymptotically by appropriately setting the constants in the algorithm. Intuitively, if a decent fraction of the points in each donut are closer to center $c_i$ than any other center, then Theorem \ref{thm:coreset_kmeans} can be proven by a straightforward application of Chernoff bounds showing that each alternate weight $w'_i$ is likely close to the true weight $w_i$. The conceptual difficulty arises if only a very small portion of the points in a donut $D_{i, j}$ are closer to $c_i$ than to any other center, in which case the estimated ratio satisfies $f'_{i,j} < \frac{1}{2{k'}^2 \log{N}}$ and thus the ``uncounted'' points in $D_{i,j}$ contribute no weight to the computed weight $w'_i$. We call this the \textbf{undersampled} case. If many donuts around a center $c_i$ are undersampled, the computed weight $w'_i$ may poorly approximate the actual weight $w_i$. To address this, we need to prove that omitting the weight from these uncounted points does not have a significant impact on the objective value. We break our proof into four parts. The first part, described in subsubsection \ref{subsubsect:definingfractional}, involves conceptually defining a fractional weight $w_i^f$ for each center $c_i \in {C}$. Each point has a weight of $1$, and instead of giving all this weight to its closest center, we allow fractionally assigning the weight to various ``near'' centers. $w_i^f$ is then the aggregated weight over all points for $c_i$. 
The second part, described in subsubsection \ref{subsubsect:fractionalproperties}, establishes various properties of the fractional weight that we will need. The third part, described in subsubsection \ref{subsubsect:alternatefractional}, shows that each fractional weight $w^f_i$ is likely to be closely approximated by the computed weight $w'_i$. The fourth part, described in subsubsection \ref{subsubsect:fractionaloptimal}, shows that the fractional weights for the centers in $C$ form an $O(1)$-approximate coreset. Subsubsection \ref{subsubsect:fractionaloptimal} also contains the proof of Theorem \ref{thm:coreset_kmeans}. \subsection{Defining the Fractional Weights} \label{subsubsect:definingfractional} To define the fractional weights we first define an auxiliary directed acyclic graph $G = (S, E)$ where there is one node in $S$ corresponding to each row in $J$. For the rest of this section, with a slight abuse of notation, we use $S$ to denote both the nodes in the graph $G$ and the set of $d$-dimensional data points in the design matrix. Let $p$ be an arbitrary point in $S - {C}$. Let $\alpha(p)$ denote the subscript of the center closest to $p$, i.e., if $c_i \in {C}$ is closest to $p$ then $\alpha(p)=i$. Let $D_{i,j}$ be the donut around $c_{i}$ that contains $p$. If $D_{i,j}$ is not undersampled then $p$ will have one outgoing edge $(p, c_{i})$. So let us now assume that $D_{i,j}$ is undersampled. Defining the outgoing edges from $p$ in this case is a bit more complicated. Let $A_{i,j}$ be the points $q \in D_{i,j}$ that are closer to $c_{i}$ than any other center in $C$ (i.e., $\alpha(q)=i$). If $j =1$ then $D_{i, 1}$ contains only the point $p$, and the only outgoing edge from $p$ goes to $c_i$. So let us now assume $j>1$. Let $c_h$ be the center that is closest to the most points in $D_{i, j-1}$, the next donut inward from $D_{i, j}$ toward $c_{i}$. That is, $c_h = \argmax_{c_l \in {C}} \sum_{q \in D_{i, j-1}} \mathbbm{1}_{\alpha(q) = l}$. 
Let $M_{i, j-1}$ be the points in $D_{i, j-1}$ that are closer to $c_h$ than any other center. That is, $M_{i, j-1}$ is the collection of $q \in D_{i, j-1}$ such that $\alpha(q)=h$. Then there is a directed edge from $p$ to each point in $M_{i, j-1}$. Before defining how to derive the fractional weights from $G$, let us take a detour to note that $G$ is acyclic. \begin{lemma} \label{lemma:acyclic} $G$ is acyclic. \end{lemma} \begin{proof} Consider a directed edge $(p,q) \in E$, let $c_i$ be the center in ${C}$ that $p$ is closest to, and let $D_{i,j}$ be the donut around $c_i$ that contains $p$. Then since $p \in D_{i, j}$ it must be the case that $\norm{p - c_i}_2^2 > r_{i, j-1}$. Since $q \in B_{i, j-1}$ it must be the case that $\norm{q - c_i}_2^2 \le r_{i, j-1}$. Thus $\norm{p - c_i}_2^2 > \norm{q - c_i}_2^2$. Thus the closest center to $q$ must be closer to $q$ than the closest center to $p$ is to $p$. Thus, as one travels along a directed path in $G$, although the identity of the closest center can change, the distance to the closest center must be monotonically decreasing. Thus, $G$ must be acyclic. \end{proof} We explain how to compute a fractional weight $w^f_p$ for each point $p \in S$ using the network $G$. Initially each $w_p^f$ is set to 1. Then conceptually these weights flow toward the sinks in $G$, splitting evenly over all outgoing edges at each vertex. More formally, the following flow step is repeated until it is no longer possible to do so: \medskip \noindent \textbf{Flow Step:} Let $p \in S$ be an arbitrary point that currently has positive fractional weight and that has positive outdegree $h$ in $G$. Then for each directed edge $(p, q)$ in $G$ increment $w_q^f$ by $w_p^f / h$. Finally set $w_p^f$ to zero. \medskip As the sinks in $G$ are exactly the centers in ${C}$, the centers in ${C}$ will be the only points that end up with positive fractional weight. 
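In conventional (non-relational) form, the Flow Step admits a direct implementation. The sketch below (a toy graph with our own naming, not the relational version) processes the nodes of a DAG in topological order, which reaches the same fixed point as repeating the Flow Step in arbitrary order, and checks that all weight ends up on the sinks:

```python
from collections import defaultdict, deque

def flow_weights(nodes, edges):
    """Give every node an initial unit weight, then push weight toward
    the sinks of a DAG, splitting evenly over outgoing edges (the Flow
    Step, applied once per node in topological order)."""
    out = defaultdict(list)
    indeg = defaultdict(int)
    for p, q in edges:
        out[p].append(q)
        indeg[q] += 1
    w = {p: 1.0 for p in nodes}
    queue = deque(p for p in nodes if indeg[p] == 0)
    while queue:
        p = queue.popleft()
        if out[p]:                      # non-sink: apply the Flow Step
            share = w[p] / len(out[p])
            for q in out[p]:
                w[q] += share
            w[p] = 0.0
        for q in out[p]:
            indeg[q] -= 1
            if indeg[q] == 0:
                queue.append(q)
    return w

# two "centers" c1, c2 as sinks; p, q, r play the role of data points
w = flow_weights(["p", "q", "r", "c1", "c2"],
                 [("p", "q"), ("p", "r"), ("q", "c1"), ("r", "c1"), ("r", "c2")])
print(round(w["c1"], 2), round(w["c2"], 2), round(sum(w.values()), 2))  # → 3.25 1.75 5.0
```

Only the sinks carry weight at the end, and the total weight equals the number of nodes, since the Flow Step only moves weight around.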
Thus we use $w^f_i$ to refer to the resulting fractional weight on center $c_i \in {C}$. \subsection{Properties of the Fractional Weights} \label{subsubsect:fractionalproperties} Let $f_{i,j}$ be the fraction of points in the donut $D_{i,j} = B_{i,j} - B_{i,j-1}$ that are closest to $c_i$ among all centers in ${C}$. We show in Lemma~\ref{lem:kirk1} and Lemma~\ref{lem:kirk2} that with high probability, either the estimated ratio is a good approximation of $f_{i,j}$, or the real ratio $f_{i,j}$ is very small. We show in Lemma~\ref{lem:max_weight} that the maximum flow through any node is bounded by $1+\epsilon$ when $N$ is big enough. This follows by induction because each point has $\Omega({k'} \log{N})$ neighbors and every point can receive incoming edges from at most one set of nodes per center. We further know that every point that is not uncounted actually contributes to its center's weight. \begin{lemma}\label{lem:kirk1} With high probability either $|f_{i,j}-f'_{i,j}| \leq \epsilon f_{i,j}$ or $f'_{i,j} \le \frac{1}{2{k'}^2 \log{N}}$. \end{lemma} To prove Lemma \ref{lem:kirk1}, we use the following Chernoff bound. \begin{lemma} \label{lemma:Chernoff} Consider Bernoulli trials $X_1, \ldots, X_n$. Let $X = \sum_{i=1}^n X_i$ and $\mu = E[X]$. Then, for any $\lambda >0$: \begin{align*} {\mathbf{Pr}}[X \geq \mu + \lambda] &\leq \exp\left(-\frac{\lambda^2}{2\mu+\lambda} \right) & \textnormal{ Upper Chernoff Bound}\\ {\mathbf{Pr}}[X \leq \mu - \lambda] &\leq \exp\left(-\frac{\lambda^2}{3\mu}\right) & \textnormal{ Lower Chernoff Bound} \end{align*} \end{lemma} \begin{proof}{Proof of Lemma \ref{lem:kirk1}:} Fix any center $c_i \in {C}$ and $j \in [\log N]$. By applying the lower Chernoff bound from Lemma \ref{lemma:Chernoff} it is straightforward to conclude that if $\tau$ is large then with high probability at least a third of the test points in each $T_{i,j}$ are in the donut $D_{i,j}$. That is, with high probability $s_{i,j} \ge \frac{\tau}{3\epsilon^2}{k'}^2 \log^2 N$ . 
So let us consider a particular $T_{i,j}$ and condition on $s_{i,j}$ having some fixed value that is at least $\frac{\tau}{3\epsilon^2}{k'}^2 \log^2 N$. So $s_{i,j}$ is conditioned on being large. Recall $t_{i,j} = \sum_{p \in W_{i,j}} (\mathbbm{1}_{ p \in T_{i,j} }) (\mathbbm{1}_{ \alpha(p) =i })$, and the indicator random variables $\mathbbm{1}_{ p \in T_{i,j} }$ are Bernoulli trials. Further note that by the definition of $\gamma_{p,i,j}$ it is the case that $ E[t_{i,j}] = \sum_{p \in W_{i,j}} \gamma_{p,i,j}(\mathbbm{1}_{ \alpha(p) =i })$. Further note that, as the sampling of test points is nearly uniform, $ f_{i,j} (1-\delta) s_{i,j} \le E[t_{i,j}] \le f_{i,j} (1+\delta) s_{i,j}$. For notational convenience, let $\mu = E[t_{i,j}]$. We now break the proof into three cases that cover the ways in which the statement of this lemma could fail. For each case, we show that with high probability the case does not occur. \textbf{Case 1: $f'_{i,j} \geq \frac{1}{2{k'}^2 \log{N}}$ and $f_{i,j} > \frac{1-\epsilon}{2 k'^2 \log{N}} $ and $f'_{i,j} \geq (1+\epsilon) f_{i,j}$.} We are going to prove that the probability of this case happening is very low. 
If we set $\lambda = \epsilon \mu$, then using the Upper Chernoff Bound, we have \begin{align*} {\mathbf{Pr}}[t_{i,j} \geq (1+\epsilon) \mu ] &\le \exp\left(-\frac{(\epsilon \mu)^2}{2\mu+\epsilon \mu} \right) &\textnormal{[Upper Chernoff Bound]} \\ &\leq \exp\left(-\frac{\epsilon^2 (1-\delta) f_{i,j} s_{i,j}}{2+\epsilon} \right) &\textnormal{[$ \mu \geq (1-\delta) f_{i,j} s_{i,j}$]} \\ &\leq \exp\left(-\frac{\epsilon^2 (1-\delta) (1-\epsilon) s_{i,j}}{3(2 k'^2 \log{N})} \right) &\textnormal{[$ f_{i,j} > \frac{1-\epsilon}{2 k'^2 \log{N}}$]} \\ &\leq \exp\left(-\frac{\epsilon^2 (1-\delta) (1-\epsilon) \tau {k'}^2 \log^2 N}{3(2 k'^2 \log{N})(3\epsilon^2)} \right) &\textnormal{[$ s_{i,j} \geq \frac{\tau}{3\epsilon^2}{k'}^2 \log^2 N$]} \\ &= \exp\left(-\frac{(1-\delta) (1-\epsilon) \tau \log N}{18} \right) \end{align*} Therefore, for $\delta \leq \epsilon/2 \leq 1/10$ and $\tau \geq 30$, this case cannot happen with high probability. \textbf{Case 2: $f'_{i,j} \geq \frac{1}{2{k'}^2 \log{N}}$ and $f_{i,j} > \frac{1-\epsilon}{2 k'^2 \log{N}} $ and $f'_{i,j} < (1-\epsilon) f_{i,j}$.} We use the Lower Chernoff Bound with $\lambda=\epsilon \mu$ to show that the probability of this event is very small. \begin{align*} {\mathbf{Pr}}[t_{i,j} \leq (1-\epsilon) \mu ] &\leq \exp\left(-\frac{(\epsilon \mu)^2}{3\mu} \right) \\ &\leq \exp\left(-\frac{\epsilon^2 (1-\delta) f_{i,j} s_{i,j} }{3} \right) &\textnormal{[$ \mu \geq (1-\delta) f_{i,j} s_{i,j}$]} \\ &\leq \exp\left(-\frac{\epsilon^2 (1-\delta) (1-\epsilon) s_{i,j}}{3(2 k'^2 \log{N})} \right) &\textnormal{[$ f_{i,j} > \frac{1-\epsilon}{2 k'^2 \log{N}}$]} \\ &\leq \exp\left(-\frac{\epsilon^2 (1-\delta) (1-\epsilon) \tau {k'}^2 \log^2 N}{3(2 k'^2 \log{N})(3\epsilon^2)} \right) &\textnormal{[$ s_{i,j} \geq \frac{\tau}{3\epsilon^2}{k'}^2 \log^2 N$]} \\ &= \exp\left(-\frac{(1-\delta) (1-\epsilon) \tau \log N}{18} \right) \end{align*} Therefore, for $\delta \leq \epsilon/2 \leq 1/10$ and $\tau \geq 30$, this case cannot happen with high probability.
\textbf{Case 3: \boldmath $f'_{i,j} \geq \frac{1}{2{k'}^2 \log{N}}$ and $f_{i,j}\leq \frac{1-\epsilon}{2{k'}^2 \log{N}}$\unboldmath:} Since $f'_{i,j} = \frac{t_{i,j}}{s_{i,j}}$, in this case: \begin{align} \label{eq:dax1} t_{i,j} \geq \frac{s_{i,j}}{2{k'}^2 \log{N}} \end{align} Since $\mu \le f_{i,j} (1+\delta) s_{i,j}$, in this case: \begin{align} \label{eq:dax2} \mu \le \frac{1-\epsilon}{2{k'}^2 \log{N}} (1+\delta) s_{i,j} \end{align} Thus, subtracting line \ref{eq:dax2} from line \ref{eq:dax1}, we conclude that: \begin{align} t_{i,j} \ge \mu + \frac{(\epsilon - \delta + \epsilon \delta) s_{i,j}}{2{k'}^2 \log{N}} \end{align} Let $\lambda = \frac{(\epsilon - \delta + \epsilon \delta) s_{i,j}}{2{k'}^2 \log{N}}$. We can conclude that \begin{align*} {\mathbf{Pr}}[t_{i,j} \ge \mu + \lambda ] &\le \exp\left(-\frac{\lambda^2}{2\mu+\lambda} \right) &\textnormal{Upper Chernoff Bound}\\ &\le \exp\left(\frac{-\lambda^2}{\frac{1-\epsilon}{2{k'}^2 \log{N}} (1+\delta) s_{i,j} +\lambda} \right) &\textnormal{Using line \ref{eq:dax2}}\\ &= \exp\left( \frac{- \left( \frac{ (\epsilon - \delta + \epsilon \delta) s_{i,j}}{2 {k'}^2\log{N}} \right)^2} {\frac{1-\epsilon}{2{k'}^2 \log{N}} (1+\delta) s_{i,j} +\frac{ (\epsilon - \delta + \epsilon \delta) s_{i,j}}{2 {k'}^2 \log{N}} }\right) & \\ &= \exp\left(\frac{- \left(\frac{ (\epsilon - \delta + \epsilon \delta)^2 s_{i,j}}{ {k'}^2 \log{N}}\right)}{2(1-\epsilon)(1+\delta) + 2(\epsilon - \delta + \epsilon \delta) } \right) \\ &\le \exp\left( \frac{ - (\epsilon - \delta + \epsilon \delta)^2 s_{i,j}}{12 {k'}^2 \log{N}}\right) &\\ &\le \exp\left( \frac{ - (\epsilon - \delta + \epsilon \delta)^2 \tau\log{N} }{36 \epsilon^2 }\right) &\textnormal{Substituting our lower bound on } s_{i,j} \end{align*} Therefore, for $\delta \leq \epsilon/2 \leq 1/10$ and $\tau \geq 30$, this case cannot happen with high probability. \end{proof} The next lemma shows how large $f'_{i,j}$ is when we know that $f_{i,j}$ is large.
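As an informal aside (not part of the formal argument), the behavior guaranteed by Lemma~\ref{lem:kirk1} is easy to observe empirically: the estimator $f'_{i,j} = t_{i,j}/s_{i,j}$ concentrates around the true donut fraction once enough test points are drawn. The following minimal Python sketch uses hypothetical and much smaller parameters than the analysis requires:

```python
import random

random.seed(0)

def estimate_fraction(donut_labels, s):
    """Sample s test points with replacement from the donut (uniformly here,
    whereas the algorithm only needs approximately uniform sampling) and
    return the observed fraction labeled True, i.e. closest to c_i."""
    hits = sum(random.choice(donut_labels) for _ in range(s))
    return hits / s

# A hypothetical donut of 1000 points where a 0.3 fraction is closest to c_i.
donut = [True] * 300 + [False] * 700
f_est = estimate_fraction(donut, s=20000)
assert abs(f_est - 0.3) <= 0.1 * 0.3  # within 10% relative error
```

With $s$ on the order of the $\frac{\tau}{\epsilon^2}{k'}^2\log^2 N$ test points the algorithm uses, the relative error drops to the $\epsilon$ scale of the lemma.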
\begin{lemma}\label{lem:kirk2} If $f_{i,j} > \frac{1+\epsilon}{2k'^2\log{N}}$ then with high probability $f'_{i,j} \geq \frac{1}{2k'^2\log{N}}$. \end{lemma} \begin{proof} We show that the probability that $f'_{i,j} < \frac{1}{2k'^2\log{N}}$ while $f_{i,j} \geq \frac{1+\epsilon}{2k'^2\log{N}}$ is small. Multiplying these conditions by $s_{i,j}$, we conclude that $t_{i,j} < \frac{s_{i,j}}{2{k'}^2 \log{N}}$ and $\mu \geq(1-\delta) \frac{(1+\epsilon) s_{i,j}}{2{k'}^2 \log{N}}$, and thus $t_{i,j} \le \mu - \lambda$ where $\lambda = \frac{(\epsilon-\delta -\epsilon\delta) s_{i,j}}{2{k'}^2 \log{N}}$. Then we can conclude that: \begin{align*} {\mathbf{Pr}}[t_{i,j} \leq \mu - \lambda ] &\le \exp\left(-\frac{\lambda^2}{3\mu}\right) &\textnormal{[Lower Chernoff Bound]}\\ &= \exp\left(-\frac{\left(\frac{(\epsilon-\delta -\epsilon\delta) s_{i,j}}{2{k'}^2 \log{N}}\right)^2}{3\mu}\right) & \\ &\le \exp\left(-\frac{\left(\frac{(\epsilon-\delta -\epsilon\delta) s_{i,j}}{2{k'}^2 \log{N}}\right)^2}{3\frac{1-\epsilon}{2{k'}^2 \log{N}} (1+\delta) s_{i,j}}\right) & \\ &= \exp\left(-\frac{\left(\frac{(\epsilon-\delta -\epsilon\delta)^2 s_{i,j}}{2{k'}^2 \log{N}}\right)}{3(1-\epsilon)(1+\delta) }\right) & \\ &\le \exp\left(\frac{-(\epsilon-\delta -\epsilon\delta)^2 s_{i,j}}{12 {k'}^2 \log{N}}\right) & \mbox{[$\delta< \epsilon \leq 1$]} \\ &\le \exp\left(\frac{-(\epsilon-\delta -\epsilon\delta)^2 ( \frac{\tau}{3\epsilon^2}{k'}^2 \log^2 N)}{12 {k'}^2 \log{N}}\right) & \textnormal{[Using our lower bound on $ s_{i,j}$]} \end{align*} Therefore, for $\delta \leq \epsilon/2 \leq 1/10$ and $\tau \geq 30$, this case cannot happen with high probability. \end{proof} We now seek to bound the fractional weights computed by the algorithm. Let $\Delta_i(p)$ denote the total weight received by a point $p \in S\setminus {C}$ from other nodes (including the initial weight of one on $p$). Furthermore, let $\Delta_o(p)$ denote the total weight sent by $p$ to all other nodes.
Notice that in the flow step $\Delta_o(p) = \Delta_i(p)$ for all $p$ in $S \setminus C$. \begin{lemma} \label{property:total_weights} Let $\Delta_i(p)$ denote the total weight received by a point $p \in S\setminus {C}$ from other nodes (including the initial weight of one on $p$). Furthermore, let $\Delta_o(p)$ denote the total weight sent by $p$ to all other nodes. With high probability, for all $q\in S$, $\Delta_i(q) \leq 1 + \frac{1+2\epsilon}{\log{N}}\max_{p:(p,q) \in E}\Delta_o(p)$. \end{lemma} \begin{proof} Fix a point $q$ that receives redirected weight (has incoming arcs in $G$). Consider its direct predecessors: $P(q) = \{p: (p,q) \in E\}$. Partition $P(q)$ as follows: $P(q) = \bigcup_{i = 1, \ldots, {k'}}P_{c_i}(q)$, where $P_{c_i}(q)$ is the set of points that flow their weights into $q$ and whose closest center in ${C}$ is $c_i$. Observe that the point $q$ can only belong to one donut around $c_i$. Due to this, $P_{c_i}(q)$ is either empty or contains a set of points in a single donut around $c_i$ that redirect weight to $q$. Fix $P_{c_i}(q)$ for some $c_i$. If this set is non-empty, suppose it lies in the $j$-th donut around $c_i$. Conditioned on the events stated in Lemmas \ref{lem:kirk1} and \ref{lem:kirk2}, since the points in $P_{c_i}(q)$ are undersampled, we have $|P_{c_i}(q)| \leq \frac{(1+\epsilon)2^{j-1}}{2{k'}^2 \log{N}}$. Consider any $p \in P_{c_i}(q)$. Let $\beta_i$ be the number of points that $p$ charges its weight to (this is the same for all such points $p$). It is the case that $\beta_i$ is at least $\frac{(1-\delta)2^{j-1}}{2{k'}}$, since $p$ flows its weight to the points assigned to the center that has the most points assigned to it from $c_i$'s $(j-1)$th donut. Thus, $q$ receives weight from $|P_{c_i}(q)| \leq \frac{(1+\epsilon)2^{j-1}}{2{k'}^2 \log{N}}$ points, and each such point splits its weight equally among at least $\frac{(1-\delta)2^{j-1}}{2{k'}}$ points.
The total weight that $q$ receives from points in $P_{c_i}(q)$ is therefore at most the following. \begin{align*} &\qquad \frac{2{k'}}{(1-\delta)2^{j-1}} \sum_{p \in P_{c_i}(q)} \Delta_o(p) &\\ &\leq \frac{2{k'}}{(1-\delta)2^{j-1}} \sum_{p \in P_{c_i}(q)} \max_{p \in P_{c_i}(q) }\Delta_o(p)& \\ &\leq \frac{2{k'}}{(1-\delta)2^{j-1}} \cdot \frac{(1+\epsilon) \cdot 2^{j-1}}{2{k'}^2 \log{N}} \ \max_{p \in P_{c_i}(q) }\Delta_o(p) &\mbox{[$|P_{c_i}(q)| \leq \frac{(1+\epsilon)2^{j-1}}{2{k'}^2 \log{N}}$]}\\ &\leq \frac{1+2\epsilon}{{k'} \log N} \max_{p \in P_{c_i}(q)}\Delta_o(p) & \mbox{[$\delta \leq \frac{\epsilon}{2} \leq \frac{1}{10}$]} \end{align*} Switching the max to $\max_{p:(p,q) \in E}\Delta_o(p)$, summing over all centers $c_i \in {C}$, and adding the original unit weight on $q$ gives the lemma. \end{proof} The following crucial lemma bounds the maximum weight that a point can receive. \begin{lemma} \label{lem:max_weight} Fix $\eta$ to be a constant smaller than $\frac{\log(N)}{10}$ and $\epsilon <1$. Say that for all $q \in S \setminus C$ it is the case that $\Delta_o(q) =\eta \Delta_i(q)$. Then, with high probability, for any $p \in S \setminus C$ it is the case that $\Delta_i(p) \leq 1+ \frac{2\eta}{\log{N}}$. \end{lemma} \begin{proof} We prove this by induction on the nodes. The lemma is true for all nodes that have no incoming edges in $G$. Now assume it is true for all nodes whose longest path reaching them in $G$ has length $t-1$; we prove it for nodes whose longest path reaching them in $G$ has length $t$. Fix such a node $q$. For any node $p$ such that $(p,q) \in E$, by induction we have $\Delta_i(p) \leq 1 +\frac{2\eta}{\log{N}}$, so $\Delta_o(p) \leq \eta\left(1+\frac{2\eta}{\log{N}}\right)$.
By Lemma~\ref{property:total_weights}, $\Delta_i(q) \leq 1 + \frac{1+2\epsilon}{\log{N}}\max_{p:(p,q) \in E}\Delta_o(p) \leq 1 + \frac{\eta(1+2\epsilon)}{\log{N}} \left (1+ \frac{2\eta}{\log{N}} \right ) = 1+\frac{\eta}{\log{N}} \left( (1+2\epsilon) + \frac{2(1+2\epsilon)\eta}{\log{N}} \right) \leq 1+ \frac{2\eta}{\log{N}}$, where the last inequality uses $\delta \leq \epsilon/2 \leq 1/10$ and $\eta \leq \frac{\log(N)}{10}$. \end{proof} \subsection{Comparing Alternative Weights to Fractional Weights} \label{subsubsect:alternatefractional} It only remains to bound the cost of mapping points to the centers they contribute weight to. This can be done by iteratively charging the total cost of reassigning each node with the flow. In particular, each point only passes its weight to nodes that are closer to their center. We can charge the flow through each node to the assignment cost of that node to its closest center, and argue that the cumulative reassignment cost bounds the real fractional assignment cost. Further, each node only has $1+\epsilon$ flow going through it. This will be sufficient to bound the overall cost in Lemma~\ref{lem:bound-cost-by-alg}. \begin{lemma}\label{lem:close-weights} With high probability, for every center $c_i$, the estimated weight $w'_i$ computed by the weighting algorithm is $(1 \pm 2\epsilon)w^f_i$, where $w^f_i$ is the fractional weight of $c_i$. \end{lemma} \begin{proof} Apply a union bound to Lemmas \ref{lem:kirk1} and \ref{lem:kirk2} over all $i$ and $j$. Fix a center $c_i$. Consider all of the points that are closest to $c_i$ and are not undersampled, and let $w^s_i$ denote the number of these points. All the incoming edges to $c_i$ in $G$ come from these points; therefore, by Lemma \ref{lem:max_weight}, $w^s_i \leq w^f_i \leq w^s_i \left(1+\frac{2}{\log(N)}\right)$. On the other hand, $w'_i$ is a $(1\pm \epsilon)$-approximation of $w^s_i$. Therefore, $ \frac{1-\epsilon}{1+\frac{2}{\log(N)}} w^f_i\leq w'_i \leq (1+\epsilon) w^f_i$.
Assuming that $\log N$ is sufficiently large relative to $1/\epsilon$, the lemma follows. \end{proof} \subsection{Comparing Fractional Weights to Optimal} \label{subsubsect:fractionaloptimal} Next we bound the total cost of the fractional assignment defined by the flow. For any point $p \in S$ and $c_i \in {C}$, we let $\omega(p, c_i)$ be the fraction of $p$'s weight that is transferred to $c_i$ according to the graph $G$. Naturally, we have $\sum_{c_i \in {C}} \omega(p, c_i) = 1$ for any $p \in S$, and the fractional weights satisfy $w_i^f = \sum_{p \in S}\omega(p, c_i)$ for any $c_i \in {C}$. \begin{lemma} \label{lem:bound-cost-by-alg} Let $\phi_{opt}$ be the optimal $k$-means cost on the original set $S$. With high probability, it is the case that: $$\sum_{p\in S}\sum_{c_i \in {C}}\omega(p, c_i)\|p - c_i\|^2 \leq 160 (1+\epsilon) \phi_{opt}$$ \end{lemma} \begin{proof} Let $\phi^* = \sum_{p \in S} \|p - c_{\alpha(p)}\|^2$. Consider any $p \in S$ and center $c_i$ such that $\omega(p,c_i)> 0$, and let $P$ be any path from $p$ to $c_i$ in $G$. If node $p$'s only outgoing arc is to its closest center $c_{\alpha(p)} = c_i$, then $P = p \rightarrow c_i$, and we have $\sum_{c \in {C}}\omega(p, c) \|p - c\|^2 = \|p - c_{\alpha(p)}\|^2$. Otherwise assume $P = p \rightarrow q_1 \rightarrow q_2 \rightarrow \ldots \rightarrow q_\ell \rightarrow c_i$. Note that the closest center to $q_\ell$ is $c_i$. Let $\Delta(P)$ be the fraction of the original weight of $1$ on $p$ that is given to $c_i$ along this path according to the flow of weights. As we observed in the proof of Lemma \ref{lemma:acyclic}, we have $\|p - c_{\alpha(p)}\| > \|q_1 - c_{\alpha(p)}\| \geq \|q_1 - c_{\alpha(q_1)}\| > \|q_2 - c_{\alpha(q_1)}\| \geq \|q_2 - c_{\alpha(q_2)}\| > \ldots > \|q_\ell - c_{\alpha(q_\ell)}\|$. This follows because for any arc $(u,v)$ in the graph, $v$ is in a donut closer to $c_{\alpha(u)}$ than the donut $u$ is in, and $v$ is closer to $c_{\alpha(v)}$ than to $c_{\alpha(u)}$.
We make use of the relaxed triangle inequality for squared $\ell_2$ norms: for any three points $x,y,z$, we have $\|x-z\|^2 \leq 2(\|x-y\|^2 + \|y-z\|^2)$. Thus, we bound $\|p - c_i\|^2$ by \begin{align*} \|p - c_i\|^2 &= \|p- c_{\alpha(p)} + c_{\alpha(p)} - q_1 + q_1 - c_i\|^2 \\ &\leq 2\|p- c_{\alpha(p)} + c_{\alpha(p)} - q_1\|^2 + 2\|q_1 - c_i\|^2 &&\text{[relaxed triangle inequality]}\\ & \leq 2(\|p- c_{\alpha(p)}\|+\| c_{\alpha(p)} - q_1\|)^2 + 2\|q_1 - c_i\|^2 && \mbox{[triangle inequality]}\\ & \leq 8\|p-c_{\alpha(p)}\|^2 + 2\|q_1 - c_i\|^2 && \mbox{[$\|p- c_{\alpha(p)}\|\geq\| c_{\alpha(p)} - q_1\|$]}. \end{align*} Applying the prior steps to each $q_j$ gives the following. \begin{align*} \|p - c_i\|^2 &\leq 8(\|p - c_{\alpha(p)}\|^2 + \sum_{j=1}^\ell 2^j \|q_j - c_{\alpha(q_j)}\|^2) \end{align*} Let $\mathcal{P}_q(j)$ be the set of all paths $P$ that reach point $q$ using $j$ edges; if $j=0$, then $P$ starts with point $q$. We seek to bound $\sum_{j=0}^\infty 2^j \sum_{P \in \mathcal{P}_q(j)} \Delta(P) \|q - c_{\alpha(q)}\|^2$, which bounds the total charge on point $q$ from the bound above over all paths $P$ that contain it. Define a weight function $\Delta'(p)$ for each node $p \in S \setminus C$. This is a new flow of weights like $\Delta$, except now the weight increases at each node. In particular, give each node an initial weight of $1$. Let $\Delta'_o(p)$ be the total weight leaving $p$; it is divided evenly among the nodes to which $p$ has outgoing edges. Define $\Delta'_i(p)$ to be the weight incoming to $p$ from all other nodes plus one, the initial weight of $p$. Set $\Delta'_o(p)$ to be $2 \Delta'_i(p)$, twice the incoming weight. Lemma~\ref{lem:max_weight} (applied with $\eta = 2$) implies that the maximum weight of any point $p$ is $\Delta'_i(p) \leq 1 + \frac{4}{\log N}$. Further notice that for any $q$ it is the case that $\Delta'_i(q)= \sum_{j=0}^\infty 2^j \sum_{P \in \mathcal{P}_q(j)} \Delta(P) $.
Let $\mathcal{P}(p,c_i)$ be the set of all paths that start at $p$ and end at center $c_i$; such paths correspond to how $p$'s unit weight reaches $c_i$. We have $\omega(p, c_i) = \sum_{P \in \mathcal{P}(p, c_i)} \Delta(P)$. Let $\mathcal{P}$ denote the set of all paths, $\ell(P)$ denote the length of path $P$ (the number of edges on $P$), and let $P(j)$ denote the $j$th node on path $P$. Thus we have the following. \begin{eqnarray*} &&\sum_{p\in S}\sum_{c_i \in {C}}\omega(p, c_i)\|p - c_i\|^2\\ &= &\sum_{p\in S}\sum_{c_i \in {C}}\sum_{P \in \mathcal{P}(p,c_i)} \Delta(P) \|p - c_i\|^2 \\ &\leq & 8\sum_{p\in S}\sum_{c_i \in {C}} \sum_{P\in \mathcal{P}(p,c_i)} \Delta(P)(\sum_{j=0}^{\ell(P) - 1}2^j \|P(j) - c_{\alpha(P(j))}\|^2 ) \\ & = & 8 \sum_{P \in \mathcal{P}} \Delta(P)(\sum_{j=0}^{\ell(P) - 1}2^j \|P(j) - c_{\alpha(P(j))}\|^2 ) \\ & = & 8\sum_{q \in S} \sum_{j=0}^{+\infty}\sum_{P \in \mathcal{P}_q(j)} 2^j \Delta(P) \|q - c_{\alpha(q)}\|^2 \\ & = & 8\sum_{q \in S} \Delta'_i(q) \|q - c_{\alpha(q)}\|^2 \\ & \leq & \sum_{q \in S} 8(1+\frac{4}{\log{N}})\|q - c_{\alpha(q)}\|^2 = 8(1+\frac{4}{\log{N}})\phi^* \end{eqnarray*} Lemma \ref{lem:bound-cost-by-alg} follows because if $k' \geq 1067k\log{N}$, then $\phi^* \leq 20 \phi_{opt}$ with high probability by Theorem~1 in \cite{aggarwal2009adaptive}. \end{proof} Finally, we prove that any $O(1)$-approximate solution for optimal weighted $k$-means on the set $({C}, {W}')$ gives a constant approximation for optimal $k$-means on the original set $S$. Let ${W}^f = \{w_1^f, \ldots, w_{k'}^f\}$ be the fractional weights for centers in ${C}$. Let $\phi_{{W}^f}^*$ denote the optimal weighted $k$-means cost on $({C}, {W}^f)$, and $\phi_{{W}'}^*$ denote the optimal weighted $k$-means cost on $({C}, {W}')$. We first prove that $\phi_{{W}^f}^* = O(1) \phi_\mathrm{OPT}$, where $\phi_\mathrm{OPT}$ denotes the optimal $k$-means cost on the set $S$.
\begin{lemma} \label{lem:const_appx_adaptive} Let $({C}, {W}^f)$ be the set of points sampled and the weights collected by the fractional assignment $\omega$. With high probability, we have $\phi_{{W}^f}^* = O(1) \phi_\mathrm{OPT}$. \end{lemma} \begin{proof} Consider the cost of the fractional assignment we have designed. For $c_i \in {C}$, the weight is $w^f_i = \sum_{p \in S}\omega(p, c_i)$. Denote the $k$-means cost of $\omega$ by $\phi_\omega = \sum_{p \in S}\sum_{c \in {C}}\omega(p, c)\|p - c\|^2$. By Lemma \ref{lem:bound-cost-by-alg}, we have that $\phi_\omega \leq 160(1+\epsilon) \phi_\mathrm{OPT}$. Intuitively, in the following we show that $\phi_{{W}^f}^*$ is close to $\phi_\omega$. As always, we let ${C}_\mathrm{OPT}$ denote the optimal centers for $k$-means on the set $S$. For a set of points $X$ with weights $Y:X \to \mathbb{R}^+$ and a set of centers $Z$, we let $\phi_{(X,Y)}(Z) = \sum_{x \in X}Y(x)\min_{z \in Z}\|x-z\|^2$ denote the cost of assigning the weighted points in $X$ to their closest centers in $Z$. Note that $\phi_{{W}^f}^*\leq \phi_{({C}, {W}^f)}({C}_\mathrm{OPT})$, since $\phi_{{W}^f}^*$ is the minimum cost over all sets of $k$ centers. \begin{align*} \phi_{{W}^f}^* & \leq \phi_{({C}, {W}^f)}({C}_\mathrm{OPT}) \\ &= \sum_{c_i \in {C}} (\sum_{p \in S}\omega(p, c_i)) \min_{c \in {C}_\mathrm{OPT}}\|c_i - c\|^2 & \mbox{[$w_i^f=\sum_{p \in S}\omega(p, c_i)$]} \\ &= \sum_{c_i \in {C}} \sum_{p \in S}\min_{c \in {C}_\mathrm{OPT}}\omega(p, c_i) \|c_i - c\|^2 \\ & \leq \sum_{c_i \in {C}} \sum_{p \in S}\min_{c \in {C}_\mathrm{OPT}}\omega(p, c_i) \cdot 2(\|p-c_i\|^2 + \|p - c\|^2) & \mbox{[relaxed triangle inequality]}\\ & = 2\phi_\omega + 2\phi_\mathrm{OPT} \leq 322(1+\epsilon) \phi_\mathrm{OPT} \end{align*} \end{proof} Using these lemmas, we can now prove the final approximation guarantee. \begin{proof}[Proof of Theorem \ref{thm:coreset_kmeans}] By Lemma \ref{lem:close-weights}, we know $w'_i = (1 \pm 2\epsilon) w_i^f$ for any center $c_i$.
Let ${C}'_k$ be a set of $k$ centers for $({C}, {W}')$ that is a $\gamma$-approximation to the optimal weighted $k$-means solution. Let ${C}^f_\mathrm{OPT}$ be the \emph{optimal} $k$ centers for $({C}, {W}^f)$, and ${C}'_\mathrm{OPT}$ be optimal for $({C}, {W}')$. We have $\phi_{({C}, {W}^f)}({C}'_k) \leq (1+2\epsilon) \phi_{({C}, {W}')}({C}'_k)$, because the contribution of each point grows by at most a factor of $(1+2\epsilon)$ due to the weight approximation. By the same reasoning, $\phi_{({C}, {W}')}({C}^f_\mathrm{OPT}) \leq (1+2\epsilon) \phi_{{W}^f}^*$. Combining the two inequalities, we have \begin{align} \begin{aligned} \label{mainthm:eq1} \phi_{({C}, {W}^f)}({C}'_k) &\leq (1+2\epsilon)^2 \phi_{({C}, {W}')}({C}'_k) \leq (1+2\epsilon)^2\gamma \phi_{{W}'}^* \\ &\leq (1+2\epsilon)^2\gamma \phi_{({C}, {W}')}({C}^f_\mathrm{OPT}) &\mbox{[by optimality of $\phi_{{W}'}^*$]}\\ &\leq (1+2\epsilon)^3\gamma \phi_{{W}^f}^* \leq 322\gamma(1+2\epsilon)^4 \phi_\mathrm{OPT} &\mbox{[using Lemma \ref{lem:const_appx_adaptive}]} \end{aligned} \end{align} Let $\phi_{S}({C}'_k) = \sum_{p \in S}\min_{c \in {C}'_k}\norm{p - c}^2$. For every point $p \in S$, to bound its cost $\min_{c \in {C}'_k}\|p - c\|^2$, we use a relaxed triangle inequality for every center $c_i \in {C}$, and take the weighted average of these inequalities using the weights $\omega(p,c_i)$.
\begin{align*} \phi_{S}({C}'_k) &= \sum_{p \in S} \min_{c \in {C}'_k}\|p - c\|^2 \\ &= \sum_{p \in S} \sum_{c_i \in {C}} \omega(p, c_i) \min_{c \in {C}'_k}\|p - c\|^2 &\mbox{[$\sum_{c_i \in {C}} \omega(p, c_i) = 1$]} \\ &\leq \sum_{p \in S} \sum_{c_i \in {C}} \omega(p, c_i) \min_{c \in {C}'_k} 2(\|p - c_i\|^2 + \|c_i-c \|^2) & \mbox{[relaxed triangle inequality]}\\ & = 2\phi_\omega + 2\phi_{({C}, {W}^f)}({C}'_k) &\mbox{[$\sum_{p \in S}\omega(p, c_i) = w_i^f$]} \\ & \leq 2\phi_\omega + 2 \cdot 322\gamma(1+2\epsilon)^4 \phi_\mathrm{OPT} &\mbox{[inequality \eqref{mainthm:eq1}]} \\ & \leq 2 \cdot 160 (1+\epsilon)\phi_\mathrm{OPT} + 2 \cdot 322\gamma(1+2\epsilon)^4 \phi_\mathrm{OPT} & \mbox{[Lemma \ref{lem:bound-cost-by-alg}]} \\ & = O(\gamma)\phi_\mathrm{OPT} \end{align*} \end{proof} \section{Guiding Rules of Algorithm Design} Suppose we are given a data set $S$. The \emph{adaptive $k$-means++ clustering} algorithm proposed in \cite{aggarwal2009adaptive} follows this routine: \begin{itemize} \item Sample ${k'} = O(k)$ centers ${C} = \{ C_1, \ldots, C_{k'} \}$ according to the $k$-means++ sampling rule. \item Calculate the number of points in $A_S(C_i)$ for all centers $C_i$, and denote the set of weights by ${W} = \{w_1, \ldots, w_{k'} \}$. \item Weight every center $C_i$ by $w_i$, and solve the weighted $k$-means problem on the weighted input $({C}, {W})$. \end{itemize} We cite the following theorem from \cite{aggarwal2009adaptive} (Theorem 1 in that paper), which says the algorithm gives an $O(1)$-approximation of the optimal $k$-means objective value with constant probability. \begin{theorem}[\cite{aggarwal2009adaptive}] \label{thm:adaptive_approx} Denote the optimal $k$-means centers by ${C}_\mathrm{OPT}$, and let $\phi_\mathrm{OPT} = \phi_S({C}_\mathrm{OPT})$ be the optimal $k$-means objective value. Suppose ${k'} = \lfloor 16(k + \sqrt{k}) \rfloor$.
Let ${C} \subseteq S$ be the set of centers sampled by the above algorithm. Then $\phi_S({C}) \leq 20 \phi_\mathrm{OPT}$ with probability at least $0.03$. \end{theorem} \begin{algorithm} \caption{In-database adaptive $k$-means++}\label{alg:main_algo} \begin{algorithmic}[1] \Procedure{AdaptiveKMeans++}{$S, k, \epsilon$} \State ${k'} \gets \lfloor 16(k+\sqrt{k}) \rfloor$. \State ${C} \gets \emptyset$. \For{$i = 1,\ldots, {k'}$} \State $C_i \gets \text{Kmeans++Sample}(S,{C})$ \State ${C} \gets {C} \cup \{ C_i\}$ \EndFor \State ${W} \gets \{\text{EstimateWeights}(S, {C}, \epsilon)\}$ \State Return the optimal centers for weighted $k$-means on $({C}, {W})$. \EndProcedure \end{algorithmic} \end{algorithm} \section{Weighting the Centers} \label{sec:algoverview} Our algorithm samples a collection ${C}$ of $k'= \Theta(k\log{N})$ centers using the $k$-means++ sampling described in the prior section. We give weights to the centers to get a coreset. Ideally, we would compute the weights in the standard way. That is, let $w_i$ denote the number of points that are closest to point $c_i$ among all centers in ${C}$. These pairs of centers and weights $(c_i, w_i)$ are known to form a coreset. Unfortunately, as stated in Theorem \ref{thm:hardcount}, computing such $w_i$'s even approximately is NP-hard. Instead, we will find a different set of weights which still form a coreset and are computable. Next we describe a relational algorithm to compute a collection $W'$ of weights, one weight $w'_i \in W'$ for each center $c_i \in {C}$. The proof that the centers with these alternative weights $(c_i, w'_i)$ also form a coreset is deferred to the appendix. \medskip \noindent \textbf{Algorithm for Computing Alternative Weights:} Initialize the weight $w'_i$ for each center $c_i \in {C}$ to zero.
In the $d$-dimensional Euclidean space, for each center $c_i \in {C}$, we generate a collection of hyperspheres (also called \textbf{balls}) $\{B_{i,j}\}_{j \in [\lg N]}$, where $B_{i,j}$ contains approximately $2^j$ points from $J$. The space is then partitioned into $\{B_{i,0}, B_{i,1} - B_{i,0}, B_{i,2} - B_{i,1}, \ldots\}$. From each part, we sample a small number of points and use this sample to estimate the number of points in the part that are closer to $c_i$ than to any other center, adding this estimate to $w'_i$. Fix small constants $\epsilon, \delta > 0$. The following steps are repeated for $j \in [\lg N]$: \begin{itemize} \item Let $B_{i, j}$ be a ball of radius $r_{i,j}$ centered at $c_i$. Find an $r_{i, j}$ such that the number of points in $J \cap B_{i, j}$ lies in the range $[(1-\delta)2^j, (1+\delta) 2^j]$. This is an application of Lemma~\ref{lem:ball_computing}. \item Let $\tau$ be a constant that is at least $30$. A collection $T_{i,j}$ of $ \frac{\tau}{\epsilon^2}{k'}^2 \log^2 N$ ``test'' points is independently sampled with replacement from an \textbf{approximately uniform} distribution over each ball $B_{i,j}$. Here an ``approximately uniform'' distribution means one where every point $p$ in $B_{i,j}$ is sampled with probability $\gamma_{p,i,j} \in [(1-\delta)/|B_{i,j}|, (1+\delta)/|B_{i,j}|]$ on each draw. This can be accomplished efficiently using techniques similar to those of Lemma~\ref{lem:ball_computing} from \cite{abokhamis2020approximate}. Further elaboration is given in Appendix~\ref{sect:faqai}. \item Among all sampled points $T_{i,j}$, find $S_{i,j}$, the set of points that lie in the \textbf{``donut''} $D_{i,j} = B_{i,j} - B_{i, j-1}$, and compute its cardinality $s_{i,j} = |S_{i,j}|$. \item Find $t_{i,j}$, the number of points in $S_{i,j}$ that are closer to $c_i$ than to any other center in ${C}$.
\item Compute the ratio $f'_{i,j} = \frac{t_{i,j}}{s_{i,j}}$ (if $s_{i, j} = t_{i,j} = 0 $ then $f'_{i,j}= 0$). \item If $f'_{i,j} \geq \frac{1}{2{k'}^2 \log{N}}$ then $w'_i$ is incremented by $f'_{i,j} \cdot 2^{j-1}$; otherwise $w'_i$ stays the same. \end{itemize} At first glance, the algorithm may appear naive: $w_i'$ can be significantly underestimated if in some donuts only a small portion of the points are closest to $c_i$, making the sampling-based estimate inaccurate. However, in Section \ref{sec:weight_algo_analysis}, we prove the following theorem, which shows that the alternative weights computed by our algorithm do in fact form a coreset. \begin{theorem} \label{thm:coreset_kmeans} The centers $C$, along with the computed weights $W'$, form an $O(1)$-approximate coreset with high probability. \end{theorem} The running time of a naive implementation of this algorithm is dominated by the sampling of the test points. Sampling a single test point can be accomplished with $m$ applications of the algorithm from \cite{abokhamis2020approximate}, setting the approximation error to $\delta = \epsilon/m$. Recall the running time of the algorithm from \cite{abokhamis2020approximate} is $O\left( \frac{m^6 \log^4 n}{\delta^2} \Psi(n, d, m) \right)$. Thus, the time to sample all test points is $O\left( \frac{k'^2 m^9 \log^6 n}{\epsilon^4} \Psi(n, d, m) \right)$. Substituting for $k'$, and noting that $N \le n^m$, we obtain a total time for a naive implementation of $O\left( \frac{k^2 m^{11} \log^8 n}{\epsilon^4} \Psi(n, d, m) \right)$. \section{Background Information About Database Concepts} \label{sect:dbbackground} Given a tuple $x$, define $\Pi_{F}(x)$ to be the projection of $x$ onto the set of features $F$, meaning $\Pi_{F}(x)$ is the tuple formed by keeping the entries in $x$ that correspond to the features in $F$. For example, let $T$ be a table with columns $(A,B,C)$ and let $x = (1,2,3)$ be a tuple of $T$; then $\Pi_{\{A,C\}}(x) = (1,3)$.
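To make the projection notation concrete, the following minimal Python sketch (illustrative only; tuples are modeled as dictionaries keyed by column name) implements $\Pi_F(x)$ together with the natural join it supports:

```python
def project(x, F):
    """Pi_F(x): keep only the entries of tuple x for the features in F."""
    return {f: x[f] for f in F}

def join(T1, T2):
    """Natural join of two tables: a combined tuple is in the join iff its
    projections onto each table's columns appear in that table."""
    out = []
    for a in T1:
        for b in T2:
            shared = set(a) & set(b)
            if all(a[f] == b[f] for f in shared):
                out.append({**a, **b})
    return out

x = {"A": 1, "B": 2, "C": 3}
assert project(x, ["A", "C"]) == {"A": 1, "C": 3}
assert join([{"A": 1, "B": 2}], [{"B": 2, "C": 5}]) == [{"A": 1, "B": 2, "C": 5}]
```

This nested-loop join is quadratic per table pair and is only meant to pin down the semantics; efficient evaluation is the subject of the results below.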
\begin{definition}[Join] Let $T_1,\dots, T_m$ be a set of tables with corresponding sets of columns/features $F_1,\dots,F_m$. We define their join $J=T_1 \Join \dots \Join T_m$ as the table whose set of columns is $\bigcup_i F_i$, where $x\in J$ if and only if $\Pi_{F_i}(x) \in T_i$ for all $i$. \end{definition} Note that the above definition of join is consistent with the definition given in Section \ref{section:intro}, but offers more intuition about what the join operation means geometrically. \begin{definition}[Join Hypergraph] Given a join $J=T_1 \Join \dots \Join T_m$, the hypergraph associated with the join is $H=(V,E)$, where $V$ contains a vertex $v_i$ for every column $a_i$ of $J$, and for every table $T_i$ there is a hyperedge $e_i$ in $E$ consisting of the vertices associated with the columns of $T_i$. \end{definition} \begin{theorem}[AGM Bound \cite{atserias2008size}] \label{appendix:agm} Given a join $J=T_1 \Join \dots \Join T_m$ with $d$ columns and its associated hypergraph $H=(V,E)$, let $C$ be a subset of $\mathrm{col}(J)$, and let $X = (x_1, \dots, x_m)$ be any feasible solution to the following linear program: \begin{alignat*}{3} & \text{minimize} \quad & \sum_{j=1}^{m} \log(|T_j|)x_{j} & \\ & \text{subject to} \quad & \sum_{\mathclap{{j:v \in e_{j}}}}x_{j} & \geq 1, \quad & v &\in C\\ && 0 \leq x_{j} & \leq 1, & j &= 1, \dots, m \end{alignat*} Then $\prod_i |T_i|^{x_i}$ is an upper bound on the cardinality of $\Pi_C(J)$; this upper bound is tight if $X$ is the optimal solution. \end{theorem} We give another definition of \emph{acyclicity}, which is consistent with the definition in the main body. \begin{definition}[Acyclic Join] We call a join query (or a relational database schema) \textbf{acyclic} if one can repeatedly apply one of the following two operations and convert the set of tables to the empty set: \begin{enumerate} \item Remove a column that appears in only one table.
\item Remove a table whose columns are fully contained in another table. \end{enumerate} \end{definition} \begin{definition}[Hypertree Decomposition] Let $H=(V,E)$ be a hypergraph and $T=(V',E')$ be a tree with a subset of $V$ associated with each vertex $v' \in V'$, called the \textbf{bag} of $v'$ and denoted by $b(v') \subseteq V$. $T$ is called a \textbf{hypertree decomposition} of $H$ if the following hold: \begin{enumerate} \item For each hyperedge $e \in E$ there exists $v' \in V'$ such that $e \subseteq b(v')$. \item For each vertex $v \in V$, the set of vertices in $V'$ that have $v$ in their bag is connected in $T$. \end{enumerate} \end{definition} \begin{definition} Let $H=(V,E)$ be a join hypergraph and $T=(V',E')$ be its hypertree decomposition. For each $v' \in V'$, let $X^{v'} = (x_1^{v'},x_2^{v'},\dots, x_m^{v'})$ be the optimal solution to the following linear program: $\texttt{min} \sum_{j=1}^{m} x_{j}$, subject to $\sum_{j:v_{i} \in e_{j} }x_{j} \geq 1$ for all $v_i \in b(v')$, where $0 \leq x_{j} \leq 1$ for each $j \in [m]$. Then the \textbf{width of $v'$}, denoted by $w(v')$, is $\sum_i x^{v'}_i$, and the \textbf{fractional width of $T$} is $\max_{v' \in V'} w(v')$. \end{definition} \begin{definition}[fhtw] Given a join hypergraph $H=(V,E)$, the \textbf{fractional hypertree width of $H$}, denoted by fhtw, is the minimum fractional width over all possible hypertree decompositions of $H$. \end{definition} \begin{observation} The fractional hypertree width of an \allowbreak acyclic join is $1$, and each bag in its hypertree decomposition is a subset of the columns in some input table.
\end{observation} \begin{theorem}[Inside-out \cite{khamis2016joins}] \label{thm:insideout} There exists an algorithm to evaluate a SumProd query in time $O(T md^2 n^{\mathrm{fhtw}} \log(n))$, where $\mathrm{fhtw}$ is the fractional hypertree width of the query and $T$ is the time needed to evaluate $\oplus$ and $\otimes$ on two operands. The same algorithm with the same time complexity can be used to evaluate SumProd queries grouped by one of the input tables. \end{theorem} \begin{theorem} \label{thm:sumsum:query} Let $Q_f$ be a function from the domain of column $f$ in $J$ to $\mathbb{R}$, and let $G$ be a vector that has a row for each tuple $r\in T_i$. Then the query \begin{align*} \sum_{X\in J}\sum_{f} Q_f(x_f) \end{align*} can be converted to a SumProd query, and the query returning $G$ with definition \begin{align*} G_r = \sum_{X\in Y_i \Join J} \sum_{f} Q_f(x_f) \end{align*} can be converted to a SumProd query grouped by $T_i$. \end{theorem} \begin{proof} Let $S = \{(a,b) \:\vert\: a, b \in \mathbb{R}\}$, and for any two pairs $(a,b),(c,d) \in S$ define: \begin{align*} (a,b) \oplus (c,d) = (a+c,b+d) \end{align*} and \begin{align*} (a,b) \otimes (c,d) = (ad+cb, bd). \end{align*} Then the theorem can be proven using the following two claims: \begin{enumerate} \item $(S,\oplus,\otimes)$ forms a commutative semiring with identity zero $I_0 = (0,0)$ and identity one $I_1 = (0,1)$. \item The query $\oplus_{X \in J} \otimes_{f} (Q_f(x_f),1)$ is a SumProd FAQ where the first entry of the result is $\sum_{X\in J}\sum_{f} Q_f(x_f)$ and the second entry is the number of rows in $J$. \end{enumerate} Proof of the first claim: Since arithmetic summation is commutative and associative, it is easy to see that $\oplus$ is also commutative and associative. Furthermore, based on the definition of $\oplus$, we have $(a,b) \oplus I_0 = (a+0,b+0) = (a,b)$.
The operator $\otimes$ is also commutative since arithmetic multiplication is commutative. The associativity of $\otimes$ can be proved by \begin{align*} (a_1,b_1) \otimes ((a_2,b_2) \otimes (a_3,b_3)) &= (a_1,b_1) \otimes (a_2 b_3 + a_3 b_2, b_2 b_3) \\ &= (a_1 b_2 b_3 + b_1 a_2 b_3 + b_1 b_2 a_3, b_1 b_2 b_3) \\ &= (a_1 b_2 + b_1 a_2, b_1 b_2) \otimes (a_3,b_3) \\&= ((a_1,b_1) \otimes (a_2,b_2)) \otimes (a_3,b_3) \end{align*} Also note that based on the definition of $\otimes$, $(a,b) \otimes I_0 = I_0$ and $(a,b) \otimes I_1 = (a,b)$. The only remaining property that we need to prove is the distributivity of $\otimes$ over $\oplus$: \begin{align*} (a,b) \otimes ((c_1,d_1) \oplus (c_2,d_2)) &= (a,b) \otimes (c_1+c_2,d_1 + d_2) \\ &= (c_1 b + c_2 b + a d_1 + a d_2, bd_1 + bd_2) \\ &= (c_1 b + a d_1, bd_1) \oplus (c_2 b + a d_2, bd_2) \\ &= ((a,b) \otimes (c_1,d_1)) \oplus ((a,b) \otimes (c_2,d_2)) \end{align*} Proof of the second claim: Since we have already shown the semiring properties of $(S, \oplus, \otimes)$, we only need to derive the result of $\oplus_{X \in J} \otimes_{f} (Q_f(x_f),1)$. We have $\otimes_{f} (Q_f(x_f),1) = (\sum_f Q_f(x_f), 1)$, therefore \begin{align*} \oplus_{X \in J} \otimes_{f} (Q_f(x_f),1) = \oplus_{X \in J} (\sum_f Q_f(x_f), 1) = (\sum_{X\in J}\sum_{f} Q_f(x_f), \sum_{X \in J} 1) \end{align*} where the first entry is the result of the SumSum query and the second entry is the number of rows in $J$. \end{proof} \section{Relational Implementation of 3-means++} \label{section:3means} Recall that the 3-means++ algorithm picks a point $x$ to be the third center $c_3$ with probability $P(x) = \frac{L(x)}{Y}$ where $L(x) = \min( \norm{x-c_1}_2^2, \norm{x - c_2}_2^2)$ and $Y = \sum_{x \in J} L(x)$ is a normalizing constant. Conceptually, think of $P$ as being a ``hard'' distribution to sample from. 
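The pair semiring $(S,\oplus,\otimes)$ used in the proof of Theorem~\ref{thm:sumsum:query} above is easy to check numerically. The following is a minimal Python sketch on toy data; it is illustrative only, not part of the paper's implementation:

```python
# Sketch of the pair semiring: each element is (weighted sum, multiplicity).
# Toy data only; Q_f is taken to be the identity map on each column.

def oplus(p, q):
    # (a,b) + (c,d) = (a+c, b+d): coordinate-wise addition
    return (p[0] + q[0], p[1] + q[1])

def otimes(p, q):
    # (a,b) x (c,d) = (ad+cb, bd): sums are weighted by counts
    return (p[0] * q[1] + q[0] * p[1], p[1] * q[1])

I0 = (0, 0)  # identity zero
I1 = (0, 1)  # identity one

# A toy join J with two features per row.
J = [(1, 10), (2, 20), (3, 30)]

result = I0
for row in J:
    term = I1
    for value in row:
        term = otimes(term, (value, 1))
    result = oplus(result, term)
# result[0] = sum over rows of the sum over features; result[1] = |J|
```

Running it, `result` is `(66, 3)`: the SumSum value and the row count, exactly as the second claim states.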
\textbf{Description of the Implementation:} The implementation first constructs two identically-sized axis-parallel hypercubes/boxes $b_1$ and $b_2$ centered around $c_1$ and $c_2$ that are \textbf{as large as possible} subject to the constraints that the side lengths must be non-negative integral powers of $2$, and that $b_1$ and $b_2$ cannot intersect. Such side lengths exist because we may assume that $c_1$ and $c_2$ have integer coordinates and, after scaling, are sufficiently far away from each other. Conceptually, the implementation also considers a box $b_3$ that is the whole Euclidean space. \begin{figure}[h] \centering \includegraphics[scale=0.3]{boxes.PNG} \caption{Boxes used for sampling the third center} \label{fig:third:center} \end{figure} To define our ``easy'' distribution $Q$, for each point $x$ define $R(x)$ to be \begin{align*} R(x) = \begin{cases} \norm{x-c_1}_2^2 & x\in b_1 \\ \norm{x-c_2}_2^2 & x \in b_2 \\ \norm{x-c_1}_2^2 & x \in b_3 \text{ and } x\notin b_1 \text{ and } x\notin b_2 \end{cases} \end{align*} In the above definition, note that when $x\notin b_1 \text{ and } x\notin b_2$, the distances of $x$ to the two centers are comparable; therefore, we can assign $x$ to either of the centers -- here we have assigned it to $c_1$. Then $Q(x)$ is defined to be $\frac{R(x)}{Z}$, where $Z = \sum_{x \in J} R(x)$ is a normalizing constant. The implementation then repeatedly samples a point $x$ with probability $Q(x)$. After sampling $x$, the implementation can either (A) reject $x$ and resample, or (B) accept $x$, which means setting the third center $c_3$ to be $x$. The probability that $x$ is accepted after it is sampled is $\frac{ L(x)}{R(x)}$, and thus the probability that $x$ is rejected is $1-\frac{L(x)}{R(x)}$. 
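The accept/reject step can be verified numerically. The sketch below uses made-up weights (assuming only $R(x) \geq L(x)$, as guaranteed by the construction) and checks that accepting a $Q$-sample with probability $L(x)/R(x)$ yields the target distribution $P$:

```python
# Toy numeric check that rejection sampling with acceptance probability
# L(x)/R(x) recovers P(x) = L(x)/Y. All values are made up.

L = {"x1": 1.0, "x2": 4.0, "x3": 5.0}    # "hard" weights (min squared distances)
R = {"x1": 2.0, "x2": 4.0, "x3": 10.0}   # "easy" upper bounds, R(x) >= L(x)
Y = sum(L.values())
Z = sum(R.values())

Q = {x: R[x] / Z for x in R}             # easy proposal distribution
accept = {x: L[x] / R[x] for x in R}     # acceptance probabilities

# Probability that x is sampled AND accepted in one round: Q(x)*L(x)/R(x) = L(x)/Z.
round_prob = {x: Q[x] * accept[x] for x in R}
success = sum(round_prob.values())        # per-round success probability = Y/Z

# Conditioning on acceptance gives exactly the target distribution P.
P = {x: round_prob[x] / success for x in R}
expected_rounds = 1.0 / success           # mean of the geometric distribution
```

Here `P["x2"]` equals $L(x_2)/Y = 0.4$, and the expected number of rounds is $Z/Y$.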
It is straightforward to see how to compute $b_1$ and $b_2$ (note that $b_1$ and $b_2$ can be computed without any relational operations), and how to compute $L(x)$ and $R(x)$ for a particular point $x$. Thus, the only non-straightforward part is sampling a point $x$ with probability $Q(x)$, which we explain now: \begin{itemize} \item The implementation uses a SumProd query to compute the aggregate 2-norm squared distance from $c_1$ constrained to points in $b_3$ (all the points) and grouped by table $T_1$ using Lemma \ref{lem:box_computing}. Let the resulting vector be $C$. So $C_r$ is the aggregate 2-norm squared distance from $c_1$ of all rows in the design matrix that are extensions of row $r$ in $T_1$. \item Then the implementation uses a SumProd query to compute the aggregate 2-norm squared distance from $c_2$, constrained to points in $b_2$, and grouped by $T_1$. Let the resulting vector be $D$. Notice that an axis-parallel box constraint can be expressed as a collection of axis-parallel hyperplane constraints, and for every axis-parallel constraint it is easy to remove the points not satisfying it from the join by filtering one of the input tables having that dimension/feature. The SumProd query is then the same as the one in the previous step. \item Then the implementation uses a SumProd query to compute the aggregate 2-norm squared distance from $c_1$, constrained to points in $b_2$, and grouped by $T_1$. Let the resulting vector be $E$. \item Then pick a row $r$ of $T_1$ with probability proportional to $C_r - E_r + D_r$. \item The implementation then replaces $T_1$ by a table consisting only of the picked row $r$. \item The implementation then repeats this process on table $T_2$, then table $T_3$, etc. \item At the end $J$ will consist of one point/row $x$, where the probability that a particular point $x$ ends up as this final row is $Q(x)$. 
To see this, note that in the iteration performed for $T_i$, $C-E$ is the aggregate 2-norm squared distance to $c_1$ for all points not in $b_2$ grouped by $T_i$, and $D$ is the aggregate squared distance of the points in $b_2$ to $c_2$ grouped by $T_i$. \end{itemize} We now claim that this implementation guarantees that $c_3=x$ with probability $P(x)$. We can see this using the standard rejection sampling calculation. At each iteration of sampling from $Q$, let $S(x)$ be the event that point $x$ is sampled and $A(x)$ be the event that $x$ is accepted. Then, \begin{align*} {\mathbf{Pr}}[S(x) \text{ and } A(x)] &= {\mathbf{Pr}}[A(x) \mid S(x)] \cdot {\mathbf{Pr}}[S(x)] = \frac{L(x)}{R(x)} Q(x) = \frac{L(x)}{Z} \end{align*} Thus $x$ is accepted with probability proportional to $L(x)$, as desired. As the number of times that the implementation has to sample from $Q$ is geometrically distributed, the expected number of times that it will have to sample is the inverse of the per-round success probability, which is $Z/Y \leq \max_x \frac{R(x)}{L(x)}$. It is not too difficult to see (we prove it formally in Lemma \ref{lemma:box:boundary}) that $\max_x \frac{R(x)}{L(x)} = O(d)$. It takes $3m$ SumProd queries to sample from $Q$. Therefore, the expected running time of our implementation of 3-means++ is $O(md \Psi(n, d, m))$. \section{Pseudo-code} \label{sect:pseudo-code} This section presents, in pseudo-code, the algorithms described in Section \ref{sec:kmeans++}. 
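The table-by-table sampling idea from Section~\ref{section:3means} can also be sketched in a few lines of Python. The toy tables, the weight function, and the helper \texttt{join\_rows} below are illustrative assumptions only; the actual implementation replaces the explicit sums by SumProd queries and never materializes the join:

```python
import random

# Toy sketch: sample one row of T1 |><| T2 with probability proportional to
# its squared distance from a center c, one table at a time.

T1 = [(1, 1), (2, 1), (3, 2)]      # columns (f1, f2)
T2 = [(1, 1), (1, 2), (2, 3)]      # columns (f2, f3)
c = (0.0, 0.0, 0.0)                # hypothetical center

def weight(point):
    return sum((a - b) ** 2 for a, b in zip(point, c))

def join_rows(rows1, rows2):
    # natural join on the shared column f2
    return [(r1[0], r1[1], r2[1]) for r1 in rows1 for r2 in rows2 if r1[1] == r2[0]]

rng = random.Random(0)

# Step 1: total weight grouped by rows of T1 (a SumProd query in the paper),
# then sample a row of T1 proportionally.
H1 = [sum(weight(x) for x in join_rows([r], T2)) for r in T1]
r1 = rng.choices(T1, weights=H1, k=1)[0]

# Step 2: restrict to the sampled row of T1 and repeat for T2.
H2 = [sum(weight(x) for x in join_rows([r1], [r])) for r in T2]
r2 = rng.choices(T2, weights=H2, k=1)[0]

sample = (r1[0], r1[1], r2[1])     # the sampled point of the join
```

Rows of $T_2$ that do not join with the sampled row receive weight $0$, so the final concatenation is always a valid row of the join.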
\begin{algorithm}[H] \caption{Algorithm for creating axis-parallel hyperrectangles} \label{alg:finding:boxes} \begin{algorithmic}[1] \Procedure{Construct Boxes}{${C}_{i-1}$} \State \textbf{Input:} Current centers ${C}_{i-1} = \{c_1, \dots, c_{i-1}\}$ \State \textbf{Output:} ${\mathcal B}_i$, a set of boxes and their centers \State ${\mathcal B}_i \gets \emptyset$ \State ${\mathcal G}_i \gets \{(b^*_j,c_j) \:\vert\: b^*_j \text{ is a unit size hyper-cube around }c_j, j \in [i-1]\}$ \Comment{We assume there is no intersection between the boxes in ${\mathcal G}_i$ initially, up to scaling} \While{$|{\mathcal G}_i| > 1$} \State Double all the boxes in ${\mathcal G}_i$. \State ${\mathcal G}'_i = \emptyset$ \Comment{Keeps the boxes created in this iteration of doubling} \While{$\exists (b_1,y_1) , (b_2,y_2) \in {\mathcal G}_i$ that intersect with each other} \State $b \gets $ the smallest box in Euclidean space containing both $b_1$ and $b_2$. \State ${\mathcal G}_i \gets ({\mathcal G}_i \setminus \{(b_1,y_1),(b_2,y_2)\}) \cup \{(b,y_1)\}$ \State ${\mathcal G}'_i \gets {\mathcal G}'_i \cup \{(b,y_1)\}$ \If{$(b_1,y_1)\notin {\mathcal G}'_i$} \Comment{Check if box $b_1$ hasn't been merged with other boxes in the current round} \State $b'_1 \gets$ halved $b_1$, add $(b'_1, y_1)$ to ${\mathcal B}_i$ \EndIf \If{$(b_2,y_2)\notin {\mathcal G}'_i$} \Comment{Check if box $b_2$ hasn't been merged with other boxes in the current round} \State $b'_2 \gets$ halved $b_2$, add $(b'_2,y_2)$ to ${\mathcal B}_i$ \EndIf \EndWhile \EndWhile \State There is only one box and its representative remaining in ${\mathcal G}_i$; replace this box with the whole Euclidean space. \State ${\mathcal B}_i \gets {\mathcal B}_i \cup {\mathcal G}_i$. \State Return ${\mathcal B}_i$. 
\EndProcedure \end{algorithmic} \end{algorithm} \section{Omitted Proofs} \label{section:ommited:proofs} \subsection*{NP-Hardness of Approximating Cluster Size} \begin{proofof}[Theorem \ref{thm:hardcount}] We proved the \#P-hardness in the main body. Here we prove the second part of Theorem \ref{thm:hardcount}: given an acyclic database and a set of centers $\{c_1,\dots,c_k\}$, it is NP-Hard to approximate the number of points assigned to each center when $k\geq 3$. We prove it by a reduction from Subset Sum. In the Subset Sum problem, the input is a set of integers $A = \{w_1,\dots,w_m\}$ and an integer $L$; the output is true if there is a subset of $A$ whose sum is $L$. We create the following acyclic schema. There are $m$ tables. Each table $T_i$ has a single unique column $x_i$ with two rows $\{w_i,0\}$. Then the join of the tables has $2^m$ rows, and it is the cross product of the rows in different tables, in which each row represents one subset of $A$. Then consider the following three centers: $c_1 = (\frac{L-1}{m},\frac{L-1}{m},\dots,\frac{L-1}{m})$, $c_2 = (\frac{L}{m},\dots,\frac{L}{m})$, and $c_3 = (\frac{L+1}{m},\frac{L+1}{m},\dots,\frac{L+1}{m})$. The Voronoi diagram that separates the points assigned to each of these centers consists of two parallel hyperplanes: $\sum_i x_i = L-1/2$ and $\sum_i x_i = L+1/2$, where the points between the two hyperplanes are the points assigned to $c_2$. Since all the points in the design matrix have integer coordinates, the only points between these two hyperplanes are those for which $\sum_i x_i = L$. Therefore, any approximation of the number of points assigned to $c_2$ is non-zero if and only if the answer to Subset Sum is True. 
\end{proofof} \begin{algorithm}[H] \caption{Algorithm for sampling the next center} \label{alg:sampling1} \begin{algorithmic}[1] \Procedure{KMeans++Sample}{${C}_{i-1}, T_1, \dots, T_m$} \State Let $p(b)$ be the box that is the parent of $b$ in the tree structure of all boxes in ${\mathcal B}_i$. \State $c_i \gets \emptyset$ \State ${\mathcal B}_i \gets \textsc{Construct Boxes}({C}_{i-1})$ \State Let $(b_0,y_0)$ be the tuple where $b_0$ is the entire Euclidean space in ${\mathcal B}_i$. \While{$c_i = \emptyset$} \For{$1 \leq \ell \leq m$} \Comment{Sample one row from each table.} \State Let $H$ be a vector having an entry $H_r$ for each $r \in T_\ell$. \State $J' \gets r_1 \Join \ldots \Join r_{\ell-1} \Join J$. \Comment{Focus on only the rows in $J$ that use all previously sampled rows from $T_1, \ldots, T_{\ell-1}$ in the concatenation.} \State $\forall r \in T_\ell$ evaluate $H_r \gets \sum_{x \in r \Join J' \cap b_0} \norm{x-y_0}_2^2$ \For{$(b,y) \in {\mathcal B}_i \setminus \{(b_0, y_0)\}$} \State Let $(b',y') \in {\mathcal B}_i$ be the tuple where $b'=p(b)$. \State $\forall r \in T_\ell$ use a SumProd query to evaluate two values: $\sum_{x \in r \Join J' \cap b}\norm{x-y}_2^2$ and $\sum_{x \in r \Join J' \cap b}\norm{x-y'}_2^2$. \State $H_r \gets H_r - \sum_{x \in r \Join J' \cap b} \norm{x-y'}_2^2 + \sum_{x \in r \Join J' \cap b} \norm{x-y}_2^2$ \EndFor \State Sample a row $r_\ell \in T_\ell$ with probability proportional to $H_{r_\ell}$. \EndFor \State $x \gets r_1 \Join \dots \Join r_m$. \State Let $(b^*, y^*)$ be the tuple where $b^*$ is the smallest box in ${\mathcal B}_i$ containing $x$. \State $c_i \gets x$ with probability $\frac{\min_{c \in {C}_{i-1}}\|x- c\|_2^2}{\norm{x - y^*}_2^2}$. \Comment{Rejection sampling.} \EndWhile \State \textbf{return} $c_i$. 
\EndProcedure \end{algorithmic} \end{algorithm} \section{Uniform Sampling From a Hypersphere} \label{sect:faqai} In order to uniformly sample a point from inside a ball, it is enough to show how to count the number of points located inside the ball grouped by a table $T_i$: if we can count the number of points grouped by input tables, then we can use a technique similar to the one used in Section \ref{sec:kmeans++} to sample. Unfortunately, as we discussed in Section \ref{subsec:warmup}, it is $\#P$-Hard to count the number of points inside a ball; however, it is possible to obtain a $1\pm \delta$ approximation of the number of points \cite{abokhamis2020approximate}. Below we briefly explain the algorithm in \cite{abokhamis2020approximate} for counting the number of points inside a hypersphere. Given a center $c$ and a radius $R$, the goal is to approximate the number of tuples $x\in J$ for which $\sum_i (c^i-x^i)^2 \leq R^2$. Consider the set $S$ containing all the multisets of real numbers. We denote a multiset $A$ by a set of pairs $(v,f_A(v))$ where $v$ is a real value and $f_A(v)$ is the frequency of $v$ in $A$. For example, $A=\{(2.3,10),(3.5,1)\}$ is a multiset that has $10$ members with value $2.3$ and $1$ member with value $3.5$. Then, let $\oplus$ be the summation operator, meaning $C = A \oplus B$ if and only if for all $x \in \mathbb{R}$, $f_C(x) = f_A(x) + f_B(x)$, and let $\otimes$ be the convolution operator such that $C = A \otimes B$ if and only if $f_C(x) = \sum_{i\in \mathbb{R}} f_A(i) \cdot f_B(x-i)$. Then the claim is that $(S,\oplus,\otimes)$ is a commutative semiring and the following SumProd query returns a multiset that has all the squared distances of the points in $J$ from $c$: \begin{align*} \bigoplus_{x\in J}\bigotimes_{i}\{((x^i-c^i)^2, 1)\} \end{align*} Using the resulting multiset, it is possible to count exactly the number of tuples $x\in J$ for which $\norm{x-c}_2^2\leq R^2$. However, the size of the result can be as large as $\Omega(|J|)$. 
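A toy Python sketch of this multiset semiring may clarify the construction. Multisets are represented as \texttt{Counter} dictionaries, $\oplus$ adds frequencies, and $\otimes$ convolves; for a cross-product join the per-coordinate contributions factor through $\otimes$. The data below is hypothetical:

```python
from collections import Counter

# Multiset semiring sketch: keys are values, entries are frequencies.

def oplus(A, B):
    # frequency-wise sum of two multisets
    C = Counter(A)
    C.update(B)
    return C

def otimes(A, B):
    # convolution: f_C(x) = sum_i f_A(i) * f_B(x - i)
    C = Counter()
    for a, fa in A.items():
        for b, fb in B.items():
            C[a + b] += fa * fb
    return C

# Points of a 2-d cross-product join T1 x T2 and a center c.
T1, T2, c = [0, 1], [0, 2], (0, 0)

# Per-coordinate squared contributions (x_i - c_i)^2, combined with otimes.
dist_multiset = otimes(
    Counter((v - c[0]) ** 2 for v in T1),
    Counter((w - c[1]) ** 2 for w in T2),
)

# Exact count of points within radius R of c:
R2 = 1.0
within = sum(f for d, f in dist_multiset.items() if d <= R2)
```

The four points $(0,0), (1,0), (0,2), (1,2)$ have squared distances $0, 1, 4, 5$, so `within` is $2$.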
In order to make the size of the partial results and the time complexity of the $\oplus$ and $\otimes$ operators polynomial, the algorithm uses $(1 + \delta)$ geometric bucketing. The algorithm returns an array whose $j$-th entry is the smallest value $r$ for which there are at least $(1+\delta)^j$ tuples $x\in J$ satisfying $\norm{x-c}_2^2 \leq r^2$. The query can also be executed grouped by one of the input tables. Therefore, using this polynomial approximation scheme, we can compute the conditional marginal probability distribution up to a multiplicative factor of $(1 \pm \delta)$. Thus, using $m$ queries, it is possible to sample a tuple from a ball with probability distribution $\frac{1}{n}(1\pm m \delta)$ where $n$ is the number of points inside the ball. In order to get a sample with probability $\frac{1}{n}(1\pm \epsilon)$, all we need is to set $\delta = \epsilon/m$; hence, using \cite{abokhamis2020approximate}, the time complexity for sampling each tuple is $O\Big(\frac{m^9\log^4(n)}{\epsilon^2}\Psi(n,d,m)\Big)$. \section{Hardness of Lloyd's Algorithm} \label{sect:Lloyds} After choosing $k$ initial centers, a type of local search algorithm, called Lloyd's algorithm, is commonly used to iteratively find better centers. After associating each point with its closest center, Lloyd's algorithm updates the position of each center to the center of mass of its associated points. That is, if $X_c$ is the set of points assigned to $c$, its location is updated to $\frac{\sum_{x\in X_c} x}{|X_c|}$. While this can be done easily when the data is given explicitly, we show in the following theorem that finding the center of mass for the points assigned to a center is $\#$P-hard when the data is relational, even in the special case of an acyclic join and two centers. \begin{theorem} Given an acyclic join, and two centers, it is $\#$P-hard to compute the center of mass for the points assigned to each center. 
\end{theorem} \begin{proof} We prove this by a reduction from a decision version of the counting knapsack problem. The input to the counting knapsack problem consists of a set $W= \{w_1,\dots, w_n\}$ of positive integer weights, a knapsack size $L$, and a count $D$. The problem is to determine whether there are at least $D$ subsets of $W$ with aggregate weight at most $L$. The points in our instance of $k$-means will be given relationally. We construct a join query with $n+1$ columns/attributes, and $n$ tables. All the tables have one column in common and one distinct column. The $i$-th table has $2$ columns $(d_i, d_{n+1})$ and three rows $\{(w_i,-1), (0,-1), (0, D)\}$. Note that the join has $2^n$ rows with $-1$ in dimension $n+1$, and one row with values $(0,0,\dots,0,D)$. The rows with $-1$ in dimension $n+1$ have all the subsets of $\{w_1,\dots, w_n\}$ in their first $n$ dimensions. Let the two centers for the $k$-means problem be any two centers $c_1$ and $c_2$ such that a point $x$ is closer to $c_1$ if it satisfies $\sum_{d=1}^n x_d \leq L$ and closer to $c_2$ if it satisfies $\sum_{d=1}^n x_d \geq L+1$. Note that the row $(0,0,\dots,0,D)$ is closer to $c_1$. Therefore, the value of dimension $n+1$ of the center of mass for the tuples that are closer to $c_1$ is $Y= (D-C)/(C+1)$ where $C$ is the actual number of subsets of $W$ with aggregate weight at most $L$. If $Y$ is non-positive, then the number of solutions to the counting knapsack instance is at least $D$. \end{proof} \section{Introduction} \label{section:intro} Kaggle surveys~\cite{KaggleSurvey} show that the majority of learning tasks faced by data scientists involve \emph{relational data}. Conventional formats usually represent data with multi-dimensional points where each dimension corresponds to a feature of the data. In contrast, a \textbf{relational database} consists of tables $T_1, T_2, \ldots, T_m$, and the features may be spread across the tables. 
The columns in each table are a subset of the features\footnote{In the relational database context the columns are also referred to as \emph{attributes}, but here we call them features, following the convention of the broader community.} and the rows are data records for these features. The underlying data is represented by the \textbf{design matrix} $J=T_1 \Join \dots \Join T_m$ where each row in $J$ can be interpreted as a data point. Here the \textbf{join} ($\Join$) is a binary operator on two tables $T_i$ and $T_j$. The result of the join is the set of all possible concatenations of two rows from $T_i$ and $T_j$ that are equal in their common columns/features. If $T_i$ and $T_j$ have no common columns, their join is the cross product of all rows. See Table \ref{table:join} for an example of the join operation on two tables. Almost all learning tasks are designed for data in matrix format. The current standard practice for a data scientist is the following. \begin{table}[H] \begin{mdframed} \noindent \textbf{Standard Practice:} \begin{enumerate} \item Extract the data points from the relational database by taking the join of all tables to find the design matrix $J=T_1 \Join \dots \Join T_m$. \item Then interpret each row of $J$ as a point in a Euclidean space and the columns as the dimensions, corresponding to the features of the data. \item Import this design matrix $J$ into a standard algorithm. 
\end{enumerate} \end{mdframed} \end{table} \begin{table}[t] \centering \begin{tabular}{|c|c|} \hline \rowcolor[HTML]{FFFFC7} \multicolumn{2}{|c|}{\cellcolor[HTML]{FFFFC7}$T_1$} \\ \hline \rowcolor[HTML]{FFFFC7} $f_1$ & $f_2$ \\ \hline 1 & 1 \\ \hline 2 & 1 \\ \hline 3 & 2 \\ \hline 4 & 3 \\ \hline 5 & 4 \\ \hline \end{tabular} \quad \begin{tabular}{|c|c|} \hline \rowcolor[HTML]{FFFFC7} \multicolumn{2}{|c|}{\cellcolor[HTML]{FFFFC7}$T_2$} \\ \hline \rowcolor[HTML]{FFFFC7} $f_2$ & $f_3$ \\ \hline 1 & 1 \\ \hline 1 & 2 \\ \hline 2 & 3 \\ \hline 5 & 4 \\ \hline 5 & 5 \\ \hline \end{tabular} \quad \begin{tabular}{|c|c|c|} \hline \rowcolor[HTML]{FFFFC7} \multicolumn{3}{|c|}{\cellcolor[HTML]{FFFFC7}$T_1 \Join T_2$} \\ \hline \rowcolor[HTML]{FFFFC7} $f_1$ & $f_2$ & $f_3$ \\ \hline 1 & 1 & 1 \\ \hline 1 & 1 & 2 \\ \hline 2 & 1 & 1 \\ \hline 2 & 1 & 2 \\ \hline 3 & 2 & 3 \\ \hline \end{tabular} \caption{A join of tables $T_1$ and $T_2$. Each has $5$ rows and $2$ features, sharing $f_2$. The join has all features from both tables. The rows with $f_2=x$ in the join are the cross product of all rows with $f_2=x$ from $T_1$ and $T_2$. For example, for $f_2 = 1$, the four rows in $T_1 \Join T_2$ have $(f_1, f_3)$ values $\{(1, 1), (1, 2), (2, 1), (2, 2)\}$; this is the cross product of $f_1 \in \{1, 2\}$ from $T_1$ and $f_3 \in \{1, 2\}$ from $T_2$.} \label{table:join} \vspace{-.5cm} \end{table} A relational database is a highly compact data representation format. The size of $J$ can be exponentially larger than the input size of the relational database \cite{atserias2008size}, so extracting $J$ makes the standard practice inefficient. Theoretically, there is the potential for exponential speed-up by running algorithms \emph{directly} on the tables of the relational data. We call such algorithms \textbf{relational algorithms} if their running time is polynomial in the size of the tables when the database is \emph{acyclic}. Acyclic databases will be defined shortly. 
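For concreteness, the join in Table~\ref{table:join} can be reproduced with a few lines of Python (a naive nested-loop join, for illustration only; real database engines are far more efficient):

```python
# Reproduce the join of Table 1: a naive nested-loop natural join on the
# shared column f2.

T1 = [(1, 1), (2, 1), (3, 2), (4, 3), (5, 4)]   # columns (f1, f2)
T2 = [(1, 1), (1, 2), (2, 3), (5, 4), (5, 5)]   # columns (f2, f3)

# Concatenate every pair of rows that agree on the shared column f2.
J = [(f1, f2, f3) for (f1, f2) in T1 for (g2, f3) in T2 if f2 == g2]
```

The result is exactly the five rows shown in the table; rows of either table with no matching $f_2$ value contribute nothing.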
This leads to the following exciting algorithmic question. \begin{mdframed} \noindent \textbf{The Relational Algorithm Question:} \renewcommand{\labelenumi}{\Alph{enumi}.} \begin{enumerate} \item Which standard algorithms can be implemented as relational algorithms? \item For standard algorithms that are \emph{not} implementable by relational algorithms, is there an alternative efficient relational algorithm that has similar performance? \end{enumerate} \end{mdframed} This question has recently been of interest to the community. However, few algorithmic techniques are known. Moreover, we do not have a good understanding of which problems can be solved on relational data and which cannot. Relational algorithm design has an interesting combinatorial structure that requires a deeper understanding. We design a relational algorithm for $k$-means. It has polynomial time complexity for \textbf{acyclic} relational databases. A relational database is acyclic if there exists a tree with the following properties. There is exactly one node in the tree for each table. Moreover, for any feature (i.e. column) $f$, let $V(f)$ be the set of nodes whose corresponding tables contain feature $f$. The subgraph induced on $V(f)$ must be connected. Acyclicity can be easily checked, as the tree can be found in polynomial time if it exists \cite{yu1979algorithm}. Luckily, most natural database schemas are acyclic or nearly acyclic. Answering seemingly simple questions on general (cyclic) databases, such as whether the join is empty, is NP-Hard. For general databases, efficiency is measured in terms of the \textbf{fractional hypertree width} of the database (denoted by ``fhtw'')\footnote{See Appendix~\ref{sect:dbbackground} for a formal definition.}. This measures how close the database structure is to being acyclic. This parameter is $1$ for acyclic databases and grows as the database gets farther from being acyclic. 
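The acyclicity check mentioned above can be sketched with a classic GYO-style reduction (a simplified illustration, not the algorithm of \cite{yu1979algorithm} verbatim): repeatedly delete columns that appear in only one table and delete tables contained in another table; the schema is acyclic iff everything can be eliminated.

```python
# GYO-style acyclicity check (simplified sketch): repeatedly
#   (1) delete columns appearing in exactly one table,
#   (2) delete an empty table, or a table contained in another table.
# The schema is acyclic iff all tables can be eliminated.

def is_acyclic(schema):
    tables = [set(t) for t in schema]
    changed = True
    while changed:
        changed = False
        # Rule 1: drop columns that occur in exactly one table.
        for i, t in enumerate(tables):
            lonely = {c for c in t if sum(c in u for u in tables) == 1}
            if lonely:
                tables[i] = t - lonely
                changed = True
        # Rule 2: drop an empty table, or one contained in another table.
        for i, t in enumerate(tables):
            if not t or any(i != j and t <= u for j, u in enumerate(tables)):
                del tables[i]
                changed = True
                break
    return not tables

path_acyclic = is_acyclic([{"f1", "f2"}, {"f2", "f3"}])              # True
triangle_acyclic = is_acyclic([{"a", "b"}, {"b", "c"}, {"a", "c"}])  # False
```

The two-table path schema (as in Table~\ref{table:join}) reduces to nothing and is acyclic; the triangle schema admits no reduction step and is cyclic.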
State-of-the-art algorithms for queries as simple as counting the number of rows in the design matrix have linear dependency on $n^{\text{fhtw}}$ where $n$ is the \emph{maximum} number of rows over all input tables \cite{FAQ}. Running in time linear in $n^{\text{fhtw}}$ is the goal, as fundamental barriers need to be broken to be faster. Notice that this is polynomial time when fhtw is a fixed constant (i.e. nearly acyclic). Our algorithm has linear dependency on $n^{\text{fhtw}}$, matching the state of the art. \medskip \noindent \textbf{Relational Algorithm for $k$-means:} $k$-means is perhaps the most widely used data mining algorithm (e.g. $k$-means is one of the few models in Google's BigQuery ML package~\cite{bigqueryml}). The input to the $k$-means problem consists of a collection $S$ of points in a Euclidean space and a positive integer $k$. A feasible output is $k$ points $c_1, \ldots, c_k$, which we call \textbf{centers}. The objective is to choose the centers to minimize the aggregate squared distance from each original point to its nearest center. Recall that extracting all data points could take time exponential in the size of a relational database. Thus, the problem is to find the cluster centers without fully realizing all the data points the relational data represents. \cite{Rkmeans} was the first paper to give a non-trivial $k$-means algorithm that works on relational inputs. The paper gives an $O(1)$-approximation. The algorithm's running time has a superlinear dependency on $k^d$ when the tables are acyclic and thus is not polynomial. Here $k$ is the number of cluster centers and $d$ is the dimension (a.k.a.\ the number of features) of the points, which is equivalently the number of distinct columns in the relational database. 
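On explicit points the $k$-means objective defined above is a one-liner; the relational challenge is evaluating it without materializing the points. A toy sketch with made-up data:

```python
# k-means cost on explicit points: aggregate squared distance from each
# point to its nearest center (toy data only).

def kmeans_cost(points, centers):
    return sum(
        min(sum((p - c) ** 2 for p, c in zip(x, ctr)) for ctr in centers)
        for x in points
    )

pts = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0)]
cost = kmeans_cost(pts, [(0.0, 0.0), (10.0, 0.0)])  # 0 + 1 + 0 = 1.0
```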
For a small number of dimensions, this algorithm is a large improvement over the standard practice, and the authors showed the algorithm gives up to a 350x speed-up on real data versus performing the query to extract the data points (not even including the time to cluster the output points). Several questions remain. Is there a relational algorithm for $k$-means? What algorithmic techniques can we use as building blocks to design relational algorithms? Moreover, how can we show some problems are hard to solve using a relational algorithm? \medskip \noindent \textbf{Overview of Results:} The main result of the paper is the following. \begin{theorem} \label{thm:main_thm1} Given an acyclic relational database with tables $T_1, T_2, \ldots T_m$ whose design matrix $J$ has $N$ rows and $d$ columns, let $n$ be the maximum number of rows in any table. Then there is a randomized algorithm running in time polynomial in $d$, $n$ and $k$ that computes an $O(1)$-approximate $k$-means clustering solution with high probability. \end{theorem} In Appendix \ref{sect:dbbackground}, we discuss the algorithm's time complexity for cyclic databases. To illustrate the challenges in finding such an algorithm as described in the prior theorem, even when the database is acyclic, consider the following theorem. \begin{theorem}\label{thm:hardcount} Given an acyclic relational database with tables $T_1, T_2, \ldots T_m$ whose design matrix $J$ has $N$ rows and $d$ columns, and given $k$ centers $c_1,\dots,c_k$, let $J_i$ be the set of points in $J$ that are closest to $c_i$ for $i \in [k]$. It is $\#P$-Hard to compute $|J_i|$ for $k\geq 2$ and NP-Hard to approximate $|J_i|$ to any factor for $k \geq 3$. \end{theorem} The proof appears in Section \ref{subsection:hardness_computing_weights}; we show it by reducing an NP-Hard problem to the problem of determining whether $J_i$ is empty. 
Counting the points closest to a center is a fundamental building block in almost all $k$-means algorithms. Moreover, we show that even performing one iteration of the classic Lloyd's algorithm is $\#P$-Hard in Appendix~\ref{sect:Lloyds}. Together, these results necessitate the design of new techniques to address the main theorem, show that seemingly trivial algorithms are difficult relationally, and suggest that computing a coreset is the right approach for the problem, as it is difficult to cluster the data directly. \medskip \noindent \textbf{Overview of Techniques:} We first compute a \textbf{coreset} of all points in $J$, that is, a collection of points with weights such that if we run an $O(1)$-approximation algorithm on this weighted set, we will get an $O(1)$-approximate solution for all of $J$. To do so, we sample points according to the principle of the $k$-means++ algorithm and assign weights to the points sampled. The number of points chosen will be $\Theta(k\log N)$. Any $O(1)$-approximate weighted $k$-means algorithm can be used on the coreset to give Theorem \ref{thm:main_thm1}. \medskip \noindent \textbf{k-means++:} $k$-means++ is a well-known $k$-means algorithm \cite{DBLP:conf/soda/ArthurV07,aggarwal2009adaptive}. The algorithm iteratively chooses centers $c_1, c_2, \ldots$. The first center $c_1$ is picked uniformly at random from $J$. Given that $c_1, \ldots, c_{i-1}$ are picked, a point $x$ is picked as $c_i$ with probability $P(x) = \frac{L(x)}{Y}$ where $L(x) = \min_{j \in [i-1]} ( \norm{x-c_j}_2^2)$ and $Y = \sum_{x \in J} L(x)$. Here $[i-1]$ denotes $\{1,2, \ldots, i-1\}$. Say we sample $\Theta(k \log N)$ centers according to this distribution, which we call the \textbf{$k$-means++ distribution}. It was shown in \cite{aggarwal2009adaptive} that if we cluster the points by assigning them to their closest centers, the total squared distance between the points and their cluster centers is at most $O(1)$ times the optimal $k$-means cost with high probability. 
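For reference, on explicit points the seeding procedure just described takes only a few lines; the sketch below (toy data, seeded RNG) samples each next center with probability $L(x)/Y$, which is exactly the distribution the relational algorithm must simulate without listing the points:

```python
import random

# k-means++ seeding on explicit points (toy data only).

def dsq(x, c):
    return sum((a - b) ** 2 for a, b in zip(x, c))

def kmeanspp_seed(points, t, rng):
    centers = [rng.choice(points)]                 # first center: uniform
    while len(centers) < t:
        L = [min(dsq(x, c) for c in centers) for x in points]
        # next center: x with probability L(x) / sum(L)
        centers.append(rng.choices(points, weights=L, k=1)[0])
    return centers

rng = random.Random(1)
pts = [(0.0, 0.0), (0.0, 1.0), (9.0, 9.0), (9.0, 10.0)]
centers = kmeanspp_seed(pts, 2, rng)
```

Already-chosen centers have $L(x)=0$ and can never be re-sampled, so the $t$ seeds are distinct whenever the points are.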
Note that this is not a feasible $k$-means solution because more than $k$ centers are used. However, leveraging this, the work showed that we can construct a coreset by weighting these centers according to the number of points in their corresponding clusters. We seek to mimic this approach with a relational algorithm. Let us focus on one iteration, where we want to sample the center $c_i$ given $c_1, \ldots, c_{i-1}$ according to the $k$-means++ distribution. Consider the assignment of every point to its closest center in $c_1, \ldots, c_{i-1}$. Notice that the $k$-means++ probability is determined by this assignment. Indeed, the probability of a point being sampled is the cost of assigning this point to its closest center ($\min_{j \in [i-1]} \norm{x-c_j}_2^2$) normalized by $Y$, where $Y$ is the summation of this cost over all points. The relational format makes this distribution difficult to compute without the design matrix $J$. It is hard to efficiently characterize which points are closest to which centers. The assignment \emph{partitions} the data points according to their closest centers, where each partition may not be easily represented by a compact relational database (unlike $J$). \medskip \noindent \textbf{A Relational k-means++ Implementation:} Our approach samples every point according to the $k$-means++ distribution without computing this distribution directly. Instead, we use \textbf{rejection sampling} \cite{casella2004generalized}, which allows one to sample from a ``hard'' distribution $P$ using an ``easy'' distribution $Q$. Rejection sampling works by sampling from $Q$ first, then rejecting the sample with another probability used to bridge the gap between $Q$ and $P$. The process is repeated until a sample is accepted. In our setting, $P$ is the $k$-means++ distribution, and we need to find a $Q$ that can be sampled from efficiently with a relational algorithm (without computing $J$). Rejection sampling theory shows that for the sampling to be efficient, $Q$ should be close to $P$ point-wise to avoid a high rejection frequency. In the end, we \emph{perfectly simulate} the $k$-means++ algorithm. We now describe the intuition for designing such a $Q$. Recall that $P$ is determined by the assignment of points to their closest centers. We will approximate this assignment up to a factor of $O(i^{2}d)$ when sampling the $i^{th}$ center $c_i$, where $d$ is the number of columns in $J$. Intuitively, the approximate assignment makes things easier since for any center we can easily find the points assigned to it using an efficient relational algorithm. Then $Q$ is found by normalizing the squared distance between each point and its assigned center. The approximate assignment is designed as follows. Consider the $d$-dimensional Euclidean space where the data points in $J$ are located. 
The algorithm divides this space into a \textbf{laminar} collection of \textbf{hyper-rectangles}\footnote{A laminar set of hyper-rectangles means any two hyper-rectangles from the set are either disjoint, or one contains the other.} (i.e., sets $\{x \in \mathcal{R}^d: v_j \leq x_j \leq w_j, j=1, \ldots, d\}$, where $x_j$ is the value for feature $f_j$), and assigns each hyper-rectangle to a center. A point assigns itself to the center corresponding to the \emph{smallest} hyper-rectangle containing it. The key property of hyper-rectangles that benefits our relational algorithm is that we can efficiently represent all points of $J$ inside any hyper-rectangle by removing some rows from each table of the original database and taking the join of the remaining tables. For example, if a hyper-rectangle has the constraint $v_j \leq x_j \leq w_j$, we simply remove, from every table containing column $f_j$, all rows whose value in column $f_j$ lies outside the range $[v_j, w_j]$. The set of points assigned to a given center can then be found by adding and subtracting a laminar set of hyper-rectangles, each of which can be represented by a relational database. \medskip \noindent \textbf{Weighting the Centers:} We have sampled a good set of cluster centers; to obtain a coreset, we need to assign weights to them. As we have already mentioned, assuming $P \ne \#P$, the weights cannot be computed relationally; in fact, they cannot be approximated up to any factor in polynomial time unless $P = NP$. Instead, we design an alternative relational algorithm for computing the weights. Each weight need not be accurate individually, but we prove that the weighted centers form an $O(1)$-approximate coreset in aggregate. The main algorithmic idea is that for each center $c_i$ we generate a collection of hyperspheres around $c_i$ containing geometrically increasing numbers of points.
The space is then partitioned using these hyperspheres, with each part containing a portion of the points in $J$. Using the algorithm from \cite{abokhamis2020approximate}, we then sample a poly-logarithmically sized collection of points from each part, and use this subsample to estimate the fraction of its points that are closer to $c_i$ than to any other center. The estimated weight of $c_i$ is aggregated accordingly. \medskip \noindent \textbf{Paper Organization:} As relational algorithms are relatively new, we begin with some special cases that help the reader build intuition. In Section~\ref{subsec:warmup} we give a warm-up showing how to implement $1$-means++ and $2$-means++ (i.e., the initialization steps of $k$-means++). In this section, we also prove Theorem~\ref{thm:hardcount} as an example of the limits of relational algorithms. In Section~\ref{section:intro:background} we review background on the relational algorithms that our overall algorithm leverages. In Section~\ref{sec:kmeans++} we give the $k$-means++ algorithm via rejection sampling. Section~\ref{sec:algoverview} gives an algorithm for constructing the weights and then analyzes it. Many of the technical proofs appear in the appendix due to space constraints. \input{warmup} \input{related_work} \section{The $k$-means++ Algorithm} \label{sec:kmeans++} In this section, we describe a relational implementation of the $k$-means++ algorithm. It suffices to explain how center $c_i$ is picked given the previous centers $c_1, \ldots, c_{i-1}$. Recall that the $k$-means++ algorithm picks a point $x$ to be $c_i$ with probability $P(x) = \frac{L(x)}{Y}$, where $L(x) = \min_{j \in [i-1]} \norm{x-c_j}_2^2$ and $Y = \sum_{x \in J} L(x)$ is a normalizing constant. The implementation consists of two parts.
The first part, described in Section \ref{subsubsect:boxconstruction}, shows how to partition the $d$-dimensional Euclidean space into a laminar set of hyper-rectangles (referred to as \textbf{boxes} hereafter) generated around the previous centers. The second part, described in Section \ref{subsubsect:Qsampling}, samples according to the ``hard'' distribution $P$ using rejection sampling and an ``easy'' distribution $Q$. Conceptually, we assign every point in the design matrix $J$ to an \emph{approximately} nearest center among $c_1, \ldots, c_{i-1}$: every point in $J$ is assigned to one of the centers contained in the \emph{smallest} box the point belongs to. Then $Q$ is derived using the squared distance between the points in $J$ and their assigned centers. For illustration, we present the special case $k=3$ in Appendix \ref{section:3means}; we refer the reader to that section as a warm-up before reading the general algorithm below. \subsection{Box Construction} \label{subsubsect:boxconstruction} Here we explain the algorithm for constructing a laminar set of boxes given the previously sampled centers. The construction is purely combinatorial: it uses only the given centers and requires no relational operations. \medskip \noindent \textbf{Algorithm Description:} Assume we want to sample the $i^{th}$ point in $k$-means++. The algorithm maintains two collections ${\mathcal G}_i$ and ${\mathcal B}_i$ of tuples. Each tuple consists of a box and a point in that box, called the \textbf{representative} of the box; this point is one of the previously sampled centers. One can think of the tuples in ${\mathcal G}_i$ as ``active'' ones that are subject to change, and those in ${\mathcal B}_i$ as ``frozen'' ones that are finalized, having been removed from ${\mathcal G}_i$ and added to ${\mathcal B}_i$.
When the algorithm terminates, ${\mathcal G}_i$ will be empty, and the boxes in ${\mathcal B}_i$ will form the laminar collection of boxes that we use to define the ``easy'' probability distribution $Q$. The initial tuples in ${\mathcal G}_i$ consist of one \emph{unit hyper-cube} (of side length $1$) centered at each previous center $c_j$, $j \in [i-1]$, with representative $c_j$. Up to scaling of the initial unit hyper-cubes, we may assume that initially no pair of boxes in ${\mathcal G}_i$ intersects; this property of ${\mathcal G}_i$ is maintained throughout the process. Initially ${\mathcal B}_i$ is empty. Over time, the algorithm keeps growing the boxes in ${\mathcal G}_i$ and moves tuples from ${\mathcal G}_i$ to ${\mathcal B}_i$. The algorithm repeats the following steps in rounds. At the beginning of each round, no two boxes in ${\mathcal G}_i$ intersect. The algorithm performs a doubling step in which it \textbf{doubles} every box in ${\mathcal G}_i$. Doubling a box means each of its $(d-1)$-dimensional faces is moved twice as far away from its representative. Mathematically, a box with representative $y \in \mathcal{R}^d$ may be written as $\{x\in \mathcal{R}^d: y_i - v_i \leq x_i \leq y_i + w_i, i=1, \ldots, d\}$ ($v_i,w_i>0$); this box becomes $\{x\in \mathcal{R}^d: y_i - 2v_i \leq x_i \leq y_i + 2w_i, i=1, \ldots, d\}$ after doubling. After doubling, the algorithm performs the following operations on intersecting boxes until there are none. It iteratively picks two arbitrary intersecting boxes from ${\mathcal G}_i$, say $b_1$ with representative $y_1$ and $b_2$ with representative $y_2$, and executes a \textbf{melding} step on $(b_1, y_1)$ and $(b_2, y_2)$, which proceeds as follows: \begin{itemize} \item Compute the smallest box $b_3$ in the Euclidean space that contains both $b_1$ and $b_2$.
\item Add $(b_3, y_1)$ to ${\mathcal G}_i$ and delete $(b_1, y_1)$ and $(b_2, y_2)$ from ${\mathcal G}_i$. \item Check whether $b_1$ (resp. $b_2$) was created by the doubling step at the beginning of the current round and has not been melded with other boxes since. If so, the algorithm computes a box $b_1'$ (resp. $b_2'$) from $b_1$ (resp. $b_2$) by \textbf{halving} it; that is, each $(d-1)$-dimensional face is moved so that its distance to the box's representative is halved. Mathematically, a box $\{x\in \mathcal{R}^d: y_i - v_i \leq x_i \leq y_i + w_i, i=1, \ldots, d\}$ ($v_i,w_i>0$) with representative $y$ becomes $\{x\in \mathcal{R}^d: y_i - \frac{1}{2}v_i \leq x_i \leq y_i + \frac{1}{2}w_i, i=1, \ldots, d\}$ after halving. Then $(b_1', y_1)$ (resp. $(b_2', y_2)$) is added to ${\mathcal B}_i$. Otherwise do nothing. \end{itemize} Notice that melding decreases the size of ${\mathcal G}_i$. The algorithm terminates when there is one tuple $(b_0, y_0)$ left in ${\mathcal G}_i$, at which point the algorithm adds a box containing the whole space, with representative $y_0$, to ${\mathcal B}_i$. Note that during each round of doubling and melding, the boxes added to ${\mathcal B}_i$ are the ones that were melded with other boxes after doubling, and they are added with the shapes they had before the doubling step. \begin{lemma} \label{lemma:box:intersection} The collection of boxes in ${\mathcal B}_i$ constructed by the above algorithm is laminar. \end{lemma} \begin{proof} Note that right before each doubling step, the boxes in ${\mathcal G}_i$ are disjoint, because the algorithm melded all intersecting boxes in the previous round. We prove by induction that at all times, for every box $b$ in ${\mathcal B}_i$ there exists a box $b'$ in ${\mathcal G}_i$ such that $b \subseteq b'$.
Since the boxes added to ${\mathcal B}_i$ in each round are a subset of the boxes in ${\mathcal G}_i$ before the doubling step, and those boxes do not intersect each other, laminarity of ${\mathcal B}_i$ is a straightforward consequence. Initially ${\mathcal B}_i$ is empty, so the claim holds. Assume the claim holds right before the doubling step of some round $\ell$; after the doubling step, every box in ${\mathcal G}_i$ still covers all of the area it covered before being doubled, so the claim still holds. Furthermore, in the melding step every box $b_3$ resulting from the melding of two boxes $b_1$ and $b_2$ covers both $b_1$ and $b_2$; therefore, $b_3$ covers $b_1$ and $b_2$ if they are added to ${\mathcal B}_i$, and any box in ${\mathcal B}_i$ that was covered by either $b_1$ or $b_2$ is still covered by $b_3$. \end{proof} The collection of boxes in ${\mathcal B}_i$ can be thought of as a tree in which every node corresponds to a box, the root being the entire space. In this tree, for any box $b'$, the \textbf{children} of $b'$ are the inclusion-wise \emph{maximal} boxes among all boxes contained in $b'$. Thus the number of boxes in ${\mathcal B}_i$ is $O(i)$, since the tree has $i-1$ leaves, one for each center. \subsection{Sampling} \label{subsubsect:Qsampling} To define our easy distribution $Q$, for any point $x \in J$, let $b(x)$ be the minimal box in ${\mathcal B}_i$ that contains $x$ and let $y(x)$ be the representative of $b(x)$. Define $R(x) = \norm{x-y(x)}_2^2$ and $Q(x) = \frac{R(x)}{Z}$, where $Z = \sum_{x \in J} R(x)$ normalizes the distribution. We call $R(x)$ the \textbf{assignment cost} for $x$. We will show how to sample from the target distribution $P(\cdot)$ using $Q(\cdot)$ and rejection sampling, and how to implement this sampling step relationally.
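To make the definitions of $R(x)$ and $Q$ concrete, the following is a minimal in-memory sketch of this sampling step. It is not the relational implementation (which never materializes the point set); it only illustrates the logic of the assignment cost defined by a laminar box tree and of the accept/reject test. All names (\texttt{Box}, \texttt{sample\_next\_center}, etc.) are ours, chosen for illustration.

```python
import random

# In-memory sketch only: the relational algorithm never enumerates `points`.

class Box:
    def __init__(self, lo, hi, rep, children=()):
        self.lo, self.hi = lo, hi        # per-coordinate bounds of the box
        self.rep = rep                   # representative: a previously sampled center
        self.children = list(children)   # laminar: disjoint sub-boxes nested in this one

    def contains(self, x):
        return all(l <= xi <= h for l, xi, h in zip(self.lo, x, self.hi))

def assigned_center(root, x):
    """Representative y(x) of the minimal box in the laminar tree containing x."""
    node = root
    while True:
        child = next((c for c in node.children if c.contains(x)), None)
        if child is None:
            return node.rep
        node = child

def sq_dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def sample_next_center(points, centers, root, rng=random):
    """Rejection sampling: draw x from Q (proportional to R), accept w.p. L(x)/R(x)."""
    R = [sq_dist(x, assigned_center(root, x)) for x in points]  # assignment costs R(x)
    while True:
        x = rng.choices(points, weights=R)[0]        # sample from Q = R / Z
        L = min(sq_dist(x, c) for c in centers)      # true k-means++ cost L(x)
        if rng.random() < L / sq_dist(x, assigned_center(root, x)):
            return x                                 # accepted: x is distributed as P
```

Since every representative is itself one of the centers, $R(x) \geq L(x)$, so the acceptance probability is always at most $1$.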
\medskip \noindent \textbf{Rejection Sampling:} The algorithm repeatedly samples a point $x$ with probability $Q(x)$, and then either (A) rejects $x$ and resamples, or (B) accepts $x$ as the next center $c_i$ and finishes the sampling process. After sampling $x$, the probability of accepting $x$ is $\frac{ L(x)}{R(x)}$, and that of rejecting $x$ is $1-\frac{L(x)}{R(x)}$. Note that $\frac{L(x)}{R(x)} \leq 1$, since $R(x) = \norm{x-y(x)}_2^2 \geq \min_{j \in [i-1]}\norm{x-c_j}_2^2$. If $S(x)$ is the event of initially sampling $x$ from distribution $Q$, and $A(x)$ is the event of subsequently accepting $x$, the probability of choosing $x$ to be $c_i$ in one given round is: \begin{align*} \Pr[S(x) \text{ and } A(x)] &= \Pr[A(x) \mid S(x)] \Pr[S(x)] = \frac{{L(x)} }{R(x)} Q(x) = \frac{L(x)}{Z} \end{align*} Thus the probability of $x$ being the accepted sample is proportional to $L(x)$, as desired. We would like $Q(\cdot)$ to be close to $P(\cdot)$ point-wise so that the algorithm is efficient; otherwise, the acceptance probability $\frac{L(x)}{R(x)}$ is low and the algorithm may keep rejecting samples. \medskip \noindent \textbf{Relational Implementation of Sampling:} We now explain how to relationally sample a point $x$ with probability $Q(x)$. The implementation heavily leverages Lemma \ref{lem:box_computing}, which states that for a given box $b^*$ with representative $y^*$, the cost of assigning all points in $r \Join J \cap b^*$ to $y^*$, for each row $r \in T_i$, can be computed in polynomial time using a SumProd query grouped by $T_i$. Recall that we assign every point in $J$ to the representative of the smallest box it belongs to. We show that the total assignment cost can be computed by evaluating SumProd queries on the boxes and then adding and subtracting the query values for different boxes. Following the intuition provided in Section \ref{subsec:warmup}, the implementation generates a single row from each of the tables $T_1, T_2, \ldots, T_m$ sequentially.
The concatenation of these rows (i.e., their join) gives the sampled point $x$. It suffices to explain, assuming we have sampled $r_1, \ldots, r_{\ell-1}$ from the first $\ell-1$ tables, how to generate a row from the next table $T_\ell$. Just as for $1$- and $2$-means++ in Section \ref{subsec:warmup}, the algorithm evaluates a function $F_\ell(\cdot)$ defined on the rows of $T_\ell$ using SumProd queries, and samples $r$ with probability $\frac{F_\ell(r)}{\sum_{r' \in T_\ell}F_\ell(r')}$. Again, we focus on $r_1 \Join \ldots \Join r_{\ell-1} \Join J$, denoting the points in $J$ that use the previously sampled rows. The value of $F_\ell(r)$ is determined by the points in $r \Join r_1 \Join \ldots \Join r_{\ell-1} \Join J$. To ensure we generate a row according to the correct distribution $Q$, we define $F_\ell(r)$ to be the total assignment cost of all points in $r \Join r_1 \Join \ldots \Join r_{\ell - 1} \Join J$; that is, $F_\ell(r) = \sum_{x \in r \Join r_1 \Join \ldots \Join r_{\ell - 1} \Join J}R(x)$. Notice that the definition of $F_\ell(\cdot)$ is very similar to that for $2$-means++, except that each point is no longer assigned to a given center, but to the representative of the smallest box containing it. Let $G(r, b^*, y^*)$ denote the cost of assigning all points from $r \Join r_1\Join \ldots \Join r_{\ell-1} \Join J$ that lie in box $b^*$ to a center $y^*$. By replacing the $J$ in Lemma \ref{lem:box_computing} with $r_1 \Join \ldots \Join r_{\ell-1} \Join J$, we can compute all $G(r, b^*, y^*)$ values in polynomial time using one SumProd query grouped by $T_\ell$. The value $F_\ell(r)$ can be expanded into additions and subtractions of $G(r, b^*, y^*)$ terms. The expansion is recursive. For a box $b_0$, let $H(r, b_0) = \sum_{x \in r \Join r_1\Join \ldots \Join r_{\ell-1} \Join J \cap b_0} R(x)$. Notice that $F_\ell(r) = H(r, b_0)$ when $b_0$ is the entire Euclidean space.
Pick any row $r \in T_\ell$ and suppose we want to compute $H(r, b_0)$ for some tuple $(b_0, y_0)\in {\mathcal B}_i$. Recall that the set of boxes in ${\mathcal B}_i$ forms a tree structure. If $b_0$ has no children, this is the base case: $H(r, b_0) = G(r, b_0, y_0)$ by definition, since all points in $b_0$ must be assigned to $y_0$. Otherwise, let $(b_1, y_1), \ldots, (b_q, y_q)$ be the tuples in ${\mathcal B}_i$ where $b_1, \ldots, b_q$ are the children of $b_0$. Notice that, by definition, all points in $b_0 \setminus (\bigcup_{j \in [q]}b_j)$ are assigned to $y_0$. Then one can check that the following equation holds for any $r$: $$H(r, b_0) = G(r, b_0, y_0) - \sum_{j \in [q]}G(r, b_j, y_0) + \sum_{j \in [q]}H(r, b_j)$$ Starting with $b_0$ equal to the entire Euclidean space, the equation above can be used to recursively expand $H(\cdot, b_0)=F_\ell(\cdot)$ into additions and subtractions of $O(|{\mathcal B}_i|)$ many $G(\cdot, \cdot, \cdot)$ terms, each of which can be computed with one SumProd query by Lemma \ref{lem:box_computing}. \medskip \noindent \textbf{Runtime Analysis of the Sampling:} We now discuss the running time of the sampling algorithm simulating $k$-means++. The following lemmas bound how close the probability distribution we compute is to the $k$-means++ distribution; this in turn bounds the running time. \begin{lemma} \label{lem:newbox_side_lengths} Consider the box construction algorithm when sampling the $i^{th}$ point in the $k$-means++ simulation, and consider the end of the $j^{th}$ round, when all melding is finished but the boxes have not yet been doubled again. Let $b$ be an arbitrary box in ${\mathcal G}_i$ and let $h(b)$ be the number of centers in $b$ at this time. Let $c_a$ be an arbitrary one of these $h(b)$ centers. Then: \renewcommand{\labelenumi}{\Alph{enumi}.} \begin{enumerate} \item The distance from $c_a$ to any $(d-1)$-dimensional face of $b$ is at least $2^j$. \item The length of each side of $b$ is at most $ h(b) \cdot 2^{j+1}$.
\end{enumerate} \end{lemma} \begin{proof} The first statement is a direct consequence of the definitions of doubling and melding, since at the end of round $j$ the distance from every center in a box to each of its faces is at least $2^j$. To prove the second statement, we define an assignment of centers to boxes as follows. Consider the centers inside each box $b$ right before the doubling step; we call these the centers assigned to $b$ and denote their number by $h'(b)$. When two boxes $b_1$ and $b_2$ are melded into a box $b_3$, we assign their assigned centers to $b_3$. We prove that each side length of $b$ is at most $h'(b) 2^{j+1}$ by induction on the number $j$ of executed doubling steps. Since $h'(b) = h(b)$ right before each doubling, this proves the second statement. The statement is obvious in the base case $j=0$. It also holds by induction after a doubling step, as $j$ is incremented, the side lengths double, and the number of assigned centers does not change. It holds as well after every melding step, because each side length of the newly created larger box is at most the sum of the maximum side lengths of the two smaller boxes that are moved to ${\mathcal B}_i$, and the number of centers assigned to the newly created larger box is the sum of the numbers of centers assigned to those two smaller boxes. Note that since all centers assigned to a box $b$ are inside $b$ at all times, $h'(b)$ is the number of centers inside $b$ before the next doubling. \end{proof} The following lemma bounds the gap between the two probability distributions. \begin{lemma} \label{lemma:box:boundary}Consider the box generation algorithm when sampling the $i^{th}$ point in the $k$-means++ simulation. For all points $x$, $R(x) \leq O(i^2 d)\cdot L(x)$. \end{lemma} \begin{proof} Consider an arbitrary point $x$, and let $c_\ell$, $\ell \in [i-1]$, be the center closest to $x$ in the 2-norm.
Assume $j$ is minimal such that just before the $(j+1)^{th}$ doubling round, $x$ is contained in some box in ${\mathcal G}_i$. We argue about the state of the algorithm at two times: the time $s$ just before doubling round $j$ and the time $t$ just before doubling round $j+1$. Let $b$ be a minimal box in ${\mathcal G}_i$ that contains $x$ at time $t$, and let $y$ be the representative of $b$. Notice that we assign $x$ to the representative of the smallest box in ${\mathcal B}_i$ that contains it, so $x$ will be assigned to $y$: none of the boxes added to ${\mathcal B}_i$ before time $t$ contains $x$, by the minimality of $j$, and when box $b$ is added to ${\mathcal B}_i$ (potentially after a few more doubling rounds) it still has the same representative $y$. By Lemma \ref{lem:newbox_side_lengths}, the squared distance from $x$ to $y$ is at most $(i-1)^2 d 2^{2j+2}$. So it is sufficient to show that the squared distance from $x$ to $c_\ell$ is $\Omega(2^{2j})$. Let $b'$ be the box in ${\mathcal G}_i$ that contains $c_\ell$ at time $s$. Note that $x$ could not have been inside $b'$ at time $s$, by the definition of $s$ and $t$. By Lemma \ref{lem:newbox_side_lengths}, the distance from $c_\ell$ to each face of $b'$ at time $s$ is at least $2^{j-1}$, and hence the squared distance from $c_\ell$ to $x$ is at least $2^{2j-2}$, as $x$ is outside of $b'$. \end{proof} The following theorem bounds the running time. \begin{theorem} \label{thm:sampling_time} The expected time complexity of running $k'$ iterations of this implementation of $k$-means++ is $O(k'^4 dm \Psi(n, d, m))$. \end{theorem} \begin{proof} When picking center $c_i$, a point $x$ can be sampled with probability $Q(x)$ in time $O(m i \Psi(n, d, m))$: the implementation samples one row from each of the $m$ tables, and to sample one row it evaluates $O(|{\mathcal B}_i|)$ SumProd queries, each in $O(\Psi(n, d, m))$ time.
As mentioned earlier, ${\mathcal B}_i$ can be thought of as a tree of boxes with $i-1$ leaves, so $|{\mathcal B}_i| = O(i)$. By Lemma \ref{lemma:box:boundary}, the probability of accepting any sampled $x$ is $\frac{L(x)}{R(x)} = \frac{1}{O(i^2 d)}$, so the expected number of samples from $Q$ until one is accepted is $O(i^2 d)$. Thus the expected time to find $c_i$ is $O(i^3dm\Psi(n, d, m))$; summing over $i \in [k']$, we get $O(k'^4 dm \Psi(n, d, m))$. \end{proof} \section{Related Work and Background} \label{section:intro:background} \medskip \noindent \textbf{Related Work on $k$-means:} Constant-factor approximations are known for the $k$-means problem in the standard computational setting~\cite{LiS16,KanungoMNPSW04}, although the algorithm most commonly used in practice is a local search algorithm called Lloyd's algorithm, sometimes confusingly referred to as just ``the $k$-means algorithm''. The $k$-means++ algorithm from \cite{DBLP:conf/soda/ArthurV07} is a $\Theta(\log k)$-approximation algorithm, and is commonly used in practice to seed Lloyd's algorithm. Coreset construction methods have been used to design algorithms for the $k$-means problem in other restricted-access computational models, including streaming \cite{GuhaMMMO03,BravermanFLSY17} and the MPC model \cite{EneIM11,BahmaniMVKV12}, as well as to speed up sequential methods \cite{MeyersonOP04,SohlerW18}. \medskip \noindent \textbf{Relational Algorithms for Learning Problems:} Training various machine learning models on relational data has been studied before; however, many of the proposed algorithms are not efficient under our definition of a relational algorithm. It has been shown that linear regression and factorization machines can be implemented more efficiently by exploiting repeated patterns in the design matrix \cite{rendle2013scaling}. Relational algorithms for linear regression and factorization machines have been improved for various scenarios in \cite{Kumar:2015:LGL:2723372.2723713, SystemF, khamis2018ac}.
A unified relational algorithm for problems such as linear regression, singular value decomposition, and factorization machines was proposed in \cite{IndatabaseLinearRegression}. Algorithms for training support vector machines are studied in \cite{yangtowards,linearSVM}. In \cite{cheng2019nonlinear}, a relational algorithm is introduced for independent Gaussian mixture models, and it is shown experimentally to be faster than materializing the design matrix. \medskip \noindent \textbf{Relational Algorithm Building Blocks:} In the path join scenario, the $1$- and $2$-means++ sampling methods introduced in Section \ref{subsec:warmup} follow a similar procedure: starting with the first table $T_1$, iteratively evaluate some function $F_i(\cdot)$ defined on the rows of table $T_i$, and sample one row $r_i$ according to the distribution obtained by normalizing $F_i(\cdot)$. The function $F_i(\cdot)$ for table $T_i$ is defined on the matrix $r_1 \Join \ldots \Join r_{i-1} \Join J$, where $J$ is the design matrix. This matrix is itself the design matrix of a new relational database, constructed by discarding all rows of the previous tables apart from the sampled $r_1, \ldots, r_{i-1}$. We can generalize the computation of the $F_i(\cdot)$ functions to a broader class of queries, namely \textbf{SumProd queries}, which can be evaluated efficiently on \emph{any} acyclic relational database; see \cite{FAQ} for more details. In the following lemmas, assume the relational database has tables $T_1, \ldots, T_m$ with design matrix $J$; let $n$ be the maximum number of rows in any table $T_i$, $m$ the number of tables, and $d$ the number of columns in $J$. \begin{definition} For the $j^{th}$ feature ($j \in [d]$), let $q_j: \mathbb{R} \rightarrow S$ be an efficiently computable function that maps feature values to some set $S$.
Let the binary operations $\oplus$ and $\otimes$ be any operators such that $(S,\oplus,\otimes)$ forms a commutative semiring. The value $\bigoplus_{x\in J}\bigotimes_{j \in [d]} q_j(x_j)$ is a SumProd query. \end{definition} \begin{lemma}[\cite{FAQ}] \label{lem:sumprod} Any SumProd query can be computed in time $O(md^2 n^{\text{fhtw}} \log(n))$, where fhtw is the fractional hypertree width of the database. For acyclic databases fhtw $=1$, so the running time is polynomial. \end{lemma} Despite the cumbersome formal definition of SumProd queries, we list below their key applications used in this paper. With a slight abuse of notation, throughout this paper we use $\Psi(n, d, m)$ to denote the worst-case time bound for any SumProd query. \begin{lemma} \label{lem:box_computing} Given a point $y \in \mathcal{R}^d$ and a hyper-rectangle $b = \{x \in \mathcal{R}^d: v_i \leq x_i \leq w_i, i=1, \ldots, d\}$, where $v$ and $w$ are constant vectors, let $J \cap b$ denote the data points represented by the rows of $J$ that fall into $b$. Pick any table $T_j$. Using a single SumProd query, we can compute for all $r \in T_j$ the value $\sum_{p \in r \Join J \cap b}\norm{p - y}_2^2$. The time required is at most that of one SumProd query, $\Psi(n, d, m)$. \end{lemma} Lemma \ref{lem:box_computing} is an immediate consequence of Theorem \ref{thm:sumsum:query} in Appendix \ref{sect:dbbackground} and the fact that we can efficiently represent all points of $J$ inside any hyper-rectangle by removing some rows from each table of the original database and taking the join of the remaining tables. The following lemma follows from an application of the main result of \cite{abokhamis2020approximate}; in Appendix \ref{sect:faqai} we show formally how to apply their result to obtain it.
\begin{lemma}[\cite{abokhamis2020approximate}]\label{lem:ball_computing} Given a hypersphere $\{x \in \mathcal{R}^d: \|x - y_0\|^2 \leq z_0^2\}$ where $y_0$ is a given point and $z_0$ is the radius, a $(1+\epsilon)$-approximation of the number of points in $J$ that lie inside this hypersphere could be computed in $O\left( \frac{m^6 \log^4 n}{\epsilon^2} \Psi(n, d, m) \right)$ time. \end{lemma} Notice that a SumProd query could be used to output either a scalar (similar to Lemma \ref{lem:ball_computing}) or a vector whose entries are function values for every row $r$ in a chosen table $T_j$ (in Lemma \ref{lem:box_computing}). We say the SumProd query is \textbf{grouped by} $T_j$ in the latter case. \iffalse A valid SumProd query $Q$ requires a cost function $q_f: \mathbb{R} \rightarrow S$ for every feature $f$ which maps all entries in column $f$ to some base set $S$. We assume that given a value in column $f$, $q_f$ takes $O(1)$ time to compute. $Q$ also needs pre-defined additive and multiplicative operations on set $S$, denoted by $\oplus$ and $\otimes$ respectively. Let $F$ be the set of features. We use $x \in J$ to denote that $x$ is a row(data point) in $J$. The SumProd query $Q$ evaluates the following value for $J$: $$ \bigoplus_{x\in J}\bigotimes_{f \in F} q_f(x_f)$$ What we have computed in $1$- and $2$-means++ are special cases of SumProd queries: \begin{itemize} \item For $1$-means++, let $q_f(x_f)=1$, $\oplus$ and $\otimes$ be arithmetic addtion and multiplication. The query computes the number of rows in $J$. \item For $2$-means++, given a fixed point $p \in \mathcal{R}^d$, let $q_f(x_f) = (1, (x_f-p_f)^2)$, $(a, b) \oplus (c, d) = (a + c, b+d)$ and $(a, b) \otimes (c, d) = (ac, ad +bc)$. The query computes a pair $(a, b)$ where $a$ is the number of rows in $J$ and $b$ is the aggregate square distances from $p$ over the points/rows in $J$. 
\end{itemize} Previous research \cite{XXX} has shown that when the mapped set and the defined operations $(S)$ forms a \emph{commutative semiring}, a SumProd query can be evaluated efficiently by directly working on tables. The detailed definition and discussion about the conditions are covered in Appendix section \ref{sect:dbbackground}. \fi \iffalse The old version. We will now briefly cover some results on relational algorithms that this paper will leverage as building blocks. The most important result is the existence of relational algorithms for SumProd queries. A SumProd query $Q$ consists of: \begin{itemize} \item A collection $T_1, \ldots, T_m$ of tables in which each column has an associated feature that is numerical. Let $F$ be the collection of all features, and $d= |F|$ is the number of features. The design matrix is $J=T_1 \Join \dots \Join T_m$, the natural join of the tables. We use $n$ to denote the number of rows in the largest input table and use $N$ to denote the number of rows in $J$. \item A function $q_f: \mathbb{R} \rightarrow S$ for each feature $f \in F$ for some base set $S$. We generally assume each $q_f$ is easy to compute. \item Binary operations $\oplus$ and $\otimes$ such that $(S,\oplus,\otimes)$ forms a commutative semiring. Most imporantly this means that $\otimes$ distributes over $\oplus$. \end{itemize} Evaluating $Q$ results in the following element of the semiring: $$ \bigoplus_{x\in J}\bigotimes_{f \in F} q_f(x_f)$$ where $x$ is a row in the design matrix and $x_f$ is the value for feature $f$ in that row. For example, if each $q_f(x_f) =1$, $\otimes$ is multiplication, and $\oplus$ is addition, then the resulting SumProd query computes $ \sum_{x\in J}\prod_{f \in F} 1$, which evaluates to the number of rows in the design matrix $J$. 
As another example, if $q_f(x_f) = (1, (x_f-p_f)^2)$, $(a, b) \oplus (c, d) = (a + c, b+d)$ and $(a, b) \otimes (c, d) = (ac, ad +bc)$ then one can verify that $(\mathbb{Z}_{\ge 0} \times \mathbb{R}_{\ge 0}, \oplus, \otimes)$ is a commutative semiring, and the resulting SumProd query computes a pair $(a, b)$ where $a$ is the number of rows in $J$ and $b$ is be the aggregate 2-norm square distances from a fixed point $p$ over the points/rows in $J$ (see appendix section \ref{sect:dbbackground} for more details). A SumProd query $Q$ grouped by a table $T_i$ computes for each row $r \in T_i$ the value of $$ \bigoplus_{x\in r \Join J}\bigotimes_{f \in F} q_f(x_f) $$ That is, it computes for each row $r \in T_i$ the value of $Q$ under the assumption that $r$ was the only row in $T_i$. For example, evaluating the SumProd query $ \sum_{x\in J}\prod_{f \in F} 1$ grouped by table $T_i$, for the path join discussed in subsection \ref{subsec:warmup}, would compute, for each edge between layer $i$ and layer $i+1$, the number of paths in the graph $G$ passing through this edge. \begin{definition} Let $\Psi(n, d, m)$ be the time to evaluate a SumProd query on a join of $n$ rows, $d$ columns and $m$ tables $\oplus$ and $\otimes$ can be evaluated in constant time. It is known that $\Psi(n, d, m)$ is polynomial in the parameters assuming an acyclic join. \end{definition} Our algorithm will use SumProd queries as a building block. We will state our algorithm's formal run time in terms of $\Psi(n, d, m)$ . Another useful building block for us will be SumProd queries with additionally constraints. This consists of a SumProd query $Q$, a table $T_i$, and a constraint sytem $\mathcal C$. Evaluating such a query would result in the evaluation of the SumProd query $Q$ grouped by table $T_i$, when $J$ is restricted to only those rows that satisfy the constraint $\mathcal C$ (or equivalently the rows not satisfying $\mathcal C$ are removed from $J$). 
SumProd queries with additive constraints were introduced in \cite{faqai}. We will use a special type of constraint called an \textbf{additive constraint}. An additive constraint consists of a function $g_f$ for each feature $f \in F$ and a bound $L$, and the constraint is that $\sum_{f \in F} g_f(x_f) \le L$. \cite{faqai} gives an algorithm for SumProd queries with additive constraints that conceptually improves on standard practice, but this algorithm's running time is still exponential in the worst case. \cite{abokhamis2020approximate} shows that computing a SumProd query with a single additive constraint is NP-hard. \cite{abokhamis2020approximate} additionally shows that there is a relational algorithm to compute a $(1+\epsilon)$-approximation when the operators satisfy some additional natural properties (which are too complicated to go into here). For our purposes it is sufficient to know that this result from \cite{abokhamis2020approximate} yields a relational algorithm, with time complexity $O\left( \frac{m^6 \log^4 n}{\epsilon^2} \Psi(n, d, m) \right)$, to compute a $(1+\epsilon)$-approximation of the number of points that lie inside a specified hypersphere. \fi \section{Warm-up: Efficiently Implementing 1-means++ and 2-means++} \label{subsec:warmup} This section is a warm-up for understanding the combinatorial structure of relational data. We will show how to do $k$-means++ for $k\in \{1,2\}$ (referred to as 1- and 2-means++) on a simple join structure. We will also show the proof of Theorem~\ref{thm:hardcount}, which states that counting the number of points in a cluster is a hard problem on relational data. First, let us consider relationally implementing 1-means++ and 2-means++. For better illustration, we consider a special type of acyclic table structure named \textbf{path join}. The relational algorithm used will be generalized to work on more general join structures when we move to the full algorithm in Section \ref{sec:kmeans++}.
In a path join each table $T_i$ has two features/columns $f_i$ and $f_{i+1}$. Tables $T_i$ and $T_{i+1}$ then share the common column $f_{i+1}$. Assume for simplicity that each table $T_i$ contains $n$ rows. The design matrix $J=T_1 \Join T_2 \Join \ldots \Join T_m$ has $d= m+1$ features, one for each distinct column in the tables. Even with this simple structure, the size of the design matrix $J$ could still be exponential in the size of the database: $J$ could contain up to $n^{m/2}$ rows, and $d n^{m/2}$ entries. Thus the standard practice could require time and space $\Omega(mn^{m/2})$ in the worst case. \begin{table}[h] \label{table:intro} \centering \begin{tabular}{|c|c|} \hline \rowcolor[HTML]{FFFFC7} \multicolumn{2}{|c|}{\cellcolor[HTML]{FFFFC7}$T_1$} \\ \hline \rowcolor[HTML]{FFFFC7} $f_1$ & $f_2$ \\ \hline 1 & 1 \\ \hline 2 & 1 \\ \hline 3 & 2 \\ \hline 4 & 3 \\ \hline 5 & 4 \\ \hline \end{tabular} \quad \begin{tabular}{|c|c|} \hline \rowcolor[HTML]{FFFFC7} \multicolumn{2}{|c|}{\cellcolor[HTML]{FFFFC7}$T_2$} \\ \hline \rowcolor[HTML]{FFFFC7} $f_2$ & $f_3$ \\ \hline 1 & 1 \\ \hline 1 & 2 \\ \hline 2 & 3 \\ \hline 5 & 4 \\ \hline 5 & 5 \\ \hline \end{tabular} \quad \begin{tabular}{|c|c|c|} \hline \rowcolor[HTML]{FFFFC7} \multicolumn{3}{|c|}{\cellcolor[HTML]{FFFFC7}$J=T_1 \Join T_2$} \\ \hline \rowcolor[HTML]{FFFFC7} $f_1$ & $f_2$ & $f_3$ \\ \hline 1 & 1 & 1 \\ \hline 1 & 1 & 2 \\ \hline 2 & 1 & 1 \\ \hline 2 & 1 & 2 \\ \hline 3 & 2 & 3 \\ \hline \end{tabular} \quad \raisebox{-0.5\height}{\includegraphics[scale=0.2]{graph_figure.PNG}} \caption{A path join instance with $m=2$ tables $T_1$ and $T_2$, each containing $n=5$ rows. This shows $T_1$, $T_2$, the design matrix $J$, and the resulting layered directed graph $G$. \emph{Every} path from the leftmost layer to the rightmost layer of this graph $G$ corresponds to one data point for the clustering problem (i.e.
a row of the design matrix).} \vspace{-.5cm} \end{table} \medskip \noindent \textbf{Graph Illustration of the Design Matrix:} Conceptually consider a directed acyclic graph $G$, where there is one layer of nodes corresponding to each feature $f_i$ ($i=1, \ldots, d$), and edges only point from nodes in layer $f_i$ to layer $f_{i+1}$. The nodes in $G$ correspond to feature values, and edges in $G$ correspond to rows in tables. There is one vertex $v$ in layer $f_i$ for each value that appears in column $f_i$ in table $T_{i-1}$ or $T_i$, and one edge pointing from $u$ in layer $f_i$ to $v$ in layer $f_{i+1}$ if $(u,v)$ is a row in table $T_i$. Then, there is a one-to-one correspondence between \textbf{full paths} in $G$ (paths from layer $f_1$ to layer $f_d$) and rows in the design matrix. \medskip \noindent \textbf{A Relational Implementation of 1-means++:} Implementing the 1-means++ algorithm is equivalent to \emph{generating a full path uniformly at random from $G$}. We generate this path by iteratively picking a row from tables $T_1, \ldots, T_m$, corresponding to picking an arc pointing from layer $f_1$ to $f_2$, $f_2$ to $f_3$, and so on, such that concatenating all picked rows (arcs) gives a point in $J$ (a full path in $G$). To sample a row from $T_1$, for every row $r \in T_1$, consider $r \Join J$, which consists of all rows in $J$ whose values in columns $(f_1, f_2)$ are equal to $r$. Let the function $F_1(r)$ denote the total number of rows in $r \Join J$. This is also the number of full paths passing through arc $r$. Then, every $r \in T_1$ is sampled with probability $\frac{F_1(r)}{\sum_{r' \in T_1}F_1(r')}$; note that $\sum_{r' \in T_1}F_1(r')$ is the total number of full paths. Let the picked row be $r_1$. After sampling $r_1$, we can conceptually throw away all other rows in $T_1$ and focus only on the rows in $J$ that use $r_1$ to concatenate with rows from other tables (i.e., $r_1 \Join J$).
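Iterating this step over the remaining tables, with all the needed path counts obtained by a single backward pass, yields the full sampler. The following is a minimal Python sketch of this idea (the helper names and toy data layout are ours, not the paper's; each table is a list of rows $(f_i, f_{i+1})$):

```python
import random
from collections import defaultdict

def suffix_counts(tables):
    """count[i][v] = number of full paths through tables[i:] whose first-column
    value is v.  For a row r = (a, b) of tables[i], given that the previously
    sampled row ends at a, the counting function is F_i(r) = count[i + 1][b]."""
    m = len(tables)
    count = [defaultdict(int) for _ in range(m + 1)]
    for _, b in tables[-1]:
        count[m][b] = 1                     # a completed path extends in one way
    for i in range(m - 1, -1, -1):
        for a, b in tables[i]:
            count[i][a] += count[i + 1][b]
    return count

def sample_uniform_row(tables, rng):
    """Pick rows table by table, weighting row (a, b) of tables[i] by
    F_i(r) = count[i + 1][b]; this returns a uniformly random row of J."""
    count = suffix_counts(tables)
    point, prev = [], None
    for i, table in enumerate(tables):
        cand = [r for r in table if prev is None or r[0] == prev]
        weights = [count[i + 1][b] for _, b in cand]
        a, b = rng.choices(cand, weights=weights, k=1)[0]
        if prev is None:
            point.append(a)                 # record f_1 once
        point.append(b)
        prev = b
    return tuple(point)
```

On the path join instance shown above, each of the five full paths is drawn with probability $1/5$. For 2-means++, described next, the same backward pass would instead propagate pairs (number of paths, aggregate cost).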
For any row $r \in T_2$, let the function $F_2(r)$ denote the number of rows in $r \Join r_1 \Join J$, which is also the total number of full paths passing through arcs $r_1$ and $r$. We sample every $r$ with probability $\frac{F_2(r)}{\sum_{r' \in T_2}F_2(r')}$. Notice that $\sum_{r' \in T_2}F_2(r')=F_1(r_1)$, the number of full paths passing through arc $r_1$. Repeat this procedure until we have sampled a row in the last table $T_m$: for table $T_i$ and $r \in T_i$, assuming we have sampled $r_1, \ldots, r_{i-1}$ from $T_1, \ldots, T_{i-1}$ respectively, throw away all the other rows in the previous tables and focus on $r_1\Join \ldots \Join r_{i-1} \Join J$. Here $F_i(r)$ is the number of rows in $r \Join r_1\Join \ldots \Join r_{i-1} \Join J$, and $r$ is sampled with probability proportional to $F_i(r)$. It is easy to verify that every full path is sampled uniformly. For every table $T_i$ we need the function $F_i(\cdot)$, which is defined on all its rows. There are $m$ such functions. For each $F_i(\cdot)$, we can compute all values $F_i(r)$ for $r \in T_i$ using one-pass dynamic programming and then sample according to these values. Repeating this procedure for $m$ rounds completes the sampling process. This gives a polynomial-time algorithm. \medskip \noindent \textbf{A Relational Implementation for 2-means++:} Assume $x=(x_1, \ldots, x_d)$ is the first center sampled and now we want to sample the second center. By the $k$-means++ principle, any row $r \in J$ is sampled with probability $\frac{\|r - x\|^2}{\sum_{r' \in J}\|r' - x\|^2}$. For a full path in $G$ corresponding to a row $r \in J$ we refer to $\|r - x\|^2$ as the \textbf{aggregated cost} over all $d$ nodes/features. Similar to $1$-means++, we pick one row in each table from $T_1$ to $T_m$ and put all the rows together to obtain the sampled point.
Assume we have sampled the rows $r_1, r_2, \ldots, r_{i-1}$ from the first $i-1$ tables and we focus on all full paths passing through $r_1, \ldots, r_{i-1}$ (i.e., the new design matrix $r_1 \Join \ldots \Join r_{i-1} \Join J$). In $1$-means++, we compute $F_i(r)$, the total number of full paths passing through arcs $r_1, \ldots, r_{i-1}, r$ (i.e., the rows of $r \Join r_1 \Join \ldots \Join r_{i-1} \Join J$), and sample $r \in T_i$ from a distribution normalized using the $F_i(r)$ values. In $2$-means++, we define $F_i(r)$ to be the summation of aggregated costs over all full paths which pass through arcs $r_1, \ldots, r_{i-1}, r$. We sample $r \in T_i$ from a distribution normalized using the $F_i(r)$ values. It is easy to verify correctness. Again, each $F_i(\cdot)$ can be computed using one-pass dynamic programming, which gives the values for all rows in $T_i$ when we sample from $T_i$. This involves $m$ rounds of such computations and gives a polynomial-time algorithm. \subsection{Hardness of Relationally Computing the Weights} \label{subsection:hardness_computing_weights} Here we prove Theorem~\ref{thm:hardcount}. We will focus on showing that, given a set of centers, counting the number of points in $J$ that are closest to a given one of them is $\#P$-hard. Due to space constraints, see Appendix~\ref{section:ommited:proofs} for a proof of the other part of the theorem, namely that it is hard to approximate the center weights for three centers. We prove $\#P$-hardness by a reduction from the well-known $\#P$-hard Knapsack Counting problem. The input to the Knapsack Counting problem consists of a set $W = \{w_1,\dots,w_h\}$ of nonnegative integer weights, and a nonnegative integer $L$. The output is the number of subsets of $W$ with aggregate weight at most $L$.
To construct the relational instance, for each $i \in [h]$, we define the tables $T_{2i-1}$ and $T_{2i}$ as follows: \begin{table}[h] \label{table:nphard} \centering \begin{tabular}{|c|c|} \hline \rowcolor[HTML]{FFFFC7} \multicolumn{2}{|c|}{\cellcolor[HTML]{FFFFC7}$T_{2i-1}$} \\ \hline \rowcolor[HTML]{FFFFC7} $f_{2i-1}$ & $f_{2i}$ \\ \hline 0 & 0 \\ \hline 0 & $w_i$ \\ \hline \end{tabular} \quad \begin{tabular}{|c|c|} \hline \rowcolor[HTML]{FFFFC7} \multicolumn{2}{|c|}{\cellcolor[HTML]{FFFFC7}$T_{2i}$} \\ \hline \rowcolor[HTML]{FFFFC7} $f_{2i}$ & $f_{2i+1}$ \\ \hline 0 & 0 \\ \hline $w_i$ & 0 \\ \hline \end{tabular} \quad \end{table} Let centers $c_1$ and $c_2$ be arbitrary points such that the points closer to $c_1$ than to $c_2$ are exactly those points $p$ for which $\sum_{i=1}^d p_i \le L$. Then there are $2^h$ rows in $J$, since $w_i$ can either be selected or not selected in feature $2i$. The weight of $c_1$ is the number of points in $J$ closer to $c_1$ than to $c_2$, which is in turn exactly the number of subsets of $W$ with total weight at most $L$. \iffalse (The original version of warmup for 1- and 2-means++) \medskip \noindent \textbf{A Relational Implementation of 1-means++:} Conceptually consider a layered directed acyclic graph $G$, with one layer for each feature. In $G$ there is one vertex $v$ in layer $i$ for each entry value that appears in the $f_i$ column in either table $T_{i-1}$ or table $T_i$. Further, in $G$ there is a directed edge between a vertex $v$ in layer $i$ and a vertex $w$ in layer $i+1$ if and only if $(v, w)$ is a row in table $T_i$. Then there is a one-to-one correspondence between full paths in $G$, which are paths from layer $1$ to layer $d$, and rows in the design matrix. Implementing the 1-means++ algorithm is equivalent to generating a full path uniformly at random from $G$. This can be done by counting the number of full paths using dynamic programming and topological sorting. The dynamic program is easy to construct.
Informally, it determines for each vertex $v$ the number of paths originating in layer 1 that end in $v$. Using these path counts, it is straightforward to generate a full path uniformly at random. It is easy to implement this algorithm directly on the tables. The resulting running time would be $O(nm \log n)$. \medskip \noindent \textbf{A Relational Implementation for 2-means++:} Assume for simplicity, and without loss of generality, that the first center selected using the above $1$-means++ algorithm was the origin. Conceptually think of each vertex in the layered graph $G$ as having a cost equal to the square of its feature value. Implementing 2-means++ is then equivalent to generating a full path with probability proportional to its aggregate cost. This can again be efficiently implemented in time $O(nm \log n)$ using dynamic programming and topological sorting, where for each vertex $v$ the aggregate number of paths from layer 1 to $v$, and the aggregate costs of the paths from layer 1 to $v$, are stored. \fi
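The reduction above can be checked on a tiny instance by brute force. The following sketch (hypothetical weights; we assume the $w_i$ are positive so that the $2^h$ rows of $J$ are distinct) builds the tables of the construction, materializes $J$, and compares the number of rows with coordinate sum at most $L$ against a direct enumeration of knapsack subsets:

```python
from itertools import product

def reduction_tables(weights):
    """Tables T_{2i-1} over (f_{2i-1}, f_{2i}) and T_{2i} over (f_{2i}, f_{2i+1})
    from the construction: joining the (0, w_i)/(w_i, 0) rows puts w_i into
    feature 2i, joining the (0, 0)/(0, 0) rows leaves it out."""
    tables = []
    for w in weights:
        tables.append([(0, 0), (0, w)])
        tables.append([(0, 0), (w, 0)])
    return tables

def design_matrix(tables):
    """Brute-force natural join of consecutive path-join tables (only viable
    for tiny instances; the point of the paper is to avoid this step)."""
    rows = [tuple(r) for r in tables[0]]
    for table in tables[1:]:
        rows = [r + (b,) for r in rows for a, b in table if r[-1] == a]
    return rows

def knapsack_count(weights, bound):
    """Number of subsets of `weights` with total weight at most `bound`."""
    return sum(1 for mask in product([0, 1], repeat=len(weights))
               if sum(w for w, keep in zip(weights, mask) if keep) <= bound)
```

For example, with $W=\{3,5,7\}$ and $L=8$ both counts equal $5$ (the subsets $\emptyset$, $\{3\}$, $\{5\}$, $\{7\}$, $\{3,5\}$), illustrating that computing the weight of $c_1$ solves Knapsack Counting.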
{ "timestamp": "2021-05-24T02:05:29", "yymm": "2008", "arxiv_id": "2008.00358", "language": "en", "url": "https://arxiv.org/abs/2008.00358", "abstract": "This paper gives a k-means approximation algorithm that is efficient in the relational algorithms model. This is an algorithm that operates directly on a relational database without performing a join to convert it to a matrix whose rows represent the data points. The running time is potentially exponentially smaller than $N$, the number of data points to be clustered that the relational database represents.Few relational algorithms are known and this paper offers techniques for designing relational algorithms as well as characterizing their limitations. We show that given two data points as cluster centers, if we cluster points according to their closest centers, it is NP-Hard to approximate the number of points in the clusters on a general relational input. This is trivial for conventional data inputs and this result exemplifies that standard algorithmic techniques may not be directly applied when designing an efficient relational algorithm. This paper then introduces a new method that leverages rejection sampling and the $k$-means++ algorithm to construct an O(1)-approximate k-means solution.", "subjects": "Data Structures and Algorithms (cs.DS); Databases (cs.DB); Machine Learning (cs.LG)", "title": "Relational Algorithms for k-means Clustering", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9736446494481299, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.7087893950107758 }
https://arxiv.org/abs/1805.00918
Some Estimates of Virtual Element Methods for Fourth Order Problems
In this paper, we employ the techniques developed for second order operators to obtain new estimates for the Virtual Element Method applied to fourth order operators. The analysis is based on elements with proper shape regularity. Estimates for projection and interpolation operators are derived. The biharmonic problem is then solved by the Virtual Element Method, and optimal error estimates are obtained. Our choice of the discrete form for the right-hand side function relaxes the regularity requirement of previous work, and error estimates between the exact solutions and the computable numerical solutions are provided.
\section{Introduction} Virtual Element Methods are designed to use polygonal/polyhedral meshes. This gives us the flexibility to generate general meshes, which is a great advantage, especially in computational mechanics. Virtual Element Methods for second order problems are well studied in \cite{Beirao13}--\cite{Beir16-3}; the stability analysis and error analysis for these methods are studied and extended in \cite{Brenner17}, where new techniques based on shape regularity and a discrete norm for virtual element functions are developed. Virtual Element Methods for fourth order problems are analyzed in \cite{Brezzi13},\cite{Chinosi16}; however, the stability analysis and error analysis there are not complete. The motivations of this paper are the following: firstly, to apply the techniques of \cite{Brenner17} to higher order problems and obtain basic estimates similar to those in \cite{Brenner17}; secondly, to improve the error analysis for the biharmonic equation: if we modify the virtual element method slightly, then the same convergence rates are achieved with less regularity of the right-hand side function $f$; thirdly, to address the drawback that the numerical solutions $u_h$ of virtual element methods cannot be computed directly: we present two ways to obtain approximations of $u_h$ that preserve the same convergence rate and are much more efficient to compute. The paper is organized as follows: In Sections 2.1--2.3, the definition of the two-dimensional virtual element with shape regularity is given. The projection in Section 2.2 is the same as in \cite{Chinosi16}. Compared with the definition of the local virtual element space in \cite{Brezzi13}, \cite{Chinosi16}, the right-hand side polynomial satisfies $q_v(= \Delta^2v) \in \mathbb{P}_k(D)$, not $\mathbb{P}_{k-4}(D)$ or $\mathbb{P}_{k-2}(D)$. This definition provides more degrees of freedom, which is necessary to define the $L_2$ projection from the virtual element function space to $\mathbb{P}_k(D)$.
In Section 2.4, a semi-norm $|||\cdot|||_{k,D}$ similar to the one in \cite{Brenner17} is presented. The local estimates for the projections $\Pi_{k,D}^\Delta$ and $\Pi_{k,D}^0$ are obtained. In Sections 2.5--2.6, a piecewise $C^1$ polynomial $w$ depending only on the values on $\partial D$ is constructed and the local interpolation error estimate is proved. In Section 3, we obtain the error estimates between $u_h$, its approximations, and the exact solution of the biharmonic equation. In Section 4, we draw conclusions and discuss future work. \section{Local Virtual Element Spaces in Two Dimensions} Let $D$ be a polygon in $\mathbb{R}^2$ with diameter $h_D$. For a nonnegative integer $k$, $\mathbb{P}_k$ is the space of polynomials of degree $\leq k$ and $\mathbb{P}_{-k} = \{0\},\ k \geq 1.$ The space $\mathbb{P}_k(D)$ is the restriction of $\mathbb{P}_k$ to $D.$ The indices $(r,s,m)$ related to the degree $k\geq 2$ are defined by $$ r \geq \max\{3,k\},\ s = k-1,\ m=k-4. $$ The set of edges of $D$ is denoted by $\mathcal{E}_D$ and $\mathbb{P}_k(e)$ is the restriction of $\mathbb{P}_k$ to $e\in \mathcal{E}_D.$ Then we define $\mathbb{P}_{r,s}(\partial D)$ as \begin{eqnarray*} \mathbb{P}_{r,s}(\partial D) &:=& {\bigg \{ } v:\ v|_e\in \mathbb{P}_r(e),\ \left.\frac{\partial v}{\partial n}\right|_e \in \mathbb{P}_s(e), \forall e\in \mathcal{E}_D,\ \text{and } \\ &&\quad v,\nabla v \in C(\partial D),\text{ values of $v$, $\nabla v$ at each vertex of $D$ are given degrees of freedom} {\bigg\} }. \end{eqnarray*} \subsection{Shape Regularity}\label{Shape-Regularity} Here the shape regularity assumptions are the same as in \cite{Brenner17}. Let $ {D}$ be the polygon with diameter $h_D$. Assume that \begin{equation}\label{assume1} | {e}|\geq \rho h_D\quad {\rm for\ any\ edge}\ {e} \ {\rm of}\ {D},\ \rho\in(0,1), \end{equation} and \begin{equation}\label{assume2} {D}\ {\rm is\ star\ shaped\ with\ respect\ to\ a\ disc}\ \mathfrak{B}\ {\rm with\ radius}\ \rho h_D.
\end{equation} The center of $\mathfrak{B}$ is the star center of $ {D}.$ \begin{figure}[H] \begin{center} \begin{tikzpicture}[scale = 1.2] \coordinate (A) at (0,0); \coordinate (B) at (3,0); \coordinate (C) at (3.5,1.5); \coordinate (D) at (2,3); \coordinate (E) at (0,1); \coordinate (O) at ($1/5*(A)+1/5*(B)+1/5*(C)+1/5*(D)+1/5*(E)$); \draw (A) --(B) --(C) --(D) --(E) --(A); \draw[style=dashed](O) circle (0.8); \fill [black] (O) circle (1pt); \draw[style=dashed](O)--(A); \draw[style=dashed](O)--(B); \draw[style=dashed](O)--(C); \draw[style=dashed](O)--(D); \draw[style=dashed](O)--(E); \end{tikzpicture} \end{center}\caption{A subdomain}\label{fg1} \end{figure} The polygon in Figure \ref{fg1} is an example of ${D}$; we denote by $\mathcal{T}_D$ the corresponding triangulation of $D$. We will use the notation $A \apprle B$ to represent the inequality $A \leq C B$, where the positive constant $C$ depends only on $k$ and the parameter $\rho$, and increases with $k$ and $1/\rho$. The notation $A \approx B$ is equivalent to $A \apprle B$ and $A \apprge B$. \begin{lemma}[Bramble--Hilbert Estimates \cite{Bramble70}]\label{bramble} Conditions \eqref{assume1}-\eqref{assume2} imply that we have the following estimates: \begin{equation} \inf\limits_{q\in\mathbb{P}_l} |\xi - q|_{H^m(D)} \apprle h_D^{l+1-m} |\xi|_{ H^{l+1}(D)}, \ \forall \xi\in H^{l+1}(D),\ l = 0,\cdots, k,\ \text{and}\ 0\leq m \leq l. \end{equation} \end{lemma} Details can be found in \cite{Brenner07}, Lemma 4.3.8. \begin{lemma}[Sobolev Imbedding \cite{Adams03}, Theorem 4.12]\label{imbedding} From \eqref{assume1}-\eqref{assume2}, we have: \begin{equation} \|\xi\|_{C^j(D)} \apprle \sum\limits_{l=0}^{2+j}h_D^{l-1} |\xi|_{ H^{l}(D)} , \ \forall \xi\in H^{2+j}(D),\ j=1,2. \end{equation} \end{lemma} \begin{lemma}[The Generalized Poincar\'{e} Inequality \cite{Necas11}]\label{general-poincare}
Assume \eqref{assume1}-\eqref{assume2} hold and $h_D = 1$; then \begin{equation} \|\xi\|_{H^2(D)}^2 \apprle |\xi|_{H^2(D)}^2+\sum\limits_{i=1}^{2} \left( \int_{\partial D}\frac{\partial \xi}{\partial x_i}\ ds \right)^2 + \left( \int_{\partial D}{\xi}\ ds \right)^2 , \quad \forall \xi\in H^{2}(D). \end{equation} \end{lemma} \begin{proof} The proof is similar to the one in \cite{Necas11}, Section 1.1.6. \end{proof} \subsection{The Projection $\Pi_{k,D}^\Delta $} By the generalized Poincar\'{e} inequality from Lemma \ref{general-poincare}, the Sobolev space $H^2(D)$ is a Hilbert space with the inner product $(((\cdot,\cdot)))$ defined as: \begin{equation}\label{inner-p} (((u,v))) = ((u,v))_D +\sum\limits_{i=1}^{2} \left( \int_{\partial D}\frac{\partial u}{\partial x_i}\ ds \right)\left( \int_{\partial D}\frac{\partial v}{\partial x_i}\ ds \right) + \left( \int_{\partial D}{u}\ ds \right)\left( \int_{\partial D}{v}\ ds \right) , \end{equation} where $$ ((u,v))_D = \int_D \sum_{i,j=1}^{2}\frac{\partial^2 u}{\partial x_i\partial x_j} \frac{\partial^2 v}{\partial x_i\partial x_j}\ dx, $$ for any $u, v \in H^{2}(D).$ The operator $\Pi_{k,D}^\Delta: H^2(D)\rightarrow \mathbb{P}_k(D)$ is defined with respect to $(((\cdot,\cdot)))$ by: $$ (((\Pi_{k,D}^\Delta \xi, q))) = ((( \xi, q))),\ \forall q\in \mathbb{P}_k(D); $$ taking $q=1$ gives \eqref{pikd3}, and taking $q=x$ and then $q=y$ gives \eqref{pikd2}. This is the same as in \cite{Chinosi16}, where the projection is listed as: \begin{eqnarray} ((\Pi_{k,D}^\Delta \xi, q))_D &=& (( \xi, q))_D, \ \forall \xi\in H^2(D), \ \forall q\in \mathbb{P}_k(D), \label{pikd1}\\ \int_{\partial D} \nabla \Pi_{k,D}^\Delta \xi\ ds &=& \int_{\partial D} \nabla \xi\ ds, \label{pikd2}\\ \int_{\partial D} \Pi_{k,D}^\Delta \xi\ ds &=& \int_{\partial D} \xi\ ds.
\label{pikd3} \end{eqnarray} On the domain $D$, with boundary $\partial D$, we denote by ${\bf n} = (n_1,n_2)$ the outward unit normal vector to $\partial D$, and by ${\bf t} = (t_1,t_2)$ the unit tangent vector in the counterclockwise ordering of the boundary. For $u\in H^2(D)$, we define $${\bf D}^2 u = (u_{11},u_{22},u_{12},u_{21})=\left(\frac{\partial^2 u}{\partial x_1^2},\frac{\partial^2 u}{\partial x_2^2},\frac{\partial^2 u}{\partial x_1\partial x_2},\frac{\partial^2 u}{\partial x_2\partial x_1}\right).$$ We then denote by $U_{nn}({\bf D}^2 u) := \sum\limits_{i,j}u_{ij} n_i n_j$ the normal bending moment, by $U_{nt}({\bf D}^2 u) := \sum\limits_{i,j}u_{ij} n_i t_j$ the twisting moment, by $Q_{n}({\bf D}^2 u) := \sum\limits_{i,j}\frac{\partial u_{ij}}{\partial x_i} n_j$ the normal shear force, and $U_{\Delta}({\bf D}^2 u) = \Delta^2 u$. After integrating by parts twice we have \begin{equation}\label{eq_bp1} ((u,v))_D=\int_D U_{\Delta}({\bf D}^2 u) v \ dx +\int_{\partial D} U_{nn}({\bf D}^2 u)\frac{\partial v}{\partial n}\ ds -\int_{\partial D} \left(Q_{n}({\bf D}^2 u)+\frac{\partial U_{nt}({\bf D}^2 u)}{\partial t}\right)v\ ds. \end{equation} \subsection{Local VEM Space $\mathcal{Q}^k(D)$} For $k\geq 2,$ the local VEM space $\mathcal{Q}^k(D)\subset H^2(D)$ is defined as follows: $v\in H^2(D)$ belongs to $\mathcal{Q}^k(D)$ if and only if (i) $v|_{\partial D}$ and the trace of $\frac{\partial v}{\partial n}$ on $\partial D$ belong to $\mathbb{P}_{r,s}(\partial D),$ (ii) there exists a polynomial $q_v(= \Delta^2v) \in \mathbb{P}_k(D)$ such that \begin{equation}\label{var-form} ((v,w))_D= (q_v, w),\quad \forall w\in H^2_0(D), \end{equation} and (iii) \begin{equation}\label{equal-parts} \Pi_{k,D}^0v-\Pi_{k,D}^\Delta v \in \mathbb{P}_{k-4}(D), \end{equation} where $\Pi_{k,D}^0$ is the projection from $L_2(D)$ onto $\mathbb{P}_k(D)$.
\begin{remark} It is clear that $\mathbb{P}_k(D)$ is a subspace of $\mathcal{Q}^k(D).$ From \eqref{equal-parts}, we have $\Pi_{k,D}^0v=\Pi_{k,D}^\Delta v$ for $k=2,3$. \end{remark} The choice in (ii) can be replaced by $q_v(= \Delta^2v) \in \mathbb{P}_{k-2}(D)$ as in \cite{Chinosi16}; then Lemma \ref{equivalence_b}--Lemma \ref{interpolation_error} and Corollary \ref{corollary_1}--Corollary \ref{corollary_2} remain valid. The reason we choose $q_v\in \mathbb{P}_{k}(D)$ is that it helps to obtain the same error estimate with a less smooth right-hand side $f$. For $k=2,3$, we only require $f\in L_2(\Omega)$, while in \cite{Chinosi16}, $f\in H^1(\Omega)$ for $k=2$, and $f\in H^2(\Omega)$ for $k=3$. However, Theorems \ref{theorem1}--\ref{theorem3} and Corollaries \ref{coh1}--\ref{copik0l2} in this paper remain valid for the spaces of \cite{Chinosi16} with $f\in H^1(\Omega)$, $H^2(\Omega)$.\\ For completeness, we recall the definition of the degrees of freedom in \cite{Brezzi13}, employing the following notation: for a nonnegative integer $i$ and an edge $e$ with midpoint $x_e$ and length $h_e$, the set of $i+1$ normalized monomials is denoted by $\mathcal{M}_i^e$, $$ \mathcal{M}_i^e :=\left\{1,\frac{x-x_e}{h_e},\left(\frac{x-x_e}{h_e}\right)^2,\cdots, \left(\frac{x-x_e}{h_e}\right)^i\right\}. $$ For a domain $D$ with diameter $h_D$ and barycenter ${\bf x}_D$, the set of $(i+1)(i+2)/2$ normalized monomials is defined by $\mathcal{M}_i^D$, $$ \mathcal{M}_i^D :=\left\{ \left(\frac{{\bf x}-{\bf x}_D}{h_D}\right)^\alpha,\quad |\alpha|\leq i\right\}, $$ where $\alpha$ is a nonnegative multi-index $\alpha = (\alpha_1,\alpha_2)$, $|\alpha|=\alpha_1+\alpha_2$ and ${\bf x}^\alpha=x_1^{\alpha_1} x_2^{\alpha_2}$.
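For concreteness (writing ${\bf x}_D=(x_{D,1},x_{D,2})$, a componentwise notation not used elsewhere in the paper), the case $i=2$ gives the $(2+1)(2+2)/2=6$ scaled monomials

```latex
\mathcal{M}_2^D=\left\{1,\ \frac{x_1-x_{D,1}}{h_D},\ \frac{x_2-x_{D,2}}{h_D},\
\left(\frac{x_1-x_{D,1}}{h_D}\right)^{2},\
\frac{(x_1-x_{D,1})(x_2-x_{D,2})}{h_D^{2}},\
\left(\frac{x_2-x_{D,2}}{h_D}\right)^{2}\right\},
```

corresponding to the multi-indices $\alpha\in\{(0,0),(1,0),(0,1),(2,0),(1,1),(0,2)\}$.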
In $D$ the degrees of freedom are: \begin{itemize} \item The values of $v$ at each vertex of $D$. \label{bul1} \item The values of $\nabla v$ at each vertex of $D$. \item For $r>3$, the moments $\frac{1}{h_e}\int_e q(s)v\ ds,\ \forall q\in \mathcal{M}_{r-4}^e,\ \forall e\in \partial D$. \item For $s>1$, the moments $\frac{1}{h_e}\int_e q(s)\frac{\partial v}{\partial n}\ ds,\ \forall q\in \mathcal{M}_{s-2}^e,\ \forall e\in \partial D$. \item For $m\geq 0$, the moments $\frac{1}{|D|}\int_D q(x)v(x)\ dx,\ \forall q\in \mathcal{M}_{m}^D$. \label{bul5} \end{itemize} \begin{lemma}\label{lemma1} Given any $g\in \mathbb{P}_{r,s}(\partial{D})$ and $f\in \mathbb{P}_k(D)$, there exists a unique function $v\in H^2(D)$ such that (i) $v=g,$ $\frac{\partial v}{\partial n}=\frac{\partial g}{\partial n}$ on $\partial D$ and (ii) $$ \int_D{\bf D}^2 v \cdot {\bf D}^2 w\ dx = \int_{D} fw\ dx,\quad \forall w\in H^2_0(D). $$ \end{lemma} \begin{proof} As in \cite{Brenner17}, let $\tilde{g} \in H^2(D)$ be a $C^1$, piecewise $\mathbb{P}_k$ polynomial constructed in Section \ref{case1}, such that $\tilde{g}=g,$ $\frac{\partial \tilde{g}}{\partial n}=\frac{\partial g}{\partial n}$ on $\partial D$. Then the unique $v\in H^2(D)$ is given by $\phi+\tilde{g}$, where $\phi\in H^2_0(D)$ is defined by $$ \int_D{\bf D}^2 \phi \cdot {\bf D}^2 w\ dx = \int_{D} fw\ dx -\int_D{\bf D}^2 \tilde{g} \cdot {\bf D}^2 w\ dx , \quad \forall w\in H^2_0(D).
$$ \end{proof} \begin{lemma} We have (i) {\rm dim} $\mathcal{Q}^k(D)$ = {\rm dim} $\mathbb{P}_{r,s}(\partial D)$ {\rm + dim} $\mathbb{P}_{m}(D)$, and (ii) $v\in \mathcal{Q}^k(D)$ is uniquely determined by $v|_{\partial D}, \left.\frac{\partial v}{\partial n}\right|_{\partial D}$ and $\Pi_{k-4,D}^0v$. \end{lemma} \begin{proof} Following \cite{Brenner17} and \cite{Brezzi13}, let $\tilde{\mathcal{Q}}^k_D:=\left\{v\in H^2(D) : v|_{\partial D}, \left.\frac{\partial v}{\partial n}\right|_{\partial D} \in \mathbb{P}_{r,s}(\partial D)\text{ and } \Delta^2 v\in \mathbb{P}_k(D) \right\}$. The linear map $v\mapsto (v|_{\partial D}, \left.\frac{\partial v}{\partial n}\right|_{\partial D}, \Delta^2 v)$ from $\tilde{\mathcal{Q}}^k_D$ to $\mathbb{P}_{r,s}(\partial D)\times \mathbb{P}_k(D)$ is an isomorphism by Lemma \ref{lemma1}. The linear map $v\mapsto (v|_{\partial D}, \left.\frac{\partial v}{\partial n}\right|_{\partial D}, \Pi_{k-4,D}^0 v +(\Pi_{k,D}^0 -\Pi_{k-4,D}^0)(v-\Pi_{k,D}^\Delta v) )$ from $\tilde{\mathcal{Q}}^k_D$ to $\mathbb{P}_{r,s}(\partial D)\times \mathbb{P}_k(D)$ is also an isomorphism. Suppose $v$ is in the null space; then $\Pi_{k-4,D}^0v = 0$, $$ v|_{\partial D} = 0\ {\rm and}\ \left.\frac{\partial v}{\partial n}\right|_{\partial D} =0. $$ By \eqref{pikd1}-\eqref{pikd3} and \eqref{eq_bp1}, we have $\Pi_{k,D}^\Delta v = 0$, so that by \eqref{equal-parts}, $$ 0=\Pi_{k-4,D}^0v=\Pi_{k,D}^0v \in \mathbb{P}_{k-4}(D). $$ Taking $w = v\in \mathcal{Q}^k(D)$ in \eqref{var-form}, we have $$ |v|^2_{H^2(D)} = 0 \Rightarrow v=0. $$ \end{proof} \begin{lemma}[Discrete Estimates]\label{discrete_estimates}
From Conditions \eqref{assume1}-\eqref{assume2} and the equivalence of norms on finite dimensional vector spaces, for any $u\in \mathbb{P}_k$, we have the following estimates: $$ \|{\bf D}^2 u\|_{L_2(D)} \apprle h_D^{-2}\|u\|_{L_2(D)}\quad \text{and}\quad \left\|\frac{\partial u}{\partial t}\right\|_{L_2(e)}\apprle h_e^{-1}\|u\|_{L_2(e)}, $$ $$ h_D^2\|U_{\Delta}({\bf D}^2 u)\|_{L_2(D)} +h_D^{1/2}\|U_{nn}({\bf D}^2 u)\|_{L_2(\partial D)} +h_D^{3/2}\left\|Q_{n}({\bf D}^2 u)+\frac{\partial U_{nt}({\bf D}^2 u)}{\partial t}\right\|_{L_2(\partial D)} \apprle \|{\bf D}^2 u\|_{L_2(D)}. $$ \end{lemma} \subsection{Estimates of $|||\cdot|||_{k,D}$}\label{sec1.4} The semi-norm $|||\cdot|||_{k,D}$ for $\xi\in H^2(D)$ is defined by \begin{equation}\label{eq_v-norm} |||\xi|||_{k,D}^2 = \|\Pi^0_{k-4,D}\xi\|_{L_2(D)}^2 +h_D\sum_{e\in\mathcal{E}_D}\|\Pi^0_{r,e}\xi\|_{L_2(e)}^2 +h_D^3\sum_{e\in\mathcal{E}_D, i=1,2}\left\|\Pi^0_{r-1,e}\frac{\partial \xi}{\partial x_i}\right\|_{L_2(e)}^2 . \end{equation} From \eqref{pikd1} we immediately obtain the stability estimate \begin{equation}\label{obvious_ineq} |\Pi_{k,D}^\Delta\xi|_{H^2(D)}\leq |\xi|_{H^2(D)}, \ \forall \xi\in H^2(D). \end{equation} We define the kernel of the operator $\Pi_{k,D}^\Delta$ as: $$ {\rm Ker} \Pi_{k,D}^\Delta := \{v\in\mathcal{Q}^k(D): \Pi_{k,D}^\Delta v = 0\}. $$ For any $ v \in\mathcal{Q}^k(D)$, since $v|_e\in \mathbb{P}_r(e)$ and $\left.\frac{\partial v}{\partial x_i}\right|_e \in \mathbb{P}_{r-1}(e)$ for every $e\in\mathcal{E}_D$, we have \begin{eqnarray*} \sum_{e\in\mathcal{E}_D}\|\Pi^0_{r,e}v\|_{L_2(e)}^2 &=& \|v\|_{L_2(\partial D)}^2,\\ \sum_{e\in\mathcal{E}_D}\left\|\Pi^0_{r-1,e}\frac{\partial v}{\partial x_i}\right\|_{L_2(e)}^2 &=& \left\|\frac{\partial v}{\partial x_i}\right\|_{L_2(\partial D)}^2.
\end{eqnarray*} \begin{lemma}\label{new-equi} For any $v \in \mathcal{Q}^k(D)$, we have the equivalence of norms: \begin{eqnarray*} |||v|||_{k,D}^2 &=& \|\Pi^0_{k-4,D}v\|_{L_2(D)}^2 +h_D\|v\|_{L_2(\partial D)}^2 +h_D^3\sum_{i=1,2}\left\|\frac{\partial v}{\partial x_i}\right\|_{L_2(\partial D)}^2 \\ &\approx& \|\Pi^0_{k-4,D}v\|_{L_2(D)}^2 +h_D\|v\|_{L_2(\partial D)}^2 +h_D^3\left\|\frac{\partial v}{\partial n}\right\|_{L_2(\partial D)}^2 . \end{eqnarray*} \end{lemma} \begin{proof} Suppose $h_D =1$. By the discrete estimates from Lemma \ref{discrete_estimates}, we have $$ \left\|\frac{\partial v}{\partial t}\right\|_{L_2(\partial D)} \apprle \|v\|_{L_2(\partial D)}, $$ and \begin{eqnarray*} \left\|\frac{\partial v}{\partial t}\right\|^2_{L_2(\partial D)}+\left\|\frac{\partial v}{\partial n}\right\|^2_{L_2(\partial D)} \approx \sum_{i=1,2}\left\|\frac{\partial v}{\partial x_i}\right\|_{L_2(\partial D)}^2, \end{eqnarray*} so that the equivalence is obtained. \end{proof} \begin{lemma}\label{pq_ineq} For any $p\in \mathbb{P}_{k-4}$, $k\geq 2$, there exists $q \in \mathbb{P}_{k}$ such that $\Delta^2 q = p$ and $$ \|q\|_{L_2(D)}\apprle \|p\|_{L_2(D)}. $$ \end{lemma} \begin{proof} From \cite{Brenner17}, we know that $\Delta$ maps $\mathbb{P}_k$ onto $\mathbb{P}_{k-2}$, so that $\Delta^2=\Delta\Delta$ maps $\mathbb{P}_k$ onto $\mathbb{P}_{k-4}$. Then there exists an operator $(\Delta^2)^{\dagger}: \mathbb{P}_{k-4}\rightarrow \mathbb{P}_{k}$ such that $\Delta^2 (\Delta^2)^{\dagger}$ is the identity operator on $\mathbb{P}_{k-4}$. We define the norm of $p$ as $$ \|p\|_{(\Delta^2)^{\dagger}} := \inf_{q\in \mathbb{P}_{k},\ \Delta^2 q = p} \|q\|_{L_2(D)}. $$ Since we can write $$ q = \sum\limits_{i,j\geq 0;\ i+j\leq k} c_{ij}x_1^i x_2^j\ \text{ with }\ \Delta^2 q = p, $$ the minimization problem $$ \|p\|_{(\Delta^2)^{\dagger}} = \inf_{c_{ij}} \|q\|_{L_2(D)} $$ is solvable.
Hence there exists $q=(\Delta^2)^{\dagger}p$ such that
$$ \|q\|_{L_2(D)} = \|p\|_{(\Delta^2)^{\dagger}}. $$
By the equivalence of norms on finite-dimensional vector spaces, we have
$$ \|p\|_{(\Delta^2)^{\dagger}}\apprle \|p\|_{L_2(D)}, $$
and the result follows.
\end{proof}
\begin{lemma}\label{equivalence_b} For any $v\in {\rm Ker} \Pi_{k,D}^\Delta$, we have
$$ |||v|||^2_{k,D}\approx h_D\left\|v\right\|_{L_2(\partial D)}^2 + h_D^3\left\|\frac{\partial v}{\partial n}\right\|_{L_2(\partial D)}^2. $$
\end{lemma}
\begin{proof}
Suppose $h_D = 1$. For $k<4$ we have $\Pi_{k-4,D}^0 v =0$, so the equivalence is trivial. For $k\geq 4$, let $v\in {\rm Ker} \Pi_{k,D}^\Delta$. By \eqref{pikd1}, \eqref{eq_bp1}, and using the same $p$ and $q$ as in Lemma \ref{pq_ineq}, we have
\begin{equation}\label{pqker} \int_D v ({\Delta}^2q)\ dx = \int_{\partial D} \left(Q_{n}({{\bf D}^2 q})+\frac{\partial U_{nt}({{\bf D}^2 q})}{\partial t}\right)v\ ds -\int_{\partial D} U_{nn}({{\bf D}^2 q})\frac{\partial v}{\partial n}\ ds. \end{equation}
By Lemma \ref{discrete_estimates}, we have
$$ \left\|Q_{n}({{\bf D}^2 q})+\frac{\partial U_{nt}({{\bf D}^2 q})}{\partial t}\right\|_{L_2(\partial D)} \apprle \|{{\bf D}^2 q}\|_{L_2(D)} \apprle \|{q}\|_{L_2(D)} $$
and $\left\|U_{nn}({{\bf D}^2 q})\right\|_{L_2(\partial D)}\apprle \|{q}\|_{L_2(D)}$. Then, by Lemma \ref{pq_ineq},
$$ \left|\int_D v p\ dx\right| \apprle \left( \left\|v\right\|_{L_2(\partial D)} + \left\|\frac{\partial v}{\partial n}\right\|_{L_2(\partial D)} \right)\|{q}\|_{L_2(D)} \apprle \left( \left\|v\right\|_{L_2(\partial D)} + \left\|\frac{\partial v}{\partial n}\right\|_{L_2(\partial D)} \right)\|{p}\|_{L_2(D)}, $$
so that
$$ \|\Pi_{k-4,D}^0 v\|_{L_2(D)} = \max\limits_{0\neq p\in \mathbb{P}_{k-4}}\left|\int_D (\Pi_{k-4,D}^0 v) (p/\|{p}\|_{L_2(D)})\ dx\right| \apprle \left\|v\right\|_{L_2(\partial D)} + \left\|\frac{\partial v}{\partial n}\right\|_{L_2(\partial D)}, $$
which implies
$$ \|\Pi_{k-4,D}^0 v\|_{L_2(D)}^2 \apprle \left\|v\right\|_{L_2(\partial D)}^2 + \left\|\frac{\partial v}{\partial n}\right\|_{L_2(\partial D)}^2. $$
Together with Lemma \ref{new-equi}, this gives the result.
\end{proof}
\begin{remark}\label{remark_t}
As in \cite{Brenner17}, we have
$$ |||v|||^2_{k,D}\approx h_D^3\left\|\frac{\partial v}{\partial t}\right\|_{L_2(\partial D)}^2 + h_D^3\left\|\frac{\partial v}{\partial n}\right\|_{L_2(\partial D)}^2,\quad \forall v \in {\rm Ker} \Pi_{k,D}^\Delta, $$
where $\partial/\partial t$ denotes a tangential derivative along $\partial D$.
\end{remark}
There are also stability estimates for $\Pi_{k,D}^\Delta\xi$ in the $L_2(D)$, $H^1(D)$ and $H^2(D)$ norms in terms of the semi-norm $|||\cdot|||_{k,D}$.
\subsection{Estimates of $\Pi_{k,D}^\Delta$ and $\Pi_{k,D}^0$}
\begin{lemma}\label{leq-semi-norm} We have
\begin{equation} \|\Pi_{k,D}^\Delta\xi\|_{L_2(D)}+h_D|\Pi_{k,D}^\Delta\xi|_{H^1(D)}+h_D^2|\Pi_{k,D}^\Delta\xi|_{H^2(D)} \apprle |||\xi|||_{k,D},\ \forall \xi\in H^2(D). \end{equation}
\end{lemma}
\begin{proof}
We may assume $h_D = 1$. By \eqref{pikd1} and \eqref{eq_bp1}, we have
\begin{eqnarray}\label{eq_in} &&|\Pi_{k,D}^\Delta\xi|_{H^2(D)}^2 = ((\Pi_{k,D}^\Delta\xi,\xi))_D\\ &=& \int_D U_{\Delta}({\bf D}^2 (\Pi_{k,D}^\Delta\xi))\,\xi \ dx +\int_{\partial D} U_{nn}({\bf D}^2 (\Pi_{k,D}^\Delta\xi))\frac{\partial \xi}{\partial n}\ ds -\int_{\partial D} \left(Q_{n}({\bf D}^2 (\Pi_{k,D}^\Delta\xi))+\frac{\partial U_{nt}({\bf D}^2 (\Pi_{k,D}^\Delta\xi))}{\partial t}\right)\xi\ ds, \nonumber \end{eqnarray}
and then, by the Cauchy-Schwarz inequality, Lemma \ref{discrete_estimates} and \eqref{eq_v-norm}, we have
\begin{eqnarray}\label{eq_in2} |\Pi_{k,D}^\Delta\xi|_{H^2(D)}^2 \apprle |\Pi_{k,D}^\Delta\xi|_{H^2(D)} |||\xi|||_{k,D} \apprle |||\xi|||_{k,D}^2.
\end{eqnarray}
By \eqref{pikd2}-\eqref{pikd3} and Lemma \ref{general-poincare}, we have
\begin{equation}\label{norm_P} \|\Pi_{k,D}^\Delta\xi\|^2_{H^2(D)} \apprle |\Pi_{k,D}^\Delta\xi|_{H^2(D)}^2+\sum_{i=1}^{2}\left(\int_{\partial D}\frac{\partial \Pi_{k,D}^\Delta\xi }{\partial x_i}\ ds\right)^2+\left(\int_{\partial D} \Pi_{k,D}^\Delta \xi\ ds\right)^2. \end{equation}
We also have
$$ \int_{\partial D}\frac{\partial \Pi_{k,D}^\Delta\xi }{\partial x_i}\ ds =\int_{\partial D}\frac{\partial \xi }{\partial x_i}\ ds $$
and
\begin{equation}\label{bound_xi} \left|\int_{\partial D}\frac{\partial \xi }{\partial x_i}\ ds\right| \leq \sum_{e\in\mathcal{E}_D}\left|\int_{e}{\Pi_{0,e}^0\frac{\partial \xi }{\partial x_i} }\ ds\right| \apprle \left(\sum_{e\in\mathcal{E}_D} \left\|\Pi^0_{r-1,e}\frac{\partial \xi }{\partial x_i} \right\|_{L_2(e)}^2\right)^{1/2}. \end{equation}
Similarly,
\begin{equation}\label{bound_xi1} \left(\int_{\partial D} \Pi_{k,D}^\Delta \xi\ ds\right)^2 \apprle \sum_{e\in\mathcal{E}_D}\|\Pi^0_{r,e}\xi\|_{L_2(e)}^2. \end{equation}
From \eqref{eq_in}-\eqref{bound_xi1}, we obtain
\begin{equation} \|\Pi_{k,D}^\Delta\xi\|_{H^2(D)} \apprle |||\xi|||_{k,D}. \end{equation}
\end{proof}
\begin{lemma}\label{friedrichs} For any $\xi\in H^2(D)$, we have
\begin{equation}\label{upper_bound1} |||\xi |||_{k,D} \apprle \|\xi\|_{L_2(D)} + h_D|\xi|_{H^1(D)} + h_D^2|\xi|_{H^2(D)}, \end{equation}
and there exists $\bar{\xi}\in \mathbb{P}_1$ such that
\begin{equation}\label{upper_bound2} |||\xi - \bar{\xi} |||_{k,D} \apprle h_D^2|\xi|_{H^2(D)}, \end{equation}
where
$$ \int_{\partial D}\nabla\bar{\xi} \ ds = \int_{\partial D}\nabla\xi \ ds, $$
$$ \int_{\partial D}\bar{\xi} \ ds = \int_{\partial D}\xi \ ds \quad {\rm or} \quad \int_{D} \bar{\xi} \ dx = \int_{D}\xi \ dx.
$$
\end{lemma}
\begin{proof}
Assume $h_D = 1$. By the trace theorem,
$$|||\xi |||_{k,D} \apprle \|\xi\|_{L_2(D)} +\|\xi\|_{L_2(\partial D)} +\sum_{i=1,2}\left\|\frac{\partial \xi}{\partial x_i}\right\|_{L_2(\partial D)} \apprle \|\xi\|_{L_2(D)} + |\xi|_{H^1(D)} + |\xi|_{H^2(D)}, $$
which gives \eqref{upper_bound1}. By Lemma \ref{general-poincare},
$$|||\xi -\bar{\xi} |||_{k,D}^2 \apprle \|\xi-\bar{\xi}\|_{H^2(D)}^2 \apprle |\xi|_{H^2(D)}^2+\sum\limits_{i=1}^{2} \left( \int_{\partial D}\frac{\partial (\xi -\bar{\xi})}{\partial x_i}\ ds \right)^2 + \left( \int_{\partial D}({\xi -\bar{\xi}})\ ds \right)^2, $$
and with the definition of $\bar{\xi}$ we arrive at
$$|||\xi -\bar\xi |||_{k,D} \apprle |\xi|_{H^2(D)}. $$
\end{proof}
\begin{corollary}\label{corollary_1} We have
\begin{eqnarray} \|\xi - \Pi_{k,D}^\Delta\xi\|_{L_2(D)} &\apprle& h_D^{l+1}|\xi|_{H^{l+1}(D)},\quad \forall \xi\in H^{l+1}(D),\ 1\leq l\leq k,\label{l2_error} \\ |\xi - \Pi_{k,D}^\Delta\xi|_{H^{1}(D)} &\apprle& h_D^{l}|\xi|_{H^{l+1}(D)},\quad \forall \xi\in H^{l+1}(D),\ 1\leq l\leq k,\label{h1_error} \\ |\xi - \Pi_{k,D}^\Delta\xi|_{H^{2}(D)} &\apprle& h_D^{l-1}|\xi|_{H^{l+1}(D)},\quad \forall \xi\in H^{l+1}(D),\ 2\leq l\leq k.\label{h2_error} \end{eqnarray}
\end{corollary}
\begin{proof}
From Lemma \ref{bramble}, Lemma \ref{leq-semi-norm} and Lemma \ref{friedrichs}, for any $q\in \mathbb{P}_l$ and $\xi\in H^{l+1}(D)$ we have
$$ \|\xi - \Pi_{k,D}^\Delta\xi\|_{L_2(D)}\apprle\|\xi - q\|_{L_2(D)}+\|\Pi_{k,D}^\Delta (q - \xi)\|_{L_2(D)}\apprle h_D^{l+1}|\xi|_{H^{l+1}(D)}, $$
and the $H^1$ and $H^2$ estimates follow in the same way.
\end{proof}
For the $L_2$ projection $\Pi_{k,D}^0$, from \cite{Brenner17}, we have
\begin{equation}\label{pikd0l2} \|\Pi_{k,D}^0\xi\|_{L_2(D)}\leq \|\xi\|_{L_2(D)}, \quad |\Pi_{k,D}^0\xi|_{H^1(D)}\apprle |\xi|_{H^1(D)},\quad \forall \xi\in H^1(D), \end{equation}
and
\begin{equation}\label{pikd0h1} |\xi - \Pi_{k,D}^0\xi|_{H^1(D)}\apprle h_D^{l}|\xi|_{H^{l+1}(D)},\quad \forall \xi\in H^{l+1}(D),\ 1\leq l\leq k. \end{equation}
\begin{lemma}\label{pikd0-bound} The operator $\Pi_{k,D}^0$ satisfies
\begin{eqnarray} |\Pi_{k,D}^0\xi|_{H^{2}(D)} &\apprle& |\xi|_{H^{2}(D)},\quad \forall \xi\in H^{2}(D),\label{pikd0h2} \\ |\xi - \Pi_{k,D}^0\xi|_{H^{2}(D)} &\apprle& h_D^{l-1}|\xi|_{H^{l+1}(D)},\quad \forall \xi\in H^{l+1}(D),\ 2\leq l\leq k.\label{pikd0h2_error} \end{eqnarray}
\end{lemma}
\begin{proof}
Suppose $\xi\in H^2(D)$. By \eqref{obvious_ineq}, \eqref{h1_error}, \eqref{pikd0h1} and the inverse inequality, we have
\begin{eqnarray*} |\Pi_{k,D}^0\xi|_{H^2(D)} &\apprle & |\Pi_{k,D}^0\xi-\Pi_{k,D}^\Delta\xi|_{H^2(D)} +|\Pi_{k,D}^\Delta \xi|_{H^2(D)} \\ &\apprle & h_D^{-1}|\Pi_{k,D}^0\xi-\xi+\xi-\Pi_{k,D}^\Delta\xi|_{H^1(D)} +|\xi|_{H^2(D)} \\ &\apprle & |\xi|_{H^2(D)}. \end{eqnarray*}
The error estimate \eqref{pikd0h2_error} then follows from \eqref{pikd0h2}, the invariance $\Pi_{k,D}^0 q = q$ for $q\in\mathbb{P}_k$, and Lemma \ref{bramble}.
\end{proof}
\subsection{An Inverse Inequality}
\begin{lemma}\label{minimum-energy} The following inequality is valid:
\begin{equation} |v|_{H^2(D)} \leq |\xi|_{H^2(D)} \end{equation}
for any $v\in \mathcal{Q}^k(D)$ and $\xi\in H^2(D)$ such that $\xi=v$ on $\partial D$, $\frac{\partial \xi}{\partial n}=\frac{\partial v}{\partial n}$ on $\partial D$, and $\Pi_{k,D}^0(\xi -v) = 0.$
\end{lemma}
\begin{proof}
Since $\xi-v$ and $\frac{\partial(\xi-v)}{\partial n}$ vanish on $\partial D$ and $\Pi_{k,D}^0(\xi-v)=0$, by \eqref{pikd1} and \eqref{eq_bp1} we have
$$ ((v,\xi-v))_D = ((v,\xi))_D-|v|^2_{H^2(D)}=0. $$
Hence
$$ |\xi|^2_{H^2(D)} = |\xi-v|^2_{H^2(D)} +|v|^2_{H^2(D)}, $$
which implies
\begin{equation} |v|_{H^2(D)} \leq |\xi|_{H^2(D)}. \end{equation}
\end{proof}
Next, we consider the relation between $|v|_{H^2(D)}$ and $|||v|||_{k,D}$ for $v \in \mathcal{Q}^k(D)$.
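Lemma \ref{minimum-energy} has a finite-dimensional analogue that can be checked numerically: modelling the energy inner product $((\cdot,\cdot))_D$ by a symmetric positive definite matrix $A$ (an assumption made only for this sketch; the actual form is merely semi-definite), the $A$-orthogonal projection onto a subspace satisfies the same Pythagoras identity and hence the same minimum-energy bound.

```python
import numpy as np

# Finite-dimensional analogue of the minimum-energy lemma: if ((v, xi - v)) = 0,
# then |xi|^2 = |xi - v|^2 + |v|^2 in the energy norm, so |v| <= |xi|.
rng = np.random.default_rng(0)
n, m = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD stand-in for the energy inner product

xi = rng.standard_normal(n)          # stand-in for the H^2 function xi
V = rng.standard_normal((n, m))      # columns span the "virtual" subspace

# A-orthogonal projection of xi onto span(V): makes xi - v A-orthogonal to v
v = V @ np.linalg.solve(V.T @ A @ V, V.T @ A @ xi)

e = lambda w: float(w @ A @ w)       # squared energy norm
assert abs(e(xi) - (e(xi - v) + e(v))) < 1e-9   # Pythagoras identity
assert e(v) <= e(xi) + 1e-12                    # minimum-energy bound
```

The orthogonality $((v,\xi-v))_D=0$ is exactly what the boundary and moment constraints of the lemma provide; the matrix sketch only replaces the infinite-dimensional constraint set by an explicit subspace.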
\subsubsection{Construction of $w$}\label{case1}
The degrees of freedom of $v\in \mathcal{Q}^k(D)$ are defined in \cite{Brezzi13}, (4.7)--(4.11). For $k\geq 2$, we employ the triangulation $\mathcal{T}_D$ to define a piecewise polynomial $w$ that has the same boundary data as $v$. On each internal triangle, we employ a $\mathbb{P}_{r}$ macroelement, defined in \cite{Douglas79}, Section 1. For $k=2,3$ (see Figure \ref{fg-k2k3}), on each internal triangle the function is defined by the $\mathbb{P}_3$ macroelement of Figure \ref{fg-p3}. All degrees of freedom in the interior of $D$ are set to $0$.
\begin{figure}[H] \begin{center} \begin{tikzpicture}[scale = 1.2]
\coordinate (A) at (0,0); \coordinate (B) at (3,0); \coordinate (C) at (3.5,1.5); \coordinate (D) at (2,3); \coordinate (E) at (0,1);
\coordinate (O) at ($1/5*(A)+1/5*(B)+1/5*(C)+1/5*(D)+1/5*(E)$);
\draw (A) --(B) --(C) --(D) --(E) --(A);
\draw[style=dashed](O)--(A); \draw[style=dashed](O)--(B); \draw[style=dashed](O)--(C); \draw[style=dashed](O)--(D); \draw[style=dashed](O)--(E);
\fill [black] (A) circle (1pt); \draw [black] (A) circle (2.5pt);
\fill [black] (B) circle (1pt); \draw [black] (B) circle (2.5pt);
\fill [black] (C) circle (1pt); \draw [black] (C) circle (2.5pt);
\fill [black] (D) circle (1pt); \draw [black] (D) circle (2.5pt);
\fill [black] (E) circle (1pt); \draw [black] (E) circle (2.5pt);
\coordinate (A) at (0+5,0); \coordinate (B) at (3+5,0); \coordinate (C) at (3.5+5,1.5); \coordinate (D) at (2+5,3); \coordinate (E) at (0+5,1);
\coordinate (O) at ($1/5*(A)+1/5*(B)+1/5*(C)+1/5*(D)+1/5*(E)$);
\coordinate (AB) at ($(A)!1/2!(B)$); \coordinate (BC) at ($(B)!1/2!(C)$); \coordinate (CD) at ($(C)!1/2!(D)$); \coordinate (DE) at ($(D)!1/2!(E)$); \coordinate (EA) at ($(E)!1/2!(A)$);
\draw (A) --(B) --(C) --(D) --(E) --(A);
\draw[style=dashed](O)--(A); \draw[style=dashed](O)--(B); \draw[style=dashed](O)--(C); \draw[style=dashed](O)--(D); \draw[style=dashed](O)--(E);
\fill [black] (A) circle (1pt);
\draw [black] (A) circle (2.5pt); \fill [black] (B) circle (1pt); \draw [black] (B) circle (2.5pt); \fill [black] (C) circle (1pt); \draw [black] (C) circle (2.5pt); \fill [black] (D) circle (1pt); \draw [black] (D) circle (2.5pt); \fill [black] (E) circle (1pt); \draw [black] (E) circle (2.5pt); \draw ($(AB)$) -- ($(AB)!0.2cm!90:(A)$); \draw ($(AB)$) -- ($(AB)!0.2cm!-90:(A)$); \draw ($(BC)$) -- ($(BC)!0.2cm!90:(B)$); \draw ($(BC)$) -- ($(BC)!0.2cm!-90:(B)$); \draw ($(CD)$) -- ($(CD)!0.2cm!90:(C)$); \draw ($(CD)$) -- ($(CD)!0.2cm!-90:(C)$); \draw ($(DE)$) -- ($(DE)!0.2cm!90:(D)$); \draw ($(DE)$) -- ($(DE)!0.2cm!-90:(D)$); \draw ($(EA)$) -- ($(EA)!0.2cm!90:(E)$); \draw ($(EA)$)--($(EA)!0.2cm!-90:(E)$); \end{tikzpicture} \end{center}\caption{Local d.o.f. for the lowest-order element: $k = 2,\ (r,s,m)=(3,1,-2)$ (left), and next to the lowest element: $k = 3,\ (r,s,m)=(3,2,-1)$ (right).}\label{fg-k2k3} \end{figure} \begin{figure}[H] \begin{center} \begin{tikzpicture}[scale = 1.3] \coordinate (D) at (0,0); \coordinate (E) at (3,0); \coordinate (O) at (1.5,2.5); \coordinate (DE) at ($(D)!1/2!(E)$); \coordinate (OD) at ($1/2*(D)+1/2*(O)$); \coordinate (OE) at ($(O)!1/2!(E)$); \coordinate (center) at ($1/3*(D)+1/3*(O)+1/3*(E)$); \draw (D)--(E); \draw (E)--(O); \draw (O)--(D); % \draw(center)--(D); \draw(center)--(E); \draw(center)--(O); % \fill [black] (D) circle (1pt); \draw [black] (D) circle (2.5pt); \fill [black] (E) circle (1pt); \draw [black] (E) circle (2.5pt); \fill [black] (O) circle (1pt); \draw [black] (O) circle (2.5pt); \draw ($(OD)$) -- ($(OD)!0.2cm!90:(O)$); \draw ($(OD)$) -- ($(OD)!0.2cm!-90:(O)$); \draw ($(DE)$) -- ($(DE)!0.2cm!90:(D)$); \draw ($(DE)$) -- ($(DE)!0.2cm!-90:(D)$); \draw ($(OE)$) -- ($(OE)!0.2cm!90:(O)$); \draw ($(OE)$) -- ($(OE)!0.2cm!-90:(O)$); % \end{tikzpicture} \end{center}\caption{$\mathbb{P}_3$ macroelement}\label{fg-p3} \end{figure} From the definition of $w$, we have \begin{equation}\label{w-norm} \|w\|_{L_2(D)}^2 \approx 
h_D^2|w|_{H^2(D)}^2 \approx h_D\|w\|_{L_2(\partial D)}^2 +h_D^3\left\|\frac{\partial w}{\partial n}\right\|_{L_2(\partial D)}^2 = h_D\|v\|_{L_2(\partial D)}^2 +h_D^3\left\|\frac{\partial v}{\partial n}\right\|_{L_2(\partial D)}^2. \end{equation}
\begin{lemma}\label{inverse-inequality}
$$ |v|_{H^2(D)} \apprle h_D^{-2}|||v|||_{k,D},\ \forall v \in \mathcal{Q}^k(D). $$
\end{lemma}
\begin{proof}
Following \cite{Brenner17}, it suffices to prove the case $h_D = 1$. Let $\phi$ be a smooth function supported in the disc $\mathfrak{B}$ of radius $\rho$, such that
$$ \int_D\phi \ dx = 1. $$
By the equivalence of norms on finite-dimensional spaces, we have
$$ \|p\|_{L_2(D)}^2\apprle \int_{D} \phi p^2 \ dx, \ \forall p\in \mathbb{P}_{k}. $$
Let $w \in H^2 (D)$ be the piecewise polynomial constructed in Section \ref{case1}, and let $\xi = w + p\phi$ for $p\in \mathbb{P}_{k}$ such that
$$ \int_{D}(\xi-v)q\ dx = 0, \ \forall q\in \mathbb{P}_{k}, $$
or equivalently
\begin{equation}\label{pqphi} \int_{D}pq\phi \ dx = \int_{D}(v-w)q\ dx = \int_{D}(\Pi_{k,D}^0v -w)q \ dx, \ \forall q\in \mathbb{P}_{k}. \end{equation}
Taking $q=p$ in \eqref{pqphi} gives
$$ \|p\|_{L_2(D)}\apprle \|\Pi_{k,D}^0v -w\|_{L_2(D)}\apprle \|\Pi_{k,D}^0v\|_{L_2(D)}+\|w\|_{L_2(D)}, $$
and by Lemma \ref{leq-semi-norm},
\begin{eqnarray*} \|\Pi_{k,D}^0v\|_{L_2(D)}^2 &=&\|\Pi_{k-4,D}^0v\|_{L_2(D)}^2 +\|(\Pi_{k,D}^0 -\Pi_{k-4,D}^0)v\|_{L_2(D)}^2\\ &\apprle&\|\Pi_{k-4,D}^0v\|_{L_2(D)}^2 +\|\Pi_{k,D}^\Delta v\|_{L_2(D)}^2\\ &\apprle&\|\Pi_{k-4,D}^0v\|_{L_2(D)}^2 +\|v\|_{L_2(\partial D)}^2 +\left\|\frac{\partial v}{\partial n}\right\|_{L_2(\partial D)}^2. \end{eqnarray*}
From \eqref{w-norm}, we have
\begin{equation}\label{p2} \|p\|_{L_2(D)}^2 \apprle \|\Pi_{k-4,D}^0v\|_{L_2(D)}^2 +\|v\|_{L_2(\partial D)}^2 +\left\|\frac{\partial v}{\partial n}\right\|_{L_2(\partial D)}^2. \end{equation}
Also, by Lemma \ref{minimum-energy},
$$ |v|_{H^2(D)}\apprle |\xi|_{H^2(D)}.
$$
By \eqref{w-norm} and \eqref{p2},
$$ |\xi|_{H^2(D)}^2 \apprle |w|_{H^2(D)}^2+|p\phi|_{H^2(D)}^2 \apprle \|\Pi_{k-4,D}^0v\|_{L_2(D)}^2 +\|v\|_{L_2(\partial D)}^2 +\left\|\frac{\partial v}{\partial n}\right\|_{L_2(\partial D)}^2. $$
Hence, for all $v \in \mathcal{Q}^k(D)$, we have
$$ |v|_{H^2(D)}^2 \apprle \|\Pi_{k-4,D}^0v\|_{L_2(D)}^2 +\|v\|_{L_2(\partial D)}^2 +\left\|\frac{\partial v}{\partial n}\right\|_{L_2(\partial D)}^2, $$
and by Lemma \ref{new-equi} we arrive at the estimate.
\end{proof}
\begin{corollary}\label{corollary_2} For any $v\in \mathcal{Q}^k(D)$,
$$ \|v\|_{L_2(D)}+h_D|v|_{H^1(D)}+h^2_D|v|_{H^2(D)} \apprle |||v|||_{k,D}. $$
\end{corollary}
\begin{proof}
From Lemma \ref{leq-semi-norm}, Corollary \ref{corollary_1} and Lemma \ref{inverse-inequality}, we have
\begin{eqnarray*} \|v\|_{L_2(D)} &\apprle& \|v - \Pi_{k,D}^\Delta v\|_{L_2(D)}+\|\Pi_{k,D}^\Delta v\|_{L_2(D)} \apprle h_D^{2}|v|_{H^{2}(D)}+|||v|||_{k,D}, \\ h_D|v|_{H^{1}(D)} &\apprle& h_D|v - \Pi_{k,D}^\Delta v|_{H^{1}(D)}+h_D|\Pi_{k,D}^\Delta v|_{H^{1}(D)} \apprle h_D^{2}|v|_{H^{2}(D)}+|||v|||_{k,D}, \end{eqnarray*}
and the conclusion follows since $h_D^2|v|_{H^2(D)}\apprle |||v|||_{k,D}$ by Lemma \ref{inverse-inequality}.
\end{proof}
\subsection{Estimate of the Interpolation Operator}
The interpolation operator $I_{k,D} : H^3 (D) \rightarrow \mathcal{Q}^k(D)$ is defined by the condition that $\xi$ and $I_{k,D}\xi$ have the same value for each degree of freedom of $\mathcal{Q}^k(D)$. It is clear that
$$ I_{k,D}\xi=\xi,\quad \forall \xi\in \mathcal{Q}^k(D)\ {\rm or}\ \forall \xi\in \mathbb{P}_k(D).
$$
\begin{lemma}\label{interpolation_error} For all $\xi\in H^{l+1}(D)$ with $2\leq l\leq k$, the interpolation errors satisfy
\begin{eqnarray} \|\xi-I_{k,D}\xi\|_{L_2(D)} +\|\xi-\Pi_{k,D}^\Delta I_{k,D}\xi\|_{L_2(D)} &\apprle& h_D^{l+1}|\xi|_{H^{l+1}(D)},\label{interpolation_error_l2} \\ |\xi-I_{k,D}\xi|_{H^1(D)} +|\xi-\Pi_{k,D}^\Delta I_{k,D}\xi|_{H^1(D)} &\apprle& h_D^{l}|\xi|_{H^{l+1}(D)},\label{interpolation_error_h1} \\ |\xi-I_{k,D}\xi|_{H^2(D)} +|\xi-\Pi_{k,D}^\Delta I_{k,D}\xi|_{H^2(D)} &\apprle& h_D^{l-1}|\xi|_{H^{l+1}(D)}.\label{interpolation_error_h2} \end{eqnarray}
\end{lemma}
\begin{proof}
Suppose $h_D =1$. By the trace theorem, Lemma \ref{imbedding}, Lemma \ref{leq-semi-norm} and Corollary \ref{corollary_2}, we have
\begin{eqnarray*} \|I_{k,D}\xi\|_{L_2(D)} +\|\Pi_{k,D}^\Delta I_{k,D}\xi\|_{L_2(D)} &\apprle& |||I_{k,D}\xi|||_{k,D} \apprle \|\xi\|_{H^{l+1}(D)}, \\ |I_{k,D}\xi|_{H^1(D)} +|\Pi_{k,D}^\Delta I_{k,D}\xi|_{H^1(D)} &\apprle& |||I_{k,D}\xi|||_{k,D} \apprle \|\xi\|_{H^{l+1}(D)}, \\ |I_{k,D}\xi|_{H^2(D)} +|\Pi_{k,D}^\Delta I_{k,D}\xi|_{H^2(D)} &\apprle& |||I_{k,D}\xi|||_{k,D} \apprle \|\xi\|_{H^{l+1}(D)}. \end{eqnarray*}
The results then follow from Lemma \ref{bramble} and the invariance $I_{k,D}q = q$ for all $q\in \mathbb{P}_l$.
\end{proof}
\section{The Biharmonic Problem in Two Dimensions}
Let $\Omega$ be a bounded polygonal domain in $\mathbb{R}^2$ and let $f\in L_2(\Omega)$. The biharmonic problem reads
\begin{equation}\label{biharmonic-equation} \begin{cases} \ \Delta^2u &= f, \\ \ u|_{\partial \Omega} &= 0, \\ \ \left.\frac{\partial u}{\partial n}\right|_{\partial \Omega} &= 0. \end{cases} \end{equation}
The variational formulation of \eqref{biharmonic-equation} is to find $u\in H^2_0(\Omega)$ such that
$$ a(u,v) = (f,v),\ \forall v\in H^2_0(\Omega), $$
where
$$a(u,v) = ((u,v))_{\Omega}=\int_{\Omega}{\bf D}^2u\cdot{\bf D}^2v\ dx. $$
\begin{remark}
For $u,v\in H^2_0(\Omega)$, we also have $a(u,v) = \int_\Omega\Delta u\Delta v \ dx$.
However, for a subdomain $D$ of $\Omega$, the local forms $((u,v))_{D}$ and $\int_D\Delta u\Delta v \ dx$ are in general not equal.
\end{remark}
In the following sections, we use the virtual element method to solve \eqref{biharmonic-equation}.
\subsection{Virtual Element Spaces}
Let $\mathcal{T}_h$ be a conforming partition of $\Omega$ into polygonal subdomains, i.e., the intersection of two distinct subdomains is either empty, a common vertex, or a common edge. We assume that all the polygons $D\in \mathcal{T}_h$ satisfy the shape regularity assumptions of Section \ref{Shape-Regularity}. We take the virtual element space $\mathcal{Q}_h^k$ to be $\{v \in H_0^2(\Omega): v|_D \in \mathcal{Q}^k(D),\ \forall D \in \mathcal{T}_h \}$ and denote by $\mathcal{P}_{h}^k$ the space of (discontinuous) piecewise polynomials of degree $\leq k$ with respect to $\mathcal{T}_h$. The global operators are defined in terms of their local counterparts:
\begin{eqnarray} (\Pi_{k,h}^\Delta v)|_D &=& \Pi_{k,D}^\Delta (v|_D),\quad \forall v\in H^2(\Omega),\\ (\Pi_{k,h}^{0}v)|_D &=& \Pi_{k,D}^{0}(v|_D),\quad \forall v\in L_2(\Omega),\\ (I_{k,h}v)|_D &=& I_{k,D}(v|_D),\quad \forall v\in H^3(\Omega). \end{eqnarray}
The semi-norms on $H^2(\Omega)+\mathcal{P}_h^k$ are defined as
\begin{eqnarray} |v|_{1,h}^2 = \sum_{D\in \mathcal{T}_h}|v|_{H^1(D)}^2,\quad |v|_{2,h}^2 = \sum_{D\in \mathcal{T}_h}|v|_{H^2(D)}^2, \end{eqnarray}
so that $|v|_{2,h}=|v|_{H^2(\Omega)}$ for $v\in H^2(\Omega)$, and
$$ |v-\Pi_{k,h}^{\Delta}v|_{2,h} = \inf_{w\in \mathcal{P}_h^k}|v-w|_{2,h},\ \forall v\in H^2(\Omega)+\mathcal{P}_h^k.
$$
The local estimates (Corollary \ref{corollary_1}, \eqref{pikd0l2}-\eqref{pikd0h1}, Lemma \ref{pikd0-bound} and Lemma \ref{interpolation_error}) immediately imply the following global results, where $h = \max\limits_{D\in\mathcal{T}_h} h_D$.
\begin{corollary}\label{corollary_3} For all $\xi\in H^{l+1}(\Omega)$ with $2\leq l\leq k$, we have
\begin{eqnarray} \|\xi-I_{k,h}\xi\| +\|\xi-\Pi_{k,h}^\Delta I_{k,h}\xi\|+ \|\xi - \Pi_{k,h}^0\xi\|+\|\xi - \Pi_{k,h}^\Delta\xi\| &\apprle& h^{l+1}|\xi|_{H^{l+1}(\Omega)}, \\ |\xi-I_{k,h}\xi|_{1,h} +|\xi-\Pi_{k,h}^\Delta I_{k,h}\xi|_{1,h}+ |\xi - \Pi_{k,h}^0\xi|_{1,h}+|\xi - \Pi_{k,h}^\Delta\xi|_{1,h} &\apprle& h^{l}|\xi|_{H^{l+1}(\Omega)}, \\ |\xi-I_{k,h}\xi|_{2,h} +|\xi-\Pi_{k,h}^\Delta I_{k,h}\xi|_{2,h}+ |\xi - \Pi_{k,h}^0\xi|_{2,h}+|\xi - \Pi_{k,h}^\Delta\xi|_{2,h} &\apprle& h^{l-1}|\xi|_{H^{l+1}(\Omega)}, \end{eqnarray}
where $\|\cdot\|:= \|\cdot\|_{L_2(\Omega)}$.
\end{corollary}
\subsection{The Discrete Problem}
Our goal is to find $u_h\in \mathcal{Q}_h^k$ which satisfies
\begin{equation}\label{discrete-problem} a_h(u_h,v_h) = (f,\Xi_h v_h), \quad \forall v_h\in \mathcal{Q}_h^k, \end{equation}
where $\Xi_h$ is an operator from $\mathcal{Q}_h^k$ to $\mathcal{P}_h^k$, and
\begin{eqnarray} a_h(w,v) &=& \sum_{D\in\mathcal{T}_h} \left( a^D(\Pi_{k,D}^\Delta w,\Pi_{k,D}^\Delta v) +S^D(w-\Pi_{k,D}^\Delta w,v-\Pi_{k,D}^\Delta v) \right),\label{discrete1} \\ a^D(w,v)&=&\int_D{\bf D}^2w\cdot{\bf D}^2v\ dx,\label{discrete2} \\ S^D(w,v)&=& h_D^{-4}(\Pi^0_{k-4,D}w,\Pi^0_{k-4,D}v)_D + h_D^{-3}\sum_{e\in\mathcal{E}_D} (\Pi^0_{r,e}w,\Pi^0_{r,e}v)_e \nonumber \\ &&\quad + h_D^{-1}\sum_{e\in\mathcal{E}_D,i=1,2} \left( \Pi^0_{r-1,e}\frac{\partial w}{\partial x_i}, \Pi^0_{r-1,e}\frac{\partial v}{\partial x_i} \right)_e,\label{discrete3} \end{eqnarray}
for $w,v\in H^2(\Omega)$.
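In matrix form, the local bilinear form \eqref{discrete1} is a consistency term acting through the projection plus a stabilization term acting on its complement, $A_D = P^{T} A_{\pi} P + (I-P)^{T} S (I-P)$, where $P$ represents $\Pi_{k,D}^\Delta$ in the degree-of-freedom basis. A minimal sketch of this algebraic structure, with random stand-in matrices (none of them the actual VEM matrices):

```python
import numpy as np

# Schematic local VEM matrix: consistency on the projected part plus
# stabilization on the complement. P is an (orthogonal) projection matrix,
# A_pi a PSD "polynomial stiffness" stand-in, S an SPD stabilization stand-in.
rng = np.random.default_rng(1)
n, m = 6, 3
V = rng.standard_normal((n, m))
P = V @ np.linalg.solve(V.T @ V, V.T)      # projection onto the "polynomial" part
Ms = rng.standard_normal((n, n))
S = Ms @ Ms.T + np.eye(n)                  # SPD stabilization stand-in for S^D
Mc = rng.standard_normal((n, n))
A_pi = Mc @ Mc.T                           # PSD consistency stand-in for a^D

I = np.eye(n)
A_D = P.T @ A_pi @ P + (I - P).T @ S @ (I - P)

assert np.allclose(A_D, A_D.T)                   # A_D is symmetric
assert np.linalg.eigvalsh(A_D).min() > -1e-10    # A_D is positive semi-definite
```

The stabilization term vanishes on the projected part and is coercive on its complement, which is the algebraic counterpart of the equivalence $a_h(v,v)\approx a(v,v)$ established below.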
If $w,v\in \mathcal{Q}^k(D),$ we have
$$ \Pi^0_{r,e}w = w|_e,\quad \Pi^0_{r-1,e}\frac{\partial w}{\partial x_i} = \left.\frac{\partial w}{\partial x_i}\right|_e, $$
so that $S^D(w,v)$ can be computed explicitly from the degrees of freedom of $\mathcal{Q}^k(D)$.
\subsubsection{Other Choices of $S^D(\cdot,\cdot)$}
The resulting virtual element systems are equivalent (up to uniform constants) provided the stabilization bilinear form satisfies
$$ S^D(v,v) \approx h^{-4}_D|||v|||_{k,D}^2,\quad \forall v\in {\rm Ker}\Pi_{k,D}^\Delta . $$
By Lemma \ref{equivalence_b} and Remark \ref{remark_t}, we can take
\begin{eqnarray} S^D(w,v)&=& h_D^{-3} (w,v)_{\partial D} + h_D^{-1} \left( \frac{\partial w}{\partial n}, \frac{\partial v}{\partial n} \right)_{\partial D},\\ S^D(w,v)&=& h_D^{-1} \left( \frac{\partial w}{\partial t}, \frac{\partial v}{\partial t} \right)_{\partial D} + h_D^{-1} \left( \frac{\partial w}{\partial n}, \frac{\partial v}{\partial n} \right)_{\partial D}, \end{eqnarray}
for $w, v\in {\rm Ker}\Pi_{k,D}^\Delta $.
\subsection{Well-posedness of the Discrete Problem}
The well-posedness follows from the lemmas below.
\begin{lemma}\label{S^Dbound2} For any $v\in H^2(D)$, we have
$$ S^D(v-\Pi_{k,D}^\Delta v,v-\Pi_{k,D}^\Delta v)^{1/2}\apprle |v-\Pi_{k,D}^\Delta v|_{H^2(D)}. $$
\end{lemma}
\begin{proof}
Let $w = v-\Pi_{k,D}^\Delta v$. By \eqref{pikd1}-\eqref{pikd3}, we may take $\bar{w} = 0$ in Lemma \ref{friedrichs}, so that
$$ S^D(w,w)^{1/2} = h_D^{-2}|||w-\bar{w}|||_{k,D}\apprle |w|_{H^2(D)}. $$
\end{proof}
\begin{lemma}\label{S^D-equivalent} For any $v\in {\rm Ker}\Pi_{k,D}^\Delta $, we have $S^D(v,v)\approx |v|^2_{H^2(D)}.$
\end{lemma}
\begin{proof}
For any $v\in {\rm Ker}\Pi_{k,D}^\Delta$ we have $v\in H^2(D)$ and
$$ v = v-\Pi_{k,D}^\Delta v, $$
so the equivalence follows from Lemma \ref{inverse-inequality} and Lemma \ref{S^Dbound2}.
\end{proof}
\begin{remark}\label{S^Dbound}
By Lemma \ref{S^Dbound2}, for any $v,w\in \mathcal{Q}_h^k$, we have
$$ S^D(v-\Pi_{k,D}^\Delta v,w-\Pi_{k,D}^\Delta w)\apprle |v-\Pi_{k,D}^\Delta v|_{H^2(D)} |w-\Pi_{k,D}^\Delta w|_{H^2(D)}. $$
\end{remark}
Then by Lemma \ref{S^D-equivalent}, as in \cite{Brenner17}, we have
\begin{equation}\label{a(vv)} a_h(v,v) \approx a(v,v),\quad \forall v\in \mathcal{Q}_h^k, \end{equation}
so that problem \eqref{discrete-problem} is uniquely solvable.
\subsection{Choice of $\Xi_h$}
Here we choose $\Xi_h$ as
\begin{equation} \Xi_h = \begin{cases} \Pi_{k,h}^0 & \quad \text{if } 2\leq k \leq 3,\\ \Pi_{k-1,h}^0 & \quad \text{if } k\geq 4. \end{cases} \end{equation}
The following results will be used in the error analysis in the $H^2(\Omega)$ norm. Define $H^0(\Omega) := L_2(\Omega)$, and define the indices $l$ and $m$ in terms of $k$ by
\begin{equation}\label{index_klm} l=0,\ m=2 \quad {\rm for}\ 2\leq k\leq 3, \qquad l = k-3,\ m=2+l \quad {\rm for}\ k\geq 4. \end{equation}
\begin{lemma}\label{klm-h2} With \eqref{index_klm}, we have
\begin{equation} (f,w-\Xi_h w) \apprle h^{2+l}|f|_{H^l(\Omega)} |w|_{H^2(\Omega)}, \quad \forall f\in H^{l}(\Omega),\ w\in \mathcal{Q}_h^k.
\end{equation}
\end{lemma}
\begin{proof}
For $k\leq 3$ and $f\in L_2(\Omega)$, we set $\Pi_{k-4,h}^0 f = 0$, so that with Corollary \ref{corollary_3} we have
\begin{eqnarray*} (f,w-\Xi_h w) &=& (f-\Pi_{k-4,h}^0 f,w-\Xi_h w)\\ &=& (f,w-\Pi_{k,h}^0 w)\\ &\leq& \|f\|_{L_2(\Omega)} \|w-\Pi_{1,h}^0 w\|_{L_2(\Omega)}\\ &\apprle& h^2\|f\|_{L_2(\Omega)} |w|_{H^2(\Omega)}. \end{eqnarray*}
For $k\geq 4$ and $f\in H^l(\Omega)$ with $l = k-3$, with Corollary \ref{corollary_3} we have
\begin{eqnarray*} (f,w-\Xi_h w) &=& (f-\Pi_{k-4,h}^0 f,w-\Xi_h w)\\ &\leq& \|f-\Pi_{k-4,h}^0 f\|_{L_2(\Omega)} \|w-\Pi_{1,h}^0 w\|_{L_2(\Omega)}\\ &\apprle& h^{2+l}|f|_{H^l(\Omega)} |w|_{H^2(\Omega)}. \end{eqnarray*}
\end{proof}
\begin{lemma}\label{klm-h3} With \eqref{index_klm}, we have
\begin{equation}\label{klm-h3-1} (f,I_{k,h}w-\Xi_h I_{k,h}w) \apprle h^{3+l}|f|_{H^l(\Omega)} |w|_{H^3(\Omega)}, \quad \forall f\in H^{l}(\Omega),\ w\in H^3(\Omega),\ k\geq 2, \end{equation}
\begin{equation}\label{fik=3h} (f,I_{k,h}w-\Xi_h I_{k,h}w) \apprle h^{3+s+l}|f|_{H^l(\Omega)} \|w\|_{H^{3+s}(\Omega)}, \quad \forall f\in H^{l}(\Omega),\ w\in H^{3+s}(\Omega),\ k\geq 3, \ 0<s\leq 1. \end{equation}
\end{lemma}
\begin{proof}
We follow the proof of Lemma \ref{klm-h2}, using Lemma \ref{pikd0-bound} and Lemma \ref{interpolation_error}. For $k\geq 2$ and $w\in H^3(\Omega)$, we have
\begin{eqnarray*} (f,I_{k,h} w -\Xi_h I_{k,h} w) &=& (f-\Pi_{k-4,h}^0f,I_{k,h} w -\Xi_h I_{k,h} w)\\ &=& (f-\Pi_{k-4,h}^0f,I_{k,h} w -w + w-\Xi_h w + \Xi_h (w-I_{k,h} w) ), \end{eqnarray*}
and with Corollary \ref{corollary_3} we obtain \eqref{klm-h3-1}; the case $k\geq 3$, $w\in H^{3+s}(\Omega)$ is analogous.
\end{proof}
\subsection{Error Estimate in the $|\cdot|_{H^2(\Omega)}$ Norm}
First, the error estimate for $u-u_h$ in the $|\cdot|_{H^2(\Omega)}$ norm is given in Theorem \ref{theorem1}.
\begin{theorem}\label{theorem1} With \eqref{index_klm}, we have
$$ |u-u_h|_{H^2(\Omega)} \apprle \inf_{v_h\in\mathcal{Q}_h^k}|u-v_h|_{H^2(\Omega)} + \inf_{p\in\mathcal{P}_h^k}|u-p|_{2,h} + h^m|f|_{H^l(\Omega)}. $$
If, in addition, $u\in H^{k+1}(\Omega)$, then
$$ |u-u_h|_{H^2(\Omega)} \apprle h^{k-1}(|u|_{H^{k+1}(\Omega)}+ |f|_{H^{l}(\Omega)}). $$
\end{theorem}
\begin{proof}
As in \cite{Brenner17}, for any given $v_h\in \mathcal{Q}_h^k$, from \eqref{a(vv)} we have
$$ |u_h-v_h|_{H^2(\Omega)}\apprle \sup_{0\neq w_h\in\mathcal{Q}_h^k} \frac{a_h(v_h-u_h,w_h)} {|w_h|_{H^2(\Omega)}}, $$
and by \eqref{discrete-problem},
$$ a_h(v_h-u_h,w_h) = a_h(v_h,w_h)-(f,\Xi_h w_h). $$
Then from \eqref{discrete1}-\eqref{discrete3}, by \eqref{h2_error} and \eqref{interpolation_error_h2}, we have
\begin{eqnarray*} a_h(v_h-u_h,w_h) &=& \sum_{D\in\mathcal{T}_h} a^D(\Pi_{k,D}^\Delta (v_h -u)+(\Pi_{k,D}^\Delta u-u),w_h) + (f,w_h-\Xi_h w_h)\\ &&+ \sum_{D\in\mathcal{T}_h} S^D( (I-\Pi_{k,D}^\Delta )(v_h-\Pi_{k,D}^\Delta u),(I-\Pi_{k,D}^\Delta )w_h ), \end{eqnarray*}
and the estimate follows from Lemma \ref{klm-h2}, Remark \ref{S^Dbound} (or Lemma \ref{S^Dbound2}) and Corollary \ref{corollary_3}.
\end{proof}
\subsection{Error Estimates in the $|\cdot|_{H^1(\Omega)}$ and $\|\cdot\|_{L_2(\Omega)}$ Norms}
We now additionally suppose that $\Omega$ is convex, and start with a consistency result.
\begin{lemma} Suppose $u\in H^{k+1}(\Omega)$. Then
\begin{equation}\label{h166} a(u-u_h,I_{k,h}\xi)\apprle h^{k}(|u|_{H^{k+1}(\Omega)}+ |f|_{H^{l}(\Omega)})|\xi|_{H^3(\Omega)} ,\quad \forall \xi\in H^3(\Omega)\cap H^2_0(\Omega),\ k\geq 2, \end{equation}
\begin{equation}\label{l267} a(u-u_h,I_{k,h}\xi)\apprle h^{k+s}(|u|_{H^{k+1}(\Omega)}+ |f|_{H^{l}(\Omega)})\|\xi\|_{H^{3+s}(\Omega)} ,\quad \forall \xi\in H^{3+s}(\Omega)\cap H^2_0(\Omega),\ k\geq 3,\ 0<s\leq 1, \end{equation}
where $l$ is defined in \eqref{index_klm}.
\end{lemma}
\begin{proof}
As in \cite{Brenner17}, we have
\begin{eqnarray*} a(u-u_h,I_{k,h}\xi) &=& \sum_{D\in\mathcal{T}_h} a^D(u-u_h, I_{k,D}\xi - \Pi_{k,D}^\Delta I_{k,D}\xi) + \sum_{D\in\mathcal{T}_h} a^D(\Pi_{k,D}^\Delta u-u, I_{k,D}\xi - \Pi_{k,D}^\Delta I_{k,D}\xi)\\ &&\quad + (f,I_{k,h}\xi-\Xi_h I_{k,h}\xi) + \sum_{D\in\mathcal{T}_h} S^D( (I-\Pi_{k,D}^\Delta )u_h,(I-\Pi_{k,D}^\Delta )I_{k,D}\xi )\\ &\apprle& \left( |u-u_h|_{H^2(\Omega)}+|u-\Pi_{k,h}^\Delta u|_{2,h} \right) |I_{k,h}\xi - \Pi_{k,h}^\Delta I_{k,h}\xi|_{2,h}+(f,I_{k,h}\xi-\Xi_h I_{k,h}\xi). \end{eqnarray*}
Then by Lemma \ref{klm-h3}, Corollary \ref{corollary_3} and Theorem \ref{theorem1}, we get the estimate.
\end{proof}
From the regularity results for \eqref{biharmonic-equation} (see \cite{Grisvard92}), we have
\begin{equation}\label{regularity-h1} \|u\|_{H^3(\Omega)} \apprle \|f\|_{H^{-1}(\Omega)},\quad \forall f\in {H^{-1}(\Omega)}, \end{equation}
\begin{equation}\label{regularity-l2} \|u\|_{H^{3+s}(\Omega)} \apprle \|f\|_{L_2(\Omega)},\quad \forall f\in {L_2(\Omega)}, \ 0<s\leq 1. \end{equation}
\begin{theorem}\label{theorem2} Suppose $u\in H^{k+1}(\Omega)$. Then
\begin{equation} |u-u_h|_{H^1(\Omega)}\apprle h^{k}(|u|_{H^{k+1}(\Omega)}+ |f|_{H^{l}(\Omega)}), \end{equation}
where $l$ is defined in \eqref{index_klm}.
\end{theorem}
\begin{proof}
We use a duality argument together with \eqref{h166}. Let $g=-\Delta(u-u_h)$ and let $\phi\in H_0^2(\Omega)$ be the solution of \eqref{biharmonic-equation} with right-hand side $g$. Then
\begin{eqnarray*} |u-u_h|^2_{H^1(\Omega)} &=&(\Delta^2\phi,u-u_h)\\ &=&a(u-u_h,\phi-I_{k,h}\phi)+a(u-u_h,I_{k,h}\phi)\\ &\apprle& h^{k}(|u|_{H^{k+1}(\Omega)}+ |f|_{H^{l}(\Omega)})|\phi|_{H^3(\Omega)}. \end{eqnarray*}
By \eqref{regularity-h1}, we have $|\phi|_{H^3(\Omega)}\apprle \|g\|_{H^{-1}(\Omega)}\apprle |u-u_h|_{H^1(\Omega)}$, and the result follows.
\end{proof}
\begin{theorem}\label{theorem3} Suppose $u\in H^{k+1}(\Omega)$. Then
\begin{equation}\label{l2k=2} \|u-u_h\|_{L_2(\Omega)}\apprle h^{2}(|u|_{H^{3}(\Omega)}+ \|f\|_{L_2(\Omega)}), \ k=2, \end{equation}
\begin{equation} \|u-u_h\|_{L_2(\Omega)}\apprle h^{k+s}(|u|_{H^{k+1}(\Omega)}+ |f|_{H^{l}(\Omega)}),\ k\geq 3, \ 0<s\leq 1, \end{equation}
where $l$ is defined in \eqref{index_klm}.
\end{theorem}
\begin{proof}
For $k=2$, \eqref{l2k=2} follows from Theorem \ref{theorem2} and the Poincar\'{e} inequality. For $k\geq 3$, we use a duality argument together with \eqref{l267}. Let $g=u-u_h$ and let $\phi\in H_0^2(\Omega)$ be the solution of \eqref{biharmonic-equation} with right-hand side $g$. Then
\begin{eqnarray*} \|u-u_h\|^2_{L_2(\Omega)} &=&(\Delta^2\phi,u-u_h)\\ &=&a(u-u_h,\phi-I_{k,h}\phi)+a(u-u_h,I_{k,h}\phi)\\ &\apprle& h^{k+s}(|u|_{H^{k+1}(\Omega)}+ |f|_{H^{l}(\Omega)})\|\phi\|_{H^{3+s}(\Omega)}. \end{eqnarray*}
By \eqref{regularity-l2}, we have $\|\phi\|_{H^{3+s}(\Omega)}\apprle \|u-u_h\|_{L_2(\Omega)}$, and the result follows.
\end{proof}
\subsection{Error Estimates for $\Pi_{k,h}^\Delta u_h$}
We also have error estimates for the computable approximation $\Pi_{k,h}^\Delta u_h$.
\begin{corollary} Suppose $u\in H^{k+1}(\Omega)$. Then
$$ |u-\Pi_{k,h}^\Delta u_h|_{2,h} \apprle h^{k-1}(|u|_{H^{k+1}(\Omega)}+ |f|_{H^{l}(\Omega)}). $$
\end{corollary}
\begin{proof}
The estimate follows from \eqref{obvious_ineq}, Theorem \ref{theorem1}, Corollary \ref{corollary_3} and
$$ |u-\Pi_{k,h}^\Delta u|_{2,h}\leq \inf_{p\in\mathcal{P}_h^k}|u-p|_{2,h}, \ \forall u\in H^2(\Omega). $$
\end{proof}
\begin{corollary}\label{coh1} Suppose $u\in H^{k+1}(\Omega)$. Then
$$ |u-\Pi_{k,h}^\Delta u_h|_{1,h} \apprle h^{k}(|u|_{H^{k+1}(\Omega)}+ |f|_{H^{l}(\Omega)}), $$
where $l$ is defined in \eqref{index_klm}.
\end{corollary} \begin{proof} We use Corollary \ref{corollary_1}, Corollary \ref{corollary_3}, Theorem \ref{theorem2}, Lemma \ref{leq-semi-norm}, and the triangle inequality $$ |u-\Pi_{k,h}^\Delta u_h|_{1,h}\leq |u-\Pi_{k,h}^\Delta u|_{1,h}+|\Pi_{k,h}^\Delta u-\Pi_{k,h}^\Delta u_h|_{1,h}. $$ Let $\xi = u-u_h$ and let $\bar{\xi}$ be defined as in Lemma \ref{friedrichs}. The second term is estimated as $$ h_D|\Pi_{k,D}^\Delta \xi|_{H^1(D)}= h_D|\Pi_{k,D}^\Delta (\xi-\bar{\xi})|_{H^1(D)}\apprle |||\xi-\bar{\xi}|||_{k,D}\apprle h_D^2|\xi|_{H^2(D)}, $$ so that $$ |\Pi_{k,D}^\Delta (u-u_h)|_{H^1(D)}^2\apprle h^2_D|u-u_h|^2_{H^2(D)}. $$ Summing over all elements then yields the estimate. \end{proof} \begin{corollary} Suppose $u\in H^{k+1}(\Omega)$. Then \begin{equation}\label{hl2k=2} \|u-\Pi_{k,h}^\Delta u_h\|_{L_2(\Omega)}\apprle h^{2}(|u|_{H^{3}(\Omega)}+ \|f\|_{L_2(\Omega)}), \ k=2, \end{equation} \begin{equation} \|u-\Pi_{k,h}^\Delta u_h\|_{L_2(\Omega)}\apprle h^{k+s}(|u|_{H^{k+1}(\Omega)}+ |f|_{H^{l}(\Omega)}),\ k\geq 3, \ 0<s\leq 1, \end{equation} where $l$ is defined in \eqref{index_klm}. \end{corollary} \begin{proof} We use Corollary \ref{corollary_1}, Corollary \ref{corollary_3}, Theorem \ref{theorem2}, Theorem \ref{theorem3}, Lemma \ref{leq-semi-norm}, and the triangle inequality $$ \|u-\Pi_{k,h}^\Delta u_h\|_{L_2(\Omega)}\leq \|u-\Pi_{k,h}^\Delta u\|_{L_2(\Omega)}+\|\Pi_{k,h}^\Delta u-\Pi_{k,h}^\Delta u_h\|_{L_2(\Omega)}. $$ From Lemma \ref{friedrichs}, the second term is estimated as $$ \|\Pi_{k,D}^\Delta (u-u_h)\|_{L_2(D)}^2 \apprle \|u-u_h\|_{L_2(D)}^2+ h_D^2|u-u_h|_{H^1(D)}^2 +h_D^4|u-u_h|_{H^2(D)}^2. $$ Summing over all elements then yields the estimate. \end{proof} \subsection{Error Estimates for $\Pi_{k,h}^0u_h$} Since $\Pi_{k,h}^0u_h$ can be computed explicitly, we can also obtain similar error estimates between $u$ and $\Pi_{k,h}^0u_h$. \begin{corollary}\label{coh2pik0} Suppose $u\in H^{k+1}(\Omega)$. Then $$ |u-\Pi_{k,h}^0 u_h|_{2,h} \apprle h^{k-1}(|u|_{H^{k+1}(\Omega)}+ |f|_{H^{l}(\Omega)}).
$$ \end{corollary} \begin{proof} The estimate follows from Lemma \ref{pikd0-bound}, Theorem \ref{theorem1}, and the triangle inequality $$ |u-\Pi_{k,h}^0 u_h|_{2,h}\leq |u-\Pi_{k,h}^0 u|_{2,h} +|\Pi_{k,h}^0 (u-u_h)|_{2,h}, \ \forall u\in H^2(\Omega). $$ \end{proof} \begin{corollary}\label{coh1-pik0} Suppose $u\in H^{k+1}(\Omega)$. Then $$ |u-\Pi_{k,h}^0 u_h|_{1,h} \apprle h^{k}(|u|_{H^{k+1}(\Omega)}+ |f|_{H^{l}(\Omega)}), $$ where $l$ is defined in \eqref{index_klm}. \end{corollary} \begin{proof} We use \eqref{pikd0l2}, \eqref{pikd0h1}, Theorem \ref{theorem2}, and the triangle inequality $$ |u-\Pi_{k,h}^0 u_h|_{1,h}\leq |u-\Pi_{k,h}^0 u|_{1,h}+|\Pi_{k,h}^0 (u-u_h)|_{1,h}. $$ For the second term, we have $$ |\Pi_{k,D}^0 (u-u_h)|_{H^1(D)}^2\apprle |u-u_h|^2_{H^1(D)}, $$ and summing over all elements yields the estimate. \end{proof} \begin{corollary}\label{copik0l2} Suppose $u\in H^{k+1}(\Omega)$. Then \begin{equation}\label{hl2k=2-pik0} \|u-\Pi_{k,h}^0 u_h\|_{L_2(\Omega)}\apprle h^{2}(|u|_{H^{3}(\Omega)}+ \|f\|_{L_2(\Omega)}), \ k=2, \end{equation} \begin{equation} \|u-\Pi_{k,h}^0 u_h\|_{L_2(\Omega)}\apprle h^{k+s}(|u|_{H^{k+1}(\Omega)}+ |f|_{H^{l}(\Omega)}),\ k\geq 3, \ 0<s\leq 1, \end{equation} where $l$ is defined in \eqref{index_klm}. \end{corollary} \begin{proof} We use Theorem \ref{theorem3} and the triangle inequality $$ \|u-\Pi_{k,h}^0 u_h\|_{L_2(\Omega)}\leq \|u-\Pi_{k,h}^0 u\|_{L_2(\Omega)}+\|\Pi_{k,h}^0 (u-u_h)\|_{L_2(\Omega)}. $$ The second term is estimated as $$ \|\Pi_{k,h}^0 (u-u_h)\|_{L_2(\Omega)} \apprle \|u-u_h\|_{L_2(\Omega)}, $$ so the results are obtained. \end{proof} \section{Conclusion} We have extended the work of \cite{Brenner17} to fourth order problems in two dimensions. Similar basic estimates for the local projections $\Pi_{k,D}^\Delta$, $\Pi_{k,D}^0$, $I_{k,D}$ and an improved error analysis of the modified virtual element method for the biharmonic equation are obtained. The computable piecewise polynomials $\Pi_{k,h}^\Delta u_h$ and $\Pi_{k,h}^0 u_h$ are more convenient to use in practice.
We can replace \eqref{pikd2} by \begin{equation}\label{pikd4} \int_D \nabla \Pi_{k,D}^\Delta \xi\ dx = \int_D \nabla \xi\ dx. \end{equation} To compute the right-hand side of \eqref{pikd4}, we use the divergence theorem: \begin{equation} \int_D\frac{\partial \xi }{\partial x_i}\ dx = \int_{\partial D}{\xi }n_i\ ds. \end{equation} For $k\geq 4$, we can replace \eqref{pikd3} by \begin{equation}\label{pikd5} \int_{D} \Pi_{k,D}^\Delta \xi\ dx = \int_{D} \xi\ dx. \end{equation} These replacements yield the same estimates for the projections and the same error analysis. There is potential to extend this work to fourth order problems in three dimensions.
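The boundary-integral identity above is what makes \eqref{pikd4} computable from the degrees of freedom on $\partial D$. As a minimal sanity check (our own illustration, not part of the paper), the following Python sketch verifies $\int_D \partial\xi/\partial x_1\, dx = \int_{\partial D}\xi n_1\, ds$ exactly for a polynomial $\xi$ on the unit square:

```python
from fractions import Fraction

# A polynomial xi = sum c * x^p * y^q on D = [0,1]^2, stored as {(p, q): c};
# here xi = x^2 y^3 + 2 x y (an arbitrary test choice).
xi = {(2, 3): Fraction(1), (1, 1): Fraction(2)}

def volume_integral_dx(poly):
    """Exact integral of d(poly)/dx over the unit square D."""
    total = Fraction(0)
    for (p, q), c in poly.items():
        if p > 0:
            # d/dx of c x^p y^q is c p x^(p-1) y^q; integrating over D
            # gives c * p * (1/p) * (1/(q+1)) = c/(q+1)
            total += c * Fraction(1, q + 1)
    return total

def boundary_integral_nx(poly):
    """Exact integral of poly * n_x over the boundary of D; n_x vanishes
    on the horizontal edges and equals +1 at x = 1, -1 at x = 0."""
    total = Fraction(0)
    for (p, q), c in poly.items():
        total += c * Fraction(1, q + 1)      # edge x = 1 (x^p = 1)
        if p == 0:
            total -= c * Fraction(1, q + 1)  # edge x = 0 (x^p = 1 only if p = 0)
    return total

assert volume_integral_dx(xi) == boundary_integral_nx(xi)
print(volume_integral_dx(xi))  # 5/4
```

Exact rational arithmetic is used so the two sides agree identically, not just to rounding error.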
https://arxiv.org/abs/1805.00918
Some Estimates of Virtual Element Methods for Fourth Order Problems
In this paper, we employ the techniques developed for second order operators to obtain the new estimates of Virtual Element Method for fourth order operators. The analysis is based on elements with proper shape regularity. Estimates for projection and interpolation operators are derived. Also, the biharmonic problem is solved by Virtual Element Method, optimal error estimates are obtained. Our choice of the discrete form for the right hand side function relaxes the regularity requirement in previous work and the error estimates between exact solutions and the computable numerical solutions are provided.
https://arxiv.org/abs/2003.10015
General Optimal Polynomial Approximants, Stabilization, and Projections of Unity
For various Hilbert spaces of analytic functions on the unit disk, we characterize when a function $f$ has optimal polynomial approximants given by truncations of a single power series. We also introduce a generalized notion of optimal approximant and use this to explicitly compute orthogonal projections of 1 onto certain shift invariant subspaces.
\section{Background, Introduction, and Notation} Throughout this paper $\mathcal{H}$ will be a reproducing kernel Hilbert space of analytic functions on the unit disk $\mathbb{D}$. We will denote the reproducing kernel for $\mathcal{H}$ by $k_\lambda(z) = k(z,\lambda)$ and the normalized reproducing kernel by $\hat{k}_\lambda = k_\lambda/\|k_\lambda\|_\mathcal{H}$. That is, a priori, for $\lambda \in \mathbb{D}$, we have $f(\lambda) = \langle f, k_\lambda \rangle_\mathcal{H}$. Further, we will assume that $\mathcal{H}$ satisfies the following: \begin{enumerate} \item The polynomials $\mathcal{P}$ are dense in $\mathcal{H}$. \item The forward shift $S$, mapping $f(z) \mapsto zf(z)$, is a bounded operator on $\mathcal{H}$. \end{enumerate} When $V \subseteq \mathcal{H}$ is a closed subspace, we will denote by $\Pi_V : \mathcal{H} \to V$ the orthogonal projection from $\mathcal{H}$ onto $V$. We will denote by $\mathcal{P}_n$ the set of complex polynomials of degree less than or equal to $n$ and, for any $f \in \mathcal{H}$, define $f\mathcal{P}_n := \{ pf : p \in \mathcal{P}_n\}$. Note that $f\mathcal{P}_n$ is a finite dimensional closed subspace of $\mathcal{H}$. When $f$ is fixed, we will denote by $\Pi_n : \mathcal{H} \to f\mathcal{P}_n$ the orthogonal projection onto $f\mathcal{P}_n$. \subsection{Cyclicity and Shift Invariant Subspaces} The results to come are born from the study of shift invariant subspaces and cyclic functions. We say a subspace $V \subseteq \mathcal{H}$ is shift invariant if $SV \subseteq V$. We say a function $f \in \mathcal{H}$ is \textit{cyclic} if $$[f] := \overline{\text{span}\{z^nf : n \ge 0\}}^{\ \mathcal{H}}$$ is equal to $\mathcal{H}$ itself. Note that $[f]$ is a (possibly trivial) shift invariant subspace and is the smallest closed shift invariant subspace of $\mathcal{H}$ containing $f$. It is also easily seen that if $f$ is cyclic, then $f$ cannot vanish in $\mathbb{D}$.
In \cite{brown1984cyclic} it was pointed out that $f \in \mathcal{H}$ is cyclic if and only if for any cyclic function $g \in \mathcal{H}$, there exist polynomials $(p_n)$ such that $\| p_nf - g \|_{\mathcal{H}} \to 0$. From this equivalence, and taking $g = 1$ in spaces where $1 = k_0$, the study of \textit{optimal polynomial approximants} has arisen. The optimality referred to here is with respect to the distance between $f \mathcal{P}_n$ and 1; the element of $f\mathcal{P}_n$ minimizing this distance will be denoted $p_n^*f$ (details to come in Section \ref{genopa}). The work in \cite{fricain2014cyclicity} can be considered a jumping-off point for optimal approximants; there the authors study the optimal approximants of the function $1-z$ to characterize the cyclicity of holomorphic functions on the closed unit disk. In \cite{beneteau2019boundary}, the authors compute Taylor coefficients of $1 - p_n^*f$ in weighted Hardy spaces (discussed below) when $f$ is a polynomial and prove results about the convergence of $( 1 - p_n^*f )$. In \cite{beneteau2016orthogonal}, the authors dive into orthogonal polynomials and reproducing kernels to get lower bounds on the moduli of zeros of optimal approximants in Dirichlet-type spaces. Then in \cite{beneteau2016zeros}, the authors study a larger class of reproducing kernel Hilbert spaces and give results on accumulation points and lower bounds on the moduli of zeros of optimal approximants. Following these themes, we would like to develop some theory for different choices of $g$ (cyclic or not) in considering $\|pf - g \|_\mathcal{H}$, and then explore the relationship between optimal approximants and generalized \textit{inner} functions (first studied in \cite{beneteau2017remarks}). This will then yield some observations which allow us to explicitly compute $\Pi_{[f]}(1)$ when $f$ is a polynomial. Let us first outline some spaces where assumptions (1) and (2) from above hold.
\subsection{Weighted Hardy Spaces}\label{whs} A well-studied family of spaces satisfying these properties is the family of so-called weighted Hardy spaces. Letting $w := \{w_k\}_{k \ge 0}$ be a sequence of positive real numbers with $\lim_{k \to \infty}w_{k+1}/w_k = 1$ and $w_0 =1$, define $H^2_w$ as the space of all functions $f(z)$ with Maclaurin series \[ f(z) = \sum_{k=0}^{\infty} a_k z^k, \ \ \ |z| < 1 \] for which \[ \| f \|^2_w := \sum_{k=0}^{\infty} w_k |a_k|^2 < \infty. \] $H^2_w$ is a Hilbert space; if $f$ and $g$ are elements of $H^2_w$ with Maclaurin coefficients $\{a_k\}$ and $\{b_k\}$ respectively, the inner product is given by \[ \langle f, g \rangle_w = \sum_{k = 0}^{\infty}w_k a_k \overline{b_k}. \] The limit condition on the sequence $w$ ensures that functions analytic in a disk larger than $\mathbb{D}$ belong to $H^2_w$, and that all functions in these spaces are analytic in $\mathbb{D}$. Taking $\alpha \in \mathbb{R}$ and $w = \{\left(k+1\right)^\alpha\}_{k \ge 0}$ gives the Dirichlet-type spaces $\mathcal{D}_\alpha$. When $\alpha = 0$ we recover the classical Hardy space $H^2$, $\alpha = -1$ gives the Bergman space $A^2$, and $\alpha = 1$ gives the Dirichlet space $\mathcal{D}$. Much of the existing literature on optimal polynomial approximants has focused on these spaces. However, in this paper, the results to be proved will extend to some other spaces that do not have some of the useful properties present in the $\mathcal{D}_\alpha$ spaces. Below we give two examples of such spaces. \subsection{Szeg\H{o}'s Theorem and $\frac1m H^2$}\label{oneoverm} A classical theorem of Szeg\H{o} says that for $v \in L^1(\mathbb{T})$ positive, the closure of the analytic polynomials in $L^2(v)$ coincides with all of $L^2(v)$ if and only if $\int_{\mathbb{T}} \log{v} = -\infty$ (e.g., see \cite{conway1991theory}). In the case that $\int_{\mathbb{T}} \log{v} > -\infty$, there exists an outer (i.e. $H^2$-cyclic) function $m$ such that $|m|^2 = v$.
Further, $P^2(v) := \overline{\text{span}\{z^k : k \ge 0\}}^{\ L^2(v)}$ is isomorphic to $\frac1m H^2 : = \{ f/m : f \in H^2 \}$. It follows that multiplication by $1/m$ is an isometry from $H^2$ onto $P^2(v)$, and for all $f\in P^2(v)$ we have $\| f \|_{P^2(v)} = \| fm \|_{H^2}$. An important distinction in these spaces is that the monomials are not pairwise orthogonal. \subsection{de Branges-Rovnyak Spaces} Denote by $H^\infty$ the set of bounded analytic functions on $\mathbb{D}$. If $b$ is a function in the unit ball of $H^\infty$ (i.e. $\sup_{z\in \mathbb{D}}|b(z)| \le 1$), then there exists a reproducing kernel Hilbert space on $\mathbb{D}$, denoted $\mathcal{H}(b)$, so that the reproducing kernel for this space is given by \[ k_\lambda(z) = \frac{1 - \overline{b(\lambda)}b(z)}{1 - \overline{\lambda}z}. \] These spaces are called \textit{de Branges-Rovnyak spaces} (see \cite{timotin2015short} for an introduction). The structure of these spaces varies with the choice of $b$, but we would like to keep in mind the spaces for which the reproducing kernel at zero is not equal to one (i.e. when $b(0) \neq 0$). We will generalize some ideas from the existing body of work, for example in the Dirichlet-type spaces, where the function 1 is the reproducing kernel at zero. We will not dig into the study of de Branges-Rovnyak spaces here, but the authors in \cite{fricain2014dbr} have characterized cyclicity when $b$ is non-extreme. \section{General Optimal Approximants}\label{genopa} We use the term \textit{general} optimal polynomial approximant to indicate that we are generalizing the case $g=1$ in studying $\| pf - g \|_\mathcal{H}$. Any further use of $g$ will be in this context. \begin{definition}[Optimal Polynomial Approximant] Let $f,g \in \mathcal{H} $ with $\langle f, g \rangle \neq 0$. Define the $n$th \textit{optimal polynomial approximant} to $g/f$ as \[ p_n^* := \argmin_{p \in \mathcal{P}_n} \| pf - g \|_\mathcal{H}.
\] \end{definition} Given the Hilbert space structure, the above minimization is immediate: simply project $g$ onto $f \mathcal{P}_n$, i.e. \[ p_n^* = \Pi_{f\mathcal{P}_n}(g). \] Hence, $p_n^*$ exists and is unique when $f$ and $g$ are not orthogonal. Morally, if $\lim_{n \to \infty} p_n^*$ looks like $g/f$, then the above norm goes to zero and does so \textit{optimally}. In this sense, we are trying to approximate $g/f$ with polynomials. In \cite{fricain2014cyclicity} (Theorem 2.1) an algorithm for finding optimal polynomial approximants is given for $g = 1$ in spaces where $k_0$, the reproducing kernel at zero, is equal to 1. We will state the ideas from the algorithm below. \begin{definition}[Optimal System] For $f, g \in \mathcal{H}$, define the $n$th \textit{optimal matrix} of $f$ in $\mathcal{H}$ as \[ G_n := \left( \langle z^if, z^jf \rangle_\mathcal{H} \right)_{0 \leq i,j \leq n} \] and the $n$th \textit{optimal system} of $g/f$ as \[ G_n \vec{a}_n = \left( \langle g, f \rangle, \langle g, zf \rangle, \ldots, \langle g, z^nf \rangle\right)^T. \] \end{definition} \begin{prop} Let $f,g \in \mathcal{H}$ with optimal system $G_n \vec{a}_n = \left( \langle g, f \rangle, \langle g, zf \rangle, \ldots, \langle g, z^nf \rangle\right)^T$ and let $p_n^*$ be the $n$th optimal approximant to $g/f$. Then $\vec{a}_n = (a_0, a_1, \ldots, a_n)$ gives the coefficients of $p_n^*$. That is, $p_n^*(z) = a_0 + a_1z + \cdots + a_n z^n$. \end{prop} \begin{proof} The optimality of $p_n^*$ means for all $q \in \mathcal{P}_n$ \[ \| p_n^*f - g \|_\mathcal{H}^2 \leq \| qf - g \|_\mathcal{H}^2. \] This occurs if and only if $p_n^*f - g \perp qf$ for all $q \in \mathcal{P}_n$. Equivalently, for $k = 0, \ldots, n$, we must have \[ \langle p_n^*f - g, z^k f \rangle_\mathcal{H} = 0. \] Moving $\langle g, z^k f \rangle_\mathcal{H}$ to the right hand side of the above equation and putting $p_n^*(z) = \sum_{k=0}^n a_k z^k$ gives the proposed system.
\end{proof} \begin{prop} For $f \in \mathcal{H}$, the orthogonal projections $\Pi_n : \mathcal{H} \to f\mathcal{P}_n$ converge to the orthogonal projection $\Pi_{[f]} : \mathcal{H} \to [f]$ in the strong operator topology. Further, if $g\in \mathcal{H}$ with $\langle f, g \rangle \neq 0$ and $(p_n^*)$ are the optimal approximants to $g/f$, then $\varphi = \Pi_{[f]}(g)$ is the unique function such that \[ \| p_n^*f - \varphi \|_\mathcal{H} \to 0. \] \end{prop} \begin{proof} Let $u \in \mathcal{H}$ and put $u = \Pi_{[f]}(u) + v$. Then $v$ is orthogonal to $[f]$, and hence orthogonal to $f\mathcal{P}_n$, so $\Pi_n (v) = 0$ for all $n \ge 0$. Since $\cup_n f\mathcal{P}_n$ is dense in $[f]$, given $\epsilon > 0$, there exists $N$ such that $\text{dist}(\Pi_{[f]}(u), f\mathcal{P}_N) < \epsilon$. Then, for all $ n \ge N$, we have \begin{align*} \| \Pi_{[f]}(u) - \Pi_n(u) \|_\mathcal{H} &= \text{dist}(\Pi_{[f]}(u), f\mathcal{P}_n) \\ & \le \text{dist}(\Pi_{[f]}(u), f\mathcal{P}_N) \\ &< \epsilon. \end{align*} Since $u$ was arbitrary, we have that $\Pi_n \to \Pi_{[f]}$ strongly. Further, take $u=g$ to get $\| \Pi_n(g) - \Pi_{[f]}(g) \|_\mathcal{H} = \| p_n^*f - \varphi \|_\mathcal{H} \to 0$. \end{proof} Again, note that $f$ is cyclic if and only if $p_n^*f \to g$ for any cyclic $g$ and $(p_n^*)$ the optimal approximants to $g/f$. We will now make some observations and motivate a few questions. \section{Truncations of Power Series and Stabilization of Optimal Approximants} Let $h$ be an analytic function. We will denote the $n$th Taylor polynomial of $h$ as \[ T_n\left(h\right) := \sum_{k=0}^{n} \frac{ h^{\left(k\right)}\left(0\right)}{k!} z^k. \] For $f\in \mathcal{H}$, a first natural guess might be that the optimal approximants to $g/f$ are $T_n(g/f)$. This turns out to be a not-so-good guess. 
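To make the optimal system concrete, the following is a small numerical sketch (our own illustration, assuming a weighted Hardy space $H^2_w$ with real coefficients and truncated Maclaurin series): it assembles $G_n$ and solves $G_n\vec{a}_n = (\langle g, z^jf\rangle)_j$ for the coefficients of $p_n^*$. For $f(z) = 1-z$ and $g = 1$ in $H^2$, the result can be checked against the known closed form $p_n^*(z) = \sum_{k=0}^n \frac{n+1-k}{n+2}z^k$.

```python
import numpy as np

def gram(f_coeffs, n, w):
    """Optimal matrix G_n with entries <z^i f, z^j f> in H^2_w,
    computed from (real) truncated Maclaurin coefficients of f."""
    N = len(f_coeffs)
    G = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(n + 1):
            # the k-th coefficient of z^i f is f_coeffs[k - i]
            G[i, j] = sum(w[k] * f_coeffs[k - i] * f_coeffs[k - j]
                          for k in range(max(i, j), min(N + i, N + j)))
    return G

def optimal_approximant(f_coeffs, g_coeffs, n, w):
    """Solve the optimal system G_n a = (<g, z^j f>)_{j=0..n} for the
    coefficients of the n-th optimal approximant p_n^* to g/f."""
    rhs = np.array([sum(w[k] * g_coeffs[k] * f_coeffs[k - j]
                        for k in range(j, min(len(g_coeffs), j + len(f_coeffs))))
                    for j in range(n + 1)])
    return np.linalg.solve(gram(f_coeffs, n, w), rhs)

# Hardy space H^2 (w_k = 1), f(z) = 1 - z, g = 1.
w = np.ones(64)
f = np.array([1.0, -1.0])
g = np.zeros(64)
g[0] = 1.0
a = optimal_approximant(f, g, 1, w)
print(a)  # ≈ [0.6667, 0.3333], matching p_1^*(z) = 2/3 + z/3
```

For $f = 1-z$ the matrix $G_n$ is tridiagonal Toeplitz with diagonal 2 and off-diagonal $-1$, so small cases are easy to verify by hand.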
For example, in the Dirichlet space $\mathcal{D}$, the cyclic function $1-z$ was studied in \cite{beneteau2015cyclicity} and there it was pointed out that \begin{align*} \| T_n(1/f)f - 1 \|_{\mathcal{D}} &= \| \left(1 + z + \ldots + z^n\right)(1-z) - 1 \|_{\mathcal{D}}\\ &= \| z^{n+1} \|_{\mathcal{D}}\\ &= \sqrt{n+2} \end{align*} which is unbounded as $n \to \infty$. In this case, $T_n(1/f)$ is neither optimal nor provides a sequence that proves $f$ to be cyclic (even though $T_n(1/f)f \to 1$ pointwise in $\mathbb{D}$). Instead of using Taylor polynomials, we ask a couple of more general questions: \begin{enumerate} \item[(Q1)] Given a power series $\varphi(z) = \sum_{k=0}^{\infty} a_k z^k$, can we characterize $f$ such that the $n$th optimal polynomial approximants to $g/f$ are given by $T_n(\varphi)$ for all $n$ greater than some $M$? \item[(Q2)] Suppose $\varphi$ is a polynomial. Can we characterize $f$ such that $\Pi_{[f]} (g) = \varphi$? \end{enumerate} We will proceed by first answering these questions for $g = \hat{k}_0$. \subsection{The Reproducing Kernel at Zero and Inner Functions} As mentioned previously, much of the existing literature on optimal approximants has been centered around approximating $1/f$ in spaces where 1 is the reproducing kernel at zero. In the following section, we will make a few observations and generalize these results. \begin{definition}[Inner function] Say that $f \in \mathcal{H}$ is \textit{$\mathcal{H}$-inner} if \[ \langle f, z^jf \rangle_\mathcal{H} = \delta_{j0}. \] \end{definition} This definition was first given by Aleman, Richter, and Sundberg in \cite{aleman1996beurling} for $\mathcal{H} = A^2$. This definition coincides with the classical definition of inner in $H^2$. Let us gather some facts about the relationship between inner functions and optimal polynomial approximants. See \cite{beneteau2017remarks} for further discussion on this topic.
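As a quick numerical illustration of the definition (our own sketch, taking $\mathcal{H} = H^2$, i.e. $w_k = 1$): the Blaschke factor $b(z) = (\alpha - z)/(1 - \alpha z)$ with $0 < \alpha < 1$ is $H^2$-inner, and the inner products $\langle b, z^jb\rangle_{H^2} = \sum_{k \ge j} b_k b_{k-j}$ can be computed from truncated Maclaurin coefficients:

```python
import numpy as np

alpha = 0.5   # Blaschke parameter (our choice); b(z) = (alpha - z)/(1 - alpha z)
N = 400       # truncation length; the coefficients decay geometrically

# Maclaurin coefficients: b(z) = alpha - (1 - alpha^2) * sum_{k>=1} alpha^(k-1) z^k
b = np.zeros(N)
b[0] = alpha
b[1:] = -(1 - alpha**2) * alpha ** np.arange(N - 1)

# <b, z^j b>_{H^2} = sum_{k>=j} b_k b_{k-j}  (real coefficients, so no conjugates)
ips = [float(np.dot(b[j:], b[:N - j])) for j in range(4)]
print(np.round(ips, 8))  # ≈ [1, 0, 0, 0]: b is H^2-inner
```

The truncation error is of order $\alpha^{2N}$, so the computed values match $\delta_{j0}$ to machine precision.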
\begin{prop} Up to multiplication by a unimodular constant, there is a unique function in $\mathcal{H}$ that is both cyclic and inner, namely $\hat{k}_0$, the normalized reproducing kernel at zero. \end{prop} \begin{proof} Let $\theta \in \mathcal{H}$ be cyclic and inner. Then for all $h \in \mathcal{H}$, there exist polynomials $p_n$ such that $p_n \theta \to h$, and as $\theta$ is inner, $\langle p_n \theta, \theta \rangle_\mathcal{H} = p_n(0)$. Taking limits, and noting $\theta(0) \neq 0$ by cyclicity, we have $\langle h, \theta \rangle_\mathcal{H} = h(0)/\theta(0)$. This implies that $\overline{\theta(0)}\theta$ is the reproducing kernel at zero, i.e. $\overline{\theta(0)}\theta = k_0$. Since $\|\theta\|_\mathcal{H} = 1$, this forces $|\theta(0)| = \|k_0\|_\mathcal{H}$, so $\theta$ is a unimodular constant multiple of $\hat{k}_0$. \end{proof} Note that in the Dirichlet-type spaces, the functions $\theta$ above are just unimodular constants. However, as noted previously, in de Branges-Rovnyak spaces $\mathcal{H}(b)$, unless $b(0) = 0$, the reproducing kernel at zero is non-constant and is given by $\overline{\theta(0)}\theta= 1 - \overline{b(0)}b$. \begin{lemma}\label{inner} Let $f \in \mathcal{H}$ with $f(0) \neq 0$. Let $\varphi$ be the orthogonal projection of $\hat{k}_0$ onto $[f]$. Then $\varphi/\sqrt{\varphi(0)}$ is $\mathcal{H}$-inner. \end{lemma} This lemma is a generalization of ideas from \cite{beneteau2017remarks}. \begin{proof} Notice that $\hat{k}_0 - \varphi \perp [f]$ and $[f]$ is shift invariant, so for all $j \ge1$ we have \[ 0 = \langle z^j \varphi, \hat{k}_0 - \varphi \rangle_\mathcal{H} = - \langle z^j \varphi, \varphi \rangle_\mathcal{H}. \] Further, $\langle \varphi, \varphi \rangle_\mathcal{H} = \langle \hat{k}_0, \varphi \rangle_\mathcal{H} = \varphi(0)$ which gives $\|\varphi\|_\mathcal{H} = \sqrt{\varphi(0)}$. Thus \[ \left\langle \frac{\varphi}{\sqrt{\varphi(0)}}, z^j \frac{\varphi}{\sqrt{\varphi(0)}} \right\rangle_\mathcal{H} = \delta_{j0} \] so $\varphi/\sqrt{\varphi(0)}$ is inner.
\end{proof} \begin{lemma}\label{finite} Let $f \in \mathcal{H}$ with $f(0) \neq 0$ and let $(p_n^*)$ be the optimal approximants to $\hat{k}_0/f$. Let $\varphi(z) = \sum_{k=0}^{\infty}a_kz^k$ and suppose that $p_n^* = T_n(\varphi)$ for all $n \ge M$. Then $p_n^* = p_M^*$ for all $n \ge M$. That is, $\varphi = p_M^*$. \end{lemma} \begin{proof} By hypothesis, for all $n \ge M$ we have $p_n^*(0) = \varphi(0) = p_M^*(0)$, and hence $(p_n^* f)(0) = (p_M^*f)(0)$. Now notice, for all $n \ge M$, \begin{align*} \| p_n^* f - p_M^*f \|_\mathcal{H}^2 &= \| p_n^* f \|_\mathcal{H}^2 - 2\text{Re}\{\langle p_n^*f, p_M^*f \rangle_\mathcal{H} \} + \| p_M^* f \|_\mathcal{H}^2\\ &= (p_n^* f)(0) - 2(p_M^*f)(0) + (p_M^*f)(0)\\ &=0. \end{align*} Hence, $p_n^*f = p_M^*f$ for all $n \ge M$, and as $f$ is not identically zero, $p_n^* = p_M^*$ for all $n \ge M$. \end{proof} \begin{remark} It should be pointed out that Lemma \ref{finite} says that there are no functions $f$ for which the optimal approximants to $\hat{k}_0/f$ come from truncations of a single power series with infinitely many nonzero coefficients. This lemma can also be seen as a consequence of the simple exercise showing that $\text{dist}^2(\hat{k}_0, f\mathcal{P}_n) = 1 - (p_n^*f)(0)$. This also tells us that for $g = \hat{k}_0$, (Q1) and (Q2) are equivalent. The following definition is now natural. \end{remark} \begin{definition}[Stabilizing approximants] Let $f,g \in \mathcal{H}$ with $\langle f, g \rangle \neq 0$ and let $(p_n^*)$ be the optimal approximants to $g/f$. Say that the optimal approximants \textit{stabilize} at $p_M^*$ if $M$ is the smallest non-negative integer such that $p_n^* = p_M^*$ for all $n \ge M$. \end{definition} \begin{lemma}\label{constant} Let $f \in \mathcal{H}$ with $f(0) \neq 0$ and let $(p_n^*)$ be the optimal approximants to $\hat{k}_0/f$. Then $f$ is inner if and only if $p_n^* = \overline{f(0)}$ for all $n \ge 0$.
\end{lemma} The forward implication of this lemma was proven in \cite{beneteau2017remarks} for spaces where $\hat{k}_0 = 1$. \begin{proof} Suppose that $f$ is inner. For any $n \ge 0$, consider the optimal system $G_n \vec{a}_n = ( \overline{f(0)}, 0, \ldots, 0)^T$ for $\hat{k}_0/f$. As $f$ is inner, the entries in the first row and column of $G_n$, except the (0,0) entry, are all zero. It follows that the inverse of $G_n$ must also satisfy this property. Now considering $G_n^{-1}(\overline{f(0)}, 0, \ldots, 0)^T$ to recover the coefficients of $p_n^*$, we see that $p_n^*$ is the constant $\overline{f(0)}$. Now suppose $p_n^*(z) = \overline{f(0)}$ for all $n \ge 0$. Considering the optimal system $G_1 (\overline{f(0)}, 0)^T = (\overline{f(0)}, 0)^T$ quickly yields that $\| f\|_\mathcal{H} = 1$ and $\langle f, zf \rangle_\mathcal{H} = 0$. As the coefficients of $p_n^*$ are stable, a simple induction argument then shows that $\langle f, z^k f \rangle_\mathcal{H} = 0$ for all $k \ge 1$. \end{proof} We now give a characterization answering (Q2) when $g = \hat{k}_0$. \begin{theorem}\label{main} Let $f \in \mathcal{H}$ with $f(0) \neq 0$ and let $(p_n^*)$ be the optimal polynomial approximants to $\hat{k}_0/f$. The following are equivalent, and the smallest $M$ for which each of the statements holds is the same: \begin{enumerate} \item There exists a function $\varphi(z) = \sum_{k \ge 0} a_k z^k$ such that $p_n^* = T_n(\varphi)$ for all $n \ge M$. \item The optimal approximants to $\hat{k}_0/f$ stabilize at $p_M^*$. \item $p_M^*f$ is the orthogonal projection of $\hat{k}_0$ onto $[f]$. \item $f = c u/p_M^*$, where $c = \sqrt{(p_M^*f)(0)}$ and $u$ is $\mathcal{H}$-inner. \end{enumerate} \end{theorem} \begin{proof} The equivalence of (1) and (2) is given by Lemma \ref{finite} and taking $p_M^* = \varphi$ for the backward implication. The equivalence of (2) and (3) follows by definition. The fact that (3) implies (4) is given by Lemma \ref{inner}.
That the minimal $M$ is the same in statements (1)--(3) follows directly from the definitions. Now let us assume (4), putting $p_M^*(z) = \sum_{k=0}^M a_k z^k$ and assuming that $M$ is minimal. Then, \begin{align*} 0 &= \left\langle z \ \frac{p_M^*f}{\sqrt{(p_M^*f)(0)}}, \frac{p_M^*f}{\sqrt{(p_M^*f)(0)}}\right\rangle_\mathcal{H} \\ &= \langle z p_M^*f, p_M^*f \rangle_\mathcal{H} \\ &= \sum_{k=0}^M a_k \langle z^{k+1} f, p_M^*f \rangle_\mathcal{H} \\ &= a_M \langle z^{M+1} f, p_M^*f \rangle _\mathcal{H} \end{align*} where the last equality holds by optimality of $p_M^*$. By the minimality of $M$, $a_M \neq 0$ so we must have $\langle z^{M+1} f, p_M^*f \rangle_\mathcal{H} = 0$. A simple induction argument shows that $\langle z^{M+k} f, p_M^*f \rangle_\mathcal{H} = 0$ for all $k \ge 1$. It follows that \[ \langle qf, p_M^*f \rangle_\mathcal{H} = q(0)f(0) \] for all $q \in \mathcal{P}$. In other words, $p_M^*f$ is the orthogonal projection of $\hat{k}_0$ onto $[f]$, i.e. (3). \end{proof} \begin{cor}\label{inv} Let $f \in \mathcal{H}$ be cyclic. Suppose that the optimal polynomial approximants to $\hat{k}_0/f$ stabilize at $p_M^*$. Then $f = \hat{k}_0/p_M^*$ and $p_M^*$ has no zeros inside $\mathbb{D}$. \end{cor} \begin{proof} Since $f$ is cyclic, $f(0) \neq 0$. By optimality, we have \[ \langle p_M^* f, qf \rangle_\mathcal{H} = \langle \hat{k}_0, qf \rangle_\mathcal{H} \] for all $q \in \mathcal{P}$. As $f$ is cyclic, $\{qf : q \in \mathcal{P} \}$ is dense in $\mathcal{H}$. It follows immediately that $p_M^*f = \hat{k}_0$. Lastly, as $\hat{k}_0$ is cyclic and therefore zero-free in $\mathbb{D}$, and $f$ is analytic in $\mathbb{D}$, $p_M^*$ cannot have any zeros in $\mathbb{D}$. \end{proof} \begin{remark}\label{zeros} For $h\in \mathcal{H}$, let us denote the zero set of $h$ as $Z(h) := \{ \beta \in \mathbb{C} : h(\beta) = 0 \}$.
It was shown in \cite{beneteau2016orthogonal} that in the Dirichlet-type spaces $\mathcal{D}_\alpha$, $Z(p_n^*) \cap \overline{\mathbb{D}} = \emptyset$ when $\alpha \ge 0$ and $Z(p_n^*) \cap \overline{D}(0, 2^{-\alpha/2}) = \emptyset$ when $\alpha < 0$. The above corollary improves this result for $\alpha < 0$ when $f$ is cyclic and has stabilizing approximants. However, it should be noted that, a priori, $p_M^*$ may have zeros on the unit circle. \end{remark} As a consequence of Remark $\ref{zeros}$, we have the following corollary, which also follows naturally from the classical Beurling theorem (see \cite{beurling1949}). \begin{cor} Let $f \in H^2$ with $f(0) \neq 0$. Let $(p_n^*)$ be the optimal polynomial approximants to $1/f$. If the optimal approximants to $1/f$ stabilize at $p_M^*$, then $M = 0$. That is, every function in the Hardy space that has stabilizing optimal approximants is a constant multiple of an inner function. \end{cor} \begin{proof} Recall (e.g. from \cite{partington2001hardy}) that in $H^2$, every inner function $h$ has the form $h = BQ$ where $B$ is a (possibly infinite) Blaschke product and $Q$ is a singular inner function. Now, if the optimal approximants to $1/f$ stabilize at $p_M^*$, then via Theorem \ref{main} we have \[ p_M^* f = \sqrt{(p_M^*f)(0)} \ BQ. \] We know, from Remark \ref{zeros}, that any zeros of $p_M^*$ lie outside $\overline{\mathbb{D}}$. But as $Z(Q) = \emptyset$ and $Z(B) \subset \mathbb{D}$, we must have $Z(f) = Z(B)$. This now tells us that $p_M^*$ must be zero-free, i.e. $p_M^*$ is constant and therefore $M = 0$. We now conclude that $f$ is a constant multiple of the $H^2$-inner function $BQ$. \end{proof} \section{General Approximants} We now return to the case of approximating some arbitrary $g/f$ with $g,f \in \mathcal{H}$. First we would like to recall the correspondence of norms in the $\frac1m H^2$ spaces from Section \ref{oneoverm} as one motivation for studying general approximants.
With this correspondence, we have the following proposition. \begin{prop} Let $f \in \frac1m H^2 \setminus \{0\}$. Put $f = h/m$ with $h\in H^2$. Then the optimal polynomial approximants to $1/f$ in $\frac1m H^2$ correspond to the optimal polynomial approximants to $m/h$ in $H^2$. \end{prop} \begin{proof} It suffices to notice that for any polynomial $p$ we have \[ \| pf - 1 \|_{\frac1m H^2} = \| ph - m \|_{H^2}. \] Minimizing each side of the equality above we see that \[ \left(\Pi_{f\mathcal{P}_n}(1)\right)/f = \left(\Pi_{h\mathcal{P}_n}(m)\right)/h \] where the projections on the left and right hand sides above are taken in $\frac1m H^2$ and $H^2$ respectively. Lastly, as $f \neq 0$, these projections are unique and represent the optimal approximants. \end{proof} We can now reframe questions about cyclicity in $\frac1m H^2$ as questions in $H^2$ via general optimal approximants. This is advantageous because $H^2$ has nicer structural properties than $\frac1m H^2$ (e.g., the monomials are orthogonal in $H^2$ but not in $\frac1m H^2$). Let us now give some results pertaining to inner functions and general optimal approximants. In general, (Q1) and (Q2) are not equivalent. For example, take $f(z) = 1$ and $g(z) = \sum_{k\ge 0} b_kz^k$ in a weighted Hardy space $H^2_w$. Then, since the monomials are pairwise orthogonal, the optimal approximants to $g/f$ are just $T_n(g)$: \[ \| T_n(g) - g \|_{H^2_w} \leq \|p - g\|_{H^2_w} \] for all $p \in \mathcal{P}_n$. \subsection{General Stabilization} Let us first make an important observation. \begin{prop}\label{modelshift} Let $f \in \mathcal{H}$ and define \[ \mathcal{K}_{Sf} : = \mathcal{H} \ominus [Sf]. \] For any $h \in \mathcal{H}$, we have $h \in \mathcal{K}_{Sf}$ if and only if $\Pi_{[f]}(h) \in \mathcal{K}_{Sf}$. Further, $\hat{k}_0$ and $\Pi_{[f]}(\hat{k}_0)$ are always elements of $\mathcal{K}_{Sf}$.
\end{prop} \begin{proof} Note that $\mathcal{K}_{Sf}$ can also be expressed as \[ \mathcal{K}_{Sf} = \{ h \in \mathcal{H} : \langle h, z^kf \rangle_\mathcal{H} = 0 \ \text{for all} \ k \ge 1\}. \] Simply observe that $\langle h, z^kf \rangle_\mathcal{H} = \langle h, \Pi_{[f]}(z^kf) \rangle_\mathcal{H} = \langle \Pi_{[f]}(h), z^kf \rangle_\mathcal{H}$ and that $\langle k_0, z^kf \rangle_\mathcal{H} = 0$ for all $k \ge 1$. \end{proof} We will now give the generalization of Theorem \ref{main} for general approximants. \begin{theorem} Let $f, g \in \mathcal{H}$ with $\langle f, g \rangle \neq 0$. Let $(q_n^*)$ be the optimal approximants to $g/f$. The following are equivalent, and the smallest $M$ for which each of the statements holds is the same: \begin{enumerate} \item $g \in \mathcal{K}_{Sf}$ and $\Pi_{[f]}(g) = q_M^*f$. \item $q_M^*f \in \mathcal{K}_{Sf}$. \item $q_M^*f/ \|q_M^*f\|_\mathcal{H}$ is inner and $\langle q_M^*f , z^k f \rangle_\mathcal{H} = 0$ for $k= 1, \ldots, M$. \end{enumerate} \end{theorem} \begin{proof} To see (1) implies (2), note that if $\Pi_{[f]}(g) = q_M^*f$ then $\langle q_M^*f, z^k f \rangle_\mathcal{H} = \langle g, z^kf \rangle_\mathcal{H}$. So if $g \in \mathcal{K}_{Sf}$, then $\langle q_M^*f, z^k f \rangle_\mathcal{H} = 0$ for all $k \ge 1$. For (2) implies (3), the fact that $\langle q_M^*f , z^k f \rangle_\mathcal{H} = 0$ for $k= 1, \ldots, M$ follows by definition of $q_M^*f \in \mathcal{K}_{Sf}$. To see $q_M^*f/ \|q_M^*f\|_\mathcal{H}$ is inner, put $q_M^*(z) = \sum_{j=0}^M b_jz^j$ and observe, for all $k \ge 1$, \[ \langle q_M^*f, z^kq_M^*f \rangle_\mathcal{H} = \sum_{j=0}^M \overline{b_j} \langle q_M^*f, z^{j+k}f \rangle_\mathcal{H} = 0 \] where the second equality holds because $q_M^*f \in \mathcal{K}_{Sf}$. Thus, $q_M^*f/ \|q_M^*f\|_\mathcal{H}$ is inner. Further, the unique minimality of $M$ in the above statements is immediate. For (3) implies (1), we use the same idea as the last part of Theorem \ref{main}.
Put $q_M^*(z) = \sum_{j=0}^M b_j z^j$ and assume $M$ is minimal. Since $q_M^*f/\| q_M^*f\|_\mathcal{H}$ is inner, we have \begin{align*} 0 &= \langle z q_M^*f, q_M^*f \rangle_\mathcal{H} \\ &= \sum_{j=0}^M b_j \langle z^{j+1} f, q_M^*f \rangle_\mathcal{H} \\ &= b_M \langle z^{M+1} f, q_M^*f \rangle_\mathcal{H} \end{align*} where the last equality holds by the assumption that $q_M^*f$ is orthogonal to $z^kf$ for $k = 1, \ldots, M$. By the minimality of $M$, $b_M \neq 0$, so we must have $\langle z^{M+1} f, q_M^*f \rangle_\mathcal{H} = 0$. A simple induction argument shows that $\langle z^{M+k} f, q_M^*f \rangle_\mathcal{H} = 0$ for all $k \ge 1$. Thus, $q_M^*f \in \mathcal{K}_{Sf}$. Further, if $q_M^*f \in \mathcal{K}_{Sf}$, then $g - q_M^*f = g - \Pi_{f\mathcal{P}_M}(g)$ is orthogonal to $[f]$. It follows that $\Pi_{[f]}(g) = q_M^*f$, so the approximants to $g/f$ stabilize at $q_M^*$. Lastly, $g \in \mathcal{K}_{Sf}$ since now $\langle q_M^*f, z^kf \rangle_\mathcal{H} = \langle g, z^kf \rangle_\mathcal{H} = 0$ for all $k\ge 1$. \end{proof} We can also characterize cyclicity in terms of $\mathcal{K}_{Sf}$. \begin{theorem} Let $f \in \mathcal{H}$. Then $f$ is cyclic if and only if $\mathcal{K}_{Sf} = \text{span}\{k_0\}$. \end{theorem} \begin{proof} Suppose $f$ is cyclic. Then for any $h \in \mathcal{H}$, we can find polynomials $(p_n)$ so that $p_nf \to h$. Letting $g \in \mathcal{K}_{Sf}$, we have \begin{align*} \langle g, h \rangle_\mathcal{H} &= \lim_{n \to \infty} \ \langle g, p_nf \rangle_\mathcal{H} \\ &=\lim_{n \to \infty} \overline{(p_nf)(0)} \ \langle g , 1 \rangle_\mathcal{H}\\ &= \overline{h(0)} \ \langle g , 1 \rangle_\mathcal{H}. \end{align*} Thus, $g$ reproduces, up to a constant, the value of $h$ at zero, so $g \in \text{span}\{k_0\}$. Conversely, let $\mathcal{K}_{Sf} = \text{span}\{k_0\}$. Since $\Pi_{[f]}(k_0) \in \mathcal{K}_{Sf}$, there exists some constant $\lambda$ so that $\Pi_{[f]}(k_0) = \lambda k_0$. This means that the cyclic function $k_0 \in [f]$ so $f$ is cyclic.
\end{proof} One may compare this with the well-known ``codimension one'' property of shift invariant subspaces (e.g., see \cite{richter1988bounded}). \section{Projections of Unity} In light of Proposition \ref{modelshift}, we will compute $\Pi_{[f]}(1)$ when $f \in \mathcal{P}\subset H^2_w$. Note that in our definition of $H^2_w$ from Section \ref{whs}, we have $\hat{k}_0 = 1$. First, let us make a definition. \begin{definition}[Reproducible point] Let $\beta \in \mathbb{C}$ and $m \in \mathbb{Z}^+ \cup \{0\}$. Say that $\beta$ is reproducible of order $m$ in $H^2_w$ if point evaluation at $\beta$ of the $m$-th derivative of functions in $H^2_w$ is bounded. If no such $m$ exists, say that $\beta$ is not reproducible. Denote the collection of reproducible points of order $m$ for $H^2_w$ as $\Omega_m(H^2_w)$. \end{definition} Notice that $\Omega_0(H^2_w)$ is just the set of points for which point evaluation is bounded in $H^2_w$. Since we are assuming $H^2_w$ is a reproducing kernel Hilbert space on $\mathbb{D}$, we always have $\mathbb{D} \subseteq \Omega_0(H^2_w)$. But $\Omega_0(H^2_w)$ could be a strictly larger set. For example, a routine exercise shows that when $\alpha > 1$, $\Omega_0 (\mathcal{D}_\alpha) = \overline{\mathbb{D}}$ and $\Omega_m(\mathcal{D}_\alpha) \subseteq \overline{\mathbb{D}}$ for all $m \ge 1$ (whether the inclusion is proper depends on $m$ and $\alpha$). If $|\beta| > 1$, then $\beta$ is not reproducible in $\mathcal{D}_\alpha$. In $H^2$, $\Omega_m(H^2) = \mathbb{D}$ for all $m$. We need one more observation and lemma before stating our last theorem. Let $s_\beta(z) = \frac{1}{1-\overline{\beta} z}$ denote the Szeg\H{o} kernel, which is the reproducing kernel in $H^2$. Let $s_\beta^{(n)}$ denote the $n$-th derivative of $s_\beta$ and let $s_\beta^n$ denote the reproducing kernel for $n$-th derivatives in $H^2$, i.e. $\langle f, s_\beta^n \rangle_{H^2} = f^{(n)}(\beta)$ for all $f \in H^2$.
Such an element exists and is unique by the Riesz representation theorem, since for $\beta \in \mathbb{D}$ the functional $f \mapsto f^{(n)}(\beta)$ is bounded on $H^2$. A simple exercise shows that \[ s_\beta^{(n)}(z) = \sum_{k\ge 0} (k+1)(k+2)\ldots(k+n)\overline{\beta}^{k+n}z^k \] and \[ s_\beta^n(z) = \sum_{k \ge 0} k(k-1)\ldots(k-n+1) \overline{\beta}^{k-n}z^k. \] Further, in $H^2_w$, we have \[ k_\beta^{(n)}(z) = \sum_{k\ge 0} (k+1)(k+2)\ldots(k+n)\frac{\overline{\beta}^{k+n}z^k}{w_k} \] and \[ k_\beta^n(z) = \sum_{k \ge 0} k(k-1)\ldots(k-n+1) \frac{\overline{\beta}^{k-n}z^k}{w_k}. \] Let us now relate the power series coefficients of $s_\beta^{(n)}$ and $s_\beta^n$. \begin{lemma}\label{span} Let $F_0(k) = P_0(k) = 1$. For each $N \in \mathbb{Z}^+$, define $F_N(k):= \prod_{j=1}^N (k+j)$ and $P_N(k) := \prod_{j=0}^{N-1} (k-j)$. Then $F_n \in \text{span}\{P_0, \ldots, P_n\}$ for all $n \in \mathbb{Z}^+ \cup \{0\}$. \end{lemma} \begin{proof} We will proceed by induction. Let $N=1$ and observe $F_1(k) = k+1 = P_1(k) + P_0(k)$, so the base case holds. Now suppose $F_N \in \text{span}\{P_0, \ldots, P_N\}$ and note that $F_{N+1}(k) = (k+N+1) F_N(k)$. By the induction hypothesis, we can find constants $c_i$ such that \begin{align*} F_{N+1}(k) &= (k+N+1) F_N(k)\\ &= k\sum_{i=0}^N c_i P_i(k) + (N+1)\sum_{i=0}^N c_i P_i(k). \end{align*} Observe, for any $n \ge 0$, that $kP_n(k) = (k-n)P_n(k) + nP_n(k) = P_{n+1}(k) + nP_n(k)$. Hence, $kP_n \in \text{span}\{P_0, \ldots, P_{n+1} \}$ and also $k\sum_{i=0}^N c_i P_i \in \text{span}\{ P_0, \ldots, P_{N+1} \}$. Thus, $F_{N+1} \in \text{span}\{P_0, \ldots, P_{N+1}\}$. \end{proof} \begin{remark} The purpose of this lemma, as an immediate corollary, is that $s_\beta^{(n)} \in \text{span}\{ s_\beta, s_\beta^1, \ldots, s_\beta^n\}$. \end{remark} We may now state and prove our final theorem. \begin{theorem}\label{projun} Let $f \in H^2_w$ be a monic polynomial with $f(0) \neq 0$. Suppose $f$ has zeros $\beta_1, \ldots, \beta_r$ with multiplicities $m_1, \ldots, m_r$, respectively.
Let $Z_j := \{ \beta_i \in Z(f) \cap \Omega_j : m_i > j \}$ be the set of zeros of $f$ that are reproducible of order $j$ and have multiplicity greater than $j$. Let $I_j := \{ i \in \{ 1, \ldots, r\} : \beta_i \in Z_j \}$ be the set of indices appearing in $Z_j$. Let $R:= \max(\{ j: Z_j \neq \emptyset\})$ be the largest value of $j$ such that $Z_j$ is non-empty. Let $\varphi$ be the orthogonal projection of $1$ onto $[f]$. Then \[ \varphi(z) = 1 + \sum_{j=0}^{R} \ \sum_{i \in I_j} C_{i,j} k_{\beta_i}^{j}(z) \] where $k_\beta^i$ denotes the reproducing kernel for $i$-th derivatives in $H^2_w$ at $\beta$ and $C_{i,j}$ are constants determined by $\langle \varphi, k_{\beta_i}^j \rangle_w = 0$ for each $i \in I_j$ and $0 \le j \le R$. \end{theorem} \begin{proof} Put $f(z) = z^d + a_{d-1}z^{d-1} + \dots + a_0$ and denote the Fourier coefficients of $\varphi$ as $\varphi_k = \langle \varphi, z^k \rangle_w / \| z^k \|_w^2$. Since $\varphi \in \mathcal{K}_{Sf}$, we have $\langle \varphi, z^{k+d} + a_{d-1}z^{k + d-1} + \dots + a_0 z^k \rangle_w = 0$ for all $k \ge 1$. This gives the recurrence relation \[ w_{k+d} \varphi_{k+d} = \sum_{j = 0}^{d-1} -w_{k+j} \ \overline{a}_j \ \varphi_{k+ j}. \] Now let us use $\Phi_n : = w_n \varphi_n$ to obtain the constant-coefficient recurrence relation \[ \Phi_{k+d} = \sum_{j = 0}^{d-1} -\overline{a}_j \Phi_{k+ j}. \] We will now find the generating function $\Phi (z)$ (viewed as a \textit{formal} power series) by summing over all $n$ (see \cite{graham1989concrete} for more on solving recurrence relations and generating functions): \begin{align*} \Phi(z) &= p(z) + \sum_{n \ge 0} -\overline{a_{d-1}}\Phi_{n-1}z^n + \dots + \sum_{n \ge 0} -\overline{a_0}\Phi_{n-d}z^n \\ & = p(z) - \overline{a_{d-1}}z\Phi(z) - \dots - \overline{a_0}z^d \Phi(z), \end{align*} where $p$ is a polynomial of degree $d$ given by the initial conditions of the relation.
Solving for $\Phi(z)$ gives \begin{align*} \Phi(z) &= \frac{p(z)}{1 + \overline{a_{d-1}}z + \dots + \overline{a_0}z^d} \\ &= \frac{p(z)}{z^d \overline{f(1/\overline{z})}}\\ &= \frac{p(z)}{\prod_{i = 1}^r (1 - \overline{\beta_i}z)^{m_i}}. \end{align*} After doing long division (because $\deg{p} = d$) and using partial fractions, with constants $C$ and $ c_{i,j}$, we may put \begin{align*} \Phi(z) &= C + \sum_{i = 1}^{r} \sum_{j = 1}^{m_i} \frac{c_{i,j}}{(1 - \overline{\beta_i}z)^j}\\ &= C + \sum_{i = 1}^{r} \sum_{j = 1}^{m_i} \frac{c_{i,j}}{\overline{\beta_i}^{j - 1}(j-1)!} \ \frac{d^{j-1}}{dz^{j-1}}\left(\frac{1}{1 - \overline{\beta_i}z}\right). \end{align*} Putting $\tilde{C}_{i,j} = \frac{c_{i,j}}{\overline{\beta_i}^{j - 1}(j-1)!}$ and rewriting in terms of $s^{(j-1)}_{\beta_i}$, we get \[ \Phi(z) = C + \sum_{i = 1}^{r} \sum_{j = 1}^{m_i} \tilde{C}_{i,j} s^{(j-1)}_{\beta_i}(z). \] By Lemma \ref{span}, we can find constants $C_{i,j}$ such that \[ \Phi(z) = C + \sum_{i = 1}^{r} \sum_{j = 1}^{m_i} C_{i,j} s^{j-1}_{\beta_i}(z). \] The upshot of going through the trouble of writing $\Phi$ in this way is that when substituting back in with $\varphi_k = \Phi_k/w_k$, each term of the form $s^{j-1}_{\beta_i}$ becomes $k^{j-1}_{\beta_i}$. Doing so, we find the formal power series \[ \tilde{\varphi}(z) = C + \sum_{i = 1}^{r} \sum_{j = 1}^{m_i} C_{i,j} k^{j-1}_{\beta_i}(z). \] In order to find $\varphi$, we must determine which terms above converge in $H^2_w$. This is precisely when $\beta_i \in Z_j$, for appropriate $i,j$. Namely, \[ \varphi(z) = C + \sum_{j=0}^{R} \ \sum_{i \in I_j} C_{i,j} k_{\beta_i}^{j}(z). \] Lastly, the claim about the constants follows by noting that any function in $[f]$ must vanish, with proper multiplicity, at the reproducible zeros of $f$. If we let $F:=\sum_{j=0}^{R} \ \sum_{i \in I_j} C_{i,j} k_{\beta_i}^{j}$ and note that $\Pi_{[f]}F = 0$, then $\varphi = \Pi_{[f]}\varphi = \Pi_{[f]}C + \Pi_{[f]}F = C\varphi$, so $C = 1$.
As $\varphi \in [f]$, the other constants $C_{i,j}$ can also be determined by using the fact that $\varphi^{(j)}(\beta) = \langle \varphi, k_\beta^j \rangle_w = 0$ for $\beta \in Z_j$. \end{proof} \begin{remark} The above theorem shows something stronger than what is stated; we have actually shown that $\mathcal{K}_{Sf} = \text{span}\{1, k_\beta^j : \beta \in Z_j\}$. Example \ref{simple} below gives an explicit linear system whose solution gives the constants appearing in $\varphi$ when $f$ has simple zeros. An immediate corollary of the above theorem is that if $f, q \in \mathcal{P} \subset H^2_w$ with $Z(q) \cap \left(\cup_{m\ge 0} \Omega_m \right) = \emptyset$, then $\Pi_{[f]}(1) = \Pi_{[qf]}(1)$. This also tells us that $f$ and $qf$ are equivalent with respect to the limits of their optimal approximants; that is, under the relation $f \sim h$ if and only if $\Pi_{[f]}(1) = \Pi_{[h]}(1)$. We will call this the Roman equivalence relation; the approximants of two different functions in the same equivalence class travel along different roads, but end up in the same place. Another observation worth noting is that $\|\Pi_{[f]}(1)\|_w^2 = \varphi(0) = 1 + \sum_i C_{i,0}$, and hence $\text{dist}^2(1, [f]) = 1 - \varphi(0) = -\sum_i C_{i,0}$. This is due to the facts that $\langle \varphi, \varphi \rangle_w = \varphi(0)$, $k_\beta(0) = 1$, and $k_\beta^j(0) = 0$ for all $j \ge 1$. \end{remark} \section{Examples, Further Questions, and Discussion} In \cite{beneteau2016orthogonal} and \cite{beneteau2016zeros}, the location and distribution of zeros of optimal approximants were studied. One outstanding task is to obtain lower bounds on the moduli of zeros of a stabilized optimal approximant. Preliminary examples suggest that if the approximants to $1/f$ stabilize at $p_M^*$, then $p_M^*$ has no zeros inside $\mathbb{D}$, regardless of the space. Thinking back to questions (Q1) and (Q2), one could also explore the relationship of optimal approximants to $T_n(\Pi_{[f]}(k_0))$.
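Both the Roman equivalence and the value $\varphi(0)$ are easy to probe numerically in $H^2$, where the norm is the $\ell^2$ norm of Taylor coefficients and $\Pi_{[f]}(1)$ can be approximated by projecting onto $f\cdot\mathcal{P}_n$ for large $n$. The following least-squares sketch (plain Python with numpy; the helper names are ours, purely illustrative) is one way to do this:

```python
import numpy as np

def mult_matrix(f, n):
    """Matrix of p -> p*f acting on coefficient vectors, deg p <= n (ascending powers)."""
    A = np.zeros((len(f) + n, n + 1), dtype=complex)
    for j in range(n + 1):
        A[j:j + len(f), j] = f  # column j holds the coefficients of z^j * f
    return A

def project_one(zeros, n=120):
    """Approximate phi = Pi_{[f]}(1) in H^2 for f(z) = prod (z - beta)."""
    f = np.array([1.0 + 0j])
    for b in zeros:
        f = np.convolve(f, [-b, 1.0])          # multiply by (z - beta)
    A = mult_matrix(f, n)
    one = np.zeros(len(f) + n, dtype=complex)
    one[0] = 1.0                                # the constant function 1
    p, *_ = np.linalg.lstsq(A, one, rcond=None) # optimal approximant of degree n
    return np.convolve(p, f)                    # phi ~ p * f

phi = project_one([0.5, -0.4, 2.0])   # one zero (2.0) lies outside the closed disk
phi2 = project_one([0.5, -0.4])       # drop the zero that is invisible to H^2
print(abs(phi[0]), abs(phi[0] - phi2[0]))
```

With zeros $0.5$ and $-0.4$ inside $\mathbb{D}$, one should see $\varphi(0)\approx (0.5\cdot 0.4)^2 = 0.04$, essentially unchanged when the outer zero is dropped, illustrating the equivalence $f \sim qf$.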
Lastly, a natural desire would be to extend Theorem \ref{projun} to any function, not just a polynomial. \begin{example} Let us consider $f(z) = \prod_{i=1}^d (z - \beta_i) \in H^2$ with $f(0) \neq 0$. Let $\Omega = Z(f) \cap \mathbb{D}$. Since $\Omega_m(H^2) = \mathbb{D}$ for all $m \ge 0$, we know from Theorem \ref{projun} that $\varphi(z) := \left(\Pi_{[f]}1\right)(z)$ is given by \[ \varphi(z) = \frac{p(z)}{\prod_{\beta \in \Omega}(1 - \overline{\beta}z)}. \] We also know that $p$ must vanish at each $\beta \in \Omega$ so we get, for some constant $C$, \[ \varphi(z) = C \prod_{\beta \in \Omega} \frac{(z - \beta)}{(1 - \overline{\beta}z)}. \] Since $\langle \varphi , \varphi \rangle = \varphi(0)$ we get \[ C = \frac{\prod_{\beta \in \Omega} (-\overline{\beta})}{\left\| \prod_{\beta \in \Omega}\frac{(z - \beta)}{(1 - \overline{\beta}z)} \right\|_{H^2}^2}. \] Each factor $\frac{z - \beta}{1 - \overline{\beta}z}$ is inner, so the norm in the denominator is $1$ and $C = \prod_{\beta \in \Omega}(-\overline{\beta})$. This gives $\varphi$ as a constant multiple of a familiar Blaschke product (an $H^2$-inner function up to normalisation): \[ \varphi(z) = \prod_{\beta \in \Omega} (-\overline{\beta})\,\frac{(z - \beta)}{(1 - \overline{\beta}z)} = \prod_{\beta \in \Omega} |\beta| \, \prod_{\beta \in \Omega} \frac{|\beta|}{\beta}\,\frac{(\beta - z)}{(1 - \overline{\beta}z)}. \] This also tells us that $\|\Pi_{[f]}1\|^2 = \varphi(0) = \prod_{ \beta \in \Omega}|\beta|^2$, and hence $\text{dist}^2(1, [f]) = 1 - \prod_{ \beta \in \Omega}|\beta|^2$. Further, when $\Omega$ is empty, we have that $\varphi = 1$ and $f$ is cyclic. \end{example} \begin{example}\label{simple} Suppose $f \in H^2_w$ is a monic polynomial with simple zeros and $f(0) \neq 0$. Let $\{ \beta_i \}_{1}^{d} = Z(f) \cap \Omega_0(H^2_w)$. Theorem \ref{projun} says the orthogonal projection of $1$ onto $[f]$ is given by \[ \varphi(z) = 1 + \sum_{i = 1}^d C_i k_{\beta_i}(z) \] for some constants $C_i$. Since $\varphi$ vanishes at each $\beta_i$, we get, for $1 \le j \le d$, \[ 0 = \varphi(\beta_j) = 1 + \sum_{i = 1}^d C_i k_{\beta_i}(\beta_j) = 1 + \sum_{i = 1}^d C_i \langle k_{\beta_i}, k_{\beta_j} \rangle_w.
\] Moving the constant term $1$ to the left-hand side of each of the equations above gives the linear system \[ \left( \langle k_{\beta_i}, k_{\beta_j} \rangle_w \right)_{1 \le i,j \le d} \left( C_1, \ldots, C_d \right)^T = \left( -1, \ldots, -1\right)^T. \] \end{example} \subsection*{Acknowledgment} Many thanks to John McCarthy, Catherine B{\'e}n{\'e}teau, and Daniel Seco for helpful discussions. \bibliographystyle{plain}
https://arxiv.org/abs/2003.10015
General Optimal Polynomial Approximants, Stabilization, and Projections of Unity
For various Hilbert spaces of analytic functions on the unit disk, we characterize when a function $f$ has optimal polynomial approximants given by truncations of a single power series. We also introduce a generalized notion of optimal approximant and use this to explicitly compute orthogonal projections of 1 onto certain shift invariant subspaces.
https://arxiv.org/abs/1401.0153
The Minimum Number of Rotations About Two Axes for Constructing an Arbitrarily Fixed Rotation
For any pair of three-dimensional real unit vectors $\hat{m}$ and $\hat{n}$ with $|\hat{m}^{\rm T} \hat{n}| < 1$ and any rotation $U$, let $N_{\hat{m},\hat{n}}(U)$ denote the least value of a positive integer $k$ such that $U$ can be decomposed into a product of $k$ rotations about either $\hat{m}$ or $\hat{n}$. This work gives the number $N_{\hat{m},\hat{n}}(U)$ as a function of $U$. Here a rotation means an element $D$ of the special orthogonal group ${\rm SO}(3)$ or an element of the special unitary group ${\rm SU}(2)$ that corresponds to $D$. Decompositions of $U$ attaining the minimum number $N_{\hat{m},\hat{n}}(U)$ are also given explicitly.
\newcommand{\refroyalsub}[2]{\ref{#2}} \newcommand{\refappparen}[1]{(Appendix~\ref{#1})} \newcommand{\mtilde}[1]{#1} \begin{document} \title[The Minimum Number of Rotations About Two Axes]{The Minimum Number of Rotations About Two Axes for Constructing an Arbitrarily Fixed Rotation} \author{% Mitsuru Hamada} \iffalse \address{% Quantum Information Science Research Center, Quantum ICT Research Institute, Tamagawa University, Tamagawa-gakuen 1-chome, Machida, Tokyo 194-8610, Japan} \fi \thanks{% The author is with Tamagawa University, Tamagawa-gakuen 1-chome, Machida, Tokyo 194-8610, Japan.} \iffalse \curraddr{} \email{} \thanks{} \fi \keywords{SU(2), SO(3), rotation} \date{} \dedicatory{} \begin{abstract} For any pair of three-dimensional real unit vectors $\myv{m}$ and $\myv{\varvn}$ with $\abs{\rinnpr{\myv{m}}{\myv{\varvn}}} < 1$ and any rotation $U$, let $N_{\myv{m},\myv{n}}(U)$ denote the least value of a positive integer $k$ such that $U$ can be decomposed into a product of $k$ rotations about either $\myv{m}$ or $\myv{\varvn}$. This work gives the number $N_{\myv{m},\myv{n}}(U)$ as a function of $U$. Here a rotation means an element $D$ of the special orthogonal group ${\rm SO}(3)$ or an element of the special unitary group ${\rm SU}(2)$ that corresponds to $D$. Decompositions of $U$ attaining the minimum number $N_{\myv{m},\myv{n}}(U)$ are also given explicitly. \end{abstract} \maketitle \section{Introduction \label{ss:intro}} In this work, an issue on optimal constructions of rotations in the Euclidean space $\mymathbb{R}^3$, under some restriction, is addressed and solved. By a rotation or rotation matrix, we usually mean an element of the special orthogonal group ${\rm SO}(3)$. However, we follow the custom in quantum physics of calling not only an element of ${\rm SO}(3)$ but also that of the special unitary group ${\rm SU}(2)$ a rotation.
\begin{comment} \footnote{Recall that ${\rm SU}(2)$ and ${\rm SO}(3)$ denote the set of $2\times 2$ unitary matrices with determinant $1$ and the set of $3\times 3$ real orthogonal matrices with determinant $1$, respectively.} \end{comment} This is justified by the well-known homomorphism from ${\rm SU}(2)$ onto ${\rm SO}(3)$ (Section~\refroyalsub{ss:prel}{sssub:homomorphism}). Given a pair of three-dimensional real unit vectors $\myv{m}$ and $\myv{\varvn}$ with $\abs{\rinnpr{\myv{m}}{\myv{\varvn}}} < 1$, where $\myv{m}^{\rm T}$ denotes the transpose of $\myv{m}$, let $N_{\myv{m},\myv{n}}(\cD)$ denote the least value of a positive integer $k$ such that any rotation in $\cD$ can be decomposed into (constructed as) a product of $k$ rotations about either $\myv{m}$ or $\myv{\varvn}$, where $\cD = {\rm SU}(2),{\rm SO}(3)$. It is known that $ N_{\myv{m},\myv{n}}\big({\rm SO}(3)\big) = N_{\myv{m},\myv{n}}\big({\rm SU}(2)\big) = \lceil \pi / \arccos \abs{\rinnpr{\myv{m}}{\myv{\varvn}}} \rceil +1 $ for any pair of three-dimensional real unit vectors $\myv{m}$ and $\myv{\varvn}$ with $\abs{\rinnpr{\myv{m}}{\myv{\varvn}}} < 1$~\cite{Lowenthal71,Lowenthal72}. Then, a natural question arises: What is the least value, $N_{\myv{m},\myv{n}}(U)$, of a positive integer $k$ such that an arbitrarily fixed rotation $U$ can be decomposed into a product of $k$ rotations about either $\myv{m}$ or $\myv{\varvn}$? In this work, the minimum number $N_{\myv{m},\myv{n}}(U)$ is given as an explicit function of $U$, where $U$ is expressed in terms of parameters known as Euler angles~\cite{Wigner,BiedenharnLouck}. Moreover, optimal, i.e., minimum-achieving decompositions (constructions) of any fixed element $U \in {\rm SU}(2)$ are presented explicitly. \begin{comment} Naturally, the above formula on $N_{\myv{m},\myv{n}}:= N_{\myv{m},\myv{n}}\big({\rm SU}(2)\big)$ follows from the obtained stronger formula on $N_{\myv{m},\myv{n}}(U)$, as will be shown. 
\end{comment} In this work, not only explicit constructions but also simple inequalities on geometric quantities, which directly show lower bounds on the number of constituent rotations, will be presented. Remarkably, the proposed explicit constructions meet the obtained lower bounds, which shows both the optimality of the constructions and the tightness of the bounds. The results in this work were obtained before the author came to know Lowenthal's formula on $N_{\myv{m},\myv{n}}\big({\rm SO}(3)\big)$~\cite{Lowenthal71,Lowenthal72} and a related result~\cite{DA04}. Prior to the present work, the work~\cite{DA04} treated the issue of determining $N_{\myv{m},\myv{n}}(D)$, $D \in {\rm SO}(3)$. The interesting result~\cite{DA04}, however, gave $N_{\myv{m},\myv{n}}(D)$, $D \in {\rm SO}(3)$, only algorithmically (with the largest index of a sequence of real numbers with some property). \iffalse This work's expression and derivation of $N_{\myv{m},\myv{n}}(D)$ are considerably different from those of \cite{DA04}. \fi The distinctive features of the present work include the following: $N_{\myv{m},\myv{n}}(U)$ is given in terms of an explicit function of parameters of $U\in{\rm SU}(2)$; explicit optimal decompositions are presented; this work's results on $N_{\myv{m},\myv{n}}(U)$ imply Lowenthal's formula on $N_{\myv{m},\myv{n}}\big({\rm SO}(3)\big)$ in a consistent self-contained manner.% \footnote{Here the crux of the difficulty in obtaining this work's results will be explained. Finding the minimum {\em odd}\/ number of factors needed for decomposing $U$, which is expressed with a standard parameter $\beta$ of $U$, together with minimum-achieving decompositions, was relatively easy.
The crux lay in obtaining a solution to attain the minimum {\em even}\/ number of factors, which was found to be expressed with a new parameter $\beta'$ eventually.} Regarding another direction of related research, we remark that $N_{\myv{m},\myv{n}}(\cD)$ is known as the order of (uniform) generation of the Lie group $\cD$, and this notion has been extended to other Lie groups. The interested reader is referred to relatively extensive treatments on uniform generation~\cite{KLowenthal75,Leite91}, where one would find that even determining the order $N_{\myv{m},\myv{n}}\big({\rm SO}(3)\big)$ needs a special proof~\cite{Lowenthal71,Lowenthal72}, \cite[Appendix]{Leite91}. Detailed elementary arguments below would help us dispel some confusions related to $N_{\myv{m},\myv{n}}\big({\rm SU}(2)\big)$ often found in textbooks on quantum computation. There, besides unawareness of the fact that $N_{\myv{m},\myv{n}}\big({\rm SU}(2)\big)=\lceil \pi / \arccos \abs{\rinnpr{\myv{m}}{\myv{\varvn}}} \rceil +1$, one finds the wrong claim that $N_{\myv{m},\myv{n}}\big({\rm SU}(2)\big)$ is {\em three} regardless of the choice of non-parallel vectors $\myv{m}$ and $\myv{n}$. Regarding physics, this work has been affected by the issue of constructing an arbitrary unitary operator on a Hilbert space discussed in quantum physics~\cite{ReckZBB94}. This is relevant to universal gates for quantum computation~\cite{BoykinMPRV99}. In this context, requiring the availability of rotations about a pair of exactly orthogonal axes seems too idealistic. For example, consider a Hamiltonian $H$ of a quantum system represented by $\mymathbb{C}^2$, and note that $H$ determines the axis of the rotations $[c(t)]^{-1} \exp (-itH) \in {\rm SU}(2)$, $t\in \mymathbb{R}$, where $c(t)$ is a square root of $\det \exp (-itH)$. [Often, although not always, differences of unitary matrices (evolutions) up to scalar multiples are ignorable.]
Thus, explicit decompositions attaining the minimum $N_{\myv{m},\myv{n}}(U)$ of an arbitrary rotation $U$ for the generic vectors $\myv{m}$ and $\myv{n}$ will be useful. For applications to control, the reader is referred to \cite{DA04} and references therein. This paper is organised as follows. After giving preliminaries in Section~\ref{ss:prel}, the main theorem establishing $N_{\myv{m},\myv{n}}(U)$ and explicit constructions of rotations are presented in Section~\ref{ss:const}. Then, inequalities that show limits on constructions are presented in Section~\ref{ss:limits}. The proofs of the results of this work are presented in Section~\ref{ss:Proofs}. Section~\ref{ss:conc} contains the conclusion. Several arguments are relegated to appendices. \iffalse including the elementary proof of the formula on $N_{\myv{m},\myv{n}}$~\cite{Lowenthal71,Lowenthal72}. \fi \section{Preliminaries and a Known Result \label{ss:prel}} \subsection{Definitions} The notation to be used includes the following: $\mymathbb{N}$ denotes the set of strictly positive integers; $S^2= \{ \myv{v} \in \mymathbb{R}^3 \mid \| \myv{v} \| =1 \}$, where $\| \myv{v} \| = \sqrt{v_x^2+v_y^2+v_z^2}$ for $\myv{v}=(v_x,v_y,v_z)^{\rm T}$; $\lceil x \rceil$ denotes the smallest integer not less than $x \in \mymathbb{R}$. As usual, $\arccos x \in [0,\pi]$ and $\arcsin x \in [-\pi/2,\pi/2]$ for $x\in [-1,1]$. The Hermitian conjugate of a matrix $U$ is denoted by $U^{\dagger}$. Throughout, $I$ denotes the $2\times 2$ identity matrix; $X,Y$, and $Z$ denote the following Pauli matrices: \[ X=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Y=\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad Z=\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} .
\] We shall work with a matrix \begin{equation} R_{\myv{v}}(\theta) := (\cos\mfrac{\theta}{2}) I - i (\sin\mfrac{\theta}{2})(v_x X + v_y Y + v_z Z) \label{eq:Rtheta} \end{equation} where $\myv{v}=(v_x,v_y,v_z)^{\rm T} \in S^2$ and $\theta\in\mymathbb{R}$. This represents the rotation about $\myv{v}$ by angle $\theta$ (through the homomorphism in Section~\ref{sssub:homomorphism}). In particular, for $\myv{y}=(0,1,0)^{\rm T}$ and $\myv{z}=(0,0,1)^{\rm T}$, we put \[ R_{y}(\theta):=R_{\myv{y}}(\theta) = \begin{pmatrix} \cos\frac{\theta}{2} & - \sin\frac{\theta}{2} \\ \sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{pmatrix} \quad \mbox{and}\quad R_{z}(\theta):=R_{\myv{z}}(\theta) = \begin{pmatrix} e^{-i\frac{\theta}{2}} & 0 \\ 0 & e^{i\frac{\theta}{2}} \end{pmatrix} . \] For $\myv{m},\myv{\varvn}\in S^2$ with $\abs{\rinnpr{\myv{m}}{\myv{\varvn}}} < 1$, we define the following: \iffalse \footnote{% As a definition, $N_{\myv{m},\myv{n}}(U)$ is to be understood as $+\infty$ when the integer set to be minimised is empty, though the theorem tells this never happens.} \fi \begin{equation} N_{\myv{m},\myv{n}}(U) := \min \{ \varnu \in \mymathbb{N} \mid \exists V_1, \dots,V_{\varnu}\in \mathcal{R}_{\myv{m}} \cup \mathcal{R}_{\myv{\varvn}},\ U= V_1 \cdots V_{\varnu} \} \label{eq:Nmn} \end{equation} for $U\in {\rm SU}(2)$, where $ \mathcal{R}_{\myv{v}} := \{ R_{\myv{v}}(\theta) \mid \theta \in \mymathbb{R} \} $, and \begin{equation} N_{\myv{m},\myv{n}} := N_{\myv{m},\myv{n}}\big({\rm SU}(2)\big) := \min \{ k \in \mymathbb{N} \mid \forall U \in {\rm SU}(2), \ N_{\myv{m},\myv{n}}(U) \le k \} .
\end{equation} Using the homomorphism $F$ from ${\rm SU}(2)$ onto ${\rm SO}(3)$ to be defined in Section~\ref{sssub:homomorphism}, we put $ \hat{\mathcal{R}}_{\myv{v}} := \big\{ F\big(R_{\myv{v}}(\theta)\big) \mid \theta \in \mymathbb{R} \big\}$. We extend the definition of $N_{\myv{m},\myv{n}}$ to ${\rm SO}(3)$: \begin{equation} N_{\myv{m},\myv{n}}(D) := \min \{ \varnu \in \mymathbb{N} \mid \exists \genD_1, \dots,\genD_{\varnu}\in \hat{\mathcal{R}}_{\myv{m}} \cup \hat{\mathcal{R}}_{\myv{\varvn}},\ D= \genD_1 \cdots \genD_{\varnu} \} \label{eq:NmnSO3} \end{equation} for $D\in {\rm SO}(3)$ and \begin{equation} N_{\myv{m},\myv{n}}\big({\rm SO}(3)\big) := \min \{ k \in \mymathbb{N} \mid \forall D \in {\rm SO}(3), \ N_{\myv{m},\myv{n}}(D) \le k \} . \end{equation} \subsection{The Maximum of the Minimum Number of Constituent Rotations Over All Target Rotations} This work's results lead to an elementary self-contained proof of the following known theorem (Appendix~\ref{ss:proofthDecomp}). \begin{theorem}[{\rm Lowenthal~\cite{Lowenthal71,Lowenthal72}}] \label{th:Decomp} For any $\myv{m},\myv{\varvn}\in S^2$ with $\abs{\rinnpr{\myv{m}}{\myv{\varvn}}} < 1$, \[ N_{\myv{m},\myv{n}}\big({\rm SO}(3)\big) = N_{\myv{m},\myv{n}}\big({\rm SU}(2)\big) = \Big\lceil \frac{ \pi }{\arccos \abs{\rinnpr{\myv{m}}{\myv{\varvn}}} } \Big\rceil +1. \] \end{theorem} \subsection{Parameterisations of the Elements in SU(2)\label{sssub:para}} \mbox{}\ The following lemma presents a well-known parameterisation of ${\rm SU}(2)$ elements.
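As a numerical sanity check of this well-known ZYZ parameterisation $U = R_z(\alpha)R_y(\beta)R_z(\gamma)$ (a sketch in plain Python with numpy, kept outside the formal development; the angle-recovery formulas are the standard conventions, not taken from this paper):

```python
import numpy as np

# Pauli matrices and the rotation R_v(theta) of (eq:Rtheta)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def R(v, theta):
    v = np.asarray(v, dtype=float) / np.linalg.norm(np.asarray(v, dtype=float))
    return (np.cos(theta / 2) * I2
            - 1j * np.sin(theta / 2) * (v[0] * X + v[1] * Y + v[2] * Z))

def euler_zyz(U):
    """Recover (alpha, beta, gamma) with U = R_z(alpha) R_y(beta) R_z(gamma),
    beta in [0, pi], reading off the phases of the first row of (eq:uni1)."""
    a, b = U[0, 0], U[0, 1]
    beta = 2 * np.arctan2(abs(b), abs(a))
    alpha = -np.angle(a) - np.angle(-b)
    gamma = -np.angle(a) + np.angle(-b)
    return alpha, beta, gamma

rng = np.random.default_rng(1)
q = rng.normal(size=4)
q /= np.linalg.norm(q)
U = q[0] * I2 + 1j * (q[1] * X + q[2] * Y + q[3] * Z)  # random SU(2) via (eq:Uxyzw)
al, be, ga = euler_zyz(U)
V = R([0, 0, 1], al) @ R([0, 1, 0], be) @ R([0, 0, 1], ga)
print(np.allclose(U, V))  # the two factorisations agree
```

Here a random element of ${\rm SU}(2)$ is drawn from a unit quaternion as in (eq:Uxyzw), and `euler_zyz` inverts the matrix in (eq:uni1) entrywise; since $2\arctan2(|b|,|a|)\in[0,\pi]$, the recovered $\beta$ respects the range restriction discussed below.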
\begin{lemma}\label{lem:para} For any element $U\in{\rm SU}(2)$, there exist some $\alpha,\gamma \in \mymathbb{R}$, and $\beta\in [0,\pi]$ such that \begin{equation} U= \begin{pmatrix} e^{-i \frac{\gamma+\alpha}{2}}\cos\frac{\beta}{2} & - e^{i \frac{\gamma-\alpha}{2}} \sin\frac{\beta}{2}\\ e^{- i \frac{\gamma-\alpha}{2}} \sin\frac{\beta}{2} & e^{i \frac{\gamma+\alpha}{2}} \cos\frac{\beta}{2} \end{pmatrix} = R_{z}(\alpha) R_{y}(\beta) R_{z}(\gamma) . \label{eq:uni1} \end{equation} \end{lemma} The parameters $\alpha,\beta$, and $\gamma$ in this lemma are often called Euler angles.% \footnote{The restriction of $\beta$ to $[0,\pi]$ does not seem common. However, in a straightforward proof of this lemma, $\beta\in [0,\pi]$ can be chosen so that $\cos(\beta/2)=\abs{a}$ and $\sin(\beta/2)=\abs{b}$ when the first row of $U$ is $(a,b)$. Also any $R_{z}(\alpha') R_{y}(\beta') R_{z}(\gamma')$ without this restriction can be written as $R_{z}(\alpha) R_{y}(\beta) R_{z}(\gamma)$ with some $\beta\in [0,\pi]$ and $\alpha,\gamma\in\mymathbb{R}$. This readily follows from equations $R_{\myv{v}}(\theta+2\pi)=-R_{\myv{v}}(\theta)$, $\myv{v}\in S^2$, $\theta\in\mymathbb{R}$, and $R_z(-\pi)R_y(\beta')R_{z}(\pi)=R_y(-\beta')$, $\beta'\in\mymathbb{R}$. \label{fn:pipi}} The lemma can be rephrased as follows: Any matrix in ${\rm SU}(2)$ can be written as \begin{equation} \begin{pmatrix} a & b \\ - b^* & a^* \end{pmatrix}\label{eq:ab} \end{equation} with some complex numbers $a$ and $b$ such that $|a|^2+|b|^2=1$~\cite{Wigner}.
Hence, any matrix in ${\rm SU}(2)$ can be written as \begin{equation} \begin{pmatrix} w + i z & y + i x \\ - y + i x & w - i z \end{pmatrix} = w I + i ( x X + y Y + z Z ) \label{eq:Uxyzw} \end{equation} with some real numbers $x,y,z$, and $w$ such that $ w^2+x^2+y^2+z^2=1 $. Take a real number $\theta$ such that $\cos(\theta/2)=w$ and $\sin(\theta/2)=\sqrt{1-w^2}=\sqrt{x^2+y^2+z^2}$; write $x,y$, and $z$ as $x=-v_x\sin(\theta/2),y=-v_y\sin(\theta/2)$, and $z=-v_z\sin(\theta/2)$, where $v_x,v_y,v_z\in\mymathbb{R}$ and $v_x^2+v_y^2+v_z^2=1$. Thus, using real numbers $\theta,v_x,v_y,v_z\in\mymathbb{R}$ with $v_x^2+v_y^2+v_z^2=1$, any matrix in ${\rm SU}(2)$ can be written as \[ (\cos\mfrac{\theta}{2}) I - i (\sin\mfrac{\theta}{2}) (v_x X + v_y Y + v_z Z) , \] which is nothing but $R_{\myv{v}}(\theta)$ in (\ref{eq:Rtheta}). \subsection{Homomorphism from SU(2) onto SO(3)\label{sssub:homomorphism}} For $U \in {\rm SU}(2)$, we denote by $F(U)$ the matrix of the linear transformation on $\mymathbb{R}^3$ that sends $(x,y,z)^{\rm T}$ to $(x',y',z')^{\rm T}$ through \begin{equation}\label{eq:Uconj} U(xX+yY+zZ)U^{\dagger}=x'X+y'Y+z'Z . \end{equation} Namely, for any $(x,y,z)^{\rm T},(x',y',z')^{\rm T} \in \mymathbb{R}^3$ with (\ref{eq:Uconj}), \[ \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = F(U) \begin{pmatrix} x \\ y \\ z \end{pmatrix} . \] We also define \begin{equation} \hat{R}_{\myv{v}}(\theta):= F\big(R_{\myv{v}}(\theta)\big), \quad \myv{v}\in S^2,\theta\in\mymathbb{R}. \end{equation} \subsection{Generic Orthogonal Axes and Coordinate Axes\label{sssub:lmyz}} Lemma~\ref{lem:para} can be generalised as follows. \begin{lemma}\label{lem:paraEulerG} Let $\myv{l},\myv{m}\in S^2$ be vectors with $\rinnprsp{\myv{l}}{\myv{m}}=0$.
Then, for any $V\in {\rm SU}(2)$, there exist some $\alpha,\gamma\in\mymathbb{R}$ and $\beta\in [0,\pi]$ such that \begin{equation} V= R_{\myv{m}}(\alpha) R_{\myv{l}}(\beta) R_{\myv{m}}(\gamma) . \label{eq:Eulerlm} \end{equation} \end{lemma} \noindent {\em Proof.} \ Since $F$ is onto ${\rm SO}(3)$, there exists an element $U \in {\rm SU}(2)$ such that $\myv{l}=F(U) (0,1,0)^{\rm T} \quad\mbox{and}\quad \myv{m}=F(U) (0,0,1)^{\rm T}$.\footnote{For the sake of constructiveness, such an element $U$ is constructed in \refappcomma{app:pr_lm}.} With this element $U$, some $\alpha,\gamma\in\mymathbb{R}$, and some $\beta\in [0,\pi]$, write $U^{\dagger} V U = R_{z}(\alpha) R_{y}(\beta) R_{z}(\gamma)$ in terms of the parameterisation (\ref{eq:uni1}). Then, since $UR_{z}(\alpha)U^{\dagger}=R_{\myv{m}}(\alpha)$, $UR_{y}(\beta)U^{\dagger}=R_{\myv{l}}(\beta)$, and $UR_{z}(\gamma)U^{\dagger}=R_{\myv{m}}(\gamma)$, we obtain (\ref{eq:Eulerlm}). \hfill $\Box$\vspace*{1ex} We also have Lemma~\ref{lem:transR3}, which is easy but worth recognising. \begin{lemma}\label{lem:transR3} Let arbitrary $\kappa, \nu \in \mymathbb{N}$, $\myv{u}_1,\dots,\myv{u}_\kappa,\myv{v}_1,\dots,\myv{v}_{\nu} \in S^2$, and $U\in {\rm SU}(2)$ be given. Put $\myv{u}_1' = F(U)\myv{u}_1,\dots,\myv{u}_\kappa'=F(U)\myv{u}_\kappa$, $\myv{v}_1' = F(U)\myv{v}_1,\dots$, and $\myv{v}_\nu'=F(U)\myv{v}_{\nu}$. Then, for any $\theta_1,\dots, \theta_{\kappa},\phi_1,\dots, \phi_{\nu} \in \mymathbb{R}$, \[ R_{\myv{u}_1}(\theta_1) \cdots R_{\myv{u}_\kappa}(\theta_\kappa) = R_{\myv{v}_1}(\phi_1) \cdots R_{\myv{v}_{\nu}}(\phi_{\nu}) \] if and only if (iff) \[ R_{\myv{u}'_1}(\theta_1) \cdots R_{\myv{u}'_\kappa}(\theta_\kappa) = R_{\myv{v}'_1}(\phi_1) \cdots R_{\myv{v}'_{\nu}}(\phi_{\nu}) .
\] \end{lemma} \noindent {\em Proof.}\/ This readily follows from $UR_{\myv{u}_j}(\theta_j)U^{\dagger}=R_{\myv{u}'_j}(\theta_j)$ and $UR_{\myv{v}_j}(\phi_j)U^{\dagger}=R_{\myv{v}'_j}(\phi_j)$. \hfill $\Box$ \vspace*{1ex} \begin{comment} This lemma will be used often to rewrite a statement involving rotations about generic vectors as a statement involving rotations about coordinate axes, when the latter is easier to prove. \end{comment} \begin{comment} {\em Example.}\/ To illustrate the usefulness of reducing a statement with a generic vector to one with a coordinate axis, we shall check that $F\big(R_{\myv{m}}(\theta)\big)$ is actually a rotation about $\myv{m}$ by angle $\theta$ in the Euclidean space for any $\myv{m}\in S^2$ and $\theta\in \mymathbb{R}$. To see this, putting $\myv{z}=(0,0,1)^{\rm T}$, calculate $F\big(R_{\myv{z}}(\theta)\big)$ as \begin{equation} F\big(R_{\myv{z}}(\theta)\big)=F\big(R_z(\theta)\big) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix} , \label{eq:rotRz} \end{equation} which shows that $F\big(R_{\myv{z}}(\theta)\big)$ is actually the rotation about $\myv{z}$ by angle $\theta$. Then, since there exists an element $U\in {\rm SU}(2)$ such that $\myv{m}=F(U) \myv{z}$, with which $R_{\myv{m}}(\theta)$ can be written as $U R_{\myv{z}}(\theta)U^{\dagger}$, we have $F\big(R_{\myv{m}}(\theta)\big)=F(UR_{\myv{z}}(\theta)U^{\dagger})= F(U) F\big(R_{\myv{z}}(\theta)\big) [F(U)]^{-1}$. Regarding the last three rotations, note that $[F(U)]^{-1}$ moves $\myv{m}$ to $\myv{z}$, i.e., $[F(U)]^{-1}\myv{m}=\myv{z}$; that $F\big(R_{\myv{z}}(\theta)\big)$ is the rotation about the resultant vector $\myv{z}$ by $\theta$; and that $F(U)$ moves $\myv{z}$ back to $\myv{m}$. This implies that $F\big(R_{\myv{m}}(\theta)\big)$ is the desired rotation about $\myv{m}$ by $\theta$.
\end{comment} \section{The Minimum Numbers of Constituent Rotations and Optimal Constructions of an Arbitrary Rotation\label{ss:const}} Here we present the result establishing $N_{\myv{m},\myv{n}}(U)$, together with the definitions it requires. \begin{definition} For $\myv{v}\in S^2$ and \begin{equation} U= \begin{pmatrix} w + i z & y + i x \\ - y + i x & w - i z \end{pmatrix} = wI + i ( x X + y Y + z Z ) \ \ \in \ {\rm SU}(2) \label{eq:u123} \end{equation} where $w,x,y,z\in \mymathbb{R}$ are the parameters expressing $U$ uniquely, $\myb(\myv{v},U)$ is defined by \begin{equation} \myb(\myv{v},U) := \abs{(x,y,z)\myv{v}} . \label{eq:myb} \end{equation} \end{definition} \begin{definition}\label{def:g} Functions $f: \mymathbb{R}^3 \to [0,\pi]$ and $g: \mymathbb{R}^2 \times (0,\pi/2] \to \mymathbb{N}$ are defined by \begin{align*} f&(\alpha,\beta,\delta):= \\ & 2 \arccos \sqrt{ \cos^2\frac{\beta}{2}\cos^2\frac{\delta}{2}+ \sin^2\frac{\beta}{2}\sin^2\frac{\delta}{2} +2\cos\alpha \sin\frac{\beta}{2}\sin\frac{\delta}{2} \cos\frac{\beta}{2}\cos\frac{\delta}{2} } \end{align*} and \[ g(\alpha,\beta,\delta) := \begin{cases} {\displaystyle 2\Big\lceil \frac{f(\alpha,\beta,\delta)}{2\delta} + \frac{1}{2} \Big\rceil } & \mbox{if $f(\alpha,\beta,\delta) \ge \delta$}\\ 4 & \mbox{otherwise.} \end{cases} \] \end{definition} \begin{theorem}\label{th:num_rot} For any $\myv{m},\myv{n}\in S^2$ with $\rinnpr{\myv{m}}{\myv{n}} \in [0,1)$, $\alpha,\gamma\in\mymathbb{R}$, and $\beta\in [0,\pi]$, if \[ \myb(\myv{m},U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) \ge \myb(\myv{n},U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) , \] then \[ N_{\myv{m},\myv{n}}\big(F(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}})\big) = N_{\myv{m},\myv{n}}(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) = \min \Big\{ 2 \Big\lceil \frac{\beta}{2\delta} \Big\rceil +1 , \, g(\alpha,\beta,\delta) , \, g(\gamma,-\beta,\delta) \Big\} \] where $\delta=\arccos\rinnpr{\myv{m}}{\myv{n}} \in (0,\pi/2]$, $\myv{l} =\| \myv{m} \times \myv{n} \|^{-1} \myv{m} \times \myv{n}$, and \[ U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}} := R_{\myv{m}}(\alpha)R_{\myv{l}}(\beta) R_{\myv{m}}(\gamma) . \] \end{theorem} Note that there is no loss of generality in assuming $\myb(\myv{m},U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) \ge \myb(\myv{n},U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}})$, but also note that $\alpha,\beta$, and $\gamma$ vary, in general, if $\myv{m}$ and $\myv{n}$ are interchanged. We give two constructions, or decompositions, which will turn out to attain the minimum number $N_{\myv{m},\myv{n}}(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}})$ in the theorem. \begin{proposition}\label{lem:const_odd} Given arbitrary $\myv{m},\myv{n}\in S^2$ with $\rinnpr{\myv{m}}{\myv{n}} \in [0,1)$, $\alpha,\gamma\in\mymathbb{R}$, and $\beta\in [0,\pi]$, put \begin{equation}\label{eq:premise_delta} \delta=\arccos\rinnpr{\myv{m}}{\myv{n}}\in (0,\pi/2] \end{equation} and \[ \myv{l} =\| \myv{m} \times \myv{n} \|^{-1} \myv{m} \times \myv{n}.
\] Then, for any $k\in\mymathbb{N}$ and $\beta_1, \dots, \beta_k \in (0, 2\delta]$ satisfying \begin{equation}\label{eq:beta1k} \beta=\beta_1 + \cdots +\beta_k , \end{equation} there exist some $\alpha_j,\gamma_j,\theta_j\in\mymathbb{R}$ such that \begin{equation}\label{eq:pro_RRRR} R_{\myv{l}}(\beta_j )= R_{\myv{m}}(-\alpha_j) R_{\myv{n}}(\theta_j) R_{\myv{m}}(-\gamma_j) \end{equation} for $j=1,\dots,k$. For these parameters, it holds that \begin{align} R_{\myv{m}}(\alpha)& R_{\myv{l}}(\beta) R_{\myv{m}}(\gamma) = \notag\\ & R_{\myv{m}}(\alpha-\alpha_1)R_{\myv{n}}(\theta_1) R_{\myv{m}}(-\gamma_1-\alpha_2)R_{\myv{n}}(\theta_2) R_{\myv{m}}(-\gamma_2-\alpha_3)R_{\myv{n}}(\theta_3) \cdots \notag\\ & \cdot R_{\myv{m}}(-\gamma_{k-1}-\alpha_{k})R_{\myv{n}}(\theta_{k}) R_{\myv{m}}(-\gamma_{k}+\gamma) .\label{eq:pro_prod} \end{align} \end{proposition} \begin{remark}\label{rem:const} The least value of $k$ such that (\ref{eq:beta1k}) holds for some $\beta_1, \dots, \beta_k \in (0,2\delta]$ is $\lceil \beta/(2\delta) \rceil$.\footnote{To make the construction explicit, one can set $\beta_j = 2\delta$ for $j \ne k$.
The analogous comment applies to the division of $\beta'+\delta$ in Proposition~\ref{lem:const_even}.} Hence, this proposition gives a decomposition of an arbitrary element $U = R_{\myv{m}}(\alpha) R_{\myv{l}}(\beta) R_{\myv{m}}(\gamma) \in {\rm SU}(2)$ into the product of $2\lceil \beta/(2\delta) \rceil +1$ rotations.\footnote{All remarks except Remark~\ref{rem:const}, which needs no proof, will be proved in what follows.} \end{remark} \begin{remark}\label{rem:const_expl} For $\mtilde{\beta},\delta\in\mymathbb{R}$ with $0 \le \mtilde{\beta}/2 \le \delta \le \pi/2$, $\delta \ne 0$, and $t \in \mymathbb{R}$, let \[ H_t(\mtilde{\beta},\delta) := \begin{cases} 0 & \mbox{if $\mtilde{\beta}/2 <\delta = \pi/2$}\\ t & \mbox{if $\mtilde{\beta}/2 = \delta = \pi/2$}\\ {\displaystyle \arcsin \frac{\tan (\mtilde{\beta}/2) }{\tan \delta}} & \mbox{otherwise.} \end{cases} \] Then, an explicit instance of the set of parameters $\alpha_j, \gamma_j$, and $\theta_j$ for which (\ref{eq:pro_RRRR}) holds is given by $(\alpha_j,\gamma_j,\theta_j)^{\rm T}=\myvec{\sigma}_{t_j}(\beta_j,\delta)$, where \begin{equation} \myvec{\sigma}_t(\beta,\delta):= \begin{pmatrix} H_t(\beta, \delta) - \slfrac{\pi}{2}\\ H_t(\beta, \delta) + \slfrac{\pi}{2}\\ 2 \arcsin \frac{\displaystyle \sin (\beta/2) }{\displaystyle \sin \delta} \end{pmatrix} \end{equation} and $t_j\in\mymathbb{R}$ can be chosen arbitrarily, $j=1,\dots,k$. [These choices also make (\ref{eq:pro_prod}) hold.] \end{remark} \begin{proposition}\label{lem:const_even} Given any $\myv{m},\myv{n}\in S^2$ with $\rinnpr{\myv{m}}{\myv{n}} \in [0,1)$, put $\delta=\arccos\rinnpr{\myv{m}}{\myv{n}}\in (0,\pi/2]$ and $ \myv{l} =\| \myv{m} \times \myv{n} \|^{-1} \myv{m} \times \myv{n} $.
For an arbitrary $U \in {\rm SU}(2)$, choose parameters $\alpha',\gamma'\in \mymathbb{R}$ and $\beta'\in [0,\pi]$ such that \begin{equation}\label{eq:const_even} R_{\myv{l}}(-\delta)U = R_{\myv{m}}(\alpha') R_{\myv{l}}(\beta') R_{\myv{m}}(\gamma') . \end{equation} Then, \begin{equation} \label{eq:pro_prod_even0} U = R_{\myv{n}}(\alpha') R_{\myv{l}}(\beta'+\delta) R_{\myv{m}}(\gamma') . \end{equation} Furthermore, for any $k' \in \mymathbb{N}$ and $\beta'_1, \dots, \beta'_{k'} \in (0, 2\delta]$ satisfying \begin{equation}\label{eq:beta1kprime} \beta'+\delta=\beta'_1 + \cdots +\beta'_{k'} , \end{equation} there exist some $\alpha'_j, \gamma'_j, \theta'_j\in\mymathbb{R}$ such that \begin{equation}\label{eq:pro_RRRR_even} R_{\myv{l}}(\beta'_j )= R_{\myv{m}}(-\alpha'_j) R_{\myv{n}}(\theta'_j) R_{\myv{m}}(-\gamma'_j) \end{equation} for $j=1,\dots,k'$. For these parameters, it holds that \begin{align} U = & \ R_{\myv{n}}(\alpha') R_{\myv{m}}(-\alpha'_1)R_{\myv{n}}(\theta'_1) R_{\myv{m}}(-\gamma'_1-\alpha'_2)R_{\myv{n}}(\theta'_2) R_{\myv{m}}(-\gamma'_2-\alpha'_3)R_{\myv{n}}(\theta'_3) \cdots \notag\\ & \ \cdot R_{\myv{m}}(-\gamma'_{k'-1}-\alpha'_{k'})R_{\myv{n}}(\theta'_{k'}) R_{\myv{m}}(-\gamma'_{k'}+\gamma') . \label{eq:pro_prod_even} \end{align} \end{proposition} \begin{remark}\label{rem:const_even} The least value of $k'$ such that (\ref{eq:beta1kprime}) holds for some $\beta'_1, \dots, \beta'_{k'} \in (0,2\delta]$ is $\lceil (\beta' +\delta)/ (2\delta) \rceil =\lceil \beta' / (2\delta) + 1/2 \rceil$.
Moreover, if $\beta' \ge \delta$ and $k'=\lceil \beta'/(2\delta) + 1/2 \rceil$, the parameter $\alpha'_1$ can be chosen so that it satisfies $\alpha'_1=0$ as well as (\ref{eq:pro_RRRR_even}) and (\ref{eq:pro_prod_even}). Hence, when $\beta' \ge \delta$, this proposition and the fact just mentioned give a decomposition of an arbitrary element $U = R_{\myv{n}}(\alpha') R_{\myv{l}}(\beta'+\delta) R_{\myv{m}}(\gamma') \in {\rm SU}(2)$ into the product of $2 \lceil \beta' / (2\delta) + 1/2 \rceil$ rotations, and when $\beta' < \delta$, a decomposition of $U$ into the product of four rotations. \end{remark} \begin{remark}\label{rem:const_expl_even} An explicit instance of the set of parameters $\alpha'_j, \gamma'_j$, and $\theta'_j$, $j=1,\dots,k'$, for which (\ref{eq:pro_RRRR_even}) and (\ref{eq:pro_prod_even}) hold is given by $(\alpha'_j,\gamma'_j,\theta'_j)^{\rm T}=\myvec{\sigma}_{t_j}(\beta'_j,\delta)$, where $t_j\in\mymathbb{R}$ can be chosen arbitrarily, $j=1,\dots,k'$. \end{remark} \section{Limits on Constructions\label{ss:limits}} In order to bound $N_{\myv{m},\myv{n}}( D )$, etc., from below, we use the geodesic metric on the unit sphere $S^2$, which is denoted by $d$. Specifically, \begin{equation} d(\myv{u},\myv{v}) := \arccos \rinnpr{\myv{u}}{\myv{v}} \in [0,\pi] \end{equation} for $\myv{u},\myv{v}\in S^2$. This is the length of the geodesic connecting $\myv{u}$ and $\myv{v}$ on $S^2$. We have the following lemma. [Recall that we have put $\hat{R}_{\myv{v}}(\theta)= F\big(R_{\myv{v}}(\theta)\big)$.] \begin{lemma}\label{lem:ti} Let $\myv{n},\myv{m}$ be arbitrary vectors in $S^2$ with $\delta = d(\myv{m},\myv{n}) = \arccos \rinnpr{\myv{m}}{\myv{n}} \in (0,\pi]$.
Then, for any $k \in \mymathbb{N}$ and $\phi_1,\dots,\phi_{2k}\in\mymathbb{R}$, the following inequalities hold: \begin{gather} d(\hat{R}_{\myv{m}}(\phi_{2k-1}) \hat{R}_{\myv{n}}(\phi_{2k-2}) \cdots \hat{R}_{\myv{m}}(\phi_3) \hat{R}_{\myv{n}}(\phi_{2})\hat{R}_{\myv{m}}(\phi_1)\myv{m}, \myv{m}) \le 2 (k-1) \delta, \label{eq:lem3_3}\\ d(\hat{R}_{\myv{m}}(\phi_{2k-1}) \hat{R}_{\myv{n}}(\phi_{2k-2}) \cdots \hat{R}_{\myv{m}}(\phi_3) \hat{R}_{\myv{n}}(\phi_{2}) \hat{R}_{\myv{m}}(\phi_1)\myv{m}, \myv{n}) \le (2 k -1) \delta, \label{eq:lem3_4}\\ d(\hat{R}_{\myv{n}}(\phi_{2k}) \hat{R}_{\myv{m}}(\phi_{2k-1}) \cdots \hat{R}_{\myv{m}}(\phi_3) \hat{R}_{\myv{n}}(\phi_{2}) \hat{R}_{\myv{m}}(\phi_1)\myv{m}, \myv{n}) \le (2 k -1) \delta , \label{eq:lem3_2} \\ d(\hat{R}_{\myv{n}}(\phi_{2k}) \hat{R}_{\myv{m}}(\phi_{2k-1}) \cdots \hat{R}_{\myv{m}}(\phi_3) \hat{R}_{\myv{n}}(\phi_{2}) \hat{R}_{\myv{m}}(\phi_1)\myv{m}, \myv{m}) \le 2 k \delta. \label{eq:lem3_1} \end{gather} \end{lemma} This can be shown easily by induction on $k$ using the triangle inequality for $d$. In what follows, (\ref{eq:lem3_3}) and (\ref{eq:lem3_2}) will be used in the following forms: \begin{equation} 2\Big\lceil \frac{d(D\myv{m},\myv{m})}{2\delta} \Big\rceil +1 \le 2 k -1 \quad\mbox{and}\quad 2\Big\lceil \frac{d(D'\myv{m},\myv{n})}{2\delta} +\frac{1}{2}\Big\rceil \le 2 k . \end{equation} These bounds hold when $D$ and $D' \in {\rm SO}(3)$ equal the product of $2k-1$ rotations and that of $2k$ rotations, respectively, as in Lemma~\ref{lem:ti} (since $k$ is an integer). It will turn out that these bounds are tight.
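The inequalities of Lemma~\ref{lem:ti} are easy to probe numerically. The following Python/NumPy sketch (ours, not part of the proof; Rodrigues' formula stands in for $\hat{R}_{\myv{v}}(\theta)$) checks (\ref{eq:lem3_2}) and (\ref{eq:lem3_1}) on random alternating products:

```python
import numpy as np

def rot(v, t):
    """SO(3) rotation about the unit vector v by angle t (Rodrigues' formula),
    standing in for R-hat_v(t) = F(R_v(t))."""
    v = np.asarray(v, dtype=float)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

def d(u, v):
    """Geodesic distance on S^2."""
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

delta = 0.4
m = np.array([0.0, 0.0, 1.0])
n = rot([0.0, 1.0, 0.0], delta) @ m              # so that d(m, n) = delta
k = 3
rng = np.random.default_rng(1)
for _ in range(1000):
    phis = rng.uniform(0, 2 * np.pi, size=2 * k)
    w = m.copy()
    for j, phi in enumerate(phis):               # axes alternate m, n, m, n, ...
        w = rot(m if j % 2 == 0 else n, phi) @ w
    assert d(w, n) <= (2 * k - 1) * delta + 1e-9     # (eq:lem3_2)
    assert d(w, m) <= 2 * k * delta + 1e-9           # (eq:lem3_1)
```

Since a rotation about $\myv{m}$ preserves the distance to $\myv{m}$ (likewise for $\myv{n}$), each factor can move a point at most $2\delta$ farther from either axis point, which is exactly the triangle-inequality argument behind the lemma.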
\section{Proof of the Results\label{ss:Proofs}} \subsection{Structure of the Proof} Here we describe the structure of the whole proof of the results in this work. Theorem~\ref{th:num_rot} is obtained as a consequence of Lemma~\ref{lem:1}, to be presented below. The constructive half of Lemma~\ref{lem:1} is due to Propositions~\ref{lem:const_odd} and \ref{lem:const_even}. The other half of Lemma~\ref{lem:1}, related to limits on constructions, is due to Lemma~\ref{lem:ti}. Theorem~\ref{th:Decomp} is derived from Theorem~\ref{th:num_rot} in Appendix~\ref{ss:proofthDecomp}. \subsection{Proof of Propositions~\ref{lem:const_odd} and \ref{lem:const_even}} The following lemma is fundamental to the results in this work. \begin{lemma}\label{lem:EulerG} For any $\beta,\theta\in\mymathbb{R}$ and for any $\myv{u}, \myv{l},\myv{m} \in S^2$ such that $\rinnprsp{\myv{l}}{\myv{m}}=0$, the following two conditions are equivalent.\vspace{1ex} \mbox{}\hspace{.5ex}I. There exist some $\alpha,\gamma\in\mymathbb{R}$ such that \begin{equation}\label{eq:EulerG} R_{\myv{u}}(\theta) = R_{\myv{m}}(\alpha) R_{\myv{l}}(\beta) R_{\myv{m}}(\gamma) . \end{equation} \mbox{}\hspace{.5ex}II. $\sqrt{1-(\rinnpr{\myv{m}}{\myv{u}})^2}\abs{\sin\sfrac{\theta}{2}} =\abs{\sin\sfrac{\beta}{2}}$. \end{lemma} \noindent {\em Proof.} 1) Take an element $U\in {\rm SU}(2)$ such that \begin{equation} \myv{l}=F(U) (0,1,0)^{\rm T} \quad\mbox{and}\quad \myv{m}=F(U) (0,0,1)^{\rm T} , \end{equation} and put $\myv{v}=(v_x,v_y,v_z)^{\rm T}$ for the parameters $v_x,v_y$, and $v_z$ such that \begin{equation} \myv{u}= v_x \myv{l} \times \myv{m} + v_y \myv{l} + v_z \myv{m}.\label{eq:u_v} \end{equation} Then, owing to Lemma~\ref{lem:transR3}, (\ref{eq:EulerG}) holds iff \begin{equation} R_{\myv{v}}(\theta) = R_{z}(\alpha) R_{y}(\beta) R_{z}(\gamma) .
\label{eq:v_zyz} \end{equation} 2) A direct calculation shows \begin{eqnarray} R_{z}(\alpha) R_{y}(\beta) R_{z}(\gamma) &=& \cos\frac{\beta}{2} \cos\frac{\gamma+\alpha}{2} I -i\sin\frac{\beta}{2} \sin\frac{\gamma-\alpha}{2} X \notag\\ && \mbox{}-i\sin\frac{\beta}{2} \cos\frac{\gamma-\alpha}{2} Y -i\cos\frac{\beta}{2} \sin\frac{\gamma+\alpha}{2} Z .\label{eq:zyz} \end{eqnarray} Hence, (\ref{eq:v_zyz}) is equivalent to \begin{numcases} {} \cos\mfrac{\theta}{2} = \cos\mfrac{\beta}{2} \cos\mfrac{\gamma+\alpha}{2}\label{eq:1}\\ v_x \sin\mfrac{\theta}{2} = \sin\mfrac{\beta}{2} \sin\mfrac{\gamma-\alpha}{2} \label{eq:2}\\ v_y \sin\mfrac{\theta}{2} = \sin\mfrac{\beta}{2} \cos\mfrac{\gamma-\alpha}{2} \label{eq:3}\\ v_z \sin\mfrac{\theta}{2} = \cos\mfrac{\beta}{2} \sin\mfrac{\gamma + \alpha}{2} . \label{eq:4} \end{numcases} 3) We shall prove I $\Rightarrow$ II. Squaring each side of (\ref{eq:2}) and (\ref{eq:3}) and summing the resulting pair (note that $v_x^2+v_y^2=1-v_z^2$), we have \begin{equation} \sqrt{1-v_z^2}\abs{\sin\mfrac{\theta}{2}}=\abs{\sin\mfrac{\beta}{2}}.\label{eq:IIorg} \end{equation} [Eqs.\ (\ref{eq:1}) and (\ref{eq:4}) similarly imply (\ref{eq:IIorg}).] But (\ref{eq:IIorg}) implies II in view of (\ref{eq:u_v}). 4) Next, we shall prove II $\Rightarrow$ I.
Transforming $(\alpha,\gamma)$ into $(\eta,\zeta)$, where the two pairs are related by \begin{equation} \eta=\frac{\gamma+\alpha}{2} \quad\mbox{and}\quad \zeta=\frac{\gamma-\alpha}{2}, \label{eq:5_6} \end{equation} we see, from paragraphs 1) and 2), that I is equivalent to the following condition: there exist some $\eta,\zeta\in\mymathbb{R}$ such that \begin{numcases} {} \cos\mfrac{\theta}{2} = \cos\mfrac{\beta}{2} \cos\eta \label{eq:7} \\ v_x \sin\mfrac{\theta}{2} = \sin\mfrac{\beta}{2} \sin\zeta \label{eq:8}\\ v_y \sin\mfrac{\theta}{2} = \sin\mfrac{\beta}{2} \cos\zeta \label{eq:9}\\ v_z \sin\mfrac{\theta}{2} = \cos\mfrac{\beta}{2} \sin\eta. \label{eq:10} \end{numcases} Hence, it is enough to show that II implies the existence of some $\eta,\zeta\in\mymathbb{R}$ satisfying (\ref{eq:7})--(\ref{eq:10}). Now suppose $\cos\mfrac{\beta}{2} \ne 0$. Then, if we show \begin{equation} \frac{ \cos^2\mfrac{\theta}{2} }{ \cos^2\mfrac{\beta}{2} } + \frac{ v_z^2 \sin^2\mfrac{\theta}{2} }{ \cos^2\mfrac{\beta}{2} } = 1 , \label{eq:pr1} \end{equation} it will immediately imply the existence of $\eta$ satisfying (\ref{eq:7}) and (\ref{eq:10}). \begin{comment} Namely, $\eta$ will be specified by \begin{equation} \cos\eta = \frac{ \cos\mfrac{\theta}{2} }{ \cos\mfrac{\beta}{2} } \quad\mbox{and}\quad \sin\eta = \frac{ v_z \sin\mfrac{\theta}{2} }{ \cos\mfrac{\beta}{2} }.
\label{eq:eta_tri} \end{equation} \end{comment} From II, however, we have (\ref{eq:IIorg}), and hence $(1-v_z^2)\sin^2\mfrac{\theta}{2}=\sin^2\mfrac{\beta}{2}$, i.e., $1-(1-v_z^2)\sin^2\mfrac{\theta}{2}=\cos^2\mfrac{\beta}{2}$, which is equivalent to (\ref{eq:pr1}) by the assumption $\cos\mfrac{\beta}{2} \ne 0$. If $\cos\mfrac{\beta}{2} = 0$, then $\abs{\sin\mfrac{\beta}{2}}=1$. This and (\ref{eq:IIorg}) imply $1-v_z^2=\abs{\sin\mfrac{\theta}{2}}=1$, and hence $v_z=\cos\mfrac{\theta}{2}=0$. Then, (\ref{eq:7}) and (\ref{eq:10}) hold for any choice of $\eta$. In a similar way, if $\sin\mfrac{\beta}{2} \ne 0$, \begin{equation} \frac{ v_x^2 \sin^2\mfrac{\theta}{2} }{ \sin^2\mfrac{\beta}{2} } + \frac{ v_y^2 \sin^2\mfrac{\theta}{2} }{ \sin^2\mfrac{\beta}{2} } = 1 \label{eq:pr2} \end{equation} will immediately imply the existence of $\zeta$ satisfying (\ref{eq:8}) and (\ref{eq:9}). \begin{comment} Namely, $\zeta$ will be specified by \begin{equation} \sin\zeta = \frac{v_x \sin\mfrac{\theta}{2}}{ \sin\mfrac{\beta}{2} } \quad\mbox{and}\quad \cos\zeta = \frac{v_y \sin\mfrac{\theta}{2}}{ \sin\mfrac{\beta}{2} } . \label{eq:pr3} \end{equation} \end{comment} But (\ref{eq:pr2}) again follows from II, i.e.\ (\ref{eq:IIorg}), since $1-v_z^2=v_x^2+v_y^2$. If $\sin\mfrac{\beta}{2} = 0$, both (\ref{eq:8}) and (\ref{eq:9}) similarly hold for any choice of $\zeta$.
\hfill $\Box$\vspace*{1ex} \noindent {\em Proof of Proposition~\ref{lem:const_odd}.}\/ \hspace*{2.8ex} Choose a parameter $\theta_j$ such that $\abs{\sin(\theta_j/2)}=\sin(\beta_j/2)/\sin\delta$, which is possible by the assumption $\beta_j \in (0, 2\delta]$; then it follows from Lemma~\ref{lem:EulerG} that there exist some $\alpha_j,\gamma_j\in \mymathbb{R}$ such that (\ref{eq:pro_RRRR}), i.e., $R_{\myv{l}}(\beta_j )= R_{\myv{m}}(-\alpha_j) R_{\myv{n}}(\theta_j) R_{\myv{m}}(-\gamma_j)$, holds for $j=1,\dots,k$. Inserting these into \[ R_{\myv{m}}(\alpha)R_{\myv{l}}(\beta)R_{\myv{m}}(\gamma)=R_{\myv{m}}(\alpha)R_{\myv{l}}(\beta_1) \cdots R_{\myv{l}}(\beta_k) R_{\myv{m}}(\gamma), \] we obtain (\ref{eq:pro_prod}). \hfill $\Box$\vspace*{1ex} \noindent {\em Proof of Proposition~\ref{lem:const_even}.}\/ Note that $R_{\myv{l}}(\delta)R_{\myv{m}}(\alpha')R_{\myv{l}}(-\delta)= R_{\myv{n}}(\alpha')$, which is equivalent, by Lemma~\ref{lem:transR3}, to $R_{y}(\delta)R_{z}(\alpha')R_{y}(-\delta)=R_{\myv{v}}(\alpha')$, where $\myv{v} = (\sin\delta, 0, \cos\delta)^{\rm T}$ (see Figure~\ref{fig:1}), and therefore can be checked easily by a direct calculation. Using this equation, we can rewrite (\ref{eq:const_even}) as $U=R_{\myv{n}}(\alpha') R_{\myv{l}}(\beta'+\delta) R_{\myv{m}}(\gamma')$, which is (\ref{eq:pro_prod_even0}).
Then, applying to $R_{\myv{l}}(\beta'+\delta) R_{\myv{m}}(\gamma')$ the decomposition in Proposition~\ref{lem:const_odd} with $(\alpha,\beta,\gamma)$ replaced by $(0,\beta'+\delta,\gamma')$, it readily follows that there exist some $\alpha'_j,\gamma'_j$, and $\theta'_j \in \mymathbb{R}$, $j=1,\dots,k'$, that satisfy the following: $\abs{\sin(\theta'_j/2)}=\sin(\beta'_j/2)/\sin\delta$ and (\ref{eq:pro_RRRR_even}) for $j=1,\dots,k'$, and \begin{align} R_{\myv{l}}(\beta'+\delta)& R_{\myv{m}}(\gamma') \notag\\ =& \ R_{\myv{m}}(-\alpha'_1)R_{\myv{n}}(\theta'_1) R_{\myv{m}}(-\gamma'_1-\alpha'_2)R_{\myv{n}}(\theta'_2) R_{\myv{m}}(-\gamma'_2-\alpha'_3)R_{\myv{n}}(\theta'_3) \cdots \notag\\ & \ \cdot R_{\myv{m}}(-\gamma'_{k'-1}-\alpha'_{k'})R_{\myv{n}}(\theta'_{k'}) R_{\myv{m}}(-\gamma'_{k'}+\gamma') . \end{align} Thus, we obtain the proposition. \hfill $\Box$\vspace*{1ex} Remarks~\ref{rem:const_expl} and \ref{rem:const_expl_even} to these propositions are proved in \refapp{app:proof_rem}.
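The explicit parameters of Remark~\ref{rem:const_expl} can also be checked numerically in the frame of Figure~\ref{fig:1} ($\myv{m}$ along the $z$-axis, $\myv{l}$ along the $y$-axis, $\myv{n}=(\sin\delta,0,\cos\delta)^{\rm T}$). The following sketch (ours, not part of the proofs) verifies (\ref{eq:pro_RRRR}) for $(\alpha_j,\gamma_j,\theta_j)^{\rm T}=\myvec{\sigma}_t(\beta_j,\delta)$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def R(v, t):
    """R_v(t) = cos(t/2) I - i sin(t/2) (v_x X + v_y Y + v_z Z)."""
    return np.cos(t / 2) * I2 - 1j * np.sin(t / 2) * (v[0] * X + v[1] * Y + v[2] * Z)

def sigma(beta, delta, t=0.0):
    """(alpha_j, gamma_j, theta_j) = sigma_t(beta, delta) of the remark,
    with H_t implemented case by case."""
    if np.isclose(delta, np.pi / 2):
        H = t if np.isclose(beta / 2, delta) else 0.0
    else:
        H = np.arcsin(np.tan(beta / 2) / np.tan(delta))
    theta = 2 * np.arcsin(np.sin(beta / 2) / np.sin(delta))
    return H - np.pi / 2, H + np.pi / 2, theta

# Frame of Figure 1: m = z-axis, l = y-axis, n = (sin delta, 0, cos delta).
delta, beta_j = 0.6, 0.9                 # requires 0 < beta_j <= 2*delta
m, l = (0.0, 0.0, 1.0), (0.0, 1.0, 0.0)
n = (np.sin(delta), 0.0, np.cos(delta))
a_j, g_j, th_j = sigma(beta_j, delta)
# R_l(beta_j) = R_m(-alpha_j) R_n(theta_j) R_m(-gamma_j)   [eq. (pro_RRRR)]
assert np.allclose(R(l, beta_j),
                   R(m, -a_j) @ R(n, th_j) @ R(m, -g_j), atol=1e-10)
```

By Lemma~\ref{lem:transR3}, checking the identity in this one frame suffices for generic orthonormal $\myv{l},\myv{m}$.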
The statement on $\alpha'_1$ in Remark~\ref{rem:const_even} follows from Remark~\ref{rem:const_expl_even} (put $\beta'_1=2\delta$ and $t_1=\pi/2$) or, more directly, from the equation $R_{\myv{l}}(2\delta)=R_{\myv{n}}(\pi)R_{\myv{m}}(-\pi)$, which is equivalent, by Lemma~\ref{lem:transR3}, to $R_{y}(2\delta)=R_{\myv{v}}(\pi)R_{z}(-\pi)$, where $\myv{v}=(\sin\delta,0,\cos\delta)^{\rm T}$. \begin{figure} \begin{center} \begin{picture}(150,165)(-50,-55) \put(111,0){\makebox(0,0)[l]{$\myv{z}$\ ($\myv{m}$)}} \put(0,0){\vector(1,0){109.5}} \put(0,100){\makebox(0,0)[b]{$\myv{x}$\ ($\myv{l} \times \myv{m}$)}} \put(0,0){\vector(0,1){98.5}} \put(1,-4){\makebox(0,0)[t]{$0$}} \put(0,0){\vector(-1,-1){40}} \put(-40,-40){\makebox(0,0)[t]{$\myv{y}$\ ($\myv{l}$)}} \put(0,0){\vector(2,1){98}} \put(101,50){\makebox(0,0)[l]{$\myv{v}$\ ($\myv{n}$)}} \multiput(98,0)(0,2){25}{\line(0,1){1}} \put(92,-2){\makebox(0,0)[t]{$\cos\delta$}} \multiput(0,49)(2,0){49}{\line(1,0){1}} \put(-3,50){\makebox(0,0)[r]{$\sin\delta$}} \end{picture} \caption{Configuration of $\myv{l},\myv{m}$, and $\myv{n}$ in Propositions~\ref{lem:const_odd} and \ref{lem:const_even}, and configuration of $\myv{y}=(0,1,0)^{\rm T}$, $\myv{z}=(0,0,1)^{\rm T}$, and $\myv{v}$ in arguments around these propositions\label{fig:1}} \end{center} \end{figure} \subsection{Proof of Theorem~\ref{th:num_rot}\label{sssub:ProofTh2}} Let $2\mymathbb{N}-1$ and $2\mymathbb{N}$ denote the set of odd numbers in $\mymathbb{N}$ and that of even numbers in $\mymathbb{N}$, respectively. We define the following for $\myv{m},\myv{n}\in S^2$ with $\abs{\rinnpr{\myv{m}}{\myv{n}}} < 1$: \begin{align*} \varM{\myv{m},\myv{n}}{{\rm odd}}(U) := \min \{ \varnu \in 2\mymathbb{N} \! - \!
1 \mid \, & \exists V_1, V_3, \dots,V_{\varnu}\in \mathcal{R}_{\myv{m}},\\ & \exists V_2, V_4, \dots,V_{\varnu-1}\in \mathcal{R}_{\myv{n}}, \ U= V_{\varnu}V_{\varnu-1} \cdots V_1\} ,\\ \varM{\myv{m},\myv{n}}{\rm even}(U) := \min \{ \varnu \in 2\mymathbb{N} \mid \, & \exists V_1, V_3, \dots,V_{\varnu-1}\in \mathcal{R}_{\myv{m}},\\ & \exists V_2, V_4, \dots,V_{\varnu}\in \mathcal{R}_{\myv{n}}, \ U= V_{\varnu}V_{\varnu-1} \cdots V_1\} ,\\ \varMcomb{\myv{m},\myv{n}}(U) := \min \{ \varM{\myv{m},\myv{n}}{{\rm odd}}(U), & \ \varM{\myv{m},\myv{n}}{\rm even}(U) \} \end{align*} for $U\in {\rm SU}(2)$; \begin{align*} \varM{\myv{m},\myv{n}}{\rm odd}(D) := \min \{ \varnu \in 2\mymathbb{N} \! - \! 1 \mid \, & \exists \genD_1, \genD_3, \dots,\genD_{\varnu}\in \hat{\mathcal{R}}_{\myv{m}},\\ & \exists \genD_2, \genD_4, \dots,\genD_{\varnu-1}\in \hat{\mathcal{R}}_{\myv{n}}, \ D= \genD_{\varnu}\genD_{\varnu-1} \cdots \genD_1\} ,\\ \varM{\myv{m},\myv{n}}{\rm even}(D) := \min \{ \varnu \in 2\mymathbb{N} \mid \, & \exists \genD_1, \genD_3, \dots,\genD_{\varnu-1}\in \hat{\mathcal{R}}_{\myv{m}},\\ & \exists \genD_2, \genD_4, \dots,\genD_{\varnu}\in \hat{\mathcal{R}}_{\myv{n}}, \ D= \genD_{\varnu}\genD_{\varnu-1} \cdots \genD_1\} ,\\ \varMcomb{\myv{m},\myv{n}}(D) := \min \{ \varM{\myv{m},\myv{n}}{\rm odd}(D), & \ \varM{\myv{m},\myv{n}}{\rm even}(D) \} \end{align*} for $D\in {\rm SO}(3)$. The following lemma largely solves the problem of determining the optimal number $N_{\myv{m},\myv{n}}(U)$. \begin{lemma}\label{lem:1} Let $\myv{m},\myv{n}$, $\myv{l}$, and $\delta$ be as in Theorem~\ref{th:num_rot}.
Then, for any $\alpha,\gamma\in\mymathbb{R}$ and $\beta\in [0,\pi]$, \begin{equation} \varM{\myv{m},\myv{n}}{\rm odd}\big(F(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}})\big) = \varM{\myv{m},\myv{n}}{\rm odd}(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) = 2 \Big\lceil \frac{\beta}{2\delta} \Big\rceil +1 \label{eq:lem1_1} \end{equation} and \begin{equation} \varM{\myv{m},\myv{n}}{\rm even}\big(F(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}})\big) = \varM{\myv{m},\myv{n}}{\rm even}(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) = g(\alpha,\beta,\delta) \label{eq:lem1_2} \end{equation} where $U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}$ is as defined in Theorem~\ref{th:num_rot}. \end{lemma} \begin{corollary}\label{coro:1} Let $\myv{m},\myv{n}$, $\myv{l}$, and $\delta$ be as in Theorem~\ref{th:num_rot}. Then, for any $\alpha,\gamma\in\mymathbb{R}$ and $\beta\in [0,\pi]$, \begin{gather}\label{eq:2:DUP_SEP10} \varMcomb{\myv{m},\myv{n}}\big(F(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}})\big) = \varMcomb{\myv{m},\myv{n}}(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) = \min \Big\{ 2 \Big\lceil \frac{\beta}{2\delta} \Big\rceil +1 , \, g(\alpha,\beta,\delta) \Big\}. \end{gather} \end{corollary} \noindent {\em Proof}.\/ In the case where $\beta=0$, since $\varM{\myv{m},\myv{n}}{\rm odd}(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) = 1$ and $\varM{\myv{m},\myv{n}}{\rm even}(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) = 2$, (\ref{eq:lem1_1}) and (\ref{eq:lem1_2}) are trivially true. We shall prove the statement for $\beta>0$.
To establish (\ref{eq:lem1_1}), we shall show the first and third inequalities in \begin{equation} 2 \Big\lceil \frac{\beta}{2\delta} \Big\rceil +1 \le \varM{\myv{m},\myv{n}}{\rm odd}\big(F(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}})\big) \le \varM{\myv{m},\myv{n}}{\rm odd}(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) \le 2 \Big\lceil \frac{\beta}{2\delta} \Big\rceil +1 \label{eq:lem1_1_comb} \end{equation} while the second inequality trivially follows from the definition of $\varM{\myv{m},\myv{n}}{\rm odd}$. Note first that Remark~\ref{rem:const} to Proposition~\ref{lem:const_odd} immediately implies the third inequality in (\ref{eq:lem1_1_comb}). To prove the first inequality, assume \begin{equation} F(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}})= \genD_{\varnu}\genD_{\varnu-1}\cdots \genD_1 \label{eq:pr_lem1_1} \end{equation} for some $\varnu=2k-1$ with $k\in\mymathbb{N}$, where $\genD_{\nu}\in\hat{\mathcal{R}}_{\myv{m}}$ if $\nu$ is odd and $\genD_{\nu}\in\hat{\mathcal{R}}_{\myv{n}}$ otherwise. We shall evaluate $d(F(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}} ) \myv{m},\myv{m})= d(\genD_{\varnu}\genD_{\varnu-1}\cdots \genD_1 \myv{m},\myv{m})$. Noting that $d(F(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}})\myv{m},\myv{m})=\beta$, we have $\beta \le 2(k-1)\delta$ by (\ref{eq:lem3_3}) of Lemma~\ref{lem:ti}. This implies $\lceil \beta/(2\delta) \rceil \le k-1$, and therefore, \begin{equation}\label{eq:oddlb} 2\Big\lceil \frac{\beta}{2\delta} \Big\rceil +1 \le 2k-1=\varnu. \end{equation} From this bound, we have the first inequality in (\ref{eq:lem1_1_comb}), and hence (\ref{eq:lem1_1}). To establish (\ref{eq:lem1_2}), we shall first treat the major case where $f(\alpha,\beta,\delta) \ge \delta$.
Recalling that $g(\alpha,\beta,\delta) = 2 \lceil f(\alpha,\beta,\delta)/(2\delta) +1/2 \rceil$ in this case, we shall show the first and third inequalities in \begin{equation}\label{eq:lem1_2_comb} 2 \Big\lceil \frac{f(\alpha,\beta,\delta)}{2\delta} + \frac{1}{2} \Big\rceil \le \varM{\myv{m},\myv{n}}{\rm even}\big(F(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}})\big) \le \varM{\myv{m},\myv{n}}{\rm even}(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) \le 2 \Big\lceil \frac{f(\alpha,\beta,\delta)}{2\delta} + \frac{1}{2} \Big\rceil \end{equation} while the second inequality holds trivially. Note that Remark~\ref{rem:const_even} to Proposition~\ref{lem:const_even} will imply the third inequality upon showing that $\beta'$ in Proposition~\ref{lem:const_even} satisfies $\beta'=f(\alpha,\beta,\delta)$ when $U=U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}$. To see $\beta'=f(\alpha,\beta,\delta)$, rewrite (\ref{eq:const_even}), using Lemma~\ref{lem:transR3}, as \begin{equation} R_{y}(-\delta)R_{z}(\alpha)R_{y}(\beta)R_{z}(\gamma) = R_{z}(\alpha') R_{y}(\beta') R_{z}(\gamma') . \end{equation} Then, a direct calculation shows that the absolute value of the $(1,1)$-entry of the left-hand side equals \[ \sqrt{ \cos^2\frac{\beta}{2}\cos^2\frac{\delta}{2}+ \sin^2\frac{\beta}{2}\sin^2\frac{\delta}{2} +2\cos\alpha \sin\frac{\beta}{2}\sin\frac{\delta}{2} \cos\frac{\beta}{2}\cos\frac{\delta}{2} }. \] This shows $\beta'=f(\alpha,\beta,\delta)$ in view of (\ref{eq:uni1}).
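Since the $(1,1)$-entry identity above is stated as the result of a direct calculation, it can also be confirmed numerically. The sketch below (in Python, added here as an illustration; it assumes the standard ${\rm SU}(2)$ conventions $R_z(\theta)=\mathrm{diag}(e^{-i\theta/2},e^{i\theta/2})$ and $R_y(\theta)$ the real rotation by the half-angle $\theta/2$) multiplies out $R_y(-\delta)R_z(\alpha)R_y(\beta)R_z(\gamma)$ and compares the modulus of its $(1,1)$-entry with the square-root expression; note that $\gamma$ must drop out, since right-multiplication by $R_z(\gamma)$ only multiplies the first column by a phase.

```python
import cmath
import math

def Rz(t):
    """SU(2) rotation about z: exp(-i t sigma_z / 2)."""
    return [[cmath.exp(-1j * t / 2), 0], [0, cmath.exp(1j * t / 2)]]

def Ry(t):
    """SU(2) rotation about y: exp(-i t sigma_y / 2)."""
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, -s], [s, c]]

def mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def entry11_abs(alpha, beta, gamma, delta):
    """|(1,1)-entry| of R_y(-delta) R_z(alpha) R_y(beta) R_z(gamma)."""
    M = mul(mul(Ry(-delta), Rz(alpha)), mul(Ry(beta), Rz(gamma)))
    return abs(M[0][0])

def rhs(alpha, beta, delta):
    """The square-root expression from the text (independent of gamma)."""
    cb, sb = math.cos(beta / 2), math.sin(beta / 2)
    cd, sd = math.cos(delta / 2), math.sin(delta / 2)
    return math.sqrt(cb * cb * cd * cd + sb * sb * sd * sd
                     + 2 * math.cos(alpha) * sb * sd * cb * cd)

# the identity holds for all parameters, and gamma indeed drops out
for alpha in (0.3, 1.1, 2.9):
    for beta in (0.2, 1.5, 3.0):
        for delta in (0.4, 1.2):
            for gamma in (0.0, 0.7):
                assert abs(entry11_abs(alpha, beta, gamma, delta)
                           - rhs(alpha, beta, delta)) < 1e-12
```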
To prove the first inequality in (\ref{eq:lem1_2_comb}), assume (\ref{eq:pr_lem1_1}) holds for some $j=2k$ with $k\in\mymathbb{N}$, where $\genD_{\nu}\in\hat{\mathcal{R}}_{\myv{m}}$ if $\nu$ is odd and $\genD_{\nu}\in\hat{\mathcal{R}}_{\myv{n}}$ otherwise. Note that $\myv{n}=R_{\myv{l}}(\delta)\myv{m}$ and hence, for $U=R_{\myv{n}}(\alpha')R_{\myv{l}}(\beta'+\delta)R_{\myv{m}}(\gamma')$ in Proposition~\ref{lem:const_even}, \[ d(U\myv{m},\myv{n})=d(R_{\myv{l}}(\beta'+\delta)\myv{m},\myv{n})=d(R_{\myv{l}}(\beta'+\delta)\myv{m},R_{\myv{l}}(\delta)\myv{m})=(\beta'+\delta)-\delta=\beta'. \] Then, we have $\beta' \le (2 k -1) \delta$ by (\ref{eq:lem3_2}) of Lemma~\ref{lem:ti}. This implies $\lceil (\beta'+\delta)/(2\delta) \rceil \le k$, and therefore, \begin{equation}\label{eq:evenlb} 2\Big\lceil \frac{\beta'+\delta}{2\delta} \Big\rceil \le 2k=j. \end{equation} From this bound, we have the first inequality in (\ref{eq:lem1_2_comb}), and hence, equality holds throughout (\ref{eq:lem1_2_comb}). This proves (\ref{eq:lem1_2}) in the case where $f(\alpha,\beta,\delta) \ge \delta$; the proof in the other case is given in \refapp{app:case2}. This completes the proof of the lemma. The lemma immediately implies the corollary.
\hfill $\Box$\vspace*{1ex} \noindent {\em Proof of Theorem~\ref{th:num_rot}.}\/ Note that for any $U\in {\rm SU}(2)$, \[ N_{\myv{m},\myv{n}}(U) = \min \{ \varM{\myv{m},\myv{n}}{\rm odd}(U), \varM{\myv{m},\myv{n}}{\rm even}(U), \varM{\myv{n},\myv{m}}{\rm odd}(U), \varM{\myv{n},\myv{m}}{\rm even}(U) \} , \] and we can write $U$ in terms of three parametric expressions: \begin{equation*} U=R_{\myv{u}}(\theta)=U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}= U_{\tilde{\alpha},\tilde{\beta},\tilde{\gamma}}^{\myv{n},-\myv{l}} \end{equation*} where $\beta,\tilde{\beta} \in [0, \pi]$, $\alpha,\gamma,\tilde{\alpha},\tilde{\gamma},\theta \in \mymathbb{R}$, and $\myv{u}\in S^2$. Then, we have \[ \frac{\beta}{2}=\arcsin \bigg[ \sqrt{1-(\rinnpr{\myv{m}}{\myv{u}})^2}\Big|\sin\frac{\theta}{2}\Big| \bigg] \quad\mbox{and}\quad \frac{\tilde{\beta}}{2}=\arcsin \bigg[ \sqrt{1-(\rinnpr{\myv{n}}{\myv{u}})^2}\Big|\sin\frac{\theta}{2}\Big| \bigg] \] owing to Lemma~\ref{lem:EulerG}, and hence, \[ \varM{\myv{m},\myv{n}}{\rm odd}(U)= 2\Big\lceil \frac{\arcsin \sqrt{1-(\rinnpr{\myv{m}}{\myv{u}})^2}\abs{\sin\mfrac{\theta}{2}}}{\delta} \Big\rceil +1 \] and \[ \varM{\myv{n},\myv{m}}{\rm odd}(U)= 2\Big\lceil \frac{\arcsin \sqrt{1-(\rinnpr{\myv{n}}{\myv{u}})^2}\abs{\sin\mfrac{\theta}{2}}}{\delta} \Big\rceil +1 \] owing to Lemma~\ref{lem:1}.
Then, if $\abs{\rinnpr{\myv{m}}{\myv{u}}} \ge \abs{\rinnpr{\myv{n}}{\myv{u}}}$ whenever $\sin (\theta/2) \ne 0$, which implies $\varM{\myv{m},\myv{n}}{\rm odd}(U) \le \varM{\myv{n},\myv{m}}{\rm odd}(U)$, we shall have \begin{eqnarray} N_{\myv{m},\myv{n}}(U) &= &\min \{ \varM{\myv{m},\myv{n}}{\rm odd}(U), \varM{\myv{m},\myv{n}}{\rm even}(U), \varM{\myv{n},\myv{m}}{\rm even}(U) \} \notag\\ &= &\min \Big\{ 2 \Big\lceil \frac{\beta}{2\delta} \Big\rceil +1, g(\alpha,\beta,\delta), \varM{\myv{n},\myv{m}}{\rm even}(U) \Big\} \label{eq:lem1pre} \end{eqnarray} for $U=U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}$. But $[\sin (\theta/2) \ne 0 \rightarrow \abs{\rinnpr{\myv{m}}{\myv{u}}} \ge \abs{\rinnpr{\myv{n}}{\myv{u}}}]$ follows from $\myb(\myv{m},U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) \ge \myb(\myv{n},U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}})$ by the definition of $\myb$. [This is because writing $U$ in (\ref{eq:u123}) as $U=R_{\myv{u}}(\theta)$, $\theta\in\mymathbb{R}$, $\myv{u}\in S^2$, results in $-\sin(\theta/2) \myv{u} = (x,y,z)^{\rm T}$ as in Section~\refroyalsub{ss:prel}{sssub:para}, whereby $\myb(\myv{v},U)=\abs{\sin(\theta/2)} \abs{ \rinnpr{\myv{u}}{\myv{v}} }$.] Hence, we have (\ref{eq:lem1pre}). A short additional argument (\refappcomma{app:tech}) shows \begin{equation} \varM{\myv{n},\myv{m}}{\rm even}(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) = g(\gamma,-\beta,\delta), \label{eq:a_even} \end{equation} and therefore, \[ N_{\myv{m},\myv{n}}(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) = \min \Big\{ 2 \Big\lceil \frac{\beta}{2\delta} \Big\rceil +1 , \, g(\alpha,\beta,\delta) , \, g(\gamma,-\beta,\delta) \Big\}.
\] Finally, from Corollary~\ref{coro:1} or from the argument in \refappcomma{app:N_mnUandD}, it readily follows that $ N_{\myv{m},\myv{n}}\big(F(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}})\big) = N_{\myv{m},\myv{n}}(U_{\alpha,\beta,\gamma}^{\myv{m},\myv{l}}) $. Hence, we obtain the theorem. \hfill $\Box$\vspace*{1ex} From the viewpoint of construction, we summarise the way, suggested most directly by the results above, to obtain an optimal construction of a given element $U\in {\rm SU}(2)$, where we assume $\delta=\arccos\rinnpr{\myv{m}}{\myv{n}}\in (0,\pi/2]$ without loss of generality. If $\myb(\myv{m},U) \ge \myb(\myv{n},U)$, choose a construction that attains the minimum in (\ref{eq:lem1pre}). The construction is among that of Proposition~\ref{lem:const_odd}, that of Proposition~\ref{lem:const_even}, and that of Proposition~\ref{lem:const_even} applied to $U^{\dagger}$ in place of $U$ [note that $U^{\dagger}=R_{\myv{u}_1}(\phi_1) \cdots R_{\myv{u}_j}(\phi_j)$ implies $U=R_{\myv{u}_j}(-\phi_j)\cdots R_{\myv{u}_1}(-\phi_1)$]. If $\myb(\myv{m},U) < \myb(\myv{n},U)$, interchange $\myv{m}$ and $\myv{n}$ and apply the construction just described.\footnote{One (seemingly difficult) issue remains open: determine all optimal decompositions of an arbitrarily fixed rotation.
Note that in Propositions~\ref{lem:const_odd} and \ref{lem:const_even} and their proofs, any solution for $R_{\myv{n}}(\theta) = R_{\myv{m}}(\alpha)R_{\myv{l}}(\beta) R_{\myv{m}}(\gamma)$ can be used (see Corollary~\ref{coro:8} in \refapp{app:proof_rem} for explicit solutions, among which one is chosen to be used in Remarks~\ref{rem:const_expl} and \ref{rem:const_expl_even}).} \section{Conclusion \label{ss:conc}} This work has established, for an arbitrarily fixed element $U$ of ${\rm SU}(2)$ or ${\rm SO}(3)$, the least value $N_{\myv{m},\myv{n}}(U)$ of a positive integer $k$ such that $U$ can be decomposed into a product of $k$ rotations about either $\myv{m}$ or $\myv{n}$, where $\myv{m},\myv{n} \in S^2$ are arbitrary real unit vectors with $\abs{\rinnpr{\myv{m}}{\myv{n}}}<1$. \section*{Acknowledgments} This work was supported by SCOPE (Ministry of Internal Affairs and Communications), and by Japan Society for the Promotion of Science KAKENHI Grant numbers 22540150 and 21244007.
https://arxiv.org/abs/1401.0153
The Minimum Number of Rotations About Two Axes for Constructing an Arbitrary Fixed Rotation
For any pair of three-dimensional real unit vectors $\hat{m}$ and $\hat{n}$ with $|\hat{m}^{\rm T} \hat{n}| < 1$ and any rotation $U$, let $N_{\hat{m},\hat{n}}(U)$ denote the least value of a positive integer $k$ such that $U$ can be decomposed into a product of $k$ rotations about either $\hat{m}$ or $\hat{n}$. This work gives the number $N_{\hat{m},\hat{n}}(U)$ as a function of $U$. Here a rotation means an element $D$ of the special orthogonal group ${\rm SO}(3)$ or an element of the special unitary group ${\rm SU}(2)$ that corresponds to $D$. Decompositions of $U$ attaining the minimum number $N_{\hat{m},\hat{n}}(U)$ are also given explicitly.
https://arxiv.org/abs/math/0603122
Introduction to Partially Ordered Patterns
We review selected known results on partially ordered patterns (POPs) that include co-unimodal, multi- and shuffle patterns, peaks and valleys ((modified) maxima and minima) in permutations, the Horse permutations and others. We provide several (new) results on a class of POPs built on an arbitrary flat poset, obtaining, as corollaries, the bivariate generating function for the distribution of peaks (valleys) in permutations, links to Catalan, Narayna, and Pell numbers, as well as generalizations of few results in the literature including the descent distribution. Moreover, we discuss q-analogue for a result on non-overlapping segmented POPs. Finally, we suggest several open problems for further research.
\section{Introduction and background}\label{intro} An occurrence of a \emph{pattern} $\tau$ in a permutation $\pi$ is defined as a subsequence in $\pi$ (of the same length as $\tau$) whose letters are in the same relative order as those in $\tau$. For example, the permutation $31425$ has three occurrences of the pattern $1{\mbox{-}} 2{\mbox{-}} 3$, namely the subsequences 345, 145, and 125. \emph{Generalized permutation patterns} (\emph{GPs}), introduced in~\cite{BabStein}, allow the requirement that some adjacent letters in a pattern must also be adjacent in the permutation. We indicate this requirement by removing a dash in the corresponding place. Thus, if the pattern $2{\mbox{-}} 31$ occurs in a permutation $\pi$, then the letters in $\pi$ that correspond to $3$ and $1$ are adjacent. For example, the permutation $516423$ has only one occurrence of the pattern $2{\mbox{-}} 31$, namely the subword 564, whereas the pattern $2{\mbox{-}} 3{\mbox{-}} 1$ occurs, in addition, in the subwords 562 and 563. Placing ``$[$'' on the left (resp., ``$]$'' on the right) next to a pattern $p$ imposes the requirement that $p$ must begin (resp., end) with the leftmost (resp., rightmost) letter of the permutation. For example, the permutation $32415$ contains two occurrences of the pattern $[2\mn13$, namely the subwords 324 and 315, and no occurrences of the pattern $3\mn2{\mbox{-}} 1]$. A further generalization of the GPs is {\em partially ordered patterns} ({\em POPs}), where the letters of a pattern form a partially ordered set (poset), and an occurrence of such a pattern in a permutation is a linear extension of the corresponding poset in the order suggested by the pattern (we also pay attention to eventual dashes and brackets).
For instance, if we have a poset on three elements labeled by $1^{\prime}$, $1$, and $2$, in which the only relation is $1<2$ (see Figure~\ref{poset01}), then in an occurrence of $p=1^{\prime}{\mbox{-}} 12$ in a permutation $\pi$ the letter corresponding to the $1^{\prime}$ in $p$ can be either larger or smaller than the letters corresponding to $12$. Thus, the permutation 31254 has three occurrences of $p$, namely $3\mn12$, $3\mn25$, and $1\mn25$. \begin{figure}[h] \begin{center} \begin{picture}(6,2) \put(0,0){\put(2,0){\p} \put(4,0){\p} \put(4,2){\p} \path(4,0)(4,2) \put(1.2,0){$1'$} \put(4.5,0){1} \put(4.5,2){2} } \end{picture} \caption{A poset on three elements with the only relation $1<2$.} \label{poset01} \end{center} \end{figure} Let $\mathcal{S}_n(p_1,\ldots,p_k)$ denote the set of $n$-permutations avoiding simultaneously each of the patterns $p_1,\ldots,p_k$. The POPs were introduced in~\cite{Kit2}\footnote{The POPs in this paper are the same as the POGPs in~\cite{Kit2}, which is an abbreviation for Partially Ordered Generalized Patterns.} as an auxiliary tool to study the maximum number of non-overlapping occurrences of {\em segmented} GPs ({\em SGPs}), also known as {\em consecutive} GPs, that is, the GPs, occurrences of which in permutations form contiguous subwords (there are no dashes). However, the most useful property of POPs known so far is their ability to ``encode'' certain sets of GPs, which provides a convenient notation for those sets and often gives an idea of how to treat them. For example, the original proof of the fact that $|\mathcal{S}_n(123, 132, 213)|={n\choose \lfloor n/2 \rfloor}$ took 3 pages (\cite{Kit1}); on the other hand, if one notices that $|\mathcal{S}_n(123, 132, 213)|=|\mathcal{S}_n(11^{\prime}2)|$, where the letters $1$, $1^{\prime}$, and $2$ come from the same poset as above, then the result is easy to see.
Indeed, we may use the property that the letters in odd and even positions of a ``good'' permutation do not affect each other because of the form of $11^{\prime}2$. Thus we choose the letters in odd positions in ${n \choose \lfloor n/2 \rfloor}$ ways, and we must arrange them in decreasing order. We then must arrange the letters in even positions in decreasing order too. The POPs can be used to encode certain combinatorial objects by restricted permutations. Examples of that are Propositions~\ref{combobject01} and~\ref{combobject02}, as well as several other propositions in~\cite{BurKit}. Such encoding is interesting from the point of view of finding bijections, but it may also have applications to enumerating certain statistics. The idea is to encode a set of objects under consideration as a set of permutations satisfying certain restrictions (given by certain POPs); under appropriate encodings, this allows us to transfer the interesting statistics from the original set to the set of permutations, where they are easy to handle. For an illustration of how encodings by POPs can be used, see~\cite[Thm. 2.4]{KitPM}, which deals with POPs in {\em compositions} rather than in permutations, though the approach remains the same. As a matter of fact, some POPs appeared in the literature before they were actually introduced. Thus the notion of a POP allows us to collect under one roof (to provide a uniform notation for) several combinatorial structures such as {\em peaks}, {\em valleys}, {\em modified maxima} and {\em modified minima} in permutations, {\em Horse permutations} and $p$-{\em descents} in permutations discussed in Section~\ref{sec1}. This paper is organized as follows.
Section~\ref{sec1} reviews selected results in the literature related to POPs; Section~\ref{sec2} provides a complete solution for SPOPs built on a {\em flat poset}\footnote{The concept of a ``flat poset'' is used in theoretical computer science~\cite{Aceto} to denote posets with one element being less than any other element (there are no other relations between the elements). See Figure~\ref{poset08} for the shape of such a poset.} without repeated letters. In particular, as a corollary to a more general result, we provide the generating function for the distribution of peaks (valleys) in permutations, which seems to be a new result, or at least one the author could not find in the literature (it appears that only a continued fraction expansion of the generating function for the distribution of peaks is known). Section~\ref{q-an} gives a $q$-analogue for a result on non-overlapping patterns~(\cite[Thm. 16]{Kit3}). Finally, in Section~\ref{sec3}, we state several open problems on POPs. In what follows we need the following notation. Let $\sigma$ and $\tau$ be two POPs of length greater than 0. We write $\sigma <\tau$ to indicate that any letter of $\sigma$ is less than any letter of $\tau$. We write $\sigma<>\tau$ when no letter in $\sigma$ is comparable with any letter in $\tau$. Also, {\em SPOP} abbreviates Segmented POP. A {\em left-to-right minimum} of a permutation $\pi=a_1a_2\cdots a_n$ is an element $a_i$ such that $a_i<a_j$ for every $j<i$. Analogously we define {\em right-to-left minimum}, {\em right-to-left maximum}, and {\em left-to-right maximum}. If $\pi=a_1a_2\cdots a_n\in \mathcal{S}_n$, then the {\em reverse} of $\pi$ is $\pi^r:=a_n\cdots a_2a_1$, and the {\em complement} of $\pi$ is a permutation $\pi^c$ such that $\pi^c_{i}=n+1-a_i$, where $i\in[n]=\{1,\ldots,n\}$. We call $\pi^r$, $\pi^c$, and $(\pi^r)^c=(\pi^c)^r$ {\em trivial bijections}. The {\em GF} ({\em EGF}; {\em BGF}) denotes the ({\em exponential}; {\em bivariate}) {\em generating function}.
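The conventions above are easy to experiment with by brute force. The sketch below (Python; the block encoding of dashed patterns is our own illustration, not notation from the literature) counts occurrences of GPs and re-checks the identity $|\mathcal{S}_n(11'2)|={n\choose \lfloor n/2\rfloor}$ from the introduction, using the observation that avoiding the SPOP $11'2$ (a factor of three adjacent letters whose third letter exceeds the first, the middle letter being unconstrained) amounts to $\pi_i>\pi_{i+2}$ for all $i$.

```python
from itertools import combinations, permutations
from math import comb

def occurrences(blocks, pi):
    """Count occurrences of a GP given as blocks of pattern values,
    e.g. 2-31 -> [[2], [3, 1]]; letters inside a block must be adjacent."""
    flat = [v for b in blocks for v in b]
    n, k = len(pi), len(flat)
    count = 0
    for pos in combinations(range(n), k):
        # adjacency constraints inside each dash-free block
        j, ok = 0, True
        for b in blocks:
            for t in range(len(b) - 1):
                ok &= pos[j + t + 1] == pos[j + t] + 1
            j += len(b)
        if not ok:
            continue
        vals = [pi[p] for p in pos]
        # order-isomorphism with the pattern word
        if all((vals[a] < vals[b]) == (flat[a] < flat[b])
               for a in range(k) for b in range(k)):
            count += 1
    return count

# the examples from the introduction
assert occurrences([[1], [2], [3]], (3, 1, 4, 2, 5)) == 3   # 1-2-3 in 31425
assert occurrences([[2], [3, 1]], (5, 1, 6, 4, 2, 3)) == 1  # 2-31 in 516423
assert occurrences([[2], [3], [1]], (5, 1, 6, 4, 2, 3)) == 3  # 2-3-1

# |S_n(11'2)|: avoiding the SPOP 11'2 amounts to pi[i] > pi[i+2] for all i
for n in range(1, 8):
    cnt = sum(1 for p in permutations(range(n))
              if all(p[i] > p[i + 2] for i in range(n - 2)))
    assert cnt == comb(n, n // 2)
```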
\section{Review of selected results on POPs}\label{sec1} In this section we review several results in the literature related to POPs. \subsection{Co-unimodal patterns}\label{unimod} For a permutation $\pi=\pi_1\pi_2\cdots\pi_n\in\mathcal{S}_n$, the {\em inversion index}, $\mbox{inv}(\pi)$, is the number of ordered pairs $(i,j)$ such that $1\leq i<j\leq n$ and $\pi_i>\pi_j$. The {\em major index}, $\mbox{maj}(\pi)$, is the sum of all $i$ such that $\pi_i>\pi_{i+1}$. Suppose $\sigma$ is a SPOP and $$\mbox{place}_{\sigma}(\pi)=\{i\ |\ \pi \mbox{ has an occurrence of } \sigma \mbox{ starting at } \pi_i\}.$$ Let $\mbox{maj}_{\sigma}(\pi)$ be the sum of the elements of $\mbox{place}_{\sigma}(\pi)$. \begin{figure}[h] \begin{center} \begin{picture}(8,6) \put(0,0){\put(2,2){\p} \put(4,0){\p} \put(4,6){\p} \put(6,2){\p} \put(6,4){\p} \path(2,2)(4,0)(6,2)(6,4)(4,6)(2,2) \put(4.3,6){$\sigma_1$=5} \put(0.6,2){$\sigma_2$} \put(6.3,2){$\sigma_4$} \put(6.3,4){$\sigma_5$} \put(4.5,-0.3){$\sigma_3$=1} } \end{picture} \caption{A poset for co-unimodal pattern in the case $j=3$ and $k=5$.} \label{poset02} \end{center} \end{figure} If $\sigma$ is {\em co-unimodal}, meaning that $k=\sigma_1>\sigma_2>\cdots >\sigma_j<\cdots <\sigma_k$ for some $2\leq j\leq k$ (see Figure~\ref{poset02} for a corresponding poset in the case $j=3$ and $k=5$), then the following formula holds~\cite{BjornerWachs}: $$\sum_{\pi\in\mathcal{S}_n}t^{\mbox{maj}_{\sigma}(\pi^{-1})}q^{\mbox{maj}(\pi)}= \sum_{\pi\in\mathcal{S}_n}t^{\mbox{maj}_{\sigma}(\pi^{-1})}q^{\mbox{inv}(\pi)}.$$ If $k=2$ we deal with usual descents; thus a co-unimodal pattern can be viewed as a generalization of the notion of a descent. This may be a reason why a co-unimodal pattern $p$ is called a $p$-{\em descent} in~\cite{BjornerWachs}. Also, setting $t=1$ we get a well-known result by MacMahon on the equidistribution of maj~and~inv.
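At $t=1$ the identity above reduces to MacMahon's equidistribution of maj and inv, which is easily confirmed by brute force for small $n$ (a sketch in Python):

```python
from itertools import permutations
from collections import Counter

def inv(pi):
    """Inversion index: ordered pairs i < j with pi[i] > pi[j]."""
    return sum(1 for i in range(len(pi)) for j in range(i + 1, len(pi))
               if pi[i] > pi[j])

def maj(pi):
    """Major index: sum of (1-based) descent positions."""
    return sum(i + 1 for i in range(len(pi) - 1) if pi[i] > pi[i + 1])

# maj and inv are equidistributed over S_n
for n in range(1, 7):
    perms = list(permutations(range(1, n + 1)))
    assert Counter(map(maj, perms)) == Counter(map(inv, perms))
```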
\subsection{Peaks and valleys in permutations} A permutation $\pi$ has exactly $k$ {\em peaks} (resp., {\em valleys}), also known as {\em maxima} (resp., {\em minima}), if $|\{j\ |\ \pi_j>\max\{\pi_{j-1},\pi_{j+1}\}\}|=k$ (resp., $|\{j\ |\ \pi_j<\min\{\pi_{j-1},\pi_{j+1}\}\}|=k$). Thus, an occurrence of a peak in a permutation is an occurrence of the SPOP $1'21''$, where relations in the poset are $1'<2$ and $1''<2$. Similarly, occurrences of valleys correspond to occurrences of the SPOP $2'12''$, where $2'>1$ and $2''>1$. See Figure~\ref{poset03} for the posets corresponding to the peaks and valleys. So, any research done on the peak (or valley) statistics can be regarded as research on (S)POPs (e.g., see~\cite{Warren}). \begin{figure}[h] \begin{center} \begin{picture}(8,2) \put(-4,0){\put(2,0){\p} \put(4,2){\p} \put(6,0){\p} \path(2,0)(4,2)(6,0) \put(10,2){\p} \put(12,0){\p} \put(14,2){\p} \path(10,2)(12,0)(14,2) \put(1,0){$1'$} \put(6.5,0){$1''$} \put(4.5,2){2} \put(9,2){$2'$} \put(12.5,-0.3){$1$} \put(14.5,2){$2''$} } \end{picture} \caption{Posets corresponding to peaks and valleys.} \label{poset03} \end{center} \end{figure} Also, results related to {\em modified maxima} and {\em modified minima} can be viewed as results on SPOPs. For a permutation $\sigma_1\ldots\sigma_n$ we say that $\sigma_i$ is a {\em modified maximum} if $\sigma_{i-1}<\sigma_i>\sigma_{i+1}$ and a {\em modified minimum} if $\sigma_{i-1}>\sigma_i<\sigma_{i+1}$, for $i=1,\ldots,n$, where $\sigma_0=\sigma_{n+1}=0$. Indeed, we can view a pattern $p$ as a function from the set of all symmetric groups $\cup_{n\geq 0}\mathcal{S}_n$ to the set of natural numbers such that $p(\pi)$ is the number of occurrences of $p$ in $\pi$, where $\pi$ is a permutation. 
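As an illustration of this functional point of view, the peak and valley statistics can be tabulated by brute force (a Python sketch; the $2^{n-1}$ count of peakless permutations used below is the standard observation that such a permutation must decrease to its minimum and then increase):

```python
from itertools import permutations
from collections import Counter

def peaks(pi):
    """Number of interior maxima pi[j-1] < pi[j] > pi[j+1]."""
    return sum(1 for j in range(1, len(pi) - 1)
               if pi[j] > max(pi[j - 1], pi[j + 1]))

def valleys(pi):
    """Number of interior minima pi[j-1] > pi[j] < pi[j+1]."""
    return sum(1 for j in range(1, len(pi) - 1)
               if pi[j] < min(pi[j - 1], pi[j + 1]))

for n in range(2, 8):
    perms = list(permutations(range(1, n + 1)))
    peak_dist = Counter(map(peaks, perms))
    # complementation swaps peaks and valleys, so the distributions agree
    assert peak_dist == Counter(map(valleys, perms))
    # a peakless permutation decreases to 1 and then increases
    assert peak_dist[0] == 2 ** (n - 1)
```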
Thus, studying the distribution of modified maxima (resp., minima) is the same as studying the function $ab]+1'21''+[dc$ (resp., $ba]+2'12''+[cd$) where $a<b$, $c<d$ and the other relations between the patterns' letters are taken from Figure~\ref{poset03}. Also, recall that placing ``$[$'' (resp., ``$]$'') next to a pattern $p$ means the requirement that $p$ must begin (resp., end) with the leftmost (resp., rightmost) letter. A specific result in this direction is problem 3.3.46(c) on page 195 in~\cite{GolJack}: We say that $\sigma_i$ is a {\em double rise} (resp., {\em double fall}) if $\sigma_{i-1}<\sigma_{i}<\sigma_{i+1}$ (resp., $\sigma_{i-1}>\sigma_{i}>\sigma_{i+1}$); the number of permutations in $\mathcal{S}_n$ with $i_1$ modified minima, $i_2$ modified maxima, $i_3$ double rises, and $i_4$ double falls is $$\left[u_1^{i_1}u_2^{i_2-1}u_3^{i_3}u_4^{i_4}\frac{x^n}{n!}\right]\frac{e^{\alpha_2x}-e^{\alpha_1x}} {\alpha_2e^{\alpha_1x}-\alpha_1e^{\alpha_2x}}$$ where $\alpha_1\alpha_2=u_1u_2$, $\alpha_1+\alpha_2=u_3+u_4$. In Corollary~\ref{valleys} we obtain an explicit generating function for the distribution of peaks (valleys) in permutations. This result is an analogue of a result in~\cite{Entr} where the circular case of permutations is considered, that is, when the first letter of a permutation is thought to be to the right of the last letter in the permutation. In~\cite{Entr} it is shown that if $M(n,k)$ denotes the number of circular permutations in $\mathcal{S}_n$ having $k$ maxima, then $$\sum_{n\geq 1}\sum_{k\geq 0}M(n,k)y^k\frac{x^n}{n!}=\frac{zx(1-z\tanh xz)}{z-\tanh xz}$$ where $z=\sqrt{1-y}$. \subsection{Patterns containing $\Box$-symbol} In~\cite{ManQ} the authors study simultaneous avoidance of the patterns $1\mn3{\mbox{-}} 2$ and $1\Box 23$. A permutation $\pi$ avoids $1\Box 23$ if there is no $\pi_i<\pi_j<\pi_{j+1}$ with $i<j-1$.
Thus the $\Box$ symbol has the same meaning as ``${\mbox{-}}$'' except that $\Box$ does not allow the letters separated by it to be adjacent in an occurrence of the corresponding pattern. In the POP-terminology, $1\Box 23$ is the pattern $1{\mbox{-}} 1'{\mbox{-}} 23$, or $1{\mbox{-}} 1'23$, or $11'{\mbox{-}} 23$, where $1'$ is incomparable with the letters $1,2,$ and $3$ which, in turn, are ordered naturally: $1<2<3$. The permutations avoiding $1\mn3{\mbox{-}} 2$ and $1\Box 23$ are called {\em Horse permutations}. The reason for the name comes from the fact that these permutations are in one-to-one correspondence with {\em Horse paths}, which are the lattice paths from (0,0) to $(n,n)$ containing the steps $(0,1)$, $(1,1)$, $(2,1)$, and $(1,2)$ and not passing the line $y=x$. According to~\cite{ManQ}, the generating function for the Horse permutations is $$\frac{1-x-\sqrt{1-2x-3x^2-4x^3}}{2x^2(1+x)}.$$ Moreover, in~\cite{ManQ} the generating functions for Horse permutations avoiding, or containing exactly once, certain patterns are given. In~\cite{FMan}, patterns of the form $x{\mbox{-}} y\Box z$ are studied, where $xyz\in \mathcal{S}_3$. Such a pattern can be written in the POP-notation as, for example, $x{\mbox{-}} y{\mbox{-}} a{\mbox{-}} z$ where $a$ is not comparable to $x$, $y$, and $z$. A bijection between permutations avoiding the pattern $1{\mbox{-}} 2\Box 3$, or $2{\mbox{-}} 1\Box 3$, and the set of {\em odd-dissection convex polygons} is given. Moreover, generating functions for permutations avoiding $1{\mbox{-}} 3\Box 2$ and certain additional patterns are obtained in~\cite{FMan}. \subsection{A pattern of the form $\sigma{\mbox{-}} m{\mbox{-}}\tau$} Let $\sigma$ and $\tau$ be two SGPs (the results below work for SPOPs as well). We consider the POP $\alpha=\sigma{\mbox{-}} m{\mbox{-}}\tau$ with $m>\sigma$, $m>\tau$, and $\sigma<>\tau$, that is, each letter of $\sigma$ is incomparable with any letter of~$\tau$ and $m$ is the largest letter in $\alpha$.
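Before turning to the pattern $\alpha$ in detail, we note that the Horse permutation count can be checked directly for small $n$; in the Python sketch below, the initial coefficients $1,1,2,5,12,31$ were obtained by expanding the generating function displayed above, while an occurrence of $1{\mbox{-}} 3{\mbox{-}} 2$ is a classical $132$ pattern and an occurrence of $1\Box 23$ is a triple $\pi_i<\pi_j<\pi_{j+1}$ with $i<j-1$.

```python
from itertools import permutations

def has_132(pi):
    """Classical pattern 1-3-2: values pi[i] < pi[k] < pi[j], i < j < k."""
    n = len(pi)
    return any(pi[i] < pi[k] < pi[j]
               for i in range(n) for j in range(i + 1, n)
               for k in range(j + 1, n))

def has_1box23(pi):
    """Pattern 1[]23: pi[i] < pi[j] < pi[j+1] with i <= j - 2 (0-indexed)."""
    n = len(pi)
    return any(pi[i] < pi[j] < pi[j + 1]
               for j in range(2, n - 1) for i in range(j - 1))

horse = [sum(1 for p in permutations(range(n))
             if not has_132(p) and not has_1box23(p))
         for n in range(6)]
# first coefficients of (1 - x - sqrt(1-2x-3x^2-4x^3)) / (2x^2(1+x))
assert horse == [1, 1, 2, 5, 12, 31]
```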
The POP $\alpha$ is an instance of so-called {\em shuffle patterns} (see~\cite[Sec 4]{Kit2}). \begin{The}{\rm(\cite[Thm. 16]{Kit2})}\label{shufflePatern2} Let $A(x)$, $B(x)$ and $C(x)$ be the EGF for the number of permutations that avoid $\sigma$, $\tau$ and $\alpha$ respectively. Then $C(x)$ is the solution to the following differential equation with $C(0)=1$: $$C^{\prime}(x)=(A(x)+B(x))C(x) - A(x)B(x).$$ \end{The} If $\tau$ is the empty word then $B(x)=0$ and we get the following result for segmented GPs: \begin{Cor}{\rm(\cite[Thm. 13]{Kit2},\cite{Knuth})} Let $\alpha=\sigma{\mbox{-}} m$, where $\sigma$ is a SGP on $[k-1]$. Let $A(x)$ {\rm(}resp., $C(x)${\rm)} be the EGF for the number of permutations that avoid $\sigma$ {\rm(}resp., $\alpha${\rm)}. Then $C(x) = e^{F(x,A)},$ where $F(x,A) = \int_0^x A(y)\ dy$. \end{Cor} \begin{Exa}{\rm(\cite[Ex 15]{Kit2})} Suppose $\alpha = 12{\mbox{-}} 3$. Here $\sigma = 12$, whence $A(x) = e^x$, since for each $n$ there is only one permutation that avoids $\sigma$. So $$C(x) = e^{F(x,\exp)} = e^{e^x-1}.$$ We get~\cite[Prop. 4]{Claes} since $C(x)$ is the EGF for the Bell numbers. \end{Exa} \begin{Cor}{\rm(\cite[Cor. 19]{Kit2})} Let $\alpha=\sigma{\mbox{-}} m{\mbox{-}} \tau$ be as described above. We consider the pattern $\varphi (\alpha) = {\varphi}_1(\sigma){\mbox{-}} m{\mbox{-}} {\varphi}_2(\tau)$, where ${\varphi}_1$ and ${\varphi}_2$ are any trivial bijections. Then $|\mathcal{S}_n(\alpha)|=|\mathcal{S}_n(\varphi(\alpha))|$. \end{Cor} \subsection{Multi-patterns} Suppose $\{ {\sigma}_1, {\sigma}_2, \ldots , {\sigma}_k \}$ is a set of segmented GPs and $p={\sigma}_1{\mbox{-}} {\sigma}_2{\mbox{-}} \cdots{\mbox{-}} {\sigma}_k$ where each letter of ${\sigma}_i$ is incomparable with any letter of ${\sigma}_j$ whenever $i \neq j$ (${\sigma}_i<>{\sigma}_j$). We call such POPs {\em multi-patterns}. Clearly, the Hasse diagram for such a pattern is $k$ disjoint chains similar to that in Figure~\ref{poset09}.
\begin{figure}[h] \begin{center} \begin{picture}(5,3) \put(0,0) {\put(0,0){\p} \put(0,1){\p} \put(0,2){\p} \put(2,0){\p} \put(2,1){\p} \put(2,2){\p} \put(4,0){\p} \put(4,1){\p} \put(6,0){\p} \put(6,1){\p} \put(6,2){\p} \path(0,0)(0,1)(0,2) \path(2,0)(2,1)(2,2) \path(4,0)(4,1) \path(6,0)(6,1)(6,2)} \end{picture} \end{center} \caption{A poset corresponding to a multi-pattern.} \label{poset09} \end{figure} \begin{The}{\rm(\cite[Thm. 23 and Cor. 24]{Kit2})}\label{mult1} The number of permutations avoiding the pattern $p={\sigma}_1{\mbox{-}} {\sigma}_2{\mbox{-}} \cdots{\mbox{-}} {\sigma}_k$ is equal to the number of permutations avoiding any multi-pattern obtained from $p$ by an arbitrary permutation of the $\sigma_i$'s as well as by applying to the $\sigma_i$'s any of the trivial bijections. \end{The} The following theorem is the basis for calculating the number of permutations that avoid a multi-pattern. \begin{The}{\rm(\cite[Thm. 28]{Kit2})}\label{mult2} Let $p={\sigma}_1{\mbox{-}} {\sigma}_2{\mbox{-}} \cdots{\mbox{-}} {\sigma}_k$ be a multi-pattern and let $A_i(x)$ be the EGF for the number of permutations that avoid ${\sigma}_i$. Then the EGF $A(x)$ for the number of permutations that avoid~$p$ is $$A(x)=\displaystyle\sum_{i=1}^{k}A_i(x)\displaystyle\prod_{j=1}^{i-1}((x-1)A_j(x) + 1).$$ \end{The} \begin{Cor}{\rm(\cite[Cor. 26]{Kit2})} Let $p={\sigma}_1{\mbox{-}} {\sigma}_2{\mbox{-}} \cdots{\mbox{-}} {\sigma}_k$ be a multi-pattern, where $|{\sigma}_i|=2$ for all $i$. That is, each ${\sigma}_i$ is either 12 or 21. Then the EGF for the number of permutations that avoid $p$ is given by $$A(x)=\frac{1-(1+(x-1)e^x)^k}{1-x}.$$ \end{Cor} \begin{Rem}\label{goodrem} Although the results in Theorems~\ref{mult1} and~\ref{mult2} are stated in~\cite{Kit2} for $\sigma_i$'s which are SGPs, one can see that the same arguments work for $\sigma_i$'s which are SPOPs. Thus we have a generalization of these theorems.
\end{Rem} \subsection{Non-overlapping patterns -- an application of POPs} This subsection additionally deals with occurrences of patterns in words. The letters $1,2,1',2'$ appearing in the examples below are ordered as in Figure~\ref{poset001} to be found on page~\pageref{poset001}. Theorem~\ref{mult2} and its counterpart in the case of words~\cite[Thm. 4.3]{KitMan} and~\cite[Cor. 4.4]{KitMan}, as well as Remark~\ref{goodrem} applied to these results, give an interesting application of the multi-patterns in finding a certain statistic, namely the {\em maximum number of non-overlapping occurrences of a SPOP} in permutations and words. For instance, the maximum number of non-overlapping occurrences of the SPOP $11'2$ in the permutation 621394785 is 2, and this is given by the occurrences $213$ and $478$, or the occurrences $139$ and $478$. Theorem~\ref{dist} generalizes~\cite[Thm. 32]{Kit2} and~\cite[Thm. 5.1]{KitMan}. \begin{The}{\rm(\cite[Thm. 16]{Kit3})}\label{dist} Let $p$ be a SPOP and let $B(x)$ {\rm(}resp., $B(x;k)${\rm)} be the EGF {\rm(}resp., GF{\rm)} for the number of permutations {\rm(}resp., words over $[k]${\rm)} avoiding $p$. Let $D(x,y)=\sum_{\pi}y^{N(\pi)}\frac{x^{|\pi|}}{|\pi|!}$ and $D(x,y;k)=\sum_{n\geq 0}\sum_{w\in [k]^n}y^{N(w)}x^n$ where $N(s)$ is the maximum number of non-overlapping occurrences of $p$ in~$s$. Then $D(x,y)$ and $D(x,y;k)$ are given by $$\frac{B(x)}{1-y(1+(x-1)B(x))} \mbox{\ \ \ and \ \ \ } \frac{B(x;k)}{1-y(1+(kx-1)B(x;k))}.$$ \end{The} The following examples are corollaries to Theorem~\ref{dist}. \begin{Exa}{\rm(\cite[Ex 1]{Kit3})} If we consider the SPOP $11'$ then clearly $B(x)=1+x$ and $B(x;k)=1+kx$.
Hence, $$D(x,y)=\frac{1+x}{1-yx^2}=\sum_{i\geq 0}(x^{2i} + x^{2i+1})y^i,$$and $$D(x,y;k)=\frac{1+kx}{1-y(kx)^2}=\sum_{i\geq 0}((kx)^{2i} + (kx)^{2i+1})y^i.$$ \end{Exa} \begin{Exa}{\rm(\cite[Ex 2]{Kit3})} For permutations, the distribution of the maximum number of non-overlapping occurrences of the SPOP $122'1'$ is given by $$D(x,y)=\frac{\frac{1}{2}+\frac{1}{4}\tan x(1+e^{2x}+2e^{x}\sin x)+\frac{1}{2}e^x\cos x}{1-y(1+(x-1)(\frac{1}{2}+\frac{1}{4}\tan x(1+e^{2x}+2e^{x}\sin x)+\frac{1}{2}e^x\cos x))}.$$ \end{Exa} \subsection{Segmented patterns of length four} In this subsection we provide the known results related to SPOPs of length four. Corollaries~\ref{newfour1} and~\ref{newfour} in subsection~\ref{subsec1} give extra results in this direction. In subsection~\ref{usolved-4} we provide unsolved cases with initial values for the number of the restricted permutations. In this subsection, $A(x)=\sum_{n\geq 0}A_nx^n/n!$ is the EGF for the number of permutations in question. The patterns in the subsection are built on the poset from Figure~\ref{poset001} and the letter $1''$ is not comparable to any other letter. \begin{The}{\rm(\cite[Thm. 30]{Kit2})} For the SPOP $122^{\prime}1^{\prime}$, we have that $$A(x) = \frac{1}{2} + \frac{1}{4}\tan x (1+e^{2x} + 2e^{x}\sin x) + \frac{1}{2}e^{x} \cos x .$$ \end{The} \begin{prop}{\rm(\cite[Prop. 8,9]{Kit3})}\label{combobject01} There are ${n-1 \choose \lfloor (n-1)/2 \rfloor}{n \choose \lfloor n/2 \rfloor}$ permutations in $\mathcal{S}_n$ that avoid the SPOP $12^{\prime}21^{\prime}$. The $(n+1)$-permutations avoiding $12'21'$ are in one-to-one correspondence with different walks of $n$ steps between lattice points, each in a direction N, S, E or W, starting from the origin and remaining in the positive quadrant. \end{prop} \begin{prop}{\rm(\cite[Prop. 
4,5,6]{Kit3})} For the SPOP $11'1''2$, one has $$A_n=\frac{n!}{\lfloor n/3 \rfloor !\lfloor (n+1)/3 \rfloor !\lfloor (n+2)/3 \rfloor !},$$ and for the SPOP $11'21''$ and $n\geq 1$, we have $A_n=n\cdot\displaystyle{n-1 \choose \lfloor (n-1)/2 \rfloor}$. Moreover, for the SPOPs $1'1''12$ and $1'121''$, we have $A_0=A_1=1$, and, for $n\geq 2$, $A_n=n(n-1)$.\end{prop} \begin{prop}{\rm(\cite[Prop. 7]{Kit3})} For the SPOP $1231'$, we have $$A(x)=xe^{x/2}\left(\cos \frac{\sqrt{3}x}{2} -\frac{\sqrt{3}}{3}\sin \frac{\sqrt{3}x}{2} \right)^{-1} + 1,$$ and for the SPOPs $1321'$ and $2131'$, we have $$A(x)=x\left(1-\int_{0}^{x}e^{-t^2/2}\ dt\right)^{-1} +1.$$\end{prop} We end this subsection with a result on multi-avoidance of SPOPs that has a combinatorial interpretation. \begin{prop}{\rm(\cite[Prop. 2.1,2.2]{BurKit})}\label{combobject02} There are $2\binom{n}{\lfloor n/2 \rfloor}$ $n$-permutations avoiding the SPOPs $11'22'$ and $22'11'$ simultaneously. For $n\ge 3$, there is a bijection between such $n$-permutations and the set of all $(n+1)$-step walks on the $x$-axis with the steps $a=(1,0)$ and $\bar{a}=(-1,0)$ starting from the origin but not returning to it. \end{prop} \section{Patterns built on flat posets}\label{sec2} In this section, we consider flat posets built on $k+1$ elements $a,a_1,\ldots, a_k$ with the only relations being $a<a_i$ for all $i$. A Hasse diagram for the flat poset is shown in Figure~\ref{poset08}. \begin{figure}[h] \begin{center} \begin{picture}(5,3) \put(0,0) {\put(2,0){\p} \put(0,2){\p} \put(1,2){\p} \put(2,2){\p} \put(3,2){\p} \put(4,2){\p} \put(2,-0.6){$a$} \put(-0.5,2.5){$a_1$} \put(0.5,2.5){$a_2$} \put(2,2.5){$\cdots$} \put(4,2.5){$a_k$} \path(0,2)(2,0)(1,2) \path(2,2)(2,0)(3,2) \path(2,0)(4,2)} \end{picture} \end{center} \caption{A flat poset.} \label{poset08} \end{figure} \subsection{Avoidance and distribution of the patterns}\label{subsec1} The following proposition generalizes~\cite[Prop. 6]{Claes}. 
Indeed, letting $k=2$ in the proposition, we deal with involutions and permutations avoiding $1{\mbox{-}} 23$ and $1{\mbox{-}} 32$. \begin{prop}\label{claes} The permutations in ${\mathcal{S}}_n$ having cycles of length at most $k$ are in one-to-one correspondence with the permutations in ${\mathcal{S}}_n$ that avoid $a{\mbox{-}} a_1\cdots a_k$. \end{prop} \begin{proof} We construct a bijection in a way similar to that of~\cite[Prop. 6]{Claes}. Let $\pi\in {\mathcal{S}}_n$ be a permutation with cycles of length at most $k$. A standard form for writing $\pi$ in cycle notation is obtained by requiring that \begin{itemize} \item[(1)] each cycle is written with its least element first; \item[(2)] the cycles are written in decreasing order of their least elements. \end{itemize} Define $\hat{\pi}$ to be the permutation obtained from $\pi$ by writing it in standard form and erasing the parentheses separating the cycles. The permutation $\hat{\pi}$ avoids $a{\mbox{-}} a_1\cdots a_k$. Indeed, the distance between two left-to-right minima (the number of letters between them) in $\hat{\pi}$ does not exceed $k-1$ because of the restriction on the cycle lengths. Thus, if $\hat{\pi}$ contained $a{\mbox{-}} a_1\cdots a_k$, then among the letters of $\hat{\pi}$ corresponding to $a_1\cdots a_k$ there would be at least one left-to-right minimum, say $m$, and the letter in $\hat{\pi}$ corresponding to $a$ would have to be less than $m$, contradicting the definition of a left-to-right minimum. Conversely, if $\hat{\pi}$ is an $a{\mbox{-}} a_1\cdots a_k$-avoiding permutation, then any two of its consecutive left-to-right minima are at distance not exceeding $k-1$ from each other, since otherwise we would have an occurrence of $a{\mbox{-}} a_1\cdots a_k$ starting at a left-to-right minimum preceding a factor of length at least $k$ that contains no other left-to-right minima. The left-to-right minima of $\hat{\pi}$ define the cycles of~$\pi$. 
\end{proof} \begin{Cor}\label{nicecor} The EGF for the number of permutations avoiding $a{\mbox{-}} a_1\cdots a_k$ is given by $\exp(\sum_{i=1}^{k}x^i/i)$.\end{Cor} \begin{proof} According to Proposition~\ref{claes}, we only need to find the EGF $p(x)=\sum_{n\geq 0}p_nx^{n}/n!$ for the number of permutations with cycles of length at most $k$, which is known (see, e.g.,~\cite{GesStan}), but we rederive it here. Suppose $\pi$ is an $n$-permutation with cycles of length at most $k$ and 1 occurs in a cycle $C$. If $i$ is the number of elements of $C$ other than 1, then $0\leq i\leq k-1$ and there are ${n-1\choose i}i!$ possibilities for choosing such a $C$. Thus $$p_n=\displaystyle\sum_{i=0}^{k-1}{n-1\choose i}i!p_{n-i-1},$$ which after summing over all $n\geq 1$ gives $$p'(x)=(1+x+\cdots+x^{k-1})p(x),$$ and therefore the claim is true since $p(0)=1$. \end{proof} \begin{prop}\label{easyprop1} One has ${\mathcal{S}}_n(a{\mbox{-}} a_1\cdots a_k)={\mathcal{S}}_n(aa_1\cdots a_k)$, and thus the EGF for the number of permutations avoiding $aa_1\cdots a_k$ is $\exp(\sum_{i=1}^{k}x^i/i)$. \end{prop} \begin{proof} Clearly ${\mathcal{S}}_n(a{\mbox{-}} a_1\cdots a_k)\subseteq {\mathcal{S}}_n(aa_1\cdots a_k)$. Suppose now that $\pi\in {\mathcal{S}}_n(aa_1\cdots a_k)$ and $\pi$ contains an occurrence of $a{\mbox{-}} a_1\cdots a_k$, say $\pi_i\pi_j\pi_{j+1}\cdots\pi_{j+k-1}$ where $i+1<j$. We will obtain a contradiction, which will show that ${\mathcal{S}}_n(aa_1\cdots a_k)\subseteq {\mathcal{S}}_n(a{\mbox{-}} a_1\cdots a_k)$. One can assume that $j-i$ is minimal among all occurrences of $a{\mbox{-}} a_1\cdots a_k$ in $\pi$. If $\pi_{j-1}<\pi_i$ then $\pi_{j-1}\pi_j\pi_{j+1}\cdots\pi_{j+k-1}$ is an occurrence of $aa_1\cdots a_k$, a contradiction to $\pi\in {\mathcal{S}}_n(aa_1\cdots a_k)$; otherwise, $\pi_i\pi_{j-1}\pi_j\cdots\pi_{j+k-2}$ is an occurrence of $a{\mbox{-}} a_1\cdots a_k$, a contradiction to $j-i$ being minimal. 
\end{proof} \begin{Cor}\label{newfour1} The EGF for the number of permutations avoiding $aa_1a_2a_3$ is given by $\exp(x+x^2/2+x^3/3)$.\end{Cor} \begin{The}{\rm(Distribution of $aa_1a_2\cdots a_k$)}\label{spogpdist} Let $$P:=P(x,y)=\sum_{n\geq 0}\sum_{\pi\in\mathcal{S}_n}y^{e(\pi)}x^{n}/n!$$ be the BGF on permutations, where $e(\pi)$ is the number of occurrences of the SPOP $p=aa_1a_2\cdots a_k$ in $\pi$. Then $P$ is the solution of \begin{equation}\label{eq1} \frac{\partial P}{\partial x}=yP^2+\frac{(1-y)(1-x^k)}{1-x}P\end{equation} with the initial condition $P(0,y)=1$. \end{The} \begin{proof} Suppose $\pi=\pi'1\pi''$ is a permutation. Then \[ e(\pi) = \left\{ \begin{array}{ll} e(\pi')+e(\pi'')+1 & \mbox{if $|\pi''|\geq k$,} \\ e(\pi') & \mbox{if $|\pi''|< k$,} \end{array} \right. \] since an occurrence of $p$ cannot start in $\pi'$ and end outside $\pi'$; also, when $\pi''$ is of length at least $k$, it contributes one extra occurrence of $p$ starting at 1. Let $P_{<k}:=P_{<k}(x,y)=\sum_{n=0}^{k-1}\sum_{\pi\in\mathcal{S}_n}y^{e(\pi)}x^{n}/n!=\sum_{n=0}^{k-1}x^n=\frac{1-x^k}{1-x}$. Readers familiar with the symbolic method can now see that $$P'=P(y(P-P_{<k})+P_{<k})$$ with the initial condition $P(0,y)=1$, and the desired equation is easy to get by plugging in $P_{<k}$ and rewriting. The rest of the proof is dedicated to a brief explanation of the symbolic method (see~\cite{FlaSed} for more details) and to applying it to our case. In our presentation we follow~\cite{ElizNoy}. There is a direct correspondence between set-theoretic operations on combinatorial classes and algebraic operations on EGFs. Let $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$ be classes of labeled combinatorial objects, and let $A(x)$, $B(x)$, and $C(x)$ be their EGFs, respectively. 
Then if $\mathcal{A}=\mathcal{B}\cup \mathcal{C}$ is a union of disjoint copies, then $A(x)=B(x)+C(x)$; if $\mathcal{A}=\mathcal{B}\star \mathcal{C}$ is the labeled product, that is, the usual Cartesian product enriched with the relabeling operation, then $A(x)=B(x)C(x)$; if $\mathcal{A}=\mathcal{B}^{\Box}\star \mathcal{C}$ is the box product, that is, the subset of $\mathcal{B}\star \mathcal{C}$ formed by those pairs in which the smallest label lies in the $\mathcal{B}$ component, then $A(x)=\int_0^x(\frac{d}{dt}B(t))\cdot C(t)dt$. The same holds if we have BGFs instead of EGFs. Let $\mathcal{P}$ be the class of all permutations and let $\mathcal{P}_{<k}$ be the class of permutations of length less than $k$. With some abuse of notation, we introduce the parameter $y$ into the equation for classes, meaning that it is to be placed there when we write the corresponding differential equation for the BGFs. With this notation, and using the property of $e(\pi)$ above, we can write $$\mathcal{P}=\{\epsilon\}+\{x\}^{\Box}\star\mathcal{P}\star[y(\mathcal{P}-\mathcal{P}_{<k})+\mathcal{P}_{<k}],$$ where $\epsilon$ is the empty permutation. We differentiate the corresponding equation for BGFs to get the desired result. \end{proof} Note that if $y=0$ in Theorem~\ref{spogpdist}, then, due to Proposition~\ref{easyprop1}, the function in Corollary~\ref{nicecor} is supposed to be the solution to~(\ref{eq1}), which is indeed the case. If $k=1$ in Theorem~\ref{spogpdist}, then as the solution to~(\ref{eq1}) we get nothing but the distribution of descents in permutations: $(1-y)(e^{(y-1)x}-y)^{-1}$. Thus Theorem~\ref{spogpdist} can be thought of as a generalization of the result on the descent distribution. The following theorem generalizes Theorem~\ref{spogpdist}. Indeed, Theorem~\ref{spogpdist} is obtained from Theorem~\ref{spogpdist1} by plugging in $\ell=0$ and observing that, obviously, $aa_1\cdots a_k$ and $a_1\cdots a_ka$ are equidistributed. 
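The equation $P'=P(y(P-P_{<k})+P_{<k})$ from the proof above also gives a convenient way to sanity-check Theorem~\ref{spogpdist} numerically: extract the coefficients of $x$ term by term and compare with direct enumeration. Below is a minimal Python sketch (not part of the original argument) for $k=2$, where an occurrence of $aa_1a_2$ is read off as three consecutive letters whose first letter is the smallest, as dictated by the flat poset:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

K = 2  # the pattern a a_1 a_2: an occurrence is K+1 consecutive letters whose first is smallest
N = 6  # check all lengths up to N

def pmul(f, g):
    """Multiply two polynomials in y, stored as dicts {power: coefficient}."""
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] = h.get(i + j, 0) + a * b
    return h

def padd(f, g):
    h = dict(f)
    for j, b in g.items():
        h[j] = h.get(j, 0) + b
    return h

# Expand P' = P*(y*(P - P_<K) + P_<K) with ordinary coefficients p_n(y) of x^n.
p = [{0: Fraction(1)}]
for n in range(N):
    q = []  # q_m(y) = y*(p_m(y) - [m < K]) + [m < K]
    for m in range(n + 1):
        low = 1 if m < K else 0
        qm = {j + 1: c for j, c in p[m].items()}  # y * p_m(y)
        qm[1] = qm.get(1, 0) - low                # - y*[m < K]
        qm[0] = qm.get(0, 0) + low                # + [m < K]
        q.append(qm)
    rhs = {}
    for i in range(n + 1):
        rhs = padd(rhs, pmul(p[i], q[n - i]))
    p.append({j: c / (n + 1) for j, c in rhs.items() if c})  # [x^n] P' = (n+1) p_{n+1}

def brute(n):
    """Distribution of the number of occurrences of a a_1 a_2 over S_n."""
    dist = {}
    for w in permutations(range(n)):
        e = sum(min(w[i:i + K + 1]) == w[i] for i in range(n - K))
        dist[e] = dist.get(e, 0) + 1
    return dist

for n in range(N + 1):
    assert {j: c * factorial(n) for j, c in p[n].items() if c} == brute(n)
```

Agreement for all $n\leq N$ confirms the recursion; setting $y=0$ in the computed coefficients likewise reproduces the avoidance numbers of Corollary~\ref{newfour1}-type results for $k=2$.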
\begin{The}{\rm(Distribution of $a_1a_2\cdots a_kaa_{k+1}a_{k+2}\cdots a_{k+\ell}$)}\label{spogpdist1} Let $$P:=P(x,y)=\sum_{n\geq 0}\sum_{\pi\in\mathcal{S}_n}y^{e(\pi)}x^{n}/n!$$ be the BGF on permutations, where $e(\pi)$ is the number of occurrences of the SPOP $p=a_1a_2\cdots a_kaa_{k+1}a_{k+2}\cdots a_{k+\ell}$ in $\pi$. Then $P$ is the solution of \begin{equation}\label{eq2} \frac{\partial P}{\partial x}=y\left(P-\frac{1-x^k}{1-x}\right)\left(P-\frac{1-x^{\ell}}{1-x}\right)+\frac{2-x^k-x^{\ell}}{1-x}P-\frac{1-x^k-x^{\ell}+x^{k+\ell}}{(1-x)^2}\end{equation} with the initial condition $P(0,y)=1$. \end{The} \begin{proof} The proof is a straightforward application of the technique introduced in the proof of Theorem~\ref{spogpdist}. We use the same notation and the adjusted steps of that proof without further explanation. Suppose $\pi=\pi'1\pi''$ is a permutation. Then \[ e(\pi) = \left\{ \begin{array}{ll} e(\pi')+e(\pi'')+1 & \mbox{if $|\pi'|\geq k$ and $|\pi''|\geq \ell$,} \\ e(\pi') + e(\pi'') & \mbox{otherwise.} \end{array} \right. \] One can now see that $\mathcal{P}$ is equal to $$\{\epsilon\}+\{x\}^{\Box}\star[y(\mathcal{P}-\mathcal{P}_{<k})\star(\mathcal{P}-\mathcal{P}_{<\ell}) +(\mathcal{P}-\mathcal{P}_{<k})\star \mathcal{P}_{<\ell} + \mathcal{P}_{<k}\star (\mathcal{P}-\mathcal{P}_{<\ell})+ \mathcal{P}_{<k}\star\mathcal{P}_{<\ell}],$$ and the rest is obtained by rewriting in terms of BGFs and differentiating. \end{proof} If $y=0$ in Theorem~\ref{spogpdist1} then we get the following corollary: \begin{Cor}\label{generate1} The EGF $A(x)=\sum_{n\geq 0}A_nx^n/n!$ for the number of permutations avoiding the SPOP $p=a_1a_2\cdots a_kaa_{k+1}a_{k+2}\cdots a_{k+\ell}$ satisfies the following differential equation with the initial condition $A(0)=1$: $$A'(x)=\frac{2-x^k-x^{\ell}}{1-x}A(x)-\frac{1-x^k-x^{\ell}+x^{k+\ell}}{(1-x)^2}.$$ \end{Cor} The following corollaries to Corollary~\ref{generate1} are obtained by plugging in $k=\ell=1$, and $k=1$ and $\ell=2$, respectively. 
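Corollary~\ref{generate1} also lends itself to a direct numerical check: the ODE determines the coefficients $A_n$ recursively, and these can be compared with a brute-force count, an occurrence of $p$ being $k+\ell+1$ consecutive letters whose minimum sits right after the first $k$ of them. A Python sketch (the assertions are illustrative, not part of the original text); the case $\ell=0$, $k=2$ recovers the involution numbers of Corollary~\ref{nicecor}:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

N = 8  # series truncation order

def series_solve(k, l):
    """EGF coefficients A_0..A_{N-1} of A' = f*A + g, A(0)=1, with
    f = (2 - x^k - x^l)/(1-x) and g = -(1 - x^k - x^l + x^{k+l})/(1-x)^2."""
    def expand(num, m):
        # ordinary coefficients of (sum of c*x^e over num) divided by (1-x)^m, m in {1, 2}
        out = [Fraction(0)] * N
        for e, c in num:
            for j in range(e, N):
                out[j] += c * (1 if m == 1 else j - e + 1)
        return out
    f = expand([(0, 2), (k, -1), (l, -1)], 1)
    g = expand([(0, -1), (k, 1), (l, 1), (k + l, -1)], 2)
    a = [Fraction(0)] * N
    a[0] = Fraction(1)
    for n in range(N - 1):
        a[n + 1] = (g[n] + sum(f[i] * a[n - i] for i in range(n + 1))) / (n + 1)
    return [factorial(n) * a[n] for n in range(N)]

def brute(n, k, l):
    """n-permutations avoiding p: no window of k+l+1 letters with its minimum at index k."""
    m = k + 1 + l
    return sum(1 for w in permutations(range(n))
               if not any(min(w[i:i + m]) == w[i + k] for i in range(n - m + 1)))

for k, l in [(1, 1), (2, 0), (1, 2)]:
    A = series_solve(k, l)
    assert all(A[n] == brute(n, k, l) for n in range(7))
assert series_solve(2, 0)[:7] == [1, 1, 2, 4, 10, 26, 76]  # permutations with cycles of length <= 2
```

The exact rational arithmetic avoids any floating-point ambiguity when comparing coefficients.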
\begin{Cor}{\rm(\cite{Kit1})}\label{valleyless} The EGF for the number of permutations avoiding $a_1aa_2$ is $(\exp(2x)+1)/2$, and thus $|\mathcal{S}_n(a_1aa_2)|=2^{n-1}$.\end{Cor} \begin{Cor}\label{newfour} The EGF for the number of permutations avoiding $a_1aa_2a_3$ is $$1+\sqrt{\frac{\pi}{2}}\left(\mbox{erf}\left(\frac{1}{\sqrt{2}}x+\sqrt{2}\right)-\mbox{erf}(\sqrt{2})\right)e^{\frac{1}{2}x(x+4)+2},$$ where $\mbox{erf}(x)=\frac{2}{\sqrt{\pi}}\displaystyle\int_{0}^{x}e^{-t^2}\ dt$ is the error function. \end{Cor} If $k=1$ and $\ell=1$, then our pattern $a_1aa_2$ is nothing but the valley statistic. In~\cite{RieZel} a recursive formula for the generating function of permutations with exactly $k$ valleys is obtained, which, however, does not seem to allow (at least easily) finding the corresponding BGF. As a corollary to Theorem~\ref{spogpdist1}, we get the following BGF by solving~(\ref{eq2}) for $k=1$ and $\ell=1$: \begin{Cor}\label{valleys} The BGF for the distribution of peaks (valleys) in permutations is given by $$1-\frac{1}{y}+\frac{1}{y}\sqrt{y-1}\cdot \tan\left(x\sqrt{y-1}+\arctan\left(\frac{1}{\sqrt{y-1}}\right)\right).$$\end{Cor} Expanding the BGF in Corollary~\ref{valleys} we can get, for example, the sequences A000431, A000487, and A000517 appearing in~\cite{Sloane} for the numbers of permutations with exactly one, two, and three valleys, respectively. Note that we have already obtained the number of {\em valleyless} permutations in Corollary~\ref{valleyless}. The valleyless permutations were studied in~\cite{RieZel}. \subsection{Distribution of the patterns with additional restrictions} The results of this subsection are in a direction similar to that of the papers~\cite{BraClaSte},~\cite{Man}, \cite{ManVan}, and several others, where the authors study $1{\mbox{-}} 3{\mbox{-}} 2$-avoiding permutations with respect to avoidance/count of other patterns. 
Such a study not only gives interesting enumerative results, but also provides a number of applications (see~\cite{BraClaSte}). To state the theorem below, we define $P_{k}=\sum_{n=0}^{k-1}\frac{1}{n+1}{2n \choose n}x^n$. That is, $P_{k}$ is the sum of the first $k$ terms in the expansion of the generating function $\frac{1-\sqrt{1-4x}}{2x}$ of the Catalan numbers. \begin{The} {\rm(Distribution of $a_1a_2\cdots a_kaa_{k+1}a_{k+2}\cdots a_{k+\ell}$ on $\mathcal{S}_n(2{\mbox{-}} 1{\mbox{-}} 3)$)}\label{spogpdist2} Let $$P:=P(x,y)=\sum_{n\geq 0}\ \sum_{\pi\in\mathcal{S}_n(2{\mbox{-}} 1{\mbox{-}} 3)}y^{e(\pi)}x^{n}$$ be the BGF of $2{\mbox{-}} 1{\mbox{-}} 3$-avoiding permutations, where $e(\pi)$ is the number of occurrences of the SPOP $p=a_1a_2\cdots a_kaa_{k+1}a_{k+2}\cdots a_{k+\ell}$ in $\pi$. Then $P$ is given by $$\frac{1-x(1-y)(P_{k}+P_{\ell})-\sqrt{(x(1-y)(P_{k}+P_{\ell})-1)^2-4xy(x(y-1)P_{k}P_{\ell}+1)}}{2xy}.$$\end{The} \begin{proof} Let $\pi=\pi_11\pi_2\in\mathcal{S}_n(2{\mbox{-}} 1{\mbox{-}} 3)$. Then each letter in $\pi_1$ must be greater than any letter in $\pi_2$, and both $\pi_1$ and $\pi_2$ must themselves be $2{\mbox{-}} 1{\mbox{-}} 3$-avoiding. Conversely, every permutation of this form is clearly $2{\mbox{-}} 1{\mbox{-}} 3$-avoiding. It is easy to see that $e(\pi)=e(\pi_1)+e(\pi_2)+\delta_{|\pi_1|,|\pi_2|}$, where \[ \delta_{|\pi_1|,|\pi_2|} = \left\{ \begin{array}{ll} 1 & \mbox{if $|\pi_1|\geq k$ and $|\pi_2|\geq \ell$,} \\ 0 & \mbox{otherwise.} \end{array} \right. \] Using the symbolic method we get that, in terms of GFs, $$P=1+x(y(P-P_k)(P-P_{\ell})+P_k\cdot P+P\cdot P_{\ell}-P_k\cdot P_{\ell}),$$ where 1 corresponds to the empty permutation, and we subtracted $P_k\cdot P_{\ell}$ since the permutations corresponding to this term are counted twice, namely in $P_k\cdot P$ and in $P\cdot P_{\ell}$. To get the desired result we solve the equation above for $P$. \end{proof} We now discuss several corollaries to Theorem~\ref{spogpdist2}. 
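The functional equation displayed in the proof can also be iterated to expand $P$, which gives a quick numerical check of Theorem~\ref{spogpdist2}. A Python sketch (illustrative only) for $k=\ell=1$, where $P_k=P_\ell=1$ and an occurrence of $a_1aa_2$ is a consecutive valley:

```python
from itertools import permutations

N = 7  # truncate at x^N

def mul(f, g):
    """Multiply series stored as {(x_deg, y_deg): coeff}, truncated at x-degree N."""
    h = {}
    for (i1, j1), a in f.items():
        for (i2, j2), b in g.items():
            if i1 + i2 <= N:
                h[(i1 + i2, j1 + j2)] = h.get((i1 + i2, j1 + j2), 0) + a * b
    return {k: v for k, v in h.items() if v}

def add(*fs):
    h = {}
    for f in fs:
        for k, v in f.items():
            h[k] = h.get(k, 0) + v
    return {k: v for k, v in h.items() if v}

one, x, y = {(0, 0): 1}, {(1, 0): 1}, {(0, 1): 1}

# For k = l = 1 we have P_k = P_l = 1, so the proof's equation reads
#   P = 1 + x*(y*(P-1)^2 + 2P - 1).
P = dict(one)
for _ in range(N + 1):  # each pass fixes one more power of x
    Pm1 = add(P, {(0, 0): -1})
    P = add(one, mul(x, add(mul(y, mul(Pm1, Pm1)), mul({(0, 0): 2}, P), {(0, 0): -1})))

def brute(n):
    """Distribution of consecutive valleys over 2-1-3-avoiding n-permutations."""
    dist = {}
    for w in permutations(range(n)):
        if any(w[j] < w[i] < w[k] for i in range(n) for j in range(i + 1, n)
               for k in range(j + 1, n)):
            continue  # w contains the classical pattern 2-1-3
        e = sum(w[i] > w[i + 1] < w[i + 2] for i in range(n - 2))
        dist[e] = dist.get(e, 0) + 1
    return dist

for n in range(N + 1):
    assert {j: c for (i, j), c in P.items() if i == n} == brute(n)
```

Since each iteration multiplies by $x$ at the top level, the coefficients of $x^n$ stabilize after $n$ passes, so no series inversion or square root is needed.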
Note that letting $y=1$ we obtain the GF for the Catalan numbers. Also, letting $y=0$ in the expansion of $P$, we obtain the GF for the number of permutations avoiding simultaneously the patterns $2{\mbox{-}} 1{\mbox{-}} 3$ and $a_1a_2\cdots a_kaa_{k+1}a_{k+2}\cdots a_{k+\ell}$. If $k=1$ and $\ell=0$ in Theorem~\ref{spogpdist2}, then $P_k=1$ and $P_{\ell}=0$, and we obtain the distribution of descents in $2{\mbox{-}} 1{\mbox{-}} 3$-avoiding permutations. This distribution gives the {\em triangle of Narayana numbers} (see~\cite[A001263]{Sloane} for more details). If $k=\ell=1$ in Theorem~\ref{spogpdist2}, then we deal with avoiding the pattern $2{\mbox{-}} 1{\mbox{-}} 3$ and counting occurrences of the pattern $312$, since any occurrence of $a_1aa_2$ in a legal permutation must be an occurrence of $312$ and vice versa. Thus the BGF of $2{\mbox{-}} 1{\mbox{-}} 3$-avoiding permutations with a prescribed number of occurrences of $312$ is given by $$\frac{1-2x(1-y)-\sqrt{4(1-y)x^2+1-4x}}{2xy}.$$ Reading off the coefficients of the terms free of $y$ in the expansion of the function above, we see that the number of $n$-permutations avoiding simultaneously the patterns $2{\mbox{-}} 1{\mbox{-}} 3$ and $312$ is $2^{n-1}$, which is known and is easy to see directly from the structure of such permutations. Reading off the coefficients of the terms in which $y$ appears to the first power, we see that the number of $n$-permutations avoiding $2{\mbox{-}} 1{\mbox{-}} 3$ and having exactly one occurrence of the pattern $312$ is given by $(n-1)(n-2)2^{n-4}$. The corresponding sequence appears as~\cite[A001788]{Sloane} and gives an interesting fact, which we state as Proposition~\ref{faces}. We give a combinatorial proof of that fact. \begin{prop}\label{faces} There is a bijection between the 2-dimensional faces of the $(n+1)$-dimensional hypercube and the $2{\mbox{-}} 1{\mbox{-}} 3$-avoiding $(n+2)$-permutations with exactly one occurrence of the pattern $312$. 
\end{prop} \begin{proof} Recall that a node in a hypercube is at level $i$ if the binary vector corresponding to it contains $i$ 1's. A 2-dimensional face in the $(n+1)$-dimensional hypercube can be specified by choosing two positions in an $(n+1)$-dimensional binary vector and fixing the remaining entries of the vector to be 0 or 1 (in $2^{n-1}$ ways). Indeed, any 2-dimensional face in a hypercube is a 4-cycle having two nodes at the same, say $i$-th, level, one node at the $(i+1)$-st level and one node at the $(i-1)$-st level. Moreover, the binary vectors corresponding to the nodes at the $i$-th level must differ in exactly two coordinates, and thus one of the vectors has 1 and 0 in these coordinates whereas the second vector has 0 and 1 there. So, the number of 2-dimensional faces in the $(n+1)$-dimensional hypercube is given by ${n+1 \choose 2}2^{n-1}$, which is the same as the number of the $(n+2)$-permutations under consideration (we refer to such permutations as ``good permutations''). We now describe the structure of the good permutations. Suppose $\pi=\pi_11\pi_2$ is a good permutation. Clearly, to avoid $2{\mbox{-}} 1{\mbox{-}} 3$, any letter of $\pi_1$ must be greater than any letter of $\pi_2$. If $\pi_1$ and $\pi_2$ are non-empty, then the unique occurrence of the pattern $312$ involves the 1 in $\pi$, and both $\pi_1$ and $\pi_2$ must avoid simultaneously $2{\mbox{-}} 1{\mbox{-}} 3$ and $312$. The permutations avoiding both $2{\mbox{-}} 1{\mbox{-}} 3$ and $312$ have one peak, that is, the elements to the right (resp., left) of the largest element must be in decreasing (resp., increasing) order. If $\pi_1$ (resp., $\pi_2$) is empty, then $\pi_2$ (resp., $\pi_1$) is a good permutation, and we use induction on the length to describe the structure of $\pi$. 
Suppose a 2-dimensional face is defined by a binary vector $${\bf v}=a_1\cdots a_ixa_{i+2}\cdots a_jya_{j+2}\cdots a_{n+1}$$ with the chosen positions $i+1$ and $j+1$ filled by $x$ and $y$ (if $y$ is next to $x$ then $j=i+1$; if $x$ is the leftmost element then $i=0$; if $y$ is the rightmost element then $j=n$). Based on the structure considerations above, we describe a procedure for finding the good $(n+2)$-permutation corresponding to ${\bf v}$. We read ${\bf v}$ from left to right and place $1,2,\ldots, n+2$, one by one, into our permutation $\pi=\pi_1\pi_2\cdots\pi_{n+2}$, which we think of as being initially $n+2$ empty slots. If we write, say, $\pi'_k$, then we mean that the $k$-th slot of $\pi$ is filled. We start filling $\pi$ by reading $a_k$, $k=1,2,\ldots,i$: if $a_k=0$, place $k$ into the leftmost empty slot of $\pi$; otherwise place $k$ into the rightmost empty slot. Suppose that as a result of filling the first $i$ elements we get $\pi'_1\cdots\pi'_t\pi_{t+1}\cdots \pi_{n+t-i+2}\pi'_{n+t-i+3}\cdots\pi'_{n+2}$. Set $\pi_{t+j-i+1}=i+1$. Note that currently we have the word $\pi'_1\cdots\pi'_tA(i+1)B\pi'_{n+t-i+3}\cdots\pi'_{n+2}$, where $A$ and $B$ consist of empty slots, $|A|=j-i\geq 1$ and $|B|=n-j+1\geq 1$. In what follows, any element placed in $A$ is greater than any element placed in $B$, and thus the element $i+1$ is involved in an occurrence of the pattern $312$. This occurrence will be the only one in the permutation. We fill in $B$ by reading $a_k$, $k=j+2,\ldots, n+1$, and placing the elements $(i+2),\ldots,(n-j+i+1)$, one by one, as follows: if $a_k=0$, place the current element into the leftmost empty slot of $B$; otherwise place the current element into the rightmost empty slot. We place $(n-j+i+2)$ in the remaining empty slot of $B$. Fill in the remaining elements, one by one in increasing order, into $A$ by reading $a_k$, $k=i+2,\ldots,j$, in a way similar to that used for $B$. 
In particular, $n+2$ will be placed in the remaining empty slot of $A$. For example, the face $110x0y01$ corresponds to the permutation $389457621$, where $A$ is filled by 89 and $B$ by 576. Our map is clearly injective, and its inverse is easy to see. \end{proof} If $k=1$ and $\ell=2$ in Theorem~\ref{spogpdist2}, then we deal with avoiding the pattern $2{\mbox{-}} 1{\mbox{-}} 3$ and counting occurrences of the pattern $a_1aa_2a_3$. In particular, one can see that the number of permutations avoiding simultaneously $2{\mbox{-}} 1{\mbox{-}} 3$ and $a_1aa_2a_3$ is given by the {\em Pell numbers} $p(n)$ defined by $p(n) = 2p(n-1) + p(n-2)$ for $n>1$; $p(0) = 0$ and $p(1) = 1$. The Pell numbers appear as~\cite[A000129]{Sloane}, where one can find objects related to our restricted permutations. \section{$q$-analogues for non-overlapping SPOPs}\label{q-an} The purpose of this section is to prove Theorem~\ref{q-analog}, which is a $q$-analogue of~\cite[Thm. 16]{Kit3}. In fact, the formulation of Theorem~\ref{q-analog} is similar to that of the $q$-analogue of~\cite[Thm. 32]{Kit2} obtained in~\cite{MenRem}. Moreover, to prove Theorem~\ref{q-analog} one could use the same arguments as those in~\cite{MenRem}, involving rather complicated considerations based on symmetric functions, but we choose a simpler proof, similar to the proof of~\cite[Thm. 32]{Kit2} in~\cite{Kit2}. We fix some notation. Let $p$ be a segmented POP (SPOP) and let $A^p_{n,k}$ be the number of $n$-permutations avoiding $p$ and having $k$ inversions. As usual, $[n]_q=q^0+\cdots +q^{n-1}$, $[n]_q!=[n]_q\cdots [1]_q$, $\left[\begin{array}{c} n \\ i \end{array}\right]_q= \frac{[n]_q!}{[i]_q![n-i]_q!}$, and, as above, $\mbox{inv}(\pi)$ denotes the number of inversions in a permutation $\pi$. We set $A^p_n(q)=\sum_{\pi \mbox{ avoids } p}q^{\mbox{inv}(\pi)}$, the sum being over $n$-permutations. 
Moreover, $$A^p_q(x)=\sum_{n,k}A^p_{n,k}q^k\frac{x^n}{[n]_q!}=\sum_{n}A^p_n(q)\frac{x^n}{[n]_q!}= \sum_{\pi \mbox{ avoids }p}q^{\mbox{inv}(\pi)} \frac{x^{|\pi|}}{[|\pi|]_q!}.$$ All the definitions above carry over to permutations that {\em quasi-avoid} $p$ (indicated by $B$ rather than $A$), namely, those permutations that have exactly one occurrence of $p$, this occurrence consisting of the $|p|$ rightmost letters of the permutation. \begin{Lem}\label{lem01}{\rm(A $q$-analogue of~\cite[Prop. 4]{Kit2} that is valid for POPs)} We have $B^p_q(x)=(x-1)A^p_q(x)+1$. \end{Lem} \begin{proof} Consider all $(n-1)$-permutations avoiding $p$ (the number of those, with inversions registered, is $A^p_{n-1}(q)$) and all possible extensions of these permutations to $n$-permutations obtained by writing one more letter to the right. The number of permutations obtained, with inversions registered, is $(1+q+\cdots+q^{n-1})A^p_{n-1}(q)=[n]_qA^p_{n-1}(q)$, where, for instance, $q^{n-1}$ in the sum corresponds to having $1$ in the rightmost position. Obviously, the set of these permutations is a disjoint union of the set of all $n$-permutations that avoid $p$ and the set of all $n$-permutations that quasi-avoid $p$. Thus, $B^p_n(q)=[n]_qA^p_{n-1}(q)-A^p_n(q)$. Multiplying both sides of the last equality by $x^n/[n]_q!$ and summing over all $n$ gives the desired result. \end{proof} \begin{Lem}\label{lem02}{\rm(A $q$-analogue of~\cite[Thm. 25]{Kit2} that is valid for POPs)} Let $P=p{\mbox{-}}\sigma$ be a POP, where $\sigma$ is an arbitrary POP built on an alphabet incomparable to that of the SPOP $p$. Then $$A^P_q(x)=A^p_q(x)+A^{\sigma}_q(x)B^p_q(x).$$ \end{Lem} \begin{proof} If a permutation $\pi$ avoids $p$ then it avoids $P$. Otherwise we find the leftmost occurrence of $p$ in $\pi$. We assume that this occurrence consists of the $|p|$ rightmost letters among the $i$ leftmost letters of $\pi$. 
So the subword of $\pi$ beginning at the $(i+1)$-st letter must avoid $\sigma$. From this, using the independence between the first $i$ letters of $\pi$ and the remaining letters, we conclude that $$A^P_n(q)=A^p_n(q)+\displaystyle\sum_{i=|p|}^n \left[\begin{array}{c} n \\ i \end{array}\right]_qB^p_i(q) A^{\sigma}_{n-i}(q).$$ Observe that one can change the lower bound in the sum above to 0, because $B^p_i(q)=0$ for $i=0,1,\ldots,|p|-1$. Multiplying both sides by $x^n/[n]_q!$ and summing over all $n$ we get the desired result. \end{proof} \begin{The}\label{mult}{\rm(A $q$-analogue of~\cite[Thm. 28]{Kit2} that is valid for POPs)} Let $p=p_1{\mbox{-}}\cdots{\mbox{-}} p_k$ be a multi-pattern (the $p_i$'s are SPOPs, and the letters of $p_i$ and $p_j$ are incomparable for $i\neq j$). Then $$A^p_q(x)=\displaystyle\sum_{i=1}^{k}A^{p_i}_q(x)\prod_{j=1}^{i-1}B^{p_j}_q(x)= \sum_{i=1}^{k}A^{p_i}_q(x)\prod_{j=1}^{i-1}((x-1)A^{p_j}_q(x)+1).$$ \end{The} \begin{proof} The first equality follows from Lemma~\ref{lem02} by induction on $k$, and the second equality is then given by Lemma~\ref{lem01}.\end{proof} \begin{The}{\rm(A $q$-analogue of~\cite[Thm. 16]{Kit3})}\label{q-analog} If $N_p(\pi)$ denotes the maximum number of non-overlapping occurrences of a SPOP $p$ in $\pi$, then $$\displaystyle\sum_{\pi}y^{N_p(\pi)}q^{\mbox{inv}(\pi)}\frac{x^{|\pi|}}{[|\pi|]_q!}= \frac{A^p_q(x)}{1-yB^p_q(x)}=\frac{A^p_q(x)}{1-y((x-1)A^p_q(x)+1)}.$$\end{The} \begin{proof} We fix $k$ and consider the multi-pattern $P_k=p{\mbox{-}}\cdots{\mbox{-}} p$ with $k$ copies of $p$. A permutation avoiding $P_k$ has at most $k-1$ non-overlapping occurrences of $p$. From Theorem~\ref{mult}, $$A^{P_{k+1}}_q(x)-A^{P_k}_q(x)=A^p_q(x)(B^p_q(x))^k,$$ which is the generating function for the number of permutations with exactly $k$ non-overlapping occurrences of $p$ and with inversions registered. The result now follows by summing over all $k$ and applying Lemma~\ref{lem01}. 
\end{proof} \section{Some open problems on POPs}\label{sec3} We know very little about the avoidance of POPs, and almost nothing about their distribution. There are many posets and many different classes of posets, which provides enormous possibilities for further research. In particular, a natural step would be to extend/generalize results in the literature related to GPs to results related to POPs, in the manner in which Proposition~\ref{claes} and Theorem~\ref{dist} are obtained. In this section, we state just a few problems on POPs that might be interesting to solve. \subsection{Alternating patterns} A permutation $\pi_1\pi_2\ldots\pi_n$ is {\em alternating} (resp., {\em reverse alternating}) if $\pi_1>\pi_2<\pi_3>\cdots$ (resp., $\pi_1<\pi_2>\pi_3<\cdots$). It is well known that the EGF for the number of (reverse) alternating permutations is $\tan x+\sec x$. We say that a permutation is {\em $k$-non-alternating} (resp., {\em $k$-non-reverse-alternating}) if it does not contain $k$ consecutive letters that form an alternating (resp., a reverse alternating) permutation. Using the complement, one can see that the numbers of $k$-non-alternating and $k$-non-reverse-alternating $n$-permutations are the same. \begin{Prob} Enumerate $k$-non-alternating $n$-permutations. (For $k=4$ and $n\geq 4$ the numbers of ``good'' $n$-permutations are 19, 70, 331, 1863, 11637, 81110, ...; for $k=5$ and $n\geq 5$ we have the sequence 104, 528, 3296, 23168, 179712,...) \end{Prob} \begin{Prob} Enumerate $n$-permutations that are both $k$-non-alternating and $k$-non-reverse-alternating. (For $k=4$ and $n\geq 4$ we have the sequence 14, 52, 204, 1010, 5466, 34090,...; for $k=5$ and $n\geq 5$ we have 24, 88, 458, 2716, 17808, 135182,...)\end{Prob} To generalize the problems above, we define a $k$-{\em alternating} (resp., $k$-{\em reverse-alternating}) pattern to be one that forms an alternating (resp., a reverse alternating) permutation of length $k$. 
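The numerical data quoted in the two problems above is straightforward to regenerate by brute force, which may help in guessing formulas; a short Python sketch (only the values listed above are asserted):

```python
from itertools import permutations

def alt(w):
    """w_1 > w_2 < w_3 > ... (alternating)."""
    return all((w[i] > w[i + 1]) == (i % 2 == 0) for i in range(len(w) - 1))

def rev_alt(w):
    """w_1 < w_2 > w_3 < ... (reverse alternating)."""
    return all((w[i] < w[i + 1]) == (i % 2 == 0) for i in range(len(w) - 1))

def count(n, k, tests):
    """n-permutations having no k consecutive letters that pass any of the given tests."""
    return sum(1 for p in permutations(range(n))
               if not any(t(p[i:i + k]) for i in range(n - k + 1) for t in tests))

assert [count(n, 4, [alt]) for n in range(4, 7)] == [19, 70, 331]
assert count(5, 5, [alt]) == 104
assert [count(n, 4, [alt, rev_alt]) for n in range(4, 7)] == [14, 52, 204]
```

For $n=k$ the count is simply $n!$ minus the number of alternating $n$-permutations, e.g., $24-5=19$ for $k=4$.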
Clearly, a $k$-alternating (resp., $k$-reverse-alternating) segmented pattern is a SPOP, where the corresponding poset is built on $k$ elements $a_1,\ldots, a_k$ with the relations $a_1>a_2<a_3>\cdots$ (resp., $a_1<a_2>a_3<\cdots$) (see Figure~\ref{poset000} for the case $k=5$). \begin{figure}[h] \begin{center} \begin{picture}(10,3) \put(-5,0){\put(12,2){\p} \put(14,0){\p} \put(16,2){\p} \put(18,0){\p} \put(20,2){\p} \path(12,2)(14,0)(16,2)(18,0)(20,2) \put(-1.2,0){$a_1$} \put(0.6,2){$a_2$} \put(3.6,0.8){$a_3$} \put(6.3,2){$a_4$} \put(8.3,0){$a_5$} \put(10.8,2){$a_1$} \put(13.6,0.8){$a_2$} \put(16.3,2){$a_3$} \put(18.5,0){$a_4$} \put(20.3,2){$a_5$} \put(0,0){\p} \put(2,2){\p} \put(4,0){\p} \put(6,2){\p} \put(8,0){\p} \path(0,0)(2,2)(4,0)(6,2)(8,0) } \end{picture} \caption{Posets for the 5-reverse-alternating and 5-alternating patterns.} \label{poset000} \end{center} \end{figure} Note that an occurrence of a descent in a permutation is an occurrence of a 2-alternating pattern. Thus we have yet another generalization of the notion of a descent beyond that discussed in subsection~\ref{unimod}. Moreover, such patterns generalize the patterns associated with peaks (valleys) in permutations, which gives a motivation to study them. The number of descents in a permutation $\pi$ is denoted by $des(\pi)$. {\em Eulerian numbers} $A(n,k)$ count permutations in the symmetric group $\mathcal{S}_n$ with $k$ descents and they are the coefficients of the {\em Eulerian polynomials} $A_n(t)$ defined by $A_n(t)=\sum_{\pi\in\mathcal{S}_n}t^{1+des(\pi)}$. The Eulerian polynomials satisfy the identity $$\sum_{k\geq 0}k^nt^k=\frac{A_n(t)}{(1-t)^{n+1}}.$$ For more properties of the Eulerian polynomials see~\cite{comtet}. A natural generalization of the polynomials $A_n(t)$ is given by considering $k$-alternating patterns instead of descents in the definition of the polynomials. Let us call such new polynomials $B^k_n(t)$. From definitions, $A_n(t)=B^2_n(t)$. 
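The identity above is easy to verify computationally for small $n$, and the same brute-force template can serve as a starting point for experiments with the more general polynomials just introduced; a Python sketch:

```python
from itertools import permutations
from math import comb

def eulerian_poly(n):
    """Coefficient list of A_n(t) = sum over S_n of t^(1 + des(pi))."""
    coeffs = [0] * (n + 2)
    for p in permutations(range(n)):
        des = sum(p[i] > p[i + 1] for i in range(n - 1))
        coeffs[1 + des] += 1
    return coeffs

def check_identity(n, terms=10):
    """Check sum_{k>=0} k^n t^k = A_n(t)/(1-t)^(n+1) coefficient by coefficient."""
    A = eulerian_poly(n)
    inv = [comb(n + m, n) for m in range(terms)]  # 1/(1-t)^(n+1) = sum_m C(n+m, n) t^m
    for j in range(terms):
        rhs = sum(A[i] * inv[j - i] for i in range(min(j + 1, len(A))))
        assert rhs == j ** n

assert eulerian_poly(3) == [0, 1, 4, 1, 0]  # Eulerian numbers 1, 4, 1
for n in range(1, 6):
    check_identity(n)
```

Replacing the descent test by a window test for a $k$-alternating pattern would produce tables of the corresponding generalized polynomials for small $n$.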
\begin{Prob} Study the properties of the polynomials $B^k_n(t)$ and find the distribution of $k$-alternating patterns, that is, find an explicit formula for $B^k_n(t)$ or, if possible, for the coefficients of $B^k_n(t)$. \end{Prob} \begin{Prob} Find the joint distribution of $k$-alternating and $k$-reverse-alternating patterns. \end{Prob} Note that there are many other (segmented) patterns that can be built on posets similar to those in Figure~\ref{poset000}. For example, one could consider the pattern $a_1a_3a_5a_2a_4$ built on the poset to the left in Figure~\ref{poset000}. Studying patterns other than alternating ones built on such posets might also be an interesting direction to explore. \subsection{Co-unimodal patterns} Recall from subsection~\ref{unimod} that a SPOP $\sigma=\sigma_1\sigma_2\ldots\sigma_k$ is co-unimodal if $k=\sigma_1>\sigma_2>\cdots >\sigma_j<\cdots <\sigma_k$ for some $2\leq j\leq k$. We extend the concept of a co-unimodal pattern to that of a {\em free co-unimodal pattern} by removing the restriction ``$k=$'' in the definition. Note that co-unimodal patterns impose weaker restrictions on permutations than free co-unimodal patterns do. \begin{Prob} How many $n$-permutations avoid a co-unimodal pattern of length~$k$? (For $k=4$ and $j=2$ (resp., $j=3$) see the record for the pattern $utxv$ (resp., $spor$) in Table~\ref{unsolved}.)\end{Prob} \begin{Prob} How many $n$-permutations avoid a free co-unimodal pattern of length $k$? (For $k=4$, because of the complement, $j=2$ and $j=3$ give the same number of $n$-permutations avoiding them; see the record for the pattern $ijkm$ in Table~\ref{unsolved}.)\end{Prob} \begin{Prob} Find the distribution of a co-unimodal pattern of length $k$. \end{Prob} \begin{Prob} Find the distribution of a free co-unimodal pattern of length~$k$.\end{Prob} \begin{Prob} Find the number of $n$-permutations avoiding simultaneously two or more (free) co-unimodal patterns. We provide some numerical data in the case $k=4$. 
Suppose $F_2$ and $F_3$ are the free co-unimodal patterns corresponding to $j=2$ and $j=3$, respectively; also, $U_2$ and $U_3$ are the co-unimodal patterns corresponding to $j=2$ and $j=3$, respectively. The initial values for the number of $n$-permutations, $n\geq 4$, avoiding a pair of the patterns are as follows: ($F_2$,$F_3$) -- 18, 66, 252, 1176, 5768, 34216; ($F_2$,$U_3$) -- 19, 75, 330, 1753, 10319, 70011; ($F_3$,$U_2$) -- 20, 81, 372, 1981, 11866, 80043; ($U_2$,$U_3$) -- 21, 91, 462, 2718, 18181, 136491. \end{Prob} \begin{Prob} Find the joint distribution of two or more (free) co-unimodal patterns. \end{Prob} \subsection{Remaining cases of SPOPs of length four}\label{usolved-4} In table~\ref{unsolved}, we record a few initial values for the number of $n$-permutations, $n\geq 1$, in some of the unsolved cases of avoidance of SPOPs of length four. In the table we record patterns having at least one pair of incomparable letters (see Figure~\ref{poset001} for the corresponding poset), although there are also unsolved cases in which all elements are comparable (the Hasse diagram is a chain). We refer to~\cite{ElizNoy} for information on unsolved segmented GPs of length four. Table~\ref{unsolved} is also an extended version of the corresponding table in~\cite{Kit3}.
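The pair-avoidance data above are easy to reproduce by exhaustive search. In the sketch below (our own naming and encoding, under the reading that $\sigma_1=k$ forces the first letter of a window to be its maximum), an occurrence of $U_2$ (resp.\ $U_3$) in a window $w_1w_2w_3w_4$ of consecutive letters means $w_1=\max$ and $w_2<w_3<w_4$ (resp.\ $w_2>w_3<w_4$):

```python
from itertools import permutations

def avoids_U2_U3(p):
    """True if p has no segmented occurrence of the co-unimodal patterns
    U_2 (w1 max, w2 < w3 < w4) or U_3 (w1 max, w2 > w3 < w4)."""
    for i in range(len(p) - 3):
        w = p[i:i + 4]
        if w[0] == max(w) and (w[1] < w[2] < w[3] or w[1] > w[2] < w[3]):
            return False
    return True

def count_avoiders(n):
    """Number of n-permutations avoiding both U_2 and U_3 simultaneously."""
    return sum(1 for p in permutations(range(1, n + 1)) if avoids_U2_U3(p))
```

Here `count_avoiders(4)` gives 21, the first value reported above for the pair ($U_2$,$U_3$); the subsequent reported values are 91, 462, 2718, etc.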
\begin{figure}[h] \begin{center} \begin{picture}(6,4) \put(0,0){\put(2,0){\p} \put(2,2){\p} \put(4,0){\p} \put(4,2){\p} \put(4,4){\p} \path(2,0)(2,2) \path(4,0)(4,2)(4,4) \put(1,0){$1'$} \put(1,2){$2'$} \put(4.3,0){1} \put(4.3,2){2} \put(4.3,4){3} } \end{picture} \caption{Poset from which some patterns in table~\ref{unsolved} are built.} \label{poset001} \end{center} \end{figure} \begin{table}[ht] \begin{center} \begin{tabular}{|l|l|} \hline $11'22'$ & $1,\ 2,\ 6,\ 18,\ 70,\ 300,\ 1435,\ 7910,\ 47376,\ldots$\\ \hline $121'2'$ & $1,\ 2,\ 6,\ 18,\ 61,\ 281,\ 1541,\ 8920,\ 57924,\ldots$\\ \hline $11'2'2$ & $1,\ 2,\ 6,\ 18,\ 71,\ 322,\ 1665,\ 9789,\ 64327,\ldots$ \\ \hline $12'1'2$ & $1,\ 2,\ 6,\ 18,\ 61,\ 272,\ 1410,\ 8048,\ 51550,\ldots$\\ \hline $121'3$ & $1,\ 2,\ 6,\ 20,\ 83,\ 411,\ 2290,\ 14588,\ 104448,\ldots$\\ \hline $131'2$ & $1,\ 2,\ 6,\ 20,\ 81,\ 390,\ 2161,\ 13678,\ 96983,\ldots$\\ \hline $231'1$ & $1,\ 2,\ 6,\ 20,\ 83,\ 402,\ 2245,\ 14192,\ 100650,\ldots$\\ \hline $abcd$ & $1,\ 2,\ 6,\ 19,\ 70,\ 331,\ 1863,\ 11637,\ 81110,\ldots$\\ \hline $utxv$ & $1,\ 2,\ 6,\ 23,\ 110,\ 630,\ 4210,\ 32150,\ 276210,\ldots$\\ \hline $spor$ & $1,\ 2,\ 6,\ 22,\ 100,\ 540,\ 3388,\ 24248,\ 195048,\ldots$\\ \hline $ijkm$ & $1,\ 2,\ 6,\ 21,\ 90,\ 450,\ 2619,\ 17334,\ 129114,\ldots$\\ \hline $egfh$ & $1,\ 2,\ 6,\ 20,\ 84,\ 412,\ 2300,\ 14676,\ 104536,\ldots$\\ \hline $efgh$ & $1,\ 2,\ 6,\ 20,\ 80,\ 404,\ 2368,\ 15488,\ 114480,\ldots$\\ \hline $fegh$ & $1,\ 2,\ 6,\ 20,\ 80,\ 360,\ 1888,\ 11168,\ 75168,\ldots$\\ \hline \end{tabular} \smallskip \caption{The initial values for the number of $n$-permutations avoiding 4-SPOPs in a few of unsolved cases, $n\geq 1$. See Figures~\ref{poset001} and~\ref{poset010} for the corresponding poset.} \label{unsolved} \end{center} \end{table} Other 4-SPOPs that were not considered can be built on the posets in Figure~\ref{poset010}. 
For example, for the second poset there are three SPOPs to consider that are non-equivalent up to trivial bijections: $egfh$, $efgh$, and $fegh$ (see table~\ref{unsolved} for the corresponding sequences). \begin{figure}[h] \begin{center} \begin{picture}(2,4) \put(-10,0){\put(5,0){\p} \put(5,2){\p} \put(7,0){\p} \put(7,2){\p} \path(5,0)(5,2)(7,0)(7,2)(5,0) \put(-0.8,0){$a$} \put(-0.7,2){$b$} \put(2.3,0){$c$} \put(2.3,2){$d$} \put(4.3,0){$e$} \put(4.3,2){$f$} \put(7.3,0){$g$} \put(7.3,2){$h$} \put(9.3,2){$i$} \put(11.3,1.2){$j$} \put(12.7,0.3){$k$} \put(15.3,2){$m$} \put(9,2){\p} \put(11,1){\p} \put(13,0){\p} \put(15,2){\p} \path(9,2)(11,1)(13,0)(15,2) \put(0,0){\p} \put(0,2){\p} \put(2,0){\p} \put(2,2){\p} \path(0,0)(0,2)(2,0)(2,2) \put(16.3,0.5){$p$} \put(17.1,-0.2){$o$} \put(17.3,2){$s$} \put(19.3,1){$r$} \put(18,0){\p} \put(17,1){\p} \put(18,2){\p} \put(19,1){\p} \path(18,0)(17,1)(18,2)(19,1)(18,0) \put(21,0){\p} \put(21,2){\p} \put(23,1.5){\p} \put(23,0.5){\p} \path(21,0)(21,2)(23,1.4)(23,0.6)(21,0) \put(20.3,0){$t$} \put(20.2,2){$u$} \put(23.3,1.7){$v$} \put(23.3,0.4){$x$} } \end{picture} \caption{Five posets to build 4-SPOPs that were not considered.} \label{poset010} \end{center} \end{figure} Notice that the leftmost poset in Figure~\ref{poset010} can be used to build the 4-reverse-alternating pattern $abcd$, as well as the 4-alternating pattern $dcba$, whereas the third (resp., fourth, fifth) poset in Figure~\ref{poset010} can be used to build free co-unimodal (resp., co-unimodal) pattern(s) of length 4, namely $ijkm$ (resp., $spor$, $utxv$). \subsection{Further research directions} The problems stated above can be extended to many POPs by inserting dash(es) in the SPOPs discussed. Also, a natural generalization of any avoidance problem is finding the distribution of a (S)POP under consideration.
Moreover, the joint distribution of (S)POPs and, as a special case, the multi-avoidance of these patterns are possible directions for further research once the (S)POPs to consider have been chosen. All these problems are interesting from an enumerative point of view, but they might also bring interesting connections to other combinatorial objects, in which case, as always, explicit bijections would be desirable. \section{Acknowledgements} The author is grateful to Ira Gessel for the discussion and references related to the distribution of peaks in permutations; to Jeff Remmel for the discussion on $q$-analogues for non-overlapping occurrences of patterns, as well as for his support during the author's stay at UCSD; to the two anonymous referees for their helpful comments.
https://arxiv.org/abs/math/0602050
Rough Path Analysis Via Fractional Calculus
Using fractional calculus we define integrals of the form $\int_{a}^{b}f(x_{t})dy_{t}$, where $x$ and $y$ are vector-valued Hölder continuous functions of order $\beta \in (\frac13, \frac12)$ and $f$ is a continuously differentiable function such that $f'$ is $\lambda$-Hölder continuous for some $\lambda>\frac1\beta-2$. Under some further smoothness conditions on $f$, the integral is a continuous functional of $x$, $y$, and the tensor product $x\otimes y$ with respect to the Hölder norms. We derive some estimates for these integrals and we solve differential equations driven by the function $y$. We discuss some applications to stochastic integrals and stochastic differential equations.
\section{Introduction} The theory of rough path analysis has been developed starting from the seminal paper by Lyons \cite{lyons}. The purpose of this theory is to analyze dynamical systems $dx_{t}=f(x_{t})dy_{t}$, where the control function $y$ is not differentiable. If the rough control $y$ has finite $p$-variation on bounded intervals, where $p\geq 2$, then the dynamical system is a continuous function, in the $p$-variation norm, of $y$ and the associated multiplicative functionals $\overbrace{y\otimes \cdots \otimes y}^{k}$, with $k=2,\ldots ,[p]$. In the case $1\leq p<2$, the dynamical system can be formulated using Riemann-Stieltjes integrals and applying the results of Young \cite{Yo}. In this case, $x_{t}$ is a continuous function of $y$ in the $p$-variation norm (see Lyons \cite{Ly2}). Suppose that $f$ and $g$ are H\"{o}lder continuous functions on the interval $[a,b]$, of order $\lambda $ and $\mu $, respectively, with $\lambda +\mu >1$. Then the Riemann-Stieltjes integral $\int_{a}^{b}fdg$ can be expressed as a Lebesgue integral using fractional derivatives (see Z\"{a}hle \cite{Za98} and Proposition 2.1 below). This fact has been exploited by Nualart and R\u{a}\c{s}canu in \cite{NR} to analyze dynamical systems driven by a control function $y$ which is H\"{o}lder continuous of order $\beta >\frac{1}{2}$. In this case further results are obtained in \cite{HN} along the lines of the present paper. The purpose of this paper is to analyze dynamical systems $dx_{t}=f(x_{t})dy_{t}$, where the control function $y$ is H\"{o}lder continuous of order $\beta \in (\frac{1}{3},\frac{1}{2})$, using the techniques of classical fractional calculus, and following an approach inspired by the work of Nualart and R\u{a}\c{s}canu \cite{NR} in the case $\beta >\frac{1}{2}$.
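In the Young case recalled above, the fractional-derivative representation of $\int_a^b f\,dg$ can be checked numerically. The sketch below (our own discretization and naming; a real-valued form of the kernels, without the complex prefactors) implements the left and right Weyl derivatives by midpoint quadrature after a substitution that removes the singularity at $s=t$, and compares the resulting integral with the classical value for smooth $f,g$:

```python
import math

def D_left(f, a, alpha, t, n=300):
    """Real-valued kernel of the left Weyl derivative D_{a+}^alpha f at t.
    The substitution v = (t - s)**(1 - alpha) removes the singularity at s = t."""
    vmax = (t - a) ** (1.0 - alpha)
    acc = 0.0
    for i in range(n):                               # midpoint rule in v
        v = (i + 0.5) * vmax / n
        w = v ** (1.0 / (1.0 - alpha))               # w = t - s
        acc += (f(t) - f(t - w)) * v ** (-1.0 / (1.0 - alpha))
    acc *= vmax / (n * (1.0 - alpha))
    return (f(t) / (t - a) ** alpha + alpha * acc) / math.gamma(1.0 - alpha)

def D_right(g, b, beta, t, n=300):
    """Real-valued kernel of the right Weyl derivative D_{b-}^beta g_{b-} at t,
    where g_{b-}(t) = g(t) - g(b)."""
    vmax = (b - t) ** (1.0 - beta)
    acc = 0.0
    for i in range(n):
        v = (i + 0.5) * vmax / n
        w = v ** (1.0 / (1.0 - beta))                # w = s - t
        acc += (g(t) - g(t + w)) * v ** (-1.0 / (1.0 - beta))
    acc *= vmax / (n * (1.0 - beta))
    return ((g(t) - g(b)) / (b - t) ** beta + beta * acc) / math.gamma(1.0 - beta)

def young_integral(f, g, a, b, alpha, n=300):
    """int_a^b f dg via the real form of the fractional integration-by-parts
    formula:  int_a^b f dg = - int_a^b D_{a+}^alpha f(t) D_{b-}^{1-alpha} g_{b-}(t) dt."""
    h = (b - a) / n
    return -sum(D_left(f, a, alpha, a + (i + 0.5) * h)
                * D_right(g, b, 1.0 - alpha, a + (i + 0.5) * h)
                for i in range(n)) * h

# Example: f(t) = t, g(t) = t**2 on [0, 1]; classically int_0^1 t d(t^2) = 2/3.
val = young_integral(lambda t: t, lambda t: t * t, 0.0, 1.0, 0.4)
```

Up to quadrature error, `val` agrees with the classical value $2/3$; the sign in `young_integral` absorbs the product of the complex factors $(-1)^{\alpha}(-1)^{1-\alpha}=-1$.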
In order to achieve this objective, we first provide in Section 3 an explicit formula for integrals of the form $\int_{a}^{b}f(x_{t})dy_{t}$, where $x$ and $y$ are H\"{o}lder continuous of order $\beta \in (\frac{1}{3},\frac{1}{2})$. This formula, given in Theorem 3.1, is based on the fractional integration by parts formula, and it involves the functions $x$, $y$, and the quadratic multiplicative functional $x\otimes y$. Notice that this explicit formula does not depend on any approximation scheme. As a consequence, we derive estimates in the H\"{o}lder norm for the indefinite integral. Section 4 is devoted to establishing the existence and uniqueness of a solution for the dynamical system $dx_{t}=f(x_{t})dy_{t}$. The main ingredient in the proof of these results is to transform this equation into a system of integral equations for $x$ and $x\otimes y$ that can be solved by a standard application of a fixed point argument. We show how the solution depends continuously on the H\"{o}lder norm of $y$ and $y\otimes y$. We also prove some stability results for the differential equations which are interesting, new, and may be difficult to obtain by other approaches. We remark that to derive our results we do not make use of the theory of rough paths, and we obtain explicit formulas that do not depend on any approximation argument. These results can be applied to implement a path-wise approach to define stochastic integrals and to solve stochastic differential equations driven by a multidimensional Brownian motion. As an application of the deterministic results obtained for dynamical systems we derive a sharp rate of almost sure convergence of the Wong-Zakai approximation for multidimensional diffusion processes. We could not find this kind of estimate elsewhere. Similar results hold in the case of a fractional Brownian motion with Hurst parameter $H\in (\frac{1}{3},\frac{1}{2})$.
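For smooth paths, the quadratic multiplicative functional mentioned above is the iterated integral $(x\otimes y)_{s,t}=\int_s^t (x_\xi - x_s)\,dy_\xi$, and it satisfies the additivity (Chen-type) identity $(x\otimes y)_{s,u}+(x\otimes y)_{u,t}+(x_u-x_s)\otimes(y_t-y_u)=(x\otimes y)_{s,t}$ used throughout the paper. A short numerical sketch (our own discretization) verifies this identity for scalar smooth paths:

```python
import math

def tensor(x, yprime, s, t, n=4000):
    """(x ⊗ y)_{s,t} = int_s^t (x_ξ - x_s) y'_ξ dξ for smooth paths (midpoint rule)."""
    h = (t - s) / n
    return sum((x(s + (i + 0.5) * h) - x(s)) * yprime(s + (i + 0.5) * h)
               for i in range(n)) * h

# smooth test paths: x_t = sin t, y_t = sin(2t)/2, so y'_t = cos 2t
x = math.sin
y = lambda t: math.sin(2.0 * t) / 2.0
yp = lambda t: math.cos(2.0 * t)

s, u, t = 0.0, 0.4, 1.0
lhs = tensor(x, yp, s, u) + tensor(x, yp, u, t) + (x(u) - x(s)) * (y(t) - y(u))
rhs = tensor(x, yp, s, t)
# the multiplicative (Chen) identity holds up to quadrature error
```

Here `lhs` and `rhs` agree to within the midpoint-rule error, illustrating that for smooth paths the iterated integral is indeed a multiplicative functional.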
The approximation of the solutions of stochastic differential equations driven by a fractional Brownian motion with Hurst parameter $H\in (\frac{1}{3},\frac{1}{2})$ is more involved and it will be treated in a forthcoming paper. \section{Fractional Integrals and Derivatives} Let $a,b\in \mathbb{R}$ with $a<b.$ Denote by $L^{p}\left( a,b\right) $, $p\geq 1$, the space of Lebesgue measurable functions $f:\left[ a,b\right] \rightarrow \mathbb{R}$ for which $\left\Vert f\right\Vert _{L^{p}\left( a,b\right) }<\infty $, where \begin{equation*} \left\Vert f\right\Vert _{L^{p}\left( a,b\right) }=\left\{ \begin{array}{lll} \left( \int_{a}^{b}\left\vert f\left( t\right) \right\vert ^{p}dt\right) ^{1/p}, & & \hbox{if \ }1\leq p<\infty \\ ess\sup \left\{ \left\vert f\left( t\right) \right\vert :t\in \left[ a,b\right] \right\} , & & \hbox{if \ }p=\infty . \end{array} \right. \end{equation*} Let $f\in L^{1}\left( a,b\right) $ and $\alpha >0.$ The left-sided and right-sided fractional Riemann-Liouville integrals of $f$ of order $\alpha $ are defined for almost all $t\in \left( a,b\right) $ by \begin{equation*} I_{a+}^{\alpha }f\left( t\right) =\frac{1}{\Gamma \left( \alpha \right) }\int_{a}^{t}\left( t-s\right) ^{\alpha -1}f\left( s\right) ds \end{equation*} and \begin{equation*} I_{b-}^{\alpha }f\left( t\right) =\frac{\left( -1\right) ^{-\alpha }}{\Gamma \left( \alpha \right) }\int_{t}^{b}\left( s-t\right) ^{\alpha -1}f\left( s\right) ds, \end{equation*} respectively, where $\left( -1\right) ^{-\alpha }=e^{-i\pi \alpha }$ and $\Gamma \left( \alpha \right) =\int_{0}^{\infty }r^{\alpha -1}e^{-r}dr$ is the Euler gamma function. Let $I_{a+}^{\alpha }(L^{p})$ (resp. $I_{b-}^{\alpha }(L^{p})$) be the image of $L^{p}(a,b)$ under the operator $I_{a+}^{\alpha }$ (resp. $I_{b-}^{\alpha }$). If $f\in I_{a+}^{\alpha }\left( L^{p}\right) $ (resp.
$f\in I_{b-}^{\alpha }\left( L^{p}\right) $) and $0<\alpha <1$ then the Weyl derivatives are defined as \begin{equation} D_{a+}^{\alpha }f\left( t\right) =\frac{1}{\Gamma \left( 1-\alpha \right) }\left( \frac{f\left( t\right) }{\left( t-a\right) ^{\alpha }}+\alpha \int_{a}^{t}\frac{f\left( t\right) -f\left( s\right) }{\left( t-s\right) ^{\alpha +1}}ds\right) \label{1.1} \end{equation} and \begin{equation} D_{b-}^{\alpha }f\left( t\right) =\frac{\left( -1\right) ^{\alpha }}{\Gamma \left( 1-\alpha \right) }\left( \frac{f\left( t\right) }{\left( b-t\right) ^{\alpha }}+\alpha \int_{t}^{b}\frac{f\left( t\right) -f\left( s\right) }{\left( s-t\right) ^{\alpha +1}}ds\right) \label{1.2} \end{equation} where $a\leq t\leq b$ (the convergence of the integrals at the singularity $s=t$ holds point-wise for almost all $t\in \left( a,b\right) $ if $p=1$, and moreover in the $L^{p}$-sense if $1<p<\infty $). For any $\lambda \in (0,1)$, we denote by $C^{\lambda }(a,b)$ the space of $\lambda $-H\"{o}lder continuous functions on the interval $[a,b]$. Recall from \cite{SaKiMa93} that we have: \begin{itemize} \item If $\alpha <\frac{1}{p}$ and $q=\frac{p}{1-\alpha p}$ then \begin{equation*} I_{a+}^{\alpha }\left( L^{p}\right) =I_{b-}^{\alpha }\left( L^{p}\right) \subset L^{q}\left( a,b\right) . \end{equation*} \item If $\alpha >\frac{1}{p}$ then \begin{equation*} I_{a+}^{\alpha }\left( L^{p}\right) \,\cup \,I_{b-}^{\alpha }\left( L^{p}\right) \subset C^{\alpha -\frac{1}{p}}\left( a,b\right) .
\end{equation*} \end{itemize} The following inversion formulas hold: \begin{eqnarray} I_{a+}^{\alpha }\left( D_{a+}^{\alpha }f\right) &=&f,\quad \quad \;\forall f\in I_{a+}^{\alpha }\left( L^{p}\right) \label{1.4} \\ I_{b-}^{\alpha }\left( D_{b-}^{\alpha }f\right) &=&f,\quad \quad \;\forall f\in I_{b-}^{\alpha }\left( L^{p}\right) \label{1.3} \end{eqnarray} and \begin{equation} D_{a+}^{\alpha }\left( I_{a+}^{\alpha }f\right) =f,\quad D_{b-}^{\alpha }\left( I_{b-}^{\alpha }f\right) =f,\quad \;\forall f\in L^{1}\left( a,b\right) \,. \label{1.5} \end{equation} On the other hand, for any $f,g\in L^{1}(a,b)$ we have \begin{equation} \int_{a}^{b}I_{a+}^{\alpha }f(t)g(t)dt=(-1)^{\alpha }\int_{a}^{b}f(t)I_{b-}^{\alpha }g(t)dt\,, \label{1.6} \end{equation} and for $f\in I_{a+}^{\alpha }\left( L^{p}\right) $ and $g\in I_{b-}^{\alpha }\left( L^{p}\right) $ we have \begin{equation} \int_{a}^{b}D_{a+}^{\alpha }f(t)g(t)dt=(-1)^{-\alpha }\int_{a}^{b}f(t)D_{b-}^{\alpha }g(t)dt. \label{1.7} \end{equation} Suppose that $f\in C^{\lambda }(a,b)$ and $g\in C^{\mu }(a,b)$ with $\lambda +\mu >1$. Then, from the classical paper by Young \cite{Yo}, the Riemann-Stieltjes integral $\int_{a}^{b}fdg$ exists. The following proposition can be regarded as a fractional integration by parts formula, and provides an explicit expression for the integral $\int_{a}^{b}fdg$ in terms of fractional derivatives (see \cite{Za98}). \begin{proposition} \label{p.2.1} Suppose that $f\in C^{\lambda }(a,b)$ and $g\in C^{\mu }(a,b)$ with $\lambda +\mu >1$. Let ${\lambda }>\alpha $ and $\mu >1-\alpha $. Then the Riemann-Stieltjes integral $\int_{a}^{b}fdg$ exists and it can be expressed as \begin{equation} \int_{a}^{b}fdg=(-1)^{\alpha }\int_{a}^{b}D_{a+}^{\alpha }f\left( t\right) D_{b-}^{1-\alpha }g_{b-}\left( t\right) dt, \label{1.8} \end{equation} where $g_{b-}\left( t\right) =g\left( t\right) -g\left( b\right) $.
\end{proposition} \bigskip We will make use of the following two-variable fractional integration by parts formula, whose proof is given in the Appendix. \begin{lemma} \label{l.2.2} Let $\varphi (\xi ,\eta )$ and $\psi (\xi ,\eta )$ be two functions of class $C^{2}$ defined on $a\leq \xi \leq \eta \leq b$. Suppose $\psi (\xi ,\eta )$ vanishes on the diagonal. The following fractional integration by parts formula holds for any $0<\alpha <1$: \begin{equation} \int_{a}^{b}d\xi \int_{\xi }^{b}\varphi (\xi ,\eta )\frac{\partial ^{2}\psi }{\partial \xi \partial \eta }(\xi ,\eta )d\eta =-\int_{a}^{b}d\eta \int_{a}^{\eta }D_{a+}^{\alpha ,\xi }D_{b-}^{\alpha ,\eta }\varphi _{a+,b-}(\xi ,\eta )\Gamma ^{\alpha }\psi (\xi ,\eta )d\xi , \label{1.9} \end{equation} where $D_{a+}^{\alpha ,\xi }$ denotes the fractional derivative in the variable $\xi $, $D_{b-}^{\alpha ,\eta }$ denotes the fractional derivative in the variable $\eta $, and the operator $\Gamma ^{\alpha }$ is defined by \begin{equation} \Gamma ^{\alpha }\psi (\xi ,\eta )=D_{\eta -}^{1-\alpha ,\xi }D_{\xi +}^{1-\alpha ,\eta }\psi (\xi ,\eta ). \label{1.10} \end{equation} \end{lemma} \setcounter{equation}{0} \section{Integration of Rough Functions} Fix $\frac{1}{3}<\beta <\frac{1}{2}$. Suppose that $x:[0,T]\rightarrow \mathbb{R}^{m}$ and $y:[0,T]\rightarrow \mathbb{R}^{d}$ are $\beta $-H\"{o}lder continuous functions. Following \cite{lyons} we assume that $x\otimes y$ is well-defined and it is a continuous function defined on $\Delta :=\{(s,t):0\leq s\leq t\leq T\}$ with values in $\mathbb{R}^{m}\otimes \mathbb{R}^{d}$ verifying the following properties: \begin{description} \item[i)] For all $s\leq u\leq t$ we have (\textit{multiplicative property}) \begin{equation} \left( x\otimes y\right) _{s,u}+\left( x\otimes y\right) _{u,t}+(x_{u}-x_{s})\otimes (y_{t}-y_{u})=\left( x\otimes y\right) _{s,t}.
\label{2.1} \end{equation} \item[ii)] For all $(s,t)\in \Delta $ \begin{equation} \left| \left( x\otimes y\right) _{s,t}\right| \leq k|t-s|^{2\beta }. \label{2.2} \end{equation} \end{description} That is, $(x,y,x\otimes y)$ constitutes a multiplicative functional in the sense of the rough paths analysis theory. We will say that $(x,y,x\otimes y)$ is a $\beta $-\textit{H\"{o}lder continuous multiplicative functional on} $\mathbb{R}^{m}\otimes \mathbb{R}^{d}$. If $x$ and $y$ are smooth functions, then \begin{equation} \left( x\otimes y\right) _{s,t}^{i,j}=\int_{s<\xi <\eta <t}dx_{\xi }^{i}dy_{\eta }^{j} \label{2.3} \end{equation} clearly defines a $\beta $-H\"{o}lder continuous multiplicative functional. Let $f:\mathbb{R}^{m}\rightarrow \mathbb{R}^{d}$ be a continuously differentiable function such that $f^{\prime }$ is $\lambda $-H\"{o}lder continuous, where $\lambda >\frac{1}{\beta }-2$. Our aim is to define the integral \begin{equation} \int_{a}^{b}f(x_{r})dy_{r}=\sum_{i=1}^{d}\int_{a}^{b}f_{i}(x_{r})dy_{r}^{i} \label{2.4} \end{equation} using fractional calculus. Fix a number $\alpha $ such that $1-\beta <\alpha <2\beta $ and $\alpha <\frac{\lambda \beta +1}{2}$. This is possible because $3\beta >1$ and $\frac{\lambda \beta +1}{2}>1-\beta $. Notice first that the fractional integration by parts formula (\ref{1.8}) cannot be used to define the integral (\ref{2.4}) because the fractional derivative $D_{a+}^{\alpha }f\left( x\right) $ is not well-defined under our hypotheses. For this reason we introduce the following \textit{compensated fractional derivative}: \begin{eqnarray} \lefteqn{\widehat{D}_{a+}^{\alpha }f\left( x\right) (r)=\frac{1}{\Gamma \left( 1-\alpha \right) }\Bigg( \frac{f\left( x_{r}\right) }{(r-a)^{\alpha }}} \notag \\ &&+\alpha \int_{a}^{r}\frac{f\left( x_{r}\right) -f\left( x_{\theta }\right) -f^{\prime }(x_{\theta })(x_{r}-x_{\theta })}{\left( r-\theta \right) ^{\alpha +1}}d\theta \Bigg) .
\label{2.5} \end{eqnarray} This derivative is well-defined under our hypotheses because \begin{equation*} \frac{\left| f\left( x_{r}\right) -f\left( x_{\theta }\right) -f^{\prime }(x_{\theta })(x_{r}-x_{\theta })\right| }{\left( r-\theta \right) ^{\alpha +1}}\leq K|r-\theta |^{(1+\lambda )\beta -\alpha -1}, \end{equation*} where $K=\left\| f^{\prime }\right\| _{\lambda }\left\| x\right\| _{\beta }^{1+\lambda }$ and $(1+\lambda )\beta -\alpha >0$ since $\alpha <\frac{\lambda \beta +1}{2}<(1+\lambda )\beta $. For $\theta <\xi <\eta $ introduce the kernel \begin{eqnarray} G(\theta ,\xi ,\eta ):= &&\frac{1}{\alpha \Gamma \left( \alpha \right) \Gamma (2\alpha -1)}\left( \xi -\theta \right) ^{\alpha -1}\left( \eta -\xi \right) ^{\alpha -1} \notag \\ &&\times \int_{0}^{1}q^{2\alpha -2}(1-q)^{-\alpha }\left( 1+(1-q)\frac{\xi -\theta }{\eta -\xi }\right) ^{-1}dq. \label{2.6} \end{eqnarray} Define, for $\varepsilon <\alpha +\beta -1$ and $\theta <\xi <\eta <b$, \begin{equation} K_{\theta ,b}(\xi ,\eta )=-D_{\theta +}^{1,\alpha -{\varepsilon }}D_{b-}^{2,\alpha -{\varepsilon }}\left[ G_{b-}(\theta ,\xi ,\eta )\right] \,. \label{2.7} \end{equation} In Lemma \ref{l.7.2} we will show that this kernel satisfies \begin{equation*} \sup_{0\le s<t\le T}\int_{s<\xi <\eta <t}\left| K_{s,t}(\xi ,\eta )\right| d\xi d\eta <\infty . \end{equation*} Finally, we denote \begin{equation} {\Lambda }_{a}^{b}(x\otimes y):=\int_{a}^{b}\int_{a}^{\eta }K_{a,b}(\xi ,\eta )\Gamma ^{\alpha -{\varepsilon }}\left( x\otimes y\right) _{\xi ,\eta }d\xi d\eta . \label{2.8} \end{equation} We are now ready to define the integral $\int_{a}^{b}f(x_{r})dy_{r}$. \begin{definition} \label{def1} Let $(x,y,x\otimes y)$ be a $\beta $-H\"{o}lder continuous multiplicative functional on $\mathbb{R}^{m}\otimes \mathbb{R}^{d}$.
Let $f:\mathbb{R}^{m}\rightarrow \mathbb{R}^{d}$ be a continuously differentiable function such that $f^{\prime }$ is $\lambda $-H\"{o}lder continuous, where $\lambda >\frac{1}{\beta }-2$. Fix $\alpha >0$ and $\varepsilon >0$ such that $1-\beta <\alpha <2\beta $, $\alpha <\frac{\lambda \beta +1}{2}$ and $\varepsilon <\alpha +\beta -1$. Then, for any $0\leq a<b\leq T$ we define \begin{eqnarray} \int_{a}^{b}f(x_{r})dy_{r} &=&(-1)^{\alpha }\ \sum_{i=1}^{d}\int_{a}^{b}\widehat{D}_{a+}^{\alpha }f_{i}\left( x\right) (r)D_{b-}^{1-\alpha }y_{b-}^{i}(r)dr \notag \\ &&+\sum_{i=1}^{m}\sum_{j=1}^{d}\int_{a}^{b}D_{a+}^{2\alpha -1}\partial _{i}f_{j}\left( x\right) (r){\Lambda }_{r}^{b}(x^{i}\otimes y^{j})dr\,. \label{2.9} \end{eqnarray} \end{definition} Notice that if $y$ is $\beta $-H\"{o}lder continuous, the fractional derivative $D_{b-}^{1-\alpha }y_{b-}(r)$ is well-defined because \begin{equation*} \frac{\left\vert y_{\sigma }-y_{r}\right\vert }{\left( \sigma -r\right) ^{2-\alpha }}\leq \left\Vert y\right\Vert _{\beta }|\sigma -r|^{\beta +\alpha -2} \end{equation*} and $\beta +\alpha -2>-1$. The following theorem asserts that this definition is consistent with the classical notion of the integral and will allow us to deduce estimates in the H\"{o}lder norm. \begin{theorem} \label{t.3.1} Suppose $y:[0,T]\rightarrow \mathbb{R}^{d}$ is a continuously differentiable function. Let $x:[0,T]\rightarrow \mathbb{R}^{m}$ be a $\beta $-H\"{o}lder continuous function and let $x\otimes y$ be defined by $\left( x\otimes y\right) _{s,t}^{i,j}=\int_{s}^{t}\left( x_{\xi }^{i}-x_{s}^{i}\right) \left( y^{j}\right) _{\xi }^{\prime }d\xi $. Assume that $f$ satisfies the assumptions of Definition \ref{def1}. Then, the integral $\int_{a}^{b}f(x_{r})dy_{r}$ introduced in (\ref{2.9}) coincides with $\sum_{i=1}^{d}\int_{a}^{b}f_{i}(x_{r})(y^{i})_{r}^{\prime }dr$. \end{theorem} \begin{proof} To simplify the proof we take $m=d=1$.
From (\ref{1.8}) and (\ref{2.5}) we get \begin{eqnarray*} \int_{a}^{b}f(x_{r})y_{r}^{\prime }dr &=&(-1)^{\alpha }\int_{a}^{b}D_{a+}^{\alpha }f\left( x\right) (r)D_{b-}^{1-\alpha }y_{b-}(r)dr \\ &=&(-1)^{\alpha }\int_{a}^{b}\widehat{D}_{a+}^{\alpha }f\left( x\right) (r)D_{b-}^{1-\alpha }y_{b-}(r)dr+A_{2}, \end{eqnarray*} where \begin{eqnarray} A_{2} &=&\frac{\alpha (-1)^{\alpha }}{\Gamma \left( 1-\alpha \right) }\int_{a}^{b}\int_{a}^{r}\frac{f^{\prime }(x_{\theta })(x_{r}-x_{\theta })}{\left( r-\theta \right) ^{\alpha +1}}D_{b-}^{1-\alpha }y_{b-}(r)drd\theta \notag \\ &=&\frac{\alpha (-1)^{\alpha }}{\Gamma \left( 1-\alpha \right) }\int_{a}^{b}f^{\prime }(x_{\theta })\left( \int_{\theta }^{b}\frac{x_{r}-x_{\theta }}{\left( r-\theta \right) ^{\alpha +1}}D_{b-}^{1-\alpha }y_{b-}(r)dr\right) d\theta . \label{2.10} \end{eqnarray} So, it suffices to show that \begin{equation} A_{2}=\int_{a}^{b}D_{a+}^{2\alpha -1}f^{\prime }\left( x\right) (r){\Lambda }_{r}^{b}(x\otimes y)dr. \label{2.11} \end{equation} Formula (\ref{2.11}) should first be proved for $x$ of class $C^{1}$ and then extended to a general $\beta $-H\"{o}lder continuous function.
Applying\ (\ref{1.4}), (\ref% {1.6}), and \ (\ref{1.5}) we obtain% \begin{eqnarray*} A_{2} &=&\frac{\alpha (-1)^{\alpha }}{{\Gamma }(1-\alpha )}\int_{a}^{b}\ D_{b-}^{1-\alpha }y_{b-}(r)\int_{a}^{r}\frac{f^{\prime }(x_{\theta })(x_{r}-x_{\theta })}{(r-\theta )^{\alpha +1}}d\theta dr \\ &=&\frac{\alpha (-1)^{3\alpha -1}}{{\Gamma }(1-\alpha )}\int_{a}^{b}\ D_{b-}^{1-\alpha }y_{b-}(r)\int_{a}^{r}D_{a+}^{2\alpha -1}f^{\prime }(x)(\theta )I_{r-}^{2\alpha -1,\theta }\left( \frac{x_{r}-x_{\theta }}{% (r-\theta )^{\alpha +1}}\right) d\theta dr \\ &=&\frac{\alpha (-1)^{3\alpha -1}}{{\Gamma }(1-\alpha )}% \int_{a}^{b}D_{a+}^{2\alpha -1}f^{\prime }(x)(\theta )\int_{\theta }^{b}I_{r-}^{2\alpha -1,\theta }\left( \frac{x_{r}-x_{\theta }}{(r-\theta )^{\alpha +1}}\right) \ D_{b-}^{1-\alpha }y_{b-}(r)drd\theta \\ &=&\frac{\alpha (-1)^{2\alpha -1}}{{\Gamma }(1-\alpha )}% \int_{a}^{b}D_{a+}^{2\alpha -1}f^{\prime }(x)(\theta )\int_{\theta }^{b}I_{a+}^{\alpha ,r}I_{r-}^{2\alpha -1,\theta }\left( \frac{% x_{r}-x_{\theta }}{(r-\theta )^{\alpha +1}}\right) dy_{r}d\theta \,, \end{eqnarray*}% where $I^{\alpha ,\theta }$ denotes the fractional integral applied to a function of $\theta $ and a similar notation is used for fractional derivatives. Now \begin{eqnarray*} &&I_{a+}^{\alpha ,r}I_{r-}^{2\alpha -1,\theta }\left( \frac{x_{r}-x_{\theta }% }{(r-\theta )^{\alpha +1}}\right) \\ &=&I_{a+}^{\alpha ,r}\left[ \frac{(-1)^{1-2\alpha }}{{\Gamma }(2\alpha -1)}% \int_{\theta }^{r}(\theta ^{\prime }-\theta )^{2\alpha -2}\frac{% x_{r}-x_{\theta ^{\prime }}}{(r-\theta ^{\prime })^{\alpha +1}}d\theta ^{\prime }\right] \\ &=&\frac{(-1)^{1-2\alpha }}{{\Gamma }(2\alpha -1){\Gamma }(\alpha )}% \int_{\theta <\theta ^{\prime }<\xi <r^{\prime }<r}(r-r^{\prime })^{\alpha -1}(\theta ^{\prime }-\theta )^{2\alpha -2}(r^{\prime }-\theta ^{\prime })^{-\alpha -1}dx_{\xi }d\theta ^{\prime }dr^{\prime }\,. 
\end{eqnarray*}% Thus \begin{eqnarray} \lefteqn{A_{2} =\frac{1}{{\Gamma }(2\alpha -1){\Gamma }(\alpha )}% \int_{a}^{b}D_{a+}^{2\alpha -1}f^{\prime }(x)(\theta ) } \label{2.12} \\ &&\times \int_{\theta }^{b}\left( \int_{\theta <\theta ^{\prime }<\xi <r^{\prime }<\eta }(\eta -r^{\prime })^{\alpha -1}(\theta ^{\prime }-\theta )^{2\alpha -2}(r^{\prime }-\theta ^{\prime })^{-\alpha -1}d\theta ^{\prime }dr^{\prime }\right) dx_{\xi }dy_{\eta }d\theta \,. \notag \end{eqnarray}% Making the change of variable $\frac{r^{\prime }-\xi }{\eta -\xi }=w$ and using formula 3.196 in Gradshteyn and Ryzhik \cite{GR} we obtain \begin{eqnarray*} &&\int_{\xi }^{\eta }\left( \eta -r^{\prime }\right) ^{\alpha -1}\left( r^{\prime }-\theta ^{\prime }\right) ^{-\alpha -1}dr^{\prime } \\ &=&\left( \eta -\xi \right) ^{-1}\int_{0}^{1}(1-w)^{\alpha -1}(w+\frac{\xi -\theta ^{\prime }}{\eta -\xi })^{-\alpha -1}dw \\ &=&\frac{1}{\alpha }\left( \eta -\xi \right) ^{-1}\left( \frac{\xi -\theta ^{\prime }}{\eta -\xi }\right) ^{-\alpha -1}F(1,\alpha +1,\alpha +1,-\frac{% \eta -\xi }{\xi -\theta ^{\prime }}) \\ &=&\frac{1}{\alpha }\left( \eta -\xi \right) ^{-1}\left( \frac{\xi -\theta ^{\prime }}{\eta -\xi }\right) ^{-\alpha -1}(1+\frac{\eta -\xi }{\xi -\theta ^{\prime }})^{-1} \\ &=&\frac{1}{\alpha }\left( \eta -\xi \right) ^{\alpha }\left( \xi -\theta ^{\prime }\right) ^{-\alpha }\left( \eta -\theta ^{\prime }\right) ^{-1}, \end{eqnarray*}% and substituting this expression into (\ref{2.12}) yields% \begin{eqnarray*} A_{2} &=&\frac{1}{{\Gamma }(2\alpha -1){\Gamma }(\alpha )\alpha }% \int_{a}^{b}D_{a+}^{2\alpha -1}f^{\prime }(x)(\theta ) \\ &&\times \int_{\theta <\xi <\eta <b}\left( \eta -\xi \right) ^{\alpha }\left( \int_{\theta }^{\xi }\left( \xi -\theta ^{\prime }\right) ^{-\alpha }\left( \eta -\theta ^{\prime }\right) ^{-1}(\theta ^{\prime }-\theta )^{2\alpha -2}d\theta ^{\prime }\right) dx_{\xi }dy_{\eta }d\theta \,. 
\end{eqnarray*}% Using (\ref{2.6}) we get \begin{eqnarray*} &&\left( \eta -\xi \right) ^{\alpha }\int_{\theta }^{\xi }\left( \xi -\theta ^{\prime }\right) ^{-\alpha }\left( \eta -\theta ^{\prime }\right) ^{-1}(\theta ^{\prime }-\theta )^{2\alpha -2}d\theta ^{\prime } \\ &=&(\eta -\xi )^{\alpha -1}(\xi -\theta )^{\alpha -1}\int_{0}^{1}(1-q)^{-\alpha }q^{2\alpha -2}\left( 1+(1-q)\frac{\xi -\theta }{\eta -\xi }\right) ^{-1}dq \\ &=&G(\theta ,\xi ,\eta ){\Gamma }(2\alpha -1){\Gamma }(\alpha )\alpha. \end{eqnarray*}% Hence,% \begin{equation*} A_{2}=\int_{a}^{b}D_{a+}^{2\alpha -1}f^{\prime }(x)(\theta )\int_{\theta <\xi <\eta <b}G(\theta ,\xi ,\eta )dx_{\xi }dy_{\eta }d\theta. \end{equation*} Applying the two-dimensional fractional integration by parts formula (\ref% {1.9}) to $\varphi (\xi ,\eta )=G(\theta ,\xi ,\eta )$ and $\psi (\xi ,\eta )=(x\otimes y)_{\xi ,\eta }$ and \ using (\ref{2.7}) we obtain% \begin{eqnarray*} \int_{\theta< \xi < \eta < b}G(\theta ,\xi ,\eta )dx_{\xi }dy_{\eta } &=&-\int_{\theta }^{b}\int_{\theta }^{\eta }D_{\theta +}^{\alpha -{\varepsilon ,\xi }}D_{b-}^{\alpha -{\varepsilon ,\eta }} G_{b-}(\theta ,\xi ,\eta ) \\ &&\times \Gamma ^{\alpha -{\varepsilon }}(x\otimes y)_{\xi ,\eta }d\xi d\eta \\ &=&\int_{\theta }^{b}\int_{\theta }^{\eta }K_{\theta ,b}(\xi ,\eta )\Gamma ^{\alpha -\varepsilon }(x\otimes y)_{\xi ,\eta }d\xi d\eta \,. \end{eqnarray*}% Thus \begin{equation*} A_{2}=\int_{a}^{b}D_{a+}^{2\alpha -1}f^{\prime }\left( x\right) (r)\int_{r}^{b}\int_{r}^{\eta }K_{r,b}(\xi ,\eta )\Gamma ^{\alpha -\varepsilon }(x\otimes y)_{\xi ,\eta }d\xi d\eta dr. \end{equation*}% This proves the theorem. 
\end{proof} For any $(s,t)\in \Delta $, and given a $\beta $-H\"{o}lder continuous multiplicative functional $(x,y,x\otimes y)$, we define \begin{eqnarray} \left\| x\right\| _{s,t,\beta } &=&\sup_{s\leq \theta <r\leq t}\frac{|x_{r}-x_{\theta }|}{|r-\theta |^{\beta }}, \label{2.13} \\ \left\| x\otimes y\right\| _{s,t,\beta } &=&\sup_{s\leq \theta <r\leq t}\frac{|\left( x\otimes y\right) _{\theta ,r}|}{|r-\theta |^{2\beta }}. \label{2.14} \end{eqnarray} We also set $\left\| x\right\| _{\beta }=\left\| x\right\| _{0,T,\beta }$ and $\left\| x\otimes y\right\| _{\beta }=\left\| x\otimes y\right\| _{0,T,\beta }$. Also, $\left\| \cdot \right\| _{s,t,\infty }$ will denote the supremum norm on the interval $[s,t]$. In the sequel, $k$ will denote a constant that may depend on the parameters $\beta $, $\alpha $, $\lambda $, $\varepsilon $ and $T$. The following estimate is useful. \begin{proposition} \label{p.3.2} Under the hypotheses of Definition \ref{def1} we have, if $b-a\leq 1$, \begin{eqnarray} \left\| \int f(x_{r})dy_{r}\right\| _{a,b,\beta } &\leq &k\left\| f\right\| _{\infty }\left\| y\right\| _{a,b,\beta }+k\left( \left\| x\otimes y\right\| _{a,b,\beta }+\left\| x\right\| _{a,b,\beta }\left\| y\right\| _{a,b,\beta }\right) \notag \\ &&\!\!\!\!\times \left( \left\| f^{\prime }\right\| _{\infty }+\left\| f^{\prime }\right\| _{\lambda }\left\| x\right\| _{a,b,\beta }^{\lambda }(b-a)^{\lambda \beta }\right) (b-a)^{\beta -2\varepsilon }.
\label{2.15} \end{eqnarray}% Moreover, if the second derivative $f^{\prime \prime }$ is $\lambda $-H\"{o}% lder continuous and bounded, and $(\tilde{x},y,\tilde{x}\otimes y)$ is also a $\beta $-\textit{H\"{o}lder continuous multiplicative functional }on% \textit{\ }$\mathbb{R}^{m}\otimes \mathbb{R}^{d}$, then% \begin{eqnarray} &&\left\| \int f(x_{r})dy_{r}-\int f(\tilde{x}_{r})dy_{r}\right\| _{a,b,\beta } \notag \\ &\leq &kH_{1}\Vert x-\tilde{x}\Vert _{a,b,\infty }+kH_{2}\Vert x-\tilde{x}% \Vert _{a,b,\beta }+kH_{3}\Vert (x-\tilde{x})\otimes y\Vert _{a,b,{\beta }% },\, \label{2.16} \end{eqnarray}% where% \begin{eqnarray*} H_{1} &=&\left\| y\right\| _{a,b,\beta }\left( \Vert f^{\prime }\Vert _{\infty }+\left\| f^{\prime \prime }\right\| _{\lambda }\Vert \widetilde{x}% \Vert _{a,b,{\beta }}\left( \Vert x\Vert _{a,b,{\beta }}^{\lambda }+\Vert \tilde{x}\Vert _{a,b,{\beta }}^{\lambda }\right) \right) (b-a)^{\beta (1+\lambda )} \\ &&+\left( \Vert f^{\prime \prime }\Vert _{\infty }+\Vert f^{\prime \prime }\Vert _{\lambda }\left( \Vert x\Vert _{a,b,\beta }^{\lambda }+\Vert \widetilde{x}\Vert _{a,b,\beta }^{\lambda }\right) (b-a)^{\beta \lambda }\right) \\ &&\times \left( \Vert x\otimes y\Vert _{a,b,{\beta }}+\Vert x\Vert _{a,b,{% \beta }}\left\| y\right\| _{a,b,\beta }\ \right) (b-a)^{\beta -2\varepsilon }, \\ H_{2} &=&\ \left\| f^{\prime \prime }\right\| _{\infty }\left\| y\right\| _{a,b,\beta }\left( \Vert x\Vert _{a,b,{\beta }}+\Vert \tilde{x}\Vert _{a,b,{% \beta }}\right) (b-a)^{\beta (1+\lambda )} \\ &&+\left\| f^{\prime \prime }\right\| _{\infty }\left( \Vert x\otimes y\Vert _{a,b,{\beta }}+\Vert x\Vert _{a,b,{\beta }}\left\| y\right\| _{a,b,\beta }\ \right) (b-a)^{2\beta -2\varepsilon } \\ &&+\ \left( \Vert f\Vert _{\infty }+\Vert f\Vert _{\lambda }\Vert \tilde{x}% \Vert _{a,b,{\beta }}^{\lambda }(b-a)^{\lambda \beta }\right) \left\| y\right\| _{a,b,\beta }(b-a)^{\beta -2\varepsilon } \\ H_{3} &=&\left( \Vert f\Vert _{\infty }+\Vert f\Vert _{\lambda }\Vert \tilde{% 
x}\Vert _{a,b,{\beta }}^{\lambda }(b-a)^{\lambda \beta }\right) (b-a)^{\beta -2\varepsilon }.
\end{eqnarray*}
\end{proposition}

\bigskip \textbf{Remark: }In (\ref{2.15}) we can replace $\left\| f\right\| _{\infty }$ and $\left\| f^{\prime }\right\| _{\infty }$ by $\left\| f(x)\right\| _{a,b,\infty }$ and $\left\| f^{\prime }(x)\right\| _{a,b,\infty }$, respectively.

\medskip

\begin{proof}
First we have, for any $r\in \lbrack a,b]$,
\begin{equation}
\left| D_{a+}^{\alpha }f(x)(r)\right| \leq k\left( |f(x_{r})|\left( r-a\right) ^{-\alpha }+\left\| f\right\| _{\lambda }\left\| x\right\| _{a,r,\beta }^{\lambda }\left( r-a\right) ^{\lambda \beta -\alpha }\right) \,,  \label{2.17}
\end{equation}
\begin{equation}
\left| \hat{D}_{a+}^{\alpha }f(x)(r)\right| \leq k\left( |f(x_{r})|\left( r-a\right) ^{-\alpha }+\left\| f^{\prime }\right\| _{\lambda }\left\| x\right\| _{a,r,\beta }^{1+\lambda }\left( r-a\right) ^{(\lambda +1)\beta -\alpha }\right) ,  \label{2.18}
\end{equation}
and
\begin{equation}
\left| D_{b-}^{1-\alpha }y_{b-}(r)\right| \leq k\left\| y\right\| _{r,b,\beta }(b-r)^{\alpha +\beta -1}\,.  \label{2.19}
\end{equation}
The expression (\ref{1.11}) of $\Gamma $ yields
\begin{equation}
\left| \Gamma ^{\alpha -\varepsilon }\left( x\otimes y\right) _{a,b}\right| \leq k\left( \left\| x\otimes y\right\| _{a,b,\beta }+\left\| x\right\| _{a,b,\beta }\left\| y\right\| _{a,b,\beta }\right) (b-a)^{2\beta +2\alpha -2-2\varepsilon }.  \label{2.20}
\end{equation}
Consequently, from (\ref{2.8}) and Lemma \ref{l.7.2} we obtain the estimate
\begin{equation}
|{\Lambda }_{a}^{b}(x\otimes y)|\leq k\left( \left\| x\otimes y\right\| _{a,b,\beta }+\left\| x\right\| _{a,b,\beta }\left\| y\right\| _{a,b,\beta }\right) (b-a)^{2\beta +2\alpha -2-2\varepsilon }.
\label{2.21} \end{equation}% Thus \begin{eqnarray*} \lefteqn{\left| \int_{a}^{b}f(x_{r})dy_{r}\right| \leq k\left\| y\right\| _{a,b,\beta }\ \left( \int_{a}^{b}|f(x_{r})|\left( r-a\right) ^{-\alpha }(b-r)^{\alpha +\beta -1}dr\right.} \\ &&\qquad \left. +\left\| f^{\prime }\right\| _{\lambda }\left\| x\right\| _{a,b,\beta }^{1+\lambda }\int_{a}^{b}\left( r-a\right) ^{(\lambda +1)\beta -\alpha }(b-r)^{\alpha +\beta -1}dr\right) \\ &&\qquad +k\int_{a}^{b}\left( \frac{\left\| f^{\prime }\right\| _{\infty }}{% (r-a)^{2\alpha -1}}+\left\| f^{\prime }\right\| _{\lambda }\left\| x\right\| _{a,b,\beta }^{\lambda }\left( r-a\right) ^{\lambda \beta -2\alpha +1}\right) \\ &&\qquad \quad \times \ \left( \left\| x\otimes y\right\| _{a,b,\beta }+\left\| x\right\| _{a,b,\beta }\left\| y\right\| _{a,b,\beta }\right) (b-r)^{2\beta +2\alpha -2-2\varepsilon }dr. \end{eqnarray*}% Therefore, we obtain% \begin{eqnarray*} \lefteqn{\left| \int_{a}^{b}f(x_{r})dy_{r}\right| \leq k\left\| f\right\| _{\infty }\left\| y\right\| _{a,b,\beta }(b-a)^{\beta } }\\ &&+k\left( \left\| x\otimes y\right\| _{a,b,\beta }+\left\| x\right\| _{a,b,\beta }\left\| y\right\| _{a,b,\beta }\right) \left\| f^{\prime }\right\| _{\infty }(b-a)^{2\beta -2\varepsilon } \\ &&+k\left\| f^{\prime }\right\| _{\lambda }\left\| x\right\| _{a,b,\beta }^{\lambda }(\left\| x\right\| _{a,b,\beta }\left\| y\right\| _{a,b,\beta }+\left\| x\otimes y\right\| _{a,b,\beta })(b-a)^{\left( \lambda +2\right) \beta -2\varepsilon } \end{eqnarray*}% and this implies (\ref{2.15}) easily. 
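For the reader's convenience we record the elementary Beta-function identity that was used, without comment, to evaluate the $dr$-integrals in the last two displays (a standard computation, included only as an aid):
\begin{equation*}
\int_{a}^{b}(r-a)^{p}(b-r)^{q}dr=(b-a)^{p+q+1}\int_{0}^{1}u^{p}(1-u)^{q}du=B(p+1,q+1)\,(b-a)^{p+q+1},\qquad p,q>-1,
\end{equation*}
which follows from the change of variables $r=a+(b-a)u$. For instance, the choice $p=-\alpha $, $q=\alpha +\beta -1$ produces the factor $(b-a)^{\beta }$ in the first term, while $p=\lambda \beta -2\alpha +1$, $q=2\beta +2\alpha -2-2\varepsilon $ produces the factor $(b-a)^{(\lambda +2)\beta -2\varepsilon }$ in the last one; the standing assumptions on $\alpha $, $\beta $, $\varepsilon $ and $\lambda $ guarantee $p,q>-1$ in each case.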
Note that for any $a\leq \theta \leq r\leq b$ we can write
\begin{eqnarray*}
&&f(x_{r})-f(x_{\theta })-f^{\prime }(x_{\theta })(x_{r}-x_{\theta })-\left[ f(\tilde{x}_{r})-f(\tilde{x}_{\theta })-f^{\prime }(\tilde{x}_{\theta })(\tilde{x}_{r}-\tilde{x}_{\theta })\right] \\
&=&\int_{0}^{1}\left[ f^{\prime }(x_{\theta }+z(x_{r}-x_{\theta }))-f^{\prime }(x_{\theta })\right] (x_{r}-x_{\theta }-\widetilde{x}_{r}+\widetilde{x}_{\theta })dz \\
&&+\int_{0}^{1}\left[ f^{\prime }(x_{\theta }+z(x_{r}-x_{\theta }))-f^{\prime }(\widetilde{x}_{\theta }+z(\widetilde{x}_{r}-\widetilde{x}_{\theta }))-f^{\prime }(x_{\theta })+f^{\prime }(\widetilde{x}_{\theta })\right] (\widetilde{x}_{r}-\widetilde{x}_{\theta })dz \\
&=&a_{1}-a_{2}.
\end{eqnarray*}
We have
\begin{equation*}
\left| a_{1}\right| \leq \left\| f^{\prime \prime }\right\| _{\infty }\Vert x\Vert _{a,b,{\beta }}\Vert x-\tilde{x}\Vert _{a,b,{\beta }}(r-\theta )^{2{\beta }}.
\end{equation*}
For the term $a_{2}$ we make the decomposition
\begin{eqnarray*}
\lefteqn{a_{2} =(\widetilde{x}_{r}-\widetilde{x}_{\theta })\int_{0}^{1}\int_{0}^{1}\left[ \left( \widetilde{x}_{\theta }-x_{\theta }\right) +z\left( \widetilde{x}_{r}-\widetilde{x}_{\theta }-x_{r}+x_{\theta }\right) \right] }\\
&&\qquad \times f^{\prime \prime }(x_{\theta }+z(x_{r}-x_{\theta })+t(\widetilde{x}_{\theta }-x_{\theta })+tz(\widetilde{x}_{r}-\widetilde{x}_{\theta }-x_{r}+x_{\theta }))dtdz \\
&&\qquad -(\widetilde{x}_{r}-\widetilde{x}_{\theta })(\widetilde{x}_{\theta }-x_{\theta })\int_{0}^{1}f^{\prime \prime }(x_{\theta }+t(\widetilde{x}_{\theta }-x_{\theta }))dt\\
&&=(\widetilde{x}_{r}-\widetilde{x}_{\theta })\left( \widetilde{x}_{r}-\widetilde{x}_{\theta }-x_{r}+x_{\theta }\right) \\
&&\quad \times \int_{0}^{1}\int_{0}^{1}zf^{\prime \prime }(x_{\theta }+z(x_{r}-x_{\theta })+t(\widetilde{x}_{\theta }-x_{\theta })+tz(\widetilde{x}_{r}-\widetilde{x}_{\theta }-x_{r}+x_{\theta }))dtdz \\
&&+(\widetilde{x}_{r}-\widetilde{x}_{\theta })(\widetilde{x}_{\theta }-x_{\theta })
\int_{0}^{1}\int_{0}^{1}% \left[ f^{\prime \prime }(x_{\theta }+z(x_{r}-x_{\theta })+t(\widetilde{x}% _{\theta }-x_{\theta })+tz(\widetilde{x}_{r}-\widetilde{x}_{\theta }-x_{r}+x_{\theta })) \right. \\ &&\quad \left. - f^{\prime \prime }(x_{\theta }+t(\widetilde{x}% _{\theta }-x_{\theta }))\right] dt dz. \end{eqnarray*}% Thus,% \begin{eqnarray*} \left| a_{2}\right| &\leq &\left\| f^{\prime \prime }\right\| _{\infty }\Vert \widetilde{x}\Vert _{a,b,{\beta }}\Vert x-\tilde{x}\Vert _{a,b,{\beta }}(r-\theta )^{2{\beta }} \\ &&+\Vert f^{\prime \prime }\Vert _{\lambda }\Vert \tilde{x}\Vert _{a,b,{% \beta }}\Vert x-\tilde{x}\Vert _{a,b,{\infty }}\left( \Vert x\Vert _{a,b,{% \beta }}^{\lambda }+\Vert x-\tilde{x}\Vert _{a,b,{\beta }}^{\lambda }\right) (r-\theta )^{{\beta (1+\lambda )}}. \end{eqnarray*}% As a consequence,% \begin{eqnarray} &&\left| f(x_{r})-f(x_{\theta })-f^{\prime }(x_{\theta })(x_{r}-x_{\theta })- \left[ f(\tilde{x}_{r})-f(\tilde{x}_{\theta })-f^{\prime }(\tilde{x}% _{\theta })(\tilde{x}_{r}-\tilde{x}_{\theta })\right] \right| \notag \\ &\leq &kI_{1}(r-\theta )^{{\beta (1+\lambda )}}, \label{2.22} \end{eqnarray}% where \begin{eqnarray*} I_{1} &=&\ \Vert f^{\prime \prime }\Vert _{\infty }\left\{ \Vert x\Vert _{a,b,{\beta }}+\Vert \tilde{x}\Vert _{a,b,{\beta }}\right\} \Vert x-\tilde{x% }\Vert _{a,b,{\beta }} \\ &&+\Vert f^{\prime \prime }\Vert _{\lambda }\Vert \widetilde{x}\Vert _{a,b,{% \beta }}\left( \Vert x\Vert _{a,b,{\beta }}^{\lambda }+\Vert \tilde{x}\Vert _{a,b,{% \beta }}^{\lambda } \right) \Vert x-\tilde{x}\Vert _{a,b,\infty }\,. \end{eqnarray*}% On the other hand, we have \begin{eqnarray*} &&D_{s+}^{2\alpha -1}f^{\prime }(x)(r)-D_{s+}^{2\alpha -1}f^{\prime }(\tilde{% x})(r) \\ &=&\frac{1}{\Gamma (2-2\alpha )}\left\{ \frac{f^{\prime }(x_{r})-f^{\prime }(% \tilde{x}_{r})}{(r-s)^{2\alpha -1}}\right. \\ &&\left. 
+(2\alpha -1)\int_{s}^{r}\frac{\left[ f^{\prime }(x_{r})-f^{\prime }(\widetilde{x}_{r})-f^{\prime }(x_{\theta })+f^{\prime }(\widetilde{x}% _{\theta })\right] }{(r-\theta )^{2\alpha }}d\theta \right\} . \end{eqnarray*}% Using the decomposition% \begin{eqnarray*} &&f^{\prime }(x_{r})-f^{\prime }(\widetilde{x}_{r})-f^{\prime }(x_{\theta })+f^{\prime }(\widetilde{x}_{\theta }) \\ &=&\int_{0}^{1}f^{\prime \prime }(x_{r}+t(\widetilde{x}_{r}-x_{r}))(% \widetilde{x}_{r}-x_{r})dt-\int_{0}^{1}f^{\prime \prime }(x_{\theta }+t(% \widetilde{x}_{\theta }-x_{\theta }))(\widetilde{x}_{\theta }-x_{\theta })dt, \end{eqnarray*}% we obtain \begin{eqnarray} &&\left| D_{s+}^{2\alpha -1}f^{\prime }(x)(r)-D_{s+}^{2\alpha -1}f^{\prime }(% \tilde{x})(r)\right| \notag \\ &\leq &k(r-s)^{1-2\alpha }\Vert f^{\prime \prime }\Vert _{\infty }\Vert x-% \tilde{x}\Vert _{s,r,\infty }+k(r-s)^{\beta -2\alpha +1}\Vert f^{\prime \prime }\Vert _{\infty }\Vert x-\tilde{x}\Vert _{s,r,\beta } \notag \\ &&\quad +k(r-s)^{\beta \lambda -2\alpha +1}\Vert f^{\prime \prime }\Vert _{\lambda }\left( \Vert x\Vert _{s,r,\beta }^{\lambda }+\Vert \widetilde{x}\Vert _{s,r,\beta }^{\lambda }\right) \Vert x-\tilde{x}\Vert _{s,r,\infty } \notag \\ &=&kI_{2}(r-s)^{1-2\alpha }, \label{2.23} \end{eqnarray}% where \begin{eqnarray*} I_{2} &=&\left( \Vert f^{\prime \prime }\Vert _{\infty }+\Vert f^{\prime \prime }\Vert _{\lambda }\left( \Vert x\Vert _{a,b,\beta }^{\lambda }+\Vert \widetilde{x% }\Vert _{a,b,\beta }^{\lambda }\right) (b-a)^{\beta \lambda }\right) \Vert x-% \tilde{x}\Vert _{a,b,\infty } \\ &&+\Vert f^{\prime \prime }\Vert _{\infty }\Vert x-\tilde{x}\Vert _{a,b,\beta }(b-a)^{\beta }. 
\end{eqnarray*}% Now using (\ref{2.19}), (\ref{2.22}), (\ref{2.23}) we obtain \begin{eqnarray} \lefteqn{\left| \int_{a}^{b}\left[ f(x_{r})-f(\tilde{x}_{r})\right] dy_{r}\right| \leq k\left\| y\right\| _{a,b,\beta} } \notag\\ &&\times \left( \int_{a}^{b}|f(x_{r})-f(% \tilde{x}_{r})|\left( r-a\right) ^{-\alpha }(b-r)^{\alpha +\beta -1}dr\right. \notag \\ && \left. +I_{1}\int_{a}^{b}\left( r-a\right) ^{\beta (1+\lambda )-\alpha }(b-r)^{\alpha +\beta -1}dr\right) \notag \\ && +kI_{2}\int_{a}^{b}(b-r)^{1-2\alpha }\left| \Lambda _{r}^{b}(x\otimes y)\right| dr \notag \\ && +\int_{a}^{b}\left| D_{a+}^{2\alpha -1}f^{\prime }\left( \tilde{x}% \right) (r)\right| \left| \Lambda _{r}^{b}(\left[ x-\tilde{x}\right] \otimes y)\right| dr. \notag \end{eqnarray}% Finally, using (\ref{2.21}) we get \begin{eqnarray} \lefteqn{\left| \int_{a}^{b}\left[ f(x_{r})-f(\tilde{x}_{r})\right] dy_{r}\right| \leq k\left\| y\right\| _{a,b,\beta } } \notag \\ &&\left( \Vert f^{\prime }\Vert _{\infty }(b-a)^{\beta }\Vert x-\tilde{x}\Vert _{a,b,\infty }+I_{1}(b-a)^{{% \beta (2+\lambda )}}\right) \notag \\ &&+kI_{2}\left( \Vert x\otimes y\Vert _{a,b,{\beta }}+\Vert x\Vert _{a,b,{% \beta }}\Vert y\Vert _{a,b,{\beta }}\right) (b-a)^{2{\beta -2\varepsilon }} \notag \\ &&+\left[ \Vert f\Vert _{\infty }+\Vert f\Vert _{\lambda }\Vert \tilde{x}% \Vert _{a,b,{\beta }}^{\lambda }(b-a)^{{\lambda }{\beta }}\right] \label{2.24} \\ &&\times \left\{ \Vert (x-\tilde{x})\otimes y\Vert _{a,b,{\beta }}+\Vert x-% \tilde{x}\Vert _{a,b,{\beta }}\Vert y\Vert _{a,b,{\beta }}\right\} (b-a)^{2{% \beta -2\varepsilon }}\,. \notag \end{eqnarray}% This implies (\ref{2.16}). \end{proof} The following corollary is the direct consequence of the proposition. \begin{corollary} \label{c.3.3} Assume $b-a\leq 1$. 
Under the hypotheses of Definition \ref{def1}, if $(x,\tilde{y},x\otimes \tilde{y})$ is also a $\beta $-\textit{H\"{o}lder continuous multiplicative functional} on $\mathbb{R}^{m}\otimes \mathbb{R}^{d}$, we have
\begin{eqnarray}
&&\left\| \int f(x_{r})dy_{r}-\int f(x_{r})d\tilde{y}_{r}\right\| _{a,b,\beta }  \notag \\
&\leq &k\left\| f\right\| _{\infty }\left\| y-\tilde{y}\right\| _{a,b,\beta }+k\left( \left\| x\otimes (y-\tilde{y})\right\| _{a,b,\beta }+\left\| x\right\| _{a,b,\beta }\left\| y-\tilde{y}\right\| _{a,b,\beta }\right)  \notag \\
&&\times \left( \left\| f^{\prime }\right\| _{\infty }+\left\| f^{\prime }\right\| _{\lambda }\left\| x\right\| _{a,b,\beta }^{\lambda }(b-a)^{\lambda \beta }\right) (b-a)^{\beta -2\varepsilon }.  \label{2.25}
\end{eqnarray}
On the other hand, if the second derivative $f^{\prime \prime }$ is $\lambda $-H\"{o}lder continuous and bounded, $(\tilde{x},y,\tilde{x}\otimes y)$ is another $\beta $-\textit{H\"{o}lder continuous multiplicative functional} on $\mathbb{R}^{m}\otimes \mathbb{R}^{d}$, and $\tilde{f}$ is another function satisfying the hypotheses of Definition \ref{def1}, then
\begin{eqnarray}
&&\left\| \int f(x_{r})dy_{r}-\int \tilde{f}(\tilde{x}_{r})dy_{r}\right\| _{a,b,\beta }  \notag \\
&\leq &kH_{1}^{f}\Vert x-\tilde{x}\Vert _{a,b,\infty }+kH_{2}^{f}\Vert x-\tilde{x}\Vert _{a,b,\beta }+kH_{3}^{f}\Vert (x-\tilde{x})\otimes y\Vert _{a,b,{\beta }}  \notag \\
&&+k\left\| f-\widetilde{f}\right\| _{\infty }\left\| y\right\| _{a,b,\beta }+k\left( \left\| x\otimes y\right\| _{a,b,\beta }+\left\| x\right\| _{a,b,\beta }\left\| y\right\| _{a,b,\beta }\right)  \notag \\
&&\times \left( \left\| f^{\prime }-\tilde{f}^{\prime }\right\| _{\infty }+\left\| f^{\prime }-\tilde{f}^{\prime }\right\| _{\lambda }\left\| \tilde{x}\right\| _{a,b,\beta }^{\lambda }(b-a)^{\lambda \beta }\right) (b-a)^{\beta -2\varepsilon }.
\label{e.3.29}
\end{eqnarray}
\end{corollary}

The estimate (\ref{2.25}) implies that for a fixed $x$, the mapping $(y,x\otimes y)\rightarrow \int f(x_{r})dy_{r}$ is continuous with respect to the $\beta $-norm. As a consequence, if $y^{n}$ is a sequence of continuously differentiable functions (or Lipschitz functions) such that
\begin{eqnarray*}
\left\| y-y^{n}\right\| _{\beta } &\rightarrow &0, \\
\left\| x\otimes y-x\otimes y^{n}\right\| _{\beta } &\rightarrow &0
\end{eqnarray*}
as $n$ tends to infinity, then
\begin{equation}
\left\| \int f(x_{r})dy_{r}-\int f(x_{r})dy_{r}^{n}\right\| _{\beta }\rightarrow 0.  \label{2.26}
\end{equation}
Hence, the integral $\int f(x_{r})dy_{r}$ introduced in Definition \ref{def1} does not depend on the parameters $\alpha $ and $\varepsilon $, and it coincides with the classical integral $\int f(x_{r})y_{r}^{\prime }dr$ when $y$ is continuously differentiable.

Set $t_{i}^{n}=\frac{iT}{n}$ for $i=0,1,\ldots ,n$. If $y$ is $\beta $-H\"{o}lder continuous, the sequence of functions
\begin{equation*}
y_{t}^{n}=y_{0}\mathbf{1}_{\{0\}}(t)+\sum_{i=1}^{n}\mathbf{1}_{(t_{i-1}^{n},t_{i}^{n}]}(t)\left[ y_{t_{i-1}^{n}}+\frac{n}{T}\left( t-t_{i-1}^{n}\right) (y_{t_{i}^{n}}-y_{t_{i-1}^{n}})\right]
\end{equation*}
converges to $y$ in the $\beta ^{\prime }$-norm for any $\beta ^{\prime }<\beta $. Assume that the multiplicative functional $\int_{s}^{t}(x_{r}-x_{s})dy_{r}^{n}$ converges in the $\beta ^{\prime }$-norm as $n$ tends to infinity to $\left( x\otimes y\right) _{s,t}$. Then (\ref{2.26}) holds with $\beta =\beta ^{\prime }$. In particular, this means that
\begin{equation}
\int_{0}^{T}f(x_{r})dy_{r}=\lim_{n\rightarrow \infty }\sum_{i=1}^{n}\frac{n}{T}\left( \int_{t_{i-1}^{n}}^{t_{i}^{n}}f(x_{s})ds\right) (y_{t_{i}^{n}}-y_{t_{i-1}^{n}}).
\label{eqa}
\end{equation}

For any $p\geq 1$, the $p$-variation of a function $x:[0,T]\rightarrow \mathbb{R}$ is defined as
\begin{equation*}
\mathrm{Var}_{p}(x)=\sup_{\pi }\left( \sum_{i=1}^{n}\left\vert x(t_{i}^{n})-x(t_{i-1}^{n})\right\vert ^{p}\right) ^{1/p},
\end{equation*}
where $\pi =\{0=t_{0}<\cdots <t_{n}=T\}$ runs over all partitions of $[0,T]$. Notice that
\begin{equation*}
\mathrm{Var}_{1/\beta }(x)\leq T^{\beta }\left\Vert x\right\Vert _{\beta }.
\end{equation*}
Then, for any $\beta $-H\"{o}lder continuous multiplicative functional $(x,y,x\otimes y)$ on $\mathbb{R}^{m}\otimes \mathbb{R}^{d}$ and any function $f$ satisfying the hypotheses of Definition \ref{def1}, the integral $\int_{0}^{T}f(x_{r})dy_{r}$ coincides with the integral defined using the $\frac{1}{\beta }$-variation norm (see \cite{LQ}). This implies that $\int_{0}^{T}f(x_{s})dy_{s}$ is given by the limit of the Riemann sums of the form
\begin{equation*}
\int_{0}^{T}f(x_{s})dy_{s}=\lim_{\left\vert \pi \right\vert \rightarrow 0}\sum_{i=1}^{n}\left[ f(x_{t_{i-1}})(y_{t_{i}}-y_{t_{i-1}})+f^{\prime }(x_{t_{i-1}})(x\otimes y)_{t_{i-1},t_{i}}\right] ,
\end{equation*}
where $\pi =\{0=t_{0}<\cdots <t_{n}=T\}$ runs over all partitions of $[0,T]$.

\bigskip

In order to handle differential equations we need to introduce the tensor product of two multiplicative functionals:

\begin{definition}
\label{d.3.3} Suppose that $(x,y,x\otimes y)$ and $(y,z,y\otimes z)$ are $\beta $-H\"{o}lder continuous real valued multiplicative functionals.
Then, for all $\ a\leq b\leq c$, we define \begin{eqnarray*} \lefteqn{\left( x\otimes \left( y\otimes z\right) _{\cdot ,c}\right) _{a,b} =\int_{a}^{b}\Lambda _{a,r,b,c}(x,z)D_{b-}^{1-\alpha }y_{b-}(r)dr} \\ &&+\frac{1}{\Gamma (2-2\alpha )}\int_{a<r<\xi <\eta <b}K_{r,b}(\xi ,\eta ) \\ &&\times \frac{\Gamma ^{\alpha -\varepsilon }\left( x\otimes y\right) _{\xi ,\eta }\left( z_{c}-z_{r}\right) -\left( x_{r}-x_{a}\right) \Gamma ^{\alpha -\varepsilon }\left( y\otimes z\right) _{\xi ,\eta }}{(r-a)^{2\alpha -1}}% drd\xi d\eta , \end{eqnarray*}% where% \begin{equation*} \Lambda _{a,r,b,c}(x,z)=\frac{(-1)^{\alpha }}{\Gamma \left( 1-\alpha \right) }\left( \frac{(x_{r}-x_{a})(z_{c}-z_{r})}{(r-a)^{\alpha }}+\alpha \int_{a}^{r}\frac{(z_{r}-z_{\theta })(x_{\theta }-x_{r})}{\left( r-\theta \right) ^{\alpha +1}}d\theta \right) . \end{equation*} \end{definition} We have the following result. \begin{proposition} \label{p.3.4} If the function $y$ is continuously differentiable and for all $a\leq b$ \begin{eqnarray*} \left( y\otimes z\right) _{a,b} &=&\int_{a}^{b}(z_{b}-z_{r})\ y_{r}^{\prime }dr, \\ \left( x\otimes y\right) _{a,b} &=&\int_{a}^{b}\ (x_{r}-x_{a})y_{r}^{\prime }dr, \end{eqnarray*}% then% \begin{equation*} \left( x\otimes \left( y\otimes z\right) _{\cdot ,c}\right) _{a,b}=\int_{a}^{b}(x_{r}-x_{a})(z_{c}-z_{r})y_{r}^{\prime }dr. \end{equation*} \end{proposition} \begin{proof} We are going to use formula (\ref{2.9}) with $m=2$, $d=1$, $f(x,z)=xz$ and the functions $x_{t}-x_{a}$ and $z_{c}-z_{t}$. 
In this way we obtain \begin{eqnarray*} &&\int_{a}^{b}(x_{\theta }-x_{a})(z_{c}-z_{\theta })dy_{\theta } \\ &=&\frac{(-1)^{\alpha }}{\Gamma \left( 1-\alpha \right) }\int_{a}^{b}\left( \frac{(x_{r}-x_{a})(z_{c}-z_{r})}{(r-a)^{\alpha }}+\alpha \int_{a}^{r}\frac{% (z_{r}-z_{\theta })(x_{\theta }-x_{r})}{\left( r-\theta \right) ^{\alpha +1}}% d\theta \right) \\ &&\times D_{b-}^{1-\alpha }y_{b-}(r)dr \\ &&+\frac{1}{\Gamma (2-2\alpha )}\int_{a}^{b}\left[ \frac{z_{c}-z_{r}}{% (r-a)^{2\alpha -1}}+(2\alpha -1)\int_{a}^{r}\frac{z_{r}-z_{\theta }}{% (r-\theta )^{2\alpha }}d\theta \right] \\ &&\qquad \times \int_{r}^{b}\int_{r}^{\eta }K_{r,b}(\xi ,\eta )\Gamma ^{\alpha -\varepsilon }\left( x\otimes y\right) _{\xi ,\eta }d\xi d\eta dr \\ &&-\frac{1}{\Gamma (2-2\alpha )}\int_{a}^{b}\left[ \frac{x_{r}-x_{a}}{% (r-a)^{2\alpha -1}}+(2\alpha -1)\int_{a}^{r}\frac{x_{r}-x_{\theta }}{% (r-\theta )^{2\alpha }}d\theta \right] \\ &&\qquad \times \int_{r}^{b}\int_{r}^{\eta }K_{r,b}(\xi ,\eta )\Gamma ^{\alpha -\varepsilon }\left( y\otimes z\right) _{\xi ,\eta }d\xi d\eta dr, \end{eqnarray*}% and this completes the proof. \end{proof} It is easy to obtain the following estimate \begin{proposition} \label{p.3.8} Suppose that $(x,y,x\otimes y)$ and $(y,z,y\otimes z)$ are $% \beta $-H\"{o}lder continuous real valued multiplicative functionals. Then, for any $a\leq b\leq c$ we have \begin{eqnarray*} \lefteqn{\left| \left( x\otimes \left( y\otimes z\right) _{\cdot ,c}\right) _{a,b}\right| \leq k\left( \left\| y\right\| _{a,b,\beta }\left\| x\right\| _{a,b,\beta }\left\| z\right\| _{a,b,\beta }\right. }\\ &&+\left. \left\| z\right\| _{a,b,\beta }\left\| y\otimes x\right\| _{a,b,\beta }+\left\| x\right\| _{a,b,\beta }\left\| y\otimes z\right\| _{a,b,\beta }\right) (b-a)^{3\beta } \\ &&+k\left\| z\right\| _{a,c,\beta }\left( \left\| y\right\| _{a,b,\beta }\left\| x\right\| _{a,b,\beta }+\ \left\| y\otimes x\right\| _{a,b,\beta }\ \right) (b-a)^{2\beta }(c-a)^{\beta }. 
\end{eqnarray*}
\end{proposition}

If $b=c$ we write $\left( x\otimes \left( y\otimes z\right) _{\cdot ,b}\right) _{a,b}=\left( x\otimes y\otimes z\right) _{a,b}$. If the functions $x$, $y$ and $z$ are continuously differentiable, then
\begin{equation*}
\left( x\otimes y\otimes z\right) _{a,b}=\int_{a<r<\theta <\sigma <b}x_{r}^{\prime }y_{\theta }^{\prime }z_{\sigma }^{\prime }drd\theta d\sigma .
\end{equation*}
Define
\begin{equation*}
\left\| x\otimes y\otimes z\right\| _{a,b,\beta }=\sup_{a\leq \theta <r\leq b}\frac{|\left( x\otimes y\otimes z\right) _{\theta ,r}|}{|r-\theta |^{3\beta }}.
\end{equation*}
Then, Proposition \ref{p.3.8} implies that
\begin{eqnarray}
\left\| x\otimes y\otimes z\right\| _{a,b,\beta } &\leq &k\left( \left\| y\right\| _{a,b,\beta }\left\| x\right\| _{a,b,\beta }\left\| z\right\| _{a,b,\beta }+\left\| z\right\| _{a,b,\beta }\left\| y\otimes x\right\| _{a,b,\beta }\right.  \notag \\
&&\qquad +\left. \left\| x\right\| _{a,b,\beta }\left\| y\otimes z\right\| _{a,b,\beta }\right) .  \label{2.27}
\end{eqnarray}

Proposition \ref{p.3.8} also implies that $(x,\left( y\otimes z\right) _{\cdot ,c},\left( x\otimes \left( y\otimes z\right) _{\cdot ,c}\right) )$ is a $\beta $-H\"{o}lder continuous functional on the interval $[0,c]$. As a consequence, if $f$ satisfies the assumptions of Definition \ref{def1}, we can define the integral $\int_{a}^{b}f(x_{r})d_{r}\left( y\otimes z\right) _{r,c}$, for all $a\leq b\leq c$. The following estimate for this integral will be needed to solve differential equations.

\begin{proposition}
\label{p.3.1} Suppose that $(x,y,x\otimes y)$ and $(y,z,y\otimes z)$ are $\beta $-H\"{o}lder continuous multiplicative functionals on $\mathbb{R}^{m}\otimes \mathbb{R}^{d}$. Let $f:\mathbb{R}^{m}\rightarrow \mathbb{R}^{d}$ be a continuously differentiable function such that $f^{\prime }$ is $\lambda $-H\"{o}lder continuous, where $\lambda >\frac{1}{\beta }-2$.
Fix $\alpha >0$ and $\varepsilon >0$ such that $1-\beta <\alpha <2\beta $, $\alpha <\frac{\lambda \beta +1}{2}$ and $\ $ $\varepsilon <\alpha +\beta -1$. Then the following estimate holds% \begin{eqnarray} &&\sup_{a\leq \xi \leq \eta \leq b}\frac{1}{(\eta -\xi )^{2\beta }}\left| \int_{\xi }^{\eta }f(x_{r})d_{r}(y\otimes z)_{r,\eta }\right| \leq \ k\Big[ A_{a,b} \notag \\ && \qquad \qquad+B_{a,b}\left\| x\otimes y\right\| _{a,b,\beta }\,(b-a)^{\beta -2\varepsilon }\Big] , \label{2.28} \end{eqnarray}% where \begin{eqnarray} A_{a,b}&= &\left( \Vert y\otimes z\Vert _{a,b,\beta }+\Vert y\Vert _{a,b,\beta }^{\ }\Vert z\Vert _{a,b,\beta }^{\ }\right) \label{2.29} \\ &&\times \left[ \left\| f\right\| _{\infty }+\left( \left\| x\right\| _{a,b,\beta }\left\| f^{\prime }\right\| _{\infty }+\left\| f^{\prime }\right\| _{\lambda }\left\| x\right\| _{a,b,\beta }^{1+\lambda }(b-a)^{\lambda \beta }\right) (b-a)^{\beta -2\varepsilon }\right] \,, \notag \end{eqnarray}% and \begin{equation} B_{a,b}=\ \left\| z\right\| _{a,b,\beta }\left( \left\| f^{\prime }\right\| _{\infty }+\left\| f^{\prime }\right\| _{\lambda }\left\| x\right\| _{a,b,\beta }^{\lambda }(b-a)^{\lambda \beta }\right) \,. \label{2.30} \end{equation} \end{proposition} \begin{proof} To simplify the proof we will assume $d=m=1$. 
From (\ref{2.1}) it is easy to see that
\begin{equation}
\left\| \left( x\otimes y\right) _{\cdot ,b}\right\| _{a,b,\beta }\leq \left( \left\| x\otimes y\right\| _{a,b,\beta }+\left\| x\right\| _{a,b,\beta }\left\| y\right\| _{a,b,\beta }\right) (b-a)^{\beta },  \label{2.31}
\end{equation}
and from Proposition \ref{p.3.8} we have
\begin{eqnarray}
\Vert x\otimes \left( y\otimes z\right) _{\cdot ,b}\Vert _{a,b,\beta } &\leq &k\Big( \left\| x\right\| _{a,b,\beta }\left\| y\right\| _{a,b,\beta }\Vert z\Vert _{a,b,\beta }  \notag \\
&&\qquad +\left\| x\right\| _{a,b,\beta }\left\| y\otimes z\right\| _{a,b,\beta }+\left\| z\right\| _{a,b,\beta }\left\| x\otimes y\right\| _{a,b,\beta }\Big) (b-a)^{{\beta }}\,.  \label{2.32}
\end{eqnarray}
From (\ref{2.15}), (\ref{2.31}), and (\ref{2.32}) we obtain
\begin{eqnarray*}
&&\left| \int_{a}^{b}f(x_{r})d_{r}(y\otimes z)_{r,b}\right| \\
&\leq &k\Vert f\Vert _{\infty }\Vert (y\otimes z)_{\cdot ,b}\Vert _{a,b,\beta }(b-a)^{\beta } \\
&&+k\left( \Vert x\otimes (y\otimes z)_{\cdot ,b}\Vert _{a,b,\beta }+\Vert x\Vert _{a,b,\beta }\Vert (y\otimes z)_{\cdot ,b}\Vert _{a,b,\beta }\right) \\
&&\times \left( \Vert f^{\prime }\Vert _{\infty }+\Vert f^{\prime }\Vert _{{\lambda }}\Vert x\Vert _{a,b,\beta }^{{\lambda }}(b-a)^{\lambda \beta }\right) (b-a)^{2\beta -2\varepsilon } \\
&\leq &k\Vert f\Vert _{\infty }\left( \Vert y\otimes z\Vert _{a,b,\beta }+\Vert y\Vert _{a,b,\beta }\Vert z\Vert _{a,b,\beta }\right) (b-a)^{2\beta } \\
&&+k\left( \left\| x\right\| _{a,b,\beta }\left\| y\right\| _{a,b,\beta }\Vert z\Vert _{a,b,\beta }+\left\| x\right\| _{a,b,\beta }\left\| y\otimes z\right\| _{a,b,\beta }+\left\| z\right\| _{a,b,\beta }\left\| x\otimes y\right\| _{a,b,\beta }\right) \\
&&\times \left( \Vert f^{\prime }\Vert _{\infty }+\Vert f^{\prime }\Vert _{{\lambda }}\Vert x\Vert _{a,b,\beta }^{{\lambda }}(b-a)^{\lambda \beta }\right) (b-a)^{3{\beta }-2\varepsilon },
\end{eqnarray*}
which implies the desired result.
\end{proof}

\setcounter{equation}{0}

\section{Differential Equations Driven by Rough Paths}

Let $y:[0,1]\rightarrow \mathbb{R}^{d}$ be a $\beta $-H\"{o}lder continuous function. Suppose that $(y^{i},y^{j},y^{i}\otimes y^{j})$ is a $\beta $-H\"{o}lder continuous multiplicative functional, for each $i,j=1,\ldots ,d$. We aim to solve the differential equation
\begin{equation}
x_{t}=x_{0}+\int_{0}^{t}f(x_{r})dy_{r},  \label{3.1}
\end{equation}
where $f:\mathbb{R}^{m}\rightarrow \mathbb{R}^{md}$. Formula (\ref{2.9}) and Definition \ref{d.3.3} allow us to transform this equation into the following system of integral equations:
\begin{eqnarray}
x_{t} &=&x_{0}+(-1)^{\alpha }\int_{0}^{t}\widehat{D}_{0+}^{\alpha }f\left( x\right) (s)D_{t-}^{1-\alpha }y_{t-}(s)ds  \label{3.2} \\
&&+\int_{0}^{t}D_{0+}^{2\alpha -1}f^{\prime }\left( x\right) (s)\int_{s}^{t}\int_{s}^{\eta }K_{s,t}(\xi ,\eta )\Gamma ^{\alpha -{\varepsilon }}\left( x\otimes y\right) _{\xi ,\eta }d\xi d\eta ds,  \notag
\end{eqnarray}
\begin{eqnarray}
\left( x\otimes y\right) _{s,t} &=&(-1)^{\alpha }\int_{s}^{t}\widehat{D}_{s+}^{\alpha }f\left( x\right) (r)D_{t-}^{1-\alpha }\left( y\otimes y\right) _{\cdot ,t-}(r)dr  \notag \\
&&+\int_{s}^{t}D_{s+}^{2\alpha -1}f^{\prime }\left( x\right) (r)  \label{3.3} \\
&&\times \int_{r}^{t}\int_{r}^{\eta }K_{r,t}(\xi ,\eta )\Gamma ^{\alpha -\varepsilon }\left( x\otimes \left( y\otimes y\right) _{\cdot ,t}\right) _{\xi ,\eta }d\xi d\eta dr.  \notag
\end{eqnarray}

\begin{theorem}
\label{th4} Let $y:[0,1]\rightarrow \mathbb{R}^{d}$ be a $\beta $-H\"{o}lder continuous function. Suppose that $(y^{i},y^{j},y^{i}\otimes y^{j})$ is a real valued $\beta $-H\"{o}lder continuous multiplicative functional, for each $i,j=1,\ldots ,d$. Let $f:\mathbb{R}^{m}\rightarrow \mathbb{R}^{md}$ be a continuously differentiable function such that $f^{\prime }$ is $\lambda $-H\"{o}lder continuous, where $\lambda >\frac{1}{\beta }-2$, and such that $f$ and $f^{\prime }$ are bounded.
Set
\begin{equation*}
\rho _{f}:=\Vert f\Vert _{\infty }+\Vert f^{\prime }\Vert _{\infty }+\Vert f^{\prime }\Vert _{{\lambda }}.
\end{equation*}
Then there is a solution to the system (\ref{3.2})--(\ref{3.3}) such that $(x,y,x\otimes y)$ is a $\beta $-H\"{o}lder continuous multiplicative functional. Moreover, for any $\gamma >\frac{1}{\beta }$ the function $x$ satisfies the estimate
\begin{equation}
\sup_{0\leq t\leq T}|x_{t}|\leq |x_{0}|+T\left\{ 2k\rho _{f}\left[ \Vert y\Vert _{{\beta }}+\frac{\Vert y\otimes y\Vert _{\beta }}{\Vert y\Vert _{\beta }}\right] \vee 1\right\} ^{\gamma }\,,  \label{3.4}
\end{equation}
where $k$ is a constant depending only on $\beta $ and $\gamma $.
\end{theorem}

\begin{proof}
To simplify the proof we will assume $d=m=1$. The proof will be done in several steps.

\textbf{Step 1.} Fix $\alpha >0$ and $\varepsilon >0$ such that $1-\beta <\alpha <2\beta $, $\alpha <\frac{\lambda \beta +1}{2}$, $\varepsilon <\alpha +\beta -1$, $\varepsilon <\frac{\beta }{2}$, and $(1-2\varepsilon )/(\beta -2\varepsilon )<\gamma $. We write equations (\ref{3.2}) and (\ref{3.3}) in the compact form
\begin{eqnarray*}
x &=&\Phi _{1}(x,y,x\otimes y), \\
x\otimes y &=&\Phi _{2}(x,y,y\otimes y,x\otimes y).
\end{eqnarray*}
Consider the mapping $J:(x,x\otimes y)\rightarrow (J_{1}x,J_{2}\left( x\otimes y\right) )$ defined by
\begin{eqnarray*}
J_{1}x &=&\Phi _{1}(x,y,x\otimes y), \\
J_{2}\left( x\otimes y\right) &=&\Phi _{2}(x,y,y\otimes y,x\otimes y).
\end{eqnarray*}
We need some a priori estimates of the H\"{o}lder norms of $J_{1}x$ and $J_{2}\left( x\otimes y\right) $ in terms of the H\"{o}lder norms of $x$ and $x\otimes y$.
From (\ref{2.15}) it follows that \begin{eqnarray} \left\| J_{1}x\right\| _{s,t,\beta } &\leq &k[\left\| f\right\| _{\infty }\left\| y\right\| _{s,t,\beta }+\left( \left\| x\otimes y\right\| _{s,t,\beta }+\left\| x\right\| _{s,t,\beta }\left\| y\right\| _{s,t,\beta }\right) \notag \\ &&\qquad \times \left( \left\| f^{\prime }\right\| _{\infty }+\left\| f^{\prime }\right\| _{\lambda }\left\| x\right\| _{s,t,\beta }^{\lambda }(t-s)^{{\lambda }{\beta }}\right) (t-s)^{\beta -2\varepsilon }]. \label{3.5} \end{eqnarray}% On the other hand, Proposition \ref{p.3.1} implies that \begin{equation} \left\| J_{2}\left( x\otimes y\right) \right\| _{s,t,\beta }\leq \ k\left[ A_{s,t}+B_{s,t}\left\| x\otimes y\right\| _{s,t,{\beta }}\,(t-s)^{\beta -2\varepsilon }\right] , \label{3.6} \end{equation}% where $A_{s,t}$ and $B_{s,t}$ are defined by (\ref{2.29}) and (\ref{2.30}), respectively. \textbf{Step 2. \ }Set \begin{equation*} \alpha (y):=\left( \ 2k\rho _{f}\left[ \Vert y\Vert _{{\beta }}+\frac{\Vert y\otimes y\Vert _{\beta }}{\Vert y\Vert _{\beta }}\right] \vee 1\right) ^{1/(\beta -2\varepsilon )}, \end{equation*}% where $k$ is the constant appearing in formulas (\ref{3.5}) and (\ref{3.6}). Suppose that \begin{equation} 0<t-s\leq \frac{1}{\alpha (y)}. \label{3.7} \end{equation}% Then, the inequalities \begin{eqnarray} \Vert x\Vert _{s,t,\beta } &\leq &2k\rho _{f}\Vert y\Vert _{\beta } \label{3.8} \\ (t-s)^{\beta -2\varepsilon }\left\| x\otimes y\right\| _{s,t,{\beta }} &\leq &\ \Vert y\Vert _{\beta } \label{3.9} \end{eqnarray}% imply that \begin{eqnarray} \Vert J_{1}x\Vert _{s,t,\beta } &\leq &2k\rho _{f}\Vert y\Vert _{\beta } \label{3.10} \\ (t-s)^{\beta -2\varepsilon }\left\| J_{2}(x\otimes y)\right\| _{s,t,{\beta }% } &\leq &\ \Vert y\Vert _{\beta } \label{3.11} \end{eqnarray}% In fact, from the definition of $\alpha (y)$ and (\ref{3.8}) we deduce% \begin{equation} (t-s)^{\beta -2\varepsilon }\Vert x\Vert _{s,t,{\beta }}\leq 1. 
\label{3.12}
\end{equation}
By the definition of $B_{s,t}$ and $A_{s,t}$ we have
\begin{equation}
B_{s,t}\leq (\Vert f^{\prime }\Vert _{\infty }+\Vert f^{\prime }\Vert _{{
\lambda }})\Vert y\Vert _{\beta }\leq \rho _{f}\Vert y\Vert _{\beta }\leq
\frac{(t-s)^{-(\beta -2\varepsilon )}}{2k},  \label{3.13}
\end{equation}
and
\begin{eqnarray}
A_{s,t} &\leq &\left( \Vert f\Vert _{\infty }+\Vert f^{\prime }\Vert
_{\infty }+\Vert f^{\prime }\Vert _{{\lambda }}\right) \left( \left\|
y\otimes y\right\| _{\beta }+\left\| y\right\| _{\beta }^{2}\right)
\notag \\
&\leq &\frac{(t-s)^{-(\beta -2\varepsilon )}}{2k}\Vert y\Vert _{\beta }.
\label{3.14}
\end{eqnarray}
Therefore, substituting (\ref{3.13}) and (\ref{3.14}) into (\ref{3.6}) we
obtain (\ref{3.11}). Finally, from (\ref{3.5}) we get (\ref{3.10}).

\textbf{Step 3.} We can now proceed with the proof of the existence. Let $N$
be a natural number such that $\frac{T}{N}=\delta \leq \frac{1}{\alpha (y)}$.
We partition the interval $[0,T]$ into $N$ subintervals of equal length and
set $t_{i}=\frac{iT}{N}$, $i=0,1,\ldots ,N$. We will make use of the
notation $\left\| x\right\| _{i}=\Vert x\Vert _{t_{i-1},t_{i},\beta }$ and
$\left\| x\otimes y\right\| _{i}=\Vert x\otimes y\Vert
_{t_{i-1},t_{i},\beta }$, for $i=1,\ldots ,N$. From Step 2 we know that if
$x$ and $x\otimes y$ satisfy
\begin{eqnarray*}
\Vert x\Vert _{i} &\leq &2k\rho _{f}\Vert y\Vert _{\beta } \\
\left\| x\otimes y\right\| _{i} &\leq &\Vert y\Vert _{\beta }\delta
^{-\left( \beta -2\varepsilon \right) },
\end{eqnarray*}
for any $i=1,\ldots ,N$, then the same inequalities hold for $J_{1}x$ and
$J_{2}(x\otimes y)$, that is,
\begin{eqnarray*}
\Vert J_{1}x\Vert _{i} &\leq &2k\rho _{f}\Vert y\Vert _{\beta } \\
\left\| J_{2}(x\otimes y)\right\| _{i} &\leq &\Vert y\Vert _{\beta }\delta
^{-\left( \beta -2\varepsilon \right) }.
\end{eqnarray*}
Consequently, there is a constant $C_{1}$ such that
\begin{equation*}
\Vert J_{1}^{n}x\Vert _{\beta }+\Vert J_{2}^{n}(x\otimes y)\Vert _{\beta
}\leq C_{1}\,.
\end{equation*}
This implies that the sequence of functions $J_{1}^{n}x$ is equicontinuous
and bounded in $C^{\beta }$. Therefore, there exists a subsequence which
converges in the $\beta ^{\prime }$-H\"{o}lder norm if $\beta ^{\prime
}<\beta $. In the same way, there is a subsequence of $J_{2}^{n}(x\otimes
y)$ which converges in the $\beta ^{\prime }$-H\"{o}lder norm. The limit
$(x,x\otimes y)$ defines a $\beta $-H\"{o}lder continuous multiplicative
functional $(x,y,x\otimes y)$. Using the continuity of the map $J$ in this
norm, it is not difficult to show that the limit is a solution. This implies
the existence of a solution, which satisfies (\ref{3.8}) and (\ref{3.9}).

\textbf{Step 4.} Let us now prove the estimate (\ref{3.4}). By Step 2, the
solution we have constructed satisfies the estimates (\ref{3.8}) and
(\ref{3.9}) if (\ref{3.7}) holds. Then it follows that
\begin{equation*}
\sup_{r\in \lbrack s,t]}|x_{r}|\leq |x_{s}|+(t-s)^{\beta }\Vert x\Vert
_{s,t,\beta }\leq |x_{s}|+(t-s)^{2\varepsilon }.
\end{equation*}
Since the interval $[0,T]$ can be divided into $[T/\tau ]$ intervals of
length $\tau =\frac{1}{\alpha (y)}$, the inequality (\ref{3.4}) follows.
\end{proof}

\begin{theorem}
\label{th5} Let $y:[0,T]\rightarrow \mathbb{R}^{d}$ be a $\beta $-H\"{o}lder
continuous function. Suppose that $(y^{i},y^{j},y^{i}\otimes y^{j})$ is a
real-valued $\beta $-H\"{o}lder continuous multiplicative functional for
each $i,j=1,\ldots ,d$. Let $f:\mathbb{R}^{m}\rightarrow \mathbb{R}^{md}$ be
a twice continuously differentiable function such that $f^{\prime \prime }$
is $\lambda $-H\"{o}lder continuous, where $\lambda >\frac{1}{\beta }-2$,
and $f$, $f^{\prime }$ and $f^{\prime \prime }$ are bounded.
Then there is a unique solution to Equations (\ref{3.2})- (\ref{3.3}) such that $(x,y,x\otimes y)$ is a $\beta $-H\"{o}% lder continuous multiplicative functional. Moreover, if $\tilde{x}$ satisfies $\tilde{x}_{t}=\tilde{x}% _{0}+\int_{0}^{t}f(\widetilde{x}_{r})d\tilde{y}_{r}$\ and $\tilde{y}$ verifies the same hypotheses as $y$, then \begin{equation} \sup_{0\leq t\leq T}|x_{t}-\tilde{x}_{t}|\leq C\left\{ |x_{0}-\tilde{x}_{0}|+\Vert y-\tilde{y}% \Vert _{{\beta }}+\Vert y\otimes (y-\tilde{y})\Vert _{{\beta }}\right\} \,, \label{3.15} \end{equation}% where $C$ depends on $\|y\|_\beta$, $\|y\otimes y\|_\beta$, $\beta$, $\lambda$, and $\hat{\rho}_f$, and where \begin{equation*} \hat{\rho}_{f}=\Vert f\Vert _{\infty }+\left\| f\right\| _{\lambda }+\Vert f^{\prime }\Vert _{\infty }+\left\| f^{\prime }\right\| _{\lambda }+\Vert f^{\prime \prime }\Vert _{\infty }+\Vert f^{\prime \prime }\Vert _{{\lambda }% }\,. \end{equation*} \end{theorem} \begin{proof} To simplify the proof we will assume $d=m=1$. Notice that uniqueness follows from the estimate (\ref{3.15}). So it suffices to show this inequality. We fix \ $s<t$ such that $t-s\leq \frac 1 {\beta(y)} $, where $\beta$ is defined as follows \begin{equation} \beta(y)= \left( \ 2k\widehat{\rho }_{f}\left[ \Vert y\Vert _{{\beta }}+ \|y\|^2_\beta+ \Vert y\otimes y\Vert _{\beta } + \frac{\Vert y\otimes y\Vert _{\beta }} {\Vert y\Vert _{\beta }}\right] \vee 1 \right) ^{1/(\beta \lambda )}\,. \label{beta} \end{equation}% The constant $k$ appearing in the definition of $\beta$ will be chosen later. We choose $\alpha>0$ and $\varepsilon>0$ such that $1-\beta <\alpha< 2\beta$, $\alpha <\frac{\lambda\beta +1}2$, and $\varepsilon< \alpha+\beta-1$, $\varepsilon <\frac \beta 2$. 
We also assume that the solutions $x$ and $\widetilde{x}$ satisfy the following inequalities: \begin{equation} (t-s) ^{\beta -2\varepsilon }\Vert x\Vert _{{\beta }}\leq 1, \label{3.16} \end{equation}% \begin{equation} (t-s) ^{\beta -2\varepsilon }\Vert \widetilde{x}\Vert _{{\beta }}\leq 1, \label{3.17} \end{equation}% \begin{equation} (t-s) ^{\beta -2\varepsilon }\left\| x\otimes y\right\| _{{\beta }}\leq \Vert y\Vert _{\beta }. \label{3.18} \end{equation}% Our first purpose is to estimate the H\"{o}lder norm $\left\| x-\widetilde{x}% \right\| _{s,t,\beta }$. We can write% \begin{eqnarray*} \left\| x-\widetilde{x}\right\| _{s,t,\beta } &\leq &\left\| \int \left[ f(x_{s})-f(\tilde{x}_{s})\right] dy_{s}\right\| _{s,t,\beta }+\left\| \int f(% \tilde{x}_{s})d(y_{s}-\tilde{y}_{s})\right\| _{s,t,\beta } \\ &=&I_{1,s,t}+I_{2,s,t}. \end{eqnarray*}% The term $I_{1,s,t}$ can be estimated using (\ref{2.16}) and we obtain \begin{equation} I_{1,s,t}\leq k[H_{1}\Vert x-\tilde{x}\Vert _{s,t,\infty }+H_{2}\Vert x-% \tilde{x}\Vert _{s,t,\beta }+H_{3}\Vert (x-\tilde{x})\otimes y\Vert _{s,t,{% \beta }}], \label{3.19} \end{equation}% where% \begin{eqnarray*} H_{1} &=&\left\| y\right\| _{s,t,\beta }\left( \Vert f^{\prime }\Vert _{\infty }+\left\| f^{\prime \prime }\right\| _{\lambda }\Vert \widetilde{x}% \Vert _{s,t,{\beta }}\left( \Vert x\Vert _{s,t,{\beta }}^{\lambda }+\Vert \tilde{x}\Vert _{s,t,{\beta }}^{\lambda }\right) \right) (t-s)^{\beta (1+\lambda )} \\ &&+\left( \Vert f^{\prime \prime }\Vert _{\infty }+\Vert f^{\prime \prime }\Vert _{\lambda }\left( \Vert x\Vert _{s,t,\beta }^{\lambda }+\Vert \widetilde{x}\Vert _{s,t,\beta }^{\lambda }\right) (t-s)^{\beta \lambda }\right) \\ &&\times \left( \Vert x\otimes y\Vert _{s,t,{\beta }}+\Vert x\Vert _{s,t,{% \beta }}\left\| y\right\| _{s,t,\beta }\ \right) (t-s)^{\beta -2\varepsilon }, \end{eqnarray*}% \begin{eqnarray*} H_{2} &=&\ \left\| f^{\prime \prime }\right\| _{\infty }\left\| y\right\| _{s,t,\beta }\left( \Vert x\Vert 
_{s,t,{\beta }}+\Vert \tilde{x}\Vert _{s,t,{% \beta }}\right) (t-s)^{\beta (1+\lambda )}\ \\ &&+\left\| f^{\prime \prime }\right\| _{\infty }\left( \Vert x\otimes y\Vert _{s,t,\beta }+\Vert x\Vert _{s,t,\beta }\left\| y\right\| _{s,t,\beta }\ \right) (t-s)^{2\beta -2\varepsilon } \\ &&+\ \left( \Vert f\Vert _{\infty }+\Vert f\Vert _{\lambda }\Vert \tilde{x}% \Vert _{s,t,\beta }^{\lambda }(t-s)^{\lambda \beta }\right) \left\| y\right\| _{s,t,\beta }(t-s)^{\beta -2\varepsilon }, \end{eqnarray*}% and% \begin{equation*} H_{3}=\left( \Vert f\Vert _{\infty }+\Vert f\Vert _{\lambda }\Vert \tilde{x}% \Vert _{s,t,\beta }^{\lambda }(t-s)^{\lambda \beta }\right) (t-s)^{\beta -2\varepsilon }. \end{equation*}% Then, using the inequalities (\ref{3.16}), (\ref{3.17}), and (\ref{3.18}) we get the following estimates% \begin{eqnarray} H_{1} &\leq &\left\| y\right\| _{\beta }\left( \Vert f^{\prime }\Vert _{\infty }+2\Vert f^{\prime \prime }\Vert _{\infty }+6\left\| f^{\prime \prime }\right\| _{\lambda }\right), \ \label{3.20} \\ H_{2} &\leq &\ \left\| y\right\| _{\beta }\left( 3\left\| f^{\prime \prime }\right\| _{\infty }+\ \Vert f\Vert _{\infty }+\Vert f\Vert _{\lambda }\right) (t-s)^{\beta \lambda }, \label{3.21} \\ H_{3} &\leq &\left( \Vert f\Vert _{\infty }+\Vert f\Vert _{\lambda }\right) . \label{3.22} \end{eqnarray}% It remains to handle the term $\Vert (x-\tilde{x})\otimes y\Vert _{s,t,{% \beta }}$ in (\ref{3.19}). To get estimates for this term we apply again the inequality (\ref{2.16}) and we have \begin{eqnarray} \left| ((x-\tilde{x})\otimes y)_{s,t}\right| &=&\left| \int_{s}^{t}\left[ f(x_{r})-f(\tilde{x}_{r})\right] d_{r}(y\otimes y)_{r,t}\right| \notag \\ &\leq &k(t-s)^{\beta }\left[ \widetilde{H}_{1}\Vert x-\tilde{x}\Vert _{s,t,\infty }+\widetilde{H}_{2}\Vert x-\tilde{x}\Vert _{s,t,\beta }\right. \notag \\ &&\left. 
+\widetilde{H}_{3}\Vert (x-\tilde{x})\otimes (y\otimes y)_{\cdot ,t}\Vert _{s,t,{\beta }}\right] , \label{3.23} \end{eqnarray}% where% \begin{eqnarray*} \widetilde{H}_{1} &=&\left\| (y\otimes y)_{\cdot ,t}\right\| _{s,t,\beta }\left( \Vert f^{\prime }\Vert _{\infty }+\left\| f^{\prime \prime }\right\| _{\lambda }\Vert \widetilde{x}\Vert _{s,t,{\beta }}\left( \Vert x\Vert _{s,t,% {\beta }}^{\lambda }+\Vert \tilde{x}\Vert _{s,t,{\beta }}^{\lambda }\right) \right) (t-s)^{\beta (1+\lambda )} \\ &&+\left( \Vert f^{\prime \prime }\Vert _{\infty }+\Vert f^{\prime \prime }\Vert _{\lambda }\left( \Vert x\Vert _{s,t,\beta }^{\lambda }+\Vert \widetilde{x}\Vert _{s,t,\beta }^{\lambda }\right) (t-s)^{\beta \lambda }\right) \\ &&\times \left( \Vert x\otimes (y\otimes y)_{\cdot ,t}\Vert _{s,t,{\beta }% }+\Vert x\Vert _{s,t,{\beta }}\left\| (y\otimes y)_{\cdot ,t}\right\| _{s,t,\beta }\ \right) (t-s)^{\beta -2\varepsilon }, \end{eqnarray*}% \begin{eqnarray*} \widetilde{H}_{2} &=&\ \left\| f^{\prime \prime }\right\| _{\infty }\left\| (y\otimes y)_{\cdot ,t}\right\| _{s,t,\beta }\left( \Vert x\Vert _{s,t,{% \beta }}+\Vert \tilde{x}\Vert _{s,t,{\beta }}\right) (t-s)^{\beta (1+\lambda )}\ \\ &&+\left\| f^{\prime \prime }\right\| _{\infty }\left( \Vert x\otimes (y\otimes y)_{\cdot ,t}\Vert _{s,t,\beta }+\Vert x\Vert _{s,t,\beta }\left\| (y\otimes y)_{\cdot ,t}\right\| _{s,t,\beta }\ \right) (t-s)^{2\beta -2\varepsilon } \\ &&+\ \left( \Vert f\Vert _{\infty }+\Vert f\Vert _{\lambda }\Vert \tilde{x}% \Vert _{s,t,\beta }^{\lambda }(t-s)^{\lambda \beta }\right) \left\| (y\otimes y)_{\cdot ,t}\right\| _{s,t,\beta }(t-s)^{\beta -2\varepsilon }, \end{eqnarray*}% and% \begin{equation*} \widetilde{H}_{3}=H_{3}. 
\end{equation*}
Using (\ref{2.31}), (\ref{2.32}), (\ref{3.16}), (\ref{3.17}) and
(\ref{3.18}) we get the following estimates
\begin{eqnarray}
\lefteqn{\widetilde{H}_{1} \leq \left( \left\| y\otimes y\right\|
_{s,t,\beta }+\left\| y\right\| _{s,t,\beta }^{2}\right) }  \notag \\
&& \times\left( \Vert f^{\prime }\Vert _{\infty }+\left\| f^{\prime \prime
}\right\| _{\lambda }\Vert \widetilde{x}\Vert _{s,t,{\beta }}\left( \Vert
x\Vert _{s,t,{\beta }}^{\lambda }+\Vert \tilde{x}\Vert _{s,t,{\beta
}}^{\lambda }\right) \right) (t-s)^{\beta (2+\lambda )}  \notag \\
&&+k\left( \Vert f^{\prime \prime }\Vert _{\infty }+\Vert f^{\prime \prime
}\Vert _{\lambda }\left( \Vert x\Vert _{s,t,\beta }^{\lambda }+\Vert
\widetilde{x}\Vert _{s,t,\beta }^{\lambda }\right) (t-s)^{\beta \lambda
}\right)  \notag \\
&&\times \left( \left\| x\right\| _{s,t,\beta }\left\| y\right\|
_{s,t,\beta }^{2}+\left\| x\right\| _{s,t,\beta }\left\| y\otimes y\right\|
_{s,t,\beta }+\left\| y\right\| _{s,t,\beta }\left\| x\otimes y\right\|
_{s,t,\beta }\right) (t-s)^{2\beta -2\varepsilon }  \notag \\
&\leq &k\left( \left\| y\otimes y\right\| _{\beta }+\left\| y\right\|
_{\beta }^{2}\right) \left( \Vert f^{\prime }\Vert _{\infty }+\Vert
f^{\prime \prime }\Vert _{\infty }+\left\| f^{\prime \prime }\right\|
_{\lambda }\right) (t-s)^{\beta },  \label{3.24}
\end{eqnarray}
and
\begin{eqnarray}
\lefteqn{\widetilde{H}_{2} \leq \left\| f^{\prime \prime }\right\| _{\infty
}\left( \left\| y\otimes y\right\| _{s,t,\beta }+\left\| y\right\|
_{s,t,\beta }^{2}\right) \left( \Vert x\Vert _{s,t,{\beta }}+\Vert \tilde{x}
\Vert _{s,t,{\beta }}\right) (t-s)^{\beta (2+\lambda )} }  \notag \\
&&+k\left\| f^{\prime \prime }\right\| _{\infty }\Big( \left\| y\right\|
_{s,t,\beta }\left\| x\otimes y\right\| _{s,t,\beta }  \notag \\
&&+\Vert x\Vert _{s,t,\beta }\left\| y\otimes y\right\| _{s,t,\beta }+\Vert
x\Vert _{s,t,\beta }\left\| y\right\| _{s,t,\beta }^{2}\Big) (t-s)^{3\beta
-2\varepsilon }  \notag \\
&&+k\left( \Vert f\Vert
_{\infty }+\Vert f\Vert _{\lambda }\Vert \tilde{x}% \Vert _{s,t,\beta }^{\lambda }(t-s)^{\lambda \beta }\right) \left( \left\| y\otimes y\right\| _{s,t,\beta }+\ \left\| y\right\| _{s,t,\beta }^{2}\right) (t-s)^{2\beta -2\varepsilon } \notag \\ &\leq &\ k\left( \Vert f\Vert _{\infty }+\Vert f\Vert _{\lambda }+\left\| f^{\prime \prime }\right\| _{\infty }\right) \left( \left\| y\otimes y\right\| _{\beta }+\ \left\| y\right\| _{\beta }^{2}\right) (t-s)^{\beta }. \label{3.25} \end{eqnarray}% On the other hand, from (\ref{2.32}) we get \begin{eqnarray} \lefteqn{\Vert (x-\tilde{x})\otimes (y\otimes y)_{\cdot ,t}\Vert _{s,t,{\beta }} \leq k\Big( \left\| x-\tilde{x}\right\| _{s,t\beta }\left\| y\right\| _{\beta }^{2} }\notag \\ \label{3.26} &&+\left\| x-\tilde{x}\right\| _{s,t,\beta }\left\| y\otimes y\right\| _{\beta }+\left\| y\right\| _{\beta }\left\| \left( x-% \tilde{x}\right) \otimes y\right\| _{s,t,\beta }\Big) (t-s)^{{\beta }}. \end{eqnarray}% Thus, substituting (\ref{3.24}), (\ref{3.25}), (\ref{3.22}) and (\ref{3.26}) into (\ref{3.23}) yields% \begin{eqnarray*} \lefteqn{\left\| (x-\tilde{x})\otimes y\right\| _{s,t,\beta } \leq k(t-s)^{\beta }\Big[\left( \left\| y\otimes y\right\| _{\beta }+\ \left\| y\right\| _{\beta }^{2}\right) } \\ &&\times \left( \Vert f^{\prime }\Vert _{\infty }+\Vert f^{\prime \prime }\Vert _{\infty }+\left\| f^{\prime \prime }\right\| _{\lambda }\right) \Vert x-\tilde{x}\Vert _{s,t,\infty } \\ &&+\left( \Vert f\Vert _{\infty }+\Vert f\Vert _{\lambda }+\left\| f^{\prime \prime }\right\| _{\infty }\right) \left( \left\| y\otimes y\right\| _{\beta }+\ \left\| y\right\| _{\beta }^{2}\right) \Vert x-\tilde{x}\Vert _{s,t,\beta } \\ &&+\left( \Vert f\Vert _{\infty }+\Vert f\Vert _{\lambda }\right) \\ &&\times \left( \left\| x-\tilde{x}\right\| _{s,t\beta }\left\| y\right\| _{\beta }^{2}+\left\| x-\tilde{x}\right\| _{s,t,\beta }\left\| y\otimes y\right\| _{\beta }+\left\| y\right\| _{\beta }\left\| \left( x-\tilde{x}% \right) \otimes y\right\| 
_{s,t,\beta }\right) \Big] \\
&\leq &k(t-s)^{\beta }\hat{\rho}_{f}\left( \left\| y\otimes y\right\|
_{\beta }+\left\| y\right\| _{\beta }^{2}\right) \left( \Vert x-\tilde{x}
\Vert _{s,t,\infty }+\Vert x-\tilde{x}\Vert _{s,t,\beta }\right) \\
&&+k(t-s)^{\beta }\left( \Vert f\Vert _{\infty }+\Vert f\Vert _{\lambda
}\right) \left\| y\right\| _{\beta }\left\| \left( x-\tilde{x}\right)
\otimes y\right\| _{s,t,\beta }.
\end{eqnarray*}
If the constant $k$ in the definition of $\beta (y)$ is chosen in an
appropriate way, the condition $t-s\le 1/\beta(y)$ implies that
$$
k(t-s)^{\beta }\left( \Vert f\Vert _{\infty }+\Vert f\Vert _{\lambda
}\right) \left\| y\right\| _{\beta }\leq \frac{1}{2}.
$$
Hence,
\begin{equation}
\left\| (x-\tilde{x})\otimes y\right\| _{s,t,\beta }\leq k(t-s)^{\beta }
\hat{\rho}_{f}\left( \left\| y\otimes y\right\| _{\beta }+\left\| y\right\|
_{\beta }^{2}\right) \left( \Vert x-\tilde{x}\Vert _{s,t,\infty }+\Vert x-
\tilde{x}\Vert _{s,t,\beta }\right) .  \label{3.27}
\end{equation}
Substituting (\ref{3.27}), (\ref{3.20}), (\ref{3.21}) and (\ref{3.22}) into
(\ref{3.19}) yields
\begin{eqnarray*}
I_{1,s,t} &\leq &k\hat{\rho}_{f}[\left\| y\right\| _{\beta }\Vert x-\tilde{x}
\Vert _{s,t,\infty }+\left\| y\right\| _{\beta }\Vert x-\tilde{x}\Vert
_{s,t,\beta }(t-s)^{\beta \lambda } \\
&&+\left( \left\| y\otimes y\right\| _{\beta }+\left\| y\right\| _{\beta
}^{2}\right) \left( \Vert x-\tilde{x}\Vert _{s,t,\infty }+\Vert x-\tilde{x}
\Vert _{s,t,\beta }\right) (t-s)^{\beta }].
\end{eqnarray*}
Again, if the constant in $\beta (y)$ is chosen in an appropriate way, the
condition $t-s\le 1/\beta(y)$ implies that
\begin{equation}
I_{1,s,t}\leq k\hat{\rho}_{f}\left\| y\right\| _{\beta }\Vert x-\tilde{x}
\Vert _{s,t,\infty }+\frac{1}{2}\Vert x-\tilde{x}\Vert _{s,t,\beta }.
\label{3.28} \end{equation} For the term $I_{2,s,t}$ we have the following estimates, using (\ref{2.25})% \begin{eqnarray} I_{2,s,t} &\leq &k\left\| f\right\| _{\infty }\left\| y-\tilde{y}\right\| _{s,t,\beta }+k\left( \left\| x\otimes (y-\tilde{y})\right\| _{s,t,\beta }+\left\| x\right\| _{s,t,\beta }\left\| y-\tilde{y}\right\| _{s,t,\beta }\right) \notag \\ &&\times \left( \left\| f^{\prime }\right\| _{\infty }+\left\| f^{\prime }\right\| _{\lambda }\left\| x\right\| _{s,t,\beta }^{\lambda }(t-s)^{\lambda \beta }\right) (t-s)^{\beta -2\varepsilon } \notag \\ &\leq &k\rho _{f}\left\| y-\tilde{y}\right\| _{\beta }+k\left\| x\otimes (y-% \tilde{y})\right\| _{s,t,\beta }\left( \left\| f^{\prime }\right\| _{\infty }+\left\| f^{\prime }\right\| _{\lambda }\right) (t-s)^{\beta -2\varepsilon }. \label{3.29} \end{eqnarray}% In order to estimate $\left\| x\otimes (y-\tilde{y})\right\| _{s,t,\beta }$ we make use of Proposition \ \ref{p.3.1} and we obtain% \begin{eqnarray} \left\| x\otimes (y-\tilde{y})\right\| _{s,t,\beta } &=&\sup_{s\leq \xi \leq \eta \leq t}\frac{1}{(\eta -\xi )^{2\beta }}\left| \int_{\xi }^{\eta }f(x_{r})d_{r}(y\otimes (y-\tilde{y}))_{r,\eta }\right| \notag \\ &\leq &\ k\left[ A_{s,t}+B_{s,t}\left\| x\otimes y\right\| _{s,t,\beta }\,(t-s)^{\beta -2\varepsilon }\right] , \label{3.31} \end{eqnarray}% where \begin{eqnarray} A_{s,t} &=&\left( \Vert y\otimes (y-\tilde{y})\Vert _{s,t,\beta }+\Vert y\Vert _{s,t,\beta }^{\ }\Vert (y-\tilde{y})\Vert _{s,t,\beta }^{\ }\right) \notag \\ &&\times \left[ \left\| f\right\| _{\infty }+\left( \left\| x\right\| _{s,t,\beta }\left\| f^{\prime }\right\| _{\infty }+\left\| f^{\prime }\right\| _{\lambda }\left\| x\right\| _{s,t,\beta }^{1+\lambda }(t-s)^{\lambda \beta }\right) (t-s)^{\beta -2\varepsilon }\right] \, \notag \\ &\leq &\rho _{f}\left( \Vert y\otimes (y-\tilde{y})\Vert _{s,t,\beta }+\Vert y\Vert _{s,t,\beta }^{\ }\Vert (y-\tilde{y})\Vert _{s,t,\beta }^{\ }\right) , \label{3.33} \end{eqnarray}% and 
\begin{eqnarray}
B_{s,t} &=&\left\| (y-\tilde{y})\right\| _{s,t,\beta }\left( \left\|
f^{\prime }\right\| _{\infty }+\left\| f^{\prime }\right\| _{\lambda
}\left\| x\right\| _{s,t,\beta }^{\lambda }(t-s)^{\lambda \beta }\right)
\notag \\
&\leq &\left\| (y-\tilde{y})\right\| _{s,t,\beta }\left( \left\| f^{\prime
}\right\| _{\infty }+\left\| f^{\prime }\right\| _{\lambda }\right) .
\label{3.34}
\end{eqnarray}
Substituting (\ref{3.33}) and (\ref{3.34}) into (\ref{3.31}) yields
\begin{equation}
\left\| x\otimes (y-\tilde{y})\right\| _{s,t,\beta }\leq k\rho _{f}\left[
\Vert y\otimes (y-\tilde{y})\Vert _{\beta }+\Vert y\Vert _{\beta }\Vert (y-
\tilde{y})\Vert _{\beta }\right] .  \label{3.35}
\end{equation}
Finally, from (\ref{3.35}) and (\ref{3.29}) we obtain
\begin{eqnarray}
I_{2,s,t} &\leq &k\rho _{f}\left\| y-\tilde{y}\right\| _{\beta }+k\rho
_{f}^{2}\left[ \Vert y\otimes (y-\tilde{y})\Vert _{\beta }+\Vert y\Vert
_{s,t,\beta }\Vert (y-\tilde{y})\Vert _{\beta }\right] (t-s)^{\beta
-2\varepsilon }  \notag \\
&\leq &k\rho _{f}\left\| y-\tilde{y}\right\| _{\beta }+k\rho _{f}^{2}\Vert
y\otimes (y-\tilde{y})\Vert _{\beta }(t-s)^{\beta -2\varepsilon }.
\label{3.37}
\end{eqnarray}
Now from (\ref{3.28}) and (\ref{3.37}) we get
\begin{eqnarray*}
\left\| x-\widetilde{x}\right\| _{s,t,\beta } &\leq &k\hat{\rho}_{f}\left\|
y\right\| _{\beta }\Vert x-\tilde{x}\Vert _{s,t,\infty }+\frac{1}{2}\Vert x-
\tilde{x}\Vert _{s,t,\beta } \\
&&+k\rho _{f}\left\| y-\tilde{y}\right\| _{\beta }+k\rho _{f}^{2}\Vert
y\otimes (y-\tilde{y})\Vert _{\beta }(t-s)^{\beta -2\varepsilon }.
\end{eqnarray*}
Absorbing the term $\frac{1}{2}\Vert x-\tilde{x}\Vert _{s,t,\beta }$ into
the left-hand side (and enlarging the constant $k$), we obtain
\begin{eqnarray}
\left\| x-\widetilde{x}\right\| _{s,t,\beta } &\leq &k\hat{\rho}_{f}\left\|
y\right\| _{\beta }\Vert x-\tilde{x}\Vert _{s,t,\infty }  \notag \\
&&\quad +k\rho _{f}\left\| y-\tilde{y}\right\| _{\beta }+k\rho _{f}^{2}\Vert
y\otimes (y-\tilde{y})\Vert _{\beta }(t-s)^{\beta -2\varepsilon }.
\label{3.38} \end{eqnarray}% Notice that% \begin{equation} \Vert x-\tilde{x}\Vert _{s,t,\infty }\leq \left| x_{s}-\tilde{x}_{s}\right| +(t-s)^{\beta }\Vert x-\tilde{x}\Vert _{s,t,\beta }. \label{3.39} \end{equation}% Hence, \begin{eqnarray*} \left\| x-\widetilde{x}\right\| _{s,t,\beta } &\leq &k\hat{\rho}_{f}\left\| y\right\| _{\beta }[\left| x_{s}-\tilde{x}_{s}\right| +(t-s)^{\beta }\Vert x-% \tilde{x}\Vert _{s,t,\beta }] \\ &&+k\rho _{f}\left\| y-\tilde{y}\right\| _{\beta }+k\rho _{f}^{2}\Vert y\otimes (y-\tilde{y})\Vert _{\beta }(t-s)^{\beta -2\varepsilon }. \end{eqnarray*}% And consequently, \begin{equation} \left\| x-\widetilde{x}\right\| _{s,t,\beta }\leq k\hat{\rho}_{f}\left\| y\right\| _{\beta }\left| x_{s}-\tilde{x}_{s}\right| +k\rho _{f}\left\| y-% \tilde{y}\right\| _{\beta }+k\rho _{f}^{2}\Vert y\otimes (y-\tilde{y})\Vert _{\beta }(t-s)^{\beta -2\varepsilon }. \label{3.40} \end{equation} Substituting (\ref{3.40}) into (\ref{3.39}) yields% \begin{eqnarray} \lefteqn{\Vert x-\tilde{x}\Vert _{s,t,\infty } \leq \left| x_{s}-\tilde{x}% _{s}\right| +(t-s)^{\beta } \Big( k\hat{\rho}_{f}\left\| y\right\| _{\beta }\left| x_{s}-% \tilde{x}_{s}\right| } \notag \\ && +k\rho _{f}\left\| y-\tilde{y}\right\| _{\beta }+k\rho _{f}^{2}\Vert y\otimes (y-\tilde{y})\Vert _{\beta }(t-s)^{\beta -2\varepsilon }\Big) . \label{3.41} \end{eqnarray} Suppose that $y=\widetilde{y}$. Then, Equation (\ref{3.41}) implies that $x=\widetilde{x}$ in a small interval $[0,\delta]$, and by a recursive argument, the uniqueness follows. Denote $\kappa = \frac{1}{\beta (y)}\ $ and $% t_{n}=n\kappa $. 
Set
\begin{equation*}
Z_{n}=\sup_{0\leq s\leq t_{n}}|x_{s}-\tilde{x}_{s}|.
\end{equation*}
Then inequality (\ref{3.41}) implies that
\begin{equation*}
Z_{n+1}\leq (1+k\rho _{f}\kappa ^{\beta })Z_{n}+k\rho _{f}\kappa ^{\beta
}\Vert y-\tilde{y}\Vert _{{\beta }}+k\rho _{f}^{2}\kappa ^{2{\beta }-2{
\varepsilon }}\Vert y\otimes (y-\tilde{y})\Vert _{{\beta }}.
\end{equation*}
Therefore
\begin{eqnarray*}
Z_{T} &\leq &k(1+k\rho _{f}\kappa ^\beta)^{T/\kappa }|x_{0}-\tilde{x}_{0}|
\\
&&+k\sum_{l=0}^{T/\kappa }(1+k\rho _{f}\kappa ^\beta )^{l}\left[ \rho
_{f}\kappa ^{\beta }\Vert y-\tilde{y}\Vert _{{\beta }}+\rho _{f}^{2}\kappa
^{2{\beta }-2{\varepsilon }}\Vert y\otimes (y-\tilde{y})\Vert _{{\beta
}}\right].
\end{eqnarray*}
This implies the desired estimate.
\end{proof}

The following corollary is a direct consequence of (\ref{3.38}) and
(\ref{3.15}).

\begin{corollary}
\label{c.4.3} If $f$ is twice continuously differentiable and $f^{\prime
\prime }$ is Lipschitz continuous and if $x$ and $\tilde{x}$ satisfy
\begin{equation*}
x_{t}=x_{0}+\int_{0}^{t}f(x_{s})dy_{s}\qquad \mathrm{and}\qquad \tilde{x}
_{t}=\tilde{x}_{0}+\int_{0}^{t}f(\tilde{x}_{s})d\tilde{y}_{s}\,,
\end{equation*}
then
\begin{equation}
\Vert x-\tilde{x}\Vert _{{\beta }}\leq C\left\{ |x_{0}-\tilde{x}
_{0}|+\Vert y-\tilde{y}\Vert _{{\beta }}+\Vert y\otimes (y-\tilde{y})\Vert
_{{\beta }}\right\} \,,  \label{3.30}
\end{equation}
where we use the notation of Theorem \ref{th5}.
\end{corollary}

\setcounter{equation}{0}

\section{Stochastic Differential Equations}

Suppose that $B=\{B_{t}=(B_{t}^{1},B_{t}^{2},\ldots ,B_{t}^{d})\}$ is a
$d$-dimensional Brownian motion. Fix a time interval $[0,T]$. Define
\begin{equation*}
\left( B\otimes B\right) _{s,t}=\int_{s}^{t}\left( B_{r}-B_{s}\right)
d\circ B_{r},
\end{equation*}
where the stochastic integral is a Stratonovich integral.
That is,
\begin{equation*}
\left( B\otimes B\right) _{s,t}^{i,j}=\left\{
\begin{array}{ccc}
\frac{1}{2}(B_{t}^{i}-B_{s}^{i})^{2} & \text{if} & i=j \\
\int_{s}^{t}\left( B_{r}^{i}-B_{s}^{i}\right) dB_{r}^j & \text{if} & i\neq j
\end{array}
\right. ,
\end{equation*}
where the stochastic integral is an It\^{o} integral. It is not difficult to
show that we can choose a version of $\left( B\otimes B\right) _{s,t}$ in
such a way that $(B,B,B\otimes B)$ constitutes a $\beta $-H\"{o}lder
continuous multiplicative functional, for a fixed $\beta \in (1/3,1/2)$.

As a first application of Theorem \ref{t.3.1} and (\ref{eqa}) we deduce
that the Stratonovich stochastic integral $\int_0^T f(B_r) d\circ B_r$ has
the following path-wise expression
\begin{eqnarray}
\int_{0}^{T}f(B_{r})\circ dB_{r} &=&(-1)^{\alpha }\sum_{i=1}^{d}\int_{0}^{T}
\widehat{D}_{0+}^{\alpha }f_{i}\left( B\right) _r (D_{T-}^{1-\alpha
}B_{T-}^{i})_r dr  \notag \\
&&\!\!\!\! +\sum_{i=1}^{m}\sum_{j=1}^{d}\int_{0}^{T}D_{0+}^{2\alpha
-1}\partial _{i}f_{j}\left( B\right) _r{\Lambda }_{r}^{T}(B^{i}\otimes
B^{j})dr\,.
\end{eqnarray}
We can apply Theorem \ref{th4} and deduce the existence of a solution for
the stochastic differential equation in $\mathbb{R}^{m}$
\begin{equation}
X_{t}=X_{0}+\int_{0}^{t}f(X_{s})dB_{s},  \label{4.2}
\end{equation}
where the initial condition $X_{0}$ is an arbitrary random variable, and the
function $f:\mathbb{R}^{m}\rightarrow \mathbb{R}^{md}$ is a continuously
differentiable function such that $f^{\prime }$ is $\lambda $-H\"{o}lder
continuous, where $\lambda >\frac{1}{\beta }-2$, and $f$ and $f^{\prime }$
are bounded. By Theorem \ref{th5} the solution is unique if $f$ is twice
continuously differentiable with bounded derivatives and $f^{\prime \prime
}$ is $\lambda $-H\"{o}lder continuous, where $\lambda >\frac{1}{\beta }-2$.
The stochastic integral here is a path-wise integral which depends on $B$
and $B\otimes B$.
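The diagonal identity $(B\otimes B)_{s,t}^{i,i}=\frac{1}{2}(B_{t}^{i}-B_{s}^{i})^{2}$ recalled at the beginning of this section gives a convenient sanity check for simulations of the second-order increments: for midpoint (Stratonovich) Riemann sums it holds exactly at the discrete level, by telescoping, independently of the step size. A short Python sketch (the dimension, grid size and random seed are arbitrary choices):

```python
import numpy as np

# Simulate a 2-d Brownian path on a uniform grid and form the Stratonovich
# (midpoint) sums approximating (B (x) B)_{0,T}.  On the diagonal the
# midpoint sum telescopes to (B_T^i - B_0^i)^2 / 2 exactly, so the match is
# limited only by floating-point rounding, not by the step size.
rng = np.random.default_rng(0)
n, T = 50_000, 1.0
dB = rng.normal(scale=np.sqrt(T / n), size=(n, 2))
B = np.vstack([np.zeros(2), np.cumsum(dB, axis=0)])   # path with B_0 = 0

mid = 0.5 * (B[:-1] + B[1:])                 # midpoint values of B
area = np.einsum('ki,kj->ij', mid, dB)       # Stratonovich sums, B_0 = 0
diag_exact = 0.5 * B[-1] ** 2                # (B_T - B_0)^2 / 2

print(np.abs(np.diag(area) - diag_exact))    # zero up to rounding error
```

The off-diagonal entries of `area` are the simulated Lévy area terms, for which no such closed form is available.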
We have also the stability type results (\ref{3.15}) and (\ref{3.30}). In
particular, if $B^{{\varepsilon }}$ is a piece-wise smooth approximation of
$B$ such that
\begin{equation*}
\Vert B^{{\varepsilon }}-B\Vert _{{\beta }}\qquad \mathrm{and}\qquad \Vert
B\otimes (B^{{\varepsilon }}-B)\Vert _{{\beta }}
\end{equation*}
converge to zero with a certain rate, then according to Corollary
\ref{c.4.3}, $\Vert X-X^{{\varepsilon }}\Vert _{{\beta }}$ will also
converge to $0$ with the same rate, where
\begin{equation*}
X_{t}^{{\varepsilon }}=X_{0}+\int_{0}^{t}f(X_{s}^{{\varepsilon }})dB_{s}^{{
\varepsilon }}.
\end{equation*}
In particular, this implies that the stochastic process solution of
(\ref{4.2}) coincides with the solution of the Stratonovich stochastic
differential equation
\begin{equation}
X_{t}=X_{0}+\int_{0}^{t}f(X_{s})\circ dB_{s}.  \label{4.3}
\end{equation}
In this section we will apply these results in order to obtain the almost
sure rate of convergence of the Wong-Zakai approximation to the stochastic
differential equation (\ref{4.2}). That is, we will consider the rate of
convergence in H\"{o}lder norm when we approximate the Brownian motion by a
polygonal line. In order to get a precise rate for these approximations we
will make use of the following exact modulus of continuity of the Brownian
motion: there exists a random variable $G$ such that almost surely for any
$s,t\in \lbrack 0,T]$ we have
\begin{equation}
\left| B_{t}-B_{s}\right| \leq G|t-s|^{1/2}\sqrt{\log \left(
|t-s|^{-1}\right) }.  \label{4.3b}
\end{equation}
Let $\pi =\left\{ 0=t_{0}<t_{1}<\cdots <t_{n}=T\right\} $ be the uniform
partition of the interval $[0,T]$. That is, $t_{k}=\frac{kT}{n}$,
$k=0,\ldots ,n$. We denote by $B^{\pi }$ the polygonal approximation of the
Brownian motion defined by
\begin{equation*}
B_{t}^{\pi }=\sum_{k=0}^{n-1}\left( B_{t_{k}}+\frac{n}{T}\left(
t-t_{k}\right) \left( B_{t_{k+1}}-B_{t_{k}}\right) \right) \mathbf{1}
_{(t_{k},t_{k+1}]}(t).
\end{equation*}
We have the following result.

\begin{lemma}
\label{lema1} There exists a random variable $C_{T,\beta}$ such that
\begin{eqnarray}
\Vert B-B^{\pi }\Vert _{{\beta }} &\leq &C_{T,\beta }n^{\beta -1/2}\sqrt{
\log n},  \label{4.5} \\
\Vert B\otimes (B-B^{\pi })\Vert _{{\beta }} &\leq &C_{T,\beta }n^{\beta
-1/2}\sqrt{\log n}.  \label{4.6}
\end{eqnarray}
\end{lemma}

\begin{proof}
Fix $0<s<t<T$ and assume that $s\in \lbrack t_{l},t_{l+1}]$ and $t\in
\lbrack t_{k},t_{k+1}]$. Let us first estimate
\begin{equation*}
h_{1}(s,t)=\frac{1}{(t-s)^{{\beta }}}|B_{t}^{\pi }-B_{t}-(B_{s}^{\pi
}-B_{s})|\,.
\end{equation*}
If $t-s\geq \frac{T}{n}$, then using (\ref{4.3b}) we obtain
\begin{eqnarray*}
\left| h_{1}(s,t)\right| &\leq &T^{-\beta }n^{{\beta }}\left[ \left|
B_{t_{k}}-B_{t}+\frac{n}{T}\left( t-t_{k}\right) \left(
B_{t_{k+1}}-B_{t_{k}}\right) \right| \right. \\
&&\left. +\left| B_{t_{l}}-B_{s}+\frac{n}{T}\left( s-t_{l}\right) \left(
B_{t_{l+1}}-B_{t_{l}}\right) \right| \right] \\
&\leq &4GT^{-\beta +1/2}n^{-1/2+\beta }\sqrt{\log \left( n/T\right) }.
\end{eqnarray*}
If $t-s<\frac{T}{n}$, then there are two cases. Suppose first that $s,t\in
\lbrack t_{k},t_{k+1}]$. In this case, if $n$ is large enough
($n>Te^{2/(1-2\beta )}$) we obtain using (\ref{4.3b})
\begin{eqnarray*}
\left| h_{1}(s,t)\right| &\leq &\frac{|B_{t}-B_{s}|}{(t-s)^{{\beta }}}
+\frac{n}{T} \frac{|B_{t_{k+1}}-B_{t_k}|}{(t-s)^\beta}(t-s)\\
&\leq& G|t-s|^{\frac{1}{2}-{\beta }}\sqrt{\log |t-s|^{-1}}
+GT^{-1/2}\sqrt{\log (n/T)}\ n^{1/2} (t-s)^{1-\beta} \\
&\leq &GT^{-\beta +1/2}n^{-1/2+\beta }\sqrt{\log \left( n/T\right) }.
\end{eqnarray*}
On the other hand, if $s\in \lbrack t_{k-1},t_{k}]$ and $t\in \lbrack
t_{k},t_{k+1}]$ we have, again if $n$ is large enough,
\begin{eqnarray*}
\left| h_{1}(s,t)\right| &\leq &\frac{1}{(t-s)^{{\beta }}}\left|
B_{t_{k}}-B_{t}+\frac{n}{T}\left( t-t_{k}\right) \left(
B_{t_{k+1}}-B_{t_{k}}\right) \right. \\
&&\left.
-\left\{ B_{t_{k}}-B_{s}-\frac{n}{T}\left( t_{k}-s\right) \left(
B_{t_{k}}-B_{t_{k-1}}\right) \right\} \right| \\
&\leq &\frac{1}{(t-s)^{{\beta }}}\left[ |B_{t}-B_{s}|+\frac{n}{T}(t-s)\left(
|B_{t_{k}}-B_{t_{k-1}}|+|B_{t_{k+1}}-B_{t_{k}}|\right) \right] \\
&\leq &\frac{G}{(t-s)^{{\beta }}}\left[ |t-s|^{1/2}\sqrt{\log |t-s|^{-1}}
+2(t-s)\left( \frac{n}{T}\right) ^{1/2}\sqrt{\log \left( n/T\right) }\right]
\\
&\leq &3GT^{-\beta +1/2}n^{-1/2+\beta }\sqrt{\log \left( n/T\right) }.
\end{eqnarray*}
This proves (\ref{4.5}). Now we turn to the estimate of the term
\begin{equation*}
h_{2}(s,t)=\frac{1}{(t-s)^{2{\beta }}}\left|
\int_{s}^{t}(B_{u}^{i}-B_{s}^{i})dB_{u}^{j,\pi
}-\int_{s}^{t}(B_{u}^{i}-B_{s}^{i})dB_{u}^{j}\right| \,
\end{equation*}
for $i\neq j$ (the case $i=j$ is an obvious consequence of (\ref{4.5})). We
claim that there exists a random variable $Z$ such that, almost surely, for
all $s,t\in \lbrack 0,T]$ we have
\begin{equation}
\left| \int_{s}^{t}(B_{u}^{i}-B_{s}^{i})dB_{u}^{j}\right| \leq Z|t-s|\log
|t-s|^{-1}.  \label{4.7}
\end{equation}
In fact, it suffices to show this inequality almost surely for all rational
$s$ and $t$. If we fix $s$, the process $\left\{ M_{t},t\in \lbrack
s,T]\right\} $ given by
\begin{equation*}
M_{t}=\int_{s}^{t}(B_{u}^{i}-B_{s}^{i})dB_{u}^{j}
\end{equation*}
is a continuous martingale and it can be represented as a time-changed
Brownian motion $W$:
\begin{equation*}
M_{t}=W_{\int_{s}^{t}(B_{u}^{i}-B_{s}^{i})^{2}du}.
\end{equation*}
As a consequence, applying (\ref{4.3b}) to $W$, there exists a random
variable $G_{1}$ such that
\begin{equation*}
|M_{t}|=\left| W_{\int_{s}^{t}(B_{u}^{i}-B_{s}^{i})^{2}du}\right| \leq
G_{1}\left( \int_{s}^{t}(B_{u}^{i}-B_{s}^{i})^{2}du\right) ^{1/2}\sqrt{\log
\left( \int_{s}^{t}(B_{u}^{i}-B_{s}^{i})^{2}du\right) ^{-1}},
\end{equation*}
and again (\ref{4.3b}), applied to $B_u^i-B_s^i$, yields
\begin{equation*}
|M_{t}|\leq G_{1}G_{2}\left( \int_{s}^{t}(u-s)\log |u-s|^{-1}du\right)
^{1/2}\sqrt{\log \left( G_{2}^{2}\int_{s}^{t}(u-s)\log
|u-s|^{-1}du\right) ^{-1}},
\end{equation*}
for some random variable $G_{2}$. We have for $|t-s|\leq 1$
\begin{equation*}
\int_{s}^{t}(u-s)\log |u-s|^{-1}du=|t-s|^{2}\left( \frac{1}{4}+\frac{1}{2}
\log |t-s|^{-1}\right) ,
\end{equation*}
and this easily implies the estimate (\ref{4.7}).

Suppose first that $t-s\geq \frac{T}{n}$. Then
\begin{eqnarray*}
h_{2}(s,t) &=&\frac{1}{(t-s)^{2\beta }}\left|
\int_{s}^{t}(B_{u}^{i}-B_{s}^{i})dB_{u}^{j,\pi
}-\int_{s}^{t}(B_{u}^{i}-B_{s}^{i})dB_{u}^{j}\right| \\
&=& \frac{1}{(t-s)^{2\beta }}\left| (B_t^i-B_s^i) (B_t^{j,\pi} -B_s^{j,\pi}
) - \int_{s}^{t}(B_{u}^{j,\pi }-B_{s}^{j,\pi })dB_{u}^i\right. \\
&& \qquad \left. - (B_t^i-B_s^i) (B_t^{j} -B_s^{j} ) +
\int_{s}^{t}(B_{u}^{j }-B_{s}^{j})dB_{u}^i\right| \\
&\leq &\frac{1}{(t-s)^{2\beta }}\left| (B_{t}^{i}-B_{s}^{i})(B_{t}^{j,\pi
}-B_{s}^{j,\pi }-B_{t}^{j}+B_{s}^{j})\right| \\
&&+\frac{1}{(t-s)^{2\beta }}\left| \int_{s}^{t}\left[ B_{u}^{j,\pi
}-B_{u}^{j}\right] dB_{u}^{i}\right| \\
&=&A_{1}+A_{2}.
\end{eqnarray*}
Using (\ref{4.3b}) and (\ref{4.5}) the term $A_{1}$ can be estimated as
follows
\begin{eqnarray}
A_{1} &\leq &G|t-s|^{1/2-\beta }\sqrt{\log |t-s|^{-1}}\Vert B^{j,\pi
}-B^{j}\Vert _{{\beta }}  \notag \\
&\leq &C_{T,\beta }n^{\beta -1/2}\sqrt{\log n}.  \label{4.10}
\end{eqnarray}
For the term $A_{2}$ we proceed as in the proof of the estimate (\ref{4.7}).
We have
\begin{equation*}
\int_{s}^{t}\left[ B_{u}^{j,\pi }-B_{u}^{j}\right] dB_{u}^{i}=W_{\int_{s}^{t}\left( B_{u}^{j,\pi }-B_{u}^{j}\right) ^{2}du}
\end{equation*}
where $W$ is a Brownian motion. As a consequence, using that
\begin{equation*}
\Vert B-B^{\pi }\Vert _{{\infty }}\leq C_{T,\beta }n^{-1/2}\sqrt{\log \left( n/T\right) }
\end{equation*}
(this estimate is proved in the same way as (\ref{4.5})), we get
\begin{eqnarray}
A_{2} &\leq &\frac{G}{(t-s)^{2\beta }}\left( \int_{s}^{t}(B_{u}^{j,\pi }-B_{u}^{j})^{2}du\right) ^{1/2}\sqrt{\log \left( \int_{s}^{t}(B_{u}^{j,\pi }-B_{u}^{j})^{2}du\right) ^{-1}} \notag \\
&\leq &C_{T,\beta }(t-s)^{1/2-2\beta }n^{-1/2}\sqrt{\log n}\sqrt{\log \left[ (t-s)^{-1}n\left( \log n\right) ^{-1}\right] } \notag \\
&\leq &C_{T,\beta }n^{\beta -1/2}\sqrt{\log n}. \label{4.11}
\end{eqnarray}
Suppose now that $t-s<\frac{T}{n}$. We make the decomposition
\begin{eqnarray*}
h_{2}(s,t) &\leq &\frac{1}{(t-s)^{2{\beta }}}\left( \left| \int_{s}^{t}(B_{u}^{i}-B_{s}^{i})dB_{u}^{j,\pi }\right| +\left| \int_{s}^{t}(B_{u}^{i}-B_{s}^{i})dB_{u}^{j}\right| \right) \\
&=&B_{1}+B_{2}.
\end{eqnarray*}
Then (\ref{4.7}) yields
\begin{equation}
B_{2}\leq Z|t-s|^{1-2\beta }\log |t-s|^{-1}\leq C_{T,\beta }n^{2\beta -1}\log n\,. \label{4.12}
\end{equation}
In order to handle the term $B_{1}$, assume first that $s,t\in \lbrack t_{k},t_{k+1}]$. Then
\begin{equation*}
\int_{s}^{t}(B_{u}^{i}-B_{s}^{i})dB_{u}^{j,\pi }=\frac{n}{T}\left( B_{t_{k+1}}^{j}-B_{t_{k}}^{j}\right) \int_{s}^{t}(B_{u}^{i}-B_{s}^{i})du
\end{equation*}
and we obtain
\begin{equation*}
B_{1}\leq C_{T,\beta }(t-s)^{1-2\beta }\log n\leq C_{T,\beta }n^{2\beta -1}\log n.
\end{equation*}
Finally, if $s\in \lbrack t_{k-1},t_{k}]$ and $t\in \lbrack t_{k},t_{k+1}]$ we have
\begin{eqnarray*}
B_{1} &\leq &(t-s)^{-2\beta }\frac{n}{T}\left| \left( B_{t_{k}}^{j}-B_{t_{k-1}}^{j}\right) \int_{s}^{t_{k}}(B_{u}^{i}-B_{s}^{i})du+\left( B_{t_{k+1}}^{j}-B_{t_{k}}^{j}\right) \int_{t_{k}}^{t}(B_{u}^{i}-B_{s}^{i})du\right| \\
&\leq &C_{T,\beta }n^{2\beta -1}\log n.
\end{eqnarray*}
The proof is now complete.
\end{proof}

As a consequence, we can establish the following result.

\begin{theorem}
Let $f:\mathbb{R}^{m}\rightarrow \mathbb{R}^{md}$ be continuously differentiable with bounded derivatives up to fourth order, and let $X$ satisfy
\begin{equation*}
X_{t}=X_{0}+\int_{0}^{t}f(X_{s})dB_{s}\,.
\end{equation*}
If $X_{t}^{\pi }$ satisfies the ordinary differential equation
\begin{equation*}
X_{t}^{\pi }=X_{0}+\int_{0}^{t}f(X_{s}^{\pi })dB_{s}^{\pi }\,,
\end{equation*}
then for any ${\beta }\in (1/3,1/2)$, there is a random constant $C_{T,\beta }\in (0,\infty )$ such that
\begin{equation}
\Vert X-X^{\pi }\Vert _{{\beta }}\leq C_{T,\beta }n^{\beta -1/2}\sqrt{\log n}. \label{4.4}
\end{equation}
\end{theorem}

\begin{proof}
The result is a straightforward consequence of Lemma \ref{lema1} and Theorem \ref{th5}.
\end{proof}

\setcounter{equation}{0}
\section{Appendix}

\begin{proof}[Proof of Lemma \ref{l.2.2}]
The fractional integration by parts formula (\ref{1.8}) yields
\begin{eqnarray*}
&&\int_{a}^{b}d\xi \ \int_{\xi }^{b}\varphi (\xi ,\eta )\frac{\partial ^{2}\psi }{\partial \xi \partial \eta }(\xi ,\eta )d\eta \\
&=&(-1)^{1-\alpha }\int_{a}^{b}d\xi \int_{\xi }^{b}D_{b-}^{\alpha ,\eta }\varphi _{b-}(\xi ,\cdot )(\eta )D_{\xi +}^{1-\alpha ,\eta }\frac{\partial \psi }{\partial \xi }(\xi ,\cdot )(\eta )d\eta .
\end{eqnarray*}
The operators $D_{\xi +}^{1-\alpha ,\eta }$ and $\frac{\partial }{\partial \xi }$ commute, as follows from the computation below:
\begin{eqnarray*}
&&D_{\xi +}^{1-\alpha ,\eta }\frac{\partial \psi }{\partial \xi }(\xi ,\eta ) \\
&=&\frac{1}{\Gamma \left( \alpha \right) }\left( (\eta -\xi )^{\alpha -1}\frac{\partial \psi }{\partial \xi }(\xi ,\eta )+(1-\alpha )\int_{\xi }^{\eta }\frac{\frac{\partial \psi }{\partial \xi }(\xi ,\eta )-\frac{\partial \psi }{\partial \xi }(\xi ,\eta ^{\prime })}{\left( \eta -\eta ^{\prime }\right) ^{2-\alpha }}d\eta ^{\prime }\right) \\
&=&\frac{1}{\Gamma (\alpha )}\Bigg\{\frac{\partial }{\partial \xi }\left[ (\eta -\xi )^{\alpha -1}\psi (\xi ,\eta )\right] +(\alpha -1)(\eta -\xi )^{\alpha -2}\psi (\xi ,\eta ) \\
&&+(1-\alpha )\frac{\partial }{\partial \xi }\int_{\xi }^{\eta }\frac{\psi (\xi ,\eta )-\psi (\xi ,\eta ^{\prime })}{\left( \eta -\eta ^{\prime }\right) ^{2-\alpha }}d\eta ^{\prime }+(1-\alpha )\frac{\psi (\xi ,\eta )}{\left( \eta -\xi \right) ^{2-\alpha }}\Bigg\} \\
&=&\frac{1}{\Gamma (\alpha )}\frac{\partial }{\partial \xi }\Bigg\{(\eta -\xi )^{\alpha -1}\psi (\xi ,\eta )+(1-\alpha )\int_{\xi }^{\eta }\frac{\psi (\xi ,\eta )-\psi (\xi ,\eta ^{\prime })}{\left( \eta -\eta ^{\prime }\right) ^{2-\alpha }}d\eta ^{\prime }\Bigg\} \\
&=&\frac{\partial }{\partial \xi }D_{\xi +}^{1-\alpha ,\eta }\psi (\xi ,\eta ).
\end{eqnarray*}
Thus
\begin{eqnarray*}
&&\int_{a}^{b}d\xi \ \int_{\xi }^{b}\varphi (\xi ,\eta )\frac{\partial ^{2}\psi }{\partial \xi \partial \eta }(\xi ,\eta )d\eta \\
&=& (-1)^{1-\alpha} \int_{a}^{b}d\eta \int_{a}^{\eta }D_{b-}^{\alpha ,\eta }\varphi _{b-}(\xi ,\cdot )(\eta )\frac{\partial }{\partial \xi }D_{\xi +}^{1-\alpha ,\eta }\psi (\xi ,\eta )d\xi .
\end{eqnarray*}
Hence, applying again (\ref{1.8}), we obtain (\ref{1.9}) with ${\Gamma }^{\alpha }\psi (\xi ,\eta )$ given by (\ref{1.10}).
\end{proof}

\bigskip

\noindent\textbf{Remarks:}

\medskip

\noindent \textbf{1. 
}Formula (\ref{1.9}) holds if $\varphi $ is of class $C^{2}$ in $a<\xi <\eta <b$ and% \begin{equation*} \int_{a}^{b}\int_{a}^{\eta }\left| D_{a+}^{\alpha ,\xi }D_{b-}^{\alpha ,\eta }\varphi _{b-}(\xi ,\eta )\right| \ d\xi d\eta <\infty . \end{equation*} \medskip \noindent\textbf{2. \ }Under the conditions of the above lemma, we also have ${\Gamma }^{\alpha }\psi (\xi ,\eta )=D_{\xi +}^{1-\alpha ,\eta }D_{\eta -}^{1-\alpha ,\xi }\psi (\xi ,\eta )$. \medskip \noindent \textbf{3. \ }The operator ${\Gamma }^{\alpha }$ can also be expressed as follows.% \begin{eqnarray*} &&{\Gamma }^{\alpha }\psi (\xi ,\eta ) \\ &=&\frac{(-1)^{1-\alpha }}{\Gamma \left( \alpha \right) ^{2}}\left\{ (\eta -\xi )^{\alpha -1}\left( (\eta -\xi )^{\alpha -1}\psi (\xi ,\eta )+(1-\alpha )\int_{\xi }^{\eta }\frac{\psi (\xi ,\eta )-\psi (\xi ,\eta ^{\prime })}{% \left( \eta -\eta ^{\prime }\right) ^{2-\alpha }}d\eta ^{\prime }\right) \right. \\ &&+(1-\alpha )\int_{\xi }^{\eta }\left( \xi ^{\prime }-\xi \right) ^{\alpha -2}\Bigg[(\eta -\xi )^{\alpha -1}\psi (\xi ,\eta )-(\eta -\xi ^{\prime })^{\alpha -1}\psi (\xi ^{\prime },\eta ) \\ &&\left. +(1-\alpha )\left( \int_{\xi }^{\eta }\frac{\psi (\xi ,\eta )-\psi (\xi ,\eta ^{\prime })}{\left( \eta -\eta ^{\prime }\right) ^{2-\alpha }}% d\eta ^{\prime }-\int_{\xi ^{\prime }}^{\eta }\frac{\psi (\xi ^{\prime },\eta )-\psi (\xi ^{\prime },\eta ^{\prime })}{\left( \eta -\eta ^{\prime }\right) ^{2-\alpha }}d\eta ^{\prime }\right) \Bigg]d\xi ^{\prime }\right\} \\ &=&\frac{(-1)^{1-\alpha }}{\Gamma \left( \alpha \right) ^{2}}\left\{ (\eta -\xi )^{2\alpha -2}\psi (\xi ,\eta )+(1-\alpha )(\eta -\xi )^{\alpha -1}\int_{\xi }^{\eta }\frac{\psi (\xi ,\eta )-\psi (\xi ,\eta ^{\prime })}{% \left( \eta -\eta ^{\prime }\right) ^{2-\alpha }}d\eta ^{\prime }\right. 
\\
&&+(1-\alpha )\int_{\xi }^{\eta }\frac{(\eta -\xi )^{\alpha -1}\psi (\xi ,\eta )-(\eta -\xi ^{\prime })^{\alpha -1}\psi (\xi ^{\prime },\eta )}{\left( \xi ^{\prime }-\xi \right) ^{2-\alpha }}d\xi ^{\prime } \\
&&+(1-\alpha )^{2}\int_{\xi }^{\eta }\int_{\xi ^{\prime }}^{\eta }\frac{\psi (\xi ,\eta )-\psi (\xi ,\eta ^{\prime })-\psi (\xi ^{\prime },\eta )+\psi (\xi ^{\prime },\eta ^{\prime })}{\left( \xi ^{\prime }-\xi \right) ^{2-\alpha }\left( \eta -\eta ^{\prime }\right) ^{2-\alpha }}d\eta ^{\prime }d\xi ^{\prime } \\
&&\left. +(1-\alpha )^{2}\int_{\xi }^{\eta }\int_{\xi }^{\xi ^{\prime }}\frac{\psi (\xi ,\eta )-\psi (\xi ,\eta ^{\prime })}{\left( \xi ^{\prime }-\xi \right) ^{2-\alpha }\left( \eta -\eta ^{\prime }\right) ^{2-\alpha }}d\eta ^{\prime }d\xi ^{\prime }\right\} \,.
\end{eqnarray*}
Exchanging the order of integration, we see that the last double integral equals
\begin{equation*}
\int_{\xi }^{\eta }\frac{\psi (\xi ,\eta )-\psi (\xi ,\eta ^{\prime })}{\left( \eta -\eta ^{\prime }\right) ^{2-\alpha }}\int_{\eta ^{\prime }}^{\eta }\frac{1}{\left( \xi ^{\prime }-\xi \right) ^{2-\alpha }}d\xi ^{\prime }d\eta ^{\prime }.
\end{equation*}
This leads to the following expression for $\Gamma ^{\alpha }$:
\begin{eqnarray}
\lefteqn{\Gamma ^{\alpha }\psi (\xi ,\eta ) =\frac{(-1)^{1-\alpha }}{\Gamma \left( \alpha \right) ^{2}}\left\{ (\eta -\xi )^{2\alpha -2}\psi (\xi ,\eta )\right.} \notag \\
&&\quad +(1-\alpha )\int_{\xi }^{\eta }\frac{(\eta -\xi )^{\alpha -1}\psi (\xi ,\eta )-(\eta -\xi ^{\prime })^{\alpha -1}\psi (\xi ^{\prime },\eta )}{\left( \xi ^{\prime }-\xi \right) ^{2-\alpha }}d\xi ^{\prime } \notag \\
&&\quad +(1-\alpha )^{2}\int_{\xi }^{\eta }\int_{\xi ^{\prime }}^{\eta }\frac{\psi (\xi ,\eta )-\psi (\xi ,\eta ^{\prime })-\psi (\xi ^{\prime },\eta )+\psi (\xi ^{\prime },\eta ^{\prime })}{\left( \xi ^{\prime }-\xi \right) ^{2-\alpha }\left( \eta -\eta ^{\prime }\right) ^{2-\alpha }}d\eta ^{\prime }d\xi ^{\prime } \notag \\
&&\quad +\left. 
(1-\alpha )\int_{\xi }^{\eta }\frac{\psi (\xi ,\eta )-\psi (\xi ,\eta ^{\prime })}{(\eta -\eta ^{\prime })^{2-\alpha }}(\eta ^{\prime }-\xi )^{\alpha -1}d\eta ^{\prime }\right\} \,. \label{1.11}
\end{eqnarray}

Consider the kernel $K_{s,t}(\xi ,\eta )$ defined in (\ref{2.7}), that is,
\begin{equation*}
K_{s,t}(\xi ,\eta )=D_{s+}^{1,{\alpha -{\varepsilon }}}D_{t-}^{2,{\alpha -{\varepsilon }}}G_{t-}(s,\xi ,\eta ),
\end{equation*}
where
\begin{eqnarray}
G(s,\xi ,\eta ) &=&C_{\alpha }\left( \xi -s\right) ^{\alpha -1}\left( \eta -\xi \right) ^{\alpha -1}\int_{0}^{1}q^{2\alpha -2}(1-q)^{-\alpha }(1+(1-q)\frac{\xi -s}{\eta -\xi })^{-1}dq \notag \\
&=&\left( \xi -s\right) ^{\alpha -1}\left( \eta -\xi \right) ^{\alpha -1}\phi \left( \frac{\xi -s}{\eta -\xi }\right) , \label{e.7.1}
\end{eqnarray}
\begin{equation}
\phi (z)=C_{\alpha }\int_{0}^{1}q^{2\alpha -2}(1-q)^{-\alpha }(1+(1-q)z)^{-1}dq=C_{\alpha }\int_{0}^{1}(1-q)^{2\alpha -2}q^{-\alpha }(1+qz)^{-1}dq, \label{e.7.2}
\end{equation}
and $C_{\alpha }$ is the coefficient in (\ref{2.6}).

\begin{lemma}
\label{l.7.1} Let $1/2<\alpha<1$. The function $\phi (z)$ defined in (\ref{e.7.2}) satisfies $\phi (0)<\infty $, and $\phi $ decreases to zero as $z$ tends to infinity. If $\beta <1-\alpha $, then
\begin{equation}
\phi (z)\leq cz^{-\beta }. \label{e.7.3}
\end{equation}
Moreover, if $\beta <2-\alpha $,
\begin{equation}
\left| \phi ^{\prime }(z)\right| \leq cz^{-\beta }. \label{e.7.4}
\end{equation}
\end{lemma}

\begin{lemma}
\label{l.7.2} The kernel $K_{s,t}(\xi ,\eta )$ satisfies
\begin{equation}
\sup_{0\le s<t\le T }\int_{s< \xi < \eta < t}\left| K_{s,t}(\xi ,\eta )\right| d\xi d\eta <\infty . \label{e.7.5}
\end{equation}
\end{lemma}

\begin{proof}
To simplify the notation we omit the dependence on the variable $s$ in $G(s,\xi ,\eta )$. Also, $c$ will denote a generic constant depending on $\alpha $ and $\varepsilon $.
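Before estimating the kernel, we note that the qualitative claims of Lemma \ref{l.7.1} are easy to sanity-check numerically. The sketch below sets the normalizing constant $C_{\alpha }:=1$ (which affects neither finiteness at $0$ nor monotonicity) and takes $\alpha =3/4$ as an arbitrary value in $(1/2,1)$.

```python
import math

def phi(z, alpha=0.75, steps=100_000):
    """Midpoint rule for phi(z) = int_0^1 q^(2a-2) (1-q)^(-a) / (1 + (1-q) z) dq,
    with C_alpha := 1.  For 1/2 < alpha < 1 both endpoint singularities
    (q^(2*alpha-2) at q = 0 and (1-q)^(-alpha) at q = 1) are integrable.
    """
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        q = (k + 0.5) * h
        total += q ** (2 * alpha - 2) * (1.0 - q) ** (-alpha) / (1.0 + (1.0 - q) * z)
    return total * h

vals = [phi(z) for z in (0.0, 1.0, 10.0, 100.0)]
assert math.isfinite(vals[0])                  # phi(0) < infinity
assert vals[0] > vals[1] > vals[2] > vals[3]   # phi decreases in z
```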
We have \begin{eqnarray*} &&D_{s+}^{1,{\alpha -{\varepsilon}} }D_{t-}^{2,{\alpha -{\varepsilon}} }G_{t-}(\xi ,\eta ) \\ &=&cD_{s+}^{1,{\alpha -{\varepsilon}} }\left( \frac{G(\xi ,\eta )-G(\xi ,t)}{% (t-\eta )^{{\alpha -{\varepsilon}} }}+({\alpha -{\varepsilon}}) \int_{\eta }^{t}\frac{G(\xi ,\eta )-G(\xi ,\eta ^{\prime })}{(\eta ^{\prime }-\eta )^{{% \alpha -{\varepsilon}} +1}}d\eta ^{\prime }\right) \\ &=&c\left( \frac{G(\xi ,\eta )-G(\xi ,t)}{(t-\eta )^{{\alpha -{\varepsilon}} }(\xi -s)^{{\alpha -{\varepsilon}} }}+\frac{1}{(\xi -s)^{{\alpha -{% \varepsilon}} }}\int_{\eta }^{t}\frac{G(\xi ,\eta )-G(\xi ,\eta ^{\prime })}{% (\eta ^{\prime }-\eta )^{{\alpha -{\varepsilon}} +1}}d\eta ^{\prime }\right. \\ &&+({\alpha -{\varepsilon}}) \int_s^{\xi }\frac{G(\xi ,\eta )-G(\xi ,t)-G(\xi ^{\prime },\eta )+G(\xi ^{\prime },t)}{(t-\eta )^{{\alpha -{% \varepsilon}} }(\xi -\xi ^{\prime })^{{\alpha -{\varepsilon}} +1}}d\xi ^{\prime } \\ &&\left. +({\alpha -{\varepsilon}} ) ^{2}\int_s^{\xi }\int_{\eta }^{t}\frac{% G(\xi ,\eta )-G(\xi ,\eta ^{\prime })-G(\xi ^{\prime },\eta )+G(\xi ^{\prime },\eta ^{\prime })}{(\eta ^{\prime }-\eta )^{{\alpha -{\varepsilon}} +1}(\xi -\xi ^{\prime })^{{\alpha -{\varepsilon}} +1}}d\eta ^{\prime }d\xi ^{\prime }\right) . 
\end{eqnarray*}
Set
\begin{eqnarray*}
A_{1} &=&\frac{G(\xi ,\eta )-G(\xi ,t)}{(t-\eta )^{{\alpha -{\varepsilon}} }(\xi -s)^{{\alpha -{\varepsilon}} }} \\
A_{2} &=&\frac{1}{(\xi -s)^{{\alpha -{\varepsilon}} }}\int_{\eta }^{t}\frac{G(\xi ,\eta )-G(\xi ,\eta ^{\prime })}{(\eta ^{\prime }-\eta )^{{\alpha -{\varepsilon}} +1}}d\eta ^{\prime } \\
A_{3} &=&\int_s^{\xi }\frac{G(\xi ,\eta )-G(\xi ,t)-G(\xi ^{\prime },\eta )+G(\xi ^{\prime },t)}{(t-\eta )^{{\alpha -{\varepsilon}} }(\xi -\xi ^{\prime })^{{\alpha -{\varepsilon}} +1}}d\xi ^{\prime } \\
A_{4} &=&\int_s^{\xi }\int_{\eta }^{t}\frac{G(\xi ,\eta )-G(\xi ,\eta ^{\prime })-G(\xi ^{\prime },\eta )+G(\xi ^{\prime },\eta ^{\prime })}{(\eta ^{\prime }-\eta )^{{\alpha -{\varepsilon}} +1}(\xi -\xi ^{\prime })^{{\alpha -{\varepsilon}} +1}}d\eta ^{\prime }d\xi ^{\prime }.
\end{eqnarray*}
It suffices to show that
\begin{equation}
\sup_{0\le s<t\le T}\int_{s< \xi < \eta < t}|A_{i}| d\xi d\eta <\infty \label{e.7.6}
\end{equation}
for $i=1,2,3,4.$

\textbf{Step 1} Suppose $i=1$. Using the fact that the function $\phi $ is bounded, we obtain
\begin{equation*}
G(\xi ,\eta )\leq c\left( \xi -s\right) ^{\alpha -1}\left( \eta -\xi \right) ^{\alpha -1}.
\end{equation*}
Hence,
\begin{eqnarray*}
\left| A_{1}\right| &\leq &\frac{|G(\xi ,\eta )|+ |G(\xi ,t)|}{(t-\eta )^{{\alpha -{\varepsilon}} }(\xi -s)^{{\alpha -{\varepsilon}} }} \\
&\leq & c\left( \xi -s\right) ^{\varepsilon -1}\left( \eta -\xi \right) ^{\alpha -1}(t-\eta )^{-\alpha +\varepsilon } \\
&&\quad + c(\xi-s)^{{\varepsilon}-1}(t-\eta)^{{\varepsilon}-1}\,,
\end{eqnarray*}
and (\ref{e.7.6}) holds for $i=1$.

\textbf{Step 2} Suppose $i=2$.
We have
\begin{equation*}
G(\xi ,\eta )-G(\xi ,\eta ^{\prime })=\int_{\eta }^{\eta ^{\prime }}\frac{\partial G}{\partial y}(\xi ,y)dy,
\end{equation*}
and
\begin{eqnarray*}
\frac{\partial G}{\partial \eta }(\xi ,\eta ) &=&(\alpha -1) (\xi -s)^{\alpha -1}(\eta -\xi )^{\alpha -2}\phi \left( \frac{\xi -s}{\eta -\xi }\right) \\
&&- (\xi -s)^{\alpha -1}(\eta -\xi )^{\alpha -1}\phi ^{\prime }\left( \frac{ \xi -s}{\eta -\xi }\right) \frac{\xi -s}{\left( \eta -\xi \right) ^{2}} \\
&=&(\xi -s)^{\alpha -1}(\eta -\xi )^{\alpha -2}\chi \left( \frac{\xi -s}{\eta -\xi }\right) ,
\end{eqnarray*}
where
\begin{equation*}
\chi (z)=(\alpha -1) \phi (z)- z\phi ^{\prime }(z).
\end{equation*}
Notice that, by Lemma \ref{l.7.1}, the function $\chi (z)$ is uniformly bounded. Hence,
\begin{eqnarray*}
|A_{2}| &\leq &c(\xi -s)^{\varepsilon -1}\int_{\eta }^{t}\int_{\eta }^{\eta ^{\prime }}(y-\xi )^{\alpha -2}(\eta ^{\prime }-\eta )^{-\alpha +\varepsilon -1}dyd\eta ^{\prime } \\
&\leq &c(\xi -s)^{\varepsilon -1}(\eta -\xi )^{\frac{\varepsilon }{4}-1}\int_{\eta }^{t} \int_{\eta }^{\eta ^{\prime }}(\eta ^{\prime }-y)^{\frac{\varepsilon }{2}-1}(y-\eta )^{\frac{\varepsilon }{4}-1}dyd\eta ^{\prime } \\
&\leq &c(\xi -s)^{\varepsilon -1}(\eta -\xi )^{\frac{\varepsilon }{4}-1}\int_{\eta }^{t} (\eta ^{\prime }-\eta )^{\frac{3\varepsilon }{4}-1}d\eta ^{\prime },
\end{eqnarray*}
which implies that (\ref{e.7.6}) holds for $i=2$.

\textbf{Step 3} Suppose $i=3$.
We have% \begin{equation*} G(\xi ,\eta )-G(\xi ^{\prime },\eta )=\int_{\xi ^{\prime }}^{\xi }\frac{% \partial G}{\partial x}(x,\eta )dx, \end{equation*}% and \begin{eqnarray*} \frac{\partial G}{\partial \xi }(\xi ,\eta ) &=&(\alpha -1) (\xi -s)^{\alpha -2}(\eta -\xi )^{\alpha -1}\phi \left( \frac{\xi -s}{\eta -\xi }\right) \\ &&-(\alpha -1) (\xi -s)^{\alpha -1}(\eta -\xi )^{\alpha -2}\phi \left( \frac{% \xi -s}{\eta -\xi }\right) \\ &&+ (\xi -s)^{\alpha -1}(\eta -\xi )^{\alpha -1}\phi ^{\prime }\left( \frac{% \xi -s}{\eta -\xi }\right) \frac{\eta -s}{\left( \eta -\xi \right) ^{2}} \\ &=&(\xi -s)^{\alpha -2}(\eta -\xi )^{\alpha -1}(\alpha -1) \phi \left( \frac{% \xi -s}{\eta -\xi }\right) \\ &&+(\xi -s)^{\alpha -1}(\eta -\xi )^{\alpha -2}\gamma \left( \frac{\xi -s}{% \eta -\xi }\right) , \end{eqnarray*}% where% \begin{equation*} \gamma (z)=(1-\alpha ) \phi (z)+\left( 1+z\right) \phi ^{\prime }(z). \end{equation*}% By Lemma \ref{l.7.1} the function $\gamma $ is uniformly bounded. Hence,% \begin{eqnarray*} |A_{3}| &\leq &c(t-\eta )^{-\alpha +\varepsilon }\int_s^{\xi }\int_{\xi ^{\prime }}^{\xi }(\xi -\xi ^{\prime })^{-\alpha +\varepsilon -1} \\ &&\times \left[ (x-s)^{\alpha -2}(\eta -x)^{\alpha -1}+(x-s)^{\alpha -1}(\eta -x)^{\alpha -2}\right] dxd\xi ^{\prime } \\ &\leq &c(t-\eta )^{-\alpha +\varepsilon }(\eta -\xi )^{\alpha -1}\int_s^{\xi }\int_{\xi ^{\prime }}^{\xi }(\xi -x)^{\frac{\varepsilon }{2}-1}(x-\xi ^{\prime })^{\frac{\varepsilon }{4}-1}(\xi ^{\prime }-s)^{\frac{\varepsilon }{4}-1}dxd\xi ^{\prime } \\ &&+c(t-\eta )^{-\alpha +\varepsilon }(\eta -\xi )^{\frac{\varepsilon }{4}% -1}\int_s^{\xi }\int_{\xi ^{\prime }}^{\xi }(\xi -x)^{\frac{\varepsilon }{4}% -1}(x-\xi ^{\prime })^{\frac{\varepsilon }{2}-1}(\xi ^{\prime }-s)^{\alpha -1}dxd\xi ^{\prime } \\ &= &c(t-\eta )^{-\alpha +\varepsilon }(\eta -\xi )^{\alpha -1}(\xi-s)^{{% \varepsilon}-1}+c (t-\eta )^{-\alpha +\varepsilon }(\eta -\xi )^{\frac{% \varepsilon }{4} -1} (\xi-s)^{\frac34 {\varepsilon}+\alpha-1}\,. 
\end{eqnarray*}
This implies that (\ref{e.7.6}) holds for $i=3$.

\textbf{Step 4} Suppose $i=4$. We are going to use the following decomposition:
\begin{eqnarray*}
&&G(\xi ,\eta )-G(\xi ,\eta ^{\prime })-G(\xi ^{\prime },\eta )+G(\xi ^{\prime },\eta ^{\prime }) \\
&=&-\int_{\eta }^{\eta ^{\prime }}\int_{\xi ^{\prime }}^{\xi }\frac{\partial ^{2}G}{\partial x\partial y}(x,y)dxdy.
\end{eqnarray*}
We need to compute the second derivative:
\begin{eqnarray*}
\frac{\partial ^{2}G}{\partial \xi \partial \eta } &=&(\xi -s)^{\alpha -2}(\eta -\xi )^{\alpha -2}\left( (\alpha -1)^{2}-(\alpha -1)(\alpha -2)\frac{\xi -s}{\eta -\xi }\right) \phi \left( \frac{\xi -s}{\eta -\xi }\right) \\
&&-(\xi -s)^{\alpha -1}(\eta -\xi )^{\alpha -1}\left[ \phi ^{\prime \prime }\left( \frac{\xi -s}{\eta -\xi }\right) \frac{(\eta -s)(\xi -s)}{\left( \eta -\xi \right) ^{4}}\right. \\
&&\left. +\phi ^{\prime }\left( \frac{\xi -s}{\eta -\xi }\right) \frac{(\eta -\xi )+2(\xi -s)}{\left( \eta -\xi \right) ^{3}}\right] .
\end{eqnarray*}
Hence, we can write
\begin{equation*}
\frac{\partial ^{2}G}{\partial \xi \partial \eta }=(\xi -s)^{\alpha -2}(\eta -\xi )^{\alpha -2}\psi \left( \frac{\xi -s}{\eta -\xi }\right) ,
\end{equation*}
where
\begin{eqnarray*}
\psi (z) &=&\left( (\alpha -1)^{2}-(\alpha -1)(\alpha -2)z\right) \phi \left( z\right) \\
&&-\phi ^{\prime \prime }\left( z\right) z^{2}(1+z)-\phi ^{\prime }(z)z(1+2z).
\end{eqnarray*}
We are going to use the decomposition
\begin{equation*}
\psi (z)=\psi _{1}(z)+\psi _{2}(z),
\end{equation*}
where
\begin{eqnarray*}
\psi _{1}(z) &=&-(\alpha -1)(\alpha -2)z\phi \left( z\right) \\
\psi _{2}(z) &=&(\alpha -1)^{2}\phi \left( z\right) -\phi ^{\prime \prime }\left( z\right) z^{2}(1+z)-\phi ^{\prime }(z)z(1+2z).
\end{eqnarray*}
This leads to
\begin{equation*}
\int_{s\leq \xi \leq \eta \leq t}A_{4}d\xi d\eta =B_{1}+B_{2},
\end{equation*}
where
\begin{eqnarray*}
B_{i} &=&-\int_{D}(\eta ^{\prime }-\eta )^{-\alpha +\varepsilon -1}(\xi -\xi ^{\prime })^{-\alpha +\varepsilon -1} \\
&&\times (x-s)^{\alpha -2}(y-x)^{\alpha -2}\psi _{i}(\frac{x-s}{y-x})d\xi ^{\prime }dxd\xi d\eta dyd\eta ^{\prime },
\end{eqnarray*}
$i=1,2$, and
\begin{equation*}
D=\{(\xi ^{\prime },x,\xi ,\eta ,y,\eta ^{\prime }):s<\xi ^{\prime }<x<\xi <\eta <y<\eta ^{\prime }<t\}.
\end{equation*}

\textbf{Step 5} Estimation of $B_{1}$. Denote
\begin{equation*}
D_{1}=\{(\xi ^{\prime },x,\xi ,\eta ,\eta ^{\prime }):s<\xi ^{\prime }<x<\xi <\eta <\eta ^{\prime }<t\}.
\end{equation*}
Using (\ref{e.7.3}) with $\beta =1-\alpha -\delta $, where $\delta <\varepsilon/3 $, we obtain
\begin{eqnarray*}
|B_{1}| &\leq &c\int_{D}(\eta ^{\prime }-\eta )^{-\alpha +\varepsilon -1}(\xi -\xi ^{\prime })^{-\alpha +\varepsilon -1}(x-s)^{\alpha -1}(y-x)^{\alpha -3} \\
&&\times \phi \left(\frac{x-s}{y-x} \right) d\xi ^{\prime }dxd\xi d\eta dyd\eta ^{\prime } \\
&\leq &c\int_{D}(\eta ^{\prime }-\eta )^{-\alpha +\varepsilon -1}(\xi -\xi ^{\prime })^{-\alpha +\varepsilon -1}(x-s)^{2\alpha -2+\delta }(y-x)^{-2-\delta }d\xi ^{\prime }dxd\xi d\eta dyd\eta ^{\prime } \\
&=&c\int_{D_{1}}(\eta ^{\prime }-\eta )^{-\alpha +\varepsilon -1}(\xi -\xi ^{\prime })^{-\alpha +\varepsilon -1}(x-s)^{2\alpha -2+\delta } \\
&&\times \left[ (\eta -x)^{-1-\delta }-(\eta ^{\prime }-x)^{-1-\delta }\right] d\xi ^{\prime }dxd\xi d\eta d\eta ^{\prime } \\
&\leq & c\int_{D_1}(\eta ^{\prime }-\eta )^{-\alpha +\varepsilon -1 }(\xi -\xi ^{\prime })^{-\alpha +\varepsilon -1}(x-s)^{2\alpha -2+\delta } (\eta -x)^{-1-\delta } (\eta^{\prime}-\eta)^\alpha d\xi ^{\prime }dxd\xi d\eta d\eta ^{\prime } \\
&\leq &c\int_{D_1}(\eta ^{\prime }-\eta )^{ \varepsilon -1 } (\xi -x)^{-1+3\delta+\alpha } (x -\xi ^{\prime })^{-2\alpha+{\varepsilon}-3\delta } \\
&&\quad 
(x-\xi^{\prime})^{2\alpha-1} (\xi^{\prime}-s)^{-1+\delta } (\eta-\xi)^{-1+\delta} (\xi -x)^{-2\delta -\alpha }d\xi ^{\prime }dxd\xi d\eta d\eta ^{\prime } \\ &\leq &c\int_{D_1}(\eta ^{\prime }-\eta )^{\varepsilon -1 } (\xi -x)^{-1+\delta} (x -\xi ^{\prime })^{{\varepsilon}-3\delta-1 } (\xi^{\prime}-s)^{-1+\delta } (\eta-\xi)^{-1+\delta} d\xi ^{\prime }dxd\xi d\eta d\eta ^{\prime } \\ &<&\infty . \end{eqnarray*} \textbf{Step 6} Estimation of $B_{2}$. Let us compute the function $\psi _{2}(z)$:% \begin{eqnarray*} \psi _{2}(z) &=&\int_{0}^{1}(1-q)^{2\alpha -2}q^{-\alpha }\left[ (\alpha -1)^{2}(1+qz)^{-1}\right. \\ &&\left. -2q^{2}(1+qz)^{-3}z^{2}(1+z)+q(1+qz)^{-2}z(1+2z)\right] dq \\ &=&\int_{0}^{1}(1-q)^{2\alpha -2}q^{-\alpha }(1+qz)^{-3}\left[ (\alpha -1)^{2}(1+qz)^{2}\right. \\ &&\qquad +qz(1+(2-q)z)]dq \end{eqnarray*}% This implies that the function $\psi _{2}(z)$ is uniformly bounded. As a consequence, we deduce the following estimates \begin{eqnarray*} |B_{2}| &\leq &c\int_{D}(\eta ^{\prime }-\eta )^{-\alpha +\varepsilon -1}(\xi -\xi ^{\prime })^{-\alpha +\varepsilon -1}(x-s)^{\alpha -2}(y-\xi )^{\alpha -2}d\xi ^{\prime }dxd\xi d\eta dyd\eta ^{\prime } \\ &\leq &c\int_{D}(\eta ^{\prime }-y)^{\frac{\varepsilon }{2}-1}(y-\eta )^{-\alpha +\frac{\varepsilon }{2}}(\xi -\xi ^{\prime })^{-\alpha +\varepsilon -1}(x-s)^{\alpha -2}(y-\xi )^{\alpha -2}d\xi ^{\prime }dxd\xi d\eta dyd\eta ^{\prime } \\ &\leq &c\int_{D}(\eta ^{\prime }-y)^{\frac{\varepsilon }{2}-1}(y-\eta )^{-1+% \frac{\varepsilon }{4}}(\xi -x)^{\frac{\varepsilon }{2}-1}(x-\xi ^{\prime })^{-\alpha +\frac{\varepsilon }{2}} \\ &&\times (x-s)^{\alpha -2}(\eta -\xi )^{-1+\frac{\varepsilon }{4}}d\xi ^{\prime }dxd\xi d\eta dyd\eta ^{\prime } \\ &\leq &c\int_{D}(\eta ^{\prime }-y)^{\frac{\varepsilon }{2}-1}(y-\eta )^{-1+% \frac{\varepsilon }{4}}(\xi -x)^{\frac{\varepsilon }{2}-1}(x-\xi ^{\prime })^{-1+\frac{\varepsilon }{4}} \\ &&\times (\xi ^{\prime }-s)^{-1+\frac{\varepsilon }{4}}(\eta -\xi )^{-1+% 
\frac{\varepsilon }{4}}d\xi ^{\prime }dxd\xi d\eta dyd\eta ^{\prime } \\ &<&\infty . \end{eqnarray*} \end{proof}
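The approximation results above can also be illustrated numerically. The following Monte Carlo sketch (grid sizes, the random seed, and the number of paths are arbitrary experimental choices) measures the uniform distance between a simulated Brownian path and its piecewise linear interpolation $B^{\pi }$ over $n$ intervals, which should shrink at roughly the rate $n^{-1/2}\sqrt{\log n}$ appearing in the estimates of Section 4.

```python
import math
import random

def brownian_path(n_fine, T=1.0, rng=random):
    # standard Brownian motion sampled on a fine grid with n_fine steps
    dt = T / n_fine
    b = [0.0]
    for _ in range(n_fine):
        b.append(b[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return b

def sup_error(b, n_coarse):
    # sup_t |B_t - B^pi_t| sampled on the fine grid, where B^pi is the
    # piecewise linear interpolation over the coarse grid t_k = k T / n_coarse
    n_fine = len(b) - 1
    step = n_fine // n_coarse
    err = 0.0
    for k in range(n_coarse):
        lo, hi = k * step, (k + 1) * step
        for j in range(lo, hi + 1):
            interp = b[lo] + (j - lo) / step * (b[hi] - b[lo])
            err = max(err, abs(b[j] - interp))
    return err

random.seed(0)
n_fine, n_paths = 4096, 20
avg = {n: 0.0 for n in (8, 64, 512)}
for _ in range(n_paths):
    b = brownian_path(n_fine)
    for n in avg:
        avg[n] += sup_error(b, n) / n_paths

# coarser partitions give a larger uniform error, roughly n^{-1/2} sqrt(log n)
assert avg[8] > avg[64] > avg[512] > 0.0
```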
https://arxiv.org/abs/1612.00184
Remarks on Kawamata's effective non-vanishing conjecture for manifolds with trivial first Chern classes
Kawamata proposed a conjecture predicting that every nef and big line bundle on a smooth projective variety with trivial first Chern class has nontrivial global sections. We verify this conjecture for several cases, including (i) all hyperkähler varieties of dimension $\leq 6$; (ii) all known hyperkähler varieties except for O'Grady's 10-dimensional example; (iii) general complete intersection Calabi-Yau varieties in certain Fano manifolds (e.g. toric ones). Moreover, we investigate the effectivity of Todd classes of hyperkähler varieties and Calabi-Yau varieties. We prove that the fourth Todd classes are "fakely effective" for all hyperkähler varieties and general complete intersection Calabi-Yau varieties in products of projective spaces.
\section{Introduction}
The Kodaira vanishing theorem asserts that $H^{i}(X,K_{X}\otimes L)=0$ for $i>0$ and any ample line bundle $L$ on a smooth projective variety\footnote{Throughout this paper, we work over the complex number field ${\mathbb C}$.}\,$X$. It is natural to ask when $H^0(X, K_{X}\otimes L)$ does not vanish. For example, Fujita's freeness conjecture predicts that $|K_X\otimes L^{\otimes (\dim X+1)}|$ is base-point-free (hence has many global sections). Kawamata proposed the following conjecture in a general setting.

\begin{conjecture}[{Effective Non-vanishing Conjecture, \cite[Conjecture 2.1]{kawamata2}}]\label{kawamata conj0}
Let $X$ be a normal projective variety with at most log terminal singularities and $L$ a Cartier divisor on $X$. Assume that $L$ is nef and $L-K_{X}$ is nef and big. Then $H^{0}(X,L)\neq0$.
\end{conjecture}

Recall that a line bundle $L$ on $X$ is said to be {\it nef} if $L\cdot C\geq 0$ for any curve $C\subset X$; moreover, it is said to be {\it big} if $L^{\dim X}>0$. We are interested in this conjecture for manifolds with trivial first Chern classes. In particular, Kawamata's Effective Non-vanishing Conjecture predicts the following conjecture:

\begin{conjecture}\label{kawamata conj}
Let $X$ be a smooth projective variety with $c_1(X)=0$ in $H^2(X, \bR)$ and $L$ a nef and big line bundle on $X$. Then $H^{0}(X,L)\neq0$.
\end{conjecture}

Thanks to Yau's proof of the Calabi conjecture \cite{yau}, we have the so-called Beauville--Bogomolov decomposition (\cite{beauville, bogomolov}) for compact K\"{a}hler manifolds with trivial first Chern classes, which allows us to reduce Conjecture \ref{kawamata conj} to the cases of Calabi--Yau varieties and hyperk\"{a}hler varieties.

\begin{proposition}[{=Proposition \ref{reduce to irreducible}}]
Conjecture \ref{kawamata conj} is true if it holds for all Calabi--Yau varieties and hyperk\"{a}hler varieties.
\end{proposition}

In this paper, a {\it Calabi--Yau variety} is a compact K\"{a}hler manifold\footnote{Since $h^{0,2}(X)=0$, $X$ is automatically a smooth projective variety by the Kodaira embedding theorem and Chow's lemma.}\,$X$ of dimension $n \geq 3$ with trivial canonical bundle such that $h^0(\Omega^p _X) = 0$ for $0 < p < n$, and a {\it hyperk\"ahler variety} is a simply connected smooth projective variety $Y$ such that $H^0(Y, \Omega^2_Y)$ is spanned by a non-degenerate two-form.

For hyperk\"ahler varieties, the only known examples are (up to deformations): Hilbert schemes of points on K3 surfaces, generalized Kummer varieties (due to Beauville's construction \cite{beauville}), and two examples in dimensions $6$ and $10$ introduced by O'Grady \cite{ogrady1, ogrady2}. We can show:

\begin{theorem}[= Corollary \ref{verification for six dim} and Proposition \ref{verification for beauville}]
Conjecture \ref{kawamata conj} is true for the following cases:
\begin{enumerate}
\item hyperk\"{a}hler varieties of dimension $\leq 6$;
\item hyperk\"{a}hler varieties homeomorphic to Hilbert schemes of points on K3 surfaces;
\item hyperk\"{a}hler varieties homeomorphic to generalized Kummer varieties;
\item hyperk\"{a}hler varieties homeomorphic to O'Grady's $6$-dimensional example.
\end{enumerate}
\end{theorem}

For Calabi--Yau varieties, we have many examples provided by smooth complete intersections in Fano manifolds. To state our result, we introduce a subclass of Fano manifolds, which we call \emph{perfect} Fano manifolds, i.e. those on which any nef line bundle has a nontrivial section (Definition \ref{def of perfect Fano}). This subclass contains, for example, toric Fano manifolds, Fano manifolds of dimension $n\leq4$, and their products (see Proposition \ref{prop on perfect Fano}). On the other hand, Kawamata's Effective Non-vanishing Conjecture predicts that every Fano manifold is perfect.
Then we show that Conjecture \ref{kawamata conj} is true for general complete intersection Calabi--Yau varieties in perfect Fano manifolds.

\begin{theorem}[{=Theorem \ref{verification for CICY}}]
Let $Y$ be a perfect Fano manifold and $\{H_{i}\}_{i=1}^{m}$ a sequence of base-point-free ample line bundles on $Y$ such that $\otimes_{i=1}^{m}H_{i}\cong K_{Y}^{-1}$. Let $X\subseteq Y$ be a general\,\footnote{Here ``general'' means that if we label the sections by $\{s_{i}\}_{i=1}^{m}$, we require that $\{s_{1}=s_{2}=\cdots=s_{i}=0\}$ is smooth for $i=1,2,\ldots,m$.} complete intersection of dimension $n\geq3$ defined as the common zero locus of sections of $\{H_{i}\}_{i=1}^{m}$. Then Conjecture \ref{kawamata conj} is true for $X$.
\end{theorem}

By the Hirzebruch--Riemann--Roch formula, the existence of global sections is related to the effectivity of Todd classes. We define the following notion of fake effectivity for cycles.

\begin{definition}Let $X$ be a smooth projective variety and $\gamma$ an algebraic $k$-cycle. We say that $\gamma$ is {\it fakely effective} if the intersection number $\gamma\cdot L^{k}\geq 0$ for any nef line bundle $L$.
\end{definition}

We remark that to test fake effectivity, it suffices to check only ample (or nef and big) line bundles $L$, since nef line bundles can be viewed as ``limits'' of ample line bundles \cite{lazarsfeld}. As explained above, we will consider the following question.

\begin{question}\label{question}
Are the Todd classes of a Calabi--Yau variety or a hyperk\"ahler variety fakely effective?\end{question}

It is well-known that the second Todd class of a smooth projective variety with $c_1=0$ is fakely effective (which is equivalent to pseudo-effectivity in this case) by the results of Miyaoka and Yau \cite{miyaoka, yau0}. We consider higher Todd classes and answer this question affirmatively in the following cases.
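To make the relation between Question \ref{question} and non-vanishing explicit, we recall the following standard expansion (stated here only as a reminder, with $\mathrm{td}_{k}$ the usual Todd polynomials in the Chern classes).

```latex
% When c_1(X)=0, the odd Todd polynomials vanish and the first even ones reduce to
\[
  \mathrm{td}_2(X)=\frac{c_2(X)}{12}, \qquad
  \mathrm{td}_4(X)=\frac{3c_2(X)^2-c_4(X)}{720},
\]
% so that Hirzebruch--Riemann--Roch for an n-dimensional X and a line bundle L reads
\[
  \chi(X,L)=\int_X e^{c_1(L)}\,\mathrm{td}(X)
           =\sum_{0\le 2k\le n}\frac{1}{(n-2k)!}\int_X c_1(L)^{n-2k}\cdot \mathrm{td}_{2k}(X).
\]
```

In particular, fake effectivity of the classes $\mathrm{td}_{2k}(X)$ says precisely that each summand above is nonnegative for nef $L$.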
\vspace{7pt}
\begin{theorem}[{=Theorem \ref{effective of Td4}, Proposition \ref{verification for beauville}, and Theorem \ref{CICY td4}}]~
\begin{enumerate}
\item The fourth Todd classes of all hyperk\"ahler varieties are fakely effective;
\item All Todd classes of hyperk\"ahler varieties homeomorphic to any known hyperk\"ahler variety are fakely effective, except for O'Grady's $10$-dimensional example, for which the question remains open;
\item The fourth Todd classes of general complete intersection Calabi--Yau varieties in products of projective spaces are fakely effective.
\end{enumerate}
\end{theorem}

We also prove a weaker version of Conjecture \ref{kawamata conj}, which is related to a conjecture of Beltrametti and Sommese (see H\"{o}ring's work \cite{horing}).

\begin{theorem}[{=Theorems \ref{HRR for odd CY}, \ref{HRR for even CY}}]
Fix an integer $n\geq 2$. Let $X$ be an $n$-dimensional smooth projective variety with $c_1(X)=0$ in $H^2(X, \bR)$ and $L$ a nef and big line bundle on $X$. Then there exists a positive integer $i\leq \rounddown{\frac{n-1}{4}}+\rounddown{\frac{n+2}{4}}$ such that $H^{0}(X,L^{\otimes i})\ne 0$. In particular, Conjecture \ref{kawamata conj} is true in dimension $\leq4$.
\end{theorem}

Finally, we remark that one could also consider the K\"{a}hler version of Conjecture \ref{kawamata conj} and Question \ref{question}. However, they can simply be reduced to the projective case. As for Conjecture \ref{kawamata conj}, the existence of a nef and big line bundle on a compact complex manifold forces the manifold to be Moishezon (e.g. \cite[Theorem 2.2.15]{ma}), and hence projective, since it is also K\"{a}hler. For Question \ref{question}, if we consider a non-projective hyperk\"ahler manifold $X$, then $q_X(L) =0$ for every nef line bundle $L$ (\cite[Theorem 3.11]{huybrechts}), and hence the intersection numbers of nef line bundles with Todd classes are identically zero by Fujiki's result \cite{fujiki} (see Theorem \ref{fujiki result}).
\vspace{1em} \textbf{Acknowledgement.} The second author is grateful to Professor Keiji Oguiso for discussions. Part of this paper was written when the second author was visiting the National Center for Theoretical Sciences, and he would like to thank Professors Jungkai Chen and Ching-Jui Lai for their hospitality. \section{Reduction to Calabi--Yau and hyperk\"{a}hler varieties} Recall the following Beauville--Bogomolov decomposition, available thanks to Yau's proof of the Calabi conjecture \cite{yau}. \begin{theorem}[{Beauville--Bogomolov decomposition, \cite{beauville, bogomolov}}]\label{classification of CY}Every smooth projective variety $X$ with $c_1(X)=0$ in $H^2(X, \bR)$ admits a finite cover isomorphic to a product of Abelian varieties, Calabi--Yau varieties, and hyperk\"{a}hler varieties. \end{theorem} It is then easy to reduce Conjecture \ref{kawamata conj} to the cases of Calabi--Yau varieties and hyperk\"{a}hler varieties. \begin{proposition}\label{reduce to irreducible} Conjecture \ref{kawamata conj} is true if it holds for all Calabi--Yau varieties and hyperk\"{a}hler varieties. \end{proposition} \begin{proof} Let $X$ be a smooth projective variety with $c_1(X)=0$ in $H^2(X, \bR)$ and $L$ a nef and big line bundle on $X$. By the Beauville--Bogomolov decomposition, there exists a finite cover $\pi:X^\prime\rightarrow X$ such that $$ X^\prime \cong A\times X_1\times \cdots \times X_n\times Y_1\times \cdots \times Y_m, $$ where $A$ is an Abelian variety, $X_i$ a Calabi--Yau variety, and $Y_j$ a hyperk\"{a}hler variety. Pulling back $L$ to $X^\prime$, we have $$\chi(X^\prime,\pi^{*}L)=\deg(\pi)\cdot \chi(X,L), $$ and $\pi^{*}L$ is again nef and big on $X^\prime$. The Kawamata--Viehweg vanishing theorem \cite{kawamata1} implies \begin{align*} h^{0}(X^\prime,\pi^{*}L)={}&\chi(X^\prime,\pi^{*}L),\\ h^{0}(X,L)={}&\chi(X,L). \end{align*} Hence proving $h^{0}(X,L)\neq 0$ is equivalent to proving $h^{0}(X^\prime,\pi^{*}L)\neq 0$. 
On the other hand, since by definition $h^1(X_i, \OO_{X_i})=h^1(Y_j, \OO_{Y_j})=0$ for all $i,j$, by \cite[Ex III.12.6]{hart}, there is a natural isomorphism $$\Pic(X^\prime)\cong \Pic(A)\times\prod^{n}_{i=1}\Pic(X_{i})\times\prod^{m}_{j=1}\Pic(Y_{j}),$$ i.e. a line bundle on $X^\prime$ is the box tensor of its restrictions to the factors. Since the restriction of $\pi^{*}L$ to each factor is obviously nef and big, to prove $h^{0}(X^\prime,\pi^{*}L)\neq 0$ it suffices to show that any nef and big line bundle on each factor has a nontrivial global section, that is, to verify Conjecture \ref{kawamata conj} for Abelian varieties, Calabi--Yau varieties, and hyperk\"{a}hler varieties. Note that Conjecture \ref{kawamata conj} is true for Abelian varieties by the Kodaira vanishing theorem and the Hirzebruch--Riemann--Roch formula. \end{proof} \section{Hyperk\"{a}hler case} \subsection{Preliminaries}Let $X$ be a hyperk\"ahler variety. Beauville \cite{beauville} and Fujiki \cite{fujiki} proved that there exist a quadratic form $q_X: H^2(X, \mathbb{C}) \to \mathbb{C}$ and a constant $c_X\in \mathbb{Q}_+$ such that for all $\alpha\in H^2(X, \mathbb{C})$, $$\int_X\alpha^{2n}= c_X\cdot q_X(\alpha)^n. $$ The above equation determines $c_X$ and $q_X$ uniquely under the following assumptions: \begin{enumerate} \item $q_X$ is a primitive integral quadratic form on $H^2(X, \ZZ)$; \item $q_X(\sigma + \overline{\sigma}) > 0$ for $0\neq \sigma \in H^{2,0}(X)$. \end{enumerate} Here $q_X$ and $c_X$ are called the {\it Beauville--Bogomolov--Fujiki form} and the {\it Fujiki constant} of $X$ respectively. Note that condition (2) above is equivalent to the following condition (see \cite[Propositions 8 and 11]{nieper0}): \begin{enumerate} \item[(2')] There exists an $\alpha \in H^2(X,\bR)$ with $q_X(\alpha)\neq 0$, and for all $\alpha \in H^2(X,\bR)$ with $q_X(\alpha)\neq 0$, we have that $$ \frac{\int_X \pp_1(X)\alpha^{2n-2} }{q_X(\alpha)^{n-1}}< 0.$$ \end{enumerate} Here $\pp_1(X)$ is the first Pontrjagin class. 
By the above definition, it is easy to see that $q_X$ and $c_X$ are actually topological invariants. Recall a result by Fujiki \cite{fujiki} (see also \cite[Corollary 23.17]{gross} for a generalization). \begin{theorem}[{\cite{fujiki}, \cite[Corollary 23.17]{gross}}]\label{fujiki result} Let $X$ be a hyperk\"ahler variety of dimension $2n$. Assume $\alpha\in H^{4j}(X,\mathbb{C})$ is of type $(2j, 2j)$ on all small deformations of $X$. Then there exists a constant $C(\alpha)\in\mathbb{C}$ depending on $\alpha$ such that $$\int_{X}\alpha\cdot\beta^{2n-2j}=C({\alpha})\cdot q_{X}(\beta)^{n-j},$$ for all $\beta\in H^{2}(X,\mathbb{C})$. \end{theorem} A direct application of this result (cf. \cite[1.11]{huybrechts}) is that, for a line bundle $L$ on $X$, the Hirzebruch--Riemann--Roch formula gives $$\chi(X,L)=\sum_{i=0}^{n}\frac{1}{(2i)!}\int_X\td_{2n-2i}(X)(c_{1}(L))^{2i}=\sum_{i=0}^{n}\frac{a_{i}}{(2i)!}q_{X}\big(c_{1}(L)\big)^{i}, $$ where $$a_{i}=C(\td_{2n-2i}(X)).$$ Since rational Chern classes are determined by rational Pontrjagin classes (cf. \cite[Proposition 1.13]{nieper1}), rational Chern classes (and hence Todd classes) are topological invariants of $X$ by Novikov's theorem; hence the $a_i$'s in the above formula are constants depending only on the topology of $X$. For a line bundle $L$ on $X$, Nieper \cite{nieper} defined the {\it characteristic value} of $L$, $$ \lambda(L):=\begin{cases}\frac{24n\int_{X}\ch(L)}{\int_{X}c_{2}(X) \ch(L)} & \text{if well-defined;}\\ 0 & \text{otherwise.}\end{cases} $$ Note that $\lambda(L)$ is a positive multiple (by a topological constant) of $q_X(c_1(L))$ (cf. \cite[Proposition 10]{nieper}); more precisely, $$ \lambda(L)=\frac{12c_X}{(2n-1)C(c_2(X))}q_X(c_1(L)). $$ It is easy to see that if $L$ is a nef line bundle, then $q_X(c_1(L))\geq 0$ and $\lambda(L)\geq 0$; if $L$ is a nef and big line bundle, then $q_X(c_1(L))>0$ and $\lambda(L)>0$ (cf. \cite[Corollary 6.4]{huybrechts}). 
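For the reader's convenience, the constant in the last displayed relation can be recovered directly from Theorem \ref{fujiki result}: for degree reasons only the top-degree terms of $\ch(L)$ contribute to the two integrals defining $\lambda(L)$, so that (whenever $q_{X}(c_{1}(L))\neq 0$)

```latex
\begin{align*}
\int_{X}\ch(L) &= \frac{1}{(2n)!}\int_{X}c_{1}(L)^{2n}
               = \frac{c_{X}\,q_{X}(c_{1}(L))^{n}}{(2n)!},\\
\int_{X}c_{2}(X)\ch(L) &= \frac{1}{(2n-2)!}\int_{X}c_{2}(X)\,c_{1}(L)^{2n-2}
               = \frac{C(c_{2}(X))\,q_{X}(c_{1}(L))^{n-1}}{(2n-2)!},
\end{align*}
```

and hence $\lambda(L)=\frac{24n\,c_{X}\,q_{X}(c_{1}(L))\,(2n-2)!}{(2n)!\,C(c_{2}(X))}=\frac{12c_{X}}{(2n-1)C(c_{2}(X))}\,q_{X}(c_{1}(L))$, as claimed.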
\subsection{Fake effectivity of $4$-th Todd classes of hyperk\"{a}hler varieties} In this subsection, we prove the following theorem. \begin{theorem}\label{effective of Td4} Let $X$ be a hyperk\"{a}hler variety of dimension $2n$ with $n\geq 2$. Then $\td_4(X)$ is fakely effective, that is, $$\int_{X}\td_{4}(X)\cdot L^{2n-4}\geq 0$$ for any nef line bundle $L$ on $X$. Moreover, the inequality is strict for nef and big line bundles $L$. \end{theorem} First, we prove the following lemma. \begin{lemma}[{cf. \cite[Lemma 1]{kurnosov}, \cite[Lemma 3]{guan}}]\label{lemma c22} Let $X$ be a hyperk\"{a}hler variety of dimension $2n$ with $n\geq 2$. Then $c_2^2(X)$ is fakely effective, that is, $$\int_{X}c_2^2(X)\cdot L^{2n-4}\geq 0$$ for any nef line bundle $L$ on $X$. Moreover, the inequality is strict for nef and big line bundles $L$. \end{lemma} \begin{proof}By Theorem \ref{fujiki result}, $$ \int_{X}c_2^2(X)\cdot L^{2n-4}=C(c_2^2(X))\cdot q_X(c_1(L))^{n-2} $$ for a line bundle $L$. Hence it is equivalent to prove that $C(c_2^2(X))>0$. Fix $0\ne \sigma\in H^{2,0}(X)$. By Theorem \ref{fujiki result}, $$ \binom{2n-4}{n-2}\int_{X}c_2^2(X)\cdot (\sigma\overline{\sigma})^{n-2}=C(c_2^2(X))\cdot q_X(\sigma+\overline{\sigma})^{n-2}. $$ Hence it is equivalent to prove that $\int_{X}c_2^2(X)\cdot (\sigma\overline{\sigma})^{n-2}>0$. Take $Q\in \Sym^2H^2(X)$ to be the dual of the Beauville--Bogomolov--Fujiki form $q_X$. Taking the orthogonal decomposition of $c_2\in H^4(X)$ induced by the projection to $\Sym^2H^2(X)$, we may write $c_2=\mu Q+p$, where $\mu>0$ is a positive rational number (cf. \cite[Proposition 12]{nieper0}) and $p\in H^4_{\text{prim}}(X)$. 
Since $p$ is a primitive $(2,2)$-class, by the Hodge--Riemann bilinear relations, \begin{align*} \int_{X}c_2^2(X)\cdot (\sigma\overline{\sigma})^{n-2}={}&\mu^2\int_{X}Q^2\cdot (\sigma\overline{\sigma})^{n-2}+2\mu\int_{X}Qp\cdot (\sigma\overline{\sigma})^{n-2}+\int_{X}p^2\cdot (\sigma\overline{\sigma})^{n-2}\\ ={}&\mu^2\int_{X}Q^2\cdot (\sigma\overline{\sigma})^{n-2}+\int_{X}p^2\cdot (\sigma\overline{\sigma})^{n-2}\\ \geq {}&\mu^2\int_{X}Q^2\cdot (\sigma\overline{\sigma})^{n-2}. \end{align*} Hence it suffices to show that $\int_{X}Q^2\cdot (\sigma\overline{\sigma})^{n-2}>0$. Since $q_X$ is a deformation invariant, so is $Q^2$. By Theorem \ref{fujiki result}, it suffices to show that $C(Q^2)>0$. Let $e_1,\ldots, e_{b_2}$ be an orthonormal basis of $H^2(X,{\mathbb C})$ for which $Q = \sum_{i=1}^{b_2} e_i^2.$ Then for distinct integers $i,j,k$ and formal parameters $t,s$, $$ \int_X (e_i+te_j+se_k)^{2n}=c_X\cdot q_X(e_i+te_j+se_k)^n=c_X\cdot (1+t^2+s^2)^n. $$ Comparing coefficients of powers of $t$ and $s$, \begin{align*} {}&\int_X e_i^{2n}=c_X,\\ {}&\int_X e_i^{2n-2}e_j^2=\frac{c_X}{2n-1},\\ {}&\int_X e_i^{2n-4}e_j^4=\frac{3c_X}{(2n-1)(2n-3)},\\ {}&\int_X e_i^{2n-4}e_j^2e_k^2=\frac{c_X}{(2n-1)(2n-3)}. \end{align*} Hence by Theorem \ref{fujiki result} and $q_X(e_1)=1$, \begin{align*} C(Q^2)=\int_XQ^2e_1^{2n-4}=c_X+\frac{2(b_2-1)c_X}{2n-1}+\frac{3(b_2-1)c_X}{(2n-1)(2n-3)}+\frac{(b_2-1)(b_2-2)c_X}{(2n-1)(2n-3)}>0. \end{align*} This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{effective of Td4}] By Nieper's formula (see \cite[Remark after Definition 19]{nieper} or \cite[Corollary 3.7]{nieper1}), for any $\alpha\in H^{2}(X)$, \begin{equation}\int_{X}\sqrt{\td(X)}\exp(\alpha)=(1+\lambda(\alpha))^{n} \int_{X}\sqrt{\td(X)}. \label{nieper formula} \end{equation} Here $\sqrt{\td(X)}$ is defined to be the formal power series in the cohomology ring whose square is $\td(X)$. 
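Both the mixed-moment formulas above and the expansion of $\sqrt{\td(X)}$ used in the next step are elementary arithmetic; the following Python snippet (a sanity check only, not part of the argument) verifies them with exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def moment(n, a, b):
    # (1/c_X) * int_X e_i^{2n-2a-2b} e_j^{2a} e_k^{2b}, read off by comparing
    # the coefficient of t^{2a} s^{2b} on both sides of
    #   int_X (e_i + t e_j + s e_k)^{2n} = c_X (1 + t^2 + s^2)^n.
    multinom = Fraction(factorial(n), factorial(n - a - b) * factorial(a) * factorial(b))
    return multinom * Fraction(factorial(2*n - 2*a - 2*b) * factorial(2*a) * factorial(2*b),
                               factorial(2*n))

for n in range(2, 9):
    assert moment(n, 0, 0) == 1                                    # int e_i^{2n} = c_X
    assert moment(n, 1, 0) == Fraction(1, 2*n - 1)                 # int e_i^{2n-2} e_j^2
    assert moment(n, 2, 0) == Fraction(3, (2*n - 1) * (2*n - 3))   # int e_i^{2n-4} e_j^4
    assert moment(n, 1, 1) == Fraction(1, (2*n - 1) * (2*n - 3))   # int e_i^{2n-4} e_j^2 e_k^2

# sqrt(td(X)) with c_1 = c_3 = 0: record degree-4 classes as
# (coefficient of c_2^2, coefficient of c_4).
s2 = Fraction(1, 24)                          # claimed degree-2 part: c_2 / 24
s4 = (Fraction(7, 5760), Fraction(-4, 5760))  # claimed degree-4 part: (7c_2^2 - 4c_4) / 5760
assert 2 * s2 == Fraction(1, 12)              # squaring must give td_2 = c_2 / 12
assert s2**2 + 2 * s4[0] == Fraction(3, 720)  # and td_4 = (3c_2^2 - c_4) / 720
assert 2 * s4[1] == Fraction(-1, 720)
print("all identities verified")
```

The last three assertions confirm in particular the decomposition $\td_4(X)=\frac{1}{2880}(7c_2^2(X)-4c_4(X))+\frac{1}{576}c_2^2(X)$ used at the end of the proof.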
Note that $$ \sqrt{\td(X)}=1+\frac{1}{24}c_2(X)+\frac{1}{5760}(7c_2^2(X)-4c_4(X))+\cdots $$ and $\int_{X}\sqrt{\td(X)}>0$ by a theorem of Hitchin and Sawon \cite{hitchinsawon}. Fix a nef and big line bundle $L$ on $X$ (hence $\lambda(L)>0$), take $\alpha=t\cdot c_1(L)$ where $t$ is a formal parameter, and compare the coefficients of $t^{2n-4}$ in (\ref{nieper formula}) (note that $\lambda(t\cdot c_1(L))=t^{2}\lambda(L)$); then \begin{equation}\int_{X}\sqrt{\td(X)}\cdot L^{2n-4}=(2n-4)!\binom{n}{n-2}\lambda(L)^{n-2}\int_{X}\sqrt{\td(X)}>0. \nonumber \end{equation} Equivalently, this gives \begin{equation}\int_{X}(7c_{2}^{2}(X)-4c_{4}(X))\cdot L^{2n-4}>0. \nonumber \end{equation} Combining with Lemma \ref{lemma c22}, \begin{equation}\int_{X}\td_{4}(X)\cdot L^{2n-4}=\frac{1}{2880}\int_{X}(7c_{2}^{2}(X)-4c_{4}(X))\cdot L^{2n-4}+\frac{1}{576}\int_{X}c_{2}^{2}(X)\cdot L^{2n-4}>0. \nonumber \end{equation} This completes the proof. \end{proof} \begin{corollary}\label{verification for six dim} Let $X$ be a $6$-dimensional hyperk\"{a}hler variety and $L$ a nef and big line bundle on $X$. Then \begin{equation}h^{0}(X,L)\geq 5. \nonumber \end{equation} \end{corollary} \begin{proof} By the Kawamata--Viehweg vanishing theorem and the Hirzebruch--Riemann--Roch formula, \begin{equation}h^{0}(X,L)=\chi(X,L)=\frac{1}{6!}\int_{X}c^{6}_{1}(L)+\frac{1}{4!}\int_{X}c^{4}_{1}(L)\cdot \td_{2}(X)+\frac{1}{2}\int_{X}c^{2}_{1}(L)\cdot \td_{4}(X)+\chi(X,\mathcal{O}_{X}). \nonumber \end{equation} Since Miyaoka and Yau \cite{miyaoka, yau0} showed that $\td_2(X)=\frac{1}{12} c_2(X)$ is fakely effective, combining this with Theorem \ref{effective of Td4} we get $h^{0}(X,L)>\chi(X,\mathcal{O}_{X})=4$. \end{proof} \subsection{Known hyperk\"{a}hler varieties} Up to deformation, the only known examples of hyperk\"{a}hler varieties are the two families arising from Beauville's construction \cite{beauville}, namely Hilbert schemes of points on K3 surfaces and generalized Kummer varieties, and the two examples introduced by O'Grady \cite{ogrady1, ogrady2}. 
We verify Conjecture \ref{kawamata conj} for these known examples except for O'Grady's $10$-dimensional example. \begin{proposition}\label{verification for beauville} Let $L$ be a nef and big line bundle on a hyperk\"{a}hler variety $X$ of dimension $2n$. \begin{enumerate} \item If $X$ is homeomorphic to a Hilbert scheme of points on K3 surfaces, then $$h^{0}(X,L)\geq \frac{(n+2)(n+1)}{2}.$$ \item If $X$ is homeomorphic to a generalized Kummer variety with $n\geq 2$, or O'Grady's $6$-dimensional example, then $$h^{0}(X,L)\geq (n+1)^{2}.$$ \end{enumerate} Moreover, in the above cases, all Todd classes of $X$ are fakely effective. \end{proposition} \begin{proof} Let $X$ be a hyperk\"{a}hler manifold of dimension $2n$ and $L$ a line bundle. By Theorem \ref{fujiki result}, we know that $$ \chi(X, L)=P_X(q_X(c_1(L)))=P^\prime_X(\lambda(L)), $$ where $P_X$ and $P'_X$ are polynomials depending only on the topology of $X$. Note that all Todd classes of $X$ are fakely effective if and only if all coefficients of $P_X$ (or $P'_X$) are non-negative. For a generalized Kummer variety $K^{n}A$ of an Abelian surface $A$, and a line bundle $H$ on $K^{n}A$, Britze--Nieper \cite{britze} showed that \begin{equation}\chi(K^{n}A, H)=(n+1)\binom{\frac{(n+1)\lambda(H)}{4}+n}{n}. \nonumber \end{equation} Thus $$ P'_{K^{n}A}(t)=(n+1)\binom{\frac{(n+1)t}{4}+n}{n}$$ is a polynomial with positive coefficients. Hence if $X$ is homeomorphic to $K^{n}A$ and $L$ is a line bundle on $X$, then all Todd classes of $X$ are fakely effective and $$ \chi(X, L)=P^\prime_X(\lambda(L))=P'_{K^{n}A}(\lambda(L))=(n+1)\binom{\frac{(n+1)\lambda(L)}{4}+n}{n}. $$ When $L$ is nef and big, $\lambda(L)\in\mathbb{Q}_{>0}$ and hence by Lemma \ref{lemma on integers} (assuming that $n\geq 2$), $\frac{(n+1)\lambda(L)}{4}$ is a positive integer and $h^{0}(X, L )=\chi(X, L )\geq(n+1)^{2}$. 
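The positivity of the coefficients of $P'_{K^{n}A}$ and the resulting lower bound are easy to check by hand; the snippet below (a sanity check only, not part of the argument) expands the Britze--Nieper polynomial $(n+1)\binom{(n+1)t/4+n}{n}$ with exact rational arithmetic and evaluates it at the smallest admissible value $t=4/(n+1)$, i.e. where $(n+1)\lambda(L)/4=1$ (a positive integer by Lemma \ref{lemma on integers} when $n\geq 2$):

```python
from fractions import Fraction
from math import factorial

def kummer_poly(n):
    # coefficients (low degree first) of P'(t) = (n+1) * binom((n+1)t/4 + n, n),
    # computed as (n+1)/n! * prod_{j=1}^{n} ((n+1)/4 * t + j)
    coeffs = [Fraction(n + 1)]
    a = Fraction(n + 1, 4)
    for j in range(1, n + 1):
        new = [Fraction(0)] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] += c * j        # constant part of the factor
            new[i + 1] += c * a    # linear part of the factor
        coeffs = new
    return [c / factorial(n) for c in coeffs]

def eval_poly(coeffs, t):
    return sum(c * t**i for i, c in enumerate(coeffs))

for n in range(2, 11):
    coeffs = kummer_poly(n)
    assert all(c > 0 for c in coeffs)   # every Todd-class contribution is positive
    # minimal value (n+1)*lambda(L)/4 = 1 gives the bound h^0 >= (n+1)^2
    assert eval_poly(coeffs, Fraction(4, n + 1)) == (n + 1) ** 2
print("Kummer polynomial checks pass")
```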
For a Hilbert scheme of $n$-points $\Hilb^{n}(S)$ on a K3 surface $S$, \begin{equation}\Pic(\Hilb^{n}(S))\cong \Pic(S)\oplus \mathbb{Z}E \nonumber \end{equation} and any line bundle on $\Hilb^{n}(S)$ is of the form $H_{n}\otimes E^{r}$, where $r\in\mathbb{Z}$ and $H_{n}$ is induced by a line bundle $H$ on $S$ (see \cite[Section 5]{egl}). Ellingsrud--G\"{o}ttsche--Lehn's formula (\cite[Theorem 5.3]{egl}) gives \begin{equation}\chi(\Hilb^{n}(S),H_{n}\otimes E^{r})=\binom{\chi(S,H)-(r^{2}-1)(n-1) }{ n}. \nonumber \end{equation} With the Beauville--Bogomolov--Fujiki form $q$, we have \begin{equation}\big(H^{2}(\Hilb^{n}(S),\mathbb{Z}),q\big)\cong H^{2}(S,\mathbb{Z})_{(-, -)}\oplus_{\perp}\mathbb{Z}[E], \nonumber \end{equation} where $(-,-)$ is the cup product on $S$ and $q([E])=-2(n-1)$ (see \cite[Section 2.2]{huybrechts}). Thus \begin{equation}q(c_{1}(H_{n}\otimes E^{r}))=(H,H)-2r^{2}(n-1), \nonumber \end{equation} and, since $\chi(S,H)=\frac{1}{2}(H,H)+2$ by the Riemann--Roch formula on the K3 surface $S$, \begin{equation}\chi(\Hilb^{n}(S),H_{n}\otimes E^{r})=\binom{\frac{1}{2}q(c_{1}(H_{n}\otimes E^{r}))+n+1}{n}. \nonumber \end{equation} Thus $$ P_{\Hilb^{n}(S)}(t)=\binom{\frac{1}{2}t+n+1}{n}$$ is a polynomial with positive coefficients. Hence if $X$ is homeomorphic to $\Hilb^{n}(S)$ and $L$ is a line bundle on $X$, then all Todd classes of $X$ are fakely effective and $$ \chi(X, L)=P_X(q_X(c_1(L)))=P_{\Hilb^{n}(S)}(q_X(c_1(L)))=\binom{\frac{1}{2}q_X(c_1(L))+n+1}{n}. $$ When $L$ is nef and big, $q_X(c_1(L))\in\mathbb{Q}_{>0}$ and hence by Lemma \ref{lemma on integers}, $\frac{1}{2}q_X(c_1(L))$ is a positive integer and $h^{0}(X, L )=\chi(X, L )\geq\binom{n+2}{n}$. In general, for a hyperk\"{a}hler variety $X$ of dimension $2n$ and a line bundle $L$ on $X$, Nieper \cite{nieper} used Rozansky--Witten invariants to express the coefficients $a_{i}$ in the expansion of $\chi(X,L)$ in terms of Chern numbers of $X$. 
More precisely, we have \begin{equation}\chi(X,L)=\int_{X}\exp\bigg(-2\sum_{k=1}^{\infty}\frac{B_{2k}}{4k}\ch_{2k}(X)T_{2k}\bigg(\sqrt{\frac{\lambda(L)}{4}+1}\bigg)\bigg), \label{nieper RR} \end{equation} where $B_{2k}$ are the Bernoulli numbers with $B_{2}=\frac{1}{6}$, $B_{4}=-\frac{1}{30}$, $B_{6}=\frac{1}{42}$, and $T_{2k}$ are the even Chebyshev polynomials. Now consider O'Grady's six-dimensional hyperk\"{a}hler manifold ${\mathcal M}_6$. By Mongardi--Rapagnetta--Sacc\`{a}'s computation (\cite[Corollary 6.8]{mrs}), we have \begin{equation}\int_{{\mathcal M}_6}c_{2}^{3}({\mathcal M}_6)=30720, \quad \int_{{\mathcal M}_6}c_{2}({\mathcal M}_6)c_{4}({\mathcal M}_6)=7680, \quad \int_{{\mathcal M}_6}c_{6}({\mathcal M}_6)=1920. \nonumber \end{equation} A direct computation using (\ref{nieper RR}) then gives, for a line bundle $H$ on ${\mathcal M}_6$, \begin{equation}\chi({\mathcal M}_6,H)=\frac{4}{27}\Big(T_{6}(\sqrt{\lambda(H)/4+1})+6T_{2}(\sqrt{\lambda(H)/4+1})\cdot T_{4}(\sqrt{\lambda(H)/4+1})+20T^{3}_{2}(\sqrt{\lambda(H)/4+1})\Big). \nonumber \end{equation} Plugging the Chebyshev polynomials $T_{2}(x)=2x^{2}-1$, $T_{4}(x)=8x^{4}-8x^{2}+1$, $T_{6}(x)=32x^{6}-48x^{4}+18x^{2}-1$ into the formula and writing $y=x^{2}=\lambda(H)/4+1$, we obtain $T_{6}+6T_{2}T_{4}+20T_{2}^{3}=288y^{3}-432y^{2}+198y-27$, and hence \begin{equation}\chi({\mathcal M}_6,H)=4\binom{\lambda(H)+3}{3}. \nonumber \end{equation} Thus $$ P'_{{\mathcal M}_6}(t)=4\binom{t+3}{3}$$ is a polynomial with positive coefficients. Hence if $X$ is homeomorphic to ${\mathcal M}_6$ and $L$ is a line bundle on $X$, then all Todd classes of $X$ are fakely effective and $$ \chi(X, L)=P^\prime_X(\lambda(L))=P'_{{\mathcal M}_6}(\lambda(L))=4\binom{\lambda(L)+3}{3}. $$ When $L$ is nef and big, $\lambda(L)\in\mathbb{Q}_{>0}$ and hence by Lemma \ref{lemma on integers}, $\lambda(L)$ is a positive integer and $h^{0}(X, L )=\chi(X, L )\geq 4\binom{4}{3}=16$. \end{proof} From the above computations, we may propose the following conjecture. 
\begin{conjecture} Let $X$ be a hyperk\"{a}hler variety of dimension $2n$ and $L$ a nef and big line bundle on $X$. Then \begin{enumerate} \item $h^{0}(X,L)\geq n+2$; \item more boldly, $\int_{X}\td_{2n-2i}(X)\cdot L^{2i}\geq 0$ for all $0\leq i\leq n$. \end{enumerate} \end{conjecture} \section{Calabi--Yau case} \subsection{Complete intersections in Fano manifolds} As for Calabi--Yau varieties, a wealth of examples is provided by complete intersections in Fano manifolds\footnote{We only consider compact Fano manifolds in this paper.}. In this subsection, we verify Conjecture \ref{kawamata conj} for these examples. Let $X$ be a projective variety and $N_{1}(X)$ the space of numerical equivalence classes of $1$-cycles with $\mathbb{R}$-coefficients. Set \begin{equation}NE(X)=\big\{\sum a_{i}[C_{i}]\in N_{1}(X) \mid C_{i}\subseteq X, \textrm{ } 0\leq a_{i}\in \bR \big\}, \nonumber \end{equation} and denote by $\overline{NE}(X)$ its closure in $N_{1}(X)$. Note that $\overline{NE}(X)$ is the dual of the cone of nef divisors on $X$. As our test examples are realized as hypersurfaces, or more generally complete intersections, in Fano manifolds, we recall the following comparison result for closed cones of curves. \begin{theorem}[{\cite{kollar}, \cite[Proposition 3.5]{beltrametti}}]\label{kollar thm} Let $Y$ be a projective manifold of dimension $n\geq4$, let $H$ be an ample line bundle on $Y$, and let $X$ be a smooth divisor in $|H|$. Assume $K^{-1}_{Y}\otimes H^{-1}$ is nef. Then $\overline{NE}(X)\cong\overline{NE}(Y)$ under the natural embedding $X\hookrightarrow Y$. \end{theorem} In fact, by the Lefschetz hyperplane theorem (see e.g. \cite[Example 3.1.25]{lazarsfeld}), $\Pic(X)\cong \Pic(Y)$ under the embedding $i: X\hookrightarrow Y$, and $i_{*}: \overline{NE}(X)\rightarrow \overline{NE}(Y)$ is an inclusion. The above theorem says that under certain conditions $i_{*}$ is an isomorphism and nef line bundles on $X$ and $Y$ are identified under $i^*$. 
We then compare sections of those line bundles on $X$ and $Y$. \begin{proposition}\label{compare sections} Let $Y$ be a projective manifold and $H$ a line bundle such that $h^{0}(Y,H)\geq2$. Suppose that the linear system $|H|$ contains a smooth element $X$. Assume that $K^{-1}_{Y}\otimes H^{-1}$ is nef. Then for any nef line bundle $L$ on $Y$ with $L\otimes (K^{-1}_{Y}\otimes H^{-1})$ big, we have \begin{equation}h^{0}(Y,L)=h^{0}(X,L|_{X})+h^{0}(Y,L\otimes H^{-1}). \nonumber \end{equation} Furthermore, if $h^{0}(Y,L)>0$, then $h^{0}(X,L|_{X})>0$. \end{proposition} \begin{proof} Twisting the exact sequence \begin{equation}0\rightarrow\mathcal{O}_{Y}(-X)\rightarrow\mathcal{O}_{Y}\rightarrow\mathcal{O}_{X}\rightarrow0 \nonumber \end{equation} with $L$ and taking the associated long exact sequence in cohomology, we obtain \begin{equation}0\rightarrow H^{0}(Y,L\otimes H^{-1})\rightarrow H^{0}(Y,L)\rightarrow H^{0}(X,L|_{X})\rightarrow H^{1}(Y,L\otimes H^{-1}). \nonumber \end{equation} Since $L\otimes (K^{-1}_{Y}\otimes H^{-1})$ is nef and big, $H^{1}(Y,L\otimes H^{-1})=0$ by the Kawamata--Viehweg vanishing theorem \cite{kawamata1}. Thus \begin{equation}h^{0}(X,L|_{X})=h^{0}(Y,L)-h^{0}(Y,L\otimes H^{-1}). \nonumber \end{equation} Assume that $h^{0}(Y,L)>0$. If $h^{0}(Y,L\otimes H^{-1})=0$, then $$h^{0}(X,L|_{X})=h^{0}(Y,L)>0. $$ If $h^{0}(Y,L\otimes H^{-1})> 0$, then by \cite[Lemma 15.6.2]{kol shafa}, $$h^{0}(Y,L)\geq h^{0}(Y,H)+h^{0}(Y,L\otimes H^{-1})-1\geq h^{0}(Y,L\otimes H^{-1})+1,$$ and hence $h^{0}(X,L|_{X})\geq1$. \end{proof} Thus, to prove Conjecture \ref{kawamata conj} for complete intersection Calabi--Yau varieties in Fano manifolds, we only need to prove that nef line bundles on those Fano manifolds have nontrivial global sections. Motivated by this, we define the following: \begin{definition}\label{def of perfect Fano} A Fano manifold $Y$ is \emph{perfect} if any nef line bundle on it has a nontrivial global section. 
\end{definition} Kawamata's conjecture \ref{kawamata conj0} predicts that any Fano manifold should be perfect. Here we content ourselves with showing that there are many examples of perfect Fano manifolds. \begin{proposition}\label{prop on perfect Fano} The following Fano manifolds are perfect: \begin{enumerate} \item toric Fano manifolds; \item Fano manifolds of dimension $n\leq4$; \item products of perfect Fano manifolds. \end{enumerate} \end{proposition} \begin{proof} (1) This is obvious from the fact that any nef line bundle on a complete toric variety is base-point-free (see \cite[Theorem 6.3.12]{cox}). (2) We only treat the case of a Fano $4$-fold $X$ here; the lower-dimensional cases are similar and easier. By the Kawamata--Viehweg vanishing theorem, $H^{i}(X,L)=0$ for any nef line bundle $L$ on $X$ and $i>0$. Then the Hirzebruch--Riemann--Roch formula gives \begin{align*}h^{0}(X,L)=\chi(X,L)={}&\frac{1}{24}\int_{X} c^{4}_{1}(L)+\frac{1}{12}\int_{X} c^{3}_{1}(L) c_{1}(X)+\frac{1}{24}\int_{X}c^{2}_{1}(L) c^{2}_{1}(X)\\ {}&+\frac{1}{24}\int_{X}c^{2}_{1}(L) c_{2}(X)+\frac{1}{24}\int_{X}c_{1}(L) c_{1}(X) c_{2}(X)+\chi(X,\mathcal{O}_{X}). \end{align*} The Fano condition implies that $\chi(X,\mathcal{O}_{X})=1$. By the pseudo-effectiveness of the second Chern class for Fano manifolds (see \cite[Theorem 6.1]{miyaoka}, \cite[Theorem 2.4]{kmmt}, and \cite[Theorem 1.3]{peternell}), the terms involving $c_{2}(X)$ are non-negative, and the remaining intersection numbers are non-negative since $L$ is nef and $c_{1}(X)$ is ample. Hence $h^{0}(X,L)>0$. (3) For two Fano manifolds $X$ and $Y$, as $H^{1}(X,\mathcal{O}_{X})=0$, we know $\Pic(X\times Y)\cong \Pic(X)\times \Pic(Y)$ (see e.g. \cite[Ex. III.12.6]{hart}), i.e. a line bundle on $X\times Y$ is the box tensor of line bundles on $X$ and $Y$. Furthermore, it is obvious that the box tensor is nef if and only if each factor is, and the box tensor has a global section if and only if each factor does. \end{proof} To sum up, we verify Conjecture \ref{kawamata conj} for general complete intersection Calabi--Yau varieties in perfect Fano manifolds. 
\begin{theorem}\label{verification for CICY} Let $Y$ be a perfect Fano manifold, $\{H_{i}\}_{i=1}^{m}$ a sequence of base-point-free ample line bundles on $Y$ such that $\otimes_{i=1}^{m}H_{i}\cong K_{Y}^{-1}$. Let $X\subseteq Y$ be a general smooth complete intersection of dimension $n\geq3$ defined as the common zero locus of sections of $\{H_{i}\}_{i=1}^{m}$. Then Conjecture \ref{kawamata conj} is true for $X$. \end{theorem} \begin{proof} There is a sequence of projective manifolds \begin{equation}X=X_{m}\subseteq X_{m-1}\subseteq\cdots\subseteq X_{1}\subseteq Y \nonumber \end{equation} with $X_{i}=\cap_{j=1}^{i}s^{-1}_{j}(0)$, the common zero locus of the sections $s_{j}\in H^{0}(Y,H_{j})$ for $j=1,\ldots,i$. We inductively apply Theorem \ref{kollar thm} and Proposition \ref{compare sections} to the pairs $(X_{i-1},H_{i}|_{X_{i-1}})$ for $i=1,\ldots,m$, where $X_{0}=Y$; then any nef and big line bundle $L$ on $X$ is the restriction of a nef line bundle $L_Y$ on $Y$, and $h^{0}(X,L)>0$ since $h^0(Y, L_Y)>0$ by the definition of a perfect Fano manifold. \end{proof} \subsection{$\Td_4$ of CICY} In this subsection, we consider complete intersection Calabi--Yau varieties (CICY, for short) in products of projective spaces. Fix positive integers $n_1, \ldots, n_m$. A CICY $X$ in $\mathcal{P}(\bn)=\mathbb{P}^{n_1}\times \cdots \times \mathbb{P}^{n_m}$ is specified by the {\em configuration matrix} $$ [\bn| \bq]= \left[ \begin{array}{c|ccc} n_1 & q_1^1 & \cdots & q_K^1 \\ \vdots & \vdots & & \vdots \\ n_m & q_1^m & \cdots & q_K^m \\ \end{array} \right] $$ which encodes the dimensions of the ambient projective spaces and the (multi-)degrees of the defining polynomials. Here the condition $c_1(X)=0$ is equivalent to \begin{align}\label{CY condition} \sum_{\alpha=1}^K q_\alpha^r=n_r+1 \end{align} for all $1\leq r\leq m$. We say that $\bq>0$ if $q_\alpha^r>0$ for all $\alpha$ and $r$. Our goal is to prove the following: \begin{theorem}\label{CICY td4} Let $X=[\bn| \bq]\subset\mathcal{P}(\bn)$ be a general CICY with $\bq>0$. 
Then $\td_4(X)$ is fakely effective. \end{theorem} From now on, $X=[\bn| \bq]\subset\mathcal{P}(\bn)$ is a general CICY with $\bq>0$. For each $r$, let $H_r$ denote the pull-back to $\mathcal{P}(\bn)$ of the hyperplane class of $\mathbb{P}^{n_r}$, and set $J_r=H_r|_X$. It is easy to compute the Chern classes of $X$ in terms of the $J_r$ (cf. \cite[B.2.1]{M CY5}); for example, we have \begin{align*} c_2(X){}&=\sum_{r,s=1}^m c_2^{rs} J_r J_s \\ {}&= \sum_{r,s=1}^m \frac{1}{2}\left[-(n_r+1)\delta^{rs} + \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right] J_r J_s, \end{align*} and \begin{align*} c_4 (X) {}&= \sum_{r,s,t,u=1}^m c_4^{rstu} J_r J_s J_t J_u \\ {}&= \sum_{r,s,t,u=1}^m\frac{1}{4}\left[ -(n_r+1)\delta^{rstu} + \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s q_\alpha^t q_\alpha^u + 2 c_2^{rs} c_2^{tu} \right] J_r J_s J_t J_u. \end{align*} Hence \begin{align*} 2880\td_4(X)={}&4(3c_2^2(X)-c_4(X))\\ ={}&\sum_{r,s,t,u=1}^m (12c_2^{rs}c_2^{tu} -4c_4^{rstu})J_r J_sJ_t J_u \\ ={}& \sum_{r,s,t,u=1}^m\left[10c_2^{rs}c_2^{tu} +(n_r+1)\delta^{rstu} - \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s q_\alpha^t q_\alpha^u \right] J_r J_s J_t J_u\\ ={}& \sum_{r,s,t,u=1}^m\Bigg[\frac{5}{2}\left(-(n_r+1)\delta^{rs} + \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right) \left(-(n_t+1)\delta^{tu} + \sum_{\alpha=1}^K q_\alpha^t q_\alpha^u \right)\\ {}&\qquad\qquad+(n_r+1)\delta^{rstu} - \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s q_\alpha^t q_\alpha^u \Bigg] J_r J_s J_t J_u. \end{align*} Here we denote by $A^{rstu}$ the coefficient of $J_r J_s J_t J_u$ in the last expression. 
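The simplification of $12c_2^{rs}c_2^{tu}-4c_4^{rstu}$ above is elementary but easy to get wrong; the following Python snippet (a sanity check only, not part of the argument) compares the two displayed expressions for the coefficient of $J_rJ_sJ_tJ_u$ on a sample configuration matrix (the choice of $[\bn|\bq]$ below is arbitrary, subject to (\ref{CY condition})):

```python
from fractions import Fraction

# sample configuration matrix [n | q]: m = 2 ambient factors, K = 2 equations;
# each row satisfies sum_alpha q_alpha^r = n_r + 1 (the Calabi--Yau condition)
n = [4, 1]
q = [[3, 2],   # q_alpha^1, alpha = 1..K
     [1, 1]]   # q_alpha^2
m, K = len(n), len(q[0])
assert all(sum(q[r]) == n[r] + 1 for r in range(m))

def c2(r, s):
    # coefficient c_2^{rs} of J_r J_s in c_2(X)
    return Fraction(-(n[r] + 1) * (r == s)
                    + sum(q[r][a] * q[s][a] for a in range(K)), 2)

def c4(r, s, t, u):
    # coefficient c_4^{rstu} of J_r J_s J_t J_u in c_4(X)
    delta = 1 if r == s == t == u else 0
    return Fraction(1, 4) * (-(n[r] + 1) * delta
                             + sum(q[r][a] * q[s][a] * q[t][a] * q[u][a] for a in range(K))
                             + 2 * c2(r, s) * c2(t, u))

def A(r, s, t, u):
    # claimed closed form of the coefficient of J_r J_s J_t J_u in 2880 td_4(X)
    delta = 1 if r == s == t == u else 0
    f = lambda x, y: -(n[x] + 1) * (x == y) + sum(q[x][a] * q[y][a] for a in range(K))
    return (Fraction(5, 2) * f(r, s) * f(t, u) + (n[r] + 1) * delta
            - sum(q[r][a] * q[s][a] * q[t][a] * q[u][a] for a in range(K)))

for r in range(m):
    for s in range(m):
        for t in range(m):
            for u in range(m):
                assert 12 * c2(r, s) * c2(t, u) - 4 * c4(r, s, t, u) == A(r, s, t, u)
print("coefficient identity verified")
```

Since the identity is obtained by substituting the definition of $c_4^{rstu}$ and cancelling, it holds for any integer data, not just this sample.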
Rewrite the equation, \begin{align} \label{BBB}2880\td_4(X)={}&\sum_{r=1}^m B^{rrrr}J_r^4+ \sum_{\substack{1\leq r, s\leq m\\ r\neq s}} B^{rrrs}J_r^3J_s+ \sum_{1\leq r<s\leq m} B^{rrss}J_r^2J_s^2\\ {}&+\sum_{\substack{1\leq r,s,t\leq m\\ r\neq s, r\neq t, s<t}} B^{rrst}J_r^2J_sJ_t+\sum_{1\leq r<s<t<u\leq m} B^{rstu}J_r J_sJ_t J_u.\nonumber \end{align} Here by symmetry, \begin{align*} B^{rrrr}={}& A^{rrrr}=\frac{5}{2}\left(-(n_r+1) + \sum_{\alpha=1}^K (q_\alpha^r )^2\right)^2+(n_r+1)- \sum_{\alpha=1}^K (q_\alpha^r)^4;\\ B^{rrrs}={}&4A^{rrrs}=10\left(-(n_r+1) + \sum_{\alpha=1}^K (q_\alpha^r)^2 \right) \left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right)- 4\sum_{\alpha=1}^K (q_\alpha^r)^3 q_\alpha^s;\\ B^{rrss}={}&2A^{rrss}+4A^{rsrs}\\ ={}&10\left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right)^2+5\left(-(n_r+1) + \sum_{\alpha=1}^K (q_\alpha^r)^2 \right)\left(-(n_s+1) + \sum_{\alpha=1}^K (q_\alpha^s)^2 \right)- 6\sum_{\alpha=1}^K (q_\alpha^r q_\alpha^s)^2;\\ B^{rrst}={}&4A^{rrst}+8A^{rsrt}\\ ={}&10\left(-(n_r+1) + \sum_{\alpha=1}^K (q_\alpha^r)^2 \right) \left( \sum_{\alpha=1}^K q_\alpha^s q_\alpha^t \right)+20\left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right)\left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^t \right)-12\sum_{\alpha=1}^K (q_\alpha^r)^2 q_\alpha^s q_\alpha^t;\\ B^{rstu}={}&8A^{rstu}+8A^{rtsu}+8A^{rust}. \end{align*} We have the following inequalities for these coefficients. \begin{lemma}\label{B geq}In equation (\ref{BBB}), \begin{enumerate} \item $B^{rrrr}\geq 0$ unless $q_{\alpha_0}^r=2$ for some $\alpha_0$ and $q_\alpha^r=1$ for all $\alpha\neq \alpha_0$; \item $B^{rrrs}\geq 0$ unless $q_\alpha^r=1$ for all $\alpha$; \item $B^{rrss} \geq 0$; \item $B^{rrst}\geq 0$; \item $B^{rstu}\geq 0$. \end{enumerate} \end{lemma} \begin{proof}Keep in mind that $q_\alpha^r$ are all positive integers and we will often use (\ref{CY condition}). 
(1) Divide the index set into three parts: \begin{align*} S_1={}&\{\alpha\mid q_{\alpha}^r=1\};\\ S_2={}&\{\alpha\mid q_{\alpha}^r=2\};\\ S_3={}&\{\alpha\mid q_{\alpha}^r\geq 3\}. \end{align*} The goal is to show that $B^{rrrr}\geq 0$ if and only if either $|S_3|>0$ or $|S_2|\ne 1$. The only if part is easy. We show the if part. We have \begin{align*} {}&B^{rrrr}\\={}&\frac{5}{2}\left(-(n_r+1) + \sum_{\alpha=1}^K (q_\alpha^r )^2\right)^2+(n_r+1)- \sum_{\alpha=1}^K (q_\alpha^r)^4\\ ={}&\frac{5}{2}\left( \sum_{\alpha\in S_2\cup S_3} \Big((q_\alpha^r )^2-q_\alpha^r\Big)\right)^2+\sum_{\alpha\in S_2\cup S_3} \Big(q_\alpha^r-(q_\alpha^r)^4\Big)\\ \geq {}&\frac{5}{2} \sum_{\alpha\in S_3} \Big((q_\alpha^r )^2-q_\alpha^r\Big)^2+ \frac{5}{2}\left( \sum_{\alpha\in S_2} \Big((q_\alpha^r )^2-q_\alpha^r\Big)\right)\left( \sum_{\alpha\in S_2\cup S_3} \Big((q_\alpha^r )^2-q_\alpha^r\Big)\right)+\sum_{\alpha\in S_2\cup S_3} \Big(q_\alpha^r-(q_\alpha^r)^4\Big)\\ \geq{}&\frac{5}{2}\left( \sum_{\alpha\in S_2} \Big((q_\alpha^r )^2-q_\alpha^r\Big)\right)\left( \sum_{\alpha\in S_2\cup S_3} \Big((q_\alpha^r )^2-q_\alpha^r\Big)\right)+\sum_{\alpha\in S_2} \Big(q_\alpha^r-(q_\alpha^r)^4\Big)\\ ={}&5|S_2|\left( 2|S_2|+ \sum_{\alpha\in S_3} \Big((q_\alpha^r )^2-q_\alpha^r\Big)\right)-14|S_2|. \end{align*} Here the first inequality is an easy computation, and for the second inequality we use the fact that if $q\geq 3$, then $$ \frac{5}{2}(q^2-q)^2+q-q^4=\frac{1}{2}q(q-1)(3q^2-7q-2)>0. $$ If $|S_3|>0$, then $\sum_{\alpha\in S_3} ((q_\alpha^r )^2-q_\alpha^r)\geq 6$ and hence the above inequality gives $$ B^{rrrr}\geq 30|S_2|-14|S_2|\geq 0. $$ If $|S_2|\ne 1$, then the above inequality gives $$ B^{rrrr}\geq 10|S_2|^2-14|S_2|\geq 0. $$ (2) Divide the index set into two parts: \begin{align*} S_1={}&\{\alpha\mid q_{\alpha}^r=1\};\\ S^\prime_2={}&\{\alpha\mid q_{\alpha}^r\geq 2\}. \end{align*} The goal is to show that $B^{rrrs}\geq 0$ if and only if $|S^\prime_2|>0$. The only if part is easy. 
We show the if part. Assume that $|S^\prime_2|>0$, then \begin{align*} B^{rrrs}={}&10\left(-(n_r+1) + \sum_{\alpha=1}^K (q_\alpha^r)^2 \right) \left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right)- 4\sum_{\alpha=1}^K (q_\alpha^r)^3 q_\alpha^s\\ ={}&10\left(\sum_{\alpha=1}^K \Big((q_\alpha^r)^2-q_\alpha^r\Big) \right) \left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right)- 4\sum_{\alpha=1}^K (q_\alpha^r)^3 q_\alpha^s\\ ={}&10\left(\sum_{\alpha\in S^\prime_2} \Big((q_\alpha^r)^2-q_\alpha^r\Big) \right) \left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right)- 4\sum_{\alpha=1}^K (q_\alpha^r)^3 q_\alpha^s\\ \geq{}& 5\left(\sum_{\alpha\in S^\prime_2} (q_\alpha^r)^2 \right) \left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right)- 4\sum_{\alpha\in S_1\cup S^\prime_2} (q_\alpha^r)^3 q_\alpha^s\\ \geq{}&\left(\sum_{\alpha\in S^\prime_2} (q_\alpha^r)^2 \right) \left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right)- 4\sum_{\alpha\in S_1} (q_\alpha^r)^3 q_\alpha^s\\ \geq{}&4 \left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right)- 4\sum_{\alpha\in S_1} q_\alpha^s\geq 0. \end{align*} Here for the first inequality we use the fact that $2(q^2-q)\geq q^2$ for $q\geq 2$, the second is just easy computation, and for the last inequality we use the fact that $\sum_{\alpha\in S^\prime_2} (q_\alpha^r)^2\geq 4$ since $|S^\prime_2|>0$. (3) Recall that $$-(n_r+1) + \sum_{\alpha=1}^K (q_\alpha^r )^2=\sum_{\alpha=1}^K \Big((q_\alpha^r )^2-q_\alpha^r\Big)\geq 0.$$ It is easy to see that \begin{align*}B^{rrss}\geq{}&10\left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right)^2- 6\sum_{\alpha=1}^K (q_\alpha^r q_\alpha^s)^2\\ \geq {}&4\left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right)^2\geq 0. 
\end{align*} (4) Similarly, it is easy to see that \begin{align*}B^{rrst}\geq {}&20\left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right)\left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^t \right)-12\sum_{\alpha=1}^K (q_\alpha^r)^2 q_\alpha^s q_\alpha^t\\ \geq{}& 8\left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right)\left( \sum_{\alpha=1}^K q_\alpha^r q_\alpha^t \right)\geq 0. \end{align*} (5) For $r<s<t<u$, it is easy to see that \begin{align*}A^{rstu} ={}&\frac{5}{2}\left(\sum_{\alpha=1}^K q_\alpha^r q_\alpha^s \right) \left(\sum_{\alpha=1}^K q_\alpha^t q_\alpha^u \right)- \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s q_\alpha^t q_\alpha^u\\ \geq {}&\frac{5}{2}\sum_{\alpha=1}^K q_\alpha^r q_\alpha^s q_\alpha^t q_\alpha^u - \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s q_\alpha^t q_\alpha^u\\ \geq {}&0. \end{align*} Similarly, $A^{rtsu}\geq 0$ and $A^{rust}\geq 0$. Hence $B^{rstu}\geq 0$. \end{proof} Note that, since the $J_r$'s are nef divisors by definition, it is easy to see that if all the coefficients in (\ref{BBB}) are non-negative, then $\td_4(X)$ is fakely effective. The following two lemmas deal with the cases where some of the coefficients are negative. Also note that, since we consider $X$ to be a general CICY in $\PP(\bn)$, the nef cones of $X$ and $\PP(\bn)$ coincide by Theorem \ref{kollar thm} (see the proof of Theorem \ref{verification for CICY}). Hence, to check the fake effectivity of a cycle on $X$, we may view it as a cycle on $\PP(\bn)$, since fake effectivity is tested against nef divisors. We will always use this observation. For two cycles $C$ and $C'$, we will write $C\succeq C'$ if $C-C'$ is fakely effective. We denote by $\RR$ the index set $\RR:=\{r\mid q_\alpha^r=1 \text{ for all } \alpha\}$. \begin{lemma}\label{Brrrr}If $r\not \in \RR$, $$ B^{rrrr}J_r^4+ \sum_{\substack{1\leq s \leq m \\ s\neq r}} B^{rrrs}J_r^3J_s\succeq 0. $$ \end{lemma} \begin{proof} By Lemma \ref{B geq}, $B^{rrrs}\geq 0$. Hence if $B^{rrrr}\geq 0$, there is nothing to prove. 
We may assume $B^{rrrr}< 0$. By Lemma \ref{B geq}, after reordering the indices, we have $q_{1}^r=2$ and $q_\alpha^r=1$ for all $\alpha\geq 2$. In this case $B^{rrrr}=-4$ and $$ B^{rrrs}=8q_1^s+16\sum_{\alpha=2}^K q_\alpha^s. $$ Also we have $n_r+1=\sum_{\alpha=1}^Kq_\alpha^r=K+1$. Viewing this cycle as a cycle on $\PP(\bn)$, we have \begin{align*} {}&B^{rrrr}J_r^4+ \sum_{\substack{1\leq s \leq m \\ s\neq r}} B^{rrrs}J_r^3J_s\\ ={}&-4J_r^4+ \sum_{\substack{1\leq s \leq m \\ s\neq r}} \left(8q_1^s+16\sum_{\alpha=2}^K q_\alpha^s\right)J_r^3J_s\\ ={}&\left(-4H_r^4+ \sum_{\substack{1\leq s \leq m \\ s\neq r}} \left(8q_1^s+16\sum_{\alpha=2}^K q_\alpha^s\right)H_r^3H_s\right)\cdot \prod_{\alpha=1}^K\left(\sum_{s=1}^m q_\alpha^sH_s\right)\\ ={}&4\left(-H_r^4+ \sum_{\substack{1\leq s \leq m \\ s\neq r}} \left(2q_1^s+4\sum_{\alpha=2}^K q_\alpha^s\right)H_r^3H_s\right)\cdot \left(2H_r+\sum_{s\neq r} q_1^sH_s\right)\cdot \prod_{\alpha=2}^K\left(H_r+\sum_{s\ne r} q_\alpha^sH_s\right)\\ ={}&8H_r^3\left(-H_r+ \sum_{\substack{1\leq s \leq m \\ s\neq r}} \left(2q_1^s+4\sum_{\alpha=2}^K q_\alpha^s\right)H_s\right)\cdot \left(H_r+\sum_{s\neq r} \frac{q_1^s}{2}H_s\right)\cdot \prod_{\alpha=2}^K\left(H_r+\sum_{s\ne r} q_\alpha^sH_s\right)\\ \succeq {}&8H_r^3\left(-H_r+ \sum_{\substack{1\leq s \leq m \\ s\neq r}} \left(\frac{1}{2}q_1^s+\sum_{\alpha=2}^K q_\alpha^s\right)H_s\right)\cdot \left(H_r+\sum_{s\neq r} \frac{q_1^s}{2}H_s\right)\cdot \prod_{\alpha=2}^K\left(H_r+\sum_{s\ne r} q_\alpha^sH_s\right).
\end{align*} Note that by Lemma \ref{positive polynomial}, $$ \left(-H_r+ \sum_{\substack{1\leq s \leq m \\ s\neq r}} \left(\frac{1}{2}q_1^s+\sum_{\alpha=2}^K q_\alpha^s\right)H_s\right)\cdot \left(H_r+\sum_{s\neq r} \frac{q_1^s}{2}H_s\right)\cdot \prod_{\alpha=2}^K\left(H_r+\sum_{s\ne r} q_\alpha^sH_s\right)+H_r^{K+1} $$ is a polynomial in $H_1,\ldots, H_m$ with non-negative coefficients, and note that $$H_r^{K+1}=0$$ since $K=n_r$ and $H_r$ is the pullback of a hyperplane on $\mathbb{P}^{n_r}$. Hence we have written $$B^{rrrr}J_r^4+ \sum_{\substack{1\leq s \leq m \\ s\neq r}} B^{rrrs}J_r^3J_s$$ as a polynomial in $H_1,\ldots, H_m$ with non-negative coefficients, which is clearly fakely effective. \end{proof} \begin{lemma}\label{Brrrs} If $r\in \RR$, $$ \sum_{\substack{1\leq s\leq m\\s\neq r}} B^{rrrs}J_r^3J_s+ \sum_{\substack{s\not \in \RR\\ s\neq r}} B^{rrss}J_r^2J_s^2+\sum_{\substack{s\in \RR\\ s\neq r}} \frac{1}{2}B^{rrss}J_r^2J_s^2+\sum_{\substack{1\leq s<t\leq m\\ s\neq r, t\neq r}} B^{rrst}J_r^2J_sJ_t\succeq 0. $$ Here for convenience, we set $B^{rrss}=B^{ssrr}$ if $r>s$. \end{lemma} \begin{proof}In this case, $n_r+1=\sum_{\alpha=1}^Kq_\alpha^r=K$. Hence $J_r^K=(H_r|_X)^{n_r+1}=0$, since $H_r$ is the pullback of a hyperplane on $\mathbb{P}^{n_r}$. We may assume $K\geq 3$; otherwise there is nothing to prove. We have \begin{align*} B^{rrrs}={}&- 4\sum_{\alpha=1}^K q_\alpha^s;\\ B^{rrss}={}& 10\left( \sum_{\alpha=1}^K q_\alpha^s \right)^2- 6\sum_{\alpha=1}^K (q_\alpha^s)^2\geq 4\left( \sum_{\alpha=1}^K q_\alpha^s \right)^2;\\ B^{rrst}={}& 20\left( \sum_{\alpha=1}^Kq_\alpha^s \right)\left( \sum_{\alpha=1}^K q_\alpha^t \right)-12\sum_{\alpha=1}^K q_\alpha^s q_\alpha^t\geq 8\left( \sum_{\alpha=1}^Kq_\alpha^s \right)\left( \sum_{\alpha=1}^K q_\alpha^t \right).
\end{align*} Moreover, since $K\geq 3$, if $s\in \RR$ and $s\ne r$, then $$ B^{rrss}=10\left( \sum_{\alpha=1}^K q_\alpha^s \right)^2- 6\sum_{\alpha=1}^K (q_\alpha^s)^2=10K^2-6K\geq 8K^2\geq 8\left( \sum_{\alpha=1}^K q_\alpha^s \right)^2. $$ Hence we have \begin{align*} {}&\sum_{\substack{1\leq s\leq m\\s\neq r}} B^{rrrs}J_r^3J_s+ \sum_{\substack{s\not \in \RR\\ s\neq r}} B^{rrss}J_r^2J_s^2+\sum_{\substack{s\in \RR\\ s\neq r}} \frac{1}{2}B^{rrss}J_r^2J_s^2+\sum_{\substack{1\leq s<t\leq m\\ s\neq r, t\neq r}} B^{rrst}J_r^2J_sJ_t\\ \succeq {}&-\sum_{\substack{1\leq s\leq m\\s\neq r}} 4\sum_{\alpha=1}^K q_\alpha^s J_r^3J_s+ \sum_{\substack{ s\neq r}} 4\left( \sum_{\alpha=1}^K q_\alpha^s \right)^2J_r^2J_s^2+\sum_{\substack{1\leq s<t\leq m\\ s\neq r, t\neq r}} 8\left( \sum_{\alpha=1}^Kq_\alpha^s \right)\left( \sum_{\alpha=1}^K q_\alpha^t \right)J_r^2J_sJ_t\\ ={}&-4\sum_{\substack{1\leq s\leq m\\s\neq r}} \sum_{\alpha=1}^K q_\alpha^s J_r^3J_s+ 4\left(\sum_{\substack{1\leq s\leq m\\s\neq r}} \sum_{\alpha=1}^K q_\alpha^sJ_rJ_s\right)^2\\ ={}&4J_r\left(\sum_{\substack{1\leq s\leq m\\s\neq r}} \sum_{\alpha=1}^K q_\alpha^sJ_rJ_s\right)\left(-J_r+\sum_{\substack{1\leq s\leq m\\s\neq r}} \sum_{\alpha=1}^K q_\alpha^sJ_s\right). 
\end{align*} Viewing this as a cycle on $\PP(\bn)$ and using the fact that $H_r^{K+1}=0$, we have \begin{align*} {}& -J_r+\sum_{\substack{1\leq s\leq m\\s\neq r}} \sum_{\alpha=1}^K q_\alpha^sJ_s\\ ={}&\left(-H_r+\sum_{\substack{1\leq s\leq m\\s\neq r}} \sum_{\alpha=1}^K q_\alpha^sH_s\right)\cdot \prod_{\alpha=1}^K\left(\sum_{s=1}^m q_\alpha^sH_s\right)\\ ={}&\left(-H_r+\sum_{\substack{1\leq s\leq m\\s\neq r}} \sum_{\alpha=1}^K q_\alpha^sH_s\right)\cdot \prod_{\alpha=1}^K\left(H_r+\sum_{s\ne r} q_\alpha^sH_s\right)\\ ={}&\left(-H_r+\sum_{\substack{1\leq s\leq m\\s\neq r}} \sum_{\alpha=1}^K q_\alpha^sH_s\right)\cdot \prod_{\alpha=1}^K\left(H_r+\sum_{s\ne r} q_\alpha^sH_s\right)+H_r^{K+1}, \end{align*} which can be expressed as a polynomial in $H_1,\ldots, H_m$ with non-negative coefficients by Lemma \ref{positive polynomial}. Hence we may express $$ \sum_{\substack{1\leq s\leq m\\s\neq r}} B^{rrrs}J_r^3J_s+ \sum_{\substack{s\not \in \RR\\ s\neq r}} B^{rrss}J_r^2J_s^2+\sum_{\substack{s\in \RR\\ s\neq r}} \frac{1}{2}B^{rrss}J_r^2J_s^2+\sum_{\substack{1\leq s<t\leq m\\ s\neq r, t\neq r}} B^{rrst}J_r^2J_sJ_t $$ as a polynomial in $H_1,\ldots, H_m$ with non-negative coefficients, which is clearly fakely effective.
\end{proof} \begin{proof}[Proof of Theorem \ref{CICY td4}]By (\ref{BBB}), Lemmas \ref{B geq}, \ref{Brrrr}, and \ref{Brrrs}, \begin{align*} {}&2880\td_4(X)\\={}&\sum_{r=1}^m B^{rrrr}J_r^4+ \sum_{\substack{1\leq r, s\leq m\\ r\neq s}} B^{rrrs}J_r^3J_s+ \sum_{1\leq r<s\leq m} B^{rrss}J_r^2J_s^2\\ {}&+\sum_{\substack{1\leq r,s,t\leq m\\ r\neq s, r\neq t, s<t}} B^{rrst}J_r^2J_sJ_t+\sum_{1\leq r<s<t<u\leq m} B^{rstu}J_r J_sJ_t J_u\\ \succeq {}&\sum_{r\not \in \RR} B^{rrrr}J_r^4+ \sum_{\substack{1\leq r, s\leq m\\ r\neq s}} B^{rrrs}J_r^3J_s+ \sum_{1\leq r<s\leq m} B^{rrss}J_r^2J_s^2+\sum_{\substack{r \in \RR\\ 1\leq s,t\leq m\\ r\neq s, r\neq t, s<t}} B^{rrst}J_r^2J_sJ_t\\ = {}&\sum_{r\not \in \RR} B^{rrrr}J_r^4+ \sum_{r\not \in \RR}\sum_{\substack{1\leq s \leq m \\ s\neq r}} B^{rrrs}J_r^3J_s\\{}&+\sum_{r \in \RR}\sum_{\substack{1\leq s \leq m \\ s\neq r}} B^{rrrs}J_r^3J_s+ \sum_{1\leq r<s \leq m} B^{rrss}J_r^2J_s^2+\sum_{\substack{r \in \RR\\ 1\leq s,t\leq m\\ r\neq s, r\neq t, s<t}} B^{rrst}J_r^2J_sJ_t\\ \succeq {}&\sum_{r\not \in \RR} \left[B^{rrrr}J_r^4+ \sum_{ s\neq r} B^{rrrs}J_r^3J_s\right]\\{}&+\sum_{r \in \RR}\left[\sum_{s\neq r} B^{rrrs}J_r^3J_s+ \sum_{\substack{s\not \in \RR\\ s\neq r}} B^{rrss}J_r^2J_s^2+\sum_{\substack{s\in \RR\\ s\neq r}} \frac{1}{2}B^{rrss}J_r^2J_s^2+\sum_{\substack{1\leq s<t\leq m\\ s\neq r, t\neq r}} B^{rrst}J_r^2J_sJ_t\right]\\ \succeq{}& 0. \end{align*} Here for the first and second inequalities we use Lemma \ref{B geq}, and for the last inequality we use Lemmas \ref{Brrrr} and \ref{Brrrs}. \end{proof} \section{Some effective results} In this section, using the Hirzebruch--Riemann--Roch formula and the Miyaoka--Yau inequality \cite{miyaoka, yau0}, we prove a weaker version of Conjecture \ref{kawamata conj} in all dimensions (which is related to a conjecture of Beltrametti and Sommese; see, for instance, H\"{o}ring's work \cite{horing}). We first consider odd dimensions.
\begin{theorem}\label{HRR for odd CY} Let $X$ be a smooth projective variety of dimension $2k+1$ ($k\geq1$) with $c_1(X)=0$ in $H^2(X, \bR)$ and $L$ a nef and big line bundle on $X$. Then there exists $i\in\{1,2,\ldots, k\}$ such that $H^{0}(X,L^{\otimes i})\neq0$. \end{theorem} \begin{proof} Suppose for contradiction that $h^{0}(X,L^{\otimes i})=0$ for $i=1,2,\ldots,k$; then $\chi(X,L^{\otimes i})=0$ by the Kawamata--Viehweg vanishing theorem. Since $\td_{\text{odd}}(X)=0$, the Hirzebruch--Riemann--Roch formula gives \begin{equation}f(t)\triangleq \chi(X,L^{\otimes t})=\int_{X}\frac{L^{2k+1}}{(2k+1)!}t^{2k+1}+\int_{X}\frac{L^{2k-1}\cdot \td_{2}(X)}{(2k-1)!}t^{2k-1} +\cdots+\int_{X}(L\cdot \td_{2k}(X))t. \nonumber \end{equation} Hence $f(-t)=-f(t)$, and the degree-$(2k+1)$ polynomial $f(t)$ has roots $\{0,\pm1,\pm2,\ldots,\pm k\}$. Therefore we can write \begin{equation}f(t)=\alpha t(t^{2}-1)(t^{2}-2^{2})\cdots(t^{2}-k^{2}), \nonumber \end{equation} where $\alpha=\int_{X}\frac{L^{2k+1}}{(2k+1)!}>0$. The coefficient of $t^{2k-1}$ is $-\alpha\cdot(\sum_{i=1}^{k}i^{2})=\int_{X}\frac{L^{2k-1}\cdot c_{2}(X)}{12(2k-1)!}$, which is a contradiction, as the left-hand side is negative while the right-hand side is non-negative by the Miyaoka--Yau inequality \cite{miyaoka, yau0}. \end{proof} Then we consider even dimensions. \begin{theorem}\label{HRR for even CY} Let $X$ be a smooth projective variety of dimension $4k+2$ or $4k+4$ ($k\geq0$) with $c_1(X)=0$ in $H^2(X, \bR)$ and $L$ a nef and big line bundle on $X$. Then there exists $i\in\{1,2,\ldots,2k+1\}$ such that $H^{0}(X,L^{\otimes i})\neq0$. \end{theorem} \begin{proof} If $\dim X=4k+2$, suppose for contradiction that $h^{0}(X,L^{\otimes i})=0$ for $i=1,2,\ldots,2k+1$; then $\chi(X,L^{\otimes i})=0$. Since $\td_{\text{odd}}(X)=0$, the Hirzebruch--Riemann--Roch formula gives \begin{equation} f(t)\triangleq \chi(X,L^{\otimes t})=\int_{X}\frac{L^{4k+2}}{(4k+2)!}t^{4k+2}+\int_{X}\frac{L^{4k}\cdot \td_{2}(X)}{(4k)!}t^{4k} +\cdots+\chi(X,\mathcal{O}_{X}). \nonumber \end{equation}
Hence $f(-t)=f(t)$, and the degree-$(4k+2)$ polynomial $f(t)$ has roots $\{\pm1,\pm2,\ldots,\pm (2k+1)\}$. Therefore we can write \begin{equation}f(t)=\alpha (t^{2}-1)(t^{2}-2^{2})\cdots(t^{2}-(2k+1)^{2}), \nonumber \end{equation} where $\alpha=\int_{X}\frac{L^{4k+2}}{(4k+2)!}>0$. The coefficient of $t^{4k}$ is $-\alpha\cdot(\sum_{i=1}^{2k+1}i^{2})=\int_{X}\frac{L^{4k}\cdot c_{2}(X)}{12(4k)!}$, which is a contradiction, as the left-hand side is negative while the right-hand side is non-negative by the Miyaoka--Yau inequality \cite{miyaoka, yau0}. If $\dim X=4k+4$, we similarly assume that $f(t)=\chi(X,L^{\otimes t})$ has roots $\{\pm1,\pm2,\ldots,\pm (2k+1)\}$, and then \begin{equation}f(t)=\alpha (t^{2}-1)(t^{2}-2^{2})\cdots(t^{2}-(2k+1)^{2})(t^{2}-\beta) \nonumber \end{equation} for some $\beta\in\mathbb{C}$ and $\alpha=\int_{X}\frac{L^{4k+4}}{(4k+4)!}>0$. The coefficient of $t^{4k+2}$ is $-\alpha\cdot(\beta+\sum_{i=1}^{2k+1}i^{2})=\int_{X}\frac{L^{4k+2}\cdot c_{2}(X)}{12(4k+2)!}$, and the constant term is $\alpha\cdot\beta\cdot((2k+1)!)^{2}=\chi(X,\mathcal{O}_{X})$. The Miyaoka--Yau inequality gives $\beta<0$, which contradicts the fact that $\chi(X,\mathcal{O}_{X})\geq0$ by Theorem \ref{classification of CY}. \end{proof} \vspace{1pt} \begin{remark} ~\\ 1. As a corollary, Conjecture \ref{kawamata conj} holds true in dimension $n\leq4$. \\ 2. For a hyperk\"{a}hler variety $X$ of dimension $2n$ ($n\geq 2$) and $L$ a nef and big line bundle on $X$, we can enhance the above result by using the effectiveness of the fourth Todd class (Theorem \ref{effective of Td4}), and show that there exists a positive integer $i\leq \rounddown{\frac{n-2}{2}}+\rounddown{\frac{n}{2}}$ such that $H^{0}(X,L^{\otimes i})\ne 0$. We leave the details to the reader. \end{remark} \section{Appendix} In the appendix, we prove some basic lemmas. \begin{lemma}\label{lemma on integers} Let $n\in\mathbb{Z}_{>0}$ be a positive integer and $\lambda\in\mathbb{Q}$ be a rational number.
\\ (1) If $n\geq2$ and $(n+1)\cdot\binom{\lambda+n}{n }$ is an integer, then $\lambda\in\mathbb{Z}$. \\ (2) If $\binom{\lambda+n+1 }{n }$ is an integer, then $\lambda\in\mathbb{Z}$. \end{lemma} \begin{proof} (1) Write $\lambda=p/q$ for $p\in \mathbb{Z}$ and $q\in\mathbb{Z}_{>0}$ with $\gcd(p,q)=1$. Suppose for contradiction that $q>1$. Then $(n+1)\cdot\binom{\lambda+n}{n }\in \ZZ$ implies that \begin{align} n!q^n\mid (n+1)(p+nq)\cdots(p+q).\label{n1} \end{align} We claim that either $(n+1)$ is prime or $(n+1)\mid2\cdot n!$. If $n\leq 4$, this is obvious. Assume that $n\geq5$. If $(n+1)$ is not prime, we have a factorization $(n+1)=a\cdot b$ for integers $2\leq a,b\leq n$. If $a\neq b$, they both appear in $n!$ as factors, and hence $(n+1)\mid n!$. If $a=b$, then $a=\sqrt{n+1}\leq \frac{n}{2}$, so $2a\leq n$. Hence $a$ and $2a$ appear in $n!$ as factors, and hence $a^2\mid n!$. For the case when $(n+1)$ is prime, (\ref{n1}) implies that $$q^{n}\mid (n+1)(p+nq)\cdots(p+q), $$ which implies that $q\mid (n+1)p^{n}$ and $q^{2}\mid (n+1)(p^{n}+\frac{n(n+1)}{2}p^{n-1}q)$. As $(p,q)=1$ and $(n+1)$ is prime, we conclude from the first divisibility relation that $q=n+1$. Combining this with the second relation, we get $q^{2}\mid q p^{n}$, which contradicts $(p,q)=1$. For the case when $(n+1)\mid 2\cdot n!$, (\ref{n1}) implies that $$q^{n}\mid 2(p+nq)\cdots(p+q),$$ which implies that $q\mid 2p^{n}$. As $(p,q)=1$, we conclude that $q=2$ and $p$ is odd. Hence $(p+nq)\cdots(p+q)$ is odd, and (\ref{n1}) implies that $2^{n}\mid n+1$, which is absurd for $n\geq 2$. (2) Write $\lambda=p/q$ for $p\in \mathbb{Z}$ and $q\in\mathbb{Z}_{>0}$ with $\gcd(p,q)=1$. Then $\binom{\lambda+n+1 }{n }\in \ZZ$ implies that \begin{equation}q^{n}\mid (p+(n+1)q)(p+nq)\cdots(p+2q), \nonumber \end{equation} and hence $q\mid p^{n}$. Since $(p,q)=1$, this implies that $q=1$ and $\lambda$ is an integer.
\end{proof} \begin{lemma}\label{positive polynomial}Let $m,K$ be two positive integers and $\{q_\alpha^s\mid 1\leq \alpha\leq K, 1\leq s\leq m\}$ a set of non-negative numbers. Consider the homogeneous polynomial $$ f(x_1,\ldots, x_m)=\left(-x_1+\sum_{\alpha=1}^K\sum_{s=2}^mq _\alpha^s x_s\right)\cdot \prod_{\alpha=1}^K \left(x_1+\sum_{s=2}^mq_\alpha^s x_s\right)+x_1^{K+1}. $$ Then all coefficients of $f$ are non-negative. \end{lemma} \begin{proof} Consider $f$ as a polynomial in $x_1$ whose coefficients are polynomials in $x_2,\ldots, x_m$. We need to show that for $0\leq k\leq K+1$, the coefficient of $x_1^k$ is a polynomial in $x_2,\ldots, x_m$ with non-negative coefficients. It is easy to see that the coefficients of $x_1^{K+1}$ and $x_1^{K}$ are both $0$. Fix $1\leq k\leq K$; then the coefficient of $x_1^{K-k}$ is \begin{align*} {}& \left(\sum_{\alpha=1}^K\sum_{s=2}^mq _\alpha^s x_s\right)\cdot \left(\sum_{\alpha_1<\cdots <\alpha_k}\prod_{j=1}^k\sum_{s=2}^mq_{\alpha_j}^s x_s\right)- \sum_{\alpha_1<\cdots <\alpha_{k+1}}\prod_{j=1}^{k+1}\sum_{s=2}^mq_{\alpha_j}^s x_s\\ ={}&\sum_{\alpha_1<\cdots <\alpha_k}\left( \left(\sum_{\alpha=1}^K\sum_{s=2}^mq _\alpha^s x_s\right)\cdot \left(\prod_{j=1}^k\sum_{s=2}^mq_{\alpha_j}^s x_s\right) - \sum_{\alpha_{k+1}>\alpha_k}\prod_{j=1}^{k+1}\sum_{s=2}^mq_{\alpha_j}^s x_s\right)\\ ={}&\sum_{\alpha_1<\cdots <\alpha_k}\left( \left(\sum_{\alpha=1}^K\sum_{s=2}^mq _\alpha^s x_s\right)\cdot \left(\prod_{j=1}^k\sum_{s=2}^mq_{\alpha_j}^s x_s\right) - \left(\sum_{\alpha_{k+1}>\alpha_k}\sum_{s=2}^mq_{\alpha_{k+1}}^s x_s\right)\cdot\left(\prod_{j=1}^{k}\sum_{s=2}^mq_{\alpha_j}^s x_s\right)\right)\\ ={}&\sum_{\alpha_1<\cdots <\alpha_k}\left( \left(\sum_{\alpha\leq \alpha_k}\sum_{s=2}^mq _\alpha^s x_s\right)\cdot \left(\prod_{j=1}^k\sum_{s=2}^mq_{\alpha_j}^s x_s\right)\right) \end{align*} which is a polynomial in $x_2,\ldots, x_m$ with non-negative coefficients. \end{proof}
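For readers who want a quick sanity check of Lemma \ref{positive polynomial}, the following short script (not part of the proof; the particular values of $q_\alpha^s$ are arbitrary non-negative test data) expands $f$ symbolically for $m=3$, $K=4$ and verifies that every coefficient is non-negative.

```python
# Symbolic sanity check of the "positive polynomial" lemma: for non-negative
# q_alpha^s, every coefficient of
#   f = (-x1 + sum_alpha sum_{s>=2} q_alpha^s x_s)
#         * prod_alpha (x1 + sum_{s>=2} q_alpha^s x_s) + x1^(K+1)
# is non-negative.  The q values below are arbitrary test data.
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")
q = [[1, 2], [3, 1], [0, 2], [1, 0]]       # q[alpha] = (q_alpha^2, q_alpha^3)
K = len(q)

factors = [x1 + a * x2 + b * x3 for a, b in q]
total = sum(a * x2 + b * x3 for a, b in q)  # = 5*x2 + 5*x3 here

f = sp.expand((-x1 + total) * sp.Mul(*factors) + x1 ** (K + 1))
coeffs = sp.Poly(f, x1, x2, x3).coeffs()    # nonzero coefficients of f
print(all(c >= 0 for c in coeffs))          # True
```

The check matches the structure of the proof: writing $\prod_\alpha(x_1+\ell_\alpha)=\sum_j e_j x_1^{K-j}$ with $e_j$ the elementary symmetric polynomials in the non-negative linear forms $\ell_\alpha$, the coefficient of $x_1^{K-j}$ in $f$ is $e_1 e_j - e_{j+1}$, which is coefficientwise non-negative.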
https://arxiv.org/abs/2007.11730
Nonclosedness of Sets of Neural Networks in Sobolev Spaces
We examine the closedness of sets of realized neural networks of a fixed architecture in Sobolev spaces. For an exactly $m$-times differentiable activation function $\rho$, we construct a sequence of neural networks $(\Phi_n)_{n \in \mathbb{N}}$ whose realizations converge in order-$(m-1)$ Sobolev norm to a function that cannot be realized exactly by a neural network. Thus, sets of realized neural networks are not closed in order-$(m-1)$ Sobolev spaces $W^{m-1,p}$ for $p \in [1,\infty]$. We further show that these sets are not closed in $W^{m,p}$ under slightly stronger conditions on the $m$-th derivative of $\rho$. For a real analytic activation function, we show that sets of realized neural networks are not closed in $W^{k,p}$ for any $k \in \mathbb{N}$. The nonclosedness allows for approximation of non-network target functions with unbounded parameter growth. We partially characterize the rate of parameter growth for most activation functions by showing that a specific sequence of realized neural networks can approximate the activation function's derivative with weights increasing inversely proportional to the $L^p$ approximation error. Finally, we present experimental results showing that networks are capable of closely approximating non-network target functions with increasing parameters via training.
\section{Introduction} \label{sec:intro} From an approximation theory perspective, neural networks use observed training data to approximate an unknown target function. Studying topological properties of sets of neural networks will reveal what kinds of functions can be approximated by neural networks. In particular, closedness of sets of networks is a topological property of interest. If these sets are closed with respect to some norm, then one can construct a sequence of neural networks converging to a target function in that norm if and only if that target function is itself a network. On the other hand, nonclosedness would mean that neural networks can approximate target functions that are not networks themselves. An up-to-date survey on the approximation results for neural networks can be found in \citep{GRK20}. To allow neural networks to approximate a wider class of functions, the number of nodes in the network can be increased. As long as the number of hidden nodes is allowed to grow without bound, Hornik's Universal Approximation Theorem shows that neural networks with only one hidden layer can approximate any $p$-integrable function to arbitrary accuracy \citep{H91}. Other approximation theorems show that neural networks are dense in other function classes, depending on the properties of the activation function, but many of these results allow the depth or width of the network to vary \citep{C89,HSW89,HSW90,KSH12}. These results suggest that sets of realized neural networks are not closed in the corresponding function spaces, since not all of these functions can be represented exactly by a neural network. However, these are results about sets of networks of any width. In practice, the architecture of a neural network is fixed before the learning process begins. Hence, we consider properties of sets of neural networks with a fixed architecture. 
Related to closedness is the best approximation property in $L^p$ spaces, which holds if every $f \in L^p$ has at least one realized neural network $g$ such that $\|f-g\|_{L^p}$ is minimized over all possible networks with the same architecture. In disproving the best approximation property for sigmoidal neural networks of a fixed size, \citet{GP90} show that sets of realized neural networks with the sigmoid activation function are not closed in $L^p$ spaces and claim that this should be true for all nonlinear activation functions. However, \citet{K95} proves that sets of networks with the Heaviside activation function are closed in $L^p$ spaces, and \citet{KKV20} proves a similar result for generalizations of Heaviside-type networks. Moreover, \citet{KKV00} shows that Heaviside perceptron networks do have the best approximation property, but that best approximation maps with these networks are not unique or continuous. More generally, no best approximation maps using fixed-architecture neural networks are continuous \citep{KKV99}. Petersen, Raslan, and Voigtlaender discuss the closedness and other topological properties of sets of realized neural networks \citep{PRV19,PRV20}. Among other results, they prove that for most commonly used activation functions, these sets are not closed with respect to $L^p$ norms. However, sets of all neural networks with a fixed architecture and uniformly bounded parameters \textit{are} closed, indicating that learning a non-network target function requires the parameters to grow without bound. For example, learning a target function that has fewer derivatives than the activation function requires at least one network parameter to tend to infinity even as the network width increases, as seen in \citep{M97}. We partially investigate the relationship between $L^p$ approximation error and parameter size, finding that the two are approximately inversely proportional when learning the derivative of the activation function.
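The flavor of this inverse relationship can be seen in a minimal numerical sketch (our own illustration, not a construction taken from the cited works; the choice $\rho=\tanh$ and the evaluation grid are arbitrary): the two-layer network $x \mapsto n\,(\rho(x+1/n)-\rho(x))$, with architecture $(1,2,1)$, realizes a difference quotient of the activation function, so it converges to $\rho'$ while its outer weights $\pm n$ grow without bound.

```python
# Hedged sketch: the realization x -> n*(rho(x + 1/n) - rho(x)) is a two-layer
# network whose outer weights are +-n; its sup-norm distance to rho' on [-1, 1]
# decays roughly like 1/n, so error * weight size stays roughly constant.
import numpy as np

rho = np.tanh
drho = lambda x: 1.0 / np.cosh(x) ** 2       # tanh'(x) = sech(x)^2

xs = np.linspace(-1.0, 1.0, 2001)
errs = []
for n in [10, 100, 1000]:
    net = n * (rho(xs + 1.0 / n) - rho(xs))  # weights grow linearly in n
    errs.append(np.max(np.abs(net - drho(xs))))
    print(n, errs[-1])                       # error shrinks roughly like 1/n
```

This difference-quotient sequence is a standard construction and may differ from the specific sequence analyzed later; it is only meant to illustrate why approximating $\rho'$ forces unbounded parameter growth.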
In \citep{PRV19} (a shortened conference paper version of \citep{PRV20}), the authors speculate that sets of neural networks may be closed in Sobolev spaces, where convergence is stronger. However, we show that this is not true, extending their \textit{nonclosedness} results to convergence in Sobolev norm under additional smoothness assumptions on the activation function using proof techniques similar to those in \citep{PRV20}. Our results apply to the case $p=\infty$ after minor modifications, so sets of realized neural networks are also not closed in $W^{m,\infty}$. In some cases, such as network compression or distillation, we have data about the derivatives of the target function in addition to the training data. In these instances, one can train a network to learn the target function and its derivatives. This approach, introduced in \citep{COJSP17} as Sobolev training, often requires less training data and performs better on testing data. Hence, it is natural to consider the theoretical properties of neural networks in Sobolev spaces, as we do in this paper. We also provide some experimental results using Sobolev training with activation functions of varying degrees of smoothness, where we are able to approximate non-network target functions in Sobolev norm. This result indicates that sets of realized neural networks are indeed not closed in Sobolev spaces, but also that Sobolev training does not prevent parameter growth and allows us to approximate functions on the boundary of these sets of realized networks. Our experiments exhibit slow parameter growth relative to a fast decrease in approximation error, but there may be target functions that require much faster parameter growth to approximate. \subsection{Contributions of this Work} \label{subsec:contributions} Our work considers sets of realized neural networks with a fixed architecture and a fixed activation function. 
In particular, we study the closedness of these sets of realized neural networks in Sobolev spaces. Our main contributions are: \begin{enumerate}[label=\arabic*)] \item We establish in Theorem \ref{thm:m-(m+1)} that for an $m$-times differentiable activation function that is not $(m+1)$-times differentiable, sets of realized neural networks are \textit{not} closed in order-$(m-1)$ Sobolev spaces $W^{m-1,p}$ for $p \in [1,\infty]$. We prove this result by constructing a sequence of neural networks that converges in Sobolev norm to a target function that is not a neural network, which follows the approach in \citep{PRV20}. \item We extend the nonclosedness result of Theorem \ref{thm:m-(m+1)} to $W^{m,p}$ under an additional assumption on the activation function. \item For real analytic activation functions, Theorem \ref{thm:rho_smooth} shows that sets of realized neural networks are not closed in any order Sobolev spaces. \item We show analytically in Proposition~\ref{prop:rates} that for most activation functions, the $L^p$ approximation error decays in inverse proportion to the growth of the network parameters for a given sequence of networks approximating the derivative of the activation function. The relationship between approximation error and weight growth relates to Theorem 1 in \citep{M97}, although that result holds for $L^\infty$ error of shallow networks with increasing width. \item We present some experiments in Section \ref{sec:experiments} demonstrating that neural networks can approximate target functions that require increasingly large parameters. Our example achieves a fast decay in approximation error with a relatively slow growth in the network parameters, which may not be the case for other non-network target functions. These results appear to be robust to varying degrees of smoothness of the activation function and to certain classes of target functions.
\end{enumerate} Our nonclosedness results all indicate that neural networks can be trained to approximate non-network target functions in Sobolev norm. However, we will see that doing so will necessarily cause unbounded growth of network parameters. Thus, the training process may be difficult in practice, or regularization techniques may prevent a network from approximating a non-network target function. In our experiments, we train sequences of networks to approximate non-network target functions in Sobolev norm. The networks are able to closely approximate these target functions, providing further evidence that sets of realized neural networks are not closed in Sobolev spaces. Moreover, the ability to numerically approximate functions on the boundary of these sets of realized neural networks speaks to the expressiveness of neural networks in practice. \subsection{Outline of this Paper} \label{subsec:outline} Our work first provides background material on neural networks, then discusses the closedness of realized neural networks in Sobolev spaces, and finally presents some related numerical results. Section \ref{sec:notation} lays out the definitions and notation required for the rest of the paper. In Section \ref{sec:nonclosedness} we begin with our main results that sets of realized neural networks are not closed in Sobolev spaces under reasonable conditions on the activation function. On the other hand, Section \ref{subsec:closedness} studies realizations of networks with bounded parameters, and presents a result that these sets of realizations \textit{are} closed in Sobolev spaces. Section \ref{subsec:rates} analyzes the relationship between $L^p$ approximation error and parameter growth for a network learning its activation function's derivative. 
We provide experimental results in Section \ref{sec:experiments} that demonstrate the nonclosedness of sets of realized neural networks and show that some classes of non-network target functions can indeed be approximated in Sobolev norm by a sequence of networks with increasing parameters. \section{Notation and Definitions} \label{sec:notation} We first define neural networks. Every network has an architecture which specifies the input dimension, the number of layers, and the number of nodes in each layer. In addition, the network consists of matrix-vector pairs that determine the affine transformation between consecutive layers. \begin{defn} \citep{PRV20} Let $d,L \in \mathbb{N}$. A \textbf{neural network $\Phi$ with input dimension $d$ and $L$ layers} is a sequence of matrix-vector pairs \begin{equation*} \Phi = \big( (A_1,b_1),\dots,(A_L,b_L) \big), \end{equation*} where $N_0=d$ and $N_1,\dots,N_L \in \mathbb{N}$, and where each $A_\ell$ is an $N_\ell \times N_{\ell-1}$ matrix, and $b_\ell \in \mathbb{R}^{N_\ell}$. We call $(d,N_1,\dots,N_L)$ the \textbf{architecture} of $\Phi$. $N_L$ is the \textbf{output dimension}. Define $\mathcal{NN}(d,N_1,\dots,N_L)$ to be the \textbf{set of all neural networks} $\Phi$ with architecture $(d,N_1,\dots,N_L)$. \end{defn} To emphasize the role of the activation function, we distinguish between a neural network and a realized neural network. A realized network is a function defined by alternately applying the affine transformations of the network and the activation function. We also define the set of all realized neural networks with a fixed architecture and the same activation function. \begin{defn} \citep{PRV20} Let $\Phi$ be a neural network, $\Omega \subset \mathbb{R}^d$, and $\rho : \mathbb{R} \to \mathbb{R}$. 
The \textbf{realization of $\Phi$ with activation function $\rho$ over $\Omega$} is the function $R_\rho^\Omega(\Phi) : \Omega \to \mathbb{R}^{N_L}$ defined by \begin{equation*} R_\rho^\Omega(\Phi)(x) = W_L ( \rho(W_{L-1} ( \cdots \rho( W_1(x))))) \end{equation*} where the affine transformation $W_\ell : \mathbb{R}^{N_{\ell-1}} \to \mathbb{R}^{N_\ell}$ is defined by $W_\ell(x) = A_\ell x + b_\ell$ and $\rho$ is evaluated componentwise. Define $R_\rho^\Omega$ to be the \textbf{realization map} $\Phi \mapsto R_\rho^\Omega(\Phi)$, and let \begin{equation*} \mathcal{RNN}_\rho^\Omega(d,N_1,\dots,N_L) \vcentcolon= R_\rho^\Omega\big( \mathcal{NN}(d,N_1,\dots,N_L) \big). \end{equation*} We call $\mathcal{RNN}_\rho^\Omega(d,N_1,\dots,N_L)$ the \textbf{set of $\rho$-realizations of networks with architecture $(d,N_1,\dots,N_L)$ over $\Omega$}. \end{defn} We will sometimes need to concatenate networks, which creates a new neural network consisting of the matrix-vector pairs of the first network followed by the pairs of the second network. \begin{defn} \citep{PRV20} Let $\Phi_1 = \big( (A_1^1,b_1^1),\dots,(A_{L_1}^1,b_{L_1}^1) \big)$ and $\Phi_2 = \big( (A_1^2,b_1^2),\dots,(A_{L_2}^2,b_{L_2}^2) \big)$ be two neural networks such that the input dimension of $\Phi_1$ equals the output dimension of $\Phi_2$. Then \begin{equation*} \Phi_1 \bullet \Phi_2 \vcentcolon= \big( (A_1^2,b_1^2),\dots,(A_{L_2-1}^2,b_{L_2-1}^2),(A_1^1 A_{L_2}^2, A_1^1b_{L_2}^2 + b_1^1), (A_2^1,b_2^1),\dots,(A_{L_1}^1,b_{L_1}^1) \big) \end{equation*} defines a neural network with $L_1+L_2-1$ layers. We call $\Phi_1 \bullet \Phi_2$ the \textbf{concatenation of $\Phi_1$ and $\Phi_2$}. \end{defn} Note that for any activation function $\rho: \mathbb{R} \to \mathbb{R}$ and any $\Omega \subset \mathbb{R}^{d_2}$, we have $R_\rho^\Omega(\Phi_1 \bullet \Phi_2) = R_\rho^{\mathbb{R}^{d_1}}(\Phi_1) \circ R_\rho^\Omega(\Phi_2)$, where $d_i$ is the input dimension of $\Phi_i$. 
That is, concatenation of neural networks corresponds to function composition of the realizations of those networks. For a fixed network architecture $(d,N_1,\dots,N_L)$, we want to consider the closedness of the set $\mathcal{RNN}_\rho^\Omega(d,N_1,\dots,N_L)$ in Sobolev spaces. We define Sobolev spaces below. \begin{defn} Let $k, d \in \mathbb{N}$, let $\Omega \subset \mathbb{R}^d$ be measurable with non-empty interior, and let $1 \leq p \leq \infty$. The \textbf{Sobolev space} $W^{k,p}(\Omega)$ consists of all functions $f$ on $\Omega$ such that for all multi-indices $\alpha$ with $|\alpha|\leq k$, the mixed partial derivative $f^{(\alpha)} \vcentcolon= D^\alpha f$ exists in the weak sense and belongs to $L^p(\Omega)$. That is, \begin{equation*} W^{k,p}(\Omega) = \left\{ f \in L^p(\Omega) : D^\alpha f \in L^p(\Omega) \text{ for all } |\alpha| \leq k \right\}. \end{equation*} The number $k$ is the \textbf{order} of the Sobolev space. The norm \begin{equation*} \|f\|_{W^{k,p}(\Omega)} \vcentcolon= \sum_{|\alpha| \leq k} \| D^\alpha f \|_{L^p(\Omega)} \end{equation*} makes $W^{k,p}(\Omega)$ a Banach space for any $k \in \mathbb{N}$. Note that $W^{0,p}(\Omega) = L^p(\Omega)$. \end{defn} \section{(Non)closedness in Sobolev Spaces} \label{sec:nonclosedness} In \citep{PRV20} it is shown that $\mathcal{RNN}_\rho^{[-B,B]^d}(d,N_1,\dots,N_{L-1},1)$ is \textit{not} closed in $L^p([-B,B]^d)$ for any $p \in (0,\infty)$, under mild assumptions satisfied by most commonly used activation functions (including ReLU, the rectified linear unit). Moreover, these sets of realized neural networks of a fixed architecture are \textit{not} closed in $C([-B,B]^d)$ with respect to the $L^\infty$ norm for most commonly used activation functions. However, sets of ReLU-realizations of two-layer networks \textit{are} closed in $C([-B,B]^d)$. These results are shown for $[-B,B]^d$, but generalize to any compact set $\Omega \subset \mathbb{R}^d$ with non-empty interior.
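As a quick numerical sanity check of the realization and concatenation definitions in Section \ref{sec:notation} (an illustration with arbitrary random weights and the arbitrary choice $\rho=\tanh$, not part of any cited result), the script below verifies $R_\rho(\Phi_1 \bullet \Phi_2) = R_\rho(\Phi_1) \circ R_\rho(\Phi_2)$ on a sample input.

```python
# Numerical check of the concatenation identity R(Phi1 . Phi2) = R(Phi1) o R(Phi2)
# from the definitions above; random weights and rho = tanh are arbitrary test data.
import numpy as np

rho = np.tanh

def realize(phi, x):
    # phi is a list of (A, b) pairs; rho is applied between layers, not after the last.
    for A, b in phi[:-1]:
        x = rho(A @ x + b)
    A, b = phi[-1]
    return A @ x + b

def concat(phi1, phi2):
    # Merge the last affine map of phi2 with the first affine map of phi1.
    (A1, b1), rest1 = phi1[0], phi1[1:]
    AL, bL = phi2[-1]
    return phi2[:-1] + [(A1 @ AL, A1 @ bL + b1)] + rest1

rng = np.random.default_rng(0)
phi2 = [(rng.standard_normal((3, 2)), rng.standard_normal(3)),   # arch (2, 3, 2)
        (rng.standard_normal((2, 3)), rng.standard_normal(2))]
phi1 = [(rng.standard_normal((4, 2)), rng.standard_normal(4)),   # arch (2, 4, 1)
        (rng.standard_normal((1, 4)), rng.standard_normal(1))]

x = rng.standard_normal(2)
lhs = realize(concat(phi1, phi2), x)
rhs = realize(phi1, realize(phi2, x))
print(np.allclose(lhs, rhs))   # True
```

Note that the merged pair $(A_1^1 A_{L_2}^2,\, A_1^1 b_{L_2}^2 + b_1^1)$ is exactly the middle entry of $\Phi_1 \bullet \Phi_2$ in the definition above, so the concatenated network has $L_1 + L_2 - 1$ layers as claimed.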
In this work, we investigate the closedness of sets of realized neural networks in Sobolev spaces. Since convergence in Sobolev norm is stronger than $L^p$ convergence, Petersen, Raslan, and Voigtlaender anticipate that $\mathcal{RNN}_\rho^{[-B,B]^d}(d,N_1,\dots,N_{L-1},1)$ may be closed in Sobolev spaces \citep{PRV19}. We prove that this is often not the case. Provided that the activation function $\rho$ is $m$-times but not $(m+1)$-times continuously differentiable, with derivatives bounded on compact sets, these sets are not closed in $W^{m-1,p}([-B,B]^d)$ for any $p \in [1,\infty]$. \begin{theorem} \label{thm:m-(m+1)} Let $m, d \in \mathbb{N}$, $p \in [1,\infty]$, and $B>0$. Define $\Omega = [-B,B]^d$. Consider a network architecture $(d,N_1,\dots,N_{L-1},1)$ with $L \geq 2$ and $N_{L-1} \geq 2$. Suppose that $\rho \in C^m(\mathbb{R}) \setminus C^{m+1}(\mathbb{R})$ and all derivatives of $\rho$ up to order $m$ are locally $p$-integrable and bounded on compact sets. Then: \begin{itemize} \item The set $\mathcal{RNN}_\rho^{\Omega}(d,N_1,\dots,N_{L-1},1)$ is not closed in $W^{m-1,p}(\Omega)$. \item If additionally $\rho^{(m)}$ is absolutely continuous and the weak derivative $\rho^{(m+1)}$ exists and is in $L^p(\Omega)$, then $\mathcal{RNN}_\rho^{\Omega}(d,N_1,\dots,N_{L-1},1)$ is not closed in $W^{m,p}(\Omega)$. \end{itemize} \end{theorem} \begin{proof} See Appendix \ref{subapp:m-(m+1)}. As in \citep{PRV20}, we construct a sequence of networks whose $\rho$-realizations converge to a target function that is not a $\rho$-realization of any network. In particular, we show order-$(m-1)$ (or order-$m$) Sobolev convergence to a target function that is $(m-1)$-times but not $m$-times differentiable, while \citet{PRV20} show $L^p$ convergence to a discontinuous step function. \end{proof} Section \ref{subsec:activation} lists several commonly used activation functions and whether they satisfy the assumptions of Theorem \ref{thm:m-(m+1)} for some value of $m$. 
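The flavor of the construction can be seen in a simple special case. As an illustration (the precise sequence used in the proof is given in the appendix, not reproduced here), consider the difference quotients $h_n(x) = n\big(\rho(x + \tfrac{1}{n}) - \rho(x)\big)$, each of which is the realization of a $(1,2,1)$-network with hidden-to-output weights $(n,-n)$. For the ELU activation ($C^1$ but not $C^2$), they converge uniformly on compact sets to $\rho'$, which is continuous but not $C^1$ and hence not an ELU realization:

```python
import numpy as np

def elu(x):
    return np.where(x >= 0.0, x, np.exp(x) - 1.0)

def elu_prime(x):  # the target: C^0 but not C^1 at x = 0
    return np.where(x >= 0.0, 1.0, np.exp(x))

def h(n, x):
    """Realization of a (1,2,1) ELU network: hidden weights (1, 1),
    hidden biases (1/n, 0), output weights (n, -n), output bias 0."""
    return n * (elu(x + 1.0 / n) - elu(x))

x = np.linspace(-5.0, 5.0, 10001)
errors = {n: np.max(np.abs(h(n, x) - elu_prime(x))) for n in (10, 100, 1000)}
# The sup-norm error shrinks like 1/(2n) while the output weights (n, -n)
# grow without bound -- the mechanism behind the nonclosedness.
```

Since $\rho'$ for ELU is $1$-Lipschitz, a mean-value argument bounds the error by $\tfrac{1}{2n}$, which the computed values reflect.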
Of course, convergence in order-$(m-1)$ Sobolev norm is stronger than convergence in lower-order Sobolev norm, so $\mathcal{RNN}_\rho^{\Omega}(d,N_1,\dots,N_{L-1},1)$ is not closed in lower-order Sobolev spaces either. \begin{cor} \label{cor:lower-order} Let $m, d \in \mathbb{N}$, $p \in [1,\infty]$, and $B>0$, and consider an architecture $(d,N_1,\dots,N_{L-1},1)$ with $L \geq 2$ and $N_{L-1} \geq 2$. Suppose $\rho \in C^m(\mathbb{R}) \setminus C^{m+1}(\mathbb{R})$ with bounded derivatives up to order $m$. Then $\mathcal{RNN}_\rho^{\Omega}(d,N_1,\dots,N_{L-1},1)$ is \textit{not} closed in $W^{k,p}(\Omega)$ for any $k \in \{0,\dots,m-1\}$, where $\Omega = [-B,B]^d$. \end{cor} \begin{proof} Note that convergence in $W^{m-1,p}(\Omega)$ implies convergence in $W^{k,p}(\Omega)$ for all $k \in \{0,\dots,m-1\}$. So we still have $f_n \to f$ in $W^{k,p}(\Omega)$ in the proof of Theorem \ref{thm:m-(m+1)}, but $f \notin \mathcal{RNN}_\rho^{\Omega}(d,N_1,\dots,N_{L-1},1)$. \end{proof} In Theorem \ref{thm:m-(m+1)}, we show that $\mathcal{RNN}_\rho^{\Omega}(d,N_1,\dots,N_{L-1},1)$ is not closed in order-$(m-1)$ Sobolev spaces for $\rho \in C^m(\mathbb{R}) \setminus C^{m+1}(\mathbb{R})$. For an analytic, bounded, and non-constant activation function $\rho$, we extend this result and prove that $\mathcal{RNN}_\rho^{\Omega}(d,N_1,\dots,N_{L-1},1)$ is not closed in Sobolev spaces of any order. \begin{theorem} \label{thm:rho_smooth} Let $d \in \mathbb{N}$, $p \in [1,\infty]$, and $B>0$. Suppose that $\rho: \mathbb{R} \to \mathbb{R}$ is real analytic, bounded, and not constant, and that all derivatives $\rho^{(n)}$ of $\rho$ are bounded. Then for every neural network architecture $(d,N_1,\dots,N_{L-1},1)$ with $L \geq 2$ and $N_{L-1} \geq 2$ and every $k \in \mathbb{N}$, the set $\mathcal{RNN}_\rho^{[-B,B]^d}(d,N_1,\dots,N_{L-1},1)$ is \textit{not} closed in $W^{k,p}([-B,B]^d)$. \end{theorem} \begin{proof} See Appendix \ref{subapp:rho_smooth}. 
Using arguments similar to those in \citep{PRV20}, we construct a sequence of networks whose $\rho$-realizations converge in Sobolev norm of any order to an unbounded function, which cannot be a $\rho$-realization of any network since $\rho$ is bounded. Lemma \ref{lem:proj} in the proof is interesting in its own right, as it states that realized neural networks with analytic activation functions can approximate the coordinate projection maps to arbitrary accuracy in Sobolev norm. \end{proof} Theorems \ref{thm:m-(m+1)} and \ref{thm:rho_smooth} show that $\mathcal{RNN}_\rho^{[-B,B]^d}(d,N_1,\dots,N_{L-1},1)$ is not closed in Sobolev spaces, although the smoothness of the activation function dictates the order of the Sobolev space in question. In particular, for $\rho \in C^m(\mathbb{R}) \setminus C^{m+1}(\mathbb{R})$, Theorem \ref{thm:m-(m+1)} shows nonclosedness in order-$(m-1)$ or order-$m$ Sobolev spaces, while Theorem \ref{thm:rho_smooth} gives nonclosedness in all orders of Sobolev spaces for analytic activation functions. Both results apply to the $p=\infty$ case, indicating that sets of realized neural networks are not closed with respect to uniform convergence of a network and its derivatives. \subsection{Closedness of Sets of Networks with Bounded Weights} \label{subsec:closedness} The nonclosedness of sets of realized neural networks is undesirable if we only want to learn network target functions or if we want to prevent unbounded growth of network parameters. If we desire closedness, then we must modify the set of neural networks under consideration in some way. However, requiring closedness will necessarily constrain the set of target functions that we can approximate. Modifications to enforce closedness of sets of realized neural networks may include relaxing some assumptions on the activation function or placing restrictions on the network parameters. 
In this section, we discuss the closedness of sets of realized neural networks whose parameters are all bounded by the same constant. We define a norm on $\mathcal{NN}(d,N_1,\dots,N_L)$ and sets of realized neural networks with bounded norm. \begin{defn} \citep{PRV20} Let $C>0$. Define \begin{equation*} \mathcal{NN}^C(d,N_1,\dots,N_L) = \{ \Phi \in \mathcal{NN}(d,N_1,\dots,N_L) : \|\Phi\|_{total} \leq C \}, \end{equation*} as a set of neural networks with \textbf{uniformly bounded} weights, where \begin{equation*} \|\Phi\|_{total} = \max_{\ell=1,\dots,L} \|A_\ell\|_{max} + \max_{\ell=1,\dots,L} \|b_\ell\|_{max} \end{equation*} and $\|\cdot\|_{max}$ equals the absolute value of the entry of largest magnitude from a matrix or vector. For $\Omega \subset \mathbb{R}^d$ and $\rho : \mathbb{R} \to \mathbb{R}$, also define $$\mathcal{RNN}_\rho^{\Omega,C}(d,N_1,\dots,N_L) \vcentcolon= R_\rho^\Omega\big(\mathcal{NN}^C(d,N_1,\dots,N_L)\big)$$ as a set of realized neural networks with uniformly bounded weights and biases. \end{defn} Petersen, Raslan, and Voigtlaender show that $\mathcal{RNN}_\rho^{[-B,B]^d}(d,N_1,\dots,N_{L-1},1)$ is not closed in $L^p([-B,B]^d)$ or in $C([-B,B]^d)$ with respect to the $L^\infty$ norm. However, sets of realized neural networks with uniformly bounded parameters \textit{are} closed (in fact, compact) in these spaces by Proposition 3.5 in \citep{PRV20}. \iffalse \begin{prop} \citep{PRV20} Let $\Omega \subset \mathbb{R}^d$ be compact, $C>0$, $p \in (0,\infty)$, and $\rho: \mathbb{R} \to \mathbb{R}$ be continuous. Then the set $\mathcal{RNN}_\rho^{\Omega,C}(d,N_1,\dots,N_L)$ of realized neural networks with uniformly bounded weights is compact (and hence closed) in $L^p(\Omega)$ and $C(\Omega)$ (with respect to the $L^\infty$ norm) for any architecture $(d,N_1,\dots,N_L)$. \end{prop} \begin{proof} See Proposition 3.5 in \citep{PRV20}. The compactness of $\mathcal{NN}^C(d,N_1,\dots,N_L)$ follows from the Heine-Borel Theorem. 
Since the realization map $$R_\rho^\Omega : \mathcal{NN}(d,N_1,\dots,N_L) \to C(\Omega)$$ is continuous (also shown in \citep{PRV20}), the image \begin{equation*} \mathcal{RNN}_\rho^{\Omega,C}(d,N_1,\dots,N_L) = R_\rho^\Omega\big(\mathcal{NN}^C(d,N_1,\dots,N_L)\big) \end{equation*} is compact in $C(\Omega)$. Since $\Omega$ is compact, $C(\Omega)$ is continuously embedded in $L^p(\Omega)$ for any $p \in (0,\infty)$, and thus $\mathcal{RNN}_\rho^{\Omega,C}(d,N_1,\dots,N_L)$ is compact in $L^p(\Omega)$ as well. \end{proof} \fi Since Sobolev convergence is stronger than $L^p$ convergence, $\mathcal{RNN}_\rho^{\Omega,C}(d,N_1,\dots,N_L)$ is also closed in Sobolev spaces. \begin{cor} \label{cor:bounded_weights} Let $\Omega \subset \mathbb{R}^d$ be compact, $C>0$, $p \in [1,\infty]$, $k \in \mathbb{N}$, and $\rho: \mathbb{R} \to \mathbb{R}$ be continuous. Then $\mathcal{RNN}_\rho^{\Omega,C}(d,N_1,\dots,N_L)$ is closed in $W^{k,p}(\Omega)$. \end{cor} \begin{proof} If $(f_n)_{n \in \mathbb{N}} \subset \mathcal{RNN}_\rho^{\Omega,C}(d,N_1,\dots,N_L)$ satisfies $\|f_n-f\|_{W^{k,p}(\Omega)} \to 0$ for some $f$, then $f_n \to f$ in $L^p$ norm. Thus, $f \in \mathcal{RNN}_\rho^{\Omega,C}(d,N_1,\dots,N_L)$ because this set is closed in $L^p(\Omega)$ by Proposition 3.5 in \citep{PRV20}. \end{proof} The nonclosedness of $\mathcal{RNN}_\rho^{\Omega}$ in Sobolev spaces has significant consequences for approximating functions using neural networks. Indeed, it says that for any architecture $S=(d,N_1,\dots,N_{L-1},1)$ with $L \geq 2$ and $N_{L-1} \geq 2$, there is a non-network target function $f \in \overline{\mathcal{RNN}_\rho^{\Omega}(S)} \setminus \mathcal{RNN}_\rho^{\Omega}(S)$, where the closure can be taken with respect to Sobolev norm of the appropriate order. 
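For reference, $\|\Phi\|_{total}$ from the definition above is straightforward to compute; a minimal sketch (our own helper, with a small hypothetical two-layer network for illustration):

```python
import numpy as np

def total_norm(phi):
    """||Phi||_total = max_l ||A_l||_max + max_l ||b_l||_max, where
    ||.||_max is the largest absolute entry of a matrix or vector."""
    max_A = max(np.max(np.abs(A)) for A, _ in phi)
    max_b = max(np.max(np.abs(b)) for _, b in phi)
    return max_A + max_b

phi = [(np.array([[2.0], [-3.0]]), np.array([1.0, 0.0])),  # layer 1
       (np.array([[0.5, -4.0]]),   np.array([0.25]))]      # layer 2
print(total_norm(phi))  # max|A| = 4.0, max|b| = 1.0, so the norm is 5.0
```

Note that the two maxima are taken over layers separately for weights and biases, so the norm is not simply the largest parameter overall.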
Combined with the closedness of the set $\mathcal{RNN}_\rho^{\Omega,C}(S)$ with uniformly bounded weights, this means that if $\|R_\rho^{\Omega}(\Phi_n) - f\|_{W^{k,p}(\Omega)} \to 0$ for some sequence of networks $(\Phi_n)_{n \in \mathbb{N}}$ with architecture $S$, then $\|\Phi_n\|_{total} \to \infty$. This may explain the phenomenon of weights growing without bound that sometimes occurs when training neural networks, and it indicates that Sobolev training can still lead to such growth of parameters for some target functions. \subsection{Network Parameter Growth Rates} \label{subsec:rates} We have seen that convergence of a neural network to a non-network target function requires infinite growth of at least one network weight, but we would like to know how quickly the weights must grow relative to the decreasing approximation error. An interesting area of further research would be to describe this relationship for fixed architectures in full generality based on characteristics of the activation function, target function, and network architecture. We describe the approximation error decay for most activation functions in the case of a certain sequence of networks learning the derivative of the activation function. \begin{prop} \label{prop:rates} Let $\rho$ be either a function in $C^m(\mathbb{R})$ for some $m \geq 2$, the softsign function, or ELU (cf.\ Table \ref{table:activation}). Let $(h_n)_{n=1}^\infty = (R_\rho^\mathbb{R}(\Phi_n))_{n=1}^\infty$ be the sequence of realized neural networks in $\mathcal{RNN}_\rho^{\mathbb{R}}(1,2,1)$ from the proof of Theorem \ref{thm:m-(m+1)}. Further let $p \in [1,\infty]$ and let $\Omega \subset \mathbb{R}$ be a compact, measurable set with nonempty interior. Then $\|h_n - \rho'\|_{L^p(\Omega)} \leq C_p/\|\Phi_n\|_{total}$ for a constant $C_p$ depending on $p$ but not $n$. \end{prop} \begin{proof} See Appendix \ref{subapp:rates}. 
\end{proof} In other words, Proposition \ref{prop:rates} shows that the $L^p$ approximation error is approximately inversely proportional to the networks' total norms for a sequence of networks learning the activation function derivative. Note that when the activation function is not analytic, its derivative is a non-network target function. \subsection{Commonly Used Activation Functions} \label{subsec:activation} Though some of the assumptions on the activation function required by Theorems \ref{thm:m-(m+1)} and \ref{thm:rho_smooth} seem strong, they are satisfied by many commonly used activation functions. Thus, the results of these theorems apply, and sets of neural networks are not closed in various orders of Sobolev spaces for $\Omega$ compact and $p \geq 1$. Table \ref{table:activation} summarizes which results apply to several common activation functions. \begin{longtable}{|l|l|l|l|} \hline \multirow{2}{*}{\textbf{Name}} & \multirow{2}{*}{$\rho(x)$} & \textbf{Smoothness/} & $\mathcal{RNN}_\rho^\Omega \Big.$ \\ & & \textbf{Boundedness} & \textbf{not closed in} \\ \hline \endfirsthead Rectified Linear & \multirow{2}{*}{$\max\{0,x\}$} & $\Big. C(\mathbb{R})$, abs. cont., & \multirow{2}{*}{$W^{0,p}(\Omega)$ [1]} \\ Unit (ReLU) & & $\rho' \in L^p(\Omega)$ & \\ \hline Exponential Linear & \multirow{2}{*}{$x \cdot \chi_{x \geq 0} + (e^x-1) \cdot \chi_{x < 0}$} & $\Big. C^1(\mathbb{R})$, $\rho'$ abs. cont., & \multirow{2}{*}{$W^{1,p}(\Omega)$} \\ Unit (ELU) & & $\rho'' \in L^p(\Omega)$ & \\ \hline \multirow{2}{*}{Softsign} & \multirow{2}{*}{$\frac{x}{1+|x|}$} & $\Big. C^1(\mathbb{R})$, $\rho'$ abs. cont., & \multirow{2}{*}{$W^{1,p}(\Omega)$} \\ & & $\rho'' \in L^p(\Omega)$ & \\ \hline Inverse Square Root & \multirow{2}{*}{$x \cdot \chi_{x \geq 0} + \frac{x}{\sqrt{1+ax^2}} \cdot \chi_{x < 0}$} & $\Big. C^2(\mathbb{R})$, $\rho''$ abs. 
cont., & \multirow{2}{*}{$W^{2,p}(\Omega)$} \\ Linear Unit ($a > 0$) & & $\rho''' \in L^p(\Omega)$ & \\ \hline Inverse Square Root & \multirow{2}{*}{$\frac{x}{\sqrt{1+ax^2}}$} & real analytic, all $\Big.$ & \multirow{2}{*}{$W^{k,p}(\Omega)$ for all $k$} \\ Unit ($a > 0$) & & derivatives bounded & \\ \hline \multirow{2}{*}{Sigmoid} & \multirow{2}{*}{$\frac{1}{1+e^{-x}}$} & real analytic, all $\Big.$ & \multirow{2}{*}{$W^{k,p}(\Omega)$ for all $k$} \\ & & derivatives bounded [2] &\\ \hline \multirow{2}{*}{tanh} & \multirow{2}{*}{$\frac{e^x - e^{-x}}{e^x + e^{-x}}$} & real analytic, all $\Big.$ & \multirow{2}{*}{$W^{k,p}(\Omega)$ for all $k$} \\ & & derivatives bounded [2] &\\ \hline \multirow{2}{*}{arctan} & \multirow{2}{*}{$\arctan(x)$} & real analytic, all $\Big.$ & \multirow{2}{*}{$W^{k,p}(\Omega)$ for all $k$} \\ & & derivatives bounded [3] &\\ \hline \captionsetup{width=0.85\linewidth} \caption{Many activation functions used in practice satisfy some smoothness and boundedness properties so that Theorems \ref{thm:m-(m+1)} and \ref{thm:rho_smooth} apply. Thus, $\mathcal{RNN}_\rho^\Omega$ is not closed in various orders of Sobolev spaces for $\Omega$ compact and $p \in [1,\infty]$. Some results in the table are found in the following references: [1] \citep{PRV20} [2] \citep{MW93} [3] \citep{AL10}.} \label{table:activation} \end{longtable} The smoothness and boundedness assumptions for the ReLU, exponential linear unit, softsign, and inverse square root linear unit can be checked by hand. The real analyticity of the other activation functions is established by properties from \citep{KP02}. \iffalse \begin{rk} The inverse square root unit, sigmoid, tanh, and arctan activation functions in Table \ref{table:activation} are real analytic. \end{rk} \begin{proof} See Appendix \ref{subapp:activation}. \end{proof} \fi For these activation functions, sets of realized neural networks are not closed in Sobolev spaces of certain orders. 
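The by-hand smoothness checks mentioned above can also be spot-checked numerically. For instance, a central finite difference exhibits the jump in ELU's second derivative at $0$ (so ELU is $C^1$ but not $C^2$, i.e.\ the $m=1$ case of Theorem \ref{thm:m-(m+1)}); a sketch of such a check:

```python
import numpy as np

def elu(x):
    return np.where(x >= 0.0, x, np.exp(x) - 1.0)

def d2(f, x, h=1e-4):
    """Central second difference (f(x+h) - 2 f(x) + f(x-h)) / h^2."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

left, right = d2(elu, -0.01), d2(elu, 0.01)
# left  is approx. e^{-0.01} ~ 0.990, since rho''(x) = e^x for x < 0
# right is approx. 0, since rho is linear for x > 0
```

The one-sided limits of $\rho''$ at $0$ therefore disagree ($1$ from the left, $0$ from the right), confirming the table's entry for ELU.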
Thus, using Sobolev training still allows these networks to learn non-network target functions, although doing so will cause unbounded growth of parameters, which follows from Corollary \ref{cor:bounded_weights}. \section{Experimental Results} \label{sec:experiments} We now show some experimental results that demonstrate the nonclosedness of sets of realized neural networks in Sobolev spaces and examine the rate of parameter growth. Specifically, we use Sobolev training with an Adam optimizer \citep{KB17} to produce sequences of neural networks that approximate non-network target functions. For activation functions that are $m$-times but not $(m+1)$-times differentiable, we know from the proof of Theorem \ref{thm:m-(m+1)} that there is a sequence of networks that converges in Sobolev norm to the derivative of the activation function. The derivative is not a realized neural network because it is only $(m-1)$-times differentiable. With this as motivation, each trial of our experiment trains a network to learn a randomly generated $(m-1)$-times differentiable target function in Sobolev norm. We consistently observe a rapidly decreasing approximation error and a fairly steady growth of network weights. \begin{figure}[ht] \centering \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[scale=0.33]{ELU_Loss_S.png} \caption{Training error. \label{subcap:ELU_Loss_S}} \end{subfigure} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[scale=0.33]{ELU_Norm_S.png} \caption{The network norm. \label{subcap:ELU_Norm_S}} \end{subfigure} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[scale=0.33]{ELU_Target_S.png} \caption{The target function. \label{subcap:ELU_Target_S}} \end{subfigure} \captionsetup{width=0.85\linewidth} \caption{For each of 100 target functions $f$, we train an ELU network $\Phi$ in $\mathcal{NN}(1,10,1)$ to minimize $\|R_\rho^{[-5,5]}(\Phi)-f\|_{W^{1,2}}$. 
(\subref{subcap:ELU_Loss_S}) The best $W^{1,2}$ training loss achieved thus far is plotted at each epoch and averaged over all 100 experiments. (\subref{subcap:ELU_Norm_S}) The total network norm is averaged over all 100 experiments with 95\% confidence bands. (\subref{subcap:ELU_Target_S}) An example target function is plotted along with the realized neural network after training.}\label{cap:ELU_S} \end{figure} Figure \ref{cap:ELU_S} shows the results of 100 trials of ELU networks learning non-network target functions in Sobolev norm. Since ELU is $C^1$ but not $C^2$, we train the networks to learn randomly generated piecewise linear ($C^0$ but not $C^1$) functions in order-1 Sobolev norm. We see that the $L^2$ and Sobolev approximation errors decrease rather quickly, while the total network norm increases quickly at first and then at a fairly steady rate. These results are consistent with Theorem \ref{thm:m-(m+1)}. Networks are thus able to closely approximate non-network target functions, which is evidence of the nonclosedness of sets of realized neural networks in Sobolev spaces. Appendix \ref{app:exp} provides similar results for the ISRLU activation function. Since ISRLU is $C^2$ but not $C^3$, we train these networks to learn randomly generated piecewise quadratic target functions in order-2 Sobolev norm. For real analytic activation functions, Lemma \ref{lem:proj} indicates that realized neural networks can approximate coordinate projection maps to arbitrary accuracy. We train a sigmoid neural network to learn the projection map $P_1: \mathbb{R}^2 \to \mathbb{R}$ given by $P_1(x_1,x_2)=x_1$ in order-2 Sobolev norm. We reset the network weights and repeat the training process 100 times to assess whether sigmoid networks consistently learn $P_1$, which is a non-network function. \begin{figure}[ht] \centering \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[scale=0.33]{Sig_Loss_S.png} \caption{Training error. 
\label{subcap:Sig_Loss_S}} \end{subfigure} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[scale=0.33]{Sig_Norm_S.png} \caption{The network norm. \label{subcap:Sig_Norm_S}} \end{subfigure} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[scale=0.42]{Sig_Target_S.png} \caption{The target function. \label{subcap:Sig_Target_S}} \end{subfigure} \captionsetup{width=0.85\linewidth} \caption{In 100 repetitions, we train a sigmoid network $\Phi$ in $\mathcal{NN}(2,10,1)$ to minimize $\|R_\rho^{[-5,5]^2}(\Phi)-P_1\|_{W^{2,2}}$. (\subref{subcap:Sig_Loss_S}) The best $W^{2,2}$ training loss achieved thus far is plotted at each epoch and averaged over all 100 experiments. (\subref{subcap:Sig_Norm_S}) The total network norm is averaged over all 100 experiments with 95\% confidence bands. (\subref{subcap:Sig_Target_S}) The target function (blue) is plotted along with the realized neural network (orange) after training.}\label{cap:Sig_S} \end{figure} Figure \ref{cap:Sig_S} shows the results of 100 trials of sigmoid networks learning $P_1$ in order-2 Sobolev norm. Note that $P_1$ is a non-network target function because it is unbounded, as discussed in the proof of Theorem \ref{thm:rho_smooth}. As in Figure \ref{cap:ELU_S}, we see that the $L^2$ and Sobolev approximation errors decrease rather quickly, while the total network norm increases at a fairly steady rate. These results are consistent with Theorem \ref{thm:rho_smooth}. Networks with analytic activation functions are thus able to closely approximate non-network target functions, which is further evidence of the nonclosedness of sets of realized neural networks in Sobolev spaces. Our experiments were done with shallow networks, but we expect the same nonclosedness to be demonstrated with deep networks since they are at least as expressive. 
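For concreteness, the training objective used in these experiments can be approximated on a sample grid. The following sketch is our own discretization (the experiments' actual implementation is not reproduced here): it computes a discrete squared $W^{1,2}$ loss as the squared $L^2$ error plus the squared $L^2$ error of first derivatives, with derivatives estimated by finite differences:

```python
import numpy as np

def discrete_w12_loss(f_vals, g_vals, dx):
    """Squared W^{1,2} error on a uniform 1-D grid: ||f - g||_{L^2}^2 plus
    ||f' - g'||_{L^2}^2, with derivatives taken by finite differences."""
    l2_sq = np.sum((f_vals - g_vals) ** 2) * dx
    df = np.gradient(f_vals, dx)
    dg = np.gradient(g_vals, dx)
    h1_sq = np.sum((df - dg) ** 2) * dx
    return l2_sq + h1_sq

x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
f = np.sin(x)
loss_same = discrete_w12_loss(f, f, dx)         # identical samples: loss 0
loss_shift = discrete_w12_loss(f, f + 0.1, dx)  # constant offset: only the
                                                # L^2 term contributes
```

A constant shift leaves the derivative term essentially zero, so the second loss is driven entirely by the $L^2$ part; this is the sense in which Sobolev training penalizes derivative mismatch on top of ordinary $L^2$ training.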
\subsection{Network Parameter Growth Rates} \label{subsec:rates_exp} The results in Figures \ref{cap:ELU_S} and \ref{cap:Sig_S} show that realized neural networks are able to approximate non-network target functions in Sobolev norm with growing parameters, but the relationship between the approximation error and parameter growth is not clear. In this section, we provide numerical evidence for one case of Proposition \ref{prop:rates} by showing that $L^2$ approximation error and the network's total norm are approximately inversely proportional for a softsign network learning the softsign derivative. We analyze the relationship by making a scatter plot of approximation error against the total norm. We do the same for the Sobolev error, even though we do not have a theoretical result for this case. \begin{figure}[ht] \centering \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=0.9\linewidth]{rates_N_title.png} \caption{$L^2$ error vs.\ network norm. \label{subcap:rates_N}} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=0.89\linewidth]{rates_S_title.png} \caption{Sobolev error vs.\ network norm. \label{subcap:rates_S}} \end{subfigure} \captionsetup{width=0.85\linewidth} \caption{We train a softsign network $\Phi$ in $\mathcal{NN}(1,2,1)$ to learn the softsign derivative and scatter approximation error against the total norm of $\Phi$. (\subref{subcap:rates_N}) Training error vs.\ network norm using $L^2$ training. (\subref{subcap:rates_S}) Training error vs.\ network norm using order-1 Sobolev training.}\label{cap:rates} \end{figure} Figure \ref{cap:rates}(\subref{subcap:rates_N}) shows that the relationship between $L^2$ approximation error and the total norm is approximately inversely proportional, as stated in Proposition \ref{prop:rates}. 
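The inverse proportionality in Proposition \ref{prop:rates} can also be checked symbolically for the softsign case. Assuming the difference-quotient form $h_n(x) = n\big(\rho(x+\tfrac1n) - \rho(x)\big)$ with weights $(1,1)$, biases $(\tfrac1n, 0)$, and output weights $(n,-n)$ (an illustrative instance; the exact sequence is in the appendix), we have $\|\Phi_n\|_{total} = n + \tfrac1n$, and a log-log fit of sup-norm error against total norm should give a slope near $-1$:

```python
import numpy as np

softsign = lambda x: x / (1.0 + np.abs(x))
softsign_prime = lambda x: 1.0 / (1.0 + np.abs(x)) ** 2

x = np.linspace(-5.0, 5.0, 20001)
ns = np.array([10, 30, 100, 300, 1000])
errs = np.array([np.max(np.abs(n * (softsign(x + 1.0 / n) - softsign(x))
                               - softsign_prime(x))) for n in ns])
norms = ns + 1.0 / ns  # ||Phi_n||_total for the assumed weight choice
slope = np.polyfit(np.log(norms), np.log(errs), 1)[0]
# slope near -1: sup-norm error roughly proportional to 1 / ||Phi_n||_total
```

This mirrors the scatter-plot analysis: on logarithmic axes, error versus norm is close to a line of slope $-1$, the signature of an inversely proportional relationship.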
Note that our networks are trained using an Adam optimizer \citep{KB17} and thus do not follow the sequence $(h_n)_{n=1}^\infty$ from Proposition \ref{prop:rates}, but we still observe the expected relationship between approximation error and weights. The relationship for Sobolev error in Figure \ref{cap:rates}(\subref{subcap:rates_S}) is not as clear. An interesting question of further research is how the relationship between error and norm generalizes to other target functions, sequences of networks, network architectures, and higher-order Sobolev norms. \iffalse In our experiments, the trained sequence of networks escapes the set of realized neural networks along a very similar path as the construction in that proof. These results demonstrate the nonclosedness of sets of realized neural networks, but also that Sobolev training can cause a predictable growth of parameters. The activation function $\rho$ is taken to be the softsign function $$\rho(x) = \frac{x}{1+|x|}$$ so that $\rho \in C^1(\mathbb{R}) \setminus C^2(\mathbb{R})$. Moreover, one can verify that $\rho'$ is absolutely continuous and the weak derivative $\rho''$ is in $L^p$ for $p \in [1,\infty)$. Thus, $\mathcal{RNN}_\rho^{[-B,B]^d}(d,N_1,\dots,N_{L-1},1)$ is not closed in $W^{1,p}([-B,B]^d)$ by Theorem \ref{thm:m-(m+1)}. Specifically, we will provide numerical evidence for the nonclosedness of $\mathcal{RNN}_\rho^{[-5,5]}(1,2,1)$ in $W^{1,2}([-5,5])$ (that is, we take $d=1$, $L=2$, $p=2$, and $B=5$) and by approximating a function on the boundary of $\mathcal{RNN}_\rho^{[-5,5]}(1,2,1)$. Our code can be found \href{https://github.com/scmahan/Neural-Network-Sobolev-Training}{here}. To demonstrate the nonclosedness, we train a sequence of $\rho$-realizations of networks to learn the target function \begin{equation*} f(x) = \rho'(x) = \frac{1}{(1+|x|)^2} \end{equation*} which is not $C^1$ and hence not a $\rho$-realization of some network. 
If we let $h_n = R_\rho^\mathbb{R}(\Phi_n)$ be defined as in the proof of Theorem \ref{thm:m-(m+1)} (see Appendix \ref{subapp:m-(m+1)}), then $\|h_n - f\|_{W^{1,p}} \to 0$ as $n \to \infty$. However, by Corollary \ref{cor:bounded_weights}, we except to see the network weights growing without bound. For the network $\Phi_n = \big( (A_1^n,b_1^n), (A_2^n,b_2^n) \big)$, the weights from the hidden layer to the output layer are $A_2^n = \begin{pmatrix} n &-n \end{pmatrix}$. Since $\|\Phi_n\|_{total} \to \infty$ as $n \to \infty$, it may be difficult to approximate the non-network target function. However, our experiments show that Sobolev training can approximate such functions. To train the network, we run through 10000 epochs, and in each epoch we generate 1000 training points $f(x)$ with $x$ drawn uniformly from $[-5,5]$. The network parameters are initialized as Gaussian random variables, and then updated at each epoch using an Adam optimizer \citep{KB17} with learning rate $0.005$. We train one network $\Phi$ to minimize $\|\Phi-f\|_{L^2}$ and another network $\Phi_S$ to minimize $\|\Phi_S-f\|_{W^{1,2}}$ (i.e., Sobolev training). Note that Sobolev training requires us to generate training derivatives $f'(x)$ as well. Results from these training processes are shown below. \begin{figure}[ht] \centering \subcaptionbox{Training loss.\label{subcap:Loss_N}} {\includegraphics[scale=0.5]{Loss_N.png}} \subcaptionbox{The network and target function.\label{subcap:Target_N}} {\includegraphics[scale=0.5]{Target_N.png}} \captionsetup{width=0.85\linewidth} \caption{A network $\Phi$ is trained to minimize $\|\Phi-f\|_{L^2}$. Left: The $L^2$ training loss gets very close to 0, with a mean squared error less than $10^{-4}$. Right: The trained network's output appears to closely match that of the target function.}\label{cap:NoSobolevTraining} \end{figure} Figure \ref{cap:NoSobolevTraining} provides two visualizations showing that we can train a network to learn $f$ in $L^2$ norm. 
The mean $L^2$ training loss between the network and the target function approaches 0, and the network output matches the target function very closely. Since we are training a $\rho$-realization of a neural network to learn $f \notin C^1$, this supports the conclusion of Corollary \ref{cor:lower-order} that $\mathcal{RNN}_\rho^{[-5,5]}(1,2,1)$ is not closed in $L^{2}([-5,5])$. Moreover, it demonstrates that we can approximate functions on the boundary of sets of realized neural networks, even though increased accuracy requires increasingly larger parameters. \begin{figure}[ht] \centering \subcaptionbox{Training loss.\label{subcap:Loss_S}} {\includegraphics[scale=0.5]{Loss_S.png}} \subcaptionbox{The network and target function.\label{subcap:Target_S}} {\includegraphics[scale=0.5]{Target_S.png}} \captionsetup{width=0.85\linewidth} \caption{A network $\Phi_S$ is trained to minimize $\|\Phi_S-f\|_{W^{1,2}}$. Left: The $W^{1,2}$ training loss gets close to 0, with a mean squared error less than $10^{-2}$. Right: The trained network's output appears to closely match that of the target function.}\label{cap:SobolevTraining} \end{figure} Figure \ref{cap:SobolevTraining} similarly provides two visualizations showing that we can train a network to approximate $f$ in the Sobolev space $W^{1,2}$. Since we are now training a $\rho$-realization of a neural network to learn $f \notin C^1$ in Sobolev norm, this supports the conclusion of Theorem \ref{thm:m-(m+1)} that $\mathcal{RNN}_\rho^{[-5,5]}(1,2,1)$ is not closed in $W^{1,2}([-5,5])$. Additionally, we see that neural networks are still expressive enough in practice to approximate non-network target functions in Sobolev norm, despite these approximations requiring unbounded parameter growth. 
\begin{figure}[ht] \centering \subcaptionbox{The norm $\|A_2\|_F$ for $\Phi$.\label{subcap:A2_N}} {\includegraphics[scale=0.5]{A2_N.png}} \subcaptionbox{The norm $\|A_2\|_F$ for $\Phi_S$.\label{subcap:A2_S}} {\includegraphics[scale=0.5]{A2_S.png}} \captionsetup{width=0.85\linewidth} \caption{The Frobenius norm $\|A_2\|_F$ of the weight matrix from the hidden layer to the output layer is plotted versus the number of epochs. Left: $\|A_2\|_F$ for the network $\Phi$ with $L^2$ training. Right: $\|A_2\|_F$ for the network $\Phi_S$ with Sobolev training.}\label{cap:A2} \end{figure} Finally, figure \ref{cap:A2} shows the growth of parameters that is expected when we train a network to learn a non-network target function. By Corollary \ref{cor:bounded_weights}, convergence of the neural network to a target function which is not a realized neural network implies infinite growth of the parameters. The Frobenius norm of the weight matrix from the hidden layer to the output layer is plotted versus the number of training epochs. We observe that the training loss decreases at a much faster rate than the growth of the network parameters, at least at the beginning of the training process. Indeed, both the $L^2$ and Sobolev losses reach a value close to their minimum within a few epochs. On the other hand, we observe approximately linear growth in the Frobenius norm of the networks, though this growth does eventually slow down due to the convergence of the training process. An interesting question for further research would be to characterize the relationship between the decrease of the training loss and the growth of network parameters when approximating non-network target functions. Another interesting phenomenon in Figures \ref{cap:NoSobolevTraining} and \ref{cap:SobolevTraining} is that Sobolev training produces a network that does not visually approximate the target function as well as $L^2$ training does. 
This is likely because Sobolev training also considers the target derivatives, and hence is less concerned with the value of the target function itself. Thus, we observe areas where the network output lies slightly above the target function and areas where it lies slightly below. However, it may interesting to further explore the question of how well or how quickly Sobolev training approximates a target function. \fi \iffalse \section{Conclusion} \label{sec:conclusion} In this work, we prove that sets of realized neural networks of a fixed architecture are not closed in Sobolev spaces under certain smoothness conditions on the activation function $\rho$. More specifically, these sets are not closed in order-$(m-1)$ Sobolev spaces when $\rho$ is $m$-times differentiable (with bounded derivatives). Moreover, these sets are not closed in any order Sobolev spaces when $\rho$ is smooth with bounded derivatives of all orders. We present experimental results demonstrating this nonclosedness for the softsign activation function. The nonclosedness of sets of neural networks in Sobolev spaces has significant consequences for using Sobolev training to train neural networks. Most importantly, neural networks can be trained to approximate a non-network target function in Sobolev norm, but doing so requires unbounded growth of the network's parameters. Our experiments also show that it is possible to approximate a non-network target function numerically. This can be considered good or bad depending on the goal of the learning process: Sobolev training does not limit the expressivity of the network, but it also does not regularize the parameters of the network. \fi \subsection*{Declaration of Competing Interest} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. 
\subsection*{Acknowledgements} This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No.\ DGE-1650112. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. SM is funded by grant NSF DGE GRFP \#1650112. AC is funded by grants NSF DMS \#1819222, \#2012266, and Russell Sage Foundation grant 2196.
https://arxiv.org/abs/1306.4208
Convex Polytopes from Nested Posets
Motivated by the graph associahedron KG, a polytope whose face poset is based on connected subgraphs of G, we consider the notion of associativity and tubes on posets. This leads to a new family of simple convex polytopes obtained by iterated truncations. These generalize graph associahedra and nestohedra, even encompassing notions of nestings on CW-complexes. However, these poset associahedra fall in a different category altogether than generalized permutohedra.
\section{Background} \subsection{} Given a finite graph $G$, the graph associahedron $\KG$ is a polytope whose face poset is based on the connected subgraphs of $G$ \cite{cdf}. For special examples of graphs, $\KG$ becomes well-known, sometimes classical: when $G$ is a path, a cycle, or a complete graph, $\KG$ results in the associahedron, cyclohedron, and permutohedron, respectively. Figure~\ref{f:2d-tubes} shows some examples, for a graph and a pseudograph with multiple edges. \begin{figure}[h] \includegraphics{2d-tubes} \caption{Graph associahedra of a path and a multi-edge.} \label{f:2d-tubes} \end{figure} These polytopes were first motivated by De Concini and Procesi in their work on ``wonderful'' compactifications of hyperplane arrangements \cite{dp}. In particular, if the hyperplane arrangement is associated to a Coxeter system, the graph associahedron $\KG$ appears as a tile in these spaces, where its underlying graph $G$ is the Coxeter graph of the system \cite{djs}. These compactified arrangements are themselves natural generalizations of the Deligne-Knudsen-Mumford compactification \M{n} of the real moduli space of curves \cite{dev1}. From a combinatorics viewpoint, graph associahedra arise in numerous areas, ranging from Bergman complexes of oriented matroids to Heegaard Floer homology \cite{blo}. Most notably, these polytopes have emerged as graphical tests on ordinal data in biological statistics \cite{mps}. \subsection{} The combinatorial and geometric structures of these polytopes capture and expose the fundamental concepts of connectivity and nestings, and it is not surprising that there have been several similar notions, such as nested sets \cite{fei}, nested complexes \cite{zel} and the larger class of generalized permutohedra of Postnikov \cite{pos}. However, none of these constructions capture the notion of nested sets of posets, as we do below.
Indeed, our notion of the set of poset tubes is not a classical {building set}, but falls in a different category altogether. In this paper, we construct a new family of convex polytopes which are extensions of nestohedra and graph associahedra via a generalization of building sets. But rather than starting with a set, we begin with a poset $P$. The resulting \emph{poset associahedron} ${\mathcal K}P$, based on connected lower sets of $P$, covers a wide swath of existing examples from geometric combinatorics, including the permutahedra, associahedra, multiplihedra, graph associahedra, nestohedra, pseudograph associahedra, and their liftings; in fact, all these types arise from just two rank posets. Newly discovered are polytopes capturing associativity information of CW-complexes. An overview of the paper is as follows: Section~\ref{s:posets} supplies the definitions of poset associahedra along with several examples, while Section~\ref{s:construct} provides methods of constructing them via induction. Specialization to nestohedra and permutohedra is given in Section~\ref{s:relation}, and we finish with proofs of the main theorems in Section~\ref{s:proof}. \section{Posets} \label{s:posets} \subsection{} We begin with some foundational definitions about posets. The reader is forewarned that definitions here might not exactly match those from earlier works. A \emph{lower set} $L$ is a subset of a poset $P$ such that if $y \preceq x \in L$, then $y \in L$. The \emph{boundary} of an element $x$ is $\partial x :=\{y \in P \suchthat y \prec x\}$. \begin{defn} Let $\bu x := \{y \in P \suchthat \partial y = \partial x\}$ be the \emph{bundle} of the element $x$. A bundle is \emph{trivial} if $\bu x = \{x\}$. \end{defn} Throughout this paper, a poset will be visually represented by its Hasse diagram. Consider the example of a poset $P$ given on the left side of Figure~\ref{f:bundle}.
The subset $\{1,2,4,5\}$ in part (a), depicted by the highlighted region, is not a lower set since it does not include element $3$. This poset is partitioned into four bundles, $\{1,2,3\}$,\, $\{4\}$,\, $\{5\}$, and\, $\{6, 7, 8\}$, with elements in a bundle having identical boundary. In particular, notice that all minimal elements of the poset are in one bundle since they share the empty set as boundary. The following is immediate: \begin{figure}[h] \includegraphics{bundle} \caption{Some examples of valid and invalid tubes and tubings.} \label{f:bundle} \end{figure} \begin{lem} The elements of poset $P$ are partitioned into equivalence classes of bundles. \end{lem} \begin{defn} A lower set is \emph{filled} if, whenever it contains the boundary $\partial x$ of an element $x$, it also intersects the bundle $\bu x$ of that element. A \emph{tube} is a filled, connected lower set. A \emph{tubing} $T$ is a collection of tubes (not containing all of $P$) which are pairwise disjoint or pairwise nested, and for which the union of every subset of $T$ is filled. \end{defn} Figure~\ref{f:bundle}(b) shows the boundary of $\{6, 7, 8\}$, which is an unfilled lower set, whereas (c) is a filled one. Note that parts (c, d) display examples of one tube. Parts (e, f) display two disjoint tubes which are not tubings, since the union of the tubes would create an unfilled lower set. Examples of tubings with two and three components are given by (g, h) respectively. \subsection{} We now present our main result. \begin{thm} \label{t:combin} Let $P$ be a poset with $n$ elements partitioned into $b$ bundles. If $\pi(P)$ is the set of tubings of $P$ ordered by reverse containment, the \emph{poset associahedron} ${\mathcal K}P$ is a convex polytope of dimension $n-b$ whose face poset is isomorphic to $\pi(P)$. \end{thm} \noindent This theorem follows from the construction of ${\mathcal K}P$ from truncations, described in Theorem~\ref{t:const} below. 
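The definitions above (boundary, bundle, filled lower set, tube) are concrete enough to check by machine. The following minimal Python sketch (ours, not the authors' code; the element names are our own) encodes a poset by its cover relations and enumerates the tubes of the face poset of a path with nodes $a, b, c$ and edges $d = ab$, $e = bc$: a poset with $5$ elements and $3$ bundles, so Theorem~\ref{t:combin} predicts a polytope of dimension $5 - 3 = 2$.

```python
# Minimal illustrative sketch (not the authors' code).  We encode a poset by
# its cover relations and implement boundary, bundle, filled, and tube as in
# the definitions above.  Test poset: face poset of a path with nodes a, b, c
# and edges d = ab, e = bc -- five elements partitioned into three bundles.
from itertools import combinations

covers = {'a': set(), 'b': set(), 'c': set(),   # minimal elements (nodes)
          'd': {'a', 'b'}, 'e': {'b', 'c'}}     # each edge covers its endpoints
P = set(covers)

def boundary(x):
    """Strict lower set of x: all y with y < x, via transitive closure."""
    out, stack = set(), list(covers[x])
    while stack:
        y = stack.pop()
        if y not in out:
            out.add(y)
            stack.extend(covers[y])
    return out

# Bundles: equivalence classes of elements with identical boundaries.
classes = {}
for x in P:
    classes.setdefault(frozenset(boundary(x)), set()).add(x)
bundles = list(classes.values())   # here: {a,b,c}, {d}, {e}

def is_lower(s):
    return all(boundary(x) <= s for x in s)

def is_filled(s):
    # Whenever s contains the boundary of x, it must meet the bundle of x.
    return all(s & classes[frozenset(boundary(x))]
               for x in P if boundary(x) <= s)

def is_connected(s):
    # Connectivity of the Hasse diagram restricted to s.
    seen, stack = set(), [next(iter(s))]
    while stack:
        y = stack.pop()
        if y not in seen:
            seen.add(y)
            stack.extend(z for z in s if z in covers[y] or y in covers[z])
    return seen == s

def is_tube(s):
    return bool(s) and is_lower(s) and is_filled(s) and is_connected(s)

proper_tubes = [set(c) for r in range(1, len(P))
                for c in combinations(P, r) if is_tube(set(c))]

print(sorted(map(sorted, proper_tubes)))  # five tubes
print(len(P) - len(bundles))              # dimension n - b = 2
```

The five proper tubes found, $\{a\}$, $\{b\}$, $\{c\}$, $\{a,b,d\}$, $\{b,c,e\}$, label the five facets of a pentagon, consistent with the path graph yielding the two-dimensional associahedron.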
We now pause to illustrate several examples. \begin{exmp} Figure~\ref{f:2d-poset} shows the two polytopes of Figure~\ref{f:2d-tubes}, reinterpreted as tubings on posets of their underlying graphs. Both posets are of \emph{two rank}, the maximum length of any chain of the poset. Part (a) has 5 elements and 3 bundles, whereas (b) has 4 elements and 2 bundles, both resulting in polygons, as given in Theorem~\ref{t:combin}. \end{exmp} \begin{figure}[h] \includegraphics{2d-poset} \caption{Poset versions of Figure~\ref{f:2d-tubes}.} \label{f:2d-poset} \end{figure} \begin{exmp} Figure~\ref{f:2d-poset-2} shows two different posets, resulting in poset associahedra identical to those of Figure~\ref{f:2d-poset}. Part (a) shows a poset structure which does not even come from a CW-complex. Notice here that the left element of height zero cannot be a tube since it needs to be filled. Part (b) has a near identical structure to Figure~\ref{f:2d-poset}(b). Here, the bottom-right element cannot be a tube in itself since it is unfilled. Because both posets have 5 elements and 3 bundles, the dimension of the polytopes is two. \end{exmp} \begin{figure}[h] \includegraphics{2d-poset-2} \caption{Alternate poset associahedra.} \label{f:2d-poset-2} \end{figure} \begin{exmp} Three examples of 3D poset associahedra are given in Figure~\ref{f:3d-exmps}. The cube in (a) can be viewed as an extension of the square in Figure~\ref{f:2d-poset}(b), and Proposition~\ref{p:cube} generalizes this pattern to the $n$-cube. A truncation of this cube results in (b), with both posets having 6 elements partitioned into 3 bundles. Part (c) shows a novel construction of the 3D associahedron $K_5$, with 7 elements partitioned into 4 bundles.
\end{exmp} \begin{figure}[h] \includegraphics{3d-exmps} \caption{Examples of 3D poset associahedra.} \label{f:3d-exmps} \end{figure} \subsection{} The notion of being ``filled'' appears in different guises: For graph associahedra, a \emph{tube} of a graph is filled because it is an induced subgraph, catalogued just by listing its set of vertices \cite[Section 2]{cd}. A \emph{tubing} is filled because its tubes are ``far apart,'' which is equivalently described by saying that two distinct tubes in a tubing cannot have a single edge connecting them. Since a simple graph has no bundles, the following is immediate: \begin{prop} The graph associahedron $\KG$ can be obtained as a poset associahedron ${\mathcal K}P$, where $P$ is the face poset of graph $G$. \end{prop} For pseudographs (having loops and multiple edges), a filled tube is a connected subgraph $t$ where at least one edge between every pair of nodes of $t$ is included if such edges exist \cite[Section 2]{cdf}. For multiple loops and edges of $G$, the notion of tubes on posets matches perfectly with tubes on $G$, and the proposition above extends to the pseudograph associahedron. The first three examples of Figure~\ref{f:pseudo} display invalid tubings (all due to not being filled) and the last a valid one; the top row shows tubes on graphs whereas the bottom recasts them on posets. \begin{figure}[h] \includegraphics{pseudo-posets} \caption{Valid and invalid tubings on graphs and posets.} \label{f:pseudo} \end{figure} \begin{rem} For a single loop attached to a node $v$ of $G$, the pseudograph associahedron defined in \cite{cdf} gave a choice of including or ignoring the loop when $v$ is chosen in a tube; see Figure~\ref{f:pseudo}(c) and (d). In this paper, however, for the sake of consistency in the notion of ``filled'', we always include the loop for the poset associahedron. This allows us to always obtain convex polytopes, rather than the unbounded polyhedral chambers of \cite{cdf}.
\end{rem} The notion of \emph{associativity}, encapsulated by drawing tubes on graphs, has a natural generalization to higher-dimensional complexes. In particular, for any CW-complex structure $X$, consider its face poset $P_X$. The poset associahedron ${\mathcal K}P_X$ captures the analogous information of $X$ that the graph associahedron captures for a graph. \begin{exmp} Figure~\ref{f:cello}(a) shows a CW-complex, with three 2-cells, 1-cells, and 0-cells. Part (b) shows the poset structure of this complex, and (c) its poset associahedron. \end{exmp} \begin{figure}[h] \includegraphics{cellohedra} \caption{The poset associahedron of a CW-complex.} \label{f:cello} \end{figure} \section{Constructions} \label{s:construct} \subsection{} The poset associahedron ${\mathcal K}P$ is recursively built by a series of truncations. Because the truncation procedure is a delicate one, we present an overview here and save the details for the proof in Section~\ref{s:proof}. First, the following result allows us to consider only connected posets: \begin{prop} \label{p:discon} Let $P$ be a poset with connected Hasse components $P_1$, \ldots, $P_m$. Then ${\mathcal K}P$ is isomorphic to ${\mathcal K}P_1 \times \cdots \times {\mathcal K}P_m \times \Delta_{m-1}.$ \end{prop} \begin{proof} Any tubing of $P$ can be described as: \begin{enumerate} \item a listing of tubings $T_1 \in {\mathcal K}P_1, \ \ldots, \ T_m \in {\mathcal K}P_m$, \ and \item for each component $P_i$ either including or excluding the tube $T_i = P_i$, as long as not all of the tubes $P_i$ are included. \end{enumerate} The second part of this description is clearly isomorphic to a tubing of the edgeless graph $H_m$ on $m$ nodes. But from \cite[Section 3]{dev2}, since $\mathcal K H_m$ is the simplex $\Delta_{m-1}$, we are done. \end{proof} \begin{thm} \label{t:const} The poset associahedron ${\mathcal K}P$ is constructed inductively on the number of elements of $P$.
Choose a maximal element $x$ of a maximal length chain of $P$. \begin{enumerate} \item If bundle $\bu x$ is trivial, truncate $\mathcal K(P - x)$ to obtain ${\mathcal K}P$. \item If bundle $\bu x$ is nontrivial, truncate $\mathcal K\left( P - (\bu x - x)\right) \times \Delta_{|\bu x - x|}$ to obtain ${\mathcal K}P$. \end{enumerate} \end{thm} \noindent This immediately implies the combinatorial result of Theorem~\ref{t:combin}. The following is a notable consequence: \begin{prop} \label{p:trunc} There are different ways to construct ${\mathcal K}P$, based on the possible choices of maximal elements in the recursive process. \end{prop} In certain situations, altering the underlying poset does not affect the polytope. This occurred in Figures~\ref{f:2d-poset}(b) and~\ref{f:2d-poset-2}(b), and can be presented as \begin{cor} \label{c:max} Let $x$ be a maximal element of a maximal chain of $P$ such that $\bu x$ is trivial. If $\partial x$ is connected, then ${\mathcal K}P = \mathcal K (P - x)$. \end{cor} \begin{proof} We show that $t$ is a tube of $P$ if and only if $t-x$ is a tube of $P - x$. If $x \notin t$, then $\partial x \not\subseteq t$, and $t$ has the properties of a tube in both $P$ and $P-x$, or in neither. On the other hand, if $x \in t$, then $\partial x \subseteq t$, and so $t$ is connected if and only if $t-x$ is connected. Extending this isomorphism of tubes to tubings preserves this containment. \end{proof} \begin{figure}[h] \includegraphics[width=\textwidth]{4d-poset} \caption{Construction of a 4D poset associahedron.} \label{f:4d} \end{figure} \begin{exmp} A 4D case for Theorem~\ref{t:const} is provided in the Schlegel diagram on the right side of Figure~\ref{f:4d}, where the poset steps are drawn above. Part (a) begins with the poset $P_*$ of Figure~\ref{f:3d-exmps}(b). In Figure~\ref{f:4d}(b), by Corollary~\ref{c:max}, adding the new maximal element to this poset does not change the structure of the polytope.
Finally, part (c) shows the addition of a nontrivial bundle, now with two elements. According to Theorem~\ref{t:const}, we first consider the 4D polytope ${\mathcal K}P_* \times \Delta_1$, the left Schlegel diagram of Figure~\ref{f:4d}. Then truncate certain faces (first the two blue chambers, then the four orange ones) to obtain the 4D poset associahedron drawn on the right. \end{exmp} \subsection{} We close this section with a corollary of Proposition~\ref{p:trunc} as it pertains to the classical associahedron $K_n$. Interestingly, this construction of the associahedron is novel, though examples of special cases have appeared in different parts of the literature, as referenced below. \begin{prop} The poset associahedron of the \emph{zigzag} poset with $2n-1$ elements yields the classic associahedron $K_{n+1}$. In particular, the associahedron $K_{n+1}$ is obtained by truncations of codimension two faces of $K_{p+1} \times \Delta_1 \times K_{q+1}$, where $n = p+q$ and $p, q \geq 1$. \end{prop} \begin{proof} The poset $P$ of a path $G$ with $n$ nodes is the zigzag poset with $2n-1$ elements; the tubings on $P$ resulting in ${\mathcal K}P$ are in bijection with tubes on the graph $G$. The enumeration of the different types of truncation comes from removing a maximal element of $P$ and using Theorem~\ref{t:const}(b). \end{proof} \begin{rem} The particular construction of $K_{n+1}$ from $K_n \times \Delta_1 \times K_2\, \simeq \, K_n \times \Delta_1 $ appears in another form in the work by Saneblidze and Umble \cite{su} on diagonals of associahedra. \end{rem} \begin{figure}[h] \includegraphics{k5-alt1} \caption{The associahedron $K_5$ from a pentagonal prism, $K_4 \times \Delta_1 \times K_2$.} \label{f:k5-alt1} \end{figure} \begin{exmp} Figure~\ref{f:k5-alt1} considers one construction of the 3D associahedron: Part (a) begins with the pentagon from Figure~\ref{f:2d-poset}(a), and (b) adds an extra disconnected element.
By Proposition~\ref{p:discon}, the result is the product with $\Delta_1$, a pentagonal prism. Part (c) connects up the poset with a trivial bundle; two edges of the prism are truncated according to the proof of Theorem~\ref{t:const} to yield the associahedron. Indeed, Figure~\ref{f:3d-exmps}(c) is formed in an identical manner. \end{exmp} \begin{figure}[h] \includegraphics{k5-alt2} \caption{The associahedron $K_5$ from a cube, $K_3 \times \Delta_1 \times K_3$.} \label{f:k5-alt2} \end{figure} \begin{exmp} Figure~\ref{f:k5-alt2} considers another assembly of the 3D associahedron: Each disconnected component yields an interval, and together (by Proposition~\ref{p:discon}), the result is a cube (a), the product of three intervals. Part (b) connects up the poset with a trivial bundle, and three of its edges are truncated according to the proof of Theorem~\ref{t:const}. This construction appears in \cite{bv}, motivated by truncating codimension two faces of cubes. \end{exmp} \begin{figure}[h] \includegraphics{4d-assoc-poset} \caption{The associahedron $K_6$ from $K_4 \times \Delta_1 \times K_3$.} \label{f:k6-poset} \end{figure} \begin{exmp} In a similar vein, Figure~\ref{f:k6-poset} obtains the 4D associahedron $K_6$ (right side) from truncating five codimension two faces of $K_4 \times \Delta_1 \times K_3$ (left side). The order of truncation is important: first the two blue faces, then two orange faces, and finally one yellow face. \end{exmp} \section{Family of Associahedra} \label{s:relation} \subsection{} The $(n-1)$-dimensional permutahedron $\mathcal{P}_n$ is the convex hull of the points formed by the action of a finite reflection group on an arbitrary point in Euclidean space. The classic example is the convex hull of all permutations of the coordinates of the Euclidean point $(1, 2, \dots , n)$. Changing edge lengths while preserving their directions results in the \emph{generalized permutohedron}, as defined by Postnikov \cite{pos}.
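The classical description of $\mathcal{P}_n$ above can be checked directly. The following hypothetical Python snippet (ours, not from the paper) generates the vertices of $\mathcal{P}_4$ as the permutations of $(1,2,3,4)$ and verifies that all $n!$ of them lie on the hyperplane $x_1 + \cdots + x_n = n(n+1)/2$, so that the polytope is at most $(n-1)$-dimensional. This matches the count of Theorem~\ref{t:combin} applied to an $n$-element antichain, which is a single bundle ($b = 1$) and hence yields an $(n-1)$-dimensional polytope.

```python
# Hypothetical sketch (not the authors' code): vertices of the classical
# permutohedron P_n, the convex hull of all permutations of (1, 2, ..., n).
from itertools import permutations
from math import factorial

n = 4
vertices = list(permutations(range(1, n + 1)))

# n! vertices, all lying on the hyperplane x_1 + ... + x_n = n(n+1)/2,
# so the polytope has dimension at most n - 1.
assert len(vertices) == factorial(n)
assert all(sum(v) == n * (n + 1) // 2 for v in vertices)

print(len(vertices), n - 1)   # 24 3
```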
An important subclass of these is the \emph{nestohedra} \cite{zel}: Nestohedra have the feature that each of their faces corresponds to a specific combinatorial set, and the intersection of two faces corresponds to the union of the two sets. For a given set $S$, each nestohedron $N(B)$ is based upon a given building set $B$, whose elements are known as tubes, where $B$ must contain all the singletons of $S$ and must also contain the union of any two tubes whose intersection is nonempty. \begin{prop} \label{p:nesto} All nestohedra can be obtained as poset associahedra, with posets of two ranks, with no bundles of size greater than one. \end{prop} \begin{proof} Given a building set $B$ of a set $S$, we describe a ranked poset $P_B$, with exactly two ranks, whose tubes are in bijection with $B$ and whose tubings are in bijection with the nested sets of $B$. The poset $P_B$ has minimal elements given by set $S$, and has maximal elements (each a trivial bundle) given by set $B$, each having boundary exactly the minimal elements that it contains. The tubes of $P_B$ are all the connected lower sets of $P_B$ generated by a single maximal element of $P_B$ (since each such lower set is automatically filled). And if a subset $T$ of tubes is not filled, then $T$ must be the boundary of a maximal element in $P_B$, and thus the collection of minimal elements in $T$ makes up an element of $B$. The ordering of nested sets by reverse inclusion corresponds to the ordering of tubings by reverse inclusion, so the nestohedron $N(B)$ is isomorphic to our polytope ${\mathcal K}P_B$. \end{proof} Note that $P_B$ is one of many posets whose polytope is $N(B)$; many more posets (with at most two ranks) can be found. Start with set $S$ and create any number of new maximal elements, each of which covers some of $S$, where each maximal element is a trivial bundle.
The set of tubes of such a poset $P$ will yield a building set $B$ on the set of minimal elements of $P$, due to the definition of a tube as a connected filled lower set. This amounts to choosing a subset of the power set of $S$ (a \emph{hypergraph} on $S$), and thus the process of building ${\mathcal K}P$ for such a poset is akin to constructing the hypergraph polytope \cite{hyper}. \begin{figure}[h] \includegraphics{nested-many} \caption{Examples of different posets resulting in identical poset associahedra.} \label{f:nested-many} \end{figure} \begin{exmp} Given set $S = \{1,2,3,4\}$ and building set $B = \{\{1\}, \{2\}, \{3\}, \{4\}, \{12\}, \{23\}, \linebreak \{234\}, \{124\}, S\}$, Figure~\ref{f:nested-many}(a) shows the poset $P_B$ constructed in the proof of Proposition~\ref{p:nesto}. Moreover, all the posets in this figure result in identical poset associahedra. \end{exmp} \subsection{} Although poset associahedra contain nestohedra, they form a different class from the generalized permutohedra. For instance, Figure~\ref{f:3d-exmps}(b) shows a 3D polytope which has an octagonal face, something not possible for generalized permutohedra. Figure~\ref{f:octagon} below shows this octagon in detail. Similarly, the 4D example in Figure~\ref{f:4d} is not a nestohedron either. \begin{figure}[h] \includegraphics{octagon} \caption{Octagonal face of the polyhedron in Figure~\ref{f:3d-exmps}(b).} \label{f:octagon} \end{figure} All graph associahedra and nestohedra are obtained from two rank posets, and it is natural to ask whether all poset associahedra can be obtained from some two rank posets. For instance, the three rank poset of Figure~\ref{f:2d-poset-2}(a) yields the same ${\mathcal K}P$ as the two rank poset of Figure~\ref{f:2d-poset}(a). The following shows that higher ranks indeed hold deeper structure. \begin{thm} There exist poset associahedra ${\mathcal K}P$ which cannot be found as ${\mathcal K}P'$, for any poset $P'$ of two rank.
\end{thm} \begin{proof} Consider the 4D example in Figure~\ref{f:4d}, a three rank poset with $f$-vector $(68, 136, 88, 20)$. Since this is not a nestohedron, Proposition~\ref{p:nesto} shows that at least one nontrivial bundle is needed when restricting to two ranks. Using computer calculations, we enumerated the $f$-vectors of all 4D poset associahedra for posets with two ranks and at least one nontrivial bundle. For each polytope with a matching $f$-vector (about 500 posets), we verified that it was not equivalent to the one in Figure~\ref{f:4d}, with these calculations and comparisons performed using the SAGE package. \end{proof} We close with some special examples. \begin{prop} \label{p:perm} Let $\bu x$ be a bundle with $n$ elements of poset $P$ such that all of its elements are maximal. If $\partial x = P - \bu x$, then ${\mathcal K}P = \mathcal K(P - \bu x) \times \mathcal{P}_n$. \end{prop} \begin{proof} A tubing $U \in {\mathcal K}P$ containing no tubes that intersect $\bu x$ can be viewed as a tubing in $\mathcal K(P - \bu x)$; call it $\alpha(U)$. Let $V \in {\mathcal K}P$ be a tubing with only tubes that intersect $\bu x$, where a tube in $V$ is a lower set generated by a subset of $\bu x$, and compatibility of these tubes is equivalent to the subsets being nested. Let $\Gamma_x$ be the complete graph with $n$ nodes, labeled by elements of $\bu x$. Let $\beta(V)$ be the tubing on $\Gamma_x$ such that if $t \subset \bu x$ generates a tube in $V$, then $t$ is a tube in $\beta (V)$. Any tubing $T \in {\mathcal K}P$ can be written as a tubing $U$ and a tubing $V$, where the map $T \to (\alpha (U), \beta(V))$ preserves compatibility and is bijective. The proof follows since the graph associahedron of a complete graph of $n$ nodes is the permutohedron $\mathcal{P}_n$. \end{proof} \begin{figure}[h] \includegraphics{simple-exmps} \caption{Examples of simple posets.} \label{f:simple} \end{figure} \begin{cor} \label{p:cube} Consider Figure~\ref{f:simple}.
If (a) $P$ is a chain, (b) a cross-stack of $n$ rank, (c) a one rank element with $n+1$ boundary elements, or (d) a bundle with $n$ maximal elements, then ${\mathcal K}P$ is a point, an $n$-cube, an $n$-simplex, or $\mathcal{P}_n$, respectively. \end{cor} \begin{proof} The first follows from Corollary~\ref{c:max}, and the rest from Proposition~\ref{p:perm}. \end{proof} \section{Proofs} \label{s:proof} \subsection{} The proof of Theorem~\ref{t:const} is now given, which immediately results in Theorem~\ref{t:combin}. We proceed by induction on the size of the connected poset $P$: First, an explicit construction of the polytope ${\mathcal K}P$ is provided, as outlined in Theorem~\ref{t:const}, based on the truncation of a smaller polytope (using the induction hypothesis). And second, a poset isomorphism is created between the newly constructed ${\mathcal K}P$ and tubings of $P$, establishing the result. Throughout this proof, we let $x$ be a maximal element in a maximal length chain of $P$. We first consider the case when $\bu x$ is trivial, and begin with a definition. \begin{defn} For trivial $\bu x$, if there exists a pairwise disjoint tubing $T$ of $P-x$ such that $$\textup{fill}_x(T) \ := \ \{x\} \ \cup \ \{p \in t \suchthat t \in T\}$$ is a tube of $P$, call $\textup{fill}_x(T)$ the $x$-\emph{fill} of $T$; Figure~\ref{f:filling} gives some examples. \end{defn} \begin{figure}[h] \includegraphics{fillingT} \caption{Some tubings of $P-x$ (top row) and their $x$-fills (bottom row).} \label{f:filling} \end{figure} \emph{\textcolor{mains}{Truncation algorithm:}} \ \ By induction, the theorem holds for $\mathcal K(P-x)$; we refer to this object both as a polytope and as the poset of tubings on $P-x$. Construct ${\mathcal K}P$ by truncating the faces $T$ of $\mathcal K(P-x)$ such that $\textup{fill}_x(T)$ are tubes of $P$.
The resulting new facets inherit the labelings of $\textup{fill}_x(T)$, while retaining the old labels for any facets not truncated; the label of each face is the set of labels of its adjacent facets. We perform this truncation iteratively, based on \emph{decreasing} size of the tubes $\textup{fill}_x(T)$; for a tie, the order of truncation is arbitrary. \begin{prop} For trivial $\bu x$, there exists a poset isomorphism $\varphi$ from ${\mathcal K}P$ to the simple polytope found by truncating $\mathcal K(P-x)$ as described above. \end{prop} \begin{proof} It is straightforward that the tubes labeling the facets of our truncated polytope are in bijection with the tubes of $P$. We show that $\varphi$ is a bijection of tubings by checking that a collection $T$ of tubes $\{t\}$ of $P$ is a tubing if and only if its corresponding facets $\{f(t)\}$ in ${\mathcal K}P$ intersect at a face. This simultaneously shows that $\varphi$ preserves the ordering of tubings. \emph{\textcolor{mains}{Forward direction:}} \ \ We show that if $T$ is not a tubing, then $\{f(t) \suchthat t \in T\}$ has empty intersection. If not a tubing, there is either a subset $S$ of $T$ that is not filled, or there is a pair of nonnested but intersecting tubes in $T$. First consider the former, when $S$ is unfilled: If $x$ is the only element whose absence causes $S$ to be unfilled, our truncations will have effectively separated the facets by removing their intersection; otherwise, the conclusion follows by the inductive assumption on $\mathcal K(P-x)$. Now consider when $T$ contains intersecting, nonnested tubes, say $t_1$ and $t_2$ of $P$. The faces $f(t_1)$ and $f(t_2)$ have empty intersection when neither tube contains $x$ (due to the inductive assumption on $\mathcal K(P-x)$) or when only one tube $t_1$ contains $x$ (since $f(t_1)$ was formed by truncating a face which was not contained in $f(t_2)$). When both tubes contain $x$, then there must exist a tube $t_*$ of $P$ containing their union.
Now let $t_1=\textup{fill}_x(T_1)$ and $t_2=\textup{fill}_x(T_2)$ for disjoint tubings $T_1$ and $T_2$ of $P-x$. Note that we are only concerned if $T_1 \cup T_2$ is a tubing of $P-x$, implying the faces $f(t_1)$ and $f(t_2)$ originally intersected in $\mathcal K(P-x)$. Let $t_* = \textup{fill}_x(T_*)$ for a tubing $T_*$ of disjoint tubes, each containing some of $\partial x$. Since $t_1, t_2 \subset t_*$, the truncation of $T_* \subset T_1 \cup T_2$ will have occurred before either of the latter, and will have involved the truncation of the face labeled by $T_1 \cup T_2,$ ensuring that future facets associated to the truncations of $f(t_1)$ and $f(t_2)$ will not intersect. \emph{\textcolor{mains}{Backward direction:}} \ \ Using finite induction on the truncated tubes in $P$, we show if $T$ is a tubing, then $\{f(t) \suchthat t \in T\}$ has nonempty intersection. Note that the tubes $\{t_i\}$ in $T$ containing $x$ must be nested $t_1\supset \dots \supset t_m$ since they intersect at least in $x$. Let $S_i$ be the set of disjoint tubes of $P-x$ such that $t_i = \textup{fill}_x(S_i)$. Thus, associated to a tubing $T$ of $P$, there is a tubing $T_0$ of $P-x$, made of all the tubes of $T$ that do not contain $x$, together with all the tubes in sets $S_i$. Our argument considers a series of $m$ truncations, proceeding from the list of tubes in $T_0$ to the list of tubes in $T$. By the construction of $T_0$, restrict attention to the truncations that form facets whose labels are in $T$. At each step, we show that truncation allows the tubes in the new intermediate list to label facets that intersect. The base case follows from the induction hypothesis (on the number of elements in our poset) that facets labeled by the tubes of $T_0$ do intersect at a unique face of the simple polytope $\mathcal K(P-x)$. 
Recursively define an intermediate set of tubes called $T_k$, corresponding to having performed truncations to create the facets labeled by $t_1$ through $t_k$, where $$ T_{k} \ := \ (T_{k-1} - S_{k}) \ \cup\ (S_k \ \cap \ (T \ \cup \ \bigcup_{{i>k}} S_i)) \ \cup \ (\{t_{k}\}\cap T). $$ Indeed, after all $m$ truncations, we will have transformed $T_0$ to become $T$, by adding to the list of tubes in $T_0$ all the new tubes $t_i$ in $T$ and subtracting all the tubes which are not in $T$. By induction, assume that after truncating to create facets $\{t_1, \dots, t_{k-1}\}$, the facets labeled by the tubes in $T_{k-1}$ do indeed intersect. We use this assumption to show that truncating to create the facet $t_{k} = \textup{fill}_x(S_k)$ will preserve the property of intersection: To obtain $T_k$ from $T_{k-1}$, add $t_k$ but crucially also remove at least one tube; specifically, we claim that at least one tube in $S_k$ will be removed. Since truncation occurs in decreasing order of containment, the tubes $\{t_{k+1}, \dots, t_m\}$ are all sequentially and properly contained inside of $t_k$. And since any tube of $T \cap S_k$ must be contained in the smallest of the tubes $t_i$, if there is one or more $t_i \subset t_k$, then the tubes in $S_k$ cannot all be found again among the tubes in the sets $\{S_{k+1}, \dots, S_m\}$, nor in $T \cap S_k$ itself. Finally, remove some of $S_k$ when $t_k$ is the smallest tube in $T$ containing $x$, since $T$ (being filled) cannot contain $S_k$. Therefore, since $T_{k} \cap T_{k-1} = T_{k} - \{t_k\}$ does not contain $S_k$, truncating face $f$ (whose containing facets are labeled by $S_k$) will not separate the facets labeled by the tubes of $T_{k} - \{t_k\}.$ However, $f$ does intersect the face where the facets labeled by $T_{k} - \{t_k\}$ intersect, so their intersection will further intersect the new facet labeled by $t_k.$ Thus, the facets labeled by the tubes in $T_k$ will have a nonempty intersection. 
\end{proof} \subsection{} We now consider the second case in Theorem~\ref{t:combin}, when $\bu x$ is nontrivial. \emph{\textcolor{mains}{Truncation algorithm:}} \ \ Construct the new polytope ${\mathcal K}P$ by truncating certain faces of $${\mathcal K}P_* \ := \ \mathcal K(P-(\bu x - x)) \times \Delta_{|\bu x - x|}.$$ Begin by labeling the vertices of $\Delta_{|\bu x - x|}$ with the elements of $\bu x$, and its faces by the subset of vertex labels which they contain. The faces of ${\mathcal K}P_*$ get labeled by the pair $(T,B)$, for the corresponding tubing $T$ of $\mathcal K(P-(\bu x-x))$ and subset $B$ of $\bu x$. Now truncate the faces labeled with $(T,B)$ where tubing $T$ has only one tube $t$. If $x \notin t$, label the resulting new facet with $t$. After truncations, let the label of each face be the set of labels of its adjacent facets, retaining the old labels for any facets not truncated. We perform this truncation iteratively, based on \emph{increasing} size of tubes $t$. \begin{prop} For nontrivial $\bu x$, there exists a poset isomorphism $\varphi$ from ${\mathcal K}P$ to the simple polytope found by truncating ${\mathcal K}P_*$ as described above. \end{prop} \begin{proof} It is straightforward to check that the tubes labeling the facets of our truncated polytope are in bijection with the tubes of $P$. We show that $\varphi$ is a bijection of tubings by checking that a collection $T$ of tubes $t$ of $P$ is a tubing if and only if its corresponding facets $f(t)$ in ${\mathcal K}P$ intersect at a face. This simultaneously shows that $\varphi$ preserves the ordering of tubings. \emph{\textcolor{mains}{Forward direction:}}\ \ We show that if $T$ is not a tubing, then $\{f(t) \suchthat t \in T\}$ has empty intersection. If $T$ is not a tubing, then either there is a subset $S$ of $T$ that is not filled, or there is a pair of nonnested but intersecting tubes in $T$.
In the former case, when $S$ is unfilled, there is a corresponding subset $S'$ that was unfilled in the poset $P-(\bu x-x)$, where $S'$ consists of the tubes of $S$ with one modification: Replace any portion of $\bu x$ in those tubes with $x$. Since, by induction, the facets labeled by tubes of $S'$ have no common intersection in $\mathcal K(P-(\bu x-x)),$ and since the product of polytopes preserves this fact, the faces of the product bearing labels from $S'$ do not have a common intersection. Now consider when $T$ contains intersecting, nonnested tubes, say $t_1$ and $t_2$ of $P$. Replace any portion of $\bu x$ contained in them with $x$, resulting in $t'_1 := t_1-(\bu x-x)$ and $t'_2 := t_2-(\bu x-x)$. If these tubes are still intersecting but nonnested, their facets in $\mathcal K(P-(\bu x-x))$ had no intersection, and this property will be passed along to our new polytope. But if the tubes $t'_1$ and $t'_2$ are nested or equal, then both $t_1$ and $t_2$ contained some of $\bu x$, and we must further consider the intersection \begin{equation} \label{e:inter} t_1 \ \cap \ t_2 \ \cap \ \bu x \,. \end{equation} If this is empty, then $t_1$ and $t_2$ are tubes created by truncating faces of the product polytope ${\mathcal K}P_*$, which in turn corresponded to faces of $\Delta_{|\bu x - x|}$ which did not intersect. Again the non-intersection is inherited by the new polytope. Finally, if \eqref{e:inter} is nonempty, then it is straightforward to see that the facets labeled by $t_1$ and $t_2$ result from truncating faces that originally do intersect in ${\mathcal K}P_*$. Here, there is a third, prior truncation (of a face $f$) of the product polytope that contains the intersection of the faces that are truncated to become $t_1$ and $t_2$. Indeed, face $f$ gives rise to the facet labeled by the tube $t_1 \cap t_2$, and thus was labeled in the product by the smaller of $t'_1$ and $t'_2$, paired with \eqref{e:inter}.
Therefore, it is truncated first and effectively separates the others. \emph{\textcolor{mains}{Backward direction:}}\ \ Using finite induction on the truncated tubes $\{t_1, \dots, t_m\}$ in $P$, we show that if $T$ is a tubing, then $\{f(t) \suchthat t \in T\}$ has nonempty intersection. Our argument proceeds by constructing a series of $m$ truncations, showing at each step that the tubes do indeed label facets that intersect. For a tubing $T$, create the set of pairs $$T_0 \ = \ \{\, (t_*,B) \suchthat t_* \ \mbox{is a tube of} \ P-(\bu x-x), \ B \subset \bu x\} \,,$$ where \begin{equation*} t_* \ = \ \begin{cases} \ (t-(\bu x -x), \ t \cap \bu x) & \hspace{10 pt} \mbox{if} \ \partial x \subset t\\ \ (t, \bu x)& \hspace{10 pt} \mbox{otherwise}. \end{cases} \end{equation*} This set gives a list of faces of ${\mathcal K}P_*$, whose intersection is nonempty, providing the base case for truncation: Since $T$ is a tubing, the tubes of $T$ which contain $\partial x$ are all nested, and by our construction of $T_0$, the set of second elements in the pairs are subsets of $\bu x$ having a common intersection. Recursively define an intermediate set of labels $T_k$, formed by performing truncations to create facets labeled by $t_1, \dots, t_k$. If $t_k$ is not in $T$, let $T_k$ be $T_{k-1}$. Otherwise, $T_k$ is formed by discarding from $T_{k-1}$ the pair $(t_k - (\bu x-x), \, t_k \cap \bu x)$ and replacing it with $t_k$ itself. This corresponds to labeling the new facet with the new tube $t_k$. We need to show that faces labeled by elements of $T_k$ still have a nonempty intersection after the truncation of face $t_k$. If $t_k$ does not contain $\partial x$, then it labels a facet and its truncation does not change the polytope. Otherwise, $t_k$ is either (1) contained in or (2) intersects (but is not nested with) some tube in $\{t_{k+1}, \dots, t_m\}$.
In the latter case (2), where $t_k$ intersects such a tube, we have $t_k \notin T,$ and we argue that the truncated face does not contain the intersection of the facets labeled by $T_k$. This follows because $t_k$ either inherits an empty intersection with the faces represented by $T_0$ (from one of the two polytopes in the cross product), or is separated from the intersection of the faces represented by $T_{k-1}$ by an earlier truncation (from the proof in the forward direction). Now consider the former case (1): If $(t_k \cap \bu x) = (t_{k+1} \cap \bu x)$, let $F_*$ be the facet labeled by the pair $(t_k,\bu x)$. Here, before any truncation, $F_*$ originally contains faces labeled by $(t_k, B)$, for $B\subset \bu x$. On the other hand, if $(t_k \cap \bu x) \subset (t_{k+1} \cap \bu x)$, let $F_*$ be the facet labeled by the pair $(P-(\bu x-x), \bu x -z)$, for some $z \in (t_{k+1} \cap \bu x) - (t_k \cap \bu x)$. In this case, this facet contains faces labeled by $(t,B)$, where $t \subseteq P-(\bu x-x)$ and $B \subseteq \bu x -z$. In either case, $F_*$ is chosen to contain the truncated face at step $k$ and no other face scheduled to be truncated afterwards. Moreover, $F_*$ is chosen such that it will eventually be labeled by a tube $t$ that intersects but is not nested with $\{t_{k+1}, \dots, t_m\}$. Therefore, since we are dealing with simple polytopes, $F_*$ also cannot contain any of the facets represented by the elements of $T_k$, both those labeled by tubes not containing $\partial x$, and those created by earlier truncations. Finally, note that the truncated face $t_k$ was assumed to intersect the common intersection of all the other faces represented by $T_{k-1}$, and so the facet created still does. Therefore, the faces represented by $T_k$ have a common intersection, and thus the tubing $T$ will be represented by facets with a common intersection in ${\mathcal K}P$. \end{proof} \bibliographystyle{amsplain}
https://arxiv.org/abs/1907.01823
Spectral gaps, symmetries and log-concave perturbations
We discuss situations where perturbing a probability measure on $\mathbb{R}^n$ does not deteriorate its Poincaré constant by much. A particular example is the symmetric exponential measure in $\mathbb{R}^n$, even log-concave perturbations of which have Poincaré constants that grow at most logarithmically with the dimension. This leads to estimates for the Poincaré constants of $(n/2)$-dimensional sections of the unit ball of $\ell_p^n$ for $1 \leq p \leq 2$, which are optimal up to logarithmic factors. We also consider symmetry properties of the eigenspace of the Laplace-type operator associated with a log-concave measure. Under symmetry assumptions we show that the dimension of this space is exactly $n$, and we exhibit a certain interlacing between the "odd" and "even" parts of the spectrum.
\section{Introduction} This work was partly motivated by the study of a family of probability measures on $\mathbb R^n$ which naturally appear when considering statistical questions pertaining to sparse linear modeling: \[ d\nu^{n,Q}(x)=\frac1Z e^{-\|x\|_1-Q(x)}\, dx, \] where $Q$ is a nonnegative quadratic form, $\|x\|_p=(\sum_i |x_i|^p)^{1/p}$, and $Z=Z_{n,Q}$ is the normalizing constant so that $\nu^{n,Q}$ is a probability measure. The latter is related to the classical functional $\theta\mapsto \|y-X\theta\|_2^2+\lambda \|\theta\|_1$ that one minimizes in order to find the LASSO estimator, see e.g. \cite{bvdgBOOK}. Here the quadratic term is supposed to ensure a good fit to data $y$, while minimizing the $L_1$ norm favours a small support for the estimator $\theta$. For a probability measure $\mu$ on $\mathbb R^n$, we denote by $C_P(\mu)$ the Poincar\'e constant of $\mu$, that is the least constant $C$ such that the following inequality holds for all locally Lipschitz functions $f:\mathbb R^n\to \mathbb R$: \begin{equation} \mathrm{Var}_{\mu} (f)\le C \int_{\mathbb R^n} |\nabla f|^2 d\mu. \label{eq_106} \end{equation} Here $\mathrm{Var}_{\mu} (f)=\int (f-\int f\, d\mu)^2 d\mu$ if $f\in L_2(\mu)$, and $+\infty$ otherwise, denotes the variance of $f$ with respect to $\mu$. Such Poincar\'e inequalities, when they hold, allow one to quantify concentration properties of $\mu$ as well as relaxation properties of associated Langevin dynamics, see e.g. \cite{BGLbook}. \medskip A natural question, posed to us by S. Gadat, is whether the Poincar\'e constant of $\nu^{n,Q}$ can be upper bounded independently of the quadratic form $Q$. This seems plausible, as the addition of $Q$ only makes the measure more log-concave and more localized around the origin. But making this intuition rigorous is far from obvious. A more demanding question is whether $C_P(\nu^{n,Q})$ is maximal when $Q=0$.
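As a quick numerical illustration of the defining inequality \eqref{eq_106} (our own sketch, not part of the argument): for the standard Gaussian on $\mathbb R$ the Poincar\'e constant is exactly $1$, with equality attained by linear functions. The snippet below checks the inequality by Gauss--Hermite quadrature for a few test functions; the function names are ours.

```python
import numpy as np

# Gauss-Hermite quadrature (probabilists' weight e^{-t^2/2}) for the
# standard Gaussian on R, whose Poincare constant is exactly 1.
nodes, weights = np.polynomial.hermite_e.hermegauss(60)
weights = weights / weights.sum()          # normalize to a probability measure

def variance(values):
    # Var_mu(f) = E f^2 - (E f)^2 under the quadrature measure
    return np.sum(weights * values**2) - np.sum(weights * values)**2

# Test functions f together with their derivatives f'.
tests = [(nodes, np.ones_like(nodes)),     # f(t) = t: equality case, ratio = 1
         (nodes**2, 2 * nodes),            # f(t) = t^2: ratio = 1/2
         (np.sin(nodes), np.cos(nodes))]   # f(t) = sin(t)
ratios = [variance(f) / np.sum(weights * df**2) for f, df in tests]
print(ratios)                               # each ratio is at most C_P = 1
```

The first two ratios are computed exactly by the quadrature (polynomial integrands), illustrating both the equality case and a strict case of \eqref{eq_106}.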
Observe that $\nu^{n,0}=\nu^n$ is the $n$-fold product of the Laplace distribution on $\mathbb R$, $d\nu(t)=\exp(-|t|)\, dt/2$. By the tensorization property of Poincar\'e inequalities, we have $C_P(\nu^n)=C_P(\nu)= 4$ (see Lemma 2.1 in \cite{bobkov-ledoux} for $C_P(\nu)\le 4$; the converse inequality is checked with exponential test functions). A positive answer to the latter question would imply that $C_P(\nu^{n,Q})$ is upper bounded by 4, independently of the dimension and of the nonnegative quadratic form $Q$. We cannot establish this bound, but we provide results in this direction which apply to more general settings, while putting forward the relevant features of the problem, such as symmetry, log-concavity and appropriate comparison with the Gaussian case. A sample result is stated next: \begin{theo}\label{th:Exp-poincare} Let $n \geq 2$ and let $F: {\mathbb R}^n \rightarrow {\mathbb R}$ be an even, convex function and let $1 \leq p \leq 2$. Consider the probability measure $\mu$ on ${\mathbb R}^n$ given by $$ d \mu(x) = \frac{1}{Z} e^{-\| x \|_p^p - F(x)} dx $$ where $Z$ is a normalizing constant. Then, \[ C_P(\mu) \le C (\log n)^{\frac{2-p}{p}},\] where $C$ is a universal constant. \end{theo} We do not know whether the logarithmic factor in Theorem \ref{th:Exp-poincare} is necessary. Up to this logarithmic factor, this theorem provides a positive answer to the above question, since a non-negative quadratic function $Q$ is an even, convex function. Note that in the case where $p=2$, there is no logarithmic factor in Theorem \ref{th:Exp-poincare}, yet in this case the theorem is well-known and holds true without the assumption that $F$ is an even function (see Corollary \ref{cor:BL} below). The case where $p\in [1,2)$ is harder, and relies on techniques from the study of log-concave measures.
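The exponential test functions mentioned above can be made concrete. For $f_a(t)=e^{at}$ with $|a|<1/2$, direct computation gives $\int f_a\, d\nu=(1-a^2)^{-1}$, $\int f_a^2\, d\nu=(1-4a^2)^{-1}$ and $\int (f_a')^2\, d\nu=a^2(1-4a^2)^{-1}$, so the Rayleigh quotient $\mathrm{Var}_\nu(f_a)/\int (f_a')^2 d\nu$ tends to $4$ as $a\to 1/2$. A minimal numerical sketch of this (our own illustration):

```python
def rayleigh(a):
    """Var_nu(f_a) / int (f_a')^2 dnu for f_a(t) = exp(a t), |a| < 1/2,
    with nu the two-sided exponential measure dnu = exp(-|t|) dt / 2."""
    mean = 1.0 / (1.0 - a**2)            # int f_a dnu
    second = 1.0 / (1.0 - 4.0 * a**2)    # int f_a^2 dnu
    energy = a**2 * second               # int (f_a')^2 dnu
    return (second - mean**2) / energy

quotients = [rayleigh(a) for a in (0.30, 0.45, 0.49, 0.499)]
print(quotients)  # increases toward the Poincare constant C_P(nu) = 4
```

Together with the bound $C_P(\nu)\le 4$ from \cite{bobkov-ledoux}, these quotients confirm $C_P(\nu)=4$.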
Using a result of Kolesnikov-Milman \cite{kolesnikov-milman} that allows one to compare Poincar\'e constants of log-concave functions and their level sets, we obtain the following: \begin{corollary} Let $n \geq 2$ and $p \in [1,2]$. Let $E \subseteq {\mathbb R}^n$ be a linear subspace, and set $\kappa = \dim(E) / n$. Then, \[ C_P\big(\lambda_{B_p^n\cap E}\big) \le c(\kappa) \cdot \log^{\frac{2}{p}} (n) \cdot \sup_{0 \neq \theta \in {\mathbb R}^n} \int_{ B_p^n\cap E} \left \langle x, \frac{\theta}{|\theta|} \right \rangle^2 d \lambda_{B_p^n\cap E}(x),\] where $c(\kappa)$ depends solely on $\kappa \in [0,1]$, where $B_p^n = \{ x \in {\mathbb R}^n \, ; \, \sum_i |x_i|^p \leq 1 \}$, and where $\lambda_{B_p^n \cap E}$ is the uniform probability measure on the section $B_p^n \cap E$. \label{cor1} \end{corollary} This provides a partial confirmation, up to a logarithmic term, of a famous conjecture of Kannan, Lov\'asz and Simonovits, which we recall in Section \ref{sec:KLS}. \medskip In Section \ref{sec_GM} we present additional related results, and in particular a slightly more general version of the above results, see Theorem \ref{th:GM-poincare}. The proofs in Section \ref{sec_GM} rely on ideas from the recent Gaussian-mixtures analysis of Eskenazis, Nayar and Tkocz \cite{ENT-mixtures}, and on the fact, going back to \cite{klartag-unconditional}, that the first non-trivial eigenfunction is an odd function under convexity and symmetry assumptions. This fact is revisited here, and in particular we prove the following interlacing result for the spectrum of the Laplace-type operator associated with an even, log-concave measure. A function $f: {\mathbb R}^n \rightarrow [0, \infty)$ is log-concave if the set where it is positive is convex, and $-\log f$ is a convex function on this set. \begin{theo}\label{th:interlace} Let $\mu$ be a finite measure with a log-concave density in ${\mathbb R}^n$. Assume that $\mu$ is even.
Then in the definition (\ref{eq_106}) it suffices to consider odd functions, i.e., denoting $\lambda_P(\mu) = 1 / C_P(\mu)$ we have $$ \lambda_P(\mu) = \lambda_P(\mu, ``odd") := \inf_{f: {\mathbb R}^n \rightarrow {\mathbb R} \textrm{ is odd}} \frac{\int_{{\mathbb R}^n} |\nabla f|^2 d \mu}{\mathrm{Var}_{\mu}(f)}, $$ where the infimum runs over all locally-Lipschitz, odd functions $f \in L^2(\mu)$ with $f \not \equiv 0$. \medskip Moreover, the even functions do not lag too far behind in the spectrum. Specifically, for any $(n+1)$-dimensional subspace $E \subseteq L^2(\mu)$ of locally-Lipschitz, odd functions we have $$ \lambda_P(\mu, ``even") := \inf_{f: {\mathbb R}^n \rightarrow {\mathbb R} \textrm{ is even}} \frac{\int_{{\mathbb R}^n} |\nabla f|^2 d \mu}{\mathrm{Var}_{\mu}(f)} \leq \sup_{0 \not \equiv f \in E} \frac{\int_{{\mathbb R}^n} |\nabla f|^2 d \mu}{\mathrm{Var}_{\mu}(f)}, $$ where the infimum runs over all locally-Lipschitz, even functions $f \in L^2(\mu)$ with $f \not \equiv Const$. \end{theo} It is well-known that there exist log-concave measures, such as the Laplace distribution mentioned above, for which the infimum defining the Poincar\'e constant is {\it not} attained. Nevertheless, under mild regularity assumptions on $\mu$ it is known that an eigenspace $E_{\mu} \subseteq L^2(\mu)$ corresponding to the eigenvalue $\lambda_P(\mu)$ does exist, and by elliptic regularity the eigenfunctions are smooth. The eigenspace $E_{\mu}$ consists of all locally-Lipschitz functions $f \in L^2(\mu)$ with $\int f d \mu = 0$ for which $$ \int_{{\mathbb R}^n} f^2 d \mu = C_P(\mu) \cdot \int_{{\mathbb R}^n} |\nabla f|^2 d \mu. $$ Given a measure $\mu$ on ${\mathbb R}^n$ write $\mathcal O_n(\mu)$ for the group of all linear isometries $R: {\mathbb R}^n \rightarrow {\mathbb R}^n$ with $R_* \mu = \mu$.
As an example, if $\mu$ has the symmetries of the cube $[-1,1]^n$, then the group $\mathcal O_n(\mu)$ has at least $2^n \cdot n!$ elements, and it has no non-trivial invariant subspaces. \begin{theo} Let $\mu$ be a log-concave probability measure on ${\mathbb R}^n$ with $E_{\mu} \neq \{ 0 \}$. Assume that the group $\mathcal O_n(\mu)$ has no non-trivial invariant subspace in ${\mathbb R}^n$. Moreover we make the regularity assumption that $\mu$ has a $C^2$-smooth, positive density $e^{-\psi}$ and that the Hessian matrix of $\psi$ is non-singular at any point of ${\mathbb R}^n$. Then $$ \dim E_\mu=n. $$ Moreover, for any $f\in E_\mu\setminus\{0\}$, \[E_\mu=\mathrm{span}\big\{ f\circ R; \, R\in \mathcal O_n(\mu)\big\}.\] \label{theo_427_} \end{theo} The proofs of the last two results appear in Section \ref{sec_pc}, where an extended discussion and several other related results may be found. \medskip {\it Acknowledgement.} This paper is based upon work supported by the National Science Foundation under Grant No. DMS-1440140 while two of the authors were in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2017 semester. \section{Poincar\'e constants for log-concave measures} \label{sec_pc} \subsection{Perturbation principles} We collect here several useful results on the Poincar\'e constants, dealing with various kinds of perturbations of measures. We start by recalling the classical bounded perturbation principle. It follows from the representation formula $\mathrm{Var}_\mu(f)=\inf_{a\in {\mathbb R}} \int (f-a)^2 d\mu$. \begin{proposition}\label{prop:bounded-perturbation} Let $\mu$ be a probability measure on $\mathbb R^n$ and let $\nu(dx)=e^{V(x)} \mu(dx)$ be another probability measure. If the function $V$ is bounded, then \[ C_P(\nu)\le C_P(\mu)\, e^{\mathrm{Osc}(V)},\] where $\mathrm{Osc}(V)=\sup V-\inf V$ is called the oscillation of $V$. \end{proposition} Denote ${\mathbb R}_+ = [0, \infty)$.
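As a numerical sanity check of Proposition \ref{prop:bounded-perturbation} (an illustration of ours only; the discretization scheme and all names below are our own), one can approximate the Poincar\'e constant of a one-dimensional measure by the first nonzero eigenvalue of a discretized weighted Neumann Laplacian, and verify the bound for a bounded perturbation of the Gaussian:

```python
import numpy as np
from scipy.linalg import eigh

def poincare_constant(log_density, x):
    """Approximate C_P of the 1-D measure exp(log_density) dx on the grid x
    as 1/lambda_1 of the generalized eigenproblem A f = lambda M f, where
    f^T A f discretizes int f'^2 dmu and f^T M f discretizes int f^2 dmu."""
    h = x[1] - x[0]
    w = np.exp(log_density(x))
    m = w / w.sum()                          # vertex weights (probability)
    e = np.sqrt(w[:-1] * w[1:]) / w.sum()    # edge weights (geometric midpoints)
    n = len(x)
    A = np.zeros((n, n))
    for i in range(n - 1):
        c = e[i] / h**2
        A[i, i] += c; A[i + 1, i + 1] += c
        A[i, i + 1] -= c; A[i + 1, i] -= c
    lam = eigh(A, np.diag(m), eigvals_only=True)
    return 1.0 / lam[1]                      # lam[0] = 0 (constant functions)

x = np.linspace(-6.0, 6.0, 400)
cp_mu = poincare_constant(lambda t: -t**2 / 2, x)    # standard Gaussian: C_P = 1
# Bounded perturbation V(t) = 0.5 sin(3t), so Osc(V) = 1.
cp_nu = poincare_constant(lambda t: -t**2 / 2 + 0.5 * np.sin(3 * t), x)
print(cp_mu, cp_nu)   # expect cp_nu <= cp_mu * exp(Osc(V))
```

The same two-sided argument gives $C_P(\nu)\ge C_P(\mu)e^{-\mathrm{Osc}(V)}$, which the computation also respects.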
The following one-dimensional comparison result appears in \cite{roustant-b-i}: \begin{proposition}\label{prop:unimodal-perturbation} Let $b\in(0,\infty]$ and $V$ be an even continuous function on $\mathbb R$ such that $\mu(dx)=\mathbf{1}_{(-b,b)}(x) e^{-V(x)} dx$ is a probability measure on $\mathbb R$. Let $\rho:\mathbb R\to \mathbb R^+$ be an even function which is non-increasing on $\mathbb R^+$, such that $\nu(dx)=\rho(x)\, \mu(dx)$ is a probability measure. Then $C_P(\nu)\le C_P(\mu)$. \end{proposition} The next statement is known as the Brascamp-Lieb variance inequality. A similar result in the complex setting appeared earlier in H\"ormander's work. \begin{theo}[Brascamp-Lieb \cite{brascamp-Lieb}] Let $V:\mathbb R^n\to \mathbb R$ be a $C^2$ function such that for all $x\in \mathbb R^n$, the Hessian matrix $D^2V(x)$ is positive definite. If $\mu(dx):=e^{-V(x)} dx$ is a probability measure, then for all locally Lipschitz functions $f:\mathbb R^n\to \mathbb R$, \[ \mathrm{Var}_\mu(f)\le \int \big\langle (D^2V)^{-1}\nabla f, \nabla f\big\rangle \, d\mu.\] \end{theo} In particular, if $D^2V(x)\ge \Sigma^{-1}$ for all $x \in {\mathbb R}^n$, where $\Sigma$ is a fixed positive-definite matrix, then for all $f$, $$ \mathrm{Var}_\mu(f)\le \int \big\langle \Sigma\nabla f, \nabla f\big\rangle \, d\mu. $$ Observe that $D^2V(x)\ge \Sigma^{-1}$ means that $x\mapsto V(x)-\frac12 \langle \Sigma^{-1}x,x\rangle$ is convex. This leads, by approximation (or via a different proof, as in \cite{bobkov-ledoux-BLBM} where a stronger log-Sobolev inequality is proved), to the following estimate for log-concave perturbations of Gaussian measures. \begin{corollary}\label{cor:BL} Let $\Sigma$ be a symmetric, positive-definite $n \times n$ matrix. Let $\rho: \mathbb R^n\to \mathbb R^+$ be a log-concave function, such that $\mu(dx):=\rho(x) \exp(-\frac 12 \langle \Sigma^{-1} x, x \rangle) dx$ is a probability measure. 
Then for all locally Lipschitz functions $f:\mathbb R^n\to \mathbb R$: \[ \mathrm{Var}_\mu(f)\le \int \big\langle \Sigma\nabla f, \nabla f\big\rangle \, d\mu.\] \end{corollary} In the log-concave case, Proposition \ref{prop:bounded-perturbation} may be improved substantially, as shown by E. Milman. A probability measure in ${\mathbb R}^n$ is log-concave if it is supported in an affine subspace, and admits a log-concave density in this subspace. The total variation distance between two probability measures $\mu$ and $\nu$ is $$ d_{TV}(\mu, \nu) = \sup_{A} |\mu(A) - \nu(A)| $$ where the supremum runs over all measurable sets $A$. \begin{theo}[E. Milman, Section 5 in \cite{emanuel}]\label{th:total-variation} Let $\mu_1$ and $\mu_2$ be two log-concave probability measures on $\mathbb R^n$ and let $\varepsilon > 0$. If $d_{TV}(\mu_1,\mu_2)\le 1-\varepsilon$, then \[ C_P(\mu_2)\le c(\varepsilon) \cdot C_P(\mu_1),\] where $c(\varepsilon)$ depends only on $\varepsilon$. \end{theo} \subsection{Background on the KLS conjecture}\label{sec:KLS} In the seminal paper \cite{kls}, Kannan, Lov\'asz and Simonovits (KLS for short) formulated a conjecture on the Cheeger isoperimetric inequality for convex sets, which turned out to be of fundamental importance for the understanding of volumetric properties of high dimensional convex bodies. We refer to the books \cite{abBOOK,BOOKgreek} for an extensive presentation of the topic, and focus on the material that is needed for the present work. The KLS conjecture has several equivalent formulations. The one that fits to our purposes is expressed in spectral terms. For a probability measure $\mu$ on $\mathbb R^n$ with finite second moments, let $C_P(\mu, ``linear")$ denote the least number $C$ such that for every \emph{linear} function $f:\mathbb R^n\to \mathbb R$ it holds $\mathrm{Var}_\mu(f)\le C \int |\nabla f|^2 d\mu$. 
Plainly \[ C_P(\mu) \ge C_P(\mu, ``linear")= \|\mathrm{Cov}(\mu) \|_{op}.\] Here, $\mathrm{Cov}(\mu) = (C_{ij})_{i,j=1,\ldots,n}$ is the covariance matrix of $\mu$, with entries $$ C_{ij} = \int_{{\mathbb R}^n} x_i x_j d \mu(x) - \int_{{\mathbb R}^n} x_i d \mu(x) \int_{{\mathbb R}^n} x_j d \mu(x),$$ and $\|\mathrm{Cov}(\mu) \|_{op}$ is the norm of $\mathrm{Cov}(\mu)$ considered as an operator on the Euclidean space $\mathbb R^n$, which is equal to the largest eigenvalue of $\mathrm{Cov}(\mu)$. The KLS conjecture predicts the existence of a universal constant $\kappa$ such that for every dimension $n$ and for every compact convex $K\subset \mathbb R^n$ with non-empty interior (convex body), \[ C_P(\lambda_K)\le \kappa \, C_P(\lambda_K,``linear"), \] where $\lambda_K$ denotes the uniform probability measure on $K$. The conjecture has been verified for only a few families of convex bodies, such as the unit balls of $\ell_p^n$ \cite{sodin,latalaw}, simplices \cite{barthe-wolff}, bodies of revolution \cite{huet}, and some Orlicz balls \cite{kolesnikov-milman}. The second named author proved in \cite{klartag-unconditional} that \[C_P(\lambda_K)\le c \log(1+n)^2 C_P(\lambda_K,``linear"),\] with $c$ being a universal constant, holds for all convex bodies $K\subset \mathbb R^n$ which are invariant under all coordinate sign changes ($(x_1,\ldots,x_n)\in K \Longleftrightarrow (|x_1|,\ldots,|x_n|)\in K$). Such bodies are called unconditional. See \cite{barthe-cordero} for more general symmetries. Corollary \ref{cor1} above gives another instance of a weak confirmation of the conjecture up to logarithms. The KLS conjecture can be formulated in the wider setting of log-concave probability measures (it turns out to be equivalent to the initial formulation on convex bodies). Let $\kappa_n$ denote the least number such that \[ C_P(\mu)\le \kappa_n \, C_P(\mu,``linear") \] holds for all log-concave probability measures on $\mathbb R^n$.
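For instance, for the product Laplace measure $\nu^n$ discussed in the introduction one has $\mathrm{Cov}(\nu^n)=2\,\mathrm{Id}$, so $C_P(\nu^n,``linear")=2$ while $C_P(\nu^n)=4$, in agreement with the conjecture. A small sample-based sketch of the identity $C_P(\mu,``linear")=\|\mathrm{Cov}(\mu)\|_{op}$ (our own illustration, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
X = rng.laplace(size=(200_000, n))       # product of Laplace laws: Cov = 2*Id
Sigma = np.cov(X, rowvar=False)          # empirical covariance matrix
lam_max = np.linalg.eigvalsh(Sigma)[-1]  # operator norm of the covariance

# For a linear f(x) = <theta, x>: Var_mu(f) = theta^T Sigma theta and
# |grad f|^2 = |theta|^2, so the best constant over linear functions is
# the largest eigenvalue of Sigma (the Rayleigh quotient bound).
theta = rng.standard_normal(n)
ratio = theta @ Sigma @ theta / (theta @ theta)
print(lam_max, ratio)                    # lam_max near 2; ratio <= lam_max
```

The Rayleigh quotient of any direction $\theta$ is bounded by the top eigenvalue, which the empirical covariance approximates.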
With this notation the KLS conjecture predicts that $\sup_n \kappa_n<+\infty$. We will use known estimates on $\kappa_n$. A rather easy bound was given by Bobkov \cite{bobkov}, extending the original result of \cite{kls} for convex bodies: for all log-concave probability measures on $\mathbb R^n$, \begin{equation}\label{eq:trace-bound} C_P(\mu) \le c\, \mathrm{Tr}(\mathrm{Cov}(\mu)), \end{equation} where $c$ is a universal constant. This gives $\kappa_n\le c\, n$. The best bound so far is due to Lee and Vempala \cite{lee-vempala} after a breakthrough of Eldan \cite{eldan}: there is a universal constant $c$ such that for all log-concave probability measures on $\mathbb R^n$ \[ C_P(\mu) \le c \|\mathrm{Cov}(\mu) \|_{HS}=c\big(\mathrm{Tr}(\mathrm{Cov}(\mu)^*\mathrm{Cov}(\mu))\big)^{1/2}.\] This implies that $\kappa_n\le c \sqrt n$. \subsection{Log-concave measures with symmetries} For a Borel measure $\mu$ on ${\mathbb R}^n$ and a function $f \in L_2(\mu)$ we write \begin{equation} \| f \|_{H^{-1}(\mu)} = \sup \left \{ \int_{{\mathbb R}^n} f u d \mu \, ; \, u \in L^2(\mu) \textrm{ is locally-Lipschitz with } \int_{{\mathbb R}^n} |\nabla u|^2 d \mu \leq 1 \right \}. \label{eq_951} \end{equation} The norm $\| f \|_{H^{-1}(\mu)}$ makes sense only when $\int f d\mu = 0$, as otherwise $\| f \|_{H^{-1}(\mu)} = +\infty$. By duality, it follows from the definition of the Poincar\'e constant that for any $f \in L^2(\mu)$ with $\int f d \mu = 0$, \begin{equation} \| f \|_{H^{-1}(\mu)}^2 \leq C_P(\mu) \int_{{\mathbb R}^n} f^2 d \mu. \label{eq_915} \end{equation} The following proposition is an extension of \cite[Lemma 1]{klartag-unconditional}, from uniform measures on $C^\infty$ smooth convex bodies to finite log-concave measures. A proof is provided for completeness. \begin{proposition} Let $\mu$ be a finite, log-concave measure on ${\mathbb R}^n$.
Let $f: {\mathbb R}^n \rightarrow {\mathbb R}$ be a locally-Lipschitz function in $L^2(\mu)$ with $\partial_i f \in L^2(\mu)$ and $\int \partial_i f d \mu = 0$ for all $i$. Then, \begin{equation} \mathrm{Var}_{\mu}(f) \leq \sum_{i=1}^n \| \partial_i f \|_{H^{-1}(\mu)}^2, \label{eq_941} \end{equation} where we recall that $\mathrm{Var}_{\mu}(f) = \int (f - E)^2 d \mu$ and $E = \int f d \mu / \mu({\mathbb R}^n)$. \label{prop_1004} \end{proposition} We require the following lemma, whose proof appears in the Appendix below: \begin{lemma} It suffices to prove Proposition \ref{prop_1004} under the additional assumption that the measure $\mu$ has a $C^{\infty}$-smooth density in ${\mathbb R}^n$ which is everywhere positive. \label{lem_1111} \end{lemma} \begin{proof}[Proof of Proposition \ref{prop_1004}] Thanks to Lemma \ref{lem_1111}, we may assume that $\mu(dx) = \exp(-\psi(x)) dx$, where $\psi: {\mathbb R}^n \rightarrow {\mathbb R}$ is smooth and convex. We may also add a constant to $f$ and assume that $\int f d \mu = 0$. Define the associated Laplace operator $$ L u = \Delta u - \langle \nabla u, \nabla \psi \rangle = \sum_{i=1}^n \partial_{ii} u - \partial_i u \cdot \partial_i \psi $$ for a $C^2$-smooth, compactly-supported $u: {\mathbb R}^n \rightarrow {\mathbb R}$. A virtue of this operator is the integration by parts $$ \int_{{\mathbb R}^n} u (Lv) d \mu = -\int_{{\mathbb R}^n} \langle \nabla u, \nabla v \rangle d \mu, $$ valid whenever $v: {\mathbb R}^n \rightarrow {\mathbb R}$ is $C^2$-smooth and compactly-supported and $u$ is locally-Lipschitz. The Bochner formula states that for any $C^2$-smooth, compactly-supported function $u: {\mathbb R}^n \rightarrow {\mathbb R}$, $$ \int_{{\mathbb R}^n} (Lu)^2 d \mu = \sum_{i=1}^n \int_{{\mathbb R}^n} |\nabla \partial_i u|^2 d \mu + \int_{{\mathbb R}^n} (\nabla^2 \psi) \nabla u \cdot \nabla u \, d \mu \geq \sum_{i=1}^n \int_{{\mathbb R}^n} |\nabla \partial_i u|^2 d \mu. 
$$ This Bochner formula is discussed in \cite{CFM}, where it is also proven (see \cite[Lemma 3]{CFM}) that there exists a sequence of compactly-supported, $C^2$-smooth functions $u_k: {\mathbb R}^n \rightarrow {\mathbb R} \ (k=1,2,\ldots)$ with \begin{equation} \lim_{k \rightarrow \infty} L u_k = f \qquad \text{in} \ L^2(\mu). \label{eq_401} \end{equation} Now, for any $k \geq 1$, \begin{align} \nonumber \int_{{\mathbb R}^n} f (L u_k) d \mu & = -\sum_{i=1}^n \int_{{\mathbb R}^n} \partial_i f \cdot \partial_i u_k d \mu \leq \sqrt{ \sum_{i=1}^n \int_{{\mathbb R}^n} |\nabla \partial_i u_k|^2 d \mu} \cdot \sqrt{ \sum_{i=1}^n \| \partial_i f \|_{H^{-1}(\mu)}^2} \\ & \leq \| L u_k \|_{L^2(\mu)} \cdot \sqrt{ \sum_{i=1}^n \| \partial_i f \|_{H^{-1}(\mu)}^2}. \label{eq_400} \end{align} By letting $k$ tend to infinity we deduce (\ref{eq_941}) from (\ref{eq_401}) and (\ref{eq_400}). \end{proof} Let us write $C_P(\mu, ``even")$ for the smallest number $C > 0$ for which $$ \mathrm{Var}_{\mu}(f) \leq C \int_{{\mathbb R}^n} |\nabla f|^2 d \mu $$ holds for all even, locally-Lipschitz functions $f \in L^2(\mu)$. We write $C_P(\mu, ``odd")$ for the analogous quantity where $f$ is assumed to be an odd function. \medskip When $\mu$ is an even measure and $f \in L^2(\mu)$ is odd, we may restrict attention to odd functions $u$ in the definition (\ref{eq_951}) of $\| f \|_{H^{-1}(\mu)}$. Indeed, replacing $u(x)$ by its odd part $[u(x) - u(-x)] / 2$ cannot possibly increase $\int |\nabla u|^2 d \mu$ or affect the integral $\int f u d \mu$ at all. Consequently, in this case, \begin{equation} \| f \|_{H^{-1}(\mu)}^2 \leq C_P(\mu, ``odd") \cdot \int_{{\mathbb R}^n} f^2 d \mu. \label{eq_429} \end{equation} Moreover, when $\mu$ is an even measure in ${\mathbb R}^n$ we have \begin{equation} C_P(\mu) = \max \{ C_P(\mu, ``odd"), C_P(\mu, ``even") \}.
\label{eq_1002} \end{equation} This follows from the fact that any locally-Lipschitz $f \in L^2(\mu)$ may be decomposed as $f = g + h$ with $g$ even and $h$ odd, and $\int gh d \mu = \int (\nabla g \cdot \nabla h) d \mu = 0$. In the case where the even measure $\mu$ is additionally assumed log-concave, formula (\ref{eq_1002}) may be improved. The following corollary is an extension of \cite[Corollary 2(ii)]{klartag-unconditional} from smooth convex bodies to finite log-concave measures. This extension requires a modified argument, as the one in \cite{klartag-unconditional} was based on eigenfunctions, which may not exist in general. \begin{corollary} \label{cor:even-sym} Let $\mu$ be a finite, log-concave measure on ${\mathbb R}^n$. Assume that $\mu$ is even. Then $$ C_P(\mu) = C_P(\mu, ``odd"). $$ \label{thm_1003} \end{corollary} \begin{proof} In view of (\ref{eq_1002}), we need to prove that $C_P(\mu, ``even") \leq C_P(\mu, ``odd")$. Thus, let $f \in L^2(\mu)$ be an even, locally-Lipschitz function. Then $\partial_i f$ is an odd function for all $i$. In the case where $\partial_i f \in L^2(\mu)$ for all $i$, by Proposition \ref{prop_1004} and by (\ref{eq_429}), \begin{equation} \mathrm{Var}_{\mu}(f) \leq \sum_{i=1}^n \| \partial_i f \|_{H^{-1}(\mu)}^2 \leq C_P(\mu, ``odd") \cdot \sum_{i=1}^n \int_{{\mathbb R}^n} |\partial_i f|^2 d \mu = C_P(\mu, ``odd") \cdot \int_{{\mathbb R}^n} |\nabla f|^2 d \mu. \label{eq_432} \end{equation} Note that (\ref{eq_432}) trivially holds when $\partial_i f \not \in L^2(\mu)$ for some $i$, as the right-hand side is infinite. Now (\ref{eq_432}) shows that $C_P(\mu, ``even") \leq C_P(\mu, ``odd")$. \end{proof} \begin{proof}[Proof of Theorem \ref{th:interlace}] The first part of the theorem follows from Corollary \ref{thm_1003}. As for the second part, let $E \subseteq L^2(\mu)$ be an $(n+1)$-dimensional subspace of locally-Lipschitz, odd functions.
Consider the linear map $\theta: E \rightarrow {\mathbb R}^n$ defined via \begin{equation} \theta(f) := \int_{{\mathbb R}^n} \nabla f\, d \mu. \label{eq_412} \end{equation} Since $E$ is $(n+1)$-dimensional, there exists $0 \not \equiv f \in E$ with $\theta(f) = 0$. Since $f$ is odd, the function $\partial_i f$ is an even function for all $i$. In the case where $\partial_i f \in L^2(\mu)$ for all $i$, by Proposition \ref{prop_1004} and the even analogue of (\ref{eq_429}), $$ \mathrm{Var}_{\mu}(f) \leq \sum_{i=1}^n \| \partial_i f \|_{H^{-1}(\mu)}^2 \leq C_P(\mu, ``even") \cdot \sum_{i=1}^n \int_{{\mathbb R}^n} |\partial_i f|^2 d \mu = C_P(\mu, ``even") \cdot \int_{{\mathbb R}^n} |\nabla f|^2 d\mu. $$ This inequality trivially holds if $\partial_i f \not \in L^2(\mu)$ for some $i$. We have thus found $f \in E$ with $$ \lambda_P(\mu, ``even") = \frac{1}{C_P(\mu, ``even")} \leq \frac{\int_{{\mathbb R}^n} |\nabla f|^2 d \mu}{\mathrm{Var}_{\mu}(f)}, $$ completing the proof of the theorem. \end{proof} A measure $\mu$ in ${\mathbb R}^n$ is unconditional if it is invariant under coordinate reflections, i.e., for any test function $\varphi$ and any choice of signs, $$ \int_{{\mathbb R}^n} \varphi(\pm x_1,\ldots,\pm x_n) d \mu(x) = \int_{{\mathbb R}^n} \varphi(x_1,\ldots,x_n) d \mu(x). $$ The following corollary is similar to \cite[Corollary 2(i)]{klartag-unconditional} but it does not involve any regularity assumption: \begin{corollary}\label{cor:uncond-sym} Let $\mu$ be a finite, log-concave measure on ${\mathbb R}^n$. Assume that $\mu$ is unconditional. Then $$ C_P(\mu) = C_P(\mu, ``\textrm{odd in at least one coordinate}"), $$ i.e., in the definition of $C_P(\mu)$ it suffices to consider functions $f(x_1,\ldots,x_n)$ for which there is an index $i$ such that $f$ is odd with respect to $x_i$.
\end{corollary} \begin{proof} For $I \subseteq \Omega_n = \{ 1,\ldots, n \}$ we say that $f(x_1,\ldots,x_n)$ is of type $I$ if it is even with respect to $x_i$ for $i \in I$ and odd with respect to $x_i$ for $i \not \in I$. Any $f \in L^2(\mu)$ may be decomposed into a sum of $2^n$ functions, each of a certain type $I \subseteq \Omega_n$. Moreover, even without the log-concavity assumption we have \begin{equation} C_P(\mu) = \max_{I \subseteq \Omega_n} C_P(\mu, ``\textrm{functions of type I}"). \label{eq_452} \end{equation} All we need is to eliminate the case $I = \Omega_n$ from the maximum in (\ref{eq_452}). However, if $f$ is of type $\Omega_n$, then each function $\partial_i f$ is of type $\Omega_n\setminus \{i\} $. We may thus rerun the argument in (\ref{eq_432}) and complete the proof. \end{proof} \subsection{The structure of the eigenspace} We move on to discuss properties of eigenfunctions of log-concave measures with symmetries, following their investigation in \cite{klartag-unconditional}. We will consider a log-concave probability measure $d\mu(x)=e^{-\psi(x)} dx$ such that $\psi:{\mathbb R}^n\to {\mathbb R}$ is of class $C^2$ and $D^2\psi(x)>0$ for all $x$. The Poincar\'e inequality asserts that the non-zero eigenvalues of $-L$, where $$ L=\Delta-\langle \nabla\psi, \nabla \rangle, $$ are at least $1/C_P(\mu)$. We assume here that $\lambda_\mu=\lambda_P(\mu) = 1/C_P(\mu)$ is actually an eigenvalue for $L$ and study the structure of the corresponding eigenspace $E_\mu:=\{ f\in L^2(\mu); Lf=-\lambda_\mu f\}$. Note that elliptic regularity ensures that eigenfunctions are $C^2$-smooth. First, we put forward the key ingredient in \cite{klartag-unconditional}. We reproduce the proof, for completeness. \begin{lemma}\label{lem:injectivity} Under the above assumptions, the linear map $\theta:E_\mu\to {\mathbb R}^n$ defined in (\ref{eq_412}) is injective. As a consequence $\dim E_\mu\le n$. 
\end{lemma} \begin{proof} Assume $Lf=-\lambda_\mu f$ and $\int \nabla f d\mu=0$. Then integration by parts, the Poincar\'e inequality for the zero-average functions $\partial_i f$ and the Bochner formula give \begin{align*} \lambda_\mu \int f^2d\mu&= -\int f Lf \, d\mu = \int |\nabla f|^2 d\mu =\sum_i \mathrm{Var}_\mu(\partial_i f) \le \frac{1}{\lambda_\mu} \sum_i \int |\nabla \partial_i f|^2 d\mu\\ &= \frac{1}{\lambda_\mu}\left( \int (Lf)^2d\mu - \int \langle D^2\psi \nabla f,\nabla f\rangle d\mu \right)\le \frac{1}{\lambda_\mu} \int (Lf)^2d\mu =\lambda_\mu \int f^2 d\mu. \end{align*} Hence all the above inequalities are actually equalities. In particular $ \int \langle D^2\psi \nabla f,\nabla f\rangle d\mu =0$, from which we conclude, since $D^2\psi>0$ everywhere, that $\nabla f\equiv 0$, i.e.\ $f$ is constant. Hence $0=Lf=-\lambda_\mu f$, and $f=0$. \end{proof} Let $\mathcal O_n$ be the group of linear isometries of the Euclidean space ${\mathbb R}^n$. We consider the subgroup of isometries which leave $\mu$ invariant: \[\mathcal O_n(\mu):=\big\{R\in \mathcal O_n;\; R\mu=\mu\big\}= \big\{R\in \mathcal O_n;\; \psi\circ R=\psi\big\}.\] \begin{lemma}\label{lem:Omu} If $f\in E_\mu$ and $R\in \mathcal{O}_n(\mu)$ then $f\circ R^{-1}\in E_\mu$ and \[ \theta\big( f\circ R^{-1}\big)= R \theta(f).\] \end{lemma} \begin{proof} The fact that $f\circ R^{-1}$ is still an eigenfunction is readily checked. Next \[ \theta\big( f\circ R^{-1}\big)=\int \nabla (f\circ R^{-1})d\mu = \int R (\nabla f)\circ R^{-1} d\mu= R \int \nabla f \, d\mu,\] where we have used that $R^{-1}$ is also the adjoint of $R$, and the invariance of $\mu$. \end{proof} \begin{rem} This result can be formulated in a more abstract way. The group $\mathcal{O}_n(\mu)$ has a natural representation as operators on ${\mathbb R}^n$, denoted by $\rho$. It has another one as operators on $E_\mu$, denoted by $\pi$ and defined for $R\in \mathcal O_n(\mu)$ and $f\in E_\mu$ by $\pi(R)f=f\circ R^{-1}$.
The statement of the lemma means that $\theta:E_\mu\to {\mathbb R}^n$ intertwines $\pi$ and $\rho$. \end{rem} \begin{rem} The arguments of the above two proofs were used in \cite{klartag-unconditional} to establish the existence of antisymmetric eigenfunctions, more specifically of an odd eigenfunction when $\psi$ is even, and of an eigenfunction which is odd in one coordinate when $\psi$ is unconditional. Note that these results give Corollary~\ref{cor:even-sym} and also Corollary \ref{cor:uncond-sym} above under strong assumptions on the existence of eigenfunctions, which we could remove in the present paper. It was proven in \cite{barthe-cordero} that the existence of antisymmetric eigenfunctions extends as follows: if there exist $R_1,\ldots, R_k\in \mathcal O_n(\mu)$ such that $\{x\in {\mathbb R}^n;\; \forall i, R_ix=x\}=\{0\}$, then for every $f\in E_\mu\setminus \{0\}$ there exists $i$ such that $f\circ R_i -f\in E_\mu\setminus \{0\}$. The proof of this is easy from the lemmas: it is always true that $f\circ R_i -f\in E_\mu$. Assume by contradiction that for all $i$, $f\circ R_i -f=0$. Then $\theta(f)=\theta(f\circ R_i)=R_i^{-1}\theta(f)$. So $\theta(f)\in {\mathbb R}^n$ is a fixed point of all the $R_i$'s. By hypothesis, $\theta(f)=0$ hence $f=0$. \end{rem} The above two statements allow us to derive some more structural properties of $E_\mu$ when the measure has enough symmetries. \begin{theo} With the above notation, assume that $\mathcal O_n(\mu)$ has no non-trivial invariant subspace. Then the map $\theta$ is bijective. In particular $\dim E_\mu=n$. Moreover, for any $f\in E_\mu\setminus\{0\}$, \[E_\mu=\mathrm{span}\big\{ f\circ R; \, R\in \mathcal O_n(\mu)\big\}.\] \label{theo_427} \end{theo} \begin{proof} By the above lemma, the range of $\theta$ is invariant by $\mathcal O_n(\mu)$. By Lemma~\ref{lem:injectivity}, the map $\theta$ is injective, so its range cannot be reduced to $\{0\}$.
Therefore $\theta(E_\mu)={\mathbb R}^n$, i.e.\ $\theta$ is surjective, hence bijective. \medskip Next consider $S:=\mathrm{span}\big\{f\circ R^{-1};\, R\in \mathcal O_n(\mu)\big\}\subset E_\mu$. Then, thanks to the latter lemma, $\theta(S)=\mathrm{span}\big\{ R\theta(f);\, R\in \mathcal O_n(\mu)\big\}$ is $\mathcal O_n(\mu)$ invariant and non-zero. Therefore it is equal to ${\mathbb R}^n$. Hence $S=E_\mu$. \end{proof} Theorem \ref{theo_427_} above follows from Theorem \ref{theo_427}, as it is well-known by spectral theory that a locally-Lipschitz function $f \in L^2(\mu)$ with $\int f d \mu = 0$ for which equality in the Poincar\'e inequality is attained belongs to $E_{\mu}$. \medskip Finally, let us give an example in a specific case: assume that $\mu$ has the symmetries of the cube, or equivalently that $\psi(x)=\psi\big(|x_{\sigma(1)}|,\ldots, |x_{\sigma(n)}|\big)$ for all permutations $\sigma$ of $\{1,\ldots,n\}$ and all $x\in {\mathbb R}^n$. Then $\mathcal O_n(\mu)$ has no non-trivial invariant subspace and the above theorem applies. But one can give a more precise description of the $n$-dimensional space $E_\mu$ in this case. \medskip Denote by $(e_i)_{i=1}^n$ the canonical basis of ${\mathbb R}^n$, by $S_i$ the orthogonal symmetry with respect to the hyperplane $\{x; x_i=0\}$, and by $T_{ij}$, $i\neq j$, the linear operator on ${\mathbb R}^n$ which exchanges $e_i$ and $e_j$ and fixes the other vectors of the canonical basis. Note that $S_i$ and $T_{ij}$ belong to $\mathcal O_n(\mu)$ and are involutive. Since $\theta$ is bijective we define $f_i:=\theta^{-1}(e_i)$, and obtain a basis $(f_i)_{i=1}^n$ of $E_\mu$.
The relationships between vectors of ${\mathbb R}^n$ and isometries in $\mathcal O_n(\mu)$ can be transferred to eigenfunctions thanks to $\theta$: \begin{align*} \theta(f_1)&=e_1=-S_1e_1=-S_1\theta(f_1)=\theta(-f_1\circ S_1)\\ \theta(f_1)&=e_1=S_ie_1=S_i\theta(f_1)=\theta(f_1\circ S_i),\quad \mathrm{if} \, i\neq 1\\ \theta(f_1)&=e_1=T_{ij}e_1=T_{ij}\theta(f_1)=\theta(f_1\circ T_{ij}),\quad \mathrm{if} \, i,j\neq 1 \end{align*} imply that $f_1=-f_1\circ S_1$ and for $i,j\neq 1$, $f_1=f_1\circ S_i=f_1\circ T_{ij}$. In other words, for any $(x_2,\ldots,x_n)$, the map $x_1\mapsto f_1(x_1,\ldots,x_n)$ is odd and for any $x_1$, the map $(x_2,\ldots, x_n)\mapsto f_1(x_1,\ldots,x_n)$ is invariant by changes of signs and permutations of coordinates. Still for $i\neq 1$, \[ \theta(f_i)=e_i=T_{1i}e_1=T_{1i} \theta(f_1)= \theta(f_1\circ T_{1i}) \] yields $f_i=f_1\circ T_{1i}$. In particular, $f_i$ is an odd function of $x_i$ and an unconditional and permutation invariant function of $(x_j)_{j\neq i}$. Consequently, for $i\neq j$, $\int f_i f_j \, d\mu=0$ (the integral against $dx_i$ is equal to zero since $f_i$ is odd in $x_i$ while $f_j$ and $\psi$ are even in $x_i$). Summarizing, $(f_1,f_1\circ T_{12},\ldots, f_1\circ T_{1n})$ is an orthogonal basis of $E_\mu$. \section{Perturbed products} In this section we investigate Poincar\'e inequalities for multiplicative perturbations of product measures. \subsection{Unconditional measures} We now describe a comparison result which may be viewed as the higher-dimensional analog of Proposition \ref{prop:unimodal-perturbation}, in the case of product measures. We write ${\mathbb R}^n_+ = [0, \infty)^n$. \begin{theo}\label{theo:before-rem-cube} For $i=1,\ldots,n$, let $d\mu_i(t)=\mathbf{1}_{(-b_i,b_i)}(t) e^{-V_i(t)}\, dt$ be an origin-symmetric probability measure on $\mathbb R$, with $b_i\in(0,\infty]$ and $V_i$ continuous on $\mathbb R$.
Let $\rho:\mathbb R^n\to \mathbb R^+$ be such that $d\mu^{n,\rho}(x)=\rho(x)\prod_{i=1}^nd\mu_i(x_i)$ is a probability measure. Assume that $\rho$ is unconditional (i.e. $\rho(x_1,\ldots,x_n)=\rho(|x_1|,\ldots, |x_n|)$ for all $x\in \mathbb R^n$) and coordinatewise non-increasing on $\mathbb R_+^n$. If in addition $\mu^{n,\rho}$ is log-concave, then \[C_P(\mu^{n,\rho})\le C_P(\mu^{n,1})=\max_i C_P(\mu_i).\] This holds in particular when the measures $\mu_i$ are even and log-concave and $\rho$ is log-concave and unconditional. \end{theo} \begin{proof} Since $\mu^{n,\rho}$ is log-concave and unconditional, we know by Corollary~\ref{cor:uncond-sym} that it is enough to prove the Poincar\'e inequality for functions which are odd with respect to one coordinate. Let $f:\mathbb R^n\to \mathbb R$ be locally Lipschitz, and assume that it is odd in the first variable (other variables are dealt with in the same way). Then by symmetry $\int f \, d\mu^{n,\rho}=0$, so that \[ \mathrm{Var}_{\mu^{n,\rho}}(f)= \int f^2 d\mu^{n,\rho}=\int \left( \int_{\mathbb R} f^2(x) \rho(x) d\mu_1(x_1)\right) \prod_{i\ge 2} d\mu_i(x_i). \] For $\mu_2\otimes\cdots\otimes \mu_n$-almost every $\overline{x}:=(x_2,\ldots,x_n)$, $Z_{\overline{x}}:=\int_{\mathbb R} \rho(x) d\mu_1(x_1)<+\infty$. Thus we may consider the probability measure $\rho(x_1,\overline{x}) d\mu_1(x_1)/ Z_{\overline{x}}$. It is a perturbation of an even probability measure on $\mathbb R$, by the even unimodal function $x_1\mapsto \rho(x_1,\overline{x})/Z_{\overline{x}}$. Hence by Proposition \ref{prop:unimodal-perturbation}, its Poincar\'e constant is at most $C_P(\mu_1)$.
Since $x_1 \mapsto f(x_1,\overline{x})$ is odd, it has a zero average for the latter measure and we get \[ \int_{\mathbb R} f^2(x) \rho(x)\frac{d\mu_1(x_1)}{Z_{\overline{x}}} \le C_P(\mu_1) \int_{\mathbb R} (\partial_1 f(x))^2 \rho(x)\frac{d\mu_1(x_1)}{Z_{\overline{x}}}.\] Cancelling $Z_{\overline{x}}$ and plugging this into the former equality, we get \[ \mathrm{Var}_{\mu^{n,\rho}}(f)\le \int \left( C_P(\mu_1) \int_{\mathbb R} (\partial_1 f(x))^2 \rho(x) d\mu_1(x_1)\right) \prod_{i\ge 2} d\mu_i(x_i)\le \max_i C_P(\mu_i) \int |\nabla f|^2 d \mu^{n,\rho}. \qedhere \] \end{proof} \begin{rem} The hypothesis of unconditionality on the perturbation $\rho$ cannot be dropped, as the following example shows. Denote by $U([a,b])$ the uniform probability measure on $[a,b]$. Classically, $C_P(U([a,b]))=(b-a)^2/\pi^2$. We choose $\mu_i=U([-\frac12,\frac12])$. Then the measure $\mu^{n,1}$ is uniform on the unit cube $C_n:=[-\frac12,\frac12]^n\subset \mathbb R^n$, and $C_P(\mu^{n,1})=\pi^{-2}$. Let $\varepsilon \in (0,1)$ and consider an orthogonal parallelotope $P_\varepsilon$ included in the cube $C_n$ and of maximal side length $(1-\varepsilon )\sqrt{n}$ (such parallelotopes are easily constructed; when $\varepsilon$ tends to zero they collapse to the main diagonal of the cube, the length of which is $\sqrt n$). Then define $\rho_\varepsilon=\mathbf{1}_{P_\varepsilon}/\mathrm{Vol}(P_\varepsilon)$. Clearly $\mu^{n,\rho_\varepsilon}$ is the uniform measure on $P_\varepsilon$, which is a product measure. So by the tensorisation property $C_P(\mu^{n,\rho_\varepsilon})=\frac{1}{\pi^2} ((1-\varepsilon)\sqrt n)^2$. \end{rem} \begin{rem} The product hypothesis is also important. Consider the uniform measure $U(\sqrt{n}B_2^n)$ on the Euclidean ball of radius $\sqrt n$ in $\mathbb R^n$, for $n\ge 2$. It is well-known that $\sup_n C_P(U(\sqrt{n}B_2^n))<+\infty$.
For $\varepsilon\in (0,1)$, define the unconditional parallelotope \[Q_\varepsilon=\left\{x\in\mathbb R^n; |x_1|\le \sqrt{n-\varepsilon}\; \mathrm{and}\, \forall i\ge 2, |x_i|\le \sqrt{\frac{\varepsilon}{n-1}} \right\}\subset \sqrt{n} B_2^n.\] Since it is a product set, $C_P(U(Q_\varepsilon))=C_P\big(U([-\sqrt{n-\varepsilon}, \sqrt{n-\varepsilon}])\big)=4(n-\varepsilon)/\pi^2$. Hence $U(Q_\varepsilon)$ is an unconditional and log-concave perturbation of $U(\sqrt{n}B_2^n)$, which is itself log-concave and unconditional. Nevertheless the former has a much larger Poincar\'e constant than the latter when the dimension grows. See also Section 3.3 below. \end{rem} \subsection{The general case} The above examples show that a dimension dependence is sometimes needed, of order $n$ for the covariances and Poincar\'e constants. We show next that this is as bad as it gets, and that such a control of the covariance can be obtained independently of the even log-concave perturbation. \begin{theo} Let $\mu_1, \ldots,\mu_n$ be even log-concave probability measures on $\mathbb R$, and let $\rho:{\mathbb R}^n\to {\mathbb R}^+$ be an even log-concave function such that \[ d\mu^{n,\rho}(x):= \rho(x) \prod_{i=1}^n d\mu_i(x_i),\quad x\in {\mathbb R}^n\] is a probability measure. Then, covariance matrices can be compared: \[ \mathrm{Cov}(\mu^{n,\rho})\le n \, \mathrm{Cov}(\mu^{n,1}).\] Moreover, \[ C_P(\mu^{n,\rho}) \le c \sum_{i=1}^n \mathrm{Var}(\mu_i) \le c\, n \max_{i} C_P(\mu_i)=c\, n \, C_P(\mu^{n,1}),\] where $c$ is a universal constant. \end{theo} \begin{proof} We start with the covariance inequality. Set $\sigma_i^2=\mathrm{Var}(\mu_i)$. Let $g$ be an even log-concave function on $\mathbb R$.
Then since $g$ is non-increasing on $\mathbb R^+$, \[ \int_{\mathbb R} t^2 g(t) \, d\mu_i(t) \le \left( \int_{\mathbb R} t^2 d\mu_i(t)\right) \left( \int_{\mathbb R} g(t) \, d\mu_i(t)\right) .\] Indeed, by symmetry this follows from the basic fact that $2\mathrm{cov}_{m}(f,g)=\int_{({\mathbb R}_+)^2} (f(x)-f(y))(g(x)-g(y)) \, dm(x) dm(y)\le 0$ if $m$ is a probability measure on $\mathbb R^+$, $f$ is non-decreasing and $g$ is non-increasing. The above inequality, sometimes referred to as Chebyshev's sum inequality, can be restated in terms of the peaked ordering as $ t^2 d\mu_i(t)\prec \sigma_i^2 \mu_i $. Such an inequality is preserved by taking on both sides the tensor product with an even log-concave measure (e.g. Kanter \cite[Corollary 3.2]{Kanter}). Hence, tensorizing with $\otimes_{j\neq i} \mu_j$, \[ x_i^2 d\mu_1(x_1)\ldots d\mu_n(x_n) \prec \sigma_i^2 \mu_1\otimes\cdots\otimes \mu_n .\] This means that the left-hand side measure has smaller integral against even log-concave functions. Applying this with $\rho$ gives \begin{equation} \label{eq:bound-coordinate-variance} \int x_i^2 d\mu^{n,\rho}(x)\le \sigma_i^2. \end{equation} This is enough to upper bound the covariance matrix. Indeed, for $\theta\in \mathbb R^n$, \begin{align*} \mathrm{Var}_{\mu^{n,\rho}}(\langle \cdot, \theta\rangle)& =\int \langle x,\theta\rangle^2 d\mu^{n,\rho}(x)=\sum_{i,j} \int x_ix_j \theta_i\theta_j \, d\mu^{n,\rho}(x) \\ & \le \sum_{i,j} |\theta_i|\, |\theta_j| \left(\int x_i^2 d\mu^{n,\rho}(x) \right)^{\frac12}\left(\int x_j^2 d\mu^{n,\rho}(x) \right)^{\frac12} \\ & \le \sum_{i,j}\sigma_i |\theta_i| \sigma_j |\theta_j| = \left( \sum_{i=1}^n |\theta_i| \sigma_i\right) ^2 \\ & \le n \sum_{i=1}^n \sigma_i^2 \theta_i^2= n \, \mathrm{Var}_{\mu_1\otimes\cdots\otimes \mu_n}(\langle \cdot, \theta\rangle) .
\end{align*} Finally, since $\mu^{n,\rho}$ is log-concave, we may apply inequality \eqref{eq:trace-bound}: \[C_P(\mu^{n,\rho})\le c \, \mathrm{Tr}\big(\mathrm{Cov}(\mu^{n,\rho})\big)= c\sum_{i=1}^n \int x_i^2 d\mu^{n,\rho}(x).\] We conclude thanks to \eqref{eq:bound-coordinate-variance}. \end{proof} \subsection{Gaussian mixtures} \label{sec_GM} In this section, we consider $n$ probability measures on ${\mathbb R}$ which are absolutely continuous {\it Gaussian mixtures}. This means that $\mu_i(dt)=\varphi_i(t)\, dt$ for $i=1,\ldots,n$ with \begin{equation} \varphi_i(t)= \int_{{\mathbb R}_+^*} \frac{e^{-\frac{t^2}{2\sigma^2}}}{\sigma \sqrt{2\pi}} \, dm_i(\sigma) ,\quad t\in {\mathbb R} \label{eq_546} \end{equation} where $m_i$ is a probability measure and ${\mathbb R}^*_+ = (0, \infty)$. In other words, if $R_i$ is a random variable with law $m_i$ and is independent of a standard Gaussian variable $Z$, then the product $R_iZ$ is distributed according to $\mu_i$. These measures were considered by Eskenazis, Nayar and Tkocz \cite{ENT-mixtures}, who showed that several geometric and entropic properties of Gaussian measures extend to Gaussian mixtures. \subsubsection{Using the covariance} For log-concave probability measures, it is known that the Poincar\'e constant is related to the operator norm of the covariance matrix of the measure. In order to estimate the covariance, we use an extension by Eskenazis, Nayar and Tkocz of the Gaussian correlation inequality due to Royen \cite{royen}. A function $f$ is quasi-concave if its upper level sets $\{ x ; f(x) \geq t \}$ are convex for all $t$. \begin{theo}[\cite{ENT-mixtures}] Let $\mu_1,\ldots,\mu_n$ be probability measures on $\mathbb R$ which all are Gaussian mixtures.
Let $f,g:\mathbb R^n\to \mathbb R^+$ be even and quasi-concave. Then for $\mu=\mu_1\otimes\cdots\otimes \mu_n$, \[ \int fg \, d\mu \ge \left( \int f\, d\mu\right) \left( \int g\, d\mu\right).\] \label{thm_548} \end{theo} \begin{rem} The inequality is actually valid for the more general class of even and unimodal functions (i.e. increasing limits of positive combinations of indicators of origin-symmetric convex sets). \end{rem} For our purpose we rather need a weaker version of Theorem \ref{thm_548}. Let $c:\mathbb R^n\to {\mathbb R}$ be an even convex function, and $g$ be even and log-concave; for $\varepsilon>0$, consider the log-concave function $f=\exp(-\varepsilon c)$. Then the above theorem gives $\int e^{-\varepsilon c}g \, d\mu \ge \left(\int e^{-\varepsilon c} \, d\mu\right) \left( \int g \, d\mu\right)$. There is equality for $\varepsilon=0$, so comparing derivatives at $\varepsilon =0$ yields that an even convex function and an even log-concave function are negatively correlated for $\mu$: \begin{equation}\label{eq:convex-logconcave} \int cg \, d\mu \le \left( \int c\, d\mu\right) \left( \int g\, d\mu\right). \end{equation} In the case of centered Gaussian measures, this negative correlation property between even convex and even log-concave functions was first established by Harg\'e \cite{harge}. \begin{proposition}\label{th:GM-odd-functions} Let $\mu_1,\ldots,\mu_n$ be Gaussian mixtures, and let $\rho:\mathbb R^n\to \mathbb R^+$ be an even log-concave function such that the measure $d\mu^{n,\rho}(x)=\rho(x) \prod_{i=1}^n d\mu_i(x_i)$ is a probability measure on $\mathbb R^n$. Then \[ \mathrm{Cov}(\mu^{n,\rho})\le \mathrm{Cov}(\mu^{n,1}).\] If in addition $\mu^{n,\rho}$ is log-concave (which is true if the measures $\mu_i$ and the function $\rho$ are log-concave), then \[ C_P(\mu^{n,\rho}) \le c\, n^{\frac12} \max_{i} \mathrm{Var}(\mu_i) \le c\, n^{\frac12} \max_{i} C_P(\mu_i)=c\, n^{\frac12} C_P(\mu^{n,1}),\] where $c$ is a universal constant.
\end{proposition} \begin{proof} Let $\theta\in \mathbb R^n$. Since $x\mapsto \langle x,\theta\rangle^2$ is even and convex, the correlation inequality \eqref{eq:convex-logconcave} yields \[ \int \langle x,\theta\rangle^2 \rho(x) \prod d\mu_i(x_i) \le \left(\int \langle x,\theta\rangle^2 \prod d\mu_i(x_i) \right) \int \rho(x) \prod d\mu_i(x_i). \] Since the measures are centered, this can be rewritten as \[\mathrm{Var}_{\mu^{n,\rho}} \big( \langle \cdot, \theta\rangle\big) \le \mathrm{Var}_{\mu^{n,1}} \big( \langle \cdot, \theta\rangle\big),\quad \theta \in \mathbb R^n.\] Hence the covariance inequality is proved. For the second part of the statement, we apply the best general result towards the Kannan-Lov\'asz-Simonovits conjecture, recalled in Section \ref{sec:KLS}: for every log-concave probability measure $\eta$ on $\mathbb R^n$, $C_P(\eta)\le c\, n^{1/2} \|\mathrm{Cov}(\eta)\|_{op}$. \end{proof} \begin{rem} The KLS conjecture predicts that for some universal constant $\kappa$ and for all log-concave probability measures $\eta$, $C_P(\eta)\le \kappa \|\mathrm{Cov}(\eta)\|_{op}$. If it were confirmed, then the conclusion of the above proposition could be improved to $C_P(\mu^{n,\rho})\le \kappa \, C_P(\mu^{n,1})$. \end{rem} \begin{rem} The correlation inequality proves that $\mu^{n,\rho}\succ \mu^{n,1}$ for the peaked ordering on measures: $\mu\succ \nu$ means $\mu(K)\ge \nu(K)$ for all origin-symmetric convex sets, and implies $\int f d\mu\ge \int f d\nu$ for all (even) unimodal functions. Also, the weaker correlation inequality \eqref{eq:convex-logconcave} implies that $\mu^{n,\rho}$ is dominated by $\mu^{n,1}$ in the Choquet ordering (integrating against convex functions). \end{rem} \subsubsection{Direct approach} Working directly on the Poincar\'e inequality, we will improve the $n^{1/2}$ of Proposition \ref{th:GM-odd-functions} to $\log(n)$.
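Before entering the direct approach, the Gaussian-mixture representation (\ref{eq_546}) can be verified numerically in a concrete case. The snippet below is a sanity check and not part of the paper: it uses the classical fact (assumed here, not taken from the text) that the double exponential density $e^{-|t|}/2$, i.e.\ $\nu_1$, is a Gaussian mixture with mixing density $dm(\sigma)=\sigma e^{-\sigma^2/2}\,d\sigma$; the grid parameters are arbitrary choices.

```python
# Sanity check (not from the paper): the density e^{-|t|}/2 is the Gaussian
# mixture (eq_546) with mixing density dm(sigma) = sigma * exp(-sigma^2/2).
# This mixing law is a classical fact that we assume here.
import numpy as np

dr = 1e-4
r = np.arange(dr, 50.0, dr)   # sigma-grid; start off 0 to avoid division by zero

def mixture_density(t):
    # phi(t) = int_0^inf exp(-t^2/(2 r^2)) / (r sqrt(2 pi)) * r exp(-r^2/2) dr
    integrand = np.exp(-t**2 / (2.0 * r**2) - r**2 / 2.0) / np.sqrt(2.0 * np.pi)
    return integrand.sum() * dr  # plain Riemann sum; accurate enough here

for t in [0.0, 0.7, 1.5, 3.0]:
    assert abs(mixture_density(t) - np.exp(-abs(t)) / 2.0) < 1e-3
```

The agreement at several values of $t$ is a direct numerical instance of the identity $\int_0^{\infty} e^{-p^2/x^2-q^2x^2}\,dx=\tfrac{\sqrt{\pi}}{2q}e^{-2pq}$ underlying this representation.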
\begin{lemma}\label{lem:poincCM} Let $\mu_1, \ldots,\mu_n$ be Gaussian mixtures as in (\ref{eq_546}), and let $\rho:{\mathbb R}^n\to {\mathbb R}^+$ be an even log-concave function such that \[ d\mu^{n,\rho}(x):= \rho(x) \prod_{i=1}^n d\mu_i(x_i),\quad x\in {\mathbb R}^n\] is a probability measure. Then for every odd and locally Lipschitz function $f:{\mathbb R}^n\to {\mathbb R}$, it holds that \[ \mathrm{Var}_{\mu^{n,\rho}}(f)\le \int \sum_{i=1}^n \alpha_i(x_i) (\partial_i f(x))^2 d\mu^{n,\rho}(x),\] where \[ \alpha_i(t):= \frac{1}{\varphi_i(t)} \int_{{\mathbb R}_+^*} \sigma\frac{e^{-\frac{t^2}{2\sigma^2}}}{ \sqrt{2\pi}} \, dm_i(\sigma) =\frac{1}{\varphi_i(t)}\int_{|t|}^{+\infty} u \varphi_i(u)\, du,\quad t\in {\mathbb R}.\] \end{lemma} \begin{proof} Since $f$ is odd and $\mu^{n,\rho}$ has an even density, \begin{align*} \mathrm{Var}_{\mu^{n,\rho}}(f)& =\int f^2 \, d\mu^{n,\rho}= \int f^2(x)\rho(x) \prod_{j=1}^n \left( \int_{{\mathbb R}_+^*} \frac{e^{-\frac{x_j^2}{2\sigma_j^2}}}{\sigma_j \sqrt{2\pi}} \, dm_j(\sigma_j) \right) \, dx \\ &= \int_{({\mathbb R}_+^*)^n} \left( \int_{\mathbb R^n} f^2(x) \rho(x) e^{-\frac12 \sum_j \frac{x_j^2}{\sigma_j^2}} \frac{dx}{(2\pi)^{n/2}\prod_j \sigma_j}\right) \prod_{j=1}^n dm_j(\sigma_j). \end{align*} For each $(\sigma_i)_i$ we estimate the inner integral from above thanks to the Brascamp-Lieb inequality, applied to the probability measure \[ dM_\sigma(x)= \frac{1}{Z_\sigma} \rho(x) e^{-\frac12 \sum_j \frac{x_j^2}{\sigma_j^2}} \frac{dx}{(2\pi)^{n/2}\prod_j \sigma_j}.\] Since $M_\sigma$ is log-concave with respect to the Gaussian measure with density proportional to $x \mapsto \exp(-\frac 12 \langle \mathrm{Diag}(\sigma)^{-2} x, x \rangle)$, the Brascamp-Lieb inequality in the form of Corollary \ref{cor:BL} gives \[ \mathrm{Var}_{M_\sigma}(f)\le \int \left\langle \mathrm{Diag}(\sigma)^2 \nabla f, \nabla f\right\rangle \, dM_\sigma(x).\] Since $f$ is odd and $M_\sigma$ is an even measure, we obtain that $\int f^2 dM_\sigma \le \int (\sum \sigma_i^2 (\partial_i f)^2) \,
dM_\sigma$. Observe that in this formulation, the normalizing constant $Z_\sigma$ appears on both sides and therefore cancels. This leads to \begin{align*} \mathrm{Var}_{\mu^{n,\rho}}(f)& \le \int_{({\mathbb R}_+^*)^n} \left( \int_{\mathbb R^n} \Big( \sum_i \sigma_i^2 \big(\partial_i f(x)\big)^2\Big) \rho(x) e^{-\frac12 \sum_j \frac{x_j^2}{\sigma_j^2}} \frac{dx}{(2\pi)^{n/2}\prod_j \sigma_j}\right) \prod_{j=1}^n dm_j(\sigma_j)\\ &=\sum_{i=1}^n \int_{\mathbb R^n} \big(\partial_i f(x)\big)^2 \left(\int_{({\mathbb R}_+^*)^n} \sigma_i^2 \prod_{j=1}^n \left(e^{-\frac{x_j^2}{2\sigma_j^2}} \,\frac{ dm_j(\sigma_j)}{\sigma_j\sqrt{2\pi}} \right) \right) \rho(x)\, dx \\ &=\sum_{i=1}^n \int_{\mathbb R^n} \big(\partial_i f(x)\big)^2 \left(\int_{{\mathbb R}_+^*} \sigma_i e^{-\frac{x_i^2}{2\sigma_i^2}} \,\frac{ dm_i(\sigma_i)}{\sqrt{2\pi}} \right) \prod_{j\neq i}\varphi_j(x_j)\, \rho(x)\, dx \\ &=\sum_{i=1}^n \int_{\mathbb R^n} \big(\partial_i f(x)\big)^2 \alpha_i(x_i) \, d\mu^{n,\rho}(x). \end{align*} It remains to check the validity of the second expression for $\alpha_i$. This is obvious from the definition of $\varphi_i$ after interchanging integrals as follows: \[ \int_{|t|}^{+\infty} u\varphi_i(u) du = \int_0^{+\infty} \left(\int_{|t|}^{+\infty} u e^{-\frac{u^2}{2\sigma^2}} du\right) \frac{dm_i(\sigma)}{\sigma\sqrt{2\pi}} = \int_0^{+\infty} \sigma^2 e^{-\frac{t^2}{2\sigma^2}} \frac{1}{\sigma\sqrt{2\pi}} \, dm_i(\sigma) .\] \end{proof} \begin{lemma}\label{lem:bound-coef} Let $\varphi:\mathbb R\to \mathbb R^+$ be an even and log-concave function such that $\int \varphi=1$. Then for all $t\in \mathbb R$, \[ \frac{\int_{|t|}^{+\infty} u\varphi(u) du}{\varphi(t)}\le \frac{|t|}{2\varphi(0)}+\frac{1}{4\varphi(0)^2}\cdot\] There is equality when for some $\lambda >0$ and for all $u$, $\varphi(u)=\lambda \exp(-\lambda|u|)/2$. \end{lemma} \begin{proof} It is enough to deal with all $t\ge 0$. For such a fixed $t$, set $\psi(v):=\varphi(t+v)$ for all $v\ge 0$.
Then changing variables by $ u=t+v$ \[\frac{\int_{t}^{+\infty} u\varphi(u) du}{\varphi(t)} = \frac{\int_0^{+\infty} (t+v)\psi(v) dv}{\psi(0)} = t\frac{\int_0^{+\infty}\psi}{\psi(0)}+\frac{\int_0^{+\infty} v\psi(v)dv}{\psi(0)}.\] Since $\psi$ is log-concave, the Berwald-Borell inequality implies that the function \[p>0 \mapsto G(p):=\left(\frac{1}{\psi(0)\Gamma(p)} \int_0^{+\infty} \psi(u) u^{p-1}du \right)^{\frac1p}\] is non-increasing (see \cite{milman-pajor} or e.g. Theorem 2.2.3 in \cite{BOOKgreek}). The inequality $G(1)\ge G(2)$ allows us to deduce that \[\frac{\int_{t}^{+\infty} u\varphi(u) du}{\varphi(t)} \le t\frac{\int_0^{+\infty}\psi}{\psi(0)}+\left( \frac{\int_0^{+\infty}\psi}{\psi(0)}\right)^2.\] With our notation \[ \frac{\psi(0)}{\int_0^{+\infty}\psi}=\frac{\varphi(t)}{\int_t^{+\infty} \varphi}=-\frac{d}{dt}\log \left(\int_{t}^{+\infty} \varphi\right).\] Since $\varphi$ is log-concave, the Pr\'ekopa-Leindler inequality ensures that the tail function $t\mapsto \log \Big(\int_t^{+\infty}\varphi\Big)$ is concave, and thus has a non-increasing derivative. It follows that for $t>0$, \[\frac{\int_0^{+\infty}\psi}{\psi(0)} =\frac{\int_t^{+\infty} \varphi}{\varphi(t)} \le \frac{\int_0^{+\infty} \varphi}{\varphi(0)}=\frac{1}{2\varphi(0)} \cdot \] This leads to the claimed inequality. The case of equality is checked by direct calculations. \end{proof} \begin{rem} Better estimates depending on $\varphi$ are easily established. If the even probability density is given by $\varphi=e^{-V}$ where $V$ is twice differentiable, even and convex, then for $t>0$, \[ \frac{d}{dt}\left( -\frac{te^{-V(t)}}{V'(t)}\right) = te^{-V(t)}\left( 1+\frac{V''(t)}{V'(t)^2}-\frac{1}{tV'(t)}\right) \ge te^{-V(t)}\left( 1-\frac{1}{tV'(t)}\right). \] Integrating, we obtain that for $t > 0$ such that $t V'(t) \geq 2$, \begin{equation} \int_t^{+\infty} u \varphi(u)\, du \le 2 \frac{t}{V'(t)} \varphi(t), \label{eq_star} \end{equation} since in this case also $s V'(s) \geq 2$ for all $s \geq t$.
\end{rem} \begin{theo}\label{th:GM-poincare} For $i=1,\ldots,n$, let $\mu_i(dt) =\varphi_i(t)\, dt$ be a Gaussian mixture on $\mathbb R$ which is log-concave. Let $\rho:\mathbb R^n\to \mathbb R^+$ be an even log-concave function such that $d\mu^{n,\rho}(x)=\rho(x) \prod_{i=1}^n d\mu_i(x_i)$ is a probability measure on $\mathbb R^n$. Then \[ C_P(\mu^{n,\rho}) \le (1+C \log n) \, C_P(\mu^{n,1}) = (1+C \log n) \max_{i} C_P(\mu_i) ,\] where $C$ is a universal constant. \end{theo} \begin{proof} The case $n=1$ is a direct application of Proposition \ref{prop:unimodal-perturbation}. Next we focus on $n\ge2$. We follow the truncation strategy from \cite{klartag-unconditional}. Let $X_i$ be a random variable of law $\mu_i$. Since the latter is symmetric and log-concave, classical results due to Borell and Hensley (see \cite{milman-pajor} or Chapter 2 in \cite{BOOKgreek}) give \[ \| X_i\|_{\psi_1}\le c \| X_i\|_2 \le \frac{c}{\sqrt 2 \varphi_i(0)},\] where the Orlicz norm involves $\psi_1(t)=e^{|t|}-1$ and $c > 0$ is explicit and universal. Choose $\varepsilon:= \sqrt{2}/c$. The latter inequality implies $\mathbb E \exp\big(\varepsilon \varphi_i(0) |X_i|\big)\le 2$. \medskip By the correlation inequality \eqref{eq:convex-logconcave}, and then Jensen's inequality, \begin{align*} &\exp\left( \varepsilon \int \max_i \big(|x_i|\varphi_i(0)\big) \, d\mu^{n,\rho}(x) \right)\le \exp\left( \varepsilon \int \max_i \big(|x_i|\varphi_i(0)\big) \, d\mu^{n,1}(x) \right)\\ \le & \int \exp\left( \varepsilon \max_i \big(|x_i|\varphi_i(0)\big)\right) \, d\mu^{n,1}(x) \le \int \sum_{i=1}^n \exp\big(\varepsilon |x_i|\varphi_i(0)\big) \, d\mu^{n,1}(x) \\ =& \sum_{i=1}^n\int_{\mathbb R} \exp\big(\varepsilon |x_i|\varphi_i(0)\big) \, d\mu_i(x_i)\le 2n.
\end{align*} Therefore \[ \int \max_i \big(|x_i|\varphi_i(0)\big) \, d\mu^{n,\rho}(x)\le c \log(2n).\] Consequently, the set \[ A:=\left\{x\in\mathbb R^n; \; \max_i \frac{|x_i|\varphi_i(0)}{2}\le c \log(2n) \right\}\] satisfies $\mu^{n,\rho}(A)\ge \frac12$, thanks to Markov's inequality. This implies that the probability measure \[\mu^{n,\rho}_{|A}:=\frac{\mu^{n,\rho}(\cdot\cap A)}{\mu^{n,\rho}(A)}= \frac{\mathbf 1_A}{\mu^{n,\rho}(A)}\, \rho\cdot (\mu_1\otimes\cdots \otimes \mu_n)\] obtained by conditioning $\mu^{n,\rho}$ to the set $A$ is close to $\mu^{n,\rho}$ in total variation distance: $$ d_{\mathrm{TV}}(\mu^{n,\rho},\mu^{n,\rho}_{|A} )\le \frac12. $$ Since $A$ is convex and symmetric, we can write $\mu^{n,\rho}_{|A}=\mu^{n,\tilde{\rho}}$ where $\tilde \rho:= \frac{\mathbf 1_A}{\mu^{n,\rho}(A)}\, \rho$ is still log-concave and even. Since both measures are log-concave, Theorem \ref{th:total-variation} ensures that for some universal constant $\kappa$ \begin{equation}\label{eq:poinc-mu-mutilde} C_P(\mu^{n,\rho}) \le \kappa\, C_P(\mu^{n,\tilde{\rho}}).\end{equation} We can apply Lemma \ref{lem:poincCM} to $\mu^{n,\tilde{\rho}}$ with the advantage that this measure is supported on $A$. We obtain, using also Lemma \ref{lem:bound-coef}, that for every odd and locally Lipschitz function $f$, \begin{align*} \mathrm{Var}_{\mu^{n,\tilde{\rho}}}(f) & \le \int \sum_i \left( \frac{|x_i|}{2\varphi_i(0)}+\frac{1}{4 \varphi_i(0)^2}\right) (\partial_i f(x))^2 d\mu^{n,\tilde{\rho}}(x) \\ &\le \max_i \frac{1}{\varphi_i(0)^2} \int \sum_i \left(|x_i|\frac{\varphi_i(0)}{2}+\frac{1}{4}\right) (\partial_i f(x))^2 d\mu^{n,\tilde{\rho}}(x) \\ &\le \max_i \frac{1}{\varphi_i(0)^2} \int_A \left(\max_i \Big(|x_i| \frac{\varphi_i(0)}{2}\Big)+\frac{1}{4}\right) |\nabla f(x)|^2 d\mu^{n,\tilde{\rho}}(x) \\ & \le \max_i \frac{1}{\varphi_i(0)^2} \left( \frac14+c\log(2n)\right) \int |\nabla f|^2 d\mu^{n,\tilde{\rho}}.
\end{align*} Since $\mu^{n,\tilde{\rho}}$ is log-concave and even, Corollary~\ref{cor:even-sym} ensures that checking the Poincar\'e inequality for odd functions, as we just did, is enough to conclude that \[ C_P(\mu^{n,\tilde{\rho}})\le \max_i \frac{1}{\varphi_i(0)^2} \left( \frac14+c\log(2n)\right). \] Combining this estimate with \eqref{eq:poinc-mu-mutilde} gives a universal constant $C$ such that \[ C_P(\mu^{n,\rho}) \le C \log(n) \max_i \frac{1}{\varphi_i(0)^2} \cdot\] Finally, for the even log-concave probability measures $\mu_i(dt)=\varphi_i(t)\,dt$ on the real line it is known that $\frac{1}{12} \varphi_i(0)^{-2}\le C_P(\mu_i)\le \varphi_i(0)^{-2}$, see \cite{bobkov}. In particular $\max_i \varphi_i(0)^{-2}\le 12 \max_i C_P(\mu_i)$, which yields the claimed bound. \end{proof} \subsubsection{Examples} As explained in \cite{ENT-mixtures}, for $p\in (0,2]$ the probability measures on $\mathbb R$ defined by $$ d\nu_p(t)=\exp(-|t|^p)\, dt/Z_p $$ are Gaussian mixtures. When $p\in [1,2]$ they are in addition log-concave, and Theorem \ref{th:GM-poincare} ensures that for every even log-concave perturbation $\rho$, \begin{equation} \label{eq:bound-nu-p} C_P(\nu_p^{n,\rho})\le (1+C \log n) \, C_P(\nu_p) \end{equation} for some universal constant $C$. We point out that $\inf_{p\in [1,2]}C_P(\nu_p)>0$ and $\sup_{p\in [1,2]}C_P(\nu_p)<+\infty$, which is easily verified e.g. with the Muckenhoupt criterion \cite{M_P}. This completes the proof of Theorem \ref{th:Exp-poincare} in the case $p=1$. \medskip When $p=1$, Theorem \ref{th:Exp-poincare} almost answers the motivating question that we mentioned in the introduction: we unfortunately have a weak dependence in the dimension, but we allow more general perturbations. \medskip When $1<p<2$, using the remark after Lemma~\ref{lem:bound-coef}, we obtain from (\ref{eq_star}) that for the measure $\nu_p$, the coefficients $\alpha_i(t)$ of Lemma~\ref{lem:poincCM} verify \[\alpha_i(t)\le c \big(1+|t|^{2-p}\big), \quad t\in \mathbb R,\] where $c$ is a universal constant.
This improves on Lemma~\ref{lem:bound-coef}, and can be used in the argument of the proof of Theorem~\ref{th:GM-poincare}. Since there exists a universal $\varepsilon>0$ such that for all $p\in (1,2)$, $$ \int \exp\big ( \varepsilon(|t|^{2-p})^{p/(2-p)}\big) d\nu_p(t)\le 2, $$ we arrive by the same method at \[ C_P(\nu_p^{n,\rho})\le \big(1+C (\log n)^{\frac{2-p}{p}}\big) \, C_P(\nu_p).\] As $\sup_{p\in [1,2]}C_P(\nu_p)<+\infty$, we have proven the following: \begin{theo}\label{th:GM-poincare2} Let $1 \leq p \leq 2$. Let $\rho:\mathbb R^n\to \mathbb R^+$ be an even log-concave function such that $d\nu_p^{n,\rho}(x)=\rho(x) \prod_{i=1}^n d\nu_p(x_i)$ is a probability measure on $\mathbb R^n$. Then \begin{equation} C_P(\nu_p^{n,\rho}) \le (1+C \log n)^{\frac{2-p}{p}}, \label{eq_1129} \end{equation} where $C$ is a universal constant. \end{theo} Theorem \ref{th:GM-poincare2} implies Theorem \ref{th:Exp-poincare}. Note that the bound (\ref{eq_1129}) improves on \eqref{eq:bound-nu-p}, and is independent of the dimension for $p=2$ (as expected for log-concave perturbations of the standard Gaussian measure). \medskip All the above results deal with even log-concave perturbations of the measures $\nu_p$ and their products $\nu_p^n$, $p\in [1,2]$. The spectral gap of such perturbed measures is controlled uniformly in the perturbation (for any given dimension). When $p\in[1,2)$ this is not true for arbitrary log-concave perturbations (i.e. not necessarily even ones). To see this, it is enough to consider the probability measures $\nu_p$ on $\mathbb R$, and their exponential tilts \[ d\nu_{p,a}(t)=\frac{1}{Z_{p,a}} e^{-|t|^p+at} dt,\] where $a$ is an arbitrary real number if $p>1$, and $a\in (-1,1)$ when $p=1$. Gentil and Roberto \cite{gentil-roberto} have proved that for $p\in [1,2)$, \[ \sup_a C_P(\nu_{p,a})=+\infty.\] For $p=2$, the Brascamp-Lieb inequality ensures that the Poincar\'e constant of any log-concave perturbation of the standard Gaussian measure is dominated by 1.
\subsection{Light tails} Since Gaussian mixtures have heavier tails than the Gaussian measure, we now investigate some measures with lighter tails. A special and simple case is when the measures $d\mu_i(t)=e^{-V_i(t)} dt$ have strictly uniformly convex potentials. More specifically, if there exists $\varepsilon>0$ such that for all $i$ and all $t\in \mathbb R$, $V_i''(t)\ge \varepsilon$, then, without assuming any symmetry, if $\rho$ is log-concave the probability measure $\mu^{n,\rho}$ also has a uniformly strictly convex potential and therefore \[ C_P\big(\mu^{n,\rho}\big)\le \frac{1}{\varepsilon}\cdot\] Nevertheless, strict convexity in the large is not sufficient to yield such uniform results. The behaviour of $\mu_i$ around 0 is important, as the next examples show: let $p>2$ and for all $i$, $d\mu_i(t)=\exp(-|t|^p) dt/Z_p$. For $x\in \mathbb R^n$, let us denote by $\overline{x}=(\sum_i x_i)/n$ its empirical mean and $Q(x)=\sum_i(x_i-\overline{x})^2/n$ its empirical variance. As a nonnegative quadratic form, $Q$ is convex. Also note that \[ Q(x)=\frac{1}{n} \Big|P_{u_n^\bot} x\Big|^2,\] where $u_n=(1/\sqrt n, \ldots, 1/\sqrt n)\in \mathbb R^n$ is a unit vector on the main diagonal line and $P_{u_n^\bot} $ is the orthogonal projection onto the orthogonal complement of this line, \[u_n^\bot=\big\{x\in \mathbb R^n; \; \sum_i x_i=0\big\}.\] Let us define $\rho_k:\mathbb R^n\to \mathbb R^+$ as the indicator function of the convex origin-symmetric set $\{x\in \mathbb R^n; Q(x)\le 1/k\}$, properly normalized so that $\mu^{n,\rho_k}$ is a probability measure (another possible choice would be $\rho_k=\exp(-kQ)/Z_k$). Then when $k$ tends to $+\infty$ the measure $\mu^{n,\rho_k}$ tends to the measure obtained by conditioning $\mu_1\otimes\cdots\otimes\mu_n=\mu^{n,1}$ to the diagonal line $\mathbb Ru_n$.
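Since the normalization in the formula for $Q$ is easy to get wrong, here is a quick numerical sanity check (a numpy sketch of ours, not part of the argument) of the underlying linear-algebra fact $\sum_i(x_i-\overline{x})^2=\big|P_{u_n^\bot}x\big|^2$, which gives $Q(x)=|P_{u_n^\bot}x|^2/n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7
x = rng.standard_normal(n)

xbar = x.mean()                      # empirical mean
Q = np.sum((x - xbar) ** 2) / n      # empirical variance

u = np.ones(n) / np.sqrt(n)          # unit vector on the main diagonal
Px = x - np.dot(x, u) * u            # orthogonal projection onto u^perp

assert np.isclose(np.sum((x - xbar) ** 2), np.dot(Px, Px))
assert np.isclose(Q, np.dot(Px, Px) / n)
print("empirical variance identity verified")
```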
With our choice of $d\mu_i(t)=\exp(-|t|^p) dt/Z_p$, this limiting measure is, after isometric identification of $\mathbb Ru_n$ and $\mathbb R$, \[ \exp\Big(-\sum_{i=1}^n\Big|\frac{t}{\sqrt n}\Big|^p\Big) \frac{dt}{Z_{n,p}} =\exp\Big(-\Big|\frac{t}{n^{\frac12-\frac1p}}\Big|^p\Big) \frac{dt}{Z_{n,p}} \cdot \] This measure is the law of $n^{\frac12-\frac1p}Y$ where $Y$ is distributed according to $\mu_1$. Therefore its variance is $n^{1-\frac{2}{p}} \mathrm{Var}(Y)$ and its Poincar\'e constant is $n^{1-\frac{2}{p}} C_P(\mathbb P_Y)$. For $p>2$ this tends to infinity with the dimension. This growth of the variance in some directions is related to the counterexample in the Remark after Theorem \ref{theo:before-rem-cube}, which in a sense corresponds to $p=+\infty$. The behaviour is very different if we start from Gaussian mixtures, as explained in Theorem \ref{th:GM-odd-functions}. \begin{rem} When the functions $V_i$ are strictly uniformly convex in the large, one can obtain Poincar\'e inequalities for small perturbations thanks to a method developed by Helffer, see e.g. \cite{helffer}. His approach can be thought of as a variant of the Brascamp-Lieb inequalities where strict convexity is replaced by uniform spectral gap for restrictions to coordinate lines. More precisely, if $d\mu(x)=e^{-V(x)}dx$, consider for $x\in \mathbb R^n$ and $i\in \{1,\ldots,n\}$, the one-dimensional probability measure \[ d\mu_{|x+\mathbb R e_i} (t) := \frac{1}{Z_{(x_j)_{j\neq i}}}\exp\big(-V(x_1,\ldots, x_{i-1},t,x_{i+1},\ldots, x_n)\big) \, dt,\] where $(e_i)_{i=1}^n$ is the canonical basis of $\mathbb R^n$. Then for each $x\in \mathbb R^n$, define the matrix $K(x)$ by \[ K(x)_{i,j}:=\begin{cases} 1/C_P(\mu_{|x+\mathbb R e_i}) & \mathrm{when}\, i=j, \\ \partial^2_{i,j}V(x) & \mathrm{when}\, i\neq j. \end{cases}\] If for all $x$, $K(x)$ is positive definite then for all smooth functions $f$, it holds $\mathrm{Var}_\mu(f)\le \int \langle K^{-1}\nabla f,\nabla f\rangle \, d\mu$.
In particular, if for all $x$, $K(x)\ge \varepsilon \mathrm{Id}$ then $C_P(\mu)\le \frac{1}{\varepsilon}$. In our setting of the measures $\mu^{n,\rho}$, the restrictions to coordinate lines are simple (for notational simplicity we present only what happens for $x+\mathbb R e_1$): \[ d(\mu^{n,\rho})_{|x+\mathbb R e_1}(t)=e^{-V_1(t)}\rho(t,x_2,\ldots,x_n) \frac{dt}{Z_{(x_j)_{j\ge 2}}} \cdot\] If $V_1=U_1+B_1$, where $U_1$ is strictly uniformly convex ($U_1''(t)\ge 1/c_1>0$) and $B_1$ is bounded, then $(\mu^{n,\rho})_{|x+\mathbb R e_1}$ can be viewed as a bounded perturbation (by $B_1$) of the strictly uniformly convex measure $e^{-U_1}\rho(\cdot,x_2,\ldots,x_n)/\tilde{Z}$ (this is where the log-concavity of $\rho$ is used; note that no symmetry assumption is needed). It follows from the Brascamp-Lieb inequality and Proposition \ref{prop:bounded-perturbation} that for all $x$, \[ C_P(\mu_{|x+\mathbb R e_1})\le c_1e^{\mathrm{Osc}(B_1)}.\] This type of uniform bound allows one to obtain Poincar\'e inequalities for $\mu^{n,\rho}$ provided the non-diagonal terms of the Hessian of $-\log \rho$ are small enough, thanks to Helffer's result. This is especially simple to achieve when $\rho=e^{-Q}$ where $Q$ is a small quadratic form. We refer to Theorem 4.1 in \cite{gentil-roberto} for weaker hypotheses on $B_1$ allowing similar results. \end{rem} \section{Application to convex sets} Given a non-empty compact convex set $K\subset \mathbb R^d$, we denote by $\lambda_K$ the uniform probability measure on $K$ (which we may consider in the natural dimension of the affine span of $K$). Also let $B_p^N:=\{x\in \mathbb R^N; \, \|x\|_p\le 1\}$ be the unit ball of $\ell_p^N$. Recall from Section 2.2 that $C_P(\mu,``linear'')=\|\mathrm{Cov}(\mu)\|_{op}$ denotes the smallest constant so that the Poincar\'e inequality is satisfied for all linear functions with respect to the measure $\mu$. \begin{theo}\label{theo:ball-section} Let $n\ge d \ge 2$ and $p\in [1,2]$.
Let $E$ be any linear subspace of $\mathbb R^n$ of dimension $d$. Then \[ C_P\big(\lambda_{B_p^n\cap E}\big) \le c \left( \frac{n}{d}\right)^{\frac{2}{p}-1}\log(n)^{2/p} C_P\big(\lambda_{B_p^n\cap E},``linear''\big),\] where $c$ is a universal constant. In particular, if $d\ge n/2$ then for some universal constant $c'$, $C_P\big(\lambda_{B_p^n\cap E}\big) \le c' \log(d)^2 C_P\big(\lambda_{B_p^n\cap E},``linear''\big)$. \end{theo} This result will be deduced from the results of the previous sections, thanks to a result of Kolesnikov and Milman \cite{kolesnikov-milman}, which allows one to transfer Poincar\'e inequalities from log-concave measures to some of their level sets. The next statement is a combination of Theorem 2.5 and Proposition 2.3 in \cite{kolesnikov-milman}. \begin{theo}[\cite{kolesnikov-milman}] \label{theo:level-set} Let $d\mu(x)=\exp(-V(x))dx$ be a log-concave probability measure on $\mathbb R^d$, with $\min V=0$. Then there exists $t>0$ such that the set $K:=\{x\in \mathbb R^d; V(x)\le t\}$ verifies \begin{enumerate} \item $C_P(\lambda_K) \le C \cdot C_P(\mu) \cdot \log\big(e+C_P(\mu)\sqrt{d}\big)$, \item $C_P(\lambda_K,``linear'')\ge c>0$, \end{enumerate} where $C,c$ are universal constants. \end{theo} We shall also need a stability result for the Poincar\'e constant under convergence of measures. For $\varphi: {\mathbb R}^n \rightarrow {\mathbb R}$ write $\| \varphi \|_{Lip} = \sup_{x \neq y} |\varphi(x) - \varphi(y)| / |x-y|$ for its Lipschitz seminorm. According to E. Milman \cite{emanuel}, for any log-concave probability measure $\mu$ on ${\mathbb R}^n$, $$ c_1 \sqrt{C_P(\mu)} \leq \sup_{\| \varphi \|_{Lip} \leq 1} \int |\varphi - E_{\mu, \varphi}| d \mu \leq C_2 \sqrt{C_P(\mu)} $$ where $c_1, C_2 > 0$ are universal constants and $E_{\mu, \varphi} = \int \varphi d \mu$. \begin{proof}[Proof of Theorem~\ref{theo:ball-section}] For $i=1,\ldots,n$ we set $d\mu_i(t)=\exp(-|\alpha_p t|^p)dt$, a rescaled copy of $\nu_p$, where $\alpha_p=2\Gamma(1+1/p) \in [\sqrt{\pi}, 2]$ is chosen so that $\mu_i$ is a probability measure.
These measures are even and log-concave, and their density at 0 is equal to $1$. By Theorem \ref{th:GM-poincare2}, for any even log-concave (and normalized) perturbation $\rho$, \begin{equation} C_P(\mu^{n,\rho})\le C (\log n)^{\frac{2-p}{p}} \label{eq_1009} \end{equation} where $C$ is a universal constant. Indeed, since the scaling coefficient $\alpha_p$ has the order of magnitude of a universal constant, it may be absorbed in the universal constant $C$. We apply (\ref{eq_1009}) when $\rho = \rho_{\varepsilon}$ is the normalized indicator of an $\varepsilon$-neighborhood of the subspace $E$. The family of measures $\mu^{n, \rho_{\varepsilon}}$ tends weakly, as $\varepsilon \rightarrow 0$, to the measure $\tilde\mu$ on the Euclidean space $E$ (that we identify with $\mathbb R^{d}$) with density $\exp(-\|\alpha_p x\|^p_p)/Z_E$, where \begin{equation}\label{def:ZE} Z_E = \int_E \prod_{i=1}^n \exp(-|\alpha_p x_i|^p)d^Ex \end{equation} is the integral over $E$ of the density of $\mu^{n,1}$. We claim that \begin{equation} C_P(\tilde{\mu})\le 2 C (\log n)^{\frac{2-p}{p}}. \label{eq_1011} \end{equation} Indeed, otherwise there exists a smooth $\varphi: E \rightarrow {\mathbb R}$ with $Var_{\tilde{\mu}}(\varphi) > 2 C (\log n)^{\frac{2-p}{p}} \int |\nabla \varphi|^2 d \tilde{\mu}$. By multiplying $\varphi$ by a slowly-varying cutoff function, we may assume that $\varphi$ is compactly-supported in $E$ (the argument is standard, see Section \ref{sec_app1} below for details). We set $f(x) = \varphi( P_E x)$, where $P_E$ is the orthogonal projection onto $E$ in ${\mathbb R}^n$. Then as $\varepsilon \rightarrow 0^+$, $$ Var_{\mu^{n, \rho_{\varepsilon}}}(f) \longrightarrow Var_{\tilde{\mu}} (\varphi) \quad \text{and} \quad \int |\nabla f|^2 d \mu^{n, \rho_{\varepsilon}} \longrightarrow \int |\nabla \varphi|^2 d \tilde{\mu}, $$ in contradiction to (\ref{eq_1009}). This completes the proof of (\ref{eq_1011}). In order to apply Theorem \ref{theo:level-set}, we need to rescale $\tilde\mu$.
Let $Y$ be a random vector on $E$ with law $\tilde\mu$; then for $\lambda >0$ the random vector $\lambda Y$ has a density on $E$ given by \[ \exp\left( -\left\| \frac{\alpha_px}{\lambda}\right\|_p^p-\log(Z_E)-d\log(\lambda)\right).\] This suggests setting $\lambda_E:= Z_E^{-1/d}$. For this choice, the probability measure $\mu(dx)=\exp( -\|\alpha_p x/\lambda_E\|_p^p) d^Ex$ on $E$ verifies \[ C_P(\mu)=\lambda_E^2 C_P(\tilde{\mu}) \le \lambda_E^2 C (\log n)^{\frac{2-p}{p}} = C Z_E^{-\frac2d} (\log n)^{\frac{2-p}{p}}. \] In order to bound the latter quantity from above, we need a lower bound for $Z_E$, as defined in \eqref{def:ZE}. This can be done by general results on sections of isotropic measures. More precise bounds were obtained by Meyer and Pajor \cite{meyer-pajor} in their investigation of extremal volumes of sections of $B_p^n$ (they observe that $Z_E=\mathrm{Vol}_d(B_p^n\cap E)/ \mathrm{Vol}_d(B_p^d) $). For our purpose, a simple bound based on the inradius of $B_p^n$ is the most effective: since $p\le 2$, for any $x\in \mathbb R^n$, $\|x\|_p \le n^{\frac1p-\frac12} \|x\|_2$, thus \begin{align*} Z_E & =\int_E \exp\big(-\|\alpha_p x\|_p^p\big) \, d^Ex \ge \int_E \exp\big(-\| n^{\frac1p-\frac12} \alpha_p x\|_2^p\big)\, d^Ex. \end{align*} The latter integral takes the same value for all $d$-dimensional vector spaces $E$. Therefore \begin{align*} Z_E & \ge \int_{\mathbb R^d} \exp\big(-\| n^{\frac1p-\frac12} \alpha_p x\|_2^p\big)\, dx \\ &= \mathrm{Vol}_d(B_2^d) \int_0^{+\infty}d\, r^{d-1} \exp\big( -(n^{\frac1p-\frac12} \alpha_p r)^p \big) dr \\ &= \mathrm{Vol}_d(B_2^d) \frac{\int_0^{+\infty}d\, s^{d-1} \exp\big( -s^p \big) ds}{\big(n^{\frac1p-\frac12} \alpha_p\big)^d} = \left(\frac{\sqrt\pi}{n^{\frac1p-\frac12} \alpha_p} \right)^d \frac{\Gamma\big(1+\frac{d}{p}\big)}{\Gamma\big(1+\frac{d}{2}\big)}.
\end{align*} Since $\Gamma(1+x)^{\frac1x}\sim x/e$ for $x$ large, we get that for some numerical constants $c,c'$, \[ Z_E^{-\frac2d}\le c \frac{\alpha_p^2 n^{\frac2p-1}}{\pi} \frac{\frac{d}{2e}}{(\frac{d}{pe})^{\frac{2}{p}}} \le c' \left(\frac{n}{d} \right)^{\frac2p-1}. \] This leads to \[ C_P(\mu) \le C' \left(\frac{n}{d} \right)^{\frac2p-1} (\log n)^{\frac{2-p}{p}}. \] Applying Theorem~\ref{theo:level-set} to $\mu$ provides $t>0$ so that the set $K_E:= \{x\in E; \|\alpha_p x/\lambda_E\|^p_p\le t\}=\alpha_p^{-1}\lambda_E t^{\frac1p} \big(B_p^n\cap E\big)$ verifies \begin{align*} C_P(\lambda_{K_E}) &\le C C'\left(\frac{n}{d} \right)^{\frac2p-1} (\log n)^{\frac2p-1} \log\left( e+C'\left(\frac{n}{d} \right)^{\frac2p-1} (\log n)^{\frac2p-1} \sqrt{d}\right) C_P(\lambda_{K_E},``linear'') \\ & \le C'' \left(\frac{n}{d} \right)^{\frac2p-1}\log(n)^{\frac2p} C_P(\lambda_{K_E},``linear''). \end{align*} Since the constants $C_P(\cdot)$ and $C_P(\cdot, ``linear'')$ are both 2-homogeneous with respect to dilations of the underlying measure, we get the claim \[ C_P(\lambda_{B_p^n\cap E}) \le C'' \left(\frac{n}{d} \right)^{\frac2p-1}\log(n)^{\frac2p} C_P(\lambda_{B_p^n\cap E},``linear'').\] \end{proof} Corollary \ref{cor1} of the introduction clearly follows from Theorem~\ref{theo:ball-section}. \section{Appendix: approximation results} \subsection{Density of test functions} \label{sec_app1} Let $\mu$ be a log-concave measure on ${\mathbb R}^n$. We assume that the support of $\mu$ is not contained in an affine subspace of lower dimension, as otherwise, we may just work in the lower dimensional subspace. Hence $\mu$ is of the form $\rho(x) dx$ where $\rho$ is a log-concave function. Let $\Omega$ be the interior of the support of $\mu$. It is convex and non-empty (assuming that $\mu$ is not the zero measure). The function $\rho$ is positive on $\Omega$ and vanishes outside of $\Omega$. Write $C_c^{\infty}(\Omega)$ for the space of smooth functions compactly supported in $\Omega$.
By definition, $H^1(\Omega, \mu)=H^1(\mu)$ is the set of (equivalence classes of) functions $f$ in $L^2(\mu)$, for which there exist functions $g_i\in L^2(\mu)$ such that for all $1\le i\le n$ and for all $\varphi \in C_c^{\infty}(\Omega)$, \[ \int_\Omega \partial_i\varphi(x) f(x) \, dx=-\int_\Omega \varphi(x) g_i(x) \, dx.\] Classically, $g_i$ is called a weak partial derivative of $f$ (viewed as a function on $\Omega$). The weak gradient $(g_i)_i$ is simply denoted by $\nabla f$ and \begin{equation} \| f \|_{H^1(\mu)} = \sqrt{ \int_{{\mathbb R}^n} f^2 d \mu+\int_{{\mathbb R}^n} |\nabla f|^2 d \mu}. \label{eq_H1} \end{equation} The following basic result will be useful: \begin{proposition}\label{prop:density} Let $\mu$ be a log-concave measure on ${\mathbb R}^n$. Then the set $C_c^{\infty}({\mathbb R}^n)$ is dense in $H^1(\mu)$. \end{proposition} Several textbooks are dedicated to the study of density of smooth functions in weighted Sobolev spaces (see e.g.\ Kufner \cite{KUFNER}), and they consider more difficult situations. Nevertheless, we found it hard to spot a reasonably self-contained justification of the above proposition. This is why we include an ad-hoc proof, which relies only on very basic facts about density of smooth functions in $H^1_{\mathrm{loc}}(\Omega, dx)$ (see e.g. \cite[Chapter 5]{EVANSbook}). Local approximation in any compact subset of $\Omega$ is easy, since on such a subset $\rho$ is upper bounded, and bounded away from $0$, hence the result for the Lebesgue measure applies. To derive approximation up to the boundary, one usually approximates $f$ by functions which are defined somewhat outside of $\Omega$, on which local approximation applies up to the boundary. To build such functions, when the boundary of $\Omega$ is regular enough, one usually proceeds by local translations of $f$. In our case, since $\Omega$ is convex, a single global dilation does the job.
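As a toy illustration of this global dilation step, one may take $\Omega=(-1,1)$, $\mu$ the uniform probability measure on $\Omega$, and $f(x)=1-|x|\in H^1(\mu)$; then $f_\delta(x):=f((1-\delta)x)$ differs from $f$ by $\delta|x|$, so $\|f-f_\delta\|_{L^2(\mu)}=\delta/\sqrt3\to 0$. The following numerical sketch (ours, not part of the proof) confirms this rate.

```python
import numpy as np

# toy setting: Omega = (-1,1), mu = uniform probability measure, f(x) = 1 - |x|
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
f = 1.0 - np.abs(x)

def l2_err(delta):
    # dilated function f_delta(x) = f((1 - delta) x); here f_delta - f = delta |x|
    fd = 1.0 - (1.0 - delta) * np.abs(x)
    return float(np.sqrt(np.sum((fd - f) ** 2) * dx / 2.0))  # L^2(mu) norm

errs = [l2_err(d) for d in (0.2, 0.1, 0.05)]
print(errs)  # ~ delta / sqrt(3): tends to 0 as delta -> 0
```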
\begin{proof}[Proof of Proposition \ref{prop:density}] Let us set some more notation. Our problem is invariant by translation. Hence we may assume that the origin $0\in \Omega$. The latter being open, there exists $r>0$ such that $B(0,r)\subset \Omega$. Let $f$ be an arbitrary function in $H^1(\mu)$. Our goal is to build compactly-supported smooth functions which are arbitrarily close to $f$ in the $H^1(\mu)$ norm. We first reduce matters to functions $f$ with compact support in ${\mathbb R}^n$. Indeed, given a general $f\in H^1(\mu)$, consider a bump function $\theta:{\mathbb R}^n\to [0,1]$ which is infinitely differentiable and such that $\theta(x)=1$ if $x\in B(0,1)$, while $\theta(x)=0$ if $x\not\in B(0,2)$. For any integer $m\ge 1$, define \[ f_{|m}(x):=\theta(x/m)f(x),\qquad x\in {\mathbb R}^n.\] It is supported in $B(0,2m)$ and belongs to $L^2(\mu)$ since $|f_{|m}|\le |f|$. By dominated convergence \[ \|f-f_{|m} \|_{L^2(\mu)}^2=\int f(x)^2 (1-\theta(x/m))^2 d\mu(x)\] tends to 0 when $m\to +\infty$. Since $\partial_i f_{|m}= \theta(\cdot/m) \partial_if +\frac1m \partial_i\theta (\cdot/m)f$, \[ \|\partial_if-\partial_if_{|m} \|_{L^2(\mu)}\le \frac1m \|f \partial_i\theta(\cdot/m) \|_{L^2(\mu)}+ \|\partial_if-(\partial_if)_{|m} \|_{L^2(\mu)} \] also tends to 0 when $m$ grows. Indeed, the functions $\partial_i\theta$ are uniformly bounded, and we may apply the latter convergence of truncated functions to $\partial_i f\in L^2(\mu)$. \begin{lemma}\label{lem:density-continuous} The set $C_c^{\infty}(\Omega)$ is dense in $L^2(\mu)$. \end{lemma} \begin{proof} By the above truncation argument, it is enough to approximate functions with compact support in ${\mathbb R}^n$. Let $h\in L^2(\mu)$ have support in the open ball $B(0,R)$ for some $R$.
By dominated convergence, \[ \lim_{\varepsilon\to 0^+} \int (h \mathbf 1_{(1-\varepsilon) \Omega}-h)^2 d\mu=0.\] For $\varepsilon>0$, the set $\widetilde{\Omega}:=(1-\varepsilon)\Omega\cap B(0,R)$ is relatively compact in $\Omega$, hence there exists $c>0$ such that $c\le \rho(x)\le \frac1c$ for all $x\in \widetilde{\Omega}$. Hence $ h \mathbf 1_{(1-\varepsilon) \Omega}\in L^2(\mu)$ also belongs to the unweighted Lebesgue space $L^2(\widetilde{\Omega}, dx)$, in which it is classical that compactly supported smooth functions are dense. Therefore there is a sequence $g_k\in C_c^{\infty}(\widetilde{\Omega})$ which converges to $h \mathbf 1_{(1-\varepsilon) \Omega}$ for the $L^2(\widetilde{\Omega}, dx)$-topology. Since $\rho\le \frac1c$ on $\widetilde{\Omega}$, and all functions are supported in $\widetilde{\Omega}$, the convergence also holds in the topology of $L^2(\mu)$. \end{proof} For $f\in L^2(\mu)$ and a parameter $\delta\in(0,\frac12)$ we introduce the dilated function $f_\delta$ defined by \[ f_\delta(x):=f\big((1-\delta)x \big), \quad x\in \frac{1}{1-\delta}\Omega \supset \Omega.\] These functions are defined on a slightly larger set than $\Omega$, and provide a fair approximation of $f$ for small $\delta$: \begin{lemma}\label{lem:dilation} Let $f\in L^2(\mu)$, with bounded support. Then for all $\delta\in(0,\frac12)$, $f_\delta\in L^2(\mu)$ and when $\delta$ tends to 0, $f_\delta$ converges to $f$ in the topology of $L^2(\mu)$. If in addition $f\in H^1(\mu)$, then the convergence holds in the topology of $H^1(\mu)$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:dilation}] Assume that $f$ is supported in $B(0,R)$. Let us compute the squared $L^2$ norm of $f_\delta$: \[ \int_\Omega f\big((1-\delta)x\big)^2 \rho(x) dx= (1-\delta)^{-n} \int_{(1-\delta)\Omega} f(y)^2\rho\Big( \frac{y}{1-\delta}\Big) dy.\] The log-concavity of $\rho$ yields $\rho(y)\ge \rho\Big( \frac{y}{1-\delta}\Big)^{1-\delta} \rho(0)^\delta$.
Rearranging gives \[ \rho\Big( \frac{y}{1-\delta}\Big) \le \rho(y) \left( \frac{\rho(y)}{\rho(0)}\right)^{\frac{\delta}{1-\delta }}.\] Since $\rho$ is upper-bounded on the compact support of $f$ (see e.g., \cite[Lemma 2.2.1]{BOOKgreek}), there exists a constant $C_R$ such that for all $y$, $f(y)^2 \rho\Big( \frac{y}{1-\delta}\Big) \le C_R f(y)^2 \rho(y)$. Thus $\| f_\delta\|_{L^2(\mu)}\le 2^n C_R \| f\|_{L^2(\mu)}$. \smallskip For any $\varepsilon >0$, Lemma \ref{lem:density-continuous} provides $g\in C_c^{\infty}(\Omega)$ (supported also inside $B(0,R)$ as the proof of the lemma shows) such that $\|f-g\|_{L^2(\mu)}\le \varepsilon$. Then \[ \| f-f_\delta\|_{L^2(\mu)} \le \| f-g\|_{L^2(\mu)} +\| g-g_\delta\|_{L^2(\mu)} +\| g_\delta-f_\delta\|_{L^2(\mu)}.\] By the above norm estimate $ \| g_\delta-f_\delta\|_{L^2(\mu)}\le 2^n C_R\| g-f\|_{L^2(\mu)}$. Moreover since $g$ is uniformly continuous, and $g_\delta$ as well as $g$ vanish outside of $B(0,2R)$, \[ \| g_\delta-g\|_{L^2(\mu)}^2=\int_{B(0,2R)} \big|g(x)-g((1-\delta)x)\big|^2d\mu(x) \le \mu(B(0,2R)) \omega_g(2R\delta)^2,\] where $\omega_g$ denotes the modulus of continuity of $g$. Combining the above estimates gives \[\limsup_{\delta\to 0^+} \| f-f_\delta\|_{L^2(\mu)} \le (1+2^n C_R)\varepsilon,\] for every $\varepsilon>0$. This proves the convergence of $f_\delta$ to $f$. \smallskip Finally, if $f\in H^1(\mu)$, observe that \[ \| \partial_if-\partial_i(f_\delta)\|_{L^2(\mu)}= \| \partial_if-(1-\delta)(\partial_if)_\delta\|_{L^2(\mu)}\le \delta \|\partial_i f\|_{L^2(\mu)}+(1-\delta) \| \partial_if-(\partial_i f)_\delta\|_{L^2(\mu)}\] tends to 0 when $\delta$ does, by the result that we just proved, applied to $\partial_i f \in L^2(\mu)$. \end{proof} We are now ready to complete the proof of Proposition \ref{prop:density}. As already explained, it is enough to approximate an arbitrary $f\in H^1(\mu)$ whose support is contained in $B(0,R)$ for some $R$.
For $\delta \in (0,\frac12)$, we consider the dilated function $f_\delta$ defined on $(1-\delta)^{-1}\Omega$. The last ingredient is regularization by convolution: let $\eta:{\mathbb R}^n\to {\mathbb R}^+$ be a standard mollifier, meaning $\eta$ is of class $C^\infty$, $\eta(x)=0$ if $|x|\ge 1$ and $\int \eta(x) dx=1$. For $\varepsilon\in(0,1)$, consider $\eta^\varepsilon$ defined for $x\in {\mathbb R}^n$ by \[ \eta^\varepsilon(x)=\varepsilon^{-n} \eta\Big( \frac{x}{\varepsilon}\Big),\] and the convolution $f_\delta\ast \eta^\varepsilon$. Observe that $f_\delta\in H^1_{\mathrm{loc}}\big((1-\delta)^{-1}\Omega, dx \big)$. Indeed, for any compact $K\subset (1-\delta)^{-1}\Omega$, \[\int_K f_\delta(x)^2 dx=(1-\delta)^{-n} \int_{(1-\delta)K}f(x)^2dx \le C_K \int_{(1-\delta)K}f^2 \rho \le C_K\int f^2 d\mu<+\infty,\] where we have used that $\rho$ attains a positive minimum on the compact set $(1-\delta)K\subset \Omega$. The same argument applies to the partial derivatives of $f$. Thus, according to \cite[Theorem 1 of Section 5.3]{EVANSbook}, $f_\delta\ast \eta^\varepsilon$ is well defined and infinitely differentiable on the set \[ U_\varepsilon:=\Big\{ x\in (1-\delta)^{-1}\Omega; \; \mathrm{dist}\big(x, \big((1-\delta)^{-1}\Omega\big)^c\big)>\varepsilon\Big\}.\] Moreover, when $\varepsilon$ tends to 0, $f_\delta\ast\eta^\varepsilon$ tends to $f_\delta$ in $H^1_{\mathrm{loc}}\big((1-\delta)^{-1}\Omega, dx \big)$. As $\Omega\cap B(0, 2R+1)\subset\subset (1-\delta)^{-1}\Omega$, we can deduce that when $\varepsilon$ tends to 0, $f_\delta\ast\eta^\varepsilon$ tends to $f_\delta$ in $H^1\big(\Omega\cap B(0,2R+1), dx \big)$. Taking into account the fact that $f_\delta$ and $f_\delta\ast\eta^\varepsilon$ vanish outside of $B(0,2R+1)$ and that the log-concave function $\rho$ is bounded from above in $\Omega \cap B(0,2R+1)$, we can conclude that $\lim_{\varepsilon\to 0^+} \| f_\delta\ast\eta^\varepsilon-f_\delta\|_{H^1(\mu)}=0$.
To approximate the original function $f$ up to accuracy $\alpha>0$, we simply write \[ \|f_\delta\ast \eta^\varepsilon-f \|_{H^1(\mu)}\le \| f_\delta\ast \eta^\varepsilon-f_\delta\|_{H^1(\mu)}+ \|f_\delta-f \|_{H^1(\mu) },\] and use Lemma~\ref{lem:dilation} to find a $\delta$ for which the last term is at most $\alpha/2$; then we let $\varepsilon$ tend to zero. Since $B(0,r)\subset \Omega$, the set $U_\varepsilon$ contains $((1-\delta)^{-1}-\frac{\varepsilon}{r})\Omega$ when $\varepsilon <r(1-\delta)^{-1}$. Consequently, if $\varepsilon<r\delta^2(1-\delta)^{-1}$ then $(1+\delta)\Omega \subset U_\varepsilon$. So the above approximations of $f$ are $C^\infty$ on a larger set than $\Omega$. Since they also vanish outside of $B(0,2R+1)$, we may modify them outside of $\Omega$ in order to obtain functions in $C_c^\infty({\mathbb R}^n)$. \end{proof} \subsection{Proof of Lemma \ref{lem_1111}} This section is devoted to the proof of Lemma \ref{lem_1111}. We may assume that the support of $\mu$ is not contained in an affine subspace of lower dimension, as otherwise, we may just work in the lower dimensional subspace. Proposition \ref{prop_1004} is proven above under the additional assumption that $\mu$ has a smooth density that is positive everywhere in ${\mathbb R}^n$. Our goal here is to prove the inequality \begin{equation} \mathrm{Var}_{\mu}(f) \leq \sum_{i=1}^n \| \partial_i f \|^2_{H^{-1}(\mu)} \label{eq_431} \end{equation} in the case of a general, log-concave, finite measure $\mu$ in ${\mathbb R}^n$, and a general function $f \in L^2(\mu)$ whose weak partial derivatives $\partial_1 f, \ldots, \partial_n f$ belong to $L^2(\mu)$ and satisfy $\int \partial_i f d \mu = 0$. Recall the definition (\ref{eq_H1}) of the $H^1(\mu)$-norm, and that $H^1(\mu)$ is the space of $f \in L^2(\mu)$ with $\| f \|_{H^1(\mu)} < \infty$.
Recall from Proposition \ref{prop:density} that smooth, compactly-supported functions (in particular, smooth, bounded, Lipschitz functions $u: {\mathbb R}^n \rightarrow {\mathbb R}$) are dense in $H^1(\mu)$. \medskip Next, we claim that both the left-hand side and the right-hand side of (\ref{eq_431}) depend continuously on the function $f$ with respect to the $H^1(\mu)$-topology, as long as we keep the constraint $\int \partial_i f d \mu = 0$ for all $i$. Indeed, the $H^1(\mu)$-norm is stronger than the $L^2(\mu)$-norm, and hence $\mathrm{Var}_{\mu}(f)$ is continuous in $f$ with respect to the $H^1(\mu)$-norm. As for the right-hand side of (\ref{eq_431}), by inequality (\ref{eq_915}) above, $$ \| \partial_i f - \partial_i \tilde{f} \|_{H^{-1}(\mu)} \leq C_P(\mu) \| \partial_i f - \partial_i \tilde{f} \|_{L^2(\mu)} \leq C_P(\mu) \| f - \tilde{f} \|_{H^1(\mu)}. $$ It therefore suffices to prove (\ref{eq_431}) under the additional assumption that $f$ is a smooth function, bounded in ${\mathbb R}^n$ together with its first partial derivatives, such that $\int f d \mu = 0$ and also $\int \partial_i f d \mu = 0$ for $i=1,\ldots,n$. \begin{lemma} \label{lem:approx-mu-above} Let $\mu$ be a finite measure on ${\mathbb R}^n$ whose density $\rho$ is log-concave. Then there exists a sequence of functions $(\rho_k)_{k \geq 1}$ with the following properties: \begin{enumerate} \item[(i)] For any $k$, the function $\rho_k: {\mathbb R}^n \rightarrow (0, \infty)$ is a smooth, everywhere-positive, integrable, log-concave function on ${\mathbb R}^n$ such that $\rho \leq \rho_k$ pointwise. \item[(ii)] Write $S \subseteq {\mathbb R}^n$ for the interior of the support of $\mu$, which is an open, convex set of full $\mu$-measure. Then $\rho_k \longrightarrow \rho$ locally uniformly in $S$.
\item[(iii)] For any measurable function $\varphi: {\mathbb R}^n \rightarrow {\mathbb R}$ that grows at most polynomially at infinity, $$ \int_{{\mathbb R}^n} \varphi \rho_k \stackrel{k \rightarrow \infty}{\longrightarrow} \int_{{\mathbb R}^n} \varphi \rho. $$ \end{enumerate} \end{lemma} Lemma~\ref{lem:approx-mu-above} will be proven shortly. We apply the lemma to $\mu$ and denote by $\mu_k$ the measure whose density is $\rho_k$. Let $\theta_k \in {\mathbb R}^n$ and $\alpha_k \in {\mathbb R}$ be such that $\tilde{f}_k(x) = f(x) + \langle \theta_k, x \rangle + \alpha_k$ satisfies $$ \int_{{\mathbb R}^n} \tilde{f}_k d \mu_k = 0, \qquad \text{and} \qquad \int_{{\mathbb R}^n} \partial_i \tilde{f}_k d \mu_k = 0 \quad (i=1,\ldots,n).$$ We deduce from Item (iii) of Lemma~\ref{lem:approx-mu-above} that $\theta_k$ and $\alpha_k$ tend to zero as $k \rightarrow \infty$. It also follows that $$ \mathrm{Var}_{\mu_k}(\tilde{f}_k) \stackrel{k \rightarrow \infty}{\longrightarrow} \mathrm{Var}_{\mu}(f). $$ All that remains in order to complete the proof of Lemma \ref{lem_1111} is to prove that for $i=1,\ldots,n$ and $g = \partial_i f$, \begin{equation} \limsup_{k \rightarrow \infty} \| g - E_k(g) \|_{H^{-1}(\mu_k)} \leq \| g - E(g) \|_{H^{-1}(\mu)}, \label{eq_949} \end{equation} where $E_k(g) = \int_{{\mathbb R}^n} g d \mu_k / \mu_k({\mathbb R}^n)$ and $E(g) = \int g d \mu / \mu({\mathbb R}^n)$. We will actually prove (\ref{eq_949}) for any bounded function $g: {\mathbb R}^n \rightarrow {\mathbb R}$. Normalizing, we may assume that $\sup |g| \leq 1$. Let $\varepsilon > 0$. It suffices to prove that \begin{equation} \limsup_{k \rightarrow \infty} \| g - E_k(g) \|_{H^{-1}(\mu_k)} \leq \| g - E(g) \|_{H^{-1}(\mu)} + 2 \varepsilon \cdot \left[ C_P(\mu) + \sup_k C_P(\mu_k) \right]. \label{eq_503} \end{equation} Indeed, $\sup_k C_P(\mu_k) < \infty$ (see \cite{bobkov}). Let $T \subset S$ be a compact, convex set with $$ \mu({\mathbb R}^n \setminus T) < \varepsilon^2 / 4.
$$ Then there exists $k_0$ such that $\mu_k({\mathbb R}^n \setminus T) < \varepsilon^2 / 4$ for all $k \geq k_0$. Define $ h = g \cdot 1_T $ where $1_T$ is the characteristic function of $T$, which equals one in $T$ and vanishes elsewhere. Then for all $k \geq k_0$, $$ \| g - E(g) - h + E(h) \|_{L^2(\mu)} < \varepsilon \quad \textrm{and also} \quad \| g - E_k(g) - h + E_k(h) \|_{L^2(\mu_k)} < \varepsilon. $$ In view of (\ref{eq_915}) above, we see that (\ref{eq_503}) would follow once we prove that \begin{equation} \limsup_{k \rightarrow \infty} \| h - E_k(h) \|_{H^{-1}(\mu_k)} \leq \| h - E(h) \|_{H^{-1}(\mu)}. \label{eq_506} \end{equation} However, $h$ is supported in the compact set $T \subset S$, where $S$ is an open set in which $\rho$ is positive. The convergence of $\rho_k$ to the density $\rho$ is uniform in $T$. For $k \geq 1$ let $u_k: {\mathbb R}^n \rightarrow {\mathbb R}$ be a locally-Lipschitz function in $L^2(\mu_k)$ with $\int_{{\mathbb R}^n} |\nabla u_k|^2 d \mu_k \leq 1$ and $\int u_k d \mu_k = 0$ and $$ \| h - E_k(h) \|_{H^{-1}(\mu_k)} \leq \frac{1}{k} + \int_{{\mathbb R}^n} h u_k d \mu_k. $$ Since $\rho_k \geq \rho$, necessarily $\int |\nabla u_k|^2 d \mu \leq 1$. Therefore, \begin{align} \nonumber \| h - E(h) \|_{H^{-1}(\mu)} & \geq \int_{{\mathbb R}^n} h u_k d \mu = \int_{T} h u_k d \mu_k + \int_T h u_k (\rho - \rho_k) \, dx \\ & \geq \| h - E_k(h) \|_{H^{-1}(\mu_k)} - \frac{1}{k} - \frac{\sup_T |\rho_k - \rho|}{\inf_T \rho_k} \cdot \int_T |u_k| d \mu_k. \label{eq_547} \end{align} Note that $\sup_T |\rho_k - \rho|$ tends to zero with $k$, while $\inf_T \rho_k$ is bounded away from zero for a sufficiently large $k$. Moreover, $\left( \int_T |u_k| d \mu_k \right)^2 \leq \mu_k({\mathbb R}^n)\int_{{\mathbb R}^n} u_k^2 d \mu_k \leq \sup_k \mu_k({\mathbb R}^n)C_P(\mu_k) < \infty$. By letting $k$ tend to infinity, we thus obtain (\ref{eq_506}) from (\ref{eq_547}). This completes the proof of Lemma \ref{lem_1111}. 
\begin{proof}[Proof of Lemma \ref{lem:approx-mu-above}] Set $\psi(x) = -\log \rho(x)$ for $x \in S$ and $\psi(x) = +\infty$ for $x \not \in S$. The function $\psi$ is convex in ${\mathbb R}^n$, and the integrability of $e^{-\psi}$ implies that there exist $A \in (0,1)$ and $B > 0$ such that \begin{equation} \psi(x) \geq A |x| - B \qquad \text{for all} \ x \in {\mathbb R}^n. \label{eq_1709} \end{equation} See, e.g., \cite[Lemma 2.2.1]{BOOKgreek} for a quick proof. For $k \geq 1$ denote \begin{equation} \tilde{\psi}_k(x) = \inf_{y \in S} \left[ \psi(y) + k |x - y| \right] \qquad \qquad \text{for} \ x \in {\mathbb R}^n. \label{eq_1659} \end{equation} The function $\tilde{\psi}_k$ is a $k$-Lipschitz function in ${\mathbb R}^n$, being the infimum of a family of $k$-Lipschitz functions. It is also convex, since it is the infimum-convolution of two convex functions (see, e.g., Rockafellar \cite[Section 5]{roc}). Clearly $\tilde{\psi}_k \leq \psi$. From (\ref{eq_1709}) and (\ref{eq_1659}), for any $k \geq 1$ and $x \in {\mathbb R}^n$, \begin{equation} \tilde{\psi}_k(x) \geq \inf_{y \in S} [A |y| + k|x-y| - B] \geq \inf_{y \in S} [A |y| + A|x-y| - B] \geq A |x| - B. \label{eq_421} \end{equation} Fix a smooth, even probability density $\theta: {\mathbb R}^n \rightarrow {\mathbb R}$ supported in the unit ball $B(0,1)$. Write $\theta_{\varepsilon}(x) = \varepsilon^{-n} \theta(x / \varepsilon)$ and define $$ \psi_k = \tilde{\psi}_k * \theta_{1/k^2} - 1 / k. $$ The function $\psi_k$ is smooth, and it is still $k$-Lipschitz and convex, since convolution with a probability density preserves these properties. We claim that \begin{equation} \tilde{\psi}_k - 1/k \leq \psi_k \leq \tilde{\psi}_k \leq \psi \qquad \qquad \textrm{pointwise in} \ {\mathbb R}^n. \label{eq_414} \end{equation} Indeed, since $\tilde{\psi}_k$ is convex and $\theta_{1/k^2}$ is a probability density with barycenter at the origin, by Jensen's inequality, $$ \psi_k + 1 / k = \tilde{\psi}_k * \theta_{1/k^2} \geq \tilde{\psi}_k, $$ which implies the left-hand side inequality in (\ref{eq_414}). 
On the other hand, since $\tilde{\psi}_k$ is $k$-Lipschitz and $\theta_{1/k^2}$ is supported in the ball of radius $1/k^2$ centered at the origin in ${\mathbb R}^n$, $$ \psi_k + 1/k = \tilde{\psi}_k * \theta_{1/k^2} \leq \tilde{\psi}_k + k / k^2 = \tilde{\psi}_k + 1/k, $$ implying the inequality in the middle in (\ref{eq_414}). This completes the proof of (\ref{eq_414}), as we have already seen the right-hand side inequality in (\ref{eq_414}). \medskip Let us now set $\rho_k = \exp(-\psi_k)$. Since $\psi_k$ is a smooth, convex, Lipschitz function, the function $\rho_k$ is smooth, everywhere-positive and log-concave. It satisfies $\rho_k \geq \rho$ thanks to (\ref{eq_414}). The integrability of $\rho_k$ follows from (\ref{eq_421}) and (\ref{eq_414}), completing the proof of (i). \medskip The function $\psi$ is locally-Lipschitz in $S$ since it is convex. It thus follows from (\ref{eq_1659}) that $\tilde{\psi}_k$ tends to $\psi$ pointwise in $S$, as $k \rightarrow \infty$. According to \cite[Theorem 10.8]{roc}, the convergence is locally-uniform in $S$. Since $\tilde{\psi}_k$ tends to $\psi$ locally uniformly in $S$, we learn from (\ref{eq_414}) that also $\psi_k$ tends to $\psi$ locally uniformly in $S$. Consequently, $\rho_k \longrightarrow \rho$ locally uniformly in $S$, as stated in (ii). It remains to prove (iii). From (\ref{eq_1709}), (\ref{eq_421}) and (\ref{eq_414}), $$ \rho_k(x) \leq e^{B+1 - A |x|} \qquad \textrm{for all} \ k \geq 1, x \in {\mathbb R}^n. $$ Hence the function $|\varphi(x)| e^{B+1 - A |x|}$ is an integrable majorant for the sequence of functions $(\varphi \rho_k)_{k \geq 1}$ in ${\mathbb R}^n$. In view of Lebesgue's dominated convergence theorem, all that remains in order to prove (iii) is to show that $\rho_k \longrightarrow \rho$ almost everywhere in ${\mathbb R}^n$. We already know that $\rho_k \longrightarrow \rho$ in $S$. Since $S$ is a convex set, its boundary has zero Lebesgue measure. 
Thus, it suffices to fix a point $x \in {\mathbb R}^n$ which is not in the closure of $S$, and prove that \begin{equation} \rho_k(x) \stackrel{k \rightarrow \infty} \longrightarrow 0. \label{eq_458} \end{equation} There exists $\varepsilon > 0$ such that the ball $B(x,\varepsilon)$ is disjoint from $S$. It follows from (\ref{eq_1709}) and (\ref{eq_1659}) that $\tilde{\psi}_k(x) \geq k \varepsilon - B$ for all $k$. From (\ref{eq_414}) we thus learn that $\psi_k(x) \geq k \varepsilon - B - 1/k \longrightarrow \infty$ as $k \rightarrow \infty$. This implies (\ref{eq_458}), completing the proof of the lemma. \end{proof}
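The Lipschitz regularization $\tilde{\psi}_k$ defined in (\ref{eq_1659}) is easy to visualize in one dimension. The following sketch is an illustration only, not part of the proof: it takes the sample choice $\psi(x) = x^2$ (an assumption made purely for concreteness) and approximates the infimum over $y$ by a grid search, checking that $\tilde{\psi}_k \leq \psi$, that $\tilde{\psi}_k$ is nondecreasing in $k$, and that $\tilde{\psi}_k$ agrees with $\psi$ wherever $\psi$ is locally $k$-Lipschitz.

```python
# Illustration only (not part of the proof): the inf-convolution
# psi_tilde_k(x) = inf_y [ psi(y) + k|x - y| ] in dimension one,
# for the sample choice psi(x) = x^2, approximated by a grid search in y.
ys = [i * 0.01 - 10.0 for i in range(2001)]   # grid for the infimum over y

def psi(x):
    return x * x

def psi_tilde(k, x):
    return min(psi(y) + k * abs(x - y) for y in ys)

for x in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    # psi_tilde_k <= psi, and psi_tilde_k is nondecreasing in k
    assert psi_tilde(2, x) <= psi(x) + 1e-9
    assert psi_tilde(1, x) <= psi_tilde(2, x) + 1e-9

# where psi is locally k-Lipschitz (here |x| <= k/2) nothing is lost:
assert abs(psi_tilde(5, 1.0) - 1.0) < 1e-9
# further out, psi_tilde_k replaces psi by a line of slope k
# (for k = 2, x = 3 the minimizer is y = 1, giving 1 + 2*2 = 5):
assert abs(psi_tilde(2, 3.0) - 5.0) < 1e-9
```

The qualitative picture matches the proof: the regularization truncates the growth of $\psi$ to slope $k$, so $\tilde{\psi}_k \nearrow \psi$ as $k \rightarrow \infty$ while each $\tilde{\psi}_k$ stays convex and globally Lipschitz.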
https://arxiv.org/abs/2101.08644
Statistics for $S_n$ acting on $k$-sets
We study the natural action of $S_n$ on the set of $k$-subsets of the set $\{1,\dots, n\}$ when $1\leq k \leq \frac{n}{2}$. For this action we calculate the maximum size of a minimal base, the height and the maximum length of an irredundant base. Here a "base" is a set with trivial pointwise stabilizer, "height" is the maximum size of a subset with the property that its pointwise stabilizer is not equal to the pointwise stabilizer of any proper subset, and an "irredundant base" can be thought of as a chain of (pointwise) set-stabilizers for which all containments are proper.
\section{Introduction}\label{s: intro} In this note we study three statistics pertaining to primitive permutation groups. Our main theorem gives the value of these three statistics for the permutation groups $S_n$ acting (in the natural way) on the set of $k$-subsets of the set $\{1,\dots, n\}$. Before we state our main result, let us briefly define the three statistics in question (more complete definitions, as well as some background information, are given in \S\ref{s: defs}): suppose that $G$ is a finite permutation group on a set $\Omega$. We define, first, $\mathrm{B}(G, \Omega)$ to be the maximum size of a minimal base for the action of $G$; we define, second, $\mathrm{H}(G,\Omega)$ to be the maximum size of a subset $\Lambda\subseteq\Omega$ that has the property that its pointwise stabilizer is not equal to the pointwise stabilizer of any proper subset of $\Lambda$; we define, third, $\mathrm{I}(G,\Omega)$ to be the maximum length of an irredundant base for the action of $G$. Our main result is the following. \begin{thm}\label{t: main} Let $k$ and $n$ be positive integers with $1\leq k\leq \frac{n}{2}$. Consider $S_n$ acting in the natural way on $\Omega_k$, the set of $k$-subsets of $\{1,\dots, n\}$. \begin{enumerate} \item $\mathrm{I}(S_n, \Omega_k)=\begin{cases} n-1, & \textrm{if $\gcd(n,k)=1$}; \\ n-2, & \textrm{otherwise}. \end{cases}$ \item ${\rm B}(S_n, \Omega_k) = \mathrm{H}(S_n, \Omega_k) = \begin{cases} n-1, & \textrm{if } k=1; \\ n-2, & \begin{array}{l} \textrm{if } k=2 \textrm{ or } \\ \textrm{if }k\geq 3 \textrm{ and } n=2k+2;\end{array} \\ n-3, & \textrm{otherwise}. \end{cases}$ \end{enumerate} \end{thm} \subsection{Definition of statistics}\label{s: defs} Throughout, consider a finite permutation group $G$ on a set $\Omega$. Let $\Lambda = \{\omega_1,\dots,\omega_k\} \subseteq \Omega$; we write $G_{(\Lambda)}$ or $G_{\omega_1, \omega_2, \dots, \omega_k}$ for the pointwise stabilizer. 
If $G_{(\Lambda)} = \{1\}$, then we say that $\Lambda$ is a \emph{base}. We say that a base is a \emph{minimal base} if no proper subset of it is a base. We denote the minimum size of a minimal base by $\mathrm{b}(G,\Omega)$, and the maximum size of a minimal base by $\mathrm{B}(G, \Omega)$. We say that $\Lambda$ is an \emph{independent set} if its pointwise stabilizer is not equal to the pointwise stabilizer of any proper subset of $\Lambda$. We define the \emph{height} of $G$ to be the maximum size of an independent set, and we denote this quantity by $\mathrm{H}(G, \Omega)$. Given an ordered sequence of elements of $\Omega$, $[\omega_1,\omega_2,\dots, \omega_\ell]$, we can study the associated \emph{stabilizer chain}: \[ G \geq G_{\omega_1} \geq G_{\omega_1, \omega_2}\geq G_{\omega_1, \omega_2, \omega_3} \geq \dots \geq G_{\omega_1, \omega_2, \dots, \omega_\ell}. \] If all the inclusions given above are strict, then the stabilizer chain is called \emph{irredundant}. If, furthermore, the group $G_{\omega_1,\omega_2,\dots, \omega_\ell}$ is trivial, then the sequence $[\omega_1,\omega_2,\dots, \omega_\ell]$ is called an \emph{irredundant base}. The length of the longest possible irredundant base is denoted by $\mathrm{I}(G,\Omega)$. Note that, defined in this way, an irredundant base is not a base (because it is an ordered sequence, not a set). Let us make some basic observations. First, it is easy to verify the following inequalities: \begin{equation}\label{basic} \mathrm{b}(G,\Omega) \leq \mathrm{B}(G,\Omega) \leq \mathrm{H}(G,\Omega) \leq \mathrm{I}(G,\Omega). \end{equation} Second, it is easy to see that $\Lambda=\{\omega_1, \omega_2, \dots, \omega_k\}$ is independent if and only if the pointwise stabilizer of $\Lambda$ is not equal to the pointwise stabilizer of $\Lambda\setminus\{\omega_i\}$ for all $i=1,\dots, k$. Third, any subset of an independent set is independent. 
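For small parameters all four statistics can be computed exhaustively, which gives a quick sanity check of the inequalities in (\ref{basic}). The following brute-force sketch is illustrative only and not part of the paper: it computes $\mathrm{b}$, $\mathrm{B}$, $\mathrm{H}$ and $\mathrm{I}$ for $S_4$ acting on $2$-subsets of $\{0,1,2,3\}$ (points $0$-indexed for convenience), where Theorem~\ref{t: main} predicts $\mathrm{I}=\mathrm{B}=\mathrm{H}=n-2=2$.

```python
# Illustration only: brute-force computation of b, B, H and I for
# S_4 acting on 2-subsets of {0,1,2,3} (points 0-indexed).
from itertools import combinations, permutations

n, k = 4, 2
G = list(permutations(range(n)))          # S_n; p[i] is the image of i
Omega = [frozenset(c) for c in combinations(range(n), k)]
identity_only = frozenset([tuple(range(n))])

def stab(sets, inside=None):              # pointwise stabilizer of a family of k-sets
    return frozenset(p for p in (G if inside is None else inside)
                     if all(frozenset(p[i] for i in s) == s for s in sets))

def is_base(L):
    return stab(L) == identity_only

def is_minimal_base(L):
    return is_base(L) and all(not is_base(L - {s}) for s in L)

def is_independent(L):
    return all(stab(L - {s}) != stab(L) for s in L)

families = [frozenset(c) for r in range(len(Omega) + 1)
            for c in combinations(Omega, r)]
b = min(len(L) for L in families if is_base(L))
B = max(len(L) for L in families if is_minimal_base(L))
H = max(len(L) for L in families if is_independent(L))

def longest_irredundant(S):               # longest strict stabilizer chain from S down to 1
    if S == identity_only:
        return 0
    options = [1 + longest_irredundant(stab([w], S))
               for w in Omega if stab([w], S) < S]
    return max(options, default=-10**6)   # "-infinity": chain cannot reach the identity

I = longest_irredundant(frozenset(G))
assert b <= B <= H <= I                   # the chain of inequalities above
assert (b, B, H, I) == (2, 2, 2, 2)       # matches Theorem 1 for n = 4, k = 2
```

The same exhaustive approach works for any permutation group small enough to enumerate, though it scales very poorly; serious computations would use stabilizer-chain machinery instead.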
\subsection{Some context}\label{s: context} Our interest in the statistics considered here was stimulated by our study of yet another statistic, the \emph{relational complexity} of the permutation group $G$, denoted $\mathrm{RC}(G,\Omega)$. This statistic was introduced in \cite{cherlin_martin}; it can be defined as the least $k$ for which $G$ can be viewed as the automorphism group of a homogeneous relational structure whose relations are $k$-ary \cite{cherlin2}. It is an exercise to confirm that $\mathrm{RC}(G,\Omega)\leq \mathrm{H}(G,\Omega)+1$ for any permutation group $G$ on a set $\Omega$ \cite{glodas}. Anecdotally it would seem that $\mathrm{RC}(G,\Omega)$ tends to track $\mathrm{H}(G,\Omega)+1$ rather closely: it often seems to equal this value or to be rather close to it. In this respect Theorem~\ref{t: main} tells us that the action of $S_n$ on the set of $k$-sets is an aberration: in \cite{cherlin1}, Cherlin calculates that $\mathrm{RC}(S_n,\Omega_k)=\lfloor \log_2 k \rfloor +2$; asymptotically this is very far from the value for the height that is given in Theorem~\ref{t: main}. In a different direction, an earlier result with Spiga, along with work of Kelsey and Roney-Dougal, asserts that the statistics $\mathrm{H}(G,\Omega)$ and $\mathrm{I}(G,\Omega)$ satisfy a particular upper bound whenever $G$ is primitive and not in a certain explicit family of permutation groups \cite{glodas,krd}. Ultimately we would like to calculate the value of $\mathrm{H}(G,\Omega)$ and $\mathrm{I}(G,\Omega)$ for all of the permutation groups in this explicit family; our calculation of $\mathrm{H}(S_n,\Omega_k)$ and $\mathrm{I}(S_n,\Omega_k)$ is the first step in this process. The one statistic that we have neglected in our study is $\mathrm{b}(G,\Omega)$. 
The value of this statistic for the actions under consideration has not been completely worked out, although significant progress has been made (see \cite{cggmp, halasi} as well as \cite{bailey_cameron} and the references therein). On the other hand, for those primitive actions of $S_n$ for which a point-stabilizer acts primitively in the natural action on $\{1,\dots, n\}$, the value of $\mathrm{b}(G,\Omega)$ is known \cite{burness_guralnick_saxl}. Finally it is worth mentioning that, in general, $\ell(G)$, the maximum length of a chain of subgroups in a group $G$, is an upper bound for $\mathrm{I}(G,\Omega)$ for any faithful action of the group $G$ on a set $\Omega$. It is known that $\ell(S_n)=\lfloor\frac{3n-1}{2}\rfloor-b_n$, where $b_n$ is the number of $1$'s in the binary expansion of $n$ \cite{cameron_solomon_turull}. \subsection{Acknowledgments} The work of the first author was supported by EPSRC grant EP/R028702/1. \section{The proof}\label{s: proof} In this section we prove Theorem~\ref{t: main}. Throughout the proof we will write $G$ for $S_n$. We need some terminology. Suppose that $\Delta=\{\delta_1,\dots, \delta_\ell\}$ is a set of non-empty subsets of $\{1,\dots, n\}$. We define $\mathcal{P}_\Delta$, the \emph{partition associated with $\Delta$ on $\{1,\dots, n\}$}, to be the partition of $\{1,\dots, n\}$ associated with the equivalence relation $\sim$ given as follows: for $x,y\in\{1,\dots, n\}$, we have $x\sim y$ if and only if for all $i=1,\dots, \ell$, $x\in \delta_i\Longleftrightarrow y\in\delta_i$. If $\Delta$ is empty, then we define $\mathcal{P}_\Delta$ to be the partition with a single part of size $n$. It is an easy exercise to check that, first, the parts of $\mathcal{P}_\Delta$ can be obtained by taking all possible intersections of the sets $\delta_1,\dots,\delta_\ell$ and their complements; second, the pointwise stabilizer of $\Delta$ in $S_n$ is simply the subgroup of permutations that stabilize every part of $\mathcal{P}_\Delta$ setwise. 
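The partition $\mathcal{P}_\Delta$ is straightforward to compute: points are grouped according to their pattern of membership in the sets of $\Delta$. A minimal sketch (illustration only, with $0$-indexed points):

```python
# Illustration only (0-indexed points): the partition P_Delta groups the
# points of {0,...,n-1} by their pattern of membership in the sets of Delta.
from collections import defaultdict

def partition(n, Delta):
    classes = defaultdict(list)
    for x in range(n):
        classes[tuple(x in d for d in Delta)].append(x)
    return sorted(classes.values())

# Delta = { {0,1,2}, {2,3} } on {0,...,5}:
assert partition(6, [{0, 1, 2}, {2, 3}]) == [[0, 1], [2], [3], [4, 5]]
# an empty Delta gives the partition with a single part:
assert partition(6, []) == [[0, 1, 2, 3, 4, 5]]
```

By the second observation above, the pointwise stabilizer of $\Delta$ is the direct product of the symmetric groups on the parts returned by this function.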
If $i,j\in \{1,\dots, n\}$ and $\omega \in \Omega_k$, then we will say that $\omega$ \emph{splits} $i$ and $j$ if $|\{i,j\}\cap \omega|=1$. In particular, if $\omega$ splits $i$ and $j$ then for any set $\Delta$ such that $\omega\in\Delta\subseteq \Omega_k$, we have $i\not\sim j$ where $\sim$ is the equivalence relation associated with $\Delta$. \subsection{The result for \texorpdfstring{$\mathrm{I}(G,\Omega_k)$}{I(G,Omegak)}}\label{s:i} We will use the terminology above and begin with a couple of lemmas. \begin{lem}\label{l: i} Let $d,e\in\mathbb{Z}^+\cup\{0\}$ with $e>d$, let $H$ be a permutation group on the set $\{1,\dots, n\}$, let $\Delta=\{\delta_1,\dots, \delta_d\}$ be a set of non-empty subsets of $\{1,\dots, n\}$, let $\Lambda$ be a set of non-empty subsets of $\{1,\dots, n\}$ that contains $\Delta$ and let $\Lambda \setminus \Delta=\{\lambda_{d+1}, \dots, \lambda_e\}.$ Suppose that \[ H\gneq H_{\delta_1} \gneq H_{\delta_1, \delta_2} \gneq \cdots \gneq H_{\delta_1, \dots, \delta_d} \gneq H_{\delta_1,\dots, \delta_d, \lambda_{d+1}}\gneq H_{\delta_1,\dots, \delta_d, \lambda_{d+1},\lambda_{d+2}} \gneq \cdots \gneq H_{(\Lambda)}. \] If $\mathcal{P}_\Delta$ has $r$ parts and $\mathcal{P}_\Lambda$ has $s$ parts, then $|\Lambda|=e\leq d+s-r$. \end{lem} Note that if $\Delta$ is empty, then the lemma applies with $d=0$ and $r=1$, and we obtain that $|\Lambda|\leq s-1$. \begin{proof} For $i=1,\dots, d$, let $\mathcal{P}_i$ be the partition associated with the set $\{\delta_1,\dots, \delta_i\}$ and, for $i=d+1,\dots, e$, let $\mathcal{P}_i$ be the partition associated with the set $\Delta\cup\{\lambda_{d+1},\lambda_{d+2},\dots, \lambda_i\}$. Since all the containments are proper, $\mathcal{P}_{i+1}$ has at least one more part than $\mathcal{P}_{i}$ for all $i=1,\dots, e-1$. There are $|\Delta|$ containments up to $H_{(\Delta)}=H_{\delta_1,\dots, \delta_d}$ and then the number of containments after that is at most $s-r$. The result follows. 
\end{proof} \begin{lem}\label{l: j} Let $\ell\in\mathbb{Z}^+$, let $g=\gcd(n,k)$, let $\omega_1,\dots,\omega_\ell$ be $k$-subsets of $\{1,\dots, n\}$ and let $\mathcal{P}_i$ be the partition associated with $\{\omega_1,\dots, \omega_i\}$. If $\mathcal{P}_{i+1}$ has exactly one more part than $\mathcal{P}_{i}$ for all $i=1,\dots, \ell-1$, then all parts of $\mathcal{P}_\ell$ have size divisible by $g$. \end{lem} \begin{proof} We proceed by induction. Observe that $\mathcal{P}_1$ has two parts, one of size $k$ and the other of size $n-k$. Both $k$ and $n-k$ are divisible by $g$ and so the result is true for $i=1$. Let $i\in\{1,\dots, \ell-1\}$ and assume that all parts of $\mathcal{P}_i$ have size divisible by $g$. The property that $\mathcal{P}_{i+1}$ has exactly one more part than $\mathcal{P}_i$ implies that, with precisely one exception, if $P$ is a part of $\mathcal{P}_{i}$, then $|\omega_{i+1}\cap P|\in\{0,|P|\} $. In other words \[ \omega_{i+1}=P_1\cup \dots \cup P_m \cup X, \] where $m$ is some integer, $P_1,\dots, P_{m+1}$ are parts of $\mathcal{P}_i$ and $X$ is a non-empty proper subset of the part $P_{m+1}$. Note that, by assumption, $|\omega_{i+1}|=k$ is divisible by $g$. What is more, the inductive hypothesis implies that $|P_1|,\dots,|P_m|$ are divisible by $g$, hence the same is true of $|X|$. But now $\mathcal{P}_{i+1}$ has the same parts as $\mathcal{P}_i$ except that part $P_{m+1}$ has been replaced by two parts, $X$ and $P_{m+1}\setminus X$, both of which have size divisible by $g$. The result follows. \end{proof} We are ready to prove item (1) of Theorem~\ref{t: main}. First let $[\omega_1,\dots,\omega_m]$ be an irredundant base and apply Lemma~\ref{l: i} to $\Lambda=\{\omega_1,\dots,\omega_m\}$ with $\Delta$ empty: since $\mathcal{P}_\Lambda$ has at most $n$ parts, we obtain $m\leq n-1$; thus $\mathrm{I}(G,\Omega_k)\leq n-1$. 
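Lemma~\ref{l: j} can be checked on small examples. The sketch below is illustrative only (points are $0$-indexed): it takes $n=6$ and $k=4$, so $g=2$, and a chain of two $4$-sets whose associated partitions gain exactly one part per step; every part indeed has even size.

```python
# Illustration of the divisibility lemma, 0-indexed: n = 6, k = 4, g = gcd(6,4) = 2.
from math import gcd

def partition(n, Delta):
    classes = {}
    for x in range(n):
        classes.setdefault(tuple(x in d for d in Delta), []).append(x)
    return sorted(classes.values())

n, k = 6, 4
g = gcd(n, k)
chain = [{0, 1, 2, 3}, {0, 1, 4, 5}]      # omega_1, omega_2
parts = [partition(n, chain[:i + 1]) for i in range(len(chain))]

# each partition has exactly one more part than its predecessor ...
assert [len(p) for p in parts] == [2, 3]
# ... and so every part has size divisible by g, as the lemma predicts
assert all(len(P) % g == 0 for p in parts for P in p)
```
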
Next we suppose that $\gcd(n,k)=1$ and we must show that there exists a stabilizer chain \[G \gneq G_{\omega_1} \gneq G_{\omega_1, \omega_2}\gneq G_{\omega_1, \omega_2, \omega_3} \gneq \dots \gneq G_{\omega_1, \omega_2, \dots, \omega_{n-1}}, \] with $\omega_1,\dots, \omega_{n-1}\in\Omega_k$. Observe that if such a chain exists, then, writing $\mathcal{P}_i$ for the partition associated with $\{\omega_1,\dots, \omega_i\}$, it is clear that, for $i=1,\dots, n-2,$ the partition $\mathcal{P}_{i+1}$ has exactly one more part than $\mathcal{P}_i$. We show the existence of such a chain by induction on $k$: If $k=1$, then the result is obvious. Write $d=\lfloor n/k\rfloor$ and write $n=dk+r$. For $i=1,\dots, d$, we set \[ \omega_i=\{(i-1)k+1, \dots, ik\}. \] The stabilizer of these $d$ sets is associated with a partition, $\mathcal{P}_d$, of $d$ parts of size $k$ and one of size $r$. Now we will choose the next sets, $\omega_{d+1},\dots, \omega_{d+k-1}$, so that they all contain the part of size $r$ and so that the remaining $k-r$ points in each are elements of $\omega_1$. Observe that, since $(n,k)=1$, we know that $(k, k-r)=1$. Now the inductive hypothesis asserts that we can choose $k-1$ subsets of $\omega_1$, all of size $k-r$, so that the corresponding stabilizer chain in $\mathop{\mathrm{Sym}}(\{1,\dots, k\})$ is of length $k-1$, i.e. so that the corresponding chain of partitions of $\{1,\dots, k\}$ has the property that each partition has exactly one more part than the previous. We can repeat this process for $\omega_2,\dots, \omega_d$, at the end of which we have constructed a stabilizer chain of length $dk$ for which the associated partition, $\mathcal{P}_{dk}$, has $dk$ parts of size $1$ and $1$ part of size $r$. A further $r-1$ subgroups can be added to the stabilizer chain by stabilizing sets of form \[ \{1,\dots, k-1, dk+i\} \] for $i=1,\dots, r-1$. We conclude that $\mathrm{I}(G, \Omega_k)=n-1$ if $\gcd(n,k)=1$. 
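For instance, with $n=5$ and $k=2$ (so $d=2$, $r=1$) the construction above produces, after relabelling the points as $0,\dots,4$, the chain $\{0,1\}$, $\{2,3\}$, $\{0,4\}$, $\{2,4\}$. The sketch below (illustrative only, not part of the proof) checks that the associated partitions gain exactly one part at each step and end in singletons, so the chain is an irredundant base of length $n-1=4$.

```python
# Illustration only: the construction for gcd(n, k) = 1 with n = 5, k = 2
# (so d = 2, r = 1), points relabelled as 0,...,4.
def partition(n, Delta):
    classes = {}
    for x in range(n):
        classes.setdefault(tuple(x in d for d in Delta), []).append(x)
    return sorted(classes.values())

n = 5
chain = [{0, 1}, {2, 3}, {0, 4}, {2, 4}]
parts = [partition(n, chain[:i + 1]) for i in range(len(chain))]

# one new part at every step, ending in singletons: since refining the
# partition strictly shrinks the pointwise stabilizer, this chain is an
# irredundant base of length n - 1 = 4
assert [len(p) for p in parts] == [2, 3, 4, 5]
assert all(len(P) == 1 for P in parts[-1])
```
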
On the other hand, let us see that $\mathrm{I}(G,\Omega_k)\geq n-2$ in general. Define \[ \omega_i=\begin{cases} \{1,\dots, k-1, k+i-1\}, & \textrm{ if } i=1,\dots, n-k; \\ \{i-(n-k), k+1,\dots, 2k-1\} & \textrm{ if } i=(n-k)+1,\dots, n-2. \end{cases} \] It is easy to check that the corresponding stabilizer chain \[G \geq G_{\omega_1} \geq G_{\omega_1, \omega_2}\geq G_{\omega_1, \omega_2, \omega_3} \geq \dots \geq G_{\omega_1, \omega_2, \dots, \omega_{n-2}} \] is irredundant for $G=S_n$. Finally we assume that $\gcd(n,k)=g>1$ and we show that $\mathrm{I}(G,\Omega_k)\leq n-2$. We must show that it is not possible to construct a stabilizer chain of length $n-1$; as we saw above such a chain would have the property that at every stage the corresponding partition $\mathcal{P}_{i+1}$ has exactly one more part than $\mathcal{P}_i$. Suppose that we have a stabilizer chain with the property that, for all $i$, $\mathcal{P}_{i+1}$ has exactly one more part than $\mathcal{P}_i$. Now Lemma~\ref{l: j} implies that all parts of $\mathcal{P}_i$ have size divisible by $g$, for all $i$. We see immediately that such a stabilizer chain is of length at most $n/g-1<n-1$ and we are done. \subsection{Preliminaries for \texorpdfstring{$\mathrm{B}(G,\Omega_k)$}{B(G,Omegak)} and \texorpdfstring{$\mathrm{H}(G,\Omega_k)$}{H(G,Omegak)}} Note, first, that for the remaining statistics the result for $k=1$ is immediate; thus we assume from here on that $k>1$. Note, second, that to prove what remains we need to show that there exists a lower bound for ${\rm B}(G, \Omega_k)$ which equals an upper bound for $\mathrm{H}(G, \Omega_k)$. \subsection{The case \texorpdfstring{$k=2$}{k=2}}\label{s: k=2} First assume that $k=2$, and observe that \[ \Big\{ \{1,2\}, \{1,3\}, \dots, \{1,n-1\}\Big\} \] is a minimal base for $S_n$ acting on $\Omega_2$, and we obtain the required lower bound. For the upper bound on $\mathrm{H}(G, \Omega_2)$ we let $\Lambda$ be an independent set. 
We construct a graph, $\Gamma_\Lambda,$ on the vertices $\{1,\dots, n\}$ as follows: there is an edge between $i$ and $j$ if and only if $\{i,j\}\in\Lambda$. \begin{lem}\label{l: ind} Suppose that $H$ is a permutation group on $\Omega$ and consider the natural action of $H$ on $\Omega_2$. If $\Lambda$ is an independent subset of $\Omega_2$ with respect to the action of $H$, then $\Gamma_\Lambda$ contains no cycles. \end{lem} \begin{proof} Suppose that $[i_1,\dots, i_\ell]$ is a cycle in the graph, i.e. $E_j=\{i_j, i_{j+1}\}$ is in $\Lambda$ for $j=1,\dots, \ell-1$, along with $E_\ell=\{i_1, i_\ell\}$. Now observe that if $E_j$ is removed from $\Lambda$ for some $j$, then the pointwise stabilizer of the resulting set, $\Lambda \setminus\{E_j\}$, fixes the two vertices contained in $E_j$ (the remaining edges of the cycle form a path through all of $i_1,\dots, i_\ell$), and hence it stabilizes $E_j$, so it equals the pointwise stabilizer of $\Lambda$. But this implies that $\Lambda$ is not independent, a contradiction. \end{proof} We apply Lemma~\ref{l: ind} to the action of $G=S_n$ on $\Omega_2$ and conclude that the graph $\Gamma_\Lambda$ is a forest. If $\Gamma_\Lambda$ is disconnected, then the result follows immediately (a forest with $c$ components has $n-c$ edges). Assume, then, that $\Gamma_\Lambda$ is connected, i.e. it is a tree on $n$ vertices. In this case there are $n-1$ edges and we calculate directly that the point-wise stabilizer of $\Lambda$ is trivial. But now, observe that if we remove from $\Lambda$ an edge incident to a leaf of the tree, then the point-wise stabilizer remains trivial: the remaining edges form a tree on $n-1\geq 3$ vertices, whose pointwise stabilizer fixes each of its vertices, and hence also the removed leaf. This is a contradiction and the result follows. \subsection{A lower bound for \texorpdfstring{$\mathrm{B}(G, \Omega_k)$}{B(G,Omegak)} when \texorpdfstring{$k>2$}{k>2}}\label{s: lower} Assume for the remainder that $k>2$. 
We prove the lower bound first: first observe that the following set, of size $n-3$, is a minimal base with respect to $S_n$ (note that there are $n-k-1$ sets listed on the first row, and $k-2$ listed altogether on the second and third): {\medmuskip=1mu \thinmuskip=1mu \thickmuskip=1mu \begin{equation}\label{e: 1} \left\{ \begin{array}{c} \Big\{1,2,\dots,k-1, k\Big\}, \Big\{1,2,\dots, k-1, k+1\Big\}, \dots, \Big\{1,2,\dots, k-1, n-2 \Big\},\\ \Big\{1, n-(k-1), n-(k-2), n-(k-3),\dots, n-1\Big\}, \Big\{2, n-(k-1), n-(k-2), n-(k-3),\dots, n-1\Big\},\\ \dots, \Big\{k-2, n-(k-1), n-(k-2), n-(k-3),\dots, n-1\Big\} \end{array} \right\}. \end{equation}} To complete the proof of the lower bound, we must deal with the case $n=2k+2$. For this we observe that the following set, which is of size $n-2=2k$, is a minimal base: \begin{equation}\label{e: 2} \Big\{ \{1,\dots, k+1\} \setminus \{i\} \mid i=1,\dots, k\Big\}\bigcup \Big\{ \{k+2,\dots, 2k+2\} \setminus \{i\} \mid i=k+2,\dots, 2k+1\Big\} \end{equation} \subsection{An upper bound for \texorpdfstring{$\mathrm{H}(G, \Omega_k)$}{H(G,Omegak)} when \texorpdfstring{$k>2$}{k>2}} We must prove that, if $n\geq 2k$, then an independent set in $\Omega_k$ has size at most $n-3$, except when $n=2k+2$, in which case it has size at most $n-2$. It turns out that it is easy to get close to this bound in a much more general setting, as follows. \begin{lem}\label{l: n-2} Let $n\geq2$, let $\Delta$ be a set of subsets of $\Omega=\{1,\dots, n\}$ and suppose that $\Delta$ is independent with respect to the action of $G=S_n$ on the power set of $\Omega$. Then one of the following holds: \begin{enumerate} \item There exists $\delta \in \Delta$ with $|\delta|\in\{1, n-1\}$. \item $|\Delta|\leq n-2$. \end{enumerate} \end{lem} \begin{proof} Let us suppose that (1) does not hold; we will prove that (2) follows. 
Note that if $\delta\in \Delta$ with $|\delta|>\frac{n}{2}$, then we can replace $\delta$ with $\Omega\setminus\delta$ and the resulting set will still be independent. Note too that, since $\Delta$ is independent, all sets in $\Delta$ are non-empty. Thus we can assume that $1<|\delta|\leq \frac{n}{2}$ for all $\delta\in\Delta$. Suppose that there exist $\delta_1, \delta_2\in \Delta$ that \emph{cross}, i.e. such that $\delta_1\cap \delta_2$, $\delta_1\setminus\delta_2$ and $\delta_2\setminus\delta_1$ are all non-empty. Since $|\delta_1|, |\delta_2|\leq \frac{n}{2}$ and $\delta_1\cap\delta_2\neq\emptyset$, we also have $\Omega\setminus(\delta_1\cup\delta_2)\neq \emptyset$. We conclude that $\mathcal{P}_{\{\delta_1, \delta_2\}}$ contains 4 parts. Now the result follows from Lemma~\ref{l: i}. Suppose, instead, that no two elements of $\Delta$ cross, so that any two distinct elements of $\Delta$ are disjoint or nested, i.e. $\Delta$ is a laminar family. An easy induction shows that a laminar family of subsets of a set of size $m$, all members of size at least $2$, has at most $m-1$ members. If $\Delta$ has at least two maximal elements, then applying this bound inside each maximal element gives $|\Delta|\leq n-2$. If $\Delta$ has a single maximal element $\delta_0$, then every member of $\Delta$ is a subset of $\delta_0$ and $|\Delta|\leq |\delta_0|-1\leq \frac{n}{2}-1\leq n-2$. Finally, if $\Delta$ is empty, then the result is true since we assume that $n\geq 2$. \end{proof} To improve the upper bound in Lemma~\ref{l: n-2} (2) from $n-2$ to $n-3$ we will need to do quite a bit of work (and we will need to deal with some exceptions). In what follows we set $\Lambda$ to be an independent set in $\Omega_k$ and, to start with at least, we drop the requirement that $n\geq 2k$. As when $k=2$, it is convenient to think of $\Lambda$ as being the set of hyperedges in a $k$-uniform hypergraph, $\Gamma_\Lambda$, with vertex set $\Omega=\{1,\dots, n\}$. From here on we will write ``edge'' in place of ``hyperedge''. We think of two edges as being \emph{incident} in $\Gamma_\Lambda$ if they intersect non-trivially. If $\Delta$ is a set of edges in this graph (i.e. $\Delta\subseteq \Lambda$), then the \emph{span} of $\Delta$ is the union of all edges in $\Delta$. Write $\Gamma_{C_1},\dots, \Gamma_{C_\ell}$ for the connected components of $\Gamma_\Lambda$; in particular $\ell$ is the number of connected components in $\Gamma_\Lambda$. 
For $\Gamma_{C_i}$, we write $C_i$ for the vertex set and $\Lambda_{C_i}$ for the edge set. In what follows we repeatedly use the fact that if $\lambda_1,\dots, \lambda_j$ are distinct elements of the independent set $\Lambda$, then we must have \[ G\gneq G_{\lambda_1}\gneq G_{\lambda_1, \lambda_2}\gneq \cdots \gneq G_{\lambda_1,\lambda_2,\dots, \lambda_j}. \] (Indeed, if $G_{\lambda_1,\dots, \lambda_i}=G_{\lambda_1,\dots, \lambda_{i+1}}$ for some $i$, then $G_{(\Lambda\setminus\{\lambda_{i+1}\})}$ is contained in $G_{\lambda_1,\dots, \lambda_i}$ and so stabilizes $\lambda_{i+1}$; thus $G_{(\Lambda\setminus\{\lambda_{i+1}\})}=G_{(\Lambda)}$, contradicting independence.) This in turn means that, for all $i=1,\dots, j-1$, the partition $\mathcal{P}_{\lambda_1,\dots, \lambda_{i+1}}$ has more parts than $\mathcal{P}_{\lambda_1,\dots, \lambda_i}$. \begin{lem}\label{l: components}\hfill\, \begin{enumerate} \item If $\Gamma_\Lambda$ has a connected component with exactly one edge, then $|\Lambda|\leq n-3$. \item If $\ell\geq 3$, then $|\Lambda|\leq n-3$. \item Suppose that $\ell=2$, that $|\Lambda_{C_i}|\geq 2$ for $i=1,2$, and that there exist incident edges $E_1, E_2$ in $\Lambda_{C_1}$ such that the span of $\{E_1, E_2\}$ is not equal to $C_1$. Then $|\Lambda| \leq n-3$. \end{enumerate} \end{lem} \begin{proof} For (1), observe that $\mathcal{P}_\Lambda$ has at most $n-k+1$ parts (the $k$ vertices of the unique edge of such a component form a single part of $\mathcal{P}_\Lambda$). Then Lemma~\ref{l: i} yields the result. We may assume, then, that any connected component of $\Lambda$ either contains at least 2 edges or none. To prove (2) we go through the possibilities: \begin{itemize} \item If there are at least 3 components with no edges, then $\mathcal{P}_\Lambda$ has at most $n-2$ parts and Lemma~\ref{l: i} yields the result. \item Suppose there are 2 components with no edges and at least 1 component, $C_1$, containing at least 2 edges. Let $E_1, E_2$ be incident edges in $\Lambda_{C_1}$ and observe that $\mathcal{P}_{\{E_1, E_2\}}$ contains 4 parts while $\mathcal{P}_\Lambda$ contains at most $n-1$ parts; Lemma~\ref{l: i} yields the result. \item Suppose there are at least 3 components in total and at least 2 components, $C_1$ and $C_2$, containing at least 2 edges each. 
Let $E_1, E_2$ be incident edges in $\Lambda_{C_1}$, let $F_1, F_2$ be incident edges in $\Lambda_{C_2}$ and observe that $\mathcal{P}_{\{E_1, E_2, F_1, F_2\}}$ contains 7 parts while $\mathcal{P}_\Lambda$ contains at most $n$ parts; Lemma~\ref{l: i} yields the result. \end{itemize} We have proved (2). For (3) let $E_1, E_2$ be incident edges in $\Lambda_{C_1}$ for which the span of $\{E_1, E_2\}$ is not equal to $C_1$, and let $F_1, F_2$ be incident edges in $\Lambda_{C_2}$. We can see that $\Omega \setminus (E_1\cup E_2 \cup F_1\cup F_2)$ is non-empty; thus $\mathcal{P}_{\{E_1, E_2, F_1, F_2\}}$ contains 7 parts and the result again follows from Lemma~\ref{l: i}. \end{proof} The next result deals with a particular case when $\Lambda$ is connected. \begin{lem}\label{l: b2} If $|\Lambda|\geq 2$ and $\Omega$ is spanned by 2 incident edges of $\Lambda$, then \[ |\Lambda|\leq \begin{cases} n-1, &\textrm{if } n=k+1; \\ n-2, &\textrm{if } n>k+1. \end{cases} \] \end{lem} \begin{proof} Since $\Omega$ is spanned by 2 incident edges, we have $k>|\Omega|/2$. Observe that, for any $\lambda \in \Lambda$, the set $\Omega\setminus \lambda$ is a subset of size $|\Omega|-k$. Since the pointwise stabilizers in $\mathop{\mathrm{Sym}}(\Omega)$ of the subsets $\lambda$ and $\Omega\setminus \lambda$ are equal, we obtain that \[ \overline{\Lambda}= \{\Omega\setminus \lambda \mid \lambda \in \Lambda\} \] is an independent set with respect to the action of $\mathop{\mathrm{Sym}}(\Omega)$ on $\Omega_j$ where $j=|\Omega|-k$. Since $j<|\Omega|/2<k$ it is clear that $\Omega$ is not spanned by 2 edges in $\overline{\Lambda}$. If $j\geq 2$, then Lemma~\ref{l: n-2} implies that $|\overline{\Lambda}|=|\Lambda|\leq n-2$, as required. If $j=1$, then the result is obvious. \end{proof} From here on we impose the condition that $n\geq 2k$. \begin{lem}\label{l: special} Suppose that $n\geq 2k$, that $\ell=2$ and that $|\Lambda_{C_i}|\geq 2$ for $i=1,2$. 
Then
\[
|\Lambda|\leq \begin{cases} n-2, &\textrm{if $n=2k+2$}; \\ n-3, & \textrm{otherwise}. \end{cases}
\]
\end{lem}

\begin{proof}
Item (3) of Lemma~\ref{l: components} yields this result in the case where there exist $i\in\{1,2\}$ and incident edges $E_1, E_2$ in $\Lambda_{C_i}$ such that the span of $\{E_1, E_2\}$ is not equal to $C_i$. Thus we may assume that, for $i=1,2$ and for distinct $E_1, E_2\in \Lambda_{C_i}$, the span of $\{E_1, E_2\}$ is equal to $C_i$.

We claim that for each $i=1,2$, the set $\Lambda_{C_i}$ is independent with respect to the action of $\mathop{\mathrm{Sym}}(C_i)$ on $C_i$. To see this observe that $\Lambda=\Lambda_{C_1}\cup \Lambda_{C_2}$ and that, by definition, $\Lambda_{C_2}$ must be an independent set for $H:=G_{(\Lambda_{C_1})}$. But $H=H_0\times \mathop{\mathrm{Sym}}(C_2)$ where $H_0<\mathop{\mathrm{Sym}}(C_1)$. Now if $\Delta\subseteq \Lambda_{C_2}$, then $H_{(\Delta)} = H_0\times \mathop{\mathrm{Sym}}(C_2)_{(\Delta)}$. In particular, if $\Delta_1, \Delta_2\subseteq \Lambda_{C_2}$, then $H_{(\Delta_1)}=H_{(\Delta_2)}$ if and only if $\mathop{\mathrm{Sym}}(C_2)_{(\Delta_1)}=\mathop{\mathrm{Sym}}(C_2)_{(\Delta_2)}$. This implies immediately that $\Lambda_{C_2}$ is independent with respect to the action of $\mathop{\mathrm{Sym}}(C_2)$ on $C_2$, and the same argument works for $C_1$.

Now we apply Lemma~\ref{l: b2} to these two actions. We conclude that, for each $i=1,2$, either $|C_i|=k+1$ and $|\Lambda_{C_i}|\leq |C_i|-1$, or else $|\Lambda_{C_i}|\leq |C_i|-2$. The result now follows from the fact that $|\Lambda|=|\Lambda_{C_1}|+|\Lambda_{C_2}|$ and $n=|C_1|+|C_2|$.
\end{proof}

Notice that Lemma~\ref{l: special} attends to the strange appearance of ``$n=2k+2$'' in the statement of item (2) of Theorem~\ref{t: main}. Before we prove Theorem~\ref{t: main} we need one more lemma.

\begin{lem}\label{l: final}
Suppose that $n\geq 2k$.
Suppose that either $\Lambda$ is connected, or else it has two connected components, exactly one of which is a single isolated point. Then $|\Lambda|\leq n-3$.
\end{lem}

\begin{proof}
Note that the supposition, along with the fact that $n\geq 2k$, implies that $\Lambda$ contains 2 incident edges and that $\Omega$ is not spanned by 2 edges. Consider $E_1, E_2$, a pair of incident edges in $\Lambda$. Let $\Pi=\{E_1, E_2\}$ and observe that the parts of $\mathcal{P}_\Pi$ are
\begin{equation}\label{rstu}
R:=E_1\cap E_2, \,\, S:=E_1\setminus(E_1\cap E_2), \,\, T:=E_2\setminus(E_1\cap E_2) \,\, \textrm{ and } U:=\Omega\setminus (E_1\cup E_2);
\end{equation}
in particular $|\mathcal{P}_\Pi|=4$. If $i$ and $j$ are distinct elements of $\Omega$ that are unsplit by any element of $\Lambda$, then $\mathcal{P}_{\Lambda}$ has at most $n-1$ parts. Then Lemma~\ref{l: i} implies that $|\Lambda|\leq n-3$ as required. Thus we assume that every pair of distinct elements of $\Omega$ is split by an element of $\Lambda$. These observations imply that we can write
\begin{equation}\label{xx}
\Lambda=\{E_1,E_2\}\cup \Lambda_R\cup \Lambda_S\cup \Lambda_T\cup \Lambda_U,
\end{equation}
where, for $X\in\{R,S,T,U\}$, $\Lambda_X$ is the set of elements in $\Lambda$ that split pairs of distinct elements in $X$. If there exists $E_3\in \Lambda$ such that $\mathcal{P}_{\{E_1, E_2, E_3\}}$ contains $6$ parts, then Lemma~\ref{l: i} implies that $|\Lambda|\leq n-3$ and we are done. Thus we assume that $\mathcal{P}_{\{E_1, E_2, E_3\}}$ contains 5 parts for all choices of $E_3\in\Lambda\setminus\{E_1,E_2\}$. In particular if $E_3\in \Lambda_X$, then $E_3$ does not split any pair of elements of $Y$ for $Y\in\{R,S,T,U\}\setminus X$. This means, first, that if $E_3\cap Y\neq \emptyset$ for some $Y\in\{R,S,T,U\}\setminus X$, then $E_3\supset Y$; it means, second, that the sets $\Lambda_R, \Lambda_S, \Lambda_T$ and $\Lambda_U$ are pairwise disjoint. Set $x:=|R|$, so $|S|=|T|=k-x$.
Observe that, since $n\geq 2k$, we must have $|U|\geq x$. We split into two cases and we will show that our assumptions to this point lead to a contradiction.

\medskip

\textsc{1. Suppose that we can choose $E_1, E_2$ so that $1<x$.} This means, in particular, that both $R$ and $U$ have cardinality at least $2$; hence $\Lambda_R$ and $\Lambda_U$ are both non-empty. Let $E_3\in \Lambda_R$. By counting we must have
\[
E_3=(E_3\cap R)\cup U \,\,\textrm{ or }\,\, (E_3\cap R)\cup S \cup T.
\]
Let $E_4\in \Lambda_U$. By counting we must have
\[
E_4=(E_4\cap U)\cup R \,\,\textrm{ or }\,\, (E_4\cap U)\cup S \,\,\textrm{ or }\,\, (E_4\cap U)\cup T \,\,\textrm{ or }\,\, (E_4\cap U)\cup S \cup T.
\]
We will go through the various combinations and show that, in every case, the set $\{E_1, E_2, E_3, E_4\}$ is not independent, thereby giving our contradiction. In what follows $g\in G_{E_1, E_3, E_4}$. Consider, first, the possibilities for $E_3$.
\begin{enumerate}
\item[(E3A)] Suppose that $E_3=(E_3\cap R)\cup U$. Then $g$ stabilizes $(E_1\cup E_3)^C=T$ and $E_3\setminus (E_1\cap E_3)=U$.
\item[(E3B)] Suppose that $E_3=(E_3\cap R) \cup S \cup T$. Then $g$ stabilizes $E_3\setminus (E_1\cap E_3)=T$ and $(E_1\cup E_3)^C=U$.
\end{enumerate}
Thus, in all cases, $g$ stabilizes both $T$ and $U$. Now consider the possibilities for $E_4$.
\begin{enumerate}
\item[(E4A)] Suppose that $E_4=(E_4\cap U) \cup R$. Then $g$ stabilizes $E_1\cap E_4=R$ and hence also $R\cup T=E_2$. This contradicts independence (the pointwise stabilizer of $\{E_1, E_3, E_4\}$ is equal to the pointwise stabilizer of $\{E_1, E_2, E_3, E_4\}$).
\item[(E4B)] Suppose that $E_4=(E_4\cap U) \cup S$. Then $g$ stabilizes $E_1\cap E_4=S$, hence also $\Omega \setminus (S\cup T \cup U)=R$, hence also $R\cup T=E_2$. We have the same contradiction.
\item[(E4C)] Suppose that $E_4=(E_4\cap U) \cup S\cup T$. Then $g$ stabilizes $E_1\cap E_4=S$, hence also $\Omega \setminus (S\cup T \cup U)=R$, hence also $R\cup T=E_2$.
We have the same contradiction. \item[(E4D)] Suppose that $E_4=(E_4\cap U) \cup T$. For this final case we swap $g$ with an element $h\in G_{E_2, E_3, E_4}$, we swap $S$ with $T$ and we swap $E_1$ with $E_2$. Now, with these changes, the arguments for (E3A) and (E3B) tell us that $h$ stabilizes both $S$ and $U$. Next the argument for (E4B) tells us that $h$ stabilizes $R$, hence also $R\cup S=E_1$. Now we again have a contradiction (the pointwise stabilizer of $\{E_2, E_3, E_4\}$ is equal to the pointwise stabilizer of $\{E_1, E_2, E_3, E_4\}$). \end{enumerate} \medskip \textsc{2. Suppose that $|E_1\cap E_2|\in\{0,1\}$ for all distinct $E_1, E_2\in \Lambda$.} This is the remaining case. We fix an incident pair $E_1$ and $E_2$ and observe that $\Lambda_S$ and $\Lambda_T$ are non-empty. Let $E_3\in \Lambda_S, E_4\in\Lambda_T$; observe that $E_3=(E_3\cap S)\cup U$ and $E_4=(E_4\cap T)\cup U$. But then $|U|\leq |E_3\cap E_4|\leq 1$, hence $|U|=1$. This implies that \[ |E_3|= |E_3\cap S|+|U|=|E_3\cap E_1|+|U|=1+1=2. \] Thus $k=2$, a contradiction. \end{proof} \begin{proof}[Proof of Theorem~\ref{t: main} (2) for $k\geq 3$] The work in \S\ref{s: lower} implies that we need only prove an upper bound for $\mathrm{H}(S_n, \Omega_k)$. Lemma~\ref{l: components} (2) yields the result if $\ell\geq 3$. Lemma~\ref{l: final} yields the result if $\ell=1$. Assume, then, that $\ell=2$. Lemma~\ref{l: special} yields the result if each component contains at least 2 edges. Lemma~\ref{l: components} (1) yields the result if there is a component with 1 edge. Lemma~\ref{l: final} yields the result if exactly one of the components has 0 edges. Finally if both components have 0 edges, then the fact that $n\geq 2k$ implies the result. \end{proof} \section{The alternating group} One naturally wonders to what extent the results given here extend to the action of $A_n$ on $k$-sets. Throughout this section $k$ and $n$ will be positive integers with $k\leq \frac{n}{2}$. 
For irredundant bases we can adjust the proof given in \S\ref{s:i}, making use of the following easy fact: suppose that $\mathcal{P}_i$ and $\mathcal{P}_j$ are partitions corresponding to a set of $k$-subsets in $\{1,\dots, n\}$ as described at the start of \S\ref{s: proof}. Let $H_i$ (resp. $H_j$) be the stabilizer in $A_n$ of all parts of $\mathcal{P}_i$ (resp. $\mathcal{P}_j$). If $H_i=H_j$, then either $\mathcal{P}_i=\mathcal{P}_j$ or else the two partitions are of type $1^n$ or $1^{n-2}2^1$. Let us show how this observation yields the required result.

\begin{prop}\label{p: I An}
\[
\mathrm{I}(A_n, \Omega_k)=\begin{cases} n-2, & \textrm{if $\gcd(n,k)=1$}; \\ \max(2,n-3), & \textrm{otherwise}. \end{cases}
\]
\end{prop}

\begin{proof}
Let $G=S_n$ and suppose that
\[
G \gneq G_{\omega_1} \gneq G_{\omega_1, \omega_2}\gneq G_{\omega_1, \omega_2, \omega_3} \gneq \dots \gneq G_{\omega_1, \omega_2, \dots, \omega_e}
\]
is a stabilizer chain corresponding to an irredundant base $[\omega_1,\dots, \omega_e]$. The observation implies that we have
\[
A_n \gneq G_{\omega_1}\cap A_n \gneq G_{\omega_1, \omega_2}\cap A_n \gneq G_{\omega_1, \omega_2, \omega_3} \cap A_n \gneq \dots \gneq G_{\omega_1, \omega_2, \dots, \omega_{e-1}}\cap A_n \geq G_{\omega_1, \omega_2, \dots, \omega_{e}}\cap A_n
\]
and hence either $[\omega_1,\dots, \omega_e]$ or $[\omega_1,\dots, \omega_{e-1}]$ is an irredundant base for $A_n$. This implies that $\mathrm{I}(A_n, \Omega_k)\geq \mathrm{I}(S_n, \Omega_k)-1$ and Theorem~\ref{t: main} implies that
\[
\mathrm{I}(A_n, \Omega_k)\geq\begin{cases} n-2, & \textrm{if $\gcd(n,k)=1$}; \\ n-3, & \textrm{otherwise}. \end{cases}
\]
Now we will give an upper bound for $\mathrm{I}(A_n,\Omega_k)$. Let $[\omega_1,\dots, \omega_e]$ be an irredundant base for the action of $A_n$ on $\Omega_k$. Then the observation above implies that $\mathcal{P}_{\{\omega_1,\dots, \omega_{e-1}\}}$ contains at most $n-2$ parts.
Applying Lemma~\ref{l: i} with $\Lambda=\{\omega_1,\dots, \omega_{e-1}\}$ and $\Delta=\emptyset$ implies that $e-1=|\Lambda|\leq n-3$ and so $e\leq n-2$. This yields the result when $\gcd(n,k)=1$. Suppose now that $\gcd(n,k)=g>1$ and that $[\omega_1,\dots, \omega_{n-2}]$ is an irredundant base for the action of $A_n$ on $\Omega_k$; we must show that then $n=4$. The observation above implies that $\mathcal{P}_{\{\omega_1,\dots, \omega_{i+1}\}}$ has exactly one more part than $\mathcal{P}_{\{\omega_1,\dots, \omega_{i}\}}$ for $i=1,\dots, n-4$ and $\mathcal{P}_{\{\omega_1,\dots, \omega_{n-2}\}}$ has exactly two more parts than $\mathcal{P}_{\{\omega_1,\dots, \omega_{n-3}\}}$. Lemma~\ref{l: j} implies that all parts of $\mathcal{P}_{\{\omega_1,\dots, \omega_{n-3}\}}$ are divisible by $g$. But the type of $\mathcal{P}_{\{\omega_1,\dots, \omega_{n-3}\}}$ is either $2^21^{n-4}$ or $3^11^{n-3}$ and we conclude that $(n,k)=(4,2)$ as required. The proof is completed by observing that $\mathrm{I}(A_4,\Omega_2)=2$. \end{proof} For the other statistics in question it is easy to pin the value down within an error of 1; the next result does this. \begin{prop}\label{p: d} Suppose that $k$ and $n$ are positive integers with $k \leq \frac{n}{2}$. Then \begin{equation}\label{e: 4}\mathrm{H}(S_n, \Omega_k)-1\leq \mathrm{B}(A_n, \Omega_k)\leq \mathrm{H}(A_n, \Omega_k)\leq \mathrm{H}(S_n, \Omega_k).\end{equation} \end{prop} \begin{proof} The first inequality is obtained by observing that if we excise the final set from \eqref{e: 1} and \eqref{e: 2}, then we obtain a minimal base for $A_n$. The second inequality is elementary; it was given in \eqref{basic}. The third inequality, likewise, is an easy consequence of the definition of height. \end{proof} All that remains, then, is to establish which of the two possible values holds for each value of $k$ and $n$. 
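The value $\mathrm{I}(A_{4},\Omega_{2})=2$ used at the end of the proof of Proposition~\ref{p: I An} is easy to confirm by exhaustive search. Below is a minimal sketch (the code and helper names are ours, not part of the argument); since the action of $A_4$ on $2$-sets is faithful, every maximal chain of proper stabilizer drops terminates at the identity and is therefore an irredundant base, so a depth-first search over such chains computes $\mathrm{I}$.

```python
from itertools import combinations, permutations

n, k = 4, 2
points = tuple(range(1, n + 1))

def is_even(p):
    # parity via inversion count
    return sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n)) % 2 == 0

A4 = [p for p in permutations(points) if is_even(p)]      # the 12 even permutations
ksets = [frozenset(c) for c in combinations(points, k)]   # the six 2-subsets

def stab(group, s):
    # setwise stabilizer of the k-set s inside `group`
    return [g for g in group if frozenset(g[x - 1] for x in s) == s]

def max_irredundant(group):
    # DFS over chains G > G_{s1} > G_{s1,s2} > ... with proper drops;
    # faithfulness guarantees each maximal chain ends at the identity
    best = 0
    stack = [(group, 0)]
    while stack:
        G, depth = stack.pop()
        best = max(best, depth)
        for s in ksets:
            H = stab(G, s)
            if len(H) < len(G):   # proper drop => valid irredundant step
                stack.append((H, depth + 1))
    return best

print(max_irredundant(A4))   # 2
```

Indeed, every setwise stabilizer of a $2$-set in $A_4$ has order $2$, so no chain of proper drops can have length greater than $2$.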
Let us consider the situation for small values of $k$: \begin{enumerate} \item[($k=1$)] It is immediate that $\mathrm{B}(A_n, \Omega_1)= \mathrm{H}(A_n, \Omega_1)=n-2$, the smaller of the two possible values. \item[($k=2$)] We claim that in this case \[\mathrm{B}(A_n, \Omega_2)= \mathrm{H}(A_n, \Omega_2)=\begin{cases} n-3, &\textrm{if $n\neq 4$;} \\ 2,& \textrm{if $n=4$.} \end{cases}\] Thus, provided $n\neq4$, we again obtain the smaller of the two possible values. To justify our claim we note first that the value for $n=4$ is easy to obtain. When $n> 4$ it is sufficient to prove that $\mathrm{H}(A_n, \Omega_2)\leq n-3$. To see this we let $\Lambda$ be an independent set and we form the graph $\Gamma_\Lambda$ as in \S\ref{s: k=2}. Lemma~\ref{l: ind} implies that, since $\Lambda$ is independent, the graph $\Gamma_\Lambda$ is a forest. If this forest has 3 or more connected components, then the result follows immediately, so we suppose that there are at most 2 components. If one of these components consists of a single edge, then deleting this edge results in a set of $2$-sets whose pointwise stabilizer in $A_n$ is trivial; this is a contradiction of the fact that $\Lambda$ is independent. If all components contain $0$ edges or at least $2$ edges, then it is easy to check that either $n=4$ or else deleting a leaf edge results in a set of $2$-sets whose pointwise stabilizer in $A_n$ is trivial. Again this is a contradiction and we are done. \item[($k=3$)] We claim that in this case $\mathrm{B}(A_n, \Omega_3)= \mathrm{H}(A_n, \Omega_3)=n-3$ which is the larger of the two possible values, except when $n=8$. To justify our claim we note first that the value for $n=8$ is easy to obtain. When $n\neq 8$ it is sufficient to prove that $\mathrm{B}(A_n, \Omega_3)\geq n-3$. This follows simply by observing that the following set is a minimum base of size $n-3$: \[ \Big\{\, \{ 1,2,3\}, \, \{1,2,4\}, \, \dots, \, \{1,2, n-1\} \, \Big\}. 
\]
\end{enumerate}

We have not investigated the case $k\geq 4$. Finally, referring to \S\ref{s: context}, we remark that in \cite{cherlin2}, Cherlin calculated $\mathrm{RC}(A_n, \Omega_k)$ precisely (correcting an earlier calculation in \cite{cherlin1}). The comments above imply that for all $k$ and $n$ with $k\leq \frac{n}{2}$ we have
\[
\mathrm{H}(A_n, \Omega_k)\leq \mathrm{RC}(A_n, \Omega_k)\leq \mathrm{H}(A_n, \Omega_k)+1.
\]
Thus, unlike $S_n$, the relational complexity of the action of $A_n$ on $k$-sets does indeed track height.
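The $k=2$ case above can also be spot-checked directly for $n=5$: the pair $\{\{1,2\},\{2,3\}\}$ has pointwise stabilizer $\{1,(4\,5)\}$ in $S_5$, and the odd permutation $(4\,5)$ is lost on passing to $A_5$, so this pair is a base of size $n-3=2$; it is minimal because no single $2$-set is a base. A minimal computational sketch (helper names ours):

```python
from itertools import combinations, permutations

n = 5

def is_even(p):
    return sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n)) % 2 == 0

A5 = [p for p in permutations(range(1, n + 1)) if is_even(p)]   # 60 elements

def pointwise_stab(group, sets):
    # permutations fixing every listed 2-set setwise
    return [g for g in group
            if all(frozenset(g[x - 1] for x in s) == s for s in sets)]

print(len(pointwise_stab(A5, [{1, 2}, {2, 3}])))    # 1: a base of size n - 3
print(all(len(pointwise_stab(A5, [set(c)])) > 1     # no single 2-set is a base
          for c in combinations(range(1, n + 1), 2)))
```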
https://arxiv.org/abs/2101.08644
Statistics for $S_n$ acting on $k$-sets
We study the natural action of $S_n$ on the set of $k$-subsets of the set $\{1,\dots, n\}$ when $1\leq k \leq \frac{n}{2}$. For this action we calculate the maximum size of a minimal base, the height and the maximum length of an irredundant base. Here a ``base'' is a set with trivial pointwise stabilizer, ``height'' is the maximum size of a subset with the property that its pointwise stabilizer is not equal to the pointwise stabilizer of any proper subset, and an ``irredundant base'' can be thought of as a chain of (pointwise) set-stabilizers for which all containments are proper.
https://arxiv.org/abs/0706.0346
Complex Ratios of Cubic Polynomials
Let $p(w)=(w-w_{1})(w-w_{2})(w-w_{3})$, with $\func{Re}w_{1}<\func{Re}w_{2}<\func{Re}w_{3}$, and let $z_{1}$ and $z_{2}$ denote the critical points of $p$. Assume that if the critical points of $p$ are not identical, then they cannot have equal real parts. Define the ratios $\sigma_{1}=\dfrac{z_{1}-w_{1}}{w_{2}-w_{1}}$ and $\sigma _{2}=\dfrac{z_{2}-w_{2}}{w_{3}-w_{2}}$. $(\sigma_{1},\sigma_{2})$ is called the \textit{ratio vector} of $p$. This extends the definition of ratio vectors given in earlier papers for polynomials of degree $n$ with all real roots. We then derive bounds on the real part, imaginary part, and modulus of the ratios and also some relations between the ratios. In particular, we prove that $\func{Re}\sigma_{1}\leq \func{Re}\sigma_{2}$. We also show that the ratios are real if and only if the roots of $p$ are collinear.
\section{1. Introduction}

\qquad Let $p(x)$ be a polynomial of degree $n\geq 2$ with $n$ distinct real roots $r_{1}<r_{2}<\cdots <r_{n}$. Such a polynomial is called hyperbolic. Let $x_{1}<x_{2}<\cdots <x_{n-1}$ be the critical points of $p$, and define the ratios $\sigma _{k}=\dfrac{x_{k}-r_{k}}{r_{k+1}-r_{k}},k=1,2,...,n-1$. $(\sigma _{1},...,\sigma _{n-1})$ is called the \textit{ratio vector} of $p$, and $\sigma _{k}$ is called the $k$th ratio. Ratio vectors were first discussed in [5] and in [1], where the inequality $\dfrac{1}{n-k+1}<\sigma _{k}<\dfrac{k}{k+1},k=1,2,...,n-1$, was derived. In a similar fashion, one can define ratios for polynomial-like functions of the form $p(x)=(x-r_{1})^{m_{1}}\cdots (x-r_{N})^{m_{N}}$, where $m_{1},...,m_{N}$ are given positive real numbers and $r_{1}<r_{2}<\cdots <r_{N}$ (see [4]). In this paper we want to discuss the extension of the notion of ratios to polynomials with \textit{complex} roots. Thus we let $p(z)$ be a polynomial of degree $n\geq 2$ with $n$ distinct complex roots $w_{1},...,w_{n}$ and critical points $z_{1},...,z_{n-1}$. Numerous papers have investigated the relation between the roots and critical points of a polynomial. The focus of this paper is to investigate that relation in the form of the complex ratios $\sigma _{k}=\dfrac{z_{k}-w_{k}}{w_{k+1}-w_{k}},k=1,2,...,n-1$. The main problem is in defining the ratios when there is no natural ordering of roots and critical points as there is with all real roots. We have to order the $\left\{ w_{k}\right\} $ somehow and then determine which $\left\{ z_{k}\right\} $ are associated with $w_{k}$ and $w_{k+1}$. We use the real parts of the $\left\{ w_{k}\right\} $ and the $\left\{ z_{k}\right\} $ to do this. For the rest of the paper we concentrate solely on the case $n=3$, which is already fairly nontrivial. We do not define the ratios in the case when two roots or critical points have equal real parts (unless the critical points are identical).
One could certainly extend the definition to those cases, but the ratios will not be continuous functions of the roots. Our definition does extend the definition of the ratios when $p$ is hyperbolic, and the ratios are continuous functions of the roots when the roots are all real. For cubic hyperbolic polynomials, the inequality $\dfrac{1}{n-k+1}<\sigma _{k}<\dfrac{k}{k+1}$ implies that $\dfrac{1}{3}<\sigma _{1}<\dfrac{1}{2}$ and $\dfrac{1}{2}<\sigma _{2}<\dfrac{2}{3}$. For complex ratios, we derive separate and sharp upper and lower bounds on the real and imaginary parts, and modulus, of each ratio (see Theorems 1 and 2). For cubic hyperbolic polynomials, it is immediate that $\sigma _{1}<\sigma _{2}$. In the complex case we prove that $\func{Re}\sigma _{1}\leq \func{Re}\sigma _{2}$ (Theorem 3). Indeed, one can have $\sigma _{1}=\sigma _{2}$ (see Theorem 4). Finally, we show that the ratios are real if and only if the roots of $p$ are collinear (Theorem 5).

\section{2. Main Results}

Let
\[
p(w)=(w-w_{1})(w-w_{2})(w-w_{3}),
\]
where we assume that $\func{Re}w_{1}<\func{Re}w_{2}<\func{Re}w_{3}$. Now the critical points of $p$ are $\dfrac{1}{3}\left( w_{1}+w_{2}+w_{3}\pm \sqrt{w_{1}^{2}+w_{2}^{2}+w_{3}^{2}-w_{1}w_{3}-w_{1}w_{2}-w_{2}w_{3}}\right) $, where $\sqrt{z}$ is the principal branch of the square root function, analytic everywhere except on the \textit{nonpositive} real axis, which we denote by $\Gamma $. Note that $\func{Re}\sqrt{z}\geq 0$ and $\sqrt{z^{2}}=z$ if $\func{Re}z\geq 0$. We also assume that if the critical points are not identical, then they cannot have equal real parts. In other words, we assume that $\func{Re}\sqrt{w_{1}^{2}+w_{2}^{2}+w_{3}^{2}-w_{1}w_{3}-w_{1}w_{2}-w_{2}w_{3}}\neq 0$ unless $w_{1}^{2}+w_{2}^{2}+w_{3}^{2}-w_{1}w_{3}-w_{1}w_{2}-w_{2}w_{3}=0$. Denote the critical points by $z_{1}$ and $z_{2}$, where $z_{1}=z_{2}$, or $\func{Re}z_{1}<\func{Re}z_{2}$ if $z_{1}\neq z_{2}$.
We define the ratios
\begin{equation}
\sigma _{1}=\dfrac{z_{1}-w_{1}}{w_{2}-w_{1}},\sigma _{2}=\dfrac{z_{2}-w_{2}}{w_{3}-w_{2}}  \tag{(1)}
\end{equation}
$(\sigma _{1},\sigma _{2})$ is called the \textit{ratio vector} of $p$. One can give a geometric interpretation for the ratios as follows. First, if $p\in \pi _{3}$ has \textit{noncollinear} zeros, let $T$ be the triangle whose vertices are $w_{1},w_{2},w_{3}$. Let $E$ be the midpoint ellipse, that is, the ellipse tangent to $T$ at the midpoints of its sides. Then it is well known that the zeros of $p^{\prime }$ are the foci, $z_{1}$ and $z_{2}$, of $E$. Let $\theta _{1}$ denote the angle between $\overrightarrow{w_{1}z_{1}}$ and $\overrightarrow{w_{1}w_{2}}$, and let $\theta _{2}$ denote the angle between $\overrightarrow{w_{2}z_{2}}$ and $\overrightarrow{w_{2}w_{3}}$. Then $\theta _{1}=\arg \sigma _{1}$ and $\theta _{2}=\arg \sigma _{2}$, and thus each ratio represents an angle between a line segment of the circumscribed triangle of $E$ and a line segment connecting a vertex to one of the foci of $E$. Using our definition, neither $\sigma _{1}$ nor $\sigma _{2}$ will be a continuous function of $w_{1},w_{2},$ and $w_{3}$, though they are continuous on an open subset of $C^{3}-\left\{ (w_{1},w_{2},w_{3}):w_{i}=w_{j}\ \text{for some }i\neq j\right\} $ and in particular at any point $(w_{1},w_{2},w_{3})$ where all of the $\left\{ w_{k}\right\} $ are real. Clearly, if we translate the roots of $p$, the ratios $\sigma _{1}$ and $\sigma _{2}$ do not change. Thus we may assume that
\begin{equation}
w_{1}+w_{3}=0  \tag{(2)}
\end{equation}
which implies that $\func{Re}w_{1}<0<\func{Re}w_{3}$. Note that $\func{Re}\sqrt{w_{3}}>0$. The critical points of $p$ are then $\dfrac{1}{3}\left( w_{2}\pm \sqrt{3w_{3}^{2}+w_{2}^{2}}\right) $.
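For a concrete instance of the definition, the sketch below (ours; it assumes only the Python standard library) computes the critical points from the displayed formula using \texttt{cmath.sqrt}, whose principal branch has its cut on the negative real axis, matching the choice of $\Gamma$, and then forms $\sigma_1$ and $\sigma_2$. For the hyperbolic cubic $z(z^{2}-1)$, with roots $-1,0,1$, it recovers $\sigma_1=1-\frac{1}{3}\sqrt{3}$ and $\sigma_2=\frac{1}{3}\sqrt{3}$, inside the real intervals $(\frac{1}{3},\frac{1}{2})$ and $(\frac{1}{2},\frac{2}{3})$.

```python
import cmath

def cubic_ratios(w1, w2, w3):
    """Ratios (sigma_1, sigma_2) for p(w) = (w-w1)(w-w2)(w-w3), assuming
    Re w1 < Re w2 < Re w3.  cmath.sqrt is the principal branch (Re >= 0),
    so Re z1 <= Re z2 holds automatically."""
    s = w1 + w2 + w3
    d = cmath.sqrt(w1*w1 + w2*w2 + w3*w3 - w1*w3 - w1*w2 - w2*w3)
    z1, z2 = (s - d) / 3, (s + d) / 3    # the critical points of p
    return (z1 - w1) / (w2 - w1), (z2 - w2) / (w3 - w2)

s1, s2 = cubic_ratios(-1, 0, 1)          # the hyperbolic cubic z(z^2 - 1)
print(abs(s1 - (1 - 3 ** 0.5 / 3)) < 1e-12)   # True
print(abs(s2 - 3 ** 0.5 / 3) < 1e-12)         # True
```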
The assumption that if the critical points are not identical, then they cannot have equal real parts now takes the form
\[
3w_{3}^{2}+w_{2}^{2}\neq 0\Rightarrow \func{Re}\sqrt{3w_{3}^{2}+w_{2}^{2}}\neq 0
\]
If $3w_{3}^{2}+w_{2}^{2}\neq 0$, then by our choice of the branch of $\sqrt{z},\func{Re}\sqrt{3w_{3}^{2}+w_{2}^{2}}>0$, which implies that $\func{Re}\left( w_{2}-\sqrt{3w_{3}^{2}+w_{2}^{2}}\right) <\func{Re}\left( w_{2}+\sqrt{3w_{3}^{2}+w_{2}^{2}}\right) $. Thus we have
\[
z_{1}=\dfrac{1}{3}\left( w_{2}-\sqrt{3w_{3}^{2}+w_{2}^{2}}\right) \text{, }z_{2}=\dfrac{1}{3}\left( w_{2}+\sqrt{3w_{3}^{2}+w_{2}^{2}}\right)
\]
Also, $\func{Re}\sqrt{3w_{3}^{2}+w_{2}^{2}}>0\iff 3w_{3}^{2}+w_{2}^{2}\notin \Gamma $. That leads to the following.

\textbf{Definition: }We say that $(w_{2},w_{3})$ is an admissible pair if $w_{2}$ and $w_{3}$ satisfy $3w_{3}^{2}+w_{2}^{2}\notin \Gamma $, $w_{2}+w_{3}\neq 0$, $\func{Re}w_{2}<\func{Re}w_{3},$ and $0<\func{Re}w_{3}$. A region in $C^{2}$ consisting of only admissible pairs is also called admissible.

Note that the ratios are not defined, say, when $w_{1}=-1,w_{2}=ti,w_{3}=1$, $\left\vert t\right\vert >\sqrt{3}$, since in that case $\func{Re}\sqrt{3w_{3}^{2}+w_{2}^{2}}=0$, which implies that $\func{Re}z_{1}=\func{Re}z_{2}$, but $z_{1}\neq z_{2}$. Let
\begin{equation}
w=\dfrac{w_{2}}{w_{3}}.  \tag{(3)}
\end{equation}
We shall express $\sigma _{1}$ and $\sigma _{2}$ as analytic functions of $w$. We then derive bounds on the real part, imaginary part, and modulus of the ratios and also some relations between the ratios. By (1) and (2), $\sigma _{1}=\dfrac{z_{1}+w_{3}}{w_{2}+w_{3}}=\dfrac{1}{3}\dfrac{3z_{1}+3w_{3}}{w_{2}+w_{3}}=\dfrac{1}{3}\dfrac{w_{2}+3w_{3}-\sqrt{3w_{3}^{2}+w_{2}^{2}}}{w_{2}+w_{3}}=\dfrac{1}{3}\dfrac{w_{2}+3w_{3}-\sqrt{w_{3}^{2}\left( 3+\dfrac{w_{2}^{2}}{w_{3}^{2}}\right) }}{w_{2}+w_{3}}$.
In general, $\sqrt{w_{3}^{2}\left( 3+\dfrac{w_{2}^{2}}{w_{3}^{2}}\right) }=\pm \sqrt{w_{3}^{2}}\sqrt{3+\dfrac{w_{2}^{2}}{w_{3}^{2}}}=w_{3}\sqrt{3+\dfrac{w_{2}^{2}}{w_{3}^{2}}}$ or $-w_{3}\sqrt{3+\dfrac{w_{2}^{2}}{w_{3}^{2}}}$, and thus $\sigma _{1}=f_{1}(w_{2},w_{3})$ or $\sigma _{1}=f_{2}(w_{2},w_{3})$, where $f_{1}(w_{2},w_{3})=\dfrac{1}{3}\dfrac{\dfrac{w_{2}}{w_{3}}+3-\sqrt{3+\dfrac{w_{2}^{2}}{w_{3}^{2}}}}{\dfrac{w_{2}}{w_{3}}+1}$ and $f_{2}(w_{2},w_{3})=\dfrac{1}{3}\dfrac{\dfrac{w_{2}}{w_{3}}+3+\sqrt{3+\dfrac{w_{2}^{2}}{w_{3}^{2}}}}{\dfrac{w_{2}}{w_{3}}+1}$. Now $\dfrac{1}{3}\dfrac{w_{2}+3w_{3}-\sqrt{3w_{3}^{2}+w_{2}^{2}}}{w_{2}+w_{3}}$ must be an analytic function of $w_{2}$ and $w_{3}$ in any admissible region. $f_{1}$ and $f_{2}$ are also analytic functions of $w_{2}$ and $w_{3}$ in any admissible region with the additional assumption that
\begin{equation}
3+\dfrac{w_{2}^{2}}{w_{3}^{2}}\notin \Gamma .  \tag{(4)}
\end{equation}
Since $3+\dfrac{w_{2}^{2}}{w_{3}^{2}}\neq 0,$ it follows that $\dfrac{1}{3}\dfrac{w_{2}+3w_{3}-\sqrt{3w_{3}^{2}+w_{2}^{2}}}{w_{2}+w_{3}}$ must equal $f_{1}(w_{2},w_{3})$ or $f_{2}(w_{2},w_{3})$. Now $f_{1}(0,1)=\allowbreak 1-\dfrac{1}{3}\sqrt{3}$, while $f_{2}(0,1)=\allowbreak 1+\dfrac{1}{3}\sqrt{3}$. But $w_{2}=0$ and $w_{3}=1$ yields the polynomial $p(z)=z(z^{2}-1)$, and it is easy to check that $\sigma _{1}=1-\dfrac{1}{3}\sqrt{3}$. It then follows that $\dfrac{1}{3}\dfrac{w_{2}+3w_{3}-\sqrt{3w_{3}^{2}+w_{2}^{2}}}{w_{2}+w_{3}}=\dfrac{1}{3}\dfrac{\dfrac{w_{2}}{w_{3}}+3-\sqrt{3+\dfrac{w_{2}^{2}}{w_{3}^{2}}}}{\dfrac{w_{2}}{w_{3}}+1}$. Using (3) we have $\sigma _{1}=\dfrac{1}{3}\dfrac{w+3-\sqrt{3+w^{2}}}{w+1}$. Now let
\[
E=\left\{ w:\func{Re}w=0,\left\vert \func{Im}w\right\vert \geq \sqrt{3}\right\}
\]
and let
\[
D_{1}=C-E-\left\{ w:w=-1\right\} ,D_{2}=C-E-\left\{ w:w=1\right\} .
\]
Note that $w\in C-E\iff (w_{2},w_{3})$ satisfies (4).
Then
\[
\sigma _{1}=\dfrac{1}{3}\dfrac{w+3-\sqrt{3+w^{2}}}{w+1},w\in D_{1}.
\]
In a similar fashion one can show that
\[
\sigma _{2}=\dfrac{1}{3}\dfrac{-2w+\sqrt{3+w^{2}}}{1-w},w\in D_{2}.
\]
This expression for $\sigma _{2}$ also follows from the equation
\begin{equation}
\left( 1-\sigma _{1}\right) \sigma _{2}=\dfrac{1}{3}  \tag{(5)}
\end{equation}
(5) is easy to prove and the proof is exactly the same as for the case when $p$ has three distinct real roots (see [1] or [3]). It is now convenient to define the following analytic extensions of $\sigma _{1}$ to $w=-1$ and of $\sigma _{2}$ to $w=1$, respectively.
\[
f(w)=\left\{
\begin{array}{ll}
\dfrac{1}{3}\dfrac{w+3-\sqrt{3+w^{2}}}{w+1}=\sigma _{1} & \text{if }w\in D_{1} \\
\dfrac{1}{2} & \text{if }w=-1
\end{array}
\right.
\]
and
\[
g(w)=\left\{
\begin{array}{ll}
\dfrac{1}{3}\dfrac{-2w+\sqrt{3+w^{2}}}{1-w}=\sigma _{2} & \text{if }w\in D_{2} \\
\dfrac{1}{2} & \text{if }w=1
\end{array}
\right.
\]
Since $\lim\limits_{w\rightarrow -1}\dfrac{1}{3}\dfrac{w+3-\sqrt{3+w^{2}}}{w+1}=\dfrac{1}{2}$ and $\lim\limits_{w\rightarrow 1}\dfrac{1}{3}\dfrac{-2w+\sqrt{3+w^{2}}}{1-w}=\dfrac{1}{2}$, $f$ and $g$ are each analytic in the region
\[
D=C-E.
\]
We can now replace (5) by
\begin{equation}
(1-f(w))g(w)=\dfrac{1}{3},w\in D  \tag{(6)}
\end{equation}
Note that $f$ does \textit{not} extend to be continuous on $\partial \left( D\right) $ because of the discontinuity of $\sqrt{3+w^{2}}$ when $3+w^{2}\in \Gamma $. Also, for $w\in \partial \left( D\right) ,f(w)$ does not yield $\sigma _{1}$ and $g(w)$ does not yield $\sigma _{2}$. Now
\[
w\in \partial \left( D\right) \iff w=ti,\left\vert t\right\vert \geq \sqrt{3}.
\]
Then $w_{1}=-w_{3},w_{2}=tiw_{3}$, and $p(z)=(z^{2}-w_{3}^{2})(z-itw_{3})$.
If $\func{Im}w_{3}\neq 0$, then the ratios are defined, and a simple computation shows that
\begin{equation}
\sigma _{1}=\dfrac{1}{3}\dfrac{it+i\sqrt{t^{2}-3}+3}{it+1},w\in \partial \left( D\right)  \tag{(7)}
\end{equation}
One can also compute $\sigma _{2}$ using (5), but we shall not require that here.

\textbf{Notation: }We write $\sigma _{1}=\sigma _{1}(w)$ or $\sigma _{2}=\sigma _{2}(w)$ if $(\sigma _{1},\sigma _{2})$ is the ratio vector of $p(w)=(w-w_{1})(w-w_{2})(w-w_{3})$ with $w_{1}+w_{3}=0,\func{Re}w_{1}<0<\func{Re}w_{3}$, and $w=\dfrac{w_{2}}{w_{3}}$.

We should note here that not every $w\in D$ satisfies $w=\dfrac{w_{2}}{w_{3}}$ for some admissible pair $(w_{2},w_{3})$. For example, $w=2$ cannot occur since $w_{2}=2w_{3}\Rightarrow \func{Re}w_{2}>\func{Re}w_{3}$. Of course the bounds we derive for $w\in D\cup \partial \left( D\right) $ then apply to the subset of values of $w$ which can arise from admissible pairs. In addition, there are admissible pairs $(w_{2},w_{3})$ such that $w\in \partial \left( D\right) $, such as $w_{2}=2i(1+i),w_{3}=1+i$. This is not a problem since the bounds we derive below are for $w\in D\cup \partial \left( D\right) $. Finally, the ratios themselves are not defined when $w=1$ or $w=-1$ (else the $w_{k}$ are not distinct).

The real and imaginary parts of $f$ and of $g$ are each harmonic functions, and we want to apply the Maximum--Minimum Principle for harmonic functions to find bounds on the real and imaginary parts of $\sigma _{1}$ and $\sigma _{2}$. Since $D$ is unbounded, we shall require the following special case of the Maximum--Minimum Principle for possibly unbounded domains (see [2], page 8, Corollary 1.10).

\textbf{Proposition 1:} Let $u$ be a real--valued harmonic function in a domain $D$ in $R^{2}$ and suppose that
\[
\limsup\limits_{k\rightarrow \infty }u(a_{k})\leq M
\]
for every sequence $\left\{ a_{k}\right\} $ in $D$ converging to a point in $\partial \left( D\right) $ or to $\infty $. Then $u\leq M$ on $D$.
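The closed forms for $\sigma_1$ and $\sigma_2$ and the identity (5) can be spot-checked numerically against the direct computation from the roots. In the sketch below (ours; the particular pair $(w_2,w_3)$ is an arbitrarily chosen admissible pair, not taken from the text), the direct computation uses the normalization (2):

```python
import cmath

def ratios_from_w(w):
    # the closed forms f(w) = sigma_1 and g(w) = sigma_2 derived above
    r = cmath.sqrt(3 + w * w)
    return (w + 3 - r) / (3 * (w + 1)), (-2 * w + r) / (3 * (1 - w))

def ratios_from_roots(w2, w3):
    # direct computation with w1 = -w3, i.e. normalization (2)
    w1 = -w3
    d = cmath.sqrt(3 * w3 * w3 + w2 * w2)
    z1, z2 = (w2 - d) / 3, (w2 + d) / 3
    return (z1 - w1) / (w2 - w1), (z2 - w2) / (w3 - w2)

w2, w3 = 0.1 + 0.4j, 1.0 - 0.2j          # an admissible pair (illustrative)
f_val, g_val = ratios_from_w(w2 / w3)
s1, s2 = ratios_from_roots(w2, w3)
print(abs(f_val - s1) < 1e-12 and abs(g_val - s2) < 1e-12)   # closed forms agree
print(abs((1 - s1) * s2 - 1 / 3) < 1e-12)                    # identity (5)
```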
\textbf{Remark: }As noted in [2], Proposition 1 remains valid if ``$\limsup $'' is replaced by ``$\liminf $'' and the inequalities are reversed.

We also need the following Local Maximum--Minimum Principle for harmonic functions for possibly unbounded domains (see [2], page 23) to prove the sharpness of our bounds on the real and imaginary parts of $\sigma _{1}$ and $\sigma _{2}$. One can prove these bounds directly, but that involves a two variable optimization problem. Using the Maximum--Minimum Principle reduces it to a one variable optimization problem.

\textbf{Proposition 2:} Let $u$ be a real--valued harmonic function in a domain $D$ in $R^{2}$ and suppose that $u$ has a local maximum (or minimum) in $D$. Then $u$ is constant.

First we require the following lemmas.

\textbf{Lemma 1}: (A) The equation $4t\sqrt{t^{2}-3}-5t^{2}+3=0$ has no real solutions.

(B) The equation $4t\sqrt{t^{2}-3}+5t^{2}-3=0$ has no real solutions.

\textbf{Proof: }$4t\sqrt{t^{2}-3}=5t^{2}-3\Rightarrow 16t^{2}\left( t^{2}-3\right) -(5t^{2}-3)^{2}=0\Rightarrow -9\left( t^{2}+1\right) ^{2}=0$, which has no real solutions. That proves (A), and (B) follows in a similar fashion.

\textbf{Lemma 2}: (A) The only real solution of the equation $t^{3}-7t-2\left( t^{2}-1\right) \sqrt{t^{2}-3}=0$ is $t=-2$.

(B) The only real solution of the equation $t^{3}-7t+2\left( t^{2}-1\right) \sqrt{t^{2}-3}=0$ is $t=2$.

\textbf{Proof: }$t^{3}-7t=2\left( t^{2}-1\right) \sqrt{t^{2}-3}\Rightarrow (t^{3}-7t)^{2}-4\left( t^{2}-1\right) ^{2}(t^{2}-3)=0\Rightarrow -3\left( t-2\right) \left( t+2\right) \left( t^{2}+1\right) ^{2}=0$. $t=-2$ is a solution of the given equation, but not $t=2$. That proves (A), and (B) follows in a similar fashion.

\textbf{Theorem 1:} Let $p(w)=(w-w_{1})(w-w_{2})(w-w_{3})$, with $\func{Re}w_{1}<\func{Re}w_{2}<\func{Re}w_{3}$. Let $z_{1}$ and $z_{2}$ be the critical points of $p$, where $z_{1}=z_{2}$ or $\func{Re}z_{1}<\func{Re}z_{2}$ if $z_{1}\neq z_{2}$.
Let $\sigma _{1}=\dfrac{z_{1}-w_{1}}{w_{2}-w_{1}}$. Then

(A) $0<\func{Re}\sigma _{1}<\dfrac{2}{3}$ and the inequality is sharp in that there are $w_{1},w_{2},$ and $w_{3}$ satisfying the hypotheses above and such that $\func{Re}\sigma _{1}$ can be made arbitrarily close to $0$ or arbitrarily close to $\dfrac{2}{3}$.

(B) $-\dfrac{1}{3}\leq \func{Im}\sigma _{1}\leq \dfrac{1}{3}$.

(C) $\func{Im}\sigma _{1}=\dfrac{1}{3}\iff $ the roots of $p$ have the form $\pm i\left( z_{0}+C\right) $ and $2\left( z_{0}+C\right) $, where $\func{Im}z_{0}<0$, $0<\func{Re}z_{0}<-\dfrac{1}{2}\func{Im}z_{0},$ and $C$ is an arbitrary constant.

(D) $\func{Im}\sigma _{1}=-\dfrac{1}{3}\iff $ the roots of $p$ have the form $\pm i\left( z_{0}+C\right) $ and $2\left( z_{0}+C\right) $, where $\func{Im}z_{0}>0$, $0<\func{Re}z_{0}<\dfrac{1}{2}\func{Im}z_{0},$ and $C$ is an arbitrary constant.

(E) $\left\vert \sigma _{1}\right\vert \leq \dfrac{2}{3}$.

\textbf{Proof: }While it is not necessary for $f$ to extend to be continuous on $\partial \left( D\right) $ to apply Proposition 1, we must show that $0<\func{Re}f(w)<\dfrac{2}{3}$ for $w\in D\cup \partial \left( D\right) $ since $\sigma _{1}$ can arise for $w\in \partial \left( D\right) $. First we consider the behavior of $f$ at $\infty $. $\lim\limits_{w\rightarrow \infty }f(w)=\lim\limits_{w\rightarrow 0}f(1/w)=\dfrac{1}{3}\lim\limits_{w\rightarrow 0}\dfrac{1+3w\pm \sqrt{3w^{2}+1}}{w+1}=0$ or $\dfrac{2}{3}$ depending upon whether $w\rightarrow 0$ through $\func{Re}w>0$ or $\func{Re}w<0$. Thus by Proposition 1, $\limsup\limits_{k\rightarrow \infty }\func{Re}f(a_{k})\leq \dfrac{2}{3},\liminf\limits_{k\rightarrow \infty }\func{Re}f(a_{k})\geq 0,\limsup\limits_{k\rightarrow \infty }\func{Im}f(a_{k})\leq \dfrac{1}{3},$ and $\liminf\limits_{k\rightarrow \infty }\func{Im}f(a_{k})\geq -\dfrac{1}{3}$ for any sequence $\left\{ a_{k}\right\} $ in $D$ converging to $\infty $.
We now show that $0\leq \func{Re}f\leq \dfrac{2}{3}$ and $-\dfrac{1}{3}\leq \func{Im}f\leq \dfrac{1}{3}$ as $w$ \textit{approaches} any point $z\in \partial \left( D\right) $. As $w$ approaches $z\in \partial \left( D\right) $, $\sqrt{3+w^{2}}$ approaches $\pm \sqrt{3-t^{2}}=\pm i\sqrt{t^{2}-3}$. Thus $\dfrac{1}{3}\dfrac{w+3-\sqrt{3+w^{2}}}{w+1}$ approaches $\dfrac{1}{3}\dfrac{ti+3\pm i\sqrt{t^{2}-3}}{ti+1}$. Note that $\dfrac{1}{3}\dfrac{ti+3+i\sqrt{t^{2}-3}}{ti+1}=\sigma _{1}(w),w\in \partial \left( D\right) $, by (7). Thus by finding the maximum and minimum of $\func{Re}\dfrac{1}{3}\dfrac{ti+3\pm i\sqrt{t^{2}-3}}{ti+1}$ and $\func{Im}\dfrac{1}{3}\dfrac{ti+3\pm i\sqrt{t^{2}-3}}{ti+1}$, $\left\vert t\right\vert \geq \sqrt{3}$, we are finding the maximum and minimum of $\func{Re}f(w)$ and of $\func{Im}f(w)$ as $w$ approaches $\partial \left( D\right) $, and the maximum and minimum of $\func{Re}\sigma _{1}$ and of $\func{Im}\sigma _{1}$ for $w\in \partial \left( D\right) $. Now \[ \dfrac{1}{3}\dfrac{ti+3+i\sqrt{t^{2}-3}}{ti+1}=u_{1}(t)+iv_{1}(t),\quad \dfrac{1}{3}\dfrac{ti+3-i\sqrt{t^{2}-3}}{ti+1}=u_{2}(t)+iv_{2}(t) \] where \begin{equation} u_{1}(t)=\dfrac{1}{3}\dfrac{t^{2}+3+t\sqrt{t^{2}-3}}{t^{2}+1},\quad u_{2}(t)=\dfrac{1}{3}\dfrac{t^{2}+3-t\sqrt{t^{2}-3}}{t^{2}+1} \tag{8} \end{equation} and \begin{equation} v_{1}(t)=\dfrac{1}{3}\dfrac{-2t+\sqrt{t^{2}-3}}{t^{2}+1},\quad v_{2}(t)=\dfrac{1}{3}\dfrac{-2t-\sqrt{t^{2}-3}}{t^{2}+1} \tag{9} \end{equation} Now $u_{1}^{\prime }(t)=-\dfrac{1}{3}\dfrac{4t\sqrt{t^{2}-3}-5t^{2}+3}{\sqrt{t^{2}-3}\left( t^{2}+1\right) ^{2}}$ and $u_{2}^{\prime }(t)=-\dfrac{1}{3}\dfrac{4t\sqrt{t^{2}-3}+5t^{2}-3}{\sqrt{t^{2}-3}\left( t^{2}+1\right) ^{2}}$. By Lemma 1, $u_{1}^{\prime }$ and $u_{2}^{\prime }$ have no real roots, and hence $u_{1}$ and $u_{2}$ have no real critical points.
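The polynomial identity behind Lemma 1, and the resulting fixed signs of the numerators of $u_{1}^{\prime }$ and $u_{2}^{\prime }$, can be corroborated numerically. A small NumPy sketch (illustrative, not part of the proof):

```python
import numpy as np

t = np.linspace(-50, 50, 100001)

# Lemma 1: 16 t^2 (t^2 - 3) - (5 t^2 - 3)^2 = -9 (t^2 + 1)^2 identically,
# so 4 t sqrt(t^2 - 3) -+ (5 t^2 - 3) never vanishes for real t
lhs = 16 * t**2 * (t**2 - 3) - (5 * t**2 - 3)**2
rhs = -9 * (t**2 + 1)**2

# on |t| >= sqrt(3): the numerators of u1' and u2' keep a fixed sign,
# so u1 and u2 from (8) have no critical points and stay strictly inside (0, 2/3)
s = t[np.abs(t) >= np.sqrt(3) + 1e-6]
r = np.sqrt(s**2 - 3)
num1 = 4 * s * r - 5 * s**2 + 3     # numerator appearing in u1' (up to a negative factor)
num2 = 4 * s * r + 5 * s**2 - 3     # numerator appearing in u2' (up to a negative factor)
u1 = (s**2 + 3 + s * r) / (3 * (s**2 + 1))
u2 = (s**2 + 3 - s * r) / (3 * (s**2 + 1))
```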
Now $u_{1}(\sqrt{3})=u_{1}(-\sqrt{3})=u_{2}(\sqrt{3})=u_{2}(-\sqrt{3})=\dfrac{1}{2}$, $\lim\limits_{t\rightarrow \infty }u_{1}(t)=\lim\limits_{t\rightarrow -\infty }u_{2}(t)=\dfrac{2}{3}$, and $\lim\limits_{t\rightarrow -\infty }u_{1}(t)=\lim\limits_{t\rightarrow \infty }u_{2}(t)=0$. Thus $0\leq u_{1}(t),u_{2}(t)\leq \dfrac{2}{3}$ for $\left\vert t\right\vert \geq \sqrt{3}$, which implies that $0\leq \func{Re}\dfrac{1}{3}\dfrac{ti+3\pm i\sqrt{t^{2}-3}}{ti+1}\leq \dfrac{2}{3}$ for $\left\vert t\right\vert \geq \sqrt{3}$. It follows that $\limsup\limits_{k\rightarrow \infty }\func{Re}f(a_{k})\leq \dfrac{2}{3}$ and $\liminf\limits_{k\rightarrow \infty }\func{Re}f(a_{k})\geq 0$ for any sequence $\left\{ a_{k}\right\} $ in $D$ converging to $\partial \left( D\right) $. By Proposition 1, $0\leq \func{Re}f(w)\leq \dfrac{2}{3}$ for $w\in D$. As noted above, the same proof shows that $0\leq \func{Re}f\leq \dfrac{2}{3}$ for $w\in \partial \left( D\right) $. By Proposition 2, $0<\func{Re}f<\dfrac{2}{3}$ for $w\in D$. It also follows easily that $u_{1}$ is strictly increasing, and $u_{2}$ strictly decreasing, on each of the intervals $(-\infty ,-\sqrt{3}]$ and $[\sqrt{3},\infty )$, which, combined with the endpoint values and limits above, implies that $0<u_{1}(t)<\dfrac{2}{3}$ and $0<u_{2}(t)<\dfrac{2}{3}$ for all finite $t$ with $\left\vert t\right\vert \geq \sqrt{3}$. Since $\func{Re}f(w)$ approaches $u_{1}(t)$ or $u_{2}(t)$ as $w$ approaches $ti$, $\left\vert t\right\vert \geq \sqrt{3}$, we also have $0<\func{Re}f<\dfrac{2}{3}$ for $w\in \partial \left( D\right) $. That shows that $0<\func{Re}\sigma _{1}<\dfrac{2}{3}$. To finish the proof of part (A), if $t>\sqrt{3}$, let $w_{1}=-2t-i$, $w_{2}=-t+2t^{2}i$, and $w_{3}=2t+i$, while if $t<-\sqrt{3}$, let $w_{1}=2t+i$, $w_{2}=t-2t^{2}i$, and $w_{3}=-2t-i$. In either case, $w=ti$ and $\func{Im}\left( 3w_{3}^{2}+w_{2}^{2}\right) =12t-4t^{3}\neq 0\Rightarrow \func{Re}\sqrt{3w_{3}^{2}+w_{2}^{2}}\neq 0$. Thus $z_{1}$ and $z_{2}$ have unequal real parts. Since $\func{Re}w_{1}<\func{Re}w_{2}<\func{Re}w_{3}$ as well, the ratios are defined.
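The sharpness construction just given can be spot-checked numerically, say at $t=10$ (illustrative only; the bounds asserted are those of Theorems 1--3):

```python
import numpy as np

# part (A) construction at t = 10: w1 = -2t - i, w2 = -t + 2 t^2 i, w3 = 2t + i
t = 10.0
w1, w2, w3 = -2*t - 1j, -t + 2*t**2*1j, 2*t + 1j

# critical points of p(w) = (w - w1)(w - w2)(w - w3): roots of
# p'(w) = 3 w^2 - 2 (w1 + w2 + w3) w + (w1 w2 + w1 w3 + w2 w3), ordered by real part
z1, z2 = sorted(np.roots([3, -2*(w1 + w2 + w3), w1*w2 + w1*w3 + w2*w3]),
                key=lambda c: c.real)

sigma1 = (z1 - w1) / (w2 - w1)
sigma2 = (z2 - w2) / (w3 - w2)
```

With $t=10$ one finds $\func{Re}\sigma _{1}\approx 0.665$, already within $0.002$ of the supremum $2/3$.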
Above we showed that \begin{equation} \sigma _{1}(w)=u_{1}(t)+iv_{1}(t),\quad w=it,\left\vert t\right\vert \geq \sqrt{3} \tag{10} \end{equation} Thus $\func{Re}\sigma _{1}(w)=u_{1}(t)$. Since $\lim\limits_{t\rightarrow \infty }u_{1}(t)=\dfrac{2}{3}$ and $\lim\limits_{t\rightarrow -\infty }u_{1}(t)=0$, we can make $\func{Re}\sigma _{1}$ as close to $0$ or to $\dfrac{2}{3}$ as we please by taking $\left\vert t\right\vert $ sufficiently large. That finishes the proof of part (A). To prove part (B), $v_{1}^{\prime }(t)=\dfrac{1}{3}\dfrac{2\sqrt{t^{2}-3}t^{2}-2\sqrt{t^{2}-3}-t^{3}+7t}{\sqrt{t^{2}-3}\left( t^{2}+1\right) ^{2}}$ and $v_{2}^{\prime }(t)=\dfrac{1}{3}\dfrac{2\sqrt{t^{2}-3}t^{2}-2\sqrt{t^{2}-3}+t^{3}-7t}{\sqrt{t^{2}-3}\left( t^{2}+1\right) ^{2}}$. By Lemma 2, $v_{1}$ has one real critical point, $t=-2$, and $v_{2}$ has one real critical point, $t=2$. Also, $v_{1}(\sqrt{3})=-\dfrac{1}{6}\sqrt{3}$, $v_{1}(-\sqrt{3})=\dfrac{1}{6}\sqrt{3}$, $v_{1}(-2)=\dfrac{1}{3}$, and $\lim\limits_{t\rightarrow -\infty }v_{1}(t)=\lim\limits_{t\rightarrow \infty }v_{1}(t)=0$, while $v_{2}(\sqrt{3})=-\dfrac{1}{6}\sqrt{3}$, $v_{2}(-\sqrt{3})=\dfrac{1}{6}\sqrt{3}$, $v_{2}(2)=-\dfrac{1}{3}$, and $\lim\limits_{t\rightarrow -\infty }v_{2}(t)=\lim\limits_{t\rightarrow \infty }v_{2}(t)=0$. Hence $-\dfrac{1}{3}\leq v_{1}(t),v_{2}(t)\leq \dfrac{1}{3}$ for $\left\vert t\right\vert \geq \sqrt{3}$. Arguing as earlier with Proposition 1, that proves part (B). To prove (C), suppose that $\func{Im}\sigma _{1}=\dfrac{1}{3}$. If $\sigma _{1}=\sigma _{1}(w),w\in D$, then $\func{Im}f(w)=\dfrac{1}{3}$, which cannot happen by Proposition 2. If $\sigma _{1}=\sigma _{1}(w),w\in \partial \left( D\right) $, then $v_{1}(t)=\dfrac{1}{3}$ by (7).
Now it follows easily that the only real solution of $v_{1}(t)=\dfrac{1}{3}$ is $t=-2$, and $t=-2\Rightarrow w=-2i\Rightarrow w_{3}=\dfrac{1}{2}iw_{2},w_{1}=-\dfrac{1}{2}iw_{2}$. The critical points of the corresponding $p$ are $z=\dfrac{1}{2}w_{2}$ and $z=\dfrac{1}{6}w_{2}$, which have unequal real parts if $\func{Re}w_{2}\neq 0$. $\func{Re}w_{3}>0\Rightarrow -\dfrac{1}{2}\func{Im}w_{2}>0\Rightarrow \func{Im}w_{2}<0$. Also, $\func{Re}w_{2}<\func{Re}w_{3}\Rightarrow \func{Re}w_{2}<-\dfrac{1}{2}\func{Im}w_{2}$. If $\func{Re}w_{2}<0$, then $z_{1}=\dfrac{1}{2}w_{2}$ and $z_{2}=\dfrac{1}{6}w_{2}\Rightarrow \sigma _{1}=\dfrac{\dfrac{1}{2}w_{2}+\dfrac{1}{2}iw_{2}}{w_{2}+\dfrac{1}{2}iw_{2}}=\dfrac{3}{5}+\dfrac{1}{5}i\Rightarrow \func{Im}\sigma _{1}\neq \dfrac{1}{3}$. Letting $z_{0}=\dfrac{1}{2}w_{2}$, that yields roots of the form $\pm iz_{0}$ and $2z_{0}$, where $\func{Re}z_{0}>0$ and $\func{Re}z_{0}<-\dfrac{1}{2}\func{Im}z_{0}$. Since any translation of $p$ yields the same ratios, the roots of $p$ must have the form given in part (C). If the roots of $p$ have the form given in part (C), then $z_{1}=\dfrac{1}{6}w_{2}$ and $z_{2}=\dfrac{1}{2}w_{2}$, which implies that $\sigma _{1}=\dfrac{\dfrac{1}{6}w_{2}+\dfrac{1}{2}iw_{2}}{w_{2}+\dfrac{1}{2}iw_{2}}=\dfrac{1}{3}+\dfrac{1}{3}i\Rightarrow \func{Im}\sigma _{1}=\dfrac{1}{3}$. The proof of part (D) follows in a similar fashion and we omit it. Finally, to prove (E), note first that $f(w)=0\Rightarrow w+3-\sqrt{3+w^{2}}=0\Rightarrow \left( w+3\right) ^{2}-\left( 3+w^{2}\right) =6w+6=0\Rightarrow w=-1$, but $f(-1)=\dfrac{1}{2}\neq 0$. Thus $f$ has no zero in $D$ and by ([6], Theorem 13.12, page 294), $\log \left\vert f\right\vert $ is harmonic in $D$. We shall apply Proposition 1 to $\log \left\vert f\right\vert $.
Since we showed earlier that $\lim\limits_{w\rightarrow \infty }f(w)=0$ or $\dfrac{2}{3}$, $\limsup\limits_{k\rightarrow \infty }\log \left\vert f(a_{k})\right\vert \leq \log \dfrac{2}{3}$ for any sequence $\left\{ a_{k}\right\} $ in $D$ converging to $\infty $. As $w$ approaches $\partial \left( D\right) $, $9\left\vert f(w)\right\vert ^{2}$ approaches $9\left[ \left( u_{1}(t)\right) ^{2}+\left( v_{1}(t)\right) ^{2}\right] $ or $9\left[ \left( u_{2}(t)\right) ^{2}+\left( v_{2}(t)\right) ^{2}\right] $, where $w=it,\left\vert t\right\vert \geq \sqrt{3}$. Now $9\left[ \left( u_{1}(t)\right) ^{2}+\left( v_{1}(t)\right) ^{2}\right] =a(t)=2\dfrac{t^{2}+3+t\sqrt{t^{2}-3}}{t^{2}+1}$, and it follows easily that $a^{\prime }(t)=2\dfrac{-4t\sqrt{t^{2}-3}+5t^{2}-3}{\sqrt{t^{2}-3}\left( t^{2}+1\right) ^{2}}>0$ for all $t,\left\vert t\right\vert \geq \sqrt{3}$. Since $\lim\limits_{t\rightarrow \infty }a(t)=4$, $a(t)<4$ for all $t,\left\vert t\right\vert \geq \sqrt{3}$. Similarly, $9\left[ \left( u_{2}(t)\right) ^{2}+\left( v_{2}(t)\right) ^{2}\right] =b(t)=2\dfrac{t^{2}+3-t\sqrt{t^{2}-3}}{t^{2}+1}$, and it follows easily that $b^{\prime }(t)=-2\dfrac{4t\sqrt{t^{2}-3}+5t^{2}-3}{\sqrt{t^{2}-3}\left( t^{2}+1\right) ^{2}}<0$ for all $t,\left\vert t\right\vert \geq \sqrt{3}$. Since $\lim\limits_{t\rightarrow -\infty }b(t)=4$, $b(t)<4$ for all $t,\left\vert t\right\vert \geq \sqrt{3}$. Hence $\left\vert f(a_{k})\right\vert ^{2}\leq \dfrac{4}{9}$ for any sequence $\left\{ a_{k}\right\} $ in $D$ converging to $\partial \left( D\right) $, which implies that $\limsup\limits_{k\rightarrow \infty }\log \left\vert f(a_{k})\right\vert \leq \dfrac{1}{2}\log \dfrac{4}{9}$. That proves that $\left\vert f(w)\right\vert \leq \dfrac{2}{3},w\in D$, by Proposition 1.
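The bounds $a(t)<4$ and $b(t)<4$ used above admit an easy numerical sanity check (illustrative only; the grid and endpoints are arbitrary):

```python
import numpy as np

# a(t) = 9(u1^2 + v1^2) and b(t) = 9(u2^2 + v2^2) on |t| >= sqrt(3)
t = np.concatenate([np.linspace(-1e3, -np.sqrt(3), 50001),
                    np.linspace(np.sqrt(3), 1e3, 50001)])
r = np.sqrt(np.maximum(t**2 - 3, 0.0))   # guard against tiny negatives at t = +-sqrt(3)
a = 2 * (t**2 + 3 + t * r) / (t**2 + 1)
b = 2 * (t**2 + 3 - t * r) / (t**2 + 1)
```

Both functions stay strictly below $4$ while approaching it at one end, matching the monotonicity argument in the proof.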
Note also that $9\left\vert \sigma _{1}\right\vert ^{2}=9\left[ \left( u_{1}(t)\right) ^{2}+\left( v_{1}(t)\right) ^{2}\right] $ for $w\in \partial \left( D\right) $. By what we just proved, $\left\vert \sigma _{1}(w)\right\vert \leq \dfrac{2}{3},w\in \partial \left( D\right) $. That finishes the proof of part (E). \textbf{Theorem 2: }Let $p(w)=(w-w_{1})(w-w_{2})(w-w_{3})$, with $\func{Re}w_{1}<\func{Re}w_{2}<\func{Re}w_{3}$. Let $z_{1}$ and $z_{2}$ be the critical points of $p$, where $z_{1}=z_{2}$ or $\func{Re}z_{1}<\func{Re}z_{2}$ if $z_{1}\neq z_{2}$. Let $\sigma _{2}=\dfrac{z_{2}-w_{2}}{w_{3}-w_{2}}$. Then (A) $\dfrac{1}{3}<\func{Re}\sigma _{2}<1$ and the inequality is sharp in that there are $w_{1},w_{2},$ and $w_{3}$ satisfying the hypotheses above and such that $\func{Re}\sigma _{2}$ can be made arbitrarily close to $\dfrac{1}{3}$ or arbitrarily close to $1$. (B) $-\dfrac{1}{3}\leq \func{Im}\sigma _{2}\leq \dfrac{1}{3}$. (C) $\func{Im}\sigma _{2}=\dfrac{1}{3}\iff $ the roots of $p$ have the form $\pm iz$ and $2z$, where $\func{Im}z<0$ and $0<\func{Re}z<-\dfrac{1}{2}\func{Im}z$. (D) $\func{Im}\sigma _{2}=-\dfrac{1}{3}\iff $ the roots of $p$ have the form $\pm iz$ and $2z$, where $\func{Im}z>0$ and $0<\func{Re}z<\dfrac{1}{2}\func{Im}z$. (E) $\left\vert \sigma _{2}\right\vert \leq 1$. \textbf{Proof: }We proceed exactly as in the proof of Theorem 1, working with $g(w)$ instead of with $f(w)$. Since $\lim\limits_{w\rightarrow \infty }f(w)=0$ or $\dfrac{2}{3}$, by (6), $\lim\limits_{w\rightarrow \infty }g(w)=\dfrac{1}{3}$ or $1$. As $w$ approaches $z\in \partial \left( D\right) $, $g(w)$ approaches $\dfrac{1}{3}\dfrac{-2ti\pm \sqrt{3-t^{2}}}{1-ti}=\dfrac{1}{3}\dfrac{-2ti\pm i\sqrt{t^{2}-3}}{1-ti}=1-u_{1}(t)+iv_{1}(t)$ or $1-u_{2}(t)+iv_{2}(t)$.
Since we showed that $0<u_{1}(t)<\dfrac{2}{3}$, $0<u_{2}(t)<\dfrac{2}{3}$, $-\dfrac{1}{3}\leq v_{1}(t)\leq \dfrac{1}{3}$, and $-\dfrac{1}{3}\leq v_{2}(t)\leq \dfrac{1}{3}$ for $\left\vert t\right\vert \geq \sqrt{3}$, it follows immediately that $\dfrac{1}{3}<\func{Re}\sigma _{2}<1$ and $-\dfrac{1}{3}\leq \func{Im}\sigma _{2}\leq \dfrac{1}{3}$. The rest of parts (A) and (B) follow as in the proof of Theorem 1, parts (A) and (B). Parts (C) and (D) also follow as in the proof of Theorem 1, parts (C) and (D), and part (E) follows directly from Theorem 1, part (E), and (5). \textbf{Theorem 3: }Let $p(w)=(w-w_{1})(w-w_{2})(w-w_{3})$, with $\func{Re}w_{1}<\func{Re}w_{2}<\func{Re}w_{3}$. Let $z_{1}$ and $z_{2}$ be the critical points of $p$, where $z_{1}=z_{2}$ or $\func{Re}z_{1}<\func{Re}z_{2}$ if $z_{1}\neq z_{2}$. Let $\sigma _{1}=\dfrac{z_{1}-w_{1}}{w_{2}-w_{1}}$ and $\sigma _{2}=\dfrac{z_{2}-w_{2}}{w_{3}-w_{2}}$. Then $\func{Re}\sigma _{2}\geq \func{Re}\sigma _{1}$. \textbf{Proof: }First, $g(w)-f(w)=\dfrac{1}{3}\dfrac{w^{2}+3-2\sqrt{3+w^{2}}}{w^{2}-1}$ is analytic in $D$, which implies that $\func{Re}\left( g(w)-f(w)\right) $ is a harmonic function in $D$, so we may apply Proposition 1. Now $\lim\limits_{w\rightarrow \infty }\left( g(w)-f(w)\right) =\dfrac{1}{3}\lim\limits_{w\rightarrow \infty }\dfrac{w^{2}+3-2\sqrt{3+w^{2}}}{w^{2}-1}=\dfrac{1}{3}\geq 0$. Thus $\liminf\limits_{k\rightarrow \infty }\func{Re}\left( g(a_{k})-f(a_{k})\right) \geq 0$ for any sequence $\left\{ a_{k}\right\} $ in $D$ converging to $\infty $. Also, as $w\rightarrow \partial \left( D\right) $, $g(w)-f(w)\rightarrow \dfrac{1}{3}\dfrac{-t^{2}+3\pm 2i\sqrt{t^{2}-3}}{-t^{2}-1}=\dfrac{1}{3}\dfrac{t^{2}-3\pm 2i\sqrt{t^{2}-3}}{t^{2}+1}$. Then $\func{Re}\left( g(w)-f(w)\right) \rightarrow \dfrac{1}{3}\dfrac{t^{2}-3}{t^{2}+1}\geq 0$ since $t^{2}\geq 3$.
It follows that $\liminf\limits_{k\rightarrow \infty }\func{Re}\left( g(a_{k})-f(a_{k})\right) \geq 0$ for any sequence $\left\{ a_{k}\right\} $ in $D$ converging to $\partial \left( D\right) $. By Proposition 1, $\func{Re}\left( \sigma _{2}(w)-\sigma _{1}(w)\right) \geq 0$, $w\in D$. For $w\in \partial \left( D\right) $, by (7) and (5), $\sigma _{2}=-i\dfrac{it+1}{2t-\sqrt{t^{2}-3}}$, which implies that \begin{equation} \sigma _{2}-\sigma _{1}=\dfrac{1}{3}\dfrac{-t^{2}+3+2i\sqrt{t^{2}-3}}{-t^{2}-1}. \tag{11} \end{equation} By what was just proved, $\func{Re}\left( \sigma _{2}(w)-\sigma _{1}(w)\right) \geq 0$, $w\in \partial \left( D\right) $. \textbf{Note:} The example below shows that it is possible to have $\func{Re}\sigma _{2}=\func{Re}\sigma _{1}$. In fact, below we have $\sigma _{1}=\sigma _{2}$. \textbf{Example:} Let $w_{1}=-1$, $w_{2}=\sqrt{3}i$, $w_{3}=1$, which implies that $w=\sqrt{3}i$ and the $\left\{ w_{k}\right\} $ are the vertices of an equilateral triangle. Then $z_{1}=z_{2}=\dfrac{1}{\sqrt{3}}i$ and $\sigma _{1}=\sigma _{2}=\dfrac{1}{2}-\dfrac{1}{6}i\sqrt{3}$. It is natural to ask whether the example above gives essentially the only case when $\sigma _{1}=\sigma _{2}$. \textbf{Theorem 4: }$\sigma _{1}=\sigma _{2}\iff w_{1},w_{2},w_{3}$ are the vertices of an equilateral triangle which contains no vertical line segment. \textbf{Proof: }$w=\pm 1\Rightarrow w_{2}=w_{3}$ or $w_{2}=w_{1}$, in which case the ratios are not defined. Thus we may assume that $w\neq \pm 1$. For $w\in D$, $\sigma _{1}(w)=\sigma _{2}(w)\iff f(w)-g(w)=-\dfrac{1}{3}\dfrac{w^{2}-2\sqrt{3+w^{2}}+3}{\left( w-1\right) \left( w+1\right) }=0\iff w^{2}-2\sqrt{3+w^{2}}+3=0\iff w=\pm i\sqrt{3}\iff \left\{ w_{1},w_{2},w_{3}\right\} =\left\{ -w_{3},\pm \sqrt{3}iw_{3},w_{3}\right\} $, which are easily seen to be the vertices of an equilateral triangle.
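The Example above can be verified numerically; an illustrative NumPy check (not part of the proof):

```python
import numpy as np

# the equilateral example: w1 = -1, w2 = sqrt(3) i, w3 = 1
w1, w2, w3 = -1.0 + 0j, np.sqrt(3) * 1j, 1.0 + 0j

# p'(w) = 3 w^2 - 2 (w1 + w2 + w3) w + (w1 w2 + w1 w3 + w2 w3)
# has a double root at i / sqrt(3)
z1, z2 = np.roots([3, -2*(w1 + w2 + w3), w1*w2 + w1*w3 + w2*w3])

sigma1 = (z1 - w1) / (w2 - w1)
sigma2 = (z2 - w2) / (w3 - w2)
```

Both ratios evaluate to $\dfrac{1}{2}-\dfrac{\sqrt{3}}{6}i$, as claimed.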
For $w\in \partial \left( D\right) $, by (11), $\sigma _{1}(w)=\sigma _{2}(w)\iff \dfrac{1}{3}\dfrac{-t^{2}+3+2i\sqrt{t^{2}-3}}{-t^{2}-1}=0,w=ti,\left\vert t\right\vert \geq \sqrt{3}$. That yields $t=\pm \sqrt{3}$, which gives $w=\pm i\sqrt{3}$ as above. We can also assume that the triangle formed by $w_{1},w_{2},w_{3}$ contains no vertical line segment, since the ratios are not defined in that case either. \textbf{Theorem 5: }$\sigma _{1}$ or $\sigma _{2}$ is real if and only if $w_{1},w_{2},$ and $w_{3}$ are collinear. \textbf{Proof: }Suppose first that $\sigma _{1}(w)$ is real, $w\in D$. Then $f(w)$ is real, or $w+3-\sqrt{3+w^{2}}=k(w+1),k\in \Re $, which implies, after some simplification, that $(k^{2}-2k)w^{2}+2(1-k)(3-k)w+k^{2}-6k+6=0$. The discriminant of this quadratic equation is $4(1-k)^{2}(3-k)^{2}-4(k^{2}-2k)(k^{2}-6k+6)=4\left( 2k-3\right) ^{2}\geq 0$ since $k\in \Re $. Hence $w$ is real. Now if $w$ is real, then for the ratios to exist, $w=\dfrac{w_{2}}{w_{3}}$ must be a positive real number, which we again denote by $k$. But then $w_{1}=-kw_{2}$ and $w_{3}=kw_{2}$. It is then easy to show that the set of points $\left\{ -kw_{2},w_{2},kw_{2}\right\} $ must be collinear. If $\sigma _{1}(w)$ is real, $w\in \partial \left( D\right) $, then by (10) and (9), $-2t+\sqrt{t^{2}-3}=0$, which has no real solutions. If $\sigma _{2}$ is real, we can proceed in the same fashion, or just use (5) to show that $\sigma _{1}$ is real. \textbf{Remark: }One can easily extend the definition of complex ratios given in this paper to functions of the form $p(z)=(z-w_{1})^{m_{1}}(z-w_{2})^{m_{2}}(z-w_{3})^{m_{3}}$, where $m_{1},m_{2},$ and $m_{3}$ are given positive real numbers. This is discussed in [4] for all real $w_{j}$. \section{References} (1) Peter Andrews, Where not to find the critical points of a polynomial--variation on a Putnam theme, Amer. Math. Monthly 102 (1995), 155--158.
(2) Sheldon Axler, Paul Bourdon, and Wade Ramey, Harmonic Function Theory, 2nd ed., Springer-Verlag, New York, 2001. (3) Alan Horwitz, On the ratio vectors of polynomials, Journal of Mathematical Analysis and Applications 205 (1997), 568--576. (4) Alan Horwitz, Ratio vectors of polynomial-like functions, preprint. (5) Gideon Peyser, On the roots of the derivative of a polynomial with real roots, Amer. Math. Monthly 74 (1967), 1102--1104. (6) Walter Rudin, Real and Complex Analysis, 2nd ed., McGraw-Hill, 1974. \end{document}
https://arxiv.org/abs/1809.10762
An Approach to Duality in Nonlinear Filtering
This paper revisits the question of duality between minimum variance estimation and optimal control, first described for the linear Gaussian case in the celebrated paper of Kalman and Bucy. A duality result is established for nonlinear filtering, mirroring closely the original Kalman-Bucy duality of control and estimation for linear systems. The result is presented for a continuous-time Markov chain on a finite state space, and the solution of the dual problem is used to derive the classical Wonham filter.
\section{Introduction} In Kalman's celebrated paper with Bucy, it is shown that the problem of optimal estimation is dual to an optimal control problem~\cite{kalman1961}. A striking example of the dual relationship is that, with the time arrow reversed, the dynamic Riccati equation (DRE) of the optimal control is the same as the covariance update equation of the Kalman filter. The relationship is useful, e.g., to derive results on asymptotic stability of the linear filter based on asymptotic properties of the solution of the DRE~\cite{ocone1996}. A nonlinear extension of the minimum variance estimator has been considered to be a harder problem. In the literature, it has been noted that: i) the dual relationship between the DRE of the LQ optimal control and the covariance update equation of the Kalman filter is {\em not} consistent with the interpretation of the negative log-posterior as a value function; and ii) some of the linear algebraic operations, e.g., the use of the matrix transpose to define the dual system, are not applicable to nonlinear systems~\cite{todorov2006optimal,todorov2008general}. For these reasons, the original duality of Kalman-Bucy is seen as an LQG artifact that does not generalize~\cite{todorov2006optimal}. In this paper, a nonlinear extension of the minimum variance estimation is presented for the special case of a Markov process in continuous time, on a finite state-space. The dual system is a backward ordinary differential equation. An optimal control objective is formulated whose solution yields the minimum variance estimator. Using the elementary method of change of control, the formula for the optimal control is obtained and used to derive the classical Wonham filter. The outline of the paper is as follows: classical duality is reviewed in \Sec{sec:prelim}, and the new dual optimal control problem for the finite case is described in \Sec{sec:duality}. Its solution leading to the Wonham filter is presented in \Sec{sec:main}. \nobreak \section{Background on classical duality} \label{sec:prelim} \newP{Linear Gaussian filtering model} Specified by the linear stochastic differential equation (SDE): \begin{flalign} &\text{Signal}\quad\quad\;\;&& \,\mathrm{d} X_t = A^\top X_t \,\mathrm{d} t + \,\mathrm{d} B_t&\nonumber\\ &\text{Observation}\;&& \,\mathrm{d} Z_t = H^\top X_t\,\mathrm{d} t + \,\mathrm{d} W_t&\nonumber \end{flalign} where $X_t \in \mathbb{R}^d$ is the state at time $t$, $Z_t \in \mathbb{R}^m$ is the observation, $A$, $H$ are matrices of appropriate dimension, and $B$, $W$ are mutually independent Wiener processes (w.p.) taking values in $\mathbb{R}^d$ and $\mathbb{R}^m$, respectively. The covariance matrices associated with $B$ and $W$ are denoted by ${Q}$ and ${R}$, respectively. The initial condition $X_0$ is drawn from a Gaussian distribution $\mathcal{N}(\hat{x}_0,\Sigma_0)$, independent of $B$ and $W$. It is assumed that the noise covariance matrix is non-singular, ${R}\succ 0$. \newP{Minimum-variance estimator} Consider the problem of constructing a minimum variance estimator for the random variable $f^\top X_T$, at some fixed time $T$, where $f\in\mathbb{R}^d$ is an arbitrary, known vector. Given the observations $\{ Z_t : t\in[0,T]\}$, the following linear structure for the optimal estimator is assumed: \begin{equation*} S_T = y_0^\top \hat{x}_0 - \int_0^T u_t^\top \,\mathrm{d} Z_t \label{eq:KF_LP_est} \end{equation*} where $y_0\in\mathbb{R}^d$ is constructed below, and the input $u=\{u_t : t\in[0,T]\}$ is chosen to solve the optimization problem, \[ \min_{u} \;\; {\sf E} (|S_T - f^\top X_T|^2) \] The solution $S^*_T$ coincides with the minimum-variance estimator of $ f^\top X_T$.
This stochastic optimization problem is converted to a deterministic optimal control problem via duality. \newP{Dual optimal control problem} \[ \begin{aligned} \mathop{\text{Minimize}}_{u} \quad & J(u) = \half\ y_0^\top \Sigma_0 y_0 + \int_0^{T} \half u_t^\top R u_t + \half y_t^\top Q y_t\,\mathrm{d} t \\ \text{Subject to} \quad & \frac{\,\mathrm{d} y_t}{\,\mathrm{d} t} = -A y_t -H u_t,\quad y_T = f \end{aligned} \] The process $\{y_t : t\in[0,T]\}$ is referred to as the dual process. The solution of the optimal control problem yields the optimal control input, along with the vector $y_0$ that determines the minimum-variance estimator $S^*_T$. The Kalman filter is obtained by expressing $\{S^*_t(f) : t\ge 0,\ f\in\mathbb{R}^d\}$ as the solution to a linear SDE~\cite[Ch.~7]{astrom1970}. \section{Duality for Nonlinear Filtering:\\ The Finite State-Space Case}\label{sec:duality} \newP{Nonlinear filtering model} The finite state-space filtering problem is considered, in which the state-space is the canonical basis $\mathbb{S} = \{e_1,e_2,\hdots,e_d\}$ in $\mathbb{R}^d$. The Markovian state process $X=\{X_t : t\in[0,T]\}$ evolves in continuous time, taking values in $\mathbb{S}$. This and the observation process $Z=\{Z_t : t\in[0,T]\}$ are modeled by the SDE, \begin{subequations}\label{eq:dyn-obs} \begin{flalign} &\text{Signal}\quad\quad&&\;\; \,\mathrm{d} X_t = A^\top X_t \,\mathrm{d} t + \,\mathrm{d} B_t&\label{eq:dyn} \\ &\text{Observation} &&\;\; \,\mathrm{d} Z_t = H^\top X_t\,\mathrm{d} t + \,\mathrm{d} W_t&\label{eq:obs} \end{flalign} \end{subequations} where $A\in\mathbb{R}^{d\times d}$ is the \textit{rate matrix}, $H\in\mathbb{R}^{d\times m}$, $W$ is an $m$-dimensional w.p.\ with covariance $R\succ 0$. $B=\{B_t:t\in[0,T]\}$ is defined by $$ B_t = X_t - \int_0^t A^\top X_\tau \,\mathrm{d} \tau $$ and it is a martingale since $A$ is the generator of the Markov process.
The initial distribution for $X_0$ is denoted $\pi_0\in {\cal P}(\mathbb{S})$ where ${\cal P}(\mathbb{S})$ denotes the probability simplex in $\mathbb{R}^d$. It is assumed that $X$, $W$ are mutually independent. The linear observation model is chosen without loss of generality: for any function $h \;\colon \; \mathbb{S}\to \mathbb{R}$ we have $h(x) = H^\top x$ for $x\in \mathbb{S}$, with $H_{i}=h(e_i)$. Two filtrations are required in this work: ${\cal F} = \{{\cal F}_t : t\ge 0\}$ and ${\cal Z} = \{{\cal Z}_t : t\ge 0\}$ where \[ {\cal F}_t:=\sigma(X_\tau,W_\tau : 0\le \tau \le t) \,, \quad {\cal Z}_t = \sigma(Z_\tau: 0\le \tau \le t) \] Let $C_{\cal Z}^p$ denote the family of $\mathbb{R}^p$-valued, continuous, and ${\cal Z}$-adapted functions of time (the superscript ``$p$'' is omitted in the special case $p=1$). The filtering problem is to compute the posterior distribution ${\sf P}(X_t \in \,\cdot\, \mid {\cal Z}_t)$~\cite{bensoussan2018estimation}. The solution is derived here through duality, very much like in the classical linear setting. \newP{The dual system} A backward ordinary differential equation (ODE) on $\mathbb{R}^d$, \begin{equation}\label{eq:dyn_y} \frac{\,\mathrm{d} Y_t}{\,\mathrm{d} t} = -AY_t - HU_t,\quad Y_T = f \end{equation} whose solution is \[ Y_t = e^{A(T-t)} f + \int_t^T e^{A(\tau-t)}H U_\tau \,\mathrm{d} \tau\,, \quad 0\le t\le T \] An optimal control problem is posed for the dual system~\eqref{eq:dyn_y} whose solution yields the nonlinear filter. This requires some restrictions on the class of control inputs. The set of \textit{admissible control inputs} is defined as follows: \begin{equation}\label{eq:admissible_control_defn} \clU:=\left\{ U_t = {\sf K}_t^\top Y_t + V_t: {\sf K} \in C_{\cal Z}^{d\times m}, \; V \in C_{\cal Z}^m , \; t\in[0,T]\right\} \end{equation} We denote $U=\{U_t:t\in[0,T]\}$, ${\sf K}=\{{\sf K}_t:t\in[0,T]\}$ and $V=\{V_t:t\in[0,T]\}$. 
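The variation-of-constants formula above can be sanity-checked numerically. A sketch with NumPy/SciPy (illustrative only; a constant control $U_t\equiv u$ and randomly generated $A$, $H$, $f$ are assumed for simplicity):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
d, m, T = 3, 2, 1.0
A = 0.5 * rng.standard_normal((d, d))
H = 0.5 * rng.standard_normal((d, m))
f = rng.standard_normal(d)
u = rng.standard_normal(m)

# backward ODE dY/dt = -A Y - H u with terminal condition Y_T = f; substitute
# s = T - t so that dY/ds = A Y + H u with Y(s=0) = f, and integrate forward in s
sol = solve_ivp(lambda s, y: A @ y + H @ u, (0.0, T), f, rtol=1e-10, atol=1e-12)
Y0_ode = sol.y[:, -1]   # Y at t = 0

# closed form at t = 0: Y_0 = e^{A T} f + int_0^T e^{A tau} H u dtau (midpoint rule)
N = 2000
taus = (np.arange(N) + 0.5) * (T / N)
Y0_formula = expm(A * T) @ f + sum(expm(A * tau) @ (H @ u) for tau in taus) * (T / N)
```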
By construction, ${\sf K}$ and $V$ are ${\cal Z}$-adapted processes, but $U$ may not be ${\cal Z}$-adapted because of the backward nature of the ODE~\eqref{eq:dyn_y}. The following proposition provides explicit representations for the solution of the backward ODE~\eqref{eq:dyn_y}. Its proof appears in Appendix~\ref{apdx:pf_thm_Y_0}. \medskip \begin{proposition} \label{thm:Y_0} Consider the backward ODE~\eqref{eq:dyn_y} with control input $U_t = {\sf K}_t^\top Y_t + V_t$ where $\{{\sf K}_t:t\in[0,T]\}$ and $\{V_t:t\in[0,T]\}$ are given ${\cal Z}$-adapted processes. Then there exist ${\cal Z}$-adapted processes $\{\Phi_t, \eta_t, \kappa_t, \gamma_t:t\in[0,T]\}$, and $Y_0\in{\cal Z}_T$, such that for each $t\in[0,T]$, \[ Y_t = \Phi_t Y_0 + \eta_t, \quad U_t = \kappa_t^\top Y_0 + \gamma_t \] \end{proposition} This proposition is used to define the stochastic integrals that appear throughout the paper, as explained in Appendix~\ref{apdx:ito}. \medskip \newP{Minimum-variance estimator} The problem of interest is precisely as in the linear Gaussian case: given a fixed time $T>0$, and $f\in\mathbb{R}^d$, the goal is to obtain a representation for the minimum variance estimator for the random variable $f^\top X_T$. Given observations $Z=\{ Z_t : 0\le t\le T \}$ defined according to the model~\eqref{eq:obs}, the following linear structure for the estimator will be justified: \begin{equation} S_T = Y_0^\top \pi_0 - \int_0^T U_t^\top \,\mathrm{d} Z_t \label{eq:NL_est} \end{equation} The vector $Y_0$ is obtained from the solution to~\eqref{eq:dyn_y}. The optimal control input is chosen as the solution to the optimization problem: \begin{equation*}\label{eq:min_var} \min_{U\in\clU} \;\; {\sf E} [|S_T - f^\top X_T|^2] \end{equation*} Justification for the form \eqref{eq:NL_est} is provided through the formulation of the dual control problem.
\begin{remark}\label{rem:stoch_int} The stochastic integral $\int_0^T U_t^\top \,\mathrm{d} Z_t$ in~\eqref{eq:NL_est} is defined as a forward integral. Formally, for a given admissible choice of ${\cal Z}$-adapted processes ${\sf K}$ and $V$, upon using the representation in \Prop{thm:Y_0}, \[ \int_0^T U_t^\top \,\mathrm{d} Z_t = Y_0^\top \int_0^T \kappa_t \,\mathrm{d} Z_t + \int_0^T \gamma_t^\top \,\mathrm{d} Z_t \] where $\{\kappa_t:t\in[0,T]\}$, $\{\gamma_t:t\in[0,T]\}$ are adapted processes and therefore the associated integrals are well-defined as standard It\^o integrals. A self-contained background on interpreting stochastic integrals for the \textit{non-adapted} processes considered in this paper appears in Appendix~\ref{apdx:ito}. \end{remark} \newP{Dual optimal control problem} \begin{subequations}\label{eq:opt-cont-finite} \begin{align} &\mathop{\text{Min}}_{U\in\clU} \ \ J(U) = {\sf E} \;\Big( \half |Y_0^\top X_0 - Y_0^\top \pi_0|^2 + \int_0^T \half U_t^\top R U_t \,\mathrm{d} t \nonumber\\ &\quad+\int_0^T \half Y_t^\top \,\mathrm{d} \langle X,X^\top \rangle_t Y_t + {\cal E}_tU_t^\top \,\mathrm{d} W_t + {\cal E}_t Y_t^\top \,\mathrm{d} B_t\Big)\label{eq:opt-cont-finite-a}\\ &\text{Subject to} \ \ \frac{\,\mathrm{d} Y_t}{\,\mathrm{d} t} = -A Y_t - H U_t,\quad Y_T = f \label{eq:opt-cont-finite-b} \end{align} \end{subequations} where $\langle X,X^\top \rangle$ denotes the quadratic variation of the Markov process $X$, and the {\em error process} ${\cal E} = \{{\cal E}_t:t\in[0,T]\}$ is defined as follows: \begin{equation}\label{eq:et_defn} {\cal E}_t := Y_0^\top(X_0-\pi_0) +\int_0^t U_\tau^\top \,\mathrm{d} W_\tau +\int_0^t Y_\tau^\top \,\mathrm{d} B_\tau \end{equation} As in Remark~\ref{rem:stoch_int}, the four stochastic integrals appearing above are also defined as forward integrals (see Appendix~\ref{apdx:ito}).
The relationship between the optimal control objective $J(\cdot)$ and the minimum variance objective~\eqref{eq:min_var} is made precise in the following proposition. The proof appears in Appendix~\ref{apdx:opt_control}. \medskip \begin{proposition}\label{prop:justification-of-cost} Consider the state-observation model \eqref{eq:dyn-obs}, the linear estimator \eqref{eq:NL_est}, and the dual optimal control problem \eqref{eq:opt-cont-finite}. For an arbitrary choice of admissible control input, \begin{equation*} J(U) = \half {\sf E} [|S_T - f^\top X_T|^2] \end{equation*} This provides a justification for the objective function~\eqref{eq:opt-cont-finite-a} and moreover shows that $J(U)\ge 0$ for any admissible control. \end{proposition} \medskip \begin{remark} Consider a deterministic control input of the form $U_t = k_t^\top Y_t + v_t$ where $\{k_t\}$, $\{v_t\}$ are deterministic functions of time (in particular, they do not depend upon the observations). Such a control is trivially admissible. In this case, $\{Y_t\}$ is a deterministic function of time and the error process ${\cal E}$ is an ${\cal F}$-martingale. Consequently, $$ {\sf E}\Big(\int_0^T {\cal E}_tU_t^\top \,\mathrm{d} W_t + {\cal E}_t Y_t^\top \,\mathrm{d} B_t\Big) = 0 $$ and the objective function in~\eqref{eq:opt-cont-finite-a} simplifies to $$ J(U) = \half Y_0^\top \Sigma_0Y_0 + \int_0^T \half U_t^\top R U_t + \half Y_t^\top {\sf E}( Q(X_t) )Y_t \,\mathrm{d} t $$ where $\Sigma_0 := {\sf E}((X_0-\pi_0)(X_0-\pi_0)^\top)$ and $Q(\cdot)$ is a map from $\mathbb{S}$ to $\mathbb{R}^{d\times d}$ defined as follows: \begin{equation*} Q(e_i) := \sum_{j\neq i}A_{ij}(e_j-e_i)(e_j-e_i)^\top,\quad i = 1,\ldots,d \end{equation*} The resulting problem is a deterministic LQ problem whose optimal solution $\{U_t^*:t\in[0,T]\}$ will (in general) yield a sub-optimal estimate $S_T^*$ using~\eqref{eq:NL_est}.
The general problem considered here is substantially harder because ${\cal E}$ is {\em not} an ${\cal F}$-martingale: under arbitrary admissible controls, it is not even adapted to this filtration. \end{remark} \medskip We have now set the stage to derive the nonlinear filter via the solution to the dual optimal control problem. \section{Derivation of the Nonlinear Filter} \label{sec:main} Recall that an admissible input has the form $U_t = {\sf K}_t^\top Y_t + V_t$ where $t\in [0,T]$. The goal is to obtain a formula for the gain process ${\sf K} = \{{\sf K}_t:t\in[0,T]\}$ such that the best choice of $V = \{V_t:t\in[0,T]\}$ is zero. This choice of input class can be regarded as an instance of the method of ``change of control'' because $V$ serves as the new control variable~\cite[Ch. 3.1]{bensoussan2018estimation}. If $V_t\equiv 0$ then $\bar{Y}=\{\bar{Y}_t:t\in[0,T]\}$ solves the backward ODE \[ \frac{\,\mathrm{d} \bar{Y}_t}{\,\mathrm{d} t} = -A\bar{Y}_t - H {\sf K}_t^\top \bar{Y}_t,\quad \bar{Y}_T = f \] and the associated control is denoted $\bar{U}_t = {\sf K}_t^\top \bar{Y}_t$ for $t\in [0,T]$. With an arbitrary $V$, the solution is expressed as \begin{align*} Y_t = \bar{Y}_t + \tilde{Y}_t,\quad U_t = \bar{U}_t + \tilde{U}_t \end{align*} where $\tilde{Y}=\{\tilde{Y}_t: t\in [0,T]\}$ also solves a backward ODE: \begin{equation}\label{eq:tilde_y} \frac{\,\mathrm{d} \tilde{Y}_t}{\,\mathrm{d} t} = -A\tilde{Y}_t - H {\sf K}_t^\top \tilde{Y}_t - H V_t,\quad \tilde{Y}_T = 0 \end{equation} with $ \tilde{U}_t = {\sf K}_t^\top \tilde{Y}_t + V_t$ for $t\in [0,T]$. 
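To make the map $Q(\cdot)$ from the remark above concrete, here is a small numerical sketch (the generator matrix below is hypothetical, chosen only for illustration). Each $Q(e_i)$ is a nonnegative combination of rank-one terms $(e_j-e_i)(e_j-e_i)^\top$, hence symmetric and positive semi-definite, and it annihilates the all-ones vector since $(e_j-e_i)^\top \mathbf{1} = 0$:

```python
import numpy as np

def Q(A, i):
    """Q(e_i) = sum_{j != i} A_ij (e_j - e_i)(e_j - e_i)^T for a generator matrix A."""
    d = A.shape[0]
    out = np.zeros((d, d))
    for j in range(d):
        if j != i:
            v = np.zeros(d)
            v[j], v[i] = 1.0, -1.0
            out += A[i, j] * np.outer(v, v)
    return out

# Hypothetical 3-state generator: nonnegative off-diagonal rates, rows sum to zero.
A = np.array([[-3.0, 1.0, 2.0],
              [0.5, -1.5, 1.0],
              [2.0, 2.0, -4.0]])
for i in range(3):
    Qi = Q(A, i)
    assert np.allclose(Qi, Qi.T)                     # symmetric
    assert np.all(np.linalg.eigvalsh(Qi) >= -1e-12)  # positive semi-definite
    assert np.allclose(Qi @ np.ones(3), 0.0)         # (e_j - e_i)^T 1 = 0
```

Note also that $\operatorname{tr} Q(e_i) = 2\sum_{j\neq i} A_{ij}$, since each rank-one term has trace $2$.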
The error term is analogously split as $ {\cal E}_t = \bar{{\cal E}}_t + \tilde{{\cal E}}_t $, with \[ \begin{aligned} \bar{{\cal E}}_t &= \bar{Y}_0^\top(X_0-\pi_0)+\int_0^t\bar{U}_\tau^\top \,\mathrm{d} W_\tau + \int_0^t \bar{Y}_\tau^\top\,\mathrm{d} B_\tau \\ \tilde{{\cal E}}_t &= \tilde{Y}_0^\top(X_0-\pi_0)+\int_0^t\tilde{U}_\tau^\top \,\mathrm{d} W_\tau + \int_0^t \tilde{Y}_\tau^\top\,\mathrm{d} B_\tau \end{aligned} \] The optimal gain is described in the following theorem. \begin{theorem}\label{thm:opt-soln} Consider the optimal control problem \eqref{eq:opt-cont-finite}. For any non-zero $V \in C_{\cal Z}^m$, $$ J(U) \geq J(\bar{U}) $$ where the optimal gain is defined as follows: \begin{subequations}\label{eq:thm1} \begin{align} \,\mathrm{d} \bar{\pi}_t = A^\top \bar{\pi}_t \,\mathrm{d} t - {\sf K}_t (\,\mathrm{d} Z_t - H^\top \bar{\pi}_t \,\mathrm{d} t),\quad \bar{\pi}_0 = \pi_0 \label{eq:thm1-a}\\ {\sf K}_t = - {\sf E}\big((X_t-\bar{\pi}_t)(X_t-\bar{\pi}_t)^\top |{\cal Z}_t\big)HR^{-1},\quad t\in[0,T] \label{eq:thm1-b} \end{align} \end{subequations} \end{theorem} \medskip \subsection{Proof of Thm.~\ref{thm:opt-soln}} It is a simple calculation to see that \[ J(U) =J(\bar{U}) + J(\tilde{U}) + {\sf E}(\mathcal{C}) \] where the cross-term $\mathcal{C}$ is defined by \begin{align*} \mathcal{C} &= \underbrace{\tilde{Y}_0^\top(X_0-\pi_0)(X_0-\pi_0)^\top\bar{Y}_0}_{\text{term (i)}} \\ &+ \underbrace{\int_0^T \tilde{U}_t^\top R \bar{U}_t \,\mathrm{d} t + \tilde{Y}_t^\top \,\mathrm{d} \langle X,X^\top \rangle_t \bar{Y}_t}_{\text{term (ii)}} \\ &+ \underbrace{\int_0^T(\tilde{{\cal E}}_t\bar{U}_t^\top+\bar{{\cal E}}_t\tilde{U}_t^\top) \,\mathrm{d} W_t + \int_0^T (\tilde{{\cal E}}_t\bar{Y}_t^\top + \bar{{\cal E}}_t\tilde{Y}_t^\top) \,\mathrm{d} B_t}_{\text{term (iii)}} \end{align*} The strategy now is to choose ${\sf K}$ such that ${\sf E}(\mathcal{C})=0$ for all possible choices of ${\cal Z}$-adapted $V$. 
\newP{Term (i)} A standard technique of optimal control theory dictates that the terminal condition term be expressed as an integral by introducing a dual variable. Towards this goal, we introduce a vector-valued stochastic process $\bar{\pi} = \{\bar{\pi}_t:t\in[0,T]\}$ with $\bar{\pi}_0=\pi_0$ (the prior). At this point, we require only that $\bar{\pi}$ is a ${\cal Z}$-adapted process. The dynamics of this process will be defined later. Using the process $\bar{\pi}$, together with the requirement \eqref{eq:tilde_y} that $\tilde{Y}_T = 0$, we obtain $$ \tilde{Y}_0^\top(\pi_0-X_0)(\pi_0-X_0)^\top\bar{Y}_0 = -\int_0^T \,\mathrm{d} \big(\tilde{Y}_t^\top(\bar{\pi}_t-X_t)(\bar{\pi}_t-X_t)^\top\bar{Y}_t\big) $$ The differential is evaluated by an application of the product formula:\footnote{See Appendix~\ref{apdx:ito} for a justification of the product formula for the class of (non-adapted) stochastic processes arising in this paper.} \begin{align*} \,\mathrm{d}&\big(\tilde{Y}_t^\top(\bar{\pi}_t-X_t)(\bar{\pi}_t-X_t)^\top\bar{Y}_t\big)\\ =&\, \tilde{Y}_t^\top \Big\{ \big(\,\mathrm{d}\bar{\pi}_t - A^\top \bar{\pi}_t\,\mathrm{d} t+{\sf K}_tH^\top (X_t-\bar{\pi}_t)\,\mathrm{d} t-\,\mathrm{d} B_t\big)(\bar{\pi}_t-X_t)^\top\\ &+(\bar{\pi}_t-X_t)\big(\,\mathrm{d}\bar{\pi}_t - A^\top \bar{\pi}_t\,\mathrm{d} t+{\sf K}_tH^\top (X_t-\bar{\pi}_t)\,\mathrm{d} t-\,\mathrm{d} B_t\big)^\top\\ &+\,\mathrm{d}\langle(\bar{\pi}-X),(\bar{\pi}-X)^\top\rangle_t \Big\} \bar{Y}_t-V_t^\top H^\top (X_t-\bar{\pi}_t)(X_t-\bar{\pi}_t)^\top\bar{Y}_t\,\mathrm{d} t \end{align*} where $\langle(\bar{\pi}-X),(\bar{\pi}-X)^\top \rangle$ denotes the quadratic variation of the process $\bar{\pi}-X$. Note that each term in the integral is quadratic, either in $\tilde{Y}_t$ and $\bar{Y}_t$ or in $V_t$ and $\bar{Y}_t$. 
\newP{Term (ii)} The second term is expressed as: \begin{align*} \int_0^T& \tilde{U}_t^\top R \bar{U}_t \,\mathrm{d} t + \tilde{Y}_t^\top \,\mathrm{d} \langle X,X^\top \rangle_t \bar{Y}_t \\ &= \int_0^T \Big( \;\tilde{Y}_t^\top \big({\sf K}_tR{\sf K}_t^\top \,\mathrm{d} t + \,\mathrm{d} \langle X,X^\top \rangle_t\big)\bar{Y}_t + V_t^\top R{\sf K}_t^\top\bar{Y}_t\,\mathrm{d} t \Big) \end{align*} \newP{Term (iii)} It remains to tackle the two stochastic integrals involving the error processes. We begin by recalling~\eqref{eq:et_defn}: \begin{align*} {\cal E}_t &= Y_0^\top (X_0 - \pi_0) + \int_0^t U_\tau^\top \,\mathrm{d} W_\tau +\int_0^t Y_\tau^\top \,\mathrm{d} B_\tau \end{align*} Proceeding as in term~(i), the process $\bar{\pi}$ is again used to express the terminal condition term $Y_0^\top (X_0 - \pi_0)$ as an integral. Once again, using the product rule \begin{align*} \,\mathrm{d} \big({Y}_t^\top (X_t-\bar{\pi}_t)\big) =-Y_t^\top\big(\,\mathrm{d}\bar{\pi}_t & - A^\top \bar{\pi}_t\,\mathrm{d} t+{\sf K}_tH^\top (X_t-\bar{\pi}_t)\,\mathrm{d} t\big)\\ & + Y_t^\top\,\mathrm{d} B_t -V_t^\top H^\top (X_t-\bar{\pi}_t)\,\mathrm{d} t \end{align*} Therefore, \begin{align*} {\cal E}_t =& Y_0^\top(X_0-\pi_0) + \int_0^t U_\tau^\top \,\mathrm{d} W_\tau + \int_0^t Y_\tau^\top \,\mathrm{d} B_\tau\\ =&Y_t^\top (X_t-\bar{\pi}_t)+\int_0^t V_\tau^\top(\,\mathrm{d} W_\tau + H^\top( X_\tau-\bar{\pi}_\tau)\,\mathrm{d} \tau) \\ &+\int_0^tY_\tau^\top\big(\,\mathrm{d}\bar{\pi}_\tau - A^\top \bar{\pi}_\tau\,\mathrm{d} \tau+{\sf K}_\tau(\,\mathrm{d} W_\tau +H^\top(X_\tau-\bar{\pi}_\tau)\,\mathrm{d} \tau)\big) \end{align*} In order to reduce the notational burden, the following differential notation is adopted for the ${\cal Z}$-adapted stochastic processes ${\bar{I}}=\{{\bar{I}}_t:t\in[0,T]\}$ and $\mathcal{L}=\{\mathcal{L}_t:t\in[0,T]\}$: \begin{align*} \,\mathrm{d} {\bar{I}}_t&:= \,\mathrm{d} Z_t - H^\top\bar{\pi}_t \,\mathrm{d} t\\ \,\mathrm{d} 
\mathcal{L}_t&:= \,\mathrm{d} \bar{\pi}_t - A^\top \bar{\pi}_t\,\mathrm{d} t + {\sf K}_t \,\mathrm{d} {\bar{I}}_t \end{align*} The notation is used to express the error succinctly as $$ {\cal E}_t =Y_t^\top (X_t-\bar{\pi}_t) + \int_0^tY_\tau^\top\,\mathrm{d} \mathcal{L}_\tau+\int_0^t V_\tau^\top \,\mathrm{d} {\bar{I}}_\tau $$ In particular, upon splitting ${\cal E}_t = \bar{{\cal E}}_t+\tilde{{\cal E}}_t$, we have \begin{align*} \bar{{\cal E}}_t &=\bar{Y}_t^\top (X_t-\bar{\pi}_t) + \int_0^t\bar{Y}_\tau^\top\,\mathrm{d} \mathcal{L}_\tau\\ \tilde{{\cal E}}_t &=\tilde{Y}_t^\top (X_t-\bar{\pi}_t) + \int_0^t\tilde{Y}_\tau^\top\,\mathrm{d} \mathcal{L}_\tau+\int_0^t V_\tau^\top\,\mathrm{d} {\bar{I}}_\tau \end{align*} We thus obtain a useful expression for term (iii): {\small \begin{align*} &\int_0^T(\tilde{{\cal E}}_t\bar{U}_t^\top+\bar{{\cal E}}_t\tilde{U}_t^\top) \,\mathrm{d} W_t + \int_0^T (\tilde{{\cal E}}_t\bar{Y}_t^\top + \bar{{\cal E}}_t\tilde{Y}_t^\top) \,\mathrm{d} B_t\\ &=\int_0^T\Big\{ \tilde{Y}_t^\top(X_t-\bar{\pi}_t)\bar{Y}_t^\top{\sf K}_t +\bar{Y}_t^\top (X_t-\bar{\pi}_t)\tilde{Y}_t^\top{\sf K}_t +\big(\int_0^t\tilde{Y}_\tau^\top\,\mathrm{d}\mathcal{L}_\tau\big)\bar{Y}_t^\top {\sf K}_t \\ &\quad\quad+\big(\int_0^t\bar{Y}_\tau^\top\,\mathrm{d} \mathcal{L}_\tau\big)\tilde{Y}_t^\top{\sf K}_t +\big(\int_0^t V_\tau^\top\,\mathrm{d} {\bar{I}}_\tau\big)\bar{Y}_t^\top{\sf K}_t + V_t^\top\bar{Y}_t^\top (X_t-\bar{\pi}_t)\\ &\quad\quad+V_t^\top \big(\int_0^t\bar{Y}_\tau^\top\,\mathrm{d} \mathcal{L}_\tau\big)\Big\} \,\mathrm{d} W_t \\ &\quad+\int_0^T\Big\{\tilde{Y}_t^\top (X_t-\bar{\pi}_t)\bar{Y}_t^\top + \bar{Y}_t^\top (X_t-\bar{\pi}_t)\tilde{Y}_t^\top+\big(\int_0^t\tilde{Y}_\tau^\top\,\mathrm{d} \mathcal{L}_\tau\big)\bar{Y}_t^\top\\ &\quad\quad+\big(\int_0^t\bar{Y}_\tau^\top\,\mathrm{d} \mathcal{L}_\tau\big)\tilde{Y}_t^\top+\big(\int_0^t V_\tau^\top\,\mathrm{d} {\bar{I}}_\tau\big)\bar{Y}_t^\top\Big\}\,\mathrm{d} B_t \end{align*}} This concludes our program of expressing 
each of the three terms in $\mathcal{C}$ as an integral whose sub-terms contain $\bar{Y}_t,\tilde{Y}_t,V_t$. Now, every sub-term is a quadratic of one of two types: \begin{enumerate} \item The type 1 quadratic sub-terms contain $\bar{Y}_t$ and $\tilde{Y}_t$. An example of this type of quadratic is $\tilde{Y}_t^\top {\sf K}_t R {\sf K}_t^\top\bar{Y}_t$ in term (ii). \item The type 2 quadratic sub-terms contain $\bar{Y}_t$ and $V_t$. An example of this type is $V_t^\top R{\sf K}_t^\top\bar{Y}_t$ in term (ii). \end{enumerate} We express $ \mathcal{C} = \mathcal{C}_1 + \mathcal{C}_2 $, where $\mathcal{C}_1$ contains only the quadratic sub-terms of type 1 and $\mathcal{C}_2$ contains only the quadratic sub-terms of type 2. Upon collecting terms, we obtain {\small \begin{align*} \mathcal{C}_1&=\int_0^T \tilde{Y}_t^\top ({\sf K}_tR{\sf K}_t^\top \,\mathrm{d} t + \,\mathrm{d} \langle X,X^\top \rangle_t)\bar{Y}_t- \tilde{Y}_t^\top \,\mathrm{d}\big\langle(\bar{\pi}-X),(\bar{\pi}-X)^\top\big\rangle_t\bar{Y}_t\\ &+\int_0^T\Big(\tilde{Y}_t^\top(\bar{\pi}_t-X_t)\bar{Y}_t^\top + \bar{Y}_t^\top(\bar{\pi}_t-X_t)\tilde{Y}_t^\top\Big)\,\mathrm{d}\mathcal{L}_t \\ &+\int_0^T \Big(\big(\int_0^t \bar{Y}_\tau^\top\,\mathrm{d}\mathcal{L}_\tau\big)\tilde{Y}_t^\top {\sf K}_t + \big(\int_0^t \tilde{Y}_\tau^\top\,\mathrm{d}\mathcal{L}_\tau\big)\bar{Y}_t^\top {\sf K}_t\Big)\,\mathrm{d} W_t\\ &+\int_0^T \Big(\big(\int_0^t \bar{Y}_\tau^\top\,\mathrm{d}\mathcal{L}_\tau\big)\tilde{Y}_t^\top +\big(\int_0^t \tilde{Y}_\tau^\top\,\mathrm{d}\mathcal{L}_\tau\big)\bar{Y}_t^\top \Big)\,\mathrm{d} B_t \end{align*} } and {\small \begin{align*} &\mathcal{C}_2 = \int_0^T V_t^\top\big(R {\sf K}_t^\top +H^\top( X_t-\bar{\pi}_t)(X_t-\bar{\pi}_t)^\top\big)\bar{Y}_t \,\mathrm{d} t \\ &+\int_0^T \Big\{V_t^\top(X_t-\bar{\pi}_t)^\top \bar{Y}_t + V_t^\top \big(\int_0^t 
\bar{Y}_\tau^\top\,\mathrm{d}\mathcal{L}_\tau\big) + \big(\int_0^t V_\tau^\top \,\mathrm{d} {\bar{I}}_\tau\big) \bar{Y}_t^\top {\sf K}_t\Big\} \,\mathrm{d} W_t \\ &\quad\quad +\int_0^T\big(\int_0^t V_\tau^\top \,\mathrm{d} {\bar{I}}_\tau\big)\bar{Y}_t^\top \,\mathrm{d} B_t \end{align*}} In order to have ${\sf E}(\mathcal{C}) = {\sf E}(\mathcal{C}_1) + {\sf E}(\mathcal{C}_2) = 0$ for all possible choices of $\bar{Y},\tilde{Y}$ and for all possible choices of $\bar{Y},V$, we follow a two-step procedure: \begin{enumerate} \item In Step 1, we obtain an equation for $\bar{\pi}$ by setting \[ {\sf E}(\mathcal{C}_1) = 0 \] \item Given $\bar{\pi}$ from Step 1, we next derive a formula for the optimal gain ${\sf K}$ by imposing the requirement \[ {\sf E}(\mathcal{C}_2) = 0,\quad \forall V \in C_{\cal Z}^m \] \end{enumerate} The two-step procedure is inspired by the analogous procedure in classical LQ theory, where Step 1 is used to derive the Riccati equation and Step 2 is used to derive the formula for the optimal feedback gain; cf.,~\cite[Ch. 7.3.1]{bensoussan2018estimation}. 
\newP{Step 1} By inspection, we find that upon setting \begin{equation} \,\mathrm{d} \bar{\pi}_t = A^\top \bar{\pi}_t\,\mathrm{d} t - {\sf K}_t \,\mathrm{d} {\bar{I}}_t \,, \quad \bar{\pi}_0=\pi_0 \label{eq:barpit} \end{equation} which is as presented in the theorem statement~\eqref{eq:thm1-a}, we have $\,\mathrm{d}\mathcal{L}_t \equiv 0$, and $\mathcal{C}_1 $ reduces to \begin{align*} \mathcal{C}_1=\int_0^T &\tilde{Y}_t^\top ({\sf K}_tR{\sf K}_t^\top \,\mathrm{d} t+ \,\mathrm{d} \langle X, X^\top \rangle_t)\bar{Y}_t \\ -&\tilde{Y}_t^\top \,\mathrm{d}\big\langle(\bar{\pi}-X),(\bar{\pi}-X)^\top\big\rangle_t\bar{Y}_t \end{align*} A direct calculation gives the quadratic variation \[ \,\mathrm{d} \big\langle(\bar{\pi}-X),(\bar{\pi}-X)^\top\big\rangle_t = {\sf K}_tR{\sf K}_t^\top \,\mathrm{d} t + \,\mathrm{d} \langle X,X^\top \rangle_t \] and therefore, upon defining the dynamics of $\bar{\pi}$ according to~\eqref{eq:barpit}, \[ \mathcal{C}_1 = 0 \quad \text{a.s.} \] This is true for {\em any} choice of ${\cal Z}$-adapted gain process ${\sf K}$. \medskip Among the consequences are the following useful representations for the error processes: \begin{align} \label{eq:error_rep_bar} \bar{{\cal E}}_t &= \bar{Y}_t^\top (X_t-\bar{\pi}_t)+ \int_0^t \bar{Y}_\tau^\top \,\mathrm{d}\mathcal{L}_\tau = \bar{Y}_t^\top (X_t-\bar{\pi}_t) \end{align} and similarly, \begin{align*} \tilde{{\cal E}}_t &= \tilde{Y}_t^\top (X_t-\bar{\pi}_t)+ \int_0^t V_\tau^\top \,\mathrm{d}{\bar{I}}_\tau \end{align*} These expressions also hold for {\em any} ${\cal Z}$-adapted ${\sf K}$. \newP{Step 2} A formula for the gain ${\sf K}=\{{\sf K}_t:t\in[0,T]\}$ is obtained by enforcing $ {\sf E} [ \mathcal{C}_2 ] = 0 $. We first carry out some simplifications. 
It is a straightforward calculation to show that, with $\bar{\pi}$ defined according to~\eqref{eq:barpit}, the integrand of $\mathcal{C}_2$ is a perfect differential: \begin{equation}\label{eq:apdX_ref_1} \mathcal{C}_2 = \int_0^T \,\mathrm{d} \big(\bar{{\cal E}}_t \int_0^t V_\tau^\top \,\mathrm{d} {\bar{I}}_\tau\big) = \bar{{\cal E}}_T \int_0^T V_t^\top \,\mathrm{d} {\bar{I}}_t \end{equation} The following orthogonality condition is thus obtained upon using the representation~\eqref{eq:error_rep_bar} for $\bar{{\cal E}}_T$: \[ f^\top {\sf E} \Big((X_T - \bar{\pi}_T) \int_0^T V_t^\top \,\mathrm{d} {\bar{I}}_t\Big)={\sf E} (\mathcal{C}_2) =0 \] Since the function $f$ is arbitrary, we must have \[ {\sf E} \Big((X_T-\bar{\pi}_T) \int_0^T V_t^\top \,\mathrm{d} {\bar{I}}_t\Big) = 0 \] To obtain the formula for ${\sf K}$, the expression inside the expectation is written as an integral---essentially by reversing the steps in finding the perfect differential. 
This yields \begin{equation} \label{eq:apdX_ref_2} \begin{aligned} {\sf E} \Big(\int_0^T \big({\sf K}_t & R+(X_t-\bar{\pi}_t)(X_t-\bar{\pi}_t)^\top H\big)V_t\,\mathrm{d} t\Big) \\ &-\;\; {\sf E}\Big(\int_0^T\big(\int_0^tV_\tau^\top \,\mathrm{d}{\bar{I}}_\tau\big) (\,\mathrm{d} \bar{\pi}_t-A^\top X_t\,\mathrm{d} t)\Big)= 0 \end{aligned} \end{equation} For the equation to hold for arbitrary choices of $V$ and $\bar{I}$ (the latter being unrelated to the choice of $V$), both terms must vanish: \begin{align} {\sf E} \Big(\int_0^T \big({\sf K}_t R+(X_t-\bar{\pi}_t)(X_t-\bar{\pi}_t)^\top H\big)V_t\,\mathrm{d} t\Big) &=0 \label{eq:step2_1} \\ {\sf E}\Big(\int_0^T\big(\int_0^tV_\tau^\top \,\mathrm{d}{\bar{I}}_\tau\big) (\,\mathrm{d} \bar{\pi}_t-A^\top X_t\,\mathrm{d} t)\Big) &= 0 \label{eq:step2_2} \end{align} The formula for the optimal ${\sf K}$ is obtained by solving~\eqref{eq:step2_1}. Since $V_t$ and ${\sf K}_t$ are both ${\cal Z}_t$-measurable, the tower property of conditional expectation yields \[ {\sf E} \Big(\int_0^T ({\sf K}_tR+{\sf E}((X_t-\bar{\pi}_t)(X_t-\bar{\pi}_t)^\top H\mid {\cal Z}_t) )V_t\,\mathrm{d} t \Big) = 0 \] Since $V$ is an arbitrary ${\cal Z}$-adapted function, ${\sf K}_t$ is uniquely determined as an element of $L^2$: \[ {\sf K}_t=-{\sf E}((X_t-\bar{\pi}_t)(X_t-\bar{\pi}_t)^\top H\mid {\cal Z}_t)R^{-1},\quad t\in[0,T] \] This gives the formula for the optimal gain ${\sf K}$. \nobreak\hfill\mbox{\rule[0pt]{1.3ex}{1.3ex}} \begin{remark} Using the optimal gain, the equation~\eqref{eq:barpit} for $\bar{\pi}$ becomes \[ \,\mathrm{d} \bar{\pi}_t = A^\top \bar{\pi}_t \,\mathrm{d} t +{\sf E} [ (X_t-\bar{\pi}_t)(X_t-\bar{\pi}_t)^\top H\mid {\cal Z}_t ]R^{-1}\,\mathrm{d} {\bar{I}}_t,\quad \bar{\pi}_0 = \pi_0 \] The equation is not closed because we do not know ${\sf E}(X_t\mid {\cal Z}_t) =: \pi_t$. One could consider closing the equation by assuming a certainty equivalence principle that $\pi = \bar{\pi}$. 
In that case, \[ {\sf E}((X_t-\bar{\pi}_t)(X_t-\bar{\pi}_t)^\top H\mid {\cal Z}_t) = \big(\text{diag}(\pi_t) - \pi_t\pi_t^\top\big)H \] where $\text{diag}(\pi_t)$ is a diagonal matrix whose diagonal entries are the elements of the vector $\pi_t$, and one obtains the equation \[ \,\mathrm{d} {\pi}_t = A^\top {\pi}_t \,\mathrm{d} t +\big(\text{diag}(\pi_t) - \pi_t\pi_t^\top\big)H R^{-1}\,\mathrm{d} I_t,\quad {\pi}_0 = \pi_0 \] where $\,\mathrm{d} I_t = \,\mathrm{d} Z_t - H^\top \pi_t \,\mathrm{d} t$. This is the equation for the Wonham filter. \end{remark} \bibliographystyle{IEEEtran}
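As an illustrative sanity check of the Wonham filter equation above, the following is a minimal Euler-type discretization for a hypothetical two-state chain (the generator, observation matrix, noise level, and step size are all assumptions made for the sketch; the Euler step does not exactly preserve the probability simplex, so each iterate is clipped and renormalized):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 1e-3, 1000
A = np.array([[-1.0, 1.0], [2.0, -2.0]])   # generator: rows sum to zero
H = np.array([[1.0], [0.0]])               # observation model dZ = H^T X dt + dB
R = np.array([[0.05]])                     # observation noise covariance
Rinv = np.linalg.inv(R)

x = 0                                      # hidden state index; X_t = e_x
pi = np.array([0.5, 0.5])                  # filter initialized at the prior
for _ in range(steps):
    if rng.random() < -A[x, x] * dt:       # Markov jump with rate -A[x, x]
        x = 1 - x
    X = np.eye(2)[x]
    dZ = H.T @ X * dt + np.sqrt(R[0, 0] * dt) * rng.normal(size=1)
    dI = dZ - H.T @ pi * dt                # innovation increment
    cov = np.diag(pi) - np.outer(pi, pi)   # E[(X - pi)(X - pi)^T | Z] under pi
    pi = pi + A.T @ pi * dt + (cov @ H @ Rinv) @ dI
    pi = np.clip(pi, 0.0, None)
    pi = pi / pi.sum()                     # restore the simplex after the Euler step

assert np.all(pi >= 0.0) and abs(pi.sum() - 1.0) < 1e-9
```

Note that both the drift $A^\top\pi$ and the gain term preserve $\mathbf{1}^\top\pi$ exactly (since $A\mathbf{1}=0$ and $\mathbf{1}^\top(\text{diag}(\pi)-\pi\pi^\top)=0$), so the renormalization only corrects floating-point drift and the clipping.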
https://arxiv.org/abs/2102.12872
Digital almost nets
Digital nets (in base $2$) are the subsets of $[0,1]^d$ that contain the expected number of points in every not-too-small dyadic box. We construct sets that contain almost the expected number of points in every such box, but which are exponentially smaller than the digital nets. We also establish a lower bound on the size of such almost nets.
\section{Introduction} We call a subinterval of $[0,1]$ \emph{basic (in base $q$)} if it is of the form $\bigl[\frac{a}{q^k},\frac{a+1}{q^k}\bigr)$, for nonnegative integers $a$ and~$k$. A \emph{basic box} is a product of basic intervals, i.e., a set of the form $\prod_{i=1}^d\bigl[\frac{a_i}{q^{k_i}},\frac{a_i+1}{q^{k_i}}\bigr)$. If $q=2$, a basic interval is called a dyadic interval, and a basic box is called a dyadic box. We say that a set $P\subset [0,1]^d$ is an \emph{$(m,\veps)$-almost net in base $q$} if it is of size $\abs{P}=q^nm$ for some natural number $n$ and \[ (1-\veps)m\leq \abs{\beta \cap P}\leq (1+\veps)m \] for every basic box $\beta$ of volume $\vol(\beta)=q^{-n}$. In this paper, we are interested in constructions where the parameters $m$ and $\veps$ are independent of~$n$. In contrast, since the family of all axis-parallel boxes has finite VC-dimension, one can construct $(m,\veps)$-almost nets with $m$ linear in $n$ by sampling the points of $P$ at random, see \cite{hs_relative,cm_easy}. The dependence on $n$ is unavoidable for points sampled at random. The case $\veps=0$ of the above definition has been well studied. If $\veps=0$, almost nets are known as \emph{digital $(t,m,s)$-nets} or simply \emph{$(t,m,s)$-nets} (in base $q$) in the literature\footnote{The parameters $t,m,s$ in the definition of $(t,m,s)$-nets have a different meaning than in the present paper. They correspond to $\log_q m$, $\log_q(mq^n)$ and $d$ respectively in our notation.}. The adjective `digital' is due to the fact that basic intervals consist of numbers with specified initial digits in base~$q$. They are used extensively in discrepancy theory and numerical integration algorithms, and are the subject of numerous works, including a book devoted exclusively to them \cite{dick_pill_book}. It is known from \cite[Theorem~3]{XN} that, for each $d$, there exist arbitrarily large $(m,0)$-almost nets with $m\leq q^{5d}$, if $q$ is a prime power. 
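To make the definition concrete, the following sketch (not part of the paper's machinery) enumerates every dyadic box of volume $2^{-n}$ in dimension $d=2$ and counts the points of $P$ inside each. The two-dimensional Hammersley point set, a classical example in base $2$, passes the check with $m=1$ and $\veps=0$: every dyadic box of volume $2^{-n}$ contains exactly one of its points.

```python
from itertools import product

def box_counts(P, n):
    """Counts of points of P in every dyadic box of volume 2^{-n} in [0,1]^2."""
    counts = []
    for k1 in range(n + 1):
        k2 = n - k1
        for a, b in product(range(2 ** k1), range(2 ** k2)):
            c = sum(1 for (x, y) in P
                    if a / 2 ** k1 <= x < (a + 1) / 2 ** k1
                    and b / 2 ** k2 <= y < (b + 1) / 2 ** k2)
            counts.append(c)
    return counts

def is_almost_net(P, n, m, eps):
    """Check the (m, eps)-almost net condition over all dyadic boxes of volume 2^-n."""
    return all((1 - eps) * m <= c <= (1 + eps) * m for c in box_counts(P, n))

# Hammersley set: x = i/2^n, y = (bit-reversal of i)/2^n.
n = 4
N = 2 ** n
P = [(i / N, int(format(i, f"0{n}b")[::-1], 2) / N) for i in range(N)]
assert is_almost_net(P, n, m=1, eps=0)
```

The Hammersley set works because, for $k_1+k_2=n$, the top $k_1$ bits of $x$ and the top $k_2$ bits of $y$ together determine the index $i$ uniquely.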
On the other hand, $m$ must grow exponentially with $d$ for large enough nets \cite{martin_visentin} (see also \cite{schurer} for asymptotic analysis of the bound in \cite{martin_visentin}). In contrast to these results, for $\veps>0$, we construct $(m,\veps)$-almost nets with $m$ being only polynomial in~$d$. \begin{theorem}\label{mainthm} For any prime $q$, any $d\geq 2$, and any positive integers $m,n$ satisfying $m\geq 400d\log (dq)$, there exists a set $P\subset [0,1]^d$ of size $mq^n$ such that, for any basic box $\beta$ of volume $q^{-n}$, \begin{equation}\label{eq:main} \left(1-10\sqrt{\frac{d\log (dq)}{m}}\right)m\leq \abs{\beta\cap P}\leq\left(1+10\sqrt{\frac{d\log (dq)}{m}}\right)m. \end{equation} In particular, for every $0<\veps<1/2$ and every $d\geq 2$, there exist arbitrarily large $(m,\veps)$-almost nets in base $q$ with $m\leq 100\veps^{-2}d\log(dq)$. Furthermore, the set $P$ satisfying \eqref{eq:main} can be chosen to be an $(M,0)$-net in base $q$ with $M\leq\nobreak d^{4d}q^{6d}m$. \end{theorem} This result has an application in geometric Ramsey theory: A \emph{convex hole} in a finite set $S\subset \R^d$ is a subset $H\subset S$ in convex position and whose convex hull contains no other point of~$S$. An old problem of Valtr \cite{valtr} asks for the largest $h(d)$ such that every sufficiently large $S\subset \R^d$ in general position contains a hole of size~$h(d)$. Using \Cref{mainthm} one can show that $h(d)\leq 4^{d+o(d)}$, which is an improvement over the bound of $h(d)\leq 2^{7d}$ that can be obtained from~$(t,m,s)$-nets. The details of both bounds are in \cite{bukh_chao_holzman}. The construction behind \Cref{mainthm} is a minor modification of the construction in \cite{bukh_chao}. Whereas the construction in \cite{bukh_chao} uses primes in $\Z$, the present construction uses irreducible polynomials in $\F[x]$. The reason for this change is to make the denominators powers of the same prime~$q$. 
Furthermore, because the addition in $\F[x]$ satisfies the ultrametric inequality (with respect to the degree), and because we do not need to worry about boxes that are not basic, several details in the new construction are simpler. As such, we do not make any claims about the novelty. Our purpose in writing the present note is to record the details of the construction for its application to convex holes. We also hope that almost nets will find applications in many other areas that currently use the conventional nets.\medskip We do not know when the bound in \Cref{mainthm} is sharp. The following is the best lower bound we were able to prove. Its dependence on $\veps$ is close to optimal, as long as $\veps$ is not too small, but the dependence on $d$ is poor. In the special case $\veps=0$, we recover the lower bound $t= \Omega(s)$ in $(t,m,s)$-nets via a proof different from those in \cite{martin_visentin,schurer} (keeping in mind that $t$ and $s$ correspond to $\log_q m$ and $d$ respectively). \begin{theorem}\label{lowerbound} Assume that there exists an $(m,\varepsilon)$-almost net $P\subseteq [0,1]^d$ in base $q$. Then the following holds. If $\varepsilon\geq 1/2\sqrt{d}$, then \[m= \Omega\bigl(\frac{\log d}{q^2\varepsilon^2\log(1/\varepsilon)}\bigr).\] If $1/2\sqrt{d}\geq \varepsilon\geq e^{-d/8}$, then \[m= \Omega(q^{-2k-2}\varepsilon^{-2}),\] where $k=\frac{2\log(1/\varepsilon)}{\log d-\log\log(1/\varepsilon)}$. In particular, if $\varepsilon=\omega(d^{-t})$ for some constant $t$, then we have $m= \Omega_{q,t}(1/\varepsilon^2)$. If $\varepsilon=o(e^{-cd})$ for some constant $c$ such that $0<c<\min(1/8,1/q^2)$, then we get an exponential lower bound $m=\Omega\bigl(q^{-2}e^{c'd}\bigr)$, where $c'=2c(1-2\log q/\log(1/c))$. \end{theorem} \paragraph{Open problem.} It would be interesting to prove a result similar to \Cref{mainthm} that applies to all boxes, not only to basic boxes. 
It is possible to construct a set $P$ for which $\abs{\beta\cap P}$ is lower-bounded by $(1-o(1))m$ for all boxes $\beta$ by taking a union of translates of the set $P$ from \Cref{mainthm} in a manner similar to that in the second part of the proof of \cite[Theorem~2]{bukh_chao}. However, we have been unable to control $\abs{\beta\cap P}$ from above. \paragraph{Acknowledgment.} We are thankful to Ron Holzman for useful discussions, and to two anonymous referees for their help in improving this paper. \section{Proof of \texorpdfstring{\Cref{mainthm}}{Theorem 1}} We denote by $\F$ the finite field consisting of elements $0,1,2,\dotsc,q-1$ equipped with the usual mod-$q$ arithmetic. Let $t\eqdef \lceil 2\log_q d+2\rceil$. Since the number of irreducible polynomials of degree $t$ in $\F[x]$ is \[ \frac{1}{t}(\sum_{i|t}\mu(i)q^{t/i})\geq \frac{1}{t}(q^t-q^{t/2+1})\geq d, \] we may pick $d$ distinct irreducible polynomials $p_1,\dotsc,p_d$ of degree $t$ in $\F[x]$. We fix some such choice of polynomials for the duration of the proof. We associate each of these $d$ polynomials to the respective coordinate direction. We will be interested in \emph{canonical boxes}, which are the boxes of the form \[ B=\prod_{i=1}^d \left[\frac{a_i}{q^{k_it}},\frac{a_i+1}{q^{k_it}}\right) \] for some nonnegative integers $k_i$ and $0\leq a_i<q^{k_it}$, $i=1,2,\dotsc,d$. We say that a polynomial $f\in \Z[x]$ is a \emph{basic polynomial} if $\deg f<t$ and all of its coefficients are in~$\{0,1,\dotsc,q-1\}$. For an irreducible polynomial $p\in \F[x]$ of degree $t$ and a polynomial $f\in\F[x]$, we define the \emph{base-$p$ expansion of $f$} to be $f=\nobreak f_0+f_1p+\dotsb+f_{\ell}p^{\ell}$, where each $f_i$ is a basic polynomial. Put $r_p(f)\eqdef (f_0(1/q)+f_1(1/q)q^{-t}+\dotsb+f_{\ell}(1/q)q^{-\ell t})/q$, where we view the basic polynomials $f_0,f_1,\dotsc,f_{\ell}$ as polynomial functions on $\R$. 
In other words, if $f_i=\sum_{j<t} c_{i,j}x^j$ with $c_{i,j}\in\{0,1,\dotsc,q-1\}$, then, denoting by $C_i$ the concatenation $c_{i,0}c_{i,1}\ldots c_{i,t-1}$, the base-$q$ expansion of the real number $r_p(f)$ is \[ r_p(f)=0.C_0C_1\ldots C_{\ell}. \] Note that $r_p(f)\in [0,1)$. Define the function $r\colon \F[x]\to [0,1]^d$ by $r(f)\eqdef \bigl(r_{p_1}(f),\dotsc,r_{p_d}(f)\bigr)$. Recall that our aim is to construct a set $P\subset [0,1]^d$ whose intersection with any basic box of volume $q^{-n}$ has almost the expected number of points. \begin{definition} We say that a box $\beta$ is \emph{good} if $\beta$ is a basic box of volume $\vol(\beta)=q^{-n}$. Let $B$ be the smallest canonical box containing~$\beta$. We call $(B,\beta)$ a \emph{good pair}. \end{definition} Note that if $(B,\beta)$ is a good pair, then $\vol(B)\leq q^{-n+dt-1}$. Indeed, every basic interval is contained in an interval of the form $[a/q^{kt},(a+1)/q^{kt})$ that is at most $q^{t-1}$ times larger, and therefore $\vol(B)\leq q^{-n}(q^{t-1})^d\leq q^{-n+dt-1}$.\medskip Suppose $B$ is a canonical box. Write it as $B=\prod_i\left[a_i/q^{k_it},(a_i+1)/q^{k_it}\right)$, and consider $r^{-1}(B)$. The set $r^{-1}(B)\subseteq \F[x]$ consists of all solutions to the system \begin{align*} f&\equiv a_1'\pmod{p_1^{k_1}},\\ f&\equiv a_2'\pmod{p_2^{k_2}},\\ &\setbox0\hbox{$\equiv$}\mathrel{\makebox[\wd0]{\vdots}}\\ f&\equiv a_d'\pmod{p_d^{k_d}}, \end{align*} where $a_i'=f_{i,0}+f_{i,1}p_i+\dotsb+f_{i,k_i-1} p_i^{k_i-1}$ and $f_{i,0},f_{i,1},\ldots,f_{i,k_i-1}$ are the unique basic polynomials satisfying $a_i/q^{k_it}=(f_{i,0}(1/q)+f_{i,1}(1/q)q^{-t}+\ldots+f_{i,k_i-1}(1/q)q^{-(k_i-1)t})/q$. By the Chinese remainder theorem, the set $r^{-1}(B)$ is of the form $A(B)+D(B)\F[x]$ where $D(B)\eqdef p_1^{k_1}p_2^{k_2}\dotsb p_d^{k_d}$ and $A(B)$ is the unique element in $r^{-1}(B)$ of degree less than $t(k_1+\ldots+k_d)$. Note that $\deg D(B)=t(k_1+\ldots+k_d)=-\log_q (\vol(B))$. 
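The count of irreducible polynomials invoked at the start of the proof, $\frac1t\sum_{i\mid t}\mu(i)q^{t/i}$, can be checked numerically for small parameters. The sketch below is illustrative only: the helper names (`mobius`, `clmul`, and so on) are our own, and over $\F_2$ a polynomial is encoded as the bitmask of its coefficients, so that multiplication in $\F_2[x]$ is a carry-less product.

```python
def mobius(n):
    """Moebius function via trial division."""
    result, p, m = 1, 2, n
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def count_irreducible(q, t):
    """(1/t) * sum_{i | t} mu(i) q^{t/i}: monic irreducibles of degree t over F_q."""
    return sum(mobius(i) * q ** (t // i) for i in range(1, t + 1) if t % i == 0) // t

def clmul(a, b):
    """Carry-less product: multiplication in F_2[x] on bitmask-encoded polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def irreducibles_f2(t):
    """Brute force: degree-t polynomials over F_2 with no nontrivial factorization."""
    reducible = set()
    for a in range(2, 1 << t):            # factors of degree between 1 and t - 1
        for b in range(a, 1 << t):
            p = clmul(a, b)
            if p.bit_length() == t + 1:   # product has degree exactly t
                reducible.add(p)
    return [p for p in range(1 << t, 1 << (t + 1)) if p not in reducible]

assert count_irreducible(2, 3) == len(irreducibles_f2(3)) == 2
assert count_irreducible(2, 4) == len(irreducibles_f2(4)) == 3
assert count_irreducible(3, 2) == 3       # three monic irreducible quadratics over F_3
```

For instance, the two irreducible cubics over $\F_2$ found by the brute force are the bitmasks $11$ and $13$, i.e., $x^3+x+1$ and $x^3+x^2+1$.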
Given a good pair $(B,\beta)$, define \[L_B(\beta)\eqdef\{ g \in\F[x] : r\bigl(A(B)+gD(B)\bigr) \in \beta\}.\] \begin{claim}\label{claim:L} The set $\cL\eqdef \lbrace L_B(\beta):(B,\beta)\mbox{ is a good pair}\rbrace$ is of size at most $q^{4dt}$. \end{claim} \begin{proof} Let $(B,\beta)$ be a good pair. Write $B$ and $\beta$ in the form \[B=\prod_{i=1}^d \left[\frac{a_i}{q^{k_it}},\frac{a_i+1}{q^{k_it}}\right),\qquad \beta=\prod_{i=1}^d \left[\frac{a_i}{q^{k_it}}+\frac{b_i}{q^{(k_i+1)t}},\frac{a_i}{q^{k_it}}+\frac{c_i}{q^{(k_i+1)t}}\right).\] The condition $r\bigl(A(B)+gD(B)\bigr)\in \beta$ is equivalent to \begin{align*} A(B)+gD(B)&\in a_1'+p_1^{k_1}J_1\pmod{p_1^{k_1+1}},\\ A(B)+gD(B)&\in a_2'+p_2^{k_2}J_2\pmod{p_2^{k_2+1}},\\ &\setbox0\hbox{$\in$}\mathrel{\makebox[\wd0]{\vdots}}\\ A(B)+gD(B)&\in a_d'+p_d^{k_d}J_d\pmod{p_d^{k_d+1}}, \end{align*} where the sets $J_i$ consist of the basic polynomials $f$ such that $f(1/q)q^{t-1}\in [b_i,c_i)$. On the other hand, \begin{align*} A(B)+gD(B)&\equiv a_1'+(\alpha_1+g\delta_1)p_1^{k_1}\pmod{p_1^{k_1+1}},\\ A(B)+gD(B)&\equiv a_2'+(\alpha_2+g\delta_2)p_2^{k_2}\pmod{p_2^{k_2+1}},\\ &\setbox0\hbox{$\equiv$}\mathrel{\makebox[\wd0]{\vdots}}\\ A(B)+gD(B)&\equiv a_d'+(\alpha_d+g\delta_d)p_d^{k_d}\pmod{p_d^{k_d+1}} \end{align*} for some $\alpha_i,\delta_i\in\F[x]/(p_i),i=1,2,\dots,d$. Since $\dim_{\F} \F[x]/(p_i)=\deg p_i=t$, there are at most $q^{2dt}$ different choices for $(\alpha_i,\delta_i)_{i=1}^d$. Also, there are at most $q^{2dt}$ different choices for $(b_i,c_i)_{i=1}^d$ satisfying $0\leq b_i<c_i\leq q^t$. Since $L_B(\beta)$ is determined by $(\alpha_i,\delta_i,b_i,c_i)_{i=1}^d$, the claim is true. \end{proof} To each canonical box $B$ of volume between $q^{-n}$ and $q^{-n+dt-1}$ inclusive we assign a \emph{type}, so that boxes of the same type behave similarly. Formally, let $\A(B)$ be the polynomial obtained from the polynomial $A(B)$ by setting the coefficients of $1,x,x^2,\dotsc,x^{n-dt-1}$ to zero. 
Similarly, let $\D(B)$ be the polynomial obtained from $D(B)$ by setting the coefficients of $1,x,x^2,\dotsc,x^{n-3dt}$ to zero. The type of $B$ is then the pair $\T(B)\eqdef \bigl(\A(B),\D(B)\bigr)$. Note that, from $q^{-n}\leq \vol(B)\leq q^{-n+dt-1}$ and $\deg D(B)=-\log_q(\vol(B))$ it follows that \begin{equation}\label{eq:dbound} n-dt+1\leq\deg \D(B)\leq n. \end{equation} \begin{claim}\label{claim:type} The number of types is at most $q^{4dt}$. \end{claim} \begin{proof} Since $\deg A(B)<\deg D(B)\leq n$, only the $dt$ (resp.~$3dt$) leading coefficients of $\A(B)$ (resp.~$\D(B)$) may be non-zero. Hence, the number of types is at most $q^{dt}\times q^{3dt}=q^{4dt}$. \end{proof} For a type $\T=(\A,\D)$, let $\Y(\T)\eqdef \lbrace \A+g\D : g\in\F[x]\rbrace$. Note that if $\T=\T(B)$, then $\Y(\T)$ is an approximation to $r^{-1}(B)$. That is to say, the respective elements of $\Y(\T)$ and of $r^{-1}(B)$ differ only in low-degree coefficients. Let $\LD{k}$ denote polynomials of degree less than $k$ in $\F[x]$. Our construction will be a union of sets of the form $h+\LD{n-dt}$ where $\deg h< n+dt$.\smallskip We first prove that there is no difference in how the sets $\Y(\T)$ and $r^{-1}(B)$ intersect $h+\LD{n-dt}$. \begin{claim}\label{claim:shrink} Suppose $\T(B)=\bigl(\A(B),\D(B)\bigr)$. Then for any polynomial $h\in \LD{n+dt}$ and any polynomial $g$, $\A(B)+g\D(B)\in h+\LD{n-dt}$ if and only if $A(B)+gD(B)\in h+\LD{n-dt}$. \end{claim} \begin{proof} If $\A(B)+g\D(B)\in h+\LD{n-dt}$, then $\deg (\A(B)+g\D(B))< n+dt$. Since $\deg \A(B)< n$ and $\deg \D(B)\geq n-dt$, it follows that $\deg g< 2dt$. From the definition of $\A(B)$ and $\D(B)$, the coefficients of $x^{n-dt},x^{n-dt+1},\ldots$ in $\A(B)+g\D(B)$ are the same as the respective coefficients in $A(B)+gD(B)$. The opposite direction is similar. 
\end{proof} For a type $\T$ and $L\in\cL$ that satisfy $\T=\T(B)$ and $L=L_B(\beta)$ for some good pair $(B,\beta)$, define \[\Y_{\T}(L)\eqdef\lbrace \A+g\D:g\in L\rbrace.\] With this definition, $\Y_{\T}(L)$ is the approximation to $r^{-1}(\beta)$ induced by the approximation $\Y(\T)$ to~$r^{-1}(B)$. \begin{claim}\label{clm:ex} The set $\bY_{\T}(L)\eqdef\Y_{\T}(L)\cap \LD{n+dt}$ is of size exactly $q^{dt}$. \end{claim} \begin{proof} Let $(B,\beta)$ be a good pair such that $\T=\T(B)$ and $L=L_B(\beta)$. From the previous claim, we know that the size of $\bY_{\T}(L)$ is the same as the size of $r^{-1}(\beta)\cap \LD{n+dt}$. By the Chinese remainder theorem, each of the canonical boxes of volume $q^{-(\lfloor n/t\rfloor+d)t}$ contains equally many points from $r(\LD{n+dt})$. Since $n\leq (\lfloor n/t\rfloor+d)t$, the number of points in $\beta\cap r(\LD{n+dt})$ is equal to $q^{n+dt}\vol(\beta)=q^{dt}$. \end{proof} \begin{claim} Let $h$ be chosen uniformly from $\LD{n+dt}$. Then $\abs{\bY_{\T}(L)\cap (h+\LD{n-dt})}$ is $1$ with probability $q^{-dt}$ and is $0$ otherwise. \end{claim} \begin{proof} Let $u\in \bY_{\T}(L)$ be arbitrary. Clearly $\Pr[u\in h+\LD{n-dt}]=q^{-2dt}$. The events of the form $u\in h+\LD{n-dt}$ are mutually disjoint as $u$ ranges over $\bY_{\T}(L)$. Indeed, suppose $\T=(\A,\D)$ and $u,u'\in \bY_{\T}(L)$ are such that $u,u'\in h+\LD{n-dt}$ for some $h\in \LD{n+dt}$. We may write $u=\A+g\D$ and $u'=\A+g'\D$. Then $u-u'=(g-g')\D\in \LD{n-dt}$. Since $\deg \D\geq n-dt$ by \eqref{eq:dbound}, this implies that $g=g'$ and hence $u=u'$. In combination with \Cref{clm:ex}, this implies that \[\Pr\bigl[\,|\bY_{\T}(L)\cap (h+\LD{n-dt})|=1 \bigr]=q^{-2dt}q^{dt}=q^{-dt}.\qedhere\] \end{proof} Sample $q^{dt}m$ elements uniformly at random from $\LD{n+dt}$, independently of one another. Let $H$ be the resulting multiset, and consider the multiset $H+\LD{n-dt}\eqdef \{h+f : h\in H,f\in \LD{n-dt}\}$. 
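To make the sampling step concrete, here is a small numeric sketch in the degenerate case $q=2$, $d=t=1$ (all parameter values are our own toy choices, not part of the general construction): a polynomial over $\F_2$ of degree $<k$ is stored as a $k$-bit mask, addition in $\F_2[x]$ is XOR, and the map $r$ is taken to be the usual binary radical inverse.

```python
import random

# Toy instance of the sampling step: base q = 2, d = t = 1 (illustrative
# choices only).  A polynomial over F_2 of degree < k is a k-bit mask,
# addition in F_2[x] is XOR, and r is the binary radical-inverse map
# sending the coefficient of x^i to the (i+1)-st binary digit.
q, d, t, n, m = 2, 1, 1, 6, 4

def r(f, digits):
    """Radical inverse: the coefficient of x^i becomes binary digit i+1."""
    return sum(((f >> i) & 1) * q ** -(i + 1) for i in range(digits))

LD = lambda k: range(q ** k)  # polynomials of degree < k, as bitmasks

random.seed(0)
# Sample q^{dt} * m elements of LD(n + dt), independently and uniformly.
H = [random.randrange(q ** (n + d * t)) for _ in range(q ** (d * t) * m)]

# The multiset H + LD(n - dt), pushed forward to [0, 1) by r.
points = [r(h ^ f, n + d * t) for h in H for f in LD(n - d * t)]

# Each single translate r(h + LD(n - dt)) is a shifted grid: it meets every
# interval [j / 2^(n-dt), (j+1) / 2^(n-dt)) in exactly one point.
grid = sorted(r(H[0] ^ f, n + d * t) for f in LD(n - d * t))
```

In this one-dimensional case every translate is already perfectly equidistributed at the coarse scale $q^{-(n-dt)}$; the content of the argument above is that, in higher dimensions, the randomly translated grids remain $\veps$-equidistributed at the finer scale $q^{-n}$ with high probability.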
For a type $\T$ and $L\in\cL$ that satisfy $\T=\T(B)$ and $L=L_B(\beta)$ for some good pair $(B,\beta)$, define the random variable $N_{\T,L}\eqdef |\bY_{\T}(L)\cap (H+\LD{n-dt})|$. This random variable is distributed according to the binomial distribution $\operatorname{Binom}(q^{dt}m,q^{-dt})$. Let $\veps=\sqrt{33dt\log q/m}$. Note that $\veps<\sqrt{33d(2\log d+3\log q)/m}<10\sqrt{d\log (dq)/m}$, and in particular $\veps<1/2$. Hence, $\veps^2/2-\veps^3/2\geq \veps^2/4$. By the tail bounds for the binomial distribution \cite[Theorems A.1.11 and A.1.13]{alonspencer} we obtain \[\Pr\bigl[\,N_{\T,L}-m>\varepsilon m \bigr]<e^{-(\varepsilon^2/2-\varepsilon^3/2)m}<q^{-8dt}/2,\] \[\Pr\bigl[\,N_{\T,L}-m<-\varepsilon m\bigr]<e^{-\varepsilon^2m/2}<q^{-8dt}/2.\] From \Cref{claim:L,claim:type} and the union bound it then follows that there exists a choice of $H$ such that $N_{\T,L}$ is bounded between $(1-\veps)m$ and $(1+\veps)m$ whenever $\T=\T(B)$, $L=L_B(\beta)$ and $(B,\beta)$ is a good pair. By \Cref{claim:shrink}, this implies that for any good box $\beta$ of volume $q^{-n}$, the number of points in $\beta\cap r(H+\LD{n-dt})$ is bounded between $(1-\veps)m$ and $(1+\veps)m$. Hence the multiset $r(H+\LD{n-dt})$ in $[0,1)^d$ is of size exactly $mq^n$ and satisfies \eqref{eq:main}. Since the $r$\nobreakdash-image of every set of the form $h+\LD{n-dt}$, for $h\in \F[x]$, is a $(q^{dt},0)$-net, it follows that $r(H+\LD{n-dt})$ is an $(M,0)$-net with $M=q^{2dt}m\leq d^{4d}q^{6d}m$. To obtain a genuine set (rather than a multiset) satisfying the same conclusion, we may perturb the points of $r(H+\LD{n-dt})$ slightly to ensure distinctness. \section{Proof of \texorpdfstring{\Cref{lowerbound}}{Theorem 2}} We shall derive \Cref{lowerbound} from the following lemma. 
\begin{lemma}\label{lemma:lower} For any positive integers $n,d, q$ and positive real numbers $m,\varepsilon$ with $n\geq d\geq 2$, $q\geq 2$, and $\veps<1/4$, if there exists an $(m,\veps)$-almost net $P\subseteq [0,1]^d$ in base $q$ of size $q^nm$, then \[m= \Omega\bigl(\frac{\log(\binom{d}{k})}{q^{2k}\varepsilon^2\log(1/\varepsilon)}\bigr),\] for any integer $k$ such that $1\leq k\leq d/2$ and \begin{equation}\label{eq:epscond} 2\varepsilon\geq \binom{d}{k}^{-1/2} \end{equation} holds. \end{lemma} \begin{proof} Let $\cB$ be the box $[0,1/q^{n-2k})\times [0,1)^{d-1}$. For any point $v=(v_1,\ldots,v_d)\in \cB$, write its coordinates in base $q$ as $v_\ell=(0.v_{\ell,1}v_{\ell,2}\ldots)_q$. Noting that the first $n-2k$ base-$q$ digits of $v_1$ are zero, we let $X_1(v)$ be the first non-trivial digit of $v_1$, i.e., $X_1(v)\eqdef v_{1,n-2k+1}$. Similarly, let $X_\ell(v)\eqdef v_{\ell,1}$ for $\ell\geq 2$. The proof idea is to use almost independence of functions $X_1,\dotsc,X_d$ for a randomly chosen point of $\cB$. However, we do not directly appeal to the known bound on the size of probability spaces supporting almost independent random variables (see e.g. \cite{alon,aakmrx}) because those bounds are formulated for $\{0,1\}$-valued random variables, whereas $X_1,\dotsc,X_d$ take $q$ distinct values. \medskip Let $S\eqdef P\cap \cB$, and $t\eqdef \abs{S}$. Since $P$ is an $(m,\veps)$-almost net, it follows that $t$ is between $q^{2k}(1-\varepsilon)m$ and $q^{2k}(1+\varepsilon)m$. Assume $v^1,\ldots,v^t$ are all the points in $S$. For $x\in \R$, let $e_q(x)\eqdef \exp(2\pi ix/q)$ where $i=\sqrt{-1}$. Let $U$ be a $\binom{d}{k}$-by-$t$ matrix, where the rows are indexed by $\binom{[d]}{k}$ and the columns are indexed by $[t]$. The general entry of $U$ is \[ U_{J,\ell}\eqdef e_q\bigl(\sum_{j\in J}X_j(v^\ell)\bigr). \] Also, define $A\eqdef \tfrac{1}{t}UU^*$. \begin{claim} The diagonal terms in $A$ are all $1$. 
The off-diagonal terms are, in absolute value, bounded above by $2\varepsilon$. \end{claim} \begin{proof} The general term of $A$ is given by \[ A_{J_1,J_2}=\tfrac{1}{t}\sum_{\ell=1}^te_q\Bigl(\sum_{j\in J_1}X_j(v^\ell)-\sum_{j\in J_2}X_j(v^\ell)\Bigr). \] If $J_1=J_2$, this is clearly $1$. Suppose $J_1\neq J_2$. Note that, for any choice of $\alpha=(\alpha_j)_{j\in J_1\Delta J_2}$ with $\alpha_j\in \{0,1,\dotsc,q-1\}$, the set \[ \{v\in \cB : X_j(v)=\alpha_{j}\text { for }j\in J_1\Delta J_2\} \] is a basic box of volume $q^{-n+2k-|J_1\Delta J_2|}$. Thus, for any $\tau\in \{0,1,\dotsc,q-1\}$, the region \[B_\tau \eqdef \Bigl\{v\in\cB:\sum_{j\in J_1}X_j(v)-\sum_{j\in J_2}X_j(v)\equiv\tau\pmod q\Bigr\}\] can be partitioned into $q^{|J_1\Delta J_2|-1}$ many basic boxes of volume $q^{-n+2k-|J_1\Delta J_2|}$ each. Since $-n+2k-|J_1\Delta J_2|\geq -n$, it follows that the number of $\ell$ such that $v^\ell\in B_\tau$ is bounded between $q^{2k-1}(1-\varepsilon)m$ and $q^{2k-1}(1+\varepsilon)m$. Thus, \begin{align*} |A_{J_1,J_2}|=&\frac{1}{t}\abs*{\sum_{\tau=1}^q \abs{B_\tau\cap S}e_q(\tau)}\\ \leq& \frac{1}{t}\abs*{\sum_{\tau=1}^q q^{2k-1}m e_q(\tau)}+\frac{1}{t}\sum_{\tau=1}^q |\varepsilon q^{2k-1}me_q(\tau)|\\ =&\frac{\varepsilon q^{2k}m}{t}\leq 2\varepsilon.\qedhere \end{align*} \end{proof} We apply \cite[Theorem~2.1]{alon} to the matrix $(A+\bar{A})/2$. We obtain that, if $ \binom{d}{k}^{-1/2}\leq 2\varepsilon < 1/2, $ then $2q^{2k}(1+\varepsilon)m\geq 2\rank(A)\geq \rank\bigl((A+\bar{A})/2\bigr)= \Omega(\frac{\log(\binom{d}{k})}{\varepsilon^2\log(1/\varepsilon)})$. Therefore, \[m= \Omega\bigl(\frac{\log(\binom{d}{k})}{q^{2k}\varepsilon^2\log(1/\varepsilon)}\bigr).\qedhere\] \end{proof} The right-hand side of the bound in \Cref{lemma:lower} is a decreasing function of $k$ for $k\in [1,d/2]$. Therefore, we shall pick $k$ as small as possible. 
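As a quick numeric sanity check of this monotonicity (with arbitrary illustrative parameters of our own choosing, not taken from the text), one can evaluate the right-hand side $\log\binom{d}{k}/(q^{2k}\varepsilon^2\log(1/\varepsilon))$ for all $k\in[1,d/2]$: the exponential factor $q^{2k}$ dominates the polynomial growth of $\log\binom{d}{k}$, so the bound is strongest at the smallest admissible $k$.

```python
import math

# Numeric check (illustrative parameters only, not from the text) that
# log(C(d, k)) / (q^{2k} * eps^2 * log(1/eps)) decreases in k on [1, d/2]:
# q^{2k} grows exponentially while log(C(d, k)) grows only polynomially.
d, q, eps = 40, 2, 0.1

def rhs(k):
    return math.log(math.comb(d, k)) / (q ** (2 * k) * eps ** 2 * math.log(1 / eps))

values = [rhs(k) for k in range(1, d // 2 + 1)]

# For these parameters the side condition 2*eps >= C(d, k)^(-1/2)
# already holds at k = 1.
side_ok = 2 * eps >= math.comb(d, 1) ** -0.5
```

This confirms the monotonicity claim on one instance; the general statement follows from the same comparison of growth rates.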
If $\varepsilon\geq 1/(2\sqrt{d})$, then we may set $k=1$ and get \[m= \Omega\bigl(\frac{\log d}{q^2\varepsilon^2\log(1/\varepsilon)}\bigr).\] If $1/(2\sqrt{d})\geq \varepsilon\geq e^{-d/8}$, then we may set $k=\frac{2\log(1/\varepsilon)}{\log d-\log\log(1/\varepsilon)}$. From the assumption on~$\varepsilon$, we have $k\leq \log(1/\varepsilon)$. Therefore, \begin{align*} \binom{d}{k}\geq& (d/k)^k\\ \geq& \exp\bigl(\frac{2\log(1/\varepsilon)}{\log d-\log\log(1/\varepsilon)}(\log d-\log k)\bigr)\\ \geq& \exp\bigl(\frac{2\log(1/\varepsilon)}{\log d-\log\log(1/\varepsilon)}(\log d-\log\log(1/\varepsilon))\bigr)\\ \geq& \frac{1}{\varepsilon^2}, \end{align*} and so \eqref{eq:epscond} holds. Hence, we may apply \Cref{lemma:lower} with $\lceil k\rceil$ in place of $k$ and obtain \[m= \Omega(q^{-2k-2}\varepsilon^{-2}).\] In particular, if $\varepsilon=\omega(d^{-t})$ for some constant $t$, then $k$ is also a constant, and so $m= \Omega_{q,t}(1/\varepsilon^2)$ in this case. If $\varepsilon=o(e^{-cd})$ for some constant $c$ such that $0<c<\min(1/8,1/q^2)$, then the $(m,\varepsilon)$-almost net is also an $(m,e^{-cd})$-almost net, when $d$ is large enough. We may apply the result above with $e^{-cd}$ in place of $\varepsilon$. In this case, the calculations above yield $k=2cd/\log (1/c)$, and we get $m= \Omega\bigl(q^{-2}e^{c'd}\bigr)$ where $c'=2c\bigl(1-2\log q/\log(1/c)\bigr)$. \bibliographystyle{plain}
{ "timestamp": "2022-09-28T02:19:38", "yymm": "2102", "arxiv_id": "2102.12872", "language": "en", "url": "https://arxiv.org/abs/2102.12872", "abstract": "Digital nets (in base $2$) are the subsets of $[0,1]^d$ that contain the expected number of points in every not-too-small dyadic box. We construct sets that contain almost the expected number of points in every such box, but which are exponentially smaller than the digital nets. We also establish a lower bound on the size of such almost nets.", "subjects": "Combinatorics (math.CO); Computational Geometry (cs.CG)", "title": "Digital almost nets", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9820137895115187, "lm_q2_score": 0.7217432182679957, "lm_q1q2_score": 0.7087617928255937 }
https://arxiv.org/abs/2210.13071
Normal split submanifolds of rational homogeneous spaces
Let $M \subset X$ be a submanifold of a rational homogeneous space $X$ such that the normal sequence splits. We prove that $M$ is also rational homogeneous.
\section{Introduction} Let $M \subset \ensuremath{\mathbb{P}}^n$ be a projective manifold such that the normal sequence $$ 0 \rightarrow T_M \rightarrow T_{\ensuremath{\mathbb{P}}^n} \otimes \sO_M \rightarrow N_{M/\ensuremath{\mathbb{P}}^n} \rightarrow 0 $$ splits. Since the tangent bundle of the projective space is ample, the existence of a splitting map $T_{\ensuremath{\mathbb{P}}^n} \otimes \sO_M \twoheadrightarrow T_M$ yields that $T_M$ is also ample, hence $M \simeq \ensuremath{\mathbb{P}}^{\dim M}$ by Mori's theorem \cite{Mor79}. In fact a theorem of van de Ven \cite{vdv59} states that $M \subset \ensuremath{\mathbb{P}}^n$ is a linear subspace. It is natural to expect that similar statements exist for submanifolds of arbitrary homogeneous spaces. The case of tori has been solved by Jahnke \cite[Thm.3.2]{Jah05}, so we consider the following \begin{problem} Let $X \simeq G/P$ be a rational homogeneous space. Let $i: M \hookrightarrow X$ be a submanifold such that the normal sequence \begin{equation} \label{normal} 0 \rightarrow T_M \stackrel{\tau_i}{\rightarrow} T_X \otimes \sO_M \rightarrow N_{M/X} \rightarrow 0 \end{equation} splits, i.e. there exists a morphism $\holom{s_M}{T_X \otimes \sO_M}{T_M}$ such that $s_M \circ \tau_i = {\rm id}_{T_M}$. \begin{itemize} \item Describe $M$ up to isomorphism. \item Describe the embedding $i: M \hookrightarrow X$. \end{itemize} \end{problem} Both problems have been solved for special classes of rational homogeneous spaces like hyperquadrics, Grassmannians \cite[Thm.4.7, Prop.5.2]{Jah05}, irreducible compact Hermitian symmetric spaces \cite[Thm.2.3]{Din22} or certain blow-ups of the projective space \cite{Jah05, Li22}. 
In this paper we focus on the first problem and prove a structure result that holds without further assumptions on $X$, thereby improving \cite[Prop.2.2]{Jah05} and \cite[Thm.2.1]{Din22}: \begin{theorem} \label{theorem-split} Let $X \simeq G/P$ be a rational homogeneous space, and let $M \subset X$ be a submanifold that is normal split (cf. Definition \ref{definition-normal-split}). Then $M$ is rational homogeneous. \end{theorem} Since the vector bundle $T_X \otimes \sO_M$ is globally generated and $M \subset X$ is normal split, the tangent bundle $T_M$ is globally generated. Thus $M$ is homogeneous and by \cite[Satz 1]{BR62} we have $$ M \simeq A \times Y $$ where $A$ is an abelian variety and $Y$ is rational homogeneous. Thus in order to prove Theorem \ref{theorem-split} we can assume without loss of generality that $M \simeq A$ is abelian (cf. Lemma \ref{lemma-reductions}). Since the tangent bundle $T_X$ of a rational homogeneous space is big \cite{Ric74} and the tangent bundle of an abelian variety is not, one is tempted to argue as in the case of the projective space. Note however that the quotient of a big vector bundle is in general not big, so a priori neither $T_X \otimes \sO_A$ nor its quotient $T_A$ is big. Therefore our argument proceeds in a different fashion: denote by $$ \holom{\pi}{\ensuremath{\mathbb{P}}(T_X)}{X} $$ the Grothendieck projectivisation of the tangent bundle. The global sections of $T_X$ define a morphism \begin{equation} \label{definition-varphi} \holom{\varphi_X}{\ensuremath{\mathbb{P}}(T_X)}{\ensuremath{\mathbb{P}}(H^0(X, T_X))} \end{equation} that is generically finite onto its image. Since $A$ is abelian, the splitting map allows us to define a lifting $A \rightarrow \PP(T_X)$ such that the image $\ensuremath{\tilde{A}}$ is contracted by $\varphi_X$. If the fibres of $\varphi_X$ are smooth and rationally connected, the existence of $\ensuremath{\tilde{A}}$ is often enough to obtain a contradiction, cp. \cite{Jah05}. 
For an arbitrary rational homogeneous space $X$ the fibres of $\varphi_X$ are not necessarily smooth (cf. also Remark \ref{remark-fibres-varphi}), but we show in Lemma \ref{lemma-key} that manifolds contracted by $\varphi_X$ are always integral submanifolds with respect to the contact structure on $\PP(T_X)$. Theorem \ref{theorem-split} then follows without too much difficulty. Our result can be extended to normal split submanifolds of projective manifolds such that the tangent bundle $T_X$ is nef and big, cf. Remark \ref{remark-nef-and-big}. By a well-known conjecture of Campana and Peternell \cite{CP91}, these are exactly the rational homogeneous spaces. {\bf Acknowledgements.} The authors thank B. Fu, J. Liu, L. Manivel, B. Pasquier and R. \'Smiech for answering their ignorant questions about homogeneous spaces and contact manifolds. The second-named author thanks the Institut Universitaire de France for providing excellent working conditions. \section{Notation and basic facts} We work over the complex numbers; for general definitions we refer to \cite{Har77}. Varieties will always be supposed to be irreducible and reduced. We use the terminology of \cite{Deb01, KM98} for birational geometry and notions from the minimal model program. We follow \cite{Laz04a} for algebraic notions of positivity. Given a morphism $\holom{\varphi}{A}{B}$ between complex manifolds we denote by $$ \holom{\tau_\varphi}{T_A}{\varphi^* T_B} $$ the tangent map. For a submanifold $M \hookrightarrow X$ of a complex manifold $X$, we will simply denote the tangent map by $$ \holom{\tau_M}{T_M}{T_X \otimes \sO_M}. $$ \begin{definition} \label{definition-normal-split} Let $X$ be a complex manifold, and let $\holom{i}{M}{X}$ be a submanifold. We say that $M$ is normal split in $X$ if the inclusion $$ 0 \rightarrow T_M \stackrel{\tau_i}{\rightarrow} T_X \otimes \sO_M $$ admits a splitting morphism $s_M: T_X \otimes \sO_M \rightarrow T_M$ such that $s_M \circ \tau_i = {\rm id}_{T_M}$. 
\end{definition} \begin{remark*} While $\tau_i$ is determined by the embedding, the splitting morphism $s_M$ is not unique. Our statements will not depend on the choice of $s_M$. \end{remark*} \begin{lemma} \label{lemma-reductions} Let $X \simeq G/P$ be a rational homogeneous space, and let $M \subset X$ be a submanifold that is normal split. If $M$ is not rational homogeneous, there exists an abelian variety of positive dimension $A \subset X$ that is normal split. \end{lemma} \begin{proof} Since $M$ is homogeneous, we have by \cite[Satz 1]{BR62} that $M \simeq A \times Y$ with $A$ an abelian variety and $Y$ rational homogeneous. Since $M$ is not rational homogeneous, we have $\dim A>0$. Since $M$ is a product, the abelian variety $A$ is normal split in $M$. Thus by \cite[1.2.2]{Jah05} the abelian variety $A$ is normal split in $X$. \end{proof} \section{Geometry of the projectivised tangent bundle} Let $X$ be a complex manifold, and denote by $\holom{\pi}{\ensuremath{\mathbb{P}}(T_X)}{X}$ the Grothendieck projectivisation of its tangent bundle. Let $$ 0 \rightarrow \sO_{\ensuremath{\mathbb{P}}(T_X)} \rightarrow \pi^* \Omega_X \otimes \sO_{\ensuremath{\mathbb{P}}(T_X)}(1) \rightarrow T_{\ensuremath{\mathbb{P}}(T_X)/X} \rightarrow 0 $$ be the relative Euler sequence. Dualising and tensoring by $\sO_{\ensuremath{\mathbb{P}}(T_X)}(1)$ we obtain the canonical quotient map \begin{equation} \label{definition-q} \holom{q}{\pi^* T_X}{\sO_{\ensuremath{\mathbb{P}}(T_X)}(1)}. \end{equation} The composition of the tangent map $\holom{\tau_\pi}{T_{\ensuremath{\mathbb{P}}(T_X)}}{\pi^* T_X}$ with the canonical quotient map $q$ defines an exact sequence \begin{equation} \label{define-contact} 0 \rightarrow \sF \rightarrow T_{\ensuremath{\mathbb{P}}(T_X)} \stackrel{q \circ \tau_\pi}{\rightarrow} \sO_{\ensuremath{\mathbb{P}}(T_X)}(1) \rightarrow 0. 
\end{equation} The corank one distribution $\sF \subset T_{\ensuremath{\mathbb{P}}(T_X)}$ is a contact distribution \cite[Sect.13.2]{Bla10}, i.e. the map $$ \bigwedge^2 \sF \rightarrow T_{\ensuremath{\mathbb{P}}(T_X)}/\sF \simeq \sO_{\ensuremath{\mathbb{P}}(T_X)}(1) $$ induced by the Lie bracket on $\PP(T_X)$ is surjective. Assume now that $X \simeq G/P$ is rational homogeneous, so the tautological bundle $\sO_{\ensuremath{\mathbb{P}}(T_X)}(1)$ is nef and big \cite{Ric74}. Since $T_X$ is globally generated, the tautological line bundle $\sO_{\ensuremath{\mathbb{P}}(T_X)}(1)$ is globally generated and defines a morphism $$ \holom{\varphi_{|\sO_{\ensuremath{\mathbb{P}}(T_X)}(1)|}}{\ensuremath{\mathbb{P}}(T_X)}{\ensuremath{\mathbb{P}}(H^0(X, T_X))} $$ such that $\sO_{\ensuremath{\mathbb{P}}(T_X)}(1) \simeq \varphi_{|\sO_{\ensuremath{\mathbb{P}}(T_X)}(1)|}^* \sO_{\ensuremath{\mathbb{P}}(H^0(X, T_X))}(1)$. We denote by \begin{equation} \label{define-varphiX} \holom{\varphi_X}{\PP(T_X)}{Y} \end{equation} the Stein factorisation of $\varphi_{|\sO_{\ensuremath{\mathbb{P}}(T_X)}(1)|}$ and by $L$ the pull-back of $\sO_{\ensuremath{\mathbb{P}}(H^0(X, T_X))}(1)$ to $Y$. By construction $L$ is ample and $\sO_{\ensuremath{\mathbb{P}}(T_X)}(1) \simeq \varphi_X^* L$. \begin{remark} \label{remark-fibres-varphi} Since $\sO_{\ensuremath{\mathbb{P}}(T_X)}(1)$ is nef and big, the morphism $\varphi_X$ is birational. The canonical bundle of $\PP(T_X)$ is isomorphic to $\sO_{\ensuremath{\mathbb{P}}(T_X)}(-n)$, so it is trivial on the fibres of $\varphi_X$. Thus we have $$ K^*_Y \simeq L^{\otimes n} $$ and $Y$ is a Fano variety with canonical Gorenstein singularities. In fact it is not difficult to see that $Y$ has an induced singular contact structure \cite{CF02} given by a global section of $H^0(Y, \Omega_Y^{[1]} \otimes L)$. We can apply \cite[Thm.2]{Kaw91} to see that the irreducible components of $\varphi_X$-fibres are uniruled. 
We also know by \cite[Cor.1.5]{HM07} that the fibres are rationally chain-connected. Note however that this does not imply that the irreducible components are rationally chain-connected \cite[p.119]{HM07}. \end{remark} The following statement, inspired by \cite[Prop.5.9]{MOSWW}, is certainly well-known to experts; we give a detailed proof for the convenience of the reader: \begin{lemma} \label{lemma-key} Let $X \simeq G/P$ be a rational homogeneous space, and let $\holom{\pi}{\ensuremath{\mathbb{P}}(T_X)}{X}$ be its projectivised tangent bundle. Let $F \subset \ensuremath{\mathbb{P}}(T_X)$ be a smooth quasi-projective subvariety that is contracted by the birational map \eqref{define-varphiX} onto a point. Then $F$ is an integral variety with respect to the contact distribution $\sF \subset T_{\ensuremath{\mathbb{P}}(T_X)}$, i.e. one has $$ T_{F} \subset \sF \otimes \sO_{F}. $$ \end{lemma} We will see that this lemma is a translation of the fact that fibres of a symplectic resolution are isotropic with respect to the symplectic form \cite[Thm.1.2]{Wie03}, \cite{Nam01}, a strategy that appears in several places in the literature, e.g. \cite{Bea00, SW04}. Let us recall the relation between the two setups: let $\mathcal Y \rightarrow Y$ be the total space of $L^*$ with the zero section removed, and set $\mathcal X := \ensuremath{\mathbb{P}}(T_X) \times_Y \mathcal Y$. Denote the natural maps by $$ \holom{\tilde \varphi_X}{\mathcal X}{\mathcal Y}, \qquad \holom{\tau}{\mathcal X}{\ensuremath{\mathbb{P}}(T_X)}. $$ Then $\mathcal X \rightarrow \ensuremath{\mathbb{P}}(T_X)$ is a $\ensuremath{\mathbb{C}}^*$-bundle and in fact the total space of $\sO_{\ensuremath{\mathbb{P}}(T_X)}(-1)$ with the zero section removed \cite[Lemma 4.1]{Fu}. The spaces $\mathcal X$ and $\mathcal Y$ are symplectic and $\tilde \varphi_X$ is a symplectic resolution \cite[Lemma 4.2]{Fu}. 
More precisely the contact form on $\ensuremath{\mathbb{P}}(T_X)$ is obtained from the symplectic form $\tilde \omega$ on $\mathcal X$ by contracting with a vector field generated by the $\ensuremath{\mathbb{C}}^*$-action (cf. proof of \cite[Lemma 4.2]{Fu}). \begin{proof}[Proof of Lemma \ref{lemma-key}] By \eqref{define-contact} the contact distribution $\sF$ is the kernel of the contact map $$ T_{\ensuremath{\mathbb{P}}(T_X)} \rightarrow \sO_{\ensuremath{\mathbb{P}}(T_X)}(1) \rightarrow 0. $$ We identify the contact map with a section $$ \theta \in H^0(\ensuremath{\mathbb{P}}(T_X), \Omega_{\ensuremath{\mathbb{P}}(T_X)} \otimes \sO_{\ensuremath{\mathbb{P}}(T_X)}(1)). $$ Since $\sO_{\ensuremath{\mathbb{P}}(T_X)}(1) \simeq \varphi_X^* L$ we can find an analytic neighbourhood $U$ of $\fibre{\varphi_X}{\varphi_X(F)}$ such that $\sO_{\ensuremath{\mathbb{P}}(T_X)}(1)$ is trivial on $U$. Hence $$ \theta|_U \in H^0(U, \Omega_U) $$ and we are done if we show that $$ \theta|_F \in H^0(F, N_{F/\ensuremath{\mathbb{P}}(T_X)}^*), $$ where $N_{F/\ensuremath{\mathbb{P}}(T_X)}^* \subset \Omega_U \otimes \sO_F$ is the conormal bundle of $F$. {\em Proof of the claim.} We will reduce the claim to the corresponding statement for the symplectic forms: let $T \simeq \ensuremath{\mathbb{C}}^*$ be the fibre of $\mathcal Y \rightarrow Y$ over $\varphi_X(F)$. Then $F \times_Y T \simeq F \times T \subset \mathcal X$ and we set $$ \holom{\sigma}{F \times T}{T}, $$ where $\sigma$ is the restriction of $\tilde \varphi_X$ to $F \times T$. Then by \cite[Lemma 2.9]{Kal} there exists a dense open subset $T_0 \subset T$ and a holomorphic two-form $\omega_T$ on $T_0$ such that $$ \sigma^* \omega_T = \eta|_{F \times T_0} $$ where $\eta$ is the image of $\tilde \omega$ under the restriction map $$ H^0(\mathcal X, \Omega^2_{\mathcal X}) \rightarrow H^0(F \times T, \Omega^2_{\mathcal X} \otimes \sO_{F \times T}) \rightarrow H^0(F \times T, \Omega^2_{F \times T}). 
$$ Since $T_0$ is a curve, the holomorphic two-form $\omega_T$ is zero, hence $\eta=0$. By \cite[II, Ex.5.16]{Har77} the conormal sequence $$ 0 \rightarrow N_{F \times T/\mathcal X}^* \rightarrow \Omega_{\mathcal X} \otimes \sO_{F \times T} \rightarrow \Omega_{F \times T} \rightarrow 0 $$ induces a filtration of the kernel $\sK$ of the surjection $\Omega^2_{\mathcal X} \otimes \sO_{F \times T} \rightarrow \Omega^2_{F \times T}$: \begin{equation} \label{filtrate} 0 \rightarrow \bigwedge^2 N_{F \times T/\mathcal X}^* \rightarrow \sK \rightarrow N_{F \times T/\mathcal X}^* \otimes \Omega_{F \times T} \rightarrow 0. \end{equation} By what precedes we know that $$ \tilde \omega|_{F \times T} \in H^0(F \times T, \sK), $$ we will now deduce $\theta|_F \in H^0(F, N_{F/\ensuremath{\mathbb{P}}(T_X)}^*)$: by the discussion before the proof $\theta|_F$ is obtained from $\tilde \omega|_{F \times T}$ by contracting with a vector field $v$ generated by the $\ensuremath{\mathbb{C}}^*$-action. Since this vector field is mapped onto zero by $\tau$, the contraction with a $2$-form that is a pull-back from $F$ is equal to zero. Since $$ N_{F \times T/\mathcal X}^* = \tau^* N_{F/\ensuremath{\mathbb{P}}(T_X)}^* $$ we obtain from \eqref{filtrate} that the contraction map $\sK \stackrel{\lrcorner v}{\longrightarrow} \Omega_{\ensuremath{\mathbb{P}}(T_X)}|_F$ factors through a morphism $$ N_{F \times T/\mathcal X}^* \otimes \Omega_{F \times T} \stackrel{\lrcorner v}{\longrightarrow} \Omega_{\ensuremath{\mathbb{P}}(T_X)}|_F. 
$$ Yet $$ \Omega_{F \times T} \simeq \tau^* \Omega_F \oplus \sigma^* \Omega_T, $$ so if we decompose $\tilde \omega|_{F \times T}=\tilde \omega_1 + \tilde \omega_2$ according to the direct sum $$ N_{F \times T/\mathcal X}^* \otimes \Omega_{F \times T} \simeq \left( \tau^* N_{F/\ensuremath{\mathbb{P}}(T_X)}^* \otimes \tau^* \Omega_F \right) \oplus \left( \tau^* N_{F/\ensuremath{\mathbb{P}}(T_X)}^* \otimes \sigma^* \Omega_T \right) $$ we see that $\tilde \omega_1 \lrcorner v=0$ while $$ \tilde \omega_2 \lrcorner v \in H^0(F, N_{F/\ensuremath{\mathbb{P}}(T_X)}^*). $$ Since $\theta|_F =(\tilde \omega|_{F \times T}) \lrcorner v = \tilde \omega_2 \lrcorner v$, this shows the claim. \end{proof} The following example shows that the crucial point in Lemma \ref{lemma-key} is that the contact form on $\PP(T_X)$ is a reflexive pull-back from the singular space $Y$, i.e. we use that $T_X$ is nef {\em and big}. \begin{example} Let $X=\ensuremath{\mathbb{C}}^2/\Lambda$ be an abelian surface, so $X$ is homogeneous and the natural map \eqref{define-varphiX} is given by the projection $$ \holom{\varphi_X}{\PP(T_X) \simeq X \times \ensuremath{\mathbb{P}}^1}{\ensuremath{\mathbb{P}}^1}. $$ We will now follow the notation of \cite[Sect.13.2]{Bla10} for the local computation of the contact form $\theta$: for linear coordinates $z_1, z_2$ on $\ensuremath{\mathbb{C}}^2$ the contact form on $\PP(T_X)$ is $$ \theta = \sum_{i=1}^2 d z_i \otimes \zeta_i $$ where $\sum_{i=1}^2 \zeta_i dz_i$ are fibrewise coordinates on $\ensuremath{\mathbb{P}}(T_{X,x}) \simeq {\bf P}(\Omega_{X,x})$ (where ${\bf P}(\Omega_{X,x})$ is the space of lines in $\Omega_{X,x}$). 
Assume now that $A \subset X$ is an elliptic curve corresponding to the linear subspace $\ensuremath{\mathbb{C}} z_1 \subset \ensuremath{\mathbb{C}}^2$; then $$ \frac{\partial}{\partial z_1} \mapsto \frac{\partial}{\partial z_1}, \qquad \frac{\partial}{\partial z_2} \mapsto 0 $$ defines a splitting $T_X \otimes \sO_A \rightarrow T_A$ of the tangent map $\tau_A$. The curve $\ensuremath{\tilde{A}} \subset \PP(T_X)$ is contained in $X \times {\bf P}(dz_1) \subset X \times {\bf P}(\Omega_{X})$, so it is contracted by $\varphi_X$. The restriction of $\theta$ to $X \times {\bf P}(dz_1)$ is simply the form $dz_1$; in particular the composition $$ T_{\ensuremath{\tilde{A}}} \hookrightarrow T_{\PP(T_X)} \otimes \sO_{\ensuremath{\tilde{A}}} \stackrel{v \mapsto dz_1(v)}{\longrightarrow} \sO_{\ensuremath{\tilde{A}}} $$ is surjective. \end{example} We make a basic observation: \begin{lemma} \label{lemma-observation} Let $X$ be a projective manifold, and let $\holom{\pi}{\ensuremath{\mathbb{P}}(T_X)}{X}$ be its projectivised tangent bundle. Let $A \subset X$ be an abelian variety that is normal split with splitting map $\holom{s_A}{T_X \otimes \sO_A}{T_A}$ and fix a quotient $\holom{q_A}{T_A}{\sO_A}$. Let $\holom{\sigma_A}{A}{\ensuremath{\mathbb{P}}(T_X)}$ be the lifting determined by the quotient line bundle $$ \holom{q_A \circ s_A}{T_X \otimes \sO_A}{\sO_A}, $$ and denote by $\ensuremath{\tilde{A}} \subset \PP(T_X)$ its image. Since $\ensuremath{\tilde{A}}$ maps isomorphically onto its image in $X$, we can consider the tangent morphism $$ \tau_A: T_{\tilde A} \simeq T_A \rightarrow T_X \otimes \sO_A \simeq \pi^* (T_X) \otimes \sO_{\ensuremath{\tilde{A}}}. $$ Then the composition with the canonical quotient map \eqref{definition-q} gives a surjective map $$ q \circ \tau_A: T_{\tilde A} \simeq T_A \rightarrow \pi^* (T_X) \otimes \sO_{\ensuremath{\tilde{A}}} \rightarrow \sO_{\ensuremath{\mathbb{P}}(T_X)}(1) \otimes \sO_{\ensuremath{\tilde{A}}}. 
$$ \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma-observation}] The lifting $\sigma_A$ is determined by the quotient line bundle $\holom{q_A \circ s_A}{T_X \otimes \sO_A}{\sO_A}$, so by the universal property of the projectivisation \cite[App.A]{Laz04a} the pull-back of the canonical quotient map $\holom{q}{\pi^* T_X}{\sO_{\ensuremath{\mathbb{P}}(T_X)}(1)}$ via $\sigma_A$ identifies with $q_A \circ s_A$. Since $s_A$ is a splitting map for $\holom{\tau_A}{T_A}{T_X \otimes \sO_A}$, the composition $q_A \circ s_A \circ \tau_A = q_A$ is surjective. \end{proof} \begin{remark} \label{remark-lift-rational} It is instructive to compare the situation with liftings of rational curves: let $X$ be a smooth quadric surface, and let $l \subset X$ be a line of a ruling. Then $l \subset X$ is normal split: we have $$ T_X \otimes \sO_l \simeq T_l \oplus \sO_l, $$ so the trivial quotient $T_X \otimes \sO_l \rightarrow \sO_l$ determines a lifting of $l$ to $\ensuremath{\mathbb{P}}(T_X)$ such that the image $\tilde l$ is contained in a $\varphi_X$-fibre. However, the morphism $$ T_{\tilde l} \rightarrow \pi^* (T_X) \otimes \sO_{\tilde l} \rightarrow \sO_{\tilde l} $$ is not surjective: we have $T_{\tilde l} \simeq \sO_{\ensuremath{\mathbb{P}}^1}(2)$, so any morphism to a trivial bundle must vanish. The difference from Lemma \ref{lemma-observation} is that the trivial quotient $T_X \otimes \sO_l \rightarrow \sO_l$ does not factor through a morphism $T_X \otimes \sO_l \rightarrow T_l$. \end{remark} \begin{proof}[Proof of Theorem \ref{theorem-split}] We argue by contradiction and assume that the submanifold $M \subset X$ is not rational homogeneous. By Lemma \ref{lemma-reductions} we can assume without loss of generality that $M \simeq A$ is an abelian variety. We fix a splitting map $\holom{s_A}{T_X \otimes \sO_A}{T_A}$ and a trivial quotient $\holom{q_A}{T_A}{\sO_A}$. 
Denote by $\ensuremath{\tilde{A}} \subset \PP(T_X)$ the lifting of $A$ to $\PP(T_X)$ determined by the quotient $q_A \circ s_A$. By Lemma \ref{lemma-observation} the map $$ q \circ \tau_A: T_{\tilde A} \rightarrow \sO_{\ensuremath{\mathbb{P}}(T_X)}(1) \otimes \sO_{\ensuremath{\tilde{A}}} $$ is surjective. Since $\ensuremath{\tilde{A}} \subset \PP(T_X)$, the tangent map $\tau_A$ factors through the tangent map $$ \tau_{\ensuremath{\tilde{A}}} : T_{\ensuremath{\tilde{A}}} \rightarrow T_{\PP(T_X)} \otimes \sO_{\ensuremath{\tilde{A}}}, $$ and we have a commutative diagram $$ \xymatrix{ T_{\ensuremath{\tilde{A}}} \ar[rr]^{\tau_{\ensuremath{\tilde{A}}}} \ar[d]^{\simeq} & & T_{\PP(T_X)} \otimes \sO_{\ensuremath{\tilde{A}}} \ar[d]^{\tau_\pi} & \\ T_A \ar[rr]^{\tau_{A}} & & \pi^* T_X \otimes \sO_A \ar[r]^q & \sO_{\ensuremath{\mathbb{P}}(T_X)}(1) \otimes \sO_{\ensuremath{\tilde{A}}} } $$ By construction the contact map \eqref{define-contact} is the composition $q \circ \tau_\pi$. Since $q \circ \tau_A$ is surjective, we obtain that $q \circ \tau_\pi \circ \tau_{\ensuremath{\tilde{A}}}$ is surjective; in particular $\ensuremath{\tilde{A}} \subset \PP(T_X)$ is not integral with respect to the contact structure. Yet the lifting $A \rightarrow \PP(T_X)$ is given by a trivial line bundle, so $$ \varphi_X^* L \otimes \sO_{\ensuremath{\tilde{A}}} \simeq \sO_{\ensuremath{\mathbb{P}}(T_X)}(1) \otimes \sO_{\ensuremath{\tilde{A}}} \simeq \sO_{\ensuremath{\tilde{A}}}. $$ Since $L$ is an ample line bundle on $Y$, we see that $\ensuremath{\tilde{A}}$ is contracted by $\varphi_X$ onto a point. Thus we have a contradiction to Lemma \ref{lemma-key}. \end{proof} \begin{remark} \label{remark-nef-and-big} Let us conclude by indicating a variant of Theorem \ref{theorem-split} under the weaker assumption that $T_X$ is nef and big. We claim that in this case a normal split submanifold $M \subset X$ is a Fano manifold with semiample tangent bundle. 
\begin{proof} The basepoint-free theorem implies that $T_X$ is semiample in the sense of \cite{Fuj92}, cf. \cite[Prop.5.5]{MOSWW} for a proof. In particular we can define the birational morphism \eqref{define-varphiX} using the global sections of some positive multiple of $\sO_{\PP(T_X)}(1)$, and Lemma \ref{lemma-key} holds for this morphism. Since $M \subset X$ is normal split, its tangent bundle $T_M$ is also semiample. Moreover, by \cite[Main Thm.]{DPS94} there exists a finite \'etale cover $\holom{\eta}{M'}{M}$ such that $M'$ admits a smooth fibration $\holom{f}{M'}{A}$ onto an abelian variety such that the general fibre is Fano. Arguing by contradiction, we assume that $A$ is not a point. Since $\eta$ is \'etale, the splitting map $\holom{s_M}{T_X \otimes \sO_M}{T_M}$ lifts to a splitting map $$ \holom{\tilde s_M}{\eta^* (T_X \otimes \sO_M)}{\eta^* T_M \simeq T_{M'}}. $$ Since $T_{M'}$ is semiample \cite[Lemma 1]{Fuj92}, the tangent map $\holom{\tau_f}{T_{M'}}{T_A}$ splits by \cite[Cor.4]{Fuj92}. Thus we have $T_{M'} \simeq T_{M'/A} \oplus \sO_{M'}^{\oplus \dim A}$ and we can use a quotient line bundle $T_{M'} \twoheadrightarrow \sO_{M'}$ to define a lifting $M' \rightarrow \PP(T_X)$ such that the image is contracted by $\varphi_X$ onto a point. Now the proof of Theorem \ref{theorem-split} yields a contradiction. \end{proof} \end{remark}
https://arxiv.org/abs/2210.13071
Normal split submanifolds of rational homogeneous spaces
Let $M \subset X$ be a submanifold of a rational homogeneous space $X$ such that the normal sequence splits. We prove that $M$ is also rational homogeneous.
https://arxiv.org/abs/2010.00119
On the size of $A+λA$ for algebraic $λ$
For a finite set $A\subset \mathbb{R}$ and real $\lambda$, let $A+\lambda A:=\{a+\lambda b :\, a,b\in A\}$. Combining a structural theorem of Freiman on sets with small doubling constants together with a discrete analogue of Prékopa--Leindler inequality we prove a lower bound $|A+\sqrt{2} A|\geq (1+\sqrt{2})^2|A|-O({|A|}^{1-\varepsilon})$ which is essentially tight. We also formulate a conjecture about the value of $\liminf |A+\lambda A|/|A|$ for an arbitrary algebraic $\lambda$. Finally, we prove a tight lower bound on the Lebesgue measure of $K+\mathcal{T} K$ for a given linear operator $\mathcal{T}\in \operatorname{End}(\mathbb{R}^d)$ and a compact set $K\subset \mathbb{R}^d$ with fixed measure. This continuous result supports the conjecture and yields an upper bound in it.
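Before the formal introduction, a quick computational warm-up (our illustration, not part of the paper): since $1$ and $\sqrt{2}$ are linearly independent over $\mathbb{Q}$, elements of $\mathbb{Z}[\sqrt{2}]$ can be encoded exactly as integer pairs, avoiding floating-point comparisons of real numbers. For a structureless set such as an integer interval, the sumset $A+\sqrt{2}A$ is then as large as possible, of size $|A|^2$; the interest of the theorem below is that structured sets can push the ratio $|A+\sqrt{2}A|/|A|$ all the way down to $(1+\sqrt{2})^2$.

```python
def sumset_sqrt2(A):
    """|A + sqrt(2)*A| for A given as pairs (a, b) encoding a + b*sqrt(2).

    Since sqrt(2) is irrational, a + b*sqrt(2) determines (a, b) uniquely,
    so exact integer pairs give exact sumset cardinalities.
    """
    # sqrt(2) * (a + b*sqrt(2)) = 2b + a*sqrt(2), i.e. (a, b) -> (2b, a)
    return {(a1 + 2 * b2, b1 + a2) for (a1, b1) in A for (a2, b2) in A}

# An arithmetic progression of integers has no additive structure relative
# to sqrt(2), so the sumset attains the maximal size |A|^2.
n = 20
A = {(x, 0) for x in range(n)}
S = sumset_sqrt2(A)
print(len(S), n * n)  # 400 400
```

All pairs $(x_1, x_2)$ with $x_1, x_2 \in \{0, \dots, n-1\}$ occur, so the sumset is a full $n\times n$ grid.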
\section{Introduction} Let $A$ be a finite non-empty set with elements in a commutative ring $R$ and let $\lambda$ be an element of $R$. A number of papers are devoted to bounding $|A+\lambda A|=|\{x+\lambda y:x,y\in A\}|$ in terms of $|A|$, and to various generalizations of this problem. In particular, sums of several dilates have been intensively studied. Sums of the form $A+\mathcal{L} A$ for linear maps $\mathcal{L}$ (when $R$ is a module over another ring) were also considered in \cite{Mudgal2019}. In \cite{Konyagin2006} it is proved that $$ |A+\lambda A|\ge C\frac{|A|\log |A|}{\log \log |A|} $$ for an absolute constant $C$, any finite $A\subset \mathbb{R}$ with $|A|>2$, and any transcendental $\lambda\in \mathbb{R}$ (it is easy to see that for all transcendental $\lambda$ the minimal value of $|A+\lambda A|$ for fixed $|A|$ is the same). This bound was improved in \cite{Sanders2008} to $|A|\log^{4/3-o(1)} |A|$. By proving stronger versions of Freiman's theorem, it was further improved to $(\log{|A|})^{c\log\log{|A|}}|A|$ in \cite{schoen2011} and finally to $e^{\log^c{|A|}}|A|$ for some $c>0$ in \cite{sanders2012}. In the other direction, there exist arbitrarily large sets $A$ such that $|A+\lambda A|\le e^{C\log^{1/2}{|A|}}|A|$ for an absolute constant $C>0$. In \cite{BUKH2008} it is proved, among other bounds, that $|A+3A|\geqslant 4|A|-O(1)$ for $A\subset \mathbb{R}$ and $|\lambda_1 A+\ldots+\lambda_k A|\geqslant (|\lambda_1| +\ldots+|\lambda_k|)|A|+o(|A|)$ for coprime integers $\lambda_1,\ldots,\lambda_k$. In \cite{Balog2014} it is proved that $|A+\frac{p}{q}A|\geqslant (p+q)|A|-O(1)$ for a rational number $\frac pq$, where $p,q$ are coprime positive integers. In \cite{Chen2018} a bound \begin{equation*}\label{eq:Chen} |A+\lambda A|\geqslant (1+\lambda-\varepsilon)|A| \end{equation*} was proved for any fixed real $\lambda\geqslant 1$ and $\varepsilon>0$ and all large enough $A\subset \mathbb{R}$ (with the bound on $|A|$ depending on $\lambda$ and $\varepsilon$). 
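The integer-dilate bounds quoted above are easy to probe experimentally. As a small illustration (ours, not from the cited papers), for an interval $A=\{0,\dots,n-1\}$ the bound $|A+3A|\geq 4|A|-O(1)$ is attained up to an additive constant, since $A+3A=\{0,\dots,4n-4\}$ and hence $|A+3A|=4|A|-3$:

```python
def dilate_sumset(A, lam):
    """Compute A + lam*A for a finite set A of integers."""
    return {a + lam * b for a in A for b in A}

# For A = {0, ..., n-1}, every value v in {0, ..., 4n-4} can be written as
# a + 3b with a, b in A (take b = v // 3 capped at n-1), so |A + 3A| = 4n - 3.
n = 50
A = set(range(n))
S = dilate_sumset(A, 3)
print(len(S), 4 * n - 3)  # 197 197
```

The same brute force applied to a set with no arithmetic structure gives a sumset of size close to $|A|^2$, which is why only structured sets are relevant for the lower-bound question.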
Here we consider a somewhat intermediate variant of the problem: an algebraic but not rational $\lambda$, namely, $\lambda=\sqrt{2}$. Let $N,M$ be positive integers and consider the set $A=\{x+y\sqrt{2}:0\le x<N,0\le y<M\}$. Then $\sqrt{2} A\subset \{x+y\sqrt{2}:0\le x<2M, 0\le y<N\}$, therefore $A+\sqrt{2} A\subset \{x+y\sqrt{2}:0\le x<N+2M, 0\le y<M+N\}$ and $$ \frac{|A+\sqrt{2} A|}{|A|} \le \frac{(N+2M)(M+N)}{MN}=2\frac{M}N+\frac{N}M+3, $$ which can be arbitrarily close to $3+2\sqrt{2}=(1+\sqrt{2})^2$ when $N/M$ is close to $\sqrt{2}$. We prove that this constant is tight: \begin{theorem}\label{main} There exist absolute constants $C, \varepsilon > 0$ such that \[ |A+\sqrt{2}A|\geq (1+\sqrt{2})^2|A| - C|A|^{1-\varepsilon} \] for every finite set $A\subset \mathbb{R}$. \end{theorem} The rest of the paper is organized as follows. In Section \ref{sec:reduction} we reduce Theorem \ref{main} to the case $A\subset \mathbb{Z}[\sqrt{2}]$, and in Section \ref{sec:sumsets} we prove an inequality on sums of subsets of an abelian group which we use later. In Section \ref{sec:discrete-prekopa-leindler} we prove Theorem \ref{main} in the case when $A\subset \mathbb{Z}[\sqrt{2}]$ satisfies a certain regularity condition, and in Section \ref{sec:freiman} we deduce Theorem \ref{main} for an arbitrary set $A$. Finally, in Section \ref{sec:continuous-setting} we prove a general analogue of the result in the continuous setting and state a conjecture on the value of $\liminf |A+\alpha A|/|A|$ for an arbitrary algebraic $\alpha$. \section{Reduction to $\mathbb{Z}[\sqrt{2}]$}\label{sec:reduction} We first prove the quite intuitive fact that for the purpose of lower-bounding $|A+\sqrt{2}A|$ one may assume that all elements of $A$ are in $\mathbb{Z}[\sqrt{2}]$. We prove the following slightly more general statement. \begin{lemma}\label{lm:reduction} Suppose that $\alpha\in \mathbb{C}$ and $A$ is a finite set of complex numbers. 
Then there exists a finite set $B\subset \mathbb{Q}[\alpha]$ such that $|B|=|A|$ and $|B+\alpha \cdot B|\leq |A+\alpha \cdot A|$. \end{lemma} \begin{proof} Let $V$ be the $\mathbb{Q}[\alpha]$-vector space generated by the elements of $A$. There exists a linear functional $\varphi\colon V\to \mathbb{Q}[\alpha]$ which is injective on $A$ (a generic $\varphi$ works, for example). Then for $B=\varphi(A)$ we have $|B|=|A|$ and $$|B+\alpha\cdot B| =|\varphi(A+\alpha\cdot A)|\leq |A+\alpha\cdot A|.$$ \end{proof} \section{Sum of subsets of an abelian group}\label{sec:sumsets} We need the following standard fact of Plünnecke--Ruzsa type. \begin{lemma}\label{lm:doubling} Let $G$ be an abelian group. If sets $A, B\subset G$ with $|A|=|B|$ are such that $C:=A+B$ satisfies $|C|\leq K|A|$, then $|C+C|\leq K^6|C|$. \end{lemma} \begin{proof} The case $A=\emptyset$ is clear, so we suppose that $|A|=|B|>0$. By a variant of the Plünnecke--Ruzsa inequality \cite[formula (2.4)]{Ruzsa1996}, we have $|A+A+A|\leq K^3 |A|$ and $|B+B+B|\leq K^3|B|$. We then use the Ruzsa triangle inequality \cite[formula (4.6)]{Ruzsa1996}: for any non-empty subsets $X, Y, Z\subset G$ we have \[ |Y+Z| \leq \frac{|X+Y|\cdot |X+Z|}{|X|}. \] First, taking $X:=A, Y:=B, Z:=A+A$ we obtain \[ |A+A+B|\leq \frac{|A+B|\cdot |A+A+A|}{|A|}\leq \frac{|C|\cdot K^3|A|}{|A|}=K^3|C|. \] Then, taking $X:=B, Y:=A+A, Z:=B+B$ we obtain \[ |A+A+B+B|\leq \frac{|A+A+B|\cdot |B+B+B|}{|B|}\leq \frac{K^3 |C|\cdot K^3|B|}{|B|}=K^6|C|. \] \end{proof} \section{Discrete Pr\'ekopa--Leindler inequality} \label{sec:discrete-prekopa-leindler} By Lemma \ref{lm:reduction} we may assume that $A\subset \mathbb{Z}[\sqrt{2}]$. Identifying a number $a+b\sqrt{2}$ with the point $(a,b)\in\mathbb{Z}^2$, we get $|A+\sqrt{2}A|=|A+\mathcal{T} A|$, where the operator $\mathcal{T}:\mathbb{Z}^2\rightarrow\mathbb{Z}^2$ is given by $\mathcal{T} (a, b) := (2b, a)$. 
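To make the identification concrete, here is a small experiment (ours, not from the paper): encoding $a+b\sqrt{2}$ as the pair $(a,b)$, the operator $\mathcal{T}(a,b)=(2b,a)$ realizes multiplication by $\sqrt{2}$, and the $N\times M$ grids from the introduction give ratios $|A+\mathcal{T}A|/|A|$ that approach $(1+\sqrt{2})^2\approx 5.83$ as $N,M$ grow with $N/M$ close to $\sqrt{2}$:

```python
import math

def T(p):
    """Multiplication by sqrt(2) on Z[sqrt(2)] in coordinates: (a, b) -> (2b, a)."""
    a, b = p
    return (2 * b, a)

def sumset_with_T(A):
    TA = [T(p) for p in A]
    return {(a + c, b + d) for (a, b) in A for (c, d) in TA}

# N x M boxes with N/M close to sqrt(2), as in the introduction.
# The ratios increase towards (1 + sqrt(2))^2.
for N, M in [(7, 5), (17, 12), (41, 29)]:
    A = {(x, y) for x in range(N) for y in range(M)}
    print(N, M, round(len(sumset_with_T(A)) / len(A), 3))

print(round((1 + math.sqrt(2)) ** 2, 3))  # 5.828
```

For an $N\times M$ box (with $N\geq 2$) the sumset is exactly a full $(N+2M-2)\times(N+M-1)$ grid, which matches the counting at the start of this paper.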
Note that for a compact set $\Omega\subset \mathbb{R}^2$ the inequality $|\Omega+\mathcal{T} \Omega| \geqslant (1+\sqrt{2})^2|\Omega|$ (here $|\cdot|$ stands for the Lebesgue measure) follows from the Brunn--Minkowski inequality. The idea is to mimic a proof of Brunn--Minkowski. Among the various proofs we have chosen the one which uses the Pr\'ekopa--Leindler inequality. It may well be that others work, too. The discrete version of the Pr\'ekopa--Leindler inequality which we use is similar to that of IMO 2003 Shortlist Problem A6 \cite[section 3.44.2, problem 6]{dj} (proposed by Reid Barton). \begin{definition} For a set $A\subset \mathbb{Z}^2$ we define $\phi_x(A)$ (resp. $\phi_y(A)$) to be the number of different $x$ (resp. $y$) coordinates of points in $A$. \end{definition} \begin{lemma}\label{lm:main-lemma} Let $A\subset\mathbb{Z}^2$ be a finite set. Then one has \begin{equation}\label{eq:lower-bound} |A+\mathcal{T} A| \geq (1+\sqrt{2})^2|A|-60|A|^{1/2}-6\log{|A|}(\phi_x(A) + \phi_y(A)). \end{equation} \end{lemma} \begin{proof} If $|A|=1$ this is trivial, so we assume $|A|>1$. We induct on $|A|$, and for fixed $|A|$ we induct on the diameter of $A$. For a finite set $X\subset\mathbb{Z}^2$ let \[ f(X):=(1+\sqrt{2})^2|X|-60|X|^{1/2}-6\log{|X|}(\phi_x(X) + \phi_y(X)). \] We define $A_0$ (resp. $A_1$) to be the set of points of $A$ with even (resp. odd) abscissa. If one of the sets is empty (after a shift we may assume that $A_1$ is empty), then we can consider the set $A':=\mathcal{T}^{-1}A$ instead of $A$, which satisfies $f(A')=f(A)$ and has smaller diameter. So we may assume that both $A_0$ and $A_1$ are non-empty. By shifting $A$ if necessary, we may also assume that \[ |A_1|\geq |A|/2, \qquad |A_0|\leq |A|/2. \] Note that the sets $A_0+\mathcal{T} A$ and $A_1+\mathcal{T} A$ are disjoint. 
So if $|A_0|\leq \frac{|A|}{(1+\sqrt{2})^2}$, then, using the induction hypothesis applied to $A_1$, we trivially have \[ |A+\mathcal{T} A|\geq |A_1 + \mathcal{T} A_1| + |A_0+\mathcal{T} A| \geq f(A_1) + |A| \geq f(A). \] So from now on we assume that \begin{equation}\label{eq:assumption-A0-A1} \frac{|A|}{(1+\sqrt{2})^2} \leq |A_0|\leq \frac{|A|}{2}, \qquad \frac{|A|}{2} \leq |A_1|\leq |A|-\frac{|A|}{(1+\sqrt{2})^2}. \end{equation} We define three sets of variables indexed by integers: \begin{align*} x_i:=|A\cap \{(a, b)\in\mathbb{Z}^2:\, a=i\}|, \qquad &y_j:=|A\cap \{(c, d)\in\mathbb{Z}^2:\, d=j\}|, \\ z_k:=|(A+\mathcal{T} A)\cap \{(a, b)\in\mathbb{Z}^2:\, &a=k\}|. \end{align*} Recall that $A_0:=\{(a, b)\in A:\, 2|a\}$. We now want to estimate the size of $A_0+\mathcal{T} A$. We define \[ y=\max_i y_i, \qquad x^0=\max_{2 | i} x_i. \] If $x^0\geq (1+\sqrt{2})|A|^{1/2}$, then the set $A$ has at least this many points on the same vertical line; hence, $\mathcal{T} A$ has as many points on one horizontal line, which implies the lower bound $|A+\mathcal{T} A|\geq (1+\sqrt{2})^2|A|$, and \eqref{eq:lower-bound} trivially follows. Similarly, if $y\geq (1+\sqrt{2})|A|^{1/2}$, the set $\mathcal{T} A$ has at least this many points on a vertical line; hence, $A$ has at least this many points on a horizontal line, and \eqref{eq:lower-bound} again trivially follows. So from now on we assume that $x^0, y\leq (1+\sqrt{2})|A|^{1/2}$. Note that by the Cauchy--Davenport theorem (which is trivial in the case of $\mathbb{Z}$), if $x_i, y_j > 0$ then $z_{i+2j}\geq x_i + y_j - 1$. This implies that for any $t$ one has \begin{align*} &\{k:\, 2|k, z_k\geq t - 1\} \supset \{i:\, 2|i, x_i\geq \frac{x^0t}{x^0+y} \}+2\cdot \{j:\, y_j\geq \frac{yt}{x^0+y}\}. \end{align*} The lower bounds on the right are chosen such that either both sets we add are empty or both are not. 
Hence, we can apply Cauchy--Davenport again to obtain \begin{align*} &|\{k:\, 2|k, z_k\geq t - 1\}| \geq |\{i:\, 2|i, x_i\geq \frac{x^0t}{x^0+y} \}|+|\{j:\, y_j\geq \frac{yt}{x^0+y}\}|-1,\\ &|\{k:\, 2\not|k, z_k\geq t - 1\}| \geq |\{i:\, 2\not|i, x_i\geq \frac{x^1t}{x^1+y} \}|+| \{j:\, y_j\geq \frac{yt}{x^1+y}\}|-1, \end{align*} where $x^1:=\max_{2\not| i} x_i$. If we then integrate the first inequality between $1$ and $x^0+y$, we get \begin{align*} |A_0&+\mathcal{T} A| = \sum_{2|k} z_k \geq \int_1^{x^0+y} |\{k:\, 2|k, z_k\geq t - 1\}|\, dt \\&\geq \left(\int_1^{x^0+y} |\{i:\, 2|i, x_i\geq \frac{x^0t}{x^0+y} \}|\, dt \right) + \left(\int_1^{x^0+y} |\{j:\, y_j\geq \frac{yt}{x^0+y} \}|\, dt \right) - (x^0+y-1) \\&= \left(-|\{i:\, 2|i, x_i\geq 1\}|+\frac{x^0+y}{x^0}\cdot \sum_{2|i} x_i \right) + \left(-|\{j: y_j\geq 1\}|+\frac{x^0+y}{y}\cdot \sum_j y_j\right) - (x^0+y-1). \end{align*} Using the bounds on $x^0, y$ that we assume, we obtain \[ |A_0+\mathcal{T} A|\geq \frac{x^0+y}{x^0}\cdot |A_0| + \frac{x^0+y}{y}\cdot |A| - \phi_x(A_0) - \phi_y(A) - (2+2\sqrt{2})|A|^{1/2}. \] Using that $|A_0|\leq |A|/2$, we obtain \begin{align*} |A_0+\mathcal{T} A| & \geq \left(\frac{x^0+y}{x^0} + \frac{2(x^0+y)}{y}\right)\cdot |A_0| - \phi_x(A_0) - \phi_y(A) - (2+2\sqrt{2})|A|^{1/2} \\&\geq (1+\sqrt{2})^2|A_0| - \phi_x(A)-\phi_y(A) - (2+2\sqrt{2})|A|^{1/2}. \end{align*} Also, by the induction hypothesis, we have \[ |A_1+\mathcal{T} A|\geq |A_1+\mathcal{T} A_1|\geq (1+\sqrt{2})^2|A_1| - 60|A_1|^{1/2}-6\log{|A_1|}(\phi_x(A_1)+\phi_y(A_1)). \] It remains to add these two bounds together and note that due to \eqref{eq:assumption-A0-A1} we have \[ 60|A_1|^{1/2}+(2+2\sqrt{2})|A|^{1/2}\leq \left(60\cdot \left(1 - \frac{1}{(1+\sqrt{2})^2} \right)^{1/2} +(2+2\sqrt{2})\right)|A|^{1/2}\leq 60|A|^{1/2}, \] and also \begin{align*} 6\log{|A_1|} + 1 \leq 6\log{|A|} + 1 - 6\log{\left(\frac{1}{1-\frac{1}{(1+\sqrt{2})^2}}\right)} \leq 6\log{|A|}. 
\end{align*} \end{proof} \section{Freiman's theorem}\label{sec:freiman} In order to deduce a lower bound on $|A+\sqrt{2} A|$ for an arbitrary finite set $A\subset \mathbb{R}$ from Lemma \ref{lm:main-lemma}, we use the following structural theorem due to Green and Ruzsa \cite{Green2007} (the version for $\mathbb{Z}$ is due to Freiman \cite{frejman2008foundations}). To state the result we first recall the definition of a \emph{proper arithmetic progression}. \begin{definition} A set $P\subset \mathbb{Z}^2$ is a proper arithmetic progression of dimension $d\geq 1$ if it has the form \begin{equation}\label{eq:AP} P=\left\{v_0 +\ell_1v_1+\dots+\ell_d v_d \, : \, 0\leq \ell_j < L_j \right\}, \end{equation} where $v_0, v_1,\dots, v_d \in \mathbb{Z}^2, L_1, L_2, \dots, L_d\in \mathbb{Z}_+$ and all sums in \eqref{eq:AP} are distinct (in which case $|P|=L_1L_2\dots L_d$). \end{definition} \begin{lemma}[Theorem 1.1, \cite{Green2007}]\label{lm:Freiman} For every $K>0$ there exist constants $d=d(K)$ and $f=f(K)$ such that for any subset $A\subset \mathbb{Z}^2$ with doubling constant at most $K$ (i.e. such that $|A+A|\leq K|A|$) there exists a proper arithmetic progression $P\subset \mathbb{Z}^2$ containing $A$ which has dimension at most $d(K)$ and size at most $f(K)|A|$. \end{lemma} \begin{proof}[Proof of Theorem \ref{main}] By Lemma \ref{lm:reduction} we may assume that $A\subset \mathbb{Z}[\sqrt{2}]$. So the problem is reduced to showing a lower bound on $|A+\mathcal{T} A|$ for an arbitrary finite set $A\subset \mathbb{Z}^2$. We fix an arbitrary set $A\subset \mathbb{Z}^2$ and let $B:=A+\mathcal{T} A$. The idea is to find a non-singular linear transformation $\tau$ commuting with $\mathcal{T}$ such that $\tau(\mathbb{Z}^2)\subset \mathbb{Z}^2$ and for the set $A':=\tau A$ both $\phi_x(A')$ and $\phi_y(A')$ are small (specifically, small means $O(|A|^\varkappa)$ for a certain $\varkappa<1$). This allows us to apply Lemma \ref{lm:main-lemma} to the set $A'$. 
The fact that $\tau$ and $\mathcal{T}$ commute ensures that \[ B':=A'+\mathcal{T} A' = \tau A + \mathcal{T} \tau A = \tau (A+ \mathcal{T} A) = \tau B. \] By the Cauchy--Davenport theorem we have \begin{equation}\label{eq:two-to-one} \phi_x(B') = \phi_x(A'+\mathcal{T} A')\geq \phi_x(A')+\phi_y(A')-1, \end{equation} so it suffices to choose $\tau$ such that $\phi_x(B') = \phi_x(\tau B)$ is small. We may clearly assume that $|B| = |A+\mathcal{T} A|\leq (1+\sqrt{2})^2|A|$, as otherwise the statement is trivial. Then, by Lemma \ref{lm:doubling}, the set $B=A+\mathcal{T} A$ has doubling constant at most $(1+\sqrt{2})^{12}$, and so by Lemma \ref{lm:Freiman} there exist absolute constants $d, f > 0$ and a proper arithmetic progression \begin{equation*} P=\left\{v_0 +\ell_1v_1+\dots+\ell_d v_d \, : \, 0\leq \ell_j < L_j \right\} \subset \mathbb{Z}^2, \end{equation*} such that $B\subset P$ and $L_1L_2\dots L_d = |P|\leq f|B|$. Without loss of generality we may assume that $L_d\geq L_{d-1} \geq\dots \geq L_1$. Note that $v_d$ has rational coordinates, and hence is not an eigenvector of $\mathcal{T}$ (whose eigenvalues $\pm\sqrt{2}$ are irrational). Therefore $v_d$ and $\mathcal{T} v_d$ are linearly independent and there exist integers $\alpha,\beta$ such that the vector $\alpha v_d+\beta \mathcal{T} v_d$ is non-zero but has zero abscissa. Denote $\tau:=\alpha \operatorname{Id} + \beta \mathcal{T}$ (it obviously commutes with $\mathcal{T}$ and is non-singular since the eigenvalues of $\mathcal{T}$ are not rational). Then $\tau v_d$ has zero abscissa, and we conclude that \[ \phi_x(\tau B) \leq \prod_{j=1}^{d-1} L_j \leq \left(\prod_{j=1}^d L_j\right)^{1-1/d} = |P|^{1-1/d}\leq (f |B|)^{1-1/d}. \] Using \eqref{eq:two-to-one} and our assumption that $|B|\leq (1+\sqrt{2})^2 |A|$, we deduce that \[ \phi_x(A')+\phi_y(A') \leq 1+(f |B|)^{1-1/d} \leq f_0 |A|^{1-1/d}, \] where $f_0$ is an absolute constant. 
It then remains to apply Lemma \ref{lm:main-lemma} to the set $A'$ (note that $|A'+\mathcal{T} A'| = |A+\mathcal{T} A|$) to see that \[ |A+\mathcal{T} A| \geq (1+\sqrt{2})^2|A| - 60|A|^{1/2} - 6f_0 |A|^{1-1/d}\cdot \log{|A|} \geq (1+\sqrt{2})^2|A| - C\cdot |A|^{1-\varepsilon}, \] for some $\varepsilon < 1/d$ and a large enough absolute constant $C$. \end{proof} \section{$A+\mathcal{T} A$ in continuous setting}\label{sec:continuous-setting} Let $\mu$ denote the Lebesgue measure in $\mathbb{R}^d$, and let the lower $*$ denote the inner measure (so, $\mu_*$ is the inner Lebesgue measure in $\mathbb{R}^d$). For a linear operator $\mathcal{T}\in \End(\mathbb{R}^d)$ denote $$ H(\mathcal{T})=\prod_{i=1}^d(1+|\lambda_i|) $$ where $\lambda_1,\ldots,\lambda_d$ are the (complex) eigenvalues of $\mathcal{T}$, listed with algebraic multiplicities. \begin{theorem}\label{th:continuous} Let $\mathcal{T}\in \End(\mathbb{R}^d)$ be a linear operator. Then for any set $K\subset \mathbb{R}^d$ we have \begin{equation}\label{continuous} \mu_* (K+\mathcal{T} K)\ge H(\mathcal{T}) \cdot \mu_* (K). \end{equation} \end{theorem} \begin{proof} In what follows we assume that $K$ is compact. (The general case follows by passing to a limit. Indeed, $K$ contains compact subsets $K_1,K_2,\ldots$ such that $\mu_*(K)=\lim \mu(K_n)$, and $K+\mathcal{T} K$ contains $K_n+\mathcal{T} K_n$, which yields $\mu_* (K+\mathcal{T} K)\ge \mu(K_n+\mathcal{T} K_n)$. So, passing to a limit in \eqref{continuous} for $K_n$, we get it for $K$.) Next, we assume that all $\lambda_i$'s are distinct. Again, the general case follows by a limit procedure. Indeed, there exist open neighborhoods $U_1,U_2,\ldots$ of $K+\mathcal{T} K$ such that $\mu(K+\mathcal{T} K)=\lim \mu(U_n)$. For each $n$ there exists an operator $\mathcal{T}_n$ with distinct eigenvalues which is so close to $\mathcal{T}$ that $K+\mathcal{T}_nK\subset U_n$ and therefore $\mu(U_n)\ge \mu(K+\mathcal{T}_nK)$. 
We may also assume that $\|\mathcal{T}-\mathcal{T}_n\|\to 0$. Thus if \eqref{continuous} holds for $\mathcal{T}_n$, it also holds for $\mathcal{T}$ (we use here the well-known fact that the spectrum of the limit of operators in $\mathbb{R}^d$ equals the limit of their spectra). If $\lambda_1$ is real, let $E_1$ be an eigenspace of $\lambda_1$; if $\lambda_1$ is not real, and $u+iv$ ($u,v\in \mathbb{R}^d$) is a corresponding complex eigenvector, let $E_1$ be the span of $u,v$. This allows us to write $\mathbb{R}^d=E_1\oplus E_2$, where $E_1,E_2$ are $\mathcal{T}$-invariant linear subspaces, and either (i) $\dim E_1=1$; or (ii) $\dim E_1=2$ and $\mathcal{T}$ acts on $E_1$ as a rotational homothety. If $E_2=\{0\}$, then \eqref{continuous} follows directly from the 1- or 2-dimensional Brunn--Minkowski inequality. So, using induction, we may suppose that $\dim E_2>0$ and the restriction of $\mathcal{T}$ to $E_2$ satisfies \eqref{continuous}. For $z\in E_2$ denote $K_z=\{x_1\in E_1\, : \, x_1+z\in K\}$, $f(z)=\nu_1(K_z)$, where $\nu_i$ is the Lebesgue measure in $E_i$, $i=1,2$, and without loss of generality $d\mu(x_1+x_2)=d\nu_1(x_1)d\nu_2(x_2)$ for $x_1\in E_1,x_2\in E_2$. Next, denote $a=\sup(f)$ and for $t\in (0,1)$ denote $X(t)=\{x\in E_2\, : \, f(x)\ge ta\}$. The sets $X(t)$ are measurable (since $f$ is measurable by the Fubini theorem) and non-empty, and we have \begin{equation*} \mu(K)=\int_{E_2} f(x)d\nu_2(x)= \int_0^\infty \nu_2\{x\in E_2\, : \, f(x)\ge \tau\}d\tau=a\int_0^1 \nu_2(X(t))dt. \end{equation*} Choose $x,y\in X(t)$. Note that $$ x+\mathcal{T} y+K_x+\mathcal{T} K_y\subset K+\mathcal{T} K=:L. $$ By the 1- or 2-dimensional Brunn--Minkowski inequality for non-empty compact sets $K_x,K_y\subset E_1$ we have $$\nu_1(K_x+\mathcal{T} K_y)\ge H\left(\mathcal{T}|_{E_1}\right) \min(\nu_1(K_x),\nu_1(K_y))\ge ta\cdot H\left(\mathcal{T}|_{E_1}\right). 
$$ Therefore, if we use the notation $L_z := \{x_1\in E_1\, : \, x_1+z\in L\}$, we have $$ \nu_2\left(z\in E_2\, : \, \nu_1(L_z)\ge ta\cdot H\left(\mathcal{T}|_{E_1}\right) \right) \ge \nu_{2*}\left(X(t)+\mathcal{T} X(t)\right)\ge \nu_{2*} (X(t))\cdot H\left(\mathcal{T}|_{E_2}\right) $$ by the induction hypothesis. Therefore \begin{align*} \mu(L)&=\int_{0}^\infty \nu_2\left(z\in E_2\, : \, \nu_1(L_z) \ge \tau\right)d\tau\\&\ge a\cdot H\left(\mathcal{T}|_{E_1}\right) \int_{0}^1 \nu_2\left(z\in E_2\, : \, \nu_1(L_z)\ge ta\cdot H\left(\mathcal{T}|_{E_1}\right) \right)dt\\ &\ge a\cdot H\left(\mathcal{T}|_{E_1}\right) H\left(\mathcal{T}|_{E_2}\right)\int_0^1 \nu_{2*} (X(t))dt =H(\mathcal{T}) \mu(K). \end{align*} \end{proof} \begin{remark} The bound $H(\mathcal{T})$ in Theorem \ref{th:continuous} is sharp. If $\mathcal{T}$ is complex diagonalizable, we may find a convex compact set $K=K(\mathcal{T})\subset\mathbb{R}^d$ such that \eqref{continuous} turns into an equality. It suffices to decompose $\mathbb{R}^d$ as a direct sum of 1- and 2-dimensional $\mathcal{T}$-invariant subspaces, on each of which $\mathcal{T}$ acts as a rotational homothety, and take $K$ to be the direct product of balls in these subspaces. In the general case we fix a complex diagonalizable operator $\mathcal{T}_0$ with the same spectrum as $\mathcal{T}$, then find a convex compact set $K(\mathcal{T}_0)$, then find a sequence of operators $\mathcal{T}_n\to \mathcal{T}_0$ which are similar to $\mathcal{T}$: $\mathcal{T}_n=S_n \mathcal{T} S_n^{-1}$. The sets $K_n:=S_n^{-1} K(\mathcal{T}_0)$ satisfy \begin{align*} \frac{\mu(K_n+\mathcal{T} K_n)} {\mu(K_n)}&=\frac{\mu(S_nK_n+S_n\mathcal{T} K_n)} {\mu(S_nK_n)}=\frac{\mu(K(\mathcal{T}_0)+\mathcal{T}_nK(\mathcal{T}_0))} {\mu(K(\mathcal{T}_0))}\\&\to \frac{\mu(K(\mathcal{T}_0)+\mathcal{T}_0K(\mathcal{T}_0))} {\mu(K(\mathcal{T}_0))}=H(\mathcal{T}_0)=H(\mathcal{T}). \end{align*} \end{remark} Now we formulate a general conjecture on $|A+\alpha A|$ for algebraic $\alpha$. 
For an irreducible polynomial $f(x)\in \mathbb{Z}[x]$ of degree $d\geqslant 1$ (irreducibility in particular means that the coefficients of $f$ do not have a common integer divisor greater than 1) denote $$H(f)=\prod_{i=1}^d (|a_i|+|b_i|),$$ where $f(x)=\prod_{i=1}^d (a_ix+b_i)$ is a full complex factorization of $f$ (clearly the value $H(f)$ is well-defined). \begin{proposition}\label{upper} Let $\alpha$ be an algebraic real number with minimal polynomial $f(x)\in \mathbb{Z}[x]$. Then \begin{equation}\label{eq:algebraic} \lim_{n\rightarrow \infty} \min_{A\subset \mathbb{R}, |A| = n} \frac{|A+\alpha A|}{|A|} \le H(f). \end{equation} \end{proposition} \begin{proof} The upper bound follows from the sharpness of the continuous bound \eqref{continuous} for convex compact sets, see the Remark after Theorem \ref{th:continuous}. Namely, consider the following operator $\mathcal{T}$: $g\mapsto x\cdot g$ in the $d$-dimensional quotient space $\mathbb{R}[x]/f(x)\mathbb{R}[x]$. Let $\{1,x,\ldots,x^{d-1}\}$ be the standard basis in this space, and normalize the Lebesgue measure appropriately. It is not hard to see that $H(f)=|c|\cdot H(\mathcal{T})$, where $c$ is the leading coefficient of $f$. Choose a convex compact set $K\subset \mathbb{R}^d$ such that $\mu(K+\mathcal{T} K)/\mu(K)=H(\mathcal{T})$ (the eigenvalues of $\mathcal{T}$ are the $d$ algebraic conjugates of $\alpha$ and they are distinct, since $f$ is irreducible and therefore $f$ and $f'$ are coprime; thus $\mathcal{T}$ is diagonalizable, and such a $K$ exists due to the Remark after Theorem \ref{th:continuous}). Then take large $M>0$ and consider the set $$ \Omega_M=\{a_0+a_1x+\ldots+a_{d-1} x^{d-1}\in M\cdot K\, : \, a_i\in \mathbb{Z}, c|a_{d-1}\}. $$ The number of such points is $|\Omega_M|=|c|^{-1} M^d\mu(K)+o(M^d)$. On the other hand, all points in $\mathcal{T}\Omega_M$ have integer coordinates (this is why we required $c|a_{d-1}$). Therefore $|\Omega_M+\mathcal{T}\Omega_M|\le M^d \mu(K+\mathcal{T} K)+o(M^d)$. 
Finally $$\frac{|\Omega_M+\mathcal{T}\Omega_M|}{|\Omega_M|}\le |c| \frac{\mu(K+\mathcal{T} K)}{\mu(K)}+o(1)= |c|H(\mathcal{T})=H(f).$$ It remains to take $A=\{g(\alpha)\, : \, g(x)\in \Omega_M\}$. \end{proof} \begin{conjecture}\label{conj:1} For any real algebraic $\alpha$ the inequality \eqref{eq:algebraic} turns into an equality. For complex algebraic $\alpha$ the analogous equality holds for $A\subset \mathbb{C}$. \end{conjecture} This conjecture is a special case of the following \begin{conjecture}\label{conj:2} Let $\mathcal{T}:\mathbb{R}^d\rightarrow\mathbb{R}^d$ be a linear operator with characteristic polynomial $f_{\mathcal{T}}$, then \[ \lim_{n\rightarrow \infty} \min_{A\subset \mathbb{R}^d, |A| = n} \frac{|A+\mathcal{T} A|}{|A|} = \min_{g| f_{\mathcal{T}}, g\in \mathbb{Z}[x]} H(g), \] where the minimum is taken over all irreducible divisors $g$ of $f_{\mathcal{T}}$. If there are no polynomials $g$ with rational coefficients that divide $f_{\mathcal{T}}$, we define the minimum to be infinity. \end{conjecture} It is interesting to find the analogue of Theorem \ref{th:continuous} and Conjectures \ref{conj:1}, \ref{conj:2} for several operators. In this direction we recall a conjecture by Boris Bukh \cite[problem 5]{BBwebpage}: \begin{equation}\label{BB} \sum_{i=1}^k |\mathcal{T}_i A|\ge \left(\sum_i |\det \mathcal{T}_i|^{1/d} -o(1)\right) |A|\end{equation} for large sets $A\subset \mathbb{Z}^d$, where $\mathcal{T}_i$ are $k$ linear operators preserving $\mathbb{Z}^d$ without a common invariant subspace such that $\sum_i \mathcal{T}_i \mathbb{Z}^d=\mathbb{Z}^d$. The continuous analogue of \eqref{BB} immediately follows from the Brunn--Minkowski inequality, and is in general not tight even for $k=2$, as follows from Theorem \ref{th:continuous}. \medskip We are grateful to B.~Bukh and to I.~Shkredov and S.~Konyagin for drawing our attention to \cite{BBwebpage} and \cite{sanders2012,schoen2011}, respectively. \bibliographystyle{siam}
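A tiny numerical cross-check (ours, not part of the paper) that the continuous constant matches Theorem \ref{main}: the operator $\mathcal{T}(a,b)=(2b,a)$ has trace $0$ and determinant $-2$, hence eigenvalues $\pm\sqrt{2}$ and characteristic polynomial $x^2-2$, the minimal polynomial of $\sqrt{2}$; its $H$-value is $(1+\sqrt{2})^2$, exactly the constant in the lower bound for $|A+\sqrt{2}A|$. A sketch for this $2\times 2$ case, using the quadratic formula for the eigenvalues:

```python
import math

# Characteristic data of T(a, b) = (2b, a): trace 0, determinant -2,
# so the eigenvalues are the roots of x^2 - 2, i.e. +-sqrt(2).
tr, det = 0, -2
disc = math.sqrt(tr * tr - 4 * det)
eigs = [(tr + disc) / 2, (tr - disc) / 2]

# H(T) = prod over eigenvalues of (1 + |lambda_i|).
H_T = 1.0
for lam in eigs:
    H_T *= 1 + abs(lam)

# For monic f, H(f) = H(T); here both equal (1 + sqrt(2))^2 = 3 + 2*sqrt(2).
print(round(H_T, 6), round((1 + math.sqrt(2)) ** 2, 6))
```

The same computation with $f(x)=x^2-2=(x-\sqrt{2})(x+\sqrt{2})$ and the definition $H(f)=\prod(|a_i|+|b_i|)$ gives $(1+\sqrt{2})(1+\sqrt{2})$, consistent with $H(f)=|c|\cdot H(\mathcal{T})$ for $c=1$.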
https://arxiv.org/abs/1907.04844
Minimum k-critical bipartite graphs
We study the problem of Minimum $k$-Critical Bipartite Graph of order $(n,m)$ - M$k$CBG-$(n,m)$: to find a bipartite $G=(U,V;E)$, with $|U|=n$, $|V|=m$, and $n>m>1$, which is $k$-critical bipartite, and the tuple $(|E|, \Delta_U, \Delta_V)$, where $\Delta_U$ and $\Delta_V$ denote the maximum degree in $U$ and $V$, respectively, is lexicographically minimum over all such graphs. $G$ is $k$-critical bipartite if deleting at most $k=n-m$ vertices from $U$ creates $G'$ that has a complete matching, i.e., a matching of size $m$. We show that, if $m(n-m+1)/n$ is an integer, then a solution of the M$k$CBG-$(n,m)$ problem can be found among $(a,b)$-regular bipartite graphs of order $(n,m)$, with $a=m(n-m+1)/n$, and $b=n-m+1$. If $a=m-1$, then all $(a,b)$-regular bipartite graphs of order $(n,m)$ are $k$-critical bipartite. For $a<m-1$, it is not the case. We characterize the values of $n$, $m$, $a$, and $b$ that admit an $(a,b)$-regular bipartite graph of order $(n,m)$, with $b=n-m+1$, and give a simple construction that creates such a $k$-critical bipartite graph whenever possible. Our techniques are based on Hall's marriage theorem, elementary number theory, linear Diophantine equations, properties of integer functions and congruences, and equations involving them.
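A small helper (our illustration, following the abstract's notation) that checks the divisibility condition stated above: with $b=n-m+1$, double counting the edges of an $(a,b)$-regular bipartite graph of order $(n,m)$ forces $na=mb$, so $a=m(n-m+1)/n$ must be an integer for such a graph to exist.

```python
def regular_parameters(n, m):
    """Candidate degrees (a, b) with b = n - m + 1 for an (a, b)-regular
    bipartite graph of order (n, m): double counting edges gives n*a = m*b,
    so a = m*(n - m + 1)/n must be an integer.  Returns (a, b) when a is
    integral, and None otherwise.
    """
    assert n > m > 1
    b = n - m + 1
    if (m * b) % n == 0:
        return m * b // n, b
    return None

print(regular_parameters(6, 4))  # (2, 3)
print(regular_parameters(4, 2))  # None, since a = 3/2 is not an integer
```

This only tests the arithmetic feasibility; which of these parameter tuples actually admit a $k$-critical bipartite graph is the subject of the paper.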
\section{Introduction}\label{sec:introduction} \subsection{Design of fault tolerant networks} Let $\Pi$ be a graph property that is {\em monotone}, i.e., it is preserved when adding edges to a graph. A graph $G$ is fault-tolerant with respect to $\Pi$ if the graph obtained from $G$ by removing a certain set of vertices and/or edges still has the property $\Pi$. Fault-tolerance has been studied with respect to different graph properties and under various fault scenarios: restrictions on the set of vertices and/or edges that can be removed while still maintaining the property $\Pi$. Since $\Pi$ is monotone, it is natural to focus on finding fault-tolerant graphs that are minimal with respect to taking subgraphs. There has been particular interest in studying fault-tolerance with respect to properties defined as containing a subgraph isomorphic to a graph from a certain class. Given a family $\mathcal{H}$ of graphs and a positive integer $k$, a graph $G$ is called \emph{vertex $k$-fault-tolerant with respect to $\mathcal{H}$}, denoted by $k$-FT$(\mathcal{H})$, if $G-S$ contains a subgraph isomorphic to some $H\in \mathcal{H}$, for every $S\subset V(G)$ with $|S|\leq k$. Note that in the literature, $k$-FT$(\mathcal{H})$ graphs are also called $(\mathcal{H},k)$-\emph{vertex stable} graphs. For a singleton $\mathcal{H}=\{H\}$, we write shortly $k$-FT$(H)$ instead of $k$-FT$(\{H\})$. Clearly, $K_{q+k}$ is $k$-FT$({H})$ for every $q$-vertex graph $H$, so there is a need to somehow measure the efficiency of $k$-FT$(\mathcal{H})$ graphs. One option studied in the literature is to consider $k$-FT$(\mathcal{H})$ graphs having both a small number of spare nodes and a small maximum degree (to allow scalability of a network). Many papers concern the case when $\mathcal{H}=\{P_q\}$, i.e., the property consists in containing a path on $q$ vertices. Bruck, Cypher and Ho \cite{BCH} constructed $k$-FT$({P_q})$ graphs of order $q+k^2$ and maximum degree 4. 
Zhang's \cite{Zhang,Zhang2} constructions have $q+O(k\log^2k)$ vertices and maximum degree $O(1)$, and $q+O(k\log k)$ vertices and maximum degree $O(\log k)$, respectively. Alon and Chung \cite{refAloChu} gave, for $k=\Omega(q)$, a construction having $q+O(k)$ vertices and maximum degree $O(1)$. Finally, Yamada and Ueno \cite{UY} constructed $k$-FT$(P_q)$ graphs with $q+O(k)$ vertices and maximum degree 3. A different quality measure of $k$-FT$(\mathcal{H})$ graphs was suggested by Hayes \cite{H}: a $k$-FT$(H)$ graph $G$ is called \emph{optimal} if $G$ has $|V(H)|+k$ vertices and has the minimum number of edges among all such graphs. A construction, having $q+k$ vertices and maximum degree $O(k\Delta(H))$, was given by Ajtai, Alon, Bruck, Cypher, Ho, Naor and Szemer\'edi \cite{AABCHNS}. Yet another quality measure has been introduced by Ueno et al. \cite{ref_UenBagHaSch}, and independently by Dudek et al. in \cite{my}, where the authors were interested in $k$-FT$(H)$ graphs having as few edges as possible (disregarding the number of vertices). This topic has been widely studied \cite{CGNZ,CGZZ,ref_ErdGraSze,FTVW,FTVW2,Z,Z2,Z3}. Such a measure may be motivated by applications in sensor networks where sensors are much cheaper than the connections between them, and so their cost may be omitted. \subsection{$k$-factor-critical graphs and $k$-extendable graphs} In a graph $G=(V,E)$ of order $n$, a subset of edges $M$, $M \subset E$, is a \textit{matching} if for any two distinct $e, f \in M$ we have $e \cap f = \emptyset$. We say that the edges in $M$ are \textit{independent}. For a vertex $v \in V$ which is contained in one of the edges of $M$, we say that $v$ is {\em covered} by $M$. If $n$ is even and the size of $M$ is $n/2$, i.e., $M$ covers every vertex of $G$, then we say that $M$ is a \textit{perfect matching}. A graph $G$ is called \textit{$k$-factor critical} or \textit{$k$-critical} if after deleting any $k$ vertices the remaining subgraph has a perfect matching.
Thus, a graph $G$ of order $2n+k$ is \textit{$k$-critical} if and only if it is $k$-FT$(M)$, where $M$ is a matching of size $n$. This concept was first introduced and studied for $k = 2$ by Lov\'asz \cite{Lovasz}, under the term of \textit{bicriticality}. For $k>2$, it was introduced by Yu in 1993 \cite{Yu}, and independently by Favaron in 1996 \cite{Favaron}; such graphs are also known as $k$-\textit{matchable graphs} \cite{LY04}. The idea of $k$-criticality is related to an older concept of $k$-extendability. Let $k\geq 0$ be an integer. A graph $G$ of even order $n \ge 2k+2$ is called \textit{$k$-extendable} if every matching of size $k$ in $G$ extends to (i.e., is a subset of) a perfect matching in $G$. This concept was introduced by Plummer in 1980. For $k =0$, $k$-extendable and $k$-factor-critical graphs are just graphs with a perfect matching. Moreover, we have the following theorem. \begin{theorem}[\cite{ZWL}] If $k\geq (|V (G)|+ 2)/4$, then a non-bipartite graph $G$ is $k$-extendable if and only if it is $2k$-critical. \end{theorem} It is straightforward that a bipartite graph cannot be $k$-critical for $k>0$. Nevertheless, we have the following result. \begin{theorem}[\cite{Plummer}]\label{thm:extendability_ft} Let $G$ be a connected bipartite graph on $n$ vertices with bipartition $(U,V)$. Suppose $k$ is a positive integer such that $k \leq (n-2)/2$. Then the following are equivalent: \begin{enumerate} \item $G$ is $k$-extendable, \item $|U|=|V|$, and every non-empty subset $X$ of $U$ with $|X| \leq |U|-k$ satisfies $|N(X)|\geq |X|+k$, \item for all $U' \subseteq U$ and $V' \subseteq V$ with $|U'|=|V'|=k$, the graph $G'=G - U' - V'$ has a perfect matching.
\end{enumerate} \end{theorem} Based on Theorem \ref{thm:extendability_ft}, $k$-extendability of bipartite graphs of order $2(n+k)$ can be seen as fault-tolerance for the property $\Pi$ of containing a matching of size $n$, under attacks that consist in removing (at most) $k$ vertices from each color class. Indeed, this property is monotone by the following result. \begin{corollary}[\cite{Plummer}]\label{co:add_edge} Suppose $k$ is a positive integer and $G=(U,V; E)$ is a $k$-extendable bipartite graph. For any $u\in U$ and $v \in V$, the graph $G+uv$ is also $k$-extendable. \end{corollary} In line with the research on efficient design of fault-tolerant graphs, Zhang et al. \cite{ZZ12} presented a construction of $k$-extendable bipartite graphs with the minimum number of edges and the lowest possible maximum degrees. On the algorithmic side, testing extendability in bipartite graphs reduces to testing connectivity in directed graphs, based on the following result of Robertson et al. \begin{theorem}[\cite{RST99}] Let $G=(U,V; E)$ be a connected bipartite graph, let $M$ be a perfect matching in $G$, and let $k \geq 1$ be an integer. Then $G$ is $k$-extendable if and only if $D(G,M)$ is strongly $k$-connected, where $D(G,M)$ is the directed graph obtained by directing every edge from $U$ to $V$, and contracting every edge of $M$. \end{theorem} Since connectivity can be efficiently tested in directed graphs (see, for example, the results of Henzinger et al. \cite{H00}), so can the extendability of bipartite graphs. \begin{corollary}\label{co:extendable_bipartite_poly} Given a bipartite graph $G$ and an integer $k$, it can be tested in polynomial time if $G$ is $k$-extendable. \end{corollary} Variations such as edge-deletable extendable graphs and factor-critical graphs are also known in the literature \cite{PARK20116409,Plesnik1972,WANG20095242}. \subsection{Vertex-fault tolerant design in bipartite graphs} Our work is part of the line of research described above.
For a motivation of the particular problem we study, let us present a potential context of application. Consider a network of $m$ sensing nodes and $n$ relay nodes. Sensing nodes need to transmit their readings through the relay nodes, using a pre-established infrastructure of links. For each relay node, only one direct connection with a sensing node can be active at any time. Relay nodes are faulty, but at any given time at most $n-m$ of them may be unavailable, leaving the other $m$ relay nodes ready for transmission. We want to establish a topology of links between sensing and relay nodes, a bipartite graph, to guarantee that under any fault scenario (with at least $m$ relay nodes active), the $m$ sensing nodes can transmit their data through distinct relay nodes. Such an infrastructure needs to have at least $(n-m+1)m$ links. Indeed, suppose that the total number of links is smaller. Then at least one sensing node $v$ is connected to at most $(n-m)$ distinct relay nodes. And there is a fault scenario where precisely the relay nodes linked with $v$ are inactive, in which case $v$ cannot transmit its data. We want to study topologies where not only the total number of links is low, but also the maximum number of links per node is small (both on the relay and sensing nodes sides). Li and Nie \cite{LN} formulated the problem in the language of graph theory by adapting the definition of a $k$-critical graph to bipartite graphs. Their definition requires that the $k$ vertices to be deleted lie in the color class with more vertices. \begin{definition}\label{de:critical_bipartite} A bipartite graph $G=(U, V;E)$ such that $k=|U|-|V|\geq 0$ is called a \textit{$k$-critical bipartite graph} if after deleting any $k$ vertices from the set $U$ the remaining subgraph has a perfect matching.
\end{definition} Given a bipartite graph $G=(U, V;E)$ such that $k=|U|-|V|\geq 0$, let $\tilde{G}=(U, V \cup D;E \cup E^D)$ be the graph obtained from $G$ by adding the set $D$ of $k$ vertices to $V$ and making them adjacent to all vertices in $U$. Li and Nie showed the following theorem. \begin{theorem}[\cite{LN}]\label{th:tilde} $G$ is $k$-critical bipartite iff $\tilde{G}$ is $k$-extendable. \end{theorem} Therefore, by Corollary \ref{co:add_edge} and Theorem \ref{th:tilde}, we have the following corollary. \begin{corollary} Suppose $k$ is a positive integer and $G=(U,V; E)$ is a $k$-critical bipartite graph. For any $u\in U$ and $v \in V$, $G+uv$ is also $k$-critical bipartite. \end{corollary} So the property of being $k$-critical bipartite is also monotone, and being a $k$-critical bipartite graph of order $2n+k$ can be seen as fault-tolerance for the property $\Pi$ of containing a matching of size $n$, under attacks that consist in removing (at most) $k$ vertices from the larger color class. From the algorithmic point of view, by Corollary \ref{co:extendable_bipartite_poly} and Theorem \ref{th:tilde}, we obtain the following corollary. \begin{corollary}\label{co:critical_bipartite_poly} Given a bipartite graph $G$ and an integer $k$, it can be tested in polynomial time if $G$ is $k$-critical bipartite. \end{corollary} \subsection{Preliminaries}\label{ssec:preliminaries} For the ease of reading, we will keep throughout the paper the ``semantics'' of the letters we use to denote objects. We will work with simple undirected bipartite graphs $G=(U,V; E)$, with $|U|=n$, $|V|=m$, and $|E|=e$. We will say that $G$ is of order $(n,m)$ and size $e$. For any $U' \subseteq U$, let us use the notation $H=G[U',V]$ to denote the subgraph induced by $(U',V)$ in $G$ and recall that $H=(U',V; F)$ is a bipartite graph with $U'$ and $V$ as its corresponding color classes.
Similar to the use of upper case $U$ and $V$, with additional symbols where necessary, to denote the corresponding color classes and their subsets, we will use lowercase $u$ and $v$, also with additional symbols when needed, to denote vertices in $U$ and $V$, respectively. Let us use $\Delta_U$ and $\Delta_V$ to denote the maximum degree of a vertex in $U$ and $V$, respectively. Similarly, let $\delta_U$ and $\delta_V$ denote the respective minimum degrees. We say that $G$ is \textit{balanced} if $n=m$, and {\em unbalanced} otherwise. We say that $G$ is \textit{biregular} if the degrees of the vertices in both color classes are constant, and {\em irregular} otherwise. If $\delta_U = \Delta_U = a$ and $\delta_V = \Delta_V = b$, then we say that $G$ is $(a,b)$-regular. When speaking of biregular graphs, we will use $a$ and $b$ to denote the degrees of the vertices in color classes $U$ and $V$, respectively. We will often compute the values of $a$ and $b$ based on the values of $n$ and $m$. In such cases, $n$ and $m$ are ``candidates'' to be the cardinalities of the color classes $U$ and $V$ of some bipartite graph $G=(U,V; E)$, and $a$ and $b$ are the ``candidates'' to be the corresponding degrees in case that $G$ happens to be biregular. Given two sets $U'$ and $V'$, with $U' \subseteq U$ and $V' \subseteq V$, a subset of edges $M$, $M \subset E$, is a \textit{matching} from $U'$ to $V'$ if for every $e$, $e \in M$, we have $e=uv$, with $u \in U'$ and $v \in V'$, and for any two distinct $e, f \in M$, we have $e \cap f = \emptyset$. If $|M|=|U'|$, then $M$ is a \textit{complete matching} from $U'$ to $V'$. In other words, in a complete matching $M$, each vertex in $U'$ is incident to precisely one edge from $M$. A complete matching from $U'$ to $V'$ is perfect if $|U'|=|V'|$. Recall that Zhang et al. \cite{ZZ12} presented a construction of $k$-extendable bipartite graphs with the minimum number of edges and the lowest possible maximum degrees.
Similarly to them, we are interested in such a construction for the case of $k$-critical bipartite graphs. We can now introduce the design problem that we study in this paper. Notice that if $m=1$, then the graph $K_{n,1}$ is the only bipartite graph of order $(n,1)$ that is $(n-1)$-critical bipartite. If $n=m$, then the graph $G=(U,V; E)$, where $|U|=|V|=m$, and $E$ is a perfect matching from $U$ to $V$ is $0$-critical bipartite. Moreover, $G$ is minimum with respect to the lexicographical order of $(|E|, \Delta_U, \Delta_V)$ among all $0$-critical bipartite graphs of order $(m,m)$. Our design problem can be seen as a generalization of the perfect matching construction to unbalanced bipartite graphs with $n>m>1$. We will work with the assumption of $n>m>1$ throughout the paper. The main object of study in this paper is given in the following definition. \begin{definition} Given positive integer values $n,m,k$ such that $n>m>1$ and $k=n-m$, the Minimum $k$-Critical Bipartite Graph problem for $(n,m)$ (M$k$CBG-$(n,m)$) is to find a bipartite graph $G=(U,V; E)$ of order $(n,m)$ that is $k$-critical bipartite, and minimum with respect to the lexicographical order of $(|E|, \Delta_U, \Delta_V)$. \end{definition} The optimality measure that we choose for our fault-tolerant design problem is in line with the work of Zhang et al. \cite{ZZ12}, and is motivated by scalability concerns that are natural in many applications. A problem that is closely related to ours was presented by Perarnau and Petridis \cite{PP}. The authors studied the existence of perfect matchings in induced balanced subgraphs of random biregular bipartite graphs. \begin{theorem}[\cite{PP}] Let $k \in \mathbb{Q}^+$, $n \in \mathbb{Z}^+$ be arbitrarily large, and $b\in \{1,\ldots , n\}$, and suppose that $kn, kb \in \mathbb{Z}^+$, with $kb \leq n$. 
Furthermore, let $U$ and $V$ be sets of size $n$ and $kn$, respectively, and let $G$ be a graph taken uniformly at random among all $(kb,b)$-regular bipartite graphs on the vertex set $(U,V)$. Take subsets $A \subset U$ and $B\subset V$ of size $kb$ and define $H := G[A, B]$ to be the subgraph induced in $G$ by vertex set $(A, B)$. Then \begin{enumerate} \item No perfect matching exists in $H$ with high probability when\\ $\frac{kb^2}{n} -\log(kb) \underset{b \to \infty}{\longrightarrow}-\infty$ or when $b$ is a constant. \item A perfect matching exists in $H$ with high probability when\\ $\frac{kb^2}{n} -\log(kb) \underset{b \to \infty}{\longrightarrow}\infty$. \end{enumerate} \end{theorem} Notice that their result is on balanced subgraphs of order $(kb, kb)$, and our work is on subgraphs of order $(kn, kn)$, in graphs where $n > kn > 1$ and $b = n (1-k) + 1$. So, in our setting, $kn > kb$. The remainder of this paper is organized as follows. Section \ref{sec:applications} presents some relations with other areas of research where our work could find applications. Section \ref{sec:biregular} presents some constructions of biregular bipartite graphs that we use in the following sections. Section \ref{sec:biregular_min} presents some particular properties of biregular bipartite graphs with $b = n - m + 1$. We also describe efficient ways to compute the values of the parameters for which $(a,b)$-regular bipartite graphs of order $(n,m)$ exist, which may be useful for analysis and applications in which generating such graphs is needed. Section \ref{sec:k-critical} presents some results on the construction of $k$-critical bipartite biregular graphs, with Theorem \ref{thm:main_positive} being the main contribution. The paper concludes with Section \ref{sec:conclusions}.
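The notion at the heart of this paper can be tested directly by brute force. The sketch below is our own illustration (the function names are hypothetical, and this is not an algorithm from the cited works): it checks whether a bipartite graph of order $(n,m)$ is $k$-critical bipartite in the sense of Definition \ref{de:critical_bipartite}, with $k=n-m$, by trying every way of keeping $m$ vertices of $U$ and computing a maximum matching via augmenting paths.

```python
from itertools import combinations

def max_matching(adj, n_left, n_right):
    """Size of a maximum matching in a bipartite graph, found with
    augmenting paths; adj[u] lists the neighbours in V of vertex u in U."""
    match_right = [-1] * n_right

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_right[v] == -1 or augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in range(n_left))

def is_k_critical_bipartite(adj, n, m):
    """Definition check for G=(U,V;E) of order (n,m) with k = n - m:
    after deleting any k vertices of U, the rest has a perfect matching."""
    for kept in combinations(range(n), m):
        if max_matching([adj[u] for u in kept], m, m) < m:
            return False
    return True

# K_{3,2} is 1-critical bipartite: any 2 of the 3 vertices of U
# can still be matched to the 2 vertices of V.
print(is_k_critical_bipartite([[0, 1], [0, 1], [0, 1]], 3, 2))  # True
```

The exhaustive check is exponential in $m$ and is meant only for small sanity experiments; Corollary \ref{co:critical_bipartite_poly} gives the polynomial-time route via extendability.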
\section{Possible applications}\label{sec:applications} Design of fault-tolerant bipartite graphs has potential applications in the design of flexible processes, where there are $n$ different request types and $m$ servers that should process them (see, for example, the work of Chou et al. \cite{Chou} for a review of the topic). From this point of view, our problem could model systems that work in a multi-period setting, where an initial compatibility infrastructure has to be installed (a bipartite graph between request types and servers) and then, at any time period, there are at most $m$ different types of requests that have to be served. Moreover, our restriction of assigning a distinct server to each request type would correspond to systems where intra-period changes of set-up are not viable, for example, due to a high set-up cost. Up to now, the process flexibility literature has been focused on modeling systems where a server can process different kinds of compatible requests within the same time period (assigning to each of them a fraction of its time), therefore admitting fractional solutions. Moreover, only balanced systems, i.e., with $n = m$, used to be considered. However, the case of unbalanced systems has recently started to gain more interest (see Deng et al. \cite{Deng} and Henao et al. \cite{Henao} for examples). We hope that, with adequate tools, the case of systems that require the assignment to be exclusive within each time period, restricting the solutions to be integral, could also be investigated. A related line of research is not to design the smallest fault-tolerant graph (in terms of any metric, like the ones given above), but to analyze the level of fault tolerance assured by prescribed topologies \cite{CH,PLK}. This topic is of particular interest for algorithm design in high performance computing.
Supercomputers are composed of many processing nodes (with some local memory) that use an interconnection network to communicate during the execution of distributed algorithms. An algorithm delegates computational tasks to different nodes, and uses some logical topology for its message passing. This logical topology has to be somehow embedded in the physical interconnection network provided by the supercomputer. So it is of practical interest to study if the message passing topologies most common in algorithm design (like cycles, and trees of certain types) can still be embedded in interconnection topologies provided by supercomputers (often similar to hypercubes) when the system presents some faults \cite{CH}. Issues of graph modification in order to eliminate the existence of certain substructures are also considered in studies on network interdiction (see \cite{S20} for a recent review). Work on matching interdiction problems was initiated by Boros et al. in \cite{B06}. They defined a {\em minimal blocker} to be an inclusion-minimal set of edges in a bipartite graph $G$ the removal of which leaves no perfect matching in $G$. In \cite{Z09}, Zenklusen et al. introduced the concept of a {\em $d$-blocker}, a set of edges the removal of which decreases the cardinality of the maximum matching by at least $d$, and studied minimum-cardinality $d$-blockers in several graph classes. They showed that even the problem of deciding if a given bipartite graph $G$ has a $d$-blocker of size at most $k$ is NP-complete for any positive integer $d$. Moreover, they gave a construction of minimum $d$-blockers for regular bipartite graphs. In \cite{Z10}, Zenklusen expanded the study to weighted graphs, defining the {\em edge interdiction problem} in a graph with two functions, value and cost, on the edge set. It consists in finding a set of edges of total cost bounded by a budget $B$ the removal of which decreases the value of the maximum matching by at least $D$.
In a similar way, he also defined the {\em vertex interdiction problem} in a graph where the cost function is defined on the vertex set and we remove vertices. He showed that the vertex interdiction problem is NP-hard in bipartite graphs even with unit edge values and vertex costs that are polynomially bounded in the size of the graph, or with unit vertex costs and edge values bounded by a constant. On the other hand, the vertex interdiction problem can be solved in polynomial time in bipartite graphs with unit edge values and unit vertex costs. Most results on matching interdiction in bipartite graphs consider eliminating (faults of) vertices from both color classes, but Laroche et al. \cite{L14} considered the problem of eliminating vertices only from the larger color class in the context of the {\em Robust Nurse Assignment Problem}. In their setting, the larger color class of a bipartite graph corresponds to the set of nurses and the smaller color class to the set of roles. The edges represent the skill sets of the nurses (the set of roles that each of them can assume). The problem asks for the maximum number of absent nurses that still permits assigning each role to a different nurse. They show that this special case of the vertex interdiction problem can be solved in polynomial time. Adjiashvili et al. \cite{A16} expanded the work on robust assignments to the field of robust optimization. Given a bipartite graph $G(U,V; E)$, $|U| \geq |V|$, with costs on edges, the {\em Edge-Robust Assignment Problem} is to find a minimum cost set of edges $E'\subset E$ such that under any scenario of a set $F$ of edges that fail, taken from a fixed set $\mathcal{F}$ of possible fault scenarios, the remaining graph $G(U,V; E' \setminus F)$ contains a complete matching. They give some negative and positive complexity results for special cases of only single-edge-fault scenarios. In \cite{A17}, they complemented their results by considering the {\em Vertex-Robust Assignment Problem}.
Here, the costs are on the vertices of $U$, and the problem is to find a minimum cost subset of vertices $U'$, $U' \subseteq U$, such that under any fault scenario $F$ of vertices that fail, taken from a fixed set $\mathcal{F}$ of possible fault scenarios, the remaining graph $G[U'\setminus F,V]$ contains a complete matching. In general, $\mathcal{F}$ can be any family of subsets of $U$. In \cite{A17}, the authors study the complexity of problems considering only single-vertex-fault scenarios. The problem we study is also related to the work of Assadi and Bernstein on graph sparsification for the edge-fault tolerant approximate maximum matching problem \cite{AB}. They offer a polynomial-time algorithm that, given any graph $G=(V,E)$, $\varepsilon > 0$, and $f \geq 0$, computes a subgraph $H=(V,E')$ of $G$ such that for any set of edges $F$, $|F| = f$, the maximum matching in $(V,E \setminus F)$ is at most $3/2+\varepsilon$ times larger than the maximum matching in $(V,E' \setminus F)$. Moreover, $H$ has $O(f+n)$ edges. To our knowledge, this is the only non-trivial algorithm known for edge-fault tolerant approximate maximum matching sparsification, and there are no algorithms for the vertex-fault tolerant version. We believe that our results might bring some insights valuable for this kind of challenge. \section{Constructions of biregular graphs}\label{sec:biregular} In this section we present some constructions of biregular bipartite graphs that will be useful throughout the paper. Let us start with a lemma. \begin{lemma}\label{le:xy} Let $n,m,a,b$ be positive integers such that $an=bm$. Let $c=\gcd(m,n)$ and $d=\gcd(a,b)$. Then there exist positive coprime integers $x$, $y$ such that $n = x c$, $m = y c$, $a = y d$, $b = x d$. \label{podzielniki} \end{lemma} \begin{proof} Let $n = n' \gcd(n,m)$, $m = m'\gcd(n,m)$, $a = a' \gcd(a,b)$, $b = b' \gcd(a,b)$. Since $n a = b m$, we have $\frac{m}{n}=\frac{a}{b}$ and so $\frac{m'}{n'}=\frac{a'}{b'}$.
Since both are irreducible fractions, we can set $x := n' = b'$ and $y := m' = a'$, and they are coprime. So we have $n = x \gcd(n,m)$, $b = x \gcd(a,b)$, $m = y \gcd(n,m)$, $a = y \gcd(a,b)$. \end{proof} For a positive integer $o$, let $[o]=\{0,1,\ldots,o-1\}$. As usual, the notation $x = y\pmod n$ indicates a congruence relation, whereas the notation $y\bmod n$ is a binary operation which returns the remainder. The values of $x$, $y$, $c$, and $d$, as they appear in Lemma \ref{le:xy}, are relevant throughout the paper. Just as $n$ and $m$ are the ``candidates'' for the cardinalities of the color classes $U$ and $V$, and $a$ and $b$ are the ``candidates'' for the corresponding degrees in a graph $G$ under consideration, we will consider the objects denoted by the letters $x$, $y$, $c$, and $d$ to be the ``candidates'' for the parameters described here. \begin{construction}\label{cons1} Let $n,m,a,b$ be positive integers such that $an=bm$. Let $c=\gcd(m,n)$ and $d=\gcd(a,b)$. Define $G_1=(U,V; E)$ as a bipartite graph having color classes $U=\{u_i \mid i \in [n]\}$, $V=\{v_j \mid j \in [m]\}$, and edges $E = \left\{ (u_i, v_{(j+\alpha) \bmod{m}}) \mid i \in [n], \alpha \in [a], j=\floor{\frac{i}{x}}y\right\}$. \end{construction} It is easy to check that the graph $G_1=(U,V; E)$ can also be constructed in the following way. Namely, let $G^{\prime}$ be a $d$-regular bipartite graph having color classes $U^{\prime}=\{u_i \mid i \in [c]\}$ and $V^{\prime}=\{v_j \mid j \in [c]\}$ such that $E(G^{\prime})=\{u_iv_{(i+\delta)\bmod{c}},\; i \in [c], \delta \in [d] \}$. We will now construct the graph $G_1=(U,V; E)$ by ``blowing up'' each vertex $u_i$ into $x=b/d=n/c$ vertices $u_{i,\alpha}$, $\alpha \in [x]$, and each vertex $v_j$ into $y=a/d=m/c$ vertices $v_{j,\beta}$, $\beta \in [y]$. Each edge from $G'$ will now be replaced by the corresponding complete bipartite graph $K_{x,y}$.
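Construction \ref{cons1} can be implemented in a few lines. The snippet below is our own illustration (the function name is hypothetical): it builds the edge set of $G_1$ and checks the $(a,b)$-regularity on the small example $n=6$, $m=4$, $a=2$, $b=3$, for which $an=bm=12$.

```python
from math import gcd

def construction_G1(n, m, a, b):
    """Edge set of Construction 1 (a sketch): u_i is joined to the a
    consecutive vertices v_j, v_{j+1}, ..., v_{j+a-1} (indices mod m),
    where j = floor(i/x)*y, x = n/c, y = m/c, c = gcd(n, m).
    Requires a*n == b*m."""
    assert a * n == b * m
    c = gcd(n, m)
    x, y = n // c, m // c
    return {(i, ((i // x) * y + alpha) % m)
            for i in range(n) for alpha in range(a)}

E = construction_G1(6, 4, 2, 3)
# Every vertex of U has degree a = 2, every vertex of V degree b = 3.
assert all(sum(1 for i, j in E if i == u) == 2 for u in range(6))
assert all(sum(1 for i, j in E if j == v) == 3 for v in range(4))
```

Running the same check for other quadruples with $an=bm$ gives a quick experimental confirmation of the biregularity of $G_1$.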
Let us present a similar construction that also connects each vertex from $U$ with an interval of $a$ consecutive vertices from $V$, only now the first vertex is chosen in a slightly different way. We will start with some useful lemmas. \begin{lemma} \label{resxy} Let $\alpha\in[c]$. If $\ceil{i\frac{y}{x}}\bmod m = j$, then $\ceil{(i+\alpha x)\frac{y}{x}}\bmod m = (j+\alpha y) \bmod m$. \end{lemma} \begin{proof} $\ceil{(i+\alpha x)\frac{y}{x}}\bmod m= (\alpha y+\ceil{i\frac{y}{x}})\bmod m=(\alpha y + j) \bmod m$. \end{proof} \begin{lemma} Let $x$, $y$, $c$, $n=cx$, $m=cy$ be positive integers, with $y<x$ and $j \in [m]$. Then the number of integer solutions to $\ceil{\frac{iy}{x}} = j \pmod{m}$ with respect to $i$, with $i \in [n]$, is equal to $\floor{j\frac{x}{y}} - \floor{(j-1)\frac{x}{y}}$. Moreover: \begin{itemize} \item $\floor{j\frac{x}{y}} - \floor{(j-1)\frac{x}{y}} = \floor{r\frac{x}{y}} - \floor{(r-1)\frac{x}{y}}$, where $r = j \bmod y$. \item For any interval of consecutive $y$ values of $j$, for $(x \bmod y)$ of them, there are $\ceil{\frac{x}{y}}$ solutions and, for the remaining $(y - x \bmod y)$, there are $\floor{\frac{x}{y}}$ solutions. \item In general, the number of solutions is $\ceil{\frac{x}{y}}$ for $(n \bmod m)$, and $\floor{\frac{x}{y}}$ for $(m - n \bmod m)$ values of $j\in[m]$. \end{itemize} \label{number_of_i_1} \end{lemma} \begin{proof} We have $\ceil{\frac{iy}{x}} \leq \ceil{\frac{(n-1)y}{x}} = \ceil{\frac{cxy-y}{x}} = cy + \ceil{-\frac{y}{x}} = m$. So $\ceil{\frac{iy}{x}} \bmod{m} = j$ means $\ceil{\frac{iy}{x}} = j$ if $j>0$. If $j=0$, then we have $\ceil{\frac{iy}{x}} = 0$ or $\ceil{\frac{iy}{x}} = m$. First, consider the case $j>0$. It means that $(j-1) < \frac{iy}{x}$ and $\frac{iy}{x} \leq j$. So the equation is satisfied iff $i$ belongs to the interval $\left](j-1)\frac{x}{y}, j\frac{x}{y}\right]$.
The number of integers it contains is equal to $\floor{j\frac{x}{y}} - \floor{(j-1)\frac{x}{y}}$ (for example, see Chapter 3 of the book by Graham, Knuth and Patashnik \cite{GKP}). And they all belong to $[n]$. Now, let $j=0$. The only solution to $\ceil{\frac{iy}{x}} = 0$ with $i \in [n]$ is $i=0$. On the other hand, $\ceil{\frac{iy}{x}} = m$, without additional restrictions on $i$, is satisfied by all integers inside the interval $\left](m-1)\frac{x}{y}, m\frac{x}{y}\right]$. The only one of them not contained in $[n]$ is $m\frac{x}{y} = cy\frac{x}{y} = cx = n$. So the number of integer solutions to $\ceil{\frac{iy}{x}} \pmod{m} = 0$ with $i \in [n]$ is equal to the number of integer solutions of $\ceil{\frac{iy}{x}} = m$. Let us analyze the behavior of the number of solutions to $\ceil{\frac{iy}{x}} = j$ with respect to the residue of $j \pmod{y}$. Let $j = l y + r$ for some $0 \leq r < y$ and integer $l$. Then $\floor{j\frac{x}{y}} - \floor{(j-1)\frac{x}{y}} = \floor{\frac{lyx + rx}{y}} - \floor{\frac{lyx + rx - x}{y}} = \floor{r\frac{x}{y}} - \floor{(r-1)\frac{x}{y}}$. Let us now consider $r\in[y]$ and suppose that $\ceil{i\frac{y}{x}}=r$; then $i\in[x]$ by Lemma~\ref{resxy}. If $x=qy+p$ for $p\in[y]$, then the number of integer solutions to $\ceil{i\frac{y}{qy+p}}=r$ with respect to $i$, with $i \in [qy+p]$, is equal to either $q=\floor{\frac{x}{y}}$ or $q+1=\ceil{\frac{x}{y}}$. Because $i\in[x]$, the number of solutions is $\ceil{\frac{x}{y}}$ for $p=x\bmod y$ values of $r\in[y]$. For the other $(y - x \bmod y)$ values of $r$, there are $\floor{\frac{x}{y}}$ solutions. Since the number of solutions depends only on $(j \bmod y)$, the second bullet holds. Finally, the number of solutions is $\ceil{\frac{x}{y}}$ for $c\cdot( x \bmod y)= (cx \bmod cy) = (n \bmod m)$ values of $j\in[m]$. \end{proof} \begin{lemma}\label{consecutive} Let $x$, $y$, $d$, $a=d y$, $b=d x$, $c$, $n=cx$, $m=cy$, be positive integers, with $y<x$, $d < c$, and $j \in [m]$.
Then the number of integers $i$, with $i \in [n]$, such that $\ceil{\frac{iy}{x}} \pmod{m} = j$ with $j \in \{l-(a-1),\dots,l\} \pmod{m}$ for some integer $l$, $l \in [m]$, is equal to $b$.\label{number_of_i_2} \end{lemma} \begin{proof} By Lemma \ref{number_of_i_1}, the number of solutions to $\ceil{\frac{iy}{x}} \pmod{m} = j$ depends only on the residue $j \pmod{y}$. On the other hand, by the assumptions of the lemma, we have $a+y \leq m$. So, without loss of generality, we may assume that $m-1 \geq l \geq m - y$ and $l-(a-1) \geq 0$. By Lemma \ref{number_of_i_1}, the number of integers $i$, as stated in the lemma, is equal to $\sum_{j=l-(a-1)}^{l} (\floor{j\frac{x}{y}} - \floor{(j-1)\frac{x}{y}})$. By the telescoping property, it is equal to $\floor {l\frac{x}{y}}-\floor {(l-a)\frac{x}{y}} = \floor {l\frac{x}{y}}-\floor {l\frac{x}{y}-dy\frac{x}{y}} = \floor {l\frac{x}{y}}-\floor {l\frac{x}{y}}+dx = dx = b$. \end{proof} \begin{construction}\label{cons3} Let $n,m,a,b$ be positive integers such that $1<m<n$ and $an=bm$. Define $G_2=(U,V; E)$ as a bipartite graph having color classes $U=\{u_i \mid i \in [n]\}$, $V=\{v_j \mid j \in [m]\}$, and edges $E = \left\{ (u_i, v_{(j+\alpha) \bmod{m}}) \mid i \in [n], \alpha \in [a], j=\ceil{\frac{i y}{x}}\right\}$. \end{construction} We will now show that the graph $G_2=(U,V; E)$ is $(a,b)$-regular. \begin{observation}Let $G_2=(U,V; E)$ be a graph of order $(n,m)$ given by Construction~\ref{cons3}. Then $\delta_U = \Delta_U = a$ and $\delta_V = \Delta_V = b$.\label{degrees} \end{observation} \begin{proof} One can easily see that $G_2$ is of order $(n,m)$ and $\delta_U = \Delta_U = a$. Let us show that we also have $\delta_V = \Delta_V = b$. Notice that any vertex $v_l \in V$ is adjacent to all vertices $u_i$ such that $\ceil{\frac{iy}{x}} = j$ with $j \in \{l-(a-1),\dots,l\} \pmod{m}$. So, by Lemma \ref{number_of_i_2}, $d(v_l) = b$.
\end{proof} \section{Biregular graphs with $b=n-m+1$}\label{sec:biregular_min} Observe that, given integer values $n$ and $m$, $n > m > 1$, by the pigeonhole principle, we need to have $\delta_V\geq b = n-m+1$ for any $k$-critical bipartite graph $G=(U,V;E)$ of order $(n,m)$, where $k=n-m$. So it is interesting to study graphs where $\Delta_V = n-m+1$. In particular, if $a=m(n-m+1)/n$ is an integer, we consider the $(a,b)$-regular bipartite graphs of order $(n,m)$, with $b = n-m+1$. If the subset of them that are $k$-critical bipartite is not empty, then it is the set of solutions to the M$k$CBG-$(n,m)$ problem. In this section we study conditions under which $(a,b)$-regular bipartite graphs of order $(n,m)$, where $a = \frac{m (n-m+1)}{n}$ is an integer and $b = n - m + 1$, exist. In the following section we will show that some of them indeed are $k$-critical bipartite. Recall that for an $(a,b)$-regular bipartite graph of order $(n,m)$, there are other relevant parameters: $x$, $y$, $c$, $d$, as described in Lemma \ref{le:xy}. Even though the main result of this paper, Theorem \ref{thm:main_positive}, is based on a fixed quadruple of parameters $n$, $m$, $a$, $b$, here we present a wider view of how such a quadruple can be completed, when only some of the parameters are fixed, and even based on the values of the parameters $x$, $y$, $c$, $d$ that are ``auxiliary'' for Theorem \ref{thm:main_positive}. We believe that these results are interesting in their own right, and might find applications in studies on generating (random) biregular bipartite graphs. Recall that we use the letters $n$, $m$, $a$, $b$, $x$, $y$, $c$, $d$ to represent values that are ``candidates'' to be the parameters of an $(a,b)$-regular bipartite graph of order $(n,m)$, where $c=\gcd(n, m)$, $d=\gcd(a,b)$, $n = x c$, $m = y c$, $a = y d$, $b = x d$. Especially in this section, only after we have verified all the conditions can we be sure that the corresponding graphs actually exist.
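The candidate parameters just recalled are straightforward to compute. The following sketch is our own helper with a hypothetical name (not code from the paper): given $n$ and $m$ with $n>m>1$, it returns the tuple $(a,b,c,d,x,y)$ with $b=n-m+1$, or None when $a=m(n-m+1)/n$ fails to be an integer, and checks the relations of Lemma \ref{le:xy} along the way.

```python
from math import gcd

def candidate_parameters(n, m):
    """Candidate parameters for an (a,b)-regular bipartite graph of
    order (n,m) with b = n-m+1, in the notation of this section.
    Returns None when a = m(n-m+1)/n is not an integer."""
    assert n > m > 1
    b = n - m + 1
    if (m * b) % n != 0:
        return None
    a = m * b // n
    c, d = gcd(n, m), gcd(a, b)
    x, y = n // c, m // c
    # Sanity checks: Lemma le:xy relations, and b = n-m+1 = c(x-y)+1.
    assert (n, m, a, b) == (x * c, y * c, y * d, x * d)
    assert b == c * (x - y) + 1
    return a, b, c, d, x, y

print(candidate_parameters(6, 4))  # (2, 3, 2, 1, 3, 2)
print(candidate_parameters(7, 4))  # None: 4*4 is not divisible by 7
```

Only the integrality of $a$ is filtered here; as stressed above, the existence of a corresponding graph still has to be verified through the conditions developed in this section.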
Sometimes it will be useful to rewrite the other variables in terms of $x$, $y$ and $c$. By Lemma \ref{le:xy}, rewriting in terms of $x$, $y$, $c$, we get the following lemma. \begin{lemma}\label{rewrite_xyc} Let $n,m$ be positive integers such that $1<m<n$, and $\frac{m (n-m+1)}{n}$ is an integer. Let $a = \frac{m (n-m+1)}{n}$, $b = n - m + 1$, $c = \gcd(n,m)$, $d = \gcd(a,b)$, $x=\frac{n}{c}$, $y=\frac{m}{c}$. Then the following hold: \[ \begin{array}{rcl} d & = & \frac{c (x-y) + 1}{x} \\ a & = & y \frac{c (x-y) + 1}{x}\\ b & = & c (x-y) + 1\\ m & = & y c\\ n & = & x c\\ \end{array} \] \end{lemma} The following lemma introduces another parameter, $p$, that is useful in the analysis of biregular graphs with $b=n-m+1$, and describes some related properties. \begin{lemma}\label{params_properties} Let $n,m$ be positive integers such that $1<m<n$, and $\frac{m (n-m+1)}{n}$ is an integer. Let $a = \frac{m (n-m+1)}{n}$, $b = n - m + 1$, $c = \gcd(n,m)$, $d = \gcd(a,b)$, $x=\frac{n}{c}$, $y=\frac{m}{c}$, and $p = c - d$. We have $p = \frac{m - 1}{x} = \frac{cy - 1}{x} = \frac{yd - 1}{x-y} = \frac{a - 1}{x-y} = \frac{m - a}{y}$. Moreover $p > 0$, $c > d > 0$, $c >p$, $x > y > 0$, $x > x-y$, $n>m>2$, $b>a>1$, $n-x \geq b$, $m-y \geq a$, $n-c \geq m \geq c$, $b-d \geq a \geq d$. \end{lemma} \begin{proof} The first part follows from Lemma \ref{rewrite_xyc} by simple rewriting of the terms. For the second part, first notice that, by our choice of variables, $p$ is an integer. Then, since $p = \frac{m - 1}{x}$, it must be non-negative. Finally, $p=0$ would imply $m=1$, contradicting $m>1$. Since $p = \frac{a-1}{x-y}$, $p \geq 1$, and $x-y \geq 1$, we have $a \geq 2$. Since $m-1 = px$ and $x \geq 2$, we have $m \geq 3$. \end{proof} In the following, we will use properties of B\'ezout's Identity (a detailed treatment can be found, for example, in \cite{AAC}). \begin{theorem}[B\'ezout's Identity] Let $\alpha$, $\beta$ and $\gamma$ be integers, $\alpha$, $\beta$ nonzero.
Consider the linear Diophantine equation \[\alpha \phi + \beta \psi = \gamma .\] \begin{enumerate} \item The equation is solvable for $\phi$ and $\psi$ in integers iff $\gamma' = \gcd(\alpha,\beta)$ divides $\gamma$. \item If $(\phi_0, \psi_0)$ is a particular solution, then every integer solution is of the form: \[\phi = \phi_0 + l\frac{\beta}{\gamma'} , \psi = \psi_0 - l\frac{\alpha}{\gamma'}\] where $l$ is an integer. \item If $\gamma = \gcd(\alpha, \beta)$ and $|\alpha|$ or $|\beta|$ is different from $1$, then a particular solution $(\phi_0, \psi_0)$ can be found such that $|\phi_0| < |\beta|$ and $|\psi_0| < |\alpha|$, namely the coefficients obtained from the Extended Euclidean Algorithm. \end{enumerate} \label{bezout} \end{theorem} Given a linear Diophantine equation as in Theorem \ref{bezout}, we will say that $\phi$ and $\psi$ are {\em B\'ezout's coefficients} for $\alpha$ and $\beta$. \begin{lemma}\label{cd_coprime} Let $n,m$ be positive integers such that $1<m<n$, and $\frac{m (n-m+1)}{n}$ is an integer. Let $a = \frac{m (n-m+1)}{n}$, $b = n - m + 1$, $c = \gcd(n,m)$, $d = \gcd(a,b)$, $x=\frac{n}{c}$, $y=\frac{m}{c}$. Then the identity $dx + (-c)(x-y) = 1$ holds and the pairs $(c, d)$, $(c,x)$ and $(x,x-y)$ are coprime. \end{lemma} \begin{proof} The identity follows by rewriting the relations presented in Lemma \ref{rewrite_xyc}. Coprimality follows from the identity and Theorem \ref{bezout}. \end{proof} \begin{proposition}\label{xy_existence} For any integer pair $(x,y)$ with $0<y<x$ and $\gcd(x,y)=1$, there exist infinitely many positive integer pairs $(c,d)$ such that there exists an $(a,b)$-regular graph $G$ of order $(n,m)$, where $n = x c$, $m = y c$, $a = y d$, $b = x d$, with $b=n-m+1$. All such $(c_l,d_l)$ pairs can be computed as $(c_l,d_l) = (c_0 + l x, d_0 + l (x-y))$ for any positive integer $l$, where $d_0$ and $(-c_0)$ are B\'ezout coefficients for $x$ and $(x-y)$ in $dx + (-c)(x-y) = 1$.
\end{proposition} \begin{proof} Let $(x,y)$ be an integer pair with $0<y<x$ and $\gcd(x,y)=1$. Given any integer pair $(c,d)$, applying the formulas $n = x c$, $m = y c$, $a = y d$, $b = x d$ gives us integer values of $n$, $m$, $a$, $b$. Moreover, we have $a = y d = \frac{x d y c}{x c} = \frac{b m}{n}$, and so $a n = b m$. So the existence of the corresponding $(a,b)$-regular graph of order $(n,m)$, by Construction \ref{cons1}, is equivalent to satisfying the equation $b = n - m + 1$. Rewriting $b = n - m + 1$, we obtain $x d = c x - c y + 1$, and $1 = d x - c (x-y)$. So such a graph $G$ exists for any pair of integers $(c,d)$ such that $1 = d x - c (x-y)$. By Theorem \ref{bezout}, $d$ and $(-c)$ are B\'ezout coefficients for $x$ and $(x-y)$. Notice that, since $x > 1$, the Extended Euclidean Algorithm gives us B\'ezout's coefficients $d_E$, $(-c_E)$ such that $|d_E| < x-y$ and $|(-c_E)| < x$. If both $d_E$ and $c_E$ are positive, then, since $x > x-y$, we have $d_E<c_E$ and $d_E - (x-y) < 0$. So we can set $d_0=d_E$ and $c_0=c_E$, and all the other solutions can be constructed as $d_l = d_0 + l (x-y)$ and $c_l = c_0 + l x$ for any positive integer $l$. If either $d_E$ or $c_E$ is not positive, then we get such $d_0$ and $c_0$ by adding $(x-y)$ and $x$ to $d_E$ and $c_E$, respectively. Indeed, since $|d_E| < x-y$, we have $d_E + x - y > 0$, and the analogous inequality holds for $c_E$. \end{proof} Notice that all coprime pairs $(x,y)$ can be generated in a very efficient way, using the concept of the Stern-Brocot tree. Indeed, with adequate data structures, it is possible to generate unique coprime pairs using constant time per pair. See, for example, Chapter 4 of \cite{GKP} for details. By similar reasoning, we can obtain the following result based on $c$ and $d$.
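The two generation schemes just described can be sketched as follows (our own code, not from the paper). The function names are ours; the base Bézout pair is obtained via a standard recursive extended Euclid and then shifted into the positive range, rather than via the exact normalization discussed in the proof:

```python
def ext_gcd(p, q):
    # returns (g, s, t) with s*p + t*q == g == gcd(p, q)
    if q == 0:
        return p, 1, 0
    g, s, t = ext_gcd(q, p % q)
    return g, t, s - (p // q) * t

def stern_brocot_pairs(depth):
    # Coprime pairs (x, y) with 0 < y < x, read off the left half of the
    # Stern-Brocot tree: every mediant of 0/1 and 1/1 is in lowest terms.
    pairs = []
    def walk(ln, ld, rn, rd, level):
        if level == 0:
            return
        mn, md = ln + rn, ld + rd          # mediant numerator/denominator
        pairs.append((md, mn))             # (x, y) = (denominator, numerator)
        walk(ln, ld, mn, md, level - 1)
        walk(mn, md, rn, rd, level - 1)
    walk(0, 1, 1, 1, depth)
    return pairs

def cd_family(x, y, count):
    # First `count` pairs (c_l, d_l) solving d*x - c*(x-y) = 1,
    # as in Proposition xy_existence.
    z = x - y
    if z == 1:
        d0, c0 = 1, x - 1                  # d*x - c = 1 directly
    else:
        _, s, _ = ext_gcd(x, z)            # s*x == 1 (mod z); gcd(x, z) = 1
        d0 = s % z
        c0 = (d0 * x - 1) // z
    return [(c0 + l * x, d0 + l * z) for l in range(count)]
```

For $(x,y)=(3,2)$, for instance, this yields the base pair $(c_0,d_0)=(2,1)$, i.e., the instance $(n,m,a,b)=(6,4,2,3)$ with $b=n-m+1$.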
\begin{proposition}\label{cd_existence} For any integer pair $(c,d)$ with $0<d<c$ and $\gcd(c,d)=1$, there exist infinitely many positive integer pairs $(x,y)$ such that there exists an $(a,b)$-regular graph $G$ of order $(n,m)$, where $n = x c$, $m = y c$, $a = y d$, $b = x d$, with $b=n-m+1$. All such $(x_l,y_l)$ pairs are coprime and can be computed as $(x_l,y_l)=(x_l,x_l-z_l)$, with $(x_l,z_l) = (x_0 + l c, z_0 + l d)$ for any positive integer $l$, where $x_0$ and $(-z_0)$ are B\'ezout coefficients for $d$ and $c$ in $dx + c(-z) = 1$. \end{proposition} Now, let us analyze constructions based on the value of $m$. \begin{proposition}\label{m_existence} Consider an integer $m$, $m \geq 3$. Let $c y$ be any positive integer factorization of $m$ with $c \geq 2$, and let $p x$ be any positive integer factorization of $m-1$, with $c>p$ and $x>y$. Then there exists an $(a,b)$-regular graph $G$ of order $(n,m)$, where $m = c y$, $n = c x$, $d = c - p$, $a = y d$, $b = x d$, with $b=n-m+1$. \end{proposition} \begin{proof} Let $c y$ and $p x$ be such factorizations. Notice that there always exists at least one such pair of factorizations, namely $c=m$, $y=1$, $p=1$, and $x=m-1$. Using the formulas $n = c x$, $d = c - p$, $a = y d$, $b = x d$, we get integer values for $n$, $d$, $a$, and $b$. So, as in the proof of Proposition \ref{xy_existence}, the claim is equivalent to satisfying $b = n - m + 1$. Indeed, by rewriting, we have $b = x d = x (c - p) = x c - x p = n - m + 1$. \end{proof} Notice that the function that assigns to every integer $m$ the cardinality of the set of values $n$, such that $n>m>1$ and there exists an $(a,b)$-regular graph of order $(n,m)$, with $b=n-m+1$, is very difficult to analyze. On one hand, consider for example a Sophie Germain prime $\alpha$. Recall that a prime number $\alpha$ is a Sophie Germain prime if $2\alpha+1$ is also a prime. Let $m=\beta=2\alpha+1$ and let $m-1=2\alpha$.
Then there exists only one factorization $m=cy$ with $c>1$: $y=1$ and $c=\beta$; and three factorizations $m-1 = p x$ with $x > 1$: $(x = 2, p=\alpha)$, $(x = \alpha, p=2)$, and $(x = 2\alpha, p=1)$; hence there are only three values of $n$ such that there exists an $(a,b)$-regular graph of order $(n,m)$, with $b=n-m+1$. It is conjectured that there exist infinitely many Sophie Germain primes; as for now, Chen's method proves that there are infinitely many prime numbers $\alpha$ such that $\beta = 2\alpha + 1$ is either a prime or a product of two distinct prime numbers (see \cite{G11}). So there are infinitely many values $m$ for which there are at most $9$ values of $n$ such that there exists an $(a,b)$-regular graph of order $(n,m)$, with $b=n-m+1$. On the other hand, by a result of Ramanujan \cite{R15}, there are infinitely many integers $m$ for which the number of distinct divisors $d(m)$ of $m$ satisfies $d(m) > 2^{\frac{\log(m)}{\log\log(m)}+O(\frac{\log(m)}{(\log\log(m))^2})}$. So the number of values of $n$ such that there exists an $(a,b)$-regular graph of order $(n,m)$, with $b=n-m+1$, for such $m$ is very high. By reasoning similar to that for $m$, we obtain an analysis of constructions based on $a$, $b$, and $n$. \begin{proposition}\label{a_existence} Consider an integer $a$, $a \geq 2$. Let $d y$ be any positive integer factorization of $a$, and let $p z$ be any positive integer factorization of $a-1$. Then there exists an $(a,b)$-regular graph $G$ of order $(n,m)$, where $a = d y$, $x = z + y$, $b = d x$, $c = d + p$, $n = c x$, and $m = c y$, with $b=n-m+1$. \end{proposition} \begin{proposition}\label{b_existence} Consider an integer $b$, $b \geq 3$. Let $dx$ be any positive integer factorization of $b$, with $x \geq 2$, and let $cz$ be any positive integer factorization of $b-1$, with $c > d$ and $z<x$.
Then there exists an $(a,b)$-regular graph $G$ of order $(n,m)$, where $b=dx$, $y = x - z$, $a=dy$, $m = c y$, $n = c x$, with $b=n-m+1$. \end{proposition} \begin{proposition}\label{n_existence_1} Consider a composite integer $n$, $n \geq 4$. Let $cx$ be any positive integer factorization of $n$, with $x \geq 2$ and $c \geq 2$, and let $yz$ be any positive integer factorization of $n+1$, with $z \geq c + 1$. Then there exists an $(a,b)$-regular graph $G$ of order $(n,m)$, where $n=cx$, $m=cy$, $d=z-c$, $a=dy$, $b=dx$, with $b=n-m+1$. \end{proposition} Constructions based on $n$ can also be analyzed from a different point of view. \begin{proposition}\label{n_existence_2} Consider a composite integer $n$, $n \geq 4$. Let $cx$ be any positive integer factorization of $n$, with $c \geq 2$ and $x \geq 2$. Then there exists exactly one integer pair $(y,d)$ such that there exists an $(a,b)$-regular graph $G$ of order $(n,m)$, where $m = c y$, $n = c x$, $a = y d$, and $b = x d$, with $b=n-m+1$. Moreover, $d$ and $-(x-y)$ are the B\'ezout's coefficients for $x$ and $c$ in $dx - (x-y)c = 1$ with $d < c$ and $x-y<x$. \end{proposition} \begin{proof} First notice that the condition for $n$ to be composite is necessary. Indeed, for a prime $n$ there is no factorization $n=cx$ with $c \geq 2$ and $x \geq 2$. Let $c x$ be a positive integer factorization of $n$. Given an integer pair $(y,d)$, applying the formulas $m = c y$, $a = y d$, and $b = x d$ gives us integer values of $m$, $a$, $b$. So, as in the proof of Proposition \ref{xy_existence}, the claim is equivalent to satisfying $b = n - m + 1$. By rewriting $b = n - m + 1$, we obtain $x d = c x - c y + 1$, and $1 = d x - c (x-y)$. Recall that, by Lemma \ref{params_properties}, there must be $d < c$ and $x-y < x$. Let us define $z=(x-y)$ and write $1 = dx - zc$.
Since $c>1$, by Theorem \ref{bezout}, the Extended Euclidean Algorithm for $x$ and $c$ gives the B\'ezout's coefficients $d_E$ and $(-z_E)$ such that $|d_E| < c$ and $|-z_E| < x$. Notice that, since $x \geq 2$ and $c \geq 2$, there must be both $d_E \neq 0$ and $z_E \neq 0$. Moreover, B\'ezout's coefficients for a pair of positive integers, if they are both non-zero, have to be of opposite signs. So $d_E$ and $z_E$ are either both positive or both negative. If they are both negative, then adding $c$ and $x$ to $d_E$ and $z_E$, respectively, by an argument similar to that of the proof of Proposition \ref{xy_existence}, gives us the values as needed. If they are both positive, then they satisfy our conditions, and adding $c$ and $x$ gives values that exceed $c$ and $x$, respectively. Hence the values of $(d,z)$ that satisfy $b = n - m + 1$, and thus the values of $(y,d)$, are unique. \end{proof} Notice that, by Lemmas \ref{le:xy}, \ref{params_properties}, and \ref{cd_coprime}, Propositions \ref{xy_existence}--\ref{n_existence_2} cover all possible cases where $(a,b)$-regular bipartite graphs of order $(n,m)$, with $1 < m < n$ and $b=n-m+1$, might exist. Notice also that, since the Extended Euclidean Algorithm runs in time polynomial in the input size (see, for example, \cite{K14} for a detailed analysis), the computation of the first parameter values described in Propositions \ref{xy_existence} and \ref{cd_existence} can be done in polynomial time, and then each following pair can be computed in polynomial time too. Finally, the time complexity of computing the values presented in Proposition \ref{n_existence_2} is polynomial. The computation time of the values presented in Propositions \ref{m_existence}, \ref{a_existence}, and \ref{b_existence} depends on the number of factorizations, which may be exponential in the input size.
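To make the factorization-based recipes concrete, here is a small enumeration sketch (our own code, with hypothetical function names) for Proposition \ref{m_existence}. For the Sophie Germain case $m=7=2\cdot 3+1$ it returns exactly three admissible values of $n$, matching the count discussed above:

```python
def divisors(t):
    # naive divisor enumeration; the factorization cost is the point made above
    return [d for d in range(1, t + 1) if t % d == 0]

def admissible_n_for_m(m):
    # All n produced by Proposition m_existence: pick factorizations
    # m = c*y with c >= 2 and m - 1 = p*x with c > p and x > y; then n = c*x.
    out = set()
    for c in divisors(m):
        if c < 2:
            continue
        y = m // c
        for p in divisors(m - 1):
            x = (m - 1) // p
            if c > p and x > y:
                out.add(c * x)
    return sorted(out)

# Each returned n admits an integer a = m(n-m+1)/n by construction.
```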
The time complexity of integer factorization remains open in the classical model of computation, but factorization can be done in polynomial time on a quantum computer \cite{S99}. \section{$k$-critical bipartite biregular graphs}\label{sec:k-critical} Let us define a special class of $(a,b)$-regular bipartite graphs of order $(n,m)$ that will be the main object of study in this section. Throughout, $n,m,k$ will be positive integers such that $1<m<n$, $k=n-m$, $b = n-m+1$, and $a=\frac{m (n-m+1)}{n}$ is an integer. \begin{definition} Let $n,m,k$ be positive integers such that $1<m<n$, $k=n-m$, $b = n-m+1$, and $a=\frac{m (n-m+1)}{n}$ is an integer. An $(a,b)$-regular graph $G$ of order $(n,m)$ such that, for any $U' \subset U$ with $|U'| = m$, the subgraph $H := G[U', V]$ induced in $G$ by the vertex set $(U', V)$ has a perfect matching, is called biregular $k$-critical bipartite. \end{definition} Our proofs related to the existence of perfect matchings will be based on the well-known Hall's theorem. \begin{theorem}[Hall's Theorem] A bipartite graph $G=(U,V;E)$ contains a complete matching from $U$ to $V$ if and only if it satisfies $|N(A)| \geq |A|$ for every $A\subseteq U$.\label{Hall} \end{theorem} Let us first show a result for graphs given by Construction~\ref{cons1}. \begin{observation}\label{kcons1} Let $n,m,k$ be positive integers such that $1<m<n$, $k=n-m$, $b = n-m+1$, and $a=\frac{m (n-m+1)}{n}$ is an integer. The graph $G_1$ given in Construction~\ref{cons1} is biregular $k$-critical bipartite if and only if $c=\gcd(n,m)=m$. \end{observation} \begin{proof} Let $G_1$ be a graph given by Construction~\ref{cons1}. As usual, let $d = \gcd(a,b)$. Note that, for any $v_{j_1},v_{j_2}\in V$, we have either $N(v_{j_1})\cap N(v_{j_2})=\emptyset$ or $N(v_{j_1})= N(v_{j_2})$. Suppose first that $c=m$; recall that then, by Lemma \ref{le:xy}, we have $d=a$.
We will now show that, for any $U' \subset U$ such that $|U'| = m$, the subgraph $H := G[U', V]$ induced in $G$ by the vertex set $(U', V)$ has a perfect matching. Let $U^{\prime}\subset U$, $|U^{\prime}|=m$. We need to show that, for any $A$, $A \subset U^{\prime}$, $|A|=r$, $A=\{u_{i_0},u_{i_1},\ldots,u_{i_{r-1}}\}$, the neighborhood $N(A)$, $N(A) \subset V$, satisfies $|N(A)|\geq |A| = r$. Suppose that $za<|A|\leq (z+1)a$ for some $z\in[k]$. Then, by the pigeonhole principle, there have to exist pairwise distinct vertices $v_{j_1},v_{j_2},\ldots,v_{j_z}\in N(A)\subset V$ such that $N(v_{j_s})\neq N(v_{j_t})$ for any $j_s\neq j_t$. Hence $|N(A)|\geq za$. Assume now that $c<m$, and let $B=\{v_{c-1,0},v_{c-1,1},\ldots,v_{c-1,y-1}\}$. By Construction~\ref{cons1}, there is $$N(B)=\{u_{c-d,0}, u_{c-d,1},\ldots,u_{c-d,x-1}, \ldots,u_{c-1,0}, u_{c-1,1},\ldots,u_{c-1,x-1}\}.$$ Hence $|N(B)|=d\cdot b/d=b=n-m+1$. Let $A=(U\setminus N(B))\cup \{u_{c-1,0}\}$. Observe that then, in the subgraph $H=G[A,V]$, we have $|B|>1$ and $|N_H(B)|=1$. Thus, Hall's Theorem implies that $H$ does not have a perfect matching. \end{proof} Let us now present our main negative result. \begin{theorem}\label{thm:main_negative} Let $n,m,k$ be positive integers such that $1<m<n$, $k=n-m$, $b = n-m+1$, and $a=\frac{m (n-m+1)}{n}$ is an integer. There exists an $(a,b)$-regular graph of order $(n,m)$ that is not biregular $k$-critical bipartite if and only if $a<m-1$. \end{theorem} \begin{proof} Let $G$ be an $(a,b)$-regular graph of order $(n,m)$. As usual, let $c = \gcd(m,n)$ and $d = \gcd(a,b)$. By Lemma \ref{params_properties}, we have $a < m$. Suppose that $a=m-1$. Then, by simple rewriting, we obtain that $b=(m-1)^2$ and $n=(m-1)m$. Take any $U'$, $U' \subset U$ with $|U'| = m$. Suppose that the subgraph $H := G[U', V]$ induced in $G$ by the vertex set $(U', V)$ does not have a perfect matching. Observe that $|N(U')|\geq m-1$, because $\deg(u)=m-1$ for any $u\in U'$.
Therefore, by Hall's Theorem, we can assume that there exists $A \subseteq U'$ with $|A|=m$ and $|N(A)|=m-1$. This implies that there exists a vertex $v\in V$ such that $N(v)\cap A=\emptyset$. Hence $\deg(v)\leq |U|-|A|=(m-2)m<(m-1)^2=b=\deg(v)$, a contradiction. From now on, we assume that $a<m-1$. If $c<m$, then Observation~\ref{kcons1} implies that the $(a,b)$-regular graph of order $(n,m)$ given by Construction~\ref{cons1} is not biregular $k$-critical bipartite.\\ For $c=m$, let $G^{\prime}$ be an $(a,b)$-regular bipartite graph with color classes $U=\{u_0,u_1,\ldots,u_{n-1}\}$ and $V=\{v_0,v_1,\ldots,v_{m-1}\}$ such that $E(G^{\prime})=\{(u_{(i+\alpha+zm)\bmod{n}},v_i) \mid i \in [m], \alpha\in[a],z\in[x]\}$. Note that $N_{G^{\prime}}(v_0)\cap N_{G^{\prime}}(v_{m-1})=\emptyset$, since $a<m-1$.\\ We will now construct the graph $G$ by exchanging some edges in $G^{\prime}$. Namely, let $V(G)=V(G^{\prime})$ and $$E(G)=(E(G^{\prime})\setminus\{(u_{(a+zm)\bmod{n}},v_{1}),(u_{zm\bmod{n}},v_{m-1})\mid z\in[x]\})$$ $$\cup\{(u_{zm\bmod{n}},v_{1}),(u_{(a+zm)\bmod{n}},v_{m-1})\mid z\in[x]\}.$$ Let $B=\{v_{0},v_{m-1}\}$. By construction, $N(B)=\{u_{(\alpha+zm)\bmod{n}} \mid \alpha\in[a], z\in[x]\}$. Hence $|N(B)|=a\cdot x=b=n-m+1$. Let $A=(U\setminus N(B))\cup \{u_{0}\}$. Observe that then, in the subgraph $H=G[A,V]$, we have $|B|=2$ whereas $|N_H(B)|=1$. Thus Hall's Theorem implies that $H$ does not have a perfect matching. \end{proof} Let us now present the main result. \begin{theorem}\label{thm:main_positive} Let $n,m,k$ be positive integers such that $1<m<n$, $k=n-m$, $a=\frac{m (n-m+1)}{n}$ is an integer, and $b=n-m+1$. Then there exists an $(a,b)$-regular graph of order $(n,m)$ that is biregular $k$-critical bipartite. \label{positive_const_biregular} \end{theorem} \begin{proof} Recall that $c = \gcd(n,m)$, $d = \gcd(a,b)$, $n = x c$, $m = y c$, $a = y d$, $b = x d$.
Let $G_2$ be the graph given by Construction~\ref{cons3}, i.e., a graph $(U,V; E)$, with $U=\{u_i \mid i \in [n]\}$, $V=\{v_j \mid j \in [m]\}$, $E = \{ (u_i, v_{(j+\alpha) \bmod{m}}) \mid i\in [n], \alpha \in [a], j=\ceil{\frac{i y}{x}}\}$. Consider the cardinalities $|\{i \mid i\in [n], \ceil{\frac{i y}{x}} = j \pmod{m} \}|$ for $j \in [m]$. Recall that, by Lemma \ref{number_of_i_1}, when $y=1$, all cardinalities are equal to $\ceil{\frac{x}{y}}=\floor{\frac{x}{y}}=x$. Otherwise, since $x$ and $y$ are coprime, $(\ceil{\frac{x}{y}} - \frac{x}{y})y = (y - x \bmod y) \leq y - 1$, and the cardinalities alternate between $\floor{\frac{x}{y}}$ and $\ceil{\frac{x}{y}}$. We will show that, for any $U^{\prime} \subset U$ such that $|U^{\prime}| = m$, in the subgraph $H := G[U^{\prime}, V]$ induced in $G$ by the vertex set $(U^{\prime}, V)$, there exists a complete matching of size $|U^{\prime}|$. By Theorem \ref{Hall}, this means that, for any $A$, $A \subset U$, $|A|=r\leq m$, $A=\{u_{i_0},u_{i_1},\ldots,u_{i_{r-1}}\}$, the neighborhood $$N(A)=\left\{v_{\left(\ceil{\frac{i_{s} y}{x}}+\alpha\right) \bmod{m}}: \,\alpha \in [a],\; s \in [r]\right\}\subset V,$$ satisfies $|N(A)|\geq |A|$. Given such an $A$, let $B(A) = \{v_j \mid \ceil{\frac{i_{s} y}{x}} \bmod{m} = j, s \in [r] \}\subset N(A)$, i.e., $B(A)$ is the set of the ``first neighbours'' of vertices from $A$. For the argument, let us consider the elements of $V$ placed on a directed cycle $C_m$, where there is an arc $(v_{j_1}, v_{j_2})$ if and only if $j_2 - j_1 = 1 \pmod{m}$. We will analyze the relative positions of the elements of $B(A)$ on $C_m$. Suppose that there exists a subset $A$, $A \subset U$, such that $|N(A)|/|A| = t < 1$.
Among all such subsets, let $A$ be one for which the value of $t$ is the lowest possible, the minimum length of a path that covers all vertices of $B(A)$ in $C_m$ is the lowest possible, and such a path terminates with the largest possible number of vertices $v_j$ for which $|\{ i \mid i\in [n], \ceil{\frac{i y}{x}} = j \pmod{m} \}| = \ceil{\frac{x}{y}}$. Let $P$ denote such a path (if there is more than one, we choose one of them arbitrarily). Clearly, $N(A)$ does not cover the whole cycle $C_m$, as this would mean $|N(A)| = m \geq |A|$. Let us show that $N(A)$ induces a path in $C_m$. Suppose, on the contrary, that this is not the case. Then we can partition $A$ into a family of disjoint sets $A_0,\dots,A_{s-1}$, each one corresponding to an inclusion-maximal path $P_{A_l}$, $l\in [s]$, of the subgraph induced by $N(A)$ in $C_m$. Note that the length of each path $P_{A_l}$, $l\in [s]$, is strictly smaller than the length of $P$. Suppose that we have $|N(A_l)|/|A_l| > |N(A)|/|A|$ for all $l\in [s]$. Then $|A||N(A_l)|>|A_l| |N(A)|$ for all $l\in [s]$, which implies that $|A|(|N(A_0)|+\ldots+ |N(A_{s-1})|)>(|A_0|+\ldots+|A_{s-1}|) |N(A)|$. Since the sets $A_l$, $l\in [s]$, are pairwise disjoint, and so are their neighborhoods, this leads to $|A||N(A)|>|A| |N(A)|$, a contradiction. So, for at least one $l \in [s]$, we have $|N(A_l)|/|A_l| \leq |N(A)|/|A|$, a contradiction with the choice of $A$. Now, given that $N(A)$ induces a path in $C_m$, we can also show that $B(A)$ induces a path in $C_m$. Notice that, since $N(A)$ induces a path in $C_m$, there must be $V(P) \subseteq N(A)$. Suppose that $P$ contains a vertex $v_j$ not in $B(A)$. This means that $\{ i \mid \ceil{\frac{i y}{x}} = j \pmod{m} \} \cap A = \emptyset$, but $v_j$ is in $N(A)$. So we can create $A^{\prime} = A \cup \{u_i\}$, where $\ceil{\frac{i y}{x}} = j \pmod{m}$, with $|A^{\prime}|>|A|$ and $|N(A^{\prime})| = |N(A)|$, a contradiction with the choice of $A$, unless $|A|=m$.
But $|A|=m$ cannot hold, since it would mean that $|N(A)| \leq m-1$, and, for any $v_j \in V$, there are only $m-1$ non-neighbors of $v_j$ in $U$. So we have additionally shown that we may assume $|A| \leq m-1$ and $|N(A)|\leq m-2$. In a similar way, we can prove that $A = \{ u_i \mid \ceil{\frac{i y}{x}} = j \pmod{m}, v_j\in B(A) \}$. In other words, $A$ corresponds to the set of solutions to $\ceil{\frac{i y}{x}} = j \pmod{m}$ for an interval $J$ of consecutive values of $j$. Therefore, we can assume $|B(A)|=q y + \gamma$ for some $\gamma\in[y]$. By Lemma~\ref{number_of_i_1}, the $qy$ vertices in $B(A)$ correspond to $qx$ vertices in $A$. The remaining $\gamma$ vertices in $B(A)$ can be partitioned into $\gamma'$ vertices that correspond to $\gamma' \ceil{\frac{x}{y}}$ vertices in $A$ and $\gamma''$ vertices that correspond to $\gamma'' \floor{\frac{x}{y}}$ vertices in $A$. Again by Lemma~\ref{number_of_i_1}, we have $\gamma'+\gamma''=\gamma$, $\gamma' \leq (x \bmod y)$ and $\gamma'' \leq (y - x \bmod y)$. Hence $qx\leq|A|\leq qx+\gamma'\ceil{\frac{x}{y}}+\gamma''\floor{\frac{x}{y}}$ and $|N(A)|=qy+\gamma+dy-1$. \\ By the choice of $A$, $$|N(A)|=qy+\gamma'+\gamma''+dy-1 <|A|\leq qx+\gamma'\ceil{\frac{x}{y}}+\gamma''\floor{\frac{x}{y}}.$$ Suppose $\gamma = 0$. Then $|A|=qx$ and the inequality reduces to $qy + dy - 1 < qx$, which is equivalent to $q > \frac{dy-1}{x-y}$. By Lemma \ref{params_properties}, we have $\frac{yd-1}{x-y} = \frac{m-1}{x}$, and so it is equivalent to $q > \frac{m-1}{x}$. We get $|A| = qx > m-1$, a contradiction, since we have already shown that $|A| \leq m-1$. So we may assume that $\gamma'+\gamma'' > 0$.\\ Suppose that $(q+1)x\leq m-1$; then $q+1\leq\frac{m-1}{x}=\frac{yd-1}{x-y}$ by Lemma \ref{params_properties}.
Hence $$qy+y+yd-1-(y-\gamma'-\gamma'') \geq qx+x-(y-\gamma'-\gamma'').$$ Recall that $\gamma' \leq (x \bmod y)$ and $\gamma'' \leq (y - x \bmod y)$, that $(x \bmod y)\ceil{\frac{x}{y}}+(y - x \bmod y)\floor{\frac{x}{y}}=x$, and that $\frac{x}{y}>1$. Therefore $$qx+x-(y-\gamma'-\gamma'') = qx+x-((x \bmod y)-\gamma')-((y - x \bmod y)-\gamma'')$$ $$\geq qx+x-((x \bmod y)-\gamma')\ceil{\frac{x}{y}} - ((y - x \bmod y)-\gamma'')\floor{\frac{x}{y}}=qx+\gamma'\ceil{\frac{x}{y}} +\gamma''\floor{\frac{x}{y}}.$$ Thus $$|N(A)|=qy+\gamma'+\gamma''+dy-1 \geq qx+\gamma'\ceil{\frac{x}{y}}+\gamma''\floor{\frac{x}{y}}\geq|A|,$$ a contradiction with the choice of $A$, so we may assume that $(q+1)x \geq m$. On the one hand, $m\leq (q+1)x$ implies $\frac{m-1}{x} < q+1$. Since $p=\frac{m-1}{x}$, this means $q \geq p$. On the other hand, recall that $|A| \leq m-1$. So $|A| > |N(A)| = qy+\gamma+dy-1$ implies $m-1 > qy+\gamma+dy-1$. By Lemma \ref{params_properties}, $qy+\gamma+dy-1 = qy + \gamma + p(x-y) = qy + \gamma + m - 1 - py$. So we obtain $y(p-q) > \gamma$. Since $\gamma > 0$, we get $q < p$, and we reach a contradiction. \end{proof} It is interesting to note how a small difference between Construction \ref{cons1} and Construction \ref{cons3} changes the properties of the resulting graphs with respect to the property of being $k$-critical bipartite. Indeed, the only difference is in changing the first vertex adjacent to any $u_i \in U$ from $(\floor{\frac{i}{x}} y \bmod{m})$ to $(\ceil{\frac{iy}{x}} \bmod{m})$. In fact, for reasons similar to those presented for Construction \ref{cons1}, replacing $(\floor{\frac{i}{x}}y \bmod{m})$ with $(\ceil{\frac{i}{x}}y \bmod{m})$ does not change the properties of the resulting graph with respect to M$k$CBG-$(n,m)$.
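Theorem \ref{thm:main_positive} can also be spot-checked by brute force on small instances. The sketch below is our own code (zero-based indexing assumed, exponential-time check, for illustration only); it rebuilds Construction \ref{cons3} and tests the defining property of a biregular $k$-critical bipartite graph directly:

```python
from itertools import combinations, permutations
from math import gcd

def g2_neighbours(n, m, a):
    # Neighbour sets of Construction cons3 (zero-based indices).
    c = gcd(n, m)
    x, y = n // c, m // c
    adj = []
    for i in range(n):
        j = -(-i * y // x)             # ceil(i*y/x) with exact integer arithmetic
        adj.append({(j + alpha) % m for alpha in range(a)})
    return adj

def is_k_critical(n, m, a):
    # Check: every m-subset U' of U admits a perfect matching onto V in G[U', V].
    adj = g2_neighbours(n, m, a)
    for subset in combinations(range(n), m):
        if not any(all(perm[t] in adj[subset[t]] for t in range(m))
                   for perm in permutations(range(m))):
            return False
    return True

# n=6, m=4: a = m(n-m+1)/n = 2 is an integer, so the theorem predicts success.
```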
On the other hand, replacing $(\ceil{\frac{iy}{x}} \bmod{m})$ with $(\floor{\frac{iy}{x}} \bmod{m})$ in Construction \ref{cons3} also generates graphs that are solutions to M$k$CBG-$(n,m)$. As an interesting generalization of Construction \ref{cons3}, if the neighbors following $v_j$, $j=\ceil{\frac{i y}{x}}$, instead of being consecutive ($s=1$), are separated by any $s$ that is a divisor of $x$, the resulting graph is also a biregular $k$-critical bipartite graph. In other words, for any positive integers $n,m$ such that $1<m<n$, $a=\frac{m (n-m+1)}{n}$ is an integer, and $b=n-m+1$, the graph $G=(U,V; E)$, with $U=\{u_i \mid i \in [n]\}$, $V=\{v_j \mid j \in [m]\}$, $E = \{ (u_i, v_{(j+s\alpha) \bmod{m}}) \mid i \in [n], \alpha \in [a], j=\ceil{\frac{i y}{x}}\}$, for any $s$ such that $x \equiv 0 \pmod{s}$, is also a biregular $k$-critical bipartite graph. On the other hand, for any $s$ such that $x \not\equiv 0 \pmod{s}$, the resulting graph is in general not even biregular. \section{Conclusions}\label{sec:conclusions} We define the Minimum $k$-Critical Bipartite Graph problem for $(n,m)$: to find a bipartite graph $G=(U,V;E)$, with $|U|=n$, $|V|=m$, $k=n-m$, that is a $k$-critical bipartite graph, and such that the tuple $(|E|, \Delta_U, \Delta_V)$, where $\Delta_U$ and $\Delta_V$ are the maximum degrees in $U$ and $V$, respectively, is lexicographically minimum. We study it in the case of unbalanced biregular graphs, i.e., when $n,m,k$ are positive integers such that $1<m<n$, $k=n-m$, $a=\frac{m (n-m+1)}{n}$ is an integer, and $b=n-m+1$. We show that if $a=m-1$, then all $(a,b)$-regular bipartite graphs of order $(n,m)$ are $k$-critical bipartite, and for $a<m-1$, it is not the case. We characterize the values of $n$, $m$, $a$, and $b$ that admit an $(a,b)$-regular bipartite graph of order $(n,m)$, and give a simple construction that creates such a $k$-critical bipartite graph whenever possible.
Our analysis leads to simple algorithmic recipes that can be exploited for generating biregular $k$-critical bipartite graphs. We hope that our results will motivate further studies, in particular in relation to the topics mentioned in the introduction. At the moment, we are working on the following conjecture that generalizes Construction \ref{cons3} to unbalanced bipartite graphs that are not biregular. \begin{conjecture} Let $n,m,k$ be positive integers such that $1<m<n$, $k=n-m$, and $a=\frac{m (n-m+1)}{n}$ is not an integer. Let $a'= \ceil{a}$. Then the graph $(U,V; E)$, with $U=\{u_i \mid i \in [n]\}$, $V=\{v_j \mid j \in [m]\}$, $E = \{ (u_i, v_{(j+\alpha) \bmod{m}}) \mid i \in [n], \alpha \in [a'], j=\ceil{\frac{i m}{n}}\}$, is $k$-critical bipartite.\label{irreg} \end{conjecture} \bibliographystyle{acm}
https://arxiv.org/abs/0909.1434
Homotopy, Delta-equivalence and concordance for knots in the complement of a trivial link
Link-homotopy and self Delta-equivalence are equivalence relations on links. It was shown by J. Milnor (resp. the last author) that Milnor invariants determine whether or not a link is link-homotopic (resp. self Delta-equivalent) to a trivial link. We study link-homotopy and self Delta-equivalence on a certain component of a link while fixing the rest of the components, in other words, homotopy and Delta-equivalence of knots in the complement of a certain link. We show that Milnor invariants determine whether a knot in the complement of a trivial link is null-homotopic, and give a sufficient condition for such a knot to be Delta-equivalent to the trivial knot. We also give a sufficient condition for knots in the complements of the trivial knot to be equivalent up to Delta-equivalence and concordance.
\section{Introduction} For an ordered and oriented $n$-component link $L$, the {\em Milnor invariant} $\overline{\mu}_L(I)$ is defined for each multi-index $I=i_1i_2...i_m$ with entries from $\{1,...,n\}$ \cite{Milnor,Milnor2}. Here $m$ is called the {\em length} of $\overline{\mu}_L(I)$ and denoted by $|I|$. Let $r(I)$ denote the maximum number of times that any index appears in $I$. Hence any index appears in $I$ at most $r(I)$ times. It is known that if $r(I)=1$, then $\overline{\mu}_L(I)$ is a {\em link-homotopy} invariant \cite{Milnor}, where {\em link-homotopy} is an equivalence relation on links generated by self crossing changes. While Milnor invariants are not strong enough to give a link-homotopy classification for links, they determine whether a link is link-homotopic to a trivial link or not. In fact, it is known that a link $L$ in $S^3$ is link-homotopic to a trivial link if and only if $\overline{\mu}_L(I)=0$ for any $I$ with $r(I)=1$ \cite{Milnor,HL}. Even if a link is link-homotopic to a trivial link, it is not necessarily true that a certain component of the link is null-homotopic in the complement of the other components. In this paper, we study homotopy of knots in the complement of a certain link. Although Milnor invariants $\overline{\mu}(I)$ with $r(I)\geq 2$ are not necessarily link-homotopy invariants, we have the following. The \lq only if' part holds in a more general setting; see Proposition~\ref{free-Ck-inv}. \begin{thm}\label{free-homotopy} Let $L=K_0\cup K_1\cup\cdots\cup K_n$ be an $(n+1)$-component link such that $L-K_0$ is a trivial link. Then $K_0$ is null-homotopic in $S^3\setminus(L-K_0)$ if and only if $\overline{\mu}_L(I0)=0$ for any multi-index $I$ with entries from $\{1,...,n\}$. \end{thm} \begin{rem}\label{remark1} (1)~In the theorem above the condition that $L-K_0$ is a trivial link is essential. Let $K$ be a non-trivial knot and $K'$ be the longitude of a tubular neighbourhood of $K$.
Then the link $L=K\cup K'$ is a {\em boundary link}, i.e., its components bound disjoint orientable surfaces. Hence all Milnor invariants of $L$ vanish. (Note that $L$ is link-homotopic to a trivial link.) On the other hand, since $K$ is a non-trivial knot, it follows from Dehn's lemma that $K'$ is not null-homotopic in $S^3\setminus K$ \cite[Chapter~4, B.2]{R}.\\ (2)~In \cite[Example~6.4]{Yasu2}, the last author gave a 3-component link $L=K_1\cup K_2\cup K_3$ such that $K_i$ is null-homotopic in $S^3\setminus(L-K_i)~(i=2,3)$ and $K_1$ is not null-homotopic in $S^3\setminus(L-K_1)$. \end{rem} A link is {\em Brunnian} if every proper sublink of it is trivial. In particular, trivial links are Brunnian. By Theorem~\ref{free-homotopy}, we have the following corollary, which characterizes the Brunnian links each of whose components is null-homotopic in the complement of the remaining components. \begin{cor}\label{brunnian-free-homotopy} For an $n$-component Brunnian link $L$, the $i$th component $K$ is null-homotopic in $S^3\setminus(L-K)$ if and only if $\overline{\mu}_L(Ii)=0$ for any multi-index $I$ with entries from $\{1,...,n\}\setminus\{i\}$. \end{cor} \begin{rem}\label{remark2} In the last section, we give a 3-component Brunnian link $L$ such that $L$ is link-homotopic to a trivial link and no component $K$ of $L$ is null-homotopic in $S^3\setminus(L-K)$ (Example~\ref{example1}). There are no such examples for 2-component links, since a knot in the complement of the trivial knot is null-homotopic if and only if it is null-homologous. Hence, for a 2-component Brunnian link, the following three conditions are mutually equivalent: (i)~It is link-homotopic to a trivial link. (ii)~The linking number vanishes. (iii)~Each component is null-homotopic in the complement of the other component. \end{rem} Let $L=K_0\cup K_1\cup\cdots\cup K_n$ be an $(n+1)$-component link.
If $L-K_0$ bounds a disjoint union $F$ of orientable surfaces $F_1,..., F_n$ with $\partial F_i=K_i~(i=1,...,n)$ and $F\cap K_0=\emptyset$, then by \cite[Section 6]{C}, $\overline{\mu}_{L}(I0)=0$ for any multi-index $I$ with entries from $\{1,...,n\}$. Combining this with Theorem~\ref{free-homotopy}, we obtain the following corollary. \begin{cor}\label{free-homotopy2} Let $L=K_0\cup K_1\cup\cdots\cup K_n$ be an $(n+1)$-component link such that $L-K_0$ is a trivial link. If $L-K_0$ bounds a disjoint union $F$ of orientable surfaces $F_1,..., F_n$ with $\partial F_i=K_i~(i=1,...,n)$ and $F\cap K_0=\emptyset$, then $K_0$ is null-homotopic in $S^3\setminus(L-K_0)$. \end{cor} \begin{rem}\label{remark3} J. Hillman has pointed out that Corollary~\ref{free-homotopy2} can be shown by using the universal covering space of $S^3\setminus(L-K_0)$ as follows: We may construct the maximal free cover of $S^3\setminus(L-K_0)$ by gluing infinitely many copies of $S^3$ cut along $F$; see for example \cite[Section 2.2]{Hill}. Note that the maximal free cover is the universal cover, since the link $\partial F=L-K_0$ is trivial. If $K_0\cap F=\emptyset$, then $K_0$ lifts to the universal cover, and hence is null-homotopic in $S^3\setminus(L-K_0)$. \end{rem} Two $n$-component links $L_0=K_{01}\cup\cdots\cup K_{0n}$ and $L_1=K_{11}\cup\cdots\cup K_{1n}$ are {\em concordant} if there are mutually disjoint annuli $A_1,..., A_n$ in $S^3\times[0,1]$ with $(\partial (S^3\times[0,1]), \partial A_j)= (S^3\times\{0\},K_{0j})\cup(-S^3\times\{1\},-K_{1j})$ $(j=1,...,n)$, where $-X$ denotes $X$ with the opposite orientation. A link is {\em slice} if it is concordant to a trivial link. Since Milnor invariants are concordance invariants \cite{Casson}, Theorem~\ref{free-homotopy} gives us the following corollary. \begin{cor}\label{brunnian-slice} For any Brunnian, slice link $L$, each component $K$ is null-homotopic in $S^3\setminus(L-K)$.
\end{cor} \begin{rem} Let $K$ be a non-trivial slice knot, and $K'$ the longitude of a tubular neighbourhood of $K$. Then the 2-component link $L=K\cup K'$ is a slice link. As we saw in Remark~\ref{remark1}~(1), neither component is null-homotopic in the complement of the other. Hence the Brunnian property in Corollary~\ref{brunnian-slice} is necessary. \end{rem} A {\em $\Delta$-move} \cite{MN,Mat} is a local move on links as illustrated in Figure~\ref{delta-move}. If the three strands in Figure~\ref{delta-move} belong to the same component of a link, we call it a {\em self $\Delta$-move} \cite{Shi}. Two links are said to be {\em $\Delta$-equivalent} (resp. {\em self $\Delta$-equivalent}) if one can be transformed into the other by a finite sequence of $\Delta$-moves (resp. self $\Delta$-moves). Note that self $\Delta$-equivalence implies link-homotopy, i.e., if two links are self $\Delta$-equivalent, then they are link-homotopic. For knots, self $\Delta$-equivalence is the same as $\Delta$-equivalence. It is known that a link $L$ in $S^3$ is self $\Delta$-equivalent to a trivial link if and only if $\overline{\mu}_L(I)=0$ for any $I$ with $r(I)\leq 2$ \cite[Corollary~1.5]{Yasu2}. Even if a link is self $\Delta$-equivalent to a trivial link, it is not necessarily true that a given component of the link is $\Delta$-equivalent to the trivial knot in the complement of the remaining components, where a knot is {\em trivial} in the complement of a link if it bounds a disk disjoint from the link. We study $\Delta$-equivalence of knots in the complement of a certain link. \begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.45\linewidth]{delta-move.eps} \caption{}\label{delta-move} \end{figure} The following theorem is comparable to Corollary~\ref{free-homotopy2}. \begin{thm}\label{free-delta} Let $L=K_0\cup K_1\cup\cdots\cup K_n$ be an $(n+1)$-component boundary link such that $L-K_0$ is a trivial link.
Then $K_0$ is $\Delta$-equivalent to the trivial knot in $S^3\setminus(L-K_0)$. In particular, for any Brunnian, boundary link, each component is $\Delta$-equivalent to the trivial knot in the complement of the remaining components. \end{thm} \begin{rem} (1)~As we saw in Remark~\ref{remark1}~(1), there is a 2-component boundary link such that neither component is null-homotopic in the complement of the other component. Since self $\Delta$-equivalence implies link-homotopy, neither component is $\Delta$-equivalent to the trivial knot in the complement of the other component. This implies that the condition in Theorem~\ref{free-delta} that $L-K_0$ is trivial is essential. We also note by \cite{SY, Yasu2} that $L$ is self $\Delta$-equivalent to a trivial link, since $L$ is a boundary link. \\ (2)~In the last section, we give a 3-component Brunnian link $L$ such that $L$ is self $\Delta$-equivalent to a trivial link and no component $K$ of $L$ is $\Delta$-equivalent to the trivial knot in $S^3\setminus(L-K)$ (Example~\ref{example2}). Since some Milnor invariants of $L$ are non-trivial, $L$ is not a boundary link. Hence the condition in Theorem~\ref{free-delta} that $L$ is a boundary link is necessary. \end{rem} For an $n$-component link $L=K_1\cup\cdots\cup K_n$, we denote by $W^i(L)$ the link obtained from $L$ by Whitehead doubling the $i$th component. In particular, $W^i(K_i)$ is the $i$th component of $W^i(L)$. Note that $L-K_i=W^i(L)-W^i(K_i)$. Then we have the following relation between homotopy of a knot and $\Delta$-equivalence of the Whitehead double of that knot in the complement of a trivial link. \begin{thm}\label{homotopy-delta} $($cf. \cite[Theorem~1.4]{MY}$)$ Let $L=K_0\cup K_1\cup\cdots\cup K_n$ be an $(n+1)$-component link such that $L-K_0$ is a trivial link. The component $K_0$ is null-homotopic in $S^3\setminus(L-K_0)$ if and only if $W^0(K_0)$ is $\Delta$-equivalent to the trivial knot in $S^3\setminus(L-K_0)$.
\end{thm} It is known that concordance implies link-homotopy \cite{Gif,Gol} but does not necessarily imply self $\Delta$-equivalence \cite[Claim~4.5]{NS-JKTR00}. We now consider an equivalence relation on links combining self $\Delta$-equivalence and concordance. Two links $L$ and $L'$ are {\em self-$\Delta$ concordant} if there is a sequence $L=L_1,...,L_m=L'$ of links such that $L_i$ and $L_{i+1}$ are either concordant or self $\Delta$-equivalent for each $i\in\{1,...,m-1\}$. Links up to self $\Delta$-equivalence and concordance have been studied in \cite{TS-MOIT07a} and \cite{Yasu}. A classification of {\em string links} up to self-$\Delta$ concordance was given by the last author \cite{Yasu}. In \cite{TS-KBJM07} and \cite{TS-MOIT07a}, the second author defined an equivalence relation, {\em $\Delta$-cobordism}. It is not hard to see that two links are $\Delta$-cobordant if and only if they are self-$\Delta$ concordant. We consider self-$\Delta$ concordance of a certain component of a link while fixing the rest of the components, i.e., self-$\Delta$ concordance of knots in the complement of a certain link. Two knots $K$ and $K'$ in the complement of a link $L$ are {\em self-$\Delta$ concordant $($or $\Delta$ concordant$)$ in $S^3\setminus L$} if there is a sequence $K=K_1,...,K_m=K'$ of knots such that $K_i$ and $K_{i+1}$ are either $\Delta$-equivalent or concordant in $S^3\setminus L$ for each $i\in\{1,...,m-1\}$, where $K_i$ and $K_{i+1}$ are concordant in $S^3\setminus L$ if there is an annulus $A$ in $(S^3\setminus L)\times[0,1]$ with $(\partial((S^3\setminus L)\times[0,1]),\partial A)=((S^3\setminus L) \times\{0\}, K_i)\cup(-(S^3\setminus L)\times\{1\},-K_{i+1})$. For knots in the complement of the trivial knot in $S^3$, we have the following. \begin{thm}\label{delta-concordance} Let $K$ and $K'$ be knots in the complement of the trivial knot $O$ in $S^3$.
If $\mathrm{lk}(K,O)=\mathrm{lk}(K',O)=\pm 1$, then $K$ and $K'$ are $\Delta$ concordant in $S^3\setminus O$. \end{thm} \begin{rem}\label{remark6} (1)~Let $K\cup O$ be the link illustrated in Figure~\ref{example2TSY}, where $O$ is the trivial knot and $K$ is a trefoil. Let $H=O'\cup O$ be the Hopf link with linking number one. Note that $\mathrm{lk}(K,O)=\mathrm{lk}(O',O)=1$. It follows from \cite[Proposition~2]{N-O} that $K\cup O$ is not self $\Delta$-equivalent to $H$. While $K$ is neither $\Delta$-equivalent nor concordant to $O'$ in $S^3\setminus O$, the theorem above implies that they are $\Delta$ concordant in $S^3\setminus O$. \\ (2)~Let $W=K\cup O$ be the Whitehead link. Then $\overline{\mu}_W(1122)\neq 0$. Since $\overline{\mu}(1122)$ is invariant under both self $\Delta$-equivalence \cite{FY} and concordance \cite{Casson}, $K$ is not $\Delta$ concordant to the trivial knot in $S^3\setminus O$. This implies that Theorem~\ref{delta-concordance} does not hold when $\mathrm{lk}(K,O)=\mathrm{lk}(K',O)=0$. Moreover, in Example~\ref{example4}, we show that for any $p~(|p|\geq 2)$, there are two links $K\cup O$ and $K'\cup O$ with $\mathrm{lk}(K,O)=\mathrm{lk}(K',O)=p$ such that $K\cup O$ and $K'\cup O$ are not self-$\Delta$ concordant. In particular, $K$ and $K'$ are not $\Delta$ concordant in $S^3\setminus O$. Hence the condition $\mathrm{lk}(K,O)=\mathrm{lk}(K',O)=\pm 1$ is essential. \end{rem} \begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.25\linewidth]{example2TSY.eps} \caption{}\label{example2TSY} \end{figure} Let $V_1 \cup\cdots\cup V_n$ be a regular neighborhood of a link $\Gamma=\gamma_1 \cup\cdots\cup \gamma_n$ in $S^3$. Let ${k}_i$ be a knot in an unknotted solid torus $\widetilde{V}_i \subset S^3$ such that ${k}_i$ is not contained in a $3$-ball in $\widetilde{V}_i$ $(i=1, \cdots, n)$. Let $l_i$ be the linking number of ${k}_i$ and a meridian of $\widetilde{V}_i$.
Let $\phi_i:\widetilde{V}_i \to V_i$ be a homeomorphism which maps a preferred longitude of $\widetilde{V}_i$ onto a preferred longitude of $V_i$. We call the image $L=K_1 \cup\cdots\cup K_n=\phi_1({k}_1) \cup\cdots\cup \phi_n({k}_n)$ a {\it componentwise satellite link of type $(\Gamma;l_1,...,l_n)$} and $\Gamma$ the {\it companion of} $L$. The link in Figure~\ref{example2TSY} is a componentwise satellite link of type $(H;1,1)$ for the Hopf link $H$ with linking number one. If $l_1=\cdots=l_n=1$, then by Theorem~\ref{delta-concordance}, each $k_i$ is $\Delta$ concordant to the core of $\widetilde{V}_i$ in $\widetilde{V}_i$. Hence we have the following. \begin{cor}\label{thm:core} Let $L$ be a componentwise satellite link of type $(\Gamma;1,...,1)$. Then $L$ is self-$\Delta$ concordant to its companion $\Gamma$. \end{cor} \begin{rem}\label{remark7} (1)~Let $L$ be an $n$-component link which is a componentwise satellite link of type $(\Gamma;l_1,...,l_n)$. Suppose that $\Gamma$ is self-$\Delta$ concordant to a trivial link $O$. It is not hard to see that if $\Gamma$ is concordant to a link $\Gamma'$, then $L$ is concordant to a link which is a componentwise satellite link of type $(\Gamma';l_1,...,l_n)$. This and \cite[Proposition~1]{S-Y} imply that $L$ is self-$\Delta$ concordant to a link $L'$ which is a componentwise satellite link of type $(O;l_1,...,l_n)$. Since each component of $L'$ is separated from the remaining components by a $2$-sphere, it is $\Delta$-equivalent to the trivial knot \cite{MN}. This implies that $L'$ is self $\Delta$-equivalent to $O$. Hence $L$ and $O$ are self-$\Delta$ concordant for any $l_1,...,l_n$.\\ (2)~Let $L$ be a $2$-component link which is a componentwise satellite link of type $(\Gamma;p,q)$. Then we have $\overline{\mu}_L(12)=pq\overline{\mu}_{\Gamma}(12)$ and $\overline{\mu}_L(1122)=p^2q^2\overline{\mu}_{\Gamma}(1122)$ \cite[Lemma 1]{S-Y}.
Here $\overline{\mu}(12)$ and $\overline{\mu}(1122)$ are Milnor invariants, which are known to be both concordance invariants \cite{Casson} and self $\Delta$-equivalence invariants \cite{FY}. Suppose that $\Gamma$ is not self-$\Delta$ concordant to a trivial link. Then by \cite[Corollary~1.5]{Yasu}, either $\overline{\mu}_{\Gamma}(12)$ or $\overline{\mu}_{\Gamma}(1122)$ is nontrivial. Hence if $L$ and $\Gamma$ are self-$\Delta$ concordant, then $|pq|=1$. \end{rem} Corollary~\ref{thm:core} implies the following. \begin{cor}\label{thm:linkcore} Let $L$ and $L'$ be componentwise satellite links of type $(\Gamma;\varepsilon_1,...,\varepsilon_n)$ and $(\Gamma';\varepsilon_1,...,\varepsilon_n)~(\varepsilon_i\in\{-1,1\})$, respectively. Then $L$ and $L'$ are self-$\Delta$ concordant if and only if their companions $\Gamma$ and $\Gamma'$ are self-$\Delta$ concordant. \end{cor} \begin{rem} (1)~Let $\Gamma$ be a $2$-component link which is not self-$\Delta$ concordant to a trivial link. Let $L$ and $L'$ be componentwise satellite links of type $(\Gamma;p,q)$ and $(\Gamma;p',q')$, respectively. By Remark~\ref{remark7} (2), if $L$ and $L'$ are self-$\Delta$ concordant, then $|pq|=|p'q'|$. \\ (2)~In Example~\ref{example4}, we show that for any $p~(|p|\geq 2)$, there are two links $L$ and $L'$ that are not self-$\Delta$ concordant, but are both componentwise satellite links of type $(H;1,p)$ for the Hopf link $H$. \end{rem} \section{Proof of Theorem~\ref{free-homotopy}} In order to prove Theorem~\ref{free-homotopy}, we need the following lemma, which is a direct corollary of \cite[Theorem~5.6]{MKS}. \begin{lem}\label{trivial} $($\cite[Theorem~5.6]{MKS}$)$ Let $F(r)=\langle x_1,...,x_r\rangle$ be the free group of rank $r$. An element $w\in F(r)$ is trivial if and only if the Magnus expansion $E(w)$ of $w$ is equal to $1$. \end{lem} Although the lemma above follows from \cite[Theorem~5.6]{MKS}, the proof is very short, and so we include it here for the reader's convenience.
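As a quick illustration of the lemma (this standard computation is not used in the sequel), consider the commutator $w=x_1x_2x_1^{-1}x_2^{-1}\in F(2)$. Recall that $E(x_i)=1+X_i$ and $E(x_i^{-1})=1-X_i+X_i^2-\cdots$. Multiplying out and collecting terms of degree at most two, we get
\[
E(w)=(1+X_1)(1+X_2)(1-X_1+X_1^2-\cdots)(1-X_2+X_2^2-\cdots)=1+X_1X_2-X_2X_1+(\text{terms of degree}\geq 3)\neq 1,
\]
so the Magnus expansion detects that $w$ is nontrivial in $F(2)$, even though $w$ is trivial in the abelianization.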
\begin{proof} The \lq only if' part is obvious. We show the \lq if' part. The proof is essentially the same as that of \cite[Theorem~5.6]{MKS}. Let $w=x_{i_1}^{p_1}\cdots x_{i_s}^{p_s}$ be a freely reduced word which represents a nontrivial element, where the $p_j$ are non-zero integers and $1\leq i_k\neq i_{k+1}\leq r$. It is not hard to see that for any $i$ and $p$ \[E(x_i^p)=1+p X_i + X_i^2f_i,\] where $f_i$ is a formal power series in $X_i$. This implies that \[E(w)=(1+p_1 X_{i_1}+X_{i_1}^2 f_{i_1}) \cdots (1+p_s X_{i_s}+X_{i_s}^2f_{i_s}).\] Since $i_k\neq i_{k+1}$, the monomial $X_{i_1}\cdots X_{i_s}$ arises only by taking the linear term from each factor, so its coefficient is $p_1\cdots p_s(\neq 0)$. Hence $E(w)\neq 1$. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{free-homotopy}] First we show the \lq only if' part. Suppose that $K_0$ is null-homotopic in $S^3\setminus(L-K_0)$. Let $L'$ be a link obtained from $L$ by taking a number of zero-framed parallels of $K_i~(i=1,...,n)$. Then $K_0$ is also null-homotopic in $S^3\setminus(L'- K_0)$. In particular, $L'$ is link-homotopic to a trivial link. Hence all Milnor link-homotopy invariants of $L'$ vanish. By \cite[Theorem~7]{Milnor2}, $\overline{\mu}_L(I0)=0$ for any multi-index $I$ with entries from $\{1,...,n\}$. Now we show the \lq if' part. Set $G(L)=\pi_1(S^3-L)$ and let $G_q(L)~(q\geq 1)$ be the $q$th lower central subgroup of $G(L)$. There is a natural homomorphism from $G(L)/G_q(L)$ to $G(L-K_0)/G_q(L-K_0)$ which maps the $i$th meridian $m_{i}~(i=1,...,n)$ of $L$ to the $i$th meridian $m'_{i}$ of $L-K_0$, and the $0$th meridian $m_{0}$ to the trivial element $1$. Let $l$ be the $0$th longitude of $L$. Then $l$ is written as a word $w_l(m_0,m_1,...,m_n)$ in $G(L)/G_q(L)$ and as a word $w_l(m'_1,...,m'_n)$ in $G(L-K_0)/G_q(L-K_0)$. We note that $w_l(1,m_1,...,m_n)$ is sent to $w_l(m'_1,...,m'_n)$ by the homomorphism above.
The Magnus expansion $E(w_l(1,m_1,...,m_n))$ can be obtained from the expansion \[E(w_l(m_0,m_1,...,m_n))=1+\sum\mu_{L}(h_1...h_s 0)X_{h_1}\cdots X_{h_s}\] by substituting $0$ for $X_0$. Hence, by the assumption that $\overline{\mu}_L(I0)=0$ for any multi-index $I$ with entries from $\{1,...,n\}$, we have \[E(w_l(1,m_1,...,m_n))=E(w_l(m'_1,...,m'_n))=1.\] Since $G(L-K_0)$ is a free group, by Lemma~\ref{trivial}, $l$ is trivial in $G(L-K_0)$. \end{proof} \section{Proof of Theorem~\ref{free-delta}} Let $L=K_1\cup\cdots\cup K_n$ be an $n$-component link in a $3$-manifold $M$ and $B\subset M$ a band attached to a single component $K_i$ with coherent orientation, i.e., $B\cap L=K_i\cap B\subset \partial B$ consists of two arcs whose orientations induced from $K_i$ are opposite to those induced from $\partial B$. Then $L'=(L\cup\partial B)-\mathrm{int}(B\cap K_i)$, which is an $(n+1)$-component link, is said to be obtained from $L$ by {\em fission} (along the band $B$) in $M$, and conversely $L$ is said to be obtained from $L'$ by {\em fusion} (along the band $B$) in $M$ \cite{KSS}. The following lemma is shown in \cite{Yasu}. \begin{lem}\label{reorder} $($\cite[Lemma~3.5]{Yasu}$)$ Let $L_1, L_2, L_3$ be links such that $L_2$ is obtained from $L_1$ by a single fission, and $L_3$ is obtained from $L_2$ by a single self $\Delta$-move. Then there is a link $L_2'$ such that $L_2'$ is obtained from $L_1$ by a single self $\Delta$-move, and $L_3$ is obtained from $L'_2$ by a single fission. Here we call a $\Delta$-move a self $\Delta$-move if the three strands belong to a link obtained from a single component by fission. \end{lem} The proof of the following lemma is an easy modification of the proof of \cite[Theorem]{Shi} (or \cite[Theorem~2]{NSY}). \begin{lem}\label{ribbon} Let $L=K_0\cup K_1\cup\cdots\cup K_n$ be an $(n+1)$-component link.
If $K_0$ bounds a ribbon disk $($a singular disk with only ribbon singularities$)$ in $S^3\setminus(L-K_0)$, then $K_0$ is $\Delta$-equivalent to the trivial knot in $S^3\setminus(L-K_0)$. \end{lem} Now we are ready to prove Theorem~\ref{free-delta}. The proof is given by combining Corollary~\ref{free-homotopy2} and Lemmas~\ref{reorder} and \ref{ribbon}. \begin{proof}[Proof of Theorem~\ref{free-delta}] Let $F_0\cup F_1\cup\cdots\cup F_n$ be a disjoint union of orientable surfaces with $\partial F_i=K_i~(i=0,1,...,n)$ and $F_i\cap F_j=\emptyset~(i\neq j)$. Let $G$ be a bouquet graph which is a spine of $F_0$, i.e., $G$ consists of $2g$ loops $C_1,...,C_{2g}$ and a point $P$ with $C_i\cap C_j=P~(i\neq j)$, and $G$ is a deformation retract of $F_0$, where $g$ is the genus of $F_0$. We may assume that $F_0$ consists of a disk $D$ and bands $b_1,...,b_{2g}$ so that $D$ contains $P$ and $b_i\cup D$ is an annulus with core $C_i$ for each $i$. By Corollary~\ref{free-homotopy2}, each $C_i$ is homotopic to $P$ in $S^3\setminus(L-K_0)$. Hence $G$ is homotopic to $P$ in $S^3\setminus(L-K_0)$ with $P$ fixed. This implies that $F_0$ can be transformed into a surface $F'_0$ contained in a $3$-ball $B^3\subset S^3\setminus(L-K_0)$ by {\em band-pass moves} between $b_i$ and $b_j~(1\leq i\leq j\leq 2g)$ as illustrated in Figure~\ref{band-pass}. Therefore $\partial F_0=K_0$ can be transformed into an {\em algebraically split} link $L_0$ in $B^3$ by a finite sequence of fissions as illustrated in Figure~\ref{fission}, where a link is algebraically split if the linking numbers of all its $2$-component sublinks vanish. Hence $L_0$ is $\Delta$-equivalent to a trivial link in $B^3$ \cite{MN}. It follows from Lemma~\ref{reorder} that there is a knot $K'_0$ such that $K'_0$ is $\Delta$-equivalent to $K_0$ in $S^3\setminus(L-K_0)$ and can be transformed into a trivial link by a finite sequence of fissions in $S^3\setminus(L-K_0)$.
We note that $K'_0$ is a ribbon knot and $K'_0$ bounds a ribbon disk in $S^3\setminus(L-K_0)$. This and Lemma~\ref{ribbon} imply that $K'_0$ is $\Delta$-equivalent to the trivial knot in $S^3\setminus(L-K_0)$. This completes the proof. \end{proof} \vspace{-7mm} \begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.4\linewidth]{band-pass.eps} \caption{Band-pass moves between $b_i$ and $b_j$}\label{band-pass} \end{figure} \vspace{-3mm} \begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.4\linewidth]{fission.eps} \caption{}\label{fission} \end{figure} \section{Proof of Theorem~\ref{homotopy-delta}} Habiro \cite{H} and Goussarov \cite{G} independently introduced the notion of a $C_k$-move. A $C_k$-move is a local move on links as illustrated in Figure \ref{Ck-move}, which can be regarded as a kind of `higher order crossing change'. In particular, a $C_1$-move is a crossing change and a $C_2$-move is a $\Delta$-move. We call a $C_k$-move a {\em self $C_k$-move} if all the strands belong to the same component of a link. The (self) $C_k$-move generates an equivalence relation on links, called \emph{$($self$)$ $C_k$-equivalence}, which becomes finer as $k$ increases. This notion can also be defined by using the theory of claspers \cite{H}. \begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.75\linewidth]{Ck-move.eps} \caption{A $C_k$-move involves $k+1$ strands of a link, labelled here with the integers from $0$ to $k$.}\label{Ck-move} \end{figure} The first and the last authors \cite{FY} showed that any Milnor invariant $\overline{\mu}(I)$ with $r(I)\leq k$ is a self $C_k$-equivalence invariant. The proof of \cite[Theorem~1.1]{FY} implies the following proposition. Note that this proposition is a generalization of the \lq only if' part of Theorem~\ref{free-homotopy}. \begin{prop}\label{free-Ck-inv} Let $L$ be an $n$-component link. 
If the $i$th component $K$ is $C_k$-equivalent to the trivial knot in $S^3\setminus(L-K)$, then $\overline{\mu}_L(I)=0$ for any multi-index $I$ with entries from $\{1,...,n\}$ such that the index $i$ appears in $I$ at least once and at most $k$ times. \end{prop} The \lq only if' part of Theorem~\ref{homotopy-delta} holds in a more general setting as follows. Let $W^{i}(L)$ be the link obtained from $L$ by Whitehead doubling the $i$th component of $L$. \begin{prop}\label{whitehead} Let $L=K_0\cup K_1\cup\cdots\cup K_n$ be an $(n+1)$-component link. If $K_0$ is null-homotopic in $S^3\setminus(L-K_0)$, then $W^0(K_0)$ is $\Delta$-equivalent to the trivial knot in $S^3\setminus(L-K_0)$. \end{prop} \begin{proof} Let $K_0'$ be a knot obtained from $K_0$ by a single crossing change in $S^3\setminus(L-K_0)$. Then $W^0(K'_0)$ is obtained from $W^0(K_0)$ by a local move as illustrated in Figure~\ref{delta-pass}, which is realized by a $\Delta$-move (see for example \cite{TY}) in $S^3\setminus(L-K_0)$. It follows that $W^0(K_0)$ is $\Delta$-equivalent to the Whitehead double of a trivial knot, which is itself trivial, in $S^3\setminus(L-K_0)$. This completes the proof. \end{proof} \begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.4\linewidth]{delta-pass.eps} \caption{}\label{delta-pass} \end{figure} \begin{proof}[Proof of Theorem~\ref{homotopy-delta}] The \lq only if' part follows from Proposition~\ref{whitehead}. We show the \lq if' part. Suppose that $K_0$ is not null-homotopic in $S^3\setminus(L-K_0)$. Then, by Theorem~\ref{free-homotopy}, there is a multi-index $I$ with entries from $\{1,...,n\}$ such that $\overline{\mu}_L(I0)\neq 0$. This and \cite[Theorem~1.1]{MY} imply that $\overline{\mu}_{W^0(L)}(II00)\neq 0$. Proposition~\ref{free-Ck-inv} completes the proof. \end{proof} \section{Proof of Theorem~\ref{delta-concordance}} Theorem~\ref{delta-concordance} follows from the proposition below.
\begin{prop}\label{prp:fission} Let $K$ be a knot in a solid torus $V \subset S^3$ with a meridian disk $M$ such that $K$ intersects $M$ transversely. Assume that $\mathrm{lk}(\partial M,K)=p \neq 0$ and that $|M\cap K|=|p|+2q~(q>0)$. Then by performing $(|p|+q)$ fissions in $V$, $K$ can be transformed into a link $L_1 \cup L_2$ satisfying the following: $L_1$ is $p$ zero-framed parallels of the core $c$ of $V$, and $L_2$ is an algebraically split link with $(q+1)$ components in a $3$-ball in $V-L_1$. The curves in $L_1$ are oriented consistently with $c$ if $p$ is positive, and oppositely if $p$ is negative. \end{prop} In order to prove Proposition~\ref{prp:fission}, we need the following lemma. \begin{lem}\label{lem:n-fission} Let $K$ and $M$ be as in Proposition~\ref{prp:fission}. There is a sequence of $q$ fissions that transforms $K$ into an algebraically split link $K' \cup L'$ such that $K'$ is a knot with $|\mathrm{lk}(\partial M,K')|=|M\cap K'|=|p|$ and $L'$ is a $q$-component link in $V-M$. \end{lem} \begin{proof} First, we inductively transform $K$ into a link $K^q \cup L^q$, which is not necessarily algebraically split, such that $L^q$ is contained in $V-M$ and $|\mathrm{lk}(\partial M, K^q)|=|M\cap K^q|=|p|.$ [{\bf 1st Step}] Choose two points $a_1$ and $b_1$ in $M\cap K$ so that \\ (1)~$\text{sign} (a_1)=1$, $\text{sign} (b_1)=-1$ and \\ (2)~there is a subarc $\alpha_1$ in $K$ with $M\cap \alpha_1=\partial \alpha_1=\{a_1,b_1\}$ such that the orientation from $a_1$ to $b_1$ along $\alpha_1$ is the same as that of $K$. \\ Let $\gamma_1$ be an arc in $M$ with $\gamma_1\cap K=\partial \gamma_1=\{a_1,b_1\}$, and let $N(\gamma_1)$ be a fission band of $K$ which is an $I$-bundle over $\gamma_1$ with $N(\gamma_1)\cap M=\gamma_1$. By fission along $N(\gamma_1)$, we obtain a new link $K^1\cup K^{(1)}$ from $K$, where $K^1\cap\alpha_1=\emptyset$. Note that $M\cap (K^1\cup K^{(1)})=M\cap K^1$; see Figure~\ref{fission2}.
\begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.6\linewidth]{sty-fig.eps} \caption{}\label{fission2} \end{figure} [{\bf 2nd Step}] Choose two points $a_2$ and $b_2$ in $M\cap K^1$ so that \\ (1)~$\text{sign} (a_2)=1$, $\text{sign} (b_2)=-1$ and \\ (2)~there is a subarc $\alpha_2$ in $K^1$ with $M\cap \alpha_2=\partial \alpha_2=\{a_2,b_2\}$ such that the orientation from $a_2$ to $b_2$ along $\alpha_2$ is the same as that of $K^1$. \\ Let $\gamma_2$ be an arc in $M$ with $\gamma_2\cap K^1=\partial \gamma_2=\{a_2,b_2\}$, and let $N(\gamma_2)$ be a fission band of $K^1$ which is an $I$-bundle over $\gamma_2$ with $N(\gamma_2)\cap M=\gamma_2$. By fission along $N(\gamma_2)$, we obtain a new link $K^2\cup K^{(1)}\cup K^{(2)}$ from $K^1\cup K^{(1)}$, where $K^2\cap\alpha_2=\emptyset$. Repeating this process up to the $q$th step, we obtain $K^q \cup L^q=K^q \cup (K^{(1)} \cup\cdots\cup K^{(q)})$ with $M\cdot(K^q\cup L^q)=M\cdot K^q=\mathrm{lk}(\partial M,K^q) =\mathrm{lk} (\partial M,K)$. By construction, $L^q$ is a $q$-component link in $V-M$. Now we show that we can choose $\gamma_1,...,\gamma_q$ so that $K^q\cup L^q$ is an algebraically split link. Set $K^q=K^{(q+1)}$ and $l_{i,j}=|\mathrm{lk}(K^{(i)},K^{(j)})|~(1\leq i<j\leq q+1)$. Then we have a vector \[(l_{1,2},l_{1,3},...,l_{1,q+1}, l_{2,3},l_{2,4},...,l_{2,q+1},...,l_{q-1,q},l_{q-1,q+1},l_{q,q+1}).\] This vector depends on the choice of $\gamma_1,...,\gamma_q$, and we denote it by $v(\gamma_1,...,\gamma_q)$. We choose the arcs $\gamma_1,...,\gamma_q$ so that $v(\gamma_1,...,\gamma_q)$ is minimal with respect to the lexicographic order. If $v(\gamma_1,...,\gamma_q)$ is a non-zero vector, then $l_{i,j}\neq 0$ for some $1\leq i<j\leq q+1$. Case 1: When $i\neq q$ and $\mathrm{lk}(K^{(i)},K^{(j)})>0$ (resp. $<0$), we choose a disk $D_j$ which is a regular neighborhood of $a_j$ in $M$ with $\mathrm{lk}(\partial D_j,K)=1$ (resp. $=-1$).
Let $B$ be a band attached to both $\partial D_j$ and $\gamma_i$ with coherent orientation; see Figure~\ref{band}. \begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.7\linewidth]{sty-fig2.eps} \caption{}\label{band} \end{figure} We may assume that $(D_j\cup B)\cap K=D_j\cap K=a_j$. Let $\gamma_i'=\gamma_i\cup\partial (B\cup D_j)-\mathrm{int}(\gamma_i\cap B)$ be the arc obtained from $\gamma_i\cup\partial D_j$ by fission along $B$. For $\gamma_1,...,\gamma_{i-1},\gamma'_i,\gamma_{i+1},...,\gamma_q$, we have a new vector $v(\gamma_1,...,\gamma_{i-1},\gamma'_i,\gamma_{i+1},...,\gamma_q)= (l'_{1,2},...,l'_{q,q+1})$. By the construction of $\gamma'_i$, we note that $l'_{i,j}=l_{i,j}-1$ and that if $l'_{s,t}\neq l_{s,t}$, then $s\geq i$ and $t\geq j$. This contradicts the minimality of the choice of $\gamma_1,...,\gamma_q$. Case 2: When $i=q$. Let $a_{q+1}$ be a point in $K^{q}\cap M$. Then by an argument similar to that in Case 1, we again have a contradiction. \end{proof} \begin{proof}[Proof of Proposition \ref{prp:fission}] Let $K' \cup L'$ be a link as in Lemma \ref{lem:n-fission}. Push the $3$-ball $V-\mathrm{int}\, N(M)$ into the interior of $V$ and denote the result by $B^3$. Then $K'\cap (\overline{V-B^3})$ consists of $|p|$ arcs $\{c_1,...,c_{|p|}\} \times [0,1]$, where $\{c_1,...,c_{|p|}\}=K'\cap M$. Then we can take $|p|$ bands in $V-B^3$ so that fissions along these bands transform $K' \cup L'$ into the union of the $p$ zero-framed parallels $L_1$ of the core of $V$ and a link $L_2$ with $(q+1)$ components in $B^3$. Since $L_2$ is an algebraically split link, $L_1 \cup L_2$ is the required link of the proposition. \end{proof} \begin{proof}[Proof of Theorem \ref{delta-concordance}] Let $K$ and $K'$ be knots in a solid torus $V \subset S^3$ which is the complement of the trivial knot $O$, with $\mathrm{lk}(\partial M,K)=\mathrm{lk}(\partial M,K')=1$, where $M$ is a meridian disk of $V$ with $\partial M=O$.
Suppose that $K$ intersects $M$ transversely and $|M\cap K|=1+2q$. By Proposition \ref{prp:fission}, there are $(1+q)$ fissions in $V$ which transform $K$ into $L_1 \cup L_2$ such that $L_1$ is the core of $V$ and $L_2$ is an algebraically split link with $(q+1)$ components in a $3$-ball $B^3$ in $V-L_1$. Since an algebraically split link is $\Delta$-equivalent to a trivial link \cite{MN}, $L_2$ is $\Delta$-equivalent to a trivial link in $B^3$. This implies that $K$ can be transformed into the link $L_1 \cup L_2$ by a finite number of fissions, and $L_1 \cup L_2$ into a split sum of $L_1$ and a trivial link by self $\Delta$-moves. (Recall that a self $\Delta$-move means a $\Delta$-move whose three strands belong to a link obtained from a single component by fissions.) By Lemma~\ref{reorder}, there is a knot $K''$ such that $K$ is self $\Delta$-equivalent to $K''$ and $K''$ is concordant to $L_1$. By a similar argument, $K'$ is $\Delta$ concordant to $L_1$, and hence $\Delta$ concordant to $K$. \end{proof} \section{Examples} \begin{example}\label{example1} Let $L=K_1\cup K_2\cup K_3$ be the closure of the 3-string link illustrated in Figure~\ref{example-fig1}, which is represented as a trivial string link with claspers. Roughly speaking, each clasper can be replaced with a tangle as illustrated in Figure~\ref{milnor-tangle}; for a precise definition, see \cite{H}. Note that $L$ is a Brunnian link. By using the calculation method described in \cite[Remark~5.3]{Yasu2}, we have $\overline{\mu}_L(I)=0$ for any $I$ with $|I|\leq 3$, and $|\overline{\mu}_L(3213)|=|\overline{\mu}_L(1231)|=1$. In particular, $\overline{\mu}_L(I)=0$ for any $I$ with $r(I)=1$, hence $L$ is link-homotopic to a trivial link. Since $\overline{\mu}$ has \lq cyclic symmetry' \cite[Theorem~8]{Milnor2}, $|\overline{\mu}_L(3321)|=|\overline{\mu}_L(1332)|=|\overline{\mu}_L(1123)|=1$.
It follows from Corollary~\ref{brunnian-free-homotopy} that no component $K_i$ is null-homotopic in $S^3\setminus(L-K_i)~(i=1,2,3)$. \end{example}
\begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.23\linewidth]{example1.eps} \caption{}\label{example-fig1} \end{figure}
\begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.7\linewidth]{milnor-tangle.eps} \caption{}\label{milnor-tangle} \end{figure}
\begin{example}\label{example2} Let $L=K_1\cup K_2$ be the closure of the $2$-string link illustrated in Figure~\ref{example-fig2}. Note that $L$ is a Brunnian link. Then, by using the calculation method described in \cite[Remark~5.3]{Yasu2}, we have $\overline{\mu}_L(I)=0$ for any $I$ with $|I|\leq 5$, and $|\overline{\mu}_L(222211)|=|\overline{\mu}_L(111122)|= 2$. It follows from \cite[Corollary~1.5]{Yasu2} and Proposition~\ref{free-Ck-inv} that $L$ is self $\Delta$-equivalent to a trivial link and that no component $K_i$ is $\Delta$-equivalent to a trivial knot in $S^3\setminus(L-K_i)~(i=1,2)$. In contrast, we notice by Remark~\ref{remark2} that each component $K_i$ is null-homotopic in $S^3\setminus(L-K_i)$. \begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.4\linewidth]{example2.eps} \caption{}\label{example-fig2} \end{figure} \end{example}
For any $k\geq 2$, there are knots that are $C_k$-equivalent to the trivial knot and not $C_{k+1}$-equivalent to the trivial knot \cite{O}. Let $L$ be a link which is a split sum of such knots. Then each component $K$ of $L$ is $C_k$-equivalent to the trivial link in $S^3\setminus (L-K)$ and is not $C_{k+1}$-equivalent to the trivial link in $S^3\setminus (L-K)$. Such split examples are, however, not very interesting. We therefore show that for each $k\geq 2$, there is a Brunnian 2-component link $L$ such that each component $K$ of $L$ is $C_{k-1}$-equivalent to the trivial knot and is not $C_{k}$-equivalent to the trivial knot in $S^3\setminus (L-K)$.
\begin{example}\label{example3} Let $L_k~(k\geq 2)$ be the 2-component link as illustrated in Figure~\ref{example-fig3}. Then each component of $L_k$ is not $C_{k}$-equivalent to the trivial knot in the complement of the other component, but is $C_{k-1}$-equivalent to the trivial knot in the complement of the other component. \end{example} \begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.45\linewidth]{example3.eps} \caption{}\label{example-fig3} \end{figure} \begin{rem} In the proof of Example~\ref{example3}, we show that $\overline{\mu}_{L_k}([p,q])=0$ for any $p,q~(p+q\leq 2k,~p\neq q)$ and $\overline{\mu}_{L_k}([k,k])=-2$, where $\overline{\mu}([p,q])$ denotes $\overline{\mu}(11...122...2)$ with $1$ appearing $p$ times and $2$ appearing $q$ times. \end{rem} \begin{proof} First we compute the Conway polynomial $\nabla_{L_k}(z)$ mod $z^{2k}$. By changing/splicing the two crossings $c_1$ and $c_2$ in Figure~\ref{example-fig3}, we have \[\nabla_{L_k}=\nabla_H(z)-z\nabla_{K_k}-z^2\nabla_{L'_k},\] where $H$ is the Hopf link with $\nabla_H(z)=z$, $K_k$ is the knot as illustrated in Figure~\ref{example-fig3-1} and $L'_k$ is the link as illustrated in Figure~\ref{example-fig3-2}. \begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.45\linewidth]{example3-1.eps} \caption{}\label{example-fig3-1} \end{figure} \begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.45\linewidth]{example3-2.eps} \caption{}\label{example-fig3-2} \end{figure} Note that $L'_k$ is $C_{2k-2}$-equivalent to a trivial link. Since the finite type invariants of order $\leq m-1$ are invariants for $C_m$-equivalence \cite{H}, and since the $z^{m-1}$-coefficient $a_{m-1}$ of the Conway polynomial is a finite type invariant of order $\leq m-1$ \cite{BNv}, we have $\nabla_{L'_k}(z)\equiv 0$ mod $z^{2k-2}$. Hence we have $\nabla_{L_k}(z)\equiv z-z\nabla_{K_k}$ mod $z^{2k}$. 
Moreover, since $L_k$ is $C_{2k-1}$-equivalent to a trivial link, $\nabla_{L_k}(z)\equiv 0$ mod $z^{2k-1}$. This implies that $\nabla_{L_k}(z)\equiv -a_{2k-2}(K_k)z^{2k-1}$ mod $z^{2k}$. Therefore, it is enough to compute $\nabla_{K_k}(z)$. We compute the Alexander-Conway polynomial in order to have $\nabla_{K_k}(z)$. For a Seifert surface $F$ of $K_k$ and a basis $x_1,...,x_{2k-2},y_1,...,y_{2k-3},z$ of $H_1(F;{\Bbb Z})$ as illustrated in Figure~\ref{example-fig4-1}, we have the following Seifert matrix with respect to the basis \[M(K_k)= \left( \begin{array}{c||c|c} O_{(2k-2)\times(2k-2)} & A_{(2k-2)\times(2k-3)} & \begin{array}{c} 1\\ 0\\ \vdots\\ 0\\ -1 \end{array}\\ \hline\hline B_{(2k-3)\times(2k-2)} & \begin{array}{cccc} 1&0&\cdots&0\\ 0&0&\cdots&0\\ \vdots&\vdots&&\vdots\\ 0&0&\cdots&0 \end{array} & \begin{array}{c} 0\\ \vdots\\ \vdots\\ 0\\ \end{array}\\ \hline \begin{array}{cccc} 0&\cdots&0&-1 \end{array} & \begin{array}{cccc} 0&0&\cdots&0 \end{array} & 0 \end{array} \right),\] where $O_{(2k-2)\times(2k-2)}$ is the $(2k-2)\times (2k-2)$ zero matrix, $A_{(2k-2)\times(2k-3)}=(a_{ij})$ is a $(2k-2)\times (2k-3)$ matrix with \[a_{ij}= \mathrm{lk}(x_i^+,y_j)= \left\{ \begin{array}{ll} 1 & \text{if $i=j$,} \\ -1 & \text{if $i\geq 3$ is odd and $j=i-1$,}\\ 0 & \text{otherwise,} \end{array} \right. \] and $B_{(2k-3)\times(2k-2)}=(b_{ij})$ is a $(2k-3)\times (2k-2)$ matrix with \[b_{ij}= \mathrm{lk}(y_i^+,x_j)= \left\{ \begin{array}{ll} 1 & \text{if $i=j$,}\\ -1 & \text{if $i$ is odd and $j=i+1$,}\\ 0 & \text{otherwise.} \end{array} \right. 
\] \begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.8\linewidth]{example4-1.eps} \caption{}\label{example-fig4-1} \end{figure} For example, when $k=4$, then \[A_{6\times 5}= \left( \begin{array} {ccccc} 1 & 0 & 0 & 0 &0 \\ 0 & 1 & 0 & 0 &0 \\ 0 & -1 & 1 & 0 &0 \\ 0 & 0 & 0 & 1 &0 \\ 0 & 0 & 0 & -1&1 \\ 0 & 0 & 0 & 0 &0 \end{array} \right),~\text{and}~ B_{5\times 6}= \left( \begin{array} {cccccc} 1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1& 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & -1 \end{array} \right).\] Then, the Conway polynomial $\nabla_{K_4}(\sqrt{t}^{-1}-\sqrt{t})= |\sqrt{t}^{-1}M(K_4)-\sqrt{t}(M(K_4))^T|$ is the product of \[\left| \begin{array} {ccccc|c} \sqrt{t}^{-1}-\sqrt{t} & 0 & 0 & 0 &0 &\sqrt{t}^{-1}\\ \sqrt{t} & \sqrt{t}^{-1}-\sqrt{t} & 0 & 0 &0 &0\\ 0 & -\sqrt{t}^{-1} & \sqrt{t}^{-1}-\sqrt{t} & 0 &0&0 \\ 0 & 0 & \sqrt{t} & \sqrt{t}^{-1}-\sqrt{t} &0&0 \\ 0 & 0 & 0 & -\sqrt{t}^{-1}&\sqrt{t}^{-1}-\sqrt{t} &0\\ 0 & 0 & 0 & 0 &\sqrt{t} &\sqrt{t}-\sqrt{t}^{-1} \end{array} \right|\] and \[\left| \begin{array} {cccccc} \sqrt{t}^{-1}-\sqrt{t} & -\sqrt{t}^{-1} & 0 & 0 & 0 & 0 \\ 0 & \sqrt{t}^{-1}-\sqrt{t} & \sqrt{t} & 0 & 0 & 0 \\ 0 & 0 & \sqrt{t}^{-1}-\sqrt{t} & -\sqrt{t}^{-1}& 0 & 0 \\ 0 & 0 & 0 & \sqrt{t}^{-1}-\sqrt{t} & \sqrt{t} & 0 \\ 0 & 0 & 0 & 0 & \sqrt{t}^{-1}-\sqrt{t} & -\sqrt{t}^{-1}\\ \hline -\sqrt{t} & 0 & 0 & 0 & 0 & \sqrt{t}-\sqrt{t}^{-1} \end{array} \right|. \] Hence we have \[\begin{array}{rcl} \nabla_{K_4}(\sqrt{t}^{-1}-\sqrt{t}) &=& ((-1)^{3}-(\sqrt{t}^{-1}-\sqrt{t})^{6}) ((-1)^{5}-(\sqrt{t}^{-1}-\sqrt{t})^{6})\\ &=& 1+ 2(\sqrt{t}^{-1}-\sqrt{t})^{6}+(\sqrt{t}^{-1}-\sqrt{t})^{12}. \end{array}\] In general, \[\begin{array}{rcl} \nabla_{K_k}(\sqrt{t}^{-1}-\sqrt{t}) &=& ((-1)^{k-1}-(\sqrt{t}^{-1}-\sqrt{t})^{2k-2}) ((-1)^{k+1}-(\sqrt{t}^{-1}-\sqrt{t})^{2k-2})\\ &=& 1+(-1)^k 2(\sqrt{t}^{-1}-\sqrt{t})^{2k-2}+ (\sqrt{t}^{-1}-\sqrt{t})^{4k-4}. 
\end{array}\]
This implies
\[\nabla_{L_k}(z)\equiv - (-1)^k 2z^{2k-1}~\text{mod}~ z^{2k}.\]
On the other hand, we note that $L_k$ is obtained from the trivial knot by surgery along a $C_{2k-1}$-tree $T$ such that the number of leaves that intersect the $i$-th component is equal to $k$ for each $i~(i=1,2)$ (see Figure~\ref{milnor-tangle}). It follows from the proof of \cite[Lemma~1.2]{FY} that each component of $L_k$ is $C_{k-1}$-equivalent to the trivial knot in the complement of the other component. Hence by Proposition~\ref{free-Ck-inv}, $\overline{\mu}_{L_k}(I)=0$ for any multi-index $I$ with entries from $\{1,2\}$ such that either the index $1$ or $2$ appears in $I$ at most $k-1$ times. By \cite[Theorem~4.1]{Murasugi} (or \cite[Theorem~4.1]{Cochran}), we have \[(-1)^{k-1}\overline{\mu}_{L_k}([k,k])= \sum_{p+q=2k}(-1)^{q-1}\overline{\mu}_{L_k}([p,q])= -a_{2k-1}(L_k)=(-1)^k 2,\] and hence $\overline{\mu}_{L_k}([k,k])=-2$. Proposition~\ref{free-Ck-inv} now implies that no component of $L_k$ is $C_{k}$-equivalent to the trivial knot in the complement of the other component. \end{proof}

We finish this section by presenting infinitely many pairs $L_{p}^{+} \cup L_{p}^{-}$ of componentwise satellite links of type $(\Gamma;1,p)$ $(|p|\geq 2)$ such that $L_p^{+}$ is not self-$\Delta$ concordant to $L_p^{-}$.

\begin{example} \label{example4} Let $L_{p}^+$ (resp. $L_{p}^-$) be the link with linking number $p$ as illustrated in the left of Figure~\ref{fig:ctr-expl2a}, with $T_p^+$ (resp. $T_p^-$) representing the braid $\sigma_1\sigma_2\cdots\sigma_{|p|-1}$ (resp. $\sigma_1^{-1}\sigma_2\cdots\sigma_{|p|-1}$) if $p>0$, and $\sigma_{|p|-1}\cdots\sigma_2\sigma_1$ (resp. $\sigma_{|p|-1}\cdots\sigma_2\sigma_1^{-1}$) if $p<0$. Note that both $L_p^+$ and $L_p^-$ are componentwise satellite links of type $(H;1,p)$ for the Hopf link $H$. We claim that $L_{p}^+$ and $L_{p}^-$ are not self-$\Delta$ concordant.
\end{example} \begin{figure}[!h] \includegraphics[trim=0mm 0mm 0mm 0mm, width=.8\linewidth]{ctr-expl2a.eps} \caption{}\label{fig:ctr-expl2a} \end{figure} \begin{proof} Set $\varepsilon=p/|p|$. Let $L_{p}^0$ be the link obtained from $L_{p}^+$ by smoothing the crossing which corresponds to $\sigma_1$. Then by the definition of the Conway polynomial, we have \[a_4(L_p^+)-a_4(L_p^-)=a_3(L_p^0),\] where $a_k$ is the coefficient of $z^k$ in the Conway polynomial. By \cite{JH-PAMS85}, we have $a_3(L_p^0)=p-\varepsilon$. For a $2$-component link $L=K_1\cup K_2$, it is known that $a_4(L)\equiv \overline{\mu}_L(1122)$ mod $\overline{\mu}_L(12)$ \cite{Murasugi}, \cite{Cochran}, and $\overline{\mu}_L(12)=\mathrm{lk}(K_1,K_2)=p$. Hence we have \[\overline{\mu}_{L_p^+}(1122)-\overline{\mu}_{L_p^-}(1122)\equiv a_4(L_p^+)-a_4(L_p^-)= a_3(L_p^0)=p-\varepsilon\equiv -\varepsilon~~\text{mod}~p.\] Since $\overline{\mu}(1122)$ is a self-$\Delta$ concordance invariant \cite{FY}, we have the conclusion. \end{proof} \begin{acknowledgments} The authors would like to thank Professor Jonathan Hillman for pointing out Remark~\ref{remark3}. \end{acknowledgments}
https://arxiv.org/abs/1404.3391
Adjacency labeling schemes and induced-universal graphs
We describe a way of assigning labels to the vertices of any undirected graph on up to $n$ vertices, each composed of $n/2+O(1)$ bits, such that given the labels of two vertices, and no other information regarding the graph, it is possible to decide whether or not the vertices are adjacent in the graph. This is optimal, up to an additive constant, and constitutes the first improvement in almost 50 years of an $n/2+O(\log n)$ bound of Moon. As a consequence, we obtain an induced-universal graph for $n$-vertex graphs containing only $O(2^{n/2})$ vertices, which is optimal up to a multiplicative constant, solving an open problem of Vizing from 1968. We obtain similar tight results for directed graphs, tournaments and bipartite graphs.
\section{Introduction} An \emph{adjacency labeling scheme} for a given family of graphs is a way of assigning \emph{labels} to the vertices of each graph from the family such that given the labels of two vertices in the graph, and no other information, it is possible to determine whether or not the vertices are adjacent in the graph. The labels are assumed to be composed of bits and are required to be of the same length. The goal is, of course, to make the labels as short as possible. An adjacency labeling scheme can be used to store a graph \emph{implicitly} in a \emph{distributed} manner. Adjacency labeling schemes first appear in Breuer \cite{Breuer66}, Breuer and Folkman \cite{BF67}, M\"{u}ller \cite{muller}, and Kannan, Naor and Rudich~\cite{KNR92}. (See more references below.) Various other types of labeling schemes were also considered. In a \emph{distance} labeling scheme, given the labels of two vertices it should be possible to deduce the distance between them in the represented graph. In a \emph{routing} scheme, we may want to be able to identify the first edge on a shortest path, or an almost shortest path, between the two vertices. There is a vast literature on these subjects. When the graphs considered are rooted trees, we may want to be able to decide whether a vertex is an \emph{ancestor} of another vertex, given just the labels of the two vertices, or to be able to compute the label of their \emph{Nearest Common Ancestor} (NCA). (See next section and the extensive survey of Gavoille and Peleg~\cite{gavoillepeleg}.) Closely related to adjacency labeling schemes are \emph{induced-universal graphs}. A graph ${\cal G}=({\cal V},{\cal E})$ is said to be an induced-universal graph for a family $\cal F$ of graphs, if for every graph~$G$ of~$\cal F$ there is an induced subgraph of~$\cal G$ that is isomorphic to~$G$. Induced-universal graphs were introduced by Rado~\cite{Rado64}. 
Kannan {\em et al.}~\cite{KNR92} note that a family $\cal F$ has an $L$-bit adjacency labeling scheme if and only if it has an induced-universal graph on at most $2^L$ vertices. Moon~\cite{moon1965minimal} showed that the family of all $n$-vertex undirected graphs has an induced-universal graph on $O(n2^{n/2})$ vertices. To do that, he implicitly constructs an adjacency labeling scheme for $n$-vertex graphs that assigns each vertex an $(\lfloor n/2\rfloor+\lceil\lg n\rceil)$-bit label.\footnote{Throughout the paper, we let $\lg n=\log_2 n$.} Moon~\cite{moon1965minimal} uses a simple counting argument to show that adjacency labels for $n$-vertex graphs must contain at least $(n-1)/2$ bits, and that any induced-universal graph for $n$-vertex graphs must contain at least $2^{(n-1)/2}$ vertices, showing that his upper bounds are not far from being optimal. Closing the gap between the upper and lower bounds is mentioned as an open problem in Vizing \cite{vizing1968some}. Bollob{\'a}s and Thomason \cite{bollobas1981graphs} show that a random graph on $\lceil n^2 2^{n/2}\rceil $ vertices is, with high probability, an induced-universal graph for the family of $n$-vertex undirected graphs. While succinct adjacency labeling schemes and small induced-universal graphs for various families of graphs were subsequently constructed (see the next section for a summary), no progress was made on the most basic problem of finding adjacency labeling schemes and induced-universal graphs for the family of all $n$-vertex graphs. We obtain an adjacency labeling scheme for $n$-vertex graphs that assigns each vertex an $(\ceil{n/2}+4)$-bit label, which is optimal up to a small additive constant. As a consequence, we also get an induced-universal graph of size $O(2^{n/2})$ which is optimal up to a small multiplicative factor. 
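Moon's counting lower bound mentioned above can be sketched in one line. (This is a simplified version of the argument; a formal version and a slight strengthening appear in Section~\ref{sec:lower}.) The sequence of the $n$ labels assigned to the vertices determines all adjacencies, so distinct $n$-vertex graphs on a fixed vertex set must receive distinct label sequences, and hence
\[
2^{nL}\;\ge\;2^{n(n-1)/2},
\qquad\mbox{and therefore}\qquad
L\;\ge\;\frac{n-1}{2}.
\]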
Using our techniques we also obtain an $(n+3)$-bit adjacency labeling scheme for $n$-vertex directed graphs, an $(\ceil{n/2}+4)$-bit adjacency labeling scheme for $n$-vertex \emph{tournaments}, thus improving an $(\floor{n/2}+\lceil \lg n\rceil)$-bit bound of Moon \cite{moon1968topics}, and finally an $(\frac{n}{4}+O(1))$-bit adjacency labeling scheme for $n$-vertex \emph{bipartite} graphs, improving an $(\frac{n}{4}+2\lceil\lg n\rceil)$-bit scheme of Lozin and Rudolf \cite{Lozin2007}. All these results are again optimal up to a small additive constant and give rise to induced-universal graphs that are optimal up to a small multiplicative factor. \vspace*{-10pt} \subsection*{The basic challenge} \vspace*{-5pt} To illustrate the most basic technical challenge, we briefly consider the simplest case of \emph{directed} graphs. Suppose that there is an adjacency labeling scheme that assigns each vertex of an $n$-vertex graph an $L$-bit label. Since, given the labels of two vertices, we can determine whether the vertices are adjacent, the labels of all the vertices determine the graph. Since $n(n-1)$ bits are needed to represent a general $n$-vertex directed graph, we get that $L\ge n-1$, i.e., each label must contain at least $n-1$ bits. (For a formal version and a slight strengthening of this argument, see Section~\ref{sec:lower}.) Suppose now that each vertex~$u$ in an $n$-vertex graph has a distinct index $\,ind(u)\in \{0,1,\ldots,n-1\}$ assigned to it. The graph can then be represented using the adjacency matrix $A=(a_{ij})$, where $a_{ij}=1$ if and only if there is an edge from the vertex whose index is~$i$ to the vertex whose index is~$j$. We can let the label of~$u$ be the $(n-1)$-bit string $adj(u)$, which is simply the $ind(u)$-th row of the adjacency matrix with the diagonal element omitted.
Given the labels $adj(u)$ and $adj(v)$ of two vertices~$u$ and~$v$, \emph{and} their indices $ind(u)$ and $ind(v)$, we can easily decide whether there is an edge from~$u$ to~$v$ in the graph. Such an edge exists if and only if $ind(v)<ind(u)$ and $adj(u)[ind(v)]=1$, or $ind(v)>ind(u)$ and $adj(u)[ind(v)-1]=1$. (As can be seen $adj(v)$ is not even required here.) This labeling scheme seemingly matches the trivial lower bound. Unfortunately, it is \emph{not} a valid adjacency labeling scheme. To determine whether~$u$ and~$v$ are adjacent, we need to know not only their \emph{labels}, but also their \emph{indices}. (In the sequel, we thus refer to $adj(u)$ as the \emph{tag}, and not the label of~$u$.) We can of course obtain a valid adjacency labeling scheme by letting the label of a vertex be an encoding of both its index and its tag. But, the resulting labels would then be of length $n+\lceil\lg n\rceil-1$. The fundamental question is whether these extra $\lceil\lg n\rceil$ bits are needed. We show that they are \emph{not} needed. Using a careful choice of indices, we can encode both indices and tags using only $n+O(1)$ bits. 
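The tag-and-index decoding rule just described can be made concrete in a few lines. The sketch below is illustrative only: it implements the naive encoding above, not the paper's final scheme, and the function names are ours.

```python
def make_tags(adj_matrix):
    """Tag of the vertex with index i: row i of the adjacency matrix,
    with the diagonal entry omitted (an (n-1)-bit string)."""
    n = len(adj_matrix)
    return [[adj_matrix[i][j] for j in range(n) if j != i] for i in range(n)]

def has_edge(ind_u, adj_u, ind_v):
    """Is there an edge u -> v?  Needs u's index and tag, and v's index;
    as noted above, v's tag is not consulted."""
    if ind_v < ind_u:
        return adj_u[ind_v] == 1
    return adj_u[ind_v - 1] == 1

# A 3-vertex directed graph with edges 0 -> 1, 1 -> 2, 2 -> 0.
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
tags = make_tags(A)
assert has_edge(0, tags[0], 1) and has_edge(2, tags[2], 0)
assert not has_edge(1, tags[1], 0)
```

As stressed above, this is not a valid labeling scheme on its own: the decoder needs the indices as well as the tags.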
\begin{table}[t] \renewcommand{\arraystretch}{1.3} \small \centering \makebox[0pt][c]{ \begin{tabular}{|c|c|c|c|} \hline \hline \bf Graph family & \bf Lower bound & \bf Upper bound & \bf Reference \\ \hline \noalign{\vskip 2mm} \hline General graphs & $2^{\frac{n-1}{2}}$ & $O(n 2^{\frac{n}{2}})$ & Moon \cite{moon1965minimal} \\ \hline Tournaments & $2^{\frac{n-1}{2}}$ & $O(n 2^{\frac{n}{2}})$ & Moon \cite{moon1968topics} \\ \hline Bipartite graphs & $\Omega(2^{\frac{n}{4}})$ & $O(n^2 2^{\frac{n}{4}})$ & Lozin-Rudolf \cite{Lozin2007} \\ \hline \noalign{\vskip 2mm} \hline Graphs of max degree $d$, $d$ even & $\Omega(n^{\frac{d}{2}})$ & $O(n^{\frac{d}{2}})$& Butler \cite{Butler_induced-universalgraphs} \\ \hline Graphs of max degree $d$, $d$ odd & $\Omega(n^{\frac{d}{2}})$ & $O(n^{{\frac{d+1}{2}}-\frac{1}{d}}\log^{2+\frac{2}{d}}n)$ & Esperet {\em et al.}~\cite{Esperet2008} \\ \hline Graphs of max degree 2& $\floor{\frac{11n}{6}}$ & $\floor{\frac{5n}{2}} + O(1)$& Esperet {\em et al.}~\cite{Esperet2008} \\ \hline \noalign{\vskip 2mm} \hline Graphs excluding a fixed minor & $\Omega(n)$ & $n^2 (\log {n})^{O(1)} $ & Gavoille-Labourel \cite{gavoille2007shorter} \\ \hline Planar graphs & $\Omega(n)$ &$n^2(\log n)^{O(1)}$ & Gavoille-Labourel \cite{gavoille2007shorter} \\ \hline Planar graphs of bounded degree & $\Omega(n)$ & $O(n^2)$& Chung \cite{Chung90} \\ \hline Outerplanar graphs & $\Omega(n)$ & $n (\log n)^{O(1)}$ & Gavoille-Labourel \cite{gavoille2007shorter} \\ \hline Outerplanar graphs of bounded degree & $\Omega(n)$& $O(n)$& Chung \cite{Chung90}\\ \hline \noalign{\vskip 2mm} \hline Graphs of treewidth $k$ & $n2^{\Omega(k)}$& $n (\log \frac{n}{k})^{O(k)} $& Gavoille-Labourel \cite{gavoille2007shorter}\\ \hline Graphs of arboricity $k$ & $\frac{n^k}{2^{O(k^2)}}$ & $n^k \min\{(\log n)^{O(1)},2^{O(k\log^*n)}\}$& Alstrup-Rauhe \cite{alstruprauhe} \\ \hline Forests & $\Omega(n)$ & $n2^{O(\log^*n)}$& Alstrup-Rauhe \cite{alstruprauhe} \\ \hline Forests of bounded degree& 
$\Omega(n)$ & $O(n)$& Chung \cite{Chung90} \\ \hline Trees of depth $d$ & $\Omega(n)$ & $O(nd^3)$ & Fraigniaud-Korman \cite{fraigniaudkorman2} \\ \hline Caterpillars & $\Omega(n)$ & $O(n)$& Bonichon {\em et al.}~\cite{bonichon} \\ \hline \end{tabular} } \caption{Induced-universal graphs for various families of graphs. All families considered, except tournaments, are families of undirected graphs. The results for graphs of maximum degree at most~$d$ assume that~$d$ is a constant. The $\Omega(n^{d/2})$ lower bound for~$d$ odd is due to Butler~\cite{Butler_induced-universalgraphs}. In the result for families of graphs with an excluded minor, the~$O(1)$ term in the exponent depends on the fixed minor excluded.} \label{tab:adjacency2} \end{table} \vspace*{-10pt} \subsection*{Organization of paper} \vspace*{-5pt} The rest of this paper is organized as follows. In Section~\ref{sec:summary} we provide a concise summary of related results. In Section~\ref{sec:prelim} we give a formal definition of adjacency labeling schemes and discuss some variants of the definition. In Section~\ref{sec:blocks} we describe the two building blocks used to obtain all our results. The first one of these building blocks, which is the cornerstone of all our constructions, is a labeling scheme for very unbalanced bipartite graphs. The labels produced by this labeling scheme vary drastically in size. Our second building block is a \emph{spreading} scheme used to smooth the differences in the label sizes. Combining the two schemes we manage to assign all vertices labels of the same size, thus conforming to the formal requirement. In Section~\ref{sec:directed} we present our new labeling schemes for \emph{directed} graphs. In Section~\ref{sec:undirected} we present our new labeling schemes for \emph{undirected} graphs. The labeling schemes for directed graphs are presented first as they are somewhat simpler. In Section~\ref{sec:tournaments} we present our schemes for \emph{tournaments}. 
In Section~\ref{sec:bipartite} we present our results for \emph{bipartite} graphs. The schemes for bipartite graphs require some additional new ideas. In Section~\ref{sec:efficiency} we discuss the issue of efficient decoding. In Section~\ref{sec:universal} we discuss the construction of induced-universal graphs. In Section~\ref{sec:lower} we discuss lower bounds. We end in Section~\ref{sec:concl} with some concluding remarks and open problems. \section{Summary of related results}\label{sec:summary} A summary of known upper and lower bounds on the size of induced-universal graphs for various families of graphs is given in Table~\ref{tab:adjacency2}. Corresponding results for adjacency labeling schemes can be obtained by taking logarithms. We improve the first three upper bounds, making them asymptotically tight. An induced-universal graph for a family~${\cal F}$ is a graph that contains each graph from~${\cal F}$ as an induced subgraph. A \emph{universal} graph for~${\cal F}$, on the other hand, is a graph that contains each graph from~${\cal F}$ as a subgraph, not necessarily induced. A clique on~$n$ vertices is clearly a universal graph for all $n$-vertex graphs. The challenge is to construct universal graphs with as few edges as possible. Chung~\cite{Chung90} shows that universal graphs can be used to construct induced-universal graphs. Using universal graphs constructed by Babai {\em et al.}~\cite{BCEGS82}, Bhatt {\em et al.}~\cite{BCLR89} and Chung {\em et al.}~\cite{CGP76,CG78,CG79,CG83}, she obtains her induced-universal graphs cited in Table~\ref{tab:adjacency2}. The induced-universal graphs for planar graphs, outerplanar graphs, graphs excluding a fixed minor, and bounded degree graphs listed in Table~\ref{tab:adjacency2} also rely on her ideas. 
Alon and Capalbo \cite{alon2007sparse,AlonCapalbo2008}, improving many previous results, show that for every fixed~$d$, there is a graph with $O(n^{2-2/d})$ edges which is universal for $n$-vertex graphs of maximum degree at most~$d$, which is asymptotically optimal. Esperet {\em et al.}~\cite{Esperet2008} use this result to obtain their induced-universal graphs for graphs of fixed maximum degree~$d$, where~$d$ is odd. Distance labeling schemes were considered by many authors. See, e.g., Peleg \cite{peleg} and Gavoille {\em et al.}~\cite{Gavoille200485} and the references therein. Labeling schemes for flow and connectivity were considered by Katz {\em et al.}~\cite{siamcompKatzKKP04} and Korman \cite{Korman2010}. Labeling schemes for answering ancestor and NCA queries in trees were considered, among others, by Abiteboul {\em et al.}~\cite{abiteboul}, Alstrup {\em et al.}~\cite{AR02,AGKR04,alstrupnca2014} and Fraigniaud and Korman \cite{fraigniaudkorman}. Routing schemes were also considered by many authors. See, e.g., Eilam {\em et al.}~\cite{EilamGP03}, Fraigniaud and Gavoille \cite{Gavoille01}, Thorup and Zwick \cite{ThZw05,throupzwick} and the references therein.

\section{Preliminaries}\label{sec:prelim}

We begin with a formal definition of adjacency labeling schemes. For concreteness, we assume throughout the paper that every $n$-vertex graph is defined on the vertex set $V=[n]=\{0,1,\ldots,n-1\}$. Every $n$-vertex graph can of course be made a graph on $V=[n]$ by mapping its vertices to~$[n]$. \begin{definition}[Adjacency labeling schemes]\label{def:label} Let ${\cal F}_n$ be a family of graphs on vertex set $V=[n]=\{0,1,\ldots,n-1\}$.
A pair of functions $\mbox{\it Label}:{\cal F}_n\to \left([n]\to\{0,1\}^{L}\right)$ and $\mbox{\it Edge}:\{0,1\}^L\times \{0,1\}^L\to\{0,1\}$ is an $L$-bit adjacency labeling scheme for ${\cal F}_n$ if and only if for every $G=(V,E)\in {\cal F}_n$, where $V=[n]$, and every $u,v\in V$, we have $(u,v)\in E$ if and only if $\mbox{\it Edge}(\mbox{\it Label}(G)(u),\mbox{\it Label}(G)(v))=1$. \end{definition} In Definition~\ref{def:label}, the family ${\cal F}_n$ can be a family of \emph{undirected} graphs or of \emph{directed} graphs. If~ ${\cal F}_n$ is a family of undirected graphs, we should of course have $\mbox{\it Edge}(x,y)=\mbox{\it Edge}(y,x)$, for every $x,y\in\{0,1\}^L$. Many of the papers on adjacency labeling schemes say that a family ${\cal F}_n$ admits an $L$-bit adjacency labeling scheme if and only if given any graph $G\in {\cal F}_n$, it is possible to assign each vertex~$u$ of~$G$ an $L$-bit label such that given the labels of two vertices~$u$ and~$v$ it is possible to decide whether they are adjacent in~$G$. It is not difficult to check that this definition is equivalent to our definition. We explicitly refer to the encoding function $\mbox{\it Label}$, that assigns labels to the vertices of a given graph, and $\mbox{\it Edge}$, the decoding function, that given two labels decides whether the vertices they belong to are adjacent. An adjacency labeling scheme $(\mbox{\it Label},\mbox{\it Edge})$ for a family ${\cal F}_n$ is said to satisfy the \emph{distinctness} property if and only if for every graph $G=(V,E)$ from ${\cal F}_n$, and every two distinct vertices $u,v\in V$ we have $\mbox{\it Label}(G)(u)\ne\mbox{\it Label}(G)(v)$. Not every labeling scheme satisfies this property. (Of course, if $\mbox{\it Label}(G)(u)=\mbox{\it Label}(G)(v)$, then~$u$ and~$v$ must have the same set of neighbors in~$G$.) Some of the published lower bounds for adjacency labeling schemes rely on the distinctness property. 
Similar lower bounds can be obtained, however, without relying on it. (See Section~\ref{sec:lower}.) The distinctness property is required if we want to convert a labeling scheme into an induced-universal graph. All our labeling schemes satisfy the distinctness property. Furthermore, for all our labeling schemes it is possible to define an \emph{index} function $\mbox{\it Ind}:\{0,1\}^L\to [n]$ such that for every graph $G\in {\cal F}_n$ and every $u\ne v\in [n]$ we have $\mbox{\it Ind}(\mbox{\it Label}(G)(u))\ne \mbox{\it Ind}(\mbox{\it Label}(G)(v))$. However, we would \emph{not} in general have $\mbox{\it Ind}(\mbox{\it Label}(G)(u))=u$. Our labeling schemes make essential use of the freedom to reassign names, i.e., indices from $[n]$, to the vertices of the graph. Adjacency labeling schemes that possess such an index function are said to be \emph{indexing}. If ${\cal F}$ is a family of graphs, we let ${\cal F}_n$ be the $n$-vertex graphs of~${\cal F}$, and ${\cal F}_{\le n}$ the graphs of~${\cal F}$ with at most~$n$ vertices. If every $n'$-vertex graph~$G'$ of~${\cal F}$, where $n'<n$, can be extended into an $n$-vertex graph~$G$ of~${\cal F}$, e.g., by adding $n-n'$ isolated vertices, then a labeling scheme for~${\cal F}_n$ can also be used as a labeling scheme for ${\cal F}_{\le n}$. A family ${\cal F}$ that satisfies this property is said to satisfy the \emph{extension} property. When a labeling scheme is used, it is essentially assumed that~$L$, the length of the labels, is known. (Various coding issues arise if~$L$ is not known, or if labels are not of the same length.) We may assume that~$n$, the number of vertices in the graph, or an upper bound on this number, is also known. This can be justified as follows. Assume that~${\cal F}$ satisfies the extension property defined above. Let $L_{\cal F}(n)$ be the length of the labels assigned by the labeling scheme to the vertices of $n$-vertex graphs of~${\cal F}$.
We may assume, without loss of generality, that~$L_{\cal F}(n)$ is non-decreasing in~$n$. Given a label size~$L$, we can find the largest~$n$ for which $L_{\cal F}(n)=L$ and then infer that the encoded graph has at most~$n$ vertices. The same process should of course be followed when assigning the labels to the vertices.

\section{Building blocks}\label{sec:blocks}

In this section we present our two main new ideas. The new ideas give rise to the two main building blocks used in all our constructions. Both building blocks are labeling schemes for bipartite graphs. They assign each vertex~$u$ both an \emph{index} $\,ind(u)$ and an adjacency \emph{tag} $\,adj(u)$. The pair $(ind(u),adj(u))$ may be viewed as the adjacency label of~$u$. The first scheme needs the freedom to assign indices to the vertices. The second scheme can use indices already assigned to the vertices. The adjacency tags assigned to the vertices are usually not of the same length. Thus, the resulting labeling schemes do not conform to Definition~\ref{def:label}. They can still be used, however, to construct labeling schemes that do conform to Definition~\ref{def:label}. In a typical application, the graph~$G=(V,E)$ to be encoded is partitioned into~$k$ subgraphs $G_i=(V_i,E_i)$, for $i\in[k]$, where $E=\cup_{i=1}^k E_i$. Each vertex $u\in V$ is assigned a single index $ind(u)$, used in the encoding of all subgraphs, and a separate adjacency tag $adj_i(u)$ for each subgraph. (If $u\not\in V_i$, then $adj_i(u)$ is empty.) The label of~$u$ is then taken to be the tuple $(ind(u),adj_1(u),\ldots,adj_k(u))$. Given $ind(u)$, it would be possible to deduce the length of the tags $adj_1(u),\ldots,adj_k(u)$. While individual tags may have different lengths, the resulting labels would all have the same length. A bipartite graph $G=(U,V,E)$, where $|U|=k$, $|V|=n-k$, $U\cap V=\emptyset$, and of course $E\subseteq U\times V$, is said to be a $(k,n-k)$-bipartite graph.
We usually assume, without loss of generality, that $U=[k]=\{0,1,\ldots,k-1\}$ and $V=[k,n)=\{k,k+1,\ldots,n-1\}$. Such a bipartite graph can clearly be represented as a $k\times (n-k)$ Boolean adjacency matrix $A=A_G$. \subsection{A labeling scheme for extremely unbalanced bipartite graphs}\label{sub:unbalanced} Our main new idea is a labeling scheme for $(k,n-k)$-bipartite graphs $G=(U,V,E)$ where $k\ll n$. The labeling scheme assigns indices to the vertices of~$V$, thus permuting the columns of the adjacency matrix $A=A_G$, in a way that enables a succinct encoding of the rows of~$A$. Every $n$-bit string is of the form $0^{t_1}1^{t_2}\ldots$ or $1^{t_1}0^{t_2}\ldots$, where $t_1,t_2,\ldots\ge 1$. Each such maximal block of consecutive 0s or 1s is called a \emph{run}. If $A=(a_{i,j})$ is a $k\times n$ Boolean matrix and $\pi\in S_n$ is a permutation on $[n]$, we let $A^\pi=(a^\pi_{i,j})$ be the $k\times n$ matrix defined by $a^\pi_{i,j}=a_{i,\pi(j)}$. For convenience, we start the numbering of the rows and columns of~$A$ from~$0$. \begin{lemma}\label{lem:pi} Let $A$ be a $k\times n$ Boolean matrix. Then, there exists a permutation $\pi\in S_n$ such that the $i$-th row of $A^\pi$ is composed of at most $2^{i}{+}1$ runs. Furthermore, if the $i$-th row is composed of $2^{i}{+}1$ runs, then the first run is a run of 0s. (Recall that row indices start from~$0$.) \end{lemma} \begin{proof} As a warm-up, we begin by proving a slightly weaker statement. We prove that there is a permutation $\pi\in S_n$ for which the $i$-th row of $A^\pi$, for $0\le i<k$, is composed of at most $2^{i+1}$ runs. We view the columns as binary representations of numbers where the bit in row~$i$ is the $i$-th most significant bit. For every $j\in \{0,1,\ldots,2^k-1\}$, let $I_j$ be the set of indices of the columns of~$A$ that contain the $k$-bit binary representation of~$j$.
Any permutation $\pi$ that sorts the columns in non-decreasing lexicographic order, i.e., places the indices in $I_0$ first, then those of $I_1$, and so on, ending with the indices in $I_{2^k-1}$, satisfies the required condition. To tighten the bound and obtain the claim of the lemma, we order the blocks $I_0,I_1,\ldots,I_{2^k-1}$ using a \emph{gray code}. The $k$-bit gray code is an ordering of the $k$-bit words such that two consecutive words differ in a single position. The first two gray codes are $\langle 0 , 1\rangle$ and $\langle 00,01,11,10\rangle$. Furthermore, if $\langle g_0,\ldots,g_{2^{b}-1}\rangle$ is the $b$-bit gray code, then $\langle 0g_0,\ldots,0g_{2^{b}-1},1 g_{2^{b}-1},\ldots,1g_{0}\rangle$ is the $(b+1)$-bit gray code. It is easy to verify by induction that the number of times the $i$-th most significant bit in a gray code changes is exactly~$2^{i}$. Thus, any permutation $\pi$ that orders the blocks $I_j$ according to a gray code has the property that the $i$-th row in $A^\pi$ is composed of at most $2^{i}+1$ runs. The number of runs may be smaller as some of the index sets $I_j$ may be empty. If the number of runs is exactly $2^{i}+1$, then the first run is a run of 0s. \end{proof} \begin{lemma}\label{lem:L} The total number of $n$-bit strings composed of at most $2^i+1$ runs is $R(n,i)=2\sum_{j=0}^{2^i} {n-1 \choose j}$. Thus, any $n$-bit string composed of at most $2^i+1$ runs can be specified using $L(n,i)=\lceil \lg R(n,i)\rceil$ bits. \end{lemma} \begin{proof} To represent an $n$-bit word composed of $r$ non-empty runs, we need to represent the $r{-}1$ endpoints of the first $r{-}1$ runs. (The first run always starts at position 1, and the $r$-th run always ends at position~$n$.) There are thus ${n-1 \choose r-1}$ possibilities. (We have $n-1$ here, as $n$ is the endpoint of the last run, and is therefore not allowed to be the endpoint of any other run.) We need to multiply this number by~2, as the first run may be a run of 0s or a run of 1s.
Summing up, we get the desired result. \end{proof} Lemma~\ref{lem:pi} states that if the $i$-th row of $A^\pi$ is composed of $2^i+1$ runs, then the first run is a run of 0s. Thus, in the sequel we can actually replace $R(n,i)$ and $L(n,i)$ by $R'(n,i)={n-1\choose 2^i}+2\sum_{j=0}^{2^i-1} {n-1 \choose j}$ and $L'(n,i)=\lceil \lg R'(n,i)\rceil$. This, however, would have only a negligible effect. Let $H(\alpha)=-\alpha\lg \alpha - (1-\alpha)\lg(1-\alpha)$ be the binary entropy function. It is well known that $\sum_{j=0}^k {n \choose j} \le 2^{H(k/n)n}$, for $k\le n/2$. This gives us the following useful upper bound on~$L(n,i)$. \begin{lemma}\label{lem:L2} If $2^i\le n/2$, then $L(n,i)\le \lceil H(2^i/n)n \rceil+1$. \end{lemma} Using Lemmas~\ref{lem:pi} and~\ref{lem:L} we obtain the following labeling scheme: \begin{lemma} \label{lem:bipartite}\mbox{\rm [Run encoding]} For every $k\le \lg n$ there is a labeling scheme with the following properties. The scheme receives a $(k,n-k)$-bipartite graph $G=(U,V,E)$, where $|U|=k$ and $|V|=n-k$, with a distinct index $\,ind_1(u)\in[k]$ assigned to every $u\in U$. The scheme assigns a distinct index $\,ind_2(v)\in [n-k]$ to every $v\in V$. It also assigns each vertex~$u\in U$ an $\ell_i$-bit tag $adj_1(u)$, where $i=ind_1(u)$ and $\ell_i = L(n-k,i)\le L(n,i)\le \left\lceil H\left(2^{i}/n\right)n \right\rceil+1$. For every $u\in U$ and $v\in V$, given $(ind_1(u), adj_1(u))$ and~$ind_2(v)$ it is possible to determine whether $(u,v)\in E$. \end{lemma} \begin{proof} Let $G=(U,V,E)$ be a bipartite graph. For every $i\in[k]$, let $u_i\in U$ be such that $ind_1(u_i)=i$. Let $A\in \{0,1\}^{k\times (n-k)}$ be the adjacency matrix of~$G$ in which the $i$-th row corresponds to~$u_i$. The ordering of the columns of~$A$ is arbitrary. Let $\pi\in S_{n-k}$ be a permutation, whose existence follows from Lemma~\ref{lem:pi}, for which the $i$-th row of~$A^\pi$ is composed of at most $2^i+1$ runs.
For every $j\in [n-k]$, let $v_j\in V$ be the vertex whose column is the $j$-th column of~$A^\pi$ and let $ind_2(v_j)=j$. The tag $adj_1(u_i)$ is simply an encoding of the $i$-th row of $A^\pi$, composed of at most $2^{i}{+}1$ runs. By Lemmas~\ref{lem:L} and~\ref{lem:L2}, we can encode this row using $\ell_i=L(n-k,i)\le L(n,i)\le \left\lceil H\left(2^{i}/n\right)n \right\rceil+1$ bits, as required. (Note that as $i\le k-1$ and $k\le\lg n$, we have $2^{i}\le n/2$, so Lemma~\ref{lem:L2} can indeed be applied.) It is not difficult to check that, for every $u\in U$ and $v\in V$, given just $ind_1(u),adj_1(u)$ and $ind_2(v)$, it can be determined whether $(u,v)\in E$. Indeed, $ind_1(u)$ tells us which row of the adjacency matrix corresponds to~$u$. Using $ind_1(u)$ and $adj_1(u)$ we can reconstruct this row. The bit in position $ind_2(v)$ then tells us whether $(u,v)\in E$. \end{proof} In the present setting, $ind_1(u)$ can be inferred from the length of $adj_1(u)$. However, when the scheme of Lemma~\ref{lem:bipartite} is used as a building block in the construction of other labeling schemes, $adj_1(u)$ forms a part of a larger label and $ind_1(u)$ is then used to infer the length of~$adj_1(u)$. In Section~\ref{sec:efficiency} we consider a modification of the scheme of Lemma~\ref{lem:bipartite} that allows decoding, i.e., determining whether two vertices are adjacent, in constant time, in an appropriate model of computation. As can be expected, the sum $\sum_{i=0}^{k-1} L(n,i)$ plays an important role in the sequel. As $L(n,i)\le \lceil H(2^i/n)n \rceil + 1$, we get that $\sum_{i=0}^{k-1} L(n,i) \le 2k + \bigl(\sum_{i=0}^{k-1} H(2^i/n)\bigr)n \le 2k + \bar{H}(2^{k-1}/n)n$, where \[\bar{H}(\alpha) = \sum_{j=0}^{\infty} H\!\left(\frac{\alpha}{2^j}\right)\;.\] It is not difficult to verify that $\bar{H}(\alpha)$ is well defined, i.e., that the sum converges for any value of~$\alpha$.
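Since the terms $H(\alpha/2^j)$ decay geometrically in~$j$, the series defining $\bar{H}$ can be evaluated numerically by simple truncation. The following Python sketch (ours, for illustration only; the function names are not part of the text) does exactly this:

```python
import math

def H(a):
    """Binary entropy H(a) = -a lg a - (1-a) lg(1-a), with H(0) = H(1) = 0."""
    if a <= 0.0 or a >= 1.0:
        return 0.0
    return -a * math.log2(a) - (1.0 - a) * math.log2(1.0 - a)

def Hbar(a, terms=100):
    """Truncation of Hbar(a) = sum_{j>=0} H(a / 2^j); the terms decay
    geometrically, so 100 of them far exceed double precision."""
    return sum(H(a / 2**j) for j in range(terms))

print(Hbar(1/2), Hbar(1/4), Hbar(1/8))
```

This confirms the numerical values of $\bar{H}(\frac12)$, $\bar{H}(\frac14)$ and $\bar{H}(\frac18)$ quoted next.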
It is also not difficult to check numerically that $\bar{H}(\frac{1}{2})=3.15635\ldots$, $\bar{H}(\frac{1}{4})=2.15635\ldots$ and $\bar{H}(\frac{1}{8})=1.34507\ldots$. (Note that as $H(\frac12)=1$, we have $\bar{H}(\frac12)=1+\bar{H}(\frac14)$.) \subsection{A spreading labeling scheme for bipartite graphs}\label{sub:spread} We now present a second labeling scheme for $(k,n-k)$-bipartite graphs used to counterbalance the labeling scheme of Lemma~\ref{lem:bipartite}. The labeling scheme receives a bipartite graph $G=(U,V,E)$ with distinct indices $ind_1(u)$, for $u\in U$, and $ind_2(v)$, for $v\in V$, already assigned to its vertices. The scheme assigns adjacency tags $adj_1(u)$ and $adj_2(v)$ to the vertices $u\in U$ and $v\in V$. The scheme also receives numbers~$0\le \ell_i\le n-k$, for $i\in [k]$, that control the lengths of the tags assigned to the vertices of~$U$. The tags of the vertices of~$V$ are all of the same length~$L$, which, of course, depends on the $\ell_i$'s. The bits contained in the tags $adj_1(u)$ and $adj_2(v)$ are ``raw'' adjacency bits; no coding tricks are used this time. The scheme only uses the freedom to decide whether the adjacency bit corresponding to a pair $(u,v)\in U\times V$ will reside in $adj_1(u)$ or in $adj_2(v)$. The indices $ind_1(u)$ and $ind_2(v)$ will allow us to determine which of the two tags contains the bit and in which position. No assumption regarding the relation between~$k$ and~$n$ is required. \begin{lemma} \label{lem:spread}\mbox{\rm [Spreading]} For every $0\le \ell_i\le n-k$, where $i\in [k]$, there is a labeling scheme with the following properties. The scheme receives a $(k,n-k)$-bipartite graph $G=(U,V,E)$, where $|U|=k$, $|V|=n-k$, with a distinct index $\,ind_1(u)\in[k]$ assigned to every vertex $u\in U$ and a distinct index $\,ind_2(v)\in [n-k]$ assigned to every vertex $v\in V$. The scheme assigns each vertex $u\in U$ an $((n-k)-\ell_i)$-bit tag $adj_1(u)$, where $i=ind_1(u)$.
It assigns each vertex $v\in V$ an $L$-bit tag $adj_2(v)$, where $L=\lceil ({\sum_{i=0}^{k-1} \ell_i})/{(n-k)} \rceil$. For every $u\in U$ and $v\in V$, given $(ind_1(u), adj_1(u))$ and $(ind_2(v),adj_2(v))$, and given the $\ell_i$'s, it is possible to determine whether $(u,v)\in E$. \end{lemma} \addtocounter{theorem}{-1} \begin{proof} For every $i\in [k]$, let $u_i\in U$ be the vertex for which $ind_1(u_i)=i$. For every $j\in[n-k]$, let $v_j\in V$ be the vertex for which $ind_2(v_j)=j$. Let~$A=(a_{i,j})$ be the adjacency matrix of~$G$ in which the $i$-th row corresponds to~$u_i$ and the $j$-th column corresponds to~$v_j$. We start with each vertex $u_i$, for $i\in [k]$, holding an $(n-k)$-bit tag $adj_1(u_i)$ that specifies its adjacencies to all vertices of~$V$, i.e., the $i$-th row of the adjacency matrix $A$. Each vertex $v_j\in V$ starts with an empty tag $adj_2(v_j)$. Our goal is to move~$\ell_i$ bits from $adj_1(u_i)$, for $i\in[k]$, to the tags $adj_2(v_j)$ of some vertices of~$V$ in such a way that each tag $adj_2(v_j)$ will contain roughly the same number of bits. This can be easily done in the following manner. Let $s_0=0$ and $s_i=(\sum_{j=0}^{i-1}\ell_j)\bmod (n-k)$, for $i>0$. We examine the vertices $u_0,u_1,\ldots$ of~$U$ one by one. Vertex $u_i$ removes bit $a_{i,s_i+j}$, for $j\in[\ell_i]$, from its tag and appends it to the tag of vertex~$v_{s_i+j}$. In both cases, $s_i+j$ is computed modulo $n-k$. As the tags of the vertices of~$V$ acquire bits in a round-robin manner, none of them ends up with more than $L=\lceil ({\sum_{i=0}^{k-1} \ell_i})/{(n-k)} \rceil$ bits. Given the indices and the tags $ind_1(u),adj_1(u)$ and $ind_2(v),adj_2(v)$ of two vertices~$u\in U$ and~$v\in V$, and given all the $\ell_i$'s, it is easy to check whether they are adjacent. Suppose that $i=ind_1(u)$ and $j=ind_2(v)$. If $j$ is not in the (possibly wrapped) interval $[s_i,s_{i+1})$, then the adjacency bit $a_{i,j}$ is contained in~$adj_1(u)$.
Otherwise, it is contained in~$adj_2(v)$. Furthermore, the position of~$a_{i,j}$ in~$adj_1(u)$ or $adj_2(v)$ is easily calculated. If $a_{i,j}$ is in $adj_1(u)$, then it is in position~$j$, if $j<s_i<s_{i+1}$, in position~$j-\ell_i$, if $s_i<s_{i+1}\le j$, or in position $j-s_{i+1}$, if $s_{i+1}\le j<s_i$. If $a_{i,j}$ is not in $adj_1(u)$, then it is in position $\floor{\frac{\bar{s}_i+((j-s_i)\bmod(n-k))}{n-k}}$ of $adj_2(v)$, where $\bar{s}_0=0$ and $\bar{s}_i=\sum_{j=0}^{i-1}\ell_j$, for $i>0$, where the summation this time is not modulo $n-k$. (Note, in particular, that $u$ only needs to know~$\bar{s}_i$ and~$\ell_i$.) \end{proof} A slightly improved spreading lemma, used to fine-tune our results, can be found in Appendix~\ref{sec:spread2}. \section{Directed graphs}\label{sec:directed} Let $G=(V,E)$ be a directed graph on $V=[n]$. As we saw in the introduction, the na\"{\i}ve labeling scheme for $n$-vertex directed graphs, without self-loops, assigns to each vertex an $(n+\lceil\lg n\rceil-1)$-bit label. We provide the first improvement over this na\"{\i}ve bound. Furthermore, our bound is optimal up to a small additive constant. \begin{theorem}\label{thm:directed} For any $n\ge 100$, there is an adjacency labeling scheme for $n$-vertex directed graphs that assigns each vertex an $(n+4)$-bit label. \end{theorem} \begin{proof} Let $G=(V,E)$, where $V=[n]$, be a directed graph. Partition the vertex set~$V$ into two sets $A=[k]$ and $B=[k,n)$, where $k=\lceil\lg n\rceil-2$. We can view~$G$ as the disjoint union of~$G[A]$, $G[B]$, $G[A,B]$ and $G[B,A]$, where $G[A]$ and $G[B]$ are the induced directed graphs on~$A$ and $B$, respectively, $G[A,B]=(V,E\cap (A\times B))$ is composed of the edges of~$G$ from~$A$ to~$B$, and $G[B,A]=(V,E\cap (B\times A))$ is composed of the edges of~$G$ from~$B$ to~$A$.
The graphs $G[A,B]$ and $G[B,A]$ correspond to the undirected bipartite graphs $G[A,B]=(A,B,E\cap (A\times B))$ and $G[B,A]=(A,B,E\cap (B\times A))$, obtained by ignoring the direction of the edges. We start by using the labeling scheme for extremely unbalanced bipartite graphs of Lemma~\ref{lem:bipartite} to represent $G[A,B]$. We assign arbitrary distinct indices to the vertices of~$A$. For concreteness, let $ind_1(i)=i$, for $i\in A$. The scheme of Lemma~\ref{lem:bipartite} assigns indices $ind_2(j)\in [n-k]$ to the vertices of~$B$. It also assigns each vertex $i\in A$ an $\ell_i$-bit tag $adj_1(i)$, where $\ell_i=L(n-k,i)\le L(n,i)\le \lceil H(2^i/n)n \rceil + 1$. Next, we use the spreading scheme of Lemma~\ref{lem:spread} to represent $G[B,A]$, viewed as a bipartite graph $(A,B,E\cap (B\times A))$. We use the indices $ind_1(i)$ and $ind_2(j)$ assigned to the vertices of~$A$ and~$B$ above. We apply Lemma~\ref{lem:spread} with $\ell'_i=(k-1)+\ell_i$, for $i\in [k]$. As $k=\lceil\lg n\rceil-2$ and $0\le i\le k-1$, we have $\ell_i\le \lceil H(2^{k-1}/n)n\rceil +1 \le \lceil H(1/4)n\rceil+1\le \lceil 0.82n\rceil+1$. Therefore, $\ell'_i\le n-k$, for $i\in [k]$, as required by~Lemma~\ref{lem:spread}. Vertex~$i$ of~$A$ is thus assigned an $((n-k)-\ell'_i)$-bit tag $adj_2(i)$. Each vertex of~$B$ is assigned a $\Delta$-bit tag $adj_3(j)$, where $\Delta = \lceil (\sum_{i=0}^{k-1} ((k-1)+\ell_i))/(n-k) \rceil$. Next, we use the na\"{\i}ve labeling scheme to encode $G[A]$ and $G[B]$. We again use the indices~$ind_1(i)$ and~$ind_2(j)$ already assigned to the vertices. Each vertex $i\in A$ gets a $(k-1)$-bit tag $adj_4(i)$. Each vertex $j\in B$ gets an $((n-k)-1)$-bit tag $adj_5(j)$. Combining the indices~$ind_1$ and~$ind_2$ assigned separately to the vertices of~$A$ and $B$, we let $ind(i)=ind_1(i)$ if $i\in A$, and $ind(j)=k+ind_2(j)$, if $j\in B$. Note that now $ind(u)\in[n]$ for every $u\in V=A\cup B$.
For simplicity, we also use $ind(u)$, where $u\in V$, to denote the $\lceil\lg n\rceil$-bit binary encoding of $ind(u)$. Finally, we assign vertex~$i$ of~$A$ a label composed of the concatenation of $ind(i), adj_1(i), adj_2(i)$ and~$adj_4(i)$, and vertex~$j$ of~$B$ a label composed of the concatenation of $ind(j), adj_3(j)$ and $adj_5(j)$. Vertex~$i$ of~$A$ is thus assigned a label of length \[ \lceil\lg n\rceil \;+\; \ell_i \;+\; ((n-k)-(k-1)-\ell_i) \;+\; (k-1) \;=\; \lceil\lg n\rceil + (n-k) \;=\; n+2\;. \] Each vertex of~$B$ is assigned a label of length \[ \lceil\lg n\rceil \;+\; \Delta \;+\; (n-k-1) \;=\; n+1+\Delta \;. \] Now, \[ \Delta \;=\; \left\lceil \frac{\sum_{i=0}^{k-1}((k-1)+\ell_i)}{n-k} \right\rceil \;\le\; \left\lceil \frac{k(k+1) + n\sum_{i=0}^{k-1} H(2^i/n)}{n-k} \right\rceil \;\le\; \left\lceil \frac{k(k+1)}{n-k} + \frac{n}{n-k}\bar{H}(2^{k-1}/n) \right\rceil \;. \] As $k=\lceil\lg n\rceil-2$, we have $2^{k-1}/n\le\frac{1}{4}$, and thus $\bar{H}(2^{k-1}/n)\le \bar{H}(\frac14)< 2.16$. It is not difficult to verify that for $n\ge 100$ we have $\frac{k(k+1)}{n-k}<0.5$ and $\frac{n}{n-k}\bar{H}(\frac14)<2.5$, and thus $\Delta\le 3$. The label of each vertex is thus composed of at most $n+4$ bits. We can easily pad the labels of the vertices so that they all contain exactly $n+4$ bits. Given the labels of two vertices it is possible to determine whether they are adjacent. The index of a vertex, residing in the first $\lceil\lg n\rceil$ bits of its label, tells us whether the vertex is a vertex of~$A$ or of~$B$. It also allows us to break the label into the different tags composing it. Given the indices of two vertices we can easily decide which of the tags to use to determine whether the two vertices are adjacent. \end{proof} Theorem~\ref{thm:directed} is also valid for $n<100$, but for that we need to rely on the exact definition of~$L(n-k,i)$ and not just on the convenient upper bounds $L(n-k,i)\le L(n,i)\le \lceil H(2^i/n)n \rceil+1$.
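As an illustration (ours, not part of the formal argument), the label-length accounting above can be recomputed directly from the exact definition of $L(n-k,i)$ of Lemma~\ref{lem:L}, confirming the $n+4$ bound numerically:

```python
import math

def L(n, i):
    """Exact L(n, i) = ceil(lg R(n, i)) of Lemma [lem:L], where
    R(n, i) = 2 * sum_{j<=2^i} C(n-1, j) counts the n-bit strings
    with at most 2^i + 1 runs."""
    R = 2 * sum(math.comb(n - 1, j) for j in range(2**i + 1))
    return math.ceil(math.log2(R))

def directed_label_length(n):
    """Worst-case label length of the directed-graph scheme sketched above."""
    k = math.ceil(math.log2(n)) - 2
    ells = [L(n - k, i) for i in range(k)]                      # run-encoding tags
    delta = math.ceil(sum(k - 1 + l for l in ells) / (n - k))   # spreading overflow
    # A-vertices get ceil(lg n) + (n-k) = n+2 bits; B-vertices get n+1+delta bits.
    return max(n + 2, n + 1 + delta)

print(directed_label_length(100))
```

Since the exact $L(n-k,i)$ is no larger than the entropy bound, the computed label length never exceeds $n+4$ in the checked range.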
The $n+4$ bound of Theorem~\ref{thm:directed} can be improved to $n+3$. When $n$ is a power of~$2$, for example, this is easy. Note that in this case $2^{k-1}/n=\frac18$. As $\bar{H}(\frac18)<1.346$, we get that $\Delta\le 2$. Essentially the same calculation works if $n$ is close, from below, to a power of~$2$, as then $2^{k-1}/n$ is not much larger than $\frac18$. To get the $n+3$ bound for all sufficiently large values of~$n$, some more work needs to be done. We need to use the slightly more economical way of encoding indices, described in Appendix~\ref{sec:indices}, and the modified spreading lemma of Appendix~\ref{sec:spread2}. The details can be found in Appendix~\ref{sec:directed2}. The results in this section are for directed graphs without self-loops. Directed graphs with self-loops could of course be handled by adding a single bit to each label. We defer the treatment of efficient decoding issues to Section~\ref{sec:efficiency}. \section{Undirected graphs}\label{sec:undirected} Our scheme for undirected graphs is slightly more complicated than the scheme for directed graphs, as we need to break the graph into more parts. The main ideas, however, are the same. We start with a simple $(\lfloor n/2\rfloor +\lceil\lg n\rceil)$-bit scheme for $n$-vertex undirected graphs, which is implicit in Moon~\cite{moon1965minimal}. \begin{theorem}\label{thm:moon}\mbox{\rm [Moon\cite{moon1965minimal}]} For any $n\ge 1$, there is a labeling scheme that receives an $n$-vertex undirected graph $G=(V,E)$, with distinct indices $ind(u)\in[n]$ assigned to its vertices, and assigns each vertex an $\lfloor n/2\rfloor$-bit adjacency information tag $adj(u)$. For every two vertices $u,v\in V$, given $(ind(u),adj(u))$ and $(ind(v),adj(v))$ it is possible to determine whether $(u,v)\in E$. \end{theorem} \begin{proof} Let $u_i\in V$ be the vertex for which $ind(u_i)=i$. Let $A=(a_{i,j})$ be the adjacency matrix of the graph where the $i$-th row and column correspond to~$u_i$.
The tag $adj(u_i)$ is composed of the $\lfloor n/2\rfloor$-bit string $a_{i,i+1},a_{i,i+2},\ldots,a_{i,i+\lfloor n/2\rfloor}$, where the addition in the second index is modulo~$n$. This corresponds to arranging the vertices $u_0,u_1,\ldots,u_{n-1}$ in a circle, with each vertex remembering its adjacencies to the $\lfloor n/2\rfloor$ vertices following it in the circle. Given $(ind(u),adj(u))$ and $(ind(v),adj(v))$ we can easily determine whether $(u,v)\in E$. If $ind(v)-ind(u) \le \lfloor n/2\rfloor$, the answer is $adj(u)[ind(v)-ind(u)]$; otherwise, it is $adj(v)[ind(u)-ind(v)]$, where the subtractions $ind(v)-ind(u)$ and $ind(u)-ind(v)$ are interpreted modulo~$n$. \end{proof} We note that when~$n$ is even, there is slight redundancy in the scheme just described, as the adjacency bit $a_{i,i+n/2}$, for every $i\in[n]$, is stored twice. We exploit that later to fine-tune our results. Theorem~\ref{thm:moon} yields, of course, an $(\floor{n/2}+\lceil\lg n\rceil)$-bit labeling scheme. Using our techniques, we can reduce the size of the labels to $\lfloor n/2\rfloor +6$. \begin{theorem}\label{thm:undirected} For any $n\ge 400$, there is an adjacency labeling scheme for $n$-vertex undirected graphs that assigns each vertex an $(\lfloor n/2\rfloor +6)$-bit label. \end{theorem} \begin{proof} Let $G=(V,E)$ be an undirected graph where $V=[n]$. We partition~$V$ into four disjoint sets $A_0,A_1,B_0$ and $B_1$ where $|A_0|=|A_1|=k=\lceil\lg n\rceil - 3$, $|B_0|=\lceil\frac{n}{2}\rceil-k$ and $|B_1|=\lfloor\frac{n}{2}\rfloor-k$. For concreteness, we let $A_0=[0,k)$, $B_0=[k,\lceil\frac{n}{2}\rceil)$, $A_1=[\lceil\frac{n}{2}\rceil,\lceil\frac{n}{2}\rceil+k)$ and $B_1=[\lceil\frac{n}{2}\rceil+k,n)$. We partition~$G$ into the disjoint union of the four bipartite graphs $G[A_0,B_0], G[A_0,B_1], G[A_1,B_0], G[A_1,B_1]$ and the two undirected graphs $G[A_0\cup A_1]$ and $G[B_0\cup B_1]$. We assign arbitrary distinct indices to the vertices of $A_0$.
For concreteness, we let $ind'(i)=i$, for every $i\in A_0$. Similarly, we let $ind'(i)=i-\lceil\frac{n}{2}\rceil$, for every $i\in A_1$. We now use Lemma~\ref{lem:bipartite} to encode $G[A_0,B_0]$ and $G[A_1,B_1]$. This assigns distinct indices $ind'(j)\in [\lceil\frac{n}{2}\rceil-k\,]$ to all vertices $j\in B_0$, and distinct indices $ind'(j)\in [\lfloor\frac{n}{2}\rfloor-k\,]$ to all vertices $j\in B_1$. We define distinct indices $ind(u)\in [n]$ to all vertices of~$V$ as follows. If $u\in A_0$, then $ind(u)=ind'(u)$. If $u\in B_0$, then $ind(u)=ind'(u)+k$. If $u\in A_1$, then $ind(u)=ind'(u)+\lceil\frac{n}{2}\rceil$. Finally, if $u\in B_1$, then $ind(u)=ind'(u)+\lceil\frac{n}{2}\rceil+k$. The labeling scheme of Lemma~\ref{lem:bipartite} also assigns the $i$-th vertices of~$A_0$ and $A_1$ an $\ell_i$-bit tag, where $\ell_i=L(\lceil\frac{n}{2}\rceil-k,i)\le L(\lfloor \frac{n}{2}\rfloor,i)$. (We refrain from explicitly naming the tags.) To compensate for the $\ell_i$ bits assigned to the $i$-th vertex of~$A_0$ and the $i$-th vertex of~$A_1$, and to leave room for the representation of $G[A_0\cup A_1]$, we use Lemma~\ref{lem:spread} to represent $G[A_0,B_1]$ and $G[A_1,B_0]$, with $\ell'_i=k+\ell_i$, for $i\in [k]$. It is easy to verify that $\ell'_i\le \lfloor\frac{n}{2}\rfloor-k$, for $i\in[k]$, as required by Lemma~\ref{lem:spread}. The $i$-th vertices of~$A_0$ and $A_1$ thus get tags composed of $(\lceil\frac{n}{2}\rceil-k)-\ell'_i$ bits, and each vertex of $B_0\cup B_1$ gets a tag composed of $\Delta = \lceil (\sum_{i=0}^{k-1} (k+\ell_i))/(\lfloor \frac{n}{2}\rfloor-k) \rceil$ bits. (Tags are padded, if necessary.) Finally, we use the simple labeling scheme of Theorem~\ref{thm:moon} to represent $G[A_0\cup A_1]$ and $G[B_0\cup B_1]$. We again use the indices already assigned to the vertices. Each vertex of $A_0\cup A_1$ is thus assigned a $k$-bit tag, while each vertex of $B_0\cup B_1$ is assigned a $(\lfloor\frac{n}{2}\rfloor-k)$-bit tag.
As in the proof of Theorem~\ref{thm:directed}, the label assigned to a vertex is the concatenation of the binary representation of its index, and the tags assigned to it for each part of the graph it participates in. The $i$-th vertices of $A_0$ and $A_1$ are thus assigned a label of length \[ \lceil\lg n\rceil \;+\; \ell_i \;+\; \left(\left(\left\lceil\frac{n}{2}\right\rceil-k\right)-(k+\ell_i)\right) \;+\; k \;=\; \left\lceil\frac{n}{2}\right\rceil+3 \;. \] Each vertex of $B_0\cup B_1$ is assigned a label of length \[ \lceil\lg n\rceil \;+\; \Delta \;+\; \left(\left\lfloor\frac{n}{2}\right\rfloor-k\right) \;=\; \left\lfloor\frac{n}{2}\right\rfloor+3+\Delta \;. \] Now, as \[\ell_i \;\le\; L\left(\left\lceil\frac{n}{2}\right\rceil-k,i\right)\;\le\; L\left(\left\lfloor \frac{n}{2} \right\rfloor,i\right) \;\le\; H\left(\frac{2^i}{n/2}\right)\frac{n}{2}+2 \;\le\; H\left(\frac{2^{i+1}}{n}\right)\frac{n}{2}+2\;,\] we have \[ \Delta \;=\; \left\lceil \frac{\sum_{i=0}^{k-1}(k+\ell_i)}{\lfloor \frac{n}{2}\rfloor-k} \right\rceil \;\le\; \left\lceil \frac{k(k+2) + \frac{n}{2}\sum_{i=0}^{k-1} H(2^{i+1}/n)}{\lfloor \frac{n}{2}\rfloor-k} \right\rceil \;\le\; \left\lceil \frac{k(k+2)}{\lfloor \frac{n}{2}\rfloor-k} + \frac{\frac{n}{2}}{{\lfloor \frac{n}{2}\rfloor-k}}\bar{H}(2^{k}/n) \right\rceil \;. \] As $k=\lceil\lg n\rceil-3$, we have $2^{k}/n\le\frac{1}{4}$, and thus $\bar{H}(2^{k}/n)\le \bar{H}(\frac14)< 2.16$. It is not difficult to verify that for $n\ge 400$ we have $\frac{k(k+2)}{\lfloor\frac{n}{2}\rfloor-k}<0.5$ and $\frac{\frac{n}{2}}{\lfloor\frac{n}{2}\rfloor-k}\bar{H}(\frac14)<2.5$, and thus $\Delta\le 3$. Each vertex is therefore assigned a label of at most $\lfloor\frac{n}{2}\rfloor+6$ bits. Given the labels of two vertices it is possible to decide whether they are adjacent or not. \end{proof} A different approach that can be used to prove Theorem~\ref{thm:undirected} is the following. 
We partition the vertex set $V=[n]$ into three sets $A,B$ and $C$, where $|A|=k$, $|B|=\lceil\frac{n-k}{2}\rceil$ and $|C|=\lfloor\frac{n-k}{2}\rfloor$. We partition the graph $G=(V,E)$ into $G[A,B],G[A,C],G[B,C],G[A],G[B]$ and $G[C]$. We use recursion to assign indices and tags to~$G[C]$. We use Lemma~\ref{lem:bipartite} to assign indices and tags to $G[A,B]$. Once all indices are assigned, we use Lemma~\ref{lem:spread} to assign tags to $G[A,C]$. We use a simple scheme for balanced bipartite graphs to assign tags to $G[B,C]$ (see Theorem~\ref{thm:balanced} below). Finally, we use Moon's scheme (Theorem~\ref{thm:moon}) to assign tags to $G[A]$ and~$G[B]$. The lengths of the labels produced seem to be essentially the same as those produced in the proof of Theorem~\ref{thm:undirected}. An improved $(\ceil{n/2}+4)$-bit labeling scheme for $n$-vertex undirected graphs can be found in Appendix~\ref{sec:undirected2}. \section{Tournaments}\label{sec:tournaments} A \emph{tournament} is a directed graph $G=(V,E)$ in which every two vertices are connected by an edge in one of the possible directions, i.e., for every $u\ne v\in V$, either $(u,v)\in E$ or $(v,u)\in E$, but not both. There is a trivial correspondence between tournaments on $V=[n]$ and undirected graphs on $V=[n]$. Given a tournament $G=(V,E)$, we can construct an undirected graph $G'=(V,E')$ where $E'=\{\{u,v\} \mid (u,v)\in E \text{ and } u<v\}$. Conversely, given an undirected graph $G'=(V,E')$, we can construct a tournament $G=(V,E)$ where $E=\{ (u,v) \mid (\{u,v\}\in E' \text{ and } u<v) \text{ or } (\{u,v\}\not\in E' \text{ and } u>v)\}$. It is thus tempting to claim that any labeling scheme for undirected graphs can also be used as a labeling scheme for tournaments, and vice versa. This, however, is not necessarily the case. The problem is that to check whether $u<v$ the vertices need to know their original indices.
In our labeling scheme for undirected graphs the labels of the vertices do not retain this information. However, even though our labeling scheme for undirected graphs assigns new indices to the vertices, it does so in a way that can still be used to represent tournaments. Recall that the labeling scheme partitions~$V$ into four disjoint sets $A_0,A_1,B_0$ and $B_1$. The scheme keeps the original indices of the vertices of~$A_0\cup A_1$ but permutes the indices of the vertices of~$B_0$ and those of~$B_1$. However, these two permutations depend only on $G[A_0,B_0]$ and $G[A_1,B_1]$. To assign labels to a tournament $G=(V,E)$ on $V=[n]$, we first partition~$V$ into $A_0,A_1,B_0$ and~$B_1$ as done by the labeling scheme for undirected graphs. We assume, without loss of generality, that $A_0=\{0,1,\ldots,k-1\}$, $B_0=\{k,\ldots,\lceil\frac{n}{2}\rceil-1\}, A_1=\{\lceil\frac{n}{2}\rceil,\ldots,\lceil\frac{n}{2}\rceil+k-1\}$ and $B_1=\{\lceil\frac{n}{2}\rceil+k,\ldots,n-1\}$. We next generate the undirected graph $G'=(V,E')$ corresponding to the tournament~$G$ as above, i.e., $E'=\{\{u,v\} \mid (u,v)\in E \text{ and } u<v\}$. We now apply the labeling scheme for undirected graphs on $G'[A_0,B_0]\cup G'[A_1,B_1]$. Let $ind(u)$ denote the new index assigned to vertex $u\in V$. We may assume that $ind(u_0)<ind(v_0)<ind(u_1)<ind(v_1)$ for every $u_0\in A_0$, $v_0\in B_0$, $u_1\in A_1$ and $v_1\in B_1$. We now generate a second undirected graph $G''=(V,E'')$, where $E''=\{\{u,v\} \mid (u,v)\in E \text{ and } ind(u)<ind(v)\}$, and use the scheme for undirected graphs to assign labels to the vertices of~$G''$. It is not difficult to check that the indices assigned to the vertices are the same as those assigned by the first application of the labeling scheme. Thus, given the labels of two vertices in~$G''$ we can determine whether they are adjacent in~$G''$. Using their indices we can then determine the direction of the edge in the tournament~$G$.
We thus have: \begin{theorem}\label{thm:tournament} For any $n\ge 400$, there is an adjacency labeling scheme for $n$-vertex tournaments that assigns each vertex an $(\lfloor n/2\rfloor +6)$-bit label. \end{theorem} The $\lfloor n/2\rfloor +6$ bound can again be improved to $\ceil{n/2}+4$ using the labeling scheme of Theorem~\ref{thm:undirected2}. \section{Bipartite graphs}\label{sec:bipartite} In this section we design an almost optimal $(\frac{n}{4}+O(1))$-bit adjacency labeling scheme for bipartite graphs. In addition to the ideas of the previous sections, a new idea is used to obtain the result. The following theorem follows easily from Lemma~\ref{lem:spread} (spreading). The proof is deferred to Appendix~\ref{sec:bipartite2}. \begin{theorem}\label{thm:biased} For every~$0\le r<\frac{n}{2}$, there is a labeling scheme for $(\frac{n}{2}-r,\frac{n}{2}+r)$-bipartite graphs, with distinct indices attached to their vertices, that assigns each vertex an $\lceil\frac{n}{4}-\frac{r^2}{n}\rceil$-bit tag. Given the indices and tags of two vertices, and given~$r$, it is possible to determine whether the two vertices are adjacent. \end{theorem} The challenge is again to absorb the $\lceil\lg n\rceil$ index bits, and to do so in a way that works simultaneously for all values of the bias~$r$. If~$r$ is not known in advance, we can add a $\lceil\lg n\rceil$-bit encoding of it to the labels of the vertices. (As we only need to reconstruct~$r$ from the labels of two vertices from opposing sides, $\lceil\frac12\lg n\rceil$ bits are actually enough, but this would not matter.) If $r\ge \sqrt{2n\lg n}$, then as $\frac{n}{4}-\frac{r^2}{n} \le \frac{n}{4}-2\lg n$, we can easily absorb the $2\lceil\lg n\rceil$ bits used to represent~$r$ and the index of each vertex and still obtain labels of size at most $\frac{n}{4}$. As expected, the difficult task is handling bipartite graphs that are almost balanced, i.e., $r<\sqrt{2n\lg n}$.
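The absorption argument for $r\ge\sqrt{2n\lg n}$ is a one-line calculation: $r^2/n\ge 2\lg n$, so the tag of Theorem~\ref{thm:biased} shrinks by enough to pay for the $2\lceil\lg n\rceil$ extra bits. A quick numeric sketch (ours; the function name is not from the text, and we allow an $O(1)$ slack for rounding):

```python
import math

def biased_tag(n, r):
    """Tag length of the biased-bipartite scheme: ceil(n/4 - r^2/n)."""
    return math.ceil(n / 4 - r * r / n)

# For r >= sqrt(2 n lg n), the saving r^2/n >= 2 lg n pays for the
# 2*ceil(lg n) bits encoding the index and the bias r, up to O(1) rounding.
for n in (10**3, 10**4, 10**5, 10**6):
    r = math.ceil(math.sqrt(2 * n * math.log2(n)))
    assert biased_tag(n, r) <= n / 4 - 2 * math.log2(n) + 1
    assert biased_tag(n, r) + 2 * math.ceil(math.log2(n)) <= n / 4 + 3
```

The same inequality holds for any $r$ above the $\sqrt{2n\lg n}$ threshold, since the tag length only decreases as $r$ grows.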
We begin by designing an adjacency labeling scheme for perfectly \emph{balanced} bipartite graphs. The proof of the following theorem is similar to the proofs of Theorems~\ref{thm:directed} and~\ref{thm:undirected}, though the graph has to be broken into yet more parts. The proof can be found in Appendix~\ref{sec:bipartite2}. \begin{theorem}\label{thm:balanced} There is an adjacency labeling scheme for $(\frac{n}{2},\frac{n}{2})$-bipartite graphs that assigns each vertex an $(\frac{n}{4}+O(1))$-bit label. The label of each vertex is composed of a distinct index from $[n]$, and an $(\frac{n}{4}-\lg n + O(1))$-bit tag. \end{theorem} To obtain an $(\frac{n}{4}+O(1))$-bit scheme for all bipartite graphs, we design a scheme for almost balanced bipartite graphs in which most vertices do not need to know the bias~$r$. \begin{theorem}\label{thm:bipartite} There is an adjacency labeling scheme for $n$-vertex bipartite graphs that assigns each vertex an $(\frac{n}{4} +O(1))$-bit label. The label of each vertex is composed of a distinct index from $[n]$, and an $(\frac{n}{4}-\lg n + O(1))$-bit tag. \end{theorem} \begin{proof} As explained after Theorem~\ref{thm:biased}, there is a simple $(\frac{n}{4}+O(1))$-bit scheme for all $(\frac{n}{2}-r,\frac{n}{2}+r)$-bipartite graphs, where $r\ge\sqrt{2n\lg n}$. We design a new $(\frac{n}{4}+O(1))$-bit scheme for all $(\frac{n}{2}-r,\frac{n}{2}+r)$-bipartite graphs, where $r<\sqrt{2n\lg n}$. By combining the two schemes, we obtain an $(\frac{n}{4}+O(1))$-bit scheme for all bipartite graphs. (The first bit of each label indicates whether the first or second scheme is used.) As we have an $O(1)$ term in the statement of the theorem, and not a specific constant, we allow ourselves to ignore divisibility and integrality issues and avoid the use of ceilings and floors. Let $R=n^{4/5}$. Let $G=(U,V,E)$ be a $(\frac{n}{2}-r,\frac{n}{2}+r)$-bipartite graph, where $r<\sqrt{2n\lg n}$. Note, in particular, that $r\le \frac{2R^2}{n}=2n^{3/5}$.
Partition~$U$ into a set~$U_0$ of size $\frac{n}{2}-R$ and a set~$U_1$ of size $R-r$. Similarly, partition~$V$ into a set~$V_0$ of size $\frac{n}{2}-R$ and a set~$V_1$ of size $R+r$. We view the vertices of $U_0$ and $V_0$ as ordinary, and the vertices of~$U_1$ and~$V_1$ as special. The graph~$G$ is thus partitioned into the disjoint union of the four bipartite graphs $G[U_0,V_0], G[U_0,V_1],G[U_1,V_0]$ and $G[U_1,V_1]$. The main idea is to assign the ordinary vertices of $U_0\cup V_0$ labels that do not depend on~$r$. The labels of the special vertices of $U_1\cup V_1$ would contain an encoding of~$r$, but as they form only a negligible fraction of all vertices, this could be `smoothed' out. We start by encoding $G[U_0,V_0]$ using the scheme of Theorem~\ref{thm:balanced}. Each vertex of $U_0\cup V_0$ gets a distinct index in $[n-2R]$ and an $(\frac{n}{4}-\frac{R}{2}+O(1))$-bit tag. (The label of each vertex includes an encoding of its index.) We assign the vertices of $U_1\cup V_1$ distinct indices from~$[n-2R,n)$. We next use the spreading technique of Lemma~\ref{lem:spread} to encode $G[U_1,V_0]$. We find it more informative to redo the relevant calculations here. We need to split the $(R-r)(\frac{n}{2}-R)$ bits describing the adjacencies in $G[U_1,V_0]$ between the vertices of~$U_1$ and~$V_0$. As the tag of each vertex of $V_0$ is already of size $\frac{n}{4}-\frac{R}{2}+O(1)$, and as we want the tag of each vertex of~$V_0$ to be of size $\frac{n}{4}+O(1)$, each vertex of~$V_0$ gets~$\frac{R}{2}$ of these bits. (As $|U_1|=R-r$, this corresponds to applying Lemma~\ref{lem:spread} with $\ell_i=\frac{R}{2}-r$, for every $i\in [\frac{n}{2}-R]$, on $G[V_0,U_1]$. Note that the sides here are reversed.) The number of bits each vertex of~$U_1$ receives is thus \[ a \;=\; \frac{(R-r)(\frac{n}{2}-R)-\frac{R}{2}(\frac{n}{2}-R)}{R-r} \;=\; \frac{(\frac{n}{2}-R)(\frac{R}{2}-r)}{R-r}\;. \] (Note that~$a$ corresponds to~$L$ of Lemma~\ref{lem:spread}.) 
The $\frac{R}{2}$ bits that each vertex of~$V_0$ gets are appended to its tag. Vertices of $V_0$ do not know the meaning of these bits, as they do not know~$r$, but the vertices of~$U_1$ do, as they will know~$r$. Similarly, each vertex of $U_0$ gets $\frac{R}{2}$ additional bits, and the number of bits left for each vertex of~$V_1$ is \[ b \;=\; \frac{(R+r)(\frac{n}{2}-R)-\frac{R}{2}(\frac{n}{2}-R)}{R+r} \;=\; \frac{(\frac{n}{2}-R)(\frac{R}{2}+r)}{R+r}\;. \] Next, we verify that $b\le \frac{n}{4}$ if and only if $r\le\frac{2R^2}{n-4R}$. As we assumed that $r\le \frac{2R^2}{n}<\frac{2R^2}{n-4R}$, this condition is satisfied. It can also be verified that $a\le \frac{n}{4}$ for every $r<R$. (To see this check that if $r=0$, then $a=\frac{n}{4}-\frac{R}{2}$, and that~$a$ is a decreasing function of~$r$ for $0\le r<R$, as the derivative of~$a$ with respect to~$r$ is $-\frac{R(\frac{n}{2}-R)}{2(R-r)^2}$.) We still need to represent $G[U_1,V_1]$ by splitting the corresponding adjacency bits between the vertices of~$U_1$ and~$V_1$. We again use the spreading technique of Lemma~\ref{lem:spread}. Overall, there are $(R-r)(R+r)=R^2-r^2$ such adjacency bits. We need to verify that we can accommodate them without any vertex of~$U_1$ and $V_1$ getting more than $\frac{n}{4}$ bits overall. A simple `volume' argument can be used to show that we still have enough space in the tags of the vertices of~$U_1$ and $V_1$. More specifically, we know that all adjacencies between $U_0\cup U_1$ and $V_0\cup V_1$ can be encoded using at most~$\frac{n}{4}$ bits per vertex. As each vertex of $U_0$ and $V_0$ already has $\frac{n}{4}$ bits, and as all adjacencies between $U_0$ and $V_0$, $U_0$ and $V_1$, and~$U_1$ and $V_0$ were encoded, there is enough room left in the tags of~$U_1$ and $V_1$ to encode the adjacencies between these two sets. We can also verify it using a simple direct calculation.
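The bookkeeping in the last two displays can also be double-checked numerically. A small sketch (with $R=n^{4/5}$ as in the proof; the sample values of $n$ and $r$ are ours, chosen for illustration):

```python
from fractions import Fraction as F  # exact rational arithmetic

def leftover(n, r):
    """Exact values of a and b from the proof, for R = n^{4/5}."""
    R = round(n ** 0.8)
    half = F(n, 2) - R
    a = half * (F(R, 2) - r) / (R - r)  # bits left per vertex of U_1
    b = half * (F(R, 2) + r) / (R + r)  # bits left per vertex of V_1
    return R, a, b

n = 10 ** 5
for r in [0, 10, 1000]:  # all well below 2R^2/n = 2n^{3/5}
    R, a, b = leftover(n, r)
    assert a <= F(n, 4) and b <= F(n, 4)
    # Bits used so far by the vertices of U_1 and V_1 together:
    assert (R - r) * a + (R + r) * b == (F(n, 2) - R) * R
    # Slack against the total capacity 2R * n/4 is R^2, enough for the
    # R^2 - r^2 adjacency bits of G[U_1, V_1]:
    assert F(R * n, 2) - (F(n, 2) - R) * R == R * R >= R * R - r * r
```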
The total number of bits currently used by vertices of~$U_1$ and $V_1$ is $(R-r)a+(R+r)b=(\frac{n}{2}-R)R$. The total capacity of these vertices is $2R\cdot \frac{n}{4}=\frac{Rn}{2}$, and $\frac{Rn}{2}-(\frac{n}{2}-R)R = R^2 > R^2-r^2$. Thus, there is indeed enough space. One problem still remains. The label of each vertex of $U_1\cup V_1$ should also contain $2\lg n$ bits specifying the index of the vertex and~$r$. Thus, while the labels of all vertices of $U_0\cup V_0$ are all of size $\frac{n}{4}+O(1)$, the labels of the vertices of $U_1\cup V_1$ are currently of size $\frac{n}{4}+2\lg n+O(1)$. This can be easily fixed, however, by persuading each vertex of $U_0$ and $V_0$ to hold one more adjacency bit to $V_1$ and $U_1$, respectively. The number of bits in the labels of~$U_1$ and $V_1$ decreases by $\frac{(\frac{n}{2}-R)}{R+r}\gg 2\lg n$, leaving more than enough room in the label of each vertex to store its index and~$r$. Finally, given the labels of two vertices, it can be determined whether they are adjacent. \end{proof} \section{Efficient decoding}\label{sec:efficiency} In this section we show that the schemes of the preceding sections could be modified so that two vertices need to exchange only $O(\lg n)$ bits of information between them, in a constant number of communication rounds, and spend only $O(1)$ computation time, to decide whether they are adjacent or not. For concreteness, we consider the case of directed graphs. The same ideas apply to all our schemes. Note that this is easily achieved using the simple $(n+\lceil\lg n\rceil-1)$-bit scheme. Consider a distributed setting in which each vertex of the graph is a RAM machine. The label of each vertex is stored in its internal random access memory, assumed to be composed of $w$-bit words, where $w\ge\lceil\lg n\rceil$. In particular, the index of a vertex resides in the first word used to represent its label. 
In the simple $(n+\lceil\lg n\rceil-1)$-bit scheme, to determine whether there is an edge from~$u$ to~$v$, $v$ sends to~$u$ its $\lceil\lg n\rceil$-bit index. Vertex~$u$ can then access the appropriate adjacency bit in its tag in $O(1)$ time. Our goal is to show that something similar could also be done using our schemes. (Note that when labels are stored in $\lceil\lg n\rceil$-bit words, our improved schemes usually save one memory word.) To decode our $(n+O(1))$-bit scheme in $O(1)$ time, we need to overcome two obstacles. First, we need to be able to decode the succinct run length encoding used in Lemma~\ref{lem:bipartite} in constant time. Second, we need to be able to keep track, in constant time, of the bit movements performed by the spreading lemma (Lemma~\ref{lem:spread}). To solve the first problem we use the following result. \begin{theorem}\mbox{\rm [P{\v a}tra{\c s}cu \cite{patrascu08succinct}]}\label{thm:Patrascu} On a RAM with $\Omega(\lg n)$-bit words, a Boolean array $A[0\ldots n-1]$ containing $k$ ones and $n-k$ zeros can be represented using $\lg {n \choose k } + \frac{n}{\lg^t(n / t)} + \tilde O(n^{3/4})$ bits of memory, supporting \emph{rank} and \emph{select} queries in $O(t)$ time. \end{theorem} A $rank(i)$ query, where $i\in[n]$, asks for the number of 1s in $A[0\ldots i]$. A $select(i)$ query requests the index of the $i$-th 1 in the array. We only need $rank$ queries. Theorem~\ref{thm:Patrascu} assumes that the number of 1s in the array is exactly~$k$. However, it is not difficult to extend the result to the case in which the array contains at most~$k$ 1s. Perhaps the simplest way of doing it is to add $\lceil\lg n\rceil$ bits, which are absorbed in the $\tilde O(n^{3/4})$ term, to encode the actual number of 1s. As we saw in the proof of Lemma~\ref{lem:L}, we can represent an $n$-bit string by its first bit and the end positions of its runs.
Thus, we can represent an $n$-bit string composed of at most~$r$ runs using its first bit and an $n$-bit string containing at most~$r$ 1s. The first bit of the string and the parity of $rank(i)$ would then tell us whether the $i$-th bit of the string is a~0 or a~1. Note that the $\lg {n \choose k}$ term in Theorem~\ref{thm:Patrascu} is the information theoretic lower bound, which essentially corresponds to our function $L(n,i)$, when $k=2^i$. The price paid for the efficient decoding is the additive ${n}/{\lg^t(n / t)} + \tilde O(n^{3/4})$ term. If we use $t=2$, then the number of bits lost is only $O(n/\lg^2 n)$. We need to encode about $\lg n$ sparse arrays, with the $i$-th one of them containing at most $2^i$ 1s. Thus the total number of bits lost in all these encodings is only $O(n/\lg n)$. We can easily compensate for these $O(n/\lg n)$ additional bits by slightly adjusting the parameters used in the application of the spreading lemma. (More specifically, we let $\ell_i = \lg {n \choose 2^i} + \frac{n}{\lg^2 n} + \tilde O(n^{3/4})$, instead of $\ell_i=L(n,i)$.) As the $O(n/\lg n)$ additional bits are spread over almost~$n$ tags, each tag acquires at most one additional bit. We next consider the efficient decoding of tags produced using the spreading lemma (Lemma~\ref{lem:spread}). We use the spreading lemma in two different ways. In some applications, all the $\ell_i$'s are equal. In others, the $\ell_i$'s differ, but $k\le\lg n$. If $\ell_i=\ell$, for every $i\in [k]$, the bit movements performed are regular, and we can easily determine in constant time the location of each adjacency bit. (Note, in particular, that in the proof of Lemma~\ref{lem:spread} we simply have $\bar{s}_i = i\ell$.) Also, $\ell$ can be deduced from the label. In the other case, we simply add encodings of $\bar{s}_i$ and $\ell_i$ to the appropriate labels. The extra $2\lg n$ bits added are again absorbed in the $ \tilde O(n^{3/4})$ term of the $k\le\lg n$ corresponding vertices.
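To make the rank-based decoding of run-length encodings concrete, here is a toy Python sketch (the names are ours, and the linear-time `sum` stands in for the $O(t)$-time rank structure of Theorem~\ref{thm:Patrascu}):

```python
def run_ends(bits):
    # Sparse 0/1 array marking the last position of each run
    # (the final run needs no marker).
    n = len(bits)
    return [1 if i + 1 < n and bits[i] != bits[i + 1] else 0 for i in range(n)]

def query(first_bit, ends, i):
    # bits[i] = first_bit XOR parity of the number of runs ending before i.
    rank = sum(ends[:i])  # a rank structure answers this in O(t) time
    return first_bit ^ (rank & 1)

bits = [0, 0, 1, 1, 1, 0, 1]
ends = run_ends(bits)
assert all(query(bits[0], ends, i) == bits[i] for i in range(len(bits)))
```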
The decoding can then again be made in constant time. \section{Induced-universal graphs}\label{sec:universal} As observed by Kannan {\em et al.}~\cite{KNR92}, an $L$-bit adjacency labeling scheme for a family ${\cal F}_n$ yields immediately a $2^L$-vertex induced-universal graph for~${\cal F}_n$. Thus, using Theorem~\ref{thm:undirected} we obtain, in particular, an induced-universal graph for $n$-vertex undirected graphs containing only $O(2^{n/2})$ vertices, resolving the open problem of Moon~\cite{moon1965minimal} and Vizing~\cite{vizing1968some}. \section{Lower bounds}\label{sec:lower} Previous lower bounds on the label sizes assume that labels of different vertices are distinct. We increase the lower bounds by~$1$ without relying on this assumption. For \emph{indexing} adjacency labeling schemes, we increase the lower bounds by~$2$. Our basic lower bounds follow from the following obvious lemma. \begin{lemma}\label{lem:injective} If $(\mbox{\it Label},\mbox{\it Edge})$ is an adjacency labeling scheme for~${\cal F}_n$, then $\mbox{\it Label}$ is injective, i.e., for every $G\ne G'\in {\cal F}_n$ we have $\mbox{\it Label}(G)\ne\mbox{\it Label}(G')$. \end{lemma} \begin{proof} Let $G=(V,E),G'=(V,E')\in{\cal F}_n$. If $\mbox{\it Label}(G)=\mbox{\it Label}(G')$, then for every $u,v\in V$ we have \[\mbox{\it Edge}(\mbox{\it Label}(G)(u),\mbox{\it Label}(G)(v)) \;=\; \mbox{\it Edge}(\mbox{\it Label}(G')(u),\mbox{\it Label}(G')(v))\;.\] Hence $(u,v)\in E$ if and only if $(u,v)\in E'$ and thus $G=G'$. \end{proof} \begin{theorem}\label{thm:lower1} If there is an $L$-bit adjacency labeling scheme for ${\cal F}_n$, then $L> \frac{1}{n}\lg|{\cal F}_n|$. \end{theorem} \begin{proof} Suppose that $(\mbox{\it Label},\mbox{\it Edge})$ is a labeling scheme for ${\cal F}_n$. By Lemma~\ref{lem:injective}, $\mbox{\it Label}$ is injective and thus $|{\cal F}_n|\le 2^{nL}$. This immediately implies that $L\ge \frac{1}{n}\lg|{\cal F}_n|$.
To show that the inequality is strict, we need to show that there is at least one ordered tuple of labels that cannot be produced by $\mbox{\it Label}$. Consider the $2^L$ tuples composed of~$n$ identical labels. Each such tuple may only correspond to the empty graph on~$n$ vertices or to the clique on~$n$ vertices. Thus, at least $2^L-2$ of these tuples are not produced by the labeling scheme. Hence $|{\cal F}_n|< 2^{nL}$ and thus $L> \frac{1}{n}\lg|{\cal F}_n|$. \end{proof} Note that in Theorem~\ref{thm:lower1}, $|{\cal F}_n|$ denotes the number of \emph{named} graphs from~${\cal F}_n$, i.e., graphs of ${\cal F}_n$ on~$[n]$. Graphs with different names are considered different even if they are isomorphic. We let $\overline{{\cal F}}_n$ be the set of isomorphism classes of graphs from~${\cal F}_n$. If the labeling scheme satisfies the distinctness assumption, then the condition $|{\cal F}_n|< 2^{nL}$ used in the proof of Theorem~\ref{thm:lower1} can be replaced by the slightly stronger inequality $|\overline{{\cal F}}_n|\le {2^L \choose n}$. (See, e.g., Alstrup and Rauhe \cite{alstruprauhe}.) (To see that this is a slightly stronger inequality, note that $\frac{|{\cal F}_n|}{n!} \le |\overline{{\cal F}}_n| \le {2^L \choose n} < \frac{2^{nL}}{n!}$.) However, as $L$ is an integer, the resulting lower bound on~$L$ is usually the same, even though a stronger assumption is made. We note in passing that, without relying on the distinctness assumption, we can get $|\overline{{\cal F}}_n|< \bigl(\!{2^L \choose n}\!\bigr)$, where $\bigl(\!{2^L \choose n}\!\bigr)={2^L+n-1 \choose n}$ is the number of multi-subsets of $[2^L]$ of size~$n$. In the proof of Theorem~\ref{thm:lower1}, we viewed $\mbox{\it Label}(G)$ as the ordered tuple $(\mbox{\it Label}(G)(0),\mbox{\it Label}(G)(1),\ldots,\allowbreak\mbox{\it Label}(G)(n{-}1))$.
We let $\overline{\mbox{\it Label}}(G)$ denote the corresponding (multi-)set $\{\mbox{\it Label}(G)(0),\mbox{\it Label}(G)(1),\ldots,\allowbreak\mbox{\it Label}(G)(n-1)\}$ in which the order of the labels is ignored. Analogously to Lemma~\ref{lem:injective}, we have the following lemma whose simple proof is omitted. \begin{lemma}\label{lem:injective2} If $(\mbox{\it Label},\mbox{\it Edge})$ is an adjacency labeling scheme for ${\cal F}_n$, then for every $G,G'\in {\cal F}_n$, if $G$ and~$G'$ are not isomorphic, then $\overline{\mbox{\it Label}}(G)\ne\overline{\mbox{\it Label}}(G')$. \end{lemma} Relying on Lemma~\ref{lem:injective2}, we get our second lower bound. \newcommand{{\cal L}}{{\cal L}} \begin{theorem}\label{thm:lower2} If there is an \emph{indexing} $L$-bit adjacency labeling scheme for ${\cal F}_n$, then $L\ge \frac{1}{n}\lg|{\cal F}_n| + \frac{1}{n}\lg\frac{n^n}{n!}$. For $n\ge 200$, we have $L> \frac{1}{n}\lg|{\cal F}_n| + 1.4$. \end{theorem} \begin{proof} Suppose that $(\mbox{\it Label},\mbox{\it Edge})$ is an indexing labeling scheme for ${\cal F}_n$ and let $\mbox{\it Ind}$ be an appropriate index function. Let ${\cal L}_i=\mbox{\it Ind}^{-1}(i)$, for $i\in [n]$. Note that $\sum_{i=0}^{n-1}|{\cal L}_i|=2^L$. For every graph $G\in {\cal F}_n$, we have $|\overline{\mbox{\it Label}}(G)\cap {\cal L}_i|=1$, for $i\in [n]$. Thus, the number of sets of labels is at most $\prod_{i=0}^{n-1}|{\cal L}_i|$. By Lemma~\ref{lem:injective2}, two non-isomorphic graphs must have distinct label sets. Thus \[ \frac{|{\cal F}_n|}{n!} \;\le\; |\overline{{\cal F}_n}| \;\le\; \prod_{i=0}^{n-1}|{\cal L}_i| \;\le\; \left(\frac{2^L}{n}\right)^n\;, \] or equivalently \[ L \;\ge\; \frac{1}{n}\lg\frac{|{\cal F}_n|n^n}{n!} \;=\; \frac{1}{n}\lg|{\cal F}_n| + \frac{1}{n}\lg\frac{n^n}{n!} \;. \] It is easy to verify that $\frac{1}{n}\lg\frac{n^n}{n!}$ is increasing in~$n$ and tends to $\lg{\rm e}=1.41695\ldots$ as $n\to\infty$.
(By Stirling's formula, $\frac{1}{n}\lg\frac{n^n}{n!} \sim \lg{\rm e}-\frac{\lg\sqrt{2\pi n}}{n}$.) It is also easy to verify that $\frac{1}{n}\lg\frac{n^n}{n!}> 1.4$ for $n\ge 200$. \end{proof} For directed graphs we have $\lg|{\cal F}_n|=n(n-1)$. For undirected graphs and tournaments we have $\lg|{\cal F}_n|={n\choose 2}$. Using Theorem~\ref{thm:lower1} and Theorem~\ref{thm:lower2} we get: \begin{corollary}\label{C-directed} If there is an $L$-bit adjacency labeling scheme for $n$-vertex \emph{directed} graphs, then $L\ge n$. If the labeling scheme is indexing, then $L\ge n+1$. \end{corollary} \begin{corollary}\label{C-undirected} If there is an $L$-bit adjacency labeling scheme for $n$-vertex \emph{undirected} graphs or for $n$-vertex \emph{tournaments}, then $L\ge \lceil\frac{n}{2}\rceil$. If the labeling scheme is indexing, then $L\ge \lceil\frac{n}{2}\rceil+1$. \end{corollary} Using a slightly more tedious counting we get the following lower bound for bipartite graphs. \begin{corollary}\label{C-bipartite} If there is an $L$-bit adjacency labeling scheme for $n$-vertex \emph{bipartite} graphs, then $L\ge \lceil\frac{n}{4}\rceil$. If the labeling scheme is indexing, then $L\ge \lceil\frac{n}{4}\rceil+1$. \end{corollary} \section{Concluding remarks}\label{sec:concl} We presented improved adjacency labeling schemes for directed, undirected and bipartite graphs. Our schemes are almost optimal. They give rise to almost optimal induced-universal graphs for these families of graphs. We also presented slightly improved lower bounds. Closing the small remaining gaps between our upper and lower bounds is an interesting open problem. An \emph{oriented} graph is a directed graph with no anti-parallel edges. We believe that using our techniques it is also possible to design an $(\frac{\lg 3}{2}n+O(1))$-bit adjacency labeling scheme for $n$-vertex oriented graphs. 
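A numeric sanity check of the additive term in Theorem~\ref{thm:lower2} (using Python's `math.lgamma` to evaluate $\lg n!$; the specific sample points are ours):

```python
import math

def extra(n):
    # (1/n) lg(n^n / n!) = lg n - lg(n!)/n, with lg(n!) = lgamma(n+1)/ln 2.
    return math.log2(n) - math.lgamma(n + 1) / (n * math.log(2))

assert extra(200) > 1.4                             # the bound in the theorem
vals = [extra(n) for n in (200, 10**3, 10**6)]
assert vals == sorted(vals)                         # increasing in n
assert abs(vals[-1] - math.log2(math.e)) < 0.01     # tends to lg e = 1.4169...
```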
We also believe that the techniques we used for bipartite graphs could also be used to design almost optimal schemes for other \emph{hereditary} families of graphs. (For more on hereditary families of graphs see Bollob{\'a}s and Thomason \cite{countbitp}.) \bibliographystyle{plain}
https://arxiv.org/abs/2109.05474
Combinatorial models for topological Reeb spaces
There are two rather distinct approaches to Morse theory nowadays: smooth and discrete. We propose to study a real-valued function by assembling all associated sections in a topological category. From this point of view, Reeb functions on stratified spaces are introduced, including both smooth and combinatorial examples. As a consequence of the simplicial approach taken, the theory comes with a spectral sequence for computing (generalized) homology. We also model the homotopy type of Reeb graphs/topological Reeb spaces as simplicial sets, which are combinatorial in nature, as opposed to the typical description in terms of quotient spaces.
\section{Introduction} Let~$X$ be a topological space and~$f\colon X \rightarrow {\mathbb{R}}$ a continuous function on it. A \emph{section}~$\sigma$ of~$f$ is a map~\hbox{$ [a,b] \rightarrow X$}, for some real numbers $a\leq b$, subject to~$f\circ \sigma (c)=c$. Two sections~$\sigma\colon [a,b]\rightarrow X$ and~$\rho\colon [b,c]\rightarrow X$, such that~$\sigma(b)=\rho(b)$, may be concatenated into a new section~$\rho\circ \sigma\colon [a,c]\rightarrow X$. This data defines the \emph{section category}~$\mathcal{S}_f$ associated to~$f$ which is in fact a topological category. The nerve construction thus provides a simplicial topological space~$\mathrm{N}\mathcal{S}_f$. We did not put any constraints on~$f$ as of yet. However, if the section category is to recover the homotopical information of~$X$ by realizing~$\mathrm{N}\mathcal{S}_f$, some assumptions are necessary. This should be considered motivation for the concept of \emph{Reeb functions} which requires~$f$ to be sufficiently `nice'. Examples include Morse functions on smooth manifolds and piecewise linear functions on CW complexes. I refer to Definition~\ref{definition:reebFunction} for a precise formulation. \begin{theorem} \label{intro:mainresult} For any Reeb function~$f\colon X \rightarrow {\mathbb{R}}$, the realization of the nerve of the section category of~$f$ is weakly equivalent to~$X$, that is~$X\simeq |\mathrm{N}\mathcal{S}_f|$. \end{theorem} Ralph L. Cohen, John D. S. Jones and Graeme B. Segal prove a similar result for Morse functions in~\cite{cohen1995morse} as an attempt to better understand homotopical aspects of Morse theory~\cite{cohen1995floer}. A purely combinatorial analogue can be found in~\cite{nanda2018discrete} which covers the discrete Morse theory of Robin Forman~\cite{forman1998morse}. Our work can thus be described as an attempt to find a common framework including both smooth and combinatorial examples. 
Any simplicial topological space comes with a spectral sequence for computing the generalized homology of its classifying space~\cite{segal1968classifying}. A shortcoming of the section category~$\mathcal{S}_f$ is that its classifying space is huge, hence nowhere near computationally feasible. Reeb functions provide a way to extract the essential information in~$\mathcal{S}_f$ into the much smaller \emph{critical subcategory}~${\mathcal{C}}_f$ whose classifying space has unchanged homotopy type when compared to~$\mathcal{S}_f$. Computing the homology of~$X$ via~${\mathcal{C}}_f$, as opposed to~$\mathcal{S}_f$, is analogous to how Morse and CW homology reduces the complexity of singular homology. I refer to Section~\ref{section:spectralsequence} for some basic algebraic properties together with a user-guide on how to carry out computations. Consider a continuous function~$f\colon X\rightarrow {\mathbb{R}}$ on a topological space~$X$. The~\emph{topological Reeb space}~$\mathrm{R}_f$, often referred to as the Reeb graph, was introduced by Georges H. Reeb in~\cite{reeb1946points} to study singularities. Later on it was popularized in computer graphics due to the work of Y. Shinagawa, T. Kunii and Y. Kergosien~\cite{shinagawa1991surface}. Since then there have been several applications in shape analysis~\cite{biasotti2008reeb}. This advertises the need to better understand combinatorial properties of the topological Reeb space~$\mathrm{R}_f$, commonly constructed as a certain quotient space of~$X$ depending on the extra data that is~$f$. I refer to categorified Reeb graphs~\cite{de2016categorified} and Mapper~\cite{singh2007topological} for related work. From the section category~$\mathcal{S}_f$ we define the \emph{combinatorial Reeb space} by first applying the nerve and then taking path components level-wise, yielding~$\pi_0 \mathrm{N}\mathcal{S}_f$. It is important to note that this construction is not a topological space, but rather a simplicial set.
To compare topological and combinatorial Reeb spaces we make use of the fact that topological spaces and simplicial sets carry the same homotopical information: we identify the homotopy type of a simplicial set~$S$ with that of its geometric realization~$|S|$. The combinatorial and topological Reeb spaces of~$f$ do not have the same homotopy type in general. But if we restrict ourselves to Reeb functions, then they do agree. \begin{theorem} \label{intro:combinatorialReebIsClassicalReeb} For any Reeb function~$f\colon X\rightarrow {\mathbb{R}}$, the simplicial set~$\pi_0\mathrm{N}\mathcal{S}_f$ has the same homotopy type as the topological space~$\mathrm{R}_f$; there is a zigzag of weak homotopy equivalences between~$|\pi_0\mathrm{N}\mathcal{S}_f|$ and~$\mathrm{R}_f$. \end{theorem} Topological Reeb spaces are not graphs in general (Example~\ref{example:ReebNotGraph}) and we might expect combinatorial Reeb spaces to have equally nasty homotopy types. But it turns out that combinatorial Reeb spaces are always weakly homotopic to graphs: \begin{theorem} \label{intro:combinatorialReebIsGraph} The combinatorial Reeb space of any continuous function has the homotopy type of a~$1$--dimensional CW complex. \end{theorem} \textbf{Outline.} Section~\ref{sec:reebfunctions} is all about Reeb functions~$f\colon X\rightarrow {\mathbb{R}}$. To better illustrate the theory we first restrict ourselves to functions on~$\mathrm{C}^1$--manifolds in Section~\ref{subsec:reebC1} before handling more general stratified spaces in Section~\ref{section:stratifiedSpaces}. Results that do not hinge upon any simplicial structure are proven along the way. In Section~\ref{section:sectioncategory} we formally define the topological section category associated to a continuous function as well as the critical subcategory and other intermediate subcategories. 
Some simplicial background is then provided in Section~\ref{section:moreSimplicialStuff} before proving Theorem~\ref{thm:mainresult}, which implies Theorem~\ref{intro:mainresult}. The spectral sequence associated to section categories, as well as critical subcategories, is discussed in Section~\ref{section:spectralsequence}. General algebraic properties are deduced in Section~\ref{subsec:sectionsequence}, whereas Section~\ref{subsec:criticalsequence} is concerned with how to use the critical subcategory for numerical computations. In the remaining Section~\ref{section:reeb} we introduce combinatorial Reeb spaces. More background on simplicial sets is presented in Section~\ref{section:thmA} before proving Theorems~\ref{intro:combinatorialReebIsClassicalReeb} and~\ref{intro:combinatorialReebIsGraph} in Section~\ref{subsec:combistop} and Section~\ref{subsec:combisgraph}, respectively. \textbf{Notation.} Categories of familiar objects are put inside parentheses, e.g.~$\text{(topological spaces)}$. The set of morphisms between objects~$x,y$ in a category is denoted~$\mathop{\rm Map}\nolimits(x,y)$. In the case of topological spaces~$\mathop{\rm map}\nolimits(X,Y)$ reads the topological space of continuous functions from~$X$ to~$Y$. The standard~$n$--simplex~$\Delta^n$ is modeled as the convex hull of the standard basis vectors in~${\mathbb{R}}^{n+1}$. The~$1$--simplex will also be represented as the unit interval~$I$. We denote by~$[n]$ the category generated by the directed graph \[ 0\rightarrow 1 \rightarrow \dots \rightarrow n \] on~$n$ arrows. In particular,~$[0]$ is the trivial one object category and~$[1]$ is the category on two objects~$0$ and~$1$ connected by a non-trivial arrow~$0\rightarrow 1$. \section{Reeb functions} \label{sec:reebfunctions} We shall clarify what it means for a function~$f\colon X \rightarrow {\mathbb{R}}$ to be a Reeb function. 
In this work, a stratified space is built out of~$\mathrm{C}^1$--manifolds, which will be covered more in depth later on. Hence we start out by restricting ourselves to the simplest spaces, namely the~$\mathrm{C}^1$--manifolds, in Section~\ref{subsec:reebC1}. Thereafter we move on to the more general stratified spaces in Section~\ref{section:stratifiedSpaces}. The final Proposition~\ref{proposition:modifiedFlowLines} is utilized many times throughout the paper. \subsection{Reeb functions on~$\mathrm{C}^1$--manifolds} \label{subsec:reebC1} A continuous function~$f$ is said to be \emph{proper} if the preimage of every compact set is compact. \begin{definition} \label{definition:reebFunctionC1} Let~$M$ be a~$\mathrm{C}^1$--manifold and~$f\colon M\rightarrow {\mathbb{R}}$ a~$\mathrm{C}^1$--function. Then~$f$ is a~\emph{Reeb function} if \begin{enumerate}[i)] \item the subspace of critical values of~$f$ is discrete inside~${\mathbb{R}}$ and \item the restriction of~$f$ to each component of~$M$ is proper. \end{enumerate} \end{definition} Recall that a~$\mathrm{C}^1$--function~$f\colon M\rightarrow {\mathbb{R}}$ has a differential~$df\colon M\rightarrow \mathrm{T}^\ast M$ which is a section of the cotangent bundle, i.e.~a $1$--form. Let us think of~$df$ in terms of its~\emph{gradient vector field}: Pick an inner product~$\langle -,- \rangle$ on~$\mathrm{T}M$, and characterize~$\mathrm{grad}(f)\colon M\rightarrow \mathrm{T}M$ by~$\langle \mathrm{grad} (f), \mathbf{v} \rangle = df(\mathbf{v})$ for all vector fields~$\mathbf{v}\colon M\rightarrow \mathrm{T}M$. The integral curves of a vector field~$\mathbf{v}\colon M\rightarrow \mathrm{T}M$ are the~$\mathrm{C}^1$--curves~$l\colon (\alpha, \omega)\rightarrow M$, allowing~$\pm \infty$, satisfying~$\frac{dl}{dt}=\mathbf{v}_{l(t)}$.
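Purely as an illustration (none of this appears in the paper; the function $f$ and the Euler scheme are our choices), one can integrate the rescaled gradient field $\mathrm{grad}(f)/\lVert\mathrm{grad}(f)\rVert^2$ numerically: by the chain rule the resulting curve $g$ satisfies $\frac{d(f\circ g)}{ds}=1$, so it traces a flow-line reparametrized to sit over the values of~$f$.

```python
def f(x, y):
    # x^3 + x + y has nowhere-vanishing gradient, so every point lies
    # on a flow-line that meets each fiber of f exactly once.
    return x ** 3 + x + y

def grad_f(x, y):
    return 3 * x ** 2 + 1, 1.0

def section(p, s_end, steps=100000):
    # Euler integration of dg/ds = grad f / |grad f|^2 starting at p,
    # parametrized so that f(g(s)) = s up to discretization error.
    x, y = p
    ds = (s_end - f(x, y)) / steps
    for _ in range(steps):
        gx, gy = grad_f(x, y)
        norm2 = gx * gx + gy * gy
        x += ds * gx / norm2
        y += ds * gy / norm2
    return x, y

x, y = section((0.0, 0.0), 2.0)
assert abs(f(x, y) - 2.0) < 1e-2  # numerically a section: f(g(2)) is close to 2
```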
A~\emph{local flow} on~$M$ is a map~$\Psi\colon U\rightarrow M$, defined on an open neighborhood~$U$ of~$ \{0\}\times M$ in~${\mathbb{R}}\times M$, such that~$U\cap ({\mathbb{R}}\times \{p\})$ is an interval for which~$\Psi$ restricts to an integral curve. The maximal integral curves~$l_p$ of~$\mathbf{v}\colon M\rightarrow \mathrm{T}M$ form the maximal flow~$\Psi_{\mathbf{v}}(p,t)=l_p(t)$. It is maximal in the sense that there is no other local flow whose domain properly contains the domain of~$\Psi_{\mathbf{v}}$. For this maximal flow, let us write~$(\alpha_p,\omega_p)=U\cap ({\mathbb{R}}\times\{p\})$, allowing~$\pm \infty$ as endpoints. Then~$l_p\colon (\alpha_p,\omega_p)\rightarrow M$ is the maximal integral curve subject to~$l_p(0)=p$. If an integral curve passes through a point~$q$ with~$\mathbf{v}(q)=0$ then~$l\colon {\mathbb{R}}\rightarrow M$,~$t\mapsto q$ is the obvious solution. This means, conversely, that all other integral curves are immersions. They do not have to be embeddings, in general. But it is the case whenever~$\mathbf{v}=\mathrm{grad}(f)$ for a function~$f$ as above: \begin{align*} \frac{d (f\circ l)}{dt}&= df_{l(t)}(\frac{dl}{dt}) =\langle \mathrm{grad}(f),\mathrm{grad}(f) \rangle_{l(t)} \end{align*} which is greater than zero so that~$f\circ l$ and hence~$l$ are both injective. The existence of integral curves follows by solving local differential equations. In fact, vector fields and maximal flows are in one-to-one correspondence~\cite[p.~82-83]{brocker1982introduction}. I will refer to the maximal integral curves of~$\mathrm{grad}(f)$ as the \emph{flow-lines} of~$f$.\\ \begin{definition} Let~$f\colon X\rightarrow {\mathbb{R}}$ be a continuous function. A \emph{section} of~$f$ is a continuous function~$\sigma\colon [a,b]\rightarrow X$ such that~$f\circ \sigma$ is the inclusion~$[a,b]\hookrightarrow {\mathbb{R}}$.
\end{definition} The next assertion tells us how to continuously pick sections of Reeb functions, a property which will turn out to be extremely useful. \parbox{\linewidth}{ \begin{proposition} \label{proposition:modifiedFlowLinesManifold} Let~$f\colon M\rightarrow {\mathbb{R}}$ be a Reeb function. For any pair~$c<d$ of successive critical values, there is a continuous function~$ g\colon [c,d] \times f^{-1}(c,d)\rightarrow M$ such that for all~$x$ the curve~$g_x=g(-,x)$ \begin{enumerate}[i)] \item is a section;~$f\circ g_x(t) = t$, and \item passes through~$x$ at~$f(x)$;~$g_x(f(x))=x$. \end{enumerate} \end{proposition} } \begin{proof} The idea is simple: We would like to reparametrize the flow-lines of~$f$. Let~$\Psi\colon U \rightarrow f^{-1}(c,d)$ be the maximal flow of~$f$ restricted to~$f^{-1}(c,d)$. For every flow-line~$l_x\colon (\alpha_x,\omega_x)\rightarrow f^{-1}(c,d)$, the preceding discussion implies that the composition~$f \circ l_x\colon (\alpha_x,\omega_x) \rightarrow {\mathbb{R}}$ is injective. And so it defines a~$\mathrm{C}^1$--isomorphism~$h_x\colon (\alpha_x,\omega_x) \rightarrow (f\circ l_x (\alpha_x), f\circ l_x (\omega_x))$. The target must necessarily equal~$(c,d)$, independently of~$x$: there are no critical points in~$f^{-1}(c,d)$ and so an integral curve must meet every fiber. If not, one could have extended it by solving a local differential equation, contradicting the maximality of~$\Psi$. The map~$h\colon U\rightarrow (c,d)\times f^{-1}(c,d)$,~$(a,x)\mapsto (h_x (a),x)$ is a~$\mathrm{C}^1$--diffeomorphism. Its inverse is explicitly given by~$(a,x)\mapsto (h_x^{-1}(a),x)$. Define \[ \tilde{g}\colon (c,d)\times f^{-1}(c,d) \xrightarrow{h^{-1}} U\xrightarrow{\Psi} f^{-1}(c,d), \] then~$l_x=\Psi(-,x)$ implies that the restriction~$\tilde{g}_x=\tilde{g}(-,x)$ is equal to~$\tilde{g}_x(t)=l_x(h_x^{-1}t)$ and thus \[ f\circ \tilde{g}_x(t)=(f\circ l_x) (h_x^{-1}(t))=t.
\] Also, the equation~$x=l_x(0)$ implies \[ \tilde{g}(f(x),x)=l_x(h_x^{-1}\circ f\circ l_x(0))=l_x(0)=x. \] Hence the map~$\tilde{g}$ satisfies the asserted properties i) and ii). The proof will be complete once we have extended the map~$\tilde{g}$ to~$[c,d]\times f^{-1}(c,d)$. One can alternatively view~$\tilde{g}$ as a map~$f^{-1}(c,d)\rightarrow \mathop{\rm map}\nolimits ((c,d),f^{-1}[c,d])$, utilizing the exponential adjunction. In fact, the two properties of~$\tilde{g}$ above tell us that its adjoint factorizes through the subspace~$\mathrm{Flow}_f(c,d)$, of~$\mathop{\rm map}\nolimits ((c,d),f^{-1}[c,d])$, consisting of flow-lines reparametrized as sections~$(c,d)\rightarrow f^{-1}[c,d]$. So the map~$\tilde{g}$ might as well be interpreted as a map~$f^{-1}(c,d)\rightarrow \mathrm{Flow}_f(c,d)$. Since~$f$ is Reeb, hence proper on connected components, the preimage~$f^{-1}[c,d]$ is a disjoint union of compact topological spaces. Consequently any flow-line of the form~$\tilde{g}_x\colon (c,d)\rightarrow f^{-1}[c,d]$ can be extended uniquely to a section~$g_x\colon [c,d]\rightarrow f^{-1}[c,d]$. In other words, there is a function~$e\colon\mathrm{Flow}_f(c,d)\rightarrow \mathcal{S}_f(c,d)$ that extends reparametrized flow-lines on~$(c,d)$ to sections on~$[c,d]$. The rather tedious task of demonstrating the continuity of~$e$ is all that remains. For then the composition \[ f^{-1}(c,d)\xrightarrow{\tilde{g}} \mathrm{Flow}_f(c,d)\xrightarrow{e} \mathcal{S}_f(c,d) \] admits an adjoint~$g\colon [c,d]\times f^{-1}(c,d)\rightarrow M$ satisfying the asserted properties. For every~$a\leq b$ in~$[c,d]$ and~$V$ open in~$M$, denote by~$\mathrm{C}([a,b],V)$ the subbasis element whose points are the maps~$\rho\colon[a,b]\rightarrow M$ for which~$\rho([a,b])\subset V$. Then the collection of all~$\mathrm{C}([a,b],V)\cap \mathcal{S}_f(c,d)$ is a subbasis for~$\mathcal{S}_f(c,d)$.
Similarly, the collection of all~$\mathrm{C}([a,b],V)\cap \mathrm{Flow}_f(c,d)$, with~$c<a\leq b<d$, is a subbasis for~$\mathrm{Flow}_f(c,d)$. We need only verify that every preimage of the form~$e^{-1}(\mathrm{C}([a,b],V)\cap \mathcal{S}_f(c,d))$ is open. This is trivial whenever~$c<a$ and~$b<d$, for then the preimage~$e^{-1}(\mathrm{C}([a,b],V)\cap \mathcal{S}_f(c,d))$ is the set~$\mathrm{C}([a,b],V)\cap \mathrm{Flow}_f(c,d)$, which is open. To complete the proof, we will assume~$a=c$ and~$b<d$ henceforth: the case~$a>c$ and~$b=d$ is similar, whereas~$a=c$ and~$b=d$ is a special case of the former. Take an arbitrary flow-line~$\tilde{g}$ in~$e^{-1}(\mathrm{C}([c,b],V)\cap \mathcal{S}_f(c,d))$. Let~$g=e\tilde{g}$ be the extension to~$[c,d]$, so that~$g(c)$ is the limit point of~$\tilde{g}$ in~$f^{-1}(c)$. We need only prove that there is an open neighborhood~$N$, of~$\tilde{g}$, inside~$e^{-1}(\mathrm{C}([c,b],V)\cap \mathcal{S}_f(c,d))$. To construct such a neighborhood we first pick a monotone sequence~$(a_n)$ in~$(c,b]$ converging to~$c$. Ehresmann's fibration theorem~\cite{ehresmann1950connexions} provides a~$\mathrm{C}^1$--diffeomorphism~$E_{a'}$ over~${\mathbb{R}}$: \begin{center} \begin{tikzcd} f^{-1}(c,d) \arrow[r, "E_{a'}"] \arrow[dr, "f"] & f^{-1}(a')\times (c,d) \arrow[d, "\mathrm{pr}_2"] \\ & {\mathbb{R}} \end{tikzcd} \end{center} for every real number~$c<a'<d$. The elementary opens in~$(f^{-1}(a')\cap V)\times (c,d)$ are all of the form~$B\times (c',d')$ where~$B$ is an open ball. Since every restriction~$g|_{[a_n,b]}$ has compact image and~$g$ maps into~$V$, there are cylinders~$C_n=E_{b}^{-1}(B_n\times [a_n,b])$ contained in~$V$ with the property that~$N_n=\mathrm{C}([a_n,b],C_n)$ is a neighborhood of~$\tilde{g}$.
Moreover, it is safe to assume that the radius of~$B_n$ tends to zero as~$n$ goes to infinity: if~$B'$ is a ball contained inside~$B$, and~$B\times (c',d')$ maps into~$V$ under Ehresmann's~$\mathrm{C}^1$--diffeomorphism, then surely so does~$B'\times (c',d')$. I claim that we can choose~$N=N_{n_0}$ for some~$n_0$. Assume, for contradiction, that this is not the case. Then no~$N_n$ is contained in~$e^{-1}(\mathrm{C}([c,b],V)\cap \mathcal{S}_f(c,d))$. So for every~$n$ there is a flow-line~$\rho_n$ in~$N_n$ and a real number~$a_n'$ in~$[c,a_n]$ such that~$e\rho_n(a_n')$ is in the complement of~$V$. But the sequence~$(e\rho_n(a_n'))$ converges to the point~$g(c)$, which lies inside~$V$, by construction, a contradiction. \end{proof} \subsection{Extension to stratified spaces} \label{section:stratifiedSpaces} There are several notions of `stratified spaces' around. One of these is the class of locally cone-like spaces, dating back to R. Thom's work in the late 60s~\cite{thom1969ensembles}. A more recent reference is~\cite{goresky1983intersection}. For any topological space~$Z$, we will denote the open cone~$Z\times [0,1)/ Z\times 0$ by~$\mathrm{C}(Z)$. As an example, the open cone on the~$(n-1)$--sphere is the open~$n$--disk. A \emph{filtration-preserving map} between two filtrations~$X_0\subset X_1\subset\cdots$ and~$Y_0\subset Y_1\subset\cdots$ of topological spaces consists of continuous functions~$g_n\colon X_n\rightarrow Y_n$ which commute with the inclusions:~$g_{n+1}\circ (X_n\subset X_{n+1})=(Y_n\subset Y_{n+1})\circ g_n $.
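The cone example just mentioned can be made explicit; the following homeomorphism, spelled out here purely as an illustration of the cone coordinates used below, is not needed for the sequel:

```latex
% An explicit homeomorphism realizing C(S^{n-1}) as the open n-disk.
\[
  \mathrm{C}(S^{n-1}) \;=\; S^{n-1}\times [0,1)\,/\,S^{n-1}\times 0
  \;\xrightarrow{\;\cong\;}\; \mathring{D}^n,
  \qquad [z,t]\;\longmapsto\; tz .
\]
% Its inverse sends a point x \neq 0 to [x/\lVert x\rVert, \lVert x\rVert]
% and the origin to the cone point; continuity at the cone point holds
% because the entire slice S^{n-1} \times 0 is collapsed there.
```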
\begin{definition} \label{definition:stratifiedSpace} An~$n$--dimensional \emph{stratification} on a topological space~$X$ is a filtration \[ \emptyset =X_{-1}\subset X_0\subset X_1\subset\dots \subset X_n=X \] satisfying: i) every~$i$th \emph{stratum}~$S_i=X_i\setminus X_{i-1}$ is an~$i$--dimensional~$\mathrm{C}^1$--manifold and ii) for every point~$x$ in~$S_i$ there exists an open neighborhood~$U$ about~$x$, an~$(n-i-1)$--dimensional stratified space~$Z$ and a filtration-preserving homeomorphism~$h\colon U\simeq{\mathbb{R}}^i\times \mathrm{C}(Z)$. The restriction which takes~$U\cap S_{i+j+1}$ to~${\mathbb{R}}^i\times \mathrm{C} (Z_j\setminus Z_{j-1})$, and its inverse, are both required to be~$\mathrm{C}^1$. We say that a topological space together with an~$n$--dimensional stratification is a \emph{stratified space} of dimension~$n$. \end{definition} Finite-dimensional stratified spaces form a category by only considering filtration-preserving maps. Include filtered colimits to get a more general notion of~\emph{stratified spaces}, allowing infinite filtrations. Every CW complex~$X$ fits into this larger category: the~$i$th stratum of~$X$ is the disjoint union of its (open)~$i$--cells. In particular, every weak homotopy type can be represented by such a space. \\ A continuous function~$f\colon X\rightarrow {\mathbb{R}}$, from a stratified space~$X$, is~\emph{strata-wise}~$\mathrm{C}^1$ if it is~$\mathrm{C}^1$ when restricted to each stratum. A point~$x$ in the~$i$th stratum of~$X$ is \emph{critical} if it is a critical point of the~$\mathrm{C}^1$--map~$f|_{S_i}$.\\ \begin{example} \label{example:existenceCritical} For a given stratified space~$X$, the definition of a strata-wise~$\mathrm{C}^1$--function depends on the choice of stratification. Because of this we can always assume a Reeb function to have critical values. Indeed, let~$f\colon X\rightarrow {\mathbb{R}}$ be a Reeb function for which there are no critical values.
Then we slightly modify the stratified structure on~$X$: refine the already existing structure by dividing every stratum~$S$ into the three strata~$f|_S^{-1}(-\infty,0)$,~$f|_S^{-1}(0)$ and~$f|_S^{-1}(0,\infty)$. Then~$f$ is still a Reeb function on~$X$ with this choice of stratification. Moreover, we now have a critical value~$0$. \end{example} We extend Definition~\ref{definition:reebFunctionC1} from differentiable manifolds to stratified spaces in the following way: \begin{definition} \label{definition:reebFunction} Let~$X$ be a stratified space and~$f\colon X\rightarrow {\mathbb{R}}$ a strata-wise~$\mathrm{C}^1$--function. We say that~$f$ is a \emph{Reeb function} if \begin{enumerate}[i)] \item the subspace of critical values of~$f$ is discrete inside~${\mathbb{R}}$ and \item for any connected component~$C$ of some stratum, the restriction of~$f$ to the closure of~$C$, in~$X$, is proper. \end{enumerate} \end{definition} For the purpose of proving Theorem~\ref{intro:mainresult}, this will turn out to be a satisfactory extension. In particular, there is the stratified version of Proposition~\ref{proposition:modifiedFlowLinesManifold}. \begin{proposition} \label{proposition:modifiedFlowLines} Let~$f\colon X\rightarrow {\mathbb{R}}$ be a Reeb function. For any pair~$c<d$ of successive critical values, there is a continuous function~$ g\colon [c,d] \times f^{-1}(c,d)\rightarrow X$ which satisfies \begin{enumerate}[i)] \item every~$g_x\colon [c,d]\rightarrow X$,~$g_x(t)=g(t,x)$ is a section and \item $g(f(x),x)=x$. \end{enumerate} \end{proposition} \begin{proof} For a general stratified space~$X$, and Reeb function~$f\colon X\rightarrow {\mathbb{R}}$, let~$i_1,i_2,\dots$ denote the indices of the non-empty strata. The proof is by induction on~$i_n$. To ease notation I will simply reindex~$i_n\mapsto n$. Define~$f_n$ to be the restriction of~$f$ to~$X_n$.
For~$n=0$ there is nothing to prove if~$X_0$ is~$0$--dimensional; otherwise the base case follows by Proposition~\ref{proposition:modifiedFlowLinesManifold}. Assume that a function~$g_{n-1}\colon [c,d]\times f_{n-1}^{-1}(c,d)\rightarrow X_{n-1}$ is constructed to satisfy the assertion. We shall modify the gradient vector field on the~$n$th stratum to take into account the flow on lower dimensional strata. Definition~\ref{definition:stratifiedSpace} tells us that a point~$x$ in~$S_i \cap X_n \cap f^{-1}(c,d)$ admits a neighborhood~$N_x$, contained in~$f^{-1}(c,d)$, of the form~${\mathbb{R}}^i\times \mathrm{C} (Z)$ with~$Z$ an~$(n-i-1)$--dimensional stratified space. We shall define a vector field on each~$N_x\cap S_n$ to obtain a new vector field on all of~$S_n$ via a partition of unity. If~$i<n$, then the~$(n-i-1)$st stratum of~$Z$, which is locally~$\mathrm{C}^1$--diffeomorphic to~${\mathbb{R}}^{n-i-1}$, corresponds to the intersection of~${\mathbb{R}}^i\times \mathrm{C} (Z)$ with~$S_n$. So the intersection of~$N_x$ and the~$n$th stratum~$S_n$ may be covered by opens~$N_{x,j_x}\simeq {\mathbb{R}}^i\times \mathrm{C}({\mathbb{R}}^{n-i-1})$. Let us construct a vector field on one such neighborhood~$N$ which meets~$S_n$ in $U\simeq {\mathbb{R}}^i \times {\mathbb{R}} \times {\mathbb{R}}^{n-i-1}$ and~$S_i$ in~$V=N\cap S_i\simeq {\mathbb{R}}^i$. There is a~$\mathrm{C}^1$--map~$U\rightarrow V$ which is given by the projection~$\mathrm{pr}_1\colon {\mathbb{R}}^i\times {\mathbb{R}}\times {\mathbb{R}}^{n-i-1}\rightarrow {\mathbb{R}}^i$ in coordinates. The induced map~$\mathrm{T}\mathrm{pr}_1$ on tangent spaces admits a right inverse~$v\mapsto (v,0,0)$. Hence a vector field on~$V$ defines a vector field on~$U$. In particular, the vector field corresponding to an appropriate restriction of~$g_{n-1}$ defines a vector field~$\mathbf{u}\colon U\rightarrow \mathrm{T}U$.
Notice that an integral curve~$l$ of~$\mathbf{u}$ cannot have a limit point in~$X_{n-1}\cap f^{-1}(c,d)$ since~$g_{n-1}$ is a family of~$\mathrm{C}^1$ sections. To every~$x$ in~$X_{n-1}$ which is also contained in the closure of~$S_n$, we associate such a vector field~$\mathbf{u}_x\colon U_x\rightarrow \mathrm{T}U_x$. Otherwise, if~$i=n$ and~$x$ is not contained in any such~$U_x$, then~$N_x\simeq \mathbb{R}^n$ and we simply restrict the gradient vector field on~$S_n$ to~$N_x$. To define a vector field on all of~$S_n$, we cover~$S_n$ with a family of opens~$(U_\alpha)$ as described above and pick a partition of unity~$(\rho_{U_\alpha})$. The formula~$\mathbf{v}=\sum_{\alpha} \rho_{U_{\alpha}} \mathbf{u}_{\alpha}$ defines a vector field~$S_n\rightarrow \mathrm{T}S_n$. Notice how~$df(\mathbf{v})$ is positive everywhere precisely because each~$df(\mathbf{u}_\alpha)$ is positive everywhere. The corresponding maximal local flow thus results in a map~$g_n\colon [c,d]\times f|_ {S_n}^{-1}(c,d)\rightarrow X$. Combine~$g_{n-1}$ and~$g_{n}$ to define the parametrized family~$g\colon [c,d]\times f_n^{-1}(c,d)\rightarrow X_n$ \[ g(t,x)=\twopartdef{g_{n-1}(t,x)}{x\in X_{n-1}}{g_n(t,x)}{x\in S_n} \] of sections. \end{proof} We end this section by proving a lemma. The result is analogous to two basic Morse lemmas that utilize flow-lines to produce deformation retracts. \begin{lemma} \label{lemma:oneCriticalStratified} Let~$f\colon X\rightarrow {\mathbb{R}}$ be a Reeb function with at most one critical value. Then the inclusion~$f^{-1}a\hookrightarrow X$ is a homotopy equivalence for all~$a$ if there is no critical value; otherwise it is a homotopy equivalence for~$a$ equal to the critical value. \end{lemma} \begin{proof} Define a filtration~$X_n=f^{-1}[a-n,a+n]$,~$n\geq 0$, on~$X$. Since~$X$ is the homotopy colimit of the~$X_n$, it suffices to prove that the inclusion~$i_n \colon f^{-1}a\hookrightarrow f^{-1}[a-n,a+n]$ is a weak homotopy equivalence.
The inclusion factors as \[ f^{-1}a \xhookrightarrow{j_n} f^{-1}[a-n,a]\xhookrightarrow{k_n} f^{-1}[a-n,a+n] \] and we will only argue that~$j_n$ is a weak homotopy equivalence; the case of~$k_n$ is similar. To every point~$x$, in~$f^{-1}[a-n,a]$, we associate the reparametrized flow-line~$g_x\colon [a-n,a]\rightarrow X$ through~$x$ provided by Proposition~\ref{proposition:modifiedFlowLines}. If~$x$ is in~$f^{-1}a$, then~$g_x\colon \{a \}\rightarrow X$ is the trivial section at~$x$. Define a retraction~$r_n$ of~$j_n$ by the formula~$r_n(x)=g_x(a)$. This defines a homotopy equivalence due to the homotopy \[ H(x,t)= g_x(ta+(1-t)f(x)) \] from~$H(x,0)=x$ to~$H(x,1)=j_n\circ r_n(x)$. \end{proof} \section{The section category and its classifying space} \label{section:sectioncategory} In Section~\ref{subsec:sf} we define the section category~$\mathcal{S}_f$ of a continuous function~$f$. Also, if~$f \colon X\rightarrow {\mathbb{R}}$ is a Reeb function, then every subset~$A$ of~${\mathbb{R}}$ which contains the critical values of~$f$ defines a subcategory~${\mathcal{C}}_f^A$ of~$\mathcal{S}_f$. Section~\ref{section:moreSimplicialStuff} is included for the reader who would like some background on simplicial sets. Thereafter Theorem~\ref{intro:mainresult} is deduced from the stronger Theorem~\ref{thm:mainresult} in Section~\ref{sec:mainresult}. \subsection{The section category} \label{subsec:sf} Let us first agree on the meaning of a `topological category'. There are two different flavors: categories enriched in topological spaces and categories internal to topological spaces. In this paper a topological category is to be understood in the latter sense, following G. Segal~\cite{segal1968classifying}.
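A standard example of the internal notion, included here only as an illustration (it is not used elsewhere in the text), is the one-object case:

```latex
\begin{example}
A topological group $G$ determines a topological category with a single
object: $\mathrm{ob}=\{\ast\}$ and $\mathrm{mor}=G$, where source and target
are the constant maps, the identity inserts the unit of $G$, and composition
is the multiplication $G\times G\rightarrow G$. All structural maps are
continuous, and the classifying space of this category is the usual
classifying space $\mathrm{B}G$ of the group.
\end{example}
```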
A category~${\mathcal{C}}$ can be described in terms of four structural maps: if~$\mathrm{ob}{\mathcal{C}}$ is the set of objects and~$\mathrm{mor}{\mathcal{C}}$ the set of morphisms, then these are the source and target~$s,t\colon \mathrm{mor}{\mathcal{C}} \rightarrow \mathrm{ob}{\mathcal{C}}$, the injection of objects as identity morphisms~$i\colon \mathrm{ob}{\mathcal{C}}\rightarrow \mathrm{mor}{\mathcal{C}}$ and the composition~$\circ\colon \mathrm{mor}{\mathcal{C}}\times_{\mathrm{ob}{\mathcal{C}} }\mathrm{mor} {\mathcal{C}}\rightarrow \mathrm{mor}{\mathcal{C}}$. The set~$ \mathrm{mor}{\mathcal{C}}\times_{\mathrm{ob}{\mathcal{C}} }\mathrm{mor} {\mathcal{C}}$ is the pullback obtained from the source and target; it consists of the pairs~$(m,m')$ of morphisms with~$s(m')=t(m)$, i.e. those for which~$m'\circ m$ is defined. A category~${\mathcal{C}}$ is a \emph{topological category} if both~$\mathrm{ob}{\mathcal{C}}$ and~$\mathrm{mor}{\mathcal{C}}$ are equipped with topologies and the four structural maps~$s$,~$t$,~$i$ and~$\circ$ are all continuous. Any topological space~$X$ defines a trivial topological category~$\underline{X}$ whose object space and morphism space are both equal to~$X$. The structural maps~$s,t,i$ all agree with the identity on~$X$, whereas composition is the homeomorphism from the diagonal on~$X$ to~$X$.\\ Assume that a continuous function~$f\colon X\rightarrow {\mathbb{R}}$ from a topological space~$X$ is given. Recall that a \emph{section} of~$f$ is a continuous function~$\sigma\colon [a,b]\rightarrow X$ such that~$f\circ \sigma\colon [a,b]\rightarrow {\mathbb{R}}$ is the inclusion. Arrange all of the sections in the \emph{space of all sections}~$\mathrm{mor} \mathcal{S}_f= \coprod_{a\leq b} \mathcal{S}_f [a,b]$, ranging over all pairs~$a\leq b$ in~${\mathbb{R}}$, where each~$\mathcal{S}_f[a,b]$ carries the compact-open topology and the coproduct the disjoint union topology. Notice how~$f^{-1}a$ and~$\mathcal{S}_f[a,a]$ are canonically homeomorphic.
Hence the object space~$\mathrm{ob}\mathcal{S}_f=\coprod_{a\in{\mathbb{R}}} f^{-1}a$ comes with an evident map~$i\colon\mathrm{ob}\mathcal{S}_f\rightarrow \mathrm{mor}\mathcal{S}_f$. Source and target maps~$s,t\colon \mathrm{mor}\mathcal{S}_f\rightarrow \mathrm{ob}\mathcal{S}_f$ are obtained by restricting the evaluation~$\mathrm{eval}\colon \mathcal{S}_f[a,b]\times [a,b]\rightarrow X$ to~$a$ and~$b$, respectively. If~$\sigma\colon[a,b]\rightarrow X$ is a section, then~$s(\sigma)=\sigma(a)$ and~$t(\sigma)=\sigma(b)$. Concatenation defines canonical maps~$\mathcal{S}_f[b,c]\times_{f^{-1}b}\mathcal{S}_f[a,b]\rightarrow \mathcal{S}_f[a,c]$: \[ \rho\circ \sigma(r)=\twopartdef{\sigma(r)}{a\leq r \leq b}{\rho(r)}{b\leq r \leq c} \] from which a composition~$\circ \colon \mathrm{mor} \mathcal{S}_f\times_{\mathrm{ob}\mathcal{S}_f}\mathrm{mor}\mathcal{S}_f\rightarrow \mathrm{mor}\mathcal{S}_f$ is deduced. \begin{figure}[h] \centering \begin{tikzpicture} \draw [-] plot [smooth, tension=0.8] coordinates {(0,0) (2,0.5) (4,-0.5) (6,0.5) (8,-0.5) (10,0)}; \draw [-] (0,-2) -- (10,-2); \draw [-,thick] (0,-0.05) -- (0,0.05); \draw [-,thick] (4,-0.55) -- (4,-0.45); \draw [-,thick] (10,-0.05) -- (10,0.05); \node [right] at (10.1,-2.1) {${\mathbb{R}}$ }; \node [above] at (0,-2) {$a$ }; \node [above] at (4,-2) {$b$ }; \node [above] at (10,-2) {$c$ }; \node [above] at (2,0.5) {$\sigma$ }; \node [above] at (8,-0.5) {$\rho$ }; \end{tikzpicture} \caption{The composition of two sections $\sigma$ and $\rho$ satisfying $s(\rho)=t(\sigma)$.} \label{fig:comp} \end{figure} It is straightforward to check that~$\circ$ is associative: the morphisms are all canonically parametrized as a result of being sections. The map~$i$ clearly provides identity morphisms. Hence what we have described is a topological category~$\mathcal{S}_f$. \begin{definition} The \emph{section category} of a continuous function~$f\colon X\rightarrow {\mathbb{R}}$ is the topological category~$\mathcal{S}_f$.
\end{definition} Two continuous functions~$f\colon X\rightarrow {\mathbb{R}}$ and~$f'\colon X'\rightarrow {\mathbb{R}}$, together with a continuous function~$\phi \colon X \rightarrow X'$ over~${\mathbb{R}}$ in the sense that~$f'\circ \phi = f$, induce a continuous functor~$\mathcal{S}_\phi\colon \mathcal{S}_f\rightarrow \mathcal{S}_{f'}$. So the assignment~$f\mapsto \mathcal{S}_f$ is functorial from the category of spaces over the real line. Assume~$f\colon X\rightarrow {\mathbb{R}}$ to be a Reeb function from here on. Every section~$\sigma\colon [a,b]\rightarrow X$ of~$f$ is decorated by two real numbers:~$f(s\sigma)=a$ and~$f(t\sigma)=b$. If~$A$ is a non-empty subset of~${\mathbb{R}}$ containing the critical values of~$f$, we define the subcategory~${\mathcal{C}}_f^A$ of sections decorated only by real numbers in~$A$: \begin{definition} \label{def:fullSubcat} Let~$f\colon X\rightarrow {\mathbb{R}}$ be a Reeb function, and consider a non-empty subset~$A$ of~${\mathbb{R}}$ containing the critical values of~$f$. Define~${\mathcal{C}}_f^A$ as the full subcategory of~$\mathcal{S}_f$ with object space~$\coprod_{a\in A}f^{-1}a$. \end{definition} If~$A={\mathbb{R}}$, then obviously~${\mathcal{C}}_f^A=\mathcal{S}_f$. And more is true:~${\mathcal{C}}_f^A$ and~$\mathcal{S}_f$ carry the same homotopical information for any choice of~$A$. We shall make this precise in Section~\ref{sec:mainresult}, after giving a brief recap on simplicial spaces. \subsection{Some background on simplicial spaces} \label{section:moreSimplicialStuff} A \emph{simplicial set} is a family of sets~$X_n$,~$n\geq 0$, together with face maps~$d_i\colon X_n\rightarrow X_{n-1}$ and degeneracy maps~$s_j\colon X_n\rightarrow X_{n+1}$ satisfying certain relations~\cite[p.~4]{goerss2009simplicial}. It resembles a simplicial complex: the face map~$d_i$ applied to an~$n$--simplex is the~$(n-1)$--simplex to be interpreted as its~$i$th face.
The degeneracy maps, on the other hand, encode the number of ways in which one could consider an~$n$--simplex as a degenerate~$(n+1)$--simplex. The latter is not important to us, for all homotopy types in this paper are unaffected by simply omitting degeneracy maps. This can be made precise by verifying the goodness condition in~\cite{segal1974categories}.\\ The \emph{nerve} is a functor \[ \mathrm{N}\colon \text{(small categories)}\rightarrow \text{(simplicial sets)}. \] It maps a category~${\mathcal{C}}$ to the simplicial set whose set of~$n$--simplices is the~$n$--fold pullback \[(\mathrm{N}{\mathcal{C}})_n=\mathrm{mor}{\mathcal{C}}\times_{\mathrm{ob}{\mathcal{C}}}\dots\times_{\mathrm{ob}{\mathcal{C}}}\mathrm{mor}{\mathcal{C}}, \] of composable~$n$--tuples of morphisms. The outer face maps~$d_0$ and~$d_n$ are given by omitting the first and last component, respectively, whereas the inner face maps~$d_i$ are given by composing the~$i$th and~$(i+1)$th component. A \emph{simplicial space}~$X_\bullet$ is a simplicial set with the additional requirement that each~$X_n$ is a topological space and the face and degeneracy maps are continuous. The nerve makes perfect sense as a functor \[ \mathrm{N}\colon \text{(topological categories)}\rightarrow \text{(simplicial spaces)}. \] Denote by~$\Delta^\bullet$ the cosimplicial space with~$n$--cosimplices the standard topological~$n$--simplex~$\Delta^n$. The coface map~$d^i\colon \Delta^n \rightarrow \Delta^{n+1}$ is the inclusion of~$\Delta^{n+1}$'s~$i$th face, whereas the codegeneracy map~$s^j\colon \Delta^n\rightarrow \Delta^{n-1}$ collapses~$\Delta^n$ onto~$\Delta^{n-1}$ by identifying the~$j$th and~$(j+1)$th vertices. Then the \emph{geometric realization} \[ |\cdot|\colon \text{(simplicial spaces)}\rightarrow \text{(topological spaces)} \] is defined by assigning to a simplicial space~$X_\bullet$ the quotient space \[ |X_\bullet|=(\coprod\limits_n X_n \times \Delta^n )/ \sim \] with relation generated by~$(d_ix,z)\sim (x,d^i z)$ and~$(s_j x,z)\sim (x,s^j z)$.
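As a sanity check on these definitions, here is the smallest non-trivial computation; it is a standard fact, spelled out here for convenience rather than taken from the original text:

```latex
\begin{example}
Consider the poset $[1]=\{0<1\}$ as a (discretely topologized) category. The
$n$--simplices of its nerve are the order-preserving maps $[n]\rightarrow[1]$;
the only non-degenerate simplices are the two objects and the morphism $0<1$.
The realization therefore glues two $0$--cells to the endpoints of a single
$1$--cell, so
\[
  |\mathrm{N}[1]| \;\cong\; \Delta^1 .
\]
Similarly, for the trivial topological category $\underline{X}$ on a space
$X$, every simplex of positive dimension is degenerate, and the realization
of the nerve recovers $X$ itself.
\end{example}
```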
Compose the realization with the nerve to define the~\emph{classifying space} \[ \mathrm{B}=|\mathrm{N}(\cdot) |\colon \text{(topological categories)}\rightarrow \text{(topological spaces)}. \] \begin{example} \label{example:nSimplexSf} Let~$f\colon X\rightarrow {\mathbb{R}}$ be a continuous function and consider the associated simplicial space~$\mathrm{N} \mathcal{S}_f$ obtained from applying the nerve to the section category. For a sequence~$a_0\leq\dots\leq a_n$ we introduce the topological space \[ \mathcal{S}_f[a_0,\dots,a_n]=\mathcal{S}_f[a_0,a_1]\times_{f^{-1}a_1}\mathcal{S}_f[a_1,a_2]\times_{f^{-1}a_2}\dots\times_{f^{-1}a_{n-1}} \mathcal{S}_f[a_{n-1},a_n] \] of sections~$[a_0,a_n]\rightarrow X$ labeled by the given sequence. With this notation, we may identify the space of~$n$--simplices \[ (\mathrm{N}\mathcal{S}_f)_n = \coprod_{a_0\leq\dots\leq a_n \text{ in } {\mathbb{R}}} \mathcal{S}_f[a_0,\dots,a_n]. \] If, in addition, the function~$f\colon X\rightarrow {\mathbb{R}}$ is a Reeb function on a stratified space~$X$, then the simplicial space~$\mathrm{N} {\mathcal{C}}_f^A$ has an associated space of~$n$--simplices \[ (\mathrm{N}{\mathcal{C}}_f^A)_n = \coprod_{a_0\leq\dots\leq a_n \text{ in } A} \mathcal{S}_f[a_0,\dots,a_n]. \] \end{example} \subsection{A proof of Theorem~\ref{intro:mainresult}} \label{sec:mainresult} Let~$f\colon X\rightarrow {\mathbb{R}}$ be a continuous function from a topological space~$X$ to the real line. It is tempting to presume~$X\simeq\mathrm{B} \mathcal{S}_f $ to be true in general (Theorem~\ref{intro:mainresult}). But that is not the case. \begin{example} \label{example:BsfNotX} There is a continuous function~$f\colon I\rightarrow {\mathbb{R}}$, from the unit interval~$I$, uniquely determined by continuity and the formula~$f(x)=x\sin(\frac{1}{x})$ for~$x>0$.
Proposition~\ref{prop:BSfZeroHomology}, to be proven, tells us that there cannot be a path from~$1$ to any other point in~$\mathrm{B} \mathcal{S}_f$: such a path would have to meet an infinite number of~$1$--cells up to homotopy fixing endpoints. Hence~$\mathrm{B} \mathcal{S}_f$ has at least two path components. In fact,~$\mathrm{B} \mathcal{S}_f$ is weakly equivalent to the disjoint union of two points; see Example~\ref{ex:oscillation}, to be computed. \end{example} Assume from here on that~$f\colon X\rightarrow {\mathbb{R}}$ is a Reeb function and that~$A$ is a subset of the real line satisfying the assumptions of Definition~\ref{def:fullSubcat}. Recall from Example~\ref{example:nSimplexSf} that the space of~$n$--simplices in~$\mathrm{N}{\mathcal{C}}_f^A$ is the disjoint union~$\coprod\mathcal{S}_f[\bar{a}]$, indexed over non-decreasing sequences~$\bar{a}=(a_0,\dots,a_n)$ in~$A$. Points in~$\mathrm{B}{\mathcal{C}}_f^A$ are thus classes~$[\bar{\sigma},\bar{t}]$ with~$(\bar{\sigma},\bar{t})$ in~$\mathcal{S}_f[\bar{a}]\times \Delta^n$. There is a map~$\phi\colon \mathrm{B} {\mathcal{C}}_f^A\rightarrow X$ which is soon to be proven a weak homotopy equivalence. For a representative with~$\bar{\sigma}=(\sigma_1,\dots,\sigma_n)$, a sequence of composable sections~$[a_{i-1},a_i]\rightarrow X$, and~$\bar{t}=(t_0,\dots,t_n)$, it is defined by~$\phi[\bar{\sigma},\bar{t}]=\sigma_n\circ\dots \circ \sigma_1 (\bar{a}\bar{t})$. The notation~$\bar{a}\bar{t}$ is short for the dot product~$a_0t_0+\dots+a_nt_n$. It is straightforward, if tedious, to verify that this does in fact induce a well-defined map on~$\mathrm{B}{\mathcal{C}}_f^A$. Post-composition with~$f$ now defines a map~$\bar{f}=f\circ \phi$ from~$\mathrm{B}{\mathcal{C}}_f^A$ to the real line. Applying~$f$ to the above formula reveals that~$\bar{f}[\bar{\sigma},\bar{t}]=\bar{a}\bar{t}$ on representatives, since the composition~$\sigma_n\circ\dots \circ \sigma_1 $ in~${\mathcal{C}}_f^A$ is a section of~$f$.
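Spelling out the formula for~$\phi$ in the two lowest degrees may be helpful; this is a direct special case of the definition above, not an additional claim:

```latex
% phi in degrees 0 and 1, read off from the defining formula.
% Degree 0: a class [x,(1)], with x in f^{-1}a_0 and (1) in Delta^0, satisfies
\[
  \phi[x,(1)] = x, \qquad \bar{f}[x,(1)] = a_0 .
\]
% Degree 1: for a section sigma: [a_0,a_1] -> X and (t_0,t_1) in Delta^1,
\[
  \phi[\sigma,(t_0,t_1)] = \sigma(a_0t_0+a_1t_1),
  \qquad
  \bar{f}[\sigma,(t_0,t_1)] = a_0t_0+a_1t_1 .
\]
```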
Let us establish some notation before proving some convenient lemmas. Given a finite non-decreasing sequence~$\bar{a}=(a_0,\dots,a_n)$ in~$A$ and a subspace~$K$ of~$\Delta^n$, we denote by~$\langle\bar{a}, K \rangle$ the image of~$\coprod \mathcal{S}_f[\bar{b}]\times K$, ranging over all subsequences~$\bar{b}$ of~$\bar{a}$, under the quotient map that defines~$\mathrm{B} {\mathcal{C}}^A_f$. In particular,~$\langle \bar{a}, \Delta^n\rangle$ is the subspace generated by classes whose representative is decorated by a subsequence of~$\bar{a}$. \begin{lemma} \label{lemma:compactIntoCfA} If~$m\colon K\rightarrow \mathrm{B} {\mathcal{C}}_f^A$ is a map from a compact space~$K$, then there is an increasing sequence~$\bar{a}$ in~$A$ of length~$n$ such that the image of~$K$ is contained in~$\langle \bar{a},\Delta^n\rangle$. \end{lemma} \begin{proof} Let \[ \mathrm{sk}_k \mathrm{B} {\mathcal{C}}_f^A =(\coprod\limits_{q\leq k} (\mathrm{N}\mathcal{S}_f)_q\times \Delta^q)/\sim \] be the~$k$--skeleton of~$\mathrm{B} {\mathcal{C}}_f^A$. It is well-known that the map~$m$ must factor through some~$k$--skeleton of~$\mathrm{B}{\mathcal{C}}_f^A$, because~$K$ is compact. In our notation, one may alternatively write~$\mathrm{sk}_k \mathrm{B}{\mathcal{C}}_f^A =\cup \langle\bar{b},\Delta^k \rangle$, ranging over all non-decreasing sequences~$\bar{b}=(b_0,\dots,b_k)$ in~$A$. Hence we can deduce even more: the image of~$m$ can only meet finitely many subspaces of the form~$\langle \bar{b} , \Delta^k \rangle$, i.e. it factorizes through~$\cup_{i=0,\dots, q} \langle\bar{b}_i,\Delta^k\rangle $ for finitely many sequences~$\bar{b}_i=(b_{i,0},\dots,b_{i,k})$. Collect and order all the components~$b_{i,j}$ to define a larger non-decreasing sequence~$\bar{a}=(a_0,\dots,a_n)$. A point~$[\bar{\sigma},\bar{t}]$ in the image of~$m$ is then decorated by a subsequence of~$\bar{a}$, so the image is contained in~$\langle \bar{a},\Delta^n\rangle$. \end{proof} Recall the spine~$\mathrm{sp}\Delta^n$ of the topological~$n$--simplex.
It is the subspace parametrized by tuples~$\bar{t}=(t_0,\dots,t_n)$ in which at most two successive entries are non-zero; there is an~$i$ such that~$t_j=0$ for all~$j$ except possibly~$j=i-1,i$. Points, or classes, in~$\langle \bar{a}, \mathrm{sp} \Delta^n\rangle$ have a particularly nice presentation: a point~$[\bar{\sigma},\bar{t}]$ in~$\langle \bar{a}, \mathrm{sp} \Delta^n\rangle$ can be represented as~$[\sigma_i,(t_{i-1},t_{i})]$, since~$\bar{t}=(0,\dots,0,t_{i-1},t_{i},0,\dots,0)$ for some~$i$. \begin{lemma} \label{lemma:spine} Consider a Reeb function~$f\colon X\rightarrow {\mathbb{R}}$ and~$\bar{a}$ an increasing sequence in~$A$. The subspace~$\langle \bar{a}, \Delta^n \rangle $ deformation retracts onto~$\langle \bar{a}, \mathrm{sp}\Delta^n \rangle $ in~$\mathrm{B} {\mathcal{C}}_f^A$. \end{lemma} \begin{proof} A point~$[\bar{\sigma},\bar{t}]=[\sigma_i,(t_{i-1},t_{i})]$ in~$\langle \bar{a}, \mathrm{sp} \Delta^n\rangle$ is mapped to~$t_{i-1}a_{i-1}+t_ia_i$ under~$\bar{f}$. For a fixed~$\bar{\sigma}$ in~$\mathcal{S}_f[\bar{a}]$ the map~$\bar{f}\circ(\bar{\sigma},\mathrm{id}_{\mathrm{sp}\Delta^n})\colon \mathrm{sp}\Delta^n\rightarrow {\mathbb{R}}$ is injective, because~$\bar{a}$ is an increasing sequence. So for every~$a_0\leq b\leq a_n$ and~$\bar{\sigma}$ there is a unique~$\bar{s}_b$ in~$\mathrm{sp}\Delta^n$ such that~$\bar{f}[\bar{\sigma},\bar{s}_b]=b$. Any two points in~$\langle \bar{a}, \Delta^n \rangle \cap \bar{f}^{-1}b$ are associated with the same~$\bar{s}_b$. The homotopy \[ R([\bar{\sigma},\bar{t}],t)=[\bar{\sigma},(1-t)\bar{t}+t\bar{s}_{\bar{f}[\bar{\sigma},\bar{t}]}] \] is thus well-defined. It satisfies~$R(-,0)=\mathrm{id}_{\langle \bar{a}, \Delta^n \rangle }$, whereas the image of~$R(-,1)$ is contained in~$\langle \bar{a}, \mathrm{sp}\Delta^n \rangle $. It is a deformation retract because~$\bar{s}_{\bar{f}[\bar{\sigma},\bar{t}]}=\bar{t}$ whenever~$\bar{t}$ is in the spine; the homotopy is trivial when restricted to the spine.
\end{proof} The third lemma is analogous to Lemma~\ref{lemma:oneCriticalStratified}. \begin{lemma} \label{lemma:deformationRetract} Consider a Reeb function~$f\colon X\rightarrow {\mathbb{R}}$ and~$\bar{a}=(a_0,\dots,a_n)$ an increasing sequence in~$A$ such that~$[a_0,a_n]$ contains at most one critical value of~$f$. Then the subspace~$\langle \bar{a},\mathrm{sp}\Delta^n\rangle $ of~$\mathrm{B}{\mathcal{C}}_f^A$ deformation retracts onto \begin{enumerate}[i)] \item $f^{-1}a$ for any~$a$ in~$\bar{a}$ if there is no critical value or \item $f^{-1}a$ for~$a$ equal to the critical value, otherwise. \end{enumerate} \end{lemma} \begin{proof} We may assume~$a=a_0$, much like we only consider the case~$[a-n,a]$ in the proof of Lemma~\ref{lemma:oneCriticalStratified}: if~$a_i=a$, then we simply identify~$\langle \bar{a},\mathrm{sp}\Delta^n\rangle $ as the homotopy pushout of \[ \langle (a_0,\dots,a),\mathrm{sp}\Delta^i\rangle\leftarrow f^{-1}a\rightarrow \langle (a,\dots,a_n),\mathrm{sp}\Delta^{n-i}\rangle \] and reduce to the case~$a=a_0$ or~$a=a_n$. The deformation retract will be defined inductively on~$n$. If~$n=0$, then there is nothing to prove since~$\langle a,\Delta^0\rangle$ is equal to~$f^{-1}a$. Assume the existence of a deformation retract \[ R_{n-1}\colon \langle (a_0,\dots, a_{n-1}), \mathrm{sp}\Delta^{n-1} \rangle \times I \rightarrow \langle (a_0,\dots, a_{n-1}), \mathrm{sp}\Delta^{n-1} \rangle \] onto~$f^{-1}a$. Then we need only construct a deformation retract of~$\langle \bar{a}, \mathrm{sp}\Delta^n \rangle $ onto~$\langle (a_0,\dots, a_{n-1}), \mathrm{sp}\Delta^{n-1} \rangle$. But~$\langle \bar{a}, \mathrm{sp}\Delta^n \rangle $ is the union of~$\langle (a_0,\dots, a_{n-1}), \mathrm{sp}\Delta^{n-1} \rangle$ and~$\langle (a_{n-1},a_n), \Delta^1\rangle$, so it suffices to deformation retract~$\langle (a_{n-1},a_n), \Delta^1\rangle$ onto~$f^{-1}a_{n-1}$.
A point~$[x]$ in~$f^{-1}a_n$, viewed inside the subspace~$\langle (a_{n-1},a_n), \Delta^1\rangle$, has a canonical choice of representative~$[g_x, (0,1)]$ where~$g_x\colon [a_{n-1},a_n]\rightarrow X$ is a reparametrized flow-line provided by Proposition~\ref{proposition:modifiedFlowLines}. Hence every point in~$\langle (a_{n-1},a_n),\Delta^1\rangle\setminus f^{-1}a_{n-1} $ is represented by~$[\sigma, (t_0,t_1)]$ with~$\sigma\colon [a_{n-1},a_n]\rightarrow X$ a section. To every section~$\sigma\colon [a_{n-1}, a_n]\rightarrow X$ we associate the map~$g_\sigma \colon [a_{n-1},a_n]^2\rightarrow X$ determined by~$g_\sigma (b,c)=g_{\sigma(b)}(c)$, where~$g_{\sigma (b)}$ is the reparametrized flow-line through~$\sigma(b)$. The section~$\sigma$ can be recovered from the composition~$[a_{n-1},a_n]\xrightarrow{d}[a_{n-1},a_n]^2\xrightarrow{g_\sigma} X$ where~$d$ is the diagonal map~$b\mapsto (b,b)$. For a fixed value~$b_0$, we also recover~$g_{\sigma(b_0)}$ from the composition~$[a_{n-1},a_n]\xrightarrow{(b_0,\mathrm{id})} [a_{n-1},a_n]^2\xrightarrow{g_\sigma} X$. The straight line homotopy~$H_{b_0}\colon I\rightarrow \mathop{\rm map}\nolimits ([a_{n-1},a_n],[a_{n-1},a_n]^2)$, \[ H_{b_0}(t)(b)=((1-t)b+tb_0, b) \] between~$d$ and~$(b_0,\mathrm{id})$ will give us the desired deformation retract. Indeed, for a point~$[\sigma, (t_0,t_1)]$ contained in~$\langle (a_{n-1},a_n),\Delta^1\rangle\setminus f^{-1}a_{n-1}$, we declare \[ R_n([\sigma, (t_0,t_1)],t)=[g_\sigma \circ H_{\bar{f}[\sigma, (t_0,t_1)]}(t), (1-t) (t_0,t_1) + t(1,0) ]. \] When~$t=0$ the output is~$[\sigma, (t_0,t_1)]$, whereas~$t=1$ yields~$[g_{\phi[\sigma, (t_0,t_1)]}, (1,0)]=[g_{\phi[\sigma, (t_0,t_1)]} (a_{n-1})]$ in~$f^{-1} a_{n-1}$. Extend~$R_n$ to~$\langle(a_{n-1},a_n), \Delta^1\rangle$ by declaring~$R_n([x], t)=[x]$ for a class~$[x]$ in~$f^{-1}a_{n-1}$. \end{proof} Combining Lemmas~\ref{lemma:spine} and~\ref{lemma:deformationRetract} yields the following.
\begin{lemma} \label{lemma:refinement} Assume that~$A'$ is constructed from~$A$ by adding a single real value. Then the inclusion~${\mathcal{C}}_f^A\hookrightarrow {\mathcal{C}}_f^{A'}$ induces a weak homotopy equivalence on classifying spaces. \end{lemma} We are prepared to prove that the homotopy types of~$\mathrm{B} {\mathcal{C}}_f^A$ and~$X$ coincide for all eligible~$A$. In particular, Theorem~\ref{intro:mainresult} follows. \begin{theorem} \label{thm:mainresult} Let~$f\colon X\rightarrow {\mathbb{R}}$ be a Reeb function. The map~$\phi \colon \mathrm{B} {\mathcal{C}}_f^A\rightarrow X$ is a weak homotopy equivalence for all non-empty subsets~$A\subset {\mathbb{R}}$ containing the critical values of~$f$. \end{theorem} \begin{proof} We can essentially reduce the problem to two special cases: 1.~$f$ has no critical values and 2.~$f$ has precisely one critical value. Indeed, assume that there are at least two critical values. Enumerate them~$(c_i)$ according to the standard ordering of the real line. Every critical value~$c_i$ is contained in~$N_i=(c_{i-1},c_{i+1})$, possibly interpreting~$c_{i-1}=-\infty$ or~$c_{i+1}=\infty$. We thus obtain open covers of~$X$ and~$\mathrm{B}{\mathcal{C}}_f^A$ by considering~$U_i= f^{-1}N_i$ and~$V_i=\bar{f}^{-1}N_i$, respectively. Do note that the only non-empty intersections are of the form~$N_{i,j}=N_i\cap N_j$ for~$j=i,i+1$. Hence it suffices to prove that~$\phi$ restricts to weak homotopy equivalences~$\phi|_{V_{i,j}}\colon V_{i,j}\rightarrow U_{i,j}$ with~$V_{i,j}=V_i\cap V_j$ and~$U_{i,j}=U_i\cap U_j$ subject to~$j=i,i+1$. This essentially follows since~$X$ and~$\mathrm{B}{\mathcal{C}}_f^A$ can be described as homotopy colimits over the given covers. See Theorem 6.7.9 in \cite{tom2008algebraic} for a precise reference. The open cover~$(N_i)$ is constructed so that~$N_{i,j}=N_i\cap N_j$ contains precisely one critical value when~$i=j$, whereas it contains no critical values when~$j=i+1$.
So the reduction to the above two special cases is made precise. Fix~$N=N_{i,j}$,~$U=U_{i,j}$ and~$V=V_{i,j}$ for~$j=i$ or~$i+1$. Also, pick a value~$a$ in~$N$. If~$i=j$, then~$a$ must be chosen as the critical value. Otherwise, it can be any real value. Lemma~\ref{lemma:refinement} allows us to assume that~$a$ is contained in~$A$ without loss of generality. We may thus assume that the inclusion~$f^{-1}a\hookrightarrow U$ factors through~$\phi|_V$. By the two-out-of-three property it only remains to see that~$f^{-1}a\hookrightarrow U$ and~$f^{-1}a\hookrightarrow V$ are weak homotopy equivalences. The first follows directly from Lemma~\ref{lemma:oneCriticalStratified}. To see that~$f^{-1}a\hookrightarrow V$ is a weak homotopy equivalence we consider an arbitrary commutative diagram \begin{center} \begin{tikzcd} \partial\Delta^m \arrow[d, "i"] \arrow[r, "m"] & f^{-1}a \arrow[d] \\ \Delta^m \arrow[r, "l"] & V \end{tikzcd} \end{center} and find a lift~$L\colon \Delta^m\rightarrow f^{-1}a$ up to homotopy relative to~$\partial\Delta^m$. Lemma~\ref{lemma:compactIntoCfA} tells us that the image of~$l$ is contained in a subspace~$\langle \bar{a}, \Delta^n\rangle $. The deformation retract onto~$\langle \bar{a}, \mathrm{sp}\Delta^n\rangle $, provided by Lemma~\ref{lemma:spine}, fixes~$f^{-1}a$ and hence~$l$ factors through~$\langle \bar{a}, \mathrm{sp}\Delta^n\rangle $, up to homotopy relative to~$\partial \Delta^m$. A point in~$V$ must map into~$N$ under~$\bar{f}$, hence we can assume~$\bar{a}$ to be contained in~$N$ without loss of generality. Now Lemma~\ref{lemma:deformationRetract} applies to give a deformation retract from~$\langle \bar{a}, \mathrm{sp}\Delta^n\rangle $ to~$f^{-1}a$. But then we can homotope~$l$ to a map which factors through~$f^{-1}a$, up to homotopy relative to~$\partial \Delta^m$.
\end{proof} \section{Reeb functions for homology computations} \label{section:spectralsequence} Section~\ref{subsec:sectionsequence} introduces the \emph{section spectral sequence} associated naturally to a continuous function~$f\colon X\rightarrow {\mathbb{R}}$. A noteworthy property of this spectral sequence is that its second page only consists of two non-trivial columns (Proposition~\ref{proposition:E2isInfty}). In Section~\ref{subsec:criticalsequence} we introduce the much smaller \emph{critical spectral sequence}. Basic computational tools are deduced and illustrated. \subsection{The section spectral sequence} \label{subsec:sectionsequence} Let us fix a generalized homology theory~$k_\ast$. Recall that a simplicial space~$X_\bullet$ comes with a spectral sequence converging to~$k_\ast|X_\bullet|$. The cohomological version is derived in~\cite{segal1968classifying}. For every~$q$, we associate the simplicial abelian group~$k_q X_\bullet \colon \Delta^{\mathop{\rm op}\nolimits} \rightarrow \text{(Abelian groups)} $ by applying~$k_q$ level-wise. Entries on the first page are given by~$\mathrm{E}^1_{p,q}=k_q X_p$, whereas the differential is induced by the face maps in~$k_q X_\bullet$. Indeed, a simplicial abelian group~$A_\bullet$ defines a chain complex by collapsing degeneracies entry-wise and defining the differential~$\partial=\sum_i (-1)^{i} d_i$. Entries on the second page are given by~$\mathrm{E}^2_{p,q}= \mathrm{H}_p k_q X_\bullet$. A map~$F\colon X_\bullet\rightarrow Y_\bullet$ of simplicial spaces naturally induces homomorphisms~$k_qF_p\colon\mathrm{E}^1_{p,q}\rightarrow \bar{\mathrm{E}}^1_{p,q}$ and hence maps~$\mathrm{E}^2_{p,q}\rightarrow\bar{\mathrm{E}}^2_{p,q}$ on the second page. This does in fact define a morphism in the category of (homology) spectral sequences, but there is no need for further abstraction.
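For ease of reference, the data just described can be displayed as
\[
\mathrm{E}^1_{p,q}=k_q X_p, \qquad \partial=\sum_{i=0}^{p} (-1)^{i} (d_i)_\ast \colon k_q X_p \rightarrow k_q X_{p-1}, \qquad \mathrm{E}^2_{p,q}= \mathrm{H}_p\big( k_q X_\bullet, \partial \big).
\]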
We have seen that a continuous function~$f\colon X\rightarrow {\mathbb{R}}$, on a topological space~$X$, defines a section category~$\mathcal{S}_f$ whose morphisms are the sections~$\sigma\colon [a,b]\rightarrow X$. \begin{definition} The \emph{section spectral sequence} of a continuous function~$f\colon X\rightarrow {\mathbb{R}}$ is the spectral sequence naturally associated to the simplicial topological space~$\mathrm{N}\mathcal{S}_f$. \end{definition} Theorem~\ref{intro:mainresult} tells us that the section spectral sequence converges to~$k_\ast X$ whenever~$f\colon X\rightarrow {\mathbb{R}}$ is a Reeb function: \begin{proposition} \label{prop:SS} For~$f\colon X\rightarrow {\mathbb{R}}$ a Reeb function, the section spectral sequence converges to~$k_\ast X$: \[ \mathrm{H}_p k_q \mathrm{N}\mathcal{S}_f \Rightarrow k_{p+q} X. \] \end{proposition} The first algebraic observation is concerned with computability: there are only two non-zero columns on the second page of the section spectral sequence. In particular, the sequence collapses on the second page; all differentials on the second and later pages are zero. \begin{proposition} \label{proposition:E2isInfty} Let~$f\colon X\rightarrow {\mathbb{R}}$ be a continuous function. The section spectral sequence of~$\mathrm{N} \mathcal{S}_f$ satisfies~$\mathrm{E}_{p,q}^2=0$ for~$p\geq 2$. \end{proposition} \begin{proof} The additivity axiom of~$k_\ast$ implies \[ \mathrm{E}_{p,q}^1=k_q\Big(\coprod_{a_0< \dots < a_p} \mathcal{S}_f[a_0,\dots, a_p]\Big)\simeq \bigoplus_{a_0< \dots < a_p} k_q\mathcal{S}_f[a_0,\dots,a_p]. \] For an arbitrary~$q$ we fix a~$p\geq 2$ and denote by~$\partial_p\colon \mathrm{E}^1_{p,q}\rightarrow \mathrm{E}^1_{p-1,q}$ the boundary map. On the level of elements, an element~$\alpha$ in~$k_q\mathcal{S}_f[a_0,\dots,a_p]$ is mapped to the alternating sum~$\sum (-1)^{i} d_i \alpha$ with~$d_i\alpha$ an element of~$k_q\mathcal{S}_f[a_0,\dots,\hat{a}_i,\dots,a_p]$.
We shall see that the kernel of~$\partial_p$ is contained in the image of~$\partial_{p+1}$, thus justifying the assertion. An arbitrary element~$\alpha$ in the kernel of~$\partial_p$ can be presented as a linear combination~$c_1\alpha_1+\dots+c_n\alpha_n$ where each~$\alpha_i$ is in some~$k_q\mathcal{S}_f[a_{i,0},\dots,a_{i,p}]$. We proceed by defining a process which splits~$\alpha$ into a sum~$\alpha'+\partial_{p+1} \beta'$ and arguing that finitely many iterations must eventually lead to~$\alpha=\partial_{p+1} \beta$ for some~$\beta$ in~$k_q(\mathrm{N}\mathcal{S}_f)_{p+1}$. Start by picking the~$\alpha_i$ with smallest possible~$a_{i,0}$ and recursively smallest possible~$a_{i,j+1}$ subject to smallest possible~$a_{i,j}$. Since~$\alpha$ is assumed to be in the kernel of~$\partial_p$, there must be a~$j$ and~$n$ such that~$d_n\alpha_j$ cancels~$d_p\alpha_i$. But~$\alpha_i$ was chosen so that~$n$ must necessarily equal~$p$, for otherwise we would have picked~$\alpha_j$ in place of~$\alpha_i$. There are now two cases: 1. The class~$\alpha_j$ restricts to~$\alpha_i$. That is, we can pick~$j$ such that there is a class~$\beta$ in~$k_q\mathcal{S}_f[a_{i,0},\dots,a_{i,p}, a_{j,p}]$ satisfying~$d_{p+1}\beta=\alpha_i$ and~$d_p \beta = \alpha_j$. If so, replace~$(-1)^p(\alpha_i-\alpha_j)$ with~$\partial_{p+1}\beta-\sum_{k\neq p,p+1}d_k\beta$ in the formula~$c_1\alpha_1+\dots+c_n\alpha_n$ to obtain a new formula~$\alpha=c_1'\alpha_1'+\dots+c_m'\alpha_m'+\partial_{p+1} \beta$. Now there is one less summand~$\alpha_k'$ with the minimal configuration of~$\alpha_i$. 2. Otherwise, if there is no such~$j$, we rather apply~$d_{p-1}$ to~$\alpha_i$. This produces an element~$d_{p-1}\alpha_i$ contained in~$k_q\mathcal{S}_f[a_{i,0},\dots,\hat{a}_{i,p-1},a_{i,p}]$. There must be a~$k$ and~$m$ such that~$d_m \alpha_k$ cancels~$d_{p-1}\alpha_i$. As in 1. the minimality of~$\alpha_i$ implies that~$m=p-1$.
Since~$p\geq 2$ we can lift~$\alpha_i$ to~$k_q\mathcal{S}_f[a_{i,0},\dots,a_{i,p-1},a_{k,p-1},a_{i,p}]$ by adding the distinct label from~$\alpha_k$. Denote such a lift by~$\beta$. This element satisfies~$d_{p-1}\beta=\alpha_k$ and~$d_p\beta = \alpha_i$. Moreover, among the~$d_l\beta$ for~$l\neq p-1,p$, only~$d_{p+1}\beta$ has a smaller configuration than~$\alpha_i$. As in~1. we replace~$(-1)^p(\alpha_i-\alpha_k)$ with~$\partial_{p+1}\beta-\sum_{l\neq p-1,p} d_l\beta$, yielding a new linear combination~$\alpha=c_1'\alpha_1'+\dots+c_m'\alpha_m'+\partial_{p+1} \beta$. We did not manage to reduce the number of minimal configurations, but rather replaced a minimal configuration by a smaller one. Iterating this process must terminate. Indeed, case 2. can only repeat finitely many times as there are only finitely many summands in~$\alpha$. So we have successfully constructed an iterative process that reduces the number of summands with a minimal configuration after finitely many steps. \end{proof} With our choice of indexing, a homology spectral sequence with only two adjacent non-zero columns is a collection of short exact sequences. I refer to~\cite[p. 124]{weibel1995introduction} for details. \begin{corollary} \label{cor:SES} Any continuous function~$f\colon X\rightarrow {\mathbb{R}}$ determines short exact sequences \[ \mathrm{H}_0 k_q \mathrm{N}\mathcal{S}_f\hookrightarrow k_q \mathrm{B} \mathcal{S}_f \twoheadrightarrow \mathrm{H}_1 k_{q-1}\mathrm{N}\mathcal{S}_f \] for all~$q\geq 1$. \end{corollary} An~$f$--path is a path~$p\colon I\rightarrow X$ in~$X$ which reparametrizes to a concatenation of paths contained in fibers of~$f$ and possibly reversed sections of~$f$. Define the relation~$\sim_f$ on~$X$ by declaring that points are equivalent if they can be connected by an~$f$--path.
\begin{proposition} \label{prop:BSfZeroHomology} For~$f\colon X\rightarrow {\mathbb{R}}$ a continuous function, the abelian group~$\mathrm{H}_0\mathrm{B} \mathcal{S}_f$ is the free abelian group on~$X/ \sim_f$. \end{proposition} \begin{proof} The assertion is a direct consequence of~$\mathrm{H}_0 \mathrm{B} \mathcal{S}_f=\mathrm{E}^2_{0,0}$. Indeed,~$\mathrm{E}^2_{0,0}=\mathrm{H}_0\mathrm{H}_0 \mathrm{N} \mathcal{S}_f$ where the first application of~$\mathrm{H}_0$ identifies points in the same path components of fibers, whereas the second identifies points that can be connected via sections. \end{proof} The following calculation justifies Example~\ref{example:BsfNotX}. \begin{example} \label{ex:oscillation} Consider the continuous function~$f\colon I\rightarrow {\mathbb{R}}$ given by~$f(x)=x\sin (\frac{1}{x})$ for~$x>0$ and~$f(0)=0$. We shall verify that the homology of~$\mathrm{B} \mathcal{S}_f$ with coefficients in~${\mathbb{Z}}$ is \[ \mathrm{H}_n\mathrm{B} \mathcal{S}_f = \twopartdef{{\mathbb{Z}}^2}{n=0}{0}{n\neq 0} \] It follows from Proposition~\ref{proposition:E2isInfty} that the associated spectral sequence has~$\mathrm{E}^2_{p,q}=0$ whenever~$p\geq 2$. Moreover, all of the section spaces~$\mathcal{S}_f[a_0,\dots,a_n]$ are discrete except for~$\mathcal{S}_f[0]=f^{-1}0$, which is homeomorphic to the subspace~$\{\frac{1}{n} \mid n=1,2,\dots \}\cup \{0\}$ of~${\mathbb{R}}$. Nonetheless, the topological space~$\mathcal{S}_f[0]$ is homotopically discrete as it is weakly equivalent to the natural numbers equipped with the discrete topology. We thus conclude that~$\mathrm{E}^1_{p,q}=0$ for~$q\geq 1$. So in light of Proposition~\ref{proposition:E2isInfty} it only remains to calculate~$\mathrm{E}^2_{0,0}$ and~$\mathrm{E}^2_{1,0}$. Let~$\partial_1\colon \mathrm{E}^1_{1,0}\rightarrow \mathrm{E}^1_{0,0}$ be the remaining non-zero differential on the first page and pick a linear combination~$c_1\sigma_1+\dots+c_n\sigma_n$ in its kernel.
Let~$m_i$ denote the minimum of~$\sigma_i$, a section into~$I$, and let~$m$ be the smallest number among the~$m_i$. Observe that every~$\sigma_i$ maps into~$I_m=[m,1]$. The inclusion~$j\colon I_m\hookrightarrow I$ satisfies~$f|_{I_m}=f\circ j$ and so there is an induced map~$\bar{\mathrm{E}}^\ast_{p,q}\rightarrow \mathrm{E}^\ast_{p,q}$ between the associated spectral sequences. Theorem~\ref{intro:mainresult} applies to~$f|_{I_m}$ so that~$\mathrm{H}_1 I_m=0$ implies~$\bar{\mathrm{E}}^2_{1,0}=0$. Hence~$c_1\sigma_1+\dots+c_n\sigma_n$ is in the image of~$\bar{\partial}_2\colon \bar{\mathrm{E}}^1_{2,0}\rightarrow \bar{\mathrm{E}}^1_{1,0}$, but then it is also in the image of~$\partial_2\colon \mathrm{E}^1_{2,0}\rightarrow \mathrm{E}^1_{1,0}$. The isomorphism~$\mathrm{E}^2_{1,0}\simeq0$ thus follows. Proposition~\ref{prop:BSfZeroHomology} implies~$\mathrm{E}^2_{0,0}\simeq {\mathbb{Z}}^2$: All pairs of points in~$(0,1]$ can be connected by a finite zigzag of sections, whereas no section starts nor terminates in~$0$. Hence~$\mathrm{H}_0\mathrm{B}\mathcal{S}_f\simeq {\mathbb{Z}}^2$. \end{example} \subsection{The critical spectral sequence} \label{subsec:criticalsequence} For a continuous function~$f\colon X\rightarrow {\mathbb{R}}$, the associated section spectral sequence \[ \mathrm{H}_pk_q \mathrm{N} \mathcal{S}_f \Rightarrow k_{p+q} \mathrm{B} \mathcal{S}_f \] collapses on the second page, which only consists of two non-zero columns. Alas, it has a huge problem when it comes to computability. Computing~$\mathrm{H}_p k_q \mathrm{N} \mathcal{S}_f$ amounts to an uncountable number of homology computations. For we would have to determine~$k_q\mathcal{S}_f[a,b]$ for all real numbers~$a\leq b$. But if~$f\colon X \rightarrow {\mathbb{R}}$ is a Reeb function, we shall see that the complexity is drastically reduced.
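Schematically, the reduction replaces the uncountable direct sums on the first page of the section spectral sequence by sums indexed by critical values only:
\[
\bigoplus_{a< b \text{ in } {\mathbb{R}}} k_q\mathcal{S}_f[a,b]
\qquad \text{is replaced by} \qquad
\bigoplus_{c_0< c_1 \text{ critical}} k_q\mathcal{S}_f[c_0,c_1].
\]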
Recall from Section~\ref{subsec:sf} that for every non-empty subset~$A$ of~${\mathbb{R}}$ which contains the critical values of~$f$, there is the subcategory~${\mathcal{C}}_f^A$ of sections whose source and target are both contained in~$A$. From here on we assume~$f$ to have at least one critical value and introduce the \emph{critical category}~${\mathcal{C}}_f={\mathcal{C}}_f^{\{ \text{critical values}\}}$. \begin{definition} \label{def:criticalSS} The \emph{critical spectral sequence} of a Reeb function~$f\colon X \rightarrow {\mathbb{R}}$ is the spectral sequence naturally associated to~$\mathrm{N}{\mathcal{C}}_f$. \end{definition} Theorem~\ref{thm:mainresult} tells us that the critical spectral sequence converges to~$k_\ast X$. \begin{proposition} \label{prop:computability} For~$f\colon X \rightarrow {\mathbb{R}}$ a Reeb function, the critical spectral sequence converges to~$k_\ast X$: \[ \mathrm{H}_pk_q \mathrm{N} {\mathcal{C}}_f \Rightarrow k_{p+q} X. \] \end{proposition} As opposed to the section spectral sequence, we need only compute the generalized homology groups of~$\mathcal{S}_f[c_0,c_1]$ whenever~$c_0<c_1$ are critical values of~$f$. If for example~$X$ is compact, this reduces the number of~$q$th generalized homology groups to be computed from uncountably many to finitely many. We shall reduce the complexity even further: it suffices to compute the generalized homology groups of~$\mathcal{S}_f[c_0,c_1]$ whenever~$c_0<c_1$ are \emph{successive} critical values. Let us introduce some convenient notation before stating the formal result. I remind that~$\mathrm{E}^1_{1,q}\simeq \oplus k_q\mathcal{S}_f[c_0,c_1]$, the sum ranging over all pairs of critical values~$c_0< c_1$. For every~$q\geq 0$, the differential~$\partial^1_{1,q}\colon \mathrm{E}^1_{1,q}\rightarrow \mathrm{E}^1_{0,q}$ restricts to a morphism \[ \partial^s_{1,q}\colon \bigoplus k_q\mathcal{S}_f[c_0,c_1] \rightarrow \bigoplus k_q\mathcal{S}_f[c]
\] where the sum on the left ranges only over successive pairs of critical values~$c_0<c_1$. \begin{proposition} \label{prop:easyToCompute} For~$f\colon X \rightarrow {\mathbb{R}}$ a Reeb function, the second page of the associated critical spectral sequence satisfies~$\mathrm{E}^2_{1,q}\simeq \ker \partial^s_{1,q}$ for all~$q\geq 2$. \end{proposition} \begin{proof} Let~$\beta$ be an element in~$\mathrm{E}^1_{2,q}$. The relation imposed by~$\partial^1_{2,q}$ on~$\mathrm{E}^2_{1,q}$ is determined by the equation~$\partial^1_{2,q}\beta=d_0\beta-d_1\beta+d_2\beta$ and implies that~$[d_1\beta]=[d_0\beta]+[d_2\beta]$. If~$\beta$ is in~$k_q\mathcal{S}_f[c_0,c_1,c_2]$, then~$d_0\beta$ is in~$k_q \mathcal{S}_f[c_1,c_2]$ etc. So the map~$d_i$ simply forgets the~$i$th label. For~$[\alpha]$ in~$\mathrm{E}^2_{1,q}$ represented by~$\alpha$ in~$k_q \mathcal{S}_f[c_0,c_1]$, we list all intermediate critical values~$d_0<\dots < d_n$ with~$d_0=c_0$ and~$d_n=c_1$. Now it is only a matter of applying the relation imposed by~$\partial^1_{2,q}$~$n$ times to rewrite~$[\alpha]$ as a linear combination~$[\alpha_1]+\dots+[\alpha_n]$ with~$\alpha_i$ in~$k_q \mathcal{S}_f[d_{i-1},d_i]$. Hence the elements in~$\ker \partial^s_{1,q}$ generate~$\mathrm{E}^2_{1,q}$. Moreover, the relation induced by~$\partial^1_{2,q}$ is trivial on this set of generators, for they cannot be decomposed further. \end{proof} As a last computational tool, we recognize the homotopy type of section spaces decorated by successive critical values. These section spaces have the homotopy type of any intermediate fiber. Hence the critical sequence recovers the homology of~$X$ from the homology of certain fibers. I refer to Example~\ref{example:torus} for a hands-on demonstration. \begin{proposition} \label{prop:finite} Consider~$c$ and~$d$ two successive critical values of~$f$ as well as a real number~$a$ in~$(c,d)$. The evaluation map~$\mathrm{eval}_a\colon\mathcal{S}_f[c,d]\rightarrow f^{-1}a$ is a homotopy equivalence.
\end{proposition} \begin{proof} Let~$g\colon [c,d]\times f^{-1}(c,d)\rightarrow X$ be a family of reparametrized sections which exists per Proposition~\ref{proposition:modifiedFlowLines}. Its adjoint~$\bar{g}\colon f^{-1}(c,d)\rightarrow \mathcal{S}_f[c,d]$ restricts to a map~$g_a \colon f^{-1}a\rightarrow \mathcal{S}_f[c,d]$, mapping a point~$x$ in~$f^{-1}a$ to the reparametrized flow-line~$g_x$ through~$x$. It is clear that~$\mathrm{eval}_a\circ g_a$ is the identity on~$f^{-1}a$. Conversely, the composition~$ g_a\circ\mathrm{eval}_a$ maps a section~$\sigma\colon [c,d]\rightarrow X$ to the reparametrized flow-line~$g_{\sigma(a)}$ through~$\sigma(a)$. Note that we may identify~$\sigma$ with the section~$b\mapsto g_{\sigma(b)}(b)$. Denote by~$H\colon [c,d]\times I\rightarrow [c,d]$ the straight line homotopy~$H(b,t)=(1-t)b+ta$, from which we define a homotopy~$G\colon \mathcal{S}_f[c,d]\times I\rightarrow\mathcal{S}_f[c,d]$. It maps a tuple~$(\sigma,t)$ to the section~$b\mapsto g_{\sigma \circ H(b,t)}(b)$. Notice that~$G(\sigma,0)=\sigma$ whereas~$G(\sigma,1)=g_{\sigma(a)}$. We have thus constructed a homotopy from the identity on~$\mathcal{S}_f[c,d]$ to~$ g_a\circ\mathrm{eval}_a$. \end{proof} We calculate the homology of the torus to illustrate the computational implications of Propositions~\ref{prop:easyToCompute} and~\ref{prop:finite}.
\begin{example} \label{example:torus} Let~$h\colon T\rightarrow {\mathbb{R}}$ be the height function on the torus depicted \begin{center} \begin{tikzpicture}[scale= 0.7] \draw [thick] (0,0) ellipse (2cm and 3cm); \draw [thick] (0,0) ellipse (0.5cm and 1cm); \draw[->] (3,0)--(5,0); \draw [thick] (6,-3.5) -- (6,3.5); \node [] at (6,1) {$\bullet$}; \node [] at (6,-1) {$\bullet$}; \node [] at (6,3) {$\bullet$}; \node [] at (6,-3) {$\bullet$}; \node [right] at (6,1) {$c$}; \node [right] at (6,-1) {$b$}; \node [right] at (6,3) {$d$}; \node [right] at (6,-3) {$a$}; \node [above] at (4, 0) {$h$}; \node [] at (12,0) { \begin{tabular}{ |c | c| } \hline $r$ & $h^{-1} r$ \\ \hline $a$ & pt \\ $\frac{a+b}{2}$ & $\partial \Delta^2$ \\ $b$ & $\partial\Delta^2 \vee \partial \Delta^2$ \\ $\frac{b+c}{2}$ & $\partial \Delta^2 \coprod \partial \Delta^2$ \\ $c$ & $\partial\Delta^2 \vee \partial \Delta^2$ \\ $\frac{c+d}{2}$ & $\partial \Delta^2$ \\ $d$ & pt \\ \hline \end{tabular} }; \end{tikzpicture} \end{center} It has four critical values~$a$,~$b$,~$c$ and~$d$. Proposition~\ref{prop:finite} allows us to fill out the above table of homotopy types with ease.
We determine the first page of the critical spectral sequence, knowing that we need only compute the homology of section spaces between successive critical values (Proposition~\ref{prop:easyToCompute}): \begin{center} \begin{tikzpicture} \draw (0,0) -- (8.5,0); \draw (0,0) -- (0,4.5); \draw [->] (4.5,1) -- (3.25,1); \draw [->] (4.5,2.5) -- (3.25,2.5); \node [] at (2,1) {${\mathbb{Z}}\oplus {\mathbb{Z}}\oplus {\mathbb{Z}} \oplus {\mathbb{Z}}$}; \node [] at (2,2.5) {$0\oplus {\mathbb{Z}}^2\oplus {\mathbb{Z}}^2\oplus 0$}; \node [] at (2,4) {$0$}; \node [] at (6,1) {${\mathbb{Z}}\oplus {\mathbb{Z}}^2\oplus {\mathbb{Z}}$}; \node [] at (6,2.5) {${\mathbb{Z}}\oplus {\mathbb{Z}}^2 \oplus {\mathbb{Z}}$}; \node [] at (6,4) {$0$}; \node [] at (8,1) {$0$}; \node [] at (8,2.5) {$0$}; \node [] at (8,4) {$0$}; \node [right] at (8.5,0) {$p$}; \node [above] at (0,4.5) {$q$}; \node [above] at (4,1) {$\partial_{1,0}$}; \node [above] at (4,2.5) {$\partial_{1,1}$}; \end{tikzpicture} \end{center} Homology groups are split up according to the above table, e.g. \[ \mathrm{E}^1_{1,0}=\mathrm{H}_0 \partial \Delta^2\oplus \mathrm{H}_0(\partial \Delta^2\coprod \partial \Delta^2)\oplus\mathrm{H}_0 \partial \Delta^2\simeq {\mathbb{Z}}\oplus {\mathbb{Z}}^2 \oplus {\mathbb{Z}}. \] The differentials are induced by subtracting source from target:~$\partial=d_0-d_1=t-s$. For instance, the induced map~$\mathrm{H}_0 t\colon \mathrm{H}_0 h^{-1}\frac{a+b}{2}\rightarrow \mathrm{H}_0 h^{-1} b$ is the identity~$1\colon {\mathbb{Z}} \rightarrow {\mathbb{Z}}$ in coordinates. This is because the target of a flow-line through~$h^{-1}\frac{a+b}{2}$ meets the path component of~$h^{-1} b$. By such geometric reasoning we deduce \[ \partial_{1,0}= \begin{bmatrix} -1 & 0 & 0 & 0 \\ 1 & -1 & -1 & 0 \\ 0 & 1 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \;\; \text{ and } \;\; \partial_{1,1}= \begin{bmatrix} 1 & -1 & 0 & 0 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \end{bmatrix}.
\] Elementary linear algebra gives the second page: both matrices have rank three, with~$\ker \partial_{1,0}$ generated by~$(0,1,-1,0)$ and~$\ker \partial_{1,1}$ generated by~$(1,1,1,1)$, while both cokernels are isomorphic to~${\mathbb{Z}}$. \begin{center} \begin{tikzpicture} \draw (0,0) -- (8.5,0); \draw (0,0) -- (0,4.5); \node [] at (2,1) {${\mathbb{Z}}$}; \node [] at (2,2.5) {${\mathbb{Z}}$}; \node [] at (2,4) {$0$}; \node [] at (5,1) {${\mathbb{Z}}$}; \node [] at (5,2.5) {${\mathbb{Z}}$}; \node [] at (5,4) {$0$}; \node [] at (8,1) {$0$}; \node [] at (8,2.5) {$0$}; \node [] at (8,4) {$0$}; \node [right] at (8.5,0) {$p$}; \node [above] at (0,4.5) {$q$}; \end{tikzpicture} \end{center} We read off that~$\mathrm{H}_0 T \simeq {\mathbb{Z}}$ and~$\mathrm{H}_2 T\simeq{\mathbb{Z}}$, whereas~$\mathrm{H}_1 T$ is an extension of~${\mathbb{Z}}$ by~${\mathbb{Z}}$, hence isomorphic to~${\mathbb{Z}}^2$. The remaining homology groups are trivial. \end{example} \section{Reeb spaces} \label{section:reeb} The combinatorial Reeb space is introduced in Section~\ref{subsec:comb}. Section~\ref{section:thmA} is merely a recap of Quillen's theorem~A and the theory of collapsing schemes due to K. Brown. The last two sections are dedicated to clarifying and proving Theorems~\ref{intro:combinatorialReebIsClassicalReeb} and~\ref{intro:combinatorialReebIsGraph}. \subsection{From topological to combinatorial Reeb spaces} \label{subsec:comb} The topological Reeb space is defined for any continuous function~$f\colon X\rightarrow {\mathbb{R}}$. Given such a function one declares points to be equivalent if they are in the same path component of some fiber:~$x\sim_f y$ if there is a real number~$a$ such that~$x$ and~$y$ lie in the same path component of~$f^{-1}a$. \begin{definition} \label{definition:topologicalReebSpace} The \emph{topological Reeb space} associated to a continuous function~$f\colon X\rightarrow {\mathbb{R}}$ is the quotient space~$\mathrm{R}_f=X/\sim_f$. \end{definition} The topological Reeb space is commonly referred to as the Reeb graph, a name which is accurate for, e.g., Morse functions, albeit not in general.
\begin{example} \label{example:ReebNotGraph} Consider the Hawaiian earring~$\mathbb{H}$ embedded as a subspace of~$\mathbb{R}^2$: \begin{center} \begin{tikzpicture}[scale=1] \foreach \n in {1,...,1000} { \draw (0,1/\n) circle (1/\n); } \end{tikzpicture} \end{center} The fibers of the horizontal projection~$\mathrm{pr}_1\colon \mathbb{H}\rightarrow {\mathbb{R}}$ are all totally disconnected. We thus conclude that the topological Reeb space~$\mathrm{R}_{\mathrm{pr}_1}$ is homeomorphic to~$\mathbb{H}$. But the fundamental group of~$\mathbb{H}$ is not free~\cite{de1992fundamental}. \end{example} I, for one, would very much like to define a Reeb space whose homotopy type is that of a graph. This could very well serve as motivation for our next definition. Recall that a continuous function~$f\colon X\rightarrow {\mathbb{R}}$ has an associated section category~$\mathcal{S}_f$. Applying the nerve followed by the level-wise path components functor produces a simplicial set~$\pi_0\mathrm{N}\mathcal{S}_f$. \begin{definition} \label{definition:combinatorialReeb} The \emph{combinatorial Reeb space} of a continuous function~$f\colon X \rightarrow {\mathbb{R}}$ is the simplicial set~$\pi_0\mathrm{N} \mathcal{S}_f$. \end{definition} Simplicial sets carry a homotopy theory equivalent to the standard theory on topological spaces: the homotopy type of a simplicial set~$S$ is that of the topological space~$|S|$. In particular, the homotopy types of~$\pi_0\mathrm{N}\mathcal{S}_f$ and~$\mathrm{R}_f$ can be compared by realizing~$\pi_0\mathrm{N}\mathcal{S}_f$. \subsection{More background on simplicial sets} \label{section:thmA} I will give a brief reminder on Quillen's well-known theorem A~\cite{quillen1973higher} as well as a theorem on collapsing schemes due to K. Brown~\cite{brown1992geometry}. Both are useful to prove Theorems~\ref{intro:combinatorialReebIsClassicalReeb} and~\ref{intro:combinatorialReebIsGraph}. Let us first review Quillen's theorem A.
For any functor~$F\colon{\mathcal{C}}\rightarrow{\mathcal{D}}$ and object~$d$ in~${\mathcal{D}}$, we define the slice category~$F\downarrow d$ as the pullback \begin{center} \begin{tikzcd} F\downarrow d \arrow[d] \arrow[r] & \mathop{\rm Fun}\nolimits([1],{\mathcal{D}}) \arrow[d, "{(\mathrm{eval}_0,\mathrm{eval}_1)} "] \\ {\mathcal{C}}\times[0] \arrow[r, "F\times d"] & {\mathcal{D}}\times {\mathcal{D}} \end{tikzcd} \end{center} where~$\mathop{\rm Fun}\nolimits([1],{\mathcal{D}})$ is the category of functors~$[1]\rightarrow {\mathcal{D}}$. More explicitly, an object is a tuple~$(c,m)$ in the product~$\mathrm{ob}{\mathcal{C}}\times \mathrm{mor}{\mathcal{D}}$ subject to~$s(m)=F(c)$ and~$t(m)=d$; a morphism~$\alpha\colon (c,m)\rightarrow (c',m')$ is a morphism~$\alpha\colon c\rightarrow c'$ such that~$m= m'\circ F(\alpha)$. This data is commonly depicted \begin{center} \begin{tikzcd} F(c) \arrow[d, "m"] & \\ d & \end{tikzcd} and \begin{tikzcd} & F(c) \arrow[dr, "m"] \arrow[rr, "F(\alpha)"] & & F(c') \arrow[dl, "m' "] \\ & & d & \end{tikzcd} \end{center} Quillen's theorem A gives a sufficient condition as to when~$F$ realizes to a weak homotopy equivalence: if~$\mathrm{B} (F\downarrow d)$ is contractible for all~$d$, then~$\mathrm{B} F$ is a weak homotopy equivalence. For~$S$ a simplicial set, the topological space~$|S|$ is a CW complex whose cells are in bijection with the non-degenerate simplices in~$S$. In particular, it makes sense to talk about~$d_i e$, the~$i$th face of a cell~$e$ in~$|S|$. Moreover, we define the~$i$th horn of~$e$, which we will denote~$e_i$, to be the union of all its faces except the~$i$th one. We may safely deform~$|S|$ onto a quotient space~$Y$ by collapsing~$e$ onto~$e_i$ without changing the homotopy type of~$|S|$. Moreover,~$Y$ is clearly a CW complex again. Brown gives conditions for how to iterate this process of collapsing cells without changing the homotopy type of~$|S|$, while making sure that~$Y$ is still a CW complex.
Partition the non-degenerate simplices of~$S$ into three classes: essential, redundant and collapsible. The cells corresponding to redundant simplices are to be deformed along the collapsible cells, hence they are truly ``redundant''. So a function~$c$ from redundant simplices to collapsible simplices that maps~$n$--simplices to~$(n+1)$--simplices is required. If~$s$ is redundant and~$cs$ admits another redundant face~$s'$, then we write~$s'\leq s$. This data defines a \emph{collapsing scheme} if i)~$c$ is a bijection from redundant~$n$--simplices to collapsible~$(n+1)$--simplices for all~$n$ and ii) there is no infinite descending chain~$s \geq s'\geq s''\geq\cdots$. Proposition 1 in~\cite{brown1992geometry} can then be formulated: for a collapsing scheme on~$S$, the quotient map~$|S|\rightarrow Y$ is a weak homotopy equivalence onto a CW complex~$Y$ whose~$n$--cells are in bijection with the essential~$n$--simplices in~$S$. \subsection{Proof of Theorem~\ref{intro:combinatorialReebIsClassicalReeb}} \label{subsec:combistop} The nerve admits a left adjoint~$\tau_1\colon \text{(simplicial sets)}\rightarrow \text{(small categories)}$ commonly referred to as the \emph{fundamental category}. It agrees with the homotopy category when restricted to quasi-categories. A simplicial set~$S$ is sent to the category~$\tau_1 S$ whose object set is~$S_0$ and whose morphism set is the set of \emph{directed paths}~$S_1\coprod (S_1\times_{S_0} S_1) \coprod\cdots$ modulo the relations~$s_0 x\sim 1_x$ for all~$0$--simplices~$x$ and~$d_1s\sim d_0s \circ d_2s$ for all~$2$--simplices~$s$. More explicitly, a directed path is a tuple~$(e_1,\dots,e_n)$ of edges/$1$--simplices such that the source of~$e_{i+1}$ is the target of~$e_i$;~$d_1e_{i+1}= d_0 e_i$. The corresponding morphism in~$\tau_1 S$ is denoted~$e_1\cdots e_n$, utilizing the word notation. We define the~\emph{length} of a word~$e_1\cdots e_n$ to be~$n$ if there is no equivalent word on fewer letters.
An~$n$--simplex in~$\mathrm{N}\tau_1 S$ is a tuple~$(w_1,\dots,w_n)$ of composable words/morphisms. We define the length of~$(w_1,\dots,w_n)$ to be the length of the word~$w_1\cdots w_n$. Since~$\tau_1$ is left adjoint to~$\mathrm{N}$, there is an associated unit map~$\eta\colon S\rightarrow \mathrm{N} \tau_1 S$, which is natural in~$S$. It is not a weak homotopy equivalence in general, as pointed out by Thomason in~\cite{thomason1980cat}. But we shall see that the unit always induces a weak homotopy equivalence on combinatorial Reeb spaces. \begin{lemma} \label{lemma:abstract} Let~$S$ be a simplicial set such that \begin{enumerate}[i)] \item $S$ and~$\mathrm{N} \tau_1 S$ only differ in cells of length~$\geq 2$ and \item any word of length~$n$ has a unique presentation on~$n$ letters. \end{enumerate} Then the natural map~$\eta\colon S\rightarrow \mathrm{N}\tau_1 S$ is a weak homotopy equivalence. \end{lemma} \begin{proof} The theory of collapsing schemes, due to K. Brown~\cite{brown1992geometry}, is utilized to construct a homotopy inverse of the realized unit~$|\eta|$. I refer to Section~\ref{section:thmA} for a quick summary of this theory. All morphisms in~$\tau_1S$ will be represented uniquely per assumption ii). To partition~$\mathrm{N}\tau_1 S$ into redundant, collapsible and essential simplices we first declare every~$1$--simplex~$e_1\cdots e_n$ of length~$n\geq 2$ redundant and define its associated collapsible~$2$--simplex \[ c(e_1\cdots e_n )=(e_1\cdots e_{n-1} , e_n). \] This function is well-defined because such a presentation is unique. For~$m\geq 2$ we declare an~$m$--simplex of the form~$(e_{1,1} \cdots e_{1,i_1},\dots,e_{m,1} \cdots e_{m,i_m})$, whose length is greater than or equal to~$2$, redundant if it is not in the image of~$c$. Its associated~$(m+1)$--simplex is then determined by taking the largest~$k$ such that~$i_k\geq 2$ and factoring~$e_{k,1}\cdots e_{k,i_k}$ as~$(e_{k,1}\cdots e_{k,i_k-1}, e_{k,i_k})$.
In other words, we factorize out the last letter not already factorized. The remaining simplices are declared essential. Do note that these are precisely the ones whose length is equal to~$1$. The function~$c$ is constructed to be a bijection in the sense required by a collapsing scheme. The second demand follows since a chain associated to a redundant~$n$--simplex~$s$ cannot exceed the length of~$s$ and is therefore necessarily bounded. We thus have a map~$|S|\rightarrow Y$ with~$Y$ a CW complex whose~$n$--cells correspond to essential~$n$--simplices, i.e. those of length~1. Hence~$Y$ is necessarily equal to~$|S|$ and what we have constructed is a homotopy inverse to~$|\eta|$. \end{proof} For a combinatorial Reeb space~$\pi_0\mathrm{N}\mathcal{S}_f$, a morphism in~$\tau_1\pi_0 \mathrm{N} \mathcal{S}_f$ is a word~$[\sigma_1][\sigma_2]\cdots[\sigma_n]$ with representatives~$\sigma_i$ in~$\mathcal{S}_f[a_{i-1},a_i]$ subject to~$[s\sigma_{i+1}]=[t\sigma_{i}]$. So the source and target of successive classes must agree up to path components in fibers. \begin{lemma} \label{lemma:dipath} A word in~$\tau_1\pi_0\mathrm{N}\mathcal{S}_f$ of length~$n$ has a unique presentation on~$n$ letters. \end{lemma} \begin{proof} Assume that a word~$w$ is presented~$[\sigma_1]\cdots[\sigma_n]$ and~$[\rho_1]\cdots[\rho_n]$. We shall see that for all~$i$ the equality~$[\sigma_i]=[\rho_i]$ holds. From the domains of~$\sigma_i$ and~$\rho_j$ we extract sequences~$\bar{a}=(a_0,\dots,a_n)$ and~$\bar{b}=(b_0,\dots,b_n)$ of real numbers. These sequences must be equal. The statement is clear if~$n=1$ since~$a_0=b_0$ and~$a_n=b_n$. If~$n \geq 2$ we assume, for contradiction, that~$\bar{a}\neq \bar{b}$. Consider~$i$ the smallest index such that~$a_i\neq b_i$. We assume~$a_i<b_i$ without loss of generality. Introduce the real number~$c=\mathrm{min}(a_{i+1},b_i)$ and define two letters~$[\sigma']=[\sigma_{i+1}|_{[a_i,c]}]$ and~$[\rho']=[\rho_i|_{[b_{i-1},c]}]$ to rewrite~$[\sigma_i][\sigma']=[\rho']$.
We observe that~$[\sigma_{i+1}]$ and~$[\rho']$ overlap in~$[\sigma']$ and so there must be a section~$\tau$ satisfying~$[\tau]=[\sigma_i][\sigma_{i+1}]$, contradicting the length of~$w$. Hence the equality~$[\sigma_1]\cdots[\sigma_n]=[\rho_1]\cdots[\rho_n]$ can only be achieved if the letters are equal. For the only relation connecting them is to concatenate sections up to paths in fibers. \end{proof} The simplicial sets~$\pi_0\mathrm{N}\mathcal{S}_f$ and~$\mathrm{N}\tau_1\pi_0\mathrm{N}\mathcal{S}_f$ clearly only differ by simplices of length~$\geq 2$: a word~$(w_1,\dots,w_n)$ can only be of length~$1$ if~$w_i=[\sigma_i]$ and there is a section~$\rho$ such that~$[\rho]=[\sigma_1]\cdots[\sigma_n]$. \begin{lemma} \label{lemma:addComposition} Let~$f\colon X\rightarrow {\mathbb{R}}$ be a continuous function. The unit~$\eta\colon \pi_0\mathrm{N}\mathcal{S}_f\rightarrow \mathrm{N}\tau_1 \pi_0\mathrm{N}\mathcal{S}_f$ realizes to a weak homotopy equivalence. \end{lemma} \begin{proof} A direct consequence of Lemmas~\ref{lemma:abstract} and~\ref{lemma:dipath}. \end{proof} In the proof of Theorem~\ref{intro:combinatorialReebIsClassicalReeb} it will be convenient to utilize Theorem~\ref{intro:mainresult}. But to do so, we must first verify that a Reeb function~$f\colon X\rightarrow {\mathbb{R}}$ defines a Reeb function on the topological space~$\mathrm{R}_f$. \begin{lemma} \label{lemma:ReebhasReebData} If~$f\colon X\rightarrow {\mathbb{R}}$ is a Reeb function, then~$\mathrm{R}_f$ has the homeomorphism type of a~$1$--dimensional CW complex satisfying that the induced function~$\bar{f} \colon \mathrm{R}_f \rightarrow {\mathbb{R}}$ is piecewise linear. \end{lemma} \begin{proof} Let~$\mathrm{R}_f$ be the topological Reeb space, presented as a quotient space according to Definition~\ref{definition:topologicalReebSpace}. A topological space~$Q$, homeomorphic to~$\mathrm{R}_f$, will be constructed to satisfy the assertion. 
We assume, without loss of generality, that every point in~$X$ is either contained in some critical level or in between two critical levels. This can always be achieved by slightly modifying the stratification on~$X$: one may e.g. present~$X$ as a filtered colimit of preimages of closed intervals under the map~$f$. The set of~$0$--cells is given by~$\coprod \pi_0 f^{-1}c$ ranging over all critical values~$c$, whereas the set of~$1$--cells is given by~$\coprod \pi_0 \mathcal{S}_f[c,d]$ ranging over all successive critical values~$c<d$. The attaching maps come from the source and target: a~$1$--cell~$e$ labeled by a path component~$[\sigma]$ in~$\pi_0\mathcal{S}_f[c,d]$ admits a source in~$\pi_0 f^{-1}c$;~target in~$\pi_0f^{-1}d$. Denote the resulting CW complex~$Q$. Define the piecewise linear map~$\bar{f}\colon Q\rightarrow {\mathbb{R}}$ as follows. On a closed~$1$--cell~$e\simeq [0,1]$ labeled by a class in~$\pi_0 \mathcal{S}_f[c,d]$ it is the orientation-preserving linear map~$[0,1]\rightarrow [c,d]$. Note that~$\bar{f}\colon Q\rightarrow {\mathbb{R}}$ is constructed to be piecewise linear. There is a rather evident surjective map~$q\colon X\rightarrow Q$: If~$x$ is a point contained in some critical fiber~$f^{-1}c$, then it is mapped to the~$0$--cell labeled by~$[x]$ in~$\pi_0f^{-1}c$. Otherwise, consider the~$1$--cell~$e$ corresponding to~$g_x$, the reparametrized flow-line provided by Proposition~\ref{proposition:modifiedFlowLines}. We then send~$x$ to the point in~$e$ mapped to~$f(x)$ under~$\bar{f}$. Note that this map is constructed to be over~${\mathbb{R}}$ in the sense that~$f=\bar{f}\circ q$. It only remains to verify that~$Q$ has the universal quotient topology--for the topological space~$Q$ is clearly in bijection with~$\mathrm{R}_f$. We thus verify that a subset~$U$ of~$Q$ is open if~$q^{-1}U$ is open in~$X$. This is true if for any closed~$1$--cell~$e$, the intersection~$e\cap U$ is open in~$e$.
From the construction of~$Q$ it follows that there is a section~$\sigma\colon [c,d]\rightarrow X$ satisfying that~$q\circ \sigma\colon [c,d] \rightarrow Q$ corestricts to a homeomorphism~$e\simeq [c,d]$. But~$e\cap U$ corresponds to~$\sigma ([c,d])\cap q^{-1} U$ under the given homeomorphism, and~$\sigma ([c,d])\cap q^{-1} U$ is open in~$\sigma([c,d])$ given that~$q^{-1}U$ is open in~$X$. \end{proof} Let us end this section with a \begin{proof}[proof of Theorem~\ref{intro:combinatorialReebIsClassicalReeb}] Lemma~\ref{lemma:ReebhasReebData} allows us to assume that~$f\colon X\rightarrow {\mathbb{R}}$ induces a Reeb function~$\bar{f}\colon \mathrm{R}_f \rightarrow {\mathbb{R}}$. Theorem~\ref{intro:mainresult} thus guarantees~$\mathrm{R}_f\simeq \mathrm{B} \mathcal{S}_{\bar{f}}$. So it suffices to prove that the simplicial sets~$\pi_0\mathrm{N}\mathcal{S}_f$ and~$\mathrm{N} \mathcal{S}_{\bar{f}}$ are weakly equivalent. Functoriality of~$\mathcal{S}$ induces a (continuous) functor~$\mathcal{S}_f\rightarrow \mathcal{S}_{\bar{f}}$ from the quotient map~$q\colon X\rightarrow \mathrm{R}_f$. It maps a section~$\sigma$ of~$f$ to the section~$q\circ \sigma$ of~$\bar{f}$. Sections in the same path components of~$(\mathrm{N}\mathcal{S}_f)_1$ are obviously mapped to the same section of~$\bar{f}$, so we have an induced simplicial map~$F\colon \pi_0\mathrm{N} \mathcal{S}_f\rightarrow \mathrm{N} \mathcal{S}_{\bar{f}}$. A morphism in~$\tau_1\pi_0\mathrm{N}\mathcal{S}_f$ is a word~$[\sigma_1]\cdots[\sigma_n]$, represented by sections~$\sigma_i$ which are composable up to paths contained in fibers of~$f$. The unit~$\eta\colon \pi_0\mathrm{N}\mathcal{S}_f\rightarrow\mathrm{N}\tau_1 \pi_0 \mathrm{N}\mathcal{S}_f$ provides a factorization \[ \pi_0\mathrm{N}\mathcal{S}_f\xrightarrow{\eta}\mathrm{N}\tau_1 \pi_0 \mathrm{N}\mathcal{S}_f \xrightarrow{\mathrm{N} G}\mathrm{N}\mathcal{S}_{\bar{f}} \] of~$F$. 
We have already seen that~$\eta$ realizes to a weak homotopy equivalence in Lemma~\ref{lemma:addComposition}, so we need only verify that~$\mathrm{N} G$ is a weak equivalence. On the level of morphisms,~$G\colon\tau_1 \pi_0 \mathrm{N}\mathcal{S}_f \rightarrow \mathcal{S}_{\bar{f}} $ maps a word~$[\sigma_1]\cdots[\sigma_n]$ to the section~$(q\circ \sigma_n)\circ\dots\circ (q\circ \sigma_1)$ of~$\bar{f}$. We shall construct an inverse functor~$G^{-1}$ from which we conclude that~$\mathrm{B} G$ is in fact a homeomorphism. Consider a section~$\rho\colon [c,d] \rightarrow \mathrm{R}_f$ of~$\bar{f}$ which passes through no critical points, except possibly at the endpoints. Since~$\bar{f}\colon \mathrm{R}_f\rightarrow {\mathbb{R}}$ is assumed to be piecewise linear on a~$1$--dimensional CW complex, the image of~$\rho$ must be contained in some edge~$e$ of~$\mathrm{R}_f$. Take any point~$x$ in~$X$ that maps to the interior of~$e$ and define~$G^{-1}\rho$ to be~$[g_x|_{[c,d]}]$, the reparametrized flow-line provided by Proposition~\ref{proposition:modifiedFlowLines}. This class in~$\pi_0\mathcal{S}_f[c,d]$ is independent of the choice of~$x$. Indeed, assume that another point~$y$ is mapped to the interior of~$e$. Any choice of path~$p\colon I \rightarrow f^{-1}(c,d)$ between~$x$ and~$y$ defines a path from~$g_x|_{[c,d]}$ to~$g_y|_{[c,d]}$ in~$\mathcal{S}_f[c,d]$ via the composition \[ [c,d]\times I\xrightarrow{\mathrm{id}_{[c,d]}\times p} [c,d]\times f^{-1}(c,d)\xrightarrow{g} X. \] A general section~$\rho$ of~$\bar{f}$ can pass through only finitely many critical values, because~$f$ is Reeb. We therefore factorize it accordingly as~$\rho=\rho_n\circ\cdots\circ \rho_1$ and define~$G^{-1}\rho$ to be the word~$G^{-1}\rho_1\cdots G^{-1}\rho_n$. Applying~$G^{-1}\circ G $ to a word~$[\sigma_1]\cdots[\sigma_n]$ returns~$[g_{x_1}]\cdots [g_{x_n}]$, where~$x_i$ is chosen according to the above description of~$G^{-1}$.
We can verify the equality~$[\sigma_i]=[g_{x_i}]$ by considering all reparametrized flow-lines through points in the image of~$[\sigma_i]$. Hence~$G ^{-1}\circ G=\mathrm{id}_{\tau_1\pi_0\mathrm{N}\mathcal{S}_f}$. The remaining equality~$G\circ G^{-1}=\mathrm{id}_{\mathcal{S}_{\bar{f}}}$ follows since the induced map~$\bar{f}\colon \mathrm{R}_f \rightarrow {\mathbb{R}}$ is piecewise linear on a~$1$--dimensional CW complex; an edge in~$\mathrm{R}_f$ uniquely determines a section that traverses it. \end{proof} \subsection{Combinatorial Reeb spaces are graphs} \label{subsec:combisgraph} Before we prove the result, I must first elaborate on the meaning of a `graph'. A simplicial set is a \emph{graph} if it is aspherical--all higher homotopy groups are trivial--and the fundamental group is free for any choice of basepoint. The category~$(\text{graphs})$ of graphs is then the evident full subcategory of~$\text{(simplicial sets)}$. Our definition of combinatorial Reeb spaces gives a functor \[ \text{(spaces over }\mathbb{R}\text{)} \rightarrow \text{(simplicial sets)} \] by mapping~$f\colon X \rightarrow {\mathbb{R}}$ to~$\pi_0\mathrm{N}\mathcal{S}_f$, and we shall see that it does in fact define a functor \[ \text{(spaces over }\mathbb{R}\text{)}\rightarrow (\text{graphs}). \] In light of Example~\ref{example:ReebNotGraph}, I would argue that this is one advantage over topological Reeb spaces. Classifying spaces of groupoids are aspherical, a fact which is easily verified by using simplicial homotopy groups. There is a groupoidification functor~$\text{(small categories)}\rightarrow \text{(groupoids)} $ which assigns to a category~${\mathcal{C}}$ the groupoid~${\mathcal{C}}[{\mathcal{C}}^{-1}]$ in which all morphisms are formally inverted. It may abstractly be described as the left adjoint of the forgetful functor \[ \text{(groupoids)}\rightarrow \text{(small categories)}. 
\] For a given category~${\mathcal{C}}$, there is an evident functor~$j\colon {\mathcal{C}}\rightarrow {\mathcal{C}}[{\mathcal{C}}^{-1}]$ which is not a weak homotopy equivalence in general: categories can represent all homotopy types, see e.g.~\cite{mcduff1979classifying}, whereas groupoids cannot. But we shall verify that~$j$ does in fact realize to a weak homotopy equivalence for combinatorial Reeb spaces. Recall that a morphism in~$\tau_1\pi_0\mathrm{N}\mathcal{S}_f$ is a word~$[\sigma_1]\cdots[\sigma_n]$, represented by sections that are composable up to paths in fibers. A morphism in the groupoid~$\tau_1\pi_0\mathrm{N}\mathcal{S}_f[\tau_1\pi_0\mathrm{N}\mathcal{S}_f^{-1}]$ is thus a word~$[\sigma_1]^{i_1}[\sigma_2]^{i_2}\cdots[\sigma_n]^{i_n}$ in which each~$i_j=\pm 1$. The new relations imposed are generated by~$[\sigma][\rho]^{-1}= 1_{[s\sigma]}$ and~$[\rho]^{-1}[\sigma]=1_{[t\sigma]}$ whenever~$[\sigma]=[\rho]$, i.e., whenever they lie in the same component of the section space~$\mathcal{S}_f[f(s\sigma),f(t\sigma)]$. Geometrically, one may interpret this as saying that moving up and down, or down and up, along~$[\sigma]$ cancels to the appropriate identity. A word~$[\sigma][\rho]^{-1}$ is said to be \emph{reducible} if there are factorizations~$[\sigma]=[\sigma_2]\circ [\sigma_1]$ and~$[\rho]=[\rho_1]\circ[\rho_2]$ such that~$[\sigma_2]=[\rho_2]$. In particular,~$[\sigma][\rho]^{-1}=[\sigma_1][\rho_1]^{-1}$. Dually, we declare what it means for~$[\sigma]^{-1}[\rho]$ to be reducible.
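A loose analogy for the cancellation relations above is free reduction of words in a free group (the situation in the groupoidification is more delicate, since a pair cancels only when the classes~$[\sigma]=[\rho]$ agree). The following Python sketch, an editorial illustration only, performs the free-group reduction, with letters stored as (generator, exponent) pairs:

```python
# Cancel adjacent pairs g^{+1} g^{-1} and g^{-1} g^{+1} with a stack;
# the result is the unique reduced word.
def reduce_word(word):
    out = []
    for letter in word:
        if out and out[-1][0] == letter[0] and out[-1][1] == -letter[1]:
            out.pop()          # adjacent inverse pair cancels
        else:
            out.append(letter)
    return out

w = [("s", 1), ("r", 1), ("r", -1), ("s", -1), ("t", 1)]
assert reduce_word(w) == [("t", 1)]   # s r r^{-1} s^{-1} t  reduces to  t
```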
Intuitively, a part of~$[\sigma]$ may overlap with~$[\rho]$: \begin{center} \begin{tikzpicture} \draw [->] (0,0) -- (2,2); \draw [<-] (4,0) -- (2,2); \draw [<->] (2,2) -- (2,4); \draw [->] (6,0) -- (8,2); \draw [<-] (10,0) -- (8,2); \node [below] at (2,0) {Reducible}; \node [below] at (8,0) {Irreducible }; \end{tikzpicture} \end{center} A morphism/word~$[\sigma_1]^{i_1}[\sigma_2]^{i_2}\cdots[\sigma_n]^{i_n}$ in~$\tau_1\pi_0\mathrm{N}\mathcal{S}_f[\tau_1\pi_0\mathrm{N}\mathcal{S}_f^{-1}]$ is then declared~\emph{irreducible} if the word has length equal to~$n$ and there is no reducible subword. Subject to this added requirement, we extend Lemma~\ref{lemma:dipath} to the groupoidification: \begin{lemma} \label{lemma:path} Any morphism in~$\tau_1\pi_0\mathrm{N}\mathcal{S}_f[\tau_1\pi_0\mathrm{N}\mathcal{S}_f^{-1}]$ is uniquely presentable as an irreducible word. \end{lemma} \begin{proof} Assume~$w$ to be presented~$[\sigma_1]^{i_1}\cdots[\sigma_n]^{i_n}$ and~$[\rho_1]^{j_1}\cdots[\rho_n]^{j_n}$, both irreducible. Let~$\bar{a}=(a_0,\dots,a_n)$ and~$\bar{b}=(b_0,\dots,b_n)$ be the sequences obtained by successively considering the domains of sections that appear as representatives in the two words. These sequences must be equal and so the letters must be equal. Indeed, all relations connecting them alter the associated sequences of real numbers. The statement is clear if both are of length~$1$ since~$a_0=b_0$ and~$a_n=b_n$. If the length is~$\geq 2$ we assume, for contradiction, that~$\bar{a}\neq \bar{b}$. Consider~$q\geq 1$ the smallest index such that~$a_q\neq b_q$. We assume~$a_q<b_q$ without loss of generality. Case 1:~$a_q<a_{q-1}$ and~$b_q>b_{q-1}$. Apply~$[\sigma_1]^{-i_1}\cdots[\sigma_{q-1}]^{-i_{q-1}}$ and~$w'=[\rho_{n}]^{-j_{n}}\cdots [\rho_{q+1}]^{-j_{q+1}}$ to~$w$. The result is an equality~$[\sigma_q]^{i_q}\cdots[\sigma_n]^{i_n}w'=[\rho_q]^{j_q}$. For this particular case, we deduce~$i_q=-1$ and~$j_q=1$.
But then the equality can only hold if something cancels~$[\sigma_q]^{-1}$, contradicting the irreducibility of~$[\sigma_1]^{i_1}\cdots[\sigma_n]^{i_n}$. Case 2:~$a_q>a_{q-1}$ and~$b_q > b_{q-1}$, or~$a_q < a_{q-1}$ and~$b_q< b_{q-1}$. These are proved in a similar fashion to the previous case. \end{proof} Before presenting the next lemma, I recall that a natural transformation~$ F\Rightarrow G$ between two functors~${\mathcal{C}}\rightarrow {\mathcal{D}}$ is equivalent to a functor~${\mathcal{C}}\times [1]\rightarrow {\mathcal{D}}$ whose restriction to~$0$ and~$1$ in~$[1]$ is~$F$ and~$G$, respectively. A natural transformation~$F\Rightarrow G$ thus defines a homotopy~$\mathrm{B} F\sim \mathrm{B} G$. See e.g. Segal's paper~\cite{segal1968classifying}. \begin{lemma} \label{lemma:acyclic} For any combinatorial Reeb space the associated map~$j\colon \tau_1\pi_0\mathrm{N}\mathcal{S}_f\rightarrow \tau_1\pi_0\mathrm{N}\mathcal{S}_f[\tau_1\pi_0\mathrm{N}\mathcal{S}_f^{-1}]$ realizes to a weak homotopy equivalence. \end{lemma} \begin{proof} Consider an arbitrary object~$[x]$ in~$\tau_1\pi_0\mathrm{N}\mathcal{S}_f[\tau_1\pi_0\mathrm{N}\mathcal{S}_f^{-1}]$. Quillen's Theorem~A reduces the problem to proving that the comma category~$j\downarrow [x]$ is contractible. An object in the comma category is a morphism/word in~$\tau_1\pi_0\mathrm{N}\mathcal{S}_f[\tau_1\pi_0\mathrm{N}\mathcal{S}_f^{-1}]$ terminating at~$[x]$. All words are presented uniquely according to Lemmas~\ref{lemma:dipath} and~\ref{lemma:path}. We shall define a homotopy from the identity on~$\mathrm{B} (j\downarrow [x])$ to the trivial map~$w\mapsto 1_{[x]}$. There are two essential intermediate functors. The first functor~$\mathrm{pr}_+\colon j\downarrow [x] \rightarrow j\downarrow [x]$ reduces the length of words that start with a letter of the form~$[\sigma_1]$.
It maps a non-trivial word~$w=[\sigma_1]^{i_1}\cdots[\sigma_n]^{i_n}$ to \[ \mathrm{pr}_+ ([\sigma_1]^{i_1}\cdots[\sigma_n]^{i_n})= \twopartdef{[\sigma_2]^{i_2}\cdots[\sigma_n]^{i_n}}{i_1=1} {{[\sigma_1]^{i_1}\cdots[\sigma_n]^{i_n}}}{i_1=-1} \] A morphism~$w''\colon w\rightarrow w'$ is a factorization~$w=w'\circ j(w'')$ and hence there is a unique choice for~$\mathrm{pr}_+ w''$ yielding a factorization~$\mathrm{pr}_+w=\mathrm{pr}_+w' \circ (\mathrm{pr}_+w'')$. This data comes with a rather evident natural transformation~$\eta_+$ from~$ \mathrm{id}$ to~$ \mathrm{pr}_+$ since~$[\sigma_1]$ defines a morphism from a word~$[\sigma_1][\sigma_2]^{i_2}\cdots[\sigma_n]^{i_n}$ to~$[\sigma_2]^{i_2}\cdots[\sigma_n]^{i_n}$. In other words, we have defined a functor~$H_+\colon (j\downarrow [x]) \times [1]\rightarrow (j\downarrow [x]) $ whose restriction to~$(j\downarrow [x]) \times 0$ is~$\mathrm{id}$, whereas the restriction to~$(j\downarrow [x]) \times 1$ is~$\mathrm{pr}_+$. The second functor~$\mathrm{pr}_-\colon j\downarrow [x] \rightarrow j\downarrow [x]$ is complementary to~$\mathrm{pr}_+$. It maps a non-trivial word~$w=[\sigma_1]^{i_1}\cdots[\sigma_n]^{i_n}$ to \[ \mathrm{pr}_- ([\sigma_1]^{i_1}\cdots[\sigma_n]^{i_n})= \twopartdef{[\sigma_2]^{i_2}\cdots[\sigma_n]^{i_n}}{i_1=-1} {{[\sigma_1]^{i_1}\cdots[\sigma_n]^{i_n}}}{i_1=1} \] Analogous to~$\mathrm{pr}_+$ this data comes with a homotopy~$H_-\colon (j\downarrow [x]) \times [1]\rightarrow (j\downarrow [x]) $. But, in contrast to~$H_+$, this homotopy starts at~$\mathrm{pr}_-$ and terminates at~$\mathrm{id}$. This is because of how~$[\sigma_1]$ defines a morphism from~$[\sigma_2]^{i_2}\cdots[\sigma_n]^{i_n}$ to~$[\sigma_1]^{-1}[\sigma_2]^{i_2}\cdots[\sigma_n]^{i_n}$. For every object~$w$, we need only alternate~$H_+$ and~$H_-$ a finite number of times to obtain the trivial word~$1_{[x]}$. 
We thus conclude that the identity on~$(j\downarrow [x])_n$, generated by words of length~$\leq n$, is homotopic to the trivial map for all~$n$. It follows that the identity on~$j\downarrow [x]$ must be homotopic to the trivial map. \end{proof} We wrap up the discussion on Combinatorial Reeb spaces with a \begin{proof}[proof of Theorem~\ref{intro:combinatorialReebIsGraph}] We have seen in Lemma~\ref{lemma:acyclic} that a Reeb space~$\pi_0\mathrm{N}\mathcal{S}_f$ has the homotopy type of its groupoidification. In particular, it must be aspherical. It remains only to verify that the fundamental group is free, regardless of basepoint. So fix a basepoint~$[x]$ and consider~$\pi_1 (\pi_0\mathrm{N}\mathcal{S}_f)$ which is isomorphic to the automorphism group at~$[x]$, considered as an object in the groupoidification~$\tau_1\pi_0\mathrm{N}\mathcal{S}_f[\tau_1\pi_0\mathrm{N}\mathcal{S}_f^{-1}]$. This group admits the irreducible words (Lemma~\ref{lemma:path}) as a free generating set. For a non-trivial irreducible word cannot possibly be reduced further to the unit~$1_{[x]}$. \end{proof} \section*{Acknowledgment} I would like to thank Markus Szymik who helped shape the ideas and contents of this paper through many helpful discussions. \addcontentsline{toc}{section}{References} \bibliographystyle{amsalpha}
https://arxiv.org/abs/1003.4436
Knots and tropical curves
Using elementary ideas from Tropical Geometry, we assign a tropical curve to every $q$-holonomic sequence of rational functions. In particular, we assign a tropical curve to every knot which is determined by the Jones polynomial of the knot and its parallels. The tropical curve explains the relation between the AJ Conjecture and the Slope Conjecture (which relate the Jones polynomial of a knot and its parallels to the $\SL(2,\BC)$ character variety and to slopes of incompressible surfaces). Our discussion predicts that the tropical curve is dual to a Newton subdivision of the $A$-polynomial of the knot. We compute explicitly the tropical curve for the $4_1$, $5_2$ and $6_1$ knots and verify the above prediction.
\section{Introduction} \lbl{sec.intro} \subsection{What is a $q$-holonomic sequence?} \lbl{sub.qholo} A sequence of rational functions $f_n(q) \in \mathbb Q(q)$ in a variable $q$ is $q$-{\em holonomic} if it satisfies a linear recursion whose coefficients are polynomials in $q$ and $q^n$. In other words, we have \begin{equation} \lbl{eq.qholo} \sum_{i=0}^d a_i(q^n,q) f_{n+i}(q)=0 \end{equation} where the coefficients $a_i(M,q) \in \mathbb Z[M,q]$ are polynomials for $i=0,\dots,d$, with $a_d(M,q) \neq 0$. The term was coined by Zeilberger in \cite{Z} and further studied in \cite{WZ}. $q$-holonomic sequences appear in abundance in Enumerative Combinatorics; \cite{PWZ,St}. The fundamental theorem of Wilf-Zeilberger states that a multi-dimensional finite sum of a (proper) $q$-hypergeometric term is always $q$-holonomic; see \cite{WZ,Z,PWZ}. Given this result, one can easily construct $q$-holonomic sequences. Combining this fundamental theorem with the fact that many {\em state-sum} invariants in Quantum Topology are multi-dimensional sums of the above shape, it follows that Quantum Topology provides us with a plethora of $q$-holonomic sequences of {\em natural origin}; \cite{GL}. For example, the sequence of {\em Jones polynomials} of a knot and its parallels, which we will study below (technically, the colored Jones function), is $q$-holonomic. The goal of our paper is to assign a tropical curve to a $q$-holonomic sequence. To motivate the connection between $q$-holonomic sequences and tropical curves, we will write Equation \eqref{eq.qholo} in operator form using the operators $M,L$ which act on a sequence $f_n(q) \in \mathbb Q(q)$ by $$ (M f)_n(q)=q^n f_n(q), \qquad (L f)_n(q)=f_{n+1}(q).
$$ It is easy to see that $M$ and $L$ satisfy $LM=qML$ and generate the $q$-{\em Weyl algebra} \begin{equation} \lbl{eq.weyl} W=\mathbb Z[q^{\pm 1}]\langle M,L \rangle/(LM-qML) \end{equation} Equation \eqref{eq.qholo} becomes \begin{equation} \lbl{eq.qholo22} P f =0 \end{equation} where \begin{equation} \lbl{eq.qholo2} P =\sum_{i=0}^d a_i(M,q) L^i \in W. \end{equation} In other words, Equation \eqref{eq.qholo22} says that $P$ annihilates $f$. Although a $q$-holonomic sequence $f$ is annihilated by many operators $P \in W$, it was observed in \cite{Ga2} that it is possible to canonically choose an operator $P_f$ with coefficients $a_i(M,q) \in \mathbb Z[M,q]$. Likewise, there is a unique non-homogeneous linear recursion relation of the form $P^{nh}_f f=b_f$ where $b_f \in \mathbb Z[M,q]$. For a detailed definition, see Section \ref{sec.weyl} below. \begin{definition} \lbl{def.nhom} We call $P_f$ and $(P^{nh}_f,b_f)$ the {\em homogeneous} and the {\em non-homogeneous} annihilator of the $q$-holonomic sequence $f$. \end{definition} \subsection{What is a tropical curve?} \lbl{sub.curve} In this section we will recall the definition of a tropical curve. For a survey on tropical curves, see \cite{RGST,SS}. Following the conventions of \cite{RGST}, a {\em tropical polynomial} $P: \mathbb R^2 \longrightarrow \mathbb R$ is a function of the form: \begin{equation} \lbl{eq.Pxy} P(x,y)=\min\{a_1 x + b_1 y + c_1, \dots, a_r x + b_r y + c_r \} \end{equation} where $a_i,b_i,c_i$ are rational numbers for $i=1,\dots,r$. $P$ is convex and piecewise linear. The {\em tropical curve} $\mathcal T(P)$ of the tropical polynomial $P$ is the set of points $(x,y) \in \mathbb R^2$ such that $P$ is not linear at $(x,y)$. Equivalently, $\mathcal T(P)$ is the set of points where the minimum is attained at two or more linear functions. A {\em rational graph} $\Gamma$ is a finite union of rays and segments whose endpoints and directions are rational numbers, and each ray has a positive integer multiplicity.
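Returning briefly to the operator picture above: the following stdlib Python sketch (an editorial toy illustration, not code from the paper) checks two points — the normal-ordering rule $L^aM^b=q^{ab}M^bL^a$ implied by $LM=qML$, and a toy $q$-holonomic sequence $f_n(q)=q^{\binom{n}{2}}$ annihilated by the operator $P=L-M$.

```python
from math import comb

# A normally ordered monomial q^k M^j L^i is stored as (k, j, i).  From
# LM = qML one gets L^a M^b = q^{ab} M^b L^a, so the product of two
# normally ordered monomials picks up the factor q^{i1*j2}.
def mul(m1, m2):
    k1, j1, i1 = m1
    k2, j2, i2 = m2
    return (k1 + k2 + i1 * j2, j1 + j2, i1 + i2)

assert mul((0, 0, 1), (0, 1, 0)) == (1, 1, 1)    # L * M = q M L
a, b, c = (2, 1, 3), (0, 2, 1), (1, 1, 1)
assert mul(mul(a, b), c) == mul(a, mul(b, c))    # associativity on a sample

# Toy q-holonomic sequence f_n(q) = q^{binom(n,2)}, stored as a dict
# {exponent: coefficient}.  It satisfies f_{n+1} - q^n f_n = 0, i.e. the
# operator P = L - M annihilates f, since (M f)_n = q^n f_n.
def f(n):
    return {comb(n, 2): 1}

def shift(poly, k):                               # multiply by q^k
    return {e + k: coeff for e, coeff in poly.items()}

for n in range(10):
    assert f(n + 1) == shift(f(n), n)
```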
A {\em balanced rational graph} is defined in \cite[Eqn.10]{RGST}: at every vertex the sum of the slope vectors with multiplicities adds to zero. Every tropical curve is a balanced rational graph and vice-versa; see \cite[Thm.3.6]{RGST}. Tropical curves are very computable objects. For example, the vertices of a tropical curve are the points $(x,y)$ where the minimum in \eqref{eq.Pxy} is attained at least three times. The coordinates of such points can be found by solving a system of linear equations. An explicit algorithm to compute the vertices and the slopes of a tropical curve is given in \cite[Sec.3]{RGST}, and a computer implementation in {\tt Singular} is available from \cite{Ma}. This allows us to compute the tropical curves of the $4_1$, $5_2$ and $6_1$ knots in Sections \ref{sub.41}-\ref{sub.61non} below. In the case of the $6_1$ knot, the non-homogeneous tropical curve is defined by an explicit polynomial with $r=346$ terms. Tropical curves arise from 2-variable polynomials $P_t(x,y)$ whose coefficients depend on an additional parameter $t$ as follows. Consider \begin{equation} \lbl{eq.Pxyt} P_t(x,y)=\sum_{i=1}^r \gamma_i(t) x^{a_i} y^{b_i} \end{equation} where $\gamma_i(t)$ are algebraic functions of $t$ with order at $t=0$ equal to $c_i$. Then, the corresponding tropical polynomial is given by \eqref{eq.Pxy}. $P_t(x,y)$ gives rise to two Newton polytopes: \begin{itemize} \item The 3-dimensional Newton polytope $N_P$, i.e., the convex hull of the exponents of $(x,y,t)$ in $P_t(x,y)$. \item The 2-dimensional Newton polygon $N_{P,0}$, i.e., the convex hull of the exponents of $(x,y)$ in $P_t(x,y)$. \end{itemize} In fact, $N_{P,0}$ is the image of $N_P$ under the projection map $(x,y,t)\longrightarrow (x,y)$. The {\em lower faces} of $N_P$ give rise to a Newton subdivision of $N_{P,0}$ which is combinatorially dual to the tropical curve $\mathcal T(P)$; see \cite{RGST}.
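The vertex computation described above is elementary enough to sketch in a few lines of Python (an editorial toy illustration, using exact rational arithmetic; the example polynomial is the tropical line, not one of the paper's curves): a point is on the curve when the minimum ties, and a vertex is a point where three terms tie and realize the minimum, which is a $2\times 2$ linear system.

```python
from itertools import combinations
from fractions import Fraction

# A tropical polynomial min_i {a_i x + b_i y + c_i}, given as a term list.
def trop_eval(terms, x, y):
    return min(a * x + b * y + c for (a, b, c) in terms)

# On the tropical curve iff the minimum is attained at least twice.
def on_curve(terms, x, y):
    vals = sorted(a * x + b * y + c for (a, b, c) in terms)
    return vals[0] == vals[1]

# Vertices: three terms tie AND realize the global minimum; tying three
# affine functions is a 2x2 linear system, solved here by Cramer's rule.
def vertices(terms):
    verts = set()
    for (a1, b1, c1), (a2, b2, c2), (a3, b3, c3) in combinations(terms, 3):
        det = (a1 - a2) * (b1 - b3) - (a1 - a3) * (b1 - b2)
        if det == 0:
            continue
        x = Fraction((c2 - c1) * (b1 - b3) - (c3 - c1) * (b1 - b2), det)
        y = Fraction((a1 - a2) * (c3 - c1) - (a1 - a3) * (c2 - c1), det)
        if a1 * x + b1 * y + c1 == trop_eval(terms, x, y):
            verts.add((x, y))
    return verts

# The tropical line min{x, y, 0}: three rays meeting at the origin.
line = [(1, 0, 0), (0, 1, 0), (0, 0, 0)]
assert trop_eval(line, 3, 5) == 0
assert on_curve(line, -2, -2) and not on_curve(line, 1, 2)
assert vertices(line) == {(0, 0)}
```

This brute-force search over triples is quadratic-to-cubic in the number of terms; the algorithm of \cite[Sec.3]{RGST} organizes the same computation more efficiently.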
The polynomials $P_t(x,y)$ appear frequently in numerical problems of {\em Path Homotopy Continuation} where one is interested in connecting $P_0(x,y)$ to $P_1(x,y)$. They also appear in {\em Quantization problems} in Physics, where $t$ (or $\log t$) plays the role of Planck's constant. We will explain below that they also appear in Quantum Topology, and they are a natural companion of the AJ and the Slope Conjecture. \subsection{The tropical curve of a $q$-holonomic sequence} \lbl{sub.tropholo} In this section we associate a tropical curve to a $q$-holonomic sequence. The main observation is that an element of the $q$-Weyl algebra is a polynomial in 3 variables $M,L,q$. Two of those $q$-commute (i.e., satisfy $LM=qML$) but we can always sort the powers of $L$ to the right and the powers of $M$ to the left. In other words, there is an {\em additive} map \begin{equation} \lbl{eq.additive} \mathbb Z[q^{\pm 1}]\langle M,L \rangle/(LM-qML) \longrightarrow \mathbb Z[M,L,q^{\pm 1}] \end{equation} Let us change variables $(x,y,1/t)=(L,M,q)$ and ignore the coefficients of the monomials of $x^i y^j t^k$, and record only their exponents. They give rise to a tropical curve. Explicitly, let \begin{equation} \lbl{eq.Pijk} P=\sum_{(i,j,k) \in \mathcal{A}} a_{i,j,k} \, q^k M^j L^i \in W \end{equation} denote an element of the $q$-Weyl algebra, where $\mathcal{A}$ is a finite set and $a_{i,j,k} \in \mathbb Z\setminus\{0\}$ for all $(i,j,k) \in \mathcal{A}$. \begin{definition} \lbl{def.ptp} There is a map \begin{equation} \lbl{eq.ptp} W \longrightarrow \{\text{Tropical Curves in $\mathbb R^2$} \}, \qquad P \mapsto \Gamma_P \end{equation} which assigns to $P$ in \eqref{eq.Pijk} the tropical polynomial $P_t(x,y)$ given by: $$ P_t(x,y)=\min_{(i,j,k) \in \mathcal{A}} \{ i x + j y -k \} $$ $\Gamma_P$ is the tropical curve of $P_t(x,y)$. \end{definition} Combining Definitions \ref{def.nhom} and \ref{def.ptp} allows us to assign a tropical curve to a $q$-holonomic sequence $f$.
\begin{definition} \lbl{def.tropholo} \rm{(a)} If $f$ is a $q$-holonomic sequence, let $\Gamma_f$ and $\Gamma^{nh}_f$ denote the tropical curves of $P_f(y,x,1/t)$ and $P^{nh}_f(y,x,1/t)$ respectively, where $P_f(M,L,q)$ and $P^{nh}_f(M,L,q)$ are given in Definition \ref{def.nhom}. \end{definition} The tropical curve $\Gamma_f$ of a $q$-holonomic sequence $f$ is closely related to the degree (with respect to $q$) of the sequence of rational functions $f_n(q)$. If $\delta_n=\mathrm{deg}_q(f_n(q))$ denotes this degree, then it was shown in \cite{Ga4} that for large enough $n$, $\delta_n$ is a quadratic quasi-polynomial with slope recorded by the rays of the tropical curve $\Gamma_f$. \subsection{3 polytopes of a $q$-holonomic sequence} \lbl{sub.3polytopes} In this section we assign 3 polytopes to a $q$-holonomic sequence. \begin{definition} \lbl{def.3poly} \rm{(a)} If $P \in W$ is given by Equation \eqref{eq.Pijk}, it defines 3 polytopes: \begin{itemize} \item $N_P$ is the convex hull of the exponents of the polynomial $P(M,L,q)$ with respect to the variables $(M,L,q)$. \item $N_{P,0}$ is the projection of $N_P$ under the projection map $(M,L,q) \longrightarrow (L,M)$. \item $N_{P,1}$ is the convex hull of the exponents of the polynomial $P(L,M,1)$. \end{itemize} \rm{(b)} If $f$ is a $q$-holonomic sequence, its annihilator $P_f$ gives rise to the polytopes $N_{P_f}$, $N_{P_f,0}$ and $N_{P_f,1}$. \end{definition} Note that $N_P$ is a 3-dimensional convex lattice polytope, and $N_{P,0}, N_{P,1}$ are 2-dimensional convex lattice polygons. Since every exponent of $P(M,L,1)$ comes from some exponents of $P(M,L,q)$, it follows that \begin{equation} \lbl{eq.N01} N_{P,1} \subset N_{P,0} \end{equation} \begin{remark} \lbl{rem.dual} It follows by \cite{RGST} that the tropical curve $\Gamma_P$ is dual to a Newton subdivision of $N_{P,0}$. \end{remark} We will say that $P(M,L,q)$ is {\em good} if $N_{P,1}=N_{P,0}$. It is easy to see that goodness is a generic property. 
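The Newton polygon $N_{P,0}$ can be computed directly from the exponent set: project away the $q$-exponent and take a planar convex hull. The following Python sketch does this with Andrew's monotone chain algorithm (an editorial toy illustration; the exponent set below is made up, not one of the paper's operators):

```python
# Signed area test: positive when o -> a -> b turns counterclockwise.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

# Andrew's monotone chain convex hull, counterclockwise, no collinear points.
def hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

# Exponents (i, j, k) of a toy operator; project away the q-exponent k
# to get the exponents of P(M, L, 1)-style monomials in the (L, M)-plane.
exps = [(0, 0, 1), (2, 0, 0), (0, 2, 3), (2, 2, 1), (1, 1, 5)]
newton_polygon = hull([(i, j) for (i, j, k) in exps])
assert set(newton_polygon) == {(0, 0), (2, 0), (2, 2), (0, 2)}  # (1,1) is interior
```

Note that the interior exponent $(1,1)$ is dropped by the hull but still matters for the tropical curve: it is the lower faces of the full 3-dimensional polytope $N_P$ that induce the Newton subdivision of $N_{P,0}$ dual to $\Gamma_P$.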
\subsection{The slopes of a $q$-holonomic sequence} \lbl{sub.slopeqholo} In this section we discuss the slopes of a $q$-holonomic sequence and their relation with its tropical curve. The proof of the following theorem uses differential Galois theory and the Lech-Mahler-Skolem theorem from number theory. \begin{theorem} \lbl{thm.0}\cite{Ga4} The degree with respect to $q$ of a $q$-holonomic sequence $f_n(q) \in \mathbb Q(q)$ is given (for large values of $n$) by a quadratic quasi-polynomial. \end{theorem} Recall that a {\em quadratic quasi-polynomial} is a function of the form: \begin{equation} \lbl{eq.pp} p: \mathbb N \longrightarrow \mathbb N, \qquad p(n)=\gamma_2(n) \binom{n}{2} + \gamma_1(n) n + \gamma_0(n) \end{equation} where $\gamma_j(n)$ are rational-valued periodic functions of $n$. Quasi-polynomials appear in lattice point counting problems, and also in Enumerative Combinatorics; see \cite{BP,BR,Eh,St} and references therein. The set of {\em slopes} $s(p)$ of a quadratic quasi-polynomial is the finite set of values of the periodic function $\gamma_2(n)$. These are essentially the quadratic growth rates of the quasi-polynomial. More precisely, recall that $x \in \mathbb R$ is a {\em cluster point} of a sequence $(x_n)$ of real numbers if for every $\epsilon>0$ there are infinitely many indices $n \in \mathbb N$ such that $|x-x_n| < \epsilon$. Let $\{x_n\}'$ denote the set of {\em cluster points} of a sequence $(x_n)$. It is easy to show that for every quadratic quasi-polynomial $p$ we have: \begin{equation} \lbl{eq.sp} s(p)=\{ \frac{2}{n^2}p(n) \,\, | n \in \mathbb N \}' \subset \mathbb Q \end{equation} Given a $q$-holonomic sequence $f_n(q) \in \mathbb Q(q)$, let $s(f)$ denote the slopes of the quadratic quasi-polynomial $\mathrm{deg}_q f_n(q)$. Let $s(N)$ denote the set of slopes of the edges of a convex polygon $N$ in the plane. The next proposition relates the slopes of a $q$-holonomic sequence with its tropical curve. See also \cite[Prop.4.4]{Ga4}. 
\begin{proposition} \lbl{prop.1} If $f$ is $q$-holonomic, then $s(f) \subset -s(N_{P_f,0})$. \end{proposition} \begin{proof} Let $\delta(n)=\mathrm{deg}_q f_n(q)$ denote the degree of $f_n(q)$ with respect to $q$, and let $P$ denote the annihilator of $f$. We expand $P$ in terms of monomials as in Equation \eqref{eq.Pijk}. For every monomial $q^k M^j L^i$ and every $n$ we have $$ \mathrm{deg}_q((q^k M^j L^i) f_n(q))=k+jn+\delta(n+i). $$ Since $P$ annihilates $f$, for every $n$ the following maximum is attained {\em at least twice} (from now on, twice will mean at least twice, as is common in Tropical Geometry): \begin{equation} \lbl{eq.maxP} \max_{(i,j,k)}\{ jn+k+\delta(n+i)\} \end{equation} Subtracting $\delta(n)$, it follows that the following maximum is also attained twice: \begin{equation} \lbl{eq.maxP2} \max_{(i,j,k)} \{ jn+k+\delta(n+i)-\delta(n)\} \end{equation} Now $\delta(n)$ is a quadratic quasi-polynomial given by $$ \delta(n)=\gamma_2(n) \binom{n}{2} + \gamma_1(n) n + \gamma_0(n) $$ Theorem \ref{thm.0} implies that for large enough $n$ in a fixed arithmetic progression, we have $\gamma_i(n)=\widehat{\gamma}_i$ for $i=0,1,2$, thus $$ \delta(n+i)-\delta(n)=\widehat{\gamma}_2 \, i \, n + \widehat{\gamma}_2 \, \binom{i}{2} + \widehat{\gamma}_1 \, i $$ Substituting into \eqref{eq.maxP2}, it follows that for large enough $n$ in an arithmetic progression, the maximum \begin{equation} \lbl{eq.maxP3} \max_{(i,j,k)}\{ jn+k+ \widehat{\gamma}_2 \, i \, n + \widehat{\gamma}_2 \, \binom{i}{2} + \widehat{\gamma}_1 \, i \} \end{equation} is attained twice. Since the quantities being maximized are linear in $n$, it follows that there exist $(i,j,k)$ and $(i',j',k')$ in $\mathcal{A}$ with $(i,j) \neq (i',j')$ such that \begin{equation} \lbl{eq.ga2} \widehat{\gamma}_2=-\frac{j-j'}{i-i'}. \end{equation} This proves Proposition \ref{prop.1}. \end{proof} \section{The $q$-Weyl algebra and its localization} \lbl{sec.weyl} In this section we will discuss some algebraic properties of the $q$-Weyl algebra and its localization, which will justify Definition \ref{def.nhom}.
Recall the $q$-Weyl algebra from \eqref{eq.weyl}. We will say that an element $P$ of $W$ is {\em reduced} if it has the form \eqref{eq.qholo2} where $a_i(M,q) \in \mathbb Z[M,q]$ for all $i$, and the greatest common divisor of the $a_i(M,q)$ is $1$. Consider the {\em localized $q$-Weyl algebra} $W_{\mathrm{loc}}$ given by \begin{equation} \lbl{eq.wloc} W_{\mathrm{loc}}=\mathbb Q(M,q)\langle L \rangle/(Lf(M,q)-f(Mq,q)L) \end{equation} It was observed in \cite{Ga2} that $W$ is not a principal left-ideal domain, but becomes so after localization; see \cite{Cou}. If $f$ is a sequence of rational functions, consider the left ideal $M_f$ $$ M_f=\{P \in W_{\mathrm{loc}} \, | \, Pf =0 \} $$ $M_f$ is a principal left ideal, which is nonzero if $f$ is $q$-holonomic. Let $P'$ denote the monic generator of $M_f$. Left-multiplying it by a suitable polynomial in $M,q$, we obtain a reduced annihilator $P_f$ of $f$. Now, we discuss non-homogeneous recursion relations of the form $$ \sum_{i=0}^d a_i(q^n,q) f_{n+i}(q)=b(q^n,q) $$ where $a_i(M,q), b(M,q) \in \mathbb Q(M,q)$ for all $i$. In operator form, we can write the above recursion as $$ Pf=b. $$ Consider the set \begin{equation} \lbl{eq.Mhnf} M^{nh}_f=\{ P \in W_{\mathrm{loc}} \,\, | \,\, \exists b \in \mathbb Q(M,q) \, : P f=b \} \end{equation} It is easy to see that $M^{nh}_f$ is a left ideal. If $f$ is $q$-holonomic, then $M^{nh}_f \neq 0$. Let $P''$ denote the monic generator of $M^{nh}_f$. There exists $b'' \in \mathbb Q(M,q)$ such that $$ P'' f = b'' $$ There are two cases: $b'' \neq 0$ or $b''= 0$. If $b'' \neq 0$, then dividing by $b''$ we obtain that $1/b'' \cdot P'' f=1$. We left-multiply both sides by a suitable polynomial in $M,q$ so as to obtain $P^{nh}_f f= b_f$ where $P^{nh}_f$ is reduced. If $b''=0$, then we multiply by a polynomial in $M,q$ so as to obtain $P^{nh}_f f=0$, and define $b_f=0$ in that case. This concludes Definition \ref{def.nhom}.
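To make the non-homogeneous setup concrete, here is a toy example not taken from the paper: the quantum integers $f_n=[n]_q=1+q+\cdots+q^{n-1}$ satisfy the non-homogeneous recursion $(L-1)f=M$ (with $M$ acting as multiplication by $q^n$), i.e.\ $f_{n+1}-f_n=q^n$; left-multiplying by $(L-1)M^{-1}$, normal-ordering and clearing denominators gives the homogeneous recursion $(L-q)(L-1)f=0$ of one order more, i.e.\ $f_{n+2}-(1+q)f_{n+1}+q\,f_n=0$. A Python sketch verifying both, with polynomials in $q$ stored as coefficient lists:

```python
# Toy example (not from the paper): quantum integers [n]_q and their recursions.
# A polynomial in q is a coefficient list: [c0, c1, ...] stands for c0 + c1*q + ...

def qint(n):
    """[n]_q = 1 + q + ... + q^(n-1) as a coefficient list."""
    return [1] * n

def add(a, b):
    m = max(len(a), len(b))
    a, b = a + [0] * (m - len(a)), b + [0] * (m - len(b))
    return [x + y for x, y in zip(a, b)]

def neg(a):
    return [-c for c in a]

def times_q_power(a, k):
    """Multiply a polynomial by q^k."""
    return [0] * k + a

def is_zero(a):
    return all(c == 0 for c in a)

for n in range(6):
    # Non-homogeneous recursion (L - 1) f = M:  f_{n+1} - f_n - q^n = 0
    lhs = add(add(qint(n + 1), neg(qint(n))), neg(times_q_power([1], n)))
    assert is_zero(lhs)
    # Homogenized recursion (L - q)(L - 1) f = 0:
    # f_{n+2} - f_{n+1} - q*f_{n+1} + q*f_n = 0
    lhs = add(add(qint(n + 2), neg(qint(n + 1))),
              add(neg(times_q_power(qint(n + 1), 1)), times_q_power(qint(n), 1)))
    assert is_zero(lhs)

print("recursions verified for n = 0..5")
```

In the notation above, this toy sequence has $P^{nh}_f=L-1$ and $b_f=M$, while its reduced homogeneous annihilator is $(L-q)(L-1)$, which is visibly left-divisible by $L-1$.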
The next lemma relates the homogeneous and the non-homogeneous annihilators of a $q$-holonomic sequence. It is well-known that one can convert a non-homogeneous recursion relation $Pf=b$ with $b \neq 0$ into a homogeneous recursion relation of order one more. Indeed, $Pf=b$ with $b \neq 0$ is equivalent to $$ (L-1)b^{-1} P f=0 $$ This implies the following conversion between $(P^{nh}_f,b_f)$ and $P_f$. Fix a $q$-holonomic sequence $f_n(q) \in \mathbb Q(q)$. \begin{lemma} \lbl{lem.nh} \rm{(a)} If $b_f =0$ then $P^{nh}_f=P_f$. If $b_f \neq 0$, then $P_f$ is obtained by clearing denominators of $(L-1)b_f^{-1} P^{nh}_f$ after putting the powers of $L$ on the right and the elements of $\mathbb Q(M,q)$ on the left. \newline \rm{(b)} If $P_f$ is not left divisible by $L-1$ in $W$, then $P_f=P^{nh}_f$ and $b_f=0$. If $P_f$ is left divisible by $L-1$ in $W$, then $P_f=(L-1) Q_f$, and if $d$ is the common denominator of $Q_f$, then $(d Q_f, d)= (P^{nh}_f,b_f)$. \end{lemma} \begin{definition} \lbl{def.fhom} We say that a $q$-holonomic sequence $f$ is {\em homogeneous} if $b_f=0$, and {\em non-homogeneous} otherwise. \end{definition} In other words, a $q$-holonomic sequence $f$ is {\em non-homogeneous} if and only if $P_f$ is left-divisible by $L-1$ in $W$. \section{Quantum Topology} \lbl{sec.qt} \subsection{The tropical curve of a knot} \lbl{sub.cjones} Quantum Topology is a source of $q$-holonomic sequences attached to knotted 3-dimensional objects. Let $J_{K,n}(q) \in \mathbb Z[q^{\pm 1}]$ denote the {\em colored Jones polynomial} of a knot $K$ in 3-space, colored by the $(n+1)$-dimensional irreducible representation of $\mathfrak{sl}_2$ and normalized to be $1$ at the unknot; see \cite{Jo,Tu}. The sequence $J_{K,n}(q)$ for $n=0,1,\dots$ essentially encodes the Jones polynomial of a knot and all of its parallels; see \cite{Tu}. In \cite[Thm.1]{GL} it was shown that the sequence $J_{K,n}(q)$ of colored Jones polynomials of a knot $K$ is $q$-holonomic.
\begin{definition} \lbl{def.Aq} \rm{(a)} If $K$ is a knot, we denote by $A_K(M,L,q)$ and $(A^{nh}_K(M,L,q),B_K(M,q))$ the homogeneous and the non-homogeneous annihilators of the $q$-holonomic sequence $J_{K,n}(q)$. These are the non-commutative and the non-homogeneous non-commutative $A$-polynomials of the knot. \newline \rm{(b)} If $K$ is a knot, let $\Gamma_K$ and $\Gamma^{nh}_K$ denote the {\em tropical curves} of $A_K$ and $A^{nh}_K$ respectively. \end{definition} The non-homogeneous non-commutative $A$-polynomial of a knot appeared first in \cite{GS}. \subsection{The AJ Conjecture} \lbl{sub.AJ} The AJ Conjecture (resp. the Slope Conjecture) relates the Jones polynomial of a knot and its parallels to the $\mathrm{SL}(2,\mathbb C)$ character variety (resp. to slopes of incompressible surfaces) of the knot complement. We will relate the two conjectures using elementary ideas from Tropical Geometry. The $A$-polynomial of a knot is a polynomial in two commuting variables $M$ and $L$ that essentially encodes the image of the $\mathrm{SL}(2,\mathbb C)$ character variety of $K$, projected to $\mathbb C^* \times \mathbb C^*$ by the eigenvalues of a meridian and longitude of $K$. It was defined in \cite{CCGLS}. \begin{conjecture} \lbl{conj.AJ}\cite{Ga2} The AJ Conjecture states that \begin{equation} \lbl{eq.AJ} A_K(M,L,1)=B_K(M) A_K(M^{1/2},L) \end{equation} where $A_K(M,L)$ is the $A$-polynomial of $K$ and $B_K(M) \in \mathbb Z[M]$ is a polynomial in $M$ that depends on $K$. \end{conjecture} The AJ Conjecture is known for infinitely many 2-bridge knots; see \cite{Le}. It is natural to ask whether the $q$-holonomic sequence $J_{K,n}(q)$ is non-homogeneous or not.
Based on geometric information (the so-called {\em loop expansion} of the colored Jones polynomial, see \cite{Ga1}), as well as experimental evidence for all knots whose non-commutative $A$-polynomial is known (these are the torus knots in \cite{Hi} and the twist knots in \cite{GS}), we propose the following conjecture. \begin{conjecture} \lbl{conj.inhom} For every knot $K$, $J_{K,n}(q)$ is non-homogeneous. \end{conjecture} The above conjecture implies that $B_K(M,q) \in \mathbb Z[M,q]\setminus\{0\}$ is an invariant which is independent and {\em invisible} from the classical $A$-polynomial of the knot. There is a close connection between the $B_K(M,q)$ invariant of a knot and the torsion polynomial of the knot introduced in \cite{DbG}. We will discuss this in a future publication. \subsection{The Slope Conjecture} \lbl{sub.slope} The Slope Conjecture of \cite{Ga3} relates the degree of the colored Jones polynomial of a knot and its parallels to slopes of incompressible surfaces in the knot complement. To recall the conjecture, let $\delta_K(n)=\mathrm{deg}_q J_{K,n}(q)$ (resp. $\delta^*_K(n)=\mathrm{deg}^*_q J_{K,n}(q)$) denote the maximum (resp. minimum) {\em degree} of the polynomial $J_{K,n}(q) \in \mathbb Z[q^{\pm 1}]$ (or more generally, of a rational function) with respect to $q$. For a knot $K$, define the {\em Jones slopes} $\mathrm{js}_K$ by: \begin{equation} \lbl{eq.js} \mathrm{js}_K=\{ \frac{2}{n^2}\delta_K(n) \,\, | n \in \mathbb N \}' \end{equation} Let $\mathrm{bs}_K \subset \mathbb Q \cup \{1/0\}$ denote the set of boundary slopes of incompressible surfaces of $K$; see \cite{Ha,HO}. \begin{conjecture} \lbl{conj.slope}\cite{Ga3} The {\em Slope Conjecture} states that for every knot $K$ we have \begin{equation} 2 \mathrm{js}_K \subset \mathrm{bs}_K. \end{equation} \end{conjecture} Note that the Slope Conjecture applied to the mirror of $K$ implies that $2 \mathrm{js}^*_K \subset \mathrm{bs}_K$, where $\mathrm{js}^*_K$ denotes the set of cluster points of $\frac{2}{n^2}\delta^*_K(n)$.
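The cluster-point recipe in \eqref{eq.js} is easy to illustrate numerically. The following Python sketch uses a made-up quadratic quasi-polynomial $\delta$ (not the degree of any actual colored Jones polynomial) whose leading coefficient has period $2$; along each residue class, $\frac{2}{n^2}\delta(n)$ converges to the corresponding slope:

```python
from math import comb

def delta(n):
    """A hypothetical quadratic quasi-polynomial: gamma_2 has period 2."""
    gamma2 = 3 if n % 2 == 0 else 2
    return gamma2 * comb(n, 2) + n + 1

# Approximate the cluster points of 2*delta(n)/n^2 along each residue class:
even = [2 * delta(n) / n**2 for n in range(1000, 1011, 2)]
odd = [2 * delta(n) / n**2 for n in range(1001, 1012, 2)]
print(round(even[-1], 2), round(odd[-1], 2))  # prints 3.0 2.0
```

The set of slopes of this $\delta$ is therefore $\{3,2\}$, recovering the values of the periodic function $\gamma_2$.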
The Slope Conjecture is known for alternating knots and torus knots (see \cite{Ga3}), for adequate knots (which include all alternating knots; see \cite{FKP}), for $(-2,3,n)$ pretzel knots (see \cite{Ga3}), and for 2-fusion knots; see \cite{DnG}. A general method for verifying the Slope Conjecture is discussed in \cite{Ga5,DnG}. \subsection{The AJ Conjecture and the Slope Conjecture} \lbl{sub.AJslope} In this section we will see how the AJ Conjecture relates to the Slope Conjecture, expanding a comment of \cite[Sec.2]{Ga3}. We will specialize Definition \ref{def.3poly} to knot theory when $P=A_K$ is the non-commutative $A$-polynomial of a knot $K$, and we will denote by $N_K$, $N_{K,0}$ and $N_{K,1}$ the three polytopes associated to $A_K$. Proposition \ref{prop.1} implies that \begin{equation} \lbl{eq.e1} \mathrm{js}_K \subset -s(N_{K,0}) \end{equation} Let $\mathrm{bs}^A_K$ denote the slopes of the $A$-polynomial of $K$. The AJ Conjecture implies that, up to possibly excluding the slope $1/0$ from $2 s(N_{K,1})$, we have: \begin{equation} \lbl{eq.e2} 2 s(N_{K,1}) = \mathrm{bs}^A_K. \end{equation} For a careful proof, see Proposition \ref{prop.2} and Remark \ref{rem.shift} below. Culler and Shalen show that edges of the Newton polygon of the $A$-polynomial of $K$ give rise to ideal points of the $\mathrm{SL}(2,\mathbb C)$ character variety of $K$; see \cite{CS,CGLS,CCGLS}. For every ideal point, Culler and Shalen construct an incompressible surface whose slope is a boundary slope of $K$; see \cite{CS,CCGLS}. $\mathrm{bs}^A_K$ is the set of the so-called {\em strongly detected boundary slopes} of $K$, and satisfies the inclusion: \begin{equation} \lbl{eq.e3} \mathrm{bs}^A_K \subset \mathrm{bs}_K. \end{equation} If $A_K(M,L,q)$ is good, then \begin{equation} \lbl{eq.e4} N_{K,0} = N_{K,1}. \end{equation} If $K^*$ denotes the mirror of $K$, then $J_{K^*,n}(q)=J_{K,n}(q^{-1})$, which implies that $s(N_{K^*,0})=-s(N_{K,0})$.
Combining Equations \eqref{eq.e1}-\eqref{eq.e4}, it follows that $$ 2 \mathrm{js}_K \subset \mathrm{bs}_{K^*} $$ which is the Slope Conjecture, up to a harmless mirror image. This derivation also explains two independent factors of $2$, one in Equation \eqref{eq.js} and another one in Equation \eqref{eq.AJ}. \begin{proposition} \lbl{prop.2} If the non-commutative $A$-polynomial of $K$ is good, and if the AJ Conjecture holds, then $\Gamma_K$ is dual to a Newton subdivision of the Newton polygon of the $A$-polynomial of $K$ (multiplied by a polynomial in $M$). \end{proposition} \begin{proof} Let $P$ denote the non-commutative $A$-polynomial of a knot $K$. $\Gamma_K$ is dual to a Newton subdivision of $N_{P,0}$. If $P$ is good, then $N_{P,0}=N_{P,1}$. With the notation of Conjecture \ref{conj.AJ}, the AJ Conjecture implies that $$ P(M,L,1)=A_K(M^{1/2},L) B_K(M) $$ where $B_K(M)$ is a polynomial in $M$, and $A_K$ is the $A$-polynomial of $K$. The Newton polygon of the product of two polynomials is the Minkowski sum of their Newton polygons. Moreover, the Newton polygon of $B_K(M)$ is a vertical line segment in the $(L,M)$-plane. It follows that the Newton polygon of $A_K(M^{1/2},L) B_K(M)$ is the Minkowski sum of the Newton polygon of the $A$-polynomial of $K$ (with $M^{1/2}$ in place of $M$) with a vertical segment. On the other hand, the Newton polygon of $P(M,L,1)$ is $N_{P,1}$. The result follows. \end{proof} \begin{remark} \lbl{rem.shift} Note that the Newton polygon of $A_K(M^{1/2},L) B_K(M)$ is the Minkowski sum of the Newton polygon of $A_K(M^{1/2},L)$ with a vertical line segment. It follows that the slopes of the Newton polygon of $A_K(M^{1/2},L)B_K(M)$ are the slopes of $A_K(M^{1/2},L)$ together with the slope of a vertical segment (i.e., $1/0$). For concrete examples, see Section \ref{sec.compute}, where the Newton polygons of the non-homogeneous $A$-polynomials of $4_1,5_2,6_1,8_1$ are shown; each differs from the Newton polygon of the $A$-polynomial by a Minkowski sum with a vertical segment.
\end{remark} The only knots with explicitly known non-commutative $A$-polynomials (homogeneous and non-homogeneous) are the handful of twist knots $K_p$ of \cite{GS} for $p=-8,\dots,11$. An explicit check shows that these non-commutative $A$-polynomials (both the homogeneous and the non-homogeneous) are good. For details, see Section \ref{sec.compute}. \section{Quantization and Tropicalization} \lbl{sec.qtrop} Quantization is the process of producing the non-commutative $A$-polynomial of a knot from the usual $A$-polynomial. In other words, Quantization starts with $P_1(x,y)$ and produces $P_t(x,y)$ as in Equation \eqref{eq.Pxyt}. On the other hand, Tropical Geometry expands $P_t(x,y)$ at $t=0$ (or equivalently, at $q=\infty$) and produces a tropical curve. Schematically, we have a diagram: $$ \left(\begin{array}{c} A\text{-polynomial} \\ q=1 \end{array}\right) \stackrel{\text{Classical limit}}\longleftarrow \left(\begin{array}{c} \text{non-commutative} \\ A\text{-polynomial} \\ q \end{array}\right) \stackrel{\text{Tropicalization}}\longrightarrow \left(\begin{array}{c} \text{Tropical curve} \\ q=\infty \end{array}\right) $$ Quantization is a map in the direction opposite to the Classical limit map in the above diagram. Both sides of the above diagram (i.e., the limits at $q=1$ and $q=\infty$) are classical {\em dual} invariants of the knot. Indeed, the tropical curve ought to be dual to a Newton subdivision of the Newton polygon of the $A$-polynomial of $K$. This duality is highly nontrivial, even for the simple case of the $4_1$ knot, computed in Section \ref{sub.41} below. This conjectured duality may be related to the duality between Chern-Simons theory (i.e., colored $\mathrm{U}(N)$ polynomials of a knot) and Enumerative Geometry (i.e., BPS states) of the corresponding Calabi-Yau 3-fold. For a discussion of the latter duality, see \cite{ADKMV,DGKV,LMV,DV} and references therein.
Physics principles concerning Quantization of complex Lagrangians in Chern-Simons theory suggest that the $A$-polynomial of a knot should determine the non-commutative $A$-polynomial. In particular, it should determine the polynomial invariant $B_K(M,q)$ of Definition \ref{def.nhom}, and it should determine the tropical curves $\Gamma_K$ and $\Gamma^{nh}_K$. Aside from duality conjectures, let us concentrate on a concrete question. It is well-known that the $A$-polynomial of a knot is a {\em triangulated curve} in the sense of {\em algebraic K-theory}. In other words, if $X$ is the curve of zeros $A_K(M,L)=0$ of the $A$-polynomial, then there exist nonzero rational functions $z_1,\dots,z_r \in C(X)^*$ on $X$ such that \begin{equation} \lbl{eq.triang} M \wedge L= 2 \sum_{i=1}^r z_i \wedge (1-z_i) \in \wedge^2_{\mathbb Z}(C(X)^*) \end{equation} where $C(X)$ is the field of rational functions of $X$ and $M, \, L \in C(X)^*$ are the eigenvalues of the meridian and the longitude. For a proof of \eqref{eq.triang} (which uses the symplectic nature of the so-called Neumann-Zagier matrices), see \cite[Lem.10.1]{Ch}. For an excellent discussion of triangulated curves $X$, and for a plethora of examples and computations, see \cite{BRVD}. Geometrically, a triangulation of $X$ comes from an ideal triangulation of the knot complement with $r$ ideal tetrahedra with shape parameters $z_1,\dots,z_r$ which satisfy some gluing equations. The symplectic nature of these gluing equations, introduced and studied by Neumann and Zagier in \cite{NZ}, implies \eqref{eq.triang}. The triangulation of $X$ has important arithmetic consequences regarding the {\em volume} of the knot complement and its Dehn fillings, and it is closely related to the {\em Bloch group} of the complex numbers. It is important to realize that $X$ has infinitely many triangulations, and in general it is not possible to choose a canonical one. In addition, triangulations tend to work well for hyperbolic knots.
In contrast, the non-commutative $A$-polynomial and its corresponding tropical curve exist for every knot in 3-space, hyperbolic or not. Let us end with some questions which, aside from their theoretical interest, may play a role in the Quantization of the $A$-polynomial. \begin{question} \lbl{que.1} Is the tropical curve $\Gamma_K$ of a hyperbolic knot $K$ related to a triangulation of its $A$-polynomial curve? \end{question} To formulate our next question, recall that the tropical curve $\Gamma_K$ is dual to a Newton subdivision of the 2-dimensional Newton polytope of the polynomial $A_K(M,L,q)$ with respect to the variables $L$ and $M$. Assuming that $A_K(M,L,q)$ is good, and assuming the AJ Conjecture, it follows that $\Gamma_K$ is dual to a Newton subdivision of the Newton polygon of the $A$-polynomial of $K$. $\Gamma_K$ is a balanced rational graph that consists of edges and rays, and the above assumptions imply that the slopes of the rays are negative inverses of the slopes of the $A$-polynomial of $K$. Consequently, Culler-Shalen theory (see \cite{CS}) implies that the slopes of the rays of $\Gamma_K$ are negative inverses of boundary slopes of $K$, appropriately normalized by a factor of $2$. \begin{question} \lbl{que.2} What is the geometric meaning of the vertices of $\Gamma_K$ (these are points in $\mathbb Q^2$) and of the slopes of the edges of $\Gamma_K$? \end{question} \section{Computations of tropical curves of knots} \lbl{sec.compute} \subsection{The homogeneous tropical curve of the $4_1$ knot} \lbl{sub.41} The non-commutative $A$-polynomial $A_{4_1}(M,L,q)$ of the $4_1$ knot was computed in \cite[Sec.6.2]{GL} and also in \cite[Sec.3.2]{Ga2}, using the {\tt WZ method} of \cite{WZ,Z} as implemented by \cite{PR} in {\tt Mathematica}.
The non-commutative $A$-polynomial is given by {\small \begin{eqnarray*} A_{4_1}(y,x,1/t) &=& \frac{x^3 \left(t^2-y\right) \left(t^3-y\right) y^2 (t+y) \left(t-y^2\right) \left(t^3-y^2\right)}{t^{14}} \\ &+ & \frac{\left(t^2-y\right) (-1+y) y^2 \left(t^2+y\right) \left(t^3-y^2\right) \left(t^5-y^2\right)}{t^{15}} \\ &- & \frac{1}{t^{18}}x^2 \left(t^2-y\right)^2 \left(t^2+y\right) \left(t-y^2\right) \left(t^3-y^2\right) \left(t^8-2 t^6 y+t^7 y-t^3 y^2+t^4 y^2-t^5 y^2+t y^3-2 t^2 y^3+y^4\right) \\ &+ & \frac{1}{t^{17}}x (t-y) \left(t^2-y\right) (t+y) \left(t^3-y^2\right) \left(t^5-y^2\right) \left(t^4+y^4-t^3 y (2+y)-t y^2 (1+2 y)+t^2 y \left(1+y+y^2\right)\right) \end{eqnarray*} } Notice that $$ A_{4_1}(y,x,1)=(-1 + x) (-1 + y)^4 (1 + y)^3 (-x + x y + y^2 + 2 x y^2 + x^2 y^2 + x y^3 - x y^4) $$ confirms the AJ Conjecture, since the last factor is the geometric component of the $A$-polynomial of $4_1$, the first factor is the abelian component of the $A$-polynomial, and the remaining two factors depend only on $y=M$.
Expanding out the terms, we obtain that: \begin{math} A_{4_1}(y,x,1/t)= \tfrac{1}{t^{18}}\cdot x^{2}y^{11}+\tfrac{-1}{t^{14}}\cdot x^{3}y^{9}+\tfrac{1-3\cdot t}{t^{17}}\cdot x^{2}y^{10}+\tfrac{-1}{t^{17}}\cdot xy^{11}+\tfrac{-1+t+t^{2}}{t^{13}}\cdot x^{3}y^{8}+\tfrac{-1-3\cdot t^{2}+2\cdot t^{3}-t^{4}}{t^{17}}\cdot x^{2}y^{9}+\tfrac{2}{t^{16}}\cdot xy^{10}+\tfrac{1+2\cdot t^{2}+t^{3}-t^{4}}{t^{13}}\cdot x^{3}y^{7}+\tfrac{-1+3\cdot t-t^{2}+3\cdot t^{3}+2\cdot t^{5}}{t^{16}}\cdot x^{2}y^{8}+\tfrac{1+t^{3}+t^{4}}{t^{16}}\cdot xy^{9}+\tfrac{1-t-t^{3}-2\cdot t^{4}}{t^{12}}\cdot x^{3}y^{6}+\tfrac{3-2\cdot t+3\cdot t^{2}+t^{4}}{t^{14}}\cdot x^{2}y^{7}+\tfrac{-1-t-t^{2}-t^{3}-2\cdot t^{4}}{t^{15}}\cdot xy^{8}+\tfrac{1}{t^{15}}\cdot y^{9}+\tfrac{-2-t-t^{3}+t^{4}}{t^{10}}\cdot x^{3}y^{5}+\tfrac{1-3\cdot t-2\cdot t^{3}-t^{4}-2\cdot t^{6}}{t^{13}}\cdot x^{2}y^{6}+\tfrac{-1-t^{2}-t^{3}-t^{4}-t^{6}}{t^{14}}\cdot xy^{7}+\tfrac{-1}{t^{15}}\cdot y^{8}+\tfrac{-1+t+2\cdot t^{2}+t^{4}}{t^{9}}\cdot x^{3}y^{4}+\tfrac{-2-t^{2}-2\cdot t^{3}-3\cdot t^{5}+t^{6}}{t^{11}}\cdot x^{2}y^{5}+\tfrac{1+t^{2}+t^{3}+2\cdot t^{4}+t^{5}+t^{6}}{t^{13}}\cdot xy^{6}+\tfrac{-1-t-t^{2}}{t^{12}}\cdot y^{7}+\tfrac{1+t-t^{2}}{t^{7}}\cdot x^{3}y^{3}+\tfrac{1+3\cdot t^{2}-2\cdot t^{3}+3\cdot t^{4}}{t^{8}}\cdot x^{2}y^{4}+\tfrac{1+t+2\cdot t^{2}+t^{3}+t^{4}+t^{6}}{t^{11}}\cdot xy^{5}+\tfrac{1+t+t^{2}}{t^{12}}\cdot y^{6}+\tfrac{-1}{t^{4}}\cdot x^{3}y^{2}+\tfrac{2+3\cdot t^{2}-t^{3}+3\cdot t^{4}-t^{5}}{t^{7}}\cdot x^{2}y^{3}+\tfrac{-1-t^{2}-t^{3}-t^{4}-t^{6}}{t^{10}}\cdot xy^{4}+\tfrac{1+t+t^{2}}{t^{8}}\cdot y^{5}+\tfrac{-1+2\cdot t-3\cdot t^{2}-t^{4}}{t^{5}}\cdot x^{2}y^{2}+\tfrac{-2-t-t^{2}-t^{3}-t^{4}}{t^{7}}\cdot xy^{3}+\tfrac{-1-t-t^{2}}{t^{8}}\cdot y^{4}+\tfrac{-3+t}{t^{2}}\cdot x^{2}y+\tfrac{1+t+t^{4}}{t^{6}}\cdot xy^{2}+\tfrac{-1}{t^{3}}\cdot y^{3}+x^{2}+\tfrac{2}{t^{2}}\cdot xy+\tfrac{1}{t^{3}}\cdot y^{2}+\tfrac{-1}{t}\cdot x \end{math} Inspection of the above formula shows that $A_{4_1}(y,x,1/t)$ is good. 
Using the drawing program {\tt polymake} of \cite{Ma}, implemented in {\tt Singular}, one can compute the vertices of the tropical curve: {\small \begin{displaymath} (3,-1/2),\;\;(-1,-1/3),\;\;(-3/4,-1/2),\;\;(-2,0),\;\;(2,-1), \;\;(-1/2,-1),\;\;(1,-3/2),\;\;(0,-3/2),\;\; \end{displaymath} \begin{displaymath} (-1/2,-5/4),\;\;(1/2,-7/4),\;\;(-1,-3/2),\;\;(1/2,-2),\;\;(2,-3), \;\;(3/4,-5/2),\;\;(1,-8/3),\;\;(-2,-2),\;\;(-3,-5/2) \end{displaymath} } The tropical curve (with the convention that unmarked edges or rays have multiplicity $1$) is: \vspace*{0.5cm} \begin{center} \begin{texdraw} \drawdim cm \relunitscale 0.5 \arrowheadtype t:V \linewd 0.1 \lpatt (1 0) \setgray 0.6 \relunitscale 2 \move (3 0.5) \fcir f:0 r:0.1 \move (3 0.5) \lvec (-0.75 0.5) \htext (1.12 0){$2$} \move (3 0.5) \lvec (2 0) \move (3 0.5) \rlvec (2.5 1.25) \move (3 0.5) \rlvec (1.5 0) \htext (3.5 0.5){$2$} \move (-1 0.66) \fcir f:0 r:0.1 \move (-1 0.66) \lvec (-0.75 0.5) \move (-1 0.66) \lvec (-2 1) \move (-1 0.66) \rlvec (0 1.5) \move (-0.75 0.5) \fcir f:0 r:0.1 \move (-0.75 0.5) \lvec (-0.5 0) \move (-2 1) \fcir f:0 r:0.1 \move (-2 1) \rlvec (-1.5 0) \move (-2 1) \rlvec (-2.5 1.25) \move (2 0) \fcir f:0 r:0.1 \move (2 0) \lvec (-0.5 0) \move (2 0) \lvec (1 -0.5) \move (2 0) \rlvec (1.5 0) \move (-0.5 0) \fcir f:0 r:0.1 \move (-0.5 0) \lvec (-0.5 -0.25) \htext (-0.5 -0.62){$2$} \move (1 -0.5) \fcir f:0 r:0.1 \move (1 -0.5) \lvec (0 -0.5) \htext (0.5 -1){$2$} \move (1 -0.5) \lvec (0.5 -0.75) \move (1 -0.5) \rlvec (1.5 0) \htext (1.5 -0.5){$2$} \move (0 -0.5) \fcir f:0 r:0.1 \move (0 -0.5) \lvec (-0.5 -0.25) \move (0 -0.5) \lvec (0.5 -0.75) \move (0 -0.5) \lvec (-1 -0.5) \htext (-0.5 -1){$2$} \move (-0.5 -0.25) \fcir f:0 r:0.1 \move (-0.5 -0.25) \lvec (-1 -0.5) \move (0.5 -0.75) \fcir f:0 r:0.1 \move (0.5 -0.75) \lvec (0.5 -1) \htext (0.5 -1.37){$2$} \move (-1 -0.5) \fcir f:0 r:0.1 \move (-1 -0.5) \lvec (-2 -1) \move (-1 -0.5) \rlvec (-1.5 0) \htext (-1.5 -0.5){$2$} \move (0.5 -1) \fcir f:0 r:0.1
\move (0.5 -1) \lvec (0.75 -1.5) \move (0.5 -1) \lvec (-2 -1) \htext (-0.75 -1.5){$2$} \move (0.5 -1) \rlvec (1.5 0) \move (2 -2) \fcir f:0 r:0.1 \move (2 -2) \lvec (1 -1.66) \move (2 -2) \rlvec (1.5 0) \move (2 -2) \rlvec (2.5 -1.25) \move (0.75 -1.5) \fcir f:0 r:0.1 \move (0.75 -1.5) \lvec (1 -1.66) \move (0.75 -1.5) \lvec (-3 -1.5) \htext (-1.12 -2){$2$} \move (1 -1.66) \fcir f:0 r:0.1 \move (1 -1.66) \rlvec (0 -1.5) \move (-2 -1) \fcir f:0 r:0.1 \move (-2 -1) \lvec (-3 -1.5) \move (-2 -1) \rlvec (-1.5 0) \htext (-2.5 -1){$2$} \move (-3 -1.5) \fcir f:0 r:0.1 \move (-3 -1.5) \rlvec (-2.5 -1.25) \move (-3 -1.5) \rlvec (-1.5 0) \htext (-3.5 -1.5){$2$} \move (-4 -3) \fcir f:0.8 r:0.05 \move (-4 -2) \fcir f:0.8 r:0.05 \move (-4 -1) \fcir f:0.8 r:0.05 \move (-4 0) \fcir f:0.8 r:0.05 \move (-4 1) \fcir f:0.8 r:0.05 \move (-4 2) \fcir f:0.8 r:0.05 \move (-3 -3) \fcir f:0.8 r:0.05 \move (-3 -2) \fcir f:0.8 r:0.05 \move (-3 -1) \fcir f:0.8 r:0.05 \move (-3 0) \fcir f:0.8 r:0.05 \move (-3 1) \fcir f:0.8 r:0.05 \move (-3 2) \fcir f:0.8 r:0.05 \move (-2 -3) \fcir f:0.8 r:0.05 \move (-2 -2) \fcir f:0.8 r:0.05 \move (-2 -1) \fcir f:0.8 r:0.05 \move (-2 0) \fcir f:0.8 r:0.05 \move (-2 1) \fcir f:0.8 r:0.05 \move (-2 2) \fcir f:0.8 r:0.05 \move (-1 -3) \fcir f:0.8 r:0.05 \move (-1 -2) \fcir f:0.8 r:0.05 \move (-1 -1) \fcir f:0.8 r:0.05 \move (-1 0) \fcir f:0.8 r:0.05 \move (-1 1) \fcir f:0.8 r:0.05 \move (-1 2) \fcir f:0.8 r:0.05 \move (0 -3) \fcir f:0.8 r:0.05 \move (0 -2) \fcir f:0.8 r:0.05 \move (0 -1) \fcir f:0.8 r:0.05 \move (0 0) \fcir f:0.8 r:0.05 \move (0 1) \fcir f:0.8 r:0.05 \move (0 2) \fcir f:0.8 r:0.05 \move (1 -3) \fcir f:0.8 r:0.05 \move (1 -2) \fcir f:0.8 r:0.05 \move (1 -1) \fcir f:0.8 r:0.05 \move (1 0) \fcir f:0.8 r:0.05 \move (1 1) \fcir f:0.8 r:0.05 \move (1 2) \fcir f:0.8 r:0.05 \move (2 -3) \fcir f:0.8 r:0.05 \move (2 -2) \fcir f:0.8 r:0.05 \move (2 -1) \fcir f:0.8 r:0.05 \move (2 0) \fcir f:0.8 r:0.05 \move (2 1) \fcir f:0.8 r:0.05 \move (2 2) \fcir f:0.8 
r:0.05 \move (3 -3) \fcir f:0.8 r:0.05 \move (3 -2) \fcir f:0.8 r:0.05 \move (3 -1) \fcir f:0.8 r:0.05 \move (3 0) \fcir f:0.8 r:0.05 \move (3 1) \fcir f:0.8 r:0.05 \move (3 2) \fcir f:0.8 r:0.05 \move (4 -3) \fcir f:0.8 r:0.05 \move (4 -2) \fcir f:0.8 r:0.05 \move (4 -1) \fcir f:0.8 r:0.05 \move (4 0) \fcir f:0.8 r:0.05 \move (4 1) \fcir f:0.8 r:0.05 \move (4 2) \fcir f:0.8 r:0.05 \end{texdraw}\end{center} The Newton subdivision of the Newton polygon is: \begin{center} \begin{texdraw} \drawdim cm \relunitscale 0.5 \linewd 0.05 \move (2 11) \lvec (3 9) \move (3 9) \lvec (3 2) \move (3 2) \lvec (2 0) \move (2 0) \lvec (1 0) \move (1 0) \lvec (0 2) \move (0 2) \lvec (0 9) \move (0 9) \lvec (1 11) \move (1 11) \lvec (2 11) \move (2 9) \lvec (2 11) \move (3 7) \lvec (2 9) \move (2 11) \lvec (0 8) \move (0 8) \lvec (1 11) \move (2 9) \lvec (0 8) \move (2 8) \lvec (2 9) \move (3 6) \lvec (2 8) \move (2 8) \lvec (0 8) \move (2 6) \lvec (2 8) \move (3 4) \lvec (2 6) \move (1 6) \lvec (2 8) \move (2 6) \lvec (1 4) \move (1 4) \lvec (1 6) \move (1 6) \lvec (0 8) \move (3 4) \lvec (1 4) \move (1 4) \lvec (0 6) \move (3 3) \lvec (1 2) \move (1 2) \lvec (1 4) \move (2 0) \lvec (3 3) \move (3 3) \lvec (1 0) \move (1 0) \lvec (1 2) \move (1 2) \lvec (0 4) \move (0 0) \fcir f:0.6 r:0.18 \move (0 1) \fcir f:0.6 r:0.18 \move (0 2) \fcir f:0.6 r:0.18 \move (0 3) \fcir f:0.6 r:0.18 \move (0 4) \fcir f:0.6 r:0.18 \move (0 5) \fcir f:0.6 r:0.18 \move (0 6) \fcir f:0.6 r:0.18 \move (0 7) \fcir f:0.6 r:0.18 \move (0 8) \fcir f:0.6 r:0.18 \move (0 9) \fcir f:0.6 r:0.18 \move (0 10) \fcir f:0.6 r:0.18 \move (0 11) \fcir f:0.6 r:0.18 \move (1 0) \fcir f:0.6 r:0.18 \move (1 1) \fcir f:0.6 r:0.18 \move (1 2) \fcir f:0.6 r:0.18 \move (1 3) \fcir f:0.6 r:0.18 \move (1 4) \fcir f:0.6 r:0.18 \move (1 5) \fcir f:0.6 r:0.18 \move (1 6) \fcir f:0.6 r:0.18 \move (1 7) \fcir f:0.6 r:0.18 \move (1 8) \fcir f:0.6 r:0.18 \move (1 9) \fcir f:0.6 r:0.18 \move (1 10) \fcir f:0.6 r:0.18 \move (1 11) \fcir 
f:0.6 r:0.18 \move (2 0) \fcir f:0.6 r:0.18 \move (2 1) \fcir f:0.6 r:0.18 \move (2 2) \fcir f:0.6 r:0.18 \move (2 3) \fcir f:0.6 r:0.18 \move (2 4) \fcir f:0.6 r:0.18 \move (2 5) \fcir f:0.6 r:0.18 \move (2 6) \fcir f:0.6 r:0.18 \move (2 7) \fcir f:0.6 r:0.18 \move (2 8) \fcir f:0.6 r:0.18 \move (2 9) \fcir f:0.6 r:0.18 \move (2 10) \fcir f:0.6 r:0.18 \move (2 11) \fcir f:0.6 r:0.18 \move (3 0) \fcir f:0.6 r:0.18 \move (3 1) \fcir f:0.6 r:0.18 \move (3 2) \fcir f:0.6 r:0.18 \move (3 3) \fcir f:0.6 r:0.18 \move (3 4) \fcir f:0.6 r:0.18 \move (3 5) \fcir f:0.6 r:0.18 \move (3 6) \fcir f:0.6 r:0.18 \move (3 7) \fcir f:0.6 r:0.18 \move (3 8) \fcir f:0.6 r:0.18 \move (3 9) \fcir f:0.6 r:0.18 \move (3 10) \fcir f:0.6 r:0.18 \move (3 11) \fcir f:0.6 r:0.18 \move (2 11) \fcir f:0 r:0.22 \move (3 9) \fcir f:0 r:0.22 \move (2 9) \fcir f:0 r:0.22 \move (3 7) \fcir f:0 r:0.22 \move (1 11) \fcir f:0 r:0.22 \move (0 8) \fcir f:0 r:0.22 \move (0 9) \fcir f:0 r:0.22 \move (2 8) \fcir f:0 r:0.22 \move (3 6) \fcir f:0 r:0.22 \move (2 6) \fcir f:0 r:0.22 \move (3 4) \fcir f:0 r:0.22 \move (1 6) \fcir f:0 r:0.22 \move (1 4) \fcir f:0 r:0.22 \move (0 6) \fcir f:0 r:0.22 \move (3 3) \fcir f:0 r:0.22 \move (1 2) \fcir f:0 r:0.22 \move (3 2) \fcir f:0 r:0.22 \move (2 0) \fcir f:0 r:0.22 \move (1 0) \fcir f:0 r:0.22 \move (0 4) \fcir f:0 r:0.22 \move (0 2) \fcir f:0 r:0.22 \end{texdraw}\end{center} The reader may observe that the above Newton polygon is the Minkowski sum of the Newton polygon of the $A$-polynomial of $4_1$ with a vertical segment. \subsection{The non-homogeneous tropical curve of the $4_1$ knot} \lbl{sub.41non} The non-homogeneous $A$-polynomial of the $4_1$ knot was computed in Theorem 1 of \cite{GS} (with the notation $A_{-1}(E,Q,q)$ where $E=L$ and $Q=M$). 
It has $22$ terms and it is given by: \begin{eqnarray*} A^{nh}_{4_1}(M,L,q)&=& L^2 M^2 q^2 \left(-1+M^2 q\right) \left(-1+M q^2\right) +(-1+M) M^2 q^2 \left(-1+M^2 q^3\right) \\ & & -L (-1+M q)^2 (1+M q) \left(1-M q-M^2 q-M^2 q^3-M^3 q^3+M^4 q^4\right) \\ B_{4_1}(M,q)&=& M q (1+M q) \left(-1+M^2 q\right) \left(-1+M^2 q^3\right) \end{eqnarray*} It follows that: \begin{math} A^{nh}_{4_1}(y,x,1/t)=\tfrac{-1}{t^{7}}\cdot xy^{7}+\tfrac{1}{t^{5}}\cdot x^{2}y^{5}+\tfrac{2}{t^{6}}\cdot xy^{6}+\tfrac{-1}{t^{3}}\cdot x^{2}y^{4}+\tfrac{1+t^{2}}{t^{6}}\cdot xy^{5}+\tfrac{-1}{t^{4}}\cdot x^{2}y^{3}+\tfrac{-1-t-t^{2}}{t^{5}}\cdot xy^{4}+\tfrac{1}{t^{5}}\cdot y^{5}+\tfrac{1}{t^{2}}\cdot x^{2}y^{2}+\tfrac{-1-t-t^{2}}{t^{4}}\cdot xy^{3}+\tfrac{-1}{t^{5}}\cdot y^{4}+\tfrac{1+t^{2}}{t^{3}}\cdot xy^{2}+\tfrac{-1}{t^{2}}\cdot y^{3}+\tfrac{2}{t}\cdot xy+\tfrac{1}{t^{2}}\cdot y^{2}-x \end{math} \noindent It is easy to see that the above polynomial is good. The vertices of the corresponding tropical curve are: \begin{displaymath} (1,-1/2),\;\;(-1/2,-1/2),\;\;(-2,0),\;\;(0,-1),\;\;(2,-2),\;\;(1/2,-3/2),\;\;(-1,-3/2) \end{displaymath} The tropical curve is: \vspace*{0.5cm} \begin{center} \begin{texdraw} \drawdim cm \relunitscale 0.5 \arrowheadtype t:V \linewd 0.1 \lpatt (1 0) \setgray 0.6 \relunitscale 3 \move (1 0.5) \fcir f:0 r:0.06 \move (1 0.5) \lvec (-0.5 0.5) \htext (0.25 0){$2$} \move (1 0.5) \lvec (0 0) \move (1 0.5) \rlvec (2 1) \move (1 0.5) \rlvec (1 0) \htext (1.33 0.5){$2$} \move (-0.5 0.5) \fcir f:0 r:0.06 \move (-0.5 0.5) \lvec (-2 1) \move (-0.5 0.5) \lvec (0 0) \move (-2 1) \fcir f:0 r:0.06 \move (-2 1) \rlvec (-1 0) \move (-2 1) \rlvec (-2 1) \move (0 0) \fcir f:0 r:0.06 \move (0 0) \lvec (0.5 -0.5) \move (0 0) \lvec (-1 -0.5) \move (2 -1) \fcir f:0 r:0.06 \move (2 -1) \lvec (0.5 -0.5) \move (2 -1) \rlvec (1 0) \move (2 -1) \rlvec (2 -1) \move (0.5 -0.5) \fcir f:0 r:0.06 \move (0.5 -0.5) \lvec (-1 -0.5) \htext (-0.25 -1){$2$} \move (-1 -0.5) \fcir f:0 r:0.06 \move (-1
-0.5) \rlvec (-2 -1) \move (-1 -0.5) \rlvec (-1 0) \htext (-1.33 -0.5){$2$} \move (-3 -2) \fcir f:0.8 r:0.03 \move (-3 -1) \fcir f:0.8 r:0.03 \move (-3 0) \fcir f:0.8 r:0.03 \move (-3 1) \fcir f:0.8 r:0.03 \move (-3 2) \fcir f:0.8 r:0.03 \move (-2 -2) \fcir f:0.8 r:0.03 \move (-2 -1) \fcir f:0.8 r:0.03 \move (-2 0) \fcir f:0.8 r:0.03 \move (-2 1) \fcir f:0.8 r:0.03 \move (-2 2) \fcir f:0.8 r:0.03 \move (-1 -2) \fcir f:0.8 r:0.03 \move (-1 -1) \fcir f:0.8 r:0.03 \move (-1 0) \fcir f:0.8 r:0.03 \move (-1 1) \fcir f:0.8 r:0.03 \move (-1 2) \fcir f:0.8 r:0.03 \move (0 -2) \fcir f:0.8 r:0.03 \move (0 -1) \fcir f:0.8 r:0.03 \move (0 0) \fcir f:0.8 r:0.03 \move (0 1) \fcir f:0.8 r:0.03 \move (0 2) \fcir f:0.8 r:0.03 \move (1 -2) \fcir f:0.8 r:0.03 \move (1 -1) \fcir f:0.8 r:0.03 \move (1 0) \fcir f:0.8 r:0.03 \move (1 1) \fcir f:0.8 r:0.03 \move (1 2) \fcir f:0.8 r:0.03 \move (2 -2) \fcir f:0.8 r:0.03 \move (2 -1) \fcir f:0.8 r:0.03 \move (2 0) \fcir f:0.8 r:0.03 \move (2 1) \fcir f:0.8 r:0.03 \move (2 2) \fcir f:0.8 r:0.03 \move (3 -2) \fcir f:0.8 r:0.03 \move (3 -1) \fcir f:0.8 r:0.03 \move (3 0) \fcir f:0.8 r:0.03 \move (3 1) \fcir f:0.8 r:0.03 \move (3 2) \fcir f:0.8 r:0.03 \end{texdraw}\end{center} \noindent The Newton subdivision of the Newton polygon is: \begin{center} \begin{texdraw} \drawdim cm \relunitscale 0.5 \linewd 0.05 \move (1 7) \lvec (2 5) \move (2 5) \lvec (2 2) \move (2 2) \lvec (1 0) \move (1 0) \lvec (0 2) \move (0 2) \lvec (0 5) \move (0 5) \lvec (1 7) \move (1 5) \lvec (1 7) \move (2 3) \lvec (1 5) \move (0 4) \lvec (1 7) \move (1 5) \lvec (0 4) \move (2 3) \lvec (1 2) \move (1 2) \lvec (0 4) \move (1 0) \lvec (2 3) \move (1 0) \lvec (1 2) \move (0 0) \fcir f:0.6 r:0.11 \move (0 1) \fcir f:0.6 r:0.11 \move (0 2) \fcir f:0.6 r:0.11 \move (0 3) \fcir f:0.6 r:0.11 \move (0 4) \fcir f:0.6 r:0.11 \move (0 5) \fcir f:0.6 r:0.11 \move (0 6) \fcir f:0.6 r:0.11 \move (0 7) \fcir f:0.6 r:0.11 \move (1 0) \fcir f:0.6 r:0.11 \move (1 1) \fcir f:0.6 r:0.11 
\move (1 2) \fcir f:0.6 r:0.11 \move (1 3) \fcir f:0.6 r:0.11 \move (1 4) \fcir f:0.6 r:0.11 \move (1 5) \fcir f:0.6 r:0.11 \move (1 6) \fcir f:0.6 r:0.11 \move (1 7) \fcir f:0.6 r:0.11 \move (2 0) \fcir f:0.6 r:0.11 \move (2 1) \fcir f:0.6 r:0.11 \move (2 2) \fcir f:0.6 r:0.11 \move (2 3) \fcir f:0.6 r:0.11 \move (2 4) \fcir f:0.6 r:0.11 \move (2 5) \fcir f:0.6 r:0.11 \move (2 6) \fcir f:0.6 r:0.11 \move (2 7) \fcir f:0.6 r:0.11 \move (1 7) \fcir f:0 r:0.14 \move (2 5) \fcir f:0 r:0.14 \move (1 5) \fcir f:0 r:0.14 \move (2 3) \fcir f:0 r:0.14 \move (0 4) \fcir f:0 r:0.14 \move (0 5) \fcir f:0 r:0.14 \move (1 4) \fcir f:0 r:0.14 \move (1 3) \fcir f:0 r:0.14 \move (1 2) \fcir f:0 r:0.14 \move (2 2) \fcir f:0 r:0.14 \move (1 0) \fcir f:0 r:0.14 \move (0 2) \fcir f:0 r:0.14 \end{texdraw} \end{center} \noindent This example exhibits that the non-homogeneous tropical curve is much simpler than the homogeneous one. \subsection{The non-homogeneous tropical curve of the $5_2$ knot} \lbl{sub.52non} The non-homogeneous non-commutative $A$-polynomial $A^{nh}_{5_2}(M,L,q)$ has $98$ terms, and it is given by \cite{GS} (with the notation $A^{nh}_2(E,Q,q)$ where $E=L$, $Q=M$): \begin{math} A^{nh}_{5_2}(y,x,1/t) =\tfrac{-1}{t^{19}}\cdot xy^{12}+\tfrac{1}{t^{17}}\cdot x^{2}y^{10}+\tfrac{3}{t^{18}}\cdot xy^{11}+\tfrac{1}{t^{18}}\cdot y^{12}+\tfrac{-2}{t^{15}}\cdot x^{2}y^{9}+\tfrac{1-t+t^{3}+t^{4}}{t^{18}}\cdot xy^{10}+\tfrac{-1}{t^{18}}\cdot y^{11}+\tfrac{-1-t+t^{2}+t^{3}-t^{4}}{t^{16}}\cdot x^{2}y^{8}+\tfrac{-2-2\cdot t+t^{2}-2\cdot t^{3}-3\cdot t^{4}}{t^{17}}\cdot xy^{9}+\tfrac{-1-t}{t^{14}}\cdot y^{10}+\tfrac{2+2\cdot t-t^{2}+t^{3}+2\cdot t^{4}}{t^{14}}\cdot x^{2}y^{7}+\tfrac{1-2\cdot t-t^{2}+t^{3}-t^{5}}{t^{15}}\cdot xy^{8}+\tfrac{1+t}{t^{14}}\cdot y^{9}+\tfrac{1}{t^{6}}\cdot x^{3}y^{5}+\tfrac{1-t-t^{2}+t^{3}+t^{4}-2\cdot t^{5}}{t^{14}}\cdot x^{2}y^{6}+\tfrac{2-t+t^{2}+4\cdot t^{3}+2\cdot t^{4}-t^{5}+2\cdot t^{6}}{t^{15}}\cdot xy^{7}+\tfrac{1}{t^{9}}\cdot 
y^{8}+\tfrac{-1}{t^{3}}\cdot x^{3}y^{4}+\tfrac{-2+t-t^{2}-4\cdot t^{3}-2\cdot t^{4}+t^{5}-2\cdot t^{6}}{t^{12}}\cdot x^{2}y^{5}+\tfrac{-1+t+t^{2}-t^{3}-t^{4}+2\cdot t^{5}}{t^{14}}\cdot xy^{6}+\tfrac{-1}{t^{9}}\cdot y^{7}+\tfrac{-1-t}{t^{5}}\cdot x^{3}y^{3}+\tfrac{-1+2\cdot t+t^{2}-t^{3}+t^{5}}{t^{9}}\cdot x^{2}y^{4}+\tfrac{-2-2\cdot t+t^{2}-t^{3}-2\cdot t^{4}}{t^{11}}\cdot xy^{5}+\tfrac{1+t}{t^{2}}\cdot x^{3}y^{2}+\tfrac{2+2\cdot t-t^{2}+2\cdot t^{3}+3\cdot t^{4}}{t^{8}}\cdot x^{2}y^{3}+\tfrac{1+t-t^{2}-t^{3}+t^{4}}{t^{10}}\cdot xy^{4}+\tfrac{1}{t^{3}}\cdot x^{3}y+\tfrac{-1+t-t^{3}-t^{4}}{t^{6}}\cdot x^{2}y^{2}+\tfrac{2}{t^{6}}\cdot xy^{3}-x^{3}+\tfrac{-3}{t^{3}}\cdot x^{2}y+\tfrac{-1}{t^{5}}\cdot xy^{2}+\tfrac{1}{t}\cdot x^{2} \end{math} \noindent The vertices of the tropical curve are: \begin{displaymath} (1,-1/2),\;\;(-1,0),\;\;(-1/2,-1/2),\;\;(17/2,-1/2),\;\;(-1,-1),\;\;(0,-1),\;\;(-6,-2),\;\; \end{displaymath} \begin{displaymath} (6,-1),\;\;(-17/2,-5/2),\;\;(0,-2),\;\;(1,-2),\;\;(-1,-5/2),\;\;(1/2,-5/2),\;\;(1,-3) \end{displaymath} \vspace*{0.5cm} \noindent \vspace*{0.5cm} The Newton subdivision of the tropical curve is: \vspace*{0.5cm} \begin{center} \begin{texdraw} \drawdim cm \relunitscale 0.5 \linewd 0.05 \move (1 12) \lvec (2 10) \move (2 10) \lvec (3 5) \move (3 5) \lvec (3 0) \move (3 0) \lvec (2 0) \move (2 0) \lvec (1 2) \move (1 2) \lvec (0 7) \move (0 7) \lvec (0 12) \move (0 12) \lvec (1 12) \move (1 10) \lvec (1 12) \move (2 10) \lvec (2 8) \move (2 8) \lvec (1 10) \move (1 12) \lvec (0 11) \move (1 10) \lvec (0 11) \move (3 3) \lvec (2 8) \move (1 7) \lvec (1 6) \move (1 9) \lvec (1 7) \move (1 10) \lvec (1 9) \move (1 6) \lvec (0 11) \move (2 8) \lvec (2 6) \move (2 6) \lvec (1 6) \move (1 4) \lvec (0 9) \move (1 6) \lvec (1 4) \move (3 1) \lvec (2 6) \move (1 4) \lvec (1 2) \move (2 3) \lvec (2 2) \move (2 5) \lvec (2 3) \move (2 6) \lvec (2 5) \move (2 2) \lvec (1 4) \move (3 1) \lvec (2 2) \move (2 2) \lvec (2 0) \move (3 1) \lvec (2 0) \move 
(0 0) \fcir f:0.6 r:0.2 \move (0 1) \fcir f:0.6 r:0.2 \move (0 2) \fcir f:0.6 r:0.2 \move (0 3) \fcir f:0.6 r:0.2 \move (0 4) \fcir f:0.6 r:0.2 \move (0 5) \fcir f:0.6 r:0.2 \move (0 6) \fcir f:0.6 r:0.2 \move (0 7) \fcir f:0.6 r:0.2 \move (0 8) \fcir f:0.6 r:0.2 \move (0 9) \fcir f:0.6 r:0.2 \move (0 10) \fcir f:0.6 r:0.2 \move (0 11) \fcir f:0.6 r:0.2 \move (0 12) \fcir f:0.6 r:0.2 \move (1 0) \fcir f:0.6 r:0.2 \move (1 1) \fcir f:0.6 r:0.2 \move (1 2) \fcir f:0.6 r:0.2 \move (1 3) \fcir f:0.6 r:0.2 \move (1 4) \fcir f:0.6 r:0.2 \move (1 5) \fcir f:0.6 r:0.2 \move (1 6) \fcir f:0.6 r:0.2 \move (1 7) \fcir f:0.6 r:0.2 \move (1 8) \fcir f:0.6 r:0.2 \move (1 9) \fcir f:0.6 r:0.2 \move (1 10) \fcir f:0.6 r:0.2 \move (1 11) \fcir f:0.6 r:0.2 \move (1 12) \fcir f:0.6 r:0.2 \move (2 0) \fcir f:0.6 r:0.2 \move (2 1) \fcir f:0.6 r:0.2 \move (2 2) \fcir f:0.6 r:0.2 \move (2 3) \fcir f:0.6 r:0.2 \move (2 4) \fcir f:0.6 r:0.2 \move (2 5) \fcir f:0.6 r:0.2 \move (2 6) \fcir f:0.6 r:0.2 \move (2 7) \fcir f:0.6 r:0.2 \move (2 8) \fcir f:0.6 r:0.2 \move (2 9) \fcir f:0.6 r:0.2 \move (2 10) \fcir f:0.6 r:0.2 \move (2 11) \fcir f:0.6 r:0.2 \move (2 12) \fcir f:0.6 r:0.2 \move (3 0) \fcir f:0.6 r:0.2 \move (3 1) \fcir f:0.6 r:0.2 \move (3 2) \fcir f:0.6 r:0.2 \move (3 3) \fcir f:0.6 r:0.2 \move (3 4) \fcir f:0.6 r:0.2 \move (3 5) \fcir f:0.6 r:0.2 \move (3 6) \fcir f:0.6 r:0.2 \move (3 7) \fcir f:0.6 r:0.2 \move (3 8) \fcir f:0.6 r:0.2 \move (3 9) \fcir f:0.6 r:0.2 \move (3 10) \fcir f:0.6 r:0.2 \move (3 11) \fcir f:0.6 r:0.2 \move (3 12) \fcir f:0.6 r:0.2 \move (1 12) \fcir f:0 r:0.25 \move (2 10) \fcir f:0 r:0.25 \move (1 10) \fcir f:0 r:0.25 \move (2 8) \fcir f:0 r:0.25 \move (0 12) \fcir f:0 r:0.25 \move (0 11) \fcir f:0 r:0.25 \move (3 5) \fcir f:0 r:0.25 \move (3 3) \fcir f:0 r:0.25 \move (1 9) \fcir f:0 r:0.25 \move (1 7) \fcir f:0 r:0.25 \move (1 6) \fcir f:0 r:0.25 \move (2 6) \fcir f:0 r:0.25 \move (0 9) \fcir f:0 r:0.25 \move (1 4) \fcir f:0 r:0.25 \move (3 1) \fcir f:0 
r:0.25 \move (0 7) \fcir f:0 r:0.25 \move (1 2) \fcir f:0 r:0.25 \move (2 5) \fcir f:0 r:0.25 \move (2 3) \fcir f:0 r:0.25 \move (2 2) \fcir f:0 r:0.25 \move (2 0) \fcir f:0 r:0.25 \move (3 0) \fcir f:0 r:0.25 \end{texdraw} \end{center} \noindent The tropical curve is: \vspace*{0.5cm} \begin{center} \begin{texdraw} \drawdim cm \relunitscale 0.7 \arrowheadtype t:V \linewd 0.1 \lpatt (1 0) \setgray 0.6 \relunitscale 0.70 \move (1 0.5) \fcir f:0 r:0.28 \move (1 0.5) \lvec (-0.5 0.5) \htext (0.25 0){$2$} \move (1 0.5) \lvec (8.5 0.5) \htext (4.75 0){$2$} \move (1 0.5) \lvec (0 0) \move (1 0.5) \rlvec (2.5 1.25) \move (-1 1) \fcir f:0 r:0.28 \move (-1 1) \lvec (-0.5 0.5) \move (-1 1) \rlvec (-2.5 0) \move (-1 1) \rlvec (0 2.5) \move (-0.5 0.5) \fcir f:0 r:0.28 \move (-0.5 0.5) \lvec (-1 0) \move (8.5 0.5) \fcir f:0 r:0.28 \move (8.5 0.5) \lvec (6 0) \move (8.5 0.5) \rlvec (2.5 0.5) \move (8.5 0.5) \rlvec (2.5 0) \htext (9.91 0.5){$2$} \move (-1 0) \fcir f:0 r:0.28 \move (-1 0) \lvec (0 0) \htext (-0.5 -0.5){$4$} \move (-1 0) \lvec (-6 -1) \move (0 0) \fcir f:0 r:0.28 \move (0 0) \lvec (6 0) \htext (3 -0.5){$2$} \move (0 0) \lvec (0 -1) \move (-6 -1) \fcir f:0 r:0.28 \move (-6 -1) \lvec (-8.5 -1.5) \move (-6 -1) \lvec (0 -1) \htext (-3 -1.5){$2$} \move (-6 -1) \rlvec (-2.5 0) \htext (-7.41 -1){$2$} \move (6 0) \fcir f:0 r:0.28 \move (6 0) \lvec (1 -1) \move (6 0) \rlvec (2.5 0) \htext (7.41 0){$2$} \move (-8.5 -1.5) \fcir f:0 r:0.28 \move (-8.5 -1.5) \lvec (-1 -1.5) \htext (-4.75 -2){$2$} \move (-8.5 -1.5) \rlvec (-2.5 -0.5) \move (-8.5 -1.5) \rlvec (-2.5 0) \htext (-9.91 -1.5){$2$} \move (0 -1) \fcir f:0 r:0.28 \move (0 -1) \lvec (1 -1) \htext (0.5 -1.5){$4$} \move (0 -1) \lvec (-1 -1.5) \move (1 -1) \fcir f:0 r:0.28 \move (1 -1) \lvec (0.5 -1.5) \move (-1 -1.5) \fcir f:0 r:0.28 \move (-1 -1.5) \lvec (0.5 -1.5) \htext (-0.25 -2){$2$} \move (-1 -1.5) \rlvec (-2.5 -1.25) \move (0.5 -1.5) \fcir f:0 r:0.28 \move (0.5 -1.5) \lvec (1 -2) \move (1 -2) \fcir f:0 r:0.28 \move (1 
-2) \rlvec (2.5 0) \move (1 -2) \rlvec (0 -2.5) \move (-9 -3) \fcir f:0.8 r:0.14 \move (-9 -2) \fcir f:0.8 r:0.14 \move (-9 -1) \fcir f:0.8 r:0.14 \move (-9 0) \fcir f:0.8 r:0.14 \move (-9 1) \fcir f:0.8 r:0.14 \move (-9 2) \fcir f:0.8 r:0.14 \move (-8 -3) \fcir f:0.8 r:0.14 \move (-8 -2) \fcir f:0.8 r:0.14 \move (-8 -1) \fcir f:0.8 r:0.14 \move (-8 0) \fcir f:0.8 r:0.14 \move (-8 1) \fcir f:0.8 r:0.14 \move (-8 2) \fcir f:0.8 r:0.14 \move (-7 -3) \fcir f:0.8 r:0.14 \move (-7 -2) \fcir f:0.8 r:0.14 \move (-7 -1) \fcir f:0.8 r:0.14 \move (-7 0) \fcir f:0.8 r:0.14 \move (-7 1) \fcir f:0.8 r:0.14 \move (-7 2) \fcir f:0.8 r:0.14 \move (-6 -3) \fcir f:0.8 r:0.14 \move (-6 -2) \fcir f:0.8 r:0.14 \move (-6 -1) \fcir f:0.8 r:0.14 \move (-6 0) \fcir f:0.8 r:0.14 \move (-6 1) \fcir f:0.8 r:0.14 \move (-6 2) \fcir f:0.8 r:0.14 \move (-5 -3) \fcir f:0.8 r:0.14 \move (-5 -2) \fcir f:0.8 r:0.14 \move (-5 -1) \fcir f:0.8 r:0.14 \move (-5 0) \fcir f:0.8 r:0.14 \move (-5 1) \fcir f:0.8 r:0.14 \move (-5 2) \fcir f:0.8 r:0.14 \move (-4 -3) \fcir f:0.8 r:0.14 \move (-4 -2) \fcir f:0.8 r:0.14 \move (-4 -1) \fcir f:0.8 r:0.14 \move (-4 0) \fcir f:0.8 r:0.14 \move (-4 1) \fcir f:0.8 r:0.14 \move (-4 2) \fcir f:0.8 r:0.14 \move (-3 -3) \fcir f:0.8 r:0.14 \move (-3 -2) \fcir f:0.8 r:0.14 \move (-3 -1) \fcir f:0.8 r:0.14 \move (-3 0) \fcir f:0.8 r:0.14 \move (-3 1) \fcir f:0.8 r:0.14 \move (-3 2) \fcir f:0.8 r:0.14 \move (-2 -3) \fcir f:0.8 r:0.14 \move (-2 -2) \fcir f:0.8 r:0.14 \move (-2 -1) \fcir f:0.8 r:0.14 \move (-2 0) \fcir f:0.8 r:0.14 \move (-2 1) \fcir f:0.8 r:0.14 \move (-2 2) \fcir f:0.8 r:0.14 \move (-1 -3) \fcir f:0.8 r:0.14 \move (-1 -2) \fcir f:0.8 r:0.14 \move (-1 -1) \fcir f:0.8 r:0.14 \move (-1 0) \fcir f:0.8 r:0.14 \move (-1 1) \fcir f:0.8 r:0.14 \move (-1 2) \fcir f:0.8 r:0.14 \move (0 -3) \fcir f:0.8 r:0.14 \move (0 -2) \fcir f:0.8 r:0.14 \move (0 -1) \fcir f:0.8 r:0.14 \move (0 0) \fcir f:0.8 r:0.14 \move (0 1) \fcir f:0.8 r:0.14 \move (0 2) \fcir f:0.8 r:0.14 \move 
(1 -3) \fcir f:0.8 r:0.14 \move (1 -2) \fcir f:0.8 r:0.14 \move (1 -1) \fcir f:0.8 r:0.14 \move (1 0) \fcir f:0.8 r:0.14 \move (1 1) \fcir f:0.8 r:0.14 \move (1 2) \fcir f:0.8 r:0.14 \move (2 -3) \fcir f:0.8 r:0.14 \move (2 -2) \fcir f:0.8 r:0.14 \move (2 -1) \fcir f:0.8 r:0.14 \move (2 0) \fcir f:0.8 r:0.14 \move (2 1) \fcir f:0.8 r:0.14 \move (2 2) \fcir f:0.8 r:0.14 \move (3 -3) \fcir f:0.8 r:0.14 \move (3 -2) \fcir f:0.8 r:0.14 \move (3 -1) \fcir f:0.8 r:0.14 \move (3 0) \fcir f:0.8 r:0.14 \move (3 1) \fcir f:0.8 r:0.14 \move (3 2) \fcir f:0.8 r:0.14 \move (4 -3) \fcir f:0.8 r:0.14 \move (4 -2) \fcir f:0.8 r:0.14 \move (4 -1) \fcir f:0.8 r:0.14 \move (4 0) \fcir f:0.8 r:0.14 \move (4 1) \fcir f:0.8 r:0.14 \move (4 2) \fcir f:0.8 r:0.14 \move (5 -3) \fcir f:0.8 r:0.14 \move (5 -2) \fcir f:0.8 r:0.14 \move (5 -1) \fcir f:0.8 r:0.14 \move (5 0) \fcir f:0.8 r:0.14 \move (5 1) \fcir f:0.8 r:0.14 \move (5 2) \fcir f:0.8 r:0.14 \move (6 -3) \fcir f:0.8 r:0.14 \move (6 -2) \fcir f:0.8 r:0.14 \move (6 -1) \fcir f:0.8 r:0.14 \move (6 0) \fcir f:0.8 r:0.14 \move (6 1) \fcir f:0.8 r:0.14 \move (6 2) \fcir f:0.8 r:0.14 \move (7 -3) \fcir f:0.8 r:0.14 \move (7 -2) \fcir f:0.8 r:0.14 \move (7 -1) \fcir f:0.8 r:0.14 \move (7 0) \fcir f:0.8 r:0.14 \move (7 1) \fcir f:0.8 r:0.14 \move (7 2) \fcir f:0.8 r:0.14 \move (8 -3) \fcir f:0.8 r:0.14 \move (8 -2) \fcir f:0.8 r:0.14 \move (8 -1) \fcir f:0.8 r:0.14 \move (8 0) \fcir f:0.8 r:0.14 \move (8 1) \fcir f:0.8 r:0.14 \move (8 2) \fcir f:0.8 r:0.14 \move (9 -3) \fcir f:0.8 r:0.14 \move (9 -2) \fcir f:0.8 r:0.14 \move (9 -1) \fcir f:0.8 r:0.14 \move (9 0) \fcir f:0.8 r:0.14 \move (9 1) \fcir f:0.8 r:0.14 \move (9 2) \fcir f:0.8 r:0.14 \end{texdraw}\end{center} \subsection{The non-homogeneous tropical curve of the $6_1$ knot} \lbl{sub.61non} The non-homogeneous non-commutative $A$-polynomial $A^{nh}_{6_1}(M,L,q)$ has $346$ terms, and it is given by \cite{GS} (with the notation $A^{nh}_{-2}(E,Q,q)$ where $E=L$, $Q=M$): \begin{center} 
\begin{math} A^{nh}_{6_1}(y,x,1/t)=\tfrac{1}{t^{31}}\cdot x^{2}y^{15}+\tfrac{-1-t}{t^{28}}\cdot x^{3}y^{13}+\tfrac{-1-3\cdot t}{t^{30}}\cdot x^{2}y^{14}+\tfrac{-1}{t^{30}}\cdot xy^{15}+\tfrac{1}{t^{22}}\cdot x^{4}y^{11}+\tfrac{1+3\cdot t+t^{2}}{t^{26}}\cdot x^{3}y^{12}+\tfrac{-1-t+2\cdot t^{2}+2\cdot t^{3}-t^{4}-t^{5}-t^{6}}{t^{30}}\cdot x^{2}y^{13}+\tfrac{2}{t^{29}}\cdot xy^{14}+\tfrac{-1}{t^{18}}\cdot x^{4}y^{10}+\tfrac{1+2\cdot t+2\cdot t^{2}+t^{3}-t^{4}-2\cdot t^{5}+2\cdot t^{6}+t^{7}}{t^{27}}\cdot x^{3}y^{11}+\tfrac{1+4\cdot t+3\cdot t^{2}-t^{3}+2\cdot t^{5}+4\cdot t^{6}+3\cdot t^{7}}{t^{29}}\cdot x^{2}y^{12}+\tfrac{1-t-t^{2}+t^{4}+t^{5}+t^{6}}{t^{29}}\cdot xy^{13}+\tfrac{-1-t-t^{2}}{t^{21}}\cdot x^{4}y^{9}+\tfrac{-1-4\cdot t-4\cdot t^{2}-3\cdot t^{3}-2\cdot t^{6}-3\cdot t^{7}-t^{8}}{t^{25}}\cdot x^{3}y^{10}+\tfrac{1-2\cdot t-3\cdot t^{2}+3\cdot t^{4}+3\cdot t^{5}+t^{6}-4\cdot t^{7}-t^{8}+t^{9}+t^{10}}{t^{28}}\cdot x^{2}y^{11}+\tfrac{-2+2\cdot t^{2}-2\cdot t^{4}-2\cdot t^{5}-2\cdot t^{6}}{t^{28}}\cdot xy^{12}+\tfrac{1+t+t^{2}}{t^{17}}\cdot x^{4}y^{8}+\tfrac{-1-2\cdot t-2\cdot t^{2}+3\cdot t^{4}-2\cdot t^{6}-4\cdot t^{7}-t^{8}+3\cdot t^{9}+t^{10}-t^{11}}{t^{25}}\cdot x^{3}y^{9}+\tfrac{-1-3\cdot t+t^{2}-3\cdot t^{4}-6\cdot t^{5}-6\cdot t^{6}-4\cdot t^{7}+t^{8}-t^{9}-2\cdot t^{10}-3\cdot t^{11}}{t^{27}}\cdot x^{2}y^{10}+\tfrac{1+2\cdot t-t^{3}+t^{5}+2\cdot t^{6}+t^{7}-t^{8}-t^{9}-t^{10}}{t^{27}}\cdot xy^{11}+\tfrac{1+t+t^{2}}{t^{19}}\cdot x^{4}y^{7}+\tfrac{1+3\cdot t+3\cdot t^{2}+2\cdot t^{3}+2\cdot t^{5}+5\cdot t^{6}+5\cdot t^{7}+3\cdot t^{8}-t^{10}+t^{11}+t^{12}}{t^{23}}\cdot x^{3}y^{8}+\tfrac{1+t-2\cdot t^{2}-2\cdot t^{3}-2\cdot t^{4}+2\cdot t^{5}-t^{7}-3\cdot t^{8}-3\cdot t^{9}+2\cdot t^{11}-t^{12}}{t^{25}}\cdot x^{2}y^{9}+\tfrac{1-3\cdot t-t^{2}+3\cdot t^{3}+3\cdot t^{4}+t^{5}-2\cdot t^{6}-2\cdot t^{7}+2\cdot t^{8}+2\cdot t^{9}+2\cdot t^{10}}{t^{26}}\cdot xy^{10}+\tfrac{1}{t^{26}}\cdot y^{11}+\tfrac{-1-t-t^{2}}{t^{15}}\cdot x^{4}y^{6}+\tfrac{1+t-t^{2}-2\cdot 
t^{3}+t^{4}+3\cdot t^{5}+4\cdot t^{6}-2\cdot t^{7}-4\cdot t^{8}-2\cdot t^{9}+t^{10}+2\cdot t^{11}-2\cdot t^{13}}{t^{22}}\cdot x^{3}y^{7}+\tfrac{1+2\cdot t+2\cdot t^{2}+3\cdot t^{3}+t^{4}+3\cdot t^{5}+3\cdot t^{6}+3\cdot t^{7}+3\cdot t^{8}+t^{9}+t^{11}}{t^{22}}\cdot x^{2}y^{8}+\tfrac{-2-2\cdot t^{3}-4\cdot t^{4}-4\cdot t^{5}-2\cdot t^{6}+t^{7}-t^{9}-2\cdot t^{10}-t^{11}+t^{13}}{t^{25}}\cdot xy^{9}+\tfrac{-1}{t^{26}}\cdot y^{10}+\tfrac{-1}{t^{16}}\cdot x^{4}y^{5}+\tfrac{-2-2\cdot t^{3}-4\cdot t^{4}-4\cdot t^{5}-2\cdot t^{6}+t^{7}-t^{9}-2\cdot t^{10}-t^{11}+t^{13}}{t^{19}}\cdot x^{3}y^{6}+\tfrac{1+2\cdot t+2\cdot t^{2}+3\cdot t^{3}+t^{4}+3\cdot t^{5}+3\cdot t^{6}+3\cdot t^{7}+3\cdot t^{8}+t^{9}+t^{11}}{t^{20}}\cdot x^{2}y^{7}+\tfrac{1+t-t^{2}-2\cdot t^{3}+t^{4}+3\cdot t^{5}+4\cdot t^{6}-2\cdot t^{7}-4\cdot t^{8}-2\cdot t^{9}+t^{10}+2\cdot t^{11}-2\cdot t^{13}}{t^{24}}\cdot xy^{8}+\tfrac{-1-t-t^{2}}{t^{21}}\cdot y^{9}+\tfrac{1}{t^{12}}\cdot x^{4}y^{4}+\tfrac{1-3\cdot t-t^{2}+3\cdot t^{3}+3\cdot t^{4}+t^{5}-2\cdot t^{6}-2\cdot t^{7}+2\cdot t^{8}+2\cdot t^{9}+2\cdot t^{10}}{t^{16}}\cdot x^{3}y^{5}+\tfrac{1+t-2\cdot t^{2}-2\cdot t^{3}-2\cdot t^{4}+2\cdot t^{5}-t^{7}-3\cdot t^{8}-3\cdot t^{9}+2\cdot t^{11}-t^{12}}{t^{19}}\cdot x^{2}y^{6}+\tfrac{1+3\cdot t+3\cdot t^{2}+2\cdot t^{3}+2\cdot t^{5}+5\cdot t^{6}+5\cdot t^{7}+3\cdot t^{8}-t^{10}+t^{11}+t^{12}}{t^{21}}\cdot xy^{7}+\tfrac{1+t+t^{2}}{t^{21}}\cdot y^{8}+\tfrac{1+2\cdot t-t^{3}+t^{5}+2\cdot t^{6}+t^{7}-t^{8}-t^{9}-t^{10}}{t^{13}}\cdot x^{3}y^{4}+\tfrac{-1-3\cdot t+t^{2}-3\cdot t^{4}-6\cdot t^{5}-6\cdot t^{6}-4\cdot t^{7}+t^{8}-t^{9}-2\cdot t^{10}-3\cdot t^{11}}{t^{17}}\cdot x^{2}y^{5}+\tfrac{-1-2\cdot t-2\cdot t^{2}+3\cdot t^{4}-2\cdot t^{6}-4\cdot t^{7}-t^{8}+3\cdot t^{9}+t^{10}-t^{11}}{t^{19}}\cdot xy^{6}+\tfrac{1+t+t^{2}}{t^{15}}\cdot y^{7}+\tfrac{-2+2\cdot t^{2}-2\cdot t^{4}-2\cdot t^{5}-2\cdot t^{6}}{t^{10}}\cdot x^{3}y^{3}+\tfrac{1-2\cdot t-3\cdot t^{2}+3\cdot t^{4}+3\cdot t^{5}+t^{6}-4\cdot 
t^{7}-t^{8}+t^{9}+t^{10}}{t^{14}}\cdot x^{2}y^{4}+\tfrac{-1-4\cdot t-4\cdot t^{2}-3\cdot t^{3}-2\cdot t^{6}-3\cdot t^{7}-t^{8}}{t^{15}}\cdot xy^{5}+\tfrac{-1-t-t^{2}}{t^{15}}\cdot y^{6}+\tfrac{1-t-t^{2}+t^{4}+t^{5}+t^{6}}{t^{7}}\cdot x^{3}y^{2}+\tfrac{1+4\cdot t+3\cdot t^{2}-t^{3}+2\cdot t^{5}+4\cdot t^{6}+3\cdot t^{7}}{t^{11}}\cdot x^{2}y^{3}+\tfrac{1+2\cdot t+2\cdot t^{2}+t^{3}-t^{4}-2\cdot t^{5}+2\cdot t^{6}+t^{7}}{t^{13}}\cdot xy^{4}+\tfrac{-1}{t^{8}}\cdot y^{5}+\tfrac{2}{t^{3}}\cdot x^{3}y+\tfrac{-1-t+2\cdot t^{2}+2\cdot t^{3}-t^{4}-t^{5}-t^{6}}{t^{8}}\cdot x^{2}y^{2}+\tfrac{1+3\cdot t+t^{2}}{t^{8}}\cdot xy^{3}+\tfrac{1}{t^{8}}\cdot y^{4}-x^{3}+\tfrac{-1-3\cdot t}{t^{4}}\cdot x^{2}y+\tfrac{-1-t}{t^{6}}\cdot xy^{2}+\tfrac{1}{t}\cdot x^{2} \end{math} \end{center} \vspace*{0.5cm} The vertices of the tropical curve are: \begin{displaymath} (2,-1/2),\;\;(-1,-1/2),\;\;(5,-1/2),\;\;(-3/2,-1/2),\;\;(-4,0),\;\;(1,-1),\;\;(-1/2,-1),\;\; \end{displaymath} \begin{displaymath} (-1,-2/3),\;\;(4,-1),\;\;(1/2,-3/2),\;\;(3,-3/2),\;\;(1/5,-8/5),\;\;(-1/2,-5/4),\;\; \end{displaymath} \begin{displaymath} (1/2,-11/4),\;\;(-1/5,-12/5),\;\;(-3,-5/2),\;\;(4,-4),\;\;(1/2,-3),\;\;(1,-10/3),\;\;(3/2,-7/2),\;\; \end{displaymath} \begin{displaymath} (-1/2,-5/2),\;\;(-4,-3),\;\;(-1,-3),\;\;(-5,-7/2),\;\;(1,-7/2),\;\;(-2,-7/2) \end{displaymath} \vspace*{0.5cm} The tropical curve is: \begin{center} \begin{texdraw} \drawdim cm \relunitscale 0.7 \arrowheadtype t:V \linewd 0.1 \lpatt (1 0) \setgray 0.6 \relunitscale 1.2 \move (2 1.5) \fcir f:0 r:0.16 \move (2 1.5) \lvec (-1 1.5) \htext (0.5 0.5){$2$} \move (2 1.5) \lvec (5 1.5) \htext (3.5 0.5){$2$} \move (2 1.5) \lvec (1 1) \move (2 1.5) \rlvec (2.5 1.25) \move (-1 1.5) \fcir f:0 r:0.16 \move (-1 1.5) \lvec (-1.5 1.5) \htext (-1.25 0.5){$2$} \move (-1 1.5) \lvec (-1 1.33) \move (-1 1.5) \rlvec (0 2.5) \move (5 1.5) \fcir f:0 r:0.16 \move (5 1.5) \lvec (4 1) \move (5 1.5) \rlvec (2.5 1.25) \move (5 1.5) \rlvec (2.5 0) \htext (5.83 1.5){$2$} 
\move (-1.5 1.5) \fcir f:0 r:0.16 \move (-1.5 1.5) \lvec (-4 2) \move (-1.5 1.5) \lvec (-1 1.33) \move (-4 2) \fcir f:0 r:0.16 \move (-4 2) \rlvec (-2.5 0) \move (-4 2) \rlvec (-2.5 0.625) \move (1 1) \fcir f:0 r:0.16 \move (1 1) \lvec (-0.5 1) \htext (0.25 0){$3$} \move (1 1) \lvec (4 1) \htext (2.5 0){$2$} \move (1 1) \lvec (0.5 0.5) \move (-0.5 1) \fcir f:0 r:0.16 \move (-0.5 1) \lvec (-1 1.33) \move (-0.5 1) \lvec (-0.5 0.75) \htext (-0.5 -0.12){$2$} \move (-1 1.33) \fcir f:0 r:0.16 \move (4 1) \fcir f:0 r:0.16 \move (4 1) \lvec (3 0.5) \move (4 1) \rlvec (2.5 0) \htext (4.83 1){$2$} \move (0.5 0.5) \fcir f:0 r:0.16 \move (0.5 0.5) \lvec (3 0.5) \htext (1.75 -0.5){$2$} \move (0.5 0.5) \lvec (0.2 0.4) \move (3 0.5) \fcir f:0 r:0.16 \move (3 0.5) \lvec (0.5 -0.75) \move (3 0.5) \rlvec (2.5 0) \htext (3.83 0.5){$2$} \move (0.2 0.4) \fcir f:0 r:0.16 \move (0.2 0.4) \lvec (-0.5 0.75) \move (0.2 0.4) \lvec (-0.2 -0.4) \move (-0.5 0.75) \fcir f:0 r:0.16 \move (-0.5 0.75) \lvec (-3 -0.5) \move (0.5 -0.75) \fcir f:0 r:0.16 \move (0.5 -0.75) \lvec (-0.2 -0.4) \move (0.5 -0.75) \lvec (0.5 -1) \htext (0.5 -1.87){$2$} \move (-0.2 -0.4) \fcir f:0 r:0.16 \move (-0.2 -0.4) \lvec (-0.5 -0.5) \move (-3 -0.5) \fcir f:0 r:0.16 \move (-3 -0.5) \lvec (-0.5 -0.5) \htext (-1.75 -1.5){$2$} \move (-3 -0.5) \lvec (-4 -1) \move (-3 -0.5) \rlvec (-2.5 0) \htext (-3.83 -0.5){$2$} \move (4 -2) \fcir f:0 r:0.16 \move (4 -2) \lvec (1.5 -1.5) \move (4 -2) \rlvec (2.5 0) \move (4 -2) \rlvec (2.5 -0.625) \move (0.5 -1) \fcir f:0 r:0.16 \move (0.5 -1) \lvec (1 -1.33) \move (0.5 -1) \lvec (-1 -1) \htext (-0.25 -2){$3$} \move (1 -1.33) \fcir f:0 r:0.16 \move (1 -1.33) \lvec (1.5 -1.5) \move (1 -1.33) \lvec (1 -1.5) \move (1.5 -1.5) \fcir f:0 r:0.16 \move (1.5 -1.5) \lvec (1 -1.5) \htext (1.25 -2.5){$2$} \move (-0.5 -0.5) \fcir f:0 r:0.16 \move (-0.5 -0.5) \lvec (-1 -1) \move (-4 -1) \fcir f:0 r:0.16 \move (-4 -1) \lvec (-1 -1) \htext (-2.5 -2){$2$} \move (-4 -1) \lvec (-5 -1.5) \move (-4 -1) \rlvec 
(-2.5 0) \htext (-4.83 -1){$2$} \move (-1 -1) \fcir f:0 r:0.16 \move (-1 -1) \lvec (-2 -1.5) \move (-5 -1.5) \fcir f:0 r:0.16 \move (-5 -1.5) \lvec (-2 -1.5) \htext (-3.5 -2.5){$2$} \move (-5 -1.5) \rlvec (-2.5 -1.25) \move (-5 -1.5) \rlvec (-2.5 0) \htext (-5.83 -1.5){$2$} \move (1 -1.5) \fcir f:0 r:0.16 \move (1 -1.5) \lvec (-2 -1.5) \htext (-0.5 -2.5){$2$} \move (1 -1.5) \rlvec (0 -2.5) \move (-2 -1.5) \fcir f:0 r:0.16 \move (-2 -1.5) \rlvec (-2.5 -1.25) \move (-6 -3) \fcir f:0.8 r:0.08 \move (-6 -2) \fcir f:0.8 r:0.08 \move (-6 -1) \fcir f:0.8 r:0.08 \move (-6 0) \fcir f:0.8 r:0.08 \move (-6 1) \fcir f:0.8 r:0.08 \move (-6 2) \fcir f:0.8 r:0.08 \move (-6 3) \fcir f:0.8 r:0.08 \move (-5 -3) \fcir f:0.8 r:0.08 \move (-5 -2) \fcir f:0.8 r:0.08 \move (-5 -1) \fcir f:0.8 r:0.08 \move (-5 0) \fcir f:0.8 r:0.08 \move (-5 1) \fcir f:0.8 r:0.08 \move (-5 2) \fcir f:0.8 r:0.08 \move (-5 3) \fcir f:0.8 r:0.08 \move (-4 -3) \fcir f:0.8 r:0.08 \move (-4 -2) \fcir f:0.8 r:0.08 \move (-4 -1) \fcir f:0.8 r:0.08 \move (-4 0) \fcir f:0.8 r:0.08 \move (-4 1) \fcir f:0.8 r:0.08 \move (-4 2) \fcir f:0.8 r:0.08 \move (-4 3) \fcir f:0.8 r:0.08 \move (-3 -3) \fcir f:0.8 r:0.08 \move (-3 -2) \fcir f:0.8 r:0.08 \move (-3 -1) \fcir f:0.8 r:0.08 \move (-3 0) \fcir f:0.8 r:0.08 \move (-3 1) \fcir f:0.8 r:0.08 \move (-3 2) \fcir f:0.8 r:0.08 \move (-3 3) \fcir f:0.8 r:0.08 \move (-2 -3) \fcir f:0.8 r:0.08 \move (-2 -2) \fcir f:0.8 r:0.08 \move (-2 -1) \fcir f:0.8 r:0.08 \move (-2 0) \fcir f:0.8 r:0.08 \move (-2 1) \fcir f:0.8 r:0.08 \move (-2 2) \fcir f:0.8 r:0.08 \move (-2 3) \fcir f:0.8 r:0.08 \move (-1 -3) \fcir f:0.8 r:0.08 \move (-1 -2) \fcir f:0.8 r:0.08 \move (-1 -1) \fcir f:0.8 r:0.08 \move (-1 0) \fcir f:0.8 r:0.08 \move (-1 1) \fcir f:0.8 r:0.08 \move (-1 2) \fcir f:0.8 r:0.08 \move (-1 3) \fcir f:0.8 r:0.08 \move (0 -3) \fcir f:0.8 r:0.08 \move (0 -2) \fcir f:0.8 r:0.08 \move (0 -1) \fcir f:0.8 r:0.08 \move (0 0) \fcir f:0.8 r:0.08 \move (0 1) \fcir f:0.8 r:0.08 \move (0 2) \fcir 
f:0.8 r:0.08 \move (0 3) \fcir f:0.8 r:0.08 \move (1 -3) \fcir f:0.8 r:0.08 \move (1 -2) \fcir f:0.8 r:0.08 \move (1 -1) \fcir f:0.8 r:0.08 \move (1 0) \fcir f:0.8 r:0.08 \move (1 1) \fcir f:0.8 r:0.08 \move (1 2) \fcir f:0.8 r:0.08 \move (1 3) \fcir f:0.8 r:0.08 \move (2 -3) \fcir f:0.8 r:0.08 \move (2 -2) \fcir f:0.8 r:0.08 \move (2 -1) \fcir f:0.8 r:0.08 \move (2 0) \fcir f:0.8 r:0.08 \move (2 1) \fcir f:0.8 r:0.08 \move (2 2) \fcir f:0.8 r:0.08 \move (2 3) \fcir f:0.8 r:0.08 \move (3 -3) \fcir f:0.8 r:0.08 \move (3 -2) \fcir f:0.8 r:0.08 \move (3 -1) \fcir f:0.8 r:0.08 \move (3 0) \fcir f:0.8 r:0.08 \move (3 1) \fcir f:0.8 r:0.08 \move (3 2) \fcir f:0.8 r:0.08 \move (3 3) \fcir f:0.8 r:0.08 \move (4 -3) \fcir f:0.8 r:0.08 \move (4 -2) \fcir f:0.8 r:0.08 \move (4 -1) \fcir f:0.8 r:0.08 \move (4 0) \fcir f:0.8 r:0.08 \move (4 1) \fcir f:0.8 r:0.08 \move (4 2) \fcir f:0.8 r:0.08 \move (4 3) \fcir f:0.8 r:0.08 \move (5 -3) \fcir f:0.8 r:0.08 \move (5 -2) \fcir f:0.8 r:0.08 \move (5 -1) \fcir f:0.8 r:0.08 \move (5 0) \fcir f:0.8 r:0.08 \move (5 1) \fcir f:0.8 r:0.08 \move (5 2) \fcir f:0.8 r:0.08 \move (5 3) \fcir f:0.8 r:0.08 \move (6 -3) \fcir f:0.8 r:0.08 \move (6 -2) \fcir f:0.8 r:0.08 \move (6 -1) \fcir f:0.8 r:0.08 \move (6 0) \fcir f:0.8 r:0.08 \move (6 1) \fcir f:0.8 r:0.08 \move (6 2) \fcir f:0.8 r:0.08 \move (6 3) \fcir f:0.8 r:0.08 \end{texdraw}\end{center} \vspace*{0.5cm} The Newton subdivision of the tropical curve is: \vspace*{0.5cm} \begin{center} \begin{texdraw} \drawdim cm \relunitscale 0.5 \linewd 0.05 \relunitscale0.8 \move (2 15) \lvec (4 11) \move (4 11) \lvec (4 4) \move (4 4) \lvec (3 0) \move (3 0) \lvec (2 0) \move (2 0) \lvec (0 4) \move (0 4) \lvec (0 11) \move (0 11) \lvec (1 15) \move (1 15) \lvec (2 15) \move (2 13) \lvec (2 15) \move (3 13) \lvec (3 11) \move (3 11) \lvec (2 13) \move (1 13) \lvec (1 15) \move (2 13) \lvec (1 13) \move (4 9) \lvec (3 11) \move (0 10) \lvec (1 15) \move (1 13) \lvec (0 10) \move (2 12) \lvec (2 13) 
\move (2 11) \lvec (2 12) \move (2 10) \lvec (2 11) \move (3 11) \lvec (3 9) \move (3 9) \lvec (2 10) \move (0 10) \lvec (2 13) \move (2 10) \lvec (0 10) \move (4 7) \lvec (3 9) \move (3 9) \lvec (3 7) \move (3 7) \lvec (2 10) \move (4 5) \lvec (3 7) \move (1 8) \lvec (2 10) \move (3 7) \lvec (1 8) \move (1 8) \lvec (0 10) \move (2 5) \lvec (3 7) \move (4 5) \lvec (2 5) \move (2 5) \lvec (1 8) \move (1 8) \lvec (1 6) \move (1 6) \lvec (0 8) \move (3 0) \lvec (4 5) \move (4 5) \lvec (2 2) \move (2 4) \lvec (2 5) \move (2 3) \lvec (2 4) \move (2 2) \lvec (2 3) \move (4 5) \lvec (3 2) \move (3 2) \lvec (2 2) \move (3 0) \lvec (3 2) \move (2 5) \lvec (1 6) \move (1 6) \lvec (1 4) \move (1 4) \lvec (0 6) \move (2 2) \lvec (1 4) \move (1 4) \lvec (1 2) \move (2 0) \lvec (2 2) \move (0 0) \fcir f:0.6 r:0.25 \move (0 1) \fcir f:0.6 r:0.25 \move (0 2) \fcir f:0.6 r:0.25 \move (0 3) \fcir f:0.6 r:0.25 \move (0 4) \fcir f:0.6 r:0.25 \move (0 5) \fcir f:0.6 r:0.25 \move (0 6) \fcir f:0.6 r:0.25 \move (0 7) \fcir f:0.6 r:0.25 \move (0 8) \fcir f:0.6 r:0.25 \move (0 9) \fcir f:0.6 r:0.25 \move (0 10) \fcir f:0.6 r:0.25 \move (0 11) \fcir f:0.6 r:0.25 \move (0 12) \fcir f:0.6 r:0.25 \move (0 13) \fcir f:0.6 r:0.25 \move (0 14) \fcir f:0.6 r:0.25 \move (0 15) \fcir f:0.6 r:0.25 \move (1 0) \fcir f:0.6 r:0.25 \move (1 1) \fcir f:0.6 r:0.25 \move (1 2) \fcir f:0.6 r:0.25 \move (1 3) \fcir f:0.6 r:0.25 \move (1 4) \fcir f:0.6 r:0.25 \move (1 5) \fcir f:0.6 r:0.25 \move (1 6) \fcir f:0.6 r:0.25 \move (1 7) \fcir f:0.6 r:0.25 \move (1 8) \fcir f:0.6 r:0.25 \move (1 9) \fcir f:0.6 r:0.25 \move (1 10) \fcir f:0.6 r:0.25 \move (1 11) \fcir f:0.6 r:0.25 \move (1 12) \fcir f:0.6 r:0.25 \move (1 13) \fcir f:0.6 r:0.25 \move (1 14) \fcir f:0.6 r:0.25 \move (1 15) \fcir f:0.6 r:0.25 \move (2 0) \fcir f:0.6 r:0.25 \move (2 1) \fcir f:0.6 r:0.25 \move (2 2) \fcir f:0.6 r:0.25 \move (2 3) \fcir f:0.6 r:0.25 \move (2 4) \fcir f:0.6 r:0.25 \move (2 5) \fcir f:0.6 r:0.25 \move (2 6) \fcir f:0.6 
r:0.25 \move (2 7) \fcir f:0.6 r:0.25 \move (2 8) \fcir f:0.6 r:0.25 \move (2 9) \fcir f:0.6 r:0.25 \move (2 10) \fcir f:0.6 r:0.25 \move (2 11) \fcir f:0.6 r:0.25 \move (2 12) \fcir f:0.6 r:0.25 \move (2 13) \fcir f:0.6 r:0.25 \move (2 14) \fcir f:0.6 r:0.25 \move (2 15) \fcir f:0.6 r:0.25 \move (3 0) \fcir f:0.6 r:0.25 \move (3 1) \fcir f:0.6 r:0.25 \move (3 2) \fcir f:0.6 r:0.25 \move (3 3) \fcir f:0.6 r:0.25 \move (3 4) \fcir f:0.6 r:0.25 \move (3 5) \fcir f:0.6 r:0.25 \move (3 6) \fcir f:0.6 r:0.25 \move (3 7) \fcir f:0.6 r:0.25 \move (3 8) \fcir f:0.6 r:0.25 \move (3 9) \fcir f:0.6 r:0.25 \move (3 10) \fcir f:0.6 r:0.25 \move (3 11) \fcir f:0.6 r:0.25 \move (3 12) \fcir f:0.6 r:0.25 \move (3 13) \fcir f:0.6 r:0.25 \move (3 14) \fcir f:0.6 r:0.25 \move (3 15) \fcir f:0.6 r:0.25 \move (4 0) \fcir f:0.6 r:0.25 \move (4 1) \fcir f:0.6 r:0.25 \move (4 2) \fcir f:0.6 r:0.25 \move (4 3) \fcir f:0.6 r:0.25 \move (4 4) \fcir f:0.6 r:0.25 \move (4 5) \fcir f:0.6 r:0.25 \move (4 6) \fcir f:0.6 r:0.25 \move (4 7) \fcir f:0.6 r:0.25 \move (4 8) \fcir f:0.6 r:0.25 \move (4 9) \fcir f:0.6 r:0.25 \move (4 10) \fcir f:0.6 r:0.25 \move (4 11) \fcir f:0.6 r:0.25 \move (4 12) \fcir f:0.6 r:0.25 \move (4 13) \fcir f:0.6 r:0.25 \move (4 14) \fcir f:0.6 r:0.25 \move (4 15) \fcir f:0.6 r:0.25 \move (2 15) \fcir f:0 r:0.31 \move (3 13) \fcir f:0 r:0.31 \move (2 13) \fcir f:0 r:0.31 \move (3 11) \fcir f:0 r:0.31 \move (1 15) \fcir f:0 r:0.31 \move (1 13) \fcir f:0 r:0.31 \move (4 11) \fcir f:0 r:0.31 \move (4 9) \fcir f:0 r:0.31 \move (0 10) \fcir f:0 r:0.31 \move (0 11) \fcir f:0 r:0.31 \move (2 12) \fcir f:0 r:0.31 \move (2 11) \fcir f:0 r:0.31 \move (3 9) \fcir f:0 r:0.31 \move (2 10) \fcir f:0 r:0.31 \move (4 7) \fcir f:0 r:0.31 \move (3 7) \fcir f:0 r:0.31 \move (4 5) \fcir f:0 r:0.31 \move (1 8) \fcir f:0 r:0.31 \move (2 5) \fcir f:0 r:0.31 \move (0 8) \fcir f:0 r:0.31 \move (1 6) \fcir f:0 r:0.31 \move (4 4) \fcir f:0 r:0.31 \move (3 0) \fcir f:0 r:0.31 \move (2 4) \fcir f:0 
r:0.31 \move (2 3) \fcir f:0 r:0.31 \move (2 2) \fcir f:0 r:0.31 \move (3 2) \fcir f:0 r:0.31 \move (0 6) \fcir f:0 r:0.31 \move (1 4) \fcir f:0 r:0.31 \move (0 4) \fcir f:0 r:0.31 \move (1 2) \fcir f:0 r:0.31 \move (2 0) \fcir f:0 r:0.31 \end{texdraw} \end{center} \subsection{The non-homogeneous tropical curve of the $8_1$ knot} \lbl{sub.81non} The non-homogeneous non-commutative $A$-polynomial $A^{nh}_{8_1}(M,L,q)$ has $2112$ terms, which we do not present here. The vertices of the tropical curve are: \begin{displaymath} (3,-1/2),\;\;(-1,-1/2),\;\;(6,-1/2),\;\;(-2,-1/2),\;\;(9,-1/2),\;\;(2,-1),\;\;(-1,-1),\;\;(-5/2,-1/2),\;\; \end{displaymath} \begin{displaymath} (-6,0),\;\;(5,-1),\;\;(-2,-3/5),\;\;(8,-1),\;\;(3/2,-3/2),\;\;(4,-3/2),\;\;(-1/2,-3/2),\;\;(-3/4,-11/8),\;\; \end{displaymath} \begin{displaymath} (7,-3/2),\;\;(1,-2),\;\;(3,-2),\;\;(0,-2),\;\;(6,-2),\;\;(0,-5/2),\;\;(5/2,-5/2),\;\;(5,-5/2),\;\;(1,-3),\;\;(0,-3),\;\; \end{displaymath} \begin{displaymath} (-5,-7/2),\;\;(0,-7/2),\;\;(-1,-3),\;\;(-5/2,-7/2),\;\;(3/4,-37/8),\;\;(0,-4),\;\;(1/2,-9/2),\;\;(-6,-4),\;\; \end{displaymath} \begin{displaymath} (6,-6),\;\;(1,-5),\;\;(2,-27/5),\;\;(5/2,-11/2),\;\;(-3,-4),\;\;(-1,-4),\;\;(-7,-9/2),\;\;(-4,-9/2),\;\; \end{displaymath} \begin{displaymath} (-3/2,-9/2),\;\;(-8,-5),\;\;(-5,-5),\;\;(-2,-5),\;\;(-9,-11/2),\;\;(2,-11/2),\;\;(-6,-11/2),\;\;(1,-11/2),\;\; \end{displaymath} \begin{displaymath} (-3,-11/2) \end{displaymath} \vspace*{0.5cm} The tropical curve is: \begin{center} \begin{texdraw} \drawdim cm \relunitscale 0.6 \arrowheadtype t:V \linewd 0.1 \lpatt (1 0) \setgray 0.6 \relunitscale 0.66 \move (3 2.5) \fcir f:0 r:0.3 \move (3 2.5) \lvec (-1 2.5) \htext (1 1){$2$} \move (3 2.5) \lvec (6 2.5) \htext (4.5 1){$2$} \move (3 2.5) \lvec (2 2) \move (3 2.5) \rlvec (2.5 1.25) \move (-1 2.5) \fcir f:0 r:0.3 \move (-1 2.5) \lvec (-2 2.5) \htext (-1.5 1){$2$} \move (-1 2.5) \lvec (-1 2) \move (-1 2.5) \rlvec (0 2.5) \move (6 2.5) \fcir f:0 r:0.3 \move (6 2.5) \lvec (9
2.5) \htext (7.5 1){$2$} \move (6 2.5) \lvec (5 2) \move (6 2.5) \rlvec (2.5 1.25) \move (-2 2.5) \fcir f:0 r:0.3 \move (-2 2.5) \lvec (-2.5 2.5) \htext (-2.25 1){$2$} \move (-2 2.5) \lvec (-2 2.4) \move (-2 2.5) \rlvec (0 2.5) \move (9 2.5) \fcir f:0 r:0.3 \move (9 2.5) \lvec (8 2) \move (9 2.5) \rlvec (2.5 1.25) \move (9 2.5) \rlvec (2.5 0) \htext (10.5 2.5){$2$} \move (2 2) \fcir f:0 r:0.3 \move (2 2) \lvec (-1 2) \htext (0.5 0.5){$3$} \move (2 2) \lvec (5 2) \htext (3.5 0.5){$2$} \move (2 2) \lvec (1.5 1.5) \move (-1 2) \fcir f:0 r:0.3 \move (-1 2) \lvec (-2 2.4) \move (-1 2) \lvec (-0.75 1.62) \move (-2.5 2.5) \fcir f:0 r:0.3 \move (-2.5 2.5) \lvec (-6 3) \move (-2.5 2.5) \lvec (-2 2.4) \move (-6 3) \fcir f:0 r:0.3 \move (-6 3) \rlvec (-2.5 0) \move (-6 3) \rlvec (-2.5 0.41666666666666666666666666666666666666666666666667) \move (5 2) \fcir f:0 r:0.3 \move (5 2) \lvec (8 2) \htext (6.5 0.5){$2$} \move (5 2) \lvec (4 1.5) \move (-2 2.4) \fcir f:0 r:0.3 \move (8 2) \fcir f:0 r:0.3 \move (8 2) \lvec (7 1.5) \move (8 2) \rlvec (2.5 0) \htext (9.5 2){$2$} \move (1.5 1.5) \fcir f:0 r:0.3 \move (1.5 1.5) \lvec (4 1.5) \htext (2.75 0){$2$} \move (1.5 1.5) \lvec (-0.5 1.5) \htext (0.5 0){$2$} \move (1.5 1.5) \lvec (1 1) \move (4 1.5) \fcir f:0 r:0.3 \move (4 1.5) \lvec (7 1.5) \htext (5.5 0){$2$} \move (4 1.5) \lvec (3 1) \move (-0.5 1.5) \fcir f:0 r:0.3 \move (-0.5 1.5) \lvec (-0.75 1.62) \htext (-0.62 0.06){$2$} \move (-0.5 1.5) \lvec (0 1) \htext (-0.25 -0.25){$2$} \move (-0.75 1.62) \fcir f:0 r:0.3 \move (-0.75 1.62) \lvec (-5 -0.5) \move (7 1.5) \fcir f:0 r:0.3 \move (7 1.5) \lvec (6 1) \move (7 1.5) \rlvec (2.5 0) \htext (8.5 1.5){$2$} \move (1 1) \fcir f:0 r:0.3 \move (1 1) \lvec (3 1) \htext (2 -0.5){$3$} \move (1 1) \lvec (0 1) \htext (0.5 -0.5){$2$} \move (1 1) \lvec (0 0.5) \move (3 1) \fcir f:0 r:0.3 \move (3 1) \lvec (6 1) \htext (4.5 -0.5){$2$} \move (3 1) \lvec (2.5 0.5) \move (0 1) \fcir f:0 r:0.3 \move (0 1) \lvec (0 0.5) \htext (0 -0.75){$2$} \move (6 
1) \fcir f:0 r:0.3 \move (6 1) \lvec (5 0.5) \move (6 1) \rlvec (2.5 0) \htext (7.5 1){$2$} \move (0 0.5) \fcir f:0 r:0.3 \move (0 0.5) \lvec (0 0) \htext (0 -1.25){$2$} \move (0 0.5) \lvec (-1 0) \move (2.5 0.5) \fcir f:0 r:0.3 \move (2.5 0.5) \lvec (5 0.5) \htext (3.75 -1){$2$} \move (2.5 0.5) \lvec (1 0) \move (5 0.5) \fcir f:0 r:0.3 \move (5 0.5) \lvec (0.75 -1.62) \move (5 0.5) \rlvec (2.5 0) \htext (6.5 0.5){$2$} \move (1 0) \fcir f:0 r:0.3 \move (1 0) \lvec (0 0) \move (1 0) \lvec (0 -0.5) \move (0 0) \fcir f:0 r:0.3 \move (0 0) \lvec (0 -0.5) \htext (0 -1.75){$2$} \move (0 0) \lvec (-1 0) \move (-5 -0.5) \fcir f:0 r:0.3 \move (-5 -0.5) \lvec (-2.5 -0.5) \htext (-3.75 -2){$2$} \move (-5 -0.5) \lvec (-6 -1) \move (-5 -0.5) \rlvec (-2.5 0) \htext (-6.5 -0.5){$2$} \move (0 -0.5) \fcir f:0 r:0.3 \move (0 -0.5) \lvec (0 -1) \htext (0 -2.25){$2$} \move (0 -0.5) \lvec (-1 -1) \move (-1 0) \fcir f:0 r:0.3 \move (-1 0) \lvec (-2.5 -0.5) \move (-2.5 -0.5) \fcir f:0 r:0.3 \move (-2.5 -0.5) \lvec (-3 -1) \move (0.75 -1.62) \fcir f:0 r:0.3 \move (0.75 -1.62) \lvec (0.5 -1.5) \htext (0.62 -3.06){$2$} \move (0.75 -1.62) \lvec (1 -2) \move (0 -1) \fcir f:0 r:0.3 \move (0 -1) \lvec (0.5 -1.5) \htext (0.25 -2.75){$2$} \move (0 -1) \lvec (-1 -1) \htext (-0.5 -2.5){$2$} \move (0.5 -1.5) \fcir f:0 r:0.3 \move (0.5 -1.5) \lvec (-1.5 -1.5) \htext (-0.5 -3){$2$} \move (-6 -1) \fcir f:0 r:0.3 \move (-6 -1) \lvec (-3 -1) \htext (-4.5 -2.5){$2$} \move (-6 -1) \lvec (-7 -1.5) \move (-6 -1) \rlvec (-2.5 0) \htext (-7.5 -1){$2$} \move (6 -3) \fcir f:0 r:0.3 \move (6 -3) \lvec (2.5 -2.5) \move (6 -3) \rlvec (2.5 0) \move (6 -3) \rlvec (2.5 -0.41666666666666666666666666666666666666666666666667) \move (1 -2) \fcir f:0 r:0.3 \move (1 -2) \lvec (2 -2.4) \move (1 -2) \lvec (-2 -2) \htext (-0.5 -3.5){$3$} \move (1 -2) \lvec (1 -2.5) \move (2 -2.4) \fcir f:0 r:0.3 \move (2 -2.4) \lvec (2.5 -2.5) \move (2 -2.4) \lvec (2 -2.5) \move (2.5 -2.5) \fcir f:0 r:0.3 \move (2.5 -2.5) \lvec (2 -2.5) \htext 
(2.25 -4){$2$} \move (-3 -1) \fcir f:0 r:0.3 \move (-3 -1) \lvec (-1 -1) \htext (-2 -2.5){$3$} \move (-3 -1) \lvec (-4 -1.5) \move (-1 -1) \fcir f:0 r:0.3 \move (-1 -1) \lvec (-1.5 -1.5) \move (-7 -1.5) \fcir f:0 r:0.3 \move (-7 -1.5) \lvec (-4 -1.5) \htext (-5.5 -3){$2$} \move (-7 -1.5) \lvec (-8 -2) \move (-7 -1.5) \rlvec (-2.5 0) \htext (-8.5 -1.5){$2$} \move (-4 -1.5) \fcir f:0 r:0.3 \move (-4 -1.5) \lvec (-1.5 -1.5) \htext (-2.75 -3){$2$} \move (-4 -1.5) \lvec (-5 -2) \move (-1.5 -1.5) \fcir f:0 r:0.3 \move (-1.5 -1.5) \lvec (-2 -2) \move (-8 -2) \fcir f:0 r:0.3 \move (-8 -2) \lvec (-5 -2) \htext (-6.5 -3.5){$2$} \move (-8 -2) \lvec (-9 -2.5) \move (-8 -2) \rlvec (-2.5 0) \htext (-9.5 -2){$2$} \move (-5 -2) \fcir f:0 r:0.3 \move (-5 -2) \lvec (-2 -2) \htext (-3.5 -3.5){$2$} \move (-5 -2) \lvec (-6 -2.5) \move (-2 -2) \fcir f:0 r:0.3 \move (-2 -2) \lvec (-3 -2.5) \move (-9 -2.5) \fcir f:0 r:0.3 \move (-9 -2.5) \lvec (-6 -2.5) \htext (-7.5 -4){$2$} \move (-9 -2.5) \rlvec (-2.5 -1.25) \move (-9 -2.5) \rlvec (-2.5 0) \htext (-10.5 -2.5){$2$} \move (2 -2.5) \fcir f:0 r:0.3 \move (2 -2.5) \lvec (1 -2.5) \htext (1.5 -4){$2$} \move (2 -2.5) \rlvec (0 -2.5) \move (-6 -2.5) \fcir f:0 r:0.3 \move (-6 -2.5) \lvec (-3 -2.5) \htext (-4.5 -4){$2$} \move (-6 -2.5) \rlvec (-2.5 -1.25) \move (1 -2.5) \fcir f:0 r:0.3 \move (1 -2.5) \lvec (-3 -2.5) \htext (-1 -4){$2$} \move (1 -2.5) \rlvec (0 -2.5) \move (-3 -2.5) \fcir f:0 r:0.3 \move (-3 -2.5) \rlvec (-2.5 -1.25) \move (-10 -4) \fcir f:0.8 r:0.15 \move (-10 -3) \fcir f:0.8 r:0.15 \move (-10 -2) \fcir f:0.8 r:0.15 \move (-10 -1) \fcir f:0.8 r:0.15 \move (-10 0) \fcir f:0.8 r:0.15 \move (-10 1) \fcir f:0.8 r:0.15 \move (-10 2) \fcir f:0.8 r:0.15 \move (-10 3) \fcir f:0.8 r:0.15 \move (-10 4) \fcir f:0.8 r:0.15 \move (-9 -4) \fcir f:0.8 r:0.15 \move (-9 -3) \fcir f:0.8 r:0.15 \move (-9 -2) \fcir f:0.8 r:0.15 \move (-9 -1) \fcir f:0.8 r:0.15 \move (-9 0) \fcir f:0.8 r:0.15 \move (-9 1) \fcir f:0.8 r:0.15 \move (-9 2) \fcir f:0.8 
r:0.15 \move (-9 3) \fcir f:0.8 r:0.15 \move (-9 4) \fcir f:0.8 r:0.15 \move (-8 -4) \fcir f:0.8 r:0.15 \move (-8 -3) \fcir f:0.8 r:0.15 \move (-8 -2) \fcir f:0.8 r:0.15 \move (-8 -1) \fcir f:0.8 r:0.15 \move (-8 0) \fcir f:0.8 r:0.15 \move (-8 1) \fcir f:0.8 r:0.15 \move (-8 2) \fcir f:0.8 r:0.15 \move (-8 3) \fcir f:0.8 r:0.15 \move (-8 4) \fcir f:0.8 r:0.15 \move (-7 -4) \fcir f:0.8 r:0.15 \move (-7 -3) \fcir f:0.8 r:0.15 \move (-7 -2) \fcir f:0.8 r:0.15 \move (-7 -1) \fcir f:0.8 r:0.15 \move (-7 0) \fcir f:0.8 r:0.15 \move (-7 1) \fcir f:0.8 r:0.15 \move (-7 2) \fcir f:0.8 r:0.15 \move (-7 3) \fcir f:0.8 r:0.15 \move (-7 4) \fcir f:0.8 r:0.15 \move (-6 -4) \fcir f:0.8 r:0.15 \move (-6 -3) \fcir f:0.8 r:0.15 \move (-6 -2) \fcir f:0.8 r:0.15 \move (-6 -1) \fcir f:0.8 r:0.15 \move (-6 0) \fcir f:0.8 r:0.15 \move (-6 1) \fcir f:0.8 r:0.15 \move (-6 2) \fcir f:0.8 r:0.15 \move (-6 3) \fcir f:0.8 r:0.15 \move (-6 4) \fcir f:0.8 r:0.15 \move (-5 -4) \fcir f:0.8 r:0.15 \move (-5 -3) \fcir f:0.8 r:0.15 \move (-5 -2) \fcir f:0.8 r:0.15 \move (-5 -1) \fcir f:0.8 r:0.15 \move (-5 0) \fcir f:0.8 r:0.15 \move (-5 1) \fcir f:0.8 r:0.15 \move (-5 2) \fcir f:0.8 r:0.15 \move (-5 3) \fcir f:0.8 r:0.15 \move (-5 4) \fcir f:0.8 r:0.15 \move (-4 -4) \fcir f:0.8 r:0.15 \move (-4 -3) \fcir f:0.8 r:0.15 \move (-4 -2) \fcir f:0.8 r:0.15 \move (-4 -1) \fcir f:0.8 r:0.15 \move (-4 0) \fcir f:0.8 r:0.15 \move (-4 1) \fcir f:0.8 r:0.15 \move (-4 2) \fcir f:0.8 r:0.15 \move (-4 3) \fcir f:0.8 r:0.15 \move (-4 4) \fcir f:0.8 r:0.15 \move (-3 -4) \fcir f:0.8 r:0.15 \move (-3 -3) \fcir f:0.8 r:0.15 \move (-3 -2) \fcir f:0.8 r:0.15 \move (-3 -1) \fcir f:0.8 r:0.15 \move (-3 0) \fcir f:0.8 r:0.15 \move (-3 1) \fcir f:0.8 r:0.15 \move (-3 2) \fcir f:0.8 r:0.15 \move (-3 3) \fcir f:0.8 r:0.15 \move (-3 4) \fcir f:0.8 r:0.15 \move (-2 -4) \fcir f:0.8 r:0.15 \move (-2 -3) \fcir f:0.8 r:0.15 \move (-2 -2) \fcir f:0.8 r:0.15 \move (-2 -1) \fcir f:0.8 r:0.15 \move (-2 0) \fcir f:0.8 r:0.15 \move (-2 1) 
\fcir f:0.8 r:0.15 \move (-2 2) \fcir f:0.8 r:0.15 \move (-2 3) \fcir f:0.8 r:0.15 \move (-2 4) \fcir f:0.8 r:0.15 \move (-1 -4) \fcir f:0.8 r:0.15 \move (-1 -3) \fcir f:0.8 r:0.15 \move (-1 -2) \fcir f:0.8 r:0.15 \move (-1 -1) \fcir f:0.8 r:0.15 \move (-1 0) \fcir f:0.8 r:0.15 \move (-1 1) \fcir f:0.8 r:0.15 \move (-1 2) \fcir f:0.8 r:0.15 \move (-1 3) \fcir f:0.8 r:0.15 \move (-1 4) \fcir f:0.8 r:0.15 \move (0 -4) \fcir f:0.8 r:0.15 \move (0 -3) \fcir f:0.8 r:0.15 \move (0 -2) \fcir f:0.8 r:0.15 \move (0 -1) \fcir f:0.8 r:0.15 \move (0 0) \fcir f:0.8 r:0.15 \move (0 1) \fcir f:0.8 r:0.15 \move (0 2) \fcir f:0.8 r:0.15 \move (0 3) \fcir f:0.8 r:0.15 \move (0 4) \fcir f:0.8 r:0.15 \move (1 -4) \fcir f:0.8 r:0.15 \move (1 -3) \fcir f:0.8 r:0.15 \move (1 -2) \fcir f:0.8 r:0.15 \move (1 -1) \fcir f:0.8 r:0.15 \move (1 0) \fcir f:0.8 r:0.15 \move (1 1) \fcir f:0.8 r:0.15 \move (1 2) \fcir f:0.8 r:0.15 \move (1 3) \fcir f:0.8 r:0.15 \move (1 4) \fcir f:0.8 r:0.15 \move (2 -4) \fcir f:0.8 r:0.15 \move (2 -3) \fcir f:0.8 r:0.15 \move (2 -2) \fcir f:0.8 r:0.15 \move (2 -1) \fcir f:0.8 r:0.15 \move (2 0) \fcir f:0.8 r:0.15 \move (2 1) \fcir f:0.8 r:0.15 \move (2 2) \fcir f:0.8 r:0.15 \move (2 3) \fcir f:0.8 r:0.15 \move (2 4) \fcir f:0.8 r:0.15 \move (3 -4) \fcir f:0.8 r:0.15 \move (3 -3) \fcir f:0.8 r:0.15 \move (3 -2) \fcir f:0.8 r:0.15 \move (3 -1) \fcir f:0.8 r:0.15 \move (3 0) \fcir f:0.8 r:0.15 \move (3 1) \fcir f:0.8 r:0.15 \move (3 2) \fcir f:0.8 r:0.15 \move (3 3) \fcir f:0.8 r:0.15 \move (3 4) \fcir f:0.8 r:0.15 \move (4 -4) \fcir f:0.8 r:0.15 \move (4 -3) \fcir f:0.8 r:0.15 \move (4 -2) \fcir f:0.8 r:0.15 \move (4 -1) \fcir f:0.8 r:0.15 \move (4 0) \fcir f:0.8 r:0.15 \move (4 1) \fcir f:0.8 r:0.15 \move (4 2) \fcir f:0.8 r:0.15 \move (4 3) \fcir f:0.8 r:0.15 \move (4 4) \fcir f:0.8 r:0.15 \move (5 -4) \fcir f:0.8 r:0.15 \move (5 -3) \fcir f:0.8 r:0.15 \move (5 -2) \fcir f:0.8 r:0.15 \move (5 -1) \fcir f:0.8 r:0.15 \move (5 0) \fcir f:0.8 r:0.15 \move (5 1) \fcir 
f:0.8 r:0.15 \move (5 2) \fcir f:0.8 r:0.15 \move (5 3) \fcir f:0.8 r:0.15 \move (5 4) \fcir f:0.8 r:0.15 \move (6 -4) \fcir f:0.8 r:0.15 \move (6 -3) \fcir f:0.8 r:0.15 \move (6 -2) \fcir f:0.8 r:0.15 \move (6 -1) \fcir f:0.8 r:0.15 \move (6 0) \fcir f:0.8 r:0.15 \move (6 1) \fcir f:0.8 r:0.15 \move (6 2) \fcir f:0.8 r:0.15 \move (6 3) \fcir f:0.8 r:0.15 \move (6 4) \fcir f:0.8 r:0.15 \move (7 -4) \fcir f:0.8 r:0.15 \move (7 -3) \fcir f:0.8 r:0.15 \move (7 -2) \fcir f:0.8 r:0.15 \move (7 -1) \fcir f:0.8 r:0.15 \move (7 0) \fcir f:0.8 r:0.15 \move (7 1) \fcir f:0.8 r:0.15 \move (7 2) \fcir f:0.8 r:0.15 \move (7 3) \fcir f:0.8 r:0.15 \move (7 4) \fcir f:0.8 r:0.15 \move (8 -4) \fcir f:0.8 r:0.15 \move (8 -3) \fcir f:0.8 r:0.15 \move (8 -2) \fcir f:0.8 r:0.15 \move (8 -1) \fcir f:0.8 r:0.15 \move (8 0) \fcir f:0.8 r:0.15 \move (8 1) \fcir f:0.8 r:0.15 \move (8 2) \fcir f:0.8 r:0.15 \move (8 3) \fcir f:0.8 r:0.15 \move (8 4) \fcir f:0.8 r:0.15 \move (9 -4) \fcir f:0.8 r:0.15 \move (9 -3) \fcir f:0.8 r:0.15 \move (9 -2) \fcir f:0.8 r:0.15 \move (9 -1) \fcir f:0.8 r:0.15 \move (9 0) \fcir f:0.8 r:0.15 \move (9 1) \fcir f:0.8 r:0.15 \move (9 2) \fcir f:0.8 r:0.15 \move (9 3) \fcir f:0.8 r:0.15 \move (9 4) \fcir f:0.8 r:0.15 \move (10 -4) \fcir f:0.8 r:0.15 \move (10 -3) \fcir f:0.8 r:0.15 \move (10 -2) \fcir f:0.8 r:0.15 \move (10 -1) \fcir f:0.8 r:0.15 \move (10 0) \fcir f:0.8 r:0.15 \move (10 1) \fcir f:0.8 r:0.15 \move (10 2) \fcir f:0.8 r:0.15 \move (10 3) \fcir f:0.8 r:0.15 \move (10 4) \fcir f:0.8 r:0.15 \end{texdraw}\end{center} The Newton subdivision of the tropical curve is: \vspace*{0.5cm} \begin{center} \begin{texdraw} \drawdim cm \relunitscale 0.5 \linewd 0.05 \relunitscale0.52 \move (3 23) \lvec (6 17) \move (6 17) \lvec (6 6) \move (6 6) \lvec (5 0) \move (5 0) \lvec (3 0) \move (3 0) \lvec (0 6) \move (0 6) \lvec (0 17) \move (0 17) \lvec (1 23) \move (1 23) \lvec (3 23) \move (3 21) \lvec (3 23) \move (4 21) \lvec (4 19) \move (4 19) \lvec (3 21) \move (2 
21) \lvec (2 23) \move (3 21) \lvec (2 21) \move (5 19) \lvec (5 17) \move (5 17) \lvec (4 19) \move (1 21) \lvec (1 23) \move (2 21) \lvec (1 21) \move (6 15) \lvec (5 17) \move (3 20) \lvec (3 21) \move (3 19) \lvec (3 20) \move (3 18) \lvec (3 19) \move (4 19) \lvec (4 17) \move (4 17) \lvec (3 18) \move (0 16) \lvec (2 21) \move (3 18) \lvec (0 16) \move (0 16) \lvec (1 23) \move (1 21) \lvec (0 16) \move (5 17) \lvec (5 15) \move (5 15) \lvec (4 17) \move (6 13) \lvec (5 15) \move (4 17) \lvec (4 15) \move (3 16) \lvec (3 18) \move (4 15) \lvec (3 16) \move (5 15) \lvec (5 13) \move (5 13) \lvec (4 15) \move (1 14) \lvec (3 18) \move (3 16) \lvec (1 14) \move (1 14) \lvec (0 16) \move (6 11) \lvec (5 13) \move (4 13) \lvec (4 12) \move (4 14) \lvec (4 13) \move (4 15) \lvec (4 14) \move (3 15) \lvec (3 16) \move (3 14) \lvec (3 15) \move (4 12) \lvec (3 14) \move (5 13) \lvec (5 11) \move (5 11) \lvec (4 12) \move (3 14) \lvec (1 14) \move (6 9) \lvec (5 11) \move (4 12) \lvec (2 12) \move (2 12) \lvec (1 14) \move (5 11) \lvec (5 9) \move (5 9) \lvec (4 12) \move (6 7) \lvec (5 9) \move (4 11) \lvec (4 12) \move (5 9) \lvec (4 11) \move (4 11) \lvec (2 11) \move (2 11) \lvec (2 12) \move (1 14) \lvec (1 12) \move (1 12) \lvec (0 14) \move (5 9) \lvec (3 9) \move (3 9) \lvec (2 11) \move (2 11) \lvec (1 14) \move (2 11) \lvec (1 12) \move (3 5) \lvec (5 9) \move (6 7) \lvec (3 5) \move (5 9) \lvec (3 7) \move (3 8) \lvec (3 9) \move (3 7) \lvec (3 8) \move (3 5) \lvec (3 7) \move (1 12) \lvec (1 10) \move (1 10) \lvec (0 12) \move (5 0) \lvec (6 7) \move (6 7) \lvec (4 2) \move (3 4) \lvec (3 5) \move (3 3) \lvec (3 4) \move (3 2) \lvec (3 3) \move (4 2) \lvec (3 2) \move (6 7) \lvec (5 2) \move (5 2) \lvec (4 2) \move (5 0) \lvec (5 2) \move (2 9) \lvec (2 8) \move (2 10) \lvec (2 9) \move (2 11) \lvec (2 10) \move (2 8) \lvec (1 10) \move (3 7) \lvec (2 8) \move (1 10) \lvec (1 8) \move (1 8) \lvec (0 10) \move (2 8) \lvec (2 6) \move (2 6) \lvec (1 8) \move 
(3 5) \lvec (2 6) \move (1 8) \lvec (1 6) \move (1 6) \lvec (0 8) \move (2 6) \lvec (2 4) \move (2 4) \lvec (1 6) \move (3 2) \lvec (2 4) \move (1 6) \lvec (1 4) \move (4 0) \lvec (4 2) \move (2 4) \lvec (2 2) \move (3 0) \lvec (3 2) \move (0 0) \fcir f:0.6 r:0.38 \move (0 1) \fcir f:0.6 r:0.38 \move (0 2) \fcir f:0.6 r:0.38 \move (0 3) \fcir f:0.6 r:0.38 \move (0 4) \fcir f:0.6 r:0.38 \move (0 5) \fcir f:0.6 r:0.38 \move (0 6) \fcir f:0.6 r:0.38 \move (0 7) \fcir f:0.6 r:0.38 \move (0 8) \fcir f:0.6 r:0.38 \move (0 9) \fcir f:0.6 r:0.38 \move (0 10) \fcir f:0.6 r:0.38 \move (0 11) \fcir f:0.6 r:0.38 \move (0 12) \fcir f:0.6 r:0.38 \move (0 13) \fcir f:0.6 r:0.38 \move (0 14) \fcir f:0.6 r:0.38 \move (0 15) \fcir f:0.6 r:0.38 \move (0 16) \fcir f:0.6 r:0.38 \move (0 17) \fcir f:0.6 r:0.38 \move (0 18) \fcir f:0.6 r:0.38 \move (0 19) \fcir f:0.6 r:0.38 \move (0 20) \fcir f:0.6 r:0.38 \move (0 21) \fcir f:0.6 r:0.38 \move (0 22) \fcir f:0.6 r:0.38 \move (0 23) \fcir f:0.6 r:0.38 \move (1 0) \fcir f:0.6 r:0.38 \move (1 1) \fcir f:0.6 r:0.38 \move (1 2) \fcir f:0.6 r:0.38 \move (1 3) \fcir f:0.6 r:0.38 \move (1 4) \fcir f:0.6 r:0.38 \move (1 5) \fcir f:0.6 r:0.38 \move (1 6) \fcir f:0.6 r:0.38 \move (1 7) \fcir f:0.6 r:0.38 \move (1 8) \fcir f:0.6 r:0.38 \move (1 9) \fcir f:0.6 r:0.38 \move (1 10) \fcir f:0.6 r:0.38 \move (1 11) \fcir f:0.6 r:0.38 \move (1 12) \fcir f:0.6 r:0.38 \move (1 13) \fcir f:0.6 r:0.38 \move (1 14) \fcir f:0.6 r:0.38 \move (1 15) \fcir f:0.6 r:0.38 \move (1 16) \fcir f:0.6 r:0.38 \move (1 17) \fcir f:0.6 r:0.38 \move (1 18) \fcir f:0.6 r:0.38 \move (1 19) \fcir f:0.6 r:0.38 \move (1 20) \fcir f:0.6 r:0.38 \move (1 21) \fcir f:0.6 r:0.38 \move (1 22) \fcir f:0.6 r:0.38 \move (1 23) \fcir f:0.6 r:0.38 \move (2 0) \fcir f:0.6 r:0.38 \move (2 1) \fcir f:0.6 r:0.38 \move (2 2) \fcir f:0.6 r:0.38 \move (2 3) \fcir f:0.6 r:0.38 \move (2 4) \fcir f:0.6 r:0.38 \move (2 5) \fcir f:0.6 r:0.38 \move (2 6) \fcir f:0.6 r:0.38 \move (2 7) \fcir f:0.6 r:0.38 
\move (2 8) \fcir f:0.6 r:0.38 \move (2 9) \fcir f:0.6 r:0.38 \move (2 10) \fcir f:0.6 r:0.38 \move (2 11) \fcir f:0.6 r:0.38 \move (2 12) \fcir f:0.6 r:0.38 \move (2 13) \fcir f:0.6 r:0.38 \move (2 14) \fcir f:0.6 r:0.38 \move (2 15) \fcir f:0.6 r:0.38 \move (2 16) \fcir f:0.6 r:0.38 \move (2 17) \fcir f:0.6 r:0.38 \move (2 18) \fcir f:0.6 r:0.38 \move (2 19) \fcir f:0.6 r:0.38 \move (2 20) \fcir f:0.6 r:0.38 \move (2 21) \fcir f:0.6 r:0.38 \move (2 22) \fcir f:0.6 r:0.38 \move (2 23) \fcir f:0.6 r:0.38 \move (3 0) \fcir f:0.6 r:0.38 \move (3 1) \fcir f:0.6 r:0.38 \move (3 2) \fcir f:0.6 r:0.38 \move (3 3) \fcir f:0.6 r:0.38 \move (3 4) \fcir f:0.6 r:0.38 \move (3 5) \fcir f:0.6 r:0.38 \move (3 6) \fcir f:0.6 r:0.38 \move (3 7) \fcir f:0.6 r:0.38 \move (3 8) \fcir f:0.6 r:0.38 \move (3 9) \fcir f:0.6 r:0.38 \move (3 10) \fcir f:0.6 r:0.38 \move (3 11) \fcir f:0.6 r:0.38 \move (3 12) \fcir f:0.6 r:0.38 \move (3 13) \fcir f:0.6 r:0.38 \move (3 14) \fcir f:0.6 r:0.38 \move (3 15) \fcir f:0.6 r:0.38 \move (3 16) \fcir f:0.6 r:0.38 \move (3 17) \fcir f:0.6 r:0.38 \move (3 18) \fcir f:0.6 r:0.38 \move (3 19) \fcir f:0.6 r:0.38 \move (3 20) \fcir f:0.6 r:0.38 \move (3 21) \fcir f:0.6 r:0.38 \move (3 22) \fcir f:0.6 r:0.38 \move (3 23) \fcir f:0.6 r:0.38 \move (4 0) \fcir f:0.6 r:0.38 \move (4 1) \fcir f:0.6 r:0.38 \move (4 2) \fcir f:0.6 r:0.38 \move (4 3) \fcir f:0.6 r:0.38 \move (4 4) \fcir f:0.6 r:0.38 \move (4 5) \fcir f:0.6 r:0.38 \move (4 6) \fcir f:0.6 r:0.38 \move (4 7) \fcir f:0.6 r:0.38 \move (4 8) \fcir f:0.6 r:0.38 \move (4 9) \fcir f:0.6 r:0.38 \move (4 10) \fcir f:0.6 r:0.38 \move (4 11) \fcir f:0.6 r:0.38 \move (4 12) \fcir f:0.6 r:0.38 \move (4 13) \fcir f:0.6 r:0.38 \move (4 14) \fcir f:0.6 r:0.38 \move (4 15) \fcir f:0.6 r:0.38 \move (4 16) \fcir f:0.6 r:0.38 \move (4 17) \fcir f:0.6 r:0.38 \move (4 18) \fcir f:0.6 r:0.38 \move (4 19) \fcir f:0.6 r:0.38 \move (4 20) \fcir f:0.6 r:0.38 \move (4 21) \fcir f:0.6 r:0.38 \move (4 22) \fcir f:0.6 r:0.38 \move 
(4 23) \fcir f:0.6 r:0.38 \move (5 0) \fcir f:0.6 r:0.38 \move (5 1) \fcir f:0.6 r:0.38 \move (5 2) \fcir f:0.6 r:0.38 \move (5 3) \fcir f:0.6 r:0.38 \move (5 4) \fcir f:0.6 r:0.38 \move (5 5) \fcir f:0.6 r:0.38 \move (5 6) \fcir f:0.6 r:0.38 \move (5 7) \fcir f:0.6 r:0.38 \move (5 8) \fcir f:0.6 r:0.38 \move (5 9) \fcir f:0.6 r:0.38 \move (5 10) \fcir f:0.6 r:0.38 \move (5 11) \fcir f:0.6 r:0.38 \move (5 12) \fcir f:0.6 r:0.38 \move (5 13) \fcir f:0.6 r:0.38 \move (5 14) \fcir f:0.6 r:0.38 \move (5 15) \fcir f:0.6 r:0.38 \move (5 16) \fcir f:0.6 r:0.38 \move (5 17) \fcir f:0.6 r:0.38 \move (5 18) \fcir f:0.6 r:0.38 \move (5 19) \fcir f:0.6 r:0.38 \move (5 20) \fcir f:0.6 r:0.38 \move (5 21) \fcir f:0.6 r:0.38 \move (5 22) \fcir f:0.6 r:0.38 \move (5 23) \fcir f:0.6 r:0.38 \move (6 0) \fcir f:0.6 r:0.38 \move (6 1) \fcir f:0.6 r:0.38 \move (6 2) \fcir f:0.6 r:0.38 \move (6 3) \fcir f:0.6 r:0.38 \move (6 4) \fcir f:0.6 r:0.38 \move (6 5) \fcir f:0.6 r:0.38 \move (6 6) \fcir f:0.6 r:0.38 \move (6 7) \fcir f:0.6 r:0.38 \move (6 8) \fcir f:0.6 r:0.38 \move (6 9) \fcir f:0.6 r:0.38 \move (6 10) \fcir f:0.6 r:0.38 \move (6 11) \fcir f:0.6 r:0.38 \move (6 12) \fcir f:0.6 r:0.38 \move (6 13) \fcir f:0.6 r:0.38 \move (6 14) \fcir f:0.6 r:0.38 \move (6 15) \fcir f:0.6 r:0.38 \move (6 16) \fcir f:0.6 r:0.38 \move (6 17) \fcir f:0.6 r:0.38 \move (6 18) \fcir f:0.6 r:0.38 \move (6 19) \fcir f:0.6 r:0.38 \move (6 20) \fcir f:0.6 r:0.38 \move (6 21) \fcir f:0.6 r:0.38 \move (6 22) \fcir f:0.6 r:0.38 \move (6 23) \fcir f:0.6 r:0.38 \move (3 23) \fcir f:0 r:0.47 \move (4 21) \fcir f:0 r:0.47 \move (3 21) \fcir f:0 r:0.47 \move (4 19) \fcir f:0 r:0.47 \move (2 23) \fcir f:0 r:0.47 \move (2 21) \fcir f:0 r:0.47 \move (5 19) \fcir f:0 r:0.47 \move (5 17) \fcir f:0 r:0.47 \move (1 23) \fcir f:0 r:0.47 \move (1 21) \fcir f:0 r:0.47 \move (6 17) \fcir f:0 r:0.47 \move (6 15) \fcir f:0 r:0.47 \move (3 20) \fcir f:0 r:0.47 \move (3 19) \fcir f:0 r:0.47 \move (4 17) \fcir f:0 r:0.47 \move 
(3 18) \fcir f:0 r:0.47 \move (2 20) \fcir f:0 r:0.47 \move (2 19) \fcir f:0 r:0.47 \move (2 18) \fcir f:0 r:0.47 \move (0 16) \fcir f:0 r:0.47 \move (0 17) \fcir f:0 r:0.47 \move (5 15) \fcir f:0 r:0.47 \move (6 13) \fcir f:0 r:0.47 \move (4 15) \fcir f:0 r:0.47 \move (3 16) \fcir f:0 r:0.47 \move (5 13) \fcir f:0 r:0.47 \move (1 14) \fcir f:0 r:0.47 \move (6 11) \fcir f:0 r:0.47 \move (4 14) \fcir f:0 r:0.47 \move (3 15) \fcir f:0 r:0.47 \move (4 13) \fcir f:0 r:0.47 \move (3 14) \fcir f:0 r:0.47 \move (4 12) \fcir f:0 r:0.47 \move (5 11) \fcir f:0 r:0.47 \move (6 9) \fcir f:0 r:0.47 \move (2 12) \fcir f:0 r:0.47 \move (5 9) \fcir f:0 r:0.47 \move (6 7) \fcir f:0 r:0.47 \move (4 11) \fcir f:0 r:0.47 \move (2 11) \fcir f:0 r:0.47 \move (0 14) \fcir f:0 r:0.47 \move (1 12) \fcir f:0 r:0.47 \move (3 9) \fcir f:0 r:0.47 \move (3 5) \fcir f:0 r:0.47 \move (3 8) \fcir f:0 r:0.47 \move (3 7) \fcir f:0 r:0.47 \move (0 12) \fcir f:0 r:0.47 \move (1 10) \fcir f:0 r:0.47 \move (6 6) \fcir f:0 r:0.47 \move (5 0) \fcir f:0 r:0.47 \move (4 5) \fcir f:0 r:0.47 \move (4 4) \fcir f:0 r:0.47 \move (4 3) \fcir f:0 r:0.47 \move (3 4) \fcir f:0 r:0.47 \move (4 2) \fcir f:0 r:0.47 \move (3 3) \fcir f:0 r:0.47 \move (3 2) \fcir f:0 r:0.47 \move (5 2) \fcir f:0 r:0.47 \move (2 10) \fcir f:0 r:0.47 \move (2 9) \fcir f:0 r:0.47 \move (2 8) \fcir f:0 r:0.47 \move (0 10) \fcir f:0 r:0.47 \move (1 8) \fcir f:0 r:0.47 \move (2 6) \fcir f:0 r:0.47 \move (0 8) \fcir f:0 r:0.47 \move (1 6) \fcir f:0 r:0.47 \move (2 4) \fcir f:0 r:0.47 \move (0 6) \fcir f:0 r:0.47 \move (1 4) \fcir f:0 r:0.47 \move (4 0) \fcir f:0 r:0.47 \move (2 2) \fcir f:0 r:0.47 \move (3 0) \fcir f:0 r:0.47 \end{texdraw} \end{center} \subsection{The number of terms of the non-homogeneous $A$-polynomial of twist knots} \lbl{sub.terms} In \cite{GS} we explicitly computed the non-homogeneous $A$-polynomial $(A^{nh}_{K_p},B_{K_p})$ of the {\em twist knots} $K_p$ for $p=-8,\dots,11$. 
$K_p$ is the knot obtained by $1/p$ surgery on one component of the Whitehead link. This includes the following knots in the Rolfsen notation: $$ K_1 = 3_1, K_2 = 5_2, K_3 = 7_2, K_4 = 9_2, \qquad K_{-1} = 4_1, K_{-2} = 6_1, K_{-3} = 8_1, K_{-4} = 10_1. $$ The computations reveal that for $p=1,\dots,11$, $A^{nh}_{K_p}$ has $(L,M,q)$ degree equal to $$ \left(2p-1,8p-4,\frac{17}{2} p(p-1) + 2\right). $$ The total number of terms of the 3-variable polynomial $A^{nh}_{K_p}$ is given by $$ 139976, 80252, 41996, 19402, 7406, 2112, 346, 22 $$ for $p=-8,\dots,-1$, and by $$ 4, 98, 908, 4100, 12236, 28978, 58668, 106800, 179814, 284998, 430652 $$ for $p=1,\dots,11$. Using the data from \cite{GS}, the author has computed the tropical curves (homogeneous or not) of all twist knots $K_p$ with $p=-8,\dots,11$. Needless to say, the output of the computations is too large to be displayed in the paper. \subsection{Acknowledgment} The idea of the present paper was conceived during the New York Conference on {\em Interactions between Hyperbolic Geometry, Quantum Topology and Number Theory} in New York in the summer of 2009. An early version of the present paper appeared in the New Zealand Conference on {\em Topological Quantum Field Theory and Knot Homology Theory} in January 2010. The author wishes to thank the organizers of the New York Conference, A. Champanerkar, O. Dasbach, E. Kalfagianni, I. Kofman, W. Neumann and N. Stoltzfus, and the organizers of the New Zealand Conference, R. Fenn, D. Gauld and V. Jones, for their hospitality and for creating a stimulating atmosphere. The author also wishes to thank J. Yu for many enlightening conversations and T. Markwig for the drawing implementation of {\tt polymake}.
\bigskip

\noindent\textbf{One half of almost symmetric numerical semigroups}\\
\noindent\texttt{https://arxiv.org/abs/1404.4959}

\medskip

\noindent\textbf{Abstract.} Let $S,T$ be two numerical semigroups. We study when $S$ is one half of $T$, with $T$ almost symmetric. If we assume that the type of $T$, $t(T)$, is odd, then for any $S$ there exist infinitely many such $T$ and we prove that $1 \leq t(T) \leq 2t(S)+1$. On the other hand, if $t(T)$ is even, there exists such $T$ if and only if $S$ is almost symmetric and different from $\mathbb{N}$; in this case the type of $S$ is the number of even pseudo-Frobenius numbers of $T$. Moreover, we construct these families of semigroups using the numerical duplication with respect to a relative ideal.
\section{Introduction} A numerical semigroup $S$ is a submonoid of $\mathbb{N}$ such that $\mathbb{N} \setminus S$ is finite. Numerical semigroups arise in several contexts, such as commutative algebra, algebraic geometry, coding theory, number theory and combinatorics. Many classes of numerical semigroups are defined by translating ring concepts (see e.g. \cite{BDF}); for example, symmetric and pseudo-symmetric numerical semigroups are the counterparts of Gorenstein and Kunz rings in numerical semigroup theory. Almost symmetric numerical semigroups, which are the object of this paper, were introduced in \cite{BF}, together with the corresponding notion of almost Gorenstein rings, as a generalization of symmetric and pseudo-symmetric numerical semigroups; in fact, these classes are exactly the almost symmetric numerical semigroups of type $1$ and $2$, respectively. In \cite{RGSGU} Rosales, Garc\'ia-S\'anchez, Garc\'ia-Garc\'ia, and Urbano-Blanco introduced the concept of one half of a numerical semigroup in order to solve proportionally modular Diophantine inequalities; $S$ is one half of $T$ if $S=\{s \in \mathbb{N} | \ 2s \in T \}$. In the last ten years several authors have studied this concept and its generalizations; see for example \cite{Do, MO, M, Sm} and the papers quoted below. Rosales and Garc\'ia-S\'anchez proved in \cite{RGS2} that every numerical semigroup is one half of infinitely many symmetric semigroups, and Swanson generalized this result in \cite{Sw}. Moreover, Rosales proved in \cite{R} that a numerical semigroup (different from $\mathbb{N}$) is one half of a pseudo-symmetric semigroup if and only if it is symmetric or pseudo-symmetric. In this paper we generalize these results to the case of almost symmetric semigroups. According to results of \cite{R}, \cite{RGS}, and \cite{RGS2}, we consider separately the cases of almost symmetric semigroups with even and odd type. 
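Computationally, one half of a numerical semigroup is easy to extract from a generating set. The following Python sketch (ours, purely illustrative; the function names are not from the paper) lists the elements of a numerical semigroup up to a bound and computes its half; it reproduces $\frac{T}{2}$ for $T=\langle 9,10,14,15 \rangle$, an example used again in Section 3.

```python
def elements(gens, bound):
    """Elements of the numerical semigroup generated by `gens`, up to `bound`.

    A numerical semigroup contains 0 and is closed under addition, and its
    complement in N is finite, so listing elements up to a bound suffices.
    """
    elems = {0}
    for n in range(1, bound + 1):
        # n is in the semigroup iff n - g is, for some generator g <= n.
        if any(n >= g and (n - g) in elems for g in gens):
            elems.add(n)
    return elems


def one_half(gens, bound):
    """S = T/2 = {s in N : 2s in T}, listed up to `bound`."""
    T = elements(gens, 2 * bound)
    return {s for s in range(bound + 1) if 2 * s in T}


# T = <9,10,14,15>: its half is {0, 5, 7, 9, 10, 12} together with
# every integer >= 14, since every even integer >= 28 lies in T.
S = one_half([9, 10, 14, 15], 20)
```

Doubling the bound inside the helper guarantees that membership of $2s$ in $T$ is decided for every $s$ up to the requested bound.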
Starting from a numerical semigroup $S$ of type $t$, infinitely many almost symmetric semigroups of type $1,3,5,7, \dots ,2t+1$ having $S$ as their half are constructed in \cite{DS}. This construction is called the numerical duplication of $S$ with respect to a proper ideal and arises in commutative algebra; in fact, it is the value semigroup of particular algebroid branches (see \cite[Theorems 3.4 and 3.6]{BDS}). In this paper we prove that if $S$ is one half of an almost symmetric semigroup $T$ with odd type, then the type of $T$ is among the values above, and all such semigroups can be constructed by the numerical duplication with respect to a \textit{relative} ideal. On the other hand, if $T$ is almost symmetric with even type, then $S$ is almost symmetric and its type is the number of even pseudo-Frobenius numbers of $T$; in particular $t(S) \leq t(T)$. Moreover, we prove that a numerical semigroup different from $\mathbb{N}$ is almost symmetric if and only if it is one half of an almost symmetric semigroup with even type or, equivalently, of a finite number of almost symmetric semigroups with even type. Finally, we characterize these semigroups. \vspace{1em} The paper is organized as follows. In Section $2$ we recall some definitions and results about numerical semigroups and prove some useful lemmas. In Section $3$ we introduce the numerical duplication, prove that every numerical semigroup can be realized as a numerical duplication with respect to a relative ideal (see Proposition \ref{duplication}), and use this fact in Theorem \ref{main odd} to characterize one half of an almost symmetric numerical semigroup $T$ with odd type; moreover, in Theorem \ref{odd} we give bounds for the type of $T$ (see also the discussion after the theorem). 
Finally, in the last section we characterize when $T$ is almost symmetric with even type in terms of properties of $\frac{T}{2}$ (see Theorem \ref{main even}) and prove in Corollary \ref{final} that a numerical semigroup $S \neq \mathbb{N}$ is almost symmetric if and only if it is one half of an almost symmetric numerical semigroup with even type. \section{Preliminaries} Let $S$ be a numerical semigroup. The maximum of $\mathbb{N} \setminus S$ is called the \textit{Frobenius number} of $S$ and we denote it by $f(S)$. Clearly, if $s \in S \setminus \{0\}$ then $s+f(S) \in S$; more generally, we define the set of \textit{pseudo-Frobenius numbers} PF$(S)=\{x \in \mathbb{Z} \setminus S | \ x+s \in S \text{ \ for any \ } s \in S \setminus \{0\} \}$. The cardinality of PF$(S)$ is called the \textit{type} of $S$; this name is due to ring theory (see e.g. the first section of \cite[Chapter II]{BDF}). Let $s$ be an integer such that $s \notin S$. If $f(S)-s \in S$, then $s$ is called a gap of the first type; otherwise, it is a gap of the second type. We denote the set of gaps of the second type by ${\rm L}(S)$; it is easy to see that ${\rm PF}(S) \subseteq {\rm L}(S) \cup \{f(S)\}$. If we have that $s \in S$ if and only if $f(S)-s \notin S$, then we say that $S$ is \textit{symmetric}; if $f(S)$ is even and this property holds for any $s \in \mathbb{Z}$ except $f(S)/2$, we call $S$ \textit{pseudo-symmetric}. Clearly, these properties mean that ${\rm L}(S)=\emptyset$ and ${\rm L}(S)=\{\frac{f(S)}{2}\}$, respectively. Finally, if ${\rm L}(S) \subseteq {\rm PF}(S)$, we call $S$ \textit{almost symmetric}. It is well known that $S$ is symmetric if and only if it has type $1$, while the pseudo-symmetric numerical semigroups have type $2$ (but there are numerical semigroups with type $2$ that are not pseudo-symmetric). 
Almost symmetric semigroups generalize these two classes; in particular, symmetric and pseudo-symmetric numerical semigroups are exactly the almost symmetric semigroups of type $1$ and $2$, respectively (see \cite[Proposition 7]{BF}). A \textit{relative ideal} of $S$ is a set $E \subseteq \mathbb{Z}$ such that $E+S \subseteq E$ and $x+E \subseteq S$ for some $x \in S$; moreover, if $E \subseteq S$, we say simply that $E$ is a \textit{(proper) ideal} of $S$. We denote by $f(E)$ the Frobenius number of $E$, i.e. the maximum of $\mathbb{Z} \setminus E$. For example, $M(S)=S \setminus \{0\}$ and $K(S)=\{x \in \mathbb{Z}| \ f(S)-x \notin S\}$ are relative ideals of $S$ (the first one is a proper ideal) and are called the \textit{maximal ideal} and the \textit{standard canonical ideal}, respectively. More generally, we say that $E$ is a \textit{canonical ideal} of $S$ if $E=K(S)+x$ for some $x \in \mathbb{Z}$. The names of these two ideals come from ring theory and they are very important; for example, it is known that $S$ is symmetric if and only if $S=K(S)$, and it is almost symmetric if and only if $M(S)+K(S) \subseteq M(S)$ (see \cite[Proposition 4]{BF}). We note the analogy with ring theory, where a Cohen-Macaulay local ring $(R, \mathfrak{m})$ is Gorenstein if and only if it is isomorphic to its canonical module $K$, and is almost Gorenstein when $\mathfrak{m}+K \subseteq \mathfrak{m}$, where $R \subseteq K \subseteq \overline{R}$. If $E$ and $F$ are relative ideals of $S$, we define $E-F=\{x \in \mathbb{Z}| \ x+f \in E \text{ \ for any \ } f \in F\}$, which is also a relative ideal. For example, we can define $M(S)-M(S)$ in this way, and it is easy to check that this is a numerical semigroup satisfying the equality $M(S)-M(S)=S \cup \text{PF}(S)$. There exist other characterizations of almost symmetric numerical semigroups; in the following sections we will use the next two. \begin{lem} \label{almost symmetric} Let $S$ be a numerical semigroup with Frobenius number $f$. 
Then $S$ is almost symmetric if and only if the following property holds: $$ s \in S \Longleftrightarrow f-s \notin S \cup {\rm PF}(S) $$ for any $s \in \mathbb{Z} \setminus \{0\}$. \end{lem} \begin{proof} First of all, let $s$ be an element of $S$. Then $f-s \notin S \cup \text{PF}(S)$, since otherwise $f=s+(f-s) \in S$, a contradiction; hence the equivalence always holds for $s \in S$. Now let $s \notin S$. For such $s$ the condition above amounts to $f-s \in S \cup \text{PF}(S)$. Since the set of elements $s \notin S$ such that $f-s \notin S$ is exactly ${\rm L}(S)$, this condition is equivalent to ${\rm L}(S) \subseteq \text{PF}(S)$, that is, to $S$ being almost symmetric. \end{proof} The next theorem was proved by Nari in \cite{N}, Theorem $2.4$. \begin{thm} \label{Nari} Let $S$ be a numerical semigroup and let {\rm PF}$(S)=\{ f_1 < \dots < f_{t-1} < f \}$. Then the following conditions are equivalent: \\ {\rm (1)} $S$ is almost symmetric; \\ {\rm (2)} $f_i+f_{t-i}=f$ for any $i \in \{1, \dots,t-1\}$. \end{thm} In the rest of the paper we distinguish between almost symmetric numerical semigroups with odd and even type. Luckily, a nice distinction between them can be deduced from \cite[Theorem 3]{RG2}; here we give a direct proof of this fact. \begin{prop} \label{frobenius odd} Let $T$ be an almost symmetric numerical semigroup. Then $T$ has odd type if and only if $f(T)$ is odd. \end{prop} \begin{proof} Let ${\rm PF}(T)=\{f_1 < \dots <f_{t-1} < f\}$. By Theorem \ref{Nari} one has $f_i+f_{t-i}=f$; hence $t$ is even if and only if $f/2 \in {\rm PF}(T)$. Consequently, if $f$ is odd then $t$ is odd. Conversely, suppose that $f$ is even; note that $f/2 \notin T$, since otherwise $f=f/2+f/2 \in T$, and that, by Lemma \ref{almost symmetric}, $f/2=f-f/2 \in T \cup {\rm PF}(T)$; hence $f/2 \in {\rm PF}(T)$. \end{proof} Finally, we recall that a numerical semigroup $S$ is one half of $T$ if $S = \{ s \in \mathbb{N} | \ 2s \in T \}$, and in this case we will write $S= \frac{T}{2}$. 
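All of the invariants introduced above can be computed by a finite search. The following Python sketch (ours, purely illustrative) computes $f(S)$, ${\rm PF}(S)$ and ${\rm L}(S)$ from a generating set and checks, for the pseudo-symmetric semigroup $S=\langle 3,5,7 \rangle$, the inclusion ${\rm L}(S) \subseteq {\rm PF}(S)$, the symmetry of Theorem \ref{Nari} and the parity statement of Proposition \ref{frobenius odd}.

```python
def elements(gens, bound):
    """Elements of the numerical semigroup generated by `gens`, up to `bound`."""
    elems = {0}
    for n in range(1, bound + 1):
        if any(n >= g and (n - g) in elems for g in gens):
            elems.add(n)
    return elems


def invariants(gens):
    """Frobenius number, pseudo-Frobenius numbers and gaps of the second type.

    The crude bound gens[0]*gens[1] exceeds the Frobenius number whenever the
    first two generators are coprime, as in the example below.
    """
    bound = gens[0] * gens[1]
    S = elements(gens, 2 * bound)
    gaps = [n for n in range(bound) if n not in S]
    f = max(gaps)
    M = [s for s in range(1, bound + 1) if s in S]      # M(S), up to the bound
    PF = [x for x in gaps if all(x + s in S for s in M)]
    L = [s for s in gaps if f - s not in S]             # gaps of the second type
    return f, PF, L


f, PF, L = invariants([3, 5, 7])
almost_symmetric = set(L) <= set(PF)          # L(S) contained in PF(S)?
# Nari's symmetry: with PF(S) = {f_1 < ... < f_{t-1} < f}, f_i + f_{t-i} = f.
nari = all(PF[i] + PF[len(PF) - 2 - i] == f for i in range(len(PF) - 1))
```

For $S=\langle 3,5,7 \rangle$ one finds $f(S)=4$, ${\rm PF}(S)=\{2,4\}$ and ${\rm L}(S)=\{2\}$, so $S$ is almost symmetric of type $2$ (indeed pseudo-symmetric), and, consistently with Proposition \ref{frobenius odd}, the even Frobenius number goes with the even type.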
\section{One half of almost symmetric numerical semigroups with odd type} In this section we study those numerical semigroups that are one half of almost symmetric semigroups with odd type. \begin{thm} \label{odd} Let $T$ be an almost symmetric numerical semigroup with odd type $t$. If $S$ is one half of $T$, then $t(S)\geq (t-1)/2$. \end{thm} \begin{proof} Let PF$(T)=\{ f_1 < \dots < f_{t-1} < f \}$; by Theorem \ref{Nari} one has $f_i+f_{t-i}=f$. In particular, since $f$ is odd, in ${\rm PF}(T)$ there are $(t-1)/2$ even elements and $1+(t-1)/2$ odd elements. Consider an even pseudo-Frobenius number $f_i=2e_i$; clearly $e_i \notin S$, and we claim that $e_i \in {\rm PF}(S)$. Let $s \in S \setminus \{0\}$; then $2s \in T$ and therefore $2(e_i+s)=f_i+2s \in T$, because $f_i \in {\rm PF}(T)$; consequently $e_i+s \in S$ for any $s \in S \setminus \{0\}$, that is, $e_i \in {\rm PF}(S)$. Hence there are at least $(t-1)/2$ pseudo-Frobenius numbers in $S$. \end{proof} \begin{rem} In general, there is no upper bound for $t(S)$. In fact, in \cite{RGS} it is proved that every numerical semigroup is one half of a symmetric numerical semigroup; then, even if we restrict to the case $T$ symmetric, $S$ may be {\it any} numerical semigroup. \end{rem} Let $S$ be a numerical semigroup, $E$ a proper ideal of $S$ and $b$ an odd element of $S$. In \cite{DS} the \textit{numerical duplication} of $S$ with respect to $E$ is defined as the numerical semigroup $$ S \! \Join^b \! E = 2\cdot S \cup (2\cdot E +b) $$ where $2 \cdot S=\{2s | \ s \in S\}$ and $2 \cdot E=\{2e| \ e \in E\}$. \vspace{1em} This construction is motivated by a commutative algebra construction (see \cite{BDS}), but we are interested in it because it can be used to construct almost symmetric semigroups. For example, Corollary $4.9$ of \cite{DS} shows that, starting with a numerical semigroup $S$, it is possible to choose proper ideals $E_0, E_1, \dots , E_{t(S)}$, such that $S \! \Join^b \! E_0, \dots , S \! \Join^b \!
E_{t(S)}$ are almost symmetric of type $1, 3, 5, \dots, 2t(S)+1$, respectively. Coming back to Theorem \ref{odd}, the inequality of the statement is equivalent to $t \leq 2t(S)+1$, so the construction above shows that this bound is sharp. Unfortunately, there exist almost symmetric numerical semigroups with odd type that cannot be constructed in this way. For example, consider $T=\langle 9,10,14,15 \rangle = \{0,9,10,14,15,18,19,20,23,24,25,27,28,29,30,32 \rightarrow \}$, where $\rightarrow$ means that all integers greater than or equal to $32$ are in $T$; in this case $S=\frac{T}{2}=\{0,5,7,9,10,12,14 \rightarrow \}$. The problem lies in the choice of $b \in S$. We must have $2 \cdot E+b=\{9,15,19,23,25,27,29,33,35,37 \dots\}$, so we have $$ \begin{array}{ll} E=\{2,5,7,9,10,11,12,14 \rightarrow \} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ & \text{if } b=5 \\ E=\{1,4,6,8,9,10,11,13 \rightarrow \} & \text{if } b=7 \\ E=\{0,3,5,7,8,9,10,12 \rightarrow \} & \text{if } b=9 \\ E \text{ contains a negative element} & \text{if } b>9 \end{array} $$ In no case is $E$ contained in $S$; hence $E$ is never a proper ideal of $S$. \vspace{1em} To solve this problem, note that, if $E$ is not a proper ideal, but a relative ideal such that $b+E+E \subseteq S$, then $S \! \Join^b \! E$ is still a numerical semigroup. With this mild generalization, every numerical semigroup can be constructed. \begin{prop} \label{duplication} Every numerical semigroup $T$ can be realized as a numerical duplication $S \! \Join^b \! E$, where $S=\frac{T}{2}$, $b$ is an odd element of $S$ and $E$ is a relative ideal of $S$ such that $b+E+E \subseteq S$. \end{prop} \begin{proof} Let $b$ be an odd element of $S$ and set $E=\frac{T-b}{2}$ (note that $T-b$ may contain negative elements, and in this case $\frac{T-b}{2}$ contains negative elements as well).
Suppose that $E$ is not a relative ideal of $S$, that is, there exist $s \in S$ and $e \in E$ such that $s+e \notin E$; by definition of $E$, this means that $2(s+e)+b \notin T$, but $2(s+e)+b=2s+(2e+b) \in T+T \subseteq T$, since $s \in S$ and $e \in E$; a contradiction. This means that $E$ is a relative ideal of $S$. Let $e,e'$ be two elements of $E$; then $b+2e$ and $b+2e'$ are in $T$; therefore $2b+2e+2e' \in T$, which is equivalent to $b+e+e' \in S$. Hence $b+E+E \subseteq S$. Finally, by construction, it is clear that $T=S \! \Join^b \! E$. \end{proof} Note that in the previous proof $b$ was an arbitrary odd element of $S$, so there exist infinitely many ways to obtain the semigroup $T$ as a numerical duplication. The following corollary is straightforward. \begin{cor} \label{fraction} Let $S$ be a numerical semigroup. Then every semigroup $T$ such that $S=\frac{T}{2}$ is equal to $S \! \Join^b \! E$ for some relative ideal $E$ and an odd integer $b \in S$. \end{cor} Theorem 4.3 of \cite{DS} characterizes almost symmetric semigroups realized as numerical duplications with respect to a proper ideal $E$. We will see that in our case only one implication is true. Let $\widetilde E$ denote the relative ideal $E-e$, where $e :=f(E)-f(S)$; clearly $f(\widetilde E)=f(S)$. Moreover, we set $f:=f(S), M:=M(S)$ and $K:=K(S)$. First of all, we note that, by definition, the Frobenius number of $T=S \! \Join^b \! E$ is the maximum between $2f(S)$ and $2f(E)+b$. If $E$ is proper, then $f(E) \geq f(S)$ and so $f(T)=2f(E)+b$; in general it is possible that $2f(S)>2f(E)+b$: in fact, by Proposition \ref{duplication}, $f(T)$ can be even. However, if $T$ is an almost symmetric numerical semigroup with odd type, we have $f(T)=2f(E)+b$ by Proposition \ref{frobenius odd}. Thanks to this, the proof of one implication of \cite[Theorem 4.3]{DS} also works if $E$ is a relative, not necessarily proper, ideal of $S$ (provided, obviously, that $E+E+b \subseteq S$); hence we omit the proof of the following proposition.
\begin{prop} \label{first part} Let $T$ be an almost symmetric numerical semigroup with odd type. Then there exist a numerical semigroup $S$, a relative ideal $E$ of $S$ and an odd integer $b \in S$ such that $T=S \! \Join^b \! E$. Moreover, for every choice of such $S,E,b$ one has $f(T)=2f(E)+b$, $K-(M-M) \subseteq \widetilde E \subseteq K$, and $K-\widetilde E$ is a numerical semigroup. \end{prop} The converse of the previous result is not true. Consider the numerical semigroup $S=\{0,4,5,6,8 \rightarrow\}$ and the relative ideal $E=\{2,3,4,6 \rightarrow \}$. It is straightforward to check that $K-(M-M)=M=\widetilde E, K=S, E+E+5 \subseteq S, K- \widetilde E=M-M$ and $2f(E)+5 > 2f(S)$; hence $T= S \! \Join^5 \! E$ is a numerical semigroup, $K-(M-M) \subseteq \widetilde E \subseteq K$ and $K-\widetilde E$ is a numerical semigroup. However, $T=\{0,8,9,10,11,12,13,16 \rightarrow \}$ is not almost symmetric, because $1 \in {\rm L}(T) \setminus {\rm PF}(T)$. If we look at the proof of \cite[Theorem 4.3]{DS}, we see that it also works when $E$ is a relative ideal, except for case {\bf (iii)}. This observation leads to the idea of the next theorem. First of all, we recall and generalize some results of \cite{DS}. The standard canonical ideal of $S \! \Join^b \! E$ is the set of elements $$ z=f(S \! \Join^b \! E)-a \ \ \text{with} \ \ \begin{cases} \frac{a}{2} \notin S, & a \ \text{even}, \\ \frac{a-b}{2} \notin E, & a \ \text{odd}. \end{cases} $$ The next lemma is proved in \cite[Lemma 4.1 and Lemma 4.2]{DS}; in the original statement $E$ is a proper ideal, but the proof also works when $E$ is a relative ideal. \begin{lem}\label{2} Let $E$ be a relative ideal of $S$. Assume that $K-(M-M) \subseteq \widetilde E$. Then we have: \\ {\rm (1)} for any $x \notin E$, $f(E)-x \in M-M$; \\ {\rm (2)} if moreover $K-\widetilde E$ is a numerical semigroup, then, for any $x \notin E$, $f(E)-x \in E-E$.
\end{lem} Now we can construct every almost symmetric numerical semigroup with odd type. \begin{thm} \label{main odd} A numerical semigroup $T$ is almost symmetric with odd type if and only if there exist a relative ideal $E$ of $S:=\frac{T}{2}$ and an odd integer $b \in S$ such that: \\ {\rm (1)} $T= S \! \Join^b \! E$; \\ {\rm (2)} $f(T)=2f(E)+b$; \\ {\rm (3)} $K-(M-M) \subseteq \widetilde E \subseteq K$; \\ {\rm (4)} $K-\widetilde E$ is a numerical semigroup; \\ {\rm (5)} $b+e+E+K \subseteq M$. \end{thm} \begin{proof} As we said above, the proof is only a modification of the proof of \cite[Theorem 4.3]{DS}; however, we include it for the sake of completeness. Assume that the five conditions of the statement hold; we prove that $T$ is almost symmetric, i.e. $M(T)+K(T) \subseteq M(T)$. There are four cases: \\\\ {\bf (i)} $2s \in M(T)$ and $2f(E)+b-a \in K(T)$, where $s \in M$, $a$ is even and $\frac{a}{2} \notin S$; \\ {\bf (ii)} $2s \in M(T)$ and $2f(E)+b-a \in K(T)$, where $s \in M$, $a$ is odd and $\frac{a-b}{2} \notin E$; \\ {\bf (iii)} $2t+b \in M(T)$ and $2f(E)+b-a \in K(T)$, where $t \in E$, $a$ is even and $\frac{a}{2} \notin S$; \\ {\bf (iv)} $2t+b \in M(T)$ and $2f(E)+b-a \in K(T)$, where $t \in E$, $a$ is odd and $\frac{a-b}{2} \notin E$. \\\\ {\bf (i)} Since \ $2s+2f(E)+b-a$ \ is odd, it belongs to $M(T)$ if and only if \ $s+f(E)-\frac{a}{2} \in E$, \ i.e. $s+f-\frac{a}{2} \in \widetilde E$. Since $\frac{a}{2} \notin S$, i.e. $f-\frac{a}{2} \in K$, we obtain $s+f-\frac{a}{2}\in M+K \subseteq K-(M-M) \subseteq \widetilde E$. \noindent {\bf (ii)} Since $2s+2f(E)+b-a$ is even, it belongs to $M(T)$ if and only if $s+f(E)-\frac{a-b}{2} \in M$. Since $\frac{a-b}{2}\notin E$, we can apply Lemma \ref{2} to obtain $f(E)-\frac{a-b}{2} \in M-M$, which implies the thesis. \noindent {\bf (iii)} Since $2t+b+2f(E)+b-a$ is even, it belongs to $M(T)$ if and only if $t+b+f(E)-\frac{a}{2} \in M$, i.e. $t+b+e+f-\frac{a}{2} \in M$.
But this is true by condition {\rm (5)}, since $t \in E$ and $f-\frac{a}{2} \in K$. \noindent {\bf (iv)} Since $2t+b+2f(E)+b-a$ is odd, it belongs to $M(T)$ if and only if $t+f(E)-\frac{a-b}{2} \in E$. Since $\frac{a-b}{2} \notin E$, the thesis follows immediately from Lemma \ref{2}. This proves that $T$ is almost symmetric; moreover, by Proposition \ref{frobenius odd}, its type is odd. Conversely, we have already seen in Proposition \ref{first part} that the first four conditions hold. Moreover, it is easy to see that the last one is true: in fact, we can use the same argument as in case {\bf (iii)} above. \end{proof} \begin{rem} If the conditions of the previous theorem are satisfied, then $E=e+\widetilde E \subseteq e+K$; consequently, thanks to the fifth condition, we always have $E+E+b \subseteq S$. \end{rem} We also notice that no problems arise when $E$ is a canonical ideal: in fact, if we consider only semigroups with odd Frobenius number, the proof of \cite[Proposition 3.1]{DS} still works. Hence we have the following theorem: \begin{thm} The numerical semigroup $S \! \Join^b \! E$ is symmetric if and only if $2f(E)+b>2f(S)$ and $E$ is a canonical ideal of $S$. \end{thm} \begin{cor} Let $S$ be a numerical semigroup. Then the family of all symmetric numerical semigroups $T$ such that $S=\frac{T}{2}$ is $$ \mathcal D(S)=\{S \! \Join^b \! E \ | \ E+E+b \subseteq S \text{ and } E \text{ is a canonical ideal of } S \} $$ \end{cor} \begin{proof} By Proposition \ref{duplication}, all semigroups can be realized as numerical duplications with respect to a relative ideal $E$ such that $E+E+b \subseteq S$. Hence, recalling Corollary \ref{fraction}, we can use the previous theorem. Finally, note that if $E=K+x$, then $E+E+b \subseteq S$ implies $2x+b>0$, because $0 \in K$. Since $f(E)=f(K)+x=f+x$, we have $2f<2(f+x)+b=2f(E)+b$.
\end{proof} Notice that $\mathcal D(S)$ is constructed by Rosales and Garc\'ia-S\'anchez in \cite{RGS2} in a different way, but it is easy to see that the two constructions coincide. \section{One half of almost symmetric numerical semigroups with even type} In this section we study when $T$ is almost symmetric with even type or, equivalently, with even Frobenius number. \begin{lem} \label{1} Let $T$ be a numerical semigroup and ${\rm PF}(T)=\{f_1 < \dots < f_t \}$. Set $S:=\frac{T}{2}$. \\ {\rm (1)} If $f_i$ is even, then $\frac{f_i}{2} \in {\rm PF}(S)$. In particular, the type of $S$ is greater than or equal to the number of even pseudo-Frobenius numbers of $T$. \\ {\rm (2)} If $f_t$ is even, then $f(S)=\frac{f_t}{2}$. \end{lem} \begin{proof} {\rm (1)} If $f_i$ is even, then $\frac{f_i}{2} \in \mathbb{Z} \setminus S$, since $f_i \notin T$. Let $s$ be a positive element of $S$; then $2s \in T$ and $f_i+2s \in T$, since $f_i \in {\rm PF}(T)$; hence $\frac{f_i}{2}+s \in S$ and then $\frac{f_i}{2} \in {\rm PF}(S)$. \\ {\rm (2)} See \cite[Lemma 6.9]{RG}. \end{proof} In \cite{R} it is proved that one half of a pseudo-symmetric numerical semigroup is symmetric or pseudo-symmetric; we also know that these classes consist of the almost symmetric semigroups of type $1$ and $2$, respectively. The next theorem generalizes this result to any almost symmetric numerical semigroup with even type. \begin{thm} If $T$ is almost symmetric with even Frobenius number, then $S:=\frac{T}{2}$ is almost symmetric and its type is exactly the number of even pseudo-Frobenius numbers of $T$. \end{thm} \begin{proof} Let PF$(S)=\{f_1 < \dots < f_t\}$ and $i \in \{1, \dots, t\}$. Since $f_i \notin S$, $2f_i \notin T$ and then, thanks to Lemma \ref{almost symmetric} and to Lemma \ref{1}, $2(f_t-f_i)=2f_t-2f_i=f(T)-2f_i \in T \cup {\rm PF}(T)$. If $2(f_t-f_i) \in T$, then $s=f_t-f_i \in S$ and therefore $f_t=f_i+s$. If $s \neq 0$, then $f_t \in S$, since $f_i \in {\rm PF}(S)$, a contradiction; hence $s=0$, that is, $f_t=f_i$.
Consequently, if $i \in \{ 1, \dots, t-1\}$, then $2(f_t-f_i) \in {\rm PF}(T)$. In this way we obtain $t-1$ even pseudo-Frobenius numbers; therefore, since $2f_t=f(T)$ is not included in this list, there are at least $t$ even pseudo-Frobenius numbers in $T$, and, by the previous lemma, they are exactly $t$. Finally, using Theorem \ref{Nari}, it is straightforward to check that $S$ is almost symmetric. \end{proof} As in the previous section, starting with a numerical semigroup $S$, we can construct all almost symmetric numerical semigroups $T$ with even type such that $S=\frac{T}{2}$; to this aim we use the numerical duplication again. This is not possible within the setting of \cite{DS}, because the Frobenius number of the numerical duplication with respect to a proper ideal is always odd. As in the previous section, $S$ will be a numerical semigroup and we set $f:=f(S), M:=M(S)$ and $K:=K(S)$. Let us start with some lemmas. The first one was proved by J\"ager; for the proof see \cite[Hilfssatz 5]{J}. \begin{lem} \label{Jager} For any relative ideal $E$, $K-E=\{x \in \mathbb Z \ | \ f-x \notin E\}$. \end{lem} \begin{lem} Suppose that $S$ is almost symmetric, $E$ is a relative ideal and $b$ is an odd integer such that $E+E+b \subseteq S$. Assume that $2f \geq 2f(E)+b$. Then the following conditions are equivalent: \\ {\rm (1)} ${\rm PF}(S) \subseteq E-E$. \\ {\rm (2)} $M-M \subseteq E-E$. \\ {\rm (3)} $K \subseteq E-E$. \end{lem} \begin{proof} First we claim that $f \in E-E$. Suppose that there exists $e \in E$ such that $f+e \notin E$. We have $2e+b>0$, since $2e+b \in S$ is odd; moreover, $f+e \notin E$ gives $f+e \leq f(E)$. Therefore $2f < 2(f+e)+b \leq 2f(E)+b$, contradicting the assumption $2f \geq 2f(E)+b$. Now, by definition of almost symmetric semigroups, we have: $$ M-M=S \cup {\rm PF}(S)=S \cup {\rm L}(S)\cup \{f\}=K \cup \{f\}.
$$ Moreover, $S \subseteq E-E$ because $E$ is an ideal; then, since $f \in E-E$, we obtain: $$ {\rm PF}(S) \subseteq E-E \Longleftrightarrow S \cup {\rm PF}(S) \subseteq E-E \Longleftrightarrow $$ $$ \Longleftrightarrow M-M \subseteq E-E \Longleftrightarrow K \cup \{f\} \subseteq E-E \Longleftrightarrow K \subseteq E-E. $$ \end{proof} \begin{lem} \label{K-E} Let $S$ be almost symmetric and $E$ a relative ideal. If the equivalent conditions of the previous lemma hold, then $M-E = K-E$. \end{lem} \begin{proof} Since $M \subseteq K$, one has $M-E \subseteq K-E$. Suppose that the equality does not hold, i.e. there exists $x \in (K-E)\setminus (M-E)$; then there exists $e \in E$ such that $x+e \in K \setminus M$. Since $x+e \in K$, one has $f-x-e \notin S$; moreover $x+e \notin M$, so either $x+e=0$, giving $f-x-e=f$, or $x+e \notin S$, giving $f-x-e \in {\rm L}(S)$. In both cases $f-x-e \in {\rm L}(S) \cup \{f\} = {\rm PF}(S) \subseteq E-E$ by assumption. Hence $f-x=(f-x-e)+e \in E$ and, since $x \in K-E$, we obtain $f=(f-x)+x \in K$, which is a contradiction, since $f \notin K$. \end{proof} \begin{thm} \label{main even} Let $S$ be a numerical semigroup, let $b\in S$ be an odd integer, and let $E$ be a relative ideal of $S$ such that $E+E+b \subseteq S$ and $2f > 2f(E)+b$. Then the numerical duplication $T:=S \! \Join^b \! E$ is almost symmetric (with even type) if and only if the following properties hold: \\ {\bf (i)} $S$ is almost symmetric; \\ {\bf (ii)} $M-E \subseteq (E-M)+b$; \\ {\bf (iii)} $K \subseteq E-E$.
\end{thm} \begin{proof} $T$ is almost symmetric if and only if $M(T)+K(T) \subseteq M(T)$ and, recalling the characterization of $K(T)$ given before Lemma \ref{2}, this is equivalent to the following four conditions: \\\\ {\bf (i)} $2m+2f-a \in M(T)$ for any $m \in M$ and $a$ even such that $\frac{a}{2} \notin S$; \\ {\bf (ii)} $2m+2f-a \in M(T)$ for any $m \in M$ and $a$ odd such that $\frac{a-b}{2} \notin E$; \\ {\bf (iii)} $2e+b+2f-a \in M(T)$ for any $e \in E$ and $a$ even such that $\frac{a}{2} \notin S$; \\ {\bf (iv)} $2e+b+2f-a \in M(T)$ for any $e \in E$ and $a$ odd such that $\frac{a-b}{2} \notin E$. \\\\ Discussing each condition, we will see that {\bf (i), (ii), (iii)} are equivalent to the properties listed in the statement, while condition {\bf (iv)} always holds if we assume {\bf (i)} and {\bf (iii)}. The thesis follows immediately from these facts. \\\\ {\bf (i)} We have $2m+2f-a \in M(T)$ if and only if $m+f-\frac{a}{2} \in M$, that is, $f-\frac{a}{2} \in M-M=S \cup {\rm PF}(S)$, for any $\frac{a}{2} \notin S$. Thanks to Lemma \ref{almost symmetric}, this is equivalent to saying that $S$ is almost symmetric. \\ {\bf (ii)} In this case $2m+2f-a \in M(T)$ if and only if $\frac{2m+2f-a-b}{2} \in E$, that is, $m+f-\frac{a-b}{2}-b \in E$. This is equivalent to $f-x \in (E-M)+b$ for any $x \notin E$; then, applying Lemma \ref{Jager} and Lemma \ref{K-E}, we obtain $M-E \subseteq (E-M)+b$. \\ {\bf (iii)} The property $2e+b+2f-a \in M(T)$ is equivalent to $e+f-\frac{a}{2} \in E$, i.e. $f-\frac{a}{2} \in E-E$. Recalling the definition of $K$, this is equivalent to $K \subseteq E-E$. \\ {\bf (iv)} Finally, $2e+b+2f-a \in M(T)$ if and only if $e+f-\frac{a-b}{2} \in M$, i.e. $f-x \in M-E$ for any $x \notin E$. Using Lemma \ref{Jager}, this is equivalent to $K-E \subseteq M-E$ and, by Lemma \ref{K-E}, if we assume {\bf (i)} and {\bf (iii)}, this is always true.
\end{proof} Combining the previous theorem with Proposition \ref{duplication} and Proposition \ref{frobenius odd}, we obtain the following corollary. \begin{cor} \label{coro} Let $T$ be a numerical semigroup and $S=\frac{T}{2}$. Then $T$ is an almost symmetric semigroup with even type if and only if $S$ is almost symmetric and there exist an odd integer $b \in S$ and a relative ideal $E$ of $S$ such that \\\\ {\rm (1)} $T= S \! \Join^b \! E$; \\ {\rm (2)} $M(S)-E \subseteq (E-M(S))+b$; \\ {\rm (3)} $K(S) \subseteq E-E$; \\ {\rm (4)} $2f(S) > 2f(E)+b$. \end{cor} We denote by $m(E)$ the smallest integer of $E$. \begin{lem} \label{0} Let $S$ be a numerical semigroup, $b \in S$ odd and $E$ a relative ideal of $S$ such that $E+E+b \subseteq S$. Then there exist $b'$ and $E'$ such that $S \! \Join^b \! E=S \! \Join^{b'} \! E'$ and the smallest element of $E'$ is zero. \end{lem} \begin{proof} Set $E':=E-m(E)$ and $b':=b+2m(E)$. Clearly $m(E')=0$ and $E'+E'+b'=E-m(E)+E-m(E)+b+2m(E)=E+E+b \subseteq S$; moreover, we have $b' \in E+E+b \subseteq S$ and, if $e \in E$, then $2e+b=2(e-m(E))+b+2m(E) \in 2\cdot E'+b'$ and vice versa; hence the thesis follows. \end{proof} \begin{rem} Let $S$ be an almost symmetric numerical semigroup; we want to know which almost symmetric semigroups $T$ with even type satisfy $S=\frac{T}{2}$. Assume that the smallest element of $E$ is zero. According to Corollary \ref{coro}, one has $2f(S) > 2f(E)+b$; then $b<2f(S)-2f(E)\leq 2f(S)+2$. Hence we have a finite number of possibilities for $b$; moreover, if we fix $b$, we have $-1 \leq f(E)<f(S)-\frac{b}{2}$ and then there are finitely many choices for $E$. This is not surprising, since $f(T)=2f(S)$ and there are only finitely many semigroups with a fixed Frobenius number; however, this remark is useful for the next example. \end{rem} \begin{ex} Consider the pseudo-symmetric semigroup $S=\{0,3,5 \rightarrow \}$; we want to construct all almost symmetric semigroups $T$ with even type such that $S=\frac{T}{2}$.
As in the previous remark, we have $b<10$ and, for a fixed $b$, $-1 \leq f(E)<4-\frac{b}{2}$. In view of Lemma \ref{0}, we only look for ideals whose smallest element is zero; in particular, $f(E) \neq 0$. We have four possibilities for $b$: $$ \begin{array}{ll} b=3 \ \ \ \ \ \Longrightarrow \ \ \ \ \ f(E)=-1,1,2. \\ b=5 \ \ \ \ \ \Longrightarrow \ \ \ \ \ f(E)=-1,1. \\ b=7 \ \ \ \ \ \Longrightarrow \ \ \ \ \ f(E)=-1. \\ b=9 \ \ \ \ \ \Longrightarrow \ \ \ \ \ f(E)=-1. \end{array} $$ The only ideals with Frobenius number $-1$ and $1$ are, respectively, $E_1=\mathbb{N}$ and $E_2=\{0,2 \rightarrow \}$, while there are two ideals with Frobenius number 2, namely $E_3=\{0,3 \rightarrow \}$ and $E_4=\{0,1,3 \rightarrow\}$. Note that, if $b=3$, $E_1$ and $E_4$ are not acceptable, because, in this case, $E+E+b \nsubseteq S$. It is straightforward to check that $E_i-E_i=E_i$ for $i=1,2,3$, and then $K=\{0,2,3,5 \rightarrow \}$ is contained in $E_i-E_i$ for $i=1,2$ but not for $i=3$. Finally, we have $$ \begin{array}{ll} M-E_1=\{5 \rightarrow\},\ \ \ \ \ &E_1-M=\{-3 \rightarrow\}, \\ M-E_2=\{3,5 \rightarrow \}, &E_2-M=\{-3,-1 \rightarrow \} \end{array} $$ and then we obtain: $$ \begin{array}{ll} b=3\ \ \ \ \ \ \ \ \ \ &M-E_2 \subseteq (E_2-M)+b, \\ b=5 &M-E_1 \subseteq (E_1-M)+b, \\ &M-E_2 \nsubseteq (E_2-M)+b, \\ b=7 &M-E_1 \subseteq (E_1-M)+b, \\ b=9 &M-E_1 \nsubseteq (E_1-M)+b. \end{array} $$ Hence we have three possibilities, and they give the numerical semigroups $$ \begin{array}{ll} S \! \Join^3 \! E_2=\{0,3,6,7,9 \rightarrow\}, \\ S \! \Join^5 \! E_1=\{0,5,6,7,9 \rightarrow\}, \\ S \! \Join^7 \! E_1=\{0,6,7,9 \rightarrow\}. \end{array} $$ Note that the first two are pseudo-symmetric, while the last one is almost symmetric of type four. \end{ex} \begin{lem} For any almost symmetric numerical semigroup $S$ different from $\mathbb{N}$, there exist at least one relative ideal $E$ and one odd integer $b \in S$ such that $S \! \Join^b \! E$ is almost symmetric with even type.
\end{lem} \begin{proof} We set $E:=\mathbb{N}$ and $b:=f+1$ if $f+1$ is odd, $b:=f+2$ otherwise. First of all, note that if $e,e' \in E$, one has $e+e'+b >f$; hence $E+E+b \subseteq S$. Moreover, $2f(E)+b=-2+b \leq -2 +f+2 < 2f$. Then, by Theorem \ref{main even}, we have to prove that $K \subseteq E-E$ and $M-E \subseteq (E-M)+b=(E+b)-M$. It is straightforward to check that $E-E=\mathbb{N}$ and $M-E=\{f+1 \rightarrow \}$; then clearly $K \subseteq E-E$, and if $m \in M$ and $x \in M-E$, one has $m+x \geq f+2$. Hence $m+x \in \{f+2 \rightarrow \} \subseteq \{b \rightarrow \}=E+b$ and consequently $M-E \subseteq (E+b)-M$ as required. \end{proof} We have already seen that one half of an almost symmetric numerical semigroup with even type is almost symmetric, and the previous lemma proves the converse: every almost symmetric numerical semigroup different from $\mathbb{N}$ is one half of some almost symmetric semigroup with even type. Hence we can state the last corollary: \begin{cor} \label{final} A numerical semigroup different from $\mathbb{N}$ is almost symmetric if and only if it is one half of an almost symmetric numerical semigroup with even type. \end{cor} Notice that, if $\mathbb{N}$ is one half of a semigroup $T$, then $T$ contains all even positive integers. Hence $f(T)$ is odd and it is easy to see that $T$ is symmetric. \medskip \noindent \textbf{Acknowledgments.} The author would like to thank Marco D'Anna for his help and support during the drafting of the paper and Pedro Garc\'ia-S\'anchez for his useful suggestions.
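The even-type duplications computed in the example above can be checked directly from the definition $S \! \Join^b \! E = 2\cdot S \cup (2\cdot E+b)$. The following Python sketch is our own (the helper names and the truncation bound $40$ are arbitrary conventions, not part of the paper):

```python
def frobenius(S):
    """f(S) = max(Z \\ S), for a semigroup truncated past its Frobenius number."""
    return max(n for n in range(max(S)) if n not in S)

def pseudo_frobenius(S):
    """PF(S) = { x not in S : x + s in S for every s in S \\ {0} }."""
    f = frobenius(S)
    return [x for x in range(1, f + 1) if x not in S
            and all(x + s > f or x + s in S for s in S if s > 0)]

def duplication(S, E, b, bound):
    """Numerical duplication of S with respect to E: 2.S u (2.E + b)."""
    return {t for t in ({2 * s for s in S} | {2 * e + b for e in E}) if t <= bound}

# S = {0,3,5 ->}, pseudo-symmetric of type 2, truncated at 40
S = {0, 3} | set(range(5, 41))
E1 = set(range(0, 21))            # E_1 = N
E2 = {0} | set(range(2, 21))      # E_2 = {0, 2 ->}

T1 = duplication(S, E2, 3, 40)    # = {0,3,6,7,9 ->}
T2 = duplication(S, E1, 7, 40)    # = {0,6,7,9 ->}

assert pseudo_frobenius(T1) == [4, 8]         # pseudo-symmetric (type 2)
assert pseudo_frobenius(T2) == [3, 4, 5, 8]   # almost symmetric of type 4

# t(S) equals the number of even pseudo-Frobenius numbers of T2
assert len(pseudo_frobenius(S)) == sum(1 for p in pseudo_frobenius(T2) if p % 2 == 0)
```

Both duplications have even Frobenius number $8 = 2f(S)$, as predicted by the corollary, and the type count matches the theorem on one halves of even type.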
{ "timestamp": "2014-04-22T02:05:02", "yymm": "1404", "arxiv_id": "1404.4959", "language": "en", "url": "https://arxiv.org/abs/1404.4959", "abstract": "Let $S,T$ be two numerical semigroups. We study when $S$ is one half of $T$, with $T$ almost symmetric. If we assume that the type of $T$, $t(T)$, is odd, then for any $S$ there exist infinitely many such $T$ and we prove that $1 \\leq t(T) \\leq 2t(S)+1$. On the other hand, if $t(T)$ is even, there exists such $T$ if and only if $S$ is almost symmetric and different from $\\mathbb{N}$; in this case the type of $S$ is the number of even pseudo-Frobenius numbers of $T$. Moreover, we construct these families of semigroups using the numerical duplication with respect to a relative ideal.", "subjects": "Commutative Algebra (math.AC); Group Theory (math.GR); Number Theory (math.NT)", "title": "One half of almost symmetric numerical semigroups" }
https://arxiv.org/abs/1611.00932
Natural Partial Order on Rings with Involution
In this paper, we introduce a partial order on rings with involution, which is a generalization of the partial order on the set of projections in a Rickart *-ring. We prove that a *-ring with the natural partial order forms a sectionally semi-complemented poset. It is proved that every interval [0,x] forms an orthomodular lattice in the case of abelian Rickart *-rings. The concepts of generalized comparability (GC) and partial comparability (PC) are extended to involve all the elements of a *-ring. Further, it is proved that these concepts are equivalent in finite abelian Rickart *-rings.
\section{introduction} An \textit{involution} $*$ on an associative ring $R$ is a mapping such that $(a+b)^*=a^*+b^*$, $(ab)^*=b^*a^*$ and $(a^*)^*=a$, for all $ a,b\in R$. A ring with involution $*$ is called a {\it $*$-ring}. Clearly, the identity mapping is an involution if and only if the ring is commutative. An element $e$ of a $*$-ring $R$ is a \textit{projection} if $e=e^2$ and $e=e^*$. For a nonempty subset $B$ of $R$, we write $r(B)=\{x\in R\colon bx=0,\forall b\in B\}$, and call it the \textit{right annihilator} of $B$ in $R$. A \textit{Rickart} $*$-\textit{ring} $R$ is a $*$-ring in which the right annihilator of every element is generated, as a right ideal, by a projection in $R$. Every Rickart $*$-ring contains a unity. For each element $a$ in a Rickart $*$-ring, there is a unique projection $e$ such that $ae=a$ and $ax=0$ if and only if $ex=0$; it is called the {\it right projection} of $a$ and denoted by $RP(a)$. In fact, $r(\{a\})=(1-RP(a))R$. Similarly, the left annihilator $l(\{a\})$ and the left projection $LP(a)$ are defined for each element $a$ in a Rickart $*$-ring $R$. The set of projections $P(R)$ in a Rickart $*$-ring $R$ forms a lattice, denoted by $L(P(R))$, under the partial order `$e\leq f$ if and only if $e=fe=ef$'. In fact, $e\vee f=f+RP(e(1-f))$ and $e\wedge f=e-LP(e(1-f))$. This lattice has been extensively studied by I. Kaplansky \cite{kp}, S. K. Berberian \cite{skb}, S. Maeda \cite{ms2, ms3} and others. Drazin \cite{d} was the first to introduce a ``$*$-order'' involving all elements, where the $*$-order is given by: $a\underset{*}{\leqslant} b$ if and only if $a^*a=a^*b$ and $aa^*=ba^*$; this is a partial order on any semigroup with proper involution ({\it i.e.}, one in which $aa^*=0$ implies $a=0$). In particular, with the obvious choices of involution, this covers all $*$-rings with proper involution: all commutative rings with no nonzero nilpotent elements, all Boolean rings, the ring $B(H)$ of all bounded linear operators on any complex Hilbert space $H$, and all Rickart $*$-rings.
Janowitz \cite{j} studied the $*$-order on Rickart $*$-rings. Thakare and Nimbhorkar \cite{nsk} used the $*$-order on Rickart $*$-rings and generalized the comparability axioms to involve all elements of a $*$-ring. Mitsch \cite{mt} defined a partial order on a semigroup. We modify that order to obtain a partial order on a $*$-ring. In this paper, we introduce a partial order on a $*$-ring which is a generalization of the partial order on the set of projections in a Rickart $*$-ring. For a $*$-ring $R$, it is proved that the poset $(R, \leq)$ is a sectionally semi-complemented (SSC) poset. For an abelian Rickart $*$-ring, we prove that every interval $[0,x]$ is an orthomodular poset, in fact, an orthomodular lattice. In the last section, comparability axioms are introduced to involve all elements of the $*$-ring. \section {natural partial order and its properties} We introduce an order on a $*$-ring with unity. \begin{defn}\label{d1}Let $R$ be a $*$-ring with unity. Define a relation `$\leq$' on $R$ by `$a\leq b$ if and only if $a=xa=xb=ax^*=bx^*$, for some $x\in R$'. \end{defn} \begin{prop} Let $R$ be a $*$-ring with unity. Then the relation `$\leq$' given in Definition \ref{d1} is a partial order on $R$. \end{prop} \begin{proof}Reflexivity: for $x=1$, we have $a=xa=ax^*$; hence $a\leq a$ through $1$, $\forall a\in R$.\\ Antisymmetry: Let $a\leq b$ and $b\leq a$. Then there exist $x,y \in R$ such that $a=xa=xb=ax^*=bx^*$ and $b=yb=ya=by^*=ay^*$; hence $b=ya=y(bx^*)=(yb)x^*=bx^*=a$.\\ Transitivity: Let $a\leq b$ and $b\leq c$. Hence there exist $x,y \in R$ such that $a=xa=xb=ax^*=bx^*$ and $b=yb=yc=by^*=cy^*$. Then $(xy)a=(xy)(bx^*)=x(yb)x^*=xbx^*=ax^*=a$, $(xy)c=x(yc)=xb=a$, $a(xy)^*=(xb)(y^*x^*)=x(by^*)x^*=xbx^*=ax^*=a$ and $c(xy)^*=c(y^*x^*)=(cy^*)x^*=bx^*=a$. Hence $a\leq c$. \end{proof} {\bf Henceforth $\mathbf{R}$ denotes a $*$-ring with unity and we say that $a\leq b$ \textit{through} $x$ whenever $a=xa=xb=ax^*=bx^*$}.
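Since the defining condition only quantifies over an auxiliary element $x$, the natural partial order can be decided by brute force in a finite $*$-ring. The following Python sketch is our own (not part of the paper); it works in $M_2(\mathbb{Z}_3)$ with transpose as the involution, using the matrices of the remark that follows.

```python
from itertools import product

P = 3  # arithmetic in M_2(Z_3), with transpose as the involution *

def mul(A, B):
    """Product of 2x2 matrices over Z_P."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % P
                       for j in range(2)) for i in range(2))

def star(A):
    """The involution: transpose."""
    return tuple(tuple(A[j][i] for j in range(2)) for i in range(2))

def leq(a, b):
    """Natural order: a <= b iff a = xa = xb = ax* = bx* for some x."""
    for e in product(range(P), repeat=4):
        x = (e[0:2], e[2:4])
        xs = star(x)
        if mul(x, a) == a == mul(x, b) and mul(a, xs) == a == mul(b, xs):
            return True
    return False

def star_leq(a, b):
    """Drazin's *-order: a*a = a*b and aa* = ba*."""
    return (mul(star(a), a) == mul(star(a), b)
            and mul(a, star(a)) == mul(b, star(a)))

a = ((1, 2), (1, 2)); b = ((2, 0), (0, 1))
c = ((1, 1), (1, 1)); d = ((0, 1), (1, 1))

assert star_leq(a, b) and not leq(a, b)   # a is *-below b, but not naturally below
assert leq(c, d) and not star_leq(c, d)   # c is naturally below d, but not *-below
```

The two assertions exhibit pairs ordered by exactly one of the two relations, confirming that the natural partial order and the $*$-order are incomparable.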
\begin{note} If we restrict this partial order to the set of projections in a Rickart $*$-ring, then it coincides with the usual partial order for projections given in Berberian \cite{skb}. \end{note} \begin{rem}This order is different from the $*$-order. \vspace{.1cm} \noindent For, let $a={\displaystyle\left[\begin{array}{cc} 1 & 2 \\ 1 & 2 \\ \end{array}\right], b=\left[\begin{array}{cc} 2 & 0 \\ 0 & 1 \\ \end{array}\right]}\in R=M_2(\mathbb{Z}_3)$ with transpose as an involution.\\ \vspace{.1cm} \noindent Then ${\displaystyle a^*a=\left[\begin{array}{cc} 2 & 1 \\ 1 & 2 \\ \end{array}\right]}=a^*b$, $aa^*={\displaystyle\left[\begin{array}{cc} 2 & 2 \\ 2 & 2 \\ \end{array}\right]=ba^*}$, hence $a\underset{*}{\leqslant} b$.\\ \vspace{.1cm} \noindent Next, let $x={\displaystyle\left[\begin{array}{cc} x_1 & x_2 \\ x_3 & x_4 \\ \end{array}\right]}$ be such that $a=xa=xb=ax^*=bx^*$. Then $a=xa$ gives \vspace{.1cm} ${\displaystyle \left[\begin{array}{cc} 1 & 2 \\ 1 & 2 \\ \end{array}\right]}={\displaystyle\left[\begin{array}{cc} x_1 & x_2 \\ x_3 & x_4 \\ \end{array}\right]}{\displaystyle\left[\begin{array}{cc} 1 & 2 \\ 1 & 2 \\ \end{array}\right]}={\displaystyle\left[\begin{array}{cc} x_1+x_2 & 2(x_1+x_2) \\ x_3+x_4 & 2(x_3+x_4) \\ \end{array}\right]}$ and $a=ax^*$ gives \\ \vspace{.1cm} ${\displaystyle \left[\begin{array}{cc} 1 & 2 \\ 1 & 2 \\ \end{array}\right]=\left[\begin{array}{cc} 1 & 2 \\ 1 & 2 \\ \end{array}\right]\left[\begin{array}{cc} x_1 & x_3 \\ x_2 & x_4 \\ \end{array}\right]=\left[\begin{array}{cc} x_1+2x_2 & x_3+2x_4 \\ x_1+2x_2 & x_3+2x_4 \\ \end{array}\right]}$.\\ On comparing, we get $x_1+x_2=1=x_1+2x_2$, which gives $x_2=0, x_1=1$. Similarly $x_3+x_4=1, x_3+2x_4=2$, giving $x_3=0, x_4=1$, {\it i.e.}, \vspace{.1cm} $x=\left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array}\right]$. But $xb=b\neq a$. Hence $a\nleq b$.
On the other hand, if $c=\left[\begin{array}{cc} 1 & 1 \\ 1 & 1 \\ \end{array}\right], d=\left[\begin{array}{cc} 0 & 1 \\ 1 & 1 \\ \end{array}\right]$ and $y=\left[\begin{array}{cc} 0 & 1 \\ 0 & 1 \\ \end{array}\right]$, then $c \leq d$ through $y$. However, $c^*c\neq c^*d$; hence $c\underset{*}{\nleqslant} d$. Thus these two partial orders (the natural partial order and the $*$-order) are distinct. \end{rem} \begin{prop} Let $R$ be a commutative $*$-ring. Then $a\leq b$ implies $a\underset{*}{\leqslant} b$. \end{prop} \begin{proof} Let $a\leq b$. Then there exists $x\in R$ such that $a=xa=xb=ax^*=bx^*$. This yields $a^*b=(xa)^*b=a^*x^*b=a^*(bx^*)=a^*a$ and, since $R$ is commutative, we get $aa^*=ba^*$. Hence $a\underset{*}{\leqslant} b$. \end{proof} Note that the converse of the above statement is not true in general: the $*$-relation is not even a partial order on $\mathbb{Z}_{12}$ with the identity mapping as an involution (as $6^*6=6^*0=0^*0$, antisymmetry fails). In the next result, we provide properties of the natural partial order. \begin{thm}\label{t1}Let $R$ be a $*$-ring with unity. Then the following statements hold. \begin{enumerate} \item 0 is the least element of the poset $R$. \item $a \leq e, a\in R, e\in P(R)$ (the set of projections in $R$) implies $a\in P(R)$. \item $a\leq b$ if and only if $a^*\leq b^*$. \item If $a\leq b$, then $r(b)\subseteq r(a)$ and $l(b)\subseteq l(a)$. \item $a \leq b,$ $b$ regular ({\it i.e.}, $bb'b=b$, for some $b'\in R$) implies $a$ is regular. \item $a\leq b$ and $a$ has a right (resp. left) inverse imply $a=b$, {\it i.e.}, every element having a right (resp. left) inverse is maximal. \item If $a\leq b$, then $ac\leq bc$ for all $c\in R$ if and only if $ca\leq cb$ for all $c\in R$. \end{enumerate} \end{thm} \begin{proof}$(1)$ Obvious.\\ $(2)$ Suppose $a\leq e, e\in P(R)$. Let $a\leq e$ through $x$, for some $x\in R$, {\it i.e.}, $a=xa=xe=ax^*=ex^*$. This yields $a^2=(xe)(ex^*)=xex^*=ax^*=a$. Also, $a^*=(xe)^*=ex^*=a$, hence $a\in P(R)$.\\ $(3)$ Let $a\leq b$.
Then $a=xa=xb=ax^*=bx^*$, for some $x\in R$. Hence $a^*=xa^*=a^*x^*=b^*x^*=xb^*$, which gives $a^*\leq b^*$. The converse follows from the fact that $(a^*)^*=a$.\\ $(4)$ Obvious.\\ $(5)$ Suppose $a\leq b$ and $b$ is regular, {\it i.e.}, $bb'b=b$ for some $b'\in R$. Let $a=xa=xb=ax^*=bx^*$, for some $x\in R$. Then $a=ax^*=xbx^*=xbb'bx^*=(xb)b'(bx^*)=ab'a$. Hence $a$ is regular.\\ $(6)$ Let $c\in R$ be such that $ac=1$ (resp. $ca=1$) and $a\leq b$. Let $a=xa=xb=ax^*=bx^*$, for some $x\in R$. Then $a=xa$ (resp. $a=ax^*$) gives $ac=xac$ (resp. $ca=cax^*$). Thus $1=x$ (resp. $1=x^*$). Hence $a=b$, {\it i.e.}, $a$ is a maximal element.\\ $(7)$ Suppose $a\leq b$ and $ac\leq bc$ for all $c\in R$. Since $a\leq b$, by $(3)$ above, we have $a^*\leq b^*$, giving $a^*c^*\leq b^*c^*$, which further yields $(a^*c^*)^*\leq (b^*c^*)^*$, {\it i.e.}, $ca\leq cb$; and conversely. \end{proof} In a poset $P$, the {\it principal ideal} generated by $a\in P$ is given by $(a]=\{x\in P\colon x\leq a\}$. \begin{prop}If $a$ and $b$ are central elements of a $*$-ring $R$ which generate the same ideal of $R$, then there is an order isomorphism between the set of elements $\leq a$ and the set of elements $\leq b$. \end{prop} \begin{proof}Let $a$ and $b$ be central elements with $Ra=Rb$. Then $a=bs$ and $b=at$, for some $s,t\in R$. Denote $(a]=\{x\in R\colon x\leq a\}$. Define $\phi : (a] \rightarrow (b]$ by $\phi(x)=xt$. We claim that $xt\leq b$ for all $x\in (a]$. As $x\leq a$, we have $x=x_1x=x_1a=ax_1^*=xx_1^*$, for some $x_1\in R$. Then $x_1xt=xt$, $xtx_1^*=x_1atx_1^*=x_1bx_1^*=x_1x_1^*b=x_1x_1^*at=x_1ax_1^*t=x_1xt=xt$, $x_1b=x_1at=ax_1t=xt$ and $bx_1^*=x_1^*b=x_1^*at=ax_1^*t=xt$. Hence $xt\leq b$. Now, let $x,y\in (a]$ be such that $x=x_1x=x_1a=ax_1^*=xx_1^*$ and $y=y_1y=y_1a=ay_1^*=yy_1^*$, for some $x_1,y_1\in R$. Then $\phi(x)=\phi(y)$ if and only if $xt=yt$ if and only if $x_1at=y_1at$ if and only if $x_1b=y_1b$ if and only if $x_1a=y_1a$ if and only if $x=y$.
Hence $\phi$ is well defined and one-one. Let $z\in (b]$. Then, as above, $zs\in (a]$ and $z=z_1b=z_1z=bz_1^*=zz_1^*$, for some $z_1\in R$. Also $\phi(zs)=zst=z_1bst=z_1at=z_1b=z$, {\it i.e.}, $\phi$ is a bijection. Now, suppose that $x,y\in(a]$ with $x\leq y$. Then $x=x_1x=x_1a=ax_1^*=xx_1^*$, $y=y_1y=y_1a=ay_1^*=yy_1^*$ and $x=x_2x=x_2y=yx_2^*=xx_2^*$, for some $x_1,x_2,y_1\in R$. Next, $(x_1x_2)xt=x_1xt=xt$, $(x_1x_2)yt=x_1xt=xt$, $xt(x_1x_2)^*=xtx_2^*x_1^*=x_1atx_2^*x_1^*=x_1bx_2^*x_1^*=x_1x_2^*x_1^*b=x_1x_2^*x_1^*at=ax_1x_2^*x_1^*t=xx_2^*x_1^*t=xx_1^*t=xt$ and $yt(x_1x_2)^*=ytx_2^*x_1^*=y_1atx_2^*x_1^*=y_1bx_2^*x_1^*=y_1x_2^*x_1^*b=y_1x_2^*x_1^*at=y_1ax_2^*x_1^*t=yx_2^*x_1^*t=xx_1^*t=xt$. Consequently $\phi(x)\leq \phi(y)$. Hence $\phi$ is an order isomorphism. In fact, $\psi:(b]\rightarrow (a]$ defined by $\psi(y)=ys$ works as an inverse of $\phi$. \end{proof} \begin{theorem}[Condition of Compatibility] If $xa=ax^*$ for all $a,x\in R$, then the natural partial order is compatible with multiplication, {\it i.e.}, $a\leq b$ implies $ca\leq cb$, for all $c\in R$. \end{theorem} \begin{proof}In view of Theorem \ref{t1} $(7)$, it is enough to show that $a\leq b$ implies $ac\leq bc$ for all $c\in R$. Let $a\leq b$; then there exists $x\in R$ such that $a=xa=xb=ax^*=bx^*$. Hence $ac=xac=xbc$, $acx^*=ca^*x^*=c(xa)^*=ca^*=ac$ and $bcx^*=cb^*x^*=c(xb)^*=ca^*=ac$, hence $ac\leq bc$. \end{proof} \begin{defn}\label{d2} Two elements $a$ and $b$ in a $*$-ring $R$ are {\it orthogonal}, denoted by $a\perp b$, if there exists $x\in R$ such that $xa=a=ax^*$ and $xb=0=bx^*$. \end{defn} The orthogonality relation in a $*$-ring has the following properties. \begin{thm}\label{t2} For elements $a,b,c$ in a $*$-ring $R$, the following statements hold. \begin{enumerate} \item $a\perp a$ implies $a=0$. \item $a\perp b$ if and only if $b\perp a$ if and only if $a\perp (-b)$. \item $a\perp b$, $c\leq a$ imply $c\perp b$.
\item $a\perp b$ if and only if $a\leq a-b$. \item $a\leq b$ implies $b-a\leq b$ and $b-a\perp a$. \item If $a\perp b$, then $a\wedge b=0$ and $a+b$ is an upper bound of both $a,b$. \item $a\perp b$, $(a+b)\perp c$ imply $a\perp (b+c)$. \end{enumerate} \end{thm} \begin{proof}$(1), (2)$ Obvious.\\ $(3)$ Suppose that $a\perp b$ and $c\leq a$. Let $x, y\in R$ be such that $a=xa=ax^*$, $xb=0=bx^*$ and $c=yc=cy^*=ya=ay^*$. Then $(yx)c=yxay^*=yay^*=cy^*=c$. Similarly, $c(yx)^*=c$. On the other hand, $(yx)b=0$ and $b(yx)^*=0$. Consequently, $c\perp b$.\\ $(4)$ Suppose $a$ and $b$ are orthogonal. Let $x\in R$ be such that $a=xa=ax^*$ and $xb=0=bx^*$. Then $a=x(a-b)=(a-b)x^*=xa=ax^*$, hence $a\leq a-b$. Conversely, suppose that $a\leq a-b$. Let $x\in R$ be such that $a=x(a-b)=(a-b)x^*=xa=ax^*$. Then $a=x(a-b)$ and $a=xa$ give $xb=0$. Similarly, $bx^*=0$. Hence $a\perp b$.\\ $(5)$ Let $x\in R$ be such that $a=xa=xb=ax^*=bx^*$. Then $(1-x)(b-a)=b-a-xb+xa=b-a-a+a=b-a$, $(1-x)b=b-xb=b-a$, $b(1-x)^*=b-bx^*=b-a$ and $(b-a)(1-x)^*=b-a-bx^*+ax^*=b-a-a+a=b-a$. Hence $b-a\leq b$. Also $(1-x)(b-a)=b-a=(b-a)(1-x)^*$ and $(1-x)a=0=a(1-x)^*$. Hence $b-a \perp a$.\\ $(6)$ Suppose $a\perp b$ and let $x\in R$ be such that $xa=a=ax^*$ and $xb=0=bx^*$. Let $c\leq a$ and $c\leq b$, {\it i.e.}, $c=x_1c=x_1a=cx_1^*=ax_1^*$ and $c=x_2c=x_2b=cx_2^*=bx_2^*$, for some $x_1, x_2 \in R$. Then $x_1a=c=x_2b$ gives $c=x_1a=x_2b=x_1ax^*=x_2bx^*=0$. Hence $a\wedge b=0$. From $(2)$ and $(4)$, we have $a\leq a+b$ and $b=(a+b)-a\leq a+b$.\\ $(7)$ Suppose that $a\perp b$ and $(a+b)\perp c$. From $(6)$, we have $a\leq a+b$ and $a+b\leq a+b+c$. This gives $a\leq a+b+c$. Then from $(5)$, we get $b+c=(a+b+c)-a\leq a+b+c$ and $(b+c)\perp a$, as required. \end{proof} A poset $P$ with $0$ is called {\it sectionally semi-complemented} (in brief SSC) if, for $a,b\in P$ with $a< b$, there exists an element $c\in P$ such that $0<c< b$ and $\{a,c\}^l=\{ 0\}$, where $\{a,c\}^l=\{x\in P\colon x\leq a\textnormal{ and } x\leq c\}$.
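On a finite commutative example, the orthogonality properties just proved can be checked exhaustively. Below is a quick sketch in Python for $\mathbb{Z}/6\mathbb{Z}$ with the identity map as involution (so $x^*=x$); the helper names \texttt{leq} and \texttt{perp} are ours, not notation from the paper.

```python
# Exhaustive check of Theorem t2 (4) and (5) in the commutative *-ring
# Z/6Z with the identity involution, so x* = x and the defining
# conditions a = xa = xb = ax* = bx* reduce to a = xa and a = xb.
N = 6

def leq(a, b):
    # natural partial order: a <= b
    return any((x * a) % N == a and (x * b) % N == a for x in range(N))

def perp(a, b):
    # orthogonality: some x with xa = a = ax* and xb = 0 = bx*
    return any((x * a) % N == a and (x * b) % N == 0 for x in range(N))

for a in range(N):
    for b in range(N):
        # (4): a is orthogonal to b iff a <= a - b
        assert perp(a, b) == leq(a, (a - b) % N)
        # (5): a <= b implies b - a <= b and (b - a) orthogonal to a
        if leq(a, b):
            assert leq((b - a) % N, b) and perp((b - a) % N, a)
print("Theorem t2 (4) and (5) hold in Z/6Z")
```

The same loops, run over other small commutative $*$-rings, give quick sanity checks of the remaining parts of Theorem \ref{t2}.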
Thus, from $(5)$ and $(6)$ of Theorem \ref{t2}, we have the following result. \begin{thm} Let $R$ be a $*$-ring. Then the poset $(R, \leq)$ is an SSC poset. \end{thm} A ring is called an {\it abelian ring} if all of its idempotents are central. \begin{prop}\label{p2}In an abelian Rickart $*$-ring $R$, the following statements are equivalent. \begin{enumerate} \item[i)] $a\leq b$. \item[ii)] There exists a projection $e$ such that $a=ae=be$. \item[iii)] $ab=a^2=ba$. \end{enumerate} \end{prop} \begin{proof}i)$\implies$ii) Suppose $a\leq b$. Then there exists $x\in R$ such that $a=xa=xb=ax^*=bx^*$. Since $a=xa$, we have $(1-x)a=0$. This gives $a\in r(\{1-x\})=eR$, for some projection $e\in R$. Then $ea=a=ae$ and $(1-x)e=0$, {\it i.e.}, $e=xe=ex^*$. Also, $a=xb$ implies $ea=exb=xeb=eb$. Thus $a=ae=be$.\\ ii)$\implies$iii) Obvious.\\ iii)$\implies$i) Let $ab=a^2$, {\it i.e.}, $a(b-a)=0$. Then there exists a projection $e$ such that $a=ea=ae$ and $e(b-a)=0$, {\it i.e.}, $eb=ea=a$, hence $a\leq b$. \end{proof} \begin{lem}\label{l6} If $R$ is an abelian Rickart $*$-ring, then $a\perp b$ implies $a\wedge b=0$ and $a\vee b=a+b$. \end{lem} \begin{proof}Suppose $a\perp b$. By Theorem \ref{t2} $(6)$, $a\wedge b=0$ and $a+b$ is an upper bound of $a$ and $b$. Let $a\leq c$ and $b\leq c$; then there exist projections $e,f$ such that $a=ea=ec$ and $b=fb=fc$. Since $a\perp b$, there exists $x\in R$ such that $xa=a=ax^*$ and $xb=0=bx^*$. Let $y=ex+f(1-x)$. Then $y(a+b)=exa+exb+f(1-x)a+f(1-x)b=a+b$, $(a+b)y^*=a+b$, $yc=exc+f(1-x)c=a+b$ and $cy^*=a+b$, {\it i.e.}, $a+b\leq c$. Thus $a\vee b=a+b.$ \end{proof} Before proceeding further, we need the definition of an orthomodular poset. An {\it orthomodular poset} is a partially ordered set $P$ with $0$ and $1$ equipped with a mapping $x\rightarrow x^{\perp}$ (called the {\it orthocomplementation}) satisfying the following conditions.
\begin{enumerate} \item[i)] $a\leq b\Rightarrow b^{\perp}\leq a^{\perp}$, \item[ii)] $(a^{\perp})^{\perp}=a$ for all $a\in P$, \item[iii)] $a\vee a^{\perp}=1$ and $a\wedge a^{\perp}=0$, for all $a\in P$, \item[iv)] $a\leq b^{\perp}$ implies that $a\vee b$ exists in $P$, \item[v)] $a\leq b \Rightarrow b=a\vee (a\vee b^{\perp})^{\perp}$. \end{enumerate} The following result is essentially due to Marovt et al. {\cite[Theorem 1]{jm}}. \begin{thm}\label{t3}Let $R$ be a Rickart $*$-ring. Then $a\underset{*}{\leqslant} b$ if and only if there exist projections $p$ and $q$ such that $a=pb=bq$. \end{thm} Thus, from Proposition \ref{p2} and Theorem \ref{t3}, the natural partial order and the $*$-order are equivalent on abelian Rickart $*$-rings. This leads to the following two results, which were also proved independently by Janowitz \cite{j}. \begin{theorem} Let $R$ be an abelian Rickart $*$-ring. Then every interval $[0,x]$ is an orthomodular poset. \end{theorem} We know that, if $R$ is a Rickart $*$-ring, then the set of projections $P(R)$ forms a lattice and the set $\{e\in P(R)\colon e\leq x''\}$ is a sublattice of $P(R)$, where $x'$ is the projection which generates the right annihilator of $x$. \begin{theorem} In an abelian Rickart $*$-ring $R$, every interval $[0,x]$ is ortho-isomorphic to $\{e\in P(R)\colon e\leq x''\}.$ Hence every interval $[0,x]$ is an orthomodular lattice. \end{theorem} \section{Comparability axioms} Two projections $e$ and $f$ are said to be {\it equivalent}, written $e\sim f$, if there is $w\in eRf$ such that $e=ww^*$ and $f=w^*w$; this is an equivalence relation on the set of projections in a Rickart $*$-ring. A projection $e$ is said to be \textit{dominated} by the projection $f$, denoted by $e\lesssim f$, if $e\sim g\leq f$, for some projection $g$ in $R$. Two projections $e$ and $f$ are said to be \textit{generalized comparable} if there exists a central projection $h$ such that $he\lesssim hf$ and $(1-h)f\lesssim (1-h)e$.
A $*$-ring is said to satisfy the {\it generalized comparability} $(GC)$ if any two projections are generalized comparable. Two projections $e$ and $f$ are said to be {\it partially comparable} if there exist nonzero projections $e_0$, $f_0$ in $R$ such that $e_0\leq e$, $f_0\leq f$ and $e_0\sim f_0$. If, for any pair of projections in $R$, $eRf\neq 0$ implies that $e$ and $f$ are partially comparable, then $R$ is said to satisfy \textit{partial comparability} ($PC$). More about comparability axioms on the set of projections in a Rickart $*$-ring can be found in Berberian \cite{skb}. Drazin \cite{d} extended the relation of equivalence of two projections to general elements of a $*$-ring as follows. \begin{defn}[{\cite[Definition 2*]{d}}]Let $R$ be a $*$-ring with unity. We say that $a\sim b$ if and only if there exist $x\in aRb$, $y\in bRa$ such that $aa^*=xx^*$, $bb^*=yy^*$, $a^*a=y^*y$ and $b^*b=x^*x$. \end{defn} This relation is symmetric on a $*$-ring. Thakare and Nimbhorkar \cite{nsk} extended the comparability axioms using the above relation and the $*$-order to involve all elements of a Rickart $*$-ring. We provide a relation which is symmetric and transitive on general elements of a $*$-ring as an extension of the relation of equivalence of two projections. \begin{defn}Let $R$ be a $*$-ring with unity. We say that $a\sim b$ if and only if there exist $x,y\in R$ such that $aa^*=xx^*$, $bb^*=yy^*$, $a^*a=y^*y$ and $b^*b=x^*x$, with $x=ax=xb$ and $y=by=ya$. \end{defn} Now, we extend the concepts of dominance, $GC$, $PC$, etc., from the set of projections in a Rickart $*$-ring to general elements in a $*$-ring. \begin{defn} \begin{enumerate} \item Let $R$ be a $*$-ring with unity. We say that {\it $a$ is dominated by $b$} if $a\sim c\leq b$ for some $c\in R$. In notation, $a\lesssim b$.
\item A $*$-ring $R$ is said to satisfy the {\it generalized comparability for elements} ($GC$) if for any $a,b \in R$ there exists a central projection $h$ such that $ha\lesssim hb$ and $(1-h)b\lesssim (1-h)a$. \item Two elements $a,b$ in a $*$-ring $R$ are said to be {\it partially comparable} if there exist two nonzero elements $c,d$ in $R$ such that $c\leq a$, $d\leq b$ and $c\sim d$. If, for any $a,b \in R$, $aRb\neq 0$ implies that $a$ and $b$ are partially comparable, then we say that $R$ has {\it partial comparability for elements} $(PC)$. \end{enumerate} \end{defn} Clearly, if $a\leq b$ or $a\sim b$, then $a\lesssim b$. \begin{lem} If $a\lesssim b$ and $h$ is a central projection, then $ha\lesssim hb$. \end{lem} \begin{defn} Two elements $a$ and $b$ in a $*$-ring $R$ are said to be {\it very orthogonal} if there exists a central projection $h$ such that $ha=a$ and $hb=0$. \end{defn} The relevance of very orthogonality to generalized comparability is as follows: \begin{theorem}\label{gc} Let $a$ and $b$ be elements of a $*$-ring $R$. Then the following statements are equivalent. \begin{enumerate} \item[i)]$a$ and $b$ are generalized comparable. \item[ii)] There exist orthogonal decompositions $a=x+y$ and $b=z+w$ with $x\sim z$, and with $y$ and $w$ very orthogonal. \end{enumerate} \end{theorem} \begin{proof}i) $\Rightarrow$ ii) Suppose $a$ and $b$ are generalized comparable. Let $h$ be a central projection such that $ha\lesssim hb$ and $(1-h)b\lesssim (1-h)a$. Then $ha\sim k_1\leq hb$ and $(1-h)b\sim k_2\leq (1-h)a$, for some $k_1,k_2\in R$. Hence $k_1=m_1k_1=m_1hb=k_1m_1^*=hbm_1^*$, for some $m_1\in R$. Then $k_1=m_1hb$ gives $k_1h=m_1hbh=m_1hb=k_1$. Similarly, $k_2=(1-h)k_2$. Also $hak_2^*=ha(1-h)k_2^*=0=(ha)^*k_2$ and $(1-h)bk_1^*=[(1-h)b]^*k_1=0$. We claim that $ha+k_2\sim k_1+(1-h)b$.
Since $ha\sim k_1$, there exist $x_1,y_1\in R$ such that $(ha)(ha)^*=x_1x_1^*,~ k_1k_1^*=y_1y_1^*,~(ha)^*(ha)=y_1^*y_1$ and $k_1^*k_1=x_1^*x_1$ with $x_1=hax_1=x_1k_1$ and $y_1=k_1y_1=y_1ha$. Clearly, $x_1=hx_1$ and $y_1=hy_1$, since $k_1h=k_1$. Similarly, since $(1-h)b\sim k_2$, there exist $x_2,y_2\in R$ such that $k_2k_2^*=x_2x_2^*,~ [(1-h)b][(1-h)b]^*=y_2y_2^*,~k_2^*k_2=y_2^*y_2$ and $[(1-h)b]^*[(1-h)b]=x_2^*x_2$ with $x_2=k_2x_2=x_2(1-h)b$ and $y_2=(1-h)by_2=y_2k_2$. Clearly, $x_2=(1-h)x_2$ and $y_2=(1-h)y_2$, since $k_2(1-h)=k_2$. Let $x=x_1+x_2$ and $y=y_1+y_2$. Since $hk_2=0$ and $(1-h)k_1=0$, we have $(ha+k_2)x=(ha+k_2)(x_1+x_2)=hax_1+hax_2+k_2x_1+k_2x_2=x_1+0+0+x_2=x$ and $x[k_1+(1-h)b]=(x_1+x_2)[k_1+(1-h)b]=x_1k_1+x_1(1-h)b+x_2k_1+x_2(1-h)b=x_1+0+0+x_2=x$. Similarly, we have $y=[k_1+(1-h)b]y=y(ha+k_2)$. Also, $xx^*=(x_1+x_2)(x_1+x_2)^*=x_1x_1^*+x_1x_2^*+x_2x_1^*+x_2x_2^*=x_1x_1^*+0+0+x_2x_2^*=(ha)(ha)^*+k_2k_2^*=[ha+k_2][ha+k_2]^*$ and $x^*x=(x_1+x_2)^*(x_1+x_2)=x_1^*x_1+x_1^*x_2+x_2^*x_1+x_2^*x_2=x_1^*x_1+0+0+x_2^*x_2=k_1^*k_1+[(1-h)b]^*[(1-h)b]=[k_1+(1-h)b]^*[k_1+(1-h)b]$. On the other hand, $yy^*=(y_1+y_2)(y_1+y_2)^*=y_1y_1^*+y_1y_2^*+y_2y_1^*+y_2y_2^*=k_1k_1^*+0+0+[(1-h)b][(1-h)b]^*=[k_1+(1-h)b][k_1+(1-h)b]^*$ and $y^*y=(y_1+y_2)^*(y_1+y_2)=y_1^*y_1+y_1^*y_2+y_2^*y_1+y_2^*y_2=y_1^*y_1+0+0+y_2^*y_2=(ha)^*(ha)+k_2^*k_2=[ha+k_2]^*[ha+k_2]$. Therefore $ha+k_2\sim k_1+(1-h)b$. Next, we claim that $ha+k_2\leq a$ and $k_1+(1-h)b\leq b$. Since $h$ is a central projection, we have $k_2\leq (1-h)a\leq a$ and $ha \leq a$, which implies $k_2=x_1k_2=x_1a=k_2x_1^*=ax_1^*$ and $ha=x_2ha=x_2a=hax_2^*=ax_2^*$, for some $x_1,x_2\in R$. Let $y_1=x_1+hx_2$. Then $y_1(ha+k_2)=x_1ha+x_1k_2+hx_2ha+hx_2k_2=ha+k_2$, $(ha+k_2)y_1^*=hax_1^*+k_2x_1^*+hax_2^*h+k_2x_2^*h=ha+k_2$ and $y_1a=a(x_1+hx_2)^*=ha+k_2=ay_1^*$; therefore $ha+k_2\leq a$. Similarly, $(1-h)b+k_1\leq b$. Now put $x=ha+k_2$, $z=k_1+(1-h)b$, $y=a-x$ and $w=b-z$, so that $w=b-z=hb-k_1$.
Then $hw=h(hb-k_1)=hb-k_1=w$ and $hy=h(a-x)=ha-hx=ha-ha-hk_2=0$ (since $hk_2=0$), {\it i.e.}, $y$ and $w$ are very orthogonal. Thus $a=x+y$ and $b=z+w$, where $x\perp y$ and $z\perp w$, with $x\sim z$ and with $y$ and $w$ very orthogonal.\\ ii) $\Rightarrow$ i) Let $h$ be a central projection such that $hw=w$ and $hy=0$. Then $ha=hx+hy=hx$ and $(1-h)b=(1-h)z+(1-h)w=(1-h)z$, where $ha=hx\sim hz\leq hb$ and $(1-h)b=(1-h)z\sim (1-h)x\leq (1-h)a$. Thus $ha\lesssim hb$ and $(1-h)b\lesssim (1-h)a$. Hence $a,b$ are generalized comparable. \end{proof} The next result shows that $GC$ for elements is stronger than $PC$ for elements. \begin{thm}\label{t4} If $R$ is a $*$-ring with $GC$ for elements, then it has $PC$ for elements. \end{thm} \begin{proof} Let $a,b$ be elements of $R$ which are not partially comparable. We will show that $aRb=0$. Applying $GC$ to the pair $a,b$, we get orthogonal decompositions $a=x+y$ and $b=z+w$, where $x\sim z$ and $y,w$ are very orthogonal. If $x\neq 0$ and $z\neq 0$, then $a$ and $b$ are partially comparable, which contradicts the assumption. Hence $x=0=z$, {\it i.e.}, $a=y$ and $b=w$ are very orthogonal. Let $h$ be a central projection such that $ha=a$ and $hb=0$. Then $aRb=haRb=aRhb=0$. Thus $R$ has $PC$ for elements. \end{proof} \begin{lem} In an abelian Rickart $*$-ring, $a \perp b$ if and only if $RP(a)RP(b)=0$. \end{lem} \begin{proof}First we show that $ab=0$ if and only if $RP(a)RP(b)=0$. Suppose that $ab=0$; this gives $b\in r(\{a\})=(1-RP(a))R$. Hence $(1-RP(a))b=b$, giving $RP(a)b=0$. Since all projections in $R$ are central, we get $RP(a)\in r(\{b\})=(1-RP(b))R$, which yields $RP(b)RP(a)=0$. Conversely, if $RP(a)RP(b)=0$, then $ab=(aRP(a))(bRP(b))=aRP(a)RP(b)b=0$. Next, suppose that $a\perp b$. Then there exists $x\in R$ such that $xa=a=ax^*$ and $xb=0=bx^*$, {\it i.e.}, $a(1-x^*)=0$. Hence $RP(a)RP(1-x^*)=0$. Since $R$ is abelian, we have $RP(1-x^*)=1-RP(x^*)=1-RP(x)$. Consequently, $RP(a)RP(x)=RP(a)$.
On the other hand, $xb=0$ implies $RP(x)RP(b)=0$. Then $RP(a)RP(b)=RP(a)RP(x)RP(b)=0$, hence $ab=0$. Conversely, if $ab=0$, then $RP(a)RP(b)=0$. Thus $RP(a)a=a=aRP(a)$ and $RP(a)b=0=bRP(a)$. Hence $a\perp b$. \end{proof} The next result shows that the relation $\sim$ is finitely additive. \begin{thm} Let $R$ be an abelian Rickart $*$-ring. If $a_1\perp a_2$ and $b_1\perp b_2$ with $a_1\sim b_1$ and $a_2\sim b_2$, then $a_1+a_2\sim b_1+b_2$, {\it i.e.}, the relation $\sim$ is finitely additive. \end{thm} \begin{proof}Since $a_1\perp a_2$ and $b_1\perp b_2$, we have $RP(a_1)RP(a_2)=0=RP(b_1)RP(b_2)$. Also, since $a_1\sim b_1$ and $a_2\sim b_2$, there exist $x_i, y_i\in R$ such that $a_ia_i^*=x_ix_i^*,~ a_i^*a_i=y_i^*y_i,~b_ib_i^*=y_iy_i^*,~ b_i^*b_i=x_i^*x_i$ with $x_i=a_ix_i=x_ib_i$ and $y_i=b_iy_i=y_ia_i$, for $i=1,2$. This gives $x_i(1-a_i)=0$ (since in an abelian Rickart $*$-ring $RP(x)=LP(x)$), hence $RP(x_i)=RP(x_i)RP(a_i)$, for $i=1,2$. Then for $i\neq j$, we have $x_ia_j=x_iRP(x_i)a_jRP(a_j)=x_iRP(x_i)RP(a_i)a_jRP(a_j)=x_iRP(x_i)RP(a_i)RP(a_j)a_j=0$. Moreover, $x_ix_j^*=0=x_i^*x_j$ for $i\neq j$. Similarly, we have $b_jx_i=0$ for $i\neq j$. Let $x=x_1+x_2$ and $y=y_1+y_2$. Then $(a_1+a_2)x=a_1x_1+a_2x_1+a_1x_2+a_2x_2=x_1+0+0+x_2=x$ and $(b_1+b_2)x=b_1x_1+b_1x_2+b_2x_1+b_2x_2=x_1+0+0+x_2=x$. Consider $xx^*=x_1x_1^*+x_2x_1^*+x_1x_2^*+x_2x_2^*=a_1a_1^*+0+0+a_2a_2^*=(a_1+a_2)(a_1+a_2)^*$ and $x^*x=x_1^*x_1+x_2^*x_1+x_1^*x_2+x_2^*x_2=b_1^*b_1+0+0+b_2^*b_2=(b_1+b_2)^*(b_1+b_2)$. Similarly, $y=(b_1+b_2)y=y(a_1+a_2)$, $yy^*=(b_1+b_2)(b_1+b_2)^*$ and $y^*y=(a_1+a_2)^*(a_1+a_2)$. Therefore $a_1+a_2\sim b_1+b_2$. \end{proof} The above result ensures that the converse of Theorem \ref{t4} is true for finite abelian Rickart $*$-rings. \begin{thm} Let $R$ be a finite abelian Rickart $*$-ring. Then $GC$ for elements and $PC$ for elements are equivalent. \end{thm} \begin{proof}Suppose that $R$ has $PC$ for elements. By Theorem \ref{t4}, it is enough to show that $R$ then has $GC$ for elements. Let $a,b\in R$.
If $aRb=0$, then $ab=0$. This gives $RP(a)b=0$. Since $R$ is an abelian ring, $RP(a)$ is a central projection with $RP(a)a=a$, so $a$ and $b$ are very orthogonal, and we are done. Now suppose $aRb\neq 0$. Then there exist $a_0\leq a$ and $b_0\leq b$ such that $a_0\sim b_0$. Let $a_1, b_1$ be maximal elements such that $a_1\leq a$, $b_1\leq b$ and $a_1\sim b_1$ (such elements exist since $R$ is finite). Then $a_2=a-a_1$ and $b_2=b-b_1$ are such that $a_2\leq a$, $b_2\leq b$, $a_1\perp a_2$ and $b_1\perp b_2$. By the maximality of $a_1$ and $b_1$, we get $a_2Rb_2=0$, which gives that $a_2$ and $b_2$ are very orthogonal. Thus we get orthogonal decompositions $a=a_1+a_2$ and $b=b_1+b_2$ such that $a_1\sim b_1$, with $a_2$ and $b_2$ very orthogonal. By Theorem \ref{gc}, $a$ and $b$ are generalized comparable. \end{proof} \begin{prop}Let $R$ be a $*$-ring with $GC$ for elements and let $e$ be any projection in $R$. Then $eRe$ also has $GC$ for elements. \end{prop} \begin{proof} Let $a,b\in eRe\subseteq R$. Then there exists a central projection $h$ in $R$ such that $ha\lesssim hb$ and $(1-h)b\lesssim (1-h)a$. Let $g=ehe=he\in eRe$ and let $x$ be any element in $eRe$. Then $gx=hex=hx=xh=xeh=xhe=xg$. Hence $g$ is a central projection in $eRe$ with $ga=hea=ha$ and $gb=heb=hb$, {\it i.e.}, $ga\lesssim gb$; moreover, $(e-g)b=eb-gb=b-hb=(1-h)b$ and $(e-g)a=ea-hea=a-ha=(1-h)a$, {\it i.e.}, $(e-g)b\lesssim (e-g)a$. Thus $a$ and $b$ are generalized comparable in $eRe$. \end{proof} \begin{cor} If the matrix ring $M_n(R)$ has $GC$ for elements, then $R$ has $GC$ for elements. \end{cor} An ideal $I$ of a $*$-ring $R$ is a {\it $*$-ideal} if $a^*\in I$ whenever $a\in I$. \begin{prop} Let $I$ be a $*$-ideal of $R$. If $R$ has $GC$ for elements, then $R/I$ has $GC$ for elements.\end{prop} \begin{proof} Let $a+I$, $b+I\in R/I$. Applying $GC$ to $a,b\in R$, there exists a central projection $h\in R$ such that $ha\lesssim hb$ and $(1-h)b\lesssim (1-h)a$.
Then, passing to cosets, $h+I$ is a central projection in $R/I$ such that $(h+I)(a+I)\lesssim (h+I)(b+I)$ and $[(1+I)-(h+I)](b+I)\lesssim [(1+I)-(h+I)](a+I)$. Hence $R/I$ has $GC$ for elements. \end{proof} \begin{rem} The converse of the above statement is not true. For, let $R=\mathbb{Z}_{10}$ with the identity map as an involution and $I=\{0,2,4,6,8\}$. Then $R/I=\{0+I,1+I\}$, which trivially has $GC$ for elements. The poset $R$ with the natural partial order is depicted in Figure $1$. \begin{center} \begin{tikzpicture} \draw (0,1)--(-1.5,2)--(-2,1)--(0,0)--(-1,1)--(-0.5,2)--(0,1)--(0,0)--(1,1)--(0.5,2)--(0,1)--(1.5,2)--(2,1)--(0,0); \draw [fill=white](0,1) circle(.05);\draw [fill=white](-1.5,2) circle(.05);\draw [fill=white](-2,1) circle(.05);\draw [fill=white](0,0) circle(.05);\draw [fill=white](-1,1) circle(.05);\draw [fill=white](-0.5,2) circle(.05);\draw [fill=white](1,1) circle(.05);\draw [fill=white](0.5,2) circle(.05);\draw [fill=white](1.5,2) circle(.05);\draw [fill=white](2,1) circle(.05); \node [left]at (0,-.1){$0$};\node [above]at (-1.5,2){$7$};\node [left]at (-2,1){$2$};\node [left]at (-1,1){$6$}; \node [above]at (-0.5,2){$1$};\node [right]at (1,1){$4$};\node [above]at (0.5,2){$9$};\node [above]at (1.5,2){$3$}; \node [right]at (2,1){$8$};\node [left]at (0,0.9){$5$};\node [left]at (0.8,-1){Figure 1}; \end{tikzpicture} \end{center} Here $R$ does not have $GC$ for elements. Indeed, suppose on the contrary that $R$ has $GC$ for elements; then by Theorem \ref{t4}, $R$ has $PC$ for elements. Let $a=2$ and $b=4$. Then $aRb\neq 0$, but $2\nsim 4$: since $2\cdot 2^*=4$, $4^*\cdot 4=6$ and $R$ is commutative, there is no $x\in R$ such that $xx^*=4$ and $x^*x=6$. Moreover, from Figure $1$, the only nonzero element $\leq 2$ is $2$ itself, and the only nonzero element $\leq 4$ is $4$ itself. Hence $2$ and $4$ are not partially comparable in $R$, a contradiction. \end{rem}
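The $\mathbb{Z}_{10}$ claims in the remark above are small enough to verify by brute force. A sketch in Python with the identity map as involution (the helper name \texttt{leq} is ours):

```python
# Brute-force checks for the Z/10Z example: the natural partial order
# and the failure of 2 ~ 4 under the identity involution (x* = x).
N = 10

def leq(a, b):
    # a <= b iff some x satisfies a = xa = xb = ax* = bx*;
    # with x* = x and commutativity this reduces to a = xa and a = xb.
    return any((x * a) % N == a and (x * b) % N == a for x in range(N))

# a few covering relations visible in Figure 1
assert leq(0, 2) and leq(2, 7) and leq(6, 1) and leq(5, 7)
# 2 and 4 are incomparable, and neither has a nonzero element strictly below it
assert not leq(2, 4) and not leq(4, 2)
assert [c for c in range(1, N) if leq(c, 2)] == [2]
assert [c for c in range(1, N) if leq(c, 4)] == [4]
# 2*2* = 4 and 4*4 = 6, but no x satisfies xx* = 4 and x*x = 6
assert (2 * 2) % N == 4 and (4 * 4) % N == 6
assert not any((x * x) % N == 4 and (x * x) % N == 6 for x in range(N))
print("Z/10Z remark verified")
```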
https://arxiv.org/abs/1611.00932
Natural Partial Order on Rings with Involution
In this paper, we introduce a partial order on rings with involution, which is a generalization of the partial order on the set of projections in a Rickart *-ring. We prove that a *-ring with the natural partial order forms a sectionally semi-complemented poset. It is proved that every interval [0,x] forms an orthomodular lattice in the case of abelian Rickart *-rings. The concepts of generalized comparability (GC) and partial comparability (PC) are extended to involve all the elements of a *-ring. Further, it is proved that these concepts are equivalent in finite abelian Rickart *-rings.
https://arxiv.org/abs/1909.06589
On Schur multiplier and projective representations of Heisenberg groups
In this article, we study the Schur multiplier of the discrete as well as the finite Heisenberg groups and their t-variants. We describe the representation groups of these Heisenberg groups and through these give a construction of their finite dimensional complex projective irreducible representations.
\section{Introduction} The theory of projective representations involves understanding homomorphisms from a group into the projective linear groups. It was studied extensively by Schur~\cite{IS4, IS7, IS11}. These representations appear naturally in the study of ordinary representations of groups and are known to have many applications in Physics and Mathematics. We refer the reader to Section~\ref{se2} for precise definitions and related results regarding projective representations of a group. By definition, every ordinary representation of a group is projective, but the converse is not true. Therefore, understanding the projective representations is usually more intricate. Recall that the Schur multiplier of a group $G$ is the second cohomology group ${\mathrm{H}}^2(G, \mathbb{C}^{\times})$, where ${\mathbb C}^\times$ is a trivial $G$-module. The Schur multiplier of a group plays an important role in understanding its projective representations. By definition, every projective representation $\rho$ of $G$ is associated with a 2-cocycle $\alpha: G \times G \to {\mathbb C}^\times$ such that $\rho(x)\rho(y)=\alpha(x,y)\rho(xy)$ for all $x,y \in G$. In this case, we say that $\rho$ is an $\alpha$-representation. Conversely, for every 2-cocycle $\alpha$ of $G$, there exists an $\alpha$-representation of $G$, namely the one afforded by $\mathbb C^\alpha(G)$, the twisted group algebra of $G$. So, the first step towards understanding the projective representations is to describe the 2-cocycles of $G$ up to cohomology, i.e., to understand the Schur multiplier of $G$. The second step involves constructing $\alpha$-representations of $G$ for all $[\alpha] \in {\mathrm{H}}^2(G, \mathbb C^\times)$, where $[\alpha]$ denotes the cohomology class of $\alpha$. The complex ordinary representations of finite abelian groups are easy to understand; for example, all irreducibles are one-dimensional. But this is not true for their projective representations.
This problem has been studied by many authors, most notably by Morris, Saeed-ul-Islam, and Thomas in \cite{Moa}, \cite{Moa2}. All irreducible $\alpha$-representations of $({\mathbb Z}/n{\mathbb Z})^k$ for some special $\alpha$ have been described in \cite{Moa}. This work was generalized to all finite abelian groups for a special class of cocycles in \cite{Moa2}. Their results are outlined in \cite[Chapter 3]{GK1} and \cite[Chapter 8]{GK3}. Later, Higgs \cite{RJ} constructed an irreducible $\alpha$-representation of the elementary abelian $p$-groups $({\mathbb Z}/p{\mathbb Z})^k$ for every $\alpha$. Also, he counted the number of $[\alpha] \in {\mathrm{H}}^2(({\mathbb Z}/p{\mathbb Z})^k, {\mathbb C}^\times)$ such that the irreducible $\alpha$-representations of $({\mathbb Z}/p{\mathbb Z})^k$ continue to be irreducible when restricted to a subgroup of index $\leq p^2$. The corresponding results for $({\mathbb Z}/p^r{\mathbb Z})^k$ with $r>1$ are not yet known. The projective representations of dihedral groups are also well known in the literature; see~\cite[Theorem~7.3]{GK1}. Schur~\cite{IS11} studied the projective representations of the symmetric groups $S_n$. He proved that the Schur multiplier of $S_n$, for $n \geq 4$, is ${\mathbb Z}/2{\mathbb Z}$ and described the representation group of $S_n$; see~\cite{Moa1, WB} for more details. Nazarov~\cite{Na1, Na2} explicitly constructed the projective representations of $S_n$ by providing suitable orthogonal matrices for each generator of the symmetric group. In this article, our goal is to describe the Schur multiplier and the projective representations of the discrete Heisenberg groups and their $t$-variants. The $t$-variants of the Heisenberg groups, denoted by $H^t_{2n+1} (R)$, are defined as follows. Let $R$ be a commutative ring with identity and $t \in R$.
Define the group $H^t_{2n+1} (R)$ on the set $R^{n+1} \oplus R^n$ with multiplication given by \[ \begin{array}{l} (a, b_1, \ldots, b_n, c_1, \ldots, c_n) (a', b'_1, b'_2, \ldots, b'_n, c'_1, c'_2, \ldots, c'_n) \\ = (a+a'+ t(\sum_{i = 1}^n b'_i c_i), b_1 + b'_1, \ldots, b_n + b'_n, c_1 + c'_1, \ldots, c_n + c'_n). \end{array} \] For $t = 1$, we recover the classical Heisenberg group, and throughout we denote $H^1_{2n+1} (R)$ by $H_{2n+1} (R)$. Except in Theorem~\ref{thm2}, which is true for a general commutative ring $R$ with identity, the ring $R$ will be ${\mathbb Z}/r{\mathbb Z}$ for $r \in \mathbb N \cup \{0\}$. It follows from \cite[Corollary 5.1.3]{GK1} that the projective representations of $H^t_{2n+1}({\mathbb Z}/r{\mathbb Z})$ are obtained from those of $H^t_{2n+1}({\mathbb Z}/p_i^{m_i}{\mathbb Z})$, where $r=p_1^{m_1}p_2^{m_2}\cdots p_k^{m_k}$ is the prime decomposition of $r$. Hence, for $r \in \mathbb N$, we can further assume that $t\mid r$. Our first result describes the Schur multiplier of $H^t_{2n+1} (R)$ for $R = {\mathbb Z}/r{\mathbb Z}$. The description of the Schur multiplier of $H_{2n+1}^t(R)$ for $n>1$ differs from that for the case $n=1$. For $n = 1$, we further assume that either $r =0$ or $r$ is an odd natural number. \begin{thm} \label{thm1} \begin{enumerate}[label=(\roman*)] \item For $n>1$, \[ {\mathrm{H}}^2({H}_{2n+1}^t(\mathbb Z/r{\mathbb Z}),{\mathbb C}^\times)= \begin{cases} ({\mathbb Z}/r{\mathbb Z})^{2n^2-n-1} \times ({\mathbb Z}/t{\mathbb Z})^{2n+1},& \text{if }r \in \mathbb N,\\ ({\mathbb C}^\times)^{2n^2-n-1} \times ({\mathbb Z}/t{\mathbb Z})^{2n}, & \text{if } r=0. \end{cases} \] \item For $r \in (2 \mathbb N + 1) \cup \{0\}$, \[ {\mathrm{H}}^2({H}_{3}^t(\mathbb Z/r{\mathbb Z}),{\mathbb C}^\times)= \begin{cases} ({\mathbb Z}/r{\mathbb Z} )^2 \times {\mathbb Z}/t{\mathbb Z},& \text{if }r \in (2 \mathbb N +1),\\ ({\mathbb C}^\times)^2, & \text{if } r=0.
\end{cases} \] \end{enumerate} \end{thm} The Schur multiplier of $H_3({\mathbb Z}/r{\mathbb Z})$ was obtained in \cite[Theorem 1.1]{UJ}. A proof of the above result is included in Section~\ref{se1}. Our next aim is to describe the projective representations of $H_{2n+1}^t({\mathbb Z}/r{\mathbb Z})$. Throughout this article, we consider these groups as discrete (abstract) groups, and therefore the obtained projective representations may not be unitary or even continuous. It is well known that the projective representations of a group $G$ are obtained from the ordinary representations of its representation group (if it exists); see Corollary \ref{rep_group}. Our next result describes a representation group of $H_3^t({\mathbb Z}/r{\mathbb Z})$. For $r \in \mathbb N \cup \{ 0 \}$, define a group $\widehat{H}(r,t)$ by \[ \widehat{H}(r,t)= \langle x, y, z, z_1, z_2 \mid [x, y ] =z^t, [x, z] = z_1, [y,z] =z_2, x^{r} = y^{r} = z^{rt}=1 \rangle.\] Throughout the article, $[x,y] = xyx^{-1}y^{-1}$, and the relations of the form $[x, y]= 1$ for generators $x$ and $y$ are omitted in the presentation of a group. \begin{thm}\label{thm4} For $r \in (2\mathbb N+1) \cup \{ 0 \}$ and $t\mid r$, the group $\widehat{H}(r,t)$ is a representation group of ${H}_3^t(\mathbb Z/r{\mathbb Z})$. \end{thm} See Section~\ref{se2} for the proof of this result. A construction of all finite-dimensional irreducible ordinary representations of $\widehat{H}(r,t)$ is included in Section~\ref{se3}. Our next result focuses on the projective representations of $H_{2n+1}^t(R)$ for $n> 1$. Recall that the group $H_{2n+1}^t(R)$ projects onto the abelian group $R^{2n}\oplus R/tR$ (see (\ref{eq:Heisenberg-stem})). The following result is true for a general commutative ring $R$ with identity. \begin{thm} \label{thm2} For $n > 1$, every irreducible projective representation of $H_{2n+1}^t(R)$ is obtained from an irreducible projective representation of the abelian group $R^{2n}\oplus R/tR$ via inflation.
\end{thm} We obtain its proof from a general result regarding the central product of groups; see Corollary~\ref{HG} and Section~\ref{subsec:central-product-proof}. By the above result, the question of determining the projective representations of $H_{2n+1}^t(R)$ for $n> 1$ boils down to understanding the projective representations of the abelian group $R^{2n}\oplus R/tR$. As mentioned earlier, this problem is not yet well understood. Next, for $R = {\mathbb Z}/r{\mathbb Z}$ and $n \in \mathbb N$, we describe a representation group of $R^{n}\oplus R/tR$. Define the group $\mathcal{F}_n(r, t)$ as follows. \[ \mathcal{F}_{n}(r,t) = \langle x_k, z_{ij} \mid 1 \leq k \leq n+1, 1 \leq i < j \leq n+1, [x_i, x_j ] = z_{ij},\, x_1^t= x_j^r = 1 \text{ for } 2 \leq j \leq n+1 \rangle. \] \begin{thm} \label{thm3} For $r \in \mathbb N \cup \{0\}$ and $t \mid r$, the group $\mathcal{F}_{n}(r,t)$ is a representation group of $(\mathbb Z/r\mathbb Z)^{n}\oplus {\mathbb Z}/t{\mathbb Z}$. \end{thm} A proof of this result is included in Section~\ref{se2}. See Section~\ref{se3} for a construction of all finite-dimensional ordinary irreducible representations of $\mathcal{F}_{n}(r,t)$. We also obtain results regarding the projective representations of extra-special groups. Recall that a $p$-group $G$ is called an extra-special group if its center $Z(G)$ is cyclic of order $p$ and the quotient $G/Z(G)$ is a non-trivial elementary abelian $p$-group. It is well known that for each $n \geq 1$, there are two extra-special $p$-groups of order $p^{2n+1}$ up to isomorphism, with exponent either $p$ or $p^2$. We denote the isomorphism classes of extra-special groups of order $p^{2n+1}$ with exponent $p$ and $p^2$ by $ES_{2n+1}(p)$ and $ES_{2n+1}(p^2)$, respectively. By definition, the groups $ES_{2n+1}(p)$ are isomorphic to $H_{2n+1}({\mathbb Z}/p{\mathbb Z})$. Above, we have already stated the results regarding the projective representations of $H_{2n+1}({\mathbb Z}/p{\mathbb Z})$.
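To illustrate the identification $ES_{2n+1}(p) \cong H_{2n+1}({\mathbb Z}/p{\mathbb Z})$ for odd $p$, the following computational sketch (the prime $p = 5$ is a sample choice for illustration; this is not part of the argument) checks directly from the multiplication law that $H_3({\mathbb Z}/p{\mathbb Z})$ has exponent $p$, the property distinguishing it from $ES_3(p^2)$.

```python
# Exponent-p check for H_3(Z/pZ) (t = 1), using the multiplication law
# (a, b, c)(a', b', c') = (a + a' + t*b'*c, b + b', c + c')  (mod p).
# Sample odd prime p = 5, chosen for illustration.
from itertools import product

p, t = 5, 1

def mul(u, v):
    (a, b, c), (a2, b2, c2) = u, v
    return ((a + a2 + t * b2 * c) % p, (b + b2) % p, (c + c2) % p)

def power(u, k):
    w = (0, 0, 0)  # identity element
    for _ in range(k):
        w = mul(w, u)
    return w

# every element has order dividing p, so the group has exponent p
assert all(power(x, p) == (0, 0, 0) for x in product(range(p), repeat=3))
```

For general $k$, the same recursion gives $(a,b,c)^k = (ka + t\,bc\,\tbinom{k}{2},\, kb,\, kc)$, and $p \mid t\,bc\,\binom{p}{2}$ precisely because $p$ is odd.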
Combining this with our next result, we complete the picture for extra-special $p$-groups. \begin{corollary}\label{thm5} (i) Every projective representation of $ES_{3}(p^2)$ is equivalent to an ordinary representation. (ii) For $n>1$, every irreducible projective representation of $ES_{2n+1}(p^2)$ is obtained from an irreducible projective representation of $({\mathbb Z}/p{\mathbb Z})^{2n}$ via inflation. \end{corollary} Above, (i) follows because the Schur multiplier of $ES_{3}(p^2)$ is trivial; see \cite[Theorem 3.3.6]{GK}. For the proof of (ii), see Section~\ref{subsec:central-product-proof}. \section{Schur multiplier of $H_{2n+1}^t({\mathbb Z}/r{\mathbb Z}), r \in \mathbb N \cup \{0\}$} \label{se1} In this section, we prove Theorem~\ref{thm1}. Throughout this article, we use $x^y$ to denote the conjugation $yxy^{-1}$. The commutator subgroup and the center of a group $G$ are denoted by $G'$ and $Z(G)$, respectively. Recall that, for a group $G$ and $i \in \mathbb N$, ${\mathrm{H}}^i(G, \mathbb C^\times) = Z^i(G, \mathbb C^\times)/B^i(G, \mathbb C^\times)$, where $Z^i(G, \mathbb C^\times)$ and $B^i(G, \mathbb C^\times)$ consist of the $i$-cocycles and $i$-coboundaries of $G$, respectively. We shall call elements of $Z^2(G, \mathbb C^\times)$ 2-cocycles (or sometimes just cocycles when it is clear from the context) and elements of ${\mathrm{H}}^2(G, \mathbb C^\times)$ cohomology classes. For an element $\alpha \in Z^2(G, \mathbb C^\times)$, the corresponding element of ${\mathrm{H}}^2(G, \mathbb C^\times)$ will be denoted by $[\alpha]$. For 2-cocycles $\alpha, \beta \in Z^2(G, \mathbb C^\times)$, we say that $\alpha$ is cohomologous to $\beta$ whenever $[\alpha] = [\beta]$. A central extension \begin{equation} \label{equation-stem} 1\to A \to G \to G/A \to 1 \end{equation} is called a {\it stem} extension if $A \subseteq Z(G) \cap G'$.
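For instance, for $G = H_3^t({\mathbb Z}/r{\mathbb Z})$ the commutators fill out $t{\mathbb Z}/r{\mathbb Z}$ in the first coordinate, and this subgroup is central, so quotienting by it yields a stem extension. A small computational sketch of this fact (the parameters $r = 9$, $t = 3$ are sample choices for illustration):

```python
# Stem-extension check for G = H_3^t(Z/rZ): the set of commutators equals
# t(Z/rZ) x {0} x {0}, which is central, so A = tR lies in Z(G) and in G'.
# Sample parameters r = 9, t = 3 (so that t | r), chosen for illustration.
from itertools import product

r, t = 9, 3

def mul(u, v):
    (a, b, c), (a2, b2, c2) = u, v
    return ((a + a2 + t * b2 * c) % r, (b + b2) % r, (c + c2) % r)

def inv(u):
    a, b, c = u
    return ((-a + t * b * c) % r, (-b) % r, (-c) % r)

def comm(x, y):  # [x, y] = x y x^{-1} y^{-1}
    return mul(mul(x, y), mul(inv(x), inv(y)))

G = list(product(range(r), repeat=3))
A = {(t * k % r, 0, 0) for k in range(r)}

# the commutators already form the subgroup A, so G' = A
assert {comm(x, y) for x in G for y in G} == A

# A is central: its elements commute with everything
assert all(mul(a, g) == mul(g, a) for a in A for g in G)
```

Indeed, a direct computation gives $[x,y] = (t(b'c - bc'), 0, 0)$ for $x=(a,b,c)$ and $y=(a',b',c')$, which is exactly the content of the first assertion.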
For a given stem extension (\ref{equation-stem}), the Hochschild-Serre spectral sequence \cite[Theorem 2, p.~129]{HS} for the cohomology of groups yields the following exact sequence: \[1 \rightarrow {\mathrm{Hom}}( A,{\mathbb C}^\times) \xrightarrow[]{{\mathrm{tra}}} {\mathrm{H}}^2(G/A, {\mathbb C}^\times) \xrightarrow{\inf} {\mathrm{H}}^2(G, {\mathbb C}^\times).\] Here the transgression homomorphism ${\mathrm{tra}}:{\mathrm{Hom}}( A,{\mathbb C}^\times) \to {\mathrm{H}}^2(G/A, {\mathbb C}^\times)$ is given by $f \mapsto [{\mathrm{tra}}(f)]$, where $${\mathrm{tra}}(f)(\overline{x},\overline{y}) = f(\mu (\overline{x})\mu(\overline{y})\mu(\overline{xy})^{-1}),\,\, \overline{x}, \overline{y} \in G/A, $$ for a section $\mu: G/A \rightarrow G$, and the inflation homomorphism $\inf : {\mathrm{H}}^2(G/A, {\mathbb C}^\times) \to {\mathrm{H}}^2(G, {\mathbb C}^\times) $ is given by $[\alpha] \mapsto [\inf(\alpha)]$, where $\inf(\alpha)(x,y) = \alpha(xA,yA)$. For the groups $H_{2n+1}^t(R)$, we have the following stem extension, \begin{equation} \label{eq:Heisenberg-stem} 1 \rightarrow tR \xrightarrow[]{f} H_{2n+1}^t(R) \xrightarrow[]{g} R^{2n} \oplus R/tR \rightarrow 1, \end{equation} where \[ \begin{array}{c} f(tr) = (tr,\underbrace{0, 0, \cdots ,0}_{2n\text{-times}} ), \\ g(a, b_1, \ldots, b_n, c_1, \ldots, c_n) = (a \bmod(tR), b_1, \ldots, b_n, c_1, \ldots, c_n). \end{array} \] Let $\alpha \in Z^2(G_1\times G_2, {\mathbb C}^\times)$.
Recall that \begin{align} \label{eq:direct product} {\mathrm{H}}^2(G_1\times G_2, {\mathbb C}^\times) \cong {\mathrm{H}}^2(G_1, {\mathbb C}^\times) \times {\mathrm{H}}^2(G_2, {\mathbb C}^\times) \times {\mathrm{Hom}}(G_1/G'_1 \otimes G_2/G'_2, {\mathbb C}^\times) \end{align} via the isomorphism $\theta$ defined by $$\theta([\alpha]) = ({\mathrm{res}}^{G_1\times G_2}_{G_1}([\alpha]), {\mathrm{res}}^{G_1\times G_2}_{G_2}([\alpha]), \nu([\alpha])),$$ where $\nu: {\mathrm{H}}^2(G_1\times G_2,{\mathbb C}^\times) \to {\mathrm{Hom}}(G_1/G'_1 \otimes G_2/G'_2, {\mathbb C}^\times)$ is the homomorphism given by $\nu([\alpha])(\tilde{g}_1 \otimes \tilde{g}_2)=\alpha(g_1,g_2) \alpha(g_2,g_1)^{-1}$, for $\tilde{g}_1 = g_1G_1'$ and $\tilde{g}_2 = g_2G_2'$. We will use this result without explicitly referring to it. Now, we recall the definition of the central product of groups. A group $G$ is called a central product of its two normal subgroups $H$ and $K$ amalgamating $A$ if $G=HK$ with $A=H \cap K$ and $[H,K]=1$. \begin{thm}$($\cite[Theorem A and Theorem 3.6]{HVY}$)$\label{C1} Let $G$ be a central product of two normal subgroups $H$ and $K$ amalgamating $A=H \cap K$. Set $Z=H'\cap K'$. \begin{enumerate}[label=(\roman*)] \item Then the inflation map $\inf:{\mathrm{H}}^2(G/Z, {\mathbb C}^\times) \to {\mathrm{H}}^2(G, {\mathbb C}^\times)$ is surjective and $${\mathrm{H}}^2(G, {\mathbb C}^\times) \cong {\mathrm{H}}^2(G/Z, {\mathbb C}^\times)/N,$$ where $N \cong {\mathrm{Hom}} (Z,{\mathbb C}^\times)$. \item The subgroup ${\mathrm{Hom}}(Z,{\mathbb C}^\times)$ embeds in ${\mathrm{H}}^2(H/A, {\mathbb C}^\times)/L \oplus {\mathrm{H}}^2(K/A, {\mathbb C}^\times)/M$ via ${\mathrm{tra}}:{\mathrm{Hom}}(Z,{\mathbb C}^\times)\to {\mathrm{H}}^2(G/Z, {\mathbb C}^\times)$, where $L \cong {\mathrm{Hom}} \big((A\cap H')/Z, {\mathbb C}^\times \big)$ and $M \cong {\mathrm{Hom}} \big((A\cap K')/Z, {\mathbb C}^\times \big)$. \end{enumerate} \end{thm} \begin{lemma}\label{l1} Let $r \in \mathbb N \cup \{0\}$ and let $t$ divide $r$.
\begin{enumerate}[label=(\roman*)] \item ${\mathrm{H}}^2({\mathbb Z}/t{\mathbb Z} \oplus ({\mathbb Z}/r{\mathbb Z})^k,{\mathbb C}^\times) \cong ({\mathbb Z}/t{\mathbb Z})^k \oplus ({\mathbb Z}/r{\mathbb Z})^{\frac{k(k-1)}{2}}$. Further, any $\alpha \in Z^2({\mathbb Z}/t{\mathbb Z} \oplus ({\mathbb Z}/r{\mathbb Z})^k,{\mathbb C}^\times)$ with $k \geq 2$ satisfies $[\alpha] = [\nu]$ for some $\nu \in Z^2({\mathbb Z}/t{\mathbb Z} \oplus ({\mathbb Z}/r{\mathbb Z})^k,{\mathbb C}^\times)$ such that \[\nu\big((m_1,m_2,\ldots , m_k,m_{k+1}),(n_1,n_2,\ldots ,n_k, n_{k+1})\big)= \prod_{1 \leq i < j \leq k+1}\mu_{i,j}^{n_im_j}, \] for some $\mu_{i,j} \in {\mathbb C}^\times$ satisfying $\mu_{i,j}^r=1$ for $2 \leq i < j \leq k+1$ and $\mu_{1,l}^t=1$ for $2 \leq l \leq k+1$. \item Any $\alpha \in Z^2(H_3^t({\mathbb Z}/r{\mathbb Z}),{\mathbb C}^\times)$ satisfies $[\alpha] = [\sigma]$ for some $\sigma \in Z^2(H_3^t({\mathbb Z}/r{\mathbb Z}),{\mathbb C}^\times)$ such that for $x = (m_1,n_1,p_1)$ and $y = (m_2, n_2, p_2)$ we have \[ \sigma(x, y) = \begin{cases} \lambda^{(m_2p_1+ tn_2\frac{p_1(p_1-1)}{2}) }\mu^{(n_1m_2+tp_1\frac{n_2(n_2-1)}{2} + tp_1n_1n_2)}, & r = 0, \\ \lambda^{(m_2p_1+ tn_2\frac{p_1(p_1-1)}{2} )}\mu^{(n_1m_2+tp_1\frac{n_2(n_2-1)}{2} + tp_1n_1n_2)}\delta^{(p_1n_2)}, & r \in \mathbb N, \end{cases} \] for some $\lambda,\mu,\delta \in {\mathbb C}^\times$ such that $\lambda^r=\mu^r=\delta^t=1$. \end{enumerate} \end{lemma} \begin{proof} (i) The Schur multiplier of a finitely generated abelian group follows from (\ref{eq:direct product}). We use \cite[Theorem 9.4]{MA} for the cocycle description.
We obtain that every cocycle of ${\mathbb Z}/t{\mathbb Z} \oplus {\mathbb Z}/r{\mathbb Z}$ is cohomologous to a cocycle of the form \[ \alpha((m_1,m_2),(n_1,n_2))=\sigma_1(m_1,n_1)\sigma_2(m_2,n_2)g(n_1, m_2), \] where $\sigma_1 \in Z^2({\mathbb Z}/t{\mathbb Z} ,{\mathbb C}^\times)$, $\sigma_2 \in Z^2({\mathbb Z}/r{\mathbb Z} ,{\mathbb C}^\times)$ and $g: {\mathbb Z}/t{\mathbb Z} \oplus {\mathbb Z}/r{\mathbb Z} \to {\mathbb C}^\times$ is a map such that $g(n_1,m_2)=g(1,1)^{n_1m_2}=\mu_{1,2}^{n_1m_2}$. The general result follows by induction on $k$. (ii) The proof of this result goes along the same lines as that of Packer~\cite[Proposition 1.1]{JP1}. Following the cited proof, we obtain that every $\alpha \in Z^2(H_3^t({\mathbb Z}/r{\mathbb Z}),{\mathbb C}^\times)$ is cohomologous to a cocycle of the form \[ \begin{array}{l} \beta((m_1, n_1,p_1), (m_2, n_2, p_2)) = \\ \lambda^{(m_2p_1+ tn_2\frac{p_1(p_1-1)}{2} )}\mu^{(n_1m_2+tp_1\frac{n_2(n_2-1)}{2} + tp_1n_1n_2)}\delta^{(p_1n_2)}, \end{array} \] for some $\lambda,\mu,\delta \in {\mathbb C}^\times$ such that $\lambda^r=\mu^r=\delta^r=1$. First assume that $r=0$. Choose $\delta_1 \in {\mathbb C}^\times$ such that $\delta_1^t=\delta$. Now, define a function $b: H_3^t({\mathbb Z}) \to {\mathbb C}^\times$ by $b(m,n,p)=\delta_1^m$. Then \[ b(m_1,n_1,p_1)^{-1}b(m_2,n_2,p_2)^{-1}b(m_1+m_2+tp_1n_2,n_1+n_2,p_1+p_2)=\delta^{p_1n_2}, \] so the cocycle $((m_1,n_1,p_1), (m_2, n_2, p_2)) \mapsto \delta^{p_1n_2}$ is a coboundary. Hence, every cocycle $\alpha \in Z^2(H_3^t({\mathbb Z}),{\mathbb C}^\times)$ is cohomologous to a cocycle of the form \[\sigma((m_1,n_1,p_1), (m_2, n_2, p_2))=\lambda^{m_2p_1+ tn_2\frac{p_1(p_1-1)}{2} }\mu^{n_1m_2+tp_1\frac{n_2(n_2-1)}{2} + tp_1n_1n_2},\] for some $\lambda,\mu\in {\mathbb C}^\times$. Now, assume $r \in \mathbb N$.
If we define a map $b:H_3^t({\mathbb Z}/r{\mathbb Z}) \to {\mathbb C}^\times$ by $b(m_1,n_1,p_1)=\delta^{m_1}$, then we have $$b(m_1,n_1,p_1)^{-1}b(m_2,n_2,p_2)^{-1}b(m_1+m_2+tp_1n_2,n_1+n_2,p_1+p_2)=\delta^{tp_1n_2},$$ which shows that the cocycle $((m_1,n_1,p_1), (m_2, n_2, p_2)) \mapsto \delta^{tp_1n_2}$ is a coboundary. Hence, every cocycle $\alpha \in Z^2(H_3^t({\mathbb Z}/r{\mathbb Z}),{\mathbb C}^\times)$ is cohomologous to a cocycle of the form \[\sigma((m_1,n_1,p_1), (m_2, n_2, p_2))=\lambda^{m_2p_1+ tn_2\frac{p_1(p_1-1)}{2} }\mu^{n_1m_2+tp_1\frac{n_2(n_2-1)}{2} + tp_1n_1n_2}\delta^{p_1n_2},\] for some $\lambda,\mu,\delta \in {\mathbb C}^\times$ such that $\lambda^r=\mu^r=\delta^t=1$. \end{proof} \begin{corollary} \label{lem: non-trivial-cocycle-abelian} Let $r > 1$ and let $\mu$ be a primitive $r$-th root of unity. Then $\alpha \in Z^2({\mathbb Z}/r{\mathbb Z} \oplus {\mathbb Z} /r{\mathbb Z}, {\mathbb C}^\times)$ defined by $$\alpha((m,n), (m',n' )) = \mu^{nm'}$$ corresponds to a non-trivial element of ${\mathrm{H}}^2( {\mathbb Z}/r{\mathbb Z} \oplus {\mathbb Z}/r{\mathbb Z}, \mathbb C^\times )$. \end{corollary} \subsection{\textbf{Proof of Theorem \ref{thm1}}} \begin{proof} {\bf (i) Schur multiplier of $H^t_{2n+1}({\mathbb Z}/r{\mathbb Z})$ for $n > 1$:} Let $G=H^t_{2n+1}({\mathbb Z}/r{\mathbb Z})$, $ r \in \mathbb{N} \cup \{0\}$ and $n>1$. Then the group $G$ is a central product of $K_1=H^t_{2n-1}({\mathbb Z}/r{\mathbb Z})$ and $K_2=H^t_3({\mathbb Z}/r{\mathbb Z})$ amalgamating $A=Z(G)$. Consider $Z=K'_1\cap K'_2$, which is isomorphic to $t{\mathbb Z}/r{\mathbb Z}$. Here $G/Z\cong A/Z \oplus (K_1/A\oplus K_2/A) \cong {\mathbb Z}/t{\mathbb Z} \oplus ({\mathbb Z}/r{\mathbb Z})^{2n}$. By Theorem \ref{C1}, it follows that the homomorphism $\inf$ in the following exact sequence is surjective: \[ 1 \to {\mathrm{Hom}}(Z, {\mathbb C}^\times) \xrightarrow{{\mathrm{tra}}}{\mathrm{H}}^2( G/Z, {\mathbb C}^\times) \xrightarrow{\inf} {\mathrm{H}}^2( G, {\mathbb C}^\times).
\] Also, ${\mathrm{Hom}} (t{\mathbb Z}/r{\mathbb Z}, {\mathbb C}^\times)$ embeds in ${\mathrm{H}}^2( K_1/A, {\mathbb C}^\times) \oplus {\mathrm{H}}^2 (K_2/A, {\mathbb C}^\times)$ via the ${\mathrm{tra}}$ homomorphism. Hence, \begin{eqnarray} {\mathrm{H}}^2( G, {\mathbb C}^\times) \cong & \frac{{\mathrm{H}}^2( K_1/A, {\mathbb C}^\times)\times {\mathrm{H}}^2( K_2/A, {\mathbb C}^\times)}{ {\mathrm{Hom}}(Z, {\mathbb C}^\times)}\times {\mathrm{Hom}}(({\mathbb Z}/r{\mathbb Z})^{4n-4},{\mathbb C}^\times) \times ({\mathbb Z}/t{\mathbb Z})^{2n}\nonumber\\ \cong & \frac{{\mathrm{Hom}}(({\mathbb Z}/r{\mathbb Z})^{2n^2-5n+4},{\mathbb C}^\times) }{{\mathrm{Hom}}(t{\mathbb Z}/r{\mathbb Z}, {\mathbb C}^\times)}\times {\mathrm{Hom}}(({\mathbb Z}/r{\mathbb Z})^{4n-4},{\mathbb C}^\times) \times ({\mathbb Z}/t{\mathbb Z})^{2n}\nonumber\\ \cong & \frac{{\mathrm{Hom}}(({\mathbb Z}/r{\mathbb Z})^{2n^2-n},{\mathbb C}^\times) }{{\mathrm{Hom}}(t{\mathbb Z}/r{\mathbb Z}, {\mathbb C}^\times)} \times ({\mathbb Z}/t{\mathbb Z})^{2n}\label{eq1}. \end{eqnarray} Since the map $\inf: {\mathrm{H}}^2( G/Z, {\mathbb C}^\times) \to {\mathrm{H}}^2( G, {\mathbb C}^\times)$ is surjective, it follows from Lemma \ref{l1}$(i)$ that every cocycle in $ Z^2(H_{2n+1}^t({\mathbb Z}/r{\mathbb Z}),{\mathbb C}^\times)$ is cohomologous to a cocycle of the form \[ \beta((l_1,m_1,\ldots, m_{2n}), (l_1',m_1',\ldots, m_{2n}'))=\prod_{1\leq i < j\leq 2n}{\mu_{i,j}}^{m_i' m_j}\prod_{k=1}^{2n}{\mu_k}^{l_1'm_k}, \] for some $\mu_{i,j},\mu_k \in {\mathbb C}^\times$ with $\mu_k^t=1$ for $1 \leq k \leq 2n$. If $r=0$, then $\mu_{i,j}\in {\mathbb C}^\times$ and $\mu_k^t=1$ for $1 \leq i < j \leq 2n$, $1 \leq k \leq 2n$. Define a map $b:H_{2n+1}^t({\mathbb Z}) \to {\mathbb C}^\times$ by $b(l_1,m_1,\ldots, m_{2n})=(\delta^{1/t})^{l_1}$ for $\delta \in {\mathbb C}^\times$. Using $b$, we obtain that the cocycle $\delta^{(\sum_{1\leq i \leq n}m_i'm_{n+i})}$ is a coboundary.
Therefore, up to cohomology, we may choose $(\mu_{i,n+i})_{1 \leq i \leq n} \in ({\mathbb C}^\times)^n/\langle( \delta, \delta, \delta, \cdots, \delta) \mid \delta \in {\mathbb C}^\times \rangle$, which is isomorphic to $({\mathbb C}^\times)^{n-1}$. Since, by (\ref{eq1}), $({\mathbb C}^\times)^{2n^2-n-1} \times ({\mathbb Z}/t{\mathbb Z})^{2n}$ embeds in ${\mathrm{H}}^2(H_{2n+1}^t({\mathbb Z}), {\mathbb C}^\times)$, we obtain \[ {\mathrm{H}}^2(H_{2n+1}^t({\mathbb Z}), {\mathbb C}^\times) \cong ({\mathbb C}^\times)^{2n^2-n-1} \times ({\mathbb Z}/t{\mathbb Z})^{2n}. \] If $r \in \mathbb N$, then $\mu_{i,j}^r=1$ for $1 \leq i < j \leq 2n$ and $\mu_k^t=1$ for $1 \leq k \leq 2n$. Using the map $b:H_{2n+1}^t({\mathbb Z}/r{\mathbb Z}) \to {\mathbb C}^\times$ defined by $b(l_1,m_1,\ldots, m_{2n})=x^{l_1}$ for $x \in {\mathbb C}^\times$ with $x^r=1$, we observe that the cocycle $x^{(t\sum_{1\leq i \leq n}m_i'm_{n+i})}$ is a coboundary. So, up to cohomology, we may choose $(\mu_{i,n+i})_{1 \leq i \leq n} \in ({\mathbb Z}/r{\mathbb Z})^n/\langle (x^t, x^t, x^t, \cdots, x^t)\mid x\in {\mathbb Z}/r{\mathbb Z} \rangle \cong ({\mathbb Z}/r{\mathbb Z})^{n-1} \times {\mathbb Z}/t{\mathbb Z}$. Therefore, by (\ref{eq1}), \[ {\mathrm{H}}^2(H_{2n+1}^t({\mathbb Z}/r{\mathbb Z}), {\mathbb C}^\times) \cong ({\mathbb Z}/r{\mathbb Z})^{2n^2-n-1} \times ({\mathbb Z}/t{\mathbb Z})^{2n+1}. \] {\bf (ii) Schur multiplier of $H_3^t({\mathbb Z}/r{\mathbb Z})$:} The group $G=H_3^t({\mathbb Z}/r{\mathbb Z})$ is the semidirect product of the normal subgroup $N=\langle (m,n)\rangle \cong {\mathbb Z}/r{\mathbb Z} \oplus {\mathbb Z}/r{\mathbb Z}$ and the subgroup $T=\langle p \rangle \cong {\mathbb Z}/r{\mathbb Z}$, where the action of $T$ on $N$ is defined by $p.(m,n)=(m+tpn, n)$. Here $T$ acts on ${\mathrm{Hom}}(N, {\mathbb C}^\times)$ by $(x . f)(n)=f(x.n)$ for $f \in {\mathrm{Hom}}(N, {\mathbb C}^\times), n \in N, x \in T$.
Then $${\mathrm{H}}^1(T, {\mathrm{Hom}}(N, {\mathbb C}^\times))=\frac{Z^1(T, {\mathrm{Hom}}(N, {\mathbb C}^\times))}{B^1(T, {\mathrm{Hom}}(N, {\mathbb C}^\times))},$$ where $$Z^1(T, {\mathrm{Hom}}(N, {\mathbb C}^\times))=\{f: T \to {\mathrm{Hom}}(N, {\mathbb C}^\times) \mid f(xy)=(x.f(y)) f(x) \forall x,y \in T \}$$ and $B^1(T, {\mathrm{Hom}}(N, {\mathbb C}^\times))$ consists of $f \in Z^1(T, {\mathrm{Hom}}(N, {\mathbb C}^\times))$ such that there exists $g \in {\mathrm{Hom}}(N, {\mathbb C}^\times)$ satisfying $f(x)=(x.g)g^{-1}$ for all $x \in T$. Given $\alpha \in Z^2(N, {\mathbb C}^\times)$, let $\alpha^x\in Z^2(N, {\mathbb C}^\times)$ be defined by $\alpha^x(n,n')=\alpha(x.n,x.n')$ for $x \in T$ and $n,n' \in N$. Let ${\mathrm{H}}^2(N, {\mathbb C}^\times)^T$ denote the $T$-stable subgroup of ${\mathrm{H}}^2(N, {\mathbb C}^\times)$, i.e., $${\mathrm{H}}^2(N, {\mathbb C}^\times)^T=\{[\alpha] \in {\mathrm{H}}^2(N, {\mathbb C}^\times) \mid [\alpha^x]=[\alpha]~ \forall~ x \in T\}.$$ We have the following exact sequence. \[ 1 \to {\mathrm{H}}^1(T, {\mathrm{Hom}}(N, {\mathbb C}^\times)) \xrightarrow{\psi} {\mathrm{H}}^2( G, {\mathbb C}^\times) \xrightarrow{{\mathrm{res}}} {\mathrm{H}}^2(N, {\mathbb C}^\times)^T, \] which follows from \cite[Theorem 2.2.5]{GK} and \cite[Corollary 2.5]{JP} for the finite and infinite discrete cases respectively. Here the map $\psi$ is defined by \[ \psi([\chi])((m_1, n_1,p_1),(m_2,n_2,p_2))=\chi(p_1)(m_2,n_2), \] for $\chi \in {\mathrm{H}}^1(T, {\mathrm{Hom}}(N,{\mathbb C}^\times))$. 
By Corollary \ref{lem: non-trivial-cocycle-abelian}, every cocycle $\alpha \in Z^2(N,{\mathbb C}^\times)$ is cohomologous to a cocycle of the form $\alpha((m_1,n_1), (m_2, n_2)) = \mu^{n_1m_2}$, so for $p \in T$ we have $$\alpha^{p}((m_1,n_1), (m_2, n_2))=\alpha((m_1+tpn_1,n_1), (m_2+tpn_2, n_2))=\mu^{n_1m_2+tpn_1n_2}.$$ Then $[\alpha^p]=[\alpha]$, as $$\alpha{\alpha^{p}}^{-1}((m_1,n_1), (m_2,n_2))=b(m_1,n_1)b(m_2,n_2)b(m_1+m_2,n_1+n_2)^{-1},$$ where $b:N \to {\mathbb C}^\times$ is defined by $b(m,n)=\mu^{tpn^2/2}$ (as $r$ is odd). Hence, $${\mathrm{H}}^2(N,{\mathbb C}^\times)^T={\mathrm{H}}^2(N,{\mathbb C}^\times).$$ Now, we define a map $\phi:{\mathrm{H}}^2(N,{\mathbb C}^\times) \to {\mathrm{H}}^2(G,{\mathbb C}^\times)$ given by $[\alpha] \mapsto [\phi(\alpha)]$, where \[ \phi([\alpha])((m_1,n_1,p_1),(m_2,n_2,p_2)) =\mu^{n_1m_2+tp_1\frac{n_2(n_2-1)}{2} + tp_1n_1n_2}. \] Then the composition ${\mathrm{res}} \circ \phi: {\mathrm{H}}^2(N,{\mathbb C}^\times) \to {\mathrm{H}}^2(N,{\mathbb C}^\times)$ is the identity homomorphism. Hence, $\phi$ is injective and ${\mathrm{res}}$ is surjective. Thus we have \begin{eqnarray} \label{eq: Schur multiplier} {\mathrm{H}}^2( H_3^t({\mathbb Z}/r{\mathbb Z}), {\mathbb C}^\times) \cong{\mathrm{H}}^1(T, {\mathrm{Hom}}(N,{\mathbb C}^\times)) \times {\mathrm{H}}^2(N,{\mathbb C}^\times). \end{eqnarray} From now on, we consider the cases $r=0$ and $r \in \mathbb N$ separately. \noindent {\bf Case~1: $\bm{r = 0}$.} We follow the proof of \cite[Theorem 2.11]{JP}. We show that $${\mathrm{H}}^1(T, {\mathrm{Hom}}(N,{\mathbb C}^\times)) \cong {\mathbb C}^\times.$$ Define a map $\tau: Z^1(T, {\mathrm{Hom}}(N,{\mathbb C}^\times)) \to ({\mathbb C}^\times)^2$ by $\tau(\chi)=(\chi(1)(1,0),\chi(1)(0,1))$. Then $\tau$ is injective. For $c_1, c_2 \in {\mathbb C}^\times$, define $\chi(p)(m,n)=c_1^{(mp+ tn\frac{p(p-1)}{2}) }c_2^{pn}$.
By \cite[Lemma 2.7]{JP}, it follows that $\chi \in Z^1(T, {\mathrm{Hom}}(N,{\mathbb C}^\times))$ and $\tau(\chi)=(c_1,c_2)$. So, $\tau$ is surjective. Hence, via the isomorphism $\tau$, we have \[ Z^1(T, {\mathrm{Hom}}(N,{\mathbb C}^\times)) \cong ({\mathbb C}^\times)^2. \] Here $B^1(T, {\mathrm{Hom}}(N,{\mathbb C}^\times))$ is the set of all $f:T \to {\mathrm{Hom}}(N,{\mathbb C}^\times)$ satisfying \[ f(p)(m,n)=g(m+tpn, n)g(m,n)^{-1} \text{~for some~} g \in {\mathrm{Hom}}(N,{\mathbb C}^\times) \text{~and all~} (m,n) \in N, p\in T. \] Observe that $\tau(f)=(1,g(1,0)^{t})$ and hence $\tau(B^1(T, {\mathrm{Hom}}(N,{\mathbb C}^\times)))\cong {\mathbb C}^\times$. Thus it follows that \[ {\mathrm{H}}^1(T, {\mathrm{Hom}}(N,{\mathbb C}^\times)) \cong {\mathbb C}^\times. \] Hence, by (\ref{eq: Schur multiplier}), $${\mathrm{H}}^2( H_3^t({\mathbb Z}), {\mathbb C}^\times) \cong ({\mathbb C}^\times)^2.$$ \noindent {\bf Case~2: $\bm{r \in \mathbb N}$.} For this case, our claim is that $${\mathrm{H}}^1(T, {\mathrm{Hom}}(N, {\mathbb C}^\times)) \cong {\mathbb Z}/r{\mathbb Z} \oplus {\mathbb Z}/t{\mathbb Z}.$$ Let $\zeta$ be a primitive $r$-th root of unity and write ${\mathrm{Hom}}(N,{\mathbb C}^\times) = \langle \phi_1,\phi_2 \rangle$, where $\phi_1:N \rightarrow \mathbb{C}^\times $ is defined by $\phi_1(1,0)=\zeta, \phi_1(0,1)=1$ and $\phi_2(1,0)=1, \phi_2(0,1)=\zeta$. Now, $T$ acts on ${\mathrm{Hom}}(N,\mathbb{C}^\times)$ by $^{p}{\phi_1}(1,0)=\phi_1(1,0)$ and $^{p}{\phi_1}(0,1)=\phi_1(tp,1)=\zeta^{pt}$. So, $^{p}{\phi_1}=\phi_1\phi_2^{pt}$. Similarly, it is easy to see that $^{p}{\phi_2}=\phi_2$. Now, define a map $\mathrm{Norm}:{\mathrm{Hom}}(N,\mathbb{C}^\times)\rightarrow {\mathrm{Hom}}(N,\mathbb{C}^\times)$ by \[ \mathrm{Norm}(\phi)=\prod_{p \in T} {^{p}{\phi}}. \] Consider another map $h: {\mathrm{Hom}}(N,\mathbb{C}^\times)\rightarrow {\mathrm{Hom}}(N,\mathbb{C}^\times)$ defined by $h(\phi)={^{p}{\phi}}{ \phi}^{-1}$, where $p$ is a generator of $T$.
It is a well-known result that ${\mathrm{H}}^1(T, {\mathrm{Hom}}(N,{\mathbb C}^\times)) \cong \ker(\mathrm{Norm})/\mathrm{image}(h)$ (see Step 3 in the proof of \cite[Theorem 5.4]{H}). Since $r$ is odd, it is easy to check that $\mathrm{Norm}(\phi_1)=1$ and $\mathrm{Norm}(\phi_2)=1$. Therefore, $\ker(\mathrm{Norm})=\langle \phi_1,\phi_2 \rangle$ and the image of $h$ is $\langle\phi_2^t\rangle$. Therefore, ${\mathrm{H}}^1(T, {\mathrm{Hom}}(N, {\mathbb C}^\times)) \cong {\mathbb Z}/r{\mathbb Z} \oplus {\mathbb Z}/t{\mathbb Z}$. Thus, by (\ref{eq: Schur multiplier}), $${\mathrm{H}}^2( H_3^t({\mathbb Z}/r{\mathbb Z}), {\mathbb C}^\times)\cong({\mathbb Z}/r{\mathbb Z})^2 \times {\mathbb Z}/t{\mathbb Z}.$$ \end{proof} \section{Projective representations of $H^t_{2n+1}(R)$}\label{se2} In this section, we first recall some basic definitions and results regarding projective representations of a group and then prove Theorems~\ref{thm4}, \ref{thm2}, and \ref{thm3}. Let $V$ be a complex vector space. A projective representation of a group $G$ is a homomorphism of $G$ into the projective general linear group ${\mathrm{PGL}}(V) = \mathrm{GL}(V)/Z(\mathrm{GL}(V))$. Equivalently, a projective representation is a map $\rho: G \rightarrow \mathrm{GL}(V)$ such that \[ \rho(x) \rho(y) = \alpha(x, y) \rho(xy), \,\, \forall x, y \in G, \] for suitable scalars $\alpha(x, y) \in \mathbb C^\times$. By the associativity of $\mathrm{GL}(V)$, the map $(x,y) \mapsto \alpha(x, y)$ gives a 2-cocycle of $G$, i.e., an element of $Z^2(G, \mathbb C^\times)$. We denote this cocycle by $\alpha$ itself and say that $\rho$ is an $\alpha$-representation. Two projective representations $\rho_1: G \rightarrow \mathrm{GL}(V)$ and $\rho_2: G \rightarrow \mathrm{GL}(W)$ are called projectively equivalent if there exist an invertible $T\in \mathrm{Hom}( V, W)$ and a map $b: G \rightarrow \mathbb C^\times$ such that \[ b(g)T \rho_1(g) T^{-1} = \rho_2(g) ~\forall ~ g \in G. \] Projectively equivalent representations are said to have equivalent 2-cocycles.
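A concrete example of an $\alpha$-representation (an illustration, not taken from the paper; $n = 5$ is a sample choice): the clock and shift matrices give an $\alpha$-representation of ${\mathbb Z}/n{\mathbb Z} \oplus {\mathbb Z}/n{\mathbb Z}$ whose cocycle $\alpha((m,k),(m',k')) = \omega^{km'}$, for $\omega$ a primitive $n$-th root of unity, has the shape appearing in Corollary~\ref{lem: non-trivial-cocycle-abelian}. The following sketch verifies the defining relation $\rho(x)\rho(y) = \alpha(x,y)\rho(xy)$ numerically.

```python
# Clock-and-shift alpha-representation of Z/n + Z/n (sample n = 5):
# rho(m, k) = X^m Z^k, where Z = diag(1, w, ..., w^{n-1}) and X e_j = e_{j+1};
# then rho(g) rho(h) = alpha(g, h) rho(gh) with alpha((m,k),(m',k')) = w^{k m'}.
import cmath

n = 5
w = cmath.exp(2j * cmath.pi / n)  # primitive n-th root of unity

def matmul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

Z = [[w ** i if i == j else 0 for j in range(n)] for i in range(n)]
X = [[1 if i == (j + 1) % n else 0 for j in range(n)] for i in range(n)]

def rho(m, k):  # X^m Z^k
    M = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(m):
        M = matmul(X, M)
    for _ in range(k):
        M = matmul(M, Z)
    return M

def alpha(g, h):
    return w ** (g[1] * h[0])

def scale(c, A):
    return [[c * A[i][j] for j in range(n)] for i in range(n)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-9 for i in range(n) for j in range(n))

for g in [(1, 2), (3, 4), (0, 1), (2, 0)]:
    for h in [(2, 2), (4, 1), (1, 0), (0, 3)]:
        gh = ((g[0] + h[0]) % n, (g[1] + h[1]) % n)
        assert close(matmul(rho(*g), rho(*h)), scale(alpha(g, h), rho(*gh)))
```

The relation $ZX = \omega XZ$ gives $Z^k X^{m'} = \omega^{km'} X^{m'} Z^k$, which is exactly the cocycle factor checked above.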
Thus two cocycles $\alpha, \alpha' :G \times G \rightarrow \mathbb C^\times$ are equivalent if there exists a map $b : G \rightarrow \mathbb C^\times$ such that $\alpha(x, y) = \frac{b(x) b(y)}{b(xy)} \alpha'(x, y)$ for all $x,y \in G$. In terms of the Schur multiplier, this means that if the representations $\rho_1$ and $\rho_2$ are equivalent, then their cocycles $\alpha$ and $\alpha'$ are cohomologous, i.e., $[\alpha] = [\alpha']$ in ${\mathrm{H}}^2(G, \mathbb C^\times)$. Note that to determine all projective representations of $G$ up to equivalence, it is enough to determine the projectively inequivalent $\alpha$-representations of $G$ as $\alpha$ runs over a set of 2-cocycle representatives of the elements of ${\mathrm{H}}^2(G, \mathbb C^\times)$. We further note that two projectively equivalent $\alpha$-representations $(\rho_1, V)$ and $(\rho_2, W)$ are called linearly equivalent if one can take $b(g) = 1$ for all $g \in G$. Any $\alpha$-representation $\rho$ of $G$ such that $\alpha$ is cohomologous to the trivial 2-cocycle will be called equivalent to an ordinary representation of $G$. The set of all inequivalent irreducible ordinary representations of a group $G$ will be denoted by $\mathrm{Irr}(G)$. Let $\mathrm{Irr}^{\alpha}(G)$ be the set of linearly inequivalent irreducible $\alpha$-representations of $G$. We can further assume that $\alpha$ is a normalized cocycle, i.e., $\alpha \in Z^2(G, \mathbb C^\times)$ satisfies \begin{eqnarray} \label{eq: cocycle condition} \alpha(g, 1) = \alpha (1, g) = 1,\;\; \forall g \in G. \end{eqnarray} Throughout this section, we assume that the cocycle representative of $[\alpha]$ with which we work satisfies (\ref{eq: cocycle condition}). Next, we recall the definition of a representation group (also called a covering group) of a group $G$ from \cite[Page 23]{IS4}.
\begin{definition}[Representation group of $G$] \label{defn:representation-group} A group $G^*$ is called a \emph{representation group} of $G$ if there is a central extension $$1 \rightarrow A \rightarrow G^* \rightarrow G \to 1$$ such that the corresponding transgression map $${\mathrm{tra}}: {\mathrm{Hom}}(A,{\mathbb C}^\times) \to {\mathrm{H}}^2(G,{\mathbb C}^\times)$$ is an isomorphism. \end{definition} In \cite{IS4}, Schur proved that a representation group of a finite group always exists. However, for infinite groups, the parallel result is not known; see \cite{hatui2020projective} for related results. The next result relates the projective representations of a group $G$ and those of a certain quotient group. \begin{thm}\label{inf} Let $A$ be a subgroup of a finitely generated group $G$ such that $A \subseteq G'\cap Z(G)$, and let $[\alpha]\in {\mathrm{H}}^2(G,{\mathbb C}^\times) $ be in the image of $\inf:{\mathrm{H}}^2(G/A,{\mathbb C}^\times) \to {\mathrm{H}}^2(G,{\mathbb C}^\times)$. Then $\bigcup_{\{[\beta] \in {\mathrm{H}}^2(G/A,{\mathbb C}^\times)\mid \inf([\beta])=[\alpha]\}}\mathrm{Irr}^{\beta}(G/A)$ and $\mathrm{Irr}^{\alpha}(G)$ are in bijective correspondence via inflation. \end{thm} \begin{proof} We have the following exact sequence: \[ 1 \to {\mathrm{Hom}}(A,{\mathbb C}^\times) \stackrel{{\mathrm{tra}}}\to {\mathrm{H}}^2(G/A, {\mathbb C}^\times) \stackrel{\inf}\to {\mathrm{H}}^2(G, {\mathbb C}^\times). \] Fix a $[\beta] \in {\mathrm{H}}^2(G/A, {\mathbb C}^\times)$ such that $\inf([\beta])=[\alpha]$. Due to the exactness of the above sequence, the set $\bigcup_{\chi \in {\mathrm{Hom}}(A,{\mathbb C}^\times)}[\beta] {\mathrm{tra}}(\chi)$ consists of all distinct elements of ${\mathrm{H}}^2(G/A, {\mathbb C}^\times)$ that map to $[\alpha]$ via $\inf$. Let $\rho:G \to \mathrm{GL}(V)$ be an irreducible $\alpha$-representation of $G$. Then there exists a representative of $[\beta]$, denoted by $\beta$ itself, such that $\alpha(g,h)=\beta(gA,hA)$ for all $g,h \in G$.
Therefore, for all $a \in A$ and $g\in G$, we have $\alpha(g,a)=\alpha(a,g) = 1$. Hence, $$\rho(g)\rho(a)=\rho(a)\rho(g),\,\, \forall a \in A, g\in G.$$ Since every irreducible representation in our case is countable-dimensional, by Schur's lemma (due to Dixmier for countable-dimensional complex representations), for all $a \in A$, $\rho(a)$ is a scalar multiple of the identity. Further, $\alpha(a,a') = 1$ for all $a, a' \in A$, so $\rho|_A$ is a homomorphism on $A$. Let $\mu :G/A \rightarrow G$ be a section of $G/A$ in $G$ such that $gA=\mu(gA)A $ for all $g \in G$. Every element $g \in G$ can be written uniquely as $g=a_g\mu(gA)$ for some $a_g \in A$. Note that ${\mathrm{tra}}(\rho|_A)(gA, hA)=\rho(\mu(gA)\mu(hA)\mu(ghA)^{-1})$. Now, define $\tilde{\rho}:G/A \to \mathrm{GL}(V)$ by $\tilde{\rho}(gA)=\rho(\mu(gA))$. Then \begin{equation} \left.\begin{aligned} \tilde{\rho}(gA)\tilde{\rho}(hA)\tilde{\rho}(ghA)^{-1} &= \rho(\mu(gA))\rho(\mu(hA))\rho(\mu(ghA))^{-1}\\ & =\beta(gA,hA)\rho(\mu(gA)\mu(hA))\rho(\mu(ghA))^{-1}\\ & =\beta(gA,hA)\rho(\mu(gA)\mu(hA)\mu(ghA)^{-1}\mu(ghA))\rho(\mu(ghA))^{-1}\\ & =(\beta{\mathrm{tra}}(\rho|_A))(gA,hA)\alpha^{-1}(\mu(gA)\mu(hA)\mu(ghA)^{-1},\mu(ghA))\\ & =(\beta{\mathrm{tra}}(\rho|_A))(gA,hA), \end{aligned}\right. \end{equation} where $\alpha^{-1}(\mu(gA)\mu(hA)\mu(ghA)^{-1},\mu(ghA))=1$ as $\mu(gA)\mu(hA)\mu(ghA)^{-1} \in A$. Thus $\tilde{\rho}$ is a $\beta'$-representation of $G/A$, where $[\beta']=[\beta] [{\mathrm{tra}}(\rho|_A)]$ and $\inf([\beta'])=[\alpha]$. Since $\rho$ is irreducible and $\rho(a)$ is a scalar multiple of the identity for $a\in A$, $\tilde{\rho}$ is also an irreducible representation. Define a map $$\phi : \mathrm{Irr}^{\alpha}(G) \longrightarrow \bigcup_{\{[\beta] \in {\mathrm{H}}^2(G/A,{\mathbb C}^\times)\mid \inf([\beta])=[\alpha]\}}\mathrm{Irr}^{\beta}(G/A)$$ by $\phi(\rho)=\tilde{\rho}$. It is easy to see that $\phi$ is a well-defined map. Next, we prove that $\phi$ is injective.
Suppose $\rho,\rho' \in \mathrm{Irr}^\alpha(G)$ and $\phi(\rho)=\tilde{\rho}, \phi(\rho')=\tilde{\rho}'$ are such that $\tilde{\rho}$ and $\tilde{\rho}'$ are linearly equivalent, i.e., $\tilde{\rho}'(gA)=T\tilde{\rho}(gA)T^{-1}$ for all $g \in G$ and for some $T \in \mathrm{GL}(V)$. Since $\tilde{\rho}$ and $\tilde{\rho}'$ are $\beta {\mathrm{tra}}(\rho|_A)$- and $\beta {\mathrm{tra}}(\rho'|_A)$-representations of $G/A$, respectively, ${\mathrm{tra}}(\rho|_A)={\mathrm{tra}}(\rho'|_A)$. But ${\mathrm{tra}}$ is injective, so $\rho|_A=\rho'|_A$. Now it is easy to check that $\rho'(g)=T\rho(g)T^{-1}$ for all $g \in G$. Hence, $\phi$ is injective. It remains to show that $\phi$ is surjective. Let $\tilde{\rho}:G/A \to \mathrm{GL}(V)$ be an irreducible $\beta_1$-representation such that $\inf([\beta_1])=[\alpha]$. Define $\rho:G \to \mathrm{GL}(V)$ via inflation, i.e., $\rho(g)=\tilde{\rho}(gA)$. Then $\rho$ is an irreducible $\alpha$-representation of $G$ and $\phi(\rho)=\tilde{\rho}$. \end{proof} \begin{corollary}\label{rep_group} Let $A$ be a central subgroup of a finitely generated group $G^*$ such that $G^*$ is a representation group of $G = G^*/A$. Then there is a bijection between the sets $\cup_{[\alpha]\in {\mathrm{H}}^2(G, \mathbb C^\times)} \mathrm{Irr}^\alpha(G)$ and $\mathrm{Irr}(G^*)$. \end{corollary} \begin{proof} By the definition of a representation group and the exactness of the sequence \[{\mathrm{Hom}}(G^*,{\mathbb C}^\times) \xrightarrow[]{{\mathrm{res}}} {\mathrm{Hom}}( A,{\mathbb C}^\times) \xrightarrow[]{{\mathrm{tra}}} {\mathrm{H}}^2(G, {\mathbb C}^\times) \xrightarrow{\inf} {\mathrm{H}}^2(G^*, {\mathbb C}^\times),\] the map ${\mathrm{res}}: {\mathrm{Hom}}(G^*, {\mathbb C}^\times) \rightarrow {\mathrm{Hom}}( A,{\mathbb C}^\times)$ is trivial. Hence, $A \subseteq [G^*,G^*]$. Since $\inf$ is a trivial map, the result follows from Theorem \ref{inf}.
\end{proof} \begin{corollary} \label{HG} Let $G$ be a central product of its subgroups $H$ and $K$ with $Z=H'\cap K'$. Then every projective representation of $G$ is obtained from a projective representation of $G/Z$ via inflation. \end{corollary} \begin{proof} By Theorem \ref{C1}, it follows that $\inf:{\mathrm{H}}^2(G/Z,{\mathbb C}^\times)\to {\mathrm{H}}^2(G, {\mathbb C}^\times)$ is a surjective map. Therefore, the result follows from Theorem~\ref{inf}. \end{proof} \subsection{\textbf{Proof of Theorem \ref{thm2} and Corollary \ref{thm5} }} \label{subsec:central-product-proof} For $n>1$, $H_{2n+1}^t(R)$ is a central product of $H_{2n-1}^t(R)$ and $H_3^t(R)$. We obtain a natural homomorphism from ${\mathrm{H}}^2(R^{2n}\oplus R/tR, {\mathbb C}^\times)$ to $ {\mathrm{H}}^2(H_{2n+1}^t(R), \mathbb C^\times)$ via inflation. Let $[\alpha]$ be a cohomology class of $H_{2n+1}^t(R)$. We obtain the following from Theorem~\ref{C1} and Corollary~\ref{HG}. \begin{enumerate}[label=(\roman*)] \item The inflation map from ${\mathrm{H}}^2(R^{2n}\oplus R/tR, \mathbb C^\times)$ to ${\mathrm{H}}^2(H_{2n+1}^t(R), \mathbb C^\times)$ is surjective. \item Every irreducible $\alpha$-representation of $H_{2n+1}^t(R)$ is the inflation of an irreducible $\beta$-representation of $R^{2n}\oplus R/tR$ for some $\beta \in Z^2(R^{2n}\oplus R/tR, {\mathbb C}^\times)$ such that $[\alpha] = \mathrm{inf}([\beta])$. \end{enumerate} The proof of Theorem~\ref{thm2} now follows from (ii). Similarly, the group $ES_{2n+1}(p^2)$ is a central product of $ES_{2n-1}(p^2)$ and $ES_3(p^2)$; hence Corollary~\ref{thm5}(ii) again follows from Corollary~\ref{HG}. \subsection{Proof of Theorem \ref{thm3}} \begin{proof} For finite abelian groups, it follows from \cite[Theorem 5.4 in Chapter 3]{GK1} that $ \mathcal{F}_{n}(r,t)$ is a representation group of $({\mathbb Z}/r{\mathbb Z})^n \oplus {\mathbb Z}/t{\mathbb Z}$. Hence, in the proof below, we assume that $r=0$ and $t$ is a positive integer.
However, we remark that the following proof also works for $r \in \mathbb N$ and is not the same as the one that appears in \cite[Theorem 5.4 in Chapter 3]{GK1}. Consider $Z=\langle z_{ij}, 1\leq i <j\leq n+1 \rangle$, a central subgroup of $\mathcal{F}_{n}(r,t)$. There exists a central extension $$1 \to Z \to \mathcal{F}_{n}(r,t) \xrightarrow{\pi} {\mathbb Z}/t{\mathbb Z} \oplus ({\mathbb Z}/r{\mathbb Z})^n \to 1,$$ where $\pi$ is defined by $\pi(\prod_{i=1}^{n+1}x_i^{m_i}\prod_{1\leq i <j\leq n+1 } z_{ij}^{k_{ij}})=(m_1, m_2,\ldots ,m_{n+1})$. Then we have the exact sequence $$1\to {\mathrm{Hom}}(Z,{\mathbb C}^\times) \xrightarrow{{\mathrm{tra}}} {\mathrm{H}}^2(\mathcal{F}_{n}(r,t)/Z, {\mathbb C}^\times) \xrightarrow{\inf} {\mathrm{H}}^2( \mathcal{F}_{n}(r,t), {\mathbb C}^\times).$$ We want to show that $\inf$ is a trivial homomorphism. Let $X=\prod_{i=1}^{n+1}x_i^{m_i}\prod_{1\leq i <j\leq n+1 } z_{ij}^{k_{ij}}$ and $Y=\prod_{i=1}^{n+1}x_i^{m'_i}\prod_{1\leq i <j\leq n+1} z_{ij}^{k'_{ij}}$ be two elements of $ \mathcal{F}_{n}(r,t).$ Then the element $XY$ is of the following form: \begin{eqnarray*} XY&=&x_1^{m_1}x_2^{m_2}\ldots x_{n+1}^{m_{n+1}}. x_1^{m'_1}x_2^{m'_2}\ldots x_{n+1}^{m'_{n+1}}.\prod_{1\leq i <j\leq n+1 } z_{ij}^{k_{ij}+k'_{ij}}\\ &=&x_1^{m_1+m'_1}x_2^{m_2+m'_2}\ldots x_{n+1}^{m_{n+1}+m'_{n+1}}. \prod_{1\leq i <j\leq n+1} z_{ij}^{k_{ij}+k'_{ij}-m'_im_j}. \end{eqnarray*} Let $\alpha \in Z^2(\mathcal{F}_{n}(r,t)/Z, {\mathbb C}^\times) $ and $\inf([\alpha])=[\beta]$. Then by Lemma \ref{l1}, \begin{eqnarray*} \beta(X,Y) &=&\alpha(\pi(X),\pi(Y))= \alpha\big((m_1, m_2,\ldots ,m_{n+1}), (m'_1, m'_2,\ldots ,m'_{n+1})\big)\\ &=& \prod_{1 \leq i < j \leq n+1}\mu_{i,j}^{m'_im_j}, \end{eqnarray*} for some $\mu_{i,j}\in {\mathbb C}^\times$. 
Define a function $\tau: \mathcal{F}_{n}(r,t) \to {\mathbb C}^\times$ by $$\tau(x_1^{m_1}x_2^{m_2}\ldots x_{n+1}^{m_{n+1}}\prod_{1\leq i <j\leq n+1 } z_{ij}^{k_{ij}})=\prod_{1\leq i <j\leq n+1 } \mu_{i,j}^{-k_{ij}}.$$ Now we have \begin{eqnarray*} &&\tau(X)^{-1}\tau(Y)^{-1}\tau(XY)\\ &&=\prod_{1\leq i <j\leq n+1 } \mu_{i,j}^{k_{ij}} \prod_{1\leq i <j\leq n+1 } \mu_{i,j}^{k'_{ij}} \prod_{1\leq i <j\leq n+1 } \mu_{i,j}^{-k_{ij}-k'_{ij}+m_i'm_j}\\ &&=\prod_{1\leq i <j\leq n+1}\mu_{i,j}^{m_i'm_j}=\beta(X,Y). \end{eqnarray*} Hence, $\beta$ is, in fact, a coboundary, and therefore $\inf$ is trivial. This along with Theorem \ref{inf} and Lemma \ref{l1} completes the proof. \end{proof} \subsection{\textbf{Proof of Theorem \ref{thm4}}} \begin{proof} Consider $Z=\langle z_1,z_2, z^r \rangle$, which is a central subgroup of $\widehat{H}(r,t).$ Now consider the central extension \[ 1 \to Z \to \widehat{H}(r,t) \xrightarrow{\pi}H_3^{t}({\mathbb Z}/r{\mathbb Z}) \to 1, \] where $\pi$ is defined by $\pi(z_1^{k_1}z_2^{l_1}z^{m_1}y^{n_1}x^{p_1})=(m_1, n_1,p_1)$. Then we have the following exact sequence. \[ 1\to {\mathrm{Hom}}(Z,{\mathbb C}^\times) \xrightarrow{{\mathrm{tra}}} {\mathrm{H}}^2(H_3^t({\mathbb Z}/r{\mathbb Z}), {\mathbb C}^\times) \xrightarrow{\inf} {\mathrm{H}}^2( \widehat{H}(r,t),{\mathbb C}^\times). \] We have the following relations in $\widehat{H}(r,t)$. \begin{eqnarray*} &&[x^n,y]=[x,y]^n[x,z^t]^{\frac{n(n-1)}{2}}=z^{tn}z_1^{\frac{tn(n-1)}{2}},\\ &&[x,y^n]=[x,y]^n[y,z^t]^{\frac{n(n-1)}{2}}=z^{tn}z_2^{\frac{tn(n-1)}{2}},\\ &&[x^m,y^n]=z^{tmn}z_1^{tn\frac{m(m-1)}{2}}z_2^{tm\frac{n(n-1)}{2}}. \end{eqnarray*} Let $X=z_1^{k_1}z_2^{l_1}z^{m_1}y^{n_1}x^{p_1}$ and $Y=z_1^{k_2}z_2^{l_2}z^{m_2}y^{n_2}x^{p_2}$ be two elements of $\widehat{H}(r,t).$ Then $XY=z_1^{k_1}z_2^{l_1}z^{m_1}y^{n_1}x^{p_1}.z_1^{k_2}z_2^{l_2}z^{m_2}y^{n_2}x^{p_2} $ has the following expression. 
\begin{eqnarray*} z_1^{k_1+k_2+m_2p_1+ tn_2\frac{p_1(p_1-1)}{2} }z_2^{l_1+l_2+n_1m_2+tp_1\frac{n_2(n_2-1)}{2} +tp_1n_1n_2 }z^{m_1+m_2+tp_1n_2}y^{n_1+n_2}x^{p_1+p_2}. \end{eqnarray*} We first assume that $r \in \mathbb N$. Then by Lemma \ref{l1}$(ii)$, every $\alpha \in Z^2(H_3^t({\mathbb Z}/r{\mathbb Z}),{\mathbb C}^\times)$ is cohomologous to a cocycle of the form \[\alpha((m_1,n_1,p_1), (m_2, n_2, p_2))=\lambda^{m_2p_1+ tn_2\frac{p_1(p_1-1)}{2} }\mu^{n_1m_2+tp_1\frac{n_2(n_2-1)}{2} +tp_1n_1n_2}\delta^{p_1n_2}, \] for some $\lambda,\mu, \delta \in {\mathbb C}^\times$. Let $\alpha \in Z^2(H_3^t({\mathbb Z}/r{\mathbb Z}), {\mathbb C}^\times)$. Then $\inf([\alpha])=[\beta]$ is given by \begin{eqnarray*} \beta(X,Y) &=& \alpha(\pi(X), \pi(Y))\\ &=&\alpha((m_1,n_1,p_1), (m_2, n_2, p_2))\\ &=&\lambda^{m_2p_1+ tn_2\frac{p_1(p_1-1)}{2} }\mu^{n_1m_2+tp_1\frac{n_2(n_2-1)}{2} +tp_1n_1n_2}\delta^{p_1n_2}. \end{eqnarray*} Define a function $b: \widehat{H}(r,t) \to {\mathbb C}^\times$ by $b(z_1^{k_1}z_2^{l_1}z^{m_1}y^{n_1}x^{p_1})=\lambda^{k_1}\mu^{l_1}\delta_1^{m_1}$, where $\delta_1 \in {\mathbb C}^\times$ is such that $\delta_1^{rt}=1$ and $\delta_1^t=\delta$. The existence of such a $\delta_1$ follows from $t \mid r$. Then we have \begin{eqnarray*} &&b(X)^{-1}b(Y)^{-1}b(XY)\\ &&=\lambda^{m_2p_1+ t n_2\frac{p_1(p_1-1)}{2} }\mu^{n_1m_2+tp_1\frac{n_2(n_2-1)}{2} +tp_1n_1n_2}\delta^{p_1n_2}\\ &&=\beta(X,Y). \end{eqnarray*} Therefore, $\inf$ is trivial. By Theorem \ref{inf} and Lemma \ref{l1}, our result follows for $r \in \mathbb{N}$. For $r=0$, the proof proceeds along the same lines, with the function $b: \widehat{H}(r,t) \to {\mathbb C}^\times$ defined by $b(z_1^{k_1}z_2^{l_1}z^{m_1}y^{n_1}x^{p_1})=\lambda^{k_1}\mu^{l_1}$. 
\end{proof} \section{Ordinary representations of $\widehat{H}(r,t)$ and $\mathcal{F}_n(r,t)$ for $ r \in \mathbb N \cup \{0\}$}\label{se3} In this section, we discuss methods to obtain the irreducible representations of $\widehat{H}(r,t)$ and $\mathcal{F}_n(r,t)$ in both the finite and the discrete cases. For this, we first define induction in the discrete case and state some of the required results. Then we prove a general statement that gives a uniform construction of the irreducible representations of $\widehat{H}(r,t)$ and $\mathcal{F}_n(r,t)$. We use the notation $\mathrm{Irr}(G)$ to denote the set of isomorphism classes of all irreducible ordinary representations of $G$. Let $\mathrm{Irr}^\circ(G) = \{\rho \in \mathrm{Irr}(G) \mid \dim(\rho) < \infty \}$. For a normal subgroup $N$ of $G$ and $\rho \in \mathrm{Irr}^\circ(N)$, the sets $\{ \delta \in \mathrm{Irr}(G) \mid \langle \delta|_N, \rho \rangle \neq 0 \}$ and $\{ \delta \in \mathrm{Irr}^\circ(G) \mid \langle \delta|_N, \rho \rangle \neq 0 \}$ are denoted by $\mathrm{Irr}(G \mid \rho)$ and $\mathrm{Irr}^\circ(G \mid \rho)$ respectively. We use the following definition of induced representation for discrete groups. This is an analogue of compact induction for Lie groups and has already been explored in the literature; see for example Parshin~\cite[Definition 1]{Parshin}. \begin{definition}(Induced representation) \label{defn:induced} Let $H$ be a subgroup of a finitely generated group $G$ and $(\rho, W)$ be a representation of $H$. The induced representation $(\widetilde{\rho}, \widetilde{W})$ of $\rho$ from $H$ to $G$ has representation space $\widetilde{W}$ consisting of functions $f: G \rightarrow W$ satisfying the following: \begin{enumerate} \item $f(hg) = \rho(h) f(g)$ for all $g \in G$ and $h \in H$. \item The support of $f$ is contained in a union of finitely many right cosets of $H$ in $G$. 
\end{enumerate} The homomorphism $\widetilde{\rho}: G \rightarrow \mathrm{Aut}(\widetilde{W})$ is given by $\widetilde{\rho}(g)f(x) = f(xg)$ for all $x, g \in G$. We denote this induced representation by ${\text{Ind}}_H^G(\rho)$. \end{definition} We note that it agrees with the usual definition of induction for finite groups. We use a few standard properties of the above induction in the next result; see~\cite[Remark~2.6]{NaSi2018} for exact results used. \begin{proposition} \label{proposition: general construction} Let $G$ be a finitely generated discrete group with a normal subgroup $N$ such that $G/ N $ is cyclic. Let $(\rho, V)$ be an irreducible representation of $N$ and let $I_G(\rho) = \{ g \in G \mid \rho^g \cong \rho\}$ be the inertia group of $\rho$ in $G$. Then the following are true. \begin{enumerate} \item The representation $\rho$ extends to $I_G(\rho)$. \item Any $\delta \in \mathrm{Irr}^\circ(I_G(\rho))$ such that $\langle \rho, \delta|_N\rangle \neq 0$ satisfies the following. \begin{enumerate} \item $\delta|_N = \rho$. \item The representation ${\text{Ind}}_{I_G(\rho)}^G(\delta)$ is irreducible. \end{enumerate} \item For $|G/I_G(\rho) | < \infty$, the sets $\mathrm{Irr}^\circ(I_G(\rho) \mid \rho) $ and $ \mathrm{Irr}^\circ(G \mid \rho)$ are in bijection via $\delta \mapsto {\text{Ind}}_{I_G(\rho)}^G(\delta)$. \end{enumerate} \end{proposition} \begin{proof} For the finite group $G$, (1) is well known; see~\cite[Theorem~11.7]{MI}. We remark that the proof of the above-cited result also works for infinite cases as long as ${\mathrm{H}}^2(G/N, \mathbb C^\times) = 1$. This fact is well known for discrete cyclic groups. Therefore, the result follows in this case also. For finite groups, both (2) and (3) are consequences of the Clifford theory. So, we only deal with the case of infinite discrete group $G$. Let $(\delta, W)$ be a finite-dimensional representation of $I_G(\rho)$ such that $\langle \rho, \delta|_N \rangle \neq 0$. 
Let $y \in G$ be such that $I_G(\rho)/N = \langle yN \rangle $. Then we have $V \subseteq W$. For $V = W$, we are done. Otherwise, there exists a smallest $t \in \mathbb N$ such that $W = V \oplus V^y \oplus V^{y^2} \oplus \cdots \oplus V^{y^{t-1}}$ and $V^{y^t} = V$. Here we have used the fact that $W$ is finite-dimensional and both $V$ and $W$ are irreducible. Consider the subgroup $S = \langle y^t \rangle$ of $I_G(\rho)$ and the action of $N_t = \langle N, S \rangle$ on the finite-dimensional space $V$ via $\delta$. Then by (1), the representation $\rho$ extends to a representation $\tilde{\rho}$ of $N_t$ such that $\tilde{\rho}|_N = \rho$ and $\langle \tilde{\rho}, \delta|_{N_t} \rangle \neq 0$. The group $N_t$ is a finite index subgroup of $I_G(\rho)$. Therefore, by Frobenius reciprocity, we obtain $\langle {\text{Ind}}_ {N_t}^{I_G(\rho)}(\tilde{\rho}), \delta \rangle \neq 0 $. We note that ${\text{Ind}}_ {N_t }^{I_G(\rho)}(\tilde{\rho})$ is a finite-dimensional representation. By part (1) we obtain $${\text{Ind}}_ {N_t }^{I_G(\rho)}(\tilde{\rho}) \cong \oplus_{\chi \in \widehat{I_G(\rho)/ N_t } } \tilde{\rho}\otimes \chi.$$ Therefore, $\tilde{\rho}\otimes \chi \cong \delta$ for some $\chi \in \widehat{I_G(\rho)/ N_t }$. This implies $\delta|_{N} = \rho$. Next, we note that \[ \mathrm{End}_G({\text{Ind}}_{I_G(\rho)}^G(\delta)) \cong \oplus_{g \in G/I_G(\rho)} \mathrm{Hom}_{I_{G}(\rho)}(\delta, \delta^g). \] By definition of $I_G(\rho)$ and the fact that $\delta|_N = \rho$, we have $\mathrm{Hom}_{I_{G}(\rho)}(\delta, \delta^g) \neq 0$ for $g \in G/ I_G(\rho)$ if and only if $g \in I_G(\rho)$. This implies that $\mathrm{End}_G({\text{Ind}}_{I_G(\rho)}^G(\delta)) \cong \mathbb C$, that is, ${\text{Ind}}_{I_G(\rho)}^G(\delta)$ is Schur irreducible. By \cite[Theorem~3.1]{NaSi2018}, we obtain that ${\text{Ind}}_{I_G(\rho)}^G(\delta)$ is irreducible. 
Finally, (3) follows from the definition of $I_G(\rho)$, the equality $\delta|_{N} = \rho$, and the fact that \[ {\text{Ind}}_{I_G(\rho)}^G(\delta) |_{I_G(\rho)} \cong \oplus_{g \in G/I_G(\rho)} \delta^g. \] \end{proof} \subsection{Construction for two-step nilpotent groups} \label{subsection:two-step construction} In this subsection, we outline a well-known method to construct all finite-dimensional irreducible representations of a two-step nilpotent group $G$. \begin{enumerate} \item Let $\chi: Z(G) \rightarrow \mathbb C^\times$ be a one-dimensional character of $Z(G)$ such that $\chi|_{G'}$ is of finite order. Define the bilinear form $$\beta_\chi: G/Z(G) \times G/Z(G) \rightarrow \mathbb C^\times;\,\, \beta_\chi(x Z(G), yZ(G)) = \chi([x,y])$$ \item Let $R_\chi = \{ g \in G \mid \beta_\chi(g,g') = 1\,\, \forall \,\, g' \in G \}$. Then the character $\chi$ extends to $R_\chi$. \item For every $\tilde{\chi} \in \mathrm{Irr}^\circ(R_\chi \mid \chi)$, there exists a unique irreducible representation, denoted $\rho_{\tilde{\chi}} \in \mathrm{Irr}(G \mid \chi)$. \item By \cite[Theorem~1.3]{NaSi2018}, $\rho_{\tilde{\chi}} \in \mathrm{Irr}^\circ(G \mid \chi)$ because $\chi|_{G'}$ has finite order. Furthermore, we have $\dim(\rho_{\tilde{\chi}} ) = \sqrt{|G|/|R_\chi|}$. \item The map $\tilde \chi \mapsto \rho_{\tilde \chi}$ gives a bijection between the sets $\mathrm{Irr}^\circ(R_\chi \mid \chi )$ and $\mathrm{Irr}^\circ(G \mid \chi)$. \end{enumerate} The benefit of this method over Mackey Theory for two-step nilpotent groups lies in the fact that many properties of irreducible representations can be easily deduced from this construction. For example, the fact that every finite-dimensional irreducible representation of a two-step nilpotent group is monomial follows directly from the above construction. Also, determining the dimensions of all finite-dimensional irreducible representations is easier in this case. 
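To illustrate the dimension count in step (4), here is a minimal computational sketch for the finite Heisenberg group $H_3(\mathbb{Z}/p\mathbb{Z})$. The coordinates $(a,b,c)$, the multiplication rule, and the helper name are our own choices for the illustration, not notation from the text; the central character is $\chi(z^c)=\zeta_p^{kc}$, and we only track exponents of $\zeta_p$.

```python
from math import isqrt

def radical_and_dim(p, k):
    """Sketch: for G = H_3(Z/pZ) with elements (a, b, c) multiplied by
    (a,b,c)*(a',b',c') = (a+a', b+b', c+c'+a*b') mod p, and the central
    character chi(z^c) = zeta_p^(k*c), return |R_chi| and the predicted
    dimension sqrt(|G|/|R_chi|) of the irreducibles lying over chi."""
    G = [(a, b, c) for a in range(p) for b in range(p) for c in range(p)]
    # The commutator [(a,b,c), (a',b',c')] is central with exponent
    # a*b' - a'*b mod p, so beta_chi(x, y) = zeta_p^(k*(x0*y1 - y0*x1));
    # we work with the exponent mod p instead of the root of unity.
    def beta_exp(x, y):
        return k * (x[0] * y[1] - y[0] * x[1]) % p
    R = [g for g in G if all(beta_exp(g, h) == 0 for h in G)]
    return len(R), isqrt(len(G) // len(R))
```

For $k$ coprime to $p$ one finds $R_\chi=Z(G)$, so the irreducible representations lying over a faithful central character have dimension $p$; for $k=0$ the form $\beta_\chi$ is trivial and $R_\chi=G$.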
For example, the construction implies that all $\rho \in \mathrm{Irr}^\circ(G \mid \chi)$ satisfy $\dim(\rho) = \sqrt{|G|/|R_\chi|}$. \subsection{ Irreducible representations of $\widehat{H}(r,t)$ and $\mathcal{F}_n(r,t)$. } The group $\mathcal{F}_n(r,t)$ is a two-step nilpotent group. So, its ordinary representations can be directly obtained from its central characters as in Section~\ref{subsection:two-step construction}. However, below, by using Proposition~\ref{proposition: general construction} and Section~\ref{subsection:two-step construction}, we indicate a method that works for both $\mathcal{F}_n(r,t)$ and $\widehat{H}(r,t)$. Consider the subgroups $N_H = \langle x, z, z_1, z_2 \rangle $ and $N_F = \langle x_k , z_{ij}, z_k \mid 1 \leq k \leq n, 1 \leq i < j \leq n \rangle$ of $\widehat{H}(r,t)$ and $\mathcal{F}_n(r,t)$ respectively. Then $N_H$ and $N_F$ are normal subgroups of $\widehat{H}(r,t)$ and $\mathcal{F}_n(r,t)$ such that $\widehat{H}(r,t)/N_H$ and $\mathcal{F}_n(r,t)/N_F$ are cyclic. We note that both $N_F$ and $N_H$ are two-step nilpotent groups. Therefore, their irreducible representations are obtained from a one-dimensional representation of the radical of each central character, as described in Section~\ref{subsection:two-step construction}. So, it remains to determine the inertia groups of these representations of $N_H$ and $N_F$ in $\widehat{H}(r,t)$ and $\mathcal{F}_n(r,t)$ respectively; the construction is then completed by Proposition~\ref{proposition: general construction}. \section*{Acknowledgements} The authors are grateful to E. K. Narayanan for very insightful discussions regarding this project and for his constant encouragement. They are also indebted to Yuval Ginosar for carefully reading this article and for providing many helpful comments. The authors also thank the referee for careful reading and comments. 
SH acknowledges the NBHM grant (0204/52/2019/R$\&$D-II/333) and PS acknowledges the SERB MATRICS grant (MTR/2018/000094). Both SH and PS thank the UGC CAS-II grant (Grant No. F.510/25/CAS-II/2018(SAP-I)) at the Indian Institute of Science, Bangalore.
https://arxiv.org/abs/1006.2562
On a notion of "Galois closure" for extensions of rings
We introduce a notion of "Galois closure" for extensions of rings. We show that the notion agrees with the usual notion of Galois closure in the case of an S_n degree n extension of fields. Moreover, we prove a number of properties of this construction; for example, we show that it is functorial and respects base change. We also investigate the behavior of this Galois closure construction for various natural classes of ring extensions.
\section{Introduction} Let $A$ be any {ring of rank $n$ over a base ring $B$}, i.e., a $B$-algebra that is free of rank $n$ as a $B$-module. In this article, we investigate a natural definition for the ``Galois closure'' $G(A/B)$ of the ring $A$ as an extension of $B$.\footnote{All rings are assumed to be commutative with unity.} The definition is as follows. For an element $a\in A$, let \begin{equation}\label{pa} P_a(x)=x^n-s_1(a)x^{n-1}+s_2(a) x^{n-2}+\cdots+(-1)^n s_n(a) \end{equation} be the characteristic polynomial of $a$, i.e., the characteristic polynomial of the $B$-module transformation $\times a:A\to A$ given by multiplication by $a$. Furthermore, for an element $a\in A$, let $a^{(1)}$, $a^{(2)}$, \ldots, $a^{(n)}$ denote the elements $a\otimes 1\otimes 1\otimes\cdots\otimes1$, $1\otimes a\otimes 1\otimes\cdots\otimes1$, $\ldots$, $1\otimes 1\otimes 1\otimes\cdots\otimes a$ in $A^{\otimes n}$ respectively. Let $I(A,B)$ denote the ideal in $A^{\otimes n}$ generated by all expressions of the form \begin{equation}\label{fundrels} s_j(a)\;-\sum_{1\leq i_1< i_2< \ldots< i_j\leq n} a^{(i_1)}a^{(i_2)}\cdots a^{(i_j)}\end{equation} where $a\in A$ and $j\in\{1,2,\ldots,n\}$. Note that the symmetric group $S_n$ naturally acts on $A^{\otimes n}$ by permuting the tensor factors, and the ideal $I(A,B)\subset A^{\otimes n}$ is preserved under this $S_n$-action. We define \begin{equation}\label{gcdef} G(A/B)=A^{\otimes n}/I(A,B), \end{equation} and we call $G(A/B)$ the ``$S_n$-closure'' of $A$ over $B$. Since $I(A,B)$ is $S_n$-invariant, we see that the action of $S_n$ on $A^{\otimes n}$ also descends to an $S_n$-action on $G(A/B)$. One easily checks (see Theorem~\ref{fieldcase} below) that if $A/B$ is a degree $n$ extension of fields having associated Galois group $S_n$, then $G(A/B)$ is indeed simply the Galois closure of $A$ as a field extension of $B$. 
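As a direct machine check of the definition (\ref{gcdef}) in the smallest nontrivial case, the following sketch computes $\dim_{\mathbb{Q}} G(A/\mathbb{Q})$ for $A=\mathbb{Q}[x]/(x^2-x-1)$ by linear algebra in $A^{\otimes 2}$. All helper names are ours; the only inputs from the text are the relations (\ref{fundrels}), with $s_1(x)=1$ and $s_2(x)=-1$ read off from the characteristic polynomial $x^2-x-1$.

```python
from itertools import product
from sympy import Matrix, zeros

# A = Q[x]/(x^2 - x - 1); basis of A⊗A: x^i ⊗ x^j for (i, j) in {0,1}^2
basis = list(product(range(2), repeat=2))
idx = {b: k for k, b in enumerate(basis)}

def red(p):
    # reduce x^p via x^2 = x + 1; returns {power: coefficient}
    return {p: 1} if p < 2 else {0: 1, 1: 1}

def mult_vec(a, b):
    # product of the basis elements x^a0⊗x^a1 and x^b0⊗x^b1, as a coordinate vector
    v = zeros(len(basis), 1)
    for p, cp in red(a[0] + b[0]).items():
        for q, cq in red(a[1] + b[1]).items():
            v[idx[(p, q)]] += cp * cq
    return v

def times_basis(v, b):
    # multiply an arbitrary element (coordinate vector v) by the basis element b
    out = zeros(len(basis), 1)
    for k, c in enumerate(v):
        if c != 0:
            out += c * mult_vec(basis[k], b)
    return out

def ideal_rank(gens):
    # close the span of the generators under multiplication by basis elements
    vecs = list(gens)
    while True:
        r = Matrix.hstack(*vecs).rank()
        vecs += [times_basis(v, b) for v in list(vecs) for b in basis]
        if Matrix.hstack(*vecs).rank() == r:
            return r

# The two defining relations, with basis order (0,0), (0,1), (1,0), (1,1):
r1 = Matrix([1, -1, -1, 0])    # s_1(x) - (x⊗1 + 1⊗x) = 1 - x⊗1 - 1⊗x
r2 = Matrix([-1, 0, 0, -1])    # s_2(x) - (x⊗1)(1⊗x) = -1 - x⊗x
dim_G = len(basis) - ideal_rank([r1, r2])
```

One finds $\dim_{\mathbb{Q}} G(A/\mathbb{Q})=2$, consistent with Theorem~\ref{fieldcase}: here $A=\mathbb{Q}(\sqrt{5})$ is already Galois over $\mathbb{Q}$, so $r=2!/2=1$ and $G(A/\mathbb{Q})\cong\widetilde{A}=A$.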
Thus our definition of $S_n$-closure in a sense naturally extends the usual notion of Galois closure to rank $n$ ring extensions. In fact, our definition above also naturally extends to $B$-algebras $A$ that are {\it locally free of rank} $n$, i.e., those $A$ for which there exist $b_1,\dots,b_m\in B$ such that $\sum Bb_i=1$ and $A\otimes_B B_{b_i}$ is free of rank $n$ over the localization $B_{b_i}$.\footnote{The condition that {\it $A$ is locally free of rank $n$ as a $B$-module} is also equivalent to either of the following two natural conditions: (a) $A$ is finitely generated and projective of constant rank $n$ as a $B$-module; (b) $A$ is finitely presented and $A_M$ is free of rank $n$ as a $B_M$-module for all maximal ideals $M$ of $B$. \,(See, e.g., \cite[Thm.~4.6]{Lenstra}.)} For such $A$, we can define the characteristic polynomial $P_a$ of an element $a\in A$ as follows. First, we have a natural isomorphism \begin{equation}\label{iso} A\otimes_{B}\textrm{Hom}_B(A,B)\rightarrow \textrm{End}_B(A), \end{equation} where for $B$-modules $M,N$ we use $\textrm{Hom}_B(M,N)$ to denote the set of $B$-module homomorphisms from $M$ to $N$, and we use $\textrm{End}_B(M)$ to denote $\textrm{Hom}_B(M,M)$. Indeed, (\ref{iso}) gives an isomorphism locally on $B_{b_i}$ (since $A_{b_i}$ is free over $B_{b_i}$), and hence it is an isomorphism globally. Next, if $f$ is any $B$-module endomorphism of $A$, then the trace of $f$ is defined to be the image of $f$ under the canonical map \[ {\rm Tr}^A_B:\textrm{End}_B(A)\cong A\otimes_{B}\textrm{Hom}_B(A,B)\rightarrow B. \] Finally, given an element $a\in A$, we obtain a $B$-module endomorphism of $A$ given by $\times a:A\to A$. We let $s_j(a)$ be the trace of the induced $B$-module endomorphism of $\bigwedge^j A$. Note that the {\it Cayley-Hamilton Theorem} carries over to this setting, as $P_a(a)$ is locally zero, hence globally zero. We can then define $I(A,B)$ and $G(A/B)$ as in (\ref{fundrels}) and (\ref{gcdef}). 
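For free $A$, the coefficients $s_j(a)$ are just the (signed) coefficients of the characteristic polynomial of the matrix of $\times a$ in any chosen basis, so they are easy to compute in examples. The sketch below (the function name is ours) checks this for $A=\mathbb{Q}[x]/(x^3-2)$, where $P_x(t)=t^3-2$.

```python
from sympy import Matrix, symbols

t = symbols('t')

def char_coeffs(M):
    """Coefficients s_1, ..., s_n of P_a(t) = t^n - s_1 t^{n-1} + ... + (-1)^n s_n,
    where M is the matrix of multiplication by a in a chosen B-basis of A."""
    n = M.shape[0]
    c = M.charpoly(t).all_coeffs()      # [1, -s_1, s_2, -s_3, ...]
    return [(-1) ** j * c[j] for j in range(1, n + 1)]

# A = Q[x]/(x^3 - 2), basis {1, x, x^2}; columns record x*1 = x,
# x*x = x^2, and x*x^2 = 2
M = Matrix([[0, 0, 2],
            [1, 0, 0],
            [0, 1, 0]])
```

Here `char_coeffs(M)` returns $[0, 0, 2]$: the trace $s_1(x)$ vanishes and the norm $s_3(x)$ equals $2$, matching $P_x(t)=t^3-2$.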
The notion of $S_n$-closure has a number of interesting properties, which we consider in this article. The first property that should be mentioned is that \begin{theorem} \label{thm:functorial} The $S_n$-closure construction is functorial. \end{theorem} In other words, the construction of $S_n$-closure commutes with base change; more precisely, if $A$ is a ring of rank $n$ over $B$, and $C$ is any ring, then $G((A\otimes_B C)/(B\otimes_B C))=G(A/B)\otimes_B C$. In the case of an extension of fields, we have \begin{theorem}\label{fieldcase} Let $B$ be a field, and suppose $A$ is a separable field extension of $B$ of degree $n$. Let $\widetilde{A}$ be a Galois closure of $A$ over $B$, and let $r=\frac{n!}{{\rm deg}({\widetilde{A}/B})}$. Then \[G(A/B)\cong\widetilde{A}^{r} \] as $B$-algebras. \end{theorem} In particular, if ${\rm deg}(\widetilde{A}/B)=n!$ $($i.e., Gal($\widetilde{A}/B)=S_n)$, then $G(A/B)\cong\widetilde{A}$ as $B$-algebras. Next, we consider the case where $A$ is {\it monogenic} over $B$, i.e., $A$ is generated by one element as a $B$-algebra. Then we have \begin{theorem}\label{monocase} Suppose $A$ is a ring of rank $n$ over $B$ such that $A=B[\alpha]$ for some $\alpha\in A$. Then $G(A/B)$ is a ring of rank $n!$ over $B$. More generally, if $A$ is locally free of rank $n$ over $B$ and is locally generated by one element, then $G(A/B)$ is locally free of rank $n!$ over $B$. \end{theorem} Now, if $B$ is any ring, then we may examine the natural ring $A=B^n$ having rank $n$ over $B$. More generally, we may consider those locally free rings $A$ of rank $n$ that are {\it \'etale} over $B$, i.e., those $A$ for which the determinant of the bilinear form $\langle a,a'\rangle={\rm Tr}^A_B(aa')$---called the {\it discriminant} ${\rm Disc}(A/B)$ of $A$ over $B$---is a unit in $B$ (equivalently, those $A$ for which the map $\Phi:A\to{\rm Hom}_B(A,B)$ given by $a\mapsto (a'\mapsto{\rm Tr}^A_B(aa'))$ is a $B$-module isomorphism). 
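The discriminant just defined can be computed directly from multiplication matrices: the Gram matrix of the trace form has entries ${\rm Tr}^A_B(e_ie_j)$. Here is a small sketch (names ours) checking that $A=\mathbb{Z}[x]/(x^2-x-1)$ has ${\rm Disc}(A/\mathbb{Z})=5$, so that $A$ is \'etale over precisely those bases in which $5$ is a unit.

```python
from sympy import Matrix, eye

def discriminant(mult_mats):
    """Disc(A/B) = det(Tr(e_i e_j)) for a basis e_1, ..., e_n of A, where
    mult_mats[i] is the matrix of multiplication by e_i; the trace-form
    entry Tr(e_i e_j) is the trace of the product of the two matrices."""
    n = len(mult_mats)
    gram = Matrix(n, n, lambda i, j: (mult_mats[i] * mult_mats[j]).trace())
    return gram.det()

# A = Z[x]/(x^2 - x - 1), basis {1, x}: multiplication by 1 is the
# identity, multiplication by x sends 1 -> x and x -> 1 + x
mats = [eye(2), Matrix([[0, 1], [1, 1]])]
```

Here `discriminant(mats)` returns $5$; replacing the second matrix by the multiplication matrix of $x$ in $\mathbb{Z}[x]/(x^2-1)$ gives discriminant $4$, detecting the non-\'etale fiber at $2$.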
We prove: \begin{theorem}\label{etalecase} For any ring $B$, we have $G(B^n/B)\cong B^{n!}$. More generally, if $A$ is \'etale and locally free of rank $n$ over $B$, then $G(A/B)$ is \'etale and locally free of rank $n!$ over $B$. \end{theorem} In fact, if $B$ is connected, we may explicitly describe the Galois set structure of $G(A/B)$ in terms of that of $A$ (see Section \ref{etalesec}). Thus for either \'etale or locally monogenic ring extensions of rank $n$, the $S_n$-closure construction always yields locally free ring extensions of rank $n!$. For general rings that are locally free of small rank over a base $B$---even those that might not be \'etale or (locally) monogenic---the $S_n$-closure still always yields locally free rings of rank $n!$ over $B$: \begin{theorem}\label{cubicase} Suppose $A$ is locally free of rank $n\leq 3$ over $B$. Then $G(A/B)$ is locally free of rank~$n!$ over~$B$. \end{theorem} For example, if one takes an order $A$ in a noncyclic cubic field $K$, then its $S_3$-closure yields a canonically associated order $\tilde A=G(A/\mathbb{Z})$ in the sextic field $\widetilde{K}$. We will prove in Section~7 that this sextic order satisfies ${\rm Disc}(\tilde A/\mathbb{Z})={\rm Disc}(A/\mathbb{Z})^3$. One might imagine that for more complicated ring extensions, the analogues of Theorems~\ref{monocase}--\ref{cubicase} might not hold. Indeed, one finds in rank 4 that there exist algebras over fields for which the $S_4$-closure need not have rank $4!=24$. For instance, we will show in Section \ref{degexample} that the $S_4$-closure of the ring $K[x,y,z]/(x,y,z)^2$ has dimension 32 over $K$ for any field $K$. This has consequences over $\mathbb{Z}$ as well. For example, suppose $K$ is a quartic field and $A$ is the ring of integers in $K$. Consider the suborder $A'=\mathbb{Z}+2A$. 
Since $A'/2A'\cong\mathbb{F}_2[x,y,z]/(x,y,z)^2$, we see already that the minimal number of generators for $G(A'/\mathbb{Z})$ as an abelian group is at least 32 by Theorem~1. Since $A'\otimes\mathbb{Q}=K$, we see that the torsion-free rank of $G(A'/\mathbb{Z})$ is $4!=24$, but one finds that there are also eight dimensions of 2-torsion! Although this may seem unsightly at first, for a number of reasons this additional information contained in the 2-torsion is important to retain in studying the ``Galois closure'' of the order $A'$ (the most prominent reason being perhaps functoriality). We study this example more carefully in Section~\ref{orderexample}. The example will illustrate that there is no natural further quotient of $G(A'/\mathbb{Z})$ that has 24 generators as a $\mathbb{Z}$-module and also respects base change; this gives further evidence that allowing the rank to be higher than $n!$ when constructing $S_n$-closures can be important when considering somewhat more ``degenerate'' ring extensions. \begin{remark}{\em It is possible to obtain a natural Galois closure-type object of rank $n!$ for any order $A$ in a degree $n$ number field $K$, by constructing $G(A/\mathbb{Z})$ as defined above, and then quotienting by all torsion. This quotient was used for convenience in, e.g., \cite{Bhargava3} and \cite{Bhargava4}. Although quite convenient in many contexts, such a quotienting procedure is NOT functorial! } \end{remark} It is an interesting question as to what the possible dimensions are for the $S_n$-closure of a dimension $n$ algebra over a field $K$. In Section~10, we show that the largest possible dimensions occur for the ``maximally degenerate'' rank $n$ algebra over $K$, namely $R_n=K[x_1,\ldots,x_{n-1}]/(x_1,\ldots,x_{n-1})^2$: \begin{theorem}\label{maxrank} Let $K$ be a field and $R_n=K[x_1,\dots,x_{n-1}]/(x_1,\dots,x_{n-1})^2$. Then for all $K$-algebras $A$ of dimension $n$, we have $\dim_KG({A}/K)\leq\dim_KG(R_n/K)$. 
\end{theorem} In addition to their interest due to Theorem~\ref{maxrank}, the algebras $R_n$ are of interest in their own right as they arise (with $K=\mathbb{F}_p$) as the reductions modulo $p$ of orders $R$ in number fields that are {\it imprimitive} at $p$, i.e., $R=\mathbb{Z}+pR'$ for some order $R'$. For these reasons, we study the $S_n$-closures of these algebras in more detail in Section~\ref{deg}, and show: \begin{theorem}\label{degtheorem} Let $K$ be a field with characteristic $0$ or coprime to $n!$, and let \linebreak $R_n = K[x_1,\ldots,x_{n-1}]/(x_1,\dots,x_{n-1})^2$. Then the dimension of $G(R_n/K)$ over $K$ is strictly greater than $n!$ for $n>3$. \end{theorem} In particular, we find for $n$=1, 2, 3, 4, 5, and 6 that $\dim_K\,G(R_n/K)=1$, $2$, $6$, $32$, $230$, and $1857$ respectively. These ranks thus give the maximal possible ranks for the $S_n$-closures of rank $n$ rings over $K$ for these values of $n$. Theorem~\ref{degtheorem} will in fact follow from a more general structure theorem for these rings $G(R_n/K)$ (see Theorem~\ref{thm:main}). The techniques used to prove Theorem~\ref{degtheorem} are primarily those of representation theory of $S_n$. As we now describe, our notion of Galois closure can also easily be adapted to the more general situation of a morphism $X\rightarrow Y$ of schemes, where $\mathcal{A}$ is a locally free sheaf of $\mathcal{O}_Y$-algebras of rank $n$ and $X=\underline{\textrm{Spec}}_Y\mathcal{A}$. We say then that $X/Y$ is an {\it $n$-covering}. Recall that if $\mathcal{E}$ is a locally free sheaf of rank $n$ on a scheme $Y$ and $f$ is a local section of $\mathcal{E}nd(\mathcal{E})$, then the trace of $f$ is the image of $f$ under the canonical morphism \[ \mathcal{E}nd(\mathcal{E})\cong \mathcal{E}\otimes_{\mathcal{O}_Y}\mathcal{E}^\vee\rightarrow\mathcal{O}_Y. 
\] If $X/Y$ is an $n$-covering and $\mathcal{A}$ is as above, then for any $a\in\mathcal{A}(U)$ we can define the coefficients $s_j(a)$ of the ``characteristic polynomial'' $P_a$ of $a$ as follows. We obtain an $\mathcal{O}_U$-module endomorphism of $\mathcal{A}|_U$ given by multiplication by $a$. We let $s_j(a)$ be the trace of the induced endomorphism of $\bigwedge^j\mathcal{A}|_U$. We can then define a sheaf of ideals $\mathcal{I}(\mathcal{A},\mathcal{O}_Y)$ of $\mathcal{A}^{\otimes n}$ generated by the local expressions as in (\ref{fundrels}) and let \[ G(\mathcal{A}/\mathcal{O}_Y)=\mathcal{A}^{\otimes n}/\mathcal{I}(\mathcal{A},\mathcal{O}_Y). \] We define \[ G(X/Y)=\underline{\textrm{Spec}}_YG(\mathcal{A}/\mathcal{O}_Y). \] Even in this more general context of $n$-coverings of schemes, we still have the analogues of Theorems~1, 3, 4, and 5. More precisely, \\ \\ \begin{theorem1'} \emph{ If $X/Y$ is an $n$-covering and $Z\rightarrow Y$ is a morphism of schemes, then there is a natural isomorphism \[ G(X/Y)\times_Y Z\cong G(X\times_Y Z / Z). \] } \end{theorem1'} \begin{theorem3'} \emph{ If $X/Y$ is an $n$-covering defined by a locally free sheaf $\mathcal{A}$ of $\mathcal{O}_Y$-algebras which is locally generated as an $\mathcal{O}_Y$-algebra by one element, then $G(X/Y)$ is an $n!$-covering of $Y$. }\\ \end{theorem3'} \begin{theorem4'} \emph{ If $X/Y$ is an $n$-covering which is \'etale, then $G(X/Y)$ is an $n!$-covering of $Y$ which is \'etale. }\\ \end{theorem4'} \begin{theorem5'} \emph{ If $X/Y$ is an $n$-covering defined by a locally free sheaf $\mathcal{A}$ of $\mathcal{O}_Y$-algebras and $n\leq3$, then $G(X/Y)$ is an $n!$-covering of $Y$. }\\ \end{theorem5'} Theorems $1'$, $3'$, $4'$, and $5'$ follow directly from Theorems~1, 3, 4, and 5, due to the local nature of our definitions. Hence we will concentrate primarily on the proofs of Theorems 1--8, in the case where $A$ is a (free) ring of rank $n$ over $B$. 
\section{Functoriality}\label{funcsec} Let $A$ be any {ring of rank $n$ over a base ring $B$}. In this section, we show that the ideal $I(A,B)$ in $A^{\otimes n}$ is generated by the relations (\ref{fundrels}), where {\em $a$ ranges over a basis of $A$ as a module over $B$}. As such a basis remains a basis of $A\otimes_B C$ as a module over $C$ for any ring $C$, Theorem~1 will then follow. To prove our assertion about $I(A,B)$, we require: \begin{lemma}\label{bart} Let $\mathbb{Z}\langle X,Y\rangle$ denote the noncommutative polynomial ring generated by $X$ and $Y$. Then there exists a unique sequence $f_0(X,Y)$, $f_1(X,Y)$, $\ldots$ of polynomials in $\mathbb{Z}\langle X,Y\rangle$ such that in $\mathbb{Z}\langle X,Y\rangle[[T]]$ we have: \begin{equation}\label{exp} (1-(X+Y)T) = (1-XT)(1-YT)\prod_{k=0}^\infty (1-f_k(X,Y)\,XY\,T^{k+2}). \end{equation} The polynomial $f_m(X,Y)$ is homogeneous of degree $m$ in $X$ and $Y$. \end{lemma} \begin{proof} We first prove by induction on $m$ that the value of $f_m(X,Y)$ is completely determined by Equation (\ref{exp}). Indeed, to see the assertion for $m=0$, we take (\ref{exp}) modulo $T^3$ to obtain \[(1-(X+Y)T) \equiv (1-XT)(1-YT)(1-f_0(X,Y)\,XY\,T^{2})\pmod{T^3}\] implying \[1-XT-YT \equiv 1-XT-YT+(1-f_0(X,Y))\,XY\,T^2 \pmod{T^3}\] and so we must have $f_0(X,Y)=1$. Similarly, assuming that $f_0(X,Y),\ldots,f_{m-1}(X,Y)$ have been determined from (\ref{exp}), the polynomial $f_m(X,Y)$ can also then be determined from (\ref{exp}) by taking (\ref{exp}) modulo $T^{m+3}$: \begin{equation}\label{exp2} (1-(X+Y)T)\equiv (1-XT)(1-YT)\prod_{k=0}^m (1-f_k(X,Y)\,XY\,T^{k+2}) \pmod{T^{m+3}}; \end{equation} equating the coefficients of $T^{m+2}$ in (\ref{exp2}) yields \begin{equation}\label{exp3} f_m(X,Y) = \Bigl[\mbox{coefficient of $T^{m+2}$ in $\displaystyle{ (1-XT)(1-YT)\prod_{k=0}^{m-1} (1-f_k(X,Y)\,XY\,T^{k+2})}$}\Bigr]/(XY). 
\end{equation} Thus the sequence $\{f_m(X,Y)\}$ is uniquely determined from (\ref{exp}) via the recursive formula (\ref{exp3}). Moreover, Equation (\ref{exp}) is true for this latter sequence $\{f_m(X,Y)\}$ of polynomials because it is true modulo $T^i$ for every $i$. This concludes the proof. \end{proof} \vspace{-.075in} \begin{remark} {\em This beautiful lemma (Lemma~\ref{bart}) was pointed out to us by Bart de Smit. See also \cite{Amitsur}, \cite{RS} for related results.} \end{remark} \begin{remark} {\em The first few polynomials $f_k(X,Y)$ are given as follows:} \begin{equation} \begin{array}{rcl} f_0(X,Y)&=&1\\[.025in] f_1(X,Y)&=&X+Y\\[.025in] f_2(X,Y) &=& X^2 + YX + Y^2\\[.025in] f_3(X,Y) &=& X^3 + XYX + XY^2 + YX^2 + Y^2X + Y^3\\[.025in] f_4(X,Y) &=& X^4 + XYX^2 + XY^2X + XY^3 + YX^3 + Y^2X^2 + Y^3X + Y^4. \end{array} \end{equation} \end{remark} We now return to our assertion about $I(A,B)$. Given $a\in A$, let $Q_a(T)= \det(1-a|_AT)=1-s_1(a)T+s_2(a)T^2 - \cdots$ be the reverse characteristic polynomial of $a$, where we use $a|_A:=\times a$ to denote the $B$-linear transformation on $A$ given by multiplication by $a$. Then given any elements $x,y\in A$, we have by Lemma~\ref{bart} that \begin{equation}\label{bart2} (1-(x+y)|_AT) \equiv (1-x|_AT)(1-y|_AT)\prod_{k=0}^{m-2}(1-(f_k(x,y)\,xy)|_A\,T^{k+2}) \pmod{T^{m+1}}. \end{equation} Taking determinants of both sides of (\ref{bart2}), and equating coefficients of $T^{m}$, yields an expression for $s_m(x+y)$ as an integer polynomial in $s_i(x)$ ($0\leq i\leq m$), $s_i(y)$ ($0\leq i\leq m$), and $s_i(g_j(x,y))$ ($0\leq i\leq m/2$) for various integer polynomials $g_j$. \begin{remark}{\em For example, we have:} \begin{equation*} \begin{array}{rcl} s_1(x+y)&\!\!=\!\!&s_1(x)+s_1(y)\\[.025in] s_2(x+y)&\!\!=\!\!&s_2(x)+s_1(x)s_1(y)+s_2(y)-s_1(xy)\\[.025in] s_3(x+y)&\!\!=\!\!&s_3(x) + s_2(x)s_1(y) + s_1(x)s_2(y) + s_3(y) + s_1(xxy)+s_1(xyy) - (s_1(x) + s_1(y)) s_1(xy). 
\end{array} \end{equation*} \end{remark} \vspace{.05in} Since for any $b\in B$ and $k\in\mathbb{N}$ we have $s_k(bx)=b^ks_k(x)$, it follows by induction on $m$ that the values of all expressions of the form $s_m(a)$ ($0\leq m\leq n$) for $a\in A$ are determined by the values of $s_i$ ($i\leq m$) on a basis for $A$ as a $B$-module. We conclude that the ideal $I(A,B)$ in $A^{\otimes n}$ is indeed generated by the relations (\ref{fundrels}), where {$a$ ranges over a $B$-basis of $A$}. In particular, Theorem~1 follows in the case where we are considering only ring extensions $A$ that are free of rank $n$ over a base ring $B$. Of course, the above argument can be modified slightly to handle the case where $A$ is {locally} free of rank $n$ over $B$. Indeed, in this case $A$ is still a finitely generated $B$-module (see Footnote~2). The above argument then shows that $I(A,B)$ is generated by the relations (\ref{fundrels}) where $a$ runs through any set of generators for $A$ as a $B$-module. The assertion of Theorem~1 then follows in this generality as well. \section{The case $A=B^n$}\label{bnsec} \subsection{A $B$-basis for $G(B^n/B)$}\label{basisforbn} Suppose $A$ is the rank $n$ ring $B^n$ over $B$. Let $$e_1=(1,0,\ldots,0), \,\,e_2=(0,1,\ldots,0), \,\,\ldots\,\,,\,\, e_n=(0,0,\ldots,1)$$ be the standard basis for $B^n$ over $B$. As in the introduction, for $a\in A$, we let $a^{(i)}$ denote the element $1\otimes\dots\otimes a\otimes\dots\otimes1$ of $A^{\otimes n}$ with $a$ in the $i^{th}$ tensor factor. Then a natural $B$-basis for $(B^n)^{\otimes n}$ is given by \begin{equation}\label{ebasis} \bigl\{e_{i_1}^{(1)}e_{i_2}^{(2)}\cdots\, e_{i_n}^{(n)}\bigr\} \end{equation} where $i_1,i_2,\ldots,i_n$ each range between $1$ and $n$. We claim that a natural $B$-basis for $G(B^n/B)$ is also given by (\ref{ebasis}), but where $(i_1,i_2,\ldots,i_n)$ now ranges over all {\it permutations} of $(1,2,\ldots,n)$.
To see this, we first note that any general element of the form $e_{i_1}^{(1)}e_{i_2}^{(2)}\cdots\, e_{i_n}^{(n)}\in (B^n)^{\otimes n},$ such that $(i_1,\ldots,i_n)$ is {\it not} a permutation of $(1,2,\ldots,n)$, is in fact zero in $G(B^n/B)$. Indeed, let $i\in\{1,\ldots,n\}$ be any element such that $i\notin\{i_1,\ldots,i_n\}$. Then since $\sum_{j=1}^n e_i^{(j)}$ equals ${\rm Tr}(e_i)=1$ in $G(B^n/B)$, we deduce $$e_{i_1}^{(1)}e_{i_2}^{(2)}\cdots e_{i_n}^{(n)} = \sum_{j=1}^n \bigl[e_i^{(j)}\,\cdot\, e_{i_1}^{(1)}e_{i_2}^{(2)}\cdots\, e_{i_n}^{(n)} \bigr] = 0$$ in $G(B^n/B)$, as desired. On the other hand, if $(i_1,\ldots,i_n)$ is a permutation of $(1,2,\ldots,n)$, then $e_{i_1}^{(1)}e_{i_2}^{(2)}\cdots e_{i_n}^{(n)}$ is nonzero in $G(B^n/B)$. To prove this, consider the $B$-algebra homomorphism $\phi_{(i_1,\ldots,i_n)}: (B^n)^{\otimes n} \to B$ defined by $$\phi_{(i_1,\ldots,i_n)}\bigl(e_{i}^{(j)}\bigr)= \left\{ \begin{array}{rl} 1& \mbox{if $i=i_j$}\\ 0& \mbox{otherwise} \end{array}\right. .$$ Then it is evident that the kernel of $\phi_{(i_1,\ldots,i_n)}$ contains $I(B^n,B)$, so that $\phi$ descends to a map \[ \bar\phi_{(i_1,\ldots,i_n)}: G(B^n/B) \to B. \] Moreover, we have $\bar\phi_{(i_1,\ldots,i_n)}\bigl(e_{i_1}^{(1)}e_{i_2}^{(2)}\cdots\, e_{i_n}^{(n)}\bigr)=1$. We conclude that $e_{i_1}^{(1)}e_{i_2}^{(2)}\cdots e_{i_n}^{(n)}$ is nonzero in $G(B^n/B)$. Finally, note that $e_{i_1}^{(1)}e_{i_2}^{(2)}\cdots\, e_{i_n}^{(n)}$ is an idempotent for any permutation $(i_1,\ldots,i_n)$, and if $(j_1,\ldots,j_n)$ is any other permutation of $(1,\ldots,n)$, then $$e_{i_1}^{(1)}e_{i_2}^{(2)}\cdots\, e_{i_n}^{(n)} \,\cdot\, e_{j_1}^{(1)}e_{j_2}^{(2)}\cdots\, e_{j_n}^{(n)}=0.$$ Hence the set (\ref{ebasis}), where $(i_1,i_2,\ldots,i_n)$ ranges over all {\it permutations} of $(1,2,\ldots,n)$, forms a set of nonzero orthogonal idempotents that spans $G(B^n/B)$ as a $B$-module. We conclude that it forms a basis for $G(B^n/B)$, as claimed. 
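The two computations above are simple enough to machine-check for small $n$. The following plain-Python sketch (an illustrative model we add here, with $0$-based indices; a pure tensor $e_{i_1}\otimes\cdots\otimes e_{i_n}$ is encoded by its index tuple, and products are taken factorwise using $e_ae_b=e_a$ if $a=b$ and $0$ otherwise) verifies both that every non-permutation tensor is annihilated by each summand of ${\rm Tr}(e_i)$, and that the permutation tensors form orthogonal idempotents.

```python
from itertools import product, permutations

n = 3

def mult_tensors(s, t):
    """Factorwise product of pure tensors of the idempotents e_i:
    in each factor B^n we have e_a * e_b = e_a if a == b, else 0."""
    return s if s == t else None

def mult_e(i, j, t):
    """Multiply the pure tensor t by e_i^{(j)}: nonzero iff t[j] == i."""
    return t if t[j] == i else None

# Non-permutation tensors die: pick an index i missing from t; every
# summand e_i^{(j)} * t of Tr(e_i) * t is already zero in (B^n)^{tensor n},
# so t = Tr(e_i) * t = 0 in the quotient G(B^n/B).
for t in product(range(n), repeat=n):
    if set(t) != set(range(n)):
        i = min(set(range(n)) - set(t))
        assert all(mult_e(i, j, t) is None for j in range(n))

# The n! permutation tensors are orthogonal idempotents.
perms = list(permutations(range(n)))
for s in perms:
    assert mult_tensors(s, s) == s
    for t in perms:
        if s != t:
            assert mult_tensors(s, t) is None
```

The model only records which pure tensor survives a product, which is all the two arguments above use.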
Finally, since this basis for $G(B^n/B)$ has $n!$ elements, and consists entirely of idempotents, we conclude that $G(B^n/B)\cong B^{n!}$ as $B$-algebras, as desired. We have proven the first assertion of Theorem \ref{etalecase}. \subsection{The action of $S_n$ on $G(B^n/B)$}\label{snaction} It is interesting to consider the natural action of $S_n$ on $(B^n)^{\otimes n}$, and on $G(B^n/B)$, obtained by permuting the tensor factors. From this point of view, we see that \[ G(B^n/B)\cong B[S_n] \] as $B[S_n]$-modules. The isomorphism is given by $e_{i_1}^{(1)}e_{i_2}^{(2)}\cdots\, e_{i_n}^{(n)}\mapsto\sigma$, where $\sigma\in S_n$ denotes the permutation $j\mapsto i_j$. If we write $e_\sigma:=e_{i_1}^{(1)}e_{i_2}^{(2)}\cdots\, e_{i_n}^{(n)}$, then the action of an element $g\in S_n$ on $G(B^n/B)$ is given by \[g\cdot e_\sigma = e_{g\sigma}.\] Let $A=B^n$. Under the action of $S_n$ on $G(A/B)$, the ring $A^{(1)}\subset G(A/B)$ given by the image of $A\otimes 1\otimes \cdots\otimes 1$ is fixed by the group $S_{n-1}^{(1)}$, the subgroup of $S_n$ fixing 1. Note that $A^{(1)}\cong A$. Similarly, as in Galois theory, the other ``conjugate'' copies of $A$ in $G(A/B)$, namely $A^{(j)}=1\otimes\cdots\otimes A\otimes\cdots\otimes 1$ (where the $A$ is in the $j$-th tensor factor) for $j=2,\ldots, n$ are fixed by the conjugate subgroups $S_{n-1}^{(j)}\subset S_n$ fixing $j$ for $j=2,\ldots, n$, respectively. In terms of these subgroups $S_{n-1}^{(j)}\subset S_n$, we may express the idempotents $e_i^{(j)}$ in terms of our orthogonal basis $\{e_\sigma\}_{\sigma\in S_n}$ of idempotents for $G(A/B)$ as follows: \begin{equation}\label{eij} e_i^{(j)} = \sum_{\sigma\in S_{n-1}^{(j)}g_{ji}} e_\sigma, \end{equation} where $g_{ji}$ denotes any element in $S_n$ taking $i$ to $j$. That is, $e_i^{(j)}$ corresponds to the sum of $e_\sigma$ over a right coset of $S_{n-1}^{(j)}$, namely, the right coset consisting of elements in $S_n$ taking $i$ to $j$. 
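The coset description (\ref{eij}) is also easy to machine-check for small $n$. The sketch below (plain Python, our own convention: permutations are dictionaries on $\{1,\ldots,n\}$ and composition is $(gh)(k)=g(h(k))$) verifies that the right coset $S_{n-1}^{(j)}g_{ji}$ is exactly the set of permutations taking $i$ to $j$, independent of the choice of $g_{ji}$.

```python
from itertools import permutations

n = 4
# all of S_n, each permutation stored as a dict k -> sigma(k)
Sn = [dict(enumerate(p, start=1)) for p in permutations(range(1, n + 1))]

def compose(g, h):
    """Composition of permutations: (g h)(k) = g(h(k))."""
    return {k: g[h[k]] for k in g}

def frozen(s):
    """Hashable form of a permutation, for set comparisons."""
    return tuple(s[k] for k in sorted(s))

for j in range(1, n + 1):
    stab_j = [s for s in Sn if s[j] == j]          # S_{n-1}^{(j)}
    for i in range(1, n + 1):
        g_ji = next(s for s in Sn if s[i] == j)    # some element taking i to j
        right_coset = {frozen(compose(h, g_ji)) for h in stab_j}
        taking_i_to_j = {frozen(s) for s in Sn if s[i] == j}
        assert right_coset == taking_i_to_j
```

Indeed, for $h\in S_{n-1}^{(j)}$ we have $(hg_{ji})(i)=h(j)=j$, and both sets have $(n-1)!$ elements, which is what the assertions confirm.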
\section{The case of fields}\label{fieldsec} Before proving Theorem \ref{fieldcase}, we begin by recalling the correspondence between finite \'etale extensions of a field and Galois sets. Let $K$ be a field and fix a separable closure $\bar{K}$ of $K$. Then, given a finite \'etale extension $L/K$, consider the set $S_{L/K}$ of $K$-algebra homomorphisms from $L$ to $\bar{K}$. We see that $G_K:=\textrm{Gal}(\bar{K}/K)$ acts on $S_{L/K}$ by composition: if $\psi\in S_{L/K}$ and $\tau\in G_K$, then $\tau\circ\psi\in S_{L/K}$. Moreover, this action is continuous when $G_K$ is given the profinite topology and $S_{L/K}$ is given the discrete topology, i.e., the action of $G_K$ factors through a finite quotient of $G_K$. We therefore obtain a functor \begin{eqnarray} \label{eqn:equiv} (\textrm{finite\ \'etale\ }K\textrm{-algebras})\longrightarrow (\textrm{finite\ sets\ with\ continuous\ }G_K\textrm{-action}) \end{eqnarray} sending $L$ to $S_{L/K}$, which is in fact an equivalence of categories (see, e.g., \cite[Thm.~2.9]{Lenstra}). Note that if $L/K$ is finite \'etale of degree $n$, then $\bar{K}\otimes_K L$ is isomorphic to $\bar{K}^n$ as a $\bar{K}$-algebra. More canonically, we have an isomorphism \begin{eqnarray} \label{eqn:splitting} \bar{K}\otimes_K L \longrightarrow \bigoplus_{s\in S_{L/K}}\bar{K}\nonumber\\ 1\otimes\ell \longmapsto (s(\ell))_{s\in S_{L/K}} \end{eqnarray} of $\bar{K}$-algebras. The Galois group $G_K$ acts on $\bar{K}\otimes_K L$ through the left tensor factor, and therefore induces an action on $S_{L/K}$ via (\ref{eqn:splitting}); this is precisely the $G_K$-action on $S_{L/K}$ in (\ref{eqn:equiv}). We now turn to the problem of describing the Galois set $S_{G(L/K)/K}$ in terms of the Galois set $S_{L/K}$, where $L/K$ is a finite \'etale extension of degree $n$. We index the $K$-algebra homomorphisms from $L$ to $\bar{K}$ by $1,\dots,n$. This yields an identification of $S_{L/K}$ with $\{1,2,\dots,n\}$. 
By functoriality, \[ \bar{K}\otimes_K G(L/K)\cong (\bar{K}^n)^{\otimes{n}}/I(\bar{K}^n,\bar{K}) \] as $\bar{K}$-algebras. The $G_K$-action on the left tensor factor of $\bar{K}\otimes_K G(L/K)$ yields an action on $(\bar{K}^n)^{\otimes{n}}/I(\bar{K}^n,\bar{K})$ defined by \[ \tau(e_j^{(i)})=e_{\tau(j)}^{(i)}. \] In Section \ref{snaction}, we proved \[ \bar{K}[S_n]\stackrel{\cong}{\longrightarrow}(\bar{K}^n)^{\otimes{n}}/I(\bar{K}^n,\bar{K}) \] as $\bar{K}$-algebras, where a permutation $\pi\in S_n$ is sent to $e_{\pi(1)}^{(1)}e_{\pi(2)}^{(2)}\dots e_{\pi(n)}^{(n)}$. We see then that the action of $G_K$ on $\bar{K}\otimes_K G(L/K)$ induces an action on $S_n$ where $\tau\in G_K$ acts on $\pi\in S_n$ by \[ (\tau(\pi))(j)=\tau(\pi(j)). \] The Galois set corresponding to $G(L/K)$ is therefore given by $S_n$ with this action of $G_K$. More canonically, \begin{eqnarray} \label{eqn:canGalois} S_{G(L/K)}=\textrm{Perm}(S_{L/K}) \end{eqnarray} as sets and the $G_K$-action on $S_{G(L/K)}$ is given by \begin{eqnarray} \label{eqn:canGalois2} (\tau(f))(s)=\tau(f(s)), \end{eqnarray} where $\tau\in G_K$, $f\in \textrm{Perm}(S_{L/K})$, and $s\in S_{L/K}$. We now prove Theorem \ref{fieldcase}. Let $L$ be a finite separable field extension of $K$ of degree $n$, and let $M$ be the Galois closure of $L/K$ in $\bar{K}$. Let $G$ denote the Galois group of $M/K$; thus $G$ is a transitive permutation group on $n$ elements, namely, on the $n$ embeddings of $L$ into $M$, which we index by $1,\ldots,n$. Let $|G|=m$, and let $r=[S_n:G]=n!/m$, where $S_n$ denotes the group of permutations on the set $\{1,\ldots,n\}$. Using (\ref{eqn:canGalois}), we show $G(L/K)\cong M^r$ as $K$-algebras. Our indexing of the embeddings of $L$ into $M\subset\bar{K}$ identifies $S_{L/K}$ with $\{1,\dots,n\}$. We can then define an action of $G_K$ on $S_n$ by $(\tau(\pi))(j)=\tau(\pi(j))$, where $\tau\in G_K$ and $\pi\in S_n$; note that this yields the same Galois set as that for $G(L/K)$. 
Since $M$ is the Galois closure of $L/K$, this action of $G_K$ on $S_n$ restricts to an action on $G\subset S_n$. The set $G$ equipped with this action is the Galois set corresponding to $M$. Now $M^r$ corresponds to the disjoint union of $r$ copies of this Galois set. As sets, $S_{M^r}$ is of course in bijection with $S_n=S_{G(L/K)}$; what we must show is that there is a $G_K$-equivariant bijection. Writing $S_{M^r}$ as \[ \coprod_{a\in G \backslash S_n}G\cdot a, \] we see that the action of $\tau\in G_K$ is given by $\tau(ga)=\tau(g)a$, where $g\in G$. Note that this agrees with the action of $G_K$ on $S_n$ defined by (\ref{eqn:canGalois2}). As a result, $S_{G(L/K)}$ and $S_{M^r}$ are isomorphic as Galois sets; hence $G(L/K)$ and $M^r$ are isomorphic as $K$-algebras. \begin{comment} Let us view $M$ as a $K[G]$-module, and let $N=K[S_n]\otimes_{K[G]} M$ be the induced $K[S_n]$-module. In this section, we prove $G(L/K)\cong N$ as $K[S_n]$-algebras, and hence $G(L/K)\cong M^r$ as $K$-algebras. Fix a separable closure $\bar{K}$ of $K$. By the theory of Galois descent, specifying a $K$-algebra $R$ is equivalent to specifying a $\bar{K}$-algebra $R'$ together with an action of the Galois group $G_K$ on $R'$. Explicitly, the equivalence is given by $R'=\bar{K}\otimes_K R$, where $G_K$ acts on the first tensor factor. To prove our desired isomorphism , it then suffices to show that $\bar{K}\otimes_K G(L/K)$ and $\bar{K}\otimes_K N$ are isomorphic as $\bar{K}[S_n]$-algebras with $G_K$-action, where $G_K$ acts on the first tensor factors. We note first that \[ \bar{K}\otimes_K N=\bar{K}[S_n]\otimes_{\bar{K}[G]}(\bar{K}\otimes_K M) \] (as what?). Since $M$ is Galois over $K$ with Galois group $G$, we see that $\bar{K}\otimes_K M\cong \bar{K}[G]$ and so \[ \bar{K}\otimes_K N\cong \bar{K}[S_n]. \] This is in fact an isomorphism of $\bar{K}[S_n]$-algebras with $G_K$-action where $G_K$ acts on $\bar{K}[S_n]$ as follows. 
Given $\tau\in G_K$, let $\bar{\tau}$ denote the image of $\tau$ in $G\subset S_n$. Then the action of $\tau$ on $\bar{K}[S_n]$ is given by \[ \tau(\sum a_\pi\cdot\pi)=\sum\tau(a_\pi)\cdot(\bar{\tau}\pi). \] We also have \[ \bar{K}\otimes_K G(L/K)\cong \bar{K}\otimes_K (L^{\otimes n}/I(L,K))\cong (\bar{K}\otimes_K L)^{\otimes n}/I(\bar{K}\otimes_K L,\bar{K}), \] (as what?) where the last isomorphism is by functoriality. Since $\bar{K}\otimes_K L$ is isomorphic to $\bar{K}^n$, it follows from Section~3.2 that \[ \bar{K}\otimes_K G(L/K)\cong \bar{K}[S_n] \] (as what?). To complete the proof of Theorem~\ref{fieldcase}, it then suffices to show that the induced $G_K$-action on $\bar{K}[S_n]$ agrees with the one above. To see this, recall from \S3.2 that the isomorphism \[ \bar{K}[S_n]\longrightarrow (\bar{K}^n)^{\otimes n}/I(\bar{K}^n,\bar{K}) \] of $\bar{K}[S_n]$-algebras sends a permutation $\pi\in S_n$ to the image of $e_{\pi(1)}^{(1)}e_{\pi(2)}^{(2)}\dots e_{\pi(n)}^{(n)}$. Since $G_K$ acts on $e_i^{(j)}$ by \[ \tau(e_i^{(j)})=e_{\bar{\tau}(i)}^{(j)}, \] we see that the induced $G_K$ action on $\bar{K}[S_n]$ is as desired. \end{comment} \begin{comment} To this end, let us view $M$ as a $K[G]$-module, and let $N=K[S_n]\otimes_{K[G]} M$ be the induced $K[S_n]$-module. Note that $$N=\bigoplus_{a\in S_n/G}a\otimes M\,\,\cong\,\,M^{\oplus r}$$ as $K$-algebras. Thus $N$ is also naturally an \'etale $K$-algebra, being a direct sum of $r$ copies of $M$. We claim that $G(L/K)\cong N$ as $K[S_n]$-modules and also as $K$-algebras. To prove the claim, we construct a $K[S_n]$-module homomorphism from $G(L/K)$ to $N$ that is also a $K$-algebra homomorphism. Let $\psi_i$ denote the distinct embeddings $L\hookrightarrow M$, where $i\in\{1,\ldots,n\}$. Then $G\subset S_n$ acts on the embeddings $\psi_1,\ldots,\psi_n$, and also on the indices $1,\ldots,n$ via $g\cdot\psi_i = \psi_{g(i)}$ for $g\in G$. 
The action of $G$ on the indices in $\{1,\ldots,n\}$ is the restriction of the usual action of $S_n$ on the $\{1,\ldots,n\}$. As before, let $S_{n-1}^{(i)}\subset S_n$ denote the stabilizer of $i$. For $x\in L$, let $x^{(i)}\in L^{\otimes n}$ denote $1\otimes\cdots \otimes x\otimes\cdots\otimes 1$, where the $x$ occurs in the $i$-th factor. Then in the natural action of $S_n$ on $L^{\otimes n}$, the group $S_{n-1}^{(i)}$ fixes the element $x^{(i)}$ for any $x\in L$. \vspace{.05in} Define a map $\phi:L^{\otimes n}\to N$ by sending $x^{(i)}$ (for $x\in L$) to \begin{equation}\label{defxi} \phi(x^{(i)})=\sum_{u\in S_{n}/G} u\otimes \psi_{u^{-1}(i)}(x); \end{equation} it is clear that the value of (\ref{defxi}) does not depend on the choices of coset representatives $a\in S_{n}$. We extend the map $\phi$ to a $K$-algebra homomorphism $L^{\otimes n}\to N$. Then $\phi$ is seen to respect the natural $S_n$-actions on $L^{\otimes n}$ and $N$ respectively. Indeed, if $t\in S_n$ is any element that takes $i$ to $j$ ($i,j\in\{1,2,\ldots,n\}$), then we have \begin{eqnarray*} t\phi(x^{(i)}) &=& \displaystyle{\frac{1}{|G|}} \Bigl(t\sum_{u\in S_{n}}u\otimes\psi_{u^{-1}(i)}(x)\,\Bigr) \\ &=& \displaystyle{\frac{1}{|G|}} \Bigl(\,\sum_{u\in S_{n}}tu\otimes\psi_{u^{-1}t^{-1}(j)}(x)\,\Bigr) \\ &=&\displaystyle{\frac{1}{|G|}} \Bigl(\,\,\,\,\sum_{v\in S_{n}}v\otimes\psi_{v^{-1}(j)}(x)\,\Bigr) \\ &=&\phi(x^{(j)}) = \phi(tx^{(i)}). 
\end{eqnarray*} Moreover, for $x\in L$, it is clear that \begin{equation}\label{fundrels2} s_j(x)\;-\sum_{1\leq i_1< i_2< \ldots< i_j\leq n} \phi(x^{(i_1)})\phi(x^{(i_2)})\cdots \phi(x^{(i_j)})=0 \end{equation} for all $j\in\{1,\ldots,n\}$, since the identity holds in every direct summand $a\otimes M\cong M$ ($a\in S_n/G$); here we are using the fact that for any $a\in S_n$ and $x\in L$, we have $\{\psi_{a^{-1}(1)}(x),\ldots,\psi_{a^{-1}(n)}(x)\} =\{\psi_1(x),\ldots,\psi_n(x)\}$, so that the values of the $j$-th elementary symmetric functions on these sets are equal to $s_j(x)$ for each $j$ and each in each direct summand $a\otimes M$. Equation (\ref{fundrels2}) implies that $\phi$ descends to a homomorphism \begin{equation}\label{barphi} \bar\phi: G(L/K)\rightarrow N \end{equation} of $K$-algebras and $K[S_n]$-modules. We wish to show that $\bar\phi$ is bijective. We may see this in two ways: the first is an argument using torsors, while the second is a direct computation. Note that $M$ is a $G$-torsor over $K$ since $M\otimes_K M$ is isomorphic as an $M[G]$-module to $M[G]$. It follows that $N$ is an $S_n$-torsor over $K$. We see that $G(L/K)$ is an $S_n$-torsor over $K$ as well since functoriality shows that \[ G(L/K)\otimes_K L \cong G(L^n/L) \cong L[S_n] \] as $L[S_n]$-modules. The theorem then follows from that the fact that (\ref{barphi}) is $S_n$-equivariant and that any equivariant morphism of torsors is an isomorphism. We now give a second proof that $\bar\phi$ is an isomorphism. For this, it suffices to show that the map is bijective upon base change to the algebraic closure $\bar{K}$ of $K$. Let $L'=L\otimes \bar K$, $M'=M\otimes\bar K$, and $N'=N\otimes\bar K$, and let $G(L/K)'=G(L/K)\otimes\bar K$. We wish to show that the map $\bar\phi: G(L/K)'\to N'$ (induced from (\ref{barphi}) by base change to $\bar K$) is an isomorphism of $\bar K$-algebras and $\bar K[S_n]$-modules. 
First, we observe that $L'\cong \bar K^n$ as $\bar K$-algebras, via the map $x\mapsto (\sigma_1(x),\ldots,\sigma_n(x))$ for each $x\in L$. Hence we may identify $L'$ with $\bar K^n$ in this way. Second, we have $M'\cong \bar K^{|G|}$, via the map $x\mapsto (gx)_{g\in G}$ for each $x\in M$ (where we have fixed an embedding $M'\hookrightarrow \bar K$), and in this way we may make the identification $M'=\bar K[G]=\oplus_{g\in G}\, g\bar K$. Let $e_1=(1,0,\ldots,0), \,\,e_2=(0,1,\ldots,0), \,\,\ldots\,\,,\,\, e_n=(0,0,\ldots,1)$ be the standard basis for $L'=\bar K^n$ over $\bar K$, and let $G_1$ be the subgroup of the Galois group $G$ of $M/L$ fixing $L$. Then under the above identifications, we have (reordering the coordinates $e_i$ on $L'$ if necessary) that \[e_i = \sum_{g\in G_1a_i^{-1}} g, \] where $a_i$ is any element of $G\subset S_n$ mapping $1$ to $i$, i.e., $a_1,\ldots,a_n$ form a set of right coset representatives of $G_1$ in $G$. These elements $e_i\in L\subset M$ are thus fixed under the left action of $G_1$. Now the embeddings $\sigma_1,\ldots,\sigma_n:L\hookrightarrow M$ induce embeddings $\sigma_1,\ldots,\sigma_n:L'\hookrightarrow M'$, and we have \[\sigma_j(e_i) = \sum_{g\in a_jG_1a_i^{-1}} g= \sum_{g\in (a_jG_1a_j^{-1})a_ja_i^{-1}} g= \sum_{g\in G_ja_ja_i^{-1}} g,\] where $G_j$ denotes the subgroup of $G\subset S_n$ fixing $j$. Thus the elements $\sigma_j(e_i)$, for $j=1,\ldots,n$, are fixed under the left action of $G_j$. Finally, we have $N':=\bar K[S_n]\otimes_{\bar K[G]} M' =\bar K[S_n]$. Under these identifications, we may now consider the map $\bar \phi: G(L/K)'\rightarrow N'$ induced from (\ref{barphi}). 
We have \begin{equation}\label{barphi2} \begin{array}{rcl} \bar\phi(e_i^{(j)}) &=& \displaystyle{\sum_{u\in S_n/G} u\otimes \sigma_{u^{-1}(j)}(e_i)} \\[.35in] &=&\displaystyle{\sum_{u\in S_n/G} u\otimes \Bigl(\sum_{g\in G_{u^{-1}(j)}a_{u^{-1}(j)}a_i^{-1}} g\Bigr)} \\[.35in] &=& \displaystyle{\sum_{u\in S_{n-1}^{(j)}/G_j} u\otimes \Bigl(\sum_{g\in G_{j}a_{j}a_i^{-1}} g\Bigr)} \\[.35in] &=& \displaystyle{\sum_{g\in S_{n-1}^{(j)}a_ja_i^{-1}} g}\,, \end{array} \end{equation} where we have used the (easily checked) fact that a complete set of representatives for left cosets of $G_j$ in $S_{n-1}^{(j)}$ also yields a complete set of representatives for left cosets of $G$ in $S_n$. Comparing with equation (\ref{eij}), we see that $\bar \phi: G(L/K)'\rightarrow N'$ is precisely the isomorphism $G(\bar K^n/\bar K)\to \bar K[S_n]$ computed in Sections 3.1 and 3.2. This gives the desired conclusion, and we have proven Theorem~\ref{fieldcase}. \end{comment} \section{The \'etale case}\label{etalesec} We have already proven the first assertion of Theorem~\ref{etalecase}. Suppose, more generally, that $A$ is any ring that is \'etale and locally free of rank $n$ over $B$. Then we claim that $G(A/B)$ is an \'etale $B$-algebra which is locally free of rank $n!$; this is the second assertion of Theorem~\ref{etalecase}. To prove the claim, we first require a definition. An \'etale $B$-algebra $C$ is called an {\it \'etale cover} of $B$ if the induced morphism ${\rm Spec}\,\, C\rightarrow{\rm Spec}\,\, B$ is surjective (see, e.g., \cite[p.\ 47]{milne}). The key fact we use in proving the second assertion of Theorem~\ref{etalecase} is the following lemma (see, for example, \cite[p.\ 156]{milne} or \cite[Thm.~5.10]{Lenstra}): \begin{lemma}\label{split} Let $R$ be any $B$-algebra that is finitely generated as a $B$-module. Then $R$ is \'etale and locally free of rank $n$ over $B$ if and only if there exists an \'etale cover $C$ of $B$ such that $R\otimes_B C\cong C^n$ as $C$-algebras.
\end{lemma} Since $A$ is \'etale and locally free of rank $n$ over $B$, we see by Lemma~\ref{split} that there exists an \'etale cover $C$ of $B$ such that $A\otimes_B C\cong C^n$ as $C$-algebras. By functoriality of the $S_n$-closure, we then have \[ G(A/B)\otimes_{B}C=G(C^n/C)\cong C^{n!} \] as $C$-algebras. Applying Lemma~\ref{split} once again, we conclude that $G(A/B)$ is \'etale and locally free of rank $n!$ over $B$, as desired. We can say more in terms of the underlying Galois sets when ${\rm Spec}\,\, B$ is connected. \begin{comment} Let us first discuss the case when $B=K$ is a field. Fix a separable closure $\bar{K}$ of $K$. Given a finite separable field extension $L/K$, we can associate to it a finite set with a continuous action of the Galois group $G_K:=\textrm{Gal}(\bar{K}/K)$; namely, consider the set $S_{L/K}$ of embeddings of $L$ into $\bar{K}$ with its natural action of $G_K$. More generally, if $L/K$ is a finite \'etale extension, then $L$ is of the form $L_1\times\dots\times L_m$, where $L_i/K$ is a finite separable extension. We therefore obtain a finite set $S_{L_1}\coprod\dots\coprod S_{L_mw}$ with a continuous action of $G_K$. This association to any finite \'etale extension of $K$ a finite set with $G_K$-action is in fact an equivalence of categories (see, e.g., \cite[Thm.~2.9]{Lenstra}). Given that this is true, we now describe the Galois set associated to $G(L/K)$ in terms of the Galois set associated to $L/K$. Let $L/K$ be a finite separable field extension of degree $n$. We index the embeddings of $L$ into $\bar{K}$ by $1,\dots,n$. The extension $L/K$ induces an action of $G_K$ on $\{1,2,\dots,n\}$. This action can be described in a different way as follows. We know that \[ \bar{K}\otimes_K L\cong \bar{K}^n \] as $\bar{K}$-algebras. The action of $G_K$ on the left tensor factor yields an action of $G_K$ on $\bar{K}^n$ given by \[ \tau(e_j)=e_{\tau(j)}, \] where $\tau\in G_K$. We now turn to $G(L/K)$. 
By functoriality, \[ \bar{K}\otimes_K G(L/K)\cong (\bar{K}\otimes_K L)^{\otimes{n}}/I(\bar{K}\otimes_K L,\bar{K})\cong (\bar{K}^n)^{\otimes{n}}/I(\bar{K}^n,\bar{K}) \] as $\bar{K}$-algebras. This is also an isomorphism of $\bar{K}$-algebras with $G_K$-action, where $G_K$ acts on left tensor factor of $\bar{K}\otimes_K G(L/K)$ and acts on $(\bar{K}^n)^{\otimes{n}}$ by \[ \tau(e_j^{(i)})=e_{\tau(j)}^{(i)}. \] In Section \ref{snaction}, we proved \[ \bar{K}[S_n]\stackrel{\cong}{\longrightarrow}(\bar{K}^n)^{\otimes{n}}/I(\bar{K}^n,\bar{K}) \] as $\bar{K}$-algebras, where a permutation $\pi\in S_n$ is sent to $e_{\pi(1)}^{(1)}e_{\pi(2)}^{(2)}\dots e_{\pi(n)}^{(n)}$. We see then that the action of $G_K$ on $\bar{K}\otimes_K G(L/K)$ induces an action on $S_n$ where $\tau\in G_K$ acts on $\pi\in S_n$ by \[ (\tau(\pi))(j)=\tau(\pi(j)). \] The Galois set corresponding to $G(L/K)$ is therefore given by $S_n$ with this action of $G_K$. Since every finite \'etale extension of $K$ is a product of finite separable field extensions, we see more generally that if $L/K$ is a finite \'etale extension corresponding to a set $S$ with $G_K$-action, then $G(L/K)$ corresponds to the set $\textrm{Perm}(S)$ with $G_K$-action given by \[ (\tau(f))(s)=\tau(f(s)), \] where $\tau\in G_K$, $f\in \textrm{Perm}(S)$, and $s\in S$. \end{comment} Recall that there is an equivalence of categories between finite \'etale extensions of $B$ and finite sets equipped with a continuous action by a certain profinite group $\pi_1^{\textrm{\'et}}(B)$ called the \emph{\'etale fundamental group} of $B$ (see, e.g., \cite[Thm.~1.11]{Lenstra}). When $B=K$ is a field, $\pi_1^{\textrm{\'et}}(K)$ is nothing other than $G_K$. 
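As an illustration of the orbit decomposition used in the proof of Theorem~\ref{fieldcase} above (a combinatorial model we add here, with $G=A_4\subset S_4$ standing in for the image of the Galois action on $S_{L/K}$): the action $(\tau(f))(s)=\tau(f(s))$ on $\textrm{Perm}(S)$ is left multiplication, so its orbits are the right cosets $G\cdot f$, each a regular $G$-set, and there are $n!/|G|$ of them, matching $G(L/K)\cong M^r$.

```python
from itertools import permutations

# model: S = {0,...,n-1}, Galois action factoring through a transitive
# subgroup G of S_n; tau acts on f in Perm(S) by (tau f)(s) = tau(f(s))
n = 4
S4 = list(permutations(range(n)))

def sign(p):
    """Sign of a permutation via its inversion count."""
    return (-1) ** sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))

G = [p for p in S4 if sign(p) == 1]   # A_4, our stand-in Galois group

def act(tau, f):
    """(tau f)(s) = tau(f(s)), i.e. the composite tau o f."""
    return tuple(tau[f[s]] for s in range(n))

# orbits of G on Perm(S): each is a right coset G.f, a regular G-set
seen, orbits = set(), 0
for f in S4:
    if f not in seen:
        orbit = {act(tau, f) for tau in G}
        assert len(orbit) == len(G)   # the action is free, hence regular
        seen |= orbit
        orbits += 1
assert orbits == len(S4) // len(G)    # r = n!/|G| copies, one per coset
```

Each regular orbit corresponds to one copy of the Galois set of $M$, giving $r=n!/m$ copies in total.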
By the same argument as in the case of fields, one shows that if $A/B$ is a finite \'etale extension corresponding to a set $S$ with a continuous action by $\pi_1^{\textrm{\'et}}(B)$, then $G(A/B)$ corresponds to the set $\textrm{Perm}(S)$ with $\pi_1^{\textrm{\'et}}(B)$-action given by \[ (\tau(f))(s)=\tau(f(s)), \] where $\tau\in \pi_1^{\textrm{\'et}}(B)$, $f\in \textrm{Perm}(S)$, and $s\in S$. \section{The monogenic case}\label{monosec} In this section, we examine the situation where $A$ is monogenic over $B$. We prove: \begin{theorem}\label{mon} Let $f$ be a monic polynomial of degree $n$ with coefficients in $B$, and let $A=B[x]/(f(x))$ denote the corresponding monogenic ring of rank $n$ over $B$. Then the ring $G(A/B)$ is a ring of rank $n!$ over $B$, with a basis given by the monomials of the form \begin{equation}\label{monobasis} \prod_{i=1}^n x_i^{e_i} \end{equation} where the exponents $e_i$ satisfy $0\leq e_i < i$; here $x_1,x_2,\ldots,x_n$ denote the images in $G(A/B)$ of $x\otimes \ldots\otimes1\otimes 1$, \,$1\otimes x\otimes \ldots\otimes 1$, \,$\ldots$\,, \,$1\otimes 1\otimes \ldots\otimes x\in A^{\otimes n}$ respectively. \end{theorem} \begin{proof} For $A=B[x]/(f(x))$, the ideal $I(A,B)$ is generated by the relations (\ref{fundrels}) where $a=x$. This is because the powers $1,x,x^2,\ldots,x^{n-1}$ of $x$ form a basis for $A$ over $B$, and the elementary symmetric functions $s_i(x^j)$ of powers $x^j$ of $x$ are integer polynomials in the elementary symmetric functions $s_i(x)$ of $x$ (by the fundamental theorem of symmetric functions). Hence the relations (\ref{fundrels}) for $a=x^j$ ($j>1$) are implied by those for which $a=x$. Now let the characteristic polynomial of $\times x:A\to A$ be given by $P_x(T)=T^n-s_1(x)T^{n-1}+s_2(x) T^{n-2}-\cdots+(-1)^n s_n(x)$. Then a direct construction of $G(A/B)$ is as follows.
By the symmetric function theorem, the ring $R = \mathbb{Z}[X_1,\ldots,X_n]$ is a free module of rank $n!$ (with basis given as above) over the polynomial ring $S = \mathbb{Z}[\Sigma_1,\ldots,\Sigma_n]$, where the $\Sigma_i$ denote the elementary symmetric polynomials $\Sigma_1 = X_1 + \cdots + X_n$, etc. Using the coefficients of $P_x$, we get a map $\phi:S\to B$ defined by sending $\Sigma_i$ to $s_i(x)$. This allows us to construct the $B$-algebra $R\otimes_S B$, which is then free over $B$ of rank $n!$. We claim that the algebra $R\otimes_S B$ is isomorphic to $G(A/B)$. Indeed, writing $A^{\otimes n}=B[x_1,\ldots,x_n]/(f(x_1),\ldots,f(x_n))$, we may define a map $$\psi:A^{\otimes n}\to R\otimes_S B$$ by sending $x_i\mapsto X_i$; this is well defined because each $X_i$ is a root of $\prod_{j=1}^n(T-X_j)=T^n-\Sigma_1T^{n-1}+\cdots+(-1)^n\Sigma_n$, whose image in $(R\otimes_S B)[T]$ is $P_x(T)=f(T)$. Then, since $R\otimes_S B=B[X_1,\ldots,X_n]/(\Sigma_1-s_1(x),\ldots,\Sigma_n-s_n(x))$, the kernel of $\psi$ is the ideal generated by the elements $\Sigma_j(x_1,\ldots,x_n)-s_j(x)$ for $1\leq j\leq n$. These are precisely the relations (\ref{fundrels}) for $a=x$, which, as shown above, generate $I(A,B)$; thus $G(A/B)=A^{\otimes n}/I(A,B)\cong R\otimes_SB$, as desired. \end{proof} \vspace{-.1in} \begin{remark} {\em We note that this construction in the monogenic case is more or less given in Grothendieck (Sem.\ Chevalley 1958, Anneaux de Chow, p.\ 4--19, Lemme 1 and corollary next page).} \end{remark} In the case that $A$ is monogenic over the base ring $B$, we may use Theorem~\ref{mon} to compute the discriminant ${\rm Disc}(G(A/B))$ of the $S_n$-closure of $A$ in terms of the discriminant ${\rm Disc}(A)$ of $A$. We find that, for $n\geq 2$, we have \begin{equation}\label{discid} {\rm Disc}(G(A/B))={\rm Disc}(A)^{n!/2}. \end{equation} To see this, note that it suffices again to prove this identity in the case $B=\mathbb{Z}[\Sigma_1,\ldots,\Sigma_n]$, $A=B[X_1]$, and $G(A/B) = \mathbb{Z}[X_1,\ldots,X_n]$. The identity (\ref{discid}) is trivial for $n=2$, while for general $n$ it follows by induction.
Indeed, we have the equalities $A=\mathbb{Z}[X_1][\Sigma'_1,\ldots,\Sigma'_{n-1}]$ and $G(A/B)=A[X_2,\ldots,X_n]$, where $\Sigma'_1,\ldots,\Sigma'_{n-1}$ denote the elementary symmetric polynomials in $X_2,\ldots,X_n$. Theorem~\ref{mon} now implies that $G(A/B)=G(A[X_2]/A)$, so that $G(A/B)$ is free of rank $(n-1)!$ over~$A$. The induction hypothesis then gives $${\rm Disc}(G(A/B)/A)={\rm Disc}(A[X_2]/A)^{(n-1)!/2}.$$ In the tower of ring extensions $G(A/B)\,/\,A\,/\,B$, we then see that \[\begin{array}{rcl} {\rm Disc}(G(A/B)) &=& {\mathbf N}^A_B({\rm Disc}(G(A/B)/A))\cdot {\rm Disc}(A)^{(n-1)!} \\[.1in] &=& {\rm Disc}(A)^{(n-2)\cdot(n-1)!/2}\cdot {\rm Disc}(A)^{(n-1)!}\\[.1in] &=&{\rm Disc}(A)^{n!/2}, \end{array}\] proving (\ref{discid}). \section{Ranks $k\leq 3$} \label{cubisec} The cases $k=1,2$ in Theorem~\ref{cubicase} follow from Theorem~\ref{monocase}. So we consider the case $k=3$ in this section. \begin{theorem}\label{cubicase2} Assume that $A$ is free of rank $3$ over $B$ with basis $1$, $x$, $y$. Let $x_1$, $x_2$, $x_3$ denote the images in $G(A/B)$ of the elements $x\otimes 1\otimes 1$, $1\otimes x\otimes 1$, $1\otimes 1\otimes x\in A^{\otimes 3}$ respectively, and define $y_1,y_2,y_3$ similarly. Then the ring $G(A/B)$ is free of rank $6$ over $B$ with basis $1, x_1, y_1, x_2, y_2, x_1y_2.$ \end{theorem} \begin{proof} It is known that, by translating $x$ and $y$ by appropriate $B$-multiples of 1, the multiplication table of $A$ as a ring over $B$ can be expressed in the form \begin{equation}\label{ringlaw3} \begin{array}{cll} xy &=& \,\,\,\,\;\!ad \\ x^2 &=& -ac + b x \;\!+ a y \\ y^2 &=& -bd\;\! + d x \;\!\!+ c y \end{array} \end{equation} for some elements $a,b,c,d\in B$ (see \cite[Prop.\ 4.2]{GGS}). In terms of these elements, the characteristic equations of $x$, $y$, and $x+y$ are given by $$T^3- bT^2 + acT -a^2d=0,$$ $$T^3- cT^2 + bdT -ad^2 =0,$$ and $$T^3- (b + c)T^2 + (ac + bc +bd - 3ad)T - ( a^2d +ac^2 + b^2d + ad^2- 2abd - 2acd )=0 $$ respectively.
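These characteristic polynomials can be read off from the matrices of multiplication by $x$ and $y$ on the basis $1,x,y$ (columns give the images of the basis elements under the multiplication table above). The following plain-Python sketch, an illustration we add here rather than part of the proof, checks all three of them against random integer specializations of $a,b,c,d$.

```python
from random import randint

def charpoly3(M):
    """Return (s1, s2, s3) with det(T*I - M) = T^3 - s1*T^2 + s2*T - s3."""
    (p, q, r), (s, t, u), (v, w, z) = M
    s1 = p + t + z                                         # trace
    s2 = (p * t - q * s) + (p * z - r * v) + (t * z - u * w)  # principal 2x2 minors
    s3 = p * (t * z - u * w) - q * (s * z - u * v) + r * (s * w - t * v)  # det
    return (s1, s2, s3)

for _ in range(100):
    a, b, c, d = (randint(-9, 9) for _ in range(4))
    # matrices of multiplication by x and y on the basis 1, x, y,
    # read off from the multiplication table: columns = images of 1, x, y
    Mx = [[0, -a * c, a * d], [1, b, 0], [0, a, 0]]
    My = [[0, a * d, -b * d], [0, 0, d], [1, 0, c]]
    Ms = [[Mx[i][j] + My[i][j] for j in range(3)] for i in range(3)]
    assert charpoly3(Mx) == (b, a * c, a * a * d)
    assert charpoly3(My) == (c, b * d, a * d * d)
    assert charpoly3(Ms) == (b + c, a*c + b*c + b*d - 3*a*d,
                             a*a*d + a*c*c + b*b*d + a*d*d - 2*a*b*d - 2*a*c*d)
```

Since the asserted coefficient identities are polynomial in $a,b,c,d$, checking them on enough integer points is convincing evidence for (and consistent with) the symbolic computation.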
Note first that the trace relations $x_1 + x_2 + x_3 = b$ and $y_1 + y_2 + y_3 = c$ are equivalent to \begin{eqnarray}\label{x3} x_3&=&b-x_1-x_2,\\ \label{y3} y_3&=&c-y_1-y_2. \end{eqnarray} Hence $G(A/B)$ is generated as a $B$-module by the 9 elements $\{1,x_1,y_1\}\cdot\{1,x_2,y_2\}$, and we need to find 3 additive relations to relate $x_1x_2$, $x_1y_2$, $x_2y_1$, $y_1y_2$. Since all other trace relations are $B$-linear combinations of the trace relations for $x$ and $y$, they do not yield any further new relations. Instead, we now take the quadratic identities $x_1x_2 + x_1x_3 + x_2x_3 = ac$, $y_1y_2 + y_1y_3 + y_2y_3 = bd$, and $(x_1+y_1)(x_2+y_2) + (x_1+y_1)(x_3+y_3) + (x_2+y_2)(x_3+y_3) =ac + bc +bd - 3ad$, which reduce to: \begin{eqnarray}\label{x1x2} x_1x_2&=&a(c-y_1-y_2),\\ y_1y_2&=&d(b-x_1-x_2) \label{y1y2} \\ y_1 x_2 &=& bc -ad - b(c-y_1-y_2) - c(b- x_1-x_2)-x_1 y_2. \label{y1x2} \end{eqnarray} These identities show that $G(A/B)$ is spanned over $B$ by the six elements claimed in the theorem. It remains to show that these six elements are in fact linearly independent. By functoriality, it suffices to consider the case when $B=\mathbb{Z}[a,b,c,d]$ is a free polynomial ring over $\mathbb{Z}$ in variables $a,b,c,d$, and $A$ is free of rank 3 over $B$ with basis 1, $x$, $y$, and multiplication table given by (\ref{ringlaw3}). In that case, let $K$ be the quotient field of $B$. If the six elements 1, $x_1$, $x_2$, $y_1$, $y_2$, $x_1y_2$ satisfy a linear relation over $B$, then they also satisfy a linear relation over $K$. We show that this is not the case. Since ${\rm Disc}(A\otimes_B K/K)$ is a non-zero polynomial in $a$, $b$, $c$, and $d$, it is invertible in $K$ and hence $A\otimes_B K/K$ is \'etale. In fact, $A\otimes_B K$ is a field. If it were not, then the cubic polynomial $f(x)$ defining the extension $A\otimes_B K/K$ would have a root in $K$. 
As $A/B$ is the universal cubic ring extension, this would imply that every cubic polynomial over $\mathbb{Q}$ has a rational root, which is clearly false. Now, by functoriality, the elements 1, $x_1$, $x_2$, $y_1$, $y_2$, $x_1y_2$ also span $G(A\otimes_B K / K)\cong G(A/B)\otimes_B K$. Since $A\otimes_B K$ is a field, Theorem~\ref{fieldcase} implies that $G(A\otimes_B K / K)$ is a $6$-dimensional vector space over $K$. It follows that 1, $x_1$, $x_2$, $y_1$, $y_2$, and $x_1y_2$ are linearly independent over $K$, and hence over $B$, as desired. \end{proof} \begin{comment} We do so by giving an explicit construction of $G(A/B)$. Consider the ring $R$ of rank 6 over $B$ having $B$-basis $\langle 1,X_1,Y_1,X_2,Y_2,Z\rangle$ and multiplication table given by \begin{equation}\label{rank6gc} \begin{array}{rcrcrcrcrcrcr} X_1^2 &=& -ac &\!+\!& bX_1 &\!+\!& aY_1 & & & & & & \\[.02in] Y_1^2 &=& -bd &\!+\!& dX_1 &\!+\!& cY_1 & & & & & & \\[.02in] X_1Y_1 &=& ad & & & & & & & & & & \\[.02in] X_2^2 &=& -ac & & & & &\!+\!& bX_2 &\!+\!& aY_2 & & \\[.02in] Y_2^2 &=& -bd & & & & &\!+\!& dX_2 &\!+\!& cY_2& & \\[.02in] X_2Y_2 &=& ad & & & & & & & & & & \\[.02in] X_1X_2 &=& ac & & &\!-\!& aY_1 & & &\!-\!& aY_2 & & \\[.02in] Y_1Y_2 &=& bd &\!-\!& dX_1 & & &\!-\!& dX_2 & & & & \\[.02in] X_1Y_2 &=& & & & & & & & & & & Z \\[.02in] X_2Y_1 &=& -bc-ad&\!+\!&cX_1&\!+\!&bY_1 &\!+\!& cX_2 &\!+\!&bY_2 &\!-\!& Z \\[.02in] X_1Z &=& abd &\!-\!&adX_1& & &\!-\!& adX_2 &\!-\!&acY_2 &\!+\!& bZ \\[.02in] X_2Z &=& & &adX_1& & & & & & & & \\[.02in] Y_1Z &=& & & & & & & & & adY_2 & & \\[.02in] Y_2Z &=& acd&\!-\!&bdX_1&\!-\!&adY_1 & & &\!-\!&adY_2 &\!+\!& cZ \\[.02in] Z^2 &=& ad(2bc-ad)&\!-\!&b^2dX_1&\!-\!&abdY_1 &\!-\!& acdX_2 &\!-\!&ac^2Y_2 &\!+\!&(bc-ad)Z \end{array} \end{equation} One checks that the above multiplication table satisfies all associative law relations required of it, and thus $R$ as defined above is indeed a ring of rank 6 over $B$. 
Now there are three natural injections $\phi_1$, $\phi_2$, $\phi_3$ of $A$ into $R$, defined by sending \[ \begin{array}{lcc} x&\mapsto& X_1\\ y&\mapsto& Y_1 \end{array} \,\,\,\,\mbox{ or }\,\,\,\, \begin{array}{lcr} x&\mapsto& X_2\\ y&\mapsto& Y_2 \end{array} \,\,\,\,\mbox{ or }\,\,\,\, \begin{array}{lcr} x&\mapsto& b-X_1-X_2\\ y&\mapsto& c-\,Y_1\,-\,Y_2 \end{array}, \] respectively. We may thus define a natural morphism \[ \phi=\phi_1\otimes \phi_2\otimes\phi_3:A\otimes A\otimes A\to R. \] It is immediate from the definition of $R$ that $I(A,B)$ is contained in the kernel of $\phi$, and hence $\phi$ induces a surjection $\bar{\phi}$ from $G(A/B)$ to $R$. Since the images of $1$, $x_1$, $y_1$, $x_2$, $y_2$, and $x_1y_2$ in $G(A/B)$ are mapped under $\bar{\phi}$ to elements which are linearly independent over $B$, it follows that these elements in $G(A/B)$ are also linearly independent over $B$ (and that $\bar{\phi}$ is an isomorphism), as desired. \end{proof} \end{comment} Thus to {any} cubic ring $A$ over $B$ with basis $1$, $x$, $y$, there is naturally associated a canonical sextic ring $\tilde A$ over $B$, given by $G(A/B)$. We show that in fact we have the formula \begin{equation}\label{discid2} {\rm Disc}(\tilde A)={\rm Disc}(A)^3.\end{equation} To see this, it again suffices to check this in the case that the base ring $B$ is $\mathbb{Z}[a,b,c,d]$. In this case, it is clear that the multiplication table for $\tilde A$, in terms of our chosen basis $1$, $x_1$, $x_2$, $y_1$, $y_2$, $x_1y_2$ for $\tilde A$, will involve only polynomials in $a,b,c,d$ with coefficients in $\mathbb{Z}$. Thus the discriminant ${\rm Disc}(\tilde A)$ of $\tilde A$ will also be an integer polynomial in $a,b,c,d$. 
Furthermore, this polynomial ${\rm Disc}(\tilde A)$ must remain invariant under changes of the basis $x,y$ via transformations in ${\rm GL}_2(\mathbb{Z})$, which changes $a,b,c,d$ by the action of ${\rm GL}_2(\mathbb{Z})$ on the binary cubic form $f(x,y)=ax^3+bx^2y+cxy^2+dy^3$ (by \cite[Prop.\ 4.2]{GGS}). It is known (see, e.g., \cite[Lec.~XVII]{Hilbert}) that the only ${\rm GL}_2(\mathbb{Z})$-invariant polynomials in $a,b,c,d$ under this action must be polynomials in ${\rm Disc}(f)={\rm Disc}(A)$, and thus ${\rm Disc}(\tilde A)$ must be a polynomial in ${\rm Disc}(A)$. To determine this polynomial, we may then restrict to the case where $a=1$ or $d=1$; that is, we may assume the rank 3 ring $A$ is monogenic over $B$, in which case ${\rm Disc}(\tilde A)={\rm Disc}(A)^3$ by (\ref{discid}). Formula (\ref{discid2}) therefore follows for general rank 3 rings $A$ over $B$. \vspace{.1in} In particular, if $A$ is a cubic order in a noncyclic cubic field $K$, then $G(A/\mathbb{Z})$ provides a canonically associated sextic order $\tilde A$ in the Galois closure $\tilde K$ satisfying ${\rm Disc}(\tilde A)={\rm Disc}(A)^3$. \vspace{.1in} We may now deduce the more general Theorem~\ref{cubicase} from Theorem~\ref{cubicase2}. Indeed, let $A$ be any locally free ring of rank 3 over $B$. Then it follows from~\cite[Lemma~1.1]{GL} that, for any maximal ideal $M$ of $B$, the localization $A_M$ is free of rank 3 over $B_M$ {\it with a basis of the form $1$, $x$, $y$} (essentially an application of Nakayama's Lemma). We conclude then, by Theorem~\ref{cubicase2}, that the localization $G(A/B)_M$ is free of rank~6 over $B_M$, for all maximal ideals $M$ of $B$. Since $A$ is finitely presented as a $B$-module (being locally free; see Footnote~2) and the ideal $I(A,B)$ is finitely generated (a set of generators being the relations (\ref{fundrels}), where $a$ ranges over a spanning set for $A$ over $B$; see Section~2), we conclude that $G(A/B)$ too is finitely presented as a $B$-module.
Finally, since $G(A/B)$ is finitely presented as a $B$-module, and the localization $G(A/B)_M$ is free of rank $6$ over $B_M$ for all maximal ideals $M$ of $B$, by Footnote 2 on p.\,1, we conclude that $G(A/B)$ is locally free of rank 6 over $B$, and Theorem~\ref{cubicase} follows. \section{An example of a ring of rank $4$ whose $S_4$-closure has rank $32>4!$}\label{degexample} In this section, we give an example of a ring $A$ of rank $4$ over $B$---namely $A=B[x,y,z]/(x,y,z)^2$---such that $G(A/B)$ has rank $32>4!$ over $B$. \begin{proposition} Let $B$ be any base ring, and let $A$ be the ring $B[x,y,z]/(x,y,z)^2$ having rank $4$ over $B$. Then $G(A/B)$ is a ring of rank $32$ over $B$. \end{proposition} \begin{proof} Motivated by the relations (\ref{fundrels}) for $G(A/B)$, we give a direct construction of a ring $R$ over $B$, which we will then show to be naturally isomorphic to $G(A/B)$. Precisely, we construct $R$ to have a $B$-module decomposition of the form \begin{equation}\label{abcdecomp} R= B\oplus [T(x) \oplus T(y) \oplus T(z)] \oplus [U(x) \oplus U(y) \oplus U(z)] \oplus [V(x,y) \oplus V(y,z) \oplus V(x,z)] \oplus W(x,y,z), \end{equation} where $T(\cdot)$, $U(\cdot)$, $V(\cdot,\cdot)$, and $W(x,y,z)$ are free $B$-modules having ranks 3, 2, 5, and 1, respectively. Therefore, $R$ (and thus $G(A/B)$) will have $B$-rank $1+3\cdot 3+3\cdot 2 + 3\cdot 5 + 1 = 32$. The constructions of these $B$-modules $T(\cdot)$, $U(\cdot)$, $V(\cdot,\cdot)$, and $W(\cdot,\cdot,\cdot)$ are as follows. First, $T(x)$ is the $B$-module spanned by $x_1,x_2,x_3,x_4$, modulo the relation $x_1+x_2+x_3+x_4=0$; $T(y)$ and $T(z)$ are defined similarly, and hence each is three-dimensional. 
Second, $U(x)$ is defined as the symmetric square of $T(x)$, modulo the relations $$x_1^2=x_2^2=x_3^2=x_4^2=x_1x_2+x_1x_3+x_1x_4+x_2x_3+x_2x_4+x_3x_4=0.$$ Now $x_1+x_2+x_3+x_4=0$ in $T(x)$, so multiplying by $x_1$ and $x_2$ respectively shows that \[ x_1x_2+x_1x_3+x_1x_4=0 \quad\textrm{and}\quad x_1x_2+x_2x_3+x_2x_4=0 \] respectively in $U(x)$; this in turn implies \[ x_2x_3+x_3x_4+x_2x_4=0 \quad\textrm{and}\quad x_1x_3+x_3x_4+x_1x_4=0 \] in $U(x)$. Subtracting the first and last of the latter four relations gives $x_1x_2=x_3x_4$, and similarly we have $x_1x_3=x_2x_4$ and $x_1x_4=x_2x_3$. We thus find that $U(x)$ is spanned over $B$ by the images of any two of the three nonzero elements $x_1x_2$ (or $x_3x_4$), $x_1x_3$ (or $x_2x_4$), and $x_1x_4$ (or $x_2x_3$). The $B$-modules $U(y)$ and $U(z)$ are defined in the analogous manner, and are thus also two-dimensional. Third, $V(x,y)$ is defined as the product $T(x) \otimes T(y)$, modulo the relations \[ x_1 y_1=x_2 y_2=x_3 y_3=x_4 y_4=0 \] (where we have suppressed the tensor symbols). As $T(x)\otimes T(y)$ is a rank 9 module over $B$, we see that $V(x,y)$ is five-dimensional. The $B$-modules $V(y,z)$ and $V(x,z)$ are defined analogously, and hence are also five-dimensional. Finally, $W(x,y,z)$ is the space $T(x)\otimes T(y)\otimes T(z)$ modulo the relations \[ x_iy_iz_j=x_jy_iz_i=x_iy_jz_i=0 \] for all $i$ and $j$. Moreover, for each permutation $(i,j,k)$ of $(1,2,3)$, we also have the relation $x_iy_jz_k={\rm sgn}(i,j,k)x_1y_2z_3$. We have imposed the latter relations because we have such relations in $I(A,B)$: \[0=(x_4y_4)z_3=(-x_1-x_2-x_3)(-y_1-y_2-y_3)z_3=x_1y_2z_3+x_2y_1z_3, \] implying $x_2y_1z_3=-x_1y_2z_3$, etc. With these relations, we see that the rank of $W(x,y,z)$ over $B$ is 1, and is spanned over $B$ by $x_1y_2z_3$. 
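The ranks computed for $U(x)$ and $V(x,y)$ above can also be double-checked by exact linear algebra over $\mathbb{Q}$. The following sketch (an independent sanity check, not part of the proof; the encoding and names are ours) imposes the defining relations inside the ambient free modules and computes the ranks of the quotients:

```python
from fractions import Fraction
from itertools import combinations, combinations_with_replacement

def rank(rows):
    """Exact rank of a list of integer vectors, by Gaussian elimination over Q."""
    mat = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(mat[0])):
        piv = next((i for i in range(r, len(mat)) if mat[i][col]), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][col]:
                factor = mat[i][col] / mat[r][col]
                mat[i] = [a - factor * b for a, b in zip(mat[i], mat[r])]
        r += 1
    return r

def vsum(vectors):
    return [sum(coords) for coords in zip(*vectors)]

# U(x): start from Sym^2 of the free module on x_1,...,x_4 (rank 10) and impose
#   (x_1+x_2+x_3+x_4)x_i = 0   (this passes to Sym^2 of T(x)),
#   x_i^2 = 0,  and  sum_{i<j} x_i x_j = 0.
mon = list(combinations_with_replacement(range(4), 2))  # basis x_i x_j, i <= j
def e(i, j):
    v = [0] * len(mon)
    v[mon.index(tuple(sorted((i, j))))] = 1
    return v

rels_U = [vsum([e(i, j) for j in range(4)]) for i in range(4)]
rels_U += [e(i, i) for i in range(4)]
rels_U.append(vsum([e(i, j) for i, j in combinations(range(4), 2)]))
rank_U = len(mon) - rank(rels_U)

# V(x,y): start from the free rank-16 space on x_i y_j and impose
#   (sum_i x_i) y_j = 0  and  x_i (sum_j y_j) = 0  (passing to T(x) tensor T(y)),
#   plus the diagonal relations x_i y_i = 0.
def f(i, j):
    v = [0] * 16
    v[4 * i + j] = 1
    return v

rels_V = [vsum([f(i, j) for i in range(4)]) for j in range(4)]
rels_V += [vsum([f(i, j) for j in range(4)]) for i in range(4)]
rels_V += [f(i, i) for i in range(4)]
rank_V = 16 - rank(rels_V)

print(rank_U, rank_V)  # 2 5
```

The analogous check for $W(x,y,z)$ (ambient rank $64$, with the sign relations included) works the same way and is omitted here for brevity.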
We have not defined any $B$-module components in $R$ involving quadruple products of $x_i,y_j,z_k$ ($1\leq i,j,k\leq 3$) because these are forced to be zero by the relations $x_iy_i=x_iz_i=y_iz_i=0$ coming from $A$. Similarly, there are no $B$-module components involving triple products of only $x_i$ and $y_j$ ($1\leq i,j\leq 3$), since the analogues of such products in $G(A/B)$ would be zero: \[0=x_1(y_1y_2+y_2y_3+y_1y_3)=x_1y_2y_3,\] and similarly all such triple products would be zero in $G(A/B)$. Thus we keep only those components $T(\cdot)$, $U(\cdot)$, $V(\cdot,\cdot)$, and $W(\cdot,\cdot,\cdot)$ appearing in (\ref{abcdecomp}). The product structure on $R$ is defined simply in terms of the natural maps $T(x)\otimes T(x)\to U(x)$, $T(x)\otimes T(y)\to V(x,y)$, $T(x)\otimes T(y)\otimes T(z)\to W(x,y,z)$, $T(x)\otimes V(y,z)\to W(x,y,z)$, and so on. All other products (such as $T(x)\otimes V(x,y)$) are defined to be zero. With this product structure, it is immediate that $R$ is a ring. To see that $G(A/B)\cong R$, we note that there is a natural surjective map \[ A \otimes A\otimes A\otimes A\to R, \] sending $x_i\mapsto x_i$, $y_i\mapsto y_i$, and $z_i\mapsto z_i$. Furthermore, the kernel of this map is, by design, contained in $I(A,B)$. To see that it contains $I(A,B)$, one may simply check that it contains the elements (\ref{fundrels}) for $a$ ranging over a basis of $A$ over $B$, i.e., for the basis elements $1$ (trivial), $x$, $y$, and $z$. By symmetry of $x$, $y$, and $z$, we then only need to check that the elements (\ref{fundrels}) are in the kernel for $a=x$ and $j=1,2,3$, and this is again immediate. We conclude that $G(A/B)\cong R$ has rank 32 over $B$. \end{proof} \section{Why do we need to allow the rank of $S_n$-closures to exceed $n!$ \,\,?}\label{orderexample} Let $A$ be a ring of rank $n$ over a base $B$.
The purpose of this section is to explain, with an example, why one cannot simply add extra relations to $I(A,B)$, to obtain a larger ideal $I'(A,B)$, so that the ``modified $S_n$-closure'' $G'(A/B)=A^{\otimes n}/I'(A,B)$ is: \begin{itemize} \item[1)] always of rank $n!$ over $B$; and \item[2)] compatible with extension of scalars. \end{itemize} Following the example of the previous section, we take the maximally degenerate ring of rank~4 over $\mathbb{F}_2$, namely $\bar{R}=\mathbb{F}_2[x,y,z]/(x,y,z)^2$; we have already seen that $G(\bar{R}/\mathbb{F}_2)$ has rank~32 over $\mathbb{F}_2$. We wish to determine whether we can enlarge the ideal $I(A,B)$ given by (\ref{fundrels}), in a functorial way, to an ideal $I'(A,B)$ so that all modified $S_4$-closures $G'(A/B)=A^{\otimes4}/I'(A,B)$ of rings $A$ of rank 4 over $B$ have rank $4!=24$. In particular, can we get the $\mathbb{F}_2$-rank of $G'(\bar{R}/\mathbb{F}_2)$ down to 24 by adding extra relations to $I(\bar{R},\mathbb{F}_2)$, in a way that is compatible with extension of scalars? To get a hint of what relations would have to be added, we may view $\bar{R}$ as the reduction mod~2 of an order $R$ in a quartic field $K$; e.g., we may take $R=\mathbb{Z}+2(\mathbb{Z}[t]/(t^4-2))$ in the quartic field $K=\mathbb{Q}[t]/(t^4-2)$. Now the field $K$ is a monogenic (and \'etale!) degree 4 extension of $\mathbb{Q}$, so $G(K/\mathbb{Q})$ has $\mathbb{Q}$-rank $4!=24$ by Theorem \ref{monocase} (or \ref{etalecase}). Functoriality tells us that $G(R/\mathbb{Z})\otimes\mathbb{Q}=G(K/\mathbb{Q})$, implying that the ``free part'' of $G(R/\mathbb{Z})$, as a $\mathbb{Z}$-module, will be an order in the \'etale $\mathbb{Q}$-algebra $G(K/\mathbb{Q})$. However, we also know by functoriality that $\dim_{\mathbb{F}_2} G(R/\mathbb{Z})\otimes\mathbb{F}_2 = \dim_{\mathbb{F}_2} G(\bar{R}/\mathbb{F}_2)=32$. Thus, as a $\mathbb{Z}$-module, $G(R/\mathbb{Z})$ will have torsion!
Explicitly computing $G(R/\mathbb{Z})$ shows that we have the isomorphism \begin{equation}\label{torsion} G(R/\mathbb{Z})\cong \mathbb{Z}^{24}\oplus (\mathbb{Z}/2\mathbb{Z})^8 \end{equation} as $\mathbb{Z}$-modules. Thus if we are to enlarge $I(A,B)$ to an ideal $I'(A,B)$ so that $G'(A/B)$ always has rank 24 for rank 4 rings $A$ over $B$, then the new relations in $I'(A,B)$ must kill the torsion in (\ref{torsion}). This is because, by functoriality, we wish to have $G'(R/\mathbb{Z})\otimes\mathbb{Q}=G'(K/\mathbb{Q})$, $\dim_\mathbb{Q} G'(K/\mathbb{Q})=24$, and $\dim_{\mathbb{F}_2} G'(R/\mathbb{Z})\otimes\mathbb{F}_2 = \dim_{\mathbb{F}_2} G'(\bar{R}/\mathbb{F}_2)= 24$. Again by functoriality, we can reduce these relations modulo 2 to determine the ``correct'' quotient of $G(\bar{R}/\mathbb{F}_2)$ that will then give us the only possible modified $S_4$-closure $G'(\bar{R}/\mathbb{F}_2)$ having rank 24 over $\mathbb{F}_2$. To compute this $G'(\bar{R}/\mathbb{F}_2)$ explicitly, we first determine the relations in $I(R,\mathbb{Z})$ that cause the torsion to appear in (\ref{torsion}). To begin, we observe that in $G(K/\mathbb{Q})$ we have the relations \begin{equation}\label{basiceq} t_i^3+t_i^2t_j+t_it_j^2+t_j^3=0 \end{equation} for $1\leq i< j\leq 4$, implying the relations \begin{equation}\label{zrels} \begin{array}{lcrcrcrcl} (2t_i^3)(2)&+& (2t_i^2)(2t_j) &+& (2t_i)(2t_j^2) &+&(2)(2t_j^3) &=& 0\\[.05in] (2t_i^4)(2) &+& (2t_i^3)(2t_j)&+& (2t_i^2)(2t_j^2) &+& (2t_i)(2t_j^3) &=& 0\\[.05in] (2t_i^4)(2t_j)&+& (2t_i^3)(2t_j^2)&+& (2t_i^2)(2t_j^3)&+& (2t_i)(2t_j^4) &=& 0\\[.05in] (2t_i^4)(2t_j^2)&+& (2t_i^3)(2t_j^3)&+& (2t_i^2)(2t_j^4)&+&(2t_i)(2t_j^5) &=& 0. \end{array} \end{equation} Now the ring $R\subset K$ has basis $1$, $X=2t$, $Y=2t^2$, and $Z=2t^3$ over $\mathbb{Z}$.
In terms of these basis elements, the relations (\ref{zrels}) become \begin{equation}\label{rrels} \begin{array}{rcrcrcrcl} (Z_i)(2)&+& (Y_i)(X_j) &+& (X_i)(Y_j) &+& (2)(Z_j) &=& 0\\[.05in] (4)(2) &+& (Z_i)(X_j)&+& (Y_i)(Y_j) &+& (X_i)(Z_j) &=& 0\\[.05in] (4)(X_j)&+& (Z_i)(Y_j)&+& (Y_i)(Z_j)&+& (X_i)(4) &=& 0\\[.05in] (4)(Y_j)&+& (Z_i)(Z_j)&+& (Y_i)(4)&+&(X_i)(2X_j) &=& 0. \end{array} \end{equation} We may suspect that the elements on the left of (\ref{rrels}) are then in the ideal $I(R,\mathbb{Z})$. However, it turns out that only {\it twice} these elements are actually in $I(R,\mathbb{Z})$. This is precisely what leads to the extra eight dimensions of $2$-torsion in $G(R/\mathbb{Z})=R^{\otimes4}/I(R,\mathbb{Z})$. Thus, if one adds these relations to $I(R,\mathbb{Z})$ to obtain $I'(R,\mathbb{Z})$, then there is no more 2-torsion in $G'(R/\mathbb{Z})=R^{\otimes4}/I'(R,\mathbb{Z})$. Reducing the relations in (\ref{rrels}) modulo 2, and letting $x,y,z$ denote the reductions modulo 2 in $\bar{R}$ of the elements $X,Y,Z$ in $R$, we then obtain the following putative relations in our modified $S_4$-closure $G'(\bar{R}/\mathbb{F}_2)$ of $\bar{R}$: \begin{equation}\label{f2rels} \begin{array}{rcl} x_i y_j + y_i x_j &=& 0\\ x_i z_j + y_i y_j + z_ix_j &=& 0\\ y_i z_j + z_i y_j &=& 0\\ z_i z_j &=& 0 \end{array} \end{equation} When these relations are added to $I(\bar{R},\mathbb{F}_2)$ to form $I'(\bar{R},\mathbb{F}_2)$, we obtain (as was desired) a ring $G'(\bar{R}/\mathbb{F}_2)$ that is 24-dimensional over $\mathbb{F}_2$. This may be viewed as the ``modified $S_4$-closure'' of $\bar{R}=\mathbb{F}_2[x,y,z]/(x,y,z)^2$ which has the desired rank 24. However, it is clearly not unique or canonical---indeed, our modified $S_4$-closure $G'(\bar{R}/\mathbb{F}_2)$ as constructed above is not even symmetric in $x,y,z$! That is, it does not respect the group of automorphisms of $\bar{R}$ over $\mathbb{F}_2$, so in particular it cannot be functorial.
The quotient $G'(\bar{R}/\mathbb{F}_2)$ of $G(\bar{R}/\mathbb{F}_2)$ we obtain in this way clearly depends on the particular lift to characteristic 0. It follows that there is no functorial quotient $G'(A/B)$ of $G(A/B)$, for rings $A$ of rank 4 over $B$, such that $G'(A/B)$ always has rank 4!. In fact, we have proven something much stronger. To state the result, we first observe that the relations (\ref{fundrels}) will be in $I(K,\mathbb{Q})$ in any construction of the Galois closure of an $S_4$-quartic field $K$ over $\mathbb{Q}$ as a quotient of $K^{\otimes 4}$, where the tensor factors of $K$ represent the conjugates of $K$ in an algebraic closure $\bar\mathbb{Q}$ of $\mathbb{Q}$. Now, if we choose an order $R$ in $K$ that is imprimitive at $2$, then our procedure in this section leads to an ideal $I'(\bar{R},\mathbb{F}_2)$ such that $G'(\bar{R}/\mathbb{F}_2)=\bar{R}^{\otimes 4}/I'(\bar{R},\mathbb{F}_2)$ is a ring of rank $24$ over $\mathbb{Z}/2\mathbb{Z}$. If one intersects all possible $I'(\bar{R},\mathbb{F}_2)$ that one can obtain in this way over all lifts of $\bar{R}$ to orders $R$ in $S_4$-quartic fields $K$, one obtains precisely the ideal $I(\bar{R},\mathbb{F}_2)$ as we have defined it in (\ref{fundrels}). Thus $I(\bar{R},\mathbb{F}_2)$ cannot be enlarged at all, in a functorial way, so that $G(A/B)$ gives the desired rank 24 Galois closure in the case of an $S_4$-quartic extension $A/B$ of fields.\footnote{Note that the quartic field we had used earlier, namely $\mathbb{Q}[t]/(t^4-2)$ is not an $S_4$-quartic field, but has associated Galois group $D_4$. However, we could instead have taken some polynomial 2-adically close to $t^4-2$ with Galois group $S_4$, and that would not have changed any of the constructions modulo 2. 
Thus the choice of Galois group of our quartic field $K$ was not essential to any of our arguments.} Therefore, the fact that $G(\bar{R}/\mathbb{F}_2)$ has $\mathbb{F}_2$-rank 32 is something that is forced upon us by functoriality, as this ring by construction contains the information of the Galois closures of all lifts of $\bar{R}$ to characteristic zero. \vspace{.1in} One could also have reached a similar conclusion about $A=K[x,y,z]/(x,y,z)^2$ for other fields $K$ from a representation-theoretic point of view. Note that $G(A/K)$ is naturally a representation of ${\rm Aut}_{K}(A)={\rm GL}_3(K)$ and also of $S_4$, and thus (since these actions commute) of the group $\Gamma=S_4\times {\rm GL}_3(K)$. Any modified $S_4$-closure $G'(A/K)$, built as a quotient of $G(A/K)$ in a functorial way, must be $\Gamma$-equivariant. We use $\underline{{\rm triv}}$ and $\underline{{\rm std}}$ to denote the trivial representation and the standard three-dimensional representation of ${\rm GL}_3(K)$, respectively. Also, we write ${\rm triv}$, ${\rm sgn}$, ${\rm std}$, ${\rm std}'$, and ${\rm std}_2$ to denote the trivial, sign, standard, standard $\otimes$ sign, and 2-dimensional representations of $S_4$, respectively. These representations are irreducible when the characteristic is not 2 or 3. In that case, as $\Gamma$-representations, we have the decomposition \begin{equation}\label{repdecomp} G(A/K)\cong ({\rm triv}\otimes \underline{{\rm triv}})\oplus ({\rm std}\otimes\underline{{\rm std}})\oplus ({\rm std}_2\otimes\underline{{\rm std}})\oplus({\rm std}'\otimes \underline{{\rm std}}^\vee)\oplus ({\rm std}_2\otimes \underline{{\rm std}}^\vee)\oplus ({\rm sgn}\otimes\underline{{\rm triv}}) \end{equation} as a sum of irreducible representations; the $K$-dimensions of these irreducible summands are 1, 9, 6, 9, 6, and 1 respectively, giving a total of 32.
The first ${\rm triv}\otimes\underline{{\rm triv}}$ corresponds to the subring $K\times 1\subset A$; we then observe that no sum of any subset of elements of $\{9,6,9,6,1\}$ adds up to 8, and thus $G(A/K)$ has no $\Gamma$-equivariant quotient ring of rank 24 over $K$. \vspace{.1in} Either way, we conclude that we simply must allow dimensions larger than $n!$ for $S_n$-closures of rank $n$ rings, in order to preserve all the information that functoriality (i.e., compatibility with base change) demands. \section{The maximal rank of $S_n$-closures} \label{sec:maxrank} The purpose of this section is to show that the analogues for general $n$ of the maximally degenerate ring of rank~4 (considered in Section~\ref{degexample}) form the rings whose $S_n$-closures have maximal rank. Thus we prove Theorem~\ref{maxrank}. The idea of our proof is as follows. In a sense which we make precise below, the ring $R_n=K[x_1,\dots,x_{n-1}]/(x_1,\dots,x_{n-1})^2$ is the ``maximally degenerate point'' in the moduli space of all rank $n$ rings over $K$. Since functoriality shows that the $S_n$-closures of rank $n$ rings fit together into a nicely-behaved sheaf on the moduli space, an upper semi-continuity argument allows us to conclude that the rank of the $S_n$-closure is maximal at the degenerate ring $R_n$. As in \cite{moduli}, let $\mathfrak{B}_n$ be the functor from $\mathbf{Schemes}^{\textrm{op}}$ to $\mathbf{Sets}$ which assigns to any scheme $S$ the set of isomorphism classes of pairs $(\mathcal{A},\phi)$, where $\mathcal{A}$ is an $\mathcal{O}_S$-algebra and $\phi:\mathcal{A}\rightarrow\mathcal{O}_S^n$ is an isomorphism of $\mathcal{O}_S$-modules. By \cite[Prop.\ 1.1]{moduli}, the functor $\mathfrak{B}_n$ is representable by an affine scheme of finite type over $\mathbb{Z}$. The base change $\mathfrak{B}_{n,K}$ of $\mathfrak{B}_n$ to ${\rm Spec}\,\, K$ is affine. Let $\mathfrak{B}_{n,K}={\rm Spec}\,\, B_n$. 
The identity morphism from $\mathfrak{B}_{n,K}$ to itself yields a distinguished isomorphism class of pairs $(A_n,\phi)$ with $A_n$ a $B_n$-algebra and $\phi:A_n\rightarrow B_n^n$ an isomorphism. Let us choose an object $(A_n,\phi)$ of this isomorphism class. Since we are interested in proving a statement about dimension, this choice does not matter. Since the $S_n$-closure $G(A_n/B_n)$ of $A_n$ is a $B_n$-algebra of finite rank, it defines a coherent sheaf $\mathcal{F}_n$ on $\mathfrak{B}_{n,K}$. By functoriality of the $S_n$-closure, if we have a morphism $f:{\rm Spec}\,\, C\rightarrow \mathfrak{B}_{n,K}$ corresponding to the pair $(R,\psi)$, then $f^*\mathcal{F}_n$ is isomorphic to $G(R/C)$. Note that there is a natural ${\rm GL}_{n,K}$-action on $\mathfrak{B}_{n,K}$ and that functoriality of the $S_n$-closure shows that it extends to an action on the sheaf $\mathcal{F}_n$. The proof of \cite[Prop.\ 7.1]{moduli} shows that the $K$-point corresponding to $R_n$ is in the Zariski closure of the ${\rm GL}_{n,K}$-orbit of any other point. Upper semi-continuity therefore shows that the dimension of the fiber of $\mathcal{F}_n$ is maximal at the point corresponding to $R_n$, as desired. \section{The $S_n$-closures of the degenerate rings $R_n$} \label{deg} \subsection{Preliminaries from $S_n$-representation theory} In this subsection, we collect several facts from $S_n$-representation theory that we use in the proof of Theorem~\ref{degtheorem} (made more precise in Theorem~\ref{thm:main}). For us, given a positive integer $n$, a {\it partition of $n$} is an $n$-tuple $\lambda=(\lambda_1,\dots,\lambda_n)$ satisfying $n\geq\lambda_1\geq\dots\geq\lambda_n\geq0$ and $\sum \lambda_i=n$. We often drop the $\lambda_i=0$ in our notation, so that the partition $(3,1,0,0)$ of $4$, for example, is denoted simply as $(3,1)$. Partitions of $n$ play a key role in $S_n$-representation theory due to the following theorem (see, for example, \cite[2.1.12]{repthy}).
\begin{theorem} If $K$ is a field of characteristic $0$ or of characteristic $p>n$, then there is a canonical bijection between partitions of $n$ and irreducible $S_n$-representations over $K$. \end{theorem} Given a partition $\lambda$, we denote by $V_\lambda$ the corresponding irreducible $S_n$-representation. The $V_\lambda$ are called Specht modules and can, in fact, be defined over the integers. We associate to $\lambda$ a \emph{Young diagram}, which consists of $n$ rows of boxes with $\lambda_i$ boxes on the $i^{th}$ row. For example, the Young diagram of $\lambda=(4,2,2,1)$ is \[ \yng(4,2,2,1) \] If $\lambda$ and $\mu$ are two partitions of $n$, and if $k=|\{i:\lambda_i\neq0\}|$, then a \emph{Young tableau of shape} $\mu$ \emph{and content} $\lambda$ is an assignment, to each box of the Young diagram of $\mu$, of an element of $\{1,2,\dots,k\}$ in such a way that the element $i$ is assigned to exactly $\lambda_i$ boxes. Such a Young tableau is called \emph{semi-standard} if the numbers assigned to the boxes of the Young diagram of $\mu$ weakly increase across rows and strictly increase down columns. For example, both \[ \young(11112,233,4)\quad\textrm{and}\quad\young(11123,123,4) \] are Young tableaux of shape $(5,3,1)$ and content $(4,2,2,1)$, but only the first is semi-standard. The \emph{Kostka number} $K_{\lambda\mu}$ is defined to be the number of semi-standard Young tableaux of shape $\mu$ and content $\lambda$. \begin{definition} \emph{If $\lambda$ and $\mu$ are two partitions of $n$, we say $\mu$ \emph{dominates} $\lambda$ and write $\mu\triangleright\lambda$ if $\sum_{i=1}^{j}\mu_i\geq\sum_{i=1}^j\lambda_i$ for all $j$.} \end{definition} Note that in order for a Young tableau of shape $\mu$ and content $\lambda$ to be semi-standard, all $\lambda_i$ $i$'s must occur within the first $i$ rows. So, if $\mu$ does not dominate $\lambda$, then $K_{\lambda\mu}=0$.
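Since the Kostka numbers just defined are purely combinatorial, they are easy to compute by brute force for small $n$. The following sketch (illustrative only; the function names are ours) counts semi-standard Young tableaux directly, and also checks the standard fact that for $\lambda=(1,\dots,1)$ the numbers $K_{\lambda\mu}$ count standard Young tableaux, whose squares sum to $n!$:

```python
from itertools import product

def kostka(lam, mu):
    """Number of semi-standard Young tableaux of shape mu and content lam,
    counted by brute-force enumeration (fine for small n)."""
    cells = [(r, c) for r, row_len in enumerate(mu) for c in range(row_len)]
    k = len(lam)
    count = 0
    for filling in product(range(1, k + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        if any(filling.count(i + 1) != lam[i] for i in range(k)):
            continue  # wrong content
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)]
                      for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)]
                      for (r, c) in cells if (r + 1, c) in T)
        count += rows_ok and cols_ok
    return count

print(kostka((1, 1, 1), (2, 1)))  # 2: the two standard tableaux of shape (2,1)

# For lam = (1,...,1), K_{lam,mu} is the number of standard Young tableaux of
# shape mu, and the squares of these numbers sum to n! (here n = 4):
parts4 = [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
print(sum(kostka((1, 1, 1, 1), mu) ** 2 for mu in parts4))  # 24
```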
The importance of the Kostka numbers is seen in Young's Rule below (for a proof, see \cite[Cor 4.39]{fh}). \begin{theorem}$\emph{(Young's Rule)}$ \label{thm:young} If $\lambda$ is a partition of $n$, then \[ \emph{Ind}_{S_{\lambda_1}\times\dots\times S_{\lambda_n}}^{S_n}({\rm triv})=\bigoplus_{\mu\triangleright\lambda}K_{\lambda\mu}V_\mu. \] \end{theorem} In particular, since \[ K[S_n]={\rm Ind}_{S_1\times\dots\times S_1}^{S_n}({\rm triv}), \] we see, comparing with the decomposition $K[S_n]=\bigoplus_\mu (\dim V_\mu)V_\mu$ of the regular representation, that $K_{\lambda\mu}=\dim V_\mu$, where $\lambda=(1,1,\dots,1)$. There is a second combinatorial theorem we later make use of. This theorem, known as the hook formula, gives another way to relate $\dim V_\lambda$ to the Young diagram of $\lambda$. \begin{definition} \emph{The \emph{hook number} of the $j^{th}$ box in the $i^{th}$ row of the Young diagram of $\lambda$ is $1+\lambda_i-j+|\{k\;:\;k>i,\lambda_k\geq j\}|$. That is, it is the number of boxes in the ``hook'' which runs up the $j^{th}$ column, stops at the box in question, and continues across the $i^{th}$ row to the right.} \end{definition} For example, replacing each box in the Young diagram of $(4,2,2,1)$ by its hook number, we have \[ \young(7521,42,31,1) \] \begin{theorem}$\emph{(Hook Formula)}$ Given a partition $\lambda$ of $n$, let $H$ be the product of the hook numbers of the boxes in the Young diagram of $\lambda$. Then $\dim V_\lambda=\frac{n!}{H}$. \end{theorem} \subsection{A structure theorem for $S_n$-closures of degenerate rings} Throughout this subsection, $K$ is a field of characteristic $0$ or of characteristic $p>n$, and $R_n$ denotes the degenerate ring $K[x_1,\dots,x_{n-1}]/(x_1,\dots,x_{n-1})^2$. Then $R_n^{\otimes n}$ is a $K$-vector space of dimension $n^n$ with basis $x_{i_1}\otimes\dots\otimes x_{i_n}$, where $i_j\in\{0,\dots,n-1\}$ and $x_0:=1$. For notational convenience, we drop the tensor signs.
\begin{definition} \emph{Let} $\gamma(x_k,x_{i_1}x_{i_2}\dots x_{i_n})=x_{i_1}x_{i_2}\dots x_{i_n}\cdot(x_k1\dots1+1x_k\dots1+\dots+11\dots x_k)$. \end{definition} As shown in the proof of Theorem \ref{thm:functorial}, the ideal $I:=I(R_n,K)$ is generated by the relations (\ref{fundrels}) where $a$ ranges through a basis of $R_n$ over $K$. The ideal $I$ is therefore spanned as a $K$-vector space by the elements $\gamma(x_k,x_{i_1}x_{i_2}\dots x_{i_n})$. As mentioned in the introduction, there is a natural $S_n$-action on $R_n^{\otimes{n}}$ given by permuting the tensor factors and this passes to an action on the $S_n$-closure $G(R_n/K)$ of $R_n$. Here, we see that \[ \pi(\gamma(x_k,x_{i_1}x_{i_2}\dots x_{i_n}))=\gamma(x_k,x_{i_{\pi^{-1}(1)}}x_{i_{\pi^{-1}(2)}}\dots x_{i_{\pi^{-1}(n)}}) \] for all $\pi\in S_n$. Our goal in this subsection is to prove \begin{theorem} \label{thm:main} For all partitions $\lambda$ of $n$, let $m_\lambda$ be the multinomial coefficient $\binom{n-1}{k_0;\dots;k_{n-1}}$, where $k_j=|\{i:i\neq1,\lambda_i=j\}|$. Then there is an isomorphism \[ G(R_n/K)\cong\bigoplus_{\lambda}\bigoplus_{\substack{\mu\triangleright\lambda\\ \mu_1=\lambda_1}}m_\lambda K_{\lambda\mu}V_\mu \] of $S_n$-representations, where $\lambda$ runs over the partitions of $n$. \end{theorem} As we show in Theorem \ref{cor:reg}, the theorem above implies that the dimension of $G(R_n/K)$ is greater than $n!$ for $n\geq4$; that is, it implies Theorem \ref{degtheorem}. As a first step in proving Theorem \ref{thm:main}, we begin by crudely decomposing $G(R_n/K)$ into certain naturally occurring $S_n$-representations parametrized by partitions of $n$. Let $a=(a_0,a_1,\dots,a_{n-1})$ be an ordered partition of $n$ and let $M_a$ be the subrepresentation of $R_n^{\otimes n}$ generated by the $x_{i_1}\dots x_{i_n}$ with $a_k=|\{j:i_j=k\}|$. Let $I_a$ be the subrepresentation of $I$ generated by the $\gamma(x_j,x_{j_1}\dots x_{j_n})\in M_a$.
For example, writing $x$ and $y$ for $x_1$ and $x_2$, respectively, if $a=(1,2,1)$, then $M_a$ is generated by the $12$ elements $1xxy$, $1xyx$, $1yxx$, $\dots$, $xx1y$, and $xxy1$; $I_a$ is generated by the $6$ elements $\gamma(y,11xx)$, $\gamma(y,1x1x)$, $\dots$, $\gamma(y,xx11)$ as well as the $12$ elements $\gamma(x,11xy)$, $\gamma(x,1x1y)$, $\dots$, $\gamma(x,yx11)$. We claim that $I\cap M_a=I_a$. Clearly, $I_a$ is contained in $I\cap M_a$. To prove the other containment, let $\beta\in I\cap M_a\subset I$. We have then that \[ \beta=\sum_{i=1}^N\lambda_i\gamma_i, \] where $\lambda_i\in K$, $\gamma_i$ is some $\gamma(x_k,x_{j_1}\dots x_{j_n})\in M_{a(i)},$ and $a(i)$ is some ordered partition of $n$. Since $R_n^{\otimes n}$ is a direct sum of the $M_{a'}$, we see that for $a'\neq a$, \[ \sum_{a(i)=a'}\lambda_i\gamma_i=0. \] Therefore, $\beta=\sum_{a(i)=a}\lambda_i\gamma_i$ is a $K$-linear combination of the $\gamma_i\in I_a$, and hence lies in $I_a$. This proves the claim, and as a result, \[ G(R_n/K)=\bigoplus_a M_a/I_a. \] The following lemma shows that if $a_0<a_k$ for some $k$, then $M_a=I_a$. \begin{lemma} \label{l:incexc} Let $i_1,\dots,i_n\in\{0,1,\dots,n-1\}$. If there is some $k$ such that \[ |\{j:i_j=0\}|<|\{j:i_j=k\}|, \] then $x_{i_1}x_{i_2}\dots x_{i_n}\in I$. \end{lemma} Since the notation in the proof of this lemma is a bit cumbersome, we first illustrate the proof with a specific example. Denoting $x_1$, $x_2$, and $x_3$ by $x$, $y$, and $z$, respectively, let us show $1yx1xzyx\in I$. For $a,b,c\in S:=\{1,3,4,5,8\}$, let $[a,b,c]$ denote $x_{i_1}x_{i_2}\dots x_{i_8}$ with $i_a=i_b=i_c=1$, $i_2=i_7=2$, $i_6=3$, and all other $i_j=0$. For example, $[1,3,4]=xyxx1zy1$ and $[3,5,8]=1yx1xzyx$. By the inclusion-exclusion principle, \[ 1yx1xzyx=\sum_{\substack{a<b<c\\ a,b,c\in S}}[a,b,c] - \sum_{\substack{a<b\\ a,b\in S-\{1\}}}[1,a,b] - \sum_{\substack{a<b\\ a,b\in S-\{4\}}}[4,a,b] + \sum_{a\in S-\{1,4\}}\;[1,4,a]. \] It is not difficult to see that each of the sums is in $I$.
For example, $\sum[1,a,b]=\alpha\cdot xy111zy1$, where $\alpha$ is the sum of all elements of the form $x_{i_1}x_{i_2}\dots x_{i_8}$ with exactly two of the $i_j=1$ and all other $i_j=0$. One then notes that $2\alpha=\gamma(x,11111111)^2\in I$, and hence $\alpha\in I$, as $2$ is invertible in $K$. \vspace{.125in} \begin{proofl} Let $T=\{j:i_j=k\}$ and $S=T\cup \{j:i_j=0\}$. For distinct elements $a_1,\dots,a_{|T|}\in S$, let $[a_1,\dots,a_{|T|}]$ denote the element $x_{i'_1}\dots x_{i'_n}$ with $i'_{a_j}=k$, with $i'_j=i_j$ if $j\notin S$, and with all other $i'_j=0$. By an inclusion-exclusion argument similar to the one above, we are reduced to showing \[ \sum_{\substack{a_1<\dots<a_c\\ a_j\in S\setminus \{b_1,\dots,b_{|T|-c}\}}}[a_1,\dots, a_c,b_1,\dots, b_{|T|-c}]\in I, \] where $1\leq c< |T|$ and the $b_j$ are fixed elements of $S$. This sum equals $\alpha\cdot x_{i'_1}\dots x_{i'_n}$, where $i'_{b_\ell}=k$ for each $\ell$, $i'_j=i_j$ if $j\notin S$, and $i'_j=0$ otherwise; here $\alpha$ is the sum of all $x_{i''_1}\dots x_{i''_n}$ with exactly $|T|-c$ of the $i''_j=k$ and all other $i''_j=0$. Since $|T|-c>0$, we have $(|T|-c)!\,\alpha=\gamma(x_k,11\dots1)^{|T|-c}\in I$, so $\alpha\in I$, and hence the sum is as well. \end{proofl}
For example, if $a_0\geq a_k$ for all $k$ and $a_0> a_{\sigma^{-1}(0)}$, then Lemma \ref{l:incexc} shows that $M_{\sigma(a)}=I_{\sigma(a)}$; however, it follows from $\eqref{eq:star}$ and Proposition \ref{prop:proper} below that $I_a$ is a $\emph{proper}$ subrepresentation of $M_a$. Let $a$ be such that $a_0\geq a_k$ for all $k$. For all $j$ such that $0\leq j\leq n-1$, let $k_j=|\{i:i\neq0,\;a_i=j\}|$. Then $\{\sigma(a):\sigma(0)=0\}$ has cardinality $\binom{n-1}{k_0;\dots;k_{n-1}}=m_{\lambda(a)}$, where $\lambda(a)=(\lambda_1,\dots,\lambda_n)$ is the partition of $n$ such that $\{\lambda_i:1\leq i\leq n\}=\{a_i:0\leq i\leq n-1\}$ as multi-sets. For any partition $\lambda$ of $n$, let $M_\lambda$ and $I_\lambda$ be the isomorphism classes of the $S_n$-representations $M_a$ and $I_a$, respectively, for any $a$ such that $a_0=\lambda_1$ and $\{\lambda_i\}=\{a_i\}$ as multi-sets. This is well-defined as $\lambda(a)=\lambda(\sigma(a))$ for all $\sigma$ fixing $0$. Since each $a$ with $a_0\geq a_k$ for all $k$ satisfies $M_a/I_a\cong M_{\lambda(a)}/I_{\lambda(a)}$, and since for each partition $\lambda$ of $n$ there are exactly $m_\lambda$ such $a$ with $\lambda(a)=\lambda$, we have shown \[ G(R_n/K)\cong\bigoplus_\lambda m_\lambda M_\lambda/I_\lambda. \] Given a partition $\lambda$ of $n$, let $i_j=k-1$ if $\sum_{m=1}^{k-1} \lambda_m < j \leq \sum_{m=1}^k \lambda_m$ (so that $i_j=0$, i.e., $x_{i_j}=1$, for $1\leq j\leq\lambda_1$). Note then that \[ M_\lambda={\rm Ind}_{S_{\lambda_1}\times\dots\times S_{\lambda_n}}^{S_n}(K\cdot x_{i_1}\dots x_{i_n}). \] Since $K\cdot x_{i_1}\dots x_{i_n}$ is the trivial representation of $S_{\lambda_1}\times\dots\times S_{\lambda_n}$, by Young's Rule we have \begin{equation} \label{eq:star} M_\lambda\cong\bigoplus_{\mu\triangleright\lambda}K_{\lambda\mu}V_\mu \end{equation} for every partition $\lambda$ of $n$. We have therefore reduced Theorem $\ref{thm:main}$ to the following theorem. \begin{theorem} \label{thm:ideal} For all partitions $\lambda$ of $n$, \[ I_\lambda\cong\bigoplus_{\substack{\mu\triangleright\lambda\\ \mu_1>\lambda_1}} K_{\lambda\mu}V_\mu.
\] \end{theorem} To prove Theorem \ref{thm:ideal}, we show that $I_\lambda$ contains a copy of $\bigoplus_{\mu\triangleright\lambda,\;\mu_1>\lambda_1} K_{\lambda\mu}V_\mu$ and that it contains no copy of $V_\mu$ if $\mu\triangleright\lambda$ and $\mu_1=\lambda_1$. These two statements are the content of Propositions \ref{prop:first} and \ref{prop:proper}, respectively. \begin{proposition} \label{prop:first} If $\lambda$ and $\mu$ are partitions of $n$ with $\mu_1>\lambda_1$, then the natural morphism \[ {\rm Hom}(V_\mu,I_\lambda)\longrightarrow{\rm Hom}(V_\mu,M_\lambda) \] is an isomorphism. \end{proposition} \begin{proof} Given a semi-standard Young tableau $T$ of shape $\mu$ and content $\lambda$, if $i=j+\sum_{m=1}^{k-1}\lambda_m \leq \sum_{m=1}^{k}\lambda_m$ for $j>0$, then let $T(i)$ be the number assigned to the $j^{th}$ box on the $k^{th}$ row of $T$. For example, $T(\lambda_1+1)$ is the number assigned to the first box of the second row. We can associate to $T$ an element $\alpha(T):=x_{T(1)-1}\dots x_{T(n)-1}$ of $M_\lambda$. Let $A_T$ be the set of Young tableaux $T'$ of shape $\mu$ and content $\lambda$ such that for all $i$, the multi-set of numbers in the $i^{th}$ row of $T'$ is the same as the multi-set of numbers in the $i^{th}$ row of $T$. Then by \cite[2.10.1]{sagan}, the image of any morphism $V_\mu\rightarrow M_\lambda$ of $S_n$-representations is contained in the $S_n$-subspace of $M_\lambda$ generated by the elements $\sum_{T'\in A_T} \alpha(T')$ as $T$ ranges over the semi-standard Young tableaux of shape $\mu$ and content $\lambda$. It therefore suffices to show $\sum_{T'\in A_T} \alpha(T')\in I_\lambda$ for every semi-standard Young tableau $T$ of shape $\mu$ and content $\lambda$. We define an equivalence relation on $A_T$ by $T'\sim T''$ if $T'(i)=T''(i)$ for all $i>\mu_1$. This equivalence relation partitions $A_T$ into the disjoint union of sets $S_1, S_2,\dots,S_\ell$. For $i>\mu_1$, let $S_j(i)=T'(i)$ for any $T'\in S_j$.
Since \[ \sum_{T'\in A_T}\alpha(T')=\sum_{j=1}^\ell\sum_{T'\in S_j}\alpha(T'), \] it suffices to show each $\sum_{T'\in S_j}\alpha(T')\in I_\lambda$. Note that \[ \sum_{T'\in S_j}\alpha(T')=\delta\cdot (\underbrace{11\dots1}_{\mu_1}x_{S_j(\mu_1+1)-1}\dots x_{S_j(n)-1}), \] where $\delta$ is the sum of all elements of the form $x_{i_1}\dots x_{i_n}$ with \[ \{i_k\}=\{T(k)-1:1\leq k\leq \mu_1\}\cup\{\underbrace{0,0,\dots,0}_{n-\mu_1}\} \] as multi-sets. Letting $a_m=|\{k:i_k=m\}|$ and noting that there is some $m\neq0$ for which $a_m>0$, we see \[ \delta=\prod_{m=1}^{n-1}\gamma(x_m,11\dots1)^{a_m}\in I_\lambda, \] which finishes the proof. \end{proof} \begin{proposition} \label{prop:proper} If $\mu\triangleright\lambda$ and $\mu_1=\lambda_1$, then $V_\mu$ does not occur in $I_\lambda$. \end{proposition} \begin{proof} Let $\Gamma_m$ be the subrepresentation of $I_\lambda$ generated by the $\gamma(x_m,x_{i_1}\dots x_{i_n})\in I_\lambda$. Let $\ell$ be the smallest integer greater than or equal to $m$ such that $\lambda_m=\lambda_\ell>\lambda_{\ell+1}$. If $\lambda_m=\lambda_j$ for all $j\geq m$, then let $\ell=n$. We define \[ \lambda'=(\lambda_1+1,\lambda_2,\dots,\lambda_{\ell-1}, \lambda_\ell-1,\lambda_{\ell+1},\dots,\lambda_n). \] Let $i_j = m$ if $\sum_{b=1}^{m} \lambda'_b < j \leq \sum_{b=1}^{m+1} \lambda'_b$. Note that \[ \Gamma_m={\rm Ind}_{S_{\lambda'_1}\times\dots\times S_{\lambda'_n}}^{S_n} (K\cdot\gamma(x_m,x_{i_1}\dots x_{i_n})). \] Since $K\cdot\gamma(x_m,x_{i_1}\dots x_{i_n})$ is the trivial representation of $S_{\lambda'_1}\times\dots\times S_{\lambda'_n}$, Young's Rule tells us \[ \Gamma_m=\bigoplus_{\epsilon\triangleright\lambda'}K_{\lambda'\epsilon}V_\epsilon. \] Since $\lambda'_1=\lambda_1+1$, any $\epsilon$ which dominates $\lambda'$ must have $\epsilon_1>\lambda_1$. Therefore $V_\mu$ does not occur in any of the $\Gamma_m$, and since $I_\lambda$ is the vector space span of the $\Gamma_m$, it does not occur in $I_\lambda$. 
\end{proof} This concludes the proof of Theorem \ref{thm:ideal}, and hence also of Theorem \ref{thm:main}. We now turn to the following theorem. \begin{theorem} \label{cor:reg} The regular representation is a subrepresentation of $G(R_n/K)$. If $n\geq4$, it is a proper subrepresentation. In particular, Theorem \ref{degtheorem} follows. \end{theorem} As the proof of this theorem shows, as $n$ gets large, the regular representation is only a small subrepresentation, and so the bound in Theorem \ref{degtheorem} is a weak one. \begin{lemma} \label{l:hook} Let $\epsilon$ and $\tau$ be two partitions of $n$. Suppose $\tau_{k-1}>\tau_k=0$ and that $\epsilon=(\tau_1,\dots,\tau_{i-1},\tau_i-1,\tau_{i+1},\dots,\tau_{k-1},1)$ for some $i>1$. Let $E_1$ and $T_1$ be the products of the hook numbers of the boxes in the first row of the Young diagrams of $\epsilon$ and $\tau$, respectively. Then $T_1\geq E_1$. \end{lemma} \begin{proof} Let $h_1$ and $h_2$ be the hook numbers of the first and the $\tau_i^{th}$ boxes in the first row of the Young diagram of $\tau$, respectively. Then \[ E_1=T_1\frac{(h_1+1)(h_2-1)}{h_1h_2}. \] Expanding $(h_1+1)(h_2-1)=h_1h_2+h_2-h_1-1$ and noting that $h_1>h_2$, since hook numbers strictly decrease along a row, we have the desired inequality. \end{proof} \begin{prooft} We must show that, for every partition $\mu$ of $n$, \[ \sum_{\substack{\lambda\,:\,\mu\triangleright\lambda\\ \mu_1=\lambda_1}}m_\lambda K_{\lambda\mu}\geq\dim V_\mu. \] Fix $\mu$ and let $\lambda=(\mu_1,1,\dots,1)$. We in fact prove $m_\lambda K_{\lambda\mu}\geq\dim V_\mu$. If $\mu=(n)$, then $\lambda=\mu$ and $m_\lambda K_{\lambda\mu}=1=\dim V_\mu$. Now suppose $\mu_1<n$. Let $\mu'=(\mu_2,\dots,\mu_n)$ and $\lambda'=(\lambda_2,\dots,\lambda_n)$. Since $\mu_1=\lambda_1$, the first row of every semi-standard Young tableau of shape $\mu$ and content $\lambda$ consists entirely of $\mu_1$ 1's. Therefore, \[ K_{\lambda\mu}=K_{\lambda'\mu'}=\dim V_{\mu'}, \] where the second equality comes from the paragraph following Theorem \ref{thm:young}.
Let $H$ be the product of the hook numbers of the Young diagram of $\mu$ and let $H_1$ be the product of the hook numbers of the boxes in the first row. Since \[ \dim V_\mu=\frac{n!}{H}=\dim V_{\mu'}\cdot\frac{n!}{H_1(n-\mu_1)!} \] and $m_\lambda=\binom{n-1}{\mu_1-1}$, we need only show $H_1\geq n(\mu_1-1)!$. Note that the product of the hook numbers of the boxes in the first row of the Young diagram of $\lambda$ is $n(\mu_1-1)!$. The first part of the theorem therefore follows from Lemma \ref{l:hook}. Note that if $n\geq4$, then letting $\mu=(n-2,2)$ and $\lambda=(n-2,1,1)$, we have \[ \sum_{\substack{\lambda\,:\,\mu\triangleright\lambda\\ \mu_1=\lambda_1}}m_\lambda K_{\lambda\mu}=m_\lambda K_{\lambda\mu}+m_\mu K_{\mu\mu}> \dim V_\mu, \] which shows that the regular representation is a proper subrepresentation. \end{prooft} \subsection{Examples} \label{sec:ex} In this section we illustrate Theorem \ref{thm:main} in the cases $n=3$ and $n=4$. The following table collects the relevant information when $n=3$. \[ \begin{array}{l | c | l | c | c | c} \mu & \dim V_\mu & \lambda\textrm{\ s.t.\ } \lambda_1=\mu_1\textrm{\ and\ } \mu\triangleright\lambda & m_\lambda & K_{\lambda\mu} & m_\lambda K_{\lambda\mu}\\ \hline & & & & & \\[-.023in] \yng(3) & 1 & \hspace{.51in}\yng(3) & 1 & 1 & 1\\[.05in] \yng(2,1) & 2 & \hspace{.51in}\yng(2,1) & 2 & 1 & 2\\[.05in] \yng(1,1,1) & 1 & \hspace{.51in}\yng(1,1,1) & 1 & 1 & 1 \end{array} \] We see that for each partition $\mu$ of $3$, the dimension of $V_\mu$ agrees with $m_\mu K_{\mu\mu}$ and so Theorem \ref{thm:main} shows that $G(R_3/K)$ is the regular representation. The cases $n\leq 3$ are rather uninteresting since for such $n$, whenever $\mu$ and $\lambda$ are partitions of $n$ with $\mu$ dominating $\lambda$ and $\lambda_1=\mu_1$, we in fact have $\mu=\lambda$. When $n=4$, however, there exists a single pair $(\mu,\lambda)$ of partitions satisfying the above conditions for which $\mu$ and $\lambda$ are distinct.
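As a quick sanity check on the $\dim V_\mu$ entries in these tables, the hook length formula $\dim V_\mu = n!/\prod h$ can be evaluated mechanically. The following Python sketch is ours, not part of the paper, and the helper names are invented for illustration.

```python
from math import factorial

def hook_lengths(shape):
    """Hook lengths of the Young diagram of a partition, row by row."""
    # conj[c] = length of column c, i.e. the conjugate partition
    conj = [sum(1 for r in shape if r > c) for c in range(shape[0])]
    # hook of box (i, j) = arm + leg + 1
    return [[(shape[i] - j) + (conj[j] - i) - 1 for j in range(shape[i])]
            for i in range(len(shape))]

def dim_V(shape):
    """Hook length formula: dim V_mu = n! / (product of all hook lengths)."""
    n, H = sum(shape), 1
    for row in hook_lengths(shape):
        for h in row:
            H *= h
    return factorial(n) // H
```

For the five partitions of $4$ this reproduces the dimensions $1,3,2,3,1$ of the table below, and their squares sum to $4!=24$, the dimension of the regular representation.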
As shown in Corollary \ref{cor:reg}, this forces $G(R_4/K)$ to contain the regular representation as a proper subrepresentation. The $n=4$ case is summarized in the table below. \[ \begin{array}{l | c | l | c | c | c} \mu & \dim V_\mu & \lambda\textrm{\ s.t.\ } \lambda_1=\mu_1\textrm{\ and\ } \mu\triangleright\lambda & m_\lambda & K_{\lambda\mu} & m_\lambda K_{\lambda\mu}\\ \hline & & & & & \\[-.022in] \yng(4) & 1 & \hspace{.5in}\yng(4) & 1 & 1 & 1\\[.05in] \yng(3,1) & 3 & \hspace{.5in}\yng(3,1) & 3 & 1 & 3\\[.05in] \yng(2,2) & 2 & \hspace{.5in}\yng(2,2) & 3 & 1 & 3\\[.05in] & & \hspace{.5in}\yng(2,1,1) & 3 & 1 &3 \\[.05in] \yng(2,1,1) & 3 & \hspace{.5in}\yng(2,1,1) & 3 & 1 & 3\\[.05in] \yng(1,1,1,1) & 1 & \hspace{.5in}\yng(1,1,1,1) & 1 & 1 & 1 \end{array} \] We see then from Theorem \ref{thm:main} that $G(R_4/K)$ contains exactly $\dim V_\mu$ copies of $V_\mu$ for every partition $\mu$ of $4$ other than $\mu=(2,2)$. We see, however, that $G(R_4/K)$ contains $6$ copies of $V_{(2,2)}$. It follows that $G(R_4/K)$ is the direct sum of the regular representation and $4$ copies of $V_{(2,2)}$. Since $V_{(2,2)}$ is 2-dimensional, we see $G(R_4/K)$ has dimension $24+8=32$. Let us now reconcile the decomposition of $G(R_4/K)$ given by Theorem \ref{thm:main} with the explicit decomposition given in Section \ref{degexample}. We make no assumption here on the characteristic of $K$. Recall that $T(x)$ has generators $x_i$ for $1\leq i\leq 4$ satisfying the relation $\sum x_i=0$ and that $\sigma\in S_4$ acts by $\sigma(x_i)=x_{\sigma(i)}$. We see then that $T(x)$ is the standard representation; that is, $T(x)\cong V_{(3,1)}$. Recall that $U(x)$ is a two-dimensional vector space generated by the equivalence classes of \[ x_1y_2+x_2y_1+x_3y_4+x_4y_3 \quad\textrm{and}\quad x_1y_3+x_3y_1+x_2y_4+x_4y_2 \] with $S_4$-action given by $\sigma(x_i)=x_{\sigma(i)}$ and $\sigma(y_i)=y_{\sigma(i)}$.
Letting $H$ be the subgroup of $S_4$ generated by $(12)(34)$ and $(13)(24)$, we see that $U(x)$ is the $S_4$-representation obtained from the quotient $S_4\rightarrow S_4/H\cong S_3$ and the standard representation of $S_3$. Hence, $U(x)$ is $V_{(2,2)}$. It is clear that $W(x,y,z)$ is the sign representation $V_{(1,1,1,1)}$. Lastly, the composition factors of $V(x,y)$ are $V_{(2,2)}$ and $V_{(2,1,1)}$, each occurring with multiplicity 1. This follows, for example, from an explicit computation using Brauer characters (see \cite[Chpt 7 Def 2.7]{modular}). We see then that $G(R_4/K)$ has the same composition factors as \[ V_{(4)} \oplus V_{(3,1)}^{\oplus 3} \oplus V_{(2,2)}^{\oplus 6} \oplus V_{(2,1,1)}^{\oplus 3} \oplus V_{(1,1,1,1)}; \] that is, if we weaken Theorem \ref{thm:main} to only require that the two $S_n$-modules have the same composition factors, then it holds in arbitrary characteristic for $n\leq 4$. \section{Open questions}\label{sec:open} There are several questions about the $S_n$-closures that have not been treated in this article, which beg for further investigation. First, we have the natural question: \begin{question} \label{q:geom} \emph{ Is there a geometric definition of the $S_n$-closure? } \end{question} The definition we have given in the introduction is rather algebraic. A more geometric definition would perhaps make the functoriality of the $S_n$-closure construction more apparent. Second, we have only proven Theorem \ref{thm:main} in the case where the field $K$ has characteristic prime to $n!$\,. However, we saw in Section~\ref{degexample} that even when $K$ has characteristic 2 or 3, the dimension of $G(R_4/K)$ remains 32, which is precisely what Theorem~\ref{thm:main} would imply in good characteristic. In Section~\ref{sec:ex}, we saw in fact that $G(R_4/K)$ possesses the same composition factors in any characteristic. Does the analogous statement hold for $G(R_n/K)$ for higher values of $n$? 
\begin{question} \label{q:char} \emph{ Is it true, for a field $K$ of arbitrary characteristic, that \[ G(R_n/K) \,\,\,\,\,\,\mbox{ and }\,\,\,\,\, \bigoplus_{\substack{\mu\triangleright\lambda\\ \mu_1=\lambda_1}}m_\lambda K_{\lambda\mu}V_\mu \] possess the same composition factors? } \end{question} We have shown that the $S_n$-closure of an algebra $A$ of rank $n$ over a field $K$ has dimension $n!$ in many natural cases, and that in general this dimension is bounded above by $\dim_K(G(R_n/K))$. What about a lower bound? One would guess that the rank could never go {\it below} $n!$, although this does not seem trivial to prove. \begin{question} \label{q:etale} \emph{If $A$ is a ring of rank $n$ over a field $K$, then is the rank of $G(A/K)$ {\it at least} $n!$? } \end{question} While we do not know the answer to this question in general, we show below that the answer is ``yes'' provided that $n$ is small and the characteristic of $K$ is not 2 or 3: \begin{proposition} \label{prop:etale7} If $n\leq7$, and $A$ is a ring of rank $n$ over a field $K$ having characteristic not $2$ or $3$, then $G(A/K)$ has rank at least $n!$\,. \end{proposition} \begin{proof} By \cite[Cor 6.7]{moduli} and the fact that $\mathfrak{B}_{n,K}$ is irreducible (\cite[Thm.~1.1]{CEVV}, which assumes $K$ does not have characteristic $2$ or $3$), we see that the \'etale locus is dense in $\mathfrak{B}_{n,K}$. Theorem~\ref{etalecase} shows that if $A$ is \'etale over $K$, then the rank of $G(A/K)$ is $n!$. Therefore, an upper semi-continuity argument, similar to the one given in Theorem \ref{maxrank}, finishes the proof. \end{proof} The argument of Proposition \ref{prop:etale7} does not extend to higher values of $n$ because it is known that the \'etale locus is {\it not} dense in $\mathfrak{B}_{n,K}$ for $n\geq 8$; see \cite[Prop.\ 9.6]{moduli}. Another question stems from the following.
In the Galois theory of fields, one often constructs Galois closures through certain natural intermediate extensions. Namely, suppose $L=K[x]/f(x)$ is a separable field extension of degree $n$ with associated Galois group $S_n$, and $\tilde L$ is the splitting field of $f$ (and thus the Galois closure of $L$ over $K$). Then $f$ has a root $\alpha_1$ in $L$, and $f$ has $n$ roots $\alpha_1,\ldots,\alpha_n$ in the splitting field $\tilde L$. We may thus construct $\tilde L$ through a tower of extensions $$L\,=\,L^{(1)}\,\,\subset\,\, L^{(2)}\,\,\subset\,\,\, \cdots \,\,\,\subset\,\, L^{(n)}=\,\tilde L$$ where $L^{(r)}:=L(\alpha_1,\ldots,\alpha_r)$ has degree $n(n-1)\cdots(n-r+1)$ over $K$. The fields $L^{(r)}$ are well-defined up to isomorphism and independent of the ordering of the roots $\alpha_1,\ldots,\alpha_r$ of~$f$. \begin{question}{\em Let $A$ be a ring of rank $n$ over $B$. Is there a functorial construction of ``intermediate $S_n$-closures'' $$A=G^{(1)}(A/B),\,\,\,\,\, G^{(2)}(A/B),\,\,\,\, \,\ldots\,\,\,, \,\,\,\,\, G^{(n)}(A/B)=G(A/B),$$ such that in the case of an $S_n$-extension of fields $L/K$ of degree $n$, we have $G^{(r)}(L/K)\cong L^{(r)}$?} \end{question} \noindent A natural method to proceed would be to construct $G^{(r)}(A/B)$ as a quotient of $A^{\otimes r}$ by an appropriate ideal $I^{(r)}(A,B)$, where $I^{(n)}(A,B)$ coincides with $I(A,B)\subset A^{\otimes n}$. Finally, it is natural to ask whether Galois-type closures can be obtained for groups other than $S_n$. If $G$ is a transitive permutation group on $n$ elements, there should be an analogous way to define a ``$G$-closure'' of a rank $n$ ring. \begin{question}{\em If $G$ is a finite group, what is the natural class of rings for which functorial $G$-closures can be defined?} \end{question} \subsection*{Acknowledgments} We wish to thank J.\ Blasiak, B.\ de Smit, J.\ Ellenberg, D.\ Erman, K.\ Kedlaya, H.\ Lenstra, B.\ Poonen, S.
Sun, J-P.\ Serre, C.\ Skinner, N.\ Snyder, B.\ Viray, and M.\ Wood for numerous valuable conversations, comments, and suggestions that helped shape this article.
https://arxiv.org/abs/1509.07457
A simplicial complex is uniquely determined by its set of discrete Morse functions
We prove that a connected simplicial complex is uniquely determined by its complex of discrete Morse functions. This settles a question raised by Chari and Joswig. In the 1-dimensional case, this implies that the complex of rooted forests of a connected graph G completely determines G.
\section{Introduction} The \emph{complex of discrete Morse functions} $\mathfrak{M}(K)$ of a finite simplicial complex $K$ was introduced by Chari and Joswig in \cite{ChJo} to study the topology of simplicial complexes in terms of their sets of discrete deformations. Despite the potential utility of this complex, very little was known about the relationship between $K$ and $\mathfrak{M}(K)$. Chari and Joswig studied some properties of the complexes associated to graphs and simplices and computed the homotopy type of the complex associated to the $2$-simplex. Their work was shortly followed by Ayala, Fern\'andez, Quintero and Vilches, who described the structure of the \emph{pure Morse complex} of a graph $G$, i.e. the subcomplex of $\mathfrak{M}(G)$ generated by the simplices of maximal dimension \cite{AyFeQuVi1}. As pointed out in \cite{ChJo}, the construction of $\mathfrak{M}(K)$ in the context of graphs was already implicit in the work of Kozlov \cite{Koz1}, who studied complexes arising from directed sub-trees of a given (directed) graph. Kozlov proved shellability of the complexes associated to complete graphs and computed the homotopy type of the complexes associated to paths and cycles. The aim of this article is to settle the connection between a simplicial complex and its complex of discrete Morse functions. We show that $K$ is completely determined by $\mathfrak{M}(K)$. Concretely, our main result is the following. \begin{teoA}\label{Theorem:MainForComplexes} Let $K,L$ be finite connected simplicial complexes. If $\mathfrak{M}(K)$ is isomorphic to $\mathfrak{M}(L)$ then $K$ is isomorphic to $L$.\end{teoA} For the $1$-dimensional case, we prove that Theorem A also holds for multigraphs. \begin{teoB}\label{Theorem:MainForMultigraphs} Let $G,G'$ be finite connected multigraphs. 
If $\mathfrak{M}(G)$ is isomorphic to $\mathfrak{M}(G')$ then $G$ is isomorphic to $G'$.\end{teoB} We also exhibit an example which shows that the homotopy type of $\mathfrak{M}(K)$ does not determine the homotopy type of $K$. The results in this article provide the complete answers to the foundational questions about $\mathfrak{M}(K)$ raised by Chari and Joswig in \cite{ChJo}. \section{The complex of discrete Morse functions} All simplicial complexes that we deal with are assumed to be finite. We write $\sigma\prec\tau$ if the simplex $\sigma$ is an immediate face of $\tau$ (i.e. a maximal proper face) and we let $V_K$ denote the set of vertices of a complex $K$. We denote by $\Delta^n$ the standard complex consisting of all the faces of an $n$-simplex, and by $\partial\Delta^n$ its boundary (i.e. the complex of all the proper faces of the simplex). A \emph{discrete Morse function} $f$ on an abstract simplicial complex $K$ is a map $f:K\rightarrow \mathbb{R}$ satisfying, for every $\sigma\in K$,\begin{enumerate} \item\label{item:Property1DefinitionMorseFunction} $|\{\tau\succ\sigma\,|\,f(\tau)\leq f(\sigma)\}|\leq 1$ and \item\label{item:Property2DefinitionMorseFunction} $|\{\nu\prec\sigma\,|\,f(\nu)\geq f(\sigma)\}|\leq 1$. \end{enumerate} Here $|A|$ denotes the cardinality of the set $A$. A simplex $\sigma$ such that both of these numbers are zero is called \emph{critical}. If $f(\eta)\geq f(\rho)$ for some $\eta\prec\rho$ then the pair $(\eta,\rho)$ is called a \emph{regular pair}. One can easily see that every simplex in $K$ is either critical or belongs to a unique regular pair (see \cite{For1, For2} for more details). If $(\sigma,\tau)$ is a regular pair, we call $\sigma$ the \emph{source simplex} of the pair, and write $s(\sigma,\tau)=\sigma$, and we call $\tau$ the \emph{target simplex} of the pair, and write $t(\sigma,\tau)=\tau$.
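The two defining conditions above are straightforward to verify by brute force on a small complex. The following Python sketch is ours, not part of the paper: it represents a simplex as a frozenset of vertices and checks both conditions for a candidate function $f$.

```python
def is_discrete_morse(K, f):
    """Check the two conditions defining a discrete Morse function.

    K is a list of simplices given as frozensets of vertices; f is a dict
    assigning a real number to each simplex.  A simplex sigma is an
    immediate face of tau iff sigma < tau and |tau| = |sigma| + 1.
    """
    for s in K:
        cofaces = [t for t in K if s < t and len(t) == len(s) + 1]
        faces = [n for n in K if n < s and len(n) == len(s) - 1]
        # condition (1): at most one coface with a value <= f(s)
        if sum(1 for t in cofaces if f[t] <= f[s]) > 1:
            return False
        # condition (2): at most one face with a value >= f(s)
        if sum(1 for n in faces if f[n] >= f[s]) > 1:
            return False
    return True
```

For the closed $1$-simplex with simplices $\{a\},\{b\},\{a,b\}$, assigning the values $0,1,1$ gives a discrete Morse function with the single regular pair $(\{b\},\{a,b\})$, while assigning $2,1,0$ violates condition (2) at the edge.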
Typically, a regular pair $(\sigma,\tau)$ is depicted graphically as an arrow from $\sigma$ to $\tau$ (see Figure \ref{Figure:ExamplesOfArrowsInFMD}). \begin{figure}[h] \centering \includegraphics[scale=0.7]{ejemploflechas} \caption{Graphical representation of regular pairs.} \label{Figure:ExamplesOfArrowsInFMD} \end{figure} The index of a regular pair $(\sigma,\tau)$ is the dimension of $\sigma$. A regular pair of index $k$ will be sometimes denoted by $(\sigma^k,\tau^{k+1})$. Given two discrete Morse functions $f,g$ on $K$ we write $f\lesssim g$ if every regular pair of $f$ is also a regular pair of $g$. Following \cite{ChJo}, if $f\lesssim g$ and $g\lesssim f$ (i.e. both functions have the same regular pairs) then we say that they are \emph{equivalent}. We will make no distinction between equivalent Morse functions, i.e. we will work with classes of discrete Morse functions under this equivalence relation. A discrete Morse function with exactly one regular pair is called a \emph{primitive Morse function}. We will often identify a primitive Morse function with its sole regular pair. A collection $f_0,\ldots,f_r$ of primitive Morse functions is said to be \emph{compatible} if there exists a discrete Morse function $f$ on $K$ with $f_i\lesssim f$ for every $i=0,\ldots,r$. The \emph{complex of discrete Morse functions} of $K$ is the simplicial complex $\mathfrak{M}(K)$ whose vertices are the primitive Morse functions on $K$ and whose $r$-simplices are the discrete Morse functions with $r+1$ regular pairs. We identify in this way a discrete Morse function $f$ with the set $\{f_0,\ldots,f_r\}$ of all primitive Morse functions satisfying $f_i\lesssim f$ (i.e. the set of its regular pairs). $\mathfrak{M}(K)$ is also called the \emph{discrete Morse complex} of $K$. Figure \ref{Figure:ExamplesOfComplexOfFMD} shows some low-dimensional examples of discrete Morse complexes. 
\begin{figure}[h] \centering \includegraphics[scale=0.6]{ejemplocomplexFMD} \caption{Examples of complexes of discrete Morse functions.} \label{Figure:ExamplesOfComplexOfFMD} \end{figure} There is an alternative approach to discrete Morse theory due to Chari \cite{Cha} where the deformations are encoded in terms of acyclic matchings in the Hasse diagram of the face poset of the simplicial complex. It is not hard to see that the pairing of simplices which form regular pairs of a discrete Morse function determines a matching in the Hasse diagram $\mathcal{H}_K$ of $K$. If the arrows in this matching are reversed, it can be easily shown that the resulting directed graph is acyclic. On the other hand, from an acyclic matching on the Hasse diagram of a simplicial complex one can build a discrete Morse function $f$ on $K$ where the regular pairs of $f$ are precisely the edges of the matching. From this viewpoint, $\mathfrak{M}(K)$ is the simplicial complex on the edges of the Hasse diagram of $K$ whose simplices are the subsets of edges which form acyclic matchings. \section{The complexes associated to graphs} The complex of discrete Morse functions has been studied almost exclusively for graphs, as the construction of $\mathfrak{M}(K)$ for a general $K$ is rather complicated (see for example \cite{AyFeQuVi1,ChJo}). We focus first on this case and settle the main result for $1$-dimensional regular CW-complexes (Theorem B). Recall that a \emph{multigraph} $G$ is a triple $(V_G,E_G,f_G)$ where $V_G$ is a (finite) set of vertices, $E_G$ is a set of edges and $f_G:E_G\rightarrow\{\{u,v\}\,:\,u,v\in V_G\text{ and }u\neq v\}$ is a map which assigns to each edge its boundary vertices. If $f_G(e)=f_G(e')$ for $e,e'\in E_G$, we say that $e,e'$ are \emph{parallel edges}. For $v,v'\in V_G$, $E_G(v,v')$ will stand for the set of parallel edges between $v$ and $v'$. Note that, by definition, a multigraph has no loops. 
\emph{Simple} graphs correspond to multigraphs $G$ where $f_G$ is injective. In this case we shall identify an edge with its boundary vertices and write $e=vw$ if $f_G(e)=\{v,w\}$. Note that simple graphs are precisely the $1$-dimensional simplicial complexes and multigraphs are precisely the $1$-dimensional regular CW-complexes (see \cite{LuWe} for the necessary definitions). The complex of discrete Morse functions of a graph was first studied by Kozlov \cite{Koz1} in a different context. Given a directed graph $G$, Kozlov defined the simplicial complex $\Delta(G)$ whose vertices are the edges of $G$ and whose faces are all directed forests which are subgraphs of $G$. In \cite{Koz1} he studied the shellability of the complex associated to the complete double-directed graph on $n$ vertices (a graph having exactly one edge in each direction between any pair of vertices) and computed the homotopy type of the complexes associated to the double-directed $n$-cycle and the double-directed $n$-path. It is not hard to see that for any (undirected) graph $G$, the identity $\mathfrak{M}(G)=\Delta(d(G))$ holds, where $d(G)$ is the directed graph on the vertices of $G$ with one edge in each direction between adjacent vertices of $G$. The aforementioned examples studied by Kozlov correspond respectively to the complex of Morse functions of the complete graph, the $n$-cycle and the $n$-path. Complexes of directed graphs have been widely studied (see for example \cite{BjWe,Eng,Joj,Koz1}) and some results of this theory were used in Babson and Kozlov's proof of the Lov\'asz conjecture (see \cite{BaKo}). In this section we prove Theorem B, which is the special case of Theorem A for regular $1$-dimensional CW-complexes. The definition of the complex of Morse functions for regular CW-complexes is identical to the simplicial case.
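To make the directed-forest description concrete, the following brute-force Python sketch (entirely ours, and feasible only for very small graphs) enumerates the simplices of $\mathfrak{M}(G)$ for a simple graph $G$, encoding each primitive Morse function as a pair $(v,e)$ with $v$ an endpoint of the edge $e$.

```python
from itertools import combinations

def morse_complex(edges):
    """Simplices of M(G) for a simple graph G given by a list of edges.

    A set of pairs (v, e) is a simplex iff the vertices are distinct, the
    edges are distinct, and directing each e away from its v produces no
    directed cycle (no closed V-path).
    """
    prims = [(v, e) for e in edges for v in e]

    def acyclic(pairs):
        # each matched vertex points to the other endpoint of its edge
        succ = {v: (e[0] if e[1] == v else e[1]) for v, e in pairs}
        for start in succ:
            seen, cur = set(), start
            while cur in succ:
                if cur in seen:
                    return False
                seen.add(cur)
                cur = succ[cur]
        return True

    simplices, r = [], 1
    while True:
        layer = [s for s in combinations(prims, r)
                 if len({v for v, _ in s}) == r
                 and len({e for _, e in s}) == r
                 and acyclic(s)]
        if not layer:
            return prims, simplices
        simplices += layer
        r += 1
```

For the triangle $C_3$ this finds $6$ vertices and $9$ edges and nothing in higher dimension; the only vertex-distinct, edge-distinct sets it rejects are the two cyclic orientations of the triangle.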
In particular, for a multigraph $G$, $\mathfrak{M}(G)$ can be viewed as the simplicial complex with one vertex for each directed edge in $G$ and whose simplices are the collections of directed edges which do not form directed cycles. We first establish the result for simple graphs (i.e. the $1$-dimensional case of Theorem A) and then extend it to general multigraphs. We begin by collecting some basic facts about the discrete Morse complex of simple graphs. Given two simplicial complexes $K, L$, we denote $K\equiv L$ if they are isomorphic. \begin{lema} Let $G$ be a connected simple graph. Then,\begin{enumerate} \item $|V_{\mathfrak{M}(G)}|=2|E_G|$. \item $\dim(\mathfrak{M}(G))=|V_G|-2$. \end{enumerate}\end{lema} \begin{proof} The first assertion is immediate, since every edge $e=vw$ of $G$ gives rise to exactly two primitive Morse functions, $(v,e)$ and $(w,e)$. For the second, if $G$ is a tree then it is collapsible and there exists a discrete Morse function $f\in\mathfrak{M}(G)$ for which all the edges of $G$ are regular (see \cite[Lemma 4.3]{For1}). Hence, $\dim(\mathfrak{M}(G))=|E_G|-1=|V_G|-2$. For the general case, proceed by induction on $n=|E_G|$. If $G$ is not a tree, let $f\in\mathfrak{M}(G)$ be of maximal dimension and let $e_0,\ldots,e_r$ be a cycle in $G$. There must be an edge $e_i$ which is not regular for $f$ (see \cite[Theorem 9.3]{For1}). Let $G'=G-\{e_i\}$. The graph $G'$ is still connected because $e_i$ is in a cycle, $|E_{G'}|=|E_G|-1$ and, by induction, $\dim(\mathfrak{M}(G'))=|V_{G'}|-2=|V_G|-2$. Since $f\in\mathfrak{M}(G')$ and $\dim(\mathfrak{M}(G'))\leq\dim(\mathfrak{M}(G))$, it follows that $\dim(\mathfrak{M}(G))=|V_G|-2$.\end{proof} \begin{coro} If $G,G'$ are connected simple graphs such that $\mathfrak{M}(G)\equiv\mathfrak{M}(G')$ then $|V_G|=|V_{G'}|$ and $|E_G|=|E_{G'}|$.
In particular, their fundamental groups $\pi_1(G)$ and $\pi_1(G')$ are isomorphic.\end{coro} \begin{obs}\label{Obs:compatibility} It is easy to check that a vertex $v\in G$, incident to an edge $e$, is a leaf if and only if the vertex $(v,e)\in V_{\mathfrak{M}(G)}$ is compatible with every other $(u,e')\in V_{\mathfrak{M}(G)}$ with the unique exception of $(w,e)$, where $w$ is the other vertex of the edge $e$. This happens if and only if $\deg(v,e)=2|E_G|-2$, where $\deg(v,e)$ is the degree of the vertex $(v,e)$ in the $1$-skeleton $\mathfrak{M}(G)^{(1)}$ (i.e. the subcomplex of $\mathfrak{M}(G)$ consisting of the simplices of dimension $\leq 1$). In particular, if $\mathfrak{M}(G)\equiv\mathfrak{M}(G')$ then $G$ and $G'$ have the same number of leaves.\end{obs} Let $C_n$ denote the simple cycle with $n$ vertices. \begin{coro}\label{Corollary:MainForCycles} Let $G,G'$ be two connected simple graphs. If $\mathfrak{M}(G)\equiv\mathfrak{M}(G')$ and $G=C_n$ then $G'=C_n$.\end{coro} \begin{proof} By the preceding corollary, $|V_G|=|V_{G'}|$ and $|E_G|=|E_{G'}|$. Since $G=C_n$ then $|V_G|=|E_G|$ and therefore $|V_{G'}|=|E_{G'}|$. Also, since $G$ has no leaves, $G'$ has no leaves by Remark \ref{Obs:compatibility}. Therefore, $G'=C_n$.\end{proof} In order to prove the main results of this paper we will analyze compatibility of regular pairs, much as we did in Remark \ref{Obs:compatibility}. From now on, we write $(\sigma,\tau)\sim (\eta,\rho)$ if $(\sigma,\tau)$ and $(\eta,\rho)$ are compatible as primitive Morse functions (i.e. if they form a simplex in $\mathfrak{M}(K)$), and $(\sigma,\tau)\nsim (\eta,\rho)$ whenever they are not. \begin{teo}\label{Theorem:MainForSimpleGraphs} Let $G,G'\neq C_n$ be connected simple graphs and let $F:\mathfrak{M}(G)\rightarrow\mathfrak{M}(G')$ be a simplicial isomorphism. Define a mapping $f:G\rightarrow G'$ by $f(v)=s(F(v,e))$, where $e$ is any edge incident to $v$.
Then $f$ is a well-defined simplicial isomorphism.\end{teo} \begin{proof} The key part of the proof is to see that $f$ is well-defined, i.e. that $f(v)$ does not depend on the choice of the incident edge $e$. Suppose otherwise and let $(v,e_0),(v,e_1)\in V_{\mathfrak{M}(G)}$ be such that $F(v,e_0)=(w,a)$ and $F(v,e_1)=(w',b)$ with $w\neq w'$. Since $(v,e_0)\nsim (v,e_1)$ then $(w,a)\nsim(w',b)$ and hence $a=b$ (see Figure \ref{Figure:InitialSituation}). \begin{figure}[h] \centering \includegraphics[scale=0.7]{demografos1} \caption{} \label{Figure:InitialSituation} \end{figure} We claim that, in this situation, such a vertex $v$ of $G$ can be chosen with degree at least $3$. This will lead to a contradiction since an edge containing $v$ different from $e_0$ and $e_1$ provides a primitive Morse function on $G$ which is incompatible with both $(v,e_0)$ and $(v,e_1)$, while the simplicity of $G'$ implies that there is no possible primitive Morse function on $G'$ incompatible with both $(w,a)$ and $(w',a)$. To prove this claim, let $e_1=vv'$ and consider the primitive Morse function $(v',e_1)$. Since $(w',a)=F(v,e_1)\nsim F(v',e_1)$ and $F$ is an isomorphism then there exists an edge $c=w'w''\in G'$ such that $F(v',e_1)=(w',c)$. Consider now $(w'',c)\in\mathfrak{M}(G')$. Using a similar argument for $F^{-1}$ and $(w'',c)$ one can find an edge $e_2\neq e_0,e_1$ such that $F^{-1}(w'',c)=(v',e_2)$ (see Figure \ref{Figure:InductiveSituation}). \begin{figure}[h] \centering \includegraphics[scale=0.7]{demografos2} \caption{} \label{Figure:InductiveSituation} \end{figure} Note that the primitive Morse functions $(v',e_1), (v',e_2)$ satisfy the same hypotheses as $(v,e_0),(v,e_1)$ (but replacing $(w,a),(w',a)$ with $(w',c),(w'',c)$ respectively).
Repeating this argument we obtain a path $e_1,e_2,e_3,\ldots$ where, for any vertex $v\in e_i\cap e_{i+1}$, $(v,e_i),(v,e_{i+1})$ are mapped to primitive Morse functions on $G'$ of the form $(u,d),(u',d)$ with $u\neq u'$. By finiteness, this path must form a cycle $C=\{e_j,e_{j+1},\ldots,e_{j+k-1},e_{j+k}=e_j\}$ for some $j,k$. If $j=0$, and since $G$ is not a cycle, there is by connectedness an edge $e\notin C$ intersecting $C$. In this case, $x= e\cap C$ is the desired vertex (see Figure \ref{Figure:CycleSituation} ($a$)). If $j>0$ then the vertex $y=e_{j-1}\cap e_j$ is the desired vertex (see Figure \ref{Figure:CycleSituation} ($b$)). This proves that $f$ is well-defined. \begin{figure}[h] \centering \includegraphics[scale=0.7]{demografos3} \caption{} \label{Figure:CycleSituation} \end{figure} We show now that $f$ is a simplicial morphism. Consider an edge $e=vv'\in G$. We must see that $f(v)f(v')\in G'$. Since $(v,e)\nsim(v',e)$ then $F(v,e)\nsim F(v',e)$. Therefore, either $s(F(v,e))=s(F(v',e))$ or $t(F(v,e))=t(F(v',e))$. In the first case, the same reasoning as above applied to $h=s\circ F^{-1}:G'\rightarrow G$ gives a contradiction. Therefore, $t(F(v,e))=t(F(v',e))$ and, in particular, $f(v)f(v')\in t(F(v,e))$ is an edge in $G'$. Finally, it is easy to see that $f^{-1}=s\circ F^{-1}$ is the inverse of $f$.\end{proof} \begin{coro}\label{Corollary:MainTheoremForSimpleGraphs} Let $G,G'$ be connected simple graphs. If $\mathfrak{M}(G)\equiv\mathfrak{M}(G')$ then $G\equiv G'$.\end{coro} \begin{proof} Follows from Corollary \ref{Corollary:MainForCycles} and Theorem \ref{Theorem:MainForSimpleGraphs}.\end{proof} We now extend the result to multigraphs. Two primitive Morse functions $(v,e),(v',e')\in\mathfrak{M}(G)$ are said to be \emph{parallel} if $v=v'$ and $e$ is parallel to $e'$ in $G$. Recall that the \emph{link} of a simplex $\sigma\in K$ is the subcomplex $lk(\sigma,K)=\{\tau\in K:\ \tau\cap\sigma=\emptyset,\ \tau\cup\sigma\in K\}$. 
\begin{lema}\label{Lemma:ParallelCharacterization} Let $G$ be a connected multigraph with more than two vertices. Then two primitive Morse functions $(v,e),(v',e')$ are parallel in $\mathfrak{M}(G)$ if and only if $(v,e)\nsim (v',e')$ and $lk((v,e),\mathfrak{M}(G))=lk((v',e'),\mathfrak{M}(G))$.\end{lema} \begin{proof} Suppose first that $(v,e)\nsim (v',e')$ and $lk((v,e),\mathfrak{M}(G))=lk((v',e'),\mathfrak{M}(G))$. If $(v,e)$ and $(v',e')$ are not parallel in $\mathfrak{M}(G)$, then there are only three possibilities for the edges $e$ and $e'$ in $G$, which are shown in Figure \ref{Figure:PossibilitiesArise}. \begin{figure}[H] \centering \includegraphics[scale=0.7]{grafos1} \caption{} \label{Figure:PossibilitiesArise} \end{figure} Since $|V_G|\geq 3$ and $G$ is connected, in each of the three cases $G$ locally looks as in Figure \ref{Figure:PossibilitiesContradicted}. \begin{figure}[H] \centering \includegraphics[scale=0.7]{grafos2} \caption{} \label{Figure:PossibilitiesContradicted} \end{figure} This contradicts the fact that $lk((v,e),\mathfrak{M}(G))=lk((v',e'),\mathfrak{M}(G))$. The other implication is trivial.\end{proof} Given a simplicial complex $K$, we define an equivalence relation $\mathcal{R}$ on $V_K$ as follows: $$v\mathcal{R} w\Leftrightarrow v=w \text{ or } \bigl(\{v,w\}\notin K\text{ and }lk(v,K)=lk(w,K)\bigr).$$ Let $\widetilde{K}$ be the simplicial complex whose vertices are the equivalence classes of vertices of $K$ and whose simplices are the sets $\{\tilde{v}_0,\ldots,\tilde{v}_r\}$ such that $\{v_0,\ldots,v_r\}\in K$. Here $\tilde v$ denotes the equivalence class of the vertex $v$. Note that $\widetilde{K}$ is well-defined since, if $v_i\mathcal{R} v_i'$, then $\{v_0,\ldots,v_i,\ldots,v_r\}\in K$ if and only if $\{v_0,\ldots,v_i',\ldots,v_r\}\in K$. \begin{prop}\label{Proposition:QuotientIsomorphism} Let $K, L$ be simplicial complexes and let $\tilde K$ and $\tilde L$ be as above. 
If $f:K\rightarrow L$ is a simplicial isomorphism then the map $\tilde{f}:\widetilde{K}\rightarrow\widetilde{L}$ given by $\tilde{f}(\tilde{v})=\widetilde{f(v)}$ is a simplicial isomorphism.\end{prop} \begin{proof} We prove first that $\tilde f$ is well-defined. Suppose $v\mathcal{R} v'$ with $v\neq v'$. Since $\{v,v'\}\notin K$ and $f$ is an isomorphism, we have $\{f(v),f(v')\}\notin L$. Also, if $\{f(v)\}\cup \sigma\in L$ then $\{v\}\cup f^{-1}(\sigma)\in K$, which implies that $\{v'\}\cup f^{-1}(\sigma)\in K$. Therefore $\{f(v')\}\cup\sigma \in L$. Finally, $\tilde{f}$ is an isomorphism since $\tilde{f}^{-1}=\widetilde{f^{-1}}$.\end{proof} \begin{defi} For a multigraph $G$ we define the \emph{simplification} of $G$, denoted by $sG$, as the simple graph obtained from $G$ by identifying parallel edges.\end{defi} \begin{obs}\label{Remark:SimplificationGraphs} By Lemma \ref{Lemma:ParallelCharacterization} one can check that the map $f:\widetilde{\mathfrak{M}(G)}\to \mathfrak{M}(sG)$ defined by $f(\widetilde{(v,e)})=(v,\overline{e})$ is a well-defined isomorphism. Here $\overline{e}$ is the image of the edge $e$ in $sG$. \end{obs} \begin{proof}[Proof of Theorem B] Let $F:\mathfrak{M}(G)\rightarrow\mathfrak{M}(G')$ be an isomorphism. By Proposition \ref{Proposition:QuotientIsomorphism} and Remark \ref{Remark:SimplificationGraphs}, $F$ induces an isomorphism $\mathfrak{M}(sG)\rightarrow\mathfrak{M}(sG')$, which we also denote by $F$. By Theorem \ref{Theorem:MainForSimpleGraphs} there is an isomorphism $f:sG\rightarrow sG'$ sending a vertex $v$ to $s(F(v,e))$ for any edge $e$ incident to $v$. Then, in order to see that $G$ and $G'$ are isomorphic, we only need to check that $|E_G(v,w)|=|E_{G'}(f(v),f(w))|$ for any pair of vertices $v,w$ of $G$. We may suppose that $|E_G(v,w)|\neq 0$ and choose some $e\in E_G(v,w)$. Then $(v,e)\in\mathfrak{M}(G)$; let $e'=t(F(v,e))\in E_{G'}(f(v),f(w))$. 
Note that the set $E_G(v,w)$ is in bijection with the set $\{(v,a)\in\mathfrak{M}(G)\ :\ (v,a)\nsim (w,e)\}$. Similarly, $E_{G'}(f(v),f(w))$ is in bijection with $\{(f(v),a')\in\mathfrak{M}(G')\ :\ (f(v),a')\nsim (f(w),e')\}$. Since $F$ is an isomorphism, both sets have the same cardinality. \end{proof} Chari and Joswig asked in \cite{ChJo} whether there is any connection between the homotopy types of $K$ and $\mathfrak{M}(K)$. They implicitly showed that the homotopy type of $K$ does not determine the homotopy type of $\mathfrak{M}(K)$. For instance, by \cite[Proposition 5.1]{ChJo} the complex of Morse functions associated to the 1-simplex is homotopy equivalent to $S^0$ and the one associated to the 2-simplex is homotopy equivalent to $S^1\vee S^1\vee S^1\vee S^1$. The following example shows that the homotopy type of $\mathfrak{M}(K)$ does not determine the homotopy type of $K$ either. \begin{ej} Consider the following simple graphs. The graph $G$ has three vertices $u,v,w$ and two edges $uv,uw$. The graph $G'$ has four vertices $a,b,c,d$ and four edges $ab,bc,ac,ad$. Note that they are not homotopy equivalent, while their associated complexes of Morse functions are both contractible.\end{ej} \section{Proof of the main result} We now extend the result of Corollary \ref{Corollary:MainTheoremForSimpleGraphs} to simplicial complexes of any dimension. The idea behind the proof is that, in ``almost all'' cases, a simplicial isomorphism $F:\mathfrak{M}(K)\rightarrow\mathfrak{M}(L)$ restricts to an isomorphism $F|_{\mathfrak{M}(K^{(1)})}:\mathfrak{M}(K^{(1)})\rightarrow\mathfrak{M}(L^{(1)})$ between the $1$-skeletons, and by Theorem \ref{Theorem:MainForSimpleGraphs} the $1$-skeletons of $K$ and $L$ are then isomorphic. An inductive argument then shows that an isomorphism $\mathfrak{M}(K)\equiv\mathfrak{M}(L)$ forces all skeletons of $K$ and $L$ to be isomorphic. In the following we will use Forman's concept of $V$-path associated to a discrete vector field $V$ over a complex $K$. 
Given a discrete Morse function $f:K\rightarrow\mathbb{R}$, an \emph{$f$-path of index $k$} is a sequence of regular $k$-simplices $\sigma_0,\ldots,\sigma_r\in K$ such that $\sigma_i\neq\sigma_{i+1}$ for all $0\leq i\leq r-1$ and $\sigma_{i+1}\prec \tau_i$, where $\tau_i$ is the target of the regular pair with source $\sigma_i$. This is actually the notion of a \emph{$V_f$-path}, where $V_f$ is the discrete gradient vector field of $f$. The $f$-path is called \emph{closed} if $\sigma_0=\sigma_r$ and \emph{non-stationary} if $\sigma_0\neq\sigma_1$. We shall be exclusively dealing with non-stationary closed $f$-paths, so we will simply refer to them as \emph{$f$-cycles}. Note that an $f$-cycle of index $k$ is equivalent to having an incompatible collection $P=\{(\sigma_0,\tau_0),\ldots,(\sigma_r,\tau_r)\}$ of primitive Morse functions of index $k\geq 0$ such that every proper subset of $P$ is compatible. Equivalently, the full subcomplex of $\mathfrak{M}(K)$ spanned by the vertices $(\sigma_0,\tau_0),\ldots,(\sigma_r,\tau_r)$ is the boundary $\partial\Delta^r$ of an $r$-simplex. Note that an $f$-cycle has at least three primitive Morse functions. One with exactly three primitive Morse functions is said to be \emph{minimal} and two minimal $f$-cycles sharing exactly one regular pair are said to be \emph{adjacent}. From the mutually exclusive nature of properties \eqref{item:Property1DefinitionMorseFunction} and \eqref{item:Property2DefinitionMorseFunction} in page \pageref{item:Property1DefinitionMorseFunction} we see that no collection of regular pairs of a given combinatorial Morse function admits $f$-cycles of any index. Actually, Forman proved that this property characterizes the discrete vector fields that arise from a discrete Morse function (see \cite[Theorem 9.3]{For1}). 
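Forman's characterisation above is algorithmic in nature: a discrete vector field is a gradient field if and only if the directed graph of its $V$-paths is acyclic. The following is our sketch of that check, not code from the paper; simplices are encoded as `frozenset`s of vertices, and the input is assumed to be a matching of regular pairs $(\sigma,\tau)$ with $\sigma$ a codimension-$1$ face of $\tau$.

```python
from itertools import combinations


def is_gradient_field(pairs):
    """Forman's criterion: `pairs` is the gradient vector field of some
    discrete Morse function iff it admits no non-stationary closed V-path.

    Since every simplex on a *closed* V-path is the source of a pair, it
    suffices to test acyclicity of the graph with an edge sigma -> sigma'
    whenever sigma' != sigma is a facet of tau = V(sigma) and sigma' is
    itself the source of a pair."""
    source = {s: t for s, t in pairs}
    succ = {}
    for s, t in pairs:
        facets = [frozenset(c) for c in combinations(sorted(t), len(t) - 1)]
        succ[s] = [f for f in facets if f != s and f in source]

    WHITE, GREY, BLACK = 0, 1, 2
    colour = {s: WHITE for s in source}

    def dfs(u):  # depth-first search; a GREY successor means a closed V-path
        colour[u] = GREY
        for v in succ[u]:
            if colour[v] == GREY or (colour[v] == WHITE and dfs(v)):
                return True
        colour[u] = BLACK
        return False

    return not any(colour[s] == WHITE and dfs(s) for s in source)
```

On the boundary of a triangle, rotating each vertex into the next edge produces a closed $V$-path of index $0$, so that matching is not a gradient field, whereas any two of the three pairs together are.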
\begin{obss}\label{Remarkssss}\mbox{}\begin{enumerate} \item[($i$)] Note that a cycle $e_0,\ldots,e_r$ in the $1$-skeleton of a complex $K$ gives rise to two possible $f$-cycles of index $0$ in $K$: choosing a vertex $v_0$ for $e_0$, one of them is $\{(v_0,e_0),(v_1,e_1),\ldots,(v_r,e_r)\}$ where $v_i\neq v_{i+1}$ for all $i=0,\ldots,r-1$. The other $f$-cycle arises from selecting the other vertex of $e_0$ to be the source of the primitive Morse function. \item[($ii$)] It is easy to see that if $\{(\sigma_1,\tau_1),(\sigma_2,\tau_2),(\sigma_3,\tau_3)\}$ is a minimal $f$-cycle of index $k-1$ then $\{\tau_1,\tau_2,\tau_3\}$ spans a complex with $k+2$ vertices and a complete $1$-skeleton.\end{enumerate}\end{obss} The following result deals with the cases in which an isomorphism $\mathfrak{M}(K)\to \mathfrak{M}(L)$ does not restrict to an isomorphism $\mathfrak{M}(K^{(1)})\to \mathfrak{M}(L^{(1)})$. \begin{prop}\label{Proposition:IsomorphismRestrictsToOneSkeleton} Let $K,L$ be connected simplicial complexes and let $F:\mathfrak{M}(K)\rightarrow\mathfrak{M}(L)$ be a simplicial isomorphism. If there exists a primitive Morse function $(v,e)\in V_{\mathfrak{M}(K)}$ of index $0$ such that $F(v,e)=(\sigma^{n-1},\tau^n)$ with $n\geq 2$, then $K=L=\partial\Delta^m$ for some $m\geq 2$.\end{prop} \begin{proof} We may assume that $n$ is maximal with the property that there exists $(v,e)\in V_{\mathfrak{M}(K)}$ of index $0$ whose image is $(\sigma^{n-1},\tau^n)$ for some $n\geq 2$. With this assumption, we shall prove that $K=\partial\Delta^{n+1}$. Let $w$ be the other end of $e$ and consider $F(w,e)$. Since $n\geq 2$ it is not hard to build a primitive Morse function incompatible with $F(v,e)$ and $F(w,e)$ at the same time, thus $e$ must be a face of a $2$-simplex $\{v,w,u\} \in K$. Let $e'= wu$ and $e''=uv$ and consider the minimal $f$-cycle $\{(v,e),(w,e'),(u,e'')\}$ in $K$. Then $\{F(v,e),F(w,e'),F(u,e'')\}$ is a minimal $f$-cycle of index $n-1$ in $L$. 
Let $F(v,e)=(\sigma,\tau)$, $F(w,e')=(\sigma',\tau')$ and $F(u,e'')=(\sigma'',\tau'')$. A simple argument shows that if $\sigma'\prec\tau$ then the situation of Figure \ref{Figure:DemoProp2} would arise, which leads to a contradiction. \begin{figure}[H] \centering \includegraphics[scale=0.7]{demoprop2} \caption{The image of $(v,e'')$, $(u,e')$ and $(w,e)$ in the case $\sigma'\prec\tau$. If we consider a minimal $f$-cycle $\{\alpha,\beta,\delta\}$ of index $n-1$ in $\tau$ (in white arrows), then its preimage under $F$ does not constitute an $f$-cycle in $K$, which contradicts the fact that $F$ is an isomorphism.} \label{Figure:DemoProp2} \end{figure} \noindent Therefore, we must have $\sigma\prec\tau''$ and the situation is as shown in Figure \ref{Figure:DemoProp1}. Let $Q$ be the subcomplex generated by the $n$-simplices $\tau,\tau',\tau''$ and note that $Q$ has $n+2\geq 4$ vertices and a complete $1$-skeleton (see Remark \ref{Remarkssss} ($ii$)). Let $S$ denote the collection of all primitive Morse functions in $Q$ of index $0$ and let $G(x,a)=t(F^{-1}(x,a))\in K$ for each $(x,a)\in S$. We will prove that $K=\partial \Delta^{n+1}$ in several steps. \begin{figure}[H] \centering \includegraphics[scale=0.7]{demoprop1} \caption{} \label{Figure:DemoProp1} \end{figure} \textsc{Step 1.} We show first that $G(S)$ is a collection of $k$-simplices for a fixed $k\leq n$. Consider a sequence $\tau=\eta^{n}\succ\sigma=\eta^{n-1}\succ\eta^{n-2}\succ\cdots\succ\eta^1\succ\eta^0=y$ of faces of the $n$-simplex $\tau$ ending in a vertex $y$ of $\tau$. Each pair $(\eta^{i-1},\eta^i)$ is incompatible with the previous and the next pair. Since incompatibility for a given regular pair only happens with regular pairs of one dimension up, one dimension down or of the same dimension, we conclude that $F^{-1}(y,\eta^1)=(\psi^{k-1},\rho^k)$ for some $k\leq n$. Now, since $Q$ has a complete $1$-skeleton, any edge $a\in Q$ is part of a cycle also containing $\eta^1$. 
Therefore, any $(x,a)\in S$ is part of an $f$-cycle of index $0$ containing either $(y,\eta^1)$ or $(z,\eta^1)$, where $z$ is the other end of $\eta^1$ (see Remark \ref{Remarkssss} ($i$)). Since by definition $F$ maps $f$-cycles to $f$-cycles, it suffices to show that $t(F^{-1}(z,\eta^1))$ is also a $k$-simplex. But since $|V_Q|\geq 4$, we can form an $f$-cycle of index $0$ containing $(y,\eta^1)$ and a new pair $(p,\psi)$, and another one containing $(z,\eta^1)$ and $(p,\psi)$, as shown in Figure \ref{Figure:DemoFinal1}. \begin{figure}[H] \centering \includegraphics[scale=0.7]{demofinal1} \caption{} \label{Figure:DemoFinal1} \end{figure} \textsc{Step 2.} We show that $k=n$ and that $G(S)$ spans $\partial\Delta^{n+1}$. Fix a minimal $f$-cycle $C_1=\{(v_1, v_1v_2),(v_2,v_2v_3),(v_3,v_1v_3)\}$ in $Q$ and let $T$ be the subcomplex of $K$ generated by the three $k$-simplices in $G(C_1)$. Note that $|V_{T}|=k+2$ by Remark \ref{Remarkssss} ($ii$). We claim that all $k$-simplices in $G(S)$ have their vertices in $V_{T}$. To see this, let $(x,a)\in S$ and let $y$ be the other end of $a$. All possible situations for $(x,a)$ with respect to $C_1$ are depicted in Figure \ref{Figure:regularpairrespectC1}, where one can verify that it is always possible to find a sequence of adjacent minimal $f$-cycles between $C_1$ and a minimal $f$-cycle containing $(x,a)$. \begin{figure}[h] \centering \includegraphics[scale=0.6]{regularpairrespectC1} \caption{The sequence of adjacent minimal $f$-cycles in situation ($d$) is given by $C_1=\{1,2,3\},\{1,4,5\},\{5,6,7\}$ and $\{6,8,9=(x,a)\}$.} \label{Figure:regularpairrespectC1} \end{figure} By an inductive argument it suffices to show that the image under $G$ of a regular pair in a minimal $f$-cycle adjacent to $C_1$ has its vertices in $V_T$. Let $C_2=\{(v_2,v_2v_3),(v_3,v_3v_4),(v_4,v_2v_4)\}$ be a generic minimal $f$-cycle adjacent to $C_1$. 
Since the $k$-simplex $G(v_2,v_2v_3)$ lies in $G(C_1)\cap G(C_2)$, by Remark \ref{Remarkssss} ($ii$) it suffices to show that the only vertex $q\in V_{T}\setminus V_{G(v_2,v_2v_3)}$ is also in $G(C_2)$. But since $(v_3,v_3v_4)\nsim (v_3,v_1v_3)$, either $s(F^{-1}(v_3,v_3v_4))=s(F^{-1}(v_3,v_1v_3))$ or $t(F^{-1}(v_3,v_3v_4))=t(F^{-1}(v_3,v_1v_3))$. The situation must be as shown in Figure \ref{Figure:DemoProp1} and the possible cases are shown in Figure \ref{Figure:demoprop43a}. This proves that $q\in G(C_2)$. \begin{figure}[h] \centering \includegraphics[scale=0.8]{demoprop43a} \caption{Here $F^{-1}(v_3,v_3v_4)$ is drawn with a dashed arrow. On the left: the case $s(F^{-1}(v_3,v_3v_4))=s(F^{-1}(v_3,v_1v_3))$ cannot happen because we get more than $k+2$ vertices. On the right: in the case $t(F^{-1}(v_3,v_3v_4))=t(F^{-1}(v_3,v_1v_3))$ we readily see that $q\in G(C_2)$.} \label{Figure:demoprop43a} \end{figure} Now, since $Q$ has a complete $1$-skeleton, we can form a cycle in $Q^{(1)}$ containing all the vertices of $Q$. The corresponding $f$-cycle of index $0$ has as preimage under $F$ an $f$-cycle of index $k-1$ with $n+2$ regular pairs. By definition, the targets of these pairs are distinct $k$-simplices. Therefore, we conclude that $k=n$ and that $G(S)$ spans $\partial\Delta^{n+1}$. \textsc{Step 3.} We show that $K$ is spanned by $G(S)$. First, note that two primitive Morse functions $(x,a),(x,b)\in S$ of index $0$ sharing the same source vertex $x\in V_Q$ are mapped by $F^{-1}$ to primitive Morse functions with the same target $n$-simplex (i.e. $G(x,a)=G(x,b)$). To see this, note that since $F^{-1}(x,a)\nsim F^{-1}(x,b)$, by \textsc{Step 1} either $s(F^{-1}(x,a))=s(F^{-1}(x,b))$ or $G(x,a)=G(x,b)$. Assume the first case holds and let $(x,c)\in S$ with $c\neq a,b$. Note that such a pair $(x,c)$ exists because $n\geq 2$. 
Since the only primitive Morse functions incompatible with both $F^{-1}(x,a)$ and $F^{-1}(x,b)$ have source $s(F^{-1}(x,a))=s(F^{-1}(x,b))$, there exists an $n$-simplex in $G(S)$ with one vertex $q$ not in $V_{G(x,a)}\cup V_{G(x,b)}$ (see Figure \ref{Figure:demofinalpunto3}). \begin{figure}[h] \centering \includegraphics[scale=0.7]{demofinalpunto3} \caption{} \label{Figure:demofinalpunto3} \end{figure} This is a contradiction since, by the reasoning in \textsc{Step 2}, all the vertices in $G(S)$ are included in the set of $n+2$ vertices determined by any two distinct $n$-simplices in $G(S)$. We conclude that $G(x,a)=G(x,b)$, and thus we have a bijection between $V_Q$ and $G(S)$. Suppose now that $K-\langle G(S)\rangle\neq\emptyset$. Here $\langle G(S)\rangle$ denotes the subcomplex spanned by $G(S)$. Let $\tilde{e}\in K$ be an edge such that $\tilde{e}\cap\langle G(S) \rangle$ consists of a single vertex $z$. Consider the primitive Morse function $(z,\tilde{e})$ and let $(z,\tilde{e}'),(z,\tilde{e}'')$ be primitive Morse functions with $\tilde{e}',\tilde{e}''\in \langle G(S) \rangle$. Since $(z,\tilde{e})\nsim (z,\tilde{e}')$ and $(z,\tilde{e})\nsim (z,\tilde{e}'')$, we have $F(z,\tilde{e})\nsim F(z,\tilde{e}')$ and $F(z,\tilde{e})\nsim F(z,\tilde{e}'')$. Since $n$ is maximal and $t(F(z,\tilde{e}'))=t(F(z,\tilde{e}''))$ by the previous reasoning, $t(F(z,\tilde{e}))$ must be equal to $t(F(z,\tilde{e}'))=t(F(z,\tilde{e}''))$. This is a contradiction because, due to the bijection between $V_Q$ and $G(S)$, all $n+1$ regular pairs whose target is this $n$-simplex are in the image of the $n+1$ regular pairs in $S$ with source $z$. This concludes the proof. \end{proof} As we did with the edges of simple graphs, for simplicity of notation an $n$-simplex $\sigma=\set{v_0,\ldots,v_n}\in K$ will be denoted by $\sigma=v_0\cdots v_n$. \begin{proof}[Proof of Theorem A] Let $F:\mathfrak{M}(K)\rightarrow\mathfrak{M}(L)$ be an isomorphism. 
By Proposition \ref{Proposition:IsomorphismRestrictsToOneSkeleton} we may assume that every primitive Morse function of index $0$ in $\mathfrak{M}(K)$ (resp. in $\mathfrak{M}(L)$) is mapped by $F$ (resp. by $F^{-1}$) to a primitive Morse function of index $0$. This gives a well-defined isomorphism $F|_{\mathfrak{M}(K^{(1)})}:\mathfrak{M}(K^{(1)})\rightarrow\mathfrak{M}(L^{(1)})$. By Theorem \ref{Theorem:MainForSimpleGraphs} there exists an isomorphism $f:K^{(1)}\to L^{(1)}$ with $f(v)=s(F(v,e))$ for any $e\succ v$. Note that for every edge $xy\in K$ we have $F(x,xy)=(f(x),f(x)f(y))$. We will show by induction that for any $(n+1)$-simplex $v_0\cdots v_{n+1}$, $$F(v_0\cdots v_{n},v_0\cdots v_{n} v_{n+1})=(f(v_0)\cdots f(v_{n}),f(v_0)\cdots f(v_{n})f(v_{n+1})).$$ Given $\tau=v_0\cdots v_{n+1}\in K$, consider the following two families of primitive Morse functions: \begin{itemize} \item $I=\{(v_0\cdots\hat{v}_i\cdots v_{n},v_0\cdots v_{n})\ , \ 0\leq i\leq n\}$ \item $J=\{(v_1\cdots\hat{v}_j\cdots v_{n+1},v_1\cdots v_{n+1})\ ,\ 1\leq j\leq n+1\}$,\end{itemize} where the hat over a vertex means that that vertex is to be omitted. By induction, \small \begin{itemize} \item $F(v_0\cdots \hat{v}_i\cdots v_{n},v_0\cdots v_{n})=(f(v_0)\cdots\widehat{f(v_i)}\cdots f(v_{n}),f(v_0)\cdots f(v_{n}))$ \normalsize and \small \item $F(v_1\cdots\hat{v}_j\cdots v_{n+1},v_1\cdots v_{n+1})=(f(v_1)\cdots\widehat{f(v_j)}\cdots f(v_{n+1}),f(v_1)\cdots f(v_{n+1}))$. \end{itemize} \normalsize Since $(v_0\cdots v_{n},v_0\cdots v_{n+1})\in\mathfrak{M}(K)$ is incompatible with every element of $I$ then $$F(v_0\cdots v_{n},v_0\cdots v_{n+1})=(f(v_0)\cdots f(v_n),f(v_0)\cdots f(v_n)w)$$ for some vertex $w\in L$. On the other hand, since $(v_1\cdots v_{n+1},v_0 \cdots v_{n+1})\in\mathfrak{M}(K)$ is incompatible with every element of $J$ then $$F(v_1\cdots v_{n+1},v_0 \cdots v_{n+1})=(f(v_1)\cdots f(v_{n+1}),f(v_1)\cdots f(v_{n+1})u)$$ for some vertex $u\in L$. 
But $(v_0\cdots v_{n},v_0\cdots v_{n+1})\nsim (v_1\cdots v_{n+1},v_0 \cdots v_{n+1})$, so we must have $f(v_0)\cdots f(v_n)w=f(v_1)\cdots f(v_{n+1})u$, and therefore $w=f(v_{n+1})$ and $u=f(v_0)$. \end{proof}
https://arxiv.org/abs/1509.07457
A simplicial complex is uniquely determined by its set of discrete Morse functions
We prove that a connected simplicial complex is uniquely determined by its complex of discrete Morse functions. This settles a question raised by Chari and Joswig. In the 1-dimensional case, this implies that the complex of rooted forests of a connected graph G completely determines G.
https://arxiv.org/abs/2107.09962
Cone Types, Automata, and Regular Partitions in Coxeter Groups
In this article we introduce the notion of a \textit{regular partition} of a Coxeter group. We develop the theory of these partitions, and show that the class of regular partitions is essentially equivalent to the class of automata (not necessarily finite state) recognising the language of reduced words in the Coxeter group. As an application of this theory we prove that each cone type in a Coxeter group has a unique minimal length representative. This result can be seen as an analogue of Shi's classical result that each component of the Shi arrangement of an affine Coxeter group has a unique minimal length element. We further develop the theory of cone types in Coxeter groups by identifying the minimal set of roots required to express a cone type as an intersection of half-spaces. This set of \textit{boundary roots} is closely related to the elementary inversion sets of Brink and Howlett, and also to the notion of the base of an inversion set introduced by Dyer.
\section*{Introduction} In \cite{BH:93}, Brink and Howlett showed that every finitely generated Coxeter system $(W,S)$ is automatic by providing an explicit construction of a finite state automaton $\mathcal{A}_0$ recognising the language $\mathcal{L}(W,S)$ of reduced words of $(W,S)$. A key insight in~\cite{BH:93} was the introduction of the set $\mathcal{E}$ of \textit{elementary roots} of the associated root system, and the proof that $\mathcal{E}$ is finite for all finitely generated Coxeter systems. This remarkable work paved the way for the study of other structures in Coxeter groups which also induce automata, such as the notion of \textit{Garside shadows} and \textit{low elements} introduced by Dehornoy, Dyer, and Hohlweg (see \cite{DDH:15,DH:16}). By the Myhill-Nerode Theorem there exists a unique (up to isomorphism) \textit{minimal} (with respect to the number of states) automaton $\mathcal{A}(W,S)$ recognising $\mathcal{L}(W,S)$. This automaton has been of considerable interest recently. For example, in \cite[Conjecture 2]{HNW:16} Hohlweg, Nadeau and Williams conjectured necessary and sufficient conditions for minimality of the automaton $\mathcal{A}_0$ constructed by Brink and Howlett, and this conjecture was verified by the current authors in~\cite[Theorem~1]{PY:19}. Furthermore, in \cite{HNW:16}, an automaton recognising $\mathcal{L}(W,S)$ is constructed using the smallest Garside shadow, and it is conjectured in \cite[Conjecture 1]{HNW:16} that this automaton is always minimal. In this paper we provide a detailed investigation of the minimal automaton $\mathcal{A}(W,S)$. The set of accept states of this automaton is the set $\mathbb{T}$ of all \textit{cone types} in $W$, where the cone type of $x\in W$ is $$ T(x)=\{y\in W\mid \ell(xy)=\ell(x)+\ell(y)\}. $$ Here $\ell:W\to\mathbb{N}$ is the usual length function, and we note that $|\mathbb{T}|<\infty$ (this is a consequence of the fact that $|\mathcal{E}|<\infty$). 
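To make the definition of $T(x)$ concrete, here is a small computational sketch (our illustration, not from the paper) of cone types in the symmetric group $S_3$, the Coxeter group of type $\mathsf{A}_2$, where the length function is the inversion number:

```python
from itertools import permutations

# W = S_3 with elements as permutations of {0,1,2}; the Coxeter length
# is the inversion number, and T(x) = {y in W : l(xy) = l(x) + l(y)}.

def compose(x, y):                      # group multiplication xy
    return tuple(x[y[i]] for i in range(len(y)))

def length(w):                          # Coxeter length = number of inversions
    return sum(1 for i in range(len(w))
                 for j in range(i + 1, len(w)) if w[i] > w[j])

W = list(permutations(range(3)))

def cone_type(x):
    return frozenset(y for y in W
                     if length(compose(x, y)) == length(x) + length(y))
```

In this tiny example the six elements already have six distinct cone types, so $|\mathbb{T}|=|W|=6$: the identity has cone type $W$, and the longest element $(2,1,0)$ has cone type $\{e\}$.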
Thus studying $\mathcal{A}(W,S)$ amounts to studying cone types in Coxeter groups. A main contribution of this paper is the introduction of the notion of a \textit{regular partition} of~$W$. This concept is inspired by, and simultaneously generalises, both the Shi arrangement and the theory of Garside shadows, and we show that the class of regular partitions of $W$ is essentially equivalent to the class of automata recognising $\mathcal{L}(W,S)$ (see Theorem~\ref{thm:main3}). We will outline our main results below. In order to do so, we fix the following terminology (see Section~\ref{sec:1} for precise definitions). Let $(W,S)$ be a finitely generated Coxeter system, and let $\Phi$ be an associated root system. For $w\in W$ write $\Phi(w)$ for the inversion set of $w$, and $\mathcal{E}(w)=\Phi(w)\cap\mathcal{E}$ for the elementary inversion set of~$w$. The \textit{right weak order} on $W$ is defined by $x\preccurlyeq y$ if and only if $\ell(x^{-1}y)=\ell(y)-\ell(x)$ (equivalently, if and only if $x$ is a prefix of $y$). The left descent set of $w\in W$ is $D_L(w)=\{s\in S\mid \ell(sw)=\ell(w)-1\}$. Recall, from \cite{DDH:15}, that a \textit{Garside shadow} in $W$ is a subset $B\subseteq W$ such that (i) $S\subseteq B$, (ii) $B$ is closed under taking suffixes of elements, and (iii) $B$ is closed under taking joins (in the right weak order) of bounded subsets. Since the intersection of Garside shadows is again a Garside shadow, there exists a unique smallest Garside shadow, denoted~$\widetilde{S}$, and $|\widetilde{S}|<\infty$. One of the main contributions of this paper is the proof of the following fundamental property of cone types in Coxeter groups (see Corollary~\ref{cor:gateexist}). \begin{thm1}\label{thm:main1} For each cone type $T$ there is a unique element $m_T\in W$ of minimal length such that $T(m_T)=T$. Moreover, if $w\in W$ with $T(w)=T$ then $m_T$ is a suffix of $w$. 
\end{thm1} The path to proving Theorem~\ref{thm:main1} is surprisingly circuitous, and along the way we introduce several new concepts, as outlined below. An initial observation is that while the set $\{w\in W\mid T(w)=T\}$ of all cone type representatives of $T$ is disconnected (in the Coxeter complex), the inverse of this set turns out to be connected, and indeed convex (see Proposition~\ref{prop:convexitybasics}). Thus we consider the sets $$ X_T=\{w\in W\mid T(w^{-1})=T\},\quad\text{for $T\in\mathbb{T}$}. $$ We will ultimately prove Theorem~\ref{thm:main1} by showing that $X_T$ contains a unique minimal length element $g_T$, and that whenever $w\in W$ is such that $T(w^{-1})=T$ then $g_T\preccurlyeq w$. We call $\Gamma=\{g_T\mid T\in\mathbb{T}\}$ the set of \textit{gates} of $W$. Of course $m_T=g_T^{-1}$, however it turns out that the set $\Gamma$ appears to be more fundamental than the set of minimal length cone type representatives. We call the partition $\mathscr{T}=\{X_T\mid T\in\mathbb{T}\}$ of $W$ the \textit{cone type partition}. When $W$ is affine, the partition $\mathscr{T}$ shares many properties with the partition $\mathscr{S}$ of $W$ induced by the classical Shi arrangement introduced by Shi in~\cite{Shi:87a,Shi:87b} (and extensively studied ever since). We illustrate this below in the case $\tilde{\mathsf{B}}_2$ (the parts of the partitions are the connected components; see Example~\ref{ex:completingB2} for the details of how to compute $\mathscr{T}$). 
\begin{figure}[H] \centering \subfigure[The cone type partition~$\mathscr{T}$]{ \begin{tikzpicture}[scale=0.65] \path [fill=gray!90] (0,0) -- (1,1) -- (0,1) -- (0,0); \path [fill=blue!30] (-1,1)--(0,2)--(2,2)--(2,0)--(1,-1)--(0,-1)--(0,0)--(1,1)--(-1,1); \path [fill=blue!30] (0,2)--(-1,3)--(1,3)--(0,2); \path [fill=blue!30] (1,3)--(1,4)--(2,4)--(1,3); \path [fill=blue!30] (-1,1)--(0,1)--(0,-1)--(-1,-1)--(-1,1); \path [fill=blue!30] (-1,1)--(-2,0)--(-2,2)--(-1,1); \path [fill=blue!30] (-2,0)--(-3,0)--(-3,-1)--(-2,0); \path [fill=blue!30] (1,-1)--(1,-2)--(2,-2)--(1,-1); \path [fill=blue!30] (2,0)--(3,0)--(3,-1)--(2,0); \draw (-4,-2) -- (5,-2); \draw (-4,-1) -- (5,-1); \draw [line width=2pt](-4,0) -- (5,0); \draw [line width=2pt](-4,1) -- (5,1); \draw (-4,2) -- (5,2); \draw (-4,3) -- (5,3); \draw (-4,4) -- (5,4); \draw (-4,5) -- (5,5); \draw (-4,-3) -- (5,-3); \draw (-4,-4) -- (5,-4); \draw (-2,-4) -- (-2,5); \draw (-1,-4) -- (-1,5); \draw [line width=2pt](0,-4) -- (0,5); \draw [line width=2pt](1,-4) -- (1,5); \draw (2,-4) -- (2,5); \draw (-4,-4) -- (-4,5); \draw (-3,-4) -- (-3,5); \draw (3,-4) -- (3,5); \draw (4,-4) -- (4,5); \draw (5,-4) -- (5,5); \draw (-4,4)--(-3,5); \draw (-4,2)--(-1,5); \draw (-4,0) -- (1,5); \draw (-4,-2) -- (3,5); \draw [line width=2pt](-4,-4) -- (5,5); \draw (-2,-4) -- (5,3); \draw (0,-4) -- (5,1); \draw (2,-4)--(5,-1); \draw (4,-4)--(5,-3); \draw (3,5)--(5,3); \draw (1,5)--(5,1); \draw (-1,5)--(5,-1); \draw[line width=2pt] (-3,5)--(5,-3); \draw[line width=2pt] (-4,4)--(4,-4); \draw (-4,2)--(2,-4); \draw (-4,0)--(0,-4); \draw (-4,-2)--(-2,-4); \draw[line width=2pt] (-4,-2)--(-1,1); \draw[line width=2pt] (0,2)--(3,5); \end{tikzpicture} }\quad\qquad \subfigure[The Shi partition $\mathscr{S}$]{\begin{tikzpicture}[scale=0.65] \path [fill=gray!90] (0,0) -- (1,1) -- (0,1) -- (0,0); \path [fill=blue!30] (-1,1)--(0,2)--(2,2)--(2,0)--(1,-1)--(0,-1)--(0,0)--(1,1)--(-1,1); \path [fill=blue!30] (0,2)--(-1,3)--(1,3)--(0,2); \path [fill=blue!30] 
(1,3)--(1,4)--(2,4)--(1,3); \path [fill=blue!30] (-1,1)--(0,1)--(0,-1)--(-1,-1)--(-1,1); \path [fill=blue!30] (-1,1)--(-2,0)--(-2,2)--(-1,1); \path [fill=blue!30] (-2,0)--(-3,0)--(-3,-1)--(-2,0); \path [fill=blue!30] (1,-1)--(1,-2)--(2,-2)--(1,-1); \path [fill=blue!30] (2,0)--(3,0)--(3,-1)--(2,0); \path [fill=red!30] (-1,1)--(-1,2)--(0,2)--(-1,1); \draw (-4,-2) -- (5,-2); \draw (-4,-1) -- (5,-1); \draw [line width=2pt](-4,0) -- (5,0); \draw [line width=2pt](-4,1) -- (5,1); \draw (-4,2) -- (5,2); \draw (-4,3) -- (5,3); \draw (-4,4) -- (5,4); \draw (-4,5) -- (5,5); \draw (-4,-3) -- (5,-3); \draw (-4,-4) -- (5,-4); \draw (-2,-4) -- (-2,5); \draw (-1,-4) -- (-1,5); \draw [line width=2pt](0,-4) -- (0,5); \draw [line width=2pt](1,-4) -- (1,5); \draw (2,-4) -- (2,5); \draw (-4,-4) -- (-4,5); \draw (-3,-4) -- (-3,5); \draw (3,-4) -- (3,5); \draw (4,-4) -- (4,5); \draw (5,-4) -- (5,5); \draw (-4,4)--(-3,5); \draw (-4,2)--(-1,5); \draw (-4,0) -- (1,5); \draw (-4,-2) -- (3,5); \draw [line width=2pt](-4,-4) -- (5,5); \draw (-2,-4) -- (5,3); \draw (0,-4) -- (5,1); \draw (2,-4)--(5,-1); \draw (4,-4)--(5,-3); \draw (3,5)--(5,3); \draw (1,5)--(5,1); \draw (-1,5)--(5,-1); \draw[line width=2pt] (-3,5)--(5,-3); \draw[line width=2pt] (-4,4)--(4,-4); \draw (-4,2)--(2,-4); \draw (-4,0)--(0,-4); \draw (-4,-2)--(-2,-4); \draw[line width=2pt] (-4,-2)--(3,5); \end{tikzpicture} } \caption{The partitions $\mathscr{S}$ and $\mathscr{T}$ for $\tilde{\mathsf{B}}_2$}\label{fig:B2partitions} \end{figure} In Figure~\ref{fig:B2partitions}(a) the gates $g_T$ are shaded blue. By direct observation, each part of the partition~$\mathscr{S}$ also contains a unique minimal length element, and these are shaded blue and red (the red element in Figure~\ref{fig:B2partitions}(b) is shaded red to highlight the difference with Figure~\ref{fig:B2partitions}(a)). Note that $\mathscr{S}$ is a refinement of $\mathscr{T}$ (written $\mathscr{T}\leq \mathscr{S}$), and that $\mathscr{T}$ is not a hyperplane arrangement. 
One can define the \textit{Shi partition} $\mathscr{S}$ for an arbitrary Coxeter group by declaring $x,y\in W$ to lie in the same part of $\mathscr{S}$ if and only if $\mathcal{E}(x)=\mathcal{E}(y)$ (see \cite[Definition~3.18]{HNW:16}, and note that in the affine case this agrees with the classical definition, as the hyperplanes of the Shi arrangement are precisely the hyperplanes corresponding to the elementary roots of $W$). While the celebrated result of Shi~\cite{Shi:87b} tells us that in the affine case each component of $\mathscr{S}$ contains a unique minimal length element, it is unknown if this is true for general Coxeter type, and this analogy underscores the difficulty in proving Theorem~\ref{thm:main1}. The above discussion suggests that the language of ``partitions of $W$'' is the appropriate framework in which to study cone types and related structures. Indeed our approach to Theorem~\ref{thm:main1} is via a detailed study of a special class of partitions that we call the \textit{regular partitions} of $W$. A partition $\mathscr{P}$ of $W$ is \textit{regular} if the following conditions are satisfied for each part $P\in\mathscr{P}$: \begin{enumerate} \item if $x,y\in P$ then $D_L(x)=D_L(y)$ (write $D_L(P)$ for this common value), and \item if $s\notin D_L(P)$ then $sP\subseteq P'$ for some part $P'\in\mathscr{P}$. \end{enumerate} A partition satisfying~(1) is called \textit{locally constant}. Let $\mathscr{P}(W)$ denote the set of all partitions of $W$, and let $\mathscr{P}_{\mathrm{reg}}(W)$ denote the set of all regular partitions of~$W$. It is not hard to see that $\mathscr{T}$ is a regular partition, and other interesting examples of regular partitions include partitions induced by Garside shadows, and the partitions induced by general Shi arrangements (see Theorem~\ref{thm:regularpartitions}). 
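The two defining conditions of a regular partition can be tested mechanically on finite examples. The sketch below (our illustration, not from the paper) does so for partitions of $S_3$ (type $\mathsf{A}_2$); note that in this small case the left descent partition already fails condition (2), while the partition into singletons is regular.

```python
from itertools import permutations

# Regularity check in W = S_3 (type A_2).  D_L(w) = {s : l(sw) < l(w)},
# with s ranging over the two simple transpositions s1, s2.

def compose(x, y):                      # group multiplication xy
    return tuple(x[y[i]] for i in range(len(y)))

def length(w):                          # Coxeter length = number of inversions
    return sum(1 for i in range(len(w))
                 for j in range(i + 1, len(w)) if w[i] > w[j])

GENS = [(1, 0, 2), (0, 2, 1)]           # the simple reflections s1, s2
W = list(permutations(range(3)))

def descents(w):
    return frozenset(s for s in GENS if length(compose(s, w)) < length(w))

def is_regular(partition):
    part_of = {w: i for i, P in enumerate(partition) for w in P}
    for P in partition:
        D = {descents(w) for w in P}
        if len(D) > 1:                  # (1) fails: descents not constant on P
            return False
        for s in GENS:
            if s not in next(iter(D)):
                if len({part_of[compose(s, w)] for w in P}) > 1:
                    return False        # (2) fails: sP meets two parts
    return True
```

For instance, the part $\{s_1, s_1s_2\}$ of the descent partition is sent by left multiplication by $s_2$ to $\{s_2s_1, w_0\}$, which meets two different parts.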
The following theorem (see Theorems~\ref{thm:regularautomaton} and~\ref{thm:converseautomaton}) shows that regular partitions are equivalent to ``reduced'' automata recognising $\mathcal{L}(W,S)$ (here ``reduced'' is a natural and mild hypothesis, see Section~\ref{subsec:regularpartitions}). \begin{thm1}\label{thm:main3} For each regular partition $\mathscr{R}$ of $W$ there exists an explicitly defined reduced automaton recognising $\mathcal{L}(W,S)$, with accept states being the parts of $\mathscr{R}$. Moreover, every reduced automaton recognising $\mathcal{L}(W,S)$ arises in such a way from some regular partition~$\mathscr{R}$. \end{thm1} Theorem~\ref{thm:main3} highlights the fundamental role regular partitions play in the automatic structure of $W$. In particular, our construction encapsulates all of the known constructions of automata recognising $\mathcal{L}(W,S)$, and produces infinitely many new examples. To make further progress, and continuing towards a proof of Theorem~\ref{thm:main1}, we undertake a study of the structure of the partially ordered set $\mathscr{P}_{\mathrm{reg}}(W)$ of all regular partitions. Here the partial order is $\mathscr{P}\leq \mathscr{P}'$ if $\mathscr{P}'$ is a refinement of $\mathscr{P}$. We prove (see Theorem~\ref{thm:regularlattice2}): \begin{thm1}\label{thm:main4} The partially ordered set $(\mathscr{P}_{\mathrm{reg}}(W),\leq)$ is a complete lattice, with bottom element $\mathscr{T}$ and top element $\mathbf{1}=\{\{w\}\mid w\in W\}$ (the partition into singletons). \end{thm1} Thus for any partition $\mathscr{P}\in\mathscr{P}(W)$ one may define the \textit{regular completion} $\widehat{\mathscr{P}}\in\mathscr{P}_{\mathrm{reg}}(W)$ by $$ \widehat{\mathscr{P}}=\bigwedge \{\mathscr{R}\in\mathscr{P}_{\mathrm{reg}}(W)\mid \mathscr{P}\leq \mathscr{R}\}. 
$$ While it is not immediately obvious from the definition, we show in Theorem~\ref{thm:newterminateatregularisation} that $\mathscr{P}\leq\widehat{\mathscr{P}}$, and hence $\widehat{\mathscr{P}}$ is the minimal regular partition refining $\mathscr{P}$. We give an algorithm (called the \textit{simple refinements algorithm}) to compute the regular completion, and prove sufficient conditions for this algorithm to terminate in finite time (see Algorithm~\ref{alg:regularisation} and Theorem~\ref{thm:finitetermination}). Thus we have an essentially free construction of regular partitions, and hence by Theorem~\ref{thm:main3} an essentially free construction of automata recognising the language $\mathcal{L}(W,S)$. Furthermore, our sufficient conditions for the simple refinements algorithm to terminate in finite time lead to sufficient conditions for the resulting automata to be finite state. An important corollary of Theorem~\ref{thm:main4} is the following characterisation of the cone type partition~$\mathscr{T}$. Let $\mathscr{D}$ be the partition of $W$ according to left descent sets (that is, $x$ and $y$ lie in the same part of $\mathscr{D}$ if and only if $D_L(x)=D_L(y)$). Then (see Corollary~\ref{cor:regularlattice1}): \begin{cor1}\label{cor:TD} We have $\mathscr{T}=\widehat{\mathscr{D}}$. \end{cor1} Corollary~\ref{cor:TD}, along with the simple refinements algorithm, allows for $\mathscr{T}$ to be computed algorithmically (see Example~\ref{ex:completingB2}). Moreover, Corollary~\ref{cor:TD} is a key step in establishing Theorem~\ref{thm:main1}. We next introduce the notion of a \textit{gated} partition. A partition $\mathscr{P}$ of $W$ is called \textit{gated} if for each part $P\in\mathscr{P}$ there exists an element $g\in P$ with $g\preccurlyeq x$ for all $x\in P$. 
These elements $g$ are called the ``gates'' of the partition, and we write $\Gamma(\mathscr{P})$ for the set of gates of $\mathscr{P}$ (it is clear that each part $P$ of a gated partition has a unique gate). We show that if $\mathscr{P}$ is a gated and convex partition, then the simple refinements algorithm preserves the gated property. Here \textit{convex} means that each part of the partition is convex in the usual sense. Thus we have the following theorem (see Corollary~\ref{cor:regularcompletiongated}), which, combined with Corollary~\ref{cor:TD}, finally leads to a proof of Theorem~\ref{thm:main1}. \begin{thm1}\label{thm1:preservegatedness} Let $\mathscr{P}$ be a locally constant, convex and gated partition of $W$. If the simple refinements algorithm terminates in finite time, then the regular completion $\widehat{\mathscr{P}}$ is gated and convex. \end{thm1} The finite set $\Gamma=\Gamma(\mathscr{T})$ (the set of gates of~$W$) has many remarkable properties. For example, $\Gamma$ is closed under taking suffix, contains all spherical elements of $W$, is contained in every Garside shadow, and is contained in the set $\Gamma(\mathscr{P})$ of gates of every gated regular partition~$\mathscr{P}$ (see Proposition~\ref{prop:gatesbasicfacts}). Moreover we make the following conjecture (which in turn would resolve \cite[Conjecture~1]{HNW:16}, see Theorem~\ref{thm:GminGamma}). \begin{conj1}\label{conj:garside} The set $\Gamma$ is closed under join, and hence is a Garside shadow. \end{conj1} Let $\Phi_{\mathrm{sph}}^+$ denote the set of \textit{spherically supported} positive roots. We have $\Phi_{\mathrm{sph}}^+\subseteq \mathcal{E}$, however this containment can be strict (see~\cite[Theorem~1]{PY:19} for the classification of Coxeter systems for which $\mathcal{E}=\Phi_{\mathrm{sph}}^+$). Let $L\subseteq W$ denote the set of \textit{low elements} of $W$ introduced by Dehornoy, Dyer, and Hohlweg in~\cite{DDH:15} (see Definition~\ref{defn:nlow}). 
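These sets can be computed explicitly in small cases; the following illustration (included only for concreteness, and verifiable directly from the definitions) shows the simplest infinite example.

\begin{exa} Let $(W,S)$ be of type $\tilde{A}_1$, so $S=\{s,t\}$ with $m_{s,t}=\infty$. Here $\Phi_{\mathrm{sph}}^+=\{\alpha_s,\alpha_t\}$ and $\mathcal{E}=\Phi_{\mathrm{sph}}^+$: for example, the positive root $t\alpha_s=\alpha_s+2\alpha_t$ is not elementary, because every $w\in W$ with $w^{-1}(t\alpha_s)<0$ has a reduced expression beginning with $ts$, and every such $w$ also satisfies $w^{-1}\alpha_t<0$. A direct check shows that $L=\widetilde{S}=\{e,s,t\}$ in this case. \end{exa}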
We prove the following, providing evidence for Conjecture~\ref{conj:garside} (see Theorem~\ref{thm:conjectures}). \begin{thm1}\label{thm:conjforspherical} If $\mathcal{E}=\Phi_{\mathrm{sph}}^+$ then $\Gamma=\widetilde{S}=L$, and so $\Gamma$ is a Garside shadow. \end{thm1} Other main contributions of this paper include the following. In Section~\ref{sec:conetypes} we characterise the minimal set $\partial T\subseteq \Phi^+$ of positive roots required to determine $T$ (we call $\partial T$ the \textit{boundary roots} of~$T$), and provide a precise formula for cone types in terms of these roots. If $T$ is a cone type, and $x\in W$ is such that $T=T(x^{-1})$, we define $$ \partial T=\{\beta\in\Phi^+\mid \text{there exists $w\in W$ with $\Phi(x)\cap\Phi(w)=\{\beta\}$}\}. $$ This set of roots is independent of the particular representative $x\in W$ with $T=T(x^{-1})$ chosen, and we prove the following theorem (see Theorem~\ref{thm:geometry2}). \begin{thm1}\label{thm:main2} Let $T$ be a cone type. Then $$ T=\bigcap_{\beta\in\partial T}H_{\beta}^+\quad\text{where}\quad H_{\beta}^+=\{w\in W\mid \ell(s_{\beta}w)>\ell(w)\}. $$ Moreover, removing any root from the above intersection results in strict containment. \end{thm1} We note that if $T(x^{-1})=T$ then $\partial T\subseteq\mathcal{E}(x)$, however strict containment can occur. Moreover, we have $\partial T\subseteq \Phi^1(x)$ (where $\Phi^1(x)$ is the ``base'' of $\Phi(x)$, defined by Dyer~\cite{Dye:19}), however again strict containment can occur. We make the following conjecture (for which we can only prove the reverse implication): \begin{conj1}\label{conj:boundary} Let $x\in W$. Then $x\in \Gamma$ if and only if $\partial T(x^{-1})=\Phi^1(x)$. \end{conj1} We also identify the ``partition theoretic'' equivalent to Garside shadows, in the following sense. If $\mathscr{P}$ is a regular gated partition, then the set $\Gamma(\mathscr{P})$ of all gates of $\mathscr{P}$ contains $S$ and is automatically closed under suffix. 
However $\Gamma(\mathscr{P})$ is not necessarily closed under join. In Section~\ref{sec:conical} we define \textit{conical partitions} (see Definition~\ref{defn:conical}). These partitions are necessarily gated, and the set of gates of a conical partition is necessarily closed under join. Thus regular conical partitions are equivalent to Garside shadows (see Corollary~\ref{cor:garsideequivalent}). In Section~\ref{sec:ultralow}, motivated by Conjecture~\ref{conj:boundary}, we define \textit{ultra low} elements in a Coxeter group to be the elements $x\in W$ with $\partial T(x^{-1})=\Phi^1(x)$, and investigate their properties. Writing $U$ for the set of ultra low elements, we have $U\subseteq \Gamma\subseteq \widetilde{S}$. Conjecture~\ref{conj:boundary}, if true, implies that $U=\Gamma$, and Conjecture~\ref{conj:garside}, if true, implies that $\Gamma=\widetilde{S}$. Finally, in Section~\ref{sec:superelementary} we consider the question of which elementary roots occur as a boundary root of some cone type. We show that in spherical and affine Coxeter groups all elementary roots occur, and we exhibit a family of rank~$4$ Coxeter groups in which some elementary roots are not boundary roots of any cone type. We thank C. Hohlweg and M. Dyer for helpful comments on an earlier version of this work, and R. Howlett for useful discussions concerning elementary roots and super elementary roots. This work was supported by funding from the Australian Research Council under the Discovery Project~DP200100712. \newpage \section{Preliminaries}\label{sec:1} In this section we give an overview of background and preliminary results on Coxeter groups, root systems, elementary roots, the Coxeter complex, low elements, Garside shadows, cone types, and automata recognising the language of reduced words in a Coxeter group. 
Our main references are \cite{AB:08,BB:05,Deo:82} (for Coxeter groups, the Coxeter complex, and root systems), \cite{BH:93} (for elementary roots), \cite{Dye:21,DH:16} (for low elements), \cite{DDH:15,HNW:16} (for Garside shadows), and \cite{Eps:92,HNW:16} (for relevant automata theory). \subsection{Coxeter groups}\label{sec:1:Coxetergroups} Let $(W,S)$ be a Coxeter system. We will assume throughout that $|S| < \infty$. For $s,t\in S$ let $m_{s,t}$ denote the order of $st$. The \textit{length} of $w\in W$ is $$ \ell(w)=\min\{n\geq 0\mid w=s_1\cdots s_n\text{ with }s_1,\ldots,s_n\in S\}, $$ where $\ell(e)=0$, with $e$ the identity element. An expression $w=s_1 \cdots s_n$ with $n=\ell(w)$ is called a \textit{reduced expression} for~$w$. An element $v \in W$ is a \textit{prefix} of $w$ if $\ell(w) = \ell(v) + \ell(v^{-1}w)$. Similarly, an element $u \in W$ is a \textit{suffix} of $w$ if $\ell(w) = \ell(u) + \ell(w u^{-1})$. Note that $v$ is a prefix (respectively suffix) of $w$ if and only if there is a reduced expression for $w$ starting (respectively ending) with a reduced expression for $v$. Let $J\subseteq S$. The \textit{$J$-parabolic subgroup} of $W$ is the subgroup $W_J=\langle J\rangle$, and we say that $J$ is \textit{spherical} if $|W_J| < \infty$. If $J$ is spherical then there exists a unique longest element of $W_J$, denoted~$w_J$, and we have $\ell(w_Jw)=\ell(ww_J)=\ell(w_J)-\ell(w)$ for all $w\in W_J$. If $|W|<\infty$ (that is, $S$ is spherical) then we often write $w_0=w_S$ for the longest element of~$W$. The \textit{left descent set} of $w\in W$ is $$ D_L(w)=\{s\in S\mid \ell(sw)=\ell(w)-1\}, $$ and similarly the right descent set is $D_R(w)=\{s\in S\mid \ell(ws)=\ell(w)-1\}$. By \cite[Proposition~2.17]{AB:08} both $D_L(w)$ and $D_R(w)$ are spherical subsets of $S$ for all $w\in W$. Let $J\subseteq S$. It is well known (see, for example \cite[Proposition~2.20]{AB:08}) that each coset $W_Jw$ contains a unique representative of minimal length. 
Let $W^J$ denote the set of these minimal length coset representatives. Then each $w\in W$ has a unique decomposition \begin{align} \label{eq:WJdecomposition}w=uv\quad\text{with $u\in W_J$, $v\in W^J$}, \end{align} and moreover whenever $u\in W_J$ and $v\in W^J$ we have $\ell(uv)=\ell(u)+\ell(v)$. The \textit{right weak order} is the partial order defined on $W$ with $v \preccurlyeq w$ if $v$ is a prefix of~$w$. The partially ordered set $(W, \preccurlyeq)$ is a complete meet semilattice (see \cite[Chapter 3.2]{BB:05}), and thus for any subset $X \subseteq W$ there is a greatest lower bound (or \textit{meet}), denoted $\bigwedge X$. A \textit{bound} for a subset $X\subseteq W$ is an element $w\in W$ such that $x\preccurlyeq w$ for all $x\in X$. It follows from the existence of meets that every bounded subset $X \subseteq W$ admits a least upper bound (or \textit{join}), given by $$ \bigvee X = \bigwedge \{ w \in W \mid x \preccurlyeq w \text{ for all }x \in X \}. $$ If $X=\{x,y\}$ is bounded we write $\bigwedge \{x,y\}=x\wedge y$ and $\bigvee \{x,y\}=x\vee y$. \subsection{Root systems}\label{sec:1:rootsystems} Let $(W,S)$ be a Coxeter system. Let $V$ be an $\mathbb{R}$-vector space with basis $\Pi=\{\alpha_s\mid s\in S\}$, and define a symmetric bilinear form on $V$ by linearly extending $\langle\alpha_s,\alpha_t\rangle=-\cos(\pi/m_{s,t})$. The Coxeter group $W$ acts on $V$ by the rule $s\lambda=\lambda-2\langle \lambda,\alpha_s\rangle \alpha_s$ for $s\in S$ and $\lambda\in V$, and the root system of $W$ is $$ \Phi=\{w\alpha_s\mid w\in W,\,s\in S\}. $$ The elements of $\Phi$ are called \textit{roots}, and the \textit{simple roots} are the roots $\alpha_s$ with $s\in S$. \begin{rem} Note that $\langle\alpha_s,\alpha_t\rangle=-1$ if $m_{s,t}=\infty$. 
One may work more generally with an arbitrary \textit{based root system} $\Phi$ associated to $W$, as in \cite[\S2.3]{DH:16}, however for the purpose of this paper the concrete choice of realisation described above suffices. \end{rem} Each root $\alpha\in\Phi$ can be written as $\alpha=\sum_{s\in S}c_s\alpha_s$ with either $c_s\geq 0$ for all $s\in S$, or $c_s\leq 0$ for all $s\in S$. In the first case $\alpha$ is called \textit{positive} (written $\alpha>0$), and in the second case $\alpha$ is called \textit{negative} (written $\alpha<0$). Let $\Phi^+$ denote the set of all positive roots. The set of \textit{reflections} of $W$ is $\{wsw^{-1}\mid w\in W,\,s\in S\}$. If $w\alpha_s=\beta$ we define $s_{\beta}=wsw^{-1}$. Note that this reflection acts on $V$ by $s_{\beta}\lambda=\lambda-2\langle \lambda,\beta\rangle\beta$. The \textit{inversion set} of $w\in W$ is $$ \Phi(w)=\{\alpha\in \Phi^+\mid w^{-1}\alpha<0\}. $$ We recall some well-known facts in the following proposition. \begin{prop}\label{prop:rootsystembasics} Let $u,v,w\in W$ and $s\in S$. \begin{enumerate} \item We have $\ell(ws)>\ell(w)$ if and only if $w\alpha_s>0$. \item We have $\ell(sw)>\ell(w)$ if and only if $w^{-1}\alpha_s>0$. \item If $w=s_1\cdots s_n$ is reduced, then $\Phi(w)=\{\beta_1,\ldots,\beta_n\}$ where $$ \beta_j=s_1\cdots s_{j-1}\alpha_{s_j}\quad\text{for $1\leq j\leq n$}. $$ \item We have $\Phi(w)=\{\beta\in\Phi^+\mid\ell(s_{\beta}w)<\ell(w)\}$. \item $\Phi(v)\subseteq \Phi(w)$ if and only if $v\preccurlyeq w$. \item If $w=uv$ with $\ell(w)=\ell(u)+\ell(v)$ then $\Phi(w)=\Phi(u)\sqcup u\Phi(v)$. \end{enumerate} \end{prop} \begin{proof} See~\cite[Proposition~2.2 and Proposition~3.1]{Deo:82} and \cite[Proposition~3.1.3]{BB:05}. \end{proof} The \textit{support} of a root $\alpha\in\Phi$ is $\mathrm{supp}(\alpha)=\{s\in S\mid c_s\neq 0\}$, where $\alpha=\sum_{s\in S}c_s\alpha_s$. 
For $J\subseteq S$ let $$ \Phi_J=\{\alpha\in\Phi\mid\mathrm{supp}(\alpha)\subseteq J\}, $$ and for $w\in W$ write $\Phi_J(w)=\Phi(w)\cap\Phi_J$. \begin{lem}\cite[Corollary~2.13]{HNW:16}\label{lem:rootsdecomposition} Let $J\subseteq S$. If $w=uv$ with $u\in W_J$ and $v\in W^J$ then $\Phi(u)=\Phi_J(w)$. In particular, we have $ W^J=\{v\in W\mid \Phi_J(v)=\emptyset\}. $ \end{lem} Each root $\beta\in\Phi^+$ partitions $W$ into two sets $$ H_{\beta}^+=\{w\in W\mid \ell(s_{\beta}w)>\ell(w)\}\quad\text{and}\quad H_{\beta}^-=\{w\in W\mid \ell(s_{\beta}w)<\ell(w)\}. $$ Note that $e\in H_{\beta}^+$. We call $H_{\beta}^+$ and $H_{\beta}^-$ the \textit{half-spaces determined by $\beta$}. Note that if $\beta\in\Phi^+$ then $\beta\in\Phi(w)$ if and only if $w\in H_{\beta}^-$. \subsection{Elementary roots}\label{sec:1:elementaryroots} A root $\beta\in\Phi^+$ is said to \textit{dominate} a root $\alpha\in \Phi^+$ if $w^{-1}\beta<0$ implies that $w^{-1}\alpha<0$ (for all $w\in W$). A root $\beta\in \Phi^+$ is said to be \textit{elementary} if $\beta$ dominates no other positive root $\alpha\neq \beta$. Geometrically, $\beta$ dominates $\alpha$ if and only if $H_{\beta}^-\subseteq H_{\alpha}^-$, or equivalently, if and only if $H_{\beta}^+\supseteq H_{\alpha}^+$. We note that elementary roots are also called \textit{small}, \textit{humble} or \textit{minimal} in the literature. Let $\mathcal{E}\subseteq \Phi^+$ denote the set of all elementary roots. By \cite[Theorem~2.8]{BH:93} the set $\mathcal{E}$ is finite for all (finitely generated) Coxeter systems~$(W,S)$. The \textit{elementary inversion set} of $w\in W$ is $$ \mathcal{E}(w)=\{\beta\in\mathcal{E}\mid w^{-1}\beta<0\}=\Phi(w)\cap \mathcal{E}. $$ Let $\mathbb{E}=\{\mathcal{E}(w)\mid w\in W\}$ be the set of all elementary inversion sets. Since $\mathcal{E}$ is finite, $\mathbb{E}$ is finite too. Let $n\in\mathbb{N}$. 
A root $\beta\in \Phi^+$ is called \textit{$n$-elementary} if it dominates at most $n$ roots $\alpha\in\Phi^+\backslash\{\beta\}$. Thus $0$-elementary roots are the same as elementary roots. Let $\mathcal{E}_n$ denote the set of all $n$-elementary roots. The $n$-elementary inversion set of $w\in W$ is $\mathcal{E}_n(w)=\Phi(w)\cap\mathcal{E}_n$. Let $\mathbb{E}_n$ denote the set of all $n$-elementary inversion sets. By \cite[Corollary~3.9]{Fu:12} the set $\mathcal{E}_n$ is finite for each $n\in\mathbb{N}$, and hence: \begin{cor}\label{cor:Enfinite} The set $\mathbb{E}_n$ is finite for each $n\in\mathbb{N}$. \end{cor} The following lemma is key to the automatic structure of~$W$ (see \cite{BH:93} and \cite[Lemma~3.21]{DH:16}). \begin{lem}\label{lem:elementaryrootsbasics} Let $w\in W$, $s\in S$, and $n\in\mathbb{N}$. If $\ell(sw)>\ell(w)$ then $$\mathcal{E}_n(sw)=(\{\alpha_s\}\sqcup s\mathcal{E}_n(w))\cap\mathcal{E}_n. $$ \end{lem} The set of \textit{spherical roots} is $$ \Phi_{\mathrm{sph}}=\{\alpha\in\Phi\mid\mathrm{supp}(\alpha)\subseteq J\text{ for some spherical subset $J\subseteq S$}\}. $$ Let $$ \mathbb{S}=\{\Phi_{\mathrm{sph}}(w)\mid w\in W\},\quad\text{where $\Phi_{\mathrm{sph}}(w)=\Phi(w)\cap\Phi_{\mathrm{sph}}$}. $$ Clearly $\mathbb{S}$ is finite. We have $\Phi_{\mathrm{sph}}^+\subseteq \mathcal{E}$, however this containment can be strict. The classification of Coxeter systems for which $\mathcal{E}=\Phi_{\mathrm{sph}}^+$ is as follows (see~\cite[Theorem~1]{PY:19}). Let $\mathcal{X}$ denote the set of connected Coxeter graphs which are either of affine or compact hyperbolic type and contain neither circuits nor infinite bonds. Then $\mathcal{E}=\Phi_{\mathrm{sph}}^+$ if and only if the Coxeter graph of $(W,S)$ does not have a subgraph contained in $\mathcal{X}$. 
In particular, it follows that $\mathcal{E}=\Phi_{\mathrm{sph}}^+$ whenever $(W,S)$ is spherical, of type $\tilde{A}_n$, right-angled, or has complete Coxeter graph (that is, $m_{s,t}\geq 3$ for all $s,t\in S$ with $s\neq t$). \subsection{The Coxeter complex}\label{sec:1:Coxetercomplex} The \textit{Coxeter complex} of a Coxeter system is a certain abstract simplicial complex $\Sigma(W,S)$ on which~$W$ naturally acts. While no result of this paper formally depends on the Coxeter complex, it is nonetheless a useful concept for providing a geometric intuition for Coxeter groups. We refer to \cite[Chapter~3]{AB:08} for the formal construction of $\Sigma(W,S)$. Here we provide a less formal sketch. For each $w\in W$ let $C_w$ be a combinatorial simplex with $|S|$ vertices, and assign each vertex $x$ of $C_w$ a \textit{type} $\tau(x)\in S$ such that $C_w$ contains precisely one vertex of each type~$s\in S$. For each $w\in W$ and $s\in S$ we glue $C_w$ and $C_{ws}$ together along their cotype $\{s\}$ faces, identifying the vertex of type $s'$ in $C_{w}$ with the vertex of type $s'$ in $C_{ws}$ for all $s'\in S\backslash\{s\}$. The resulting simplicial complex $\Sigma(W,S)$ is called the \textit{Coxeter complex} of $(W,S)$. The Coxeter complex has maximal simplices $C_w$, $w\in W$, and these are called the \textit{chambers} (or sometimes \textit{alcoves}) of the complex. The Coxeter group $W$ acts on $\Sigma(W,S)$ by type preserving simplicial complex automorphisms. On the level of chambers this action is given by $wC_v=C_{wv}$ for all $w,v\in W$, and the action on the set of chambers is simply transitive. Let $C_0=C_e$ be the \textit{fundamental chamber}, and so $C_w=wC_0$. By construction, the chambers $wC_0$ and $wsC_0$ are \textit{$s$-adjacent} (meaning they share a cotype~$\{s\}$ face). Let $\beta\in\Phi^+$. 
The set $$ H_{\beta}=\{\sigma\in\Sigma(W,S)\mid s_{\beta}(\sigma)=\sigma\} $$ of all simplices fixed by $s_{\beta}$ is called a \textit{wall} of the Coxeter complex. Since $s_{\beta}$ fixes no chambers, there are no chambers contained in the wall $H_{\beta}$. This illustrates the utility of the Coxeter complex, as one can now speak of the wall $H_{\beta}$ separating the half-spaces $H_{\beta}^+$ and $H_{\beta}^-$. We will sometimes identify $W$ with the set of chambers of $\Sigma(W,S)$ by identifying $w\leftrightarrow wC_0$. Thus one may simultaneously think of $W$ as a group, and more geometrically as the associated Coxeter complex. \subsection{Low elements}\label{sec:1:lowelements} The \textit{base} of an inversion set is defined in terms of extreme rays of the cone of $\Phi(w)$ (see \cite{Dye:19} and \cite{DH:16}), however for our purposes the following equivalent characterisation is sufficient (see \cite[Proposition~4.6]{DH:16}). \begin{defn}\label{defn:phi1} Let $w \in W$. The \textit{base} of the inversion set $\Phi(w)$ is $$ \Phi^1(w) = \{ \beta \in \Phi^+ \mid \ell(s_{\beta}w) = \ell(w) - 1 \}. $$ \end{defn} By Proposition~\ref{prop:rootsystembasics}(4) we have $\Phi^1(w)\subseteq\Phi(w)$. For $A \subseteq \Phi^+$ let $\mathrm{cone}(A)$ be the set of all non-negative linear combinations of roots in $A$ and write $\mathrm{cone}_{\Phi}(A) = \mathrm{cone}(A) \cap \Phi^+$. The set $\Phi^1(w)$ determines the inversion set $\Phi(w)$ in the following sense. \begin{thm}\cite[Lemma 1.7]{Dye:19}\label{thm:eqcond} Let $w\in W$. Then \begin{align*} \Phi(w) = \mathrm{cone}_{\Phi}(\Phi^1(w)), \end{align*} and moreover if $A \subseteq \Phi^+$ is such that $\Phi(w) = \mathrm{cone}_{\Phi}(A)$ then $\Phi^1(w) \subseteq A$. \end{thm} \newpage In \cite{DH:16} Dyer and Hohlweg introduced the notion of an $n$-low element of a Coxeter group~$W$. \begin{defn}\label{defn:nlow} Let $n\in\mathbb{N}$. 
An element $w\in W$ is \textit{$n$-low} if $\Phi(w)=\mathrm{cone}_{\Phi}(A)$ for some $A\subseteq\mathcal{E}_n$. A $0$-low element is called \textit{low}. Let $L_n$ denote the set of all $n$-low elements, and let $L=L_0$ denote the set of low elements. Note that by Theorem~\ref{thm:eqcond} we have that $w$ is $n$-low if and only if $\Phi^1(w)\subseteq \mathcal{E}_n$. \end{defn} Let $\Theta_n:L_n\to\mathbb{E}_n$ be the map $\Theta_n(x)=\mathcal{E}_n(x)$ (introduced by Dyer and Hohlweg in \cite{DH:16}). This map is injective (see \cite[Proposition~3.26]{DH:16}), and hence $|L_n|\leq|\mathbb{E}_n|$ for all $n\in\mathbb{N}$. In \cite[Conjecture~2]{DH:16} Dyer and Hohlweg conjecture that $\Theta_n$ is a bijection for all $n\in\mathbb{N}$. The following result is useful when working with joins. \begin{prop}\cite[Proposition 2.8]{DH:16} \label{prop:inversionsetjoin} If $X \subseteq W$ is bounded, then $$ \Phi(\bigvee X) = \mathrm{cone}_{\Phi}(\bigcup_{x \in X} \Phi(x)). $$ \end{prop} Each reduced expression $w=s_1\cdots s_n$ gives rise to an ordering of the inversion set of $w$, as in Proposition~\ref{prop:rootsystembasics}(3). In particular, the ``final root'' of this ordered sequence is $\beta=s_1\cdots s_{n-1}\alpha_{s_n}=-w\alpha_{s_n}>0$ (see Proposition~\ref{prop:rootsystembasics}(3)). The set of such roots $\beta$, as the reduced expression for $w$ varies, plays an important role later in this work. \begin{defn}\label{defn:finalroots} Let $w\in W$. The set of \textit{final roots} of $w$ is $$ \Phi^0(w)=\{-w\alpha_s\mid s\in D_R(w)\}. $$ \end{defn} Note that $\beta\in\Phi^0(w)$ if and only if $s_{\beta}w=ws$ for some $s\in D_R(w)$, if and only if $\beta=-w\alpha_s>0$ for some $s\in S$. Also note that $\Phi^0(w)\subseteq \Phi^1(w)$. \subsection{Garside shadows} The notion of a \textit{Garside shadow} in a Coxeter system $(W,S)$ was introduced and investigated by Dehornoy, Dyer and Hohlweg \cite{DDH:15} and Dyer and Hohlweg in~\cite{DH:16}. 
\begin{defn} \label{def:garside_shadow} A \textit{Garside shadow} is a subset $B \subseteq W$ such that $S \subseteq B$ and \begin{enumerate} \item for $X \subseteq B$ if $w = \bigvee X$ exists, then $w \in B$; \item if $w \in B$ and $v$ is a suffix of $w$ then $v \in B$. \end{enumerate} We refer to (1) as \textit{closure under join}, and (2) as \textit{closure under taking suffixes}. \end{defn} It is clear that the intersection of any collection of Garside shadows is again a Garside shadow (see \cite[Proposition~2.2]{DH:16}) and hence there is a unique smallest Garside shadow, denoted $\widetilde{S}$. Using the finiteness of the set of elementary roots, Dyer and Hohlweg show in \cite[Theorem~1.1]{DH:16} that $\widetilde{S}$ is finite for all finitely generated Coxeter systems~$(W,S)$. If $B$ is a Garside shadow then each element $w\in W$ can be ``projected'' onto $B$ as follows. \begin{defn}\cite[Definition~2.4]{HNW:16} Let $B \subseteq W$ be a Garside shadow. The \textit{projection} of $W$ onto $B$ is the function $\pi_B:W\to B$ given by $$ \pi_B(w) = \bigvee \{ b \in B \mid b \preccurlyeq w \}. $$ Note that $\pi_B(w)\in B$ because $B$ is closed under join. \end{defn} The following important theorem was first conjectured in \cite[Conjecture~1]{DH:16}, where it was proved in the case $n=0$ (see \cite[Theorem~1.1]{DH:16}), and for all $n\in\mathbb{N}$ in the case that $W$ is affine (see \cite[Theorem~4.17]{DH:16}). Recently Dyer~\cite{Dye:21} has proved the theorem for all $n\in\mathbb{N}$ for arbitrary~$W$. \begin{thm}\label{thm:nlowgarside}\cite[Corollary~1.7]{Dye:21} Let $n\in\mathbb{N}$. The set $L_n$ of $n$-low elements is a finite Garside shadow. \end{thm} \subsection{Cone types}\label{sec:1:conetypes} The \textit{cone type} of $w\in W$ is $$ T(w) = \{ v \in W \mid \ell(wv) = \ell(w) + \ell(v) \}. $$ Thus $T(w)$ consists of all elements $v$ that ``extend'' $w$. Let $\mathbb{T}=\{T(w)\mid w\in W\}$ be the set of all cone types of $W$. 
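As a first example, which can be checked directly from the definition, the infinite dihedral group has exactly three cone types.

\begin{exa} Let $(W,S)$ be the infinite dihedral group, with $S=\{s,t\}$ and $m_{s,t}=\infty$. Every element $w\neq e$ has a unique reduced expression, alternating in $s$ and $t$. If the reduced expression for $w$ ends in $s$ then $wv$ is reduced precisely when $v=e$ or the reduced expression for $v$ begins with $t$, and hence $T(w)=T(s)=\{e,t,ts,tst,\ldots\}$. It follows that $\mathbb{T}=\{T(e),T(s),T(t)\}$, where $T(e)=W$ and $T(t)=\{e,s,st,sts,\ldots\}$. \end{exa}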
Cone types play a central role in this work. The following proposition collects some basic results. \newpage \begin{prop} \label{prop:conetypebasics} Let $x,y \in W$. The following are equivalent: \begin{enumerate} \item $\ell(x^{-1}y) = \ell(x) + \ell(y)$ \item $y \in T(x^{-1})$ \item $x\in T(y^{-1})$ \item $\Phi(x) \cap \Phi(y) = \emptyset$ \item $\Phi(x^{-1}y) = \Phi(x^{-1}) \sqcup x^{-1}\Phi(y)$. \end{enumerate} \end{prop} \begin{proof} The equivalence of (1), (2) and (3) is immediate from the definitions. For the equivalence of (1) with (4) see, for example \cite[Lemma~1.2]{BH:93}. Finally, if $\ell(x^{-1}y)=\ell(x)+\ell(y)$ then $\Phi(x^{-1}y)=\Phi(x^{-1})\sqcup x^{-1}\Phi(y)$ by Proposition~\ref{prop:rootsystembasics}(6), and conversely if $\Phi(x^{-1}y)=\Phi(x^{-1})\sqcup x^{-1}\Phi(y)$ then $\ell(x^{-1}y)=\ell(x)+\ell(y)$ because $\ell(w)=|\Phi(w)|$ for all $w\in W$, completing the proof. \end{proof} We also note the following obvious fact. \begin{lem}\label{lem:containoneway} If $x\preccurlyeq y$ then $T(y^{-1})\subseteq T(x^{-1})$. \end{lem} \begin{proof} If $w\in T(y^{-1})$ then $\Phi(y)\cap\Phi(w)=\emptyset$ (by Proposition~\ref{prop:conetypebasics}), and hence $\Phi(x)\cap\Phi(w)=\emptyset$ (as $x\preccurlyeq y$ implies that $\Phi(x)\subseteq \Phi(y)$) and hence $w\in T(x^{-1})$ (again by Proposition~\ref{prop:conetypebasics}). \end{proof} The following result gives a formula for cone types in terms of inversion sets and half-spaces. \begin{thm} \label{thm:geometry1} For $x\in W$ we have $$ T(x^{-1}) = \bigcap_{\beta \in \Phi(x)} H_{\beta}^{+}. $$ \end{thm} \begin{proof} We have $y\in T(x^{-1})$ if and only if $\Phi(x)\cap\Phi(y)=\emptyset$ (by Proposition~\ref{prop:conetypebasics}), if and only if $y^{-1}\beta>0$ for all $\beta\in\Phi(x)$, if and only if $\ell(s_{\beta}y)>\ell(y)$ for all $\beta\in \Phi(x)$ (by Proposition~\ref{prop:rootsystembasics}), if and only if $y\in\bigcap_{\beta \in \Phi(x)} H_{\beta}^{+}$. 
\end{proof} \begin{exa} To apply the formula in Theorem~\ref{thm:geometry1} to determine $T(w^{-1})$, one considers all walls of the Coxeter complex that separate $e$ and $w$ (the positive roots corresponding to these walls are the elements of $\Phi(w)$), and takes the intersection of the half-spaces containing the identity for each of these walls. Let us illustrate with an example. \begin{figure}[H] \centering \begin{tikzpicture}[scale=1] \path [fill=gray!20] (-4.33,-2.5)--(0,0)--(0.433,0.75)--(-4.33,3.5); \path [fill=gray!50] (0,0) -- (0.433,0.75) -- (0,1) -- (0,0); \path [fill=red!30] (0.866,0.5)--(2.598,1.5)--(4.33,1.5)--(4.33,0)--(1.732,0); \path [fill=red!70] (2.165,0.75)--(2.598,1.5)--(2.598,0.5); \draw(-4.33,4.5)--(4.33,4.5); \draw(-4.33,3)--(4.33,3); \draw(-4.33,1.5)--(4.33,1.5); \draw(-4.33,0)--(4.33,0); \draw(-4.33,-1.5)--(4.33,-1.5); \draw(-4.33,-3)--(4.33,-3); \draw(-4.33,-3)--(-4.33,4.5); \draw(-3.464,-3)--(-3.464,4.5); \draw(-2.598,-3)--(-2.598,4.5); \draw(-1.732,-3)--(-1.732,4.5); \draw(-.866,-3)--(-.866,4.5); \draw(0,-3)--(0,4.5); \draw(.866,-3)--(.866,4.5); \draw(1.732,-3)--(1.732,4.5); \draw(2.598,-3)--(2.598,4.5); \draw(3.464,-3)--(3.464,4.5); \draw(4.33,-3)--(4.33,4.5); \draw(-4.33,3.5)--({-3*0.866},4.5); \draw(-4.33,2.5)--({-1*0.866},4.5); \draw(-4.33,1.5)--({1*0.866},4.5); \draw(-4.33,.5)--({3*0.866},4.5); \draw(-4.33,-.5)--(4.33,4.5); \draw(-4.33,-1.5)--(4.33,3.5); \draw(-4.33,-2.5)--(4.33,2.5); \draw(-3.464,-3)--(4.33,1.5); \draw(-1.732,-3)--(4.33,.5); \draw(0,-3)--(4.33,-.5); \draw(1.732,-3)--(4.33,-1.5); \draw(3.464,-3)--(4.33,-2.5); \draw(4.33,3.5)--({3*0.866},4.5); \draw(4.33,2.5)--({1*0.866},4.5); \draw(4.33,1.5)--({-1*0.866},4.5); \draw(4.33,.5)--({-3*0.866},4.5); \draw(4.33,-.5)--(-4.33,4.5); \draw(4.33,-1.5)--(-4.33,3.5); \draw(4.33,-2.5)--(-4.33,2.5); \draw(3.464,-3)--(-4.33,1.5); \draw(1.732,-3)--(-4.33,.5); \draw(0,-3)--(-4.33,-.5); \draw(-1.732,-3)--(-4.33,-1.5); \draw(-3.464,-3)--(-4.33,-2.5); \draw(-4.33,-1.5)--(-3.464,-3); 
\draw(-4.33,1.5)--(-1.732,-3); \draw(-4.33,4.5)--(0,-3); \draw({-3*0.866},4.5)--(1.732,-3); \draw({-1*0.866},4.5)--(3.464,-3); \draw({1*0.866},4.5)--(4.33,-1.5); \draw({3*0.866},4.5)--(4.33,1.5); \draw(4.33,-1.5)--(3.464,-3); \draw(4.33,1.5)--(1.732,-3); \draw(4.33,4.5)--(0,-3); \draw({3*0.866},4.5)--(-1.732,-3); \draw({1*0.866},4.5)--(-3.464,-3); \draw({-1*0.866},4.5)--(-4.33,-1.5); \draw({-3*0.866},4.5)--(-4.33,1.5); \draw[line width=2pt](0.866,-3)--(0.866,4.5); \draw[line width=2pt](-1.732,-3)--({3*0.866},4.5); \draw[line width=2pt](3.46,-3)--(-0.866,4.5); \draw[line width=2pt](-4.33,-2.5)--(4.33,2.5); \draw[line width=2pt](-4.33,3.5)--(4.33,-1.5); \draw[line width=2pt](1.732,-3)--(1.732,4.5); \draw[line width=2pt](0,-3)--(4.33,4.5); \draw[line width=2pt](-4.33,4.5)--(4.33,-0.5); \end{tikzpicture} \caption{A cone type}\label{fig:G2examplecone} \end{figure} \noindent Let $w$ be the element shaded dark red in Figure~\ref{fig:G2examplecone}. The identity is shaded grey, and the walls separating $e$ from $w$ are shown in bold. The intersection of the corresponding positive half-spaces is shown in light grey -- this is the cone type $T=T(w^{-1})$. Note that some of the walls are ``redundant'' in the sense that the corresponding roots can be removed from the intersection in Theorem~\ref{thm:geometry1}. We will address this issue in Theorem~\ref{thm:geometry2}. The red shaded region is $X_T=\{x\in W\mid T(x^{-1})=T\}$ (see Proposition~\ref{prop:partsdescription}). Note that this is a convex region, with a unique minimal length element $g$, and moreover for all $x\in X_T$ we have $g\preccurlyeq x$. We will prove these observations in general in Corollary~\ref{cor:gateexist}. \end{exa} The \textit{cone} of $w\in W$ is $$ C(w)=\{v\in W\mid \ell(v)=\ell(w)+\ell(w^{-1}v)\}=\{v\in W\mid w\preccurlyeq v\}. $$ Note that $T(w)=w^{-1}C(w)$. The following lemma shows that joins and intersections of cones are closely related. 
\begin{lem}\label{lem:conejoin} A subset $X\subseteq W$ is bounded if and only if $\bigcap_{x\in X}C(x)\neq\emptyset$, and if $X\subseteq W$ is bounded then $C(\bigvee X)=\bigcap_{x\in X}C(x)$. \end{lem} \begin{proof} Both statements are clear from the fact that $y\in \bigcap_{x\in X}C(x)$ if and only if $y$ is an upper bound for $X$. \end{proof} We note, in passing, the following result, which superficially appears similar to Lemma~\ref{lem:conejoin} but requires a rather different proof. While we will not require this result in this paper, we record it for future reference. \begin{prop} \label{prop:joinconetype} If $X\subseteq W$ is bounded with $y=\bigvee X$ then $T(y^{-1}) =\bigcap_{x\in X} T(x^{-1})$. \end{prop} \begin{proof} The inclusion $T(y^{-1}) \subseteq\bigcap_{x\in X} T(x^{-1})$ follows from Lemma~\ref{lem:containoneway} because $x\preccurlyeq y$ for all $x\in X$. Now suppose that $w \in \bigcap_{x\in X}T(x^{-1})$. Then by Proposition~\ref{prop:conetypebasics} we have $\Phi(x) \cap \Phi(w) = \emptyset$ for all $x\in X$. We claim that $\Phi(y) \cap \Phi(w) = \emptyset$. For if there exists $\beta \in \Phi(y) \cap \Phi(w)$ then by Proposition~\ref{prop:inversionsetjoin} we have $\Phi(y)=\mathrm{cone}_{\Phi}(\bigcup_{x\in X}\Phi(x))$, and so $$ \beta = \sum c_i \beta_i $$ where $\beta_i \in \bigcup_{x\in X}\Phi(x)$ and $c_i \ge 0$. Since $w^{-1}\beta < 0$ we have $w^{-1}\beta_i < 0$ for some $\beta_i \in\bigcup_{x\in X} \Phi(x)$, and hence $\Phi(x) \cap \Phi(w)$ is non-empty for some $x\in X$, a contradiction. Thus $\Phi(y)\cap\Phi(w)=\emptyset$, and so $w\in T(y^{-1})$ by Proposition~\ref{prop:conetypebasics}. \end{proof} \subsection{Automata recognising the language of reduced words} An automaton~$\mathcal{A}$ can be viewed as a computing device for defining a language over a finite alphabet~$A$. Any string over~$A$ can be input into the automaton, which is then either accepted or rejected by $\mathcal{A}$.
The set of strings accepted by $\mathcal{A}$ is the \textit{language recognised by} $\mathcal{A}$, and any language $\mathcal{L}$ for which there exists a finite state automaton recognising $\mathcal{L}$ is called a \textit{regular} language. In this paper we are interested in automata recognising the language of reduced words in a Coxeter system~$(W,S)$. This allows for some minor simplifications to the general definition of an automaton, as explained below. We will work in the setting of $G$ being any group generated by a finite set~$S$. Let $\ell_S:G\to\mathbb{N}$ be the associated length function (defined as in the Coxeter group case). A word $(s_1,\ldots,s_n)\in S^n$ is \textit{reduced} if $\ell_S(s_1\cdots s_n)=n$. Let $\mathcal{L}(G,S)$ be the set of all reduced words (the \textit{language of reduced words} in $(G,S)$). \begin{defn} An \textit{automaton} with \textit{alphabet $S$} is a quadruple $\mathcal{A}=(Y,\mu,o,\dagger)$ where $Y$ is a set (called the \textit{state set}), $o\in Y$ is the \textit{start state}, $\dagger\notin Y$ is the \textit{dead state}, and $\mu:(Y\cup\{\dagger\})\times S\to Y\cup\{\dagger\}$ is a function (called the \textit{transition function}) such that $\mu(\dagger,s)=\dagger$ for all $s\in S$. If $|Y|<\infty$ then $\mathcal{A}$ is a \textit{finite state} automaton. The \textit{language accepted by $\mathcal{A}$} is the set of all words $(s_1,\ldots,s_n)$ such that $y_n\in Y$, where $y_0=o$ and $y_j=\mu(y_{j-1},s_j)$ for $1\leq j\leq n$. \end{defn} We will often omit $\dagger$ from the notation, and simply give the automaton as a triple $\mathcal{A}=(Y,\mu,o)$. It is helpful to think of an automaton $\mathcal{A}=(Y,\mu,o)$ as a directed graph with labelled edges. The vertex set of this graph is $Y$, and if $x,y\in Y$ with $\mu(x,s)=y$ we draw an arrow from $x$ to $y$ with label $s$. Note that the dead state $\dagger$ is not drawn, and we have $\mu(x,s)=\dagger$ if and only if there is no $s$-arrow exiting the state~$x$.
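To make the definition concrete, here is a small sketch (our own toy example, not taken from the paper) of a finite state automaton recognising the language of reduced words in the dihedral group of order~$6$, i.e. $\langle s,t\mid s^2=t^2=(st)^3=e\rangle$: the reduced words are exactly the alternating words in $s,t$ of length at most~$3$, and the state encoding below is our own choice.

```python
# A toy illustration (ours, not from the paper): an explicit finite state
# automaton recognising the language of reduced words in the dihedral group
# of order 6, where the reduced words are exactly the alternating words of
# length at most 3.

M = 3  # the order of st

# States: the start state 'o' together with pairs (last letter, length).
# The dead state (the paper's dagger) is represented by None, and the
# required rule mu(dagger, s) = dagger is the first branch below.
def mu(state, letter):
    if state is None:
        return None
    if state == 'o':
        return (letter, 1)
    last, length = state
    if letter == last or length == M:  # repeated letter, or maximal length
        return None
    return (letter, length + 1)

def accepted(word):
    """A word is accepted iff its path from 'o' never reaches the dead state."""
    state = 'o'
    for letter in word:
        state = mu(state, letter)
    return state is not None

assert accepted('') and accepted('sts') and accepted('ts')
assert not accepted('ss') and not accepted('stst')
```

The dead state is reached precisely when a letter repeats or an alternating word of maximal length is extended, matching the graph picture above in which $\mu(x,s)=\dagger$ exactly when no $s$-arrow exits~$x$.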
If $\mathcal{A}=(Y,\mu,o)$ recognises the language $\mathcal{L}(G,S)$ then the associated directed graph contains a path starting at $o$ with edge labels $(s_1,\ldots,s_n)$ if and only if $(s_1,\ldots,s_n)$ is reduced. The concepts of quotients and totally surjective morphisms are useful when comparing two automata. \begin{defn} Let $\mathcal{A}=(Y,\mu,o,\dagger)$ and $\mathcal{A}'=(Y',\mu',o',\dagger')$ be automata recognising $\mathcal{L}(G,S)$. We say that $\mathcal{A}'$ is a \textit{quotient} of $\mathcal{A}$ if there exists a function $f:Y\cup\{\dagger\}\to Y'\cup\{\dagger'\}$ such that: \begin{enumerate} \item $f(o)=o'$, $f(\dagger)=\dagger'$, and $f(Y)=Y'$; \item if $x,y\in Y$ with $\mu(x,s)=y$ then $\mu'(f(x),s)=f(y)$; \item if $x',y'\in Y'$ with $\mu'(x',s)=y'$ then there exist $x,y\in Y$ with $f(x)=x'$, $f(y)=y'$, and $\mu(x,s)=y$. \end{enumerate} We call such a function $f$ a \textit{totally surjective morphism} $f:\mathcal{A}\to\mathcal{A}'$. If, in addition, $f:Y\to Y'$ is injective then we call $f$ an \textit{isomorphism}, and we say that $\mathcal{A}$ and $\mathcal{A}'$ are \textit{isomorphic}, and write $\mathcal{A}\cong \mathcal{A}'$. \end{defn} More intuitively, condition (2) says that if $x\to_s y$ is a transition in $\mathcal{A}$ then $f(x)\to_s f(y)$ is a transition in $\mathcal{A}'$, and condition (3) says that every transition $x'\to_s y'$ in $\mathcal{A}'$ is the image under $f$ of some transition $x\to_s y$ in $\mathcal{A}$. The \textit{cone type} of $g\in G$ is $T(g)=\{h\in G\mid \ell_S(gh)=\ell_S(g)+\ell_S(h)\}$, and we write $\mathbb{T}(G,S)$ for the set of all cone types. \begin{lem}\label{lem:stateconetype} Let $\mathcal{A}=(Y,\mu,o)$ be an automaton recognising $\mathcal{L}(G,S)$. If $(s_1,\ldots,s_n)$ and $(s_1',\ldots,s_m')$ are reduced words such that the corresponding paths in the automaton end at the same state, then $T(s_1\cdots s_n)=T(s_1'\cdots s_m')$.
\end{lem} \begin{proof} Let $(t_1,\ldots,t_k)$ be a word, with $t_1,\ldots,t_k\in S$. Since the paths in $\mathcal{A}$ with edge labels $(s_1,\ldots,s_n)$ and $(s_1',\ldots,s_m')$ both end at the same state, and since $\mathcal{A}$ recognises the language $\mathcal{L}(G,S)$, we have that the word $(s_1,\ldots,s_n,t_1,\ldots,t_k)$ is accepted if and only if the word $(s_1',\ldots,s_m',t_1,\ldots,t_k)$ is accepted. Hence the result. \end{proof} The following theorem is essentially the Myhill-Nerode Theorem (see \cite[Theorem~1.2.9]{Eps:92}). We sketch a proof in our context. \begin{thm}\label{thm:MyhillNerode} Let $G$ be a group generated by a finite set~$S$. Let $\mathcal{A}(G,S)=(\mathbb{T}(G,S),\mu,T(e))$, where $\mu$ is given by (for $T\in \mathbb{T}(G,S)$ and $s\in S$) $$ \mu(T,s)=\begin{cases} T(gs)&\text{if $s\in T$ and $g\in G$ is such that $T=T(g)$}\\ \dagger&\text{if $s\notin T$}. \end{cases} $$ Then \begin{enumerate} \item $\mathcal{A}(G,S)$ is an automaton recognising $\mathcal{L}(G,S)$; \item $\mathcal{A}(G,S)$ is a quotient of every automaton recognising $\mathcal{L}(G,S)$; \item $\mathcal{L}(G,S)$ is regular if and only if $|\mathbb{T}(G,S)|<\infty$; \item if $\mathcal{L}(G,S)$ is regular then $\mathcal{A}(G,S)$ is the unique minimal (with respect to the number of states) automaton up to isomorphism recognising $\mathcal{L}(G,S)$. \end{enumerate} \end{thm} \begin{proof} It is elementary to check that if $s\in T$ and $g,g'\in G$ with $T=T(g)=T(g')$ then $T(gs)=T(g's)$, and hence $\mu$ is well defined. The proof of (1) is a simple induction on the length of the word. To prove (2), by Lemma~\ref{lem:stateconetype} if $(s_1,\ldots,s_n)$ and $(s_1',\ldots,s_m')$ are reduced words such that the corresponding paths in the automaton end at the same state~$y\in Y'$, then $T(s_1\dots s_n)=T(s_1'\cdots s_m')$. 
Thus we can define a function $f:Y'\cup\{\dagger'\}\to\mathbb{T}(G,S)\cup\{\dagger\}$ by setting $f(\dagger')=\dagger$ and $f(y)=T(s_1\cdots s_n)$, where $(s_1,\ldots,s_n)$ is any reduced word whose path in $\mathcal{A}'$ ends at~$y$, and it is straightforward to check that $f$ is a totally surjective morphism. Then (3) follows from (1) and (2) and the definition of regular languages. (4) If $\mathcal{L}(G,S)$ is regular then $\mathcal{A}(G,S)$ has finitely many states (by (3)), and for any finite state automaton $\mathcal{A}'=(Y',\mu',o')$ recognising $\mathcal{L}(G,S)$ we have $|\mathbb{T}(G,S)|\leq |Y'|$ by (2). If equality holds then the totally surjective morphism from $\mathcal{A}'$ is injective, and hence an isomorphism. \end{proof} \begin{defn} We refer to the automaton $\mathcal{A}(G,S)$ constructed in Theorem~\ref{thm:MyhillNerode} as the \textit{cone type automaton}. \end{defn} \subsection{Examples of automata recognising $\mathcal{L}(W,S)$} We now recall examples from the literature of finite state automata recognising the language $\mathcal{L}(W,S)$ of reduced words in a Coxeter group. The first construction of a finite state automaton recognising $\mathcal{L}(W,S)$ was given by Brink and Howlett in~\cite[Section 3]{BH:93} using elementary inversion sets (in fact, the automaton in~\cite{BH:93} recognises the language of lexicographically minimal reduced words in $(W,S)$). This concept was extended by Hohlweg, Nadeau, and Williams in~\cite{HNW:16}, leading to the following construction. \begin{thm}\label{thm:canonicalautomaton} (see \cite{BH:93}, \cite{Eri:94a} and \cite[Section~3.4]{HNW:16}) For $n\in\mathbb{N}$ let $\mathcal{A}_{n}=(\mathbb{E}_n,\mu,\emptyset)$, where, for $A\in\mathbb{E}_n$, $$ \mu(A,s)=\begin{cases} (\{\alpha_s\}\cup sA)\cap \mathcal{E}_n&\text{if $\alpha_s\notin A$}\\ \dagger&\text{if $\alpha_s\in A$.} \end{cases} $$ Then $\mathcal{A}_n$ is a finite state automaton recognising~$\mathcal{L}(W,S)$. \end{thm} The automaton $\mathcal{A}_n$ is called the \textit{$n$-canonical automaton}.
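As an illustrative sanity check (a sketch of our own, not part of the paper), the cone type automaton of Theorem~\ref{thm:MyhillNerode} can be computed by brute force for the finite Coxeter group of type $\mathsf{A}_2$, realised as the symmetric group on $\{0,1,2\}$ with Coxeter length equal to the number of inversions; all function and variable names below are our own.

```python
from itertools import permutations, product

# Brute-force model of the type A_2 Coxeter group: elements are permutation
# tuples, the generators s, t are the adjacent transpositions, and the
# Coxeter length is the number of inversions.
W = list(permutations(range(3)))
GENS = {'s': (1, 0, 2), 't': (0, 2, 1)}

def mult(p, q):
    # group multiplication: (p * q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def length(p):
    return sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))

def cone_type(g):
    # T(g) = {h in W : l(gh) = l(g) + l(h)}
    return frozenset(h for h in W if length(mult(g, h)) == length(g) + length(h))

# choose one representative g for each cone type T(g)
reps = {}
for g in W:
    reps.setdefault(cone_type(g), g)

def mu(T, a):
    # transition function of the cone type automaton; None is the dead state
    s = GENS[a]
    if T is None or s not in T:
        return None
    return cone_type(mult(reps[T], s))

def accepted(word):
    T = cone_type((0, 1, 2))  # start state T(e) = W
    for a in word:
        T = mu(T, a)
    return T is not None

def reduced(word):
    g = (0, 1, 2)
    for a in word:
        g = mult(g, GENS[a])
    return length(g) == len(word)

# the automaton accepts exactly the reduced words (checked up to length 4),
# and it has 6 states, one per cone type
for n in range(5):
    for word in product('st', repeat=n):
        assert accepted(word) == reduced(word)
assert len(reps) == 6
```

The count of $6=|W|$ states matches the fact, proved later in the paper for every finite Coxeter group, that distinct elements have distinct cone types.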
We sometimes call $\mathcal{A}_0$ the \textit{Brink-Howlett automaton}. By Lemma~\ref{lem:elementaryrootsbasics} the transition function of $\mathcal{A}_n$ is given by $\mu(\mathcal{E}_n(w),s)=\mathcal{E}_n(sw)$ whenever $\ell(sw)>\ell(w)$. \begin{cor}\label{cor:finiteconetypes1} Each finitely generated Coxeter system has finitely many cone types. \end{cor} \begin{proof} The fact that $\mathcal{A}_0$ is finite state (as $|\mathcal{E}|<\infty$) implies, by Theorem~\ref{thm:MyhillNerode}, that the cone type automaton $\mathcal{A}(W,S)$ is also finite state. \end{proof} To each Garside shadow~$B$ there is an associated automaton $\mathcal{A}_B=(B,\mu,e)$ (finite state if $|B|<\infty$) defined as follows. \begin{thm}\label{thm:garsideautomaton} \cite[Theorem~1.2]{HNW:16} Let $B$ be a Garside shadow, and let $\mathcal{A}_B=(B,\mu,e)$ where $$ \mu(w,s)=\begin{cases} \pi_B(sw)&\text{if $s\notin D_L(w)$}\\ \dagger&\text{if $s\in D_L(w)$.} \end{cases} $$ Then $\mathcal{A}_B$ is an automaton recognising $\mathcal{L}(W,S)$. \end{thm} It is conjectured by Hohlweg, Nadeau and Williams~\cite[Conjecture~1]{HNW:16} that the automaton $\mathcal{A}_{\widetilde{S}}$ (where $\widetilde{S}$ is the smallest Garside shadow) is the minimal automaton recognising $\mathcal{L}(W,S)$ (and hence isomorphic to the cone type automaton $\mathcal{A}(W,S)$). By \cite[Corollary~1.7]{Dye:21} the set $L_n$ of $n$-low elements forms a finite Garside shadow, and hence $\mathcal{A}_{L_n}=(L_n,\mu,e)$ is a finite state automaton recognising the language of reduced words in~$W$. It is conjectured by Dyer and Hohlweg~\cite[Conjecture~2]{DH:16} that the map $\Theta_n:L_n\to \mathbb{E}_n$ with $\Theta_n(w)=\mathcal{E}_n(w)$ is a bijection. If $(W,S)$ is such that $\Theta_n$ is a bijection, then it follows that $\mathcal{A}_n\cong \mathcal{A}_{L_n}$ (see also the discussion in \cite[\S3.6]{HNW:16}). 
We note that the results of \cite{DH:16}, \cite{HNW:16} and \cite{PY:19} imply the following, confirming~\cite[Conjecture~1]{HNW:16} in the case that $\mathcal{E}=\Phi_{\mathrm{sph}}^+$, and~\cite[Conjecture~2]{DH:16} in the case that $\mathcal{E}=\Phi_{\mathrm{sph}}^+$ and $n=0$. \begin{thm}\label{thm:conjsspherical} If $\mathcal{E}=\Phi_{\mathrm{sph}}^+$ then the automaton $\mathcal{A}_{\widetilde{S}}$ is minimal, and the map $\Theta_0:L\to\mathbb{E}$ with $\Theta_0(x)=\mathcal{E}(x)$ is bijective. \end{thm} \begin{proof} Since $L$ is a Garside shadow we have $\widetilde{S}\subseteq L$, and by \cite[Proposition~3.26]{DH:16} we have $|L|\leq|\mathbb{E}|$, and so $|\widetilde{S}|\leq|L|\leq|\mathbb{E}|$. On the other hand $|\mathbb{E}|$ is the number of states of the Brink-Howlett automaton $\mathcal{A}_0$, and by \cite[Theorem~1]{PY:19} this automaton is minimal if and only if $\mathcal{E}=\Phi_{\mathrm{sph}}^+$. Thus $|\mathbb{E}|\leq|\widetilde{S}|$ (as $\mathcal{A}_{\widetilde{S}}$ recognises $\mathcal{L}(W,S)$ by \cite[Theorem~1.2]{HNW:16}), and hence $\widetilde{S}=L$ and $|L|=|\mathbb{E}|$. \end{proof} \section{Cone types in Coxeter groups}\label{sec:conetypes} In this section we study cone types in Coxeter groups. In Section~\ref{sec:2:1} we give an explicit formula for the transitions between cone types in $\mathcal{A}(W,S)$. In Section~\ref{sec:boundaryroots} we define the \textit{boundary roots} of a cone type, and show that these roots form the minimal set of roots required to express a cone type as an intersection of half-spaces. In Section~\ref{sec:2:3} we consider the connection between containment of cone types and the property $x\preccurlyeq y$, collecting some results that will be useful later in the paper. 
\subsection{Transitions in the cone type automaton}\label{sec:2:1} The construction of the transition function in the cone type automaton $\mathcal{A}(W,S)$ in Theorem~\ref{thm:MyhillNerode} requires one to choose cone type representatives (although, ultimately, it is independent of these choices). The following lemma can be used to remove these choices (see Corollary~\ref{cor:transitions} below), and gives an iterative method of computing cone types. \begin{lem} \label{lem:conetypeevolution} Let $T\in\mathbb{T}$ and $s\in S$, and suppose that $s\in T$. Then the set $$ T'=s\{w\in T\mid \ell(sw)=\ell(w)-1\}=s(T\backslash H_{\alpha_s}^+) $$ is a cone type. Moreover, if $T=T(v)$ then $\ell(vs)=\ell(v)+1$ and $T'=T(vs)$. \end{lem} \begin{proof} Let $v\in W$ be such that $T=T(v)$. Since $s\in T$ we have $\ell(vs)=\ell(v)+1$. We claim that $ T(vs)=s\{w\in T\mid \ell(sw)=\ell(w)-1\}, $ from which the result follows. If $u\in T(vs)$ then $\ell(vsu)=\ell(vs)+\ell(u)=\ell(v)+\ell(u)+1$, and it follows that $\ell(vsu)=\ell(v)+\ell(su)$ and $\ell(su)=\ell(u)+1$. Thus the element $w=su$ satisfies $w\in T(v)=T$ and $\ell(sw)=\ell(w)-1$. Conversely, suppose that $w\in T=T(v)$ and $\ell(sw)=\ell(w)-1$. Then $$ \ell((vs)(sw))=\ell(vw)=\ell(v)+\ell(w)=\ell(v)+\ell(sw)+1=\ell(vs)+\ell(sw), $$ and so $sw\in T(vs)$. \end{proof} Lemma~\ref{lem:conetypeevolution} gives a geometric description of the transition function in the automaton $\mathcal{A}(W,S)$. \begin{cor}\label{cor:transitions} The transition function of $\mathcal{A}(W,S)=(\mathbb{T},\mu,T(e))$ is given by $$ \mu(T,s)=\begin{cases} s(T\backslash H_{\alpha_s}^+)&\text{if $s\in T$}\\ \dagger&\text{if $s\notin T$.} \end{cases} $$ \end{cor} \begin{exa} Figure~\ref{fig:evolution} illustrates the transitions $T(e)\to_s T(s)\to_t T(st)\to_u T(stu)$ in the cone type automaton of the rank~$3$ Coxeter group with $m_{s,t}=4$, $m_{t,u}=3$ and $m_{s,u}=3$.
\begin{figure}[H] \centering \subfigure[The cone type $T(e)$]{ \includegraphics[totalheight=5cm]{hyp_0}}\hspace{0.5cm} \subfigure[The cone type $T(s)$]{ \includegraphics[totalheight=5cm]{hyp_1}} \subfigure[The cone type $T(st)$]{ \includegraphics[totalheight=5cm]{hyp_2}}\hspace{0.5cm} \subfigure[The cone type $T(stu)$]{ \includegraphics[totalheight=5cm]{hyp_3}} \caption{Transitions in $\mathcal{A}(W,S)$}\label{fig:evolution} \end{figure} \end{exa} \subsection{Boundary roots}\label{sec:boundaryroots} In this section we give a geometric description of cone types, proving a more precise, and indeed optimal, version of Theorem~\ref{thm:geometry1}. The main result of this section is Theorem~\ref{thm:geometry2}, giving a formula for a cone type in terms of a minimal set of ``boundary roots'' of the cone type. One interesting consequence of this formula is another proof of Corollary~\ref{cor:finiteconetypes1} (the finiteness of $\mathbb{T}$) without directly appealing to automata theory, although the finiteness of $\mathcal{E}$ is still required in the proof. \begin{lem}\label{lem:EandPhi1} Let $x,y\in W$ and $\beta\in\Phi^+$. Suppose that $\Phi(x)\cap\Phi(y)=\{\beta\}$. Then: \begin{enumerate} \item $\beta\in\mathcal{E}$, and \item $\beta\in\Phi^1(x)\cap\Phi^1(y)$. \end{enumerate} \end{lem} \begin{proof} (1) If $\beta \notin \mathcal{E}$ then $\beta$ dominates some root $\alpha \in \Phi^{+}$ with $\alpha\neq \beta$. Since $x^{-1}\beta < 0$ and $y^{-1}\beta<0$ we have $x^{-1}\alpha < 0$ and $y^{-1}\alpha<0$ (by the definition of dominance) and hence $\alpha\in\Phi(x)\cap\Phi(y)$, a contradiction. (2) Let $y = s_1 \cdots s_n$ be a reduced expression. Since $\beta\in\Phi(y)$ we have $\beta = s_1 \cdots s_{j-1}(\alpha_{s_j})$ for some $1\leq j\leq n$ (see Proposition~\ref{prop:rootsystembasics}). Let $y' = s_1 \cdots s_j$. Then $s_{\beta}y' = y' s_j$ with $\ell(s_{\beta}y') = \ell(y') -1$.
We have $$ \ell(x^{-1}s_{\beta}y') =\ell((s_{\beta}x)^{-1}y')\le \ell(s_{\beta}x) + \ell(y'). $$ Since $\Phi(s_{\beta}y')=\Phi(y's_j)=\Phi(y')\backslash \{\beta\}$ we have $\Phi(x)\cap\Phi(s_{\beta}y')=\emptyset$, and so by Proposition~\ref{prop:conetypebasics} $$ \ell(x^{-1}s_{\beta}y') = \ell(x) + \ell(s_{\beta}y') = \ell(x) + \ell(y') - 1. $$ Thus $\ell(s_{\beta}x)\geq \ell(x^{-1}s_{\beta}y')-\ell(y')=\ell(x)-1$. On the other hand, $\ell(s_{\beta}x)\leq \ell(x)-1$ because $\beta\in\Phi(x)$ (see Proposition~\ref{prop:rootsystembasics}). Thus $\ell(s_{\beta}x)=\ell(x)-1$ and so $\beta\in\Phi^1(x)$. Interchanging the roles of $x$ and $y$ shows that $\beta \in \Phi^1(y)$ too. \end{proof} \begin{defn} \label{def:boundary_roots} Let $T$ be a cone type. The \textit{boundary roots} of $T$ are the roots $\beta \in \Phi^+$ such that there exists $w \in W$ and $s \in S$ with $w \notin T$ and $s_{\beta}w = ws \in T$. Let $\partial T$ be the set of all boundary roots of~$T$. \end{defn} The conditions $w\notin T$ and $ws\in T$ in Definition~\ref{def:boundary_roots} force $\ell(ws)=\ell(w)-1$ because if $T=T(x)$, we have $\ell(xw)<\ell(x)+\ell(w)$ and $\ell(xws)=\ell(x)+\ell(ws)$, and so $\ell(ws)=\ell(xws)-\ell(x)\leq \ell(xw)+1-\ell(x)<\ell(w)+1$. In terms of the simplicial structure of the Coxeter complex, the roots $\beta\in\partial T$ are the roots $\beta\in\Phi^+$ such that the wall $H_{\beta}$ bounds~$T$. To understand this interpretation, note that the chambers $wC_0$ and $wsC_0$ are adjacent in the Coxeter complex, and the panel (codimension~$1$ simplex) $\pi=wC_0\cap wsC_0$ lies on the wall $H_{\beta}$, and separates the chamber $wsC_0$ (which is contained in $T$) from the chamber $wC_0$ (which is not contained in $T$). For example, in Figure~\ref{fig:G2examplecone} the walls $H_{\beta}$ with $\beta\in\partial T$ are the three walls bounding~$T$. We will formalise the above interpretation in Theorem~\ref{thm:geometry2}. 
We first develop some important properties of the boundary roots. \begin{thm} \label{thm:boundaryroots} Let $T$ be a cone type. If $T=T(x^{-1})$ then $\beta\in\partial T$ if and only if there exists $w\in W$ with $$ \Phi(x)\cap\Phi(w)=\{\beta\}. $$ Moreover if $\beta\in\partial T$ then there exists $w\in W$, independent of $x$, such that $\Phi(x)\cap\Phi(w)=\{\beta\}$ whenever $T=T(x^{-1})$. \end{thm} \begin{proof} Let $\beta\in\partial T$. Thus there exist $w \in W$ and $s \in S$ with $w \notin T$ and $s_{\beta} w = ws \in T$, and necessarily $\ell(ws)=\ell(w)-1$. Let $x \in W$ with $T(x^{-1}) = T$. Since $w\notin T$ and $ws\in T$ we have $\Phi(x) \cap \Phi(w) \neq \emptyset$ and $\Phi(x) \cap \Phi(ws) = \emptyset$ (by Proposition~\ref{prop:conetypebasics}). Since $\ell(ws)=\ell(w)-1$ and $s_{\beta}w=ws$ we have $\Phi(ws)=\Phi(w)\backslash\{-w\alpha_s\}$ and $\beta=-w\alpha_s$, and thus $\Phi(x)\cap\Phi(w)=\{\beta\}$ (with $w$ independent of the particular $x\in W$ with $T=T(x^{-1})$). Suppose that $T=T(x^{-1})$ and that there is $w \in W$ with $\Phi(x) \cap \Phi(w) = \{ \beta \}$. Let $w = s_1 \cdots s_n$ be a reduced expression, and let $1 \le j \le n$ be such that $\beta = s_1 \cdots s_{j-1}\alpha_{s_j}$ (by Proposition~\ref{prop:rootsystembasics}). Let $v = s_1 \cdots s_{j}$. Then $vs_j = s_{\beta} v$ and we have $\Phi(x) \cap \Phi(v) = \{ \beta \}$ and $\Phi(x) \cap \Phi(vs_j) = \emptyset$. Thus $v\notin T$ and $vs_j\in T$ (by Proposition~\ref{prop:conetypebasics}), and so $\beta \in \partial T$. \end{proof} \newpage We immediately have the following corollary: \begin{cor}\label{cor:boundaryroots} Let $T$ be a cone type, with $T=T(x^{-1})$. Then $|\partial T|<\infty$ and $\partial T\subseteq \Phi^1(x)\cap\mathcal{E}(x)$. In particular, all boundary roots of $T$ are elementary. \end{cor} \begin{proof} We have $|\partial T|<\infty$ by Theorem~\ref{thm:boundaryroots}, as $\partial T\subseteq \Phi(x)$ whenever $T=T(x^{-1})$.
If $\beta\in\partial T$ and $T=T(x^{-1})$ then by Theorem~\ref{thm:boundaryroots} there exists $w\in W$ with $\{\beta\}=\Phi(x)\cap \Phi(w)$. The result follows from Lemma~\ref{lem:EandPhi1}. \end{proof} The main theorem of this section is as follows. \begin{thm} \label{thm:geometry2} If $T$ is a cone type then $$ T = \bigcap_{\beta\in\partial T} H_{\beta}^+. $$ Moreover, no root can be removed from this intersection (in the sense that if a root is omitted then the equality becomes strict containment). \end{thm} \begin{proof} Let $x$ be any element with $T(x^{-1}) = T$. By Corollary~\ref{cor:boundaryroots} we have $\partial T \subseteq \Phi(x)$ and thus by Theorem~\ref{thm:geometry1} we have \begin{align} \label{eq:contain} T=T(x^{-1}) = \bigcap_{\beta \in \Phi(x)} H_{\beta}^{+} \subseteq \bigcap_{\beta \in \partial T} H_{\beta}^{+}. \end{align} Now suppose that there exists $$ v \in \big( \bigcap_{\beta \in \partial T} H_{\beta}^{+} \big) \backslash T. $$ Let $v = s_1 \cdots s_n$ be a reduced expression. Since $v \notin T$ we have $\ell(x^{-1}v) < \ell(x^{-1}) + \ell(v)$ and so there is an index $0 \le j < n$ such that $\ell(x^{-1}s_1 \cdots s_j) = \ell(x^{-1}) + j$ and $\ell(x^{-1}s_1 \cdots s_{j+1}) < \ell(x^{-1}) + j+1$. Let $w = s_1 \cdots s_{j+1}$ and let $s = s_{j+1}$. Then $w \notin T$ and $ws \in T$ and writing $\beta = -w\alpha_s > 0$ we have $s_{\beta} w = ws$. Thus $\beta \in \partial T$. Since $\beta\in\Phi(w)$ (as $w^{-1}\beta=-\alpha_s<0$) and $\Phi(w)\subseteq \Phi(v)$ (as $w\preccurlyeq v$) we have $v\in H_{\beta}^-$, a contradiction. Thus equality holds in~(\ref{eq:contain}). Now suppose $\beta_0 \in \partial T$ is omitted from the intersection, and let $ w \in W$, $s \in S$ be such that $w \notin T$ and $s_{\beta_0}w = ws \in T$.
Since $\Phi(ws) = \Phi(w) \backslash \{ \beta_0 \}$ and $ws \in T$ we have $$ w \in \bigcap_{\beta \in \partial T \setminus \{ \beta_0 \}} H_{\beta}^{+}, $$ and so the right hand side strictly contains $T$ (as $w\notin T$). \end{proof} While Theorem~\ref{thm:geometry2} gives the most precise formula for the cone type (that is, with no redundancies in the intersection), the following corollary collects various other useful formulae for the cone type. \begin{cor} \label{cor:geometry3} Let $T$ be a cone type. If $T=T(x^{-1})$ then $$ T = \bigcap_{\Phi(x)} H_{\beta}^+ = \bigcap_{\mathcal{E}(x)}H_{\beta}^+ = \bigcap_{\Phi^1(x)}H_{\beta}^+ = \bigcap_{\Phi^1(x) \cap \mathcal{E}(x)} H_{\beta}^+ = \bigcap_{\partial T} H_{\beta}^+. $$ \end{cor} \begin{proof} By Corollary~\ref{cor:boundaryroots} we have $\partial T \subseteq \Phi^1(x) \subseteq \Phi(x)$ and hence by Theorems~\ref{thm:geometry1} and~\ref{thm:geometry2} we have $$ T = \bigcap_{\Phi(x)} H_{\beta}^+ \subseteq \bigcap_{\Phi^1(x)} H_{\beta}^+ \subseteq \bigcap_{\partial T} H_{\beta}^+ = T, $$ and so equality holds throughout. By Corollary~\ref{cor:boundaryroots} we also have $\partial T \subseteq \mathcal{E}(x)$, and so $$ T = \bigcap_{\Phi(x)} H_{\beta}^+ \subseteq \bigcap_{\mathcal{E}(x)} H_{\beta}^+ \subseteq \bigcap_{\partial T} H_{\beta}^+ = T, $$ and so equality holds throughout. \end{proof} \newpage The formulae in Corollary~\ref{cor:geometry3} give another proof of Corollary~\ref{cor:finiteconetypes1}, independent of automata theory, as follows. \begin{cor}\label{cor:finiteconetypes2} Each finitely generated Coxeter system has finitely many cone types. \end{cor} \begin{proof} The result follows from the formula $$ T(x^{-1})=\bigcap_{\beta\in\mathcal{E}(x)}H_{\beta}^+ $$ and the fact that there are only finitely many elementary roots. \end{proof} In Proposition~\ref{prop:conetypebasics} we listed some equivalences to the statement $y\in T(x^{-1})$. We now record some further equivalences. 
\begin{cor} \label{cor:conetypeequivalences} Let $x, y \in W$. The following are equivalent. \begin{enumerate} \item $y\in T(x^{-1})$; \item $\mathcal{E}(x)\cap\mathcal{E}(y)=\emptyset$; \item $\Phi^1(x)\cap\Phi^1(y)=\emptyset$; \item $\partial T(x^{-1})\cap \Phi(y)=\emptyset$. \end{enumerate} \end{cor} \begin{proof} Using the formulae in Corollary~\ref{cor:geometry3} we have $y\in T(x^{-1})$ if and only if $y\in H_{\beta}^+$ for all $\beta\in\mathcal{E}(x)$, if and only if $\ell(s_{\beta}y)>\ell(y)$ for all $\beta\in\mathcal{E}(x)$, if and only if $\beta\notin \Phi(y)$ for all $\beta\in\mathcal{E}(x)$. Thus (1) and (2) are equivalent. Similarly (1) and (3) are equivalent, and (1) and (4) are equivalent. \end{proof} \begin{rem} The following example shows that each formula in Corollary~\ref{cor:geometry3}, except for the boundary root formula, may have redundancies. Consider $(W,S)$ of type $\tilde{\mathsf{B}}_2$, with $m_{s,t}=4$, $m_{t,u}=4$, and $m_{s,u}=2$. Consider the element $x=tus$. Let $T=T(x^{-1})$. We have $\partial T=\{\alpha_s,\alpha_u\}$, while $\Phi^1(x)=\mathcal{E}(x)=\{\alpha_s,\alpha_u,su\alpha_t\}$ (see Figure~\ref{fig:B2partitions}). \end{rem} Later in this paper we will be interested in the sets $$ X_T=\{x\in W\mid T(x^{-1})=T\},\quad\text{for $T\in\mathbb{T}$}. $$ To obtain a formula for $X_T$ as an intersection of half-spaces, we introduce the \textit{internal roots} of a cone type. \begin{defn} Let $T$ be a cone type. A root $\beta\in\Phi^+$ is an \textit{internal root} of $T$ if there exists $w\in T$ with $\beta\in\Phi(w)$. Let $\mathrm{Int}(T)$ denote the set of all internal roots of $T$. Thus $\mathrm{Int}(T)=\bigcup_{w\in T}\Phi(w)$. \end{defn} Geometrically, $\mathrm{Int}(T)$ is the set of roots $\beta\in\Phi^+$ such that the wall $H_{\beta}$ separates two elements of $T$. To see this, note that if $\beta\in\mathrm{Int}(T)$ then $\beta\in \Phi(w)$ for some $w\in T$, and so $\beta$ separates $e\in T$ and $w\in T$. 
Conversely, if $\beta\in\Phi^+$ and $H_{\beta}$ separates elements $w,v\in T$ then we may assume $w\in H_{\beta}^-$ (and then $v\in H_{\beta}^+$) and so $\beta\in \Phi(w)$. \begin{thm}\label{thm:conetypeprojection} For $T\in\mathbb{T}$ we have $$ X_T=\bigg(\bigcap_{\beta\in\partial T}H_{\beta}^-\bigg)\cap\bigg(\bigcap_{\beta\in\mathrm{Int}(T)}H_{\beta}^+\bigg). $$ \end{thm} \begin{proof} Let $Y$ denote the right hand side of the equation in the statement of the theorem. Suppose that $x\in X_T$. Thus $T(x^{-1})=T$, and so $\partial T\subseteq\Phi(x)$ (by Corollary~\ref{cor:boundaryroots}) and so $x\in H_{\beta}^-$ for all $\beta\in\partial T$. If $\beta\in \mathrm{Int}(T)$ then $\beta\in\Phi(w)$ for some $w\in T$, and since $\Phi(x)\cap\Phi(w)=\emptyset$ (by Proposition~\ref{prop:conetypebasics}) we have $\beta\notin\Phi(x)$, and so $x\in H_{\beta}^+$ for all $\beta\in\mathrm{Int}(T)$. Hence $X_T\subseteq Y$. Conversely, suppose that $y\in Y$. We claim that $T(y^{-1})=T$. On the one hand, if there exists $w\in T$ with $w\notin T(y^{-1})$ then $\Phi(w)\cap\Phi(y)\neq\emptyset$, and for any $\beta\in\Phi(w)\cap\Phi(y)$ we have $\beta\in\mathrm{Int}(T)$ (as $\beta\in\Phi(w)$ and $w\in T$) and $y\in H_{\beta}^-$ (as $\beta\in\Phi(y)$), a contradiction. Thus $T\subseteq T(y^{-1})$. On the other hand, if $w\notin T$ then by Theorem~\ref{thm:geometry2} there is $\beta\in\partial T$ with $w\in H_{\beta}^-$, and so $\beta\in\partial T\cap \Phi(w)$. Since $y\in Y$ we have $y\in H_{\beta}^-$, and so $\beta\in\Phi(y)$. Thus $\Phi(y)\cap\Phi(w)\neq\emptyset$, and so $w\notin T(y^{-1})$. Thus $T(y^{-1})\subseteq T$, completing the proof. \end{proof} Note that the intersection in Theorem~\ref{thm:conetypeprojection} may be over an infinite set of roots, as $\mathrm{Int}(T)$ may be infinite. We will show in Corollary~\ref{cor:finiteintersection} that in fact each set $X_T$ can be expressed as an intersection of finitely many half-spaces. 
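As a brute-force illustration (a sketch of our own, not part of the paper), one can verify Theorem~\ref{thm:conetypeprojection} directly in the finite Coxeter group of type $\mathsf{A}_2$, computing boundary roots via the criterion of Theorem~\ref{thm:boundaryroots}; positive roots are encoded as pairs $(i,j)$ with $i<j$, and all names below are our own.

```python
from itertools import permutations

# Type A_2 realised as permutations of {0, 1, 2}; the positive root
# beta_(i,j) (i < j) lies in the inversion set Phi(w) iff w^{-1}(i) > w^{-1}(j).
N = 3
W = list(permutations(range(N)))
POS = [(i, j) for i in range(N) for j in range(i + 1, N)]

def mult(p, q):
    return tuple(p[q[i]] for i in range(N))

def inv(p):
    q = [0] * N
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

def length(p):
    return sum(p[i] > p[j] for i in range(N) for j in range(i + 1, N))

def Phi(w):
    wi = inv(w)
    return {(i, j) for (i, j) in POS if wi[i] > wi[j]}

def cone_type(g):
    return frozenset(h for h in W if length(mult(g, h)) == length(g) + length(h))

for x in W:
    T = cone_type(inv(x))
    X_T = {y for y in W if cone_type(inv(y)) == T}
    # boundary roots, via the criterion Phi(x) cap Phi(w) = {beta}
    boundary = {b for b in POS if any(Phi(x) & Phi(w) == {b} for w in W)}
    # internal roots: the union of Phi(w) over w in T
    interior = set().union(*(Phi(w) for w in T))
    # the half-space formula: X_T = (cap over boundary of H^-) cap (cap over interior of H^+)
    formula = {y for y in W if boundary <= Phi(y) and not (interior & Phi(y))}
    assert X_T == formula
```

Here $y\in H_\beta^-$ is encoded as $\beta\in\Phi(y)$, and the loop checks the equality of the two descriptions of $X_T$ for every cone type of this small group.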
\subsection{On containment of cone types}\label{sec:2:3} In this section we consider the connection between containment of cone types $T(y^{-1})\subseteq T(x^{-1})$ and the property $x\preccurlyeq y$. We saw in Lemma~\ref{lem:containoneway} that if $x\preccurlyeq y$ then $T(y^{-1})\subseteq T(x^{-1})$. The converse implication is obviously false in general. For example, if $x$ and $y$ are elements in the red shaded region of Figure~\ref{fig:G2examplecone}, then $T(x^{-1})=T(y^{-1})$, however of course $x\preccurlyeq y$ may not occur. However, with some constraints on~$x$, the reverse implication does hold. The following theorem shows that $x\in W_J$, with $J\subseteq S$, is a sufficient condition. Later in this paper we conjecture a generalisation of this result (see Conjecture~\ref{conj:orderisomorphism}). \begin{thm} \label{thm:parabolicconetype} Let $x \in W_J$ and $y \in W$ with $J\subseteq S$ spherical. Then $T(y^{-1}) \subseteq T(x^{-1})$ if and only if $x\preccurlyeq y$. \end{thm} \begin{proof} By Lemma~\ref{lem:containoneway}, we only need to show that if $T(y^{-1}) \subseteq T(x^{-1})$ then $x\preccurlyeq y$. Hence suppose that $T(y^{-1})\subseteq T(x^{-1})$, with $x\in W_J$. Write $y=uv$ as in (\ref{eq:WJdecomposition}), with $u\in W_J$ and $v\in W^J$. Let $z=uw_J$, with $w_J$ the longest element of $W_J$. Since $v\in W^J$ we have \begin{align*} \ell(y^{-1}z)&=\ell(v^{-1}w_J)=\ell(v)+\ell(w_J)=\ell(y)-\ell(u)+\ell(w_J)=\ell(y)+\ell(z), \end{align*} and so $z\in T(y^{-1})$. Thus $z\in T(x^{-1})$. Let $w=w_Jz^{-1}x\in W_J$, and note that $xw^{-1}v=y$. We claim that \begin{align}\label{eq:prefix} \ell(xw^{-1}v)=\ell(x)+\ell(w^{-1}v), \end{align} from which the desired result $x\preccurlyeq y$ follows. 
To prove~(\ref{eq:prefix}), we have \begin{align*} \ell(xw^{-1}v)&=\ell(xw^{-1})+\ell(v)&&\text{as $v\in W^J$}\\ &=\ell(w_J)-\ell(z)+\ell(v)&&\text{as $xw^{-1}=zw_J$}\\ &=\ell(w_J)-(\ell(x^{-1}z)-\ell(x))+\ell(v)&&\text{as $z\in T(x^{-1})$}\\ &=\ell(x)+\ell(w)+\ell(v)&&\text{as $w=w_Jz^{-1}x$ and $z^{-1}x\in W_J$}\\ &=\ell(x)+\ell(w^{-1}v)&&\text{as $v\in W^J$ and $w\in W_J$}, \end{align*} completing the proof. \end{proof} \begin{cor} Let $W$ be a finite Coxeter group. Then for $x,y \in W$ we have $T(y^{-1}) \subseteq T(x^{-1})$ if and only if $x\preccurlyeq y$. \end{cor} \begin{proof} The result follows by letting $W = W_J$ in the statement of Theorem~\ref{thm:parabolicconetype}. \end{proof} \subsection{Cone types in finite Coxeter groups} We can describe cone types in a finite Coxeter group very precisely, and this description will be useful in conjunction with Theorem~\ref{thm:parabolicconetype} in later sections. \begin{prop} \label{prop:paraboliccone} Let $W$ be a finite Coxeter group and let $w_0$ be the longest element of $W$. Then for $x \in W$ we have $$ T(x^{-1}) = \{ w \in W \mid w \preccurlyeq xw_0 \}. $$ In particular, for all $x,y \in W$ we have $T(x^{-1}) = T(y^{-1})$ if and only if $x = y$. \end{prop} \begin{proof} If $w\in T(x^{-1})$ then $\ell(x^{-1}w)=\ell(x)+\ell(w)$, and so $$ \ell(w^{-1}xw_0)=\ell(w_0)-\ell(w^{-1}x)=\ell(w_0)-\ell(x)-\ell(w)=\ell(xw_0)-\ell(w), $$ and so $w\preccurlyeq xw_0$. Conversely, if $w\preccurlyeq xw_0$ then $$ \ell(w^{-1}xw_0)=\ell(xw_0)-\ell(w)=\ell(w_0)-\ell(x)-\ell(w), $$ but also $\ell(w^{-1}xw_0)=\ell(w_0)-\ell(w^{-1}x)=\ell(w_0)-\ell(x^{-1}w)$, and so $\ell(x^{-1}w)=\ell(x)+\ell(w)$, giving $w\in T(x^{-1})$. In particular, if $T(x^{-1})=T(y^{-1})$ then since $xw_0\in T(x^{-1})$ we have $xw_0\preccurlyeq yw_0$, and similarly $yw_0\preccurlyeq xw_0$. Hence $xw_0=yw_0$ and so $x=y$. \end{proof} \newpage \begin{lem}\label{lem:sphericalprefix1} Let $x\in W$ and $J\subseteq S$ with $J$ spherical. 
Write $x=uv$ with $u\in W_J$ and $v\in W^J$. If $w\in W_J$ with $w\in T(u^{-1})$ then $w\in T(x^{-1})$. \end{lem} \begin{proof} Since $u^{-1}w\in W_J$, $v\in W^J$, and $w\in T(u^{-1})$, we have $$ \ell(x^{-1}w)=\ell(v^{-1}u^{-1}w)=\ell(v)+\ell(u^{-1}w)=\ell(v)+\ell(u)+\ell(w), $$ and the result follows since $\ell(v)+\ell(u)=\ell(uv)=\ell(x)$. \end{proof} \begin{cor} \label{cor:distinguishconetypes} Let $x,y \in W$ and $J \subseteq S$ with $J$ spherical. Write $x = uv$ and $y=u'v'$ with $u, u' \in W_J$ and $v, v' \in W^J$. If $u \neq u'$, then $T(x^{-1}) \neq T(y^{-1})$. \end{cor} \begin{proof} Since $u \neq u'$, by Proposition~\ref{prop:paraboliccone} there is $w\in W_J$ with $w \in T(u^{-1}) \setminus T(u'^{-1})$. By Lemma~\ref{lem:sphericalprefix1} we have $w \in T(x^{-1})$. On the other hand, if $w\in T(y^{-1})$ then $\ell(u')+\ell(v')+\ell(w)=\ell(y^{-1}w)=\ell(v'^{-1}u'^{-1}w)\leq\ell(v')+\ell(u'^{-1}w)$, forcing $\ell(u'^{-1}w)=\ell(u')+\ell(w)$ and hence $w\in T(u'^{-1})$, a contradiction. Thus $w\in T(x^{-1})\setminus T(y^{-1})$, and so $T(x^{-1})\neq T(y^{-1})$. \end{proof} We note, in passing, the following corollary, which shows that the minimal automaton recognising the language of reduced words in a finite Coxeter group~$W$ is just the ``trivial'' automaton with states $W$ and transition function $\mu(w,s)=ws$ if $\ell(ws)=\ell(w)+1$ (note that this gives an automaton recognising $\mathcal{L}(W,S)$ for all Coxeter systems, although of course it is finite state if and only if $W$ is finite). \begin{cor} If $W$ is a finite Coxeter group then $|\mathcal{A}(W,S)| = |W|$. \end{cor} \begin{proof} By Proposition~\ref{prop:paraboliccone} we have $|\mathcal{A}(W,S)|=|\mathbb{T}| = |W|$. \end{proof} \section{Regular partitions}\label{sec:regularpartitions} In this section we introduce one of the main concepts of this paper: the notion of a ``regular partition'' of~$W$. This concept has its genesis in the Ph.D. thesis of P. Headley, in his study of the classical Shi arrangement (see \cite[Lemma~V.5]{Hea:94}). We now give an outline of the results of this section. We begin in Section~\ref{sec:partitions} by setting up appropriate language for working with the partially ordered set of all partitions of~$W$.
We then introduce certain special partitions of $W$ that will play an important role in the paper, including the \textit{cone type partition} $\mathscr{T}$, \textit{Garside shadow partitions}, and the \textit{$n$-Shi partitions} associated to $n$-elementary inversion sets. In Section~\ref{subsec:regularpartitions} we define the notion of a regular partition, and exhibit some of the main examples of such partitions. We show in Theorem~\ref{thm:regularautomaton} that each such partition gives rise to an automaton recognising $\mathcal{L}(W,S)$, and in Theorem~\ref{thm:converseautomaton} we show that every automaton recognising $\mathcal{L}(W,S)$ satisfying a mild hypothesis arises in such a way. In Section~\ref{sec:regularcompletion} we study the partially ordered set $\mathscr{P}_{\mathrm{reg}}(W)$ of all regular partitions of $W$. We show in Theorem~\ref{thm:regularlattice2} that $\mathscr{P}_{\mathrm{reg}}(W)$ is a complete lattice with bottom element being the cone type partition~$\mathscr{T}$ (note the convention~(\ref{eq:convention})). This in turn allows us to define the \textit{regular completion} $\widehat{\mathscr{P}}$ of an arbitrary partition $\mathscr{P}$ of $W$ (this is the ``minimal'' regular partition refining $\mathscr{P}$). In Section~\ref{sec:simplerefinements} we develop an algorithm, based on ``simple refinements'', for producing the regular completion of a partition, and provide natural sufficient conditions for this algorithm to terminate in finite time. As a consequence, we prove in Corollary~\ref{cor:regularlattice1} that $\mathscr{T}=\widehat{\mathscr{D}}$, where $\mathscr{D}$ is the partition of $W$ according to left descent sets. This characterisation of $\mathscr{T}$ will be crucial in proving the main result of this paper (Theorem~\ref{thm:main1}). \subsection{Partitions of $W$}\label{sec:partitions} Various partitions of $W$ play an important role in this work, and we begin by recalling some terminology. 
A \textit{partition} of $W$ is a set $\mathscr{P}$ of subsets of $W$ such that $\bigcup_{P\in \mathscr{P}}P=W$ and $P\cap P'=\emptyset$ for $P,P'\in\mathscr{P}$ with $P\neq P'$. The sets $P\in\mathscr{P}$ are called the \textit{parts} of the partition. Let $\mathscr{P}(W)$ denote the set of all partitions of $W$. If $\mathscr{P}$ and $\mathscr{P}'$ are partitions of $W$ such that each part of $\mathscr{P}'$ is contained in some part of $\mathscr{P}$ then we say that $\mathscr{P}'$ is a \textit{refinement} of $\mathscr{P}$. We also say that $\mathscr{P}'$ is \textit{finer} than $\mathscr{P}$, and that $\mathscr{P}$ is \textit{coarser} than $\mathscr{P}'$. We write \begin{align}\label{eq:convention} \mathscr{P}\leq\mathscr{P}'\quad\text{if $\mathscr{P}'$ is a refinement of $\mathscr{P}$}. \end{align} Note that this is dual to the standard convention. Our choice here is motivated by the fact that we are often interested in the number of parts of a partition, and a partition with few parts is best considered to be ``small''. Thus, the partially ordered set $(\mathscr{P}(W),\leq)$ has top element $\mathbf{1}=\{\{w\}\mid w\in W\}$ (the partition into singletons) and bottom element $\mathbf{0}=\{W\}$ (the partition with one part). A \textit{covering} of $W$ is a set $\mathbb{X}$ of subsets of $W$ with $\bigcup_{X\in\mathbb{X}}X=W$. Each covering of $W$ induces a partition of $W$, as follows. \begin{defn}\label{defn:covering} Let $\mathbb{X}$ be a covering of $W$. Let $\sim_{\mathbb{X}}$ be the equivalence relation on $W$ given by $x\sim_{\mathbb{X}}y$ if and only if $\{X\in\mathbb{X}\mid x\in X\}=\{X\in\mathbb{X}\mid y\in X\}$ (that is, $x\in X$ if and only if $y\in X$, for $X\in\mathbb{X}$). The \textit{partition induced by $\mathbb{X}$} is the partition $\mathscr{X}$ of $W$ into $\sim_{\mathbb{X}}$ equivalence classes. Thus elements $x,y\in W$ lie in the same part of $\mathscr{X}$ if and only if they lie in precisely the same elements of $\mathbb{X}$. 
\end{defn} Important examples of partitions are provided by hyperplane arrangements. In our general setting, a hyperplane arrangement is most appropriately thought of as a partition of $W$ induced by a set of roots, as follows. Let $\Lambda\subseteq\Phi^+$ be nonempty. The \textit{partition of $W$ induced by $\Lambda$} is the partition $\mathscr{H}(\Lambda)$ induced by the covering $\{H_{\beta}^+,H_{\beta}^-\mid \beta\in\Lambda\}$ (as in Definition~\ref{defn:covering}). We will refer to such partitions as \textit{hyperplane partitions} to emphasise this connection to traditional hyperplane arrangements. We now provide the main examples of partitions that will appear in this work. Recall the definition of $C(w)$ from Section~\ref{sec:1:conetypes}, and recall that $\Pi=\{\alpha_s\mid s\in S\}$. Recall that $\mathbb{T}$ denotes the set of all cone types. \begin{defn}\label{defn:partitions} Let $n\in\mathbb{N}$, and let $B$ be a Garside shadow. \begin{enumerate} \item The \textit{cone type partition} is the partition $\mathscr{T}$ induced by the covering $\mathbb{T}$.\item The \textit{Garside partition} associated to $B$ is the partition $\mathscr{G}_B$ induced by the covering $\{C(b)\mid b\in B\}$ (this is a covering as $e\in B$). \item The \textit{$n$-Shi partition} is the hyperplane partition $\mathscr{S}_n=\mathscr{H}(\mathcal{E}_n)$. \item The \textit{$S$-partition} is the hyperplane partition $\mathscr{D}=\mathscr{H}(\Pi)$. \item The \textit{spherical partition} is the hyperplane partition $\mathscr{J}=\mathscr{H}(\Phi_{\mathrm{sph}})$. \end{enumerate} \end{defn} We now give a more concrete description of the parts of each of the above partitions. Recall that $\mathbb{E}_n$ denotes the set of all $n$-elementary inversion sets, and $\mathbb{S}$ denotes the set of all spherical inversion sets. \begin{prop}\label{prop:partsdescription} Let $B$ be a Garside shadow and let $n\in\mathbb{N}$. 
\begin{enumerate} \item The parts of the cone type partition $\mathscr{T}$ are the sets $$ X_T=\{w\in W\mid T(w^{-1})=T\},\quad\text{with $T\in\mathbb{T}$}. $$ \item If $B$ is a Garside shadow, the parts of $\mathscr{G}_B$ are the sets $$ \pi_B^{-1}(b)=\{w\in W\mid \pi_B(w)=b\},\quad\text{with $b\in B$}. $$ \item The parts of the $n$-Shi partition $\mathscr{S}_n$ are the sets $$ \{w\in W\mid \mathcal{E}_n(w)=E\},\quad\text{with $E\in\mathbb{E}_n$}. $$ \item The parts of the $S$-partition $\mathscr{D}$ are the sets $$ D_L^{-1}(J)=\{w\in W\mid D_L(w)=J\},\quad\text{with $J\subseteq S$ spherical}. $$ \item The parts of the spherical partition $\mathscr{J}$ are the sets $$ \{w\in W\mid \Phi_{\mathrm{sph}}(w)=\Sigma\},\quad\text{for $\Sigma\in\mathbb{S}$}. $$ \end{enumerate} In particular, $|\mathscr{T}|=|\mathbb{T}|<\infty$, $|\mathscr{G}_B|=|B|$, $|\mathscr{S}_n|=|\mathbb{E}_n|<\infty$, and $|\mathscr{D}|,|\mathscr{J}|<\infty$. \end{prop} \begin{proof} (1) If $u\sim_{\mathbb{T}}v$ then for all $w\in W$ we have $u\in T(w^{-1})$ if and only if $v\in T(w^{-1})$. Thus, by Proposition~\ref{prop:conetypebasics}, for all $w\in W$ we have $w\in T(u^{-1})$ if and only if $w\in T(v^{-1})$, and hence $T(u^{-1})=T(v^{-1})$. So $u,v\in X_T$, where $T=T(u^{-1})$. Conversely, suppose that $u,v\in X_T$ for some $T\in\mathbb{T}$. Thus $T(u^{-1})=T(v^{-1})=T$. If $u\in T(w^{-1})$ then by Proposition~\ref{prop:conetypebasics} we have $w\in T(u^{-1})=T(v^{-1})$, and so again by Proposition~\ref{prop:conetypebasics} we have $v\in T(w^{-1})$. Thus $u\sim_{\mathbb{T}}v$. Thus $|\mathscr{T}|=|\mathbb{T}|$, which is finite by Corollary~\ref{cor:finiteconetypes1}. (2) Let $P$ be a part of $\mathscr{G}_B$ and let $u,v\in P$. Then for all $b\in B$ we have $u\in C(b)$ if and only if $v\in C(b)$, and so $b\preccurlyeq u$ if and only if $b\preccurlyeq v$, and so $\pi_B(u)=\pi_B(v)$. Conversely, suppose that $u,v\in W$ with $\pi_B(u)=\pi_B(v)$. 
If $b\in B$ and $u\in C(b)$ then $b\preccurlyeq \pi_B(u)=\pi_B(v)\preccurlyeq v$, and hence $v\in C(b)$. (3) Let $P$ be a part of $\mathscr{S}_n$, and let $u,v\in P$ and $\beta\in\mathcal{E}_n$. Then $\beta\in\mathcal{E}_n(u)$ if and only if $\ell(s_{\beta}u)<\ell(u)$, if and only if $u\in H_{\beta}^-$, if and only if $v\in H_{\beta}^-$ (from the definition of $\mathscr{S}_n$, using $u,v\in P$), if and only if $\ell(s_{\beta}v)<\ell(v)$, if and only if $\beta\in \mathcal{E}_n(v)$. Thus $\mathcal{E}_n(u)=\mathcal{E}_n(v)$. Conversely if $u,v\in W$ with $\mathcal{E}_n(u)=\mathcal{E}_n(v)$, and if $\beta\in\mathcal{E}_n$ and $\epsilon\in\{-,+\}$, then $u\in H_{\beta}^{\epsilon}$ if and only if $v\in H_{\beta}^{\epsilon}$, and so $u$ and $v$ lie in the same part of $\mathscr{S}_n$. Thus $|\mathscr{S}_n|=|\mathbb{E}_n|$, which is finite by Corollary~\ref{cor:Enfinite}. (4) Let $P$ be a part of $\mathscr{D}$, and let $u,v\in P$. For $s\in S$ we have $u\in H_{\alpha_s}^-$ if and only if $v\in H_{\alpha_s}^-$, and so $s\in D_L(u)$ if and only if $s\in D_L(v)$, and so $D_L(u)=D_L(v)$. Conversely, let $u,v\in W$ with $J=D_L(u)=D_L(v)$. If $s\in J$ then $u,v\in H_{\alpha_s}^-$, and if $s\in S\backslash J$ then $u,v\in H_{\alpha_s}^+$, and hence $u$ and $v$ are in the same part of $\mathscr{D}$. Finally, recall that descent sets are always spherical subsets (see \cite[Proposition~2.17]{AB:08}). (5) This is similar to (3). \end{proof} \begin{exa} Figure~\ref{fig:hyperbolicconetype} shows a cone type $T$ (shaded grey), and the corresponding set $X_T$ (shaded red). 
\begin{figure}[H] \centering \includegraphics[totalheight=8cm]{hyp_6} \caption{A cone type $T$ and the corresponding part $X_T$ of $\mathscr{T}$}\label{fig:hyperbolicconetype} \end{figure} \end{exa} Computing the partitions $\mathscr{D}$ and $\mathscr{J}$ is of course trivial: geometrically the walls determining the hyperplane partition $\mathscr{D}$ are the walls bounding the fundamental chamber, and the walls determining the partition $\mathscr{J}$ are the walls passing through a vertex of the fundamental chamber. Computing the partition $\mathscr{S}_n$ is also straightforward once the $n$-elementary roots are known. However computing the cone type partition $\mathscr{T}$ is nontrivial (see Algorithm~\ref{alg:regularisation} and Corollary~\ref{cor:regularlattice1}). \newpage \begin{exa} The partitions $\mathscr{D}$, $\mathscr{J}$, $\mathscr{S}_0$, and $\mathscr{T}$ are illustrated for $\tilde{\mathsf{G}}_2$ in Figure~\ref{fig:G2partitions} (in each case the identity chamber is shaded grey, and the blue and red shaded chambers will be discussed in the following section). The partitions $\mathscr{S}_0$ and $\mathscr{T}$ for $\tilde{\mathsf{B}}_2$ and $\tilde{\mathsf{A}}_2$ are given in Figures~\ref{fig:B2partitions} and~\ref{fig:A2partitions}. 
\begin{figure}[H] \centering \subfigure[The $S$-partition]{ \centering \begin{tikzpicture}[scale=0.75] \path [fill=blue!30] (0,0)--(-0.433,0.75)--(0,1); \path [fill=blue!30] (0,0)--(0.866,0.5)--(0.433,0.75); \path [fill=blue!30] (0,1)--(0.433,0.75)--(0.866,1.5); \path [fill=blue!30] (0.433,0.75)--(0.866,0.5)--(0.866,1.5); \path [fill=blue!30] (0,1)--(0,1.5)--(-0.866,1.5); \path [fill=blue!30] (0,0)--(0,-1)--(-0.433,-0.75); \path [fill=gray!90] (0,0) -- (0.433,0.75) -- (0,1) -- (0,0); \draw(-4.33,4.5)--(4.33,4.5); \draw(-4.33,3)--(4.33,3); \draw(-4.33,1.5)--(4.33,1.5); \draw(-4.33,0)--(4.33,0); \draw(-4.33,-1.5)--(4.33,-1.5); \draw(-4.33,-3)--(4.33,-3); \draw(-4.33,-3)--(-4.33,4.5); \draw(-3.464,-3)--(-3.464,4.5); \draw(-2.598,-3)--(-2.598,4.5); \draw(-1.732,-3)--(-1.732,4.5); \draw(-.866,-3)--(-.866,4.5); \draw(0,-3)--(0,4.5); \draw(.866,-3)--(.866,4.5); \draw(1.732,-3)--(1.732,4.5); \draw(2.598,-3)--(2.598,4.5); \draw(3.464,-3)--(3.464,4.5); \draw(4.33,-3)--(4.33,4.5); \draw(-4.33,3.5)--({-3*0.866},4.5); \draw(-4.33,2.5)--({-1*0.866},4.5); \draw(-4.33,1.5)--({1*0.866},4.5); \draw(-4.33,.5)--({3*0.866},4.5); \draw(-4.33,-.5)--(4.33,4.5); \draw(-4.33,-1.5)--(4.33,3.5); \draw(-4.33,-2.5)--(4.33,2.5); \draw(-3.464,-3)--(4.33,1.5); \draw(-1.732,-3)--(4.33,.5); \draw(0,-3)--(4.33,-.5); \draw(1.732,-3)--(4.33,-1.5); \draw(3.464,-3)--(4.33,-2.5); \draw(4.33,3.5)--({3*0.866},4.5); \draw(4.33,2.5)--({1*0.866},4.5); \draw(4.33,1.5)--({-1*0.866},4.5); \draw(4.33,.5)--({-3*0.866},4.5); \draw(4.33,-.5)--(-4.33,4.5); \draw(4.33,-1.5)--(-4.33,3.5); \draw(4.33,-2.5)--(-4.33,2.5); \draw(3.464,-3)--(-4.33,1.5); \draw(1.732,-3)--(-4.33,.5); \draw(0,-3)--(-4.33,-.5); \draw(-1.732,-3)--(-4.33,-1.5); \draw(-3.464,-3)--(-4.33,-2.5); \draw(-4.33,-1.5)--(-3.464,-3); \draw(-4.33,1.5)--(-1.732,-3); \draw(-4.33,4.5)--(0,-3); \draw({-3*0.866},4.5)--(1.732,-3); \draw({-1*0.866},4.5)--(3.464,-3); \draw({1*0.866},4.5)--(4.33,-1.5); \draw({3*0.866},4.5)--(4.33,1.5); \draw(4.33,-1.5)--(3.464,-3); 
\draw(4.33,1.5)--(1.732,-3); \draw(4.33,4.5)--(0,-3); \draw({3*0.866},4.5)--(-1.732,-3); \draw({1*0.866},4.5)--(-3.464,-3); \draw({-1*0.866},4.5)--(-4.33,-1.5); \draw({-3*0.866},4.5)--(-4.33,1.5); \draw[line width=2pt](0,-3)--(0,4.5); \draw[line width=2pt](-1.732,-3)--({3*0.866},4.5); \draw[line width=2pt](-4.33,3.5)--(4.33,-1.5); \end{tikzpicture} }\qquad \subfigure[The spherical partition]{ \centering \begin{tikzpicture}[scale=0.75] \path [fill=blue!30] (0,-1) -- (-0.866,-0.5) -- (-0.866,1.5) -- (0.866,1.5)--(0.866,0.5)--(0.866,-0.5)--(0,-1); \path [fill=blue!30] (1.732,0)--(2.598,0)--(2.598,-0.5); \path [fill=blue!30] (-0.866,1.5)--(-1.732,2)--(-1.299,2.25); \path [fill=blue!30] (-1.732,0)--(-1.299,0.75)--(-0.866,0.5); \path [fill=blue!30] (-2.598,0)--(-2.598,-0.5)--(-1.732,0); \path [fill=blue!30] (0.866,0.5)--({0.866+0.433},0.75)--(1.732,0); \path [fill=blue!30] (0.866,1.5)--({0.866+0.433},2.25)--(1.732,2); \path [fill=gray!90] (0,0) -- (0.433,0.75) -- (0,1) -- (0,0); \draw(-4.33,4.5)--(4.33,4.5); \draw(-4.33,3)--(4.33,3); \draw(-4.33,1.5)--(4.33,1.5); \draw(-4.33,0)--(4.33,0); \draw(-4.33,-1.5)--(4.33,-1.5); \draw(-4.33,-3)--(4.33,-3); \draw(-4.33,-3)--(-4.33,4.5); \draw(-3.464,-3)--(-3.464,4.5); \draw(-2.598,-3)--(-2.598,4.5); \draw(-1.732,-3)--(-1.732,4.5); \draw(-.866,-3)--(-.866,4.5); \draw(0,-3)--(0,4.5); \draw(.866,-3)--(.866,4.5); \draw(1.732,-3)--(1.732,4.5); \draw(2.598,-3)--(2.598,4.5); \draw(3.464,-3)--(3.464,4.5); \draw(4.33,-3)--(4.33,4.5); \draw(-4.33,3.5)--({-3*0.866},4.5); \draw(-4.33,2.5)--({-1*0.866},4.5); \draw(-4.33,1.5)--({1*0.866},4.5); \draw(-4.33,.5)--({3*0.866},4.5); \draw(-4.33,-.5)--(4.33,4.5); \draw(-4.33,-1.5)--(4.33,3.5); \draw(-4.33,-2.5)--(4.33,2.5); \draw(-3.464,-3)--(4.33,1.5); \draw(-1.732,-3)--(4.33,.5); \draw(0,-3)--(4.33,-.5); \draw(1.732,-3)--(4.33,-1.5); \draw(3.464,-3)--(4.33,-2.5); \draw(4.33,3.5)--({3*0.866},4.5); \draw(4.33,2.5)--({1*0.866},4.5); \draw(4.33,1.5)--({-1*0.866},4.5); \draw(4.33,.5)--({-3*0.866},4.5); 
\draw(4.33,-.5)--(-4.33,4.5); \draw(4.33,-1.5)--(-4.33,3.5); \draw(4.33,-2.5)--(-4.33,2.5); \draw(3.464,-3)--(-4.33,1.5); \draw(1.732,-3)--(-4.33,.5); \draw(0,-3)--(-4.33,-.5); \draw(-1.732,-3)--(-4.33,-1.5); \draw(-3.464,-3)--(-4.33,-2.5); \draw(-4.33,-1.5)--(-3.464,-3); \draw(-4.33,1.5)--(-1.732,-3); \draw(-4.33,4.5)--(0,-3); \draw({-3*0.866},4.5)--(1.732,-3); \draw({-1*0.866},4.5)--(3.464,-3); \draw({1*0.866},4.5)--(4.33,-1.5); \draw({3*0.866},4.5)--(4.33,1.5); \draw(4.33,-1.5)--(3.464,-3); \draw(4.33,1.5)--(1.732,-3); \draw(4.33,4.5)--(0,-3); \draw({3*0.866},4.5)--(-1.732,-3); \draw({1*0.866},4.5)--(-3.464,-3); \draw({-1*0.866},4.5)--(-4.33,-1.5); \draw({-3*0.866},4.5)--(-4.33,1.5); \draw[line width=2pt](0,-3)--(0,4.5); \draw[line width=2pt](-1.732,-3)--({3*0.866},4.5); \draw[line width=2pt](1.732,-3)--({-3*0.866},4.5); \draw[line width=2pt](-4.33,-2.5)--(4.33,2.5); \draw[line width=2pt](-4.33,-1.5)--(4.33,3.5); \draw[line width=2pt](-4.33,0)--(4.33,0); \draw[line width=2pt](-4.33,2.5)--(4.33,-2.5); \draw[line width=2pt](-4.33,3.5)--(4.33,-1.5); \end{tikzpicture} } \subfigure[The $0$-Shi partition]{ \centering \begin{tikzpicture}[scale=0.75] \path [fill=blue!30] (0,-1) -- (-0.866,-0.5) -- (-0.866,1.5) -- (0.866,1.5)--(0.866,0.5)--(1.299,0.75)--(1.732,0)--(0.866,-0.5)--(0,-1); \path [fill=blue!30] (0.866,-0.5)--(0.866,-1.5)--(1.299,-0.75); \path [fill=blue!30] (0.866,1.5)--(0.4333,2.25)--(0.866,2.5)--(1.732,2)--(1.732,1.5); \path [fill=blue!30] (0.866,4.5)--(0.866,5.5)--(1.299,5.25); \path [fill=blue!30] (0,3)--(-0.433,3.75)--(0,4)--(0.433,3.75); \path [fill=blue!30] (1.732,0)--(2.598,0)--(2.598,-0.5)--(2.165,-0.75); \path [fill=blue!30] (-0.866,1.5)--(-1.732,1.5)--(-1.732,2)--(-1.299,2.25); \path [fill=blue!30] (-2.598,1.5)--(-3.46,1.5)--(-3.46,2); \path [fill=blue!30] (2.598,1.5)--(3.46,1.5)--(3.46,2); \path [fill=blue!30] (-1.732,0)--(-1.299,0.75)--(-0.866,0.5); \path [fill=blue!30] (-2.598,0)--(-2.598,-0.5)--(-2.165,-0.75)--(-1.732,0); \path [fill=blue!30] 
(-2.598,-1.5)--(-3.46,-2)--(-3.03,-2.25); \path [fill=blue!30] (2.598,-1.5)--(3.46,-2)--(3.03,-2.25); \path [fill=blue!30] (0.866,-1.5)--(0.866,-2.5)--(1.299,-2.25); \path [fill=red!30] (-0.866,1.5)--(0,2)--(0.866,1.5); \path [fill=red!30] (-0.866,1.5)--(-0.866,2.5)--(-0.466,2.25); \path [fill=red!30] (0.866,0.5)--(0.866,1.5)--(1.732,1); \path [fill=red!30] (-1.732,0)--(-0.866,1.5)--(-1.732,1); \path [fill=red!30] (1.299,0.75)--(1.732,1)--(1.732,0); \path [fill=gray!90] (0,0) -- (0.433,0.75) -- (0,1) -- (0,0); \draw(-4.33,6)--(4.33,6); \draw(-4.33,4.5)--(4.33,4.5); \draw(-4.33,3)--(4.33,3); \draw(-4.33,1.5)--(4.33,1.5); \draw(-4.33,0)--(4.33,0); \draw(-4.33,-1.5)--(4.33,-1.5); \draw(-4.33,-3)--(4.33,-3); \draw(-4.33,-3)--(-4.33,6); \draw(-3.464,-3)--(-3.464,6); \draw(-2.598,-3)--(-2.598,6); \draw(-1.732,-3)--(-1.732,6); \draw(-.866,-3)--(-.866,6); \draw(0,-3)--(0,6); \draw(.866,-3)--(.866,6); \draw(1.732,-3)--(1.732,6); \draw(2.598,-3)--(2.598,6); \draw(3.464,-3)--(3.464,6); \draw(4.33,-3)--(4.33,6); \draw(-4.33,5.5)--(-3.464,6); \draw(-4.33,4.5)--(-1.732,6); \draw(-4.33,3.5)--(0,6); \draw(-4.33,2.5)--(1.732,6); \draw(-4.33,1.5)--(3.464,6); \draw(-4.33,.5)--(4.33,5.5); \draw(-4.33,-.5)--(4.33,4.5); \draw(-4.33,-1.5)--(4.33,3.5); \draw(-4.33,-2.5)--(4.33,2.5); \draw(-3.464,-3)--(4.33,1.5); \draw(-1.732,-3)--(4.33,.5); \draw(0,-3)--(4.33,-.5); \draw(1.732,-3)--(4.33,-1.5); \draw(3.464,-3)--(4.33,-2.5); \draw(4.33,5.5)--(3.464,6); \draw(4.33,4.5)--(1.732,6); \draw(4.33,3.5)--(0,6); \draw(4.33,2.5)--(-1.732,6); \draw(4.33,1.5)--(-3.464,6); \draw(4.33,.5)--(-4.33,5.5); \draw(4.33,-.5)--(-4.33,4.5); \draw(4.33,-1.5)--(-4.33,3.5); \draw(4.33,-2.5)--(-4.33,2.5); \draw(3.464,-3)--(-4.33,1.5); \draw(1.732,-3)--(-4.33,.5); \draw(0,-3)--(-4.33,-.5); \draw(-1.732,-3)--(-4.33,-1.5); \draw(-3.464,-3)--(-4.33,-2.5); \draw(-4.33,-1.5)--(-3.464,-3); \draw(-4.33,1.5)--(-1.732,-3); \draw(-4.33,4.5)--(0,-3); \draw(-3.464,6)--(1.732,-3); \draw(-1.732,6)--(3.464,-3); 
\draw(0,6)--(4.33,-1.5); \draw(1.732,6)--(4.33,1.5); \draw(3.464,6)--(4.33,4.5); \draw(4.33,-1.5)--(3.464,-3); \draw(4.33,1.5)--(1.732,-3); \draw(4.33,4.5)--(0,-3); \draw(3.464,6)--(-1.732,-3); \draw(1.732,6)--(-3.464,-3); \draw(0,6)--(-4.33,-1.5); \draw(-1.732,6)--(-4.33,1.5); \draw[line width=0.1pt](-3.464,6)--(-4.33,4.5); \draw[line width=2pt](0,-3)--(0,6);% \draw[line width=2pt](0.866,-3)--(0.866,6);% \draw[line width=2pt](-3.46,-3)--(1.732,6);% \draw[line width=2pt](-1.732,-3)--(3.46,6);% \draw[line width=2pt](1.732,-3)--(-3.46,6);% \draw[line width=2pt](3.46,-3)--(-1.732,6);% \draw[line width=2pt](-4.33,-2.5)--(4.33,2.5); \draw[line width=2pt](-4.33,-1.5)--(4.33,3.5); \draw[line width=2pt](-4.33,0)--(4.33,0); \draw[line width=2pt](-4.33,2.5)--(4.33,-2.5); \draw[line width=2pt](-4.33,3.5)--(4.33,-1.5); \draw[line width=2pt](-4.33,1.5)--(4.33,1.5); \end{tikzpicture}}\qquad \subfigure[The cone type partition]{ \centering \begin{tikzpicture}[scale=0.75] \path [fill=blue!30] (0,-1) -- (-0.866,-0.5) -- (-0.866,1.5) -- (0.866,1.5)--(0.866,0.5)--(1.299,0.75)--(1.732,0)--(0.866,-0.5)--(0,-1); \path [fill=blue!30] (0.866,-0.5)--(0.866,-1.5)--(1.299,-0.75); \path [fill=blue!30] (0.866,1.5)--(0.4333,2.25)--(0.866,2.5)--(1.732,2)--(1.732,1.5); \path [fill=blue!30] (0.866,4.5)--(0.866,5.5)--(1.299,5.25); \path [fill=blue!30] (0,3)--(-0.433,3.75)--(0,4)--(0.433,3.75); \path [fill=blue!30] (1.732,0)--(2.598,0)--(2.598,-0.5)--(2.165,-0.75); \path [fill=blue!30] (-0.866,1.5)--(-1.732,1.5)--(-1.732,2)--(-1.299,2.25); \path [fill=blue!30] (-2.598,1.5)--(-3.46,1.5)--(-3.46,2); \path [fill=blue!30] (2.598,1.5)--(3.46,1.5)--(3.46,2); \path [fill=blue!30] (-1.732,0)--(-1.299,0.75)--(-0.866,0.5); \path [fill=blue!30] (-2.598,0)--(-2.598,-0.5)--(-2.165,-0.75)--(-1.732,0); \path [fill=blue!30] (-2.598,-1.5)--(-3.46,-2)--(-3.03,-2.25); \path [fill=blue!30] (2.598,-1.5)--(3.46,-2)--(3.03,-2.25); \path [fill=blue!30] (0.866,-1.5)--(0.866,-2.5)--(1.299,-2.25); \path [fill=gray!90] (0,0) -- 
(0.433,0.75) -- (0,1) -- (0,0); \draw(-4.33,6)--(4.33,6); \draw(-4.33,4.5)--(4.33,4.5); \draw(-4.33,3)--(4.33,3); \draw(-4.33,1.5)--(4.33,1.5); \draw(-4.33,0)--(4.33,0); \draw(-4.33,-1.5)--(4.33,-1.5); \draw(-4.33,-3)--(4.33,-3); \draw(-4.33,-3)--(-4.33,6); \draw(-3.464,-3)--(-3.464,6); \draw(-2.598,-3)--(-2.598,6); \draw(-1.732,-3)--(-1.732,6); \draw(-.866,-3)--(-.866,6); \draw(0,-3)--(0,6); \draw(.866,-3)--(.866,6); \draw(1.732,-3)--(1.732,6); \draw(2.598,-3)--(2.598,6); \draw(3.464,-3)--(3.464,6); \draw(4.33,-3)--(4.33,6); \draw(-4.33,5.5)--(-3.464,6); \draw(-4.33,4.5)--(-1.732,6); \draw(-4.33,3.5)--(0,6); \draw(-4.33,2.5)--(1.732,6); \draw(-4.33,1.5)--(3.464,6); \draw(-4.33,.5)--(4.33,5.5); \draw(-4.33,-.5)--(4.33,4.5); \draw(-4.33,-1.5)--(4.33,3.5); \draw(-4.33,-2.5)--(4.33,2.5); \draw(-3.464,-3)--(4.33,1.5); \draw(-1.732,-3)--(4.33,.5); \draw(0,-3)--(4.33,-.5); \draw(1.732,-3)--(4.33,-1.5); \draw(3.464,-3)--(4.33,-2.5); \draw(4.33,5.5)--(3.464,6); \draw(4.33,4.5)--(1.732,6); \draw(4.33,3.5)--(0,6); \draw(4.33,2.5)--(-1.732,6); \draw(4.33,1.5)--(-3.464,6); \draw(4.33,.5)--(-4.33,5.5); \draw(4.33,-.5)--(-4.33,4.5); \draw(4.33,-1.5)--(-4.33,3.5); \draw(4.33,-2.5)--(-4.33,2.5); \draw(3.464,-3)--(-4.33,1.5); \draw(1.732,-3)--(-4.33,.5); \draw(0,-3)--(-4.33,-.5); \draw(-1.732,-3)--(-4.33,-1.5); \draw(-3.464,-3)--(-4.33,-2.5); \draw(-4.33,-1.5)--(-3.464,-3); \draw(-4.33,1.5)--(-1.732,-3); \draw(-4.33,4.5)--(0,-3); \draw(-3.464,6)--(1.732,-3); \draw(-1.732,6)--(3.464,-3); \draw(0,6)--(4.33,-1.5); \draw(1.732,6)--(4.33,1.5); \draw(3.464,6)--(4.33,4.5); \draw(4.33,-1.5)--(3.464,-3); \draw(4.33,1.5)--(1.732,-3); \draw(4.33,4.5)--(0,-3); \draw(3.464,6)--(-1.732,-3); \draw(1.732,6)--(-3.464,-3); \draw(0,6)--(-4.33,-1.5); \draw(-1.732,6)--(-4.33,1.5); \draw(-3.464,6)--(-4.33,4.5); \draw[line width=2pt](0,-3)--(0,6); \draw[line width=2pt](0.866,-3)--(0.866,0.5); \draw[line width=2pt](0.866,1.5)--(0.866,6); \draw[line width=2pt](-3.46,-3)--(-1.732,0); \draw[line 
width=2pt](0,3)--(1.732,6); \draw[line width=2pt](-1.732,-3)--(3.46,6); \draw[line width=2pt](1.732,-3)--(-3.46,6); \draw[line width=2pt](3.46,-3)--(1.732,0); \draw[line width=2pt](0.866,1.5)--(-1.732,6); \draw[line width=2pt](-4.33,-2.5)--(4.33,2.5); \draw[line width=2pt](-4.33,-1.5)--(4.33,3.5); \draw[line width=2pt](-4.33,0)--(4.33,0); \draw[line width=2pt](-4.33,2.5)--(4.33,-2.5); \draw[line width=2pt](-4.33,3.5)--(4.33,-1.5); \draw[line width=2pt](-4.33,1.5)--(-0.866,1.5); \draw[line width=2pt](0.866,1.5)--(4.33,1.5); \end{tikzpicture}} \caption{The partitions $\mathscr{D}$, $\mathscr{J}$, $\mathscr{S}_0$, and $\mathscr{T}$ for $\tilde{\mathsf{G}}_2$}\label{fig:G2partitions} \end{figure} \end{exa} \begin{rem}\label{rem:shi} We make the following comments. \begin{enumerate} \item If $W$ is affine, then the $0$-Shi partition $\mathscr{S}_0$ is the partition induced by the classical Shi hyperplane arrangement (see \cite{Shi:87a,Shi:87b}). \item The partitions $\mathscr{T}$ and $\mathscr{G}_B$ (with $B$ a Garside shadow) are generally not hyperplane partitions. For example, see the partition $\mathscr{T}$ in Figure~\ref{fig:G2partitions}. \item Note that $\mathscr{J}\leq \mathscr{S}_0$, as $\Phi_{\mathrm{sph}}^+\subseteq \mathcal{E}_0$. \end{enumerate} \end{rem} \subsection{Regular partitions}\label{subsec:regularpartitions} In this section we define the notion of a \textit{regular partition} of $W$, and show that these partitions are intimately related to the automatic structure of~$W$. \begin{defn} A partition $\mathscr{P}$ of $W$ is \textit{locally constant} if the function $D_L:W\to 2^S$ is constant on each part of $\mathscr{P}$. If $\mathscr{P}$ is locally constant, and $P\in\mathscr{P}$, we write $D_L(P)=D_L(w)$ for any $w\in P$. \end{defn} \newpage Note that every refinement of a locally constant partition is again locally constant. Moreover, we have the following.
\begin{lem}\label{lem:Sminimal} A partition $\mathscr{P}$ is locally constant if and only if $\mathscr{D}\leq \mathscr{P}$. \end{lem} \begin{proof} This is immediate from Proposition~\ref{prop:partsdescription}. \end{proof} \begin{prop}\label{prop:locallyconstantexamples} All partitions in Definition~\ref{defn:partitions} are locally constant. \end{prop} \begin{proof} By Proposition~\ref{prop:partsdescription} the parts of $\mathscr{T}$ are the sets $X_T=\{w\in W\mid T(w^{-1})=T\}$, with $T\in\mathbb{T}$. If $x,y\in X_T$ then $T(x^{-1})=T(y^{-1})$, and thus $D_L(x)=D_L(y)$. So $\mathscr{T}$ is locally constant. Let $B$ be a Garside shadow. By Proposition~\ref{prop:partsdescription} the parts of $\mathscr{G}_B$ are the sets $\pi^{-1}_B(b)$, with $b\in B$. If $x,y\in \pi^{-1}_B(b)$ then $\pi_B(x)=\pi_B(y)=b$. By \cite[Proposition~2.6]{HNW:16} we have $D_L(x)=D_L(\pi_B(x))=D_L(y)$, and hence $\mathscr{G}_B$ is locally constant. If $x,y$ lie in the same part of $\mathscr{S}_n$ then $\mathcal{E}_n(x)=\mathcal{E}_n(y)$. Since each root $\alpha_s$ with $s\in S$ is elementary, it follows that $D_L(x)=D_L(y)$. The remaining cases follow easily from Proposition~\ref{prop:partsdescription}. \end{proof} The main definition of this section is as follows. \begin{defn} A partition $\mathscr{R}$ of $W$ is \textit{regular} if: \begin{enumerate} \item $\mathscr{R}$ is locally constant, and \item if $R\in \mathscr{R}$ and $s\notin D_L(R)$ then $sR\subseteq R'$ for some part $R'\in\mathscr{R}$. \end{enumerate} Let $\mathscr{P}_{\mathrm{reg}}(W)$ denote the set of all regular partitions of $W$. \end{defn} Note that the condition $s\notin D_L(R)$ is equivalent to $R\subseteq H_{\alpha_s}^+$, or more geometrically, that $e$ and $R$ both lie on the same side of the wall $H_{\alpha_s}$ in the Coxeter complex.
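Both conditions of the definition are easy to test by brute force in small examples. The following Python sketch (ours, not from the paper; the encoding of $W$ and all helper names are illustrative assumptions) realises $W=S_3$ as permutations of $\{0,1,2\}$ with the adjacent transpositions as Coxeter generators, builds the cone type partition $\mathscr{T}$ and the $S$-partition $\mathscr{D}$, and checks regularity directly.

```python
from itertools import permutations

# W = S_3 as permutations of {0,1,2}; Coxeter generators are the adjacent
# transpositions s1 = (0 1) and s2 = (1 2).  Coxeter length equals the
# number of inversions of the permutation.
W = list(permutations(range(3)))
S = {'s1': (1, 0, 2), 's2': (0, 2, 1)}

def mult(u, v):                 # composition: (u*v)(i) = u(v(i))
    return tuple(u[v[i]] for i in range(3))

def inv(w):                     # group inverse of the permutation w
    out = [0, 0, 0]
    for i, wi in enumerate(w):
        out[wi] = i
    return tuple(out)

def length(w):                  # Coxeter length = number of inversions
    return sum(w[i] > w[j] for i in range(3) for j in range(i + 1, 3))

def left_descents(w):           # s in D_L(w)  iff  l(sw) < l(w)
    return frozenset(n for n, s in S.items() if length(mult(s, w)) < length(w))

def cone_type(x):               # T(x^{-1}) = {w : l(x^{-1}w) = l(x) + l(w)}
    xi = inv(x)
    return frozenset(w for w in W if length(mult(xi, w)) == length(x) + length(w))

def partition_by(signature):    # partition of W induced by a signature map
    parts = {}
    for w in W:
        parts.setdefault(signature(w), []).append(w)
    return list(parts.values())

def is_regular(partition):
    # Condition (1): D_L is constant on each part.
    # Condition (2): if s is not in D_L(R), then sR lies in a single part.
    part_of = {w: i for i, P in enumerate(partition) for w in P}
    for P in partition:
        if len({left_descents(w) for w in P}) != 1:
            return False
        for n, s in S.items():
            if n not in left_descents(P[0]):
                if len({part_of[mult(s, w)] for w in P}) != 1:
                    return False
    return True

T_partition = partition_by(cone_type)       # the cone type partition T
D_partition = partition_by(left_descents)   # the S-partition D
print(len(T_partition), is_regular(T_partition))   # 6 True
print(is_regular(D_partition))                     # False
```

Here $\mathscr{T}$ is the partition into singletons (as it must be for a finite group, by the proposition on cone types in finite Coxeter groups) and is regular, while $\mathscr{D}$ fails condition (2): for the part $R=\{s_2,s_2s_1\}$ we have $s_1\notin D_L(R)$, yet $s_1R=\{s_1s_2,w_0\}$ meets two distinct parts.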
\begin{thm}\label{thm:regularpartitions} The following partitions of $W$ are regular: \begin{enumerate} \item the cone type partition $\mathscr{T}$; \item the Garside partition $\mathscr{G}_B$, for any Garside shadow~$B$; \item the $n$-Shi partition $\mathscr{S}_n$, for $n\in\mathbb{N}$. \end{enumerate} \end{thm} \begin{proof} By Proposition~\ref{prop:locallyconstantexamples} these partitions are all locally constant. (1) By Proposition~\ref{prop:partsdescription} each part of $\mathscr{T}$ is of the form $X_T=\{w\in W\mid T(w^{-1})=T\}$ for some cone type~$T$. Suppose that $s\in S$ with $s\notin D_L(X_T)$. For any $w\in X_T$ we have $D_L(w)=D_L(X_T)$, and so $s\in T(w^{-1})=T$. By Lemma~\ref{lem:conetypeevolution} we have $$ T((sw)^{-1})=T(w^{-1}s)=s\{x\in T\mid \ell(sx)=\ell(x)-1\}, $$ which is independent of the particular $w\in X_T$. Thus, writing $T'=T(w^{-1}s)$ (for any $w\in X_T$), we have $sX_T\subseteq X_{T'}$, and hence $\mathscr{T}\in\mathscr{P}_{\mathrm{reg}}(W)$. (2) Let $B$ be a Garside shadow. By Proposition~\ref{prop:partsdescription} the parts of $\mathscr{G}_B$ are the sets $\pi^{-1}_B(b)$, with $b\in B$. If $x,y\in \pi^{-1}_B(b)$ and $s\notin D_L(x)=D_L(y)$ then by \cite[Proposition~2.8]{HNW:16} we have $\pi_B(sx)=\pi_B(s\pi_B(x))=\pi_B(sb)=\pi_B(sy)$, and so $sx$ and $sy$ lie in the part $\pi^{-1}_B(\pi_B(sb))$. Hence $\mathscr{G}_B\in\mathscr{P}_{\mathrm{reg}}(W)$. (3) If $x,y\in W$ lie in the same part of $\mathscr{S}_n$ then $\mathcal{E}_n(x)=\mathcal{E}_n(y)=A$, say. If $s\notin D_L(x)=D_L(y)$ then by Lemma~\ref{lem:elementaryrootsbasics} we have $\mathcal{E}_n(sx)=(\{\alpha_s\}\sqcup sA)\cap \mathcal{E}_n=\mathcal{E}_n(sy)$. Thus $sx$ and $sy$ lie in a common part of $\mathscr{S}_n$, and so the partition is regular. \end{proof} \begin{rem} We record the following observations. \begin{enumerate} \item The top element $\mathbf{1}=\{\{w\}\mid w\in W\}$ of $(\mathscr{P}(W),\leq)$ is obviously regular. 
Thus each $\mathscr{P}\in\mathscr{P}(W)$ can be refined to a regular partition. In Section~\ref{sec:regularcompletion} we will show that there is a unique minimal such ``regular completion''. \item The partition $\mathscr{D}$ is generally not regular (see, for example, Figure~\ref{fig:G2partitions}(a)). \item The partition $\mathscr{J}$ is generally not regular. For example, the partition $\mathscr{J}$ is not regular for $\tilde{\mathsf{G}}_2$ (see Figure~\ref{fig:G2partitions}(b)), however it is regular for $\tilde{\mathsf{A}}_2$ (see Figure~\ref{fig:A2partitions}). \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.9] \path [fill=blue!30] (-1.5,0.866)--(-1,{2*0.866})--(1,{2*0.866})--(1.5,0.866)--(0.5,-0.866)--(-0.5,-0.866)--(-1.5,0.866); \path [fill=blue!30] (0,{2*0.866})--(-0.5,{3*0.866})--(0.5,{3*0.866})--(0,{2*0.866}); \path [fill=blue!30] (1,0)--(2,0)--(1.5,-0.866); \path [fill=blue!30] (-1,0)--(-2,0)--(-1.5,-0.866); \path [fill=gray!90] (0,0) -- (-0.5,0.866) -- (0.5,0.866) -- (0,0); \draw (2.5, {-3*0.866})--( 3.5, {-1*0.866} ); \draw (1.5, {-3*0.866})--( 3.5, {1*0.866} ); \draw (0.5, {-3*0.866})--( 3.5, {3*0.866} ); \draw (-0.5, {-3*0.866})--( 3, {4*0.866} ); \draw [line width=2pt](-1.5, {-3*0.866})--( 2, {4*0.866} ); \draw [line width=2pt] (-2.5, {-3*0.866})--( 1, {4*0.866} ); \draw (-3.5, {-3*0.866})--(0, {4*0.866} ); \draw (-3.5, {-1*0.866})--(-1, {4*0.866} ); \draw (-3.5, {1*0.866})--(-2, {4*0.866} ); \draw (-3.5, {3*0.866})--(-3, {4*0.866} ); \draw (-2.5, {-3*0.866})--( -3.5, {-1*0.866} ); \draw (-1.5, {-3*0.866})--( -3.5, {1*0.866} ); \draw (-0.5, {-3*0.866})--( -3.5, {3*0.866} ); \draw (0.5, {-3*0.866})--( -3, {4*0.866} ); \draw [line width=2pt] (1.5, {-3*0.866})--( -2, {4*0.866} ); \draw [line width=2pt] (2.5, {-3*0.866})--( -1, {4*0.866} ); \draw (3.5, {-3*0.866})--(0, {4*0.866} ); \draw (3.5, {-1*0.866})--(1, {4*0.866} ); \draw (3.5, {1*0.866})--(2, {4*0.866} ); \draw (3.5, {3*0.866})--(3, {4*0.866} ); \draw (-3.5, -2.598)--( 3.5, -2.598); 
\draw (-3.5, -1.732)--( 3.5, -1.732); \draw (-3.5, -0.866)--( 3.5, -0.866); \draw[line width=2pt] (-3.5, 0)--( 3.5, 0); \draw (-3.5, 3.464)--( 3.5, 3.464 ); \draw (-3.5, 2.598)--( 3.5, 2.598); \draw (-3.5, 1.732)--( 3.5, 1.732); \draw [line width=2pt] (-3.5, 0.866)--( 3.5, 0.866); % \end{tikzpicture} \caption{In type $\tilde{\mathsf{A}}_2$ we have $\mathscr{T}=\mathscr{S}_0=\mathscr{J}$}\label{fig:A2partitions} \end{figure} \end{enumerate} \end{rem} The main interest in the concept of regular partitions stems from the following theorem, providing a very general geometric construction of automata recognising $\mathcal{L}(W,S)$. Note that if $\mathscr{P}$ is locally constant then the part of $\mathscr{P}$ containing $e$ is the singleton~$\{e\}$ (by considering left descent sets). \begin{thm}\label{thm:regularautomaton} Let $\mathscr{R}$ be a regular partition of $W$. Define $\mu:\mathscr{R}\times S\to \mathscr{R}\cup\{\dagger\}$ by $$ \mu(R,s)=\begin{cases} R'&\text{if $s\notin D_L(R)$ and $sR\subseteq R'\in\mathscr{R}$}\\ \dagger&\text{if $s\in D_L(R)$}. \end{cases} $$ Then $\mathcal{A}(\mathscr{R})=(\mathscr{R},\mu,\{e\})$ is an automaton recognising $\mathcal{L}(W,S)$. Moreover, if $w=s_1\cdots s_n$ is reduced, then the final state of the path with edge labels $(s_1,\ldots,s_n)$ starting at $\{e\}$ is the part $R\in\mathscr{R}$ with $w^{-1}\in R$. \end{thm} \begin{proof} Let $R_0=\{e\}$. We show, by induction on $n\geq 1$, that $(s_1,\ldots,s_n)$ is accepted by $\mathcal{A}(\mathscr{R})$ if and only if $s_1\cdots s_n$ is reduced. Consider $n=1$. Each expression $s$ is reduced. On the other hand, since $s\notin D_L(R_0)$ we have $\mu(R_0,s)=R_1$, where $R_1\in\mathscr{R}$ is the part containing $s$. Thus all length $1$ reduced words are accepted by~$\mathcal{A}(\mathscr{R})$. Let $k\geq 1$, and suppose that $s_1\cdots s_k$ is reduced if and only if $(s_1,\ldots,s_k)$ is accepted by $\mathcal{A}(\mathscr{R})$. Let $s_1\cdots s_ks$ be reduced. 
Then $s_1\cdots s_k$ is reduced, and so $(s_1,\ldots,s_k)$ is accepted by~$\mathcal{A}(\mathscr{R})$. Let $R_k\in\mathscr{R}$ be the part of $\mathscr{R}$ containing $s_k\cdots s_1$. Since $s_1\cdots s_ks$ is reduced we have $s\notin D_L(s_k\cdots s_1)=D_L(R_k)$. Hence, by the regularity condition, $sR_k\subseteq R_{k+1}$ where $R_{k+1}$ is the part of $\mathscr{R}$ with $ss_k\cdots s_1\in R_{k+1}$. Then $\mu(R_k,s)=R_{k+1}$, and so $(s_1,\ldots,s_k,s)$ is accepted by~$\mathcal{A}(\mathscr{R})$. Conversely, suppose that $(s_1,\ldots,s_k,s)$ is accepted by $\mathcal{A}(\mathscr{R})$. Then $(s_1,\ldots,s_k)$ is accepted, and so $s_1\cdots s_k$ is reduced. Moreover, with $R_k$ being the part containing $s_k\cdots s_1$, the fact that $(s_1,\ldots,s_k,s)$ is accepted gives $\mu(R_k,s)=R_{k+1}$ where $R_{k+1}$ is the part containing $ss_k\cdots s_1$. Thus, by definition, $s\notin D_L(R_k)$. In particular $s\notin D_L(s_k\cdots s_1)$, and so $s_1\cdots s_ks$ is reduced. The final statement is now clear: If $w=s_1\cdots s_k$ is a reduced expression, then the corresponding path in the automaton $\mathcal{A}(\mathscr{R})=(\mathscr{R},\mu,\{e\})$ is $$ \{e\}=R_0\to_{s_1}R_1\to_{s_2}R_2\to_{s_3}\cdots\to_{s_k}R_k, $$ where $R_j\in\mathscr{R}$ is the part containing $s_jR_{j-1}$. Thus $s_k\cdots s_1\in R_k$. \end{proof} The above construction leads to a uniform and conceptual proof of the known automata recognising $\mathcal{L}(W,S)$. In particular, using Theorems~\ref{thm:regularpartitions} and~\ref{thm:regularautomaton} we obtain new proofs of Theorems~\ref{thm:canonicalautomaton} and~\ref{thm:garsideautomaton}. Moreover, the above construction leads to the following remarkable fact that will be used in a crucial way to define the ``regular completion'' of a partition in the following section. \newpage \begin{cor}\label{cor:Tisminimal} If $\mathscr{R}\in\mathscr{P}_{\mathrm{reg}}(W)$ then $\mathscr{R}$ is a refinement of $\mathscr{T}$ (that is, $\mathscr{T}\leq \mathscr{R}$).
\end{cor} \begin{proof} Let $\mathcal{A}=(\mathscr{R},\mu,\{e\})$ be the automaton constructed in Theorem~\ref{thm:regularautomaton}. Let $R\in \mathscr{R}$, and suppose that $x,y\in R$. Let $x=s_1\cdots s_n$ and $y=s_1'\cdots s_m'$ be reduced expressions. By the final statement of Theorem~\ref{thm:regularautomaton}, we have that the paths in $\mathcal{A}$ starting at $\{e\}$ with edge labels $(s_n,\ldots,s_1)$ and $(s_m',\ldots,s_1')$ both end at the state $R$. Then by Lemma~\ref{lem:stateconetype} we have $T(s_n\cdots s_1)=T(s_m'\cdots s_1')$, and so $T(x^{-1})=T(y^{-1})$. Thus $x$ and $y$ lie in the same part of the cone type partition (by Proposition~\ref{prop:partsdescription}) and so $\mathscr{R}$ is a refinement of $\mathscr{T}$. \end{proof} There is a partial converse to Theorem~\ref{thm:regularautomaton}. We call an automaton $\mathcal{A}=(Y,\mu,o)$ recognising $\mathcal{L}(W,S)$ \textit{reduced} if the following property holds: If $w=s_1\cdots s_n$ and $w=s_1'\cdots s_n'$ are both reduced expressions for $w$, then the paths in $\mathcal{A}$ starting at $o$ with edge labels $(s_1,\ldots,s_n)$ and $(s_1',\ldots,s_n')$ finish at the same state. \begin{prop} Let $\mathscr{R}\in\mathscr{P}_{\mathrm{reg}}(W)$. The automaton $\mathcal{A}(\mathscr{R})$ constructed in Theorem~\ref{thm:regularautomaton} is reduced. \end{prop} \begin{proof} From the final statement of Theorem~\ref{thm:regularautomaton}, if $w=s_1\cdots s_n$ is a reduced expression then the final state in the path with edge labels $(s_1,\ldots,s_n)$ does not depend on the particular reduced expression chosen, and so $\mathcal{A}(\mathscr{R})$ is reduced. \end{proof} If $\mathcal{A}=(Y,\mu,o)$ is reduced, then for $w\in W$ we can define $$ \mu(o,w)=\mu(\ldots\mu(\mu(o,s_1),s_2),\ldots,s_n) $$ where $s_1\cdots s_n$ is any reduced expression for~$w$ (this is well defined by the reduced assumption). Thus $\mu(o,w)$ is the common end state of all paths in $\mathcal{A}$ whose edge labels represent~$w$. 
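To make the automaton construction concrete, the following Python sketch (our illustration only, not part of the paper's formalism) simulates the cone type automaton for the small toy case $W=S_3$ of type $\mathsf{A}_2$, with elements stored as permutation tuples and the length function taken to be the inversion count. The states are cone types, and the transitions implement the evolution rule $T(w^{-1}s)=s\{x\in T(w^{-1})\mid \ell(sx)=\ell(x)-1\}$ of Lemma~\ref{lem:conetypeevolution}; a word over $\{s,t\}$ is then accepted exactly when it is reduced.

```python
from itertools import permutations, product

# Toy model: W = S3 (type A2), with s = (1 2), t = (2 3), elements stored
# as permutation tuples, and length = inversion count. These choices are
# assumptions of the illustration, not data from the paper.
def comp(u, v):  # (u o v)(i) = u(v(i))
    return tuple(u[v[i] - 1] for i in range(3))

GENS = {'s': (2, 1, 3), 't': (1, 3, 2)}
W = list(permutations((1, 2, 3)))

def length(w):
    return sum(1 for i in range(3) for j in range(i + 1, 3) if w[i] > w[j])

# The automaton's states are cone types T(u) = {x : l(ux) = l(u) + l(x)}.
# On a letter a the transition is defined iff the generator lies in the
# current cone type, and the new state follows the evolution rule
#   T(us) = s.{x in T(u) : l(sx) = l(x) - 1}.
def step(T, a):
    g = GENS[a]
    if g not in T:
        return None  # a is not a legal continuation: the word is not reduced
    return frozenset(comp(g, x) for x in T if length(comp(g, x)) == length(x) - 1)

def accepts(word):
    T = frozenset(W)  # start state: the cone type of e is all of W
    for a in word:
        T = step(T, a)
        if T is None:
            return False
    return True

# Acceptance agrees with reducedness of the word.
for n in range(6):
    for word in product('st', repeat=n):
        w = (1, 2, 3)
        for a in word:
            w = comp(w, GENS[a])
        assert accepts(word) == (length(w) == n)
```

After reading a word $(s_1,\ldots,s_k)$ the variable `T` holds the cone type $T(s_1\cdots s_k)$, matching the final statement of Theorem~\ref{thm:regularautomaton} (the state reached is the part of $\mathscr{T}$ containing $s_k\cdots s_1$).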
Let $\mathbf{A}_{\mathrm{red}}(W,S)$ denote the set of isomorphism classes of reduced automata recognising~$\mathcal{L}(W,S)$. \begin{thm}\label{thm:converseautomaton} Let $\mathcal{A}=(Y,\mu,o)$ be a reduced automaton recognising $\mathcal{L}(W,S)$. The partition $\mathscr{R}(\mathcal{A})$ of $W$ into sets $$ A_y=\{w\in W\mid \mu(o,w^{-1})=y\},\quad\text{with $y\in Y$}, $$ is a regular partition of $W$. Moreover, the functions $F:\mathscr{P}_{\mathrm{reg}}(W)\to \mathbf{A}_{\mathrm{red}}(W,S)$ and $G:\mathbf{A}_{\mathrm{red}}(W,S)\to \mathscr{P}_{\mathrm{reg}}(W)$ with $F(\mathscr{R})=\mathcal{A}(\mathscr{R})$ (c.f. Theorem~\ref{thm:regularautomaton}) and $G(\mathcal{A})=\mathscr{R}(\mathcal{A})$ are mutually inverse bijections. \end{thm} \begin{proof} It is clear that $W=\bigcup_{y\in Y}A_y$, and that $A_y\cap A_{y'}=\emptyset$ if $y\neq y'$. Thus $\mathscr{R}(\mathcal{A})$ is a partition of~$W$. Moreover, if $u,v\in A_y$ then $T(u^{-1})=T(v^{-1})$ by Lemma~\ref{lem:stateconetype}. In particular, $D_L(u)=D_L(v)$, and so $\mathscr{R}(\mathcal{A})$ is locally constant. Suppose that $s\notin D_L(A_y)$, and consider $w\in A_y$. Thus $\ell(sw)=\ell(w)+1$ and so $\ell(w^{-1}s)=\ell(w)+1$. Since $y=\mu(o,w^{-1})$, if $w^{-1}=s_1\cdots s_n$ is reduced then $w^{-1}s=s_1\cdots s_ns$ is also reduced, and hence $\mu(o,w^{-1}s)=\mu(y,s)$. Thus $sw\in A_{\mu(y,s)}$, and hence $sA_y\subseteq A_{\mu(y,s)}$. So $\mathscr{R}(\mathcal{A})$ is regular. To prove the final statement, we show that $G(F(\mathscr{R}))=\mathscr{R}$ and $F(G(\mathcal{A}))=\mathcal{A}$ for all $\mathscr{R}\in\mathscr{P}_{\mathrm{reg}}(W)$ and $\mathcal{A}\in \mathbf{A}_{\mathrm{red}}(W,S)$. For the first statement, the states of the automaton $F(\mathscr{R})$ are parts of $\mathscr{R}$, and so the parts of the partition $G(F(\mathscr{R}))$ are the sets $ A_P=\{w\in W\mid \mu(o,w^{-1})=P\}$ with $P\in\mathscr{R}$, with $\mu$ as in Theorem~\ref{thm:regularautomaton}.
Recall that if $w^{-1}=s_1\cdots s_n$ is reduced then the path in $F(\mathscr{R})$ starting at $\{e\}$ with edge labels $(s_1,\ldots,s_n)$ ends at the part $P$ of $\mathscr{R}$ containing $w$ (see Theorem~\ref{thm:regularautomaton}). Thus $A_P=\{w\in W\mid w\in P\}=P$. On the other hand, if $\mathcal{A}=(Y,\mu,o)\in\mathbf{A}_{\mathrm{red}}(W,S)$, then the states of the automaton $F(G(\mathcal{A}))$ are the sets $A_y=\{w\in W\mid \mu(o,w^{-1})=y\}$, with $y\in Y$, and the transition function is given by $\mu'(A_y,s)=A_{y'}$ if $s\notin D_L(A_y)$ and $sA_y\subseteq A_{y'}$, with $y'\in Y$. We showed above that $sA_y\subseteq A_{\mu(y,s)}$, and hence $\mu'(A_y,s)=A_{\mu(y,s)}$. It follows that $f:\mathcal{A}\to F(G(\mathcal{A}))$ with $f(y)=A_y$ is an isomorphism of automata. \end{proof} \subsection{The regular completion of a partition}\label{sec:regularcompletion} In this section we show that the partially ordered set $(\mathscr{P}_{\mathrm{reg}}(W),\leq)$ is a complete lattice (that is, every nonempty subset has both a meet and a join). As a consequence, given an arbitrary partition $\mathscr{P}\in\mathscr{P}(W)$ there exists a unique minimal regular partition $\widehat{\mathscr{P}}\in\mathscr{P}_{\mathrm{reg}}(W)$ refining $\mathscr{P}$ (we call this partition the \textit{regular completion} of $\mathscr{P}$). We provide an algorithm for computing the regular completion, along with sufficient conditions for this algorithm to terminate in finite time. An important consequence of this analysis is that $\mathscr{T}=\widehat{\mathscr{D}}$ (see Corollary~\ref{cor:regularlattice1}). This fact will play a pivotal role in proving Theorem~\ref{thm:main1}. \begin{thm}\label{thm:regularlattice2} The partially ordered set $(\mathscr{P}_{\mathrm{reg}}(W),\leq)$ is a complete lattice, with bottom element $\mathscr{T}$ and top element $\mathbf{1}$ (recall convention~(\ref{eq:convention})).
\end{thm} \begin{proof} Let $X=\{\mathscr{P}_i\mid i\in I\}\subseteq \mathscr{P}_{\mathrm{reg}}(W)$ be nonempty. The join of $X$ in the partially ordered set $(\mathscr{P}(W),\leq)$ is (see, for example, \cite[p.36]{DP:02}) $$ \bigvee X=\bigg\{\bigcap_{i\in I}P_i\biggm| P_i\in\mathscr{P}_i,\,\bigcap_{i\in I}P_i\neq\emptyset\bigg\}. $$ Write $\mathscr{P}=\bigvee X$. We claim that $\mathscr{P}\in\mathscr{P}_{\mathrm{reg}}(W)$. Clearly $\mathscr{P}$ is locally constant (as it is a common refinement of locally constant partitions). Moreover, if $P\in\mathscr{P}$ and $s\in S$ with $s\notin D_L(P)$ then writing $P=\bigcap_{i\in I}P_i$ with $P_i\in \mathscr{P}_i$ we have $s\notin D_L(P_i)=D_L(P)$ for all $i\in I$ (as $P\neq \emptyset$ and each $\mathscr{P}_i$ is locally constant). Thus by regularity of each $\mathscr{P}_i$ there is $P_i'\in\mathscr{P}_i$ with $sP_i\subseteq P_i'$. Thus $sP=\bigcap_{i\in I}(sP_i)\subseteq \bigcap_{i\in I}P_i'$, which is a part of $\mathscr{P}$ by definition. Thus $\mathscr{P}\in\mathscr{P}_{\mathrm{reg}}(W)$. Thus every nonempty subset $X\subseteq \mathscr{P}_{\mathrm{reg}}(W)$ has a join in the partially ordered set $(\mathscr{P}_{\mathrm{reg}}(W),\leq)$. By Corollary~\ref{cor:Tisminimal} the cone type partition $\mathscr{T}\in\mathscr{P}_{\mathrm{reg}}(W)$ is a lower bound for every nonempty subset $X\subseteq \mathscr{P}_{\mathrm{reg}}(W)$. Thus the set $$ \{\mathscr{R}\in \mathscr{P}_{\mathrm{reg}}(W)\mid \mathscr{R}\leq \mathscr{P}\text{ for all $\mathscr{P}\in X$}\} $$ is nonempty, and using the existence of joins from the previous paragraph, the meet of $X$ is given by $$ \bigwedge X=\bigvee\{\mathscr{R}\in \mathscr{P}_{\mathrm{reg}}(W)\mid \mathscr{R}\leq \mathscr{P}\text{ for all $\mathscr{P}\in X$}\}. $$ Thus $\mathscr{P}_{\mathrm{reg}}(W)$ is a complete lattice with bottom element $\mathscr{T}$ and top element $\mathbf{1}$. 
\end{proof} Theorem~\ref{thm:regularlattice2} allows us to define the ``regular completion'' of a partition. \begin{defn}\label{defn:regularcompletion} The \textit{regular completion} of $\mathscr{P}\in\mathscr{P}(W)$ is $$ \widehat{\mathscr{P}}=\bigwedge\{\mathscr{R}\in\mathscr{P}_{\mathrm{reg}}(W)\mid \mathscr{P}\leq \mathscr{R}\} $$ Then $\widehat{\mathscr{P}}$ is a regular partition, as $(\mathscr{P}_{\mathrm{reg}}(W),\leq)$ is a complete lattice by Theorem~\ref{thm:regularlattice2}. \end{defn} It is not immediate from the definition that $\mathscr{P}\leq\widehat{\mathscr{P}}$, however we shall see that this is indeed true in Theorem~\ref{thm:newterminateatregularisation} below (and hence $\widehat{\mathscr{P}}$ is the minimal regular partition refining~$\mathscr{P}$). \subsection{Simple refinements algorithm}\label{sec:simplerefinements} We now develop an algorithm to compute $\widehat{\mathscr{P}}$. This algorithm will not always terminate in finite time, however we will provide natural sufficient conditions under which it will terminate in finite time. To begin with, if $\mathscr{P}\in\mathscr{P}(W)$ then the minimal locally constant partition refining $\mathscr{P}$ is obviously $\mathscr{P}\vee \mathscr{D}$ (see Lemma~\ref{lem:Sminimal}), whose parts are the sets $P\cap D_L^{-1}(J)$ for $P\in\mathscr{P}$, $J\subseteq S$ spherical, and $P\cap D_L^{-1}(J)\neq\emptyset$. Moreover, from the definition of regular completion it is clear that $\widehat{\mathscr{P}}=\widehat{\mathscr{P}\vee\mathscr{D}}$, for if $\mathscr{R}$ is regular with $\mathscr{P}\leq\mathscr{R}$ then $\mathscr{P}\vee\mathscr{D}\leq \mathscr{R}\vee\mathscr{D}=\mathscr{R}$ (as $\mathscr{R}$ is locally constant). Thus after replacing $\mathscr{P}$ with $\mathscr{P}\vee\mathscr{D}$, we may assume that $\mathscr{P}$ is locally constant. We now define a \textit{simple refinement} of a locally constant partition. 
These operations will be the basic building blocks of our algorithm for computing the regular completion. \begin{defn}\label{defn:simplerefinement} Let $\mathscr{P}$ be a locally constant partition of $W$. Suppose that $(P,s)\in\mathscr{P}\times S$ is such that $s\notin D_L(P)$ and $sP$ is not contained in a part of $\mathscr{P}$. Let $ X=\{P'\in\mathscr{P}\mid P'\cap sP\neq\emptyset\}, $ and partition the set $P$ as $$ P=\bigsqcup_{P'\in X}(P\cap sP'), $$ and let $\mathscr{P}'$ be the refinement of $\mathscr{P}$ obtained by replacing the part $P$ of $\mathscr{P}$ by the above partition. We call the refinement $\mathscr{P}\mapsto \mathscr{P}'$ the \textit{simple refinement}, and we say that this refinement is \textit{based at the pair $(P,s)$}. \end{defn} \newpage We note the following. \begin{prop}\label{prop:finiteparts} If $\mathscr{P}$ is locally constant and $\mathscr{P}\mapsto\mathscr{P}'$ by a simple refinement, then $\mathscr{P}'$ is locally constant, and if $|\mathscr{P}|<\infty$ then $|\mathscr{P}'|<2|\mathscr{P}|<\infty$. \end{prop} \begin{proof} The first statement is clear, because all refinements of a locally constant partition are locally constant. For the second statement, note that $|\mathscr{P}'|=|\mathscr{P}|+|X|-1$ (with $X$ as in Definition~\ref{defn:simplerefinement}, as the single part $P$ is replaced by $|X|$ parts $P\cap sP'$ with $P'\in X$), and that $|X|\leq |\mathscr{P}|$. \end{proof} \begin{exa} Figure~\ref{fig:simplerefinement} gives an example of a simple refinement in type $\tilde{\mathsf{G}}_2$. We start with the locally constant partition $\mathscr{P}$ determined by the solid heavy lines. Let $P$ be the part of $\mathscr{P}$ shaded blue, and let $s$ be reflection in the vertical wall bounding the identity chamber (shaded grey). Note that $s\notin D_L(P)$, as $e$ and $P$ lie on the same side of $H_{\alpha_s}$. The set $sP$ is shaded red. There are $4$ parts $P'$ of $\mathscr{P}$ such that $sP\cap P'\neq\emptyset$. 
Let $\mathscr{P}\mapsto \mathscr{P}'$ via the simple refinement based at $(P,s)$. The partition $\mathscr{P}'$ is given by the union of the solid heavy lines and the dotted heavy lines. The meaning of the black and red circles will be given in Section~\ref{sec:gatedpartitions} (see Example~\ref{ex:gates}). \begin{figure}[H] \centering \begin{tikzpicture}[scale=1] \path [fill=red!20] (-0.433,0.75)--(-0.866,1.5)--(-4.33,3.5)--(-4.33,-1.5); \path [fill=gray!90] (0,0) -- (0.433,0.75) -- (0,1) -- (0,0); \path [fill=blue!40] (0.433,0.75)--(0.866,1.5)--(4.33,3.5)--(4.33,-1.5); \draw(-4.33,4.5)--(4.33,4.5); \draw(-4.33,3)--(4.33,3); \draw(-4.33,1.5)--(4.33,1.5); \draw(-4.33,0)--(4.33,0); \draw(-4.33,-1.5)--(4.33,-1.5); \draw(-4.33,-3)--(4.33,-3); \draw(-4.33,-3)--(-4.33,4.5); \draw(-3.464,-3)--(-3.464,4.5); \draw(-2.598,-3)--(-2.598,4.5); \draw(-1.732,-3)--(-1.732,4.5); \draw(-.866,-3)--(-.866,4.5); \draw(0,-3)--(0,4.5); \draw(.866,-3)--(.866,4.5); \draw(1.732,-3)--(1.732,4.5); \draw(2.598,-3)--(2.598,4.5); \draw(3.464,-3)--(3.464,4.5); \draw(4.33,-3)--(4.33,4.5); \draw(-4.33,3.5)--({-3*0.866},4.5); \draw(-4.33,2.5)--({-1*0.866},4.5); \draw(-4.33,1.5)--({1*0.866},4.5); \draw(-4.33,.5)--({3*0.866},4.5); \draw(-4.33,-.5)--(4.33,4.5); \draw(-4.33,-1.5)--(4.33,3.5); \draw(-4.33,-2.5)--(4.33,2.5); \draw(-3.464,-3)--(4.33,1.5); \draw(-1.732,-3)--(4.33,.5); \draw(0,-3)--(4.33,-.5); \draw(1.732,-3)--(4.33,-1.5); \draw(3.464,-3)--(4.33,-2.5); \draw(4.33,3.5)--({3*0.866},4.5); \draw(4.33,2.5)--({1*0.866},4.5); \draw(4.33,1.5)--({-1*0.866},4.5); \draw(4.33,.5)--({-3*0.866},4.5); \draw(4.33,-.5)--(-4.33,4.5); \draw(4.33,-1.5)--(-4.33,3.5); \draw(4.33,-2.5)--(-4.33,2.5); \draw(3.464,-3)--(-4.33,1.5); \draw(1.732,-3)--(-4.33,.5); \draw(0,-3)--(-4.33,-.5); \draw(-1.732,-3)--(-4.33,-1.5); \draw(-3.464,-3)--(-4.33,-2.5); \draw(-4.33,-1.5)--(-3.464,-3); \draw(-4.33,1.5)--(-1.732,-3); \draw(-4.33,4.5)--(0,-3); \draw({-3*0.866},4.5)--(1.732,-3); \draw({-1*0.866},4.5)--(3.464,-3); 
\draw({1*0.866},4.5)--(4.33,-1.5); \draw({3*0.866},4.5)--(4.33,1.5); \draw(4.33,-1.5)--(3.464,-3); \draw(4.33,1.5)--(1.732,-3); \draw(4.33,4.5)--(0,-3); \draw({3*0.866},4.5)--(-1.732,-3); \draw({1*0.866},4.5)--(-3.464,-3); \draw({-1*0.866},4.5)--(-4.33,-1.5); \draw({-3*0.866},4.5)--(-4.33,1.5); \draw[line width=2pt](0,-3)--(0,4.5); \draw[line width=2pt]({-2*0.866},-3)--({3*0.866},4.5); \draw[line width=2pt]({-5*0.866},-2.5)--(0,0); \draw[line width=2pt]({-3*0.866},4.5)--({2*0.866},-3); \draw[line width=2pt]({-5*0.866},2.5)--(0,0); \draw[line width=2pt]({-5*0.866},3.5)--({5*0.866},-1.5); \draw[line width=2pt]({-5*0.866},1.5)--(-0.866,1.5); \draw[line width=2pt](-0.866,1.5)--(-0.866,4.5); \draw[line width=2pt](0,1)--({5*0.866},3.5); \draw[line width=2pt, dotted] (0.866,0.5)--({5*0.866},2.5); \draw[line width=2pt, dotted] (0.866,1.5)--({5*0.866},1.5); \node at (-0.5,0.5) {$\bullet$}; \node at (-0.65,0.15) {$\bullet$}; \node at (-1.55,1.7) {$\bullet$}; \node at (-3.3,1.7) {$\bullet$}; \node [color=red] at (0.7,0.82) {$\circ$}; \node [color=red] at (1.25,0.5) {$\bullet$}; \node [color=red] at (1.55,1.7) {$\bullet$}; \node [color=red] at (3.3,1.7) {$\bullet$}; \end{tikzpicture} \caption{A simple refinement}\label{fig:simplerefinement} \end{figure} \end{exa} The following algorithm is called the \textit{simple refinements algorithm}. \begin{alg}\label{alg:regularisation} Let $\mathscr{P}\in\mathscr{P}(W)$. Let $\mathscr{P}_0=\mathscr{P}\vee\mathscr{D}$. For $j\geq 1$, if there exists a pair $(P,s)$ with $P\in\mathscr{P}_{j-1}$ and $s\in S$ with $s\notin D_L(P)$ and $sP\not\subseteq P'$ for any $P'\in\mathscr{P}_{j-1}$, let $\mathscr{P}_{j}$ be the simple refinement of $\mathscr{P}_{j-1}$ based at the pair $(P,s)$. \end{alg} We will show in Theorem~\ref{thm:newterminateatregularisation} below that if Algorithm~\ref{alg:regularisation} terminates in finite time, then the output of the algorithm is the regular completion of the input partition. 
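Algorithm~\ref{alg:regularisation} is straightforward to prototype in a finite toy case. The following Python sketch (our illustration, under the same assumptions as before: $W=S_3$ of type $\mathsf{A}_2$, generators $s=(1\,2)$ and $t=(2\,3)$, inversion-count length) starts from the descent partition $\mathscr{D}$ and applies simple refinements until no violating pair $(P,s)$ remains. In this small case all six cone types are distinct, so the regular completion $\widehat{\mathscr{D}}$ is the partition into singletons.

```python
from itertools import permutations

# Toy model: W = S3 (type A2); these representation choices are assumptions
# of the illustration, not data from the paper.
def comp(u, v):
    return tuple(u[v[i] - 1] for i in range(3))

GENS = {'s': (2, 1, 3), 't': (1, 3, 2)}
W = list(permutations((1, 2, 3)))

def length(w):
    return sum(1 for i in range(3) for j in range(i + 1, 3) if w[i] > w[j])

def DL(w):  # left descent set of w
    return frozenset(a for a, g in GENS.items() if length(comp(g, w)) < length(w))

def violating_pair(partition):
    """Find (P, a) with a not in D_L(P) and g.P not inside a single part."""
    for P in partition:
        descents = DL(next(iter(P)))  # constant on P (locally constant)
        for a, g in GENS.items():
            if a in descents:
                continue
            gP = frozenset(comp(g, w) for w in P)
            if not any(gP <= Q for Q in partition):
                return P, a
    return None

def simple_refinement(partition, P, a):
    """Replace P by the nonempty pieces P & g.P' over parts P' meeting g.P."""
    g = GENS[a]
    gP = frozenset(comp(g, w) for w in P)
    pieces = [P & frozenset(comp(g, w) for w in Q)
              for Q in partition if Q & gP]
    return [Q for Q in partition if Q != P] + [p for p in pieces if p]

# Start from the descent partition D (elements grouped by left descent set).
by_descents = {}
for w in W:
    by_descents.setdefault(DL(w), set()).add(w)
partition = [frozenset(P) for P in by_descents.values()]

# Run the simple refinements algorithm until the partition is regular.
while (hit := violating_pair(partition)) is not None:
    partition = simple_refinement(partition, *hit)

# In type A2 all six cone types are distinct, so the completion of D is
# the partition into singletons.
assert sorted(len(P) for P in partition) == [1, 1, 1, 1, 1, 1]
```

The `while` loop terminates here because $W$ is finite; for infinite $W$ one would work with a finite ball, and termination is exactly the issue addressed by Theorem~\ref{thm:newterminateatregularisation} and Example~\ref{exa:nonterminate} below.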
The key observation is the following lemma. \begin{lem}\label{lem:minimalrefinement} Let $\mathscr{P}$ be a locally constant partition, and suppose that $\mathscr{R}$ is a regular partition with $\mathscr{P}\leq \mathscr{R}$. Let $\mathscr{P}\mapsto \mathscr{P}'$ by a simple refinement. Then $\mathscr{P}'\leq \mathscr{R}$. \end{lem} \begin{proof} We must show that each part of $\mathscr{R}$ is contained in a part of $\mathscr{P}'$. Suppose that the simple refinement $\mathscr{P}\mapsto\mathscr{P}'$ is based at the pair $(P,s)\in\mathscr{P}\times S$. Let $R$ be a part of $\mathscr{R}$. Then $R\subseteq Q$ for some part $Q$ of $\mathscr{P}$ (because $\mathscr{P}\leq \mathscr{R}$). If $Q\neq P$ then $Q$ is also a part of $\mathscr{P}'$ (by the definition of simple refinements), and we are done. So suppose that $Q=P$ and so $R\subseteq P$. Since $\mathscr{R}$ is regular, and since $s\notin D_L(R)$ (as $s\notin D_L(P)$) we have that $sR\subseteq R'$ for some part $R'$ of $\mathscr{R}$. Moreover, since $\mathscr{P}\leq \mathscr{R}$ and since $R'\cap R=\emptyset$ (by the locally constant condition) we have $R'\subseteq P'$ for some part $P'$ of $\mathscr{P}$ with $P'\in X$ (with $X$ as in Definition~\ref{defn:simplerefinement}). Thus $sR\subseteq R'\subseteq P'$, and so $R\subseteq sP'$. But also $R\subseteq P$, and so $R\subseteq P\cap sP'$, which is a part of $\mathscr{P}'$. \end{proof} \begin{thm}\label{thm:newterminateatregularisation} Let $\mathscr{P}\in\mathscr{P}(W)$ and let $\widehat{\mathscr{P}}$ be the regular completion of~$\mathscr{P}$. \begin{enumerate} \item We have $\mathscr{P}\leq\widehat{\mathscr{P}}$. \item If Algorithm~\ref{alg:regularisation} terminates in finite time with output~$\mathscr{Q}$, then $\mathscr{Q}=\widehat{\mathscr{P}}$. \item If $|\mathscr{P}|<\infty$ then Algorithm~\ref{alg:regularisation} terminates in finite time if and only if $|\widehat{\mathscr{P}}|<\infty$. 
\end{enumerate} \end{thm} \begin{proof} Suppose first that the simple refinements algorithm terminates in finite time. Then there is a chain of partitions $\mathscr{P}\vee\mathscr{D}=\mathscr{P}_0\leq\mathscr{P}_1\leq\mathscr{P}_2\leq\cdots\leq\mathscr{P}_n$ with $\mathscr{P}_{j-1}\mapsto\mathscr{P}_j$ by a simple refinement for $1\leq j\leq n$ and with $\mathscr{P}_n$ regular. By Lemma~\ref{lem:minimalrefinement} every regular partition $\mathscr{R}$ with $\mathscr{P}\leq\mathscr{R}$ satisfies $\mathscr{P}_n\leq\mathscr{R}$, and since $\widehat{\mathscr{P}}$ is the meet of such partitions (by definition), and since $\mathscr{P}_n$ is regular, we have $\widehat{\mathscr{P}}=\mathscr{P}_n$. This proves (2), and also proves (1) in the case that the simple refinements algorithm terminates in finite time. Suppose now that the simple refinements algorithm does not terminate in finite time. Then one can construct an infinite chain $\mathscr{P}\vee\mathscr{D}=\mathscr{P}_0\leq\mathscr{P}_1\leq\mathscr{P}_2\leq\cdots$ of partitions with $\mathscr{P}_{j-1}\mapsto\mathscr{P}_j$ by a simple refinement for all $j\geq 1$. Moreover, it is clear that this sequence can be chosen such that for each $N>0$ there exists $M>0$ such that the partition $\mathscr{P}_M$ restricted to the ball $B_N=\{w\in W\mid \ell(w)\leq N\}$ is regular (by this we shall mean that if $P$ is a part of $\mathscr{P}_M$ and $s\notin D_L(P)$ then $sP\cap B_N$ is contained in some $P'\cap B_N$ with $P'$ a part of $\mathscr{P}_M$). Consider the join $$ \mathscr{P}_{\infty}=\bigvee_{n\geq 0}\mathscr{P}_n. $$ Every regular partition $\mathscr{R}$ with $\mathscr{P}\leq \mathscr{R}$ satisfies $\mathscr{P}_{\infty}\leq\mathscr{R}$ (for if not, then since $\mathscr{P}_0\leq\mathscr{P}_1\leq\cdots$ there is some $n$ with $\mathscr{P}_n\not\leq \mathscr{R}$, contradicting Lemma~\ref{lem:minimalrefinement}). 
Moreover, $\mathscr{P}_{\infty}$ is regular because by construction the restriction of $\mathscr{P}_{\infty}$ to each finite ball is regular. Thus, as in the previous paragraph, $\widehat{\mathscr{P}}=\mathscr{P}_{\infty}$, and the proof of (1) is complete. To prove (3), note that if $|\mathscr{P}|<\infty$ and the algorithm terminates in finite time, then the output partition (which is $\widehat{\mathscr{P}}$ by (2)) has finitely many parts by Proposition~\ref{prop:finiteparts}. Conversely, if $|\widehat{\mathscr{P}}|<\infty$, and if $\mathscr{P}\vee\mathscr{D}=\mathscr{P}_0\mapsto \mathscr{P}_1\mapsto\cdots \mapsto\mathscr{P}_n$ is a sequence of simple refinements, then since $|\mathscr{P}|<|\mathscr{P}_1|<\cdots <|\mathscr{P}_n|$, and since $\mathscr{P}_n\leq \widehat{\mathscr{P}}$ (by Lemma~\ref{lem:minimalrefinement}) we have $n\leq |\widehat{\mathscr{P}}|-|\mathscr{P}|$, and so Algorithm~\ref{alg:regularisation} terminates in finite time. \end{proof} \begin{rem} Note that, as a consequence of Theorem~\ref{thm:newterminateatregularisation}, if Algorithm~\ref{alg:regularisation} terminates in finite time then the output partition is independent of the order of the simple refinements chosen. \end{rem} We note, in passing, the following corollary. Recall (c.f. \cite[Section~7.1]{DP:02}) that a \textit{closure operator} on a partially ordered set $(X,\leq)$ is a map $c:X\to X$ satisfying (1) $x\leq c(x)$ for all $x\in X$, (2) if $x,y\in X$ with $x\leq y$ then $c(x)\leq c(y)$, and (3) $c(c(x))=c(x)$ for all $x\in X$. The \textit{closed elements} of the closure operator $c:X\to X$ are the elements $x\in X$ with $c(x)=x$. \begin{cor} The map $c:\mathscr{P}(W)\to\mathscr{P}(W)$ with $c(\mathscr{P})=\widehat{\mathscr{P}}$ is a closure operator on $\mathscr{P}(W)$. The set of closed elements is precisely $\mathscr{P}_{\mathrm{reg}}(W)$. 
\end{cor} \begin{proof} By Theorem~\ref{thm:newterminateatregularisation} we have $\mathscr{P}\leq c(\mathscr{P})$ for all $\mathscr{P}\in\mathscr{P}(W)$. If $\mathscr{P},\mathscr{Q}\in\mathscr{P}(W)$ with $\mathscr{P}\leq\mathscr{Q}$ then $\{\mathscr{R}\in\mathscr{P}_{\mathrm{reg}}(W)\mid \mathscr{P}\leq\mathscr{R}\}\supseteq \{\mathscr{R}\in\mathscr{P}_{\mathrm{reg}}(W)\mid \mathscr{Q}\leq\mathscr{R}\}$, and hence from the definition of regular completion we have $c(\mathscr{P})\leq c(\mathscr{Q})$. Since $\widehat{\mathscr{P}}$ is regular we have $c(c(\mathscr{P}))=c(\mathscr{P})$, and so $c$ is a closure operator. The closed elements are those partitions $\mathscr{P}\in\mathscr{P}(W)$ with $\mathscr{P}=\widehat{\mathscr{P}}$, and these are precisely the regular partitions. \end{proof} \newpage We note that the simple refinements algorithm (Algorithm~\ref{alg:regularisation}) may not terminate in finite time, even in the case that the input partition $\mathscr{P}$ has finitely many parts, as the following example shows. \begin{exa}\label{exa:nonterminate} Let $W=\langle s,t\mid s^2=t^2=e\rangle$ be the infinite dihedral group. Let $\mathscr{P}=\{P_0,P_1,P_2,P_3\}$ be the partition of $W$ into $4$ parts, with $P_0=\{e\}$, $P_1=\{w\in W\mid D_L(w)=\{s\}\}$, $P_2=\{w\in W\mid D_L(w)=\{t\}\text{ and }\ell(w)\in N\}$, and $P_3=\{w\in W\mid D_L(w)=\{t\}\text{ and }\ell(w)\notin N\}$, where $N=\{n(n+1)\mid n\in\mathbb{Z}_{>0}\}$. This is illustrated in Figure~\ref{fig:nonterminate}, with $P_0$ grey, $P_1$ green, $P_2$ red, and $P_3$ blue. 
\begin{figure}[H] \centering \begin{tikzpicture}[scale=0.5] \path [fill=red!30] (-1,0) -- (-2,0) -- (-2,0.6) -- (-1,0.6); \path [fill=red!30] (3,0) -- (2,0) -- (2,0.6) -- (3,0.6); \path [fill=red!30] (9,0) -- (8,0) -- (8,0.6) -- (9,0.6); \path [fill=gray!90] (-3,0) -- (-4,0) -- (-4,0.6) -- (-3,0.6); \path [fill=blue!40] (-2,0)--(-3,0)--(-3,0.6)--(-2,0.6); \path [fill=blue!40] (2,0)--(-1,0)--(-1,0.6)--(2,0.6); \path [fill=blue!40] (8,0)--(3,0)--(3,0.6)--(8,0.6); \path [fill=blue!40] (13.5,0)--(9,0)--(9,0.6)--(13.5,0.6); \path [fill=ForestGreen!40] (-4,0)--(-13.5,0)--(-13.5,0.6)--(-4,0.6); \draw (-13.5,0)--(13.5,0); \draw (-13,0)--(-13,0.6); \draw (-12,0)--(-12,0.6); \draw (-11,0)--(-11,0.6); \draw (-10,0)--(-10,0.6); \draw (-9,0)--(-9,0.6); \draw (-8,0)--(-8,0.6); \draw (-7,0)--(-7,0.6); \draw (-6,0)--(-6,0.6); \draw (-5,0)--(-5,0.6); \draw (-4,0)--(-4,0.6); \draw (-3,0)--(-3,0.6); \draw (-2,0)--(-2,0.6); \draw (-1,0)--(-1,0.6); \draw (0,0)--(0,0.6); \draw (1,0)--(1,0.6); \draw (2,0)--(2,0.6); \draw (3,0)--(3,0.6); \draw (4,0)--(4,0.6); \draw (5,0)--(5,0.6); \draw (6,0)--(6,0.6); \draw (7,0)--(7,0.6); \draw (8,0)--(8,0.6); \draw (9,0)--(9,0.6); \draw (10,0)--(10,0.6); \draw (11,0)--(11,0.6); \draw (12,0)--(12,0.6); \draw (13,0)--(13,0.6); \node at (-3.5,0.275) {$e$}; \node at (-4.5,0.275) {$s$}; \node at (-2.5,0.32) {$t$}; \end{tikzpicture} \caption{The simple refinements algorithm does not terminate in finite time}\label{fig:nonterminate} \end{figure} \noindent We claim that Algorithm~\ref{alg:regularisation} does not terminate in finite time when applied to $\mathscr{P}$. To prove this it is sufficient to show that the regular completion $\widehat{\mathscr{P}}$ has infinitely many parts (see Proposition~\ref{prop:finiteparts} and Theorem~\ref{thm:newterminateatregularisation}). For $n\geq 1$ write $w_n=tst\cdots s$ with $\ell(w_n)=n(n+1)$. Thus $P_2=\{w_1,w_2,w_3,\ldots\}$. 
We claim that if $i\neq j$ then $w_i$ and $w_j$ do not lie in a common part of $\widehat{\mathscr{P}}$. Suppose that $i<j$, and that $\{w_i,w_j\}$ is contained in a part $Q_0$ of $\widehat{\mathscr{P}}$. Since $s\notin D_L(Q_0)$ we have that $\{sw_i,sw_j\}$ is contained in a part $Q_1$ of $\widehat{\mathscr{P}}$ (as the regular completion is regular). Continuing, we have that $\{tsw_i,tsw_j\}$ is contained in a part $Q_2$ of $\widehat{\mathscr{P}}$, and so on. Writing $v=t\cdots sts$ with $\ell(v)=2(i+1)$ it follows that $\{vw_i,vw_j\}$ is contained in a part $Q_{2(i+1)}$ of $\widehat{\mathscr{P}}$. Note that $vw_i=w_{i+1}$ and $j(j+1)<\ell(vw_j)=j(j+1)+2(i+1)<j(j+1)+2(j+1)=(j+1)(j+2)$. Thus $vw_i\in P_2$ and $vw_j\in P_3$ are in different parts of $\mathscr{P}$, and hence in different parts of the refinement $\widehat{\mathscr{P}}$, a contradiction. Hence the result. \end{exa} We now provide a sufficient condition for Algorithm~\ref{alg:regularisation} to terminate in finite time. Given a partition $\mathscr{P}$, we define the \textit{roots of $\mathscr{P}$} to be the set $\Phi(\mathscr{P})$ of roots $\beta\in\Phi^+$ such that there exist parts $P_1\neq P_2$ of $\mathscr{P}$ and elements $w\in P_1$ and $s\in S$ such that $ws\in P_2$ and $w\alpha_s=\pm\beta$. Geometrically, this means that the wall $H_{\beta}$ of the Coxeter complex separates the chambers of $P_1$ from the chambers of $P_2$, and that $P_1\cap P_2$ (intersection of simplicial complexes) contains a panel (codimension~$1$ simplex) of $H_{\beta}$. For example, if $\mathscr{P}$ is the hyperplane partition induced by $\Lambda$, then one can easily check that $\Phi(\mathscr{P})=\Lambda$. Note that if $|\Phi(\mathscr{P})|<\infty$ then $|\mathscr{P}|<\infty$; however, the converse is false (see, for example, the infinite dihedral partition of Example~\ref{exa:nonterminate}). \begin{thm}\label{thm:finitetermination} Let $\mathscr{P}$ be a locally constant partition with $|\Phi(\mathscr{P})|<\infty$.
Then Algorithm~\ref{alg:regularisation} terminates in finite time, outputting the regular completion $\widehat{\mathscr{P}}$, and moreover $|\widehat{\mathscr{P}}|<\infty$. \end{thm} \begin{proof} Since $\mathcal{E}_0\subseteq \mathcal{E}_1\subseteq\cdots$ is a filtration of $\Phi^+$, and since $|\Phi(\mathscr{P})|<\infty$, there is $n\geq 0$ such that $\Phi(\mathscr{P})\subseteq \mathcal{E}_{n}$. From the definition of $\Phi(\mathscr{P})$ it is clear that $\mathscr{S}_{n}$ is a refinement of $\mathscr{P}$, and in particular, $|\mathscr{P}|<\infty$ (see Proposition~\ref{prop:partsdescription}). Then, by Lemma~\ref{lem:minimalrefinement}, if $\mathscr{P}\to\mathscr{P}'$ by a simple refinement we have $\mathscr{P}'\leq\mathscr{S}_n$. Since $|\mathscr{P}|<|\mathscr{P}'|<|\mathscr{S}_n|$ Algorithm~\ref{alg:regularisation} must terminate after finitely many iterations (at most $|\mathscr{S}_n|-|\mathscr{P}|$ iterations in fact). The output is $\widehat{\mathscr{P}}$ and $|\widehat{\mathscr{P}}|<\infty$, by Theorem~\ref{thm:newterminateatregularisation}. \end{proof} The following important corollary is a key ingredient in the proof of Theorem~\ref{thm:main1}. \begin{cor}\label{cor:regularlattice1} We have $\mathscr{T}=\widehat{\mathscr{D}}$. \end{cor} \begin{proof} Since $\mathscr{T}$ is regular (see Theorem~\ref{thm:regularpartitions}) we have $\mathscr{D}\leq \mathscr{T}$ (see Lemma~\ref{lem:Sminimal}). Hence by Lemma~\ref{lem:minimalrefinement} and the fact that Algorithm~\ref{alg:regularisation} terminates in finite time (Theorem~\ref{thm:finitetermination}), we have $\widehat{\mathscr{D}}\leq \mathscr{T}$. Since $\widehat{\mathscr{D}}$ is regular Corollary~\ref{cor:Tisminimal} gives $\mathscr{T}\leq \widehat{\mathscr{D}}$, and hence the result. 
\end{proof} Combining Corollary~\ref{cor:regularlattice1} with Algorithm~\ref{alg:regularisation} and Theorem~\ref{thm:finitetermination} we obtain an algorithmic way to compute the cone type partition~$\mathscr{T}$ by starting with $\mathscr{D}$ and applying simple refinements. However, it is more efficient to instead apply the following theorem, allowing us to start with $\mathscr{J}$ instead of $\mathscr{D}$ (see Example~\ref{ex:completingB2} below). \begin{thm}\label{thm:spherical1} We have $\mathscr{J}\leq \mathscr{T}$, and $\widehat{\mathscr{J}}=\mathscr{T}$. \end{thm} \begin{proof} To prove that $\mathscr{J}\leq \mathscr{T}$ we must show that if $x,y\in W$ with $T(x^{-1})=T(y^{-1})$, then $\Phi_{\mathrm{sph}}(x)=\Phi_{\mathrm{sph}}(y)$. Suppose that $\Phi_{\mathrm{sph}}(x)\neq\Phi_{\mathrm{sph}}(y)$. Then, after interchanging the roles of $x$ and $y$ if necessary, we may assume that there is a root $\beta\in\Phi_{\mathrm{sph}}^+$ with $\beta\in\Phi(y)\backslash\Phi(x)$. Let $J=\mathrm{supp}(\beta)$. Writing $x=uv$ and $y=u'v'$ with $u,u'\in W_J$ and $v,v'\in W^J$, we have $\Phi_J(x)=\Phi(u)$ and $\Phi_J(y)=\Phi(u')$ (see Lemma~\ref{lem:rootsdecomposition}), and so $u\neq u'$. Thus $T(x^{-1})\neq T(y^{-1})$ by Corollary~\ref{cor:distinguishconetypes}. Since $\mathscr{J}\leq \mathscr{T}$ we have $\widehat{\mathscr{J}}\leq \mathscr{T}$ (arguing as in the proof of Corollary~\ref{cor:regularlattice1}), but also $\mathscr{T}\leq \widehat{\mathscr{J}}$ by Corollary~\ref{cor:Tisminimal}, and hence $\widehat{\mathscr{J}}=\mathscr{T}$. \end{proof} \begin{exa}\label{ex:completingB2} Figure~\ref{fig:completingB2} illustrates the calculation of $\mathscr{T}$ for $\tilde{\mathsf{B}}_2$ using Theorem~\ref{thm:spherical1} and Algorithm~\ref{alg:regularisation}. Let $s$ (respectively, $t$) be the reflection in the vertical (respectively, horizontal) wall bounding the fundamental chamber. The spherical partition $\mathscr{J}$ is shown in Figure~\ref{fig:completingB2}(a) (in solid heavy lines). Let $P_0$ be the part of $\mathscr{J}$ shaded in blue, and then $sP_0$ is shaded red.
The partition $\mathscr{P}_1$ obtained by applying the simple refinement based at $(P_0,s)$ to $\mathscr{J}$ is shown in Figure~\ref{fig:completingB2}(a) as the union of the solid and dotted heavy lines. Similarly, Figure~\ref{fig:completingB2}(b) shows the simple refinement $\mathscr{P}_1\to\mathscr{P}_2$ based at $(P_1,s)$ (with $P_1\in\mathscr{P}_1$ shaded blue), Figure~\ref{fig:completingB2}(c) shows the simple refinement $\mathscr{P}_2\to\mathscr{P}_3$ based at $(P_2,t)$, and Figure~\ref{fig:completingB2}(d) shows the simple refinement $\mathscr{P}_3\to\mathscr{P}_4$ based at $(P_3,t)$. Since $\mathscr{P}_4$ is regular we have $\mathscr{P}_4=\widehat{\mathscr{J}}=\mathscr{T}$ (by Theorems~\ref{thm:newterminateatregularisation} and~\ref{thm:spherical1}). \begin{figure}[H] \centering \subfigure[$\mathscr{J}\mapsto\mathscr{P}_1$]{ \begin{tikzpicture}[scale=0.65] \path [fill=gray!90] (0,0) -- (1,1) -- (0,1) -- (0,0); \path [fill=blue!40] (1,1)--(1,5)--(0,5)--(0,2)--(1,1); \path [fill=red!20] (-1,1)--(-1,5)--(0,5)--(0,2)--(-1,1); \draw (-4,-2) -- (5,-2); \draw (-4,-1) -- (5,-1); \draw [line width=2pt](-4,0) -- (5,0); \draw [line width=2pt](-4,1) -- (5,1); \draw (-4,2) -- (5,2); \draw (-4,3) -- (5,3); \draw (-4,4) -- (5,4); \draw (-4,5) -- (5,5); \draw (-4,-3) -- (5,-3); \draw (-4,-4) -- (5,-4); \draw (-2,-4) -- (-2,5); \draw (-1,-4) -- (-1,5); \draw [line width=2pt](0,-4) -- (0,5); \draw [line width=2pt](1,-4) -- (1,5); \draw (2,-4) -- (2,5); \draw (-4,-4) -- (-4,5); \draw (-3,-4) -- (-3,5); \draw (3,-4) -- (3,5); \draw (4,-4) -- (4,5); \draw (5,-4) -- (5,5); \draw (-4,4)--(-3,5); \draw (-4,2)--(-1,5); \draw (-4,0) -- (1,5); \draw (-4,-2) -- (3,5); \draw [line width=2pt](-4,-4) -- (5,5); \draw (-2,-4) -- (5,3); \draw (0,-4) -- (5,1); \draw (2,-4)--(5,-1); \draw (4,-4)--(5,-3); \draw (3,5)--(5,3); \draw (1,5)--(5,1); \draw (-1,5)--(5,-1); \draw[line width=2pt] (-3,5)--(5,-3); \draw[line width=2pt] (-4,4)--(4,-4); \draw (-4,2)--(2,-4); \draw (-4,0)--(0,-4); \draw 
(-4,-2)--(-2,-4); \draw[dotted, line width=2pt] (0,2)--(1,3); \end{tikzpicture} }\qquad \subfigure[$\mathscr{P}_1\mapsto\mathscr{P}_2$]{ \begin{tikzpicture}[scale=0.65] \path [fill=gray!90] (0,0) -- (1,1) -- (0,1) -- (0,0); \path [fill=blue!40] (1,1)--(1,5)--(5,5)--(1,1); \path [fill=red!20] (-1,1)--(-1,5)--(-4,5)--(-4,4)--(-1,1); \draw (-4,-2) -- (5,-2); \draw (-4,-1) -- (5,-1); \draw [line width=2pt](-4,0) -- (5,0); \draw [line width=2pt](-4,1) -- (5,1); \draw (-4,2) -- (5,2); \draw (-4,3) -- (5,3); \draw (-4,4) -- (5,4); \draw (-4,5) -- (5,5); \draw (-4,-3) -- (5,-3); \draw (-4,-4) -- (5,-4); \draw (-2,-4) -- (-2,5); \draw (-1,-4) -- (-1,5); \draw [line width=2pt](0,-4) -- (0,5); \draw [line width=2pt](1,-4) -- (1,5); \draw (2,-4) -- (2,5); \draw (-4,-4) -- (-4,5); \draw (-3,-4) -- (-3,5); \draw (3,-4) -- (3,5); \draw (4,-4) -- (4,5); \draw (5,-4) -- (5,5); \draw (-4,4)--(-3,5); \draw (-4,2)--(-1,5); \draw (-4,0) -- (1,5); \draw (-4,-2) -- (3,5); \draw [line width=2pt](-4,-4) -- (5,5); \draw (-2,-4) -- (5,3); \draw (0,-4) -- (5,1); \draw (2,-4)--(5,-1); \draw (4,-4)--(5,-3); \draw (3,5)--(5,3); \draw (1,5)--(5,1); \draw (-1,5)--(5,-1); \draw[line width=2pt] (-3,5)--(5,-3); \draw[line width=2pt] (-4,4)--(4,-4); \draw (-4,2)--(2,-4); \draw (-4,0)--(0,-4); \draw (-4,-2)--(-2,-4); \draw[line width=2pt] (0,2)--(1,3); \draw[dotted, line width=2pt] (1,3)--(3,5); \end{tikzpicture} } \subfigure[$\mathscr{P}_2\mapsto\mathscr{P}_3$]{ \begin{tikzpicture}[scale=0.65] \path [fill=gray!90] (0,0) -- (1,1) -- (0,1) -- (0,0); \path [fill=blue!40] (0,0)--(-4,0)--(-4,1)--(-1,1)--(0,0); \path [fill=red!20] (0,2)--(-4,2)--(-4,1)--(-1,1)--(0,2); \draw (-4,-2) -- (5,-2); \draw (-4,-1) -- (5,-1); \draw [line width=2pt](-4,0) -- (5,0); \draw [line width=2pt](-4,1) -- (5,1); \draw (-4,2) -- (5,2); \draw (-4,3) -- (5,3); \draw (-4,4) -- (5,4); \draw (-4,5) -- (5,5); \draw (-4,-3) -- (5,-3); \draw (-4,-4) -- (5,-4); \draw (-2,-4) -- (-2,5); \draw (-1,-4) -- (-1,5); \draw [line 
width=2pt](0,-4) -- (0,5); \draw [line width=2pt](1,-4) -- (1,5); \draw (2,-4) -- (2,5); \draw (-4,-4) -- (-4,5); \draw (-3,-4) -- (-3,5); \draw (3,-4) -- (3,5); \draw (4,-4) -- (4,5); \draw (5,-4) -- (5,5); \draw (-4,4)--(-3,5); \draw (-4,2)--(-1,5); \draw (-4,0) -- (1,5); \draw (-4,-2) -- (3,5); \draw [line width=2pt](-4,-4) -- (5,5); \draw (-2,-4) -- (5,3); \draw (0,-4) -- (5,1); \draw (2,-4)--(5,-1); \draw (4,-4)--(5,-3); \draw (3,5)--(5,3); \draw (1,5)--(5,1); \draw (-1,5)--(5,-1); \draw[line width=2pt] (-3,5)--(5,-3); \draw[line width=2pt] (-4,4)--(4,-4); \draw (-4,2)--(2,-4); \draw (-4,0)--(0,-4); \draw (-4,-2)--(-2,-4); \draw[dotted, line width=2pt] (-2,0)--(-1,1); \draw[line width=2pt] (0,2)--(3,5); \end{tikzpicture} }\qquad \subfigure[$\mathscr{P}_3\mapsto\mathscr{T}$]{ \begin{tikzpicture}[scale=0.65] \path [fill=gray!90] (0,0) -- (1,1) -- (0,1) -- (0,0); \path [fill=blue!40] (0,0)--(-4,-4)--(-4,0)--(0,0); \path [fill=red!20] (0,2)--(-4,2)--(-4,5)--(-3,5)--(0,2); \draw (-4,-2) -- (5,-2); \draw (-4,-1) -- (5,-1); \draw [line width=2pt](-4,0) -- (5,0); \draw [line width=2pt](-4,1) -- (5,1); \draw (-4,2) -- (5,2); \draw (-4,3) -- (5,3); \draw (-4,4) -- (5,4); \draw (-4,5) -- (5,5); \draw (-4,-3) -- (5,-3); \draw (-4,-4) -- (5,-4); \draw (-2,-4) -- (-2,5); \draw (-1,-4) -- (-1,5); \draw [line width=2pt](0,-4) -- (0,5); \draw [line width=2pt](1,-4) -- (1,5); \draw (2,-4) -- (2,5); \draw (-4,-4) -- (-4,5); \draw (-3,-4) -- (-3,5); \draw (3,-4) -- (3,5); \draw (4,-4) -- (4,5); \draw (5,-4) -- (5,5); \draw (-4,4)--(-3,5); \draw (-4,2)--(-1,5); \draw (-4,0) -- (1,5); \draw (-4,-2) -- (3,5); \draw [line width=2pt](-4,-4) -- (5,5); \draw (-2,-4) -- (5,3); \draw (0,-4) -- (5,1); \draw (2,-4)--(5,-1); \draw (4,-4)--(5,-3); \draw (3,5)--(5,3); \draw (1,5)--(5,1); \draw (-1,5)--(5,-1); \draw[line width=2pt] (-3,5)--(5,-3); \draw[line width=2pt] (-4,4)--(4,-4); \draw (-4,2)--(2,-4); \draw (-4,0)--(0,-4); \draw (-4,-2)--(-2,-4); \draw[line width=2pt] (-2,0)--(-1,1); 
\draw[dotted, line width=2pt] (-4,-2)--(-2,0); \draw[line width=2pt] (0,2)--(3,5); \end{tikzpicture} } \caption{Computing $\mathscr{T}$ using $\mathscr{T}=\widehat{\mathscr{J}}$}\label{fig:completingB2} \end{figure} \end{exa} \newpage We note the following corollary. \begin{cor}\label{cor:someobservations} The following are equivalent. \begin{enumerate} \item $\mathcal{E}=\Phi_{\mathrm{sph}}^+$, \item $\mathscr{S}_0=\mathscr{J}$, \item $\mathscr{T}=\mathscr{S}_0$. \end{enumerate} \end{cor} \begin{proof} If $\mathcal{E}=\Phi_{\mathrm{sph}}^+$ then $\mathscr{S}_0=\mathscr{J}$ directly from the definitions. If $\mathscr{S}_0=\mathscr{J}$ then $\mathscr{J}$ is regular (as $\mathscr{S}_0$ is regular by Theorem~\ref{thm:regularpartitions}). Thus $\mathscr{T}\leq\mathscr{J}$ by Corollary~\ref{cor:Tisminimal}. But $\mathscr{J}\leq\mathscr{T}$ by Theorem~\ref{thm:spherical1}, and so equality holds. Hence $\mathscr{T}=\mathscr{J}=\mathscr{S}_0$. On the other hand, suppose that $\mathscr{T}=\mathscr{S}_0$. Then the Brink--Howlett (i.e.\ the $0$-canonical) automaton is minimal, and so by \cite[Theorem~1]{PY:19} we have $\mathcal{E}=\Phi_{\mathrm{sph}}^+$. \end{proof} \section{Gated partitions}\label{sec:gatedpartitions} In this section we introduce the notion of a \textit{gated partition}. In a gated partition $\mathscr{P}$, each part $P\in\mathscr{P}$ contains a unique ``gate'' $g$ with the property that $g\preccurlyeq x$ for all $x\in P$, and we write $\Gamma(\mathscr{P})$ for the set of all gates. We show, in Section~\ref{sec:simplerefinementsgates}, that simple refinements preserve the gated property (provided an additional hypothesis, convexity, is assumed). It follows that $\mathscr{T}$ is gated, proving Theorem~\ref{thm:main1}. \subsection{Convex partitions}\label{sec:convex} We begin with a discussion of convexity.
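As a warm-up for the definitions below, the gate condition $g\preccurlyeq x$ described in the overview above is easy to see in the infinite dihedral group, where every element has a unique reduced word (an alternating string in $s$ and $t$) and the weak order $\preccurlyeq$ becomes a literal prefix test on reduced words. The following sketch is our own illustration only (the function names and the ball radius are our choices, not the paper's):

```python
# Our own illustrative model (not from the paper): the infinite dihedral
# group W = <s, t>.  Every element has a unique reduced word, an
# alternating string over {'s', 't'}, and g <= x in the (right) weak
# order if and only if the word of g is a literal prefix of that of x.

def alternating_words(max_len):       # ball of radius max_len in W
    words = [""]
    for first in "st":
        w = ""
        for k in range(max_len):
            w += "s" if (first == "s") == (k % 2 == 0) else "t"
            words.append(w)
    return words

def gate(part):                       # the gate of a part, or None
    for g in sorted(part, key=len):   # a gate must have minimal length
        if all(x.startswith(g) for x in part):
            return g
    return None

# Left-descent partition of the ball: D_L(w) is determined by the first
# letter of w, giving three parts {e}, the s-words, and the t-words.
N = 8
W = alternating_words(N)
parts = [[""],
         [w for w in W if w.startswith("s")],
         [w for w in W if w.startswith("t")]]
print([gate(P) for P in parts])       # -> ['', 's', 't']
```

For instance, the two-element set $\{s,t\}$ has no gate, while each left-descent part of the ball does, with gates $e$, $s$, and $t$; this is consistent with Lemma~\ref{lem:Spartitiongated} in rank~$2$, where the spherical subsets are $\emptyset$, $\{s\}$, and $\{t\}$.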
\begin{defn} A subset $X\subseteq W$ is \textit{convex} if for all $x,y\in X$, and all reduced expressions $x^{-1}y=s_1\cdots s_n$, each element $xs_1\cdots s_j$ with $0\leq j\leq n$ is in $X$. A partition $\mathscr{P}$ of $W$ is \textit{convex} if each part $P\in \mathscr{P}$ is convex. \end{defn} More intuitively, $X\subseteq W$ is convex if for all $x,y\in X$, each chamber that lies on a minimal length gallery from $x$ to $y$ in the Coxeter complex lies in $X$. Here \textit{gallery} means a sequence of adjacent chambers, starting at $x$, and ending at $y$. The following well-known result gives a useful characterisation of convexity. \begin{lem}\cite[Proposition~3.94]{AB:08}\label{lem:intersection} A subset of $W$ is convex if and only if it is an intersection of half-spaces. \end{lem} The above characterisation of convex sets leads to the following proposition. \begin{prop}\label{prop:convexitybasics} The following are convex: \begin{enumerate} \item the intersection of convex sets; \item hyperplane partitions; \item cones and cone types; \item the cone type partition~$\mathscr{T}$. \end{enumerate} \end{prop} \begin{proof} (1) is clear from Lemma~\ref{lem:intersection}. To prove (2), note that if $\Lambda\subseteq\Phi^+$ then the part of the hyperplane partition $\mathscr{H}(\Lambda)$ containing $w\in W$ is $$ P=\bigg(\bigcap_{\beta\in\Lambda_+}H_{\beta}^+\bigg)\cap\bigg(\bigcap_{\beta\in\Lambda_-}H_{\beta}^-\bigg) $$ where $\Lambda_{\pm}=\{\beta\in\Lambda\mid w\in H_{\beta}^{\pm}\}$, and use (1). Part (3) is clear from (1) and the formula in Theorem~\ref{thm:geometry1}, and the fact that $C(w)=wT(w)$. Finally, the partition~$\mathscr{T}$ is convex by the description of the parts given in Theorem~\ref{thm:conetypeprojection} combined with~(1). \end{proof} In particular, note that Proposition~\ref{prop:convexitybasics}(2) shows that $\mathscr{D}$, $\mathscr{J}$, and $\mathscr{S}_n$ are all convex. 
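In small rank these convexity statements can be verified mechanically via the gallery characterisation. The sketch below is our own illustration (realising the type $\mathsf{A}_2$ group as the symmetric group $S_3$; all function names are our choices, not the paper's): it checks that every part of the left-descent partition of $S_3$ is convex, and that the set $\{e,s,t,st,ts\}$ is not, since its unique minimal gallery from $st$ to $ts$ passes through $sts$.

```python
from itertools import permutations

# Our own illustration (not from the paper): type A2 realised as S_3,
# elements as permutations of (0, 1, 2); s = s_0 and t = s_1.

def right(w, i):                      # w * s_i: swap positions i, i+1
    w = list(w); w[i], w[i + 1] = w[i + 1], w[i]; return tuple(w)

def inverse(w):
    out = [0, 0, 0]
    for i, v in enumerate(w):
        out[v] = i
    return tuple(out)

def comp(u, v):                       # composition: u after v
    return tuple(u[v[i]] for i in range(3))

def length(w):                        # Coxeter length = inversion number
    return sum(w[a] > w[b] for a in range(3) for b in range(a + 1, 3))

def dist(x, y):                       # gallery distance = length of x^{-1} y
    return length(comp(inverse(x), y))

def galleries(x, y):                  # all minimal galleries from x to y
    if x == y:
        return [[y]]
    return [[x] + rest
            for i in (0, 1)
            for z in [right(x, i)]
            if dist(z, y) < dist(x, y)
            for rest in galleries(z, y)]

def is_convex(X):                     # closed under minimal galleries
    X = set(X)
    return all(set(c) <= X for x in X for y in X for c in galleries(x, y))

W = list(permutations(range(3)))

def D_L(w):                           # left descents, via l(s_i w) = l(w^{-1} s_i)
    return frozenset(i for i in (0, 1)
                     if length(right(inverse(w), i)) < length(w))

def descent_classes():                # parts of the left-descent partition
    classes = {}
    for w in W:
        classes.setdefault(D_L(w), []).append(w)
    return list(classes.values())

# Every left-descent class is convex, as the proposition above predicts:
assert all(is_convex(C) for C in descent_classes())

# But {e, s, t, st, ts} is not convex: the unique minimal gallery from
# st = (1,2,0) to ts = (2,0,1) passes through sts = (2,1,0).
assert not is_convex({(0,1,2), (1,0,2), (0,2,1), (1,2,0), (2,0,1)})
```

This is only a rank-$2$ sanity check; the gallery enumeration is exponential in the distance and is not intended as a practical algorithm.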
\begin{rem}\label{rem:garsideconvexity} Based on examples, we expect that if $B$ is a Garside shadow, then $\mathscr{G}_B$ is convex. However we have only proved that $\mathscr{G}_B$ satisfies a weaker form of convexity (see Proposition~\ref{prop:garsidegated}). See Remark~\ref{rem:conicalconvex} for further discussion. \end{rem} \subsection{Gated partitions}\label{sec:gated} We now define gated partitions, provide some of the main examples, and prove some basic properties of the gates of a regular gated partition. \begin{defn}\label{defn:gatedpartitions} A subset $X\subseteq W$ is \textit{gated} if there exists $g\in X$ such that $g\preccurlyeq x$ for all $x\in X$. The element $g$ is called a \textit{gate} of $X$. A partition $\mathscr{P}$ of $W$ is called \textit{gated} if each part $P\in\mathscr{P}$ is gated. If $\mathscr{P}$ is gated we write $\Gamma(\mathscr{P})$ for the set of all gates of $\mathscr{P}$. \end{defn} \begin{lem}\label{lem:uniquegate} Every gated subset $X\subseteq W$ has a unique gate, and this gate is the unique minimal length element of~$X$. \end{lem} \begin{proof} If $X\subseteq W$ is gated, and if $g_1,g_2\in X$ are gates, then $g_1\preccurlyeq g_2$ and $g_2\preccurlyeq g_1$. Hence $g_1=g_2$. Let $g\in X$ be the unique gate. Since $g\preccurlyeq x$ for all $x\in X$ the element $g$ is the unique minimal length element of~$X$. \end{proof} We will show in Corollary~\ref{cor:gateexist} that $\mathscr{T}$ is gated (this proves Theorem~\ref{thm:main1}). We first develop some basic theory for gated partitions, and provide simple examples. The following weaker notion of convexity is useful for studying Garside partitions. \begin{defn} Let $X\subseteq W$ be gated, with gate $g$. We say that $X$ is \textit{weakly convex} if $g\preccurlyeq y\preccurlyeq x$ and $x\in X$ implies that $y\in X$. A gated partition $\mathscr{P}$ is called \textit{weakly convex} if each part $P\in\mathscr{P}$ is weakly convex. 
\end{defn} It is obvious that if a gated set $X\subseteq W$ (respectively a gated partition $\mathscr{P}\in\mathscr{P}(W)$) is convex, then $X$ (respectively $\mathscr{P}$) is also weakly convex; however, the converse is clearly false. For example, consider the partition $\mathscr{P}=\{\{e,s,t,st,ts\},\{sts\}\}$ of the $\mathsf{A}_2$ Coxeter group~$W$. This gated partition is weakly convex but not convex. The gates of a gated weakly convex partition have the following characterisation. \begin{prop}\label{prop:weakconvexity} If $X\subseteq W$ is gated and weakly convex then the gate $g$ of $X$ is characterised by the properties $g\in X$ and $gs\notin X$ for all $s\in D_R(g)$. \end{prop} \begin{proof} We have $gs\notin X$ for all $s\in D_R(g)$ as $g$ has minimal length in $X$ (by Lemma~\ref{lem:uniquegate}). Conversely, suppose that $x\in X$ is not the gate. Then $g\preccurlyeq x$ gives $x=gs_1\cdots s_n$ with $n\geq 1$ and $\ell(x)=\ell(g)+n$. Then $s_n\in D_R(x)$, and $g\preccurlyeq xs_n\preccurlyeq x$, and by weak convexity $xs_n\in X$. Thus $x$ fails the stated property, and the characterisation follows. \end{proof} The following is a simple, but important, example of a gated partition. \begin{lem}\label{lem:Spartitiongated} The partition $\mathscr{D}$ is a locally constant, convex, gated partition. Moreover, $$\Gamma(\mathscr{D})=\{w_J\mid J\subseteq S\text{ is spherical}\}.$$ \end{lem} \begin{proof} The partition $\mathscr{D}$ is locally constant by the description of the parts in Proposition~\ref{prop:partsdescription}, and it is convex by Proposition~\ref{prop:convexitybasics}. Let $J\subseteq S$ be spherical, and let $w_J$ be the longest element of $W_J$. Each $w\in W$ with $D_L(w)=J$ can be written as $w=w_Jv$ with $\ell(w)=\ell(w_J)+\ell(v)$ (see \cite[Proposition~2.17]{AB:08}), and hence $w_J\preccurlyeq w$. Thus the part $D_L^{-1}(J)$ of $\mathscr{D}$ is gated, with gate~$w_J$. \end{proof} The following theorem, applied to the case $X=W$, shows that the join of convex gated partitions is again convex and gated.
If $X\subseteq W$ then the notion of a gated partition of~$X$ has the obvious meaning. \begin{thm}\label{thm:joingated} Let $X\subseteq W$ be convex. Let $\mathscr{P}_i$, $i\in I$, be a family of convex (respectively weakly convex) gated partitions of $X$. Then the join $\mathscr{P}=\bigvee_{i\in I}\mathscr{P}_i$ is a convex (respectively weakly convex) gated partition of $X$. \end{thm} \begin{proof} Recall that the parts of $\mathscr{P}=\bigvee_{i\in I}\mathscr{P}_i$ are of the form $P=\bigcap_{i\in I}P_i$ with $P_i\in\mathscr{P}_i$ and $P\neq\emptyset$. Let $g_i$ be the gate of $P_i$. Since $P\neq\emptyset$ there is $w\in P$, and since $w\in P_i$ we have $g_i\preccurlyeq w$ for all $i\in I$. Thus $\{g_i\mid i\in I\}$ is bounded, and so $g=\bigvee\{g_i\mid i\in I\}$ exists. Moreover, for any $w\in P$ we have $g\preccurlyeq w$, and thus $g_i\preccurlyeq g\preccurlyeq w$ (for all $i\in I$). We may assume $\mathscr{P}_i$ is weakly convex (for if $\mathscr{P}_i$ is convex then it is also weakly convex). Thus we have $g\in P_i$ for each $i\in I$, and so $g\in P$. Thus $P$ is gated with gate~$g$. Moreover, if $w\in P$ and $g\preccurlyeq y\preccurlyeq w$ then $g_i\preccurlyeq g\preccurlyeq y\preccurlyeq w$ for all $i\in I$ and so $y\in P_i$ for all $i\in I$, giving $y\in P$. Thus $\mathscr{P}$ is weakly convex. Finally, it is clear that if each $\mathscr{P}_i$ is convex then $\mathscr{P}$ is convex (as the intersection of convex sets is convex, by Proposition~\ref{prop:convexitybasics}). \end{proof} \begin{cor}\label{cor:JIsGated} The spherical partition $\mathscr{J}$ is gated (see Proposition~\ref{prop:Jgated} for a description of the set of gates of~$\mathscr{J}$). \end{cor} \begin{proof} It is clear that for each spherical subset $J\subseteq S$ the partition $\mathscr{J}_J=\mathscr{H}(\Phi_J)$ is convex (being a hyperplane partition, see Proposition~\ref{prop:convexitybasics}) and gated (with $\Gamma(\mathscr{J}_J)=W_J$).
Since $$ \mathscr{J}=\bigvee_{J}\mathscr{J}_J, $$ with the join taken over spherical subsets $J\subseteq S$, the partition $\mathscr{J}$ is gated by Theorem~\ref{thm:joingated}. \end{proof} In the following proposition we show that Garside partitions are gated (see Theorem~\ref{thm:conesgated} for a generalisation). \begin{prop}\label{prop:garsidegated} Let $B$ be a Garside shadow. The partition $\mathscr{G}_B$ is gated and weakly convex, with $\Gamma(\mathscr{G}_B)=B$. \end{prop} \begin{proof} Each part of $\mathscr{G}_B$ is of the form $\pi^{-1}_B(b)$ for some $b\in B$, and if $x\in \pi^{-1}_B(b)$ then $b\preccurlyeq x$ by the definition of $\pi_B(x)$. Hence $\pi^{-1}_B(b)$ is gated with gate~$b$. We now show that $\mathscr{G}_B$ is weakly convex. Suppose that $b\in B$ and $x\in\pi^{-1}_B(b)$, and that $b\preccurlyeq y\preccurlyeq x$. Let $\pi_B(y)=b'$. Thus $b'\preccurlyeq y$, and so $b'\preccurlyeq y\preccurlyeq x$, giving $b'\preccurlyeq \pi_B(x)=b$ (by definition of $\pi_B(x)$). On the other hand, since $b\preccurlyeq y$ we have $b\preccurlyeq b'$ (by definition of $\pi_B(y)$), and so $b=b'$. Thus $y\in \pi^{-1}_B(b)$. \end{proof} \begin{rem} We note the following. \begin{enumerate} \item It is unknown if the $n$-Shi partition $\mathscr{S}_n$ is gated (see Conjecture~\ref{conj:Shigated}). \item The partitions in Figures~\ref{fig:B2partitions}, \ref{fig:G2partitions}, \ref{fig:A2partitions}, and~\ref{fig:nongarside} are all gated. However, we note that the gated property is in fact rather rare. For example, the partition in Figure~\ref{fig:nongated} is convex, regular, but not gated (the part shaded red has no gate).
\begin{figure}[H] \centering \begin{tikzpicture}[scale=0.9] \path [fill=red!30] (-1,{-2*0.866})--(-1.5,{-3*0.866})--(1.5,{-3*0.866})--(1,{-2*0.866}); \path [fill=gray!90] (0,0) -- (-0.5,0.866) -- (0.5,0.866) -- (0,0); \draw (2.5, {-3*0.866})--( 3.5, {-1*0.866} ); \draw (1.5, {-3*0.866})--( 3.5, {1*0.866} ); \draw (0.5, {-3*0.866})--( 3.5, {3*0.866} ); \draw (-0.5, {-3*0.866})--( 3, {4*0.866} ); \draw [line width=2pt](-1.5, {-3*0.866})--( 2, {4*0.866} ); \draw [line width=2pt] (-2.5, {-3*0.866})--( 1, {4*0.866} ); \draw (-3.5, {-3*0.866})--(0, {4*0.866} ); \draw (-3.5, {-1*0.866})--(-1, {4*0.866} ); \draw (-3.5, {1*0.866})--(-2, {4*0.866} ); \draw (-3.5, {3*0.866})--(-3, {4*0.866} ); \draw (-2.5, {-3*0.866})--( -3.5, {-1*0.866} ); \draw (-1.5, {-3*0.866})--( -3.5, {1*0.866} ); \draw (-0.5, {-3*0.866})--( -3.5, {3*0.866} ); \draw (0.5, {-3*0.866})--( -3, {4*0.866} ); \draw [line width=2pt](1.5, {-3*0.866})--( -2, {4*0.866} ); \draw [line width=2pt] (2.5, {-3*0.866})--( -1, {4*0.866} ); \draw (3.5, {-3*0.866})--(0, {4*0.866} ); \draw (3.5, {-1*0.866})--(1, {4*0.866} ); \draw (3.5, {1*0.866})--(2, {4*0.866} ); \draw (3.5, {3*0.866})--(3, {4*0.866} ); \draw (-3.5, -2.598)--( 3.5, -2.598); \draw (-3.5, -1.732)--( 3.5, -1.732); \draw (-3.5, -0.866)--( 3.5, -0.866); \draw[line width=2pt] (-3.5, 0)--( 3.5, 0); \draw (-3.5, 3.464)--( 3.5, 3.464 ); \draw (-3.5, 2.598)--( 3.5, 2.598); \draw (-3.5, 1.732)--( 3.5, 1.732); \draw [line width=2pt] (-3.5, 0.866)--( 3.5, 0.866); \draw [line width=2pt] (-1,{-2*0.866})--(-2,0)--(-1,{2*0.866})--(1,{2*0.866})--(2,0)--(1,{-2*0.866})--(-1,{-2*0.866}); \draw [line width=2pt] (-1,0)--(-1.5,0.866); \draw [line width=2pt] (1,0)--(1.5,0.866); % \end{tikzpicture} \caption{A regular convex partition that is not gated}\label{fig:nongated} \end{figure} \item \noindent There exist regular gated partitions that are not convex. 
For example, let $W$ be the infinite dihedral group generated by $s$ and $t$, and let $\mathscr{P}=\{P_0,P_1,P_2,P_3,P_4\}$ be the partition of~$W$ with $P_0=\{e\}$, $P_1=\{w\in W\mid D_L(w)=\{s\}\text{ and $\ell(w)\notin 2\mathbb{Z}$}\}$, $P_2=\{w\in W\mid D_L(w)=\{s\}\text{ and $\ell(w)\in 2\mathbb{Z}$}\}$, $P_3=\{w\in W\mid D_L(w)=\{t\}\text{ and $\ell(w)\notin 2\mathbb{Z}$}\}$, and $P_4=\{w\in W\mid D_L(w)=\{t\}\text{ and $\ell(w)\in 2\mathbb{Z}$}\}$. This partition is regular and gated (with corresponding gates $g_0=e$, $g_1=s$, $g_2=st$, $g_3=t$, and $g_4=ts$); however, it is clearly not convex. \item The set of gates of a gated regular partition does not determine the partition. For example, let $\mathscr{P}$ be the gated regular partition of the infinite dihedral group from~(3), and let $\mathscr{P}'=\{P_0',P_1',P_2',P_3',P_4'\}$ with $P_0'=\{e\}$, $P_1'=\{s\}$, $P_2'=\{w\in W\mid D_L(w)=\{s\}\text{ and }\ell(w)>1\}$, $P_3'=\{t\}$, and $P_4'=\{w\in W\mid D_L(w)=\{t\}\text{ and }\ell(w)>1\}$. Then $\mathscr{P}'$ is gated and regular, and $\Gamma(\mathscr{P}')=\Gamma(\mathscr{P})$. \end{enumerate} \end{rem} \newpage We are particularly interested in partitions that are both gated and regular. \begin{lem}\label{lem:minimalpath} Let $\mathscr{R}$ be a regular gated partition, and let $\mathcal{A}(\mathscr{R})=(\mathscr{R},\mu,\{e\})$ be the automaton constructed in Theorem~\ref{thm:regularautomaton}. Let $R\in\mathscr{R}$ with gate $g$, and let $(s_1,\ldots,s_n)$ be a reduced word. Then $g^{-1}=s_1\cdots s_n$ if and only if the path in $\mathcal{A}(\mathscr{R})$ starting at $\{e\}$ with edge labels $(s_1,\ldots,s_n)$ is of minimal length amongst all paths in $\mathcal{A}(\mathscr{R})$ from $\{e\}$ to~$R$. \end{lem} \begin{proof} By Theorem~\ref{thm:regularautomaton}, if $w=s_1\cdots s_n$ is reduced then the path in $\mathcal{A}(\mathscr{R})$ starting at $\{e\}$ with edge labels $(s_1,\ldots,s_n)$ ends at the part $R$ with $w^{-1}\in R$, and the lemma follows.
\end{proof} \begin{thm}\label{thm:suffixclosuregates} Let $\mathscr{R}$ be a regular gated partition. Then the set $\Gamma(\mathscr{R})$ of gates is closed under taking suffix. Moreover, if $J\subseteq S$ is spherical then $W_J\subseteq \Gamma(\mathscr{R})$, and in particular $S\subseteq \Gamma(\mathscr{R})$. \end{thm} \begin{proof} Let $\mathcal{A}(\mathscr{R})=(\mathscr{R},\mu,\{e\})$ be the automaton constructed in Theorem~\ref{thm:regularautomaton} and let $R\in\mathscr{R}$ with gate $g\in\Gamma(\mathscr{R})$. Let $s\in D_L(g)$ and choose a reduced expression $g^{-1}=s_1\cdots s_{n-1}s$. Then, by Lemma~\ref{lem:minimalpath} the path $$ \{e\}=R_0\to_{s_1}R_1\to_{s_2}\cdots\to_{s_{n-1}}R_{n-1}\to_sR_n=R $$ in $\mathcal{A}(\mathscr{R})$ from $\{e\}$ to $R$ with edge labels $(s_1,\ldots,s_{n-1},s)$ is of minimal length amongst all paths in $\mathcal{A}(\mathscr{R})$ from $\{e\}$ to $R$. Hence the path from $\{e\}$ to $R_{n-1}$ with edge labels $(s_1,\ldots,s_{n-1})$ is of minimal length amongst all paths from $\{e\}$ to $R_{n-1}$, and so by Lemma~\ref{lem:minimalpath} $g'=s_{n-1}\cdots s_1=sg$ is the gate of $R_{n-1}$. Thus $\Gamma(\mathscr{R})$ is closed under taking suffixes by induction. Since $\mathscr{D}\leq \mathscr{R}$ (by Lemma~\ref{lem:Sminimal}) and $\mathscr{D}$ has gates $w_J$ with $J\subseteq S$ spherical, it follows that $w_J\in\Gamma(\mathscr{R})$. Since $\Gamma(\mathscr{R})$ is closed under suffix we have $W_J\subseteq \Gamma(\mathscr{R})$ for all spherical $J\subseteq S$. \end{proof} \begin{exa} We note that the set $\Gamma(\mathscr{R})$ of gates of a regular gated partition is not necessarily closed under join (and hence $\Gamma(\mathscr{R})$ is not necessarily a Garside shadow). An example, in type $\tilde{\mathsf{A}}_2$, is given in Figure~\ref{fig:nongarside}. With $x,y\in\Gamma(\mathscr{R})$ as shown, we have that $z=x\vee y$ exists, yet $z\notin\Gamma(\mathscr{R})$. 
\begin{figure}[H] \centering \begin{tikzpicture}[scale=0.9] \path [fill=blue!30] (-2,0) -- (-1.5,-0.866) -- (-0.5,-0.866)--(0,-1.732)--(0.5,-0.866)-- (1.5,-0.866) -- (2,0); \path [fill=blue!30] (-1,0) -- (1,0) -- (1.5,0.866) -- (1,1.732)--(-1,1.732)--(-1.5,0.866)--(-1,0); \path [fill=blue!30] (0,1.732)--(-0.5,2.598)--(0.5,2.598); \path [fill=gray!90] (0,0) -- (-0.5,0.866) -- (0.5,0.866) -- (0,0); \draw (2.5, {-3*0.866})--( 3.5, {-1*0.866} ); \draw (1.5, {-3*0.866})--( 3.5, {1*0.866} ); \draw (0.5, {-3*0.866})--( 3.5, {3*0.866} ); \draw (-0.5, {-3*0.866})--( 3, {4*0.866} ); \draw [line width=2pt](-1.5, {-3*0.866})--( 2, {4*0.866} ); \draw [line width=2pt] (-2.5, {-3*0.866})--( 1, {4*0.866} ); \draw (-3.5, {-3*0.866})--(0, {4*0.866} ); \draw (-3.5, {-1*0.866})--(-1, {4*0.866} ); \draw (-3.5, {1*0.866})--(-2, {4*0.866} ); \draw (-3.5, {3*0.866})--(-3, {4*0.866} ); \draw (-2.5, {-3*0.866})--( -3.5, {-1*0.866} ); \draw (-1.5, {-3*0.866})--( -3.5, {1*0.866} ); \draw (-0.5, {-3*0.866})--( -3.5, {3*0.866} ); \draw (0.5, {-3*0.866})--( -3, {4*0.866} ); \draw [line width=2pt](1.5, {-3*0.866})--( -2, {4*0.866} ); \draw [line width=2pt] (2.5, {-3*0.866})--( -1, {4*0.866} ); \draw (3.5, {-3*0.866})--(0, {4*0.866} ); \draw (3.5, {-1*0.866})--(1, {4*0.866} ); \draw (3.5, {1*0.866})--(2, {4*0.866} ); \draw (3.5, {3*0.866})--(3, {4*0.866} ); \draw (-3.5, -2.598)--( 3.5, -2.598); \draw (-3.5, -1.732)--( 3.5, -1.732); \draw (-3.5, -0.866)--( 3.5, -0.866); \draw[line width=2pt] (-3.5, 0)--( 3.5, 0); \draw (-3.5, 3.464)--( 3.5, 3.464 ); \draw (-3.5, 2.598)--( 3.5, 2.598); \draw (-3.5, 1.732)--( 3.5, 1.732); \draw [line width=2pt] (-3.5, 0.866)--( 3.5, 0.866); \draw [line width=2pt] (-1,0)--(-0.5,-0.866)--(0.5,-0.866)--(1,0); % \node at (-1,-0.55) {$x$}; \node at (0,-1.2) {$y$}; \node at (-0.5,-1.4) {$z$}; \end{tikzpicture} \caption{A convex regular gated partition that is not join-closed}\label{fig:nongarside} \end{figure} \end{exa} We conclude this section by noting that for each
gated partition~$\mathscr{P}$ one can define a ``projection map'' $\pi_{\mathscr{P}}:W\to\Gamma(\mathscr{P})$ by \begin{align*} \pi_{\mathscr{P}}:W\to \Gamma(\mathscr{P}),\quad\text{with $\pi_{\mathscr{P}}(w)$ the gate of the part containing $w$}. \end{align*} The following lemma shows that this map generalises the projection map $\pi_B$ for a Garside shadow~$B$. \begin{lem}\label{lem:li} Let $B$ be a Garside shadow. Then $\pi_{\mathscr{G}_B}=\pi_B$. \end{lem} \begin{proof} Let $x\in \pi_B^{-1}(b)$, with $b\in B$. Since $b$ is the gate of $\pi_B^{-1}(b)$ we have $\pi_{\mathscr{G}_B}(x)=b=\pi_B(x)$. \end{proof} In fact, for a regular gated partition the associated automaton can be described purely using the gates (rather than the parts of the partition), and the resulting formulation mirrors the Garside case from Theorem~\ref{thm:garsideautomaton}. \begin{cor} Let $\mathscr{R}$ be a regular gated partition. Define $\mathcal{A}'(\mathscr{R})=(\Gamma(\mathscr{R}),\mu',e)$ by $$ \mu'(g,s)=\begin{cases} \pi_{\mathscr{R}}(sg)&\text{if $s\notin D_L(g)$}\\ \dagger&\text{if $s\in D_L(g)$}. \end{cases} $$ Then $\mathcal{A}'(\mathscr{R})\cong \mathcal{A}(\mathscr{R})$, where $\mathcal{A}(\mathscr{R})$ is the automaton constructed in Theorem~\ref{thm:regularautomaton}. \end{cor} \begin{proof} We define a bijection $f:\mathscr{R}\cup\{\dagger\}\to \Gamma(\mathscr{R})\cup\{\dagger\}$ by $f(\dagger)=\dagger$ and $f(P)=g$ if $g$ is the gate of $P$. We need to show that $f(\mu(P,s))=\mu'(f(P),s)$ for all $P\in\mathscr{R}$. Let $P\in\mathscr{R}$ and let $g$ be the gate of $P$. If $s\in D_L(P)$ then $f(\mu(P,s))=f(\dagger)=\dagger$ and also $\mu'(f(P),s)=\mu'(g,s)=\dagger$ as $D_L(g)=D_L(P)$. If $s\notin D_L(P)$ let $P'\in \mathscr{R}$ be the part of $\mathscr{R}$ with $sP\subseteq P'$. Let $g'$ be the gate of~$P'$. Then $f(\mu(P,s))=f(P')=g'$ and $\mu'(f(P),s)=\mu'(g,s)=\pi_{\mathscr{R}}(sg)$. 
By definition, $\pi_{\mathscr{R}}(sg)$ is the gate of the part containing $sg$, and since $g\in P$ and $sP\subseteq P'$ we have $\pi_{\mathscr{R}}(sg)=g'$, and hence the result. \end{proof} We note, in passing, the following analogue of \cite[Proposition~2.8]{HNW:16} for general projection maps. \begin{prop} Let $\mathscr{R}\in\mathscr{P}_{\mathrm{reg}}(W)$. If $w\in W$ and $s\notin D_L(w)$ then $\pi_{\mathscr{R}}(sw)=\pi_{\mathscr{R}}(s\pi_{\mathscr{R}}(w))$. \end{prop} \begin{proof} Let $P$ be the part of $\mathscr{R}$ containing $w$, and let $g$ be the gate of $P$. Thus $\pi_{\mathscr{R}}(w)=g$. Since $\mathscr{R}$ is regular we have $sP\subseteq P'$ for some part $P'$ of $\mathscr{R}$. Let $g'$ be the gate of $P'$. Since $sw\in sP\subseteq P'$ we have $\pi_{\mathscr{R}}(sw)=g'$. But also $sg\in P'$, and so $\pi_{\mathscr{R}}(s\pi_{\mathscr{R}}(w))=\pi_{\mathscr{R}}(sg)=g'$. Hence the result. \end{proof} \subsection{Simple refinements preserve the gate property}\label{sec:simplerefinementsgates} In this section we show that if $\mathscr{P}$ is locally constant, convex, and gated, and if $\mathscr{P}\to\mathscr{P}'$ via a simple refinement, then $\mathscr{P}'$ is also locally constant, convex, and gated. Theorem~\ref{thm1:preservegatedness} follows, and this is a key component of the proof of Theorem~\ref{thm:main1}. \begin{lem}\label{lem:littlegate} Let $\epsilon\in\{-,+\}$ and $s\in S$. If $X\subseteq H_{\alpha_s}^{\epsilon}$ is gated with gate $g$, then $sX$ is gated with gate~$sg$. \end{lem} \begin{proof} Let $x\in X$. Since $g$ is the gate of $X$ we have $\ell(g^{-1}x)=\ell(x)-\ell(g)$, and since $g,x\in X\subseteq H_{\alpha_s}^{\epsilon}$ we have $\ell(sx)-\ell(sg)=\ell(x)-\ell(g)$. Thus $$ \ell((sg)^{-1}(sx))=\ell(g^{-1}x)=\ell(x)-\ell(g)=\ell(sx)-\ell(sg), $$ and so $sg\preccurlyeq y$ for all $y=sx\in sX$. \end{proof} \begin{lem}\label{lem:littlejoin} Let $x,y\in W$ and $s\in S$ with $\{x,y\}$ bounded and $s\in D_L(x)\cap D_L(y)$. 
Then $s(x\vee y)=(sx)\vee (sy)$. \end{lem} \begin{proof} Let $z=x\vee y$. Since $\ell(sx)=\ell(x)-1$ and $x\preccurlyeq z$ we have $\ell(sz)=\ell(z)-1$. Then $ \ell((sx)^{-1}(sz))=\ell(x^{-1}z)=\ell(z)-\ell(x)=\ell(sz)-\ell(sx), $ and so $sx\preccurlyeq sz$. Similarly $sy\preccurlyeq sz$, and so $\{sx,sy\}$ is bounded, and $(sx)\vee(sy)\preccurlyeq sz$. Now let $w$ be any bound for $\{sx,sy\}$ with $w\preccurlyeq sz$. Since $s\notin D_L(sz)$ (as $\ell(sz)=\ell(z)-1$) and $w\preccurlyeq sz$ we have $s\notin D_L(w)$. Thus \begin{align*} \ell(x^{-1}(sw))&=\ell((sx)^{-1}w)\\ &=\ell(w)-\ell(sx)&&\text{as $sx\preccurlyeq w$ by assumption}\\ &=\ell(w)-\ell(x)+1&&\text{as $s\in D_L(x)$}\\ &=\ell(sw)-\ell(x)&&\text{as $s\notin D_L(w)$}. \end{align*} Thus $x\preccurlyeq sw$, and similarly $y\preccurlyeq sw$. So $sw$ is a bound for $\{x,y\}$. But also $sw\preccurlyeq z$, because \begin{align*} \ell((sw)^{-1}z)&=\ell(w^{-1}(sz))\\ &=\ell(sz)-\ell(w)&&\text{as $w\preccurlyeq sz$ by assumption}\\ &=\ell(z)-1-\ell(w)&&\text{as $s\in D_L(z)$}\\ &=\ell(z)-\ell(sw)&&\text{as $s\notin D_L(w)$}. \end{align*} Thus $sw=z$ (as $z=x\vee y$ is the least upper bound of $\{x,y\}$) and so $w=sz$. In particular we have $(sx)\vee(sy)=sz$. \end{proof} \begin{thm}\label{thm:simplerefinementsgates} Let $\mathscr{P}$ be a locally constant, convex, gated partition of $W$, and for each $P\in\mathscr{P}$ let $g_P$ denote the gate of~$P$. Suppose that $\mathscr{P}\mapsto\mathscr{P}'$ by a simple refinement based at $(P,s)$. Let $X=\{P'\in\mathscr{P}\mid sP\cap P'\neq\emptyset\}$, and let $P_0'$ be the element of $X$ with $sg_P\in P_0'$. 
Then \begin{enumerate} \item the partition $\mathscr{P}'$ is locally constant, convex, and gated; \item the element $g_P$ is the gate of $P\cap sP_0'$, and for each $P'\in X\backslash \{P_0'\}$ the set $\{g_P,sg_{P'}\}$ is bounded and $h_{P'}=g_P\vee sg_{P'}$ is the gate of $P\cap sP'$; \item if $\mathscr{P}$ has the property that each part $P\in\mathscr{P}$ is an intersection of finitely many half-spaces, then the partition $\mathscr{P}'$ also has this property. \end{enumerate} \end{thm} \begin{proof} Note that $s\notin D_L(P)$ (by the definition of simple refinements) and that $\mathscr{P}'$ is locally constant and convex (as all refinements of a locally constant partition are locally constant, and the intersection of convex sets is convex). It is also clear that (3) holds, for if $P\in\mathscr{P}$ and $P'\in X$ are expressed as an intersection of finitely many half-spaces, then $P\cap sP'$ can also be expressed as an intersection of finitely many half-spaces. Thus it remains to prove that each part $P\cap sP'$ of $\mathscr{P}'$ is gated. Since $g_P\in P\cap sP_0'$ and $g_P\preccurlyeq w$ for all $w\in P$ we have that $P\cap sP_0'$ is gated with gate $g_P$. Let $P'\in X\backslash\{P_0'\}$. Let $v\in sP\cap P'$ (note that $sP\cap P'$ is nonempty by hypothesis). Since $v\in sP$ we have $v=s(g_Px)$ for some $x\in W$ with $\ell(g_Px)=\ell(g_P)+\ell(x)$ (by the gate property of $P$), and since $s\notin D_L(P)$ we have $v=(sg_P)x$ with $\ell(sg_Px)=\ell(sg_P)+\ell(x)$. Since $v\in P'$ we have $v=g_{P'}y$ for some $y\in W$ with $\ell(g_{P'}y)=\ell(g_{P'})+\ell(y)$ (by the gate property of $P'$). Thus $v=(sg_P)x=g_{P'}y$ is an upper bound for $\{sg_P,g_{P'}\}$. Let $\overline{v}=sg_P\vee g_{P'}$ be the least upper bound of $\{sg_P,g_{P'}\}$. In particular, $\overline{v}$ is a prefix of each $v\in sP\cap P'$. We claim that $\overline{v}\in sP\cap P'$, and hence $\overline{v}$ is a gate of $sP\cap P'$. 
To see that $\overline{v}\in P'$, note that $g_{P'}\preccurlyeq \overline{v}\preccurlyeq v$ for all $v\in sP\cap P'$, because $\overline{v}=sg_P\vee g_{P'}$ and we showed above that $\overline{v}\preccurlyeq v$ for all such $v$. Since $g_{P'},v\in P'$ it follows that $\overline{v}\in P'$ as $P'$ is convex. Similarly, to see that $\overline{v}\in sP$ observe that $sg_P\preccurlyeq \overline{v}\preccurlyeq v$, and note that $sP$ is convex as $P$ is convex. Thus we have shown that $\overline{v}$ is the gate of $sP\cap P'$, and so $P\cap sP'$ is gated with gate $h_{P'}=s\overline{v}=s(sg_P\vee g_{P'})$ (by Lemma~\ref{lem:littlegate}). Since $s\in D_L(sg_P)\cap D_L(g_{P'})$, Lemma~\ref{lem:littlejoin} gives $h_{P'}=g_P\vee sg_{P'}$, completing the proof. \end{proof} \begin{exa}\label{ex:gates} Theorem~\ref{thm:simplerefinementsgates} is illustrated in Figure~\ref{fig:simplerefinement}. We have $X=\{P_0',P_1',P_2',P_3'\}$. The gates $g_0,g_1,g_2,g_3$ of the parts $P_0',P_1',P_2',P_3'$ are shown as black dots. The gate $g_P$ is shown as a red circle, and the ``new'' gates $h_j=g_P\vee sg_j$ are shown as red dots. \end{exa} We have the following important corollary, proving Theorem~\ref{thm1:preservegatedness}. \begin{cor}\label{cor:regularcompletiongated} Let $\mathscr{P}$ be a locally constant, convex and gated partition of $W$. If Algorithm~\ref{alg:regularisation} terminates in finite time then the regular completion $\widehat{\mathscr{P}}$ is gated and convex. \end{cor} \begin{proof} By assumption $\widehat{\mathscr{P}}$ can be obtained from $\mathscr{P}$ by a finite sequence of simple refinements, and hence the result follows from Theorem~\ref{thm:simplerefinementsgates}. \end{proof} \subsection{The gates of $W$ and minimal length cone type representatives} \label{sec:gatesW} We are finally able to prove Theorem~\ref{thm:main1}. \begin{cor}\label{cor:gateexist} The cone type partition $\mathscr{T}$ is regular, convex, and gated.
In particular, each part $X_T$ of the cone type partition has a unique minimal length element $g_T$, and if $x\in X_T$ then $g_T\preccurlyeq x$. \end{cor} \begin{proof} By Theorem~\ref{thm:finitetermination} and Corollary~\ref{cor:regularlattice1} we have that Algorithm~\ref{alg:regularisation} applied to the $S$-partition $\mathscr{D}$ terminates in finite time with $\mathscr{T}=\widehat{\mathscr{D}}$. By Lemma~\ref{lem:Spartitiongated} the partition $\mathscr{D}$ is convex and gated, and so by Corollary~\ref{cor:regularcompletiongated} the partition $\mathscr{T}$ is also convex and gated. Hence the result. \end{proof} \begin{cor}\label{cor:minexist} Each cone type $T$ has a unique minimal length cone type representative. That is, for each cone type $T$ the set $\{w\in W\mid T(w)=T\}$ has a unique minimal length element~$m_T$. Moreover, if $w\in W$ with $T(w)=T(m_T)$ then $m_T$ is a suffix of~$w$. We have $m_T=g_T^{-1}$, where $g_T$ is the gate of $X_T$. \end{cor} \begin{proof} It is obvious from Corollary~\ref{cor:gateexist} that if $g_T$ is the gate of the part $X_T$ of $\mathscr{T}$ then $m_T=g_T^{-1}$ is the unique minimal length element with $T(m_T)=T$. \end{proof} \begin{cor}\label{cor:finiteintersection} For each cone type $T$ the set $X_T$ can be expressed as an intersection of finitely many half-spaces. \end{cor} \begin{proof} This follows from part (3) of Theorem~\ref{thm:simplerefinementsgates}. \end{proof} \begin{defn} The \textit{gates of $W$} are the gates of the cone type partition. Let $\Gamma=\Gamma(\mathscr{T})$ denote the set of gates of $W$. Then $\Gamma^{-1}$ is the set of minimal length cone type representatives. \end{defn} We record the following observations. 
\begin{prop} \label{prop:gatesbasicfacts} We have \begin{enumerate} \item the set $\Gamma$ is closed under suffix; \item $W_J\subseteq \Gamma$ for each spherical subset $J\subseteq S$; \item if $\mathscr{P}$ is regular and gated then $\Gamma\subseteq \Gamma(\mathscr{P})$; \item if $B$ is a Garside shadow, then $\Gamma\subseteq B$; \item $\Gamma\subseteq L$; \item $|\Gamma|\leq|\mathbb{E}|$ with equality if and only if $\mathcal{E}=\Phi_{\mathrm{sph}}^+$. \end{enumerate} \end{prop} \begin{proof} (1) and (2) are special cases of Theorem~\ref{thm:suffixclosuregates}. (3) follows from the fact that $\mathscr{T}\leq \mathscr{P}$ for all regular partitions~$\mathscr{P}$ (by Corollary~\ref{cor:Tisminimal}), and (4) is a special case of (3), using Theorem~\ref{thm:regularpartitions} and Proposition~\ref{prop:garsidegated}. Moreover, since $L$ is a Garside shadow we have $\Gamma\subseteq L$ by (4), giving (5). Finally, since $|\Gamma|$ is the number of states of the minimal automaton recognising $\mathcal{L}(W,S)$, and since $\mathcal{A}_0$ has $|\mathbb{E}|$ states, we have $|\Gamma|\leq|\mathbb{E}|$. Equality holds if and only if the automaton $\mathcal{A}_0$ is minimal, and by \cite[Theorem~1]{PY:19} this occurs if and only if $\mathcal{E}=\Phi_{\mathrm{sph}}^+$. \end{proof} If $W$ is affine the partition $\mathscr{S}_0$ is gated by the classical work of Shi~\cite{Shi:87a,Shi:87b}; however, in general it is unknown whether the $n$-Shi partitions $\mathscr{S}_n$ are gated. We make the following conjecture. \begin{conj}\label{conj:Shigated} Let $n\in\mathbb{N}$. The $n$-Shi partition $\mathscr{S}_n$ is gated. \end{conj} In the case that $n=0$ and $(W,S)$ is affine, Conjecture~\ref{conj:Shigated} is true by Shi's work~\cite{Shi:87a,Shi:87b}.
In \cite[Conjecture~2]{DH:16} Dyer and Hohlweg conjecture that the map $\Theta_n:L_n\to \mathbb{E}_n$ with $\Theta_n(x)=\mathcal{E}_n(x)$ is bijective, and we note that this conjecture, if true, readily implies Conjecture~\ref{conj:Shigated} (with the gates being the $n$-low elements). Recently Chapelier-Laget and Hohlweg~\cite{CH:21} have proved \cite[Conjecture~2]{DH:16} in the case $n=0$ for affine Coxeter groups. Finally, Corollary~\ref{cor:gateexist} provides the following further evidence for Conjecture~\ref{conj:Shigated}. \begin{prop}\label{prop:evidencegated} If $\mathcal{E}=\Phi_{\mathrm{sph}}^+$ then $\mathscr{S}_0$ is gated. \end{prop} \begin{proof} By Corollary~\ref{cor:someobservations} if $\mathcal{E}=\Phi_{\mathrm{sph}}^+$ then $\mathscr{S}_0=\mathscr{T}$, which is gated by Corollary~\ref{cor:gateexist}. (Another approach is to use Theorem~\ref{thm:conjsspherical}.) \end{proof} In \cite[Conjecture~1]{HNW:16} Hohlweg, Nadeau and Williams conjecture that the automaton $\mathscr{G}_{\widetilde{S}}$ is the minimal automaton recognising $\mathcal{L}(W,S)$. We note that, in terms of the set $\Gamma$, minimality of $\mathscr{G}_{\widetilde{S}}$ is equivalent to the following. \begin{thm}\label{thm:GminGamma} The automaton $\mathscr{G}_{\widetilde{S}}$ is minimal if and only if $\Gamma$ is closed under join. \end{thm} \begin{proof} By Proposition~\ref{prop:gatesbasicfacts}(4) we have $\Gamma\subseteq\widetilde{S}$. If $\mathscr{G}_{\widetilde{S}}$ is minimal then $\mathcal{A}_{\widetilde{S}}\cong \mathcal{A}(W,S)$ (by Theorem~\ref{thm:MyhillNerode}) and hence $|\Gamma|=|\widetilde{S}|$, giving $\Gamma=\widetilde{S}$. Thus $\Gamma$ is closed under join. Conversely, if $\Gamma$ is closed under join, then by Proposition~\ref{prop:gatesbasicfacts} parts (1) and (2) we have that $\Gamma$ is a Garside shadow. Since $\widetilde{S}$ is the smallest Garside shadow and $\Gamma\subseteq \widetilde{S}$, we have $\Gamma=\widetilde{S}$. Hence $\mathcal{A}_{\widetilde{S}}$ has $|\Gamma|$ states, and since $|\Gamma|$ is the number of states of the minimal automaton recognising $\mathcal{L}(W,S)$, it follows that $\mathscr{G}_{\widetilde{S}}$ is minimal.
\end{proof} We have been unable to prove in general that $\Gamma$ is closed under join; however, the following theorem establishes this fact in the case $\mathcal{E}=\Phi_{\mathrm{sph}}^+$, providing evidence for Conjecture~\ref{conj:garside}. \begin{thm}\label{thm:conjectures} Suppose that $\mathcal{E}=\Phi_{\mathrm{sph}}^+$. Then $\Gamma=\widetilde{S}=L$. In particular, $\Gamma$ is closed under join. \end{thm} \begin{proof} By Proposition~\ref{prop:gatesbasicfacts}(4) we have $\Gamma\subseteq \widetilde{S}$, and since $L$ is a Garside shadow we have $\widetilde{S}\subseteq L$. By \cite[Proposition~3.26]{DH:16} we have $|L|\leq|\mathbb{E}|$, and hence $|\Gamma|\leq|\widetilde{S}|\leq |L|\leq|\mathbb{E}|$. Thus if $\mathcal{E}=\Phi_{\mathrm{sph}}^+$ then Proposition~\ref{prop:gatesbasicfacts}(6) forces $|\Gamma|=|\widetilde{S}|=|L|=|\mathbb{E}|$. Since $\Gamma\subseteq\widetilde{S}\subseteq L$ this gives $\Gamma=\widetilde{S}=L$. \end{proof} \begin{rem} Figures~\ref{fig:B2partitions}, \ref{fig:G2partitions}, and~\ref{fig:A2partitions} show the partitions $\mathscr{S}_0$ and $\mathscr{T}$ for $\tilde{\mathsf{B}}_2$, $\tilde{\mathsf{G}}_2$, and $\tilde{\mathsf{A}}_2$, respectively. In these cases it turns out (and conjecturally this is always true) that $\mathscr{S}_0$ is gated, with $\Gamma(\mathscr{S}_0)=L$. In the $\tilde{\mathsf{B}}_2$ and $\tilde{\mathsf{G}}_2$ cases the inclusion $\Gamma\subseteq L$ is strict. The elements of $\Gamma$ are shaded blue, and the elements of $L\backslash\Gamma$ are shaded red. In the $\tilde{\mathsf{A}}_2$ case we have $\Gamma=L$. Moreover, since $\Gamma$ and $L$ are both closed under suffix, the sets $\Gamma^{-1}$ and $L^{-1}$ are closed under prefix. Thus these sets are connected regions of the Coxeter complex. We draw these sets in Figure~\ref{fig:Shitriangles} for $\tilde{\mathsf{A}}_2$, $\tilde{\mathsf{B}}_2$, and $\tilde{\mathsf{G}}_2$, with $\Gamma^{-1}$ shaded blue, and $L^{-1}\backslash\Gamma^{-1}$ shaded red.
The fact that $L^{-1}$ is a dilation of the fundamental alcove in these cases is explained by a celebrated result of Shi (see~\cite[\S8]{Shi:87b}) and the observation that $L$ is the set of gates of $\mathscr{S}_0$ in these cases. \begin{figure}[H] \centering \subfigure{ \begin{tikzpicture}[scale=0.8] \path [fill=blue!30] (-2,{2*0.866})--(0,{-2*0.866})--(2,{2*0.866})--(-2,{2*0.866}); \path [fill=gray!90] (0,0) -- (-0.5,0.866) -- (0.5,0.866) -- (0,0); \draw [line width=2pt] (-2,{2*0.866})--(0,{-2*0.866})--(2,{2*0.866})--(-2,{2*0.866}); \draw (-1,{2*0.866})--(-1.5,0.866); \draw (0,{2*0.866})--(-1,0); \draw (1,{2*0.866})--(-0.5,-0.866); \draw (1,{2*0.866})--(1.5,0.866); \draw (0,{2*0.866})--(1,0); \draw (-1,{2*0.866})--(0.5,-0.866); \draw (-0.5,-0.866)--(0.5,-0.866); \draw (-1,0)--(1,0); \draw (-1.5,0.866)--(1.5,0.866); \phantom{\draw (0,-3)--(0,2);} \end{tikzpicture} }\quad\quad\quad\,\, \subfigure{ \begin{tikzpicture}[scale=0.55] \path [fill=blue!30] (-1,-3)--(-1,2)--(4,2)--(-1,-3); \path [fill=red!30] (1,-1)--(2,0)--(1,0)--(1,-1); \path [fill=gray!90] (0,0) -- (1,1) -- (0,1) -- (0,0); \draw [line width=2pt] (-1,-3)--(-1,2)--(4,2)--(-1,-3); \draw (-1,1)--(3,1); \draw (-1,0)--(2,0); \draw (-1,-1)--(1,-1); \draw (-1,-2)--(0,-2); \draw (0,-2)--(0,2); \draw (1,-1)--(1,2); \draw (2,0)--(2,2); \draw (3,1)--(3,2); \draw (-1,1)--(0,2); \draw (-1,-1)--(2,2); \draw (-1,-1)--(0,-2); \draw (-1,1)--(1,-1); \draw (0,2)--(2,0); \draw (2,1)--(3,1); \phantom{\draw (0,-4.8)--(0,2);} \end{tikzpicture} }\quad \subfigure{ \begin{tikzpicture}[scale=0.8] \path [fill=red!30] (0,2)--(0,3)--(0.433,2.25); \path [fill=red!30] (1.732,0)--(1.732,2)--(2.598,2.5)--({2.598+0.433},2.25)--(1.732,0); \path [fill=blue!30] (0,-3)--(0,2)--(0.433,2.25)--(0,3)--(0,4)--(2.598,2.5)--(1.732,2)--(1.732,0)--(0,-3); \path [fill=gray!90] (0.866,1.5) -- (1.299,2.25) -- (0.866,2.5)--(0.866,1.5); \draw[line width=2pt](0,-3)--(0,4)--(3.031,2.25)--(0,-3); \draw(0,3)--(1.732,3); \draw(0,1.5)--(2.598,1.5); 
\draw(0,0)--(1.732,0); \draw(0,-1.5)--(0.866,-1.5); \draw(0,-3)--(0,4); \draw(.866,-1.5)--(.866,3.5); \draw(1.732,0)--(1.732,3); \draw(2.598,1.5)--(2.598,2.5); \draw(0,3)--(0.866,3.5); \draw(0,2)--(1.732,3); \draw(0,1)--(2.598,2.5); \draw(0,0)--(2.598,1.5); \draw(0,-1)--(1.732,0); \draw(0,-2)--(0.866,-1.5); \draw({2.598+0.433},2.25)--(0,4); \draw(2.598,1.5)--(0,3); \draw({1.732+0.433},0.75)--(0,2); \draw(1.732,0)--(0,1); \draw({0.866+0.433},-0.75)--(0,0); \draw(0.866,-1.5)--(0,-1); \draw(0.433,-2.25)--(0,-2); \draw(0,0)--(0.866,-1.5); \draw(0,3)--(1.732,0); \draw(1.732,3)--(2.598,1.5); \draw({2.598+0.433},2.25)--(0,-3); \draw(1.732,3)--(0,0); \draw(0.433,3.75)--(0,3); \phantom{\draw(-1.2,0)--(1,0);} \end{tikzpicture} } \caption{The sets $\Gamma^{-1}$ and $L^{-1}$}\label{fig:Shitriangles} \end{figure} \end{rem} \subsection{A characterisation of the gates of $W$} The following theorem gives a characterisation of the gates in terms of roots, and is linked to the concept of boundary roots (see Theorem~\ref{thm:boundaryroots}). Recall the definition of $\Phi^0(w)$ from Definition~\ref{defn:finalroots}. \begin{thm}\label{thm:characterisegates} Let $x\in W$. Then $x\in\Gamma$ if and only if for each $\beta\in\Phi^0(x)$ there exists $w\in W$ with $\Phi(x)\cap \Phi(w)=\{\beta\}$. \end{thm} \begin{proof} Let $x\in\Gamma$, and let $\beta\in\Phi^0(x)$. Hence $\beta=-x\alpha_s$ for some $s\in D_R(x)$. Since $\ell(xs)<\ell(x)$ we have $T(sx^{-1})\neq T(x^{-1})$ (as $x\in\Gamma$ is of minimal length in its part of $\mathscr{T}$). Since $T(x^{-1})\subseteq T(sx^{-1})$ (by Lemma~\ref{lem:containoneway}) we have strict containment, and so there exists $w\in T(sx^{-1})\backslash T(x^{-1})$. Thus by Proposition~\ref{prop:conetypebasics} we have $\Phi(xs)\cap \Phi(w)=\emptyset$ and $\Phi(x)\cap \Phi(w)\neq\emptyset$. Since $\Phi(x)=\Phi(xs)\sqcup \{\beta\}$ we have $\Phi(x)\cap\Phi(w)=\{\beta\}$. Conversely, suppose that $x\notin\Gamma$. 
Let $T=T(x^{-1})$, and let $g\in\Gamma$ be the gate of $X_T$. Then $g\preccurlyeq x$, and by convexity of $X_T$ (see Proposition~\ref{prop:convexitybasics}) each $y\in W$ with $g\preccurlyeq y\preccurlyeq x$ has $T(y^{-1})=T$. In particular, since $g\neq x$ there exists $s\in D_R(x)$ with $T(sx^{-1})=T=T(x^{-1})$. Then $\beta=-x\alpha_s\in\Phi^0(x)$, and by Proposition~\ref{prop:conetypebasics} we have that for all $w\in W$ we have $\Phi(x)\cap \Phi(w)=\emptyset$ if and only if $\Phi(xs)\cap \Phi(w)=\emptyset$. Since $\Phi(x)=\Phi(xs)\sqcup\{\beta\}$ it follows that there is no element $w$ with $\Phi(x)\cap \Phi(w)=\{\beta\}$. \end{proof} A priori, given $x\in W$ and $\beta\in\Phi^0(x)$, deciding if there exists $w\in W$ such that $\Phi(x)\cap\Phi(w)=\{\beta\}$ appears difficult to implement in an infinite Coxeter group. However we note that, by the following proposition, one only needs to check $w\in \Gamma$ (a finite set). \begin{prop}\label{prop:witnessgate} Let $x\in W$ and $\beta\in \Phi^+$. Suppose there exists $w\in W$ such that $\Phi(x)\cap\Phi(w)=\{\beta\}$, and let $w$ be of minimal length subject to this property. Then $\Phi^0(w)=\{\beta\}$, and $w$ is a gate. \end{prop} \begin{proof} Let $\alpha\in\Phi^0(w)$. Thus $s_{\alpha}w=ws$ for some $s\in S$ and $\ell(ws)=\ell(w)-1$. If $\alpha\neq\beta$ then $\Phi(x)\cap\Phi(ws)=\{\beta\}$, contradicting the minimal length assumption. Thus $\alpha=\beta$, and so $\Phi^0(w)=\{\beta\}$, and then $w$ is a gate by Theorem~\ref{thm:characterisegates}. \end{proof} Similar ideas give the following corollary. \begin{cor} If $T=T(x^{-1})$ then $ \partial T=\{\beta\in\Phi^+\mid \Phi(x)\cap\Phi(g)=\{\beta\}\text{ for some $g\in\Gamma$}\}. $ \end{cor} \begin{proof} By Theorem~\ref{thm:boundaryroots} we have $ \partial T=\{\beta\in\Phi^+\mid \Phi(x)\cap\Phi(w)=\{\beta\}\text{ for some $w\in W$}\}, $ and the result follows from Proposition~\ref{prop:witnessgate}. 
\end{proof} \begin{rem}\label{rem:calculation} Theorem~\ref{thm:characterisegates} and Proposition~\ref{prop:witnessgate} allow one to implement the calculation of the set $\Gamma$ in MAGMA~\cite{MAGMA}, utilising the existing Coxeter group package. The main steps are as follows. \begin{enumerate} \item The set $\mathcal{E}$ of elementary roots is computed inductively by setting $E_0=\{\alpha_s\mid s\in S\}$, and defining $E_{j+1}=E_j\cup\{s\alpha\mid \alpha\in E_j,\,s\in S,\,-1<(\alpha,\alpha_s)<0\}$. Once $E_{n+1}=E_n$ we have $\mathcal{E}=E_n$ (see \cite[\S4.7]{BB:05}). \item The set $L$ of low elements is computed by setting $M_0=\{e\}$, and defining $M_{j+1}=M_j\cup\{sw\mid w\in M_j,\,s\in S\backslash D_L(w),\,\Phi^1(sw)\subseteq \mathcal{E}\}$. Once $M_n=M_{n+1}$ we have $L=M_n$ (by the suffix closure property of $L$). \item Since $\Gamma\subseteq L$ we can then determine, in finite time, the set $\Gamma$ by checking, for each $x\in L$ and $\beta\in\Phi^0(x)$, whether there exists $w\in L$ such that $\Phi(x)\cap\Phi(w)=\{\beta\}$ (using Proposition~\ref{prop:witnessgate}). \end{enumerate} We have carried through the calculations for a variety of Coxeter groups, and the data is presented in Figure~\ref{fig:data} for some affine and compact hyperbolic groups. See \cite{PY:19} for the definitions of the compact hyperbolic groups $\mathsf{X}_4(c)$ ($c\in\{4,5\}$), $\mathsf{X}_5(d)$ ($d\in\{3,4,5\}$), $\mathsf{Y}_4$, $\mathsf{Z}_4$, and $\mathsf{Z}_5$. We note that, as a general rule, groups with large spherical subgroups will have many gates and low elements (by Proposition~\ref{prop:gatesbasicfacts}), while those with only small spherical subgroups tend to have very few gates.
For example, in $\tilde{\mathsf{D}}_5$ there are $59049$ low elements and $58965$ gates, while in the corresponding Coxeter group with each occurrence of $m_{st}=3$ replaced by $m_{st}=4$ there are only $332$ low elements and $247$ gates. \begin{figure}[H] \centering \subfigure{ $ \begin{array}{|c||c|c|c|c|c|} \hline W& |\mathcal{E}| & |L| & |\Gamma| \\ \hline\hline \tilde{\mathsf{A}}_2& 6 &16&16\\ \hline \tilde{\mathsf{B}}_2& 8 &25 &24\\ \hline \tilde{\mathsf{G}}_2&12 &49 &41\\ \hline \tilde{\mathsf{A}}_3&12 &125 &125\\ \hline \tilde{\mathsf{B}}_3&18 &343 &315\\ \hline \tilde{\mathsf{C}}_3&18 &343 &317\\ \hline \tilde{\mathsf{A}}_4&20 &1296 &1296\\ \hline \tilde{\mathsf{B}}_4&32 &6561 &5789\\ \hline \tilde{\mathsf{C}}_4&32 &6561 &5860\\ \hline \tilde{\mathsf{D}}_4&24 &2401 &2400\\ \hline \tilde{\mathsf{F}}_4&48 &28561 &22428\\ \hline \tilde{\mathsf{A}}_5&30&16807&16807\\ \hline \tilde{\mathsf{B}}_5&50&161051&137147\\ \hline \tilde{\mathsf{C}}_5&50&161051&139457\\ \hline \tilde{\mathsf{D}}_5&40&59049&58965\\ \hline \end{array} $ }\qquad\qquad \subfigure{ $ \begin{array}{|c||c|c|c|c|c|} \hline W& |\mathcal{E}| & |L| & |\Gamma| \\ \hline \hline \mathsf{X}_4(4)&25 &438 &392\\ \hline \mathsf{X}_4(5)&32 &516 &462\\ \hline \mathsf{Y}_4&32 &687 &578\\ \hline \mathsf{Z}_4&30 &513 &473\\ \hline \mathsf{X}_5(3)&114 &101412 &52542\\ \hline \mathsf{X}_5(4)&83 &25708 &22886\\ \hline \mathsf{X}_5(5)&135 &42064 &37956\\ \hline \mathsf{Z}_5&120 &41385 &39138\\ \hline \end{array} $} \caption{Data for low rank affine and compact hyperbolic Coxeter groups}\label{fig:data} \end{figure} \end{rem} We conclude this section with a conjecture. Define a partial order on the set $\mathbb{T}$ of cone types by $T_1\leq T_2$ if and only if $T_2\subseteq T_1$ (thus $\leq$ is given by reverse containment). \begin{conj}\label{conj:orderisomorphism} The map $\Theta:(\Gamma,\preccurlyeq)\to(\mathbb{T},\leq)$ given by $\Theta(g)=T(g^{-1})$ is an order isomorphism. 
\end{conj} Note that the conjecture, if true, generalises Theorem~\ref{thm:parabolicconetype} because $W_J\subseteq \Gamma$ for all spherical $J\subseteq S$, by Proposition~\ref{prop:gatesbasicfacts}(2). We note the following consequence of Conjecture~\ref{conj:orderisomorphism}. \newpage \begin{prop} If Conjecture~\ref{conj:orderisomorphism} holds then $\Gamma$ is closed under join (hence Conjecture~\ref{conj:garside} holds). \end{prop} \begin{proof} Let $x,y\in\Gamma$, and suppose that $\{x,y\}$ is bounded. Let $z=x\vee y$, and let $g$ be the gate of the part of $\mathscr{T}$ containing~$z$. Since $x\preccurlyeq z$ and $y\preccurlyeq z$ we have $T(z^{-1})\subseteq T(x^{-1})\cap T(y^{-1})$ (by Lemma~\ref{lem:containoneway}; in fact equality holds by Proposition~\ref{prop:joinconetype}). Since $T(g^{-1})=T(z^{-1})\subseteq T(x^{-1})\cap T(y^{-1})$ we have $x\preccurlyeq g$ and $y\preccurlyeq g$ (assuming Conjecture~\ref{conj:orderisomorphism}), and so $g$ is an upper bound for $\{x,y\}$. Thus $z\preccurlyeq g$. But also $g\preccurlyeq z$ by the gate property, and so $z=g$. \end{proof} \section{Conical partitions}\label{sec:conical} In this section we define \textit{conical partitions}, generalising the construction of Garside partitions. In fact, in Corollary~\ref{cor:garsideequivalent} we show that regular conical partitions are the partition-theoretic equivalent of Garside shadows. \begin{defn}\label{defn:conical} Let $X\subseteq W$ with $e\in X$. The \textit{conical partition} induced by $X$ is the partition $\mathscr{C}(X)$ of $W$ induced by the covering $\{C(x)\mid x\in X\}$. (Note that $e\in X$ is required for this to be a covering.) \end{defn} In particular, every Garside partition is conical by definition (see Definition~\ref{defn:partitions}). In fact, as we see in Theorem~\ref{thm:conesgated}, every conical partition is gated, and the gates of such a partition are necessarily closed under join, generalising Proposition~\ref{prop:garsidegated}.
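As a toy illustration of Definition~\ref{defn:conical}, the following Python sketch (using our own encoding of spherical type $\mathsf{A}_2$ as $W\cong S_3$, with $\Phi(w)$ realised as the inversion set of a permutation and $x\preccurlyeq w$ if and only if $\Phi(x)\subseteq\Phi(w)$) computes the conical partition induced by $X=\{e,w_0\}$; the same example reappears in Remark~\ref{rem:conicalconvex} below.

```python
from itertools import permutations

# Type A2: W = S3 in one-line notation; Phi(w) is the inversion set of w,
# and x <= w in the prefix order iff Phi(x) is a subset of Phi(w).
def inv(w):
    return frozenset((i, j) for i in range(3) for j in range(i + 1, 3)
                     if w[i] > w[j])

W = list(permutations((1, 2, 3)))
e = (1, 2, 3)
w0 = (3, 2, 1)  # longest element: Phi(w0) = Phi^+

def cone(x):
    # C(x) = {w : x <= w}
    return frozenset(w for w in W if inv(x) <= inv(w))

# Conical partition induced by X = {e, w0}: group elements of W by their
# membership pattern across the cones C(x), x in X.
X = [e, w0]
parts = {}
for w in W:
    key = tuple(w in cone(x) for x in X)
    parts.setdefault(key, set()).add(w)

print(sorted(len(P) for P in parts.values()))  # [1, 5]
```

The resulting partition has a singleton part $\{w_0\}$ and one part containing the remaining five elements, i.e. $\{\{e,s,t,st,ts\},\{sts\}\}$.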
\begin{defn} The \textit{join-closure} of a subset $X\subseteq W$ is $$ X^{\vee}=\{\bigvee Y\mid Y\subseteq X\text{ is bounded}\}. $$ \end{defn} The following lemma shows that the join-closure of $X$ is indeed closed under joins. \begin{lem}\label{lem:joinclosure} Let $X\subseteq W$. If $Y\subseteq X^{\vee}$ is bounded, then $\bigvee Y\in X^{\vee}$. \end{lem} \begin{proof} Let $Y\subseteq X^{\vee}$ be bounded. If $z\in Y\backslash X$ then $z=\bigvee Z$ for some bounded subset $Z\subseteq X$ (by the definition of join-closure). It is clear that $$ \bigvee Y=\bigvee ((Y\backslash \{z\})\cup Z), $$ and so, replacing each element of $Y\backslash X$ in this way, $\bigvee Y$ can be written as $\bigvee Y'$ for a bounded subset $Y'\subseteq X$. Hence $\bigvee Y\in X^{\vee}$. \end{proof} \begin{thm}\label{thm:conesgated} Let $X\subseteq W$ with $e\in X$. Let $\mathscr{P}=\mathscr{C}(X)$ be the conical partition induced by~$X$. Then $\mathscr{P}$ is gated, with $\Gamma(\mathscr{P})=X^{\vee}$. In particular, the set $\Gamma(\mathscr{P})$ is closed under join. \end{thm} \begin{proof} Let $P$ be a part of $\mathscr{P}$, and let $w\in P$. Since $\{C(x)\mid x\in X\}$ is a covering of $W$, the set $$ \{x\in X\mid w\in C(x)\}=\{x\in X\mid x\preccurlyeq w\} $$ is nonempty, and bounded above by $w$. Thus $ g=\bigvee\{x\in X\mid w\in C(x)\} $ exists. By Lemma~\ref{lem:conejoin} we have $$ \bigcap_{\{x\in X\mid w\in C(x)\}}C(x)=C(g), $$ and so $w\in C(g)$. Thus $g\preccurlyeq w$ for all $w\in P$ (note that the set $\{x\in X\mid w\in C(x)\}$, and hence $g$, depends only on the part $P$ containing $w$). We claim that $g$ and $w$ lie in the same part of $\mathscr{P}$ (from which it follows that $P$ is gated with gate $g$). Thus we must show that for all $x\in X$ we have $w\in C(x)$ if and only if $g\in C(x)$. If $x\in X$ with $w\in C(x)$, then by the definition of $g$ we have $x\preccurlyeq g$ and so $g\in C(x)$. Conversely, if $g\in C(x)$ then $x\preccurlyeq g$, and so since $g\preccurlyeq w$ we have $x\preccurlyeq w$, and so $w\in C(x)$. Hence the claim.
Now, if $Y\subseteq X$ is any bounded set, and $g=\bigvee Y$, then clearly $$ g=\bigvee\{x\in X\mid g\in C(x)\}, $$ and by the above discussion $g$ is the gate of the part of $\mathscr{P}$ containing $g$. Hence $\Gamma(\mathscr{P})=X^{\vee}$. \end{proof} \newpage The following lemma shows that one may replace $X$ by its join-closure when constructing~$\mathscr{C}(X)$. \begin{lem}\label{lem:conicalclosed} If $X\subseteq W$ then $\mathscr{C}(X)=\mathscr{C}(X^{\vee})$. \end{lem} \begin{proof} Let $u,v$ be in the same part of $\mathscr{C}(X)$. Let $x\in X^{\vee}$. Then $x=\bigvee Y$ for some bounded subset $Y\subseteq X$. Then $C(x)=\bigcap_{y\in Y}C(y)$ (by Lemma~\ref{lem:conejoin}). So if $u\in C(x)$ then $u\in C(y)$ for all $y\in Y$, and hence $v\in C(y)$ for all $y\in Y$ (as $u$ and $v$ lie in the same part of $\mathscr{C}(X)$ and each $y\in Y\subseteq X$), and so $v\in C(x)$. Thus $u,v$ are in the same part of $\mathscr{C}(X^{\vee})$. Conversely, it is clear that if $u,v$ are in the same part of $\mathscr{C}(X^{\vee})$ then they are also in the same part of $\mathscr{C}(X)$. \end{proof} \begin{rem}\label{rem:conicalconvex} We note the following. \begin{enumerate} \item Not all conical partitions are Garside partitions. For example, consider $W$ of spherical type $\mathsf{A}_2$ and let $X=\{e,sts\}$. The conical partition induced by $X$ is $\mathscr{P}=\{\{e,s,t,st,ts\},\{sts\}\}$, which is obviously not a Garside partition (in fact, the only Garside partition of a finite Coxeter group is the partition $\mathbf{1}$ into singletons). We shall show below (in Corollary~\ref{cor:garsideequivalent}) that Garside partitions are equivalent to regular conical partitions. \item Conical partitions are not necessarily convex, as the above $\mathsf{A}_2$ example shows (compare with Remark~\ref{rem:garsideconvexity}). However, conical partitions $\mathscr{P}=\mathscr{C}(X)$ are necessarily weakly convex. To see this, note that by Lemma~\ref{lem:conicalclosed} we may assume that $X=X^{\vee}$ and hence $\Gamma(\mathscr{P})=X$ by Theorem~\ref{thm:conesgated}.
Suppose that $w\in W$ and that $g\in X$ is the gate of the part of $\mathscr{P}$ containing $w$. We need to show that if $g\preccurlyeq v\preccurlyeq w$ then $v$ and $w$ lie in the same part of $\mathscr{P}$. Let $g'\in\Gamma(\mathscr{P})$. If $v\in C(g')$ then $g'\preccurlyeq v\preccurlyeq w$ and so $w\in C(g')$. On the other hand, if $w\in C(g')$ then since $g$ and $w$ lie in the same part we have $g\in C(g')$ (by the definition of conical partitions) and hence $g'\preccurlyeq g\preccurlyeq v$, giving $v\in C(g')$. Thus $v$ and $w$ lie in the same part of $\mathscr{P}$. \end{enumerate} \end{rem} \begin{cor}\label{cor:garsideequivalent} Let $X\subseteq W$ with $e\in X$ and let $\mathscr{P}=\mathscr{C}(X)$ be the (necessarily gated, c.f. Theorem~\ref{thm:conesgated}) conical partition induced by $X$. Then $\Gamma(\mathscr{P})$ is a Garside shadow if and only if $\mathscr{P}$ is regular. \end{cor} \begin{proof} By Theorem~\ref{thm:conesgated} $\mathscr{P}$ is gated, with $\Gamma(\mathscr{P})=X^{\vee}$. Thus $\Gamma(\mathscr{P})$ is closed under joins. If $\mathscr{P}$ is regular then by Theorem~\ref{thm:suffixclosuregates} we have $S\subseteq \Gamma(\mathscr{P})$ and that $\Gamma(\mathscr{P})$ is closed under taking suffixes, and hence $\Gamma(\mathscr{P})$ is a Garside shadow. Conversely, suppose that $\Gamma(\mathscr{P})$ is a Garside shadow. By Lemma~\ref{lem:conicalclosed} we have $\mathscr{P}=\mathscr{C}(X^{\vee})$, and thus $\mathscr{P}$ is the Garside partition of the Garside shadow $X^{\vee}=\Gamma(\mathscr{P})$. Hence $\mathscr{P}$ is regular by Theorem~\ref{thm:regularpartitions}. \end{proof} We showed in Corollary~\ref{cor:JIsGated} that the spherical partition~$\mathscr{J}$ is gated. We now give another proof, using Theorem~\ref{thm:conesgated}, that has the advantage of determining the set of gates. Let $W_{\mathrm{sph}}$ denote the union of all spherical parabolic subgroups of~$W$. 
\begin{prop}\label{prop:Jgated} We have $\mathscr{J}=\mathscr{C}(W_{\mathrm{sph}})$. Thus $\mathscr{J}$ is gated, with $\Gamma(\mathscr{J})=W_{\mathrm{sph}}^{\vee}$. \end{prop} \begin{proof} Once we show that $\mathscr{J}=\mathscr{C}(W_{\mathrm{sph}})$, the result follows from Theorem~\ref{thm:conesgated}. So, consider a part $P=\{w\in W\mid \Phi_{\mathrm{sph}}(w)=\Sigma\}$ of $\mathscr{J}$, for some $\Sigma\in \mathbb{S}$. Let $x,y\in P$, and let $z\in W_{\mathrm{sph}}$. It is sufficient to show, by symmetry of $x$ and $y$, that if $x\in C(z)$ then $y\in C(z)$ too. First we note that since $\Phi_{\mathrm{sph}}(x)=\Phi_{\mathrm{sph}}(y)$ we have $\Phi_J(x)=\Phi_J(y)$ for all spherical subsets $J\subseteq S$. Suppose that $x\in C(z)$. Choose a reduced expression for $z$, and let $J$ be the set of generators appearing in this reduced expression (thus $J$ is spherical, and note that $J$ does not depend on the particular reduced expression chosen, by \cite[Proposition~2.16]{AB:08}). Since $z\preccurlyeq x$ we have $\Phi(z)\subseteq \Phi_J(x)=\Phi_J(y)\subseteq \Phi(y)$. Thus $z\preccurlyeq y$, and so $y\in C(z)$. \end{proof} \section{Ultra low elements}\label{sec:ultralow} We now define a new class of elements of $W$ called \textit{ultra low} elements, the set of which we denote by $U$. Conjecturally, $U$ is the set of gates $\Gamma$ of $W$, which in turn is conjecturally the smallest Garside shadow $\widetilde{S}$ (see the comments below). \begin{defn} \label{def:ultra_low_elements} An element $x \in W$ is \textit{ultra low} if for each $\beta \in \Phi^1(x)$ there exists $w\in W$ such that $\Phi(x) \cap \Phi(w) = \{ \beta \}$. \end{defn} For example, each $s\in S$ is ultra low, and $e$ is trivially ultra low. \begin{prop} We have $U\subseteq \Gamma\subseteq L$.
\end{prop} \begin{proof} The containment $U\subseteq \Gamma$ follows from the definition of ultra low elements and the characterisation of gates in Theorem~\ref{thm:characterisegates}, noting that $\Phi^0(x)\subseteq\Phi^1(x)$. The containment $\Gamma\subseteq L$ is Proposition~\ref{prop:gatesbasicfacts}(5). \end{proof} \begin{rem} One can show more directly that $U\subseteq L$ without passing through~$\Gamma$. For if $x\in U$ then from the definition and Lemma~\ref{lem:EandPhi1} we have $\Phi^1(x)\subseteq \mathcal{E}$, and hence $x$ is low. \end{rem} The following proposition connects the concept of ultra low elements with boundary roots. \begin{prop}\label{prop:g} Let $x\in W$ and let $T=T(x^{-1})$. Then $x$ is ultra low if and only if $\Phi^1(x) = \partial T$. Moreover, if $\Phi^1(x)=\partial T$ then $x$ is the gate of $X_T$. \end{prop} \begin{proof} Note that $\partial T\subseteq\Phi^1(x)$ by Corollary~\ref{cor:boundaryroots}. We have $x\in U$ if and only if for each $\beta\in\Phi^1(x)$ there exists $w\in W$ with $\Phi(x)\cap \Phi(w)=\{\beta\}$; by Theorem~\ref{thm:boundaryroots} this holds if and only if every $\beta\in\Phi^1(x)$ lies in $\partial T$, that is, if and only if $\Phi^1(x)=\partial T$. For the second statement, suppose that $\Phi^1(x)=\partial T$ and let $y \in X_T$, so that $T(y^{-1}) = T$. By Corollary~\ref{cor:boundaryroots} we have $\partial T \subseteq \Phi^1(y)$. Since $\Phi^1(x) = \partial T$, we have $\Phi(x) = \mathrm{cone}_{\Phi}(\partial T) \subseteq \mathrm{cone}_{\Phi}(\Phi^1(y))=\Phi(y)$ (see Theorem~\ref{thm:eqcond}). Thus by Proposition~\ref{prop:rootsystembasics}(5) we have $x\preccurlyeq y$ for all $y\in X_T$. Hence $x=g_T$ is the gate of~$X_T$. \end{proof} We conjecture that the reverse implication also holds in the second statement of Proposition~\ref{prop:g} (this is the content of Conjecture~\ref{conj:boundary} in the introduction). If this conjecture holds then it follows that $U=\Gamma$.
\begin{rem} We have verified that $U=\Gamma$ for right angled Coxeter groups, rank~$3$ Coxeter groups, and all Coxeter groups with complete Coxeter graph (that is, $m_{st}>2$ for all $s\neq t$). See \cite[Chapter~5]{YY:21} for details. \end{rem} \section{Super elementary roots}\label{sec:superelementary} In Section~\ref{sec:boundaryroots} we observed that the boundary roots $\partial T\subseteq\mathcal{E}$ of a cone type $T$ give the minimal amount of root data required to determine $T$. It is natural to ask whether every elementary root is a boundary root of some cone type. Equivalently, is it true that $\Phi(\mathscr{T})=\mathcal{E}$ (in the notation of Section~\ref{sec:simplerefinements})? In this section we show that $\Phi(\mathscr{T})=\mathcal{E}$ for spherical and affine Coxeter groups (amongst others); however, in general the containment $\Phi(\mathscr{T})\subseteq \mathcal{E}$ can be strict. In particular, we exhibit a class of rank $4$ Coxeter groups for which there is an elementary root that is not the boundary root of any cone type. We thank Bob Howlett for inspiring the work in this section and for suggesting a motivating example. \begin{defn} \label{defn:superelementary} A root $\beta\in\Phi^+$ is \textit{super elementary} if there exist $x,y \in W$ with $$\Phi(x) \cap \Phi(y) = \{ \beta \}.$$ Let $\mathcal{S}$ denote the set of all super elementary roots. \end{defn} By Theorem~\ref{thm:boundaryroots}, $\mathcal{S}$ is precisely the set of roots that occur as the boundary root of some cone type, and hence $\mathcal{S}=\Phi(\mathscr{T})$. \begin{prop}\label{prop:superareelementary} Every super elementary root is elementary. Thus $\mathcal{S}\subseteq\mathcal{E}$. \end{prop} \begin{proof} See Lemma~\ref{lem:EandPhi1}. \end{proof} We now provide various classes of Coxeter systems for which $\mathcal{S}=\mathcal{E}$. \begin{thm} \label{thm:sphericalsuperelementary} If $W$ is spherical then $\mathcal{S}=\mathcal{E}=\Phi^+$.
\end{thm} \begin{proof} Let $w_0=s_1\cdots s_n$ be a reduced expression for the longest element of $W$. Since $\Phi(w_0)=\Phi^+$ we have $\Phi^+=\{\beta_1,\ldots,\beta_n\}$ where $\beta_j=s_1\cdots s_{j-1}\alpha_{s_j}$ for $1\leq j\leq n$. We claim that $$ \Phi(s_1\cdots s_j)\cap \Phi(s_1\cdots s_{j-1}w_0)=\{\beta_j\}\quad\text{for $j=1,\ldots,n$}. $$ For if $w\in W$ then $\Phi(ww_0)=\Phi^+\backslash\Phi(w)$ (because if $\alpha\in\Phi^+\backslash \Phi(w)$ then $(ww_0)^{-1}\alpha=w_0w^{-1}\alpha<0$ as $w^{-1}\alpha>0$, and $\ell(ww_0)=\ell(w_0)-\ell(w)=|\Phi^+\backslash\Phi(w)|$). Thus $\Phi(s_1\cdots s_{j-1}w_0)=\{\beta_j,\ldots,\beta_n\}$, and the claim follows as $\Phi(s_1\cdots s_j)=\{\beta_1,\ldots,\beta_j\}$. Thus every positive root is super elementary, and the theorem follows. \end{proof} \begin{cor} \label{cor:sphericalsuperelementary} We have $\Phi_{\mathrm{sph}}^+\subseteq\mathcal{S}$. In particular, if $\mathcal{E}=\Phi_{\mathrm{sph}}^+$ then $\mathcal{S}=\mathcal{E}$. \end{cor} \begin{proof} If $\beta\in\Phi_J^+$ with $J\subseteq S$ spherical, then by Theorem~\ref{thm:sphericalsuperelementary} there exist $x,w\in W_J$ with $\Phi(x)\cap\Phi(w)=\{\beta\}$, and so $\beta\in\mathcal{S}$. \end{proof} In particular, Corollary~\ref{cor:sphericalsuperelementary} shows that if $W$ is right angled, or if $W$ has complete Coxeter graph, then $\mathcal{S}=\mathcal{E}$ (see \cite[Theorem~1]{PY:19} for the classification of Coxeter systems with $\mathcal{E}=\Phi_{\mathrm{sph}}^+$). We will now show that $\mathcal{S}=\mathcal{E}$ for all affine Coxeter groups. When $W$ is affine there is a related notion of a ``root system'' for $W$, where one starts with a crystallographic root system $\Phi_0$ of a spherical Coxeter group~$W_0$. We will not repeat this construction here; we refer to \cite[Section~2.2]{PY:19} for details, and we will use the notation of \cite{PY:19} in the following paragraphs.
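\begin{rem} Before turning to the affine case, we illustrate the claim in the proof of Theorem~\ref{thm:sphericalsuperelementary} in type $A_2$. Here $w_0=s_1s_2s_1$, and so $\beta_1=\alpha_1$, $\beta_2=s_1\alpha_2=\alpha_1+\alpha_2$, and $\beta_3=s_1s_2\alpha_1=\alpha_2$. Then $\Phi(s_1)\cap\Phi(w_0)=\{\alpha_1\}\cap\Phi^+=\{\beta_1\}$; since $s_1w_0=s_2s_1$ we have $\Phi(s_1s_2)\cap\Phi(s_1w_0)=\{\alpha_1,\alpha_1+\alpha_2\}\cap\{\alpha_2,\alpha_1+\alpha_2\}=\{\beta_2\}$; and since $s_1s_2w_0=s_2$ we have $\Phi(s_1s_2s_1)\cap\Phi(s_1s_2w_0)=\Phi^+\cap\{\alpha_2\}=\{\beta_3\}$. \end{rem}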
In particular, the finite crystallographic root system $\Phi_0$ has simple system $\{\alpha_1,\ldots,\alpha_n\}$, and $\langle\cdot,\cdot\rangle$ denotes the bilinear form on the underlying vector space. For $\alpha\in\Phi_0$ we write $\alpha^{\vee}=2\alpha/\langle\alpha,\alpha\rangle$. Let $\omega_1,\ldots,\omega_n$ be the basis dual to the basis of simple roots, so that $\langle\alpha_i,\omega_j\rangle$ equals $1$ if $i=j$ and $0$ otherwise. Let $P=\mathbb{Z}\omega_1+\cdots+\mathbb{Z}\omega_n$ (the \textit{coweight lattice} of $\Phi_0$). The affine root system is $\Phi=\Phi_0+\mathbb{Z}\delta$ with positive roots $\Phi^+=(\Phi_0^++\mathbb{Z}_{\geq 0}\delta)\cup(-\Phi_0^++\mathbb{Z}_{>0}\delta)$. The set of elementary roots is $\mathcal{E}=\Phi_0^+\cup(-\Phi_0^++\delta)$. \begin{lem}\label{lem:affine1} Let $K>0$ be an integer and let $\alpha\in\Phi_0^+$. Then there exists $\lambda\in P$ such that $\langle\lambda,\alpha\rangle=1$ and $|\langle\lambda,\beta\rangle|>K$ for all $\beta\in\Phi_0^+\backslash\{\alpha\}$. \end{lem} \begin{proof} Let $w\in W_0$ be such that $w\alpha=\alpha_i$ (a simple root). Let $$ \lambda'=\omega_i+(K+1)\sum_{j\neq i}\omega_j. $$ Then $\langle\lambda',\alpha_i\rangle=1$ and $\langle\lambda',\beta\rangle>K$ for all $\beta\in\Phi_0^+\backslash\{\alpha_i\}$. Now let $\lambda=w^{-1}\lambda'$. So $\langle\lambda,\alpha\rangle=\langle\lambda,w^{-1}\alpha_i\rangle=\langle w\lambda,\alpha_i\rangle=1$, and if $\beta\in\Phi_0^+\backslash\{\alpha\}$ we have $\langle\lambda,\beta\rangle=\langle\lambda',w\beta\rangle$. Since $w\beta\in\Phi_0\backslash\{-\alpha_i,\alpha_i\}$ there is an index $j\neq i$ and an element $\gamma$ in the $\mathbb{Z}_{\geq 0}$-span of the simple roots, such that either $w\beta=-\alpha_j-\gamma$ (in the case that $w\beta\in-\Phi_0^+$) or $w\beta=\alpha_j+\gamma$ (in the case that $w\beta\in\Phi_0^+$). Then $|\langle\lambda',w\beta\rangle|>K$, and so $|\langle\lambda,\beta\rangle|>K$.
\end{proof} \begin{thm}\label{thm:affineSE} If $W$ is affine then $\mathcal{S}=\mathcal{E}$. \end{thm} \begin{proof} The roots in $\Phi_0^+$ are super elementary by Corollary~\ref{cor:sphericalsuperelementary}. So consider the roots $-\alpha+\delta$ with $\alpha\in\Phi_0^+$. Let $$ K=\max_{\alpha,\beta\in\Phi_0^+}|\langle\alpha^{\vee},\beta\rangle| $$ (this is an integer, as $\Phi_0$ is crystallographic). Using Lemma~\ref{lem:affine1}, choose $\lambda\in P$ with $\langle\lambda,\alpha\rangle=1$ and $|\langle\lambda,\beta\rangle|>K$ for all $\beta\in\Phi_0^+\backslash\{\alpha\}$. Let $\mu=-\lambda+\alpha^{\vee}$. Then $\langle\mu,\alpha\rangle=-\langle\lambda,\alpha\rangle+\langle\alpha^{\vee},\alpha\rangle=1$, and $\langle\mu,\beta\rangle=-\langle\lambda,\beta\rangle+\langle\alpha^{\vee},\beta\rangle$ for $\beta\in\Phi_0^+\backslash\{\alpha\}$. It follows, from the choice of $K$, that $\langle\mu,\beta\rangle$ and $\langle\lambda,\beta\rangle$ have opposite signs for all $\beta\in\Phi_0^+\backslash\{\alpha\}$. Let $w_{\lambda}$ and $w_{\mu}$ be the (unique) elements of $W$ such that $w_{\lambda}(C_0)=\lambda+C_0$ and $w_{\mu}(C_0)=\mu+C_0$ (as sets, where $C_0$ is the fundamental chamber in the standard geometric realisation). Since $\langle\lambda,\beta\rangle$ and $\langle\mu,\beta\rangle$ have opposite signs for each $\beta\in\Phi_0^+\backslash\{\alpha\}$, no affine hyperplane orthogonal to such a $\beta$ separates both $\lambda+C_0$ and $\mu+C_0$ from $C_0$. Therefore every root in $\Phi(w_{\lambda})\cap\Phi(w_{\mu})$ lies in the parallelism class of $\alpha$, and since $\langle\lambda,\alpha\rangle=\langle\mu,\alpha\rangle=1$ we have $\Phi(w_{\lambda})\cap\Phi(w_{\mu})=\{-\alpha+\delta\}$. Thus the root $-\alpha+\delta$ is super elementary. \end{proof} This concludes our discussion of affine Coxeter groups; henceforth we return to the ``standard'' notion of root system. We turn our attention to exhibiting a class of Coxeter systems for which $\mathcal{S}$ is a strict subset of $\mathcal{E}$. Let $(W,S)$ be a rank $4$ Coxeter system with $S=\{s_1,s_2,s_3,s_4\}$ and $m_{s_1,s_2}=m_{s_1,s_3}=m_{s_2,s_3}=2$. Let $m_i = m_{s_i,s_4}$ and $t_i=2\cos(\pi/m_i)$ for $i=1,2,3$.
Thus $W$ has Coxeter graph \begin{center} \begin{tikzpicture}[scale=1] \draw (2,-3) node[fill=black,circle,scale=0.6] (0) {} (3,-3) node[fill=black,circle,scale=0.6] (1) {} (4,-2.5) node[fill=black,circle,scale=0.6] (2) {} (4,-3.5) node[fill=black,circle,scale=0.6] (3) {}; \draw (0)--node [midway,above]{$m_{1}$}(1); \draw (1)--node [midway,above]{$m_{2}$}(2); \draw (1)--node [midway,below]{$m_{3}$}(3); \end{tikzpicture} \end{center} Write $\alpha_i=\alpha_{s_i}$ for $i=1,2,3,4$. If $\lambda=a\alpha_1+b\alpha_2+c\alpha_3+d\alpha_4\in V$ with $a,b,c,d\geq 0$ and $\lambda\neq 0$, we write $\lambda>0$ (note that this notation does not imply that $\lambda$ is a root). \newpage \begin{lem}\label{lem:SElemma} Let $\lambda=a\alpha_1+b\alpha_2+c\alpha_3+d\alpha_4\neq 0$ with $a,b,c,d\geq 0$, $t_1d-2a\geq 0$, $t_2d-2b\geq 0$, $t_3d-2c\geq 0$, and $t_1a+t_2b+t_3c-2d\geq 0$. Then $w\lambda> 0$ for all $w\in W$. \end{lem} \begin{proof} We use induction on $\ell(w)$; the base case $w=e$ holds by hypothesis (as $a,b,c,d\geq 0$). Suppose the result is true for $w$, and suppose that $\ell(ws_i)=\ell(w)+1$. Thus $w\alpha_i>0$. If $i=1$ we have $s_1\lambda=\lambda+(t_1d-2a)\alpha_1$, and hence $$ ws_i\lambda=w(\lambda+(t_1d-2a)\alpha_1)=w\lambda+(t_1d-2a)w\alpha_1> 0 $$ as $w\lambda>0$ by the inductive hypothesis, $t_1d-2a\geq 0$, and $w\alpha_1>0$. Similarly for $i=2,3$. If $i=4$ then $$ ws_i\lambda=w(\lambda+(t_1a+t_2b+t_3c-2d)\alpha_4)> 0, $$ and the result follows. \end{proof} \begin{thm}\label{thm:SEtheorem} Let $(W,S)$ be the rank $4$ Coxeter system as above, and suppose that $\frac{1}{m_i}+\frac{1}{m_j}\leq \frac{1}{2}$ for each pair $i,j\in\{1,2,3\}$ with $i\neq j$, and that $m_1,m_2,m_3<\infty$. Then the root $$\beta=t_1\alpha_1+t_2\alpha_2+t_3\alpha_3+\alpha_4$$ is elementary but not super elementary. \end{thm} \begin{proof} Since $m_{s_1,s_2}=m_{s_1,s_3}=m_{s_2,s_3}=2$ we have $\beta =s_3s_2s_1(\alpha_4)$.
Thus $\alpha_4\mapsto_{s_1}t_1\alpha_1+\alpha_4\mapsto_{s_2}t_1\alpha_1+t_2\alpha_2+\alpha_4\mapsto_{s_3}\beta$ is a path moving up the root poset of $\Phi^{+}$ such that the difference in the $\alpha_i$ coordinate at the $i$th step in the path is less than $2$ (as $m_1,m_2,m_3<\infty$). Thus $\beta$ is elementary by~\cite[\S4.7]{BB:05}. Let $\lambda=\beta+\alpha_4$ and $\lambda_i=s_i\beta+\alpha_4$ for $i=1,2,3$. We claim that $\lambda$ and $\lambda_i$ satisfy the conditions of Lemma~\ref{lem:SElemma}. To see this, note that the condition $\frac{1}{m_i}+\frac{1}{m_j}\leq \frac{1}{2}$ is equivalent to $t_i^2+t_j^2\geq 4$ for $i,j\in\{1,2,3\}$ with $i\neq j$. We have $\lambda=t_1\alpha_1+t_2\alpha_2+t_3\alpha_3+2\alpha_4$, $\lambda_1=0\alpha_1+t_2\alpha_2+t_3\alpha_3+2\alpha_4$, $\lambda_2=t_1\alpha_1+0\alpha_2+t_3\alpha_3+2\alpha_4$, and $\lambda_3=t_1\alpha_1+t_2\alpha_2+0\alpha_3+2\alpha_4$. Then for $\lambda$ we have $t_1d-2a=t_2d-2b=t_3d-2c=0$ and $t_1a+t_2b+t_3c-2d=t_1^2+t_2^2+t_3^2-4\geq 0$, and for $\lambda_1$ we have $t_1d-2a=2t_1$, $t_2d-2b=t_3d-2c=0$, and $t_1a+t_2b+t_3c-2d=t_2^2+t_3^2-4\geq 0$. Similarly for $\lambda_2$ and $\lambda_3$, using $t_1^2+t_3^2\geq 4$ and $t_1^2+t_2^2\geq 4$. Thus by Lemma~\ref{lem:SElemma} we have $w^{-1}\lambda> 0$ and $w^{-1}\lambda_i> 0$ ($i=1,2,3$) for all $w\in W$. It follows that there is no element $w \in W$ with $\{ \alpha_4, \beta \} \subseteq \Phi(w)$ or $\{ \alpha_4, s_i\beta \} \subseteq \Phi(w)$ (for $i = 1,2,3$), because if so then either $w^{-1}\lambda < 0$ or $w^{-1}\lambda_i < 0$, a contradiction. We will now show that if $\beta \in \Phi(w)$ then $| \Phi(w) \cap \{ \alpha_1, \alpha_2, \alpha_3 \} | \geq 2 $ (and thus there cannot be $w, v \in W$ with $\Phi(w) \cap \Phi(v) = \{ \beta \}$, since any two $2$-element subsets of $\{\alpha_1,\alpha_2,\alpha_3\}$ have a common element, and so $\beta\notin\mathcal{S}$). Equivalently, we will show that at least two of the simple reflections $s_1,s_2,s_3$ are in $D_L(w)$.
If $\beta\in\Phi(w)$ then the above observation gives $\alpha_4\notin\Phi(w)$, and so $s_4\notin D_L(w)$. Therefore at least one of $s_1,s_2,s_3$ lies in $D_L(w)$ (as $w\neq e$ since $\beta\in \Phi(w)$). Thus $w=s_iv$ with $\ell(s_iv)=\ell(v)+1$ for some $i\in\{1,2,3\}$ and $v\in W$, and note that $v\neq e$ (as $\beta\in\Phi(w)$ and $\beta\neq\alpha_i$). If $s_j\in D_L(v)$ for some $j\in\{1,2,3\}$ then we are done (as $i\neq j$, and $s_is_j=s_js_i$, giving $s_i,s_j\in D_L(w)$). So assume that $D_L(v)=\{s_4\}$. Then $\alpha_4\in\Phi(v)$, and so from the above observations $s_i\beta\notin\Phi(v)$. But then $w^{-1}\beta=v^{-1}s_i\beta>0$, a contradiction, completing the proof. \end{proof} \begin{cor} Let $W$ and $\beta$ be as in Theorem~\ref{thm:SEtheorem}. Then $\beta$ is not a boundary root of any cone type. In particular, $\Phi(\mathscr{T})$ is a strict subset of $\mathcal{E}$ in this case. \end{cor} \bibliographystyle{plain}